WorldWideScience

Sample records for real-time 3d virtual

  1. Real-Time 3D Motion capture by monocular vision and virtual rendering

    OpenAIRE

Gomez Jauregui, David Antonio; Horain, Patrick

    2012-01-01

    International audience; Avatars in networked 3D virtual environments allow users to interact over the Internet and to get some feeling of virtual telepresence. However, avatar control may be tedious. Motion capture systems based on 3D sensors have recently reached the consumer market, but webcams and camera-phones are more widespread and cheaper. The proposed demonstration aims at animating a user's avatar from real time 3D motion capture by monoscopic computer vision, thus allowing virtual t...

  2. Real time 3D echocardiography

    Science.gov (United States)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is a long-standing concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe with parallel data processing provides instantaneous acquisition of a pyramidal 64° x 64° volume. The image is rendered in real time and is composed of 3 planes (including planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still needed to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.

  3. Development of real-time motion capture system for 3D on-line games linked with virtual character

    Science.gov (United States)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking is becoming an essential part of entertainment, medical, sports, education and industrial applications with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled by interface devices such as mice, joysticks and MIDI sliders. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link the data with a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game which requires very fast movement of the human character.

  4. Design of a 3D virtual geographic interface for access to geoinformation in real time

    DEFF Research Database (Denmark)

    Bodum, Lars

    2004-01-01

    as VR Media Lab. The Centre for 3D GeoInformation was opened in 2001, and the main purpose of this facility is to extrude the region from 2D to 3D. By means of traditional geoinformation such as building footprints, geocoding, the building and dwelling register and a DTM, the region will be built...

  5. Real time 3D photometry

    Science.gov (United States)

    Fernandez-Balbuena, A. A.; Vazquez-Molini, D.; García-Botella, A.; Romo, J.; Serrano, Ana

    2017-09-01

    Photometry and radiometry measurement is a well-developed field. Measuring the performance of optical systems involves techniques such as gonio-photometry. Gonio-photometers are precise measurement tools used in lighting areas such as office lighting, luminaire and car headlamp measurement, concentrator/collimator measurement, and in general any designed and fabricated optical system that works with light. These measurements, which yield the intensity polar curves and the total flux of the optical system, have one disadvantage: the industry offers good gonio-photometers that are precise and reliable, but they are very expensive and the measurement time is long. In industry the cost can be of minor importance, but a measuring time of around 30 minutes is of major importance because of the cost of trained staff. We have designed a system to measure photometry in real time; it consists of a curved screen, to obtain a large measurement angle, and a CCD. The system under test projects light onto the screen and the CCD records a video of the screen, obtaining an image of the projected profile. A complex calibration permits mapping screen data (x, y, z) to the intensity polar curve (I, α, γ). This intensity is obtained in candelas (cd) with an image acquisition plus processing time below one second.
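The calibration described above maps a point on the curved screen to a direction and distance from the system under test, from which a luminous intensity follows by the inverse-square law. A minimal sketch of that geometric step (the function names and coordinate convention are illustrative, not from the paper):

```python
import numpy as np

def point_to_polar(p):
    """Convert a screen point p = (x, y, z), in metres relative to the
    source under test, to angles (alpha, gamma) in degrees and distance d."""
    x, y, z = p
    d = np.sqrt(x * x + y * y + z * z)
    alpha = np.degrees(np.arctan2(x, z))                # horizontal angle
    gamma = np.degrees(np.arctan2(y, np.hypot(x, z)))   # vertical angle
    return alpha, gamma, d

def intensity_from_illuminance(E_lux, d):
    """Inverse-square law: luminous intensity (cd) from the illuminance
    E (lux) measured at distance d (m), assuming a point-like source."""
    return E_lux * d * d
```

For example, an illuminance of 25 lux measured 2 m away on the optical axis corresponds to 100 cd at (α, γ) = (0, 0).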

  6. Real-Time 3D Visualization

    Science.gov (United States)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  7. Novel methods for real-time 3D facial recognition

    OpenAIRE

    Rodrigues, Marcos; Robinson, Alan

    2010-01-01

    In this paper we discuss our approach to real-time 3D face recognition. We argue the need for real time operation in a realistic scenario and highlight the required pre- and post-processing operations for effective 3D facial recognition. We focus attention to some operations including face and eye detection, and fast post-processing operations such as hole filling, mesh smoothing and noise removal. We consider strategies for hole filling such as bilinear and polynomial interpolation and Lapla...

  8. Real-Time 3D Profile Measurement Using Structured Light

    International Nuclear Information System (INIS)

    Xu, L; Zhang, Z J; Ma, H; Yu, Y J

    2006-01-01

    This paper builds a real-time system for 3D profile measurement using structured-light imaging. It allows a hand-held object to rotate freely in a space-time-coded light field projected by a projector. The surface of the measured object with the projected coded light is imaged, and the system shows surface reconstruction results of the object online. This feedback helps the user adjust the object's pose in the light field according to missing or erroneous data, achieving completeness of the data used in reconstruction. The method can acquire a denser data cloud and has higher reconstruction accuracy and efficiency. To meet the real-time requirements, the paper presents non-restricted light-plane modelling suited to stripe structured-light systems, designs a three-frame stripe space-time-coded pattern, and uses an advanced ICP algorithm to align 3D data from multiple views.
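The final alignment step relies on ICP. A minimal point-to-point ICP with brute-force nearest neighbours and a Kabsch (SVD) rigid-transform solve illustrates the idea; the paper's "advanced ICP" is certainly more elaborate than this sketch:

```python
import numpy as np

def best_rigid_transform(A, B):
    """Kabsch: least-squares rotation R and translation t with R @ a + t ~ b
    for corresponding rows of A and B."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=20):
    """Iterative closest point: repeatedly match each source point to its
    nearest destination point, then solve for the best rigid transform."""
    cur = src.copy()
    for _ in range(iters):
        d = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)  # brute-force NN
        R, t = best_rigid_transform(cur, dst[d.argmin(1)])
        cur = cur @ R.T + t
    return cur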

  9. VERSE - Virtual Equivalent Real-time Simulation

    Science.gov (United States)

    Zheng, Yang; Martin, Bryan J.; Villaume, Nathaniel

    2005-01-01

    Distributed real-time simulations provide important timing validation and hardware-in-the-loop results for the spacecraft flight software development cycle. Occasionally, the need for higher fidelity modeling and more comprehensive debugging capabilities - combined with a limited amount of computational resources - calls for a non-real-time simulation environment that mimics the real-time environment. By creating a non-real-time environment that accommodates simulations and flight software designed for a multi-CPU real-time system, we can save development time, cut mission costs, and reduce the likelihood of errors. This paper presents such a solution: Virtual Equivalent Real-time Simulation Environment (VERSE). VERSE turns the real-time operating system RTAI (Real-time Application Interface) into an event-driven simulator that runs in virtual real time. Designed to keep the original RTAI architecture as intact as possible, and therefore inheriting RTAI's many capabilities, VERSE was implemented with remarkably little change to the RTAI source code. This small footprint together with use of the same API allows users to easily run the same application in both real-time and virtual time environments. VERSE has been used to build a workstation testbed for NASA's Space Interferometry Mission (SIM PlanetQuest) instrument flight software. With its flexible simulation controls and inexpensive setup and replication costs, VERSE will become an invaluable tool in future mission development.
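The core idea of running real-time code in virtual time can be shown with a toy event-driven scheduler: instead of sleeping until a deadline, the clock jumps straight to the next pending event, so simulated time advances as fast as the host allows. This is only a conceptual sketch, not VERSE's RTAI-based mechanism:

```python
import heapq

class VirtualClock:
    """Event-driven scheduler in virtual time: no wall-clock sleeps,
    the clock jumps directly to each pending event's timestamp."""
    def __init__(self):
        self.now = 0.0
        self._q = []
        self._seq = 0   # tie-breaker so callbacks are never compared

    def call_at(self, t, fn):
        heapq.heappush(self._q, (t, self._seq, fn))
        self._seq += 1

    def call_after(self, dt, fn):
        self.call_at(self.now + dt, fn)

    def run(self):
        while self._q:
            t, _, fn = heapq.heappop(self._q)
            self.now = t     # advance virtual time instantly
            fn()
```

A periodic 10 Hz task that re-arms itself runs its ten "seconds" of virtual time in microseconds of host time:

```python
clock = VirtualClock()
log = []
def tick():
    log.append(clock.now)
    if len(log) < 10:
        clock.call_after(0.1, tick)
clock.call_after(0.1, tick)
clock.run()
```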

  10. Integration of real-time 3D capture, reconstruction, and light-field display

    Science.gov (United States)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse-camera-array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of the proposed integrated 3D visualization system.

  11. PRIMAS: a real-time 3D motion-analysis system

    Science.gov (United States)

    Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans

    1994-03-01

    The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in the fields of sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new feature of real-time 3D opens an even broader perspective of application areas; animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching of the images and subsequent 3D reconstruction of marker positions. When using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Limitations in the performance are determined by the visibility of the markers, which could be improved by adding a third camera.
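The 3D reconstruction step, given matched 2D marker detections from two calibrated cameras, is classically solved by linear (DLT) triangulation. A sketch under the usual pinhole model, not necessarily the paper's exact formulation:

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one marker from two calibrated views.
    P1, P2: 3x4 camera projection matrices; u1, u2: image coordinates.
    Each observation contributes two rows of the homogeneous system A X = 0;
    the 3-D point is the null vector of A (last right singular vector)."""
    A = np.stack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenise
```

With noisy detections the same system is solved in a least-squares sense, and the residual gives a per-marker quality measure.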

  12. REAL-TIME CAMERA GUIDANCE FOR 3D SCENE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    F. Schindler

    2012-07-01

    Full Text Available We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both the camera trajectory and the 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points, we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that, when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to results that are accurate compared to ground truth after few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots, and online flight planning of unmanned aerial vehicles.
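Point uncertainties of the kind that drive such guidance can be obtained by first-order error propagation through the reprojection Jacobian. A minimal sketch; the threshold-based view-suggestion rule here is an illustrative assumption, not the paper's criterion:

```python
import numpy as np

def point_covariance(J, sigma_px=0.5):
    """First-order uncertainty of a triangulated 3-D point from the Jacobian
    J of its image residuals (2 rows per observing camera, 3 columns):
    Sigma = sigma^2 (J^T J)^(-1), with sigma the image-noise std in pixels."""
    return sigma_px**2 * np.linalg.inv(J.T @ J)

def needs_more_views(J, max_std=0.01):
    """Illustrative guidance rule: suggest another view while the point's
    worst-axis standard deviation (metres) exceeds a threshold."""
    cov = point_covariance(J)
    return bool(np.sqrt(np.linalg.eigvalsh(cov)[-1]) > max_std)
```

Adding well-placed views makes J^T J better conditioned, so the covariance, and with it the suggestion, shrinks.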

  13. Real-time quasi-3D tomographic reconstruction

    Science.gov (United States)

    Buurlage, Jan-Willem; Kohr, Holger; Palenstijn, Willem Jan; Joost Batenburg, K.

    2018-06-01

    Developments in acquisition technology and a growing need for time-resolved experiments pose great computational challenges in tomography. In addition, access to reconstructions in real time is a highly demanded feature but has so far been out of reach. We show that by exploiting the mathematical properties of filtered backprojection-type methods, access to real-time reconstructions of arbitrarily oriented slices becomes feasible. Furthermore, we present software for visualization and on-demand reconstruction of slices. A user can interactively shift and rotate slices in a GUI, while the software updates the slice in real time. For certain use cases, the possibility to study arbitrarily oriented slices in real time directly from the measured data provides sufficient visual and quantitative insight. Two such applications are discussed in this article.

  14. 3D Hand Gesture Analysis through a Real-Time Gesture Search Engine

    Directory of Open Access Journals (Sweden)

    Shahrouz Yousefi

    2015-06-01

    Full Text Available 3D gesture recognition and tracking are highly desired features of interaction design in future mobile and smart environments. Specifically, in virtual/augmented reality applications, intuitive interaction with the physical space seems unavoidable, and 3D gestural interaction might be the most effective alternative to current input facilities such as touchscreens. In this paper, we introduce a novel solution for real-time 3D gesture-based interaction by finding the best match from an extremely large gesture database. This database includes images of various articulated hand gestures with annotated 3D position/orientation parameters of the hand joints. Our matching algorithm is based on hierarchical scoring of low-level edge-orientation features between the query frames and the database, retrieving the best match. Once the best match is found in each moment, the pre-recorded 3D motion parameters can instantly be used for natural interaction. The proposed bare-hand interaction technology performs in real time with high accuracy using an ordinary camera.
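Low-level edge-orientation features can be approximated by gradient-orientation histograms, after which scoring a query frame against database entries reduces to a distance between histograms. A flat (non-hierarchical) sketch of this matching idea, simplified from the paper's hierarchical scheme:

```python
import numpy as np

def edge_orientation_hist(img, bins=8):
    """Histogram of gradient orientations weighted by edge magnitude;
    a simplified stand-in for the paper's low-level edge features."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi        # undirected orientation, [0, pi)
    h, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    s = h.sum()
    return h / s if s else h

def best_match(query, database):
    """Index of the database image whose orientation histogram is closest
    to the query's (smallest L1 distance)."""
    q = edge_orientation_hist(query)
    dists = [np.abs(q - edge_orientation_hist(d)).sum() for d in database]
    return int(np.argmin(dists))
```

A real system indexes millions of entries, so the scoring is organised hierarchically (coarse bins first) rather than scanned linearly as here.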

  15. Simulation Study of Real Time 3-D Synthetic Aperture Sequential Beamforming for Ultrasound Imaging

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian; Rasmussen, Morten Fischer; Stuart, Matthias Bo

    2014-01-01

    This paper presents a new beamforming method for real-time three-dimensional (3-D) ultrasound imaging using a 2-D matrix transducer. To obtain images with sufficient resolution and contrast, several thousand elements are needed. The proposed method reduces the required channel count from ... in the main system. The real-time imaging capability is achieved using a synthetic aperture beamforming technique, utilizing the transmit events to generate a set of virtual elements that in combination can generate an image. The two core capabilities in combination are named Synthetic Aperture Sequential Beamforming (SASB). Simulations are performed to evaluate the image quality of the presented method in comparison to parallel beamforming utilizing 16 receive beamformers. As indicators of image quality, the detail resolution and cystic resolution are determined for a set of scatterers at a depth of 90 mm...
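At the heart of any such beamformer is delay-and-sum: echoes are summed after compensating each element for its round-trip travel time. A minimal single-line sketch (SASB's virtual-element second stage is not shown; geometry and parameters are illustrative):

```python
import numpy as np

def delay_and_sum(rf, elem_x, depths, c=1540.0, fs=40e6):
    """Delay-and-sum beamforming of one image line at lateral position x = 0.
    rf: (n_elements, n_samples) received echoes; elem_x: element x positions
    (m); depths: focal depths (m); c: speed of sound (m/s); fs: sample rate."""
    line = np.zeros(len(depths))
    for i, z in enumerate(depths):
        # two-way time: plane transmit to depth z, echo back to each element
        t = (z + np.sqrt(z**2 + elem_x**2)) / c
        idx = np.round(t * fs).astype(int)
        valid = idx < rf.shape[1]
        line[i] = rf[valid, idx[valid]].sum()   # coherent sum across elements
    return line
```

Echoes from a scatterer add coherently only at its true depth, which is why the summed amplitude peaks there.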

  16. Real-time 3D human capture system for mixed-reality art and entertainment.

    Science.gov (United States)

    Nguyen, Ta Huynh Duy; Qui, Tran Cong Thien; Xu, Ke; Cheok, Adrian David; Teo, Sze Lee; Zhou, ZhiYing; Mallawaarachchi, Asitha; Lee, Shang Ping; Liu, Wei; Teo, Hui Siang; Thang, Le Nam; Li, Yu; Kato, Hirokazu

    2005-01-01

    A real-time system for capturing humans in 3D and placing them into a mixed reality environment is presented in this paper. The subject is captured by nine surrounding cameras. Looking through a head-mounted display with a front-facing camera pointed at a marker, the user can see the 3D image of this subject overlaid onto a mixed reality scene. The 3D images of the subject viewed from this viewpoint are constructed using a robust and fast shape-from-silhouette algorithm. The paper also presents several techniques to improve quality and speed up the whole system. The frame rate of our system is around 25 fps using only standard Intel processor-based personal computers. Besides a remote live 3D conferencing and collaboration system, we also describe an application of the system in art and entertainment, named Magic Land, a mixed reality environment where captured human avatars and 3D computer-generated virtual animations can form an interactive story and play with each other. This system demonstrates many technologies in human-computer interaction: mixed reality, tangible interaction, and 3D communication. The result of the user study not only emphasizes the benefits, but also addresses some issues of these technologies.
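Shape-from-silhouette intersects the back-projected silhouette cones from all cameras: a voxel survives only if it projects inside every camera's silhouette. A minimal voxel-carving sketch of the visual hull; the paper's algorithm is a faster, more robust real-time variant:

```python
import numpy as np

def carve(silhouettes, projs, grid):
    """Visual hull by voxel carving.
    silhouettes: list of (H, W) boolean masks; projs: matching list of
    3x4 projection matrices; grid: (N, 3) voxel centres. Returns the
    voxels that project inside every silhouette."""
    hom = np.hstack([grid, np.ones((len(grid), 1))])
    keep = np.ones(len(grid), bool)
    for mask, P in zip(silhouettes, projs):
        uvw = hom @ P.T
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)   # image column
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)   # image row
        H, W = mask.shape
        inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
        ok = np.zeros(len(grid), bool)
        ok[inside] = mask[v[inside], u[inside]]
        keep &= ok                                        # carve away the rest
    return grid[keep]
```

With nine cameras, as above, the intersection of the nine cones already approximates the subject's shape well enough for novel-view rendering.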

  17. Future enhancements to 3D printing and real time production

    Science.gov (United States)

    Landa, Joseph; Jenkins, Jeffery; Wu, Jerry; Szu, Harold

    2014-05-01

    The cost and scope of additive printing machines range from several hundred to hundreds of thousands of dollars. For the extra money, one can get improvements in build size, selection of material properties, resolution, and consistency. Temperature control during build and fusing predicts the outcome, and is the intellectual property protected by the large, high-cost machines. Support material options determine the geometries that can be accomplished, which drives the cost and complexity of the printing heads. Historically, 3D printers have been used for design and prototyping efforts. Recent advances and cost reductions have sparked new interest in developing printed products and consumables: NASA is printing food, consumers are printing parts (e.g. cell phone cases, novelty toys), manufacturers are making tools and fixtures, and printers can even recursively print a self-similar printer (cf. MakerBot). There is a near-term promise of the capability to print products on demand at the home or office... directly from the printer to use.

  18. ArtifactVis2: Managing real-time archaeological data in immersive 3D environments

    KAUST Repository

    Smith, Neil

    2013-10-01

    In this paper, we present a stereoscopic research and training environment for archaeologists called ArtifactVis2. This application enables the management and visualization of diverse types of cultural datasets within a collaborative virtual 3D system. The archaeologist is fully immersed in a large-scale visualization of on-going excavations. Massive 3D datasets are seamlessly rendered in real time with field-recorded GIS data, 3D artifact scans and digital photography. Dynamic content can be visualized and cultural analytics can be performed on archaeological datasets collected through a rigorous digital archaeological methodology. The virtual collaborative environment provides a menu-driven query system and the ability to annotate, mark up, measure, and manipulate any of the datasets. These features enable researchers to re-experience and analyze the minute details of an archaeological site's excavation. It enhances their visual capacity to recognize deep patterns and structures and perceive changes and reoccurrences. As a complement to and development of previous work in the field of 3D immersive archaeological environments, ArtifactVis2 provides a GIS-based immersive environment that taps directly into archaeological datasets to investigate cultural and historical issues of ancient societies and cultural heritage in ways not possible before. © 2013 IEEE.

  19. Computer Tool for Automatically Generated 3D Illustration in Real Time from Archaeological Scanned Pieces

    Directory of Open Access Journals (Sweden)

    Luis López

    2012-11-01

    Full Text Available The graphical documentation process for archaeological pieces requires the active involvement of a professional artist to create illustrations using a wide variety of expressive techniques. Frequently, the artist's work is limited by the inconvenience of working only from photographs of the pieces to be illustrated. This paper presents a software tool that allows the easy generation of illustrations in real time from 3D scanned models. The developed interface allows the user to simulate very elaborate artistic styles through the creation of diagrams using the available virtual lights. The software processes the diagrams to render an illustration from any given angle or position. Among the available virtual lighting styles are well-known techniques such as silhouette enhancement, hatching and toon shading.
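Toon shading, one of the styles mentioned, quantises the Lambertian shading term into a few discrete bands, giving the flat, ink-drawing look familiar from archaeological illustration. A minimal per-pixel sketch (band count and light direction are illustrative):

```python
import numpy as np

def toon_shade(normals, light_dir, levels=4):
    """Toon (cel) shading: clamp the Lambert term n . l to [0, 1] and
    quantise it into `levels` discrete intensity bands.
    normals: (N, 3) surface normals; light_dir: light direction vector."""
    l = np.asarray(light_dir, float)
    l = l / np.linalg.norm(l)
    lam = np.clip(np.asarray(normals, float) @ l, 0.0, 1.0)  # Lambert term
    bands = np.minimum(np.floor(lam * levels), levels - 1)   # band index
    return bands / (levels - 1)                              # banded intensity
```

Silhouette enhancement and hatching are built analogously from n . v (view direction) and from the same quantised term driving stroke density.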

  20. Real-time virtual EAST physical experiment system

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dan, E-mail: lidan@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Xiao, B.J., E-mail: bjxiao@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, Anhui (China); Xia, J.Y., E-mail: jyxia@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Yang, Fei, E-mail: fyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Department of Computer Science, Anhui Medical University, Hefei, Anhui (China)

    2014-05-15

    Graphical abstract: - Highlights: • 3D model of the experimental advanced superconducting tokamak is established. • Interaction behaviour is created so that users can get information from the database. • The system integrates data acquisition, plasma shape visualization and simulation. • The browser-oriented system is web-based and more interactive, immersive and convenient. • The system provides the framework for a virtual physical experiment environment. - Abstract: As a large fusion device, the experimental advanced superconducting tokamak (EAST) has a complicated internal structure that is not easily accessible. Moreover, the various diagnostic systems and complicated configuration are inconvenient for scientists who are unfamiliar with the system but interested in the data. We propose a virtual system to display the 3D model of the EAST facility and enable people to view its inner structure and access information on its components from various viewpoints. We also provide most of the diagnostic configuration details together with their signal names and physical properties. Compared to the previous ways of viewing information by reference to collected drawings and videos, the virtual EAST system is more interactive and immersive. We constructed the browser-oriented virtual EAST physical experiment system, integrating real-time experiment data acquisition, plasma shape visualization and experiment result simulation in order to reproduce physical experiments in a web browser. This system uses a B/S (Browser/Server) structure in combination with virtual reality technology – VRML (Virtual Reality Modeling Language) and Java 3D. In order to avoid the bandwidth limit across the Internet, we balanced the rendering speed against the precision of the virtual model components. Any registered user can view the experimental information visually and efficiently by logging into the system through a web browser. The establishment of the system provides the

  1. Real-time virtual EAST physical experiment system

    International Nuclear Information System (INIS)

    Li, Dan; Xiao, B.J.; Xia, J.Y.; Yang, Fei

    2014-01-01

    Graphical abstract: - Highlights: • 3D model of the experimental advanced superconducting tokamak is established. • Interaction behaviour is created so that users can get information from the database. • The system integrates data acquisition, plasma shape visualization and simulation. • The browser-oriented system is web-based and more interactive, immersive and convenient. • The system provides the framework for a virtual physical experiment environment. - Abstract: As a large fusion device, the experimental advanced superconducting tokamak (EAST) has a complicated internal structure that is not easily accessible. Moreover, the various diagnostic systems and complicated configuration are inconvenient for scientists who are unfamiliar with the system but interested in the data. We propose a virtual system to display the 3D model of the EAST facility and enable people to view its inner structure and access information on its components from various viewpoints. We also provide most of the diagnostic configuration details together with their signal names and physical properties. Compared to the previous ways of viewing information by reference to collected drawings and videos, the virtual EAST system is more interactive and immersive. We constructed the browser-oriented virtual EAST physical experiment system, integrating real-time experiment data acquisition, plasma shape visualization and experiment result simulation in order to reproduce physical experiments in a web browser. This system uses a B/S (Browser/Server) structure in combination with virtual reality technology – VRML (Virtual Reality Modeling Language) and Java 3D. In order to avoid the bandwidth limit across the Internet, we balanced the rendering speed against the precision of the virtual model components. Any registered user can view the experimental information visually and efficiently by logging into the system through a web browser. The establishment of the system provides the

  2. Towards real-time 3D ultrasound planning and personalized 3D printing for breast HDR brachytherapy treatment

    International Nuclear Information System (INIS)

    Poulin, Eric; Gardi, Lori; Fenster, Aaron; Pouliot, Jean; Beaulieu, Luc

    2015-01-01

    Two different end-to-end procedures were tested for real-time planning in breast HDR brachytherapy treatment. Both methods use a 3D ultrasound (3DUS) system and a freehand catheter optimization algorithm, and were found to be fast and efficient. We demonstrated a proof-of-concept approach for personalized real-time guidance and planning of breast HDR brachytherapy treatments

  3. Virtual timers in hierarchical real-time systems

    NARCIS (Netherlands)

    Heuvel, van den M.M.H.P.; Holenderski, M.J.; Cools, W.A.; Bril, R.J.; Lukkien, J.J.; Zhu, D.

    2009-01-01

    Hierarchical scheduling frameworks (HSFs) provide means for composing complex real-time systems from well-defined subsystems. This paper describes an approach to provide hierarchically scheduled real-time applications with virtual event timers, motivated by the need for integrating priority
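The standard way to provide many virtual timers on top of one physical timer is to keep pending deadlines in a priority queue and arm the hardware only for the earliest one. A conceptual sketch of that multiplexing idea; the paper's implementation targets an RTOS and differs in detail:

```python
import heapq

class TimerMultiplexer:
    """Many virtual timers multiplexed onto one underlying timer: only the
    earliest deadline is armed on the 'hardware'; on expiry, all due virtual
    timers fire and the next earliest deadline is re-armed."""
    def __init__(self, arm_hw):
        self._q = []            # min-heap of (deadline, id, callback)
        self._arm_hw = arm_hw   # arms the single physical timer
        self._id = 0            # tie-breaker; callbacks are never compared

    def set_timer(self, deadline, cb):
        heapq.heappush(self._q, (deadline, self._id, cb))
        self._id += 1
        self._arm_hw(self._q[0][0])   # hardware tracks the earliest deadline

    def on_hw_expiry(self, now):
        while self._q and self._q[0][0] <= now:
            _, _, cb = heapq.heappop(self._q)
            cb()                       # fire every virtual timer that is due
        if self._q:
            self._arm_hw(self._q[0][0])
```

In a hierarchical setting each subsystem would own such a queue, with the parent level multiplexing the subsystems' earliest deadlines in the same way.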

  4. V-Man Generation for 3-D Real Time Animation. Chapter 5

    Science.gov (United States)

    Nebel, Jean-Christophe; Sibiryakov, Alexander; Ju, Xiangyang

    2007-01-01

    The V-Man project has developed an intuitive authoring and intelligent system to create, animate, control and interact in real time with a new generation of 3D virtual characters: the V-Men. It combines several innovative algorithms coming from Virtual Reality, Physical Simulation, Computer Vision, Robotics and Artificial Intelligence. Given a high-level task like "walk to that spot" or "get that object", a V-Man generates the complete animation required to accomplish the task. V-Men synthesise motion at runtime according to their environment, their task and their physical parameters, drawing upon the unique set of skills manufactured during character creation. The key to the system is the automated creation of realistic V-Men without requiring the expertise of an animator. It is based on real human data captured by 3D static and dynamic body scanners, which is then processed to generate firstly animatable body meshes, secondly 3D garments and finally skinned body meshes.

  5. Real-Time 3D Face Acquisition Using Reconfigurable Hybrid Architecture

    Directory of Open Access Journals (Sweden)

    Mitéran Johel

    2007-01-01

    Full Text Available Acquiring 3D data of a human face is a general problem with applications in face recognition, virtual reality, and many other areas. It can be solved using stereovision, a technique that acquires data in three dimensions from two cameras. The aim is to implement an algorithmic chain that obtains a three-dimensional space from two two-dimensional spaces: the two images coming from the two cameras. Several implementations have already been considered. We propose a new, simple real-time implementation based on a hybrid architecture (FPGA-DSP), allowing embedded and reconfigurable processing. We then show our method, which provides a dense and reliable depth map of the face and can be implemented on an embedded architecture. A study of various architectures led us to a judicious choice allowing the desired result to be obtained. The real-time data processing is implemented in an embedded architecture. We obtain a dense face disparity map, precise enough for the considered applications (multimedia, virtual worlds, biometrics), using a reliable method.
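Stereo depth recovery reduces to finding, for each pixel in one rectified image, its horizontal offset (disparity) in the other; depth is then inversely proportional to disparity. A simple SAD block-matching sketch illustrates the principle; the actual FPGA-DSP pipeline uses a hardware-friendly matching scheme, not necessarily this one:

```python
import numpy as np

def disparity_map(left, right, max_disp=16, block=5):
    """Dense disparity by sum-of-absolute-differences block matching along
    the epipolar (row) direction, for a rectified grayscale image pair."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(int)
            # cost of each candidate disparity d: compare against the right
            # image patch shifted d pixels to the left
            costs = [np.abs(patch - right[y-half:y+half+1,
                                          x-d-half:x-d+half+1].astype(int)).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))   # winner-takes-all
    return disp
```

The nested loops make this far too slow for real time in Python; the point of the paper's FPGA-DSP design is precisely that this per-pixel search parallelises well in hardware.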

  6. LandSIM3D: real-time 3D modelling of geographic data

    Directory of Open Access Journals (Sweden)

    Lambo Srl

    2009-03-01

    Full Text Available LandSIM3D: real-time 3D modelling of geographic data. LandSIM3D allows an existing, geo-referenced landscape to be modelled in 3D in only a few hours, offering powerful landscape analysis and understanding tools. 3D projects can then be inserted into the existing landscape with ease and precision, and the project alternatives and their impact can be visualized and studied in their immediate environment. The complex future evolution of the landscape can also be simulated, and the landscape model can be manipulated interactively and easily shared with colleagues. For these reasons, LandSIM3D is different from traditional 3D imagery solutions, normally reserved for computer graphics experts. For more information about LandSIM3D, go to www.landsim3d.com.

  7. Real-time 3D radiation risk assessment supporting simulation of work in nuclear environments

    International Nuclear Information System (INIS)

    Szoke, I; Louka, M N; Bryntesen, T R; Bratteli, J; Edvardsen, S T; RøEitrheim, K K; Bodor, K

    2014-01-01

    This paper describes the latest developments at the Institute for Energy Technology (IFE) in Norway, in the field of real-time 3D (three-dimensional) radiation risk assessment for the support of work simulation in nuclear environments. 3D computer simulation can greatly facilitate efficient work planning, briefing, and training of workers. It can also support communication within and between work teams, and with advisors, regulators, the media and public, at all the stages of a nuclear installation’s lifecycle. Furthermore, it is also a beneficial tool for reviewing current work practices in order to identify possible gaps in procedures, as well as to support the updating of international recommendations, dissemination of experience, and education of the current and future generation of workers. IFE has been involved in research and development into the application of 3D computer simulation and virtual reality (VR) technology to support work in radiological environments in the nuclear sector since the mid 1990s. During this process, two significant software tools have been developed, the VRdose system and the Halden Planner, and a number of publications have been produced to contribute to improving the safety culture in the nuclear industry. This paper describes the radiation risk assessment techniques applied in earlier versions of the VRdose system and the Halden Planner, for visualising radiation fields and calculating dose, and presents new developments towards implementing a flexible and up-to-date dosimetric package in these 3D software tools, based on new developments in the field of radiation protection. The latest versions of these 3D tools are capable of more accurate risk estimation, permit more flexibility via a range of user choices, and are applicable to a wider range of irradiation situations than their predecessors. (paper)

  8. Real-time 3D radiation risk assessment supporting simulation of work in nuclear environments.

    Science.gov (United States)

    Szőke, I; Louka, M N; Bryntesen, T R; Bratteli, J; Edvardsen, S T; RøEitrheim, K K; Bodor, K

    2014-06-01

    This paper describes the latest developments at the Institute for Energy Technology (IFE) in Norway, in the field of real-time 3D (three-dimensional) radiation risk assessment for the support of work simulation in nuclear environments. 3D computer simulation can greatly facilitate efficient work planning, briefing, and training of workers. It can also support communication within and between work teams, and with advisors, regulators, the media and public, at all the stages of a nuclear installation's lifecycle. Furthermore, it is also a beneficial tool for reviewing current work practices in order to identify possible gaps in procedures, as well as to support the updating of international recommendations, dissemination of experience, and education of the current and future generation of workers. IFE has been involved in research and development into the application of 3D computer simulation and virtual reality (VR) technology to support work in radiological environments in the nuclear sector since the mid 1990s. During this process, two significant software tools have been developed, the VRdose system and the Halden Planner, and a number of publications have been produced to contribute to improving the safety culture in the nuclear industry. This paper describes the radiation risk assessment techniques applied in earlier versions of the VRdose system and the Halden Planner, for visualising radiation fields and calculating dose, and presents new developments towards implementing a flexible and up-to-date dosimetric package in these 3D software tools, based on new developments in the field of radiation protection. The latest versions of these 3D tools are capable of more accurate risk estimation, permit more flexibility via a range of user choices, and are applicable to a wider range of irradiation situations than their predecessors.

  9. Real-time 3D-surface-guided head refixation useful for fractionated stereotactic radiotherapy

    International Nuclear Information System (INIS)

    Li Shidong; Liu Dezhi; Yin Gongjie; Zhuang Ping; Geng, Jason

    2006-01-01

    Accurate and precise head refixation in fractionated stereotactic radiotherapy has been achieved through alignment of real-time 3D-surface images with a reference surface image. The reference surface image is either a 3D optical surface image taken at simulation with the desired treatment position, or a CT/MRI-surface rendering in the treatment plan with corrections for patient motion during CT/MRI scans and partial volume effects. The real-time 3D surface images are rapidly captured by using a 3D video camera mounted on the ceiling of the treatment vault. Any facial expression, such as mouth opening, that affects surface shape and location can be avoided using a new facial monitoring technique. The image artifacts on the real-time surface can generally be removed by setting a threshold on jumps at neighboring points while preserving detailed features of the surface of interest. Such a real-time surface image, registered in the treatment machine coordinate system, provides a reliable representation of the patient head position during the treatment. A fast automatic alignment between the real-time surface and the reference surface using a modified iterative-closest-point method leads to an efficient and robust surface-guided target refixation. Experimental and clinical results demonstrate excellent efficacy: a set-up time of <2 min, and the desired accuracy and precision of <1 mm in isocenter shifts and <1 deg. in rotation.
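
    The closest-point iteration at the core of this alignment can be sketched as follows. This toy version estimates only the translational shift (the paper's modified iterative-closest-point method also recovers rotation, typically via an SVD step), and the point sets are hypothetical.

    ```python
    # Translation-only ICP sketch: repeatedly match each captured surface
    # point to its nearest reference point and move by the mean residual.
    # Illustrative only; full ICP also estimates rotation.

    def nearest(p, ref):
        """Closest reference point to p (brute-force nearest neighbour)."""
        return min(ref, key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))

    def icp_translation(src, ref, iters=20):
        """Accumulated 3-D shift that aligns src onto ref."""
        shift = [0.0, 0.0, 0.0]
        pts = [list(p) for p in src]
        for _ in range(iters):
            residuals = [[q[i] - p[i] for i in range(3)]
                         for p, q in ((p, nearest(p, ref)) for p in pts)]
            step = [sum(r[i] for r in residuals) / len(residuals) for i in range(3)]
            pts = [[p[i] + step[i] for i in range(3)] for p in pts]
            shift = [shift[i] + step[i] for i in range(3)]
        return shift
    ```

    With a small known offset between the surfaces, the recovered shift is the negative of that offset, i.e. the correction to apply to the couch or isocenter.
    
    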

  10. A Flattened Hierarchical Scheduler for Real-Time Virtual Machines

    OpenAIRE

    Drescher, Michael Stuart

    2015-01-01

    The recent trend of migrating legacy computer systems to a virtualized, cloud-based environment has expanded to real-time systems. Unfortunately, modern hypervisors have no mechanism in place to guarantee the real-time performance of applications running on virtual machines. Past solutions to this problem rely on either spatial or temporal resource partitioning, both of which under-utilize the processing capacity of the host system. Paravirtualized solutions in which the guest communicates it...

  11. On the Feasibility of Real-Time 3D Hand Tracking using Edge GPGPU Acceleration

    DEFF Research Database (Denmark)

    Qammaz, A.; Kosta, S.; Kyriazis, N.

    2018-01-01

    This paper presents the case study of a non-intrusive porting of a monolithic C++ library for real-time 3D hand tracking, to the domain of edge-based computation. Towards a proof of concept, the case study considers a pair of workstations, a computationally powerful and a computationally weak one...

  12. QuickPALM: 3D real-time photoactivation nanoscopy image processing in ImageJ

    CSIR Research Space (South Africa)

    Henriques, R

    2010-05-01

    Full Text Available QuickPALM, in conjunction with acquisition control features, provides a complete solution for the acquisition, reconstruction and visualization of 3D PALM or STORM images, achieving resolutions of ~40 nm in real time. This software package...

  13. A Comparison of Iterative 2D-3D Pose Estimation Methods for Real-Time Applications

    DEFF Research Database (Denmark)

    Grest, Daniel; Krüger, Volker; Petersen, Thomas

    2009-01-01

    This work compares iterative 2D-3D Pose Estimation methods for use in real-time applications. The compared methods are available to the public as C++ code. One method is part of the openCV library, namely POSIT. Because POSIT is not applicable to planar 3D point configurations, we include the planar P...

  14. Strain measurement of abdominal aortic aneurysm with real-time 3D ultrasound speckle tracking.

    Science.gov (United States)

    Bihari, P; Shelke, A; Nwe, T H; Mularczyk, M; Nelson, K; Schmandra, T; Knez, P; Schmitz-Rixen, T

    2013-04-01

    Abdominal aortic aneurysm rupture is caused by mechanical vascular tissue failure. Although mechanical properties within the aneurysm vary, currently available ultrasound methods assess only one cross-sectional segment of the aorta. This study aims to establish real-time 3-dimensional (3D) speckle tracking ultrasound to explore local displacement and strain parameters of the whole abdominal aortic aneurysm. Validation was performed on a silicone aneurysm model, perfused in a pulsatile artificial circulatory system. Wall motion of the silicone model was measured simultaneously with a commercial real-time 3D speckle tracking ultrasound system and either with laser-scan micrometry or with video photogrammetry. After validation, 3D ultrasound data were collected from abdominal aortic aneurysms of five patients and displacement and strain parameters were analysed. Displacement parameters measured in vitro by 3D ultrasound and laser scan micrometer or video analysis were significantly correlated at pulse pressures between 40 and 80 mmHg. Strong local differences in displacement and strain were identified within the aortic aneurysms of patients. Local wall strain of the whole abdominal aortic aneurysm can be analysed in vivo with real-time 3D ultrasound speckle tracking imaging, offering the prospect of individual non-invasive rupture risk analysis of abdominal aortic aneurysms. Copyright © 2013 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
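
    The strain parameter itself reduces to the relative length change between tracked material points. A toy illustration (engineering strain between two hypothetical speckle-tracked wall points, not the clinical 3D speckle-tracking algorithm, which estimates dense displacement fields):

    ```python
    # Engineering strain of a wall segment: relative change in distance
    # between two tracked points from reference (p0, p1) to deformed
    # (q0, q1) configuration. Toy sketch with hypothetical points.
    import math

    def segment_strain(p0, p1, q0, q1):
        l0 = math.dist(p0, p1)   # reference segment length
        l1 = math.dist(q0, q1)   # deformed segment length
        return (l1 - l0) / l0
    ```

    A segment stretching from 10 mm to 11 mm over the cardiac cycle would thus carry a strain of 0.1 (10%).
    
    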

  15. ArtifactVis2: Managing real-time archaeological data in immersive 3D environments

    KAUST Repository

    Smith, Neil; Knabb, Kyle; Defanti, Connor; Weber, Philip P.; Schulze, Jü rgen P.; Prudhomme, Andrew; Kuester, Falko; Levy, Thomas E.; Defanti, Thomas A.

    2013-01-01

    In this paper, we present a stereoscopic research and training environment for archaeologists called ArtifactVis2. This application enables the management and visualization of diverse types of cultural datasets within a collaborative virtual 3D

  16. A Spatial Reference Grid for Real-Time Autonomous Underwater Modeling using 3-D Sonar

    Energy Technology Data Exchange (ETDEWEB)

    Auran, P.G.

    1996-12-31

    The offshore industry has recognized the need for intelligent underwater robotic vehicles. This doctoral thesis deals with autonomous underwater vehicles (AUVs) and concentrates on a data representation for real-time image formation and analysis. Its main objective is to develop a 3-D image representation suitable for autonomous perception objectives underwater, assuming active sonar as the main sensor for perception. The main contributions are: (1) A dynamical image representation for 3-D range data, (2) A basic electronic circuit and software system for 3-D sonar sampling and amplitude thresholding, (3) A model for target reliability, (4) An efficient connected components algorithm for 3-D segmentation, (5) A method for extracting general 3-D geometrical representations from segmented echo clusters, (6) Experimental results of planar and curved target modeling. 142 refs., 120 figs., 10 tabs.
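
    Contribution (4), segmentation by connected components in 3-D, can be illustrated with a toy flood fill over occupied voxels under 6-connectivity (illustrative code, not the thesis implementation):

    ```python
    # Group occupied voxels into connected components (6-connectivity),
    # as when clustering 3-D sonar echoes into candidate targets.
    from collections import deque

    def label_components(voxels):
        """voxels: set of (x, y, z) tuples. Returns a list of components."""
        remaining, components = set(voxels), []
        while remaining:
            seed = remaining.pop()
            comp, queue = {seed}, deque([seed])
            while queue:
                x, y, z = queue.popleft()
                for n in ((x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
                          (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)):
                    if n in remaining:       # unvisited face-adjacent voxel
                        remaining.remove(n)
                        comp.add(n)
                        queue.append(n)
            components.append(comp)
        return components
    ```

    Each returned component is one echo cluster, from which geometric primitives (planes, cylinders) can then be fitted.
    
    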

  17. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Yanchao Dong

    2016-07-01

    Full Text Available The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation, such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image, then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track the 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.
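
    The fusion step can be illustrated with the linear core of such a filter: a one-dimensional constant-velocity Kalman filter updated with a position measurement. The noise values below are arbitrary toy numbers, not the paper's EKF over the full face-pose state.

    ```python
    # One predict/update cycle of a 1-D constant-velocity Kalman filter,
    # state [position, velocity], measurement H = [1, 0]. Toy sketch.

    def kf_step(x, v, P, z, dt=1.0, q=1e-3, r=1e-2):
        # Predict with F = [[1, dt], [0, 1]]; add process noise q.
        x_pred, v_pred = x + v * dt, v
        P_pred = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
                   P[0][1] + dt * P[1][1]],
                  [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with position measurement z (noise variance r).
        S = P_pred[0][0] + r
        K0, K1 = P_pred[0][0] / S, P_pred[1][0] / S
        y = z - x_pred                       # innovation
        x_new, v_new = x_pred + K0 * y, v_pred + K1 * y
        P_new = [[(1 - K0) * P_pred[0][0], (1 - K0) * P_pred[0][1]],
                 [P_pred[1][0] - K1 * P_pred[0][0],
                  P_pred[1][1] - K1 * P_pred[0][1]]]
        return x_new, v_new, P_new
    ```

    Fed a steadily moving landmark, the state converges to the true position and velocity; the EKF extends this to nonlinear camera projection via linearization.
    
    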

  18. Multithreaded real-time 3D image processing software architecture and implementation

    Science.gov (United States)

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user-friendly playback interface is desirable. Towards this end, we built a real-time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real-time 3D player was implemented on the GPU using CUDA and OpenGL. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located on the right image through the use of block matching. The differences in position between corresponding regions in the left and right images are then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All of the above computations are performed on one CPU thread which calls CUDA functions. Image upsampling and shifting is performed in response to user zoom and pan. The player also consists of a CPU display thread, which uses OpenGL rendering (quad buffers) and gathers user input for digital zoom and pan, sending it to the processing thread.
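
    The convergence-point selection described above can be sketched as follows: histogram the keypoint disparities, take the populated extrema as the scene disparity range, and shift the views so that a chosen disparity lands at the screen plane. Placing the range midpoint at convergence is an assumption for illustration; the bin count and range are toy values.

    ```python
    # Toy convergence selection from keypoint disparities: histogram,
    # populated extrema -> scene disparity range, shift = -midpoint.

    def convergence_shift(disparities, bins=32, lo=-64, hi=64):
        hist = [0] * bins
        width = (hi - lo) / bins
        for d in disparities:
            if lo <= d < hi:
                hist[int((d - lo) / width)] += 1
        populated = [i for i, c in enumerate(hist) if c > 0]
        d_min = lo + populated[0] * width          # lower extremum
        d_max = lo + (populated[-1] + 1) * width   # upper extremum
        return -(d_min + d_max) / 2.0  # horizontal shift between the views
    ```

    Applying the returned shift between the left and right images places the middle of the scene's depth range at zero disparity.
    
    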

  19. 3D Elevation Program—Virtual USA in 3D

    Science.gov (United States)

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  20. Real-time 3-D space numerical shake prediction for earthquake early warning

    Science.gov (United States)

    Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang

    2017-12-01

    In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from inaccurate estimation of source parameters. For computational efficiency, these methods assume that the wave propagates on the 2-D surface of the earth. In fact, since the seismic wave propagates in the 3-D sphere of the earth, the 2-D space modeling of wave direction results in inaccurate wave estimation. In this paper, we propose a 3-D space numerical shake prediction method, which simulates the wave propagation in 3-D space using radiative transfer theory, and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D and 3-D space models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D space model can estimate real-time ground motion precisely, and that overprediction is alleviated when the 3-D space model is used.

  1. Real-time tracking for virtual environments using scaat kalman filtering and unsynchronised cameras

    DEFF Research Database (Denmark)

    Rasmussen, Niels Tjørnly; Störring, Morritz; Moeslund, Thomas B.

    2006-01-01

    This paper presents a real-time outside-in camera-based tracking system for wireless 3D pose tracking of a user’s head and hand in a virtual environment. The system uses four unsynchronised cameras as sensors and passive retroreflective markers arranged in rigid bodies as targets. In order to ach...

  2. An active robot vision system for real-time 3-D structure recovery

    Energy Technology Data Exchange (ETDEWEB)

    Juvin, D. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. d`Electronique et d`Instrumentation Nucleaire; Boukir, S.; Chaumette, F.; Bouthemy, P. [Rennes-1 Univ., 35 (France)

    1993-10-01

    This paper presents an active approach for the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is achieved by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders; therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. It has been implemented on a parallel image processing board and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.

  3. An active robot vision system for real-time 3-D structure recovery

    International Nuclear Information System (INIS)

    Juvin, D.

    1993-01-01

    This paper presents an active approach for the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is achieved by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders; therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. It has been implemented on a parallel image processing board and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.

  4. Real time 3D structural and Doppler OCT imaging on graphics processing units

    Science.gov (United States)

    Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczyńska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr

    2013-03-01

    In this report, the application of graphics processing unit (GPU) programming for real-time 3D Fourier domain Optical Coherence Tomography (FdOCT) imaging, with implementation of Doppler algorithms for visualization of flows in capillary vessels, is presented. Generally, processing FdOCT data on the computer's main processor (CPU) constitutes the main limitation for real-time imaging, and employing additional algorithms, such as Doppler OCT analysis, makes this processing even more time consuming. Recently developed GPUs, which offer very high computational power, provide a solution to this problem: taking advantage of them for massively parallel data processing allows real-time imaging in FdOCT. The presented software for structural and Doppler OCT allows the whole processing and visualization of 2D data consisting of 2000 A-scans generated from 2048-pixel spectra at a frame rate of about 120 fps. 3D imaging in the same mode, with a volume built of 220 × 100 A-scans, is performed at a rate of about 8 frames per second. In this paper the software architecture, the organization of threads, and the optimizations applied are shown. For illustration, screen shots recorded during real-time imaging of a phantom (homogeneous water solution of Intralipid in a glass capillary) and of the human eye in vivo are presented.
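
    The per-A-scan processing that such software offloads to the GPU can be sketched serially: a Fourier transform of each spectral fringe yields the structural A-scan, and the phase of the conjugate product of successive complex A-scans gives the Doppler shift. A toy-sized pure-Python DFT stands in here for the CUDA kernels.

    ```python
    # Serial sketch of FdOCT processing: DFT of the spectral fringe gives
    # the depth profile; phase difference between consecutive A-scans
    # gives the Doppler signal. Toy sizes, not GPU code.
    import cmath, math

    def dft(x):
        """Naive O(n^2) discrete Fourier transform of a real sequence."""
        n = len(x)
        return [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n)
                    for k in range(n)) for j in range(n)]

    def doppler_phase(ascan1, ascan2):
        """Per-pixel phase difference between consecutive complex A-scans."""
        return [cmath.phase(b * a.conjugate()) for a, b in zip(ascan1, ascan2)]
    ```

    A cosine fringe of frequency f transforms to a peak at depth bin f; a uniform phase shift between repeated A-scans appears directly as the Doppler phase.
    
    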

  5. An experiment of a 3D real-time robust visual odometry for intelligent vehicles

    OpenAIRE

    Rodriguez Florez , Sergio Alberto; Fremont , Vincent; Bonnifait , Philippe

    2009-01-01

    International audience; Vision systems are nowadays very promising for many on-board vehicles perception functionalities, like obstacles detection/recognition and ego-localization. In this paper, we present a 3D visual odometric method that uses a stereo-vision system to estimate the 3D ego-motion of a vehicle in outdoor road conditions. In order to run in real-time, the studied technique is sparse meaning that it makes use of feature points that are tracked during several frames. A robust sc...

  6. Computer Tool for Automatically Generated 3D Illustration in Real Time from Archaeological Scanned Pieces

    OpenAIRE

    Luis López; Germán Arroyo; Domingo Martín

    2012-01-01

    The graphical documentation process of archaeological pieces requires the active involvement of a professional artist to recreate beautiful illustrations using a wide variety of expressive techniques. Frequently, the artist’s work is limited by the inconvenience of working only with the photographs of the pieces he is going to illustrate. This paper presents a software tool that allows the easy generation of illustrations in real time from 3D scanned models. The developed interface allows the...

  7. Handheld real-time volumetric 3-D gamma-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Haefner, Andrew, E-mail: ahaefner@lbl.gov [Lawrence Berkeley National Lab – Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Barnowski, Ross [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720 (United States); Luke, Paul; Amman, Mark [Lawrence Berkeley National Lab – Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Vetter, Kai [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720 (United States); Lawrence Berkeley National Lab – Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720 (United States)

    2017-06-11

    This paper presents the concept of real-time fusion of gamma-ray imaging and visual scene data for a hand-held mobile Compton imaging system in 3-D. The ability to obtain and integrate both gamma-ray and scene data from a mobile platform enables improved capabilities in the localization and mapping of radioactive materials. This not only enhances the ability to localize these materials, but it also provides important contextual information of the scene which once acquired can be reviewed and further analyzed subsequently. To demonstrate these concepts, the high-efficiency multimode imager (HEMI) is used in a hand-portable implementation in combination with a Microsoft Kinect sensor. This sensor, in conjunction with open-source software, provides the ability to create a 3-D model of the scene and to track the position and orientation of HEMI in real-time. By combining the gamma-ray data and visual data, accurate 3-D maps of gamma-ray sources are produced in real-time. This approach is extended to map the location of radioactive materials within objects with unknown geometry.
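
    The gamma-ray/scene fusion hinges on one geometric step: mapping positions reconstructed in the imager's local frame into the world frame using the pose (rotation R, translation t) reported by the tracking sensor. A minimal sketch with a hypothetical pose:

    ```python
    # Map a point from the imager's local frame into the world frame:
    # p_world = R @ p_local + t. R and t below are illustrative values,
    # standing in for the pose reported by the tracking sensor.

    def to_world(p_local, R, t):
        return [sum(R[i][j] * p_local[j] for j in range(3)) + t[i]
                for i in range(3)]
    ```

    Applying this transform to every reconstructed source position, frame by frame, is what registers the gamma-ray image onto the 3-D scene model.
    
    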

  8. Real-Time 3D Reconstruction from Images Taken from a UAV

    Science.gov (United States)

    Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.

    2015-08-01

    We designed a method for creating 3D models of objects and areas from two aerial images acquired by a UAV. The models are generated automatically and in real time, and consist of dense, true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method only needs a cheap compact camera mounted on a small UAV. No additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and for the camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing the accuracy under the real-time constraint. A test was performed in monitoring a construction yard, obtaining very promising results. Highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. For its characteristics, the designed method is suitable for video surveillance, remote sensing and monitoring, especially in those applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.
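
    For rectified view pairs, the underlying triangulation reduces to the depth relation Z = f·B/d for a matched feature with disparity d (pixels), focal length f (in pixels) and baseline B. The numbers below are illustrative, not the flight parameters used in the paper.

    ```python
    # Depth from disparity for a rectified stereo pair: Z = f * B / d.
    # Toy values; in the aerial case B is the distance between the two
    # acquisition positions of the UAV.

    def depth_from_disparity(d_px, focal_px, baseline_m):
        if d_px <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return focal_px * baseline_m / d_px
    ```

    The relation also explains the accuracy budget: depth resolution degrades quadratically with distance, so baseline and focal length must be chosen to meet the target accuracy at the expected scene range.
    
    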

  9. 3D real-time monitoring system for LHD plasma heating experiment

    International Nuclear Information System (INIS)

    Emoto, M.; Narlo, J.; Kaneko, O.; Komori, A.; Iima, M.; Yamaguchi, S.; Sudo, S.

    2001-01-01

    The JAVA-based real-time monitoring system has been in use at the National Institute for Fusion Science, Japan, since the end of March 1998 to maintain stable operations. This system utilizes JAVA technology to realize its platform-independent nature. The main programs are written as JAVA applets and provide human-friendly interfaces. To make the system easier to grasp at a glance, a 3D feature was added. Since most of the system is written in the JAVA language, we adopted JAVA3D technology, which was easy to incorporate into the currently running systems. With this 3D feature, the operator can more easily find the malfunctioning parts of complex instruments, such as the LHD vacuum vessel. This feature is also helpful for recognizing physical phenomena. In this paper, we present an example in which the temperature increase of a vacuum vessel after NBI is visualized.

  10. Real-time tracking with a 3D-Flow processor array

    International Nuclear Information System (INIS)

    Crosetto, D.

    1993-06-01

    The problem of real-time track-finding has to date been addressed with CAMs (Content Addressable Memories) or with fast coincidence logic, because a computing scheme was thought to be much slower. Advances in technology, together with a new architectural approach, make it feasible to also explore a computing technique for real-time track finding, which has the advantage over the CAM approach of allowing algorithms that compute more parameters, such as the sagitta, curvature, pt, etc. This report describes real-time track finding using a new computing technique based on the 3D-Flow array processor system. This system consists of a fixed interconnection architecture scheme, allowing flexible algorithm implementation on a scalable platform. The 3D-Flow parallel processing system for track finding is scalable in size and performance by increasing the number of processors, the speed, or the number of pipelined stages. The present article describes the conceptual idea and the design stage of the project.

  11. Real-time tracking with a 3D-flow processor array

    International Nuclear Information System (INIS)

    Crosetto, D.

    1993-01-01

    The problem of real-time track-finding has to date been addressed with CAMs (Content Addressable Memories) or with fast coincidence logic, because a computing scheme was thought to be much slower. Advances in technology, together with a new architectural approach, make it feasible to also explore a computing technique for real-time track finding, which has the advantage over the CAM approach of allowing algorithms that compute more parameters, such as the sagitta, curvature, pt, etc. This report describes real-time track finding using a new computing technique based on the 3D-Flow array processor system. This system consists of a fixed interconnection architecture scheme, allowing flexible algorithm implementation on a scalable platform. The 3D-Flow parallel processing system for track finding is scalable in size and performance by increasing the number of processors, the speed, or the number of pipelined stages. The present article describes the conceptual idea and the design stage of the project.

  12. The Idaho Virtualization Laboratory 3D Pipeline

    Directory of Open Access Journals (Sweden)

    Nicholas A. Holmer

    2014-05-01

    Full Text Available Three-dimensional (3D) virtualization and visualization is an important component of industry, art, museum curation and cultural heritage, yet the step-by-step process of 3D virtualization has been little discussed. Here we review the Idaho Virtualization Laboratory's (IVL) process of virtualizing a cultural heritage item (artifact) from start to finish. Each step is thoroughly explained and illustrated, including how the object and its metadata are digitally preserved and ultimately distributed to the world.

  13. Feasibility of the integration of CRONOS, a 3-D neutronics code, into real-time simulators

    International Nuclear Information System (INIS)

    Ragusa, J.C.

    2001-01-01

    In its effort to contribute to nuclear power plant safety, CEA proposes the integration of an engineering-grade 3-D neutronics code into a real-time plant analyser. This paper describes the capabilities of the neutronics code CRONOS to achieve fast running performance. First, we present current core models in simulators and explain their drawbacks. Secondly, the main features of CRONOS's spatial-kinetics methods are reviewed. We then present an optimum core representation, with respect to mesh size, choice of finite-element (FE) basis and execution time, for accurate results, as well as the multi-1-D thermal-hydraulics (T/H) model developed to take 3-D effects into account when updating the cross-sections. A Main Steam Line Break (MSLB) End-of-Life (EOL) Hot-Zero-Power (HZP) accident is used as an example, before we conclude with the perspectives of integrating CRONOS's 3-D core model into real-time simulators. (author)

  14. A real-time 3D scanning system for pavement distortion inspection

    International Nuclear Information System (INIS)

    Li, Qingguang; Yao, Ming; Yao, Xun; Xu, Bugao

    2010-01-01

    Pavement distortions, such as rutting and shoving, are the common pavement distress problems that need to be inspected and repaired in a timely manner to ensure ride quality and traffic safety. This paper introduces a real-time, low-cost inspection system devoted to detecting these distress features using high-speed 3D transverse scanning techniques. The detection principle is the dynamic generation and characterization of the 3D pavement profile based on structured light triangulation. To improve the accuracy of the system, a multi-view coplanar scheme is employed in the calibration procedure so that more feature points can be used and distributed across the field of view of the camera. A sub-pixel line extraction method is applied for the laser stripe location, which includes filtering, edge detection and spline interpolation. The pavement transverse profile is then generated from the laser stripe curve and approximated by line segments. The second-order derivatives of the segment endpoints are used to identify the feature points of possible distortions. The system can output the real-time measurements and 3D visualization of rutting and shoving distress in a scanned pavement
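
    A common sub-pixel stripe localisation step can be sketched per image column: find the brightest pixel and refine its position with a three-point parabola fit. The paper's pipeline (filtering, edge detection, spline interpolation) is richer; this shows only the sub-pixel refinement idea on a hypothetical intensity column.

    ```python
    # Three-point parabolic sub-pixel peak refinement: fit a parabola
    # through the brightest pixel and its two neighbours in a column,
    # and return the vertex position. Illustrative sketch.

    def subpixel_peak(column):
        i = max(range(1, len(column) - 1), key=lambda k: column[k])
        a, b, c = column[i - 1], column[i], column[i + 1]
        denom = a - 2 * b + c
        offset = 0.0 if denom == 0 else 0.5 * (a - c) / denom
        return i + offset   # stripe row position, in fractional pixels
    ```

    Running this over every column of a frame yields the laser stripe curve from which the transverse pavement profile is built.
    
    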

  15. Feasibility of the integration of CRONOS, a 3-D neutronics code, into real-time simulators

    Energy Technology Data Exchange (ETDEWEB)

    Ragusa, J.C. [CEA Saclay, Dept. de Mecanique et de Technologie, 91 - Gif-sur-Yvette (France)

    2001-07-01

    In its effort to contribute to nuclear power plant safety, CEA proposes the integration of an engineering-grade 3-D neutronics code into a real-time plant analyser. This paper describes the capabilities of the neutronics code CRONOS to achieve fast-running performance. First, we will present current core models in simulators and explain their drawbacks. Secondly, the main features of CRONOS's spatial-kinetics methods will be reviewed. We will then present an optimal core representation, with respect to mesh size, choice of finite-element (FE) basis and execution time, that yields accurate results, as well as the multi-1-D thermal-hydraulics (T/H) model developed to capture 3-D effects when updating the cross-sections. A Main Steam Line Break (MSLB) End-of-Life (EOL) Hot-Zero-Power (HZP) accident will be used as an example, before we conclude with the perspectives of integrating CRONOS's 3-D core model into real-time simulators. (author)

  16. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    Directory of Open Access Journals (Sweden)

    Richard Chiou

    2010-06-01

    Full Text Available This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet field, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3-Dimensional scheme of viewing the robotic laboratory has been introduced in addition to the remote controlling of the robots. The uniqueness of the project lies in making this process Internet-based, with the robots operated remotely and visualized in 3D. This 3D system approach provides the students with a more realistic feel of the 3D robotic laboratory even though they are working remotely. As a result, the 3D visualization technology has been tested as part of a laboratory in the MET 205 Robotics and Mechatronics class and has received positive feedback from most of the students. This type of research has introduced a new level of realism and visual communications to online laboratory learning in a remote classroom.

  17. Real-time Stereoscopic 3D for E-Robotics Learning

    Directory of Open Access Journals (Sweden)

    Richard Y. Chiou

    2011-02-01

    Full Text Available Following the design and testing of a successful 3-Dimensional surveillance system, this 3D scheme has been implemented into online robotics learning at Drexel University. A real-time application, utilizing robot controllers, programmable logic controllers and sensors, has been developed in the “MET 205 Robotics and Mechatronics” class to provide the students with a better robotic education. The integration of the 3D system allows the students to precisely program the robot and execute functions remotely. Upon the students’ recommendation, polarization has been chosen as the main platform behind the 3D robotic system. Stereoscopic calculations are carried out for calibration purposes to display the images with the highest possible comfort level and 3D effect. The calculations are further validated by comparing the results with students’ evaluations. Due to the Internet-based feature, multiple clients have the opportunity to perform online automation development. In the future, students in different universities will be able to cross-control robotic components of different types around the world. With the development of this 3D E-Robotics interface, automation resources and robotic learning can be shared and enriched regardless of location.

  18. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    OpenAIRE

    Richard Chiou; Yongjin (james) Kwon; Tzu-Liang (bill) Tseng; Robin Kizirian; Yueh-Ting Yang

    2010-01-01

    This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet field, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3- Dimensional scheme of viewing the robotic laboratory has been introduced in addition to the remote c...

  19. Demo: Distributed Real-Time Generative 3D Hand Tracking using Edge GPGPU Acceleration

    DEFF Research Database (Denmark)

    Qammaz, Ammar; Kosta, Sokol; Kyriazis, Nikolaos

    2018-01-01

    This work demonstrates a real-time 3D hand tracking application that runs via computation offloading. The proposed framework enables the application to run on low-end mobile devices such as laptops and tablets, despite the fact that they lack sufficient hardware to perform the required computations locally. The network connection takes the place of a GPGPU accelerator, and sharing resources with a larger workstation becomes the acceleration mechanism. The unique properties of a generative optimizer are examined and constitute a challenging use case, since the requirement for real…

  20. Real-time interactive 3D manipulation of particles viewed in two orthogonal observation planes

    DEFF Research Database (Denmark)

    Perch-Nielsen, I.; Rodrigo, P.J.; Glückstad, J.

    2005-01-01

    The generalized phase contrast (GPC) method has been applied to transform a single TEM00 beam into a manifold of counterpropagating-beam traps capable of real-time interactive manipulation of multiple microparticles in three dimensions (3D). This paper reports on the use of low numerical aperture … for imaging through each of the two opposing objective lenses. As a consequence of the large working distance, simultaneous monitoring of the trapped particles in a second orthogonal observation plane is demonstrated. (C) 2005 Optical Society of America.

  1. 2D array transducers for real-time 3D ultrasound guidance of interventional devices

    Science.gov (United States)

    Light, Edward D.; Smith, Stephen W.

    2009-02-01

    We describe catheter ring arrays for real-time 3D ultrasound guidance of devices such as vascular grafts, heart valves and vena cava filters. We have constructed several prototypes operating at 5 MHz and consisting of 54 elements, using W.L. Gore & Associates, Inc. micro-miniature ribbon cables. We have recently constructed a new transducer using a braided wiring technology from Precision Interconnect. This transducer consists of 54 elements at 4.8 MHz, with a pitch of 0.20 mm and a typical -6 dB bandwidth of 22%. In all cases, the transducer and wiring assembly were integrated with an 11-French catheter of a Cook Medical deployment device for vena cava filters. Preliminary in vivo and in vitro testing is ongoing, including simultaneous 3D ultrasound and x-ray fluoroscopy.

  2. Realistic 3D Terrain Roaming and Real-Time Flight Simulation

    Science.gov (United States)

    Que, Xiang; Liu, Gang; He, Zhenwen; Qi, Guang

    2014-12-01

    This paper presents an integrated method, which provides access to the current flight status and a dynamically visible scanning topography, to enhance interactivity during terrain roaming and real-time flight simulation. An algorithm that fuses digital elevation model and digital ortho-photo map data is proposed as the basis of our approach to building a realistic 3D terrain scene. A new technique based on render-to-texture and head-up display is used to generate the navigation pane. In the flight simulation, to eliminate flying "jumps", we employ multidimensional linear interpolation to adjust the camera parameters dynamically and smoothly. Meanwhile, based on the principle of scanning laser imaging, we draw pseudo-color figures by scanning the topography in different directions according to the real-time flight status. Simulation results demonstrate that the proposed algorithm is promising for applications and that the method improves the visual effect and enhances dynamic interaction during real-time flight.

  3. 3D Printed "Earable" Smart Devices for Real-Time Detection of Core Body Temperature.

    Science.gov (United States)

    Ota, Hiroki; Chao, Minghan; Gao, Yuji; Wu, Eric; Tai, Li-Chia; Chen, Kevin; Matsuoka, Yasutomo; Iwai, Kosuke; Fahad, Hossain M; Gao, Wei; Nyein, Hnin Yin Yin; Lin, Liwei; Javey, Ali

    2017-07-28

    Real-time detection of basic physiological parameters such as blood pressure and heart rate is an important target in wearable smart devices for healthcare. Among these, the core body temperature is one of the most important basic medical indicators of fever, insomnia, fatigue, metabolic functionality, and depression. However, traditional wearable temperature sensors are based upon the measurement of skin temperature, which can vary dramatically from the true core body temperature. Here, we demonstrate a three-dimensional (3D) printed wearable "earable" smart device that is designed to be worn on the ear to track core body temperature from the tympanic membrane (i.e., ear drum) based on an infrared sensor. The device is fully integrated with data processing circuits and a wireless module for standalone functionality. Using this smart earable device, we demonstrate that the core body temperature can be accurately monitored regardless of the environment and activity of the user. In addition, a microphone and actuator are also integrated so that the device can also function as a bone conduction hearing aid. Using 3D printing as the fabrication method enables the device to be customized for the wearer for more personalized healthcare. This smart device provides an important advance in realizing personalized health care by enabling real-time monitoring of one of the most important medical parameters, core body temperature, employed in preliminary medical screening tests.

  4. Monitoring tumor motion by real time 2D/3D registration during radiotherapy.

    Science.gov (United States)

    Gendrin, Christelle; Furtado, Hugo; Weber, Christoph; Bloch, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Bergmann, Helmar; Stock, Markus; Fichtinger, Gabor; Georg, Dietmar; Birkfellner, Wolfgang

    2012-02-01

    In this paper, we investigate the possibility to use X-ray based real time 2D/3D registration for non-invasive tumor motion monitoring during radiotherapy. The 2D/3D registration scheme is implemented using general purpose computation on graphics hardware (GPGPU) programming techniques and several algorithmic refinements in the registration process. Validation is conducted off-line using a phantom and five clinical patient data sets. The registration is performed on a region of interest (ROI) centered around the planned target volume (PTV). The phantom motion is measured with an rms error of 2.56 mm. For the patient data sets, a sinusoidal movement that clearly correlates to the breathing cycle is shown. Videos show a good match between X-ray and digitally reconstructed radiographs (DRR) displacement. Mean registration time is 0.5 s. We have demonstrated that real-time organ motion monitoring using image based markerless registration is feasible. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  5. Real-time physics-based 3D biped character animation using an inverted pendulum model.

    Science.gov (United States)

    Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee

    2010-01-01

    We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
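The velocity-driven tracking idea described above, computing joint torques from desired angular velocities instead of tuning stiff proportional-derivative position gains, can be illustrated on a single simulated joint. The gains, the unit inertia and the function name below are illustrative assumptions, not the authors' controller:

```python
import numpy as np

def velocity_driven_torque(q, q_des, omega, kp=8.0, kd=1.0, tau_max=50.0):
    """Toy velocity-driven joint control: derive a desired angular
    velocity from the pose error, then compute torque from the velocity
    error. Gains kp, kd and the torque limit are illustrative only."""
    omega_des = kp * (q_des - q)      # desired velocity from pose error
    tau = kd * (omega_des - omega)    # torque tracks the desired velocity
    return np.clip(tau, -tau_max, tau_max)

# one simulated joint with unit inertia settling toward a target angle
dt, q, omega, q_des = 0.01, 0.0, 0.0, 1.0
for _ in range(1000):                 # 10 s of simulation
    tau = velocity_driven_torque(q, q_des, omega)
    omega += tau * dt                 # unit inertia: alpha = tau
    q += omega * dt
print(round(q, 3))                    # close to the 1.0 rad target
```

The appeal, as the abstract notes, is that such velocity-based gains are far less sensitive to tuning than joint-angle PD gains in full dynamics simulation.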

  6. Near-real time 3D probabilistic earthquakes locations at Mt. Etna volcano

    Science.gov (United States)

    Barberi, G.; D'Agostino, M.; Mostaccio, A.; Patane', D.; Tuve', T.

    2012-04-01

    An automatic procedure for locating earthquakes in quasi-real time must provide a good estimate of the earthquake location within a few seconds after the event is first detected, and is strongly needed for seismic warning systems. The reliability of an automatic location algorithm is influenced by several factors such as errors in picking seismic phases, network geometry, and velocity model uncertainties. On Mt. Etna, the seismic network is managed by INGV and the quasi-real time earthquake locations are performed by using an automatic-picking algorithm based on short-term-average to long-term-average ratios (STA/LTA) calculated from an approximate squared envelope function of the seismogram, which furnishes a list of P-wave arrival times, and the location algorithm Hypoellipse with a 1D velocity model. The main purpose of this work is to investigate the performance of a different automatic procedure to improve the quasi-real time earthquake locations. In fact, as the automatic data processing may be affected by outliers (wrong picks), the use of traditional earthquake location techniques based on a least-square misfit function (L2-norm) often yields unstable and unreliable solutions. Moreover, on Mt. Etna, the 1D model is often unable to represent the complex structure of the volcano (in particular the strong lateral heterogeneities), whereas the increasing accuracy of 3D velocity models at Mt. Etna during recent years allows their use today in routine earthquake locations. Therefore, we selected, as reference locations, all the events that occurred on Mt. Etna in the last year (2011), which were automatically detected and located by means of the Hypoellipse code. By using this dataset (more than 300 events), we applied a nonlinear probabilistic earthquake location algorithm using the Equal Differential Time (EDT) likelihood function (Font et al., 2004; Lomax, 2005), which is much more robust in the presence of outliers in the data. Successively, by using a probabilistic
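The STA/LTA trigger mentioned above can be sketched in a few lines: a short-term average of a squared-amplitude envelope proxy is divided by a long-term average, and a pick is declared when the ratio crosses a threshold. Window lengths, the threshold and the synthetic trace are typical illustrative values, not the INGV production configuration:

```python
import numpy as np

def sta_lta_pick(trace, fs, sta_win=0.5, lta_win=10.0, threshold=4.0):
    """Index of the first sample at which the STA/LTA ratio of a
    squared-amplitude envelope proxy exceeds `threshold`; None if never.
    Window lengths are in seconds (illustrative textbook values)."""
    e = np.asarray(trace, dtype=float) ** 2
    n_sta, n_lta = int(sta_win * fs), int(lta_win * fs)
    sta = np.convolve(e, np.ones(n_sta) / n_sta, mode="valid")
    lta = np.convolve(e, np.ones(n_lta) / n_lta, mode="valid")
    # align both running means on their common ending sample
    ratio = sta[n_lta - n_sta:] / (lta + 1e-12)
    above = np.flatnonzero(ratio > threshold)
    return None if above.size == 0 else int(above[0] + n_lta - 1)

# synthetic seismogram: unit noise with a strong burst at t = 20 s
rng = np.random.default_rng(0)
fs, dur = 100, 30
trace = rng.standard_normal(fs * dur)
onset = 20 * fs
trace[onset:onset + 300] += 10 * np.sin(2 * np.pi * 5 * np.arange(300) / fs)
pick = sta_lta_pick(trace, fs)
print(pick)   # a few samples after the true onset at sample 2000
```

The abstract's point is precisely that such picks occasionally fail (outliers), which is why an EDT likelihood, robust to a fraction of wrong picks, improves the locations.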

  7. GPU acceleration towards real-time image reconstruction in 3D tomographic diffractive microscopy

    Science.gov (United States)

    Bailleul, J.; Simon, B.; Debailleul, M.; Liu, H.; Haeberlé, O.

    2012-06-01

    Phase microscopy techniques have regained interest because they allow observation of unprepared specimens with excellent temporal resolution. Tomographic diffractive microscopy is an extension of holographic microscopy which permits 3D observations with a finer resolution than incoherent-light microscopes. Specimens are imaged through a series of 2D holograms: their accumulation progressively fills the specimen's frequency range in Fourier space, and a 3D inverse FFT eventually provides a spatial image of the specimen. Consequently, acquisition followed by reconstruction is mandatory to produce an image, a prerequisite for real-time monitoring of the observed specimen. The MIPS Laboratory has built a tomographic diffractive microscope with an unsurpassed 130 nm resolution but a low imaging speed: no less than one minute per acquisition, after which a high-end PC reconstructs the 3D image in 20 seconds. We now aim at an interactive system providing preview images during the acquisition for monitoring purposes. We first present a prototype implementing this solution on CPU: acquisition and reconstruction are tied in a producer-consumer scheme, sharing common data in CPU memory. We then present a prototype dispatching some reconstruction tasks to GPU in order to take advantage of SIMD parallelization for FFT and of higher bandwidth for filtering operations. The CPU scheme takes 6 seconds for a 3D image update, while the GPU scheme can go down to 2 s or less depending on the GPU class. This opens opportunities for 4D imaging of living organisms or of crystallization processes. We also consider the relevance of GPU for 3D image interaction in our specific conditions.
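The producer-consumer coupling of acquisition and reconstruction described for the CPU prototype can be sketched with a thread-safe queue. The timings, message payloads and function names are placeholders, not the MIPS Laboratory implementation:

```python
import queue
import threading
import time

def acquire(out_q, n_holograms=8):
    """Producer: emulates hologram acquisition at a fixed frame period."""
    for k in range(n_holograms):
        time.sleep(0.01)              # stand-in for camera exposure/transfer
        out_q.put(("hologram", k))
    out_q.put(None)                   # sentinel: acquisition finished

def reconstruct(in_q, results):
    """Consumer: updates a running preview as holograms arrive, standing
    in for incremental Fourier-space filling plus an inverse FFT."""
    while True:
        item = in_q.get()
        if item is None:
            break
        _, k = item
        results.append(f"preview after hologram {k}")

q, previews = queue.Queue(maxsize=4), []
producer = threading.Thread(target=acquire, args=(q,))
consumer = threading.Thread(target=reconstruct, args=(q, previews))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(len(previews))  # one preview update per acquired hologram
```

The bounded queue is the key design point: the consumer can lag behind the camera without stalling acquisition, which is exactly what lets preview images appear during a one-minute scan.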

  8. Real-Time 3D Tracking and Reconstruction on Mobile Phones.

    Science.gov (United States)

    Prisacariu, Victor Adrian; Kähler, Olaf; Murray, David W; Reid, Ian D

    2015-05-01

    We present a novel framework for jointly tracking a camera in 3D and reconstructing the 3D model of an observed object. Due to the region based approach, our formulation can handle untextured objects, partial occlusions, motion blur, dynamic backgrounds and imperfect lighting. Our formulation also allows for a very efficient implementation which achieves real-time performance on a mobile phone, by running the pose estimation and the shape optimisation in parallel. We use a level set based pose estimation but completely avoid the, typically required, explicit computation of a global distance. This leads to tracking rates of more than 100 Hz on a desktop PC and 30 Hz on a mobile phone. Further, we incorporate additional orientation information from the phone's inertial sensor which helps us resolve the tracking ambiguities inherent to region based formulations. The reconstruction step first probabilistically integrates 2D image statistics from selected keyframes into a 3D volume, and then imposes coherency and compactness using a total variational regularisation term. The global optimum of the overall energy function is found using a continuous max-flow algorithm and we show that, similar to tracking, the integration of per voxel posteriors instead of likelihoods improves the precision and accuracy of the reconstruction.

  9. Real-time microscopic 3D shape measurement based on optimized pulse-width-modulation binary fringe projection

    Science.gov (United States)

    Hu, Yan; Chen, Qian; Feng, Shijie; Tao, Tianyang; Li, Hui; Zuo, Chao

    2017-07-01

    In recent years, tremendous progress has been made in 3D measurement techniques, contributing to the realization of faster and more accurate 3D measurement. As a representative of these techniques, fringe projection profilometry (FPP) has become a commonly used method for real-time 3D measurement, such as real-time quality control and online inspection. To date, most related research has been concerned with macroscopic 3D measurement, but microscopic 3D measurement, especially real-time microscopic 3D measurement, is rarely reported. However, microscopic 3D measurement plays an important role in 3D metrology and is indispensable for measuring micro-scale objects, e.g. the accurate metrology of MEMS components to ensure proper performance of the final devices. In this paper, we propose a method which effectively combines optimized binary structured patterns with a number-theoretical phase unwrapping algorithm to realize real-time microscopic 3D measurement. A slight defocusing of our optimized binary patterns considerably alleviates the measurement error of four-step phase-shifting FPP, giving the binary patterns a performance comparable to ideal sinusoidal patterns. The static measurement accuracy reaches 8 μm, and the experimental results on a vibrating earphone diaphragm show that our system can realize real-time 3D measurement at 120 frames per second (FPS) with a measurement range of 8 mm × 6 mm laterally and 8 mm in depth.
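Four-step phase-shifting FPP, which the defocused binary patterns approximate, recovers the wrapped phase from four fringe images shifted by π/2 each: with shifts 0, π/2, π, 3π/2, the phase is φ = atan2(I4 − I2, I1 − I3). A minimal sketch on synthetic ideal sinusoidal fringes (variable names and intensity values are illustrative):

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Four-step phase-shifting with shifts 0, pi/2, pi, 3*pi/2:
    I_k = A + B*cos(phi + delta_k)  =>  phi = atan2(I4 - I2, I1 - I3),
    wrapped to (-pi, pi]."""
    return np.arctan2(i4 - i2, i1 - i3)

# synthetic fringes along one image row, ideal sinusoidal patterns
phi_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, 256)
A, B = 120.0, 100.0                    # background and modulation (arbitrary)
frames = [A + B * np.cos(phi_true + d)
          for d in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
phi = wrapped_phase(*frames)
print(np.max(np.abs(phi - phi_true)))  # recovery exact up to rounding
```

The paper's contribution sits around this core: binary patterns whose defocused projection approximates the sinusoids above, plus a number-theoretical algorithm to unwrap φ across fringe periods.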

  10. Virtual 3-D Facial Reconstruction

    Directory of Open Access Journals (Sweden)

    Martin Paul Evison

    2000-06-01

    Full Text Available Facial reconstructions in archaeology allow empathy with people who lived in the past and enjoy considerable popularity with the public. It is a common misconception that facial reconstruction will produce an exact likeness; a resemblance is the best that can be hoped for. Research at Sheffield University is aimed at the development of a computer system for facial reconstruction that will be accurate, rapid, repeatable, accessible and flexible. This research is described and prototypical 3-D facial reconstructions are presented. Interpolation models simulating obesity, ageing and ethnic affiliation are also described. Some strengths and weaknesses in the models, and their potential for application in archaeology are discussed.

  11. Autonomous and 3D real-time multi-beam manipulation in a microfluidic environment

    DEFF Research Database (Denmark)

    Perch-Nielsen, I.; Rodrigo, P.J.; Alonzo, C.A.

    2006-01-01

    The Generalized Phase Contrast (GPC) method of optical 3D manipulation has previously been used for controlled spatial manipulation of live biological specimens in real time. These biological experiments were carried out over a time-span of several hours while an operator intermittently optimized the optical system. Here we present GPC-based optical micromanipulation in a microfluidic system where trapping experiments are computer-automated and thereby capable of running with only limited supervision. The system is able to dynamically detect living yeast cells using a computer-interfaced CCD camera, and to respond by instantly creating traps at the positions of the spotted cells streaming at flow velocities that would be difficult for a human operator to handle. With the added ability to control flow rates, experiments were also carried out to confirm the theoretically predicted axially dependent

  12. Real-time 3D vectorcardiography: an application for didactic use

    International Nuclear Information System (INIS)

    Daniel, G; Lissa, G; Redondo, D Medina; Vasquez, L; Zapata, D

    2007-01-01

    The traditional approach to teaching the physiological basis of electrocardiography, based only on textbooks, turns out to be insufficient or confusing for students of biomedical sciences. The addition of laboratory practice to the curriculum enables students to approach theoretical aspects from hands-on experience, resulting in a more efficient and deeper knowledge of the phenomena of interest. Here, we present the development of a PC-based application meant to facilitate the understanding of cardiac bioelectrical phenomena by visualizing the instantaneous 3D cardiac vector in real time. The system uses 8 standard leads from a 12-channel electrocardiograph. The application interface has pedagogic objectives, and facilitates the observation of cardiac depolarization and repolarization and their temporal relationship with the ECG, making them simpler to interpret.

  13. Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion.

    Science.gov (United States)

    Leimkuhler, Thomas; Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter

    2018-06-01

    We propose a system to infer binocular disparity from a monocular video stream in real time. Different from classic reconstruction of physical depth in computer vision, we compute perceptually plausible disparity that is numerically inaccurate but results in a very similar overall depth impression, with a plausible overall layout, sharp edges, fine details, and agreement between luminance and disparity. We use several simple monocular cues to estimate disparity maps and confidence maps of low spatial and temporal resolution in real time. These are complemented by spatially varying, appearance-dependent and class-specific disparity prior maps learned from example stereo images. Scene classification selects this prior at runtime. Fusion of prior and cues is done by means of robust MAP inference on a dense spatio-temporal conditional random field with high spatial and temporal resolution. Using normal distributions allows this in constant-time, parallel per-pixel work. We compare our approach to previous 2D-to-3D conversion systems in terms of different metrics, as well as a user study, and validate our notion of perceptually plausible disparity.

  14. 3D VISUALIZATION FOR VIRTUAL MUSEUM DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    M. Skamantzari

    2016-06-01

    Full Text Available The interest in the development of virtual museums is nowadays rising rapidly. During the last decades there have been numerous efforts concerning the 3D digitization of cultural heritage and the development of virtual museums, digital libraries and serious games. The realistic result has always been the main concern and a real challenge when it comes to 3D modelling of monuments, artifacts and especially sculptures. This paper implements, investigates and evaluates the results of the photogrammetric methods and 3D surveys that were used for the development of a virtual museum. Moreover, the decisions, the actions, the methodology and the main elements that this kind of application should include and take into consideration are described and analysed. It is believed that the outcomes of this application will be useful to researchers who are planning to develop and further improve the attempts made on virtual museums and mass production of 3D models.

  15. IPS – A SYSTEM FOR REAL-TIME NAVIGATION AND 3D MODELING

    Directory of Open Access Journals (Sweden)

    D. Grießbach

    2012-07-01

    Full Text Available Reliable navigation and 3D modeling is a necessary requirement for any autonomous system in real-world scenarios. The German Aerospace Center (DLR) developed a system providing precise information about the local position and orientation of a mobile platform, as well as three-dimensional information about its environment, in real time. This system, called the Integral Positioning System (IPS), can be applied in indoor and outdoor environments. To achieve high precision, reliability, integrity and availability, a multi-sensor approach was chosen. The important role of sensor data synchronization, system calibration and spatial referencing is emphasized because the data from several sensors has to be fused using a Kalman filter. A hardware operating system (HW-OS) is presented that facilitates the low-level integration of different interfaces. The benefit of this approach is an increased precision of synchronization at the expense of additional engineering costs. It will be shown that the additional effort is leveraged by the new design concept, since the HW-OS methodology allows a proven, flexible and fast design process, high re-usability of common components and, consequently, higher reliability within the low-level sensor fusion. Another main focus of the paper is on the IPS software. DLR developed, implemented and tested a flexible and extensible software concept for data grabbing, efficient data handling and data preprocessing (e.g. image rectification), the latter being essential for thematic data processing. Standard outputs of IPS are the trajectory of the moving platform and a high-density 3D point cloud of the current environment. This information is provided in real time. Based on these results, information processing on more abstract levels can be executed.

  16. Real-Time Climate Simulations in the Interactive 3D Game Universe Sandbox ²

    Science.gov (United States)

    Goldenson, N. L.

    2014-12-01

    Exploration in an open-ended computer game is an engaging way to explore climate and climate change. Everyone can explore physical models with real-time visualization in the educational simulator Universe Sandbox ² (universesandbox.com/2), which includes basic climate simulations on planets. I have implemented a time-dependent, one-dimensional meridional heat transport energy balance model to run and be adjustable in real time in the midst of a larger simulated system. Universe Sandbox ² is based on the original game - at its core a gravity simulator - with other new physically-based content for stellar evolution, and handling collisions between bodies. Existing users are mostly science enthusiasts in informal settings. We believe that this is the first climate simulation to be implemented in a professionally developed computer game with modern 3D graphical output in real time. The type of simple climate model we've adopted helps us depict the seasonal cycle and the more drastic changes that come from changing the orbit or other external forcings. Users can alter the climate as the simulation is running by altering the star(s) in the simulation, dragging to change orbits and obliquity, adjusting the climate simulation parameters directly or changing other properties like CO2 concentration that affect the model parameters in representative ways. Ongoing visuals of the expansion and contraction of sea ice and snow-cover respond to the temperature calculations, and make it accessible to explore a variety of scenarios and intuitive to understand the output. Variables like temperature can also be graphed in real time. We balance computational constraints with the ability to capture the physical phenomena we wish to visualize, giving everyone access to a simple open-ended meridional energy balance climate simulation to explore and experiment with. The software lends itself to labs at a variety of levels about climate concepts including seasons, the Greenhouse effect
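A time-dependent 1-D meridional energy balance model of the kind described can be sketched with textbook-style Budyko/Sellers coefficients. All parameter values, and the crude insulated-pole Laplacian below, are illustrative assumptions, not the parameters used in Universe Sandbox ²:

```python
import numpy as np

# Minimal Budyko/Sellers-style 1-D meridional energy balance model:
#   C dT/dt = S(lat) * (1 - alpha) - (A + B*T) + D * d2T/dlat2
n = 90
lat = np.deg2rad(np.linspace(-89, 89, n))
# annual-mean insolation via the second Legendre polynomial of sin(lat)
S = 1361.0 / 4 * (1.0 - 0.482 * 0.5 * (3 * np.sin(lat) ** 2 - 1))
alpha, A, B, D, C = 0.3, 203.3, 2.09, 0.55, 9.8  # albedo, OLR, diffusion, heat cap.
T = np.full(n, 10.0)                             # initial temperature, deg C
dt = 1.0 / 365                                   # time step in years
dlat = lat[1] - lat[0]

def step(T):
    lap = np.zeros_like(T)
    lap[1:-1] = T[:-2] - 2 * T[1:-1] + T[2:]     # crude Laplacian
    lap[0], lap[-1] = T[1] - T[0], T[-2] - T[-1] # insulated poles
    return T + dt / C * (S * (1 - alpha) - (A + B * T) + D * lap / dlat**2)

for _ in range(20 * 365):                        # integrate ~20 model years
    T = step(T)
print(round(T[n // 2], 1), round(T[0], 1))       # equatorial vs. polar temp (deg C)
```

Forcings such as CO2 can be represented by shifting A downward, and orbit or obliquity changes by recomputing S each step, which is roughly the kind of real-time adjustability the abstract describes.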

  17. Spatiotemporal Segmentation and Modeling of the Mitral Valve in Real-Time 3D Echocardiographic Images.

    Science.gov (United States)

    Pouch, Alison M; Aly, Ahmed H; Lai, Eric K; Yushkevich, Natalie; Stoffers, Rutger H; Gorman, Joseph H; Cheung, Albert T; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2017-09-01

    Transesophageal echocardiography is the primary imaging modality for preoperative assessment of mitral valves with ischemic mitral regurgitation (IMR). While there are well known echocardiographic insights into the 3D morphology of mitral valves with IMR, such as annular dilation and leaflet tethering, less is understood about how quantification of valve dynamics can inform surgical treatment of IMR or predict short-term recurrence of the disease. As a step towards filling this knowledge gap, we present a novel framework for 4D segmentation and geometric modeling of the mitral valve in real-time 3D echocardiography (rt-3DE). The framework integrates multi-atlas label fusion and template-based medial modeling to generate quantitatively descriptive models of valve dynamics. The novelty of this work is that temporal consistency in the rt-3DE segmentations is enforced during both the segmentation and modeling stages with the use of groupwise label fusion and Kalman filtering. The algorithm is evaluated on rt-3DE data series from 10 patients: five with normal mitral valve morphology and five with severe IMR. In these 10 data series that total 207 individual 3DE images, each 3DE segmentation is validated against manual tracing and temporal consistency between segmentations is demonstrated. The ultimate goal is to generate accurate and consistent representations of valve dynamics that can both visually and quantitatively provide insight into normal and pathological valve function.
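Kalman filtering for temporal consistency, as used in the modeling stage above, can be illustrated on a single landmark coordinate tracked across frames with a constant-velocity state model. The noise parameters and the synthetic annulus-like motion are illustrative, not the paper's settings:

```python
import numpy as np

def kalman_smooth_track(z, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter over one landmark coordinate
    across frames. Process noise q and measurement noise r are
    illustrative values."""
    x = np.array([z[0], 0.0])                 # state: [position, velocity]
    P = np.eye(2)
    F = np.array([[1.0, 1.0], [0.0, 1.0]])    # constant-velocity transition
    H = np.array([[1.0, 0.0]])                # we observe position only
    Q = q * np.eye(2)
    out = []
    for zi in z:
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        y = zi - H @ x                        # innovation
        S = H @ P @ H.T + r
        K = P @ H.T / S                       # Kalman gain (S is 1x1)
        x = x + (K * y).ravel()               # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 100)
true = 5.0 + np.sin(t)                        # periodic annulus-like motion
noisy = true + 0.1 * rng.standard_normal(t.size)
smooth = kalman_smooth_track(noisy)
```

In the paper's 4D setting the same principle applies per landmark of the medial model; this 1-D version just shows how frame-to-frame jitter is damped without lagging a smooth periodic motion.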

  18. An inexpensive underwater mine countermeasures simulator with real-time 3D after action review

    Directory of Open Access Journals (Sweden)

    Robert Stone

    2016-10-01

    Full Text Available This paper presents the results of a concept capability demonstration pilot study, the aim of which was to investigate how inexpensive gaming software and hardware technologies could be exploited in the development and evaluation of a simulator prototype for training Royal Navy mine clearance divers, specifically focusing on the detection and accurate reporting of the location and condition of underwater ordnance. The simulator was constructed using the Blender open source 3D modelling toolkit and game engine, and featured not only an interactive 3D editor for underwater scenario generation by instructors, but also a real-time 3D After Action Review (AAR) system for formative assessment and feedback. The simulated scenarios and AAR architecture were based on early human factors observations and briefings conducted at the UK's Defence Diving School (DDS), an organisation that provides basic military diving training for all Royal Navy and Army (Royal Engineers) divers. An experimental pilot study was undertaken to determine whether or not basic navigational and mine detection components of diver performance could be improved as a result of exposing participants to the AAR system, delivered between simulated diving scenarios. The results suggest that the provision of AAR was accompanied by significant performance improvements in the positive identification of simulated underwater ordnance (in contrast to non-ordnance objects) and in participants' descriptions of their location, their immediate in-water or seabed context, and their structural condition. Only marginal improvements were found in participants' navigational performance in terms of their deviation accuracies from a pre-programmed expert search path. Overall, this project contributes to the growing corpus of evidence supporting the development of simulators that demonstrate the value of exploiting open source gaming software and the significance of adopting established games design

  19. Monitoring the effects of doxorubicin on 3D-spheroid tumor cells in real-time

    Directory of Open Access Journals (Sweden)

    Baek N

    2016-11-01

    Full Text Available NamHuk Baek,1,* Ok Won Seo,1,* MinSung Kim,1 John Hulme,2 Seong Soo A An2 1Department of R & D, NanoEntek Inc., Seoul, Republic of Korea; 2Department of BioNano Technology, Gachon University, Gyeonggi-do, Republic of Korea *These authors contributed equally to this work Abstract: Recently, increasing numbers of cell culture experiments with 3D spheroids have presented better correlating results in vivo than traditional 2D cell culture systems. 3D spheroids could offer a simple and highly reproducible model that exhibits many characteristics of natural tissue, such as the production of extracellular matrix. In this paper numerous cell lines were screened and selected depending on their ability to form and maintain a spherical shape. The effects of increasing concentrations of doxorubicin (DXR) on the integrity and viability of the selected spheroids were then measured at regular intervals and in real time. In total 12 cell lines, adenocarcinomic alveolar basal epithelial (A549), muscle (C2C12), prostate (DU145), testis (F9), pituitary epithelial-like (GH3), cervical cancer (HeLa), HeLa contaminant (HEp2), embryo (NIH3T3), embryo (PA317), neuroblastoma (SH-SY5Y), osteosarcoma (U2OS), and embryonic kidney cells (293T), were screened. Out of the 12, 8 cell lines, NIH3T3, C2C12, 293T, SH-SY5Y, A549, HeLa, PA317, and U2OS, formed regular spheroids, and the effects of DXR on these structures were measured at regular intervals. Finally, 5 cell lines, A549, HeLa, SH-SY5Y, U2OS, and 293T, were selected for real-time monitoring and the effects of DXR treatment on their behavior were continuously recorded for 5 days. A potential correlation regarding the effects of DXR on spheroid viability and ATP production was measured on days 1, 3, and 5. Cytotoxicity of DXR seemed to occur after endocytosis, since cellular activities and ATP production were still viable after 1 day of treatment in all spheroids, except SH-SY5Y. Both cellular activity and ATP production were

  20. Real-time markerless tracking for augmented reality: the virtual visual servoing framework.

    Science.gov (United States)

    Comport, Andrew I; Marchand, Eric; Pressigout, Muriel; Chaumette, François

    2006-01-01

    Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and little latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a "video see through" monocular vision system. The tracking of objects in the scene amounts to calculating the pose between the camera and the objects. Virtual objects can then be projected into the scene using the pose. Here, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curves interaction matrices are given for different 3D geometrical primitives including straight lines, circles, cylinders, and spheres. A local moving edges tracker is used in order to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.

  1. Introduction to programmable shader in real time 3D computer graphics

    International Nuclear Information System (INIS)

    Uemura, Syuhei; Kirii, Keisuke; Matsumura, Makoto; Matsumoto, Kenichiro

    2004-01-01

    Although the visualization of large-scale data plays an important role in the basic sciences, influencing the usefulness of information, it has traditionally required high-end graphics systems or dedicated hardware. In recent years, on the other hand, the capabilities of video game consoles and PC graphics boards have progressed remarkably, reflecting the growth of the video game market both domestically and abroad. In particular, the ''programmable shader'' technology that several graphics chip makers have begun to implement is an innovation that can be called a generational change in real-time 3D graphics, and it has greatly broadened the scope of visual expression techniques. However, development and use environments for software based on programmable shaders are not yet fully established, and exploration of applied techniques for ultra-high-speed, high-quality visualization of large-scale data has made little progress. We outline programmable shader technology and consider its possible application to large-scale data visualization. (author)

  2. An inkjet-printed buoyant 3-D lagrangian sensor for real-time flood monitoring

    KAUST Repository

    Farooqui, Muhammad Fahad

    2014-06-01

    A 3-D (cube-shaped) Lagrangian sensor, inkjet printed on a paper substrate, is presented for the first time. The sensor comprises a transmitter chip with a microcontroller completely embedded in the cube, along with a 1.5λ0 dipole that is uniquely implemented on all the faces of the cube to achieve a near isotropic radiation pattern. The sensor has been designed to operate both in the air and in water (half immersed) for real-time flood monitoring. The sensor weighs 1.8 g and measures 13 mm × 13 mm × 13 mm, and each side of the cube corresponds to only 0.1λ0 (at 2.4 GHz). The printed circuit board is also inkjet-printed on a paper substrate to make the sensor lightweight and buoyant. Issues related to the bending of inkjet-printed tracks and integration of the transmitter chip in the cube are discussed. The Lagrangian sensor is designed to operate in a wireless sensor network, and field tests have confirmed that it can communicate up to a distance of 100 m while in the air and up to 50 m while half immersed in water. © 1963-2012 IEEE.

  3. Real-time 3D visualization of cellular rearrangements during cardiac valve formation.

    Science.gov (United States)

    Pestel, Jenny; Ramadass, Radhan; Gauvrit, Sebastien; Helker, Christian; Herzog, Wiebke; Stainier, Didier Y R

    2016-06-15

    During cardiac valve development, the single-layered endocardial sheet at the atrioventricular canal (AVC) is remodeled into multilayered immature valve leaflets. Most of our knowledge about this process comes from examining fixed samples that do not allow a real-time appreciation of the intricacies of valve formation. Here, we exploit non-invasive in vivo imaging techniques to identify the dynamic cell behaviors that lead to the formation of the immature valve leaflets. We find that in zebrafish, the valve leaflets consist of two sets of endocardial cells at the luminal and abluminal side, which we refer to as luminal cells (LCs) and abluminal cells (ALCs), respectively. By analyzing cellular rearrangements during valve formation, we observed that the LCs and ALCs originate from the atrium and ventricle, respectively. Furthermore, we utilized Wnt/β-catenin and Notch signaling reporter lines to distinguish between the LCs and ALCs, and also found that cardiac contractility and/or blood flow is necessary for the endocardial expression of these signaling reporters. Thus, our 3D analyses of cardiac valve formation in zebrafish provide fundamental insights into the cellular rearrangements underlying this process. © 2016. Published by The Company of Biologists Ltd.

  4. SU-E-J-237: Real-Time 3D Anatomy Estimation From Undersampled MR Acquisitions

    Energy Technology Data Exchange (ETDEWEB)

    Glitzner, M; Lagendijk, J; Raaymakers, B; Crijns, S [University Medical Center Utrecht, Utrecht (Netherlands); Senneville, B Denis de [University Medical Center Utrecht, Utrecht (Netherlands); Mathematical Institute of Bordeaux, University of Bordeaux, Talence Cedex (France)

    2015-06-15

    Recent developments made MRI guided radiotherapy feasible. Performing simultaneous imaging during fractions can provide information about changing anatomy by means of deformable image registration, for either immediate plan adaptations or accurate dose accumulation on the changing anatomy. In 3D MRI, however, acquisition time is considerable and scales with resolution. Furthermore, intra-scan motion degrades image quality. In this work, we investigate the sensitivity of registration quality to image resolution: potentially, by employing spatial undersampling, the acquisition time of MR images for the purpose of deformable image registration can be reduced significantly. On a volunteer, 3D-MR imaging data was sampled in a navigator-gated manner, acquiring one axial volume (360×260×100 mm³) per 3 s during the exhale phase. A T1-weighted FFE sequence was used with an acquired voxel size of (2.5 mm)³ for a duration of 17 min. Deformation vector fields were evaluated for 100 imaging cycles with respect to the initial anatomy using deformable image registration based on optical flow. Subsequently, the imaging data was downsampled by a factor of 2, simulating a fourfold acquisition speed. Displacements of the downsampled volumes were then calculated by the same process. At kidney-liver boundaries and the region around the stomach/duodenum, prominent organ drifts could be observed in both the original and the downsampled imaging data. An increasing displacement of approximately 2 mm was observed for the kidney, while an area around the stomach showed sudden displacements of 4 mm. Comparison of the motile points over time showed high reproducibility between the displacements of high-resolution and downsampled volumes: over a 17 min acquisition, the componentwise RMS error was not more than 0.38 mm. Based on the synthetic experiments, 3D nonrigid image registration shows little sensitivity to image resolution and the displacement information is preserved even when halving the
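The comparison reported, registering at full and halved resolution and then computing a componentwise RMS difference between the displacement fields, can be sketched as follows (the optical-flow registration itself is omitted; the vector fields here are placeholders):

```python
import numpy as np

# Sketch of the resolution experiment: downsample a 3D volume by 2 in each
# axis (block averaging) and compare two displacement vector fields by their
# componentwise RMS difference. The optical-flow registration itself is
# omitted; the fields below are placeholders.
def downsample2(vol):
    a, b, c = (s - s % 2 for s in vol.shape)          # crop to even dimensions
    v = vol[:a, :b, :c]
    return v.reshape(a // 2, 2, b // 2, 2, c // 2, 2).mean(axis=(1, 3, 5))

def componentwise_rmse(dvf_a, dvf_b):
    """RMS error per displacement component between two vector fields of
    shape (..., 3); returns three values (same units as the fields, e.g. mm)."""
    diff = dvf_a - dvf_b
    return np.sqrt((diff**2).mean(axis=tuple(range(diff.ndim - 1))))

vol = np.random.default_rng(1).random((40, 40, 20))
small = downsample2(vol)                              # shape (20, 20, 10)
dvf_full = np.zeros((20, 20, 10, 3))                  # placeholder fields
dvf_down = np.full((20, 20, 10, 3), 0.3)
rmse = componentwise_rmse(dvf_full, dvf_down)         # [0.3, 0.3, 0.3]
```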

  5. VR-Planets : a 3D immersive application for real-time flythrough images of planetary surfaces

    Science.gov (United States)

    Civet, François; Le Mouélic, Stéphane

    2015-04-01

    During the last two decades, a fleet of planetary probes has acquired several hundred gigabytes of images of planetary surfaces. Mars has been particularly well covered thanks to the Mars Global Surveyor, Mars Express and Mars Reconnaissance Orbiter spacecraft. The HRSC, CTX and HiRISE instruments allowed the computation of Digital Elevation Models with a resolution from hundreds of meters up to 1 meter per pixel, and corresponding orthoimages with a resolution from a few hundred meters up to 25 centimeters per pixel. The integration of such huge data sets into a system allowing user-friendly manipulation, either for scientific investigation or for public outreach, can represent a real challenge. We are investigating how innovative tools can be used to freely fly over reconstructed landscapes in real time, using technologies derived from the game industry and virtual reality. We have developed an application based on a game engine, using planetary data, to immerse users in real Martian landscapes. The user can freely navigate in each scene at full spatial resolution using a game controller. The actual rendering is compatible with several visualization devices such as 3D active screens, virtual reality headsets (Oculus Rift), and Android devices.

  6. 3D super-virtual refraction interferometry

    KAUST Repository

    Lu, Kai; AlTheyab, Abdullah; Schuster, Gerard T.

    2014-01-01

    Super-virtual refraction interferometry enhances the signal-to-noise ratio of far-offset refractions. However, when applied to 3D cases, traditional 2D SVI suffers because the stationary positions of the source-receiver pairs might be any place

  7. 3D virtual table in anatomy education

    DEFF Research Database (Denmark)

    Dahl, Mads Ronald; Simonsen, Eivind Ortind

    The ‘Anatomage’ is a 3D virtual human anatomy table, with touchscreen functionality, where it is possible to upload CT-scans and digital. Learning the human anatomy terminology requires time, a very good memory, anatomy atlas, books and lectures. Learning the 3 dimensional structure, connections...

  8. 3D-Pathology: a real-time system for quantitative diagnostic pathology and visualisation in 3D

    Science.gov (United States)

    Gottrup, Christian; Beckett, Mark G.; Hager, Henrik; Locht, Peter

    2005-02-01

    This paper presents the results of the 3D-Pathology project conducted under the European EC Framework 5. The aim of the project was, through the application of 3D image reconstruction and visualization techniques, to improve the diagnostic and prognostic capabilities of medical personnel when analyzing pathological specimens using transmitted light microscopy. A fully automated, computer-controlled microscope system has been developed to capture 3D images of specimen content. 3D image reconstruction algorithms have been implemented and applied to the acquired volume data in order to facilitate the subsequent 3D visualization of the specimen. Three potential application fields, immunohistology, chromogenic in situ hybridization (CISH) and cytology, have been tested using the prototype system. For both immunohistology and CISH, use of the system furnished significant additional information to the pathologist.

  9. A meshless EFG-based algorithm for 3D deformable modeling of soft tissue in real-time.

    Science.gov (United States)

    Abdi, Elahe; Farahmand, Farzam; Durali, Mohammad

    2012-01-01

    The meshless element-free Galerkin method was generalized and an algorithm was developed for 3D dynamic modeling of deformable bodies in real time. The efficacy of the algorithm was investigated in a 3D linear viscoelastic model of the human spleen subjected to a time-varying compressive force exerted by a surgical grasper. The model remained stable despite the considerably large deformations that occurred. There was good agreement between the results and those of an equivalent finite element model. The computational cost, however, was much lower, enabling the proposed algorithm to be used effectively in real-time applications.

  10. Interactive Scientific Visualization in 3D Virtual Reality Model

    Directory of Open Access Journals (Sweden)

    Filip Popovski

    2016-11-01

    Full Text Available Scientific visualization in virtual reality technology is a graphical representation of a virtual environment in the form of images or animation that can be displayed with various devices, such as a Head Mounted Display (HMD) or monitors that can present a three-dimensional world. Research in real time is a desirable capability for scientific visualization and virtual reality in which we are immersed, and it makes the research process easier. In this scientific paper the interactions between the user and objects in the virtual environment take place in real time, which gives a sense of reality to the user. The Quest3D VR software package is used, and the movement of the user through the virtual environment, the impossibility of walking through solid objects, and methods for grabbing and displacing objects are programmed, making all interactions between them possible. At the end, critical analyses of all these techniques were made on various computer systems and excellent results were obtained.

  11. Synthetic biology's tall order: Reconstruction of 3D, super resolution images of single molecules in real-time

    CSIR Research Space (South Africa)

    Henriques, R

    2010-08-31

    Full Text Available -to-use reconstruction software coupled with image acquisition. Here, we present QuickPALM, an ImageJ plugin, enabling real-time reconstruction of 3D super-resolution images during acquisition and drift correction. We illustrate its application by reconstructing Cy5...

  12. 3D Virtual Reality for Teaching Astronomy

    Science.gov (United States)

    Speck, Angela; Ruzhitskaya, L.; Laffey, J.; Ding, N.

    2012-01-01

    We are developing 3D virtual learning environments (VLEs) as learning materials for an undergraduate astronomy course, which will utilize advances both in the technologies available and in our understanding of the social nature of learning. These learning materials will be used to test whether such VLEs can indeed augment science learning so that it is more engaging, active, visual and effective. Our project focuses on the challenges and requirements of introductory college astronomy classes. Here we present our virtual world of the Jupiter system and how we plan to implement it to allow students to learn course material - physical laws and concepts in astronomy - while engaging them in exploration of the Jupiter system, encouraging their imagination, curiosity, and motivation. The VLE can allow students to work individually or collaboratively. The 3D world also provides an opportunity for research in astronomy education to investigate the impact of social interaction, gaming features, and the use of manipulatives offered by a learning tool on students’ motivation and learning outcomes. Use of this VLE is also a valuable source for exploration of how learners’ spatial awareness can be enhanced by working in a 3D environment. We will present the Jupiter-system environment along with a preliminary study of the efficacy and usability of our Jupiter 3D VLE.

  13. Real time determination of dose radiation through artificial intelligence and virtual reality

    International Nuclear Information System (INIS)

    Freitas, Victor G.G.; Mol, Antonio C.A.; Pereira, Claudio M.N.A.

    2009-01-01

    In the last years, a virtual environment of the Argonauta research reactor, located at the Instituto de Engenharia Nuclear (Brazil), has been developed. This environment, called here Argonauta Virtual (AV), is a 3D model of the reactor hall in which virtual people (avatars) can navigate. In AV, simulations of nuclear sources and doses are possible. In a recent work, a real time monitoring system (RTMS) was developed to provide (by means of Ethernet TCP/IP) the information of area detectors situated in the reactor hall. Extending the scope of AV, this work is intended to provide a continuous determination of gamma radiation dose in the reactor hall, based on several monitored parameters. To accomplish that, a module based on an artificial neural network (ANN) was developed. The ANN module is able to predict gamma radiation doses using as inputs the avatar position (from the virtual environment), the reactor power (from the RTMS), and information from fixed area detectors (from the RTMS). The ANN training data were obtained by measurements of gamma radiation doses in a mesh of points, with previously defined positions, for different power levels. Through the use of the ANN it is possible to estimate, in real time, the dose received by a person at any position in the Argonauta reactor hall. Such an approach allows task simulations and training of people inside the AV system, without exposing them to radiation effects. (author)
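The kind of feed-forward ANN described, mapping avatar position, reactor power, and a detector reading to a dose estimate, can be sketched with a single hidden layer trained by gradient descent. The synthetic "dose law" and network size below are illustrative assumptions, not the Argonauta data or model:

```python
import numpy as np

# One-hidden-layer feed-forward ANN sketch: inputs are avatar position (x, y),
# reactor power, and one area-detector reading; output is a gamma dose rate.
# The synthetic "dose law" below stands in for the measured training mesh;
# it is NOT the Argonauta data or model.
rng = np.random.default_rng(0)
X = rng.random((500, 4))                                  # (x, y, power, detector)
y = 0.5 * X[:, 2] + 0.3 * X[:, 3] + 0.2 * X[:, 0] * X[:, 1]

W1 = rng.normal(0, 0.5, (4, 16)); b1 = np.zeros(16)       # hidden layer
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)        # output layer

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2, h

lr = 0.1
for _ in range(2000):                                     # plain batch gradient descent
    pred, h = forward(X)
    err = pred - y[:, None]
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)                        # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((forward(X)[0] - y[:, None]) ** 2))   # small after training
```

Once trained, a single `forward` call per rendered frame is cheap enough to drive a continuous real-time dose display as the avatar moves.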

  14. Real time determination of dose radiation through artificial intelligence and virtual reality

    International Nuclear Information System (INIS)

    Freitas, Victor Goncalves Gloria

    2009-01-01

    In the last years, a virtual environment of the Argonauta research reactor, located at the Instituto de Engenharia Nuclear (Brazil), has been developed. This environment, called here Argonauta Virtual (AV), is a 3D model of the reactor hall in which virtual people (avatars) can navigate. In AV, simulations of nuclear sources and doses are possible. In a recent work, a real time monitoring system (RTMS) was developed to provide (by means of Ethernet TCP/IP) the information of area detectors situated in the reactor hall. Extending the scope of AV, this work is intended to provide a continuous determination of gamma radiation dose in the reactor hall, based on several monitored parameters. To accomplish that, a module based on an artificial neural network (ANN) was developed. The ANN module is able to predict gamma radiation doses using as inputs the avatar position (from the virtual environment), the reactor power (from the RTMS), and information from fixed area detectors (from the RTMS). The ANN training data were obtained by measurements of gamma radiation doses in a mesh of points, with previously defined positions, for different power levels. Through the use of the ANN it is possible to estimate, in real time, the dose received by a person at any position in the Argonauta reactor hall. Such an approach allows task simulations and training of people inside the AV system, without exposing them to radiation effects. (author)

  15. 3D Assessment of Features Associated With Transvalvular Aortic Regurgitation After TAVR: A Real-Time 3D TEE Study.

    Science.gov (United States)

    Shibayama, Kentaro; Mihara, Hirotsugu; Jilaihawi, Hasan; Berdejo, Javier; Harada, Kenji; Itabashi, Yuji; Siegel, Robert; Makkar, Raj R; Shiota, Takahiro

    2016-02-01

    This study of 3-dimensional (3D) transesophageal echocardiography (TEE) aimed to demonstrate features associated with transvalvular aortic regurgitation (AR) after transcatheter aortic valve replacement (TAVR) and to confirm the fact that a gap between the native aortic annulus and prosthesis is associated with paravalvular AR. The mechanism of AR after TAVR, particularly that of transvalvular AR, has not been evaluated adequately. All patients with severe aortic stenosis who underwent TAVR with the Sapien device (Edwards Lifesciences, Irvine, California) had 3D TEE of the pre-procedural native aortic annulus and the post-procedural prosthetic valve. In the 201 patients studied, the total AR was mild in 67 patients (33%), moderate in 21 patients (10%), and severe in no patients. There were 20 patients with transvalvular AR and 82 patients with paravalvular AR. Fourteen patients had both transvalvular and paravalvular AR. Patients with transvalvular AR had larger prosthetic expansion (p < …), a more elliptical prosthetic shape at the prosthetic commissure level (p < …), and anti-anatomically positioned prosthetic commissures in relation to the native commissures, compared with patients without transvalvular AR. Age (odds ratio [OR]: 1.05; 95% confidence interval [CI]: 1.01 to 1.09; p < …) … 3D TEE successfully demonstrated the features associated with transvalvular AR, such as large prosthetic expansion, elliptical prosthetic shape, and anti-anatomical position of the prosthesis. Additionally, effective area oversizing was associated with paravalvular AR. Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  16. Real-time geometric scene estimation for RGBD images using a 3D box shape grammar

    Science.gov (United States)

    Willis, Andrew R.; Brink, Kevin M.

    2016-06-01

    This article describes a novel real-time algorithm for extracting box-like structures from RGBD image data. In contrast to conventional approaches, the proposed algorithm includes two novel attributes: (1) it divides the geometric estimation procedure into subroutines having atomic incremental computational costs, and (2) it uses a generative "Block World" perceptual model that infers both concave and convex box elements from detection of primitive box substructures. The end result is an efficient geometry processing engine suitable for use in real-time embedded systems such as those on UAVs, where it is intended to be an integral component for robotic navigation and mapping applications.

  17. Virtual VMASC: A 3D Game Environment

    Science.gov (United States)

    Manepalli, Suchitra; Shen, Yuzhong; Garcia, Hector M.; Lawsure, Kaleen

    2010-01-01

    The advantages of creating interactive 3D simulations that allow viewing, exploring, and interacting with land improvements, such as buildings, in digital form are manifold and range from allowing individuals from anywhere in the world to explore those virtual land improvements online, to training military personnel in dealing with war-time environments, and to making those land improvements available in virtual worlds such as Second Life. While we haven't fully explored the true potential of such simulations, we have identified a requirement within our organization to use simulations like these to replace our front-desk personnel and allow visitors to query, navigate, and communicate virtually with various entities within the building. We implemented the Virtual VMASC 3D simulation of the Virginia Modeling Analysis and Simulation Center (VMASC) office building to not only meet our front-desk requirement but also to evaluate the effort required in designing such a simulation and, thereby, leverage the experience we gained in future projects of this kind. This paper describes the goals we set for our implementation, the software approach taken, the modeling contribution made, and the technologies used, such as XNA Game Studio, the .NET framework, and Autodesk software packages, and, finally, the applicability of our implementation on a variety of architectures including Xbox 360 and PC. This paper also summarizes the results of our evaluation and the lessons learned from our effort.

  18. A multi-frequency electrical impedance tomography system for real-time 2D and 3D imaging

    Science.gov (United States)

    Yang, Yunjie; Jia, Jiabin

    2017-08-01

    This paper presents the design and evaluation of a configurable, fast multi-frequency Electrical Impedance Tomography (mfEIT) system for real-time 2D and 3D imaging, particularly for biomedical imaging. The system integrates 32 electrode interfaces and the current frequency ranges from 10 kHz to 1 MHz. The system incorporates the following novel features. First, a fully adjustable multi-frequency current source with a current monitoring function is designed. Second, a flexible switching scheme is developed for arbitrary sensing configurations and a semi-parallel data acquisition architecture is implemented for high-frame-rate data acquisition. Furthermore, multi-frequency digital quadrature demodulation is accomplished in a high-capacity Field Programmable Gate Array. Finally, 3D imaging software, visual tomography, is developed for real-time 2D and 3D image reconstruction, data analysis, and visualization. The mfEIT system is systematically tested and evaluated in terms of signal-to-noise ratio (SNR), frame rate, and 2D and 3D multi-frequency phantom imaging. The highest SNR is 82.82 dB on a 16-electrode sensor. The frame rate is up to 546 fps in serial mode and 1014 fps in semi-parallel mode. The evaluation results indicate that the presented mfEIT system is a powerful tool for real-time 2D and 3D imaging.
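Digital quadrature demodulation, which this system performs in the FPGA, amounts to multiplying the sampled signal by in-phase and quadrature references at the excitation frequency and averaging. A minimal sketch with illustrative frequencies:

```python
import numpy as np

# Digital quadrature demodulation as performed in the FPGA of an mfEIT system:
# multiply the sampled electrode signal by in-phase and quadrature references
# at the excitation frequency, then average. Frequencies below are
# illustrative, not the system's actual configuration.
def quadrature_demodulate(signal, f, fs):
    n = np.arange(signal.size)
    ref_i = np.cos(2 * np.pi * f * n / fs)       # in-phase reference
    ref_q = np.sin(2 * np.pi * f * n / fs)       # quadrature reference
    I = 2 * np.mean(signal * ref_i)
    Q = 2 * np.mean(signal * ref_q)
    return np.hypot(I, Q), np.arctan2(Q, I)      # amplitude, phase lag (rad)

fs = 10e6                                        # 10 MS/s sampling rate
f = 100e3                                        # 100 kHz excitation
n = np.arange(1000)                              # exactly 10 excitation periods
sig = 0.8 * np.cos(2 * np.pi * f * n / fs - 0.3) # amplitude 0.8, lag 0.3 rad
amp, ph = quadrature_demodulate(sig, f, fs)      # amp ~= 0.8, ph ~= 0.3
```

Averaging over an integer number of excitation periods cancels the double-frequency terms exactly, which is why the recovered amplitude and phase are essentially exact here.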

  19. A real-time 3D end-to-end augmented reality system (and its representation transformations)

    Science.gov (United States)

    Tytgat, Donny; Aerts, Maarten; De Busser, Jeroen; Lievens, Sammy; Rondao Alface, Patrice; Macq, Jean-Francois

    2016-09-01

    The new generation of HMDs coming to the market is expected to enable many new applications that allow free viewpoint experiences with captured video objects. Current applications usually rely on 3D content that is manually created or captured in an offline manner. In contrast, this paper focuses on augmented reality applications that use live captured 3D objects while maintaining free viewpoint interaction. We present a system that allows live dynamic 3D objects (e.g. a person who is talking) to be captured in real-time. Real-time performance is achieved by traversing a number of representation formats and exploiting their specific benefits. For instance, depth images are maintained for fast neighborhood retrieval and occlusion determination, while implicit surfaces are used to facilitate multi-source aggregation for both geometry and texture. The result is a 3D reconstruction system that outputs multi-textured triangle meshes at real-time rates. An end-to-end system is presented that captures and reconstructs live 3D data and allows for this data to be used on a networked (AR) device. For allocating the different functional blocks onto the available physical devices, a number of alternatives are proposed considering the available computational power and bandwidth for each of the components. As we will show, the representation format can play an important role in this functional allocation and allows for a flexible system that can support a highly heterogeneous infrastructure.

  20. Holovideo: Real-time 3D range video encoding and decoding on GPU

    Science.gov (United States)

    Karpinsky, Nikolaus; Zhang, Song

    2012-02-01

    We present a 3D video-encoding technique called Holovideo that is capable of encoding high-resolution 3D videos into standard 2D videos, and then decoding the 2D videos back into 3D rapidly without significant loss of quality. Due to the nature of the algorithm, 2D video compression such as JPEG encoding with QuickTime Run Length Encoding (QTRLE) can be applied with little quality loss, resulting in an effective way to store 3D video at very small file sizes. We found that under a compression ratio of 134:1, Holovideo to OBJ file format, the 3D geometry quality drops by a negligible level. Several sets of 3D videos were captured using a structured light scanner, compressed using the Holovideo codec, and then uncompressed and displayed to demonstrate the effectiveness of the codec. With the use of OpenGL shaders (GLSL), the 3D video codec can encode and decode in real time. We demonstrated that for a video size of 512×512, the decoding speed is 28 frames per second (FPS) on a laptop computer using an embedded NVIDIA GeForce 9400M graphics processing unit (GPU). Encoding can be done with the same setup at 18 FPS, making this technology suitable for applications such as interactive 3D video games and 3D video conferencing.
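The core idea of such a codec, packing range data into the channels of an ordinary 2D image, can be sketched with sinusoidal fringe encoding. This simplified version keeps depth within one fringe period so that no phase unwrapping is needed; it is not the exact Holovideo encoding:

```python
import numpy as np

# Much-simplified sketch of the core Holovideo idea: pack a depth (range) map
# into the channels of an ordinary 2D image using sinusoidal fringe encoding,
# then recover it on decode. We keep depth within a single fringe period to
# avoid phase unwrapping; a full codec must also disambiguate across periods.
P = 64.0                                          # fringe period (depth units)

def encode(depth):
    img = np.empty(depth.shape + (3,))
    img[..., 0] = 0.5 + 0.5 * np.sin(2 * np.pi * depth / P)
    img[..., 1] = 0.5 + 0.5 * np.cos(2 * np.pi * depth / P)
    img[..., 2] = depth / P                       # coarse channel, unused here
    return img

def decode(img):
    phase = np.arctan2(img[..., 0] - 0.5, img[..., 1] - 0.5)
    return (phase % (2 * np.pi)) * P / (2 * np.pi)

depth = np.random.default_rng(2).uniform(0, P, (64, 64))
recovered = decode(encode(depth))                 # matches depth to float precision
```

Because the encoded channels are smooth sinusoids of depth, they survive lossy 2D compression far better than raw depth values would, which is the property the codec exploits; per-pixel encode/decode of this form also maps naturally onto GPU shaders.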

  1. A Smartphone Interface for a Wireless EEG Headset with Real-Time 3D Reconstruction

    DEFF Research Database (Denmark)

    Stopczynski, Arkadiusz; Larsen, Jakob Eg; Stahlhut, Carsten

    2011-01-01

    We demonstrate a fully functional handheld brain scanner consisting of a low-cost 14-channel EEG headset with a wireless connection to a smartphone, enabling minimally invasive EEG monitoring in naturalistic settings. The smartphone provides a touch-based interface with real-time brain state...

  2. An Evolutionary Real-Time 3D Route Planner for Aircraft

    Institute of Scientific and Technical Information of China (English)

    郑昌文; 丁明跃; 周成平

    2003-01-01

    A novel evolutionary route planner for aircraft is proposed in this paper. In the new planner, individual candidates are evaluated with respect to the workspace, so computation of the configuration space is not required. By using a problem-specific chromosome structure and genetic operators, routes are generated in real time, with mission constraints such as minimum route leg length, flying altitude, maximum turning angle, maximum climbing/diving angle, and route distance taken into account.
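    Evaluating a candidate directly in the workspace amounts to checking its waypoint sequence against the mission constraints listed above. A minimal feasibility check is sketched below; the limit values are hypothetical, since the paper does not state them.

```python
import numpy as np

# Hypothetical constraint limits (the paper's actual values are not given).
MIN_LEG, MAX_DIST = 2.0, 100.0
MAX_TURN, MAX_SLOPE = np.radians(60.0), np.radians(30.0)

def feasible(waypoints):
    """Check a candidate route (N x 3 array of x, y, z) against minimum leg
    length, total distance, climbing/diving angle, and turning angle."""
    legs = np.diff(waypoints, axis=0)
    lengths = np.linalg.norm(legs, axis=1)
    if (lengths < MIN_LEG).any() or lengths.sum() > MAX_DIST:
        return False
    # climbing/diving angle of each leg
    horiz = np.linalg.norm(legs[:, :2], axis=1)
    if (np.abs(np.arctan2(legs[:, 2], horiz)) > MAX_SLOPE).any():
        return False
    # turning angle between consecutive legs, in the horizontal plane
    for a, b in zip(legs[:-1], legs[1:]):
        cosang = np.dot(a[:2], b[:2]) / (np.linalg.norm(a[:2]) * np.linalg.norm(b[:2]))
        if np.arccos(np.clip(cosang, -1.0, 1.0)) > MAX_TURN:
            return False
    return True
```

    In an evolutionary planner such a check (or a penalized version of it) forms part of the fitness function applied to each chromosome.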

  3. Real-time 3-D SAFT-UT system evaluation and validation

    International Nuclear Information System (INIS)

    Doctor, S.R.; Schuster, G.J.; Reid, L.D.; Hall, T.E.

    1996-09-01

    SAFT-UT technology is shown to provide significant enhancements to the inspection of materials used in US nuclear power plants. This report provides guidelines for the implementation of SAFT-UT technology and shows the results from its application. An overview of the development of SAFT-UT is provided so that the reader may become familiar with the technology. Then the basic fundamentals are presented with an extensive list of references. A comprehensive operating procedure, which is used in conjunction with the SAFT-UT field system developed by Pacific Northwest Laboratory (PNL), provides the recipe for both SAFT data acquisition and analysis. The specification for the hardware implementation is provided for the SAFT-UT system along with a description of the subsequent developments and improvements. One development of technical interest is the SAFT real time processor. Performance of the real-time processor is impressive and comparison is made of this dedicated parallel processor to a conventional computer and to the newer high-speed computer architectures designed for image processing. Descriptions of other improvements, including a robotic scanner, are provided. Laboratory parametric and application studies, performed by PNL and not previously reported, are discussed followed by a section on field application work in which SAFT was used during inservice inspections of operating reactors

  4. Real-time 3-D SAFT-UT system evaluation and validation

    Energy Technology Data Exchange (ETDEWEB)

    Doctor, S.R.; Schuster, G.J.; Reid, L.D.; Hall, T.E. [Pacific Northwest National Lab., Richland, WA (United States)

    1996-09-01

    SAFT-UT technology is shown to provide significant enhancements to the inspection of materials used in US nuclear power plants. This report provides guidelines for the implementation of SAFT-UT technology and shows the results from its application. An overview of the development of SAFT-UT is provided so that the reader may become familiar with the technology. Then the basic fundamentals are presented with an extensive list of references. A comprehensive operating procedure, which is used in conjunction with the SAFT-UT field system developed by Pacific Northwest Laboratory (PNL), provides the recipe for both SAFT data acquisition and analysis. The specification for the hardware implementation is provided for the SAFT-UT system along with a description of the subsequent developments and improvements. One development of technical interest is the SAFT real time processor. Performance of the real-time processor is impressive and comparison is made of this dedicated parallel processor to a conventional computer and to the newer high-speed computer architectures designed for image processing. Descriptions of other improvements, including a robotic scanner, are provided. Laboratory parametric and application studies, performed by PNL and not previously reported, are discussed followed by a section on field application work in which SAFT was used during inservice inspections of operating reactors.

  5. Optimal transcostal high-intensity focused ultrasound with combined real-time 3D movement tracking and correction

    International Nuclear Information System (INIS)

    Marquet, F; Aubry, J F; Pernot, M; Fink, M; Tanter, M

    2011-01-01

    Recent studies have demonstrated the feasibility of transcostal high intensity focused ultrasound (HIFU) treatment in the liver. However, two factors limit thermal necrosis of the liver through the ribs: the energy deposition at the focus is decreased by the respiratory movement of the liver, and the energy deposition on the skin is increased by the presence of highly absorbing bone structures. Ex vivo ablations were conducted to validate the feasibility of a transcostal real-time 3D movement tracking and correction mode. Experiments were conducted through a chest phantom made of three human ribs immersed in water, placed in front of a 300-element array working at 1 MHz. A binarized apodization law introduced recently in order to spare the rib cage during treatment has been extended here with real-time electronic steering of the beam. Thermal simulations have been conducted to determine the steering limits. In vivo 3D movement detection was performed on pigs using an ultrasonic sequence. The maximum error on the transcostal motion detection was measured to be 0.09 ± 0.097 mm on the anterior–posterior axis. Finally, a complete sequence was developed combining real-time 3D transcostal movement correction and a spiral trajectory of the HIFU beam, allowing the system to treat larger areas with optimized efficiency. Lesions as large as 1 cm in diameter have been produced at the focus in excised liver, whereas no necroses could be obtained with the same emitted power without correcting the movement of the tissue sample.
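    The binarized apodization idea, switching off elements whose direct path to the focus is shadowed by a rib, can be sketched geometrically. The model below is a deliberately simplified 2D version (elements on the z = 0 line, ribs as x-intervals in a single plane), not the paper's acoustic simulation.

```python
import numpy as np

def binarized_apodization(elem_x, focus, rib_plane_z, rib_intervals):
    """1/0 apodization law (sketch): element i is switched off if the
    straight ray from (elem_x[i], 0) to the focus crosses a rib, with ribs
    modelled as x-intervals in the plane z = rib_plane_z."""
    fx, fz = focus
    t = rib_plane_z / fz                   # ray parameter at the rib plane
    x_cross = elem_x + t * (fx - elem_x)   # crossing abscissa per element
    blocked = np.zeros_like(elem_x, dtype=bool)
    for lo, hi in rib_intervals:
        blocked |= (x_cross >= lo) & (x_cross <= hi)
    return (~blocked).astype(float)        # 1 = emit, 0 = off
```

    Real-time electronic steering then amounts to recomputing this mask whenever the focus position changes.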

  6. 3D super-virtual refraction interferometry

    KAUST Repository

    Lu, Kai

    2014-08-05

    Super-virtual refraction interferometry (SVI) enhances the signal-to-noise ratio (SNR) of far-offset refractions. However, when applied to 3D cases, traditional 2D SVI suffers because the stationary positions of the source-receiver pairs might be any place along the recording plane, not just along a receiver line. Moreover, the effect of enhancing the SNR can be limited because of the limitations in the number of survey lines, irregular line geometries, and azimuthal range of arrivals. We have developed a 3D SVI method to overcome these problems. By integrating along the source or receiver lines, the cross-correlation or the convolution result of a trace pair with the source or receiver at the stationary position can be calculated without the requirement of knowing the stationary locations. In addition, the amplitudes of the cross-correlation and convolution results are largely strengthened by integration, which is helpful to further enhance the SNR. In this paper, both synthetic and field data examples are presented, demonstrating that the super-virtual refractions generated by our method have accurate traveltimes and much improved SNR.
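    The integration step can be sketched as stacking the cross-correlations of trace pairs over all sources along a line: the stationary-phase contribution adds constructively at the differential traveltime lag. This is a toy illustration of the stacking principle, not production seismic code.

```python
import numpy as np

def stack_crosscorr(traces_a, traces_b):
    """Sum cross-correlations of corresponding trace pairs (one row per
    source) over the whole source line; coherent arrivals stack up while
    incoherent noise averages down, boosting the SNR."""
    n = traces_a.shape[1]
    acc = np.zeros(2 * n - 1)
    for a, b in zip(traces_a, traces_b):
        acc += np.correlate(a, b, mode="full")
    return acc
```

    For identical arrivals at samples 7 and 4 on the two receivers, the stacked correlogram peaks at a lag of +3 samples, and the peak amplitude grows linearly with the number of stacked sources.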

  7. A real-time virtual delivery system for photon radiotherapy delivery monitoring

    Directory of Open Access Journals (Sweden)

    Feng Shi

    2014-03-01

    Full Text Available Purpose: Treatment delivery monitoring is important for radiotherapy, as it enables catching dosimetric errors at the earliest possible opportunity. This project develops a virtual delivery system to monitor the dose delivery process of photon radiotherapy in real time using a GPU-based Monte Carlo (MC) method. Methods: The simulation process consists of three parallel CPU threads. A thread T1 is responsible for communication with a linac; it acquires a set of linac status parameters, e.g. gantry angles, MLC configurations, and beam MUs, every 20 ms. Since linac vendors currently do not offer an interface to acquire data in real time, we mimic this process by fetching information from a linac dynalog file at the set frequency. The instantaneous beam fluence map (FM) is calculated from these parameters. An FM buffer is also created in T1 and the instantaneous FM is accumulated to it. This process continues until a ready signal is received from thread T2, on which an in-house developed MC dose engine executes on GPU. At that moment, the accumulated FM is transferred to T2 for dose calculation, and the FM buffer in T1 is cleared. Once the dose calculation finishes, the resulting 3D dose distribution is directed to thread T3, which displays it in three orthogonal planes in color wash overlaid on the CT image. This process continues to monitor the 3D dose distribution in real time. Results: An IMRT and a VMAT case used in our patient-specific QA were studied. Maximum dose differences between our system and the treatment planning system are 0.98% and 1.58% for the IMRT and VMAT cases, respectively. The update frequency is > 10 Hz and the relative uncertainty level is 2%. Conclusion: By embedding a GPU-based MC code in a novel data/work flow, it is possible to achieve real-time MC dose calculations to monitor the delivery process.
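    The T1 buffer hand-off described above is an accumulate-and-flush pattern. A minimal single-threaded sketch of that logic follows; the class and method names are ours, not the authors'.

```python
import numpy as np

class FluenceAccumulator:
    """Accumulate instantaneous fluence maps (one per 20 ms status poll)
    and flush the running sum when the dose engine signals it is ready."""

    def __init__(self, shape):
        self.buffer = np.zeros(shape)

    def add(self, fm):
        self.buffer += fm          # T1: accumulate the instantaneous FM

    def flush(self):
        out = self.buffer.copy()   # hand the accumulated FM to T2
        self.buffer[:] = 0.0       # and clear the buffer
        return out
```

    Because the dose engine consumes the *sum* of fluence since its last run, no 20 ms snapshot is lost even when a single MC calculation spans many acquisition intervals.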

  8. Real-Time Large Scale 3d Reconstruction by Fusing Kinect and Imu Data

    Science.gov (United States)

    Huai, J.; Zhang, Y.; Yilmaz, A.

    2015-08-01

    Kinect-style RGB-D cameras have been used to build large scale dense 3D maps for indoor environments. These maps can serve many purposes such as robot navigation and augmented reality. However, generating dense 3D maps of large scale environments is still very challenging. In this paper, we present a mapping system for 3D reconstruction that fuses measurements from a Kinect and an inertial measurement unit (IMU) to estimate motion. Our major achievements include: (i) Large scale consistent 3D reconstruction is realized by volume shifting and loop closure; (ii) The coarse-to-fine iterative closest point (ICP) algorithm, the SIFT odometry, and IMU odometry are combined to robustly and precisely estimate pose. In particular, ICP runs routinely to track the Kinect motion. If ICP fails in planar areas, the SIFT odometry provides the incremental motion estimate. If both ICP and the SIFT odometry fail, e.g., upon abrupt motion or inadequate features, the incremental motion is estimated by the IMU. Additionally, the IMU also observes the roll and pitch angles, which can reduce long-term drift of the sensor assembly. In experiments on a consumer laptop, our system estimates motion at 8 Hz on average while integrating color images into the local map and saving volumes of meshes concurrently. Moreover, it is immune to tracking failures, and has smaller drift than the state-of-the-art systems in large scale reconstruction.
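    The fallback logic in achievement (ii) can be sketched as a cascade over motion estimators, each returning a pose increment or None on failure. This is a schematic with hypothetical callables, not the authors' implementation.

```python
def estimate_motion(icp, sift_odometry, imu_odometry):
    """Return the first successful incremental motion estimate:
    ICP first, then SIFT odometry, then IMU integration as last resort."""
    for estimator in (icp, sift_odometry, imu_odometry):
        delta_pose = estimator()
        if delta_pose is not None:
            return delta_pose
    raise RuntimeError("all motion estimators failed")
```

    Ordering the estimators from most to least precise keeps accuracy high while guaranteeing that some estimate is always available, which is what makes the system immune to individual tracking failures.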

  9. Touring Mars Online, Real-time, in 3D for Math and Science Educators and Students

    Science.gov (United States)

    Jones, Greg; Kalinowski, Kevin

    2007-01-01

    This article discusses a project that, beginning in 2003, placed over 97% of Mars' topography, made available from NASA, into an interactive 3D multi-user online learning environment. In 2005, curriculum materials were developed to support middle school math and science education. Research conducted at the University of North Texas…

  10. An inkjet-printed buoyant 3-D lagrangian sensor for real-time flood monitoring

    KAUST Repository

    Farooqui, Muhammad Fahad; Claudel, Christian G.; Shamim, Atif

    2014-01-01

    A 3-D (cube-shaped) Lagrangian sensor, inkjet printed on a paper substrate, is presented for the first time. The sensor comprises a transmitter chip with a microcontroller completely embedded in the cube, along with a 1.5 λ0 dipole…

  11. Embedded, real-time UAV control for improved, image-based 3D scene reconstruction

    Science.gov (United States)

    Jean Liénard; Andre Vogs; Demetrios Gatziolis; Nikolay Strigul

    2016-01-01

    Unmanned Aerial Vehicles (UAVs) are already broadly employed for 3D modeling of large objects such as trees and monuments via photogrammetry. The usual workflow includes two distinct steps: image acquisition with the UAV and computationally demanding post-flight image processing. Insufficient feature overlap across images is a common shortcoming in post-flight image...

  12. Planning and Management of Real-Time Geospatial UAS Missions Within a Virtual Globe Environment

    Science.gov (United States)

    Nebiker, S.; Eugster, H.; Flückiger, K.; Christen, M.

    2011-09-01

    This paper presents the design and development of a hardware and software framework supporting all phases of typical monitoring and mapping missions with mini and micro UAVs (unmanned aerial vehicles). The developed solution combines state-of-the-art collaborative virtual globe technologies with advanced geospatial imaging techniques and wireless data link technologies supporting the combined and highly reliable transmission of digital video, high-resolution still imagery and mission control data over extended operational ranges. The framework enables the planning, simulation, control and real-time monitoring of UAS missions in application areas such as monitoring of forest fires, agronomical research, border patrol or pipeline inspection. The geospatial components of the project are based on the Virtual Globe Technology i3D OpenWebGlobe of the Institute of Geomatics Engineering at the University of Applied Sciences Northwestern Switzerland (FHNW). i3D OpenWebGlobe is a high-performance 3D geovisualisation engine supporting the web-based streaming of very large amounts of terrain and POI data.

  13. Esophagogastric Junction pressure morphology: comparison between a station pull-through and real-time 3D-HRM representation.

    Science.gov (United States)

    Nicodème, F; Lin, Z; Pandolfino, J E; Kahrilas, P J

    2013-09-01

    Esophagogastric junction (EGJ) competence is the fundamental defense against reflux making it of great clinical significance. However, characterizing EGJ competence with conventional manometric methodologies has been confounded by its anatomic and physiological complexity. Recent technological advances in miniaturization and electronics have led to the development of a novel device that may overcome these challenges. Nine volunteer subjects were studied with a novel 3D-HRM device providing 7.5 mm axial and 45° radial pressure resolution within the EGJ. Real-time measurements were made at rest and compared to simulations of a conventional pull-through made with the same device. Moreover, 3D-HRM recordings were analyzed to differentiate contributing pressure signals within the EGJ attributable to lower esophageal sphincter (LES), diaphragm, and vasculature. 3D-HRM recordings suggested that sphincter length assessed by a pull-through method greatly exaggerated the estimate of LES length by failing to discriminate among circumferential contractile pressure and asymmetric extrinsic pressure signals attributable to diaphragmatic and vascular structures. Real-time 3D EGJ recordings found that the dominant constituents of EGJ pressure at rest were attributable to the diaphragm. 3D-HRM permits real-time recording of EGJ pressure morphology facilitating analysis of the EGJ constituents responsible for its function as a reflux barrier making it a promising tool in the study of GERD pathophysiology. The enhanced axial and radial recording resolution of the device should facilitate further studies to explore perturbations in the physiological constituents of EGJ pressure in health and disease. © 2013 John Wiley & Sons Ltd.

  14. Further development of synthetic aperture real-time 3D scanning with a rotating phased array

    DEFF Research Database (Denmark)

    Nikolov, Svetoslav; Tomov, Borislav Gueorguiev; Gran, Fredrik

    2003-01-01

    with an f-number of 1 is used to transmit to create spherical waves, (2) virtual receive elements are synthesized to decrease noise and grating lobes, (3) the compression filter for the FM pulses was modified to suppress the range lobes (4) additional hardware for synchronization is built....

  15. Real-time 3-dimensional virtual reality navigation system with open MRI for breast-conserving surgery

    International Nuclear Information System (INIS)

    Tomikawa, Morimasa; Konishi, Kozo; Ieiri, Satoshi; Hong, Jaesung; Uemura, Munenori; Hashizume, Makoto; Shiotani, Satoko; Tokunaga, Eriko; Maehara, Yoshihiko

    2011-01-01

    We report here the early experiences using a real-time three-dimensional (3D) virtual reality navigation system with open magnetic resonance imaging (MRI) for breast-conserving surgery (BCS). Two patients with a non-palpable MRI-detected breast tumor underwent BCS under the guidance of the navigation system. An initial MRI for the breast tumor using skin-affixed markers was performed immediately prior to excision. A percutaneous intramammary dye marker was applied to delineate an excision line, and the computer software '3D Slicer' generated a real-time 3D virtual reality model of the tumor and the puncture needle in the breast. Under guidance by the navigation system, marking procedures were performed without any difficulties. Fiducial registration errors were 3.00 mm for patient no.1, and 4.07 mm for patient no.2. The real-time 3D virtual reality navigation system with open MRI is feasible for safe and accurate excision of non-palpable MRI-detected breast tumors. (author)

  16. Intermediate view reconstruction using adaptive disparity search algorithm for real-time 3D processing

    Science.gov (United States)

    Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo

    2008-03-01

    In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ADSA) is proposed for real-time three-dimensional (3D) processing. The proposed algorithm can reduce the processing time of disparity estimation by adaptively selecting the disparity search range. It can also increase the quality of 3D imaging: by adaptively predicting the mutual correlation between the stereo image pair, the bandwidth of the stereo input image pair can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed using a reference image and disparity vectors. In experiments with the stereo sequences 'Pot Plant' and 'IVO', it is shown that the proposed algorithm improves the PSNR of a reconstructed image by about 4.8 dB compared with conventional algorithms, and reduces the synthesis time of a reconstructed image by about 7.02 s.
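    The adaptive-search-range idea can be sketched with block matching whose disparity scan is centered on a prediction from neighbouring pixels instead of covering the full range. This is a generic sum-of-absolute-differences (SAD) sketch of that idea, not the paper's exact algorithm; all parameter names are ours.

```python
import numpy as np

def block_disparity(left, right, x, y, d_prev, radius=4, block=5, d_max=32):
    """SAD block matching where the search is restricted to the window
    [d_prev - radius, d_prev + radius] rather than the full [0, d_max)."""
    h = block // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.int32)
    lo, hi = max(0, d_prev - radius), min(d_max, d_prev + radius + 1)
    costs = {}
    for d in range(lo, hi):
        if x - d - h < 0:
            continue  # candidate window would leave the image
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(np.int32)
        costs[d] = np.abs(ref - cand).sum()  # sum of absolute differences
    return min(costs, key=costs.get)
```

    Shrinking the scan from d_max candidates to 2·radius + 1 is where the reduction in disparity-estimation time comes from.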

  17. 3D Virtual Reality Check: Learner Engagement and Constructivist Theory

    Science.gov (United States)

    Bair, Richard A.

    2013-01-01

    The inclusion of three-dimensional (3D) virtual tools has created a need to communicate the engagement of 3D tools and specify learning gains that educators and the institutions, which are funding 3D tools, can expect. A review of literature demonstrates that specific models and theories for 3D Virtual Reality (VR) learning do not exist "per…

  18. A Bayesian approach to real-time 3D tumor localization via monoscopic x-ray imaging during treatment delivery

    International Nuclear Information System (INIS)

    Li, Ruijiang; Fahimian, Benjamin P.; Xing, Lei

    2011-01-01

    Purpose: Monoscopic x-ray imaging with on-board kV devices is an attractive approach for real-time image guidance in modern radiation therapy such as VMAT or IMRT, but it falls short in providing reliable information along the direction of the imaging x-ray. By effectively taking into consideration projection data at prior times and/or angles through a Bayesian formalism, the authors develop an algorithm for real-time and full 3D tumor localization with a single x-ray imager during treatment delivery. Methods: First, a prior probability density function is constructed using the 2D tumor locations on the projection images acquired during patient setup. Whenever an x-ray image is acquired during the treatment delivery, the corresponding 2D tumor location on the imager is used to update the likelihood function. The unresolved third dimension is obtained by maximizing the posterior probability distribution. The algorithm can also be used in a retrospective fashion when all the projection images during the treatment delivery are used for 3D localization purposes. The algorithm does not involve complex optimization of any model parameter and therefore can be used in a "plug-and-play" fashion. The authors validated the algorithm using (1) simulated 3D linear and elliptic motion and (2) 3D tumor motion trajectories of a lung and a pancreas patient reproduced by a physical phantom. Continuous kV images were acquired over a full gantry rotation with the Varian TrueBeam on-board imaging system. Three scenarios were considered: fluoroscopic setup, cone beam CT setup, and retrospective analysis. Results: For the simulation study, the RMS 3D localization error is 1.2 and 2.4 mm for the linear and elliptic motions, respectively. For the phantom experiments, the 3D localization error is < 1 mm on average and < 1.5 mm at 95th percentile in the lung and pancreas cases for all three scenarios. The difference in 3D localization error for different scenarios is small and is not…
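    When both the prior and the likelihood are modelled as Gaussians, maximizing the posterior over the unresolved coordinate reduces to Gaussian conditioning. The sketch below illustrates that special case only; it is not the authors' exact formalism.

```python
import numpy as np

def map_unresolved(prior_mean, prior_cov, xy):
    """MAP estimate of the unresolved coordinate z given the measured (x, y),
    under a Gaussian prior N(prior_mean, prior_cov) over (x, y, z).
    For a Gaussian, the MAP equals the conditional mean:
        z* = mu_z + C_zx @ C_xx^{-1} @ (xy - mu_xy)."""
    mu_xy, mu_z = prior_mean[:2], prior_mean[2]
    C_xx = prior_cov[:2, :2]   # covariance of the measured coordinates
    C_zx = prior_cov[2, :2]    # cross-covariance of z with (x, y)
    return mu_z + C_zx @ np.linalg.solve(C_xx, xy - mu_xy)
```

    The key point is that the setup projections supply the correlation between the in-plane motion and the depth motion; the single 2D measurement then pins down the depth through that correlation.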

  19. 3D printing and milling a real-time PCR device for infectious disease diagnostics.

    Science.gov (United States)

    Mulberry, Geoffrey; White, Kevin A; Vaidya, Manjusha; Sugaya, Kiminobu; Kim, Brian N

    2017-01-01

    Diagnosing infectious diseases using quantitative polymerase chain reaction (qPCR) offers a conclusive result in determining the infection, the strain or type of pathogen, and the level of infection. However, due to the high-cost instrumentation involved and the complexity in maintenance, it is rarely used in the field to make a quick turnaround diagnosis. In order to provide a higher level of accessibility than current qPCR devices, a set of 3D manufacturing methods is explored as a possible option to fabricate a low-cost and portable qPCR device. The key advantage of this approach is the ability to upload the digital format of the design files on the internet for wide distribution so that people at any location can simply download and feed into their 3D printers for quick manufacturing. The material and design are carefully selected to minimize the number of custom parts that depend on advanced manufacturing processes which lower accessibility. The presented 3D manufactured qPCR device is tested with 20-μL samples that contain various concentrations of lentivirus, the same type as HIV. A reverse-transcription step is a part of the device's operation, which takes place prior to the qPCR step to reverse transcribe the target RNA from the lentivirus into complementary DNA (cDNA). This is immediately followed by qPCR which quantifies the target sequence molecules in the sample during the PCR amplification process. The entire process of thermal control and time-coordinated fluorescence reading is automated by closed-loop feedback and a microcontroller. The resulting device is portable and battery-operated, with a size of 12 × 7 × 6 cm3 and mass of only 214 g. By uploading and sharing the design files online, the presented low-cost qPCR device may provide easier access to a robust diagnosis protocol for various infectious diseases, such as HIV and malaria.
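    The closed-loop thermal control the microcontroller performs can be sketched as a clamped proportional law driving the heater toward each protocol set point. The gains, temperatures, and hold times below are typical qPCR values chosen for illustration, not the authors' parameters.

```python
# Protocol steps: (name, set point in deg C, hold time in s); typical values,
# not taken from the paper.
STEPS = [("reverse_transcription", 50.0, 600),
         ("denature", 95.0, 15),
         ("anneal_extend", 60.0, 60)]

def heater_duty(target_c, measured_c, k_p=0.5):
    """Proportional control of the heater, clamped to a [0, 1] duty cycle;
    the microcontroller would apply this each time it reads the sensor."""
    return min(1.0, max(0.0, k_p * (target_c - measured_c)))
```

    A real cycler would add integral/derivative terms and a fan for active cooling, but this captures the feedback loop that coordinates thermal cycling with the timed fluorescence reads.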

  20. High-accuracy and real-time 3D positioning, tracking system for medical imaging applications based on 3D digital image correlation

    Science.gov (United States)

    Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan

    2017-01-01

    This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real time using 3D digital image correlation (DIC), with two examples of medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the computations of the integral pixel search. Experiments were carried out and the results indicated that the new method improved computational efficiency by about 4-10 times in comparison with the traditional DIC method. The system is aimed at orthognathic surgery navigation, in order to track the maxilla segment after a LeFort I osteotomy. Experiments showed that noise for a static point was at the level of 10^-3 mm and the measurement accuracy was 0.009 mm. The system was also demonstrated on skin surface shape evaluation of a hand for finger stretching exercises, which indicated a great potential for tracking muscle and skin movements.

  1. Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

    Directory of Open Access Journals (Sweden)

    Jin Qi

    Full Text Available Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area is performed on traditional two-dimensional (2D) videos and both global and local methods have been used. Since 2D videos are sensitive to changes of lighting condition, view angle, and scale, researchers began to explore applications of 3D information in human activity understanding in recent years. Unfortunately, features that work well on 2D videos usually don't perform well on 3D videos and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected to the dictionaries and a set of sparse histograms of the projection coefficients are constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.
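    The second step of the pipeline, projection onto the dictionary followed by histogramming of the coefficients, can be sketched in a few lines. Here the dictionary is any matrix of basis vectors; the paper learns it per activity with independent component analysis.

```python
import numpy as np

def sparse_histogram(volumes, dictionary, bins=8):
    """Project flattened space-time volumes onto dictionary atoms and build
    a normalized histogram of the projection coefficients; this histogram is
    the feature vector fed to the downstream classifier (an SVM in the paper)."""
    coeffs = volumes @ dictionary.T                    # (n_volumes, n_atoms)
    hist, _ = np.histogram(coeffs, bins=bins, range=(-1.0, 1.0))
    return hist / max(1, hist.sum())                   # normalize to sum to 1
```

    The histogram discards the ordering of the volumes, so the representation is invariant to when within the clip each joint movement occurs, which is part of what makes it robust.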

  2. Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

    Science.gov (United States)

    Qi, Jin; Yang, Zhiyong

    2014-01-01

    Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area is performed on traditional two-dimensional (2D) videos and both global and local methods have been used. Since 2D videos are sensitive to changes of lighting condition, view angle, and scale, researchers began to explore applications of 3D information in human activity understanding in recent years. Unfortunately, features that work well on 2D videos usually don't perform well on 3D videos and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected to the dictionaries and a set of sparse histograms of the projection coefficients are constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.

  3. 3D printing and milling a real-time PCR device for infectious disease diagnostics

    Science.gov (United States)

    Mulberry, Geoffrey; White, Kevin A.; Vaidya, Manjusha; Sugaya, Kiminobu

    2017-01-01

    Diagnosing infectious diseases using quantitative polymerase chain reaction (qPCR) offers a conclusive result in determining the infection, the strain or type of pathogen, and the level of infection. However, due to the high-cost instrumentation involved and the complexity in maintenance, it is rarely used in the field to make a quick turnaround diagnosis. In order to provide a higher level of accessibility than current qPCR devices, a set of 3D manufacturing methods is explored as a possible option to fabricate a low-cost and portable qPCR device. The key advantage of this approach is the ability to upload the digital format of the design files on the internet for wide distribution so that people at any location can simply download and feed into their 3D printers for quick manufacturing. The material and design are carefully selected to minimize the number of custom parts that depend on advanced manufacturing processes which lower accessibility. The presented 3D manufactured qPCR device is tested with 20-μL samples that contain various concentrations of lentivirus, the same type as HIV. A reverse-transcription step is a part of the device’s operation, which takes place prior to the qPCR step to reverse transcribe the target RNA from the lentivirus into complementary DNA (cDNA). This is immediately followed by qPCR which quantifies the target sequence molecules in the sample during the PCR amplification process. The entire process of thermal control and time-coordinated fluorescence reading is automated by closed-loop feedback and a microcontroller. The resulting device is portable and battery-operated, with a size of 12 × 7 × 6 cm3 and mass of only 214 g. By uploading and sharing the design files online, the presented low-cost qPCR device may provide easier access to a robust diagnosis protocol for various infectious diseases, such as HIV and malaria. PMID:28586401

  4. Real-time 3D echo in patient selection for cardiac resynchronization therapy.

    Science.gov (United States)

    Kapetanakis, Stamatis; Bhan, Amit; Murgatroyd, Francis; Kearney, Mark T; Gall, Nicholas; Zhang, Qing; Yu, Cheuk-Man; Monaghan, Mark J

    2011-01-01

This study investigated the use of 3-dimensional (3D) echo in quantifying left ventricular mechanical dyssynchrony (LVMD), its interhospital agreement, and its potential impact on patient selection. Assessment of LVMD has been proposed as an improvement on conventional criteria in selecting patients for cardiac resynchronization therapy (CRT). Three-dimensional echo offers a reproducible assessment of left ventricular (LV) structure, function, and LVMD and may be useful in selecting patients for this intervention. We studied 187 patients at 2 institutions. Three-dimensional data from baseline and longest follow-up were quantified for volume, left ventricular ejection fraction (LVEF), and systolic dyssynchrony index (SDI). New York Heart Association (NYHA) functional class was assessed independently. Several outcomes from CRT were considered: 1) reduction in NYHA functional class; 2) 20% relative increase in LVEF; and 3) 15% reduction in LV end-systolic volume. Sixty-two cases were shared between institutions to analyze interhospital agreement. There was excellent interhospital agreement for 3D-derived LV end-diastolic and end-systolic volumes, EF, and SDI (variability: 2.9%, 1%, 7.1%, and 7.6%, respectively). Reduction in NYHA functional class was found in 78.9% of patients. Relative improvement in LVEF of 20% was found in 68% of patients, but significant reduction in LV end-systolic volume was found in only 41.5%. The QRS duration was not predictive of any of the measures of outcome (area under the curve [AUC]: 0.52, 0.58, and 0.57 for NYHA functional class, LVEF, and LV end-systolic volume), whereas SDI was highly predictive of improvement in these parameters (AUC: 0.79, 0.86, and 0.66, respectively). For patients not fulfilling traditional selection criteria (atrial fibrillation, QRS duration <120 ms, or undergoing device upgrade), SDI had similar predictive value. A cutoff of 10.4% for SDI was found to have the highest accuracy for predicting improvement following CRT.
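The comparison of QRS duration and SDI above rests on the area under the ROC curve (AUC). As a minimal illustration, the AUC can be computed from responder labels and predictor scores via the Mann-Whitney identity; the values below are hypothetical, not data from the study:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen responder scores higher
    than a randomly chosen non-responder (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

sdi = [4.2, 6.1, 8.0, 9.5, 10.8, 12.3, 14.0, 15.5]  # hypothetical SDI (%)
improved = [0, 0, 0, 1, 1, 0, 1, 1]                 # 1 = responder
print(auc(sdi, improved))  # 0.875
```

An AUC of 0.5 corresponds to a non-informative predictor, which is what the abstract reports for QRS duration.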

  5. 3D Flow visualization in virtual reality

    Science.gov (United States)

    Pietraszewski, Noah; Dhillon, Ranbir; Green, Melissa

    2017-11-01

    By viewing fluid dynamic isosurfaces in virtual reality (VR), many of the issues associated with the rendering of three-dimensional objects on a two-dimensional screen can be addressed. In addition, viewing a variety of unsteady 3D data sets in VR opens up novel opportunities for education and community outreach. In this work, the vortex wake of a bio-inspired pitching panel was visualized using a three-dimensional structural model of Q-criterion isosurfaces rendered in virtual reality using the HTC Vive. Utilizing the Unity cross-platform gaming engine, a program was developed to allow the user to control and change this model's position and orientation in three-dimensional space. In addition to controlling the model's position and orientation, the user can ``scroll'' forward and backward in time to analyze the formation and shedding of vortices in the wake. Finally, the user can toggle between different quantities, while keeping the time step constant, to analyze flow parameter relationships at specific times during flow development. The information, data, or work presented herein was funded in part by an award from NYS Department of Economic Development (DED) through the Syracuse Center of Excellence.

  6. 3D-SURFER 2.0: web platform for real-time search and characterization of protein surfaces.

    Science.gov (United States)

    Xiong, Yi; Esquivel-Rodriguez, Juan; Sael, Lee; Kihara, Daisuke

    2014-01-01

The increasing number of uncharacterized protein structures necessitates the development of computational approaches for function annotation using protein tertiary structures. Protein structure database search is the basis of any structure-based functional elucidation of proteins. 3D-SURFER is a web platform for real-time protein surface comparison of a given protein structure against the entire PDB using 3D Zernike descriptors. It can smoothly navigate the protein structure space in real-time from one query structure to another. A major new feature of Release 2.0 is the ability to compare the protein surface of a single chain, a single domain, or a single complex against databases of protein chains, domains, complexes, or a combination of all three in the latest PDB. Additionally, two types of protein structures can now be compared: all-atom-surface and backbone-atom-surface. The server can also accept a batch job for a large number of database searches. Pockets in protein surfaces can be identified by VisGrid and LIGSITEcsc. The server is available at http://kiharalab.org/3d-surfer/.
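The core operation behind such a surface search is a nearest-neighbour ranking over rotation-invariant descriptor vectors. A minimal sketch, assuming the descriptors have already been computed; the IDs and 3-component vectors below are hypothetical stand-ins for the much higher-dimensional 3D Zernike descriptors:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rank_by_similarity(query, database):
    """Rank (id, descriptor) entries by ascending distance to the query."""
    return sorted(database, key=lambda item: euclidean(query, item[1]))

# Hypothetical database of precomputed descriptor vectors.
db = [
    ("1abcA", [0.90, 0.10, 0.40]),
    ("2xyzB", [0.20, 0.80, 0.50]),
    ("3pqrC", [0.85, 0.15, 0.35]),
]
query = [0.88, 0.12, 0.38]
print([pdb_id for pdb_id, _ in rank_by_similarity(query, db)])
```

Because the descriptors are rotation invariant, no structural superposition is needed at query time, which is what makes the real-time search feasible.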

  7. 3D Markov Process for Traffic Flow Prediction in Real-Time

    Directory of Open Access Journals (Sweden)

    Eunjeong Ko

    2016-01-01

Full Text Available Recently, the correct estimation of traffic flow has come to be considered an essential component of intelligent transportation systems. In this paper, a new statistical method to predict traffic flows using time series analyses and geometric correlations is proposed. The novelty of the proposed method is two-fold: (1) a 3D heat map is designed to describe the traffic conditions between roads, which can effectively represent the correlations between spatially and temporally adjacent traffic states; and (2) the relationship between adjacent roads on the spatiotemporal domain is represented by cliques in a Markov random field (MRF), and the clique parameters are obtained by example-based learning. In order to assess the validity of the proposed method, it is tested using expressway traffic data provided by the Korean Expressway Corporation, and the performance of the proposed method is compared with existing approaches. The results demonstrate that the proposed method can predict traffic conditions with an accuracy of 85%, and this accuracy can be improved further.
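As a simplified stand-in for the paper's MRF-based model, a first-order Markov chain over discretized traffic states illustrates the example-based learning idea: transition probabilities are estimated from an observed state sequence and then used for prediction. The states and history below are invented:

```python
from collections import Counter, defaultdict

def fit_transitions(history):
    """Estimate P(next | current) from a sequence of discrete traffic states."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(history, history[1:]):
        counts[cur][nxt] += 1
    return {s: {n: c / sum(cnt.values()) for n, c in cnt.items()}
            for s, cnt in counts.items()}

def predict(model, state):
    """Most probable next state under the fitted chain."""
    return max(model[state], key=model[state].get)

history = ["free", "free", "slow", "jam", "jam",
           "slow", "free", "free", "slow", "jam"]
model = fit_transitions(history)
print(predict(model, "slow"))  # "jam"
```

The paper's method additionally couples neighbouring road segments through MRF cliques; this sketch only captures the temporal dimension of that model.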

  8. A nonlinear 3D real-time model for simulation of BWR nuclear power plants

    International Nuclear Information System (INIS)

    Ercan, Y.

    1982-02-01

A nonlinear transient model for BWR nuclear power plants is given, consisting of a 3D core (subdivided into a number of superboxes, with parallel flow and subcooled boiling), a top plenum, steam removal and feed-water systems, and main coolant recirculation pumps. The model describes the local core and global plant transient situation as dependent on both the inherent core dynamics and external control actions, i.e., disturbances such as motions of control rod banks and changes of mass flow rates of coolant, feed water and steam outlet. The case of pressure-controlled reactor operation is also considered. The model, which forms the basis for the digital code GARLIC-B (Er et al. 82), is intended to be used on an on-site process computer in parallel to the actual reactor process (or even in predictive mode). Thus, special measures had to be taken to increase the computational speed and reduce the necessary computer storage. This was achieved by separating the neutron and power kinetics from the xenon-iodine dynamics; treating the neutron kinetics and most of the thermodynamics and hydrodynamics in a pseudostationary way; developing a special coupling-coefficient concept to describe the neutron diffusion, with the coupling coefficients calculated by a basic neutron kinetics code; combining coarse mesh elements into superboxes, taking advantage of the symmetry properties of the core; and applying a sparse matrix technique for solving the resulting algebraic power equation system. (orig.) [de]
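A sparse matrix technique of the kind mentioned can be sketched with dictionary-of-rows storage and Gauss-Seidel iteration, which stores and visits only the nonzero couplings; this is an illustrative sketch under invented coefficients, not the GARLIC-B implementation:

```python
def gauss_seidel_sparse(rows, b, iters=100):
    """Solve A x = b with A given sparsely as {row: {col: value}},
    iterating x_i <- (b_i - sum_{j != i} a_ij x_j) / a_ii.
    Converges for diagonally dominant systems."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(a * x[j] for j, a in rows[i].items() if j != i)
            x[i] = (b[i] - s) / rows[i][i]
    return x

# Small diagonally dominant example; only nonzeros are stored.
A = {0: {0: 4.0, 1: -1.0},
     1: {0: -1.0, 1: 4.0, 2: -1.0},
     2: {1: -1.0, 2: 4.0}}
b = [3.0, 2.0, 3.0]
print([round(v, 6) for v in gauss_seidel_sparse(A, b)])  # [1.0, 1.0, 1.0]
```

For a core subdivided into superboxes, each row would couple a node only to its spatial neighbours, so the storage and per-iteration cost grow linearly rather than quadratically with the number of nodes.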

  9. A Real-Time Java Virtual Machine for Avionics (Preprint)

    National Research Council Canada - National Science Library

    Armbruster, Austin; Pla, Edward; Baker, Jason; Cunei, Antonio; Flack, Chapman; Pizlo, Filip; Vitek, Jan; Procházka, Marek; Holmes, David

    2006-01-01

    ...) in the DARPA Program Composition for Embedded System (PCES) program. Within the scope of PCES, Purdue University and the Boeing Company collaborated on the development of Ovm, an open source implementation of the RTSJ virtual machine...

  10. Virtual hand: a 3D tactile interface to virtual environments

    Science.gov (United States)

    Rogowitz, Bernice E.; Borrel, Paul

    2008-02-01

We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a six-degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.
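The pin-height adjustment described above amounts to sampling the virtual object's height field at each pin position and clamping to the pins' mechanical travel. A sketch with a hypothetical dome-shaped object (the surface function, grid and travel range are all invented):

```python
def pin_heights(surface, grid, max_travel):
    """Sample a height field at each pin position and clamp each
    extension to the pins' mechanical travel range [0, max_travel]."""
    return [[min(max(surface(x, y), 0.0), max_travel) for x in grid]
            for y in grid]

def dome(x, y):
    """Hypothetical virtual object: a hemisphere of radius 2."""
    r2 = 4.0 - x * x - y * y
    return r2 ** 0.5 if r2 > 0 else 0.0

grid = [-1.5, -0.5, 0.5, 1.5]          # pin positions along each axis
heights = pin_heights(dome, grid, max_travel=1.8)
print(heights[1][1])  # pin at (-0.5, -0.5), clamped to 1.8
```

In the actual device the robot arm supplies the global pose while the pins render only the local residual shape; this sketch covers just the local sampling step.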

  11. Real-Time and High-Resolution 3D Face Measurement via a Smart Active Optical Sensor.

    Science.gov (United States)

    You, Yong; Shen, Yang; Zhang, Guocai; Xing, Xiuwen

    2017-03-31

The 3D measuring range and accuracy of traditional active optical sensing, such as Fourier transform profilometry, are influenced by the zero frequency of the captured patterns. The phase-shifting technique is commonly applied to remove the zero component. However, this phase-shifting method must capture several phase-shifted fringe patterns, which degrades real-time performance. This study introduces a smart active optical sensor in which a composite pattern is utilized. The composite pattern efficiently combines several phase-shifting fringes and carrier frequencies, allowing the zero frequency to be removed using only one pattern. Model face reconstruction and human face measurement were employed to study the validity and feasibility of this method. Results show no distinct decrease in precision for the novel method compared with the traditional phase-shifting method. The texture mapping technique was utilized to reconstruct a natural-appearance 3D digital face.
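The zero-frequency removal underlying Fourier transform profilometry can be sketched in one dimension: take the spectrum of a fringe signal, keep only a band around the positive carrier frequency (discarding the DC/zero component and the conjugate band), and read the phase off the complex result. The signal, carrier and band limits below are illustrative:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(N^2), fine for a sketch)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

N, f0, phi = 64, 8, 0.7   # samples, carrier frequency (cycles), true phase
signal = [1.0 + 0.5 * math.cos(2 * math.pi * f0 * n / N + phi)
          for n in range(N)]

# Band-pass: keep only the positive carrier band; the zero frequency (DC)
# and the conjugate band are discarded.
S = dft(signal)
S = [S[k] if 5 <= k <= 11 else 0 for k in range(N)]
analytic = idft(S)

# At n = 0 the carrier phase is zero, so the argument is the fringe phase.
recovered = cmath.phase(analytic[0])
print(round(recovered, 3))  # 0.7
```

The paper's contribution is doing the analogous separation with a single composite pattern in 2D rather than with several sequential phase-shifted patterns.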

  12. A 3D virtual reality ophthalmoscopy trainer.

    Science.gov (United States)

    Wilson, Andrew S; O'Connor, Jake; Taylor, Lewis; Carruthers, David

    2017-12-01

Performing eye examinations is an important clinical skill that medical students often find difficult to become proficient in. This paper describes the development and evaluation of an innovative 3D virtual reality (VR) training application to support learning these skills. The VR ophthalmoscope was developed by a clinical team and a technologist using the Unity game engine, a smartphone and a virtual reality headset. It has a series of tasks that include performing systematic eye examinations, identifying common eye pathologies and a knowledge quiz. As part of their clinical training, 15 fourth-year medical students were surveyed for their views on this teaching approach. The Technology Acceptance Model was used to evaluate perceived usefulness and ease of use. Data were also collected on the usability of the app, together with the students' written comments about it. Users agreed that the teaching approach improved their understanding of ophthalmoscopy (n = 14), their ability to identify landmarks in the eye (n = 14) and their ability to recognise abnormalities (n = 15). They found the app easy to use (n = 15), the teaching approach informative (n = 13) and felt that it would increase students' confidence when performing these tasks in future (n = 15). The evaluation showed that a VR app can successfully simulate the processes involved in performing eye examinations. The app was highly rated for all elements of perceived usefulness, ease of use and usability. Medical students stated that they would like to be taught other medical skills in this way in future. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  13. Ice crystallization in porous building materials: assessing damage using real-time 3D monitoring

    Science.gov (United States)

    Deprez, Maxim; De Kock, Tim; De Schutter, Geert; Cnudde, Veerle

    2017-04-01

Frost action is one of the main causes of deterioration of porous building materials in regions at middle to high latitudes. Damage occurs when the internal stresses due to ice formation become larger than the strength of the material. Hence, the sensitivity of the material to frost damage is partly defined by the structure of the solid body. On the other hand, the size, shape and interconnection of pores govern the water distribution in the building material, and therefore the characteristics of the pore space control the potential to form ice crystals (Ruedrich et al., 2011). In order to assess the damage to building materials by ice crystallization, much effort has been put into identifying the mechanisms behind the stress build-up. First of all, the volumetric expansion of 9% (Hirschwald, 1908) during the transition of water to ice should be mentioned. Under natural circumstances, however, water saturation degrees within natural rocks or concrete cannot reach a damaging value. Therefore, linear growth pressure (Scherer, 1999), as well as several mechanisms triggered by water redistribution during freezing (Powers and Helmuth, 1953; Everett, 1961), are more likely responsible for damage due to freezing. Nevertheless, these theories are based on indirect observations and models, and thus direct evidence that reveals the exact damage mechanism under given conditions is still lacking. To obtain this proof, in-situ information needs to be acquired while a freezing process is performed. X-ray computed tomography has proven to be of great value in materials research. Recent advances at the Ghent University Centre for Tomography (UGCT) have already allowed crack growth in natural rock to be imaged dynamically in 3D during freeze-thaw cycles (De Kock et al., 2015). This imaging technique consequently holds great potential for evaluating the different stress build-up mechanisms.
It is required to cover a range of materials with different petrophysical properties to achieve

  14. Real-time viability and apoptosis kinetic detection method of 3D multicellular tumor spheroids using the Celigo Image Cytometer.

    Science.gov (United States)

    Kessel, Sarah; Cribbes, Scott; Bonasu, Surekha; Rice, William; Qiu, Jean; Chan, Leo Li-Ying

    2017-09-01

The development of three-dimensional (3D) multicellular tumor spheroid models for cancer drug discovery research has increased in recent years. 3D tumor spheroid models may be more representative of the complex in vivo tumor microenvironment than two-dimensional (2D) assays. Currently, viability of 3D multicellular tumor spheroids is commonly measured on standard plate-readers using metabolic reagents such as CellTiter-Glo® for end-point analysis. Alternatively, high-content image cytometers have been used to measure drug effects on spheroid size and viability. Previously, we demonstrated a novel end-point drug screening method for 3D multicellular tumor spheroids using the Celigo Image Cytometer. To better characterize cancer drug effects, it is important to also measure the kinetic cytotoxic and apoptotic effects on 3D multicellular tumor spheroids. In this work, we demonstrate the use of propidium iodide (PI) and caspase 3/7 stains to measure viability and apoptosis of 3D multicellular tumor spheroids in real time. The method was first validated by staining different types of tumor spheroids with PI and caspase 3/7 and monitoring the fluorescent intensities for 16 and 21 days. Next, PI-stained and nonstained control tumor spheroids were digested into single-cell suspension to directly measure viability in a 2D assay to determine the potential toxicity of PI. Finally, extensive data analysis was performed on correlating the time-dependent PI and caspase 3/7 fluorescent intensities to spheroid size and necrotic core formation to determine an optimal starting time point for cancer drug testing. The ability to measure real-time viability and apoptosis is highly important for developing a proper 3D model for screening tumor spheroids, which can allow researchers to determine time-dependent drug effects that usually are not captured by end-point assays. This would improve the current tumor spheroid analysis method to potentially better

  15. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    International Nuclear Information System (INIS)

    Ravindran, V. R.; Sreelakshmi, C.; Vibin

    2008-01-01

The application of digital radiography to the nondestructive evaluation (NDE) of space vehicle components is a recent development in India. A real-time DR system based on an amorphous silicon flat panel detector was developed for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC a few years back. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The real-time DR system is capable of generating sufficient digital X-ray image data, with object rotation, for CT image reconstruction. In this paper the indigenous development of CT imaging based on the real-time DR system for solid rocket motors is presented. Studies are also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  16. Novel real-time 3D radiological mapping solution for ALARA maximization, D and D assessments and radiological management

    Energy Technology Data Exchange (ETDEWEB)

    Dubart, Philippe; Hautot, Felix [AREVA Group, 1 route de la Noue, Gif sur Yvette (France); Morichi, Massimo; Abou-Khalil, Roger [AREVA Group, Tour AREVA-1, place Jean Millier, Paris (France)

    2015-07-01

Good management of dismantling and decontamination (D and D) operations and activities requires safety, time saving and thorough radiological knowledge of the contaminated environment, as well as optimization of personnel dose and minimization of waste volume. At the same time, the Fukushima accident has stretched the nuclear measurement operational approach, requiring in such emergency situations fast deployment and intervention, quick analysis and fast scenario definition. AREVA, drawing on experience from its activities carried out at Fukushima and at D and D sites, has developed a novel multi-sensor solution as part of its D and D research, approach and method: a system providing real-time 3D photo-realistic spatial radiation distribution cartography of contaminated premises. The system may be hand-held or mounted on a mobile device (e.g. robot or drone). In this paper, we present our current development, based on SLAM (Simultaneous Localization And Mapping) technology and integrated sensors and detectors allowing simultaneous topographic and radiological (dose rate and/or spectroscopy) data acquisition. This enabling technology permits 3D gamma activity cartography in real time. (authors)
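Fusing SLAM poses with detector readings reduces, at its simplest, to binning position-tagged dose-rate samples into a sparse voxel grid. A sketch with hypothetical readings (this illustrates the data structure only, not AREVA's implementation):

```python
from collections import defaultdict

def build_dose_map(samples, voxel=0.5):
    """Bin (x, y, z, dose_rate) samples, e.g. from a SLAM-tracked probe,
    into a sparse voxel grid keeping the mean dose rate per voxel."""
    acc = defaultdict(lambda: [0.0, 0])
    for x, y, z, dose in samples:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        acc[key][0] += dose
        acc[key][1] += 1
    return {k: total / n for k, (total, n) in acc.items()}

# Hypothetical walk-through readings: position (m) and dose rate (uSv/h).
samples = [(0.1, 0.2, 1.0, 12.0),
           (0.3, 0.1, 1.1, 14.0),
           (2.0, 0.2, 1.0, 55.0)]
dose_map = build_dose_map(samples)
print(dose_map[(0, 0, 2)])  # 13.0
```

Overlaying such a voxel map on the SLAM-reconstructed geometry is what yields the photo-realistic 3D dose cartography described above.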

  17. Novel real-time 3D radiological mapping solution for ALARA maximization, D and D assessments and radiological management

    International Nuclear Information System (INIS)

    Dubart, Philippe; Hautot, Felix; Morichi, Massimo; Abou-Khalil, Roger

    2015-01-01

Good management of dismantling and decontamination (D and D) operations and activities requires safety, time saving and thorough radiological knowledge of the contaminated environment, as well as optimization of personnel dose and minimization of waste volume. At the same time, the Fukushima accident has stretched the nuclear measurement operational approach, requiring in such emergency situations fast deployment and intervention, quick analysis and fast scenario definition. AREVA, drawing on experience from its activities carried out at Fukushima and at D and D sites, has developed a novel multi-sensor solution as part of its D and D research, approach and method: a system providing real-time 3D photo-realistic spatial radiation distribution cartography of contaminated premises. The system may be hand-held or mounted on a mobile device (e.g. robot or drone). In this paper, we present our current development, based on SLAM (Simultaneous Localization And Mapping) technology and integrated sensors and detectors allowing simultaneous topographic and radiological (dose rate and/or spectroscopy) data acquisition. This enabling technology permits 3D gamma activity cartography in real time. (authors)

  18. Semi- and virtual 3D dosimetry in clinical practice

    DEFF Research Database (Denmark)

    Korreman, S. S.

    2013-01-01

In this review, 3D dosimetry is divided into three categories: "true" 3D, semi-3D and virtual 3D. Virtual 3D involves the use of measurement arrays either before or after beam entry in the patient/phantom, whereas semi-3D involves the use of measurement arrays in phantoms mimicking the patient. True 3D involves the measurement of dose in a volume mimicking the patient. There are different advantages and limitations to all three categories and to systems within these categories. The choice of measurement method in a given case depends on the aim of the measurement, and examples are given of verification measurements with various aims.

  19. Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control

    NARCIS (Netherlands)

    van Welbergen, H.; van Basten, B.J.H.; Egges, A.; Ruttkay, Z.M.; Overmars, M.H.

    2010-01-01

    Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or ‘natural’) and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in

  20. Pulsed cavitational ultrasound for non-invasive chordal cutting guided by real-time 3D echocardiography.

    Science.gov (United States)

    Villemain, Olivier; Kwiecinski, Wojciech; Bel, Alain; Robin, Justine; Bruneval, Patrick; Arnal, Bastien; Tanter, Mickael; Pernot, Mathieu; Messas, Emmanuel

    2016-10-01

Surgical section of basal chordae has been shown to be effective in reducing ischaemic mitral regurgitation (IMR). Achieving this section by non-invasive means can considerably decrease the morbidity of this intervention on already infarcted myocardium. We investigated in vitro and in vivo the feasibility and safety of pulsed cavitational focused ultrasound (histotripsy) for non-invasive chordal cutting guided by real-time 3D echocardiography. Experiments were performed on 12 sheep hearts, 5 in vitro on explanted sheep hearts and 7 in vivo on beating sheep hearts. In vitro, the mitral valve (MV) apparatus including basal and marginal chordae was removed and fixed on a holder in a water tank. High-intensity ultrasound pulses were emitted from the therapeutic device (1-MHz focused transducer, pulses of 8 µs duration, peak negative pressure of 17 MPa, repetition frequency of 100 Hz), placed at a distance of 64 mm, under 3D echocardiography guidance. In vivo, after sternotomy, the same therapeutic device was applied to the beating heart. We analysed MV coaptation and chordae by real-time 3D echocardiography before and after basal chordal cutting. After sacrifice, the MV apparatus were harvested for anatomical and histological post-mortem explorations to confirm the section of the chordae. In vitro, all chordae were completely cut after a mean procedure duration of 5.5 ± 2.5 min. The procedure duration was found to increase linearly with the chordae diameter. In vivo, the central basal chordae of the anterior leaflet were completely cut. The mean procedure duration was 20 ± 9 min (min = 14, max = 26). The sectioned chordae were visible on echocardiography, and MV coaptation remained normal with no significant mitral regurgitation. Anatomical and histological post-mortem explorations of the hearts confirmed the section of the chordae. Histotripsy guided by 3D echo succeeded in cutting MV chordae in vitro and in vivo in the beating heart. We hope that this technique will

  1. SU-C-201-04: Noise and Temporal Resolution in a Near Real-Time 3D Dosimeter

    Energy Technology Data Exchange (ETDEWEB)

    Rilling, M [Department of physics, engineering physics and optics, Universite Laval, Quebec City, QC (Canada); Centre de recherche sur le cancer, Universite Laval, Quebec City, QC (Canada); Radiation oncology department, CHU de Quebec, Quebec City, QC (Canada); Center for optics, photonics and lasers, Universite Laval, Quebec City, Quebec (Canada); Goulet, M [Radiation oncology department, CHU de Quebec, Quebec City, QC (Canada); Beaulieu, L; Archambault, L [Department of physics, engineering physics and optics, Universite Laval, Quebec City, QC (Canada); Centre de recherche sur le cancer, Universite Laval, Quebec City, QC (Canada); Radiation oncology department, CHU de Quebec, Quebec City, QC (Canada); Thibault, S [Center for optics, photonics and lasers, Universite Laval, Quebec City, Quebec (Canada)

    2016-06-15

Purpose: To characterize the performance of a real-time three-dimensional scintillation dosimeter in terms of signal-to-noise ratio (SNR) and temporal resolution of 3D dose measurements. This study quantifies its efficiency in measuring low dose levels characteristic of EBRT dynamic treatments, and in reproducing field profiles for varying multileaf collimator (MLC) speeds. Methods: The dosimeter prototype uses a plenoptic camera to acquire continuous images of the light field emitted by a 10×10×10 cm³ plastic scintillator. Using EPID acquisitions, ray tracing-based iterative tomographic algorithms allow millimeter-sized reconstruction of relative 3D dose distributions. Measurements were taken at 6 MV, 400 MU/min with the scintillator centered at the isocenter, first receiving doses from 1.4 to 30.6 cGy. Dynamic measurements were then performed by closing half of the MLCs at speeds of 0.67 to 2.5 cm/s, at 0° and 90° collimator angles. A reference static half-field was obtained for measured profile comparison. Results: The SNR steadily increases as a function of dose and reaches a clinically adequate plateau of 80 at 10 cGy. Below this, the decrease in light collected and increase in pixel noise diminishes the SNR; nonetheless, the EPID acquisitions and the voxel correlation employed in the reconstruction algorithms result in suitable SNR values (>75) even at low doses. For dynamic measurements at varying MLC speeds, central relative dose profiles are characterized by gradients at %D₅₀ of 8.48 to 22.7 %/mm. These values converge towards the 32.8 %/mm gradient measured for the static reference field profile, but are limited by the dosimeter’s current acquisition rate of 1 Hz. Conclusion: This study emphasizes the efficiency of the 3D dose distribution reconstructions, while identifying limits of the current prototype’s temporal resolution in terms of dynamic EBRT parameters. This work paves the way for providing an optimized, second
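The SNR figure of merit used here is, in its simplest form, the mean of repeated voxel readings divided by their standard deviation. A sketch with invented readings:

```python
import math

def snr(readings):
    """Signal-to-noise ratio of repeated readings of one voxel:
    mean signal divided by its sample standard deviation."""
    n = len(readings)
    mean = sum(readings) / n
    var = sum((r - mean) ** 2 for r in readings) / (n - 1)
    return mean / math.sqrt(var)

# Hypothetical repeated light-output readings of a single voxel (a.u.).
readings = [100.0, 102.0, 98.0, 101.0, 99.0]
print(round(snr(readings), 1))  # 63.2
```

Because the noise scales sublinearly with the collected light while the signal scales linearly, the SNR grows with delivered dose until it plateaus, consistent with the 80-at-10-cGy plateau reported above.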

  2. SU-C-201-04: Noise and Temporal Resolution in a Near Real-Time 3D Dosimeter

    International Nuclear Information System (INIS)

    Rilling, M; Goulet, M; Beaulieu, L; Archambault, L; Thibault, S

    2016-01-01

Purpose: To characterize the performance of a real-time three-dimensional scintillation dosimeter in terms of signal-to-noise ratio (SNR) and temporal resolution of 3D dose measurements. This study quantifies its efficiency in measuring low dose levels characteristic of EBRT dynamic treatments, and in reproducing field profiles for varying multileaf collimator (MLC) speeds. Methods: The dosimeter prototype uses a plenoptic camera to acquire continuous images of the light field emitted by a 10×10×10 cm³ plastic scintillator. Using EPID acquisitions, ray tracing-based iterative tomographic algorithms allow millimeter-sized reconstruction of relative 3D dose distributions. Measurements were taken at 6 MV, 400 MU/min with the scintillator centered at the isocenter, first receiving doses from 1.4 to 30.6 cGy. Dynamic measurements were then performed by closing half of the MLCs at speeds of 0.67 to 2.5 cm/s, at 0° and 90° collimator angles. A reference static half-field was obtained for measured profile comparison. Results: The SNR steadily increases as a function of dose and reaches a clinically adequate plateau of 80 at 10 cGy. Below this, the decrease in light collected and increase in pixel noise diminishes the SNR; nonetheless, the EPID acquisitions and the voxel correlation employed in the reconstruction algorithms result in suitable SNR values (>75) even at low doses. For dynamic measurements at varying MLC speeds, central relative dose profiles are characterized by gradients at %D₅₀ of 8.48 to 22.7 %/mm. These values converge towards the 32.8 %/mm gradient measured for the static reference field profile, but are limited by the dosimeter’s current acquisition rate of 1 Hz. Conclusion: This study emphasizes the efficiency of the 3D dose distribution reconstructions, while identifying limits of the current prototype’s temporal resolution in terms of dynamic EBRT parameters. This work paves the way for providing an optimized, second-generational real-time 3D

  3. Automated real-time search and analysis algorithms for a non-contact 3D profiling system

    Science.gov (United States)

    Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.

    2013-04-01

The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical to the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron-resolution surface profiling. Optimizations in the control and sensory system allow data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high-resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time
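Matching a geometrical feature template against a measured profile can be sketched as a sliding-window sum-of-squared-differences (SSD) search. The paper refines candidate positions with downhill simplex; for clarity this sketch uses an exhaustive scan over an invented 1D wire profile:

```python
def find_feature(profile, template):
    """Locate a geometric feature template in a surface profile by
    minimising the SSD over every candidate offset. Returns the best
    offset and its SSD. (The paper refines candidates with downhill
    simplex; an exhaustive grid scan is used here for clarity.)"""
    best_offset, best_ssd = None, float("inf")
    for off in range(len(profile) - len(template) + 1):
        ssd = sum((profile[off + i] - t) ** 2 for i, t in enumerate(template))
        if ssd < best_ssd:
            best_offset, best_ssd = off, ssd
    return best_offset, best_ssd

# Hypothetical wire profile with an indent feature starting at index 4.
profile = [0.0, 0.0, 0.1, 0.0, -0.5, -1.0, -0.5, 0.0, 0.1, 0.0]
template = [-0.5, -1.0, -0.5]
print(find_feature(profile, template)[0])  # 4
```

A simplex refinement becomes worthwhile when the template has continuous parameters (sub-sample offset, scale, rotation) that a grid scan over raw indices cannot resolve.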

  4. High-precision real-time 3D shape measurement based on a quad-camera system

    Science.gov (United States)

    Tao, Tianyang; Chen, Qian; Feng, Shijie; Hu, Yan; Zhang, Minliang; Zuo, Chao

    2018-01-01

Phase-shifting profilometry (PSP) based 3D shape measurement is well established in various applications due to its high accuracy, simple implementation, and robustness to environmental illumination and surface texture. In PSP, higher depth resolution generally requires higher fringe density in the projected patterns, which, in turn, leads to severe phase ambiguities that must be resolved with additional information from phase coding and/or geometric constraints. However, in order to guarantee the reliability of phase unwrapping, available techniques are usually accompanied by an increased number of patterns, reduced fringe amplitude, and complicated post-processing algorithms. In this work, we demonstrate that by using a quad-camera multi-view fringe projection system and carefully arranging the relative spatial positions between the cameras and the projector, it becomes possible to completely eliminate the phase ambiguities in conventional high-fringe-density three-step PSP patterns without projecting any additional patterns or embedding any auxiliary signals. Benefiting from the position-optimized quad-camera system, stereo phase unwrapping can be efficiently and reliably performed by flexible phase consistency checks. In addition, the redundant information of multiple phase consistency checks is fully used through a weighted phase difference scheme to further enhance the reliability of phase unwrapping. This paper explains the 3D measurement principle and the basic design of the quad-camera system, and finally demonstrates that in a large measurement volume of 200 mm × 200 mm × 400 mm, the resultant dynamic 3D sensing system can realize real-time 3D reconstruction at 60 frames per second with a depth precision of 50 μm.
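The three-step PSP patterns mentioned yield a wrapped phase at each pixel through the standard arctangent formula; a sketch on one synthetic pixel (the background, modulation and phase values are invented):

```python
import math

def wrapped_phase(i1, i2, i3):
    """Wrapped phase from three fringe intensities captured with phase
    shifts of -2*pi/3, 0, +2*pi/3 (standard three-step formula)."""
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic pixel: background A, modulation B, true phase phi.
A, B, phi = 2.0, 1.0, 0.5
shifts = (-2.0 * math.pi / 3.0, 0.0, 2.0 * math.pi / 3.0)
i1, i2, i3 = (A + B * math.cos(phi + d) for d in shifts)
print(round(wrapped_phase(i1, i2, i3), 3))  # 0.5
```

The formula only recovers the phase modulo 2π; resolving the resulting ambiguities without extra patterns is precisely what the quad-camera geometric consistency checks in this paper provide.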

  5. Virtual Reality Cerebral Aneurysm Clipping Simulation With Real-time Haptic Feedback

    Science.gov (United States)

    Alaraj, Ali; Luciano, Cristian J.; Bailey, Daniel P.; Elsenousi, Abdussalam; Roitberg, Ben Z.; Bernardo, Antonio; Banerjee, P. Pat; Charbel, Fady T.

    2014-01-01

    Background With the decrease in the number of cerebral aneurysms treated surgically and the increase of complexity of those treated surgically, there is a need for simulation-based tools to teach future neurosurgeons the operative techniques of aneurysm clipping. Objective To develop and evaluate the usefulness of a new haptic-based virtual reality (VR) simulator in the training of neurosurgical residents. Methods A real-time sensory haptic feedback virtual reality aneurysm clipping simulator was developed using the Immersive Touch platform. A prototype middle cerebral artery aneurysm simulation was created from a computed tomography angiogram. Aneurysm and vessel volume deformation and haptic feedback are provided in a 3-D immersive VR environment. Intraoperative aneurysm rupture was also simulated. Seventeen neurosurgery residents from three residency programs tested the simulator and provided feedback on its usefulness and resemblance to real aneurysm clipping surgery. Results Residents felt that the simulation would be useful in preparing for real-life surgery. About two thirds of the residents felt that the 3-D immersive anatomical details provided a very close resemblance to real operative anatomy and accurate guidance for deciding surgical approaches. They believed the simulation is useful for preoperative surgical rehearsal and neurosurgical training. One third of the residents felt that the technology in its current form provided very realistic haptic feedback for aneurysm surgery. Conclusion Neurosurgical residents felt that the novel immersive VR simulator is helpful in their training especially since they do not get a chance to perform aneurysm clippings until very late in their residency programs. PMID:25599200

  6. Real-time deformation of human soft tissues: A radial basis meshless 3D model based on Marquardt's algorithm.

    Science.gov (United States)

    Zhou, Jianyong; Luo, Zu; Li, Chunquan; Deng, Mi

    2018-01-01

    When the meshless method is used to establish the mathematical-mechanical model of human soft tissues, it is necessary to define the space occupied by human tissues as the problem domain and the boundary of the domain as the surface of those tissues. Nodes should be distributed both in the problem domain and on its boundaries. Under external force, the displacement of each node is computed by the meshless method to represent the deformation of biological soft tissues. However, computation by the meshless method consumes too much time, which affects the simulation of real-time deformation of human tissues in virtual surgery. In this article, Marquardt's algorithm is proposed to fit the nodal displacements at the problem domain's boundary and obtain the relationship between surface deformation and force. When different external forces are applied, the deformation of soft tissues can then be obtained quickly from this relationship. The analysis and discussion show that the improved model equations with Marquardt's algorithm not only simulate the deformation in real time but also preserve the authenticity of the deformation model's physical properties.
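
    Marquardt's algorithm is better known as Levenberg-Marquardt nonlinear least-squares fitting. A minimal sketch of the idea, fitting a displacement-versus-force relationship offline so that deformation can later be evaluated quickly at run time, might look like the following; the exponential displacement law and all parameter values are assumptions for illustration, not the paper's model.

```python
import numpy as np
from scipy.optimize import curve_fit

def surface_displacement(force, a, b):
    """Hypothetical nonlinear force-displacement law for one boundary node."""
    return a * (1.0 - np.exp(-b * force))

# Displacements a meshless solver might produce offline (synthetic here)
forces = np.linspace(0.0, 5.0, 50)
rng = np.random.default_rng(0)
disp = surface_displacement(forces, 2.0, 0.8) + rng.normal(0.0, 0.01, forces.size)

# Levenberg-Marquardt fit (curve_fit's default method for unconstrained problems)
params, _ = curve_fit(surface_displacement, forces, disp, p0=[1.0, 1.0])
a_fit, b_fit = params
```

    Once fitted, evaluating `surface_displacement(F, a_fit, b_fit)` for a new force is a closed-form expression, which is what makes the real-time response possible.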

  7. 3D tumor localization through real-time volumetric x-ray imaging for lung cancer radiotherapy.

    Science.gov (United States)

    Li, Ruijiang; Lewis, John H; Jia, Xun; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Song, William Y; Jiang, Steve B

    2011-05-01

    To evaluate an algorithm for real-time 3D tumor localization from a single x-ray projection image for lung cancer radiotherapy. Recently, we have developed an algorithm for reconstructing volumetric images and extracting 3D tumor motion information from a single x-ray projection [Li et al., Med. Phys. 37, 2822-2826 (2010)]. We have demonstrated its feasibility using a digital respiratory phantom with regular breathing patterns. In this work, we present a detailed description and a comprehensive evaluation of the improved algorithm. The algorithm was improved by incorporating respiratory motion prediction. The accuracy and efficiency of using this algorithm for 3D tumor localization were then evaluated on (1) a digital respiratory phantom, (2) a physical respiratory phantom, and (3) five lung cancer patients. These evaluation cases include both regular and irregular breathing patterns that are different from the training dataset. For the digital respiratory phantom with regular and irregular breathing, the average 3D tumor localization error is less than 1 mm and does not appear to be affected by amplitude change, period change, or baseline shift. On an NVIDIA Tesla C1060 graphics processing unit (GPU) card, the average computation time for 3D tumor localization from each projection ranges between 0.19 and 0.26 s, for both regular and irregular breathing, which is about a 10% improvement over previously reported results. For the physical respiratory phantom, an average tumor localization error below 1 mm was achieved with an average computation time of 0.13 and 0.16 s on the same GPU card, for regular and irregular breathing, respectively. For the five lung cancer patients, the average tumor localization error is below 2 mm in both the axial and tangential directions. The average computation time on the same GPU card ranges between 0.26 and 0.34 s. Through a comprehensive evaluation of our algorithm, we have established its accuracy in 3D

  8. Interactive Mapping on Virtual Terrain Models Using RIMS (Real-time, Interactive Mapping System)

    Science.gov (United States)

    Bernardin, T.; Cowgill, E.; Gold, R. D.; Hamann, B.; Kreylos, O.; Schmitt, A.

    2006-12-01

    Recent and ongoing space missions are yielding new multispectral data for the surfaces of Earth and other planets at unprecedented rates and spatial resolution. With their high spatial resolution and widespread coverage, these data have opened new frontiers in observational Earth and planetary science. But they have also precipitated an acute need for new analytical techniques. To address this problem, we have developed RIMS, a Real-time, Interactive Mapping System that allows scientists to visualize, interact with, and map directly on, three-dimensional (3D) displays of georeferenced texture data, such as multispectral satellite imagery, that is draped over a surface representation derived from digital elevation data. The system uses a quadtree-based multiresolution method to render in real time high-resolution (3 to 10 m/pixel) data over large (800 km by 800 km) spatial areas. It allows users to map inside this interactive environment by generating georeferenced and attributed vector-based elements that are draped over the topography. We explain the technique using 15 m ASTER stereo-data from Iraq, P.R. China, and other remote locations because our particular motivation is to develop a technique that permits the detailed (10 m to 1000 m) neotectonic mapping over large (100 km to 1000 km long) active fault systems that is needed to better understand active continental deformation on Earth. RIMS also includes a virtual geologic compass that allows users to fit a plane to geologic surfaces and thereby measure their orientations. It also includes tools that allow 3D surface reconstruction of deformed and partially eroded surfaces such as folded bedding planes. These georeferenced map and measurement data can be exported to, or imported from, a standard GIS (geographic information systems) file format. Our interactive, 3D visualization and analysis system is designed for those who study planetary surfaces, including neotectonic geologists, geomorphologists, marine
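
    The virtual geologic compass described above amounts to fitting a plane to user-picked surface points and reading off its orientation. A minimal sketch of that measurement (synthetic bedding-plane picks; the 30-degree dip and the noise level are assumed for the example, and this is not the RIMS implementation):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points; returns centroid and unit normal.
    The normal is the smallest right singular vector of the centered points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

def dip_angle_deg(normal):
    """Dip: angle between the fitted plane and horizontal (z taken as up)."""
    nz = abs(normal[2])
    return np.degrees(np.arccos(np.clip(nz, -1.0, 1.0)))

# Synthetic picks on a bedding plane dipping 30 degrees along x
rng = np.random.default_rng(1)
xy = rng.uniform(-10.0, 10.0, size=(200, 2))
z = np.tan(np.radians(30.0)) * xy[:, 0] + rng.normal(0.0, 0.05, 200)
picks = np.column_stack([xy, z])
centroid, normal = fit_plane(picks)
```

    Strike follows the same way, from the horizontal component of the fitted normal; both are ordinary outputs of a field compass that the virtual tool reproduces.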

  9. Toward real-time virtual biopsy of oral lesions using confocal laser endomicroscopy interfaced with embedded computing.

    Science.gov (United States)

    Thong, Patricia S P; Tandjung, Stephanus S; Movania, Muhammad Mobeen; Chiew, Wei-Ming; Olivo, Malini; Bhuvaneswari, Ramaswamy; Seah, Hock-Soon; Lin, Feng; Qian, Kemao; Soo, Khee-Chee

    2012-05-01

    Oral lesions are conventionally diagnosed using white light endoscopy and histopathology. This can pose a challenge because the lesions may be difficult to visualise under white light illumination. Confocal laser endomicroscopy can be used for confocal fluorescence imaging of surface and subsurface cellular and tissue structures. To move toward real-time "virtual" biopsy of oral lesions, we interfaced an embedded computing system to a confocal laser endomicroscope to achieve a prototype three-dimensional (3-D) fluorescence imaging system. A field-programmable gate array computing platform was programmed to synchronize cross-sectional image grabbing and Z-depth scanning, automate the acquisition of confocal image stacks and perform volume rendering. Fluorescence imaging of the human and murine oral cavities was carried out using the fluorescent dyes fluorescein sodium and hypericin. Volume renderings of cellular and tissue structures demonstrate the potential of the system for real-time 3-D fluorescence visualization of the oral cavity. We aim toward achieving a real-time virtual biopsy technique that can complement current diagnostic techniques and aid in targeted biopsy for better clinical outcomes.

  10. Helicopter Flight Test of a Compact, Real-Time 3-D Flash Lidar for Imaging Hazardous Terrain During Planetary Landing

    Science.gov (United States)

    Roback, VIncent E.; Amzajerdian, Farzin; Brewster, Paul F.; Barnes, Bruce W.; Kempton, Kevin S.; Reisse, Robert A.; Bulyshev, Alexander E.

    2013-01-01

    A second generation, compact, real-time, air-cooled 3-D imaging Flash Lidar sensor system, developed from a number of cutting-edge components from industry and NASA, is lab characterized and helicopter flight tested under the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) project. The ALHAT project is seeking to develop a guidance, navigation, and control (GN&C) and sensing system based on lidar technology capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The Flash Lidar incorporates a 3-D imaging video camera based on Indium-Gallium-Arsenide Avalanche Photo Diode and novel micro-electronic technology for a 128 x 128 pixel array operating at a video rate of 20 Hz, a high-pulse-energy 1.06 µm neodymium-doped yttrium aluminum garnet (Nd:YAG) laser, a remote laser safety termination system, high-performance transmitter and receiver optics with one-degree and five-degree fields of view (FOV), enhanced onboard thermal control, as well as a compact and self-contained suite of support electronics housed in a single box and built around a PC-104 architecture to enable autonomous operations. The Flash Lidar was developed and then characterized at two NASA-Langley Research Center (LaRC) outdoor laser test range facilities both statically and dynamically, integrated with other ALHAT GN&C subsystems from partner organizations, and installed onto a Bell UH-1H Iroquois "Huey" helicopter at LaRC. The integrated system was flight tested at the NASA-Kennedy Space Center (KSC) on simulated lunar approach to a custom hazard field consisting of rocks, craters, hazardous slopes, and safe-sites near the Shuttle Landing Facility runway starting at slant ranges of 750 m. In order to evaluate different methods of achieving hazard detection, the lidar, in conjunction with the ALHAT hazard detection and GN&C system, operates in both a narrow 1 deg FOV raster

  11. Embryonic staging using a 3D virtual reality system

    NARCIS (Netherlands)

    C.M. Verwoerd-Dikkeboom (Christine); A.H.J. Koning (Anton); P.J. van der Spek (Peter); N. Exalto (Niek); R.P.M. Steegers-Theunissen (Régine)

    2008-01-01

    textabstractBACKGROUND: The aim of this study was to demonstrate that Carnegie Stages could be assigned to embryos visualized with a 3D virtual reality system. METHODS: We analysed 48 3D ultrasound scans of 19 IVF/ICSI pregnancies at 7-10 weeks' gestation. These datasets were visualized as 3D

  12. 3D imaging, 3D printing and 3D virtual planning in endodontics.

    Science.gov (United States)

    Shah, Pratik; Chong, B S

    2018-03-01

    The adoption and adaptation of recent advances in digital technology, such as three-dimensional (3D) printed objects and haptic simulators, in dentistry have influenced teaching and/or management of cases involving implant, craniofacial, maxillofacial, orthognathic and periodontal treatments. 3D printed models and guides may help operators plan and tackle complicated non-surgical and surgical endodontic treatment and may aid skill acquisition. Haptic simulators may assist in the development of competency in endodontic procedures through the acquisition of psycho-motor skills. This review explores and discusses the potential applications of 3D printed models and guides, and haptic simulators in the teaching and management of endodontic procedures. An understanding of the pertinent technology related to the production of 3D printed objects and the operation of haptic simulators is also presented.

  13. Portable high-intensity focused ultrasound system with 3D electronic steering, real-time cavitation monitoring, and 3D image reconstruction algorithms: a preclinical study in pigs

    International Nuclear Information System (INIS)

    Choi, Jin Woo; Lee, Jae Young; Hwang, Eui Jin; Hwang, In Pyeong; Woo, Sung Min; Lee, Chang Joo; Park, Eun Joo; Choi, Byung Ihn

    2014-01-01

    The aim of this study was to evaluate the safety and accuracy of a new portable ultrasonography-guided high-intensity focused ultrasound (USg-HIFU) system with a 3-dimensional (3D) electronic steering transducer, a simultaneous ablation and imaging module, real-time cavitation monitoring, and 3D image reconstruction algorithms. To address the accuracy of the transducer, hydrophones in a water chamber were used to assess the generation of sonic fields. An animal study was also performed in five pigs by ablating in vivo thighs by single-point sonication (n=10) or volume sonication (n=10) and ex vivo kidneys by single-point sonication (n=10). Histological and statistical analyses were performed. In the hydrophone study, peak voltages were detected within 1.0 mm from the targets on the y- and z-axes and within 2.0-mm intervals along the x-axis (z-axis, direction of ultrasound propagation; y- and x-axes, perpendicular to the direction of ultrasound propagation). Twenty-nine of 30 HIFU sessions successfully created ablations at the target. The in vivo porcine thigh study showed only a small discrepancy (width, 0.5-1.1 mm; length, 3.0 mm) between the planning ultrasonograms and the pathological specimens. Inordinate thermal damage was not observed in the adjacent tissues or sonic pathways in the in vivo thigh and ex vivo kidney studies. Our study suggests that this new USg-HIFU system may be a safe and accurate technique for ablating soft tissues and encapsulated organs.

  14. Portable high-intensity focused ultrasound system with 3D electronic steering, real-time cavitation monitoring, and 3D image reconstruction algorithms: a preclinical study in pigs

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jin Woo; Lee, Jae Young; Hwang, Eui Jin; Hwang, In Pyeong; Woo, Sung Min; Lee, Chang Joo; Park, Eun Joo; Choi, Byung Ihn [Dept. of Radiology and Institute of Radiation Medicine, Seoul National University Hospital, Seoul (Korea, Republic of)

    2014-10-15

    The aim of this study was to evaluate the safety and accuracy of a new portable ultrasonography-guided high-intensity focused ultrasound (USg-HIFU) system with a 3-dimensional (3D) electronic steering transducer, a simultaneous ablation and imaging module, real-time cavitation monitoring, and 3D image reconstruction algorithms. To address the accuracy of the transducer, hydrophones in a water chamber were used to assess the generation of sonic fields. An animal study was also performed in five pigs by ablating in vivo thighs by single-point sonication (n=10) or volume sonication (n=10) and ex vivo kidneys by single-point sonication (n=10). Histological and statistical analyses were performed. In the hydrophone study, peak voltages were detected within 1.0 mm from the targets on the y- and z-axes and within 2.0-mm intervals along the x-axis (z-axis, direction of ultrasound propagation; y- and x-axes, perpendicular to the direction of ultrasound propagation). Twenty-nine of 30 HIFU sessions successfully created ablations at the target. The in vivo porcine thigh study showed only a small discrepancy (width, 0.5-1.1 mm; length, 3.0 mm) between the planning ultrasonograms and the pathological specimens. Inordinate thermal damage was not observed in the adjacent tissues or sonic pathways in the in vivo thigh and ex vivo kidney studies. Our study suggests that this new USg-HIFU system may be a safe and accurate technique for ablating soft tissues and encapsulated organs.

  15. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    Science.gov (United States)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

    To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. 
The authors have
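
    The SR model described above approximates a target point cloud as a sparse linear combination of training clouds. A minimal sketch of such a sparse regression, using plain ISTA (iterative soft thresholding) rather than the authors' solver, on synthetic data:

```python
import numpy as np

def ista_sparse_regression(D, y, lam=0.05, iters=500):
    """Recover sparse weights w minimizing 0.5*||D w - y||^2 + lam*||w||_1
    via ISTA: a gradient step followed by soft thresholding."""
    step = 1.0 / (np.linalg.norm(D, 2) ** 2)  # 1 / Lipschitz constant
    w = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ w - y)
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w

# Dictionary: 20 flattened "training point clouds" of 300 coordinates each
rng = np.random.default_rng(2)
D = rng.normal(size=(300, 20))
w_true = np.zeros(20)
w_true[[3, 11]] = [0.7, 0.3]            # target is a sparse blend of two clouds
y = D @ w_true + rng.normal(0.0, 0.01, 300)
w = ista_sparse_regression(D, y)
```

    The MSR variant replaces the Gaussian residual assumption with a Laplacian prior on the ICP error, which corresponds to an additional sparse error term in the same objective.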

  16. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    International Nuclear Information System (INIS)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced

  17. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 (United States); Cheung, Yam [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas, 75390 and Department of Radiation Oncology, University of Maryland, College Park, Maryland 20742 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)

    2016-05-15

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced

  18. Induced tauopathy in a novel 3D-culture model mediates neurodegenerative processes: a real-time study on biochips.

    Directory of Open Access Journals (Sweden)

    Diana Seidel

    Tauopathies, including Alzheimer's disease, represent one of the major health problems of the aging population worldwide. Therefore, a better understanding of tau-dependent pathologies and, consequently, of tau-related intervention strategies is highly demanded. In recent years, several tau-focused therapies have been proposed with the aim to stop disease progression. However, to develop efficient active pharmaceutical ingredients for the broad treatment of Alzheimer's disease patients, further improvements are necessary for understanding the detailed neurodegenerative processes as well as the mechanisms and side effects of potential active pharmaceutical ingredients (API) in the neuronal system. In this context, there is a lack of suitable complex in vitro cell culture models recapitulating major aspects of tau-pathological degenerative processes in a sufficiently fast and reproducible manner. Herewith, we describe a novel 3D SH-SY5Y cell-based tauopathy model that shows advanced characteristics of matured neurons in comparison to monolayer cultures, without the need for artificial differentiation-promoting agents. Moreover, the recombinant expression of a novel highly pathologic fourfold-mutated human tau variant led to a fast and pronounced degeneration of neuritic processes. The neurodegenerative effects could be analyzed in real time and with high sensitivity using our unique microcavity array-based impedance spectroscopy measurement system. We were able to quantify a time- and concentration-dependent relative impedance decrease when Alzheimer's disease-like tau pathology was induced in the neuronal 3D cell culture model. In combination with the collected optical information, the degenerative processes within each 3D-culture could be monitored and analyzed. 
More strikingly, tau-specific regenerative effects caused by tau-focused active pharmaceutical ingredients could be quantitatively monitored by impedance spectroscopy.Bringing together our novel complex 3

  19. Implementation of a 3D Virtual Drummer

    NARCIS (Netherlands)

    Magnenat-Thalmann, M.; Kragtwijk, M.; Nijholt, Antinus; Thalmann, D.; Zwiers, Jakob

    2001-01-01

    We describe a system for the automatic generation of a 3D animation of a drummer playing along with a given piece of music. The input, consisting of a sound wave, is analysed to determine which drums are struck at what moments. The Standard MIDI File format is used to store the recognised notes.

  20. Dynamic 3D echocardiography in virtual reality.

    NARCIS (Netherlands)

    A.E. van den Bosch (Annemien); A.H.J. Koning (Anton); F.J. Meijboom (Folkert); J.S. Vletter-McGhie (Jackie); M.L. Simoons (Maarten); P.J. van der Spek (Peter); A.J.J.C. Bogers (Ad)

    2005-01-01

    textabstractBACKGROUND: This pilot study was performed to evaluate whether virtual reality is applicable for three-dimensional echocardiography and if three-dimensional echocardiographic 'holograms' have the potential to become a clinically useful tool. METHODS: Three-dimensional echocardiographic

  1. Real-time registration of 3D to 2D ultrasound images for image-guided prostate biopsy.

    Science.gov (United States)

    Gillies, Derek J; Gardi, Lori; De Silva, Tharindu; Zhao, Shuang-Ren; Fenster, Aaron

    2017-09-01

    During image-guided prostate biopsy, needles are targeted at tissues that are suspicious of cancer to obtain specimens for histological examination. Unfortunately, patient motion causes targeting errors when using an MR-transrectal ultrasound (TRUS) fusion approach to augment the conventional biopsy procedure. This study aims to develop an automatic motion correction algorithm approaching the frame rate of an ultrasound system to be used in fusion-based prostate biopsy systems. Two modes of operation have been investigated for the clinical implementation of the algorithm: motion compensation using a single user-initiated correction performed prior to biopsy, and real-time continuous motion compensation performed automatically as a background process. Retrospective 2D and 3D TRUS patient images acquired prior to biopsy gun firing were registered using an intensity-based algorithm utilizing normalized cross-correlation and Powell's method for optimization. 2D and 3D images were downsampled and cropped to estimate the optimal amount of image information that would perform registrations quickly and accurately. The optimal search order during optimization was also analyzed to avoid local optima in the search space. Error in the algorithm was computed using target registration errors (TREs) from manually identified homologous fiducials in a clinical patient dataset. The algorithm was evaluated for real-time performance under the two clinical implementation modes, user-initiated and continuous motion compensation, on a tissue-mimicking prostate phantom. After implementation in a TRUS-guided system with an image downsampling factor of 4, the proposed approach resulted in a mean ± std TRE and computation time of 1.6 ± 0.6 mm and 57 ± 20 ms, respectively. The user-initiated mode performed registrations for in-plane, out-of-plane, and roll motions with computation times of 108 ± 38 ms, 60 ± 23 ms, and 89 ± 27 ms, respectively, and corresponding
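
    The registration described above combines normalized cross-correlation (NCC) with Powell's derivative-free optimizer. A minimal 2D sketch on a synthetic image (translation only; the Gaussian test image and the applied shift are assumptions for illustration, not the clinical data):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def neg_ncc(params, fixed, moving):
    """Negative normalized cross-correlation after translating `moving`."""
    moved = nd_shift(moving, shift=params, order=1, mode="nearest")
    a = fixed - fixed.mean()
    b = moved - moved.mean()
    return -np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Synthetic image: a smooth blob; "moving" is the blob shifted by (3.5, -2.0)
y, x = np.mgrid[0:64, 0:64]
fixed = np.exp(-(((y - 32) / 10.0) ** 2 + ((x - 30) / 8.0) ** 2))
moving = nd_shift(fixed, shift=(3.5, -2.0), order=1, mode="nearest")

# Powell's method recovers the motion without gradients
res = minimize(neg_ncc, x0=[0.0, 0.0], args=(fixed, moving), method="Powell")
dy, dx = res.x
```

    The recovered shift is the inverse of the applied one, i.e. the correction that realigns the moving image with the fixed image; the paper's downsampling and cropping serve to keep each such optimization within the ultrasound frame budget.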

  2. Computational hologram synthesis and representation on spatial light modulators for real-time 3D holographic imaging

    International Nuclear Information System (INIS)

    Reichelt, Stephan; Leister, Norbert

    2013-01-01

    In dynamic computer-generated holography that utilizes spatial light modulators (SLMs), both hologram synthesis and hologram representation are essential in terms of fast computation and high reconstruction quality. For hologram synthesis, i.e. the computation step, Fresnel-transform-based or point-source-based ray-tracing methods can be applied. In the encoding step, the complex wave-field has to be optimally represented by the SLM with its given modulation capability. Proper hologram reconstruction implies simultaneous and independent amplitude and phase modulation of the input wave-field by the SLM. In this paper, we discuss methods for representing full complex holograms on SLMs, considering the influence of inherent SLM parameters, such as modulation type and bit depth, on reconstruction performance measures such as diffraction efficiency and SNR. We review three implementation schemes: the Burckhardt amplitude-only representation, the phase-only macro-pixel representation, and the two-phase interference representation. Besides the optical performance, we address their hardware complexity and required computational load. Finally, we experimentally demonstrate holographic reconstructions of the different representation schemes as obtained by functional prototypes utilizing SeeReal's viewing-window holographic display technology. The proposed hardware implementations enable fast encoding of complex-valued hologram data and thus will pave the way for commercial real-time holographic 3D imaging in the near future.
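
    The Burckhardt representation mentioned above encodes a complex hologram value as three non-negative amplitudes along phasors 120° apart (at most two nonzero per value). A sketch of that decomposition, illustrative only and not SeeReal's implementation:

```python
import numpy as np

def burckhardt_components(c):
    """Decompose complex values into three non-negative amplitudes along
    phasors at 0, 120 and 240 degrees; each value uses the two phasors
    bracketing its phase, so at most two components are nonzero."""
    thetas = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
    comps = np.zeros(c.shape + (3,))
    phase = np.mod(np.angle(c), 2 * np.pi)
    sector = (phase // (2 * np.pi / 3)).astype(int) % 3
    for k in range(3):
        mask = sector == k
        if not np.any(mask):
            continue
        u, v = np.exp(1j * thetas[k]), np.exp(1j * thetas[(k + 1) % 3])
        # Solve a*u + b*v = c as a real 2x2 linear system per value
        M = np.array([[u.real, v.real], [u.imag, v.imag]])
        ab = np.linalg.solve(M, np.stack([c[mask].real, c[mask].imag]))
        comps[mask, k] = ab[0]
        comps[mask, (k + 1) % 3] = ab[1]
    return comps

# Verify: recombining the three amplitudes reproduces the complex values
c = np.array([1 + 1j, -0.5 + 0.2j, 0.3 - 0.9j])
comps = burckhardt_components(c)
recon = comps @ np.exp(1j * np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3]))
```

    Because the three amplitudes are non-negative, they can be written directly to amplitude-only SLM pixels, which is the appeal of this scheme despite its reduced diffraction efficiency.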

  3. A Real-Time Magnetoencephalography Brain-Computer Interface Using Interactive 3D Visualization and the Hadoop Ecosystem

    Directory of Open Access Journals (Sweden)

    Wilbert A. McClay

    2015-09-01

    Full Text Available Ecumenically, the fastest growing segment of Big Data is human biology-related data, with annual data creation on the order of zettabytes. The implications are global across industries, of which the treatment of brain-related illnesses and trauma could see the most significant and immediate effects. The next generation of health care IT and sensory devices are acquiring and storing massive amounts of patient-related data. An innovative Brain-Computer Interface (BCI) for interactive 3D visualization is presented utilizing the Hadoop Ecosystem for data analysis and storage. The BCI is an implementation of Bayesian factor analysis algorithms that can distinguish distinct thought actions using magnetoencephalographic (MEG) brain signals. We have collected data on five subjects, yielding 90% positive performance in MEG mid- and post-movement activity. We describe a driver that substitutes the actions of the BCI for mouse button presses for real-time use in visual simulations. This process has been added into a flight visualization demonstration: by thinking left or right, the user experiences the aircraft turning in the chosen direction. The driver components of the BCI can be compiled into any software to substitute a user's intent for specific keyboard strikes or mouse button presses. The BCI's data analytics of a subject's MEG brainwaves and flight visualization performance are stored and analyzed using the Hadoop Ecosystem as a quick-retrieval data warehouse.

  4. NASA's "Eyes On The Solar System:" A Real-time, 3D-Interactive Tool to Teach the Wonder of Planetary Science

    Science.gov (United States)

    Hussey, K.

    2014-12-01

    NASA's Jet Propulsion Laboratory is using video game technology to immerse students, the general public and mission personnel in our solar system and beyond. "Eyes on the Solar System" (eyes.nasa.gov), a cross-platform, real-time, 3D-interactive application that can run on-line or as a stand-alone "video game," is of particular interest to educators looking for inviting tools to capture students' interest in a format they like and understand. It gives users an extraordinary view of our solar system by virtually transporting them across space and time to make first-person observations of spacecraft, planetary bodies and NASA/ESA missions in action. Key scientific results illustrated with video presentations, supporting imagery and web links are embedded contextually into the solar system. Educators who want an interactive, game-based approach to engage students in learning planetary science will see how "Eyes" can be effectively used to teach its principles to grades 3 through 14. The presentation will include a detailed demonstration of the software along with a description/demonstration of how this technology is being adapted for education. There will also be a preview of coming attractions. This work is being conducted by the Visualization Technology Applications and Development Group at NASA's Jet Propulsion Laboratory, the same team responsible for "Eyes on the Earth 3D" and "Eyes on Exoplanets," which can be viewed at eyes.nasa.gov/earth and eyes.nasa.gov/exoplanets.

  5. "Eyes On The Solar System": A Real-time, 3D-Interactive Tool to Teach the Wonder of Planetary Science

    Science.gov (United States)

    Hussey, K. J.

    2011-10-01

    NASA's Jet Propulsion Laboratory is using video game technology to immerse students, the general public and mission personnel in our solar system and beyond. "Eyes on the Solar System," a cross-platform, real-time, 3D-interactive application that runs inside a Web browser, was released worldwide late last year (solarsystem.nasa.gov/eyes). It gives users an extraordinary view of our solar system by virtually transporting them across space and time to make first-person observations of spacecraft and NASA/ESA missions in action. Key scientific results illustrated with video presentations and supporting imagery are embedded contextually into the solar system. The presentation will include a detailed demonstration of the software along with a description/discussion of how this technology can be adapted for education and public outreach, as well as a preview of coming attractions. This work is being conducted by the Visualization Technology Applications and Development Group at NASA's Jet Propulsion Laboratory, the same team responsible for "Eyes on the Earth 3D," which can be viewed at climate.nasa.gov/Eyes.html.

  6. Dynamic 3D echocardiography in virtual reality

    Directory of Open Access Journals (Sweden)

    Simoons Maarten L

    2005-12-01

    Full Text Available Abstract Background This pilot study was performed to evaluate whether virtual reality is applicable for three-dimensional echocardiography and whether three-dimensional echocardiographic 'holograms' have the potential to become a clinically useful tool. Methods Three-dimensional echocardiographic data sets from 2 normal subjects and from 4 patients with a mitral valve pathological condition were included in the study. The three-dimensional data sets were acquired with the Philips Sonos 7500 echo system and transferred to the BARCO (Barco N.V., Kortrijk, Belgium) I-Space. Ten independent observers assessed the 6 three-dimensional data sets with and without mitral valve pathology. After 10 minutes' instruction in the I-Space, all of the observers could use the virtual pointer that is necessary to create cut planes in the hologram. Results The 10 independent observers correctly assessed the normal and pathological mitral valves in the holograms (analysis time approximately 10 minutes). Conclusion This report shows that dynamic holographic imaging of three-dimensional echocardiographic data is feasible. However, the applicability and usefulness of this technology in clinical practice is still limited.

  7. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is essentially a computerized or digital model of a city containing graphic representations of buildings and other objects in 2.5D or 3D. Three main Geomatics approaches are generally used for generating virtual 3D city models: the first uses conventional techniques such as vector map data, DEMs, and aerial images; the second is based on high-resolution satellite images with laser scanning; and the third uses terrestrial images through close-range photogrammetry with DSM and texture mapping. This paper starts with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic, and manual methods), and another based on data input techniques (photogrammetry and laser techniques). After a detailed study, the paper presents the conclusions of this research, together with a short justification and analysis and the present trend in 3D city modeling. It gives an overview of the techniques for generating virtual 3D city models using Geomatics techniques, and of the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques plays a major role in creating a virtual 3D city model. Every technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3D city models. 
Photo-realistic, scalable, geo-referenced virtual 3

  8. An Overview on Base Real-Time Hard Shadow Techniques in Virtual Environments

    Directory of Open Access Journals (Sweden)

    Mohd Shahrizal Sunar

    2012-03-01

    Full Text Available Shadows are essential for creating realistic scenes in virtual environments, and the variety of shadow techniques motivated us to prepare an overview of all the basic ones. Shadow generation divides broadly into non-real-time and real-time techniques. Among the non-real-time techniques, ray tracing, ray casting, and radiosity are well known and are described in depth. Radiosity is implemented to create very realistic shadows in non-real-time scenes; although the traditional radiosity algorithm is difficult to implement, we have proposed a simple one whose pseudo-code is easier to understand and implement. Ray tracing is used to prevent collisions of moving objects. Projection shadows, shadow volumes, and shadow mapping are used to create real-time shadows in virtual environments. We have used projection shadows for static objects casting shadows on flat surfaces. Shadow volumes are used to create accurate shadows with sharp outlines. Shadow mapping, which underlies most recent techniques, is reconstructed; the reconstructed algorithm suggests some new ideas for proposing further algorithms based on shadow mapping.
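The projection-shadow technique mentioned above flattens geometry onto a receiving plane with a single 4x4 matrix. A minimal sketch of the standard planar shadow matrix for a point light, with illustrative plane and light values (not taken from the paper):

```python
import numpy as np

def shadow_matrix(plane, light):
    """4x4 matrix projecting points onto the plane ax+by+cz+d=0 as seen
    from a point light given in homogeneous coordinates (x, y, z, 1)."""
    plane = np.asarray(plane, dtype=float)
    light = np.asarray(light, dtype=float)
    return np.dot(plane, light) * np.eye(4) - np.outer(light, plane)

# illustrative scene: ground plane y = 0, light 10 units above the origin
plane = np.array([0.0, 1.0, 0.0, 0.0])
light = np.array([0.0, 10.0, 0.0, 1.0])

p = np.array([1.0, 2.0, 3.0, 1.0])      # a vertex to be shadowed
s = shadow_matrix(plane, light) @ p
s = s / s[3]                            # back from homogeneous coordinates
# s = (1.25, 0, 3.75, 1): the vertex projected along the light ray onto y = 0
```

In a real renderer this matrix is concatenated with the view-projection transform, and a small depth bias is usually added so the flattened geometry does not z-fight with the receiving surface.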

  9. Game-Like Language Learning in 3-D Virtual Environments

    Science.gov (United States)

    Berns, Anke; Gonzalez-Pardo, Antonio; Camacho, David

    2013-01-01

    This paper presents our recent experiences with the design of game-like applications in 3-D virtual environments as well as their impact on student motivation and learning. Our paper therefore starts with a brief analysis of the motivational aspects of video games and virtual worlds (VWs). We then go on to explore the possible benefits of both in the…

  10. Virtual reality myringotomy simulation with real-time deformation: development and validity testing.

    Science.gov (United States)

    Ho, Andrew K; Alsaffar, Hussain; Doyle, Philip C; Ladak, Hanif M; Agrawal, Sumit K

    2012-08-01

    Surgical simulation is becoming an increasingly common training tool in residency programs. The first objective was to implement real-time soft-tissue deformation and cutting into a virtual reality myringotomy simulator. The second objective was to test the various implemented incision algorithms to determine which most accurately represents the tympanic membrane during myringotomy. Descriptive and face-validity testing. A deformable tympanic membrane was developed, and three soft-tissue cutting algorithms were successfully implemented into the virtual reality myringotomy simulator. The algorithms included element removal, direction prediction, and Delaunay cutting. The simulator was stable and capable of running in real time on inexpensive hardware. A face-validity study was then carried out using a validated questionnaire given to eight otolaryngologists and four senior otolaryngology residents. Each participant was given an adaptation period on the simulator, was blinded to the algorithm being used, and was presented the three algorithms in a randomized order. A virtual reality myringotomy simulator with real-time soft-tissue deformation and cutting was successfully developed. The simulator was stable, ran in real time on inexpensive hardware, and incorporated haptic feedback and stereoscopic vision. The Delaunay cutting algorithm was found to be the most realistic algorithm for representing the incision during myringotomy. A virtual reality myringotomy simulator is being developed and now integrates a real-time deformable tympanic membrane that appears to have face validity. Further development and validation studies are necessary before the simulator can be studied with respect to training efficacy and clinical impact. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.

  11. A real-time vision-based hand gesture interaction system for virtual EAST

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K.R., E-mail: wangkr@mail.ustc.edu.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Xiao, B.J.; Xia, J.Y. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Li, Dan [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Luo, W.L. [709th Research Institute, Shipbuilding Industry Corporation (China)

    2016-11-15

    Highlights: • Hand gesture interaction is introduced to EAST model interaction for the first time. • We can interact with the EAST model using a bare hand and a web camera. • We can interact with the EAST model at a distance from the screen. • Interaction is free, direct and effective. - Abstract: The virtual Experimental Advanced Superconducting Tokamak device (VEAST) is a very complicated 3D model, for which traditional interaction devices are limited and inefficient. However, with the development of human-computer interaction (HCI), hand gesture interaction has become a popular choice in recent years. In this paper, we propose a real-time vision-based hand gesture interaction system for VEAST. Using one web camera, we can use our bare hand to interact with VEAST at a certain distance, which proves to be more efficient and direct than a mouse. The system is composed of four modules: initialization, hand gesture recognition, interaction control and system settings. The hand gesture recognition method is based on codebook (CB) background modeling and open finger counting. Firstly, we build a background model with the CB algorithm. Then, we segment the hand region by detecting skin color regions with an "elliptical boundary model" in the CbCr plane of the YCbCr color space. Open fingers, used as a key feature of gestures, can be tracked by an improved curvature-based method. Based on this method, we define nine gestures for interaction control of VEAST. Finally, we design a test to demonstrate the effectiveness of our system.
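The skin-color segmentation step described above can be illustrated with a toy elliptical boundary test in the CbCr plane. The ellipse centre, semi-axes and rotation below are illustrative assumptions, not the parameters used by the authors:

```python
import numpy as np

# Illustrative ellipse parameters in the CbCr plane; the actual values of the
# paper's elliptical boundary model are not given here.
CB_C, CR_C = 110.0, 153.0       # assumed ellipse centre
A, B = 22.0, 17.0               # assumed semi-axes
THETA = np.deg2rad(-30.0)       # assumed ellipse rotation

def skin_mask(cb, cr):
    """True where a pixel's (Cb, Cr) chroma falls inside the ellipse."""
    x = (cb - CB_C) * np.cos(THETA) + (cr - CR_C) * np.sin(THETA)
    y = -(cb - CB_C) * np.sin(THETA) + (cr - CR_C) * np.cos(THETA)
    return (x / A) ** 2 + (y / B) ** 2 <= 1.0

cb = np.array([110.0, 60.0])    # chroma of two sample pixels
cr = np.array([153.0, 60.0])
mask = skin_mask(cb, cr)        # the first pixel is "skin", the second is not
```

In a full pipeline the frame is first converted from RGB to YCbCr and the mask is combined with the codebook foreground mask before finger counting.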

  12. A real-time vision-based hand gesture interaction system for virtual EAST

    International Nuclear Information System (INIS)

    Wang, K.R.; Xiao, B.J.; Xia, J.Y.; Li, Dan; Luo, W.L.

    2016-01-01

    Highlights: • Hand gesture interaction is introduced to EAST model interaction for the first time. • We can interact with the EAST model using a bare hand and a web camera. • We can interact with the EAST model at a distance from the screen. • Interaction is free, direct and effective. - Abstract: The virtual Experimental Advanced Superconducting Tokamak device (VEAST) is a very complicated 3D model, for which traditional interaction devices are limited and inefficient. However, with the development of human-computer interaction (HCI), hand gesture interaction has become a popular choice in recent years. In this paper, we propose a real-time vision-based hand gesture interaction system for VEAST. Using one web camera, we can use our bare hand to interact with VEAST at a certain distance, which proves to be more efficient and direct than a mouse. The system is composed of four modules: initialization, hand gesture recognition, interaction control and system settings. The hand gesture recognition method is based on codebook (CB) background modeling and open finger counting. Firstly, we build a background model with the CB algorithm. Then, we segment the hand region by detecting skin color regions with an "elliptical boundary model" in the CbCr plane of the YCbCr color space. Open fingers, used as a key feature of gestures, can be tracked by an improved curvature-based method. Based on this method, we define nine gestures for interaction control of VEAST. Finally, we design a test to demonstrate the effectiveness of our system.

  13. Finite Element Methods for real-time Haptic Feedback of Soft-Tissue Models in Virtual Reality Simulators

    Science.gov (United States)

    Frank, Andreas O.; Twombly, I. Alexander; Barth, Timothy J.; Smith, Jeffrey D.; Dalton, Bonnie P. (Technical Monitor)

    2001-01-01

    We have applied the linear elastic finite element method to compute haptic force feedback and domain deformations of soft tissue models for use in virtual reality simulators. Our results show that, for virtual object models of high-resolution 3D data (>10,000 nodes), real-time haptic computations (>500 Hz) are not currently possible using traditional methods. Current research efforts are focused on the following areas: 1) efficient implementation of fully adaptive multi-resolution methods and 2) multi-resolution methods with specialized basis functions to capture the singularity at the haptic interface (point loading). To achieve real-time computation, we propose parallel processing of a Jacobi-preconditioned conjugate gradient method applied to a reduced system of equations resulting from surface domain decomposition. This can effectively be achieved using reconfigurable computing systems such as field programmable gate arrays (FPGAs), thereby providing a flexible solution that allows for new FPGA implementations as improved algorithms become available. The resulting soft tissue simulation system would meet NASA Virtual Glovebox requirements and, at the same time, provide a generalized simulation engine for any immersive environment application, such as biomedical/surgical procedures or interactive scientific applications.
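The Jacobi-preconditioned conjugate gradient solver proposed above can be sketched on a small symmetric positive-definite system standing in for a reduced stiffness matrix. This serial version only illustrates the numerics, not the FPGA parallelization:

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=500):
    """Conjugate gradient with Jacobi (diagonal) preconditioning for SPD A."""
    x = np.zeros_like(b)
    r = b - A @ x
    Minv = 1.0 / np.diag(A)          # Jacobi preconditioner: M = diag(A)
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r                 # apply preconditioner to new residual
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# small SPD test system standing in for a reduced FEM stiffness matrix
rng = np.random.default_rng(1)
Q = rng.random((20, 20))
A = Q @ Q.T + 20.0 * np.eye(20)
b = rng.random(20)
x = jacobi_pcg(A, b)
```

The diagonal preconditioner costs one elementwise multiply per iteration and, like the matrix-vector product, parallelizes trivially, which is what makes this combination attractive for hardware implementations.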

  14. Exploring the educational potential of 3D virtual environments

    Directory of Open Access Journals (Sweden)

    Francesc Marc ESTEVE MON

    2013-12-01

    Full Text Available 3D virtual environments are advanced technology systems with considerable potential for the teaching and learning process. In recent years, different institutions have promoted the acquisition of 21st-century skills: competences such as initiative, teamwork, creativity, flexibility and digital literacy. Multi-user virtual environments, sometimes called virtual worlds or 3D simulators, are immersive, interactive, customizable, accessible and programmable systems. This kind of environment allows the design of complex educational activities to develop these key competences. For this purpose it is necessary to set an appropriate teaching strategy to put this knowledge and these skills into action, and to design suitable mechanisms for registration and systematization. This paper analyzes the potential of these environments and presents two experiences in 3D virtual environments: (1) to develop teamwork and self-management skills, and (2) to assess digital literacy in preservice teachers.

  15. The value of applying nitroglycerin in 3D coronary MR angiography with real-time navigation technique

    International Nuclear Information System (INIS)

    Hackenbroch, M.; Meyer, C.; Schmiedel, A.; Hofer, U.; Flacke, S.; Kovacs, A.; Schild, H.; Sommer, T.; Tiemann, K.; Skowasch, D.

    2004-01-01

    Purpose: Nitroglycerin administration results in dilation of the epicardial coronary vessels and an increase in coronary blood flow, and has been suggested to improve MR coronary angiography. This study evaluates systematically whether administration of nitroglycerin improves the visualization of coronary arteries and, as a result, the detection of coronary artery stenosis during free-breathing 3D coronary MR angiography. Materials and Methods: Coronary MR angiography was performed in 44 patients with suspected coronary artery disease at a 1.5 Tesla system (Intera, Philips Medical Systems) (a) with and (b) without continuous administration of intravenous nitroglycerin at a dose rate of 2.5 mg/h, using an ECG-gated gradient echo sequence with real-time navigator correction (turbo field echo, in-plane resolution 0.70 x 0.79 mm², acquisition window 80 ms). Equivalent segments of the coronary arteries in the sequences with and without nitroglycerin were evaluated for visualized vessel length and diameter, qualitative assessment of visualization using a four-point grading scale, and detection of stenoses >50%. Catheter coronary angiography was used as the gold standard. Results: No significant differences were found between scans with and without nitroglycerin in the average contiguously visualized vessel length (p>0.05) or diameter (p>0.05). There was also no significant difference between coronary MR angiography with and without nitroglycerin in the average qualitative assessment score for visualization of the LM, proximal LAD, proximal CX, and proximal and distal RCA (2.1±0.8 and 2.2±0.7; p>0.05). Sensitivity (77% [17/22] vs. 82% [18/22], p>0.05) and specificity (72% [13/18] vs. 72% [13/18], p>0.05) for the detection of coronary artery stenosis also did not differ significantly between scans with and without intravenous administration of nitroglycerin. Conclusion: Administration of nitroglycerin does not improve visualization of the coronary arteries and

  16. 3D Virtual Dig: a 3D Application for Teaching Fieldwork in Archaeology

    Directory of Open Access Journals (Sweden)

    Paola Di Giuseppantonio Di Franco

    2012-12-01

    Full Text Available Archaeology is a material, embodied discipline; communicating this experience is critical to student success. In the context of lower-division archaeology courses, the present study examines the efficacy of 3D virtual and 2D archaeological representations of digs. This presentation aims to show a 3D application created to teach the archaeological excavation process to freshman students. An archaeological environment was virtually re-created in 3D and inserted into a virtual reality software application that allows users to work with the reconstructed excavation area. The software was tested in class for teaching the basics of archaeological fieldwork. The application interface is user-friendly and especially easy for 21st-century students. The study employed a pre-survey, post-test, and post-survey design, used to understand the students' previous familiarity with archaeology and to test their awareness after the use of the application. Their level of knowledge was then compared with that of students who had accessed written material only. This case study demonstrates how a digital approach to laboratory work can positively affect student learning. Increased ability to complete ill-defined problems (characteristic of higher-order thinking in the field) can, in fact, be demonstrated. 3D virtual reconstruction serves, then, as an important bridge from traditional coursework to fieldwork.

  17. Three-dimensional (3D) real-time conformal brachytherapy - a novel solution for prostate cancer treatment Part I. Rationale and method

    International Nuclear Information System (INIS)

    Fijalkowski, M.; Bialas, B.; Maciejewski, B.; Bystrzycka, J.; Slosarek, K.

    2005-01-01

    Recently, a system for conformal real-time high-dose-rate brachytherapy has been developed, dedicated primarily to the treatment of prostate cancer. The aim of this paper is to present the 3D-conformal real-time brachytherapy technique introduced into clinical practice at the Institute of Oncology in Gliwice. The equipment and technique of 3D-conformal real-time brachytherapy (3D-CBRT) are presented in detail and compared with conventional high-dose-rate brachytherapy. Step-by-step treatment-planning procedures are described, including our own modifications. The 3D-CBRT offers the following advantages: (1) on-line continuous visualization of the prostate and acquisition of a series of US images during the entire procedure of planning and treatment; (2) high precision in defining and contouring the target volume and the healthy organs at risk (urethra, rectum, bladder) based on 3D transrectal continuous ultrasound images; (3) interactive on-line dose optimization with real-time corrections of the dose-volume histograms (DVHs) until optimal dose distribution is achieved; (4) the possibility to overcome internal prostate motion and set-up inaccuracies by stable positioning of the prostate with needles fixed to the template; (5) significant shortening of overall treatment time; (6) cost reduction - the treatment can be provided as an outpatient procedure. The 3D real-time CBRT can be advertised as an ideal conformal boost dose technique integrated or interdigitated with pelvic conformal external beam radiotherapy, or as a monotherapy for prostate cancer. (author)

  18. 3D natural emulation design approach to virtual communities

    OpenAIRE

    DiPaola, Steve

    2010-01-01

    The design goal for OnLive’s Internet-based Virtual Community system was to develop avatars and virtual communities where the participants sense a tele-presence – that they are really there in the virtual space with other people. This collective sense of "being-there" does not happen over the phone or with teleconferencing; it is a new and emerging phenomenon, unique to 3D virtual communities. While this group presence paradigm is a simple idea, the design and technical issues needed to begin...

  19. Organizational Learning Goes Virtual?: A Study of Employees' Learning Achievement in Stereoscopic 3D Virtual Reality

    Science.gov (United States)

    Lau, Kung Wong

    2015-01-01

    Purpose: This study aims to deepen understanding of the use of stereoscopic 3D technology (stereo3D) in facilitating organizational learning. The emergence of advanced virtual technologies, in particular stereo3D virtual reality, has fundamentally changed the ways in which organizations train their employees. However, in academic or…

  20. Realistic terrain visualization based on 3D virtual world technology

    Science.gov (United States)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2010-11-01

    The rapid advances in information technologies, e.g., network, graphics processing, and virtual world, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments to help to engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, explores integration of realistic terrain and other geographic objects and phenomena of natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation of construction of a mirror world or a sand box model of the earth landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.

  1. Intelligent web agents for a 3D virtual community

    Science.gov (United States)

    Dave, T. M.; Zhang, Yanqing; Owen, G. S. S.; Sunderraman, Rajshekhar

    2003-08-01

    In this paper, we propose an Avatar-based intelligent agent technique for 3D Web-based virtual communities based on distributed artificial intelligence, intelligent agent techniques, and databases and knowledge bases in a digital library. One of the goals of this joint NSF (IIS-9980130) and ACM SIGGRAPH Education Committee (ASEC) project is to create a virtual community of educators and students who have a common interest in computer graphics, visualization, and interactive techniques. In this virtual community (ASEC World), avatars will represent the educators, students, and other visitors to the world. Intelligent agents represented as specially dressed avatars will be available to assist the visitors to ASEC World. The basic Web client-server architecture of the intelligent knowledge-based avatars is given. Importantly, the intelligent Web agent software system for the 3D virtual community has been implemented successfully.

  2. Novel interactive virtual showcase based on 3D multitouch technology

    Science.gov (United States)

    Yang, Tao; Liu, Yue; Lu, You; Wang, Yongtian

    2009-11-01

    A new interactive virtual showcase is proposed in this paper. With the help of virtual reality technology, the user of the proposed system can watch virtual objects floating in the air from all four sides and interact with them by touching the four surfaces of the virtual showcase. Unlike a traditional multitouch system, this system can not only realize multi-touch on a plane to implement 2D translation, 2D scaling, and 2D rotation of the objects; it can also realize 3D interaction with the virtual objects by recognizing and analyzing the multi-touch input that can be simultaneously captured from the four planes. Experimental results show the potential of the proposed system for application in the exhibition of historical relics and other precious goods.

  3. Augmented reality during robot-assisted laparoscopic partial nephrectomy: toward real-time 3D-CT to stereoscopic video registration.

    Science.gov (United States)

    Su, Li-Ming; Vagvolgyi, Balazs P; Agarwal, Rahul; Reiley, Carol E; Taylor, Russell H; Hager, Gregory D

    2009-04-01

    To investigate a markerless tracking system for real-time stereo-endoscopic visualization of preoperative computed tomographic imaging as an augmented display during robot-assisted laparoscopic partial nephrectomy. Stereoscopic video segments of a patient undergoing robot-assisted laparoscopic partial nephrectomy for tumor and another for a partial staghorn renal calculus were processed to evaluate the performance of a three-dimensional (3D)-to-3D registration algorithm. After both cases, we registered a segment of the video recording to the corresponding preoperative 3D-computed tomography image. After calibrating the camera and overlay, 3D-to-3D registration was created between the model and the surgical recording using a modified iterative closest point technique. Image-based tracking technology tracked selected fixed points on the kidney surface to augment the image-to-model registration. Our investigation has demonstrated that we can identify and track the kidney surface in real time when applied to intraoperative video recordings and overlay the 3D models of the kidney, tumor (or stone), and collecting system semitransparently. Using a basic computer research platform, we achieved an update rate of 10 Hz and an overlay latency of 4 frames. The accuracy of the 3D registration was 1 mm. Augmented reality overlay of reconstructed 3D-computed tomography images onto real-time stereo video footage is possible using iterative closest point and image-based surface tracking technology that does not use external navigation tracking systems or preplaced surface markers. Additional studies are needed to assess the precision and to achieve fully automated registration and display for intraoperative use.
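The iterative closest point registration at the core of the approach can be sketched with a basic point-to-point variant (a Kabsch rigid fit alternating with brute-force nearest-neighbour matching) on synthetic points; the paper's modified ICP and image-based surface tracking are not reproduced here:

```python
import numpy as np

def best_rigid(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, n_iter=10):
    """Basic ICP: alternate nearest-neighbour matching with rigid refits."""
    cur = src.copy()
    for _ in range(n_iter):
        # brute-force nearest neighbours (adequate for small point clouds)
        idx = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1).argmin(1)
        R, t = best_rigid(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# a small grid of surface points standing in for the CT organ model
g = np.linspace(0.0, 1.0, 4)
model = np.stack(np.meshgrid(g, g, g), axis=-1).reshape(-1, 3)

# simulate a small intraoperative pose change: a 3-degree roll plus a shift
a = np.deg2rad(3.0)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
observed = model @ Rz.T + np.array([0.02, -0.01, 0.03])
aligned = icp(model, observed)   # converges onto the observed cloud
```

ICP of this kind needs a reasonable initial alignment, which is why the paper augments it with image-based tracking of fixed surface points between frames.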

  4. Application of advanced virtual reality and 3D computer assisted technologies in tele-3D-computer assisted surgery in rhinology.

    Science.gov (United States)

    Klapan, Ivica; Vranjes, Zeljko; Prgomet, Drago; Lukinović, Juraj

    2008-03-01

    The real-time requirement means that the simulation should be able to follow the actions of the user, who may be moving in the virtual environment. The computer system should also store in its memory a three-dimensional (3D) model of the virtual environment. In that case a real-time virtual reality system will update the 3D graphic visualization as the user moves, so that up-to-date visualization is always shown on the computer screen. Upon completion of the tele-operation, the surgeon compares the preoperative and postoperative images and models of the operative field, and studies video records of the procedure itself. Using intraoperative records, animated images of the real tele-procedure performed can be designed. Virtual surgery offers the possibility of preoperative planning in rhinology. The intraoperative use of a computer in real time requires the development of appropriate hardware and software to connect the medical instrumentarium with the computer, and to operate the computer via the connected instrumentarium and sophisticated multimedia interfaces.

  5. Real-time determination of radiation dose through artificial intelligence and virtual reality; Determinacao de dose de radiacao, em tempo real, atraves de inteligencia artificial e realidade virtual

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, Victor Goncalves Gloria

    2009-07-01

    In recent years, a virtual environment of the Argonauta research reactor, located at the Instituto de Engenharia Nuclear (Brazil), has been developed. This environment, called Argonauta Virtual (AV), is a 3D model of the reactor hall in which virtual people (avatars) can navigate. In AV, simulations of nuclear sources and doses are possible. In a recent work, a real-time monitoring system (RTMS) was developed to provide (by means of Ethernet TCP/IP) the readings of area detectors situated in the reactor hall. Extending the scope of AV, this work is intended to provide continuous determination of the gamma radiation dose in the reactor hall, based on several monitored parameters. To accomplish this, a module based on an artificial neural network (ANN) was developed. The ANN module is able to predict gamma radiation doses using as inputs the avatar position (from the virtual environment), the reactor power (from the RTMS), and the readings of fixed area detectors (from the RTMS). The ANN training data were obtained by measuring gamma radiation doses over a mesh of points with previously defined positions, at different power levels. Through the use of the ANN it is possible to estimate, in real time, the dose received by a person at any position in the Argonauta reactor hall. This approach allows task simulation and personnel training inside the AV system without exposing people to radiation. (author)
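
    The record does not give the ANN architecture; as an illustration only, a one-hidden-layer network trained by plain gradient descent can map (position, reactor power) inputs to a toy dose value. All data below are synthetic stand-ins for the measured dose mesh, not the Argonauta measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic training data standing in for the measured dose mesh:
# inputs = (avatar x, avatar y, reactor power); target = gamma dose,
# assumed here to fall off with distance from a source at the origin
X = rng.uniform(-1, 1, size=(400, 3))
X[:, 2] = rng.uniform(0.1, 1.0, size=400)      # reactor power level
d2 = X[:, 0] ** 2 + X[:, 1] ** 2
y = (X[:, 2] / (d2 + 0.25))[:, None]           # toy dose model

# one-hidden-layer MLP trained with full-batch gradient descent on MSE
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)                   # hidden activations
    pred = H @ W2 + b2
    g = 2 * (pred - y) / len(X)                # dMSE/dpred
    gW2 = H.T @ g; gb2 = g.sum(0)
    gH = g @ W2.T * (1 - H ** 2)               # backprop through tanh
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

    Once trained, a forward pass per rendered frame is cheap enough for the real-time dose readout described in the record.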

  6. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Directory of Open Access Journals (Sweden)

    Vasanthan Maruthapillai

    Full Text Available In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (the distance from each marker to the center of the face) and change in marker distance (the change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.

  7. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Science.gov (United States)

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (the distance from each marker to the center of the face) and change in marker distance (the change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
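
    As an illustration of the feature-extraction step only (not the authors' code), the marker-distance and change-in-distance features and their three statistics can be computed from tracked marker positions as follows; the clip data here are synthetic:

```python
import numpy as np

def statistical_features(marker_xy):
    """marker_xy: (frames, markers, 2) virtual-marker positions over a clip.

    Returns the mean, variance and root mean square of two per-frame
    features: the distance of each marker from the face centre, and the
    change in marker position relative to the first frame.
    """
    centre = marker_xy.mean(axis=1, keepdims=True)        # face centre per frame
    dist = np.linalg.norm(marker_xy - centre, axis=2)     # (frames, markers)
    change = np.linalg.norm(marker_xy - marker_xy[:1], axis=2)
    feats = []
    for f in (dist, change):
        feats += [f.mean(axis=0), f.var(axis=0), np.sqrt((f ** 2).mean(axis=0))]
    return np.concatenate(feats)                          # 6 stats x 8 markers

# toy clip: eight markers drifting slightly over 30 frames,
# standing in for optical-flow tracking output
rng = np.random.default_rng(2)
base = rng.uniform(-1, 1, size=(8, 2))
clip = base + 0.01 * np.cumsum(rng.normal(size=(30, 8, 2)), axis=0)
vec = statistical_features(clip)
```

    The resulting 48-element vector is the kind of input a K-nearest-neighbor or probabilistic-neural-network classifier would consume.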

  8. Experiencing 3D interactions in virtual reality and augmented reality

    NARCIS (Netherlands)

    Martens, J.B.; Qi, W.; Aliakseyeu, D.; Kok, A.J.F.; Liere, van R.; Hoven, van den E.; Ijsselsteijn, W.; Kortuem, G.; Laerhoven, van K.; McClelland, I.; Perik, E.; Romero, N.; Ruyter, de B.

    2004-01-01

    We demonstrate basic 2D and 3D interactions in both a Virtual Reality (VR) system, called the Personal Space Station, and an Augmented Reality (AR) system, called the Visual Interaction Platform. Since both platforms use identical (optical) tracking hardware and software, and can run identical

  9. Cognitive Aspects of Collaboration in 3d Virtual Environments

    Science.gov (United States)

    Juřík, V.; Herman, L.; Kubíček, P.; Stachoň, Z.; Šašinka, Č.

    2016-06-01

    Human-computer interaction has entered the 3D era. The most important models representing spatial information — maps — are being transferred into 3D versions tailored to the specific content to be displayed. Virtual worlds (VWs) are becoming a promising area of interest because of the possibility of dynamically modifying content and of multi-user cooperation on tasks regardless of physical presence. They can be used for sharing and elaborating information via virtual images or avatars. The attractiveness of VWs is further enhanced by the possibility of measuring operators' actions and complex strategies. Collaboration in 3D environments is a crucial issue in many areas where visualizations are important for group cooperation. Within a specific 3D user interface, the operators' ability to manipulate the displayed content is explored with regard to phenomena such as situation awareness, cognitive workload, and human error. For this purpose, VWs offer a great number of tools for measuring operators' responses, such as recording virtual movement or spots of interest in the visual field. The study focuses on the methodological issues of measuring the usability of 3D VWs and comparing them with the existing principles of 2D maps. We explore operators' strategies for reaching and interpreting information with regard to the specific type of visualization and different levels of immersion.

  10. COGNITIVE ASPECTS OF COLLABORATION IN 3D VIRTUAL ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    V. Juřík

    2016-06-01

    Full Text Available Human-computer interaction has entered the 3D era. The most important models representing spatial information — maps — are being transferred into 3D versions tailored to the specific content to be displayed. Virtual worlds (VWs) are becoming a promising area of interest because of the possibility of dynamically modifying content and of multi-user cooperation on tasks regardless of physical presence. They can be used for sharing and elaborating information via virtual images or avatars. The attractiveness of VWs is further enhanced by the possibility of measuring operators' actions and complex strategies. Collaboration in 3D environments is a crucial issue in many areas where visualizations are important for group cooperation. Within a specific 3D user interface, the operators' ability to manipulate the displayed content is explored with regard to phenomena such as situation awareness, cognitive workload, and human error. For this purpose, VWs offer a great number of tools for measuring operators' responses, such as recording virtual movement or spots of interest in the visual field. The study focuses on the methodological issues of measuring the usability of 3D VWs and comparing them with the existing principles of 2D maps. We explore operators' strategies for reaching and interpreting information with regard to the specific type of visualization and different levels of immersion.

  11. Visual simultaneous localization and mapping (VSLAM) methods applied to indoor 3D topographical and radiological mapping in real-time

    International Nuclear Information System (INIS)

    Hautot, F.; Dubart, P.; Chagneau, B.; Bacri, C.O.; Abou-Khalil, R.

    2017-01-01

    New developments in the fields of robotics and computer vision make it possible to merge sensors and thus achieve fast, real-time localization of radiological measurements in space, together with near-real-time identification and characterization of radioactive sources. These capabilities move nuclear investigations toward more efficient operator dosimetry evaluation, intervention scenarios, risk mitigation, and simulation, for example in accidents in unknown, potentially contaminated areas or during dismantling operations. This paper presents new progress in merging RGB-D-camera-based SLAM (Simultaneous Localization and Mapping) systems with nuclear measurement-in-motion methods in order to detect, locate, and evaluate the activity of radioactive sources in three dimensions.
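
    A minimal sketch of the data-fusion idea, assuming synchronized SLAM poses and detector count rates (hypothetical interfaces, not the authors' system):

```python
import numpy as np

def tag_measurements(poses, counts):
    """Attach each count-rate sample to the SLAM pose estimated at the
    same timestamp, producing a 3D radiological point cloud.

    poses:  (n, 4, 4) homogeneous camera-to-world transforms from VSLAM
    counts: (n,) detector count rates synchronised with the poses
    Returns an (n, 4) array of x, y, z, count_rate rows.
    """
    sensor_origin = np.array([0.0, 0.0, 0.0, 1.0])  # detector at camera origin
    pts = poses @ sensor_origin                     # world-frame positions
    return np.column_stack([pts[:, :3], counts])

# toy trajectory: walking along x while the count rate peaks mid-path
n = 50
poses = np.tile(np.eye(4), (n, 1, 1))
poses[:, 0, 3] = np.linspace(0.0, 5.0, n)           # translate along x
counts = 100.0 / (1.0 + (poses[:, 0, 3] - 2.5) ** 2)
cloud = tag_measurements(poses, counts)
hot = cloud[cloud[:, 3].argmax()]                    # hottest spot in the map
```

    Source localization then reduces to analysing the spatial distribution of count rates in the tagged cloud, e.g. finding the pose where the rate peaks.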

  12. Visual Simultaneous Localization And Mapping (VSLAM) methods applied to indoor 3D topographical and radiological mapping in real-time

    Science.gov (United States)

    Hautot, Felix; Dubart, Philippe; Bacri, Charles-Olivier; Chagneau, Benjamin; Abou-Khalil, Roger

    2017-09-01

    New developments in the fields of robotics and computer vision make it possible to merge sensors and thus achieve fast, real-time localization of radiological measurements in space, together with near-real-time identification and characterization of radioactive sources. These capabilities move nuclear investigations toward more efficient operator dosimetry evaluation, intervention scenarios, risk mitigation, and simulation, for example in accidents in unknown, potentially contaminated areas or during dismantling operations.

  13. Secure environment for real-time tele-collaboration on virtual simulation of radiation treatment planning.

    Science.gov (United States)

    Ntasis, Efthymios; Maniatis, Theofanis A; Nikita, Konstantina S

    2003-01-01

    A secure framework is described for real-time tele-collaboration on the virtual simulation procedure of radiation treatment planning. An integrated approach is followed, clustering the security issues faced by the system into organizational issues, security issues over the LAN, and security issues over the LAN-to-LAN connection. The design and implementation of the security services are performed according to the identified security requirements, along with the need for real-time communication between the collaborating health care professionals. A detailed description of the implementation is given, presenting a solution that can be directly tailored to other tele-collaboration services in the field of health care. The pilot study of the proposed security components proves the feasibility of the secure environment and its consistency with the high-performance demands of the application.

  14. 3D vision in a virtual reality robotics environment

    Science.gov (United States)

    Schutz, Christian L.; Natonek, Emerico; Baur, Charles; Hugli, Heinz

    1996-12-01

    Virtual reality robotics (VRR) needs sensing feedback from the real environment. To show how advanced 3D vision provides new perspectives to fulfill these needs, this paper presents an architecture and system that integrates hybrid 3D vision and VRR and reports about experiments and results. The first section discusses the advantages of virtual reality in robotics, the potential of a 3D vision system in VRR and the contribution of a knowledge database, robust control and the combination of intensity and range imaging to build such a system. Section two presents the different modules of a hybrid 3D vision architecture based on hypothesis generation and verification. Section three addresses the problem of the recognition of complex, free-form 3D objects and shows how and why the newer approaches based on geometric matching solve the problem. This free-form matching can be efficiently integrated in a VRR system as a hypothesis generation knowledge-based 3D vision system. In the fourth part, we introduce the hypothesis verification based on intensity images which checks object pose and texture. Finally, we show how this system has been implemented and operates in a practical VRR environment used for an assembly task.

  15. Virtual 3D planning of tracheostomy placement and clinical applicability of 3D cannula design : A three-step study

    NARCIS (Netherlands)

    de Kleijn, Bertram J; Kraeima, Joep; Wachters, Jasper E; van der Laan, Bernard F A M; Wedman, Jan; Witjes, M J H; Halmos, Gyorgy B

    AIM: We aimed to investigate the potential of 3D virtual planning of tracheostomy tube placement and 3D cannula design to prevent tracheostomy complications due to inadequate cannula position. MATERIALS AND METHODS: 3D models of commercially available cannula were positioned in 3D models of the

  16. The Value of 3D Printing Models of Left Atrial Appendage Using Real-Time 3D Transesophageal Echocardiographic Data in Left Atrial Appendage Occlusion: Applications toward an Era of Truly Personalized Medicine.

    Science.gov (United States)

    Liu, Peng; Liu, Rijing; Zhang, Yan; Liu, Yingfeng; Tang, Xiaoming; Cheng, Yanzhen

    The objective of this study was to assess the clinical feasibility of generating 3D printed models of the left atrial appendage (LAA) using real-time 3D transesophageal echocardiogram (TEE) data for preoperative reference in LAA occlusion. Percutaneous LAA occlusion can effectively protect patients with atrial fibrillation from stroke. However, the anatomical structure of the LAA is so complicated that adequate information about its structure is essential for successful LAA occlusion. Emerging 3D printing technology has demonstrated the potential to depict structures more accurately than conventional imaging modalities by creating tangible patient-specific models. Typically, 3D printing data sets are acquired from CT and MRI, which may involve intravenous contrast, sedation, and ionizing radiation. It has been reported that 3D models of the LAA were successfully created from data acquired by CT. However, 3D printing of the LAA using real-time 3D TEE data has not yet been explored. Acquisition of 3D transesophageal echocardiographic data from 8 patients with atrial fibrillation was performed using the Philips EPIQ7 ultrasound system. Raw echocardiographic image data were opened in Philips QLAB, converted to 'Cartesian DICOM' format, and imported into Mimics® software to create 3D models of the LAA, which were printed using a rubber-like material. The printed 3D models were then used for preoperative reference and procedural simulation in LAA occlusion. We successfully printed the LAAs of 8 patients. Each LAA cost approximately CNY 800-1,000 and the total process took 16-17 h. Seven of the 8 Watchman devices predicted by preprocedural 2D TEE images were of the same sizes as those placed in the real operation. Interestingly, the 3D printed models were highly reflective of the shape and size of the LAAs, and all device sizes predicted by the 3D printed model were fully consistent with those placed in the real operation. Also, the 3D printed model could predict operating difficulty and the

  17. Clinical value of real time 3D sonohysterography and 2D sonohysterography in comparison to hysteroscopy with subsequent histopathological examination in perimenopausal women with abnormal uterine bleeding.

    Science.gov (United States)

    Kowalczyk, Dariusz; Guzikowski, Wojciech; Więcek, Jacek; Sioma-Markowska, Urszula

    2012-01-01

    In many publications, transvaginal ultrasound is regarded as the first step in diagnosing the cause of uterine bleeding in perimenopausal women. To improve the sensitivity and specificity of conventional ultrasound, physiological saline solution is administered into the uterine cavity; after expansion of its walls, the interior of the cavity is examined. This procedure is called 2D sonohysterography (SIS 2D). Ultrasound scanners that provide real-time 3D imaging make a spatial evaluation of the uterine cavity possible. The aim of the study was to assess the clinical value of real-time 3D sonohysterography and 2D sonohysterography compared to hysteroscopy with histopathological examination in perimenopausal women. The study involved a group of 97 perimenopausal women with abnormal uterine bleeding. In all of them, after a standard transvaginal ultrasonography, a catheter was inserted into the uterine cavity. After expansion of the uterine walls by administering about 10 ml of 0.9% saline solution, the uterine cavity was examined by conventional sonohysterography. A 3D imaging mode was then activated and the uterine interior examined by real-time 3D ultrasonography. The ultrasound results were verified by hysteroscopy; the endometrial lesions were removed and underwent histopathological examination. In two cases the SIS examination was impossible because of uterine cervix atresia. In the rest of the examined group, the sensitivity and specificity of SIS 2D were 72% and 96%, respectively. For SIS 3D, the sensitivity and specificity reached 83% and 99%, respectively. Adding SIS 3D, a minimally invasive method, to conventional sonohysterography improves the precision of diagnosis of endometrial pathology, provides a three-dimensional image of the uterine cavity, and enables examination of endometrial lesions, with diagnostic precision similar to that achieved by hysteroscopy.
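
    For reference, sensitivity and specificity follow directly from confusion-matrix counts. The counts below are illustrative only, chosen so the ratios land near the reported SIS 3D figures; they are not the study's raw data:

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical counts approximating the reported SIS 3D performance
sens, spec = sensitivity_specificity(tp=39, fp=1, tn=94, fn=8)
```

    With these counts, sensitivity is about 83% and specificity about 99%, matching the SIS 3D values quoted in the abstract.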

  18. A 3D simulation look-up library for real-time airborne gamma-ray spectroscopy

    Science.gov (United States)

    Kulisek, Jonathan A.; Wittman, Richard S.; Miller, Erin A.; Kernan, Warnick J.; McCall, Jonathon D.; McConn, Ron J.; Schweppe, John E.; Seifert, Carolyn E.; Stave, Sean C.; Stewart, Trevor N.

    2018-01-01

    A three-dimensional look-up library consisting of simulated gamma-ray spectra was developed to leverage, in real time, the abundance of data provided by a helicopter-mounted gamma-ray detection system consisting of 92 CsI-based radiation sensors and exhibiting a highly angular-dependent response. We have demonstrated how this library can be used to help effectively estimate the terrestrial gamma-ray background, develop simulated flight scenarios, and localize radiological sources. Source localization accuracy was significantly improved, particularly for weak sources, by estimating the entire gamma-ray spectrum while accounting for scattering in the air, and especially off the ground.
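
    A hedged sketch of the look-up idea, with a synthetic library of simulated spectra indexed by a 3D grid of source offsets (nearest-grid-point retrieval; not the authors' implementation):

```python
import numpy as np

# toy "look-up library": gamma-ray spectra simulated on a 3D grid of
# source offsets relative to the airborne detector array
rng = np.random.default_rng(3)
grid = np.stack(np.meshgrid(np.linspace(-50, 50, 5),
                            np.linspace(-50, 50, 5),
                            np.linspace(10, 110, 5),
                            indexing="ij"), axis=-1).reshape(-1, 3)
n_channels = 64
library = rng.random((len(grid), n_channels))   # placeholder simulated spectra

def lookup_spectrum(offset):
    """Return the library spectrum for the grid point nearest `offset`."""
    i = int(np.linalg.norm(grid - offset, axis=1).argmin())
    return library[i]

spectrum = lookup_spectrum(np.array([12.0, -3.0, 40.0]))
```

    In the real system each grid entry would hold a physics-simulated, angularly resolved spectrum, and interpolation between grid points would replace the nearest-neighbour retrieval shown here.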

  19. A 3D virtual reality simulator for training of minimally invasive surgery.

    Science.gov (United States)

    Mi, Shao-Hua; Hou, Zeng-Guang; Yang, Fan; Xie, Xiao-Liang; Bian, Gui-Bin

    2014-01-01

    For the last decade, remarkable progress has been made in the field of cardiovascular disease treatment. However, these complex medical procedures require a combination of rich experience and technical skills. In this paper, a 3D virtual reality simulator for core skills training in minimally invasive surgery is presented. The system can generate realistic 3D vascular models segmented from patient datasets, including a beating heart, and provides real-time computation of force and force feedback for surgical simulation. Instruments, such as a catheter or guide wire, are represented by a multi-body mass-spring model. In addition, a realistic user interface with multiple windows and real-time 3D views has been developed. Moreover, the simulator provides a human-machine interaction module that gives doctors the sense of touch during surgery training, enabling them to control the motion of a virtual catheter/guide wire inside a complex vascular model. Experimental results show that the simulator is suitable for minimally invasive surgery training.
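
    The multi-body mass-spring representation of a catheter/guide wire can be sketched as a clamped chain of unit masses integrated with semi-implicit Euler. This is an illustrative toy model under assumed parameters, not the simulator's code:

```python
import numpy as np

def simulate_wire(n=20, rest=0.05, k=200.0, damping=4.0, dt=1e-3, steps=2000):
    """Multi-body mass-spring chain standing in for a catheter/guide wire.

    The first node is clamped (the insertion point); gravity pulls the
    rest of the chain down, and springs between neighbours resist
    stretching. Semi-implicit Euler keeps the integration stable.
    """
    pos = np.zeros((n, 2))
    pos[:, 0] = rest * np.arange(n)              # start laid out horizontally
    vel = np.zeros((n, 2))
    g = np.array([0.0, -9.81])
    for _ in range(steps):
        force = np.tile(g, (n, 1))               # gravity on every node
        seg = pos[1:] - pos[:-1]
        length = np.linalg.norm(seg, axis=1, keepdims=True)
        unit = seg / np.maximum(length, 1e-9)    # safe normalisation
        f = k * (length - rest) * unit           # Hooke spring forces
        force[:-1] += f
        force[1:] -= f
        force -= damping * vel                   # viscous damping
        vel += dt * force                        # unit masses
        pos += dt * vel
        vel[0] = 0.0; pos[0] = 0.0               # clamp the first node
    return pos

wire = simulate_wire()                           # chain hangs below the clamp
```

    A haptic loop would add the force at the clamped node back to the user's device; bending stiffness (angular springs) would be needed for realistic wire behaviour.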

  20. Application of Real-Time 3D Navigation System in CT-Guided Percutaneous Interventional Procedures: A Feasibility Study

    Directory of Open Access Journals (Sweden)

    Priya Bhattacharji

    2017-01-01

    Full Text Available Introduction. To evaluate the accuracy of a quantitative 3D navigation system for CT-guided interventional procedures in a two-part study. Materials and Methods. Twenty-two procedures were performed in abdominal and thoracic phantoms. Accuracies of the 3D anatomy map registration and navigation were evaluated. Time used for the navigated procedures was recorded. In the IRB-approved clinical evaluation, 21 patients scheduled for CT-guided thoracic and hepatic biopsy and ablations were recruited. CT-guided procedures were performed without following the 3D navigation display. Accuracy of navigation as well as workflow fitness of the system was evaluated. Results. In phantoms, the average 3D anatomy map registration error was 1.79 mm. The average navigated needle placement accuracy for one-pass and two-pass procedures, respectively, was 2.0±0.7 mm and 2.8±1.1 mm in the liver and 2.7±1.7 mm and 3.0±1.4 mm in the lung. The average accuracy of the 3D navigation system in human subjects was 4.6±3.1 mm for all procedures. The system fits the existing workflow of CT-guided interventions with minimum impact. Conclusion. A 3D navigation system can be operated along the existing workflow and has the potential to navigate precision needle placement in CT-guided interventional procedures.

  1. Anesthesiology training using 3D imaging and virtual reality

    Science.gov (United States)

    Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

    1996-04-01

    Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

  2. Use of real-time three-dimensional transesophageal echocardiography in type A aortic dissections: Advantages of 3D TEE illustrated in three cases

    Directory of Open Access Journals (Sweden)

    Cindy J Wang

    2015-01-01

    Full Text Available Stanford type A aortic dissections often present to the hospital requiring emergent surgical intervention. Initial diagnosis is usually made by computed tomography; however, transesophageal echocardiography (TEE) can further characterize aortic dissections with specific advantages: it may be performed on an unstable patient, it can be used intra-operatively, and it can provide continuous real-time information. Three-dimensional (3D) TEE has become more accessible over recent years, allowing it to serve as an additional tool in the operating room. We present a case series of three patients presenting with type A aortic dissections and the advantages of intra-operative 3D TEE to diagnose the extent of dissection in each case. Prior case reports have demonstrated the use of 3D TEE in type A aortic dissections to characterize the extent of dissection and involvement of neighboring structures. In the three cases described, 3D TEE provided additional understanding of spatial relationships between the dissection flap and neighboring structures, such as the aortic valve and coronary orifices, that were not fully appreciated with two-dimensional TEE, which affected surgical decisions in the operating room. This case series demonstrates the utility and benefit of real-time 3D TEE during intra-operative management of a type A aortic dissection.

  3. Augmented Virtuality: A Real-time Process for Presenting Real-world Visual Sensory Information in an Immersive Virtual Environment for Planetary Exploration

    Science.gov (United States)

    McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.

    2017-12-01

    Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen impressive growth, thanks to the advent of commercial Head-Mounted Displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment, researchers are presented with the visual data in a virtual environment, whereas in a purely AR application a virtual object is projected into the real world, with which researchers can interact. There are several limitations to purely VR or AR applications in the context of remote planetary exploration. For example, in a purely VR environment, the contents of the planet surface (e.g., rocks, terrain, or other features) must be created off-line from a multitude of images, using image processing techniques to generate the 3D mesh data that populate the virtual surface of the planet. This process usually takes a tremendous amount of computational resources and cannot be delivered in real time. As an alternative, video frames may be superimposed on the virtual environment to save processing time. However, such rendered video frames lack 3D visual information, i.e., depth. In this paper, we present a technique that utilizes a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique blends the virtual environment with the real world in such a way as to preserve both the depth and visual information from the real world, while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as the Oculus Rift. The figure shows the virtual environment with an overlay of the real-world stereoscopic video presented in real time. Notice the preservation of the object

  4. Virtual Cerebral Aneurysm Clipping with Real-Time Haptic Force Feedback in Neurosurgical Education.

    Science.gov (United States)

    Gmeiner, Matthias; Dirnberger, Johannes; Fenz, Wolfgang; Gollwitzer, Maria; Wurm, Gabriele; Trenkler, Johannes; Gruber, Andreas

    2018-04-01

    Realistic, safe, and efficient modalities for simulation-based training are highly warranted to enhance the quality of surgical education, and they should be incorporated in resident training. The aim of this study was to develop a patient-specific virtual cerebral aneurysm-clipping simulator with haptic force feedback and real-time deformation of the aneurysm and vessels. A prototype simulator was developed from 2012 to 2016. Evaluation of virtual clipping by blood flow simulation was integrated in this software, and the prototype was evaluated by 18 neurosurgeons. In 4 patients with different middle cerebral artery aneurysms, virtual clipping was performed after real-life surgery, and surgical results were compared regarding clip application, surgical trajectory, and blood flow. After head positioning and craniotomy, bimanual virtual aneurysm clipping with an original forceps was performed. Blood flow simulation demonstrated residual aneurysm filling or branch stenosis. The simulator improved anatomic understanding for 89% of neurosurgeons. Simulation of head positioning and craniotomy was considered realistic by 89% and 94% of users, respectively. Most participants agreed that this simulator should be integrated into neurosurgical education (94%). Our illustrative cases demonstrated that virtual aneurysm surgery was possible using the same trajectory as in real-life cases. Both virtual clipping and blood flow simulation were realistic in broad-based but not calcified aneurysms. Virtual clipping of a calcified aneurysm could be performed using the same surgical trajectory, but not the same clip type. We have successfully developed a virtual aneurysm-clipping simulator. Next, we will prospectively evaluate this device for surgical procedure planning and education. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Real-time 3D transesophageal echocardiography-guided closure of a complicated patent ductus arteriosus in a dog.

    Science.gov (United States)

    Doocy, K R; Nelson, D A; Saunders, A B

    2017-06-01

    Advanced imaging modalities are becoming more widely available in veterinary cardiology, including the use of transesophageal echocardiography (TEE) during occlusion of patent ductus arteriosus (PDA) in dogs. The dog in this report had a complex history of attempted ligation and a large PDA that initially precluded device placement thereby limiting the options for PDA closure. Following a second thoracotomy and partial ligation, the morphology of the PDA was altered and device occlusion was an option. Angiographic assessment of the PDA was limited by the presence of hemoclips, and the direction of ductal flow related to the change in anatomy following ligature placement. Intra-operative TEE, in particular real-time three-dimensional imaging, was pivotal for assessing the PDA morphology, monitoring during the procedure, selecting the device size, and confirming device placement. The TEE images increased operator confidence that the size and location of the device were appropriate before release despite the unusual position. This report highlights the benefit of intra-operative TEE, in particular real-time three-dimensional imaging, for successful PDA occlusion in a complicated case. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. A real-time monitoring/emergency response workstation using a 3-D numerical model initialized with SODAR

    International Nuclear Information System (INIS)

    Lawver, B.S.; Sullivan, T.J.; Baskett, R.L.

    1993-01-01

    Many workstation-based emergency response dispersion modeling systems provide simple Gaussian models driven by single meteorological tower inputs to estimate the downwind consequences of accidental spills or stack releases. Complex meteorological or terrain settings demand more sophisticated resolution of the three-dimensional structure of the atmosphere to reliably calculate plume dispersion. Mountain valleys and sea-breeze flows are two common examples of such settings. To address these complexities, we have implemented the three-dimensional diagnostic MATHEW mass-adjusted wind-field model and the ADPIC particle-in-cell dispersion model on a workstation for use in real-time emergency response modeling. Both MATHEW and ADPIC have shown their utility in a variety of complex settings over the last 15 years within the Department of Energy's Atmospheric Release Advisory Capability project.
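
    The particle-in-cell dispersion approach can be caricatured as Lagrangian particles advected by a mean wind plus a random-walk diffusion term. The analytic wind field below is a stand-in for a MATHEW-style mass-adjusted field; this is an illustrative sketch, not the ADPIC code:

```python
import numpy as np

def advect_particles(wind, n=5000, dt=1.0, steps=60, diff=0.5, seed=4):
    """Lagrangian particle dispersion in the spirit of a particle-in-cell
    model: each marker particle is advected by the (here analytic) wind
    field and jittered by a random walk representing turbulent diffusion.
    """
    rng = np.random.default_rng(seed)
    p = np.zeros((n, 3))                       # release at the stack
    for _ in range(steps):
        p += dt * wind(p)                      # mean-wind advection
        p += np.sqrt(2 * diff * dt) * rng.normal(size=p.shape)
        p[:, 2] = np.abs(p[:, 2])              # reflect at the ground
    return p

# uniform 3 m/s wind along x as a placeholder for the diagnosed field
uniform_wind = lambda p: np.broadcast_to(np.array([3.0, 0.0, 0.0]), p.shape)
plume = advect_particles(uniform_wind)
centroid = plume.mean(axis=0)                  # plume centre of mass
```

    Binning the particle positions onto a grid would give the concentration field that a dispersion workstation displays.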

  7. Real-time virtual sonography for navigation during targeted prostate biopsy using magnetic resonance imaging data

    International Nuclear Information System (INIS)

    Miyagawa, Tomoaki; Ishikawa, Satoru; Kimura, Tomokazu; Suetomi, Takahiro; Tsutsumi, Masakazu; Irie, Toshiyuki; Kondoh, Masanao; Mitake, Tsuyoshi

    2010-01-01

    The objective of this study was to evaluate the effectiveness of a medical navigation technique, Real-time Virtual Sonography (RVS), for targeted prostate biopsy. Eighty-five patients with lesions suspicious for prostate cancer on magnetic resonance imaging (MRI) were included in this study. All selected patients had at least one negative result on previous transrectal biopsies. The acquired MRI volume data were loaded onto a personal computer installed with RVS software, which registers the MRI volume to real-time ultrasound data for real-time display. The registered MRI images were displayed adjacent to the ultrasonographic sagittal image on the same computer monitor. Suspected lesions on T2-weighted images were marked with a red circle. Suspected lesions were first biopsied transperineally under real-time navigation with RVS, followed by conventional transrectal and transperineal biopsy under spinal anesthesia. The median age of the patients was 69 years (56-84 years), and the prostate-specific antigen level and prostate volume were 9.9 ng/mL (4.0-34.2) and 37.2 mL (18-141), respectively. Prostate cancer was detected in 52 patients (61%). The biopsy specimens obtained using RVS were positive for prostate cancer in 45/52 patients (87%). A total of 192 biopsy cores were obtained using RVS; 62 of these (32%) were positive for prostate cancer, whereas conventional random biopsy revealed cancer in only 75/833 (9%) cores (P<0.01). Targeted prostate biopsy with RVS is very effective for diagnosing lesions detected with MRI. The technique requires only an additional computer and the RVS software and is thus cost-effective. Therefore, RVS-guided prostate biopsy has great potential for better management of prostate cancer patients. (author)

  8. Real-time monitoring of quorum sensing in 3D-printed bacterial aggregates using scanning electrochemical microscopy.

    Science.gov (United States)

    Connell, Jodi L; Kim, Jiyeon; Shear, Jason B; Bard, Allen J; Whiteley, Marvin

    2014-12-23

    Microbes frequently live in nature as small, densely packed aggregates containing ∼10(1)-10(5) cells. These aggregates not only display distinct phenotypes, including resistance to antibiotics, but also, serve as building blocks for larger biofilm communities. Aggregates within these larger communities display nonrandom spatial organization, and recent evidence indicates that this spatial organization is critical for fitness. Studying single aggregates as well as spatially organized aggregates remains challenging because of the technical difficulties associated with manipulating small populations. Micro-3D printing is a lithographic technique capable of creating aggregates in situ by printing protein-based walls around individual cells or small populations. This 3D-printing strategy can organize bacteria in complex arrangements to investigate how spatial and environmental parameters influence social behaviors. Here, we combined micro-3D printing and scanning electrochemical microscopy (SECM) to probe quorum sensing (QS)-mediated communication in the bacterium Pseudomonas aeruginosa. Our results reveal that QS-dependent behaviors are observed within aggregates as small as 500 cells; however, aggregates larger than 2,000 bacteria are required to stimulate QS in neighboring aggregates positioned 8 μm away. These studies provide a powerful system to analyze the impact of spatial organization and aggregate size on microbial behaviors.

  9. LED Virtual Simulation based on Web3D

    OpenAIRE

    Lilan Liu; Liu Han; Zhiqi Lin; Manping Li; Tao Yu

    2014-01-01

    Given the high price and low market popularity of current LED indoor lighting products, an LED indoor lighting platform based on Web3D technology is proposed. Internet virtual reality technology is integrated into the LED collaborative e-commerce website using Virtools. According to the characteristics of LED indoor lighting products, this paper introduces a method to build an encapsulated model and three characteristics of LED lighting: geometrical, optical and behavi...

  10. 3-D Sound for Virtual Reality and Multimedia

    Science.gov (United States)

    Begault, Durand R.; Trejo, Leonard J. (Technical Monitor)

    2000-01-01

    Technology and applications for the rendering of virtual acoustic spaces are reviewed. Chapter 1 deals with acoustics and psychoacoustics. Chapters 2 and 3 cover cues to spatial hearing and review psychoacoustic literature. Chapter 4 covers signal processing and systems overviews of 3-D sound systems. Chapter 5 covers applications to computer workstations, communication systems, aeronautics and space, and sonic arts. Chapter 6 lists resources. This TM is a reprint of the 1994 book from Academic Press.

  11. The 3D virtual environment online for real shopping

    OpenAIRE

    Khalil, Nahla

    2015-01-01

    The development of information technology and the Internet has led to rapid progress in e-commerce and online shopping, owing to the convenience they provide consumers. However, e-commerce and online shopping are still not able to fully replace onsite shopping; conventional online shopping websites often cannot provide enough information about a product for the customer to make an informed decision before checkout. 3D virtual shopping environments show great potential for enhancing e-co...

  12. Building intuitive 3D interfaces for virtual reality systems

    Science.gov (United States)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Seitel, Mathias; Mullick, Rakesh

    2007-03-01

    An exploration of techniques for developing intuitive and efficient user interfaces for virtual reality systems. This work seeks to understand which paradigms from the better-understood world of 2D user interfaces remain viable within 3D environments. To establish this, a new user interface was created that applied various established principles of interface design. A user study was then performed in which it was compared with an earlier interface on a series of medical visualization tasks.

  13. Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms.

    Science.gov (United States)

    Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro

    2012-09-10

    We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images that were captured by integral photography to 8K images. One of the methods increases the number of pixels of each elemental image. The other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of IP images and the generation of holograms from the IP images using fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs. The Fast Fourier transform is performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system for performing all processing from the capture to the reconstruction of 3D images by using these components and successfully used this system to reconstruct a 3D live scene at 12 frames per second.
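    The two enlargement strategies the authors compare can be sketched on a toy IP frame: either multiply the pixel count of each elemental image, or multiply the number of elemental images. The function names and the nearest-neighbour replication below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def upsample_within_elemental(ip, k=2):
    """Method 1: keep the number of elemental images fixed and multiply
    the pixel count of each one (nearest-neighbour replication here)."""
    return np.repeat(np.repeat(ip, k, axis=0), k, axis=1)

def upsample_elemental_count(ip, ny, nx, k=2):
    """Method 2: keep elemental-image resolution fixed and increase the
    number of elemental images by replicating each in a k x k block."""
    h, w = ip.shape
    eh, ew = h // ny, w // nx                    # elemental image size
    grid = ip.reshape(ny, eh, nx, ew)            # (row, y, col, x)
    grid = np.repeat(np.repeat(grid, k, axis=0), k, axis=2)
    return grid.reshape(ny * k * eh, nx * k * ew)

# Toy 4x4 "IP image": a 2x2 mosaic of 2x2-pixel elemental images.
ip = np.arange(16.0).reshape(4, 4)
big1 = upsample_within_elemental(ip)
big2 = upsample_elemental_count(ip, ny=2, nx=2)
```

    Both paths turn a toy "4K" mosaic into a "8K" one, but with different optical meaning: method 1 refines each elemental view, method 2 densifies the lens array.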

  14. Dental impressions using 3D digital scanners: virtual becomes reality.

    Science.gov (United States)

    Birnbaum, Nathan S; Aaronson, Heidi B

    2008-10-01

    The technologies that have made the use of three-dimensional (3D) digital scanners an integral part of many industries for decades have been improved and refined for application to dentistry. Since the introduction of the first dental impressioning digital scanner in the 1980s, development engineers at a number of companies have enhanced the technologies and created in-office scanners that are increasingly user-friendly and able to produce precisely fitting dental restorations. These systems are capable of capturing 3D virtual images of tooth preparations, from which restorations may be fabricated directly (ie, CAD/CAM systems) or fabricated indirectly (ie, dedicated impression scanning systems for the creation of accurate master models). The use of these products is increasing rapidly around the world and presents a paradigm shift in the way in which dental impressions are made. Several of the leading 3D dental digital scanning systems are presented and discussed in this article.

  15. Interactive 3D visualization for theoretical virtual observatories

    Science.gov (United States)

    Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.

    2018-06-01

    Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.

  16. Interactive 3D Visualization for Theoretical Virtual Observatories

    Science.gov (United States)

    Dykes, Tim; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.

    2018-04-01

    Virtual Observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of datasets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2d or volume rendering in 3d. We analyze the current state of 3d visualization for big theoretical astronomical datasets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3d visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based datasets allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.

  17. Real-time recording and classification of eye movements in an immersive virtual environment.

    Science.gov (United States)

    Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary

    2013-10-10

    Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the Quicktime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements.
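    A common baseline for the fixation/saccade classification such a toolkit provides is a simple velocity threshold (I-VT). The sketch below is a generic illustration, not the library's actual algorithm; the 100 deg/s threshold and the toy trace are assumptions.

```python
import numpy as np

def classify_ivt(gaze_deg, t, vel_threshold=100.0):
    """Velocity-threshold (I-VT) labelling of gaze samples: inter-sample
    angular speeds above the threshold (deg/s) become 'saccade', the
    rest 'fixation'. The first sample inherits the first label."""
    speed = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) / np.diff(t)
    labels = np.where(speed > vel_threshold, "saccade", "fixation")
    return np.concatenate([[labels[0]], labels])   # pad back to N labels

# 100 Hz toy trace: 50 ms of steady gaze, then a 300 deg/s sweep.
t = np.arange(10) * 0.01
gaze = np.zeros((10, 2))
gaze[:, 0] = [0, 0, 0, 0, 0, 3, 6, 9, 12, 15]
labels = classify_ivt(gaze, t)
```

    Pursuit detection typically adds an intermediate velocity band; the same per-sample speed signal is the starting point.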

  18. Augmented Reality versus Virtual Reality for 3D Object Manipulation.

    Science.gov (United States)

    Krichenbauer, Max; Yamamoto, Goshiro; Taketom, Takafumi; Sandor, Christian; Kato, Hirokazu

    2018-02-01

    Virtual Reality (VR) Head-Mounted Displays (HMDs) are on the verge of becoming commodity hardware available to the average user and feasible to use as a tool for 3D work. Some HMDs include front-facing cameras, enabling Augmented Reality (AR) functionality. Apart from avoiding collisions with the environment, interaction with virtual objects may also be affected by seeing the real environment. However, whether these effects are positive or negative has not yet been studied extensively. For most tasks it is unknown whether AR has any advantage over VR. In this work we present the results of a user study in which we compared user performance measured in task completion time on a 9 degrees of freedom object selection and transformation task performed either in AR or VR, both with a 3D input device and a mouse. Our results show faster task completion time in AR over VR. When using a 3D input device, a purely VR environment increased task completion time by 22.5 percent on average compared to AR. Surprisingly, a similar effect occurred when using a mouse: users were about 17.3 percent slower in VR than in AR. Mouse and 3D input device produced similar task completion times in each condition (AR or VR). We further found no differences in reported comfort.

  19. Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy.

    Science.gov (United States)

    Li, Ruijiang; Jia, Xun; Lewis, John H; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Jiang, Steve B

    2010-06-01

    To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Given a set of volumetric images of a patient at N breathing phases as the training data, deformable image registration was performed between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, new DVFs can be generated, which, when applied on the reference image, lead to new volumetric images. A volumetric image can then be reconstructed from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image. The algorithm was implemented on graphics processing units (GPUs) to achieve real-time efficiency. The training data were generated using a realistic and dynamic mathematical phantom with ten breathing phases. The testing data were 360 cone beam projections corresponding to one gantry rotation, simulated using the same phantom with a 50% increase in breathing amplitude. The average relative image intensity error of the reconstructed volumetric images is 6.9% +/- 2.4%. The average 3D tumor localization error is 0.8 +/- 0.5 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for reconstructing a volumetric image from each projection is 0.24 s (range: 0.17-0.35 s). The authors have shown the feasibility of reconstructing volumetric images and localizing tumor positions in 3D in near real-time from a single x-ray image.
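    The PCA machinery can be sketched as follows. In the paper the coefficients are found by optimizing the match between computed and measured x-ray projections; this simplified stand-in instead recovers them directly from a synthetic DVF, so all sizes and data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_phases, n_dof = 9, 300          # N-1 training DVFs, flattened length

# Stand-ins for the DVFs obtained by deformable registration.
dvfs = rng.normal(size=(n_phases, n_dof))

# PCA via SVD of the mean-centred training matrix; rows of Vt are the
# orthonormal eigenvectors of the DVF space.
mean = dvfs.mean(axis=0)
_, _, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
components = Vt[:3]               # keep the top 3 modes

def dvf_from_coeffs(c):
    """New DVF generated by varying the PCA coefficients."""
    return mean + c @ components

# A 'measured' DVF made from known coefficients; recover them by
# projecting onto the orthonormal components (the paper instead
# optimizes the coefficients against a measured projection image).
c_true = np.array([2.0, -1.0, 0.5])
c_hat = (dvf_from_coeffs(c_true) - mean) @ components.T
```

    The low dimensionality of the coefficient vector is what makes per-projection optimization feasible in real time on a GPU.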

  20. Beyond Virtual Replicas: 3D Modeling and Maltese Prehistoric Architecture

    Directory of Open Access Journals (Sweden)

    Filippo Stanco

    2013-01-01

    In the past decade, computer graphics have become strategic for the development of projects aimed at the interpretation of archaeological evidence and the dissemination of scientific results to the public. Among all the solutions available, the use of 3D models is particularly relevant for the reconstruction of poorly preserved sites and monuments destroyed by natural causes or human actions. These digital replicas are, at the same time, a virtual environment that can be used as a tool for the interpretative hypotheses of archaeologists and an effective medium for a visual description of the cultural heritage. In this paper, the methodology, aims, and outcomes of a virtual reconstruction of the Borg in-Nadur megalithic temple, carried out by the Archeomatica Project of the University of Catania, are offered as a case study for a virtual archaeology of prehistoric Malta.

  1. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
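    The scalability analysis rests on Amdahl's law, which is easy to state in code. The 0.95 parallel fraction used in the example is an illustrative assumption, not the paper's measured value.

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: speedup when a fraction f of the work parallelizes
    perfectly over n cores and the remaining (1 - f) stays serial."""
    f = parallel_fraction
    return 1.0 / ((1.0 - f) + f / cores)
```

    For example, `amdahl_speedup(0.95, 12)` is about 7.74, and even with unlimited cores a 95%-parallel workload cannot exceed a 20x speedup, which is why the serial fraction dominates the platform's scalability ceiling.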

  2. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    International Nuclear Information System (INIS)

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon; Ra, Jong Beom; Lee, Jae Young

    2015-01-01

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have limitations in either registration speed or performance. The purpose of this work is to develop a real-time, fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they attach a 3D US transducer to the patient's body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As features for the rigid registration, they may choose either internal liver vessels or the inferior vena cava; since the latter is especially useful in patients with diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a

  3. Real-time 3D imaging methods using 2D phased arrays based on synthetic focusing techniques.

    Science.gov (United States)

    Kim, Jung-Jun; Song, Tai-Kyong

    2008-07-01

    A fast 3D ultrasound imaging technique using a 2D phased-array transducer, based on the synthetic focusing method, is proposed for nondestructive testing or medical imaging. In the proposed method, each column of the 2D array is fired successively to produce transverse fan beams focused at a fixed depth along a given longitudinal direction, and the resulting pulse echoes are received at all elements of the 2D array. After firing all column arrays, a frame of high-resolution image along a given longitudinal direction is obtained, with dynamic focusing employed in the longitudinal direction on receive and in the transverse direction on both transmit and receive. The volume rate of the proposed method can be made much higher than that of conventional 2D array imaging by employing an efficient sparse array technique. A simple modification to the proposed method can further increase the volume scan rate significantly. The proposed methods are verified through computer simulations.
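    The focusing delays at the heart of such synthetic (delay-and-sum) focusing can be sketched as follows; the array geometry, pitch, and sound speed are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def roundtrip_delays(tx, elems, focus, c=1540.0):
    """Round-trip times of flight from a transmit element to the focal
    point and back to every receive element; these are the delays a
    delay-and-sum beamformer compensates when focusing at `focus`."""
    t_tx = np.linalg.norm(focus - tx) / c
    t_rx = np.linalg.norm(elems - focus, axis=1) / c
    return t_tx + t_rx

# 16 x 16 element grid, 0.3 mm pitch, centred on the origin in z = 0.
pitch = 0.3e-3
xs = (np.arange(16) - 7.5) * pitch
gx, gy = np.meshgrid(xs, xs)
elems = np.stack([gx.ravel(), gy.ravel(), np.zeros(256)], axis=1)

focus = np.array([0.0, 0.0, 0.03])       # focal point 30 mm ahead
delays = roundtrip_delays(elems[0], elems, focus)
```

    Summing the recorded echoes after shifting each channel by these delays yields one focused sample; repeating over focal points and transmit columns builds the volume, which is where the transmit-count savings of the sparse-array variant pay off.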

  4. Real-time tracking of visually attended objects in virtual environments and its application to LOD.

    Science.gov (United States)

    Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon

    2009-01-01

    This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented using the GPU, exhibiting high computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.
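    The blending of bottom-up saliency with top-down context can be illustrated with a toy score-combination sketch. The object names, scores, and linear weighting are assumptions for illustration; the paper's framework operates on full per-object saliency maps on the GPU.

```python
def attended_object(bottom_up, top_down, weight=0.5):
    """Pick the most plausibly attended object by linearly blending a
    bottom-up (stimulus-driven) saliency score with a top-down
    (goal-directed) context score for each candidate object."""
    combined = {
        obj: (1.0 - weight) * bottom_up[obj] + weight * top_down.get(obj, 0.0)
        for obj in bottom_up
    }
    return max(combined, key=combined.get)

bu = {"door": 0.9, "sword": 0.6, "chair": 0.2}   # stimulus saliency
td = {"door": 0.1, "sword": 0.9, "chair": 0.1}   # task-inferred context
```

    With the context included the sword wins even though the door is the brighter stimulus; with `weight=0.0` (bottom-up only) the door is picked, which is the kind of error the top-down term corrects.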

  5. Virtual reality cerebral aneurysm clipping simulation with real-time haptic feedback.

    Science.gov (United States)

    Alaraj, Ali; Luciano, Cristian J; Bailey, Daniel P; Elsenousi, Abdussalam; Roitberg, Ben Z; Bernardo, Antonio; Banerjee, P Pat; Charbel, Fady T

    2015-03-01

    With the decreasing number of cerebral aneurysms treated surgically and the increasing complexity of those that are, there is a need for simulation-based tools to teach future neurosurgeons the operative techniques of aneurysm clipping. To develop and evaluate the usefulness of a new haptic-based virtual reality simulator in the training of neurosurgical residents, a real-time sensory haptic feedback virtual reality aneurysm clipping simulator was developed using the ImmersiveTouch platform. A prototype middle cerebral artery aneurysm simulation was created from a computed tomographic angiogram. Aneurysm and vessel volume deformation and haptic feedback are provided in a 3-dimensional immersive virtual reality environment. Intraoperative aneurysm rupture was also simulated. Seventeen neurosurgery residents from 3 residency programs tested the simulator and provided feedback on its usefulness and resemblance to real aneurysm clipping surgery. Residents thought that the simulation would be useful in preparing for real-life surgery. About two-thirds of the residents thought that the 3-dimensional immersive anatomic details provided a close resemblance to real operative anatomy and accurate guidance for deciding surgical approaches. They thought the simulation was useful for preoperative surgical rehearsal and neurosurgical training. A third of the residents thought that the technology in its current form provided realistic haptic feedback for aneurysm surgery. Neurosurgical residents thought that the novel immersive VR simulator is helpful in their training, especially because they do not get a chance to perform aneurysm clippings until late in their residency programs.

  6. WE-AB-BRB-00: Session in Memory of Robert J. Shalek: High Resolution Dosimetry from 2D to 3D to Real-Time 3D

    International Nuclear Information System (INIS)

    2016-01-01

    Despite widespread IMRT treatments at modern radiation therapy clinics, precise dosimetric commissioning of an IMRT system remains a challenge. In the most recent report from the Radiological Physics Center (RPC), nearly 20% of institutions failed an end-to-end test with an anthropomorphic head and neck phantom, a test that has rather lenient dose difference and distance-to-agreement criteria of 7% and 4 mm. The RPC report provides strong evidence that IMRT implementation is prone to error and that improved quality assurance tools are required. At the heart of radiation therapy dosimetry is the multidimensional dosimeter. However, due to the limited availability of water-equivalent dosimetry materials, research and development in this important field is challenging. In this session, we will review a few dosimeter developments that are either in the laboratory phase or in the pre-commercialization phase. 1) Radiochromic plastic. Novel formulations exhibit light absorbing optical contrast with very little scatter, enabling faster, broad beam optical CT design. 2) Storage phosphor. After irradiation, the dosimetry panels will be read out using a dedicated 2D scanning apparatus in a non-invasive, electro-optic manner and immediately restored for further use. 3) Liquid scintillator. Scintillators convert the energy from x-rays and proton beams into visible light, which can be recorded with a scientific camera (CCD or CMOS) from multiple angles. The 3D shape of the dose distribution can then be reconstructed. 4) Cherenkov emission imaging. Gated intensified imaging allows video-rate passive detection of Cherenkov emission during radiation therapy with the room lights on. 
Learning Objectives: (1) to understand the physics of a variety of dosimetry techniques based upon optical imaging; (2) to investigate strategies to overcome their respective challenges and limitations; (3) to explore novel ideas of dosimeter design. Supported in part by NIH Grants R01CA148853, R01CA182450, and R01CA109558.

  7. WE-AB-BRB-00: Session in Memory of Robert J. Shalek: High Resolution Dosimetry from 2D to 3D to Real-Time 3D

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2016-06-15

    Despite widespread IMRT treatments at modern radiation therapy clinics, precise dosimetric commissioning of an IMRT system remains a challenge. In the most recent report from the Radiological Physics Center (RPC), nearly 20% of institutions failed an end-to-end test with an anthropomorphic head and neck phantom, a test that has rather lenient dose difference and distance-to-agreement criteria of 7% and 4 mm. The RPC report provides strong evidence that IMRT implementation is prone to error and that improved quality assurance tools are required. At the heart of radiation therapy dosimetry is the multidimensional dosimeter. However, due to the limited availability of water-equivalent dosimetry materials, research and development in this important field is challenging. In this session, we will review a few dosimeter developments that are either in the laboratory phase or in the pre-commercialization phase. 1) Radiochromic plastic. Novel formulations exhibit light absorbing optical contrast with very little scatter, enabling faster, broad beam optical CT design. 2) Storage phosphor. After irradiation, the dosimetry panels will be read out using a dedicated 2D scanning apparatus in a non-invasive, electro-optic manner and immediately restored for further use. 3) Liquid scintillator. Scintillators convert the energy from x-rays and proton beams into visible light, which can be recorded with a scientific camera (CCD or CMOS) from multiple angles. The 3D shape of the dose distribution can then be reconstructed. 4) Cherenkov emission imaging. Gated intensified imaging allows video-rate passive detection of Cherenkov emission during radiation therapy with the room lights on. 
Learning Objectives: (1) to understand the physics of a variety of dosimetry techniques based upon optical imaging; (2) to investigate strategies to overcome their respective challenges and limitations; (3) to explore novel ideas of dosimeter design. Supported in part by NIH Grants R01CA148853, R01CA182450, and R01CA109558.

  8. Novel System for Real-Time Integration of 3-D Echocardiography and Fluoroscopy for Image-Guided Cardiac Interventions: Preclinical Validation and Clinical Feasibility Evaluation

    Science.gov (United States)

    Housden, R. James; Ma, Yingliang; Rajani, Ronak; Gao, Gang; Nijhof, Niels; Cathier, Pascal; Bullens, Roland; Gijsbers, Geert; Parish, Victoria; Kapetanakis, Stamatis; Hancock, Jane; Rinaldi, C. Aldo; Cooklin, Michael; Gill, Jaswinder; Thomas, Martyn; O'neill, Mark D.; Razavi, Reza; Rhode, Kawal S.

    2014-01-01

    Real-time imaging is required to guide minimally invasive catheter-based cardiac interventions. While transesophageal echocardiography allows for high-quality visualization of cardiac anatomy, X-ray fluoroscopy provides excellent visualization of devices. We have developed a novel image fusion system that allows real-time integration of 3-D echocardiography and the X-ray fluoroscopy. The system was validated in the following two stages: 1) preclinical to determine function and validate accuracy; and 2) in the clinical setting to assess clinical workflow feasibility and determine overall system accuracy. In the preclinical phase, the system was assessed using both phantom and porcine experimental studies. Median 2-D projection errors of 4.5 and 3.3 mm were found for the phantom and porcine studies, respectively. The clinical phase focused on extending the use of the system to interventions in patients undergoing either atrial fibrillation catheter ablation (CA) or transcatheter aortic valve implantation (TAVI). Eleven patients were studied with nine in the CA group and two in the TAVI group. Successful real-time view synchronization was achieved in all cases with a calculated median distance error of 2.2 mm in the CA group and 3.4 mm in the TAVI group. A standard clinical workflow was established using the image fusion system. These pilot data confirm the technical feasibility of accurate real-time echo-fluoroscopic image overlay in clinical practice, which may be a useful adjunct for real-time guidance during interventional cardiac procedures. PMID:27170872
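    The reported 2-D projection errors come from comparing overlaid echo-derived points with reference positions in the X-ray image. A minimal sketch of that computation follows; the registration matrix, camera intrinsics, and points are made-up stand-ins, not the system's calibration.

```python
import numpy as np

def project(points_h, K, T):
    """Map homogeneous 3-D echo points into the X-ray image: apply the
    rigid echo-to-C-arm registration T (4x4), then a pinhole projection
    with intrinsics K (3x3)."""
    cam = (T @ points_h.T)[:3]               # 3 x N camera coordinates
    px = K @ cam
    return (px[:2] / px[2]).T                # N x 2 pixel coordinates

def median_2d_error(pred, ref):
    """Median 2-D projection error between overlay and reference."""
    return np.median(np.linalg.norm(pred - ref, axis=1))

K = np.array([[1000.0, 0.0, 256.0],
              [0.0, 1000.0, 256.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)                                # perfect registration
pts = np.array([[0.0, 0.0, 1.0, 1.0],
                [0.1, 0.0, 1.0, 1.0]])
pred = project(pts, K, T)
err = median_2d_error(pred, pred + np.array([3.0, 4.0]))
```

    In the validation studies the reference points come from phantom fiducials or annotated clinical images, and the error is reported in millimetres after scaling by the detector pixel size.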

  9. Virtual real-time inspection of nuclear material via VRML and secure web pages

    International Nuclear Information System (INIS)

    Nilsen, C.; Jortner, J.; Damico, J.; Friesen, J.; Schwegel, J.

    1997-04-01

    Sandia National Laboratories' Straight Line project is working to provide the right sensor information to the right user to enhance the safety, security, and international accountability of nuclear material. One of Straight Line's efforts is to create a system to securely disseminate this data on the Internet's World-Wide-Web. To make the user interface more intuitive, Sandia has generated a three dimensional VRML (virtual reality modeling language) interface for a secure web page. This paper will discuss the implementation of the Straight Line secure 3-D web page. A discussion of the "pros and cons" of a 3-D web page is also presented. The public VRML demonstration described in this paper can be found on the Internet at the following address: http://www.ca.sandia.gov/NMM/. A Netscape browser, version 3, is strongly recommended.
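As an illustration of the kind of 3-D interface the record describes, the following Python sketch emits a minimal VRML 2.0 world containing one named object. The node name and dimensions are hypothetical; this is not Sandia's Straight Line code.

```python
# Sketch: generate a minimal VRML 2.0 scene of the kind a secure web page
# could serve. The node name and dimensions are illustrative, not Sandia's.

def make_vrml_box(name, size=(1.0, 1.0, 1.0), color=(0.8, 0.2, 0.2)):
    """Return a VRML 2.0 world containing one named, colored box."""
    sx, sy, sz = size
    r, g, b = color
    return (
        "#VRML V2.0 utf8\n"
        f"DEF {name} Transform {{\n"
        "  children Shape {\n"
        "    appearance Appearance {\n"
        f"      material Material {{ diffuseColor {r} {g} {b} }}\n"
        "    }\n"
        f"    geometry Box {{ size {sx} {sy} {sz} }}\n"
        "  }\n"
        "}\n"
    )

if __name__ == "__main__":
    print(make_vrml_box("StorageCask", size=(2.0, 3.0, 2.0)))
```

A browser with a VRML plug-in (such as the Netscape 3 setup the paper recommends) would render the resulting file as an interactive 3-D object.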

  10. Virtual real-time inspection of nuclear material via VRML and secure web pages

    International Nuclear Information System (INIS)

    Nilsen, C.; Jortner, J.; Damico, J.; Friesen, J.; Schwegel, J.

    1996-01-01

    Sandia National Laboratories' Straight-Line project is working to provide the right sensor information to the right user to enhance the safety, security, and international accountability of nuclear material. One of Straight-Line's efforts is to create a system to securely disseminate this data on the Internet's World-Wide-Web. To make the user interface more intuitive, Sandia has generated a three dimensional VRML (virtual reality modeling language) interface for a secure web page. This paper will discuss the implementation of the Straight-Line secure 3-D web page. A discussion of the pros and cons of a 3-D web page is also presented. The public VRML demonstration described in this paper can be found on the Internet at this address, http://www.ca.sandia.gov/NMM/. A Netscape browser, version 3, is strongly recommended.

  11. Enhanced LOD Concepts for Virtual 3d City Models

    Science.gov (United States)

    Benner, J.; Geiger, A.; Gröger, G.; Häfele, K.-H.; Löwner, M.-O.

    2013-09-01

    Virtual 3D city models contain digital three-dimensional representations of city objects like buildings, streets or technical infrastructure. Because the size and complexity of these models continuously grow, a Level of Detail (LoD) concept is indispensable: one that effectively supports partitioning a complete model into alternative models of different complexity, and that provides metadata addressing the informational content, complexity and quality of each alternative model. After a short overview of various LoD concepts, this paper discusses the existing LoD concept of the CityGML standard for 3D city models and identifies a number of deficits. Based on this analysis, an alternative concept is developed and illustrated with several examples. It differentiates, first, between a Geometric Level of Detail (GLoD) and a Semantic Level of Detail (SLoD), and second, between the building interior and its exterior shell. Finally, a possible implementation of the new concept is demonstrated by means of a UML model.
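The GLoD/SLoD separation described in the record can be sketched as a small data model. The class and field names below are illustrative, not the authors' UML schema:

```python
# Sketch of tagging alternative city-object models with separate geometric
# (GLoD) and semantic (SLoD) levels plus interior/exterior scope. Names are
# illustrative, not the authors' schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class LodTag:
    glod: int          # geometric refinement level (0 = coarsest)
    slod: int          # semantic refinement level (0 = coarsest)
    interior: bool     # True for an interior model, False for the exterior shell

@dataclass
class CityObjectModel:
    object_id: str
    tag: LodTag

def best_model(models, max_glod, want_interior):
    """Pick the most geometrically detailed model not exceeding max_glod."""
    candidates = [m for m in models
                  if m.tag.interior == want_interior and m.tag.glod <= max_glod]
    return max(candidates, key=lambda m: m.tag.glod) if candidates else None
```

A viewer could call `best_model` per object to select the cheapest adequate alternative, which is the partitioning role the LoD concept is meant to support.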

  12. Virtual reality 3D headset based on DMD light modulators

    Energy Technology Data Exchange (ETDEWEB)

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-13

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micromirror devices (DMD). Our approach leverages silicon micromirrors offering 720p-resolution displays in a small form factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high resolution and low power consumption. Applications include night driving, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design is described in which light from the DMD is imaged to infinity and the user's own eye lens forms a real image on the user's retina.

  13. An Indoor Scene Recognition-Based 3D Registration Mechanism for Real-Time AR-GIS Visualization in Mobile Applications

    Directory of Open Access Journals (Sweden)

    Wei Ma

    2018-03-01

    Mobile Augmented Reality (MAR) systems are becoming ideal platforms for visualization, permitting users to better comprehend and interact with spatial information. This technological development, in turn, has prompted efforts to enhance mechanisms for registering virtual objects in real-world contexts. Most existing AR 3D registration techniques lack the scene recognition capabilities needed to describe accurately the positioning of virtual objects in scenes representing reality. Moreover, the application of such registration methods in indoor AR-GIS systems is further impeded by the limited capacity of these systems to detect the geometry and semantic information in indoor environments. In this paper, we propose a novel method for fusing virtual objects and indoor scenes, based on indoor scene recognition technology. To accomplish scene fusion in AR-GIS, we first detect key points in reference images. Then, we perform interior layout extraction using a Fully Convolutional Network (FCN) algorithm to acquire layout coordinate points for the tracking targets. We detect and recognize the target scene in a video frame image to track targets and estimate the camera pose. In this method, virtual 3D objects are fused precisely to a real scene, according to the camera pose and the previously extracted layout coordinate points. Our results demonstrate that this approach enables accurate fusion of virtual objects with representations of real-world indoor environments. Based on this fusion technique, users can better grasp virtual three-dimensional representations on an AR-GIS platform.
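The registration step (estimating how reference-image key points map into the video frame) can be illustrated with a textbook direct linear transform (DLT) homography fit in NumPy; this is a standard sketch, not the authors' implementation:

```python
# Sketch: estimate the planar homography mapping reference-image key points to
# their positions in a video frame, the core of the pose/registration step.
# Standard DLT with NumPy; illustrative, not the authors' code.
import numpy as np

def estimate_homography(src, dst):
    """DLT: solve dst ~ H @ src for a 3x3 H from >= 4 point pairs (Nx2 arrays)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two homogeneous linear constraints.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    H = vt[-1].reshape(3, 3)          # null-space vector = homography entries
    return H / H[2, 2]

def apply_homography(H, pts):
    pts = np.asarray(pts, float)
    q = (H @ np.hstack([pts, np.ones((len(pts), 1))]).T).T
    return q[:, :2] / q[:, 2:3]
```

Given such a mapping for the extracted layout coordinate points, a virtual 3D object can be placed consistently with the estimated camera pose.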

  14. Real-time target tracking of soft tissues in 3D ultrasound images based on robust visual information and mechanical simulation.

    Science.gov (United States)

    Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud

    2017-01-01

    In this paper, we present a real-time approach that allows tracking of deformable structures in 3D ultrasound sequences. Our method obtains the target displacements by combining robust dense motion estimation with mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. Results demonstrate that this novel approach has the advantage of providing correct motion estimation in the presence of different ultrasound shortcomings, including speckle noise, large shadows and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges. Copyright © 2016 Elsevier B.V. All rights reserved.
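The fusion idea, blending a dense visual motion estimate with a mechanically simulated displacement field, can be sketched as a per-point confidence weighting (low confidence under shadows or heavy speckle). The weighting scheme is illustrative, not the authors' formulation:

```python
# Sketch: convex per-point blend of a visual displacement estimate with a
# mechanically simulated one, weighted by confidence in the image information.
# Illustrative only; not the authors' formulation.
import numpy as np

def fuse_displacements(visual, mechanical, confidence):
    """Blend two (N, 3) displacement fields point-wise.

    confidence: (N,) values in [0, 1]; 1 trusts the visual estimate fully.
    """
    c = np.clip(np.asarray(confidence, float), 0.0, 1.0)[:, None]
    return c * np.asarray(visual, float) + (1.0 - c) * np.asarray(mechanical, float)
```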

  15. The Photogrammetric Survey Methodologies Applied to Low Cost 3d Virtual Exploration in Multidisciplinary Field

    Science.gov (United States)

    Palestini, C.; Basso, A.

    2017-11-01

    In recent years, increased international investment in hardware and software supporting programs that adopt algorithms for photomodeling or for managing laser-scanner data has significantly reduced the cost of operations in support of Augmented Reality and Virtual Reality, which are designed to generate real-time explorable digital environments integrated with virtual stereoscopic headsets. The research analyzes transversal methodologies related to the acquisition of these technologies in order to place the current VR tools within a specific workflow, in light of issues related to the intensive use of such devices, and outlines a quick overview of a possible "virtual migration" phenomenon, assuming an integration with new high-speed internet systems capable of triggering a massive colonization of cyberspace that would, paradoxically, also affect everyday life and, more generally, human perception of space. The contribution aims at analyzing the application systems used for low-cost 3D photogrammetry by means of a precise pipeline, clarifying how a 3D model is generated, automatically retopologized, textured by color painting or photo-cloning techniques, and optimized for parametric insertion into virtual exploration platforms. The workflow analysis follows case studies related to photomodeling, digital retopology and the "virtual 3D transfer" of some small archaeological artifacts and of an architectural compartment corresponding to the pronaos of the Aurum, a building designed in the 1940s by Michelucci. All operations are conducted with cheap or free-licensed software that today offers almost the same performance as its paid counterparts, progressively improving in data-processing speed and management.

  16. 3D for Geosciences: Interactive Tangibles and Virtual Models

    Science.gov (United States)

    Pippin, J. E.; Matheney, M.; Kitsch, N.; Rosado, G.; Thompson, Z.; Pierce, S. A.

    2016-12-01

    Point cloud processing provides a method of studying and modelling geologic features relevant to geoscience systems and processes. Here, software including Skanect, MeshLab, Blender, PDAL, and PCL is used in conjunction with 3D scanning hardware, including a Structure scanner and a Kinect camera, to create and analyze point cloud images of small-scale topography, karst features, tunnels, and structures at high resolution. This project successfully scanned internal karst features ranging from small stalactites to large rooms, as well as an external waterfall feature. For comparison purposes, multiple scans of the same object were merged into single object files both automatically, using commercial software, and manually, using open source libraries and code. Files in .ply format were manually converted into numeric data sets and analyzed for similar regions between files in order to match them together. We can assume a numeric process would be more powerful and efficient than the manual method; however, it could lack useful features that GUIs may have. The digital models have applications in mining as an efficient means of replacing traditional topography functions such as measuring distances and areas. Additionally, it is possible to make simulation models such as drilling templates and calculations related to 3D spaces. Advantages of the methods described here include the relatively short time needed to obtain data and the easy transport of the equipment. With regard to open-pit mining, obtaining precise 3D images of large surfaces would be a high-value tool when scan data are georeferenced to interactive maps. The digital 3D images obtained from scans may be saved as printable files to create tangible, 3D-printed physical models based on scientific information, as well as digital "worlds" able to be navigated virtually. The data, models, and algorithms explored here can be used to convey complex scientific ideas to a range of

  17. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    International Nuclear Information System (INIS)

    Min Yugang; Santhanam, Anand; Ruddy, Bari H; Neelakkantan, Harini; Meeks, Sanford L; Kupelian, Patrick A

    2010-01-01

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan generated with the Pinnacle Treatment Planning System (Philips) for one of the 3DCTs of the 4DCT, and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.
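The per-beam accumulation described in the record, summing the dose delivered at each discrete lung-volume step during the beam-on window, can be sketched with NumPy; the arrays here are hypothetical stand-ins for the framework's dose maps:

```python
# Sketch of the accumulation step: the dose delivered to the target during one
# beam is the sum of the dose deposited at each discrete tumor-motion step in
# the beam-on window. Illustrative NumPy, not the authors' GPU kernels; the
# dose values used in testing are hypothetical.
import numpy as np

def accumulate_beam_dose(dose_per_step, beam_on):
    """Sum per-motion-step dose maps over the steps where the beam is on.

    dose_per_step: (n_steps, nx, ny) dose deposited at each tumor-motion step
    beam_on:       (n_steps,) boolean mask for the beam delivery window
    """
    dose_per_step = np.asarray(dose_per_step, float)
    mask = np.asarray(beam_on, bool)
    return dose_per_step[mask].sum(axis=0)
```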

  18. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    Energy Technology Data Exchange (ETDEWEB)

    Min Yugang; Santhanam, Anand; Ruddy, Bari H [University of Central Florida, FL (United States); Neelakkantan, Harini; Meeks, Sanford L [M D Anderson Cancer Center Orlando, FL (United States); Kupelian, Patrick A, E-mail: anand.santhanam@orlandohealth.co [Department of Radiation Oncology, University of California, Los Angeles, CA (United States)

    2010-09-07

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan generated with the Pinnacle Treatment Planning System (Philips) for one of the 3DCTs of the 4DCT, and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  19. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion.

    Science.gov (United States)

    Min, Yugang; Santhanam, Anand; Neelakkantan, Harini; Ruddy, Bari H; Meeks, Sanford L; Kupelian, Patrick A

    2010-09-07

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan generated with the Pinnacle Treatment Planning System (Philips) for one of the 3DCTs of the 4DCT, and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  20. Research on 3D virtual campus scene modeling based on 3ds Max and VRML

    Science.gov (United States)

    Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue

    2015-12-01

    With the rapid development of modern technology, digital information management and virtual reality simulation technology have become research hotspots. A virtual campus 3D model can not only represent real-world objects naturally, realistically and vividly, but can also expand the real time and space dimensions of the campus, combining the school environment with information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special land parcels, and other objects. Dynamic interactive functions are then realized by programming the 3ds Max object models in VRML. This research focuses on virtual campus scene modeling technology and VRML scene design, and on a variety of real-time processing optimization strategies in the scene design process. The paper preserves texture-map image quality while improving the running speed of image texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.

  1. Mitigating Space Weather Impacts on the Power Grid in Real-Time: Applying 3-D EarthScope Magnetotelluric Data to Forecasting Reactive Power Loss in Power Transformers

    Science.gov (United States)

    Schultz, A.; Bonner, L. R., IV

    2017-12-01

    Current efforts to assess risk to the power grid from geomagnetic disturbances (GMDs) that result in geomagnetically induced currents (GICs) seek to identify potential "hotspots," based on statistical models of GMD storm scenarios and power distribution grounding models that assume that the electrical conductivity of the Earth's crust and mantle varies only with depth. The NSF-supported EarthScope Magnetotelluric (MT) Program operated by Oregon State University has mapped 3-D ground electrical conductivity structure across more than half of the continental US. MT data, the naturally occurring time variations in the Earth's vector electric and magnetic fields at ground level, are used to determine the MT impedance tensor for each site (the ratio of horizontal vector electric and magnetic fields at ground level expressed as a complex-valued frequency domain quantity). The impedance provides information on the 3-D electrical conductivity structure of the Earth's crust and mantle. We demonstrate that use of 3-D ground conductivity information significantly improves the fidelity of GIC predictions over existing 1-D approaches. We project real-time magnetic field data streams from US Geological Survey magnetic observatories into a set of linear filters that employ the impedance data and that generate estimates of ground level electric fields at the locations of MT stations. The resulting ground electric fields are projected to and integrated along the path of power transmission lines. These serve as inputs to power flow models that represent the power transmission grid, yielding a time-varying set of quasi-real-time estimates of reactive power loss at the power transformers that are critical infrastructure for power distribution.
    We demonstrate that peak reactive power loss, and hence peak risk of transformer damage from GICs, does not necessarily occur during peak GMD storm times, but rather depends on the time evolution of the polarization of the GMD's inducing fields.
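The filtering step rests on the frequency-domain magnetotelluric relation E(ω) = Z(ω)·H(ω). The sketch below applies a single frequency-independent 2×2 impedance tensor for clarity; real processing uses the full Z(ω) estimated at each MT site:

```python
# Sketch of the linear-filter step: at each frequency the horizontal electric
# field follows E(w) = Z(w) @ H(w), with Z the 2x2 MT impedance tensor. For
# clarity this uses one frequency-independent Z, a deliberate simplification.
import numpy as np

def efield_from_bfield(hx, hy, Z):
    """Estimate Ex, Ey time series from Hx, Hy via a constant impedance tensor."""
    Hx, Hy = np.fft.rfft(hx), np.fft.rfft(hy)
    Ex = Z[0, 0] * Hx + Z[0, 1] * Hy
    Ey = Z[1, 0] * Hx + Z[1, 1] * Hy
    return np.fft.irfft(Ex, len(hx)), np.fft.irfft(Ey, len(hy))
```

Integrating the resulting electric field along a transmission-line path gives the driving voltage used by the GIC power-flow models.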

  2. Virtual reality 3D headset based on DMD light modulators

    Science.gov (United States)

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-01

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micromirror devices (DMD). Current methods for presenting information for virtual reality are focused on either polarization-based modulators such as liquid crystal on silicon (LCoS) devices, or miniature LCD or LED displays often using lenses to place the image at infinity. LCoS modulators are an area of active research and development, and reduce the amount of viewing light by 50% due to the use of polarization. Viewable LCD or LED screens may suffer low resolution, cause eye fatigue, and exhibit a "screen door" or pixelation effect due to the low pixel fill factor. Our approach leverages a mature technology based on silicon micromirrors delivering 720p resolution displays in a small form-factor with high fill factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high-definition resolution and low power consumption, and many of the design methods developed for DMD projector applications can be adapted to display use. Potential applications include night driving with natural depth perception, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design concept is described in which light from the DMD is imaged to infinity and the user's own eye lens forms a real image on the user's retina resulting in a virtual retinal display.
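A back-of-envelope check of the design trade-off: with the image at infinity, the angular subtense of one pixel is roughly the field of view divided by the horizontal pixel count. The 40° FOV below is an assumed value for illustration; the abstract specifies only 720p:

```python
# Back-of-envelope sketch: angular pixel pitch of a display imaged to infinity
# is approximately FOV / horizontal_pixels. The 40-degree FOV is an assumed
# value for illustration; the record only states 720p resolution.

def pixels_per_degree(h_pixels, fov_deg):
    return h_pixels / fov_deg

def arcmin_per_pixel(h_pixels, fov_deg):
    return 60.0 * fov_deg / h_pixels
```

At 1280 horizontal pixels over an assumed 40° FOV this gives 32 pixels/degree, i.e. about 1.9 arcmin per pixel, a few times coarser than the roughly 1 arcmin often quoted for foveal acuity, which is why fill factor and pixelation matter in the comparison above.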

  3. Virtual 3D planning of tracheostomy placement and clinical applicability of 3D cannula design: a three-step study.

    Science.gov (United States)

    de Kleijn, Bertram J; Kraeima, Joep; Wachters, Jasper E; van der Laan, Bernard F A M; Wedman, Jan; Witjes, M J H; Halmos, Gyorgy B

    2018-02-01

    We aimed to investigate the potential of 3D virtual planning of tracheostomy tube placement and 3D cannula design to prevent tracheostomy complications due to inadequate cannula position. 3D models of commercially available cannulas were positioned in 3D models of the airway. In study (1), a cohort that underwent tracheostomy between 2013 and 2015 was selected (n = 26). The cannula was virtually placed in the airway in the pre-operative CT scan and its position was compared to the cannula position on post-operative CT scans. In study (2), a cohort with neuromuscular disease (n = 14) was analyzed. Virtual cannula placement was performed in CT scans to test whether problems could be anticipated. Finally (3), for a patient with Duchenne muscular dystrophy and complications of a conventional tracheostomy cannula, a patient-specific cannula was 3D designed, fabricated, and placed. (1) The 3D planned and post-operative tracheostomy positions differed significantly. (2) Three groups of patients were identified: (A) normal anatomy; (B) abnormal anatomy, commercially available cannula fits; and (C) abnormal anatomy, custom-made cannula may be necessary. (3) The position of the custom-designed cannula was optimal and the trachea healed. Virtual planning of the tracheostomy did not correlate with actual cannula position. Identifying patients with abnormal airway anatomy in whom commercially available cannulas cannot be optimally positioned is advantageous. Patient-specific cannula design based on 3D virtualization of the airway was beneficial in a patient with abnormal airway anatomy.

  4. Evaluation of two 3D virtual computer reconstructions for comparison of cleft lip and palate to normal fetal microanatomy.

    Science.gov (United States)

    Landes, Constantin A; Weichert, Frank; Geis, Philipp; Fritsch, Helga; Wagner, Mathias

    2006-03-01

    Cleft lip and palate reconstructive surgery requires thorough knowledge of normal and pathological labial, palatal, and velopharyngeal anatomy. This study compared two software algorithms and their 3D virtual anatomical reconstruction because exact 3D micromorphological reconstruction may improve learning, reveal spatial relationships, and provide data for mathematical modeling. Transverse and frontal serial sections of the midface of 18 fetal specimens (11th to 32nd gestational week) were used for two manual segmentation approaches. The first manual segmentation approach used bitmap images and either Windows-based or Mac-based SURFdriver commercial software that allowed manual contour matching, surface generation with average slice thickness, 3D triangulation, and real-time interactive virtual 3D reconstruction viewing. The second manual segmentation approach used tagged image format and platform-independent prototypical SeViSe software developed by one of the authors (F.W.). Distended or compressed structures were dynamically transformed. Registration was automatic but allowed manual correction, such as individual section thickness, surface generation, and interactive virtual 3D real-time viewing. SURFdriver permitted intuitive segmentation, easy manual offset correction, and the reconstruction showed complex spatial relationships in real time. However, frequent software crashes and erroneous landmarks appearing "out of the blue," requiring manual correction, were tedious. Individual section thickness, defined smoothing, and unlimited structure number could not be integrated. The reconstruction remained underdimensioned and not sufficiently accurate for this study's reconstruction problem. SeViSe permitted unlimited structure number, late addition of extra sections, and quantified smoothing and individual slice thickness; however, SeViSe required more elaborate work-up compared to SURFdriver, yet detailed and exact 3D reconstructions were created.

  5. NEDE: an open-source scripting suite for developing experiments in 3D virtual environments.

    Science.gov (United States)

    Jangraw, David C; Johri, Ansh; Gribetz, Meron; Sajda, Paul

    2014-09-30

    As neuroscientists endeavor to understand the brain's response to ecologically valid scenarios, many are leaving behind hyper-controlled paradigms in favor of more realistic ones. This movement has made the use of 3D rendering software an increasingly compelling option. However, mastering such software and scripting rigorous experiments requires a daunting amount of time and effort. To reduce these startup costs and make virtual environment studies more accessible to researchers, we demonstrate a naturalistic experimental design environment (NEDE) that allows experimenters to present realistic virtual stimuli while still providing tight control over the subject's experience. NEDE is a suite of open-source scripts built on the widely used Unity3D game development software, giving experimenters access to powerful rendering tools while interfacing with eye tracking and EEG, randomizing stimuli, and providing custom task prompts. Researchers using NEDE can present a dynamic 3D virtual environment in which randomized stimulus objects can be placed, allowing subjects to explore in search of these objects. NEDE interfaces with a research-grade eye tracker in real-time to maintain precise timing records and sync with EEG or other recording modalities. Python offers an alternative for experienced programmers who feel comfortable mastering and integrating the various toolboxes available. NEDE combines many of these capabilities with an easy-to-use interface and, through Unity's extensive user base, a much more substantial body of assets and tutorials. Our flexible, open-source experimental design system lowers the barrier to entry for neuroscientists interested in developing experiments in realistic virtual environments. Copyright © 2014 Elsevier B.V. All rights reserved.
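Two of the services NEDE provides, stimulus randomization and precise event timing, can be sketched in Python for illustration (NEDE itself is a suite of Unity3D scripts; the function and class names here are hypothetical):

```python
# Sketch of two NEDE-style services, stimulus randomization and a timestamped
# event log, written in Python for illustration. NEDE itself is a set of
# Unity3D scripts; these names are hypothetical.
import random
import time

def randomized_schedule(stimuli, n_trials, seed=None):
    """Return a trial list cycling through the stimuli in shuffled blocks."""
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n_trials:
        block = list(stimuli)
        rng.shuffle(block)            # each block is a fresh random permutation
        schedule.extend(block)
    return schedule[:n_trials]

class EventLog:
    """Collect (timestamp, label) pairs for later sync with EEG/eye tracking."""
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.events = []

    def mark(self, label):
        self.events.append((self.clock(), label))
```

Block-wise shuffling balances stimulus frequency over time, and a monotonic clock keeps the event log usable for aligning recordings afterward.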

  6. 3D Technology Selection for a Virtual Learning Environment by Blending ISO 9126 Standard and AHP

    Science.gov (United States)

    Cetin, Aydin; Guler, Inan

    2011-01-01

    Web3D presents many opportunities for learners in a virtual world or virtual environment over the web. This is a great opportunity for open-distance education institutions to benefit from Web3D technologies to create courses with interactive 3D materials. There are many open source and commercial products offering 3D technologies over the web…
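The AHP side of the blended selection method can be illustrated with the standard priority-vector computation: criterion weights come from the principal eigenvector of a pairwise-comparison matrix, and a consistency ratio flags contradictory judgments. This is textbook AHP, not the paper's matrices:

```python
# Sketch of the AHP step: derive criterion weights from a reciprocal
# pairwise-comparison matrix via its principal eigenvector and check Saaty's
# consistency ratio (CR). Textbook AHP; the judgments tested are illustrative.
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty's random index

def ahp_weights(A):
    """Return (weights, consistency_ratio) for a reciprocal comparison matrix."""
    A = np.asarray(A, float)
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                  # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()                           # normalized priority vector
    ci = (vals[k].real - n) / (n - 1) if n > 1 else 0.0
    cr = ci / RI[n] if RI.get(n, 0.0) > 0 else 0.0
    return w, cr
```

A CR below about 0.1 is conventionally taken as acceptably consistent; higher values suggest the pairwise judgments should be revisited before combining them with the ISO 9126 criteria.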

  7. Novel, high-definition 3-D endoscopy system with real-time compression communication system to aid diagnoses and treatment between hospitals in Thailand.

    Science.gov (United States)

    Uemura, Munenori; Kenmotsu, Hajime; Tomikawa, Morimasa; Kumashiro, Ryuichi; Yamashita, Makoto; Ikeda, Testuo; Yamashita, Hiromasa; Chiba, Toshio; Hayashi, Koichi; Sakae, Eiji; Eguchi, Mitsuo; Fukuyo, Tsuneo; Chittmittrapap, Soottiporn; Navicharern, Patpong; Chotiwan, Pornarong; Pattana-Arum, Jirawat; Hashizume, Makoto

    2015-05-01

    Traditionally, laparoscopy has been based on 2-D imaging, which represents a considerable challenge. As a result, 3-D visualization technology has been proposed as a way to better facilitate laparoscopy. We compared the latest 3-D systems with high-end 2-D monitors to validate the usefulness of new systems for endoscopic diagnoses and treatment in Thailand. We compared the abilities of our high-definition 3-D endoscopy system with real-time compression communication system with a conventional high-definition (2-D) endoscopy system by asking health-care staff to complete tasks. Participants completed questionnaires indicating whether procedures were easier using our system or the 2-D endoscopy system. Participants were significantly faster at suture insertion with our system (34.44 ± 15.91 s) than with the 2-D system (52.56 ± 37.51 s) (P < 0.01). Most surgeons thought that the 3-D system was good in terms of contrast, brightness, perception of the anteroposterior position of the needle, needle grasping, inserting the needle as planned, and needle adjustment during laparoscopic surgery. Several surgeons highlighted the usefulness of exposing and clipping the bile duct and gallbladder artery, as well as dissection from the liver bed during laparoscopic surgery. In an image-transfer experiment with RePure-L®, participants at Rajavithi Hospital could obtain reconstructed 3-D images that were non-inferior to conventional images from Chulalongkorn University Hospital (10 km away). These data suggest that our newly developed system could be of considerable benefit to the health-care system in Thailand. Transmission of moving endoscopic images from a center of excellence to a rural hospital could help in the diagnosis and treatment of various diseases. © 2015 Japan Society for Endoscopic Surgery, Asia Endosurgery Task Force and Wiley Publishing Asia Pty Ltd.

  8. Design and implementation of a 3D ocean virtual reality and visualization engine

    Science.gov (United States)

    Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing

    2012-12-01

    In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean are high-fidelity simulation of the ocean environment, visualization of massive and multidimensional marine data, and imitation of marine life. VV-Ocean is composed of five modules: a memory management module, a resources management module, a scene management module, a rendering process management module and an interaction management module. VV-Ocean has three core functions: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, and imitating and simulating marine life intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce the drifting and diffusion of spilled oil from the sea bottom to the surface. Environmental factors such as ocean currents and the wind field have been considered in this simulation. On this platform the oil spilling process can be abstracted as the movement of abundant oil particles. The results show that the oil particles blend well with the water and that the platform meets the requirements for real-time, interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.
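The oil-spill module's particle abstraction can be sketched as a simple advection step: each surface particle moves with the local current plus a small fraction of the wind (the 3% wind-drift factor is a common rule of thumb, used here purely for illustration):

```python
# Sketch of the particle abstraction: each surface oil particle is advected by
# the local current plus a fraction of the wind. The 3% wind-drift coefficient
# is a common rule of thumb; coefficient and fields here are illustrative.
import numpy as np

def advect(positions, current, wind, dt, wind_factor=0.03):
    """One explicit Euler step for N particles; all arrays are (N, 2), m and m/s."""
    positions = np.asarray(positions, float)
    velocity = np.asarray(current, float) + wind_factor * np.asarray(wind, float)
    return positions + dt * velocity
```

Stepping many such particles per frame, with positions fed to the rendering modules, yields the interactive drift-and-diffusion visualization the platform demonstrates.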

  9. Virtual decoupling flight control via real-time trajectory synthesis and tracking

    Science.gov (United States)

    Zhang, Xuefu

    The production of the General Aviation industry has declined in the past 25 years. Ironically, however, the increasing demand for air travel as a fast, safe, and high-quality mode of transportation has been far from satisfied. Addressing this demand shortfall with personal air transportation necessitates advanced systems for navigation, guidance, control, flight management, and flight traffic control. Among them, an effective decoupling flight control system will not only improve flight quality, safety, and simplicity, and increase air space usage, but also reduce expenses on pilot initial and current training, and thus expand the current market and explore new markets. Because of the formidable difficulties encountered in the actual decoupling of non-linear, time-variant, and highly coupled flight control systems through traditional approaches, a new approach, which essentially converts the decoupling problem into a real-time trajectory synthesis and tracking problem, is employed. Then, the converted problem is solved and a virtual decoupling effect is achieved. In this approach, a trajectory in inertial space can be predefined and dynamically modified based on the flight mission and the pilot's commands. A feedforward-feedback control architecture is constructed to guide the airplane along the trajectory as precisely as possible. Through this approach, the pilot has much simpler, virtually decoupled control of the airplane in terms of speed, flight path angle and horizontal radius of curvature. To verify and evaluate this approach, extensive computer simulation is performed. A large number of test cases were designed for flight control under different flight conditions. The simulation results show that our decoupling strategy is satisfactory and promising, and therefore the research can serve as a consolidated foundation for future practical applications.

  10. Precise and real-time measurement of 3D tumor motion in lung due to breathing and heartbeat, measured during radiotherapy

    International Nuclear Information System (INIS)

    Seppenwoolde, Yvette; Shirato, Hiroki; Kitamura, Kei; Shimizu, Shinichi; Herk, Marcel van; Lebesque, Joos V.; Miyasaka, Kazuo

    2002-01-01

    Purpose: In this work, three-dimensional (3D) motion of lung tumors during radiotherapy in real time was investigated. Understanding the behavior of tumor motion in lung tissue to model tumor movement is necessary for accurate (gated or breath-hold) radiotherapy or CT scanning. Methods: Twenty patients were included in this study. Before treatment, a 2-mm gold marker was implanted in or near the tumor. A real-time tumor tracking system using two fluoroscopy image processor units was installed in the treatment room. The 3D position of the implanted gold marker was determined by using real-time pattern recognition and a calibrated projection geometry. The linear accelerator was triggered to irradiate the tumor only when the gold marker was located within a certain volume. The system provided the coordinates of the gold marker during beam-on and beam-off time in all directions simultaneously, at a sample rate of 30 images per second. The recorded tumor motion was analyzed in terms of the amplitude and curvature of the tumor motion in three directions, the differences in breathing level during treatment, hysteresis (the difference between the inhalation and exhalation trajectory of the tumor), and the amplitude of tumor motion induced by cardiac motion. Results: The average amplitude of the tumor motion was greatest (12±2 mm [SD]) in the cranial-caudal direction for tumors situated in the lower lobes and not attached to rigid structures such as the chest wall or vertebrae. For the lateral and anterior-posterior directions, tumor motion was small both for upper- and lower-lobe tumors (2±1 mm). The time-averaged tumor position was closer to the exhale position, because the tumor spent more time in the exhalation than in the inhalation phase. The tumor motion was modeled as a sinusoidal movement with varying asymmetry. The tumor position in the exhale phase was more stable than the tumor position in the inhale phase during individual treatment fields. However, in many
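    The "sinusoidal movement with varying asymmetry" mentioned in this record is commonly written as a Lujan-type model, z(t) = z0 - A * cos^(2n)(pi * t / T): higher even powers n make the tumor dwell longer near the exhale position, matching the observation that the time-averaged position is closer to exhale. A minimal sketch with illustrative parameters (12 mm amplitude, 4 s period, n = 2, sampled at the tracking system's 30 Hz frame rate):

```python
import math

def tumor_position(t, z_exhale=0.0, amplitude=12.0, period=4.0, n=2):
    """Asymmetric breathing model: the tumor rests at the exhale position
    (z_exhale) and dips toward inhale once per period.  Higher n increases
    the fraction of each cycle spent near exhale."""
    return z_exhale - amplitude * math.cos(math.pi * t / period) ** (2 * n)

# Sample exactly one breathing cycle at 30 Hz.
samples = [tumor_position(i / 30.0) for i in range(120)]
mean_pos = sum(samples) / len(samples)
```

With n = 2 the time average of cos^4 is 3/8, so the mean position sits at 3/8 of the amplitude from exhale, i.e. closer to exhale than to the inhale extreme, as reported in the study.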

  11. Real-time intensity based 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy

    Science.gov (United States)

    Furtado, H.; Steiner, E.; Stock, M.; Georg, D.; Birkfellner, W.

    2014-03-01

    Intra-fractional respiratory motion during radiotherapy is one of the main sources of uncertainty in dose application, creating the need to extend the margins of the planning target volume (PTV). Real-time tumor motion tracking by 2D/3D registration using on-board kilo-voltage (kV) imaging can lead to a reduction of the PTV. One limitation of this technique when using one projection image is the inability to resolve motion along the imaging beam axis. We present a retrospective patient study to investigate the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. We used data from eighteen patients suffering from non-small cell lung cancer undergoing regular treatment at our center. For each patient we acquired a planning CT and sequences of kV and MV images during treatment. Our evaluation consisted of comparing the accuracy of motion tracking in 6 degrees-of-freedom (DOF) using the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. We use graphics processing unit rendering for real-time performance. Motion along the cranial-caudal direction could accurately be extracted when using only the kV sequence, but in the AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 3.3 mm to 1.8 mm and the motion along AP was successfully extracted. The mean registration time was 190 ± 35 ms. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in 6 DOF. Therefore, this approach is suitable for accurate, real-time tumor motion tracking with a conventional LINAC.

  12. 3D wide field-of-view Gabor-domain optical coherence microscopy advancing real-time in-vivo imaging and metrology

    Science.gov (United States)

    Canavesi, Cristina; Cogliati, Andrea; Hayes, Adam; Tankam, Patrice; Santhanam, Anand; Rolland, Jannick P.

    2017-02-01

    Real-time volumetric high-definition wide-field-of-view in-vivo cellular imaging requires micron-scale resolution in 3D. Compactness of the handheld device and distortion-free images with cellular resolution are also critically required for onsite use in clinical applications. By integrating a custom liquid lens-based microscope and a dual-axis MEMS scanner in a compact handheld probe, Gabor-domain optical coherence microscopy (GD-OCM) breaks the lateral resolution limit of optical coherence tomography through depth, overcoming the tradeoff between numerical aperture and depth of focus, enabling advances in biotechnology. Furthermore, distortion-free imaging with no post-processing is achieved with a compact, lightweight handheld MEMS scanner that obtained a 12-fold reduction in volume and 17-fold reduction in weight over a previous dual-mirror galvanometer-based scanner. Approaching the holy grail of medical imaging - noninvasive real-time imaging with histologic resolution - GD-OCM demonstrates invariant resolution of 2 μm throughout a volume of 1 x 1 x 0.6 mm3, acquired and visualized in less than 2 minutes with parallel processing on graphics processing units. Results on the metrology of manufactured materials and imaging of human tissue with GD-OCM are presented.

  13. Synchronized 2D/3D optical mapping for interactive exploration and real-time visualization of multi-function neurological images.

    Science.gov (United States)

    Zhang, Qi; Alexander, Murray; Ryner, Lawrence

    2013-01-01

    Efficient software with the ability to display multiple neurological image datasets simultaneously with full real-time interactivity is critical for brain disease diagnosis and image-guided planning. In this paper, we describe the creation and function of a new comprehensive software platform that integrates novel algorithms and functions for multiple medical image visualization, processing, and manipulation. We implement an opacity-adjustment algorithm to build 2D lookup tables for multiple slice image display and fusion, which achieves better visual results than VTK-based methods. We also develop a new real-time 2D and 3D data synchronization scheme for multi-function MR volume and slice image optical mapping and rendering simultaneously using the same adjustment operation. All these methodologies are integrated into our software framework to provide users with an efficient tool for flexibly, intuitively, and rapidly exploring and analyzing the functional and anatomical MR neurological data. Finally, we validate our new techniques and software platform with visual analysis and task-specific user studies. Copyright © 2013 Elsevier Ltd. All rights reserved.
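    The opacity-adjustment idea for slice display and fusion can be sketched as a per-intensity lookup table mapping voxel values to opacities, followed by standard alpha compositing of a functional slice over an anatomical one. The linear-ramp LUT below is an illustrative stand-in for the paper's algorithm:

```python
def build_opacity_lut(threshold=0.2, max_opacity=0.8, size=256):
    """Linear-ramp opacity LUT: intensities below the threshold are fully
    transparent, then opacity ramps linearly up to max_opacity."""
    lut = []
    for i in range(size):
        x = i / (size - 1)
        if x < threshold:
            lut.append(0.0)
        else:
            lut.append(max_opacity * (x - threshold) / (1.0 - threshold))
    return lut

def fuse_pixels(anatomical, functional, lut):
    """Alpha-composite a functional slice over an anatomical slice.
    Both inputs are flat lists of intensities in [0, 1]."""
    fused = []
    for a, f in zip(anatomical, functional):
        alpha = lut[int(f * (len(lut) - 1))]
        fused.append(alpha * f + (1.0 - alpha) * a)
    return fused

lut = build_opacity_lut()
fused = fuse_pixels([0.5, 0.5], [0.0, 1.0], lut)
```

Adjusting the threshold or ramp in the LUT changes the fusion everywhere at once, which is the property that makes a single adjustment operation drive synchronized 2D and 3D views.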

  14. SUPRA: open-source software-defined ultrasound processing for real-time applications : A 2D and 3D pipeline from beamforming to B-mode.

    Science.gov (United States)

    Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph

    2018-06-01

    Research in ultrasound imaging is limited in reproducibility by two factors: First, many existing ultrasound pipelines are protected by intellectual property, rendering exchange of code difficult. Second, most pipelines are implemented in special hardware, resulting in limited flexibility of implemented processing steps on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and regarding its run time. The pipeline shows image quality comparable to a clinical system and, backed by point spread function measurements, comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
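    The first stage such a pipeline implements, delay-and-sum beamforming, focuses a scan line by summing each channel's signal after compensating its geometric round-trip delay. A minimal single-sample sketch (idealized impulse echoes, no apodization or interpolation; the 8-element array geometry is hypothetical):

```python
import math

def delay_and_sum(channels, element_x, fs, c, depth, line_x=0.0):
    """Beamform one sample of one scan line at the given depth: sum each
    channel at the sample index matching its round-trip delay."""
    out = 0.0
    for sig, ex in zip(channels, element_x):
        # transmit path (straight down) + receive path (element to focus)
        d = depth + math.sqrt((ex - line_x) ** 2 + depth ** 2)
        idx = int(round(d / c * fs))
        if idx < len(sig):
            out += sig[idx]
    return out

# Synthesize impulse echoes from a scatterer at 30 mm depth.
c, fs = 1540.0, 40e6                              # speed of sound, sample rate
element_x = [(i - 3.5) * 0.3e-3 for i in range(8)]  # 8 elements, 0.3 mm pitch
n = 4096
channels = []
for ex in element_x:
    d = 0.03 + math.sqrt(ex ** 2 + 0.03 ** 2)
    sig = [0.0] * n
    sig[int(round(d / c * fs))] = 1.0
    channels.append(sig)

focused = delay_and_sum(channels, element_x, fs, c, depth=0.03)    # on target
unfocused = delay_and_sum(channels, element_x, fs, c, depth=0.025)  # off target
```

When the focus matches the scatterer, all eight channels add coherently; off target, the delays miss the echoes and the sum collapses, which is the contrast a B-mode image is built from.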

  15. An Embedded Real-Time Red Peach Detection System Based on an OV7670 Camera, ARM Cortex-M4 Processor and 3D Look-Up Tables

    Directory of Open Access Journals (Sweden)

    Marcel Tresanchez

    2012-10-01

    Full Text Available This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.

  16. An embedded real-time red peach detection system based on an OV7670 camera, ARM cortex-M4 processor and 3D look-up tables.

    Science.gov (United States)

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-10-22

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.
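    The 3D LUT classification described in both records amounts to precomputing a quantized RGB cube once, so that per-pixel detection at run time is a single table lookup. The linear "redness" rule below is an illustrative stand-in for the paper's fitted color models; the 32-level quantization is likewise an assumption:

```python
def build_red_lut(levels=32):
    """Precompute a levels^3 boolean LUT marking 'red peach' colors.
    The linear rule R > G + B (with a minimum brightness) is an
    illustrative color model, not the one fitted in the paper."""
    lut = [[[False] * levels for _ in range(levels)] for _ in range(levels)]
    step = 256 // levels
    for r in range(levels):
        for g in range(levels):
            for b in range(levels):
                R, G, B = r * step, g * step, b * step
                lut[r][g][b] = R > G + B and R > 100
    return lut

def classify(pixel, lut, levels=32):
    """Classify one (R, G, B) pixel with a single LUT lookup."""
    step = 256 // levels
    r, g, b = (c // step for c in pixel)
    return lut[r][g][b]

lut = build_red_lut()
```

The whole cost of the color model is paid once at build time, which is why the lookup fits comfortably in the real-time budget of a Cortex-M4.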

  17. 3D Boolean operations in virtual surgical planning.

    Science.gov (United States)

    Charton, Jerome; Laurentjoye, Mathieu; Kim, Youngjun

    2017-10-01

    Boolean operations in computer-aided design or computer graphics are a set of operations (e.g. intersection, union, subtraction) between two objects (e.g. a patient model and an implant model) that are important in performing accurate and reproducible virtual surgical planning. This requires accurate and robust techniques that can handle various types of data, such as a surface extracted from volumetric data, synthetic models, and 3D scan data. This article compares the performance of the proposed method (Boolean operations by a robust, exact, and simple method between two colliding shells (BORES)) and an existing method based on the Visualization Toolkit (VTK). In all tests presented in this article, BORES could handle complex configurations as well as report impossible configurations of the input. In contrast, the VTK implementations were unstable, did not deal with singular edges and coplanar collisions, and created several defects. The proposed method of Boolean operations, BORES, is efficient and appropriate for virtual surgical planning. Moreover, it is simple and easy to implement. In future work, we will extend the proposed method to handle non-colliding components.

  18. Virtual environment display for a 3D audio room simulation

    Science.gov (United States)

    Chapin, William L.; Foster, Scott

    1992-06-01

    Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with 4 audio Convolvotrons™ by Crystal River Engineering and coupled to the listener with a Polhemus Isotrak™ tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence into the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object oriented C++ program code.

  19. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV-kV imaging

    International Nuclear Information System (INIS)

    Liu, W; Wiersma, R D; Mao, W; Luxton, G; Xing, L

    2008-01-01

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from ∼0.5 mm for the normal adult breathing pattern to ∼1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general, highly accurate real-time

  20. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV-kV imaging.

    Science.gov (United States)

    Liu, W; Wiersma, R D; Mao, W; Luxton, G; Xing, L

    2008-12-21

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from approximately 0.5 mm for the normal adult breathing pattern to approximately 1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general
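    The geometric core of the MV-kV triangulation described in both records, once calibration has mapped detector pixels to rays in room coordinates, is recovering the marker as the point closest to the two back-projected rays. A minimal sketch with hypothetical coordinates (no calibration, gantry angles, or lever-arm handling):

```python
def closest_point_between_rays(p1, d1, p2, d2):
    """Triangulate: return the midpoint of the shortest segment joining
    two rays p1 + t*d1 and p2 + s*d2 (least-squares marker position).
    Assumes the rays are not parallel."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def scale(a, k): return [x * k for x in a]
    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, t))     # closest point on ray 1
    q2 = add(p2, scale(d2, s))     # closest point on ray 2
    return scale(add(q1, q2), 0.5)

# Two sources viewing a marker at (1, 2, 3); coordinates are hypothetical.
marker = closest_point_between_rays([0.0, 0.0, 0.0], [1.0, 2.0, 3.0],
                                    [10.0, 0.0, 0.0], [-9.0, 2.0, 3.0])
```

With noisy projections the two rays become skew, and the midpoint of their common perpendicular is the natural least-squares estimate; during arc delivery the MV ray direction must additionally be updated with the gantry angle at each frame.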

  1. Left-ventricle segmentation in real-time 3D echocardiography using a hybrid active shape model and optimal graph search approach

    Science.gov (United States)

    Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas

    2010-03-01

    Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, thus a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining active shape model (ASM) with optimal graph search. The latter is used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various information (gradient, intensity distributions, and regional-property terms) is used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.
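    The optimal-surface idea behind the graph search step can be illustrated in 2D: choose one boundary row per image column so that the summed cost is minimal subject to a smoothness constraint between neighboring columns. Dynamic programming solves this reduced case exactly; the paper's method solves the 3D analogue as a minimum-cost closed set. The cost image below is synthetic:

```python
def optimal_boundary(cost, max_jump=1):
    """Minimum-cost 1D boundary through a cost image (rows x cols),
    allowing the boundary row to change by at most max_jump per column."""
    rows, cols = len(cost), len(cost[0])
    INF = float("inf")
    dp = [[INF] * rows for _ in range(cols)]    # dp[c][r]: best cost ending at (r, c)
    back = [[0] * rows for _ in range(cols)]
    for r in range(rows):
        dp[0][r] = cost[r][0]
    for c in range(1, cols):
        for r in range(rows):
            for pr in range(max(0, r - max_jump), min(rows, r + max_jump + 1)):
                if dp[c - 1][pr] + cost[r][c] < dp[c][r]:
                    dp[c][r] = dp[c - 1][pr] + cost[r][c]
                    back[c][r] = pr
    r = min(range(rows), key=lambda i: dp[cols - 1][i])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[c][r]
        path.append(r)
    return path[::-1]

# Synthetic cost image: the cheap boundary must detour around a high cost.
cost = [[5, 5, 5, 5],
        [0, 0, 9, 0],
        [5, 5, 0, 5]]
boundary = optimal_boundary(cost)
```

The smoothness bound (max_jump) is what lets learned shape knowledge from the ASM constrain the refinement, rather than letting the boundary snap to isolated low-cost speckle.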

  2. Evaluation of accuracy about 2D vs 3D real-time position management system based on couch rotation when non-coplanar respiratory gated radiation therapy

    International Nuclear Information System (INIS)

    Kwon, Kyung Tae; Kim, Jung Soo; Sim, Hyun Sun; Min, Jung Whan; Son, Soon Yong; Han, Dong Kyoon

    2016-01-01

    Because of non-coplanar therapy with couch rotation in respiratory gated radiation therapy, the rotation of the couch changes the distance between the infrared camera and the marker, which affects marker-motion recognition by the RPM (Real-time Position Management) system. The purpose of this paper is to evaluate the accuracy of motion reflections (baseline changes) of a 2D gating configuration (two-dot marker block) and a 3D gating configuration (six-dot marker block). The motion was measured by varying the couch angle in the clockwise and counterclockwise directions by 10° in the 2D gating configuration. In the 3D gating configuration, the couch angle was changed by 10° in the clockwise direction and compared with the baseline at the reference 0°. The reference amplitude was 1.173 to 1.165, the amplitude at a couch angle of 20° was 1.132, and the amplitude at 30° was 1.083. At 350° counterclockwise, the reference amplitude was 1.168 to 1.157, the amplitude at a couch angle of 340° was 1.124, and the amplitude at 330° was 1.079. In this study, the phantom was used to quantitatively evaluate the value of the amplitude according to couch change.

  3. Evaluation of accuracy about 2D vs 3D real-time position management system based on couch rotation when non-coplanar respiratory gated radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Kyung Tae; Kim, Jung Soo [Dongnam Health University, Suwon (Korea, Republic of); Sim, Hyun Sun [College of Health Sciences, Korea University, Seoul (Korea, Republic of); Min, Jung Whan [Shingu University College, Sungnam (Korea, Republic of); Son, Soon Yong [Wonkwang Health Science University, Iksan (Korea, Republic of); Han, Dong Kyoon [College of Health Sciences, EulJi University, Daejeon (Korea, Republic of)

    2016-12-15

    Because of non-coplanar therapy with couch rotation in respiratory gated radiation therapy, the rotation of the couch changes the distance between the infrared camera and the marker, which affects marker-motion recognition by the RPM (Real-time Position Management) system. The purpose of this paper is to evaluate the accuracy of motion reflections (baseline changes) of a 2D gating configuration (two-dot marker block) and a 3D gating configuration (six-dot marker block). The motion was measured by varying the couch angle in the clockwise and counterclockwise directions by 10° in the 2D gating configuration. In the 3D gating configuration, the couch angle was changed by 10° in the clockwise direction and compared with the baseline at the reference 0°. The reference amplitude was 1.173 to 1.165, the amplitude at a couch angle of 20° was 1.132, and the amplitude at 30° was 1.083. At 350° counterclockwise, the reference amplitude was 1.168 to 1.157, the amplitude at a couch angle of 340° was 1.124, and the amplitude at 330° was 1.079. In this study, the phantom was used to quantitatively evaluate the value of the amplitude according to couch change.

  4. Real-time high resolution 3D imaging of the lyme disease spirochete adhering to and escaping from the vasculature of a living host.

    Directory of Open Access Journals (Sweden)

    Tara J Moriarty

    2008-06-01

    Full Text Available Pathogenic spirochetes are bacteria that cause a number of emerging and re-emerging diseases worldwide, including syphilis, leptospirosis, relapsing fever, and Lyme borreliosis. They navigate efficiently through dense extracellular matrix and cross the blood-brain barrier by unknown mechanisms. Due to their slender morphology, spirochetes are difficult to visualize by standard light microscopy, impeding studies of their behavior in situ. We engineered a fluorescent infectious strain of Borrelia burgdorferi, the Lyme disease pathogen, which expressed green fluorescent protein (GFP. Real-time 3D and 4D quantitative analysis of fluorescent spirochete dissemination from the microvasculature of living mice at high resolution revealed that dissemination was a multi-stage process that included transient tethering-type associations, short-term dragging interactions, and stationary adhesion. Stationary adhesions and extravasating spirochetes were most commonly observed at endothelial junctions, and translational motility of spirochetes appeared to play an integral role in transendothelial migration. To our knowledge, this is the first report of high resolution 3D and 4D visualization of dissemination of a bacterial pathogen in a living mammalian host, and provides the first direct insight into spirochete dissemination in vivo.

  5. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    Science.gov (United States)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object based video is relevant for 3D Immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task based analysis in a full prototype field trial shows increased presence, emotion, user and state recognition of the reconstructed 3D Human representation compared to animated computer avatars.

  6. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion.

    Science.gov (United States)

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-05-05

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking method for mobile AR/VR is proposed in this paper. By means of high-frequency, passive outputs from the inertial sensor, real-time delivery of poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling real-time 6-DoF motion tracking by balancing the jitter and latency. Furthermore, the robustness of traditional vision-only motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing smooth and robust 6-DoF motion tracking for mobile AR/VR in real time.
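    The adaptive fusion described above can be sketched as a complementary filter: integrate the inertial measurements at high rate for low latency, then correct toward each visual pose with a gain that adapts to motion, small when nearly still (to suppress jitter) and large under fast motion (to bound drift). The 1-D state and the gain schedule below are illustrative assumptions, not the paper's filter:

```python
def adaptive_gain(speed, k_slow=0.05, k_fast=0.5, threshold=0.5):
    """Illustrative gain schedule: low gain when nearly still (suppress
    jitter), high gain under fast motion (suppress drift and latency)."""
    return k_fast if abs(speed) > threshold else k_slow

def fuse(imu_accels, visual_poses, dt=0.01):
    """1-D complementary filter: dead-reckon with accelerometer samples,
    then pull the state toward each visual pose with an adaptive gain."""
    x, v = 0.0, 0.0
    track = []
    for a, z in zip(imu_accels, visual_poses):
        v += a * dt            # IMU integration (high rate, drifts)
        x += v * dt
        k = adaptive_gain(v)
        x += k * (z - x)       # visual correction (lower rate, jittery)
        track.append(x)
    return track

# A biased accelerometer (0.1 m/s^2) against a static visual pose of 1.0:
# the visual correction keeps the drifting dead-reckoning near the truth.
track = fuse([0.1] * 200, [1.0] * 200)
```

A full 6-DoF version applies the same structure per pose component (with quaternions for orientation), and in practice the visual updates arrive less often than the inertial samples.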

  7. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology.

    Science.gov (United States)

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-02-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D Registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512×512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. Copyright © 2011. Published by Elsevier GmbH.

  8. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    Energy Technology Data Exchange (ETDEWEB)

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph [Medical University of Vienna (Austria). Center of Medical Physics and Biomedical Engineering]; and others

    2012-07-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference X-rays. Currently, this rendering process is very time consuming, and real-time registration, which should provide data on organ position in less than a second, has not yet been achieved. We present two GPU-based rendering algorithms which generate a 512 × 512 pixel DRR from a 53 MB CT dataset at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were utilized throughout. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. (orig.)

  9. Intelligent Open Data 3D Maps in a Collaborative Virtual World

    Directory of Open Access Journals (Sweden)

    Juho-Pekka Virtanen

    2015-05-01

    Full Text Available Three-dimensional (3D) maps have many potential applications, such as navigation and urban planning. In this article, we present the use of the 3D virtual world platform Meshmoon to create intelligent open data 3D maps. A processing method is developed to enable the generation of 3D virtual environments from the open data of the National Land Survey of Finland. The article combines the elements needed in contemporary smart city concepts, such as the connection between attribute information and 3D objects, and the creation of collaborative virtual worlds from open data. By using our 3D virtual world platform, it is possible to create up-to-date, collaborative 3D virtual models, which are automatically updated for all viewers. In the scenes, all users are able to interact with the model and with each other. With the developed processing methods, the creation of virtual world scenes was partially automated for collaboration activities.

  10. Implementation of virtual models from sheet metal forming simulation into physical 3D colour models using 3D printing

    Science.gov (United States)

    Junk, S.

    2016-08-01

    Today the methods of numerical simulation of sheet metal forming offer a great diversity of possibilities for optimization in product development and in process design. However, the results from simulation are only available as virtual models. Because no forming tools are available during the early stages of product development, physical models that could represent the virtual results are lacking. Physical 3D models can be created using 3D printing; they serve as an illustration and provide a better understanding of the simulation results. In this way, the results from the simulation can be made more “comprehensible” within a development team. This paper presents the possibilities of 3D colour printing with particular consideration of the requirements arising from sheet metal forming simulation. Using concrete examples of sheet metal forming, the manufacturing of 3D colour models is explained on the basis of simulation results.

  11. Real-time 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy.

    Science.gov (United States)

    Furtado, Hugo; Steiner, Elisabeth; Stock, Markus; Georg, Dietmar; Birkfellner, Wolfgang

    2013-10-01

    Intra-fractional respiratory motion during radiotherapy leads to a larger planning target volume (PTV). Real-time tumor motion tracking by two-dimensional (2D)/3D registration using on-board kilo-voltage (kV) imaging can allow for a reduction of the PTV though motion along the imaging beam axis cannot be resolved using only one projection image. We present a retrospective patient study investigating the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. Material and methods. We used data from 10 patients suffering from non-small cell lung cancer (NSCLC) undergoing stereotactic body radiation therapy (SBRT) lung treatment. For each patient we acquired a planning computed tomography (CT) and sequences of kV and MV images during treatment. We compared the accuracy of motion tracking in six degrees-of-freedom (DOF) using the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. Results. Motion along cranial-caudal direction could accurately be extracted when using only the kV sequence but in AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 2.9 mm to 1.5 mm and the motion along AP was successfully extracted. Mean registration time was 188 ms. Conclusion. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in six DOF and is suitable for real-time tumor motion tracking with a conventional LINAC.
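
The geometric intuition behind the kV-MV pairing can be sketched as follows: a single projection is blind to displacement along its own beam axis, but two projections jointly determine all three coordinates. The sketch below idealizes the two beams as exactly orthogonal axis-aligned views, which the actual LINAC geometry only approximates; all names and coordinates are illustrative.

```python
# Why a kV-MV image pair resolves 3D motion while one projection cannot:
# each 2D image loses the coordinate along its own beam axis. With the
# kV beam along y and the MV beam along x (idealized geometry), the two
# projections together recover all three coordinates.

def project_kv(point):
    """kV beam along y: the image sees (x, z)."""
    x, _, z = point
    return (x, z)

def project_mv(point):
    """MV beam along x: the image sees (y, z)."""
    _, y, z = point
    return (y, z)

def recover_3d(kv_uv, mv_uv):
    """Combine the two 2D measurements into one 3D position estimate."""
    x, z_kv = kv_uv
    y, z_mv = mv_uv
    return (x, y, (z_kv + z_mv) / 2.0)  # z is seen by both views; average

tumor = (1.0, -2.5, 0.7)
est = recover_3d(project_kv(tumor), project_mv(tumor))  # == (1.0, -2.5, 0.7)
```

With the kV view alone, the y-coordinate (here standing in for the AP direction along the beam) would be unobservable, which is exactly the error the study reports for single-sequence tracking.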

  12. 3D-e-Chem-VM: Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine

    NARCIS (Netherlands)

    McGuire, R.; Verhoeven, S.; Vass, M.; Vriend, G.; Esch, I.J. de; Lusher, S.J.; Leurs, R.; Ridder, L.; Kooistra, A.J.; Ritschel, T.; Graaf, C. de

    2017-01-01

    3D-e-Chem-VM is an open source, freely available Virtual Machine ( http://3d-e-chem.github.io/3D-e-Chem-VM/ ) that integrates cheminformatics and bioinformatics tools for the analysis of protein-ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools

  13. 3D-e-Chem-VM : Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine

    NARCIS (Netherlands)

    McGuire, Ross; Verhoeven, Stefan; Vass, Márton; Vriend, Gerrit; De Esch, Iwan J P; Lusher, Scott J.; Leurs, Rob; Ridder, Lars; Kooistra, Albert J.; Ritschel, Tina; de Graaf, C.

    2017-01-01

    3D-e-Chem-VM is an open source, freely available Virtual Machine ( http://3d-e-chem.github.io/3D-e-Chem-VM/ ) that integrates cheminformatics and bioinformatics tools for the analysis of protein-ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools

  14. 4-D ICE: A 2-D Array Transducer With Integrated ASIC in a 10-Fr Catheter for Real-Time 3-D Intracardiac Echocardiography.

    Science.gov (United States)

    Wildes, Douglas; Lee, Warren; Haider, Bruno; Cogan, Scott; Sundaresan, Krishnakumar; Mills, David M; Yetter, Christopher; Hart, Patrick H; Haun, Christopher R; Concepcion, Mikael; Kirkhorn, Johan; Bitoun, Marc

    2016-12-01

    We developed a 2.5 × 6.6 mm² 2-D array transducer with integrated transmit/receive application-specific integrated circuit (ASIC) for real-time 3-D intracardiac echocardiography (4-D ICE) applications. The ASIC and transducer design were optimized so that the high-voltage transmit, low-voltage time-gain control and preamp, subaperture beamformer, and digital control circuits for each transducer element all fit within the 0.019-mm² area of the element. The transducer assembly was deployed in a 10-Fr (3.3-mm diameter) catheter, integrated with a GE Vivid E9 ultrasound imaging system, and evaluated in three preclinical studies. The 2-D image quality and imaging modes were comparable to commercial 2-D ICE catheters. The 4-D field of view was at least 90° × 60° × 8 cm and could be imaged at 30 vol/s, sufficient to visualize cardiac anatomy and other diagnostic and therapy catheters. 4-D ICE should significantly reduce X-ray fluoroscopy use and dose during electrophysiology ablation procedures. 4-D ICE may be able to replace transesophageal echocardiography (TEE), and the associated risks and costs of general anesthesia, for guidance of some structural heart procedures.

  15. Microwave ablation assisted by a real-time virtual navigation system for hepatocellular carcinoma undetectable by conventional ultrasonography

    International Nuclear Information System (INIS)

    Liu Fangyi; Yu Xiaoling; Liang Ping; Cheng Zhigang; Han Zhiyu; Dong Baowei; Zhang Xiaohong

    2012-01-01

    Objectives: To evaluate the efficiency and feasibility of microwave (MW) ablation assisted by a real-time virtual navigation system for hepatocellular carcinoma (HCC) undetectable by conventional ultrasonography. Methods: 18 patients with 18 HCC nodules (undetectable on conventional US but detectable by intravenous contrast-enhanced CT or MRI) were enrolled in this study. Before MW ablation, US images and MRI or CT images were synchronized using internal markers at the optimal point of inspiration. Thereafter, MW ablation was performed under real-time virtual navigation system guidance. Therapeutic efficacy was assessed by contrast-enhanced imaging after the treatment. Results: The target HCC nodules could be detected on fusion images in all patients. The time required for image fusion was 8–30 min (mean, 13.3 ± 5.7 min). 17 nodules were successfully ablated according to contrast-enhanced imaging 1 month after ablation. The technique effectiveness rate was 94.44% (17/18). The follow-up time was 3–12 months (median, 6 months). No severe complications occurred. No local recurrence was observed in any patient. Conclusions: MW ablation assisted by a real-time virtual navigation system is a feasible and efficient treatment for patients with HCC undetectable by conventional ultrasonography.
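
The image-fusion step can be illustrated with a deliberately simplified sketch: full US-CT/MRI fusion solves for a rigid (rotation plus translation) transform from the internal markers, whereas the toy code below recovers translation only, by matching marker centroids. All marker coordinates are invented for illustration.

```python
# Simplified marker-based image fusion: align two modalities by the
# translation that maps one marker cloud's centroid onto the other's.
# Real navigation systems solve for a full rigid transform; this sketch
# handles the translation component only.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def translation_from_markers(us_markers, ct_markers):
    """Translation mapping the US marker cloud onto the CT markers."""
    c_us, c_ct = centroid(us_markers), centroid(ct_markers)
    return tuple(c_ct[i] - c_us[i] for i in range(3))

# Hypothetical marker positions: the CT cloud is the US cloud shifted
# by (5, 2, 1), so that shift is what the alignment should recover.
us = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 3.0, 0.0)]
ct = [(5.0, 2.0, 1.0), (8.0, 2.0, 1.0), (5.0, 5.0, 1.0)]
t = translation_from_markers(us, ct)  # (5.0, 2.0, 1.0)
```

Once such a transform is known, any point picked on the live US image can be mapped into CT/MRI coordinates in real time, which is what makes an otherwise invisible nodule targetable.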

  16. Enhancing Learning within the 3-D Virtual Learning Environment

    OpenAIRE

    Shirin Shafieiyoun; Akbar Moazen Safaei

    2013-01-01

    The use of virtual learning environments in education is becoming increasingly prominent. The potential of virtual learning environments has frequently been related to the expanded sense of social presence experienced by students and educators. This study investigated the effectiveness of social presence within virtual learning environments and analysed the impact of social presence on increasing learning satisfaction within virtual learning environments. Second Life, as an example of ...

  17. A method for enabling real-time structural deformation in remote handling control system by utilizing offline simulation results and 3D model morphing

    International Nuclear Information System (INIS)

    Kiviranta, Sauli; Saarinen, Hannu; Maekinen, Harri; Krassi, Boris

    2011-01-01

    A full-scale physical test facility, DTP2 (Divertor Test Platform 2), has been established in Finland for demonstrating and refining the Remote Handling (RH) equipment designs for ITER. The first prototype RH equipment at DTP2 is the Cassette Multifunctional Mover (CMM) equipped with the Second Cassette End Effector (SCEE), delivered to DTP2 in October 2008. The purpose is to prove that the CMM/SCEE prototype can be used successfully for the 2nd cassette RH operations. At the end of the F4E grant 'DTP2 test facility operation and upgrade preparation', the RH operations of the 2nd cassette were successfully demonstrated to the representatives of Fusion For Energy (F4E). Due to its design, the CMM/SCEE robot has relatively large mechanical flexibilities when it carries the nine-ton 2nd cassette on a 3.6-m-long lever. This leads to poor absolute accuracy and to a situation where the 3D model used in the control system does not reflect the actual deformed state of the CMM/SCEE robot. To improve the accuracy, a new method has been developed to handle the flexibilities within the control system's virtual environment. The effect of the load on the CMM/SCEE has been measured and minimized in a load compensation model, which is implemented in the control system software. The proposed method accounts for the structural deformations of the robot in the control system through 3D model morphing, utilizing finite element method (FEM) analysis for the morph targets. This resulted in a considerable improvement of the CMM/SCEE absolute accuracy and of the adequacy of the 3D model, which is crucially important in RH applications, where visual information on the controlled device in the surrounding environment is limited.
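
The 3D model morphing used here can be illustrated with the standard morph-target blend: each vertex of the nominal model is moved toward its FEM-computed deformed position in proportion to a load-dependent weight. The vertices and weight below are illustrative, not taken from the CMM/SCEE model.

```python
# Morph-target blending: the control system's nominal 3D model is
# blended toward a FEM-derived deformed shape according to the current
# load, so the virtual model tracks the actual sagging of the robot.

def morph(nominal, target, weight):
    """Blend vertex lists per coordinate: v = v0 + w * (v1 - v0)."""
    return [
        tuple(v0[i] + weight * (v1[i] - v0[i]) for i in range(3))
        for v0, v1 in zip(nominal, target)
    ]

# Hypothetical two-vertex beam: the FEM morph target droops in z under
# full load; at half load the model is blended halfway toward it.
nominal = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
deformed = [(0.0, 0.0, -0.2), (1.0, 0.0, -0.6)]
half_load = morph(nominal, deformed, 0.5)
```

Precomputing a handful of FEM morph targets and interpolating between them at runtime is what keeps this cheap enough for a real-time control system, compared to running the FEM analysis online.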

  18. A virtual remote sensing observation network for continuous, near-real-time monitoring of atmospheric instability

    Science.gov (United States)

    Toporov, Maria; Löhnert, Ulrich; Potthast, Roland; Cimini, Domenico; De Angelis, Francesco

    2017-04-01

    Short-term forecasts of current high-resolution numerical weather prediction models still have large deficits in forecasting the exact temporal and spatial location of severe, locally influenced weather such as summer-time convective storms or cool season lifted stratus or ground fog. Often, the thermodynamic instability - especially in the boundary layer - plays an essential role in the evolution of weather events. While the thermodynamic state of the atmosphere is well measured close to the surface (i.e. 2 m) by in-situ sensors and in the upper troposphere by satellite sounders, the planetary boundary layer remains a largely under-sampled region of the atmosphere where only sporadic information from radiosondes or aircraft observations is available. The major objective of the presented DWD-funded project ARON (Extramural Research Programme) is to overcome this observational gap and to design an optimized network of ground-based microwave radiometers (MWR) and compact Differential Absorption Lidars (DIAL) for continuous, near-real-time monitoring of temperature and humidity in the atmospheric boundary layer in order to monitor thermodynamic (in)stability. Previous studies showed that microwave profilers are well suited for continuously monitoring the temporal development of atmospheric stability (e.g. Cimini et al., 2015) before the initiation of deep convection, especially in the atmospheric boundary layer. However, the vertical resolution of microwave temperature profiles is best in the lowest kilometer above the surface, decreasing rapidly with increasing height. In addition, humidity profile retrievals typically cannot be resolved with more than two degrees of freedom for signal, resulting in a rather poor vertical resolution throughout the troposphere. Typical stability indices used to assess the potential of convection rely on temperature and humidity values not only in the region of the boundary layer but also in the layers above. Therefore, satellite

  19. Integration of the virtual 3D model of a control system with the virtual controller

    Science.gov (United States)

    Herbuś, K.; Ociepka, P.

    2015-11-01

    Nowadays the design process includes simulation analysis of different components of a constructed object. This involves the need to integrate different virtual objects to simulate the whole investigated technical system. The paper presents the issues related to the integration of a virtual 3D model of a chosen control system with a virtual controller. The goal of integration is to verify the operation of the adopted object in accordance with the established control program. The object of the simulation work is the drive system of a tunneling machine for trenchless work. In the first stage of work, an interactive visualization of the functioning of the 3D virtual model of the tunneling machine was created. For this purpose, software of the VR (Virtual Reality) class was applied. In the elaborated interactive application, adequate procedures were created for controlling the drive system of translatory motion, the drive system of rotary motion and the drive system of a manipulator. Additionally, a procedure was created for turning on and off the output crushing head, mounted on the last element of the manipulator. In the elaborated interactive application, procedures were established for receiving input data from external software on the basis of dynamic data exchange (DDE), which allow controlling the actuators of particular control systems of the considered machine. In the next stage of work, the program for the virtual controller was created in the ladder diagram (LD) language. The control program was developed on the basis of the adopted work cycle of the tunneling machine. The element integrating the virtual model of the tunneling machine for trenchless work with the virtual controller is an application written in a high-level language (Visual Basic).
In the developed application, procedures were created that collect data from the virtual controller running in simulation mode and transfer them to the interactive application, in which is verified the

  20. The Engelbourg's ruins: from 3D TLS point cloud acquisition to 3D virtual and historic models

    Science.gov (United States)

    Koehl, Mathieu; Berger, Solveig; Nobile, Sylvain

    2014-05-01

    The Castle of Engelbourg was built at the beginning of the 13th century at the top of the Schlossberg. It is situated on the territory of the municipality of Thann (France), at the crossroads of Alsace and Lorraine, and dominates the outlet of the valley of the Thur. Its strategic position was one of the causes of its systematic destruction during the 17th century, and Louis XIV sealed its fate by ordering its demolition in 1673. Today only a few vestiges remain, among them a section of the main tower, about 7 m in diameter and 4 m wide, lying on its side, a unique feature in the regional castral landscape. Visible from the valley, it was named "the Eye of the witch" and became a key attraction of the region. The site, which extends over approximately one hectare, has for several years been the object of numerous archaeological studies and is today at the heart of a project to valorize the vestiges. A key objective, among the numerous planned works, was to realize a 3D model of the site in its current state, in other words a virtual model "as surveyed", exploitable from a cultural and tourist point of view as well as by scientists in archaeological research. The team of the ICube/INSA lab was responsible for the realization of this model, from the acquisition of the data to the delivery of the virtual model, using 3D TLS and topographic surveying methods. It was also planned to integrate into this 3D model 2D archive data stemming from series of former excavations. The objectives of this project were the following: • Acquisition of 3D digital data of the site and 3D modelling • Digitization of the 2D archaeological data and integration in the 3D model • Implementation of a database connected to the 3D model • Virtual visit of the site The obtained results allowed us to visualize every 3D object individually, in several forms (point clouds, 3D meshed objects and models, etc.) and at several levels of detail

  1. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network

    Science.gov (United States)

    Bukhari, W.; Hong, S.-M.

    2016-03-01

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient’s breathing cycle. The algorithm, named EKF-GPRN+, first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN+ prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+. The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating function, albeit
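
The per-coordinate predictor can be illustrated, in a much simplified form, by a linear constant-velocity Kalman filter on one coordinate. This sketch omits the GPRN correction and the paper's actual EKF model entirely; the time step, noise parameters and test trace are all illustrative.

```python
# Constant-velocity Kalman filter on one coordinate, as a stand-in for
# the per-coordinate EKF stage: state x = [position, velocity],
# F = [[1, dt], [0, 1]], position-only measurements (H = [1, 0]).

def track_and_predict(measurements, dt=0.1, q=1e-3, r=1e-2, lookahead=0.2):
    """Filter a 1D trace; return the final state and a lookahead-time
    position prediction extrapolated from the last update."""
    x = [measurements[0], 0.0]
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0  # initial covariance = I
    for z in measurements[1:]:
        # Predict: x <- F x, P <- F P F^T + Q (Q = q * I for simplicity)
        x = [x[0] + dt * x[1], x[1]]
        n00 = p00 + dt * p10 + dt * (p01 + dt * p11) + q
        n01 = p01 + dt * p11
        n10 = p10 + dt * p11
        n11 = p11 + q
        p00, p01, p10, p11 = n00, n01, n10, n11
        # Update with the position measurement z (variance r)
        s = p00 + r
        k0, k1 = p00 / s, p10 / s
        innov = z - x[0]
        x = [x[0] + k0 * innov, x[1] + k1 * innov]
        u00, u01 = (1 - k0) * p00, (1 - k0) * p01
        u10, u11 = p10 - k1 * p00, p11 - k1 * p01
        p00, p01, p10, p11 = u00, u01, u10, u11
    return x, x[0] + lookahead * x[1]

# Illustrative usage: a linearly drifting trace sampled at dt = 0.1 s;
# the velocity estimate should approach the true drift rate of 1.0.
state, pred = track_and_predict([0.1 * k for k in range(60)])
```

The real respiratory signal is quasi-periodic rather than linear, which is precisely why the paper layers a GPRN on top of the EKF: the filter's residual error on such traces is structured and can be learned.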

  2. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network

    International Nuclear Information System (INIS)

    Bukhari, W; Hong, S-M

    2016-01-01

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient’s breathing cycle. The algorithm, named EKF-GPRN+, first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN+ prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+. The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating function

  3. PAST AND FUTURE APPLICATIONS OF 3-D (VIRTUAL REALITY) TECHNOLOGY

    OpenAIRE

    Nigel Foreman; Liliya Korallo

    2014-01-01

    Virtual Reality (virtual environment technology, VET) has been widely available for twenty years. In that time, the benefits of using virtual environments (VEs) have become clear in many areas of application, including assessment and training, education, rehabilitation and psychological research in spatial cognition. The flexibility, reproducibility and adaptability of VEs are especially important, particularly in the training and testing of navigational and way-finding skills. Transfer of tr...

  4. 3D Virtual Learning Environments in Education: A Meta-Review

    Science.gov (United States)

    Reisoglu, I.; Topu, B.; Yilmaz, R.; Karakus Yilmaz, T.; Göktas, Y.

    2017-01-01

    The aim of this study is to investigate recent empirical research studies about 3D virtual learning environments. A total of 167 empirical studies that involve the use of 3D virtual worlds in education were examined by meta-review. Our findings show that the "Second Life" platform has been frequently used in studies. Among the reviewed…

  5. Rapid prototyping 3D virtual world interfaces within a virtual factory environment

    Science.gov (United States)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.

  6. Contextual EFL Learning in a 3D Virtual Environment

    Science.gov (United States)

    Lan, Yu-Ju

    2015-01-01

    The purposes of the current study are to develop virtually immersive EFL learning contexts for EFL learners in Taiwan to pre- and review English materials beyond the regular English class schedule. A 2-iteration action research lasting for one semester was conducted to evaluate the effects of virtual contexts on learners' EFL learning. 132…

  7. Effectiveness of Collaborative Learning with 3D Virtual Worlds

    Science.gov (United States)

    Cho, Young Hoan; Lim, Kenneth Y. T.

    2017-01-01

    Virtual worlds have affordances to enhance collaborative learning in authentic contexts. Despite the potential of collaborative learning with a virtual world, few studies investigated whether it is more effective in student achievements than teacher-directed instruction. This study investigated the effectiveness of collaborative problem solving…

  8. Evaluation of Real-Time Performance of the Virtual Seismologist Earthquake Early Warning Algorithm in Switzerland and California

    Science.gov (United States)

    Behr, Y.; Cua, G. B.; Clinton, J. F.; Heaton, T. H.

    2012-12-01

    The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) originally formulated by Cua and Heaton (2007). Implementation of VS into real-time EEW codes has been an on-going effort of the Swiss Seismological Service at ETH Zürich since 2006, with support from ETH Zürich, various European projects, and the United States Geological Survey (USGS). VS is one of three EEW algorithms - the other two being ElarmS (Allen and Kanamori, 2003) and On-Site (Wu and Kanamori, 2005; Boese et al., 2008) algorithms - that form the basis of the California Integrated Seismic Network (CISN) ShakeAlert system, a USGS-funded prototype end-to-end EEW system that could potentially be implemented in California. In Europe, VS is currently operating as a real-time test system in Switzerland. As part of the on-going EU project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction), VS will be installed and tested at other European networks. VS has been running in real-time on stations of the Southern California Seismic Network (SCSN) since July 2008, and on stations of the Berkeley Digital Seismic Network (BDSN) and the USGS Menlo Park strong motion network in northern California since February 2009. In Switzerland, VS has been running in real-time on stations monitored by the Swiss Seismological Service (including stations from Austria, France, Germany, and Italy) since 2010. We present summaries of the real-time performance of VS in Switzerland and California over the past two and three years respectively. The empirical relationships used by VS to estimate magnitudes and ground motion, originally derived from southern California data, are demonstrated to perform well in northern California and Switzerland. Implementation in real-time and off-line testing in Europe will potentially be extended to southern Italy, western Greece, Istanbul, Romania, and Iceland. Integration of the VS algorithm into both the CISN Advanced

  9. Mutating the realities in fashion design: virtual clothing for 3D avatars

    OpenAIRE

    Taylor, Andrew; Unver, Ertu

    2007-01-01

    “My fantasy is to be Uma Thurman in Kill Bill…and now I can… I’d pay $10 for her yellow jumpsuit and sword moves and I’m sure other people would too…” Hundreds and thousands of humans living in different time zones around the world are choosing to re-create and express themselves as three-dimensional avatars in 3D virtual online worlds: An avatar is defined as an interactive 3D image or character, representing a user in a multi-user virtual world/virtual reality space. 3D virtual online wo...

  10. Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups

    Science.gov (United States)

    Casas, Lluís; Estop, Eugènia

    2015-01-01

    Both virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to study point symmetry. The use of 3D printing to…
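
The point-symmetry content such models convey can also be checked numerically: a symmetry operation is a matrix that maps a motif onto itself, and an n-fold rotation axis applied n times gives the identity. A minimal sketch for a 2-fold rotation (C2) about z, with an illustrative two-atom motif:

```python
# Verifying point-symmetry properties with plain 3x3 matrix arithmetic.

def matmul3(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return tuple(sum(m[i][k] * v[k] for k in range(3)) for i in range(3))

C2_Z = [[-1, 0, 0],
        [0, -1, 0],
        [0, 0, 1]]   # 180-degree rotation about the z-axis

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# Applying C2 twice gives the identity (the defining property of a
# 2-fold axis), and a C2-symmetric pair of sites maps onto itself.
assert matmul3(C2_Z, C2_Z) == IDENTITY
motif = {(1, 2, 3), (-1, -2, 3)}
assert {apply(C2_Z, v) for v in motif} == motif
```

The same closure-and-invariance checks extend to full point groups: collecting all products of a group's generators and confirming the set is closed reproduces the group's multiplication table.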

  11. Distance Learning for Students with Special Needs through 3D Virtual Learning

    Science.gov (United States)

    Laffey, James M.; Stichter, Janine; Galyen, Krista

    2014-01-01

    iSocial is a 3D Virtual Learning Environment (3D VLE) to develop social competency for students who have been identified with High-Functioning Autism Spectrum Disorders. The motivation for developing a 3D VLE is to improve access to special needs curriculum for students who live in rural or small school districts. The paper first describes a…

  12. LIME: 3D visualisation and interpretation of virtual geoscience models

    Science.gov (United States)

    Buckley, Simon; Ringdal, Kari; Dolva, Benjamin; Naumann, Nicole; Kurz, Tobias

    2017-04-01

    Three-dimensional and photorealistic acquisition of surface topography, using methods such as laser scanning and photogrammetry, has become widespread across the geosciences over the last decade. With recent innovations in photogrammetric processing software, robust and automated data capture hardware, and novel sensor platforms, including unmanned aerial vehicles, obtaining 3D representations of exposed topography has never been easier. In addition to 3D datasets, fusion of surface geometry with imaging sensors, such as multi/hyperspectral, thermal and ground-based InSAR, and geophysical methods, create novel and highly visual datasets that provide a fundamental spatial framework to address open geoscience research questions. Although data capture and processing routines are becoming well-established and widely reported in the scientific literature, challenges remain related to the analysis, co-visualisation and presentation of 3D photorealistic models, especially for new users (e.g. students and scientists new to geomatics methods). Interpretation and measurement is essential for quantitative analysis of 3D datasets, and qualitative methods are valuable for presentation purposes, for planning and in education. Motivated by this background, the current contribution presents LIME, a lightweight and high performance 3D software for interpreting and co-visualising 3D models and related image data in geoscience applications. The software focuses on novel data integration and visualisation of 3D topography with image sources such as hyperspectral imagery, logs and interpretation panels, geophysical datasets and georeferenced maps and images. High quality visual output can be generated for dissemination purposes, to aid researchers with communication of their research results. The background of the software is described and case studies from outcrop geology, in hyperspectral mineral mapping and geophysical-geospatial data integration are used to showcase the novel

  13. VirtualizeMe: Real-time avatar creation for Tele-Immersion environments

    KAUST Repository

    Knoblauch, Daniel; Font, Pau Moreno; Kuester, Falko

    2010-01-01

    through lossless compression of the input data and introducing a focused volumetric visual hull reconstruction. The resulting avatar allows eye-to-eye collaboration for remote users. The interaction with the virtual world is facilitated by the volumetric

  14. Evaluation of Real-Time and Off-Line Performance of the Virtual Seismologist Earthquake Early Warning Algorithm in Switzerland

    Science.gov (United States)

    Behr, Yannik; Clinton, John; Cua, Georgia; Cauzzi, Carlo; Heimers, Stefan; Kästli, Philipp; Becker, Jan; Heaton, Thomas

    2013-04-01

    The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) originally formulated by Cua and Heaton (2007). Implementation of VS into real-time EEW codes has been an ongoing effort of the Swiss Seismological Service at ETH Zürich since 2006, with support from ETH Zürich, various European projects, and the United States Geological Survey (USGS). VS is one of three EEW algorithms that form the basis of the California Integrated Seismic Network (CISN) ShakeAlert system, a USGS-funded prototype end-to-end EEW system that could potentially be implemented in California. In Europe, VS is currently operating as a real-time test system in Switzerland. As part of the ongoing EU project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction), VS installations in southern Italy, western Greece, Istanbul, Romania, and Iceland are planned or underway. In Switzerland, VS has been running in real-time on stations monitored by the Swiss Seismological Service (including stations from Austria, France, Germany, and Italy) since 2010. While originally based on the Earthworm system, it has recently been ported to the SeisComp3 system. Besides taking advantage of SeisComp3's picking and phase-association capabilities, this greatly simplifies the potential installation of VS at other networks, in particular those already running SeisComp3. We present the architecture of the new SeisComp3-based version and compare its results from off-line tests with the real-time performance of VS in Switzerland over the past two years. We further show that the empirical relationships used by VS to estimate magnitudes and ground motion, originally derived from southern California data, perform well in Switzerland.

  15. The role of virtual reality and 3D modelling in built environment education

    OpenAIRE

    Horne, Margaret; Thompson, Emine Mine

    2007-01-01

    This study builds upon previous research on the integration of Virtual Reality (VR) within the built environment curriculum and aims to investigate the role of Virtual Reality and three-dimensional (3D) computer modelling on learning and teaching in a school of the built environment. In order to achieve this aim a number of academic experiences were analysed to explore the applicability and viability of 3D computer modelling and Virtual Reality (VR) into built environment subject areas. Altho...

  16. PAST AND FUTURE APPLICATIONS OF 3-D (VIRTUAL REALITY) TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Nigel Foreman

    2014-11-01

    Full Text Available Virtual Reality (virtual environment technology, VET) has been widely available for twenty years. In that time, the benefits of using virtual environments (VEs) have become clear in many areas of application, including assessment and training, education, rehabilitation and psychological research in spatial cognition. The flexibility, reproducibility and adaptability of VEs are especially important, particularly in the training and testing of navigational and way-finding skills. Transfer of training between real and virtual environments has been found to be reliable. However, input device usage can compromise spatial information acquisition from VEs, and distances in VEs are invariably underestimated. The present review traces the evolution of VET, anticipates future areas in which developments are likely to occur, and highlights areas in which research is needed to optimise usage.

  17. 3D multiplayer virtual pets game using Google Card Board

    Science.gov (United States)

    Herumurti, Darlis; Riskahadi, Dimas; Kuswardayan, Imam

    2017-08-01

    Virtual Reality (VR) is a technology which allows users to interact with a virtual environment generated and simulated by computer, giving them the sensation of actually being in that environment. VR presents the virtual environment directly to the user rather than on a screen, but it requires an additional device to display the view, known as a Head Mounted Device (HMD). Oculus Rift and Microsoft HoloLens are among the best-known HMD devices used in VR. In 2014, Google Card Board was introduced at the Google I/O developers conference: a VR platform which allows users to enjoy VR in a simple and cheap way. In this research, we explore Google Card Board to develop a pet-raising simulation game. Google Card Board is used to create the view of the VR environment, while the view and controls are built using the Unity game engine. The simulation process is designed using a Finite State Machine (FSM), which helps to specify the process clearly so that the simulation describes raising a pet well. Raising a pet is a fun activity, but there are many conditions which can make it difficult, e.g. environmental conditions, disease, and high cost. This research aims to explore and implement Google Card Board in a pet-raising simulation.

  18. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    Science.gov (United States)

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereo Lithography file format, and the 3dMD model was exported in Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft-tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where they exceed 1.5 mm. However, the fusion of the whole CT and 3dMD point-cloud sets is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion make the 3D virtual head an accurate, realistic, and widely applicable tool of great benefit to virtual face modeling.
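
    The genetic-algorithm alignment step lends itself to a compact illustration. The sketch below is not the authors' implementation: it evolves a 2D rigid transform (rotation angle plus translation; all names, population sizes and mutation parameters are hypothetical) to align two synthetic point sets by minimizing mean point-to-point error, mirroring the idea of GA-driven fusion alignment with minimum error:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "surface" points and a ground-truth rigid transform to recover.
src = rng.uniform(-5, 5, size=(40, 2))
theta_true, t_true = 0.3, np.array([2.0, -1.0])
R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                   [np.sin(theta_true),  np.cos(theta_true)]])
dst = src @ R_true.T + t_true

def apply_rigid(params, pts):
    """Apply a rigid transform given params = (theta, tx, ty)."""
    th, tx, ty = params
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return pts @ R.T + np.array([tx, ty])

def fitness(params):
    """Mean point-to-point alignment error (lower is better)."""
    return np.mean(np.linalg.norm(apply_rigid(params, src) - dst, axis=1))

# Genetic algorithm: elitism, blend crossover, Gaussian mutation.
pop = rng.uniform([-np.pi, -10, -10], [np.pi, 10, 10], size=(60, 3))
for gen in range(150):
    errs = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(errs)[:10]]           # keep the 10 best unchanged
    children = []
    while len(children) < len(pop) - len(elite):
        a, b = elite[rng.integers(0, 10, size=2)]
        w = rng.uniform()
        child = w * a + (1 - w) * b              # blend crossover
        child += rng.normal(0, 0.1, size=3)      # mutation
        children.append(child)
    pop = np.vstack([elite, children])

best = pop[np.argmin([fitness(p) for p in pop])]
print("alignment error:", fitness(best))
```

    Real craniofacial fusion operates on 3D surfaces with nearest-neighbour correspondences rather than known pairings, but the optimization structure, a population of candidate transforms refined toward minimum registration error, is the same.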

  19. Virtual reality and 3D animation in forensic visualization.

    Science.gov (United States)

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal medium for accurately visualizing crime or accident scenes for viewers and in courtrooms. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace the traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing, to investigate the state-of-the-art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and level-of-detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. © 2010 American Academy of Forensic Sciences.

  20. VirtualizeMe: Real-time avatar creation for Tele-Immersion environments

    KAUST Repository

    Knoblauch, Daniel

    2010-03-01

    VirtualizeMe introduces a new design for a fully immersive Tele-Immersion system for remote collaboration and virtual world interaction. This system introduces a new avatar creation approach fulfilling four main attributes: high resolution, scalability, flexibility and affordability. This is achieved by a total separation of reconstruction and rendering and exploiting the capabilities of modern graphics cards. The high resolution is achieved by using as much of the input information as possible through lossless compression of the input data and introducing a focused volumetric visual hull reconstruction. The resulting avatar allows eye-to-eye collaboration for remote users. The interaction with the virtual world is facilitated by the volumetric avatar model and allows a fully immersive system. This paper shows a proof of concept based on publicly available pre-recorded data to allow easier comparison. ©2010 IEEE.

  1. METHODOLOGY TO CREATE DIGITAL AND VIRTUAL 3D ARTEFACTS IN ARCHAEOLOGY

    Directory of Open Access Journals (Sweden)

    Calin Neamtu

    2016-12-01

    Full Text Available The paper presents a methodology to create 3D digital and virtual artefacts in the field of archaeology using CAD software solutions. The methodology includes the following steps: the digitalization process, the digital restoration, and the dissemination process within a virtual environment. The resulting 3D digital artefacts have to be created in file formats that are compatible with a large variety of operating systems and hardware configurations, such as computers, graphic tablets and smartphones. The compatibility and portability of these 3D file formats have led to a series of quality-related compromises to the 3D models in order to integrate them in a wide variety of applications running on different hardware configurations. The paper illustrates multiple virtual reality and augmented reality applications that make use of the virtual 3D artefacts generated using this methodology.

  2. Development of a Real-Time Virtual Nitric Oxide Sensor for Light-Duty Diesel Engines

    Directory of Open Access Journals (Sweden)

    Seungha Lee

    2017-03-01

    Full Text Available This study describes the development of a semi-physical, real-time nitric oxide (NO) prediction model that is capable of cycle-by-cycle prediction in a light-duty diesel engine. The model utilizes the measured in-cylinder pressure and information obtained from the engine control unit (ECU). From these inputs, the model takes into account pilot-injection burning and mixing, which affect the in-cylinder mixture formation. The representative in-cylinder temperature for NO formation was determined from the mixture composition calculation. The selected temperature and mixture composition were substituted into a simplified form of the NO formation rate equation for the cycle-by-cycle estimation. The reactive area and the duration of NO formation were assumed to be limited by the fuel quantity. The model's predictability was verified not only under various steady-state conditions, including variation of the EGR rate, the boost pressure, the rail pressure, and the injection timing, but also under transient conditions representing the worldwide harmonized light vehicles test procedure (WLTC). The WLTC NO prediction results showed less than 3% error against the measured values. In addition, the proposed model maintained its reliability under hardware aging and artificial perturbations during steady-state and transient engine operation. The model requires low computational effort: cycle-by-cycle, engine-out NO emission prediction and control were performed simultaneously in an embedded system for the automotive application. We expect that the developed NO prediction model can be helpful in emission calibration during the engine design stage, or in real-time control of exhaust NO emissions to improve fuel consumption while satisfying NO emission legislation.
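
    The abstract does not reproduce the paper's simplified rate equation. As a hedged illustration of the kind of thermal-NO expression such semi-physical models typically build on, the sketch below uses the widely quoted simplified Zeldovich rate from Heywood's textbook; the constants (6e16 pre-factor, 69 090 K activation temperature) are an assumption here, not taken from the paper:

```python
import math

def no_formation_rate(T, O2, N2):
    """Initial thermal NO formation rate, d[NO]/dt in mol/(cm^3*s),
    using the simplified Zeldovich expression commonly quoted from
    Heywood (1988): 6e16 / sqrt(T) * exp(-69090/T) * [O2]^0.5 * [N2].
    T in K, concentrations in mol/cm^3. Illustrative only -- the
    paper's actual semi-physical model is not reproduced here.
    """
    return 6.0e16 / math.sqrt(T) * math.exp(-69090.0 / T) * math.sqrt(O2) * N2

# The strong exponential temperature sensitivity is why the model's choice
# of a representative in-cylinder temperature matters: a 100 K rise near
# 2500 K roughly triples the formation rate.
r_2500 = no_formation_rate(2500.0, 1e-6, 1e-5)
r_2600 = no_formation_rate(2600.0, 1e-6, 1e-5)
print(r_2600 / r_2500)
```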

  3. 3D Urban Virtual Models generation methodology for smart cities

    Directory of Open Access Journals (Sweden)

    M. Álvarez

    2018-04-01

    Full Text Available Currently, the use of urban 3D models goes beyond mere three-dimensional visualization of our urban surroundings. Three-dimensional urban models are in themselves fundamental tools for managing the different phenomena that occur in smart cities. It is therefore necessary to generate realistic models in which BIM building design information can be integrated with GIS and other spatial technologies. The generation of 3D urban models benefits from the volume of data produced by the latest sensor technologies, such as airborne sensors, and from the existence of international standards such as CityGML. This paper presents a methodology for the development of a three-dimensional urban model, based on LiDAR data and the CityGML standard, applied to the city of Lorca.

  4. Application of computer virtual simulation technology in 3D animation production

    Science.gov (United States)

    Mo, Can

    2017-11-01

    With the continuous development of computer technology, virtual simulation technology has been further optimized and improved. It is also widely used in various fields of social development, such as city construction, interior design, industrial simulation and tourism teaching. This paper mainly introduces the use of virtual simulation technology in 3D animation. Based on an analysis of the characteristics of virtual simulation technology, the ways and means of applying this technology in 3D animation are researched. The purpose is to provide a reference for the future improvement of 3D effects.

  5. Accelerating volumetric cine MRI (VC-MRI) using undersampling for real-time 3D target localization/tracking in radiation therapy: a feasibility study

    Science.gov (United States)

    Harris, Wendy; Yin, Fang-Fang; Wang, Chunhao; Zhang, You; Cai, Jing; Ren, Lei

    2018-01-01

    Purpose. To accelerate volumetric cine MRI (VC-MRI) using undersampled 2D-cine MRI to provide real-time 3D guidance for gating/target tracking in radiotherapy. Methods. 4D-MRI is acquired during patient simulation. One phase of the prior 4D-MRI is selected as the prior images, designated as MRIprior. The on-board VC-MRI at each time-step is considered a deformation of the MRIprior. The deformation field map is represented as a linear combination of the motion components extracted by principal component analysis from the prior 4D-MRI. The weighting coefficients of the motion components are solved by matching the corresponding 2D-slice of the VC-MRI with the on-board undersampled 2D-cine MRI acquired. Undersampled Cartesian and radial k-space acquisition strategies were investigated. The effects of k-space sampling percentage (SP) and distribution, tumor sizes and noise on the VC-MRI estimation were studied. The VC-MRI estimation was evaluated using XCAT simulation of lung cancer patients and data from liver cancer patients. Volume percent difference (VPD) and Center of Mass Shift (COMS) of the tumor volumes and tumor tracking errors were calculated. Results. For XCAT, VPD/COMS were 11.93 ± 2.37%/0.90 ± 0.27 mm and 11.53 ± 1.47%/0.85 ± 0.20 mm among all scenarios with Cartesian sampling (SP = 10%) and radial sampling (21 spokes, SP = 5.2%), respectively. As tumor size decreased, higher sampling rates achieved more accurate VC-MRI than lower sampling rates. VC-MRI was robust against noise levels up to SNR = 20. For patient data, the tumor tracking errors in the superior-inferior, anterior-posterior and lateral (LAT) directions were 0.46 ± 0.20 mm, 0.56 ± 0.17 mm and 0.23 ± 0.16 mm, respectively, for Cartesian-based sampling with SP = 20% and 0.60 ± 0.19 mm, 0.56 ± 0.22 mm and 0.42 ± 0.15 mm, respectively, for
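
    Once the PCA motion basis is fixed, the weighting-coefficient step reduces to a small linear problem. The toy sketch below (synthetic data, not the authors' code) fits the component weights by least squares so that an observed subset of voxels, standing in for the undersampled 2D-cine slice, matches the measurement:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the VC-MRI weighting step: the deformation field is
# modeled as mean + linear combination of a few principal motion
# components, and the weights are fit so that one (undersampled) slice
# of the predicted volume matches the on-board measurement.
n_voxels, n_components = 500, 3
mean_def = rng.normal(size=n_voxels)               # mean deformation field
basis = rng.normal(size=(n_voxels, n_components))  # PCA motion components
w_true = np.array([0.8, -0.5, 0.3])                # "true" motion state

# Only a subset of voxels is observed (the undersampled 2D-cine slice).
observed = rng.choice(n_voxels, size=60, replace=False)
measurement = (mean_def + basis @ w_true)[observed]

# Solve for the weights by linear least squares on the observed voxels.
A = basis[observed]
b = measurement - mean_def[observed]
w_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(w_hat)  # recovers w_true exactly for noiseless synthetic data
```

    In the actual method the forward model also involves the nonlinear warping of MRIprior and the k-space sampling operator, so the coefficients are found iteratively rather than in one linear solve, but the low-dimensional structure is the same.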

  6. Assessing 3D Virtual World Disaster Training Through Adult Learning Theory

    Directory of Open Access Journals (Sweden)

    Lee Taylor-Nelms

    2014-10-01

    Full Text Available As role-play, virtual reality, and simulated environments gain popularity through virtual worlds such as Second Life, identifying best practices for education and emergency management training becomes increasingly important. Using a formal needs assessment approach, we examined the extent to which 3D virtual tornado simulation trainings follow the principles of adult learning theory employed by the Federal Emergency Management Agency's (FEMA) National Training and Education Division. Through a three-fold methodology of observation, interviews, and reflection on action, 3D virtual world tornado trainings were analyzed for congruence to adult learning theory.

  7. Marker-referred movement measurement with grey-scale coordinate extraction for high-resolution real-time 3D at 100 Hz

    NARCIS (Netherlands)

    Furnée, E.H.; Jobbá, A.; Sabel, J.C.; Veenendaal, H.L.J. van; Martin, F.; Andriessen, D.C.W.G.

    1997-01-01

    A review of the early history of photography highlights the origin of cinefilm as a scientific tool for image-based measurement of human and animal motion. The paper is concerned with scanned-area video sensors (CCD) and a computer interface for the real-time, high-resolution extraction of image

  8. Collaborative Virtual 3D Environment for Internet-Accessible Physics Experiments

    Directory of Open Access Journals (Sweden)

    Bettina Scheucher

    2009-08-01

    Full Text Available Immersive 3D worlds have increasingly raised the interest of researchers and practitioners for various learning and training settings over the last decade. These virtual worlds can provide multiple communication channels between users and improve presence and awareness in the learning process. Consequently, virtual 3D environments facilitate collaborative learning and training scenarios. In this paper we focus on the integration of internet-accessible physics experiments (iLabs) combined with the TEALsim 3D simulation toolkit in Project Wonderland, Sun's toolkit for creating collaborative 3D virtual worlds. Within such a collaborative environment these tools provide the opportunity for teachers and students to work together as avatars as they control actual equipment, visualize physical phenomena generated by the experiment, and discuss the results. In particular, we will outline the steps of integration, future goals, as well as the value of a collaboration space in Wonderland's virtual world.

  9. Tactile display for virtual 3D shape rendering

    CERN Document Server

    Mansutti, Alessandro; Bordegoni, Monica; Cugini, Umberto

    2017-01-01

    This book describes a novel system for the simultaneous visual and tactile rendering of product shapes which allows designers to simultaneously touch and see new product shapes during the conceptual phase of product development. This system offers important advantages, including potential cost and time savings, compared with the standard product design process in which digital 3D models and physical prototypes are often repeatedly modified until an optimal design is achieved. The system consists of a tactile display that is able to represent, within a real environment, the shape of a product. Designers can explore the rendered surface by touching curves lying on the product shape, selecting those curves that can be considered style features and evaluating their aesthetic quality. In order to physically represent these selected curves, a flexible surface is modeled by means of servo-actuated modules controlling a physical deforming strip. The tactile display is designed so as to be portable, low cost, modular,...

  10. The virtual craniofacial patient: 3D jaw modeling and animation.

    Science.gov (United States)

    Enciso, Reyes; Memon, Ahmed; Fidaleo, Douglas A; Neumann, Ulrich; Mah, James

    2003-01-01

    In this paper, we present new developments in the area of 3D human jaw modeling and animation. CT (Computed Tomography) scans have traditionally been used to evaluate patients with dental implants, assess tumors, cysts, fractures and surgical procedures. More recently this data has been utilized to generate models. Researchers have reported semi-automatic techniques to segment and model the human jaw from CT images and manually segment the jaw from MRI images. Recently opto-electronic and ultrasonic-based systems (JMA from Zebris) have been developed to record mandibular position and movement. In this research project we introduce: (1) automatic patient-specific three-dimensional jaw modeling from CT data and (2) three-dimensional jaw motion simulation using jaw tracking data from the JMA system (Zebris).

  11. A standardized set of 3-D objects for virtual reality research and applications.

    Science.gov (United States)

    Peeters, David

    2018-06-01

    The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow for science to move forward more quickly.

  12. Immersive Learning Environment Using 3D Virtual Worlds and Integrated Remote Experimentation

    Directory of Open Access Journals (Sweden)

    Roderval Marcelino

    2013-01-01

    Full Text Available This project seeks to demonstrate the use of remote experimentation and 3D virtual environments applied to teaching and learning in the exact sciences, specifically physics. By combining remote experimentation and 3D virtual worlds in the teaching-learning process, we intend to achieve greater geographic coverage, contributing to the construction of new methodologies of teaching support, speed of access and, above all, motivation for students to continue scientific study in the technology areas. The proposed architecture is based on a model implemented entirely with open-source software and open hardware. The virtual world was built in the OpenSim software and integrated with a remote physics experiment called the "electrical panel". By accessing the virtual world, the user has full control of the experiment within the 3D virtual world.

  13. Poor Man's Virtual Camera: Real-Time Simultaneous Matting and Camera Pose Estimation.

    Science.gov (United States)

    Szentandrasi, Istvan; Dubska, Marketa; Zacharias, Michal; Herout, Adam

    2016-03-18

    Today's film and advertisement production heavily uses computer graphics combined with live actors by chromakeying. The matchmoving process typically requires considerable manual effort. Semi-automatic matchmoving tools exist as well, but they still work offline and require manual check-up and correction. In this article, we propose an instant matchmoving solution for green screen. It uses a recent technique of planar uniform marker fields. Our technique can be used in indie and professional filmmaking as a cheap and ultramobile virtual camera, and for shot prototyping and storyboard creation. The matchmoving technique based on marker fields in shades of green is very computationally efficient: we developed, and present in the article, a mobile application running at 33 FPS. Our technique is thus available to anyone with a smartphone at low cost and with easy setup, opening space for new levels of filmmakers' creative expression.
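
    The paper's uniform-marker-field detector is specific to that work, but the underlying camera pose recovery from a planar pattern is classical. As a generic sketch on synthetic data (Zhang-style homography decomposition, not the authors' algorithm; all values are illustrative):

```python
import numpy as np

# Pose of a camera viewing a planar marker, recovered from a homography.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics

def rot(ax, ay, az):
    """Rotation matrix from Euler angles (illustrative ground truth)."""
    Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

R_true = rot(0.1, -0.2, 0.3)
t_true = np.array([0.2, -0.1, 2.5])        # marker in front of the camera

# Marker points on the z=0 plane and their image projections.
plane = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], dtype=float)
img = []
for X, Y in plane:
    p = K @ (R_true @ np.array([X, Y, 0.0]) + t_true)
    img.append(p[:2] / p[2])
img = np.array(img)

# Direct linear transform (DLT) for the homography H ~ K [r1 r2 t].
A = []
for (X, Y), (u, v) in zip(plane, img):
    A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
    A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
H = np.linalg.svd(np.array(A))[2][-1].reshape(3, 3)

# Decompose: columns of K^-1 H are scaled r1, r2, t.
B = np.linalg.inv(K) @ H
lam = 1.0 / np.linalg.norm(B[:, 0])
if B[2, 2] * lam < 0:                      # enforce positive depth
    lam = -lam
r1, r2, t = lam * B[:, 0], lam * B[:, 1], lam * B[:, 2]
R = np.column_stack([r1, r2, np.cross(r1, r2)])
print(np.round(t, 6))                      # recovers t_true on exact data
```

    A real-time system adds robust marker detection, noise handling (e.g. orthonormalizing R) and temporal filtering on top of this core step.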

  14. Contrast-enhanced MR angiography of the carotid artery using 3D time-resolved imaging of contrast kinetics. Comparison with real-time fluoroscopic triggered 3D-elliptical centric view ordering

    International Nuclear Information System (INIS)

    Naganawa, Shinji; Koshikawa, Tokiko; Fukatsu, Hiroshi; Sakurai, Yasuo; Ishiguchi, Tsuneo; Ishigaki, Takeo; Ichinose, Nobuyasu

    2001-01-01

    The purpose of this study was to evaluate contrast-enhanced MR angiography using the 3D time-resolved imaging of contrast kinetics technique (3D-TRICKS) by direct comparison with the fluoroscopic triggered 3D-elliptical centric view ordering (3D-ELLIP) technique. 3D-TRICKS and 3D-ELLIP were directly compared on a 1.5-Tesla MR unit using the same spatial resolution and matrix. In 3D-TRICKS, the central part of the k-space is updated more frequently than the peripheral part of the k-space, which is divided in the slice-encoding direction. The carotid arteries were imaged using 3D-TRICKS and 3D-ELLIP sequentially in 14 patients. Temporal resolution was 12 sec for 3D-ELLIP and 6 sec for 3D-TRICKS. The signal-to-noise ratio (S/N) of the common carotid artery was measured, and the quality of MIP images was then scored in terms of venous overlap and blurring of vessel contours. No significant difference in mean S/N was seen between the two methods. Significant venous overlap was not seen in any of the patients examined. Moderate blurring of vessel contours was noted on 3D-TRICKS in five patients and on 3D-ELLIP in four patients. Blurring in the slice-encoding direction was slightly more pronounced in 3D-TRICKS. However, qualitative analysis scores showed no significant differences. When the spatial resolution of the two methods was identical, the performance of 3D-TRICKS was found to be comparable in static visualization of the carotid arteries with 3D-ELLIP, although blurring in the slice-encoding direction was slightly more pronounced in 3D-TRICKS. 3D-TRICKS is a more robust technique than 3D-ELLIP, because 3D-ELLIP requires operator-dependent fluoroscopic triggering. Furthermore, 3D-TRICKS can achieve higher temporal resolution. For the spatial resolution employed in this study, 3D-TRICKS may be the method of choice. (author)
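
    The TRICKS view ordering described above can be made concrete with a toy schedule. Dividing slice-encoding k-space into a central region A and peripheral regions B, C, D, the classic interleave A-B-A-C-A-D refreshes the contrast-dominating centre every other segment, which is how the technique achieved 6 s temporal resolution versus 12 s for 3D-ELLIP in this study. A minimal sketch (region labels and the four-region split are illustrative, not vendor-specific):

```python
from itertools import cycle

def tricks_schedule(n_segments):
    """Toy TRICKS-style view ordering: the central k-space region 'A' is
    re-acquired every other segment, while the peripheral regions
    'B', 'C', 'D' are cycled through round-robin."""
    periphery = cycle(["B", "C", "D"])
    order = []
    for i in range(n_segments):
        order.append("A" if i % 2 == 0 else next(periphery))
    return order

print(tricks_schedule(12))
# ['A', 'B', 'A', 'C', 'A', 'D', 'A', 'B', 'A', 'C', 'A', 'D']
```

    Frames are then reconstructed by combining the latest copy of each region, so image contrast tracks the frequently updated centre while the periphery is temporally interpolated.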

  15. The Arnolfini Portrait in 3d: Creating Virtual World of a Painting with Inconsistent Perspective

    NARCIS (Netherlands)

    Jansen, P.H.; Ruttkay, Z.M.; Arnold, D. B.; Ferko, A.

    We report on creating a 3d virtual reconstruction of the scene shown in "The Arnolfini Portrait" by Jan van Eyck. This early Renaissance painting, if painted faithfully, should conform to one-point perspective; however, it has several vanishing points instead of one. Hence our 3d reconstruction had

  16. Preoperative Planning Using 3D Reconstructions and Virtual Endoscopy for Location of the Frontal Sinus

    Directory of Open Access Journals (Sweden)

    Abreu, João Paulo Saraiva

    2011-01-01

    Full Text Available Introduction: Computed tomography (CT)-generated three-dimensional (3D) reconstructions allow the cavities and anatomic structures of our body to be observed in detail. In our specialty there have been attempts to carry out virtual endoscopies and laryngoscopies. However, such applications have been practically abandoned due to their complexity and the need for computers with high graphics processing power. Objective: To demonstrate the production of 3D reconstructions from patients' CT scans on personal computers, with a free dedicated program, and to compare them with the actual endoscopic images from surgery. Method: Prospective study in which the CT files of 10 patients were reconstructed with the program Intage Realia, version 2009,0,0,702 (KGT Inc., Japan). The reconstructions were carried out before the surgeries, and a virtual endoscopy was performed to assess the recess and frontal sinus region. After this study, the surgery was performed and digitally recorded. The actual endoscopic images of the recess and frontal sinus region were compared with the virtual images. Results: The 3D reconstruction and virtual endoscopy were performed in 10 patients submitted to surgery. The virtual images closely resembled the actual surgical images. Conclusion: With relatively simple tools and a personal computer, we demonstrated the possibility of generating 3D reconstructions and virtual endoscopies. Preoperative knowledge of the location of the natural drainage pathway of the frontal sinus may be beneficial during surgery. However, more studies must be carried out to evaluate the real role of such 3D reconstructions and virtual endoscopies.

  17. Real-time Near-infrared Virtual Intraoperative Surgical Photoacoustic Microscopy

    Directory of Open Access Journals (Sweden)

    Changho Lee

    2015-09-01

    Full Text Available We developed a near-infrared (NIR) virtual intraoperative surgical photoacoustic microscopy (NIR-VISPAM) system that combines a conventional surgical microscope and an NIR-light photoacoustic microscopy (PAM) system. NIR-VISPAM can simultaneously visualize PA B-scan images at a maximum display rate of 45 Hz and display enlarged microscopic images on the surgeon's view plane through the ocular lenses of the surgical microscope as augmented reality. The use of invisible NIR light eliminates the disturbance to the surgeon's vision caused by the visible PAM excitation laser in a previous report. Further, the maximum permissible laser pulse energy at this wavelength is approximately 5 times higher than that in the visible spectral range. The use of a needle-type ultrasound transducer without any water bath for acoustic coupling enhances convenience in an intraoperative environment. We successfully guided needle insertion and injected carbon particles in biological tissues ex vivo and in melanoma-bearing mice in vivo.

  18. A Collaborative Virtual Environment for Situated Language Learning Using VEC3D

    Science.gov (United States)

    Shih, Ya-Chun; Yang, Mau-Tsuen

    2008-01-01

    A 3D virtually synchronous communication architecture for situated language learning has been designed to foster communicative competence among undergraduate students who have studied English as a foreign language (EFL). We present an innovative approach that offers better e-learning than the previous virtual reality educational applications. The…

  19. Teaching Physics to Deaf College Students in a 3-D Virtual Lab

    Science.gov (United States)

    Robinson, Vicki

    2013-01-01

    Virtual worlds are used in many educational and business applications. At the National Technical Institute for the Deaf at Rochester Institute of Technology (NTID/RIT), deaf college students are introduced to the virtual world of Second Life, which is a 3-D immersive, interactive environment, accessed through computer software. NTID students use…

  20. Real-Time Extraction of Course Track Networks in Confined Waters as Decision Support for Vessel Navigation in 3-D Nautical Chart

    National Research Council Canada - National Science Library

    Porathe, Thomas

    2006-01-01

    In an information design project at Malardalen University in Sweden a computer based 3-D nautical chart system is designed based on human factors principles of more intuitive navigation in high speeds...

  1. Avatar-mediation and Transformation of Practice in a 3D Virtual World

    DEFF Research Database (Denmark)

    Riis, Marianne

    The purpose of this study is to understand and conceptualize the transformation of a particular community of pedagogical practice based on the implementation of the 3D virtual world, Second Life™. The community setting is a course at the Master's programme on ICT and Learning (MIL), Aalborg...... with knowledge about 3D Virtual Worlds, the influence of the avatar phenomenon and the consequences of 3D-remediation in relation to teaching and learning in online education. Based on the findings, a conceptual design model, a set of design principles, and a design framework has been developed....

  2. Passive hybrid force-position control for tele-operation based on real-time simulation of a virtual mechanism

    International Nuclear Information System (INIS)

    Joly, L.; Andriot, C.

    1995-01-01

    Hybrid force-position control aims at controlling position and force in separate directions. It is particularly useful for performing certain robotic tasks. In a tele-operation context, passivity is important because it ensures stability when the system interacts with any passive environment. In this paper, we propose an original approach to hybrid force-position control of a force-reflecting tele-robot system. It is based on real-time simulation of a virtual mechanism corresponding to the task. The resulting control law is passive. Experiments on a 6-degrees-of-freedom tele-operation system, consisting of following a bent pipe under several control modes, validate the approach. (authors). 12 refs., 6 figs
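    The virtual-mechanism idea lends itself to a small numerical sketch. The toy simulation below is not the authors' controller: the spring/damper gains, the straight "pipe" axis, and the planar setting are all invented for illustration. A virtual point slides freely along a task axis; a spring-damper couples it to the measured robot tip, so the along-axis direction behaves as position control while the perpendicular spring force is what would be reflected to the operator.

```python
# Toy planar simulation of a virtual-mechanism coupling (all gains and the
# task axis are invented).  The virtual point slides freely along `axis`;
# the spring-damper coupling drags it along, and the spring force
# perpendicular to the axis is what the operator would feel.

def simulate(robot_tip, axis=(1.0, 0.0), k=200.0, b=20.0, m=1.0,
             dt=0.001, steps=4000):
    """Semi-implicit Euler integration of the virtual joint coordinate s."""
    s, s_dot = 0.0, 0.0
    for _ in range(steps):
        ex = robot_tip[0] - s * axis[0]        # coupling error
        ey = robot_tip[1] - s * axis[1]
        e_along = ex * axis[0] + ey * axis[1]  # free (position) direction
        s_dot += ((k * e_along - b * s_dot) / m) * dt
        s += s_dot * dt
    # residual error perpendicular to the axis -> reflected (force) direction
    ex = robot_tip[0] - s * axis[0]
    ey = robot_tip[1] - s * axis[1]
    e_along = ex * axis[0] + ey * axis[1]
    f_perp = (k * (ex - e_along * axis[0]), k * (ey - e_along * axis[1]))
    return s, f_perp

s, (fpx, fpy) = simulate(robot_tip=(0.5, 0.1))
print(round(s, 3), round(fpx, 1), round(fpy, 1))
```

    With the tip held at (0.5, 0.1) the virtual joint settles near s = 0.5 and only a constant lateral spring force remains, mimicking hybrid force-position behaviour along and across the pipe.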

  3. Motor facilitation during real-time movement imitation in Parkinson's disease: a virtual reality study.

    Science.gov (United States)

    Robles-García, Verónica; Arias, Pablo; Sanmartín, Gabriel; Espinosa, Nelson; Flores, Julian; Grieve, Kenneth L; Cudeiro, Javier

    2013-12-01

    Impaired temporal stability and poor motor unit recruitment are key impairments in Parkinsonian motor control across a whole spectrum of rhythmic movements, from simple finger tapping to gait. Therapies based on imitation can be designed for patients with motor impairments, and virtual reality (VR) offers a new perspective. Motor actions are known to depend upon the dopaminergic system, whose involvement in imitation is unknown. We sought to understand this role and the underlying possibilities for motor rehabilitation by observing the execution of different motor patterns during imitation in a VR environment in subjects with and without dopaminergic deficits. 10 OFF-dose idiopathic Parkinson's disease (PD) patients, 9 age-matched controls and 9 young subjects participated. Subjects performed finger-tapping at their "comfort" and "slow-comfort" rates while immersed in VR presenting their "avatar" in 1st-person perspective. Imitation was evaluated by asking subjects to replicate finger-tapping patterns different from their natural one. The finger pattern presented matched their comfort and slow-comfort rates, but without a pause on the table (continuously moving). Patients were able to adapt their finger-tapping correctly, showing that, in comparison with the control groups, the dopaminergic deficiency of PD did not impair imitation. During imitation the magnitude of EMG increased and the temporal variability of movement decreased. PD patients have an unaltered ability to imitate instructed motor patterns, suggesting that a fully functional dopaminergic system is not essential for such imitation. It should be further investigated whether imitation training over a period of time induces positive off-line motor adaptations with transfer to non-imitation tasks. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Development and application of visual support module for remote operator in 3D virtual environment

    International Nuclear Information System (INIS)

    Choi, Kyung Hyun; Cho, Soo Jeong; Yang, Kyung Boo; Bae, Chang Hyun

    2006-02-01

    In this research, a 3D graphic environment was developed for remote operation, including a visual support module. The real operation environment was built by employing an experimental robot, and an identical virtual model was also developed. Well-designed virtual models can be used to retrieve the necessary conditions for developing the devices and processes. The integration of the 3D virtual models, the experimental operation environment, and the visual support module was used for evaluating operation efficiency and accuracy under different methods, such as a monitor image only and with the visual support module

  5. Development and application of visual support module for remote operator in 3D virtual environment

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Kyung Hyun; Cho, Soo Jeong; Yang, Kyung Boo [Cheju Nat. Univ., Jeju (Korea, Republic of); Bae, Chang Hyun [Pusan Nat. Univ., Busan (Korea, Republic of)

    2006-02-15

    In this research, a 3D graphic environment was developed for remote operation, including a visual support module. The real operation environment was built by employing an experimental robot, and an identical virtual model was also developed. Well-designed virtual models can be used to retrieve the necessary conditions for developing the devices and processes. The integration of the 3D virtual models, the experimental operation environment, and the visual support module was used for evaluating operation efficiency and accuracy under different methods, such as a monitor image only and with the visual support module.

  6. 3D virtual human rapid modeling method based on top-down modeling mechanism

    Directory of Open Access Journals (Sweden)

    LI Taotao

    2017-01-01

    Full Text Available Aiming to satisfy the vast demand for custom-made 3D virtual human characters and for rapid modeling in the field of 3D virtual reality, a new top-down rapid modeling method for virtual humans is put forward in this paper, based on a systematic analysis of the current situation and shortcomings of virtual human modeling technology. After the top-level design of the virtual human hierarchical structure frame, modular expression of the virtual human and parameter design for each module are achieved gradually, level by level, downwards. While the relationships of connectors and mapping restraints among different modules are established, the definition of the size and texture parameters is also completed. A standardized process is meanwhile produced to support the practical operation of virtual human top-down rapid modeling. Finally, a modeling application, which takes a Chinese captain character as an example, is carried out to validate the virtual human rapid modeling method based on the top-down modeling mechanism. The result demonstrates high modeling efficiency and provides a new concept for 3D virtual human geometric modeling and texture modeling.
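    As a rough illustration of the top-down mechanism described above, the sketch below declares a character as a hierarchy of modules and propagates one top-level dimension downwards, level by level. The module names, size ratios, and the single "stature" parameter are assumptions for the sketch, not the paper's actual parameterization.

```python
# Hypothetical top-down module hierarchy: one top-level parameter
# (overall stature) is resolved downwards to size every module.

class Module:
    def __init__(self, name, ratio, children=()):
        self.name = name          # e.g. "torso", "left_arm" (invented names)
        self.ratio = ratio        # this module's size as a fraction of its parent
        self.children = list(children)
        self.size = 0.0

    def resolve(self, parent_size):
        """Propagate dimensions from the top level downwards."""
        self.size = parent_size * self.ratio
        for child in self.children:
            child.resolve(self.size)

    def flatten(self):
        """Yield (name, resolved size) for this module and all descendants."""
        yield self.name, self.size
        for child in self.children:
            yield from child.flatten()

# a toy "captain" figure: a 1.75 m stature drives every module size
figure = Module("body", 1.0, [
    Module("head", 0.13),
    Module("torso", 0.30, [Module("left_arm", 0.9), Module("right_arm", 0.9)]),
    Module("legs", 0.47),
])
figure.resolve(1.75)
print(dict(figure.flatten()))
```

    Changing only the top-level argument of `resolve` re-sizes every module consistently, which is the appeal of the top-down mechanism for mass custom-made characters.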

  7. TH-AB-202-08: A Robust Real-Time Surface Reconstruction Method On Point Clouds Captured From a 3D Surface Photogrammetry System

    International Nuclear Information System (INIS)

    Liu, W; Sawant, A; Ruan, D

    2016-01-01

    Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurements for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method for the acquired point clouds, whose acquisition is subject to noise and missing measurements. In contrast to existing surface reconstruction methods, which are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or the presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error introduced by ICP, with a Laplacian prior. We evaluated our method on both clinically acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent performance despite the introduced occlusions.
Conclusion: We have developed a real-time
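    The SR step described in the Methods can be caricatured in a few lines. In this sketch the training clouds, the sparse ground-truth weights, and the crude two-pass solver are all invented stand-ins for the paper's formulation, and point correspondence is assumed to be already established (the paper uses ICP for that step).

```python
# Toy sparse-regression sketch: approximate a new point cloud as a sparse
# linear combination of K training clouds (all data synthetic).
import numpy as np

rng = np.random.default_rng(0)
K, N = 5, 200                                   # training clouds, points per cloud
training = rng.normal(size=(K, N * 3))          # flattened (x, y, z) clouds
w_true = np.array([0.7, 0.0, 0.3, 0.0, 0.0])    # sparse ground-truth weights
target = w_true @ training + 0.01 * rng.normal(size=N * 3)

# crude stand-in for the sparse solver: least squares, zero-out small
# coefficients, then re-solve on the surviving support
w, *_ = np.linalg.lstsq(training.T, target, rcond=None)
support = np.abs(w) > 0.1
w_sparse = np.zeros(K)
w_sparse[support], *_ = np.linalg.lstsq(training[support].T, target, rcond=None)

print(np.round(w_sparse, 2))   # close to the sparse ground truth
```

    The recovered weights would then be applied to the corresponding training *surfaces*, which is how the sparse relationship is propagated from the measurement manifold to the reconstruction manifold.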

  8. An embedded real-time red peach detection system based on an OV7670 camera, ARM Cortex-M4 processor and 3D Look-Up Tables

    OpenAIRE

    Teixidó Cairol, Mercè; Font Calafell, Davinia; Pallejà Cabrè, Tomàs; Tresánchez Ribes, Marcel; Nogués Aymamí, Miquel; Palacín Roca, Jordi

    2012-01-01

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future...

  9. A new method for real-time co-registration of 3D coronary angiography and intravascular ultrasound or optical coherence tomography.

    Science.gov (United States)

    Carlier, Stéphane; Didday, Rich; Slots, Tristan; Kayaert, Peter; Sonck, Jeroen; El-Mourad, Mike; Preumont, Nicolas; Schoors, Dany; Van Camp, Guy

    2014-06-01

    We present a new clinically practical method for online co-registration of 3D quantitative coronary angiography (QCA) and intravascular ultrasound (IVUS) or optical coherence tomography (OCT). The workflow is based on two modified commercially available software packages. Reconstruction steps are explained and compared to previously available methods. The feasibility for different clinical scenarios is illustrated. The co-registration appears accurate and robust, and induced minimal delay in normal cath lab activities. This new method is based on the 3D angiographic reconstruction of the catheter path and does not require the operator's identification of landmarks to establish the image synchronization. Copyright © 2014 Elsevier Inc. All rights reserved.
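    A hedged sketch of the geometric core of such landmark-free co-registration (the path coordinates, pullback speed, and frame rate below are made-up numbers): with the catheter path reconstructed in 3D and a motorized pullback at constant speed, each IVUS/OCT frame can be placed along the path by arc length alone.

```python
# Place pullback frames along a reconstructed 3-D catheter path by arc
# length (illustrative numbers only).
import math

def arc_lengths(path):
    """Cumulative length along a 3-D polyline."""
    acc, out = 0.0, [0.0]
    for p, q in zip(path, path[1:]):
        acc += math.dist(p, q)
        out.append(acc)
    return out

def frame_position(path, frame_idx, pullback_mm_s=0.5, fps=30):
    """Interpolate the 3-D position of a given frame along the path."""
    s = frame_idx * pullback_mm_s / fps        # distance travelled (mm)
    cum = arc_lengths(path)
    for i in range(1, len(cum)):
        if s <= cum[i]:
            t = (s - cum[i - 1]) / (cum[i] - cum[i - 1])
            p0, p1 = path[i - 1], path[i]
            return tuple(a + t * (b - a) for a, b in zip(p0, p1))
    return path[-1]                            # pullback ran past the path end

# straight 10 mm test segment: frame 300 at 0.5 mm/s, 30 fps -> 5 mm in
path = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
print(frame_position(path, 300))
```

    Because the mapping is purely geometric, no manually identified landmarks are needed to synchronize the intravascular frames with the angiogram, which is the point the abstract makes.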

  10. A new method for real-time co-registration of 3D coronary angiography and intravascular ultrasound or optical coherence tomography

    International Nuclear Information System (INIS)

    Carlier, Stéphane; Didday, Rich; Slots, Tristan; Kayaert, Peter; Sonck, Jeroen; El-Mourad, Mike; Preumont, Nicolas; Schoors, Dany; Van Camp, Guy

    2014-01-01

    We present a new clinically practical method for online co-registration of 3D quantitative coronary angiography (QCA) and intravascular ultrasound (IVUS) or optical coherence tomography (OCT). The workflow is based on two modified commercially available software packages. Reconstruction steps are explained and compared to previously available methods. The feasibility for different clinical scenarios is illustrated. The co-registration appears accurate and robust, and induced minimal delay in normal cath lab activities. This new method is based on the 3D angiographic reconstruction of the catheter path and does not require the operator's identification of landmarks to establish the image synchronization

  11. A new method for real-time co-registration of 3D coronary angiography and intravascular ultrasound or optical coherence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Carlier, Stéphane, E-mail: sgcarlier@hotmail.com [Department of Cardiology, Universitair Ziekenhuis - UZ Brussel, Brussels (Belgium); Department of Cardiology, Erasme University Hospital, Université Libre de Bruxelles (ULB), Brussels (Belgium); Didday, Rich [INDEC Medical Systems Inc., Santa Clara, CA (United States); Slots, Tristan [Pie Medical Imaging BV, Maastricht (Netherlands); Kayaert, Peter; Sonck, Jeroen [Department of Cardiology, Universitair Ziekenhuis - UZ Brussel, Brussels (Belgium); El-Mourad, Mike; Preumont, Nicolas [Department of Cardiology, Erasme University Hospital, Université Libre de Bruxelles (ULB), Brussels (Belgium); Schoors, Dany; Van Camp, Guy [Department of Cardiology, Universitair Ziekenhuis - UZ Brussel, Brussels (Belgium)

    2014-06-15

    We present a new clinically practical method for online co-registration of 3D quantitative coronary angiography (QCA) and intravascular ultrasound (IVUS) or optical coherence tomography (OCT). The workflow is based on two modified commercially available software packages. Reconstruction steps are explained and compared to previously available methods. The feasibility for different clinical scenarios is illustrated. The co-registration appears accurate and robust, and induced minimal delay in normal cath lab activities. This new method is based on the 3D angiographic reconstruction of the catheter path and does not require the operator's identification of landmarks to establish the image synchronization.

  12. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    Science.gov (United States)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world when watching the screen of a see-through 3D viewer. The goal of our research is to build a display system as follows: when users see the real world through the mobile viewer, the display system gives them virtual 3D images floating in the air, and the observers can touch and interact with these floating images, much as children play with modeling clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by the improved parallax-barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors show the geometric analysis of the proposed measuring method, which is the simplest method, using a single camera rather than a stereo camera, and the results of our viewer system.
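    The "simplest method using a single camera" can be illustrated with basic pinhole geometry. In the sketch below, the focal length, marker spacing, and principal point are assumed calibration values, not the authors': two infrared LED markers a known distance apart yield the viewer's distance from their pixel separation, and its lateral offset from their image position.

```python
# Single-camera position estimate from two IR LED point markers
# (similar-triangles pinhole model; calibration values are assumptions).

F_PX = 800.0        # focal length in pixels (assumed calibration)
MARKER_MM = 100.0   # physical distance between the two LED markers (assumed)

def locate(u0, v0, u1, v1, cx=320.0, cy=240.0):
    """Return (X, Y, Z) of the marker midpoint in camera coordinates, in mm."""
    pixel_sep = ((u1 - u0) ** 2 + (v1 - v0) ** 2) ** 0.5
    z = F_PX * MARKER_MM / pixel_sep            # similar triangles: depth
    um, vm = (u0 + u1) / 2.0, (v0 + v1) / 2.0   # marker midpoint in the image
    x = (um - cx) * z / F_PX                    # back-project the midpoint
    y = (vm - cy) * z / F_PX
    return x, y, z

# markers imaged 80 px apart, centred 40 px right of the principal point
print(locate(340.0, 240.0, 420.0, 240.0))
```

    The closer the viewer, the larger the pixel separation between the markers, so one camera suffices as long as the physical marker spacing is known.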

  13. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and some man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation. In the first approach, researchers use sketch-based modeling; the second method is procedural-grammar-based modeling; and the third approach is close-range-photogrammetry-based modeling. A literature study shows that, to date, there is no complete solution available to create a complete 3D city model by using images, and these image-based methods also have limitations. This paper gives a new approach towards image-based virtual 3D city modeling by using close-range photogrammetry. The approach is divided into three sections. First is the data acquisition process, second is 3D data processing, and third is the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area. Image frames were created from the video data, and the minimum required and suitable video image frames were selected for 3D processing. In the second section, based on close-range photogrammetric principles and computer vision techniques, a 3D model of the area was created. In the third section, this 3D model was exported for adding and merging with other pieces of the large area. Scaling and alignment of the 3D model were done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created. This 3D model was transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost effective and less laborious. The accuracy of this model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee. This campus acts as a prototype for a city.
Aerial photography is restricted in many countries
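    One of the automatic steps in the pipeline, selecting suitable video frames, can be sketched as a blur filter. The sharpness metric and window size below are assumptions; the abstract only states that the minimum required and suitable frames were selected.

```python
# Keep the sharpest frame out of every window of consecutive video frames
# (frames are plain 2-D lists of grey levels; metric is gradient energy).

def sharpness(frame):
    """Sum of squared horizontal/vertical intensity differences."""
    h, w = len(frame), len(frame[0])
    g = 0.0
    for r in range(h - 1):
        for c in range(w - 1):
            g += (frame[r][c + 1] - frame[r][c]) ** 2
            g += (frame[r + 1][c] - frame[r][c]) ** 2
    return g

def select_frames(frames, window=3):
    """Return the index of the sharpest frame in each window of `window` frames."""
    keep = []
    for i in range(0, len(frames), window):
        chunk = frames[i:i + window]
        keep.append(max(range(len(chunk)), key=lambda j: sharpness(chunk[j])) + i)
    return keep

sharp = [[0, 255], [255, 0]]        # high contrast -> in focus
blurred = [[120, 130], [130, 120]]  # low contrast -> motion blur
print(select_frames([blurred, sharp, blurred, blurred, blurred, sharp]))
```

    Discarding blurred, redundant frames before dense matching keeps the photogrammetric processing both faster and more reliable.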

  14. Integration of virtual and real scenes within an integral 3D imaging environment

    Science.gov (United States)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television that avoids adverse psychological effects. To create truly fascinating three-dimensional television programs, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation are focused on depth extraction from captured integral 3D images. The depth calculation method from disparity, and the multiple-baseline method that is used to improve the precision of depth estimation, are also presented. The concept of colour SSD and its further improvement in precision is proposed and verified.
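    The depth-from-disparity step that the colour-SSD work builds on can be sketched for a single scan line. The window size, search range, and the focal length/baseline used in the final depth conversion are arbitrary illustrative choices, not values from the paper.

```python
# Minimal SSD block matching on one scan line: find the horizontal shift
# (disparity) of a small window between a left and right view.

def ssd(a, b):
    """Sum of squared differences between two equal-length windows."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def disparity(left, right, col, half=1, max_d=8):
    """Best leftward shift of the window around `col` into the right image."""
    window = left[col - half: col + half + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(max_d + 1):
        c0 = col - d - half
        if c0 < 0:
            break                      # window would fall off the image
        cost = ssd(window, right[c0: c0 + 2 * half + 1])
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# right scan line is the left one shifted 3 pixels to the left
left  = [0, 0, 0, 0, 10, 80, 200, 80, 10, 0, 0, 0]
right = [0, 10, 80, 200, 80, 10, 0, 0, 0, 0, 0, 0]

# with focal length f (px) and baseline B, depth is z = f * B / d
d = disparity(left, right, 6)
print(d, 800.0 * 60.0 / d)    # e.g. f = 800 px, B = 60 mm
```

    Averaging SSD costs over several baselines (the multiple-baseline method the abstract mentions) disambiguates repetitive texture and sharpens the cost minimum.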

  15. Real-Time, Multiple, Pan/Tilt/Zoom, Computer Vision Tracking, and 3D Position Estimating System for Unmanned Aerial System Metrology

    Science.gov (United States)

    2013-10-18

    area of 3D point estimation of flapping-wing UASs. The benefits of designing and developing such a system are instrumental in researching various... series of successive states until a given name is reached, such as: Object, Animate, Animal, Mammal, Dog, Labrador, Chocolate (Brown), Male, Name... There are many benefits to using SIFT in tracking: it detects features that are invariant to image scale and rotation, and is shown to provide robust

  16. Radiofrequency ablation assisted by real-time virtual sonography for hepatocellular carcinoma inconspicuous under sonography and high-risk locations

    Directory of Open Access Journals (Sweden)

    Cheng-Han Lee

    2015-08-01

    Full Text Available Radiofrequency ablation (RFA) is an effective and real-time targeting modality for small hepatocellular carcinomas (HCCs). However, mistargeting may occur when the target tumor is confused with cirrhotic nodules or because of the poor conspicuity of the index tumor under ultrasonography (US). Real-time virtual sonography (RVS) can provide the same reconstructed computed tomography images as the US images. The aim of this study is to investigate the usefulness of RVS-assisted RFA for HCCs that are inconspicuous or conspicuous under US. A total of 21 patients with 28 HCC tumors, divided into a US-inconspicuous and high-risk subgroup (3 tumors in 3 patients), a US-inconspicuous and non-high-risk subgroup (5 tumors in 4 patients), a US-conspicuous and high-risk subgroup (16 tumors in 14 patients), and a US-conspicuous and non-high-risk subgroup (4 tumors in 3 patients), underwent RVS-assisted RFA between May 2012 and June 2014 in our institution. The mean diameter of the nodules was 2.0 ± 1.1 cm. The results showed that the complete ablation rate was 87.5% (7/8) in the US-undetectable group and 75% (15/20) in the US-detectable group. A comparison between the six tumors with incomplete ablation and the 22 tumors with complete ablation showed a higher alpha-fetoprotein level (mean, 1912 ng/mL vs. 112 ng/mL) and larger tumor size (mean diameter, 26 mm vs. 16 mm) in the incompletely ablated nodules (both p < 0.05). In conclusion, RVS-assisted RFA is useful for tumors that are difficult to detect under conventional US and may also be useful for tumors in high-risk locations, because it may prevent complications induced by mistargeting.

  17. Radiofrequency Ablation Assisted by Real-Time Virtual Sonography and CT for Hepatocellular Carcinoma Undetectable by Conventional Sonography

    International Nuclear Information System (INIS)

    Nakai, Motoki; Sato, Morio; Sahara, Shinya; Takasaka, Isao; Kawai, Nobuyuki; Minamiguchi, Hiroki; Tanihata, Hirohiko; Kimura, Masashi; Takeuchi, Nozomu

    2009-01-01

    Real-time virtual sonography (RVS) is a diagnostic imaging support system which provides the same cross-sectional multiplanar reconstruction images as ultrasound images on the same monitor screen in real time. The purpose of this study was to evaluate radiofrequency ablation (RFA) assisted by RVS and CT for hepatocellular carcinoma (HCC) undetectable with conventional sonography. Subjects were 20 patients with 20 HCC nodules not detected by conventional sonography but detectable by CT or MRI. All patients had hepatitis C-induced liver cirrhosis; there were 13 males and 7 females aged 55-81 years (mean, 69.3 years). RFA was performed in the CT room, and the tumor was punctured with the assistance of RVS. CT was performed immediately after puncture, and ablation was performed after confirming that the needle had been inserted into the tumor precisely. The mean number of punctures and the success rate of the first puncture were evaluated. Treatment effects were evaluated with dynamic CT every 3 months after RFA. RFA was technically feasible and local tumor control was achieved in all patients. The mean number of punctures was 1.1, and the success rate of the first puncture was 90.0%. This method enabled safe ablation without complications. The mean follow-up period was 13.5 months (range, 9-18 months). No local recurrence was observed at the follow-up points. In conclusion, RFA assisted by RVS and CT is a safe and efficacious method of treatment for HCC undetectable by conventional sonography.

  18. Avatar-mediation and transformation of practice in a 3D virtual world

    DEFF Research Database (Denmark)

    Riis, Marianne

    2016-01-01

    The purpose of this study is to understand and conceptualize the transformation of a particular community of pedagogical practice based on the implementation of the 3D virtual world, Second Life™. The community setting is a course at the Danish online postgraduate Master's programme on ICT...... and Learning, which is formally situated at Aalborg University. The study is guided by two research questions focusing on the participants' responses to the avatar phenomenon and the design of the course. In order to conduct and theorize about the transformation of this community of practice due to the 3D....... In summary, the study contributes with knowledge about 3D Virtual Worlds, the influence of the avatar phenomenon and the consequences of 3D-remediation in relation to teaching and learning in online education. Based on the findings, a conceptual design model, a set of design principles, and a design...

  19. Generation of 3D Virtual Geographic Environment Based on Laser Scanning Technique

    Institute of Scientific and Technical Information of China (English)

    DU Jie; CHEN Xiaoyong; FumioYamazaki

    2003-01-01

    This paper demonstrates an experiment on the generation of a 3D virtual geographic environment on the basis of experimental airborne laser scanning data, using a set of algorithms and methods that were developed to automatically interpret range images for extracting geo-spatial features and then to reconstruct geo-objects. The algorithms and methods for the interpretation and modeling of laser scanner data include triangulated-irregular-network (TIN)-based range image interpolation; mathematical-morphology (MM)-based range image filtering, feature extraction and range image segmentation; feature generalization and optimization; 3D object reconstruction and modeling; and computer-graphics (CG)-based visualization and animation of the virtual geographic reality environment.
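    The mathematical-morphology (MM) filtering step can be caricatured in one dimension (the structuring-element size and the height profile are invented): an opening, i.e. an erosion followed by a dilation, suppresses narrow raised objects and leaves a ground estimate from which buildings or trees can then be separated.

```python
# 1-D morphological opening on a laser-scanned height profile:
# erosion takes the window minimum, dilation the window maximum.

def erode(profile, half):
    n = len(profile)
    return [min(profile[max(0, i - half): i + half + 1]) for i in range(n)]

def dilate(profile, half):
    n = len(profile)
    return [max(profile[max(0, i - half): i + half + 1]) for i in range(n)]

def opening(profile, half=2):
    """Erosion then dilation: removes raised features narrower than the window."""
    return dilate(erode(profile, half), half)

# flat ground at 100 m with a 3-sample-wide "building" 12 m tall
heights = [100, 100, 100, 112, 112, 112, 100, 100, 100, 100]
ground = opening(heights)
objects = [h - g for h, g in zip(heights, ground)]
print(ground)
print(objects)
```

    Subtracting the opened profile from the original isolates the above-ground objects, which is the usual prelude to segmentation and 3D object reconstruction.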

  20. APPLICATION OF 3D MODEL OF CULTURAL RELICS IN VIRTUAL RESTORATION

    Directory of Open Access Journals (Sweden)

    S. Zhao

    2018-04-01

    Full Text Available In the traditional cultural relics splicing process, in order to identify the correct spatial location of cultural relic fragments, experts need to manually splice the existing fragments. The repeated contact between fragments can easily cause secondary damage to the cultural relics. In this paper, the application process of 3D models of cultural relics in virtual restoration is put forward, and the relevant processes and ideas are verified with the example of Terracotta Warriors data. Through the combination of traditional cultural relics restoration methods and computer virtual reality technology, virtual restoration with high-precision 3D models of cultural relics can provide a scientific reference for restoration, avoiding the secondary damage caused by improper restoration. The efficiency and safety of the preservation and restoration of cultural relics are thereby improved.

  1. Application of 3d Model of Cultural Relics in Virtual Restoration

    Science.gov (United States)

    Zhao, S.; Hou, M.; Hu, Y.; Zhao, Q.

    2018-04-01

    In the traditional cultural relics splicing process, in order to identify the correct spatial location of cultural relic fragments, experts need to manually splice the existing fragments. The repeated contact between fragments can easily cause secondary damage to the cultural relics. In this paper, the application process of 3D models of cultural relics in virtual restoration is put forward, and the relevant processes and ideas are verified with the example of Terracotta Warriors data. Through the combination of traditional cultural relics restoration methods and computer virtual reality technology, virtual restoration with high-precision 3D models of cultural relics can provide a scientific reference for restoration, avoiding the secondary damage caused by improper restoration. The efficiency and safety of the preservation and restoration of cultural relics are thereby improved.

  2. Experiential Virtual Scenarios With Real-Time Monitoring (Interreality) for the Management of Psychological Stress: A Block Randomized Controlled Trial

    Science.gov (United States)

    Pallavicini, Federica; Morganti, Luca; Serino, Silvia; Scaratti, Chiara; Briguglio, Marilena; Crifaci, Giulia; Vetrano, Noemi; Giulintano, Annunziata; Bernava, Giuseppe; Tartarisco, Gennaro; Pioggia, Giovanni; Raspelli, Simona; Cipresso, Pietro; Vigna, Cinzia; Grassi, Alessandra; Baruffi, Margherita; Wiederhold, Brenda; Riva, Giuseppe

    2014-01-01

    Background The recent convergence between technology and medicine is offering innovative methods and tools for behavioral health care. Among these, an emerging approach is the use of virtual reality (VR) within exposure-based protocols for anxiety disorders, and in particular posttraumatic stress disorder. However, no systematically tested VR protocols are available for the management of psychological stress. Objective Our goal was to evaluate the efficacy of a new technological paradigm, Interreality, for the management and prevention of psychological stress. The main feature of Interreality is a twofold link between the virtual and the real world achieved through experiential virtual scenarios (fully controlled by the therapist, used to learn coping skills and improve self-efficacy) with real-time monitoring and support (identifying critical situations and assessing clinical change) using advanced technologies (virtual worlds, wearable biosensors, and smartphones). Methods The study was designed as a block randomized controlled trial involving 121 participants recruited from two different worker populations—teachers and nurses—that are highly exposed to psychological stress. Participants were a sample of teachers recruited in Milan (Block 1: n=61) and a sample of nurses recruited in Messina, Italy (Block 2: n=60). Participants within each block were randomly assigned to the (1) Experimental Group (EG): n=40; B1=20, B2=20, which received a 5-week treatment based on the Interreality paradigm; (2) Control Group (CG): n=42; B1=22, B2=20, which received a 5-week traditional stress management training based on cognitive behavioral therapy (CBT); and (3) the Wait-List group (WL): n=39, B1=19, B2=20, which was reassessed and compared with the two other groups 5 weeks after the initial evaluation. Results Although both treatments were able to significantly reduce perceived stress better than WL, only EG participants reported a significant reduction (EG=12% vs CG=0

  3. Experiential virtual scenarios with real-time monitoring (interreality) for the management of psychological stress: a block randomized controlled trial.

    Science.gov (United States)

    Gaggioli, Andrea; Pallavicini, Federica; Morganti, Luca; Serino, Silvia; Scaratti, Chiara; Briguglio, Marilena; Crifaci, Giulia; Vetrano, Noemi; Giulintano, Annunziata; Bernava, Giuseppe; Tartarisco, Gennaro; Pioggia, Giovanni; Raspelli, Simona; Cipresso, Pietro; Vigna, Cinzia; Grassi, Alessandra; Baruffi, Margherita; Wiederhold, Brenda; Riva, Giuseppe

    2014-07-08

    The recent convergence between technology and medicine is offering innovative methods and tools for behavioral health care. Among these, an emerging approach is the use of virtual reality (VR) within exposure-based protocols for anxiety disorders, and in particular posttraumatic stress disorder. However, no systematically tested VR protocols are available for the management of psychological stress. Our goal was to evaluate the efficacy of a new technological paradigm, Interreality, for the management and prevention of psychological stress. The main feature of Interreality is a twofold link between the virtual and the real world achieved through experiential virtual scenarios (fully controlled by the therapist, used to learn coping skills and improve self-efficacy) with real-time monitoring and support (identifying critical situations and assessing clinical change) using advanced technologies (virtual worlds, wearable biosensors, and smartphones). The study was designed as a block randomized controlled trial involving 121 participants recruited from two different worker populations-teachers and nurses-that are highly exposed to psychological stress. Participants were a sample of teachers recruited in Milan (Block 1: n=61) and a sample of nurses recruited in Messina, Italy (Block 2: n=60). Participants within each block were randomly assigned to the (1) Experimental Group (EG): n=40; B1=20, B2=20, which received a 5-week treatment based on the Interreality paradigm; (2) Control Group (CG): n=42; B1=22, B2=20, which received a 5-week traditional stress management training based on cognitive behavioral therapy (CBT); and (3) the Wait-List group (WL): n=39, B1=19, B2=20, which was reassessed and compared with the two other groups 5 weeks after the initial evaluation. Although both treatments were able to significantly reduce perceived stress better than WL, only EG participants reported a significant reduction (EG=12% vs. CG=0.5%) in chronic "trait" anxiety. 
A similar

  4. Navigation and wayfinding in learning spaces in 3D virtual worlds

    OpenAIRE

    Minocha, Shailey; Hardy, Christopher

    2016-01-01

    There is a lack of published research on the design guidelines of learning spaces in virtual worlds. Therefore, when institutions aspire to create learning spaces in Second Life, there are few studies or guidelines to inform them except for individual case studies. The Design of Learning Spaces in 3D Virtual Environments (DELVE) project, funded by the Joint Information Systems Committee in the UK, was one of the first initiatives that identified through empirical investigations the usability ...

  5. Virtual inspector: a flexible visualizer for dense 3D scanned models

    OpenAIRE

    Callieri, Marco; Ponchio, Federico; Cignoni, Paolo; Scopigno, Roberto

    2008-01-01

The rapid evolution of automatic shape acquisition technologies will make huge amounts of sampled 3D data available in the near future. The Cultural Heritage (CH) domain is one of the ideal fields of application of 3D scanned data, while some issues in the use of those data are: how to visualize at interactive rates and full quality on commodity computers; how to improve visualization ease of use; how to support the integrated visualization of a virtual 3D artwork and the multimedia data which t...

  6. 3D virtual environment of Taman Mini Indonesia Indah in a web

    Science.gov (United States)

    Wardijono, B. A.; Wardhani, I. P.; Chandra, Y. I.; Pamungkas, B. U. G.

    2018-05-01

Taman Mini Indonesia Indah, known as TMII, is the largest culture-based recreational park in Indonesia. The park covers 250 acres and contains traditional houses from the various provinces of Indonesia. The official TMII website describes these traditional houses, but the information available to the public is limited. To provide the public with more detailed information about TMII, this research aimed to create 3D graphics models of the traditional houses and to present them on a website. Virtual Reality (VR) technology was used to display a visualization of TMII and its surrounding environment. Blender software was used to create the 3D models and Unity3D software to build virtual reality models that can be shown on the web. The research successfully created 33 virtual traditional houses of the provinces of Indonesia. The textures of the traditional houses were taken from the originals to make them realistic. The result of this research is a TMII website, including the virtual culture houses, that can be displayed through a web browser. The website consists of virtual environment scenes through which internet users can walk and navigate.

  7. Interactive virtual simulation using a 3D computer graphics model for microvascular decompression surgery.

    Science.gov (United States)

    Oishi, Makoto; Fukuda, Masafumi; Hiraishi, Tetsuya; Yajima, Naoki; Sato, Yosuke; Fujii, Yukihiko

    2012-09-01

The purpose of this paper is to report on the authors' advanced presurgical interactive virtual simulation technique using a 3D computer graphics model for microvascular decompression (MVD) surgery. The authors performed interactive virtual simulation prior to surgery in 26 patients with trigeminal neuralgia or hemifacial spasm. The 3D computer graphics models for interactive virtual simulation were composed of the brainstem, cerebellum, cranial nerves, vessels, and skull, individually created by image analysis, including segmentation, surface rendering, and data fusion, for data collected by 3-T MRI and 64-row multidetector CT systems. Interactive virtual simulation was performed by employing novel computer-aided design software with manipulation of a haptic device to imitate the surgical procedures of bone drilling and retraction of the cerebellum. The findings were compared with intraoperative findings. In all patients, interactive virtual simulation provided detailed and realistic surgical perspectives, of sufficient quality, representing the lateral suboccipital route. The causes of trigeminal neuralgia or hemifacial spasm determined by observing the 3D computer graphics models were concordant with those identified intraoperatively in 25 (96%) of 26 patients, a significantly higher rate than the 73% concordance (19 of 26 patients) obtained by review of 2D images alone. The interactive virtual simulation using a 3D computer graphics model provided a realistic environment for performing virtual simulations prior to MVD surgery and enabled us to ascertain complex microsurgical anatomy.

  8. Evaluation of a low-cost 3D sound system for immersive virtual reality training systems.

    Science.gov (United States)

    Doerr, Kai-Uwe; Rademacher, Holger; Huesgen, Silke; Kubbat, Wolfgang

    2007-01-01

    Since Head Mounted Displays (HMD), datagloves, tracking systems, and powerful computer graphics resources are nowadays in an affordable price range, the usage of PC-based "Virtual Training Systems" becomes very attractive. However, due to the limited field of view of HMD devices, additional modalities have to be provided to benefit from 3D environments. A 3D sound simulation can improve the capabilities of VR systems dramatically. Unfortunately, realistic 3D sound simulations are expensive and demand a tremendous amount of computational power to calculate reverberation, occlusion, and obstruction effects. To use 3D sound in a PC-based training system as a way to direct and guide trainees to observe specific events in 3D space, a cheaper alternative has to be provided, so that a broader range of applications can take advantage of this modality. To address this issue, we focus in this paper on the evaluation of a low-cost 3D sound simulation that is capable of providing traceable 3D sound events. We describe our experimental system setup using conventional stereo headsets in combination with a tracked HMD device and present our results with regard to precision, speed, and used signal types for localizing simulated sound events in a virtual training environment.
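The cheap alternative evaluated here amounts to synthesizing directional cues rather than computing full reverberation, occlusion, and obstruction effects. A minimal Python sketch of two such cues, an interaural time difference and an equal-power pan law, is given below (the head radius, pan law, and function names are illustrative assumptions, not the authors' implementation):

```python
import math

def stereo_cues(source_xy, head_xy=(0.0, 0.0), ear_sep=0.18, c=343.0):
    """Approximate directional cues for a sound source in the horizontal
    plane: interaural time difference (ITD, seconds) and left/right gains.
    All parameter values are illustrative assumptions."""
    dx = source_xy[0] - head_xy[0]
    dy = source_xy[1] - head_xy[1]
    azimuth = math.atan2(dx, dy)            # 0 rad = straight ahead, +pi/2 = right
    r = ear_sep / 2.0
    itd = (r / c) * (azimuth + math.sin(azimuth))   # Woodworth-style approximation
    pan = (math.sin(azimuth) + 1.0) / 2.0   # 0 = full left, 1 = full right
    gain_l = math.cos(pan * math.pi / 2.0)  # equal-power pan law
    gain_r = math.sin(pan * math.pi / 2.0)
    return itd, gain_l, gain_r
```

Delaying one stereo channel by the ITD and scaling both channels by the gains is enough to make a simulated sound event roughly traceable in direction, which is the capability this kind of low-cost system is evaluated on.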

  9. APPROACH TO CONSTRUCTING 3D VIRTUAL SCENE OF IRRIGATION AREA USING MULTI-SOURCE DATA

    Directory of Open Access Journals (Sweden)

    S. Cheng

    2015-10-01

Full Text Available For an irrigation area that is often complicated by various 3D artificial ground features and the natural environment, the disadvantages of traditional 2D GIS in spatial data representation, management, query, analysis, and visualization are becoming more and more evident. Building a more realistic 3D virtual scene is thus especially urgent for irrigation area managers and decision makers, so that they can carry out various irrigational operations vividly and intuitively. Based on previous researchers' achievements, a simple, practical, and cost-effective approach was proposed in this study, adopting 3D geographic information system (3D GIS) and remote sensing (RS) technology. Based on multi-source data such as Google Earth (GE) high-resolution remote sensing imagery, ASTER G-DEM, hydrological facility maps, and so on, a 3D terrain model and ground feature models were created interactively. Both models were then rendered with texture data and integrated under the ArcGIS platform. A vivid, realistic 3D virtual scene of the irrigation area, which has a good visual effect and possesses the primary GIS functions of data query and analysis, was constructed. Yet there is still a long way to go in establishing a true 3D GIS for the irrigation area; the issues of this study are deeply discussed and future research directions are pointed out at the end of the paper.

  10. Real-time virtual sonography (RVS)-guided vacuum-assisted breast biopsy for lesions initially detected with breast MRI.

    Science.gov (United States)

    Uematsu, Takayoshi

    2013-12-01

To report on our initial experiences with a new method of real-time virtual sonography (RVS)-guided 11-gauge vacuum-assisted breast biopsy for lesions that were initially detected with breast MRI. RVS-guided 11-gauge vacuum-assisted biopsy is performed when a lesion with suspicious characteristics is initially detected with breast MRI and is occult on mammography, sonography, and physical examination. Live sonographic images were co-registered to the previously loaded second-look supine contrast-enhanced breast MRI volume data to correlate the sonography and MR images. Six lesions were examined in six consecutive patients scheduled to undergo RVS-guided 11-gauge vacuum-assisted biopsy. One patient was removed from the study because of non-visualization of the lesion on the second-look supine contrast-enhanced breast MRI. Five patients with non-mass enhancement lesions were biopsied. The lesions ranged in size from 9 to 13 mm (mean 11 mm). The average procedural time, including the sonography and MR image co-registration time, was 25 min. All biopsies resulted in tissue retrieval. One revealed fibroadenomatous nodules, and four revealed fibrocystic changes. There were no complications during or after the procedures. RVS-guided 11-gauge vacuum-assisted breast biopsy provides a safe and effective method for the examination of suspicious lesions initially detected with MRI.

  11. Optical gradients in a-Si:H thin films detected using real-time spectroscopic ellipsometry with virtual interface analysis

    Science.gov (United States)

    Junda, Maxwell M.; Karki Gautam, Laxmi; Collins, Robert W.; Podraza, Nikolas J.

    2018-04-01

    Virtual interface analysis (VIA) is applied to real time spectroscopic ellipsometry measurements taken during the growth of hydrogenated amorphous silicon (a-Si:H) thin films using various hydrogen dilutions of precursor gases and on different substrates during plasma enhanced chemical vapor deposition. A procedure is developed for optimizing VIA model configurations by adjusting sampling depth into the film and the analyzed spectral range such that model fits with the lowest possible error function are achieved. The optimal VIA configurations are found to be different depending on hydrogen dilution, substrate composition, and instantaneous film thickness. A depth profile in the optical properties of the films is then extracted that results from a variation in an optical absorption broadening parameter in a parametric a-Si:H model as a function of film thickness during deposition. Previously identified relationships are used linking this broadening parameter to the overall shape of the optical properties. This parameter is observed to converge after about 2000-3000 Å of accumulated thickness in all layers, implying that similar order in the a-Si:H network can be reached after sufficient thicknesses. In the early stages of growth, however, significant variations in broadening resulting from substrate- and processing-induced order are detected and tracked as a function of bulk layer thickness yielding an optical property depth profile in the final film. The best results are achieved with the simplest film-on-substrate structures while limitations are identified in cases where films have been deposited on more complex substrate structures.

  12. [3D Virtual Reality Laparoscopic Simulation in Surgical Education - Results of a Pilot Study].

    Science.gov (United States)

    Kneist, W; Huber, T; Paschold, M; Lang, H

    2016-06-01

The use of three-dimensional imaging in laparoscopy is a growing issue and has led to 3D systems in laparoscopic simulation. Studies on box trainers have shown differing results concerning the benefit of 3D imaging. There are currently no studies analysing 3D imaging in virtual reality laparoscopy (VRL). Five surgical fellows, 10 surgical residents, and 29 undergraduate medical students performed abstract and procedural tasks on a VRL simulator using conventional 2D and 3D imaging in randomised order. No significant differences between the two imaging systems were shown for students or medical professionals. Participants who preferred three-dimensional imaging showed significantly better results in 2D as well as in 3D imaging. Initial studies of three-dimensional imaging on box trainers yielded mixed results; some found an advantage of 3D imaging for laparoscopic novices. The present study on 3D imaging on a VRL simulator did not confirm the superiority of 3D imaging: there was no significant advantage for 3D imaging compared to conventional 2D imaging. Georg Thieme Verlag KG Stuttgart · New York.

  13. Real-Time Motion Capture Toolbox (RTMocap): an open-source code for recording 3-D motion kinematics to study action-effect anticipations during motor and social interactions.

    Science.gov (United States)

    Lewkowicz, Daniel; Delevoye-Turrell, Yvonne

    2016-03-01

    We present here a toolbox for the real-time motion capture of biological movements that runs in the cross-platform MATLAB environment (The MathWorks, Inc., Natick, MA). It provides instantaneous processing of the 3-D movement coordinates of up to 20 markers at a single instant. Available functions include (1) the setting of reference positions, areas, and trajectories of interest; (2) recording of the 3-D coordinates for each marker over the trial duration; and (3) the detection of events to use as triggers for external reinforcers (e.g., lights, sounds, or odors). Through fast online communication between the hardware controller and RTMocap, automatic trial selection is possible by means of either a preset or an adaptive criterion. Rapid preprocessing of signals is also provided, which includes artifact rejection, filtering, spline interpolation, and averaging. A key example is detailed, and three typical variations are developed (1) to provide a clear understanding of the importance of real-time control for 3-D motion in cognitive sciences and (2) to present users with simple lines of code that can be used as starting points for customizing experiments using the simple MATLAB syntax. RTMocap is freely available (http://sites.google.com/site/RTMocap/) under the GNU public license for noncommercial use and open-source development, together with sample data and extensive documentation.
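The area-of-interest triggers described in (1) and (3) reduce to a simple geometric test on each incoming frame: fire an event when a marker's distance to a reference position first drops below a radius. A Python sketch of that logic follows (the function name and signature are illustrative, not RTMocap's MATLAB API):

```python
import numpy as np

def detect_entry_events(trajectory, center, radius):
    """Return the sample indices at which a marker enters a spherical
    region of interest. `trajectory` is an (N, 3) array of 3-D marker
    coordinates; `center` and `radius` define the region. This mirrors
    the kind of area-of-interest trigger described above, in an
    illustrative offline form."""
    traj = np.asarray(trajectory, dtype=float)
    dist = np.linalg.norm(traj - np.asarray(center, dtype=float), axis=1)
    inside = dist <= radius
    # an "entry" event is a transition from outside to inside
    entries = np.flatnonzero(~inside[:-1] & inside[1:]) + 1
    return entries.tolist()
```

In a real-time setting the same test runs on each new frame as it arrives, and a detected entry index triggers the external reinforcer (light, sound, or odor).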

  14. Exploring 3-D Virtual Reality Technology for Spatial Ability and Chemistry Achievement

    Science.gov (United States)

    Merchant, Z.; Goetz, E. T.; Keeney-Kennicutt, W.; Cifuentes, L.; Kwok, O.; Davis, T. J.

    2013-01-01

    We investigated the potential of Second Life® (SL), a three-dimensional (3-D) virtual world, to enhance undergraduate students' learning of a vital chemistry concept. A quasi-experimental pre-posttest control group design was used to conduct the study. A total of 387 participants completed three assignment activities either in SL or using…

  15. Supporting Distributed Team Working in 3D Virtual Worlds: A Case Study in Second Life

    Science.gov (United States)

    Minocha, Shailey; Morse, David R.

    2010-01-01

    Purpose: The purpose of this paper is to report on a study into how a three-dimensional (3D) virtual world (Second Life) can facilitate socialisation and team working among students working on a team project at a distance. This models the situation in many commercial sectors where work is increasingly being conducted across time zones and between…

  16. Reduced Mental Load in Learning a Motor Visual Task with Virtual 3D Method

    Science.gov (United States)

    Dan, A.; Reiner, M.

    2018-01-01

    Distance learning is expanding rapidly, fueled by the novel technologies for shared recorded teaching sessions on the Web. Here, we ask whether 3D stereoscopic (3DS) virtual learning environment teaching sessions are more compelling than typical two-dimensional (2D) video sessions and whether this type of teaching results in superior learning. The…

  17. A 3D virtual plant-modelling study : Tillering in spring wheat

    NARCIS (Netherlands)

    Evers, J.B.; Vos, J.

    2007-01-01

    Tillering in wheat (Triticum aestivum L.) is influenced by both light intensity and the ratio between the intensities of red and far-red light. The relationships between canopy architecture, light properties within the canopy, and tillering in spring-wheat plants were studied using a 3D virtual

  18. Virtual rough samples to test 3D nanometer-scale scanning electron microscopy stereo photogrammetry.

    Science.gov (United States)

    Villarrubia, J S; Tondare, V N; Vladár, A E

    2016-01-01

The combination of scanning electron microscopy for high spatial resolution, images from multiple angles to provide 3D information, and commercially available stereo photogrammetry software for 3D reconstruction offers promise for nanometer-scale dimensional metrology in 3D. A method is described to test 3D photogrammetry software by the use of virtual samples: mathematical samples from which simulated images are made for use as inputs to the software under test. The virtual sample is constructed by wrapping a rough skin with any desired power spectral density around a smooth near-trapezoidal line with rounded top corners. Reconstruction is performed with images simulated from different angular viewpoints. The software's reconstructed 3D model is then compared to the known geometry of the virtual sample. Three commercial photogrammetry software packages were tested. Two of them produced results for line height and width that were within close to 1 nm of the correct values. All of the packages exhibited some difficulty in reconstructing details of the surface roughness.
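A rough skin "with any desired power spectral density" can be generated by spectral synthesis: scale random-phase Fourier components by the square root of the target PSD and inverse-transform. Below is a 1-D Python sketch of that standard construction, offered only as an illustration of the spectral-synthesis step (the paper's actual virtual samples wrap a 2-D skin around a near-trapezoidal line):

```python
import numpy as np

def rough_profile(n, dx, psd, seed=0):
    """Generate a 1-D zero-mean random rough profile of n samples with
    spacing dx whose power spectral density follows the user-supplied
    function psd(f), with f the spatial frequency in 1/length units.
    Generic spectral-synthesis sketch, not the paper's exact code."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=dx)
    amp = np.sqrt(psd(freqs))            # amplitude ~ sqrt(PSD)
    amp[0] = 0.0                         # drop DC so the profile is zero-mean
    phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
    spectrum = amp * np.exp(1j * phases) # random phases, prescribed amplitudes
    return np.fft.irfft(spectrum, n=n)
```

A 2-D skin follows the same recipe with a 2-D FFT, after which it can be draped over the smooth line geometry.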

19. Interaction in a Virtual Museum Using a Hand-Tracking Sensor with Stereoscopic 3D Presentation

    Directory of Open Access Journals (Sweden)

    Gary Almas Samaita

    2017-01-01

Full Text Available Advances in technology have led museums to develop new ways of presenting their collections. One technology adapted for presenting virtual museums is Virtual Reality (VR) with stereoscopic 3D. Unfortunately, virtual museums with stereoscopic presentation still use the keyboard and mouse as interaction devices. This research aims to design and implement hand-based interaction, using a hand-tracking sensor, in a virtual museum with stereoscopic 3D presentation. The virtual museum is visualized with the side-by-side stereoscopic technique through an Android-based Head Mounted Display (HMD). The HMD also provides head tracking by reading head orientation. Hand interaction is implemented using a hand-tracking sensor mounted on the HMD. Because the hand-tracking sensor is not supported by the Android-based HMD, a server is used as an intermediary between the HMD and the sensor. Testing showed that the average confidence rate of the sensor's readings of the hand gestures used to trigger interactions was 99.92%, with an average effectiveness of 92.61%. A usability test based on ISO/IEC 9126-4 was also conducted to measure the effectiveness, efficiency, and user satisfaction of the designed system, by asking participants to perform 9 tasks representing the hand interactions in the virtual museum. The test results showed that participants could perform all of the designed hand gestures, although the gestures were judged fairly difficult to perform. A questionnaire revealed that in total 86.67% of participants agreed that hand interaction provided a new experience in enjoying a virtual museum.

  20. Web-based three-dimensional Virtual Body Structures: W3D-VBS.

    Science.gov (United States)

    Temkin, Bharti; Acosta, Eric; Hatfield, Paul; Onal, Erhan; Tong, Alex

    2002-01-01

Major efforts are being made to improve the teaching of human anatomy to foster cognition of visuospatial relationships. The Visible Human Project of the National Library of Medicine makes it possible to create virtual reality-based applications for teaching anatomy. Integration of traditional cadaver and illustration-based methods with Internet-based simulations brings us closer to this goal. Web-based three-dimensional Virtual Body Structures (W3D-VBS) is a next-generation immersive anatomical training system for teaching human anatomy over the Internet. It uses Visible Human data to dynamically explore, select, extract, visualize, manipulate, and stereoscopically palpate realistic virtual body structures with a haptic device. Tracking users' progress through evaluation tools helps customize lesson plans. A self-guided "virtual tour" of the whole body allows investigation of labeled virtual dissections repetitively, at any time and place a user requires it.

  1. 3D Adaptive Virtual Exhibit for the University of Denver Digital Collections

    Directory of Open Access Journals (Sweden)

    Shea-Tinn Yeh

    2015-07-01

    Full Text Available While the gaming industry has taken the world by storm with its three-dimensional (3D user interfaces, current digital collection exhibits presented by museums, historical societies, and libraries are still limited to a two-dimensional (2D interface display. Why can’t digital collections take advantage of this 3D interface advancement? The prototype discussed in this paper presents to the visitor a 3D virtual exhibit containing a set of digital objects from the University of Denver Libraries’ digital image collections, giving visitors an immersive experience when viewing the collections. In particular, the interface is adaptive to the visitor’s browsing behaviors and alters the selection and display of the objects throughout the exhibit to encourage serendipitous discovery. Social media features were also integrated to allow visitors to share items of interest and to create a sense of virtual community.

  2. Hybrid Design Tools in a Social Virtual Reality Using Networked Oculus Rift: A Feasibility Study in Remote Real-Time Interaction

    NARCIS (Netherlands)

    Wendrich, Robert E.; Chambers, Kris-Howard; Al-Halabi, Wadee; Seibel, Eric J.; Grevenstuk, Olaf; Ullman, David; Hoffman, Hunter G.

    2016-01-01

    Hybrid Design Tool Environments (HDTE) allow designers and engineers to use real tangible tools and physical objects and/or artifacts to make and create real-time virtual representations and presentations on-the-fly. Manipulations of the real tangible objects (e.g., real wire mesh, clay, sketches,

  3. An Interactive 3D Virtual Anatomy Puzzle for Learning and Simulation - Initial Demonstration and Evaluation.

    Science.gov (United States)

    Messier, Erik; Wilcox, Jascha; Dawson-Elli, Alexander; Diaz, Gabriel; Linte, Cristian A

    2016-01-01

To inspire young students (grades 6-12) to become medical practitioners and biomedical engineers, it is necessary to expose them to key concepts of the field in a way that is both exciting and informative. Recent advances in medical image acquisition, manipulation, processing, visualization, and display have revolutionized the approach in which the human body and internal anatomy can be seen and studied. It is now possible to collect 3D, 4D, and 5D medical images of patient-specific data, and display that data to the end user using consumer-level 3D stereoscopic display technology. Despite such advancements, traditional 2D modes of content presentation such as textbooks and slides are still the standard didactic equipment used to teach young students anatomy. More sophisticated methods of display can help to elucidate the complex 3D relationships between structures that are so often missed when viewing only 2D media, and can instill in students an appreciation for the interconnection between medicine and technology. Here we describe the design, implementation, and preliminary evaluation of a 3D virtual anatomy puzzle dedicated to helping users learn the anatomy of various organs and systems by manipulating 3D virtual data. The puzzle currently comprises several components of the human anatomy and can be easily extended to include additional organs and systems. The 3D virtual anatomy puzzle game was implemented and piloted using three display paradigms - a traditional 2D monitor, a 3D TV with active shutter glasses, and the DK2 version of the Oculus Rift - as well as two different user interaction devices: a space mouse and traditional keyboard controls.

  4. Virtual cardiotomy based on 3-D MRI for preoperative planning in congenital heart disease

    International Nuclear Information System (INIS)

    Soerensen, Thomas Sangild; Beerbaum, Philipp; Razavi, Reza; Greil, Gerald Franz; Mosegaard, Jesper; Rasmusson, Allan; Schaeffter, Tobias; Austin, Conal

    2008-01-01

Patient-specific preoperative planning in complex congenital heart disease may be greatly facilitated by virtual cardiotomy. Surgeons can perform an unlimited number of surgical incisions on a virtual 3-D reconstruction to evaluate the feasibility of different surgical strategies. To quantitatively evaluate the quality of the underlying imaging data and the accuracy of the corresponding segmentation, and to qualitatively evaluate the feasibility of virtual cardiotomy. A whole-heart MRI sequence was applied in 42 children with congenital heart disease (age 3±3 years, weight 13±9 kg, heart rate 96±21 bpm). Image quality was graded 1-4 (diagnostic image quality ≥2) by two independent blinded observers. In patients with diagnostic image quality the segmentation quality was also graded 1-4 (4 = no discrepancies, 1 = misleading error). The average image quality score was 2.7, sufficient for virtual reconstruction in 35 of 38 patients (92%) older than 1 month. Segmentation time was 59±10 min (average quality score 3.5). Virtual cardiotomy was performed in 19 patients. Accurate virtual reconstructions of patient-specific cardiac anatomy can be produced in less than 1 h from 3-D MRI. The presented work thus introduces a new, clinically feasible noninvasive technique for improved preoperative planning in complex cases of congenital heart disease. (orig.)

  5. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    Science.gov (United States)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

This paper describes a comprehensive empirical performance evaluation of 3D video processing employing a physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards, and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, along with other factors including network I/O, disk I/O, and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice, and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  6. 3D-e-Chem-VM: Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine.

    Science.gov (United States)

    McGuire, Ross; Verhoeven, Stefan; Vass, Márton; Vriend, Gerrit; de Esch, Iwan J P; Lusher, Scott J; Leurs, Rob; Ridder, Lars; Kooistra, Albert J; Ritschel, Tina; de Graaf, Chris

    2017-02-27

3D-e-Chem-VM is an open source, freely available Virtual Machine ( http://3d-e-chem.github.io/3D-e-Chem-VM/ ) that integrates cheminformatics and bioinformatics tools for the analysis of protein-ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools that can analyze and combine small-molecule and protein structural information in a graphical programming environment. New chemical and biological data analytics tools and workflows have been developed for the efficient exploitation of structural and pharmacological protein-ligand interaction data from proteome-wide databases (e.g., ChEMBLdb and PDB), as well as customized information systems focused on, e.g., G protein-coupled receptors (GPCRdb) and protein kinases (KLIFS). The integrated structural cheminformatics research infrastructure compiled in the 3D-e-Chem-VM enables the design of new approaches in virtual ligand screening (Chemdb4VS), ligand-based metabolism prediction (SyGMa), and structure-based protein binding site comparison and bioisosteric replacement for ligand design (KRIPOdb).

  7. Real-Time linux dynamic clamp: a fast and flexible way to construct virtual ion channels in living cells.

    Science.gov (United States)

    Dorval, A D; Christini, D J; White, J A

    2001-10-01

We describe a system for real-time control of biological and other experiments. This device, based around the Real-Time Linux operating system, was tested specifically in the context of dynamic clamping, a demanding real-time task in which a computational system mimics the effects of nonlinear membrane conductances in living cells. The system is fast enough to represent dozens of nonlinear conductances in real time at clock rates well above 10 kHz. Conductances can be represented in deterministic form, or more accurately as discrete collections of stochastically gating ion channels. Tests were performed using a variety of complex models of nonlinear membrane mechanisms in excitable cells, including simulations of spatially extended excitable structures and multiple interacting cells. Only in extreme cases does the computational load interfere with high-speed "hard" real-time processing (i.e., real-time processing that never falters). Freely available on the worldwide web, this experimental control system combines good performance, immense flexibility, low cost, and reasonable ease of use. It is easily adapted to any task involving real-time control, and excels in particular for applications requiring complex control algorithms that must operate at speeds over 1 kHz.
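At its core, each dynamic-clamp cycle is an ohmic calculation: read the membrane potential, advance the gating state of the virtual conductance, and command the current I = -g·m·(V - E_rev). A minimal Python sketch of one deterministic update is shown below (the kinetic parameters and names are illustrative assumptions; the actual system performs this with hardware I/O under Real-Time Linux at clock rates above 10 kHz):

```python
import math

def update_gate(m, v, dt, tau_ms=5.0, v_half=-40.0, k=5.0):
    """Advance a first-order gating variable m toward a sigmoidal steady
    state m_inf(v) over one timestep dt (ms). Illustrative kinetics,
    not a specific published channel model."""
    m_inf = 1.0 / (1.0 + math.exp(-(v - v_half) / k))
    return m + (m_inf - m) * (dt / tau_ms)

def clamp_current(v, m, g_max=10.0, e_rev=-70.0):
    """Current to inject (pA) so the cell behaves as if it carried an
    extra conductance g_max*m (nS) with reversal e_rev (mV):
    I_inj = -g_max*m*(V - E_rev). Units chosen so that nS * mV = pA."""
    return -g_max * m * (v - e_rev)
```

The stochastic variant mentioned in the abstract replaces the deterministic gate with a discrete population of two-state channels whose open/close transitions are drawn randomly each cycle.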

  8. Augmenting real-time video with virtual models for enhanced visualization for simulation, teaching, training and guidance

    Science.gov (United States)

    Potter, Michael; Bensch, Alexander; Dawson-Elli, Alexander; Linte, Cristian A.

    2015-03-01

    In minimally invasive surgical interventions, direct visualization of the target area is often not available. Instead, clinicians rely on images from various sources, along with surgical navigation systems for guidance. These spatial localization and tracking systems function much like the Global Positioning System (GPS) with which we are all familiar. In this work we demonstrate how the video feed from a typical camera, which could mimic a laparoscopic or endoscopic camera used during an interventional procedure, can be used to identify the pose of the camera with respect to the viewed scene and to augment the video feed with computer-generated information, such as renderings of internal anatomy not visible beyond the imaged surface, resulting in a simple augmented reality environment. This paper describes the software and hardware environment and the methodology for augmenting the real world with virtual models extracted from medical images, providing enhanced visualization beyond the surface view achieved using traditional imaging. Following intrinsic and extrinsic camera calibration, the technique was implemented and demonstrated using a LEGO structure phantom, as well as a 3D-printed patient-specific left atrial phantom. We assessed the quality of the overlay in terms of fiducial localization, fiducial registration, and target registration errors, as well as the overlay offset error. Using the software extensions we developed in conjunction with common webcams, it is possible to achieve tracking accuracy comparable to that seen with significantly more expensive hardware, leading to target registration errors on the order of 2 mm.
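
    The overlay step described above reduces to pinhole projection: once the intrinsic matrix K is known from calibration and the pose (R, t) from tracking, each virtual model point X lands in the video frame at x ~ K(RX + t). A minimal numpy sketch with purely illustrative intrinsics, pose, and landmark coordinates (none of these values come from the paper):

```python
import numpy as np

# Project 3D model points into the camera image so computer-generated anatomy
# can be drawn on top of the live video frame. Values are illustrative only.

K = np.array([[800.0,   0.0, 640.0],   # fx, skew, cx (pixels)
              [  0.0, 800.0, 360.0],   # fy, cy
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                          # camera looking down +Z, no rotation
t = np.array([0.0, 0.0, 2.0])          # model 2 m in front of the camera

def project(points_3d):
    """Project Nx3 model points into Nx2 pixel coordinates."""
    cam = points_3d @ R.T + t          # world frame -> camera frame
    pix = cam @ K.T                    # apply intrinsics
    return pix[:, :2] / pix[:, 2:3]    # perspective divide

model = np.array([[0.0, 0.0, 0.0],     # hypothetical anatomy landmark at origin
                  [0.1, 0.0, 0.0]])    # second landmark 10 cm to its right
print(project(model))
```

    A landmark on the optical axis projects to the principal point (cx, cy); drawing the projected points onto each frame yields the basic augmented-reality overlay.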

  9. Seamless 3D interaction for virtual tables, projection planes, and CAVEs

    Science.gov (United States)

    Encarnacao, L. M.; Bimber, Oliver; Schmalstieg, Dieter; Barton, Robert J., III

    2000-08-01

    The Virtual Table presents stereoscopic graphics to a user in a workbench-like setting. This device shares with other large-screen display technologies (such as data walls and surround-screen projection systems) the lack of human-centered, unencumbered user interfaces and 3D interaction technologies. Such shortcomings severely limit the application of virtual reality (VR) technology to time-critical tasks, as well as to deployment scenarios that involve heterogeneous groups of end-users without high levels of computer familiarity and expertise. Traditionally, such scenarios are common in planning-related application areas such as mission rehearsal and command and control. For these applications, a high degree of flexibility with respect to system requirements (display and I/O devices), as well as the ability to seamlessly and intuitively switch between different interaction modalities and techniques, is sought. Conventional VR techniques may be insufficient to meet this challenge. This paper presents novel approaches for human-centered interfaces to virtual environments, focusing on the Virtual Table display device. It introduces new paradigms for 3D interaction in virtual environments (VE) for a variety of application areas based on pen-and-clipboard, mirror-in-hand, and magic-lens metaphors, and introduces new concepts for combining VR and augmented reality (AR) techniques. It finally describes approaches toward hybrid and distributed multi-user interaction environments, and concludes by hypothesizing on possible use cases for defense applications.

  10. APLIKASI 3D TERRAIN VIRTUAL RECREATION GARUDA WISNU KENCANA CULTURAL PARK

    Directory of Open Access Journals (Sweden)

    Gede Indra Raditya Martha

    2016-08-01

    Full Text Available The 3D Terrain application for the Garuda Wisnu Kencana Cultural Park (GWK), or GWK 3DVR, is a virtual recreation application offering one of the fastest ways to complete, virtually, the prestigious GWK project, whose construction was stalled by the Indonesian monetary crisis of 1997. The application was built by combining 3D objects into a virtual environment designed to resemble the actual GWK grounds according to the 2014 masterplan, supplemented by direct interviews with GWK's architects. Because GWK 3DVR requires fairly high hardware specifications, it is equipped with a graphics-quality settings feature. Users can walk through the GWK complex as if present, using the navigation buttons and first-person camera mode provided in the application. An immersive, realistic sensation can be experienced when the application is operated with a head-mounted display, a class of device that is becoming easier to obtain as virtual reality grows rapidly in popularity in multimedia and gaming. Although only virtual, the application is expected to visualize the finished form of GWK; overall, it runs well and presents, in virtual 3D form, the shape and estimated layout of the GWK site that has yet to be completed. Keywords: Virtual recreation, first person point of view, Garuda Wisnu Kencana.

  11. Objective and subjective quality assessment of geometry compression of reconstructed 3D Humans in a 3D virtual room

    NARCIS (Netherlands)

    R.N. Mekuria (Rufael); P.S. Cesar Garcia (Pablo Santiago); A. Frisiello (Antonella); I. Doumanis (Ioannis)

    2015-01-01

    Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced

  12. Using the CAVE virtual-reality environment as an aid to 3-D electromagnetic field computation

    International Nuclear Information System (INIS)

    Turner, L.R.; Levine, D.; Huang, M.; Papka, M.

    1995-01-01

    One of the major problems in three-dimensional (3-D) field computation is visualizing the resulting 3-D field distributions. A virtual-reality environment such as the CAVE (CAVE Automatic Virtual Environment) is helping to overcome this problem, thus making the results of computation more usable for designers and users of magnets and other electromagnetic devices. As a demonstration of the capabilities of the CAVE, the elliptical multipole wiggler (EMW), an insertion device being designed for the Advanced Photon Source (APS) now being commissioned at Argonne National Laboratory (ANL), was made visible, along with its fields and beam orbits. Other uses of the CAVE in preprocessing and postprocessing computation for electromagnetic applications are also discussed

  13. Encountered-Type Haptic Interface for Representation of Shape and Rigidity of 3D Virtual Objects.

    Science.gov (United States)

    Takizawa, Naoki; Yano, Hiroaki; Iwata, Hiroo; Oshiro, Yukio; Ohkohchi, Nobuhiro

    2017-01-01

    This paper describes the development of an encountered-type haptic interface that can generate the physical characteristics, such as shape and rigidity, of three-dimensional (3D) virtual objects using an array of newly developed non-expandable balloons. To alter the rigidity of each non-expandable balloon, the volume of air in it is controlled through a linear actuator and a pressure sensor based on Hooke's law. Furthermore, to change the volume of each balloon, its exposed surface area is controlled by using another linear actuator with a trumpet-shaped tube. A position control mechanism is constructed to display virtual objects using the balloons. The 3D position of each balloon is controlled using a flexible tube and a string. The performance of the system is tested and the results confirm the effectiveness of the proposed principle and interface.
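
    The Hooke's-law rendering idea, adjusting a balloon's air volume until its restoring force matches F = k·x for a commanded virtual stiffness k, can be sketched as a simple proportional feedback rule. The gain, sign convention, and units below are hypothetical, not taken from the paper:

```python
# Haptic-rigidity control sketch: drive the air volume so the measured
# restoring force tracks the virtual spring force F = k * x.
# Gain and units are illustrative assumptions, not the authors' values.

K_P = 0.5  # proportional gain (hypothetical)

def target_force(stiffness, indentation):
    """Desired restoring force for a virtual spring of the given stiffness."""
    return stiffness * indentation

def volume_command(stiffness, indentation, measured_force, current_volume):
    """One control cycle: shrink the air volume if the measured force is too
    low (balloon must feel stiffer), grow it if the force is too high."""
    error = target_force(stiffness, indentation) - measured_force
    return current_volume - K_P * error   # actuator setpoint, arbitrary units
```

    Each control cycle, the linear actuator would be driven toward the returned volume; reducing the exposed air volume stiffens the balloon, enlarging it softens the balloon.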

  14. Virtual embryology: a 3D library reconstructed from human embryo sections and animation of development process.

    Science.gov (United States)

    Komori, M; Miura, T; Shiota, K; Minato, K; Takahashi, T

    1995-01-01

    The volumetric shape of a human embryo and its development are hard to comprehend, as they have traditionally been presented as 2D schemes in textbooks or as microscopic sectional images. In this paper, a CAI (computer-assisted instruction) and research support system for human embryology using multimedia presentation techniques is described. In this system, 3D data are acquired from a series of sliced specimens. The 3D structure can be viewed interactively by rotating, extracting, and truncating the whole body or an individual organ. Moreover, the development process of embryos can be animated using a morphing technique applied to specimens at several stages. The system is intended to be used interactively, like a virtual reality system; hence, it is called Virtual Embryology.

  15. Versatile, immersive, creative and dynamic virtual 3-D healthcare learning environments: a review of the literature.

    Science.gov (United States)

    Hansen, Margaret M

    2008-09-01

    The author provides a critical overview of three-dimensional (3-D) virtual worlds and "serious gaming" that are currently being developed and used in healthcare professional education and medicine. The relevance of this e-learning innovation for teaching students and professionals is debatable, and variables influencing its adoption by academics, healthcare professionals, and business executives, such as increased knowledge, self-directed learning, and peer collaboration, are examined while looking at various Web 2.0/3.0 applications. There is a need for more empirical research in order to unearth the pedagogical outcomes and advantages associated with this e-learning technology. A brief description of Rogers' Diffusion of Innovations Theory and Siemens' Connectivism Theory for today's learners is presented as potential underlying pedagogical tenets to support the use of virtual 3-D learning environments in higher education and healthcare.

  16. The Learner Characteristics, Features of Desktop 3D Virtual Reality Environments, and College Chemistry Instruction: A Structural Equation Modeling Analysis

    Science.gov (United States)

    Merchant, Zahira; Goetz, Ernest T.; Keeney-Kennicutt, Wendy; Kwok, Oi-man; Cifuentes, Lauren; Davis, Trina J.

    2012-01-01

    We examined a model of the impact of a 3D desktop virtual reality environment on the learner characteristics (i.e. perceptual and psychological variables) that can enhance chemistry-related learning achievements in an introductory college chemistry class. The relationships between the 3D virtual reality features and the chemistry learning test as…

  17. The Application of the Technology of 3D Satellite Cloud Imaging in Virtual Reality Simulation

    Directory of Open Access Journals (Sweden)

    Xiao-fang Xie

    2007-05-01

    Full Text Available Using satellite cloud images to simulate clouds is one of the new visual simulation technologies in Virtual Reality (VR). Taking the original data of satellite cloud images as the source, this paper describes in detail the technology of 3D satellite cloud imaging through coordinate transformation and projection, the creation of a DEM (Digital Elevation Model) of the cloud image, and 3D simulation. A Mercator projection was introduced to create the cloud-image DEM, solutions for geodetic problems were introduced to calculate distances, and the exterior ballistics of rockets was introduced to obtain the elevation of clouds. For demonstration, we report on a computer program that simulates the 3D satellite cloud images.
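
    The Mercator step of this pipeline can be written out directly. The sketch below uses the standard spherical-Earth forward Mercator formulas for georeferencing image pixels onto a planar DEM grid; the paper's exact formulation and constants may differ:

```python
import math

# Forward Mercator projection: longitude maps linearly to x, latitude to
# y = R * ln(tan(pi/4 + lat/2)). R is a spherical-Earth radius (assumption).

EARTH_R = 6371000.0  # metres, spherical approximation

def mercator(lat_deg, lon_deg):
    """Project a lat/lon pair (degrees) to planar Mercator coordinates (metres)."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = EARTH_R * lon
    y = EARTH_R * math.log(math.tan(math.pi / 4 + lat / 2))
    return x, y

# The equator maps to y = 0, and x grows linearly with longitude:
print(mercator(0.0, 90.0))
```

    Inverting this mapping per pixel, then assigning each grid cell an elevation, yields the cloud-image DEM the abstract describes.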

  18. 2D virtual texture on 3D real object with coded structured light

    Science.gov (United States)

    Molinier, Thierry; Fofi, David; Salvi, Joaquim; Gorria, Patrick

    2008-02-01

    Augmented reality is used to improve color segmentation on the human body or on precious artifacts that must not be touched. We propose a technique to project a synthesized texture onto a real object without contact. Our technique can be used in medical or archaeological applications. By projecting a suitable set of light patterns onto the surface of a 3D real object and capturing images with a camera, a large number of correspondences can be found and the 3D points can be reconstructed. We aim to determine these points of correspondence between the camera and the projector from a scene without explicit points and normals. We then project an adjusted texture onto the real object's surface. We propose a global and automatic method to virtually texture a 3D real object.

  19. Faster acquisition of laparoscopic skills in virtual reality with haptic feedback and 3D vision.

    Science.gov (United States)

    Hagelsteen, Kristine; Langegård, Anders; Lantz, Adam; Ekelund, Mikael; Anderberg, Magnus; Bergenfelz, Anders

    2017-10-01

    The study investigated whether 3D vision and haptic feedback in combination in a virtual reality environment leads to more efficient learning of laparoscopic skills in novices. Twenty novices were allocated to two groups. All completed a training course in the LapSim® virtual reality trainer consisting of four tasks: 'instrument navigation', 'grasping', 'fine dissection' and 'suturing'. The study group performed with haptic feedback and 3D vision and the control group without. Before and after the LapSim® course, the participants' metrics were recorded when tying a laparoscopic knot in the 2D video box trainer Simball® Box. The study group completed the training course in 146 (100-291) minutes compared to 215 (175-489) minutes in the control group (p = .002). The number of attempts to reach proficiency was significantly lower. The study group showed significantly faster learning of skills in three out of four individual tasks: instrument navigation, grasping and suturing. Using the Simball® Box, no difference in laparoscopic knot tying after the LapSim® course was noted when comparing the groups. Laparoscopic training in virtual reality with 3D vision and haptic feedback made training more time-efficient and did not negatively affect later 2D video box performance.

  20. Mackay campus of environmental education and digital cultural construction: the application of 3D virtual reality

    Science.gov (United States)

    Chien, Shao-Chi; Chung, Yu-Wei; Lin, Yi-Hsuan; Huang, Jun-Yi; Chang, Jhih-Ting; He, Cai-Ying; Cheng, Yi-Wen

    2012-04-01

    This study uses 3D virtual reality technology to create the "Mackay campus environmental education and digital culture 3D navigation system" for local historical sites in the Tamsui (Hoba) area, in hopes of providing tourism information and historical-site navigation through a 3D navigation system. We used AutoCAD, SketchUp, and SpaceEyes 3D software to construct the virtual reality scenes and recreate the school's historical sites, such as the House of Reverends, the House of Maidens, the Residence of Mackay, and the Education Hall. Using this technology, we completed the Mackay campus environmental education and digital culture platform. The platform we established achieves the desired function of providing tourism information and historical-site navigation. The interactive multimedia style and the presentation of the information allow users to obtain a direct information response. In addition to showing the external appearance of buildings, the navigation platform also allows users to enter the buildings to view lifelike scenes and textual information related to the historical sites. The historical sites are modeled at their actual size, which gives users a more realistic feel. As for the navigation route, the system does not force users along a fixed path, but instead allows them to freely control the route they take to view the historical sites on the platform.

  1. 3D virtual character reconstruction from projections: a NURBS-based approach

    Science.gov (United States)

    Triki, Olfa; Zaharia, Titus B.; Preteux, Francoise J.

    2004-05-01

    This work has been carried out within the framework of the industrial project TOON, supported by the French government. TOON aims at developing tools for automating traditional 2D cartoon content production. This paper presents preliminary results from the TOON platform. The proposed methodology addresses 2D/3D reconstruction from a limited number of drawn projections, and 2D/3D manipulation/deformation/refinement of virtual characters. Specifically, we show that the NURBS-based modeling approach developed here offers a well-suited framework for generating deformable 3D virtual characters from incomplete 2D information. Furthermore, crucial functionalities such as animation and non-rigid deformation can also be efficiently handled and solved. Note that user interaction takes place exclusively in 2D, through a multiview constraint specification method. This is fully consistent with cartoon creators' traditional practice and makes it possible to avoid 3D modeling software packages, which are generally complex to manipulate.

  2. Effects of Different Types of 3D Rest Frames on Reducing Cybersickness in a Virtual Environment

    Directory of Open Access Journals (Sweden)

    KyungHun Han

    2011-10-01

    Full Text Available A virtual environment (VE) presents several kinds of sensory stimuli for creating a virtual reality. Some sensory stimuli presented in VEs have been reported to provoke cybersickness, which is caused by conflicts between sensory stimuli, especially between visual and vestibular sensations. Application of a rest frame is known to be effective in reducing cybersickness by alleviating sensory conflict. The form and the way rest frames are presented in 3D VEs have different effects on reducing cybersickness. In this study, two different types of 3D rest frames were created. To verify the rest frames' effects in reducing cybersickness, twenty subjects were exposed to two different rest frame conditions and a non-rest-frame condition, at intervals of three days, in a 3D VE. We observed the characteristic physiological changes of cybersickness in terms of autonomic regulation. Psychophysiological signals including EEG, EGG, and HRV were recorded, and a simulator sickness questionnaire (SSQ) was used to measure the intensity of sickness before and after exposure to the different conditions. In the results, SSQ scores were reduced significantly in the rest frame conditions. Psychophysiological responses changed significantly in the rest frame conditions compared to the non-rest-frame condition. The results suggest that the rest frame conditions have condition-specific effects on reducing cybersickness by differentially alleviating aspects of visual and vestibular sensory conflict in the 3D VE.

  3. Elderly Healthcare Monitoring Using an Avatar-Based 3D Virtual Environment

    Directory of Open Access Journals (Sweden)

    Matti Pouke

    2013-12-01

    Full Text Available Homecare systems for elderly people are becoming increasingly important, both for economic reasons and because of patients' preferences. Sensor-based surveillance technologies are an expected future trend, but research so far has devoted little attention to the User Interface (UI) design of such systems and to a user-centric design approach. In this paper, we explore the possibilities of an avatar-based 3D visualization system that exploits wearable sensors and human activity simulations. We present a technical prototype and an evaluation of alternative concept designs for UIs based on a 3D virtual world. The evaluation was conducted with homecare providers through focus groups and an online survey. Our results show, firstly, that systems taking advantage of 3D virtual world visualization techniques have potential, especially due to their privacy-preserving and simplified information presentation style, and secondly, that simple representations and glanceability should be emphasized in the design. The identified key use cases highlight that avatar-based 3D presentations can be helpful if they provide an overview as well as details on demand.

  4. 3D Visualization of Cultural Heritage Artefacts with Virtual Reality devices

    Science.gov (United States)

    Gonizzi Barsanti, S.; Caruso, G.; Micoli, L. L.; Covarrubias Rodriguez, M.; Guidi, G.

    2015-08-01

    Although 3D models are useful for preserving information about historical artefacts, the potential of these digital contents is not fully realized until they are used to interactively communicate their significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research is aimed at valorising, and making more accessible, the Egyptian funeral objects exhibited in the Sforza Castle in Milan. The results of the research will be used for the renewal of the current exhibition at the Archaeological Museum in Milan, making it more attractive. A 3D virtual interactive scenario regarding the "path of the dead", an important ritual in ancient Egypt, was realized to augment the experience and the comprehension of the public through interactivity. Four important artefacts were considered for this scope: two ushabty, a wooden sarcophagus and a heart scarab. The scenario was realized by integrating low-cost Virtual Reality technologies, such as the Oculus Rift DK2 and the Leap Motion controller, and implementing specific software using Unity. The 3D models were augmented with responsive points of interest in relation to important symbols or features of the artefacts. This allows highlighting single parts of an artefact in order to better identify the hieroglyphs and provide their translation. The paper describes the process of optimizing the 3D models, the implementation of the interactive scenario, and the results of tests carried out in the lab.

  5. Virtual 3D bladder reconstruction for augmented medical records from white light cystoscopy (Conference Presentation)

    Science.gov (United States)

    Lurie, Kristen L.; Zlatev, Dimitar V.; Angst, Roland; Liao, Joseph C.; Ellerbee, Audrey K.

    2016-02-01

    Bladder cancer has a high recurrence rate that necessitates lifelong surveillance to detect mucosal lesions. Examination with white light cystoscopy (WLC), the standard of care, is inherently subjective and data storage limited to clinical notes, diagrams, and still images. A visual history of the bladder wall can enhance clinical and surgical management. To address this clinical need, we developed a tool to transform in vivo WLC videos into virtual 3-dimensional (3D) bladder models using advanced computer vision techniques. WLC videos from rigid cystoscopies (1280 x 720 pixels) were recorded at 30 Hz followed by immediate camera calibration to control for image distortions. Video data were fed into an automated structure-from-motion algorithm that generated a 3D point cloud followed by a 3D mesh to approximate the bladder surface. The highest quality cystoscopic images were projected onto the approximated bladder surface to generate a virtual 3D bladder reconstruction. In intraoperative WLC videos from 36 patients undergoing transurethral resection of suspected bladder tumors, optimal reconstruction was achieved from frames depicting well-focused vasculature, when the bladder was maintained at constant volume with minimal debris, and when regions of the bladder wall were imaged multiple times. A significant innovation of this work is the ability to perform the reconstruction using video from a clinical procedure collected with standard equipment, thereby facilitating rapid clinical translation, application to other forms of endoscopy and new opportunities for longitudinal studies of cancer recurrence.

  6. 3D Virtual Worlds as Art Media and Exhibition Arenas: Students' Responses and Challenges in Contemporary Art Education

    Science.gov (United States)

    Lu, Lilly

    2013-01-01

    3D virtual worlds (3D VWs) are considered one of the emerging learning spaces of the 21st century; however, few empirical studies have investigated educational applications and student learning aspects in art education. This study focused on students' responses to and challenges with 3D VWs in both aspects. The findings show that most participants…

  7. Virtual endoscopic images by 3D FASE cisternography for neurovascular compression

    International Nuclear Information System (INIS)

    Ishimori, Takashi; Nakano, Satoru; Kagawa, Masahiro

    2003-01-01

    Three-dimensional fast asymmetric spin echo (3D FASE) cisternography provides high spatial resolution and excellent contrast as a water image acquisition technique. It is also useful for the evaluation of various anatomical regions. This study investigated the usefulness and limitations of virtual endoscopic images obtained by 3D FASE MR cisternography in the preoperative evaluation of patients with neurovascular compression. The study included 12 patients with neurovascular compression: 10 with hemifacial spasm and two with trigeminal neuralgia. The diagnosis was surgically confirmed in all patients. The virtual endoscopic images obtained were judged to be of acceptable quality for interpretation in all cases. The areas of compression identified in preoperative diagnosis with virtual endoscopic images showed good agreement with those observed from surgery, except in one case in which the common trunk of the anterior inferior cerebellar artery and posterior inferior cerebellar artery (AICA-PICA) bifurcated near the root exit zone of the facial nerve. The veins are displayed in some cases but not in others. The main advantage of generating virtual endoscopic images is that such images can be used for surgical simulation, allowing the neurosurgeon to perform surgical procedures with greater confidence. (author)

  8. Virtual Boutique: a 3D modeling and content-based management approach to e-commerce

    Science.gov (United States)

    Paquet, Eric; El-Hakim, Sabry F.

    2000-12-01

    The Virtual Boutique is made up of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique; it can reproduce any existing boutique. For this purpose, photogrammetry is used: a set of pictures of a real boutique or space is taken, and a virtual 3D representation of this space is calculated from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are added, such as paintings, computer-generated objects and scanned objects. The objects are scanned with a laser scanner developed at NRC, which allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, is made up of all the merchandise and the manipulators that are used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, can run in mono and stereo mode, and has been optimized to allow high-quality rendering.
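
    The colour half of such a shape-and-colour search engine can be illustrated with a toy histogram matcher: summarize each object as a normalized colour histogram and rank the inventory by histogram intersection with the query. This is a simplified sketch of the general technique, not NRC's actual engine:

```python
from collections import Counter

# Toy content-based retrieval by colour: coarse RGB histograms plus
# histogram-intersection similarity. Object names and pixels are made up.

def color_histogram(pixels, bins=4):
    """Coarse RGB histogram: each channel quantised into `bins` levels."""
    counts = Counter(
        (r * bins // 256, g * bins // 256, b * bins // 256) for r, g, b in pixels
    )
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]; 1 means identical."""
    return sum(min(h1[k], h2.get(k, 0.0)) for k in h1)

def rank_inventory(query_pixels, inventory):
    """Return item names sorted from most to least colour-similar to the query."""
    hq = color_histogram(query_pixels)
    scored = [(intersection(hq, color_histogram(px)), name) for name, px in inventory]
    return [name for score, name in sorted(scored, reverse=True)]

red = [(250, 10, 10)] * 100
blue = [(10, 10, 250)] * 100
print(rank_inventory(red, [("blue vase", blue), ("red vase", red)]))
# a red query ranks the red item first
```

    A shape channel would be handled analogously with a geometric descriptor, and the two similarity scores combined before ranking.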

  9. 3D virtual facilities with interactive instructions for nuclear education and training

    International Nuclear Information System (INIS)

    Satoh, Yoshinori; Li, Ye; Zhu, Yuefeng; Rizwan-uddin

    2015-01-01

    Efficient and effective education and training of nuclear engineering students and future operators are critical for the safe operation and maintenance of nuclear power plants. Students and future operators used to receive part of this education and training at university laboratories and research reactors. With many university research reactors now shut down, both students and future operators are deprived of this valuable training resource. With an eye toward this need, and to take advantage of recent developments in human-machine interface technologies, we have focused on the development of 3D virtual laboratories for nuclear engineering education and training, as well as for conducting virtual experiments. These virtual laboratories are expected to supplement currently available resources and education and training experiences. Recent work has focused on adding interactivity and physics models to allow trainees to conduct virtual experiments. This paper reports some recent extensions to our virtual nuclear education laboratory and research reactor laboratory, including a head-mounted display as well as hand-tracking devices for virtual operations. (author)

  10. Putting 3D modelling and 3D printing into practice: virtual surgery and preoperative planning to reconstruct complex post-traumatic skeletal deformities and defects.

    Science.gov (United States)

    Tetsworth, Kevin; Block, Steve; Glatt, Vaida

    2017-01-01

    3D printing technology has revolutionized and gradually transformed manufacturing across a broad spectrum of industries, including healthcare. Nowhere is this more apparent than in orthopaedics with many surgeons already incorporating aspects of 3D modelling and virtual procedures into their routine clinical practice. As a more extreme application, patient-specific 3D printed titanium truss cages represent a novel approach for managing the challenge of segmental bone defects. This review illustrates the potential indications of this innovative technique using 3D printed titanium truss cages in conjunction with the Masquelet technique. These implants are custom designed during a virtual surgical planning session with the combined input of an orthopaedic surgeon, an orthopaedic engineering professional and a biomedical design engineer. The ability to 3D model an identical replica of the original intact bone in a virtual procedure is of vital importance when attempting to precisely reconstruct normal anatomy during the actual procedure. Additionally, other important factors must be considered during the planning procedure, such as the three-dimensional configuration of the implant. Meticulous design is necessary to allow for successful implantation through the planned surgical exposure, while being aware of the constraints imposed by local anatomy and prior implants. This review will attempt to synthesize the current state of the art as well as discuss our personal experience using this promising technique. It will address implant design considerations including the mechanical, anatomical and functional aspects unique to each case. © The Authors, published by EDP Sciences, 2017.

  11. Putting 3D modelling and 3D printing into practice: virtual surgery and preoperative planning to reconstruct complex post-traumatic skeletal deformities and defects

    Directory of Open Access Journals (Sweden)

    Tetsworth Kevin

    2017-01-01

    Full Text Available 3D printing technology has revolutionized and gradually transformed manufacturing across a broad spectrum of industries, including healthcare. Nowhere is this more apparent than in orthopaedics with many surgeons already incorporating aspects of 3D modelling and virtual procedures into their routine clinical practice. As a more extreme application, patient-specific 3D printed titanium truss cages represent a novel approach for managing the challenge of segmental bone defects. This review illustrates the potential indications of this innovative technique using 3D printed titanium truss cages in conjunction with the Masquelet technique. These implants are custom designed during a virtual surgical planning session with the combined input of an orthopaedic surgeon, an orthopaedic engineering professional and a biomedical design engineer. The ability to 3D model an identical replica of the original intact bone in a virtual procedure is of vital importance when attempting to precisely reconstruct normal anatomy during the actual procedure. Additionally, other important factors must be considered during the planning procedure, such as the three-dimensional configuration of the implant. Meticulous design is necessary to allow for successful implantation through the planned surgical exposure, while being aware of the constraints imposed by local anatomy and prior implants. This review will attempt to synthesize the current state of the art as well as discuss our personal experience using this promising technique. It will address implant design considerations including the mechanical, anatomical and functional aspects unique to each case.

  12. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    Science.gov (United States)

    Boulos, Maged N.K.; Robinson, Larry R.

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system.
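The parallax geometry described above can be sketched numerically. The following is a minimal illustration, not taken from the paper: only the ~6.5 cm interocular distance comes from the abstract, while the function name and viewing distances are hypothetical. For a point at distance z from the viewer and a screen at distance D, the on-screen horizontal parallax between the left- and right-eye images is e·(1 − D/z).

```python
def screen_parallax(eye_sep_cm: float, screen_dist_cm: float, point_dist_cm: float) -> float:
    """On-screen horizontal parallax (cm) for a point at point_dist_cm from the
    viewer, rendered on a stereo display at screen_dist_cm.
    Positive: perceived behind the screen; zero: at the screen plane;
    negative: in front of the screen."""
    return eye_sep_cm * (1.0 - screen_dist_cm / point_dist_cm)

# A point twice as far away as the screen, with a 6.5 cm eye separation
print(screen_parallax(6.5, 60.0, 120.0))  # 3.25
```

Points at infinity approach a parallax equal to the full eye separation, which is why stereo renderers clamp far-plane disparity.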

  13. Virtual Team Work : Group Decision Making in 3D Virtual Environments

    NARCIS (Netherlands)

    Schouten, Alexander P.; van den Hooff, Bart; Feldberg, Frans

    This study investigates how three-dimensional virtual environments (3DVEs) support shared understanding and group decision making. Based on media synchronicity theory, we pose that the shared environment and avatar-based interaction allowed by 3DVEs aid convergence processes in teams working on a

  14. Virtual Team Work : Group Decision Making in 3D Virtual Environments

    NARCIS (Netherlands)

    Schouten, A.P.; van den Hooff, B.; Feldberg, F.

    2016-01-01

    This study investigates how three-dimensional virtual environments (3DVEs) support shared understanding and group decision making. Based on media synchronicity theory, we pose that the shared environment and avatar-based interaction allowed by 3DVEs aid convergence processes in teams working on a

  15. Validation of a method for real time foot position and orientation tracking with Microsoft Kinect technology for use in virtual reality and treadmill based gait training programs.

    Science.gov (United States)

    Paolini, Gabriele; Peruzzi, Agnese; Mirelman, Anat; Cereatti, Andrea; Gaukrodger, Stephen; Hausdorff, Jeffrey M; Della Croce, Ugo

    2014-09-01

    The use of virtual reality for the provision of motor-cognitive gait training has been shown to be effective for a variety of patient populations. The interaction between the user and the virtual environment is achieved by tracking the motion of the body parts and replicating it in the virtual environment in real time. In this paper, we present the validation of a novel method for tracking foot position and orientation in real time, based on the Microsoft Kinect technology, to be used for gait training combined with virtual reality. The validation of the motion tracking method was performed by comparing the tracking performance of the new system against a stereo-photogrammetric system used as gold standard. Foot position errors were in the order of a few millimeters (average RMSD from 4.9 to 12.1 mm in the medio-lateral and vertical directions, from 19.4 to 26.5 mm in the anterior-posterior direction); the foot orientation errors were also small (average %RMSD from 5.6% to 8.8% in the medio-lateral and vertical directions, from 15.5% to 18.6% in the anterior-posterior direction). The results suggest that the proposed method can be effectively used to track feet motion in virtual reality and treadmill-based gait training programs.
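The validation metric reported above (RMSD of tracked foot position against a stereo-photogrammetric gold standard) can be sketched as follows. This is an illustrative reimplementation with toy numbers, not the authors' code.

```python
import numpy as np

def rmsd(tracked, reference) -> float:
    """Root-mean-square deviation between two equally sampled 1-D signals,
    e.g. foot position along one axis, in mm."""
    tracked = np.asarray(tracked, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((tracked - reference) ** 2)))

# Toy example: Kinect-like estimates vs. a stereo-photogrammetric gold standard
kinect = np.array([10.0, 12.0, 15.0, 11.0])
gold = np.array([11.0, 11.0, 14.0, 12.0])
print(rmsd(kinect, gold))  # 1.0
```

Averaging the per-axis RMSD over trials yields summary figures like the 4.9–26.5 mm range quoted in the abstract.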

  16. Lead-oriented synthesis: Investigation of organolithium-mediated routes to 3-D scaffolds and 3-D shape analysis of a virtual lead-like library.

    Science.gov (United States)

    Lüthy, Monique; Wheldon, Mary C; Haji-Cheteh, Chehasnah; Atobe, Masakazu; Bond, Paul S; O'Brien, Peter; Hubbard, Roderick E; Fairlamb, Ian J S

    2015-06-01

    Synthetic routes to six 3-D scaffolds containing piperazine, pyrrolidine and piperidine cores have been developed. The synthetic methodology focused on the use of N-Boc α-lithiation-trapping chemistry. Notably, suitably protected and/or functionalised medicinal chemistry building blocks were synthesised via concise, connective methodology. This represents a rare example of lead-oriented synthesis. A virtual library of 190 compounds was then enumerated from the six scaffolds. Of these, 92 compounds (48%) fit the lead-like criteria of: (i) -1⩽AlogP⩽3; (ii) 14⩽number of heavy atoms⩽26; (iii) total polar surface area⩾50Å². The 3-D shapes of the 190 compounds were analysed using a triangular plot of normalised principal moments of inertia (PMI). From this, 46 compounds were identified which had lead-like properties and possessed 3-D shapes in under-represented areas of pharmaceutical space. Thus, the PMI analysis of the 190-member virtual library showed that whilst the scaffolds may appear on paper to be 3-D in shape, only 24% of the compounds actually had 3-D structures in the more interesting areas of 3-D drug space. Copyright © 2015 Elsevier Ltd. All rights reserved.
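The normalised-PMI triangular plot mentioned above places each conformer at the coordinates (I1/I3, I2/I3) of its sorted principal moments of inertia. A minimal sketch of that computation, assuming unit point masses rather than true atomic masses (the function name is hypothetical):

```python
import numpy as np

def normalized_pmi(coords, masses=None):
    """Normalised principal moments of inertia (I1/I3, I2/I3) of a point set.
    On the PMI triangle, rod ~ (0, 1), disc ~ (0.5, 0.5), sphere ~ (1, 1).
    Unit masses are assumed if none are given."""
    coords = np.asarray(coords, dtype=float)
    m = np.ones(len(coords)) if masses is None else np.asarray(masses, float)
    r = coords - np.average(coords, axis=0, weights=m)  # centre of mass
    x, y, z = r[:, 0], r[:, 1], r[:, 2]
    # Inertia tensor of the point masses
    I = np.array([
        [np.sum(m*(y**2 + z**2)), -np.sum(m*x*y),          -np.sum(m*x*z)],
        [-np.sum(m*x*y),          np.sum(m*(x**2 + z**2)), -np.sum(m*y*z)],
        [-np.sum(m*x*z),          -np.sum(m*y*z),          np.sum(m*(x**2 + y**2))],
    ])
    i1, i2, i3 = np.sort(np.linalg.eigvalsh(I))
    return float(i1 / i3), float(i2 / i3)

# A perfectly linear, rod-like arrangement plots at the (0, 1) vertex
print(normalized_pmi([[float(i), 0.0, 0.0] for i in range(5)]))  # ≈ (0.0, 1.0)
```

"Flat" (disc-like) and "linear" (rod-like) compounds crowd two edges of the triangle; the under-represented 3-D region the abstract refers to lies toward the sphere vertex.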

  17. Virtual chromoendoscopy for the real-time assessment of colorectal polyps in vivo: a systematic review and economic evaluation.

    Science.gov (United States)

    Picot, Joanna; Rose, Micah; Cooper, Keith; Pickett, Karen; Lord, Joanne; Harris, Petra; Whyte, Sophie; Böhning, Dankmar; Shepherd, Jonathan

    2017-12-01

    Current clinical practice is to remove a colorectal polyp detected during colonoscopy and determine whether it is an adenoma or hyperplastic by histopathology. Identifying adenomas is important because they may eventually become cancerous if untreated, whereas hyperplastic polyps do not usually develop into cancer, and a surveillance interval is set based on the number and size of adenomas found. Virtual chromoendoscopy (VCE) (an electronic endoscopic imaging technique) could be used by the endoscopist under strictly controlled conditions for real-time optical diagnosis of diminutive (≤ 5 mm) colorectal polyps to replace histopathological diagnosis. To assess the clinical effectiveness and cost-effectiveness of the VCE technologies narrow-band imaging (NBI), flexible spectral imaging colour enhancement (FICE) and i-scan for the characterisation and management of diminutive (≤ 5 mm) colorectal polyps using high-definition (HD) systems without magnification. Systematic review and economic analysis. People undergoing colonoscopy for screening or surveillance or to investigate symptoms suggestive of colorectal cancer. NBI, FICE and i-scan. Diagnostic accuracy, recommended surveillance intervals, health-related quality of life (HRQoL), adverse effects, incidence of colorectal cancer, mortality and cost-effectiveness of VCE compared with histopathology. Electronic bibliographic databases including MEDLINE, EMBASE, The Cochrane Library and Database of Abstracts of Reviews of Effects were searched for published English-language studies from inception to June 2016. Bibliographies of related papers, systematic reviews and company information were screened and experts were contacted to identify additional evidence. Systematic reviews of test accuracy and economic evaluations were undertaken in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. Meta-analyses were conducted, where possible, to inform the independent

  18. Generating classes of 3D virtual mandibles for AR-based medical simulation.

    Science.gov (United States)

    Hippalgaonkar, Neha R; Sider, Alexa D; Hamza-Lup, Felix G; Santhanam, Anand P; Jaganathan, Bala; Imielinska, Celina; Rolland, Jannick P

    2008-01-01

    Simulation and modeling represent promising tools for several application domains, from engineering to forensic science and medicine. Advances in 3D imaging technology convey paradigms such as augmented reality (AR) and mixed reality inside promising simulation tools for the training industry. Motivated by the requirement for superimposing anatomically correct 3D models on a human patient simulator (HPS) and visualizing them in an AR environment, the purpose of this research effort was to develop and validate a method for scaling a source human mandible to a target human mandible within a 2 mm root mean square (RMS) error. Results show that, given the distance between the same two landmarks on two different mandibles, a relative scaling factor may be computed. Using this scaling factor, results show that a 3D virtual mandible model can be made morphometrically equivalent to a real target-specific mandible within a 1.30 mm RMS error. The virtual mandible may be further used as a reference target for registering other anatomic models, such as the lungs, on the HPS. Such registration will be made possible by physical constraints between the mandible and the spinal column in the horizontal normal rest position.
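The landmark-based scaling described above amounts to a single uniform scale factor, the ratio of the inter-landmark distances. The sketch below illustrates that reading; the function names, the centroid-anchored scaling, and the point-wise RMS error are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def scale_to_target(source_pts, source_landmarks, target_landmarks):
    """Uniformly scale a source mesh (about its centroid) so the distance
    between two anatomical landmarks matches the target mandible."""
    source_pts = np.asarray(source_pts, dtype=float)
    d_source = np.linalg.norm(np.subtract(*source_landmarks))
    d_target = np.linalg.norm(np.subtract(*target_landmarks))
    s = d_target / d_source  # relative scaling factor
    centroid = source_pts.mean(axis=0)
    return centroid + s * (source_pts - centroid)

def rms_error(a, b) -> float:
    """RMS of point-to-point distances between two corresponding point sets."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

# Hypothetical example: source landmark span 50 mm, target span 55 mm
src = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0]])
scaled = scale_to_target(src, (src[0], src[1]), ([0.0, 0.0, 0.0], [55.0, 0.0, 0.0]))
print(np.linalg.norm(scaled[1] - scaled[0]))  # span now matches the 55 mm target
```

The RMS error between the scaled virtual model and the target surface is what the abstract bounds at 1.30 mm.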

  19. Options in virtual 3D, optical-impression-based planning of dental implants.

    Science.gov (United States)

    Reich, Sven; Kern, Thomas; Ritter, Lutz

    2014-01-01

    If a 3D radiograph, which in today's dentistry often consists of a CBCT dataset, is available for computerized implant planning, the 3D planning should also consider functional prosthetic aspects. In a conventional workflow, the CBCT is done with a specially produced radiopaque prosthetic setup that makes the desired prosthetic situation visible during virtual implant planning. If an exclusively digital workflow is chosen, intraoral digital impressions are taken. On these digital models, the desired prosthetic suprastructures are designed. The entire datasets are virtually superimposed by a "registration" process on the corresponding structures (teeth) in the CBCTs. Thus, both the osseous and prosthetic structures are visible in one single 3D application and make it possible to consider surgical and prosthetic aspects. After having determined the implant positions on the computer screen, a drilling template is designed digitally. According to this design (CAD), a template is printed or milled in CAM process. This template is the first physically extant product in the entire workflow. The article discusses the options and limitations of this workflow.
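The "registration" step that superimposes the digital models onto the CBCT dataset is typically a rigid-body alignment. A minimal landmark-based sketch using the Kabsch algorithm is shown below; this is an assumption for illustration, since the planning software's actual (likely surface-based) matching method is not specified in the abstract.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid alignment (rotation R, translation t) of point
    set P onto corresponding point set Q, so that R @ p + t ≈ q."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of landmarks
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

With corresponding tooth landmarks picked in the intraoral scan (P) and in the CBCT volume (Q), applying the recovered (R, t) brings the prosthetic design into the radiographic coordinate frame.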

  20. 3D TOOLS FOR MODELLING VIRTUAL SCENARIOS BASED ON LOGO: STATE OF THE ART

    Directory of Open Access Journals (Sweden)

    Luz Santamaría Granados

    2009-01-01

    Full Text Available This article reviews the proven pedagogical foundations of LOGO (Papert, 2003), which offers interesting motivation strategies for children in areas such as the development of spatial abilities through their own exploration of virtual worlds. The original methodology was proposed by Seymour Papert for two-dimensional (2D) scenarios. The article therefore analyses the possibility of integrating the pedagogical advantages of LOGO with a three-dimensional (3D) graphical interface, taking advantage of the technology covered by the Web3D consortium standards. It also mentions the X3D components that allow the use of avatars (humanoids) to facilitate user interaction in dynamic virtual worlds, by providing additional characters beyond the LOGO turtle.

  1. Employing 3D Virtual Reality and the Unity Game Engine to Support Nuclear Verification Research

    International Nuclear Information System (INIS)

    Patton, T.

    2015-01-01

    This project centres on the development of a virtual nuclear facility environment to assist non-proliferation and nuclear arms control practitioners - including researchers, negotiators, or inspectors - in developing and refining a verification system and secure chain of custody of material or equipment. The platform for creating the virtual facility environment is the Unity 3D game engine. This advanced platform offers both the robust capability and flexibility necessary to support the design goals of the facility. The project also employs Trimble SketchUp and Blender 3D for constructing the model components. The development goal of this phase of the project was to generate a virtual environment that includes basic physics in which avatars can interact with their environment through actions such as picking up objects, operating vehicles, dismantling a warhead through a spherical representation system, opening/closing doors through a custom security access system, and conducting CCTV surveillance. Initial testing of virtual radiation simulation techniques was also explored in preparation for the next phase of development. Some of the eventual utilities and applications for this platform include: 1. conducting live multi-person exercises of verification activities within a single, shared virtual environment, 2. refining procedures, individual roles, and equipment placement in the contexts of non-proliferation or arms control negotiations, 3. hands-on training for inspectors, and 4. a portable tool/reference for inspectors to use while carrying out inspections. This project was developed under the Multilateral Verification Project, led by the Verification Research, Training and Information Centre (VERTIC) in the United Kingdom, and financed by the Norwegian Ministry of Foreign Affairs. The environment was constructed at the Vienna Center for Disarmament and Non-Proliferation (VCDNP). (author)

  2. Virtual reality hardware for use in interactive 3D data fusion and visualization

    Science.gov (United States)

    Gourley, Christopher S.; Abidi, Mongi A.

    1997-09-01

    Virtual reality has become a tool for use in many areas of research. We have designed and built a VR system for use in range data fusion and visualization. One major VR tool is the CAVE. This is the ultimate visualization tool, but comes with a large price tag. Our design uses a unique CAVE whose graphics are powered by a desktop computer instead of a larger rack machine, making it much less costly. The system consists of a screen eight feet tall by twenty-seven feet wide, giving a variable field-of-view currently set at 160 degrees. A Silicon Graphics Indigo2 MaxImpact with the impact channel option is used for display. This gives the capability to drive three projectors at a resolution of 640 by 480 for use in displaying the virtual environment and one 640 by 480 display for a user control interface. This machine is also the first desktop package which has built-in hardware texture mapping. This feature allows us to quickly fuse the range and intensity data and other multi-sensory data. The final goal is a complete 3D texture-mapped model of the environment. A dataglove, magnetic tracker, and spaceball are to be used for manipulation of the data and navigation through the virtual environment. This system gives several users the ability to interactively create 3D models from multiple range images.

  3. GE3D: A Virtual Campus for Technology-Enhanced Distance Learning

    Directory of Open Access Journals (Sweden)

    Jean Grieu

    2010-09-01

    Full Text Available A lot of learning platforms are used all over the world, but these conventional E-learning platforms aim at students who are used to working on their own. Our students are young (19-22 years old) and in their first year at the university. Following extensive interviews with our students, we have designed GE3D, an E-learning platform, according to their expectations and our criteria. In this paper, we describe the students’ demands resulting from the interviews. Then, we describe our virtual campus. Even if our platform uses some elements coming from the world of 3D games, it remains a pedagogical tool. Using this technology, we developed a 3D representation of the real world. GE3D is a multi-user tool, with synchronous technology, an intuitive interface for end-users and an embedded Intelligent Tutoring System to support learners. We also describe the process of a lecture on Programmable Logic Controllers (PLCs) in this new universe.

  4. M3D (Media 3D): a new programming language for web-based virtual reality in E-Learning and Edutainment

    Science.gov (United States)

    Chakaveh, Sepideh; Skaley, Detlef; Laine, Patricia; Haeger, Ralf; Maad, Soha

    2003-01-01

    Today, interactive multimedia educational systems are well established, as they prove useful instruments to enhance one's learning capabilities. Hitherto, the main difficulty with almost all E-Learning systems lay in the rich media implementation techniques: each and every system had to be created individually, since reusing the media, whether in part or in whole, was not directly possible and everything had to be applied manually, by hand. This makes E-Learning systems exceedingly expensive to generate, in terms of both time and money. Media-3D or M3D is a new platform-independent programming language, developed at the Fraunhofer Institute for Media Communication, to enable visualisation and simulation of E-Learning multimedia content. M3D is an XML-based language capable of distinguishing the 3D models from the 3D scenes, as well as handling provisions for animations within the programme. Here we give a technical account of the M3D programming language and briefly describe two specific application scenarios where M3D is applied to create virtual reality E-Learning content for training of technical personnel.

  5. Virtual 3D tumor marking-exact intraoperative coordinate mapping improve post-operative radiotherapy

    International Nuclear Information System (INIS)

    Essig, Harald; Gellrich, Nils-Claudius; Rana, Majeed; Meyer, Andreas; Eckardt, André M; Kokemueller, Horst; See, Constantin von; Lindhorst, Daniel; Tavassol, Frank; Ruecker, Martin

    2011-01-01

    The quality of the interdisciplinary interface in oncological treatment between surgery, pathology and radiotherapy depends mainly on the reliable anatomical three-dimensional (3D) allocation of specimens and their context-sensitive interpretation, which defines further treatment protocols. Computer-assisted preoperative planning (CAPP) allows for outlining macroscopic tumor size and margins. A new technique facilitates the 3D virtual marking and mapping of frozen sections and resection margins or important surgical intraoperative information. These data can be stored in DICOM format (Digital Imaging and Communications in Medicine) in terms of augmented reality and transferred to communicate patient-specific tumor information (invasion of vessels and nerves, non-resectable tumor) to oncologists, radiotherapists and pathologists.

  6. Virtual 3D tumor marking-exact intraoperative coordinate mapping improve post-operative radiotherapy

    Directory of Open Access Journals (Sweden)

    Essig Harald

    2011-11-01

    Full Text Available The quality of the interdisciplinary interface in oncological treatment between surgery, pathology and radiotherapy depends mainly on the reliable anatomical three-dimensional (3D) allocation of specimens and their context-sensitive interpretation, which defines further treatment protocols. Computer-assisted preoperative planning (CAPP) allows for outlining macroscopic tumor size and margins. A new technique facilitates the 3D virtual marking and mapping of frozen sections and resection margins or important surgical intraoperative information. These data can be stored in DICOM format (Digital Imaging and Communications in Medicine) in terms of augmented reality and transferred to communicate patient-specific tumor information (invasion of vessels and nerves, non-resectable tumor) to oncologists, radiotherapists and pathologists.

  7. 3D Visualization of Cultural Heritage Artefacts with Virtual Reality devices

    Directory of Open Access Journals (Sweden)

    S. Gonizzi Barsanti

    2015-08-01

    Full Text Available Although 3D models are useful to preserve information about historical artefacts, the potential of these digital contents is not fully realized until they are used to interactively communicate their significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research is aimed at valorising and making more accessible the Egyptian funeral objects exhibited in the Sforza Castle in Milan. The results of the research will be used for the renewal of the current exhibition at the Archaeological Museum in Milan, making it more attractive. A 3D virtual interactive scenario regarding the “path of the dead”, an important ritual in ancient Egypt, was realized to augment the experience and the comprehension of the public through interactivity. Four important artefacts were considered for this scope: two ushabty, a wooden sarcophagus and a heart scarab. The scenario was realized by integrating low-cost Virtual Reality technologies, such as the Oculus Rift DK2 and the Leap Motion controller, and implementing specific software using Unity. The 3D models were enhanced with responsive points of interest in relation to important symbols or features of the artefacts. This allows highlighting single parts of an artefact in order to better identify the hieroglyphs and provide their translation. The paper describes the process for optimizing the 3D models, the implementation of the interactive scenario and the results of some tests that have been carried out in the lab.

  8. IMAGE-BASED VIRTUAL TOURS AND 3D MODELING OF PAST AND CURRENT AGES FOR THE ENHANCEMENT OF ARCHAEOLOGICAL PARKS: THE VISUALVERSILIA 3D PROJECT

    Directory of Open Access Journals (Sweden)

    C. Castagnetti

    2017-05-01

    Full Text Available The research project VisualVersilia 3D aims at offering a new way to promote the territory and its heritage by matching the traditional reading of documents with the potential of modern communication technologies for cultural tourism. Recently, research on the use of new technologies applied to cultural heritage has turned its attention mainly to technologies for reconstructing and narrating the complexity of the territory and its heritage, including 3D scanning, 3D printing and augmented reality. Some museums and archaeological sites already exploit the potential of digital tools to preserve and spread their heritage, but interactive services involving tourists in an immersive and more modern experience are still rare. The innovation of the project consists in the development of a methodology for documenting current and past historical ages and integrating their 3D visualizations with rendering capable of returning an immersive virtual reality for a successful enhancement of the heritage. The project implements the methodology in the archaeological complex of Massaciuccoli, one of the best preserved Roman sites of the Versilia area (Tuscany, Italy). The activities of the project briefly consist in developing: 1. the virtual tour of the site in its current configuration on the basis of spherical images, enhanced by texts, graphics and audio guides, in order to enable both an immersive and a remote tourist experience; 2. 3D reconstruction of the evidence and buildings in their current condition, for documentation and conservation purposes, on the basis of a complete metric survey carried out through laser scanning; 3. 3D virtual reconstructions through the main historical periods on the basis of historical investigation and the analysis of the data acquired.

  9. Image-Based Virtual Tours and 3d Modeling of Past and Current Ages for the Enhancement of Archaeological Parks: the Visualversilia 3d Project

    Science.gov (United States)

    Castagnetti, C.; Giannini, M.; Rivola, R.

    2017-05-01

    The research project VisualVersilia 3D aims at offering a new way to promote the territory and its heritage by matching the traditional reading of documents with the potential of modern communication technologies for cultural tourism. Recently, research on the use of new technologies applied to cultural heritage has turned its attention mainly to technologies for reconstructing and narrating the complexity of the territory and its heritage, including 3D scanning, 3D printing and augmented reality. Some museums and archaeological sites already exploit the potential of digital tools to preserve and spread their heritage, but interactive services involving tourists in an immersive and more modern experience are still rare. The innovation of the project consists in the development of a methodology for documenting current and past historical ages and integrating their 3D visualizations with rendering capable of returning an immersive virtual reality for a successful enhancement of the heritage. The project implements the methodology in the archaeological complex of Massaciuccoli, one of the best preserved Roman sites of the Versilia area (Tuscany, Italy). The activities of the project briefly consist in developing: 1. the virtual tour of the site in its current configuration on the basis of spherical images, enhanced by texts, graphics and audio guides, in order to enable both an immersive and a remote tourist experience; 2. 3D reconstruction of the evidence and buildings in their current condition, for documentation and conservation purposes, on the basis of a complete metric survey carried out through laser scanning; 3. 3D virtual reconstructions through the main historical periods on the basis of historical investigation and the analysis of the data acquired.

  10. 3-D Imaging In Virtual Environment: A Scientific Clinical and Teaching Tool

    Science.gov (United States)

    Ross, Muriel D.; DeVincenzi, Donald L. (Technical Monitor)

    1996-01-01

    The advent of powerful graphics workstations and computers has led to the advancement of scientific knowledge through three-dimensional (3-D) reconstruction and imaging of biological cells and tissues. The Biocomputation Center at NASA Ames Research Center pioneered the effort to produce an entirely computerized method for reconstruction of objects from serial sections studied in a transmission electron microscope (TEM). The software developed, ROSS (Reconstruction of Serial Sections), is now being distributed to users across the United States through Space Act Agreements. The software is in widely disparate fields such as geology, botany, biology and medicine. In the Biocomputation Center, ROSS serves as the basis for development of virtual environment technologies for scientific and medical use. This report will describe the Virtual Surgery Workstation Project that is ongoing with clinicians at Stanford University Medical Center, and the role of the Visible Human data in the project.

  11. Virtual reality and interactive 3D as effective tools for medical training.

    Science.gov (United States)

    Webb, George; Norcliffe, Alex; Cannings, Peter; Sharkey, Paul; Roberts, Dave

    2003-01-01

    CAVE-like displays allow a user to walk into a virtual environment and use natural movement to change the viewpoint of virtual objects, which they can manipulate with a hand-held device. This maps well to many surgical procedures, offering strong potential for training and planning. These devices may be networked together, allowing geographically remote users to share the interactive experience. This maps to the strong need for distance training and planning for surgeons. Our paper shows how the properties of a CAVE-like facility can be maximised in order to provide an ideal environment for medical training. The implementation of a large 3D eye is described. The resulting application is that of an eye that can be manipulated and examined by trainee medics under the guidance of a medical expert. The progression and effects of different ailments can be illustrated and corrective procedures demonstrated.

  12. DEVELOPMENT OF A VIRTUAL 3D-SIMULATOR OF THE FEED PELLETING TECHNOLOGICAL PROCESS

    Directory of Open Access Journals (Sweden)

    N. Shcherbakov

    2017-08-01

    Full Text Available The development of a virtual 3D simulator of the feed pelleting (granulation) process is considered. The consequences of errors by press-granulator operators are examined, along with the difficulties of training operators of high-tech, expensive equipment and the need for them to acquire practical skills at the training stage. The case is made for introducing computer simulators in educational institutions in order to improve the quality of the acquired knowledge and to build complex decision-making skills in future operators of technological processes. The results of a survey on the improved efficiency of process management after the introduction of simulators at enterprises are reviewed, and data on the simulator market, with forecasts for 2017 by region and by type of interface used, are presented. The growing popularity of simulators based on a 3D interface is noted, the advantage of a 3D interface over a 2D interface is substantiated, and the types of immersion in the learning environment provided by the different simulator interfaces are considered, as are the vulnerabilities of the 3D simulator. The goal is to develop a 3D simulator for a press-granulator operator, and a set of tasks is proposed to achieve this goal. A plan for creating the simulator was developed and its development stages are considered in detail, together with the possible uses of the simulator, including the possibility of developing a simulator of emergency situations. The relevance of this development is justified.

  13. 3D virtual human atria: A computational platform for studying clinical atrial fibrillation.

    Science.gov (United States)

    Aslanidi, Oleg V; Colman, Michael A; Stott, Jonathan; Dobrzynski, Halina; Boyett, Mark R; Holden, Arun V; Zhang, Henggui

    2011-10-01

    Despite a vast amount of experimental and clinical data on the underlying ionic, cellular and tissue substrates, the mechanisms of common atrial arrhythmias (such as atrial fibrillation, AF) arising from the functional interactions at the whole atria level remain unclear. Computational modelling provides a quantitative framework for integrating such multi-scale data and understanding the arrhythmogenic behaviour that emerges from the collective spatio-temporal dynamics in all parts of the heart. In this study, we have developed a multi-scale hierarchy of biophysically detailed computational models for the human atria--the 3D virtual human atria. Primarily, diffusion tensor MRI reconstruction of the tissue geometry and fibre orientation in the human sinoatrial node (SAN) and surrounding atrial muscle was integrated into the 3D model of the whole atria dissected from the Visible Human dataset. The anatomical models were combined with the heterogeneous atrial action potential (AP) models, and used to simulate the AP conduction in the human atria under various conditions: SAN pacemaking and atrial activation in the normal rhythm, break-down of regular AP wave-fronts during rapid atrial pacing, and the genesis of multiple re-entrant wavelets characteristic of AF. Contributions of different properties of the tissue to mechanisms of the normal rhythm and arrhythmogenesis were investigated. Primarily, the simulations showed that tissue heterogeneity caused the break-down of the normal AP wave-fronts at rapid pacing rates, which initiated a pair of re-entrant spiral waves; and tissue anisotropy resulted in a further break-down of the spiral waves into multiple meandering wavelets characteristic of AF. The 3D virtual atria model itself was incorporated into the torso model to simulate the body surface ECG patterns in the normal and arrhythmic conditions. Therefore, a state-of-the-art computational platform has been developed, which can be used for studying multi
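The wave break-up and re-entry phenomena described above can be illustrated, far more crudely than the study's biophysically detailed atrial models, with a generic FitzHugh-Nagumo excitable medium. Everything below (equations, parameters, grid) is an illustrative assumption, not the authors' model.

```python
import numpy as np

def fhn_step(u, v, dt=0.1, dx=1.0, D=1.0, a=0.1, eps=0.02, b=0.5):
    """One explicit Euler step of a generic FitzHugh-Nagumo excitable medium:
    u is the excitation variable (a crude stand-in for membrane voltage),
    v the slow recovery variable."""
    # 5-point Laplacian with no-flux (replicated-edge) boundaries
    up = np.pad(u, 1, mode="edge")
    lap = (up[:-2, 1:-1] + up[2:, 1:-1] +
           up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u) / dx**2
    du = D * lap + u * (1.0 - u) * (u - a) - v
    dv = eps * (b * u - v)
    return u + dt * du, v + dt * dv

# Stimulate a patch and let the excitation wave spread outwards
u = np.zeros((64, 64)); v = np.zeros((64, 64))
u[28:36, 28:36] = 1.0
for _ in range(100):
    u, v = fhn_step(u, v)
```

In full models, spatial heterogeneity and anisotropy (direction-dependent D) are what fragment such wave fronts into the meandering wavelets characteristic of AF.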

  14. The virtual lover: variable and easily guided 3D fish animations as an innovative tool in mate-choice experiments with sailfin mollies-I. Design and implementation.

    Science.gov (United States)

    Müller, Klaus; Smielik, Ievgen; Hütwohl, Jan-Marco; Gierszewski, Stefanie; Witte, Klaudia; Kuhnert, Klaus-Dieter

    2017-02-01

    Animal behavior researchers often face problems regarding the standardization and reproducibility of their experiments. This has led to the partial substitution of live animals with artificial virtual stimuli. In addition to standardization and reproducibility, virtual stimuli open new options for researchers, since they are easily changeable in morphology and appearance, and their behavior can be defined. In this article, a novel toolchain for conducting behavior experiments with fish is presented through a case study in sailfin mollies (Poecilia latipinna). As the toolchain offers many different and novel features, it opens new possibilities for behavioral animal research and promotes the standardization of experiments. The presented method includes options to design, animate, and present virtual stimuli to live fish. The design tool offers an easy and user-friendly way to define the size, coloration, and morphology of stimuli; moreover, it is able to configure virtual stimuli randomly, without any user influence. Furthermore, the toolchain provides a novel method to animate stimuli in a semiautomatic way with the help of a game controller. The created swimming paths can be applied to different stimuli in real time. A presentation tool combines models and swimming paths according to previously defined playlists, and presents the stimuli on two screens. Experiments with live sailfin mollies validated the use of the created virtual 3D fish models in mate-choice experiments.

  15. Virtual endoscopy and 3D volume rendering in the management of frontal sinus fractures.

    Science.gov (United States)

    Belina, Stanko; Cuk, Viseslav; Klapan, Ivica

    2009-12-01

    Frontal sinus fractures (FSF) are commonly caused by traffic accidents, assaults, industrial accidents and gunshot wounds. Classical roentgenography has a high proportion of false negative findings in cases of FSF and is not particularly useful in examining the severity of damage to the frontal sinus posterior table and the nasofrontal duct region. High resolution computed tomography was unavoidable during the management of such patients, but it may produce a large quantity of 2D images. Postprocessing of datasets acquired by high resolution computed tomography from patients with severe head trauma may offer valuable additional help in diagnostics and surgery planning. We performed virtual endoscopy (VE) and 3D volume rendering (3DVR) on high resolution CT data acquired from a 54-year-old man with both anterior and posterior frontal sinus wall fractures in order to demonstrate the advantages and disadvantages of these methods. Data acquisition was done with a Siemens Somatom Emotion scanner and postprocessing was performed with Syngo 2006G software. VE and 3DVR were performed in a man who suffered blunt trauma to his forehead and nose in a traffic accident. A left frontal sinus anterior wall fracture without dislocation and a fracture of the tabula interna with dislocation were found. The 3D position and orientation of the fracture lines were shown by the 3D rendering software. We concluded that VE and 3DVR can clearly display the anatomic structure of the paranasal sinuses and nasopharyngeal cavity, revealing damage to the sinus wall caused by a fracture and its relationship to surrounding anatomical structures.

  16. Utilising a Collaborative Macro-Script to Enhance Student Engagement: A Mixed Method Study in a 3D Virtual Environment

    Science.gov (United States)

    Bouta, Hara; Retalis, Symeon; Paraskeva, Fotini

    2012-01-01

    This study examines the effect of using an online 3D virtual environment in teaching Mathematics in Primary Education. In particular, it explores the extent to which student engagement--behavioral, affective and cognitive--is fostered by such tools in order to enhance collaborative learning. For the study we used a purpose-created 3D virtual…

  17. Application of virtual machine technology to real-time mapping of Thomson scattering data to flux coordinates for the LHD

    International Nuclear Information System (INIS)

    Emoto, Masahiko; Yoshida, Masanobu; Suzuki, Chihiro; Suzuki, Yasuhiro; Ida, Katsumi; Nagayama, Yoshio; Akiyama, Tsuyoshi; Kawahata, Kazuo; Narihara, Kazumichi; Tokuzawa, Tokihiko; Yamada, Ichihiro

    2012-01-01

    Highlights: ► We have developed a system that maps electron temperature profiles to flux coordinates. ► To increase performance, multiple virtual machines are used. ► Virtual machine technology is flexible when increasing the number of computers. - Abstract: This paper presents a system called “TSMAP” that maps electron temperature profiles to flux coordinates for the Large Helical Device (LHD). Since flux surfaces are isothermal, TSMAP searches an equilibrium database for the LHD equilibrium that best fits the electron temperature profile. The equilibrium database is built through many VMEC computations of helical equilibria. Because the number of equilibria is large, the most important technical issue in realizing the TSMAP system is computational performance. Therefore, we use multiple personal computers to enhance performance when building the database for TSMAP. We use virtual machines on multiple Linux computers to run the TSMAP program. Virtual machine technology is flexible, allowing the number of computers to be easily increased. This paper discusses how the use of virtual machine technology enhances the performance of TSMAP calculations when multiple CPU cores are used.
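
The core TSMAP idea — pick, from a precomputed equilibrium database, the mapping under which the measured Te profile is most nearly a function of flux alone — can be sketched as follows. This is a hypothetical, simplified scoring scheme (residual of a polynomial fit of Te versus a flux label), not the actual TSMAP algorithm, and the profile and candidate mappings are synthetic:

```python
import numpy as np

def best_equilibrium(te_measured, equilibria):
    """Pick the equilibrium whose flux label rho makes the measured Te
    profile most nearly single-valued (isothermal flux surfaces): score
    each candidate by the residual of a low-order polynomial fit Te(rho)."""
    best_id, best_rms = None, np.inf
    for eq_id, rho in equilibria.items():
        coeffs = np.polyfit(rho, te_measured, deg=4)
        rms = np.sqrt(np.mean((te_measured - np.polyval(coeffs, rho)) ** 2))
        if rms < best_rms:
            best_id, best_rms = eq_id, rms
    return best_id, best_rms

# synthetic Thomson profile, symmetric about a magnetic axis at R = 4.0 m
radii = np.linspace(3.5, 4.5, 41)
te = 3.0 * np.exp(-((radii - 4.0) / 0.3) ** 2)          # keV
candidates = {"axis_4.0": np.abs(radii - 4.0),          # correct mapping
              "axis_4.1": np.abs(radii - 4.1)}          # mis-placed axis
best_id, rms = best_equilibrium(te, candidates)
```

With the mis-placed axis, Te becomes double-valued in rho and the fit residual grows, so the correct mapping wins; scoring each of many precomputed equilibria independently is what makes the search trivially parallel across machines.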

  18. HVM-TP: A Time Predictable, Portable Java Virtual Machine for Hard Real-Time Embedded Systems

    DEFF Research Database (Denmark)

    Luckow, Kasper Søe; Thomsen, Bent; Korsholm, Stephan Erbs

    2014-01-01

    We present HVMTIME, a portable and time-predictable JVM implementation with applications in resource-constrained hard real-time embedded systems. In addition, it implements the Safety Critical Java (SCJ) Level 1 specification. Time predictability is achieved by a combination of time-predictable algorithms, exploiting the programming model of the SCJ specification, and harnessing static knowledge of the hosted SCJ system. This paper presents HVMTIME in terms of its design and capabilities, and demonstrates how a complete timing model of the JVM, represented as a Network of Timed Automata, can be obtained using the tool TetaSARTSJVM. Further, using the timing model, we derive Worst Case Execution Times (WCETs) and Best Case Execution Times (BCETs) of the Java Bytecodes.

  19. Real-time systems

    OpenAIRE

    Badr, Salah M.; Bruztman, Donald P.; Nelson, Michael L.; Byrnes, Ronald Benton

    1992-01-01

    This paper presents an introduction to the basic issues involved in real-time systems. Both real-time operating systems and real-time programming languages are explored. Concurrent programming and process synchronization and communication are also discussed. The real-time requirements of the Naval Postgraduate School Autonomous Underwater Vehicle (AUV) are then examined. Autonomous underwater vehicle (AUV), hard real-time system, real-time operating system, real-time programming language, real-time sy...

  20. Rehabilitation after Stroke using Immersive User Interfaces in 3D Virtual and Augmented Gaming Environments

    Directory of Open Access Journals (Sweden)

    E. Vogiatzaki

    2015-05-01

    Full Text Available Stroke is one of the most common diseases of modern societies, with high socio-economic impact. Hence, a rehabilitation approach that involves patients in their rehabilitation process while lowering the costly involvement of specialised personnel is needed. This article describes a novel approach offering integrated rehabilitation training for stroke patients, using a serious-gaming approach based on the Unity3D virtual reality engine combined with a range of advanced technologies and immersive user interfaces. It puts patients and caretakers in control of the rehabilitation protocols, while leading physicians are enabled to supervise the progress of the rehabilitation via a Personal Health Record. The possibility to perform training in a familiar home environment directly improves the effectiveness of the rehabilitation. The work presented herein has been conducted within the "StrokeBack" project, co-funded by the European Commission under the Framework 7 Programme in the ICT domain.

  1. 3-D thermal weight function method and multiple virtual crack extension technique for thermal shock problems

    International Nuclear Information System (INIS)

    Lu Yanlin; Zhou Xiao; Qu Jiadi; Dou Yikang; He Yinbiao

    2005-01-01

    An efficient scheme, the 3-D thermal weight function (TWF) method, and a novel numerical technique, the multiple virtual crack extension (MVCE) technique, were developed for determination of histories of transient stress intensity factor (SIF) distributions along 3-D crack fronts of a body subjected to thermal shock. The TWF is a universal function, which depends only on the crack configuration and body geometry. TWF is independent of time during thermal shock, so the whole history of transient SIF distributions along crack fronts can be directly calculated through integration of the products of TWF and transient temperatures and temperature gradients. Repeated determination of stress (or displacement) fields at individual time instants is thus avoided in the TWF method. An expression of the basic equation for the 3-D universal weight function method for Mode I in an isotropic elastic body is derived. This equation can also be derived from Bueckner-Rice's 3-D WF formulations in the framework of transformation strain. It can be understood from this equation that the so-called thermal WF is in fact coincident with the mechanical WF except for some constants of elasticity. The details and formulations of the MVCE technique are given for elliptical cracks. The MVCE technique possesses several advantages. The specially selected linearly independent VCE modes can be used directly as shape functions for the interpolation of unknown SIFs. As a result, the coefficient matrix of the final system of equations in the MVCE method is a triple-diagonal matrix and the values of the coefficients on the main diagonal are large. The system of equations has good numerical properties. The number of linearly independent VCE modes that can be introduced in a problem is unlimited. Complex situations in which the SIFs vary dramatically along crack fronts can be numerically well simulated by the MVCE technique. An integrated system of programs for solving the
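
The computational shortcut behind the weight-function method — one geometry-dependent function, reused at every time instant — can be illustrated with a hedged 1D sketch. The near-tip weight function and the exponentially decaying, spatially uniform thermal stress below are illustrative stand-ins, not the paper's 3-D formulation:

```python
import numpy as np

def trapezoid(y, x):
    """Plain trapezoidal quadrature (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def sif_history(x, weight, stress_histories):
    """K(t) = integral of w(x)*sigma(x,t) along the crack line: the weight
    function depends only on geometry, so each time instant costs a single
    quadrature instead of a full field solution."""
    return np.array([trapezoid(weight * s, x) for s in stress_histories])

a = 0.01                                     # crack depth (m), illustrative
x = np.linspace(0.0, 0.99 * a, 400)
w = np.sqrt(2.0 / (np.pi * (a - x)))         # near-tip 1D weight function
times = np.linspace(0.0, 5.0, 11)            # s
sigma0, tau = 100e6, 2.0                     # Pa, s: decaying thermal stress
stresses = [sigma0 * np.exp(-t / tau) * np.ones_like(x) for t in times]
K = sif_history(x, w, stresses)              # K[0] is the peak SIF
```

Because the stress here is uniform in x, K(t) simply tracks the thermal transient; in the real method the time dependence enters through the computed temperature and stress fields, while w stays fixed.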

  2. Imposing motion constraints to a force reflecting tele-robot through real-time simulation of a virtual mechanism

    International Nuclear Information System (INIS)

    Joly, L.; Andriot, C.

    1995-01-01

    In a tele-operation system, assistance can be given to the operator by constraining the tele-robot position to remain within a restricted subspace of its workspace. A new approach to motion constraint is presented in this paper. The control law is established by simulating a virtual ideal mechanism acting as a jig, connected to the master and slave arms via springs and dampers. Using this approach, it is possible to impose any (sufficiently smooth) motion constraint on the system, including nonlinear constraints (complex surfaces) involving coupling between translations and rotations; physical equivalence ensures that the controller is passive. Experimental results obtained with a 6-DOF tele-operation system are given. Other applications of the virtual mechanism concept include hybrid position-force control and haptic interfaces. (authors). 11 refs., 7 figs
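
The control idea — a virtual ideal mechanism that can only move along the allowed subspace, coupled to the arms through springs and dampers — can be illustrated with a toy 1-DOF "jig". Everything below (gains, mass, the straight-line constraint, a static master) is a hypothetical minimal sketch, not the authors' controller:

```python
import numpy as np

def simulate_virtual_jig(x_master, k=500.0, b=20.0, m=1.0, dt=1e-3, steps=4000):
    """Virtual mechanism = a point mass constrained to the x-axis (1-DOF
    'jig'), coupled to a static master position by a spring-damper.
    Motion is integrated only along the constraint tangent, so the
    constraint can never be violated."""
    q, qd = 0.0, 0.0                          # jig coordinate and velocity
    for _ in range(steps):
        x_virt = np.array([q, 0.0])           # jig lives on y = 0
        v_virt = np.array([qd, 0.0])
        f = k * (x_master - x_virt) - b * v_virt      # force on the jig
        qd += dt * f[0] / m                   # project onto the tangent
        q += dt * qd
    force_on_master = k * (np.array([q, 0.0]) - x_master)
    return q, force_on_master

# master held off the constraint line: the jig settles under it, and the
# reflected force is purely normal to the line (the "wall" feel)
q, F = simulate_virtual_jig(np.array([0.1, 0.2]))
```

Because the coupler is a passive spring-damper, the closed loop only stores or dissipates energy, which is the passivity argument the abstract alludes to.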

  3. Imposing motion constraints to a force reflecting tele-robot through real-time simulation of a virtual mechanism

    Energy Technology Data Exchange (ETDEWEB)

    Joly, L.; Andriot, C.

    1995-12-31

    In a tele-operation system, assistance can be given to the operator by constraining the tele-robot position to remain within a restricted subspace of its workspace. A new approach to motion constraint is presented in this paper. The control law is established by simulating a virtual ideal mechanism acting as a jig, connected to the master and slave arms via springs and dampers. Using this approach, it is possible to impose any (sufficiently smooth) motion constraint on the system, including nonlinear constraints (complex surfaces) involving coupling between translations and rotations; physical equivalence ensures that the controller is passive. Experimental results obtained with a 6-DOF tele-operation system are given. Other applications of the virtual mechanism concept include hybrid position-force control and haptic interfaces. (authors). 11 refs., 7 figs.

  4. SU-G-JeP2-04: Comparison Between Fricke-Type 3D Radiochromic Dosimeters for Real-Time Dose Distribution Measurements in MR-Guided Radiation Therapy

    International Nuclear Information System (INIS)

    Lee, H; Alqathami, M; Wang, J; Ibbott, G; Kadbi, M; Blencowe, A

    2016-01-01

    Purpose: To assess MR signal contrast for different ferrous ion compounds used in Fricke-type gel dosimeters for real-time dose measurements for MR-guided radiation therapy applications. Methods: Fricke-type gel dosimeters were prepared in 4% w/w gelatin prior to irradiation in an integrated 1.5 T MRI and 7 MV linear accelerator system (MR-Linac). Four different ferrous ion (Fe²⁺) compounds (referred to as A, B, C, and D) were investigated for this study. Dosimeter D consisted of ferrous ammonium sulfate (FAS), which is conventionally used for Fricke dosimeters. Approximately half of each cylindrical dosimeter (45 mm diameter, 80 mm length) was irradiated to ∼17 Gy. MR imaging during irradiation was performed with the MR-Linac using a balanced-FFE sequence of TR/TE = 5/2.4 ms. An approximate uncertainty of 5% in our dose delivery was anticipated since the MR-Linac had not yet been fully commissioned. Results: The signal intensities (SI) increased between the un-irradiated and irradiated regions by approximately 8.6%, 4.4%, 3.2%, and 4.3% after delivery of ∼2.8 Gy for dosimeters A, B, C, and D, respectively. After delivery of ∼17 Gy, the SI had increased by 24.4%, 21.0%, 3.1%, and 22.2% compared to the un-irradiated regions. The increase in SI with respect to dose was linear for dosimeters A, B, and D with slopes of 0.0164, 0.0251, and 0.0236 Gy⁻¹ (R² = 0.92, 0.97, and 0.96), respectively. Visually, dosimeter A had the greatest optical contrast from yellow to purple in the irradiated region. Conclusion: This study demonstrated the feasibility of using Fricke-type dosimeters for real-time dose measurements with the greatest optical and MR contrast for dosimeter A. We also demonstrated the need to investigate Fe²⁺ compounds beyond the conventionally utilized FAS compound in order to improve the MR signal contrast in 3D dosimeters used for MR-guided radiation therapy. This material is based upon work supported by the National Science Foundation Graduate
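
The reported slopes and R² values come from a straight-line fit of signal intensity against delivered dose. A minimal sketch, using synthetic numbers (a hypothetical 0.0251 Gy⁻¹ sensitivity, not the paper's measured data):

```python
import numpy as np

def dose_response(dose, signal):
    """Least-squares line SI = slope*dose + intercept, with R^2."""
    slope, intercept = np.polyfit(dose, signal, 1)
    resid = signal - (slope * dose + intercept)
    r2 = 1.0 - np.sum(resid**2) / np.sum((signal - signal.mean())**2)
    return slope, intercept, r2

# synthetic readings: 0.0251 per Gy sensitivity (noise-free, so R^2 -> 1)
dose = np.array([0.0, 2.8, 5.6, 8.5, 11.3, 14.1, 17.0])   # Gy
signal = 1.0 + 0.0251 * dose                               # normalised SI
slope, intercept, r2 = dose_response(dose, signal)
```

A near-unity R² over the clinical dose range is what qualifies a compound as a usable real-time dosimeter readout.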

  5. SU-G-JeP2-04: Comparison Between Fricke-Type 3D Radiochromic Dosimeters for Real-Time Dose Distribution Measurements in MR-Guided Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Lee, H; Alqathami, M; Wang, J; Ibbott, G [UT MD Anderson Cancer Center, Houston, TX (United States); Kadbi, M [MR Therapy, Philips healthTech, Cleveland, OH (United States); Blencowe, A [The University of South Australia, South Australia, SA (Australia)

    2016-06-15

    Purpose: To assess MR signal contrast for different ferrous ion compounds used in Fricke-type gel dosimeters for real-time dose measurements for MR-guided radiation therapy applications. Methods: Fricke-type gel dosimeters were prepared in 4% w/w gelatin prior to irradiation in an integrated 1.5 T MRI and 7 MV linear accelerator system (MR-Linac). Four different ferrous ion (Fe²⁺) compounds (referred to as A, B, C, and D) were investigated for this study. Dosimeter D consisted of ferrous ammonium sulfate (FAS), which is conventionally used for Fricke dosimeters. Approximately half of each cylindrical dosimeter (45 mm diameter, 80 mm length) was irradiated to ∼17 Gy. MR imaging during irradiation was performed with the MR-Linac using a balanced-FFE sequence of TR/TE = 5/2.4 ms. An approximate uncertainty of 5% in our dose delivery was anticipated since the MR-Linac had not yet been fully commissioned. Results: The signal intensities (SI) increased between the un-irradiated and irradiated regions by approximately 8.6%, 4.4%, 3.2%, and 4.3% after delivery of ∼2.8 Gy for dosimeters A, B, C, and D, respectively. After delivery of ∼17 Gy, the SI had increased by 24.4%, 21.0%, 3.1%, and 22.2% compared to the un-irradiated regions. The increase in SI with respect to dose was linear for dosimeters A, B, and D with slopes of 0.0164, 0.0251, and 0.0236 Gy⁻¹ (R² = 0.92, 0.97, and 0.96), respectively. Visually, dosimeter A had the greatest optical contrast from yellow to purple in the irradiated region. Conclusion: This study demonstrated the feasibility of using Fricke-type dosimeters for real-time dose measurements with the greatest optical and MR contrast for dosimeter A. We also demonstrated the need to investigate Fe²⁺ compounds beyond the conventionally utilized FAS compound in order to improve the MR signal contrast in 3D dosimeters used for MR-guided radiation therapy. This material is based upon work supported by the National Science Foundation

  6. Reaching to virtual targets: The oblique effect reloaded in 3-D.

    Science.gov (United States)

    Kaspiris-Rousellis, Christos; Siettos, Constantinos I; Evdokimidis, Ioannis; Smyrnis, Nikolaos

    2017-02-20

    Perceiving and reproducing direction of visual stimuli in 2-D space produces the visual oblique effect, which manifests as increased precision in the reproduction of cardinal compared to oblique directions. A second cognitive oblique effect emerges when stimulus information is degraded (such as when reproducing stimuli from memory) and manifests as a systematic distortion where reproduced directions close to the cardinal axes deviate toward the oblique, leading to space expansion at cardinal and contraction at oblique axes. We studied the oblique effect in 3-D using a virtual reality system to present a large number of stimuli, covering the surface of an imaginary half sphere, to which subjects had to reach. We used two conditions, one with no delay (no-memory condition) and one where a three-second delay intervened between stimulus presentation and movement initiation (memory condition). A visual oblique effect was observed for the reproduction of cardinal directions compared to oblique, which did not differ with memory condition. A cognitive oblique effect also emerged, which was significantly larger in the memory compared to the no-memory condition, leading to distortion of directional space with expansion near the cardinal axes and compression near the oblique axes on the hemispherical surface. This effect provides evidence that existing models of 2-D directional space categorization could be extended in the natural 3-D space. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  7. Combinatorial Pharmacophore-Based 3D-QSAR Analysis and Virtual Screening of FGFR1 Inhibitors

    Directory of Open Access Journals (Sweden)

    Nannan Zhou

    2015-06-01

    Full Text Available The fibroblast growth factor/fibroblast growth factor receptor (FGF/FGFR) signaling pathway plays crucial roles in cell proliferation, angiogenesis, migration, and survival. Aberration in FGFRs correlates with several malignancies and disorders. FGFRs have proved to be attractive targets for therapeutic intervention in cancer, and it is of high interest to find FGFR inhibitors with novel scaffolds. In this study, a combinatorial three-dimensional quantitative structure-activity relationship (3D-QSAR) model was developed based on previously reported FGFR1 inhibitors with diverse structural skeletons. The model was evaluated for its prediction performance on a diverse test set containing 232 FGFR inhibitors, and it yielded an SD value of 0.75 pIC50 units against measured inhibition affinities and a Pearson's correlation coefficient R² of 0.53. This result suggests that the combinatorial 3D-QSAR model can be used to search for new FGFR1 hit structures and predict their potential activity. To further evaluate the performance of the model, a decoy-set validation was used to measure the efficiency of the model by calculating the EF (enrichment factor). Based on the combinatorial pharmacophore model, a virtual screening against the SPECS database was performed. Nineteen novel active compounds were successfully identified, which provide new chemical starting points for further structural optimization of FGFR1 inhibitors.
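
The decoy-set validation mentioned here quantifies early recognition with the enrichment factor. A generic sketch of that metric (not the paper's screening code; the ranking below is synthetic):

```python
import numpy as np

def enrichment_factor(scores, is_active, fraction=0.01):
    """EF = actives retrieved in the top `fraction` of the ranked list,
    divided by the number expected under random ranking."""
    scores = np.asarray(scores, float)
    is_active = np.asarray(is_active, bool)
    n_top = max(1, int(round(len(scores) * fraction)))
    order = np.argsort(-scores)               # best-scored compounds first
    found = is_active[order][:n_top].sum()
    expected = is_active.sum() * n_top / len(scores)
    return found / expected

# perfect ranking of 10 actives among 1000 compounds gives EF(1%) = 100
scores = np.linspace(1.0, 0.0, 1000)
labels = np.arange(1000) < 10
ef = enrichment_factor(scores, labels)
```

EF > 1 means the model ranks actives above decoys better than chance; the 1% cutoff is a common convention, not a fixed rule.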

  8. 3D VIRTUAL RECONSTRUCTION OF AN URBAN HISTORICAL SPACE: A CONSIDERATION ON THE METHOD

    Directory of Open Access Journals (Sweden)

    M. Galizia

    2012-09-01

    Full Text Available Urban historical spaces are often characterized by a variety of shapes, geometries, volumes and materials. Their virtual reconstruction requires a critical approach in terms of the density of acquired data, timing optimization, and the quality and slimness of the final product. The research team focused its attention on Francesco Neglia square (previously named Saint Thomas square) in Enna. This square is an urban space fronted by architectures with historical and stylistic differences: for example, the belfry of Saint Thomas' church (in Aragonese-Catalan style, dated XIV century) and its porch, the baroque Anime Sante church (XVII century), and Saint Mary of the Grace's nunnery (XVIII century), as well as some civil buildings of minor importance built in the mid twentieth century. The research compared two different modelling approaches: the first is based on the construction of triangulated surfaces, which are segmented and simplified; the second is based on the detection of the geometrical features of surfaces, the extraction of the most significant profiles using software dedicated to the elaboration of point clouds, and the subsequent mathematical reconstruction using 3D modelling software. The following step processed the virtual reconstruction of the urban scene by assembling the single optimized models. This work highlighted the central role of the operator and of his cultural contribution, essential for recognizing the geometries which generate surfaces in order to create high-quality semantic models.

  9. Clinical anatomy and 3D virtual reconstruction of the lumbar plexus with respect to lumbar surgery

    Directory of Open Access Journals (Sweden)

    Ding Zi-hai

    2011-04-01

    Full Text Available Abstract Background Exposure of the anterior or lateral lumbar spine via the retroperitoneal approach easily causes injuries to the lumbar plexus. Lumbar plexus injuries which occur during anterior or transpsoas lumbar spine exposure and placement of instruments have been reported. This study aims to provide more anatomical data and surgical landmarks for operations concerning the lumbar plexus, in order to prevent lumbar plexus injuries and to increase the safety of anterior approach lumbar surgery. Methods To study the applied anatomy related to the lumbar plexus, fifteen formaldehyde-preserved cadavers were examined, and five sets of Virtual Human (VH) data were prepared and used in the study. Three-dimensional (3D) computerized reconstructions of the lumbar plexus and its adjacent structures were conducted from the VH female data set. Results The order of the lumbar nerves is regular. From the anterior view, lumbar plexus nerves are arranged from medial at L5 to lateral at L2. From the lateral view, lumbar nerves are arranged from ventral at L2 to dorsal at L5. The angle of each nerve root exiting outward to the corresponding intervertebral foramen increases from L1 to L5. The lumbar plexus nerves are observed to be in close contact with the transverse processes (TP). All parts of the lumbar plexus were located by sectional anatomy in the dorsal third of the psoas muscle. Thus, access to the psoas major muscle in the ventral 2/3 region can safely prevent nerve injuries. 3D reconstruction of the lumbar plexus based on VCH data can clearly show the relationships between the lumbar plexus and the blood vessels, vertebral body, kidney, and psoas muscle. Conclusion The psoas muscle can be considered a surgical landmark, since incision at the ventral 2/3 of the region can prevent lumbar plexus injuries in procedures requiring exposure of the lateral anterior lumbar spine. The transverse process can be considered a landmark and reference in surgical

  10. Clinical anatomy and 3D virtual reconstruction of the lumbar plexus with respect to lumbar surgery.

    Science.gov (United States)

    Lu, Sheng; Chang, Shan; Zhang, Yuan-zhi; Ding, Zi-hai; Xu, Xin Ming; Xu, Yong-qing

    2011-04-14

    Exposure of the anterior or lateral lumbar spine via the retroperitoneal approach easily causes injuries to the lumbar plexus. Lumbar plexus injuries which occur during anterior or transpsoas lumbar spine exposure and placement of instruments have been reported. This study aims to provide more anatomical data and surgical landmarks for operations concerning the lumbar plexus, in order to prevent lumbar plexus injuries and to increase the safety of anterior approach lumbar surgery. To study the applied anatomy related to the lumbar plexus, fifteen formaldehyde-preserved cadavers were examined, and five sets of Virtual Human (VH) data were prepared and used in the study. Three-dimensional (3D) computerized reconstructions of the lumbar plexus and its adjacent structures were conducted from the VH female data set. The order of the lumbar nerves is regular. From the anterior view, lumbar plexus nerves are arranged from medial at L5 to lateral at L2. From the lateral view, lumbar nerves are arranged from ventral at L2 to dorsal at L5. The angle of each nerve root exiting outward to the corresponding intervertebral foramen increases from L1 to L5. The lumbar plexus nerves are observed to be in close contact with the transverse processes (TP). All parts of the lumbar plexus were located by sectional anatomy in the dorsal third of the psoas muscle. Thus, access to the psoas major muscle in the ventral 2/3 region can safely prevent nerve injuries. 3D reconstruction of the lumbar plexus based on VCH data can clearly show the relationships between the lumbar plexus and the blood vessels, vertebral body, kidney, and psoas muscle. The psoas muscle can be considered a surgical landmark, since incision at the ventral 2/3 of the region can prevent lumbar plexus injuries in procedures requiring exposure of the lateral anterior lumbar spine. The transverse process can be considered a landmark and reference in surgical operations by its relative position to the lumbar plexus. 3D

  11. From survey to 3d model and from 3d model to “videogame”. The virtual reconstruction of a Roman Camp in Masada, Israel.

    Directory of Open Access Journals (Sweden)

    Sandro Parrinello

    2017-12-01

    Full Text Available The archaeological survey is carried out by combining the study and observation of material reality with the in-depth study of historical sources, thus allowing the “translation” of the signs of history into drawings, or rather into complex representations of an embedded system of information. The MRP-Masada Research Project was developed by the Joint Inter-University Laboratory Landscape, Survey & Design with the aim of experimenting with various digital technologies in order to create a complete digital documentation of the important archaeological site, now protected by UNESCO. The paper describes the case study of the virtual 3D reconstruction of the F2 Roman camp, so-called “Campo del Generale Silva”, and the potentialities that 3D models offer in terms of communication and dissemination of the Archaeological Heritage.

  12. Towards Virtual Prototyping of Synchronous Real-time Systems on NoC-based MPSoCs

    OpenAIRE

    Razi Seyyedi; M. T. Mohammadat; Maher Fakih; Kim Grüttner; Johnny Öberg

    2017-01-01

    NoC-based designs provide a scalable and flexible communication solution for the rising number of processing cores on a single chip. To master the complexity of the software design in such a NoC-based multi-core architecture, advanced incremental integration testing solutions are required. This paper presents a virtual platform based software testing and debugging approach for a synchronous application model on a 2x2 NoC-based MPSoC. We propose a development approach and a t...

  13. 3D-ANTLERS: Virtual Reconstruction and Three-Dimensional Measurement

    Science.gov (United States)

    Barba, S.; Fiorillo, F.; De Feo, E.

    2013-02-01

    In the ARTEC digital mock-up, for example, it is possible to select individual frames, already polygonal and geo-referenced at the time of capture; however, automated texturization is not possible, unlike in the low-cost environment, which allows a good graphic definition to be produced. Once the final 3D models were obtained, we proceeded to a geometric and graphic comparison of the results. Therefore, in order to provide an accuracy requirement and an assessment of the 3D reconstruction, we took into account the following benchmarks: cost, captured points, noise (local and global), shadows and holes, operability, degree of definition, quality and accuracy. Following these studies, carried out empirically on the virtual reconstructions, a 3D documentation procedure was codified, endorsing the use of terrestrial sensors for the documentation of antlers. The results thus obtained were compared with the standards set by the current provisions (see "Manual de medición" of the Government of Andalusia, Spain); to date, in fact, identification is based on data such as length, volume, colour, texture, openness, tips, structure, etc. Such data, currently only collected with traditional instruments such as a tape measure, would be well represented by a process of virtual reconstruction and cataloguing.

  14. Stereoselective virtual screening of the ZINC database using atom pair 3D-fingerprints.

    Science.gov (United States)

    Awale, Mahendra; Jin, Xian; Reymond, Jean-Louis

    2015-01-01

    Tools to explore large compound databases in search for analogs of query molecules provide a strategically important support in drug discovery to help identify available analogs of any given reference or hit compound by ligand based virtual screening (LBVS). We recently showed that large databases can be formatted for very fast searching with various 2D-fingerprints using the city-block distance as similarity measure, in particular a 2D-atom pair fingerprint (APfp) and the related category extended atom pair fingerprint (Xfp) which efficiently encode molecular shape and pharmacophores, but do not perceive stereochemistry. Here we investigated related 3D-atom pair fingerprints to enable rapid stereoselective searches in the ZINC database (23.2 million 3D structures). Molecular fingerprints counting atom pairs at increasing through-space distance intervals were designed using either all atoms (16-bit 3DAPfp) or different atom categories (80-bit 3DXfp). These 3D-fingerprints retrieved molecular shape and pharmacophore analogs (defined by OpenEye ROCS scoring functions) of 110,000 compounds from the Cambridge Structural Database with equal or better accuracy than the 2D-fingerprints APfp and Xfp, and showed comparable performance in recovering actives from decoys in the DUD database. LBVS by 3DXfp or 3DAPfp similarity was stereoselective and gave very different analogs when starting from different diastereomers of the same chiral drug. Results were also different from LBVS with the parent 2D-fingerprints Xfp or APfp. 3D- and 2D-fingerprints also gave very different results in LBVS of folded molecules where through-space distances between atom pairs are much shorter than topological distances. 3DAPfp and 3DXfp are suitable for stereoselective searches for shape and pharmacophore analogs of query molecules in large databases. Web-browsers for searching ZINC by 3DAPfp and 3DXfp similarity are accessible at www.gdb.unibe.ch and should provide useful assistance to drug
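
The fingerprint family described — counts of atom pairs binned by through-space distance, compared with the city-block distance — can be sketched generically. This toy version histograms all-atom pair distances at 1 Å resolution, in the spirit of the 16-bin 3DAPfp but not the authors' implementation; note that, like any pure pair-distance descriptor, it separates diastereomers and conformers (different interatomic distances) rather than mirror-image enantiomers:

```python
import numpy as np

def ap3d_fingerprint(coords, n_bins=16, bin_width=1.0):
    """Count atom pairs falling in successive through-space distance
    intervals (1 A bins by default)."""
    c = np.asarray(coords, float)
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
    pairs = d[np.triu_indices(len(c), k=1)]   # each pair counted once
    fp, _ = np.histogram(pairs, bins=n_bins, range=(0.0, n_bins * bin_width))
    return fp

def city_block(fp_a, fp_b):
    """City-block (L1) distance between two count fingerprints."""
    return int(np.abs(fp_a.astype(int) - fp_b.astype(int)).sum())

# same four "atoms" in a compact (square) vs. an extended (linear) shape
compact = [(0, 0, 0), (1.5, 0, 0), (1.5, 1.5, 0), (0, 1.5, 0)]
extended = [(0, 0, 0), (1.5, 0, 0), (3.0, 0, 0), (4.5, 0, 0)]
fpc, fpe = ap3d_fingerprint(compact), ap3d_fingerprint(extended)
```

Fixed-length count vectors compared by city-block distance are what make brute-force nearest-neighbour searches over tens of millions of structures tractable.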

  15. Gait adaptation to visual kinematic perturbations using a real-time closed-loop brain-computer interface to a virtual reality avatar.

    Science.gov (United States)

    Luu, Trieu Phat; He, Yongtian; Brown, Samuel; Nakagome, Sho; Contreras-Vidal, Jose L

    2016-06-01

    The control of human bipedal locomotion is of great interest to the field of lower-body brain-computer interfaces (BCIs) for gait rehabilitation. While the feasibility of closed-loop BCI systems for the control of a lower body exoskeleton has been recently shown, multi-day closed-loop neural decoding of human gait in a BCI virtual reality (BCI-VR) environment has yet to be demonstrated. BCI-VR systems provide valuable alternatives for movement rehabilitation when wearable robots are not desirable due to medical conditions, cost, accessibility, usability, or patient preferences. In this study, we propose a real-time closed-loop BCI that decodes lower limb joint angles from scalp electroencephalography (EEG) during treadmill walking to control a walking avatar in a virtual environment. Fluctuations in the amplitude of slow cortical potentials of EEG in the delta band (0.1-3 Hz) were used for prediction; thus, the EEG features correspond to time-domain amplitude modulated potentials in the delta band. Virtual kinematic perturbations resulting in asymmetric walking gait patterns of the avatar were also introduced to investigate gait adaptation using the closed-loop BCI-VR system over a period of eight days. Our results demonstrate the feasibility of using a closed-loop BCI to learn to control a walking avatar under normal and altered visuomotor perturbations, which involved cortical adaptations. The average decoding accuracies (Pearson's r values) in real-time BCI across all subjects increased from (Hip: 0.18 ± 0.31; Knee: 0.23 ± 0.33; Ankle: 0.14 ± 0.22) on Day 1 to (Hip: 0.40 ± 0.24; Knee: 0.55 ± 0.20; Ankle: 0.29 ± 0.22) on Day 8. These findings have implications for the development of a real-time closed-loop EEG-based BCI-VR system for gait rehabilitation after stroke and for understanding cortical plasticity induced by a closed-loop BCI-VR system.
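The decoding accuracies quoted above are Pearson correlation coefficients between measured and decoded joint-angle trajectories. A minimal sketch of that accuracy metric follows; the joint-angle data are hypothetical and this is not the study's decoding pipeline.

```python
# Minimal sketch (assumed, not the study's pipeline): decoding accuracy
# reported as Pearson's r between a measured joint-angle trajectory and
# the trajectory decoded from delta-band EEG. Data below are hypothetical.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

measured = [10.0, 25.0, 40.0, 30.0, 12.0]   # e.g. knee angle (deg), hypothetical
decoded  = [12.0, 22.0, 38.0, 33.0, 15.0]   # decoder output, hypothetical
print(round(pearson_r(measured, decoded), 3))
```

Averaging this r value per joint across subjects gives numbers directly comparable to the Day 1 vs Day 8 figures reported in the abstract.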

  16. Gait adaptation to visual kinematic perturbations using a real-time closed-loop brain-computer interface to a virtual reality avatar

    Science.gov (United States)

    Phat Luu, Trieu; He, Yongtian; Brown, Samuel; Nakagome, Sho; Contreras-Vidal, Jose L.

    2016-06-01

    Objective. The control of human bipedal locomotion is of great interest to the field of lower-body brain-computer interfaces (BCIs) for gait rehabilitation. While the feasibility of closed-loop BCI systems for the control of a lower body exoskeleton has been recently shown, multi-day closed-loop neural decoding of human gait in a BCI virtual reality (BCI-VR) environment has yet to be demonstrated. BCI-VR systems provide valuable alternatives for movement rehabilitation when wearable robots are not desirable due to medical conditions, cost, accessibility, usability, or patient preferences. Approach. In this study, we propose a real-time closed-loop BCI that decodes lower limb joint angles from scalp electroencephalography (EEG) during treadmill walking to control a walking avatar in a virtual environment. Fluctuations in the amplitude of slow cortical potentials of EEG in the delta band (0.1-3 Hz) were used for prediction; thus, the EEG features correspond to time-domain amplitude modulated potentials in the delta band. Virtual kinematic perturbations resulting in asymmetric walking gait patterns of the avatar were also introduced to investigate gait adaptation using the closed-loop BCI-VR system over a period of eight days. Main results. Our results demonstrate the feasibility of using a closed-loop BCI to learn to control a walking avatar under normal and altered visuomotor perturbations, which involved cortical adaptations. The average decoding accuracies (Pearson’s r values) in real-time BCI across all subjects increased from (Hip: 0.18 ± 0.31; Knee: 0.23 ± 0.33; Ankle: 0.14 ± 0.22) on Day 1 to (Hip: 0.40 ± 0.24; Knee: 0.55 ± 0.20; Ankle: 0.29 ± 0.22) on Day 8. Significance. These findings have implications for the development of a real-time closed-loop EEG-based BCI-VR system for gait rehabilitation after stroke and for understanding cortical plasticity induced by a closed-loop BCI-VR system.

  17. Real-time modulation of visual feedback on human full-body movements in a virtual mirror: development and proof-of-concept.

    Science.gov (United States)

    Roosink, Meyke; Robitaille, Nicolas; McFadyen, Bradford J; Hébert, Luc J; Jackson, Philip L; Bouyer, Laurent J; Mercier, Catherine

    2015-01-05

    Virtual reality (VR) provides interactive multimodal sensory stimuli and biofeedback, and can be a powerful tool for physical and cognitive rehabilitation. However, existing systems have generally not implemented realistic full-body avatars and/or a scaling of visual movement feedback. We developed a "virtual mirror" that displays a realistic full-body avatar that responds to full-body movements in all movement planes in real-time, and that allows for the scaling of visual feedback on movements in real-time. The primary objective of this proof-of-concept study was to assess the ability of healthy subjects to detect scaled feedback on trunk flexion movements. The "virtual mirror" was developed by integrating motion capture, virtual reality and projection systems. A protocol was developed to provide both augmented and reduced feedback on trunk flexion movements while sitting and standing. The task required reliance on both visual and proprioceptive feedback. The ability to detect scaled feedback was assessed in healthy subjects (n = 10) using a two-alternative forced choice paradigm. Additionally, immersion in the VR environment and task adherence (flexion angles, velocity, and fluency) were assessed. The ability to detect scaled feedback could be modelled using a sigmoid curve with a high goodness of fit (R² range 89-98%). The point of subjective equivalence was not significantly different from 0 (i.e. not shifted), indicating an unbiased perception. The just noticeable difference was 0.035 ± 0.007, indicating that subjects were able to discriminate different scaling levels consistently. VR immersion was reported to be good, despite some perceived delays between movements and VR projections. Movement kinematic analysis confirmed task adherence. The new "virtual mirror" extends existing VR systems for motor and pain rehabilitation by enabling the use of realistic full-body avatars and scaled feedback. Proof-of-concept was demonstrated for the assessment of
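The sigmoid analysis summarized above (point of subjective equivalence at the 50% point, just noticeable difference from the spread of the curve) can be illustrated with a minimal sketch. The logistic form, the parameter values, and the function names here are assumptions for illustration, not the study's actual fitting code.

```python
# Hedged sketch of a two-alternative forced-choice psychometric analysis:
# the proportion of "augmented" responses vs. scaling level is modelled
# with a logistic sigmoid; the point of subjective equivalence (PSE) is
# the 50% point and the just noticeable difference (JND) is half the
# distance between the 25% and 75% points. Parameters are hypothetical.
import math

def psychometric(x, pse, slope):
    """Logistic psychometric function: P(respond 'augmented') at scaling x."""
    return 1.0 / (1.0 + math.exp(-(x - pse) / slope))

def jnd(pse, slope):
    """JND: half the distance between the 25% and 75% points of the curve."""
    x75 = pse + slope * math.log(3)   # solving psychometric(x) = 0.75
    x25 = pse - slope * math.log(3)   # solving psychometric(x) = 0.25
    return (x75 - x25) / 2.0

# Hypothetical fitted parameters: unbiased perception means PSE ≈ 0.
pse, slope = 0.0, 0.02
print(round(psychometric(0.0, pse, slope), 2))  # 0.5 at the PSE
print(round(jnd(pse, slope), 3))
```

A PSE near 0 corresponds to the "unbiased perception" result, and a small JND corresponds to consistent discrimination of scaling levels.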

  18. Using virtual ridge augmentation and 3D printing to fabricate a titanium mesh positioning device: A novel technique letter.

    Science.gov (United States)

    Al-Ardah, Aladdin; Alqahtani, Nasser; AlHelal, Abdulaziz; Goodacre, Brian; Swamidass, Rajesh; Garbacea, Antoanela; Lozada, Jaime

    2018-05-02

    This report describes a novel approach for planning and augmenting a large bony defect using a titanium mesh (TiMe). A 3-dimensional (3D) surgical model was virtually created from a cone beam computed tomography (CBCT) scan and a wax pattern of the final prosthetic outcome. The required bone volume (horizontal and vertical) was digitally augmented and then 3D printed to create a bone model. The 3D model was then used to contour the TiMe in accordance with the digital augmentation. With the contoured, preformed TiMe on the 3D-printed model, a positioning jig was made to aid placement of the TiMe as planned during surgery. Although this technique does not affect the final outcome of the augmentation procedure, it allows the clinician to virtually design the augmentation, preform and contour the TiMe, and create a positioning jig, reducing surgical time and error.

  19. Viewing medium affects arm motor performance in 3D virtual environments.

    Science.gov (United States)

    Subramanian, Sandeep K; Levin, Mindy F

    2011-06-30

    2D and 3D virtual reality platforms are used for designing individualized training environments for post-stroke rehabilitation. Virtual environments (VEs) are viewed using media like head mounted displays (HMDs) and large screen projection systems (SPS) which can influence the quality of perception of the environment. We estimated if there were differences in arm pointing kinematics when subjects with and without stroke viewed a 3D VE through two different media: HMD and SPS. Two groups of subjects participated (healthy control, n=10, aged 53.6 ± 17.2 yrs; stroke, n=20, 66.2 ± 11.3 yrs). Arm motor impairment and spasticity were assessed in the stroke group which was divided into mild (n=10) and moderate-to-severe (n=10) sub-groups based on Fugl-Meyer Scores. Subjects pointed (8 times each) to 6 randomly presented targets located at two heights in the ipsilateral, middle and contralateral arm workspaces. Movements were repeated in the same VE viewed using HMD (Kaiser XL50) and SPS. Movement kinematics were recorded using an Optotrak system (Certus, 6 markers, 100 Hz). Upper limb motor performance (precision, velocity, trajectory straightness) and movement pattern (elbow, shoulder ranges and trunk displacement) outcomes were analyzed using repeated measures ANOVAs. For all groups, there were no differences in endpoint trajectory straightness, shoulder flexion and shoulder horizontal adduction ranges and sagittal trunk displacement between the two media. All subjects, however, made larger errors in the vertical direction using HMD compared to SPS. Healthy subjects also made larger errors in the sagittal direction, slower movements overall and used less range of elbow extension for the lower central target using HMD compared to SPS. The mild and moderate-to-severe sub-groups made larger RMS errors with HMD. The only advantage of using the HMD was that movements were 22% faster in the moderate-to-severe stroke sub-group compared to the SPS. Despite the similarity in

  20. Viewing medium affects arm motor performance in 3D virtual environments

    Directory of Open Access Journals (Sweden)

    Subramanian Sandeep K

    2011-06-01

    Full Text Available Abstract Background 2D and 3D virtual reality platforms are used for designing individualized training environments for post-stroke rehabilitation. Virtual environments (VEs) are viewed using media like head mounted displays (HMDs) and large screen projection systems (SPS) which can influence the quality of perception of the environment. We estimated if there were differences in arm pointing kinematics when subjects with and without stroke viewed a 3D VE through two different media: HMD and SPS. Methods Two groups of subjects participated (healthy control, n = 10, aged 53.6 ± 17.2 yrs; stroke, n = 20, 66.2 ± 11.3 yrs). Arm motor impairment and spasticity were assessed in the stroke group which was divided into mild (n = 10) and moderate-to-severe (n = 10) sub-groups based on Fugl-Meyer Scores. Subjects pointed (8 times each) to 6 randomly presented targets located at two heights in the ipsilateral, middle and contralateral arm workspaces. Movements were repeated in the same VE viewed using HMD (Kaiser XL50) and SPS. Movement kinematics were recorded using an Optotrak system (Certus, 6 markers, 100 Hz). Upper limb motor performance (precision, velocity, trajectory straightness) and movement pattern (elbow, shoulder ranges and trunk displacement) outcomes were analyzed using repeated measures ANOVAs. Results For all groups, there were no differences in endpoint trajectory straightness, shoulder flexion and shoulder horizontal adduction ranges and sagittal trunk displacement between the two media. All subjects, however, made larger errors in the vertical direction using HMD compared to SPS. Healthy subjects also made larger errors in the sagittal direction, slower movements overall and used less range of elbow extension for the lower central target using HMD compared to SPS. The mild and moderate-to-severe sub-groups made larger RMS errors with HMD. The only advantage of using the HMD was that movements were 22% faster in the moderate-to-severe stroke sub

  1. NanTroSEIZE in 3-D: Creating a Virtual Research Experience in Undergraduate Geoscience Courses

    Science.gov (United States)

    Reed, D. L.; Bangs, N. L.; Moore, G. F.; Tobin, H.

    2009-12-01

    Marine research programs, both large and small, have increasingly added a web-based component to facilitate outreach to K-12 and the public, in general. These efforts have included, among other activities, information-rich websites, ship-to-shore communication with scientists during expeditions, blogs at sea, clips on YouTube, and information about daily shipboard activities. Our objective was to leverage a portion of the vast collection of data acquired through the NSF-MARGINS program to create a learning tool with a long lifespan for use in undergraduate geoscience courses. We have developed a web-based virtual expedition, NanTroSEIZE in 3-D, based on a seismic survey associated with the NanTroSEIZE program of NSF-MARGINS and IODP to study the properties of the plate boundary fault system in the upper limit of the seismogenic zone off Japan. The virtual voyage can be used in undergraduate classes at any time, since it is not directly tied to the finite duration of a specific seagoing project. The website combines text, graphics, audio and video to place learning in an experiential framework as students participate on the expedition and carry out research. Students learn about the scientific background of the program, especially the critical role of international collaboration, and meet the chief scientists before joining the sea-going expedition. Students are presented with the principles of 3-D seismic imaging, data processing and interpretation while mapping and identifying the active faults that were the likely sources of devastating earthquakes and tsunamis in Japan in 1944 and 1946. They also learn about IODP drilling that began in 2007 and will extend through much of the next decade. The website is being tested in undergraduate classes in fall 2009 and will be distributed through the NSF-MARGINS website (http://www.nsf-margins.org/) and the MARGINS Mini-lesson section of the Science Education Resource Center (SERC) (http

  2. A simplified 2D to 3D video conversion technology: taking virtual campus video production as an example

    Directory of Open Access Journals (Sweden)

    ZHUANG Huiyang

    2012-10-01

    Full Text Available This paper describes a simplified 2D-to-3D video conversion technology, taking virtual campus 3D video production as an example. First, it clarifies the meaning of 2D-to-3D video conversion and points out the disadvantages of traditional methods. Second, it presents an innovative and convenient method, with a flow diagram and the software and hardware configurations. Finally, it describes in detail the conversion steps and precautions for the three processes: preparing materials, modeling objects and baking landscapes, and recording the screen and converting the videos.

  3. EFFECTIVE 3D DIGITIZATION OF ARCHAEOLOGICAL ARTIFACTS FOR INTERACTIVE VIRTUAL MUSEUM

    Directory of Open Access Journals (Sweden)

    G. Tucci

    2012-09-01

    Full Text Available This paper presents results of on-going research on digital 3D reproduction of medium- and small-sized archaeological artifacts, intended to support the elaboration of a virtual and interactive exhibition environment and to provide a scientific archive of highly accurate models for specialists. After a short illustration of the background project and its aims, we present the data acquisition through triangulation-based laser scanning and the post-processing methods used to face the challenge of obtaining a large number of reliable digital copies at reasonable cost and within a short time frame. We give an account of the most recurrent problematic issues of the established workflow and how they were solved (the careful placement of the artifacts to be digitized so as to achieve the best results, the cleaning operations needed to build a coherent single polygon mesh, how to deal with unavoidable missing parts or defective textures in the generated model, etc.).

  4. Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting.

    Science.gov (United States)

    Atabaki, Artin; Marciniak, Karolina; Dicke, Peter W; Thier, Peter

    2015-07-01

    Despite the ecological importance of gaze following, little is known about the underlying neuronal processes, which allow us to extract gaze direction from the geometric features of the eye and head of a conspecific. In order to understand the neuronal mechanisms underlying this ability, a careful description of the capacity and the limitations of gaze following at the behavioral level is needed. Previous studies of gaze following, which relied on naturalistic settings, have the disadvantage of allowing only very limited control of potentially relevant visual features guiding gaze following, such as the contrast of iris and sclera and the shape of the eyelids, and, in the case of photographs, they lack depth. Hence, in order to gain full control of potentially relevant features, we decided to study gaze following of human observers guided by the gaze of a human avatar seen stereoscopically. To this end we established a stereoscopic 3D virtual reality setup, in which we tested human subjects' ability to detect which target a human avatar was looking at. Following the gaze of the avatar showed all the features of the gaze following of a natural person, namely a substantial degree of precision associated with a consistent pattern of systematic deviations from the target. Poor stereo vision affected performance surprisingly little (only in certain experimental conditions). Only gaze following guided by targets at larger downward eccentricities exhibited a differential effect of the presence or absence of accompanying movements of the avatar's eyelids and eyebrows. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. CamMedNP: building the Cameroonian 3D structural natural products database for virtual screening.

    Science.gov (United States)

    Ntie-Kang, Fidele; Mbah, James A; Mbaze, Luc Meva'a; Lifongo, Lydia L; Scharfe, Michael; Hanna, Joelle Ngo; Cho-Ngwa, Fidelis; Onguéné, Pascal Amoa; Owono Owono, Luc C; Megnassan, Eugene; Sippl, Wolfgang; Efange, Simon M N

    2013-04-16

    Computer-aided drug design (CADD) often involves virtual screening (VS) of large compound datasets, and the availability of such datasets is vital for drug discovery protocols. We present CamMedNP, a new database comprising more than 2,500 compounds of natural origin, along with some of their derivatives obtained through hemisynthesis. These are pure compounds which have been previously isolated and characterized using modern spectroscopic methods and published by several research teams spread across Cameroon. In the present study, 224 distinct medicinal plant species belonging to 55 plant families from the Cameroonian flora have been considered. About 80% of these have been previously published and/or referenced in internationally recognized journals. For each compound, the optimized 3D structure, drug-like properties, plant source, collection site and currently known biological activities are given, as well as literature references. We have evaluated the "drug-likeness" of this database using Lipinski's "Rule of Five". A diversity analysis has been carried out in comparison with the ChemBridge diverse database. CamMedNP could be highly useful for database screening and natural product lead generation programs.
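The Rule of Five screen mentioned above is a simple counting filter and can be sketched directly. The descriptor values for the example molecules are hypothetical, and in practice the molecular properties would be computed from structures with a cheminformatics toolkit rather than typed in by hand.

```python
# Illustrative sketch of the "drug-likeness" screen mentioned above:
# Lipinski's Rule of Five flags a compound when it violates more than
# one of the four criteria. Descriptor values must be precomputed;
# the example molecules below are hypothetical.

def lipinski_violations(mw, logp, h_donors, h_acceptors):
    """Count Rule of Five violations from precomputed descriptors."""
    violations = 0
    if mw > 500:          # molecular weight (Da)
        violations += 1
    if logp > 5:          # octanol-water partition coefficient
        violations += 1
    if h_donors > 5:      # hydrogen-bond donors
        violations += 1
    if h_acceptors > 10:  # hydrogen-bond acceptors
        violations += 1
    return violations

def is_drug_like(mw, logp, h_donors, h_acceptors):
    """Rule of Five: drug-like if at most one criterion is violated."""
    return lipinski_violations(mw, logp, h_donors, h_acceptors) <= 1

print(is_drug_like(mw=342.4, logp=2.1, h_donors=3, h_acceptors=6))   # True
print(is_drug_like(mw=612.7, logp=6.3, h_donors=2, h_acceptors=11))  # False
```

Running such a filter over every entry gives the kind of database-wide drug-likeness summary the abstract refers to.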

  6. Virtual film technique used in 3D and step-and-shoot IMRT planning check

    International Nuclear Information System (INIS)

    Wang, Y.; Zealey, W.; Deng, X.; Huang, S.; Qi, Z.

    2004-01-01

    Full text: A virtual film technique was developed and used for segmented-field dose reconstruction to check IMRT planning dose distributions. Film dosimetry analysis is commonly used for isodose curve comparison, but the result can be affected by technical problems with film dosimetry, and film processing also adds a significant workload. This study focuses on using a digital image technique to reconstruct the dose distribution for a 3D plan by mapping water-scanning data on screen as black-and-white intensity values, and by simulating the film analysis process to plot equivalent isodose curves for the planning isodose comparison check. In-house developed software is used to select the TPR (Tissue-Phantom Ratio) and OCR (Off Central-Axis Ratio) data for different beam field types and sizes; each point dose of the field is interpolated and converted into a greyscale pixel value. The location of the pixel is calculated by a triangular function according to the beam entry position and gantry/collimator angles. After each segment field is processed, the program gathers all the segments and overlays the greyscale values pixel by pixel into a combined map. The background value is calibrated to match the water scan curve background level. The penumbra slope is adjusted by an interpolated divergent angle according to the OAD (Off Central-Axis Distance) of the field. A normal film dosimetry analysis can then be performed to plot the isodose curves. Typical fields, both single-beam and segmented IMRT fields, were compared, with point doses checked by ionization measurement; the central point dose discrepancy is within ±2%, with a maximum of 3-5% for a random point using the TLD technique. The isodose overlay results were compared to planning curves for both perpendicular and lateral beams.
Although the curve shape for the virtual film is more artificial compared with real film, the results are easier to compare for quantitative analysis with
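The core of the virtual-film idea, converting each segment's point doses to greyscale values and overlaying the segments pixel by pixel, can be sketched as follows. This is a deliberately simplified illustration: the linear dose-to-grey calibration, the toy dose maps, and the function names are assumptions, and the real software additionally handles TPR/OCR interpolation, beam geometry, and penumbra corrections.

```python
# Hedged, simplified sketch of the virtual-film overlay described above:
# each segment's dose map is converted to greyscale pixel values and the
# segments are summed pixel by pixel into one combined map. The linear
# dose-to-grey calibration and the toy dose maps are hypothetical.

def dose_to_grey(dose, dose_max, levels=255):
    """Map a point dose to an integer greyscale value in 0..levels."""
    return round(min(dose / dose_max, 1.0) * levels)

def overlay_segments(segments, dose_max):
    """Sum segment dose maps pixel by pixel, then convert to greyscale."""
    rows, cols = len(segments[0]), len(segments[0][0])
    combined = [[0.0] * cols for _ in range(rows)]
    for seg in segments:
        for i in range(rows):
            for j in range(cols):
                combined[i][j] += seg[i][j]
    return [[dose_to_grey(d, dose_max) for d in row] for row in combined]

# Two toy 2x3 segment dose maps (Gy), hypothetical values.
seg_a = [[0.2, 0.5, 0.2],
         [0.1, 0.4, 0.1]]
seg_b = [[0.3, 0.5, 0.3],
         [0.2, 0.6, 0.2]]
print(overlay_segments([seg_a, seg_b], dose_max=1.0))
```

The combined greyscale map plays the role of the developed film: standard film-dosimetry analysis tools can then extract isodose curves from it for comparison with the planning system's curves.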

  7. A 3-D Virtual Reality Model of the Sun and the Moon for E-Learning at Elementary Schools

    Science.gov (United States)

    Sun, Koun-Tem; Lin, Ching-Ling; Wang, Sheng-Min

    2010-01-01

    The relative positions of the sun, moon, and earth, their movements, and their relationships are abstract and difficult to understand astronomical concepts in elementary school science. This study proposes a three-dimensional (3-D) virtual reality (VR) model named the "Sun and Moon System." This e-learning resource was designed by…

  8. A cone-beam CT based technique to augment the 3D virtual skull model with a detailed dental surface.

    NARCIS (Netherlands)

    Swennen, G.R.J.; Mommaerts, M.Y.; Abeloos, J.V.S.; Clercq, C. De; Lamoral, P.; Neyt, N.; Casselman, J.W.; Schutyser, F.A.C.

    2009-01-01

    Cone-beam computed tomography (CBCT) is used for maxillofacial imaging. 3D virtual planning of orthognathic and facial orthomorphic surgery requires detailed visualisation of the interocclusal relationship. This study aimed to introduce and evaluate the use of a double CBCT scan procedure with a

  9. Virtual reality 3D echocardiography in the assessment of tricuspid valve function after surgical closure of ventricular septal defect

    NARCIS (Netherlands)

    G. Bol-Raap (Goris); A.H.J. Koning (Anton); T.V. Scohy (Thierry); A.D.J. ten Harkel (Arend); F.J. Meijboom (Folkert); A.P. Kappetein (Arie Pieter); P.J. van der Spek (Peter); A.J.J.C. Bogers (Ad)

    2007-01-01

    Background. This study was done to investigate the potential additional role of virtual reality, using three-dimensional (3D) echocardiographic holograms, in the postoperative assessment of tricuspid valve function after surgical closure of ventricular septal defect (VSD). Methods. 12

  10. Virtual surgical planning and 3D printing in prosthetic orbital reconstruction with percutaneous implants: a technical case report.

    Science.gov (United States)

    Huang, Yu-Hui; Seelaus, Rosemary; Zhao, Linping; Patel, Pravin K; Cohen, Mimis

    2016-01-01

    Osseointegrated titanium implants to the cranial skeleton for retention of facial prostheses have proven to be a reliable replacement for adhesive systems. However, improper placement of the implants can jeopardize prosthetic outcomes, and long-term success of an implant-retained prosthesis. Three-dimensional (3D) computer imaging, virtual planning, and 3D printing have become accepted components of the preoperative planning and design phase of treatment. Computer-aided design and computer-assisted manufacture that employ cone-beam computed tomography data offer benefits to patient treatment by contributing to greater predictability and improved treatment efficiencies with more reliable outcomes in surgical and prosthetic reconstruction. 3D printing enables transfer of the virtual surgical plan to the operating room by fabrication of surgical guides. Previous studies have shown that accuracy improves considerably with guided implantation when compared to conventional template or freehand implant placement. This clinical case report demonstrates the use of a 3D technological pathway for preoperative virtual planning through prosthesis fabrication, utilizing 3D printing, for a patient with an acquired orbital defect that was restored with an implant-retained silicone orbital prosthesis.

  11. A second life for eHealth: prospects for the use of 3-D virtual worlds in clinical psychology.

    Science.gov (United States)

    Gorini, Alessandra; Gaggioli, Andrea; Vigna, Cinzia; Riva, Giuseppe

    2008-08-05

    The aim of the present paper is to describe the role played by three-dimensional (3-D) virtual worlds in eHealth applications, addressing some potential advantages and issues related to the use of this emerging medium in clinical practice. Due to the enormous diffusion of the World Wide Web (WWW), telepsychology, and telehealth in general, have become accepted and validated methods for the treatment of many different health care concerns. The introduction of the Web 2.0 has facilitated the development of new forms of collaborative interaction between multiple users based on 3-D virtual worlds. This paper describes the development and implementation of a form of tailored immersive e-therapy called p-health whose key factor is interreality, that is, the creation of a hybrid augmented experience merging physical and virtual worlds. We suggest that compared with conventional telehealth applications such as emails, chat, and videoconferences, the interaction between real and 3-D virtual worlds may convey greater feelings of presence, facilitate the clinical communication process, positively influence group processes and cohesiveness in group-based therapies, and foster higher levels of interpersonal trust between therapists and patients. However, challenges related to the potentially addictive nature of such virtual worlds and questions related to privacy and personal safety will also be discussed.

  12. Novel 3D modeling methods for virtual fabrication and EDA compatible design of MEMS via parametric libraries

    International Nuclear Information System (INIS)

    Schröpfer, Gerold; Lorenz, Gunar; Rouvillois, Stéphane; Breit, Stephen

    2010-01-01

    This paper provides a brief summary of the state-of-the-art of MEMS-specific modeling techniques and describes the validation of new models for a parametric component library. Two recently developed 3D modeling tools are described in more detail. The first one captures a methodology for designing MEMS devices and simulating them together with integrated electronics within a standard electronic design automation (EDA) environment. The MEMS designer can construct the MEMS model directly in a 3D view. The resulting 3D model differs from a typical feature-based 3D CAD modeling tool in that there is an underlying behavioral model and parametric layout associated with each MEMS component. The model of the complete MEMS device that is shared with the standard EDA environment can be fully parameterized with respect to manufacturing- and design-dependent variables. Another recent innovation is a process modeling tool that allows accurate and highly realistic visualization of the step-by-step creation of 3D micro-fabricated devices. The novelty of the tool lies in its use of voxels (3D pixels) rather than conventional 3D CAD techniques to represent the 3D geometry. Case studies for experimental devices are presented showing how the examination of these virtual prototypes can reveal design errors before mask tape out, support process development before actual fabrication and also enable failure analysis after manufacturing.

  13. Virtual surgical planning and 3D printing in prosthetic orbital reconstruction with percutaneous implants: a technical case report

    Directory of Open Access Journals (Sweden)

    Huang Y

    2016-10-01

    Full Text Available Yu-Hui Huang,1,2 Rosemary Seelaus,1,2 Linping Zhao,1,2 Pravin K Patel,1,2 Mimis Cohen1,2 1The Craniofacial Center, Department of Surgery, Division of Plastic & Reconstructive Surgery, University of Illinois Hospital & Health Sciences System, 2University of Illinois College of Medicine at Chicago, Chicago, IL, USA Abstract: Osseointegrated titanium implants to the cranial skeleton for retention of facial prostheses have proven to be a reliable replacement for adhesive systems. However, improper placement of the implants can jeopardize prosthetic outcomes, and long-term success of an implant-retained prosthesis. Three-dimensional (3D computer imaging, virtual planning, and 3D printing have become accepted components of the preoperative planning and design phase of treatment. Computer-aided design and computer-assisted manufacture that employ cone-beam computed tomography data offer benefits to patient treatment by contributing to greater predictability and improved treatment efficiencies with more reliable outcomes in surgical and prosthetic reconstruction. 3D printing enables transfer of the virtual surgical plan to the operating room by fabrication of surgical guides. Previous studies have shown that accuracy improves considerably with guided implantation when compared to conventional template or freehand implant placement. This clinical case report demonstrates the use of a 3D technological pathway for preoperative virtual planning through prosthesis fabrication, utilizing 3D printing, for a patient with an acquired orbital defect that was restored with an implant-retained silicone orbital prosthesis. Keywords: computer-assisted surgery, virtual surgical planning (VSP, 3D printing, orbital prosthetic reconstruction, craniofacial implants

  14. A New Navigation System of Renal Puncture for Endoscopic Combined Intrarenal Surgery: Real-time Virtual Sonography-guided Renal Access.

    Science.gov (United States)

    Hamamoto, Shuzo; Unno, Rei; Taguchi, Kazumi; Ando, Ryosuke; Hamakawa, Takashi; Naiki, Taku; Okada, Shinsuke; Inoue, Takaaki; Okada, Atsushi; Kohri, Kenjiro; Yasui, Takahiro

    2017-11-01

    To evaluate the clinical utility of a new navigation technique for percutaneous renal puncture using real-time virtual sonography (RVS) during endoscopic combined intrarenal surgery. Thirty consecutive patients who underwent endoscopic combined intrarenal surgery for renal calculi, between April 2014 and July 2015, were divided into the RVS-guided puncture (RVS; n = 15) group and the ultrasonography-guided puncture (US; n = 15) group. In the RVS group, renal puncture was repeated until precise piercing of a papilla was achieved under direct endoscopic vision, using the RVS system to synchronize the real-time US image with the preoperative computed tomography image. In the US group, renal puncture was performed under US guidance only. In both groups, 2 urologists worked simultaneously to fragment the renal calculi after inserting the miniature percutaneous tract. The mean sizes of the renal calculi in the RVS and the US group were 33.5 and 30.5 mm, respectively. A lower mean number of puncture attempts until renal access through the calyx was needed for the RVS compared with the US group (1.6 vs 3.4 times, respectively; P = .001). The RVS group had a lower mean postoperative hemoglobin decrease (0.93 vs 1.39 g/dL, respectively; P = .04), but with no between-group differences with regard to operative time, tubeless rate, and stone-free rate. None of the patients in the RVS group experienced postoperative complications of a Clavien score ≥2, with 3 patients experiencing such complications in the US group. RVS-guided renal puncture was effective, with a lower incidence of bleeding-related complications compared with US-guided puncture. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Hsp90 inhibitors, part 1: definition of 3-D QSAutogrid/R models as a tool for virtual screening.

    Science.gov (United States)

    Ballante, Flavio; Caroli, Antonia; Wickersham, Richard B; Ragno, Rino

    2014-03-24

    The multichaperone heat shock protein (Hsp) 90 complex mediates the maturation and stability of a variety of oncogenic signaling proteins. For this reason, Hsp90 has emerged as a promising target for anticancer drug development. Herein, we describe a complete computational procedure for building several 3-D QSAR models used as the ligand-based (LB) component of a comprehensive LB and structure-based (SB) virtual screening (VS) protocol to identify novel molecular scaffolds of Hsp90 inhibitors. By applying the 3-D QSAutogrid/R method, eight SB PLS 3-D QSAR models were generated, leading to a final multiprobe (MP) 3-D QSAR pharmacophoric model capable of recognizing the most significant chemical features for Hsp90 inhibition. Both the monoprobe and multiprobe models were optimized, cross-validated, and tested against an external test set. The statistical results confirmed that the models are sufficiently robust and predictive to be used in a subsequent VS.
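    The models above are PLS-based 3-D QSAR regressions. As an illustrative sketch only (a generic single-response PLS1 in the NIPALS style, not the 3-D QSAutogrid/R implementation), mapping a descriptor matrix X to an activity vector y:

```python
import numpy as np

def pls1(X, y, n_components=2):
    """Minimal single-response PLS (PLS1, NIPALS deflation).
    Returns (coef, x_mean, y_mean) for predictions on new data."""
    Xr = X - X.mean(axis=0)
    yr = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xr.T @ yr
        w = w / np.linalg.norm(w)           # loading weights
        t = Xr @ w                          # scores
        tt = t @ t
        p = Xr.T @ t / tt                   # X loadings
        q = (yr @ t) / tt                   # y loading (scalar)
        Xr = Xr - np.outer(t, p)            # deflate X and y
        yr = yr - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    coef = W @ np.linalg.inv(P.T @ W) @ Q   # coefficients in original X space
    return coef, X.mean(axis=0), y.mean()

def pls1_predict(model, Xnew):
    coef, x_mean, y_mean = model
    return (Xnew - x_mean) @ coef + y_mean
```

    In QSAR practice the columns of X would be molecular interaction field probes, and model quality would be judged by cross-validation, as the abstract describes.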

  16. Virtual 3-D operational planning in the maintenance of main mine equipment; Virtuelle 3D-Ablaufplanung in der Instandhaltung von Tagebaugrossgeraeten

    Energy Technology Data Exchange (ETDEWEB)

    Suchodoll, Dirk; Eberlein, Mark [RWE Power AG, Frechen (Germany). Technikzentrum Tagebaue/HW; Stock, Wilhelm [RWE Power AG, Koeln (Germany)

    2012-09-15

    An interdisciplinary project team, consisting of employees of the Fraunhofer Institute for Factory Operation and Automation (IFF) in Magdeburg, mechanical engineers specialised in the maintenance of main mine equipment, and RWE Power AG staff specialised in technical further education, developed, optimised and applied a virtual 3-D model to optimise the sequence of operations for replacing the ball race of a bucket-wheel excavator during an outage of several weeks. Because a process of this complexity had never before been examined, documented and assessed at this level of detail, extensive knowledge previously present only in experts' minds could be recorded and transferred to the model. This constitutes significant added value for RWE Power AG's maintenance core competency. After its data set is modified, the model can be reused for subsequent projects on other bucket-wheel excavators. The model combines engineering, maintenance and occupational safety and can equally be applied to process optimisation, training on the construction site and technical further education; this is aided by the fact that the virtual model is intuitive to use and has no special hardware requirements. The results have shown that virtualising processes and plants also opens up new ways of retaining and transferring knowledge, so that a modern company's further-education requirements can be met more effectively, promptly and in a more demand-oriented manner. The costs and benefits of the virtual 3-D model cannot be assessed reliably, owing to its manifold and complex fields of application on the one hand and the difficulty of evaluating the achieved process improvements on the other. In addition, it must be taken into account that those process improvements could only be made in the first place because of the development of this model.

  17. Simulation of mirror surfaces for virtual estimation of visibility lines for 3D motor vehicle collision reconstruction.

    Science.gov (United States)

    Leipner, Anja; Dobler, Erika; Braun, Marcel; Sieberth, Till; Ebert, Lars

    2017-10-01

    3D reconstructions of motor vehicle collisions are used to identify the causes of these events and potential violations of traffic regulations. Thus far, the reconstruction of mirrors has been a problem, since it is often based on approximations or inaccurate data. Our aim in this paper was to confirm that structured light scans of a mirror improve the accuracy of simulating its field of view. We analyzed the performance of virtual mirror surfaces based on structured light scans, using real mirror surfaces and their reflections as references. We used an ATOS GOM III scanner to scan the mirrors and processed the 3D data using Geomagic Wrap. For scene reconstruction and to generate virtual images, we used 3ds Max. We compared the simulated virtual images and photographs of real scenes using Adobe Photoshop. Our results showed that we achieved clear and even mirror results and that the mirrors behaved as expected. The greatest measured deviation between an original photo and the corresponding virtual image was 20 pixels in the transverse direction for an image width of 4256 pixels. We discussed the influence of data processing and of the alignment of the 3D models on the results. The study was limited to a distance of 1.6 m, and the method was not able to simulate an interior mirror. In conclusion, structured light scans of mirror surfaces can be used to simulate virtual mirror surfaces for 3D motor vehicle collision reconstruction. Copyright © 2017 Elsevier B.V. All rights reserved.
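    The validation metric described, the largest transverse (horizontal) offset between a photograph and the corresponding rendered image, can be sketched as a comparison of matched landmark positions. The landmark coordinates below are invented for illustration (only the 4256-pixel image width and the 20-pixel figure come from the abstract):

```python
import numpy as np

def max_transverse_deviation(pts_photo, pts_render, image_width):
    """Largest horizontal (transverse) offset, in pixels, between matched
    landmarks in a photograph and a rendered virtual image, plus the same
    offset as a fraction of image width."""
    dx = np.abs(np.asarray(pts_photo, float)[:, 0]
                - np.asarray(pts_render, float)[:, 0])
    return float(dx.max()), float(dx.max()) / image_width

# Hypothetical matched (x, y) landmark positions in pixels.
photo = [(1021, 540), (2300, 812), (3977, 1490)]
render = [(1031, 541), (2292, 810), (3957, 1486)]
dev_px, dev_frac = max_transverse_deviation(photo, render, image_width=4256)
# dev_px == 20.0 here, i.e. under 0.5% of a 4256-pixel-wide image
```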

  18. 3D display system using monocular multiview displays

    Science.gov (United States)

    Sakamoto, Kunio; Saruta, Kazuki; Takeda, Kazutoki

    2002-05-01

    A 3D head-mounted display (HMD) system is useful for constructing a virtual space. The authors have researched virtual-reality systems connected to computer networks for real-time remote control and have developed a low-priced real-time 3D display for building such systems. We developed a 3D HMD system using monocular multi-view displays. The 3D display technique of this monocular multi-view display is based on the super multi-view concept proposed by Kajiki at TAO (Telecommunications Advancement Organization of Japan) in 1996. Our 3D HMD has two monocular multi-view displays (used as a visual display unit) in order to present separate pictures to the left and right eyes. The left and right images form a stereoscopic pair, so stereoscopic 3D images are observed.
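    In any stereoscopic display of this kind, the perceived depth of a virtual point is set by the horizontal parallax between the left- and right-eye images. A minimal sketch of the standard stereoscopy geometry (a textbook relation, not a formula from this paper; the parameter values are illustrative):

```python
def screen_parallax(eye_sep_mm, screen_dist_mm, point_dist_mm):
    """On-screen horizontal parallax that places a virtual point at
    point_dist_mm from the viewer, for a display plane at screen_dist_mm.
    Positive = uncrossed disparity (point behind the screen),
    negative = crossed disparity (point in front of the screen)."""
    return eye_sep_mm * (point_dist_mm - screen_dist_mm) / point_dist_mm

# Illustrative values: 65 mm interpupillary distance, screen 500 mm away.
on_screen = screen_parallax(65.0, 500.0, 500.0)  # 0.0: point lies on the screen
behind = screen_parallax(65.0, 500.0, 1000.0)    # 32.5 mm, uncrossed
in_front = screen_parallax(65.0, 500.0, 250.0)   # -65.0 mm, crossed
```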

  19. Second Life: an overview of the potential of 3-D virtual worlds in medical and health education.

    Science.gov (United States)

    Boulos, Maged N Kamel; Hetherington, Lee; Wheeler, Steve

    2007-12-01

    This hybrid review-case study introduces three-dimensional (3-D) virtual worlds and their educational potential to medical/health librarians and educators. Second Life (http://secondlife.com/) is perhaps the most popular virtual world platform in use today, with an emphasis on social interaction. We describe some medical and health education examples from Second Life, including the Second Life Medical and Consumer Health Libraries (Healthinfo Island, funded by a grant from the US National Library of Medicine) and VNEC (Virtual Neurological Education Centre, developed at the University of Plymouth, UK), which we present as two detailed 'case studies'. The pedagogical potentials of Second Life are then discussed, as well as some issues and challenges related to the use of virtual worlds. We have also compiled an up-to-date resource page (http://healthcybermap.org/sl.htm) with additional online material and pointers to support and extend this study.

  20. Implementation of 3D-virtual brachytherapy in the management of breast cancer: a description of a new method of interstitial brachytherapy

    International Nuclear Information System (INIS)

    Vicini, Frank A.; Jaffray, David A.; Horwitz, Eric M.; Edmundson, Gregory K.; DeBiose, David A.; Kini, Vijay R.; Martinez, Alvaro A.

    1998-01-01

    preoperatively. Results: Intraoperative ultrasound was used to check the real-time position of the afterloading needles in reference to the chest wall and the posterior border of the target volume. No adjustment of needles was required in any of the 11 patients. Assessment of target volume coverage between the virtual implant and the actual CT image of the implant showed excellent agreement. In each case, all target volume boundaries specified by the physician were adequately covered. The total number of implant planes, intertemplate separation, and template orientation were identical between the virtual and real implants. Conclusion: We conclude that 3D virtual brachytherapy may offer an improved technique for accurately performing interstitial implants of the breast with a closed lumpectomy cavity in selected patients. Although preliminary results show excellent coverage of the desired target volume, additional patients will be required to establish the reproducibility of this technique and its practical limitations.

  1. Virtual animation of victim-specific 3D models obtained from CT scans for forensic reconstructions

    DEFF Research Database (Denmark)

    Villa, C; Olsen, K B; Hansen, S H

    2017-01-01

    Post-mortem CT scanning (PMCT) was introduced at several forensic medical institutions many years ago and has proved to be a useful tool. 3D models of bones, skin, internal organs and bullet paths can rapidly be generated using post-processing software. These 3D models reflect the individual...

  2. The use of virtual reality and intelligent database systems for procedure planning, visualisation, and real-time component tracking in remote handling operations

    International Nuclear Information System (INIS)

    Robbins, Edward; Sanders, Stephen; Williams, Adrian; Allan, Peter

    2009-01-01

    The organisation of remote handling (RH) operations in fusion environments is increasingly critical as the number of tasks, components and tooling that RH operations teams must deal with inexorably rises. During the recent JET EP1 RH shutdown, the existing virtual reality (VR) and procedural database systems proved essential for the visualisation and tracking of operations, particularly given the increasing complexity of remote tasks. A new task planning system for RH operations is in development and is expected to be ready for use during the next major shutdown, planned for 2009. The system will make use of information available from the remote operations procedures, the RH equipment human-machine interfaces, the on-line RH equipment control systems and the VR system to establish a complete database for the location of plant items and RH equipment as RH operations progress. It is intended that the system be used during both the preparation and the implementation of shutdowns. In the preparation phase, the system can be used to validate procedures and overall logistics by allowing an operator to step through each operation and to use the VR system to visualise the location and status of all components, manipulators and RH tools. During task development, the RH operations engineers can plan and visualise the movement of components and tooling to examine handling concepts and establish storage requirements. During the implementation of operations, the daily work schedule information will be integrated with the RH operations procedure tracking records to enable the VR system to provide a visual representation of the status of remote operations in real time. Monitoring the usage history of items will allow estimates of radiation dosage and contaminant exposure to be made. This paper describes the overall aims, structure and use of the system, discusses its application to JET and also considers potential future developments.
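    A component-location tracking database of the sort described reduces, at its core, to one record per plant item with a movement history keyed to procedure steps. A toy sketch of such a record (the class, field names and example identifiers are hypothetical, not JET's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class PlantItem:
    """Hypothetical tracking record for one component or RH tool."""
    item_id: str
    location: str
    history: list = field(default_factory=list)  # (procedure_step, from, to)

    def move(self, new_location, procedure_step):
        """Record a relocation performed at a given procedure step."""
        self.history.append((procedure_step, self.location, new_location))
        self.location = new_location

# Example: a component moved through two operation steps; a VR system could
# replay `history` to visualise operation status over time.
item = PlantItem("COMPONENT-07", "storage-rack-A")
item.move("transfer-flask", "step-3.1")
item.move("in-vessel", "step-3.4")
```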

  3. A Method for Teaching the Modeling of Manikins Suitable for Third-Person 3-D Virtual Worlds and Games

    Directory of Open Access Journals (Sweden)

    Nick V. Flor

    2012-08-01

    Virtual Worlds have the potential to transform the way people learn, work, and play. With the emerging fields of service science and design science, professors and students at universities are in a unique position to lead the research and development of innovative and value-adding virtual worlds. However, a key barrier in the development of virtual worlds—especially for business, technical, and non-artistic students—is the ability to model human figures in 3-D for use as avatars and automated characters in virtual worlds. There are no articles in either research or teaching journals which describe methods that non-artists can use to create 3-D human figures. This paper presents a repeatable and flexible method I have taught successfully to both artists and business students, which allows them to quickly model human-like figures (manikins that are sufficient for prototype purposes and that allows students and researchers alike to explore the development of new kinds of virtual worlds.

  4. Virtual animation of victim-specific 3D models obtained from CT scans for forensic reconstructions: Living and dead subjects.

    Science.gov (United States)

    Villa, C; Olsen, K B; Hansen, S H

    2017-09-01

    Post-mortem CT scanning (PMCT) was introduced at several forensic medical institutions many years ago and has proved to be a useful tool. 3D models of bones, skin, internal organs and bullet paths can rapidly be generated using post-processing software. These 3D models reflect the individual's physiognomy and can be used to create whole-body 3D virtual animations. In this way, virtual reconstructions of the probable ante-mortem postures of victims can be constructed and help to clarify the sequence of events. This procedure is demonstrated in two victims of gunshot injuries. Case #1 was a man showing three perforating gunshot wounds, who died of the injuries sustained in the incident. Whole-body PMCT was performed, and 3D reconstructions of bones, relevant internal organs and bullet paths were generated. Using 3ds Max software and a human anatomy 3D model, a virtual animated body was built and probable ante-mortem postures were visualized. Case #2 was a man who survived the incident, presenting three perforating gunshot wounds: one in the left arm and two in the thorax. Only CT scans of the thorax, abdomen and the injured arm were provided by the hospital. Therefore, a whole-body 3D model reflecting the anatomical proportions of the patient was made by combining the actual bones of the victim with those obtained from the human anatomy 3D model. The resulting 3D model was used for the animation process. Several probable postures were also visualized in this case. It was shown that in Case #1 the lesions and the bullet path were not consistent with an upright standing position; instead, the victim was slightly bent forward, i.e. he was sitting or running when he was shot. In Case #2, one of the bullets could have passed through the arm and continued into the thorax. In conclusion, specialized 3D modelling and animation techniques allow for the reconstruction of ante-mortem postures based on both PMCT and clinical CT. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Using virtual reality technology and hand tracking technology to create software for training surgical skills in 3D game

    Science.gov (United States)

    Zakirova, A. A.; Ganiev, B. A.; Mullin, R. I.

    2015-11-01

    The lack of accessible and affordable ways to train surgical skills is one of the main problems in medical education. Existing simulation training devices are not designed to teach students and are often unavailable owing to the high cost of the equipment. Using modern technologies such as virtual reality and hand-tracking, we want to create an innovative method for learning operative techniques in a 3D game format, which can make the educational process engaging and effective. Creating a virtual simulator in 3D format will solve several conceptual problems at once: the opportunity to improve practical skills without time limits and without risk to the patient, a highly realistic operating-room environment and anatomical body structures, the use of game mechanics to ease the perception of information and accelerate the memorization of methods, and the broad accessibility of the program.

  6. 4Cin: A computational pipeline for 3D genome modeling and virtual Hi-C analyses from 4C data.

    Directory of Open Access Journals (Sweden)

    Ibai Irastorza-Azcarate

    2018-03-01

    The use of 3C-based methods has revealed the importance of the 3D organization of chromatin for key aspects of genome biology. However, the various caveats of the 3C technique variants have limited their scope and the range of scientific fields that could benefit from these approaches. To address these limitations, we present 4Cin, a method to generate 3D models and derive virtual Hi-C (vHi-C) heat maps of genomic loci based on 4C-seq or any kind of 4C-seq-like data, such as those derived from NG Capture-C. 3D genome organization is determined by integrative consideration of the spatial distances derived from as few as four 4C-seq experiments. The 3D models obtained from 4C-seq data, together with their associated vHi-C maps, allow the inference of all chromosomal contacts within a given genomic region, facilitating the identification of Topologically Associating Domain (TAD) boundaries. Thus, 4Cin offers a much cheaper, more accessible and versatile alternative to other available techniques while providing comprehensive 3D topological profiling. By studying TAD modifications in genomic structural variants associated with disease phenotypes and performing quantitative cross-species evolutionary comparisons of 3D chromatin structures, we demonstrate the broad potential and novel range of applications of our method.
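    The core step of deriving a virtual Hi-C map from a fitted 3D model can be sketched by converting pairwise bead distances into contact frequencies. A toy illustration assuming a simple inverse-distance heuristic (a common modeling convention, not 4Cin's actual transform):

```python
import numpy as np

def virtual_hic(coords, alpha=1.0):
    """Turn 3D bead coordinates of a modeled locus into a virtual Hi-C map,
    taking contact frequency as 1 / distance**alpha between beads."""
    coords = np.asarray(coords, float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    with np.errstate(divide="ignore"):
        c = 1.0 / d ** alpha
    np.fill_diagonal(c, 0.0)  # self-contacts are undefined; zero them out
    return c

# Toy 4-bead model: the chain folds back so beads 0 and 3 sit close together,
# producing a strong off-diagonal contact, as a TAD-forming loop would.
beads = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (0.5, 0.5, 0)]
vhic = virtual_hic(beads)
```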

  7. Using a 3D virtual supermarket to measure food purchase behavior: a validation study.