WorldWideScience

Sample records for enhanced reality visualization

  1. Visual Enhancement for Sports Entertainment by Vision-Based Augmented Reality

    OpenAIRE

    Uematsu, Yuko; Saito, Hideo

    2008-01-01

    This paper presents visually enhanced sports entertainment applications: an AR Baseball Presentation System and an Interactive AR Bowling System. We utilize vision-based augmented reality to create an immersive feeling. The first application is an observation system for a virtual baseball game on a tabletop. 3D virtual players play a game on a real baseball field model, so that users can observe the game from their favorite viewpoints through a handheld monitor with a web camera....

  2. Visual Enhancement for Sports Entertainment by Vision-Based Augmented Reality

    Directory of Open Access Journals (Sweden)

    Hideo Saito

    2008-09-01

    This paper presents visually enhanced sports entertainment applications: an AR Baseball Presentation System and an Interactive AR Bowling System. We utilize vision-based augmented reality to create an immersive feeling. The first application is an observation system for a virtual baseball game on a tabletop. 3D virtual players play a game on a real baseball field model, so that users can observe the game from their favorite viewpoints through a handheld monitor with a web camera. The second application is a bowling system that allows users to roll a real ball down a real bowling lane model on the tabletop and knock down virtual pins. The users watch the virtual pins through the monitor. The lane and the ball are also tracked by vision-based tracking. In these applications, we utilize multiple 2D markers distributed at arbitrary positions and directions. Even though the geometrical relationship among the markers is unknown, we can track the camera over a very wide area.
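
    The system above depends on estimating the camera pose from detected 2D markers. The sketch below is only a minimal illustration of that step for a single square marker with OpenCV's solvePnP, not the authors' multi-marker method; the marker size, calibration inputs, and prior corner detection are assumptions.

```python
import numpy as np
import cv2

MARKER_SIZE = 0.08  # assumed marker edge length in metres

# Corner coordinates of the square marker in its own (planar) coordinate frame.
OBJECT_POINTS = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
], dtype=np.float64)

def camera_pose_from_marker(corners_px, camera_matrix, dist_coeffs):
    """corners_px: 4x2 detected marker corners in pixels, in the same order as OBJECT_POINTS."""
    ok, rvec, tvec = cv2.solvePnP(
        OBJECT_POINTS, np.asarray(corners_px, dtype=np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)  # rotation that maps marker coordinates into the camera frame
    return R, tvec              # invert (R.T, -R.T @ tvec) to get the camera pose in the marker frame
```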

  3. Visualizing Compound Rotations with Virtual Reality

    Science.gov (United States)

    Flanders, Megan; Kavanagh, Richard C.

    2013-01-01

    Mental rotations are among the most difficult of all spatial tasks to perform, and even those with high levels of spatial ability can struggle to visualize the result of compound rotations. This pilot study investigates the use of the virtual reality-based Rotation Tool, created using the Virtual Reality Modeling Language (VRML) together with…

  4. An automatic registration method for frameless stereotaxy, image guided surgery, and enhanced reality visualization

    International Nuclear Information System (INIS)

    Grimson, W.E.L.; Lozano-Perez, T.; White, S.J.; Wells, W.M. III; Kikinis, R.

    1996-01-01

    There is a need for frameless guidance systems to help surgeons plan the exact location for incisions, to define the margins of tumors, and to precisely identify locations of neighboring critical structures. The authors have developed an automatic technique for registering clinical data, such as segmented magnetic resonance imaging (MRI) or computed tomography (CT) reconstructions, with any view of the patient on the operating table. They demonstrate the method on the specific example of neurosurgery. The method enables a visual mix of live video of the patient and the segmented three-dimensional (3-D) MRI or CT model. This supports enhanced reality techniques for planning and guiding neurosurgical procedures and allows surgeons to interactively view extracranial or intracranial structures nonintrusively. Extensions of the method include image guided biopsies, focused therapeutic procedures, and clinical studies involving change detection over time sequences of images.
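
    The registration step described above aligns a segmented 3-D model with the patient's position on the table. As a rough illustration only (the paper's actual surface-based registration pipeline is more elaborate), the sketch below computes a rigid rotation and translation between two paired point sets with the Kabsch/SVD method.

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rotation R and translation t so that R @ source_i + t ~ target_i.

    source, target: Nx3 arrays of corresponding points.
    """
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c                          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t
```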

  5. Can walking motions improve visually induced rotational self-motion illusions in virtual reality?

    Science.gov (United States)

    Riecke, Bernhard E; Freiberg, Jacob B; Grechkin, Timofey Y

    2015-02-04

    Illusions of self-motion (vection) can provide compelling sensations of moving through virtual environments without the need for complex motion simulators or large tracked physical walking spaces. Here we explore the interaction between biomechanical cues (stepping along a rotating circular treadmill) and visual cues (viewing simulated self-rotation) for providing stationary users a compelling sensation of rotational self-motion (circular vection). When tested individually, biomechanical and visual cues were similarly effective in eliciting self-motion illusions. However, in combination they yielded significantly more intense self-motion illusions. These findings provide the first compelling evidence that walking motions can be used to significantly enhance visually induced rotational self-motion perception in virtual environments (and vice versa) without having to provide for physical self-motion or motion platforms. This is noteworthy, as linear treadmills have been found to actually impair visually induced translational self-motion perception (Ash, Palmisano, Apthorp, & Allison, 2013). Given the predominant focus on linear walking interfaces for virtual-reality locomotion, our findings suggest that investigating circular and curvilinear walking interfaces offers a promising direction for future research and development and can help to enhance self-motion illusions, presence and immersion in virtual-reality systems. © 2015 ARVO.

  6. Augmented Reality Prototype for Visualizing Large Sensors’ Datasets

    Directory of Open Access Journals (Sweden)

    Folorunso Olufemi A.

    2011-04-01

    This paper addresses the development of an augmented reality (AR) based scientific visualization system prototype that supports identification, localisation, and 3D visualisation of oil-leakage sensor datasets. Sensors generate significant amounts of multivariate data during normal and leak situations, which makes data exploration and visualisation daunting tasks. Therefore, a model to manage such data and provide the computational support needed for effective exploration is developed in this paper. A challenge of this approach is to reduce data inefficiency. The paper presents a model for computing the information gain for each data attribute and determining a lead attribute. The computed lead attribute is then used for the development of an AR-based scientific visualization interface which automatically identifies, localises and visualizes all data relevant to a particular selected region of interest (ROI) on the network. The necessary architectural system support and the interface requirements for such visualizations are also presented.
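
    The abstract mentions computing the information gain of each attribute to determine a lead attribute. A small, hedged Python sketch of that generic computation follows; the discretisation of attribute values and the class labels (e.g. leak vs. normal) are assumptions, not details taken from the paper.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attribute, labels):
    """rows: list of dicts of discretised attribute values; labels: class label per row."""
    base = entropy(labels)
    groups = {}
    for row, label in zip(rows, labels):
        groups.setdefault(row[attribute], []).append(label)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return base - remainder

def lead_attribute(rows, labels):
    """The attribute with the highest information gain drives the AR visualization."""
    return max(rows[0].keys(), key=lambda a: information_gain(rows, a, labels))
```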

  7. Students and Teachers as Developers of Visual Learning Designs with Augmented Reality for Visual Arts Education

    DEFF Research Database (Denmark)

    Buhl, Mie

    2017-01-01

    Abstract This paper reports on a project in which communication and digital media students collaborated with visual arts teacher students and their teacher trainer to develop visual digital designs for learning that involved Augmented Reality (AR) technology. The project exemplified a design … upon which to discuss the potential for reengineering the traditional role of the teacher/learning designer as the only supplier and the students as the receivers of digital learning designs in higher education. The discussion applies the actor-network theory and socio-material perspectives … on education in order to enhance the meta-perspective of traditional teacher and student roles.

  8. COMBINING INDEPENDENT VISUALIZATION AND TRACKING SYSTEMS FOR AUGMENTED REALITY

    Directory of Open Access Journals (Sweden)

    P. Hübner

    2018-05-01

    The basic requirement for the successful deployment of a mobile augmented reality application is a reliable tracking system with high accuracy. Recently, a helmet-based inside-out tracking system which meets this demand has been proposed for self-localization in buildings. To realize an augmented reality application based on this tracking system, a display has to be added for visualization purposes. Therefore, the relative pose of this visualization platform with respect to the helmet has to be tracked. In the case of hand-held visualization platforms like smartphones or tablets, this can be achieved by means of image-based tracking methods like marker-based or model-based tracking. In this paper, we present two marker-based methods for tracking the relative pose between the helmet-based tracking system and a tablet-based visualization system. Both methods were implemented and comparatively evaluated in terms of tracking accuracy. Our results show that mobile inside-out tracking systems without integrated displays can easily be supplemented with a hand-held tablet as visualization device for augmented reality purposes.

  9. Immersive Earth Science: Data Visualization in Virtual Reality

    Science.gov (United States)

    Skolnik, S.; Ramirez-Linan, R.

    2017-12-01

    Utilizing next-generation technology, Navteca's exploration of 3D and volumetric temporal data in Virtual Reality (VR) takes advantage of immersive user experiences where stakeholders are literally inside the data. No longer restricted by the edges of a screen, VR provides an innovative way of viewing spatially distributed 2D and 3D data that leverages a 360° field of view and positional-tracking input, allowing users to see and experience data differently. These concepts are relevant to many sectors, industries, and fields of study, as real-time collaboration in VR can enhance understanding and support missions with VR visualizations that display temporally-aware 3D, meteorological, and other volumetric datasets. The ability to view data that is traditionally "difficult" to visualize, such as subsurface features or air columns, is a particularly compelling use of the technology. Various development iterations have resulted in Navteca's proof of concept that imports and renders volumetric point-cloud data in the virtual reality environment by interfacing PC-based VR hardware to a back-end server and popular GIS software. The integration of the geo-located data in VR and the subsequent display of changeable basemaps, overlaid datasets, and the ability to zoom, navigate, and select specific areas show the potential for immersive VR to revolutionize the way Earth data is viewed, analyzed, and communicated.

  10. A framework for breast cancer visualization using augmented reality x-ray vision technique in mobile technology

    Science.gov (United States)

    Rahman, Hameedur; Arshad, Haslina; Mahmud, Rozi; Mahayuddin, Zainal Rasyid

    2017-10-01

    The number of breast cancer patients who require breast biopsy has increased over the past years. Augmented reality guided core biopsy of the breast has become the method of choice for researchers. However, this cancer visualization has been limited to superimposing the 3D imaging data only. In this paper, we introduce an augmented reality visualization framework that enables breast cancer biopsy image guidance by using an X-ray vision technique on a mobile display. This framework consists of 4 phases: it initially acquires the images from CT/MRI and processes the medical images into 3D slices; secondly, it refines these 3D grayscale slices into a 3D breast tumor model using a 3D model reconstruction technique. Further, in visualization processing, this virtual 3D breast tumor model is enhanced using an X-ray vision technique to see through the skin of the phantom, and the final composition is displayed on a handheld device to optimize the accuracy of the visualization in six degrees of freedom. The framework is perceived as an improved visualization experience because the augmented reality X-ray vision allows direct understanding of the breast tumor beyond the visible surface and direct guidance towards accurate biopsy targets.

  11. Subjective visual vertical assessment with mobile virtual reality system

    Directory of Open Access Journals (Sweden)

    Ingrida Ulozienė

    Background and objective: The subjective visual vertical (SVV) is a measure of a subject's perceived verticality, and a sensitive test of vestibular dysfunction. Despite this, and consequent upon technical and logistical limitations, SVV has not entered mainstream clinical practice. The aim of the study was to develop a mobile virtual reality based system for the SVV test, evaluate the suitability of different controllers and assess the system's usability in practical settings. Materials and methods: In this study, we describe a novel virtual reality based system that has been developed to test SVV using integrated software and hardware, and report normative values across a healthy population. Participants wore a mobile virtual reality headset in order to observe a 3D stimulus presented across separate conditions – static, dynamic and an immersive real-world (“boat in the sea”) SVV test. The virtual reality environment was controlled by the tester using Bluetooth-connected controllers. Participants controlled the movement of a vertical arrow using either a gesture-control armband or a general-purpose gamepad, to indicate perceived verticality. We wanted to compare the two methods of object control in the system, determine normal values and compare them with literature data, evaluate the developed system with the help of the System Usability Scale (SUS) questionnaire, and evaluate possible virtually induced dizziness with the help of a subjective visual analog scale. Results: There were no statistically significant differences in SVV values during static, dynamic and virtual reality stimulus conditions obtained using the two different controllers, and the results are comparable to those previously reported in the literature using alternative methodologies. The SUS scores for the system were high, with a median of 82.5 for the Myo controller and of 95.0 for the Gamepad controller, representing a statistically significant difference between the two

  12. Visual field examination method using virtual reality glasses compared with the Humphrey perimeter

    Directory of Open Access Journals (Sweden)

    Tsapakis S

    2017-08-01

    Stylianos Tsapakis, Dimitrios Papaconstantinou, Andreas Diagourtas, Konstantinos Droutsas, Konstantinos Andreanos, Marilita M Moschos, Dimitrios Brouzas, 1st Department of Ophthalmology, National and Kapodistrian University of Athens, Athens, Greece. Purpose: To present a visual field examination method using virtual reality glasses and evaluate the reliability of the method by comparing the results with those of the Humphrey perimeter. Materials and methods: Virtual reality glasses, a smartphone with a 6 inch display, and software that implements a fast-threshold 3 dB step staircase algorithm for the central 24° of visual field (52 points) were used to test 20 eyes of 10 patients, who were tested in a random and consecutive order as they appeared in our glaucoma department. The results were compared with those obtained from the same patients using the Humphrey perimeter. Results: A high correlation coefficient (r=0.808, P<0.0001) was found between the virtual reality visual field test and the Humphrey perimeter visual field. Conclusion: Visual field examination results using virtual reality glasses have a high correlation with the Humphrey perimeter, allowing the method to be suitable for probable clinical use. Keywords: visual fields, virtual reality glasses, perimetry, visual fields software, smartphone
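
    The method above is built around a fast-threshold 3 dB step staircase. The sketch below is only a generic illustration of such a staircase for one test location; the app's exact rules, starting level and stopping criterion are not given in the abstract, so those parameters are assumed, and present_stimulus stands for whatever routine shows the target and records the patient's response.

```python
def staircase_threshold(present_stimulus, start_db=25, step_db=3,
                        min_db=0, max_db=40, max_reversals=2, max_trials=30):
    """Estimate the sensitivity threshold at one visual-field location.

    present_stimulus(db) -> True if the subject reports seeing the target.
    Higher dB means a more attenuated (dimmer) stimulus, as in perimetry.
    """
    db = start_db
    dimmest_seen = None
    prev_response = None
    reversals = 0
    for _ in range(max_trials):
        seen = present_stimulus(db)
        if seen:
            dimmest_seen = db if dimmest_seen is None else max(dimmest_seen, db)
        if prev_response is not None and seen != prev_response:
            reversals += 1
            if reversals >= max_reversals:
                break
        prev_response = seen
        # Dim the target after a "seen" response, brighten it after a miss.
        db = min(max_db, db + step_db) if seen else max(min_db, db - step_db)
    return dimmest_seen
```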

  13. Integrated Data Visualization and Virtual Reality Tool

    Science.gov (United States)

    Dryer, David A.

    1998-01-01

    The Integrated Data Visualization and Virtual Reality Tool (IDVVRT) Phase II effort was for the design and development of an innovative Data Visualization Environment Tool (DVET) for NASA engineers and scientists, enabling them to visualize complex multidimensional and multivariate data in a virtual environment. The objectives of the project were to: (1) demonstrate the transfer and manipulation of standard engineering data in a virtual world; (2) demonstrate the effects of design and changes using finite element analysis tools; and (3) determine the training and engineering design and analysis effectiveness of the visualization system.

  14. [Parallel virtual reality visualization of extreme large medical datasets].

    Science.gov (United States)

    Tang, Min

    2010-04-01

    On the basis of a brief description of grid computing, the essence and critical techniques of parallel visualization of extremely large medical datasets are discussed in connection with the Intranet and commonly configured computers of hospitals. Several kernel techniques are introduced, including the hardware structure, software framework, load balancing and virtual reality visualization. The Maximum Intensity Projection algorithm is parallelized using a common PC cluster. In the virtual reality world, three-dimensional models can be rotated, zoomed, translated and cut interactively and conveniently through the control panel built on the Virtual Reality Modeling Language (VRML). Experimental results demonstrate that this method provides promising, real-time results and can play the role of a good assistant in making clinical diagnoses.

  15. Visualizing Cumulus Clouds in Virtual Reality

    NARCIS (Netherlands)

    Griffith, E.J.

    2010-01-01

    This thesis focuses on interactively visualizing, and ultimately simulating, cumulus clouds both in virtual reality (VR) and with a standard desktop computer. The cumulus clouds in question are found in data sets generated by Large-Eddy Simulations (LES), which are used to simulate a small section

  16. Sensorimotor enhancement with a mixed reality system for balance and mobility rehabilitation.

    Science.gov (United States)

    Fung, Joyce; Perez, Claire F

    2011-01-01

    We have developed a mixed reality system incorporating virtual reality (VR), surface perturbations and light touch for gait rehabilitation. Haptic touch has emerged as a novel and efficient technique to improve postural control and dynamic stability. Our system combines visual display with the manipulation of physical environments and the addition of haptic feedback to enhance balance and mobility post stroke. A research study involving 9 participants with stroke and 9 age-matched healthy individuals shows that the haptic cue provided while walking is an effective means of improving gait stability in people post stroke, especially during challenging environmental conditions such as downslope walking.

  17. Augmented Reality as a Visualizing facilitator in Nursing Education

    DEFF Research Database (Denmark)

    Rahn, Annette; Kjærgaard, Hanne Wacher

    2014-01-01

    Title: Augmented Reality as a visualizing facilitator in nursing education. Background: Understanding the workings of the biological human body is as complex as the body itself, and because of their complexity, the phenomena of respiration and lung anatomy pose a special problem for nursing students' understanding within anatomy and physiology. Aim: Against this background, the current project set out to investigate how and to what extent the application of augmented reality (AR) could help students gain a better understanding through an increased focus on contextualized visualization. The overall aim...

  18. Are Spatial Visualization Abilities Relevant to Virtual Reality?

    Science.gov (United States)

    Chen, Chwen Jen

    2006-01-01

    This study aims to investigate the effects of virtual reality (VR)-based learning environment on learners of different spatial visualization abilities. The findings of the aptitude-by-treatment interaction study have shown that learners benefit most from the Guided VR mode, irrespective of their spatial visualization abilities. This indicates that…

  19. Visualization framework for CAVE virtual reality systems

    OpenAIRE

    Kageyama, Akira; Tomiyama, Asako

    2016-01-01

    We have developed a software framework for scientific visualization in immersive-type, room-sized virtual reality (VR) systems, or Cave automatic virtual environment (CAVEs). This program, called Multiverse, allows users to select and invoke visualization programs without leaving CAVE’s VR space. Multiverse is a kind of immersive “desktop environment” for users, with a three-dimensional graphical user interface. For application developers, Multiverse is a software framework with useful class ...

  20. Doppler Lidar Vector Retrievals and Atmospheric Data Visualization in Mixed/Augmented Reality

    Science.gov (United States)

    Cherukuru, Nihanth Wagmi

    Environmental remote sensing has seen rapid growth in recent years, and Doppler wind lidars have gained popularity primarily due to their non-intrusive, high spatial and temporal measurement capabilities. While early lidar applications relied on radial velocity measurements alone, most practical applications in wind farm control and short-term wind prediction require knowledge of the vector wind field. Over the past couple of years, multiple works on lidars have explored three primary methods of retrieving wind vectors, viz., using a homogeneous wind-field assumption, computationally expensive variational methods, and the use of multiple Doppler lidars. Building on prior research, the current three-part study first demonstrates the capabilities of single and dual Doppler lidar retrievals in capturing downslope windstorm-type flows occurring at Arizona's Barringer Meteor Crater as a part of the METCRAX II field experiment. Next, to address the need for a reliable and computationally efficient vector retrieval for adaptive wind farm control applications, a novel 2D vector retrieval based on a variational formulation was developed, applied to lidar scans from an offshore wind farm, and validated with data from a cup and vane anemometer installed on a nearby research platform. Finally, a novel data visualization technique using Mixed Reality (MR)/Augmented Reality (AR) technology is presented to visualize data from atmospheric sensors. MR is an environment in which the user's visual perception of the real world is enhanced with live, interactive, computer-generated sensory input (in this case, data from atmospheric sensors like Doppler lidars). A methodology using modern game development platforms is presented and demonstrated with lidar-retrieved wind fields. In the current study, the possibility of using this technology to visualize data from atmospheric sensors in mixed reality is explored and demonstrated with lidar retrieved wind fields as well as
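
    Of the three retrieval approaches named above, the simplest to illustrate is the homogeneous wind-field assumption. The hedged sketch below fits a single horizontal wind vector to radial velocities observed at several azimuths by least squares; it is not the variational retrieval developed in the dissertation, and neglecting the vertical wind is an added simplification.

```python
import numpy as np

def horizontal_wind_from_radials(azimuth_deg, radial_velocity, elevation_deg=0.0):
    """Least-squares (u, v) assuming one homogeneous horizontal wind over the scan.

    Model: v_r = (u * sin(az) + v * cos(az)) * cos(el), vertical wind neglected.
    """
    az = np.radians(np.asarray(azimuth_deg, dtype=float))
    el = np.radians(elevation_deg)
    design = np.column_stack([np.sin(az), np.cos(az)]) * np.cos(el)
    (u, v), *_ = np.linalg.lstsq(design, np.asarray(radial_velocity, dtype=float),
                                 rcond=None)
    return u, v  # eastward and northward wind components (m/s)
```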

  1. An augmented-reality edge enhancement application for Google Glass.

    Science.gov (United States)

    Hwang, Alex D; Peli, Eli

    2014-08-01

    Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass which overlays enhanced edge information on the wearer's real-world view, to provide contrast-improved central vision to Glass wearers. The enhanced central vision can be naturally integrated with scanning. Google Glass's camera lens distortions were corrected using image warping. Because the camera and virtual display are horizontally separated by 16 mm, and the camera aiming and virtual display projection angle are off by 10°, the warped camera image had to go through a series of three-dimensional transformations to minimize parallax errors before the final projection to the Glass's see-through virtual display. All image processing was implemented to achieve near real-time performance. The impact of the contrast enhancements was measured for three normal-vision subjects, with and without a diffuser film to simulate vision loss. For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancements with a diffuser film. The performance boost is limited by the Glass camera's performance, which the authors assume accounts for why performance improvements were observed only in the diffuser-film condition (simulating low vision). Improvements were measured with simulated visual impairments. With the benefit of see-through augmented reality edge enhancement, a natural visual scanning process is possible, suggesting that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration.
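
    As a rough illustration of the kind of edge-enhancement overlay described above (not the authors' Glass pipeline, and omitting the distortion correction and parallax transformations), the following OpenCV sketch extracts Canny edges from a camera frame and paints them over the image as a high-contrast layer; the thresholds are assumptions.

```python
import cv2
import numpy as np

def edge_enhanced_view(frame_bgr, low_thresh=50, high_thresh=150,
                       edge_color=(255, 255, 255)):
    """Overlay bright, thickened edges on a camera frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low_thresh, high_thresh)           # binary edge map
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))       # thicken edges for visibility
    enhanced = frame_bgr.copy()
    enhanced[edges > 0] = edge_color                           # paint edges over the scene
    return enhanced
```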

  2. Modulation of visually evoked postural responses by contextual visual, haptic and auditory information: a 'virtual reality check'.

    Science.gov (United States)

    Meyer, Georg F; Shao, Fei; White, Mark D; Hopkins, Carl; Robotham, Antony J

    2013-01-01

    Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR.

  3. Visually representing reality: aesthetics and accessibility aspects

    Science.gov (United States)

    van Nes, Floris L.

    2009-02-01

    This paper gives an overview of the visual representation of reality with three imaging technologies: painting, photography and electronic imaging. The contribution of the important image aspects, called dimensions hereafter, such as color, fine detail and total image size, to the degree of reality and aesthetic value of the rendered image are described for each of these technologies. Whereas quite a few of these dimensions - or approximations, or even only suggestions thereof - were already present in prehistoric paintings, apparent motion and true stereoscopic vision only recently were added - unfortunately also introducing accessibility and image safety issues. Efforts are made to reduce the incidence of undesirable biomedical effects such as photosensitive seizures (PSS), visually induced motion sickness (VIMS), and visual fatigue from stereoscopic images (VFSI) by international standardization of the image parameters to be avoided by image providers and display manufacturers. The history of this type of standardization, from an International Workshop Agreement to a strategy for accomplishing effective international standardization by ISO, is treated at some length. One of the difficulties to be mastered in this process is the reconciliation of the, sometimes opposing, interests of vulnerable persons, thrill-seeking viewers, creative video designers and the game industry.

  4. Virtual Reality: A Tool for Cartographic Visualization | Quaye-Ballard ...

    African Journals Online (AJOL)

    Visualization methods in the analysis of geographical datasets are based on static models, which restrict the visual analysis capabilities. The use of virtual reality, which provides a three-dimensional (3D) perspective and gives the user the ability to change viewpoints and models dynamically, overcomes the static limitations of ...

  5. Virtual reality visualization of accelerator magnets

    International Nuclear Information System (INIS)

    Huang, M.; Papka, M.; DeFanti, T.; Kettunen, L.

    1995-01-01

    The authors describe the use of the CAVE virtual reality visualization environment as an aid to the design of accelerator magnets. They have modeled an elliptical multipole wiggler magnet being designed for use at the Advanced Photon Source at Argonne National Laboratory. The CAVE environment allows the authors to explore and interact with the 3-D visualization of the magnet. Capabilities include changing the number of magnet periods displayed, changing the icons used for displaying the magnetic field, and changing the current in the electromagnet and observing the effect on the magnetic field and the particle beam trajectory through the field.

  6. Modulation of visually evoked postural responses by contextual visual, haptic and auditory information: a 'virtual reality check'.

    Directory of Open Access Journals (Sweden)

    Georg F Meyer

    Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR.

  7. Interactive Scientific Visualization in 3D Virtual Reality Model

    Directory of Open Access Journals (Sweden)

    Filip Popovski

    2016-11-01

    Scientific visualization in virtual reality technology is a graphical representation of a virtual environment in the form of images or animation that can be displayed with various devices, such as a Head Mounted Display (HMD) or monitors that can show a three-dimensional world. Interaction in real time is a desirable capability for scientific visualization and virtual reality, in which we are immersed, and it makes the research process easier. In this scientific paper, the interactions between the user and objects in the virtual environment occur in real time, which gives a sense of reality to the user. The Quest3D VR software package is used, and the movement of the user through the virtual environment, the impossibility of walking through solid objects, and methods for grabbing and displacing objects are programmed, so that all interactions between them are possible. At the end, critical analyses of all of these techniques were made on various computer systems, and excellent results were obtained.

  8. Modulation of Visually Evoked Postural Responses by Contextual Visual, Haptic and Auditory Information: A ‘Virtual Reality Check’

    Science.gov (United States)

    Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.

    2013-01-01

    Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760

  9. Dynamic MR mammography: multidimensional visualization of contrast enhancement in virtual reality

    International Nuclear Information System (INIS)

    Englmeier, K.-H.; Siebert, M.; Griebel, J.; Lucht, R.; Brix, G.; Knopp, M.

    2000-01-01

    Background: The purpose of this study was the development of a method for fast and efficient analysis of dynamic MR images of the female breast. The image data sets were acquired with a saturation-recovery turbo-FLASH sequence which enables the detection of the kinetics of the contrast agent concentration in the whole breast with a high temporal and spatial resolution. In addition, a morphologic 3D-FLASH data set was acquired. Methods: The dynamic image datasets were analyzed by a pharmacokinetic model which enables the representation of the relevant functional tissue information by two parameters. In order to display morphologic and functional tissue information simultaneously, we developed a multidimensional visualization system, which provides a practical and intuitive human-computer interface in virtual reality. Discussion: The developed system allows the fast and efficient analysis of dynamic MR data sets. An important clinical application is the localization and definition of multiple lesions of the female breast.
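
    The analysis condenses each voxel's enhancement curve into two pharmacokinetic parameters. The exact model is not stated in the abstract, so the sketch below only illustrates the general idea with an assumed two-parameter saturating-exponential enhancement curve fitted per voxel with SciPy; the parameter names are placeholders, not the study's definitions.

```python
import numpy as np
from scipy.optimize import curve_fit

def enhancement_model(t, amplitude, rate):
    """Assumed two-parameter enhancement curve: amplitude * (1 - exp(-rate * t))."""
    return amplitude * (1.0 - np.exp(-rate * t))

def fit_voxel_curve(times_s, relative_enhancement):
    """Fit the two parameters that would be mapped to colour/opacity in the 3D display."""
    (amplitude, rate), _cov = curve_fit(enhancement_model,
                                        np.asarray(times_s, dtype=float),
                                        np.asarray(relative_enhancement, dtype=float),
                                        p0=(1.0, 0.01), maxfev=5000)
    return amplitude, rate
```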

  10. Mobile Virtual Reality : A Solution for Big Data Visualization

    Science.gov (United States)

    Marshall, E.; Seichter, N. D.; D'sa, A.; Werner, L. A.; Yuen, D. A.

    2015-12-01

    Pursuits in the geological sciences and other branches of quantitative science often require data visualization frameworks that are in continual need of improvement and new ideas. Virtual reality is a medium of visualization with large audiences that was originally designed for gaming purposes. Virtual reality can be delivered in CAVE-like environments, but these are unwieldy and expensive to maintain. Recent efforts by major companies such as Facebook have focused more on the large consumer market; the Oculus is the first of this kind of mobile device. The Unity engine makes it possible to convert data files into a mesh of isosurfaces and render them in 3D. A user is immersed inside the virtual reality and is able to move within and around the data using arrow keys and other steering devices, similar to those employed with an Xbox. With the introduction of products like the Oculus Rift and HoloLens, combined with ever increasing mobile computing strength, mobile virtual reality data visualization can be implemented for better analysis of 3D geological and mineralogical data sets. As more new products like the Surface Pro 4 and other high-powered yet very mobile computers are introduced to the market, the RAM and graphics card capacity necessary to run these models is more available, opening doors to this new reality. The computing requirements needed to run these models are a mere 8 GB of RAM and 2 GHz of CPU speed, which many mobile computers are starting to exceed. Using the Unity 3D software to create a virtual environment containing a visual representation of the data, any data set converted into FBX or OBJ format can be traversed by wearing the Oculus Rift device. This new method for analysis, in conjunction with 3D scanning, has potential applications in many fields, including the analysis of precious stones or jewelry. Using hologram technology to capture in high resolution the 3D shape, color, and imperfections of minerals and stones, detailed review and
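
    One concrete step the abstract alludes to, converting a gridded data set into an isosurface mesh that a game-engine scene can load, is sketched below in Python; the iso level, file names, and the choice of OBJ output are illustrative assumptions, not details from the abstract.

```python
import numpy as np
from skimage.measure import marching_cubes

def volume_to_obj(volume, iso_level, obj_path):
    """Extract an isosurface from a 3D scalar array and write it as a Wavefront OBJ mesh."""
    verts, faces, _normals, _values = marching_cubes(volume, level=iso_level)
    with open(obj_path, "w") as f:
        for x, y, z in verts:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in faces + 1:          # OBJ face indices are 1-based
            f.write(f"f {a} {b} {c}\n")

# Hypothetical usage: volume_to_obj(np.load("density.npy"), 0.5, "isosurface.obj")
```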

  11. Enhancing Education through Mobile Augmented Reality

    Science.gov (United States)

    Joan, D. R. Robert

    2015-01-01

    In this article, the author discusses mobile augmented reality and how it can enhance education. The aim of the present study was to give some general information about mobile augmented reality, which helps to boost education. The purpose of the current study reveals the mobile networks which are used on the institution campus as well…

  12. Subsurface data visualization in Virtual Reality

    Science.gov (United States)

    Krijnen, Robbert; Smelik, Ruben; Appleton, Rick; van Maanen, Peter-Paul

    2017-04-01

    Due to their increasing complexity and size, visualization of geological data is becoming more and more important. It enables detailed examination and review of large volumes of geological data, and it is often used as a communication tool for reporting and education to demonstrate the importance of the geology to policy makers. In the Netherlands two types of nation-wide geological models are available: 1) layer-based models, in which the subsurface is represented by a series of tops and bases of geological or hydrogeological units, and 2) voxel models, in which the subsurface is subdivided into a regular grid of voxels that can contain different properties per voxel. The Geological Survey of the Netherlands (GSN) provides an interactive web portal that delivers maps and vertical cross-sections of such layer-based and voxel models. From this portal you can download a 3D subsurface viewer that can visualize the voxel model data of an area of 20 × 25 km with 100 × 100 × 5 meter voxel resolution on a desktop computer. Virtual Reality (VR) technology enables us to enhance the visualization of this volumetric data in a more natural way than a standard desktop, keyboard and mouse setup. The use of VR for data visualization is not new, but recent developments have made expensive hardware and complex setups unnecessary. The availability of consumer off-the-shelf VR hardware enabled us to create a new, intuitive and low-cost visualization tool. A VR viewer has been implemented using the HTC Vive headset and allows visualization and analysis of the GSN voxel model data with geological or hydrogeological units. The user can navigate freely around the voxel data (20 × 25 km), which is presented in a virtual room at a scale of 2 × 2 or 3 × 3 meters. To enable analysis, e.g. of hydraulic conductivity, the user can select filters to remove specific hydrogeological units. The user can also use slicing to cut off specific sections of the voxel data to get a closer look. This slicing
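
    The filtering and slicing interactions described above map naturally onto boolean masks over the voxel grid. The NumPy sketch below is only an illustration under assumed data structures (a 3D array of unit codes), not the GSN viewer's implementation.

```python
import numpy as np

def visible_after_filter(unit_ids, hidden_units):
    """Mask of voxels that stay visible after hiding selected (hydro)geological units."""
    return ~np.isin(unit_ids, list(hidden_units))

def visible_after_slice(shape, axis, cut_index):
    """Mask that cuts away every voxel at or beyond cut_index along the given axis."""
    mask = np.ones(shape, dtype=bool)
    slicer = [slice(None)] * len(shape)
    slicer[axis] = slice(cut_index, None)
    mask[tuple(slicer)] = False
    return mask

# Hypothetical usage: combine both masks before rendering.
# visible = visible_after_filter(units, {7, 9}) & visible_after_slice(units.shape, 0, 120)
```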

  13. Fused Reality for Enhanced Flight Test Capabilities

    Science.gov (United States)

    Bachelder, Ed; Klyde, David

    2011-01-01

    The feasibility of using Fused Reality-based simulation technology to enhance flight test capabilities has been investigated. In terms of relevancy to piloted evaluation, there remains no substitute for actual flight tests, even when considering the fidelity and effectiveness of modern ground-based simulators. In addition to real-world cueing (vestibular, visual, aural, environmental, etc.), flight tests provide subtle but key intangibles that cannot be duplicated in a ground-based simulator. There is, however, a cost to be paid for the benefits of flight in terms of budget, mission complexity, and safety, including the need for ground and control-room personnel, additional aircraft, etc. A Fused Reality(tm) (FR) Flight system was developed that allows a virtual environment to be integrated with the test aircraft so that tasks such as aerial refueling, formation flying, or approach and landing can be accomplished without additional aircraft resources or the risk of operating in close proximity to the ground or other aircraft. Furthermore, the dynamic motions of the simulated objects can be directly correlated with the responses of the test aircraft. The FR Flight system will allow real-time observation of, and manual interaction with, the cockpit environment that serves as a frame for the virtual out-the-window scene.

  14. Between reality and deception: the anamorphosis in visual communication

    Directory of Open Access Journals (Sweden)

    Helena Ferreira

    2016-06-01

    This article aims to reflect on the use of anamorphosis in the context of graphic and visual communication by presenting a brief evolution of anamorphosis in visual communication, from its origin to the present time, through the analysis of historical and contemporary examples of anamorphic representations used in art and design. This is a reflection on the potential of the mechanism of anamorphosis as a vehicle of visual communication based on a perceptual game between reality and deception. Thus, we propose that this perceptual mechanism fits into a more comprehensive history: the history of visuality.

  15. Enhancing tourism with augmented and virtual reality

    OpenAIRE

    Jenny, Sandra

    2017-01-01

    Augmented and virtual reality are on the advance. In the last twelve months, several interesting devices have entered the market. Since tourism is one of the fastest growing economic sectors in the world and has become one of the major players in international commerce, the aim of this thesis was to examine how tourism could be enhanced with augmented and virtual reality. The differences and functional principles of augmented and virtual reality were investigated, general uses were described ...

  16. [Dynamic MR mammography. Multidimensional visualization of contrast medium enhancement in virtual reality].

    Science.gov (United States)

    Englmeier, K H; Griebel, J; Lucht, R; Knopp, M; Siebert, M; Brix, G

    2000-03-01

    The purpose of this study was the development of a method for fast and efficient analysis of dynamic MR images of the female breast. The image data sets were acquired with a saturation-recovery turbo-FLASH sequence which enables the detection of the kinetics of the contrast agent concentration in the whole breast with a high temporal and spatial resolution. In addition, a morphologic 3D-FLASH data set was acquired. The dynamic image datasets were analyzed by a pharmacokinetic model which enables the representation of the relevant functional tissue information by two parameters. In order to display simultaneously morphologic and functional tissue information, we developed a multidimensional visualization system, which enables a practical and intuitive human-computer interface in virtual reality. The developed system allows the fast and efficient analysis of dynamic MR data sets. An important clinical application is the localization and definition of multiple lesions of the female breast.

  17. The Visual Web User Interface Design in Augmented Reality Technology

    OpenAIRE

    Chouyin Hsu; Haui-Chih Shiau

    2013-01-01

    With the popularity of 3C devices, visual content is all around us, in online games, touch pads, video and animation. Therefore, text-based web pages will no longer satisfy users. With the popularity of webcams, digital cameras, stereoscopic glasses, and head-mounted displays, the user interface becomes more visual and multi-dimensional. For the consideration of 3D and visual display in research on web user interface design, Augmented Reality technology providing the convenient ...

  18. Scientific visualization for enhanced interpretation and communication of geoscientific information

    International Nuclear Information System (INIS)

    Vorauer, A.; Cotesta, L.

    2006-01-01

    Ontario Power Generation's Deep Geologic Repository Technology Program has undertaken applied research into the application of scientific visualization technologies to: i) improve the interpretation and synthesis of complex geoscientific field data; ii) facilitate the development of defensible conceptual site descriptive models; and iii) enhance communication between multi-disciplinary site investigation teams and other stakeholders. Two scientific visualization projects are summarized that benefited from the use of the Gocad earth modelling software and were supported by an immersive virtual reality laboratory: i) the Moderately Fractured Rock experiment at the 125,000 m³ block scale; and ii) the Sub-regional Flow System Modelling Project at the 100 km² scale. (author)

  19. Virtual reality in surgery and medicine.

    Science.gov (United States)

    Chinnock, C

    1994-01-01

    This report documents the state of development of enhanced and virtual reality-based systems in medicine. Virtual reality systems seek to simulate a surgical procedure in a computer-generated world in order to improve training. Enhanced reality systems seek to augment or enhance reality by providing improved imaging alternatives for specific patient data. Virtual reality represents a paradigm shift in the way we teach and evaluate the skills of medical personnel. Driving the development of virtual reality-based simulators is laparoscopic abdominal surgery, where there is a perceived need for better training techniques; within a year, systems will be fielded for second-year residency students. Further refinements over perhaps the next five years should allow surgeons to evaluate and practice new techniques in a simulator before using them on patients. Technical developments are rapidly improving the realism of these machines to an amazing degree, as well as bringing the price down to affordable levels. In the next five years, many new anatomical models, procedures, and skills are likely to become available on simulators. Enhanced reality systems are generally being developed to improve visualization of specific patient data. Three-dimensional (3-D) stereovision systems for endoscopic applications, head-mounted displays, and stereotactic image navigation systems are being fielded now, with neurosurgery and laparoscopic surgery being major driving influences. Over perhaps the next five years, enhanced and virtual reality systems are likely to merge. This will permit patient-specific images to be used on virtual reality simulators or computer-generated landscapes to be input into surgical visualization instruments. Percolating all around these activities are developments in robotics and telesurgery. An advanced information infrastructure eventually will permit remote physicians to share video, audio, medical records, and imaging data with local physicians in real time

  20. Visual error augmentation enhances learning in three dimensions.

    Science.gov (United States)

    Sharp, Ian; Huang, Felix; Patton, James

    2011-09-02

    Because recent preliminary evidence points to the use of error augmentation (EA) for motor learning enhancement, we visually enhanced deviations from a straight-line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal--rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm Maximum Perpendicular Trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions for this group and smaller errors. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removing the flip, all subjects returned to baseline rapidly, within 6 trials.
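
    The core display manipulation in this study, visually amplifying the deviation from a straight-line path, can be sketched in a few lines. The gain and geometry below are illustrative assumptions, not the study's exact transformation (which also included the 180° reversal of the hand-position mapping).

```python
import numpy as np

def error_augmented_cursor(hand_pos, start, target, gain=1.5):
    """Display position with the perpendicular deviation from the straight path amplified."""
    hand_pos, start, target = (np.asarray(p, dtype=float) for p in (hand_pos, start, target))
    path_dir = (target - start) / np.linalg.norm(target - start)
    on_path = start + np.dot(hand_pos - start, path_dir) * path_dir  # projection onto the path
    deviation = hand_pos - on_path                                   # perpendicular error
    return on_path + gain * deviation                                # amplified error shown to the user
```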

  1. Visual error augmentation enhances learning in three dimensions

    Directory of Open Access Journals (Sweden)

    Huang Felix

    2011-09-01

    Abstract Because recent preliminary evidence points to the use of error augmentation (EA) for motor learning enhancement, we visually enhanced deviations from a straight-line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal--rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm Maximum Perpendicular Trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions for this group and smaller errors. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removing the flip, all subjects returned to baseline rapidly, within 6 trials.

  2. Visualizing UAS-collected imagery using augmented reality

    Science.gov (United States)

    Conover, Damon M.; Beidleman, Brittany; McAlinden, Ryan; Borel-Donohue, Christoph C.

    2017-05-01

    One of the areas where augmented reality will have an impact is in the visualization of 3-D data. 3-D data has traditionally been viewed on a 2-D screen, which has limited its utility. Augmented reality head-mounted displays, such as the Microsoft HoloLens, make it possible to view 3-D data overlaid on the real world. This allows a user to view and interact with the data in ways similar to how they would interact with a physical 3-D object, such as moving, rotating, or walking around it. A type of 3-D data that is particularly useful for military applications is geo-specific 3-D terrain data, and the visualization of this data is critical for training, mission planning, intelligence, and improved situational awareness. Advances in Unmanned Aerial Systems (UAS), photogrammetry software, and rendering hardware have drastically reduced the technological and financial obstacles in collecting aerial imagery and in generating 3-D terrain maps from that imagery. Because of this, there is an increased need to develop new tools for the exploitation of 3-D data. We will demonstrate how the HoloLens can be used as a tool for visualizing 3-D terrain data. We will describe: 1) how UAS-collected imagery is used to create 3-D terrain maps, 2) how those maps are deployed to the HoloLens, 3) how a user can view and manipulate the maps, and 4) how multiple users can view the same virtual 3-D object at the same time.

  3. Visual Realism and Presence in a Virtual Reality Game

    DEFF Research Database (Denmark)

    Hvass, Jonatan Salling; Larsen, Oliver Stevns; Vendelbo, Kasper Bøgelund

    2017-01-01

    Virtual Reality (VR) has finally entered the homes of consumers, and a large number of the available applications are games. This paper presents a between-subjects study (n=50) exploring if visual realism (polygon count and texture resolution) affects presence during a scenario involving gameplay...

  4. Virtual Reality Learning Activities for Multimedia Students to Enhance Spatial Ability

    Directory of Open Access Journals (Sweden)

    Rafael Molina-Carmona

    2018-04-01

    Virtual Reality is an incipient technology that is proving very useful for training different skills. Our hypothesis is that it is possible to design virtual reality learning activities that can help students to develop their spatial ability. To prove the hypothesis, we have conducted an experiment consisting of training the students using an on-purpose learning activity based on a virtual reality application and assessing the possible improvement of the students’ spatial ability through a widely accepted spatial visualization test. The learning activity consists of a virtual environment where some simple polyhedral shapes are shown and manipulated by moving, rotating and scaling them. The students participating in the experiment are divided into a control and an experimental group, carrying out the same learning activity with the only difference of the device used for the interaction: a traditional computer with screen, keyboard and mouse for the control group, and virtual reality goggles with a smartphone for the experimental group. To assess the experience, all the students have completed a spatial visualization test twice: just before performing the activities and four weeks later, once all the activities were performed. Specifically, we have used the well-known and widely used Purdue Spatial Visualization Test—Rotation (PSVT-R), designed to test rotational visualization ability. The results of the test show that there is an improvement in the test results for both groups, but the improvement is significantly higher in the case of the experimental group. The conclusion is that the virtual reality learning activities have shown to improve the spatial ability of the experimental group.

  5. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices.

    Science.gov (United States)

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-11-07

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
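
    The adaptive execution module described above switches between full visual-inertial odometry and a cheaper optical-flow-based tracker at run time. The policy below is purely a hypothetical illustration of that idea; the thresholds, inputs, and mode names are invented for the sketch, not taken from the paper.

```python
def choose_tracker(gyro_rate_rad_s, mean_flow_px, last_frame_ms,
                   rate_thresh=0.8, flow_thresh=15.0, frame_deadline_ms=33.0):
    """Pick a tracking mode for the next frame (hypothetical selection policy)."""
    if last_frame_ms > frame_deadline_ms:
        # The previous frame missed its real-time budget: fall back to the cheap tracker.
        return "optical_flow_fast_vo"
    demanding_motion = gyro_rate_rad_s > rate_thresh or mean_flow_px > flow_thresh
    return "visual_inertial_odometry" if demanding_motion else "optical_flow_fast_vo"
```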

  6. Enhanced reality live role playing

    OpenAIRE

    Söderberg, Jonas; Waern, Annika; Åkesson, Karl-Petter; Björk, Staffan; Falk, Jennica

    2004-01-01

    Live role-playing is a form of improvisational theatre played for the experience of the performers and without an audience. These games form a challenging application domain for ubiquitous technology. We discuss the design options for enhanced reality live role-playing and the role of technology in live role-playing games.

  7. Soldier-worn augmented reality system for tactical icon visualization

    Science.gov (United States)

    Roberts, David; Menozzi, Alberico; Clipp, Brian; Russler, Patrick; Cook, James; Karl, Robert; Wenger, Eric; Church, William; Mauger, Jennifer; Volpe, Chris; Argenta, Chris; Wille, Mark; Snarski, Stephen; Sherrill, Todd; Lupo, Jasper; Hobson, Ross; Frahm, Jan-Michael; Heinly, Jared

    2012-06-01

    This paper describes the development and demonstration of a soldier-worn augmented reality system testbed that provides intuitive 'heads-up' visualization of tactically-relevant geo-registered icons. Our system combines a robust soldier pose estimation capability with a helmet mounted see-through display to accurately overlay geo-registered iconography (i.e., navigation waypoints, blue forces, aircraft) on the soldier's view of reality. Applied Research Associates (ARA), in partnership with BAE Systems and the University of North Carolina - Chapel Hill (UNC-CH), has developed this testbed system in Phase 2 of the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program. The ULTRA-Vis testbed system functions in unprepared outdoor environments and is robust to numerous magnetic disturbances. We achieve accurate and robust pose estimation through fusion of inertial, magnetic, GPS, and computer vision data acquired from helmet kit sensors. Icons are rendered on a high-brightness, 40°×30° field of view see-through display. The system incorporates an information management engine to convert CoT (Cursor-on-Target) external data feeds into mil-standard icons for visualization. The user interface provides intuitive information display to support soldier navigation and situational awareness of mission-critical tactical information.
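
    A hedged sketch of the geo-registration step such a system needs: converting a geodetic waypoint to local coordinates and projecting it into the display. The flat-earth ENU approximation, camera model, and parameter names are assumptions for illustration, not the ULTRA-Vis implementation.

      import numpy as np

      R_EARTH = 6378137.0  # WGS-84 equatorial radius in meters

      def geodetic_to_enu(lat, lon, alt, lat0, lon0, alt0):
          # Small-area flat-earth approximation: geodetic -> local East-North-Up (meters).
          d_lat = np.radians(lat - lat0)
          d_lon = np.radians(lon - lon0)
          east = d_lon * R_EARTH * np.cos(np.radians(lat0))
          north = d_lat * R_EARTH
          up = alt - alt0
          return np.array([east, north, up])

      def project_icon(p_enu, R_world_to_cam, cam_pos_enu, fx, fy, cx, cy):
          # Pinhole projection of an ENU point into display pixel coordinates.
          # Returns None if the point is behind the viewer (camera z assumed forward).
          p_cam = R_world_to_cam @ (p_enu - cam_pos_enu)
          if p_cam[2] <= 0:
              return None
          u = fx * p_cam[0] / p_cam[2] + cx
          v = fy * p_cam[1] / p_cam[2] + cy
          return u, v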

  8. Substitutional reality system: a novel experimental platform for experiencing alternative reality.

    Science.gov (United States)

    Suzuki, Keisuke; Wakisaka, Sohei; Fujii, Naotaka

    2012-01-01

We have developed a novel experimental platform, referred to as a substitutional reality (SR) system, for studying the conviction of the perception of live reality and related metacognitive functions. The SR system was designed to manipulate people's reality by allowing them to experience live scenes (in which they were physically present) and recorded scenes (which were recorded and edited in advance) in an alternating manner without noticing a reality gap. All of the naïve participants (n = 21) successfully believed that they had experienced live scenes when recorded scenes had been presented. Additional psychophysical experiments suggest that the depth of visual objects does not affect the perceptual discriminability between scenes, and that switching scenes during head movement enhances substitutional performance. The SR system, with its reality manipulation, is a novel and affordable method for studying metacognitive functions and psychiatric disorders.

  9. Color enhanced pipelines for reality-based 3D modeling of on site medium sized archeological artifacts

    Directory of Open Access Journals (Sweden)

    Fabrizio I. Apollonio

    2014-05-01

Full Text Available The paper describes a color enhanced processing system - applied as a case study to an artifact from the Pompeii archaeological area - developed to enhance different techniques for reality-based 3D model construction and visualization of archaeological artifacts. This processing allows rendering reflectance properties with perceptual fidelity on a consumer display and presents two main improvements over existing techniques: a. the color definition of the archaeological artifacts; b. the comparison between the range-based and photogrammetry-based pipelines to understand the limits of use and suitability to specific objects.

  10. SoftAR: visually manipulating haptic softness perception in spatial augmented reality.

    Science.gov (United States)

    Punpongsanon, Parinya; Iwai, Daisuke; Sato, Kosuke

    2015-11-01

    We present SoftAR, a novel spatial augmented reality (AR) technique based on a pseudo-haptics mechanism that visually manipulates the sense of softness perceived by a user pushing a soft physical object. Considering the limitations of projection-based approaches that change only the surface appearance of a physical object, we propose two projection visual effects, i.e., surface deformation effect (SDE) and body appearance effect (BAE), on the basis of the observations of humans pushing physical objects. The SDE visualizes a two-dimensional deformation of the object surface with a controlled softness parameter, and BAE changes the color of the pushing hand. Through psychophysical experiments, we confirm that the SDE can manipulate softness perception such that the participant perceives significantly greater softness than the actual softness. Furthermore, fBAE, in which BAE is applied only for the finger area, significantly enhances manipulation of the perception of softness. We create a computational model that estimates perceived softness when SDE+fBAE is applied. We construct a prototype SoftAR system in which two application frameworks are implemented. The softness adjustment allows a user to adjust the softness parameter of a physical object, and the softness transfer allows the user to replace the softness with that of another object.

  11. Immersive virtual reality and environmental noise assessment: An innovative audio–visual approach

    International Nuclear Information System (INIS)

    Ruotolo, Francesco; Maffei, Luigi; Di Gabriele, Maria; Iachini, Tina; Masullo, Massimiliano; Ruggiero, Gennaro; Senese, Vincenzo Paolo

    2013-01-01

Several international studies have shown that traffic noise has a negative impact on people's health and that people's annoyance does not depend only on noise energy levels, but rather on multi-perceptual factors. The combination of virtual reality technology and audio rendering techniques allows us to experiment with a new approach to environmental noise assessment that can help investigate in advance the potential negative effects of noise associated with a specific project and that in turn can help designers make educated decisions. In the present study, the audio–visual impact of a new motorway project on people has been assessed by means of immersive virtual reality technology. In particular, participants were exposed to 3D reconstructions of an actual landscape without the projected motorway (ante operam condition), and of the same landscape with the projected motorway (post operam condition). Furthermore, individuals' reactions to noise were assessed by means of objective cognitive measures (short-term verbal memory and executive functions) and subjective evaluations (noise and visual annoyance). Overall, the results showed that the introduction of a projected motorway in the environment can have immediate detrimental effects on people's well-being depending on the distance from the noise source. In particular, noise due to the new infrastructure seems to exert a negative influence on short-term verbal memory and to increase both visual and noise annoyance. The theoretical and practical implications of these findings are discussed. -- Highlights: ► Impact of traffic noise on people's well-being depends on multi-perceptual factors. ► A multisensory virtual reality technology is used to simulate a projected motorway. ► Effects on short-term memory and auditory and visual subjective annoyance were found. ► The closer the distance from the motorway the stronger was the effect. ► Multisensory virtual reality methodologies can be used to study

  12. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer.

    Science.gov (United States)

    Douglas, David B; Boone, John M; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

The purpose of this study was to display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. A case of breast cancer imaged using contrast-enhanced breast CT (Computed Tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and joystick control interface. The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence and a 3D cursor; the joystick enabled fly-through with visualization of the spiculations extending from the breast cancer. The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice.

  13. In-Situ Visualization for Cultural Heritage Sites using Novel Augmented Reality Technologies

    Directory of Open Access Journals (Sweden)

    Didier Stricker

    2010-05-01

Full Text Available Mobile Augmented Reality is an ideal technology for presenting information in an attractive, comprehensive and personalized way to visitors of cultural heritage sites. One of the pioneer projects in this area was certainly the European project ArcheoGuide (IST-1999-11306), which developed and evaluated Augmented Reality (AR) at a very early stage. Much progress has been made since then, and new devices and algorithms offer novel possibilities and functionalities. In this paper we present current research work and discuss different approaches to Mobile AR for cultural heritage. Since this area is very broad, we focus on the visual aspects of such technologies, namely tracking and computer vision, as well as visualization.

  14. MARCS: Mobile Augmented Reality for Cybersecurity

    OpenAIRE

    Mattina, Brendan Casey

    2017-01-01

    Network analysts have long used two-dimensional security visualizations to make sense of network data. As networks grow larger and more complex, two-dimensional visualizations become more convoluted, potentially compromising user situational awareness of cyber threats. To combat this problem, augmented reality (AR) can be employed to visualize data within a cyber-physical context to restore user perception and improve comprehension; thereby, enhancing cyber situational awareness. Multiple gen...

  15. Immersive virtual reality and environmental noise assessment: An innovative audio–visual approach

    Energy Technology Data Exchange (ETDEWEB)

    Ruotolo, Francesco, E-mail: francesco.ruotolo@unina2.it [Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, Second University of Naples, Viale Ellittico, 31, 81100, Caserta (Italy); Maffei, Luigi, E-mail: luigi.maffei@unina2.it [Department of Architecture and Industrial Design, Second University of Naples, Abazia di S. Lorenzo, 81031, Aversa (Italy); Di Gabriele, Maria, E-mail: maria.digabriele@unina2.it [Department of Architecture and Industrial Design, Second University of Naples, Abazia di S. Lorenzo, 81031, Aversa (Italy); Iachini, Tina, E-mail: santa.iachini@unina2.it [Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, Second University of Naples, Viale Ellittico, 31, 81100, Caserta (Italy); Masullo, Massimiliano, E-mail: massimiliano.masullo@unina2.it [Department of Architecture and Industrial Design, Second University of Naples, Abazia di S. Lorenzo, 81031, Aversa (Italy); Ruggiero, Gennaro, E-mail: gennaro.ruggiero@unina2.it [Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, Second University of Naples, Viale Ellittico, 31, 81100, Caserta (Italy); Senese, Vincenzo Paolo, E-mail: vincenzopaolo.senese@unina2.it [Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, Second University of Naples, Viale Ellittico, 31, 81100, Caserta (Italy); Psychometric Laboratory, Department of Psychology, Second University of Naples, Viale Ellittico, 31, 81100, Caserta (Italy)

    2013-07-15

Several international studies have shown that traffic noise has a negative impact on people's health and that people's annoyance does not depend only on noise energy levels, but rather on multi-perceptual factors. The combination of virtual reality technology and audio rendering techniques allows us to experiment with a new approach to environmental noise assessment that can help investigate in advance the potential negative effects of noise associated with a specific project and that in turn can help designers make educated decisions. In the present study, the audio–visual impact of a new motorway project on people has been assessed by means of immersive virtual reality technology. In particular, participants were exposed to 3D reconstructions of an actual landscape without the projected motorway (ante operam condition), and of the same landscape with the projected motorway (post operam condition). Furthermore, individuals' reactions to noise were assessed by means of objective cognitive measures (short-term verbal memory and executive functions) and subjective evaluations (noise and visual annoyance). Overall, the results showed that the introduction of a projected motorway in the environment can have immediate detrimental effects on people's well-being depending on the distance from the noise source. In particular, noise due to the new infrastructure seems to exert a negative influence on short-term verbal memory and to increase both visual and noise annoyance. The theoretical and practical implications of these findings are discussed. -- Highlights: ► Impact of traffic noise on people's well-being depends on multi-perceptual factors. ► A multisensory virtual reality technology is used to simulate a projected motorway. ► Effects on short-term memory and auditory and visual subjective annoyance were found. ► The closer the distance from the motorway the stronger was the effect. ► Multisensory virtual reality methodologies

  16. EFFECTS OF AUGMENTED REALITY PRESENTATIONS ON CONSUMER'S VISUAL PERCEPTION OF FLOOR PLANS

    OpenAIRE

    Lutheran, April L

    2012-01-01

    Home architects and designers use many types of presentation drawings to convey design ideas. Augmented reality is a relatively new technology that can be used to aid in design and marketing for residential builders. An augmented reality presentation provides a more complete idea of a design than other presentations such as 3D model renderings and hand drawn artist sketches. While designers are accustomed to visualizing 2D plans, this task is difficult for home buyers. This difficulty has bee...

  17. Enhancements to VTK enabling Scientific Visualization in Immersive Environments

    Energy Technology Data Exchange (ETDEWEB)

    O' Leary, Patrick; Jhaveri, Sankhesh; Chaudhary, Aashish; Sherman, William; Martin, Ken; Lonie, David; Whiting, Eric; Money, James

    2017-04-01

Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis provides insight into this data, scientific visualization is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when scientific visualization algorithms are implemented from scratch—a time-consuming and redundant process in immersive application development. This process can greatly benefit from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with only varying degrees of success. In this paper, we demonstrate two new approaches to simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both Vrui and OpenVR immersive environments in example applications.
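
    For readers unfamiliar with VTK, the minimal Python pipeline below shows the source-mapper-actor structure that such immersive integrations build on; swapping the desktop render window and interactor for VR counterparts (e.g. classes from VTK's OpenVR module, where the build provides them) is the essence of the approach. The sphere source is a stand-in for real simulation geometry, and the VR substitution noted in the comments is illustrative, not this paper's code.

      import vtk

      source = vtk.vtkSphereSource()          # stand-in for simulation geometry
      source.SetThetaResolution(32)
      source.SetPhiResolution(32)

      mapper = vtk.vtkPolyDataMapper()
      mapper.SetInputConnection(source.GetOutputPort())

      actor = vtk.vtkActor()
      actor.SetMapper(mapper)

      renderer = vtk.vtkRenderer()
      renderer.AddActor(actor)
      renderer.SetBackground(0.1, 0.1, 0.15)

      # In VR-enabled VTK builds, the window/interactor below can be replaced by
      # their immersive equivalents while the pipeline above stays unchanged.
      window = vtk.vtkRenderWindow()
      window.AddRenderer(renderer)

      interactor = vtk.vtkRenderWindowInteractor()
      interactor.SetRenderWindow(window)

      window.Render()
      interactor.Start()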

  18. Virtual reality devices integration in scientific visualization software in the VtkVRPN framework

    International Nuclear Information System (INIS)

    Journe, G.; Guilbaud, C.

    2005-01-01

High-quality scientific visualization software relies on ergonomic navigation and exploration, which are essential for efficient data analysis. To help address this issue, management of virtual reality devices has been developed inside the CEA 'VtkVRPN' framework. This framework is based on VTK, a 3D graphics library, and VRPN, a virtual reality device management library. This document describes the developments done during a post-graduate training course. (authors)

  19. Virtual Reality for Enhanced Ecological Validity and Experimental Control in the Clinical, Affective and Social Neurosciences

    Science.gov (United States)

    Parsons, Thomas D.

    2015-01-01

    An essential tension can be found between researchers interested in ecological validity and those concerned with maintaining experimental control. Research in the human neurosciences often involves the use of simple and static stimuli lacking many of the potentially important aspects of real world activities and interactions. While this research is valuable, there is a growing interest in the human neurosciences to use cues about target states in the real world via multimodal scenarios that involve visual, semantic, and prosodic information. These scenarios should include dynamic stimuli presented concurrently or serially in a manner that allows researchers to assess the integrative processes carried out by perceivers over time. Furthermore, there is growing interest in contextually embedded stimuli that can constrain participant interpretations of cues about a target’s internal states. Virtual reality environments proffer assessment paradigms that combine the experimental control of laboratory measures with emotionally engaging background narratives to enhance affective experience and social interactions. The present review highlights the potential of virtual reality environments for enhanced ecological validity in the clinical, affective, and social neurosciences. PMID:26696869

  20. Automatic cell identification and visualization using digital holographic microscopy with head mounted augmented reality devices.

    Science.gov (United States)

    O'Connor, Timothy; Rawat, Siddharth; Markman, Adam; Javidi, Bahram

    2018-03-01

    We propose a compact imaging system that integrates an augmented reality head mounted device with digital holographic microscopy for automated cell identification and visualization. A shearing interferometer is used to produce holograms of biological cells, which are recorded using customized smart glasses containing an external camera. After image acquisition, segmentation is performed to isolate regions of interest containing biological cells in the field-of-view, followed by digital reconstruction of the cells, which is used to generate a three-dimensional (3D) pseudocolor optical path length profile. Morphological features are extracted from the cell's optical path length map, including mean optical path length, coefficient of variation, optical volume, projected area, projected area to optical volume ratio, cell skewness, and cell kurtosis. Classification is performed using the random forest classifier, support vector machines, and K-nearest neighbor, and the results are compared. Finally, the augmented reality device displays the cell's pseudocolor 3D rendering of its optical path length profile, extracted features, and the identified cell's type or class. The proposed system could allow a healthcare worker to quickly visualize cells using augmented reality smart glasses and extract the relevant information for rapid diagnosis. To the best of our knowledge, this is the first report on the integration of digital holographic microscopy with augmented reality devices for automated cell identification and visualization.
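
    The classification stage described above can be sketched with scikit-learn's RandomForestClassifier; the feature matrix below is a synthetic placeholder standing in for the morphological features extracted from the optical path length maps (mean OPL, coefficient of variation, optical volume, projected area, area-to-volume ratio, skewness, kurtosis), and the two classes are hypothetical. This is not the authors' data or exact pipeline.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 7))      # synthetic stand-in: 200 cells x 7 OPL features
      y = rng.integers(0, 2, size=200)   # two hypothetical cell classes

      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.25, random_state=0)

      clf = RandomForestClassifier(n_estimators=100, random_state=0)
      clf.fit(X_train, y_train)
      print("held-out accuracy:", clf.score(X_test, y_test))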

  1. 3D Flow visualization in virtual reality

    Science.gov (United States)

    Pietraszewski, Noah; Dhillon, Ranbir; Green, Melissa

    2017-11-01

By viewing fluid dynamic isosurfaces in virtual reality (VR), many of the issues associated with the rendering of three-dimensional objects on a two-dimensional screen can be addressed. In addition, viewing a variety of unsteady 3D data sets in VR opens up novel opportunities for education and community outreach. In this work, the vortex wake of a bio-inspired pitching panel was visualized using a three-dimensional structural model of Q-criterion isosurfaces rendered in virtual reality using the HTC Vive. Utilizing the Unity cross-platform gaming engine, a program was developed to allow the user to control and change this model's position and orientation in three-dimensional space. In addition to controlling the model's position and orientation, the user can "scroll" forward and backward in time to analyze the formation and shedding of vortices in the wake. Finally, the user can toggle between different quantities, while keeping the time step constant, to analyze flow parameter relationships at specific times during flow development. The information, data, or work presented herein was funded in part by an award from NYS Department of Economic Development (DED) through the Syracuse Center of Excellence.
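
    The Q-criterion field behind such isosurfaces can, in general, be computed from a gridded velocity field as Q = ½(‖Ω‖² − ‖S‖²), where S and Ω are the symmetric and antisymmetric parts of the velocity gradient. The NumPy sketch below is a generic illustration (grid layout and uniform spacing are assumptions), not the authors' code.

      import numpy as np

      def q_criterion(u, v, w, dx=1.0, dy=1.0, dz=1.0):
          # u, v, w are 3-D arrays indexed [x, y, z] on a regular grid.
          dudx, dudy, dudz = np.gradient(u, dx, dy, dz)
          dvdx, dvdy, dvdz = np.gradient(v, dx, dy, dz)
          dwdx, dwdy, dwdz = np.gradient(w, dx, dy, dz)
          # velocity-gradient tensor, grad[..., i, j] = d u_i / d x_j
          grad = np.stack([
              np.stack([dudx, dudy, dudz], axis=-1),
              np.stack([dvdx, dvdy, dvdz], axis=-1),
              np.stack([dwdx, dwdy, dwdz], axis=-1),
          ], axis=-2)
          S = 0.5 * (grad + np.swapaxes(grad, -1, -2))  # strain-rate tensor
          O = 0.5 * (grad - np.swapaxes(grad, -1, -2))  # rotation tensor
          return 0.5 * (np.sum(O**2, axis=(-1, -2)) - np.sum(S**2, axis=(-1, -2)))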

  2. Meteorological Data Visualization in Multi-User Virtual Reality

    Science.gov (United States)

    Appleton, R.; van Maanen, P. P.; Fisher, W. I.; Krijnen, R.

    2017-12-01

Due to the complexity and size of meteorological data, visualization is important. It enables precise examination and review of meteorological details and is used as a communication tool for reporting, education and demonstrating the importance of the data to policy makers. Specifically for the UCAR community it is important to explore all such possibilities. Virtual Reality (VR) technology enhances the visualization of volumetric and dynamical data in a more natural way compared to a standard desktop, keyboard and mouse setup. The use of VR for data visualization is not new, but recent developments have made expensive hardware and complex setups unnecessary. The availability of consumer off-the-shelf VR hardware enabled us to create a very intuitive and low-cost way to visualize meteorological data. A VR viewer has been implemented using multiple HTC Vive headsets and allows visualization and analysis of meteorological data in NetCDF format (e.g. of the NCEP North America Model (NAM), see figure). Sources of atmospheric/meteorological data include radar and satellite as well as traditional weather stations. The data includes typical meteorological information such as temperature, humidity and air pressure, as well as data described by the climate forecast (CF) model conventions (http://cfconventions.org). Other data such as lightning-strike data and ultra-high-resolution satellite data are also becoming available. The users can navigate freely around the data, which is presented in a virtual room at a scale of up to 3.5 × 3.5 meters. Multiple users can manipulate the model simultaneously. Possible manipulations include scaling/translating, filtering by value and using a slicing tool to cut off specific sections of the data to get a closer look. The slicing can be done in any direction using the concept of a 'virtual knife' in real-time. The users can also scoop out parts of the data and walk through successive states of the model. Future plans are (a.o.) to
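
    Reading such NetCDF data before handing it to a VR renderer can be sketched as follows with the netCDF4 library; the file and variable names are placeholders, since actual NAM/GPM products define their own variables in the file metadata.

      from netCDF4 import Dataset
      import numpy as np

      # "nam_sample.nc" and the variable names are hypothetical placeholders.
      with Dataset("nam_sample.nc") as nc:
          temps = nc.variables["temperature"][:]   # e.g. (time, level, lat, lon)
          lats = nc.variables["latitude"][:]
          lons = nc.variables["longitude"][:]

      # Normalize one time step to [0, 1] so it can be mapped to a color or height
      # scale before being handed to the VR renderer.
      frame = np.asarray(temps[0], dtype=float)
      frame = (frame - np.nanmin(frame)) / (np.nanmax(frame) - np.nanmin(frame))
      print(frame.shape, lats.shape, lons.shape)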

  3. Digital representations of the real world how to capture, model, and render visual reality

    CERN Document Server

    Magnor, Marcus A; Sorkine-Hornung, Olga; Theobalt, Christian

    2015-01-01

Create Genuine Visual Realism in Computer Graphics. Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality explains how to portray visual worlds with a high degree of realism using the latest video acquisition technology, computer graphics methods, and computer vision algorithms. It explores the integration of new capture modalities, reconstruction approaches, and visual perception into the computer graphics pipeline. Understand the Entire Pipeline from Acquisition, Reconstruction, and Modeling to Realistic Rendering and Applications. The book covers sensors fo

  4. Stepping Into Science Data: Data Visualization in Virtual Reality

    Science.gov (United States)

    Skolnik, S.

    2017-12-01

Have you ever seen people get really excited about science data? Navteca, along with the Earth Science Technology Office (ESTO) within the Earth Science Division of NASA's Science Mission Directorate, has been exploring virtual reality (VR) technology for the next generation of Earth science technology information systems. One of their first joint experiments was visualizing climate data from the Goddard Earth Observing System Model (GEOS) in VR, and the resulting visualizations greatly excited the scientific community. This presentation will share the value of VR for science, such as the capability of permitting the observer to interact with data rendered in real-time, make selections, and view volumetric data in an innovative way. Using interactive VR hardware (headset and controllers), the viewer steps into the data visualizations, physically moving through three-dimensional structures that are traditionally displayed as layers or slices, such as cloud and storm systems from NASA's Global Precipitation Measurement (GPM). Results from displaying this precipitation and cloud data show that there is interesting potential for scientific visualization, 3D/4D visualizations, and inter-disciplinary studies using VR. Additionally, VR visualizations can be leveraged as 360 content for scientific communication and outreach, and VR can be used as a tool to engage policy and decision makers, as well as the public.

  5. From Vesalius to virtual reality: How embodied cognition facilitates the visualization of anatomy

    Science.gov (United States)

    Jang, Susan

    This study examines the facilitative effects of embodiment of a complex internal anatomical structure through three-dimensional ("3-D") interactivity in a virtual reality ("VR") program. Since Shepard and Metzler's influential 1971 study, it has been known that 3-D objects (e.g., multiple-armed cube or external body parts) are visually and motorically embodied in our minds. For example, people take longer to rotate mentally an image of their hand not only when there is a greater degree of rotation, but also when the images are presented in a manner incompatible with their natural body movement (Parsons, 1987a, 1994; Cooper & Shepard, 1975; Sekiyama, 1983). Such findings confirm the notion that our mental images and rotations of those images are in fact confined by the laws of physics and biomechanics, because we perceive, think and reason in an embodied fashion. With the advancement of new technologies, virtual reality programs for medical education now enable users to interact directly in a 3-D environment with internal anatomical structures. Given that such structures are not readily viewable to users and thus not previously susceptible to embodiment, coupled with the VR environment also affording all possible degrees of rotation, how people learn from these programs raises new questions. If we embody external anatomical parts we can see, such as our hands and feet, can we embody internal anatomical parts we cannot see? Does manipulating the anatomical part in virtual space facilitate the user's embodiment of that structure and therefore the ability to visualize the structure mentally? Medical students grouped in yoked-pairs were tasked with mastering the spatial configuration of an internal anatomical structure; only one group was allowed to manipulate the images of this anatomical structure in a 3-D VR environment, whereas the other group could only view the manipulation. The manipulation group outperformed the visual group, suggesting that the interactivity

  6. Augmented Reality Based Doppler Lidar Data Visualization: Promises and Challenges

    Science.gov (United States)

    Cherukuru, N. W.; Calhoun, R.

    2016-06-01

Augmented reality (AR) is a technology that enables the user to view virtual content as if it existed in the real world. We are exploring the possibility of using this technology to view radial velocities or processed wind vectors from a Doppler wind lidar, thus giving the user the ability to see the wind in a literal sense. This approach could find applications in aviation safety and atmospheric data visualization, as well as in weather education and public outreach. As a proof of concept, we used the lidar data from a recent field campaign and developed a smartphone application to view the lidar scan in augmented reality. In this paper, we briefly describe the methodology of this feasibility study and present the challenges and promises of using AR technology in conjunction with Doppler wind lidars.

  7. Augmented Reality Sandbox and Constructivist Approach for Geoscience Teaching and Learning

    OpenAIRE

    Muhammad Nawaz; Sandeep N. Kundu; Farha Sattar

    2017-01-01

The augmented reality sandbox adds new dimensions to the education and learning process. It can be a core component of geoscience teaching and learning for understanding geographic contexts and landform processes. The augmented reality sandbox is a useful tool not only for creating an interactive learning environment through spatial visualization, but also for providing students with an active learning experience and enhancing the cognitive process of learning. Augmented reality sandbox can be used as an inter...

  8. Enhancing a Multi-body Mechanism with Learning-Aided Cues in an Augmented Reality Environment

    International Nuclear Information System (INIS)

    Sidhu, Manjit Singh

    2013-01-01

Augmented Reality (AR) is a promising area of research for education, covering issues such as tracking and calibration, and realistic rendering of virtual objects. The ability to augment the real world with virtual information has opened the possibility of using AR technology in areas such as education and training. In the domain of Computer Aided Learning (CAL), researchers have long been looking into enhancing the effectiveness of the teaching and learning process by providing cues that could assist learners in better comprehending the materials presented. Although a number of works have looked into the effectiveness of learning-aided cues, none has really addressed this issue for AR-based learning solutions. This paper discusses the design and model of AR-based software that uses visual cues to enhance the learning process, along with the perception results obtained for the cues.

  9. Enhancing a Multi-body Mechanism with Learning-Aided Cues in an Augmented Reality Environment

    Science.gov (United States)

    Singh Sidhu, Manjit

    2013-06-01

Augmented Reality (AR) is a promising area of research for education, covering issues such as tracking and calibration, and realistic rendering of virtual objects. The ability to augment the real world with virtual information has opened the possibility of using AR technology in areas such as education and training. In the domain of Computer Aided Learning (CAL), researchers have long been looking into enhancing the effectiveness of the teaching and learning process by providing cues that could assist learners in better comprehending the materials presented. Although a number of works have looked into the effectiveness of learning-aided cues, none has really addressed this issue for AR-based learning solutions. This paper discusses the design and model of AR-based software that uses visual cues to enhance the learning process, along with the perception results obtained for the cues.

  10. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion.

    Science.gov (United States)

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-05-05

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time.
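
    The jitter-latency trade-off mentioned above can be illustrated with a simple motion-adaptive smoother: heavy smoothing when the device is nearly still (suppress jitter), light smoothing when it moves fast (limit lag). The thresholds and the linear blend below are assumptions for illustration, not the paper's adaptive filter.

      import numpy as np

      class AdaptivePoseSmoother:
          def __init__(self, alpha_still=0.15, alpha_moving=0.9, speed_hi=0.5):
              self.alpha_still = alpha_still    # smoothing weight at rest
              self.alpha_moving = alpha_moving  # smoothing weight during fast motion
              self.speed_hi = speed_hi          # speed (m/s) treated as "fast" (assumed)
              self.state = None

          def update(self, position, speed):
              # Blend the smoothing factor linearly with the observed speed.
              alpha = self.alpha_still + (self.alpha_moving - self.alpha_still) * \
                  min(speed / self.speed_hi, 1.0)
              position = np.asarray(position, dtype=float)
              if self.state is None:
                  self.state = position
              else:
                  self.state = alpha * position + (1.0 - alpha) * self.state
              return self.state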

  11. Developing augmented reality solutions through user involvement

    OpenAIRE

    Siltanen, Sanni

    2015-01-01

    Augmented reality (AR) technology merges digital information into the real world. It is an effective visualization method; AR enhances user's spatial perception skills and helps to understand spatial dimensions and relationships. It is beneficial for many professional application areas such as assembly, maintenance and repair. AR visualization helps to concretize building and construction projects and interior design plans – also for non-technically oriented people, who might otherwise have d...

  12. 3D interactive augmented reality-enhanced digital learning systems for mobile devices

    Science.gov (United States)

    Feng, Kai-Ten; Tseng, Po-Hsuan; Chiu, Pei-Shuan; Yang, Jia-Lin; Chiu, Chun-Jie

    2013-03-01

With the enhanced processing capability of mobile platforms, augmented reality (AR) has been considered a promising technology for achieving enhanced user experiences (UX). Augmented reality imposes virtual information, e.g., videos and images, onto a live-view digital display. UX of the real-world environment via the display can be effectively enhanced with the adoption of interactive AR technology. Enhancement of UX can be beneficial for digital learning systems. There are existing research works based on AR targeting the design of e-learning systems. However, none of these works focuses on providing three-dimensional (3-D) object modeling for enhanced UX based on interactive AR techniques. In this paper, 3-D interactive augmented reality-enhanced learning (IARL) systems are proposed to provide enhanced UX for digital learning. The proposed IARL systems consist of two major components, namely markerless pattern recognition (MPR) for 3-D models and velocity-based object tracking (VOT) algorithms. A realistic implementation of the proposed IARL system is conducted on Android-based mobile platforms. UX of digital learning can be greatly improved with the adoption of the proposed IARL systems.

  13. Virtual reality in neurosurgical education: part-task ventriculostomy simulation with dynamic visual and haptic feedback.

    Science.gov (United States)

    Lemole, G Michael; Banerjee, P Pat; Luciano, Cristian; Neckrysh, Sergey; Charbel, Fady T

    2007-07-01

    Mastery of the neurosurgical skill set involves many hours of supervised intraoperative training. Convergence of political, economic, and social forces has limited neurosurgical resident operative exposure. There is need to develop realistic neurosurgical simulations that reproduce the operative experience, unrestricted by time and patient safety constraints. Computer-based, virtual reality platforms offer just such a possibility. The combination of virtual reality with dynamic, three-dimensional stereoscopic visualization, and haptic feedback technologies makes realistic procedural simulation possible. Most neurosurgical procedures can be conceptualized and segmented into critical task components, which can be simulated independently or in conjunction with other modules to recreate the experience of a complex neurosurgical procedure. We use the ImmersiveTouch (ImmersiveTouch, Inc., Chicago, IL) virtual reality platform, developed at the University of Illinois at Chicago, to simulate the task of ventriculostomy catheter placement as a proof-of-concept. Computed tomographic data are used to create a virtual anatomic volume. Haptic feedback offers simulated resistance and relaxation with passage of a virtual three-dimensional ventriculostomy catheter through the brain parenchyma into the ventricle. A dynamic three-dimensional graphical interface renders changing visual perspective as the user's head moves. The simulation platform was found to have realistic visual, tactile, and handling characteristics, as assessed by neurosurgical faculty, residents, and medical students. We have developed a realistic, haptics-based virtual reality simulator for neurosurgical education. Our first module recreates a critical component of the ventriculostomy placement task. This approach to task simulation can be assembled in a modular manner to reproduce entire neurosurgical procedures.

  14. Shifty: A Weight-Shifting Dynamic Passive Haptic Proxy to Enhance Object Perception in Virtual Reality.

    Science.gov (United States)

    Zenner, Andre; Kruger, Antonio

    2017-04-01

    We define the concept of Dynamic Passive Haptic Feedback (DPHF) for virtual reality by introducing the weight-shifting physical DPHF proxy object Shifty. This concept combines actuators known from active haptics and physical proxies known from passive haptics to construct proxies that automatically adapt their passive haptic feedback. We describe the concept behind our ungrounded weight-shifting DPHF proxy Shifty and the implementation of our prototype. We then investigate how Shifty can, by automatically changing its internal weight distribution, enhance the user's perception of virtual objects interacted with in two experiments. In a first experiment, we show that Shifty can enhance the perception of virtual objects changing in shape, especially in length and thickness. Here, Shifty was shown to increase the user's fun and perceived realism significantly, compared to an equivalent passive haptic proxy. In a second experiment, Shifty is used to pick up virtual objects of different virtual weights. The results show that Shifty enhances the perception of weight and thus the perceived realism by adapting its kinesthetic feedback to the picked-up virtual object. In the same experiment, we additionally show that specific combinations of haptic, visual and auditory feedback during the pick-up interaction help to compensate for visual-haptic mismatch perceived during the shifting process.

  15. Virtual reality enhanced mannequin (VREM) that is well received by resuscitation experts.

    Science.gov (United States)

    Semeraro, Federico; Frisoli, Antonio; Bergamasco, Massimo; Cerchiari, Erga L

    2009-04-01

The objective of this study was to test acceptance of, and interest in, a newly developed prototype of a virtual reality enhanced mannequin (VREM) with a sample of congress attendees who volunteered to participate in the evaluation session and to respond to a specifically designed questionnaire. A commercial Laerdal HeartSim 4000 mannequin was adapted to integrate virtual reality (VR) technologies with specially developed virtual reality software to increase the immersive perception of emergency scenarios. To evaluate the acceptance of the virtual reality enhanced mannequin (VREM), we presented it to a sample of 39 possible users. Each evaluation session involved one trainee and two instructors with a standardized procedure and scenario: the operator was invited by the instructor to wear the data-gloves and the head mounted display and was briefly introduced to the scope of the simulation. The instructor helped the operator familiarize himself with the environment. After the patient's collapse, the operator was asked to check the patient's clinical conditions and start CPR. Finally, the patient started to recover signs of circulation and the evaluation session was concluded. Each participant was then asked to respond to a questionnaire designed to explore the trainee's perception in the areas of user-friendliness, realism, and interaction/immersion. Overall, the evaluation of the system was very positive, as was the feeling of immersion and realism of the environment and simulation. Overall, 84.6% of the participants judged the virtual reality experience as interesting and believed that its development could be very useful for healthcare training. The prototype of the virtual reality enhanced mannequin was well liked, without interference from the interaction devices, and deserves full technological development and validation in emergency medical training.

  16. Augmented reality system

    Science.gov (United States)

    Lin, Chien-Liang; Su, Yu-Zheng; Hung, Min-Wei; Huang, Kuo-Cheng

    2010-08-01

In recent years, Augmented Reality (AR)[1][2][3] has become very popular in universities and research organizations. AR technology has been widely used in Virtual Reality (VR) fields, such as sophisticated weapons, flight vehicle development, data model visualization, virtual training, entertainment and arts. AR enhances the display output of a real environment with specific user-interactive functions or specific object recognition. It can be used in medical treatment, anatomy training, precision instrument casting, warplane guidance, engineering and distance robot control. AR has many advantages over VR. The system developed here combines sensors, software and imaging algorithms to make the augmented experience feel real, actual and present to users. The imaging algorithms include a gray-level method, an image binarization method, and a white balance method in order to achieve accurate image recognition and overcome the effects of lighting.
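
    A minimal sketch of the gray-level, binarization and white-balance front end such a system might use, built on OpenCV. The gray-world correction and Otsu thresholding below are illustrative choices; the paper's exact parameters and methods are not given.

      import cv2
      import numpy as np

      def gray_world_white_balance(bgr):
          # Simple gray-world correction: scale each channel toward the global mean.
          img = bgr.astype(np.float32)
          means = img.reshape(-1, 3).mean(axis=0)
          img *= means.mean() / means
          return np.clip(img, 0, 255).astype(np.uint8)

      def binarize_for_recognition(bgr):
          # Gray-level conversion followed by Otsu thresholding, a common front end
          # for marker/object recognition under varying lighting.
          balanced = gray_world_white_balance(bgr)
          gray = cv2.cvtColor(balanced, cv2.COLOR_BGR2GRAY)
          _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          return binary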

  17. Augmented Reality Based Doppler Lidar Data Visualization: Promises and Challenges

    OpenAIRE

    Cherukuru N. W.; Calhoun R.

    2016-01-01

Augmented reality (AR) is a technology that enables the user to view virtual content as if it existed in the real world. We are exploring the possibility of using this technology to view radial velocities or processed wind vectors from a Doppler wind lidar, thus giving the user the ability to see the wind in a literal sense. This approach could find applications in aviation safety and atmospheric data visualization, as well as in weather education and public outreach. As a proof of...

  18. TECHNIQUES AND ALGORITHMS OF INTERACTIVE AUGMENTED REALITY VISUALIZATION: RESEARCH AND DEVELOPMENT

    OpenAIRE

    Kravtsov A. A.

    2015-01-01

The author performed research aimed at improving the visualization of three-dimensional objects by means of augmented reality technology, using widely available mobile devices as the platform. This article summarizes the main results and provides suggestions for future research. Since graphical user interfaces reached the consumer market about 30 years ago, interaction with the computer has not changed significantly. The focus of current user interface techniques is only...

  19. PRISMA-MAR: An Architecture Model for Data Visualization in Augmented Reality Mobile Devices

    Science.gov (United States)

    Gomes Costa, Mauro Alexandre Folha; Serique Meiguins, Bianchi; Carneiro, Nikolas S.; Gonçalves Meiguins, Aruanda Simões

    2013-01-01

    This paper proposes an extension to mobile augmented reality (MAR) environments--the addition of data charts to the more usual text, image and video components. To this purpose, we have designed a client-server architecture including the main necessary modules and services to provide an Information Visualization MAR experience. The server side…

  20. From Vesalius to Virtual Reality: How Embodied Cognition Facilitates the Visualization of Anatomy

    Science.gov (United States)

    Jang, Susan

    2010-01-01

    This study examines the facilitative effects of embodiment of a complex internal anatomical structure through three-dimensional ("3-D") interactivity in a virtual reality ("VR") program. Since Shepard and Metzler's influential 1971 study, it has been known that 3-D objects (e.g., multiple-armed cube or external body parts) are visually and…

  1. Hybrid Reality Lab Capabilities - Video 2

    Science.gov (United States)

    Delgado, Francisco J.; Noyes, Matthew

    2016-01-01

Our Hybrid Reality and Advanced Operations Lab is developing incredibly realistic and immersive systems that could be used to provide training, support engineering analysis, and augment data collection for various human performance metrics at NASA. To get a better understanding of what Hybrid Reality is, let's go through the two most commonly known types of immersive realities: Virtual Reality and Augmented Reality. Virtual Reality creates immersive scenes that are completely made up of digital information. This technology has been used to train astronauts at NASA, during teleoperation of remote assets (arms, rovers, robots, etc.) and in other activities. One challenge with Virtual Reality is that if you are using it for real-time applications (like landing an airplane), the information used to create the virtual scenes can be old (i.e. visualized long after physical objects moved in the scene) and not accurate enough to land the airplane safely. This is where Augmented Reality comes in. Augmented Reality takes real-time environment information (from a camera or see-through window) and places digitally created information into the scene so that it matches the video/glass information. Augmented Reality enhances real environment information collected with a live sensor or viewport (e.g. camera, window, etc.) with the information-rich visualization provided by Virtual Reality. Hybrid Reality takes Augmented Reality even further, creating a higher level of immersion where interactivity can take place. Hybrid Reality takes Virtual Reality objects and a trackable, physical representation of those objects, places them in the same coordinate system, and allows people to interact with both objects' representations (virtual and physical) simultaneously. After a short period of adjustment, individuals begin to interact with all the objects in the scene as if they were real-life objects. The ability to physically touch and interact with digitally created

  2. Mobile Mixed-Reality Interfaces That Enhance Human–Robot Interaction in Shared Spaces

    Directory of Open Access Journals (Sweden)

    Jared A. Frank

    2017-06-01

    Full Text Available Although user interfaces with gesture-based input and augmented graphics have promoted intuitive human–robot interactions (HRI, they are often implemented in remote applications on research-grade platforms requiring significant training and limiting operator mobility. This paper proposes a mobile mixed-reality interface approach to enhance HRI in shared spaces. As a user points a mobile device at the robot’s workspace, a mixed-reality environment is rendered providing a common frame of reference for the user and robot to effectively communicate spatial information for performing object manipulation tasks, improving the user’s situational awareness while interacting with augmented graphics to intuitively command the robot. An evaluation with participants is conducted to examine task performance and user experience associated with the proposed interface strategy in comparison to conventional approaches that utilize egocentric or exocentric views from cameras mounted on the robot or in the environment, respectively. Results indicate that, despite the suitability of the conventional approaches in remote applications, the proposed interface approach provides comparable task performance and user experiences in shared spaces without the need to install operator stations or vision systems on or around the robot. Moreover, the proposed interface approach provides users the flexibility to direct robots from their own visual perspective (at the expense of some physical workload and leverages the sensing capabilities of the tablet to expand the robot’s perceptual range.

  3. Augmented Reality Applications for Substation Management by Utilizing Standards-Compliant SCADA Communication

    Directory of Open Access Journals (Sweden)

    Miro Antonijević

    2018-03-01

    Full Text Available Most electrical substations are remotely monitored and controlled by using Supervisory Control and Data Acquisition (SCADA applications. Current SCADA systems have been significantly enhanced by utilizing standardized communication protocols and the most prominent is the IEC 61850 international standard. These enhancements enable improvements in different domains of SCADA systems such as communication engineering, data management and visualization of automation process data in SCADA applications. Process data visualization is usually achieved through Human Machine Interface (HMI screens in substation control centres. However, this visualization method sometimes makes supervision, control and maintenance procedures executed by engineers slow and error-prone because it separates equipment from its automation data. Augmented reality (AR and mixed reality (MR visualization techniques have matured enough to provide new possibilities of displaying relevant data wherever needed. This paper presents a novel methodology for visualizing process related SCADA data to enhance and facilitate human-centric activities in substations such as regular equipment maintenance. The proposed solution utilizes AR visualization techniques together with standards-based communication protocols used in substations. The developed proof-of-concept AR application that enables displaying SCADA data on the corresponding substation equipment with the help of AR markers demonstrates originality and benefits of the proposed visualization method. Additionally, the application enables displaying widgets and 3D models of substation equipment to make the visualization more user-friendly and intuitive. The visualized SCADA data needs to be refreshed considering soft real-time data delivery restrictions. Therefore, the proposed solution is thoroughly tested to demonstrate the applicability of proposed methodology in real substations.

  4. Simulation data analysis by virtual reality system

    International Nuclear Information System (INIS)

    Ohtani, Hiroaki; Mizuguchi, Naoki; Shoji, Mamoru; Ishiguro, Seiji; Ohno, Nobuaki

    2010-01-01

    We introduce new software for analysis of time-varying simulation data and new approach for contribution of simulation to experiment by virtual reality (VR) technology. In the new software, the objects of time-varying field are visualized in VR space and the particle trajectories in the time-varying electromagnetic field are also traced. In the new approach, both simulation results and experimental device data are simultaneously visualized in VR space. These developments enhance the study of the phenomena in plasma physics and fusion plasmas. (author)
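
    Tracing particle trajectories in a time-varying electromagnetic field, as described in this record, is commonly done with the Boris scheme; the sketch below is a generic textbook integrator (the field functions are user-supplied), shown only to illustrate the idea, not the software described here.

      import numpy as np

      def boris_push(x, v, q_over_m, E_func, B_func, t, dt):
          # One Boris step for a charged particle; E_func and B_func take (position, time)
          # and return 3-vectors for the time-varying fields.
          E = np.asarray(E_func(x, t), dtype=float)
          B = np.asarray(B_func(x, t), dtype=float)
          # half electric acceleration
          v_minus = v + 0.5 * q_over_m * E * dt
          # magnetic rotation
          t_vec = 0.5 * q_over_m * B * dt
          s_vec = 2.0 * t_vec / (1.0 + np.dot(t_vec, t_vec))
          v_prime = v_minus + np.cross(v_minus, t_vec)
          v_plus = v_minus + np.cross(v_prime, s_vec)
          # second half electric acceleration, then position update
          v_new = v_plus + 0.5 * q_over_m * E * dt
          x_new = x + v_new * dt
          return x_new, v_new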

  5. Animation Augmented Reality Book Model (AAR Book Model) to Enhance Teamwork

    Science.gov (United States)

    Chujitarom, Wannaporn; Piriyasurawong, Pallop

    2017-01-01

This study aims to synthesize an Animation Augmented Reality Book Model (AAR Book Model) to enhance teamwork and to assess that model. The sample consists of five specialists: one animation specialist, two communication and information technology specialists, and two teaching model design specialists, selected by…

  6. Real-time markerless tracking for augmented reality: the virtual visual servoing framework.

    Science.gov (United States)

    Comport, Andrew I; Marchand, Eric; Pressigout, Muriel; Chaumette, François

    2006-01-01

Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and low latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a "video see through" monocular vision system. The tracking of objects in the scene amounts to calculating the pose between the camera and the objects. Virtual objects can then be projected into the scene using the pose. Here, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curve interaction matrices is given for different 3D geometrical primitives including straight lines, circles, cylinders, and spheres. A local moving edges tracker is used in order to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.
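
    The M-estimator via iteratively reweighted least squares mentioned above can be illustrated generically as below, using a Tukey biweight on a linear system. This is a textbook sketch of the robust-weighting idea, not the paper's visual-servoing control law, whose interaction matrices are pose-dependent.

      import numpy as np

      def tukey_weights(residuals, c=4.685):
          # Tukey biweight: down-weights large residuals, zero beyond the cutoff c.
          scale = 1.4826 * np.median(np.abs(residuals)) + 1e-12
          u = np.abs(residuals) / scale
          w = (1.0 - (u / c) ** 2) ** 2
          w[u > c] = 0.0
          return w

      def irls(A, b, iterations=10):
          # Generic iteratively reweighted least squares on A x = b.
          x = np.linalg.lstsq(A, b, rcond=None)[0]
          for _ in range(iterations):
              r = A @ x - b
              W = np.diag(tukey_weights(r))
              x = np.linalg.lstsq(W @ A, W @ b, rcond=None)[0]
          return x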

  7. Joint evaluation of communication quality and user experience in an audio-visual virtual reality meeting

    DEFF Research Database (Denmark)

    Møller, Anders Kalsgaard; Hoffmann, Pablo F.; Carrozzino, Marcello

    2013-01-01

The state-of-the-art speech intelligibility tests are created with the purpose of evaluating acoustic communication devices and not for evaluating audio-visual virtual reality systems. This paper presents a novel method to evaluate a communication situation based on both the speech intelligibility

  8. Study of Co-Located and Distant Collaboration with Symbolic Support via a Haptics-Enhanced Virtual Reality Task

    Science.gov (United States)

    Yeh, Shih-Ching; Hwang, Wu-Yuin; Wang, Jin-Liang; Zhan, Shi-Yi

    2013-01-01

    This study intends to investigate how multi-symbolic representations (text, digits, and colors) could effectively enhance the completion of co-located/distant collaborative work in a virtual reality context. Participants' perceptions and behaviors were also studied. A haptics-enhanced virtual reality task was developed to conduct…

Augmented reality as wearable technology

    DEFF Research Database (Denmark)

    Rahn, Annette

“How augmented reality can facilitate learning in visualizing human anatomy.” At this station I demonstrate how augmented reality can be used to visualize the human lungs in situ and as a wearable technology that establishes a connection between body, image and technology in education. I will show...

  10. Towards Determination of Visual Requirements for Augmented Reality Displays and Virtual Environments for the Airport Tower

    Science.gov (United States)

    Ellis, Stephen R.

    2006-01-01

The visual requirements for augmented reality or virtual environment displays that might be used in real or virtual towers are reviewed with respect to similar displays already used in aircraft. As an example of the type of human performance studies needed to determine the useful specifications of augmented reality displays, an optical see-through display was used in an ATC Tower simulation. Three different binocular fields of view (14°, 28°, and 47°) were examined to determine their effect on subjects' ability to detect aircraft maneuvering and landing. The results suggest that binocular fields of view much greater than 47° are unlikely to dramatically improve search performance and that partial binocular overlap is a feasible display technique for augmented reality Tower applications.

  11. Using the virtual reality device Oculus Rift for neuropsychological assessment of visual processing capabilities.

    Science.gov (United States)

    Foerster, Rebecca M; Poth, Christian H; Behler, Christian; Botsch, Mario; Schneider, Werner X

    2016-11-21

    Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions including room lighting, stimuli, and viewing distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, lightweight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite for using Oculus Rift for neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen's visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and a standard CRT computer screen. Our results show that Oculus Rift allows the processing components to be measured as reliably as the standard CRT. This means that Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions.
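
    As a note on the reliability analysis above, one simple way to quantify test-retest reliability is the correlation between the estimates obtained in the two sessions. The sketch below uses a Pearson correlation over hypothetical processing-speed values; treating a correlation-based statistic as sufficient is an assumption here, and the study itself may have used additional or different reliability measures.

        import numpy as np

        def test_retest_reliability(session1, session2):
            """Pearson correlation between two sessions as a simple reliability estimate."""
            s1 = np.asarray(session1, dtype=float)
            s2 = np.asarray(session2, dtype=float)
            return np.corrcoef(s1, s2)[0, 1]

        # Hypothetical visual processing speed estimates (items/s) from two sessions.
        speed_t1 = [22.1, 30.4, 18.7, 25.0, 27.9]
        speed_t2 = [21.5, 31.0, 19.2, 24.3, 28.5]
        print(round(test_retest_reliability(speed_t1, speed_t2), 3))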

  12. Learning Science in a Virtual Reality Application: The Impacts of Animated-Virtual Actors' Visual Complexity

    Science.gov (United States)

    Kartiko, Iwan; Kavakli, Manolya; Cheng, Ken

    2010-01-01

    As the technology in computer graphics advances, Animated-Virtual Actors (AVAs) in Virtual Reality (VR) applications become increasingly rich and complex. Cognitive Theory of Multimedia Learning (CTML) suggests that complex visual materials could hinder novice learners from attending to the lesson properly. On the other hand, previous studies have…

  13. Industrial application trends and market perspectives for virtual reality and visual simulation

    Directory of Open Access Journals (Sweden)

    Antonio Valerio Netto

    2004-06-01

    Full Text Available This paper attempts to provide an overview of current market trends in industrial applications of VR (Virtual Reality) and VisSim (visual simulation) for the next few years. Several recently undertaken market studies are presented and commented on. A profile of some companies that are starting to work with these technologies is provided, in an attempt to motivate Brazilian companies to adopt these new technologies by describing successful example applications undertaken by foreign companies.

  14. Experiencing 3D interactions in virtual reality and augmented reality

    NARCIS (Netherlands)

    Martens, J.B.; Qi, W.; Aliakseyeu, D.; Kok, A.J.F.; Liere, van R.; Hoven, van den E.; Ijsselsteijn, W.; Kortuem, G.; Laerhoven, van K.; McClelland, I.; Perik, E.; Romero, N.; Ruyter, de B.

    2004-01-01

    We demonstrate basic 2D and 3D interactions in both a Virtual Reality (VR) system, called the Personal Space Station, and an Augmented Reality (AR) system, called the Visual Interaction Platform. Since both platforms use identical (optical) tracking hardware and software, and can run identical

  15. Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments

    Science.gov (United States)

    Portalés, Cristina; Lerma, José Luis; Navarro, Santiago

    2010-01-01

    Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In very recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to boost human interactions and real-life navigation. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction far beyond the traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application to integrate buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated in real (physical) urban worlds. The augmented environment presented herein requires a video see-through head-mounted display (HMD) for visualization, whereas the user's movement in the real world is tracked with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper deals with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There remain, however, some complex software issues, which are discussed in the paper.

  16. Virtual reality aided visualization of fluid flow simulations with application in medical education and diagnostics.

    Science.gov (United States)

    Djukic, Tijana; Mandic, Vesna; Filipovic, Nenad

    2013-12-01

    Medical education, training and preoperative diagnostics can be drastically improved with advanced technologies, such as virtual reality. The method proposed in this paper enables medical doctors and students to visualize and manipulate three-dimensional models created from CT or MRI scans, and also to analyze the results of fluid flow simulations. Simulation of fluid flow using the finite element method is performed, in order to compute the shear stress on the artery walls. The simulation of motion through the artery is also enabled. The virtual reality system proposed here could shorten the length of training programs and make the education process more effective. © 2013 Published by Elsevier Ltd.

  17. Visualization of particle trajectories in time-varying electromagnetic fields by CAVE-type virtual reality system

    International Nuclear Information System (INIS)

    Ohno, Nobuaki; Ohtani, Hiroaki; Horiuchi, Ritoku; Matsuoka, Daisuke

    2012-01-01

    The particle kinetic effects play an important role in breaking the frozen-in condition and exciting collisionless magnetic reconnection in high temperature plasmas. Because this effect originates from complex thermal motion near the reconnection point, it is very important to examine particle trajectories using scientific visualization techniques, especially in the presence of plasma instability. We developed an interactive visualization environment for particle trajectories in time-varying electromagnetic fields in a CAVE-type virtual reality system, based on VFIVE, an interactive visualization software package for the CAVE system. From the analysis of ion trajectories using the particle simulation data, it was found that time-varying electromagnetic fields around the reconnection region accelerate ions toward the downstream region. (author)

  18. Which technology to investigate visual perception in sport: video vs. virtual reality.

    Science.gov (United States)

    Vignais, Nicolas; Kulpa, Richard; Brault, Sébastien; Presse, Damien; Bideau, Benoit

    2015-02-01

    Visual information uptake is a fundamental element of sports involving interceptive tasks. Several methodologies, like video and methods based on virtual environments, are currently employed to analyze visual perception during sport situations. Both techniques have advantages and drawbacks. The goal of this study is to determine which of these technologies may be preferentially used to analyze visual information uptake during a sport situation. To this aim, we compared a handball goalkeeper's performance using two standardized methodologies: video clip and virtual environment. We examined this performance for two response tasks: an uncoupled task (goalkeepers show where the ball ends) and a coupled task (goalkeepers try to intercept the virtual ball). Variables investigated in this study were percentage of correct zones, percentage of correct responses, radial error and response time. The results showed that handball goalkeepers were more effective, more accurate and started to intercept earlier when facing a virtual handball thrower than when facing the video clip. These findings suggested that the analysis of visual information uptake for handball goalkeepers was better performed by using a 'virtual reality'-based methodology. Technical and methodological aspects of these findings are discussed further. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. On-patient see-through augmented reality based on visual SLAM.

    Science.gov (United States)

    Mahmoud, Nader; Grasa, Óscar G; Nicolau, Stéphane A; Doignon, Christophe; Soler, Luc; Marescaux, Jacques; Montiel, J M M

    2017-01-01

    An augmented reality system to visualize a 3D preoperative anatomical model on the intra-operative patient is proposed. The hardware requirement is a commercial tablet-PC equipped with a camera. Thus, no external tracking device or artificial landmarks on the patient are required. We resort to visual SLAM to provide markerless real-time tablet-PC camera location with respect to the patient. The preoperative model is registered with respect to the patient through 4-6 anchor points. The anchors correspond to anatomical references selected on the tablet-PC screen at the beginning of the procedure. Accurate and real-time preoperative model alignment (approximately 5-mm mean FRE and TRE) was achieved, even when anchors were not visible in the current field of view. The system has been experimentally validated on human volunteers, in vivo pigs and a phantom. The proposed system can be smoothly integrated into the surgical workflow because it: (1) operates in real time, (2) requires minimal additional hardware (only a tablet-PC with a camera), (3) is robust to occlusion, (4) requires minimal interaction from the medical staff.
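
    The anchor-point step described above, aligning a preoperative model to 4-6 anatomical references selected on the patient, is commonly posed as a rigid point-based registration problem. The sketch below shows one standard solution (the Kabsch/Procrustes method via SVD) together with the fiducial registration error (FRE) used as an accuracy measure; it illustrates the general technique under that assumption and is not necessarily the authors' exact implementation.

        import numpy as np

        def rigid_registration(P, Q):
            """Least-squares rigid transform (R, t) mapping points P onto Q (Kabsch method).
            P, Q: (N, 3) arrays of corresponding anchor points, N >= 3, not collinear."""
            P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)          # centroids
            H = (P - cP).T @ (Q - cQ)                        # cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
            R = Vt.T @ D @ U.T
            t = cQ - R @ cP
            return R, t

        def fre(P, Q, R, t):
            """Fiducial registration error: RMS distance between Q and the transformed P."""
            return np.sqrt(np.mean(np.sum((Q - (P @ R.T + t)) ** 2, axis=1)))

        # Toy check: recover a known rotation about z plus a translation.
        theta = np.deg2rad(30.0)
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                           [np.sin(theta),  np.cos(theta), 0.0],
                           [0.0,            0.0,           1.0]])
        P = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [1., 1., 1.]])
        Q = P @ R_true.T + np.array([5.0, -2.0, 1.0])
        R, t = rigid_registration(P, Q)
        print(round(fre(P, Q, R, t), 6))                     # ~0 for noise-free correspondences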

  20. Virtual reality, augmented reality…I call it i-Reality.

    Science.gov (United States)

    Grossmann, Rafael J

    2015-01-01

    The new term improved reality (i-Reality) is suggested to include virtual reality (VR) and augmented reality (AR). It refers to a real world that includes improved, enhanced and digitally created features that would offer an advantage on a particular occasion (i.e., a medical act). I-Reality may help us bridge the gap between the high demand for medical providers and the low supply of them by improving the interaction between providers and patients.

  1. Computer-assisted intraoperative visualization of dental implants. Augmented reality in medicine

    International Nuclear Information System (INIS)

    Ploder, O.; Wagner, A.; Enislidis, G.; Ewers, R.

    1995-01-01

    In this paper, a recently developed computer-based dental implant positioning system with an image-to-tissue interface is presented. On a computer monitor or in a head-up display, planned implant positions and the implant drill are graphically superimposed on the patient's anatomy. Electromagnetic 3D sensors track all skull and jaw movements; their signal feedback to the workstation induces permanent real-time updating of the virtual graphics' position. An experimental study and a clinical case demonstrate the concept of the augmented reality environment: the physician can see the operating field and superimposed virtual structures, such as dental implants and surgical instruments, without losing visual control of the operating field. Therefore, the system allows visualization of the CT-planned implant position and of important anatomical structures. The presented method for the first time links preoperatively acquired radiologic data, planned implant location and intraoperative navigation assistance for orthotopic positioning of dental implants. (orig.)

  2. Segmented and Detailed Visualization of Anatomical Structures based on Augmented Reality for Health Education and Knowledge Discovery

    Directory of Open Access Journals (Sweden)

    Isabel Cristina Siqueira da Silva

    2017-05-01

    Full Text Available The evolution of technology has changed the face of education, especially when combined with appropriate pedagogical bases. This combination has created innovation opportunities in order to add quality to teaching through new perspectives for traditional methods applied in the classroom. In the Health field, particularly, augmented reality and interaction design techniques can assist the teacher in the exposition of theoretical concepts and/or concepts that require training in specific medical procedures. Besides, visualization and interaction with Health data, from different sources and in different formats, helps to identify hidden patterns or anomalies, increases the flexibility in the search for certain values, allows the comparison of different units to obtain relative differences in quantities, provides human interaction in real time, etc. At this point, it is noted that the use of interactive visualization techniques such as augmented and virtual reality can contribute to the process of knowledge discovery in medical and biomedical databases. This work discusses aspects related to the use of augmented reality and interaction design as tools for teaching anatomy and knowledge discovery, and proposes a case study based on a mobile application that can display targeted anatomical parts in high resolution and in detail.

  3. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    Science.gov (United States)

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. Visual working memory allows for maintaining such visual information in the mind

  4. Recent Development of Augmented Reality in Surgery: A Review

    Science.gov (United States)

    Vávra, P.; Zonča, P.; Ihnát, P.; El-Gendi, A.

    2017-01-01

    Introduction The development of augmented reality devices allows physicians to incorporate data visualization into diagnostic and treatment procedures to improve work efficiency, safety, and cost and to enhance surgical training. However, awareness of the possibilities of augmented reality is generally low. This review evaluates whether augmented reality can presently improve the results of surgical procedures. Methods We performed a review of available literature dating from 2010 to November 2016 by searching PubMed and Scopus using the terms “augmented reality” and “surgery.” Results The initial search yielded 808 studies. After removing duplicates and including only journal articles, a total of 417 studies were identified. By reading the abstracts, 91 relevant studies were chosen for inclusion; 11 references were gathered by cross-referencing. A total of 102 studies were included in this review. Conclusions The present literature suggests an increasing interest of surgeons in employing augmented reality in surgery, leading to improved safety and efficacy of surgical procedures. Many studies showed that the performance of newly devised augmented reality systems is comparable to traditional techniques. However, several problems need to be addressed before augmented reality is implemented into routine practice. PMID:29065604

  5. Recent Development of Augmented Reality in Surgery: A Review

    Directory of Open Access Journals (Sweden)

    P. Vávra

    2017-01-01

    Full Text Available Introduction. The development of augmented reality devices allows physicians to incorporate data visualization into diagnostic and treatment procedures to improve work efficiency, safety, and cost and to enhance surgical training. However, awareness of the possibilities of augmented reality is generally low. This review evaluates whether augmented reality can presently improve the results of surgical procedures. Methods. We performed a review of available literature dating from 2010 to November 2016 by searching PubMed and Scopus using the terms “augmented reality” and “surgery.” Results. The initial search yielded 808 studies. After removing duplicates and including only journal articles, a total of 417 studies were identified. By reading the abstracts, 91 relevant studies were chosen for inclusion; 11 references were gathered by cross-referencing. A total of 102 studies were included in this review. Conclusions. The present literature suggests an increasing interest of surgeons in employing augmented reality in surgery, leading to improved safety and efficacy of surgical procedures. Many studies showed that the performance of newly devised augmented reality systems is comparable to traditional techniques. However, several problems need to be addressed before augmented reality is implemented into routine practice.

  6. Visual Contrast Enhancement Algorithm Based on Histogram Equalization

    Science.gov (United States)

    Ting, Chih-Chung; Wu, Bing-Fei; Chung, Meng-Liang; Chiu, Chung-Cheng; Wu, Ya-Ching

    2015-01-01

    Image enhancement techniques primarily improve the contrast of an image to lend it a better appearance. One of the popular enhancement methods is histogram equalization (HE) because of its simplicity and effectiveness. However, it is rarely applied to consumer electronics products because it can cause excessive contrast enhancement and feature loss problems. These problems make the images processed by HE look unnatural and introduce unwanted artifacts in them. In this study, a visual contrast enhancement algorithm (VCEA) based on HE is proposed. VCEA considers the requirements of the human visual perception in order to address the drawbacks of HE. It effectively solves the excessive contrast enhancement problem by adjusting the spaces between two adjacent gray values of the HE histogram. In addition, VCEA reduces the effects of the feature loss problem by using the obtained spaces. Furthermore, VCEA enhances the detailed textures of an image to generate an enhanced image with better visual quality. Experimental results show that images obtained by applying VCEA have higher contrast and are more suited to human visual perception than those processed by HE and other HE-based methods. PMID:26184219

  7. Visual Contrast Enhancement Algorithm Based on Histogram Equalization

    Directory of Open Access Journals (Sweden)

    Chih-Chung Ting

    2015-07-01

    Full Text Available Image enhancement techniques primarily improve the contrast of an image to lend it a better appearance. One of the popular enhancement methods is histogram equalization (HE) because of its simplicity and effectiveness. However, it is rarely applied to consumer electronics products because it can cause excessive contrast enhancement and feature loss problems. These problems make the images processed by HE look unnatural and introduce unwanted artifacts in them. In this study, a visual contrast enhancement algorithm (VCEA) based on HE is proposed. VCEA considers the requirements of the human visual perception in order to address the drawbacks of HE. It effectively solves the excessive contrast enhancement problem by adjusting the spaces between two adjacent gray values of the HE histogram. In addition, VCEA reduces the effects of the feature loss problem by using the obtained spaces. Furthermore, VCEA enhances the detailed textures of an image to generate an enhanced image with better visual quality. Experimental results show that images obtained by applying VCEA have higher contrast and are more suited to human visual perception than those processed by HE and other HE-based methods.
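
    For reference, the baseline that the two records above build on is plain global histogram equalization: every gray level is remapped through the normalized cumulative histogram. The sketch below implements that baseline for an 8-bit grayscale image; VCEA's additional step of adjusting the spacing between adjacent gray values is not reproduced here, so this is only the starting point the algorithm refines.

        import numpy as np

        def histogram_equalization(img):
            """Classic global HE for an 8-bit grayscale image: map each gray level
            through the normalized cumulative histogram."""
            img = np.asarray(img, dtype=np.uint8)
            hist = np.bincount(img.ravel(), minlength=256)
            cdf = np.cumsum(hist).astype(float)
            cdf_min = cdf[np.nonzero(cdf)][0]                # first occupied bin
            denom = max(cdf[-1] - cdf_min, 1.0)              # guard against flat images
            lut = np.clip(np.round((cdf - cdf_min) / denom * 255.0), 0, 255).astype(np.uint8)
            return lut[img]

        # Toy usage: a low-contrast gradient gets stretched to the full 0-255 range.
        img = np.tile(np.linspace(100, 156, 64), (64, 1)).astype(np.uint8)
        out = histogram_equalization(img)
        print(out.min(), out.max())                          # 0 255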

  8. Mobile Augmented Reality enhances indoor navigation for wheelchair users

    Directory of Open Access Journals (Sweden)

    Luciene Chagas de Oliveira

    Full Text Available Introduction: Individuals with mobility impairments associated with lower limb disabilities often face enormous challenges to participate in routine activities and to move around various environments. For many, the use of wheelchairs is paramount to provide mobility and social inclusion. Nevertheless, they still face a number of challenges to properly function in our society. Among the many difficulties, one in particular stands out: navigating in complex internal environments (indoors). The main objective of this work is to propose an architecture based on Mobile Augmented Reality to support the development of indoor navigation systems dedicated to wheelchair users, that is also capable of recording CAD drawings of the buildings and dealing with accessibility issues for that population. Methods Overall, five main functional requirements are proposed: the ability to allow for indoor navigation by means of Mobile Augmented Reality techniques; the capacity to register and configure building CAD drawings and the position of fiducial markers, points of interest and obstacles to be avoided by the wheelchair user; the capacity to find the best route for wheelchair indoor navigation, taking stairs and other obstacles into account; allow for the visualization of virtual directional arrows in the smartphone displays; and incorporate touch or voice commands to interact with the application. The architecture is proposed as a combination of four layers: User interface; Control; Service; and Infrastructure. A proof-of-concept application was developed and tests were performed with disabled volunteers operating manual and electric wheelchairs. Results The application was implemented in Java for the Android operating system. A local database was used to store the test building CAD drawings and the position of fiducial markers and points of interest. The Android Augmented Reality library was used to implement Augmented Reality and the Blender open source

  9. Peripheral visual performance enhancement by neurofeedback training.

    Science.gov (United States)

    Nan, Wenya; Wan, Feng; Lou, Chin Ian; Vai, Mang I; Rosa, Agostinho

    2013-12-01

    Peripheral visual performance is an important ability for everyone, and a positive inter-individual correlation is found between the peripheral visual performance and the alpha amplitude during the performance test. This study investigated the effect of alpha neurofeedback training on the peripheral visual performance. A neurofeedback group of 13 subjects finished 20 sessions of alpha enhancement feedback within 20 days. The peripheral visual performance was assessed by a new dynamic peripheral visual test on the first and last training day. The results revealed that the neurofeedback group showed significant enhancement of the peripheral visual performance as well as the relative alpha amplitude during the peripheral visual test. It was not the case in the non-neurofeedback control group, which performed the tests within the same time frame as the neurofeedback group but without any training sessions. These findings suggest that alpha neurofeedback training was effective in improving peripheral visual performance. To the best of our knowledge, this is the first study to show evidence for performance improvement in peripheral vision via alpha neurofeedback training.

  10. Virtual reality exposure therapy

    OpenAIRE

    Rothbaum, BO; Hodges, L; Kooper, R

    1997-01-01

    It has been proposed that virtual reality (VR) exposure may be an alternative to standard in vivo exposure. Virtual reality integrates real-time computer graphics, body tracking devices, visual displays, and other sensory input devices to immerse a participant in a computer-generated virtual environment. Virtual reality exposure is potentially an efficient and cost-effective treatment of anxiety disorders. VR exposure therapy reduced the fear of heights in the first control...

  11. Human brain functional MRI and DTI visualization with virtual reality.

    Science.gov (United States)

    Chen, Bin; Moreland, John; Zhang, Jingyu

    2011-12-01

    Magnetic resonance diffusion tensor imaging (DTI) and functional MRI (fMRI) are two active research areas in neuroimaging. DTI is sensitive to the anisotropic diffusion of water exerted by its macromolecular environment and has been shown useful in characterizing structures of ordered tissues such as the brain white matter, myocardium, and cartilage. The diffusion tensor provides two new types of information about water diffusion: the magnitude and the spatial orientation of water diffusivity inside the tissue. This information has been used for white matter fiber tracking to reveal physical neuronal pathways inside the brain. Functional MRI measures brain activations using the hemodynamic response. The statistically derived activation map corresponds to human brain functional activities caused by neuronal activities. The combination of these two methods provides a new way to understand the human brain, from anatomical neuronal fiber connectivity to functional activity between different brain regions. In this study, virtual reality (VR) based visualization of MR DTI and fMRI is proposed, with high-resolution anatomical image segmentation and registration, ROI definition, neuronal white matter fiber tractography visualization, and fMRI activation map integration. Rationale and methods for producing and distributing stereoscopic videos are also discussed.

  12. Reality Check: Basics of Augmented, Virtual, and Mixed Reality.

    Science.gov (United States)

    Brigham, Tara J

    2017-01-01

    Augmented, virtual, and mixed reality applications all aim to enhance a user's current experience or reality. While variations of this technology are not new, within the last few years there has been a significant increase in the number of artificial reality devices or applications available to the general public. This column will explain the difference between augmented, virtual, and mixed reality and how each application might be useful in libraries. It will also provide an overview of the concerns surrounding these different reality applications and describe how and where they are currently being used.

  13. Intelligent Virtual Reality and its Impact on Spatial Skills and Academic Achievements

    Directory of Open Access Journals (Sweden)

    Esther Zaretsky

    2005-08-01

    Full Text Available It is known that training with intelligent virtual reality, through the use of computer games, can improve spatial skills, especially visualization, and enhance academic achievement. Through an experiment using Tetris software, two objectives were achieved: developing spatial as well as intelligence skills and enhancing academic achievement, with a focus on mathematics. This study followed earlier studies dealing with the impact of putting the learner into action in 3D space software. During teaching, a transition from 2D to 3D spatial perception and operation occurred. A positive transfer from 3D virtual reality rotation training to structural induction skills, by means of mental imaging, was also achieved. At the same time, motivation for learning was enhanced without using extrinsic reinforcements. The duration of concentration while using the intelligent software increased gradually up to 60 minutes.

  14. Virtual Reality Musical Instruments

    DEFF Research Database (Denmark)

    Serafin, Stefania; Erkut, Cumhur; Kojs, Juraj

    2016-01-01

    The rapid development and availability of low-cost technologies have created a wide interest in virtual reality. In the field of computer music, the term “virtual musical instruments” has been used for a long time to describe software simulations, extensions of existing musical instruments......, and ways to control them with new interfaces for musical expression. Virtual reality musical instruments (VRMIs) that include a simulated visual component delivered via a head-mounted display or other forms of immersive visualization have not yet received much attention. In this article, we present a field...

  15. Integrated visualization of simulation results and experimental devices in virtual-reality space

    International Nuclear Information System (INIS)

    Ohtani, Hiroaki; Ishiguro, Seiji; Shohji, Mamoru; Kageyama, Akira; Tamura, Yuichi

    2011-01-01

    We succeeded in integrating the visualization of both simulation results and experimental device data in virtual-reality (VR) space using a CAVE system. Simulation results are shown using the Virtual LHD software, which can show magnetic field lines, particle trajectories, and isosurfaces of plasma pressure of the Large Helical Device (LHD) based on data from the magnetohydrodynamics equilibrium simulation. A three-dimensional mouse, or wand, interactively determines the initial position and pitch angle of a drift particle or the starting point of a magnetic field line in the VR space. The trajectory of a particle and the streamline of the magnetic field are calculated with the Runge-Kutta-Huta integration method once the initial condition has been specified. The LHD vessel is visualized based on CAD data. By combining these results and data, the simulated LHD plasma can be interactively drawn within the model of the LHD experimental vessel. Through this integrated visualization, it is possible to grasp the three-dimensional positional relationship between the device and the plasma in the VR space, opening a new path for future research. (author)
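
    The field-line and drift-particle tracing described above reduces to numerically integrating dx/ds = B(x)/|B(x)| from a seed point chosen with the wand. The sketch below uses the classic fourth-order Runge-Kutta scheme (the record cites a Runge-Kutta-Huta variant) and a purely hypothetical analytic field standing in for the simulation data; trace_field_line and helical_B are illustrative names.

        import numpy as np

        def trace_field_line(B, x0, ds=0.01, steps=2000):
            """Trace a field line from seed x0 by integrating the unit tangent dx/ds = B/|B|
            with classic 4th-order Runge-Kutta."""
            def f(x):
                b = B(x)
                return b / (np.linalg.norm(b) + 1e-12)       # unit vector along the field
            x = np.asarray(x0, dtype=float)
            line = [x.copy()]
            for _ in range(steps):
                k1 = f(x)
                k2 = f(x + 0.5 * ds * k1)
                k3 = f(x + 0.5 * ds * k2)
                k4 = f(x + ds * k3)
                x = x + (ds / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
                line.append(x.copy())
            return np.array(line)

        # Hypothetical analytic field: a simple helical field, for illustration only.
        def helical_B(x):
            return np.array([-x[1], x[0], 0.3])

        print(trace_field_line(helical_B, [1.0, 0.0, 0.0], steps=10)[-1])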

  16. Auditory Emotional Cues Enhance Visual Perception

    Science.gov (United States)

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance on subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  17. A Virtual Reality Visualization Tool for Neuron Tracing.

    Science.gov (United States)

    Usher, Will; Klacansky, Pavol; Federer, Frederick; Bremer, Peer-Timo; Knoll, Aaron; Yarch, Jeff; Angelucci, Alessandra; Pascucci, Valerio

    2018-01-01

    Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists.

  18. Making the invisible visible: verbal but not visual cues enhance visual detection.

    Science.gov (United States)

    Lupyan, Gary; Spivey, Michael J

    2010-07-07

    Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.

  19. Making the invisible visible: verbal but not visual cues enhance visual detection.

    Directory of Open Access Journals (Sweden)

    Gary Lupyan

    Full Text Available BACKGROUND: Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. METHODOLOGY/PRINCIPAL FINDINGS: Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. CONCLUSIONS/SIGNIFICANCE: Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.
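
    The sensitivity measure reported in the two records above, d', comes from signal detection theory: d' = z(hit rate) - z(false-alarm rate). The sketch below computes it from raw response counts, using the common 1/(2N) correction for hit or false-alarm rates of exactly 0 or 1; the counts in the example are hypothetical.

        from statistics import NormalDist

        def d_prime(hits, misses, false_alarms, correct_rejections):
            """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
            z = NormalDist().inv_cdf
            n_signal = hits + misses
            n_noise = false_alarms + correct_rejections
            # Clamp rates away from 0 and 1 by half a trial (the 1/(2N) correction).
            hr = min(max(hits / n_signal, 0.5 / n_signal), 1.0 - 0.5 / n_signal)
            fa = min(max(false_alarms / n_noise, 0.5 / n_noise), 1.0 - 0.5 / n_noise)
            return z(hr) - z(fa)

        # Example: 45/50 hits and 10/50 false alarms give d' of about 2.12.
        print(round(d_prime(45, 5, 10, 40), 2))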

  20. Mobile Platform Augmented Reality for Enhanced Operations on the International Space Station, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — To develop an Augmented Reality system that runs on a small portable device to aid crew in routine maintenance activities by providing enhanced information and...

  1. Use of Virtual Reality Technology to Enhance Undergraduate Learning in Abnormal Psychology

    Science.gov (United States)

    Stark-Wroblewski, Kim; Kreiner, David S.; Boeding, Christopher M.; Lopata, Ashley N.; Ryan, Joseph J.; Church, Tina M.

    2008-01-01

    We examined whether using virtual reality (VR) technology to provide students with direct exposure to evidence-based psychological treatment approaches would enhance their understanding of and appreciation for such treatments. Students enrolled in an abnormal psychology course participated in a VR session designed to help clients overcome the fear…

  2. Virtual reality training improves balance function.

    Science.gov (United States)

    Mao, Yurong; Chen, Peiming; Li, Le; Huang, Dongfeng

    2014-09-01

    Virtual reality is a new technology that simulates a three-dimensional virtual world on a computer and enables the generation of visual, audio, and haptic feedback for the full immersion of users. Users can interact with and observe objects in three-dimensional visual space without limitation. At present, virtual reality training has been widely used in rehabilitation therapy for balance dysfunction. This paper summarizes related articles and other articles suggesting that virtual reality training can improve balance dysfunction in patients after neurological diseases. When patients perform virtual reality training, the prefrontal, parietal cortical areas and other motor cortical networks are activated. These activations may be involved in the reconstruction of neurons in the cerebral cortex. Growing evidence from clinical studies reveals that virtual reality training improves the neurological function of patients with spinal cord injury, cerebral palsy and other neurological impairments. These findings suggest that virtual reality training can activate the cerebral cortex and improve the spatial orientation capacity of patients, thus facilitating the cortex to control balance and increase motion function.

  3. Virtual reality training improves balance function

    Science.gov (United States)

    Mao, Yurong; Chen, Peiming; Li, Le; Huang, Dongfeng

    2014-01-01

    Virtual reality is a new technology that simulates a three-dimensional virtual world on a computer and enables the generation of visual, audio, and haptic feedback for the full immersion of users. Users can interact with and observe objects in three-dimensional visual space without limitation. At present, virtual reality training has been widely used in rehabilitation therapy for balance dysfunction. This paper summarizes related articles and other articles suggesting that virtual reality training can improve balance dysfunction in patients after neurological diseases. When patients perform virtual reality training, the prefrontal, parietal cortical areas and other motor cortical networks are activated. These activations may be involved in the reconstruction of neurons in the cerebral cortex. Growing evidence from clinical studies reveals that virtual reality training improves the neurological function of patients with spinal cord injury, cerebral palsy and other neurological impairments. These findings suggest that virtual reality training can activate the cerebral cortex and improve the spatial orientation capacity of patients, thus facilitating the cortex to control balance and increase motion function. PMID:25368651

  4. Virtual Reality Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — FUNCTION: Performs basic and applied research in interactive 3D computer graphics, including visual analytics, virtual environments, and augmented reality (AR). The...

  5. Mobile Augmented Reality as Usability to Enhance Nurse Prevent Violence Learning Satisfaction.

    Science.gov (United States)

    Hsu, Han-Jen; Weng, Wei-Kai; Chou, Yung-Lang; Huang, Pin-Wei

    2018-01-01

    With violence in hospitals, nurses are at high risk of patient aggression in the workplace. This learning course applies Mobile Augmented Reality to enhance nurses' violence-prevention skills. Increasingly, mobile technologies are being introduced and integrated into classroom teaching and clinical applications, improving the quality of the learning course and providing new experiences for nurses.

  6. Exploratory Application of Augmented Reality/Mixed Reality Devices for Acute Care Procedure Training

    Science.gov (United States)

    Kobayashi, Leo; Zhang, Xiao Chi; Collins, Scott A.; Karim, Naz; Merck, Derek L.

    2018-01-01

    Introduction Augmented reality (AR), mixed reality (MR), and virtual reality devices are enabling technologies that may facilitate effective communication in healthcare between those with information and knowledge (clinician/specialist; expert; educator) and those seeking understanding and insight (patient/family; non-expert; learner). Investigators initiated an exploratory program to enable the study of AR/MR use-cases in acute care clinical and instructional settings. Methods Academic clinician educators, computer scientists, and diagnostic imaging specialists conducted a proof-of-concept project to 1) implement a core holoimaging pipeline infrastructure and open-access repository at the study institution, and 2) use novel AR/MR techniques on off-the-shelf devices with holoimages generated by the infrastructure to demonstrate their potential role in the instructive communication of complex medical information. Results The study team successfully developed a medical holoimaging infrastructure methodology to identify, retrieve, and manipulate real patients’ de-identified computed tomography and magnetic resonance imagesets for rendering, packaging, transfer, and display of modular holoimages onto AR/MR headset devices and connected displays. Holoimages containing key segmentations of cervical and thoracic anatomic structures and pathology were overlaid and registered onto physical task trainers for simulation-based “blind insertion” invasive procedural training. During the session, learners experienced and used task-relevant anatomic holoimages for central venous catheter and tube thoracostomy insertion training with enhanced visual cues and haptic feedback. Direct instructor access into the learner’s AR/MR headset view of the task trainer was achieved for visual-axis interactive instructional guidance. Conclusion Investigators implemented a core holoimaging pipeline infrastructure and modular open-access repository to generate and enable access to modular

  7. Exploratory Application of Augmented Reality/Mixed Reality Devices for Acute Care Procedure Training.

    Science.gov (United States)

    Kobayashi, Leo; Zhang, Xiao Chi; Collins, Scott A; Karim, Naz; Merck, Derek L

    2018-01-01

    Augmented reality (AR), mixed reality (MR), and virtual reality devices are enabling technologies that may facilitate effective communication in healthcare between those with information and knowledge (clinician/specialist; expert; educator) and those seeking understanding and insight (patient/family; non-expert; learner). Investigators initiated an exploratory program to enable the study of AR/MR use-cases in acute care clinical and instructional settings. Academic clinician educators, computer scientists, and diagnostic imaging specialists conducted a proof-of-concept project to 1) implement a core holoimaging pipeline infrastructure and open-access repository at the study institution, and 2) use novel AR/MR techniques on off-the-shelf devices with holoimages generated by the infrastructure to demonstrate their potential role in the instructive communication of complex medical information. The study team successfully developed a medical holoimaging infrastructure methodology to identify, retrieve, and manipulate real patients' de-identified computed tomography and magnetic resonance imagesets for rendering, packaging, transfer, and display of modular holoimages onto AR/MR headset devices and connected displays. Holoimages containing key segmentations of cervical and thoracic anatomic structures and pathology were overlaid and registered onto physical task trainers for simulation-based "blind insertion" invasive procedural training. During the session, learners experienced and used task-relevant anatomic holoimages for central venous catheter and tube thoracostomy insertion training with enhanced visual cues and haptic feedback. Direct instructor access into the learner's AR/MR headset view of the task trainer was achieved for visual-axis interactive instructional guidance. Investigators implemented a core holoimaging pipeline infrastructure and modular open-access repository to generate and enable access to modular holoimages during exploratory pilot stage

  8. Exploratory Application of Augmented Reality/Mixed Reality Devices for Acute Care Procedure Training

    Directory of Open Access Journals (Sweden)

    Leo Kobayashi

    2017-12-01

    Full Text Available Introduction Augmented reality (AR), mixed reality (MR), and virtual reality devices are enabling technologies that may facilitate effective communication in healthcare between those with information and knowledge (clinician/specialist; expert; educator) and those seeking understanding and insight (patient/family; non-expert; learner). Investigators initiated an exploratory program to enable the study of AR/MR use-cases in acute care clinical and instructional settings. Methods Academic clinician educators, computer scientists, and diagnostic imaging specialists conducted a proof-of-concept project to 1) implement a core holoimaging pipeline infrastructure and open-access repository at the study institution, and 2) use novel AR/MR techniques on off-the-shelf devices with holoimages generated by the infrastructure to demonstrate their potential role in the instructive communication of complex medical information. Results The study team successfully developed a medical holoimaging infrastructure methodology to identify, retrieve, and manipulate real patients’ de-identified computed tomography and magnetic resonance imagesets for rendering, packaging, transfer, and display of modular holoimages onto AR/MR headset devices and connected displays. Holoimages containing key segmentations of cervical and thoracic anatomic structures and pathology were overlaid and registered onto physical task trainers for simulation-based “blind insertion” invasive procedural training. During the session, learners experienced and used task-relevant anatomic holoimages for central venous catheter and tube thoracostomy insertion training with enhanced visual cues and haptic feedback. Direct instructor access into the learner’s AR/MR headset view of the task trainer was achieved for visual-axis interactive instructional guidance. Conclusion Investigators implemented a core holoimaging pipeline infrastructure and modular open-access repository to generate and enable

  9. Pairing virtual reality with dynamic posturography serves to differentiate between patients experiencing visual vertigo

    Directory of Open Access Journals (Sweden)

    Streepey Jefferson

    2007-07-01

    Full Text Available Abstract Background To determine if increased visual dependence can be quantified through its impact on automatic postural responses, we have measured the combined effect on the latencies and magnitudes of postural response kinematics of transient optic flow in the pitch plane with platform rotations and translations. Methods Six healthy (29–31 yrs) and 4 visually sensitive (27–57 yrs) subjects stood on a platform rotated (6 deg of dorsiflexion at 30 deg/sec) or translated (5 cm at 5 deg/sec) for 200 msec. Subjects either had eyes closed or viewed an immersive, stereo, wide field of view virtual environment (scene moved in upward pitch for a 200 msec period) for three 30 sec trials at 5 velocities. RMS values and peak velocities of head, trunk, and head with respect to trunk were calculated. EMG responses of 6 trunk and lower limb muscles were collected and latencies and magnitudes of responses determined. Results No effect of visual velocity was observed in EMG response latencies and magnitudes. Healthy subjects exhibited significant effects. Conclusion Differentiation of postural kinematics in visually sensitive subjects when exposed to the combined perturbations suggests that virtual reality technology could be useful for differential diagnosis and specifically designed interventions for individuals whose chief complaint is sensitivity to visual motion.

  10. A 3-D mixed-reality system for stereoscopic visualization of medical dataset.

    Science.gov (United States)

    Ferrari, Vincenzo; Megali, Giuseppe; Troia, Elena; Pietrabissa, Andrea; Mosca, Franco

    2009-11-01

    We developed a simple, light, and cheap 3-D visualization device based on mixed reality that can be used by physicians to see preoperative radiological exams in a natural way. The system allows the user to see stereoscopic "augmented images," which are created by mixing 3-D virtual models of anatomies obtained by processing preoperative volumetric radiological images (computed tomography or MRI) with real patient live images, grabbed by means of cameras. The interface of the system consists of a head-mounted display equipped with two high-definition cameras. The cameras are mounted in correspondence with the user's eyes and allow live images of the patient to be grabbed from the same point of view as the user. The system does not use any external tracker to detect movements of the user or the patient. The movements of the user's head and the alignment of the virtual patient with the real one are done using machine vision methods applied on pairs of live images. Experimental results, concerning frame rate and alignment precision between virtual and real patient, demonstrate that machine vision methods used for localization are appropriate for the specific application and that systems based on stereoscopic mixed reality are feasible and can be proficiently adopted in clinical practice.

  11. Illustrative visualization of 3D city models

    Science.gov (United States)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  12. Manifold compositions, music visualization, and scientific sonification in an immersive virtual-reality environment.

    Energy Technology Data Exchange (ETDEWEB)

    Kaper, H. G.

    1998-01-05

    An interdisciplinary project encompassing sound synthesis, music composition, sonification, and visualization of music is facilitated by the high-performance computing capabilities and the virtual-reality environments available at Argonne National Laboratory. The paper describes the main features of the project's centerpiece, DIASS (Digital Instrument for Additive Sound Synthesis); “A.N.L.-folds”, an equivalence class of compositions produced with DIASS; and application of DIASS in two experiments in the sonification of complex scientific data. Some of the larger issues connected with this project, such as the changing ways in which both scientists and composers perform their tasks, are briefly discussed.
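
    Additive sound synthesis, the technique at the core of DIASS, builds a tone by summing sinusoidal partials. The sketch below shows the idea in its most basic, static form; DIASS itself applies time-varying control to the parameters of each partial, which is not reproduced here, and the partial list in the example is arbitrary.

        import numpy as np

        def additive_tone(partials, duration=1.0, sr=44100):
            """Sum sinusoidal partials, given as (frequency in Hz, amplitude) pairs,
            into a single waveform normalized to [-1, 1]."""
            t = np.linspace(0.0, duration, int(sr * duration), endpoint=False)
            wave = np.zeros_like(t)
            for freq, amp in partials:
                wave += amp * np.sin(2.0 * np.pi * freq * t)
            peak = np.max(np.abs(wave))
            return wave / peak if peak > 0 else wave

        # A few harmonics of 220 Hz with decreasing amplitudes.
        tone = additive_tone([(220, 1.0), (440, 0.5), (660, 0.33), (880, 0.25)])
        print(tone.shape)                                    # (44100,)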

  13. Visual distinctiveness can enhance recency effects.

    Science.gov (United States)

    Bornstein, B H; Neely, C B; LeCompte, D C

    1995-05-01

    Experimental efforts to meliorate the modality effect have included attempts to make the visual stimulus more distinctive. McDowd and Madigan (1991) failed to find an enhanced recency effect in serial recall when the last item was made more distinct in terms of its color. In an attempt to extend this finding, three experiments were conducted in which visual distinctiveness was manipulated in a different manner, by combining the dimensions of physical size and coloration (i.e., whether the stimuli were solid or outlined in relief). Contrary to previous findings, recency was enhanced when the size and coloration of the last item differed from the other items in the list, regardless of whether the "distinctive" item was larger or smaller than the remaining items. The findings are considered in light of other research that has failed to obtain a similar enhanced recency effect, and their implications for current theories of the modality effect are discussed.

  14. Confronting an Augmented Reality

    Science.gov (United States)

    Munnerley, Danny; Bacon, Matt; Wilson, Anna; Steele, James; Hedberg, John; Fitzgerald, Robert

    2012-01-01

    How can educators make use of augmented reality technologies and practices to enhance learning and why would we want to embrace such technologies anyway? How can an augmented reality help a learner confront, interpret and ultimately comprehend reality itself? In this article, we seek to initiate a discussion that focuses on these questions, and…

  15. Low-cost, smartphone based frequency doubling technology visual field testing using virtual reality (Conference Presentation)

    Science.gov (United States)

    Alawa, Karam A.; Sayed, Mohamed; Arboleda, Alejandro; Durkee, Heather A.; Aguilar, Mariela C.; Lee, Richard K.

    2017-02-01

    Glaucoma is the leading cause of irreversible blindness worldwide. Due to its wide prevalence, effective screening tools are necessary. The purpose of this project is to design and evaluate a system that enables portable, cost effective, smartphone based visual field screening based on frequency doubling technology. The system is comprised of an Android smartphone to display frequency doubling stimuli and handle processing, a Bluetooth remote for user input, and a virtual reality headset to simulate the exam. The LG Nexus 5 smartphone and BoboVR Z3 virtual reality headset were used for their screen size and lens configuration, respectively. The system is capable of running the C-20, N-30, 24-2, and 30-2 testing patterns. Unlike the existing system, the smartphone FDT tests both eyes concurrently by showing the same background to both eyes but only displaying the stimulus to one eye at a time. Both the Humphrey Zeiss FDT and the smartphone FDT were tested on five subjects without a history of ocular disease with the C-20 testing pattern. The smartphone FDT successfully produced frequency doubling stimuli at the correct spatial and temporal frequency. Subjects could not tell which eye was being tested. All five subjects preferred the smartphone FDT to the Humphrey Zeiss FDT due to comfort and ease of use. The smartphone FDT is a low-cost, portable visual field screening device that can be used as a screening tool for glaucoma.
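
    The frequency-doubling stimulus referred to above is a low-spatial-frequency sinusoidal grating whose contrast is counterphase-flickered at a high temporal rate. The sketch below generates one frame of such a stimulus; the parameter values (0.25 cycles/degree, 25 Hz) are typical of FDT perimetry and are assumptions, not necessarily those used by the smartphone system described.

        import numpy as np

        def fdt_frame(t, size=256, cpd=0.25, deg_span=10.0, flicker_hz=25.0, contrast=1.0):
            """One frame of a counterphase-flickering, low-spatial-frequency grating,
            the kind of stimulus frequency-doubling perimetry relies on."""
            deg = np.linspace(0.0, deg_span, size)           # visual angle across the patch
            grating = np.sin(2.0 * np.pi * cpd * deg)        # low spatial frequency (~0.25 c/deg)
            envelope = np.cos(2.0 * np.pi * flicker_hz * t)  # counterphase flicker over time
            frame = 0.5 + 0.5 * contrast * envelope * grating  # around a mid-gray background
            return np.tile(frame, (size, 1))                 # stack rows: vertical bars

        print(fdt_frame(0.0).shape)                          # (256, 256), values in [0, 1]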

  16. Augmented Reality

    DEFF Research Database (Denmark)

    Kjærgaard, Hanne Wacher; Kjeldsen, Lars Peter Bech; Rahn, Annette

    2015-01-01

    This chapter describes the use of iPad-facilitated application of augmented reality in the teaching of highly complex anatomical and physiological subjects in the training of nurses at undergraduate level. The general aim of the project is to investigate the potentials of this application in terms of making the complex content and context of these subjects more approachable to the students through the visualization made possible through the use of this technology. A case study is described in this chapter. Issues and factors required for the sustainable use of the mobile-facilitated application of augmented reality are discussed.

  17. Augmented reality: a review.

    Science.gov (United States)

    Berryman, Donna R

    2012-01-01

    Augmented reality is a technology that overlays digital information on objects or places in the real world for the purpose of enhancing the user experience. It is not virtual reality, that is, the technology that creates a totally digital or computer created environment. Augmented reality, with its ability to combine reality and digital information, is being studied and implemented in medicine, marketing, museums, fashion, and numerous other areas. This article presents an overview of augmented reality, discussing what it is, how it works, its current implementations, and its potential impact on libraries.

  18. Sonification and haptic feedback in addition to visual feedback enhances complex motor task learning.

    Science.gov (United States)

    Sigrist, Roland; Rauter, Georg; Marchal-Crespo, Laura; Riener, Robert; Wolf, Peter

    2015-03-01

    Concurrent augmented feedback has been shown to be less effective for learning simple motor tasks than for complex tasks. However, as mostly artificial tasks have been investigated, transfer of results to tasks in sports and rehabilitation remains unknown. Therefore, in this study, the effect of different concurrent feedback was evaluated in trunk-arm rowing. It was then investigated whether multimodal audiovisual and visuohaptic feedback are more effective for learning than visual feedback only. Naïve subjects (N = 24) trained in three groups on a highly realistic virtual reality-based rowing simulator. In the visual feedback group, the subject's oar was superimposed to the target oar, which continuously became more transparent when the deviation between the oars decreased. Moreover, a trace of the subject's trajectory emerged if deviations exceeded a threshold. The audiovisual feedback group trained with oar movement sonification in addition to visual feedback to facilitate learning of the velocity profile. In the visuohaptic group, the oar movement was inhibited by path deviation-dependent braking forces to enhance learning of spatial aspects. All groups significantly decreased the spatial error (tendency in visual group) and velocity error from baseline to the retention tests. Audiovisual feedback fostered learning of the velocity profile significantly more than visuohaptic feedback. The study revealed that well-designed concurrent feedback fosters complex task learning, especially if the advantages of different modalities are exploited. Further studies should analyze the impact of within-feedback design parameters and the transferability of the results to other tasks in sports and rehabilitation.

  19. Augmented reality in intraventricular neuroendoscopy.

    Science.gov (United States)

    Finger, T; Schaumann, A; Schulz, M; Thomale, Ulrich-W

    2017-06-01

    Individual planning of the entry point and the use of navigation has become more relevant in intraventricular neuroendoscopy. Navigated neuroendoscopic solutions are continuously improving. We describe experimentally measured accuracy and our first experience with augmented reality-enhanced navigated neuroendoscopy for intraventricular pathologies. Augmented reality-enhanced navigated endoscopy was tested for accuracy in an experimental setting. For this purpose, a 3D-printed head model with a right parietal lesion was scanned with thin-slice computed tomography. Segmentation of the tumor lesion was performed using Scopis NovaPlan navigation software. An optical reference matrix is used to register the neuroendoscope's geometry and its field of view. The pre-planned ROI and trajectory are superimposed in the endoscopic image. The accuracy of the fit between the superimposed contour and the endoscopically visualized lesion was quantified by measuring the deviation between their midpoints. The technique was subsequently used in 29 cases with CSF circulation pathologies. Navigation planning included defining the entry points, regions of interest and trajectories, superimposed as augmented reality on the endoscopic video screen during intervention. Patients were evaluated for postoperative imaging, reoperations, and possible complications. The experimental setup revealed a deviation of the ROI's midpoint from the real target by 1.2 ± 0.4 mm. The clinical study included 18 cyst fenestrations, ten biopsies, seven endoscopic third ventriculostomies, six stent placements, and two shunt implantations, some of which were combined in the same patient. In the cyst fenestration cases, postoperative cyst volume was significantly reduced in all patients, by a mean of 47%. In biopsies, the diagnostic yield was 100%. Reoperations during a follow-up period of 11.4 ± 10.2 months were necessary in two cases. Complications included one postoperative hygroma and one insufficient

  20. Augmented reality for breast imaging.

    Science.gov (United States)

    Rancati, Alberto; Angrigiani, Claudio; Nava, Maurizio B; Catanuto, Giuseppe; Rocco, Nicola; Ventrice, Fernando; Dorr, Julio

    2018-02-21

    Augmented reality (AR) enables the superimposition of virtual reality reconstructions onto clinical images of a real patient, in real time. This allows visualization of internal structures through overlying tissues, thereby providing a virtually transparent view of surgical anatomy. AR has been applied to neurosurgery, which utilizes a relatively fixed space, frames, and bony references; the application of AR facilitates the relationship between virtual and real data. Augmented breast imaging (ABI) is described. Breast MRI studies for breast implant patients with seroma were performed using a Siemens 3T system with a body coil and a four-channel bilateral phased-array breast coil as the transmitter and receiver, respectively. The contrast agent (CA) used was a gadolinium (Gd) injection (0.1 mmol/kg at 2 ml/s) delivered by a programmable power injector. DICOM-formatted image data from 10 MRI cases of breast implant seroma and 10 MRI cases with T1-2 N0 M0 breast cancer were imported and transformed into augmented reality images. ABI demonstrated stereoscopic depth perception, focal point convergence, 3D cursor use, and joystick fly-through. Applying ABI to the breast can improve clinical outcomes, giving an enhanced view of the structures to work on. It should be further studied to determine its utility in clinical practice.
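
    The abstract does not detail how the DICOM data were converted for AR display, so the following is only an illustrative sketch of one plausible preprocessing step: loading a DICOM MRI series and extracting an iso-surface mesh that an AR viewer could render. The libraries used, the directory path, and the iso-level are assumptions, not details from the paper.

```python
# Illustrative sketch only: load a DICOM MRI series and extract an iso-surface
# mesh suitable for an AR renderer. Path, ordering and threshold are assumptions.
import numpy as np
import pydicom
from pathlib import Path
from skimage import measure

def dicom_series_to_mesh(series_dir, iso_level=300.0):
    slices = [pydicom.dcmread(str(p)) for p in sorted(Path(series_dir).glob("*.dcm"))]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))   # order slices by z position
    volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])
    verts, faces, normals, values = measure.marching_cubes(volume, level=iso_level)
    return verts, faces                                           # hand these to the AR viewer

# verts, faces = dicom_series_to_mesh("/data/breast_mri_case01")  # hypothetical path
```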

  1. Telemedicine with mobile devices and augmented reality for early postoperative care.

    Science.gov (United States)

    Ponce, Brent A; Brabston, Eugene W; Shin Zu; Watson, Shawna L; Baker, Dustin; Winn, Dennis; Guthrie, Barton L; Shenai, Mahesh B

    2016-08-01

    Advanced features are being added to telemedicine paradigms to enhance usability and usefulness. Virtual Interactive Presence (VIP) is a technology that allows a surgeon and patient to interact in a "merged reality" space, to facilitate verbal, visual, and manual interaction. In this clinical study, a mobile VIP iOS application was introduced into routine post-operative orthopedic and neurosurgical care. Survey responses endorse the usefulness of this tool. The virtual interaction provides needed virtual follow-up in instances where in-person follow-up may be limited, and enhances the subjective patient experience.

  2. Can Driving-Simulator Training Enhance Visual Attention, Cognition, and Physical Functioning in Older Adults?

    OpenAIRE

    Mathias Haeger; Otmar Bock; Daniel Memmert; Stefanie Hüttermann

    2018-01-01

    Virtual reality offers a good way to implement real-life tasks in a laboratory-based training or testing scenario; thus, computerized training in a driving simulator offers an ecologically valid training approach. Visual attention is known to influence driving performance, so we used the reverse approach and tested the influence of driving training on visual attention and executive functions. Thirty-seven healthy older participants (mean age: 71.46 ± 4.09; gender: 17 men and...

  3. Post-encoding emotional arousal enhances consolidation of item memory, but not reality-monitoring source memory.

    Science.gov (United States)

    Wang, Bo; Sun, Bukuan

    2017-03-01

    The current study examined whether the effect of post-encoding emotional arousal on item memory extends to reality-monitoring source memory and, if so, whether the effect depends on emotionality of learning stimuli and testing format. In Experiment 1, participants encoded neutral words and imagined or viewed their corresponding object pictures. Then they watched a neutral, positive, or negative video. The 24-hour delayed test showed that emotional arousal had little effect on both item memory and reality-monitoring source memory. Experiment 2 was similar except that participants encoded neutral, positive, and negative words and imagined or viewed their corresponding object pictures. The results showed that positive and negative emotional arousal induced after encoding enhanced consolidation of item memory, but not reality-monitoring source memory, regardless of emotionality of learning stimuli. Experiment 3, identical to Experiment 2 except that participants were tested only on source memory for all the encoded items, still showed that post-encoding emotional arousal had little effect on consolidation of reality-monitoring source memory. Taken together, regardless of emotionality of learning stimuli and regardless of testing format of source memory (conjunction test vs. independent test), the facilitatory effect of post-encoding emotional arousal on item memory does not generalize to reality-monitoring source memory.

  4. Novelty enhances visual perception.

    Directory of Open Access Journals (Sweden)

    Judith Schomaker

    Full Text Available The effects of novelty on low-level visual perception were investigated in two experiments using a two-alternative forced-choice tilt detection task. A target, consisting of a Gabor patch, was preceded by a cue that was either a novel or a familiar fractal image. Participants had to indicate whether the Gabor stimulus was vertically oriented or slightly tilted. In the first experiment tilt angle was manipulated; in the second contrast of the Gabor patch was varied. In the first, we found that sensitivity was enhanced after a novel compared to a familiar cue, and in the second we found sensitivity to be enhanced for novel cues in later experimental blocks when participants became more and more familiarized with the familiar cue. These effects were not caused by a shift in the response criterion. This shows for the first time that novel stimuli affect low-level characteristics of perception. We suggest that novelty can elicit a transient attentional response, thereby enhancing perception.

  5. Novelty enhances visual perception.

    Science.gov (United States)

    Schomaker, Judith; Meeter, Martijn

    2012-01-01

    The effects of novelty on low-level visual perception were investigated in two experiments using a two-alternative forced-choice tilt detection task. A target, consisting of a Gabor patch, was preceded by a cue that was either a novel or a familiar fractal image. Participants had to indicate whether the Gabor stimulus was vertically oriented or slightly tilted. In the first experiment tilt angle was manipulated; in the second contrast of the Gabor patch was varied. In the first, we found that sensitivity was enhanced after a novel compared to a familiar cue, and in the second we found sensitivity to be enhanced for novel cues in later experimental blocks when participants became more and more familiarized with the familiar cue. These effects were not caused by a shift in the response criterion. This shows for the first time that novel stimuli affect low-level characteristics of perception. We suggest that novelty can elicit a transient attentional response, thereby enhancing perception.

  6. Virtual reality for employability skills

    OpenAIRE

    Minocha, Shailey; Tudor, Ana-Despina

    2017-01-01

    We showed a variety of virtual reality technologies and, through examples, discussed how virtual reality technology is transforming work styles and workplaces. Virtual reality is becoming pervasive in almost all domains, from the arts and environmental causes to medical education, disaster-management training, and supporting patients with dementia. Thus, an awareness of virtual reality technology and its integration into curriculum design will provide and enhance employability skills.

  7. Design and implementation of a 3D ocean virtual reality and visualization engine

    Science.gov (United States)

    Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing

    2012-12-01

    In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean aim at high-fidelity simulation of the ocean environment, visualization of massive and multidimensional marine data, and imitation of marine life. VV-Ocean is composed of five modules: a memory management module, a resources management module, a scene management module, a rendering process management module, and an interaction management module. There are three core functions in VV-Ocean: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, and imitating and simulating marine life intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce drifting and diffusion processes of oil spilling from the sea bottom to the surface. Environmental factors such as ocean current and wind field have been considered in this simulation. On this platform, the oil-spill process is abstracted as the movement of a large number of oil particles. The result shows that oil particles blend with water well and the platform meets the requirement for real-time and interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.
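
    As a rough illustration of the particle abstraction described above (an oil spill rendered as many particles driven by current, wind drag, and diffusion), here is a hedged Python sketch; the integration scheme, coefficients, and function name are assumptions for illustration, not VV-Ocean code.

```python
# Illustrative particle-advection sketch: oil spill as particles moved by the
# current field, wind drag, buoyant rise and random turbulent diffusion.
import numpy as np

def advect_oil_particles(pos, current, wind, dt,
                         wind_drag=0.03, rise_speed=0.05, diffusion=0.2):
    """pos, current: (N, 3) arrays; wind: (3,) velocity [m/s]; dt in seconds."""
    drift = current + wind_drag * wind                      # surface-drift approximation
    drift = drift + np.array([0.0, 0.0, rise_speed])        # buoyant rise toward the surface
    noise = diffusion * np.sqrt(dt) * np.random.randn(*pos.shape)   # turbulent diffusion
    new_pos = pos + drift * dt + noise
    new_pos[:, 2] = np.minimum(new_pos[:, 2], 0.0)          # clamp at the sea surface (z = 0)
    return new_pos

pos = np.random.randn(10000, 3) * 2.0 - np.array([0.0, 0.0, 50.0])   # plume near the sea bed
current = np.tile([0.3, 0.1, 0.0], (10000, 1))
pos = advect_oil_particles(pos, current, wind=np.array([5.0, 0.0, 0.0]), dt=0.1)
```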

  8. Transforming Experience: The Potential of Augmented Reality and Virtual Reality for Enhancing Personal and Clinical Change.

    Science.gov (United States)

    Riva, Giuseppe; Baños, Rosa M; Botella, Cristina; Mantovani, Fabrizia; Gaggioli, Andrea

    2016-01-01

    During life, many personal changes occur. These include changing house, school, work, and even friends and partners. However, the daily experience shows clearly that, in some situations, subjects are unable to change even if they want to. The recent advances in psychology and neuroscience are now providing a better view of personal change, the change affecting our assumptive world: (a) the focus of personal change is reducing the distance between self and reality (conflict); (b) this reduction is achieved through (1) an intense focus on the particular experience creating the conflict or (2) an internal or external reorganization of this experience; (c) personal change requires a progression through a series of different stages that however happen in discontinuous and non-linear ways; and (d) clinical psychology is often used to facilitate personal change when subjects are unable to move forward. Starting from these premises, the aim of this paper is to review the potential of virtuality for enhancing the processes of personal and clinical change. First, the paper focuses on the two leading virtual technologies - augmented reality (AR) and virtual reality (VR) - exploring their current uses in behavioral health and the outcomes of the 28 available systematic reviews and meta-analyses. Then the paper discusses the added value provided by VR and AR in transforming our external experience by focusing on the high level of personal efficacy and self-reflectiveness generated by their sense of presence and emotional engagement. Finally, it outlines the potential future use of virtuality for transforming our inner experience by structuring, altering, and/or replacing our bodily self-consciousness. The final outcome may be a new generation of transformative experiences that provide knowledge that is epistemically inaccessible to the individual until he or she has that experience, while at the same time transforming the individual's worldview.

  9. Transforming Experience: The Potential of Augmented Reality and Virtual Reality for Enhancing Personal and Clinical Change

    Directory of Open Access Journals (Sweden)

    Giuseppe Riva

    2016-09-01

    Full Text Available During our life we undergo many personal changes: we change our house, our school, our work and even our friends and partners. However, our daily experience shows clearly that in some situations subjects are unable to change even if they want to. The recent advances in psychology and neuroscience are now providing a better view of personal change, the change affecting our assumptive world: (a) the focus of personal change is reducing the distance between self and reality (conflict); (b) this reduction is achieved through (1) an intense focus on the particular experience creating the conflict or (2) an internal or external reorganization of this experience; (c) personal change requires a progression through a series of different stages; (d) clinical psychology is often used to facilitate personal change when subjects are unable to move forward. Starting from these premises, the aim of this paper is to review the potential of virtuality for enhancing the processes of personal and clinical change. First, the paper will focus on the two leading virtual technologies – Augmented Reality (AR) and Virtual Reality (VR) – exploring their current uses in behavioral health and the outcomes of the 28 available systematic reviews and meta-analyses. Then the paper discusses the added value provided by VR and AR in transforming our external experience, by focusing on the high level of self-reflectiveness and personal efficacy induced by their emotional engagement and sense of presence. Finally, it outlines the potential future use of virtuality for transforming our inner experience by structuring, altering and/or replacing our bodily self-consciousness. The final outcome may be a new generation of transformative experiences that provide knowledge that is epistemically inaccessible to the individual until he or she has that experience, while at the same time transforming the individual’s worldview.

  10. Recent Visual Experience Shapes Visual Processing in Rats through Stimulus-Specific Adaptation and Response Enhancement.

    Science.gov (United States)

    Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans

    2017-03-20

    From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Collision avoidance in persons with homonymous visual field defects under virtual reality conditions.

    Science.gov (United States)

    Papageorgiou, Eleni; Hardiess, Gregor; Ackermann, Hermann; Wiethoelter, Horst; Dietz, Klaus; Mallot, Hanspeter A; Schiefer, Ulrich

    2012-01-01

    The aim of the present study was to examine the effect of homonymous visual field defects (HVFDs) on collision avoidance of dynamic obstacles at an intersection under virtual reality (VR) conditions. Overall performance was quantitatively assessed as the number of collisions at a virtual intersection at two difficulty levels. HVFDs were assessed by binocular semi-automated kinetic perimetry within the 90° visual field, stimulus III4e and the area of sparing within the affected hemifield (A-SPAR in deg(2)) was calculated. The effect of A-SPAR, age, gender, side of brain lesion, time since brain lesion and presence of macular sparing on the number of collisions, as well as performance over time were investigated. Thirty patients (10 female, 20 male, age range: 19-71 years) with HVFDs due to unilateral vascular brain lesions and 30 group-age-matched subjects with normal visual fields were examined. The mean number of collisions was higher for patients and in the more difficult level they experienced more collisions with vehicles approaching from the blind side than the seeing side. Lower A-SPAR and increasing age were associated with decreasing performance. However, in agreement with previous studies, wide variability in performance among patients with identical visual field defects was observed and performance of some patients was similar to that of normal subjects. Both patients and healthy subjects displayed equal improvement of performance over time in the more difficult level. In conclusion, our results suggest that visual-field related parameters per se are inadequate in predicting successful collision avoidance. Individualized approaches which also consider compensatory strategies by means of eye and head movements should be introduced. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Virtual reality and 3D animation in forensic visualization.

    Science.gov (United States)

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal media to accurately visualize crime or accident scenes to the viewers and in the courtrooms. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace the traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing, to investigate the state-of-the-art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and level-of-detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. © 2010 American Academy of Forensic Sciences.

  13. A teleoperation training simulator with visual and kinesthetic force virtual reality

    Science.gov (United States)

    Kim, Won S.; Schenker, Paul

    1992-01-01

    A force-reflecting teleoperation training simulator with a high-fidelity real-time graphics display has been developed for operator training. A novel feature of this simulator is that it enables the operator to feel contact forces and torques through a force-reflecting controller during the execution of the simulated peg-in-hole task, providing the operator with the feel of visual and kinesthetic force virtual reality. A peg-in-hole task is used in our simulated teleoperation trainer as a generic teleoperation task. A quasi-static analysis of a two-dimensional peg-in-hole task model has been extended to a three-dimensional model analysis to compute contact forces and torques for a virtual realization of kinesthetic force feedback. The simulator allows the user to specify force reflection gains and stiffness (compliance) values of the manipulator hand for both the three translational and the three rotational axes in Cartesian space. Three viewing modes are provided for graphics display: single view, two split views, and stereoscopic view.
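
    The following is a minimal, hypothetical sketch of the kind of computation the abstract implies: a virtual per-axis stiffness (compliance) model that converts the pose error of the simulated peg into reflected forces and torques, scaled by user-set force-reflection gains and saturated before being sent to the hand controller. The gains, stiffness values, and limits below are invented for illustration and are not the simulator's.

```python
# Hedged sketch: per-axis compliance model with force-reflection gains and
# saturation, in the spirit of the peg-in-hole contact computation described.
import numpy as np

def reflected_wrench(pose_err_trans, pose_err_rot,
                     k_trans=np.array([800.0, 800.0, 800.0]),   # N/m, illustrative
                     k_rot=np.array([5.0, 5.0, 5.0]),           # Nm/rad, illustrative
                     gain_trans=0.5, gain_rot=0.5,
                     f_max=20.0, tau_max=2.0):
    """pose_err_*: 3-vectors (commanded minus constrained peg pose)."""
    force = gain_trans * k_trans * pose_err_trans
    torque = gain_rot * k_rot * pose_err_rot
    # saturate before sending to the force-reflecting hand controller
    force = np.clip(force, -f_max, f_max)
    torque = np.clip(torque, -tau_max, tau_max)
    return force, torque

f, tau = reflected_wrench(np.array([0.002, 0.0, -0.001]), np.array([0.01, 0.0, 0.0]))
```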

  14. BoreholeAR: A mobile tablet application for effective borehole database visualization using an augmented reality technology

    Science.gov (United States)

    Lee, Sangho; Suh, Jangwon; Park, Hyeong-Dong

    2015-03-01

    Boring logs are widely used in geological field studies since the data describes various attributes of underground and surface environments. However, it is difficult to manage multiple boring logs in the field as the conventional management and visualization methods are not suitable for integrating and combining large data sets. We developed an iPad application to enable its user to search the boring log rapidly and visualize them using the augmented reality (AR) technique. For the development of the application, a standard borehole database appropriate for a mobile-based borehole database management system was designed. The application consists of three modules: an AR module, a map module, and a database module. The AR module superimposes borehole data on camera imagery as viewed by the user and provides intuitive visualization of borehole locations. The map module shows the locations of corresponding borehole data on a 2D map with additional map layers. The database module provides data management functions for large borehole databases for other modules. Field survey was also carried out using more than 100,000 borehole data.
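
    As an illustration of what the AR module's superimposition involves (placing a borehole marker at the correct column of the camera image given the device position and heading), the sketch below uses a simple flat-earth approximation; the field-of-view value and the function name are assumptions for illustration, not the BoreholeAR implementation.

```python
# Illustrative geo-AR sketch: map a borehole's coordinates to a screen column
# on the camera image from the device's GPS position and compass heading.
import math

def borehole_screen_x(device_lat, device_lon, device_heading_deg,
                      hole_lat, hole_lon, image_width_px, h_fov_deg=60.0):
    # flat-earth offsets in metres (adequate over survey-scale distances)
    d_north = (hole_lat - device_lat) * 111_320.0
    d_east = (hole_lon - device_lon) * 111_320.0 * math.cos(math.radians(device_lat))
    bearing = math.degrees(math.atan2(d_east, d_north)) % 360.0
    rel = (bearing - device_heading_deg + 180.0) % 360.0 - 180.0   # relative bearing in [-180, 180)
    if abs(rel) > h_fov_deg / 2:
        return None                                                 # outside the camera view
    return image_width_px * (0.5 + rel / h_fov_deg)                 # pixel column for the marker

x = borehole_screen_x(37.4501, 126.9520, 45.0, 37.4512, 126.9531, image_width_px=2048)
```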

  15. Feasibility of Using an Augmented Immersive Virtual Reality Learning Environment to Enhance Music Conducting Skills

    Science.gov (United States)

    Orman, Evelyn K.; Price, Harry E.; Russell, Christine R.

    2017-01-01

    Acquiring nonverbal skills necessary to appropriately communicate and educate members of performing ensembles is essential for wind band conductors. Virtual reality learning environments (VRLEs) provide a unique setting for developing these proficiencies. For this feasibility study, we used an augmented immersive VRLE to enhance eye contact, torso…

  16. Applied virtual reality at the Research Triangle Institute

    Science.gov (United States)

    Montoya, R. Jorge

    1994-01-01

    Virtual Reality (VR) is a way for humans to use computers in visualizing, manipulating and interacting with large geometric databases. This paper describes a VR infrastructure and its application to marketing, modeling, architectural walk-through, and training problems. VR integration techniques used in these applications are based on a uniform approach which promotes portability and reusability of developed modules. For each problem, a 3D object database is created using data captured by hand or electronically. The object's realism is enhanced through either procedural or photo textures. The virtual environment is created and populated with the database using software tools which also support interactions with and immersivity in the environment. These capabilities are augmented by other sensory channels such as voice recognition, 3D sound, and tracking. Four applications are presented: a virtual furniture showroom, virtual reality models of the North Carolina Global TransPark, a walk through the Dresden Frauenkirche, and the maintenance training simulator for the National Guard.

  17. Visualization environment and its utilization in the ITBL building

    International Nuclear Information System (INIS)

    Yasuhara, Yuko

    2004-12-01

    In recent years, visualization techniques have become more and more important in various fields. Especially in scientific fields, a large amount of numerical output data needs to be converted into visual form, because computations have grown larger in scale and more complicated, and the computed results must be made intuitively comprehensible using visualization techniques such as 3D or stereoscopic image construction. In the visualization room in the ITBL building, a 3-screen Virtual Reality system, a Portable Virtual Reality system, a Mixed Reality system, and visualization tools such as alchemy are installed for the above-mentioned use. These devices enable us to easily change numerical data into visualized images of a virtual reality world with the use of eye-glasses or a head-mounted display device. This article describes the visualization environment in the ITBL building, its use, and the tasks to be solved. (author)

  18. Engembangan Virtual Class Untuk Pembelajaran Augmented Reality Berbasis Android

    OpenAIRE

    Arief, Rifiana; Umniati, Naeli

    2012-01-01

    Augmented reality on Android mobile phones has become a trend among college students in the computer department who take the New Media course. To develop such applications, knowledge of visual presentation theory and a case study of augmented reality on Android phones are needed. Learning media delivered through a virtual class can meet the students' needs in learning and developing augmented reality. The methods of this study in developing a virtual class for augmented reality learning were: a) having...

  19. ENGEMBANGAN VIRTUAL CLASS UNTUK PEMBELAJARAN AUGMENTED REALITY BERBASIS ANDROID

    OpenAIRE

    Rifiana Arief; Naeli Umniati

    2015-01-01

    ABSTRACT Augmented reality on Android mobile phones has become a trend among college students in the computer department who take the New Media course. To develop such applications, knowledge of visual presentation theory and a case study of augmented reality on Android phones are needed. Learning media delivered through a virtual class can meet the students' needs in learning and developing augmented reality. The methods of this study in developing a virtual class for augmented reality learning we...

  20. Associative visual learning by tethered bees in a controlled visual environment.

    Science.gov (United States)

    Buatois, Alexis; Pichot, Cécile; Schultheiss, Patrick; Sandoz, Jean-Christophe; Lazzari, Claudio R; Chittka, Lars; Avarguès-Weber, Aurore; Giurfa, Martin

    2017-10-10

    Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed at establishing a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other with quinine solution (CS-). Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS- after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS- also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.

  1. CLIENT INTERACTION TECHNIQUES WHEN USING AUGMENTED REALITY TECHNOLOGY APPLICATIONS IN SALES

    OpenAIRE

    Kravtsov A. A.; Loyko V. I.

    2015-01-01

    Augmented reality offers unique ways to display visual information, in particular the visualization of three-dimensional objects. With AR, an object can be visualized directly in the context of its use: for example, a piece of furniture rendered in an interior, plants in a garden, or an architectural object in a landscape. Modern consumer devices such as smartphones and tablet computers, together with the available algorithmic base, make mass usage of augmented reality possible. The c...

  2. Enhanced alpha-oscillations in visual cortex during anticipation of self-generated visual stimulation.

    Science.gov (United States)

    Stenner, Max-Philipp; Bauer, Markus; Haggard, Patrick; Heinze, Hans-Jochen; Dolan, Ray

    2014-11-01

    The perceived intensity of sensory stimuli is reduced when these stimuli are caused by the observer's actions. This phenomenon is traditionally explained by forward models of sensory action-outcome, which arise from motor processing. Although these forward models critically predict anticipatory modulation of sensory neural processing, neurophysiological evidence for anticipatory modulation is sparse and has not been linked to perceptual data showing sensory attenuation. By combining a psychophysical task involving contrast discrimination with source-level time-frequency analysis of MEG data, we demonstrate that the amplitude of alpha-oscillations in visual cortex is enhanced before the onset of a visual stimulus when the identity and onset of the stimulus are controlled by participants' motor actions. Critically, this prestimulus enhancement of alpha-amplitude is paralleled by psychophysical judgments of a reduced contrast for this stimulus. We suggest that alpha-oscillations in visual cortex preceding self-generated visual stimulation are a likely neurophysiological signature of motor-induced sensory anticipation and mediate sensory attenuation. We discuss our results in relation to proposals that attribute generic inhibitory functions to alpha-oscillations in prioritizing and gating sensory information via top-down control.

  3. Virtual reality stimuli for force platform posturography.

    Science.gov (United States)

    Tossavainen, Timo; Juhola, Martti; Ilmari, Pyykö; Aalto, Heikki; Toppila, Esko

    2002-01-01

    People relying much on vision in the control of posture are known to have an elevated risk of falling. Dependence on visual control is an important parameter in the diagnosis of balance disorders. We have previously shown that virtual reality methods can be used to produce visual stimuli that affect balance, but suitable stimuli need to be found. In this study the effect of six different virtual reality stimuli on the balance of 22 healthy test subjects was evaluated using force platform posturography. According to the tests two of the stimuli have a significant effect on balance.

  4. Enhancing Health-Care Services with Mixed Reality Systems

    Science.gov (United States)

    Stantchev, Vladimir

    This work presents a development approach for mixed reality systems in health care. Although health-care service costs account for 5-15% of GDP in developed countries, the sector has been remarkably resistant to the introduction of technology-supported optimizations. Digitalization of data storage and processing in the form of electronic patient records (EPR) and hospital information systems (HIS) is a first necessary step. Contrary to typical business functions (e.g., accounting or CRM), a health-care service is characterized by a knowledge-intensive decision process and usage of specialized devices ranging from stethoscopes to complex surgical systems. Mixed reality systems can help fill the gap between highly patient-specific health-care services that need a variety of technical resources on the one side and the streamlined process flow that typical process-supporting information systems expect on the other side. To achieve this task, we present a development approach that includes an evaluation of existing tasks and processes within the health-care service and the information systems that currently support the service, as well as identification of decision paths and actions that can benefit from mixed reality systems. The result is a mixed reality system that allows a clinician to monitor the elements of the physical world and to blend them with virtual information provided by the systems. He or she can also plan and schedule treatments and operations in the digital world depending on status information from this mixed reality.

  5. Enhanced learning of natural visual sequences in newborn chicks.

    Science.gov (United States)

    Wood, Justin N; Prasad, Aditya; Goldman, Jason G; Wood, Samantha M W

    2016-07-01

    To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks' object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.

  6. Measuring user satisfaction for design variations through virtual reality

    NARCIS (Netherlands)

    Orzechowski, M.A.; Timmermans, H.J.P.; Vries, de B.; Timmermans, H.J.P.; Vries, de B.

    2000-01-01

    This paper describes Virtual Reality as an environment to collect information about user satisfaction. Because Virtual Reality (VR) allows visualization with added interactivity, this form of representation has particular advantages when presenting new designs. The paper reports on the development

  7. Walkable self-overlapping virtual reality maze and map visualization demo

    DEFF Research Database (Denmark)

    Serubugo, Sule; Skantarova, Denisa; Evers, Nicolaj

    2017-01-01

    This paper describes our demonstration of a walkable self-overlapping maze and its corresponding map to facilitate asymmetric collaboration for room-scale virtual reality setups in public places.

  8. Visual Environment for Designing Interactive Learning Scenarios with Augmented Reality

    Science.gov (United States)

    Mota, José Miguel; Ruiz-Rube, Iván; Dodero, Juan Manuel; Figueiredo, Mauro

    2016-01-01

    Augmented Reality (AR) technology allows the inclusion of virtual elements in a view of the actual physical environment, creating a mixed reality in real time. This kind of technology can be used in educational settings. However, current AR authoring tools present several drawbacks, such as the lack of a mechanism for tracking the…

  9. A CT-ultrasound-coregistered augmented reality enhanced image-guided surgery system and its preliminary study on brain-shift estimation

    International Nuclear Information System (INIS)

    Huang, C H; Hsieh, C H; Lee, J D; Huang, W C; Lee, S T; Wu, C T; Sun, Y N; Wu, Y T

    2012-01-01

    By combining a view of the physical space with the medical imaging data, augmented reality (AR) visualization can provide perceptive advantages during image-guided surgery (IGS). However, the imaging data are usually captured before surgery and may differ from the intra-operative situation due to natural shift of soft tissues. This study presents an AR-enhanced IGS system which is capable of correcting the movement of soft tissues from the pre-operative CT images by using intra-operative ultrasound images. First, after reconstructing 2-D free-hand ultrasound images into 3-D volume data, the system applies a mutual-information-based registration algorithm to estimate the deformation between pre-operative and intra-operative ultrasound images. The estimated deformation transform describes the movement of soft tissues and is then applied to the pre-operative CT images, which provide high-resolution anatomical information. As a result, the system displays the fusion of the corrected CT images or the real-time 2-D ultrasound images with the patient in the physical space through a head-mounted display device, providing an immersive augmented-reality environment. For the performance validation of the proposed system, a brain phantom was utilized to simulate a brain-shift scenario. Experimental results reveal that when the shift of an artificial tumor ranges from 5 mm to 12 mm, the correction rates can be improved from 32-45% to 87-95% by using the proposed system.
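
    A much-simplified sketch of the registration idea can be written with SimpleITK: align the intra-operative and pre-operative ultrasound volumes with a mutual-information metric, then resample the pre-operative CT with the recovered transform. Note that the paper estimates a soft-tissue deformation, whereas this sketch uses a rigid transform purely to keep the illustration short; it is not the authors' implementation.

```python
# Simplified stand-in for the registration step: mutual-information alignment
# of the two ultrasound volumes, then mapping the pre-operative CT with the
# recovered transform (rigid here, deformable in the actual system).
import SimpleITK as sitk

def register_and_correct(preop_us, intraop_us, preop_ct):
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=2.0,
                                                 minStep=1e-4,
                                                 numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    init = sitk.CenteredTransformInitializer(
        intraop_us, preop_us, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(init, inPlace=False)
    tx = reg.Execute(sitk.Cast(intraop_us, sitk.sitkFloat32),
                     sitk.Cast(preop_us, sitk.sitkFloat32))
    # resample the CT into the intra-operative frame for the AR overlay
    return sitk.Resample(preop_ct, intraop_us, tx, sitk.sitkLinear,
                         0.0, preop_ct.GetPixelID())
```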

  10. Virtual Reality as an Educational and Training Tool for Medicine.

    Science.gov (United States)

    Izard, Santiago González; Juanes, Juan A; García Peñalvo, Francisco J; Estella, Jesús Mª Gonçalvez; Ledesma, Mª José Sánchez; Ruisoto, Pablo

    2018-02-01

    Until very recently, we considered Virtual Reality to be something close at hand, yet still science fiction. Today, however, Virtual Reality is being integrated into many different areas of our lives, from videogames to different industrial use cases and, of course, it is starting to be used in medicine. There are two broad classifications of Virtual Reality. First, there is a Virtual Reality in which we visualize a three-dimensional world created entirely by computer, where we can tell that the world we are visualizing is not real, at least for the moment, as rendered images are improving very fast. Second, there is a Virtual Reality that basically consists of a reflection of our reality. This type of Virtual Reality is created using spherical or 360 images and videos, so we lose three-dimensional visualization capacity (until 3D cameras are more developed), but on the other hand we gain in terms of realism in the images. We could also mention a third classification that merges the previous two, where virtual elements created by computer coexist with 360 images and videos. In this article we show two systems we have developed, each of which can be framed within one of the previous classifications, identifying the technologies used for their implementation as well as the advantages of each one. We also analyze how these systems can improve the current methodologies used for medical training. The implications of these developments as tools for teaching, learning and training are discussed.

  11. An acceptance model for smart glasses based tourism augmented reality

    Science.gov (United States)

    Obeidy, Waqas Khalid; Arshad, Haslina; Huang, Jiung Yao

    2017-10-01

    Recent mobile technologies have revolutionized the way people experience their environment. Although there is only limited research on users' acceptance of AR in the cultural tourism context, previous researchers have explored the opportunities of using augmented reality (AR) to enhance user experience. Recent AR research lacks work that integrates dimensions specific to cultural tourism and to the smart-glasses context. Hence, this work proposes an AR acceptance model in the context of cultural heritage tourism and smart glasses capable of performing augmented reality. In this paper we present an AR acceptance model to understand AR usage behavior and visiting intention for tourists who use smart-glasses-based AR at UNESCO cultural heritage destinations in Malaysia. Furthermore, this paper identifies information quality, technology readiness, visual appeal, and facilitating conditions as external variables and key factors influencing visitors' beliefs, attitudes and usage intention.

  12. A surgical robot with augmented reality visualization for stereoelectroencephalography electrode implantation.

    Science.gov (United States)

    Zeng, Bowei; Meng, Fanle; Ding, Hui; Wang, Guangzhi

    2017-08-01

    Using existing stereoelectroencephalography (SEEG) electrode implantation surgical robot systems, it is difficult to intuitively validate registration accuracy and display the electrode entry points (EPs) and the anatomical structure around the electrode trajectories in the patient space to the surgeon. This paper proposes a prototype system that can realize video see-through augmented reality (VAR) and spatial augmented reality (SAR) for SEEG implantation. The system helps the surgeon quickly and intuitively confirm the registration accuracy, locate EPs and visualize the internal anatomical structure in the image space and patient space. We designed and developed a projector-camera system (PCS) attached to the distal flange of a robot arm. First, system calibration is performed. Second, the PCS is used to obtain the point clouds of the surface of the patient's head, which are utilized for patient-to-image registration. Finally, VAR is produced by merging the real-time video of the patient and the preoperative three-dimensional (3D) operational planning model. In addition, SAR is implemented by projecting the planning electrode trajectories and local anatomical structure onto the patient's scalp. The error of registration, the electrode EPs and the target points are evaluated on a phantom. The fiducial registration error is [Formula: see text] mm (max 1.22 mm), and the target registration error is [Formula: see text] mm (max 1.18 mm). The projection overlay error is [Formula: see text] mm, and the TP error after the pre-warped projection is [Formula: see text] mm. The TP error caused by a surgeon's viewpoint deviation is also evaluated. The presented system can help surgeons quickly verify registration accuracy during SEEG procedures and can provide accurate EP locations and internal structural information to the surgeon. With more intuitive surgical information, the surgeon may have more confidence and be able to perform surgeries with better outcomes.
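
    The patient-to-image registration step evaluated above can be illustrated with a generic, textbook rigid point-set (fiducial) registration and its fiducial registration error (FRE); the numpy sketch below is a standard SVD-based alignment, not the system's actual implementation.

```python
# Generic fiducial registration sketch: rigid alignment of corresponding point
# sets via SVD (Kabsch) and the resulting RMS fiducial registration error.
import numpy as np

def rigid_register(image_pts, patient_pts):
    """Both arrays are (N, 3); returns R, t such that patient ~= R @ image + t."""
    ci, cp = image_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (image_pts - ci).T @ (patient_pts - cp)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cp - R @ ci
    return R, t

def fre(image_pts, patient_pts, R, t):
    residuals = patient_pts - (image_pts @ R.T + t)
    return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))   # RMS fiducial error
```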

  13. Enhancing online timeline visualizations with events and images

    Science.gov (United States)

    Pandya, Abhishek; Mulye, Aniket; Teoh, Soon Tee

    2011-01-01

    Timelines are one of the most intuitive and commonly used ways to visualize time-series data, appearing in widely used applications such as stock market data visualization and the tracking of election candidates' poll numbers over time. While useful, these timeline visualizations lack contextual information about the events that relate to, or cause, changes in the data. We have developed a system that enhances timeline visualization with the display of relevant news events and their corresponding images, so that users can not only see the changes in the data but also understand the reasons behind them. We have also conducted a user study to test the effectiveness of our ideas.
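
    As a generic illustration of the idea (a time series annotated with the events behind its changes), here is a short matplotlib sketch; the data and event labels are invented, and the image thumbnails described in the paper are omitted for brevity.

```python
# Generic sketch: annotate a time-series line with contextual events so the
# viewer can connect changes in the data to their likely causes.
import matplotlib.pyplot as plt
import pandas as pd

dates = pd.date_range("2011-01-01", periods=120, freq="D")
prices = pd.Series(range(120), index=dates) * 0.3 + 100           # placeholder series
events = {"2011-02-10": "Product recall", "2011-03-20": "Earnings report"}

fig, ax = plt.subplots(figsize=(9, 3))
ax.plot(prices.index, prices.values)
for day, label in events.items():
    x = pd.Timestamp(day)
    ax.axvline(x, linestyle="--", linewidth=0.8)                  # mark the event date
    ax.annotate(label, xy=(x, prices[x]), xytext=(x, prices[x] + 5),
                arrowprops=dict(arrowstyle="->"), fontsize=8)
ax.set_xlabel("Date"); ax.set_ylabel("Price")
plt.tight_layout(); plt.show()
```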

  14. A programmable display-layer architecture for virtual-reality applications

    NARCIS (Netherlands)

    Smit, F.A.

    2009-01-01

    Two important technical objectives of virtual-reality systems are to provide compelling visuals and effective 3D user interaction. In this respect, modern virtual-reality system architectures suffer from a number of shortcomings. The reduction of end-to-end latency, crosstalk and judder are

  15. HIPER-REALITAS VISUAL

    Directory of Open Access Journals (Sweden)

    Martadi

    2003-01-01

    Full Text Available At the end of the twentieth century, technological growth changed the face of a world now shaped by electronic images everywhere (television, film, games, virtual reality, digital photography, the internet). The growth of digital technology has carried human fantasy beyond its limits, creating three-dimensional spaces and the objects within them, to the point where visual reality is surpassed by the manipulation of visual imagery; it is as if human beings step from the real world into a fantasy world, an imagined world that looks like the truth. The problem becomes more complex when we face the reality that this technological growth also brings negative impacts. While technology satisfies human desires and offers an ecstatic fascination, moral values are nullified one by one. Crime and pornography appear freely in the newest formats. Abstract in Bahasa Indonesia: Perkembangan teknologi pada akhir abad ke-20 telah merubah wajah dunia yang dibentuk oleh riuh rendah citraan elektronik (televisi, film, game, virtual reality, foto digital, internet). Perkembangan teknologi digital telah membawa fantasi manusia menembus batas, menciptakan ruang-ruang tiga dimensi berikut obyek-obyek di dalamnya, sampai pada tahap di mana realitas visual telah dilampaui dengan manipulasi pencitraan visual, sehingga seolah manusia melangkah dari dunia nyata menuju dunia fantasi, dunia maya yang tampak nyata. Permasalahannya menjadi semakin rumit ketika kita dihadapkan pada realita bahwa perkembangan teknologi tersebut membawa pula dampak negatif. Pada saat teknologi memuaskan hasrat/nafsu manusia, memberikan pesona ekstasi, maka nilai-nilai moral seakan rontok satu per satu. Hyper-reality, Visual.

  16. Haptic and Audio-visual Stimuli: Enhancing Experiences and Interaction

    NARCIS (Netherlands)

    Nijholt, Antinus; Dijk, Esko O.; Lemmens, Paul M.C.; Luitjens, S.B.

    2010-01-01

    The intention of the symposium on Haptic and Audio-visual stimuli at the EuroHaptics 2010 conference is to deepen the understanding of the effect of combined Haptic and Audio-visual stimuli. The knowledge gained will be used to enhance experiences and interactions in daily life. To this end, a

  17. 3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.

    Science.gov (United States)

    Beveridge, R; Wilson, S; Coyle, D

    2016-01-01

    A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses with variations in graphical complexity and style, in-game distractions, and display parameters surrounding mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances ranging up to 90%, with variations in graphical complexity having limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming. © 2016 Elsevier B.V. All rights reserved.

  18. Augmented reality: don't we all wish we lived in one?

    International Nuclear Information System (INIS)

    Hayes, Birchard P.; Michel, Kelly D.; Few, Douglas A.; Gertman, David; Le Blanc, Katya

    2010-01-01

    From stereophonic, positional sound to high-definition imagery that is crisp and clean, high-fidelity computer graphics enhance our view, insight, and intuition regarding our environments and conditions. Contemporary 3-D modeling tools offer an open architecture framework that enables integration with other technologically innovative arenas. One innovation of great interest is Augmented Reality, the merging of virtual, digital environments with physical, real-world environments, creating a mixed reality in which relevant data and information augment the real experience in real time through spatial or semantic context. Pairing 3-D virtual immersive models with a dynamic platform such as semi-autonomous robotics or personnel odometry systems to create a mixed reality offers new capabilities for design-information verification inspections, improved evaluation accuracy, and information gathering at nuclear facilities. Our paper discusses the integration of two innovative technologies, 3-D visualizations with inertial positioning systems, and the resulting augmented reality offered to the human inspector. The discussion in the paper includes an exploration of human and non-human (surrogate) inspections of a nuclear facility, integrated safeguards knowledge within a synchronized virtual model operated, or worn, by a human inspector, and the anticipated benefits to safeguards evaluations of facility operations.

  19. Audio-visual speech timing sensitivity is enhanced in cluttered conditions.

    Directory of Open Access Journals (Sweden)

    Warrick Roseboom

    2011-04-01

    Full Text Available Events encoded in separate sensory modalities, such as audition and vision, can seem to be synchronous across a relatively broad range of physical timing differences. This may suggest that the precision of audio-visual timing judgments is inherently poor. Here we show that this is not necessarily true. We contrast timing sensitivity for isolated streams of audio and visual speech, and for streams of audio and visual speech accompanied by additional, temporally offset, visual speech streams. We find that the precision with which synchronous streams of audio and visual speech are identified is enhanced by the presence of additional streams of asynchronous visual speech. Our data suggest that timing perception is shaped by selective grouping processes, which can result in enhanced precision in temporally cluttered environments. The imprecision suggested by previous studies might therefore be a consequence of examining isolated pairs of audio and visual events. We argue that when an isolated pair of cross-modal events is presented, they tend to group perceptually and to seem synchronous as a consequence. We have revealed greater precision by providing multiple visual signals, possibly allowing a single auditory speech stream to group selectively with the most synchronous visual candidate. The grouping processes we have identified might be important in daily life, such as when we attempt to follow a conversation in a crowded room.

  20. Real-Time Projection-Based Augmented Reality System for Dynamic Objects in the Performing Arts

    Directory of Open Access Journals (Sweden)

    Jaewoon Lee

    2015-02-01

    Full Text Available This paper describes a case study of applying projection-based augmented reality, especially for dynamic objects in live performing shows such as plays, dancing, or musicals. Our study aims to project imagery correctly inside the silhouettes of flexible objects, in other words, live actors or the surfaces of actors’ costumes, whose silhouettes change shape frequently. To realize this work, we implemented a special projection system based on a real-time masking technique, that is, a real-time projection-based augmented reality system for dynamic objects in the performing arts. We installed the sets on a stage for live performance and rehearsed particular scenes of a musical. In live performance, projection-based augmented reality technology enables technical and theatrical effects that were not possible with existing video projection techniques. The projected images on the surfaces of the actors’ costumes could not only express the particular scene of a performance more effectively, but also lead the audience to an extraordinary visual experience.
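
    The real-time masking idea can be sketched as follows, under two simplifying assumptions that are not taken from the paper: the performer can be segmented from a camera frame by simple thresholding, and a camera-to-projector homography H has been calibrated in advance. The sketch keeps the projected content only inside the tracked silhouette.

    ```python
    # Minimal sketch of silhouette-constrained projection (hypothetical pipeline).
    import cv2
    import numpy as np

    def masked_projection(camera_frame, content, H, proj_size=(1920, 1080), thresh=60):
        """Return a projector image that is black everywhere outside the performer."""
        gray = cv2.cvtColor(camera_frame, cv2.COLOR_BGR2GRAY)
        _, silhouette = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        # Map the silhouette from camera coordinates into projector coordinates.
        mask = cv2.warpPerspective(silhouette, H, proj_size)
        content = cv2.resize(content, proj_size)
        # Keep the imagery only where the mask (the performer's silhouette) is set.
        return cv2.bitwise_and(content, content, mask=mask)
    ```

    In a real installation the segmentation would more likely come from an infrared or depth camera and the mapping from a full camera-projector calibration, but the masking step itself reduces to this per-frame composite.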

  1. Working memory can enhance unconscious visual perception.

    Science.gov (United States)

    Pan, Yi; Cheng, Qiu-Ping; Luo, Qian-Ying

    2012-06-01

    We demonstrate that unconscious processing of a stimulus property can be enhanced when there is a match between the contents of working memory and the stimulus presented in the visual field. Participants first held a cue (a colored circle) in working memory and then searched for a brief masked target shape presented simultaneously with a distractor shape. When participants reported having no awareness of the target shape at all, search performance was more accurate in the valid condition, where the target matched the cue in color, than in the neutral condition, where the target mismatched the cue. This effect cannot be attributed to bottom-up perceptual priming from the presentation of a memory cue, because unconscious perception was not enhanced when the cue was merely perceptually identified but not actively held in working memory. These findings suggest that reentrant feedback from the contents of working memory modulates unconscious visual perception.

  2. Emergence of realism: Enhanced visual artistry and high accuracy of visual numerosity representation after left prefrontal damage.

    Science.gov (United States)

    Takahata, Keisuke; Saito, Fumie; Muramatsu, Taro; Yamada, Makiko; Shirahase, Joichiro; Tabuchi, Hajime; Suhara, Tetsuya; Mimura, Masaru; Kato, Motoichiro

    2014-05-01

    Over the last two decades, evidence of enhancement of drawing and painting skills due to focal prefrontal damage has accumulated. It is of special interest that most artworks created by such patients were highly realistic ones, but the mechanism underlying this phenomenon remains to be understood. Our hypothesis is that the enhanced tendency toward realism was associated with the accuracy of visual numerosity representation, which has been shown to be mediated predominantly by right parietal functions. Here, we report a case of left prefrontal stroke in which the patient showed enhancement of artistic skills of realistic painting after the onset of brain damage. We investigated cognitive, functional and esthetic characteristics of the patient's visual artistry and visual numerosity representation. Neuropsychological tests revealed impaired executive function after the stroke. Despite that, the patient's visual artistry related to realism was, if anything, enhanced across the onset of brain damage, as demonstrated by blind evaluation of the paintings by professional art reviewers. On visual numerical cognition tasks, the patient showed higher performance in comparison with age-matched healthy controls. These results paralleled increased perfusion in the right parietal cortex including the precuneus and intraparietal sulcus. Our data provide new insight into the mechanisms underlying change in artistic style due to focal prefrontal lesions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Conditions for the Effectiveness of Multiple Visual Representations in Enhancing STEM Learning

    Science.gov (United States)

    Rau, Martina A.

    2017-01-01

    Visual representations play a critical role in enhancing science, technology, engineering, and mathematics (STEM) learning. Educational psychology research shows that adding visual representations to text can enhance students' learning of content knowledge, compared to text-only. But should students learn with a single type of visual…

  4. Confronting an augmented reality

    Directory of Open Access Journals (Sweden)

    John Hedberg

    2012-08-01

    Full Text Available How can educators make use of augmented reality technologies and practices to enhance learning and why would we want to embrace such technologies anyway? How can an augmented reality help a learner confront, interpret and ultimately comprehend reality itself? In this article, we seek to initiate a discussion that focuses on these questions, and suggest that they be used as drivers for research into effective educational applications of augmented reality. We discuss how multi-modal, sensorial augmentation of reality links to existing theories of education and learning, focusing on ideas of cognitive dissonance and the confrontation of new realities implied by exposure to new and varied perspectives. We also discuss connections with broader debates brought on by the social and cultural changes wrought by the increased digitalisation of our lives, especially the concept of the extended mind. Rather than offer a prescription for augmentation, our intention is to throw open debate and to provoke deep thinking about what interacting with and creating an augmented reality might mean for both teacher and learner.

  5. Effect of virtual reality on cognition in stroke patients.

    Science.gov (United States)

    Kim, Bo Ryun; Chun, Min Ho; Kim, Lee Suk; Park, Ji Young

    2011-08-01

    To investigate the effect of virtual reality on the recovery of cognitive impairment in stroke patients. Twenty-eight patients (11 males and 17 females, mean age 64.2) with cognitive impairment following stroke were recruited for this study. All patients were randomly assigned to one of two groups, the virtual reality (VR) group (n=15) or the control group (n=13). The VR group received both virtual reality training and computer-based cognitive rehabilitation, whereas the control group received only computer-based cognitive rehabilitation. To measure activities of daily living and cognitive and motor functions, the following assessment tools were used: the computerized neuropsychological test and the Tower of London (TOL) test for cognitive function assessment, the Korean-Modified Barthel Index (K-MBI) for functional status evaluation, and the Motricity Index (MI) for motor function assessment. All recruited patients underwent these evaluations before rehabilitation and four weeks after rehabilitation. The VR group showed significant improvement in the K-MMSE, visual and auditory continuous performance tests (CPT), forward digit span test (DST), forward and backward visual span tests (VST), visual and verbal learning tests, TOL, K-MBI, and MI scores, while the control group showed significant improvement in the K-MMSE, forward DST, visual and verbal learning tests, trail-making test-type A, TOL, K-MBI, and MI scores after rehabilitation. The changes in the visual CPT and backward VST in the VR group after rehabilitation were significantly higher than those in the control group. Our findings suggest that virtual reality training combined with computer-based cognitive rehabilitation may be of additional benefit for treating cognitive impairment in stroke patients.

  6. Peripersonal Space: An Index of Multisensory Body–Environment Interactions in Real, Virtual, and Mixed Realities

    Directory of Open Access Journals (Sweden)

    Andrea Serino

    2018-01-01

    Full Text Available Human–environment interactions normally occur in the physical milieu, and thus through the medium of the body and within the space immediately adjacent to and surrounding the body, the peripersonal space (PPS). However, human interactions increasingly occur with or within virtual environments, and hence novel approaches and metrics must be developed to index human–environment interactions in virtual reality (VR). Here, we present a multisensory task that measures the spatial extent of human PPS in real, virtual, and augmented realities. We validated it in a mixed reality (MR) ecosystem in which the real environment and virtual objects are blended together in order to administer and control visual, auditory, and tactile stimuli in ecologically valid conditions. Within this mixed-reality environment, participants are asked to respond as fast as possible to tactile stimuli on their body, while task-irrelevant visual or audiovisual stimuli approach their body. Results demonstrate that, in analogy with observations derived from monkey electrophysiology and in real environmental surroundings, tactile detection is enhanced when visual or auditory stimuli are close to the body, and not when far from it. We then calculate the location where this multisensory facilitation occurs as a proxy of the boundary of PPS. We observe that mapping of PPS via audiovisual, as opposed to visual alone, looming stimuli results in sigmoidal fits—allowing for the bifurcation between near and far space—with greater goodness of fit. In sum, our approach is able to capture the boundaries of PPS on a spatial continuum, at the individual-subject level, and within a fully controlled and previously laboratory-validated setup, while maintaining the richness and ecological validity of real-life events. The task can therefore be applied to study the properties of PPS in humans and to index the features governing human–environment interactions in virtual or MR. We propose PPS as an
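
    As a rough illustration of the sigmoidal-fit step mentioned above, the sketch below fits a sigmoid to hypothetical tactile reaction times as a function of the looming stimulus distance and reads off the central point as a proxy for the PPS boundary. The data, units, and parameterization are illustrative, not the study's.

    ```python
    # Minimal sketch: estimate a PPS boundary from tactile RT vs. stimulus distance.
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(d, rt_min, rt_max, d_c, slope):
        """RT is fast when the looming stimulus is near the body and slow when far."""
        return rt_min + (rt_max - rt_min) / (1.0 + np.exp(-(d - d_c) / slope))

    distance = np.array([10, 30, 50, 70, 90, 110], dtype=float)   # cm from the body
    rt = np.array([365, 370, 392, 428, 441, 445], dtype=float)    # ms, hypothetical means

    p0 = [rt.min(), rt.max(), distance.mean(), 10.0]
    (rt_min, rt_max, d_c, slope), _ = curve_fit(sigmoid, distance, rt, p0=p0)
    print(f"estimated PPS boundary (sigmoid central point): {d_c:.0f} cm")
    ```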

  7. Real-life memory and spatial navigation in patients with focal epilepsy: ecological validity of a virtual reality supermarket task.

    Science.gov (United States)

    Grewe, P; Lahr, D; Kohsik, A; Dyck, E; Markowitsch, H J; Bien, C G; Botsch, M; Piefke, M

    2014-02-01

    Ecological assessment and training of real-life cognitive functions such as visual-spatial abilities in patients with epilepsy remain challenging. Some studies have applied virtual reality (VR) paradigms, but external validity of VR programs has not sufficiently been proven. Patients with focal epilepsy (EG, n=14) accomplished an 8-day program in a VR supermarket, which consisted of learning and buying items on a shopping list. Performance of the EG was compared with that of healthy controls (HCG, n=19). A comprehensive neuropsychological examination was administered. Real-life performance was investigated in a real supermarket. Learning in the VR supermarket was significantly impaired in the EG on different VR measures. Delayed free recall of products did not differ between the EG and the HCG. Virtual reality scores were correlated with neuropsychological measures of visual-spatial cognition, subjective estimates of memory, and performance in the real supermarket. The data indicate that our VR approach allows for the assessment of real-life visual-spatial memory and cognition in patients with focal epilepsy. The multimodal, active, and complex VR paradigm may particularly enhance visual-spatial cognitive resources. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Experience-dependent plasticity from eye opening enables lasting, visual cortex-dependent enhancement of motion vision.

    Science.gov (United States)

    Prusky, Glen T; Silver, Byron D; Tschetter, Wayne W; Alam, Nazia M; Douglas, Robert M

    2008-09-24

    Developmentally regulated plasticity of vision has generally been associated with "sensitive" or "critical" periods in juvenile life, wherein visual deprivation leads to loss of visual function. Here we report an enabling form of visual plasticity that commences in infant rats from eye opening, in which daily threshold testing of optokinetic tracking, amid otherwise normal visual experience, stimulates enduring, visual cortex-dependent enhancement (>60%) of the spatial frequency threshold for tracking. The perceptual ability to use spatial frequency in discriminating between moving visual stimuli is also improved by the testing experience. The capacity for inducing enhancement is transitory and effectively limited to infancy; however, enhanced responses are not consolidated and maintained unless in-kind testing experience continues uninterrupted into juvenile life. The data show that selective visual experience from infancy can alone enable visual function. They also indicate that plasticity associated with visual deprivation may not be the only cause of developmental visual dysfunction, because we found that experientially inducing enhancement in late infancy, without subsequent reinforcement of the experience in early juvenile life, can lead to enduring loss of function.

  9. Virtual Reality in the Medical Field

    OpenAIRE

    Motomatsu, Haruka

    2014-01-01

    The objective is to analyze the use of the emerging 3D computer technology of Virtual Reality for relieving pain in physically impaired conditions, such as in burn victims, amputees, and phantom limb patients, during therapy and medical procedures. Virtual technology generates a three-dimensional visual virtual world that enables interaction. A comparison will be made between the emerging technology of Virtual Reality and the methods usually used, namely medication. Medicine ha...

  10. Novel Web-based Education Platforms for Information Communication utilizing Gamification, Virtual and Immersive Reality

    Science.gov (United States)

    Demir, I.

    2015-12-01

    Recent developments in internet technologies make it possible to manage and visualize large data on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. This presentation showcases information communication interfaces, games, and virtual and immersive reality applications for supporting teaching and learning of concepts in atmospheric and hydrological sciences. The information communication platforms utilize the latest web technologies and allow large-scale data to be accessed and visualized on the web. The simulation system is a web-based 3D interactive learning environment for teaching hydrological and atmospheric processes and concepts. The simulation system provides a visually striking platform with realistic terrain and weather information, and water simulation. The web-based simulation system provides an environment for students to learn about earth science processes and the effects of development and human activity on the terrain. Users can access the system in three visualization modes, including virtual reality, augmented reality, and immersive reality using a heads-up display.

  11. Enhancing Assisted Living Technology with Extended Visual Memory

    Directory of Open Access Journals (Sweden)

    Joo-Hwee Lim

    2011-05-01

    Full Text Available Human vision and memory are powerful cognitive faculties by which we understand the world. However, they are imperfect and, further, subject to deterioration with age. We propose a cognition-inspired computational model, Extended Visual Memory (EVM), within the Computer-Aided Vision (CAV) framework, to assist humans in vision-related tasks. We exploit wearable sensors such as cameras, GPS and ambient computing facilities to complement a user's vision and memory functions by answering four types of queries central to visual activities, namely, Retrieval, Understanding, Navigation and Search. Learning of EVM relies on both frequency-based and attention-driven mechanisms to store view-based visual fragments (VF), which are abstracted into high-level visual schemas (VS), both in the visual long-term memory. During inference, the visual short-term memory plays a key role in visual similarity computation between the input (or its schematic representation) and VF, exemplified from VS when necessary. We present an assisted living scenario, termed EViMAL (Extended Visual Memory for Assisted Living), targeted at mild dementia patients, which provides novel functions such as hazard warning, visual reminders, object look-up and event review. We envisage EVM having potential benefits in alleviating memory loss, improving recall precision and enhancing memory capacity through external support.
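
    To make the inference step more concrete, here is a minimal, hypothetical sketch of the visual similarity computation between an input descriptor and stored visual fragments (VF); the descriptor format, the store layout and the function names are assumptions for illustration only, not the EVM implementation.

    ```python
    # Minimal sketch: rank stored visual fragments by similarity to the current view.
    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def retrieve(query_descriptor, fragment_store, top_k=3):
        """fragment_store: list of (fragment_id, descriptor) pairs from long-term memory."""
        scored = [(fid, cosine_similarity(query_descriptor, desc)) for fid, desc in fragment_store]
        return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

    store = [("kitchen_door", np.random.rand(128)), ("front_hall", np.random.rand(128))]
    print(retrieve(np.random.rand(128), store))
    ```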

  12. Augmented reality: don't we all wish we lived in one?

    Energy Technology Data Exchange (ETDEWEB)

    Hayes, Birchard P [Los Alamos National Laboratory; Michel, Kelly D [Los Alamos National Laboratory; Few, Douglas A [INL; Gertman, David [INL; Le Blanc, Katya [INL

    2010-01-01

    From stereophonic, positional sound to high-definition imagery that is crisp and clean, high fidelity computer graphics enhance our view, insight, and intuition regarding our environments and conditions. Contemporary 3-D modeling tools offer an open architecture framework that enables integration with other technologically innovative arenas. One innovation of great interest is Augmented Reality, the merging of virtual, digital environments with physical, real-world environments creating a mixed reality where relevant data and information augments the real or actual experience in real-time by spatial or semantic context. Pairing 3-D virtual immersive models with a dynamic platform such as semi-autonomous robotics or personnel odometry systems to create a mixed reality offers a new and innovative design information verification inspection capability, evaluation accuracy, and information gathering capability for nuclear facilities. Our paper discusses the integration of two innovative technologies, 3-D visualizations with inertial positioning systems, and the resulting augmented reality offered to the human inspector. The discussion in the paper includes an exploration of human and non-human (surrogate) inspections of a nuclear facility, integrated safeguards knowledge within a synchronized virtual model operated, or worn, by a human inspector, and the anticipated benefits to safeguards evaluations of facility operations.

  13. Augmented Reality for Science Education

    DEFF Research Database (Denmark)

    Brandt, Harald; Nielsen, Birgitte Lund; Georgsen, Marianne

    Augmented reality (AR) holds great promise as a learning tool. So far, however, most research has looked at the technology itself – and AR has been used primarily for commercial purposes. As a learning tool, AR supports an inquiry-based approach to science education with a high level of student involvement. The AR-sci-project (Augmented Reality for SCIence education) addresses the issue of applying augmented reality in developing innovative science education and enhancing the quality of science teaching and learning....

  14. Cholinergic enhancement of visual attention and neural oscillations in the human brain.

    Science.gov (United States)

    Bauer, Markus; Kluge, Christian; Bach, Dominik; Bradbury, David; Heinze, Hans Jochen; Dolan, Raymond J; Driver, Jon

    2012-03-06

    Cognitive processes such as visual perception and selective attention induce specific patterns of brain oscillations. The neurochemical bases of these spectral changes in neural activity are largely unknown, but neuromodulators are thought to regulate processing. The cholinergic system is linked to attentional function in vivo, whereas separate in vitro studies show that cholinergic agonists induce high-frequency oscillations in slice preparations. This has led to theoretical proposals that cholinergic enhancement of visual attention might operate via gamma oscillations in visual cortex, although low-frequency alpha/beta modulation may also play a key role. Here we used MEG to record cortical oscillations in the context of administration of a cholinergic agonist (physostigmine) during a spatial visual attention task in humans. This cholinergic agonist enhanced spatial attention effects on low-frequency alpha/beta oscillations in visual cortex, an effect correlating with a drug-induced speeding of performance. By contrast, the cholinergic agonist did not alter high-frequency gamma oscillations in visual cortex. Thus, our findings show that cholinergic neuromodulation enhances attentional selection via an impact on oscillatory synchrony in visual cortex, for low rather than high frequencies. We discuss this dissociation between high- and low-frequency oscillations in relation to proposals that lower-frequency oscillations are generated by feedback pathways within visual cortex. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Personalized augmented reality for anatomy education.

    Science.gov (United States)

    Ma, Meng; Fallavollita, Pascal; Seelbach, Ina; Von Der Heide, Anna Maria; Euler, Ekkehard; Waschke, Jens; Navab, Nassir

    2016-05-01

    Anatomy education is a challenging but vital element in forming future medical professionals. In this work, a personalized and interactive augmented reality system is developed to facilitate education. This system behaves as a "magic mirror" which allows personalized in-situ visualization of anatomy on the user's body. Real-time volume visualization of a CT dataset creates the illusion that the user can look inside their body. The system comprises a RGB-D sensor as a real-time tracking device to detect the user moving in front of a display. In addition, the magic mirror system shows text information, medical images, and 3D models of organs that the user can interact with. Through the participation of 7 clinicians and 72 students, two user studies were designed to respectively assess the precision and acceptability of the magic mirror system for education. The results of the first study demonstrated that the average precision of the augmented reality overlay on the user body was 0.96 cm, while the results of the second study indicate 86.1% approval for the educational value of the magic mirror, and 91.7% approval for the augmented reality capability of displaying organs in three dimensions. The usefulness of this unique type of personalized augmented reality technology has been demonstrated in this paper. © 2015 Wiley Periodicals, Inc.

  16. Perform light and optic experiments in Augmented Reality

    Science.gov (United States)

    Wozniak, Peter; Vauderwange, Oliver; Curticapean, Dan; Javahiraly, Nicolas; Israel, Kai

    2015-10-01

    In many scientific studies, lens experiments are part of the curriculum. The conducted experiments are meant to give the students a basic understanding of the laws of optics and their applications. Most of the experiments need special hardware such as an optical bench, light sources, apertures and different lens types. Therefore it is not possible for the students to conduct any of the experiments outside of the university's laboratory. Simple optical software simulators enabling the students to virtually perform lens experiments already exist, but are mostly desktop or web browser based. Augmented Reality (AR) is a special case of mediated and mixed reality concepts, where computers are used to add, subtract or modify one's perception of reality. As a result of the success and widespread availability of handheld mobile devices, such as tablet computers and smartphones, mobile augmented reality applications are easy to use. Augmented reality can be easily used to visualize a simulated optical bench. The students can interactively modify properties such as lens type, lens curvature, lens diameter, lens refractive index and the positions of the instruments in space. Light rays can be visualized and promote an additional understanding of the laws of optics. An AR application like this is ideally suited to prepare for the actual laboratory sessions and/or recap the teaching content. The authors will present their experience with handheld augmented reality applications and their possibilities for light and optics experiments without the need for specialized optical hardware.
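
    As an example of the relationships such an AR lens simulator has to evaluate, the sketch below applies the thin-lens equation 1/f = 1/d_o + 1/d_i together with the lateral magnification m = -d_i/d_o; the numbers are illustrative and the code is not taken from the application described above.

    ```python
    # Minimal sketch: image position and magnification for a thin lens.
    def thin_lens_image(focal_length_mm, object_distance_mm):
        if object_distance_mm == focal_length_mm:
            return float("inf"), float("inf")   # object at the focal point: image at infinity
        image_distance = 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)
        magnification = -image_distance / object_distance_mm
        return image_distance, magnification

    d_i, m = thin_lens_image(focal_length_mm=50.0, object_distance_mm=200.0)
    print(f"image distance = {d_i:.1f} mm, magnification = {m:.2f}")   # 66.7 mm, -0.33
    ```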

  17. The analysis of visual variables for use in the cartographic design of point symbols for mobile Augmented Reality applications

    Science.gov (United States)

    Halik, Łukasz

    2012-11-01

    The objective of the present deliberations was to systematise our knowledge of static visual variables used to create cartographic symbols, and also to analyse the possibility of their utilisation in Augmented Reality (AR) applications on smartphone-type mobile devices. This was accomplished by combining the visual variables listed over the years by different researchers. The research approach was to determine the level of usefulness of particular characteristics of visual variables, namely selectivity, associativity, quantity and order. An attempt was made to provide an overview of static visual variables and to describe the AR system, which is a new paradigm of the user interface. The change in the approach to the presentation of point objects is caused by applying a different perspective in the observation of objects (egocentric view) than is used on traditional analogue maps (geocentric view). The presented topics will refer to the fast-developing field of cartography, namely mobile cartography. Particular emphasis will be put on smartphone-type mobile devices and their applicability in the process of designing cartographic symbols. The aim of the article was to systematise knowledge about static visual variables, which are the key components of cartographic symbols. An attempt was made to compile the visual variables distinguished by cartographers over the last fifty years, starting from the classification presented by J. Bertin. The degree of usefulness of the individual graphic variables was analysed with respect to their use in designing point symbols for mobile applications created with Augmented Reality technology. The variables were analysed in terms of four characteristics: selectivity, associativity, representation of quantity, and order. The article draws attention to the different use of perspective between traditional analogue maps (geocentric view) and

  18. How does the Augmented Reality Manual enhance cognitive activity while doing complex maintenance tasks?: Augmented Tutorial Overlaid Manual (ATOM)

    International Nuclear Information System (INIS)

    Yim, Ho Bin; Seong, Poong Hyun

    2008-01-01

    It has been more than a decade since the concept of Augmented Reality (AR) was introduced. Many related technologies needed to animate this concept, such as tracking and display, have improved to certain levels. AR is well suited for interaction with the cognitive vision system. In contrast to virtual reality, AR applications enrich the perceived reality with additional visual information, which ranges from text annotation and object highlighting to complex 3D objects. AR has been tested for its potential in various forms of applications. For example, museum visitors wear a head-mounted display (HMD) to see virtual guides explaining artifacts, and soldiers are informed about the geographical features of unfamiliar operation sites. Recently, researchers have tried to use AR as a teaching or training apparatus; however, there are still some technical obstacles to putting this fascinating technology into practice. In this study, we use Cognitive Load Theory (CLT) to design a pump maintenance manual and convert it to AR technology, proposing a prototype of an online AR maintenance manual to prove its potential as an interactive learning tool

  19. How does the Augmented Reality Manual enhance cognitive activity while doing complex maintenance tasks?: Augmented Tutorial Overlaid Manual (ATOM)

    Energy Technology Data Exchange (ETDEWEB)

    Yim, Ho Bin; Seong, Poong Hyun [Korea Advanced Institute of Technology and Science, Daejeon (Korea, Republic of)

    2008-10-15

    It has been more than a decade since the concept of Augmented Reality (AR) was introduced. Many related technologies needed to animate this concept, such as tracking and display, have improved to certain levels. AR is well suited for interaction with the cognitive vision system. In contrast to virtual reality, AR applications enrich the perceived reality with additional visual information, which ranges from text annotation and object highlighting to complex 3D objects. AR has been tested for its potential in various forms of applications. For example, museum visitors wear a head-mounted display (HMD) to see virtual guides explaining artifacts, and soldiers are informed about the geographical features of unfamiliar operation sites. Recently, researchers have tried to use AR as a teaching or training apparatus; however, there are still some technical obstacles to putting this fascinating technology into practice. In this study, we use Cognitive Load Theory (CLT) to design a pump maintenance manual and convert it to AR technology, proposing a prototype of an online AR maintenance manual to prove its potential as an interactive learning tool.

  20. Synchronous Sounds Enhance Visual Sensitivity without Reducing Target Uncertainty

    Directory of Open Access Journals (Sweden)

    Yi-Chuan Chen

    2011-10-01

    Full Text Available We examined the crossmodal effect of the presentation of a simultaneous sound on visual detection and discrimination sensitivity using the equivalent noise paradigm (Dosher & Lu, 1998). In each trial, a tilted Gabor patch was presented in either the first or second of two intervals consisting of dynamic 2D white noise with one of seven possible contrast levels. The results revealed that the sensitivities of participants' visual detection and discrimination performance were both enhanced by the presentation of a simultaneous sound, though only near the noise level at which participants' target contrast thresholds started to increase with increasing noise contrast. A further analysis of the psychometric function at this noise level revealed that the increase in sensitivity could not be explained by a reduction of participants' uncertainty regarding the onset time of the visual target. We suggest that this crossmodal facilitatory effect may be accounted for by perceptual enhancement elicited by a simultaneously presented sound, and that the crossmodal facilitation was easier to observe when the visual system encountered a level of noise that happened to be close to the level of internal noise embedded within the system.
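
    For readers unfamiliar with the paradigm, a simplified linear-amplifier reading of the equivalent noise approach (not necessarily the exact model fitted in the study) expresses the contrast threshold as a function of the external noise contrast:

    ```latex
    % Simplified linear-amplifier form of the equivalent-noise model (illustrative).
    c_{T}(N_{\mathrm{ext}}) \;=\; k\,\sqrt{N_{\mathrm{ext}}^{2} + N_{\mathrm{eq}}^{2}}
    ```

    Here c_T is the contrast threshold, N_ext the external noise contrast, N_eq the observer's equivalent internal noise, and k a constant inversely related to sampling efficiency. Thresholds stay roughly flat while N_ext is much smaller than N_eq and rise once external noise dominates, which is the "knee" of the threshold-versus-noise function near which the crossmodal benefit was observed.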

  1. Manipulating the fidelity of lower extremity visual feedback to identify obstacle negotiation strategies in immersive virtual reality.

    Science.gov (United States)

    Kim, Aram; Zhou, Zixuan; Kretch, Kari S; Finley, James M

    2017-07-01

    The ability to successfully navigate obstacles in our environment requires integration of visual information about the environment with estimates of our body's state. Previous studies have used partial occlusion of the visual field to explore how information about the body and impending obstacles is integrated to mediate a successful clearance strategy. However, because these manipulations often remove information about both the body and the obstacle, it remains to be seen how information about the lower extremities alone is utilized during obstacle crossing. Here, we used an immersive virtual reality (VR) interface to explore how visual feedback of the lower extremities influences obstacle crossing performance. Participants wore a head-mounted display while walking on a treadmill and were instructed to step over obstacles in a virtual corridor in four different feedback trials. The trials involved: (1) no visual feedback of the lower extremities, (2) an endpoint-only model, (3) a link-segment model, and (4) a volumetric multi-segment model. We found that, compared to no model, the volumetric model improved the success rate, led participants to place their trailing foot before crossing and leading foot after crossing more consistently, and led them to place their leading foot closer to the obstacle after crossing. This knowledge is critical for the design of obstacle negotiation tasks in immersive virtual environments, as it may provide information about the fidelity necessary to reproduce ecologically valid practice environments.

  2. The Influences of the 2D Image-Based Augmented Reality and Virtual Reality on Student Learning

    Science.gov (United States)

    Liou, Hsin-Hun; Yang, Stephen J. H.; Chen, Sherry Y.; Tarng, Wernhuar

    2017-01-01

    Virtual reality (VR) learning environments can provide students with concepts of the simulated phenomena, but users are not allowed to interact with real elements. Conversely, augmented reality (AR) learning environments blend real-world environments so AR could enhance the effects of computer simulation and promote students' realistic experience.…

  3. Direct Manipulation in Virtual Reality

    Science.gov (United States)

    Bryson, Steve

    2003-01-01

    Virtual Reality interfaces offer several advantages for scientific visualization such as the ability to perceive three-dimensional data structures in a natural way. The focus of this chapter is direct manipulation, the ability for a user in virtual reality to control objects in the virtual environment in a direct and natural way, much as objects are manipulated in the real world. Direct manipulation provides many advantages for the exploration of complex, multi-dimensional data sets, by allowing the investigator the ability to intuitively explore the data environment. Because direct manipulation is essentially a control interface, it is better suited for the exploration and analysis of a data set than for the publishing or communication of features found in that data set. Thus direct manipulation is most relevant to the analysis of complex data that fills a volume of three-dimensional space, such as a fluid flow data set. Direct manipulation allows the intuitive exploration of that data, which facilitates the discovery of data features that would be difficult to find using more conventional visualization methods. Using a direct manipulation interface in virtual reality, an investigator can, for example, move a data probe about in space, watching the results and getting a sense of how the data varies within its spatial volume.

  4. Attention Determines Contextual Enhancement versus Suppression in Human Primary Visual Cortex.

    Science.gov (United States)

    Flevaris, Anastasia V; Murray, Scott O

    2015-09-02

    Neural responses in primary visual cortex (V1) depend on stimulus context in seemingly complex ways. For example, responses to an oriented stimulus can be suppressed when it is flanked by iso-oriented versus orthogonally oriented stimuli but can also be enhanced when attention is directed to iso-oriented versus orthogonal flanking stimuli. Thus the exact same contextual stimulus arrangement can have completely opposite effects on neural responses: in some cases leading to orientation-tuned suppression and in other cases leading to orientation-tuned enhancement. Here we show that stimulus-based suppression and enhancement of fMRI responses in humans depend on small changes in the focus of attention and can be explained by a model that combines feature-based attention with response normalization. Neurons in the primary visual cortex (V1) respond to stimuli within a restricted portion of the visual field, termed their "receptive field." However, neuronal responses can also be influenced by stimuli that surround a receptive field, although the nature of these contextual interactions and underlying neural mechanisms are debated. Here we show that the response in V1 to a stimulus in the same context can be either suppressed or enhanced depending on the focus of attention. We are able to explain the results using a simple computational model that combines two well established properties of visual cortical responses: response normalization and feature-based enhancement. Copyright © 2015 the authors.
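
    A minimal one-layer sketch of the kind of model invoked above, feature-based attentional gain combined with response normalization in the spirit of Reynolds and Heeger, is given below; the drive values, pooling rule and gains are illustrative assumptions rather than the authors' fitted parameters.

    ```python
    # Minimal sketch: normalization with feature-based attentional gain.
    import numpy as np

    def normalized_response(excitatory_drive, attention_gain, sigma=1.0):
        """Arrays are indexed by (stimulus location, orientation channel)."""
        drive = attention_gain * excitatory_drive
        # The suppressive drive pools the attention-weighted drive over space and orientation.
        suppression = drive.sum()
        return drive / (sigma + suppression)

    # Same stimulus arrangement (center plus two iso-oriented flankers) under two
    # attentional states: attending the flankers raises their gain and lowers the
    # normalized center response relative to a neutral baseline.
    E = np.array([[1.0, 0.2],    # center
                  [0.8, 0.2],    # flanker 1
                  [0.8, 0.2]])   # flanker 2
    neutral = normalized_response(E, np.ones_like(E))
    attend_flankers = normalized_response(E, np.array([[1.0, 1.0], [2.0, 1.0], [2.0, 1.0]]))
    print(neutral[0, 0], attend_flankers[0, 0])   # center response drops when flankers are attended
    ```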

  5. Transduction between worlds: using virtual and mixed reality for earth and planetary science

    Science.gov (United States)

    Hedley, N.; Lochhead, I.; Aagesen, S.; Lonergan, C. D.; Benoy, N.

    2017-12-01

    Virtual reality (VR) and augmented reality (AR) have the potential to transform the way we visualize multidimensional geospatial datasets in support of geoscience research, exploration and analysis. The beauty of virtual environments is that they can be built at any scale, users can view them at many levels of abstraction, move through them in unconventional ways, and experience spatial phenomena as if they had superpowers. Similarly, augmented reality allows you to bring the power of virtual 3D data visualizations into everyday spaces. Spliced together, these interface technologies hold incredible potential to support 21st-century geoscience. In my ongoing research, my team and I have made significant advances to connect data and virtual simulations with real geographic spaces, using virtual environments, geospatial augmented reality and mixed reality. These research efforts have yielded new capabilities to connect users with spatial data and phenomena. These innovations include: geospatial x-ray vision; flexible mixed reality; augmented 3D GIS; situated augmented reality 3D simulations of tsunamis and other phenomena interacting with real geomorphology; augmented visual analytics; and immersive GIS. These new modalities redefine the ways in which we can connect digital spaces of spatial analysis, simulation and geovisualization, with geographic spaces of data collection, fieldwork, interpretation and communication. In a way, we are talking about transduction between real and virtual worlds. Taking a mixed reality approach to this, we can link real and virtual worlds. This paper presents a selection of our 3D geovisual interface projects in terrestrial, coastal, underwater and other environments. Using rigorous applied geoscience data, analyses and simulations, our research aims to transform the novelty of virtual and augmented reality interface technologies into game-changing mixed reality geoscience.

  6. Virtual reality training to enhance behavior and cognitive function among children with attention-deficit/hyperactivity disorder: brief report.

    Science.gov (United States)

    Shema-Shiratzky, Shirley; Brozgol, Marina; Cornejo-Thumm, Pablo; Geva-Dayan, Karen; Rotstein, Michael; Leitner, Yael; Hausdorff, Jeffrey M; Mirelman, Anat

    2018-05-17

    To examine the feasibility and efficacy of a combined motor-cognitive training using virtual reality to enhance behavior, cognitive function and dual-tasking in children with Attention-Deficit/Hyperactivity Disorder (ADHD). Fourteen non-medicated school-aged children with ADHD received 18 training sessions during 6 weeks. Training included walking on a treadmill while negotiating virtual obstacles. Behavioral symptoms, cognition and gait were tested before and after the training and at 6-week follow-up. Based on parental report, there was a significant improvement in children's social problems and psychosomatic behavior after the training. Executive function and memory improved post-training, while attention was unchanged. Gait regularity significantly increased during dual-task walking. Long-term training effects were maintained in memory and executive function. Treadmill training augmented with virtual reality is feasible and may be an effective treatment to enhance behavior, cognitive function and dual-tasking in children with ADHD.

  7. Cholinergic pairing with visual activation results in long-term enhancement of visual evoked potentials.

    Directory of Open Access Journals (Sweden)

    Jun Il Kang

    Full Text Available Acetylcholine (ACh) contributes to learning processes by modulating cortical plasticity in terms of intensity of neuronal activity and selectivity properties of cortical neurons. However, it is not known if ACh induces long-term effects within the primary visual cortex (V1) that could sustain visual learning mechanisms. In the present study we analyzed visual evoked potentials (VEPs) in V1 of rats during a 4-8 h period after coupling visual stimulation to an intracortical injection of the ACh analog carbachol or stimulation of the basal forebrain. To clarify the action of ACh on VEP activity in V1, we individually pre-injected muscarinic (scopolamine), nicotinic (mecamylamine), alpha7 (methyllycaconitine), and NMDA (CPP) receptor antagonists before carbachol infusion. Stimulation of the cholinergic system paired with visual stimulation significantly increased VEP amplitude (56%) during a 6 h period. Pre-treatment with scopolamine, mecamylamine and CPP completely abolished this long-term enhancement, while alpha7 inhibition induced an instant increase of VEP amplitude. This suggests a role of ACh in facilitating visual stimuli responsiveness through mechanisms comparable to LTP, which involve nicotinic and muscarinic receptors with an interaction of NMDA transmission in the visual cortex.

  8. Visualization of disciplinary profiles: Enhanced science overlay maps

    NARCIS (Netherlands)

    Carley, S.; Porter, A.L.; Rafols, I.; Leydesdorff, L.

    Purpose: The purpose of this study is to modernize previous work on science overlay maps by updating the underlying citation matrix, generating new clusters of scientific disciplines, enhancing visualizations, and providing more accessible means for analysts to generate their own maps.

  9. Applied Augmented Reality for High Precision Maintenance

    Science.gov (United States)

    Dever, Clark

    Augmented Reality had a major consumer breakthrough this year with Pokemon Go. The underlying technologies that made that app a success with gamers can be applied to improve the efficiency and efficacy of workers. This session will explore some of the use cases for augmented reality in an industrial environment. In doing so, the environmental impacts and human factors that must be considered will be explored. Additionally, the sensors, algorithms, and visualization techniques used to realize augmented reality will be discussed. The benefits of augmented reality solutions in industrial environments include automated data recording, improved quality assurance, reduction in training costs and improved mean-time-to-resolution. As technology continues to follow Moore's law, more applications will become feasible as performance-per-dollar increases across all system components.

  10. Crime Scenes as Augmented Reality

    DEFF Research Database (Denmark)

    Sandvik, Kjetil

    2010-01-01

    Using the concept of augmented reality, this article will investigate how places in various ways have become augmented by means of different mediatization strategies. Augmentation of reality implies an enhancement of the places' emotional character: a certain mood, atmosphere or narrative surplus ..., physical damage: they are all readable and interpretable signs. As augmented reality the crime scene carries a narrative which at first is hidden and must be revealed. Due to the process of investigation and the detective's ability to reason and deduce, the crime scene as place is reconstructed as virtual...

  11. Social facilitation in virtual reality-enhanced exercise: competitiveness moderates exercise effort of older adults

    Directory of Open Access Journals (Sweden)

    Anderson-Hanley C

    2011-10-01

    Full Text Available This study examined the effect of virtual social facilitation and competitiveness on exercise effort in exergaming older adults. Fourteen exergaming older adults participated. Competitiveness was assessed prior to the start of exercise. Participants were trained to ride a “cybercycle”, a virtual reality-enhanced stationary bike with interactive competition. After establishing a cybercycling baseline, competitive avatars were introduced. Pedaling effort (watts) was assessed. Repeated measures ANOVA revealed a significant group (high vs low competitiveness) × time (pre- to post-avatar) interaction (F[1,12] = 13.1, P = 0.003). Virtual social facilitation increased exercise effort among more competitive exercisers. Exercise programs that match competitiveness may maximize exercise effort. Keywords: exercise, aging, virtual reality, competitiveness, social facilitation, exercise intensity

  12. Location-Based Learning through Augmented Reality

    Science.gov (United States)

    Chou, Te-Lien; Chanlin, Lih-Juan

    2014-01-01

    A context-aware and mixed-reality exploring tool cannot only effectively provide an information-rich environment to users, but also allows them to quickly utilize useful resources and enhance environment awareness. This study integrates Augmented Reality (AR) technology into smartphones to create a stimulating learning experience at a university…

  13. Stereoscopic augmented reality for laparoscopic surgery.

    Science.gov (United States)

    Kang, Xin; Azizian, Mahdi; Wilson, Emmanuel; Wu, Kyle; Martin, Aaron D; Kane, Timothy D; Peters, Craig A; Cleary, Kevin; Shekhar, Raj

    2014-07-01

    Conventional laparoscopes provide a flat representation of the three-dimensional (3D) operating field and are incapable of visualizing internal structures located beneath visible organ surfaces. Computed tomography (CT) and magnetic resonance (MR) images are difficult to fuse in real time with laparoscopic views due to the deformable nature of soft-tissue organs. Utilizing emerging camera technology, we have developed a real-time stereoscopic augmented-reality (AR) system for laparoscopic surgery by merging live laparoscopic ultrasound (LUS) with stereoscopic video. The system creates two new visual cues: (1) perception of true depth with improved understanding of 3D spatial relationships among anatomical structures, and (2) visualization of critical internal structures along with a more comprehensive visualization of the operating field. The stereoscopic AR system has been designed for near-term clinical translation with seamless integration into the existing surgical workflow. It is composed of a stereoscopic vision system, a LUS system, and an optical tracker. Specialized software processes streams of imaging data from the tracked devices and registers those in real time. The resulting two ultrasound-augmented video streams (one for the left and one for the right eye) give a live stereoscopic AR view of the operating field. The team conducted a series of stereoscopic AR interrogations of the liver, gallbladder, biliary tree, and kidneys in two swine. The preclinical studies demonstrated the feasibility of the stereoscopic AR system during in vivo procedures. Major internal structures could be easily identified. The system exhibited unobservable latency with acceptable image-to-video registration accuracy. We presented the first in vivo use of a complete system with stereoscopic AR visualization capability. This new capability introduces new visual cues and enhances visualization of the surgical anatomy. The system shows promise to improve the precision and
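
    The core real-time registration step, expressing a tracked ultrasound pixel in the stereoscopic camera frame via the optical tracker, can be sketched as a chain of homogeneous transforms. The sketch below assumes the pixel-to-probe calibration and the tracker poses are already available as 4x4 matrices; the names are illustrative, not the authors' software interfaces.

    ```python
    # Minimal sketch: map a laparoscopic-ultrasound pixel into the camera frame.
    import numpy as np

    def us_pixel_to_camera(px, py, pixel_to_probe, tracker_T_probe, tracker_T_camera):
        """Return the 3-D position of US pixel (px, py) expressed in the camera frame."""
        p_probe = pixel_to_probe @ np.array([px, py, 0.0, 1.0])   # image plane -> probe
        p_tracker = tracker_T_probe @ p_probe                     # probe -> tracker/world
        camera_T_tracker = np.linalg.inv(tracker_T_camera)        # tracker/world -> camera
        p_camera = camera_T_tracker @ p_tracker
        return p_camera[:3] / p_camera[3]
    ```

    Projecting the resulting point with each eye's camera intrinsics then yields the overlay position in the left and right video streams, which is what produces the stereoscopic depth cue described above.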

  14. Virtual Reality as Innovative Approach to the Interior Designing

    Science.gov (United States)

    Kaleja, Pavol; Kozlovská, Mária

    2017-06-01

    We can observe significant potential of information and communication technologies (ICT) in the interior design field through the development of software and hardware virtual reality tools. Using ICT tools offers a realistic perception of the proposal at its initial stage (the study). Real-time visualization, supported by hardware tools such as the Oculus Rift and HTC Vive, provides free walkthrough and movement in a virtual interior with the possibility of virtual designing. By improving ICT software tools for designing in virtual reality, we can achieve a still more realistic virtual environment. The contribution presents a proposal for an innovative approach to interior design in virtual reality, using the latest software and hardware ICT virtual reality technologies

  15. Visual Hallucinations as Incidental Negative Effects of Virtual Reality on Parkinson’s Disease Patients: A Link with Neurodegeneration?

    Directory of Open Access Journals (Sweden)

    Giovanni Albani

    2015-01-01

    Full Text Available We followed up a series of 23 Parkinson’s disease (PD) patients who had performed an immersive virtual reality (VR) protocol eight years before. On that occasion, six patients incidentally described visual hallucinations (VH) with occurrences of images not included in the virtual environment. Curiously, in the following years, only these patients reported the appearance of VH later in their clinical history, while the rest of the group did not. Even considering the limited sample size, we may argue that VR immersive systems can induce unpleasant effects in PD patients who are predisposed to a cognitive impairment.

  16. Virtual reality in pediatric psychology

    OpenAIRE

    Parsons, T. D.; Riva, G.; Parsons, S. J.; Mantovani, F.; Newbutt, N.; Lin, L.; Venturini, E.; Hall, T.

    2017-01-01

    Virtual reality technologies allow for controlled simulations of affectively engaging background narratives. These virtual environments offer promise for enhancing emotionally relevant experiences and social interactions. Within this context virtual reality can allow instructors, therapists, neuropsychologists, and service providers to offer safe, repeatable, and diversifiable interventions that can benefit assessments and learning in both typically developing children and children with disab...

  17. The Rhetoric of the Real: Stereotypes of Rural Youth in American Reality Television and Stock Photography

    Science.gov (United States)

    Massey, Carissa

    2017-01-01

    Through an examination of the visual rhetoric of identity presented by reality shows, especially "Here Comes Honey Boo Boo," this paper explores ways in which American reality television and related media images construct, deploy, and reiterate visual stereotypes about whites from rural regions of the United States. Its focus is the…

  18. Who’s That Girl? Handheld Augmented Reality for Printed Photo Books

    OpenAIRE

    Henze , Niels; Boll , Susanne

    2011-01-01

    Augmented reality on mobile phones has recently made major progress. Lightweight, markerless object recognition and tracking makes handheld Augmented Reality feasible for new application domains. As this field is technology driven, the interface design has mostly been neglected. In this paper we investigate visualization techniques for augmenting printed documents using handheld Augmented Reality. We selected the augmentation of printed ph...

  19. Simulators and virtual reality in surgical education.

    Science.gov (United States)

    Chou, Betty; Handa, Victoria L

    2006-06-01

    This article explores the pros and cons of virtual reality simulators, their ability to train and assess surgical skills, and their potential future applications. Computer-based virtual reality simulators and more conventional box trainers are compared and contrasted. The virtual reality simulator provides objective assessment of surgical skills and immediate feedback to further enhance training. With this ability to provide standardized, unbiased assessment of surgical skills, the virtual reality trainer has the potential to be a tool for selecting, instructing, certifying, and recertifying gynecologists.

  20. Display technologies for augmented reality

    Science.gov (United States)

    Lee, Byoungho; Lee, Seungjae; Jang, Changwon; Hong, Jong-Young; Li, Gang

    2018-02-01

    Owing to rapid progress in optics, sensors, and computer science, we are witnessing commercial products and prototypes for augmented reality (AR) penetrating the consumer market. AR is in the spotlight because it is expected to provide a much more immersive and realistic experience than ordinary displays. However, there are several barriers to be overcome for successful commercialization of AR. Here, we explore challenging and important topics for AR such as image combiners, enhancement of display performance, and focus cue reproduction. Image combiners are essential to integrate virtual images with the real world. Display performance (e.g. field of view and resolution) is important for a more immersive experience, and focus cue reproduction may mitigate visual fatigue caused by the vergence-accommodation conflict. We also demonstrate emerging technologies to overcome these issues: the index-matched anisotropic crystal lens (IMACL), retinal projection displays, and 3D displays with focus cues. For image combiners, a novel optical element called the IMACL provides a relatively wide field of view. Retinal projection displays may enhance the field of view and resolution of AR displays. Focus cues can be reconstructed via multi-layer displays and holographic displays. Experimental results of our prototypes are explained.
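
    As a back-of-the-envelope illustration of the vergence-accommodation conflict mentioned above, the mismatch is commonly expressed in diopters as the difference between the reciprocal of the display's focal distance and the reciprocal of the virtual object's distance; the values below are illustrative.

    ```python
    # Illustrative sketch: vergence-accommodation mismatch in diopters.
    def va_conflict_diopters(focal_plane_m, virtual_object_m):
        return abs(1.0 / focal_plane_m - 1.0 / virtual_object_m)

    # Fixed 2 m focal plane, virtual object rendered at 0.5 m: 1.5 D of conflict.
    print(va_conflict_diopters(focal_plane_m=2.0, virtual_object_m=0.5))
    ```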

  1. Multi-layered see-through movie in diminished reality

    Science.gov (United States)

    Uematsu, Yuko; Hashimoto, Takanori; Inoue, Takuya; Shimizu, Naoki; Saito, Hideo

    2012-03-01

    This paper presents the generation of a multi-layered see-through movie for an auto-stereoscopic display. This work is based on Diminished Reality (DR), which is one of the research fields of Augmented Reality (AR). In usual AR, some virtual objects are added to the real world. On the other hand, DR removes some real objects from the real world. Therefore, the background is visualized instead of the real objects (obstacles) to be removed. We use multiple color cameras and one TOF depth camera. The areas of the obstacles are defined by using the depth camera, based on the distance of the obstacles. The background behind the obstacles is recovered by planar projection from multiple cameras. Then, the recovered background is overlaid onto the removed obstacles. For visualization on the auto-stereoscopic display, the scene is divided into multiple layers such as obstacles and background. The pixels corresponding to the obstacles are either not visualized or visualized semi-transparently at the central viewpoints. Therefore, we can see that the obstacles are diminished according to the viewpoint.
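
    The per-viewpoint compositing described above, in which depth-selected obstacle pixels are replaced fully or semi-transparently by the recovered background, can be sketched as follows; the arrays, the depth threshold and the alpha schedule are illustrative assumptions rather than the authors' implementation.

    ```python
    # Minimal sketch: obstacle removal by depth masking and background substitution.
    import numpy as np

    def diminish(color, depth, background, obstacle_max_depth=1.5, alpha=0.0):
        """alpha = 0 removes the obstacle entirely; 0 < alpha < 1 keeps it semi-transparent."""
        obstacle = (depth < obstacle_max_depth)[..., None]          # (H, W, 1) boolean mask
        blended = alpha * color + (1.0 - alpha) * background        # see-through mixture
        return np.where(obstacle, blended, color).astype(color.dtype)
    ```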

  2. Aplikasi Web Augmented Reality Villa

    Directory of Open Access Journals (Sweden)

    Gede Yudha Prema Pangestu

    2017-07-01

    Full Text Available Bali is one of the most highly developed tourist destinations in Indonesia. The arrival of tourists holidaying in Bali has increased the need for accommodation with complete amenities. The occupancy rates of hotels and villas in Bali increase significantly during long vacations. The emergence of new villas and hotels raises the level of business competition, so a well-chosen marketing communication strategy is needed to market the product and attract the attention of consumers. The Web Application Augmented Reality Villa can help visualize a residential villa in three-dimensional form, which looks more attractive and practical. Combining brochures as written information with augmented reality technology, the Web Application Augmented Reality Villa aims to provide information about the villa to visitors. The Web Application Augmented Reality Villa was designed using the FlarToolkit library. The test results show that the application can display three-dimensional villa objects by scanning the markers contained in a brochure.

  3. APPLICATION OF AUGMENTED REALITY IN FACADE REDESIGN PRESENTATION

    Directory of Open Access Journals (Sweden)

    PEJIĆ Petar

    2015-06-01

    Full Text Available Augmented Reality (AR) is a computer technology in which the user's perception is enhanced by the seamless blending of the real environment with computer-generated virtual objects coexisting in the same space. When it comes to a redesign of an existing facade, it is necessary to create a visual presentation of the proposed changes. For this reason, the contemporary architectural approach assumes creating a digital 3D model of the newly designed facade. Since a façade redesign is a real-world change, AR can be used for the presentation of the newly designed facade project. In this paper, a case study of the application of AR for the facade redesign presentation of a single-family house located in Babušnica (Serbia) is presented.

  4. Virtual reality hardware for use in interactive 3D data fusion and visualization

    Science.gov (United States)

    Gourley, Christopher S.; Abidi, Mongi A.

    1997-09-01

    Virtual reality has become a tool for use in many areas of research. We have designed and built a VR system for use in range data fusion and visualization. One major VR tool is the CAVE. This is the ultimate visualization tool, but it comes with a large price tag. Our design uses a unique CAVE whose graphics are powered by a desktop computer instead of a larger rack machine, making it much less costly. The system consists of a screen eight feet tall by twenty-seven feet wide, giving a variable field of view currently set at 160 degrees. A Silicon Graphics Indigo2 MaxImpact with the impact channel option is used for display. This gives the capability to drive three projectors at a resolution of 640 by 480 for displaying the virtual environment and one 640 by 480 display for a user control interface. This machine is also the first desktop package with built-in hardware texture mapping. This feature allows us to quickly fuse the range and intensity data and other multi-sensory data. The final goal is a complete 3D texture-mapped model of the environment. A dataglove, magnetic tracker, and spaceball are to be used for manipulation of the data and navigation through the virtual environment. This system gives several users the ability to interactively create 3D models from multiple range images.

  5. Sensation of presence and cybersickness in applications of virtual reality for advanced rehabilitation.

    Science.gov (United States)

    Kiryu, Tohru; So, Richard H Y

    2007-09-25

    Around three years ago, in the special issue on augmented and virtual reality in rehabilitation, the topic of simulator sickness was briefly discussed in relation to vestibular rehabilitation. Simulator sickness with virtual reality applications has also been referred to as visually induced motion sickness or cybersickness. Recently, studies on cybersickness have been reported in entertainment, training, gaming, and medical settings in several journals. Virtual stimuli can enhance the sensation of presence, but they sometimes also evoke unpleasant sensations. In order to safely apply augmented and virtual reality to long-term rehabilitation treatment, the sensation of presence and cybersickness should be appropriately controlled. This issue presents the results of five studies conducted to evaluate visually induced effects and to speculate on the influence of virtual rehabilitation. In particular, the influence of visual and vestibular stimuli on cardiovascular responses is reported in terms of academic contribution.

  6. Sensation of presence and cybersickness in applications of virtual reality for advanced rehabilitation

    Directory of Open Access Journals (Sweden)

    So Richard HY

    2007-09-01

    Full Text Available Abstract Around three years ago, in the special issue on augmented and virtual reality in rehabilitation, the topic of simulator sickness was briefly discussed in relation to vestibular rehabilitation. Simulator sickness with virtual reality applications has also been referred to as visually induced motion sickness or cybersickness. Recently, studies on cybersickness have been reported in entertainment, training, gaming, and medical settings in several journals. Virtual stimuli can enhance the sensation of presence, but they sometimes also evoke unpleasant sensations. In order to safely apply augmented and virtual reality to long-term rehabilitation treatment, the sensation of presence and cybersickness should be appropriately controlled. This issue presents the results of five studies conducted to evaluate visually induced effects and to speculate on the influence of virtual rehabilitation. In particular, the influence of visual and vestibular stimuli on cardiovascular responses is reported in terms of academic contribution.

  7. Virtual Reality in education and for employability

    OpenAIRE

    Minocha, Shailey; Tudor, Ana-Despina

    2017-01-01

    Virtual reality is becoming pervasive in several domains - in arts and film-making, for environmental causes, in medical education, in disaster management training, in sports broadcasting, in entertainment, and in supporting patients with dementia. An awareness of virtual reality technology and its integration in curriculum design will provide and enhance employability skills for current and future workplaces. In this webinar, we will describe the evolution of virtual reality technolog...

  8. VIRTUAL REALITY AS A SPHERE OF FICTIONS

    OpenAIRE

    V. A. Abramova

    2017-01-01

    In post-nonclassical science, the study of spontaneous systems requires taking into account the narrow orientation of perception in the solution of specific tasks, in this context the perception of symbolic transformations at various levels (subjective and objective). Virtual reality, now widespread thanks to the enhancement of information and communication technologies, consists of hypertrophied effects of the virtualization of reality, in which the virtual image has nothing in common with reality, ...

  9. Multifocal visual evoked responses to dichoptic stimulation using virtual reality goggles: Multifocal VER to dichoptic stimulation.

    Science.gov (United States)

    Arvind, Hemamalini; Klistorner, Alexander; Graham, Stuart L; Grigg, John R

    2006-05-01

    Multifocal visual evoked potentials (mfVEPs) have demonstrated good diagnostic capabilities in glaucoma and optic neuritis. This study aimed to evaluate the possibility of simultaneously recording mfVEPs for both eyes with dichoptic stimulation using virtual reality goggles, and to determine the stimulus characteristics that yield maximum amplitude. Ten healthy volunteers were recruited, and temporally sparse pattern pulse stimuli were presented dichoptically using virtual reality goggles. Experiment 1 involved recording responses to dichoptically presented checkerboard stimuli and confirming true topographic representation by switching off specific segments. Experiment 2 involved monocular stimulation and comparison of amplitude with Experiment 1. In Experiment 3, orthogonally oriented gratings were dichoptically presented. Experiment 4 involved dichoptic presentation of checkerboard stimuli at different levels of sparseness (5.0 times/s, 2.5 times/s, 1.66 times/s and 1.25 times/s), where stimulation of corresponding segments of the two eyes was separated by 16.7, 66.7, 116.7 and 166.7 ms, respectively. Experiment 1 demonstrated good traces in all regions and confirmed topographic representation. However, the amplitude of responses to dichoptic stimulation was suppressed by 17.9 +/- 5.4% compared with monocular stimulation. Experiment 3 demonstrated similar suppression between orthogonal and checkerboard stimuli (p = 0.08). Experiment 4 demonstrated maximum amplitude and least suppression (4.8%) with stimulation at 1.25 times/s with 166.7 ms separation between eyes. It is possible to record mfVEPs for both eyes during dichoptic stimulation using virtual reality goggles, which present binocular simultaneous patterns driven by independent sequences. Interocular suppression can be almost eliminated by using a temporally sparse stimulus of 1.25 times/s with a separation of 166.7 ms between stimulation of corresponding segments of the two eyes.

  10. Game-Based Evacuation Drill Using Augmented Reality and Head-Mounted Display

    Science.gov (United States)

    Kawai, Junya; Mitsuhara, Hiroyuki; Shishibori, Masami

    2016-01-01

    Purpose: Evacuation drills should be more realistic and interactive. Focusing on situational and audio-visual realities and scenario-based interactivity, the authors have developed a game-based evacuation drill (GBED) system that presents augmented reality (AR) materials on tablet computers. The paper's current research purpose is to improve…

  11. Preoperative surgical planning and simulation of complex cranial base tumors in virtual reality

    Institute of Scientific and Technical Information of China (English)

    YI Zhi-qiang; LI Liang; MO Da-peng; ZHANG Jia-yong; ZHANG Yang; BAO Sheng-de

    2008-01-01

    The extremely complex anatomic relationships among bone, tumor, blood vessels and cranial nerves remain a big challenge for cranial base tumor surgery. Therefore, a good understanding of the patient-specific anatomy and preoperative planning are helpful and crucial for neurosurgeons. Three-dimensional (3-D) visualization of various imaging techniques has been widely explored to enhance the comprehension of volumetric data for surgical planning.1 We used the Dextroscope Virtual Reality (VR) System (Singapore, Volume Interaction Pte Ltd; software: RadioDexter™ 1.0) to optimize preoperative planning for complex cranial base tumors. This system uses patient-specific, coregistered, fused radiology data sets that may be viewed stereoscopically and can be manipulated in a virtual reality environment. This article describes our experience with the Dextroscope VR system in preoperative surgical planning and simulation for 5 patients with complex cranial base tumors and evaluates the clinical usefulness of this system.

  12. System architecture of a mixed reality framework

    OpenAIRE

    Seibert, Helmut; Dähne, Patrick

    2006-01-01

    In this paper, the software architecture of a framework which simplifies the development of applications in the area of Virtual and Augmented Reality is presented. It is based on VRML/X3D to enable rendering of audio-visual information. We extended our VRML rendering system with a device management system that is based on the concept of a data-flow graph. The aim of the system is to create Mixed Reality (MR) applications simply by plugging together small prefabricated software components, instea...
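    The device-management idea described above, wiring small prefabricated components into a data-flow graph, can be illustrated with a minimal sketch (Python here for brevity; the node class and the fake tracker are invented and are not part of the framework's API).

    ```python
    # Minimal sketch of a data-flow graph for device management: each node
    # pulls the outputs of its upstream nodes and produces a value that a
    # downstream node (e.g. a binding to a VRML/X3D Transform) can consume.
    class Node:
        def __init__(self, func, *inputs):
            self.func = func
            self.inputs = inputs

        def evaluate(self):
            return self.func(*[n.evaluate() for n in self.inputs])

    # Invented example components: a fake tracker source, a unit conversion,
    # and a sink that stands in for a scene-graph update.
    tracker = Node(lambda: (0.1, 0.2, 0.5))                          # raw device position (metres)
    to_scene = Node(lambda p: tuple(c * 100.0 for c in p), tracker)  # metres -> scene units
    sink = Node(lambda p: print(f"update Transform.translation = {p}"), to_scene)

    sink.evaluate()  # one pump of the graph; a real system would run this every frame
    ```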

  13. Development of a Real-Time Detection System for Augmented Reality Driving

    OpenAIRE

    Hsu, Kuei-Shu; Wang, Chia-Sui; Jiang, Jinn-Feng; Wei, Hung-Yuan

    2015-01-01

    Augmented reality technology is applied so that driving tests may be performed in various environments using a virtual reality scenario with the ultimate goal of improving visual and interactive effects of simulated drivers. Environmental conditions simulating a real scenario are created using an augmented reality structure, which guarantees the test taker’s security since they are not subject to real-life elements and dangers. Furthermore, the accuracy of tests conducted through virtual real...

  14. Supporting Knowledge Integration in Chemistry with a Visualization-Enhanced Inquiry Unit

    Science.gov (United States)

    Chiu, Jennifer L.; Linn, Marcia C.

    2014-01-01

    This paper describes the design and impact of an inquiry-oriented online curriculum that takes advantage of dynamic molecular visualizations to improve students' understanding of chemical reactions. The visualization-enhanced unit uses research-based guidelines following the knowledge integration framework to help students develop coherent…

  15. Augmented Reality as a Countermeasure for Sleep Deprivation.

    Science.gov (United States)

    Baumeister, James; Dorrian, Jillian; Banks, Siobhan; Chatburn, Alex; Smith, Ross T; Carskadon, Mary A; Lushington, Kurt; Thomas, Bruce H

    2016-04-01

    Sleep deprivation is known to have serious deleterious effects on executive functioning and job performance. Augmented reality has the ability to place pertinent information at the fore, guiding visual focus and reducing instructional complexity. This paper presents a study exploring how spatial augmented reality instructions affect procedural task performance in sleep-deprived users. The user study examined performance on a procedural task at six time points over the course of a night of total sleep deprivation. Tasks were provided either by spatial augmented reality-based projections or on an adjacent monitor. The results indicate that participant errors increased significantly in the monitor condition when sleep deprived. The augmented reality condition exhibited a positive influence, with participant errors and completion time showing no significant increase when sleep deprived. The results of our study show that spatial augmented reality is an effective sleep deprivation countermeasure under laboratory conditions.

  16. Embryonic staging using a 3D virtual reality system

    NARCIS (Netherlands)

    C.M. Verwoerd-Dikkeboom (Christine); A.H.J. Koning (Anton); P.J. van der Spek (Peter); N. Exalto (Niek); R.P.M. Steegers-Theunissen (Régine)

    2008-01-01

    BACKGROUND: The aim of this study was to demonstrate that Carnegie Stages could be assigned to embryos visualized with a 3D virtual reality system. METHODS: We analysed 48 3D ultrasound scans of 19 IVF/ICSI pregnancies at 7-10 weeks' gestation. These datasets were visualized as 3D

  17. Visual hallucinations in schizophrenia: confusion between imagination and perception.

    Science.gov (United States)

    Brébion, Gildas; Ohlsen, Ruth I; Pilowsky, Lyn S; David, Anthony S

    2008-05-01

    An association between hallucinations and reality-monitoring deficit has been repeatedly observed in patients with schizophrenia. Most data concern auditory/verbal hallucinations. The aim of this study was to investigate the association between visual hallucinations and a specific type of reality-monitoring deficit, namely confusion between imagined and perceived pictures. Forty-one patients with schizophrenia and 43 healthy control participants completed a reality-monitoring task. Thirty-two items were presented either as written words or as pictures. After the presentation phase, participants had to recognize the target words and pictures among distractors, and then remember their mode of presentation. All groups of participants recognized the pictures better than the words, except the patients with visual hallucinations, who presented the opposite pattern. The participants with visual hallucinations made more misattributions to pictures than did the others, and higher ratings of visual hallucinations were correlated with increased tendency to remember words as pictures. No association with auditory hallucinations was revealed. Our data suggest that visual hallucinations are associated with confusion between visual mental images and perception.

  18. Telescopic multi-resolution augmented reality

    Science.gov (United States)

    Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold

    2014-05-01

    To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework that is based on known physics, biology, and chemistry principles. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system that is driven by real-world measurements. Such an interdisciplinary approach enables us not only to achieve multi-scale, telescopic visualization of real and virtual information, but also to conduct thought experiments through AR. As a result of this consistency, the framework allows us to explore a large-dimensionality parameter space of measured and unmeasured regions. Toward this goal, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interactions requires identification of observable image features that can indicate the presence of information in multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training as well as 'make believe' entertainment industries in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, to be pedagogically visualized in consistent, zoom-in-and-out, multi-scale approximations.

  19. Virtual reality simulators and training in laparoscopic surgery.

    Science.gov (United States)

    Yiannakopoulou, Eugenia; Nikiteas, Nikolaos; Perrea, Despina; Tsigris, Christos

    2015-01-01

    Virtual reality simulators provide basic skills training without supervision in a controlled environment, free of the pressure of operating on patients. Skills obtained through virtual reality simulation training can be transferred to the operating room. However, relative evidence is limited, with data available only for basic surgical skills and for laparoscopic cholecystectomy. No data exist on the effect of virtual reality simulation on performance of advanced surgical procedures. Evidence suggests that performance on virtual reality simulators reliably distinguishes experienced from novice surgeons. Limited available data suggest that an independent approach to virtual reality simulation training is not different from a proctored approach. The effect of virtual reality simulator training on the acquisition of basic surgical skills does not seem to differ from that of physical simulators. Limited data exist on the effect of virtual reality simulation training on the acquisition of visual-spatial perception and stress-coping skills. Undoubtedly, virtual reality simulation training provides an alternative means of improving performance in laparoscopic surgery. However, future research efforts should focus on the effect of virtual reality simulation on performance in the context of advanced surgical procedures, on standardization of training, on the possibility of a synergistic effect of virtual reality simulation training combined with mental training, and on personalized training. Copyright © 2014 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  20. The Internet and medical collaboration using virtual reality.

    Science.gov (United States)

    Liang, Wen Yau; O'Grady, Peter

    2003-01-01

    Computed Tomography (CT) provides a large amount of data but the presentation of the data to a physician can be less than satisfactory. Ideally, the image data should be available to physicians in interactive 3D to allow for improved visualization, planning and diagnosis. A virtual reality representation that not only allows for the manipulation of the image but also allows for the user to, in effect, move inside the image remotely would be ideal. In this paper the research associated with virtual reality is discussed. A formalism is then presented to create, from the CT data, the virtual reality world in the Virtual Reality Modeling Language. An implementation is described of this formalism that uses the Internet to allow for users in remote locations to view and manipulate the virtual worlds.
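    As a rough sketch of the kind of formalism described here (CT volume in, a VRML world out), the snippet below extracts an isosurface with marching cubes and writes a minimal VRML 2.0 IndexedFaceSet. It assumes a recent scikit-image, a NumPy volume on disk, and an arbitrary isovalue; it is not the authors' formalism.

    ```python
    # Hypothetical sketch: CT volume -> isosurface mesh -> minimal VRML 2.0 file.
    import numpy as np
    from skimage import measure  # requires scikit-image

    volume = np.load("ct_volume.npy")          # 3D array of CT intensities (placeholder file)
    verts, faces, _, _ = measure.marching_cubes(volume, level=300)  # isovalue in HU, assumed

    with open("ct_surface.wrl", "w") as f:
        f.write("#VRML V2.0 utf8\n")
        f.write("Shape { geometry IndexedFaceSet {\n  coord Coordinate { point [\n")
        for v in verts:
            f.write(f"    {v[0]:.2f} {v[1]:.2f} {v[2]:.2f},\n")
        f.write("  ] }\n  coordIndex [\n")
        for tri in faces:
            f.write(f"    {tri[0]}, {tri[1]}, {tri[2]}, -1,\n")
        f.write("  ]\n} }\n")
    ```

    A VRML browser, or a remote viewer of the kind discussed in the paper, could then load ct_surface.wrl and let the user navigate the reconstructed surface.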

  1. Rapid visualization of latent fingermarks using gold seed-mediated enhancement

    Directory of Open Access Journals (Sweden)

    Chia-Hao Su

    2016-11-01

    Full Text Available Abstract Background Fingermarks are one of the most important and useful forms of physical evidence in forensic investigations. However, latent fingermarks are not directly visible, but can be visualized owing to the presence of other residues (such as inorganic salts, proteins, polypeptides, enzymes and human metabolites), which can be detected or recognized through various strategies. Convenient and rapid techniques are still needed that provide obvious contrast between the background and the fingermark ridges and thus visualize latent fingermarks with a high degree of selectivity and sensitivity. Results In this work, lysozyme-binding aptamer-conjugated Au nanoparticles (NPs) are used to recognize and target lysozyme in the fingermark ridges, and an Au+-complex solution is used as a growth agent to reduce Au+ to Au0 on the surface of the Au NPs. Distinct fingermark patterns were visualized on a range of forensic substrates within 3 min; the resulting images could be observed by the naked eye without background interference. The entire process from fingermark collection to visualization entails only two steps and can be completed in less than 10 min. The proposed method provides cost and time savings over current fingermark visualization methods. Conclusions We report a simple, inexpensive, and fast method for the rapid visualization of latent fingermarks on non-porous substrates using Au seed-mediated enhancement. Au seed-mediated enhancement achieves rapid visualization of latent fingermarks on non-porous substrates by the naked eye without the use of expensive or sophisticated instruments. The proposed approach offers faster detection and visualization of latent fingermarks than existing methods, and is expected to increase detection efficiency while reducing time requirements and costs for forensic investigations.

  2. Clinical Utility of Virtual Reality in Pain Management: A Comprehensive Research Review from 2009 to 2016

    OpenAIRE

    Matsangidou, Maria; Ang, Chee Siang; Sakel, Mohamed

    2017-01-01

    Virtual Reality is a technology that allows users to experience a computer-simulated reality with visual, auditory, tactile and olfactory interactions. In the past decades, there have been considerable interests in using Virtual Reality for clinical purposes, including pain management. This article provides a systematic review of research on Virtual Reality and pain management, with an aim to understand the feasibilities of current Virtual Reality technologies and content design approaches in...

  3. Saccade-induced image motion cannot account for post-saccadic enhancement of visual processing in primate MST

    Directory of Open Access Journals (Sweden)

    Shaun L Cloherty

    2015-09-01

    Full Text Available Primates use saccadic eye movements to make gaze changes. In many visual areas, including the dorsal medial superior temporal area (MSTd of macaques, neural responses to visual stimuli are reduced during saccades but enhanced afterwards. How does this enhancement arise – from an internal mechanism associated with saccade generation or through visual mechanisms activated by the saccade sweeping the image of the visual scene across the retina? Spontaneous activity in MSTd is elevated even after saccades made in darkness, suggesting a central mechanism for post-saccadic enhancement. However, based on the timing of this effect, it may arise from a different mechanism than occurs in normal vision. Like neural responses in MSTd, initial ocular following eye speed is enhanced after saccades, with evidence suggesting both internal and visually mediated mechanisms. Here we recorded from visual neurons in MSTd and measured responses to motion stimuli presented soon after saccades and soon after simulated saccades – saccade-like displacements of the background image during fixation. We found that neural responses in MSTd were enhanced when preceded by real saccades but not when preceded by simulated saccades. Furthermore, we also observed enhancement following real saccades made across a blank screen that generated no motion signal within the recorded neurons’ receptive fields. We conclude that in MSTd the mechanism leading to post-saccadic enhancement has internal origins.

  4. Enhanced dimension-specific visual working memory in grapheme–color synesthesia

    Science.gov (United States)

    Terhune, Devin Blair; Wudarczyk, Olga Anna; Kochuparampil, Priya; Cohen Kadosh, Roi

    2013-01-01

    There is emerging evidence that the encoding of visual information and the maintenance of this information in a temporarily accessible state in working memory rely on the same neural mechanisms. A consequence of this overlap is that atypical forms of perception should influence working memory. We examined this by investigating whether having grapheme–color synesthesia, a condition characterized by the involuntary experience of color photisms when reading or representing graphemes, would confer benefits on working memory. Two competing hypotheses propose that superior memory in synesthesia results from information being coded in two information channels (dual-coding) or from superior dimension-specific visual processing (enhanced processing). We discriminated between these hypotheses in three n-back experiments in which controls and synesthetes viewed inducer and non-inducer graphemes and maintained color or grapheme information in working memory. Synesthetes displayed superior color working memory relative to controls for both grapheme types, whereas the two groups did not differ in grapheme working memory. Further analyses excluded the possibility that the enhanced working memory among synesthetes was due to greater color discrimination, stimulus color familiarity, or bidirectionality. These results reveal enhanced dimension-specific visual working memory in this population and supply further evidence for a close relationship between sensory processing and the maintenance of sensory information in working memory. PMID:23892185

  5. Virtual Reality: A New Learning Environment.

    Science.gov (United States)

    Ferrington, Gary; Loge, Kenneth

    1992-01-01

    Discusses virtual reality (VR) technology and its possible uses in military training, medical education, industrial design and development, the media industry, and education. Three primary applications of VR in the learning process--visualization, simulation, and construction of virtual worlds--are described, and pedagogical and moral issues are…

  6. Visuo-Haptic Mixed Reality with Unobstructed Tool-Hand Integration.

    Science.gov (United States)

    Cosco, Francesco; Garre, Carlos; Bruno, Fabio; Muzzupappa, Maurizio; Otaduy, Miguel A

    2013-01-01

    Visuo-haptic mixed reality consists of adding to a real scene the ability to see and touch virtual objects. It requires the use of see-through display technology for visually mixing real and virtual objects, and haptic devices for adding haptic interaction with the virtual objects. Unfortunately, the use of commodity haptic devices poses obstruction and misalignment issues that complicate the correct integration of a virtual tool and the user's real hand in the mixed reality scene. In this work, we propose a novel mixed reality paradigm where it is possible to touch and see virtual objects in combination with a real scene, using commodity haptic devices, and with a visually consistent integration of the user's hand and the virtual tool. We discuss the visual obstruction and misalignment issues introduced by commodity haptic devices, and then propose a solution that relies on four simple technical steps: color-based segmentation of the hand, tracking-based segmentation of the haptic device, background repainting using image-based models, and misalignment-free compositing of the user's hand. We have developed a successful proof-of-concept implementation, where a user can touch virtual objects and interact with them in the context of a real scene, and we have evaluated the impact on user performance of obstruction and misalignment correction.
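    Of the four steps listed above, the first and the last can be sketched with standard image operations. The snippet below is only an illustration: the HSV skin-tone thresholds and file names are assumptions, and the actual system also handles device segmentation, background repainting and misalignment correction.

    ```python
    # Illustrative sketch of two of the four steps: color-based segmentation of
    # the hand, then compositing the real hand back over the rendered frame so
    # the virtual tool never incorrectly occludes it.
    import cv2
    import numpy as np

    camera_frame = cv2.imread("camera_frame.png")      # real scene with the user's hand
    rendered_frame = cv2.imread("rendered_frame.png")  # same view with the virtual objects/tool

    # Color-based hand segmentation (very rough skin-tone range in HSV).
    hsv = cv2.cvtColor(camera_frame, cv2.COLOR_BGR2HSV)
    hand_mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    hand_mask = cv2.morphologyEx(hand_mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Simplified compositing: keep the rendered image everywhere except at the
    # hand pixels, which come from the real camera frame.
    composite = rendered_frame.copy()
    composite[hand_mask > 0] = camera_frame[hand_mask > 0]
    cv2.imwrite("composite.png", composite)
    ```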

  7. Augmenting real-time video with virtual models for enhanced visualization for simulation, teaching, training and guidance

    Science.gov (United States)

    Potter, Michael; Bensch, Alexander; Dawson-Elli, Alexander; Linte, Cristian A.

    2015-03-01

    In minimally invasive surgical interventions, direct visualization of the target area is often not available. Instead, clinicians rely on images from various sources, along with surgical navigation systems, for guidance. These spatial localization and tracking systems function much like the Global Positioning System (GPS) that we are all familiar with. In this work we demonstrate how the video feed from a typical camera, which could mimic a laparoscopic or endoscopic camera used during an interventional procedure, can be used to identify the pose of the camera with respect to the viewed scene and to augment the video feed with computer-generated information, such as renderings of internal anatomy not visible beyond the imaged surface, resulting in a simple augmented reality environment. This paper describes the software and hardware environment and the methodology for augmenting the real world with virtual models extracted from medical images, providing enhanced visualization beyond the surface view achieved with traditional imaging. Following intrinsic and extrinsic camera calibration, the technique was implemented and demonstrated using a LEGO structure phantom, as well as a 3D-printed patient-specific left atrial phantom. We assessed the quality of the overlay in terms of fiducial localization, fiducial registration, and target registration errors, as well as the overlay offset error. Using the software extensions we developed in conjunction with common webcams, it is possible to achieve tracking accuracy comparable to that seen with significantly more expensive hardware, leading to target registration errors on the order of 2 mm.
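    A minimal sketch of the pose-from-video idea described above (calibrate, estimate the camera pose from known fiducials, project virtual model points into the frame) is given below; the intrinsic matrix, fiducial coordinates and file names are placeholders, not the study's actual setup.

    ```python
    # Hypothetical sketch: estimate camera pose from known fiducials and
    # project virtual model points into the live frame (the overlay step).
    import cv2
    import numpy as np

    # Intrinsics from a prior calibration (placeholder values).
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)

    # 3D fiducial positions on the phantom (mm) and their detected 2D pixel locations.
    object_pts = np.float32([[0, 0, 0], [60, 0, 0], [60, 60, 0], [0, 60, 0]])
    image_pts = np.float32([[210, 300], [420, 295], [430, 120], [215, 115]])

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)

    # Project a couple of virtual model points (placeholders) and draw them.
    model_pts = np.float32([[30, 30, -20], [30, 30, -60]])
    projected, _ = cv2.projectPoints(model_pts, rvec, tvec, K, dist)

    frame = cv2.imread("frame.png")
    for p in projected.reshape(-1, 2):
        cv2.circle(frame, tuple(int(c) for c in p), 4, (0, 0, 255), -1)
    cv2.imwrite("overlay.png", frame)
    ```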

  8. A prototype to study cognitive and aesthetic aspects of mixed reality technologies

    OpenAIRE

    Gunia, Artur; Indurkhya, Bipin

    2017-01-01

    Mixed reality systems integrate virtual reality with real-world perception and cognition to offer enhanced interaction possibilities with the environment. Our aim is to demonstrate that mixed reality technologies strongly affect our aesthetic sense and mental models. So, in designing such technologies, we need to incorporate perspectives from different disciplines. We present different approaches and implementations of cognitive enhancement and cognitive technologies, consider some practical ...

  9. Schizophrenia and visual backward masking: a general deficit of target enhancement

    Directory of Open Access Journals (Sweden)

    Michael H Herzog

    2013-05-01

    Full Text Available The obvious symptoms of schizophrenia are of a cognitive and psychopathological nature. However, schizophrenia also affects visual processing, which becomes particularly evident when stimuli are presented for short durations and are followed by a masking stimulus. Visual deficits are of great interest because they might be related to the genetic variations underlying the disease (endophenotype concept). Visual masking deficits are usually attributed to specific dysfunctions of the visual system, such as a hypo- or hyper-active magnocellular system. Here, we propose that visual deficits are a manifestation of a general deficit related to the enhancement of weak neural signals, as occurs in all other sorts of information processing. We summarize previous findings with the shine-through masking paradigm, in which a briefly presented vernier target is followed by a masking grating. The mask deteriorates visual processing in schizophrenic patients by almost an order of magnitude compared with healthy controls. We propose that these deficits are caused by dysfunctions of attention and of the cholinergic system, leading to weak neural activity corresponding to the vernier. High-density electrophysiological recordings (EEG) show that neural activity is indeed strongly reduced in schizophrenic patients, which we attribute to the lack of vernier enhancement. When only the masking grating is presented, EEG responses are roughly comparable between patients and controls. Our hypothesis is supported by findings relating visual masking to genetic variants of the nicotinic alpha-7 receptor (CHRNA7).

  10. FEATURES OF USING AUGMENTED REALITY TECHNOLOGY TO SUPPORT EDUCATIONAL PROCESSES

    Directory of Open Access Journals (Sweden)

    Yury A. Kravchenko

    2014-01-01

    Full Text Available The paper discusses the concept and technology of augmented reality and gives the rationale for the relevance and timeliness of its use to support educational processes. The paper is a survey and study of the possibility of using augmented reality technology in education. An architecture is proposed and algorithms are constructed for a software system that manages QR-code media objects. An overview of the features and uses of augmented reality technology to support educational processes is given, presenting it as a new form of visual demonstration of complex objects, models and processes.

  11. Change Blindness Phenomena for Virtual Reality Display Systems.

    Science.gov (United States)

    Steinicke, Frank; Bruder, Gerd; Hinrichs, Klaus; Willemsen, Pete

    2011-09-01

    In visual perception, change blindness describes the phenomenon that persons viewing a visual scene may apparently fail to detect significant changes in that scene. These phenomena have been observed in both computer-generated imagery and real-world scenes. Several studies have demonstrated that change blindness effects occur primarily during visual disruptions such as blinks or saccadic eye movements. However, until now the influence of stereoscopic vision on change blindness has not been studied thoroughly in the context of visual perception research. In this paper, we introduce change blindness techniques for stereoscopic virtual reality (VR) systems, providing the ability to substantially modify a virtual scene in a manner that is difficult for observers to perceive. We evaluate techniques for semi-immersive VR systems (i.e., passive and active stereoscopic projection systems) as well as an immersive VR system (i.e., a head-mounted display), and compare the results to those of monoscopic viewing conditions. For stereoscopic viewing conditions, we found that change blindness phenomena occur with the same magnitude as in monoscopic viewing conditions. Furthermore, we have evaluated the potential of the presented techniques for allowing abrupt, and yet significant, changes of a stereoscopically displayed virtual reality environment.

  12. (Re-)Examination of Multimodal Augmented Reality

    NARCIS (Netherlands)

    Rosa, N.E.; Werkhoven, P.J.; Hürst, W.O.

    2016-01-01

    The majority of augmented reality (AR) research has been concerned with visual perception; however, the move towards multimodality is imminent. At the same time, there is no clear vision of what multimodal AR is. The purpose of this position paper is to consider possible ways of examining AR other

  13. Visual spatial attention enhances the amplitude of positive and negative fMRI responses to visual stimulation in an eccentricity-dependent manner

    Science.gov (United States)

    Bressler, David W.; Fortenbaugh, Francesca C.; Robertson, Lynn C.; Silver, Michael A.

    2013-01-01

    Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. PMID:23562388

  14. Simulated maintenance a virtual reality

    International Nuclear Information System (INIS)

    Lirvall, P.

    1995-01-01

    The article describes potential applications of personal computer-based virtual reality software. The applications are being investigated by Atomic Energy of Canada Limited's (AECL) Chalk River Laboratories for the Canadian deuterium-uranium (Candu) reactor. Objectives include: (1) reduction of outage duration and improved safety, (2) cost-effective and safe maintenance of equipment, (3) reduction of exposure times and identification of overexposure situations, (4) cost-effective training in a virtual control room simulator, (5) human factors evaluation of design interface, and (6) visualization of conceptual and detailed designs of critical nuclear field environments. A demonstration model of a typical reactor control room, the use of virtual reality in outage planning, and safety issues are outlined

  15. Use of display technologies for augmented reality enhancement

    Science.gov (United States)

    Harding, Kevin

    2016-06-01

    Augmented reality (AR) is seen as an important tool for the future of user interfaces as well as training applications. An important application area for AR is expected to be the digitization of training and worker instructions used in the Brilliant Factory environment. The transition of work instruction methods from printed pages in a book or taped to a machine to virtual simulations is a long step with many challenges along the way. A variety of augmented reality tools are being explored today for industrial applications, ranging from simple programmable projections in the work space to 3D displays and head-mounted gear. This paper will review where some of these tools are today and some of the pros and cons being considered for the future worker environment.

  16. Enhancing Nuclear Training with 3D Visualization

    International Nuclear Information System (INIS)

    Gagnon, V.; Gagnon, B.

    2016-01-01

    Full text: While the nuclear power industry is trying to reinforce its safety and regain public support post-Fukushima, it is also faced with a very real challenge that affects its day-to-day activities: a rapidly aging workforce. Statistics show that close to 40% of the current nuclear power industry workforce will retire within the next five years. For newcomer countries, the challenge is even greater, having to develop a completely new workforce. The workforce replacement effort introduces nuclear newcomers of a new generation with different backgrounds and affinities. Major lifestyle differences between the two generations of workers result, amongst other things, in different learning habits and needs for this new breed of learners. Interactivity, high visual content and quick access to information are now necessary to achieve a high level of retention. To enhance existing training programmes or to support the establishment of new training programmes for newcomer countries, L-3 MAPPS has devised learning tools focused on the “Practice-by-Doing” principle. L-3 MAPPS has coupled 3D computer visualization with high-fidelity simulation to bring real-time, simulation-driven animated components and systems, allowing immersive and participatory, individual or classroom learning. (author)

  17. Applying virtual reality to remote control of mobile robot

    Directory of Open Access Journals (Sweden)

    Chen Chin-Shan

    2017-01-01

    Full Text Available The purpose of this research is to use virtual reality to assist pick-and-place tasks. Virtual reality can be utilized to control a remote robot that picks and places elements. The operator monitors and controls the situational information of the working site through a Human Machine Interface, so that working directly in harsh or dangerous environments can be avoided. The procedure for operating the mobile robot through virtual reality is as follows. An experiment site with real experimental equipment is first established. Then, models of the experimental equipment and the scene are imported into the virtual reality environment to build an environment similar to reality. Finally, the remote mobile robot is controlled to perform pick-and-place tasks through wireless communication, driven by object manipulation in the virtual reality environment. The robot consists of a movable robot platform and a robotic arm. The virtual reality environment is constructed with EON software; the Human Machine Interface is built with Visual Basic. The wireless connection uses Bluetooth, linking the PC and the PLC controller. Experimental tests of the robot in virtual reality and of the wireless remote control verify that the robot can be operated through the Human Machine Interface to successfully complete pick-and-place tasks in reality.
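    The wireless link between the PC-side interface and the PLC can be illustrated with a generic serial-over-Bluetooth sketch. The one-byte command protocol, port name and acknowledgement byte below are invented for illustration; the paper's actual HMI was written in Visual Basic and its protocol is not documented here.

    ```python
    # Hypothetical sketch of sending pick/place commands to the robot controller
    # over a Bluetooth serial link (requires pyserial; command bytes are invented).
    import serial

    COMMANDS = {"pick": b"\x01", "place": b"\x02", "stop": b"\x00"}

    def send(port: str, action: str) -> bytes:
        with serial.Serial(port, baudrate=9600, timeout=1.0) as link:
            link.write(COMMANDS[action])
            return link.read(1)  # assumed one-byte acknowledgement from the PLC

    if __name__ == "__main__":
        print("ack:", send("/dev/rfcomm0", "pick"))
    ```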

  18. Development of a Real-Time Detection System for Augmented Reality Driving

    Directory of Open Access Journals (Sweden)

    Kuei-Shu Hsu

    2015-01-01

    Full Text Available Augmented reality technology is applied so that driving tests may be performed in various environments using a virtual reality scenario with the ultimate goal of improving visual and interactive effects of simulated drivers. Environmental conditions simulating a real scenario are created using an augmented reality structure, which guarantees the test taker’s security since they are not subject to real-life elements and dangers. Furthermore, the accuracy of tests conducted through virtual reality is not influenced by either environmental or human factors. Driver posture is captured in real time using Kinect’s depth perception function and then applied to driving simulation effects that are emulated by Unity3D’s gaming technology. Subsequently, different driving models may be collected through different drivers. In this research, nearly true and realistic street environments are simulated to evaluate driver behavior. A variety of different visual effects are easily available to effectively reduce error rates, thereby significantly improving test security as well as the reliability and reality of this project. Different situation designs are simulated and evaluated to increase development efficiency and build more security verification test platforms using such technology in conjunction with driving tests, vehicle fittings, environmental factors, and so forth.

  19. Temporal Coherence Strategies for Augmented Reality Labeling

    DEFF Research Database (Denmark)

    Madsen, Jacob Boesen; Tatzgern, Markus; Madsen, Claus B.

    2016-01-01

    Temporal coherence of annotations is an important factor in augmented reality user interfaces and for information visualization. In this paper, we empirically evaluate four different techniques for annotation. Based on these findings, we follow up with subjective evaluations in a second experiment...

  20. Applying Augmented Reality in practical classes for engineering students

    Science.gov (United States)

    Bazarov, S. E.; Kholodilin, I. Yu; Nesterov, A. S.; Sokhina, A. V.

    2017-10-01

    In this article, an Augmented Reality application for teaching engineering students of electrical and technological specialties is introduced. In order to increase motivation for learning and the independence of students, new Augmented Reality practical guidelines were developed for use in practical classes. During the application development, the authors used software such as Unity 3D and Vuforia. The Augmented Reality content consists of 3D models, images and animations, which are superimposed on real objects, helping students to study specific tasks. A user who has a smartphone, a tablet PC, or Augmented Reality glasses can visualize on-screen virtual objects added to a real environment. Having analyzed the current situation in higher education in terms of the learners' interest in studying, their satisfaction with the educational process, and the impact of the Augmented Reality application on students, a questionnaire was developed and offered to students; the study involved 24 learners.

  1. Alternative realities : from augmented reality to mobile mixed reality

    OpenAIRE

    Claydon, Mark

    2015-01-01

    This thesis provides an overview of (mobile) augmented and mixed reality by clarifying the different concepts of reality, briefly covering the technology behind mobile augmented and mixed reality systems, conducting a concise survey of existing and emerging mobile augmented and mixed reality applications and devices. Based on the previous analysis and the survey, this work will next attempt to assess what mobile augmented and mixed reality could make possible, and what related applications an...

  2. Mobile computation offloading architecture for mobile augmented reality, case study: Visualization of cetacean skeleton

    OpenAIRE

    Belen G. Rodriguez-Santana; Amilcar Meneses Viveros; Blanca Esther Carvajal-Gamez; Diana Carolina Trejo-Osorio

    2016-01-01

    Augmented Reality applications can serve as teaching tools in different contexts of use. Augmented reality applications on mobile devices can help to provide tourist information on cities or to give information during visits to museums. For example, during visits to museums of natural history, augmented reality applications on mobile devices can be used by visitors to interact with the skeleton of a whale. However, rendering heavy models can be computationally infeasible on device...

  3. Transforming Polar Research with Google Glass Augmented Reality (Invited)

    Science.gov (United States)

    Ruthkoski, T.

    2013-12-01

    Augmented reality is a new technology with the potential to accelerate the advancement of science, particularly in geophysical research. Augmented reality is defined as a live, direct or indirect, view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data. When paired with advanced computing techniques on cloud resources, augmented reality has the potential to improve data collection techniques, visualizations, as well as in-situ analysis for many areas of research. Google is currently a pioneer of augmented reality technology and has released beta versions of their wearable computing device, Google Glass, to a select number of developers and beta testers. This community of 'Glass Explorers' is the vehicle from which Google shapes the future of their augmented reality device. Example applications of Google Glass in geophysical research range from use as a data gathering interface in harsh climates to an on-site visualization and analysis tool. Early participation in the shaping of the Google Glass device is an opportunity for researchers to tailor this new technology to their specific needs. The purpose of this presentation is to provide geophysical researchers with a hands-on first look at Google Glass and its potential as a scientific tool. Attendees will be given an overview of the technical specifications as well as a live demonstration of the device. Potential applications to geophysical research in polar regions will be the primary focus. The presentation will conclude with an open call to participate, during which attendees may indicate interest in developing projects that integrate Google Glass into their research. (Application mockup: Penguin Counter, a Google Glass augmented reality device.)

  4. Enhanced dimension-specific visual working memory in grapheme-color synesthesia.

    Science.gov (United States)

    Terhune, Devin Blair; Wudarczyk, Olga Anna; Kochuparampil, Priya; Cohen Kadosh, Roi

    2013-10-01

    There is emerging evidence that the encoding of visual information and the maintenance of this information in a temporarily accessible state in working memory rely on the same neural mechanisms. A consequence of this overlap is that atypical forms of perception should influence working memory. We examined this by investigating whether having grapheme-color synesthesia, a condition characterized by the involuntary experience of color photisms when reading or representing graphemes, would confer benefits on working memory. Two competing hypotheses propose that superior memory in synesthesia results from information being coded in two information channels (dual-coding) or from superior dimension-specific visual processing (enhanced processing). We discriminated between these hypotheses in three n-back experiments in which controls and synesthetes viewed inducer and non-inducer graphemes and maintained color or grapheme information in working memory. Synesthetes displayed superior color working memory relative to controls for both grapheme types, whereas the two groups did not differ in grapheme working memory. Further analyses excluded the possibility that the enhanced working memory among synesthetes was due to greater color discrimination, stimulus color familiarity, or bidirectionality. These results reveal enhanced dimension-specific visual working memory in this population and supply further evidence for a close relationship between sensory processing and the maintenance of sensory information in working memory. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Mobile Augmented Reality Support for Architects based on feature Tracking Techniques

    DEFF Research Database (Denmark)

    Grønbæk, Kaj; Nielsen, Mikkel Bang; Kramp, Gunnar

    2004-01-01

    This paper presents a mobile Augmented Reality (AR) system called the SitePack that supports architects in visualizing 3D models in real time on site. We describe how vision-based feature tracking techniques can help architects make decisions on site concerning visual impact assessment. The AR sys...

  6. Visualization and labeling of point clouds in virtual reality

    DEFF Research Database (Denmark)

    Stets, Jonathan Dyssel; Sun, Yongbin; Greenwald, Scott W.

    2017-01-01

    We present a Virtual Reality (VR) application for labeling and handling point cloud data sets. A series of room-scale point clouds are recorded as a video sequence using a Microsoft Kinect. The data can be played and paused, and frames can be skipped just like in a video player. The user can walk...

  7. A Multiplayer Learning Game based on Mixed Reality to Enhance Awareness on Archaeology

    Directory of Open Access Journals (Sweden)

    Mathieu Loiseau

    2014-08-01

    Full Text Available Our research deals with the development of a new type of game-based learning environment: a massively multiplayer online role-playing game (MMORPG) based on mixed reality, applied in the archaeological domain. In this paper, we propose a learning scenario that enhances players’ motivation thanks to individual, collaborative and social activities and that offers a continuous experience between the virtual environment and real places (archaeological sites, museum). After describing the challenges of a rich multidisciplinary approach involving both computer scientists and archaeologists, we present two types of game: multiplayer online role-playing games and mixed reality games. We build on the specificities of these games to make the design choices described in the paper. We also present three modular features we have developed to support, independently, three activities of the scenario. The proposed approach aims at raising awareness among people of the scientific approach in Archaeology, by providing them with information in the virtual environment and encouraging them to visit real sites. We finally discuss the issues raised by this work, such as the tensions between the perceived individual, team and community utilities, as well as the choice of the entry point into the learning scenario (real or virtual) for the players’ involvement in the game.

  8. Aspects of User Experience in Augmented Reality

    DEFF Research Database (Denmark)

    Madsen, Jacob Boesen

    In Augmented Reality applications, the real environment is annotated or enhanced with computer-generated graphics. This topic has been researched in recent decades, but for many people it is brand new and never heard of. The main focus of this thesis is investigations in human factors related to Augmented Reality. This is investigated partly as how Augmented Reality applications are used in unsupervised settings, and partly in specific evaluations related to user performance in supervised settings. The thesis starts by introducing Augmented Reality to the reader, followed by a presentation of the technical areas related to the field and different human factor areas. As a contribution to the research area, this thesis presents five separate, but sequential, papers within the area of Augmented Reality.

  9. Getting more from visual working memory: Retro-cues enhance retrieval and protect from visual interference.

    Science.gov (United States)

    Souza, Alessandra S; Rerko, Laura; Oberauer, Klaus

    2016-06-01

    Visual working memory (VWM) has a limited capacity. This limitation can be mitigated by the use of focused attention: if attention is drawn to the relevant working memory content before test, performance improves (the so-called retro-cue benefit). This study tests 2 explanations of the retro-cue benefit: (a) Focused attention protects memory representations from interference by visual input at test, and (b) focusing attention enhances retrieval. Across 6 experiments using color recognition and color reproduction tasks, we varied the amount of color interference at test, and the delay between a retrieval cue (i.e., the retro-cue) and the memory test. Retro-cue benefits were larger when the memory test introduced interfering visual stimuli, showing that the retro-cue effect is in part because of protection from visual interference. However, when visual interference was held constant, retro-cue benefits were still obtained whenever the retro-cue enabled retrieval of an object from VWM but delayed response selection. Our results show that accessible information in VWM might be lost in the processes of testing memory because of visual interference and incomplete retrieval. This is not an inevitable state of affairs, though: Focused attention can be used to get the most out of VWM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  10. RealityConvert: a tool for preparing 3D models of biochemical structures for augmented and virtual reality.

    Science.gov (United States)

    Borrel, Alexandre; Fourches, Denis

    2017-12-01

    There is a growing interest in the broad use of Augmented Reality (AR) and Virtual Reality (VR) in the fields of bioinformatics and cheminformatics to visualize complex biological and chemical structures. AR and VR technologies allow for stunning and immersive experiences, offering untapped opportunities for both research and education purposes. However, preparing 3D models ready to use for AR and VR is time-consuming and requires a technical expertise that severely limits the development of new content of potential interest for structural biologists, medicinal chemists, molecular modellers and teachers. Herein we present the RealityConvert software tool and associated website, which allow users to easily convert molecular objects to high-quality 3D models directly compatible with AR and VR applications. For chemical structures, in addition to the 3D model generation, RealityConvert also generates image trackers, useful to universally call and anchor that particular 3D model when used in AR applications. The ultimate goal of RealityConvert is to facilitate and boost the development and accessibility of AR and VR content for bioinformatics and cheminformatics applications. http://www.realityconvert.com. dfourch@ncsu.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
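    The core conversion idea (molecular coordinates in, a portable 3D-model format out) can be sketched very roughly as below. The snippet only emits an OBJ point set from an XYZ file, and the file names are placeholders; RealityConvert itself produces complete textured models plus image trackers, and this is not its implementation.

    ```python
    # Very rough sketch of the conversion idea only: read atom coordinates from
    # an XYZ file and emit them as OBJ vertices (one vertex per atom).
    def xyz_to_obj(xyz_path: str, obj_path: str) -> None:
        with open(xyz_path) as src, open(obj_path, "w") as dst:
            lines = src.read().splitlines()
            for line in lines[2:]:             # skip the atom count and comment lines
                parts = line.split()
                if len(parts) >= 4:
                    element, x, y, z = parts[:4]
                    dst.write(f"# {element}\nv {x} {y} {z}\n")

    xyz_to_obj("caffeine.xyz", "caffeine.obj")  # placeholder input/output names
    ```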

  11. Augmented Reality in Scientific Publications-Taking the Visualization of 3D Structures to the Next Level.

    Science.gov (United States)

    Wolle, Patrik; Müller, Matthias P; Rauh, Daniel

    2018-03-16

    The examination of three-dimensional structural models in scientific publications allows the reader to validate or invalidate conclusions drawn by the authors. However, either due to a (temporary) lack of access to proper visualization software or a lack of proficiency, this information is not necessarily available to every reader. As the digital revolution is quickly progressing, technologies have become widely available that overcome the limitations and offer to all the opportunity to appreciate models not only in 2D, but also in 3D. Additionally, mobile devices such as smartphones and tablets allow access to this information almost anywhere, at any time. Since access to such information has only recently become standard practice, we want to outline straightforward ways to incorporate 3D models in augmented reality into scientific publications, books, posters, and presentations and suggest that this should become general practice.

  12. How virtual reality may enhance training in obstetrics and gynecology.

    Science.gov (United States)

    Letterie, Gerard S

    2002-09-01

    Contemporary training in obstetrics and gynecology is aimed at the acquisition of a complex set of skills oriented to both the technical and personal aspects of patient care. The ability to create clinical simulations through virtual reality (VR) may facilitate the accomplishment of these goals. The purpose of this paper is 2-fold: (1) to review the circumstances and equipment in industry, science, and education in which VR has been successfully applied, and (2) to explore the possible role of VR for training in obstetrics and gynecology and to suggest innovative and unique approaches to enhancing this training. Qualitative assessment of the literature describing successful applications of VR in industry, law enforcement, military, and medicine from 1995 to 2000. Articles were identified through a computer-based search using Medline, Current Contents, and cross referencing bibliographies of articles identified through the search. One hundred and fifty-four articles were reviewed. This review of contemporary literature suggests that VR has been successfully used to simulate person-to-person interactions for training in psychiatry and the social sciences in a variety of circumstances by using real-time simulations of personal interactions, and to launch 3-dimensional trainers for surgical simulation. These successful applications and simulations suggest that this technology may be helpful and should be evaluated as an educational modality in obstetrics and gynecology in two areas: (1) counseling in circumstances ranging from routine preoperative informed consent to intervention in more acute circumstances such as domestic violence or rape, and (2) training in basic and advanced surgical skills for both medical students and residents. Virtual reality is an untested, but potentially useful, modality for training in obstetrics and gynecology. On the basis of successful applications in other nonmedical and medical areas, VR may have a role in teaching essential elements

  13. Virtual reality devices integration in scientific visualization software in the VtkVRPN framework; Integration de peripheriques de realite virtuelle dans des applications de visualisation scientifique au sein de la plate-forme VtkVRPN

    Energy Technology Data Exchange (ETDEWEB)

    Journe, G.; Guilbaud, C

    2005-07-01

    High-quality scientific visualization software relies on ergonomic navigation and exploration, which are essential for efficient data analysis. To help address this issue, management of virtual reality devices has been developed inside the CEA 'VtkVRPN' framework. This framework is based on VTK, a 3D graphics library, and VRPN, a virtual reality device management library. This document describes the developments carried out during a post-graduate training course. (authors)
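
    Because the framework described above builds on VTK, a minimal VTK scene gives a sense of the rendering side that device callbacks would drive; the sketch below uses only standard VTK Python classes, displays an interactive sphere, and is not the CEA VtkVRPN code.

# Minimal VTK scene (Python bindings): the kind of render window a
# VRPN-driven device callback would manipulate. Not the VtkVRPN framework.
import vtk

sphere = vtk.vtkSphereSource()
sphere.SetThetaResolution(32)
sphere.SetPhiResolution(32)

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(sphere.GetOutputPort())

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
renderer.SetBackground(0.1, 0.1, 0.2)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)

interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

# A virtual reality device layer (e.g. VRPN) would typically update camera
# or actor transforms from inside timer/observer callbacks registered here.
window.Render()
interactor.Start()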

  14. Can Driving-Simulator Training Enhance Visual Attention, Cognition, and Physical Functioning in Older Adults?

    Directory of Open Access Journals (Sweden)

    Mathias Haeger

    2018-01-01

    Full Text Available Virtual reality offers a good possibility for the implementation of real-life tasks in a laboratory-based training or testing scenario. Thus, a computerized training in a driving simulator offers an ecologically valid training approach. Visual attention had an influence on driving performance, so we used the reverse approach to test the influence of a driving training on visual attention and executive functions. Thirty-seven healthy older participants (mean age: 71.46 ± 4.09; gender: 17 men and 20 women) took part in our controlled experimental study. We examined transfer effects from a four-week driving training (three times per week) on visual attention, executive function, and motor skill. Effects were analyzed using an analysis of variance with repeated measurements. Therefore, main factors were group and time to show training-related benefits of our intervention. Results revealed improvements for the intervention group in divided visual attention; however, there were benefits neither in the other cognitive domains nor in the additional motor task. Thus, there are no broad training-induced transfer effects from such an ecologically valid training regime. This lack of findings could be attributed to insufficient training intensities or a participant-induced bias following the cancelled randomization process.

  15. Can Driving-Simulator Training Enhance Visual Attention, Cognition, and Physical Functioning in Older Adults?

    Science.gov (United States)

    Haeger, Mathias; Bock, Otmar; Memmert, Daniel; Hüttermann, Stefanie

    2018-01-01

    Virtual reality offers a good possibility for the implementation of real-life tasks in a laboratory-based training or testing scenario. Thus, a computerized training in a driving simulator offers an ecologically valid training approach. Visual attention had an influence on driving performance, so we used the reverse approach to test the influence of a driving training on visual attention and executive functions. Thirty-seven healthy older participants (mean age: 71.46 ± 4.09; gender: 17 men and 20 women) took part in our controlled experimental study. We examined transfer effects from a four-week driving training (three times per week) on visual attention, executive function, and motor skill. Effects were analyzed using an analysis of variance with repeated measurements. Therefore, main factors were group and time to show training-related benefits of our intervention. Results revealed improvements for the intervention group in divided visual attention; however, there were benefits neither in the other cognitive domains nor in the additional motor task. Thus, there are no broad training-induced transfer effects from such an ecologically valid training regime. This lack of findings could be attributed to insufficient training intensities or a participant-induced bias following the cancelled randomization process.
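
    The group-by-time analysis described in these two records is a standard mixed-design ANOVA; the sketch below shows how such an analysis is commonly run in Python using the pingouin package on invented data, and is not the authors' dataset or code.

# Hedged sketch of a group x time mixed ANOVA on simulated data, mirroring
# the intervention-vs-control, pre-vs-post design described above.
# Requires numpy, pandas and pingouin; all numbers are invented.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
rows = []
for subject in range(37):
    group = "intervention" if subject < 19 else "control"
    for time in ("pre", "post"):
        gain = 5.0 if (group == "intervention" and time == "post") else 0.0
        rows.append({"id": subject, "group": group, "time": time,
                     "score": 50.0 + gain + rng.normal(0.0, 5.0)})
df = pd.DataFrame(rows)

# The group x time interaction term indexes training-related benefits.
aov = pg.mixed_anova(dv="score", within="time", between="group",
                     subject="id", data=df)
print(aov.round(3))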

  16. Augmented Reality and Virtual Reality in Physical and Online Retailing:A Review, Synthesis and Research Agenda

    OpenAIRE

    Bonetti, Francesca; Warnaby, Gary; Quinn, Lee

    2017-01-01

    Augmented reality (AR) and virtual reality (VR) have emerged as rapidly developing technologies used in both physical and online retailing to enhance the selling environment and shopping experience. However, academic research on, and practical applications of, AR and VR in retail are still fragmented, and this state of affairs is arguably attributable to the interdisciplinary origins of the topic. Undertaking a comparative chronological analysis of AR and VR research and applications in a ret...

  17. [Display technologies for augmented reality in medical applications].

    Science.gov (United States)

    Eck, Ulrich; Winkler, Alexander

    2018-04-01

    One of the main challenges for modern surgery is the effective use of the many available imaging modalities and diagnostic methods. Augmented reality systems can be used in the future to blend patient and planning information into the view of surgeons, which can improve the efficiency and safety of interventions. In this article we present five visualization methods to integrate augmented reality displays into medical procedures and the advantages and disadvantages are explained. Based on an extensive literature review the various existing approaches for integration of augmented reality displays into medical procedures are divided into five categories and the most important research results for each approach are presented. A large number of mixed and augmented reality solutions for medical interventions have been developed as research prototypes; however, only very few systems have been tested on patients. In order to integrate mixed and augmented reality displays into medical practice, highly specialized solutions need to be developed. Such systems must comply with the requirements with respect to accuracy, fidelity, ergonomics and seamless integration into the surgical workflow.

  18. Using virtual reality technology for aircraft visual inspection training: presence and comparison studies.

    Science.gov (United States)

    Vora, Jeenal; Nair, Santosh; Gramopadhye, Anand K; Duchowski, Andrew T; Melloy, Brian J; Kanki, Barbara

    2002-11-01

    The aircraft maintenance industry is a complex system consisting of several interrelated human and machine components. Recognizing this, the Federal Aviation Administration (FAA) has pursued human factors related research. In the maintenance arena the research has focused on the aircraft inspection process and the aircraft inspector. Training has been identified as the primary intervention strategy to improve the quality and reliability of aircraft inspection. If training is to be successful, it is critical that we provide aircraft inspectors with appropriate training tools and environment. In response to this need, the paper outlines the development of a virtual reality (VR) system for aircraft inspection training. VR has generated much excitement but little formal proof that it is useful. However, since VR interfaces are difficult and expensive to build, the computer graphics community needs to be able to predict which applications will benefit from VR. To address this important issue, this research measured the degree of immersion and presence felt by subjects in a virtual environment simulator. Specifically, it conducted two controlled studies using the VR system developed for visual inspection task of an aft-cargo bay at the VR Lab of Clemson University. Beyond assembling the visual inspection virtual environment, a significant goal of this project was to explore subjective presence as it affects task performance. The results of this study indicated that the system scored high on the issues related to the degree of presence felt by the subjects. As a next logical step, this study, then, compared VR to an existing PC-based aircraft inspection simulator. The results showed that the VR system was better and preferred over the PC-based training tool.

  19. Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds.

    Science.gov (United States)

    Wright, W Geoffrey

    2014-01-01

    Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not, remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed.

  20. Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds

    Directory of Open Access Journals (Sweden)

    W. Geoffrey Wright

    2014-04-01

    Full Text Available Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini-review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not, remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed.

  1. Interactive Learning Environment: Web-based Virtual Hydrological Simulation System using Augmented and Immersive Reality

    Science.gov (United States)

    Demir, I.

    2014-12-01

    Recent developments in internet technologies make it possible to manage and visualize large data on the web. Novel visualization techniques and interactive user interfaces allow users to create realistic environments, and interact with data to gain insight from simulations and environmental observations. The hydrological simulation system is a web-based 3D interactive learning environment for teaching hydrological processes and concepts. The simulation system provides a visually striking platform with realistic terrain information and water simulation. Students can create or load predefined scenarios, control environmental parameters, and evaluate environmental mitigation alternatives. The web-based simulation system provides an environment for students to learn about hydrological processes (e.g. flooding and flood damage), and the effects of development and human activity in the floodplain. The system utilizes the latest web technologies and the graphics processing unit (GPU) for water simulation and object collisions on the terrain. Users can access the system in three visualization modes including virtual reality, augmented reality, and immersive reality using a heads-up display. The system provides various scenarios customized to fit the age and education level of various users. This presentation provides an overview of the web-based flood simulation system, and demonstrates the capabilities of the system for various visualization and interaction modes.

  2. How virtual reality works: illusions of vision in "real" and virtual environments

    Science.gov (United States)

    Stark, Lawrence W.

    1995-04-01

    Visual illusions abound in normal vision (illusions of clarity and completeness, of continuity in time and space, of presence and vivacity) and are part and parcel of the visual world in which we live. These illusions are discussed in terms of the human visual system, with its high-resolution fovea, moved from point to point in the visual scene by rapid saccadic eye movements (EMs). This sampling of visual information is supplemented by a low-resolution, wide peripheral field of view, especially sensitive to motion. Cognitive-spatial models controlling perception, imagery, and 'seeing' also control the EMs that shift the fovea in the Scanpath mode. These illusions provide for presence, the sense of being within an environment. They equally well lead to 'Telepresence,' the sense of being within a virtual display, especially if the operator is intensely interacting within an eye-hand and head-eye human-machine interface that provides for congruent visual and motor frames of reference. Interaction, immersion, and interest compel telepresence; intuitive functioning and engineered information flows can optimize human adaptation to the artificial new world of virtual reality, as virtual reality expands into entertainment, simulation, telerobotics, and scientific visualization and other professional work.

  3. Enhanced visual statistical learning in adults with autism

    Science.gov (United States)

    Roser, Matthew E.; Aslin, Richard N.; McKenzie, Rebecca; Zahra, Daniel; Fiser, József

    2014-01-01

    Individuals with autism spectrum disorder (ASD) are often characterized as having social engagement and language deficiencies, but a sparing of visuo-spatial processing and short-term memory, with some evidence of supra-normal levels of performance in these domains. The present study expanded on this evidence by investigating the observational learning of visuospatial concepts from patterns of covariation across multiple exemplars. Child and adult participants with ASD, and age-matched control participants, viewed multi-shape arrays composed from a random combination of pairs of shapes that were each positioned in a fixed spatial arrangement. After this passive exposure phase, a post-test revealed that all participant groups could discriminate pairs of shapes with high covariation from randomly paired shapes with low covariation. Moreover, learning these shape-pairs with high covariation was superior in adults with ASD than in age-matched controls, while performance in children with ASD was no different than controls. These results extend previous observations of visuospatial enhancement in ASD into the domain of learning, and suggest that enhanced visual statistical learning may have arisen from a sustained bias to attend to local details in complex arrays of visual features. PMID:25151115

  4. The effects of substitute multisensory feedback on task performance and the sense of presence in a virtual reality environment

    Science.gov (United States)

    Milella, Ferdinando; Pinto, Carlo; Cant, Iain; White, Mark; Meyer, Georg

    2018-01-01

    Objective and subjective measures of performance in virtual reality environments increase as more sensory cues are delivered and as simulation fidelity increases. Some cues (colour or sound) are easier to present than others (object weight, vestibular cues) so that substitute cues can be used to enhance informational content in a simulation at the expense of simulation fidelity. This study evaluates how substituting cues in one modality by alternative cues in another modality affects subjective and objective performance measures in a highly immersive virtual reality environment. Participants performed a wheel change in a virtual reality (VR) environment. Auditory, haptic and visual cues, signalling critical events in the simulation, were manipulated in a factorial design. Subjective ratings were recorded via questionnaires. The time taken to complete the task was used as an objective performance measure. The results show that participants performed best and felt an increased sense of immersion and involvement, collectively referred to as ‘presence’, when substitute multimodal sensory feedback was provided. Significant main effects of audio and tactile cues on task performance and on participants' subjective ratings were found. A significant negative relationship was found between the objective (overall completion times) and subjective (ratings of presence) performance measures. We conclude that increasing informational content, even if it disrupts fidelity, enhances performance and user’s overall experience. On this basis we advocate the use of substitute cues in VR environments as an efficient method to enhance performance and user experience. PMID:29390023

  5. The effects of substitute multisensory feedback on task performance and the sense of presence in a virtual reality environment.

    Science.gov (United States)

    Cooper, Natalia; Milella, Ferdinando; Pinto, Carlo; Cant, Iain; White, Mark; Meyer, Georg

    2018-01-01

    Objective and subjective measures of performance in virtual reality environments increase as more sensory cues are delivered and as simulation fidelity increases. Some cues (colour or sound) are easier to present than others (object weight, vestibular cues) so that substitute cues can be used to enhance informational content in a simulation at the expense of simulation fidelity. This study evaluates how substituting cues in one modality by alternative cues in another modality affects subjective and objective performance measures in a highly immersive virtual reality environment. Participants performed a wheel change in a virtual reality (VR) environment. Auditory, haptic and visual cues, signalling critical events in the simulation, were manipulated in a factorial design. Subjective ratings were recorded via questionnaires. The time taken to complete the task was used as an objective performance measure. The results show that participants performed best and felt an increased sense of immersion and involvement, collectively referred to as 'presence', when substitute multimodal sensory feedback was provided. Significant main effects of audio and tactile cues on task performance and on participants' subjective ratings were found. A significant negative relationship was found between the objective (overall completion times) and subjective (ratings of presence) performance measures. We conclude that increasing informational content, even if it disrupts fidelity, enhances performance and user's overall experience. On this basis we advocate the use of substitute cues in VR environments as an efficient method to enhance performance and user experience.

  6. The effects of substitute multisensory feedback on task performance and the sense of presence in a virtual reality environment.

    Directory of Open Access Journals (Sweden)

    Natalia Cooper

    Full Text Available Objective and subjective measures of performance in virtual reality environments increase as more sensory cues are delivered and as simulation fidelity increases. Some cues (colour or sound) are easier to present than others (object weight, vestibular cues) so that substitute cues can be used to enhance informational content in a simulation at the expense of simulation fidelity. This study evaluates how substituting cues in one modality by alternative cues in another modality affects subjective and objective performance measures in a highly immersive virtual reality environment. Participants performed a wheel change in a virtual reality (VR) environment. Auditory, haptic and visual cues, signalling critical events in the simulation, were manipulated in a factorial design. Subjective ratings were recorded via questionnaires. The time taken to complete the task was used as an objective performance measure. The results show that participants performed best and felt an increased sense of immersion and involvement, collectively referred to as 'presence', when substitute multimodal sensory feedback was provided. Significant main effects of audio and tactile cues on task performance and on participants' subjective ratings were found. A significant negative relationship was found between the objective (overall completion times) and subjective (ratings of presence) performance measures. We conclude that increasing informational content, even if it disrupts fidelity, enhances performance and user's overall experience. On this basis we advocate the use of substitute cues in VR environments as an efficient method to enhance performance and user experience.

  7. An augmented reality system for upper-limb post-stroke motor rehabilitation: a feasibility study.

    Science.gov (United States)

    Assis, Gilda Aparecida de; Corrêa, Ana Grasielle Dionísio; Martins, Maria Bernardete Rodrigues; Pedrozo, Wendel Goes; Lopes, Roseli de Deus

    2016-08-01

    To determine the clinical feasibility of a system based on augmented reality for upper-limb (UL) motor rehabilitation of stroke participants. A physiotherapist instructed the participants to accomplish tasks in an augmented reality environment, where they could see themselves and their surroundings, as in a mirror. Two case studies were conducted. Participants were evaluated pre- and post-intervention. The first study evaluated UL motor function using the Fugl-Meyer scale. Data were compared using non-parametric sign tests and effect size. The second study used the gain of motion range of shoulder flexion and abduction assessed by computerized biophotogrammetry. At a significance level of 5%, Fugl-Meyer scores suggested a trend for greater UL motor improvement in the augmented reality group than in the other. Moreover, an effect size of 0.86 suggested high practical significance for UL motor rehabilitation using the augmented reality system. The system provided promising results for UL motor rehabilitation, since enhancements have been observed in the shoulder range of motion and speed. Implications for Rehabilitation Gain of range of motion of flexion and abduction of the shoulder of post-stroke patients can be achieved through an augmented reality system containing exercises to promote mental practice. The NeuroR system provides a mental practice method combined with visual feedback for motor rehabilitation of chronic stroke patients, giving the illusion of injured upper-limb (UL) movements while the affected UL is resting. Its application is feasible and safe. This system can be used to improve UL rehabilitation, as an additional treatment beyond the traditional period of stroke patient hospitalization and rehabilitation.

  8. Toward Functional Augmented Reality in Marine Navigation : A Cognitive Work Analysis

    NARCIS (Netherlands)

    Procee, S.; Borst, C.; van Paassen, M.M.; Mulder, M.; Bertram, V.

    2017-01-01

    Augmented Reality, (AR) also known as vision-overlay, can help the navigator to visually detect a dangerous target by the overlay of a synthetic image, thus providing a visual cue over the real world. This is the first paper of a series about the practicalities and consequences of implementing AR in

  9. The Hologram in My Hand: How Effective is Interactive Exploration of 3D Visualizations in Immersive Tangible Augmented Reality?

    Science.gov (United States)

    Bach, Benjamin; Sicat, Ronell; Beyer, Johanna; Cordeil, Maxime; Pfister, Hanspeter

    2018-01-01

    We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in-situ at the spatial position of the 3D hologram. The tablet is able to interact with 3D content through touch, spatial positioning, and tangible markers, however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that match human perceptual and interaction capabilities better to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each task having different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still fastest and most precise in almost all cases.

  10. The Hologram in My Hand: How Effective is Interactive Exploration of 3D Visualizations in Immersive Tangible Augmented Reality?

    KAUST Repository

    Bach, Benjamin

    2017-08-29

    We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in-situ at the spatial position of the 3D hologram. The tablet is able to interact with 3D content through touch, spatial positioning, and tangible markers, however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that match human perceptual and interaction capabilities better to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each task having different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still fastest and most precise in almost all cases.

  11. The Potential of Using Virtual Reality Technology in Physical Activity Settings

    Science.gov (United States)

    Pasco, Denis

    2013-01-01

    In recent years, virtual reality technology has been successfully used for learning purposes. The purposes of the article are to examine current research on the role of virtual reality in physical activity settings and discuss potential application of using virtual reality technology to enhance learning in physical education. The article starts…

  12. Molecular simulations and visualization: introduction and overview.

    Science.gov (United States)

    Hirst, Jonathan D; Glowacki, David R; Baaden, Marc

    2014-01-01

    Here we provide an introduction and overview of current progress in the field of molecular simulation and visualization, touching on the following topics: (1) virtual and augmented reality for immersive molecular simulations; (2) advanced visualization and visual analytic techniques; (3) new developments in high performance computing; and (4) applications and model building.

  13. Using Immersive Visualizations to Improve Decision Making and Enhancing Public Understanding of Earth Resource and Climate Issues

    Science.gov (United States)

    Yu, K. C.; Raynolds, R. G.; Dechesne, M.

    2008-12-01

    New visualization technologies, from ArcGIS to Google Earth, have allowed for the integration of complex, disparate data sets to produce visually rich and compelling three-dimensional models of sub-surface and surface resource distribution patterns. The rendering of these models allows the public to quickly understand complicated geospatial relationships that would otherwise take much longer to explain using traditional media. We have impacted the community through topical policy presentations at both state and city levels, adult education classes at the Denver Museum of Nature and Science (DMNS), and public lectures at DMNS. We have constructed three-dimensional models from well data and surface observations which allow policy makers to better understand the distribution of groundwater in sandstone aquifers of the Denver Basin. Our presentations to local governments in the Denver metro area have allowed resource managers to better project future ground water depletion patterns, and to encourage development of alternative sources. DMNS adult education classes on water resources, geography, and regional geology, as well as public lectures on global issues such as earthquakes, tsunamis, and resource depletion, have utilized the visualizations developed from these research models. In addition to presenting GIS models in traditional lectures, we have also made use of the immersive display capabilities of the digital "fulldome" Gates Planetarium at DMNS. The real-time Uniview visualization application installed at Gates was designed for teaching astronomy, but it can be re-purposed for displaying our model datasets in the context of the Earth's surface. The 17-meter diameter dome of the Gates Planetarium allows an audience to have an immersive experience---similar to virtual reality CAVEs employed by the oil exploration industry---that would otherwise not be available to the general public. Public lectures in the dome allow audiences of over 100 people to comprehend

  14. A telescope with augmented reality functions

    Science.gov (United States)

    Hou, Qichao; Cheng, Dewen; Wang, Qiwei; Wang, Yongtian

    2016-10-01

    This study introduces a telescope with virtual reality (VR) and augmented reality (AR) functions. In this telescope, information on the micro-display screen is integrated into the reticule of the telescope through a beam splitter and is then received by the observer. The design and analysis of the telescope optical system with AR and VR capability are accomplished and the opto-mechanical structure is designed. Finally, a proof-of-concept prototype is fabricated and demonstrated. The telescope has an exit pupil diameter of 6 mm at an eye relief of 19 mm, a 6° field of view, 5 to 8 times visual magnification, and a 30° field of view of the virtual image.
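
    As a hedged aside (standard first-order telescope relations, not taken from the paper), the quoted exit pupil and magnification imply an approximate objective aperture, and the small-angle field relation is consistent with the stated virtual-image field at the low end of the magnification range:

\[
  d_{\mathrm{exit}} = \frac{D_{\mathrm{obj}}}{M}
  \quad\Rightarrow\quad
  D_{\mathrm{obj}} = M\, d_{\mathrm{exit}}
  \approx 5 \times 6\,\mathrm{mm}\ \text{to}\ 8 \times 6\,\mathrm{mm}
  = 30\ \text{to}\ 48\,\mathrm{mm},
\]
\[
  \theta_{\mathrm{apparent}} \approx M\,\theta_{\mathrm{true}}
  \approx 5 \times 6^{\circ} = 30^{\circ}.
\]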

  15. Development of reactor design aid tool using virtual reality technology

    International Nuclear Information System (INIS)

    Mizuguchi, N.; Tamura, Y.; Imagawa, S.; Sagara, A.; Hayashi, T.

    2006-01-01

    A new type of aid system for fusion reactor design, to which the virtual reality (VR) visualization and sonification techniques are applied, is developed. This system provides us with an intuitive interaction environment in the VR space between the observer and the designed objects constructed by the conventional 3D computer-aided design (CAD) system. We have applied the design aid tool to the heliotron-type fusion reactor design activity FFHR2m [A. Sagara, S. Imagawa, O. Mitarai, T. Dolan, T. Tanaka, Y. Kubota, et al., Improved structure and long-life blanket concepts for heliotron reactors, Nucl. Fusion 45 (2005) 258-263] on the virtual reality system CompleXcope [Y. Tamura, A. Kageyama, T. Sato, S. Fujiwara, H. Nakamura, Virtual reality system to visualize and auralize numerical simulation data, Comp. Phys. Comm. 142 (2001) 227-230] of the National Institute for Fusion Science, Japan, and have evaluated its performance. The tool includes the functions of transfer of the observer, translation and scaling of the objects, recording of the operations and checking for interference.

  16. A Preliminary Study of Users' Experiences of Meditation in Virtual Reality

    DEFF Research Database (Denmark)

    Andersen, Thea Louise Strange; Anisimovaite, Gintare; Christiansen, Anders Schultz

    2017-01-01

    This poster describes a between-groups study (n=24) exploring the use of virtual reality (VR) for facilitating focused meditation. Half of the participants were exposed to a meditation session combining the sound of a guiding voice and a visual environment including virtual objects for the participants... differences were found between the two conditions. This finding may be revealing in regards to the usefulness of VR-based meditation....

  17. Social facilitation in virtual reality-enhanced exercise: competitiveness moderates exercise effort of older adults.

    Science.gov (United States)

    Anderson-Hanley, Cay; Snyder, Amanda L; Nimon, Joseph P; Arciero, Paul J

    2011-01-01

    This study examined the effect of virtual social facilitation and competitiveness on exercise effort in exergaming older adults. Fourteen exergaming older adults participated. Competitiveness was assessed prior to the start of exercise. Participants were trained to ride a "cybercycle," a virtual reality-enhanced stationary bike with interactive competition. After establishing a cybercycling baseline, competitive avatars were introduced. Pedaling effort (watts) was assessed. Repeated measures ANOVA revealed a significant group (high vs low competitiveness) × time (pre- to post-avatar) interaction (F[1,12] = 13.1, P = 0.003). Virtual social facilitation increased exercise effort among more competitive exercisers. Exercise programs that match competitiveness may maximize exercise effort.

  18. Enhancing User Experiences of Mobile-Based Augmented Reality via Spatial Augmented Reality: Designs and Architectures of Projector-Camera Devices

    Directory of Open Access Journals (Sweden)

    Thitirat Siriborvornratanakul

    2018-01-01

    Full Text Available As smartphones, tablet computers, and other mobile devices have continued to dominate our digital world ecosystem, there are many industries using mobile or wearable devices to perform Augmented Reality (AR) functions in their workplaces in order to increase productivity and decrease unnecessary workloads. Mobile-based AR can basically be divided into three main types: phone-based AR, wearable AR, and projector-based AR. Among these, projector-based AR or Spatial Augmented Reality (SAR) is the most immature and least recognized type of AR for end users. This is because there are only a small number of commercial products providing projector-based AR functionalities in a mobile manner. Also, prices of mobile projectors are still relatively high. Moreover, there are still many technical problems regarding projector-based AR that have been left unsolved. Nevertheless, it is projector-based AR that has the potential to solve a fundamental problem shared by most mobile-based AR systems. Also, the always-visible nature of projector-based AR is one good answer for solving current user experience issues of phone-based AR and wearable AR systems. Hence, in this paper, we analyze the user experience issues and technical issues regarding common mobile-based AR systems, recently widespread phone-based AR systems, and rising wearable AR systems. Then for each issue, we propose and explain a new solution of how using projector-based AR can solve the problems and/or help enhance its user experiences. Our proposed framework includes hardware designs and architectures as well as a software computing paradigm towards mobile projector-based AR systems. The proposed design is evaluated by three experts using qualitative and semiquantitative research approaches.

  19. Enhancement and suppression in the visual field under perceptual load.

    Science.gov (United States)

    Parks, Nathan A; Beck, Diane M; Kramer, Arthur F

    2013-01-01

    The perceptual load theory of attention proposes that the degree to which visual distractors are processed is a function of the attentional demands of a task: greater demands increase filtering of irrelevant distractors. The spatial configuration of such filtering is unknown. Here, we used steady-state visual evoked potentials (SSVEPs) in conjunction with time-domain event-related potentials (ERPs) to investigate the distribution of load-induced distractor suppression and task-relevant enhancement in the visual field. Electroencephalogram (EEG) was recorded while subjects performed a foveal go/no-go task that varied in perceptual load. Load-dependent distractor suppression was assessed by presenting a contrast reversing ring at one of three eccentricities (2, 6, or 11°) during performance of the go/no-go task. Rings contrast reversed at 8.3 Hz, allowing load-dependent changes in distractor processing to be tracked in the frequency-domain. ERPs were calculated to the onset of stimuli in the load task to examine load-dependent modulation of task-relevant processing. Results showed that the amplitude of the distractor SSVEP (8.3 Hz) was attenuated under high perceptual load (relative to low load) at the most proximal (2°) eccentricity but not at more eccentric locations (6 or 11°). Task-relevant ERPs revealed a significant increase in N1 amplitude under high load. These results are consistent with a center-surround configuration of load-induced enhancement and suppression in the visual field.

  20. Enhancement and Suppression in the Visual Field under Perceptual Load

    Directory of Open Access Journals (Sweden)

    Nathan A Parks

    2013-05-01

    Full Text Available The perceptual load theory of attention proposes that the degree to which visual distractors are processed is a function of the attentional demands of a task: greater demands increase filtering of irrelevant distractors. The spatial configuration of such filtering is unknown. Here, we used steady-state visual evoked potentials (SSVEPs) in conjunction with time-domain event-related potentials (ERPs) to investigate the distribution of load-induced distractor suppression and task-relevant enhancement in the visual field. Electroencephalogram (EEG) was recorded while subjects performed a foveal go/no-go task that varied in perceptual load. Load-dependent distractor suppression was assessed by presenting a contrast reversing ring at one of three eccentricities (2°, 6°, or 11°) during performance of the go/no-go task. Rings contrast reversed at 8.3 Hz, allowing load-dependent changes in distractor processing to be tracked in the frequency-domain. ERPs were calculated to the onset of stimuli in the load task to examine load-dependent modulation of task-relevant processing. Results showed that the amplitude of the distractor SSVEP (8.3 Hz) was attenuated under high perceptual load (relative to low load) at the most proximal (2°) eccentricity but not at more eccentric locations (6° or 11°). Task-relevant ERPs revealed a significant increase in N1 amplitude under high load. These results are consistent with a center-surround configuration of load-induced enhancement and suppression in the visual field.
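
    Tracking distractor processing in the frequency domain, as described in these records, amounts to reading the spectral amplitude at the tagging frequency (8.3 Hz) out of each EEG epoch; the sketch below uses only NumPy on simulated data and is not the authors' analysis pipeline (the sampling rate and epoch length are assumptions).

# Hedged sketch: estimate SSVEP amplitude at an 8.3 Hz tagging frequency
# from a single EEG epoch via a discrete Fourier transform. Simulated data.
import numpy as np

fs = 500.0          # sampling rate in Hz (assumed)
duration = 4.0      # epoch length in seconds (assumed)
tag_freq = 8.3      # contrast-reversal frequency of the distractor ring

t = np.arange(0.0, duration, 1.0 / fs)
# Simulated epoch: an 8.3 Hz steady-state response buried in noise.
epoch = 2.0 * np.sin(2 * np.pi * tag_freq * t) + np.random.normal(0.0, 5.0, t.size)

spectrum = np.fft.rfft(epoch)
freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
amplitude = 2.0 * np.abs(spectrum) / epoch.size  # single-sided amplitude

bin_at_tag = int(np.argmin(np.abs(freqs - tag_freq)))
print(f"SSVEP amplitude near {freqs[bin_at_tag]:.2f} Hz: "
      f"{amplitude[bin_at_tag]:.2f} (arbitrary units)")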

  1. Gunner Goggles: Implementing Augmented Reality into Medical Education.

    Science.gov (United States)

    Wang, Leo L; Wu, Hao-Hua; Bilici, Nadir; Tenney-Soeiro, Rebecca

    2016-01-01

    There is evidence that both smartphone and tablet integration into medical education has been lacking. At the same time, there is a niche for augmented reality (AR) to improve this process through the enhancement of textbook learning. Gunner Goggles is an attempt to enhance textbook learning in shelf exam preparatory review with augmented reality. Here we describe our initial prototype and detail the process by which augmented reality was implemented into our textbook through Layar. We describe the unique functionalities of our textbook pages upon augmented reality implementation, which includes links, videos and 3D figures, and surveyed 24 third year medical students for their impression of the technology. Upon demonstrating an initial prototype textbook chapter, 100% (24/24) of students felt that augmented reality improved the quality of our textbook chapter as a learning tool. Of these students, 92% (22/24) agreed that their shelf exam review was inadequate and 19/24 (79%) felt that a completed Gunner Goggles product would have been a viable alternative to their shelf exam review. Thus, while students report interest in the integration of AR into medical education test prep, future investigation into how the use of AR can improve performance on exams is warranted.

  2. Performance of a visuomotor walking task in an augmented reality training setting

    NARCIS (Netherlands)

    Haarman, Juliet A.M.; Choi, Julia T.; Buurke, Jaap H.; Rietman, Johan S.; Reenalda, Jasper

    2017-01-01

    Visual cues can be used to train walking patterns. Here, we studied the performance and learning capacities of healthy subjects executing a high-precision visuomotor walking task, in an augmented reality training set-up. A beamer was used to project visual stepping targets on the walking surface of

  3. Speculations on the representation of architecture in virtual reality

    DEFF Research Database (Denmark)

    Hermund, Anders; Klint, Lars; Bundgård, Ture Slot

    2017-01-01

    to the visual field of perception. However, this should not necessarily imply an acceptance of the dominance of vision over the other senses, and the much-criticized retinal architecture with its inherent loss of plasticity. Recent neurology studies indicate that 3D representation models in virtual reality are less demanding on the brain's working memory than 3D models seen on flat two-dimensional screens. This paper suggests that virtual reality representational architectural models can, if used correctly, significantly improve the imaginative role of architectural representation.

  4. Using visuo-kinetic virtual reality to induce illusory spinal movement: the MoOVi Illusion.

    Science.gov (United States)

    Harvie, Daniel S; Smith, Ross T; Hunter, Estin V; Davis, Miles G; Sterling, Michele; Moseley, G Lorimer

    2017-01-01

    Illusions that alter perception of the body provide novel opportunities to target brain-based contributions to problems such as persistent pain. One example of this, mirror therapy, uses vision to augment perceived movement of a painful limb to treat pain. Since mirrors can't be used to induce augmented neck or other spinal movement, we aimed to test whether such an illusion could be achieved using virtual reality, in advance of testing its potential therapeutic benefit. We hypothesised that perceived head rotation would depend on visually suggested movement. In a within-subjects repeated measures experiment, 24 healthy volunteers performed neck movements to 50° of rotation, while a virtual reality system delivered corresponding visual feedback that was offset by a factor of 50%-200% (the Motor Offset Visual Illusion, MoOVi), thus simulating more or less movement than that actually occurring. At 50° of real-world head rotation, participants pointed in the direction that they perceived they were facing. The discrepancy between actual and perceived direction was measured and compared between conditions. The impact of including multisensory (auditory and visual) feedback, the presence of a virtual body reference, and the use of 360° immersive virtual reality with and without three-dimensional properties, was also investigated. Perception of head movement was dependent on visual-kinaesthetic feedback (p = 0.001, partial eta squared = 0.17). That is, altered visual feedback caused a kinaesthetic drift in the direction of the visually suggested movement. The magnitude of the drift was not moderated by secondary variables such as the addition of illusory auditory feedback, the presence of a virtual body reference, or three-dimensionality of the scene. Virtual reality can be used to augment perceived movement and body position, such that one can perform a small movement, yet perceive a large one. The MoOVi technique tested here has clear potential for assessment and

  5. Using visuo-kinetic virtual reality to induce illusory spinal movement: the MoOVi Illusion

    Directory of Open Access Journals (Sweden)

    Daniel S. Harvie

    2017-02-01

    Full Text Available Background Illusions that alter perception of the body provide novel opportunities to target brain-based contributions to problems such as persistent pain. One example of this, mirror therapy, uses vision to augment perceived movement of a painful limb to treat pain. Since mirrors can't be used to induce augmented neck or other spinal movement, we aimed to test whether such an illusion could be achieved using virtual reality, in advance of testing its potential therapeutic benefit. We hypothesised that perceived head rotation would depend on visually suggested movement. Method In a within-subjects repeated measures experiment, 24 healthy volunteers performed neck movements to 50° of rotation, while a virtual reality system delivered corresponding visual feedback that was offset by a factor of 50%–200% (the Motor Offset Visual Illusion, MoOVi), thus simulating more or less movement than that actually occurring. At 50° of real-world head rotation, participants pointed in the direction that they perceived they were facing. The discrepancy between actual and perceived direction was measured and compared between conditions. The impact of including multisensory (auditory and visual) feedback, the presence of a virtual body reference, and the use of 360° immersive virtual reality with and without three-dimensional properties, was also investigated. Results Perception of head movement was dependent on visual-kinaesthetic feedback (p = 0.001, partial eta squared = 0.17). That is, altered visual feedback caused a kinaesthetic drift in the direction of the visually suggested movement. The magnitude of the drift was not moderated by secondary variables such as the addition of illusory auditory feedback, the presence of a virtual body reference, or three-dimensionality of the scene. Discussion Virtual reality can be used to augment perceived movement and body position, such that one can perform a small movement, yet perceive a large one. The MoOVi technique tested here has clear potential for assessment and
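
    The visual offset manipulation in the two MoOVi records reduces to scaling the tracked head rotation by a gain before rendering; the sketch below is a minimal illustration of that idea (function and variable names are invented, not the authors' virtual reality software).

# Hedged sketch of the gain manipulation behind the MoOVi illusion: the
# rotation rendered in the headset is the tracked rotation scaled by an
# offset factor (0.5-2.0 in the study). Names are invented for illustration.

def rendered_rotation(tracked_deg: float, gain: float) -> float:
    """Rotation (degrees) shown to the participant for a tracked rotation."""
    return tracked_deg * gain

if __name__ == "__main__":
    actual = 50.0  # participants rotated their head to 50 degrees
    for gain in (0.5, 1.0, 1.5, 2.0):
        shown = rendered_rotation(actual, gain)
        print(f"gain {gain:.1f}: tracked {actual:.0f} deg -> shown {shown:.0f} deg")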

  6. Preliminary development of augmented reality systems for spinal surgery

    Science.gov (United States)

    Nguyen, Nhu Q.; Ramjist, Joel M.; Jivraj, Jamil; Jakubovic, Raphael; Deorajh, Ryan; Yang, Victor X. D.

    2017-02-01

    Surgical navigation has been more actively deployed in open spinal surgeries due to the need for improved precision during procedures. This is increasingly difficult in minimally invasive surgeries due to the lack of visual cues caused by smaller exposure sites, and increases a surgeon's dependence on their knowledge of anatomical landmarks as well as the CT or MRI images. The use of augmented reality (AR) systems and registration technologies in spinal surgeries could allow for improvements to techniques by overlaying a 3D reconstruction of patient anatomy in the surgeon's field of view, creating a mixed reality visualization. The AR system will be capable of projecting the 3D reconstruction onto a field and performing preliminary object tracking on a phantom. Dimensional accuracy of the mixed media will also be quantified to account for distortions in tracking.

  7. CARE: Creating Augmented Reality in Education

    Science.gov (United States)

    Latif, Farzana

    2012-01-01

    This paper explores how Augmented Reality using mobile phones can enhance teaching and learning in education. It specifically examines its application in two cases, where it is identified that the agility of mobile devices and the ability to overlay context specific resources offers opportunities to enhance learning that would not otherwise exist.…

  8. Virtual reality and hallucination: a technoetic perspective

    Science.gov (United States)

    Slattery, Diana R.

    2008-02-01

    Virtual Reality (VR), especially in a technologically focused discourse, is defined by a class of hardware and software, among them head-mounted displays (HMDs), navigation and pointing devices, and stereoscopic imaging. This presentation examines the experiential aspect of VR. Putting "virtual" in front of "reality" modifies the ontological status of a class of experience: that of "reality." Reality has also been modified [by artists, new media theorists, technologists and philosophers] as augmented, mixed, simulated, artificial, layered, and enhanced. Modifications of reality are closely tied to modifications of perception. Media theorist Roy Ascott creates a model of three "VR's": Verifiable Reality, Virtual Reality, and Vegetal (entheogenically induced) Reality. The ways in which we shift our perceptual assumptions, create and verify illusions, and enter "the willing suspension of disbelief" that allows us entry into imaginal worlds are central to the experience of VR worlds, whether those worlds are explicitly representational (robotic manipulations by VR) or explicitly imaginal (VR artistic creations). The early rhetoric surrounding VR was interwoven with psychedelics, a perception amplified by Timothy Leary's presence on the historic SIGGRAPH panel, and the Wall Street Journal's tag of VR as "electronic LSD." This paper discusses the connections (philosophical, social-historical, and psychological-perceptual) between these two domains.

  9. Social Augmented Reality: Enhancing Context-Dependent Communication and Informal Learning at Work

    Science.gov (United States)

    Pejoska, Jana; Bauters, Merja; Purma, Jukka; Leinonen, Teemu

    2016-01-01

    Our design proposal of social augmented reality (SoAR) grows from the observed difficulties of practical applications of augmented reality (AR) in workplace learning. In our research we investigated construction workers doing physical work in the field and analyzed the data using qualitative methods in various workshops. The challenges related to…

  10. The Effect of an Augmented Reality Enhanced Mathematics Lesson on Student Achievement and Motivation

    Science.gov (United States)

    Estapa, Anne; Nadolny, Larysa

    2015-01-01

    The purpose of the study was to assess student achievement and motivation during a high school augmented reality mathematics activity focused on dimensional analysis. Included in this article is a review of the literature on the use of augmented reality in mathematics and the combination of print with augmented reality, also known as interactive…

  11. Non-hierarchical Influence of Visual Form, Touch, and Position Cues on Embodiment, Agency, and Presence in Virtual Reality.

    Science.gov (United States)

    Pritchard, Stephen C; Zopf, Regine; Polito, Vince; Kaplan, David M; Williams, Mark A

    2016-01-01

    The concept of self-representation is commonly decomposed into three component constructs (sense of embodiment, sense of agency, and sense of presence), and each is typically investigated separately across different experimental contexts. For example, embodiment has been explored in bodily illusions; agency has been investigated in hypnosis research; and presence has been primarily studied in the context of Virtual Reality (VR) technology. Given that each component involves the integration of multiple cues within and across sensory modalities, they may rely on similar underlying mechanisms. However, the degree to which this may be true remains unclear when they are independently studied. As a first step toward addressing this issue, we manipulated a range of cues relevant to these components of self-representation within a single experimental context. Using consumer-grade Oculus Rift VR technology, and a new implementation of the Virtual Hand Illusion, we systematically manipulated visual form plausibility, visual-tactile synchrony, and visual-proprioceptive spatial offset to explore their influence on self-representation. Our results show that these cues differentially influence embodiment, agency, and presence. We provide evidence that each type of cue can independently and non-hierarchically influence self-representation yet none of these cues strictly constrains or gates the influence of the others. We discuss theoretical implications for understanding self-representation as well as practical implications for VR experiment design, including the suitability of consumer-based VR technology in research settings.

  12. Virtual Reality and Simulation in Neurosurgical Training.

    Science.gov (United States)

    Bernardo, Antonio

    2017-10-01

    Recent biotechnological advances, including three-dimensional microscopy and endoscopy, virtual reality, surgical simulation, surgical robotics, and advanced neuroimaging, have continued to mold the surgeon-computer relationship. For developing neurosurgeons, such tools can reduce the learning curve, improve conceptual understanding of complex anatomy, and enhance visuospatial skills. We explore the current and future roles and application of virtual reality and simulation in neurosurgical training. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. A standardized set of 3-D objects for virtual reality research and applications.

    Science.gov (United States)

    Peeters, David

    2018-06-01

    The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow for science to move forward more quickly.

  14. A Control Room Design Support system using virtual reality

    International Nuclear Information System (INIS)

    Sakuma, Akira; Fukumoto, Akira; Hatanaka, Takahiro; Saijou, Nobuyuki; Masugi, Tsuyoshi

    1999-01-01

    To enhance the efficiency of design and evaluation of the control and monitoring system in the main control room of nuclear power plants, we have been developing a COntrol Room Design Support system (CORDS) using virtual reality technology. Using CORDS, vendor designers and customers can visually check and review the human interface design of proposed control and monitoring systems. The geometry of the panels and consoles of the control and monitoring system is represented as 3-dimensional static CG (computer graphics) models. Dynamic components, such as control switches, CRT displays and so on, are modeled as dynamic objects in the geometric CG model environment. CORDS is linked with a real-time plant simulator. The dynamic objects respond to the corresponding process variables in the simulator, which enables visual evaluation of the response of the control and monitoring system for various normal and abnormal plant states. The behavior of plant operators can be simulated in the 3-dimensional CG control room environment. The operators can be displayed as CG figures, and their motions are modeled and controlled based on plant operation manuals. A prototype of CORDS has been constructed on a graphics workstation and two engineering workstations. (author)
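
    The linkage described above, in which dynamic CG objects respond to simulator process variables, is essentially an update loop where each display object reads the variable it is bound to and refreshes its visual state. The sketch below shows that binding pattern in a generic form; the class name, variable keys, and polling interface are assumptions for illustration, not CORDS internals.

```python
# Generic binding of a dynamic display object to a simulator process variable:
# on every update cycle the object reads "its" variable and refreshes its state.
class DynamicObject:
    def __init__(self, name, variable, render):
        self.name = name              # e.g. "reactor-trip switch" (illustrative)
        self.variable = variable      # key of the bound process variable
        self.render = render          # callable that updates the CG model

    def update(self, simulator_state):
        self.render(self.name, simulator_state[self.variable])

# Toy simulator snapshot and console "renderers" standing in for the CG models.
simulator_state = {"RCS_PRESSURE": 15.4, "TRIP_SWITCH": "OPEN"}
objects = [
    DynamicObject("pressure gauge", "RCS_PRESSURE",
                  lambda n, v: print(f"{n} shows {v}")),
    DynamicObject("trip switch", "TRIP_SWITCH",
                  lambda n, v: print(f"{n} drawn as {v}")),
]

for obj in objects:                   # one frame of the update loop
    obj.update(simulator_state)
```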

  15. Enhanced recognition memory in grapheme-color synaesthesia for different categories of visual stimuli.

    Science.gov (United States)

    Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas

    2013-01-01

    Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status.

  16. Immersive Visualization of the Solid Earth

    Science.gov (United States)

    Kreylos, O.; Kellogg, L. H.

    2017-12-01

    Immersive visualization using virtual reality (VR) display technology offers unique benefits for the visual analysis of complex three-dimensional data such as tomographic images of the mantle and higher-dimensional data such as computational geodynamics models of mantle convection or even planetary dynamos. Unlike "traditional" visualization, which has to project 3D scalar data or vectors onto a 2D screen for display, VR can display 3D data in a pseudo-holographic (head-tracked stereoscopic) form, and does therefore not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection and interfere with interpretation. As a result, researchers can apply their spatial reasoning skills to 3D data in the same way they can to real objects or environments, as well as to complex objects like vector fields. 3D Visualizer is an application to visualize 3D volumetric data, such as results from mantle convection simulations or seismic tomography reconstructions, using VR display technology and a strong focus on interactive exploration. Unlike other visualization software, 3D Visualizer does not present static visualizations, such as a set of cross-sections at pre-selected positions and orientations, but instead lets users ask questions of their data, for example by dragging a cross-section through the data's domain with their hands and seeing data mapped onto that cross-section in real time, or by touching a point inside the data domain, and immediately seeing an isosurface connecting all points having the same data value as the touched point. Combined with tools allowing 3D measurements of positions, distances, and angles, and with annotation tools that allow free-hand sketching directly in 3D data space, the outcome of using 3D Visualizer is not primarily a set of pictures, but derived data to be used for subsequent analysis. 3D Visualizer works best in virtual reality, either in high-end facility-scale environments such as CAVEs
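
    The "touch a point, see the isosurface through it" interaction described above amounts to extracting an isosurface at the data value of the touched voxel. A minimal sketch using scikit-image's marching cubes is shown below; the synthetic volume and the touched index are placeholders, and 3D Visualizer's own implementation is not reproduced here.

```python
# Extract an isosurface through a "touched" point: take the scalar value at the
# touched voxel and run marching cubes at that level.
import numpy as np
from skimage.measure import marching_cubes   # assumes scikit-image is installed

# Synthetic 3D scalar field standing in for tomography or convection output.
z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = np.exp(-(x**2 + y**2 + z**2) * 4)

touched_index = (40, 32, 20)                 # voxel the user touched (placeholder)
level = volume[touched_index]                # isovalue = data value at that point

verts, faces, normals, values = marching_cubes(volume, level=level)
print(f"isosurface at {level:.3f}: {len(verts)} vertices, {len(faces)} triangles")
```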

  17. Objective Assessment of Activity Limitation in Glaucoma with Smartphone Virtual Reality Goggles: A Pilot Study.

    Science.gov (United States)

    Goh, Rachel L Z; Kong, Yu Xiang George; McAlinden, Colm; Liu, John; Crowston, Jonathan G; Skalicky, Simon E

    2018-01-01

    To evaluate the use of smartphone-based virtual reality to objectively assess activity limitation in glaucoma. Cross-sectional study of 93 patients (54 mild, 22 moderate, 17 severe glaucoma). Sociodemographics, visual parameters, Glaucoma Activity Limitation-9 and Visual Function Questionnaire - Utility Index (VFQ-UI) were collected. Mean age was 67.4 ± 13.2 years; 52.7% were male; 65.6% were driving. A smartphone placed inside virtual reality goggles was used to administer the Virtual Reality Glaucoma Visual Function Test (VR-GVFT) to participants, consisting of three parts: stationary, moving ball, driving. Rasch analysis and classical validity tests were conducted to assess performance of VR-GVFT. Twenty-four of 28 stationary test items showed acceptable fit to the Rasch model (person separation 3.02, targeting 0). Eleven of 12 moving ball test items showed acceptable fit (person separation 3.05, targeting 0). No driving test items showed acceptable fit. Stationary test person scores showed good criterion validity, differentiating between glaucoma severity groups (P = 0.014); modest convergence validity, with mild to moderate correlation with VFQ-UI, better eye (BE) mean deviation, BE pattern deviation, BE central scotoma, worse eye (WE) visual acuity, and contrast sensitivity (CS) in both eyes (R = 0.243-0.381); and suboptimal divergent validity. Multivariate analysis showed that lower WE CS (P = 0.044) and greater age (P = 0.009) were associated with worse stationary test person scores. Smartphone-based virtual reality may be a portable objective simulation test of activity limitation related to glaucomatous visual loss. The use of simulated virtual environments could help better understand the activity limitations that affect patients with glaucoma.

  18. Virtual-Reality training system for nuclear security

    International Nuclear Information System (INIS)

    Nonaka, Nobuyuki

    2012-01-01

    At the Integrated Support Center for Nuclear Nonproliferation and Nuclear Security (ISCN) of the Japan Atomic Energy Agency, a virtual reality (VR) training system is under development to provide a practical training environment for experience-oriented and interactive lessons on nuclear security to a wide range of participants in the human resource development assistance program, aimed mainly at Asian emerging nuclear-power countries. The system electronically recreates and visualizes nuclear facilities and training conditions in stereoscopic (3D) view on a large-scale display (CAVE system) as a virtual reality training facility (VR facility). It provides training participants with an effective environment in which to learn the installation and layout of security equipment in the facility, visually testing and verifying protection performance under various situations, such as changes in day-night lighting and weather conditions, which may lead to practical exercises in the design and evaluation of the physical protection system. This paper introduces the basic concept of the system and an outline of the training programs, as well as featured aspects of using VR technology for nuclear security. (author)

  19. Augmented Reality Based Doppler Lidar Data Visualization: Promises and Challenges

    Directory of Open Access Journals (Sweden)

    Cherukuru N. W.

    2016-01-01

    As a proof of concept, we used the lidar data from a recent field campaign and developed a smartphone application to view the lidar scan in augmented reality. In this paper, we give a brief methodology of this feasibility study, present the challenges and promises of using AR technology in conjunction with Doppler wind lidars.

  20. Augmented Reality in Tourism - Research and Applications Overview

    Directory of Open Access Journals (Sweden)

    Anabel L. Kečkeš

    2017-06-01

    Full Text Available Augmented reality is a complex interdisciplinary field utilizing information technologies in diverse areas such as medicine, education, architecture, industry, tourism and others, augmenting the real-time, real-world view with additional superimposed information in chosen format(s). The aim of this paper is to present an overview of both research and application aspects of using augmented reality technologies in the tourism domain. While most research, and especially applications, deal with and develop visual-based augmented reality systems, there is a relevant amount of research discussing the utilization of other human senses such as tactioception and audioception, both of which are discussed within this work. A comprehensive literature analysis within this paper resulted in the identification, compilation and categorization of the key factors having the most relevant impact on the successful utilization of augmented reality technology in the tourism domain.

  1. Cholinergic enhancement reduces orientation-specific surround suppression but not visual crowding

    Directory of Open Access Journals (Sweden)

    Anna A. Kosovicheva

    2012-09-01

    Full Text Available Acetylcholine (ACh) reduces the spatial spread of excitatory fMRI responses in early visual cortex and the receptive field sizes of V1 neurons. We investigated the perceptual consequences of these physiological effects of ACh with surround suppression and crowding, two tasks that involve spatial interactions between visual field locations. Surround suppression refers to the reduction in perceived stimulus contrast by a high-contrast surround stimulus. For grating stimuli, surround suppression is selective for the relative orientations of the center and surround, suggesting that it results from inhibitory interactions in early visual cortex. Crowding refers to impaired identification of a peripheral stimulus in the presence of flankers and is thought to result from excessive integration of visual features. We increased synaptic ACh levels by administering the cholinesterase inhibitor donepezil to healthy human subjects in a placebo-controlled, double-blind design. In Exp. 1, we measured surround suppression of a central grating using a contrast discrimination task with three conditions: (1) a surround grating with the same orientation as the center (parallel), (2) a surround orthogonal to the center, or (3) no surround. Contrast discrimination thresholds were higher in the parallel than in the orthogonal condition, demonstrating orientation-specific surround suppression (OSSS). Cholinergic enhancement reduced thresholds only in the parallel condition, thereby reducing OSSS. In Exp. 2, subjects performed a crowding task in which they reported the identity of a peripheral letter flanked by letters on either side. We measured the critical spacing between the target and flanking letters that allowed reliable identification. Cholinergic enhancement had no effect on critical spacing. Our findings suggest that ACh reduces spatial interactions in tasks involving segmentation of visual field locations but that these effects may be limited to early visual cortical

  2. Testing of Visual Field with Virtual Reality Goggles in Manual and Visual Grasp Modes

    Directory of Open Access Journals (Sweden)

    Dariusz Wroblewski

    2014-01-01

    Full Text Available Automated perimetry is used for the assessment of visual function in a variety of ophthalmic and neurologic diseases. We report development and clinical testing of a compact, head-mounted, and eye-tracking perimeter (VirtualEye) that provides a more comfortable test environment than the standard instrumentation. VirtualEye performs the equivalent of a full threshold 24-2 visual field in two modes: (1) manual, with patient response registered with a mouse click, and (2) visual grasp, where the eye tracker senses change in gaze direction as evidence of target acquisition. 59 patients successfully completed the test in manual mode and 40 in visual grasp mode, with 59 undergoing the standard Humphrey field analyzer (HFA) testing. Large visual field defects were reliably detected by VirtualEye. Point-by-point comparison between the results obtained with the different modalities indicates: (1) minimal systematic differences between measurements taken in visual grasp and manual modes, (2) an average standard deviation of the difference distributions of about 5 dB, and (3) a systematic shift (of 4–6 dB) to lower sensitivities for the VirtualEye device, observed mostly in the high dB range. The usability survey suggested patients' acceptance of the head-mounted device. The study appears to validate the concepts of a head-mounted perimeter and the visual grasp mode.

  3. Testing of visual field with virtual reality goggles in manual and visual grasp modes.

    Science.gov (United States)

    Wroblewski, Dariusz; Francis, Brian A; Sadun, Alfredo; Vakili, Ghazal; Chopra, Vikas

    2014-01-01

    Automated perimetry is used for the assessment of visual function in a variety of ophthalmic and neurologic diseases. We report development and clinical testing of a compact, head-mounted, and eye-tracking perimeter (VirtualEye) that provides a more comfortable test environment than the standard instrumentation. VirtualEye performs the equivalent of a full threshold 24-2 visual field in two modes: (1) manual, with patient response registered with a mouse click, and (2) visual grasp, where the eye tracker senses change in gaze direction as evidence of target acquisition. 59 patients successfully completed the test in manual mode and 40 in visual grasp mode, with 59 undergoing the standard Humphrey field analyzer (HFA) testing. Large visual field defects were reliably detected by VirtualEye. Point-by-point comparison between the results obtained with the different modalities indicates: (1) minimal systematic differences between measurements taken in visual grasp and manual modes, (2) the average standard deviation of the difference distributions of about 5 dB, and (3) a systematic shift (of 4-6 dB) to lower sensitivities for VirtualEye device, observed mostly in high dB range. The usability survey suggested patients' acceptance of the head-mounted device. The study appears to validate the concepts of a head-mounted perimeter and the visual grasp mode.
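
    The two-down one-up staircase used above converges on the stimulus level a participant judges correctly about 70.7% of the time. Below is a minimal, illustrative sketch of such a staircase on a tracked grating groove width; the starting level, step size, and stopping rule are assumptions for the example, not VirtualEye's actual parameters.

```python
# Minimal two-down one-up staircase sketch (illustrative parameters only).
# Two consecutive correct responses make the task harder (smaller level);
# one incorrect response makes it easier (larger level).

def run_staircase(respond, start=3.0, step=0.25, max_reversals=8):
    """respond(level) -> True if the response at this level was correct."""
    level = start
    correct_streak = 0
    last_direction = None            # +1 = getting easier, -1 = getting harder
    reversal_levels = []

    while len(reversal_levels) < max_reversals:
        if respond(level):
            correct_streak += 1
            if correct_streak < 2:
                continue             # wait for the second correct response
            correct_streak = 0
            direction = -1
            level = max(step, level - step)
        else:
            correct_streak = 0
            direction = +1
            level += step

        if last_direction is not None and direction != last_direction:
            reversal_levels.append(level)      # record a reversal
        last_direction = direction

    # Threshold estimate: mean level at the reversals (~70.7% correct point).
    return sum(reversal_levels) / len(reversal_levels)
```

    In practice the respond callback would present the stimulus and register the mouse click or gaze response before returning a correctness judgment.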

  4. Virtual reality at work

    Science.gov (United States)

    Brooks, Frederick P., Jr.

    1991-01-01

    The utility of virtual reality computer graphics in telepresence applications is not hard to grasp and promises to be great. When the virtual world is entirely synthetic, as opposed to real but remote, the utility is harder to establish. Vehicle simulators for aircraft, vessels, and motor vehicles are proving their worth every day. Entertainment applications such as Disney World's StarTours are technologically elegant, good fun, and economically viable. Nevertheless, some of us have no real desire to spend our lifework serving the entertainment craze of our sick culture; we want to see this exciting technology put to work in medicine and science. The topics covered include the following: testing a force display for scientific visualization -- molecular docking; and testing a head-mounted display for scientific and medical visualization.

  5. Development of a Virtual Class for Android-Based Augmented Reality Learning

    Directory of Open Access Journals (Sweden)

    Rifiana Arief

    2015-02-01

    Full Text Available ABSTRACT Augmented Reality on Android phones has become a trend among college students in computer departments who take the New Media course. To develop such applications, knowledge of visual presentation theory and case studies of Augmented Reality on Android phones needs to be provided. A learning medium delivered through a virtual class can meet students' needs in learning about and developing Augmented Reality. The method of this study in developing a virtual class for Augmented Reality learning was: a) preparing and arranging the learning units, b) analyzing and developing the content of the learning materials, c) designing the storyboard or scenario of the virtual class, d) building the virtual class website, and e) implementing the website as a facility for online learning of Augmented Reality. The facilities available in the virtual class are: checking learning units; choosing and downloading material in the form of e-books and presentation slides; opening relevant website links for material enrichment; and student practice with pre-tests and post-tests for measuring students' understanding. By implementing the virtual class for Android-based Augmented Reality learning, the study aims to provide alternative learning strategies for students that are interesting and easy to understand. Students are expected to utilize this facility optimally in order to achieve the purposes of the learning process and graduate competence. Keywords: Virtual Class, Augmented Reality (AR)

  6. A virtual reality-based method of decreasing transmission time of visual feedback for a tele-operative robotic catheter operating system.

    Science.gov (United States)

    Guo, Jin; Guo, Shuxiang; Tamiya, Takashi; Hirata, Hideyuki; Ishihara, Hidenori

    2016-03-01

    An Internet-based tele-operative robotic catheter operating system was designed for vascular interventional surgery, to afford unskilled surgeons the opportunity to learn basic catheter/guidewire skills, while allowing experienced physicians to perform surgeries cooperatively. Remote surgical procedures, limited by variable transmission times for visual feedback, have been associated with deterioration in operability and vascular wall damage during surgery. At the patient's location, the catheter shape/position was detected in real time and converted into three-dimensional coordinates in a world coordinate system. At the operation location, the catheter shape was reconstructed in a virtual-reality environment, based on the coordinates received. The data volume reduction significantly reduced visual feedback transmission times. Remote transmission experiments, conducted over inter-country distances, demonstrated the improved performance of the proposed prototype. The maximum error for the catheter shape reconstruction was 0.93 mm and the transmission time was reduced considerably. The results were positive and demonstrate the feasibility of remote surgery using conventional network infrastructures. Copyright © 2015 John Wiley & Sons, Ltd.
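
    The reported reduction in transmission time comes from sending a compact description of the catheter shape rather than full video frames. The sketch below illustrates only that general idea, serializing a sampled catheter centreline as 3D world coordinates and comparing the payload size with a raw frame; the sample count, packet layout, and byte comparison are illustrative assumptions, not the authors' protocol.

```python
# Illustrative comparison: a sampled catheter centreline packed as binary 3D
# coordinates is orders of magnitude smaller than an uncompressed video frame.
import struct

def pack_centreline(points_mm):
    """points_mm: list of (x, y, z) tuples in a world coordinate system."""
    payload = struct.pack("<I", len(points_mm))          # point-count header
    for x, y, z in points_mm:
        payload += struct.pack("<fff", x, y, z)          # 12 bytes per point
    return payload

# 64 sample points along the catheter (invented shape) -> under 1 kB per update.
centreline = [(i * 2.0, 0.1 * i, 0.0) for i in range(64)]
packet = pack_centreline(centreline)

raw_frame_bytes = 640 * 480 * 3                          # one uncompressed RGB frame
print(len(packet), "bytes vs", raw_frame_bytes, "bytes per video frame")
```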

  7. Revolutionizing Education: The Promise of Virtual Reality

    Science.gov (United States)

    Gadelha, Rene

    2018-01-01

    Virtual reality (VR) has the potential to revolutionize education, as it immerses students in their learning more than any other available medium. By blocking out visual and auditory distractions in the classroom, it has the potential to help students deeply connect with the material they are learning in a way that has never been possible before.…

  8. Seeing Is Believing: Using Virtual Reality to Connect the Dots Between Climate Data and Reality

    Science.gov (United States)

    Skolnik, S.

    2016-12-01

    Companies like Sony, Samsung, Google, and Facebook are heavily investing in virtual reality (VR) for gaming and entertainment, and 2016 marks an important year as many affordable VR headsets are now commercially available. As VR becomes more widely adopted, one question for the science and research community is how VR can be leveraged for practical use. One answer is found in the use of VR for science storytelling and communication. VR has the potential to allow people to experience scientific content in new and engaging ways, including interacting with GIS data. By adapting data sets to create stunning, immersive visualizations and combining them with 360 video, voiceover, music and other video production techniques, we are creating a new paradigm for science communication. 360 VR content is very compelling when viewed in a VR headset and can also be accessed and viewed in a panoramic manner on the internet via websites and social media. We will discuss the proof of concept use case of a short VR 360 video which combines climate data from NASA with 360 video filmed during an extreme weather event (a blizzard). By connecting GIS data with real video footage, the viewer can gain deeper understanding of climate patterns and better comprehend the correlation between data and reality. The positive reaction this VR climate story garnered at events and conferences, such as ESIP, demonstrates the potential for scientists and researchers to communicate results, findings, and data in an engaging format. By combining GIS data and 360 video, this is a significant new approach to enhance the way that science stories are told.

  9. Visual Input Enhancement and Grammar Learning: A Meta-Analytic Review

    Science.gov (United States)

    Lee, Sang-Ki; Huang, Hung-Tzu

    2008-01-01

    Effects of pedagogical interventions with visual input enhancement on grammar learning have been investigated by a number of researchers during the past decade and a half. The present review delineates this research domain via a systematic synthesis of 16 primary studies (comprising 20 unique study samples) retrieved through an exhaustive…

  10. Speculations on the representation of architecture in virtual reality

    DEFF Research Database (Denmark)

    Hermund, Anders; Klint, Lars; Bundgård, Ture Slot

    2017-01-01

    This paper discusses the present and future possibilities of representation models of architecture in new media such as virtual reality, seen in the broader context of tradition, perception, and neurology. Through comparative studies of real and virtual scenarios using eye tracking, the paper discusses whether the constantly evolving toolset for architectural representation has in itself changed the core values of architecture, or whether it is rather the level of skilful application of technology that affects architecture and its quality. It is easy to contemplate virtual reality as an extension to the visual field of perception. However, this should not necessarily imply an acceptance of the dominance of vision over the other senses, and the much-criticized retinal architecture with its inherent loss of plasticity. Recent neurology studies indicate that 3D representation models in virtual reality are less demanding on the brain's working memory than 3D models seen on flat two-dimensional screens. This paper suggests that virtual reality representational architectural models can, if used correctly, significantly improve the imaginative role of architectural representation.

  11. Player/Avatar Body Relations in Multimodal Augmented Reality Games

    NARCIS (Netherlands)

    Rosa, N.E.

    2016-01-01

    Augmented reality research is finally moving towards multimodal experiences: more and more applications include not only visuals, but also audio and even haptics. The purpose of multimodality in these applications can be to increase realism or to increase the amount or quality of communicated information.

  12. Objective Assessment of Laparoscopic Force and Psychomotor Skills in a Novel Virtual Reality-Based Haptic Simulator.

    Science.gov (United States)

    Prasad, M S Raghu; Manivannan, Muniyandi; Manoharan, Govindan; Chandramohan, S M

    2016-01-01

    Most of the commercially available virtual reality-based laparoscopic simulators do not effectively evaluate combined psychomotor and force-based laparoscopic skills. Consequently, the lack of training on these critical skills leads to intraoperative errors. To assess the effectiveness of the novel virtual reality-based simulator, this study analyzed the combined psychomotor (i.e., motion or movement) and force skills of residents and expert surgeons. The study also examined the effectiveness of real-time visual feedback on force and tool motion during training. Bimanual fundamental tasks (i.e., probing, pulling, sweeping, grasping, and twisting) and complex tasks (i.e., tissue dissection) were evaluated. In both task types, visual feedback on applied force and tool motion was provided. The skills of the participants while performing the early tasks were assessed with and without visual feedback. Participants performed 5 repetitions of fundamental and complex tasks. Reaction force and instrument acceleration were used as metrics. Setting: Surgical Gastroenterology, Government Stanley Medical College and Hospital; Institute of Surgical Gastroenterology, Madras Medical College and Rajiv Gandhi Government General Hospital. Participants: Residents (N = 25; postgraduates and surgeons with 4 and ≤10 years of laparoscopic surgery). Residents applied large forces compared with expert surgeons and performed abrupt tool movements (p < 0.001). However, visual + haptic feedback improved the performance of residents (p < 0.001). In complex tasks, visual + haptic feedback did not influence the applied force of expert surgeons, but influenced their tool motion (p < 0.001). Furthermore, in the complex tissue sweeping task, expert surgeons applied more force, but were within the tissue damage limits. In both groups, exertion of large forces and abrupt tool motion were observed during grasping, probing or pulling, and tissue sweeping maneuvers (p < 0.001). Modern day curriculum-based training should evaluate the skills

  13. Effect of Power Point Enhanced Teaching (Visual Input) on Iranian Intermediate EFL Learners' Listening Comprehension Ability

    Science.gov (United States)

    Sehati, Samira; Khodabandehlou, Morteza

    2017-01-01

    The present investigation was an attempt to study the effect of power point enhanced teaching (visual input) on Iranian Intermediate EFL learners' listening comprehension ability. To that end, a null hypothesis was formulated: power point enhanced teaching (visual input) has no effect on Iranian Intermediate EFL learners' listening…

  14. Integrating Spherical Panoramas and Maps for Visualization of Cultural Heritage Objects Using Virtual Reality Technology.

    Science.gov (United States)

    Koeva, Mila; Luleva, Mila; Maldjanski, Plamen

    2017-04-11

    Development and virtual representation of 3D models of Cultural Heritage (CH) objects has triggered great interest over the past decade. The main reason for this is the rapid development in the fields of photogrammetry and remote sensing, laser scanning, and computer vision. The advantages of using 3D models for restoration, preservation, and documentation of valuable historical and architectural objects have been demonstrated numerous times by scientists in the field. Moreover, 3D model visualization in virtual reality has been recognized as an efficient, fast, and easy way of representing a variety of objects worldwide for present-day users, who have stringent requirements and high expectations. However, the main focus of recent research is the visual, geometric, and textural characteristics of a single concrete object, while the integration of large numbers of models with additional information, such as historical overview, detailed description, and location, is missing. Such integrated information can be beneficial not only for tourism but also for accurate documentation. For that reason, we demonstrate in this paper an integration of high-resolution spherical panoramas, a variety of maps, GNSS, sound, video, and text information for the representation of numerous cultural heritage objects. These are then displayed in a web-based portal with an intuitive interface. Users have the opportunity to choose freely from the provided information and decide for themselves what is interesting to visit. Based on the created web application, we provide suggestions and guidelines for similar studies. We selected objects located in Bulgaria, a country with thousands of years of history and cultural heritage dating back to ancient civilizations. The methods used in this research are applicable to any type of spherical or cylindrical images and can be easily followed and applied in various domains. After a visual and metric assessment of the panoramas and the evaluation of

  15. Enhanced Recognition Memory in Grapheme-Colour Synaesthesia for Different Categories of Visual Stimuli

    Directory of Open Access Journals (Sweden)

    Jamie Ward

    2013-10-01

    Full Text Available Memory has been shown to be enhanced in grapheme-colour synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g. free recall, recognition, associative learning), making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, nonwords, scenes, and fractals) and also check which memorisation strategies were used. We demonstrate that grapheme-colour synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing colour, orientation, or object presence). Again, grapheme-colour synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which colour can be used to discriminate old/new status.

  16. Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale

    2015-10-01

    Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched NH listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Touching proteins with virtual bare hands - Visualizing protein-drug complexes and their dynamics in self-made virtual reality using gaming hardware

    Science.gov (United States)

    Ratamero, Erick Martins; Bellini, Dom; Dowson, Christopher G.; Römer, Rudolf A.

    2018-06-01

    The ability to precisely visualize the atomic geometry of the interactions between a drug and its protein target in structural models is critical in predicting the correct modifications in previously identified inhibitors to create more effective next generation drugs. It is currently common practice among medicinal chemists while attempting the above to access the information contained in three-dimensional structures by using two-dimensional projections, which can preclude disclosure of useful features. A more accessible and intuitive visualization of the three-dimensional configuration of the atomic geometry in the models can be achieved through the implementation of immersive virtual reality (VR). While bespoke commercial VR suites are available, in this work, we present a freely available software pipeline for visualising protein structures through VR. New consumer hardware, such as the HTC Vive and the Oculus Rift utilized in this study, are available at reasonable prices. As an instructive example, we have combined VR visualization with fast algorithms for simulating intramolecular motions of protein flexibility, in an effort to further improve structure-led drug design by exposing molecular interactions that might be hidden in the less informative static models. This is a paradigmatic test case scenario for many similar applications in computer-aided molecular studies and design.

  18. Driver Behavior and Performance with Augmented Reality Pedestrian Collision Warning: An Outdoor User Study.

    Science.gov (United States)

    Kim, Hyungil; Gabbard, Joseph L; Anon, Alexandre Miranda; Misu, Teruhisa

    2018-04-01

    This article investigates the effects of visual warning presentation methods on human performance in augmented reality (AR) driving. An experimental user study was conducted in a parking lot where participants drove a test vehicle while braking for any cross traffic with assistance from AR visual warnings presented on a monoscopic and volumetric head-up display (HUD). Results showed that monoscopic displays can be as effective as volumetric displays for human performance in AR braking tasks. The experiment also demonstrated the benefits of conformal graphics, which are tightly integrated into the real world, such as their ability to guide drivers' attention and their positive consequences on driver behavior and performance. These findings suggest that conformal graphics presented via monoscopic HUDs can enhance driver performance by leveraging the effectiveness of monocular depth cues. The proposed approaches and methods can be used and further developed by future researchers and practitioners to better understand driver performance in AR as well as inform usability evaluation of future automotive AR applications.

  19. Mobile Augmented Reality: A Tool for Effective Tourism Interpretation in Enhancing Tourist Experience at Urban Tourism Destination

    Directory of Open Access Journals (Sweden)

    Nur Shuhadah Mohd

    2015-09-01

    Full Text Available The formation of tourism experience is frequently subject to the complexity of individual tourists' psychographic factors, which leads to vast differences in the end experiences formed by respective tourists. Moreover, travelling is highly subject to environmental fuzziness, and the issue of geographical consciousness may interfere with tourists' emotions and influence the formation of this experience. The evolution and advancement of mobile technologies have been harnessed to improve the way humans interact with the surrounding environment. Within this context, mobile augmented reality (AR) technology is perceived as capable of narrowing the gap between the formation of pleasant experience and the issue of geographical consciousness, thus transforming the way tourists interact with the destination. Accordingly, this conceptual paper attempts to understand the effectiveness of mobile augmented reality in enhancing tourists' travel experience at the destination. In relation to this aim, the study clarifies the mechanism and usability of mobile augmented reality with respect to its capability to improve tourism interpretation, and explores the influence of utilizing this technology on the tourism experience. A critical review of existing literature relevant to the research area was conducted to understand the extent of the impact of mobile AR on tourists and experience formation. The findings reveal that AR's capability to merge virtual information with the real-world environment through mobile devices can create a more dynamic interaction between tourists and the surrounding environment.

  20. What do virtual reality tools bring to child and adolescent psychiatry?

    Science.gov (United States)

    Bioulac, S; de Sevin, E; Sagaspe, P; Claret, A; Philip, P; Micoulaud-Franchi, J A; Bouvard, M P

    2018-06-01

    Virtual reality is a relatively new technology that enables individuals to immerse themselves in a virtual world. It offers several advantages, including a more realistic, lifelike environment that may allow subjects to "forget" they are being assessed, allow better participation, and increase generalization of learning. Moreover, a virtual reality system can provide multimodal stimuli, such as visual and auditory stimuli, and can also be used to evaluate the patient's multimodal integration and to aid rehabilitation of cognitive abilities. Virtual reality has been used to treat various psychiatric disorders in adults (phobic anxiety disorders, post-traumatic stress disorder, eating disorders, addictions…), and its efficacy is supported by numerous studies. Similar research for children and adolescents is lagging behind. This may be particularly beneficial to children, who often show great interest and considerable success in computer, console, or videogame tasks. This article presents the main studies that have used virtual reality with children and adolescents suffering from psychiatric disorders. The use of virtual reality to treat anxiety disorders in adults is gaining popularity, and its efficacy is supported by various studies; most attest to the significant efficacy of virtual reality exposure therapy (or in virtuo exposure). In children, studies have covered arachnophobia, social anxiety, and school refusal phobia. Despite the limited number of studies, results are very encouraging for the treatment of anxiety disorders. Several studies have reported the clinical use of virtual reality technology for children and adolescents with autism spectrum disorders (ASD). Extensive research has proven the efficiency of such technologies as support tools for therapy, with work focused on communication, learning, and social imitation skills. Virtual reality is also well accepted by subjects with ASD. The virtual environment offers

  1. Visual cues in low-level flight - Implications for pilotage, training, simulation, and enhanced/synthetic vision systems

    Science.gov (United States)

    Foyle, David C.; Kaiser, Mary K.; Johnson, Walter W.

    1992-01-01

    This paper reviews some of the sources of visual information that are available in the out-the-window scene and describes how these visual cues are important for routine pilotage and training, as well as the development of simulator visual systems and enhanced or synthetic vision systems for aircraft cockpits. It is shown how these visual cues may change or disappear under environmental or sensor conditions, and how the visual scene can be augmented by advanced displays to capitalize on the pilot's excellent ability to extract visual information from the visual scene.

  2. Visualization of Disciplinary Profiles: Enhanced Science Overlay Maps

    Directory of Open Access Journals (Sweden)

    Stephen Carley

    2017-08-01

    Full Text Available Purpose: The purpose of this study is to modernize previous work on science overlay maps by updating the underlying citation matrix, generating new clusters of scientific disciplines, enhancing visualizations, and providing more accessible means for analysts to generate their own maps. Design/methodology/approach: We use the combined set of 2015 Journal Citation Reports for the Science Citation Index (n of journals = 8,778) and the Social Sciences Citation Index (n = 3,212), for a total of 11,365 journals. The set of Web of Science Categories in the Science Citation Index and the Social Sciences Citation Index increased from 224 in 2010 to 227 in 2015. Using dedicated software, a matrix of 227 × 227 cells is generated on the basis of whole-number citation counting. We normalize this matrix using the cosine function. We first develop the citing-side, cosine-normalized map using 2015 data and VOSviewer visualization with default parameter values. A routine for making overlays on the basis of the map ("wc15.exe") is available at http://www.leydesdorff.net/wc15/index.htm. Findings: Findings appear in the form of visuals throughout the manuscript. In Figures 1–9 we provide basemaps of science and science overlay maps for a number of companies, universities, and technologies. Research limitations: As Web of Science Categories change and/or are updated, so is the need to update the routine we provide. Also, to apply the routine, users need access to the Web of Science. Practical implications: Visualization of science overlay maps is now more accurate and true to the 2015 Journal Citation Reports than was the case with the previous version of the routine advanced in our paper. Originality/value: The routine we advance allows users to visualize science overlay maps in VOSviewer using data from more recent Journal Citation Reports.
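
    One common reading of "normalizing the citation matrix with the cosine function" is Salton's cosine between the citing profiles of two categories: the dot product of their citing vectors divided by the product of the vector norms. The sketch below shows that normalization on a toy matrix; it is an illustration only, not the authors' wc15.exe routine.

```python
# Salton's cosine between citing profiles:
# cos(i, j) = <row_i, row_j> / (||row_i|| * ||row_j||).
import numpy as np

citations = np.array([[120.,  30.,   5.],     # toy citing matrix,
                      [ 25., 200.,  40.],     # rows = citing categories,
                      [  2.,  35.,  90.]])    # columns = cited categories

norms = np.linalg.norm(citations, axis=1)
cosine_matrix = (citations @ citations.T) / np.outer(norms, norms)

print(np.round(cosine_matrix, 3))             # symmetric, ones on the diagonal
```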

  3. Universities’ visual image and Internet communication

    OpenAIRE

    Okushova Gulnafist; Stakhovskaya Yuliya; Sharaev Pavel

    2016-01-01

    Universities of the 21st century are built on digital walls and on the Internet foundation. Their "real virtuality", in M. Castells' terms, is represented by information and communication flows that reflect various areas: education, research, culture, leisure, and others. The visual image of a university is the bridge that connects its physical and digital reality and identifies it within the information flow on the Internet. Visual image identification on the Internet and the function that the visual...

  4. War-Fighting in the Early 21st Century: A Remote-Controlled, Robotic, Robust, Size-Reduced, Virtual-Reality Paradigm

    Science.gov (United States)

    2006-11-27

    [Slide excerpts: miniaturization as another exponential trend; planetary gear; nanosystems bearings; respirocytes; computing in our clothing and eyeglasses; full-immersion visual-auditory virtual reality; augmented real reality; interaction with virtual personalities.]

  5. VRML metabolic network visualizer.

    Science.gov (United States)

    Rojdestvenski, Igor

    2003-03-01

    A successful data collection visualization should satisfy a set of many requirements: unification of diverse data formats, support for serendipity research, support of hierarchical structures, algorithmizability, vast information density, Internet-readiness, and others. Recently, virtual reality has made significant progress in engineering, architectural design, entertainment, and communication. We experiment with the possibility of using immersive, abstract, three-dimensional visualizations of metabolic networks. We present the trial Metabolic Network Visualizer software, which produces a graphical representation of a metabolic network as a VRML world from a formal description written in a simple SGML-type scripting language.
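
    As an illustration of producing a VRML world from a simple formal description of a network, the sketch below writes metabolites as spheres and reactions as connecting lines. The node layout and the input format are invented for the example and are not the Visualizer's SGML-type scripting language.

```python
# Write a minimal VRML 2.0 world for a toy metabolic network:
# metabolites become spheres, reactions become line segments between them.
nodes = {"glucose": (0, 0, 0), "G6P": (3, 0, 0), "F6P": (6, 1, 0)}   # invented layout
edges = [("glucose", "G6P"), ("G6P", "F6P")]

def sphere(name, pos):
    x, y, z = pos
    return (f"Transform {{ translation {x} {y} {z}\n"
            f"  children Shape {{\n"
            f"    appearance Appearance {{ material Material {{ diffuseColor 0.2 0.6 1 }} }}\n"
            f"    geometry Sphere {{ radius 0.4 }} }} }}  # {name}\n")

def line(a, b):
    ax, ay, az = nodes[a]
    bx, by, bz = nodes[b]
    return ("Shape { geometry IndexedLineSet {\n"
            f"  coord Coordinate {{ point [ {ax} {ay} {az}, {bx} {by} {bz} ] }}\n"
            "  coordIndex [ 0, 1, -1 ] } }\n")

with open("network.wrl", "w") as f:
    f.write("#VRML V2.0 utf8\n")
    for name, pos in nodes.items():
        f.write(sphere(name, pos))
    for a, b in edges:
        f.write(line(a, b))
```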

  6. Virtual reality-based simulation system for nuclear and radiation safety SuperMC/RVIS

    Energy Technology Data Exchange (ETDEWEB)

    He, T.; Hu, L.; Long, P.; Shang, L.; Zhou, S.; Yang, Q.; Zhao, J.; Song, J.; Yu, S.; Cheng, M.; Hao, L., E-mail: liqin.hu@fds.org.cn [Chinese Academy of Sciences, Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Hefei, Anhu (China)

    2015-07-01

    Suggested work scenarios in radiation environments need to be iteratively optimized according to the ALARA principle. Based on Virtual Reality (VR) technology and a high-precision whole-body computational voxel phantom, a virtual reality-based simulation system for nuclear and radiation safety named SuperMC/RVIS has been developed for organ dose assessment and ALARA evaluation of work scenarios in radiation environments. The system architecture, ALARA evaluation strategy, advanced visualization methods and virtual reality technology used in SuperMC/RVIS are described. A case is presented to show its dose assessment and interactive simulation capabilities. (author)
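
    For the ALARA comparison of work scenarios, the essential bookkeeping is accumulating dose along a planned path (local dose rate times dwell time at each waypoint) and preferring the scenario with the lowest total. The sketch below illustrates only that arithmetic with invented dose rates; it does not reproduce SuperMC/RVIS's transport calculations or voxel phantom dosimetry.

```python
# Toy ALARA bookkeeping: total dose of a work scenario = sum over waypoints of
# (local dose rate in mSv/h) * (dwell time in h). All numbers are invented.
def scenario_dose(waypoints):
    """waypoints: list of (dose_rate_mSv_per_h, dwell_time_h) pairs."""
    return sum(rate * hours for rate, hours in waypoints)

scenarios = {
    "route A": [(0.8, 0.10), (2.5, 0.05), (0.3, 0.20)],
    "route B": [(0.8, 0.10), (1.1, 0.15), (0.3, 0.20)],
}

for name, waypoints in scenarios.items():
    print(f"{name}: {scenario_dose(waypoints):.3f} mSv")

best = min(scenarios, key=lambda name: scenario_dose(scenarios[name]))
print("ALARA-preferred scenario:", best)
```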

  7. Virtual reality-based simulation system for nuclear and radiation safety SuperMC/RVIS

    International Nuclear Information System (INIS)

    He, T.; Hu, L.; Long, P.; Shang, L.; Zhou, S.; Yang, Q.; Zhao, J.; Song, J.; Yu, S.; Cheng, M.; Hao, L.

    2015-01-01

    Suggested work scenarios in radiation environments need to be iteratively optimized according to the ALARA principle. Based on Virtual Reality (VR) technology and a high-precision whole-body computational voxel phantom, a virtual reality-based simulation system for nuclear and radiation safety named SuperMC/RVIS has been developed for organ dose assessment and ALARA evaluation of work scenarios in radiation environments. The system architecture, ALARA evaluation strategy, advanced visualization methods and virtual reality technology used in SuperMC/RVIS are described. A case is presented to show its dose assessment and interactive simulation capabilities. (author)

  8. Visual prostheses: The enabling technology to give sight to the blind

    Directory of Open Access Journals (Sweden)

    Mohammad Hossein Maghami

    2014-01-01

    Full Text Available Millions of patients are either slowly losing their vision or are already blind due to retinal degenerative diseases such as retinitis pigmentosa (RP) and age-related macular degeneration (AMD), or because of accidents or injuries. Employment of artificial means to treat extreme vision impairment has come closer to reality during the past few decades. Currently, many research groups work towards effective solutions to restore a rudimentary sense of vision to the blind. Aside from the efforts being put into replacing damaged parts of the retina by engineered living tissues or microfabricated photoreceptor arrays, implantable electronic microsystems, referred to as visual prostheses, are also sought as promising solutions to restore vision. From a functional point of view, visual prostheses receive image information from the outside world and deliver it to the natural visual system, enabling the subject to receive a meaningful perception of the image. This paper provides an overview of technical design aspects and clinical test results of visual prostheses, highlights past and recent progress in realizing chronic high-resolution visual implants, and discusses some technical challenges confronted when trying to enhance the functional quality of such devices.

  9. Determining the Effectiveness of Visual Input Enhancement across Multiple Linguistic Cues

    Science.gov (United States)

    Comeaux, Ian; McDonald, Janet L.

    2018-01-01

    Visual input enhancement (VIE) increases the salience of grammatical forms, potentially facilitating acquisition through attention mechanisms. Native English speakers were exposed to an artificial language containing four linguistic cues (verb agreement, case marking, animacy, word order), with morphological cues either unmarked, marked in the…

  10. A Survey on Applications of Augmented Reality

    OpenAIRE

    Andrea Sanna; Federico Manuri

    2016-01-01

    The term Augmented Reality (AR) refers to a set of technologies and devices able to enhance and improve human perception, thus bridging the gap between real and virtual space. Physical and artificial objects are mixed together in a hybrid space where the user can move without constraints. This mediated reality is spread in our everyday life: work, study, training, relaxation, time spent traveling are just some of the moments in which you can use AR applications. This paper aims to provide an o...

  11. Sensation of presence and cybersickness in applications of virtual reality for advanced rehabilitation

    OpenAIRE

    Kiryu, Tohru; So, Richard HY

    2007-01-01

    Abstract Around three years ago, in the special issue on augmented and virtual reality in rehabilitation, the topic of simulator sickness was briefly discussed in relation to vestibular rehabilitation. Simulator sickness with virtual reality applications has also been referred to as visually induced motion sickness or cybersickness. Recently, studies on cybersickness have been reported in entertainment, training, gaming, and medical environments in several journals. Virtual stimuli can enlarge se...

  12. Augmented reality-guided neurosurgery: accuracy and intraoperative application of an image projection technique.

    Science.gov (United States)

    Besharati Tabrizi, Leila; Mahvash, Mehran

    2015-07-01

    An augmented reality system has been developed for image-guided neurosurgery to project images with regions of interest onto the patient's head, skull, or brain surface in real time. The aim of this study was to evaluate system accuracy and to perform the first intraoperative application. Images of segmented brain tumors in different localizations and sizes were created in 10 cases and were projected to a head phantom using a video projector. Registration was performed using 5 fiducial markers. After each registration, the distance of the 5 fiducial markers from the visualized tumor borders was measured on the virtual image and on the phantom. The difference was considered a projection error. Moreover, the image projection technique was intraoperatively applied in 5 patients and was compared with a standard navigation system. Augmented reality visualization of the tumors succeeded in all cases. The mean time for registration was 3.8 minutes (range 2-7 minutes). The mean projection error was 0.8 ± 0.25 mm. There were no significant differences in accuracy according to the localization and size of the tumor. Clinical feasibility and reliability of the augmented reality system could be proved intraoperatively in 5 patients (projection error 1.2 ± 0.54 mm). The augmented reality system is accurate and reliable for the intraoperative projection of images to the head, skull, and brain surface. The ergonomic advantage of this technique improves the planning of neurosurgical procedures and enables the surgeon to use direct visualization for image-guided neurosurgery.
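
    The reported projection error can be read as the average disagreement between marker-to-tumour-border distances measured on the virtual image and on the phantom. A small sketch of that calculation follows, with invented example distances; it is not the authors' measurement software.

```python
# Mean projection error: average absolute difference between each fiducial
# marker's distance to the projected tumour border (virtual image) and the
# corresponding distance measured on the phantom. Distances in mm, invented.
import numpy as np

virtual_mm = np.array([12.4, 8.9, 15.2, 10.1, 7.6])   # measured on the virtual image
phantom_mm = np.array([11.8, 9.6, 14.5, 10.9, 8.3])   # measured on the phantom

errors = np.abs(virtual_mm - phantom_mm)
print(f"projection error: {errors.mean():.2f} ± {errors.std(ddof=1):.2f} mm")
```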

  13. Aplikasi Virtual Reality Media Pembelajaran Sistem Tata Surya

    Directory of Open Access Journals (Sweden)

    I Putu Astya Prayudha

    2017-08-01

    Full Text Available The solar system has been taught since primary school. Its learning materials are generally explained on blackboards and in books, and the resulting lack of visualization becomes an obstacle in the learning process. The Virtual Reality Solar System Learning Media application developed in this study aims to facilitate learning by adding visualization to the delivery of material related to the solar system. The application is designed to combine entertainment and knowledge: users interact with virtual environments and view the planets and planet information in the solar system in virtual mode. The application delivers text and voice content to provide knowledge to users, such as each planet's distance from the Sun, diameter, layers, and constituents. Users agreed that the Virtual Reality Solar System Learning Media application effectively facilitates learning related to the solar system, as evidenced by a questionnaire result of 60% agreement on the content aspect.

  14. Speculations on the representation of architecture in virtual reality

    DEFF Research Database (Denmark)

    Hermund, Anders; Klint, Lars; Bundgård, Ture Slot

    2017-01-01

    This paper discusses the present and future possibilities of representation models of architecture in new media such as virtual reality, seen in the broader context of tradition, perception, and neurology. Through comparative studies of real and virtual scenarios using eye tracking, the paper discusses whether the constantly evolving toolset for architectural representation has in itself changed the core values of architecture, or whether it is rather the level of skilful application of technology that affects architecture and its quality. It is easy to contemplate virtual reality as an extension to the visual field of perception. However, this should not necessarily imply an acceptance of the dominance of vision over the other senses, and the much-criticized retinal architecture with its inherent loss of plasticity. Recent neurology studies indicate that 3D representation models in virtual reality are less demanding on the brain's working memory than 3D models seen on flat two-dimensional screens. This paper suggests that virtual reality representational architectural models can, if used correctly, significantly improve the imaginative role of architectural representation.

  15. Proprioception rehabilitation training system for stroke patients using virtual reality technology.

    Science.gov (United States)

    Kim, Sun I; Song, In-Ho; Cho, Sangwoo; Kim, In Young; Ku, Jeonghun; Kang, Youn Joo; Jang, Dong Pyo

    2013-01-01

    We investigated a virtual reality (VR) proprioceptive rehabilitation system that can manipulate the visual feedback of the upper limb during training and allows training that relies on proprioceptive feedback only. Virtual environments were designed so that visual feedback could be switched on and off during upper-limb training. Two types of VR training tasks were designed to evaluate the effect of proprioception-focused training compared with training with visual feedback. To evaluate the developed proprioception-feedback virtual environment (PFVE) system, we recruited ten stroke patients (age: 54.7 ± 7.83 years; time since onset: 3.29 ± 3.83 years). All patients performed the PFVE task three times, to check the improvement of proprioceptive function: just before the training sessions, after one week of training, and after all training. In a comparison between the FMS score and PFVE performance, the FMS score had a significant relationship with the error distance (r = -.662, n = 10, p = .037) and the total movement distance (r = -.726, n = 10, p = .018) in the PFVE. Comparing the training effect between the virtual environment with visual feedback (VFVE) and the one with proprioception feedback, the click count, error distance and total error distance were reduced more in the PFVE than in the VFVE (click count: p = 0.005; error distance: p = 0.001; total error distance: p = 0.007). This suggests that proprioceptive feedback, rather than visual feedback, could be an effective means of enhancing motor control during rehabilitation training. The developed VR system for rehabilitation was verified in that stroke patients improved motor control after VR proprioception-feedback training.

  16. Short-term visual deprivation does not enhance passive tactile spatial acuity.

    Directory of Open Access Journals (Sweden)

    Michael Wong

    Full Text Available An important unresolved question in sensory neuroscience is whether, and if so with what time course, tactile perception is enhanced by visual deprivation. In three experiments involving 158 normally sighted human participants, we assessed whether tactile spatial acuity improves with short-term visual deprivation over periods ranging from under 10 to over 110 minutes. We used an automated, precisely controlled two-interval forced-choice grating orientation task to assess each participant's ability to discern the orientation of square-wave gratings pressed against the stationary index finger pad of the dominant hand. A two-down one-up staircase (Experiment 1) or a Bayesian adaptive procedure (Experiments 2 and 3) was used to determine the groove width of the grating whose orientation each participant could reliably discriminate. The experiments consistently showed that tactile grating orientation discrimination does not improve with short-term visual deprivation. In fact, we found that tactile performance degraded slightly but significantly upon a brief period of visual deprivation (Experiment 1) and did not improve over periods of up to 110 minutes of deprivation (Experiments 2 and 3). The results additionally showed that grating orientation discrimination tends to improve upon repeated testing, and confirmed that women significantly outperform men on the grating orientation task. We conclude that, contrary to two recent reports but consistent with an earlier literature, passive tactile spatial acuity is not enhanced by short-term visual deprivation. Our findings have important theoretical and practical implications. On the theoretical side, the findings set limits on the time course over which neural mechanisms such as crossmodal plasticity may operate to drive sensory changes; on the practical side, the findings suggest that researchers who compare tactile acuity of blind and sighted participants should not blindfold the sighted participants.

  17. Enhancing and Transforming Global Learning Communities with Augmented Reality

    Science.gov (United States)

    Frydenberg, Mark; Andone, Diana

    2018-01-01

    Augmented and virtual reality applications bring new insights to real world objects and scenarios. This paper shares research results of the TalkTech project, an ongoing study investigating the impact of learning about new technologies as members of global communities. This study shares results of a collaborative learning project about augmented…

  18. Virtual Reality Environments For The Petroleum Industry

    International Nuclear Information System (INIS)

    Diembacher, F. X.

    2003-01-01

    Large-screen immersive visualization has gained enormous momentum in the last few years. The oil industry has quickly appreciated the value that virtual reality centers bring to the practising engineer and to asset teams. While early concepts emphasized visualization, people soon realized that virtual reality rooms offer more: they are places where people come together, places where people want to collaborate. Subsequently these environments were also called Decisionariums, Collaboration Centers, Visionariums, etc. GeoQuest branded these rooms iCenters, a term which encompasses all the potential usages of this environment: i stands for information, internet, interaction, interpretation, impact, etc. iCenters are used for interpretation and analysis of complex models (e.g. 3D seismic interpretation, viewing of simulation models with hundreds of thousands of cells) and for multi-disciplinary working (e.g. planning of advanced wells, typically for (deep) offshore environments). The number of such centers currently increases by several hundred percent; some are being built in Nigeria and more are being planned. This paper presents concepts for building iCenters, examples of how oil companies around the world and in Nigeria use these environments to foster collaboration and reduce costs, and the latest developments in the area of remote collaboration (i.e., connected iCenters).

  19. E-virtual reality exposure therapy in acrophobia: A pilot study.

    Science.gov (United States)

    Levy, Fanny; Leboucher, Pierre; Rautureau, Gilles; Jouvent, Roland

    2016-06-01

    Virtual reality therapy is already used for anxiety disorders as an alternative to in vivo and in imagino exposure. To our knowledge, however, no one has yet proposed using remote virtual reality (e-virtual reality). The aim of the present study was to assess e-virtual reality in an acrophobic population. Six individuals with acrophobia each underwent six sessions (two sessions per week) of virtual reality exposure therapy. The first three were remote sessions, while the last three were traditional sessions in the physical presence of the therapist. Anxiety (STAI form Y-A, visual analog scale, heart rate), presence, technical difficulties and therapeutic alliance (Working Alliance Inventory) were measured. In order to control the conditions in which these measures were made, all the sessions were conducted in hospital. None of the participants dropped out. The remote sessions were well accepted. None of the participants verbalized reluctance. No major technical problems were reported. None of the sessions were cancelled or interrupted because of software incidents. Measures (anxiety, presence, therapeutic alliance) were comparable across the two conditions. e-Virtual reality can therefore be used to treat acrophobic disorders. However, control studies are needed to assess online feasibility, therapeutic effects and the mechanisms behind online presence. © The Author(s) 2015.

  20. Utilization of virtual reality for reading the superheated emulsion detector

    International Nuclear Information System (INIS)

    Santos Sobrinho, Jose C.; Santo, Andre C.E.; Pereira, Claudio M.N.A.; Mol, Antonio C.A.

    2013-01-01

    This paper presents a Virtual Reality-based method for reading the Superheated Emulsion Detector (Bubble Detector). The proposed method is an alternative both to the automatic counters offered by detector manufacturers, which have a relatively high cost (acquisition, maintenance and periodic calibration), and to visual counting, which is only advantageous when the number of bubbles is small. The method starts by collecting digital images of the detector to obtain an image sequence from which an animation is created and displayed in Virtual Reality. To this end, a virtual environment for visualizing and manipulating the virtual detector is modeled with the OpenGL graphics library. The virtual environment is then calibrated to ensure that the model corresponds to reality. The reading of the detector (bubble count) is made visually by the user with the assistance of stereo vision and a 3D cursor for navigating, marking and counting the bubbles. An auxiliary display shows the three-dimensional cursor position, the number of bubbles marked and the measured dose. Testing showed the following results: better precision in counting bubbles than the 10% reported by the manufacturer of the automatic reader; a low-cost tool that requires no recurrent calibration during maintenance or over its lifetime; mitigation of the difficulty of manual counting when the number of bubbles is large; and ease of use, since it can be operated by a non-specialist user. (author)
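    The auxiliary display described above reports both the number of marked bubbles and the measured dose, but the conversion is not given in the record; for superheated-emulsion (bubble) detectors the dose is conventionally obtained from the bubble count through the detector's calibrated linear sensitivity, as in the sketch below. The sensitivity value and names are placeholders, not the calibration used in the paper.

```python
def dose_from_bubbles(bubble_count, sensitivity_bubbles_per_usv):
    """Estimate neutron dose from the number of bubbles marked by the user.

    Superheated-emulsion (bubble) detectors are normally supplied with a
    linear calibration factor (bubbles per unit dose); the value passed in
    here is a placeholder, not the calibration of the detector in the paper.
    """
    return bubble_count / sensitivity_bubbles_per_usv

# e.g. 42 bubbles marked with the 3D cursor on a detector rated 3.3 bubbles/uSv
print(f"{dose_from_bubbles(42, 3.3):.1f} uSv")
```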

  1. Utilization of virtual reality for reading the superheated emulsion detector

    Energy Technology Data Exchange (ETDEWEB)

    Santos Sobrinho, Jose C.; Santo, Andre C.E.; Pereira, Claudio M.N.A.; Mol, Antonio C.A., E-mail: volksparati@hotmail.com, E-mail: cotelli.andre@gmail.com, E-mail: cmnap@ien.gov.br, E-mail: mol@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2013-07-01

    This paper presents a Virtual Reality-based method for reading the Superheated Emulsion Detector (Bubble Detector). The proposed method is an alternative both to the automatic counters offered by detector manufacturers, which have a relatively high cost (acquisition, maintenance and periodic calibration), and to visual counting, which is only advantageous when the number of bubbles is small. The method starts by collecting digital images of the detector to obtain an image sequence from which an animation is created and displayed in Virtual Reality. To this end, a virtual environment for visualizing and manipulating the virtual detector is modeled with the OpenGL graphics library. The virtual environment is then calibrated to ensure that the model corresponds to reality. The reading of the detector (bubble count) is made visually by the user with the assistance of stereo vision and a 3D cursor for navigating, marking and counting the bubbles. An auxiliary display shows the three-dimensional cursor position, the number of bubbles marked and the measured dose. Testing showed the following results: better precision in counting bubbles than the 10% reported by the manufacturer of the automatic reader; a low-cost tool that requires no recurrent calibration during maintenance or over its lifetime; mitigation of the difficulty of manual counting when the number of bubbles is large; and ease of use, since it can be operated by a non-specialist user. (author)

  2. Utilizing virtual and augmented reality for educational and clinical enhancements in neurosurgery.

    Science.gov (United States)

    Pelargos, Panayiotis E; Nagasawa, Daniel T; Lagman, Carlito; Tenn, Stephen; Demos, Joanna V; Lee, Seung J; Bui, Timothy T; Barnette, Natalie E; Bhatt, Nikhilesh S; Ung, Nolan; Bari, Ausaf; Martin, Neil A; Yang, Isaac

    2017-01-01

    Neurosurgery has undergone a technological revolution over the past several decades, from trephination to image-guided navigation. Advancements in virtual reality (VR) and augmented reality (AR) represent some of the newest modalities being integrated into neurosurgical practice and resident education. In this review, we present a historical perspective of the development of VR and AR technologies, analyze its current uses, and discuss its emerging applications in the field of neurosurgery. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Exploring Urban Environments Using Virtual and Augmented Reality

    OpenAIRE

    Stelios Papakonstantinou; Vesna Brujic-Okretic; Fotis Liarokapis

    2007-01-01

    In this paper, we propose the use of a specific system architecture, based on mobile devices, for navigation in urban environments. The aim of this work is to assess how virtual and augmented reality interface paradigms can provide enhanced location-based services using real-time techniques in the context of these two different technologies. The virtual reality interface is based on a faithful graphical representation of the localities of interest, coupled with sensory information on the location ...

  4. Creating Audio Visual Dialogue Task as Students’ Self Assessment to Enhance Their Speaking Ability

    Directory of Open Access Journals (Sweden)

    Novia Trisanti

    2017-04-01

    Full Text Available This study gives an overview of employing an audio-visual dialogue task as a student creativity task and self-assessment in a tertiary-level EFL speaking class to enhance students' speaking ability. The qualitative research was conducted in one of the speaking classes at the English Department, Semarang State University, Central Java, Indonesia. The results, as seen from the self-assessment rubric, show that the oral performance through audio-visual recorded tasks done by the students as their self-assessment gave positive evidence. The audio-visual dialogue task can be very beneficial, since it can motivate students' learning and increase their learning experiences. The self-assessment can be a valuable additional means of improving their speaking ability, since it is one of the motives that drive self-evaluation, along with self-verification and self-enhancement.

  5. Virtual reality visual feedback for hand-controlled scanning probe microscopy manipulation of single molecules

    Directory of Open Access Journals (Sweden)

    Philipp Leinen

    2015-11-01

    Full Text Available Controlled manipulation of single molecules is an important step towards the fabrication of single molecule devices and nanoscale molecular machines. Currently, scanning probe microscopy (SPM) is the only technique that facilitates direct imaging and manipulations of nanometer-sized molecular compounds on surfaces. The technique of hand-controlled manipulation (HCM) introduced recently in Beilstein J. Nanotechnol. 2014, 5, 1926–1932 simplifies the identification of successful manipulation protocols in situations when the interaction pattern of the manipulated molecule with its environment is not fully known. Here we present a further technical development that substantially improves the effectiveness of HCM. By adding Oculus Rift virtual reality goggles to our HCM set-up we provide the experimentalist with 3D visual feedback that displays the currently executed trajectory and the position of the SPM tip during manipulation in real time, while simultaneously plotting the experimentally measured frequency shift (Δf) of the non-contact atomic force microscope (NC-AFM) tuning fork sensor as well as the magnitude of the electric current (I) flowing between the tip and the surface. The advantages of the set-up are demonstrated by applying it to the model problem of the extraction of an individual PTCDA molecule from its hydrogen-bonded monolayer grown on a Ag(111) surface.

  6. Multi-Sensory-Motor Research: Investigating Auditory, Visual, and Motor Interaction in Virtual Reality Environments

    Directory of Open Access Journals (Sweden)

    Thorsten Kluss

    2011-10-01

    Full Text Available Perception in natural environments is inseparably linked to motor action. In fact, we consider action an essential component of perceptual representation. But these representations are inherently difficult to investigate: traditional experimental setups are limited by the lack of flexibility in manipulating spatial features. To overcome these problems, virtual reality (VR) experiments seem to be a feasible alternative, but these setups typically lack ecological realism due to the use of "unnatural" interface devices (joystick). Thus, we propose an experimental apparatus which combines multisensory perception and action in an ecologically realistic way. The basis is a 10-foot hollow sphere (VirtuSphere) placed on a platform that allows free rotation. A subject inside can walk in any direction for any distance immersed in the virtual environment. Both the rotation of the sphere and the movement of the subject's head are tracked to process the subject's view within the VR environment presented on a head-mounted display. Moreover, auditory features are dynamically processed, taking greatest care of exact alignment of sound sources and visual objects using ambisonic-encoded audio processed by an HRTF filter bank. We present empirical data that confirm the ecological realism of this setup and discuss its suitability for multi-sensory-motor research.

  7. Virtual reality visual feedback for hand-controlled scanning probe microscopy manipulation of single molecules.

    Science.gov (United States)

    Leinen, Philipp; Green, Matthew F B; Esat, Taner; Wagner, Christian; Tautz, F Stefan; Temirov, Ruslan

    2015-01-01

    Controlled manipulation of single molecules is an important step towards the fabrication of single molecule devices and nanoscale molecular machines. Currently, scanning probe microscopy (SPM) is the only technique that facilitates direct imaging and manipulations of nanometer-sized molecular compounds on surfaces. The technique of hand-controlled manipulation (HCM) introduced recently in Beilstein J. Nanotechnol. 2014, 5, 1926-1932 simplifies the identification of successful manipulation protocols in situations when the interaction pattern of the manipulated molecule with its environment is not fully known. Here we present a further technical development that substantially improves the effectiveness of HCM. By adding Oculus Rift virtual reality goggles to our HCM set-up we provide the experimentalist with 3D visual feedback that displays the currently executed trajectory and the position of the SPM tip during manipulation in real time, while simultaneously plotting the experimentally measured frequency shift (Δf) of the non-contact atomic force microscope (NC-AFM) tuning fork sensor as well as the magnitude of the electric current (I) flowing between the tip and the surface. The advantages of the set-up are demonstrated by applying it to the model problem of the extraction of an individual PTCDA molecule from its hydrogen-bonded monolayer grown on Ag(111) surface.

  8. Making virtual reality a reality: visualization rooms revolutionize the search for oil and gas

    Energy Technology Data Exchange (ETDEWEB)

    Smith, M.

    2001-11-01

    Visualization chambers, state-of-the-art versions of the 3-D cinema films of the 1950s, made possible with the arrival of supercomputers, are popping up in the offices of most major-league explorers in Calgary, Houston and elsewhere. Combining rapid-fire networking, powerful computers, integrated software and digital projection systems, visualization rooms display seismic and other data in images that appear to lift off the screen and float in front of it. The display allows participants to work with stereoscopic subsurface simulations in well-lit rooms where they can reference notes, printouts and drawings; enables the exploration team to gather close to the screen for discussion and inspection of minute details; improves the ability to understand huge data sets; speeds the process of arriving at effective drilling decisions; encourages and facilitates collaborative work among people of different disciplines, geologists, engineers, geophysicists, bringing them together in one place in front of a giant screen, where everyone can see the same data all at once. Various examples of the technology's successes are described. The technology does not come cheap; it may cost anywhere from $500,000 to $3 million for a visualization room, but considering that drilling a single well may cost up to $40 million, visualization technology is not considered to be a huge expense in terms of exploration.

  9. Visualization of Robotic Sensor Data with Augmented Reality

    OpenAIRE

    Thorstensen, Mathias Ciarlo

    2017-01-01

    To understand a robot's intent and behavior, a robot engineer must analyze data at the input and output, but also at all intermediary steps. This might require looking at a specific subset of the system, or a single data node in isolation. A range of different data formats can be used in the systems, and require visualization in different mediums; some are text based, and best visualized in a terminal, while other types must be presented graphically, in 2D or 3D. This often makes understandin...

  10. Enhanced visual short-term memory in action video game players.

    Science.gov (United States)

    Blacker, Kara J; Curby, Kim M

    2013-08-01

    Visual short-term memory (VSTM) is critical for acquiring visual knowledge and shows marked individual variability. Previous work has illustrated a VSTM advantage among action video game players (Boot et al. Acta Psychologica 129:387-398, 2008). A growing body of literature has suggested that action video game playing can bolster visual cognitive abilities in a domain-general manner, including abilities related to visual attention and the speed of processing, providing some potential bases for this VSTM advantage. In the present study, we investigated the VSTM advantage among video game players and assessed whether enhanced processing speed can account for this advantage. Experiment 1, using simple colored stimuli, revealed that action video game players demonstrate a similar VSTM advantage over nongamers, regardless of whether they are given limited or ample time to encode items into memory. Experiment 2, using complex shapes as the stimuli to increase the processing demands of the task, replicated this VSTM advantage, irrespective of encoding duration. These findings are inconsistent with a speed-of-processing account of this advantage. An alternative, attentional account, grounded in the existing literature on the visuo-cognitive consequences of video game play, is discussed.

  11. 1.5 T augmented reality navigated interventional MRI: paravertebral sympathetic plexus injections.

    Science.gov (United States)

    Marker, David R; U Thainual, Paweena; Ungi, Tamas; Flammang, Aaron J; Fichtinger, Gabor; Iordachita, Iulian I; Carrino, John A; Fritz, Jan

    2017-01-01

    The high contrast resolution and absent ionizing radiation of interventional magnetic resonance imaging (MRI) can be advantageous for paravertebral sympathetic nerve plexus injections. We assessed the feasibility and technical performance of MRI-guided paravertebral sympathetic injections utilizing augmented reality navigation and 1.5 T MRI scanner. A total of 23 bilateral injections of the thoracic (8/23, 35%), lumbar (8/23, 35%), and hypogastric (7/23, 30%) paravertebral sympathetic plexus were prospectively planned in twelve human cadavers using a 1.5 Tesla (T) MRI scanner and augmented reality navigation system. MRI-conditional needles were used. Gadolinium-DTPA-enhanced saline was injected. Outcome variables included the number of control magnetic resonance images, target error of the needle tip, punctures of critical nontarget structures, distribution of the injected fluid, and procedure length. Augmented-reality navigated MRI guidance at 1.5 T provided detailed anatomical visualization for successful targeting of the paravertebral space, needle placement, and perineural paravertebral injections in 46 of 46 targets (100%). A mean of 2 images (range, 1-5 images) were required to control needle placement. Changes of the needle trajectory occurred in 9 of 46 targets (20%) and changes of needle advancement occurred in 6 of 46 targets (13%), which were statistically not related to spinal regions (P = 0.728 and P = 0.86, respectively) and cadaver sizes (P = 0.893 and P = 0.859, respectively). The mean error of the needle tip was 3.9±1.7 mm. There were no punctures of critical nontarget structures. The mean procedure length was 33±12 min. 1.5 T augmented reality-navigated interventional MRI can provide accurate imaging guidance for perineural injections of the thoracic, lumbar, and hypogastric sympathetic plexus.

  12. Possible Application of Virtual Reality in Geography Teaching

    Directory of Open Access Journals (Sweden)

    Ivan Stojšić

    2017-03-01

    Full Text Available Virtual reality is a simulated three-dimensional environment created by hardware and software that provides the end-user with a realistic experience and the possibility of interaction. The benefits provided by immersive virtual reality in educational settings were recognised in past decades, but mass application did not follow owing to the limited state of development and high prices. Intensive development of new virtual reality platforms and devices began in the last few years with the Oculus Rift and accelerated in 2014 with the appearance of Google Cardboard. Today, for the first time in history, immersive virtual reality is available to millions of people. In mid-2015 Google launched the Expeditions Pioneer Program, aiming to make massive use of the Google Cardboard platform in education. Expeditions and other VR apps can enhance geography teaching and learning. The realistic experience gained from using virtual reality in teaching significantly surpasses what images and illustrations in a textbook can provide. Besides a literature review on the use of virtual reality in education, this paper suggests VR mobile apps that can be used together with Google Cardboard head-mounted displays (HMDs) in geography classes, emphasising advantages and disadvantages as well as possible obstacles that may arise in introducing immersive virtual reality into the educational process.

  13. Immersive Training Systems: Virtual Reality and Education and Training.

    Science.gov (United States)

    Psotka, Joseph

    1995-01-01

    Describes virtual reality (VR) technology and VR research on education and training. Focuses on immersion as the key added value of VR, analyzes cognitive variables connected to immersion, how it is generated in synthetic environments and its benefits. Discusses value of tracked, immersive visual displays over nonimmersive simulations. Contains 78…

  14. Multi-objective evolutionary optimization for constructing neural networks for virtual reality visual data mining: application to geophysical prospecting.

    Science.gov (United States)

    Valdés, Julio J; Barton, Alan J

    2007-05-01

    A method for the construction of virtual reality spaces for visual data mining using multi-objective optimization with genetic algorithms on nonlinear discriminant (NDA) neural networks is presented. Two neural network layers (the output and the last hidden) are used for the construction of simultaneous solutions for: (i) a supervised classification of data patterns and (ii) an unsupervised similarity structure preservation between the original data matrix and its image in the new space. A set of spaces is constructed from selected solutions along the Pareto front. This strategy represents a conceptual improvement over spaces computed by single-objective optimization. In addition, genetic programming (in particular gene expression programming) is used for finding analytic representations of the complex mappings generating the spaces (a composition of NDA and orthogonal principal components). The presented approach is domain independent and is illustrated via application to the geophysical prospecting of caves.
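    As a rough illustration of the two objectives described above and of the Pareto-front selection over candidate solutions, the sketch below pairs a classification-error term with a Sammon-style structure-preservation (stress) term and then filters non-dominated solutions. The `embed` and `classify` callables stand in for the two network layers and are assumptions; this is not the authors' NDA or genetic-algorithm implementation.

```python
import numpy as np
from scipy.spatial.distance import pdist

def objectives(embed, classify, X, y):
    """Two objectives to be minimised jointly: (i) classification error of the
    supervised output, and (ii) stress between pairwise distances in the
    original space and in its low-dimensional image (structure preservation)."""
    err = float(np.mean(classify(X) != y))
    d_orig, d_new = pdist(X), pdist(embed(X))
    stress = float(np.sum((d_orig - d_new) ** 2) / np.sum(d_orig ** 2))
    return err, stress

def pareto_front(points):
    """Indices of non-dominated solutions (minimisation on every objective)."""
    pts = np.asarray(points, dtype=float)
    front = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            front.append(i)
    return front

# Example objective evaluation with placeholder layers on random data.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
y = (X[:, 0] > 0).astype(int)
print(objectives(lambda X: X[:, :2], lambda X: (X[:, 0] > 0).astype(int), X, y))

# Toy candidate solutions as (classification error, stress) pairs.
population = [(0.10, 0.30), (0.05, 0.45), (0.20, 0.10), (0.12, 0.32)]
print(pareto_front(population))   # -> [0, 1, 2]; (0.12, 0.32) is dominated
```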

  15. The Impact of Virtual Reality on Chronic Pain.

    Science.gov (United States)

    Jones, Ted; Moore, Todd; Choo, James

    2016-01-01

    The treatment of chronic pain could benefit from additional non-opioid interventions. Virtual reality (VR) has been shown to be effective in decreasing procedural or acute pain, but to date there have been few studies on its use in chronic pain. The present study was an investigation of the impact of a virtual reality application for chronic pain. Thirty (30) participants with various chronic pain conditions were offered a five-minute session using a virtual reality application called Cool! Participants were asked about their pain using a 0-10 visual analog scale rating before the VR session, during the session and immediately after the session. They were also asked about immersion into the VR world and about possible side effects. Pain was reduced from pre-session to post-session by 33%. Pain was reduced from pre-session during the VR session by 60%. These changes were both statistically significant. All participants (100%) reported a decrease in pain to some degree between pre-session pain and during-session pain. The virtual reality experience was found here to provide a significant amount of pain relief. A head mounted display (HMD) was used with all subjects and no discomfort was experienced. Only one participant noted any side effects. VR seems to have promise as a non-opioid treatment for chronic pain and further investigation is warranted.

  16. Simulated Prism Therapy in Virtual Reality produces larger after-effects than standard prism exposure in normal healthy subject - Implications for Neglect Therapy

    DEFF Research Database (Denmark)

    Wilms, Inge Linda

    2018-01-01

    BACKGROUND: Virtual reality is an important area of exploration within computer-based cognitive rehabilitation of visual neglect. Virtual reality will allow for closer monitoring of patient behaviour during prism adaptation therapy and perhaps change the way we induce prismatic after-effects. OBJECTIVE: This study compares the effect of two different prism simulation conditions in virtual reality to a standard exposure to prism goggles after one session of Prism Adaptation Therapy in healthy subjects. METHOD: 20 healthy subjects were subjected to one session of prism adaptation therapy under ... training for rehabilitation of hemispatial attentional deficits such as visual neglect.

  17. A common path forward for the immersive visualization community

    Energy Technology Data Exchange (ETDEWEB)

    Eric A. Wernert; William R. Sherman; Patrick O' Leary; Eric Whiting

    2012-03-01

    Immersive visualization makes use of the medium of virtual reality (VR) - it is a subset of virtual reality focused on the application of VR technologies to scientific and information visualization. As the name implies, there is a particular focus on the physically immersive aspect of VR that more fully engages the perceptual and kinesthetic capabilities of the scientist, with the goal of producing greater insight. The immersive visualization community is uniquely positioned to address the analysis needs of the wide spectrum of domain scientists who are becoming increasingly overwhelmed by data. The outputs of computational science simulations and high-resolution sensors are creating a data deluge. Data is coming in faster than it can be analyzed, and there are countless opportunities for discovery that are missed as the data speeds by. By more fully utilizing the scientist's visual and other sensory systems, and by offering a more natural user interface with which to interact with computer-generated representations, immersive visualization offers great promise in taming this data torrent. However, increasing the adoption of immersive visualization in scientific research communities can only happen by simultaneously lowering the engagement threshold while raising the measurable benefits of adoption. Scientists' time spent immersed with their data will thus be rewarded with higher productivity, deeper insight, and improved creativity. Immersive visualization ties together technologies and methodologies from a variety of related but frequently disjoint areas, including hardware, software and human-computer interaction (HCI) disciplines. In many ways, hardware is a solved problem. There are well established technologies including large walk-in systems such as the CAVE™ and head-based systems such as the Wide-5™. The advent of new consumer-level technologies now enables an entirely new generation of immersive displays, with smaller footprints and costs

  18. Mixed-reality Learning Environments: What Happens When You Move from a Laboratory to a Classroom?

    OpenAIRE

    King, Barbara; Smith, Carmen Petrick

    2018-01-01

    The advent of motion-controlled technologies has unlocked new possibilities for body-based learning in the mathematics classroom. For example, mixed-reality learning environments allow students the opportunity to embody a mathematical concept while simultaneously being provided a visual interface that represents their movement. In the current study, we created a mixed-reality environment to help children learn about angle measurement, and we investigated similarities and differen...

  19. Virtual Reality As an Effective Simulation Tool for OSH Education on Robotized Workplace

    OpenAIRE

    Janak Miroslav; Cmorej Tomas; Vysocky Tomas; Kocisko Marek; Teliskova Monika

    2016-01-01

    In the last decade, virtual reality has become a huge trend in the field of visualization, not only of simple elements but also of complex devices, their actions and processes, rooms and entire areas. This contribution focuses on the possibilities of utilizing elements of virtual reality for educational purposes regarding potential employees and their OSH (Occupational Safety and Health) training at specialized workplaces. It points to the possibility of using the applications ...

  20. Using Augmented Reality Tools to Enhance Children's Library Services

    Science.gov (United States)

    Meredith, Tamara R.

    2015-01-01

    Augmented reality (AR) has been used and documented for a variety of commercial and educational purposes, and the proliferation of mobile devices has increased the average person's access to AR systems and tools. However, little research has been done in the area of using AR to supplement traditional library services, specifically for patrons aged…

  1. Augmented Reality: Advances in Diagnostic Imaging

    Directory of Open Access Journals (Sweden)

    David B. Douglas

    2017-11-01

    Full Text Available In recent years, advances in medical imaging have provided opportunities for enhanced diagnosis and characterization of diseases including cancer. The improved spatial resolution provides outstanding detail of intricate anatomical structures, but has challenged physicians on how to effectively and efficiently review the extremely large datasets of over 1000 images. Standard volume rendering attempts to tackle this problem as it provides a display of 3D information on a flat 2D screen, but it lacks depth perception and has a poor human–machine interface (HMI). Most recently, Augmented Reality/Virtual Reality (AR/VR) with depth 3-dimensional (D3D) imaging provides depth perception through binocular vision, head tracking for improved HMI and other key AR features. In this article, we will discuss current and future medical applications of AR including assessing breast cancer. We contend that leveraging AR technology may enhance diagnosis, save cost and improve patient care.

  2. Utilizing visual art to enhance the clinical observation skills of medical students.

    Science.gov (United States)

    Jasani, Sona K; Saks, Norma S

    2013-07-01

    Clinical observation is fundamental in practicing medicine, but these skills are rarely taught. Currently no evidence-based exercises/courses exist for medical student training in observation skills. The goal was to develop and teach a visual arts-based exercise for medical students, and to evaluate its usefulness in enhancing observation skills in clinical diagnosis. A pre- and posttest and evaluation survey were developed for a three-hour exercise presented to medical students just before starting clerkships. Students were provided with questions to guide discussion of both representational and non-representational works of art. Quantitative analysis revealed that the mean number of observations between pre- and posttests was not significantly different (n=70: 8.63 vs. 9.13, p=0.22). Qualitative analysis of written responses identified four themes: (1) use of subjective terminology, (2) scope of interpretations, (3) speculative thinking, and (4) use of visual analogies. Evaluative comments indicated that students felt the exercise enhanced both mindfulness and skills. Using visual art images with guided questions can train medical students in observation skills. This exercise can be replicated without specially trained personnel or art museum partnerships.

  3. Virtual reality technology and applications

    CERN Document Server

    Mihelj, Matjaž; Beguš, Samo

    2014-01-01

    As virtual reality expands from the imaginary worlds of science fiction and pervades every corner of everyday life, it is becoming increasingly important for students and professionals alike to understand the diverse aspects of this technology. This book aims to provide a comprehensive guide to the theoretical and practical elements of virtual reality, from the mathematical and technological foundations of virtual worlds to the human factors and the applications that enrich our lives: in the fields of medicine, entertainment, education and others. After providing a brief introduction to the topic, the book describes the kinematic and dynamic mathematical models of virtual worlds. It explores the many ways a computer can track and interpret human movement, then progresses through the modalities that make up a virtual world: visual, acoustic and haptic. It explores the interaction between the actual and virtual environments, as well as design principles of the latter. The book closes with an examination of diff...

  4. Pharmacological Mechanisms of Cortical Enhancement Induced by the Repetitive Pairing of Visual/Cholinergic Stimulation.

    Directory of Open Access Journals (Sweden)

    Jun-Il Kang

    Full Text Available Repetitive visual training paired with electrical activation of cholinergic projections to the primary visual cortex (V1) induces long-term enhancement of cortical processing in response to the visual training stimulus. To better determine the receptor subtypes mediating this effect, the selective pharmacological blockade of V1 nicotinic (nAChR), M1 and M2 muscarinic (mAChR) or GABAergic A (GABAAR) receptors was performed during the training session, and visual evoked potentials (VEPs) were recorded before and after training. The training session consisted of the exposure of awake, adult rats to an orientation-specific 0.12 CPD grating paired with an electrical stimulation of the basal forebrain for a duration of 1 week for 10 minutes per day. Pharmacological agents were infused intracortically during this period. The post-training VEP amplitude was significantly increased compared to the pre-training values for the trained spatial frequency and for adjacent spatial frequencies up to 0.3 CPD, suggesting a long-term increase of V1 sensitivity. This increase was totally blocked by the nAChR antagonist as well as by the M2 mAChR and GABAAR antagonists. Moreover, administration of the M2 mAChR antagonist also significantly decreased the amplitude of the control VEPs, suggesting a suppressive effect on cortical responsiveness. However, the M1 mAChR antagonist blocked the increase of the VEP amplitude only for the high spatial frequency (0.3 CPD), suggesting that the role of M1 was limited to the spread of the enhancement effect to a higher spatial frequency. More generally, all the drugs used blocked the VEP increase at 0.3 CPD. Further, use of each of the aforementioned receptor antagonists blocked training-induced changes in gamma and beta band oscillations. These findings demonstrate that visual training coupled with cholinergic stimulation improved perceptual sensitivity by enhancing cortical responsiveness in V1. This enhancement is mainly mediated by n

  5. The effects of coping style on virtual reality enhanced videogame distraction in children undergoing cold pressor pain.

    Science.gov (United States)

    Sil, Soumitri; Dahlquist, Lynnda M; Thompson, Caitlin; Hahn, Amy; Herbert, Linda; Wohlheiter, Karen; Horn, Susan

    2014-02-01

    This study sought to evaluate the effectiveness of virtual reality (VR) enhanced interactive videogame distraction for children undergoing experimentally induced cold pressor pain and examined the role of avoidant and approach coping style as a moderator of VR distraction effectiveness. Sixty-two children (6-13 years old) underwent a baseline cold pressor trial followed by two cold pressor trials in which interactive videogame distraction was delivered both with and without a VR helmet in counterbalanced order. As predicted, children demonstrated significant improvement in pain tolerance during both interactive videogame distraction conditions. However, a differential response to videogame distraction with or without the enhancement of VR technology was not found. Children's coping style did not moderate their response to distraction. Rather, interactive videogame distraction with and without VR technology was equally effective for children who utilized avoidant or approach coping styles.

  6. 3D augmented reality for improving social acceptance and public participation in wind farms planning

    Science.gov (United States)

    Grassi, S.; Klein, T. M.

    2016-09-01

    Wind energy is one of the most important sources of renewable energy, characterized by significant growth in recent decades and making an increasingly relevant contribution to the energy supply. One of the main obstacles to faster integration of wind energy into the energy mix is the visual impact of wind turbines on the landscape. In addition, the siting of massive new infrastructure has the potential to threaten a community's well-being if new projects are perceived as unfair. The public perception of the impact of wind turbines on the landscape is also crucial for their acceptance. The implementation of wind energy projects is often hampered by a lack of planning or communication tools enabling a more transparent and efficient interaction between all stakeholders involved in the projects (i.e. developers, local communities and administrations, NGOs, etc.). Concerning the visual assessment of wind farms, a critical gap lies in effective visualization tools to improve the public perception of alternative wind turbine layouts. In this paper, we describe the advantages of a 3D dynamic and interactive visualization platform for augmented reality to support wind energy planners and enhance the social acceptance of new wind energy projects.

  7. Mixed Reality with HoloLens: Where Virtual Reality Meets Augmented Reality in the Operating Room.

    Science.gov (United States)

    Tepper, Oren M; Rudy, Hayeem L; Lefkowitz, Aaron; Weimer, Katie A; Marks, Shelby M; Stern, Carrie S; Garfein, Evan S

    2017-11-01

    Virtual reality and augmented reality devices have recently been described in the surgical literature. The authors have previously explored various iterations of these devices, and although they show promise, it has become clear that virtual reality and/or augmented reality devices alone do not adequately meet the demands of surgeons. The solution may lie in a hybrid technology known as mixed reality, which merges many virtual reality and augmented realty features. Microsoft's HoloLens, the first commercially available mixed reality device, provides surgeons intraoperative hands-free access to complex data, the real environment, and bidirectional communication. This report describes the use of HoloLens in the operating room to improve decision-making and surgical workflow. The pace of mixed reality-related technological development will undoubtedly be rapid in the coming years, and plastic surgeons are ideally suited to both lead and benefit from this advance.

  8. Augmented reality user interface for mobile ground robots with manipulator arms

    Science.gov (United States)

    Vozar, Steven; Tilbury, Dawn M.

    2011-01-01

    Augmented Reality (AR) is a technology in which real-world visual data is combined with an overlay of computer graphics, enhancing the original feed. AR is an attractive tool for teleoperated UGV UIs as it can improve communication between robots and users via an intuitive spatial and visual dialogue, thereby increasing operator situational awareness. The successful operation of UGVs often relies upon both chassis navigation and manipulator arm control, and since existing literature usually focuses on one task or the other, there is a gap in mobile robot UIs that take advantage of AR for both applications. This work describes the development and analysis of an AR UI system for a UGV with an attached manipulator arm. The system supplements a video feed shown to an operator with information about geometric relationships within the robot task space to improve the operator's situational awareness. Previous studies on AR systems and preliminary analyses indicate that such an implementation of AR for a mobile robot with a manipulator arm is anticipated to improve operator performance. A full user-study can determine if this hypothesis is supported by performing an analysis of variance on common test metrics associated with UGV teleoperation.
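    The overlay of geometric task-space information on the operator's video feed, described above, is not specified at the implementation level; a standard way to place such overlays is to project known 3D points through a calibrated pinhole camera model, as in the sketch below. The intrinsic matrix, camera pose, and target point are illustrative assumptions.

```python
import numpy as np

def project_points(points_world, R, t, K):
    """Project 3D task-space points (N, 3) to pixel coordinates (N, 2) with a
    pinhole model, x ~ K [R | t] X, where (R, t) is the camera pose and K the
    3x3 intrinsic matrix."""
    P = np.asarray(points_world, dtype=float)
    cam = (R @ P.T).T + t                  # world frame -> camera frame
    uvw = (K @ cam.T).T                    # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]        # perspective divide

# Example: overlay a manipulator target on a 640x480 feed (focal length ~525 px).
K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
target = [[0.2, -0.1, 1.5]]                # metres, in front of the camera
print(project_points(target, R, t, K))     # pixel location at which to draw
```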

  9. Monitoring what is real: The effects of modality and action on accuracy and type of reality monitoring error.

    Science.gov (United States)

    Garrison, Jane R; Bond, Rebecca; Gibbard, Emma; Johnson, Marcia K; Simons, Jon S

    2017-02-01

    Reality monitoring refers to processes involved in distinguishing internally generated information from information presented in the external world, an activity thought to be based, in part, on assessment of activated features such as the amount and type of cognitive operations and perceptual content. Impairment in reality monitoring has been implicated in symptoms of mental illness and associated more widely with the occurrence of anomalous perceptions as well as false memories and beliefs. In the present experiment, the cognitive mechanisms of reality monitoring were probed in healthy individuals using a task that investigated the effects of stimulus modality (auditory vs visual) and the type of action undertaken during encoding (thought vs speech) on subsequent source memory. There was reduced source accuracy for auditory stimuli compared with visual, and when encoding was accompanied by thought as opposed to speech, and a greater rate of externalization than internalization errors that was stable across factors. Interpreted within the source monitoring framework (Johnson, Hashtroudi, & Lindsay, 1993), the results are consistent with the greater prevalence of clinically observed auditory than visual reality discrimination failures. The significance of these findings is discussed in light of theories of hallucinations, delusions and confabulation. Copyright © 2016 The Author(s). Published by Elsevier Ltd.. All rights reserved.

  10. iview: an interactive WebGL visualizer for protein-ligand complex.

    Science.gov (United States)

    Li, Hongjian; Leung, Kwong-Sak; Nakane, Takanori; Wong, Man-Hon

    2014-02-25

    Visualization of protein-ligand complex plays an important role in elaborating protein-ligand interactions and aiding novel drug design. Most existing web visualizers either rely on slow software rendering, or lack virtual reality support. The vital feature of macromolecular surface construction is also unavailable. We have developed iview, an easy-to-use interactive WebGL visualizer of protein-ligand complex. It exploits hardware acceleration rather than software rendering. It features three special effects in virtual reality settings, namely anaglyph, parallax barrier and oculus rift, resulting in visually appealing identification of intermolecular interactions. It supports four surface representations including Van der Waals surface, solvent excluded surface, solvent accessible surface and molecular surface. Moreover, based on the feature-rich version of iview, we have also developed a neat and tailor-made version specifically for our istar web platform for protein-ligand docking purpose. This demonstrates the excellent portability of iview. Using innovative 3D techniques, we provide a user friendly visualizer that is not intended to compete with professional visualizers, but to enable easy accessibility and platform independence.

  11. Interactive Motion Platforms and Virtual Reality for Vehicle Simulators

    Directory of Open Access Journals (Sweden)

    Evžen Thöndel

    2017-12-01

    Full Text Available Interactive motion platforms are intended for vehicle simulators in which the direct interaction of the human body is used to control the simulated vehicle (e.g. a bicycle, motorbike or other sports vehicle). A second use of interactive motion platforms is for entertainment or fitness purposes. The development of interactive motion platforms responds to recent calls in the simulation industry for devices that further enhance the virtual reality experience, especially in connection with the new and very fast growing business in virtual reality glasses. The paper looks at the design and control of an interactive motion platform with two degrees of freedom to be used in virtual reality applications. It describes the control methods and discusses new problems related to virtual reality sickness.

  12. Interactive Assembly Guide using Augmented Reality

    DEFF Research Database (Denmark)

    Andersen, Martin; Andersen, Rasmus Skovgaard; Larsen, Christian Lindequist

    2009-01-01

    This paper presents an Augmented Reality system for aiding a pump assembly process at Grundfos, one of the leading pump producers. Stable pose estimation of the pump is required in order to augment the graphics correctly. This is achieved by matching image edges with synthesized edges from CAD … norm. A dynamic visualization of the augmented graphics provides the user with guidance. Usability tests show that the accuracy of the system is sufficient for assembling the pump…

  13. Enhancing the Tourism Experience through Mobile Augmented Reality: Challenges and Prospects

    Directory of Open Access Journals (Sweden)

    Chris D. Kounavis

    2012-07-01

    Full Text Available This paper discusses the use of Augmented Reality (AR) applications for the needs of tourism. It describes the technology’s evolution from pilot applications into commercial mobile applications. We address the technical aspects of mobile AR application development, emphasizing the technologies that render the delivery of augmented reality content possible and experientially superior. We examine the state of the art, providing an analysis concerning the development and the objectives of each application. Acknowledging the various technological limitations hindering AR’s substantial end-user adoption, the paper proposes a model for developing AR mobile applications for the field of tourism, aiming to release AR’s full potential within the field.

  14. Virtual reality in medicine-computer graphics and interaction techniques.

    Science.gov (United States)

    Haubner, M; Krapichler, C; Lösch, A; Englmeier, K H; van Eimeren, W

    1997-03-01

    This paper describes several new visualization and interaction techniques that enable the use of virtual environments for routine medical purposes. A new volume-rendering method supports shaded and transparent visualization of medical image sequences in real-time with an interactive threshold definition. Based on these rendering algorithms two complementary segmentation approaches offer an intuitive assistance for a wide range of requirements in diagnosis and therapy planning. In addition, a hierarchical data representation for geometric surface descriptions guarantees an optimal use of available hardware resources and prevents inaccurate visualization. The combination of the presented techniques empowers the improved human-machine interface of virtual reality to support every interactive task in medical three-dimensional (3-D) image processing, from visualization of unsegmented data volumes up to the simulation of surgical procedures.
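    The real-time volume-rendering method with interactive threshold definition mentioned above is not spelled out in the abstract; the minimal software sketch below only illustrates the general idea of threshold-driven ray casting with front-to-back alpha compositing along one axis, and the toy volume and parameter values are assumptions rather than the paper's algorithm.

```python
import numpy as np

def render_thresholded(volume, threshold, opacity=0.05):
    """March front-to-back along the z axis and alpha-composite every voxel
    whose intensity exceeds `threshold`; returns a 2D image. The threshold can
    be changed interactively to include or exclude intensity ranges."""
    vol = np.asarray(volume, dtype=float)
    image = np.zeros(vol.shape[:2])
    transparency = np.ones(vol.shape[:2])       # accumulated transparency
    for z in range(vol.shape[2]):
        slab = vol[:, :, z]
        alpha = np.where(slab > threshold, opacity, 0.0)
        image += transparency * alpha * slab
        transparency *= 1.0 - alpha
    return image

# Toy volume: a bright sphere in noise; raising the threshold isolates the sphere.
g = np.mgrid[0:64, 0:64, 0:64]
sphere = (np.sum((g - 32) ** 2, axis=0) < 15 ** 2).astype(float)
volume = sphere + 0.2 * np.random.default_rng(2).random((64, 64, 64))
image = render_thresholded(volume, threshold=0.5)
```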

  15. Maintaining balance when looking at a virtual reality three dimensional display of a field of moving dots or at a virtual reality scene

    Directory of Open Access Journals (Sweden)

    Elodie eChiarovano

    2015-07-01

    Full Text Available Experimental objective. To provide a safe, simple, relatively inexpensive, fast, accurate way of quantifying balance performance either in isolation, or in the face of challenges provided by 3D high definition moving visual stimuli as well as by the proprioceptive challenge from standing on a foam pad. This method uses the new technology of the Wii balance board to measure postural stability during powerful, realistic visual challenges from immersive virtual reality. Limitations of current techniques. Present computerized methods for measuring postural stability are large, complex, slow and expensive, and do not allow for testing the response to realistic visual challenges. Protocol. Subjects stand on a 6 cm thick, firm, foam pad on a Wii balance board. They wear a fast, high resolution, low persistence, virtual reality head set (Oculus Rift DK2). This allows displays of varying speed, direction, depth and complexity to be delivered. The subject experiences a visual illusion of real objects fixed relative to the world, and any of these displays can be perturbed in an unpredictable fashion. A special app (BalanceRite) used the same procedures for analysing postural analysis as used by the Equitest. Power of the technique. Four simple proof of concept experiments demonstrate that this technique matches the gold standard Equitest in terms of the measurement of postural stability but goes beyond the Equitest by measuring stability in the face of visual challenges, which are so powerful that even healthy subjects fall. The response to these challenges presents an opportunity for predicting falls and for rehabilitation of seniors and patients with poor postural stability. Significance for the field. This new method provides a simpler, quicker, cheaper method of measurement than the Equitest. It may provide a new mode of training to prevent falls, by maintaining postural stability in the face of visual and proprioceptive challenges similar to those

  16. Maintaining Balance when Looking at a Virtual Reality Three-Dimensional Display of a Field of Moving Dots or at a Virtual Reality Scene.

    Science.gov (United States)

    Chiarovano, Elodie; de Waele, Catherine; MacDougall, Hamish G; Rogers, Stephen J; Burgess, Ann M; Curthoys, Ian S

    2015-01-01

    To provide a safe, simple, relatively inexpensive, fast, accurate way of quantifying balance performance either in isolation, or in the face of challenges provided by 3D high definition moving visual stimuli as well as by the proprioceptive challenge from standing on a foam pad. This method uses the new technology of the Wii balance board to measure postural stability during powerful, realistic visual challenges from immersive virtual reality. Present computerized methods for measuring postural stability are large, complex, slow, and expensive, and do not allow for testing the response to realistic visual challenges. Subjects stand on a 6 cm thick, firm, foam pad on a Wii balance board. They wear a fast, high resolution, low persistence, virtual reality head set (Oculus Rift DK2). This allows displays of varying speed, direction, depth, and complexity to be delivered. The subject experiences a visual illusion of real objects fixed relative to the world, and any of these displays can be perturbed in an unpredictable fashion. A special app (BalanceRite) used the same procedures for analyzing postural analysis as used by the Equitest. Four simple "proof of concept" experiments demonstrate that this technique matches the gold standard Equitest in terms of the measurement of postural stability but goes beyond the Equitest by measuring stability in the face of visual challenges, which are so powerful that even healthy subjects fall. The response to these challenges presents an opportunity for predicting falls and for rehabilitation of seniors and patients with poor postural stability. This new method provides a simpler, quicker, cheaper method of measurement than the Equitest. It may provide a new mode of training to prevent falls, by maintaining postural stability in the face of visual and proprioceptive challenges similar to those encountered in life.
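    Neither record states how postural stability is computed from the Wii balance board; the board reports the load at its four corners, and sway is commonly summarised from the centre-of-pressure (COP) trajectory derived from those loads, as sketched below. The board dimensions, sensor ordering, and the path-length summary are assumptions made for illustration.

```python
import numpy as np

# Approximate spacing of the Wii balance board's corner load cells (metres);
# used only for illustration.
BOARD_X, BOARD_Y = 0.433, 0.238

def centre_of_pressure(top_right, bottom_right, top_left, bottom_left):
    """Centre of pressure (x, y) in metres from the four corner loads."""
    total = top_right + bottom_right + top_left + bottom_left
    x = (BOARD_X / 2.0) * ((top_right + bottom_right) - (top_left + bottom_left)) / total
    y = (BOARD_Y / 2.0) * ((top_right + top_left) - (bottom_right + bottom_left)) / total
    return x, y

def sway_path_length(cop_trace):
    """Total excursion of the COP trajectory -- one common summary of postural
    stability; larger values indicate more sway."""
    cop = np.asarray(cop_trace, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(cop, axis=0), axis=1)))

# Example: a short synthetic trace of corner loads (kg) sampled during stance.
loads = [(20.1, 19.8, 20.0, 20.1), (20.6, 19.5, 19.9, 20.0), (20.3, 19.9, 19.7, 20.1)]
trace = [centre_of_pressure(*sample) for sample in loads]
print(f"sway path length: {sway_path_length(trace):.4f} m")
```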

  17. 3D Multi-Channel Networked Visualization System for National LambdaRail, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Multichannel virtual reality visualization is the future of complex simulation with a large number of visual channels rendered and transmitted over high-speed...

  18. Effect of Topography on Learning Military Tactics - Integration of Generalized Intelligent Framework for Tutoring (GIFT) and Augmented REality Sandtable (ARES)

    Science.gov (United States)

    2016-09-01

    Record excerpt (no abstract indexed); the snippet contains only reference fragments and the report byline: Dunleavy M, Dede C. Augmented reality teaching and learning. In: Handbook of Research on Educational Communications and Technology. New York (NY): Springer; … taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems. 1994;77(12):1321–1329; Noordzij ML, Scholten P, Laroy-Noordzij … Report: Effect of Topography on Learning Military Tactics - Integration of Generalized Intelligent Framework for Tutoring (GIFT) and Augmented REality Sandtable (ARES), by Michael W Boyce, Ramsamooj J Reyes, Deeja E Cruz, Charles …

  19. Distributed Virtual Reality: System Concepts for Cooperative Training and Commanding in Virtual Worlds

    Directory of Open Access Journals (Sweden)

    Eckhard Freund

    2003-02-01

    Full Text Available The general aim of the development of virtual reality technology for automation applications at the IRF is to provide the framework for Projective Virtual Reality, which allows users to "project" their actions in the virtual world into the real world, primarily by means of robots but also by other means of automation. The framework is based on a new task-oriented approach which builds on the "task deduction" capabilities of a newly developed virtual reality system and a task planning component. The advantage of this new approach is that robots which work at great distances from the control station can be controlled as easily and intuitively as robots that work right next to the control station. Robot control technology now provides the user in the virtual world with a "prolonged arm" into the physical environment, thus paving the way for a new quality of user-friendly man-machine interfaces for automation applications. Lately, this work has been enhanced by a new structure that allows the virtual reality application to be distributed over multiple computers. With this new step, it is now possible for multiple users to work together in the same virtual room, although they may physically be thousands of miles apart. They only need an Internet or ISDN connection to share this new experience. Last but not least, the distribution technology has been further developed not just to allow users to cooperate but also to run the virtual world on many synchronized PCs, so that a panorama projection or even a CAVE can be run with 10 synchronized PCs instead of high-end workstations, thus cutting down the costs for such a visualization environment drastically and allowing for a new range of applications.
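
    The record above describes running one virtual world on many synchronized PCs but gives no implementation details. One common pattern for this kind of synchronization, sketched below purely as an illustration (it is not the IRF system), is a master node broadcasting per-frame state such as the camera pose to the render nodes over UDP; the port, node addresses and message layout are assumptions.

```python
import json
import socket
import time

PORT = 50007                               # assumed port for this illustration
NODES = ["192.168.0.11", "192.168.0.12"]   # hypothetical render-node addresses

def broadcast_pose(sock, frame, position, orientation):
    """Send one frame's camera pose to every render node."""
    msg = json.dumps({
        "frame": frame,
        "t": time.time(),      # timestamp lets receivers drop stale packets
        "pos": position,       # [x, y, z]
        "quat": orientation,   # [x, y, z, w]
    }).encode("utf-8")
    for host in NODES:
        sock.sendto(msg, (host, PORT))

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for frame in range(3):                 # a few demo frames
        broadcast_pose(sock, frame, [0.0, 1.7, frame * 0.1], [0, 0, 0, 1])
        time.sleep(1 / 60)                 # assumed 60 Hz update rate
    sock.close()
```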

  20. Density of Visual Input Enhancement and Grammar Learning: A Research Proposal

    Science.gov (United States)

    Tran, Thu Hoang

    2009-01-01

    Research in the field of second language acquisition (SLA) has been done to ascertain the effectiveness of visual input enhancement (VIE) on grammar learning. However, one issue remains unexplored: the effects of VIE density on grammar learning. This paper presents a research proposal to investigate the effects of the density of VIE on English…

  1. Augmented reality

    Directory of Open Access Journals (Sweden)

    Patrik Pucer

    2011-08-01

    Full Text Available Today we can obtain in a simple and rapid way most of the information that we need. Devices, such as personal computers and mobile phones, enable access to information in different formats (written, pictorial, audio or video) whenever and wherever. Daily we use and encounter information that can be seen as virtual objects or objects that are part of the virtual world of computers. Everyone, at least once, has wanted to bring these virtual objects from the virtual world of computers into real environments and thus mix virtual and real worlds. In such a mixed reality, real and virtual objects coexist in the same environment. The reality in which users watch and use the real environment upgraded with virtual objects is called augmented reality. In this article we describe the main properties of augmented reality. In addition to the basic properties that define a reality as augmented reality, we present the various building elements (possible hardware and software) that provide an insight into such a reality, as well as practical applications of augmented reality. The applications are divided into three groups depending on the information and functions that augmented reality offers, such as help, guide and simulator.

  2. Augmented Reality Comes to Physics

    Science.gov (United States)

    Buesing, Mark; Cook, Michael

    2013-01-01

    Augmented reality (AR) is a technology used on computing devices where processor-generated graphics are rendered over real objects to enhance the sensory experience in real time. In other words, what you are really seeing is augmented by the computer. Many AR games already exist for systems such as Kinect and Nintendo 3DS and mobile apps, such as…

  3. Collaborative Embodied Learning in Mixed Reality Motion-Capture Environments: Two Science Studies

    Science.gov (United States)

    Johnson-Glenberg, Mina C.; Birchfield, David A.; Tolentino, Lisa; Koziupa, Tatyana

    2014-01-01

    These 2 studies investigate the extent to which an Embodied Mixed Reality Learning Environment (EMRELE) can enhance science learning compared to regular classroom instruction. Mixed reality means that physical tangible and digital components were present. The content for the EMRELE required that students map abstract concepts and relations onto…

  4. Optical methods for enabling focus cues in head-mounted displays for virtual and augmented reality

    Science.gov (United States)

    Hua, Hong

    2017-05-01

    Developing head-mounted displays (HMD) that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, from both technological and human-factors perspectives. Among the many challenges, minimizing visual discomfort is one of the key obstacles. One of the key contributing factors to visual discomfort is the lack of the ability to render proper focus cues in HMDs to stimulate natural eye accommodation responses, which leads to the well-known accommodation-convergence cue discrepancy problem. In this paper, I will provide a summary of the various optical approaches toward enabling focus cues in HMDs for both virtual reality (VR) and augmented reality (AR).

  5. Binaural Sound Reduces Reaction Time in a Virtual Reality Search Task

    DEFF Research Database (Denmark)

    Høeg, Emil Rosenlund; Gerry, Lynda; Thomsen, Lui Albæk

    2017-01-01

    Salient features in a visual search task can direct attention and increase competency on these tasks. Simple cues, such as color change in a salient feature, called the "pop-out effect", can increase task-solving efficiency [6]. Previous work has shown that nonspatial auditory signals temporally...... synched with a pop-out effect can improve reaction time in a visual search task, called the "pip and pop effect" [14]. This paper describes a within-group study on the effect of audiospatial attention in virtual reality given a 360-degree visual search. Three cue conditions were compared (no sound, stereo...

  6. Virtual reality for identification of defects and fractures in images

    International Nuclear Information System (INIS)

    Sales, Douglas S.; Mol, Antonio Carlos A.; Jorge, Carlos Alexandre F.

    2009-01-01

    This paper describes the development of a system for non-destructive analysis of materials through stereo three-dimensional visualization, applying virtual reality techniques. It is a set of experiments and computational techniques involving the use of two images, each one corresponding to the view of one eye, in order to compose the three-dimensional image. Based on the three-dimensional visualization and the previously defined scale, it is possible to make an analysis and to evaluate possible failures in the studied material.

  7. Towards Pervasive Augmented Reality: Context-Awareness in Augmented Reality.

    Science.gov (United States)

    Grubert, Jens; Langlotz, Tobias; Zollmann, Stefanie; Regenbrecht, Holger

    2017-06-01

    Augmented Reality is a technique that enables users to interact with their physical environment through the overlay of digital information. While it has been researched for decades, Augmented Reality has more recently moved out of the research labs and into the field. While most of the applications are used sporadically and for one particular task only, current and future scenarios will provide a continuous and multi-purpose user experience. Therefore, in this paper, we present the concept of Pervasive Augmented Reality, aiming to provide such an experience by sensing the user's current context and adapting the AR system based on the changing requirements and constraints. We present a taxonomy for Pervasive Augmented Reality and context-aware Augmented Reality, which classifies context sources and context targets relevant for implementing such a context-aware, continuous Augmented Reality experience. We further summarize existing approaches that contribute towards Pervasive Augmented Reality. Based on our taxonomy and survey, we identify challenges for future research directions in Pervasive Augmented Reality.

  8. Brain-computer interface: changes in performance using virtual reality techniques.

    Science.gov (United States)

    Ron-Angevin, Ricardo; Díaz-Estrella, Antonio

    2009-01-09

    The ability to control electroencephalographic (EEG) signals when different mental tasks are carried out would provide a method of communication for people with serious motor function problems. This system is known as a brain-computer interface (BCI). Due to the difficulty of controlling one's own EEG signals, a suitable training protocol is required to motivate subjects, as it is necessary to provide some type of visual feedback allowing subjects to see their progress. Conventional systems of feedback are based on simple visual presentations, such as a horizontal bar extension. However, virtual reality is a powerful tool with graphical possibilities to improve BCI-feedback presentation. The objective of the study is to explore the advantages of the use of feedback based on virtual reality techniques compared to conventional systems of feedback. Sixteen untrained subjects, divided into two groups, participated in the experiment. A group of subjects was trained using a BCI system, which uses conventional feedback (bar extension), and another group was trained using a BCI system, which submits subjects to a more familiar environment, such as controlling a car to avoid obstacles. The obtained results suggest that EEG behaviour can be modified via feedback presentation. Significant differences in classification error rates between both interfaces were obtained during the feedback period, confirming that an interface based on virtual reality techniques can improve the feedback control, specifically for untrained subjects.

  9. A Visual Arts Education pedagogical approach for enhancing quality of life for persons with dementia (innovative practice).

    Science.gov (United States)

    Tietyen, Ann C; Richards, Allan G

    2017-01-01

    A new and innovative pedagogical approach that administers hands-on visual arts activities to persons with dementia based on the field of Visual Arts Education is reported in this paper. The aims of this approach are to enhance cognition and improve quality of life. These aims were explored in a small qualitative study with eight individuals with moderate dementia, and the results are published as a thesis. In this paper, we summarize and report the results of this small qualitative study and expand upon the rationale for the Visual Arts Education pedagogical approach that has shown promise for enhancing cognitive processes and improving quality of life for persons with dementia.

  10. The Use of Virtual Reality Facilitates Dialectical Behavior Therapy® "Observing Sounds and Visuals" Mindfulness Skills Training Exercises for a Latino Patient with Severe Burns: A Case Study.

    Science.gov (United States)

    Gomez, Jocelyn; Hoffman, Hunter G; Bistricky, Steven L; Gonzalez, Miriam; Rosenberg, Laura; Sampaio, Mariana; Garcia-Palacios, Azucena; Navarro-Haro, Maria V; Alhalabi, Wadee; Rosenberg, Marta; Meyer, Walter J; Linehan, Marsha M

    2017-01-01

    Sustaining a burn injury increases an individual's risk of developing psychological problems such as generalized anxiety, negative emotions, depression, acute stress disorder, or post-traumatic stress disorder. Despite the growing use of Dialectical Behavioral Therapy® (DBT®) by clinical psychologists, to date, there are no published studies using standard DBT® or DBT® skills learning for severe burn patients. The current study explored the feasibility and clinical potential of using Immersive Virtual Reality (VR) enhanced DBT® mindfulness skills training to reduce negative emotions and increase positive emotions of a patient with severe burn injuries. The participant was a hospitalized (in house) 21-year-old Spanish speaking Latino male patient being treated for a large (>35% TBSA) severe flame burn injury. Methods: The patient looked into a pair of Oculus Rift DK2 virtual reality goggles to perceive the computer-generated virtual reality illusion of floating down a river, with rocks, boulders, trees, mountains, and clouds, while listening to DBT® mindfulness training audios during 4 VR sessions over a 1 month period. Study measures were administered before and after each VR session. Results: As predicted, the patient reported increased positive emotions and decreased negative emotions. The patient also accepted the VR mindfulness treatment technique. He reported the sessions helped him become more comfortable with his emotions and he wanted to keep using mindfulness after returning home. Conclusions: Dialectical Behavioral Therapy is an empirically validated treatment approach that has proved effective with non-burn patient populations for treating many of the psychological problems experienced by severe burn patients. The current case study explored for the first time, the use of immersive virtual reality enhanced DBT® mindfulness skills training with a burn patient. The patient reported reductions in negative emotions and increases in positive emotions

  11. Photogrammetry and remote sensing for visualization of spatial data in a virtual reality environment

    Science.gov (United States)

    Bhagawati, Dwipen

    2001-07-01

    Researchers in many disciplines have started using the tool of Virtual Reality (VR) to gain new insights into problems in their respective disciplines. Recent advances in computer graphics, software and hardware technologies have created many opportunities for VR systems, advanced scientific and engineering applications being among them. In Geometronics, photogrammetry and remote sensing are generally used for management of spatial data inventory. VR technology can be suitably used for management of spatial data inventory. This research demonstrates the usefulness of VR technology for inventory management by taking roadside features as a case study. Management of roadside feature inventory involves positioning and visualization of the features. This research has developed a methodology to demonstrate how photogrammetric principles can be used to position the features using the video-logging images and GPS camera positioning, and how image analysis can help produce appropriate texture for building the VR, which then can be visualized in a Cave Augmented Virtual Environment (CAVE). VR modeling was implemented in two stages to demonstrate the different approaches for modeling the VR scene. A simulated highway scene was implemented with the brute force approach, while modeling software was used to model the real world scene using feature positions produced in this research. The first approach demonstrates an implementation of the scene by writing C++ code that includes a multi-level wand menu enabling the user to interact with the scene. The interactions include editing the features inside the CAVE display, navigating inside the scene, and performing limited geographic analysis. The second approach demonstrates creation of a VR scene for a real roadway environment using feature positions determined in this research. The scene looks realistic with textures from the real site mapped on to the geometry of the scene. Remote sensing and

  12. Computer visualization for enhanced operator performance for advanced nuclear power plants

    International Nuclear Information System (INIS)

    Simon, B.H.; Raghavan, R.

    1993-01-01

    The operators of nuclear power plants are presented with an often uncoordinated and arbitrary array of displays and controls. Information is presented in different formats and on physically dissimilar instruments. In an accident situation, an operator must be very alert to quickly diagnose and respond to the state of the plant as represented by the control room displays. Improvements in display technology and increased automation have helped reduce operator burden; however, too much automation may lead to operator apathy and decreased efficiency. A proposed approach to the human-system interface uses modern graphics technology and advances in computational power to provide a visualization or "virtual reality" framework for the operator. This virtual reality comprises a simulated perception of another existence, complete with three-dimensional structures, backgrounds, and objects. By placing the operator in an environment that presents an integrated, graphical, and dynamic view of the plant, his attention is directly engaged. Through computer simulation, the operator can view plant equipment, read local displays, and manipulate controls as if he were in the local area. This process not only keeps an operator involved in plant operation and testing procedures, but also reduces personnel exposure. In addition, operator stress is reduced because, with realistic views of plant areas and equipment, the status of the plant can be accurately grasped without interpreting a large number of displays. Since a single operator can quickly "visit" many different plant areas without physically moving from the control room, these techniques are useful in reducing labor requirements for surveillance and maintenance activities. This concept requires a plant dynamic model continuously updated via real-time process monitoring. This model interacts with a three-dimensional, solid-model architectural configuration of the physical plant.

  13. Four-dimensional microscope- integrated optical coherence tomography to enhance visualization in glaucoma surgeries.

    Science.gov (United States)

    Pasricha, Neel Dave; Bhullar, Paramjit Kaur; Shieh, Christine; Viehland, Christian; Carrasco-Zevallos, Oscar Mijail; Keller, Brenton; Izatt, Joseph Adam; Toth, Cynthia Ann; Challa, Pratap; Kuo, Anthony Nanlin

    2017-01-01

    We report the first use of swept-source microscope-integrated optical coherence tomography (SS-MIOCT) capable of live four-dimensional (4D) (three-dimensional across time) imaging intraoperatively to directly visualize tube shunt placement and trabeculectomy surgeries in two patients with severe open-angle glaucoma and elevated intraocular pressure (IOP) that was not adequately managed by medical intervention or prior surgery. We performed tube shunt placement and trabeculectomy surgery and used SS-MIOCT to visualize and record surgical steps that benefitted from the enhanced visualization. In the case of tube shunt placement, SS-MIOCT successfully visualized the scleral tunneling, tube shunt positioning in the anterior chamber, and tube shunt suturing. For the trabeculectomy, SS-MIOCT successfully visualized the scleral flap creation, sclerotomy, and iridectomy. Postoperatively, both patients did well, with IOPs decreasing to the target goal. We found the benefit of SS-MIOCT was greatest in surgical steps requiring depth-based assessments. This technology has the potential to improve clinical outcomes.

  14. Remote Collaboration With Mixed Reality Displays

    DEFF Research Database (Denmark)

    Müller, Jens; Rädle, Roman; Reiterer, Harald

    2017-01-01

    HCI research has demonstrated Mixed Reality (MR) as being beneficial for co-located collaborative work. For remote collaboration, however, the collaborators' visual contexts do not coincide due to their individual physical environments. The problem becomes apparent when collaborators refer...... to physical landmarks in their individual environments to guide each other's attention. In an experimental study with 16 dyads, we investigated how the provisioning of shared virtual landmarks (SVLs) influences communication behavior and user experience. A quantitative analysis revealed that participants used...

  15. Visual working memory enhances the neural response to matching visual input

    NARCIS (Netherlands)

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-01-01

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw

  16. Implementation of augmented reality to models sultan deli

    Science.gov (United States)

    Syahputra, M. F.; Lumbantobing, N. P.; Siregar, B.; Rahmat, R. F.; Andayani, U.

    2018-03-01

    Augmented reality is a technology that can provide visualization in the form of a 3D virtual model. By utilizing augmented reality, image-based modeling can be applied to turn photographs of the Sultan of Deli of Istana Maimun into a three-dimensional model. This matters because the Sultan of Deli, one of the important figures in the history of the development of the city of Medan, is little known by the public, since the existing images of the Deli Sultanate are unclear and very old. To achieve this goal, augmented reality applications are used together with image-processing methodologies to build 3D models through several toolkits. The output generated by this method is visitors' photos of Maimun Palace augmented with a 3D model of the Sultan of Deli, with markers detected at distances of 20-60 cm, making it easy for the public to recognize the Sultan of Deli who once ruled at Maimun Palace.

  17. An indoor augmented reality mobile application for simulation of building evacuation

    Science.gov (United States)

    Sharma, Sharad; Jerripothula, Shanmukha

    2015-03-01

    Augmented Reality enables people to remain connected with the physical environment they are in, and invites them to look at the world from new and alternative perspectives. There has been an increasing interest in emergency evacuation applications for mobile devices. Nearly all the smart phones these days are Wi-Fi and GPS enabled. In this paper, we propose a novel emergency evacuation system that will help people to safely evacuate a building in case of an emergency situation. It will further enhance knowledge and understanding of where the exits are in the building and of safety evacuation procedures. We have applied mobile augmented reality (mobile AR) to create an application with the Unity 3D gaming engine. We show how the mobile AR application is able to display a 3D model of the building and an animation of people evacuating, using markers and a web camera. The system gives a visual representation of a building in 3D space, allowing people to see where exits are in the building through the use of a smart phone or tablet. Pilot studies conducted with the system showed partial success and demonstrated the effectiveness of the application in emergency evacuation. Our computer vision methods give good results when the markers are closer to the camera, but accuracy decreases when the markers are far away from the camera.
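
    The application above relies on detecting printed markers through a web camera; the record does not say which toolkit performs this step. The sketch below illustrates the general marker-detection idea with OpenCV's ArUco module, which is an assumption and not necessarily what the authors used; it expects opencv-contrib-python with OpenCV 4.7 or newer (earlier versions expose cv2.aruco.detectMarkers instead of the ArucoDetector class).

```python
import cv2

# ArUco dictionary and detector (OpenCV >= 4.7 API); the dictionary choice
# (DICT_4X4_50) is an arbitrary assumption for this illustration.
DICTIONARY = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
DETECTOR = cv2.aruco.ArucoDetector(DICTIONARY, cv2.aruco.DetectorParameters())

def detect_markers(frame):
    """Return marker IDs and corner coordinates found in a BGR camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = DETECTOR.detectMarkers(gray)
    return ids, corners

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)               # default web camera
    ok, frame = cap.read()
    if ok:
        ids, corners = detect_markers(frame)
        print("detected marker ids:", None if ids is None else ids.flatten())
    cap.release()
```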

  18. Linear Narratives, Arbitrary Relationships: Mimesis and Direct Communication for Effectively Representing Engineering Realities Multimodally

    Science.gov (United States)

    Jeyaraj, Joseph

    2017-01-01

    Engineers communicate multimodally using written and visual communication, but there is not much theorizing on why they do so and how. This essay, therefore, examines why engineers communicate multimodally, what, in the context of representing engineering realities, are the strengths and weaknesses of written and visual communication, and how,…

  19. Enhancing Nuclear Newcomer Training with 3D Visualization Learning Tools

    International Nuclear Information System (INIS)

    Gagnon, V.

    2016-01-01

    Full text: While the nuclear power industry is trying to reinforce its safety and regain public support post-Fukushima, it is also faced with a very real challenge that affects its day-to-day activities: a rapidly aging workforce. Statistics show that close to 40% of the current nuclear power industry workforce will retire within the next five years. For newcomer countries, the challenge is even greater, having to develop a completely new workforce. The workforce replacement effort introduces nuclear newcomers of a new generation with different backgrounds and affinities. Major lifestyle differences between the two generations of workers result, amongst other things, in different learning habits and needs for this new breed of learners. Interactivity, high visual content and quick access to information are now necessary to achieve a high level of retention. To enhance existing training programmes or to support the establishment of new training programmes for newcomer countries, L-3 MAPPS has devised learning tools focused on the "Practice-by-Doing" principle. L-3 MAPPS has coupled 3D computer visualization with high-fidelity simulation to bring real-time, simulation-driven animated components and systems allowing immersive and participatory, individual or classroom learning. (author)

  20. [Intraoperative multidimensional visualization].

    Science.gov (United States)

    Sperling, J; Kauffels, A; Grade, M; Alves, F; Kühn, P; Ghadimi, B M

    2016-12-01

    Modern intraoperative techniques of visualization are increasingly being applied in general and visceral surgery. The combination of diverse techniques provides the possibility of multidimensional intraoperative visualization of specific anatomical structures. Thus, it is possible to differentiate between normal tissue and tumor tissue and therefore exactly define tumor margins. The aim of intraoperative visualization of tissue that is to be resected and tissue that should be spared is to lead to a rational balance between oncological and functional results. Moreover, these techniques help to analyze the physiology and integrity of tissues. Using these methods surgeons are able to analyze tissue perfusion and oxygenation. However, to date it is not clear to what extent these imaging techniques are relevant in the clinical routine. The present manuscript reviews the relevant modern visualization techniques focusing on intraoperative computed tomography and magnetic resonance imaging as well as augmented reality, fluorescence imaging and optoacoustic imaging.

  1. Augmented Reality: Sustaining iPad-facilitated Visualisation Pedagogy in Nursing Education

    DEFF Research Database (Denmark)

    Kjærgaard, Hanne Wacher; Kjeldsen, Lars Peter; Rahn, Annette

    2015-01-01

    This chapter describes the use of iPad-facilitated application of augmented reality in the teaching of highly complex anatomical and physiological subjects in the training of nurses at undergraduate level. The general aim of the project is to investigate the potentials of this application in terms...... of making the complex content and context of these subjects more approachable to the students through the visualization made possible through the use of this technology. A case study is described in this chapter. Issues and factors required for the sustainable use of the mobile-facilitated application...... of augmented reality are discussed....

  2. Evaluation of Sports Visualization Based on Wearable Devices

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2017-12-01

    Full Text Available In order to visualize the physical education classroom in school, we create a visualized movement management system, which records students' exercise data efficiently and stores them in a database that the virtual reality client can query. Each individual's exercise data are gathered as source material for studying the laws of group movement, playing a strategic role in managing physical education. Through the combination of wearable devices, virtual reality and network technology, student movement data (time, space, rate, etc.) are collected in real time to drive role models in virtual scenes, thereby visualizing the movement data. Moreover, a Markov chain based algorithm is used to predict the movement state. The test results show that this method can quantify the student movement data. Therefore, the application of this system in PE classes can help teachers observe the students' real-time movement amount and state, so as to improve the teaching quality.
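
    The record mentions a Markov chain based algorithm for predicting movement state but gives no details. As a generic illustration only (the states and the toy sequence below are invented), a first-order chain can be fitted by counting transitions between observed activity states and then predicting the most likely next state.

```python
from collections import Counter, defaultdict

# Hypothetical activity states inferred from wearable data.
observed = ["rest", "walk", "walk", "run", "walk", "rest", "walk", "run", "run"]

def fit_transition_probs(states):
    """Estimate first-order transition probabilities from a state sequence."""
    counts = defaultdict(Counter)
    for current, nxt in zip(states, states[1:]):
        counts[current][nxt] += 1
    return {
        s: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
        for s, nxts in counts.items()
    }

def predict_next(probs, current):
    """Most probable next state given the current one."""
    return max(probs[current], key=probs[current].get)

probs = fit_transition_probs(observed)
print(probs["walk"])                # {'walk': 0.25, 'run': 0.5, 'rest': 0.25}
print(predict_next(probs, "walk"))  # 'run' for this toy sequence
```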

  3. PixEye Virtual Reality Training has the Potential of Enhancing Proficiency of Laser Trabeculoplasty Performed by Medical Students: A Pilot Study.

    Science.gov (United States)

    Alwadani, Fahad; Morsi, Mohammed Saad

    2012-01-01

    To compare the surgical proficiency of medical students who underwent traditional training or virtual reality training for argon laser trabeculoplasty with the PixEye simulator. The cohort comprised 47 fifth-year male medical students from the College of Medicine, King Faisal University, Saudi Arabia. The cohort was divided into two groups: students who received virtual reality training (VR Group; n = 24) and students who underwent traditional training (Control Group; n = 23). After training, the students performed the trabeculoplasty procedure. All trainings included concurrent PowerPoint presentations describing the details of the procedure. Evaluation of surgical performance was based on the following variables: missing the exact location with the laser, overtreatment, undertreatment and inadvertent laser shots to the iris and cornea. The target was missed by 8% of the VR Group compared to 55% of the Control Group. Overtreatment and undertreatment were observed in 7% of the VR Group compared to 46% of the Control Group. Inadvertent laser application to the cornea or iris was performed by 4.5% of the VR Group compared to 34% of the Control Group. Virtual reality training on the PixEye simulator may enhance the proficiency of medical students and limit possible surgical errors during laser trabeculoplasty. The authors have no financial interest in the material mentioned in this study.

  4. Novel 3D/VR interactive environment for MD simulations, visualization and analysis.

    Science.gov (United States)

    Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P

    2014-12-18

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.

  5. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2017-01-01

    Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat...... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of multisensory orientation response....

  6. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2018-01-01

    Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat...... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of multisensory orientation response....

  7. Visuo-acoustic stimulation that helps you to relax: A virtual reality setup for patients in the intensive care unit.

    Science.gov (United States)

    Gerber, Stephan M; Jeitziner, Marie-Madlen; Wyss, Patric; Chesham, Alvin; Urwyler, Prabitha; Müri, René M; Jakob, Stephan M; Nef, Tobias

    2017-10-16

    After a prolonged stay in an intensive care unit (ICU), patients often complain about cognitive impairments that affect health-related quality of life after discharge. The aim of this proof-of-concept study was to test the feasibility and effects of controlled visual and acoustic stimulation in a virtual reality (VR) setup in the ICU. The VR setup consisted of a head-mounted display in combination with an eye tracker and sensors to assess vital signs. The stimulation consisted of videos featuring natural scenes and was tested in 37 healthy participants in the ICU. The VR stimulation led to a reduction of heart rate (p = 0.049) and blood pressure (p = 0.044). The fixation/saccade ratio (p < 0.001) was increased when a visual target was presented superimposed on the videos (reduced search activity), reflecting enhanced visual processing. Overall, the VR stimulation had a relaxing effect, as shown in vital markers of physical stress, and participants explored less when attending the target. Our study indicates that VR stimulation in ICU settings is feasible and beneficial for critically ill patients.
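
    The fixation/saccade ratio reported above comes from the head-mounted eye tracker. As a rough, generic sketch (not the authors' pipeline), gaze samples can be split into saccades and fixations with a simple velocity threshold (I-VT style); the sampling rate and the threshold value below are assumptions.

```python
import numpy as np

FS = 60.0              # assumed eye-tracker sampling rate in Hz
VEL_THRESHOLD = 30.0   # assumed saccade threshold in deg/s (I-VT style)

def fixation_saccade_ratio(gaze_deg):
    """gaze_deg: (n, 2) array of gaze angles in degrees (azimuth, elevation).
    Returns the ratio of fixation samples to saccade samples."""
    velocity = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) * FS  # deg/s
    saccade = velocity > VEL_THRESHOLD
    n_sacc = int(np.sum(saccade))
    n_fix = int(np.sum(~saccade))
    return np.inf if n_sacc == 0 else n_fix / n_sacc

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    gaze = np.cumsum(rng.normal(0, 0.1, size=(600, 2)), axis=0)  # simulated drift
    gaze[200:205] += 8.0   # inject a brief saccade-like displacement
    print(f"fixation/saccade ratio: {fixation_saccade_ratio(gaze):.1f}")
```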

  8. Virtual Reality Presentation of Moment Tensor Analysis by SiGMA

    International Nuclear Information System (INIS)

    Ohtsu, Masayasu; Shigeishi, Mitsuhiro

    2003-01-01

    Nucleation of a crack is readily detected by the acoustic emission (AE) method. One powerful technique for AE waveform analysis has been developed as SiGMA (Simplified Green's functions for Moment tensor Analysis), in which crack kinematics (locations, types and orientations) are quantitatively determined. Because these kinematical outcomes are obtained as three-dimensional (3-D) locations and vectors, 3-D visualization is definitely desirable. To this end, a visualization system has been developed by using VRML (Virtual Reality Modeling Language). As an application, the failure process of a reinforced concrete beam is discussed.
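
    To illustrate the kind of 3-D output the record describes (this is not the authors' tool), the sketch below writes a minimal VRML 2.0 scene that places a small sphere at each located AE source; the crack coordinates and sphere radius are invented.

```python
# Hypothetical crack locations (x, y, z) in metres, e.g. from a SiGMA-style analysis.
cracks = [(0.10, 0.05, 0.02), (0.35, 0.04, 0.03), (0.62, 0.06, 0.01)]

def write_vrml(path, points, radius=0.01):
    """Write a VRML 2.0 scene with one red sphere per AE source location."""
    with open(path, "w") as f:
        f.write("#VRML V2.0 utf8\n")
        for x, y, z in points:
            f.write(
                "Transform {\n"
                f"  translation {x} {y} {z}\n"
                "  children [ Shape {\n"
                "    appearance Appearance { material Material { diffuseColor 1 0 0 } }\n"
                f"    geometry Sphere {{ radius {radius} }}\n"
                "  } ]\n"
                "}\n"
            )

write_vrml("ae_cracks.wrl", cracks)
```

    The resulting .wrl file can be opened in any VRML-capable viewer.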

  9. The effects of virtual reality displays on visual attention and detection of signals performance for main control room training

    International Nuclear Information System (INIS)

    Lin Shiaufeng; Lin Chiuhsiang Joe; Wang Rouwen; Yang Lichen; Yang Chihwei; Cheng Tsungchieh; Wang Jyhgang

    2011-01-01

    Nuclear power plants (NPPs) mainly serve the purpose of providing low-cost and stable electricity, but this purpose must rest on the premise of 'safety first.' The reason for this is that nuclear power plant accidents could cause catastrophic damage to people, property, society, and the environment. Therefore, training on a superior, high-reliability system is very important for accident prevention. In recent years, Virtual Reality (VR) technology has advanced very rapidly, as has the technology for e-learning environments. VR systems have been applied to education, NPP safety training and flight simulators. In particular, VR is an interactive and reactive technology; it allows users to interact and navigate with objects in the virtual environment. The development of VR and simulation techniques contributes to an accurate and immersive training environment for NPP operators. A VR-based Main Control Room (MCR) training simulator is a more cost-effective and efficient alternative to traditional simulator-based training methods. VR simulation for MCR training is a complex task. Since VR not only reinforces the visual presentation of the training materials but also provides ways to interact with the training system, it becomes more flexible and possibly more powerful as a training system. In the VR training system, the MCR operators may use just one display to view the wide range of real-world displays. The field of view (FOV) will therefore differ from the real MCR environment, in which many displays exist for the operators to view. Thus the operator's immersion and visual attention will be reduced. This is the drawback of MCR virtual training compared with traditional simulator-based training systems. Therefore, improving the operator's visual attention and the detection of signals in the VR training system is a very important issue. This investigation intends to contribute in assessing benefits of visual attention and

  10. N1 enhancement in synesthesia during visual and audio-visual perception in semantic cross-modal conflict situations: an ERP study

    Directory of Open Access Journals (Sweden)

    Christopher eSinke

    2014-01-01

    Full Text Available Synesthesia entails a special kind of sensory perception, where stimulation in one sensory modality leads to an internally generated perceptual experience of another, non-stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as here the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed changed multimodal integration, thus suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task in combination with visually or auditory-visually presented animate and inanimate objects in an audio-visual congruent and incongruent manner. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found an enhanced amplitude of the N1 component over occipital electrode sites for synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.
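
    For readers unfamiliar with the ERP measure, the N1 effect reported above is typically quantified as the mean amplitude of the trial-averaged waveform in an early post-stimulus window at occipital electrodes. The sketch below is a generic illustration on simulated epochs; the time window, channel indices and sampling rate are assumptions, not the study's parameters.

```python
import numpy as np

FS = 500                  # assumed sampling rate (Hz)
EPOCH_START = -0.1        # epoch starts 100 ms before stimulus onset
N1_WINDOW = (0.15, 0.20)  # assumed N1 window in seconds

def n1_mean_amplitude(epochs, channels, window=N1_WINDOW):
    """epochs: (n_trials, n_channels, n_samples) array in microvolts.
    Returns the mean amplitude of the trial-averaged ERP in the window,
    averaged over the given channel indices."""
    erp = epochs.mean(axis=0)                          # average across trials
    times = EPOCH_START + np.arange(erp.shape[1]) / FS
    mask = (times >= window[0]) & (times <= window[1])
    return float(erp[np.ix_(channels, mask)].mean())

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    epochs = rng.normal(0, 5, size=(100, 32, 300))     # 100 trials, 32 channels
    occipital = [29, 30, 31]                           # hypothetical O1/Oz/O2 indices
    print(f"N1 mean amplitude: {n1_mean_amplitude(epochs, occipital):.2f} µV")
```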

  11. Visualizing planetary data by using 3D engines

    Science.gov (United States)

    Elgner, S.; Adeli, S.; Gwinner, K.; Preusker, F.; Kersten, E.; Matz, K.-D.; Roatsch, T.; Jaumann, R.; Oberst, J.

    2017-09-01

    We examined 3D gaming engines for their usefulness in visualizing large planetary image data sets. These tools allow us to include recent developments in the field of computer graphics in our scientific visualization systems and present data products interactively and in higher quality than before. We started to set up the first applications which will make use of virtual reality (VR) equipment.

  12. Light Video Game Play is Associated with Enhanced Visual Processing of Rapid Serial Visual Presentation Targets.

    Science.gov (United States)

    Howard, Christina J; Wilding, Robert; Guest, Duncan

    2017-02-01

    There is mixed evidence that video game players (VGPs) may demonstrate better performance in perceptual and attentional tasks than non-VGPs (NVGPs). The rapid serial visual presentation task is one such case, where observers respond to two successive targets embedded within a stream of serially presented items. We tested light VGPs (LVGPs) and NVGPs on this task. LVGPs were better at correctly identifying second targets whether or not they were also attempting to respond to the first target. This performance benefit seen for LVGPs suggests enhanced visual processing for briefly presented stimuli even with only very moderate game play. Observers were less accurate at discriminating the orientation of a second target within the stream if it occurred shortly after presentation of the first target, that is to say, they were subject to the attentional blink (AB). We find no evidence for any reduction in AB in LVGPs compared with NVGPs.
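
    As a generic illustration of how the attentional blink is usually quantified (not the authors' analysis code), the sketch below computes second-target accuracy conditional on a correct first-target report (T2|T1) separately for each T1-T2 lag; the trial data are invented.

```python
from collections import defaultdict

# Hypothetical trials: (lag_in_items, t1_correct, t2_correct)
trials = [
    (1, True, True), (1, True, False), (2, True, False), (2, True, False),
    (2, False, True), (3, True, True), (3, True, False), (7, True, True),
    (7, True, True), (7, False, False),
]

def t2_given_t1_accuracy(trials):
    """T2|T1 accuracy per lag: proportion of correct T2 reports among
    trials where T1 was reported correctly."""
    hits, totals = defaultdict(int), defaultdict(int)
    for lag, t1_ok, t2_ok in trials:
        if t1_ok:                       # condition on a correct T1 report
            totals[lag] += 1
            hits[lag] += int(t2_ok)
    return {lag: hits[lag] / totals[lag] for lag in sorted(totals)}

print(t2_given_t1_accuracy(trials))
# {1: 0.5, 2: 0.0, 3: 0.5, 7: 1.0}; the dip at short lags is the blink
```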

  13. Enhancing Time-Connectives with 3D Immersive Virtual Reality (IVR)

    Science.gov (United States)

    Passig, David; Eden, Sigal

    2010-01-01

    This study sought to test the most efficient representation mode with which children with hearing impairment could express a story while producing connectives indicating relations of time and of cause and effect. Using Bruner's (1973, 1986, 1990) representation stages, we tested the comparative effectiveness of Virtual Reality (VR) as a mode of…

  14. Using augmented reality to teach and learn biochemistry.

    Science.gov (United States)

    Vega Garzón, Juan Carlos; Magrini, Marcio Luiz; Galembeck, Eduardo

    2017-09-01

    Understanding metabolism and metabolic pathways constitutes one of the central aims for students of biological sciences. Learning metabolic pathways should be focused on the understanding of general concepts and core principles. New technologies such as Augmented Reality (AR) have shown potential to improve assimilation of abstract biochemistry concepts because students can manipulate 3D molecules in real time. Here we describe an application named Augmented Reality Metabolic Pathways (ARMET), which allowed students to visualize the 3D molecular structure of substrates and products, thus perceiving changes in each molecule. The structural modification of molecules shows students the flow and exchange of compounds and energy through metabolism. © 2017 by The International Union of Biochemistry and Molecular Biology, 45(5):417-420, 2017. © 2017 The International Union of Biochemistry and Molecular Biology.

  15. Markov Processes: Exploring the Use of Dynamic Visualizations to Enhance Student Understanding

    Science.gov (United States)

    Pfannkuch, Maxine; Budgett, Stephanie

    2016-01-01

    Finding ways to enhance introductory students' understanding of probability ideas and theory is a goal of many first-year probability courses. In this article, we explore the potential of a prototype tool for Markov processes using dynamic visualizations to develop in students a deeper understanding of the equilibrium and hitting times…
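
    The two quantities named in the abstract, the equilibrium (stationary) distribution and expected hitting times, can both be obtained from a transition matrix by solving small linear systems. The sketch below is a generic worked example on a made-up three-state chain, not the prototype tool itself.

```python
import numpy as np

# A made-up 3-state transition matrix (rows sum to 1).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

def stationary_distribution(P):
    """Solve pi P = pi with sum(pi) = 1 via a least-squares linear system."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def expected_hitting_times(P, target):
    """Expected number of steps to first reach `target` from each state.
    Solves h_i = 1 + sum_k P[i, k] h_k for i != target, with h_target = 0."""
    n = P.shape[0]
    others = [i for i in range(n) if i != target]
    Q = P[np.ix_(others, others)]
    h_others = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    h = np.zeros(n)
    h[others] = h_others
    return h

print("stationary distribution:", np.round(stationary_distribution(P), 3))
print("expected steps to reach state 2:", np.round(expected_hitting_times(P, 2), 2))
```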

  16. Evaluating the Effects of Immersive Embodied Interaction on Cognition in Virtual Reality

    Science.gov (United States)

    Parmar, Dhaval

    Virtual reality is on the verge of becoming mainstream household technology, as technologies such as head-mounted displays, trackers, and interaction devices are becoming affordable and easily available. Virtual reality (VR) has immense potential in enhancing the fields of education and training, and its power can be used to spark interest and enthusiasm among learners. It is, therefore, imperative to evaluate the risks and benefits that immersive virtual reality poses to the field of education. Research suggests that learning is an embodied process. Learning depends on grounded aspects of the body including action, perception, and interactions with the environment. This research aims to study if immersive embodiment through the means of virtual reality facilitates embodied cognition. A pedagogical VR solution which takes advantage of embodied cognition can lead to enhanced learning benefits. Towards achieving this goal, this research presents a linear continuum for immersive embodied interaction within virtual reality. This research evaluates the effects of three levels of immersive embodied interactions on cognitive thinking, presence, usability, and satisfaction among users in the fields of science, technology, engineering, and mathematics (STEM) education. Results from the presented experiments show that immersive virtual reality is greatly effective in knowledge acquisition and retention, and highly enhances user satisfaction, interest and enthusiasm. Users experience high levels of presence and are profoundly engaged in the learning activities within the immersive virtual environments. The studies presented in this research evaluate pedagogical VR software to train and motivate students in STEM education, and provide an empirical analysis comparing desktop VR (DVR), immersive VR (IVR), and immersive embodied VR (IEVR) conditions for learning. This research also proposes a fully immersive embodied interaction metaphor (IEIVR) for learning of computational

  17. Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a ‘Cocktail Party’

    Science.gov (United States)

    Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David

    2013-01-01

    Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218
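
    Envelope-tracking analyses of this kind generally extract the temporal envelope of the speech waveform and relate it to the recorded neural signal; the published work uses more elaborate methods, so the sketch below is only a simplified illustration on synthetic signals (the sampling rate, lag range and simulated delay are assumptions), not the study's MEG pipeline.

```python
import numpy as np
from scipy.signal import hilbert

FS = 200  # assumed common sampling rate (Hz) after downsampling

def temporal_envelope(speech):
    """Broadband temporal envelope via the analytic signal."""
    return np.abs(hilbert(speech))

def envelope_tracking(neural, speech, max_lag_s=0.3):
    """Peak normalized correlation between the neural signal and the speech
    envelope, over lags at which the brain could follow the stimulus."""
    env = temporal_envelope(speech)
    env = (env - env.mean()) / env.std()
    neu = (neural - neural.mean()) / neural.std()
    max_lag = int(max_lag_s * FS)
    lags = np.arange(0, max_lag + 1)          # neural response lags behind speech
    corrs = [np.corrcoef(env[: len(env) - lag], neu[lag:])[0, 1] for lag in lags]
    best = int(np.argmax(corrs))
    return corrs[best], lags[best] / FS

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    t = np.arange(0, 10, 1 / FS)
    speech = rng.normal(size=t.size) * (1 + np.sin(2 * np.pi * 3 * t))  # 3 Hz modulation
    neural = np.roll(temporal_envelope(speech), int(0.1 * FS)) + rng.normal(0, 1, t.size)
    r, lag = envelope_tracking(neural, speech)
    print(f"peak correlation r = {r:.2f} at lag {lag * 1000:.0f} ms")
```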

  18. Virtual Reality

    Science.gov (United States)

    1993-04-01

    VIRTUAL REALITY. James F. Dailey, Lieutenant Colonel...US" This paper reviews the exciting field of virtual reality. The author describes the basic concepts of virtual reality and finds that its numerous...potential benefits to society could revolutionize everyday life. The various components that make up a virtual reality system are described in detail

  19. Augmented Reality as a Telemedicine Platform for Remote Procedural Training.

    Science.gov (United States)

    Wang, Shiyao; Parsons, Michael; Stone-McLean, Jordan; Rogers, Peter; Boyd, Sarah; Hoover, Kristopher; Meruvia-Pastor, Oscar; Gong, Minglun; Smith, Andrew

    2017-10-10

    Traditionally, rural areas in many countries are limited by a lack of access to health care due to the inherent challenges associated with recruitment and retention of healthcare professionals. Telemedicine, which uses communication technology to deliver medical services over distance, is an economical and potentially effective way to address this problem. In this research, we develop a new telepresence application using an Augmented Reality (AR) system. We explore the use of the Microsoft HoloLens to facilitate and enhance remote medical training. Intrinsic advantages of AR systems enable remote learners to perform complex medical procedures such as Point of Care Ultrasound (PoCUS) without visual interference. This research uses the HoloLens to capture the first-person view of a simulated rural emergency room (ER) through mixed reality capture (MRC) and serves as a novel telemedicine platform with remote pointing capabilities. The mentor's hand gestures are captured using a Leap Motion and virtually displayed in the AR space of the HoloLens. To explore the feasibility of the developed platform, twelve novice medical trainees were guided by a mentor through a simulated ultrasound exploration in a trauma scenario, as part of a pilot user study. The study explores the utility of the system from the trainees, mentor, and objective observers' perspectives and compares the findings to that of a more traditional multi-camera telemedicine solution. The results obtained provide valuable insight and guidance for the development of an AR-supported telemedicine platform.

  20. Augmented Reality as a Telemedicine Platform for Remote Procedural Training

    Science.gov (United States)

    Wang, Shiyao; Parsons, Michael; Stone-McLean, Jordan; Rogers, Peter; Boyd, Sarah; Hoover, Kristopher; Meruvia-Pastor, Oscar; Gong, Minglun; Smith, Andrew

    2017-01-01

    Traditionally, rural areas in many countries are limited by a lack of access to health care due to the inherent challenges associated with recruitment and retention of healthcare professionals. Telemedicine, which uses communication technology to deliver medical services over distance, is an economical and potentially effective way to address this problem. In this research, we develop a new telepresence application using an Augmented Reality (AR) system. We explore the use of the Microsoft HoloLens to facilitate and enhance remote medical training. Intrinsic advantages of AR systems enable remote learners to perform complex medical procedures such as Point of Care Ultrasound (PoCUS) without visual interference. This research uses the HoloLens to capture the first-person view of a simulated rural emergency room (ER) through mixed reality capture (MRC) and serves as a novel telemedicine platform with remote pointing capabilities. The mentor’s hand gestures are captured using a Leap Motion and virtually displayed in the AR space of the HoloLens. To explore the feasibility of the developed platform, twelve novice medical trainees were guided by a mentor through a simulated ultrasound exploration in a trauma scenario, as part of a pilot user study. The study explores the utility of the system from the trainees, mentor, and objective observers’ perspectives and compares the findings to that of a more traditional multi-camera telemedicine solution. The results obtained provide valuable insight and guidance for the development of an AR-supported telemedicine platform. PMID:28994720

  1. Augmented Reality as a Telemedicine Platform for Remote Procedural Training

    Directory of Open Access Journals (Sweden)

    Shiyao Wang

    2017-10-01

    Full Text Available Traditionally, rural areas in many countries are limited by a lack of access to health care due to the inherent challenges associated with recruitment and retention of healthcare professionals. Telemedicine, which uses communication technology to deliver medical services over distance, is an economical and potentially effective way to address this problem. In this research, we develop a new telepresence application using an Augmented Reality (AR) system. We explore the use of the Microsoft HoloLens to facilitate and enhance remote medical training. Intrinsic advantages of AR systems enable remote learners to perform complex medical procedures such as Point of Care Ultrasound (PoCUS) without visual interference. This research uses the HoloLens to capture the first-person view of a simulated rural emergency room (ER) through mixed reality capture (MRC) and serves as a novel telemedicine platform with remote pointing capabilities. The mentor’s hand gestures are captured using a Leap Motion and virtually displayed in the AR space of the HoloLens. To explore the feasibility of the developed platform, twelve novice medical trainees were guided by a mentor through a simulated ultrasound exploration in a trauma scenario, as part of a pilot user study. The study explores the utility of the system from the trainees, mentor, and objective observers’ perspectives and compares the findings to that of a more traditional multi-camera telemedicine solution. The results obtained provide valuable insight and guidance for the development of an AR-supported telemedicine platform.

  2. Enhanced visual memory during hypnosis as mediated by hypnotic responsiveness and cognitive strategies.

    Science.gov (United States)

    Crawford, H J; Allen, S N

    1983-12-01

    To investigate the hypothesis that hypnosis has an enhancing effect on imagery processing, as mediated by hypnotic responsiveness and cognitive strategies, four experiments compared performance of low and high, or low, medium, and high, hypnotically responsive subjects in waking and hypnosis conditions on a successive visual memory discrimination task that required detecting differences between successively presented picture pairs in which one member of the pair was slightly altered. Consistently, hypnotically responsive individuals showed enhanced performance during hypnosis, whereas nonresponsive ones did not. Hypnotic responsiveness correlated .52 (p less than .001) with enhanced performance during hypnosis, but it was uncorrelated with waking performance (Experiment 3). Reaction time was not affected by hypnosis, although high hypnotizables were faster than lows in their responses (Experiments 1 and 2). Subjects reported enhanced imagery vividness on the self-report Vividness of Visual Imagery Questionnaire during hypnosis. The differential effect between lows and highs was in the anticipated direction but not significant (Experiments 1 and 2). As anticipated, hypnosis had no significant effect on a discrimination task that required determining whether there were differences between pairs of simultaneously presented pictures. Two cognitive strategies that appeared to mediate visual memory performance were reported: (a) detail strategy, which involved the memorization and rehearsal of individual details for memory, and (b) holistic strategy, which involved looking at and remembering the whole picture with accompanying imagery. Both lows and highs reported similar predominantly detail-oriented strategies during waking; only highs shifted to a significantly more holistic strategy during hypnosis. These findings suggest that high hypnotizables have a greater capacity for cognitive flexibility (Batting, 1979) than do lows. Results are discussed in terms of several

  3. Handling Occlusions for Robust Augmented Reality Systems

    Directory of Open Access Journals (Sweden)

    Maidi Madjid

    2010-01-01

    Full Text Available Abstract In Augmented Reality applications, the human perception is enhanced with computer-generated graphics. These graphics must be exactly registered to real objects in the scene and this requires an effective Augmented Reality system to track the user's viewpoint. In this paper, a robust tracking algorithm based on coded fiducials is presented. Square targets are identified and pose parameters are computed using a hybrid approach based on a direct method combined with the Kalman filter. An important factor for providing a robust Augmented Reality system is the correct handling of targets occlusions by real scene elements. To overcome tracking failure due to occlusions, we extend our method using an optical flow approach to track visible points and maintain virtual graphics overlaying when targets are not identified. Our proposed real-time algorithm is tested with different camera viewpoints under various image conditions and shows to be accurate and robust.
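
    The occlusion-handling idea in this record (compute the target pose from its corners when the fiducial is identified, and fall back to optical-flow tracking of the previously seen corners when it is occluded) can be sketched with standard OpenCV calls. This is only a hedged illustration, not the paper's algorithm: the fiducial detector is a placeholder, the camera intrinsics are assumed, and the Kalman smoothing used in the paper is omitted.

```python
# Hedged illustration of the occlusion fallback (not the paper's code): pose
# from fiducial corners when the target is identified, Lucas-Kanade optical
# flow on the last seen corners when it is occluded. The detector is a
# placeholder and the camera intrinsics are assumed.
import cv2
import numpy as np

MARKER_SIZE = 0.05  # marker side length in metres (assumed)
OBJ_PTS = np.array([[0, 0, 0], [MARKER_SIZE, 0, 0],
                    [MARKER_SIZE, MARKER_SIZE, 0], [0, MARKER_SIZE, 0]], dtype=np.float32)

def detect_marker_corners(gray):
    """Placeholder for the coded-fiducial detector; returns a 4x2 array or None."""
    return None  # pretend the target is occluded in this frame

def track_pose(prev_gray, gray, prev_corners, camera_matrix, dist_coeffs):
    corners = detect_marker_corners(gray)
    if corners is None and prev_corners is not None:
        # Occlusion fallback: propagate the last known corners with optical flow.
        pts = prev_corners.reshape(-1, 1, 2).astype(np.float32)
        tracked, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        if status is not None and status.all():
            corners = tracked.reshape(-1, 2)
    if corners is None:
        return None  # target lost: keep the last pose or drop the overlay
    ok, rvec, tvec = cv2.solvePnP(OBJ_PTS, corners.astype(np.float32),
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec, corners) if ok else None
```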

  4. Enhancing performance expectancies through visual illusions facilitates motor learning in children.

    Science.gov (United States)

    Bahmani, Moslem; Wulf, Gabriele; Ghadiri, Farhad; Karimi, Saeed; Lewthwaite, Rebecca

    2017-10-01

    In a recent study by Chauvel, Wulf, and Maquestiaux (2015), golf putting performance was found to be affected by the Ebbinghaus illusion. Specifically, adult participants demonstrated more effective learning when they practiced with a hole that was surrounded by small circles, making it look larger, than when the hole was surrounded by large circles, making it look smaller. The present study examined whether this learning advantage would generalize to children who are assumed to be less sensitive to the visual illusion. Two groups of 10-year olds practiced putting golf balls from a distance of 2m, with perceived larger or smaller holes resulting from the visual illusion. Self-efficacy was increased in the group with the perceived larger hole. The latter group also demonstrated more accurate putting performance during practice. Importantly, learning (i.e., delayed retention performance without the illusion) was enhanced in the group that practiced with the perceived larger hole. The findings replicate previous results with adult learners and are in line with the notion that enhanced performance expectancies are key to optimal motor learning (Wulf & Lewthwaite, 2016). Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Visuotactile motion congruence enhances gamma-band activity in visual and somatosensory cortices.

    Science.gov (United States)

    Krebber, Martin; Harwood, James; Spitzer, Bernhard; Keil, Julian; Senkowski, Daniel

    2015-08-15

    When touching and viewing a moving surface, our visual and somatosensory systems receive congruent spatiotemporal input. Behavioral studies have shown that motion congruence facilitates interplay between visual and tactile stimuli, but the neural mechanisms underlying this interplay are not well understood. Neural oscillations play a role in motion processing and multisensory integration. They may also be crucial for visuotactile motion processing. In this electroencephalography study, we applied linear beamforming to examine the impact of visuotactile motion congruence on beta and gamma band activity (GBA) in visual and somatosensory cortices. Visual and tactile inputs comprised gratings that moved either in the same or different directions. Participants performed a target detection task that was unrelated to motion congruence. While there were no effects in the beta band (13-21 Hz), the power of GBA (50-80 Hz) in visual and somatosensory cortices was larger for congruent compared with incongruent motion stimuli. This suggests enhanced bottom-up multisensory processing when visual and tactile gratings moved in the same direction. Supporting its behavioral relevance, GBA was correlated with shorter reaction times in the target detection task. We conclude that motion congruence plays an important role for the integrative processing of visuotactile stimuli in sensory cortices, as reflected by oscillatory responses in the gamma band. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Visual Imagery and False Memory for Pictures: A Functional Magnetic Resonance Imaging Study in Healthy Participants

    OpenAIRE

    Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas

    2017-01-01

    BACKGROUND: Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. METHODS: A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, ...

  7. Designing Android Based Augmented Reality Location-Based Service Application

    Directory of Open Access Journals (Sweden)

    Alim Hardiansyah

    2018-01-01

    Full Text Available Android is a Linux-based operating system for smartphones. Android provides an open platform for developers to create their own applications. The most widely developed and used type of application today is the location-based application, which personalizes the service for the mobile device user according to their location. Location-based services also give developers an opportunity to increase the value of their services. One of the technologies that can be combined with location-based applications is augmented reality, which combines the virtual world with the real one. With the assistance of augmented reality, our surrounding environment can be presented and interacted with in digital form, and information about nearby objects and the environment can be added to the augmented reality view. Against this background, the authors implemented these technologies in an Android application as a final project for the bachelor degree in the Department of Informatics Engineering, Faculty of Information Technology and Visual Communication, Al Kamal Science and Technology Institute. The application can be used to locate schools using location-based service technology, with the assistance of navigation applications such as Waze and Google Maps, in the form of live directions on the smartphone.
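
    The core geometry of such a location-based AR overlay is the distance and compass bearing from the device's GPS fix to a point of interest; the AR layer then compares that bearing with the device heading to decide where to draw the marker. The snippet below is a generic illustration of that calculation (haversine distance and initial bearing), not code from the application; the example coordinates are made up.

```python
# Generic illustration (not the application's code) of the geometry behind a
# location-based AR marker: distance and compass bearing from the device's
# GPS fix to a point of interest. Example coordinates are made up.
import math

EARTH_RADIUS_M = 6_371_000.0

def distance_and_bearing(lat1, lon1, lat2, lon2):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # Haversine great-circle distance.
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    # Initial bearing, in degrees clockwise from north; the AR layer compares
    # this with the device compass heading to place the on-screen marker.
    y = math.sin(dlmb) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing

# Device somewhere in central Jakarta, hypothetical school nearby.
print(distance_and_bearing(-6.2000, 106.8167, -6.1950, 106.8230))
```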

  8. Telemanipulation, telepresence, and virtual reality for surgery in the year 2000

    Science.gov (United States)

    Satava, Richard M.

    1995-12-01

    The new technologic revolution in medicine is based upon information technologies, and telemanipulation, telepresence and virtual reality are essential components. Telepresence surgery returns the look and feel of `open surgery' to the surgeon and promises enhancement of physical capabilities above normal human performance. Virtual reality provides basic medical education, simulation of surgical procedures, medical forces and disaster medicine practice, and virtual prototyping of medical equipment.

  9. Therapeutic Media: Treating PTSD with Virtual Reality Exposure Therapy

    Directory of Open Access Journals (Sweden)

    Kathrin Friedrich

    2016-09-01

    Full Text Available Applying head-mounted displays (HMDs) and virtual reality scenarios in virtual reality exposure therapy (VRET) promises to alleviate combat-related post-traumatic stress disorders (among others). Its basic premise is that, through virtual scenarios, patients may re-engage immersively with situations that provoke anxiety, thereby reducing fear and psychosomatic stress. In this context, HMDs and visualizations should be considered not merely as devices for entertainment purposes or tools for achieving pragmatic objectives but also as a means to instruct and guide patients’ imagination and visual perception in triggering traumatic experiences. Under what perceptual and therapeutic conditions is virtual therapy to be considered effective? Who is the “ideal” patient for such therapy regimes, both in terms of his/her therapeutic indications and his/her perceptual readiness to engage with VR scenarios? In short, how are “treatable” patients conceptualized by and within virtual therapy? From a media-theory perspective, this essay critically explores various aspects of the VRET application Bravemind in order to shed light on conditions of virtual exposure therapy and conceptions of subjectivity and traumatic experience that are embodied and replicated by such HMD-based technology.

  10. FlyAR: augmented reality supported micro aerial vehicle navigation.

    Science.gov (United States)

    Zollmann, Stefanie; Hoppe, Christof; Langlotz, Tobias; Reitmayr, Gerhard

    2014-04-01

    Micro aerial vehicles equipped with high-resolution cameras can be used to create aerial reconstructions of an area of interest. In that context automatic flight path planning and autonomous flying is often applied but so far cannot fully replace the human in the loop, supervising the flight on-site to assure that there are no collisions with obstacles. Unfortunately, this workflow yields several issues, such as the need to mentally transfer the aerial vehicle’s position between 2D map positions and the physical environment, and the complicated depth perception of objects flying in the distance. Augmented Reality can address these issues by bringing the flight planning process on-site and visualizing the spatial relationship between the planned or current positions of the vehicle and the physical environment. In this paper, we present Augmented Reality supported navigation and flight planning of micro aerial vehicles by augmenting the user’s view with relevant information for flight planning and live feedback for flight supervision. Furthermore, we introduce additional depth hints supporting the user in understanding the spatial relationship of virtual waypoints in the physical world and investigate the effect of these visualization techniques on the spatial understanding.

  11. Analysis of brain activity and response during monoscopic and stereoscopic visualization

    Science.gov (United States)

    Calore, Enrico; Folgieri, Raffaella; Gadia, Davide; Marini, Daniele

    2012-03-01

    Stereoscopic visualization in cinematography and Virtual Reality (VR) creates an illusion of depth by means of two bidimensional images corresponding to different views of a scene. This perceptual trick is used to enhance the emotional response and the sense of presence and immersivity of the observers. An interesting question is whether and how it is possible to measure and analyze the level of emotional involvement and attention of the observers during a stereoscopic visualization of a movie or of a virtual environment. These research aims represent a challenge, due to the large number of sensorial, physiological and cognitive stimuli involved. In this paper we begin this research by analyzing possible differences in the brain activity of subjects during the viewing of monoscopic or stereoscopic contents. To this aim, we have performed some preliminary experiments collecting electroencephalographic (EEG) data of a group of users using a Brain-Computer Interface (BCI) during the viewing of stereoscopic and monoscopic short movies in a VR immersive installation.
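
    As a purely illustrative companion to the analysis described above, one common way to compare EEG activity between monoscopic and stereoscopic viewing is to contrast spectral band power estimated with Welch's method; the channel data, sampling rate, and choice of the alpha band below are all placeholder assumptions, not the authors' protocol.

```python
# Illustrative only: contrasting EEG alpha-band power between monoscopic and
# stereoscopic viewing with Welch's method. The channel data, sampling rate
# and band limits are placeholder assumptions, not the authors' protocol.
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz (assumed)

def band_power(signal, fs, lo, hi):
    f, pxx = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (f >= lo) & (f <= hi)
    return np.trapz(pxx[mask], f[mask])

rng = np.random.default_rng(0)
mono = rng.normal(size=FS * 60)    # one minute of a channel during the 2D clip
stereo = rng.normal(size=FS * 60)  # same channel during the 3D clip

alpha_mono = band_power(mono, FS, 8, 12)
alpha_stereo = band_power(stereo, FS, 8, 12)
print(f"alpha power ratio (stereo/mono): {alpha_stereo / alpha_mono:.2f}")
```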

  12. Visual Landmarks Facilitate Rodent Spatial Navigation in Virtual Reality Environments

    Science.gov (United States)

    Youngstrom, Isaac A.; Strowbridge, Ben W.

    2012-01-01

    Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain…

  13. Evaluation of Binocular Eye Trackers and Algorithms for 3D Gaze Interaction in Virtual Reality Environments

    OpenAIRE

    Thies Pfeiffer; Ipke Wachsmuth; Marc E. Latoschik

    2009-01-01

    Tracking user's visual attention is a fundamental aspect in novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces or dialogue-based communications with virtual and real agents greatly benefit from the analysis of the user's visual attention as a vital source for deictic references or turn-taking signals. Current approaches to determine visual attention rely primarily on monocular eye trackers. Hence they are restricted to the interpretation of...

  14. Learning Program for Enhancing Visual Literacy for Non-Design Students Using a CMS to Share Outcomes

    Science.gov (United States)

    Ariga, Taeko; Watanabe, Takashi; Otani, Toshio; Masuzawa, Toshimitsu

    2016-01-01

    This study proposes a basic learning program for enhancing visual literacy using an original Web content management system (Web CMS) to share students' outcomes in class as a blog post. It seeks to reinforce students' understanding and awareness of the design of visual content. The learning program described in this research focuses on addressing…

  15. In Vivo Evaluation of the Visual Pathway in Streptozotocin-Induced Diabetes by Diffusion Tensor MRI and Contrast Enhanced MRI.

    Directory of Open Access Journals (Sweden)

    Swarupa Kancherla

    Full Text Available Visual function has been shown to deteriorate prior to the onset of retinopathy in some diabetic patients and experimental animal models. This suggests the involvement of the brain's visual system in the early stages of diabetes. In this study, we tested this hypothesis by examining the integrity of the visual pathway in a diabetic rat model using in vivo multi-modal magnetic resonance imaging (MRI). Ten-week-old Sprague-Dawley rats were divided into an experimental diabetic group by intraperitoneal injection of 65 mg/kg streptozotocin in 0.01 M citric acid, and a sham control group by intraperitoneal injection of citric acid only. One month later, diffusion tensor MRI (DTI) was performed to examine the white matter integrity in the brain, followed by chromium-enhanced MRI of retinal integrity and manganese-enhanced MRI of anterograde manganese transport along the visual pathway. Prior to MRI experiments, the streptozotocin-induced diabetic rats showed significantly smaller weight gain and higher blood glucose level than the control rats. DTI revealed significantly lower fractional anisotropy and higher radial diffusivity in the prechiasmatic optic nerve of the diabetic rats compared to the control rats. No apparent difference was observed in the axial diffusivity of the optic nerve, the chromium enhancement in the retina, or the manganese enhancement in the lateral geniculate nucleus and superior colliculus between groups. Our results suggest that streptozotocin-induced diabetes leads to early injury in the optic nerve when no substantial change in retinal integrity or anterograde transport along the visual pathways was observed in MRI using contrast agent enhancement. DTI may be a useful tool for detecting and monitoring early pathophysiological changes in the visual system of experimental diabetes non-invasively.
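
    The diffusion metrics reported above (fractional anisotropy, axial and radial diffusivity) are simple functions of the diffusion tensor eigenvalues; the helper below, which is not taken from the study, just makes those standard definitions concrete with illustrative eigenvalues.

```python
# Standard definitions of the reported diffusion metrics in terms of the
# tensor eigenvalues; the helper and example values are illustrative and not
# taken from the study.
import numpy as np

def dti_metrics(l1, l2, l3):
    lam = np.array([l1, l2, l3], dtype=float)
    md = lam.mean()                                                  # mean diffusivity
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))   # fractional anisotropy
    ad = l1                                                          # axial diffusivity
    rd = (l2 + l3) / 2.0                                             # radial diffusivity
    return {"FA": fa, "MD": md, "AD": ad, "RD": rd}

# Example eigenvalues in units of 10^-3 mm^2/s:
print(dti_metrics(1.7, 0.4, 0.3))
```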

  16. Virtual reality simulators for rock engineering related training.

    CSIR Research Space (South Africa)

    Squelch, A

    1997-12-01

    Full Text Available Virtual reality (VR) has been investigated by SIMRAC and CSIR Miningtek as a means of providing an enhancement to current training methods that will lead to more effective hazard awareness training programmes. A VR training simulator developed under...

  17. Enhanced Visualization of Hematoxylin and Eosin Stained Pathological Characteristics by Phasor Approach.

    Science.gov (United States)

    Luo, Teng; Lu, Yuan; Liu, Shaoxiong; Lin, Danying; Qu, Junle

    2017-09-05

    The phasor approach to fluorescence lifetime imaging microscopy (FLIM) is used to identify different types of tissues from hematoxylin and eosin (H&E) stained basal cell carcinoma (BCC) sections. The results suggest that working directly on the phasor space with the clustering assignment achieves immunofluorescence-like simultaneous five- or six-color imaging by using multiplexed fluorescence lifetimes of H&E. The phasor approach is particularly effective for enhanced visualization of the abnormal morphology of a suspected nidus. Moreover, the phasor approach to H&E FLIM data can determine the actual paths or the infiltrating trajectories of basophils and immune cells associated with the preneoplastic or neoplastic skin lesions. The integration of the phasor approach with routine histology demonstrated its value for skin cancer prevention and early detection. We therefore believe that the phasor analysis of H&E tissue sections is an enhanced visualization tool with the potential to simplify the preparation process of special staining and serve as color-contrast-aided imaging in clinical pathological examination.
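
    The phasor transform underlying this kind of analysis maps each pixel's fluorescence decay to a pair of coordinates (g, s) evaluated at the laser repetition frequency, after which pixels can be clustered directly in phasor space. The sketch below is a generic illustration with a synthetic single-exponential decay, not the authors' processing code; the bin width, repetition rate, and lifetime are assumed values.

```python
# Generic phasor transform for a FLIM decay (not the authors' code): each
# decay I(t) maps to coordinates (g, s) at the laser repetition frequency.
# Bin width, repetition rate and lifetime below are assumed values.
import numpy as np

def phasor_coordinates(decay, dt, rep_rate_hz, harmonic=1):
    """Map a decay histogram (counts per time bin of width dt) to (g, s)."""
    t = (np.arange(decay.size) + 0.5) * dt
    omega = 2.0 * np.pi * rep_rate_hz * harmonic
    total = decay.sum()
    g = np.sum(decay * np.cos(omega * t)) / total
    s = np.sum(decay * np.sin(omega * t)) / total
    return g, s

# Example: a single-exponential decay (tau = 2.5 ns) sampled over one 12.5 ns period.
tau, dt, rep = 2.5e-9, 12.5e-9 / 256, 80e6
decay = np.exp(-(np.arange(256) + 0.5) * dt / tau)
print(phasor_coordinates(decay, dt, rep))  # lands near the universal semicircle
```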

  18. Signal enhancement, not active suppression, follows the contingent capture of visual attention.

    Science.gov (United States)

    Livingstone, Ashley C; Christie, Gregory J; Wright, Richard D; McDonald, John J

    2017-02-01

    Irrelevant visual cues capture attention when they possess a task-relevant feature. Electrophysiologically, this contingent capture of attention is evidenced by the N2pc component of the visual event-related potential (ERP) and an enlarged ERP positivity over the occipital hemisphere contralateral to the cued location. The N2pc reflects an early stage of attentional selection, but presently it is unclear what the contralateral ERP positivity reflects. One hypothesis is that it reflects the perceptual enhancement of the cued search-array item; another hypothesis is that it is time-locked to the preceding cue display and reflects active suppression of the cue itself. Here, we varied the time interval between a cue display and a subsequent target display to evaluate these competing hypotheses. The results demonstrated that the contralateral ERP positivity is tightly time-locked to the appearance of the search display rather than the cue display, thereby supporting the perceptual enhancement hypothesis and disconfirming the cue-suppression hypothesis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Effects of virtual reality training using Nintendo Wii and treadmill walking exercise on balance and walking for stroke patients.

    Science.gov (United States)

    Bang, Yo-Soon; Son, Kyung Hyun; Kim, Hyun Jin

    2016-11-01

    [Purpose] The purpose of this study is to investigate the effects of virtual reality training using Nintendo Wii on balance and walking for stroke patients. [Subjects and Methods] Forty patients with stroke were randomly divided into two exercise program groups: virtual reality training (n=20) and treadmill (n=20). The subjects underwent their 40-minute exercise program three times a week for eight weeks. Their balance and walking were measured before and after the complete program. We measured the left/right weight-bearing and the anterior/posterior weight-bearing for balance, as well as stance phase, swing phase, and cadence for walking. [Results] For balance, both groups showed significant differences in the left/right and anterior/posterior weight-bearing, with significant post-program differences between the groups. For walking, there were significant differences in the stance phase, swing phase, and cadence of the virtual reality training group. [Conclusion] The results of this study suggest that virtual reality training providing visual feedback may enable stroke patients to directly adjust their incorrect weight center and shift visually. Virtual reality training may be appropriate for patients who need improved balance and walking ability by stimulating their interest in performing planned exercises on a consistent basis.

  20. Enabling scientific workflows in virtual reality

    Science.gov (United States)

    Kreylos, O.; Bawden, G.; Bernardin, T.; Billen, M.I.; Cowgill, E.S.; Gold, R.D.; Hamann, B.; Jadamec, M.; Kellogg, L.H.; Staadt, O.G.; Sumner, D.Y.

    2006-01-01

    To advance research and improve the scientific return on data collection and interpretation efforts in the geosciences, we have developed methods of interactive visualization, with a special focus on immersive virtual reality (VR) environments. Earth sciences employ a strongly visual approach to the measurement and analysis of geologic data due to the spatial and temporal scales over which such data range. As observations and simulations increase in size and complexity, the Earth sciences are challenged to manage and interpret increasing amounts of data. Reaping the full intellectual benefits of immersive VR requires us to tailor exploratory approaches to scientific problems. These applications build on the visualization method's strengths, using both 3D perception and interaction with data and models, to take advantage of the skills and training of the geological scientists exploring their data in the VR environment. This interactive approach has enabled us to develop a suite of tools that are adaptable to a range of problems in the geosciences and beyond. Copyright © 2008 by the Association for Computing Machinery, Inc.

  1. Discovering new methods of data fusion, visualization, and analysis in 3D immersive environments for hyperspectral and laser altimetry data

    Science.gov (United States)

    Moore, C. A.; Gertman, V.; Olsoy, P.; Mitchell, J.; Glenn, N. F.; Joshi, A.; Norpchen, D.; Shrestha, R.; Pernice, M.; Spaete, L.; Grover, S.; Whiting, E.; Lee, R.

    2011-12-01

    Immersive virtual reality environments such as the IQ-Station or CAVE (Cave Automated Virtual Environment) offer new and exciting ways to visualize and explore scientific data and are powerful research and educational tools. Combining remote sensing data from a range of sensor platforms in immersive 3D environments can enhance the spectral, textural, spatial, and temporal attributes of the data, which enables scientists to interact and analyze the data in ways never before possible. Visualization and analysis of large remote sensing datasets in immersive environments requires software customization for integrating LiDAR point cloud data with hyperspectral raster imagery, the generation of quantitative tools for multidimensional analysis, and the development of methods to capture 3D visualizations for stereographic playback. This study uses hyperspectral and LiDAR data acquired over the China Hat geologic study area near Soda Springs, Idaho, USA. The data are fused into a 3D image cube for interactive data exploration and several methods of recording and playback are investigated that include: 1) creating and implementing a Virtual Reality User Interface (VRUI) patch configuration file to enable recording and playback of VRUI interactive sessions within the CAVE and 2) using the LiDAR and hyperspectral remote sensing data and GIS data to create an ArcScene 3D animated flyover, where left- and right-eye visuals are captured from two independent monitors for playback in a stereoscopic player. These visualizations can be used as outreach tools to demonstrate how integrated data and geotechnology techniques can help scientists see, explore, and more adequately comprehend scientific phenomena, both real and abstract.
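
    A basic step in the kind of fusion described above is attaching a hyperspectral spectrum to each LiDAR return by locating the raster cell it falls in, so that the points carried into the immersive display hold both geometry and spectra. The snippet below is only a schematic of that lookup under an assumed north-up geotransform, not the project's pipeline; the grid origin, pixel size, and data are illustrative.

```python
# Schematic of the basic fusion step (not the project's pipeline): each LiDAR
# return is assigned the spectrum of the hyperspectral cell it falls in, under
# an assumed north-up geotransform. Grid origin, pixel size and data are toys.
import numpy as np

def fuse_lidar_hyperspectral(points_xyz, cube, x0, y0, pixel_size):
    """points_xyz: (N, 3) LiDAR points; cube: (rows, cols, bands) reflectance grid."""
    cols = ((points_xyz[:, 0] - x0) / pixel_size).astype(int)
    rows = ((y0 - points_xyz[:, 1]) / pixel_size).astype(int)  # y decreases downward
    inside = (rows >= 0) & (rows < cube.shape[0]) & (cols >= 0) & (cols < cube.shape[1])
    spectra = np.full((points_xyz.shape[0], cube.shape[2]), np.nan)
    spectra[inside] = cube[rows[inside], cols[inside]]
    return np.hstack([points_xyz, spectra])  # (N, 3 + bands) fused records

pts = np.array([[2.5, 97.5, 10.0], [5.1, 95.2, 12.0], [8.9, 91.0, 9.0], [50.0, 50.0, 0.0]])
cube = np.random.rand(10, 10, 5)  # 10 x 10 cells, 5 bands, 1 m pixels
print(fuse_lidar_hyperspectral(pts, cube, x0=0.0, y0=100.0, pixel_size=1.0).shape)
```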

  2. Conditioned sounds enhance visual processing.

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    Full Text Available This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (= conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, -50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifields. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.

  3. Is Visual Selective Attention in Deaf Individuals Enhanced or Deficient? The Case of the Useful Field of View

    Science.gov (United States)

    Dye, Matthew W. G.; Hauser, Peter C.; Bavelier, Daphne

    2009-01-01

    Background Early deafness leads to enhanced attention in the visual periphery. Yet, whether this enhancement confers advantages in everyday life remains unknown, as deaf individuals have been shown to be more distracted by irrelevant information in the periphery than their hearing peers. Here, we show that, in a complex attentional task, a performance advantage results for deaf individuals. Methodology/Principal Findings We employed the Useful Field of View (UFOV) which requires central target identification concurrent with peripheral target localization in the presence of distractors – a divided, selective attention task. First, the comparison of deaf and hearing adults with or without sign language skills establishes that deafness and not sign language use drives UFOV enhancement. Second, UFOV performance was enhanced in deaf children, but only after 11 years of age. Conclusions/Significance This work demonstrates that, following early auditory deprivation, visual attention resources toward the periphery slowly get augmented to eventually result in a clear behavioral advantage by pre-adolescence on a selective visual attention task. PMID:19462009

  4. Virtual reality applied to hepatic surgery simulation: the next revolution.

    Science.gov (United States)

    Marescaux, J; Clément, J M; Tassetti, V; Koehl, C; Cotin, S; Russier, Y; Mutter, D; Delingette, H; Ayache, N

    1998-11-01

    This article describes a preliminary work on virtual reality applied to liver surgery and discusses the repercussions of assisted surgical strategy and surgical simulation on tomorrow's surgery. Liver surgery is considered difficult because of the complexity and variability of the organ. Common generic tools for presurgical medical image visualization do not fulfill the requirements for the liver, restricting comprehension of a patient's specific liver anatomy. Using data from the National Library of Medicine, a realistic three-dimensional image was created, including the envelope and the four internal arborescences. A computer interface was developed to manipulate the organ and to define surgical resection planes according to internal anatomy. The first step of surgical simulation was implemented, providing the organ with real-time deformation computation. The three-dimensional anatomy of the liver could be clearly visualized. The virtual organ could be manipulated and a resection defined depending on the anatomic relations between the arborescences, the tumor, and the external envelope. The resulting parts could also be visualized and manipulated. The simulation allowed the deformation of a liver model in real time by means of a realistic laparoscopic tool. Three-dimensional visualization of the organ in relation to the pathology is of great help to appreciate the complex anatomy of the liver. Using virtual reality concepts (navigation, interaction, and immersion), surgical planning, training, and teaching for this complex surgical procedure may be possible. The ability to practice a given gesture repeatedly will revolutionize surgical training, and the combination of surgical planning and simulation will improve the efficiency of intervention, leading to optimal care delivery.

  5. The Reality of Virtual Reality Product Development

    Science.gov (United States)

    Dever, Clark

    Virtual Reality and Augmented Reality are emerging areas of research and product development in enterprise companies. This talk will discuss industry standard tools and current areas of application in the commercial market. Attendees will gain insights into how to research, design, and (most importantly) ship, world class products. The presentation will recount the lessons learned to date developing a Virtual Reality tool to solve physics problems resulting from trying to perform aircraft maintenance on ships at sea.

  6. Transfer of Skill from a Virtual Reality Trainer to Real Juggling

    Directory of Open Access Journals (Sweden)

    Gopher Daniel

    2011-12-01

    Full Text Available The purpose of this study was to evaluate transfer of training from a virtual reality environment that captures visual and temporal-spatial aspects of juggling, but not the motor demands of juggling. Transfer of skill to real juggling was examined by comparing juggling performance of novices that either experienced both the virtual training protocol and real juggling practice, or only practiced real juggling. After ten days of training, participants who have alternated between real and virtual training demonstrated comparable performance to those who only practiced real juggling. Moreover, they adapted better to instructed changes in temporal-spatial constraints. These results imply that juggling relevant skill subcomponents can be trained in the virtual environment, and support the notion that cognitive aspects of a skill can be separately trained to enhance the acquisition of a complex perceptual-motor task. This study was performed within the SKILLS integrated project of the EC 6th framework.

  7. Motor Simulation without Motor Expertise: Enhanced Corticospinal Excitability in Visually Experienced Dance Spectators

    Science.gov (United States)

    Jola, Corinne; Abedian-Amiri, Ali; Kuppuswamy, Annapoorna; Pollick, Frank E.; Grosbras, Marie-Hélène

    2012-01-01

    The human “mirror-system” is suggested to play a crucial role in action observation and execution, and is characterized by activity in the premotor and parietal cortices during the passive observation of movements. The previous motor experience of the observer has been shown to enhance the activity in this network. Yet visual experience could also have a determinant influence when watching more complex actions, as in dance performances. Here we tested the impact visual experience has on motor simulation when watching dance, by measuring changes in corticospinal excitability. We also tested the effects of empathic abilities. To fully match the participants' long-term visual experience with the present experimental setting, we used three live solo dance performances: ballet, Indian dance, and non-dance. Participants were either frequent dance spectators of ballet or Indian dance, or “novices” who never watched dance. None of the spectators had been physically trained in these dance styles. Transcranial magnetic stimulation was used to measure corticospinal excitability by means of motor-evoked potentials (MEPs) in both the hand and the arm, because the hand is specifically used in Indian dance and the arm is frequently engaged in ballet dance movements. We observed that frequent ballet spectators showed larger MEP amplitudes in the arm muscles when watching ballet compared to when they watched other performances. We also found that the higher Indian dance spectators scored on the fantasy subscale of the Interpersonal Reactivity Index, the larger their MEPs were in the arms when watching Indian dance. Our results show that even without physical training, corticospinal excitability can be enhanced as a function of either visual experience or the tendency to imaginatively transpose oneself into fictional characters. We suggest that spectators covertly simulate the movements for which they have acquired visual experience, and that empathic abilities heighten

  8. Visualization of spatial-temporal data based on 3D virtual scene

    Science.gov (United States)

    Wang, Xianghong; Liu, Jiping; Wang, Yong; Bi, Junfang

    2009-10-01

    The main purpose of this paper is to realize three-dimensional dynamic visualization of spatial-temporal data in a three-dimensional virtual scene, using 3D visualization technology combined with GIS, so that people's ability to cognize time and space is enhanced and improved through dynamic symbol design and interactive expression. Using particle systems, three-dimensional simulation, virtual reality and other visual means, we can simulate how the spatial location and attribute information of geographical entities change over time, explore and analyze their movement and transformation rules interactively, and also replay history and forecast the future. The main research objects in this paper are vehicle tracks and typhoon paths: through three-dimensional dynamic simulation of these tracks, we realize timely monitoring of their trends and replay of their historical tracks. Visualization techniques for spatial-temporal data in a three-dimensional virtual scene provide an excellent cognitive instrument for spatial-temporal information; they not only show the changes and developments of a situation with added clarity, but can also be used to predict and deduce future developments and changes.

  9. Molecular Rift: Virtual Reality for Drug Designers.

    Science.gov (United States)

    Norrby, Magnus; Grebner, Christoph; Eriksson, Joakim; Boström, Jonas

    2015-11-23

    Recent advances in interaction design have created new ways to use computers. One example is the ability to create enhanced 3D environments that simulate physical presence in the real world--a virtual reality. This is relevant to drug discovery since molecular models are frequently used to obtain deeper understandings of, say, ligand-protein complexes. We have developed a tool (Molecular Rift), which creates a virtual reality environment steered with hand movements. Oculus Rift, a head-mounted display, is used to create the virtual settings. The program is controlled by gesture-recognition, using the gaming sensor MS Kinect v2, eliminating the need for standard input devices. The Open Babel toolkit was integrated to provide access to powerful cheminformatics functions. Molecular Rift was developed with a focus on usability, including iterative test-group evaluations. We conclude with reflections on virtual reality's future capabilities in chemistry and education. Molecular Rift is open source and can be downloaded from GitHub.
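
    Since Molecular Rift integrates the Open Babel toolkit, the flavour of cheminformatics call involved can be illustrated with Open Babel's Python bindings: parse a SMILES string, generate 3D coordinates, and pass atom positions to the renderer. This is a hedged sketch assuming the Open Babel 3.x pybel module is installed; it is not code from Molecular Rift itself.

```python
# Hedged sketch assuming the Open Babel 3.x Python bindings are installed;
# not code from Molecular Rift. Parse a SMILES string, generate 3D
# coordinates, and collect atom positions for the VR renderer.
from openbabel import pybel

def atoms_for_rendering(smiles):
    mol = pybel.readstring("smi", smiles)
    mol.make3D()  # generate and optimize 3D coordinates
    return [(atom.atomicnum, atom.coords) for atom in mol.atoms]

# Example: ethanol; each entry is (atomic number, (x, y, z)) for the scene graph.
for entry in atoms_for_rendering("CCO"):
    print(entry)
```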

  10. More than visual literacy: art and the enhancement of tolerance for ambiguity and empathy.

    Science.gov (United States)

    Bentwich, Miriam Ethel; Gilbey, Peter

    2017-11-10

    Comfort with ambiguity, mostly associated with the acceptance of multiple meanings, is a core characteristic of successful clinicians. Yet past studies indicate that medical students and junior physicians feel uncomfortable with ambiguity. Visual Thinking Strategies (VTS) is a pedagogic approach involving discussions of art works and deciphering the different possible meanings entailed in them. However, the contribution of art to the possible enhancement of the tolerance for ambiguity among medical students has not yet been adequately investigated. We aimed to offer a novel perspective on the effect of art, as it is experienced through VTS, on medical students' tolerance of ambiguity and its possible relation to empathy. A quantitative method was used, utilizing a short survey administered after an interactive VTS session conducted within a mandatory medical humanities course for first-year medical students. The intervention consisted of a 90-min session in the form of a combined lecture and interactive discussions about art images. The VTS session and survey were completed by 67 students in two consecutive rounds of first-year students. 67% of the respondents thought that the intervention contributed to their acceptance of multiple possible meanings, 52% thought their visual observation ability was enhanced and 34% thought that their ability to feel the suffering of others was enhanced. Statistically significant moderate-to-high correlations were found between the contribution to ambiguity tolerance and contribution to empathy (0.528-0.744; p ≤ 0.01). Art may contribute especially to the development of medical students' tolerance of ambiguity, also related to the enhancement of empathy. The potential contribution of visual art works used in VTS to the enhancement of tolerance for ambiguity and empathy is explained based on relevant literature regarding the embeddedness of ambiguity within art works, coupled with reference to John Dewey's theory of learning. Given the

  11. Does manipulating the speed of visual flow in virtual reality change distance estimation while walking in Parkinson's disease?

    Science.gov (United States)

    Ehgoetz Martens, Kaylena A; Ellard, Colin G; Almeida, Quincy J

    2015-03-01

    Although dopaminergic replacement therapy is believed to improve sensory processing in PD, and delayed perceptual speed is thought to be caused by a predominantly cholinergic deficit, it is unclear whether sensory-perceptual deficits are a result of corrupt sensory processing, or a delay in updating perceived feedback during movement. The current study aimed to examine these two hypotheses by manipulating visual flow speed and dopaminergic medication to examine which influenced distance estimation in PD. Fourteen PD and sixteen HC participants were instructed to estimate the distance of a remembered target by walking to the position the target formerly occupied. This task was completed in virtual reality in order to manipulate the visual flow (VF) speed in real time. Three conditions were carried out: (1) BASELINE: VF speed was equal to participants' real-time movement speed; (2) SLOW: VF speed was reduced by 50%; (3) FAST: VF speed was increased by 30%. Individuals with PD performed the experiment in their ON and OFF state. PD demonstrated significantly greater judgement error during BASELINE and FAST conditions compared to HC, although PD did not improve their judgement error during the SLOW condition. Additionally, PD had greater variable error during baseline compared to HC; however, during the SLOW conditions, PD had significantly less variable error compared to baseline and similar variable error to HC participants. Overall, dopaminergic medication did not significantly influence judgement error. Therefore, these results suggest that corrupt processing of sensory information is the main contributor to sensory-perceptual deficits during movement in PD rather than delayed updating of sensory feedback.

  12. Integration Head Mounted Display Device and Hand Motion Gesture Device for Virtual Reality Laboratory

    Science.gov (United States)

    Rengganis, Y. A.; Safrodin, M.; Sukaridhoto, S.

    2018-01-01

    A Virtual Reality Laboratory (VR Lab) is an innovation over conventional learning media that presents the whole learning process of a laboratory. Many tools and materials are needed by the user to carry out practical work in it, so the user can experience a new learning atmosphere through this innovation. Technologies are now more sophisticated than before, and carrying them into education can make learning more effective and efficient. Supporting technologies are needed to build the VR Lab, such as a head-mounted display device and a hand motion gesture device, and their integration forms the basis of this research. The head-mounted display device is used for viewing the 3D environment of the virtual reality laboratory, while the hand motion gesture device captures the user's real hand so that it can be visualized in the virtual reality laboratory. The results suggest that using the newest technologies in the learning process can make it more interesting and easier to understand.

  13. Augmented Reality in Science Education

    DEFF Research Database (Denmark)

    Nielsen, Birgitte Lund; Brandt, Harald; Swensen, Hakon

    Augmented reality (AR) holds great promise as a learning tool. However, most extant studies in this field have focused on the technology itself. The poster presents findings from the first stage of the AR-sci project addressing the issue of applying AR for educational purposes. Benefits and challenges related to AR enhancing student learning in science in lower secondary school were identified by expert science teachers, ICT designers and science education researchers from four countries in a Delphi survey. Findings were condensed in a framework to categorize educational AR designs.

  14. Transforming Experience: The Potential of Augmented Reality and Virtual Reality for Enhancing Personal and Clinical Change

    OpenAIRE

    Giuseppe Riva; Giuseppe Riva; ROSA M. BAÑOS; ROSA M. BAÑOS; ROSA M. BAÑOS; Cristina Botella; Cristina Botella; Cristina Botella; Fabrizia Mantovani; Andrea Gaggioli; Andrea Gaggioli

    2016-01-01

    During our life we undergo many personal changes: we change our house, our school, our work and even our friends and partners. However, our daily experience shows clearly that in some situations subjects are unable to change even if they want to. The recent advances in psychology and neuroscience are now providing a better view of personal change, the change affecting our assumptive world: a) the focus of personal change is reducing the distance between self and reality (conflict); b) this re...

  15. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    Science.gov (United States)

    Stone, Scott A; Tata, Matthew S

    2017-01-01

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting salient visual events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding the direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller in the future, making this system much more feasible.
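
    A toy version of the mapping described above (brightness-change events from the neuromorphic sensor converted into a localizable auditory cue) is sketched below. It is not the authors' system: the events are synthetic, the stereo pan is derived from the event centroid, and the motion direction from the centroid's drift within a short time window.

```python
# Toy sketch, not the authors' system: synthetic brightness-change events
# (x, y, t, polarity) are reduced to a stereo pan from the event centroid and
# a motion direction from the centroid's drift within a short window.
import numpy as np

SENSOR_WIDTH = 240  # DAVIS 240B horizontal resolution

def pan_from_events(events, window_s=0.05):
    """events: (N, 4) array with columns x, y, t, polarity."""
    x, t = events[:, 0], events[:, 2]
    recent = t >= t.max() - window_s
    late = t[recent] >= np.median(t[recent])
    drift = x[recent][late].mean() - x[recent][~late].mean()       # pixels, + = rightward
    pan = np.clip(x[recent].mean() / SENSOR_WIDTH * 2 - 1, -1, 1)  # -1 left .. +1 right
    return pan, ("rightward" if drift > 0 else "leftward")

# Synthetic burst of events sweeping left-to-right across the sensor.
ts = np.linspace(0, 0.05, 200)
events = np.stack([40 + 3000 * ts, np.full(200, 90.0), ts, np.ones(200)], axis=1)
print(pan_from_events(events))
```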

  16. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    Directory of Open Access Journals (Sweden)

    Scott A Stone

    Full Text Available Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting salient visual events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding the direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller in the future, making this system much more feasible.

  17. Perception Enhancement using Visual Attributes in Sequence Motif Visualization

    OpenAIRE

    Oon, Yin; Lee, Nung; Kok, Wei

    2016-01-01

    Sequence logo is a well-accepted scientific method to visualize the conservation characteristics of biological sequence motifs. Previous studies found that using sequence logo graphical representation for scientific evidence reports or arguments could seriously cause biases and misinterpretation by users. This study investigates on the visual attributes performance of a sequence logo in helping users to perceive and interpret the information based on preattentive theories and Gestalt principl...

  18. Multi-agent: a technique to implement geo-visualization of networked virtual reality

    Science.gov (United States)

    Lin, Zhiyong; Li, Wenjing; Meng, Lingkui

    2007-06-01

    Networked Virtual Reality (NVR) is a system based on network connectivity and shared spatial information, whose demands cannot be fully met by the existing architectures and application patterns of VR. In this paper, we propose a new NVR architecture based on a Multi-Agent framework, which includes detailed definitions of the various agents and their functions and a full description of the collaboration mechanism. Through prototype system tests with DEM data and 3D model data, the advantages of the Multi-Agent based Networked Virtual Reality system in terms of data loading time, user response time and scene construction time are verified. First, we introduce the characteristics of Networked Virtual Reality and of the Multi-Agent technique in Section 1. Then we give the architecture design of Networked Virtual Reality based on Multi-Agent in Section 2, which includes the rules of task division, the multi-agent architecture design used to implement Networked Virtual Reality, and the functions of the agents. Section 3 shows the prototype implementation according to the design. Finally, Section 4 discusses the benefits of using Multi-Agent techniques to implement geo-visualization for Networked Virtual Reality.
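
    Purely as an illustration of the agent decomposition and collaboration mechanism this abstract refers to (the paper's own agent definitions are not reproduced here), a minimal sketch might give each data source its own agent and route messages through a dispatcher, as below; the agent names and tasks are invented.

```python
# Purely illustrative of an agent decomposition with a message-passing
# collaboration mechanism; agent names and tasks are invented, not the
# paper's definitions.
import queue

class Agent:
    def __init__(self, name, dispatcher):
        self.name, self.dispatcher, self.inbox = name, dispatcher, queue.Queue()
        dispatcher.register(self)

    def send(self, target, payload):
        self.dispatcher.route(self.name, target, payload)

    def handle(self):
        while not self.inbox.empty():
            sender, payload = self.inbox.get()
            print(f"{self.name} <- {sender}: {payload}")

class Dispatcher:
    def __init__(self):
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def route(self, sender, target, payload):
        self.agents[target].inbox.put((sender, payload))

d = Dispatcher()
terrain, scene = Agent("dem_agent", d), Agent("scene_agent", d)
terrain.send("scene_agent", {"tile": (12, 7), "status": "loaded"})  # DEM tile ready
scene.handle()
```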

  19. A Planetarium Inside Your Office: Virtual Reality in the Dome Production Pipeline

    Science.gov (United States)

    Summers, Frank

    2018-01-01

    Producing astronomy visualization sequences for a planetarium without ready access to a dome is a distorted geometric challenge. Fortunately, one can now use virtual reality (VR) to simulate a dome environment without ever leaving one's office chair. The VR dome experience has proven to be a more than suitable pre-visualization method that requires only modest amounts of processing beyond the standard production pipeline. It also provides a crucial testbed for identifying, testing, and fixing the visual constraints and artifacts that arise in a spherical presentation environment. Topics addressed here will include rendering, geometric projection, movie encoding, software playback, and hardware setup for a virtual dome using VR headsets.

  20. Integrating Augmented Reality Technology to Enhance Children's Learning in Marine Education

    Science.gov (United States)

    Lu, Su-Ju; Liu, Ying-Chieh

    2015-01-01

    Marine education comprises rich and multifaceted issues. Raising general awareness of marine environments and issues demands the development of new learning materials. This study adapts concepts from digital game-based learning to design an innovative marine learning program integrating augmented reality (AR) technology for lower grade primary…

  1. ESSE: Engineering Super Simulation Emulation for Virtual Reality Systems Environment

    International Nuclear Information System (INIS)

    Suh, Kune Y.; Yeon, Choul W.

    2008-01-01

    The trademark 4 + D Technology TM based Engineering Super Simulation Emulation (ESSE) is introduced. ESSE, resorting to three-dimensional (3D) Virtual Reality (VR) technology, pledges to provide interactive real-time motion, sound, tactile and other forms of feedback in the man machine systems environment. In particular, the 3D Virtual Engineering Neo cybernetic Unit Soft Power (VENUS) adds a physics engine to the VR platform so as to materialize a physical atmosphere. A close cooperation system and prompt information sharing are crucial, thereby increasing the necessity of a centralized information system and an electronic cooperation system. VENUS is further deemed to contribute towards public acceptance of nuclear power in general, and safety in particular. For instance, visualization of nuclear systems can familiarize the public, answering their questions and alleviating misunderstandings on nuclear power plants (NPPs) in general, and performance, security and safety in particular. An in-house flagship project, Systemic Three-dimensional Engine Platform Prototype Engineering (STEPPE), endeavors to develop the Systemic Three-dimensional Engine Platform (STEP) for a variety of VR applications. STEP is home to a level system providing the whole visible scene of virtual engineering of the man machine system environment. The system is linked with video monitoring that provides a 3D Computer Graphics (CG) visualization of major events. The database linked system provides easy access to relevant blueprints. The character system enables the operators to access the virtual systems by using their virtual characters. Virtually Engineered NPP Informative

  2. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    Directory of Open Access Journals (Sweden)

    Richard Chiou

    2010-06-01

    Full Text Available This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet field, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3-Dimensional scheme of viewing the robotic laboratory has been introduced in addition to the remote controlling of the robots. The uniqueness of the project lies in making this process Internet-based, remote-robot operated, and visualized in 3D. This 3D system approach provides the students with a more realistic feel of the 3D robotic laboratory even though they are working remotely. As a result, the 3D visualization technology has been tested as part of a laboratory in the MET 205 Robotics and Mechatronics class and has received positive feedback from most of the students. This type of research has introduced a new level of realism and visual communications to online laboratory learning in a remote classroom.

  3. Advanced Visual and Instruction Systems for Maintenance Support (AVIS-MS)

    National Research Council Canada - National Science Library

    Badler, Norman I; Allbeck, Jan M

    2006-01-01

    .... Moreover, the realities of real-world maintenance may not permit the hardware indulgences and rigid controls of laboratory settings for visualization and training systems, and at the same time...

  4. An Investigation of the Differential Effects of Visual Input Enhancement on the Vocabulary Learning of Iranian EFL Learners

    Directory of Open Access Journals (Sweden)

    Zhila Mohammadnia

    2014-07-01

    Full Text Available This study investigated the effect of visual input enhancement on the vocabulary learning of Iranian EFL learners. One hundred and thirty-two EFL learners from elementary, intermediate and advanced proficiency levels were assigned to six groups, two groups at each proficiency level with one being an experimental and the other a control group. The study employed pretests, treatment reading texts, and posttests. T-test was used for the analysis of the data. The results revealed positive effects for visual input enhancement in the advanced level based on within group and between groups’ comparisons. However this positive effect was not found for the elementary and intermediate levels based on between groups’ comparisons. It was concluded that although visual input enhancement may have beneficial effects for elementary and intermediate levels, it is much more effective for the advanced EFL learners. This study may provide useful guiding principles for EFL teachers and syllabus designers.

  5. Augmented reality in neurovascular surgery: feasibility and first uses in the operating room.

    Science.gov (United States)

    Kersten-Oertel, Marta; Gerard, Ian; Drouin, Simon; Mok, Kelvin; Sirhan, Denis; Sinclair, David S; Collins, D Louis

    2015-11-01

    The aim of this report is to present a prototype augmented reality (AR) intra-operative brain imaging system. We present our experience of using this new neuronavigation system in neurovascular surgery and discuss the feasibility of this technology for aneurysms, arteriovenous malformations (AVMs), and arteriovenous fistulae (AVFs). We developed an augmented reality system that uses an external camera to capture the live view of the patient on the operating room table and to merge this view with pre-operative volume-rendered vessels. We have extensively tested the system in the laboratory and have used the system in four surgical cases: one aneurysm, two AVMs and one AVF case. The developed AR neuronavigation system allows for precise patient-to-image registration and calibration of the camera, resulting in a well-aligned augmented reality view. Initial results suggest that augmented reality is useful for tailoring craniotomies, localizing vessels of interest, and planning resection corridors. Augmented reality is a promising technology for neurovascular surgery. However, for more complex anomalies such as AVMs and AVFs, better visualization techniques that allow one to distinguish between arteries and veins and determine the absolute depth of a vessel of interest are needed.

  6. Mobile Technologies and Augmented Reality in Open Education

    Science.gov (United States)

    Kurubacak, Gulsun, Ed.; Altinpulluk, Hakan, Ed.

    2017-01-01

    Novel trends and innovations have enhanced contemporary educational environments. When applied properly, these computing advances can create enriched learning opportunities for students. "Mobile Technologies and Augmented Reality in Open Education" is a pivotal reference source for the latest academic research on the integration of…

  7. Virtual reality in advanced medical immersive imaging: a workflow for introducing virtual reality as a supporting tool in medical imaging

    KAUST Repository

    Knodel, Markus M.

    2018-02-27

    Radiologic evaluation of images from computed tomography (CT) or magnetic resonance imaging for diagnostic purposes is based on the analysis of single slices, occasionally supplementing this information with 3D reconstructions as well as surface or volume rendered images. However, due to the complexity of anatomical or pathological structures in biomedical imaging, innovative visualization techniques are required to display morphological characteristics three dimensionally. Virtual reality is a modern tool for representing visual data. The observer has the impression of being “inside” a virtual surrounding, which is referred to as immersive imaging. Such techniques are currently being used in technical applications, e.g. in the automobile industry. Our aim is to introduce a workflow realized within one simple program which processes common image stacks from CT, produces 3D volume and surface reconstruction and rendering, and finally includes the data into a virtual reality device equipped with a motion head tracking cave automatic virtual environment system. Such techniques have the potential to augment the possibilities in non-invasive medical imaging, e.g. for surgical planning or educational purposes to add another dimension for advanced understanding of complex anatomical and pathological structures. To this end, the reconstructions are based on advanced mathematical techniques and the corresponding grids which we can export are intended to form the basis for simulations of mathematical models of the pathogenesis of different diseases.
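
    The reconstruction step of such a workflow (an image stack turned into a surface mesh that the VR environment can render) can be sketched with a marching-cubes call, assuming scikit-image is available; the synthetic sphere below merely stands in for a stacked CT series, and the voxel spacing is a placeholder.

```python
# Sketch of the reconstruction step, assuming scikit-image is available: a
# synthetic sphere stands in for a stacked CT series, and marching cubes
# extracts the surface mesh that the VR/CAVE side would render.
import numpy as np
from skimage import measure

# Synthetic 64^3 volume containing a bright sphere.
z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = (np.sqrt(x ** 2 + y ** 2 + z ** 2) < 0.6).astype(np.float32)

# Iso-surface at the 0.5 level; spacing would come from the CT voxel size.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5,
                                                       spacing=(1.0, 1.0, 1.0))
print(f"mesh: {verts.shape[0]} vertices, {faces.shape[0]} triangles")
```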

  8. Using visual thinking strategies with nursing students to enhance nursing assessment skills: A qualitative design.

    Science.gov (United States)

    Nanavaty, Joanne

    2018-03-01

    This qualitative design study addressed the enhancement of nursing assessment skills through the use of Visual Thinking Strategies and reflection. The study advances understanding of Visual Thinking Strategies and reflection as ways to explore new methods of thinking about and observing patient situations relating to health care. Sixty nursing students in a licensed practical nursing program made up the sample of participants, who attended an art gallery as part of a class assignment. Participants first completed a survey indicating their interest in taking part in the gallery visit. They then reviewed artwork at the gallery and shared their observations with the larger group during a post-conference session in a gathering area of the museum at the end of the visit. A subsequent reflective exercise captured further thoughts about the gallery experience and demonstrated the connections each student made to clinical practice. The findings of this study support the use of Visual Thinking Strategies and reflection as effective teaching and learning tools for enhancing nursing skills. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Get Real: Augmented Reality for the Classroom

    Science.gov (United States)

    Mitchell, Rebecca; DeBay, Dennis

    2012-01-01

    Kids love augmented reality (AR) simulations because they are like real-life video games. AR simulations allow students to learn content while collaborating face to face and interacting with a multimedia-enhanced version of the world around them. Although the technology may seem advanced, AR software makes it easy to develop content-based…

  10. Simulation of eye disease in virtual reality.

    Science.gov (United States)

    Jin, Bei; Ai, Zhuming; Rasmussen, Mary

    2005-01-01

    It is difficult to understand verbal descriptions of visual phenomena if one has no such experience. Virtual Reality offers a unique opportunity to "experience" diminished vision and the problems it causes in daily life. We have developed an application to simulate age-related macular degeneration, glaucoma, protanopia, and diabetic retinopathy in a familiar setting. The application also includes an introduction to eye anatomy, representing both normal and pathologic states. It is designed for patient education, health care practitioner training, and eye care specialist education.
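
    As a purely hypothetical illustration of this kind of simulation (not the cited application), peripheral field loss such as that seen in glaucoma can be approximated by attenuating an image outside a central radius; the radius and falloff values below are assumptions.

    ```python
    # Illustrative "tunnel vision" effect applied to an RGB frame with NumPy.
    import numpy as np

    def tunnel_vision(image, radius_fraction=0.3, falloff=0.1):
        """Darken an H x W x 3 uint8 image outside a central circular field."""
        h, w = image.shape[:2]
        yy, xx = np.mgrid[0:h, 0:w]
        cy, cx = h / 2.0, w / 2.0
        # Distance from the image centre, normalized by the shorter image side.
        dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2) / min(h, w)
        # Smooth transition from full visibility inside the radius to black outside.
        mask = np.clip(1.0 - (dist - radius_fraction) / falloff, 0.0, 1.0)
        return (image.astype(np.float32) * mask[..., None]).astype(np.uint8)
    ```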

  11. Augmented Reality Sultan Deli Di Istana Maimun

    OpenAIRE

    Nathan P. L

    2016-01-01

    Augmented reality is a technology that can provide visualization of 3D models. Building on that technology, modeling from a picture of the Sultan of Deli at Istana Maimun can be applied to reconstruct photographs of the Sultan of Deli as a three-dimensional model. This matters because the Sultan of Deli, one of the important figures in the history of the city of Medan, is little known to the public, and the history of the Deli Sultanate has so far been conveyed only through two-dimensional images and other archives. The photo shows the Sultan of Deli ev...

  12. The application of virtual reality technology to testing resistance to motion sickness

    Directory of Open Access Journals (Sweden)

    Menshikova G. Ya.

    2017-09-01

    Full Text Available Background. Prolonged exposure to moving images in virtual reality systems can cause virtual reality induced motion sickness (VIMS). The ability to resist motion sickness may be associated with the level of vestibular function development. Objective. The aim of the present research is to study the oculomotor characteristics of individuals in whom observation of moving virtual environments causes the VIMS effect. We hypothesized that people who have a robust vestibular function as a result of their professional activity are less susceptible to VIMS than people who have no such professional abilities. The differences in people’s abilities to resist the effects of the virtual environment may be revealed in the oculomotor characteristics registered during their interaction with a virtual environment. Design. Figure skaters, football players, wushu fighters, and non-trained people were tested. The CAVE virtual reality system was used to initiate the VIMS effect. Three virtual scenes were constructed, consisting of many bright balls moving as a whole around the observer. The scenes differed in the width of the visual field; all balls subtended either 45°, 90°, or 180°. Results. The results showed more active eye movements for athletes compared to non-trained people, i.e. an increase in blink, fixation, and saccade counts. A decrease in saccadic amplitudes was revealed for figure skaters. These characteristics were considered specific indicators of the athletes’ ability to resist motion sickness. Conclusions. It was found that the strength of the VIMS effect increased with increasing width of the visual field. The effectiveness of virtual reality and eye-tracking technologies for testing the VIMS effect was demonstrated.
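
    The abstract does not give the eye-movement processing details; the sketch below is an assumed, minimal velocity-threshold (I-VT) approach to counting saccades from eye-tracker samples, with an illustrative threshold and sampling rate rather than values taken from the study.

    ```python
    # Hypothetical saccade counting from gaze samples via a simple velocity threshold.
    import numpy as np

    def count_saccades(gaze_deg, sample_rate_hz=250.0, velocity_threshold_deg_s=30.0):
        """gaze_deg: N x 2 array of gaze positions in degrees of visual angle (N >= 2).
        Returns the number of saccades, counted as contiguous supra-threshold runs."""
        velocity = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) * sample_rate_hz
        is_saccade = velocity > velocity_threshold_deg_s
        if is_saccade.size == 0:
            return 0
        # Count rising edges: samples where a supra-threshold run begins.
        return int(np.flatnonzero(is_saccade[1:] & ~is_saccade[:-1]).size + int(is_saccade[0]))
    ```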

  13. Embolic intracranial arterial occlusion visualized by non-enhanced computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Tomita, Masaaki; Minematsu, Kazuo; Choki, Junichiro; Yamaguchi, Takenori [National Cardiovascular Center, Suita, Osaka (Japan)

    1984-12-01

    A 77-year-old woman with a history of valvular heart disease, atrial fibrillation and a massive infarction in the right cerebral hemisphere developed contralateral infarction due to occlusion of the internal carotid artery. A string-like structure with higher density than normal brain was demonstrated on non-enhanced computed tomography that was performed in the acute stage. This abnormal structure seen in the left hemisphere was thought to be consistent with the middle cerebral artery trunk of the affected side. Seventeen days after the onset, the abnormal structure was no longer visualized on non-enhanced CT. These findings suggested that the abnormal structure with increased density was compatible with thromboembolus or intraluminal clot formed in the distal part of the occluded internal carotid artery. The importance of this finding as a diagnostic sign of cerebral arterial occlusion was discussed.

  14. Embolic intracranial arterial occlusion visualized by non-enhanced computed tomography

    International Nuclear Information System (INIS)

    Tomita, Masaaki; Minematsu, Kazuo; Choki, Junichiro; Yamaguchi, Takenori

    1984-01-01

    A 77-year-old woman with a history of valvular heart disease, atrial fibrillation and a massive infarction in the right cerebral hemisphere developed contralateral infarction due to occlusion of the internal carotid artery. A string-like structure with higher density than normal brain was demonstrated on non-enhanced computed tomography that was performed in the acute stage. This abnormal structure seen in the left hemisphere was thought to be consistent with the middle cerebral artery trunk of the affected side. Seventeen days after the onset, the abnormal structure was no longer visualized on non-enhanced CT. These findings suggested that the abnormal structure with increased density was compatible with thromboembolus or intraluminal clot formed in the distal part of the occluded internal carotid artery. The importance of this finding as a diagnostic sign of cerebral arterial occlusion was discussed. (author)

  15. Analysis of Mental Workload in Online Shopping: Are Augmented and Virtual Reality Consistent?

    Science.gov (United States)

    Zhao, Xiaojun; Shi, Changxiu; You, Xuqun; Zong, Chenming

    2017-01-01

    A market research company (Nielsen) reported that consumers in the Asia-Pacific region have become the most active group in online shopping. Focusing on augmented reality (AR), one of three major techniques expected to change how shopping is done in the future, this study used a mixed design to examine the influences of the method of online shopping, user gender, cognitive style, product value, and sensory channel on mental workload in virtual reality (VR) and AR situations. The results showed that males' mental workloads were significantly higher than females'. For males, the mental workload for high-value products was significantly higher than that for low-value products. In the VR situation, the visual mental workload of field-independent and field-dependent consumers showed a significant difference, but the difference was reduced under audio-visual conditions. In the AR situation, the visual mental workload of field-independent and field-dependent consumers showed a significant difference, but the difference increased under audio-visual conditions. This study provides a psychological examination of online shopping with AR and VR technology and its future applications. Based on the perspective of embodied cognition, AR online shopping may be a potential focus of research and market application. For the future design of online shopping platforms and the updating of user experience, this study provides a reference.

  16. Analysis of Mental Workload in Online Shopping: Are Augmented and Virtual Reality Consistent?

    Science.gov (United States)

    Zhao, Xiaojun; Shi, Changxiu; You, Xuqun; Zong, Chenming

    2017-01-01

    A market research company (Nielsen) reported that consumers in the Asia-Pacific region have become the most active group in online shopping. Focusing on augmented reality (AR), one of three major techniques expected to change how shopping is done in the future, this study used a mixed design to examine the influences of the method of online shopping, user gender, cognitive style, product value, and sensory channel on mental workload in virtual reality (VR) and AR situations. The results showed that males’ mental workloads were significantly higher than females’. For males, the mental workload for high-value products was significantly higher than that for low-value products. In the VR situation, the visual mental workload of field-independent and field-dependent consumers showed a significant difference, but the difference was reduced under audio–visual conditions. In the AR situation, the visual mental workload of field-independent and field-dependent consumers showed a significant difference, but the difference increased under audio–visual conditions. This study provides a psychological examination of online shopping with AR and VR technology and its future applications. Based on the perspective of embodied cognition, AR online shopping may be a potential focus of research and market application. For the future design of online shopping platforms and the updating of user experience, this study provides a reference. PMID:28184207

  17. Quantum reality filters

    International Nuclear Information System (INIS)

    Gudder, Stan

    2010-01-01

    An anhomomorphic logic A* is the set of all possible realities for a quantum system. Our main goal is to find the 'actual reality' φ_a ∈ A* for the system. Reality filters are employed to eliminate unwanted potential realities until only φ_a remains. In this paper, we consider three reality filters that are constructed by means of quantum integrals. A quantum measure μ can generate or actualize a φ ∈ A* if μ(A) is a quantum integral with respect to φ for a density function f over events A. In this sense, μ is an 'average' of the truth values of φ with weights given by f. We mainly discuss relations between these filters and their existence and uniqueness properties. For example, we show that a quadratic reality generated by a quantum measure is unique. In this case we obtain the unique actual quadratic reality.

  18. Soft tissue navigation for laparoscopic prostatectomy: evaluation of camera pose estimation for enhanced visualization

    Science.gov (United States)

    Baumhauer, M.; Simpfendörfer, T.; Schwarz, R.; Seitel, M.; Müller-Stich, B. P.; Gutt, C. N.; Rassweiler, J.; Meinzer, H.-P.; Wolf, I.

    2007-03-01

    We introduce a novel navigation system to support minimally invasive prostate surgery. The system utilizes transrectal ultrasonography (TRUS) and needle-shaped navigation aids to visualize hidden structures via Augmented Reality. During the intervention, the navigation aids are segmented once from a 3D TRUS dataset and subsequently tracked by the endoscope camera. Camera Pose Estimation methods directly determine position and orientation of the camera in relation to the navigation aids. Accordingly, our system does not require any external tracking device for registration of endoscope camera and ultrasonography probe. In addition to a preoperative planning step in which the navigation targets are defined, the procedure consists of two main steps which are carried out during the intervention: First, the preoperatively prepared planning data is registered with an intraoperatively acquired 3D TRUS dataset and the segmented navigation aids. Second, the navigation aids are continuously tracked by the endoscope camera. The camera's pose can thereby be derived and relevant medical structures can be superimposed on the video image. This paper focuses on the latter step. We have implemented several promising real-time algorithms and incorporated them into the Open Source Toolkit MITK (www.mitk.org). Furthermore, we have evaluated them for minimally invasive surgery (MIS) navigation scenarios. For this purpose, a virtual evaluation environment has been developed, which allows for the simulation of navigation targets and navigation aids, including their measurement errors. Besides evaluating the accuracy of the computed pose, we have analyzed the impact of an inaccurate pose and the resulting displacement of navigation targets in Augmented Reality.
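
    A minimal sketch of the camera-pose step described above, assuming the navigation aids' 3D model coordinates and their detected 2D image locations are available; it uses OpenCV's solvePnP and projectPoints as a stand-in rather than the MITK algorithms referenced in the record, and all names are illustrative.

    ```python
    # Illustrative pose estimation from navigation-aid correspondences, then projection
    # of a planned target into the endoscope video frame.
    import cv2
    import numpy as np

    def estimate_pose_and_project(aid_points_3d, aid_points_2d, targets_3d,
                                  camera_matrix, dist_coeffs):
        """aid_points_3d: N x 3 model coordinates of the navigation aids (N >= 4);
        aid_points_2d: N x 2 detected image coordinates; targets_3d: M x 3 planned targets."""
        ok, rvec, tvec = cv2.solvePnP(
            aid_points_3d.astype(np.float64),
            aid_points_2d.astype(np.float64),
            camera_matrix, dist_coeffs,
            flags=cv2.SOLVEPNP_ITERATIVE,
        )
        if not ok:
            raise RuntimeError("Camera pose estimation failed")
        targets_2d, _ = cv2.projectPoints(targets_3d.astype(np.float64),
                                          rvec, tvec, camera_matrix, dist_coeffs)
        return rvec, tvec, targets_2d.reshape(-1, 2)
    ```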

  19. Virtual reality applications in improving postural control and minimizing falls.

    Science.gov (United States)

    Virk, Sumandeep; McConville, Kristiina M Valter

    2006-01-01

    Maintaining balance under all conditions is an absolute requirement for humans. Orientation in space and balance maintenance require inputs from the vestibular, visual, proprioceptive, and somatosensory systems. All the cues coming from these systems are integrated by the central nervous system (CNS) to employ different strategies for orientation and balance. How the CNS integrates all the inputs and makes cognitive decisions about balance strategies has long been an area of interest for biomedical engineers. More interesting is the fact that in the absence of one or more cues, or when the input from one of the sensors is skewed, the CNS "adapts" to the new environment and gives less weight to the conflicting inputs [1]. The focus of this paper is a review of different strategies and models put forward by researchers to explain the integration of these sensory cues. The paper also compares the different approaches used by young and old adults in maintaining balance. Since the musculoskeletal, visual, and vestibular systems deteriorate with age, older subjects have to compensate for these impaired sensory cues to maintain postural stability. The paper also discusses the applications of virtual reality in rehabilitation programs, not only for balance in the elderly but also in the context of occupational falls. Virtual reality has profound applications in the field of balance rehabilitation and training because of its relatively low cost. Studies will be conducted to evaluate the effectiveness of virtual reality training in modifying head and eye movement strategies and to determine the role of these responses in the maintenance of balance.
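
    Purely as an illustration of the re-weighting idea summarized above (not a model drawn from the reviewed literature), the sketch below fuses per-channel sway estimates with inverse-variance weights, so a degraded or conflicting cue contributes less to the fused estimate; all names and numbers are assumptions.

    ```python
    # Hypothetical inverse-variance fusion of sensory cues for a single sway estimate.
    import numpy as np

    def fuse_cues(estimates_deg, variances):
        """estimates_deg: per-channel sway estimates (e.g. vestibular, visual, proprioceptive);
        variances: corresponding noise estimates. Returns the fused estimate and weights."""
        estimates = np.asarray(estimates_deg, dtype=float)
        weights = 1.0 / np.asarray(variances, dtype=float)
        weights /= weights.sum()
        return float(np.dot(weights, estimates)), weights

    # Example: a degraded visual channel (large variance) is automatically down-weighted.
    fused_sway, cue_weights = fuse_cues([1.2, 4.0, 1.0], [0.5, 5.0, 0.8])
    ```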

  20. Man, mind, and machine: the past and future of virtual reality simulation in neurologic surgery.

    Science.gov (United States)

    Robison, R Aaron; Liu, Charles Y; Apuzzo, Michael L J

    2011-11-01

    To review virtual reality in neurosurgery, including the history of simulation and virtual reality and some of the current implementations; to examine some of the technical challenges involved; and to propose a potential paradigm for the development of virtual reality in neurosurgery going forward. A search was made on PubMed using key words surgical simulation, virtual reality, haptics, collision detection, and volumetric modeling to assess the current status of virtual reality in neurosurgery. Based on previous results, investigators extrapolated the possible integration of existing efforts and potential future directions. Simulation has a rich history in surgical training, and there are numerous currently existing applications and systems that involve virtual reality. All existing applications are limited to specific task-oriented functions and typically sacrifice visual realism for real-time interactivity or vice versa, owing to numerous technical challenges in rendering a virtual space in real time, including graphic and tissue modeling, collision detection, and direction of the haptic interface. With ongoing technical advancements in computer hardware and graphic and physical rendering, incremental or modular development of a fully immersive, multipurpose virtual reality neurosurgical simulator is feasible. The use of virtual reality in neurosurgery is predicted to change the nature of neurosurgical education, and to play an increased role in surgical rehearsal and the continuing education and credentialing of surgical practitioners. Copyright © 2011 Elsevier Inc. All rights reserved.