Uematsu, Yuko; Saito, Hideo
This paper presents visually enhanced sports entertainment applications: an AR Baseball Presentation System and an Interactive AR Bowling System. We utilize vision-based augmented reality to create an immersive experience. The first application is an observation system for a virtual baseball game on a tabletop: 3D virtual players play a game on a real baseball field model, so users can observe the game from their favorite viewpoints through a handheld monitor with a web camera. The second application is a bowling system that allows users to roll a real ball down a real bowling lane model on the tabletop and knock down virtual pins, which the users watch through the monitor. The lane and the ball are also tracked by vision-based tracking. In both applications, we utilize multiple 2D markers distributed at arbitrary positions and orientations. Even though the geometrical relationship among the markers is unknown, we can track the camera over a very wide area.
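The multi-marker tracking described above relies on recovering the camera pose from known planar markers. As an illustrative sketch only (the paper's actual tracking pipeline is not detailed in the abstract), the core planar-marker step can be expressed as a homography estimated from corner correspondences via the direct linear transform (DLT):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (DLT, 4+ points).

    src, dst: (N, 2) arrays of corresponding planar points, e.g. the
    known marker corners and their detected image positions.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector with smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply a homography to a 2D point (homogeneous normalization)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

Given the marker's metric corner coordinates and the camera intrinsics, such a homography can then be decomposed into the camera's rotation and translation relative to the marker plane.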
Grimson, W.E.L.; Lozano-Perez, T.; White, S.J.; Wells, W.M. III; Kikinis, R.
There is a need for frameless guidance systems to help surgeons plan the exact location for incisions, to define the margins of tumors, and to precisely identify the locations of neighboring critical structures. The authors have developed an automatic technique for registering clinical data, such as segmented magnetic resonance imaging (MRI) or computed tomography (CT) reconstructions, with any view of the patient on the operating table. They demonstrate the method on the specific example of neurosurgery. The method enables a visual mix of live video of the patient and the segmented three-dimensional (3-D) MRI or CT model. This supports enhanced reality techniques for planning and guiding neurosurgical procedures and allows surgeons to interactively view extracranial or intracranial structures nonintrusively. Extensions of the method include image-guided biopsies, focused therapeutic procedures, and clinical studies involving change detection over time sequences of images.
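Registering a segmented MRI/CT model to the patient's pose on the table is, at its core, a rigid alignment problem. The paper's own registration method is not specified in this summary; as a hedged illustration, a least-squares rigid transform between corresponding 3D points can be computed with the Kabsch (orthogonal Procrustes) algorithm:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. model
    points and matched patient-surface points (hypothetical inputs).
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```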
Englmeier, K.-H.; Siebert, M.; Griebel, J.; Lucht, R.; Brix, G.; Knopp, M.
Background: The purpose of this study was the development of a method for fast and efficient analysis of dynamic MR images of the female breast. The image data sets were acquired with a saturation-recovery turbo-FLASH sequence, which enables the detection of the kinetics of the contrast agent concentration in the whole breast with high temporal and spatial resolution. In addition, a morphologic 3D-FLASH data set was acquired. Methods: The dynamic image data sets were analyzed with a pharmacokinetic model, which enables the representation of the relevant functional tissue information by two parameters. In order to display morphologic and functional tissue information simultaneously, we developed a multidimensional visualization system, which provides a practical and intuitive human-computer interface in virtual reality. Discussion: The developed system allows the fast and efficient analysis of dynamic MR data sets. An important clinical application is the localization and definition of multiple lesions of the female breast.
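The abstract does not state which pharmacokinetic model is used. As an assumed illustration of how a voxel's enhancement curve can be reduced to two displayed parameters, one can fit a mono-exponential uptake model S(t) = A·(1 − e^(−kt)) to the dynamic signal; amplitude A and rate k would then be the two parameters. The sketch below avoids external optimizers by scanning k and solving the (linear) amplitude in closed form:

```python
import numpy as np

def fit_uptake(t, s):
    """Fit S(t) = A * (1 - exp(-k * t)) to samples s at times t.

    For each candidate rate k the model is linear in A, so A has a
    closed-form least-squares solution; k is found by a grid scan.
    Returns (A, k). Illustrative only, not the paper's actual model.
    """
    best = (np.inf, 0.0, 0.0)  # (residual, A, k)
    for k in np.linspace(0.01, 5.0, 500):
        basis = 1.0 - np.exp(-k * t)
        A = float(basis @ s) / float(basis @ basis)
        resid = float(np.sum((s - A * basis) ** 2))
        if resid < best[0]:
            best = (resid, A, k)
    return best[1], best[2]
```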
Flanders, Megan; Kavanagh, Richard C.
Mental rotations are among the most difficult of all spatial tasks to perform, and even those with high levels of spatial ability can struggle to visualize the result of compound rotations. This pilot study investigates the use of the virtual reality-based Rotation Tool, created using the Virtual Reality Modeling Language (VRML) together with…
Söderberg, Jonas; Waern, Annika; Åkesson, Karl-Petter; Björk, Staffan; Falk, Jennica
Live role-playing is a form of improvisational theatre played for the experience of the performers and without an audience. These games form a challenging application domain for ubiquitous technology. We discuss the design options for enhanced reality live role-playing and the role of technology in live role-playing games.
Krijnen, Robbert; Smelik, Ruben; Appleton, Rick; van Maanen, Peter-Paul
Due to their increasing complexity and size, visualization of geological data is becoming more and more important. It enables detailed examination and review of large volumes of geological data, and it is often used as a communication tool for reporting and education, to demonstrate the importance of the geology to policy makers. In the Netherlands, two types of nation-wide geological models are available: 1) layer-based models, in which the subsurface is represented by a series of tops and bases of geological or hydrogeological units, and 2) voxel models, in which the subsurface is subdivided into a regular grid of voxels that can contain different properties per voxel. The Geological Survey of the Netherlands (GSN) provides an interactive web portal that delivers maps and vertical cross-sections of such layer-based and voxel models. From this portal you can download a 3D subsurface viewer that can visualize the voxel model data of an area of 20 × 25 km at a 100 × 100 × 5 meter voxel resolution on a desktop computer. Virtual Reality (VR) technology enables us to enhance the visualization of this volumetric data in a more natural way than a standard desktop, keyboard, and mouse setup. The use of VR for data visualization is not new, but recent developments have made expensive hardware and complex setups unnecessary. The availability of consumer off-the-shelf VR hardware enabled us to create a new, intuitive, and low-cost visualization tool. A VR viewer has been implemented using the HTC Vive headset and allows visualization and analysis of the GSN voxel model data with geological or hydrogeological units. The user can navigate freely around the voxel data (20 × 25 km), which is presented in a virtual room at a scale of 2 × 2 or 3 × 3 meters. To enable analysis, e.g. of hydraulic conductivity, the user can select filters to remove specific hydrogeological units. The user can also use slicing to cut off specific sections of the voxel data to get a closer look. This slicing
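The unit filtering and slicing described above amount to boolean masking over voxel positions and values. A minimal numpy sketch of a value filter and a planar cut (the viewer's actual implementation is not described in the abstract; names are illustrative):

```python
import numpy as np

def knife_cut(points, values, normal, offset):
    """Keep only voxels on the positive side of the cutting plane
    n . x >= offset, along with their values."""
    normal = np.asarray(normal, dtype=float)
    keep = points @ normal >= offset
    return points[keep], values[keep]

def filter_by_value(points, values, vmin, vmax):
    """Keep only voxels whose value lies in [vmin, vmax], e.g. to
    isolate one hydrogeological unit code."""
    keep = (values >= vmin) & (values <= vmax)
    return points[keep], values[keep]
```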
Dryer, David A.
The Integrated Data Visualization and Virtual Reality Tool (IDVVRT) Phase II effort was for the design and development of an innovative Data Visualization Environment Tool (DVET) for NASA engineers and scientists, enabling them to visualize complex multidimensional and multivariate data in a virtual environment. The objectives of the project were to: (1) demonstrate the transfer and manipulation of standard engineering data in a virtual world; (2) demonstrate the effects of design and changes using finite element analysis tools; and (3) determine the training and engineering design and analysis effectiveness of the visualization system.
Kageyama, Akira; Tomiyama, Asako
We have developed a software framework for scientific visualization in immersive-type, room-sized virtual reality (VR) systems, or Cave automatic virtual environment (CAVEs). This program, called Multiverse, allows users to select and invoke visualization programs without leaving CAVE’s VR space. Multiverse is a kind of immersive “desktop environment” for users, with a three-dimensional graphical user interface. For application developers, Multiverse is a software framework with useful class ...
This thesis focuses on interactively visualizing, and ultimately simulating, cumulus clouds both in virtual reality (VR) and with a standard desktop computer. The cumulus clouds in question are found in data sets generated by Large-Eddy Simulations (LES), which are used to simulate a small section
Huang, M.; Papka, M.; DeFanti, T.; Kettunen, L.
The authors describe the use of the CAVE virtual reality visualization environment as an aid to the design of accelerator magnets. They have modeled an elliptical multipole wiggler magnet being designed for use at the Advanced Photon Source at Argonne National Laboratory. The CAVE environment allows the authors to explore and interact with the 3-D visualization of the magnet. Capabilities include changing the number of periods of the magnet displayed, changing the icons used for displaying the magnetic field, and changing the current in the electromagnet while observing the effect on the magnetic field and the particle beam trajectory through the field.
van Nes, Floris L.
This paper gives an overview of the visual representation of reality with three imaging technologies: painting, photography and electronic imaging. The contribution of the important image aspects, called dimensions hereafter, such as color, fine detail and total image size, to the degree of reality and aesthetic value of the rendered image are described for each of these technologies. Whereas quite a few of these dimensions - or approximations, or even only suggestions thereof - were already present in prehistoric paintings, apparent motion and true stereoscopic vision only recently were added - unfortunately also introducing accessibility and image safety issues. Efforts are made to reduce the incidence of undesirable biomedical effects such as photosensitive seizures (PSS), visually induced motion sickness (VIMS), and visual fatigue from stereoscopic images (VFSI) by international standardization of the image parameters to be avoided by image providers and display manufacturers. The history of this type of standardization, from an International Workshop Agreement to a strategy for accomplishing effective international standardization by ISO, is treated at some length. One of the difficulties to be mastered in this process is the reconciliation of the, sometimes opposing, interests of vulnerable persons, thrill-seeking viewers, creative video designers and the game industry.
Augmented and virtual reality are on the advance. In the last twelve months, several interesting devices have entered the market. Since tourism is one of the fastest growing economic sectors in the world and has become one of the major players in international commerce, the aim of this thesis was to examine how tourism could be enhanced with augmented and virtual reality. The differences and functional principles of augmented and virtual reality were investigated, general uses were described ...
Folorunso Olufemi A.
This paper addresses the development of an augmented reality (AR) based scientific visualization system prototype that supports identification, localisation, and 3D visualisation of oil leakage sensor datasets. Sensors generate significant amounts of multivariate data during normal and leak situations, which makes data exploration and visualisation daunting tasks. Therefore, a model to manage such data and enhance the computational support needed for effective exploration is developed in this paper. A challenge of this approach is to reduce data inefficiency. This paper presents a model for computing the information gain of each data attribute and determining a lead attribute. The computed lead attribute is then used for the development of an AR-based scientific visualization interface which automatically identifies, localises, and visualizes all data relevant to a particular selected region of interest (ROI) on the network. The necessary architectural system support and the interface requirements for such visualizations are also presented.
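Selecting a lead attribute by information gain can be sketched with the standard entropy-based formulation over discrete attribute values (the paper's exact formulation may differ):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Gain from splitting `labels` on one attribute column of `rows`."""
    base = entropy(labels)
    by_value = {}
    for row, lab in zip(rows, labels):
        by_value.setdefault(row[attr_index], []).append(lab)
    # Weighted entropy remaining after the split.
    remainder = sum(len(sub) / len(labels) * entropy(sub)
                    for sub in by_value.values())
    return base - remainder

def lead_attribute(rows, labels):
    """Index of the attribute with the highest information gain."""
    return max(range(len(rows[0])),
               key=lambda i: information_gain(rows, labels, i))
```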
Joan, D. R. Robert
In this article, the author discusses Mobile Augmented Reality and how it can enhance education. The aim of the present study was to give some general information about mobile augmented reality, which helps to boost education. The current study also describes the mobile networks used on the institution campus as well…
Bachelder, Ed; Klyde, David
The feasibility of using Fused Reality-based simulation technology to enhance flight test capabilities has been investigated. In terms of relevancy to piloted evaluation, there remains no substitute for actual flight tests, even when considering the fidelity and effectiveness of modern ground-based simulators. In addition to real-world cueing (vestibular, visual, aural, environmental, etc.), flight tests provide subtle but key intangibles that cannot be duplicated in a ground-based simulator. There is, however, a cost to be paid for the benefits of flight in terms of budget, mission complexity, and safety, including the need for ground and control-room personnel, additional aircraft, etc. A Fused Reality(tm) (FR) Flight system was developed that allows a virtual environment to be integrated with the test aircraft so that tasks such as aerial refueling, formation flying, or approach and landing can be accomplished without additional aircraft resources or the risk of operating in close proximity to the ground or other aircraft. Furthermore, the dynamic motions of the simulated objects can be directly correlated with the responses of the test aircraft. The FR Flight system will allow real-time observation of, and manual interaction with, the cockpit environment that serves as a frame for the virtual out-the-window scene.
Pietraszewski, Noah; Dhillon, Ranbir; Green, Melissa
By viewing fluid dynamic isosurfaces in virtual reality (VR), many of the issues associated with the rendering of three-dimensional objects on a two-dimensional screen can be addressed. In addition, viewing a variety of unsteady 3D data sets in VR opens up novel opportunities for education and community outreach. In this work, the vortex wake of a bio-inspired pitching panel was visualized using a three-dimensional structural model of Q-criterion isosurfaces rendered in virtual reality using the HTC Vive. Utilizing the Unity cross-platform gaming engine, a program was developed to allow the user to control and change this model's position and orientation in three-dimensional space. In addition to controlling the model's position and orientation, the user can "scroll" forward and backward in time to analyze the formation and shedding of vortices in the wake. Finally, the user can toggle between different quantities, while keeping the time step constant, to analyze flow parameter relationships at specific times during flow development. The information, data, or work presented herein was funded in part by an award from NYS Department of Economic Development (DED) through the Syracuse Center of Excellence.
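The Q-criterion isosurfaces mentioned above derive from the velocity-gradient tensor: Q is positive where rotation dominates strain, so Q > 0 isosurfaces outline vortices. A compact numpy version (the study's own post-processing toolchain is not specified in this summary):

```python
import numpy as np

def q_criterion(grad_u):
    """Q-criterion from a velocity-gradient tensor field.

    grad_u: (..., 3, 3) array with grad_u[..., i, j] = du_i/dx_j.
    Q = 0.5 * (||Omega||^2 - ||S||^2), where S and Omega are the
    symmetric (strain) and antisymmetric (rotation) parts of grad_u.
    """
    S = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))
    Omega = 0.5 * (grad_u - np.swapaxes(grad_u, -1, -2))
    return 0.5 * (np.sum(Omega ** 2, axis=(-2, -1))
                  - np.sum(S ** 2, axis=(-2, -1)))
```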
This paper reports on a project in which communication and digital media students collaborated with visual arts teacher students and their teacher trainer to develop visual digital designs for learning that involved Augmented Reality (AR) technology. The project exemplified a design… upon which to discuss the potential for reengineering the traditional role of the teacher/learning designer as the only supplier and the students as the receivers of digital learning designs in higher education. The discussion applies actor-network theory and socio-material perspectives on education in order to enhance the meta-perspective of traditional teacher and student roles.
Skolnik, S.; Ramirez-Linan, R.
Utilizing next-generation technology, Navteca's exploration of 3D and volumetric temporal data in Virtual Reality (VR) takes advantage of immersive user experiences where stakeholders are literally inside the data. No longer restricted by the edges of a screen, VR provides an innovative way of viewing spatially distributed 2D and 3D data that leverages a 360-degree field of view and positional-tracking input, allowing users to see and experience data differently. These concepts are relevant to many sectors, industries, and fields of study, as real-time collaboration in VR can enhance understanding and support missions through visualizations that display temporally aware 3D, meteorological, and other volumetric datasets. The ability to view data that is traditionally "difficult" to visualize, such as subsurface features or air columns, is a particularly compelling use of the technology. Various development iterations have resulted in Navteca's proof of concept that imports and renders volumetric point-cloud data in the virtual reality environment by interfacing PC-based VR hardware with a back-end server and popular GIS software. The integration of geo-located data in VR and the subsequent display of changeable basemaps and overlaid datasets, together with the ability to zoom, navigate, and select specific areas, show the potential for immersive VR to revolutionize the way Earth data is viewed, analyzed, and communicated.
Chen, Chwen Jen
This study aims to investigate the effects of virtual reality (VR)-based learning environment on learners of different spatial visualization abilities. The findings of the aptitude-by-treatment interaction study have shown that learners benefit most from the Guided VR mode, irrespective of their spatial visualization abilities. This indicates that…
Visualization methods in the analysis of geographical datasets are based on static models, which restrict visual analysis capabilities. The use of virtual reality, which provides a three-dimensional (3D) perspective and gives the user the ability to change viewpoints and models dynamically, overcomes the static limitations of ...
Chouyin Hsu; Haui-Chih Shiau
With the popularity of 3C devices, visual creations are all around us, such as online games, touch pads, video, and animation. Therefore, text-based web pages will no longer satisfy users. With the popularity of webcams, digital cameras, stereoscopic glasses, and head-mounted displays, the user interface becomes more visual and multi-dimensional. For the consideration of 3D and visual display in research on web user interface design, Augmented Reality technology providing the convenient ...
Schomaker, Judith; Meeter, Martijn
The effects of novelty on low-level visual perception were investigated in two experiments using a two-alternative forced-choice tilt detection task. A target, consisting of a Gabor patch, was preceded by a cue that was either a novel or a familiar fractal image. Participants had to indicate whether the Gabor stimulus was vertically oriented or slightly tilted. In the first experiment tilt angle was manipulated; in the second contrast of the Gabor patch was varied. In the first, we found that sensitivity was enhanced after a novel compared to a familiar cue, and in the second we found sensitivity to be enhanced for novel cues in later experimental blocks when participants became more and more familiarized with the familiar cue. These effects were not caused by a shift in the response criterion. This shows for the first time that novel stimuli affect low-level characteristics of perception. We suggest that novelty can elicit a transient attentional response, thereby enhancing perception.
Rahn, Annette; Kjærgaard, Hanne Wacher
Title: Augmented Reality as a visualizing facilitator in nursing education. Background: Understanding the workings of the biological human body is as complex as the body itself, and because of their complexity, the phenomena of respiration and lung anatomy pose a special problem for nursing students' understanding within anatomy and physiology. Aim: Against this background, the current project set out to investigate how and to what extent the application of augmented reality (AR) could help students gain a better understanding through an increased focus on contextualized visualization. The overall aim…
This article aims to reflect on the use of anamorphosis in the context of graphic and visual communication by presenting a brief evolution of anamorphosis in visual communication, from its origin to the present time, through the analysis of historical and contemporary examples of anamorphic representations used in art and design. This is a reflection on the potential of the mechanism of anamorphosis as a vehicle of visual communication based on a perceptive game between reality and deception. Thus, we propose the possibility of this perceptual mechanism fitting into a more comprehensive history: the history of visuality.
On the basis of a brief description of grid computing, the essence and critical techniques of parallel visualization of extremely large medical datasets are discussed in connection with the Intranet and common-configuration computers of hospitals. This paper introduces several kernel techniques, including the hardware structure, software framework, load balancing, and virtual reality visualization. The Maximum Intensity Projection algorithm is parallelized on a common PC cluster. In the virtual reality world, three-dimensional models can be rotated, zoomed, translated, and cut interactively and conveniently through a control panel built on the Virtual Reality Modeling Language (VRML). Experimental results demonstrate that this method provides promising, real-time results, playing the role of a good assistant in making clinical diagnoses.
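Maximum Intensity Projection parallelizes naturally because the maximum is associative: each cluster node can project its own slab of the volume, and the partial images are then merged with an element-wise max. A single-machine numpy sketch of that decomposition (illustrative; the paper's cluster code is not shown):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum Intensity Projection of a 3D volume along one axis."""
    return volume.max(axis=axis)

def mip_parallel(volume, n_parts=4, axis=0):
    """Slab-decomposed MIP: each 'node' projects its slab, then the
    partial projections are combined with an element-wise maximum.
    Because max is associative, the split is exact."""
    slabs = np.array_split(volume, n_parts, axis=axis)
    partials = [slab.max(axis=axis) for slab in slabs]  # one per node
    return np.maximum.reduce(partials)
```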
Hvass, Jonatan Salling; Larsen, Oliver Stevns; Vendelbo, Kasper Bøgelund
Virtual Reality (VR) has finally entered the homes of consumers, and a large number of the available applications are games. This paper presents a between-subjects study (n=50) exploring whether visual realism (polygon count and texture resolution) affects presence during a scenario involving gameplay...
The basic requirement for the successful deployment of a mobile augmented reality application is a reliable tracking system with high accuracy. Recently, a helmet-based inside-out tracking system which meets this demand has been proposed for self-localization in buildings. To realize an augmented reality application based on this tracking system, a display has to be added for visualization purposes. Therefore, the relative pose of this visualization platform with respect to the helmet has to be tracked. In the case of hand-held visualization platforms like smartphones or tablets, this can be achieved by means of image-based tracking methods like marker-based or model-based tracking. In this paper, we present two marker-based methods for tracking the relative pose between the helmet-based tracking system and a tablet-based visualization system. Both methods were implemented and comparatively evaluated in terms of tracking accuracy. Our results show that mobile inside-out tracking systems without integrated displays can easily be supplemented with a hand-held tablet as visualization device for augmented reality purposes.
Appleton, R.; van Maanen, P. P.; Fisher, W. I.; Krijnen, R.
Due to their complexity and size, visualization of meteorological data is important. It enables the precise examination and review of meteorological details and is used as a communication tool for reporting and education and to demonstrate the importance of the data to policy makers. Specifically for the UCAR community it is important to explore all such possibilities. Virtual Reality (VR) technology enhances the visualization of volumetric and dynamical data in a more natural way than a standard desktop, keyboard, and mouse setup. The use of VR for data visualization is not new, but recent developments have made expensive hardware and complex setups unnecessary. The availability of consumer off-the-shelf VR hardware enabled us to create a very intuitive and low-cost way to visualize meteorological data. A VR viewer has been implemented using multiple HTC Vive headsets and allows visualization and analysis of meteorological data in NetCDF format (e.g. of the NCEP North America Model (NAM), see figure). Sources of atmospheric/meteorological data include radar and satellite as well as traditional weather stations. The data includes typical meteorological information such as temperature, humidity, and air pressure, as well as data described by the climate forecast (CF) model conventions (http://cfconventions.org). Other data such as lightning-strike data and ultra-high-resolution satellite data are also becoming available. Users can navigate freely around the data, which is presented in a virtual room at a scale of up to 3.5 × 3.5 meters. Multiple users can manipulate the model simultaneously. Possible manipulations include scaling/translating, filtering by value, and using a slicing tool to cut off specific sections of the data to get a closer look. The slicing can be done in any direction using the concept of a 'virtual knife' in real time. Users can also scoop out parts of the data and walk through successive states of the model. Future plans are (a.o.) to
Background and objective: The subjective visual vertical (SVV) is a measure of a subject's perceived verticality and a sensitive test of vestibular dysfunction. Despite this, and because of technical and logistical limitations, SVV has not entered mainstream clinical practice. The aim of the study was to develop a mobile virtual reality based system for the SVV test, evaluate the suitability of different controllers, and assess the system's usability in practical settings. Materials and methods: In this study, we describe a novel virtual reality based system that has been developed to test SVV using integrated software and hardware, and report normative values across a healthy population. Participants wore a mobile virtual reality headset in order to observe a 3D stimulus presented across separate conditions: static, dynamic, and an immersive real-world ("boat in the sea") SVV test. The virtual reality environment was controlled by the tester using Bluetooth-connected controllers. Participants controlled the movement of a vertical arrow using either a gesture-control armband or a general-purpose gamepad to indicate perceived verticality. We wanted to compare two different methods for object control in the system, determine normal values and compare them with literature data, evaluate the developed system with the help of the System Usability Scale questionnaire, and evaluate possible virtually induced dizziness with the help of a subjective visual analog scale. Results: There were no statistically significant differences in SVV values during the static, dynamic, and virtual reality stimulus conditions obtained using the two different controllers, and the results are comparable to those previously reported in the literature using alternative methodologies. The SUS scores for the system were high, with a median of 82.5 for the Myo controller and of 95.0 for the Gamepad controller, representing a statistically significant difference between the two
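Scoring an SVV trial reduces to the signed angular deviation of the indicated line from gravitational vertical; because a line is 180°-periodic, averaging across trials should use axial (doubled-angle) circular statistics. A small sketch of both steps (not the study's actual analysis code):

```python
import math

def svv_error(indicated_deg, true_vertical_deg=0.0):
    """Signed deviation of the indicated line from true vertical,
    wrapped to (-90, 90] since a line is 180-degree periodic."""
    d = (indicated_deg - true_vertical_deg) % 180.0
    return d - 180.0 if d > 90.0 else d

def mean_svv(trials_deg):
    """Mean SVV across trials using the doubled-angle circular mean,
    appropriate for axial (line-orientation) data."""
    s = sum(math.sin(math.radians(2 * a)) for a in trials_deg)
    c = sum(math.cos(math.radians(2 * a)) for a in trials_deg)
    return math.degrees(math.atan2(s, c)) / 2.0
```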
Hwang, Alex D; Peli, Eli
Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass, which overlays enhanced edge information over the wearer's real-world view, to provide contrast-improved central vision to Glass wearers. The enhanced central vision can be naturally integrated with scanning. Google Glass' camera lens distortions were corrected using image warping. Because the camera and virtual display are horizontally separated by 16 mm, and the camera aiming and virtual display projection angle are off by 10°, the warped camera image had to go through a series of three-dimensional transformations to minimize parallax errors before the final projection to the Glass' see-through virtual display. All image processes were implemented to achieve near real-time performance. The impacts of the contrast enhancements were measured for three normal-vision subjects, with and without a diffuser film to simulate vision loss. For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancements with a diffuser film. The performance boost is limited by the Glass camera's performance, which the authors assume accounts for why improvements were observed only in the diffuser-film condition (simulating low vision). With the benefit of see-through augmented reality edge enhancement, a natural visual scanning process is possible, suggesting that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration.
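Edge enhancement of this kind can be illustrated with a Sobel-based overlay: compute an edge-magnitude map and add it back onto the camera image. This is a generic sketch; the actual filter and pipeline used on Glass are not specified in this summary:

```python
import numpy as np

def sobel_edges(img):
    """Edge magnitude via 3x3 Sobel filters (numpy only, 2D grayscale)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    # Accumulate the cross-correlation one kernel tap at a time.
    for i in range(3):
        for j in range(3):
            win = pad[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

def overlay_edges(img, strength=1.0):
    """Add scaled edge energy onto the image, clipped to [0, 255]."""
    return np.clip(img + strength * sobel_edges(img), 0, 255)
```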
Scientific visualization in virtual reality technology is a graphical representation of a virtual environment in the form of images or animation that can be displayed with various devices, such as a Head Mounted Display (HMD) or monitors that can show a three-dimensional world. Real-time operation is a desirable capability for scientific visualization and virtual reality, in which we are immersed, and it makes the research process easier. In this scientific paper, the interactions between the user and objects in the virtual environment take place in real time, which gives a sense of reality to the user. Also, the Quest3D VR software package is used, and the movement of the user through the virtual environment, the impossibility of walking through solid objects, and methods for grabbing objects and displacing them are programmed, so that all interactions between them are possible. Finally, some critical analyses of all of these techniques were made on various computer systems, and excellent results were obtained.
Marshall, E.; Seichter, N. D.; D'sa, A.; Werner, L. A.; Yuen, D. A.
Pursuits in the geological sciences and other branches of quantitative science often require data visualization frameworks that are in continual need of improvement and new ideas. Virtual reality is a visualization medium with large audiences, originally designed for gaming purposes. Virtual reality can be delivered in CAVE-like environments, but these are unwieldy and expensive to maintain. Recent efforts by major companies such as Facebook have focused on a larger market; the Oculus Rift is the first of this kind of mobile device. The Unity engine makes it possible for us to convert data files into a mesh of isosurfaces and render them in 3D. A user is immersed inside the virtual reality and is able to move within and around the data using arrow keys and other steering devices, similar to those employed with an Xbox. With the introduction of products like the Oculus Rift and HoloLens, combined with ever-increasing mobile computing strength, mobile virtual reality data visualization can be implemented for better analysis of 3D geological and mineralogical data sets. As new products like the Surface Pro 4 and other high-powered yet very mobile computers are introduced to the market, the RAM and graphics-card capacity necessary to run these models is more available, opening doors to this new reality. The computing requirements needed to run these models are a mere 8 GB of RAM and 2 GHz of CPU speed, which many mobile computers are starting to exceed. Using Unity 3D software to create a virtual environment containing a visual representation of the data, any data set converted into FBX or OBJ format can be traversed by wearing the Oculus Rift device. This new method of analysis, in conjunction with 3D scanning, has potential applications in many fields, including the analysis of precious stones or jewelry. Using hologram technology to capture in high resolution the 3D shape, color, and imperfections of minerals and stones, detailed review and
Cherukuru, N. W.; Calhoun, R.
Augmented reality (AR) is a technology that enables the user to view virtual content as if it existed in the real world. We are exploring the possibility of using this technology to view radial velocities or processed wind vectors from a Doppler wind lidar, thus giving the user the ability to see the wind in a literal sense. This approach could find applications in aviation safety and atmospheric data visualization, as well as in weather education and public outreach. As a proof of concept, we used lidar data from a recent field campaign and developed a smartphone application to view the lidar scan in augmented reality. In this paper, we give a brief methodology of this feasibility study and present the challenges and promises of using AR technology in conjunction with Doppler wind lidars.
Roberts, David; Menozzi, Alberico; Clipp, Brian; Russler, Patrick; Cook, James; Karl, Robert; Wenger, Eric; Church, William; Mauger, Jennifer; Volpe, Chris; Argenta, Chris; Wille, Mark; Snarski, Stephen; Sherrill, Todd; Lupo, Jasper; Hobson, Ross; Frahm, Jan-Michael; Heinly, Jared
This paper describes the development and demonstration of a soldier-worn augmented reality system testbed that provides intuitive 'heads-up' visualization of tactically-relevant geo-registered icons. Our system combines a robust soldier pose estimation capability with a helmet mounted see-through display to accurately overlay geo-registered iconography (i.e., navigation waypoints, blue forces, aircraft) on the soldier's view of reality. Applied Research Associates (ARA), in partnership with BAE Systems and the University of North Carolina - Chapel Hill (UNC-CH), has developed this testbed system in Phase 2 of the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program. The ULTRA-Vis testbed system functions in unprepared outdoor environments and is robust to numerous magnetic disturbances. We achieve accurate and robust pose estimation through fusion of inertial, magnetic, GPS, and computer vision data acquired from helmet kit sensors. Icons are rendered on a high-brightness, 40°×30° field of view see-through display. The system incorporates an information management engine to convert CoT (Cursor-on-Target) external data feeds into mil-standard icons for visualization. The user interface provides intuitive information display to support soldier navigation and situational awareness of mission-critical tactical information.
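The inertial/magnetic part of the fusion described above can be illustrated with a one-axis complementary filter: integrate the gyro for smooth short-term heading, and pull gently toward the magnetometer reading to bound drift. This is a toy sketch of the general principle only, not ARA's actual estimator; the function name, units, and blend factor are all illustrative:

```python
def fuse_heading(prev_deg, gyro_dps, mag_deg, dt, alpha=0.98):
    """One-axis complementary filter (toy sketch, not the ULTRA-Vis
    estimator): propagate heading with the gyro rate, then blend a
    fraction (1 - alpha) of the wrap-aware error toward the
    magnetometer reading so gyro drift stays bounded."""
    pred = prev_deg + gyro_dps * dt
    err = (mag_deg - pred + 180.0) % 360.0 - 180.0  # shortest angular difference
    return (pred + (1.0 - alpha) * err) % 360.0

# A biased gyro alone would drift without bound; the magnetometer
# holds the fused heading near 90 degrees despite the 0.5 deg/s bias.
heading = 0.0
for _ in range(500):
    heading = fuse_heading(heading, gyro_dps=0.5, mag_deg=90.0, dt=0.02)
```

The wrap-aware error term matters near north: blending 359° toward 1° must cross 0° rather than swing the long way around the circle.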
Have you ever seen people get really excited about science data? Navteca, along with the Earth Science Technology Office (ESTO) within the Earth Science Division of NASA's Science Mission Directorate, has been exploring virtual reality (VR) technology for the next generation of Earth science technology information systems. One of their first joint experiments was visualizing climate data from the Goddard Earth Observing System Model (GEOS) in VR, and the resulting visualizations greatly excited the scientific community. This presentation will share the value of VR for science, such as the capability of permitting the observer to interact with data rendered in real time, make selections, and view volumetric data in an innovative way. Using interactive VR hardware (headset and controllers), the viewer steps into the data visualizations, physically moving through three-dimensional structures that are traditionally displayed as layers or slices, such as cloud and storm systems from NASA's Global Precipitation Measurement (GPM) mission. Results from displaying this precipitation and cloud data show that there is interesting potential for scientific visualization, 3D/4D visualizations, and inter-disciplinary studies using VR. Additionally, VR visualizations can be leveraged as 360° content for scientific communication and outreach, and VR can be used as a tool to engage policy and decision makers, as well as the public.
Riecke, Bernhard E; Freiberg, Jacob B; Grechkin, Timofey Y
Illusions of self-motion (vection) can provide compelling sensations of moving through virtual environments without the need for complex motion simulators or large tracked physical walking spaces. Here we explore the interaction between biomechanical cues (stepping along a rotating circular treadmill) and visual cues (viewing simulated self-rotation) for providing stationary users a compelling sensation of rotational self-motion (circular vection). When tested individually, biomechanical and visual cues were similarly effective in eliciting self-motion illusions. However, in combination they yielded significantly more intense self-motion illusions. These findings provide the first compelling evidence that walking motions can be used to significantly enhance visually induced rotational self-motion perception in virtual environments (and vice versa) without having to provide for physical self-motion or motion platforms. This is noteworthy, as linear treadmills have been found to actually impair visually induced translational self-motion perception (Ash, Palmisano, Apthorp, & Allison, 2013). Given the predominant focus on linear walking interfaces for virtual-reality locomotion, our findings suggest that investigating circular and curvilinear walking interfaces offers a promising direction for future research and development and can help to enhance self-motion illusions, presence and immersion in virtual-reality systems. © 2015 ARVO.
Conover, Damon M.; Beidleman, Brittany; McAlinden, Ryan; Borel-Donohue, Christoph C.
One of the areas where augmented reality will have an impact is in the visualization of 3-D data. 3-D data has traditionally been viewed on a 2-D screen, which has limited its utility. Augmented reality head-mounted displays, such as the Microsoft HoloLens, make it possible to view 3-D data overlaid on the real world. This allows a user to view and interact with the data in ways similar to how they would interact with a physical 3-D object, such as moving, rotating, or walking around it. A type of 3-D data that is particularly useful for military applications is geo-specific 3-D terrain data, and the visualization of this data is critical for training, mission planning, intelligence, and improved situational awareness. Advances in Unmanned Aerial Systems (UAS), photogrammetry software, and rendering hardware have drastically reduced the technological and financial obstacles in collecting aerial imagery and in generating 3-D terrain maps from that imagery. Because of this, there is an increased need to develop new tools for the exploitation of 3-D data. We will demonstrate how the HoloLens can be used as a tool for visualizing 3-D terrain data. We will describe: 1) how UAS-collected imagery is used to create 3-D terrain maps, 2) how those maps are deployed to the HoloLens, 3) how a user can view and manipulate the maps, and 4) how multiple users can view the same virtual 3-D object at the same time.
Fung, Joyce; Perez, Claire F
We have developed a mixed reality system incorporating virtual reality (VR), surface perturbations, and light touch for gait rehabilitation. Haptic touch has emerged as a novel and efficient technique to improve postural control and dynamic stability. Our system combines visual display with the manipulation of physical environments and the addition of haptic feedback to enhance balance and mobility post stroke. A research study involving 9 participants with stroke and 9 age-matched healthy individuals shows that the haptic cue provided while walking is an effective means of improving gait stability in people post stroke, especially during challenging environmental conditions such as downslope walking.
Cherukuru, Nihanth Wagmi
Environmental remote sensing has seen rapid growth in recent years, and Doppler wind lidars have gained popularity primarily due to their non-intrusive, high spatial and temporal measurement capabilities. While early lidar applications relied on radial velocity measurements alone, most practical applications in wind farm control and short-term wind prediction require knowledge of the vector wind field. Over the past couple of years, work on lidars has explored three primary methods of retrieving wind vectors: using a homogeneous wind-field assumption, computationally intensive variational methods, and the use of multiple Doppler lidars. Building on prior research, the current three-part study first demonstrates the capabilities of single- and dual-Doppler lidar retrievals in capturing downslope windstorm-type flows occurring at Arizona's Barringer Meteor Crater as part of the METCRAX II field experiment. Next, to address the need for a reliable and computationally efficient vector retrieval for adaptive wind farm control applications, a novel 2D vector retrieval based on a variational formulation was developed, applied to lidar scans from an offshore wind farm, and validated against data from a cup-and-vane anemometer installed on a nearby research platform. Finally, a novel data visualization technique using mixed reality (MR)/augmented reality (AR) technology is presented to visualize data from atmospheric sensors. MR is an environment in which the user's visual perception of the real world is enhanced with live, interactive, computer-generated sensory input (in this case, data from atmospheric sensors such as Doppler lidars). A methodology using modern game development platforms is presented and demonstrated with lidar-retrieved wind fields, as well as …
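For the simplest of the three retrieval methods, the homogeneous wind-field assumption, the radial velocity observed at azimuth θ is vr = u·sin θ + v·cos θ, so a single horizontal wind vector (u, v) follows from a least-squares fit over one scan. A self-contained sketch of that idea (the azimuth/sign convention is an assumption; real retrievals also weight samples by signal quality):

```python
import math

def retrieve_uv(azimuths_deg, radial_velocities):
    """Least-squares fit of a single horizontal wind vector (u, v) to
    radial velocities, assuming vr = u*sin(az) + v*cos(az) with azimuth
    measured clockwise from north (convention is an assumption).
    Solves the 2x2 normal equations directly."""
    s = [math.sin(math.radians(a)) for a in azimuths_deg]
    c = [math.cos(math.radians(a)) for a in azimuths_deg]
    ss = sum(x * x for x in s)
    cc = sum(x * x for x in c)
    sc = sum(a * b for a, b in zip(s, c))
    bs = sum(a * v for a, v in zip(s, radial_velocities))
    bc = sum(a * v for a, v in zip(c, radial_velocities))
    det = ss * cc - sc * sc  # non-zero when azimuths span enough angles
    u = (bs * cc - bc * sc) / det
    v = (bc * ss - bs * sc) / det
    return u, v

# Synthetic check: radial projections of a known (u, v) = (3, -2) wind
az = [0, 45, 90, 135, 180, 270]
vr = [3 * math.sin(math.radians(a)) - 2 * math.cos(math.radians(a)) for a in az]
u, v = retrieve_uv(az, vr)
```

This is exactly the degenerate (uniform-wind) case; the variational retrieval in the study relaxes the homogeneity assumption at the cost of more computation.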
Chen, Bin; Moreland, John; Zhang, Jingyu
Magnetic resonance diffusion tensor imaging (DTI) and functional MRI (fMRI) are two active research areas in neuroimaging. DTI is sensitive to the anisotropic diffusion of water exerted by its macromolecular environment and has been shown useful in characterizing structures of ordered tissues such as the brain white matter, myocardium, and cartilage. The diffusion tensor provides two new types of information of water diffusion: the magnitude and the spatial orientation of water diffusivity inside the tissue. This information has been used for white matter fiber tracking to review physical neuronal pathways inside the brain. Functional MRI measures brain activations using the hemodynamic response. The statistically derived activation map corresponds to human brain functional activities caused by neuronal activities. The combination of these two methods provides a new way to understand human brain from the anatomical neuronal fiber connectivity to functional activities between different brain regions. In this study, virtual reality (VR) based MR DTI and fMRI visualization with high resolution anatomical image segmentation and registration, ROI definition and neuronal white matter fiber tractography visualization and fMRI activation map integration is proposed. Rationale and methods for producing and distributing stereoscopic videos are also discussed.
Vorauer, A.; Cotesta, L.
Ontario Power Generation's Deep Geologic Repository Technology Program has undertaken applied research into the application of scientific visualization technologies to: i) improve the interpretation and synthesis of complex geoscientific field data; ii) facilitate the development of defensible conceptual site descriptive models; and iii) enhance communication between multi-disciplinary site investigation teams and other stakeholders. Two scientific visualization projects are summarized that benefited from the use of the Gocad earth modelling software and were supported by an immersive virtual reality laboratory: i) the Moderately Fractured Rock experiment at the 125,000 m³ block scale; and ii) the Sub-regional Flow System Modelling Project at the 100 km² scale.
Usher, Will; Klacansky, Pavol; Federer, Frederick; Bremer, Peer-Timo; Knoll, Aaron; Yarch, Jeff; Angelucci, Alessandra; Pascucci, Valerio
Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists.
Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760
Rahman, Hameedur; Arshad, Haslina; Mahmud, Rozi; Mahayuddin, Zainal Rasyid
The number of breast cancer patients requiring breast biopsy has increased over the past years, and augmented reality guided core biopsy of the breast has become the method of choice for researchers. However, this cancer visualization has limitations to the extent of superimposing the 3D imaging data only. In this paper, we introduce an augmented reality visualization framework that enables breast cancer biopsy image guidance by using an X-ray vision technique on a mobile display. This framework consists of four phases: it initially acquires the images from CT/MRI and processes the medical images into 3D slices; secondly, it purifies these 3D grayscale slices into a 3D breast tumor model using a 3D model reconstruction technique. Further, in visualization processing, this virtual 3D breast tumor model is enhanced using the X-ray vision technique to see through the skin of the phantom, and the final composition is displayed on a handheld device to optimize the accuracy of the visualization in six degrees of freedom. The framework is perceived as an improved visualization experience because the augmented reality X-ray vision allowed direct understanding of the breast tumor beyond the visible surface and direct guidance toward accurate biopsy targets.
Mota, José Miguel; Ruiz-Rube, Iván; Dodero, Juan Manuel; Figueiredo, Mauro
Augmented Reality (AR) technology allows the inclusion of virtual elements in a view of the actual physical environment, creating a mixed reality in real time. This kind of technology can be used in educational settings. However, current AR authoring tools present several drawbacks, such as the lack of a mechanism for tracking the…
Ma, Minhua; Zheng, Huiru; Lallie, Harjinder
Computer-generated three-dimensional (3D) animation is an ideal media to accurately visualize crime or accident scenes to the viewers and in the courtrooms. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace the traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing, to investigate the state-of-the-art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and level-of-detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. © 2010 American Academy of Forensic Sciences.
Stylianos Tsapakis, Dimitrios Papaconstantinou, Andreas Diagourtas, Konstantinos Droutsas, Konstantinos Andreanos, Marilita M Moschos, Dimitrios Brouzas; 1st Department of Ophthalmology, National and Kapodistrian University of Athens, Athens, Greece. Purpose: To present a visual field examination method using virtual reality glasses and to evaluate its reliability by comparing the results with those of the Humphrey perimeter. Materials and methods: Virtual reality glasses, a smartphone with a 6 inch display, and software implementing a fast-threshold 3 dB step staircase algorithm for the central 24° of the visual field (52 points) were used to test 20 eyes of 10 patients, who were tested in random and consecutive order as they appeared in our glaucoma department. The results were compared with those obtained from the same patients using the Humphrey perimeter. Results: A high correlation coefficient (r=0.808, P<0.0001) was found between the virtual reality visual field test and the Humphrey perimeter visual field. Conclusion: Visual field examination results using virtual reality glasses correlate highly with those of the Humphrey perimeter, making the method suitable for probable clinical use. Keywords: visual fields, virtual reality glasses, perimetry, visual fields software, smartphone
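The 3 dB step staircase mentioned in the methods can be sketched as a simple ascending procedure: keep dimming the stimulus (raising attenuation in dB) while the patient reports seeing it, and take the dimmest seen level as the threshold estimate. This is a toy illustration of the idea only, not the authors' exact algorithm; `seen` is a hypothetical response callback and all parameter values are illustrative:

```python
def staircase_threshold(seen, start_db=10, step_db=3, max_db=40):
    """Ascending staircase (toy sketch, not the authors' exact
    algorithm): raise the attenuation in `step_db` increments while
    the patient still reports seeing the stimulus; the estimate is
    the dimmest level that was seen. `seen(db)` is a hypothetical
    callback returning True if the patient responds."""
    level = start_db
    while level + step_db <= max_db and seen(level + step_db):
        level += step_db
    return level

# Simulated patient who sees everything up to 25 dB of attenuation
estimate = staircase_threshold(lambda db: db <= 25)
```

Clinical staircases typically also reverse direction and average the reversal points to refine the estimate; this sketch stops at the first miss for brevity.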
Luciene Chagas de Oliveira
Introduction: Individuals with mobility impairments associated with lower-limb disabilities often face enormous challenges in participating in routine activities and moving around various environments. For many, the use of wheelchairs is paramount to providing mobility and social inclusion. Nevertheless, they still face a number of challenges to function properly in our society. Among the many difficulties, one in particular stands out: navigating complex internal (indoor) environments. The main objective of this work is to propose an architecture based on mobile augmented reality to support the development of indoor navigation systems dedicated to wheelchair users, one that is also capable of recording CAD drawings of buildings and dealing with accessibility issues for that population. Methods: Overall, five main functional requirements are proposed: the ability to allow indoor navigation by means of mobile augmented reality techniques; the capacity to register and configure building CAD drawings and the positions of fiducial markers, points of interest, and obstacles to be avoided by the wheelchair user; the capacity to find the best route for wheelchair indoor navigation, taking stairs and other obstacles into account; the visualization of virtual directional arrows on the smartphone display; and touch or voice commands to interact with the application. The architecture is proposed as a combination of four layers: user interface, control, service, and infrastructure. A proof-of-concept application was developed and tests were performed with disabled volunteers operating manual and electric wheelchairs. Results: The application was implemented in Java for the Android operating system. A local database was used to store the test building's CAD drawings and the positions of fiducial markers and points of interest. The Android Augmented Reality library was used to implement augmented reality, and the Blender open source …
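The "best route taking stairs and other obstacles into account" requirement maps naturally onto a shortest-path search over an indoor graph whose edges are tagged by type, with stair edges excluded for wheelchair routing. A minimal Dijkstra sketch of that idea; node names, costs, and edge kinds are illustrative, not from the paper:

```python
import heapq

def best_route(graph, start, goal, avoid=frozenset({"stairs"})):
    """Dijkstra over an indoor graph (illustrative sketch). graph maps
    node -> [(neighbor, cost, edge_kind)]; edge kinds listed in
    `avoid` (e.g. stairs) are skipped for wheelchair routing."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:  # reconstruct the path back to start
            path = [node]
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, cost, kind in graph.get(node, []):
            if kind in avoid:
                continue
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(heap, (nd, nxt))
    return None  # goal unreachable without the avoided edges

# Illustrative floor graph: the stair shortcut is cheaper but inaccessible
floor = {
    "lobby": [("atrium", 1, "stairs"), ("ramp", 2, "corridor")],
    "ramp": [("atrium", 2, "ramp")],
}
route = best_route(floor, "lobby", "atrium")
```

The wheelchair route detours via the ramp even though the stair edge is shorter; passing an empty `avoid` set restores the unconstrained shortest path.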
Punpongsanon, Parinya; Iwai, Daisuke; Sato, Kosuke
We present SoftAR, a novel spatial augmented reality (AR) technique based on a pseudo-haptics mechanism that visually manipulates the sense of softness perceived by a user pushing a soft physical object. Considering the limitations of projection-based approaches that change only the surface appearance of a physical object, we propose two projection visual effects, i.e., surface deformation effect (SDE) and body appearance effect (BAE), on the basis of the observations of humans pushing physical objects. The SDE visualizes a two-dimensional deformation of the object surface with a controlled softness parameter, and BAE changes the color of the pushing hand. Through psychophysical experiments, we confirm that the SDE can manipulate softness perception such that the participant perceives significantly greater softness than the actual softness. Furthermore, fBAE, in which BAE is applied only for the finger area, significantly enhances manipulation of the perception of softness. We create a computational model that estimates perceived softness when SDE+fBAE is applied. We construct a prototype SoftAR system in which two application frameworks are implemented. The softness adjustment allows a user to adjust the softness parameter of a physical object, and the softness transfer allows the user to replace the softness with that of another object.
Journe, G.; Guilbaud, C.
High-quality scientific visualization software relies on ergonomic navigation and exploration, which are essential for efficient data analysis. To help address this issue, management of virtual reality devices has been developed inside the CEA 'VtkVRPN' framework. This framework is based on VTK, a 3D graphics library, and VRPN, a virtual reality device management library. This document describes the developments done during a post-graduate training course.
Lutheran, April L
Home architects and designers use many types of presentation drawings to convey design ideas. Augmented reality is a relatively new technology that can be used to aid in design and marketing for residential builders. An augmented reality presentation provides a more complete idea of a design than other presentations such as 3D model renderings and hand drawn artist sketches. While designers are accustomed to visualizing 2D plans, this task is difficult for home buyers. This difficulty has bee...
Zeelenberg, Rene; Bocanegra, Bruno R.
Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…
Stets, Jonathan Dyssel; Sun, Yongbin; Greenwald, Scott W.
We present a Virtual Reality (VR) application for labeling and handling point cloud data sets. A series of room-scale point clouds are recorded as a video sequence using a Microsoft Kinect. The data can be played and paused, and frames can be skipped just like in a video player. The user can walk...
This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (= conditioned stimuli, CS) that were paired with positive, negative, or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant, and neutral) or monetary outcomes (+50 euro cents, -50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifield. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds, irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.
Riva, Giuseppe; Baños, Rosa M; Botella, Cristina; Mantovani, Fabrizia; Gaggioli, Andrea
During life, many personal changes occur. These include changing house, school, work, and even friends and partners. However, the daily experience shows clearly that, in some situations, subjects are unable to change even if they want to. The recent advances in psychology and neuroscience are now providing a better view of personal change, the change affecting our assumptive world: (a) the focus of personal change is reducing the distance between self and reality (conflict); (b) this reduction is achieved through (1) an intense focus on the particular experience creating the conflict or (2) an internal or external reorganization of this experience; (c) personal change requires a progression through a series of different stages that however happen in discontinuous and non-linear ways; and (d) clinical psychology is often used to facilitate personal change when subjects are unable to move forward. Starting from these premises, the aim of this paper is to review the potential of virtuality for enhancing the processes of personal and clinical change. First, the paper focuses on the two leading virtual technologies - augmented reality (AR) and virtual reality (VR) - exploring their current uses in behavioral health and the outcomes of the 28 available systematic reviews and meta-analyses. Then the paper discusses the added value provided by VR and AR in transforming our external experience by focusing on the high level of personal efficacy and self-reflectiveness generated by their sense of presence and emotional engagement. Finally, it outlines the potential future use of virtuality for transforming our inner experience by structuring, altering, and/or replacing our bodily self-consciousness. The final outcome may be a new generation of transformative experiences that provide knowledge that is epistemically inaccessible to the individual until he or she has that experience, while at the same time transforming the individual's worldview.
This study examines the facilitative effects of embodiment of a complex internal anatomical structure through three-dimensional ("3-D") interactivity in a virtual reality ("VR") program. Since Shepard and Metzler's influential 1971 study, it has been known that 3-D objects (e.g., multiple-armed cube or external body parts) are visually and motorically embodied in our minds. For example, people take longer to rotate mentally an image of their hand not only when there is a greater degree of rotation, but also when the images are presented in a manner incompatible with their natural body movement (Parsons, 1987a, 1994; Cooper & Shepard, 1975; Sekiyama, 1983). Such findings confirm the notion that our mental images and rotations of those images are in fact confined by the laws of physics and biomechanics, because we perceive, think and reason in an embodied fashion. With the advancement of new technologies, virtual reality programs for medical education now enable users to interact directly in a 3-D environment with internal anatomical structures. Given that such structures are not readily viewable to users and thus not previously susceptible to embodiment, coupled with the VR environment also affording all possible degrees of rotation, how people learn from these programs raises new questions. If we embody external anatomical parts we can see, such as our hands and feet, can we embody internal anatomical parts we cannot see? Does manipulating the anatomical part in virtual space facilitate the user's embodiment of that structure and therefore the ability to visualize the structure mentally? Medical students grouped in yoked-pairs were tasked with mastering the spatial configuration of an internal anatomical structure; only one group was allowed to manipulate the images of this anatomical structure in a 3-D VR environment, whereas the other group could only view the manipulation. The manipulation group outperformed the visual group, suggesting that the interactivity
Youngstrom, Isaac A.; Strowbridge, Ben W.
Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain…
Magnor, Marcus A; Sorkine-Hornung, Olga; Theobalt, Christian
Create Genuine Visual Realism in Computer Graphics. Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality explains how to portray visual worlds with a high degree of realism using the latest video acquisition technology, computer graphics methods, and computer vision algorithms. It explores the integration of new capture modalities, reconstruction approaches, and visual perception into the computer graphics pipeline. Understand the Entire Pipeline from Acquisition, Reconstruction, and Modeling to Realistic Rendering and Applications. The book covers sensors fo
Nan, Wenya; Wan, Feng; Lou, Chin Ian; Vai, Mang I; Rosa, Agostinho
Peripheral visual performance is an important ability for everyone, and a positive inter-individual correlation is found between peripheral visual performance and alpha amplitude during the performance test. This study investigated the effect of alpha neurofeedback training on peripheral visual performance. A neurofeedback group of 13 subjects finished 20 sessions of alpha enhancement feedback within 20 days. Peripheral visual performance was assessed by a new dynamic peripheral visual test on the first and last training days. The results revealed that the neurofeedback group showed significant enhancement of peripheral visual performance as well as of the relative alpha amplitude during the peripheral visual test. This was not the case in the control group, which performed the tests within the same time frame as the neurofeedback group but without any training sessions. These findings suggest that alpha neurofeedback training was effective in improving peripheral visual performance. To the best of our knowledge, this is the first study to show evidence for performance improvement in peripheral vision via alpha neurofeedback training.
Gomes Costa, Mauro Alexandre Folha; Serique Meiguins, Bianchi; Carneiro, Nikolas S.; Gonçalves Meiguins, Aruanda Simões
This paper proposes an extension to mobile augmented reality (MAR) environments--the addition of data charts to the more usual text, image and video components. To this purpose, we have designed a client-server architecture including the main necessary modules and services to provide an Information Visualization MAR experience. The server side…
Møller, Anders Kalsgaard; Hoffmann, Pablo F.; Carrozzino, Marcello
The state-of-the-art speech intelligibility tests are created with the purpose of evaluating acoustic communication devices and not for evaluating audio-visual virtual reality systems. This paper presents a novel method to evaluate a communication situation based on both the speech intelligibility...
Kartiko, Iwan; Kavakli, Manolya; Cheng, Ken
As the technology in computer graphics advances, Animated-Virtual Actors (AVAs) in Virtual Reality (VR) applications become increasingly rich and complex. Cognitive Theory of Multimedia Learning (CTML) suggests that complex visual materials could hinder novice learners from attending to the lesson properly. On the other hand, previous studies have…
Piao, Jin-Chun; Kim, Shin-Dug
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
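The adaptive execution module described above can be illustrated with a minimal sketch. The policy, thresholds, and function names below are illustrative assumptions, not the authors' actual selection criteria: the idea is simply that the heavier visual-inertial odometry pipeline runs only when tracking conditions are hard, and the cheaper optical-flow-based visual odometry otherwise.

```python
# Toy per-frame tracker selection, in the spirit of the paper's adaptive
# module: fall back to full visual-inertial odometry (VIO) when few features
# are tracked or rotation is fast; otherwise use fast optical-flow VO.
# Thresholds are invented for illustration.

def select_tracker(num_tracked_features: int, angular_rate: float,
                   feature_thresh: int = 80, rate_thresh: float = 1.5) -> str:
    """Return which odometry pipeline to run for the current frame."""
    if num_tracked_features < feature_thresh or angular_rate > rate_thresh:
        return "visual_inertial_odometry"   # hard conditions: pay full cost
    return "optical_flow_vo"                # easy conditions: cheap tracker

# Slow, feature-rich motion uses the cheap tracker:
print(select_tracker(num_tracked_features=150, angular_rate=0.2))
# Feature-poor tracking falls back to VIO:
print(select_tracker(num_tracked_features=40, angular_rate=0.2))
```

Such a per-frame switch is what yields the reported reduction in average tracking time: the expensive pipeline runs only on the frames that need it.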
Sidhu, Manjit Singh
Augmented Reality (AR) is a potential area of research for education, covering issues such as tracking and calibration, and realistic rendering of virtual objects. The ability to augment the real world with virtual information has opened the possibility of using AR technology in areas such as education and training as well. In the domain of Computer Aided Learning (CAL), researchers have long been looking into enhancing the effectiveness of the teaching and learning process by providing cues that could assist learners to better comprehend the materials presented. Although a number of works have examined the effectiveness of learning-aided cues, none has addressed this issue for AR-based learning solutions. This paper discusses the design and model of AR-based software that uses visual cues to enhance the learning process, and presents the outcome perception results of the cues.
Thorstensen, Mathias Ciarlo
To understand a robot's intent and behavior, a robot engineer must analyze data at the input and output, but also at all intermediary steps. This might require looking at a specific subset of the system, or a single data node in isolation. A range of different data formats can be used in the systems, and require visualization in different mediums; some are text based, and best visualized in a terminal, while other types must be presented graphically, in 2D or 3D. This often makes understandin...
Sharp, Ian; Huang, Felix; Patton, James
Because recent preliminary evidence points to the use of error augmentation (EA) for motor learning enhancement, we visually enhanced deviations from a straight-line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal: rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm maximum perpendicular trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions and smaller errors for this group. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removal of the reversal, all subjects returned to baseline within 6 trials.
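The core of visual error augmentation can be sketched in a few lines: the displayed cursor exaggerates the hand's perpendicular deviation from the straight start-to-target line by a gain, while the hand itself is unchanged. The gain value and function name below are assumptions for illustration, not the study's parameters.

```python
# Minimal sketch of visual error augmentation for a straight-line reach.
# The displayed position keeps the along-path component of the hand but
# scales the perpendicular deviation by `gain` (> 1 amplifies the error).

def augment_display(hand_xy, start_xy, target_xy, gain=1.5):
    """Return the displayed cursor position for a given hand position."""
    (hx, hy), (sx, sy), (tx, ty) = hand_xy, start_xy, target_xy
    dx, dy = tx - sx, ty - sy
    norm = (dx * dx + dy * dy) ** 0.5
    ux, uy = dx / norm, dy / norm                 # unit vector along the path
    px, py = hx - sx, hy - sy                     # hand relative to start
    along = px * ux + py * uy                     # progress along the path
    perp_x = px - along * ux                      # perpendicular error (x)
    perp_y = py - along * uy                      # perpendicular error (y)
    return (sx + along * ux + gain * perp_x,
            sy + along * uy + gain * perp_y)

# A hand 1 cm off a horizontal path is displayed 1.5 cm off:
print(augment_display((5.0, 1.0), (0.0, 0.0), (10.0, 0.0)))  # (5.0, 1.5)
```

With gain = 1.0 the display is veridical, which is the control condition in studies of this kind.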
Artists' views of the world, and the way they convey them in their work, provide avenues toward understanding the evolution of our societies as well as the status of scientific knowledge. This paper offers the reflection of an ophthalmologist and painter who has explored view and gaze through the production of visual arts.
, while little can be found about the advantages of stereoscopic visualization in mobile robot tele-guide applications. This work investigates stereoscopic robot tele-guide under different conditions, including typical navigation scenarios and the use of synthetic and real images. This work also...
O'Connor, Timothy; Rawat, Siddharth; Markman, Adam; Javidi, Bahram
We propose a compact imaging system that integrates an augmented reality head mounted device with digital holographic microscopy for automated cell identification and visualization. A shearing interferometer is used to produce holograms of biological cells, which are recorded using customized smart glasses containing an external camera. After image acquisition, segmentation is performed to isolate regions of interest containing biological cells in the field-of-view, followed by digital reconstruction of the cells, which is used to generate a three-dimensional (3D) pseudocolor optical path length profile. Morphological features are extracted from the cell's optical path length map, including mean optical path length, coefficient of variation, optical volume, projected area, projected area to optical volume ratio, cell skewness, and cell kurtosis. Classification is performed using the random forest classifier, support vector machines, and K-nearest neighbor, and the results are compared. Finally, the augmented reality device displays the cell's pseudocolor 3D rendering of its optical path length profile, extracted features, and the identified cell's type or class. The proposed system could allow a healthcare worker to quickly visualize cells using augmented reality smart glasses and extract the relevant information for rapid diagnosis. To the best of our knowledge, this is the first report on the integration of digital holographic microscopy with augmented reality devices for automated cell identification and visualization.
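The morphological features named above (mean optical path length, coefficient of variation, skewness, kurtosis) are standard moment statistics over the cell's optical path length map. The following stdlib-only sketch computes them over a toy 1-D list of per-pixel values; it is an illustration of the feature definitions, not the authors' implementation.

```python
# Moment-based cell features from a flattened optical path length (OPL) map.
# Toy data; a real map would be the reconstructed per-pixel OPL values.

def cell_features(opl):
    n = len(opl)
    mean = sum(opl) / n
    var = sum((v - mean) ** 2 for v in opl) / n
    std = var ** 0.5
    skew = sum((v - mean) ** 3 for v in opl) / (n * std ** 3)
    kurt = sum((v - mean) ** 4 for v in opl) / (n * var ** 2)
    return {"mean_opl": mean,              # average optical path length
            "coeff_variation": std / mean, # spread relative to the mean
            "skewness": skew,              # asymmetry of the OPL profile
            "kurtosis": kurt}              # peakedness of the OPL profile

feats = cell_features([0.8, 1.1, 1.0, 1.3, 0.9, 1.2])
print({k: round(v, 3) for k, v in feats.items()})
```

A classifier such as the random forest mentioned in the abstract would then take a vector of these features per segmented cell.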
Djukic, Tijana; Mandic, Vesna; Filipovic, Nenad
Medical education, training and preoperative diagnostics can be drastically improved with advanced technologies, such as virtual reality. The method proposed in this paper enables medical doctors and students to visualize and manipulate three-dimensional models created from CT or MRI scans, and also to analyze the results of fluid flow simulations. Simulation of fluid flow using the finite element method is performed, in order to compute the shear stress on the artery walls. The simulation of motion through the artery is also enabled. The virtual reality system proposed here could shorten the length of training programs and make the education process more effective. © 2013 Published by Elsevier Ltd.
Visualization chambers, state-of-the-art versions of the 3-D cinema films of the 1950s, made possible with the arrival of supercomputers, are popping up in the offices of most major-league explorers in Calgary, Houston and elsewhere. Combining rapid-fire networking, powerful computers, integrated software and digital projection systems, visualization rooms display seismic and other data in images that appear to lift off the screen and float in front of it. The display allows participants to work with stereoscopic subsurface simulations in well-lit rooms where they can reference notes, printouts and drawings; enables the exploration team to gather close to the screen for discussion and inspection of minute details; improves the ability to understand huge data sets; speeds the process of arriving at effective drilling decisions; and encourages and facilitates collaborative work among people of different disciplines (geologists, engineers, geophysicists), bringing them together in one place in front of a giant screen, where everyone can see the same data all at once. Various examples of the technology's successes are described. The technology does not come cheap; it may cost anywhere from $500,000 to $3 million for a visualization room, but considering that drilling a single well may cost up to $40 million, visualization technology is not considered a huge expense in terms of exploration.
Parsons, Thomas D.
An essential tension can be found between researchers interested in ecological validity and those concerned with maintaining experimental control. Research in the human neurosciences often involves the use of simple and static stimuli lacking many of the potentially important aspects of real world activities and interactions. While this research is valuable, there is a growing interest in the human neurosciences to use cues about target states in the real world via multimodal scenarios that involve visual, semantic, and prosodic information. These scenarios should include dynamic stimuli presented concurrently or serially in a manner that allows researchers to assess the integrative processes carried out by perceivers over time. Furthermore, there is growing interest in contextually embedded stimuli that can constrain participant interpretations of cues about a target’s internal states. Virtual reality environments proffer assessment paradigms that combine the experimental control of laboratory measures with emotionally engaging background narratives to enhance affective experience and social interactions. The present review highlights the potential of virtual reality environments for enhanced ecological validity in the clinical, affective, and social neurosciences. PMID:26696869
Letterie, Gerard S
Contemporary training in obstetrics and gynecology is aimed at the acquisition of a complex set of skills oriented to both the technical and personal aspects of patient care. The ability to create clinical simulations through virtual reality (VR) may facilitate the accomplishment of these goals. The purpose of this paper is 2-fold: (1) to review the circumstances and equipment in industry, science, and education in which VR has been successfully applied, and (2) to explore the possible role of VR for training in obstetrics and gynecology and to suggest innovative and unique approaches to enhancing this training. Qualitative assessment of the literature describing successful applications of VR in industry, law enforcement, military, and medicine from 1995 to 2000. Articles were identified through a computer-based search using Medline, Current Contents, and cross referencing bibliographies of articles identified through the search. One hundred and fifty-four articles were reviewed. This review of contemporary literature suggests that VR has been successfully used to simulate person-to-person interactions for training in psychiatry and the social sciences in a variety of circumstances by using real-time simulations of personal interactions, and to launch 3-dimensional trainers for surgical simulation. These successful applications and simulations suggest that this technology may be helpful and should be evaluated as an educational modality in obstetrics and gynecology in two areas: (1) counseling in circumstances ranging from routine preoperative informed consent to intervention in more acute circumstances such as domestic violence or rape, and (2) training in basic and advanced surgical skills for both medical students and residents. Virtual reality is an untested, but potentially useful, modality for training in obstetrics and gynecology. On the basis of successful applications in other nonmedical and medical areas, VR may have a role in teaching essential elements
Bornstein, B H; Neely, C B; LeCompte, D C
Experimental efforts to meliorate the modality effect have included attempts to make the visual stimulus more distinctive. McDowd and Madigan (1991) failed to find an enhanced recency effect in serial recall when the last item was made more distinct in terms of its color. In an attempt to extend this finding, three experiments were conducted in which visual distinctiveness was manipulated in a different manner, by combining the dimensions of physical size and coloration (i.e., whether the stimuli were solid or outlined in relief). Contrary to previous findings, recency was enhanced when the size and coloration of the last item differed from the other items in the list, regardless of whether the "distinctive" item was larger or smaller than the remaining items. The findings are considered in light of other research that has failed to obtain a similar enhanced recency effect, and their implications for current theories of the modality effect are discussed.
Ellis, Stephen R.
The visual requirements for augmented reality or virtual environments displays that might be used in real or virtual towers are reviewed with respect to similar displays already used in aircraft. As an example of the type of human performance studies needed to determine the useful specifications of augmented reality displays, an optical see-through display was used in an ATC Tower simulation. Three different binocular fields of view (14deg, 28deg, and 47deg) were examined to determine their effect on subjects ability to detect aircraft maneuvering and landing. The results suggest that binocular fields of view much greater than 47deg are unlikely to dramatically improve search performance and that partial binocular overlap is a feasible display technique for augmented reality Tower applications.
Zenner, Andre; Kruger, Antonio
We define the concept of Dynamic Passive Haptic Feedback (DPHF) for virtual reality by introducing the weight-shifting physical DPHF proxy object Shifty. This concept combines actuators known from active haptics and physical proxies known from passive haptics to construct proxies that automatically adapt their passive haptic feedback. We describe the concept behind our ungrounded weight-shifting DPHF proxy Shifty and the implementation of our prototype. We then investigate how Shifty can, by automatically changing its internal weight distribution, enhance the user's perception of virtual objects interacted with in two experiments. In a first experiment, we show that Shifty can enhance the perception of virtual objects changing in shape, especially in length and thickness. Here, Shifty was shown to increase the user's fun and perceived realism significantly, compared to an equivalent passive haptic proxy. In a second experiment, Shifty is used to pick up virtual objects of different virtual weights. The results show that Shifty enhances the perception of weight and thus the perceived realism by adapting its kinesthetic feedback to the picked-up virtual object. In the same experiment, we additionally show that specific combinations of haptic, visual and auditory feedback during the pick-up interaction help to compensate for visual-haptic mismatch perceived during the shifting process.
Foerster, Rebecca M; Poth, Christian H; Behler, Christian; Botsch, Mario; Schneider, Werner X
Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions including room lighting, stimuli, and viewing-distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices allow to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite to use Oculus Rift for neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen's visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and a standard CRT computer screen. Our results show that Oculus Rift allows to measure the processing components as reliably as the standard CRT. This means that Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions.
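Test-retest reliability, the prerequisite the study above evaluates, is commonly quantified by correlating session-1 and session-2 scores across participants. The stdlib-only Pearson correlation below illustrates that computation; the sample scores are invented for illustration.

```python
# Pearson correlation between two testing sessions: values near 1 indicate
# that the device ranks participants consistently across sessions.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical visual processing speed estimates (items/s) for 5 observers:
session1 = [21.0, 35.5, 28.2, 40.1, 25.3]
session2 = [22.4, 34.0, 29.5, 41.2, 24.8]
print(round(pearson_r(session1, session2), 3))
```

Comparing such coefficients between the head-mounted display and the CRT condition is one simple way to frame the study's question of whether the two setups are equally reliable.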
O' Leary, Patrick; Jhaveri, Sankhesh; Chaudhary, Aashish; Sherman, William; Martin, Ken; Lonie, David; Whiting, Eric; Money, James
Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis provides insight into this data, scientific visualization is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can greatly benefit from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has only been attempted to varying degrees of success. In this paper, we demonstrate two new approaches to simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both Vrui and OpenVR immersive environments in example applications.
Fabrizio I. Apollonio
Full Text Available The paper describes a color enhanced processing system, applied as a case study to an artifact of the Pompeii archaeological area, developed in order to enhance different techniques for reality-based 3D model construction and visualization of archaeological artifacts. This processing allows rendering reflectance properties with perceptual fidelity on a consumer display and presents two main improvements over existing techniques: a. the color definition of the archaeological artifacts; b. the comparison between the range-based and photogrammetry-based pipelines to understand the limits of use and suitability to specific objects.
Full Text Available Mobile Augmented Reality is an ideal technology for presenting information in an attractive, comprehensive and personalized way to visitors of cultural heritage sites. One of the pioneer projects in this area was certainly the European project ArcheoGuide (IST-1999-11306), which developed and evaluated Augmented Reality (AR) at a very early stage. Much progress has been made since then, and novel devices and algorithms offer new possibilities and functionalities. In this paper we present current research work and discuss different approaches to Mobile AR for cultural heritage. Since this area is very large, we focus on the visual aspects of such technologies, namely tracking and computer vision, as well as visualization.
Ruotolo, Francesco; Maffei, Luigi; Di Gabriele, Maria; Iachini, Tina; Masullo, Massimiliano; Ruggiero, Gennaro; Senese, Vincenzo Paolo
Several international studies have shown that traffic noise has a negative impact on people's health and that people's annoyance does not depend only on noise energetic levels, but rather on multi-perceptual factors. The combination of virtual reality technology and audio rendering techniques allows us to experiment with a new approach for environmental noise assessment that can help to investigate in advance the potential negative effects of noise associated with a specific project, and that in turn can help designers to make educated decisions. In the present study, the audio-visual impact of a new motorway project on people has been assessed by means of immersive virtual reality technology. In particular, participants were exposed to 3D reconstructions of an actual landscape without the projected motorway (ante operam condition), and of the same landscape with the projected motorway (post operam condition). Furthermore, individuals' reactions to noise were assessed by means of objective cognitive measures (short-term verbal memory and executive functions) and subjective evaluations (noise and visual annoyance). Overall, the results showed that the introduction of a projected motorway in the environment can have immediate detrimental effects on people's well-being depending on the distance from the noise source. In particular, noise due to the new infrastructure seems to exert a negative influence on short-term verbal memory and to increase both visual and noise annoyance. The theoretical and practical implications of these findings are discussed. -- Highlights: ► Impact of traffic noise on people's well-being depends on multi-perceptual factors. ► A multisensory virtual reality technology is used to simulate a projected motorway. ► Effects on short-term memory and auditory and visual subjective annoyance were found. ► The closer the distance from the motorway the stronger was the effect. ► Multisensory virtual reality methodologies can be used to study
Antonio Valerio Netto
Full Text Available This paper attempts to provide an overview of current market trends in industrial applications of VR (Virtual Reality) and VisSim (visual simulation) for the next few years. Several market studies recently undertaken are presented and commented on. A profile of some companies that are starting to work with these technologies is provided, in an attempt to motivate Brazilian companies to adopt these new technologies by describing successful example applications undertaken by foreign companies.
Kravtsov A. A.
The author performed a research with the purpose of improving visualization of three-dimensional objects by means of augmented reality technology with the use of massively available mobile devices as a platform. This article summarizes the main results and provides suggestions for future research. Since graphical user interfaces made it to the consumer market about 30 years ago, interaction with the computer has not changed significantly. The focus of current user interface techniques is only...
Ruotolo, Francesco, E-mail: firstname.lastname@example.org [Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, Second University of Naples, Viale Ellittico, 31, 81100, Caserta (Italy); Maffei, Luigi, E-mail: email@example.com [Department of Architecture and Industrial Design, Second University of Naples, Abazia di S. Lorenzo, 81031, Aversa (Italy); Di Gabriele, Maria, E-mail: firstname.lastname@example.org [Department of Architecture and Industrial Design, Second University of Naples, Abazia di S. Lorenzo, 81031, Aversa (Italy); Iachini, Tina, E-mail: email@example.com [Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, Second University of Naples, Viale Ellittico, 31, 81100, Caserta (Italy); Masullo, Massimiliano, E-mail: firstname.lastname@example.org [Department of Architecture and Industrial Design, Second University of Naples, Abazia di S. Lorenzo, 81031, Aversa (Italy); Ruggiero, Gennaro, E-mail: email@example.com [Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, Second University of Naples, Viale Ellittico, 31, 81100, Caserta (Italy); Senese, Vincenzo Paolo, E-mail: firstname.lastname@example.org [Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, Second University of Naples, Viale Ellittico, 31, 81100, Caserta (Italy); Psychometric Laboratory, Department of Psychology, Second University of Naples, Viale Ellittico, 31, 81100, Caserta (Italy)
Several international studies have shown that traffic noise has a negative impact on people's health and that people's annoyance does not depend only on energetic noise levels, but rather on multi-perceptual factors. The combination of virtual reality technology and audio rendering techniques allows us to experiment with a new approach to environmental noise assessment that can help to investigate in advance the potential negative effects of noise associated with a specific project and that in turn can help designers to make educated decisions. In the present study, the audio–visual impact of a new motorway project on people was assessed by means of immersive virtual reality technology. In particular, participants were exposed to 3D reconstructions of an actual landscape without the projected motorway (ante operam condition) and of the same landscape with the projected motorway (post operam condition). Furthermore, individuals' reactions to noise were assessed by means of objective cognitive measures (short-term verbal memory and executive functions) and subjective evaluations (noise and visual annoyance). Overall, the results showed that the introduction of a projected motorway into the environment can have immediate detrimental effects on people's well-being depending on the distance from the noise source. In particular, noise due to the new infrastructure seems to exert a negative influence on short-term verbal memory and to increase both visual and noise annoyance. The theoretical and practical implications of these findings are discussed. -- Highlights: ► Impact of traffic noise on people's well-being depends on multi-perceptual factors. ► A multisensory virtual reality technology is used to simulate a projected motorway. ► Effects on short-term memory and auditory and visual subjective annoyance were found. ► The closer the distance from the motorway, the stronger was the effect. ► Multisensory virtual reality methodologies can be used to study...
Pan, Yi; Cheng, Qiu-Ping; Luo, Qian-Ying
We demonstrate that unconscious processing of a stimulus property can be enhanced when there is a match between the contents of working memory and the stimulus presented in the visual field. Participants first held a cue (a colored circle) in working memory and then searched for a brief masked target shape presented simultaneously with a distractor shape. When participants reported having no awareness of the target shape at all, search performance was more accurate in the valid condition, where the target matched the cue in color, than in the neutral condition, where the target mismatched the cue. This effect cannot be attributed to bottom-up perceptual priming from the presentation of a memory cue, because unconscious perception was not enhanced when the cue was merely perceptually identified but not actively held in working memory. These findings suggest that reentrant feedback from the contents of working memory modulates unconscious visual perception.
Lin, Qiufeng; Xu, Zhoubing; Li, Bo; Baucom, Rebeccah; Poulose, Benjamin; Landman, Bennett A; Bodenheimer, Robert E
Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high-fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure, in which images are viewed as stacks of two-dimensional slices or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely adopted in clinical settings. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography imaging data. Nearly a half million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in these conditions is communicating the urgency, degree of severity, and impact of a hernia (and potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at scale and interact with these virtual models using a data glove. This visualization and interaction allows users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.
Jared A. Frank
Full Text Available Although user interfaces with gesture-based input and augmented graphics have promoted intuitive human–robot interactions (HRI), they are often implemented in remote applications on research-grade platforms requiring significant training and limiting operator mobility. This paper proposes a mobile mixed-reality interface approach to enhance HRI in shared spaces. As a user points a mobile device at the robot's workspace, a mixed-reality environment is rendered providing a common frame of reference for the user and robot to effectively communicate spatial information for performing object manipulation tasks, improving the user's situational awareness while interacting with augmented graphics to intuitively command the robot. An evaluation with participants is conducted to examine task performance and user experience associated with the proposed interface strategy in comparison to conventional approaches that utilize egocentric or exocentric views from cameras mounted on the robot or in the environment, respectively. Results indicate that, despite the suitability of the conventional approaches in remote applications, the proposed interface approach provides comparable task performance and user experiences in shared spaces without the need to install operator stations or vision systems on or around the robot. Moreover, the proposed interface approach provides users the flexibility to direct robots from their own visual perspective (at the expense of some physical workload) and leverages the sensing capabilities of the tablet to expand the robot's perceptual range.
Lemole, G Michael; Banerjee, P Pat; Luciano, Cristian; Neckrysh, Sergey; Charbel, Fady T
Mastery of the neurosurgical skill set involves many hours of supervised intraoperative training. Convergence of political, economic, and social forces has limited neurosurgical resident operative exposure. There is a need to develop realistic neurosurgical simulations that reproduce the operative experience, unrestricted by time and patient safety constraints. Computer-based, virtual reality platforms offer just such a possibility. The combination of virtual reality with dynamic, three-dimensional stereoscopic visualization, and haptic feedback technologies makes realistic procedural simulation possible. Most neurosurgical procedures can be conceptualized and segmented into critical task components, which can be simulated independently or in conjunction with other modules to recreate the experience of a complex neurosurgical procedure. We use the ImmersiveTouch (ImmersiveTouch, Inc., Chicago, IL) virtual reality platform, developed at the University of Illinois at Chicago, to simulate the task of ventriculostomy catheter placement as a proof-of-concept. Computed tomographic data are used to create a virtual anatomic volume. Haptic feedback offers simulated resistance and relaxation with passage of a virtual three-dimensional ventriculostomy catheter through the brain parenchyma into the ventricle. A dynamic three-dimensional graphical interface renders changing visual perspective as the user's head moves. The simulation platform was found to have realistic visual, tactile, and handling characteristics, as assessed by neurosurgical faculty, residents, and medical students. We have developed a realistic, haptics-based virtual reality simulator for neurosurgical education. Our first module recreates a critical component of the ventriculostomy placement task. This approach to task simulation can be assembled in a modular manner to reproduce entire neurosurgical procedures.
National Aeronautics and Space Administration — To develop an Augmented Reality system that runs on a small portable device to aid crew in routine maintenance activities by providing enhanced information and...
This work presents a development approach for mixed reality systems in health care. Although health-care service costs account for 5-15% of GDP in developed countries the sector has been remarkably resistant to the introduction of technology-supported optimizations. Digitalization of data storing and processing in the form of electronic patient records (EPR) and hospital information systems (HIS) is a first necessary step. Contrary to typical business functions (e.g., accounting or CRM) a health-care service is characterized by a knowledge intensive decision process and usage of specialized devices ranging from stethoscopes to complex surgical systems. Mixed reality systems can help fill the gap between highly patient-specific health-care services that need a variety of technical resources on the one side and the streamlined process flow that typical process supporting information systems expect on the other side. To achieve this task, we present a development approach that includes an evaluation of existing tasks and processes within the health-care service and the information systems that currently support the service, as well as identification of decision paths and actions that can benefit from mixed reality systems. The result is a mixed reality system that allows a clinician to monitor the elements of the physical world and to blend them with virtual information provided by the systems. He or she can also plan and schedule treatments and operations in the digital world depending on status information from this mixed reality.
Wroblewski, Dariusz; Francis, Brian A; Sadun, Alfredo; Vakili, Ghazal; Chopra, Vikas
Automated perimetry is used for the assessment of visual function in a variety of ophthalmic and neurologic diseases. We report development and clinical testing of a compact, head-mounted, and eye-tracking perimeter (VirtualEye) that provides a more comfortable test environment than the standard instrumentation. VirtualEye performs the equivalent of a full threshold 24-2 visual field in two modes: (1) manual, with patient response registered with a mouse click, and (2) visual grasp, where the eye tracker senses change in gaze direction as evidence of target acquisition. 59 patients successfully completed the test in manual mode and 40 in visual grasp mode, with 59 undergoing the standard Humphrey field analyzer (HFA) testing. Large visual field defects were reliably detected by VirtualEye. Point-by-point comparison between the results obtained with the different modalities indicates: (1) minimal systematic differences between measurements taken in visual grasp and manual modes, (2) the average standard deviation of the difference distributions of about 5 dB, and (3) a systematic shift (of 4-6 dB) to lower sensitivities for VirtualEye device, observed mostly in high dB range. The usability survey suggested patients' acceptance of the head-mounted device. The study appears to validate the concepts of a head-mounted perimeter and the visual grasp mode.
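The "full threshold" strategy mentioned above is conventionally implemented as a staircase: the stimulus steps dimmer (higher dB attenuation) after each "seen" response and brighter after each "not seen", with the step shrinking at the first reversal and the run ending at the second. The sketch below is a generic illustration of that idea, not the exact HFA or VirtualEye algorithm; the function name, the 4-2 dB steps, and the 0-40 dB range are assumptions.

```python
def staircase_threshold(respond, start=25, steps=(4, 2), lo=0, hi=40):
    """4-2 dB staircase sketch for 'full threshold' static perimetry.

    `respond(db)` returns True if the (simulated) patient sees a stimulus
    attenuated to `db` decibels (higher dB = dimmer). The step shrinks at
    the first reversal; the run ends at the second reversal, returning the
    last stimulus level that was seen. Illustrative only -- not the exact
    HFA or VirtualEye procedure.
    """
    remaining = list(steps)
    step = remaining.pop(0)
    level = start
    last = respond(level)
    last_seen = level if last else None
    while True:
        # step dimmer (up in dB) after 'seen', brighter after 'not seen'
        level = min(hi, max(lo, level + step if last else level - step))
        seen = respond(level)
        if seen:
            last_seen = level
        if not seen and level == lo:
            return None              # never seen: defect exceeds the range
        if seen and level == hi:
            return hi                # sensitivity beyond the test range
        if seen != last:             # a reversal
            if not remaining:
                return last_seen
            step = remaining.pop(0)
        last = seen
```

A simulated observer with a true threshold of 23 dB, for example, can be modeled as `lambda db: db <= 23`; the staircase converges to within one final-step size of the true value.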
Gagnon, V.; Gagnon, B.
Full text: While the nuclear power industry is trying to reinforce its safety and regain public support post-Fukushima, it is also faced with a very real challenge that affects its day-to-day activities: a rapidly aging workforce. Statistics show that close to 40% of the current nuclear power industry workforce will retire within the next five years. For newcomer countries, the challenge is even greater, having to develop a completely new workforce. The workforce replacement effort introduces nuclear newcomers of a new generation with different backgrounds and affinities. Major lifestyle differences between the two generations of workers result, amongst other things, in different learning habits and needs for this new breed of learners. Interactivity, high visual content and quick access to information are now necessary to achieve a high level of retention. To enhance existing training programmes, or to support the establishment of new training programmes in newcomer countries, L-3 MAPPS has devised learning tools based on the “Practice-by-Doing” principle. L-3 MAPPS has coupled 3D computer visualization with high-fidelity simulation to bring real-time, simulation-driven animated components and systems, allowing immersive and participatory, individual or classroom learning. (author)
The first of the three worlds to be discussed is Reality. This whole level is devoted to this world consisting of consultants, subject-matter experts, and disciplines related to the domain and subject of the game. After a short introduction where I show—amongst many other things—a virtual reproduction and game interpretation of Magritte's famous painting of a pipe, I explain by using my experiences from Levee Patroller and drawing upon other examples, four relevant aspects from this world that designers need to consider. The first concerns defining the problem. This is quite hard, especially because at many times, different problem definitions can be conceived. When a problem is finally defined, the second aspect, the factors which are involved with the problem, need to be found and elaborated on. If designers start to relate the factors to each other, they are preoccupied with the third aspect, the relationships. To picture this well, it helps to draw a diagram. Mostly, games are not static and that is why the process needs to be taken into account as well. After considering this fourth aspect, the “model of reality” can be said to be complete. To judge this model and the eventual game, Reality has its own criteria of which I discuss flexibility, fidelity, and validity.
Giuseppe Riva; Rosa M. Baños; Cristina Botella; Fabrizia Mantovani; Andrea Gaggioli
During our life we undergo many personal changes: we change our house, our school, our work and even our friends and partners. However, our daily experience shows clearly that in some situations subjects are unable to change even if they want to. The recent advances in psychology and neuroscience are now providing a better view of personal change, the change affecting our assumptive world: a) the focus of personal change is reducing the distance between self and reality (conflict); b) this re...
Kaper, H. G.
An interdisciplinary project encompassing sound synthesis, music composition, sonification, and visualization of music is facilitated by the high-performance computing capabilities and the virtual-reality environments available at Argonne National Laboratory. The paper describes the main features of the project's centerpiece, DIASS (Digital Instrument for Additive Sound Synthesis); 'A.N.L.-folds', an equivalence class of compositions produced with DIASS; and application of DIASS in two experiments in the sonification of complex scientific data. Some of the larger issues connected with this project, such as the changing ways in which both scientists and composers perform their tasks, are briefly discussed.
Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo
In mobile augmented/virtual reality (AR/VR), real-time 6-Degree-of-Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of today's mobile terminals, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial real-time motion tracking method for mobile AR/VR is proposed in this paper. By means of the high-frequency, passive outputs of the inertial sensor, real-time delivery of poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling real-time 6-DoF motion tracking that balances jitter and latency. Moreover, the robustness of traditional visual-only motion tracking is enhanced, giving rise to better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing smooth and robust 6-DoF motion tracking for mobile AR/VR in real time.
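The jitter/latency trade-off described above can be illustrated with a toy adaptive complementary filter: the blend weight shifts toward the inertial prediction when the gyro reports fast motion (when the camera image is likely blurred) and toward the visual pose when motion is slow, to suppress inertial drift. The function name, gains, and threshold below are illustrative assumptions, not values from the paper.

```python
def adaptive_fuse(visual_pos, imu_pos, gyro_rate,
                  low_gain=0.98, high_gain=0.80, rate_thresh=1.0):
    """Blend a visual pose estimate with an IMU-integrated pose.

    When the angular rate (rad/s) exceeds `rate_thresh`, trust the IMU
    more (vision is likely blurred); otherwise trust vision more to
    suppress drift. Gains and threshold are illustrative, not tuned.
    """
    alpha = high_gain if abs(gyro_rate) > rate_thresh else low_gain
    # alpha weights the visual measurement; (1 - alpha) keeps the
    # inertial prediction, smoothing jitter between visual updates
    return alpha * visual_pos + (1.0 - alpha) * imu_pos
```

Running this per axis at the IMU rate gives a pose stream that follows vision closely during slow motion and leans on inertia during fast motion, which is the qualitative behavior the adaptive framework above aims for.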
Frydenberg, Mark; Andone, Diana
Augmented and virtual reality applications bring new insights to real world objects and scenarios. This paper shares research results of the TalkTech project, an ongoing study investigating the impact of learning about new technologies as members of global communities. This study shares results of a collaborative learning project about augmented…
Meredith, Tamara R.
Augmented reality (AR) has been used and documented for a variety of commercial and educational purposes, and the proliferation of mobile devices has increased the average person's access to AR systems and tools. However, little research has been done in the area of using AR to supplement traditional library services, specifically for patrons aged…
Full Text Available Abstract Background: To determine if increased visual dependence can be quantified through its impact on automatic postural responses, we have measured the combined effect of transient optic flow in the pitch plane with platform rotations and translations on the latencies and magnitudes of postural response kinematics. Methods: Six healthy (29–31 yrs) and 4 visually sensitive (27–57 yrs) subjects stood on a platform rotated (6 deg of dorsiflexion at 30 deg/sec) or translated (5 cm at 5 deg/sec) for 200 msec. Subjects either had eyes closed or viewed an immersive, stereo, wide field-of-view virtual environment (scene moved in upward pitch for a 200 msec period) for three 30 sec trials at 5 velocities. RMS values and peak velocities of head, trunk, and head with respect to trunk were calculated. EMG responses of 6 trunk and lower limb muscles were collected and latencies and magnitudes of responses determined. Results: No effect of visual velocity was observed in EMG response latencies and magnitudes. Healthy subjects exhibited significant effects (p < …) of the combined perturbations. Conclusion: Differentiation of postural kinematics in visually sensitive subjects when exposed to the combined perturbations suggests that virtual reality technology could be useful for differential diagnosis and specifically designed interventions for individuals whose chief complaint is sensitivity to visual motion.
Ploder, O.; Wagner, A.; Enislidis, G.; Ewers, R.
In this paper, a recently developed computer-based dental implant positioning system with an image-to-tissue interface is presented. On a computer monitor or in a head-up display, planned implant positions and the implant drill are graphically superimposed on the patient's anatomy. Electromagnetic 3D sensors track all skull and jaw movements; their signal feedback to the workstation induces permanent real-time updating of the virtual graphics' position. An experimental study and a clinical case demonstrate the concept of the augmented reality environment - the physician can see the operating field and superimposed virtual structures, such as dental implants and surgical instruments, without losing visual control of the operating field. The operation system therefore allows visualization of the CT-planned implant position and the incorporation of important anatomical structures. The presented method for the first time links preoperatively acquired radiologic data, planned implant location and intraoperative navigation assistance for orthotopic positioning of dental implants. (orig.)
Augmented reality (AR) is seen as an important tool for the future of user interfaces as well as training applications. An important application area for AR is expected to be in the digitization of training and worker instructions used in the Brilliant Factory environment. The transition of work instruction methods from printed pages in a book or taped to a machine to virtual simulations is a long step with many challenges along the way. A variety of augmented reality tools are being explored today for industrial applications that range from simple programmable projections in the work space to 3D displays and head-mounted gear. This paper will review where some of these tools are today and some of the pros and cons being considered for the future worker environment.
Comport, Andrew I; Marchand, Eric; Pressigout, Muriel; Chaumette, François
Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and little latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a "video see through" monocular vision system. The tracking of objects in the scene amounts to calculating the pose between the camera and the objects. Virtual objects can then be projected into the scene using the pose. Here, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curves interaction matrices are given for different 3D geometrical primitives including straight lines, circles, cylinders, and spheres. A local moving edges tracker is used in order to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.
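The robustness mechanism named above — an M-estimator integrated via iteratively reweighted least squares (IRLS) — can be sketched in a much-reduced setting. The toy below robustly estimates a 2D translation from point correspondences contaminated by an outlier, using Huber-style weights; the paper's actual formulation operates on the full camera pose through interaction matrices in a visual control law, and every name and constant here is an assumption for illustration.

```python
import numpy as np

def irls_translation(src, dst, iters=10, k=1.345):
    """Estimate a 2D translation mapping src -> dst with an M-estimator.

    Iteratively reweighted least squares with Huber weights: each
    iteration recomputes residuals, a robust scale, and per-point
    weights that downweight outliers, then takes a weighted update.
    A toy analogue of the robust pose estimation described above.
    """
    t = np.zeros(2)
    for _ in range(iters):
        r = dst - (src + t)                # per-point residual vectors
        e = np.linalg.norm(r, axis=1)      # residual magnitudes
        s = 1.4826 * np.median(e) + 1e-12  # MAD-like robust scale
        u = e / s
        w = np.where(u <= k, 1.0, k / u)   # Huber weights: outliers shrink
        t = t + np.average(r, axis=0, weights=w)
    return t
```

With four inlier correspondences offset by (2, 3) and one gross outlier, the estimate converges to the inlier translation, which an unweighted least-squares mean would not.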
Alawa, Karam A.; Sayed, Mohamed; Arboleda, Alejandro; Durkee, Heather A.; Aguilar, Mariela C.; Lee, Richard K.
Glaucoma is the leading cause of irreversible blindness worldwide. Due to its wide prevalence, effective screening tools are necessary. The purpose of this project is to design and evaluate a system that enables portable, cost effective, smartphone based visual field screening based on frequency doubling technology. The system is comprised of an Android smartphone to display frequency doubling stimuli and handle processing, a Bluetooth remote for user input, and a virtual reality headset to simulate the exam. The LG Nexus 5 smartphone and BoboVR Z3 virtual reality headset were used for their screen size and lens configuration, respectively. The system is capable of running the C-20, N-30, 24-2, and 30-2 testing patterns. Unlike the existing system, the smartphone FDT tests both eyes concurrently by showing the same background to both eyes but only displaying the stimulus to one eye at a time. Both the Humphrey Zeiss FDT and the smartphone FDT were tested on five subjects without a history of ocular disease with the C-20 testing pattern. The smartphone FDT successfully produced frequency doubling stimuli at the correct spatial and temporal frequency. Subjects could not tell which eye was being tested. All five subjects preferred the smartphone FDT to the Humphrey Zeiss FDT due to comfort and ease of use. The smartphone FDT is a low-cost, portable visual field screening device that can be used as a screening tool for glaucoma.
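The frequency-doubling stimulus itself — a low-spatial-frequency sinusoidal grating whose contrast is counterphase-flickered at a high temporal rate — is straightforward to generate in software, which is part of what makes a smartphone implementation feasible. A minimal sketch follows; the parameter values are illustrative defaults, not those of the device described above.

```python
import numpy as np

def fdt_stimulus(width=256, cycles=2.0, flicker_hz=25.0, t=0.0):
    """One frame of a frequency-doubling stimulus.

    A low-spatial-frequency vertical sinusoidal grating is multiplied by
    a sinusoidal temporal counterphase term, producing the apparent
    spatial-frequency doubling percept when flickered rapidly. Returns a
    (width x width) luminance image in [0, 1].
    """
    x = np.linspace(0.0, 1.0, width, endpoint=False)
    grating = np.sin(2.0 * np.pi * cycles * x)    # spatial pattern
    phase = np.sin(2.0 * np.pi * flicker_hz * t)  # temporal counterphase
    frame = 0.5 + 0.5 * grating * phase           # mean-gray modulation
    return np.tile(frame, (width, 1))
```

At the counterphase zero crossings the frame is uniform mid-gray, so averaging over a flicker cycle leaves mean luminance constant — only contrast is modulated.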
Portalés, Cristina; Lerma, José Luis; Navarro, Santiago
Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In very recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to boost human interactions and real-life navigations. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction far beyond the traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application to integrate buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated in real (physical) urban worlds. The augmented environment that is presented herein requires for visualization a see-through video head-mounted display (HMD), whereas the user's movement navigation is achieved in the real world with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper will deal with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There remain, however, some complex software issues, which are discussed in the paper.
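Overlaying a photogrammetric model on the live view ultimately reduces to projecting the model's 3D points through the tracked camera pose and intrinsics. A minimal pinhole-projection sketch, with placeholder (uncalibrated) intrinsic values rather than anything from the paper:

```python
import numpy as np

def project_points(pts3d, R, t, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of 3D model points into the camera image.

    `R` (3x3) and `t` (3,) are the world-to-camera rotation and
    translation delivered by the tracker; fx/fy/cx/cy are illustrative
    intrinsics. Returns an (N, 2) array of pixel coordinates.
    """
    cam = (R @ pts3d.T).T + t        # world -> camera coordinates
    uv = cam[:, :2] / cam[:, 2:3]    # perspective divide by depth
    return uv * np.array([fx, fy]) + np.array([cx, cy])
```

A point one meter straight ahead of an unrotated, untranslated camera lands at the principal point, which is a quick sanity check for any pose/intrinsics pipeline like the one the augmented environment requires.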
Phan , Minh Tien; Thouvenin , Indira; Frémont , Vincent
Pedestrian accidents are a serious problem for society. Pedestrian Collision Warning Systems (PCWS) are proposed to detect the presence of pedestrians and to warn the driver about potential dangers. However, their interfaces, associated with ambiguous alerts, can distract drivers and create more dangers. On the other hand, Augmented Reality (AR) with Head-Up Display (HUD) interfaces have recently attracted attention in the field of automotive research as they c...
Chujitarom, Wannaporn; Piriyasurawong, Pallop
This study aims to synthesize an Animation Augmented Reality Book Model (AAR Book Model) to enhance teamwork and to assess the AAR Book Model to enhance teamwork. Samples are five specialists that consist of one animation specialist, two communication and information technology specialists, and two teaching model design specialists, selected by…
Vignais, Nicolas; Kulpa, Richard; Brault, Sébastien; Presse, Damien; Bideau, Benoit
Visual information uptake is a fundamental element of sports involving interceptive tasks. Several methodologies, like video and methods based on virtual environments, are currently employed to analyze visual perception during sport situations. Both techniques have advantages and drawbacks. The goal of this study is to determine which of these technologies may be preferentially used to analyze visual information uptake during a sport situation. To this aim, we compared a handball goalkeeper's performance using two standardized methodologies: video clip and virtual environment. We examined this performance for two response tasks: an uncoupled task (goalkeepers show where the ball ends) and a coupled task (goalkeepers try to intercept the virtual ball). Variables investigated in this study were percentage of correct zones, percentage of correct responses, radial error and response time. The results showed that handball goalkeepers were more effective, more accurate and started to intercept earlier when facing a virtual handball thrower than when facing the video clip. These findings suggested that the analysis of visual information uptake for handball goalkeepers was better performed by using a 'virtual reality'-based methodology. Technical and methodological aspects of these findings are discussed further. Copyright © 2014 Elsevier B.V. All rights reserved.
Virtual Reality is an emerging technology that is proving very useful for training different skills. Our hypothesis is that it is possible to design virtual reality learning activities that help students develop their spatial ability. To test the hypothesis, we conducted an experiment consisting of training the students with a purpose-built learning activity based on a virtual reality application and assessing the possible improvement of the students' spatial ability through a widely accepted spatial visualization test. The learning activity consists of a virtual environment where some simple polyhedral shapes are shown and manipulated by moving, rotating and scaling them. The students participating in the experiment were divided into a control and an experimental group, carrying out the same learning activity with the only difference being the device used for interaction: a traditional computer with screen, keyboard and mouse for the control group, and virtual reality goggles with a smartphone for the experimental group. To assess the experience, all the students completed a spatial visualization test twice: just before performing the activities and four weeks later, once all the activities had been performed. Specifically, we used the well-known and widely used Purdue Spatial Visualization Test—Rotation (PSVT-R), designed to test rotational visualization ability. The results of the test show that there is an improvement in test scores for both groups, but the improvement is significantly higher for the experimental group. The conclusion is that the virtual reality learning activities improved the spatial ability of the experimental group.
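As a rough sketch of how such pre/post gains might be compared between groups, the snippet below uses hypothetical PSVT-R scores; the abstract does not reproduce the study's data, so every number and variable name here is illustrative only:

```python
from statistics import mean

# Hypothetical PSVT-R scores (out of 30), for illustration only;
# the study's actual data are not given in the abstract.
control_pre  = [18, 20, 15, 22, 17]
control_post = [19, 21, 17, 23, 18]
vr_pre       = [17, 19, 16, 21, 18]
vr_post      = [21, 24, 20, 25, 22]

def mean_gain(pre, post):
    """Average post-minus-pre improvement across participants."""
    return mean(b - a for a, b in zip(pre, post))

control_gain = mean_gain(control_pre, control_post)
vr_gain = mean_gain(vr_pre, vr_post)
print(control_gain, vr_gain)  # both groups improve; the VR group more
```

A real analysis would of course add a significance test on the gain scores rather than comparing means alone.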
Oon, Yin; Lee, Nung; Kok, Wei
Sequence logos are a well-accepted scientific method to visualize the conservation characteristics of biological sequence motifs. Previous studies found that using sequence logo graphical representations for scientific evidence reports or arguments can cause serious bias and misinterpretation by users. This study investigates the performance of a sequence logo's visual attributes in helping users perceive and interpret the information, based on preattentive theories and Gestalt principl...
Jackson, Margaret C.; Raymond, Jane E.
Although it is intuitive that familiarity with complex visual objects should aid their preservation in visual working memory (WM), empirical evidence for this is lacking. This study used a conventional change-detection procedure to assess visual WM for unfamiliar and famous faces in healthy adults. Across experiments, faces were upright or…
Arvind, Hemamalini; Klistorner, Alexander; Graham, Stuart L; Grigg, John R
Multifocal visual evoked potentials (mfVEPs) have demonstrated good diagnostic capabilities in glaucoma and optic neuritis. This study aimed to evaluate the possibility of simultaneously recording mfVEPs for both eyes with dichoptic stimulation using virtual reality goggles, and to determine the stimulus characteristics that yield maximum amplitude. Ten healthy volunteers were recruited and temporally sparse pattern pulse stimuli were presented dichoptically using virtual reality goggles. Experiment 1 involved recording responses to dichoptically presented checkerboard stimuli and confirming true topographic representation by switching off specific segments. Experiment 2 involved monocular stimulation and comparison of amplitude with Experiment 1. In Experiment 3, orthogonally oriented gratings were dichoptically presented. Experiment 4 involved dichoptic presentation of checkerboard stimuli at different levels of sparseness (5.0 times/s, 2.5 times/s, 1.66 times/s and 1.25 times/s), where stimulation of corresponding segments of the two eyes was separated by 16.7, 66.7, 116.7 and 166.7 ms, respectively. Experiment 1 demonstrated good traces in all regions and confirmed topographic representation. However, responses to dichoptic stimulation were suppressed in amplitude by 17.9+/-5.4% compared to monocular stimulation. Experiment 3 demonstrated similar suppression between orthogonal and checkerboard stimuli (p = 0.08). Experiment 4 demonstrated maximum amplitude and least suppression (4.8%) with stimulation at 1.25 times/s with 166.7 ms separation between eyes. It is possible to record mfVEPs for both eyes during dichoptic stimulation using virtual reality goggles, which present binocular simultaneous patterns driven by independent sequences. Interocular suppression can be almost eliminated by using a temporally sparse stimulus of 1.25 times/s with a separation of 166.7 ms between stimulation of corresponding segments of the two eyes.
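The four inter-ocular separations reported (16.7, 66.7, 116.7 and 166.7 ms) are whole multiples of a single refresh on an assumed 60 Hz display; a minimal sketch of that conversion (the 60 Hz figure is an inference from 16.7 ms, not stated in the abstract):

```python
FRAME_MS = 1000 / 60  # one refresh on an assumed 60 Hz display (~16.7 ms)

def separation_ms(n_frames):
    """Inter-ocular stimulus onset asynchrony as a multiple of the frame time."""
    return round(n_frames * FRAME_MS, 1)

# 1, 4, 7 and 10 frames reproduce the four separations in the abstract.
print([separation_ms(n) for n in (1, 4, 7, 10)])
# → [16.7, 66.7, 116.7, 166.7]
```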
Papageorgiou, Eleni; Hardiess, Gregor; Ackermann, Hermann; Wiethoelter, Horst; Dietz, Klaus; Mallot, Hanspeter A; Schiefer, Ulrich
The aim of the present study was to examine the effect of homonymous visual field defects (HVFDs) on collision avoidance of dynamic obstacles at an intersection under virtual reality (VR) conditions. Overall performance was quantitatively assessed as the number of collisions at a virtual intersection at two difficulty levels. HVFDs were assessed by binocular semi-automated kinetic perimetry within the 90° visual field, stimulus III4e and the area of sparing within the affected hemifield (A-SPAR in deg(2)) was calculated. The effect of A-SPAR, age, gender, side of brain lesion, time since brain lesion and presence of macular sparing on the number of collisions, as well as performance over time were investigated. Thirty patients (10 female, 20 male, age range: 19-71 years) with HVFDs due to unilateral vascular brain lesions and 30 group-age-matched subjects with normal visual fields were examined. The mean number of collisions was higher for patients and in the more difficult level they experienced more collisions with vehicles approaching from the blind side than the seeing side. Lower A-SPAR and increasing age were associated with decreasing performance. However, in agreement with previous studies, wide variability in performance among patients with identical visual field defects was observed and performance of some patients was similar to that of normal subjects. Both patients and healthy subjects displayed equal improvement of performance over time in the more difficult level. In conclusion, our results suggest that visual-field related parameters per se are inadequate in predicting successful collision avoidance. Individualized approaches which also consider compensatory strategies by means of eye and head movements should be introduced. Copyright © 2011 Elsevier Ltd. All rights reserved.
Ohtani, Hiroaki; Ishiguro, Seiji; Shohji, Mamoru; Kageyama, Akira; Tamura, Yuichi
We succeeded in integrating the visualization of both simulation results and experimental device data in virtual-reality (VR) space using CAVE system. Simulation results are shown using Virtual LHD software, which can show magnetic field line, particle trajectory, and isosurface of plasma pressure of the Large Helical Device (LHD) based on data from the magnetohydrodynamics equilibrium simulation. A three-dimensional mouse, or wand, determines the initial position and pitch angle of a drift particle or the starting point of a magnetic field line, interactively in the VR space. The trajectory of a particle and the stream-line of magnetic field are calculated using the Runge-Kutta-Huta integration method on the basis of the results obtained after pointing the initial condition. The LHD vessel is objectively visualized based on CAD-data. By using these results and data, the simulated LHD plasma can be interactively drawn in the objective description of the LHD experimental vessel. Through this integrated visualization, it is possible to grasp the three-dimensional relationship of the positions between the device and plasma in the VR space, opening a new path in contribution to future research. (author)
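The field-line and particle tracing described above relies on Runge-Kutta integration from a user-picked starting point. The sketch below uses the classical fourth-order scheme rather than the Runge-Kutta-Huta variant named in the paper, on a toy circular field; the field, step size and names are illustrative assumptions, not the Virtual LHD code:

```python
import math

def rk4_step(f, p, h):
    """One classical fourth-order Runge-Kutta step for dp/ds = f(p)."""
    k1 = f(p)
    k2 = f([pi + 0.5 * h * ki for pi, ki in zip(p, k1)])
    k3 = f([pi + 0.5 * h * ki for pi, ki in zip(p, k2)])
    k4 = f([pi + h * ki for pi, ki in zip(p, k3)])
    return [pi + h / 6 * (a + 2 * b + 2 * c + d)
            for pi, a, b, c, d in zip(p, k1, k2, k3, k4)]

def unit_field(p):
    """Toy circular field direction (-y, x, 0), normalized to unit length."""
    bx, by = -p[1], p[0]
    n = math.hypot(bx, by) or 1.0
    return [bx / n, by / n, 0.0]

# Trace a "field line" from (1, 0, 0); it should stay on the unit circle.
p = [1.0, 0.0, 0.0]
for _ in range(1000):
    p = rk4_step(unit_field, p, 0.01)
radius = math.hypot(p[0], p[1])
```

Following the field direction (rather than the full field vector) parameterizes the line by arc length, which is the usual choice for field-line visualization.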
Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing
In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean are high-fidelity simulation of the ocean environment, visualization of massive and multidimensional marine data, and imitation of marine life. VV-Ocean is composed of five modules: a memory management module, a resources management module, a scene management module, a rendering process management module and an interaction management module. There are three core functions in VV-Ocean: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, and imitating and simulating marine life intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce the drifting and diffusion processes of oil spilling from the sea bottom to the surface. Environmental factors such as ocean currents and the wind field have been considered in this simulation. On this platform the oil spilling process can be abstracted as movements of abundant oil particles. The results show that oil particles blend well with the water and that the platform meets the requirements for real-time and interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.
Ferrari, Vincenzo; Megali, Giuseppe; Troia, Elena; Pietrabissa, Andrea; Mosca, Franco
We developed a simple, light, and cheap 3-D visualization device based on mixed reality that can be used by physicians to see preoperative radiological exams in a natural way. The system allows the user to see stereoscopic "augmented images," which are created by mixing 3-D virtual models of anatomies, obtained by processing preoperative volumetric radiological images (computed tomography or MRI), with live images of the real patient grabbed by means of cameras. The interface of the system consists of a head-mounted display equipped with two high-definition cameras. The cameras are mounted in correspondence with the user's eyes and grab live images of the patient from the user's own point of view. The system does not use any external tracker to detect movements of the user or the patient. The movements of the user's head and the alignment of the virtual patient with the real one are handled using machine vision methods applied to pairs of live images. Experimental results concerning frame rate and alignment precision between the virtual and real patient demonstrate that the machine vision methods used for localization are appropriate for the specific application, and that systems based on stereoscopic mixed reality are feasible and can be proficiently adopted in clinical practice.
Yeh, Shih-Ching; Hwang, Wu-Yuin; Wang, Jin-Liang; Zhan, Shi-Yi
This study intends to investigate how multi-symbolic representations (text, digits, and colors) could effectively enhance the completion of co-located/distant collaborative work in a virtual reality context. Participants' perceptions and behaviors were also studied. A haptics-enhanced virtual reality task was developed to conduct…
Stark-Wroblewski, Kim; Kreiner, David S.; Boeding, Christopher M.; Lopata, Ashley N.; Ryan, Joseph J.; Church, Tina M.
We examined whether using virtual reality (VR) technology to provide students with direct exposure to evidence-based psychological treatment approaches would enhance their understanding of and appreciation for such treatments. Students enrolled in an abnormal psychology course participated in a VR session designed to help clients overcome the fear…
Hsu, Han-Jen; Weng, Wei-Kai; Chou, Yung-Lang; Huang, Pin-Wei
Violence occurs in hospitals, and nurses are at high risk of patient aggression in the workplace. This learning course applies mobile augmented reality to strengthen nurses' violence-prevention skills. Mobile technologies are increasingly being introduced and integrated into classroom teaching and clinical applications, improving the quality of the learning course and providing new experiences for nurses.
Orman, Evelyn K.; Price, Harry E.; Russell, Christine R.
Acquiring nonverbal skills necessary to appropriately communicate and educate members of performing ensembles is essential for wind band conductors. Virtual reality learning environments (VRLEs) provide a unique setting for developing these proficiencies. For this feasibility study, we used an augmented immersive VRLE to enhance eye contact, torso…
Semeraro, Federico; Frisoli, Antonio; Bergamasco, Massimo; Cerchiari, Erga L
The objective of this study was to test acceptance of, and interest in, a newly developed prototype of a virtual reality enhanced mannequin (VREM) on a sample of congress attendees who volunteered to participate in the evaluation session and to respond to a specifically designed questionnaire. A commercial Laerdal HeartSim 4000 mannequin was extended with virtual reality (VR) technologies and specially developed virtual reality software to increase the immersive perception of emergency scenarios. To evaluate acceptance of the VREM, we presented it to a sample of 39 possible users. Each evaluation session involved one trainee and two instructors with a standardized procedure and scenario: the operator was invited by the instructor to wear the data gloves and the head-mounted display and was briefly introduced to the scope of the simulation. The instructor helped the operator familiarize himself with the environment. After the patient's collapse, the operator was asked to check the patient's clinical condition and start CPR. Finally, the patient started to recover signs of circulation and the evaluation session was concluded. Each participant was then asked to respond to a questionnaire designed to explore the trainee's perception in the areas of user-friendliness, realism, and interaction/immersion. Overall, the evaluation of the system was very positive, as was the feeling of immersion and realism of the environment and simulation. Overall, 84.6% of the participants judged the virtual reality experience as interesting and believed that its development could be very useful for healthcare training. The prototype of the virtual reality enhanced mannequin was well-liked, without interference from the interaction devices, and deserves full technological development and validation in emergency medical training.
Mahmoud, Nader; Grasa, Óscar G; Nicolau, Stéphane A; Doignon, Christophe; Soler, Luc; Marescaux, Jacques; Montiel, J M M
An augmented reality system to visualize a 3D preoperative anatomical model on the intra-operative patient is proposed. The only hardware requirement is a commercial tablet-PC equipped with a camera; thus, neither an external tracking device nor artificial landmarks on the patient are required. We resort to visual SLAM to provide markerless real-time tablet-PC camera location with respect to the patient. The preoperative model is registered with respect to the patient through 4-6 anchor points. The anchors correspond to anatomical references selected on the tablet-PC screen at the beginning of the procedure. Accurate and real-time preoperative model alignment (approximately 5-mm mean FRE and TRE) was achieved, even when anchors were not visible in the current field of view. The system has been experimentally validated on human volunteers, in vivo pigs and a phantom. The proposed system can be smoothly integrated into the surgical workflow because it: (1) operates in real time, (2) requires minimal additional hardware (only a tablet-PC with camera), (3) is robust to occlusion, and (4) requires minimal interaction from the medical staff.
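One common definition of the fiducial registration error (FRE) quoted in such abstracts is the root-mean-square distance between corresponding anchor points after registration; the sketch below illustrates that computation with hypothetical anchor coordinates (the function name and data are assumptions, not the authors' code):

```python
import math

def fiducial_registration_error(registered, reference):
    """RMS distance (same units as the inputs) between corresponding
    3-D anchor points after registration."""
    assert len(registered) == len(reference)
    sq = [sum((a - b) ** 2 for a, b in zip(p, q))
          for p, q in zip(registered, reference)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical anchors (mm): a small residual misalignment after registration.
reg = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0), (0.0, 0.0, 10.0)]
ref = [(0.5, 0.0, 0.0), (10.0, 0.5, 0.0), (0.0, 10.0, 0.5), (0.0, 0.5, 10.0)]
fre = fiducial_registration_error(reg, ref)  # in mm
```

The target registration error (TRE) is computed the same way, but at clinically relevant target points rather than at the fiducials used to drive the registration.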
Gourley, Christopher S.; Abidi, Mongi A.
Virtual reality has become a tool for use in many areas of research. We have designed and built a VR system for use in range data fusion and visualization. One major VR tool is the CAVE. This is the ultimate visualization tool, but it comes with a large price tag. Our design uses a unique CAVE whose graphics are powered by a desktop computer instead of a larger rack machine, making it much less costly. The system consists of a screen eight feet tall by twenty-seven feet wide, giving a variable field of view currently set at 160 degrees. A Silicon Graphics Indigo2 MaxImpact with the impact channel option is used for display. This gives the capability to drive three projectors at a resolution of 640 by 480 for displaying the virtual environment and one 640 by 480 display for a user control interface. This machine is also the first desktop package with built-in hardware texture mapping. This feature allows us to quickly fuse the range and intensity data and other multi-sensory data. The final goal is a complete 3D texture-mapped model of the environment. A dataglove, magnetic tracker, and spaceball are used for manipulation of the data and navigation through the virtual environment. This system gives several users the ability to interactively create 3D models from multiple range images.
Beveridge, R; Wilson, S; Coyle, D
A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities, including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses under variations in graphical complexity and style, in-game distractions, and display parameters surrounding mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances ranging up to 90%, with variations in graphical complexity having limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming. © 2016 Elsevier B.V. All rights reserved.
Feng, Kai-Ten; Tseng, Po-Hsuan; Chiu, Pei-Shuan; Yang, Jia-Lin; Chiu, Chun-Jie
With the enhanced processing capability of mobile platforms, augmented reality (AR) has been considered a promising technology for achieving enhanced user experiences (UX). Augmented reality imposes virtual information, e.g., videos and images, onto a live-view digital display. UX of the real-world environment via the display can be effectively enhanced with the adoption of interactive AR technology. Enhancement of UX can be beneficial for digital learning systems. There are existing research works based on AR targeting the design of e-learning systems. However, none of these works focuses on providing three-dimensional (3-D) object modeling for enhanced UX based on interactive AR techniques. In this paper, the 3-D interactive augmented reality-enhanced learning (IARL) system is proposed to provide enhanced UX for digital learning. The proposed IARL system consists of two major components: markerless pattern recognition (MPR) for 3-D models and velocity-based object tracking (VOT) algorithms. A realistic implementation of the proposed IARL system is conducted on Android-based mobile platforms. UX in digital learning can be greatly improved with the adoption of the proposed IARL system.
Vaughn, Jacqueline; Lister, Michael; Shaw, Ryan J
We describe a pilot study that incorporated an innovative hybrid simulation designed to increase the perception of realism in a high-fidelity simulation. Prelicensure students (N = 12) cared for a manikin in a simulation lab scenario wearing Google Glass, a wearable head device that projected video into the students' field of vision. Students reported that the simulation gave them confidence that they were developing skills and knowledge to perform necessary tasks in a clinical setting and that they met the learning objectives of the simulation. The video combined visual images and cues seen in a real patient and created a sense of realism the manikin alone could not provide.
Cho, Kit W
Words rated for their survival relevance are remembered better than words rated using other well-known memory mnemonics. This finding, known as the survival advantage effect and replicated in many studies, suggests that our memory systems are molded by natural-selection pressures. In two experiments, the present study used a visual search task to examine whether there is likewise a survival advantage for our visual systems. Participants rated words for their survival relevance or for their pleasantness before locating that object's picture in a search array with 8 or 16 objects. Although there was no difference in search times between the two rating scenarios when the set size was 8, survival processing reduced visual search times when the set size was 16. These findings reflect a search-efficiency effect and suggest that, similar to our memory systems, our visual systems are also tuned toward self-preservation.
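The "search efficiency" claim can be made concrete as a slope in ms per additional item between the two set sizes; the snippet below uses hypothetical mean response times, not the study's data, so the numbers illustrate only the pattern the abstract reports (equal at set size 8, faster for survival at set size 16):

```python
# Hypothetical mean search times (ms); illustrative only.
rt = {
    ("pleasantness", 8): 900.0, ("pleasantness", 16): 1500.0,
    ("survival", 8):     900.0, ("survival", 16):     1260.0,
}

def search_slope(condition):
    """Search-efficiency slope: extra ms per additional item in the array."""
    return (rt[(condition, 16)] - rt[(condition, 8)]) / (16 - 8)

slopes = {c: search_slope(c) for c in ("pleasantness", "survival")}
print(slopes)  # a shallower slope means more efficient search
```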
Zeng, Bowei; Meng, Fanle; Ding, Hui; Wang, Guangzhi
Using existing stereoelectroencephalography (SEEG) electrode implantation surgical robot systems, it is difficult to intuitively validate registration accuracy and display the electrode entry points (EPs) and the anatomical structure around the electrode trajectories in the patient space to the surgeon. This paper proposes a prototype system that can realize video see-through augmented reality (VAR) and spatial augmented reality (SAR) for SEEG implantation. The system helps the surgeon quickly and intuitively confirm the registration accuracy, locate EPs and visualize the internal anatomical structure in the image space and patient space. We designed and developed a projector-camera system (PCS) attached to the distal flange of a robot arm. First, system calibration is performed. Second, the PCS is used to obtain the point clouds of the surface of the patient's head, which are utilized for patient-to-image registration. Finally, VAR is produced by merging the real-time video of the patient and the preoperative three-dimensional (3D) operational planning model. In addition, SAR is implemented by projecting the planning electrode trajectories and local anatomical structure onto the patient's scalp. The error of registration, the electrode EPs and the target points are evaluated on a phantom. The fiducial registration error is [Formula: see text] mm (max 1.22 mm), and the target registration error is [Formula: see text] mm (max 1.18 mm). The projection overlay error is [Formula: see text] mm, and the TP error after the pre-warped projection is [Formula: see text] mm. The TP error caused by a surgeon's viewpoint deviation is also evaluated. The presented system can help surgeons quickly verify registration accuracy during SEEG procedures and can provide accurate EP locations and internal structural information to the surgeon. With more intuitive surgical information, the surgeon may have more confidence and be able to perform surgeries with better outcomes.
Irwin, N.H.; Berkel, J. van; Johnson, D.K.; Wylie, B.N.
Data visualization is an emerging technology with high potential for addressing the information overload problem. This project extends the data visualization work of the Navigating Science project by coupling it with more traditional information retrieval methods. A citation-derived landscape was augmented with documents using a text-based similarity measure to show viability of extension into datasets where citation lists do not exist. Landscapes, showing hills where clusters of similar documents occur, can be navigated, manipulated and queried in this environment. The capabilities of this tool provide users with an intuitive explore-by-navigation method not currently available in today's retrieval systems.
Navarro Michel, Mónica
In the last few decades there has been a wealth of literature and legislation on advance directives. As is well known, an advance directive is an instrument by which people can express their wishes as regards what treatment they should be given or, more to the point, not be given, when they are in a situation in which they cannot do so themselves. Regulations in the western world seem to promote advance directives as a way to enhance patient's autonomy in the context of human rights, and the media has presen...
Nijholt, Antinus; Dijk, Esko O.; Lemmens, Paul M.C.; Luitjens, S.B.
The intention of the symposium on Haptic and Audio-visual stimuli at the EuroHaptics 2010 conference is to deepen the understanding of the effect of combined Haptic and Audio-visual stimuli. The knowledge gained will be used to enhance experiences and interactions in daily life. To this end, a
Researchers in many disciplines have started using the tool of Virtual Reality (VR) to gain new insights into problems in their respective disciplines. Recent advances in computer graphics, software and hardware technologies have created many opportunities for VR systems, advanced scientific and engineering applications being among them. In Geometronics, photogrammetry and remote sensing are generally used for management of spatial data inventory, and VR technology can be suitably applied to the same task. This research demonstrates the usefulness of VR technology for inventory management by taking roadside features as a case study. Management of a roadside feature inventory involves positioning and visualization of the features. This research has developed a methodology to demonstrate how photogrammetric principles can be used to position the features using video-logging images and GPS camera positioning, and how image analysis can help produce appropriate texture for building the VR scene, which can then be visualized in a Cave Augmented Virtual Environment (CAVE). VR modeling was implemented in two stages to demonstrate different approaches for modeling the VR scene. A simulated highway scene was implemented with the brute-force approach, while modeling software was used to model the real-world scene using feature positions produced in this research. The first approach demonstrates an implementation of the scene by writing C++ code that includes a multi-level wand menu enabling the user to interact with the scene. The interactions include editing the features inside the CAVE display, navigating inside the scene, and performing limited geographic analysis. The second approach demonstrates creation of a VR scene for a real roadway environment using feature positions determined in this research. The scene looks realistic, with textures from the real site mapped onto the geometry of the scene. Remote sensing and
Vora, Jeenal; Nair, Santosh; Gramopadhye, Anand K; Duchowski, Andrew T; Melloy, Brian J; Kanki, Barbara
The aircraft maintenance industry is a complex system consisting of several interrelated human and machine components. Recognizing this, the Federal Aviation Administration (FAA) has pursued human factors related research. In the maintenance arena the research has focused on the aircraft inspection process and the aircraft inspector. Training has been identified as the primary intervention strategy to improve the quality and reliability of aircraft inspection. If training is to be successful, it is critical that we provide aircraft inspectors with appropriate training tools and environment. In response to this need, the paper outlines the development of a virtual reality (VR) system for aircraft inspection training. VR has generated much excitement but little formal proof that it is useful. However, since VR interfaces are difficult and expensive to build, the computer graphics community needs to be able to predict which applications will benefit from VR. To address this important issue, this research measured the degree of immersion and presence felt by subjects in a virtual environment simulator. Specifically, it conducted two controlled studies using the VR system developed for visual inspection task of an aft-cargo bay at the VR Lab of Clemson University. Beyond assembling the visual inspection virtual environment, a significant goal of this project was to explore subjective presence as it affects task performance. The results of this study indicated that the system scored high on the issues related to the degree of presence felt by the subjects. As a next logical step, this study, then, compared VR to an existing PC-based aircraft inspection simulator. The results showed that the VR system was better and preferred over the PC-based training tool.
Perception in natural environments is inseparably linked to motor action. In fact, we consider action an essential component of perceptual representation. But these representations are inherently difficult to investigate: traditional experimental setups are limited by the lack of flexibility in manipulating spatial features. To overcome these problems, virtual reality (VR) experiments seem to be a feasible alternative, but these setups typically lack ecological realism due to the use of "unnatural" interface devices (joystick). Thus, we propose an experimental apparatus which combines multisensory perception and action in an ecologically realistic way. The basis is a 10-foot hollow sphere (VirtuSphere) placed on a platform that allows free rotation. A subject inside can walk in any direction for any distance, immersed in a virtual environment. Both the rotation of the sphere and the movement of the subject's head are tracked to update the subject's view within the VR environment presented on a head-mounted display. Moreover, auditory features are dynamically processed, taking greatest care of exact alignment of sound sources and visual objects using ambisonic-encoded audio processed by an HRTF filterbank. We present empirical data that confirm the ecological realism of this setup and discuss its suitability for multi-sensory-motor research.
Kim, Won S.; Schenker, Paul
A force-reflecting teleoperation training simulator with a high-fidelity real-time graphics display has been developed for operator training. A novel feature of this simulator is that it enables the operator to feel contact forces and torques through a force-reflecting controller during the execution of the simulated peg-in-hole task, providing the operator with the feel of visual and kinesthetic force virtual reality. A peg-in-hole task is used in our simulated teleoperation trainer as a generic teleoperation task. A quasi-static analysis of a two-dimensional peg-in-hole task model has been extended to a three-dimensional model analysis to compute contact forces and torques for a virtual realization of kinesthetic force feedback. The simulator allows the user to specify force reflection gains and stiffness (compliance) values of the manipulator hand for both the three translational and the three rotational axes in Cartesian space. Three viewing modes are provided for graphics display: single view, two split views, and stereoscopic view.
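The user-specified stiffness values described above lend themselves to a simple spring-law sketch. The following Python fragment is our illustration, not the paper's quasi-static peg-in-hole model; the stiffness and penetration values are assumed for the example:

```python
import numpy as np

# Per-axis Cartesian stiffness of the simulated manipulator hand (N/m).
# Values are illustrative; the simulator lets the operator choose them.
stiffness = np.array([800.0, 800.0, 1200.0])

# Assumed penetration of the peg into the hole wall along each axis (m).
penetration = np.array([0.002, 0.0, 0.001])

# Spring-law contact force, reflected to the operator through
# the force-reflecting hand controller.
force = stiffness * penetration
print(force)  # → [1.6 0.  1.2]
```

A full implementation would also compute contact torques and apply the analogous rotational stiffness values, as the abstract describes for the three rotational axes.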
Leinen, Philipp; Green, Matthew F B; Esat, Taner; Wagner, Christian; Tautz, F Stefan; Temirov, Ruslan
Controlled manipulation of single molecules is an important step towards the fabrication of single molecule devices and nanoscale molecular machines. Currently, scanning probe microscopy (SPM) is the only technique that facilitates direct imaging and manipulations of nanometer-sized molecular compounds on surfaces. The technique of hand-controlled manipulation (HCM) introduced recently in Beilstein J. Nanotechnol. 2014, 5, 1926-1932 simplifies the identification of successful manipulation protocols in situations when the interaction pattern of the manipulated molecule with its environment is not fully known. Here we present a further technical development that substantially improves the effectiveness of HCM. By adding Oculus Rift virtual reality goggles to our HCM set-up we provide the experimentalist with 3D visual feedback that displays the currently executed trajectory and the position of the SPM tip during manipulation in real time, while simultaneously plotting the experimentally measured frequency shift (Δf) of the non-contact atomic force microscope (NC-AFM) tuning fork sensor as well as the magnitude of the electric current (I) flowing between the tip and the surface. The advantages of the set-up are demonstrated by applying it to the model problem of the extraction of an individual PTCDA molecule from its hydrogen-bonded monolayer grown on Ag(111) surface.
Belen G. Rodriguez-Santana; Amilcar Meneses Viveros; Blanca Esther Carvajal-Gamez; Diana Carolina Trejo-Osorio
Augmented reality applications can serve as teaching tools in different contexts of use. Augmented reality applications on mobile devices can help to provide tourist information on cities or to give information on visits to museums. For example, during visits to museums of natural history, augmented reality applications on mobile devices can let visitors interact with the skeleton of a whale. However, rendering heavy models can be computationally infeasible on device...
Koeva, Mila; Luleva, Mila; Maldjanski, Plamen
Development and virtual representation of 3D models of Cultural Heritage (CH) objects has triggered great interest over the past decade. The main reason for this is the rapid development in the fields of photogrammetry and remote sensing, laser scanning, and computer vision. The advantages of using 3D models for restoration, preservation, and documentation of valuable historical and architectural objects have been demonstrated numerous times by scientists in the field. Moreover, 3D model visualization in virtual reality has been recognized as an efficient, fast, and easy way of representing a variety of objects worldwide for present-day users, who have stringent requirements and high expectations. However, the main focus of recent research is the visual, geometric, and textural characteristics of a single concrete object, while integration of large numbers of models with additional information (such as a historical overview, detailed description, and location) is missing. Such integrated information can be beneficial not only for tourism but also for accurate documentation. For that reason, we demonstrate in this paper an integration of high-resolution spherical panoramas, a variety of maps, GNSS, sound, video, and text information for the representation of numerous cultural heritage objects. These are then displayed in a web-based portal with an intuitive interface. Users have the opportunity to choose freely from the provided information and decide for themselves what is interesting to visit. Based on the created web application, we provide suggestions and guidelines for similar studies. We selected objects located in Bulgaria, a country with thousands of years of history and cultural heritage dating back to ancient civilizations. The methods used in this research are applicable to any type of spherical or cylindrical images and can be easily followed and applied in various domains. After a visual and metric assessment of the panoramas and the evaluation of
Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp
Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. Visual working memory allows for maintaining such visual information in the mind
... component of enhancer function, their expression has not been broadly analyzed at the single-cell level via imaging techniques. This protocol describes a method to image eRNA in single cells by in situ hybridization followed by tyramide signal amplification...
Ting, Chih-Chung; Wu, Bing-Fei; Chung, Meng-Liang; Chiu, Chung-Cheng; Wu, Ya-Ching
Image enhancement techniques primarily improve the contrast of an image to lend it a better appearance. One of the popular enhancement methods is histogram equalization (HE) because of its simplicity and effectiveness. However, it is rarely applied to consumer electronics products because it can cause excessive contrast enhancement and feature loss problems. These problems make the images processed by HE look unnatural and introduce unwanted artifacts in them. In this study, a visual contrast enhancement algorithm (VCEA) based on HE is proposed. VCEA considers the requirements of the human visual perception in order to address the drawbacks of HE. It effectively solves the excessive contrast enhancement problem by adjusting the spaces between two adjacent gray values of the HE histogram. In addition, VCEA reduces the effects of the feature loss problem by using the obtained spaces. Furthermore, VCEA enhances the detailed textures of an image to generate an enhanced image with better visual quality. Experimental results show that images obtained by applying VCEA have higher contrast and are more suited to human visual perception than those processed by HE and other HE-based methods. PMID:26184219
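Since VCEA builds directly on histogram equalization, a minimal sketch of classic HE helps fix the baseline. The gap-adjustment step that distinguishes VCEA is not specified in the abstract, so only plain HE is shown; the function name and test image are ours:

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Classic histogram equalization: map each gray value through
    the normalized cumulative histogram to spread the dynamic range."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]  # normalize cumulative histogram to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(img.dtype)
    return lut[img]

# A low-contrast image occupying only gray values 100..120
rng = np.random.default_rng(0)
img = rng.integers(100, 121, size=(64, 64), dtype=np.uint8)
out = equalize_histogram(img)
print(img.min(), img.max(), "->", out.min(), out.max())
```

The stretched output illustrates the excessive-contrast side effect the abstract criticizes: adjacent input gray levels can be pushed far apart, which is exactly the spacing that VCEA adjusts.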
Serubugo, Sule; Skantarova, Denisa; Evers, Nicolaj
This paper describes our demonstration of a walkable self-overlapping maze and its corresponding map to facilitate asymmetric collaboration for room-scale virtual reality setups in public places.
Palmer, Eric; Kwon, TaeKyu; Pizlo, Zygmunt
Virtual reality applications provide an opportunity to test human vision in well-controlled scenarios that would be difficult to generate in real physical spaces. This paper presents a study intended to evaluate the importance of the regularity priors used by the human visual system. Using a CAVE simulation, subjects viewed virtual objects in a variety of experimental manipulations. In the first experiment, the subject was asked to count the objects in a scene that was viewed either right-side-up or upside-down for 4 seconds. The subject counted more accurately in the right-side-up condition regardless of the presence of binocular disparity or color. In the second experiment, the subject was asked to reconstruct the scene from a different viewpoint. Reconstructions were accurate, but the position and orientation error was twice as high when the scene was rotated by 45°, compared to 22.5°. Similarly to the first experiment, there was little difference between monocular and binocular viewing. In the third experiment, the subject was asked to adjust the position of one object to match the depth extent to the frontal extent among three objects. Performance was best with symmetrical objects and became poorer with asymmetrical objects and poorest with only small circular markers on the floor. Finally, in the fourth experiment, we demonstrated reliable performance in monocular and binocular recovery of 3D shapes of objects standing naturally on the simulated horizontal floor. Based on these results, we conclude that gravity, horizontal ground, and symmetry priors play an important role in veridical perception of scenes.
Human vision and memory are powerful cognitive faculties by which we understand the world. However, they are imperfect and, further, subject to deterioration with age. We propose a cognitive-inspired computational model, Extended Visual Memory (EVM), within the Computer-Aided Vision (CAV) framework, to assist humans in vision-related tasks. We exploit wearable sensors such as cameras, GPS, and ambient computing facilities to complement a user's vision and memory functions by answering four types of queries central to visual activities, namely Retrieval, Understanding, Navigation, and Search. Learning in EVM relies on both frequency-based and attention-driven mechanisms to store view-based visual fragments (VF), which are abstracted into high-level visual schemas (VS), both in visual long-term memory. During inference, visual short-term memory plays a key role in computing visual similarity between the input (or its schematic representation) and VF, exemplified from VS when necessary. We present an assisted-living scenario, termed EViMAL (Extended Visual Memory for Assisted Living), targeted at mild dementia patients, which provides novel functions such as hazard warning, visual reminders, object look-up, and event review. We envisage EVM having potential benefits in alleviating memory loss, improving recall precision, and enhancing memory capacity through external support.
Parsell, G; Gibbs, T; Bligh, J
Many changes in the delivery of healthcare in the UK have highlighted the need for healthcare professionals to learn to work together as teams for the benefit of patients. Whatever the profession or level, whether for postgraduate education and training, continuing professional development, or for undergraduates, learners should have an opportunity to learn about and with, other healthcare practitioners in a stimulating and exciting way. Learning to understand how people think, feel, and react, and the parts they play at work, both as professionals and individuals, can only be achieved through sensitive discussion and exchange of views. Teaching and learning methods must provide opportunities for this to happen. This paper describes three small-group teaching techniques which encourage a high level of learner collaboration and team-working. Learning content is focused on real-life health-care issues and strong visual images are used to stimulate lively discussion and debate. Each description includes the learning objectives of each exercise, basic equipment and resources, and learning outcomes.
Rostand, N D; Eglantine, H; Jerôme, L
Today, we are witnessing increasing complexity in transport systems driven by requirements of safety, security, reliability, and efficiency. Such transport is generally equipped with drive systems, and engine manufacturers must nevertheless meet energy-efficiency performance requirements throughout their operations. To this end, this article proposes a performance-monitoring solution for a large fleet of engines in operation. It uses as reference a pre-calibrated physical model developed by the engine manufacturer to meet the performance objectives. The physical model is first decomposed into critical performance modules, and is then updated on current observations extracted at specific predefined operating conditions in order to derive the residual-error status of each engine tested. Through a process of standardization of the remaining contextual differences, the solution offers a synthesis map to visualize the evolution of each engine's performance throughout its operations. This article describes the theoretical methodology of implementation, based mainly on universal mathematical foundations, and argues for the value of its industrialization in light of the proactive findings.
Isabel Cristina Siqueira da Silva
The evolution of technology has changed the face of education, especially when combined with appropriate pedagogical bases. This combination has created opportunities for innovation, adding quality to teaching through new perspectives on traditional classroom methods. In the health field in particular, augmented reality and interaction-design techniques can assist the teacher in presenting theoretical concepts and/or procedures that require training, such as specific medical procedures. Besides, visualization of and interaction with health data from different sources and in different formats helps to identify hidden patterns or anomalies, increases flexibility in the search for certain values, allows the comparison of different units to obtain relative differences in quantities, provides human interaction in real time, etc. In this respect, interactive visualization techniques such as augmented and virtual reality can support the process of knowledge discovery in medical and biomedical databases. This work discusses aspects of the use of augmented reality and interaction design as tools for teaching anatomy and for knowledge discovery, proposing a case study based on a mobile application that can display targeted anatomical parts in high resolution and with detail of their parts.
Niemczyk, Kazimierz; Kucharski, Tomasz; Kujawinska, Malgorzata; Bruzgielewicz, Antoni
Surgery increasingly requires extensive support from imaging technologies to improve the effectiveness and safety of operations. One important task is to enhance the visualization of quasi-phase (transparent) 3D structures. Those structures have very low contrast, which makes differentiating tissues in the field of view very difficult and can leave the surgeon extremely uncertain during an operation. This problem arises in operations on the inner ear, during which the physician has to perform cuts at specific places in quasi-transparent velums. Conventionally, during such operations the medical doctor views the operating field through a stereoscopic microscope. In this paper we propose a 3D visualization system based on a helmet-mounted display (HMD). Two CCD cameras placed at the output of the microscope acquire stereo pairs of images. The images are processed in real time with the goal of enhancing the quasi-phase structures. The main task is to create an algorithm that is insensitive to changes in intensity distribution; a disadvantage of existing algorithms is their lack of adaptation to reflections and shadows occurring in the field of view. The processed images from the left and right channels are overlaid on the actual images and displayed on the LCDs of the HMD, so that the physician observes a stereoscopic operating scene with the places of special interest indicated. The authors present the hardware, the procedures applied, and initial results of inner-ear structure visualization. Several problems connected with processing stereo-pair images are discussed.
Ohno, Nobuaki; Ohtani, Hiroaki; Horiuchi, Ritoku; Matsuoka, Daisuke
Particle kinetic effects play an important role in breaking the frozen-in condition and exciting collisionless magnetic reconnection in high-temperature plasmas. Because this effect originates from complex thermal motion near the reconnection point, it is very important to examine particle trajectories using scientific visualization techniques, especially in the presence of plasma instability. We developed an interactive visualization environment for particle trajectories in time-varying electromagnetic fields in a CAVE-type virtual reality system, based on VFIVE, interactive visualization software for the CAVE system. From analysis of ion trajectories using the particle simulation data, it was found that time-varying electromagnetic fields around the reconnection region accelerate ions toward the downstream region.
.... As an example of the type of human performance studies needed to determine the useful specifications of augmented reality displays, an optical see-through display was used in an ATC Tower simulation...
Anderson-Hanley, Cay; Snyder, Amanda L; Nimon, Joseph P; Arciero, Paul J
This study examined the effect of virtual social facilitation and competitiveness on exercise effort in exergaming older adults. Fourteen exergaming older adults participated. Competitiveness was assessed prior to the start of exercise. Participants were trained to ride a "cybercycle," a virtual reality-enhanced stationary bike with interactive competition. After establishing a cybercycling baseline, competitive avatars were introduced. Pedaling effort (watts) was assessed. Repeated measures ANOVA revealed a significant group (high vs low competitiveness) × time (pre- to post-avatar) interaction (F[1,12] = 13.1, P = 0.003). Virtual social facilitation increased exercise effort among more competitive exercisers. Exercise programs that match competitiveness may maximize exercise effort.
Lupyan, Gary; Spivey, Michael J
Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements of following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.
Carley, S.; Porter, A.L.; Rafols, I.; Leydesdorff, L.
The purpose of this study is to modernize previous work on science overlay maps by updating the underlying citation matrix, generating new clusters of scientific disciplines, enhancing visualizations, and providing more accessible means for analysts to generate their own maps.
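The citation matrix underlying such overlay maps is typically converted into a similarity matrix between disciplines before positioning nodes. A minimal sketch with a toy matrix (the counts are invented for illustration; real maps use Web of Science subject-category matrices):

```python
import numpy as np

# Toy citation matrix: rows = citing disciplines, columns = cited
# disciplines. Entry C[i, j] counts citations from discipline i to j.
C = np.array([
    [120,  30,   5],
    [ 25, 200,  40],
    [  2,  35,  90],
], dtype=float)

# Cosine similarity between disciplines' citing profiles: normalize
# each row to unit length, then take pairwise dot products.
norms = np.linalg.norm(C, axis=1, keepdims=True)
S = (C / norms) @ (C / norms).T
print(np.round(S, 2))
```

The resulting similarity matrix is what clustering and layout algorithms consume; updating the citation matrix, as the study describes, propagates directly into new clusters and node positions.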
Pandya, Abhishek; Mulye, Aniket; Teoh, Soon Tee
The use of timeline to visualize time-series data is one of the most intuitive and commonly used methods, and is used for widely-used applications such as stock market data visualization, and tracking of poll data of election candidates over time. While useful, these timeline visualizations are lacking in contextual information of events which are related or cause changes in the data. We have developed a system that enhances timeline visualization with display of relevant news events and their corresponding images, so that users can not only see the changes in the data, but also understand the reasons behind the changes. We have also conducted a user study to test the effectiveness of our ideas.
We followed up a series of 23 Parkinson's disease (PD) patients who had performed an immersive virtual reality (VR) protocol eight years before. On that occasion, six patients incidentally described visual hallucinations (VH), with occurrences of images not included in the virtual environment. Curiously, in the following years only these patients reported the appearance of VH later in their clinical history, while the rest of the group did not. Even considering the limited sample size, we may argue that immersive VR systems can induce unpleasant effects in PD patients who are predisposed to cognitive impairment.
Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans
From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wood, Justin N; Prasad, Aditya; Goldman, Jason G; Wood, Samantha M W
To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks' object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.
Myrcha, Julian; Trzciński, Tomasz; Rokita, Przemysław
Analyzing massive amounts of data gathered during many high energy physics experiments, including but not limited to the LHC ALICE detector experiment, requires efficient and intuitive methods of visualisation. One of the possible approaches to that problem is stereoscopic 3D data visualisation. In this paper, we propose several methods that provide high quality data visualisation and we explain how those methods can be applied in virtual reality headsets. The outcome of this work is easily applicable to many real-life applications needed in high energy physics and can be seen as a first step towards using fully immersive virtual reality technologies within the frames of the ALICE experiment.
Jun Il Kang
Acetylcholine (ACh) contributes to learning processes by modulating cortical plasticity in terms of the intensity of neuronal activity and the selectivity properties of cortical neurons. However, it is not known whether ACh induces long-term effects within the primary visual cortex (V1) that could sustain visual learning mechanisms. In the present study we analyzed visual evoked potentials (VEPs) in V1 of rats during a 4-8 h period after coupling visual stimulation to an intracortical injection of the ACh analog carbachol or to stimulation of the basal forebrain. To clarify the action of ACh on VEP activity in V1, we individually pre-injected muscarinic (scopolamine), nicotinic (mecamylamine), alpha7 (methyllycaconitine), and NMDA (CPP) receptor antagonists before carbachol infusion. Stimulation of the cholinergic system paired with visual stimulation significantly increased VEP amplitude (56%) over a 6 h period. Pre-treatment with scopolamine, mecamylamine, and CPP completely abolished this long-term enhancement, while alpha7 inhibition induced an instant increase in VEP amplitude. This suggests a role for ACh in facilitating responsiveness to visual stimuli through LTP-like mechanisms involving nicotinic and muscarinic receptors, with an interaction of NMDA transmission in the visual cortex.
Sullivan, Briana; Ware, Colin; Plumlee, Matthew
3D interactive virtual reality museum exhibits should be easy to use, entertaining, and informative. If the interface is intuitive, it will allow the user more time to learn the educational content of the exhibit. This research deals with interface issues concerning activating audio descriptions of images in such exhibits while the user is…
Parks, Nathan A; Beck, Diane M; Kramer, Arthur F
The perceptual load theory of attention proposes that the degree to which visual distractors are processed is a function of the attentional demands of a task-greater demands increase filtering of irrelevant distractors. The spatial configuration of such filtering is unknown. Here, we used steady-state visual evoked potentials (SSVEPs) in conjunction with time-domain event-related potentials (ERPs) to investigate the distribution of load-induced distractor suppression and task-relevant enhancement in the visual field. Electroencephalogram (EEG) was recorded while subjects performed a foveal go/no-go task that varied in perceptual load. Load-dependent distractor suppression was assessed by presenting a contrast reversing ring at one of three eccentricities (2, 6, or 11°) during performance of the go/no-go task. Rings contrast reversed at 8.3 Hz, allowing load-dependent changes in distractor processing to be tracked in the frequency-domain. ERPs were calculated to the onset of stimuli in the load task to examine load-dependent modulation of task-relevant processing. Results showed that the amplitude of the distractor SSVEP (8.3 Hz) was attenuated under high perceptual load (relative to low load) at the most proximal (2°) eccentricity but not at more eccentric locations (6 or 11°). Task-relevant ERPs revealed a significant increase in N1 amplitude under high load. These results are consistent with a center-surround configuration of load-induced enhancement and suppression in the visual field.
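Tracking distractor processing in the frequency domain, as described above, amounts to reading the amplitude spectrum at the tagging frequency. A minimal sketch on synthetic data (the sampling rate, recording length, and signal amplitudes are assumed; 8.3 Hz is the reversal frequency from the study):

```python
import numpy as np

fs = 500.0   # sampling rate in Hz (assumed)
f_tag = 8.3  # contrast-reversal frequency of the distractor ring
t = np.arange(0, 10, 1 / fs)  # 10 s of EEG -> 0.1 Hz resolution,
                              # so 8.3 Hz falls exactly on an FFT bin

# Synthetic EEG: an 8.3 Hz SSVEP of amplitude 2 buried in noise
rng = np.random.default_rng(1)
eeg = 2.0 * np.sin(2 * np.pi * f_tag * t) + rng.normal(0, 1.0, t.size)

# Single-sided amplitude spectrum; read out the tagging-frequency bin
spec = np.abs(np.fft.rfft(eeg)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
ssvep_amp = spec[np.argmin(np.abs(freqs - f_tag))]
print(f"SSVEP amplitude at {f_tag} Hz: {ssvep_amp:.2f}")
```

Comparing this amplitude between low- and high-load blocks is, in essence, the load-dependent suppression measure the study reports.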
Full Text Available We examined the crossmodal effect of the presentation of a simultaneous sound on visual detection and discrimination sensitivity using the equivalent noise paradigm (Dosher & Lu, 1998). In each trial, a tilted Gabor patch was presented in either the first or second of two intervals consisting of dynamic 2D white noise at one of seven possible contrast levels. The results revealed that the sensitivity of participants' visual detection and discrimination performance was enhanced by the presentation of a simultaneous sound, though only close to the noise level at which participants' target contrast thresholds started to increase with increasing noise contrast. A further analysis of the psychometric function at this noise level revealed that the increase in sensitivity could not be explained by a reduction of participants' uncertainty regarding the onset time of the visual target. We suggest that this crossmodal facilitatory effect may be accounted for by perceptual enhancement elicited by a simultaneously presented sound, and that the crossmodal facilitation was easier to observe when the visual system encountered a level of noise close to the level of internal noise embedded within the system.
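The equivalent-noise logic in the abstract above can be sketched numerically: in the standard model the contrast threshold stays roughly flat until external noise exceeds the observer's internal (equivalent) noise, then rises with it, and a crossmodal benefit near that knee would show up as a lower efficiency constant. The following is a minimal illustration; all parameter values are hypothetical and not taken from the study.

```python
import numpy as np

def contrast_threshold(noise_contrast, n_eq, k):
    # Equivalent-noise model: thresholds are flat while external noise
    # is below the internal (equivalent) noise n_eq, then rise with it.
    return k * np.sqrt(noise_contrast ** 2 + n_eq ** 2)

# Seven external noise contrast levels, as in the paradigm (values invented)
noise_levels = np.array([0.0, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64])
no_sound = contrast_threshold(noise_levels, n_eq=0.08, k=2.0)
# A facilitatory sound could appear as a smaller efficiency constant k
with_sound = contrast_threshold(noise_levels, n_eq=0.08, k=1.8)
```

Plotting both curves on log-log axes would show the characteristic flat-then-rising threshold-versus-noise function, with the sound condition shifted downward.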
Kim, Aram; Zhou, Zixuan; Kretch, Kari S; Finley, James M
The ability to successfully navigate obstacles in our environment requires integration of visual information about the environment with estimates of our body's state. Previous studies have used partial occlusion of the visual field to explore how information about the body and impending obstacles is integrated to mediate a successful clearance strategy. However, because these manipulations often remove information about both the body and the obstacle, it remains to be seen how information about the lower extremities alone is utilized during obstacle crossing. Here, we used an immersive virtual reality (VR) interface to explore how visual feedback of the lower extremities influences obstacle crossing performance. Participants wore a head-mounted display while walking on a treadmill and were instructed to step over obstacles in a virtual corridor in four different feedback trials. The trials involved: (1) no visual feedback of the lower extremities, (2) an endpoint-only model, (3) a link-segment model, and (4) a volumetric multi-segment model. We found that, compared to no model, the volumetric model improved success rate, led participants to place their trailing foot before crossing and leading foot after crossing more consistently, and brought the leading foot closer to the obstacle after crossing. This knowledge is critical for the design of obstacle negotiation tasks in immersive virtual environments, as it may provide information about the fidelity necessary to reproduce ecologically valid practice environments.
Bach, Benjamin; Sicat, Ronell; Beyer, Johanna; Cordeil, Maxime; Pfister, Hanspeter
We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in-situ at the spatial position of the 3D hologram. The tablet is able to interact with 3D content through touch, spatial positioning, and tangible markers; however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that match human perceptual and interaction capabilities better to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each task having different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still fastest and most precise in almost all cases.
Full Text Available Cay Anderson-Hanley (1,2), Amanda L Snyder (1), Joseph P Nimon (1), Paul J Arciero (1,2). 1 Healthy Aging and Neuropsychology Lab, Department of Psychology, Union College, Schenectady, NY, USA; 2 Health and Exercise Sciences Department, Skidmore College, Saratoga Springs, NY, USA. Abstract: This study examined the effect of virtual social facilitation and competitiveness on exercise effort in exergaming older adults. Fourteen exergaming older adults participated. Competitiveness was assessed prior to the start of exercise. Participants were trained to ride a "cybercycle," a virtual reality-enhanced stationary bike with interactive competition. After establishing a cybercycling baseline, competitive avatars were introduced. Pedaling effort (watts) was assessed. Repeated measures ANOVA revealed a significant group (high vs. low competitiveness) × time (pre- to post-avatar) interaction (F[1,12] = 13.1, P = 0.003). Virtual social facilitation increased exercise effort among more competitive exercisers. Exercise programs that match competitiveness may maximize exercise effort. Keywords: exercise, aging, virtual reality, competitiveness, social facilitation, exercise intensity
Full Text Available Our research deals with the development of a new type of game-based learning environment: an MMORPG (massively multiplayer online role-playing game) based on mixed reality, applied in the archaeological domain. In this paper, we propose a learning scenario that enhances players' motivation thanks to individual, collaborative and social activities and that offers a continuous experience between the virtual environment and real places (archaeological sites, museum). After describing the challenge of a rich multidisciplinary approach involving both computer scientists and archaeologists, we present two types of game: multiplayer online role-playing games and mixed reality games. We build on the specificities of these games to make the design choices described in the paper. We also present three modular features we have developed to independently support three activities of the scenario. The proposed approach aims at raising awareness among people of the scientific approach in Archaeology, by providing them information in the virtual environment and encouraging them to visit real sites. We finally discuss the issues raised by this work, such as the tensions between the perceived individual, team and community utilities, as well as the choice of the entry point into the learning scenario (real or virtual) for the players' involvement in the game.
Souza, Alessandra S; Rerko, Laura; Oberauer, Klaus
Visual working memory (VWM) has a limited capacity. This limitation can be mitigated by the use of focused attention: if attention is drawn to the relevant working memory content before test, performance improves (the so-called retro-cue benefit). This study tests 2 explanations of the retro-cue benefit: (a) focused attention protects memory representations from interference by visual input at test, and (b) focusing attention enhances retrieval. Across 6 experiments using color recognition and color reproduction tasks, we varied the amount of color interference at test and the delay between a retrieval cue (i.e., the retro-cue) and the memory test. Retro-cue benefits were larger when the memory test introduced interfering visual stimuli, showing that the retro-cue effect is due in part to protection from visual interference. However, when visual interference was held constant, retro-cue benefits were still obtained whenever the retro-cue enabled retrieval of an object from VWM but delayed response selection. Our results show that accessible information in VWM might be lost in the process of testing memory because of visual interference and incomplete retrieval. This is not an inevitable state of affairs, though: focused attention can be used to get the most out of VWM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Delgado, Francisco J.; Noyes, Matthew
Our Hybrid Reality and Advanced Operations Lab is developing incredibly realistic and immersive systems that could be used to provide training, support engineering analysis, and augment data collection for various human performance metrics at NASA. To get a better understanding of what Hybrid Reality is, let's go through the two most commonly known types of immersive realities: Virtual Reality and Augmented Reality. Virtual Reality creates immersive scenes that are completely made up of digital information. This technology has been used to train astronauts at NASA, during teleoperation of remote assets (arms, rovers, robots, etc.), and in other activities. One challenge with Virtual Reality is that if you are using it for real-time applications (like landing an airplane), the information used to create the virtual scenes can be old (i.e., visualized long after physical objects moved in the scene) and not accurate enough to land the airplane safely. This is where Augmented Reality comes in. Augmented Reality takes real-time environment information (from a camera or a see-through window) and places digitally created information into the scene so that it matches the video/glass information. Augmented Reality enhances real environment information collected with a live sensor or viewport (e.g., camera, window, etc.) with the information-rich visualization provided by Virtual Reality. Hybrid Reality takes Augmented Reality even further, by creating a higher level of immersion where interactivity can take place. Hybrid Reality takes Virtual Reality objects and a trackable, physical representation of those objects, places them in the same coordinate system, and allows people to interact with both objects' representations (virtual and physical) simultaneously. After a short period of adjustment, the individuals begin to interact with all the objects in the scene as if they were real-life objects. The ability to physically touch and interact with digitally created
Estapa, Anne; Nadolny, Larysa
The purpose of the study was to assess student achievement and motivation during a high school augmented reality mathematics activity focused on dimensional analysis. Included in this article is a review of the literature on the use of augmented reality in mathematics and the combination of print with augmented reality, also known as interactive…
Stone, Scott A; Tata, Matthew S
Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting salient visual events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Future successes are probable, as neuromorphic devices are likely to become faster and smaller, making this system much more feasible.
Scott A Stone
Full Text Available Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting salient visual events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Future successes are probable, as neuromorphic devices are likely to become faster and smaller, making this system much more feasible.
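As a rough sketch of the kind of visual-to-auditory mapping such a system performs (the device's actual processing pipeline is not specified in the abstract), one can map an event's sensor x-coordinate to a stereo pan position and infer left/right motion from the drift of event coordinates over time. The sensor width matches the DAVIS 240B's 240 pixel columns; the event format and the linear pan mapping are assumptions for illustration.

```python
import numpy as np

SENSOR_WIDTH = 240  # DAVIS 240B has a 240 x 180 pixel array

def pixel_to_pan(x):
    # Map an event's x-coordinate (0..239) to a stereo pan in [-1, 1];
    # a real system would likely use HRTFs for proper spatialization.
    return 2.0 * x / (SENSOR_WIDTH - 1) - 1.0

def motion_direction(events):
    # events: (timestamp, x) pairs; the sign of the fitted slope of x
    # over time indicates left or right motion.
    t = np.array([e[0] for e in events], dtype=float)
    x = np.array([e[1] for e in events], dtype=float)
    slope = np.polyfit(t, x, 1)[0]
    return "right" if slope > 0 else "left"
```

For example, a burst of events drifting toward higher x-coordinates would be rendered as a sound panning to the right.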
Stenner, Max-Philipp; Bauer, Markus; Haggard, Patrick; Heinze, Hans-Jochen; Dolan, Ray
The perceived intensity of sensory stimuli is reduced when these stimuli are caused by the observer's actions. This phenomenon is traditionally explained by forward models of sensory action-outcome, which arise from motor processing. Although these forward models critically predict anticipatory modulation of sensory neural processing, neurophysiological evidence for anticipatory modulation is sparse and has not been linked to perceptual data showing sensory attenuation. By combining a psychophysical task involving contrast discrimination with source-level time-frequency analysis of MEG data, we demonstrate that the amplitude of alpha-oscillations in visual cortex is enhanced before the onset of a visual stimulus when the identity and onset of the stimulus are controlled by participants' motor actions. Critically, this prestimulus enhancement of alpha-amplitude is paralleled by psychophysical judgments of a reduced contrast for this stimulus. We suggest that alpha-oscillations in visual cortex preceding self-generated visual stimulation are a likely neurophysiological signature of motor-induced sensory anticipation and mediate sensory attenuation. We discuss our results in relation to proposals that attribute generic inhibitory functions to alpha-oscillations in prioritizing and gating sensory information via top-down control.
Mattina, Brendan Casey
Network analysts have long used two-dimensional security visualizations to make sense of network data. As networks grow larger and more complex, two-dimensional visualizations become more convoluted, potentially compromising user situational awareness of cyber threats. To combat this problem, augmented reality (AR) can be employed to visualize data within a cyber-physical context to restore user perception and improve comprehension; thereby, enhancing cyber situational awareness. Multiple gen...
Lee, Sangho; Suh, Jangwon; Park, Hyeong-Dong
Boring logs are widely used in geological field studies, since the data describe various attributes of underground and surface environments. However, it is difficult to manage multiple boring logs in the field, as conventional management and visualization methods are not suitable for integrating and combining large data sets. We developed an iPad application that enables its user to search boring logs rapidly and visualize them using the augmented reality (AR) technique. For the development of the application, a standard borehole database appropriate for a mobile borehole database management system was designed. The application consists of three modules: an AR module, a map module, and a database module. The AR module superimposes borehole data on camera imagery as viewed by the user and provides intuitive visualization of borehole locations. The map module shows the locations of corresponding borehole data on a 2D map with additional map layers. The database module provides data management functions over large borehole databases for the other modules. A field survey was also carried out using more than 100,000 borehole records.
Pritchard, Stephen C; Zopf, Regine; Polito, Vince; Kaplan, David M; Williams, Mark A
The concept of self-representation is commonly decomposed into three component constructs (sense of embodiment, sense of agency, and sense of presence), and each is typically investigated separately across different experimental contexts. For example, embodiment has been explored in bodily illusions; agency has been investigated in hypnosis research; and presence has been primarily studied in the context of Virtual Reality (VR) technology. Given that each component involves the integration of multiple cues within and across sensory modalities, they may rely on similar underlying mechanisms. However, the degree to which this may be true remains unclear when they are independently studied. As a first step toward addressing this issue, we manipulated a range of cues relevant to these components of self-representation within a single experimental context. Using consumer-grade Oculus Rift VR technology, and a new implementation of the Virtual Hand Illusion, we systematically manipulated visual form plausibility, visual-tactile synchrony, and visual-proprioceptive spatial offset to explore their influence on self-representation. Our results show that these cues differentially influence embodiment, agency, and presence. We provide evidence that each type of cue can independently and non-hierarchically influence self-representation, yet none of these cues strictly constrains or gates the influence of the others. We discuss theoretical implications for understanding self-representation as well as practical implications for VR experiment design, including the suitability of consumer-based VR technology in research settings.
Sigrist, Roland; Rauter, Georg; Marchal-Crespo, Laura; Riener, Robert; Wolf, Peter
Concurrent augmented feedback has been shown to be less effective for learning simple motor tasks than for complex tasks. However, as mostly artificial tasks have been investigated, transfer of results to tasks in sports and rehabilitation remains unknown. Therefore, in this study, the effect of different concurrent feedback was evaluated in trunk-arm rowing. It was then investigated whether multimodal audiovisual and visuohaptic feedback are more effective for learning than visual feedback only. Naïve subjects (N = 24) trained in three groups on a highly realistic virtual reality-based rowing simulator. In the visual feedback group, the subject's oar was superimposed to the target oar, which continuously became more transparent when the deviation between the oars decreased. Moreover, a trace of the subject's trajectory emerged if deviations exceeded a threshold. The audiovisual feedback group trained with oar movement sonification in addition to visual feedback to facilitate learning of the velocity profile. In the visuohaptic group, the oar movement was inhibited by path deviation-dependent braking forces to enhance learning of spatial aspects. All groups significantly decreased the spatial error (tendency in visual group) and velocity error from baseline to the retention tests. Audiovisual feedback fostered learning of the velocity profile significantly more than visuohaptic feedback. The study revealed that well-designed concurrent feedback fosters complex task learning, especially if the advantages of different modalities are exploited. Further studies should analyze the impact of within-feedback design parameters and the transferability of the results to other tasks in sports and rehabilitation.
Wolle, Patrik; Müller, Matthias P; Rauh, Daniel
The examination of three-dimensional structural models in scientific publications allows the reader to validate or invalidate conclusions drawn by the authors. However, whether due to a (temporary) lack of access to proper visualization software or a lack of proficiency, this information is not necessarily available to every reader. As the digital revolution quickly progresses, technologies have become widely available that overcome these limitations and offer everyone the opportunity to appreciate models not only in 2D but also in 3D. Additionally, mobile devices such as smartphones and tablets allow access to this information almost anywhere, at any time. Since access to such information has only recently become standard practice, we outline straightforward ways to incorporate 3D models in augmented reality into scientific publications, books, posters, and presentations, and suggest that this should become general practice.
Valdés, Julio J; Barton, Alan J
A method for the construction of virtual reality spaces for visual data mining using multi-objective optimization with genetic algorithms on nonlinear discriminant (NDA) neural networks is presented. Two neural network layers (the output and the last hidden) are used for the construction of simultaneous solutions for: (i) a supervised classification of data patterns and (ii) an unsupervised similarity structure preservation between the original data matrix and its image in the new space. A set of spaces are constructed from selected solutions along the Pareto front. This strategy represents a conceptual improvement over spaces computed by single-objective optimization. In addition, genetic programming (in particular gene expression programming) is used for finding analytic representations of the complex mappings generating the spaces (a composition of NDA and orthogonal principal components). The presented approach is domain independent and is illustrated via application to the geophysical prospecting of caves.
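The Pareto-front selection step described above, over the two objectives (supervised classification error and unsupervised structure-preservation error), can be illustrated with a minimal non-dominated filter. The objective values below are invented toy numbers, not results from the paper, and the full method of course also involves training the NDA network and the genetic search itself.

```python
def pareto_front(points):
    """Return the non-dominated points for two minimization objectives,
    e.g. (classification error, structure-preservation error)."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)]

# Invented objective pairs; (0.30, 0.60) is dominated by (0.20, 0.50)
solutions = [(0.10, 0.90), (0.20, 0.50), (0.40, 0.40), (0.30, 0.60), (0.90, 0.10)]
front = pareto_front(solutions)
```

Each point surviving this filter would correspond to one candidate virtual reality space, letting the analyst trade classification accuracy against similarity-structure preservation.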
Nathan A Parks
Full Text Available The perceptual load theory of attention proposes that the degree to which visual distractors are processed is a function of the attentional demands of a task: greater demands increase filtering of irrelevant distractors. The spatial configuration of such filtering is unknown. Here, we used steady-state visual evoked potentials (SSVEPs) in conjunction with time-domain event-related potentials (ERPs) to investigate the distribution of load-induced distractor suppression and task-relevant enhancement in the visual field. Electroencephalogram (EEG) was recorded while subjects performed a foveal go/no-go task that varied in perceptual load. Load-dependent distractor suppression was assessed by presenting a contrast reversing ring at one of three eccentricities (2°, 6°, or 11°) during performance of the go/no-go task. Rings contrast reversed at 8.3 Hz, allowing load-dependent changes in distractor processing to be tracked in the frequency domain. ERPs were calculated to the onset of stimuli in the load task to examine load-dependent modulation of task-relevant processing. Results showed that the amplitude of the distractor SSVEP (8.3 Hz) was attenuated under high perceptual load (relative to low load) at the most proximal (2°) eccentricity but not at more eccentric locations (6° or 11°). Task-relevant ERPs revealed a significant increase in N1 amplitude under high load. These results are consistent with a center-surround configuration of load-induced enhancement and suppression in the visual field.
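The frequency-tagging analysis described in this abstract, reading out distractor processing at the 8.3 Hz flicker frequency, can be sketched on synthetic data: because the flicker rate is known, the SSVEP amplitude is simply the spectrum value at that bin. The sampling rate, recording duration, and signal amplitude below are hypothetical, chosen so that 8.3 Hz falls on an exact FFT bin.

```python
import numpy as np

fs = 500.0                    # sampling rate in Hz (hypothetical)
tag = 8.3                     # distractor flicker frequency (from the abstract)
t = np.arange(0, 10, 1 / fs)  # 10 s of data -> 0.1 Hz frequency resolution
rng = np.random.default_rng(0)

# Synthetic EEG: an 8.3 Hz steady-state response buried in white noise
eeg = 0.5 * np.sin(2 * np.pi * tag * t) + rng.normal(0.0, 1.0, t.size)

# Single-sided amplitude spectrum; the tag frequency falls on an exact bin
spectrum = 2 * np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
ssvep_amp = spectrum[np.argmin(np.abs(freqs - tag))]
```

Comparing `ssvep_amp` between high-load and low-load trials would then quantify the load-dependent suppression reported in the study.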
Lin, Zhiyong; Li, Wenjing; Meng, Lingkui
Networked Virtual Reality (NVR) is a system based on network connection and shared spatial information, whose demands cannot be fully met by existing VR architectures and application patterns. In this paper, we propose a new architecture for NVR based on a Multi-Agent framework, which includes detailed definitions of the various agents and their functions and a full description of the collaboration mechanism. Through a prototype system test with DEM data and 3D model data, the advantages of the Multi-Agent-based Networked Virtual Reality system in terms of data loading time, user response time, scene construction time, etc. are verified. First, we introduce the characteristics of Networked Virtual Reality and of the Multi-Agent technique in Section 1. Then we give the architecture design of Networked Virtual Reality based on Multi-Agent in Section 2, which covers the rules of task division, the multi-agent architecture designed to implement Networked Virtual Reality, and the functions of the agents. Section 3 shows the prototype implementation according to the design. Finally, Section 4 discusses the benefits of using Multi-Agent techniques to implement geovisualization in Networked Virtual Reality.
Full Text Available Purpose: The purpose of this study is to modernize previous work on science overlay maps by updating the underlying citation matrix, generating new clusters of scientific disciplines, enhancing visualizations, and providing more accessible means for analysts to generate their own maps. Design/methodology/approach: We use the combined set of 2015 Journal Citation Reports for the Science Citation Index (n of journals = 8,778) and the Social Sciences Citation Index (n = 3,212), for a total of 11,365 journals. The set of Web of Science Categories in the Science Citation Index and the Social Sciences Citation Index increased from 224 in 2010 to 227 in 2015. Using dedicated software, a matrix of 227 × 227 cells is generated on the basis of whole-number citation counting. We normalize this matrix using the cosine function. We first develop the citing-side, cosine-normalized map using 2015 data and VOSviewer visualization with default parameter values. A routine for making overlays on the basis of the map (“wc15.exe”) is available at http://www.leydesdorff.net/wc15/index.htm. Findings: Findings appear in the form of visuals throughout the manuscript. In Figures 1–9 we provide basemaps of science and science overlay maps for a number of companies, universities, and technologies. Research limitations: As Web of Science Categories change and/or are updated, so too must the routine we provide. Also, to apply the routine, users need access to the Web of Science. Practical implications: Visualization of science overlay maps is now more accurate and true to the 2015 Journal Citation Reports than was the case with the previous version of the routine advanced in our paper. Originality/value: The routine we advance allows users to visualize science overlay maps in VOSviewer using data from more recent Journal Citation Reports.
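The cosine normalization of the citation matrix described above can be sketched as follows: each category's citing profile (a matrix row) is rescaled to unit length, and pairwise dot products then give cosine similarities. The toy 3-category matrix below stands in for the 227 × 227 Web of Science category matrix; the values are invented.

```python
import numpy as np

def cosine_normalize(citations):
    # Rows are citing profiles of categories; normalize each row to unit
    # length and take pairwise dot products (cosine similarities).
    norms = np.linalg.norm(citations, axis=1, keepdims=True)
    unit = citations / np.where(norms == 0, 1.0, norms)
    return unit @ unit.T

# Toy citation counts: categories 0 and 1 cite similarly, category 2 does not
toy = np.array([[10.0, 2.0, 0.0],
                [ 8.0, 1.0, 0.0],
                [ 0.0, 0.0, 5.0]])
sim = cosine_normalize(toy)
```

The resulting symmetric similarity matrix is the kind of input a layout tool such as VOSviewer clusters and projects into the two-dimensional basemap.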
The objective of the present deliberations was to systematise our knowledge of static visual variables used to create cartographic symbols, and also to analyse the possibility of their utilisation in Augmented Reality (AR) applications on smartphone-type mobile devices. This was accomplished by combining the visual variables listed over the years by different researchers. The research approach was to determine the level of usefulness of particular characteristics of visual variables: selectivity, associativity, quantity, and order. An attempt was made to provide an overview of static visual variables and to describe the AR system, which constitutes a new user-interface paradigm. The changed approach to the presentation of point objects results from applying a different perspective in the observation of objects (egocentric view) than is used on traditional analogue maps (geocentric view). The presented topics refer to the fast-developing field of mobile cartography. Particular emphasis is put on smartphone-type mobile devices and their applicability in the process of designing cartographic symbols. The aim of the article was to systematise knowledge about static visual variables, which are the key components of cartographic symbols. An attempt was made to compile the visual variables distinguished by cartographers over the last fifty years, starting from the classification presented by J. Bertin. The degree of usefulness of particular graphic variables was analysed with respect to their use in designing point symbols for mobile applications created with Augmented Reality technology. The variables were analysed in terms of four characteristics: selectivity, associativity, representation of quantity, and order. The article draws attention to the different use of perspective between traditional analogue maps (geocentric) and…
Federal Laboratory Consortium — FUNCTION: Performs basic and applied research in interactive 3D computer graphics, including visual analytics, virtual environments, and augmented reality (AR). The...
This report documents the state of development of enhanced and virtual reality-based systems in medicine. Virtual reality systems seek to simulate a surgical procedure in a computer-generated world in order to improve training. Enhanced reality systems seek to augment or enhance reality by providing improved imaging alternatives for specific patient data. Virtual reality represents a paradigm shift in the way we teach and evaluate the skills of medical personnel. Driving the development of virtual reality-based simulators is laparoscopic abdominal surgery, where there is a perceived need for better training techniques; within a year, systems will be fielded for second-year residency students. Further refinements over perhaps the next five years should allow surgeons to evaluate and practice new techniques in a simulator before using them on patients. Technical developments are rapidly improving the realism of these machines to an amazing degree, as well as bringing the price down to affordable levels. In the next five years, many new anatomical models, procedures, and skills are likely to become available on simulators. Enhanced reality systems are generally being developed to improve visualization of specific patient data. Three-dimensional (3-D) stereovision systems for endoscopic applications, head-mounted displays, and stereotactic image navigation systems are being fielded now, with neurosurgery and laparoscopic surgery being major driving influences. Over perhaps the next five years, enhanced and virtual reality systems are likely to merge. This will permit patient-specific images to be used on virtual reality simulators or computer-generated landscapes to be input into surgical visualization instruments. Percolating all around these activities are developments in robotics and telesurgery. An advanced information infrastructure eventually will permit remote physicians to share video, audio, medical records, and imaging data with local physicians in real time
Wang, Bo; Sun, Bukuan
The current study examined whether the effect of post-encoding emotional arousal on item memory extends to reality-monitoring source memory and, if so, whether the effect depends on emotionality of learning stimuli and testing format. In Experiment 1, participants encoded neutral words and imagined or viewed their corresponding object pictures. Then they watched a neutral, positive, or negative video. The 24-hour delayed test showed that emotional arousal had little effect on both item memory and reality-monitoring source memory. Experiment 2 was similar except that participants encoded neutral, positive, and negative words and imagined or viewed their corresponding object pictures. The results showed that positive and negative emotional arousal induced after encoding enhanced consolidation of item memory, but not reality-monitoring source memory, regardless of emotionality of learning stimuli. Experiment 3, identical to Experiment 2 except that participants were tested only on source memory for all the encoded items, still showed that post-encoding emotional arousal had little effect on consolidation of reality-monitoring source memory. Taken together, regardless of emotionality of learning stimuli and regardless of testing format of source memory (conjunction test vs. independent test), the facilitatory effect of post-encoding emotional arousal on item memory does not generalize to reality-monitoring source memory.
Wright, W Geoffrey
Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not, remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed.
Howard, Christina J; Wilding, Robert; Guest, Duncan
There is mixed evidence that video game players (VGPs) may demonstrate better performance in perceptual and attentional tasks than non-VGPs (NVGPs). The rapid serial visual presentation task is one such case, where observers respond to two successive targets embedded within a stream of serially presented items. We tested light VGPs (LVGPs) and NVGPs on this task. LVGPs were better at correct identification of second targets whether they were also attempting to respond to the first target. This performance benefit seen for LVGPs suggests enhanced visual processing for briefly presented stimuli even with only very moderate game play. Observers were less accurate at discriminating the orientation of a second target within the stream if it occurred shortly after presentation of the first target, that is to say, they were subject to the attentional blink (AB). We find no evidence for any reduction in AB in LVGPs compared with NVGPs.
While the nuclear power industry is trying to reinforce its safety and regain public support post-Fukushima, it is also faced with a very real challenge that affects its day-to-day activities: a rapidly aging workforce. Statistics show that close to 40% of the current nuclear power industry workforce will retire within the next five years. For newcomer countries, the challenge is even greater, having to develop a completely new workforce. The workforce replacement effort introduces nuclear newcomers of a new generation with different backgrounds and affinities. Major lifestyle differences between the two generations of workers result, amongst other things, in different learning habits and needs for this new breed of learners. Interactivity, high visual content and quick access to information are now necessary to achieve a high level of retention. To enhance existing training programmes, or to support the establishment of new training programmes for newcomer countries, L-3 MAPPS has devised learning tools focused on the “Practice-by-Doing” principle. L-3 MAPPS has coupled 3D computer visualization with high-fidelity simulation to bring real-time, simulation-driven animated components and systems, allowing immersive and participatory, individual or classroom learning. (author)
Roser, Matthew E.; Aslin, Richard N.; McKenzie, Rebecca; Zahra, Daniel; Fiser, József
Individuals with autism spectrum disorder (ASD) are often characterized as having social engagement and language deficiencies, but a sparing of visuo-spatial processing and short-term memory, with some evidence of supra-normal levels of performance in these domains. The present study expanded on this evidence by investigating the observational learning of visuospatial concepts from patterns of covariation across multiple exemplars. Child and adult participants with ASD, and age-matched control participants, viewed multi-shape arrays composed from a random combination of pairs of shapes that were each positioned in a fixed spatial arrangement. After this passive exposure phase, a post-test revealed that all participant groups could discriminate pairs of shapes with high covariation from randomly paired shapes with low covariation. Moreover, learning these shape-pairs with high covariation was superior in adults with ASD than in age-matched controls, while performance in children with ASD was no different than controls. These results extend previous observations of visuospatial enhancement in ASD into the domain of learning, and suggest that enhanced visual statistical learning may have arisen from a sustained bias to attend to local details in complex arrays of visual features. PMID:25151115
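The covariation structure learned in the study above can be illustrated with a small counting sketch. The shape labels and arrays below are hypothetical stand-ins, not the study's actual materials:

```python
from collections import Counter
from itertools import combinations

def pair_covariation(arrays):
    """Count how often each unordered pair of shapes co-occurs across
    multi-shape arrays; embedded base pairs accumulate high counts."""
    counts = Counter()
    for arr in arrays:
        for pair in combinations(sorted(set(arr)), 2):
            counts[pair] += 1
    return counts

# "A"+"B" is a fixed base pair; the remaining shapes vary across arrays.
arrays = [["A", "B", "C", "D"], ["A", "B", "E", "F"], ["A", "B", "C", "F"]]
counts = pair_covariation(arrays)
```

A learner sensitive to covariation would single out the pair with the highest count as a unit, which is the discrimination the post-test probes.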
Lin Shiaufeng; Lin Chiuhsiang Joe; Wang Rouwen; Yang Lichen; Yang Chihwei; Cheng Tsungchieh; Wang Jyhgang
Nuclear power plants (NPPs) mainly serve the purpose of providing low-cost and stable electricity, but this purpose must rest on the premise of 'safety first.' The reason is that NPP accidents can cause catastrophic damage to people, property, society, and the environment. Therefore, training on superior, high-reliability systems is very important in accident prevention. In recent years, Virtual Reality (VR) technology has advanced very fast, as has the technology for e-learning environments. VR systems have been applied to education, NPP safety training, and flight simulators. In particular, VR is an interactive and reactive technology; it allows users to interact and navigate with objects in the virtual environment. Development of VR and simulation techniques contributes to an accurate and immersive training environment for NPP operators. A VR-based Main Control Room (MCR) training simulator is a more cost-effective and efficient alternative to traditional simulator-based training methods. The VR simulation for MCR training is a complex task. Since VR not only reinforces the visual presentation of the training materials but also provides ways to interact with the training system, the training system becomes more flexible and possibly more powerful. In the VR training system, the MCR operators may use just one display to view the wide range of real-world displays. The field of view (FOV) will be different from the real MCR environment, in which many displays exist for the operators to view. Thus the operator's immersion and visual attention will be reduced. This is the problem of MCR virtual training compared with traditional simulator-based training systems. Therefore, improving the operator's visual attention and the detection of signals in the VR training system is a very important issue. This investigation intends to contribute in assessing benefits of visual attention and
Pejoska, Jana; Bauters, Merja; Purma, Jukka; Leinonen, Teemu
Our design proposal of social augmented reality (SoAR) grows from the observed difficulties of practical applications of augmented reality (AR) in workplace learning. In our research we investigated construction workers doing physical work in the field and analyzed the data using qualitative methods in various workshops. The challenges related to…
Tsai, Ming-Kuan; Yau, Nie-Jia
When radioactive accidents occur, modern tools in information technology for emergency response are good solutions to reduce the impact. Since few information-technology-based applications were developed for people during radioactive accidents, a previous study (Tsai et al., 2012) proposed augmented-reality-based mobile escape guidelines. However, because of the lack of transparent escape routes and indoor escape guidelines, the usability of the guidelines is limited. Therefore, this study introduces route planning and mobile three-dimensional (3D) graphics techniques to address the identified problems. The proposed approach could correctly present the geographical relationship from user locations to the anticipated shelters, and quickly show the floor-plan drawings as users are in the buildings. Based on the testing results, in contrast to the previous study, this study offered better escape routes, when the participants performed self-evacuation in outdoor and indoor environments. Overall, this study is not only a useful reference for similar studies, but also a beneficial tool for emergency response during radioactive accidents. -- Highlights: ► Enhancing the efficiency when people escape from radioactive accidents. ► The spatial relationship is transparently displayed in real time. ► In contrast to a previous study, this study offers better escape guidelines
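The route-planning step named above can be sketched with a standard shortest-path search. The floor-plan graph, node names, and distances below are invented for illustration; the paper does not specify its planner:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra over a weighted graph {node: [(neighbor, meters), ...]};
    returns (total_distance, path) as a stand-in for route planning."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    seen = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        if node == goal:  # reconstruct the path by walking predecessors
            path = [node]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return float("inf"), []

# Hypothetical indoor graph: a room with a long direct corridor (5 m)
# and a shorter two-leg route through a hall (1 m + 2 m).
floor = {"room": [("hall", 1.0), ("exit", 5.0)],
         "hall": [("exit", 2.0)],
         "exit": []}
dist_m, route = shortest_route(floor, "room", "exit")
```

The same search applies outdoors with shelters as goal nodes, which matches the paper's claim of presenting the geographical relationship from user locations to anticipated shelters.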
Tran, Huy Hoang; Suenaga, Hideyuki; Kuwana, Kenta; Masamune, Ken; Dohi, Takeyoshi; Nakajima, Susumu; Liao, Hongen
We present an augmented reality system for oral and maxillofacial surgery in this paper. Instead of being displayed on a separate screen, three-dimensional (3D) virtual presentations of osseous structures and soft tissues are projected onto the patient's body, providing surgeons with exact knowledge of the depth of high-risk tissues inside the bone. We employ a 3D integral imaging technique which produces motion parallax in both the horizontal and vertical directions over a wide viewing area. In addition, surgeons are able to check the progress of the operation in real time through an intuitive, content-rich, hardware-accelerated 3D interface. These features prevent surgeons from penetrating into high-risk areas and thus help improve the quality of the operation. Operational tasks such as hole drilling and screw fixation were performed using our system and showed an overall positional error of less than 1 mm. Feasibility of our system was also verified with a human volunteer experiment.
Weidert, S; Wang, L; von der Heide, A; Navab, N; Euler, E
The intraoperative application of augmented reality (AR) has so far mainly taken place in the field of endoscopy. Here, the camera image of the endoscope was augmented by computer graphics derived mostly from preoperative imaging. Due to the complex setup and operation of the devices, they have not yet become part of routine clinical practice. The Camera Augmented Mobile C-arm (CamC) that extends a classic C-arm by a video camera and mirror construction is characterized by its uncomplicated handling. It combines its video live stream geometrically correct with the acquired X-ray. The clinical application of the device in 43 cases showed the strengths of the device in positioning for X-ray acquisition, incision placement, K-wire placement, and instrument guidance. With its new function and the easy integration into the OR workflow of any procedure that requires X-ray imaging, the CamC has the potential to become the first widely used AR technology for orthopedic and trauma surgery.
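At the pixel level, the CamC's geometrically correct combination of video and X-ray can be approximated by a simple alpha blend. This is a sketch of the compositing step only, under the assumption that the two images are already co-registered; it is not the device's calibration or mirror geometry:

```python
def blend_pixel(video_px, xray_px, alpha=0.5):
    """Alpha-blend one co-registered X-ray pixel onto the live-video pixel."""
    return tuple(round((1 - alpha) * v + alpha * x)
                 for v, x in zip(video_px, xray_px))

def blend_image(video, xray, alpha=0.5):
    """Blend two same-size images given as nested lists of RGB tuples."""
    return [[blend_pixel(v, x, alpha) for v, x in zip(vr, xr)]
            for vr, xr in zip(video, xray)]

merged = blend_pixel((100, 100, 100), (200, 200, 200))
```

Raising `alpha` emphasizes the X-ray overlay; lowering it emphasizes the live video, which is the trade-off a surgeon would tune during incision placement or K-wire guidance.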
Hoffmann, Henry; Ruiz-Schirinzi, Rebecca; Goldblum, David; Dell-Kuster, Salome; Oertli, Daniel; Hahnloser, Dieter; Rosenthal, Rachel
Laparoscopic surgery presents specific challenges, such as the reduction of a three-dimensional anatomic environment to two dimensions. The aim of this study was to investigate the impact of the loss of the third dimension on laparoscopic virtual reality (VR) performance. We compared a group of examinees with impaired stereopsis (group 1, n = 28) to a group with accurate stereopsis (group 2, n = 29). The primary outcome was the difference between the mean total score (MTS) of all tasks taken together and the performance in task 3 (eye-hand coordination), which was a priori considered to be the most dependent on intact stereopsis. The MTS and performance in task 3 tended to be slightly, but not significantly, better in group 2 than in group 1 [MTS: -0.12 (95 % CI -0.32, 0.08; p = 0.234); task 3: -0.09 (95 % CI -0.29, 0.11; p = 0.385)]. The difference in MTS between group 2 under simulated impaired stereopsis (an eye patch attached to the nondominant eye in the second run) and the first run of group 1 was not significant (MTS: p = 0.981; task 3: p = 0.527). We were unable to demonstrate an impact of impaired examinees' stereopsis on laparoscopic VR performance. Individuals with accurate stereopsis seem to be able to compensate for the loss of the third dimension in laparoscopic VR simulations.
Gonizzi Barsanti, S.; Caruso, G.; Micoli, L. L.; Covarrubias Rodriguez, M.; Guidi, G.
Although 3D models are useful for preserving information about historical artefacts, the potential of this digital content is not fully realized until it is used to interactively communicate its significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research is aimed at valorising and making more accessible the Egyptian funeral objects exhibited in the Sforza Castle in Milan. The results of the research will be used for the renewal of the current exhibition at the Archaeological Museum in Milan, making it more attractive. A 3D virtual interactive scenario regarding the "path of the dead", an important ritual in ancient Egypt, was realized to augment the experience and the comprehension of the public through interactivity. Four important artefacts were considered for this scope: two ushabty, a wooden sarcophagus and a heart scarab. The scenario was realized by integrating low-cost Virtual Reality technologies, such as the Oculus Rift DK2 and the Leap Motion controller, and by implementing specific software in Unity. The 3D models were enhanced with responsive points of interest placed on important symbols or features of the artefact. This allows highlighting single parts of the artefact in order to better identify the hieroglyphs and provide their translation. The paper describes the process of optimizing the 3D models, the implementation of the interactive scenario and the results of some tests carried out in the lab.
This essay stresses the effects of a reflexive gaze in the visual arts as well as in their representation of both sex and gender. The reflexive gaze involves a spectator in an image, thus depriving him of its subject while at the same time returning it as an object of desire. Sexuality in advertisements represents not only a commodity; the latter, according to Marx's definition of the fetishism of the commodity, reveals both the dimension and the aspects of sexuality that previously escaped visualization. The effects of retroversion are the outcomes of the modes of commodity. In contrast to previous concepts, according to Lacan sexuality represents commodity. A monitoring position of a reflexive gaze is not defined via binary oppositions such as subject-object, active-passive, masculine-feminine. It is rather an inter-space situated between constitutive and objected identifications. The latter, seen in the context of the grounding perspectives, are senseless and impossible to grasp, but the entire visual field along with the range and horizon of the image is situated in «their shadow».
Huang, C H; Hsieh, C H; Lee, J D; Huang, W C; Lee, S T; Wu, C T; Sun, Y N; Wu, Y T
With its combined view of the physical space and the medical imaging data, augmented reality (AR) visualization can provide perceptive advantages during image-guided surgery (IGS). However, the imaging data are usually captured before surgery and might differ from the up-to-date anatomy due to natural shift of soft tissues. This study presents an AR-enhanced IGS system which is capable of correcting the movement of soft tissues in the pre-operative CT images by using intra-operative ultrasound images. First, after reconstructing 2-D free-hand ultrasound images into a 3-D volume, the system applies a mutual-information-based registration algorithm to estimate the deformation between pre-operative and intra-operative ultrasound images. The estimated deformation transform describes the movement of soft tissues and is then applied to the pre-operative CT images, which provide high-resolution anatomical information. As a result, the system displays the fusion of the corrected CT images or the real-time 2-D ultrasound images with the patient in the physical space through a head-mounted display device, providing an immersive augmented-reality environment. For performance validation of the proposed system, a brain phantom was utilized to simulate a brain-shift scenario. Experimental results reveal that when the shift of an artificial tumor is 5 mm to 12 mm, the correction rates can be improved from 32-45 % to 87-95 % by using the proposed system.
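The mutual-information similarity measure at the core of such registration can be sketched for two 8-bit intensity sequences. This is a minimal 1-D illustration via a joint histogram; the paper's 3-D implementation, interpolation, and optimizer are not specified here:

```python
import math
from collections import Counter

def mutual_information(img_a, img_b, bins=8):
    """MI between two equal-length 8-bit intensity sequences via a joint
    histogram; higher MI indicates better alignment of the two images."""
    n = len(img_a)
    qa = [min(v * bins // 256, bins - 1) for v in img_a]  # quantize to bins
    qb = [min(v * bins // 256, bins - 1) for v in img_b]
    pa, pb = Counter(qa), Counter(qb)
    joint = Counter(zip(qa, qb))
    mi = 0.0
    for (a, b), c in joint.items():
        # p(a,b) * log( p(a,b) / (p(a) p(b)) ), with counts over n samples
        mi += (c / n) * math.log(c * n / (pa[a] * pb[b]))
    return mi

scan = [0, 64, 128, 192] * 4  # toy "ultrasound line" of intensities
```

A registration loop would perturb the candidate deformation and keep the transform that maximizes this quantity.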
Le Roux, Cheryl
Living in an image-rich world, as we currently do, does not mean that individuals naturally possess visual literacy skills. This article explores the concept of ‘visual literacy’, and the skills needed to develop visual literacy and visual intelligence. Developing visual literacy in educational environments is important because it can contribute to individual empowerment, and it is therefore necessary to take pedagogical advantage of visual literacy’s place across the disciplines. Doing this means tapping into experiences, expertise and interest in visual communication and building a new paradigm that takes visual education seriously.
Liu, Xinyang; Plishker, William; Zaki, George; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj
Common camera calibration methods employed in current laparoscopic augmented reality systems require the acquisition of multiple images of an entire checkerboard pattern from various poses. This lengthy procedure prevents performing laparoscope calibration in the operating room (OR). The purpose of this work was to develop a fast calibration method for electromagnetically (EM) tracked laparoscopes, such that the calibration can be performed in the OR on demand. We designed a mechanical tracking mount to uniquely and snugly position an EM sensor at an appropriate location on a conventional laparoscope. A tool named fCalib was developed to calibrate intrinsic camera parameters, distortion coefficients, and extrinsic parameters (the transformation between the scope lens coordinate system and the EM sensor coordinate system) using a single image that shows an arbitrary portion of a special target pattern. For quick evaluation of calibration results in the OR, we integrated a tube phantom with the fCalib prototype and overlaid a virtual representation of the tube on the live video scene. We compared spatial target registration error between the common OpenCV method and the fCalib method in a laboratory setting. In addition, we compared the calibration re-projection error between the EM tracking-based fCalib and the optical tracking-based fCalib in a clinical setting. Our results suggest that the proposed method is comparable to the OpenCV method. However, changing the environment, e.g., inserting or removing surgical tools, might affect re-projection accuracy for the EM tracking-based approach. Computational time of the fCalib method averaged 14.0 s (range 3.5-22.7 s). We developed and validated a prototype for fast calibration and evaluation of EM tracked conventional (forward viewing) laparoscopes. The calibration method achieved acceptable accuracy and was relatively fast and easy to perform in the OR on demand.
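The re-projection error used to compare the calibration methods above can be sketched with a distortion-free pinhole model. The intrinsics (`fx`, `fy`, `cx`, `cy`) and the target points below are hypothetical; this is the evaluation metric, not the fCalib algorithm itself:

```python
import math

def project(point3d, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame 3-D point to pixel coordinates."""
    x, y, z = point3d
    return (fx * x / z + cx, fy * y / z + cy)

def reprojection_rmse(points3d, observed2d, fx, fy, cx, cy):
    """RMS pixel distance between projected and observed target points."""
    total = 0.0
    for p3, (u, v) in zip(points3d, observed2d):
        pu, pv = project(p3, fx, fy, cx, cy)
        total += (pu - u) ** 2 + (pv - v) ** 2
    return math.sqrt(total / len(points3d))

# Hypothetical target corners in the camera frame and their observations.
pts = [(0.0, 0.0, 1.0), (0.1, -0.05, 1.2)]
obs = [project(p, 800.0, 800.0, 320.0, 240.0) for p in pts]
```

A real pipeline would add lens-distortion terms and estimate the parameters by minimizing exactly this residual over detected pattern corners.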
Izumi, Masanori; Shimoda, Hiroshi; Ishii, Hirotake
Fugen Nuclear Power Plant, an Advanced Thermal Reactor, was permanently shut down in March 2003 and is now at the decommissioning stage. The Decommissioning Engineering Support System, DEXUS, has been developed to help plan the optimal dismantling process and to carry out the dismantling work safely and efficiently. The Worksite Visualization System (WVS), part of the Dismantling Work Support System of DEXUS, has been developed to help field workers deal with information on the dismantling facilities comprehensibly and intuitively. In this article, an outline of the dismantling process of Fugen is first introduced, then a feasibility study on WVS is described. (author)
Lin, Chien-Liang; Su, Yu-Zheng; Hung, Min-Wei; Huang, Kuo-Cheng
In recent years, Augmented Reality (AR) has become very popular in universities and research organizations. AR technology has been widely used in Virtual Reality (VR) fields such as sophisticated weapons, flight vehicle development, data model visualization, virtual training, entertainment, and the arts. AR enhances the display output of a real environment with specific user-interactive functions or specific object recognition. It can be used in medical treatment, anatomy training, precision instrument casting, warplane guidance, engineering, and remote robot control. AR has many advantages over VR. The system developed here combines sensors, software, and imaging algorithms to make the experience feel real, actual, and present to users. The imaging algorithms include a gray-level method, an image binarization method, and a white balance method, in order to achieve accurate image recognition and overcome the effects of lighting.
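The three imaging steps the abstract names can be sketched as follows. The luminance weights, the fixed threshold, and the gray-world assumption are illustrative textbook choices, not the authors' implementation:

```python
def to_gray(rgb):
    """Gray level of an image given as rows of (r, g, b) tuples,
    using the common luminance weights 0.299/0.587/0.114."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb]

def binarize(gray, threshold=128):
    """Fixed-threshold binarization: 1 for foreground, 0 for background."""
    return [[1 if v >= threshold else 0 for v in row] for row in gray]

def gray_world_white_balance(rgb):
    """Scale each channel so its mean matches the overall mean intensity,
    a simple way to cancel a color cast from the illumination."""
    pixels = [p for row in rgb for p in row]
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    target = sum(means) / 3
    gains = [target / m if m else 1.0 for m in means]
    return [[tuple(min(255.0, p[c] * gains[c]) for c in range(3))
             for p in row] for row in rgb]

demo = [[(255, 0, 0), (255, 255, 255)]]       # red pixel, white pixel
gray = to_gray(demo)
binary = binarize(gray)
balanced = gray_world_white_balance([[(200, 100, 100)]])  # reddish cast
```

In a marker-recognition pipeline these would run in sequence: white balance first, then gray conversion, then binarization before contour or pattern matching.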
David FONSECA ESCUDERO
This paper discusses the results of an evaluation of motivation, user profile and level of satisfaction in a workflow using 3D augmented visualization of complex models in educational environments. The study reports the results of different experiments conducted with first- and second-year students from Architecture and from Science and Construction Technologies (the old Spanish degree of Building Engineering, which is recognized at a European level). We used a mixed method combining both quantitative and qualitative student assessment in order to complete a general overview of using new technologies, mobile devices and advanced visual methods in academic environments. The results show that the students involved in the experiments improved their academic results and their engagement in the subject, which allows us to conclude that hybrid technologies improve both spatial skills and student motivation, a key concept in the current educational framework composed of digital-native students and a great range of different applications and interfaces useful for teaching and learning.
Conclusion: This study shows the significant effect of a virtual environment on the progress of driving rehabilitation, and suggests that incorporating virtual reality into rehabilitation programs will accelerate the maximal recovery of the patient’s driving competence.
Kjærgaard, Hanne Wacher; Kjeldsen, Lars Peter Bech; Rahn, Annette
This chapter describes the use of iPad-facilitated application of augmented reality in the teaching of highly complex anatomical and physiological subjects in the training of nurses at undergraduate level. The general aim of the project is to investigate the potentials of this application in terms of making the complex content and context of these subjects more approachable to the students through the visualization made possible through the use of this technology. A case study is described in this chapter. Issues and factors required for the sustainable use of the mobile-facilitated application of augmented reality are discussed.
Ehgoetz Martens, Kaylena A; Ellard, Colin G; Almeida, Quincy J
Although dopaminergic replacement therapy is believed to improve sensory processing in PD, whereas delayed perceptual speed is thought to be caused by a predominantly cholinergic deficit, it is unclear whether sensory-perceptual deficits result from corrupt sensory processing or from a delay in updating perceived feedback during movement. The current study aimed to examine these two hypotheses by manipulating visual flow speed and dopaminergic medication to examine which influenced distance estimation in PD. Fourteen PD and sixteen HC participants were instructed to estimate the distance of a remembered target by walking to the position the target formerly occupied. This task was completed in virtual reality in order to manipulate the visual flow (VF) speed in real time. Three conditions were carried out: (1) BASELINE: VF speed was equal to participants' real-time movement speed; (2) SLOW: VF speed was reduced by 50 %; (3) FAST: VF speed was increased by 30 %. Individuals with PD performed the experiment in their ON and OFF states. PD demonstrated significantly greater judgement error during BASELINE and FAST conditions compared to HC, although PD did not improve their judgement error during the SLOW condition. Additionally, PD had greater variable error during baseline compared to HC; however, during the SLOW condition, PD had significantly less variable error compared to baseline and similar variable error to HC participants. Overall, dopaminergic medication did not significantly influence judgement error. Therefore, these results suggest that corrupt processing of sensory information, rather than delayed updating of sensory feedback, is the main contributor to sensory-perceptual deficits during movement in PD.
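The visual-flow manipulation in the three conditions can be sketched as a gain applied to frame-to-frame displacement. This is a minimal one-dimensional sketch of the real-time scaling idea, not the study's VR pipeline:

```python
def displayed_path(real_path, gain):
    """Scale each frame-to-frame displacement of the walker's real path by
    a visual-flow gain: 1.0 = BASELINE, 0.5 = SLOW, 1.3 = FAST."""
    out = [real_path[0]]
    for prev, cur in zip(real_path, real_path[1:]):
        out.append(out[-1] + (cur - prev) * gain)
    return out

# One meter of real walking per frame, rendered under the SLOW condition.
slow = displayed_path([0.0, 1.0, 2.0], 0.5)
```

Under a 0.5 gain the rendered viewpoint covers half the real distance, so a participant relying on visual flow would tend to walk farther before judging the target reached.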
Pelargos, Panayiotis E; Nagasawa, Daniel T; Lagman, Carlito; Tenn, Stephen; Demos, Joanna V; Lee, Seung J; Bui, Timothy T; Barnette, Natalie E; Bhatt, Nikhilesh S; Ung, Nolan; Bari, Ausaf; Martin, Neil A; Yang, Isaac
Neurosurgery has undergone a technological revolution over the past several decades, from trephination to image-guided navigation. Advancements in virtual reality (VR) and augmented reality (AR) represent some of the newest modalities being integrated into neurosurgical practice and resident education. In this review, we present a historical perspective of the development of VR and AR technologies, analyze its current uses, and discuss its emerging applications in the field of neurosurgery. Copyright © 2016 Elsevier Ltd. All rights reserved.
Stephen Craig Pritchard
The concept of self-representation is commonly decomposed into three component constructs (sense of embodiment, sense of agency, and sense of presence), and each is typically investigated separately across different experimental contexts. For example, embodiment has been explored in bodily illusions; agency has been investigated in hypnosis research; and presence has been primarily studied in the context of Virtual Reality (VR) technology. Given that each component involves the integration of multiple cues within and across sensory modalities, they may rely on similar underlying mechanisms. However, the degree to which this may be true remains unclear when they are independently studied. As a first step towards addressing this issue, we manipulated a range of cues relevant to these components of self-representation within a single experimental context. Using consumer-grade Oculus Rift VR technology, and a new implementation of the Virtual Hand Illusion, we systematically manipulated visual form plausibility, visual–tactile synchrony, and visual–proprioceptive spatial offset to explore their influence on self-representation. Our results show that these cues differentially influence embodiment, agency, and presence. We provide evidence that each type of cue can independently and non-hierarchically influence self-representation yet none of these cues strictly constrains or gates the influence of the others. We discuss theoretical implications for understanding self-representation as well as practical implications for VR experiment design, including the suitability of consumer-based VR technology in research settings.
Pritchard, Stephen C.; Zopf, Regine; Polito, Vince; Kaplan, David M.; Williams, Mark A.
The concept of self-representation is commonly decomposed into three component constructs (sense of embodiment, sense of agency, and sense of presence), and each is typically investigated separately across different experimental contexts. For example, embodiment has been explored in bodily illusions; agency has been investigated in hypnosis research; and presence has been primarily studied in the context of Virtual Reality (VR) technology. Given that each component involves the integration of multiple cues within and across sensory modalities, they may rely on similar underlying mechanisms. However, the degree to which this may be true remains unclear when they are independently studied. As a first step toward addressing this issue, we manipulated a range of cues relevant to these components of self-representation within a single experimental context. Using consumer-grade Oculus Rift VR technology, and a new implementation of the Virtual Hand Illusion, we systematically manipulated visual form plausibility, visual–tactile synchrony, and visual–proprioceptive spatial offset to explore their influence on self-representation. Our results show that these cues differentially influence embodiment, agency, and presence. We provide evidence that each type of cue can independently and non-hierarchically influence self-representation yet none of these cues strictly constrains or gates the influence of the others. We discuss theoretical implications for understanding self-representation as well as practical implications for VR experiment design, including the suitability of consumer-based VR technology in research settings. PMID:27826275
Purpose Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired “target” talker while ignoring the speech from unwanted “masker” talkers and other sources of sound. This listening situation forms the classic “cocktail party problem” described by Cherry (1953), which has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed. Method This approach, embodied in a prototype “visually guided hearing aid” (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means of improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. Results The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based “spatial filter” operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as listeners with normal hearing, especially in conditions high in “informational masking.” The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio in conditions high in “energetic masking.” Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Conclusions Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic
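The acoustic beamforming that the VGHA steers by eye gaze can be illustrated with a minimal delay-and-sum sketch. This is not the VGHA's actual implementation; the array geometry, tone source, and integer-sample steering below are illustrative assumptions. Aligning the microphone signals toward the attended direction and averaging attenuates uncorrelated noise roughly M-fold for M microphones:

```python
import numpy as np

def delay_and_sum(signals, fs, mic_positions, angle, c=343.0):
    """Steer a linear array toward `angle` (radians) by compensating the
    per-microphone arrival delays, then average the aligned channels.
    signals: (M, N) array; mic_positions: (M,) coordinates in metres."""
    delays = mic_positions * np.sin(angle) / c      # far-field arrival delays
    out = np.zeros(signals.shape[1])
    for sig, d in zip(signals, delays):
        shift = int(round(d * fs))                  # integer-sample steering
        out += np.roll(sig, -shift)                 # advance to undo the delay
    return out / len(signals)

# Simulate a 1 kHz tone arriving from 30 degrees at an 8-mic array plus noise.
rng = np.random.default_rng(0)
fs, c, f0 = 48000, 343.0, 1000.0
mics = np.arange(8) * 0.04                          # hypothetical 4 cm spacing
t = np.arange(4800) / fs
angle = np.deg2rad(30)
clean = np.sin(2 * np.pi * f0 * t)
signals = np.stack([
    np.sin(2 * np.pi * f0 * (t - m * np.sin(angle) / c))
    + 0.5 * rng.standard_normal(t.size)
    for m in mics
])
steered = delay_and_sum(signals, fs, mics, angle)
# The residual noise power in `steered` is roughly 1/8 of a single channel's.
```

The same structure applies to speech in a real device; a practical beamformer would use fractional delays and per-band weights rather than integer sample shifts.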
Chuang, Tien-Yow; Sung, Wen-Hsu; Chang, Hwa-Ann; Wang, Ray-Yau
Virtual reality (VR) technology has gained importance in many areas of medicine. Knowledge concerning the application and the influence of VR-enhanced exercise programs is limited for patients receiving coronary artery bypass grafting. The purpose of this study was to evaluate the effect of a virtual "country walk" on the number of sessions necessary to reach cardiac rehabilitation goals in patients undergoing coronary artery bypass grafting. Twenty subjects who were seen for cardiac rehabilitation between January and June 2004 comprised the study sample. The protocol for this study included an initial maximum graded exercise tolerance test, given to determine the subsequent training goals for the subject, followed by biweekly submaximal endurance training sessions. All subjects were assigned by lot to 1 of 2 submaximal endurance training programs, one (group 2) with and the other (group 1) without the added VR environment. In all other respects, the 2 programs were identical. Each training session lasted for 30 minutes and was carried out twice per week for about 3 months. The primary outcome measures were maximum load during the work sessions, target oxygen consumption, target heart rate (beats per minute), and number of training sessions required to reach rehabilitation goals. By the end of 20 training sessions, only 4 of the 10 control subjects had reached the heart rate target goal of 85% their maximum heart rate. In contrast, 9 of the 10 subjects in the VR program had attained this goal by 9 or fewer training sessions. When target metabolic cost (75% peak oxygen consumption) was used as the training goal, all 10 subjects in the VR program had reached this target after 2 training sessions (or, in some cases, 1 training session), but not until training session 15 did a cumulative number of 9 control subjects reach this goal. These study outcomes clearly support the notion that incorporating a VR environment into cardiac rehabilitation programs will accelerate
Yim, Ho Bin; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)
It has been more than a decade since the concept of Augmented Reality (AR) was introduced, and many related technologies, such as tracking and display, have since matured to useful levels. AR is well suited for interaction with the cognitive vision system: in contrast to virtual reality, AR applications enrich the perceived reality with additional visual information, ranging from text annotations and object highlighting to complex 3D objects. AR's potential has been tested in various applications; for example, visitors wear a Head-Mounted Display (HMD) to see virtual guides explaining artifacts in a museum, and soldiers are informed of geographical features at unfamiliar operation sites. Recently, researchers have tried to use AR as a teaching or training apparatus; however, some technical obstacles still stand in the way of putting this fascinating technology into practice. In this study, we use Cognitive Load Theory (CLT) to design a pump-maintenance manual and convert it to AR technology, proposing a prototype of an on-line AR maintenance manual to demonstrate its possibility as an interactive learning tool.
Augmented reality (AR) technology merges digital information into the real world. It is an effective visualization method; AR enhances users' spatial perception skills and helps them understand spatial dimensions and relationships. It is beneficial for many professional application areas such as assembly, maintenance, and repair. AR visualization helps to concretize building and construction projects and interior design plans – also for non-technically oriented people, who might otherwise have d...
Schuster, Stefan; Strauss, Roland; Götz, Karl G
Insects can estimate distance or time-to-contact of surrounding objects from locomotion-induced changes in their retinal position and/or size. Freely walking fruit flies (Drosophila melanogaster) use the received mixture of different distance cues to select the nearest objects for subsequent visits. Conventional methods of behavioral analysis fail to elucidate the underlying data extraction. Here we demonstrate first comprehensive solutions of this problem by substituting virtual for real objects; a tracker-controlled 360 degrees panorama converts a fruit fly's changing coordinates into object illusions that require the perception of specific cues to appear at preselected distances up to infinity. An application reveals the following: (1) en-route sampling of retinal-image changes accounts for distance discrimination within a surprising range of at least 8-80 body lengths (20-200 mm). Stereopsis and peering are not involved. (2) Distance from image translation in the expected direction (motion parallax) outweighs distance from image expansion, which accounts for impact-avoiding flight reactions to looming objects. (3) The ability to discriminate distances is robust to artificially delayed updating of image translation. Fruit flies appear to interrelate self-motion and its visual feedback within a surprisingly long time window of about 2 s. The comparative distance inspection practiced in the small fruit fly deserves utilization in self-moving robots.
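The two distance cues contrasted in that study can be written in standard form (the notation here is ours, not the authors'): for an observer translating at speed $v$, an object at distance $d$ and bearing $\varphi$ sweeps across the retina at angular velocity $\omega$, so motion parallax yields distance directly, whereas expansion of a looming object of angular size $\theta$ yields only time-to-contact $\tau$:

```latex
\omega = \frac{v \sin\varphi}{d}
\quad\Longrightarrow\quad
d = \frac{v \sin\varphi}{\omega},
\qquad
\tau = \frac{\theta}{\dot{\theta}}.
```

This is consistent with the finding that translation-based parallax outweighs expansion for distance discrimination, while expansion drives impact-avoidance to looming objects.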
Suzuki, Keisuke; Wakisaka, Sohei; Fujii, Naotaka
We have developed a novel experimental platform, referred to as a substitutional reality (SR) system, for studying the conviction of the perception of live reality and related metacognitive functions. The SR system was designed to manipulate people's reality by allowing them to experience live scenes (in which they were physically present) and recorded scenes (which were recorded and edited in advance) in an alternating manner without noticing a reality gap. All of the naïve participants (n = 21) successfully believed that they had experienced live scenes when recorded scenes had been presented. Additional psychophysical experiments suggest that the depth of visual objects does not affect the perceptual discriminability between scenes, and that a scene switch during head movement enhances substitutional performance. The SR system, with its reality manipulation, is a novel and affordable method for studying metacognitive functions and psychiatric disorders.
Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp
Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw
As smartphones, tablet computers, and other mobile devices have come to dominate our digital ecosystem, many industries are using mobile or wearable devices to perform Augmented Reality (AR) functions in their workplaces in order to increase productivity and decrease unnecessary workloads. Mobile-based AR can be divided into three main types: phone-based AR, wearable AR, and projector-based AR. Among these, projector-based AR, or Spatial Augmented Reality (SAR), is the most immature and least recognized type of AR for end users, because few commercial products provide projector-based AR functionality in a mobile manner, prices of mobile projectors are still relatively high, and many technical problems regarding projector-based AR remain unsolved. Nevertheless, it is projector-based AR that has the potential to solve a fundamental problem shared by most mobile-based AR systems, and its always-visible nature is one good answer to the current user-experience issues of phone-based and wearable AR systems. Hence, in this paper, we analyze the user-experience and technical issues of common mobile-based AR systems, the recently widespread phone-based AR systems, and the rising wearable AR systems. Then, for each issue, we propose and explain how projector-based AR can solve the problem and/or enhance the user experience. Our proposed framework includes hardware designs and architectures as well as a software computing paradigm for mobile projector-based AR systems. The proposed design is evaluated by three experts using qualitative and semi-quantitative research approaches.
Van Riper, K. A.
We describe new features implemented in the Moritz geometry editing and visualization program to enhance the accuracy and efficiency of viewing complex geometry models. The 3D display is based on OpenGL and requires conversion of the combinatorial surface and solid body geometry used by MCNP and other transport codes to a set of polygons. Calculation of those polygons can take many minutes for complex models. Once calculated, the polygons can be saved to a file and reused when the same or a derivative model is loaded; the file can be read and processed in under a second. A cell can be filled with a collection of other cells constituting a universe. A new option bypasses use of the filled cell's boundaries when calculating the polygons for the filling universe. This option, when applicable, speeds processing, improves the 3D image, and permits reuse of the universe's polygons when other cells are filled with transformed instances of the universe. Surfaces and solid bodies used in a cell description must be converted to polygons before calculating the polygonal representation of a cell; this conversion requires truncation of infinite surfaces. A new method for truncating transformed surfaces ensures the finite surface intersects the entire model. When a surface or solid body is processed in a cell description, an optional test detects when that object does not contribute additional polygons; if so, that object may be extraneous for the cell description. (authors)
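The save-and-reuse pattern described for the polygon sets (minutes to compute, sub-second to reload) can be sketched as a content-keyed cache. The function and file names below are hypothetical; Moritz's actual file format is not specified in the abstract:

```python
import hashlib
import pickle
import tempfile
from pathlib import Path

def polygons_for_model(model_text, tessellate, cache_dir):
    """Return the polygon set for a geometry model, reusing a cached copy
    when the same model (or an unchanged derivative) is loaded again.
    `tessellate` stands in for the expensive surface-to-polygon conversion."""
    cache_dir = Path(cache_dir)
    cache_dir.mkdir(exist_ok=True)
    key = hashlib.sha256(model_text.encode()).hexdigest()  # content-based key
    cache_file = cache_dir / f"{key}.pkl"
    if cache_file.exists():                     # fast path: sub-second reload
        return pickle.loads(cache_file.read_bytes())
    polygons = tessellate(model_text)           # slow path: minutes of work
    cache_file.write_bytes(pickle.dumps(polygons))
    return polygons

calls = []
def fake_tessellate(text):
    calls.append(text)                          # count expensive invocations
    return [((0, 0, 0), (1, 0, 0), (0, 1, 0))]  # stand-in polygon list

cache = tempfile.mkdtemp()
first = polygons_for_model("sphere r=1", fake_tessellate, cache)
second = polygons_for_model("sphere r=1", fake_tessellate, cache)  # cache hit
```

Keying on a hash of the model text is what lets a derivative model that shares a universe reuse the universe's polygons without recomputation.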
Munnerley, Danny; Bacon, Matt; Wilson, Anna; Steele, James; Hedberg, John; Fitzgerald, Robert
How can educators make use of augmented reality technologies and practices to enhance learning, and why would we want to embrace such technologies anyway? How can an augmented reality help a learner confront, interpret and ultimately comprehend reality itself? In this article, we seek to initiate a discussion that focuses on these questions, and…
Rau, Martina A.
Visual representations play a critical role in enhancing science, technology, engineering, and mathematics (STEM) learning. Educational psychology research shows that adding visual representations to text can enhance students' learning of content knowledge, compared to text-only. But should students learn with a single type of visual…
Muhammad Nawaz; Sandeep N. Kundu; Farha Sattar
Augmented reality sandbox adds new dimensions to the education and learning process. It can be a core component of geoscience teaching and learning for understanding geographic contexts and landform processes. The augmented reality sandbox is a useful tool not only for creating an interactive learning environment through spatial visualization but also for providing an active learning experience to students and enhancing the cognitive process of learning. Augmented reality sandbox can be used as an inter...
Mathias Haeger; Otmar Bock; Daniel Memmert; Stefanie Hüttermann
Virtual reality offers a good possibility for the implementation of real-life tasks in a laboratory-based training or testing scenario. Thus, computerized training in a driving simulator offers an ecologically valid training approach. Visual attention influences driving performance, so we used the reverse approach to test the influence of driving training on visual attention and executive functions. Thirty-seven healthy older participants (mean age: 71.46 ± 4.09; gender: 17 men and...
Nur Shuhadah Mohd
The formation of tourism experience is frequently subject to the complexity of individual tourists' psychographic factors, which leads to vast differences in the end experience formed among tourists. Travelling, moreover, is highly subject to environmental fuzziness, and the issue of geographical consciousness may interfere with tourists' emotions and influence the formation of this experience. The evolution and advancement of mobile technologies have improved the way humans interact with the surrounding environment. Within this context, mobile augmented reality (AR) technology is perceived as capable of narrowing the gap between the formation of pleasant experience and the issue of geographical consciousness, thus transforming the way tourists interact with the destination. This conceptual paper attempts to understand the effectiveness of mobile augmented reality in enhancing tourists' travel experience at a destination. To this aim, the study clarifies the mechanism and usability of mobile augmented reality in improving tourism interpretation and explores the influence of this technology on tourism experience. A critical review of existing literature relevant to the research area was conducted to understand the extent of mobile AR's impact on tourists and experience formation. Findings reveal the capability of AR, by merging virtual information with the real-world environment through mobile devices, to create a more dynamic interaction between tourists and the surrounding environment.
Lu, Su-Ju; Liu, Ying-Chieh
Marine education comprises rich and multifaceted issues. Raising general awareness of marine environments and issues demands the development of new learning materials. This study adapts concepts from digital game-based learning to design an innovative marine learning program integrating augmented reality (AR) technology for lower grade primary…
Pérez-Sanagustin, Mar; Hernández-Leo, Davinia; Santos, Patricia; Kloos, Carlos Delgado; Blat, Josep
Visits to museums and city tours have been part of higher and secondary education curriculum activities for many years. However these activities are typically considered "less formal" when compared to those carried out in the classroom, mainly because they take place in informal or non-formal settings. Augmented Reality (AR) technologies…
Passig, David; Eden, Sigal
This study sought to test the most efficient representation mode with which children with hearing impairment could express a story while producing connectives indicating relations of time and of cause and effect. Using Bruner's (1973, 1986, 1990) representation stages, we tested the comparative effectiveness of Virtual Reality (VR) as a mode of…
Li, J.; van der Spek, E.D.; Hu, J.; Feijs, L.M.G.
Contemporary primary school students generally spend a lot of time playing digital games, but may be less interested in their schoolwork, such as learning mathematics. Mathematics includes many abstract concepts that can be difficult to grasp for some students. Augmented reality as a technology
Bala Dhandayuthapani Veerasamy
Rapid Application Development (RAD) addresses the ever-expanding need for speedy development of computer application programs that are sophisticated, reliable, and full-featured. Visual Basic was the first RAD tool for the Windows operating system, and many people still say it is the best. To make Visual Basic 6 applications more visually attractive, this paper proposes using VRML scenes within the Visual Basic environment.
Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.
We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…
López-Martín, Olga; Segura Fragoso, Antonio; Rodríguez Hernández, Marta; Dimbwadyo Terrer, Iris; Polonio-López, Begoña
To evaluate the effectiveness of a programme based on a virtual reality game to improve cognitive domains in patients with schizophrenia. A randomized controlled trial was conducted in 40 patients with schizophrenia, 20 in the experimental group and 20 in the control group. The experimental group received 10 sessions with Nintendo Wii(®) for 5 weeks, 50 minutes/session, 2 days/week in addition to conventional treatment. The control group received conventional treatment only. Statistically significant differences in the T-Score were found in 5 of the 6 cognitive domains assessed: processing speed (F=12.04, p=0.001), attention/vigilance (F=12.75, p=0.001), working memory (F=18.86, p virtual reality interventions aimed at cognitive training have great potential for significant gains in different cognitive domains assessed in patients with schizophrenia. Copyright © 2015 SESPAS. Published by Elsevier Espana. All rights reserved.
Chris D. Kounavis
This paper discusses the use of Augmented Reality (AR) applications for the needs of tourism. It describes the technology's evolution from pilot applications into commercial mobile applications. We address the technical aspects of mobile AR application development, emphasizing the technologies that render the delivery of augmented reality content possible and experientially superior. We examine the state of the art, providing an analysis concerning the development and the objectives of each application. Acknowledging the various technological limitations hindering AR's substantial end-user adoption, the paper proposes a model for developing AR mobile applications for the field of tourism, aiming to release AR's full potential within the field.
This paper introduces the Smart Home Simulator, one of the main outcomes of the D4All project. This application takes into account the variety of issues involved in the development of Ambient Assisted Living (AAL) solutions, such as the peculiarities of individual end users, appliances, and technologies, with their deployment and data-sharing issues. The Smart Home Simulator—a mixed reality application able to support the configuration and customization of domestic environments in AAL systems—leverages the integration capabilities of Semantic Web technologies and the possibility of modelling relevant knowledge (about both the dwellers and the domestic environment) into formal models. It also exploits Virtual Reality technologies as an efficient means to simplify the configuration of customized AAL environments. The application and the underlying framework will be validated through two different use cases, each foreseeing the customized configuration of a domestic environment for specific segments of users.
Ansari, Zohreh; Fadardi, Javad Salehi
Visual performance is considered a commanding modality in human perception. We tested whether people with obsessive-compulsive personality disorder (OCPD) perform differently on visual performance tasks than people without OCPD. One hundred ten students of Ferdowsi University of Mashhad and non-student participants were tested with the Structured Clinical Interview for DSM-IV Axis II Personality Disorders (SCID-II), among whom 18 (mean age = 29.55; SD = 5.26; 84% female) met the criteria for OCPD classification; controls were 20 persons (mean age = 27.85; SD = 5.26; female = 84%) who did not meet the OCPD criteria. Both groups were tested on a modified Flicker task for two dimensions of visual performance (i.e., visual acuity: detecting the location of change, complexity, and size; and visual contrast sensitivity). The OCPD group responded more accurately on pairs related to size, complexity, and contrast, but spent more time detecting a change on pairs related to complexity and contrast. OCPD individuals seem to have more accurate visual performance than non-OCPD controls. The findings support the relationship between personality characteristics and visual performance within the framework of the top-down processing model. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Rothbaum, BO; Hodges, L; Kooper, R
It has been proposed that virtual reality (VR) exposure may be an alternative to standard in vivo exposure. Virtual reality integrates real-time computer graphics, body tracking devices, visual displays, and other sensory input devices to immerse a participant in a computer- generated virtual environment. Virtual reality exposure is potentially an efficient and cost-effective treatment of anxiety disorders. VR exposure therapy reduced the fear of heights in the first control...
Shema-Shiratzky, Shirley; Brozgol, Marina; Cornejo-Thumm, Pablo; Geva-Dayan, Karen; Rotstein, Michael; Leitner, Yael; Hausdorff, Jeffrey M; Mirelman, Anat
To examine the feasibility and efficacy of combined motor-cognitive training using virtual reality to enhance behavior, cognitive function and dual-tasking in children with Attention-Deficit/Hyperactivity Disorder (ADHD). Fourteen non-medicated school-aged children with ADHD received 18 training sessions during 6 weeks. Training included walking on a treadmill while negotiating virtual obstacles. Behavioral symptoms, cognition and gait were tested before and after the training and at 6-weeks follow-up. Based on parental report, there was a significant improvement in children's social problems and psychosomatic behavior after the training. Executive function and memory were improved post-training while attention was unchanged. Gait regularity significantly increased during dual-task walking. Long-term training effects were maintained in memory and executive function. Treadmill training augmented with virtual reality is feasible and may be an effective treatment to enhance behavior, cognitive function and dual-tasking in children with ADHD.
Takahata, Keisuke; Saito, Fumie; Muramatsu, Taro; Yamada, Makiko; Shirahase, Joichiro; Tabuchi, Hajime; Suhara, Tetsuya; Mimura, Masaru; Kato, Motoichiro
Over the last two decades, evidence of enhancement of drawing and painting skills due to focal prefrontal damage has accumulated. It is of special interest that most artworks created by such patients were highly realistic ones, but the mechanism underlying this phenomenon remains to be understood. Our hypothesis is that enhanced tendency of realism was associated with accuracy of visual numerosity representation, which has been shown to be mediated predominantly by right parietal functions. Here, we report a case of left prefrontal stroke, where the patient showed enhancement of artistic skills of realistic painting after the onset of brain damage. We investigated cognitive, functional and esthetic characteristics of the patient's visual artistry and visual numerosity representation. Neuropsychological tests revealed impaired executive function after the stroke. Despite that, the patient's visual artistry related to realism was rather promoted across the onset of brain damage as demonstrated by blind evaluation of the paintings by professional art reviewers. On visual numerical cognition tasks, the patient showed higher performance in comparison with age-matched healthy controls. These results paralleled increased perfusion in the right parietal cortex including the precuneus and intraparietal sulcus. Our data provide new insight into mechanisms underlying change in artistic style due to focal prefrontal lesion. Copyright © 2014 Elsevier Ltd. All rights reserved.
Sil, Soumitri; Dahlquist, Lynnda M; Thompson, Caitlin; Hahn, Amy; Herbert, Linda; Wohlheiter, Karen; Horn, Susan
This study sought to evaluate the effectiveness of virtual reality (VR) enhanced interactive videogame distraction for children undergoing experimentally induced cold pressor pain and examined the role of avoidant and approach coping style as a moderator of VR distraction effectiveness. Sixty-two children (6-13 years old) underwent a baseline cold pressor trial followed by two cold pressor trials in which interactive videogame distraction was delivered both with and without a VR helmet in counterbalanced order. As predicted, children demonstrated significant improvement in pain tolerance during both interactive videogame distraction conditions. However, a differential response to videogame distraction with or without the enhancement of VR technology was not found. Children's coping style did not moderate their response to distraction. Rather, interactive videogame distraction with and without VR technology was equally effective for children who utilized avoidant or approach coping styles.
Berryman, Donna R
Augmented reality is a technology that overlays digital information on objects or places in the real world for the purpose of enhancing the user experience. It is not virtual reality, that is, the technology that creates a totally digital or computer-created environment. Augmented reality, with its ability to combine reality and digital information, is being studied and implemented in medicine, marketing, museums, fashion, and numerous other areas. This article presents an overview of augmented reality, discussing what it is, how it works, its current implementations, and its potential impact on libraries.
Douglas, David B; Boone, John M; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene
To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. A case of breast cancer imaged using contrast-enhanced breast CT (Computed Tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and joystick control interface. The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence, and the use of a 3D cursor; a joystick-enabled fly-through provided visualization of the spiculations extending from the breast cancer. The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice.
Chiu, Jennifer L.; Linn, Marcia C.
This paper describes the design and impact of an inquiry-oriented online curriculum that takes advantage of dynamic molecular visualizations to improve students' understanding of chemical reactions. The visualization-enhanced unit uses research-based guidelines following the knowledge integration framework to help students develop coherent…
Potter, Michael; Bensch, Alexander; Dawson-Elli, Alexander; Linte, Cristian A.
In minimally invasive surgical interventions direct visualization of the target area is often not available. Instead, clinicians rely on images from various sources, along with surgical navigation systems for guidance. These spatial localization and tracking systems function much like the Global Positioning Systems (GPS) that we are all well familiar with. In this work we demonstrate how the video feed from a typical camera, which could mimic a laparoscopic or endoscopic camera used during an interventional procedure, can be used to identify the pose of the camera with respect to the viewed scene and augment the video feed with computer-generated information, such as rendering of internal anatomy not visible beyond the imaged surface, resulting in a simple augmented reality environment. This paper describes the software and hardware environment and methodology for augmenting the real world with virtual models extracted from medical images to provide enhanced visualization beyond the surface view achieved using traditional imaging. Following intrinsic and extrinsic camera calibration, the technique was implemented and demonstrated using a LEGO structure phantom, as well as a 3D-printed patient-specific left atrial phantom. We assessed the quality of the overlay according to fiducial localization, fiducial registration, and target registration errors, as well as the overlay offset error. Using the software extensions we developed in conjunction with common webcams it is possible to achieve tracking accuracy comparable to that seen with significantly more expensive hardware, leading to target registration errors on the order of 2 mm.
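Once intrinsic calibration supplies the camera matrix K and the tracking step supplies the pose (R, t), the augmentation step, projecting a virtual model registered to the scene into the video frame, reduces to the pinhole projection x ~ K[R|t]X. A minimal sketch with a hypothetical 640x480 camera (not the authors' code):

```python
import numpy as np

def project_points(points_world, K, R, t):
    """Project 3-D points (N, 3) to pixel coordinates with a pinhole model.
    K is the intrinsic matrix from camera calibration; (R, t) is the
    extrinsic camera pose recovered by the tracking/registration step."""
    cam = points_world @ R.T + t      # world frame -> camera frame
    uvw = cam @ K.T                   # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

# Hypothetical camera: 800 px focal length, principal point at (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)         # identity pose, looking down +Z
model = np.array([[0.0, 0.0, 2.0],    # virtual point on the optical axis
                  [0.1, 0.0, 2.0]])   # 10 cm to its right, 2 m away
pixels = project_points(model, K, R, t)
# pixels[0] lands at the principal point (320, 240); pixels[1] at (360, 240).
```

Drawing the projected points over the live video feed yields the overlay; the fiducial and target registration errors reported in the paper measure how well the recovered (R, t) aligns this projection with the physical phantom.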
Davis, Cheryl D.
Discusses developments in technology that provide high-quality visual access to transition information and multimedia instruction for learners with deafness. Identifies a variety of considerations in using multimedia products and describes the pros and cons of different media in the context of several multimedia projects. (Author/CR)
The importance of mathematical visual images is indicated by the introductory paragraph in the Statistics and Probability content strand of the Australian Curriculum, which draws attention to the importance of learners developing skills to analyse and draw inferences from data and "represent, summarise and interpret data and undertake…
Ivancic, Sonia R.; Hosek, Angela M.
Courses: This unit activity is suited for courses with research and source citation components, such as the Basic Communication, Interpersonal, and Organizational Communication courses. Objectives: Students will (a) visually interpret and analyze instances of plagiarism; (b) revise their work to use proper citations and reduce instances of…
Falter, Christine M.; Elliott, Mark A.; Bailey, Anthony J.
Cognitive functions that rely on accurate sequencing of events, such as action planning and execution, verbal and nonverbal communication, and social interaction rely on well-tuned coding of temporal event-structure. Visual temporal event-structure coding was tested in 17 high-functioning
Integration of virtual reality devices into scientific visualization software within the VtkVRPN framework
Journe, G.; Guilbaud, C.
High-quality scientific visualization software relies on ergonomic navigation and exploration, which are essential for efficient data analysis. To help address this issue, support for virtual reality devices has been developed inside the CEA 'VtkVRPN' framework. This framework is based on VTK, a 3D graphics library, and VRPN, a virtual reality device management library. This document describes the developments done during a post-graduate training course. (authors)
Rodriguez, W. J.; Chaudhury, S. R.
Undergraduate research projects that utilize remote sensing satellite instrument data to investigate atmospheric phenomena pose many challenges. A significant challenge is processing large amounts of multi-dimensional data. Remote sensing data initially requires mining; filtering of undesirable spectral, instrumental, or environmental features; and subsequently sorting and reformatting to files for easy and quick access. The data must then be transformed according to the needs of the investigation(s) and displayed for interpretation. These multidimensional datasets require views that can range from two-dimensional plots to multivariable-multidimensional scientific visualizations with animations. Science undergraduate students generally find these data processing tasks daunting. Typically, researchers must fully understand the intricacies of the dataset and write computer programs or rely on commercially available software, which may not be trivial to use. In the time that undergraduate researchers have available for their research projects, learning the data formats, programming languages, and/or visualization packages is impractical. When dealing with large multi-dimensional data sets, appropriate Scientific Visualization tools are imperative in allowing students to have a meaningful and pleasant research experience, while producing valuable scientific research results. The BEST Lab at Norfolk State University has been creating tools for multivariable-multidimensional analysis of Earth Science data. EzSAGE and SAGE4D have been developed to sort, analyze and visualize SAGE II (Stratospheric Aerosol and Gas Experiment) data with ease. Three- and four-dimensional visualizations in interactive environments can be produced. EzSAGE provides atmospheric slices in three dimensions where the researcher can change the scales in the three dimensions, color tables and degree of smoothing interactively to focus on particular phenomena. SAGE4D provides a navigable
Menzies, R.J.; Rogers, S.J.; Phillips, A. M.; Chiarovano, E.; de Waele, C.; Verstraten, F.A.J.; MacDougall, H.
Despite decades of development of virtual reality (VR) devices and VR’s recent renaissance, it has been difficult to measure these devices’ effectiveness in immersing the observer. Previously, VR devices have been evaluated using subjective measures of presence, but in this paper, we suggest that
Maples-Keller, Jessica L; Yasinski, Carly; Manjin, Nicole; Rothbaum, Barbara Olasov
Virtual reality (VR) refers to an advanced technological communication interface in which the user is actively participating in a computer-generated 3-dimensional virtual world that includes computer sensory input devices used to simulate real-world interactive experiences. VR has been used within psychiatric treatment for anxiety disorders, particularly specific phobias and post-traumatic stress disorder, given several advantages that VR provides for use within treatment for these disorders. Exposure therapy for anxiety disorder is grounded in fear-conditioning models, in which extinction learning involves the process through which conditioned fear responses decrease or are inhibited. The present review will provide an overview of extinction training and anxiety disorder treatment, advantages for using VR within extinction training, a review of the literature regarding the effectiveness of VR within exposure therapy for specific phobias and post-traumatic stress disorder, and limitations and future directions of the extant empirical literature.
Virtual Reality (VR) is a rapidly emerging technology which allows participants to experience a virtual environment through stimulation of the participant's senses. Intuitive and natural interactions with the virtual world help to create a realistic experience. Typically, a participant is immersed in a virtual environment through the use of a 3-D viewer. Realistic, computer-generated environment models and accurate tracking of a participant's view are important factors for adding realism to a virtual experience. Stimulating a participant's sense of sound and providing a natural form of communication for interacting with the virtual world are equally important. This paper discusses the advantages and importance of incorporating voice recognition and audio feedback capabilities into a virtual world experience. Various approaches and levels of complexity are discussed. Examples of the use of voice and sound are presented through the description of a research application developed in the VR laboratory at Sandia National Laboratories.
Bauer, Markus; Kluge, Christian; Bach, Dominik; Bradbury, David; Heinze, Hans Jochen; Dolan, Raymond J; Driver, Jon
Cognitive processes such as visual perception and selective attention induce specific patterns of brain oscillations. The neurochemical bases of these spectral changes in neural activity are largely unknown, but neuromodulators are thought to regulate processing. The cholinergic system is linked to attentional function in vivo, whereas separate in vitro studies show that cholinergic agonists induce high-frequency oscillations in slice preparations. This has led to theoretical proposals that cholinergic enhancement of visual attention might operate via gamma oscillations in visual cortex, although low-frequency alpha/beta modulation may also play a key role. Here we used MEG to record cortical oscillations in the context of administration of a cholinergic agonist (physostigmine) during a spatial visual attention task in humans. This cholinergic agonist enhanced spatial attention effects on low-frequency alpha/beta oscillations in visual cortex, an effect correlating with a drug-induced speeding of performance. By contrast, the cholinergic agonist did not alter high-frequency gamma oscillations in visual cortex. Thus, our findings show that cholinergic neuromodulation enhances attentional selection via an impact on oscillatory synchrony in visual cortex, for low rather than high frequencies. We discuss this dissociation between high- and low-frequency oscillations in relation to proposals that lower-frequency oscillations are generated by feedback pathways within visual cortex. Copyright © 2012 Elsevier Ltd. All rights reserved.
In 1993, the South Dakota Department of Transportation initiated the Research Project SD93-14, Enhancement of South Dakota's Pavement Management System. As the Research Project progressed, it was determined that to better evaluate the condition of ...
Events encoded in separate sensory modalities, such as audition and vision, can seem to be synchronous across a relatively broad range of physical timing differences. This may suggest that the precision of audio-visual timing judgments is inherently poor. Here we show that this is not necessarily true. We contrast timing sensitivity for isolated streams of audio and visual speech, and for streams of audio and visual speech accompanied by additional, temporally offset, visual speech streams. We find that the precision with which synchronous streams of audio and visual speech are identified is enhanced by the presence of additional streams of asynchronous visual speech. Our data suggest that timing perception is shaped by selective grouping processes, which can result in enhanced precision in temporally cluttered environments. The imprecision suggested by previous studies might therefore be a consequence of examining isolated pairs of audio and visual events. We argue that when an isolated pair of cross-modal events is presented, they tend to group perceptually and to seem synchronous as a consequence. We have revealed greater precision by providing multiple visual signals, possibly allowing a single auditory speech stream to group selectively with the most synchronous visual candidate. The grouping processes we have identified might be important in daily life, such as when we attempt to follow a conversation in a crowded room.
“How augmented reality can facilitate learning in visualizing human anatomy.” At this station I demonstrate how augmented reality can be used to visualize the human lungs in situ and as a wearable technology, which establishes a connection between body, image and technology in education. I will show...
Hanson, Robert M; Lu, Xiang-Jun
The development of PowerSafety International's See-Thru Power Plant has provided the nuclear industry with a bridge that can span the gap between the part-task simulator and the full-scope, high-fidelity plant simulator. The principle behind the See-Thru Power Plant is to bring sensory experience into nuclear training programs. The See-Thru Power Plant is a scaled-down, fully functioning model of a commercial nuclear power plant, equipped with a primary system, secondary system, and control console. The major components are constructed of glass, thus permitting visual conceptualization of a working nuclear power plant.
Marconi, F.; Moretti, G.; Englund, D.C.
A flexible and powerful procedure for transposing computer-generated images onto video tape is used in flowfield visualization. The result is animated sequences which can be used very effectively in the study of both steady and unsteady flows. The key to the procedure is the fact that the images (i.e., frames) of the animated sequence are recorded on the video tapes one at a time after they are created. Thus, the need for a mass storage system is eliminated because after a frame is recorded it is discarded. 7 references
Levac, Danielle; Glegg, Stephanie M N; Sveistrup, Heidi; Colquhoun, Heather; Miller, Patricia A; Finestone, Hillel; DePaul, Vincent; Harris, Jocelyn E; Velikonja, Diana
Despite increasing evidence for the effectiveness of virtual reality (VR)-based therapy in stroke rehabilitation, few knowledge translation (KT) resources exist to support clinical integration. KT interventions addressing known barriers and facilitators to VR use are required. When environmental barriers to VR integration are less amenable to change, KT interventions can target modifiable barriers related to therapist knowledge and skills. A multi-faceted KT intervention was designed and implemented to support physical and occupational therapists in two stroke rehabilitation units in acquiring proficiency with use of the Interactive Exercise Rehabilitation System (IREX; GestureTek). The KT intervention consisted of interactive e-learning modules, hands-on workshops and experiential practice. Evaluation included the Assessing Determinants of Prospective Take Up of Virtual Reality (ADOPT-VR) Instrument and self-report confidence ratings of knowledge and skills pre- and post-study. Usability of the IREX was measured with the System Usability Scale (SUS). A focus group gathered therapist experiences. Frequency of IREX use was recorded for 6 months post-study. Eleven therapists delivered a total of 107 sessions of VR-based therapy to 34 clients with stroke. On the ADOPT-VR, significant pre-post improvements in therapist perceived behavioral control (p = 0.003), self-efficacy (p = 0.005) and facilitating conditions (p = 0.019) related to VR use were observed. Therapist intention to use VR did not change. Knowledge and skills improved significantly following e-learning completion (p = 0.001) and were sustained 6 months post-study. Below-average perceived usability of the IREX (19th percentile) was reported. Lack of time was the most frequently reported barrier to VR use. A decrease in frequency of perceived barriers to VR use was not significant (p = 0.159). Two therapists used the IREX sparingly in the 6 months following the study. Therapists reported
Palidis, Dimitrios J.; Wyder-Hodge, Pearson A.; Fooken, Jolande; Spering, Miriam
Dynamic visual acuity (DVA) is the ability to resolve fine spatial detail in dynamic objects during head fixation, or in static objects during head or body rotation. This ability is important for many activities such as ball sports, and a close relation has been shown between DVA and sports expertise. DVA tasks involve eye movements, yet, it is unclear which aspects of eye movements contribute to successful performance. Here we examined the relation between DVA and the kinematics of smooth pursuit and saccadic eye movements in a cohort of 23 varsity baseball players. In a computerized dynamic-object DVA test, observers reported the location of the gap in a small Landolt-C ring moving at various speeds while eye movements were recorded. Smooth pursuit kinematics—eye latency, acceleration, velocity gain, position error—and the direction and amplitude of saccadic eye movements were linked to perceptual performance. Results reveal that distinct eye movement patterns—minimizing eye position error, tracking smoothly, and inhibiting reverse saccades—were related to dynamic visual acuity. The close link between eye movement quality and DVA performance has important implications for the development of perceptual training programs to improve DVA. PMID:28187157
A. V. Mikhailov
A critical analysis of recent publications devoted to the NmF2 pre-storm enhancements is performed. There are no convincing arguments that the observed cases of NmF2 enhancements at middle and sub-auroral latitudes bear a relation to the following magnetic storms. In all cases considered, the NmF2 pre-storm enhancements were due to previous geomagnetic storms or moderate auroral activity, or they belonged to the class of positive quiet-time events (Q-disturbances). Therefore, it is possible to conclude that there is no such effect as a pre-storm NmF2 enhancement inherently related to the following magnetic storm. The observed nighttime NmF2 enhancements at sub-auroral latitudes may result from plasma transfer from the plasma ring area by meridional thermospheric wind. Enhanced plasmaspheric fluxes into the nighttime F2-region, resulting from westward substorm-associated electric fields, are another possible source of nighttime NmF2 enhancements. Daytime positive Q-disturbances occurring under very low geomagnetic activity levels may be related to dayside cusp activity.
Merlo, A.; Sánchez Belenguer, C.; Vendrell Vidal, E.; Fantini, F.; Aliperta, A.
This paper describes two procedures used to disseminate tangible cultural heritage through real-time 3D simulations providing accurate-scientific representations. The main idea is to create simple geometries (with low-poly count) and apply two different texture maps to them: a normal map and a displacement map. There are two ways to achieve models that fit with normal or displacement maps: with the former (normal maps), the number of polygons in the reality-based model may be dramatically reduced by decimation algorithms and then normals may be calculated by rendering them to texture solutions (baking). With the latter, a LOD model is needed; its topology has to be quad-dominant for it to be converted to a good quality subdivision surface (with consistent tangency and curvature all over). The subdivision surface is constructed using methodologies for the construction of assets borrowed from character animation: these techniques have been recently implemented in many entertainment applications known as "retopology". The normal map is used as usual, in order to shade the surface of the model in a realistic way. The displacement map is used to finish, in real-time, the flat faces of the object, by adding the geometric detail missing in the low-poly models. The accuracy of the resulting geometry is progressively refined based on the distance from the viewing point, so the result is like a continuous level of detail, the only difference being that there is no need to create different 3D models for one and the same object. All geometric detail is calculated in real-time according to the displacement map. This approach can be used in Unity, a real-time 3D engine originally designed for developing computer games. It provides a powerful rendering engine, fully integrated with a complete set of intuitive tools and rapid workflows that allow users to easily create interactive 3D contents. With the release of Unity 4.0, new rendering features have been added, including Direct
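The continuous level-of-detail scheme described above, where geometric detail from the displacement map is refined as the viewpoint approaches, can be illustrated in a few lines: pick a tessellation level from view distance, then offset vertices along their normals by the sampled displacement height. The falloff constants and displacement scale here are illustrative assumptions, not values from the Unity implementation:

```python
import math

def tessellation_level(distance, max_level=6, falloff=2.0):
    """More subdivision when the viewpoint is close, less when it is far
    (clamped to the range [0, max_level])."""
    level = max_level - math.log2(max(distance / falloff, 1.0))
    return max(0, min(max_level, round(level)))

def displace(vertex, normal, height, scale=0.05):
    """Offset a vertex along its unit normal by the sampled displacement height."""
    return tuple(v + n * height * scale for v, n in zip(vertex, normal))

print(tessellation_level(2.0))   # close viewpoint  -> 6 (full detail)
print(tessellation_level(64.0))  # distant viewpoint -> 1 (coarse)
print(displace((0, 0, 0), (0, 0, 1), height=1.0))  # -> (0.0, 0.0, 0.05)
```

In the real-time engine this per-distance refinement happens on the GPU every frame, which is why a single low-poly model plus a displacement map can replace a whole chain of precomputed LOD meshes.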
The nuclear power industry is facing a very real challenge that affects its day-to-day activities: a rapidly aging workforce. For New Nuclear Build (NNB) countries, the challenge is even greater, having to develop a completely new workforce with little to no prior experience or exposure to nuclear power. The workforce replacement introduces workers of a new generation with different backgrounds and affinities than its predecessors. Major lifestyle differences between the new and the old generation of workers result, amongst other things, in different learning habits and needs for this new breed of learners. Interactivity, high visual content and quick access to information are now necessary to achieve a high level of retention. (author)
Jones, Edwin R.; McLaurin, A. P.; Cathey, LeConte
The usual descriptions of depth perception have traditionally required the simultaneous presentation of disparate views presented to separate eyes with the concomitant demand that the resulting binocular parallax be horizontally aligned. Our work suggests that the visual input information is compared in a short-term memory buffer which permits the brain to compute depth as it is normally perceived. However, the mechanism utilized is also capable of receiving and processing the stereographic information even when it is received monocularly or when identical inputs are simultaneously fed to both eyes. We have also found that the restriction to horizontally displaced images is not a necessary requirement and that improvement in image acceptability is achieved by the use of vertical parallax. Use of these ideas permits the presentation of three-dimensional scenes on flat screens in full color without the encumbrance of glasses or other viewing aids.
Haeger, Mathias; Bock, Otmar; Memmert, Daniel; Hüttermann, Stefanie
Virtual reality offers a good possibility for the implementation of real-life tasks in a laboratory-based training or testing scenario. Thus, computerized training in a driving simulator offers an ecologically valid training approach. Visual attention has an influence on driving performance, so we used the reverse approach to test the influence of driving training on visual attention and executive functions. Thirty-seven healthy older participants (mean age: 71.46 ± 4.09; gender: 17 men and 20 women) took part in our controlled experimental study. We examined transfer effects from a four-week driving training (three times per week) on visual attention, executive function, and motor skill. Effects were analyzed using an analysis of variance with repeated measurements. Therefore, main factors were group and time to show training-related benefits of our intervention. Results revealed improvements for the intervention group in divided visual attention; however, there were benefits neither in the other cognitive domains nor in the additional motor task. Thus, there are no broad training-induced transfer effects from such an ecologically valid training regime. This lack of findings could be attributed to insufficient training intensities or a participant-induced bias following the cancelled randomization process.
Maida, James C.; Bowen, Charles K.; Pace, John W.
One of the most versatile tools designed for use on the International Space Station (ISS) is the Special Purpose Dexterous Manipulator (SPDM) robot. Operators for this system are trained at NASA Johnson Space Center (JSC) using a robotic simulator, the Dexterous Manipulator Trainer (DMT), which performs most SPDM functions under normal static Earth gravitational forces. The SPDM is controlled from a standard Robotic Workstation. A key feature of the SPDM and DMT is the Force/Moment Accommodation (FMA) system, which limits the contact forces and moments acting on the robot components, on its payload, an Orbital Replaceable Unit (ORU), and on the receptacle for the ORU. The FMA system helps to automatically alleviate any binding of the ORU as it is inserted or withdrawn from a receptacle, but it is limited in its correction capability. A successful ORU insertion generally requires that the reference axes of the ORU and receptacle be aligned to within approximately 0.25 inch and 0.5 degree of nominal values. The only guides available for the operator to achieve these alignment tolerances are views from any available video cameras. No special registration markings are provided on the ORU or receptacle, so the operator must use their intrinsic features in the video display to perform the pre-insertion alignment task. Since optimum camera views may not be available, and dynamic orbital lighting conditions may limit viewing periods, long times are anticipated for performing some ORU insertion or extraction operations. This study explored the feasibility of using augmented reality (AR) to assist with SPDM operations. Geometric graphical symbols were overlaid on the end effector (EE) camera view to afford cues to assist the operator in attaining adequate pre-insertion ORU alignment.
Ratamero, Erick Martins; Bellini, Dom; Dowson, Christopher G.; Römer, Rudolf A.
The ability to precisely visualize the atomic geometry of the interactions between a drug and its protein target in structural models is critical in predicting the correct modifications in previously identified inhibitors to create more effective next generation drugs. It is currently common practice among medicinal chemists while attempting the above to access the information contained in three-dimensional structures by using two-dimensional projections, which can preclude disclosure of useful features. A more accessible and intuitive visualization of the three-dimensional configuration of the atomic geometry in the models can be achieved through the implementation of immersive virtual reality (VR). While bespoke commercial VR suites are available, in this work, we present a freely available software pipeline for visualising protein structures through VR. New consumer hardware, such as the HTC Vive and the Oculus Rift utilized in this study, are available at reasonable prices. As an instructive example, we have combined VR visualization with fast algorithms for simulating intramolecular motions of protein flexibility, in an effort to further improve structure-led drug design by exposing molecular interactions that might be hidden in the less informative static models. This is a paradigmatic test case scenario for many similar applications in computer-aided molecular studies and design.
Michael H Herzog
The obvious symptoms of schizophrenia are of cognitive and psychopathological nature. However, schizophrenia also affects visual processing, which becomes particularly evident when stimuli are presented for short durations and are followed by a masking stimulus. Visual deficits are of great interest because they might be related to the genetic variations underlying the disease (the endophenotype concept). Visual masking deficits are usually attributed to specific dysfunctions of the visual system such as a hypo- or hyper-active magnocellular system. Here, we propose that visual deficits are a manifestation of a general deficit related to the enhancement of weak neural signals, as occurring in all other sorts of information processing. We summarize previous findings with the shine-through masking paradigm, where a shortly presented vernier target is followed by a masking grating. The mask deteriorates visual processing of schizophrenic patients by almost an order of magnitude compared to healthy controls. We propose that these deficits are caused by dysfunctions of attention and the cholinergic system, leading to weak neural activity corresponding to the vernier. High-density electrophysiological recordings (EEG) show that neural activity is indeed strongly reduced in schizophrenic patients, which we attribute to the lack of vernier enhancement. When only the masking grating is presented, EEG responses are roughly comparable between patients and controls. Our hypothesis is supported by findings relating visual masking to genetic deviants of the nicotinic α7 receptor (CHRNA7).
Çöltekin, A.; Lokka, I.; Zahner, M.
Whether and when we should show data in 3D is an on-going debate in communities conducting visualization research. A strong opposition exists in the information visualization (Infovis) community, and seemingly unnecessary/unwarranted use of 3D, e.g., in plots, bar or pie charts, is heavily criticized. The scientific visualization (Scivis) community, on the other hand, is more supportive of the use of 3D as it allows 'seeing' invisible phenomena, or designing and printing things that are used in, e.g., surgeries, educational settings etc. Geographic visualization (Geovis) stands between the Infovis and Scivis communities. In geographic information science, most visuo-spatial analyses have been sufficiently conducted in 2D or 2.5D, including analyses related to terrain and much of the urban phenomena. On the other hand, there has always been a strong interest in 3D, with motivations similar to those in the Scivis community. Among many types of 3D visualizations, a popular one that is exploited both for visual analysis and visualization is the highly realistic (geo)virtual environment. Such environments may be engaging and memorable for the viewers because they offer highly immersive experiences. However, it is not yet well established whether we should opt to show the data in 3D; and if so, a) what type of 3D we should use, b) for what task types, and c) for whom. In this paper, we identify some of the central arguments for and against the use of 3D visualizations around these three considerations in a concise interdisciplinary literature review.
Ohtani, Hiroaki; Mizuguchi, Naoki; Shoji, Mamoru; Ishiguro, Seiji; Ohno, Nobuaki
We introduce new software for the analysis of time-varying simulation data and a new approach for relating simulation results to experiment using virtual reality (VR) technology. In the new software, time-varying field objects are visualized in VR space, and particle trajectories in the time-varying electromagnetic field are also traced. In the new approach, both simulation results and experimental device data are visualized simultaneously in VR space. These developments enhance the study of phenomena in plasma physics and fusion plasmas. (author)
Moving object tracking in wireless sensor networks (WSNs) has been widely applied in various fields. Designing low-power WSNs under the sensor's limited resources, such as energy and bandwidth constraints, is a high priority. However, most existing works focus on only a single conflicting optimization criterion. An efficient compressive sensing technique based on a customized memory gradient pursuit algorithm with early termination in WSNs is presented, which strikes compelling trade-offs among energy dissipation for wireless transmission, bandwidth usage, and storage. Then, the proposed approach adopts an unscented particle filter to predict the location of the target. The experimental results, together with a theoretical analysis, demonstrate the substantially superior effectiveness of the proposed model and framework in terms of energy and speed under the resource limitations of a visual sensor node.
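The early-terminating greedy reconstruction that the compressive-sensing stage relies on can be illustrated with plain orthogonal matching pursuit; this is a generic sketch, not the authors' customized memory gradient pursuit algorithm, and the sensing matrix below is an arbitrary example:

```python
import numpy as np

def omp(Phi, y, max_iter=10, tol=1e-8):
    """Orthogonal matching pursuit: greedy sparse recovery of x from y = Phi @ x,
    terminating early once the residual energy is negligible."""
    m, n = Phi.shape
    support, residual = [], y.astype(float).copy()
    x = np.zeros(n)
    for _ in range(max_iter):
        if np.linalg.norm(residual) < tol:            # early termination
            break
        k = int(np.argmax(np.abs(Phi.T @ residual)))  # best-matching atom
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        residual = y - Phi @ x
    return x

# A 1-sparse signal observed through a random 4x8 sensing matrix
rng = np.random.default_rng(0)
Phi = rng.standard_normal((4, 8))
x_true = np.zeros(8)
x_true[3] = 2.0
y = Phi @ x_true
x_hat = omp(Phi, y)
print(np.linalg.norm(Phi @ x_hat - y) < 1e-6)  # measurements fully explained
```

Stopping as soon as the residual is negligible is exactly what saves resources on a sensor node: no further iterations are spent once the measurements are explained.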
Flevaris, Anastasia V; Murray, Scott O
Neural responses in primary visual cortex (V1) depend on stimulus context in seemingly complex ways. For example, responses to an oriented stimulus can be suppressed when it is flanked by iso-oriented versus orthogonally oriented stimuli, but can also be enhanced when attention is directed to iso-oriented versus orthogonal flanking stimuli. Thus the exact same contextual stimulus arrangement can have completely opposite effects on neural responses, in some cases leading to orientation-tuned suppression and in other cases to orientation-tuned enhancement. Here we show that stimulus-based suppression and enhancement of fMRI responses in humans depend on small changes in the focus of attention and can be explained by a model that combines feature-based attention with response normalization. Neurons in the primary visual cortex (V1) respond to stimuli within a restricted portion of the visual field, termed their "receptive field." However, neuronal responses can also be influenced by stimuli that surround a receptive field, although the nature of these contextual interactions and the underlying neural mechanisms are debated. Here we show that the V1 response to a stimulus in the same context can be either suppressed or enhanced depending on the focus of attention. We are able to explain the results using a simple computational model that combines two well-established properties of visual cortical responses: response normalization and feature-based enhancement.
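The combination of response normalization and feature-based attentional gain described above can be illustrated with a minimal numerical sketch, in the spirit of divisive-normalization models of attention. The gain values and constants below are arbitrary assumptions for illustration, not the authors' fitted model:

```python
import numpy as np

def normalized_response(drive, attn_gain, sigma=1.0):
    """Divisive normalization with a multiplicative attention field.

    Each unit's excitatory drive is scaled by attention, then divided
    by the summed (attended) drive of the shared normalization pool
    plus a semi-saturation constant sigma.
    """
    excitatory = np.asarray(drive) * np.asarray(attn_gain)
    return excitatory / (excitatory.sum() + sigma)

# Identical stimulus drive to a "target" unit and a "flanker" unit
drive = np.array([1.0, 1.0])

baseline = normalized_response(drive, np.array([1.0, 1.0]))       # neutral attention
attend_flanker = normalized_response(drive, np.array([1.0, 2.0])) # attend the flanker
```

With attention on the flanker, the flanker's response rises while the target's response falls, because the attended flanker contributes more to the shared normalization pool: the identical stimulus arrangement yields suppression or enhancement depending solely on where the attention field is placed.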
Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale
Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched NH listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on.
Hermann, A. J.; Moore, C.; Soreide, N. N.
Ocean circulation is irrefutably three-dimensional, and powerful new measurement technologies and numerical models promise to expand our three-dimensional knowledge of the dynamics further each year. Yet most ocean data and model output are still viewed using two-dimensional maps. Immersive visualization techniques allow investigators to view their data as a three-dimensional world of surfaces and vectors that evolves through time. The experience is not unlike holding a part of the ocean basin in one's hand, turning and examining it from different angles. While immersive, three-dimensional visualization has been possible for at least a decade, the technology was until recently inaccessible (both physically and financially) for most researchers. It is not yet fully appreciated by practicing oceanographers how new, inexpensive computing hardware and software (e.g., graphics cards and controllers designed for the huge PC gaming market) can be employed for immersive, three-dimensional, color visualization of their increasingly huge datasets and model output. In fact, the latest developments allow immersive visualization through web servers, giving scientists the ability to "fly through" three-dimensional data stored half a world away. Here we explore what additional insight is gained through immersive visualization, describe how scientists of very modest means can easily avail themselves of the latest technology, and demonstrate its implementation on a web server for Pacific Ocean model output.
Minocha, Shailey; Tudor, Ana-Despina
We showed a variety of virtual reality technologies and, through examples, discussed how virtual reality technology is transforming work styles and workplaces. Virtual reality is becoming pervasive in almost all domains, ranging from the arts and environmental causes to medical education, disaster management training, and supporting patients with dementia. Thus, an awareness of virtual reality technology and its integration in curriculum design will provide and enhance employability ski...
Foyle, David C.; Kaiser, Mary K.; Johnson, Walter W.
This paper reviews some of the sources of visual information that are available in the out-the-window scene and describes how these visual cues are important for routine pilotage and training, as well as the development of simulator visual systems and enhanced or synthetic vision systems for aircraft cockpits. It is shown how these visual cues may change or disappear under environmental or sensor conditions, and how the visual scene can be augmented by advanced displays to capitalize on the pilot's excellent ability to extract visual information from the visual scene.
Stroke is a leading cause of long-term disability, and virtual reality (VR)-based stroke rehabilitation is effective in increasing motivation and functional performance. Although much of the functional reach and grasp capability of the upper extremities is typically regained, the pinch movement remains impaired following stroke. In this study, we developed a haptic-enhanced VR system to simulate haptic pinch tasks to assist the recovery of upper-extremity fine motor function. We recruited 16 adults with stroke to verify the efficacy of this new VR system. Each patient received 30-min VR training sessions 3 times per week for 8 weeks. Outcome measures, including the Fugl-Meyer assessment (FMA), Test Evaluant les Membres superieurs des Personnes Agees (TEMPA), Wolf motor function test (WMFT), Box and Block test (BBT), and Jamar grip dynamometer, showed statistically significant progress from pretest to posttest and follow-up, indicating that the proposed system effectively promoted recovery of fine motor function. Additionally, our evidence suggests that the system was also effective under certain challenging conditions, such as the chronic stroke phase or a lesion on the same side as the dominant hand (nondominant hand impaired). System usability assessment indicated that the participants strongly intended to continue using this VR-based system in rehabilitation.
Yu, K. C.; Raynolds, R. G.; Dechesne, M.
New visualization technologies, from ArcGIS to Google Earth, have allowed for the integration of complex, disparate data sets to produce visually rich and compelling three-dimensional models of sub-surface and surface resource distribution patterns. The rendering of these models allows the public to quickly understand complicated geospatial relationships that would otherwise take much longer to explain using traditional media. We have impacted the community through topical policy presentations at both state and city levels, adult education classes at the Denver Museum of Nature and Science (DMNS), and public lectures at DMNS. We have constructed three-dimensional models from well data and surface observations which allow policy makers to better understand the distribution of groundwater in sandstone aquifers of the Denver Basin. Our presentations to local governments in the Denver metro area have allowed resource managers to better project future ground water depletion patterns, and to encourage development of alternative sources. DMNS adult education classes on water resources, geography, and regional geology, as well as public lectures on global issues such as earthquakes, tsunamis, and resource depletion, have utilized the visualizations developed from these research models. In addition to presenting GIS models in traditional lectures, we have also made use of the immersive display capabilities of the digital "fulldome" Gates Planetarium at DMNS. The real-time Uniview visualization application installed at Gates was designed for teaching astronomy, but it can be re-purposed for displaying our model datasets in the context of the Earth's surface. The 17-meter diameter dome of the Gates Planetarium allows an audience to have an immersive experience---similar to virtual reality CAVEs employed by the oil exploration industry---that would otherwise not be available to the general public. Public lectures in the dome allow audiences of over 100 people to comprehend
Guo, Jin; Guo, Shuxiang; Tamiya, Takashi; Hirata, Hideyuki; Ishihara, Hidenori
An Internet-based tele-operative robotic catheter operating system was designed for vascular interventional surgery, to afford unskilled surgeons the opportunity to learn basic catheter/guidewire skills, while allowing experienced physicians to perform surgeries cooperatively. Remote surgical procedures, limited by variable transmission times for visual feedback, have been associated with deterioration in operability and vascular wall damage during surgery. At the patient's location, the catheter shape/position was detected in real time and converted into three-dimensional coordinates in a world coordinate system. At the operation location, the catheter shape was reconstructed in a virtual-reality environment, based on the coordinates received. The data volume reduction significantly reduced visual feedback transmission times. Remote transmission experiments, conducted over inter-country distances, demonstrated the improved performance of the proposed prototype. The maximum error for the catheter shape reconstruction was 0.93 mm and the transmission time was reduced considerably. The results were positive and demonstrate the feasibility of remote surgery using conventional network infrastructures.
This study provides an overview of employing an audio-visual dialogue task as a student creativity task and self-assessment in an EFL speaking class in tertiary education to enhance students' speaking ability. The qualitative research was done in one of the speaking classes at the English Department, Semarang State University, Central Java, Indonesia. The results, as seen from the self-assessment rubric, show that the oral performance through audio-visual recorded tasks done by the students as their self-assessment gave positive evidence. The audio-visual dialogue task can be very beneficial since it can motivate the students' learning and increase their learning experiences. Self-assessment can be a valuable additional means of improving speaking ability, since it is one of the motives that drive self-evaluation, along with self-verification and self-enhancement.
Shabiralyani, Ghulam; Hasan, Khuram Shahzad; Hamad, Naqvi; Iqbal, Nadeem
This research explores teachers' opinions on the use of visual aids (e.g., pictures, animation videos, projectors and films) as a motivational tool in enhancing students' attention in reading literary texts. To accomplish the aim of the research, a closed ended questionnaire was used to collect the required data. The targeted population for this…
Brunye, Tad T.; Mahoney, Caroline R.; Lieberman, Harris R.; Giles, Grace E.; Taylor, Holly A.
Recent work suggests that a dose of 200-400mg caffeine can enhance both vigilance and the executive control of visual attention in individuals with low caffeine consumption profiles. The present study seeks to determine whether individuals with relatively high caffeine consumption profiles would show similar advantages. To this end, we examined…
A study of the effects of assertiveness training to enhance the social/assertiveness skills of 36 adolescents with visual impairments found that parents, the students, teachers, and observers judged the adolescents' social skills differently. However, the training did have some specific effect on increasing assertiveness.
Pfannkuch, Maxine; Budgett, Stephanie
Finding ways to enhance introductory students' understanding of probability ideas and theory is a goal of many first-year probability courses. In this article, we explore the potential of a prototype tool for Markov processes using dynamic visualizations to develop in students a deeper understanding of the equilibrium and hitting times…
Drijvers, Linda; Ozyurek, Asli
Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Method:…
van de Kamp, Marie-Thérèse; Admiraal, Wilfried; van Drie, Jannet; Rijlaarsdam, Gert
Background: The main purposes of visual arts education concern the enhancement of students' creative processes and the originality of their art products. Divergent thinking is crucial for finding original ideas in the initial phase of a creative process that aims to result in an original product. Aims: This study aims to examine the effects…
Tran, Thu Hoang
Research in the field of second language acquisition (SLA) has been done to ascertain the effectiveness of visual input enhancement (VIE) on grammar learning. However, one issue remains unexplored: the effects of VIE density on grammar learning. This paper presents a research proposal to investigate the effects of the density of VIE on English…
Lee, Sang-Ki; Huang, Hung-Tzu
Effects of pedagogical interventions with visual input enhancement on grammar learning have been investigated by a number of researchers during the past decade and a half. The present review delineates this research domain via a systematic synthesis of 16 primary studies (comprising 20 unique study samples) retrieved through an exhaustive…
Comeaux, Ian; McDonald, Janet L.
Visual input enhancement (VIE) increases the salience of grammatical forms, potentially facilitating acquisition through attention mechanisms. Native English speakers were exposed to an artificial language containing four linguistic cues (verb agreement, case marking, animacy, word order), with morphological cues either unmarked, marked in the…
The purpose of this study was twofold: to investigate students' concept images of class, object, and the relationship between them, and to help students enhance their learning of these notions with a visualization tool. Fifty-six second-year university students participated in the study. To investigate their concept images, the researcher developed a survey…
Background: Fingermarks are one of the most important and useful forms of physical evidence in forensic investigations. However, latent fingermarks are not directly visible, but can be visualized owing to the presence of other residues (such as inorganic salts, proteins, polypeptides, enzymes and human metabolites) which can be detected or recognized through various strategies. Convenient and rapid techniques are still needed to provide obvious contrast between the background and the fingermark ridges and thereby visualize latent fingermarks with a high degree of selectivity and sensitivity. Results: In this work, lysozyme-binding aptamer-conjugated Au nanoparticles (NPs) are used to recognize and target lysozyme in the fingermark ridges, and an Au+-complex solution is used as a growth agent to reduce Au+ to Au0 on the surface of the Au NPs. Distinct fingermark patterns were visualized on a range of substrates within 3 min; the resulting images could be observed by the naked eye without background interference. The entire process, from fingermark collection to visualization, entails only two steps and can be completed in less than 10 min. The proposed method provides cost and time savings over current fingermark visualization methods. Conclusions: We report a simple, inexpensive, and fast method for the rapid visualization of latent fingermarks on non-porous substrates using Au seed-mediated enhancement, observable by the naked eye without the use of expensive or sophisticated instruments. The proposed approach offers faster detection and visualization of latent fingermarks than existing methods and is expected to increase detection efficiency while reducing the time and costs of forensic investigations.
Kark, Sarah M; Slotnick, Scott D; Kensinger, Elizabeth A
Most studies using a recognition memory paradigm examine the neural processes that support the ability to consciously recognize past events. However, there can also be nonconscious influences from the prior study episode that reflect repetition suppression effects (a reduction in the magnitude of activity for repeated presentations of stimuli), which are revealed by comparing neural activity associated with forgotten items to correctly rejected novel items. The present fMRI study examined the effect of emotional valence (positive vs. negative) on repetition suppression effects. Using a standard recognition memory task, 24 participants viewed line drawings of previously studied negative, positive, and neutral photos intermixed with novel line drawings. For each item, participants made an old-new recognition judgment and a sure-unsure confidence rating. Collapsed across valence, repetition suppression effects were found in ventral occipital-temporal cortex and frontal regions. Activity levels in the majority of these regions were not modulated by valence. However, repetition enhancement of amygdala and ventral occipital-temporal cortex functional connectivity reflected nonconscious memory for negative items. In this study, valence had little effect on activation patterns but a larger effect on functional connectivity patterns that were markers of nonconscious memory. Beyond memory and emotion, these findings are relevant to other cognitive and social neuroscientists who utilize fMRI repetition effects to investigate perception, attention, social cognition, and other forms of learning and memory.
Ikeda, Kohei; Higashi, Toshio; Sugawara, Kenichi; Tomori, Kounosuke; Kinoshita, Hiroshi; Kasai, Tatsuya
The effect of visual and auditory enhancements of finger movement on corticospinal excitability during motor imagery (MI) was investigated using the transcranial magnetic stimulation technique. Motor-evoked potentials were elicited from the abductor digit minimi muscle during MI with auditory, visual and, auditory and visual information, and no…
Trivedi, Chintan A.; Bollmann, Johann H.
Prey capture behavior critically depends on rapid processing of sensory input in order to track, approach, and catch the target. When using vision, the nervous system faces the problem of extracting relevant information from a continuous stream of input in order to detect and categorize visible objects as potential prey and to select appropriate motor patterns for approach. For prey capture, many vertebrates exhibit intermittent locomotion, in which discrete motor patterns are chained into a sequence, interrupted by short periods of rest. Here, using high-speed recordings of full-length prey capture sequences performed by freely swimming zebrafish larvae in the presence of a single paramecium, we provide a detailed kinematic analysis of first and subsequent swim bouts during prey capture. Using Fourier analysis, we show that individual swim bouts represent an elementary motor pattern. Changes in orientation are directed toward the target on a graded scale and are implemented by an asymmetric tail bend component superimposed on this basic motor pattern. To further investigate the role of visual feedback on the efficiency and speed of this complex behavior, we developed a closed-loop virtual reality setup in which minimally restrained larvae recapitulated interconnected swim patterns closely resembling those observed during prey capture in freely moving fish. Systematic variation of stimulus properties showed that prey capture is initiated within a narrow range of stimulus size and velocity. Furthermore, variations in the delay and location of swim-triggered visual feedback showed that the reaction time of secondary and later swims is shorter for stimuli that appear within a narrow spatio-temporal window following a swim. This suggests that the larva may generate an expectation of stimulus position, which enables accelerated motor sequencing if the expectation is met by appropriate visual feedback.
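The Fourier-analysis step, extracting a bout's dominant tail-beat frequency from a tail-angle trace, can be sketched as follows. The sampling rate and synthetic trace are hypothetical; this illustrates the analysis idea, not the authors' pipeline:

```python
import numpy as np

def dominant_frequency(tail_angle, fs):
    """Return the dominant frequency (Hz) of a tail-angle trace sampled
    at fs Hz, taken as the peak of the one-sided amplitude spectrum
    with the DC component excluded."""
    x = np.asarray(tail_angle, dtype=float)
    x = x - x.mean()                      # remove offset so DC never wins
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return freqs[1:][np.argmax(spectrum[1:])]

# Synthetic 1-s bout: a 30 Hz tail beat sampled at 1 kHz
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
beat = np.sin(2.0 * np.pi * 30.0 * t)
```

With a 1-s window the frequency resolution is 1 Hz, so a pure 30 Hz oscillation falls exactly on a spectral bin; shorter real bouts would give coarser resolution.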
Serafin, Stefania; Erkut, Cumhur; Kojs, Juraj
The rapid development and availability of low-cost technologies have created wide interest in virtual reality. In the field of computer music, the term "virtual musical instruments" has long been used to describe software simulations, extensions of existing musical instruments, and ways to control them with new interfaces for musical expression. Virtual reality musical instruments (VRMIs) that include a simulated visual component delivered via a head-mounted display or other forms of immersive visualization have not yet received much attention. In this article, we present a field...
Isidro Navarro Delgado
Web 3.0 technologies provide effective tools for interpreting architecture and culture in general. Thus, a project may have an emotional impact on individuals while also having a more widespread effect on society as a whole. This project defines a methodology for evaluating the accessibility of architecture for people with visual disabilities and applies it to visits to emblematic buildings such as the Basilica of the Holy Family in Barcelona, designed by the architect Antoni Gaudí.
Rancati, Alberto; Angrigiani, Claudio; Nava, Maurizio B; Catanuto, Giuseppe; Rocco, Nicola; Ventrice, Fernando; Dorr, Julio
Augmented reality (AR) enables the superimposition of virtual reality reconstructions onto clinical images of a real patient, in real time. This allows visualization of internal structures through overlying tissues, thereby providing a virtually transparent view of surgical anatomy. AR has been applied in neurosurgery, which operates in a relatively fixed space with frames and bony references; these facilitate the registration between virtual and real data. Here, augmented breast imaging (ABI) is described. Breast MRI studies of breast implant patients with seroma were performed using a Siemens 3T system with a body coil and a four-channel bilateral phased-array breast coil as the transmitter and receiver, respectively. The contrast agent (CA) was a gadolinium (Gd) injection (0.1 mmol/kg at 2 ml/s) administered by a programmable power injector. DICOM-formatted image data from 10 MRI cases of breast implant seroma and 10 MRI cases with T1-2 N0 M0 breast cancer were imported and transformed into augmented reality images. ABI demonstrated stereoscopic depth perception, focal-point convergence, 3D cursor use, and joystick fly-through. ABI can improve clinical outcomes by giving an enhanced view of the structures to be operated on, and it should be further studied to determine its utility in clinical practice.
Martens, J.B.; Qi, W.; Aliakseyeu, D.; Kok, A.J.F.; Liere, van R.; Hoven, van den E.; Ijsselsteijn, W.; Kortuem, G.; Laerhoven, van K.; McClelland, I.; Perik, E.; Romero, N.; Ruyter, de B.
We demonstrate basic 2D and 3D interactions in both a Virtual Reality (VR) system, called the Personal Space Station, and an Augmented Reality (AR) system, called the Visual Interaction Platform. Since both platforms use identical (optical) tracking hardware and software, and can run identical
Sehati, Samira; Khodabandehlou, Morteza
The present investigation was an attempt to study on the effect of power point enhanced teaching (visual input) on Iranian Intermediate EFL learners' listening comprehension ability. To that end, a null hypothesis was formulated as power point enhanced teaching (visual input) has no effect on Iranian Intermediate EFL learners' listening…
Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David
Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive.
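"Tracking the temporal speech envelope" of this kind is often quantified by correlating a neural signal with the speech envelope. A minimal pure-NumPy sketch follows, using a rectify-and-smooth envelope as a simple stand-in for Hilbert-based methods; all signals below are simulated toy data, not MEG recordings:

```python
import numpy as np

def envelope(signal, win=50):
    """Crude amplitude envelope: full-wave rectification followed by a
    moving-average smoother (a simple stand-in for Hilbert envelopes)."""
    kernel = np.ones(win) / win
    return np.convolve(np.abs(signal), kernel, mode="same")

def tracking_score(neural, speech_env):
    """Envelope-tracking index: Pearson correlation between a (simulated)
    cortical signal and a speech temporal envelope."""
    return float(np.corrcoef(neural, speech_env)[0, 1])

# Two "speakers" with slow envelopes at different modulation rates
fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
env_attended = 1.0 + np.sin(2.0 * np.pi * 3.0 * t)  # attended stream
env_ignored = 1.0 + np.sin(2.0 * np.pi * 5.0 * t)   # competing stream
# Simulated cortical signal that preferentially tracks the attended envelope
neural = env_attended + 0.3 * env_ignored
```

A higher score for the attended than the ignored envelope is the signature of preferential tracking that the study reports in the audiovisual condition.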
Krebber, Martin; Harwood, James; Spitzer, Bernhard; Keil, Julian; Senkowski, Daniel
When touching and viewing a moving surface, our visual and somatosensory systems receive congruent spatiotemporal input. Behavioral studies have shown that motion congruence facilitates interplay between visual and tactile stimuli, but the neural mechanisms underlying this interplay are not well understood. Neural oscillations play a role in motion processing and multisensory integration, and they may also be crucial for visuotactile motion processing. In this electroencephalography study, we applied linear beamforming to examine the impact of visuotactile motion congruence on beta band activity and gamma band activity (GBA) in visual and somatosensory cortices. Visual and tactile inputs consisted of gratings that moved either in the same or different directions. Participants performed a target detection task that was unrelated to motion congruence. While there were no effects in the beta band (13-21 Hz), the power of GBA (50-80 Hz) in visual and somatosensory cortices was larger for congruent compared with incongruent motion stimuli. This suggests enhanced bottom-up multisensory processing when visual and tactile gratings moved in the same direction. Supporting its behavioral relevance, GBA was correlated with shorter reaction times in the target detection task. We conclude that motion congruence plays an important role in the integrative processing of visuotactile stimuli in sensory cortices, as reflected by oscillatory responses in the gamma band.
Brigham, Tara J
Augmented, virtual, and mixed reality applications all aim to enhance a user's current experience or reality. While variations of this technology are not new, within the last few years there has been a significant increase in the number of artificial reality devices or applications available to the general public. This column will explain the difference between augmented, virtual, and mixed reality and how each application might be useful in libraries. It will also provide an overview of the concerns surrounding these different reality applications and describe how and where they are currently being used.
Kose, Ahmet; Tepljakov, Aleksei; Astapov, Sergei; Draheim, Dirk; Petlenkov, Eduard; Vassiljeva, Kristina
In this paper, we present our findings related to the problem of localization and visualization of a sound source placed in the same room as the listener. The particular effect that we aim to investigate is called synesthesia—the act of experiencing one sense modality as another; e.g., a person may vividly experience flashes of colors when listening to a series of sounds. Towards that end, we apply a series of recently developed methods for detecting a sound source in a three-dimensional space ...
Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas
Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status.
Simon, B.H.; Raghavan, R.
The operators of nuclear power plants are presented with an often uncoordinated and arbitrary array of displays and controls. Information is presented in different formats and on physically dissimilar instruments. In an accident situation, an operator must be very alert to quickly diagnose and respond to the state of the plant as represented by the control room displays. Improvements in display technology and increased automation have helped reduce operator burden; however, too much automation may lead to operator apathy and decreased efficiency. A proposed approach to the human-system interface uses modern graphics technology and advances in computational power to provide a visualization or "virtual reality" framework for the operator. This virtual reality comprises a simulated perception of another existence, complete with three-dimensional structures, backgrounds, and objects. By placing the operator in an environment that presents an integrated, graphical, and dynamic view of the plant, his attention is directly engaged. Through computer simulation, the operator can view plant equipment, read local displays, and manipulate controls as if he were in the local area. This process not only keeps an operator involved in plant operation and testing procedures, but also reduces personnel exposure. In addition, operator stress is reduced because, with realistic views of plant areas and equipment, the status of the plant can be accurately grasped without interpreting a large number of displays. Since a single operator can quickly "visit" many different plant areas without physically moving from the control room, these techniques are useful in reducing labor requirements for surveillance and maintenance activities. This concept requires a plant dynamic model continuously updated via real-time process monitoring. This model interacts with a three-dimensional, solid-model architectural configuration of the physical plant.
Baumhauer, M.; Simpfendörfer, T.; Schwarz, R.; Seitel, M.; Müller-Stich, B. P.; Gutt, C. N.; Rassweiler, J.; Meinzer, H.-P.; Wolf, I.
We introduce a novel navigation system to support minimally invasive prostate surgery. The system utilizes transrectal ultrasonography (TRUS) and needle-shaped navigation aids to visualize hidden structures via Augmented Reality. During the intervention, the navigation aids are segmented once from a 3D TRUS dataset and subsequently tracked by the endoscope camera. Camera Pose Estimation methods directly determine position and orientation of the camera in relation to the navigation aids. Accordingly, our system does not require any external tracking device for registration of endoscope camera and ultrasonography probe. In addition to a preoperative planning step in which the navigation targets are defined, the procedure consists of two main steps which are carried out during the intervention: First, the preoperatively prepared planning data is registered with an intraoperatively acquired 3D TRUS dataset and the segmented navigation aids. Second, the navigation aids are continuously tracked by the endoscope camera. The camera's pose can thereby be derived and relevant medical structures can be superimposed on the video image. This paper focuses on the latter step. We have implemented several promising real-time algorithms and incorporated them into the Open Source Toolkit MITK (www.mitk.org). Furthermore, we have evaluated them for minimally invasive surgery (MIS) navigation scenarios. For this purpose, a virtual evaluation environment has been developed, which allows for the simulation of navigation targets and navigation aids, including their measurement errors. Besides evaluating the accuracy of the computed pose, we have analyzed the impact of an inaccurate pose and the resulting displacement of navigation targets in Augmented Reality.
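Tracking planar navigation aids with an endoscope camera, as described above, typically rests on estimating a homography from point correspondences before recovering pose. The following Direct Linear Transform sketch illustrates that ingredient only; the function names and the four-corner example are illustrative assumptions, not the authors' MITK implementation.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: 3x3 homography H with dst ~ H @ src,
    from >= 4 point correspondences (e.g. tracked marker corners)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the system A h = 0
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h is the right singular vector for the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the projective scale

def apply_h(H, pt):
    """Map a 2D point through a homography (homogeneous divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Toy check: recover a known homography from the four unit-square corners
H_true = np.array([[2.0, 0.1, 5.0], [0.05, 1.5, -3.0], [0.001, 0.002, 1.0]])
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dst = [apply_h(H_true, p) for p in src]
H_est = estimate_homography(src, dst)
```

In a full pose pipeline, the homography (or a PnP solver on the segmented needle tips) would then be decomposed into camera rotation and translation relative to the navigation aids.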
Fuggetta, Giorgio; Duke, Philip A
The operation of attention on visible objects involves a sequence of cognitive processes. The current study firstly aimed to elucidate the effects of practice on neural mechanisms underlying attentional processes as measured with both behavioural and electrophysiological measures. Secondly, it aimed to identify any pattern in the relationship between Event-Related Potential (ERP) components which play a role in the operation of attention in vision. Twenty-seven participants took part in two recording sessions one week apart, performing an experimental paradigm which combined a match-to-sample task with a memory-guided efficient visual-search task within one trial sequence. Overall, practice decreased behavioural response times, increased accuracy, and modulated several ERP components that represent cognitive and neural processing stages. This neuromodulation through practice was also associated with an enhanced link between behavioural measures and ERP components and with an enhanced cortico-cortical interaction of functionally interconnected ERP components. Principal component analysis (PCA) of the ERP amplitude data revealed three components, having different rostro-caudal topographic representations. The first component included both the centro-parietal and parieto-occipital mismatch triggered negativity - involved in integration of visual representations of the target with current task-relevant representations stored in visual working memory - loaded with second negative posterior-bilateral (N2pb) component, involved in categorising specific pop-out target features. The second component comprised the amplitude of bilateral anterior P2 - related to detection of a specific pop-out feature - loaded with bilateral anterior N2, related to detection of conflicting features, and fronto-central mismatch triggered negativity. The third component included the parieto-occipital N1 - related to early neural responses to the stimulus array - which loaded with the second
Anna A. Kosovicheva
Acetylcholine (ACh) reduces the spatial spread of excitatory fMRI responses in early visual cortex and the receptive field sizes of V1 neurons. We investigated the perceptual consequences of these physiological effects of ACh with surround suppression and crowding, two tasks that involve spatial interactions between visual field locations. Surround suppression refers to the reduction in perceived stimulus contrast by a high-contrast surround stimulus. For grating stimuli, surround suppression is selective for the relative orientations of the center and surround, suggesting that it results from inhibitory interactions in early visual cortex. Crowding refers to impaired identification of a peripheral stimulus in the presence of flankers and is thought to result from excessive integration of visual features. We increased synaptic ACh levels by administering the cholinesterase inhibitor donepezil to healthy human subjects in a placebo-controlled, double-blind design. In Exp. 1, we measured surround suppression of a central grating using a contrast discrimination task with three conditions: (1) a surround grating with the same orientation as the center (parallel), (2) a surround orthogonal to the center, or (3) no surround. Contrast discrimination thresholds were higher in the parallel than in the orthogonal condition, demonstrating orientation-specific surround suppression (OSSS). Cholinergic enhancement reduced thresholds only in the parallel condition, thereby reducing OSSS. In Exp. 2, subjects performed a crowding task in which they reported the identity of a peripheral letter flanked by letters on either side. We measured the critical spacing between the target and flanking letters that allowed reliable identification. Cholinergic enhancement had no effect on critical spacing. Our findings suggest that ACh reduces spatial interactions in tasks involving segmentation of visual field locations but that these effects may be limited to early visual cortical
Dailey, James F.
This paper reviews the field of virtual reality. The author describes the basic concepts of virtual reality and finds that its numerous potential benefits to society could revolutionize everyday life. The various components that make up a virtual reality system are described in detail.
Mölbert, S C; Thaler, A; Mohler, B J; Streuber, S; Romero, J; Black, M J; Zipfel, S; Karnath, H-O; Giel, K E
Body image disturbance (BID) is a core symptom of anorexia nervosa (AN), but as yet distinctive features of BID are unknown. The present study aimed at disentangling perceptual and attitudinal components of BID in AN. We investigated n = 24 women with AN and n = 24 controls. Based on a three-dimensional (3D) body scan, we created realistic virtual 3D bodies (avatars) for each participant that were varied through a range of ±20% of the participants' weights. Avatars were presented in a virtual reality mirror scenario. Using different psychophysical tasks, participants identified and adjusted their actual and their desired body weight. To test for general perceptual biases in estimating body weight, a second experiment investigated perception of weight and shape matched avatars with another identity. Women with AN and controls underestimated their weight, with a trend that women with AN underestimated more. The average desired body of controls had normal weight while the average desired weight of women with AN corresponded to extreme AN (DSM-5). Correlation analyses revealed that desired body weight, but not accuracy of weight estimation, was associated with eating disorder symptoms. In the second experiment, both groups estimated accurately while the most attractive body was similar to Experiment 1. Our results contradict the widespread assumption that patients with AN overestimate their body weight due to visual distortions. Rather, they illustrate that BID might be driven by distorted attitudes with regard to the desired body. Clinical interventions should aim at helping patients with AN to change their desired weight.
Advances in technology have caused the amount of generated data, and its complexity, to grow exponentially. New data acquisition and high-performance computing techniques have made it possible to manage large data sets at high resolution. However, the amount and complexity of the data we now generate exceed our ability to easily understand and make sense of them. In this paper, we propose the use of virtual reality techniques that exploit different sensory channels to improve the visualization, analysis, and interpretation of, and interaction with, large complex datasets.
Prusky, Glen T; Silver, Byron D; Tschetter, Wayne W; Alam, Nazia M; Douglas, Robert M
Developmentally regulated plasticity of vision has generally been associated with "sensitive" or "critical" periods in juvenile life, wherein visual deprivation leads to loss of visual function. Here we report an enabling form of visual plasticity that commences in infant rats from eye opening, in which daily threshold testing of optokinetic tracking, amid otherwise normal visual experience, stimulates enduring, visual cortex-dependent enhancement (>60%) of the spatial frequency threshold for tracking. The perceptual ability to use spatial frequency in discriminating between moving visual stimuli is also improved by the testing experience. The capacity for inducing enhancement is transitory and effectively limited to infancy; however, enhanced responses are not consolidated and maintained unless in-kind testing experience continues uninterrupted into juvenile life. The data show that selective visual experience from infancy can alone enable visual function. They also indicate that plasticity associated with visual deprivation may not be the only cause of developmental visual dysfunction, because we found that experientially inducing enhancement in late infancy, without subsequent reinforcement of the experience in early juvenile life, can lead to enduring loss of function.
Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate
Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat...... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of the multisensory orientation response.
Pasricha, Neel Dave; Bhullar, Paramjit Kaur; Shieh, Christine; Viehland, Christian; Carrasco-Zevallos, Oscar Mijail; Keller, Brenton; Izatt, Joseph Adam; Toth, Cynthia Ann; Challa, Pratap; Kuo, Anthony Nanlin
We report the first use of swept-source microscope-integrated optical coherence tomography (SS-MIOCT) capable of live four-dimensional (4D) (three-dimensional across time) imaging intraoperatively to directly visualize tube shunt placement and trabeculectomy surgeries in two patients with severe open-angle glaucoma and elevated intraocular pressure (IOP) that was not adequately managed by medical intervention or prior surgery. We performed tube shunt placement and trabeculectomy surgery and used SS-MIOCT to visualize and record surgical steps that benefitted from the enhanced visualization. In the case of tube shunt placement, SS-MIOCT successfully visualized the scleral tunneling, tube shunt positioning in the anterior chamber, and tube shunt suturing. For the trabeculectomy, SS-MIOCT successfully visualized the scleral flap creation, sclerotomy, and iridectomy. Postoperatively, both patients did well, with IOPs decreasing to the target goal. We found the benefit of SS-MIOCT was greatest in surgical steps requiring depth-based assessments. This technology has the potential to improve clinical outcomes.
Terhune, Devin Blair; Wudarczyk, Olga Anna; Kochuparampil, Priya; Cohen Kadosh, Roi
There is emerging evidence that the encoding of visual information and the maintenance of this information in a temporarily accessible state in working memory rely on the same neural mechanisms. A consequence of this overlap is that atypical forms of perception should influence working memory. We examined this by investigating whether having grapheme-color synesthesia, a condition characterized by the involuntary experience of color photisms when reading or representing graphemes, would confer benefits on working memory. Two competing hypotheses propose that superior memory in synesthesia results from information being coded in two information channels (dual-coding) or from superior dimension-specific visual processing (enhanced processing). We discriminated between these hypotheses in three n-back experiments in which controls and synesthetes viewed inducer and non-inducer graphemes and maintained color or grapheme information in working memory. Synesthetes displayed superior color working memory than controls for both grapheme types, whereas the two groups did not differ in grapheme working memory. Further analyses excluded the possibilities of enhanced working memory among synesthetes being due to greater color discrimination, stimulus color familiarity, or bidirectionality. These results reveal enhanced dimension-specific visual working memory in this population and supply further evidence for a close relationship between sensory processing and the maintenance of sensory information in working memory.
During the vertebrate visual cycle, all-trans-retinal is exported from photoreceptors to the adjacent RPE or Müller glia, wherein 11-cis-retinal is regenerated. The 11-cis chromophore is returned to photoreceptors, forming light-sensitive visual pigments with opsin GPCRs. Dysfunction of this process perturbs phototransduction because functional visual pigment cannot be generated. Mutations in visual cycle genes can result in monogenic inherited forms of blindness. Though key enzymatic processes are well characterized, questions remain as to the physiological role of visual cycle proteins in different retinal cell types, the functional domains of these proteins in retinoid biochemistry, and the in vivo pathogenesis of disease mutations. Significant progress is needed to develop effective and accessible treatments for inherited blindness arising from mutations in visual cycle genes. Here, we review opportunities to apply gene editing technology to two crucial visual cycle components, RPE65 and CRALBP. Expressed exclusively in the human RPE, RPE65 enzymatically converts retinyl esters into 11-cis retinal. CRALBP is an 11-cis-retinal binding protein expressed in human RPE and Müller glia. Loss-of-function mutations in either protein result in autosomal recessive forms of blindness. Modeling these human conditions using RPE65 or CRALBP murine knockout models has enhanced our understanding of their biochemical function, associated disease pathogenesis, and development of therapeutics. However, rod-dominated murine retinae provide a challenge to assess cone function. The cone-rich zebrafish model is amenable to cost-effective maintenance of a variety of strains. Interestingly, gene duplication in zebrafish resulted in three Rpe65 and two Cralbp isoforms with differential temporal and spatial expression patterns. Functional investigations of zebrafish Rpe65 and Cralbp were restricted to gene knockdown with morpholino oligonucleotides. However, transient
Crawford, H J; Allen, S N
To investigate the hypothesis that hypnosis has an enhancing effect on imagery processing, as mediated by hypnotic responsiveness and cognitive strategies, four experiments compared performance of low and high, or low, medium, and high, hypnotically responsive subjects in waking and hypnosis conditions on a successive visual memory discrimination task that required detecting differences between successively presented picture pairs in which one member of the pair was slightly altered. Consistently, hypnotically responsive individuals showed enhanced performance during hypnosis, whereas nonresponsive ones did not. Hypnotic responsiveness correlated .52 (p less than .001) with enhanced performance during hypnosis, but it was uncorrelated with waking performance (Experiment 3). Reaction time was not affected by hypnosis, although high hypnotizables were faster than lows in their responses (Experiments 1 and 2). Subjects reported enhanced imagery vividness on the self-report Vividness of Visual Imagery Questionnaire during hypnosis. The differential effect between lows and highs was in the anticipated direction but not significant (Experiments 1 and 2). As anticipated, hypnosis had no significant effect on a discrimination task that required determining whether there were differences between pairs of simultaneously presented pictures. Two cognitive strategies that appeared to mediate visual memory performance were reported: (a) detail strategy, which involved the memorization and rehearsal of individual details for memory, and (b) holistic strategy, which involved looking at and remembering the whole picture with accompanying imagery. Both lows and highs reported similar predominantly detail-oriented strategies during waking; only highs shifted to a significantly more holistic strategy during hypnosis. These findings suggest that high hypnotizables have a greater capacity for cognitive flexibility (Batting, 1979) than do lows. Results are discussed in terms of several
Repetitive visual training paired with electrical activation of cholinergic projections to the primary visual cortex (V1) induces long-term enhancement of cortical processing in response to the visual training stimulus. To better determine the receptor subtypes mediating this effect, the selective pharmacological blockade of V1 nicotinic (nAChR), M1 and M2 muscarinic (mAChR), or GABAergic A (GABAAR) receptors was performed during the training session, and visual evoked potentials (VEPs) were recorded before and after training. The training session consisted of the exposure of awake, adult rats to an orientation-specific 0.12 CPD grating paired with an electrical stimulation of the basal forebrain for 10 minutes per day over 1 week. Pharmacological agents were infused intracortically during this period. The post-training VEP amplitude was significantly increased compared to the pre-training values for the trained spatial frequency and for adjacent spatial frequencies up to 0.3 CPD, suggesting a long-term increase of V1 sensitivity. This increase was totally blocked by the nAChR antagonist as well as by the M2 mAChR subtype and GABAAR antagonists. Moreover, administration of the M2 mAChR antagonist also significantly decreased the amplitude of the control VEPs, suggesting a suppressive effect on cortical responsiveness. However, the M1 mAChR antagonist blocked the increase of the VEP amplitude only for the high spatial frequency (0.3 CPD), suggesting that the role of M1 was limited to the spread of the enhancement effect to a higher spatial frequency. More generally, all the drugs used did block the VEP increase at 0.3 CPD. Further, use of each of the aforementioned receptor antagonists blocked training-induced changes in gamma and beta band oscillations. These findings demonstrate that visual training coupled with cholinergic stimulation improved perceptual sensitivity by enhancing cortical responsiveness in V1. This enhancement is mainly mediated by n
An important unresolved question in sensory neuroscience is whether, and if so with what time course, tactile perception is enhanced by visual deprivation. In three experiments involving 158 normally sighted human participants, we assessed whether tactile spatial acuity improves with short-term visual deprivation over periods ranging from under 10 to over 110 minutes. We used an automated, precisely controlled two-interval forced-choice grating orientation task to assess each participant's ability to discern the orientation of square-wave gratings pressed against the stationary index finger pad of the dominant hand. A two-down one-up staircase (Experiment 1) or a Bayesian adaptive procedure (Experiments 2 and 3) was used to determine the groove width of the grating whose orientation each participant could reliably discriminate. The experiments consistently showed that tactile grating orientation discrimination does not improve with short-term visual deprivation. In fact, we found that tactile performance degraded slightly but significantly upon a brief period of visual deprivation (Experiment 1) and did not improve over periods of up to 110 minutes of deprivation (Experiments 2 and 3). The results additionally showed that grating orientation discrimination tends to improve upon repeated testing, and confirmed that women significantly outperform men on the grating orientation task. We conclude that, contrary to two recent reports but consistent with an earlier literature, passive tactile spatial acuity is not enhanced by short-term visual deprivation. Our findings have important theoretical and practical implications. On the theoretical side, the findings set limits on the time course over which neural mechanisms such as crossmodal plasticity may operate to drive sensory changes; on the practical side, the findings suggest that researchers who compare tactile acuity of blind and sighted participants should not blindfold the sighted participants.
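The two-down one-up rule mentioned in the abstract above converges on roughly the 70.7%-correct point of the psychometric function. A minimal sketch follows, assuming a fixed step size and a deterministic toy observer; both are illustrative choices, not the study's actual parameters.

```python
def two_down_one_up(respond, start=2.0, step=0.2, n_trials=60, floor=0.0):
    """Adaptive staircase: the stimulus level (e.g. groove width) drops after
    two consecutive correct responses and rises after each error, so the
    track oscillates near the ~70.7%-correct level."""
    level, streak, levels = start, 0, []
    for _ in range(n_trials):
        levels.append(level)
        if respond(level):             # observer judged the grating orientation
            streak += 1
            if streak == 2:            # two-down: make the task harder
                streak = 0
                level = max(floor, level - step)
        else:                          # one-up: make the task easier
            streak = 0
            level += step
    return levels

# Deterministic toy observer with a hard threshold at groove width 1.0:
# the track should descend from 2.0 and then oscillate around 1.0.
levels = two_down_one_up(lambda w: w >= 1.0)
```

The threshold estimate is typically taken as the mean level over the last several reversals; with a real (stochastic) observer that average sits near the 70.7% point of the psychometric function.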
Jola, Corinne; Abedian-Amiri, Ali; Kuppuswamy, Annapoorna; Pollick, Frank E.; Grosbras, Marie-Hélène
The human “mirror-system” is suggested to play a crucial role in action observation and execution, and is characterized by activity in the premotor and parietal cortices during the passive observation of movements. The previous motor experience of the observer has been shown to enhance the activity in this network. Yet visual experience could also have a determinant influence when watching more complex actions, as in dance performances. Here we tested the impact visual experience has on motor simulation when watching dance, by measuring changes in corticospinal excitability. We also tested the effects of empathic abilities. To fully match the participants' long-term visual experience with the present experimental setting, we used three live solo dance performances: ballet, Indian dance, and non-dance. Participants were either frequent dance spectators of ballet or Indian dance, or “novices” who never watched dance. None of the spectators had been physically trained in these dance styles. Transcranial magnetic stimulation was used to measure corticospinal excitability by means of motor-evoked potentials (MEPs) in both the hand and the arm, because the hand is specifically used in Indian dance and the arm is frequently engaged in ballet dance movements. We observed that frequent ballet spectators showed larger MEP amplitudes in the arm muscles when watching ballet compared to when they watched other performances. We also found that the higher Indian dance spectators scored on the fantasy subscale of the Interpersonal Reactivity Index, the larger their MEPs were in the arms when watching Indian dance. Our results show that even without physical training, corticospinal excitability can be enhanced as a function of either visual experience or the tendency to imaginatively transpose oneself into fictional characters. We suggest that spectators covertly simulate the movements for which they have acquired visual experience, and that empathic abilities heighten
Gan, Hong-Seng; Swee, Tan Tian; Abdul Karim, Ahmad Helmy; Sayuti, Khairil Amir; Abdul Kadir, Mohammed Rafiq; Tham, Weng-Kit; Wong, Liang-Xuan; Chaudhary, Kashif T.; Yupapin, Preecha P.
A well-defined image can assist the user in identifying the region of interest during segmentation. However, complex medical images are usually characterized by poor tissue contrast and low background luminance. Contrast improvement can lift image visual quality, but fundamental contrast enhancement methods often overlook the sudden jump problem. In this work, the proposed bihistogram Bezier curve contrast enhancement introduces the concept of “adequate contrast enhancement” to overcome the sudden jump problem in knee magnetic resonance images. Since every image produces its own intensity distribution, the adequate contrast enhancement checks the image's maximum intensity distortion and uses intensity discrepancy reduction to generate the Bezier transform curve. The proposed method improves tissue contrast and preserves pertinent knee features without compromising natural image appearance. Moreover, statistical results from Fisher's Least Significant Difference test and the Duncan test have consistently indicated that the proposed method outperforms fundamental contrast enhancement methods in improving image visual quality. As the study is limited to a relatively small image database, future work will include a larger dataset with osteoarthritic images to assess the clinical effectiveness of the proposed method in facilitating image inspection. PMID:24977191
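The bi-histogram Bezier mapping described above can be sketched as follows. The split at the image mean, the control-point rule, and the `alpha` parameter are illustrative assumptions for this sketch, not the authors' published formulation:

```python
import numpy as np

def bezier_map(t, p0, p1, p2):
    """Quadratic Bezier curve evaluated at t in [0, 1]."""
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def bihistogram_bezier_enhance(img, alpha=0.5):
    """Illustrative bi-histogram contrast mapping (hypothetical parameters).

    The intensity range is split at the image mean; each half is mapped
    through a quadratic Bezier transfer curve whose control point bends the
    mapping toward a stronger contrast stretch while keeping both sub-range
    endpoints fixed, so the two halves meet without an abrupt brightness
    shift (the 'sudden jump' of plain bi-histogram equalization).
    """
    img = np.asarray(img, dtype=np.float64)
    mean = img.mean()
    out = np.empty_like(img)

    for lo, hi, mask in [(img.min(), mean, img <= mean),
                         (mean, img.max(), img > mean)]:
        span = hi - lo
        if span == 0 or not mask.any():
            out[mask] = img[mask]
            continue
        t = (img[mask] - lo) / span          # normalize sub-range to [0, 1]
        # Hypothetical control-point rule: alpha=0 keeps the identity
        # mapping, larger alpha bends the curve away from the diagonal.
        ctrl = 0.5 + alpha * (t.mean() - 0.5)
        out[mask] = lo + span * bezier_map(t, 0.0, ctrl, 1.0)
    return out
```

With `alpha=0` the transfer curve reduces to the identity, so intensities pass through unchanged; for any `alpha` the output stays within each sub-range, which is the continuity property the abstract's "adequate contrast enhancement" is designed to guarantee.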
Alwadani, Fahad; Morsi, Mohammed Saad
To compare the surgical proficiency of medical students who underwent traditional training or virtual reality training for argon laser trabeculoplasty with the PixEye simulator. The cohort comprised 47 fifth-year male medical students from the College of Medicine, King Faisal University, Saudi Arabia, divided into two groups: 24 students who received virtual reality training (VR Group) and 23 who underwent traditional training (Control Group). After training, the students performed the trabeculoplasty procedure. All training sessions included concurrent PowerPoint presentations describing the details of the procedure. Evaluation of surgical performance was based on the following variables: missing the exact location with the laser, overtreatment, undertreatment, and inadvertent laser shots to the iris and cornea. The target was missed by 8% of the VR Group compared to 55% of the Control Group. Overtreatment and undertreatment were observed in 7% of the VR Group compared to 46% of the Control Group. Inadvertent laser application to the cornea or iris occurred in 4.5% of the VR Group compared to 34% of the Control Group. Virtual reality training on the PixEye simulator may enhance the proficiency of medical students and limit possible surgical errors during laser trabeculoplasty. The authors have no financial interest in the material mentioned in this study.
Fecich, Samantha J.
During this collective case study, I explored the use of augmented reality books on an iPad 2 with students diagnosed with disabilities. Students in this study attended a high school life skills class in a rural school district during the fall 2013 semester. Four students participated in this study, two males and two females. Specifically, the…
Samsudin, Khairulanuar; Rafi, Ahmad; Mohamad Ali, Ahmad Zamzuri; Abd. Rashid, Nazre
The aim of this study is to develop and to test a low-cost virtual reality spatial trainer in terms of its effectiveness in spatial training. The researchers adopted three features deriving from the constructivist perspective to guide the design of the trainer, namely interaction, instruction, and support. The no-control pretest-posttest…
Woolgar, Alexandra; Williams, Mark A; Rich, Anina N
Selective attention is fundamental for human activity, but the details of its neural implementation remain elusive. One influential theory, the adaptive coding hypothesis (Duncan, 2001, An adaptive coding model of neural function in prefrontal cortex, Nature Reviews Neuroscience 2:820-829), proposes that single neurons in certain frontal and parietal regions dynamically adjust their responses to selectively encode relevant information. This selective representation may in turn support selective processing in more specialized brain regions such as the visual cortices. Here, we use multi-voxel decoding of functional magnetic resonance images to demonstrate selective representation of attended, and not distractor, objects in frontal, parietal, and visual cortices. In addition, we highlight a critical role for task demands in determining which brain regions exhibit selective coding. Strikingly, representation of attended objects in frontoparietal cortex was highest under conditions of high perceptual demand, when stimuli were hard to perceive and coding in early visual cortex was weak. Coding in early visual cortex varied as a function of attention and perceptual demand, while coding in higher visual areas was sensitive to the allocation of attention but robust to changes in perceptual difficulty. Consistent with high-profile reports, peripherally presented objects could also be decoded from activity at the occipital pole, a region which corresponds to the fovea. Our results emphasize the flexibility of frontoparietal and visual systems. They support the hypothesis that attention enhances the multi-voxel representation of information in the brain, and suggest that the engagement of this attentional mechanism depends critically on current task demands. Copyright © 2015 Elsevier Inc. All rights reserved.
Byers, Anna; Serences, John T
Learning to better discriminate a specific visual feature (i.e., a specific orientation in a specific region of space) has been associated with plasticity in early visual areas (sensory modulation) and with improvements in the transmission of sensory information from early visual areas to downstream sensorimotor and decision regions (enhanced readout). However, in many real-world scenarios that require perceptual expertise, observers need to efficiently process numerous exemplars from a broad stimulus class as opposed to just a single stimulus feature. Some previous data suggest that perceptual learning leads to highly specific neural modulations that support the discrimination of specific trained features. However, the extent to which perceptual learning acts to improve the discriminability of a broad class of stimuli via the modulation of sensory responses in human visual cortex remains largely unknown. Here, we used functional MRI and a multivariate analysis method to reconstruct orientation-selective response profiles based on activation patterns in the early visual cortex before and after subjects learned to discriminate small offsets in a set of grating stimuli that were rendered in one of nine possible orientations. Behavioral performance improved across 10 training sessions, and there was a training-related increase in the amplitude of orientation-selective response profiles in V1, V2, and V3 when orientation was task relevant compared with when it was task irrelevant. These results suggest that generalized perceptual learning can lead to modified responses in the early visual cortex in a manner that is suitable for supporting improved discriminability of stimuli drawn from a large set of exemplars. Copyright © 2014 the American Physiological Society.
Duarte, Audrey; Hearons, Patricia; Jiang, Yashu; Delvin, Mary Courtney; Newsome, Rachel N.; Verhaeghen, Paul
Behavioral evidence from the young suggests spatial cues that orient attention toward task relevant items in visual working memory (VWM) enhance memory capacity. Whether older adults can also use retrospective cues (“retro-cues”) to enhance VWM capacity is unknown. In the current event-related potential (ERP) study, young and old adults performed a VWM task in which spatially informative retro-cues were presented during maintenance. Young but not older adults’ VWM capacity benefitted from retro-cueing. The contralateral delay activity (CDA) ERP index of VWM maintenance was attenuated after the retro-cue, which effectively reduced the impact of memory load. CDA amplitudes were reduced prior to retro-cue onset in the old only. Despite a preserved ability to delete items from VWM, older adults may be less able to use retrospective attention to enhance memory capacity when expectancy of impending spatial cues disrupts effective VWM maintenance. PMID:23445536
How can educators make use of augmented reality technologies and practices to enhance learning and why would we want to embrace such technologies anyway? How can an augmented reality help a learner confront, interpret and ultimately comprehend reality itself? In this article, we seek to initiate a discussion that focuses on these questions, and suggest that they be used as drivers for research into effective educational applications of augmented reality. We discuss how multi-modal, sensorial augmentation of reality links to existing theories of education and learning, focusing on ideas of cognitive dissonance and the confrontation of new realities implied by exposure to new and varied perspectives. We also discuss connections with broader debates brought on by the social and cultural changes wrought by the increased digitalisation of our lives, especially the concept of the extended mind. Rather than offer a prescription for augmentation, our intention is to throw open debate and to provoke deep thinking about what interacting with and creating an augmented reality might mean for both teacher and learner.
This qualitative design study addressed the enhancement of nursing assessment skills through the use of Visual Thinking Strategies and reflection. This study advances understanding of the use of Visual Thinking Strategies and reflection as ways to explore new methods of thinking about and observing patient situations relating to health care. Sixty nursing students in a licensed practical nursing program made up the sample of participants, who attended an art gallery as part of a class assignment. Participants completed a survey indicating interest in participating in the art gallery visit. Participants reviewed artwork at the gallery and shared observations with the larger group during a post-conference session in a gathering area of the museum at the end of the visit. A reflective exercise on the art gallery experience elicited further thoughts about the experience and demonstrated the connections made to clinical practice by the students. The findings of this study support the use of Visual Thinking Strategies and reflection as effective teaching and learning tools for enhancing nursing skills. Copyright © 2017 Elsevier Ltd. All rights reserved.
Jasani, Sona K; Saks, Norma S
Clinical observation is fundamental in practicing medicine, but these skills are rarely taught. Currently no evidence-based exercises/courses exist for medical student training in observation skills. The goal was to develop and teach a visual arts-based exercise for medical students, and to evaluate its usefulness in enhancing observation skills in clinical diagnosis. A pre- and posttest and evaluation survey were developed for a three-hour exercise presented to medical students just before starting clerkships. Students were provided with questions to guide discussion of both representational and non-representational works of art. Quantitative analysis revealed that the mean number of observations between pre- and posttests was not significantly different (n=70: 8.63 vs. 9.13, p=0.22). Qualitative analysis of written responses identified four themes: (1) use of subjective terminology, (2) scope of interpretations, (3) speculative thinking, and (4) use of visual analogies. Evaluative comments indicated that students felt the exercise enhanced both mindfulness and skills. Using visual art images with guided questions can train medical students in observation skills. This exercise can be replicated without specially trained personnel or art museum partnerships.
The purpose of the present study was to replicate and extend our original findings of enhanced neural inhibitory control in bilinguals. We compared English monolinguals to Spanish/English bilinguals on a non-linguistic, auditory Go/NoGo task while recording event-related brain potentials. New to this study was the visual Go/NoGo task, which we included to investigate whether enhanced neural inhibition in bilinguals extends from the auditory to the visual modality. Results confirmed our original findings and revealed greater inhibition in bilinguals compared to monolinguals. As predicted, compared to monolinguals, bilinguals showed increased N2 amplitude during the auditory NoGo trials, which required inhibitory control, but no differences during the Go trials, which required a behavioral response and no inhibition. Interestingly, during the visual Go/NoGo task, event-related brain potentials did not distinguish the two groups, and behavioral responses were similar between the groups regardless of task modality. Thus, only auditory trials that required inhibitory control revealed between-group differences indicative of greater neural inhibition in bilinguals. These results show that experience-dependent neural changes associated with bilingualism are specific to the auditory modality and that the N2 event-related brain potential is a sensitive marker of this plasticity.
This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet field, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3-Dimensional scheme of viewing the robotic laboratory has been introduced in addition to the remote controlling of the robots. The uniqueness of the project lies in making this process Internet-based, and remote robot operated and visualized in 3D. This 3D system approach provides the students with a more realistic feel of the 3D robotic laboratory even though they are working remotely. As a result, the 3D visualization technology has been tested as part of a laboratory in the MET 205 Robotics and Mechatronics class and has received positive feedback by most of the students. This type of research has introduced a new level of realism and visual communications to online laboratory learning in a remote classroom.
Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina
Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.
Shnitzer-Meirovich, Shlomit; Lifshitz, Hefziba; Mashal, Nira
This study is the first to investigate the effectiveness of deep and shallow intervention programs in the acquisition of visual metaphor comprehension in individuals with non-specific intellectual disability (NSID; aged 15-59, N = 53) or Down syndrome (DS; aged 15-52, N = 50). The deep intervention program was based on dynamic assessment model for enhancing analogical thinking. The shallow intervention program involves memorizing a metaphorical relationship between pairs of pictures. Visual metaphor comprehension was measured by the construction of a metaphorical connection between pairs of pictures. The results indicated that both etiology groups exhibited poor understanding of visual metaphors before the intervention. A significant improvement was observed in both interventions and both etiology groups, with greater improvement among individuals who underwent the deep processing. Moreover, the latter procedure led to greater generalization ability. The results also indicated that vocabulary contributed significantly to understanding unstudied metaphors and that participants with poorer linguistic abilities exhibited greater improvement in their metaphorical thinking. Thus, individuals with ID with or without DS are able to recruit the higher-order cognitive abilities required for visual metaphor comprehension. Copyright © 2018 Elsevier Ltd. All rights reserved.
Blacker, Kara J; Curby, Kim M
Visual short-term memory (VSTM) is critical for acquiring visual knowledge and shows marked individual variability. Previous work has illustrated a VSTM advantage among action video game players (Boot et al. Acta Psychologica 129:387-398, 2008). A growing body of literature has suggested that action video game playing can bolster visual cognitive abilities in a domain-general manner, including abilities related to visual attention and the speed of processing, providing some potential bases for this VSTM advantage. In the present study, we investigated the VSTM advantage among video game players and assessed whether enhanced processing speed can account for this advantage. Experiment 1, using simple colored stimuli, revealed that action video game players demonstrate a similar VSTM advantage over nongamers, regardless of whether they are given limited or ample time to encode items into memory. Experiment 2, using complex shapes as the stimuli to increase the processing demands of the task, replicated this VSTM advantage, irrespective of encoding duration. These findings are inconsistent with a speed-of-processing account of this advantage. An alternative, attentional account, grounded in the existing literature on the visuo-cognitive consequences of video game play, is discussed.
Elizabeth Anggraeni Amalo
The teaching of English expressions has always been done through conversation samples in the form of written texts, audio recordings, and videos. In the meantime, the development of computer-aided learning technology has made autonomous language learning possible. Games, as one product of computer-aided learning technology, can serve as a medium to provide educational content such as language teaching and learning material. The visual novel is considered a conversational game genre that is suitable to be combined with English-expressions material. Unlike other click-based visual novel games, the visual novel game in this research implements speech recognition as the interaction trigger. Hence, this paper elaborates how visual novel games are utilized to deliver English expressions with speech recognition commands for interaction. This research used the Research and Development (R&D) method with an experimental design involving control and experimental groups to measure the game's effectiveness in enhancing students' mastery of English expressions. ANOVA was utilized to test for significant differences between the control and experimental groups. It is expected that the results of this development and experiment can benefit English teaching and learning, especially regarding English expressions.
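The between-group comparison reported above is a standard one-way ANOVA. A minimal sketch of the F statistic it computes follows; the score arrays in the usage note are invented placeholders, not the study's data:

```python
import numpy as np

def one_way_anova_f(*groups):
    """F statistic of a one-way ANOVA over k independent groups.

    Returns (F, df_between, df_within); comparing F against the
    F(df_between, df_within) distribution yields the p-value.
    """
    groups = [np.asarray(g, dtype=np.float64) for g in groups]
    n_total = sum(g.size for g in groups)
    k = len(groups)
    grand_mean = np.concatenate(groups).mean()

    # Variation of group means around the grand mean, vs. variation
    # of individual scores within their own groups.
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

    df_between = k - 1
    df_within = n_total - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within
```

For example, `one_way_anova_f(control_scores, experimental_scores)` with two post-test score arrays returns the F value to check against a critical value (or `scipy.stats.f_oneway` can be used directly for the p-value).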
Shaun L Cloherty
Primates use saccadic eye movements to make gaze changes. In many visual areas, including the dorsal medial superior temporal area (MSTd of macaques, neural responses to visual stimuli are reduced during saccades but enhanced afterwards. How does this enhancement arise – from an internal mechanism associated with saccade generation or through visual mechanisms activated by the saccade sweeping the image of the visual scene across the retina? Spontaneous activity in MSTd is elevated even after saccades made in darkness, suggesting a central mechanism for post-saccadic enhancement. However, based on the timing of this effect, it may arise from a different mechanism than occurs in normal vision. Like neural responses in MSTd, initial ocular following eye speed is enhanced after saccades, with evidence suggesting both internal and visually mediated mechanisms. Here we recorded from visual neurons in MSTd and measured responses to motion stimuli presented soon after saccades and soon after simulated saccades – saccade-like displacements of the background image during fixation. We found that neural responses in MSTd were enhanced when preceded by real saccades but not when preceded by simulated saccades. Furthermore, we also observed enhancement following real saccades made across a blank screen that generated no motion signal within the recorded neurons’ receptive fields. We conclude that in MSTd the mechanism leading to post-saccadic enhancement has internal origins.
Bressler, David W.; Fortenbaugh, Francesca C.; Robertson, Lynn C.; Silver, Michael A.
Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. PMID:23562388
Bentwich, Miriam Ethel; Gilbey, Peter
Comfort with ambiguity, mostly associated with the acceptance of multiple meanings, is a core characteristic of successful clinicians. Yet past studies indicate that medical students and junior physicians feel uncomfortable with ambiguity. Visual Thinking Strategies (VTS) is a pedagogic approach involving discussions of artworks and deciphering the different possible meanings entailed in them. However, the contribution of art to the possible enhancement of tolerance for ambiguity among medical students has not yet been adequately investigated. We aimed to offer a novel perspective on the effect of art, as experienced through VTS, on medical students' tolerance of ambiguity and its possible relation to empathy. A quantitative method was used, utilizing a short survey administered after an interactive VTS session conducted within a mandatory medical humanities course for first-year medical students. The intervention consisted of a 90-min session in the form of a combined lecture and interactive discussions about art images. The VTS session and survey were completed by 67 students in two consecutive rounds of first-year students. 67% of the respondents thought that the intervention contributed to their acceptance of multiple possible meanings, 52% thought their visual observation ability was enhanced, and 34% thought that their ability to feel the suffering of others was enhanced. Statistically significant moderate-to-high correlations were found between the contribution to ambiguity tolerance and the contribution to empathy (0.528-0.744; p ≤ 0.01). Art may contribute especially to the development of medical students' tolerance of ambiguity, which is also related to the enhancement of empathy. The potential contribution of visual artworks used in VTS to the enhancement of tolerance for ambiguity and empathy is explained based on relevant literature regarding the embeddedness of ambiguity within artworks, coupled with reference to John Dewey's theory of learning. Given the
Brandt, Harald; Nielsen, Birgitte Lund; Georgsen, Marianne
Augmented reality (AR) holds great promise as a learning tool. So far, however, most research has looked at the technology itself, and AR has been used primarily for commercial purposes. As a learning tool, AR supports an inquiry-based approach to science education with a high level of student involvement. The AR-sci project (Augmented Reality for SCIence education) addresses the issue of applying augmented reality in developing innovative science education and enhancing the quality of science teaching and learning.
Ariga, Taeko; Watanabe, Takashi; Otani, Toshio; Masuzawa, Toshimitsu
This study proposes a basic learning program for enhancing visual literacy using an original Web content management system (Web CMS) to share students' outcomes in class as a blog post. It seeks to reinforce students' understanding and awareness of the design of visual content. The learning program described in this research focuses on to address…
Kofoed, Lise B.; Reng, Lars
The technical subjects chosen are within programming, using image-processing algorithms as a means to provide direct visual feedback for learning basic C/C++. The pedagogical approach is within a PBL framework and is based on dialogue and collaborative learning. At the same time, the intention was to establish a community of practice among the students and the teachers. Direct visual feedback and a higher level of merging between the artistic, creative, and technical lectures have been the focus of motivation, as well as a complete restructuring of the elements of the technical lectures. The paper … abilities and enhanced balance between the interdisciplinary disciplines of the study are analyzed. The conclusion is that the technical courses have gained a higher status for the students. The students now see them as a very important basis for their further study, and their learning results have improved…
Tieri, Gaetano; Gioia, Annamaria; Scandola, Michele; Pavone, Enea F; Aglioti, Salvatore M
To explore the link between Sense of Embodiment (SoE) over a virtual hand and physiological regulation of skin temperature, 24 healthy participants were immersed in virtual reality through a Head Mounted Display and had their real limb temperature recorded by means of a high-sensitivity infrared camera. Participants observed a virtual right upper limb (appearing either normally, or with the hand detached from the forearm) or limb-shaped non-corporeal control objects (continuous or discontinuous wooden blocks) from a first-person perspective. Subjective ratings of SoE were collected in each observation condition, as well as temperatures of the right and left hand, wrist and forearm. The observation of these complex, body and body-related virtual scenes resulted in increased real hand temperature when compared to a baseline condition in which a 3d virtual ball was presented. Crucially, observation of non-natural appearances of the virtual limb (discontinuous limb) and limb-shaped non-corporeal objects elicited high increase in real hand temperature and low SoE. In contrast, observation of the full virtual limb caused high SoE and low temperature changes in the real hand with respect to the other conditions. Interestingly, the temperature difference across the different conditions occurred according to a topographic rule that included both hands. Our study sheds new light on the role of an external hand's visual appearance and suggests a tight link between higher-order bodily self-representations and topographic regulation of skin temperature. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Marques, Luís.; Roca Cladera, Josep; Tenedório, José António
The use of multiple sets of images with a high level of overlap to extract 3D point clouds has increased progressively in recent years. Two fundamental factors lie at the origin of this progress. First, image-matching algorithms have been optimised, and the software that supports these techniques has been under constant development. Second, the emergent paradigm of smart cities has been promoting the virtualization of urban spaces and their elements. The creation of 3D models of urban elements is extremely relevant for urbanists: it allows digital archives of urban elements to be constituted and is especially useful for enriching maps and databases, reconstructing and analysing objects/areas through time, building and recreating scenarios, and implementing intuitive methods of interaction. These characteristics support, for example, greater public participation, creating a fully collaborative solution system for envisioning processes, simulations and results. This paper is organized around two main topics. The first deals with the technical modelling of data obtained from terrestrial photographs: planning criteria for obtaining photographs, approving or rejecting photos based on their quality, editing photos, creating masks, aligning photos, generating tie points, extracting point clouds, generating meshes, building textures and exporting results. The application of these procedures results in 3D models for the visualization of urban elements of the city of Barcelona. The second concerns the use of Augmented Reality on mobile platforms, allowing users to understand the city's origins and their relation to the present urban morphology, (en)visioning solutions, processes and simulations, and making it possible for agents in several domains to ground their decisions (and understand them), achieving a faster and wider consensus.
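The geometric core of the point-cloud extraction step described above (recovering a 3D point from matched features in two overlapping photographs) can be sketched with standard linear triangulation (DLT). The camera matrices and pixel coordinates below are synthetic stand-ins for illustration, not data from the Barcelona survey:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation: solve A X = 0 for the homogeneous
    3D point X observed at pixel x1 by camera P1 and x2 by camera P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null-space vector of A
    return X[:3] / X[3]             # dehomogenise

# Two synthetic cameras sharing intrinsics K; the second is shifted 1 unit along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0])            # a 3D point in front of both cameras
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]

X_est = triangulate_point(P1, P2, x1, x2)
print(X_est)                                   # close to X_true
```

In a full photogrammetric pipeline this step runs over thousands of matched feature pairs, with camera poses themselves estimated by bundle adjustment rather than given in advance.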
Vávra, P.; Zonča, P.; Ihnát, P.; El-Gendi, A.
Introduction. The development of augmented reality devices allows physicians to incorporate data visualization into diagnostic and treatment procedures to improve work efficiency, safety, and cost, and to enhance surgical training. However, awareness of the possibilities of augmented reality is generally low. This review evaluates whether augmented reality can presently improve the results of surgical procedures. Methods. We performed a review of available literature dating from 2010 to November 2016 by searching PubMed and Scopus using the terms “augmented reality” and “surgery.” Results. The initial search yielded 808 studies. After removing duplicates and including only journal articles, a total of 417 studies were identified. By reading the abstracts, 91 relevant studies were chosen for inclusion; 11 further references were gathered by cross-referencing. A total of 102 studies were included in this review. Conclusions. The present literature suggests increasing interest among surgeons in employing augmented reality in surgery, leading to improved safety and efficacy of surgical procedures. Many studies showed that the performance of newly devised augmented reality systems is comparable to traditional techniques. However, several problems need to be addressed before augmented reality can be implemented into routine practice. PMID:29065604
Ricciardi, Emiliano; Handjaras, Giacomo; Bernardi, Giulio; Pietrini, Pietro; Furey, Maura L
Enhancing cholinergic function improves performance on various cognitive tasks and alters neural responses in task specific brain regions. We have hypothesized that the changes in neural activity observed during increased cholinergic function reflect an increase in neural efficiency that leads to improved task performance. The current study tested this hypothesis by assessing neural efficiency based on cholinergically-mediated effects on regional brain connectivity and BOLD signal variability. Nine subjects participated in a double-blind, placebo-controlled crossover fMRI study. Following an infusion of physostigmine (1 mg/h) or placebo, echo-planar imaging (EPI) was conducted as participants performed a selective attention task. During the task, two images comprised of superimposed pictures of faces and houses were presented. Subjects were instructed periodically to shift their attention from one stimulus component to the other and to perform a matching task using hand held response buttons. A control condition included phase-scrambled images of superimposed faces and houses that were presented in the same temporal and spatial manner as the attention task; participants were instructed to perform a matching task. Cholinergic enhancement improved performance during the selective attention task, with no change during the control task. Functional connectivity analyses showed that the strength of connectivity between ventral visual processing areas and task-related occipital, parietal and prefrontal regions reduced significantly during cholinergic enhancement, exclusively during the selective attention task. Physostigmine administration also reduced BOLD signal temporal variability relative to placebo throughout temporal and occipital visual processing areas, again during the selective attention task only. Together with the observed behavioral improvement, the decreases in connectivity strength throughout task-relevant regions and BOLD variability within stimulus
Today we can obtain, simply and rapidly, most of the information that we need. Devices such as personal computers and mobile phones enable access to information in different formats (written, pictorial, audio or video) whenever and wherever. Daily we use and encounter information that can be seen as virtual objects, or objects that are part of the virtual world of computers. Everyone, at least once, has wanted to bring these virtual objects from the virtual world of computers into real environments and thus mix the virtual and real worlds. In such a mixed reality, real and virtual objects coexist in the same environment. The reality in which users view and use the real environment upgraded with virtual objects is called augmented reality. In this article we describe the main properties of augmented reality. In addition to the basic properties that define a reality as augmented reality, we present the various building elements (possible hardware and software) that provide an insight into such a reality, and practical applications of augmented reality. The applications are divided into three groups depending on the information and functions that augmented reality offers: help, guidance and simulation.
Richard Chiou; Yongjin (James) Kwon; Tzu-Liang (Bill) Tseng; Robin Kizirian; Yueh-Ting Yang
This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet field, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3-dimensional scheme of viewing the robotic laboratory has been introduced in addition to the remote c...
B. S. Rajaratnam
This randomised, controlled and double-blinded pilot study evaluated whether interactive virtual reality balance-related games integrated within conventional rehabilitation (CR) sessions resulted in superior retraining of dynamic balance compared to CR alone after stroke. 19 subjects diagnosed with a recent episode of stroke were recruited from a local rehabilitation hospital and randomly assigned to either a control or an experimental group. Subjects in the control group underwent 60 minutes of conventional rehabilitation, while those in the experimental group underwent 40 minutes of conventional rehabilitation and 20 minutes of self-directed virtual reality balance rehabilitation. Functional Reach Test, Timed Up and Go, Modified Barthel Index, Berg Balance Scale, and Centre of Pressure of subjects in both groups were evaluated before and on completion of the rehabilitation sessions. Results indicate that the inclusion of interactive virtual reality balance-related games within conventional rehabilitation can lead to improved functional mobility and balance after a recent episode of stroke without increasing treatment time that requires more health professional manpower.
Memel, Molly; Ryan, Lee
The ability to remember associations between previously unrelated pieces of information is often impaired in older adults (Naveh-Benjamin, 2000). Unitization, the process of creating a perceptually or semantically integrated representation that includes both items in an associative pair, attenuates age-related associative deficits (Bastin et al., 2013; Ahmad et al., 2015; Zheng et al., 2015). Compared to non-unitized pairs, unitized pairs may rely less on hippocampally-mediated binding associated with recollection, and more on familiarity-based processes mediated by perirhinal cortex (PRC) and parahippocampal cortex (PHC). While unitization of verbal materials improves associative memory in older adults, less is known about the impact of visual integration. The present study determined whether visual integration improves associative memory in older adults by minimizing the need for hippocampal (HC) recruitment and shifting encoding to non-hippocampal medial temporal structures, such as the PRC and PHC. Young and older adults were presented with a series of objects paired with naturalistic scenes while undergoing fMRI scanning, and were later given an associative memory test. Visual integration was varied by presenting the object either next to the scene (Separated condition) or visually integrated within the scene (Combined condition). Visual integration improved associative memory among young and older adults to a similar degree by increasing the hit rate for intact pairs, but without increasing false alarms for recombined pairs, suggesting enhanced recollection rather than increased reliance on familiarity. Also contrary to expectations, visual integration resulted in increased hippocampal activation in both age groups, along with increases in PRC and PHC activation. Activation in all three MTL regions predicted discrimination performance during the Separated condition in young adults, while only a marginal relationship between PRC activation and performance was
Grossmann, Rafael J
The new term improved reality (i-Reality) is suggested to include virtual reality (VR) and augmented reality (AR). It refers to a real world that includes improved, enhanced and digitally created features that would offer an advantage on a particular occasion (e.g., a medical act). I-Reality may help us bridge the gap between the high demand for medical providers and the low supply of them by improving the interaction between providers and patients.
Tietyen, Ann C; Richards, Allan G
A new and innovative pedagogical approach that administers hands-on visual arts activities to persons with dementia based on the field of Visual Arts Education is reported in this paper. The aims of this approach are to enhance cognition and improve quality of life. These aims were explored in a small qualitative study with eight individuals with moderate dementia, and the results are published as a thesis. In this paper, we summarize and report the results of this small qualitative study and expand upon the rationale for the Visual Arts Education pedagogical approach that has shown promise for enhancing cognitive processes and improving quality of life for persons with dementia.
Tomita, Masaaki; Minematsu, Kazuo; Choki, Junichiro; Yamaguchi, Takenori [National Cardiovascular Center, Suita, Osaka (Japan)
A 77-year-old woman with a history of valvular heart disease, atrial fibrillation and a massive infarction in the right cerebral hemisphere developed contralateral infarction due to occlusion of the internal carotid artery. A string-like structure with higher density than normal brain was demonstrated on non-enhanced computed tomography performed in the acute stage. This abnormal structure, seen in the left hemisphere, was thought to be consistent with the middle cerebral artery trunk of the affected side. Seventeen days after onset, the abnormal structure was no longer visualized on non-enhanced CT. These findings suggested that the abnormal structure with increased density was compatible with a thromboembolus or intraluminal clot formed in the distal part of the occluded internal carotid artery. The importance of this finding as a diagnostic sign of cerebral arterial occlusion is discussed.
Sanada, Motoyuki; Ikeda, Koki; Kimura, Kenta; Hasegawa, Toshikazu
Motivation is well known to enhance working memory (WM) capacity, but the mechanism underlying this effect remains unclear. The WM process can be divided into encoding, maintenance, and retrieval, and in a change detection visual WM paradigm, the encoding and retrieval processes can be subdivided into perceptual and central processing. To clarify which of these segments are most influenced by motivation, we measured ERPs in a change detection task with differential monetary rewards. The results showed that the enhancement of WM capacity under high motivation was accompanied by modulations of late central components but not those reflecting attentional control on perceptual inputs across all stages of WM. We conclude that the "state-dependent" shift of motivation impacted the central, rather than the perceptual functions in order to achieve better behavioral performances. Copyright © 2013 Society for Psychophysiological Research.
Parsons, T. D.; Riva, G.; Parsons, S. J.; Mantovani, F.; Newbutt, N.; Lin, L.; Venturini, E.; Hall, T.
Virtual reality technologies allow for controlled simulations of affectively engaging background narratives. These virtual environments offer promise for enhancing emotionally relevant experiences and social interactions. Within this context virtual reality can allow instructors, therapists, neuropsychologists, and service providers to offer safe, repeatable, and diversifiable interventions that can benefit assessments and learning in both typically developing children and children with disab...
Walsh, V; Ellison, A; Battelli, L; Cowey, A
Transcranial magnetic stimulation (TMS) can be used to simulate the effects of highly circumscribed brain damage permanently present in some neuropsychological patients, by reversibly disrupting the normal functioning of the cortical area to which it is applied. By using TMS we attempted to recreate deficits similar to those reported in a motion-blind patient and to assess the specificity of deficits when TMS is applied over human area V5. We used six visual search tasks and showed that subjects were impaired in a motion but not a form 'pop-out' task when TMS was applied over V5. When motion was present, but irrelevant, or when attention to colour and form were required, TMS applied to V5 enhanced performance. When attention to motion was required in a motion-form conjunction search task, irrespective of whether the target was moving or stationary, TMS disrupted performance. These data suggest that attention to different visual attributes involves mutual inhibition between different extrastriate visual areas.
Communicative Positioning Program/Text Representation Systems (CPP-TRS) is a visual language based on a system of 12 canvasses, 10 signals and 14 symbols. CPP-TRS rests on the fact that every communication action is the result of a set of cognitive processes, and the whole system is built on the concept that communication can be enhanced by visually perceiving text. With a simple syntax, CPP-TRS is capable of representing meaning and intention, as well as communication functions, visually. These are precisely the invisible aspects of natural language that are most relevant to grasping the global meaning of a text. CPP-TRS reinforces natural language in human-machine interaction systems. It complements natural language by adding certain important elements that are not represented by natural language by itself. These include the communication intention and function of the text expressed by the sender, as well as the role the reader is supposed to play. The communication intention and function of a text and the reader's role are invisible in natural language because neither specific words nor punctuation conveys them sufficiently and unambiguously; they are therefore non-transparent.
van de Kamp, Marie-Thérèse; Admiraal, Wilfried; van Drie, Jannet; Rijlaarsdam, Gert
The main purposes of visual arts education concern the enhancement of students' creative processes and the originality of their art products. Divergent thinking is crucial for finding original ideas in the initial phase of a creative process that aims to result in an original product. This study aims to examine the effects of explicit instruction of meta-cognition on students' divergent thinking. A quasi-experimental design was implemented with 147 secondary school students in visual arts education. In the experimental condition, students attended a series of regular lessons with assignments on art reception and production, and they attended one intervention lesson with explicit instruction of meta-cognition. In the control condition, students attended a series of regular lessons only. Pre-tests and post-tests measured fluency, flexibility, and originality as indicators of divergent thinking. Explicit instruction of meta-cognitive knowledge had a positive effect on fluency and flexibility, but not on originality. This study implies that in the domain of visual arts, instructional support in building up meta-cognitive knowledge about divergent thinking may improve students' creative processes. This study also discusses possible reasons for the demonstrated lack of effect for originality. © 2014 The British Psychological Society.
Skov, Kirsten; Bahn, Anne Louise
The project's basic idea is the development of visual, aesthetic learning with Augmented Reality, where the intention is to contribute concrete investigations and explorations of the concept of Augmented Reality – including the coupling between the analogue and the digital in relation to learning, multimodality and IT...
Bahmani, Moslem; Wulf, Gabriele; Ghadiri, Farhad; Karimi, Saeed; Lewthwaite, Rebecca
In a recent study by Chauvel, Wulf, and Maquestiaux (2015), golf putting performance was found to be affected by the Ebbinghaus illusion. Specifically, adult participants demonstrated more effective learning when they practiced with a hole that was surrounded by small circles, making it look larger, than when the hole was surrounded by large circles, making it look smaller. The present study examined whether this learning advantage would generalize to children, who are assumed to be less sensitive to the visual illusion. Two groups of 10-year-olds practiced putting golf balls from a distance of 2 m, with perceived larger or smaller holes resulting from the visual illusion. Self-efficacy was increased in the group with the perceived larger hole. The latter group also demonstrated more accurate putting performance during practice. Importantly, learning (i.e., delayed retention performance without the illusion) was enhanced in the group that practiced with the perceived larger hole. The findings replicate previous results with adult learners and are in line with the notion that enhanced performance expectancies are key to optimal motor learning (Wulf & Lewthwaite, 2016). Copyright © 2017 Elsevier B.V. All rights reserved.
Luo, Teng; Lu, Yuan; Liu, Shaoxiong; Lin, Danying; Qu, Junle
The phasor approach to fluorescence lifetime imaging microscopy (FLIM) is used to identify different types of tissues from hematoxylin and eosin (H&E) stained basal cell carcinoma (BCC) sections. The results suggest that working directly in phasor space with a clustering assignment achieves immunofluorescence-like simultaneous five- or six-color imaging by using the multiplexed fluorescence lifetimes of H&E. The phasor approach is particularly effective for enhanced visualization of the abnormal morphology of a suspected nidus. Moreover, the phasor approach to H&E FLIM data can determine the actual paths, or infiltrating trajectories, of basophils and immune cells associated with preneoplastic or neoplastic skin lesions. The integration of the phasor approach with routine histology demonstrated its value for skin cancer prevention and early detection. We therefore believe that phasor analysis of H&E tissue sections is an enhanced visualization tool with the potential to simplify the preparation process of special staining and serve as color-contrast-aided imaging in clinical pathological examination.
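The phasor transform underlying this approach maps each pixel's fluorescence decay to first-harmonic Fourier coordinates (g, s); mono-exponential decays land on the universal semicircle. A minimal sketch with a synthetic decay (the lifetime and repetition rate are illustrative, not taken from the study):

```python
import numpy as np

def phasor(decay, dt, f):
    """First-harmonic phasor coordinates (g, s) of a decay histogram.
    decay: photon counts per time bin; dt: bin width (s); f: laser rep. rate (Hz)."""
    t = (np.arange(decay.size) + 0.5) * dt     # bin centres
    w = 2 * np.pi * f
    total = decay.sum()
    g = (decay * np.cos(w * t)).sum() / total
    s = (decay * np.sin(w * t)).sum() / total
    return g, s

# Synthetic mono-exponential decay: lifetime tau = 2 ns, 80 MHz repetition rate.
tau, f, dt = 2e-9, 80e6, 1e-11
t = (np.arange(1250) + 0.5) * dt               # one 12.5 ns laser period
decay = np.exp(-t / tau)

g, s = phasor(decay, dt, f)

# A mono-exponential decay lies on the universal semicircle:
#   g = 1 / (1 + (w*tau)^2),  s = w*tau / (1 + (w*tau)^2)
w = 2 * np.pi * f
print(g, s)
```

Pixels whose (g, s) coordinates cluster together share similar decay kinetics, which is what allows the clustering assignment described above to separate tissue types without fitting exponential models.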
Livingstone, Ashley C; Christie, Gregory J; Wright, Richard D; McDonald, John J
Irrelevant visual cues capture attention when they possess a task-relevant feature. Electrophysiologically, this contingent capture of attention is evidenced by the N2pc component of the visual event-related potential (ERP) and an enlarged ERP positivity over the occipital hemisphere contralateral to the cued location. The N2pc reflects an early stage of attentional selection, but presently it is unclear what the contralateral ERP positivity reflects. One hypothesis is that it reflects the perceptual enhancement of the cued search-array item; another hypothesis is that it is time-locked to the preceding cue display and reflects active suppression of the cue itself. Here, we varied the time interval between a cue display and a subsequent target display to evaluate these competing hypotheses. The results demonstrated that the contralateral ERP positivity is tightly time-locked to the appearance of the search display rather than the cue display, thereby supporting the perceptual enhancement hypothesis and disconfirming the cue-suppression hypothesis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Berent, Gerald P.; Kelly, Ronald R.; Schmitz, Kathryn L.; Kenney, Patricia
This study explored the efficacy of visual input enhancement, specifically "essay enhancement", for facilitating deaf college students' improvement in English grammatical knowledge. Results documented students' significant improvement immediately after a 10-week instructional intervention, a replication of recent research. Additionally, the…
Macready, Hugh; Kim, Jinman; Feng, David; Cai, Weidong
Dual-modality imaging scanners combining functional PET and anatomical CT constitute a challenge for volumetric visualization, which can be limited by high computational demand and expense. This study aims to provide physicians with multi-dimensional visualization tools for navigating and manipulating the data on a consumer PC. We have maximized the utilization of the pixel-shader architecture of low-cost graphics hardware and texture-based volume rendering to provide visualization tools with a high degree of interactivity. All the software was developed using OpenGL and Silicon Graphics Inc. Volumizer, and tested on a Pentium mobile CPU in a PC notebook with 64 MB of graphics memory. We render the individual modalities separately and perform real-time per-voxel fusion. We designed a novel "alpha-spike" transfer function to interactively identify structures of interest in the volume rendering of PET/CT. This works by assigning a non-linear opacity to the voxels, allowing the physician to selectively eliminate or reveal information from the PET/CT volumes. As the PET and CT are rendered independently, manipulations can be applied to the individual volumes: for instance, applying the transfer function to CT to reveal the lung boundary while adjusting the fusion ratio between the CT and PET to enhance the contrast of a tumour region, with the resulting manipulated data sets fused together in real time as the adjustments are made. In addition to conventional navigation and manipulation tools, such as scaling, LUT, volume slicing, and others, our strategy permits efficient visualization of PET/CT volume rendering, which can potentially aid interpretation and diagnosis.
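An "alpha-spike" transfer function of the kind described above can be approximated as a narrow, non-linear opacity peak around a user-chosen intensity. The following is a minimal sketch with hypothetical parameter names, not the authors' implementation:

```python
import numpy as np

def alpha_spike(intensity, center, width, peak_alpha=1.0, base_alpha=0.0):
    """Map normalised voxel intensity (0..1) to opacity: a narrow triangular
    'spike' of opacity around `center`, transparent elsewhere. Widening `width`
    reveals more surrounding structure; narrowing it isolates the target."""
    d = np.abs(np.asarray(intensity, dtype=float) - center)
    spike = np.clip(1.0 - d / width, 0.0, 1.0)
    return base_alpha + (peak_alpha - base_alpha) * spike

# Example: make only voxels near intensity 0.6 opaque
# (e.g. isolating a high-uptake region in the PET volume).
voxels = np.array([0.1, 0.55, 0.6, 0.65, 0.9])
opacity = alpha_spike(voxels, center=0.6, width=0.1)
print(opacity)
```

In a texture-based renderer this mapping would typically be baked into a lookup table or evaluated per-fragment in a pixel shader, one instance per modality, before the per-voxel fusion step.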
Foley, Nicholas C; Jangraw, David C; Peck, Christopher; Gottlieb, Jacqueline
Novelty modulates sensory and reward processes, but it remains unknown how these effects interact, i.e., how the visual effects of novelty are related to its motivational effects. A widespread hypothesis, based on findings that novelty activates reward-related structures, is that all the effects of novelty are explained in terms of reward. According to this idea, a novel stimulus is by default assigned high reward value and hence high salience, but this salience rapidly decreases if the stimulus signals a negative outcome. Here we show that, contrary to this idea, novelty affects visual salience in the monkey lateral intraparietal area (LIP) in ways that are independent of expected reward. Monkeys viewed peripheral visual cues that were novel or familiar (received few or many exposures) and predicted whether the trial would have a positive or a negative outcome--i.e., end in a reward or a lack of reward. We used a saccade-based assay to detect whether the cues automatically attracted or repelled attention from their visual field location. We show that salience--measured in saccades and LIP responses--was enhanced by both novelty and positive reward associations, but these factors were dissociable and habituated on different timescales. The monkeys rapidly recognized that a novel stimulus signaled a negative outcome (and withheld anticipatory licking within the first few presentations), but the salience of that stimulus remained high for multiple subsequent presentations. Therefore, novelty can provide an intrinsic bonus for attention that extends beyond the first presentation and is independent of physical rewards. Copyright © 2014 the authors.
Orzechowski, M.A.; Timmermans, H.J.P.; Vries, de B.; Timmermans, H.J.P.; Vries, de B.
This paper describes Virtual Reality as an environment for collecting information about user satisfaction. Because Virtual Reality (VR) allows visualization with added interactivity, this form of representation has particular advantages when presenting new designs. The paper reports on the development
Finger, T; Schaumann, A; Schulz, M; Thomale, Ulrich-W
Individual planning of the entry point and the use of navigation have become more relevant in intraventricular neuroendoscopy. Navigated neuroendoscopic solutions are continuously improving. We describe experimentally measured accuracy and our first experience with augmented reality-enhanced navigated neuroendoscopy for intraventricular pathologies. Augmented reality-enhanced navigated endoscopy was tested for accuracy in an experimental setting. A 3D-printed head model with a right parietal lesion was scanned with thin-slice computed tomography. Segmentation of the tumor lesion was performed using Scopis NovaPlan navigation software. An optical reference matrix is used to register the neuroendoscope's geometry and its field of view. The pre-planned ROI and trajectory are superimposed on the endoscopic image. Accuracy was assessed by measuring the deviation of the midpoint of the superimposed contour from the midpoint of the endoscopically visualized lesion. The technique was subsequently used in 29 cases with CSF circulation pathologies. Navigation planning included defining the entry points, regions of interest and trajectories, superimposed as augmented reality on the endoscopic video screen during the intervention. Patients were evaluated for postoperative imaging, reoperations, and possible complications. The experimental setup revealed a deviation of the ROI's midpoint from the real target of 1.2 ± 0.4 mm. The clinical series included 18 cyst fenestrations, ten biopsies, seven endoscopic third ventriculostomies, six stent placements, and two shunt implantations, in some patients in combination. In cases of cyst fenestration, the cyst volume was significantly reduced postoperatively in all patients, by a mean of 47%. In biopsies, the diagnostic yield was 100%. Reoperations during a follow-up period of 11.4 ± 10.2 months were necessary in two cases. Complications included one postoperative hygroma and one insufficient
In health sciences education, there is growing evidence that simulation improves learners’ safety, competence, and skills, especially when compared to traditional didactic methods or no simulation training. However, this approach to simulation becomes difficult when students are studying at a distance, leading to the need to develop simulations that suit this pedagogical problem and the logistics of this intervention method. This paper describes the use of a design-based research (DBR) methodology, combined with a new model for putting ‘pedagogy before technology’ when approaching these types of education problems, to develop a mixed reality education solution. This combined model is used to analyse a classroom learning problem in paramedic health sciences with respect to student evidence, assisting the educational designer in identifying a solution and subsequently developing a technology-based mixed reality simulation via a mobile phone application and three-dimensional (3D) printed tools, providing an analogue approximation of an on-campus simulation experience. The developed intervention was tested with students and refined through a repeat of the process, showing that a DBR process, supported by a model that puts ‘pedagogy before technology’, can produce over several iterations a much-improved simulation that satisfies student pedagogical needs.
Full Text Available Synesthesia entails a special kind of sensory perception, where stimulation in one sensory modality leads to an internally generated perceptual experience of another, non-stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as here the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed changed multimodal integration, thus suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task in combination with visually or audio-visually presented animate and inanimate objects in an audio-visually congruent or incongruent manner. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with the best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found an enhanced amplitude of the N1 component over occipital electrode sites for synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.
Rao, Donepudi V.; Swapna, Medasani; Cesareo, Roberto; Brunetti, Antonio; Zhong, Zhong; Akatsuka, Takao; Yuasa, Tetsuya; Takeda, Tohoru; Gigante, Giovanni E.
Images of terrestrial and marine invertebrates (snails and bivalves) have been obtained by using an X-ray phase-contrast imaging technique, namely, synchrotron-based diffraction-enhanced imaging. Synchrotron X-rays of 20, 30 and 40 keV were used, which penetrate deep enough into animal soft tissues. The phase of X-ray photons shifts slightly as they traverse an object, such as animal soft tissue, and interact with its atoms. Biological features, such as shell morphology and animal physiology, have been visualized. The contrast of the images obtained at 40 keV is the best. This optimum energy provided a clear view of the internal structural organization of the soft tissue with better contrast. The contrast is higher at edges of internal soft-tissue structures. The image improvements achieved with the diffraction-enhanced imaging technique are due to extinction, i.e., elimination of ultra-small-angle scattering. They enabled us to identify a few embedded internal shell features, such as the origin of the apex, which is the firmly attached region of the soft tissue connecting the umbilicus to the external morphology. Diffraction-enhanced imaging can provide high-quality images of soft tissues valuable for biology.
Rao, Donepudi V., E-mail: email@example.com [Istituto di Matematica e Fisica, Universita degli Studi di Sassari, Via Vienna 2, 07100 Sassari (Italy); Swapna, Medasani, E-mail: firstname.lastname@example.org [Istituto di Matematica e Fisica, Universita degli Studi di Sassari, Via Vienna 2, 07100 Sassari (Italy); Cesareo, Roberto; Brunetti, Antonio [Istituto di Matematica e Fisica, Universita degli Studi di Sassari, Via Vienna 2, 07100 Sassari (Italy); Zhong, Zhong [National Synchrotron Light Source, Brookhaven National Laboratory, Upton, NY 11973 (United States); Akatsuka, Takao; Yuasa, Tetsuya [Department of Bio-System Engineering, Faculty of Engineering, Yamagata University, Yonezawa-shi, Yamagata-992-8510 (Japan); Takeda, Tohoru [Allied Health Science, Kitasato University 1-15-1 Kitasato, Sagamihara, Kanagawa 228-8555 (Japan); Gigante, Giovanni E. [Dipartimento di Fisica, Universita di Roma, La Sapienza, 00185 Roma (Italy)
James Stuart Wolffsohn
Full Text Available AIM: To develop a short, enhanced functional ability Quality of Vision (faVIQ) instrument based on previous questionnaires, employing comprehensive modern statistical techniques to ensure the use of an appropriate response scale, items, and scoring of the vision-related difficulties experienced by patients with visual impairment. METHODS: Items in current quality-of-life questionnaires for the visually impaired were refined by a multi-professional group and visually impaired focus groups. The resulting 76 items were completed by 293 visually impaired patients with stable vision on two occasions separated by a month. The faVIQ scores of 75 patients with no ocular pathology were compared to 75 age- and gender-matched patients with visual impairment. RESULTS: Rasch analysis reduced the faVIQ items to 27. Correlation to standard visual metrics was moderate (r=0.32-0.46) and to the NEI-VFQ was 0.48. The faVIQ was able to clearly discriminate between age- and gender-matched populations with no ocular pathology and visual impairment, with an index of 0.983 and 95% sensitivity and 95% specificity using a cut-off of 29. CONCLUSION: The faVIQ allows sensitive assessment of quality-of-life in the visually impaired and should support studies which evaluate the effectiveness of low vision rehabilitation services.
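The reported 95% sensitivity and 95% specificity at a cut-off of 29 follow from a simple threshold rule. A minimal sketch with made-up scores (the assumption that higher faVIQ scores indicate greater visual difficulty is ours, not stated above):

```python
import numpy as np

def sens_spec(impaired_scores, healthy_scores, cutoff=29):
    """Sensitivity and specificity of a questionnaire cut-off.
    Scores at or above the cut-off are classified as visually
    impaired (score direction is an assumption for illustration)."""
    impaired = np.asarray(impaired_scores)
    healthy = np.asarray(healthy_scores)
    sensitivity = np.mean(impaired >= cutoff)   # true-positive rate
    specificity = np.mean(healthy < cutoff)     # true-negative rate
    return sensitivity, specificity

# illustrative synthetic scores, not study data
sens, spec = sens_spec([45, 60, 33, 29, 51], [3, 11, 20, 27, 14])
```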
Yiasemidou, Marina; Glassman, Daniel; Mushtaq, Faisal; Athanasiou, Christos; Mon-Williams, Mark; Jayne, David; Miskovic, Danilo
Evidence suggests that Mental Practice (MP) could be used to finesse surgical skills. However, MP is cognitively demanding and may be dependent on the ability of individuals to produce mental images. In this study, we hypothesised that the provision of interactive 3D visual aids during MP could facilitate surgical skill performance. 20 surgical trainees were case-matched to one of three different preparation methods prior to performing a simulated Laparoscopic Cholecystectomy (LC). Two intervention groups underwent a 25-minute MP session: one with interactive 3D visual aids depicting the relevant surgical anatomy (3D-MP group, n = 5) and one without (MP-Only, n = 5). A control group (n = 10) watched a didactic video of a real LC. Scores relating to technical performance and safety were recorded by a surgical simulator. The Control group took longer to complete the procedure relative to the 3D-MP condition (p = .002). The number of movements was also statistically different across groups (p = .001), with the 3D-MP group making fewer movements relative to controls (p = .001). Likewise, the control group moved further in comparison to the 3D-MP condition and the MP-Only condition (p = .004). No reliable differences were observed for safety metrics. These data provide evidence for the potential value of MP in improving performance. Furthermore, they suggest that 3D interactive visual aids during MP could potentially enhance performance beyond the benefits of MP alone. These findings pave the way for future RCTs on surgical preparation and performance.
Yan Naing Aye
Full Text Available The intelligent handheld instrument, ITrem2, enhances manual positioning accuracy by cancelling erroneous hand movements and, at the same time, provides automatic micromanipulation functions. Visual data are acquired from a high-speed monovision camera attached to the optical surgical microscope, and acceleration measurements are acquired from the inertial measurement unit (IMU) on board ITrem2. Tremor estimation and cancelling is implemented via a Band-limited Multiple Fourier Linear Combiner (BMFLC) filter. The piezoelectric-actuated micromanipulator in ITrem2 generates the 3D motion to compensate for erroneous hand motion. Preliminary bench-top 2-DOF experiments have been conducted. The error motion simulated by a motion stage was reduced by 67% for multiple-frequency oscillatory motions and 56.16% for pre-conditioned recorded physiological tremor.
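The BMFLC filter described above models tremor as a weighted sum of sines and cosines on a fixed frequency grid spanning the tremor band, with weights adapted by a least-mean-squares (LMS) rule. A minimal single-axis sketch (band, frequency step, and gain are illustrative values, not the instrument's parameters):

```python
import numpy as np

def bmflc(signal, fs, f_lo=6.0, f_hi=14.0, df=0.5, mu=0.01):
    """Band-limited Multiple Fourier Linear Combiner (BMFLC):
    adaptively estimates a band-limited oscillation (e.g. physiological
    tremor) so it can be subtracted from the measured motion."""
    freqs = np.arange(f_lo, f_hi + df, df)        # frequency grid
    w = np.zeros(2 * len(freqs))                  # adaptive weights
    estimate = np.zeros(len(signal))
    for k, s in enumerate(signal):
        t = k / fs
        x = np.concatenate([np.sin(2 * np.pi * freqs * t),
                            np.cos(2 * np.pi * freqs * t)])
        y = w @ x                                 # current tremor estimate
        w += 2 * mu * (s - y) * x                 # LMS weight update
        estimate[k] = y
    return estimate

# usage: cancel a synthetic 8 Hz "tremor" sampled at 250 Hz
fs = 250.0
t = np.arange(0, 4, 1 / fs)
tremor = 0.5 * np.sin(2 * np.pi * 8 * t)
residual = tremor - bmflc(tremor, fs)
```

After the weights converge, the residual (compensated) motion is a small fraction of the original oscillation, which is the quantity the bench-top experiments above report.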
Pinky A. Bautista
Full Text Available In this paper we propose a multispectral enhancement scheme in which the spectral colors of the stained tissue structure of interest and its background can be independently modified by the user to further improve their visualization and color discrimination. The colors of the background objects are modified by transforming their N-band spectra through an NxN transformation matrix, which is derived by mapping the representative samples of their original spectra to the spectra of their target colors using the least-mean-squares method. On the other hand, the color of the tissue structure of interest is modified by modulating the transformed spectra with the sum of the pixel’s spectral residual errors at specific bands, weighted through an NxN weighting matrix; the spectral error is derived by taking the difference between the pixel’s original spectrum and its reconstruction from the first M dominant principal component vectors in principal component analysis. Promising results were obtained on the visualization of collagen fibers and non-collagen tissue structures, e.g., nuclei, cytoplasm, and red blood cells (RBC), in a hematoxylin and eosin (H&E) stained image.
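The two ingredients above — an N-band spectral transform fitted by least squares, and per-pixel spectral residuals from a PCA reconstruction — can be sketched with synthetic spectra (the shapes and data below are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_samples = 16, 200                      # N spectral bands

S_orig = rng.random((N, n_samples))         # representative background spectra
T_true = np.eye(N) + 0.1 * rng.standard_normal((N, N))
S_target = T_true @ S_orig                  # spectra of the desired colors

# Fit the NxN transform T mapping original to target spectra:
# minimize ||T @ S_orig - S_target||_F via the pseudoinverse.
T = S_target @ np.linalg.pinv(S_orig)

# PCA residual: reconstruct each spectrum from the first M principal
# components; the residual flags spectra poorly spanned by them.
M = 3
X = S_orig - S_orig.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
recon = U[:, :M] @ (U[:, :M].T @ X)
residual = X - recon
```

In the scheme above, the residual at selected bands is what gets weighted and added back to re-color the structure of interest.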
Markovic, Jelena; Anderson, Adam K; Todd, Rebecca M
Emotionally arousing events reach awareness more easily and evoke greater visual cortex activation than more mundane events. Recent studies have shown that they are also perceived more vividly and that emotionally enhanced perceptual vividness predicts memory vividness. We propose that affect-biased attention (ABA) - selective attention to emotionally salient events - is an endogenous attentional system tuned by an individual's history of reward and punishment. We present the Biased Attention via Norepinephrine (BANE) model, which unifies genetic, neuromodulatory, neural and behavioural evidence to account for ABA. We review evidence supporting BANE's proposal that a key mechanism of ABA is locus coeruleus-norepinephrine (LC-NE) activity, which interacts with activity in hubs of affective salience networks to modulate visual cortex activation and heighten the subjective vividness of emotionally salient stimuli. We further review literature on biased competition and look at initial evidence for its potential as a neural mechanism behind ABA. We also review evidence supporting the role of the LC-NE system as a driving force of ABA. Finally, we review individual differences in ABA and memory including differences in sensitivity to stimulus category and valence. We focus on differences arising from a variant of the ADRA2b gene, which codes for the alpha2b adrenoreceptor as a way of investigating influences of NE availability on ABA in humans. Copyright © 2013 Elsevier B.V. All rights reserved.
Mouchi, Vincent; Crowley, Quentin G.; Ubide, Teresa
Interpretation of high spatial resolution elemental mineral maps can be hindered by high frequency fluctuations, as well as by strong naturally-occurring or analytically-induced variations. We have developed a new standalone program named AERYN (Aspect Enhancement by Removing Yielded Noise) to produce more reliable element distribution maps from previously reduced geochemical data. The program is Matlab-based, designed with a graphic user interface, and is capable of rapidly generating elemental maps from data acquired by a range of analytical techniques. A visual interface aids selection of appropriate outlier rejection and drift-correction parameters, thereby facilitating recognition of subtle elemental fluctuations which may otherwise be obscured. Examples of use are provided for quantitative trace element maps acquired using both laser ablation (LA-) ICP-MS and electron probe microanalysis (EPMA) of the cold-water coral Lophelia pertusa. We demonstrate how AERYN allows recognition of high frequency elemental fluctuations, including those which occur perpendicular to the maximum concentration gradient. Such data treatment complements commonly used processing methods to provide greater flexibility and control in producing elemental maps from micro-analytical techniques. - Highlights: • Matlab-based application to improve visualization of elemental maps. • Capable of detrending when data set shows drift. • Compatible with processed data text files from LA-ICP-MS, EDS and EPMA. • Option to filter geochemical trends to observe high-frequency fluctuations.
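The two corrections highlighted above, outlier rejection and drift correction, can be sketched as follows. This is a simplified illustration in Python, not AERYN's actual (Matlab-based) algorithm; the z-score threshold and linear drift model are assumptions:

```python
import numpy as np

def clean_map(raw, z_thresh=3.0):
    """Reject spike outliers by a z-score threshold, then remove a
    linear drift along the acquisition direction (columns), keeping
    the map's overall mean level."""
    data = raw.astype(float).copy()
    mu, sd = data.mean(), data.std()
    data[np.abs(data - mu) > z_thresh * sd] = np.nan    # reject spikes
    cols = np.arange(data.shape[1])
    col_means = np.nanmean(data, axis=0)                # drift profile
    slope, intercept = np.polyfit(cols, col_means, 1)   # linear drift fit
    trend = slope * cols + intercept
    return data - (trend - trend.mean())                # detrend, keep mean

# synthetic map: flat signal + linear instrument drift + one spike
rng = np.random.default_rng(1)
raw = rng.normal(10.0, 1.0, (50, 80)) + np.linspace(0.0, 5.0, 80)
raw[10, 10] = 1000.0
cleaned = clean_map(raw)
```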
Manjaly, Zina M; Bruning, Nicole; Neufang, Susanne; Stephan, Klaas E; Brieber, Sarah; Marshall, John C; Kamp-Becker, Inge; Remschmidt, Helmut; Herpertz-Dahlmann, Beate; Konrad, Kerstin; Fink, Gereon R
Previous studies found normal or even superior performance of autistic patients on visuospatial tasks requiring local search, like the Embedded Figures Task (EFT). A well-known interpretation of this is "weak central coherence", i.e. autistic patients may show a reduced general ability to process information in its context and may therefore have a tendency to favour local over global aspects of information processing. An alternative view is that the local processing advantage in the EFT may result from a relative amplification of early perceptual processes which boosts processing of local stimulus properties but does not affect processing of global context. This study used functional magnetic resonance imaging (fMRI) in 12 autistic adolescents (9 Asperger and 3 high-functioning autistic patients) and 12 matched controls to help distinguish, on neurophysiological grounds, between these two accounts of EFT performance in autistic patients. Behaviourally, we found autistic individuals to be unimpaired during the EFT while they were significantly worse at performing a closely matched control task with minimal local search requirements. The fMRI results showed that activations specific for the local search aspects of the EFT were left-lateralised in parietal and premotor areas for the control group (as previously demonstrated for adults), whereas for the patients these activations were found in right primary visual cortex and bilateral extrastriate areas. These results suggest that enhanced local processing in early visual areas, as opposed to impaired processing of global context, is characteristic for performance of the EFT by autistic patients.
Full Text Available This study investigated the effect of visual input enhancement on the vocabulary learning of Iranian EFL learners. One hundred and thirty-two EFL learners from elementary, intermediate, and advanced proficiency levels were assigned to six groups, two groups at each proficiency level, with one being an experimental and the other a control group. The study employed pretests, treatment reading texts, and posttests. T-tests were used for the analysis of the data. The results revealed positive effects for visual input enhancement at the advanced level based on within-group and between-group comparisons. However, this positive effect was not found for the elementary and intermediate levels based on between-group comparisons. It was concluded that although visual input enhancement may have beneficial effects for elementary and intermediate levels, it is much more effective for advanced EFL learners. This study may provide useful guiding principles for EFL teachers and syllabus designers.
Sabatino DiCriscio, Antoinette; Troiani, Vanessa
Atypical visual perceptual skills are thought to underlie unusual visual attention in autism spectrum disorders. We assessed whether individual differences in visual processing skills scaled with quantitative traits associated with the broader autism phenotype (BAP). Visual perception was assessed using the Figure-ground subtest of the Test of…
Xie, Weizhen; Zhang, Weiwei
Negative emotion sometimes enhances memory (higher accuracy and/or vividness, e.g., flashbulb memories). The present study investigates whether it is the qualitative (precision) or quantitative (the probability of successful retrieval) aspect of memory that drives these effects. In a visual long-term memory task, observers memorized colors (Experiment 1a) or orientations (Experiment 1b) of sequentially presented everyday objects under negative, neutral, or positive emotions induced with International Affective Picture System images. In a subsequent test phase, observers reconstructed objects' colors or orientations using the method of adjustment. We found that mnemonic precision was enhanced under the negative condition relative to the neutral and positive conditions. In contrast, the probability of successful retrieval was comparable across the emotion conditions. Furthermore, the boost in memory precision was associated with elevated subjective feelings of remembering (vividness and confidence) and metacognitive sensitivity in Experiment 2. Altogether, these findings suggest a novel precision-based account for emotional memories. Copyright © 2017 Elsevier B.V. All rights reserved.
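The distinction drawn above, precision versus probability of successful retrieval, is conventionally estimated by fitting a two-component mixture model to the distribution of adjustment errors. A minimal sketch of such a fit (a Gaussian over response error plus a uniform guess component; this is an illustrative implementation, not the authors' exact analysis, and it uses a linear Gaussian rather than a circular von Mises for simplicity):

```python
import numpy as np
from scipy.optimize import minimize

def fit_mixture(errors):
    """Fit p_mem (probability of successful retrieval) and sd
    (memory precision) to response errors in degrees by maximum
    likelihood: Gaussian reports plus uniform guesses over 360 deg."""
    def nll(params):
        p_mem, sd = params
        dens = (p_mem * np.exp(-errors**2 / (2 * sd**2))
                / (sd * np.sqrt(2 * np.pi))
                + (1 - p_mem) / 360.0)          # uniform guess density
        return -np.sum(np.log(dens))
    res = minimize(nll, x0=[0.8, 20.0],
                   bounds=[(0.01, 0.999), (1.0, 100.0)])
    return res.x                                 # (p_mem, sd)

# synthetic errors: 70% remembered with sd = 15 deg, 30% guesses
rng = np.random.default_rng(0)
mem = rng.normal(0.0, 15.0, 700)
guess = rng.uniform(-180.0, 180.0, 300)
p_mem, sd = fit_mixture(np.concatenate([mem, guess]))
```

A precision effect, as reported above, shows up as a smaller fitted sd under negative emotion with p_mem unchanged.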
In a recent functional magnetic resonance imaging study, Kok and de Lange (2014) observed that BOLD activity for a Kanizsa illusory shape stimulus, in which pacmen-like inducers elicit an illusory shape percept, was either enhanced or suppressed relative to a nonillusory control configuration depending on whether the spatial profile of BOLD activity in early visual cortex was related to the illusory shape or the inducers, respectively. The authors argued that these findings fit well with the predictive coding framework, because top-down predictions related to the illusory shape are not met with bottom-up sensory input and hence the feedforward error signal is enhanced. Conversely, for the inducing elements, there is a match between top-down predictions and input, leading to a decrease in error. Rather than invoking predictive coding as the explanatory framework, the suppressive effect related to the inducers might be caused by neural adaptation to perceptually stable input due to the trial sequence used in the experiment.
Sanchez, Christopher A.; Ruddell, Benjamin L.; Schiesser, Roy; Merwade, Venkatesh
Previous research has suggested that the use of more authentic learning activities can produce more robust and durable knowledge gains. This is consistent with calls within civil engineering education, specifically hydrology, that suggest that curricula should more often include professional perspective and data analysis skills to better develop the "T-shaped" knowledge profile of a professional hydrologist (i.e., professional breadth combined with technical depth). It was expected that the inclusion of a data-driven simulation lab exercise, contextualized within a real-world situation and more consistent with the job duties of a professional in the field, would provide enhanced learning and appreciation of job duties beyond more conventional paper-and-pencil exercises in a lower-division undergraduate course. Results indicate that while students learned in both conditions, learning was enhanced for the data-driven simulation group in nearly every content area. This pattern of results suggests that the use of data-driven modeling and visualization activities can have a significant positive impact on instruction. This increase in learning likely facilitates the development of student perspective and conceptual mastery, enabling students to make better choices about their studies, while also better preparing them for work as a professional in the field.
Anzures, Gizelle; Goyet, Louise; Ganea, Natasa; Johnson, Mark H
Autism spectrum disorders are characterized by deficits in social and communication abilities. While unaffected relatives lack severe deficits, milder impairments have been reported in some first-degree relatives. The present study sought to verify whether mild deficits in face perception are evident among the unaffected younger siblings of children with ASD. Children between 6-9 years of age completed a face-recognition task and a passive-viewing ERP task with face and house stimuli. Sixteen children were typically developing with no family history of ASD, and 17 were unaffected children with an older sibling with ASD. Findings indicate that, while unaffected siblings are comparable to controls in their face-recognition abilities, unaffected male siblings in particular show relatively enhanced P100 and P100-N170 peak-to-peak amplitude responses to faces and houses. Enhanced ERPs among unaffected male siblings are discussed in relation to potential differences in neural network recruitment during visual and face processing.
Full Text Available Visual function has been shown to deteriorate prior to the onset of retinopathy in some diabetic patients and experimental animal models. This suggests the involvement of the brain's visual system in the early stages of diabetes. In this study, we tested this hypothesis by examining the integrity of the visual pathway in a diabetic rat model using in vivo multi-modal magnetic resonance imaging (MRI). Ten-week-old Sprague-Dawley rats were divided into an experimental diabetic group by intraperitoneal injection of 65 mg/kg streptozotocin in 0.01 M citric acid, and a sham control group by intraperitoneal injection of citric acid only. One month later, diffusion tensor MRI (DTI) was performed to examine the white matter integrity in the brain, followed by chromium-enhanced MRI of retinal integrity and manganese-enhanced MRI of anterograde manganese transport along the visual pathway. Prior to MRI experiments, the streptozotocin-induced diabetic rats showed significantly smaller weight gain and higher blood glucose level than the control rats. DTI revealed significantly lower fractional anisotropy and higher radial diffusivity in the prechiasmatic optic nerve of the diabetic rats compared to the control rats. No apparent difference was observed in the axial diffusivity of the optic nerve, the chromium enhancement in the retina, or the manganese enhancement in the lateral geniculate nucleus and superior colliculus between groups. Our results suggest that streptozotocin-induced diabetes leads to early injury in the optic nerve when no substantial change in retinal integrity or anterograde transport along the visual pathways was observed in MRI using contrast agent enhancement. DTI may be a useful tool for detecting and monitoring early pathophysiological changes in the visual system of experimental diabetes non-invasively.
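The DTI metrics compared between groups above (fractional anisotropy, axial and radial diffusivity) follow from the three eigenvalues of the diffusion tensor by standard formulas. A short sketch (the example eigenvalues are illustrative, not the study's data):

```python
import numpy as np

def dti_metrics(l1, l2, l3):
    """Fractional anisotropy (FA), axial diffusivity (AD) and radial
    diffusivity (RD) from diffusion-tensor eigenvalues, assuming
    l1 is the largest (fiber-parallel) eigenvalue."""
    ev = np.array([l1, l2, l3], dtype=float)
    md = ev.mean()                                   # mean diffusivity
    fa = np.sqrt(1.5 * np.sum((ev - md) ** 2) / np.sum(ev ** 2))
    ad = l1                                          # along the fiber
    rd = (l2 + l3) / 2.0                             # perpendicular
    return fa, ad, rd

# healthy-looking optic-nerve-like tensor (units: 1e-3 mm^2/s)
fa, ad, rd = dti_metrics(1.7, 0.3, 0.3)
```

Axonal injury of the kind reported above would appear as a drop in FA driven by a rise in RD, with AD relatively preserved.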
Saunder, Lorna; Berridge, Emma-Jane
Poor preparation of nurses regarding learning disabilities can have devastating consequences. High-profile reports and Nursing and Midwifery Council requirements led this University to introduce Shareville into the undergraduate and postgraduate nursing curriculum. Shareville is a virtual environment developed at Birmingham City University in which student nurses learn from realistic, problem-based scenarios featuring people with learning disabilities. Following the implementation of the resource, an evaluation of both staff and student experience was undertaken. Students reported that the problem-based scenarios were sufficiently real and immersive. Scenarios presented previously unanticipated considerations, offering new insights and giving students the opportunity to practise decision-making in challenging scenarios before encountering them in practice. The interface and the quality of the graphics were criticised, but this did not interfere with learning. Nine lecturers were interviewed; they generally felt positively towards the resource and identified strengths in terms of blended learning and collaborative teaching. The evaluation contributes to understandings of learning via simulated reality and identifies process issues that will inform the development of further resources and their roll-out locally, and may guide other education providers in developing and implementing resources of this nature. There was significant parity between lecturers' expectations and students' experience of Shareville. Copyright © 2015 Elsevier Ltd. All rights reserved.
Scott, Gregory D.; Karns, Christina M.; Dow, Mark W.; Stevens, Courtney; Neville, Helen J.
Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants wer...
Shiell, Martha M.; Champoux, François; Zatorre, Robert J.
After sensory loss, the deprived cortex can reorganize to process information from the remaining modalities, a phenomenon known as cross-modal reorganization. In blind people this cross-modal processing supports compensatory behavioural enhancements in the nondeprived modalities. Deaf people also show some compensatory visual enhancements, but a direct relationship between these abilities and cross-modally reorganized auditory cortex has only been established in an animal model, the congenita...
Hultsch; Schleuss; Todt
In many oscine birds, song learning is affected by social variables, for example the behaviour of a tutor. This implies that both auditory and visual perceptual systems should be involved in the acquisition process. To examine whether and how particular visual stimuli can affect song acquisition, we tested the impact of a tutoring design in which the presentation of auditory stimuli (i.e. species-specific master songs) was paired with a well-defined nonauditory stimulus (i.e. stroboscope light flashes: Strobe regime). The subjects were male hand-reared nightingales, Luscinia megarhynchos. For controls, males were exposed to tutoring without a light stimulus (Control regime). The males' singing recorded 9 months later showed that the Strobe regime had enhanced the acquisition of song patterns. During this treatment birds had acquired more songs than during the Control regime; the observed increase in repertoire size was from 20 to 30% in most cases. Furthermore, the copy quality of imitations acquired during the Strobe regime was better than that of imitations developed from the Control regime, and this was due to a significant increase in the number of 'perfect' song copies. We conclude that these effects were mediated by an intrinsic component (e.g. attention or arousal) which specifically responded to the Strobe regime. Our findings also show that mechanisms of song learning are well prepared to process information from cross-modal perception. Thus, more detailed enquiries into stimulus complexes that are usually referred to as social variables are promising. Copyright 1999 The Association for the Study of Animal Behaviour.
Petkovska, Iva; Shah, Sumit K.; McNitt-Gray, Michael F.; Goldin, Jonathan G.; Brown, Matthew S.; Kim, Hyun J.; Brown, Kathleen; Aberle, Denise R.
Purpose: To determine whether conventional nodule densitometry or analysis based on contrast enhancement maps of indeterminate lung nodules imaged with contrast-enhanced CT can distinguish benign from malignant lung nodules. Materials and methods: Thin-section, contrast-enhanced CT (baseline, and post-contrast series acquired at 45, 90, 180, and 360 s) was performed on 29 patients with indeterminate lung nodules (14 benign, 15 malignant). A thoracic radiologist identified the boundary of each nodule using semi-automated contouring to form a 3D region-of-interest (ROI) on each image series. The post-contrast series having the maximum mean enhancement was then volumetrically registered to the baseline series. The two series were subtracted volumetrically and the subtracted voxels were quantized into seven color-coded bins, forming a contrast enhancement map (CEM). Conventional nodule densitometry was performed to obtain the maximum difference in mean enhancement values for each nodule from a circular ROI. Three thoracic radiologists performed visual semi-quantitative analysis of each nodule, scoring each map for (a) magnitude and (b) heterogeneity of enhancement throughout the entire volume of the nodule on a five-point scale. Receiver operating characteristic (ROC) analysis was conducted on these features to evaluate their diagnostic efficacy. Finally, 14 quantitative texture features were calculated for each map. A statistical analysis was performed to combine the 14 texture features into a single factor, and ROC analysis of the derived aggregate factor was done as an indicator of malignancy. All features were analyzed for differences between benign and malignant nodules. Results: Using 15 HU as a threshold, 93% (14/15) of malignant and 79% (11/14) of benign nodules demonstrated enhancement. The ROC curve for which higher values of enhancement indicate malignancy was generated, and the area under the curve (AUC) was 0.76. The visually scored magnitude of enhancement was found to be
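The CEM construction and the densitometry threshold described above can be sketched as follows. The bin range and edge values are illustrative assumptions, not the study's calibration:

```python
import numpy as np

def enhancement_map(baseline, post, bins=7, lo=0.0, hi=105.0):
    """Voxelwise subtraction of the registered post-contrast series
    from baseline, then quantization of the enhancement (in HU) into
    a fixed number of color-coded bins. Returns bin indices
    (0 = below range)."""
    diff = post.astype(float) - baseline.astype(float)
    edges = np.linspace(lo, hi, bins + 1)
    return np.digitize(diff, edges)

def enhances(baseline, post, roi, threshold=15.0):
    """Conventional densitometry: mean enhancement over an ROI,
    classified against the 15 HU threshold used above."""
    diff = post[roi].mean() - baseline[roi].mean()
    return bool(diff >= threshold)

# toy volumes: uniform 20 HU enhancement everywhere
baseline = np.zeros((4, 4, 4))
post = np.full((4, 4, 4), 20.0)
roi = np.ones((4, 4, 4), dtype=bool)
emap = enhancement_map(baseline, post)
```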
Loewen, Shawn; Inceoglu, Solène
Textual manipulation is a common pedagogic tool used to emphasize specific features of a second language (L2) text, thereby facilitating noticing and, ideally, second language development. Visual input enhancement has been used to investigate the effects of highlighting specific grammatical structures in a text. The current study uses a…
Manzoni, Gian Mauro; Cesa, Gian Luca; Bacchetta, Monica; Castelnuovo, Gianluca; Conti, Sara; Gaggioli, Andrea; Mantovani, Fabrizia; Molinari, Enrico; Cárdenas-López, Georgina; Riva, Giuseppe
It is well known that obesity has a multifactorial etiology, including biological, environmental, and psychological causes. For this reason, obesity treatment requires a more integrated approach than standard behavioral treatment based on dietary and physical activity only. To test the long-term efficacy of an enhanced cognitive-behavioral therapy (CBT) for obesity, including a virtual reality (VR) module aimed both at unlocking the negative memory of the body and at modifying its behavioral and emotional correlates, 163 female morbidly obese inpatients (body mass index >40) were randomly assigned to three conditions: a standard behavioral inpatient program (SBP), SBP plus standard CBT, and SBP plus VR-enhanced CBT. Patients' weight, eating behavior, and body dissatisfaction were measured at the start and upon completion of the inpatient program. Weight was also assessed at 1-year follow-up. All measures improved significantly at discharge from the inpatient program, and no significant difference was found among the conditions. However, odds ratios showed that patients in the VR condition had a greater probability of maintaining or improving weight loss at 1-year follow-up than SBP patients had (48% vs. 11%, p = 0.004) and, to a lesser extent, than CBT patients had (48% vs. 29%, p = 0.08). Indeed, only the VR-enhanced CBT was effective in further improving weight loss at 1-year follow-up. On the contrary, participants who received only the inpatient program regained, on average, most of the weight they had lost. Findings support the hypothesis that a VR module addressing the locked negative memory of the body may enhance the long-term efficacy of standard CBT.
Lee, Byoungho; Lee, Seungjae; Jang, Changwon; Hong, Jong-Young; Li, Gang
With the virtue of rapid progress in optics, sensors, and computer science, we are witnessing commercial products and prototypes for augmented reality (AR) penetrating consumer markets. AR is in the spotlight, as it is expected to provide a much more immersive and realistic experience than ordinary displays. However, several barriers must be overcome for the successful commercialization of AR. Here, we explore challenging and important topics for AR such as image combiners, enhancement of display performance, and focus cue reproduction. Image combiners are essential to integrate virtual images with the real world. Display performance (e.g. field of view and resolution) is important for a more immersive experience, and focus cue reproduction may mitigate the visual fatigue caused by vergence-accommodation conflict. We also demonstrate emerging technologies to overcome these issues: the index-matched anisotropic crystal lens (IMACL), retinal projection displays, and 3D displays with focus cues. For image combiners, a novel optical element called the IMACL provides a relatively wide field of view. Retinal projection displays may enhance the field of view and resolution of AR displays. Focus cues can be reconstructed via multi-layer displays and holographic displays. Experimental results of our prototypes are explained.
Gomez, Jocelyn; Hoffman, Hunter G; Bistricky, Steven L; Gonzalez, Miriam; Rosenberg, Laura; Sampaio, Mariana; Garcia-Palacios, Azucena; Navarro-Haro, Maria V; Alhalabi, Wadee; Rosenberg, Marta; Meyer, Walter J; Linehan, Marsha M
Sustaining a burn injury increases an individual's risk of developing psychological problems such as generalized anxiety, negative emotions, depression, acute stress disorder, or post-traumatic stress disorder. Despite the growing use of Dialectical Behavioral Therapy® (DBT®) by clinical psychologists, to date, there are no published studies using standard DBT® or DBT® skills learning for severe burn patients. The current study explored the feasibility and clinical potential of using Immersive Virtual Reality (VR) enhanced DBT® mindfulness skills training to reduce negative emotions and increase positive emotions of a patient with severe burn injuries. The participant was a hospitalized (in house) 21-year-old Spanish-speaking Latino male patient being treated for a large (>35% TBSA) severe flame burn injury. Methods: The patient looked into a pair of Oculus Rift DK2 virtual reality goggles to perceive the computer-generated virtual reality illusion of floating down a river, with rocks, boulders, trees, mountains, and clouds, while listening to DBT® mindfulness training audios during 4 VR sessions over a 1 month period. Study measures were administered before and after each VR session. Results: As predicted, the patient reported increased positive emotions and decreased negative emotions. The patient also accepted the VR mindfulness treatment technique. He reported the sessions helped him become more comfortable with his emotions and he wanted to keep using mindfulness after returning home. Conclusions: Dialectical Behavioral Therapy is an empirically validated treatment approach that has proved effective with non-burn patient populations for treating many of the psychological problems experienced by severe burn patients. The current case study explored, for the first time, the use of immersive virtual reality enhanced DBT® mindfulness skills training with a burn patient. The patient reported reductions in negative emotions and increases in positive emotions.
Using the concept of augmented reality, this article will investigate how places in various ways have become augmented by means of different mediatization strategies. Augmentation of reality implies an enhancement of the places' emotional character: a certain mood, atmosphere or narrative surplus … physical damage: they are all readable and interpretable signs. As augmented reality the crime scene carries a narrative which at first is hidden and must be revealed. Due to the process of investigation and the detective's ability to reason and deduce, the crime scene as place is reconstructed as virtual…
Technology Enhanced Learning is a feature of 21st century education. Innovations in ICT have provided unbounded access to information in support of the learning process (APTEL, 2010; Allert et al, 2002; Baldry et al, 2006; Frustenberg et al, 2001; Sarkis, 2010). LMS have been extensively put to use in universities and educational institutions to…
Maizels, Max; Mickelson, Jennie; Yerkes, Elizabeth; Maizels, Evelyn; Stork, Rachel; Young, Christine; Corcoran, Julia; Holl, Jane; Kaplan, William E
Changes in health care are stimulating residency training programs to develop new methods for teaching surgical skills. We developed Computer-Enhanced Visual Learning (CEVL) as an innovative Internet-based learning and assessment tool. The CEVL method uses the educational procedures of deliberate practice and performance to teach and learn surgery in a stylized manner. CEVL is a learning and assessment tool that can provide students and educators with quantitative feedback on learning a specific surgical procedure. The methods involved examining quantitative data on improvement in surgical skills. Herein, we qualitatively describe the method and show how program directors (PDs) may implement this technique in their residencies. CEVL allows an operation to be broken down into teachable components. The process relies on feedback and remediation to improve performance, with a focus on learning that is applicable to the next case being performed. CEVL has been shown to be effective for teaching pediatric orchiopexy and is being adapted to additional adult and pediatric procedures and to office examination skills. The CEVL method is available to other residency training programs.
Margaret C Jackson
Full Text Available Fluid and effective social communication requires that both face identity and emotional expression information are encoded and maintained in visual short-term memory (VSTM) to enable a coherent, ongoing picture of the world and its players. This appears to be of particular evolutionary importance when confronted with potentially threatening displays of emotion - previous research has shown better VSTM for angry versus happy or neutral face identities. Using functional magnetic resonance imaging, here we investigated the neural correlates of this angry face benefit in VSTM. Participants were shown between one and four to-be-remembered angry, happy, or neutral faces, and after a short retention delay they stated whether a single probe face had been present or not in the previous display. All faces in any one display expressed the same emotion, and the task required memory for face identity. We find enhanced VSTM for angry face identities and describe the right hemisphere brain network underpinning this effect, which involves the globus pallidus, superior temporal sulcus, and frontal lobe. Increased activity in the globus pallidus was significantly correlated with the angry benefit in VSTM. Areas modulated by emotion were distinct from those modulated by memory load. Our results provide evidence for a key role of the basal ganglia as an interface between emotion and cognition, supported by a frontal, temporal, and occipital network.
Ramakrishnan, Sowmya; Alvino, Christopher; Grady, Leo; Kiraly, Atilla
We present a complete automatic system to extract 3D centerlines of ribs from thoracic CT scans. Our rib centerline system determines the positional information for the rib cage consisting of extracted rib centerlines, spinal canal centerline, pairing and labeling of ribs. We show an application of this output to produce an enhanced visualization of the rib cage by the method of Kiraly et al., in which the ribs are digitally unfolded along their centerlines. The centerline extraction consists of three stages: (a) pre-trace processing for rib localization, (b) rib centerline tracing, and (c) post-trace processing to merge the rib traces. Then we classify ribs from non-ribs and determine anatomical rib labeling. Our novel centerline tracing technique uses the Random Walker algorithm to segment the structural boundary of the rib in successive 2D cross sections orthogonal to the longitudinal direction of the ribs. Then the rib centerline is progressively traced along the rib using a 3D Kalman filter. The rib centerline extraction framework was evaluated on 149 CT datasets with varying slice spacing, dose, and under a variety of reconstruction kernels. The results of the evaluation are presented. The extraction takes approximately 20 seconds on a modern radiology workstation and performs robustly even in the presence of partial volume effects or rib pathologies such as bone metastases or fractures, making the system suitable for assisting clinicians in expediting routine rib reading for oncology and trauma applications.
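The progressive tracing step described above can be sketched with a scalar constant-velocity Kalman filter that smooths noisy per-slice centroid coordinates along a rib. This is a one-dimensional simplification of the paper's 3D filter, and the measurement values below are made up for illustration:

```python
def kalman_trace(centroids, q=0.01, r=1.0):
    """Smooth a sequence of noisy 1D centroid coordinates with a
    constant-velocity Kalman filter (state = [position, velocity])."""
    x, v = centroids[0], 0.0           # initialize state from first measurement
    p00, p01, p11 = 1.0, 0.0, 1.0      # symmetric 2x2 state covariance
    traced = [x]
    for z in centroids[1:]:
        # Predict: position advances by the current velocity estimate
        x_pred = x + v
        p00_p = p00 + 2.0 * p01 + p11 + q
        p01_p = p01 + p11
        p11_p = p11 + q
        # Update: blend the prediction with the measured centroid z
        s = p00_p + r                  # innovation variance
        k0, k1 = p00_p / s, p01_p / s  # Kalman gain
        innov = z - x_pred
        x = x_pred + k0 * innov
        v = v + k1 * innov
        p00 = (1.0 - k0) * p00_p
        p01 = (1.0 - k0) * p01_p
        p11 = p11_p - k1 * p01_p
        traced.append(x)
    return traced

# Hypothetical per-slice centroid coordinates (mm) along one rib
measurements = [0.0, 1.2, 1.8, 3.1, 3.9, 5.05]
smoothed = kalman_trace(measurements)
```

In the full system the same predict/update cycle would run on each 3D coordinate of the centroid extracted from successive orthogonal cross sections, keeping the trace stable across partial-volume artifacts or local pathology.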
Wang, Hsiang-Chen; Tsai, Meng-Tsan; Chiang, Chun-Ping
Color reproduction systems based on the multi-spectral imaging technique (MSI) for both directly estimating reflection spectra and direct visualization of oral tissues using various light sources are proposed. Images from three oral cancer patients were taken as the experimental samples, and spectral differences between pre-cancerous and normal oral mucosal tissues were calculated at three time points during 5-aminolevulinic acid photodynamic therapy (ALA-PDT) to analyze whether they were consistent with disease processes. To check the successful treatment of oral cancer with ALA-PDT, oral cavity images by swept source optical coherence tomography (SS-OCT) are demonstrated. This system can also reproduce images under different light sources. For pre-cancerous detection, the oral images after the second ALA-PDT are assigned as the target samples. By using RGB LEDs with various correlated color temperatures (CCTs) for color difference comparison, the light source with a CCT of about 4500 K was found to have the best ability to enhance the color difference between pre-cancerous and normal oral mucosal tissues in the oral cavity. Compared with the fluorescent lighting commonly used today, the color difference can be improved by 39.2% from 16.5270 to 23.0023. Hence, this light source and spectral analysis increase the efficiency of the medical diagnosis of oral cancer and aid patients in receiving early treatment.
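The quoted 39.2% gain is simply the relative change between the two reported color-difference values; as a sanity check:

```python
def relative_improvement(before, after):
    """Percent change of a color-difference (Delta E) value."""
    return (after - before) / before * 100.0

delta_e_fluorescent = 16.5270   # under common fluorescent lighting
delta_e_led_4500k = 23.0023     # under RGB LEDs at ~4500 K CCT

print(f"{relative_improvement(delta_e_fluorescent, delta_e_led_4500k):.1f}%")
# -> 39.2%
```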
Akyürek, Elkan G; Kappelmann, Nils; Volkert, Marc; van Rijn, Hedderik
Human memory benefits from information clustering, which can be accomplished by chunking. Chunking typically relies on expertise and strategy, and it is unknown whether perceptual clustering over time, through temporal integration, can also enhance working memory. The current study examined the attentional and working memory costs of temporal integration of successive target stimulus pairs embedded in rapid serial visual presentation. ERPs were measured as a function of behavioral reports: One target, two separate targets, or two targets reported as a single integrated target. N2pc amplitude, reflecting attentional processing, depended on the actual number of successive targets. The memory-related CDA and P3 components instead depended on the perceived number of targets irrespective of their actual succession. The report of two separate targets was associated with elevated amplitude, whereas integrated as well as actual single targets exhibited lower amplitude. Temporal integration thus provided an efficient means of processing sensory input, offloading working memory so that the features of two targets were consolidated and maintained at a cost similar to that of a single target.
Newby, Gregory B.
Discusses the current state of the art in virtual reality (VR), its historical background, and future possibilities. Highlights include applications in medicine, art and entertainment, science, business, and telerobotics; and VR for information science, including graphical display of bibliographic data, libraries and books, and cyberspace.…
In 2004, an unnamed Bush adviser accused a senior Wall Street Journal reporter of belonging to the “reality based community”—a community that believed solutions stem from the judicious study of reality. “We're history's actors,” he told the journalist, “and you, all of you, will be left to just study what we do.” Overwhelmingly, the response of those on the left, and of US progressives, to this comment was to smugly deride the irrationalism and the arrogance of the Bush Administration. This paper, in contrast, will examine what is missed in the rush to accept membership of the reality-based community. It will suggest that the adviser's comments express something that was once a central tenet of the left: the belief that political action is capable of transforming reality. Today, on the left, this belief has been all but abandoned in the face of a seemingly unstoppable onslaught of free-market capitalism and increasingly repressive state power. This paper will ask what it would mean today to begin to re-imagine political action as capable of remaking the world.
Nielsen, Birgitte Lund; Brandt, Harald; Radmer, Ole
The article presents results from a pilot trial, in 7th-grade physics/chemistry and biology, of two Augmented Reality (AR) apps for science teaching. Opportunities and challenges in the teacher's scaffolding of the students' exploratory dialogue and modelling competence are examined through interviews...
This thesis provides an overview of (mobile) augmented and mixed reality by clarifying the different concepts of reality, briefly covering the technology behind mobile augmented and mixed reality systems, conducting a concise survey of existing and emerging mobile augmented and mixed reality applications and devices. Based on the previous analysis and the survey, this work will next attempt to assess what mobile augmented and mixed reality could make possible, and what related applications an...
Shiell, Martha M; Champoux, François; Zatorre, Robert J
After sensory loss, the deprived cortex can reorganize to process information from the remaining modalities, a phenomenon known as cross-modal reorganization. In blind people this cross-modal processing supports compensatory behavioural enhancements in the nondeprived modalities. Deaf people also show some compensatory visual enhancements, but a direct relationship between these abilities and cross-modally reorganized auditory cortex has only been established in an animal model, the congenitally deaf cat, and not in humans. Using T1-weighted magnetic resonance imaging, we measured cortical thickness in the planum temporale, Heschl's gyrus and sulcus, the middle temporal area MT+, and the calcarine sulcus, in early-deaf persons. We tested for a correlation between this measure and visual motion detection thresholds, a visual function where deaf people show enhancements as compared to hearing. We found that the cortical thickness of a region in the right hemisphere planum temporale, typically an auditory region, was greater in deaf individuals with better visual motion detection thresholds. This same region has previously been implicated in functional imaging studies as important for functional reorganization. The structure-behaviour correlation observed here demonstrates this area's involvement in compensatory vision and indicates an anatomical correlate, increased cortical thickness, of cross-modal plasticity.
Aims. Increasing evidence shows that imbalanced suppressive drive prior to binocular combination may be the key factor in amblyopia. We described a novel binocular approach, interocular shift of visual attention (ISVA), for treatment of amblyopia in adult patients. Methods. Visual stimuli were presented anaglyphically on a computer screen. A square target resembling a Landolt C had 2 openings, one in red and one in cyan color. Through blue-red goggles, each eye could only see one of the two openings. The patient was required to report the location of the opening presented to the amblyopic eye. It started at an opening size of 800 sec of arc, went up and down in 160 sec of arc steps, and stopped upon reaching the 5th reversal. Ten patients with anisometropic amblyopia older than age 14 (average age: 26.7) were recruited and received ISVA treatment for 6 weeks, with 2 training sessions per day. Results. Both Titmus stereopsis (z=-2.809, P=0.005) and Random-dot stereopsis (z=-2.317, P=0.018) were significantly improved. Average improvement in best corrected visual acuity (BCVA) was 0.74 line (t=5.842, P<0.001). Conclusions. The ISVA treatment may be effective in treating amblyopia and restoring stereoscopic function.
Latham, Andrew J; Patston, Lucy L M; Westermann, Christine; Kirk, Ian J; Tippett, Lynette J
Increasing behavioural evidence suggests that expert video game players (VGPs) show enhanced visual attention and visuospatial abilities, but what underlies these enhancements remains unclear. We administered the Poffenberger paradigm with concurrent electroencephalogram (EEG) recording to assess occipital N1 latencies and interhemispheric transfer time (IHTT) in expert VGPs. Participants comprised 15 right-handed male expert VGPs and 16 non-VGP controls matched for age, handedness, IQ and years of education. Expert VGPs began playing before age 10, had a minimum 8 years experience, and maintained playtime of at least 20 hours per week over the last 6 months. Non-VGPs had little-to-no game play experience (maximum 1.5 years). Participants responded to checkerboard stimuli presented to the left and right visual fields while 128-channel EEG was recorded. Expert VGPs responded significantly more quickly than non-VGPs. Expert VGPs also had significantly earlier occipital N1s in direct visual pathways (the hemisphere contralateral to the visual field in which the stimulus was presented). IHTT was calculated by comparing the latencies of occipital N1 components between hemispheres. No significant between-group differences in electrophysiological estimates of IHTT were found. Shorter N1 latencies may enable expert VGPs to discriminate attended visual stimuli significantly earlier than non-VGPs and contribute to faster responding in visual tasks. As successful video-game play requires precise, time pressured, bimanual motor movements in response to complex visual stimuli, which in this sample began during early childhood, these differences may reflect the experience and training involved during the development of video-game expertise, but training studies are needed to test this prediction.
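The IHTT estimate described above reduces to a latency difference, typically averaged over both stimulation directions. A minimal sketch (the N1 latencies below are hypothetical, not the study's data):

```python
def ihtt(n1_contra_ms, n1_ipsi_ms):
    """Interhemispheric transfer time: indirect-pathway (ipsilateral-hemisphere)
    N1 latency minus direct-pathway (contralateral-hemisphere) N1 latency."""
    return n1_ipsi_ms - n1_contra_ms

# Hypothetical occipital N1 latencies (ms) for left- and right-visual-field stimuli
lvf = ihtt(n1_contra_ms=152.0, n1_ipsi_ms=165.0)  # right hemisphere receives directly
rvf = ihtt(n1_contra_ms=150.0, n1_ipsi_ms=161.0)  # left hemisphere receives directly
mean_ihtt = (lvf + rvf) / 2.0
print(mean_ihtt)  # -> 12.0
```

The study's group comparison then amounts to testing whether this per-participant difference score differs between expert VGPs and non-VGPs (it did not, whereas the direct-pathway latencies themselves did).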
Kang, Xin; Azizian, Mahdi; Wilson, Emmanuel; Wu, Kyle; Martin, Aaron D; Kane, Timothy D; Peters, Craig A; Cleary, Kevin; Shekhar, Raj
Conventional laparoscopes provide a flat representation of the three-dimensional (3D) operating field and are incapable of visualizing internal structures located beneath visible organ surfaces. Computed tomography (CT) and magnetic resonance (MR) images are difficult to fuse in real time with laparoscopic views due to the deformable nature of soft-tissue organs. Utilizing emerging camera technology, we have developed a real-time stereoscopic augmented-reality (AR) system for laparoscopic surgery by merging live laparoscopic ultrasound (LUS) with stereoscopic video. The system creates two new visual cues: (1) perception of true depth with improved understanding of 3D spatial relationships among anatomical structures, and (2) visualization of critical internal structures along with a more comprehensive visualization of the operating field. The stereoscopic AR system has been designed for near-term clinical translation with seamless integration into the existing surgical workflow. It is composed of a stereoscopic vision system, a LUS system, and an optical tracker. Specialized software processes streams of imaging data from the tracked devices and registers those in real time. The resulting two ultrasound-augmented video streams (one for the left and one for the right eye) give a live stereoscopic AR view of the operating field. The team conducted a series of stereoscopic AR interrogations of the liver, gallbladder, biliary tree, and kidneys in two swine. The preclinical studies demonstrated the feasibility of the stereoscopic AR system during in vivo procedures. Major internal structures could be easily identified. The system exhibited unobservable latency with acceptable image-to-video registration accuracy. We presented the first in vivo use of a complete system with stereoscopic AR visualization capability. This new capability introduces new visual cues and enhances visualization of the surgical anatomy. The system shows promise to improve the precision and
Dye, Matthew W. G.; Hauser, Peter C.; Bavelier, Daphne
Background Early deafness leads to enhanced attention in the visual periphery. Yet, whether this enhancement confers advantages in everyday life remains unknown, as deaf individuals have been shown to be more distracted by irrelevant information in the periphery than their hearing peers. Here, we show that, in a complex attentional task, a performance advantage results for deaf individuals. Methodology/Principal Findings We employed the Useful Field of View (UFOV) which requires central target identification concurrent with peripheral target localization in the presence of distractors – a divided, selective attention task. First, the comparison of deaf and hearing adults with or without sign language skills establishes that deafness and not sign language use drives UFOV enhancement. Second, UFOV performance was enhanced in deaf children, but only after 11 years of age. Conclusions/Significance This work demonstrates that, following early auditory deprivation, visual attention resources toward the periphery slowly get augmented to eventually result in a clear behavioral advantage by pre-adolescence on a selective visual attention task. PMID:19462009
Arief, Rifiana; Umniati, Naeli
Augmented Reality for Android handphones has been a trend among college students in the computer department who take the New Media course. To develop this application, knowledge of visual presentation theory and a case study of Augmented Reality on an Android phone are needed. A learning medium delivered through a virtual class can facilitate the students' needs in learning and developing Augmented Reality. The methods of this study in developing a virtual class for Augmented Reality learning were: a) having...
Man, David Wai Kwong; Poon, Wai Sang; Lam, Chow
People with traumatic brain injury (TBI) often experience cognitive deficits in attention, memory, executive functioning and problem-solving. The purpose of the present research study was to examine the effectiveness of an artificial intelligent virtual reality (VR)-based vocational problem-solving skill training programme designed to enhance employment opportunities for people with TBI. This was a prospective randomized controlled trial (RCT) comparing the effectiveness of the above programme with that of the conventional psycho-educational approach. Forty participants with mild (n = 20) or moderate (n = 20) brain injury were randomly assigned to each training programme. Comparisons of problem-solving skills were performed with the Wisconsin Card Sorting Test, the Tower of London Test and the Vocational Cognitive Rating Scale. Improvement in selective memory processes and perception of memory function were found. Across-group comparison showed that the VR group performed more favourably than the therapist-led one in terms of objective and subjective outcome measures and better vocational outcomes. These results support the potential use of a VR-based approach in memory training in people with TBI. Further VR applications, limitations and future research are described.
Morales, Esteban; de Leon, John Mark S; Abdollahi, Niloufar; Yu, Fei; Nouri-Mahdavi, Kouros; Caprioli, Joseph
The study was conducted to evaluate threshold smoothing algorithms to enhance prediction of the rates of visual field (VF) worsening in glaucoma. We studied 798 patients with primary open-angle glaucoma and 6 or more years of follow-up who underwent 8 or more VF examinations. Thresholds at each VF location for the first 4 years or first half of the follow-up time (whichever was greater) were smoothed with clusters defined by the nearest neighbor (NN), Garway-Heath, Glaucoma Hemifield Test (GHT), and weighting by the correlation of rates at all other VF locations. Thresholds were regressed with a pointwise exponential regression (PER) model and a pointwise linear regression (PLR) model. Smaller root mean square error (RMSE) values of the differences between the observed and the predicted thresholds at last two follow-ups indicated better model predictions. The mean (SD) follow-up times for the smoothing and prediction phase were 5.3 (1.5) and 10.5 (3.9) years. The mean RMSE values for the PER and PLR models were unsmoothed data, 6.09 and 6.55; NN, 3.40 and 3.42; Garway-Heath, 3.47 and 3.48; GHT, 3.57 and 3.74; and correlation of rates, 3.59 and 3.64. Smoothed VF data predicted better than unsmoothed data. Nearest neighbor provided the best predictions; PER also predicted consistently more accurately than PLR. Smoothing algorithms should be used when forecasting VF results with PER or PLR. The application of smoothing algorithms on VF data can improve forecasting in VF points to assist in treatment decisions.
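The two regression models compared above can be sketched as follows: PLR fits thresholds linearly against follow-up time, while PER fits an exponential decay by regressing log-thresholds, with RMSE on the held-out last visits used to score the prediction. This is a simplified single-location illustration on synthetic, noise-free data; the study's actual fitting and smoothing details differ:

```python
import math

def linfit(ts, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    sxy = sum((t - mt) * (y - my) for t, y in zip(ts, ys))
    sxx = sum((t - mt) ** 2 for t in ts)
    slope = sxy / sxx
    return slope, my - slope * mt

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# Synthetic threshold series (dB) decaying exponentially over 8 yearly visits
years = list(range(8))
thresholds = [30.0 * math.exp(-0.05 * t) for t in years]
fit_t, fit_y = years[:5], thresholds[:5]      # smoothing/fitting phase
test_t, test_y = years[5:], thresholds[5:]    # last follow-ups to predict

# PLR: linear fit on raw thresholds
b, a = linfit(fit_t, fit_y)
plr_pred = [a + b * t for t in test_t]

# PER: linear fit on log-thresholds, back-transformed to dB
b_log, a_log = linfit(fit_t, [math.log(y) for y in fit_y])
per_pred = [math.exp(a_log + b_log * t) for t in test_t]

print(rmse(per_pred, test_y) < rmse(plr_pred, test_y))  # exponential decay favors PER
```

On data that truly decay exponentially, the log-domain fit extrapolates the curvature that the straight line misses, which mirrors the study's finding that PER predicted consistently more accurately than PLR.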
Gimenez, Luis E.; Vishnivetskiy, Sergey A.; Baameur, Faiza; Gurevich, Vsevolod V.
Based on the identification of residues that determine receptor selectivity of arrestins and the analysis of the evolution in the arrestin family, we introduced 10 mutations of “receptor discriminator” residues in arrestin-3. The recruitment of these mutants to M2 muscarinic (M2R), D1 (D1R) and D2 (D2R) dopamine, and β2-adrenergic receptors (β2AR) was assessed using bioluminescence resonance energy transfer-based assays in cells. Seven of 10 mutations differentially affected arrestin-3 binding to individual receptors. D260K and Q262P reduced the binding to β2AR, much more than to other receptors. The combination D260K/Q262P virtually eliminated β2AR binding while preserving the interactions with M2R, D1R, and D2R. Conversely, Y239T enhanced arrestin-3 binding to β2AR and reduced the binding to M2R, D1R, and D2R, whereas Q256Y selectively reduced recruitment to D2R. The Y239T/Q256Y combination virtually eliminated the binding to D2R and reduced the binding to β2AR and M2R, yielding a mutant with high selectivity for D1R. Eleven of 12 mutations significantly changed the binding to light-activated phosphorhodopsin. Thus, manipulation of key residues on the receptor-binding surface modifies receptor preference, enabling the construction of non-visual arrestins specific for particular receptor subtypes. These findings pave the way to the construction of signaling-biased arrestins targeting the receptor of choice for research or therapeutic purposes. PMID:22787152
Setchell Kenneth DR
Background: In learning and memory tasks requiring visual spatial memory (VSM), males exhibit superior performance to females (a difference attributed to the hormonal influence of estrogen). This study examined the influence of phytoestrogens (estrogen-like plant compounds) on VSM, utilizing radial arm-maze methods to examine varying aspects of memory. Additionally, brain phytoestrogen, calbindin (CALB), and cyclooxygenase-2 (COX-2) levels were determined. Results: Female rats receiving lifelong exposure to a high-phytoestrogen containing diet (Phyto-600) acquired the maze faster than females fed a phytoestrogen-free diet (Phyto-free); in males the opposite diet effect was identified. In a separate experiment, at 80 days-of-age, animals fed the Phyto-600 diet lifelong either remained on the Phyto-600 or were changed to the Phyto-free diet until 120 days-of-age. Following the diet change, Phyto-600 females outperformed females switched to the Phyto-free diet, while in males the opposite diet effect was identified. Furthermore, males fed the Phyto-600 diet had significantly higher phytoestrogen concentrations in a number of brain regions (frontal cortex, amygdala & cerebellum); in frontal cortex, expression of CALB (a neuroprotective calcium-binding protein) decreased while COX-2 (an inducible inflammatory factor prevalent in Alzheimer's disease) increased. Conclusions: Results suggest that dietary phytoestrogens significantly sex-reversed the normal sexually dimorphic expression of VSM. Specifically, in tasks requiring the use of reference, but not working, memory, VSM was enhanced in females fed the Phyto-600 diet, whereas in males VSM was inhibited by the same diet. These findings suggest that dietary soy-derived phytoestrogens can influence learning and memory and alter the expression of proteins involved in neural protection and inflammation in rats.
Kirsten E Smayda
Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across signal-to-noise ratios (SNRs), modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to the audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audio-visual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. These results suggest that older adults can perceive speech as well as younger
Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian
This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.
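One ingredient of the algorithm described above, edge enhancement driven by depth cues, can be illustrated in image space: wherever the depth buffer jumps between neighboring pixels there is an object silhouette, and darkening those pixels produces the cartographic outline look. A minimal sketch (the difference kernel and the threshold are arbitrary illustrative choices, not the paper's multi-pass GPU implementation):

```python
import numpy as np

def depth_edges(depth, threshold=0.1):
    """Mark pixels where the depth buffer jumps, i.e. object silhouettes."""
    gx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    gy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
    return (gx + gy) > threshold

def enhance(image, depth, edge_value=0.0):
    """Draw dark edge lines on top of a rendered grayscale image in [0, 1]."""
    out = image.copy()
    out[depth_edges(depth)] = edge_value
    return out

# Toy scene: a near "building" (depth 0.3) in front of far ground (depth 0.9)
depth = np.full((8, 8), 0.9)
depth[2:6, 2:6] = 0.3
img = np.full((8, 8), 1.0)
out = enhance(img, depth)  # white image with a dark outline around the block
```

In a real renderer this pass would run as a fragment shader over the depth buffer; the numpy version only conveys the image-space idea.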
This thesis focuses on augmented reality, especially its use in marketing. The main objective is to explain why this technology is a suitable tool for marketing and to assess its use in real conditions. This is achieved by describing specific devices and use cases of the technology in practice, while the evaluation of its use in a real environment is based on statistics. The contribution of the thesis is an objective evaluation of this technology and the provision of...
Obeidy, Waqas Khalid; Arshad, Haslina; Huang, Jiung Yao
Recent mobile technologies have revolutionized the way people experience their environment. Although there is only limited research on users' acceptance of AR in the cultural tourism context, previous researchers have explored the opportunities of using augmented reality (AR) to enhance user experience. Recent AR research lacks work that integrates dimensions specific to cultural tourism and to the smart-glass context. Hence, this work proposes an AR acceptance model in the context of cultural heritage tourism and smart glasses capable of performing augmented reality. In this paper we aim to present an AR acceptance model to understand the AR usage behavior and visiting intention of tourists who use smart-glass-based AR at UNESCO cultural heritage destinations in Malaysia. Furthermore, this paper identifies information quality, technology readiness, visual appeal, and facilitating conditions as external variables and key factors influencing visitors' beliefs, attitudes, and usage intention.
Sato, Takeshi; Suzuki, Akio
The aim of this study is to optimize CALL environments as a learning tool rather than a gloss, focusing on the learning of polysemous words which refer to spatial relationship between objects. A lot of research has already been conducted to examine the efficacy of visual glosses while reading L2 texts and has reported that visual glosses can be…
Africa, Eileen K.; van Deventer, Karel J.
Pre-schoolers are in a window period for motor skill development. Visual-motor integration (VMI) is the foundation for academic and sport skills. Therefore, it must develop before formal schooling. This study attempted to improve VMI skills. VMI skills were measured with the "Beery-Buktenica developmental test of visual-motor integration 6th…
Schoevers, E.M.; Kroesbergen, E.H.; Pitta-Pantazi, D.
This article describes a new pedagogical method, an integrated visual art and geometry program, which has the aim to increase primary school students' creative problem solving and geometrical ability. This paper presents the rationale for integrating visual art and geometry education. Furthermore
Kristjansson, Arni; Oladottir, Berglind; Most, Steven B.
Emotional stimuli often capture attention and disrupt effortful cognitive processing. However, cognitive processes vary in the degree to which they require effort. We investigated the impact of emotional pictures on visual search and on automatic priming of search. Observers performed visual search after task-irrelevant neutral or emotionally…
N. Shirzadian (Najereh); J.A. Redi (Judith); T. Röggla (Tom); A. Panza (Alice); F.-M. Nack (Frank); P.S. Cesar Garcia (Pablo Santiago)
This paper evaluates the influence of an additional visual aesthetic layer on the experience of concert goers during a live event. The additional visual layer incorporates musical features as well as bio-sensing data collected during the concert, which is coordinated by our audience
Gregory D. Scott
Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral versus perifoveal visual stimulation (11°-15° vs. 2°-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region-of-interest analysis and a whole-brain analysis. Our results using individually defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral versus perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory and multisensory and/or supramodal regions, such as posterior parietal cortex, frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal and multisensory regions, to altered visual processing in
Visalli, Antonino; Vallesi, Antonino
Visual search tasks have often been used to investigate how cognitive processes change with expertise. Several studies have shown visual experts' advantages in detecting objects related to their expertise. Here, we tried to extend these findings by investigating whether professional search experience could boost top-down monitoring processes involved in visual search, independently of advantages specific to objects of expertise. To this aim, we recruited a group of quality-control workers employed in citrus farms. Given the specific features of this type of job, we expected that the extensive employment of monitoring mechanisms during orange selection could enhance these mechanisms even in search situations in which orange-related expertise is not suitable. To test this hypothesis, we compared performance of our experimental group and of a well-matched control group on a computerized visual search task. In one block the target was an orange (expertise target) while in the other block the target was a Smurfette doll (neutral target). The a priori hypothesis was to find an advantage for quality-controllers in those situations in which monitoring was especially involved, that is, when deciding the presence/absence of the target required a more extensive inspection of the search array. Results were consistent with our hypothesis. Quality-controllers were faster in those conditions that extensively required monitoring processes, specifically, the Smurfette-present and both target-absent conditions. No differences emerged in the orange-present condition, which resulted to mainly rely on bottom-up processes. These results suggest that top-down processes in visual search can be enhanced through immersive real-life experience beyond visual expertise advantages.
Minocha, Shailey; Tudor, Ana-Despina
Virtual reality is becoming pervasive in several domains - in arts and film-making, for environmental causes, in medical education, in disaster management training, in sports broadcasting, in entertainment, and in supporting patients with dementia. An awareness of virtual reality technology and its integration in curriculum design will provide and enhance employability skills for current and future workplaces. In this webinar, we will describe the evolution of virtual reality technolog...
V. A. Abramova
In post-nonclassical science, when studying spontaneous systems it is important to consider the narrow orientation of perception in the solution of specific objectives, in this context the perception of symbolic transformations at various levels, subjective and objective. The virtual reality now widespread thanks to advances in information and communication technologies consists of hypertrophied effects of the virtualization of reality, where the virtual image has nothing in common with reality, ...
Brooks, Frederick P., Jr.
The utility of virtual reality computer graphics in telepresence applications is not hard to grasp and promises to be great. When the virtual world is entirely synthetic, as opposed to real but remote, the utility is harder to establish. Vehicle simulators for aircraft, vessels, and motor vehicles are proving their worth every day. Entertainment applications such as Disney World's StarTours are technologically elegant, good fun, and economically viable. Nevertheless, some of us have no real desire to spend our lifework serving the entertainment craze of our sick culture; we want to see this exciting technology put to work in medicine and science. The topics covered include the following: testing a force display for scientific visualization -- molecular docking; and testing a head-mounted display for scientific and medical visualization.
Gokeler, Alli; Bisschop, Marsha; Myer, Gregory D; Benjaminse, Anne; Dijkstra, Pieter U; van Keeken, Helco G; van Raay, Jos J A M; Burgerhof, Johannes G M; Otten, Egbert
The purpose of this study was to evaluate the influence of immersion in a virtual reality environment on knee biomechanics in patients after ACL reconstruction (ACLR). It was hypothesized that virtual reality techniques aimed at changing attentional focus would influence altered knee flexion angle, knee extension moment, and peak vertical ground reaction force (vGRF) in patients following ACLR. Twenty athletes following ACLR and 20 healthy controls (CTRL) performed a step-down task in both a non-virtual reality environment and a virtual reality environment displaying a pedestrian traffic scene. A motion analysis system and force plates were used to measure kinematics and kinetics during the step-down task to analyse each single-leg landing. A main effect of environment was found for knee flexion excursion. Significant interaction differences were found between environment and group for vGRF (P = 0.004) and knee moment, indicating an influence of the virtual reality environment on knee biomechanics in patients after ACLR compared with controls. Patients after ACLR immersed in the virtual reality environment demonstrated knee joint biomechanics that approximate those of CTRL. The results of this study indicate that a realistic virtual reality scenario may distract patients after ACLR from conscious motor control. Application of clinically available technology may aid current rehabilitation programmes in targeting altered movement patterns after ACLR. Diagnostic study, Level III.
Poort, Jasper; Self, Matthew W; van Vugt, Bram; Malkki, Hemi; Roelfsema, Pieter R
Segregation of images into figures and background is fundamental for visual perception. Cortical neurons respond more strongly to figural image elements than to background elements, but the mechanisms of figure-ground modulation (FGM) are only partially understood. It is unclear whether FGM in early and mid-level visual cortex is caused by an enhanced response to the figure, a suppressed response to the background, or both. We studied neuronal activity in areas V1 and V4 in monkeys performing a texture segregation task. We compared texture-defined figures with homogeneous textures and found an early enhancement of the figure representation, and a later suppression of the background. Across neurons, the strength of figure enhancement was independent of the strength of background suppression. We also examined activity in the different V1 layers. Both figure enhancement and ground suppression were strongest in superficial and deep layers and weaker in layer 4. The current-source density profiles suggested that figure enhancement was caused by stronger synaptic inputs in feedback-recipient layers 1, 2, and 5 and ground suppression by weaker inputs in these layers, suggesting an important role for feedback connections from higher-level areas. These results provide new insights into the mechanisms for figure-ground organization. © The Author 2016. Published by Oxford University Press.
Berland, Kristian [Centre for Materials Science and Nanotechnology (SMN), University of Oslo, P.O.B. 1126 Blindern, NO-0318 Oslo (Norway); Song, Xin [Department of Physics, University of Oslo, P.O.B. 1048 Blindern, NO-0316 Oslo (Norway); Carvalho, Patricia A. [SINTEF Materials and Chemistry, Forskningsveien 1, NO-0314 Oslo (Norway); Persson, Clas; Finstad, Terje G. [Centre for Materials Science and Nanotechnology (SMN), University of Oslo, P.O.B. 1126 Blindern, NO-0318 Oslo (Norway); Department of Physics, University of Oslo, P.O.B. 1048 Blindern, NO-0316 Oslo (Norway); Løvvik, Ole Martin [Department of Physics, University of Oslo, P.O.B. 1048 Blindern, NO-0316 Oslo (Norway); SINTEF Materials and Chemistry, Forskningsveien 1, NO-0314 Oslo (Norway)
Energy filtering has been suggested by many authors as a means to improve thermoelectric properties. The idea is to filter away low-energy charge carriers in order to increase the Seebeck coefficient without compromising electronic conductivity. This concept was investigated in the present paper for a specific material (ZnSb) by a combination of first-principles atomic-scale calculations, Boltzmann transport theory, and experimental studies of the same system. The potential of filtering in this material was first quantified; for example, it was found that the power factor could be enhanced by an order of magnitude when the filter barrier height was 0.5 eV. Measured values of the Hall carrier concentration in bulk ZnSb were then used to calibrate the transport calculations, and nanostructured ZnSb with average grain size around 70 nm was processed to achieve filtering as suggested previously in the literature. Various scattering mechanisms were employed in the transport calculations and compared with the measured transport properties of nanostructured ZnSb as a function of temperature. Reasonable correspondence between theory and experiment could be achieved when a combination of constant-lifetime scattering and energy filtering with a 0.25 eV barrier was employed. However, the difference between bulk and nanostructured samples was not sufficient to justify the introduction of an energy filtering mechanism. The reasons for this, and possibilities to achieve filtering, were discussed in the paper.
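The filtering argument can be made concrete with a simple Boltzmann-transport estimate: the Seebeck coefficient is proportional to the average of (E - mu) over the conducting spectral weight, so a barrier that removes low-energy carriers pushes that average up. A minimal numerical sketch (the E-linear transport function, the band parameters, and the barrier height used here are illustrative assumptions, not the paper's calibrated ZnSb model):

```python
import numpy as np

kT = 0.025   # thermal energy at roughly room temperature (eV)
mu = 0.05    # chemical potential measured from the band edge (eV)
E = np.linspace(0.0, 1.0, 20001)  # carrier energy above the band edge (eV)

# Illustrative transport distribution for a 3D parabolic band: sigma(E) ~ E
sigma = E.copy()

# Fermi window -df/dE, peaked at the chemical potential
window = 1.0 / (4.0 * kT * np.cosh((E - mu) / (2.0 * kT)) ** 2)

def mean_carrier_energy(sig):
    """<E - mu> over the conducting weight; the Seebeck coefficient is this
    average divided by e*T, so a larger value means a larger |S|."""
    w = sig * window
    return np.sum(w * (E - mu)) / np.sum(w)

S_bulk = mean_carrier_energy(sigma)                           # unfiltered
S_filt = mean_carrier_energy(np.where(E > 0.25, sigma, 0.0))  # 0.25 eV barrier
# Every conducting carrier now carries at least 0.25 eV - mu = 0.2 eV,
# so the filtered average, and hence the Seebeck coefficient, is much larger.
```

The conductivity cost of the barrier (the shrinking of the total weight) is what limits the net power-factor gain in practice, which is why the paper balances barrier height against carrier concentration.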
Wright, Jennifer M; Ren, Suelynn; Constantin, Annie; Clarke, Paul B S
Nicotine and D-amphetamine can strengthen reinforcing effects of unconditioned visual stimuli. We investigated whether these reinforcement-enhancing effects reflect a slowing of stimulus habituation and depend on food restriction. Adult male rats pressed an active lever to illuminate a cue light during daily 60-min sessions. Depending on the experiment, rats were challenged with fixed or varying doses of D-amphetamine (0.25-2 mg/kg IP) and nicotine (0.025-0.2 mg/kg SC) or with the tobacco constituent norharman (0.03-10 μg/kg IV). Experiment 1 tested for possible reinforcement-enhancing effects of D-amphetamine and norharman. Experiment 2 investigated whether nicotine and amphetamine inhibited the spontaneous within-session decline in lever pressing. Experiment 3 assessed the effects of food restriction. Amphetamine (0.25-1 mg/kg) and nicotine (0.1 mg/kg) increased active lever pressing specifically (two- to threefold increase). The highest doses of nicotine and amphetamine also affected inactive lever responding (increase and decrease, respectively). With the visual reinforcer omitted, responding was largely extinguished. Neither drug appeared to slow habituation, as assessed by the within-session decline in lever pressing, and reinforcement-enhancing effects still occurred if the drugs were given after this decline had occurred. Food restriction enhanced the reinforcement-enhancing effect of amphetamine but not that of nicotine. Responding remained goal-directed after several weeks of testing. Low doses of D-amphetamine and nicotine produced reinforcement enhancement even in free-feeding subjects, independent of the spontaneous within-session decline in responding. Reinforcement enhancement by amphetamine, but not nicotine, was enhanced by concurrent subchronic food restriction.
de Tommaso, Marina; Ricci, Katia; Delussi, Marianna; Montemurno, Anna; Vecchio, Eleonora; Brunetti, Antonio; Bevilacqua, Vitoantonio
We propose a virtual reality (VR) model, reproducing a house environment, where color modification of target places, obtainable by home automation in a real ambient, was tested by means of a P3b paradigm. The target place (bathroom door) was designed to be recognized during virtual wayfinding in a realistic reproduction of a house environment. Different color and luminance conditions, easily obtained in the real ambient from a remote home automation control, were applied to the target and standard places, with all the doors illuminated in white (W) and only target doors colored with a green (G) or red (R) spotlight. Three different virtual environments (VE) were depicted, as the bathroom was placed in the aisle (A), living room (L), and bedroom (B). EEG was recorded from 57 scalp electrodes in 10 healthy subjects in the 60-80 year age range (O, old group) and 12 normal cases in the 20-30 year age range (Y, young group). In the young group, all the target stimuli determined a significant increase in P3b amplitude on the parietal, occipital, and central electrodes compared to the frequent-stimulus condition, whatever the color of the target door, while in the elderly group the P3b obtained with the green and red colors was significantly different from the frequent stimulus on the parietal, occipital, and central derivations, whereas the white stimulus did not evoke a significantly larger P3b with respect to the frequent stimulus. The modulation of P3b amplitude obtained by color and luminance change of the target place suggests that cortical resources, able to compensate for the age-related progressive loss of cognitive performance, need to be facilitated even in the normal elderly. The event-related responses obtained by virtual reality may be a reliable method to test the feasibility of adapting environments to age-related cognitive changes.
The same scientific visualizations, animations, and images that are powerful tools for geoscientists can serve an important role in K-12 geoscience education by encouraging students to communicate in ways that help them develop habits of thought that are similar to those used by scientists. Resources such as those created by NASA's Scientific Visualization Studio (SVS), which are intended to inform researchers and the public about NASA missions, can be used in classrooms to promote thoughtful, engaged learning. Instructional materials that make use of those visualizations have been developed and are being used in K-12 classrooms in ways that demonstrate the vitality of the geosciences. For example, the Center for Geoscience and Society at the American Geosciences Institute (AGI) helped to develop a publication that outlines an inquiry-based approach to introducing students to the interpretation of scientific visualizations, even when they have had little to no prior experience with such media. To facilitate these uses, the SVS team worked with Center staff and others to adapt the visualizations, primarily by removing most of the labels and annotations. Engaging with these visually compelling resources serves as an invitation for students to ask questions, interpret data, draw conclusions, and make use of other processes that are key components of scientific thought. This presentation will share specific resources for K-12 teaching (all of which are available online, from NASA, and/or from AGI), as well as the instructional principles that they incorporate.
Dye, Matthew W G; Seymour, Jenessa L; Hauser, Peter C
Deafness results in cross-modal plasticity, whereby visual functions are altered as a consequence of a lack of hearing. Here, we present a reanalysis of data originally reported by Dye et al. (PLoS One 4(5):e5640, 2009) with the aim of testing additional hypotheses concerning the spatial redistribution of visual attention due to deafness and the use of a visuogestural language (American Sign Language). By looking at the spatial distribution of errors made by deaf and hearing participants performing a visuospatial selective attention task, we sought to determine whether there was evidence for (1) a shift in the hemispheric lateralization of visual selective function as a result of deafness, and (2) a shift toward attending to the inferior visual field in users of a signed language. While no evidence was found for or against a shift in lateralization of visual selective attention as a result of deafness, a shift in the allocation of attention from the superior toward the inferior visual field was inferred in native signers of American Sign Language, possibly reflecting an adaptation to the perceptual demands imposed by a visuogestural language.
Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan
The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus, should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in 2-dimension space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine.
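The pipeline described here (score each feature, discard the low-scoring ones, then embed the retained subset in two dimensions) can be sketched compactly. In this hypothetical stand-in, a between-/within-class variance ratio replaces the paper's feature subset score criterion, and a PCA projection replaces t-SNE so the sketch stays dependency-free:

```python
import numpy as np

def feature_scores(X, y):
    """Score each feature by between-class vs. within-class variance
    (a simple stand-in for the paper's feature subset score criterion)."""
    scores = []
    for j in range(X.shape[1]):
        col = X[:, j]
        overall = col.mean()
        between = sum((col[y == c].mean() - overall) ** 2 for c in np.unique(y))
        within = sum(col[y == c].var() for c in np.unique(y)) + 1e-12
        scores.append(between / within)
    return np.array(scores)

def visualize_2d(X, y, keep=2):
    """Keep the highest-scoring features, then project to 2-D.
    PCA stands in for t-SNE here; with sklearn one would instead call
    TSNE(n_components=2).fit_transform on the selected columns."""
    idx = np.argsort(feature_scores(X, y))[::-1][:keep]
    Xs = X[:, idx] - X[:, idx].mean(axis=0)
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    return Xs @ Vt[:2].T

# Toy data: two classes separated along feature 0, pure noise elsewhere
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
y = np.repeat([0, 1], 50)
X[y == 1, 0] += 4.0
emb = visualize_2d(X, y)  # (100, 2) embedding driven by the informative feature
```

Dropping the irrelevant noise features before embedding is exactly what protects the 2-D map from the degradation the paper attributes to irrelevant inputs.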
Team Cohesion and Psychological Resilience in ROTC Cadets Using a Virtual-Reality Team Cohesion Test. Principal Investigator: Josh Woolley, MD/PhD. Contracting Organization: Northern California Institute, San... Grant Number: W81XWH-15-1-0042. ...targets while flying a virtual air vehicle. No individual has access to all the necessary information or controls, so operating as a team is crucial
Cidota, M.A.; Lukosch, S.G.; Datcu, D.; Lukosch, H.K.
In many fields of activity, working in teams is necessary for completing tasks in a proper manner and often requires visual context-related information to be exchanged between team members. In such a collaborative environment, awareness of other people’s activity is an important feature of
D.E. Meuffels (Duncan); J.W. Potters (Jan Willem); A.H.J. Koning (Anton); C.H. Brown Jr Jr. (Charles); J.A.N. Verhaar (Jan); M. Reijman (Max)
Background and purpose: Non-anatomic bone tunnel placement is the most common cause of a failed ACL reconstruction. Accurate and reproducible methods to visualize and document bone tunnel placement are therefore important. We evaluated the reliability of standard radiographs, CT scans,
Kobayashi, Leo; Zhang, Xiao Chi; Collins, Scott A.; Karim, Naz; Merck, Derek L.
Introduction Augmented reality (AR), mixed reality (MR), and virtual reality devices are enabling technologies that may facilitate effective communication in healthcare between those with information and knowledge (clinician/specialist; expert; educator) and those seeking understanding and insight (patient/family; non-expert; learner). Investigators initiated an exploratory program to enable the study of AR/MR use-cases in acute care clinical and instructional settings. Methods Academic clinician educators, computer scientists, and diagnostic imaging specialists conducted a proof-of-concept project to 1) implement a core holoimaging pipeline infrastructure and open-access repository at the study institution, and 2) use novel AR/MR techniques on off-the-shelf devices with holoimages generated by the infrastructure to demonstrate their potential role in the instructive communication of complex medical information. Results The study team successfully developed a medical holoimaging infrastructure methodology to identify, retrieve, and manipulate real patients’ de-identified computed tomography and magnetic resonance imagesets for rendering, packaging, transfer, and display of modular holoimages onto AR/MR headset devices and connected displays. Holoimages containing key segmentations of cervical and thoracic anatomic structures and pathology were overlaid and registered onto physical task trainers for simulation-based “blind insertion” invasive procedural training. During the session, learners experienced and used task-relevant anatomic holoimages for central venous catheter and tube thoracostomy insertion training with enhanced visual cues and haptic feedback. Direct instructor access into the learner’s AR/MR headset view of the task trainer was achieved for visual-axis interactive instructional guidance. Conclusion Investigators implemented a core holoimaging pipeline infrastructure and modular open-access repository to generate and enable access to modular

Chou, Te-Lien; Chanlin, Lih-Juan
A context-aware and mixed-reality exploring tool can not only effectively provide an information-rich environment to users, but also allow them to quickly utilize useful resources and enhance environment awareness. This study integrates Augmented Reality (AR) technology into smartphones to create a stimulating learning experience at a university…
Zopf, Regine; Polito, Vince; Moore, James
Embodiment and agency are key aspects of how we perceive ourselves that have typically been associated with independent mechanisms. Recent work, however, has suggested that these mechanisms are related. The sense of agency arises from recognising a causal influence on the external world. This influence is typically realised through bodily movements and thus the perception of the bodily self could also be crucial for agency. We investigated whether a key index of agency - intentional binding - was modulated by body-specific information. Participants judged the interval between pressing a button and a subsequent tone. We used virtual reality to manipulate two aspects of movement feedback. First, form: participants viewed a virtual hand or sphere. Second, movement congruency: the viewed object moved congruently or incongruently with the participant's hidden hand. Both factors, form and movement congruency, significantly influenced embodiment. However, only movement congruency influenced intentional binding. Binding was increased for congruent compared to incongruent movement feedback irrespective of form. This shows that the comparison between viewed and performed movements provides an important cue for agency, whereas body-specific visual form does not. We suggest that embodiment and agency mechanisms both depend on comparisons across sensorimotor signals but that they are influenced by distinct factors.
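The intentional-binding measure used in this paradigm can be summarized numerically: binding is indexed by shorter judged intervals between the button press and the tone. A minimal sketch, with entirely invented interval judgements, computes the binding effect as the difference in mean judged interval between the two movement-feedback conditions.

```python
# Illustrative computation of an intentional-binding effect from interval
# judgements (ms), as in the paradigm described above. Shorter judged
# action-tone intervals indicate stronger binding. Data are made up.
from statistics import mean

congruent = [180, 195, 170, 188, 176]    # judged intervals, congruent feedback
incongruent = [230, 245, 238, 225, 240]  # judged intervals, incongruent feedback

def binding_effect(cond_a, cond_b):
    """Positive value: intervals judged shorter (stronger binding) in cond_a."""
    return mean(cond_b) - mean(cond_a)

effect = binding_effect(congruent, incongruent)
print(f"binding effect: {effect:.1f} ms")  # → binding effect: 53.8 ms
```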
Chou, Betty; Handa, Victoria L
This article explores the pros and cons of virtual reality simulators, their abilities to train and assess surgical skills, and their potential future applications. Computer-based virtual reality simulators and more conventional box trainers are compared and contrasted. The virtual reality simulator provides objective assessment of surgical skills and immediate feedback to further enhance training. With this ability to provide standardized, unbiased assessment of surgical skills, the virtual reality trainer has the potential to be a tool for selecting, instructing, certifying, and recertifying gynecologists.
Most electrical substations are remotely monitored and controlled by Supervisory Control and Data Acquisition (SCADA) applications. Current SCADA systems have been significantly enhanced by standardized communication protocols, the most prominent being the IEC 61850 international standard. These enhancements enable improvements in different domains of SCADA systems such as communication engineering, data management, and visualization of automation process data in SCADA applications. Process data visualization is usually achieved through Human Machine Interface (HMI) screens in substation control centres. However, this visualization method sometimes makes supervision, control, and maintenance procedures executed by engineers slow and error-prone because it separates equipment from its automation data. Augmented reality (AR) and mixed reality (MR) visualization techniques have matured enough to provide new possibilities for displaying relevant data wherever needed. This paper presents a novel methodology for visualizing process-related SCADA data to enhance and facilitate human-centric activities in substations, such as regular equipment maintenance. The proposed solution utilizes AR visualization techniques together with standards-based communication protocols used in substations. The developed proof-of-concept AR application, which displays SCADA data on the corresponding substation equipment with the help of AR markers, demonstrates the originality and benefits of the proposed visualization method. Additionally, the application displays widgets and 3D models of substation equipment to make the visualization more user-friendly and intuitive. The visualized SCADA data needs to be refreshed under soft real-time data-delivery constraints. Therefore, the proposed solution is thoroughly tested to demonstrate the applicability of the proposed methodology in real substations.
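The marker-based overlay described above amounts to resolving a detected AR marker to an IEC 61850-style object reference and showing its latest SCADA value on the equipment. A minimal sketch of that lookup follows; the marker IDs, object references, and cached values are invented for illustration and are not from the paper.

```python
# Minimal sketch of marker-to-SCADA resolution: an AR marker detected on a
# piece of substation equipment maps to an IEC 61850-style object reference,
# whose cached SCADA value is rendered as overlay text. All names/values
# here are hypothetical.
MARKER_TO_REF = {
    7: "SS1LD1/MMXU1.TotW",    # active power measurement (illustrative)
    8: "SS1LD1/XCBR1.Pos",     # circuit-breaker position (illustrative)
}

SCADA_CACHE = {
    "SS1LD1/MMXU1.TotW": (412.5, "kW"),
    "SS1LD1/XCBR1.Pos": ("closed", ""),
}

def overlay_text(marker_id: int) -> str:
    """Resolve a marker ID to an overlay string, or flag unknown markers."""
    ref = MARKER_TO_REF.get(marker_id)
    if ref is None:
        return "unknown marker"
    value, unit = SCADA_CACHE[ref]
    return f"{ref} = {value} {unit}".strip()

print(overlay_text(7))  # → SS1LD1/MMXU1.TotW = 412.5 kW
```

In a real system the cache would be fed by a live IEC 61850 client with soft real-time refresh, as the paper's constraints require; the sketch shows only the resolution step.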
It is known that training in intelligent virtual reality, through the use of computer games, can improve spatial skills, especially visualization, and enhance academic achievement. Through an experiment using Tetris software, two objectives were achieved: developing spatial as well as intelligence skills, and enhancing academic achievement, focusing on mathematics. This study followed studies dealing with the impact of putting the learner into action in 3D space software. During teaching, a transition from 2D to 3D spatial perception and operation occurred. A positive transfer from 3D virtual reality rotation training to structural induction skills, by means of mental imaging, was also achieved. At the same time, motivation for learning was enhanced without using extrinsic reinforcements. The duration of concentration while using the intelligent software increased gradually up to 60 minutes.
Bender, Melinda S; Martinez, Suzanna; Kennedy, Christine
Rapid proliferation of smartphone ownership and use among Latinos offers a unique opportunity to employ innovative visually enhanced low-text (VELT) mobile health applications (mHealth app) to promote health behavior change for Latinos at risk for lifestyle-related diseases. Using focus groups and in-depth interviews with 16 promotores and 5 health care providers recruited from California clinics, this qualitative study explored perceptions of visuals for a VELT mHealth app promoting physical activity (PA) and limiting sedentary behavior (SB) for Latinos. In this Phase 1 study, participants endorsed visuals portraying PA guidelines and recommended visuals depicting family and socially oriented PA. Overall, participants supported a VELT mHealth app as an alternative to text-based education. Findings will inform the future Phase 2 study development of a culturally appropriate VELT mHealth app to promote PA for Latinos, improve health literacy, and provide an alternative to traditional clinic text-based health education materials. © The Author(s) 2015.
Drawing on ethnographic fieldwork in a transnational Fortune 50 company headquarters' environmental management team, this paper opens up a range of situations that took part in enacting the company's carbon footprint. Common to all these situations is that the environmental realities enacted have … -order critique may be generated with these scholars' work. By focussing on the capacities and modes of critique, the paper questions the character of the political in these authors' versions of ontological and ontic politics. This comparison of the possibilities and modes of criticising within the field (first-order, infra-critique) as well as with these two authors intends to contribute to the identification and problematisation of the theoretical and political "mechanics" in the ontological turn.
Buesing, Mark; Cook, Michael
Augmented reality (AR) is a technology used on computing devices where processor-generated graphics are rendered over real objects to enhance the sensory experience in real time. In other words, what you are really seeing is augmented by the computer. Many AR games already exist for systems such as Kinect and Nintendo 3DS and mobile apps, such as…
The twenty-first century hosts a well-established global economy, where leaders are required to have increasingly complex skills that include creativity, innovation, vision, relatability, critical thinking and well-honed communications methods. The experience gained by learning to be visually literate includes the ability to see, observe, analyze,…
Philominraj, Andrew; Jeyabalan, David; Vidal-Silva, Christian
This article presents an empirical study carried out among the students of higher secondary schools to find out how English language learning occurs naturally in an environment where learners are encouraged by an appropriate method such as visual learning. The primary data was collected from 504 students with different pretested questionnaires. A…
Hovgesen, Caroline Harder; Wilhjelm, Jens E.; Vilmann, Peter
A systematic search was performed in five databases: Cochrane Library, Embase (through Ovid), MEDLINE (through PubMed), Scopus, and Web of Science, from inception to April 12th, 2017. Each search was based on the search terms: ultrasound, needle, visualization, and comparison, with related synonyms and spelling...
Ganz, Jennifer B.; Boles, Margot B.; Goodwyn, Fara D.; Flores, Margaret M.
Although electronic tools such as handheld computers have become increasingly common throughout society, implementation of such tools to improve skills in individuals with intellectual and developmental disabilities has lagged in the professional literature. However, the use of visual scripts for individuals with disabilities, particularly those…
Day, Janice Neibaur; McDonnell, Andrea P.; Heathfield, Lora Tuesday
Emergent literacy can be viewed as skills that are precursors to later reading and writing (Sulzby & Teale, 1991) or can be more broadly conceptualized as literacy acquisition that occurs along a developmental continuum. Because children with disabilities, such as visual impairments, can be at risk for later reading difficulties, it is critical…
Falter, Christine M.; Braeutigam, Sven; Nathan, Roger; Carrington, Sarah; Bailey, Anthony J.
We compared judgements of the simultaneity or asynchrony of visual stimuli in individuals with autism spectrum disorders (ASD) and typically-developing controls using Magnetoencephalography (MEG). Two vertical bars were presented simultaneously or non-simultaneously with two different stimulus onset
Matsangidou, Maria; Ang, Chee Siang; Sakel, Mohamed
Virtual Reality is a technology that allows users to experience a computer-simulated reality with visual, auditory, tactile and olfactory interactions. In the past decades, there has been considerable interest in using Virtual Reality for clinical purposes, including pain management. This article provides a systematic review of research on Virtual Reality and pain management, with the aim of understanding the feasibility of current Virtual Reality technologies and content design approaches in...
The objective is to analyze the use of the emerging 3D computer technology of Virtual Reality for relieving pain in physically impaired conditions, such as those of burn victims, amputees, and phantom-limb patients, during therapy and medical procedures. Virtual technology generates a three-dimensional visual virtual world which enables interaction. A comparison will be made between the emerging technology of Virtual Reality and the methods usually used, namely the use of medicine. Medicine ha...
Yalcin, Elvan; Balci, Ozlem
World Eye Hospital, Department of Pediatric Ophthalmology, Istanbul, Turkey. Background: The purpose of this study was to evaluate the efficacy of neural vision therapy, also termed perceptual vision therapy, in enhancing best corrected visual acuity (BCVA) and contrast sensitivity function in amblyopic patients. Methods: This prospective study enrolled 99 subjects previously diagnosed with unilateral hypermetropic amblyopia, aged 9–50 years. The subjects were divided into two groups, with 53 subjects (53 eyes) in the perceptual vision therapy group and 46 subjects (46 eyes) in the control group. Because the nature of the treatment demands hard work and strict compliance, we enrolled the minimal number of subjects required to achieve statistically significant results. Informed consent was obtained from all subjects. Study phases included a baseline screening, a series of 45 training sessions with perceptual vision therapy, and an end-of-treatment examination. BCVA and contrast sensitivity function at 1.5, 3, 6, 12, and 18 cycles per degree spatial frequencies were obtained for statistical analysis in both groups. All subjects had follow-up examinations within 4–8 months. With the exception of one subject from the study group and two subjects from the control group, all subjects had occlusion during childhood. The study was not masked. Results: The results for the study group demonstrated a mean improvement of 2.6 logarithm of the minimum angle of resolution (logMAR) lines in visual acuity (from 0.42 to 0.16 logMAR). Contrast sensitivity function improved at 1.5, 3, 6, 12, and 18 cycles per degree spatial frequencies. The control group did not show any significant change in visual acuity or contrast sensitivity function. None of the treated eyes showed a drop in visual acuity. Manifest refractions remained unchanged during the study. Conclusion: The results of our study demonstrate the efficacy of perceptual vision therapy in
Zhao, Lina; Dai, Yun; Zhao, Junlei; Zhou, Xiaojun
Ocular aberration correction can significantly improve visual function of the human eye. However, even under ideal aberration-correction conditions, pupil diffraction restricts the resolution of retinal images. Pupil filtering is a simple super-resolution (SR) method that can overcome this diffraction barrier. In this study, a 145-element piezoelectric deformable mirror was used as a pupil phase filter because of its programmability and high fitting accuracy. Continuous phase-only filters were designed based on Zernike polynomial series and fitted through closed-loop adaptive optics. SR results were validated using double-pass point spread function images. Contrast sensitivity was further assessed to verify the SR effect on visual function. An F-test was conducted for nested models to statistically compare the different contrast sensitivity functions (CSFs). These results indicated that CSFs with the proposed SR filter were significantly higher than with diffraction-limited correction (p < …) for vision optical correction of the human eye.
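A phase-only pupil filter built from a Zernike polynomial series, as described above, can be sketched as a weighted sum of radially symmetric Zernike terms. The coefficients below are arbitrary illustrative values, not the paper's optimized filter; only the standard Z(2,0) (defocus) and Z(4,0) (spherical) polynomial forms are taken as given.

```python
# Toy sketch of a continuous phase-only pupil filter as a weighted sum of
# low-order Zernike terms, in the spirit of the filter design described
# above. Coefficients are arbitrary, for illustration only.
import math

def zernike_defocus(r):
    """Z(2,0) on the unit pupil, normalized radius r in [0, 1]."""
    return math.sqrt(3) * (2 * r * r - 1)

def zernike_spherical(r):
    """Z(4,0) on the unit pupil."""
    return math.sqrt(5) * (6 * r**4 - 6 * r**2 + 1)

def filter_phase(r, c_defocus=0.1, c_spherical=-0.05):
    """Phase (radians) of the pupil filter at normalized radius r."""
    return c_defocus * zernike_defocus(r) + c_spherical * zernike_spherical(r)

# Sample the radial phase profile from pupil center to edge.
samples = [round(filter_phase(r / 10), 4) for r in range(0, 11)]
print(samples)
```

In the actual system such a phase profile would be fitted onto the deformable mirror through closed-loop adaptive optics; the sketch only evaluates the target phase map.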
Lee, Minyoung; Blake, Randolph; Kim, Sujin; Kim, Chai-Youn
Predictive influences of auditory information on resolution of visual competition were investigated using music, whose visual symbolic notation is familiar only to those with musical training. Results from two experiments using different experimental paradigms revealed that melodic congruence between what is seen and what is heard impacts perceptual dynamics during binocular rivalry. This bisensory interaction was observed only when the musical score was perceptually dominant, not when it was suppressed from awareness, and it was observed only in people who could read music. Results from two ancillary experiments showed that this effect of congruence cannot be explained by differential patterns of eye movements or by differential response sluggishness associated with congruent score/melody combinations. Taken together, these results demonstrate robust audiovisual interaction based on high-level, symbolic representations and its predictive influence on perceptual dynamics during binocular rivalry.
Ahmad, Muhammad; Hippolyte, Jean-Laurent; Reynolds, Jonathan; Mourshed, Monjur; Rezgui, Yacine
Buildings account for 40% of total global energy use and contribute towards 30% of total CO2 emissions. Heating, ventilation, and air conditioning (HVAC) systems are the major sources of energy consumption in buildings, and there has been extensive research focusing on controlling them efficiently. However, in most cases, this is achieved at the cost of sacrificing thermal, visual and/or IAQ comfort. A high level of carbon dioxide (commonly used as a metric for measuring air quality) can affec...
Reinhart, Robert M. G.; Woodman, Geoffrey F.
Theories of attention propose that we rely on working memory to control attention by maintaining target representations in this active store while our visual systems search for certain objects. Here, we show that the tuning of perceptual attention can be sharply accelerated by noninvasive brain stimulation. Our electrophysiological measurements showed that these improvements in attentional tuning were preceded by changes in event-related potentials thought to index long-term memory, bu...