WorldWideScience

Sample records for visual feedback navigation

  1. Virtual environment to evaluate multimodal feedback strategies for augmented navigation of the visually impaired.

    Science.gov (United States)

    Hara, Masayuki; Shokur, Solaiman; Yamamoto, Akio; Higuchi, Toshiro; Gassert, Roger; Bleuler, Hannes

    2010-01-01

This paper proposes a novel experimental environment to evaluate multimodal feedback strategies for augmented navigation of the visually impaired. The environment consists of virtual obstacles and walls, an optical tracking system and a simple device with audio and vibrotactile feedback that interacts with the virtual environment, and presents many advantages in terms of safety, flexibility, control over experimental parameters and cost. The subject can freely move in an empty room, while the positions of the head and arm are tracked in real time. A virtual environment (walls, obstacles) is randomly generated, and audio and vibrotactile feedback are given according to the distance from the subject's arm to the virtual walls/objects. We investigate the applicability of our environment using a simple, commercially available feedback device. Experiments with unimpaired subjects show that it is possible to use the setup to "blindly" navigate in an unpredictable virtual environment. This validates the environment as a test platform to investigate navigation and exploration strategies of the visually impaired, and to evaluate novel technologies for augmented navigation.
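    The record above describes feedback driven by the distance between the tracked arm and virtual obstacles. As a rough illustration of that idea (not code from the paper), the sketch below maps a distance reading to a vibration duty cycle and a beep rate; the 2 m range, the linear law and the function names are assumptions.

```python
# Illustrative mapping from arm-to-obstacle distance to audio/vibrotactile
# feedback intensity (assumed inverse-distance law and thresholds).

def feedback_intensity(distance_m: float, max_range_m: float = 2.0) -> float:
    """Return a 0..1 intensity: 0 beyond max_range_m, 1 at contact."""
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - max(distance_m, 0.0) / max_range_m

def drive_feedback(distance_m: float) -> dict:
    """Map intensity to a vibration duty cycle and an audio beep rate."""
    intensity = feedback_intensity(distance_m)
    return {
        "vibration_duty": intensity,            # 0..1 PWM duty cycle
        "beep_rate_hz": 1.0 + 9.0 * intensity,  # 1 Hz far, 10 Hz near
    }

if __name__ == "__main__":
    for d in (2.5, 1.5, 0.5, 0.1):
        print(d, drive_feedback(d))
```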

  2. Model-based visual navigation of a mobile robot

    International Nuclear Information System (INIS)

    Roening, J.

    1992-08-01

The thesis considers the problems of visual guidance of a mobile robot. A visual navigation system is formalized, consisting of four basic components: world modelling, navigation sensing, navigation and action. According to this formalization, an experimental system is designed and realized, enabling real-world navigation experiments. A priori knowledge of the world is used for global path finding, aiding scene analysis and providing feedback information to close the control loop between planned and actual movements. Two world models were developed. The first approach was a map-based model especially designed for low-level description of indoor environments. The other was a higher-level and more symbolic representation of the surroundings utilizing the spatial graph concept. Two passive vision approaches were developed to extract navigation information. With passive three-camera stereovision a sparse depth map of the scene was produced. Another approach employed a fish-eye lens to map the entire scene of the surroundings without camera scanning. The local path planning of the system is supported by a three-dimensional scene interpreter providing a partial understanding of scene contents. The interpreter consists of data-driven low-level stages and a model-driven high-level stage. Experiments were carried out in a simulator and with a test vehicle constructed in the laboratory. The test vehicle successfully navigated indoors.

  3. Vibrotactile Feedbacks System for Assisting the Physically Impaired Persons for Easy Navigation

    Science.gov (United States)

    Safa, M.; Geetha, G.; Elakkiya, U.; Saranya, D.

    2018-04-01

NAYAN architecture is designed to help a visually impaired person navigate. As is well known, visually impaired people require special support even to access services like public transportation. This prototype system is a portable device; it is easy to carry in any condition while travelling through familiar and unfamiliar environments. The system consists of a GPS receiver that obtains NMEA data from the satellites and provides it to the user's smartphone through an Arduino board. The application uses two vibrotactile actuators placed on the left and right shoulders for vibration feedback, which gives information about the current location. An ultrasonic sensor is used to detect obstacles in front of the visually impaired person. A Bluetooth module connected to the Arduino board sends the information it receives from the GPS to the user's mobile phone.
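    As a loose illustration of the pipeline sketched in this record (GPS NMEA sentences in, a left/right shoulder cue out), the following Python sketch parses a $GPRMC sentence and picks a vibration side from the bearing error to a waypoint. The waypoint, dead band, and helper names are hypothetical, not taken from the NAYAN prototype.

```python
# Illustrative sketch: parse a $GPRMC NMEA sentence and pick a left/right
# shoulder vibration from the bearing error to the next waypoint.
# The waypoint, thresholds, and motor interface are assumptions.
import math

def parse_gprmc(sentence: str):
    """Return (lat, lon) in decimal degrees from a $GPRMC sentence, or None."""
    f = sentence.split(",")
    if not f[0].endswith("GPRMC") or f[2] != "A":
        return None
    def dm_to_deg(dm, hemi):
        v = float(dm)
        deg = int(v // 100)
        out = deg + (v - deg * 100) / 60.0
        return -out if hemi in ("S", "W") else out
    return dm_to_deg(f[3], f[4]), dm_to_deg(f[5], f[6])

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def shoulder_cue(heading_deg, target_bearing_deg, dead_band=15.0):
    """Return 'left', 'right', or 'none' for the vibrotactile cue."""
    err = (target_bearing_deg - heading_deg + 180.0) % 360.0 - 180.0
    if abs(err) < dead_band:
        return "none"
    return "right" if err > 0 else "left"

if __name__ == "__main__":
    fix = parse_gprmc("$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,,,A*6A")
    print(fix, shoulder_cue(84.4, bearing_deg(*fix, 48.15, 11.60)))
```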

  4. Autonomous Robot Navigation based on Visual Landmarks

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    2005-01-01

The use of landmarks for robot navigation is a popular alternative to having a geometrical model of the environment through which to navigate and monitor self-localization. If the landmarks are defined as special visual structures already in the environment then we have the possibility of fully autonomous navigation and self-localization using automatically selected landmarks. The thesis investigates autonomous robot navigation and proposes a new method which benefits from the potential of the visual sensor to provide accuracy and reliability to the navigation process while relying on naturally occurring landmarks: the system can automatically learn and store visual landmarks, and later recognize these landmarks from arbitrary positions and thus estimate robot position and heading.

  5. An Outdoor Navigation System for Blind Pedestrians Using GPS and Tactile-Foot Feedback

    Directory of Open Access Journals (Sweden)

    Ramiro Velázquez

    2018-04-01

Full Text Available This paper presents a novel, wearable navigation system for visually impaired and blind pedestrians that combines a global positioning system (GPS) for user outdoor localization and tactile-foot stimulation for information presentation. Real-time GPS data provided by a smartphone are processed by dedicated navigation software to determine the directions to a destination. Navigational directions are then encoded as vibrations and conveyed to the user via a tactile display that inserts into the shoe. The experimental results showed that users were capable of recognizing with high accuracy the tactile feedback provided to their feet. The preliminary tests conducted in outdoor locations involved two blind users who were guided along 380–420 m predetermined pathways, while sharing the space with other pedestrians and facing typical urban obstacles. The subjects successfully reached the target destinations. The results suggest that the proposed system enhances independent, safe navigation of blind pedestrians and show the potential of tactile-foot stimulation in assistive devices.

  6. Visual Guided Navigation

    National Research Council Canada - National Science Library

    Banks, Martin

    1999-01-01

    .... Similarly, the problem of visual navigation is the recovery of an observer's self-motion with respect to the environment from the moving pattern of light reaching the eyes and the complex of extra...

  7. Conceptual Design of Haptic-Feedback Navigation Device for Individuals with Alzheimer's Disease.

    Science.gov (United States)

    Che Me, Rosalam; Biamonti, Alessandro; Mohd Saad, Mohd Rashid

    2015-01-01

Wayfinding ability in older adults with Alzheimer's disease (AD) is progressively impaired due to ageing and deterioration of cognitive domains. Usually, the sense of direction deteriorates, as visuospatial and spatial cognition are associated with sensory acuity. Therefore, navigation systems that support only visual interactions may not be appropriate in the case of AD. This paper presents a concept for a wearable navigation device that integrates haptic-feedback technology to facilitate the wayfinding of individuals with AD. The system provides the simplest possible instructions (left/right) using haptic signals, so as to avoid distracting users during navigation. The advantages of the haptic/tactile modality for wayfinding purposes, based on several significant studies, are presented. As a preliminary assessment, a survey was conducted to understand the potential of this design concept in terms of (1) acceptability, (2) practicality, (3) wearability, and (4) environmental settings. Results indicate that the concept is highly acceptable and commercially implementable. A working prototype will be developed based on the results of the preliminary assessment. Introducing a new method of navigation should be followed by continuous practice for familiarization purposes. Improved navigability allows good performance of activities of daily living (ADLs), hence maintaining quality of life in older adults with AD.

  8. Safe Local Navigation for Visually Impaired Users With a Time-of-Flight and Haptic Feedback Device.

    Science.gov (United States)

    Katzschmann, Robert K; Araki, Brandon; Rus, Daniela

    2018-03-01

    This paper presents ALVU (Array of Lidars and Vibrotactile Units), a contactless, intuitive, hands-free, and discreet wearable device that allows visually impaired users to detect low- and high-hanging obstacles, as well as physical boundaries in their immediate environment. The solution allows for safe local navigation in both confined and open spaces by enabling the user to distinguish free space from obstacles. The device presented is composed of two parts: a sensor belt and a haptic strap. The sensor belt is an array of time-of-flight distance sensors worn around the front of a user's waist, and the pulses of infrared light provide reliable and accurate measurements of the distances between the user and surrounding obstacles or surfaces. The haptic strap communicates the measured distances through an array of vibratory motors worn around the user's upper abdomen, providing haptic feedback. The linear vibration motors are combined with a point-loaded pretensioned applicator to transmit isolated vibrations to the user. We validated the device's capability in an extensive user study entailing 162 trials with 12 blind users. Users wearing the device successfully walked through hallways, avoided obstacles, and detected staircases.
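    A minimal sketch of the sensor-belt-to-haptic-strap mapping described above: each time-of-flight distance reading drives one vibration motor, more strongly the closer the obstacle. The sensor count, range limits and PWM scaling below are assumptions, not ALVU's actual parameters.

```python
# Illustrative mapping from an array of time-of-flight distance readings to
# per-motor vibration levels (assumed near/far limits and 8-bit PWM scale).

def distances_to_motor_levels(distances_m, near_m=0.3, far_m=2.0):
    """Map one distance per sensor to a 0..255 PWM level per motor."""
    levels = []
    for d in distances_m:
        d = min(max(d, near_m), far_m)
        intensity = (far_m - d) / (far_m - near_m)   # 1.0 near, 0.0 far
        levels.append(int(round(255 * intensity)))
    return levels

if __name__ == "__main__":
    # Seven hypothetical sensor readings across the belt (metres).
    readings = [1.8, 1.2, 0.6, 0.35, 0.9, 1.5, 2.5]
    print(distances_to_motor_levels(readings))
```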

  9. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    Directory of Open Access Journals (Sweden)

    Emmanuele eTidoni

    2014-06-01

Full Text Available Advancement in brain computer interface (BCI) technology allows people to actively interact with the world through surrogates. Controlling real humanoid robots using a BCI as intuitively as we control our body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while they controlled a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between the footstep sounds and the humanoid's actual walk reduces the time required for steering the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the motor decisions of the BCI user and support the feeling of control over the robot. Our results shed light on the possibility of increasing control over the robot through the combination of multisensory feedback to a BCI user.

  10. Visual Navigation of Complex Information Spaces

    Directory of Open Access Journals (Sweden)

    Sarah North

    1995-11-01

Full Text Available The authors lay the foundation for the introduction of a visual navigation aid to assist computer users in the direct manipulation of complex information spaces. By exploring present research on scientific data visualisation and creating a case for improved information visualisation tools, they introduce the design of an improved information visualisation interface utilizing a dynamic slider, called Visual-X, incorporating icons with bindable attributes (glyphs). Exploring the improvement that these data visualisations make to a computing environment, the authors conduct an experiment to compare the performance of subjects who use traditional interfaces and Visual-X. The methodology is presented, and conclusions reveal that the use of Visual-X appears to be a promising approach to providing users with a navigation tool that does not overload their cognitive processes.

  11. Navigating nuclear science: Enhancing analysis through visualization

    Energy Technology Data Exchange (ETDEWEB)

    Irwin, N.H.; Berkel, J. van; Johnson, D.K.; Wylie, B.N.

    1997-09-01

Data visualization is an emerging technology with high potential for addressing the information overload problem. This project extends the data visualization work of the Navigating Science project by coupling it with more traditional information retrieval methods. A citation-derived landscape was augmented with documents using a text-based similarity measure to show viability of extension into datasets where citation lists do not exist. Landscapes, showing hills where clusters of similar documents occur, can be navigated, manipulated and queried in this environment. The capabilities of this tool provide users with an intuitive explore-by-navigation method not currently available in today's retrieval systems.
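    The text-based similarity measure mentioned above is not specified in the record; as a stand-in, the sketch below uses TF-IDF cosine similarity to place a new document near its most similar neighbours in a hypothetical landscape corpus.

```python
# Illustrative stand-in for the text-based similarity step: place a new
# document near its most similar neighbours when no citation links exist,
# using TF-IDF cosine similarity (not necessarily the project's measure).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [                       # hypothetical landscape documents
    "neutron flux measurement in reactor cores",
    "visualization of citation networks as landscapes",
    "information retrieval with vector space models",
]
new_doc = ["retrieval of documents using tf-idf vectors"]

vectorizer = TfidfVectorizer(stop_words="english")
landscape_vectors = vectorizer.fit_transform(corpus)
new_vector = vectorizer.transform(new_doc)

scores = cosine_similarity(new_vector, landscape_vectors)[0]
best = scores.argmax()
print(f"most similar landscape document: {corpus[best]!r} (score {scores[best]:.2f})")
```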

  12. Satellite Imagery Assisted Road-Based Visual Navigation System

    Science.gov (United States)

    Volkova, A.; Gibbens, P. W.

    2016-06-01

There is a growing demand for unmanned aerial systems as autonomous surveillance, exploration and remote sensing solutions. Among the key concerns for robust operation of these systems is the need to reliably navigate the environment without reliance on a global navigation satellite system (GNSS). This is of particular concern in Defence circles, but is also a major safety issue for commercial operations. In these circumstances, the aircraft needs to navigate relying only on information from on-board passive sensors such as digital cameras. The autonomous feature-based visual system presented in this work offers a novel integral approach to the modelling and registration of visual features that responds to the specific needs of the navigation system. It detects visual features from Google Earth* imagery to build a feature database. The same algorithm then detects features in an on-board camera's video stream. On one level this serves to localise the vehicle relative to the environment using Simultaneous Localisation and Mapping (SLAM). On a second level it correlates them with the database to localise the vehicle with respect to the inertial frame. The performance of the presented visual navigation system was compared using satellite imagery from different years. Based on the comparison results, an analysis of the effects of seasonal, structural and qualitative changes of the imagery source on the performance of the navigation algorithm is presented. * The algorithm is independent of the source of satellite imagery and another provider can be used.
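    The record does not name the feature detector or matcher used; purely as an illustration of matching an on-board frame against a georeferenced satellite tile, the sketch below uses OpenCV ORB features with brute-force Hamming matching. The file names are placeholders.

```python
# Illustrative stand-in for matching an on-board camera frame against a
# georeferenced satellite image tile (ORB + brute-force Hamming matching is
# used here purely as an example; the paper's actual method is not given).
import cv2

def match_frame_to_tile(frame_gray, tile_gray, max_matches=50):
    orb = cv2.ORB_create(nfeatures=2000)
    kp_f, des_f = orb.detectAndCompute(frame_gray, None)
    kp_t, des_t = orb.detectAndCompute(tile_gray, None)
    if des_f is None or des_t is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_f, des_t), key=lambda m: m.distance)
    # Each match pairs a pixel in the frame with a pixel in the tile whose
    # geographic coordinates are known, giving observations for localisation.
    return [(kp_f[m.queryIdx].pt, kp_t[m.trainIdx].pt) for m in matches[:max_matches]]

if __name__ == "__main__":
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
    tile = cv2.imread("tile.png", cv2.IMREAD_GRAYSCALE)
    if frame is None or tile is None:
        raise SystemExit("provide frame.png and tile.png to run this sketch")
    print(len(match_frame_to_tile(frame, tile)), "correspondences")
```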

  13. Differential effects of visual feedback on subjective visual vertical accuracy and precision.

    Directory of Open Access Journals (Sweden)

    Daniel Bjasch

Full Text Available The brain constructs an internal estimate of the gravitational vertical by integrating multiple sensory signals. In darkness, systematic head-roll dependent errors in verticality estimates, as measured by the subjective visual vertical (SVV), occur. We hypothesized that visual feedback after each trial results in increased accuracy, as physiological adjustment errors (A-/E-effect) are likely based on central computational mechanisms, and investigated whether such improvements were related to adaptational shifts of perceived vertical or to a higher cognitive strategy. We asked 12 healthy human subjects to adjust a luminous arrow to vertical in various head-roll positions (0 to 120 deg right-ear down, in 15 deg steps). After each adjustment visual feedback was provided (lights on, display of the previous adjustment and of an earth-vertical cross). Control trials consisted of SVV adjustments without feedback. At head-roll angles with the largest A-effect (90, 105, and 120 deg), errors were reduced significantly, whereas precision was not significantly (p > 0.05) influenced. In seven subjects an additional session with two consecutive blocks (first with, then without visual feedback) was completed at 90, 105 and 120 deg head-roll. In these positions the error reduction by the previous visual feedback block remained significant over the consecutive 18-24 min (post-feedback block), i.e., it was still significantly (p < 0.002) different from the control trials. Eleven out of 12 subjects reported having consciously added a bias to their perceived vertical based on visual feedback in order to minimize errors. We conclude that improvements of SVV accuracy by visual feedback, which remained effective after removal of feedback for ≥18 min, rather resulted from a cognitive strategy than from adapting the internal estimate of the gravitational vertical. The mechanisms behind the SVV therefore remained stable, which is also supported by the fact that SVV precision - depending mostly on otolith input - was not affected by visual

  14. Autonomous Vehicles Navigation with Visual Target Tracking: Technical Approaches

    Directory of Open Access Journals (Sweden)

    Zhen Jia

    2008-12-01

Full Text Available This paper surveys the developments of the last 10 years in the area of vision-based target tracking for autonomous vehicle navigation. First, the motivations for and applications of using vision-based target tracking for autonomous vehicle navigation are presented in the introduction section. It can be concluded that it is necessary to develop robust visual target tracking based navigation algorithms for the broad applications of autonomous vehicles. The paper then reviews recent techniques in three different categories: vision-based target tracking for the applications of land, underwater and aerial vehicle navigation. Next, the increasing trend of using data fusion for visual target tracking based autonomous vehicle navigation is discussed. Through data fusion the tracking performance is improved and becomes more robust. Based on the review, the remaining research challenges are summarized and future research directions are investigated.

  15. Voluntarily controlled but not merely observed visual feedback affects postural sway

    Science.gov (United States)

    Asai, Tomohisa; Hiromitsu, Kentaro; Imamizu, Hiroshi

    2018-01-01

    Online stabilization of human standing posture utilizes multisensory afferences (e.g., vision). Whereas visual feedback of spontaneous postural sway can stabilize postural control especially when observers concentrate on their body and intend to minimize postural sway, the effect of intentional control of visual feedback on postural sway itself remains unclear. This study assessed quiet standing posture in healthy adults voluntarily controlling or merely observing visual feedback. The visual feedback (moving square) had either low or high gain and was either horizontally flipped or not. Participants in the voluntary-control group were instructed to minimize their postural sway while voluntarily controlling visual feedback, whereas those in the observation group were instructed to minimize their postural sway while merely observing visual feedback. As a result, magnified and flipped visual feedback increased postural sway only in the voluntary-control group. Furthermore, regardless of the instructions and feedback manipulations, the experienced sense of control over visual feedback positively correlated with the magnitude of postural sway. We suggest that voluntarily controlled, but not merely observed, visual feedback is incorporated into the feedback control system for posture and begins to affect postural sway. PMID:29682421

  16. Feature-Specific Organization of Feedback Pathways in Mouse Visual Cortex.

    Science.gov (United States)

    Huh, Carey Y L; Peach, John P; Bennett, Corbett; Vega, Roxana M; Hestrin, Shaul

    2018-01-08

    Higher and lower cortical areas in the visual hierarchy are reciprocally connected [1]. Although much is known about how feedforward pathways shape receptive field properties of visual neurons, relatively little is known about the role of feedback pathways in visual processing. Feedback pathways are thought to carry top-down signals, including information about context (e.g., figure-ground segmentation and surround suppression) [2-5], and feedback has been demonstrated to sharpen orientation tuning of neurons in the primary visual cortex (V1) [6, 7]. However, the response characteristics of feedback neurons themselves and how feedback shapes V1 neurons' tuning for other features, such as spatial frequency (SF), remain largely unknown. Here, using a retrograde virus, targeted electrophysiological recordings, and optogenetic manipulations, we show that putatively feedback neurons in layer 5 (hereafter "L5 feedback") in higher visual areas, AL (anterolateral area) and PM (posteromedial area), display distinct visual properties in awake head-fixed mice. AL L5 feedback neurons prefer significantly lower SF (mean: 0.04 cycles per degree [cpd]) compared to PM L5 feedback neurons (0.15 cpd). Importantly, silencing AL L5 feedback reduced visual responses of V1 neurons preferring low SF (mean change in firing rate: -8.0%), whereas silencing PM L5 feedback suppressed responses of high-SF-preferring V1 neurons (-20.4%). These findings suggest that feedback connections from higher visual areas convey distinctly tuned visual inputs to V1 that serve to boost V1 neurons' responses to SF. Such like-to-like functional organization may represent an important feature of feedback pathways in sensory systems and in the nervous system in general. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Box jellyfish use terrestrial visual cues for navigation

    DEFF Research Database (Denmark)

    Garm, Anders; Oskarsson, Magnus; Nilsson, Dan-Eric

    2011-01-01

It has been a puzzle why they need such a complex set of eyes. Here we report that medusae of the box jellyfish Tripedalia cystophora are capable of visually guided navigation in mangrove swamps using terrestrial structures seen through the water surface. They detect the mangrove canopy by an eye type that is specialized to peer up through the water surface and that is suspended such that it is constantly looking straight up, irrespective of the orientation of the jellyfish. The visual information is used to navigate to the preferred habitat at the edge of mangrove lagoons.

  18. Visual navigation using edge curve matching for pinpoint planetary landing

    Science.gov (United States)

    Cui, Pingyuan; Gao, Xizhen; Zhu, Shengying; Shao, Wei

    2018-05-01

Pinpoint landing is challenging for future Mars and asteroid exploration missions. A vision-based navigation scheme based on feature detection and matching is practical and can achieve the required precision. However, existing algorithms are computationally prohibitive and utilize poor-performance measurements, which pose great challenges for the application of visual navigation. This paper proposes an innovative visual navigation scheme using crater edge curves during the descent and landing phase. In the algorithm, the edge curves of the craters tracked across two sequential images are utilized to determine the relative attitude and position of the lander through a normalized method. Then, considering the error accumulation of relative navigation, a method is developed that integrates the crater-based relative navigation method with a crater-based absolute navigation method, which identifies craters using a georeferenced database, for continuous estimation of the absolute states. In addition, expressions for the relative state estimate bias are derived. Novel necessary and sufficient observability criteria based on error analysis are provided to improve the navigation performance, and these hold true for similar navigation systems. Simulation results demonstrate the effectiveness and high accuracy of the proposed navigation method.

  19. Sonification and haptic feedback in addition to visual feedback enhances complex motor task learning.

    Science.gov (United States)

    Sigrist, Roland; Rauter, Georg; Marchal-Crespo, Laura; Riener, Robert; Wolf, Peter

    2015-03-01

    Concurrent augmented feedback has been shown to be less effective for learning simple motor tasks than for complex tasks. However, as mostly artificial tasks have been investigated, transfer of results to tasks in sports and rehabilitation remains unknown. Therefore, in this study, the effect of different concurrent feedback was evaluated in trunk-arm rowing. It was then investigated whether multimodal audiovisual and visuohaptic feedback are more effective for learning than visual feedback only. Naïve subjects (N = 24) trained in three groups on a highly realistic virtual reality-based rowing simulator. In the visual feedback group, the subject's oar was superimposed to the target oar, which continuously became more transparent when the deviation between the oars decreased. Moreover, a trace of the subject's trajectory emerged if deviations exceeded a threshold. The audiovisual feedback group trained with oar movement sonification in addition to visual feedback to facilitate learning of the velocity profile. In the visuohaptic group, the oar movement was inhibited by path deviation-dependent braking forces to enhance learning of spatial aspects. All groups significantly decreased the spatial error (tendency in visual group) and velocity error from baseline to the retention tests. Audiovisual feedback fostered learning of the velocity profile significantly more than visuohaptic feedback. The study revealed that well-designed concurrent feedback fosters complex task learning, especially if the advantages of different modalities are exploited. Further studies should analyze the impact of within-feedback design parameters and the transferability of the results to other tasks in sports and rehabilitation.

  20. Measuring voluntary quadriceps activation: Effect of visual feedback and stimulus delivery.

    Science.gov (United States)

    Luc, Brittney A; Harkey, Matthew H; Arguelles, Gabrielle D; Blackburn, J Troy; Ryan, Eric D; Pietrosimone, Brian

    2016-02-01

Quadriceps voluntary activation, assessed via the superimposed burst technique, has been extensively studied in a variety of populations as a measure of quadriceps function. However, a variety of stimulus delivery techniques have been employed, which may influence the level of voluntary activation as calculated via the central activation ratio (CAR). The purpose of this study was to determine the effect of visual feedback, stimulus delivery, and perceived discomfort on maximal voluntary isometric contraction (MVIC) peak torque and the CAR. Quadriceps CAR was assessed in 14 individuals on two days using three stimulus delivery methods: (1) manual without visual feedback, (2) manual with visual feedback, and (3) automated with visual feedback. MVIC peak torque and the CAR were not different between the automated with visual feedback (MVIC = 3.25, SE = 0.14 Nm/kg; CAR = 88.63, SE = 1.75%) and manual with visual feedback (MVIC = 3.26, SE = 0.13 Nm/kg, P = 0.859; CAR = 89.06, SE = 1.70%, P = 0.39) stimulus delivery methods. MVIC (2.99, SE = 0.12 Nm/kg) and CAR (85.32, SE = 2.10%) were significantly lower using manual stimulation without visual feedback compared to manual with visual feedback and automated with visual feedback (CAR P < 0.001; MVIC P < 0.001). Perceived discomfort was lower in the second session (P < 0.05). Utilizing visual feedback helps ensure a true participant MVIC, and may provide a more accurate assessment of quadriceps voluntary activation. Copyright © 2015 Elsevier Ltd. All rights reserved.
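    For readers unfamiliar with the measure, a commonly used definition of the central activation ratio is CAR = voluntary MVIC torque divided by the total torque during the superimposed burst. The sketch below computes it with made-up numbers; it is not data from this study.

```python
# Worked example of the central activation ratio (CAR) as commonly defined:
# CAR = voluntary MVIC torque / total torque during the superimposed burst,
# where the total is the MVIC torque plus the increment evoked by the burst.
# The numbers are illustrative, not data from the study above.

def central_activation_ratio(mvic_torque_nm, burst_increment_nm):
    total = mvic_torque_nm + burst_increment_nm
    return mvic_torque_nm / total

if __name__ == "__main__":
    # e.g. 200 Nm voluntary torque, 25 Nm additional torque from the burst
    car = central_activation_ratio(200.0, 25.0)
    print(f"CAR = {car:.3f} ({car * 100:.1f}% voluntary activation)")
```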

  1. Towards automated visual flexible endoscope navigation.

    Science.gov (United States)

    van der Stap, Nanda; van der Heijden, Ferdinand; Broeders, Ivo A M J

    2013-10-01

    The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards a wider application of flexible endoscopes with an increasing role in complex intraluminal therapeutic procedures. The nonintuitive and nonergonomical steering mechanism now forms a barrier in the extension of flexible endoscope applications. Automating the navigation of endoscopes could be a solution for this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research. A systematic literature search was performed using three general search terms in two medical-technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed. Ultimately, 26 were included. Navigation often is based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date. Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.
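    Of the two techniques named above, lumen centralization is the simpler to illustrate: the lumen typically appears as the darkest region of the endoscopic image, so its centroid can serve as a steering target. The sketch below is an assumption-laden example (percentile threshold, blur size, steering convention), not an algorithm from the reviewed papers.

```python
# Illustrative sketch of lumen centralization: find the darkest region of
# the endoscopic image and return its centroid offset from the image centre
# as a steering target. Thresholds and conventions are assumptions.
import cv2
import numpy as np

def lumen_steering_offset(bgr_frame, dark_percentile=5.0):
    """Return (dx, dy) of the dark-lumen centroid relative to image centre."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (9, 9), 0)
    thresh = np.percentile(gray, dark_percentile)
    mask = (gray <= thresh).astype(np.uint8)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return 0.0, 0.0
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    h, w = gray.shape
    return cx - w / 2.0, cy - h / 2.0  # steer the tip toward this offset

if __name__ == "__main__":
    frame = np.full((480, 640, 3), 180, np.uint8)
    cv2.circle(frame, (420, 200), 100, (20, 20, 20), -1)  # synthetic dark lumen
    print(lumen_steering_offset(frame))
```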

  2. Self-motivated visual scanning predicts flexible navigation in a virtual environment

    Directory of Open Access Journals (Sweden)

    Elisabeth Jeannette Ploran

    2014-01-01

Full Text Available The ability to navigate flexibly (e.g., reorienting oneself based on distal landmarks to reach a learned target from a new position) may rely on visual scanning during both initial experiences with the environment and subsequent test trials. Reliance on visual scanning during navigation harkens back to the concept of vicarious trial and error, a description of the side-to-side head movements made by rats as they explore previously traversed sections of a maze in an attempt to find a reward. In the current study, we examined if visual scanning predicted the extent to which participants would navigate to a learned location in a virtual environment defined by its position relative to distal landmarks. Our results demonstrated a significant positive relationship between the amount of visual scanning and participant accuracy in identifying the trained target location from a new starting position as long as the landmarks within the environment remained consistent with the period of original learning. Our findings indicate that active visual scanning of the environment is a deliberative attentional strategy that supports the formation of spatial representations for flexible navigation.

  3. Visual navigation in adolescents with early periventricular lesions: knowing where, but not getting there.

    Science.gov (United States)

    Pavlova, Marina; Sokolov, Alexander; Krägeloh-Mann, Ingeborg

    2007-02-01

    Visual navigation in familiar and unfamiliar surroundings is an essential ingredient of adaptive daily life behavior. Recent brain imaging work helps to recognize that establishing connectivity between brain regions is of importance for successful navigation. Here, we ask whether the ability to navigate is impaired in adolescents who were born premature and suffer congenital bilateral periventricular brain damage that might affect the pathways interconnecting subcortical structures with cortex. Performance on a set of visual labyrinth tasks was significantly worse in patients with periventricular leukomalacia (PVL) as compared with premature-born controls without lesions and term-born adolescents. The ability for visual navigation inversely relates to the severity of motor disability, leg-dominated bilateral spastic cerebral palsy. This agrees with the view that navigation ability substantially improves with practice and might be compromised in individuals with restrictions in active spatial exploration. Visual navigation is negatively linked to the volumetric extent of lesions over the right parietal and frontal periventricular regions. Whereas impairments of visual processing of point-light biological motion are associated in patients with PVL with bilateral parietal periventricular lesions, navigation ability is specifically linked to the frontal lesions in the right hemisphere. We suggest that more anterior periventricular lesions impair the interrelations between the right hippocampus and cortical areas leading to disintegration of neural networks engaged in visual navigation. For the first time, we show that the severity of right frontal periventricular damage and leg-dominated motor disorders can serve as independent predictors of the visual navigation disability.

  4. Manipulating the fidelity of lower extremity visual feedback to identify obstacle negotiation strategies in immersive virtual reality.

    Science.gov (United States)

    Kim, Aram; Zhou, Zixuan; Kretch, Kari S; Finley, James M

    2017-07-01

The ability to successfully navigate obstacles in our environment requires integration of visual information about the environment with estimates of our body's state. Previous studies have used partial occlusion of the visual field to explore how information about the body and impending obstacles is integrated to mediate a successful clearance strategy. However, because these manipulations often remove information about both the body and the obstacle, it remains to be seen how information about the lower extremities alone is utilized during obstacle crossing. Here, we used an immersive virtual reality (VR) interface to explore how visual feedback of the lower extremities influences obstacle crossing performance. Participants wore a head-mounted display while walking on a treadmill and were instructed to step over obstacles in a virtual corridor in four different feedback trials. The trials involved: (1) no visual feedback of the lower extremities, (2) an endpoint-only model, (3) a link-segment model, and (4) a volumetric multi-segment model. We found that, compared to no model, the volumetric model improved the success rate, led participants to place their trailing foot before crossing and their leading foot after crossing more consistently, and led them to place their leading foot closer to the obstacle after crossing. This knowledge is critical for the design of obstacle negotiation tasks in immersive virtual environments as it may provide information about the fidelity necessary to reproduce ecologically valid practice environments.

  5. Image processing and applications based on visualizing navigation service

    Science.gov (United States)

    Hwang, Chyi-Wen

    2015-07-01

When facing the "overabundance" of semantic web information, in this paper the researcher proposes a hierarchical classification and visualizing RIA (Rich Internet Application) navigation system: Concept Map (CM) + Semantic Structure (SS) + the Knowledge on Demand (KOD) service. The aim of the multimedia processing and empirical application testing was to investigate the utility and usability of this visualizing navigation strategy in web communication design, and whether it enables the user to retrieve and construct their personal knowledge. Furthermore, based on segment-market theory in the marketing model, a User Interface (UI) classification strategy is proposed and a set of hypermedia design principles is formulated for further UI strategy and e-learning resources in semantic web communication. The research findings are: (1) Irrespective of whether the simple declarative knowledge or the complex declarative knowledge model is used, the "CM + SS + KOD navigation system" has a better cognition effect than the "non CM + SS + KOD navigation system". However, for users with no web design experience, the navigation system does not have an obvious cognition effect. (2) Classification is essential in semantic web communication design: different groups of users have a diversity of preference needs and different cognitive styles in the CM + SS + KOD navigation system.

  6. An Indoor Navigation System for the Visually Impaired

    Directory of Open Access Journals (Sweden)

    Luis A. Guerrero

    2012-06-01

Full Text Available Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have been shown to be useful in real scenarios, they involve an important deployment effort or use artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed taking into consideration usability as the quality requirement to be maximized. This solution identifies the position of a person and calculates the velocity and direction of his movements. Using this information, the system determines the user's trajectory, locates possible obstacles in that route, and offers navigation information to the user. The solution has been evaluated using two experimental scenarios. Although the results are still not enough to provide strong conclusions, they indicate that the system is suitable to guide visually impaired people through an unknown built environment.

  7. An indoor navigation system for the visually impaired.

    Science.gov (United States)

    Guerrero, Luis A; Vasquez, Francisco; Ochoa, Sergio F

    2012-01-01

Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have been shown to be useful in real scenarios, they involve an important deployment effort or use artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed taking into consideration usability as the quality requirement to be maximized. This solution identifies the position of a person and calculates the velocity and direction of his movements. Using this information, the system determines the user's trajectory, locates possible obstacles in that route, and offers navigation information to the user. The solution has been evaluated using two experimental scenarios. Although the results are still not enough to provide strong conclusions, they indicate that the system is suitable to guide visually impaired people through an unknown built environment.

  8. The Effect of Concurrent Visual Feedback on Controlling Swimming Speed

    Directory of Open Access Journals (Sweden)

    Szczepan Stefan

    2016-03-01

Full Text Available Introduction. Developing the ability to control the speed of swimming is an important part of swimming training. Maintaining a defined constant speed makes it possible for the athlete to swim economically at a low physiological cost. The aim of this study was to determine the effect of concurrent visual feedback transmitted by the Leader device on the control of swimming speed in a single exercise test. Material and methods. The study involved a group of expert swimmers (n = 20). Prior to the experiment, the race time for the 100 m distance was determined for each of the participants. In the experiment, the participants swam the distance of 100 m without feedback and with visual feedback. In both variants, the task of the participants was to swim the test distance in a time as close as possible to the time designated prior to the experiment. In the first version of the experiment (without feedback), the participants swam the test distance without receiving real-time feedback on their swimming speed. In the second version (with visual feedback), the participants followed a beam of light moving across the bottom of the swimming pool, generated by the Leader device. Results. During swimming with visual feedback, the 100 m race time was significantly closer to the time designated. The difference between the pre-determined time and the time obtained was statistically significantly lower during swimming with visual feedback (p = 0.00002). Conclusions. Concurrently transmitting visual feedback to athletes improves their control of swimming speed. The Leader device has proven useful in controlling swimming speed.

  9. A Visual-Aided Inertial Navigation and Mapping System

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2016-05-01

Full Text Available State estimation is a fundamental necessity for any application involving autonomous robots. This paper describes a visual-aided inertial navigation and mapping system for application to autonomous robots. The system, which relies on Kalman filtering, is designed to fuse the measurements obtained from a monocular camera, an inertial measurement unit (IMU) and a position sensor (GPS). The estimated state consists of the full state of the vehicle: the position, orientation, their first derivatives and the parameter errors of the inertial sensors (i.e., the bias of gyroscopes and accelerometers). The system also provides the spatial locations of the visual features observed by the camera. The proposed scheme was designed by considering the limited resources commonly available in small mobile robots, while it is intended to be applied to cluttered environments in order to perform fully vision-based navigation in periods where the position sensor is not available. Moreover, the estimated map of visual features would be suitable for multiple tasks: (i) terrain analysis; (ii) three-dimensional (3D) scene reconstruction; (iii) localization, detection or perception of obstacles and generating trajectories to navigate around these obstacles; and (iv) autonomous exploration. In this work, simulations and experiments with real data are presented in order to validate and demonstrate the performance of the proposal.
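    As a heavily simplified, toy stand-in for the Kalman prediction/correction cycle the record describes, the sketch below runs a one-dimensional constant-velocity filter with an IMU-style acceleration input and an intermittent GPS-style position fix. The real system estimates full 3-D pose, sensor biases and a feature map; all noise values here are invented.

```python
# Toy 1-D Kalman filter: state [position, velocity] propagated with an
# acceleration input and corrected with a position measurement. A much
# reduced illustration of the fusion scheme described above.
import numpy as np

def predict(x, P, accel, dt, q=0.05):
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    return F @ x + B * accel, F @ P @ F.T + Q

def correct(x, P, z_pos, r=4.0):
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + r
    K = P @ H.T / S                      # Kalman gain (2x1)
    x = x + (K * (z_pos - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

if __name__ == "__main__":
    x, P = np.zeros(2), np.eye(2)
    for step in range(5):
        x, P = predict(x, P, accel=0.2, dt=0.1)
        if step % 2 == 1:                # position fix arrives intermittently
            x, P = correct(x, P, z_pos=0.01 * (step + 1))
    print("state:", x, "cov diag:", np.diag(P))
```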

  10. Visual map and instruction-based bicycle navigation: a comparison of effects on behaviour.

    Science.gov (United States)

    de Waard, Dick; Westerhuis, Frank; Joling, Danielle; Weiland, Stella; Stadtbäumer, Ronja; Kaltofen, Leonie

    2017-09-01

    Cycling with a classic paper map was compared with navigating with a moving map displayed on a smartphone, and with auditory, and visual turn-by-turn route guidance. Spatial skills were found to be related to navigation performance, however only when navigating from a paper or electronic map, not with turn-by-turn (instruction based) navigation. While navigating, 25% of the time cyclists fixated at the devices that present visual information. Navigating from a paper map required most mental effort and both young and older cyclists preferred electronic over paper map navigation. In particular a turn-by-turn dedicated guidance device was favoured. Visual maps are in particular useful for cyclists with higher spatial skills. Turn-by-turn information is used by all cyclists, and it is useful to make these directions available in all devices. Practitioner Summary: Electronic navigation devices are preferred over a paper map. People with lower spatial skills benefit most from turn-by-turn guidance information, presented either auditory or on a dedicated device. People with higher spatial skills perform well with all devices. It is advised to keep in mind that all users benefit from turn-by-turn information when developing a navigation device for cyclists.

  11. Effects of Visual, Auditory, and Tactile Navigation Cues on Navigation Performance, Situation Awareness, and Mental Workload

    National Research Council Canada - National Science Library

    Davis, Bradley M

    2007-01-01

    .... Results from both experiments indicate that augmented visual displays reduced time to complete navigation, maintained situation awareness, and drastically reduced mental workload in comparison...

  12. Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.

    Science.gov (United States)

    Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe

    2017-09-01

Visual neuroprostheses are still limited, and simulated prosthetic vision (SPV) is used to evaluate the potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirements on visual neuroprosthetic characteristics to restore various functions such as reading, object and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current arrays of electrodes is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, vision distance was limited to 3, 6, or 9 m. In the second strategy, the rendering was not based on the brightness of the image pixels, but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environment were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved the cognitive mapping of the unknown environment. These results show that low resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate
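    Two of the renderings compared above are easy to sketch: the brightness-based control rendering (downsampling to the electrode grid) and an edge-based, wireframe-like rendering. The 15 × 18 grid comes from the abstract; the Canny thresholds, interpolation choice and test image are assumptions.

```python
# Illustrative sketch of two simulated prosthetic vision renderings:
# brightness-based downsampling to a 15x18 electrode grid, and an edge
# ("wireframe"-like) rendering downsampled to the same grid.
import cv2

GRID_W, GRID_H = 18, 15   # electrodes (width x height)

def brightness_rendering(bgr_frame):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, (GRID_W, GRID_H), interpolation=cv2.INTER_AREA)

def edge_rendering(bgr_frame):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    return cv2.resize(edges, (GRID_W, GRID_H), interpolation=cv2.INTER_AREA)

if __name__ == "__main__":
    frame = cv2.imread("corridor.png")          # hypothetical test image
    if frame is None:
        raise SystemExit("provide a test image named corridor.png")
    for name, phosphenes in (("brightness", brightness_rendering(frame)),
                             ("edges", edge_rendering(frame))):
        print(name, phosphenes.shape)           # (15, 18) activation map
```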

  13. Deliverable D.8.4. Social data visualization and navigation services -3rd Year Update-

    NARCIS (Netherlands)

    Bitter-Rijpkema, Marlies; Brouns, Francis; Drachsler, Hendrik; Fazeli, Soude; Sanchez-Alonso, Salvador; Rajabi, Enayat; Kolovou, Lamprini

    2015-01-01

Within the Open Discovery Space our study (T.8.4) focused on "Enhanced Social Data Visualization & Navigation Services". This deliverable provides the prototype report regarding the deployment of adapted visualization and navigation services to be integrated in the ODS Social Data Management Layer.

  14. Street navigation using visual information on mobile phones

    DEFF Research Database (Denmark)

    Nguyen, Phuong Giang; Andersen, Hans Jørgen; Høilund, Carsten

    2010-01-01

Applications with street navigation have been recently introduced on mobile phone devices. A major part of existing systems use integrated GPS as input for indicating the location. However, these systems often fail or make abrupt shifts in urban environments due to occlusion of satellites. Furthermore, they only give the position of a person and not the object of his attention, which is just as important for localization based services. In this paper we introduce a system using mobile phones' built-in cameras for navigation and localization using visual information in accordance with the way we...

  15. Effect of visual feedback on brain activation during motor tasks: an FMRI study.

    Science.gov (United States)

    Noble, Jeremy W; Eng, Janice J; Boyd, Lara A

    2013-07-01

    This study examined the effect of visual feedback and force level on the neural mechanisms responsible for the performance of a motor task. We used a voxel-wise fMRI approach to determine the effect of visual feedback (with and without) during a grip force task at 35% and 70% of maximum voluntary contraction. Two areas (contralateral rostral premotor cortex and putamen) displayed an interaction between force and feedback conditions. When the main effect of feedback condition was analyzed, higher activation when visual feedback was available was found in 22 of the 24 active brain areas, while the two other regions (contralateral lingual gyrus and ipsilateral precuneus) showed greater levels of activity when no visual feedback was available. The results suggest that there is a potentially confounding influence of visual feedback on brain activation during a motor task, and for some regions, this is dependent on the level of force applied.

  16. Dissociable cerebellar activity during spatial navigation and visual memory in bilateral vestibular failure.

    Science.gov (United States)

    Jandl, N M; Sprenger, A; Wojak, J F; Göttlich, M; Münte, T F; Krämer, U M; Helmchen, C

    2015-10-01

    Spatial orientation and navigation depends on information from the vestibular system. Previous work suggested impaired spatial navigation in patients with bilateral vestibular failure (BVF). The aim of this study was to investigate event-related brain activity by functional magnetic resonance imaging (fMRI) during spatial navigation and visual memory tasks in BVF patients. Twenty-three BVF patients and healthy age- and gender matched control subjects performed learning sessions of spatial navigation by watching short films taking them through various streets from a driver's perspective along a route to the Cathedral of Cologne using virtual reality videos (adopted and modified from Google Earth). In the scanner, participants were asked to respond to questions testing for visual memory or spatial navigation while they viewed short video clips. From a similar but not identical perspective depicted video frames of routes were displayed which they had previously seen or which were completely novel to them. Compared with controls, posterior cerebellar activity in BVF patients was higher during spatial navigation than during visual memory tasks, in the absence of performance differences. This cerebellar activity correlated with disease duration. Cerebellar activity during spatial navigation in BVF patients may reflect increased non-vestibular efforts to counteract the development of spatial navigation deficits in BVF. Conceivably, cerebellar activity indicates a change in navigational strategy of BVF patients, i.e. from a more allocentric, landmark or place-based strategy (hippocampus) to a more sequence-based strategy. This interpretation would be in accord with recent evidence for a cerebellar role in sequence-based navigation. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  17. The Effect of Visual Feedback on Writing Size in Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Adriaan R. E. Potgieser

    2015-01-01

Full Text Available Parkinson's disease (PD) leads to impairment in multiple cognitive domains. Micrographia is a relatively early PD sign of visuomotor dysfunction, characterized by a global reduction in writing size and a decrement in size during writing. Here we aimed to investigate the effect of withdrawal of visual feedback on writing size in patients with PD. Twenty-five patients with non-tremor-dominant PD without cognitive dysfunction and twenty-five age-matched controls had to write a standard sentence with and without visual feedback. We assessed the effect of withdrawal of visual feedback by measuring (i) vertical word size, (ii) horizontal length of the sentence, and (iii) the summed horizontal word length without interspacing, comparing patients with controls. In both patients and controls, writing was significantly larger without visual feedback. This enlargement did not significantly differ between the groups. Smaller handwriting significantly correlated with increased disease severity. Contrary to previous observations that withdrawal of visual feedback caused increased writing size specifically in PD, we did not find differences between patients and controls. Both groups wrote larger without visual feedback, which adds insight into the general neuronal mechanisms underlying the balance between feed-forward and feedback processing in visuomotor control, mechanisms that also hold for grasping movements.

  18. Outdoor navigation of an inspection robot by means of global guidance feedback

    International Nuclear Information System (INIS)

    Segovia de los R, A.; Bucio V, F.; Garduno G, M.

    2008-01-01

The objective of this article is to present an inspection system for a mobile robot navigating outdoors by means of feedback of its instantaneous heading with respect to a global reference throughout its displacement. The robot moves in response to commands from a tele-operator, who indicates the desired directions through the operation console; the robot executes them using information provided by an electronic compass. The mobile robot used in the experiments is a Pioneer 3-AT, which is equipped with the sensors required for more autonomous operation. The electronic compass delivers heading information encoded in SPI format, so an inexpensive general-purpose microcontroller (μC) is used to translate the information to the RS-232 format natively used by the Pioneer 3-AT. The orientation information received by the robot through its secondary RS-232 serial port is forwarded to the host computer, where a Java program generates the commands for the robot navigation control and displays a graphical user interface used to receive the operator's orders. This research is part of a more ambitious project aiming at an inspection and monitoring system for sites where high radiation levels may exist, for which an outdoor navigation system could be very useful. The complete system will include, besides the robot's own sensors, a number of sensors matched to the variables to be monitored. The resulting measurements will be visualized in real time in the graphical user interface, thanks to a bidirectional wireless link between the operation station and the mobile robot. (Author)
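    As a rough illustration of the guidance-feedback loop described above (operator requests a global heading, compass reports the current heading, controller turns to cancel the error), the sketch below computes a proportional turn-rate command. The gain, limits and function names are assumptions, not the actual Pioneer 3-AT/Java interface.

```python
# Illustrative heading-feedback loop: compute the shortest-angle error
# between the requested global heading and the compass reading, and turn
# proportionally to cancel it. Gain and limits are made-up values.

def heading_error_deg(target_deg: float, compass_deg: float) -> float:
    """Signed shortest-angle error in degrees, in (-180, 180]."""
    return (target_deg - compass_deg + 180.0) % 360.0 - 180.0

def turn_rate_cmd(target_deg: float, compass_deg: float,
                  gain: float = 0.8, max_rate: float = 45.0) -> float:
    """Proportional turn-rate command in deg/s, clamped to the robot limit."""
    rate = gain * heading_error_deg(target_deg, compass_deg)
    return max(-max_rate, min(max_rate, rate))

if __name__ == "__main__":
    # Operator asks for East (90 deg); compass currently reads 30 deg.
    print(turn_rate_cmd(90.0, 30.0))   # positive -> turn clockwise
```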

  19. Effects of Visual Feedback Distortion on Gait Adaptation: Comparison of Implicit Visual Distortion Versus Conscious Modulation on Retention of Motor Learning.

    Science.gov (United States)

    Kim, Seung-Jae; Ogilvie, Mitchell; Shimabukuro, Nathan; Stewart, Trevor; Shin, Joon-Ho

    2015-09-01

    Visual feedback can be used during gait rehabilitation to improve the efficacy of training. We presented a paradigm called visual feedback distortion; the visual representation of step length was manipulated during treadmill walking. Our prior work demonstrated that an implicit distortion of visual feedback of step length entails an unintentional adaptive process in the subjects' spatial gait pattern. Here, we investigated whether the implicit visual feedback distortion, versus conscious correction, promotes efficient locomotor adaptation that relates to greater retention of a task. Thirteen healthy subjects were studied under two conditions: (1) we implicitly distorted the visual representation of their gait symmetry over 14 min, and (2) with help of visual feedback, subjects were told to walk on the treadmill with the intent of attaining the gait asymmetry observed during the first implicit trial. After adaptation, the visual feedback was removed while subjects continued walking normally. Over this 6-min period, retention of preserved asymmetric pattern was assessed. We found that there was a greater retention rate during the implicit distortion trial than that of the visually guided conscious modulation trial. This study highlights the important role of implicit learning in the context of gait rehabilitation by demonstrating that training with implicit visual feedback distortion may produce longer lasting effects. This suggests that using visual feedback distortion could improve the effectiveness of treadmill rehabilitation processes by influencing the retention of motor skills.

  20. Blind's Eye: Employing Google Directions API for Outdoor Navigation of Visually Impaired Pedestrians

    Directory of Open Access Journals (Sweden)

    SABA FEROZMEMON

    2017-07-01

    Full Text Available Vision plays a paramount role in our everyday life and assists humans in almost every walk of life. People lacking the sense of vision require assistance to move about freely. The inability to navigate and orient oneself unassisted in outdoor environments is one of the most important constraints for people with visual impairment. Motivated by this problem, we developed a simplified and user-friendly navigation system that allows visually impaired pedestrians to reach a desired outdoor location. We designed a Braille keyboard to allow the blind user to input their destination. The proposed system makes use of the Google Directions API (Application Program Interface) to generate the right path to a destination. The visually impaired pedestrians wear a vibration belt that keeps them on track. The evaluation exposes shortcomings of the Google Directions API when used for navigating visually impaired pedestrians in an outdoor environment.
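    To make the routing step above concrete, the following is a hedged sketch of a walking-route request to the Google Directions API using its documented origin/destination/mode/key parameters; error handling, quota management, and the belt interface are omitted, and YOUR_API_KEY and the addresses are placeholders.

        import requests

        def walking_route(origin, destination, api_key):
            """Return a list of (lat, lng) step end-points for a walking route."""
            resp = requests.get(
                "https://maps.googleapis.com/maps/api/directions/json",
                params={"origin": origin, "destination": destination,
                        "mode": "walking", "key": api_key},
                timeout=10,
            )
            data = resp.json()
            if data.get("status") != "OK":
                raise RuntimeError("Directions request failed: %s" % data.get("status"))
            steps = data["routes"][0]["legs"][0]["steps"]
            return [(s["end_location"]["lat"], s["end_location"]["lng"]) for s in steps]

        # Usage (hypothetical): walking_route("Museum Road 1, City", "Station Square 5, City", "YOUR_API_KEY")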

  1. Visual Landmarks Facilitate Rodent Spatial Navigation in Virtual Reality Environments

    Science.gov (United States)

    Youngstrom, Isaac A.; Strowbridge, Ben W.

    2012-01-01

    Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain…

  2. Indoor navigation by people with visual impairment using a digital sign system.

    Directory of Open Access Journals (Sweden)

    Gordon E Legge

    Full Text Available There is a need for adaptive technology to enhance indoor wayfinding by visually-impaired people. To address this need, we have developed and tested a Digital Sign System. The hardware and software consist of digitally-encoded signs widely distributed throughout a building, a handheld sign-reader based on an infrared camera, image-processing software, and a talking digital map running on a mobile device. Four groups of subjects-blind, low vision, blindfolded sighted, and normally sighted controls-were evaluated on three navigation tasks. The results demonstrate that the technology can be used reliably in retrieving information from the signs during active mobility, in finding nearby points of interest, and following routes in a building from a starting location to a destination. The visually impaired subjects accurately and independently completed the navigation tasks, but took substantially longer than normally sighted controls. This fully functional prototype system demonstrates the feasibility of technology enabling independent indoor navigation by people with visual impairment.

  3. Indoor navigation by people with visual impairment using a digital sign system.

    Science.gov (United States)

    Legge, Gordon E; Beckmann, Paul J; Tjan, Bosco S; Havey, Gary; Kramer, Kevin; Rolkosky, David; Gage, Rachel; Chen, Muzi; Puchakayala, Sravan; Rangarajan, Aravindhan

    2013-01-01

    There is a need for adaptive technology to enhance indoor wayfinding by visually-impaired people. To address this need, we have developed and tested a Digital Sign System. The hardware and software consist of digitally-encoded signs widely distributed throughout a building, a handheld sign-reader based on an infrared camera, image-processing software, and a talking digital map running on a mobile device. Four groups of subjects-blind, low vision, blindfolded sighted, and normally sighted controls-were evaluated on three navigation tasks. The results demonstrate that the technology can be used reliably in retrieving information from the signs during active mobility, in finding nearby points of interest, and following routes in a building from a starting location to a destination. The visually impaired subjects accurately and independently completed the navigation tasks, but took substantially longer than normally sighted controls. This fully functional prototype system demonstrates the feasibility of technology enabling independent indoor navigation by people with visual impairment.

  4. Self-Produced Tickle Sensation by Manipulating Visual Feedback

    Directory of Open Access Journals (Sweden)

    Hiroyuki Iizuka

    2011-10-01

    Full Text Available The aim of the present paper was to clarify how the distinction between self-produced (sense of agency, SOA) and other-produced behavior can be synthesized and recognized through multisensory integration in our cognitive processes. To address this issue, we used the tickling paradigm, which builds on the fact that it is hard to tickle oneself. Previous studies showed that the tickle sensation produced by one's own motion increases as more delay is introduced between the self-produced tickling motion and the tactile stimulation (Blakemore et al. 1998, 1999). We introduced visual feedback into the tickling experiments. Our hypothesis is that the integration of vision, proprioception, and motor commands forms the SOA, and that its disintegration breaks the SOA down, producing the feeling of another agent and hence a tickle sensation even when tickling oneself. We used a video-see-through HMD to suddenly delay the real-time images of the participants' hand tickling motions. The tickle sensation was measured by subjective report under the following conditions: (1) tickling oneself without any visual modulation, (2) being tickled by another person, and (3) tickling oneself with the visual feedback manipulation. Statistical analysis of the ranked tickle-sensation ratings showed that delaying the visual feedback increases the tickle sensation. The SOA was discussed with respect to Blakemore's results and ours.

  5. LOD map--A visual interface for navigating multiresolution volume visualization.

    Science.gov (United States)

    Wang, Chaoli; Shen, Han-Wei

    2006-01-01

    In multiresolution volume visualization, a visual representation of level-of-detail (LOD) quality is important for us to examine, compare, and validate different LOD selection algorithms. While traditional methods rely on the final rendered images for quality measurement, we introduce the LOD map--an alternative representation of LOD quality and a visual interface for navigating multiresolution data exploration. Our measure of LOD quality is based on the formulation of entropy from information theory. The measure takes into account the distortion and contribution of multiresolution data blocks. An LOD map is generated by mapping key LOD ingredients to a treemap representation. The ordered treemap layout is used for relatively stable updates of the LOD map when the view or LOD changes. This visual interface not only indicates the quality of LODs in an intuitive way, but also provides immediate suggestions for possible LOD improvement through visually striking features. It also allows us to compare different views and perform rendering budget control. A set of interactive techniques is proposed to make LOD adjustment a simple and easy task. We demonstrate the effectiveness and efficiency of our approach on large scientific and medical data sets.
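    As a rough illustration of the entropy-based quality measure mentioned above, the sketch below (our own assumption about the construction, not the authors' exact formula) weights each selected block by its distortion times its contribution and takes the Shannon entropy of the resulting distribution.

        import math

        def lod_entropy(distortion, contribution):
            """distortion, contribution: per-block non-negative values of equal length."""
            weights = [d * c for d, c in zip(distortion, contribution)]
            total = sum(weights)
            if total == 0.0:
                return 0.0
            probs = [w / total for w in weights]
            return -sum(p * math.log2(p) for p in probs if p > 0.0)

        # Example with four blocks (illustrative numbers only):
        print(lod_entropy([0.2, 0.4, 0.1, 0.3], [1.0, 0.5, 2.0, 1.0]))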

  6. The Visual Code Navigator : An Interactive Toolset for Source Code Investigation

    NARCIS (Netherlands)

    Lommerse, Gerard; Nossin, Freek; Voinea, Lucian; Telea, Alexandru

    2005-01-01

    We present the Visual Code Navigator, a set of three interrelated visual tools that we developed for exploring large source code software projects from three different perspectives, or views: The syntactic view shows the syntactic constructs in the source code. The symbol view shows the objects a

  7. Isometric force exaggeration in simulated weightlessness by water immersion: role of visual feedback.

    Science.gov (United States)

    Dalecki, Marc; Bock, Otmar

    2014-06-01

    Previous studies reported that humans produce exaggerated isometric forces (20-50%) in microgravity, hypergravity, and under water. Subjects were not provided with visual feedback and exaggerations were attributed to proprioceptive deficits. The few studies that provided visual feedback in micro- and hypergravity found no deficits. The present work was undertaken to find out whether visual feedback can reduce or eliminate isometric force exaggerations during shallow water immersion, a working environment for astronauts and divers. There were 48 subjects who had to produce isometric forces of 15 N with a joystick; targets were presented via screen. Procedures were similar to earlier studies, but provided visual feedback. Subjects were tested 16.4 ft (5 m) under water (WET) and on dry land (DRY). Response accuracy was calculated with landmarks such as initial and peak force magnitude, and response timing. Initial force and response timing were equal in WET compared to DRY. A small but significant force exaggeration (+5%) remained for peak force in WET that was limited to directions toward the trunk. Force exaggeration under water is largely compensated, but not completely eliminated by visual feedback. As in earlier studies without visual feedback, force exaggeration manifested during later but not early response parts, speaking for impaired proprioceptive feedback rather than for erroneous central motor planning. Since in contrast to micro/hypergravity, visual feedback did not sufficiently abolish force deficits under water, proprioceptive information seems to be weighted differently in micro/hypergravity and shallow water immersion, probably because only the latter environment produces increased ambient pressure, which is known to induce neuronal changes.

  8. Towards a Sign-Based Indoor Navigation System for People with Visual Impairments.

    Science.gov (United States)

    Rituerto, Alejandro; Fusco, Giovanni; Coughlan, James M

    2016-10-01

    Navigation is a challenging task for many travelers with visual impairments. While a variety of GPS-enabled tools can provide wayfinding assistance in outdoor settings, GPS provides no useful localization information indoors. A variety of indoor navigation tools are being developed, but most of them require potentially costly physical infrastructure to be installed and maintained, or else the creation of detailed visual models of the environment. We report development of a new smartphone-based navigation aid, which combines inertial sensing, computer vision and floor plan information to estimate the user's location with no additional physical infrastructure and requiring only the locations of signs relative to the floor plan. A formative study was conducted with three blind volunteer participants demonstrating the feasibility of the approach and highlighting the areas needing improvement.

  9. Three Principles for the Design of Energy Feedback Visualizations

    DEFF Research Database (Denmark)

    Brewer, Robert S.; Xu, Yongwen; Lee, George E.

    2013-01-01

    , online educational activities, and real-world activities such as workshops and excursions. We describe our experiences developing energy feedback visualizations in the Kukui Cup based on in-lab evaluations and field studies in college residence halls. We learned that energy feedback systems should...

  10. Brain-actuated gait trainer with visual and proprioceptive feedback

    Science.gov (United States)

    Liu, Dong; Chen, Weihai; Lee, Kyuhwa; Chavarriaga, Ricardo; Bouri, Mohamed; Pei, Zhongcai; Millán, José del R.

    2017-10-01

    Objective. Brain-machine interfaces (BMIs) have been proposed in closed-loop applications for neuromodulation and neurorehabilitation. This study describes the impact of different feedback modalities on the performance of an EEG-based BMI that decodes motor imagery (MI) of leg flexion and extension. Approach. We executed experiments in a lower-limb gait trainer (the legoPress) where nine able-bodied subjects participated in three consecutive sessions based on a crossover design. A random forest classifier was trained from the offline session and tested online with visual and proprioceptive feedback, respectively. Post-hoc classification was conducted to assess the impact of feedback modalities and learning effect (an improvement over time) on the simulated trial-based performance. Finally, we performed feature analysis to investigate the discriminant power and brain pattern modulations across the subjects. Main results. (i) For real-time classification, the average accuracy was 62.33 +/- 4.95 % and 63.89 +/- 6.41 % for the two online sessions. The results were significantly higher than chance level, demonstrating the feasibility to distinguish between MI of leg extension and flexion. (ii) For post-hoc classification, the performance with proprioceptive feedback (69.45 +/- 9.95 %) was significantly better than with visual feedback (62.89 +/- 9.20 %), while there was no significant learning effect. (iii) We reported individual discriminate features and brain patterns associated to each feedback modality, which exhibited differences between the two modalities although no general conclusion can be drawn. Significance. The study reported a closed-loop brain-controlled gait trainer, as a proof of concept for neurorehabilitation devices. We reported the feasibility of decoding lower-limb movement in an intuitive and natural way. As far as we know, this is the first online study discussing the role of feedback modalities in lower-limb MI decoding. Our results suggest that
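    As an illustration of the decoding step mentioned above, here is a minimal sketch, assuming generic per-trial band-power features (placeholder data, scikit-learn), of a random-forest classifier evaluated with cross-validation; the study's actual feature extraction, crossover protocol, and online pipeline are not reproduced.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 32))        # placeholder: 200 trials x 32 band-power features
        y = rng.integers(0, 2, size=200)      # placeholder labels: leg flexion vs extension MI

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        print("5-fold CV accuracy: %.2f" % cross_val_score(clf, X, y, cv=5).mean())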

  11. Reduction of the elevator illusion from continued hypergravity exposure and visual error-corrective feedback

    Science.gov (United States)

    Welch, R. B.; Cohen, M. M.; DeRoshia, C. W.

    1996-01-01

    Ten subjects served as their own controls in two conditions of continuous, centrifugally produced hypergravity (+2 Gz) and a 1-G control condition. Before and after exposure, open-loop measures were obtained of (1) motor control, (2) visual localization, and (3) hand-eye coordination. During exposure in the visual feedback/hypergravity condition, subjects received terminal visual error-corrective feedback from their target pointing, and in the no-visual feedback/hypergravity condition they pointed open loop. As expected, the motor control measures for both experimental conditions revealed very short lived underreaching (the muscle-loading effect) at the outset of hypergravity and an equally transient negative aftereffect on returning to 1 G. The substantial (approximately 17 degrees) initial elevator illusion experienced in both hypergravity conditions declined over the course of the exposure period, whether or not visual feedback was provided. This effect was tentatively attributed to habituation of the otoliths. Visual feedback produced a smaller additional decrement and a postexposure negative after-effect, possible evidence for visual recalibration. Surprisingly, the target-pointing error made during hypergravity in the no-visual-feedback condition was substantially less than that predicted by subjects' elevator illusion. This finding calls into question the neural outflow model as a complete explanation of this illusion.

  12. Learning feedback and feedforward control in a mirror-reversed visual environment.

    Science.gov (United States)

    Kasuga, Shoko; Telgen, Sebastian; Ushiba, Junichi; Nozaki, Daichi; Diedrichsen, Jörn

    2015-10-01

    When we learn a novel task, the motor system needs to acquire both feedforward and feedback control. Currently, little is known about how the learning of these two mechanisms relate to each other. In the present study, we tested whether feedforward and feedback control need to be learned separately, or whether they are learned as common mechanism when a new control policy is acquired. Participants were trained to reach to two lateral and one central target in an environment with mirror (left-right)-reversed visual feedback. One group was allowed to make online movement corrections, whereas the other group only received visual information after the end of the movement. Learning of feedforward control was assessed by measuring the accuracy of the initial movement direction to lateral targets. Feedback control was measured in the responses to sudden visual perturbations of the cursor when reaching to the central target. Although feedforward control improved in both groups, it was significantly better when online corrections were not allowed. In contrast, feedback control only adaptively changed in participants who received online feedback and remained unchanged in the group without online corrections. Our findings suggest that when a new control policy is acquired, feedforward and feedback control are learned separately, and that there may be a trade-off in learning between feedback and feedforward controllers. Copyright © 2015 the American Physiological Society.

  13. Using visual feedback distortion to alter coordinated pinching patterns for robotic rehabilitation

    Directory of Open Access Journals (Sweden)

    Brewer Bambi R

    2007-05-01

    Full Text Available Abstract Background It is common for individuals with chronic disabilities to continue using compensatory movement coordination because of entrenched habits, an increased perception of task difficulty, or personality variables such as low self-efficacy or a fear of failure. Following our previous work using feedback distortion in a virtual rehabilitation environment to increase strength and range of motion, we address the use of a visual feedback distortion environment to alter movement coordination patterns. Methods Fifty-one able-bodied subjects participated in the study. During the experiment, each subject learned to move their index finger and thumb in a particular target pattern while receiving visual feedback. Visual distortion was implemented as a magnification of the error between the thumb and/or index finger position and the desired position. The error reduction profile and the effect of distortion were analyzed by comparing the mean total absolute error and a normalized error that measured performance improvement for each subject as a proportion of the baseline error. Results The results of the study showed that (1) a different coordination pattern could be trained with visual feedback and the new pattern transferred to trials without visual feedback, (2) distorting one finger at a time allowed a different error reduction profile from the controls, and (3) overall learning was not sped up by distorting individual fingers. Conclusion It is important that robotic rehabilitation incorporate, in the near future, multi-limb or multi-finger coordination tasks that are important for activities of daily life. This study marks the first investigation of multi-finger coordination tasks under visual feedback manipulation.
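    The error magnification described in the Methods can be illustrated with a one-line rule; the sketch below is our own minimal rendering (the gain value and function name are illustrative, not taken from the study): the cursor for each finger is drawn at the desired position plus an amplified version of the tracking error.

        def distorted_cursor(actual_pos, desired_pos, gain=2.0):
            """Return the on-screen position for one finger (positions in mm, 1-D)."""
            error = actual_pos - desired_pos
            return desired_pos + gain * error   # gain > 1 magnifies the visible error

        # Example: a 3 mm undershoot is displayed as a 6 mm undershoot.
        print(distorted_cursor(actual_pos=47.0, desired_pos=50.0))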

  14. The role of visual and direct force feedback in robotics-assisted mitral valve annuloplasty.

    Science.gov (United States)

    Currie, Maria E; Talasaz, Ali; Rayman, Reiza; Chu, Michael W A; Kiaii, Bob; Peters, Terry; Trejos, Ana Luisa; Patel, Rajni

    2017-09-01

    The objective of this work was to determine the effect of both direct force feedback and visual force feedback on the amount of force applied to mitral valve tissue during ex vivo robotics-assisted mitral valve annuloplasty. A force feedback-enabled master-slave surgical system was developed to provide both visual and direct force feedback during robotics-assisted cardiac surgery. This system measured the amount of force applied by novice and expert surgeons to cardiac tissue during ex vivo mitral valve annuloplasty repair. The addition of visual (2.16 ± 1.67 N), direct (1.62 ± 0.86 N), or both visual and direct force feedback (2.15 ± 1.08 N) resulted in a lower mean maximum force applied to mitral valve tissue while suturing compared with no force feedback (3.34 ± 1.93 N). These results suggest that force feedback may be required to limit the forces applied to cardiac tissue during robotics-assisted mitral valve annuloplasty suturing. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Design and implementation of an interface supporting information navigation tasks using hyperbolic visualization technique

    International Nuclear Information System (INIS)

    Lee, J. K.; Choi, I. K.; Jun, S. H.; Park, K. O.; Seo, Y. S.; Seo, S. M.; Koo, I. S.; Jang, M. H.

    2001-01-01

    Visualization techniques can be used to support operators' information navigation tasks, especially on systems consisting of an enormous volume of information, such as the operating information display system and the computerized operating procedure system in the advanced control room of nuclear power plants. By offering an easily understood view of hierarchically structured information, these techniques can reduce the operator's supplementary navigation task load. As a result, operators can pay more attention to the primary tasks and ultimately improve their cognitive task performance. In this thesis, an interface was designed and implemented using a hyperbolic visualization technique, which is expected to be applied as a means of optimizing operators' information navigation tasks.

  16. Tactile-Foot Stimulation Can Assist the Navigation of People with Visual Impairment

    Directory of Open Access Journals (Sweden)

    Ramiro Velázquez

    2015-01-01

    Full Text Available Background. Tactile interfaces that stimulate the plantar surface with vibrations could represent a step forward toward the development of wearable, inconspicuous, unobtrusive, and inexpensive assistive devices for people with visual impairments. Objective. To study how people understand information through their feet and to maximize the capabilities of tactile-foot perception for assisting human navigation. Methods. Based on the physiology of the plantar surface, three prototypes of electronic tactile interfaces for the foot have been developed. With important technological improvements between them, all three prototypes essentially consist of a set of vibrating actuators embedded in a foam shoe-insole. Perceptual experiments involving direction recognition and real-time navigation in space were conducted with a total of 60 voluntary subjects. Results. The developed prototypes demonstrated that they are capable of transmitting tactile information that is easy and fast to understand. Average direction recognition rates were 76%, 88.3%, and 94.2% for subjects wearing the first, second, and third prototype, respectively. Exhibiting significant advances in tactile-foot stimulation, the third prototype was evaluated in navigation tasks. Results show that subjects were capable of following directional instructions useful for navigating spaces. Conclusion. Footwear providing tactile stimulation can be considered for assisting the navigation of people with visual impairments.

  17. Illumination Tolerance for Visual Navigation with the Holistic Min-Warping Method

    Directory of Open Access Journals (Sweden)

    Ralf Möller

    2014-02-01

    Full Text Available Holistic visual navigation methods are an emerging alternative to the ubiquitous feature-based methods. Holistic methods match entire images pixel-wise instead of extracting and comparing local feature descriptors. In this paper we investigate which pixel-wise distance measures are most suitable for the holistic min-warping method with respect to illumination invariance. Two novel approaches are presented: tunable distance measures—weighted combinations of illumination-invariant and illumination-sensitive terms—and two novel forms of “sequential” correlation which are only invariant against intensity shifts but not against multiplicative changes. Navigation experiments on indoor image databases collected at the same locations but under different conditions of illumination demonstrate that tunable distance measures perform optimally by mixing their two portions instead of using the illumination-invariant term alone. Sequential correlation performs best among all tested methods, and as well but much faster in an approximated form. Mixing with an additional illumination-sensitive term is not necessary for sequential correlation. We show that min-warping with approximated sequential correlation can successfully be applied to visual navigation of cleaning robots.
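    As a sketch of the "tunable" pixel-wise distance idea described above, the function below mixes an illumination-sensitive term (raw absolute difference) with an illumination-invariant term (absolute difference after mean/contrast normalization) under a single weight; the concrete terms used in the paper may differ, so this stands in for the idea rather than reproducing it.

        import numpy as np

        def tunable_distance(a, b, w=0.5):
            """a, b: image patches/columns as float arrays; w in [0, 1] mixes the two terms."""
            sensitive = np.mean(np.abs(a - b))                      # changes with illumination
            az = (a - a.mean()) / (a.std() + 1e-9)                  # normalized copies are
            bz = (b - b.mean()) / (b.std() + 1e-9)                  # insensitive to shift/scale
            invariant = np.mean(np.abs(az - bz))
            return (1.0 - w) * invariant + w * sensitive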

  18. Effects of Real-Time Visual Feedback on Pre-Service Teachers' Singing

    Science.gov (United States)

    Leong, S.; Cheng, L.

    2014-01-01

    This pilot study focuses on the use of real-time visual feedback technology (VFT) in vocal training. The empirical research has two aims: to ascertain the effectiveness of the real-time visual feedback software "Sing & See" in the vocal training of pre-service music teachers and the teachers' perspective on their experience with…

  19. Visual Acuity Testing: Feedback Affects Neither Outcome nor Reproducibility, but Leaves Participants Happier.

    Science.gov (United States)

    Bach, Michael; Schäfer, Kerstin

    2016-01-01

    Assessment of visual acuity is a well-standardized procedure, at least for expert opinions and clinical trials. It is often recommended not to give patients feedback on the correctness of their responses. As this viewpoint has not been quantitatively examined so far, we quantitatively assessed possible effects of feedback on visual acuity testing. In 40 normal participants we presented Landolt Cs in 8 orientations using the automated Freiburg Acuity Test (FrACT); feedback was provided in 2 x 4 conditions: (A) no feedback, (B) acoustic signals indicating correctness, (C) visual indication of the correct orientation, and (D) a combination of (B) and (C). After each run the participants judged comfort. Main outcome measures were absolute visual acuity (logMAR), its test-retest agreement (limits of agreement) and participants' comfort estimates on a 5-step symmetric Likert scale. Feedback influenced acuity outcome significantly (p = 0.02), but with a tiny effect size: 0.02 logMAR poorer acuity for (D) compared to (A), with even weaker effects for (B) and (C). Test-retest agreement was high (limits of agreement: ± 1.0 lines) and did not depend on feedback (p>0.5). The comfort ranking clearly differed, by 2 steps on the Likert scale: condition (A)-no feedback-was on average "slightly uncomfortable", while the other three conditions were "slightly comfortable". Feedback affected neither reproducibility nor the acuity outcome to any relevant extent. The participants, however, reported markedly greater comfort with any kind of feedback. We conclude that systematic feedback (as implemented in FrACT) offers nothing but advantages for routine use.
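    The test-retest "limits of agreement" reported above can be computed with a Bland-Altman-style calculation; the sketch below is a generic version under the assumption of paired logMAR acuities from two runs per participant, not the study's actual analysis script.

        import numpy as np

        def limits_of_agreement(run1_logmar, run2_logmar):
            """Return (lower, upper) 95% limits of agreement for paired measurements."""
            diffs = np.asarray(run1_logmar) - np.asarray(run2_logmar)
            bias = diffs.mean()                      # mean test-retest difference
            half_width = 1.96 * diffs.std(ddof=1)    # ±1.96 SD of the differences
            return bias - half_width, bias + half_width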

  20. Tactile Gap Detection Deteriorates during Bimanual Symmetrical Movements under Mirror Visual Feedback.

    Directory of Open Access Journals (Sweden)

    Janet H Bultitude

    Full Text Available It has been suggested that incongruence between signals for motor intention and sensory input can cause pain and other sensory abnormalities. This claim is supported by reports that moving in an environment of induced sensorimotor conflict leads to elevated pain and sensory symptoms in those with certain painful conditions. Similar procedures can lead to reports of anomalous sensations in healthy volunteers too. In the present study, we used mirror visual feedback to investigate the effects of sensorimotor incongruence on responses to stimuli that arise from sources external to the body, in particular, touch. Incongruence between the sensory and motor signals for the right arm was manipulated by having the participants make symmetrical or asymmetrical movements while watching a reflection of their left arm in a parasagittal mirror, or the left hand surface of a similarly positioned opaque board. In contrast to our prediction, sensitivity to the presence of gaps in tactile stimulation of the right forearm was not reduced when participants made asymmetrical movements during mirror visual feedback, as compared to when they made symmetrical or asymmetrical movements with no visual feedback. Instead, sensitivity was reduced when participants made symmetrical movements during mirror visual feedback relative to the other three conditions. We suggest that small discrepancies between sensory and motor information, as they occur during mirror visual feedback with symmetrical movements, can impair tactile processing. In contrast, asymmetrical movements with mirror visual feedback may not impact tactile processing because the larger discrepancies between sensory and motor information may prevent the integration of these sources of information. These results contrast with previous reports of anomalous sensations during exposure to both low and high sensorimotor conflict, but are nevertheless in agreement with a forward model interpretation of perceptual

  1. Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation.

    Science.gov (United States)

    Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina

    2017-01-01

    Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.

  2. Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation.

    Directory of Open Access Journals (Sweden)

    Andrew J Kolarik

    Full Text Available Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.

  3. Influence of visual feedback on human task performance in ITER remote handling

    Energy Technology Data Exchange (ETDEWEB)

    Schropp, Gwendolijn Y.R., E-mail: g.schropp@heemskerk-innovative.nl [Utrecht University, Utrecht (Netherlands); Heemskerk Innovative Technology, Noordwijk (Netherlands); Heemskerk, Cock J.M. [Heemskerk Innovative Technology, Noordwijk (Netherlands); Kappers, Astrid M.L.; Tiest, Wouter M. Bergmann [Helmholtz Institute-Utrecht University, Utrecht (Netherlands); Elzendoorn, Ben S.Q. [FOM-Institute for Plasma Physics Rijnhuizen, Association EURATOM/FOM, Partner in the Trilateral Euregio Clusterand ITER-NL, PO box 1207, 3430 BE Nieuwegein (Netherlands); Bult, David [FOM-Institute for Plasma Physics Rijnhuizen, Association EURATOM/FOM, Partner in the Trilateral Euregio Clusterand ITER-NL, PO box 1207, 3430 BE Nieuwegein (Netherlands)

    2012-08-15

    Highlights: ► The performance of human operators in an ITER-like test facility for remote handling. ► Different sources of visual feedback influence how fast one can complete a maintenance task. ► Insights learned could be used in design of operator work environment or training procedures. - Abstract: In ITER, maintenance operations will be largely performed by remote handling (RH). Before ITER can be put into operation, safety regulations and licensing authorities require proof of maintainability for critical components. Part of the proof will come from using standard components and procedures. Additional verification and validation is based on simulation and hardware tests in 1:1 scale mockups. The Master Slave manipulator system (MS2) Benchmark Product was designed to implement a reference set of maintenance tasks representative for ITER remote handling. Experiments were performed with two versions of the Benchmark Product. In both experiments, the quality of visual feedback varied by exchanging direct view with indirect view (using video cameras) in order to measure and analyze its impact on human task performance. The first experiment showed that both experienced and novice RH operators perform a simple task significantly better with direct visual feedback than with camera feedback. A more complex task showed a large variation in results and could not be completed by many novice operators. Experienced operators commented on both the mechanical design and visual feedback. In a second experiment, a more elaborate task was tested on an improved Benchmark product. Again, the task was performed significantly faster with direct visual feedback than with camera feedback. In post-test interviews, operators indicated that they regarded the lack of 3D perception as the primary factor hindering their performance.

  4. Influence of visual feedback on human task performance in ITER remote handling

    International Nuclear Information System (INIS)

    Schropp, Gwendolijn Y.R.; Heemskerk, Cock J.M.; Kappers, Astrid M.L.; Tiest, Wouter M. Bergmann; Elzendoorn, Ben S.Q.; Bult, David

    2012-01-01

    Highlights: ► The performance of human operators in an ITER-like test facility for remote handling. ► Different sources of visual feedback influence how fast one can complete a maintenance task. ► Insights learned could be used in design of operator work environment or training procedures. - Abstract: In ITER, maintenance operations will be largely performed by remote handling (RH). Before ITER can be put into operation, safety regulations and licensing authorities require proof of maintainability for critical components. Part of the proof will come from using standard components and procedures. Additional verification and validation is based on simulation and hardware tests in 1:1 scale mockups. The Master Slave manipulator system (MS2) Benchmark Product was designed to implement a reference set of maintenance tasks representative for ITER remote handling. Experiments were performed with two versions of the Benchmark Product. In both experiments, the quality of visual feedback varied by exchanging direct view with indirect view (using video cameras) in order to measure and analyze its impact on human task performance. The first experiment showed that both experienced and novice RH operators perform a simple task significantly better with direct visual feedback than with camera feedback. A more complex task showed a large variation in results and could not be completed by many novice operators. Experienced operators commented on both the mechanical design and visual feedback. In a second experiment, a more elaborate task was tested on an improved Benchmark product. Again, the task was performed significantly faster with direct visual feedback than with camera feedback. In post-test interviews, operators indicated that they regarded the lack of 3D perception as the primary factor hindering their performance.

  5. Improving training of laparoscopic tissue manipulation skills using various visual force feedback types

    NARCIS (Netherlands)

    Smit, Daan; Spruit, Edward; Dankelman, J.; Tuijthof, G.J.M.; Hamming, J; Horeman, T.

    2017-01-01

    Background Visual force feedback allows trainees to learn laparoscopic tissue manipulation skills. The aim of this experimental study was to find the most efficient visual force feedback method to acquire these skills. Retention and transfer validity to an untrained task were assessed. Methods

  6. Learning without knowing: subliminal visual feedback facilitates ballistic motor learning

    DEFF Research Database (Denmark)

    Lundbye-Jensen, Jesper; Leukel, Christian; Nielsen, Jens Bo

    It is a well-described phenomenon that we may respond to features of our surroundings without being aware of them. It is also a well-known principle that learning is reinforced by augmented feedback on motor performance. In the present experiment we hypothesized that motor learning may be facilitated by subconscious (subliminal) augmented visual feedback on motor performance. To test this, 45 subjects participated in the experiment, which involved learning of a ballistic task. The task was to execute simple ankle plantar flexion movements as quickly as possible within 200 ms and to continuously improve … Subliminal augmented feedback, although not consciously perceived by the learner, indeed facilitated ballistic motor learning. This effect likely relates to multiple (conscious versus unconscious) processing of visual feedback and to the specific neural circuitries involved in optimization of ballistic motor performance.

  7. Eye movements in interception with delayed visual feedback.

    Science.gov (United States)

    Cámara, Clara; de la Malla, Cristina; López-Moliner, Joan; Brenner, Eli

    2018-04-19

    The increased reliance on electronic devices such as smartphones in our everyday life exposes us to various delays between our actions and their consequences. Whereas it is known that people can adapt to such delays, the mechanisms underlying such adaptation remain unclear. To better understand these mechanisms, the current study explored the role of eye movements in interception with delayed visual feedback. In two experiments, eye movements were recorded as participants tried to intercept a moving target with their unseen finger while receiving delayed visual feedback about their own movement. In Experiment 1, the target randomly moved in one of two different directions at one of two different velocities. The delay between the participant's finger movement and movement of the cursor that provided feedback about the finger movements was gradually increased. Despite the delay, participants followed the target with their gaze. They were quite successful at hitting the target with the cursor. Thus, they moved their finger to a position that was ahead of where they were looking. Removing the feedback showed that participants had adapted to the delay. In Experiment 2, the target always moved in the same direction and at the same velocity, while the cursor's delay varied across trials. Participants still always directed their gaze at the target. They adjusted their movement to the delay on each trial, often succeeding to intercept the target with the cursor. Since their gaze was always directed at the target, and they could not know the delay until the cursor started moving, participants must have been using peripheral vision of the delayed cursor to guide it to the target. Thus, people deal with delays by directing their gaze at the target and using both experience from previous trials (Experiment 1) and peripheral visual information (Experiment 2) to guide their finger in a way that will make the cursor hit the target.

  8. Influence of visual clutter on the effect of navigated safety inspection: a case study on elevator installation.

    Science.gov (United States)

    Liao, Pin-Chao; Sun, Xinlu; Liu, Mei; Shih, Yu-Nien

    2018-01-11

    Navigated safety inspection based on task-specific checklists can increase the hazard detection rate, theoretically with interference from scene complexity. Visual clutter, a proxy of scene complexity, can theoretically impair visual search performance, but its impact on the effect of safety inspection performance remains to be explored for the optimization of navigated inspection. This research aims to explore whether the relationship between working memory and hazard detection rate is moderated by visual clutter. Based on a perceptive model of hazard detection, we: (a) developed a mathematical influence model for construction hazard detection; (b) designed an experiment to observe the performance of hazard detection rate with adjusted working memory under different levels of visual clutter, while using an eye-tracking device to observe participants' visual search processes; (c) utilized logistic regression to analyze the developed model under various visual clutter. The effect of a strengthened working memory on the detection rate through increased search efficiency is more apparent in high visual clutter. This study confirms the role of visual clutter in construction-navigated inspections, thus serving as a foundation for the optimization of inspection planning.
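    The moderation question posed above is typically tested with an interaction term; the sketch below (placeholder data and variable names, statsmodels, not the study's dataset or exact model) fits a logistic regression of hazard detection on working memory, visual clutter, and their product.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        df = pd.DataFrame({
            "working_memory": rng.normal(size=300),   # placeholder predictor
            "visual_clutter": rng.normal(size=300),   # placeholder moderator
        })
        # Simulated outcome: the working-memory effect weakens as clutter rises.
        logit = 0.8 * df.working_memory - 0.5 * df.working_memory * df.visual_clutter
        df["detected"] = (rng.random(300) < 1 / (1 + np.exp(-logit))).astype(int)

        model = smf.logit("detected ~ working_memory * visual_clutter", data=df).fit(disp=0)
        print(model.summary())    # the interaction coefficient captures the moderation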

  9. Combined mirror visual and auditory feedback therapy for upper limb phantom pain: a case report

    Directory of Open Access Journals (Sweden)

    Yan Kun

    2011-01-01

    Full Text Available Abstract Introduction Phantom limb sensation and phantom limb pain are very common issues after amputations. In recent years, accumulating data have implicated 'mirror visual feedback' or 'mirror therapy' as helpful in the treatment of phantom limb sensation and phantom limb pain. Case presentation We present the case of a 24-year-old Caucasian man, a left upper limb amputee, treated with mirror visual feedback combined with auditory feedback, with improved pain relief. Conclusion This case may suggest that auditory feedback might enhance the effectiveness of mirror visual feedback and serve as a valuable addition to the complex multi-sensory processing of body perception in patients who are amputees.

  10. Real-time feedback on nonverbal clinical communication. Theoretical framework and clinician acceptance of ambient visual design.

    Science.gov (United States)

    Hartzler, A L; Patel, R A; Czerwinski, M; Pratt, W; Roseway, A; Chandrasekaran, N; Back, A

    2014-01-01

    This article is part of the focus theme of Methods of Information in Medicine on "Pervasive Intelligent Technologies for Health". Effective nonverbal communication between patients and clinicians fosters both the delivery of empathic patient-centered care and positive patient outcomes. Although nonverbal skill training is a recognized need, few efforts to enhance patient-clinician communication provide visual feedback on nonverbal aspects of the clinical encounter. We describe a novel approach that uses social signal processing technology (SSP) to capture nonverbal cues in real time and to display ambient visual feedback on control and affiliation--two primary, yet distinct dimensions of interpersonal nonverbal communication. To examine the design and clinician acceptance of ambient visual feedback on nonverbal communication, we 1) formulated a model of relational communication to ground SSP and 2) conducted a formative user study using mixed methods to explore the design of visual feedback. Based on a model of relational communication, we reviewed interpersonal communication research to map nonverbal cues to signals of affiliation and control evidenced in patient-clinician interaction. Corresponding with our formulation of this theoretical framework, we designed ambient real-time visualizations that reflect variations of affiliation and control. To explore clinicians' acceptance of this visual feedback, we conducted a lab study using the Wizard-of-Oz technique to simulate system use with 16 healthcare professionals. We followed up with seven of those participants through interviews to iterate on the design with a revised visualization that addressed emergent design considerations. Ambient visual feedback on non- verbal communication provides a theoretically grounded and acceptable way to provide clinicians with awareness of their nonverbal communication style. We provide implications for the design of such visual feedback that encourages empathic patient

  11. Visual tables of contents: structure and navigation of digital video material

    NARCIS (Netherlands)

    Janse, M.D.; Das, D.A.D.; Tang, H.K.; Paassen, van R.L.F.

    1997-01-01

    This paper presents a study that was initiated to address the relationship between visualization of content information, the structure of this information and the effective traversal and navigation for users of digital video storage systems in domestic environments. Preliminary results in two topic

  12. [Nursing Experience of Using Mirror Visual Feedback for a Schizophrenia Patient With Visual Hallucinations].

    Science.gov (United States)

    Lan, Shu-Ling; Chen, Yu-Chi; Chang, Hsiu-Ju

    2018-06-01

    The aim of this paper was to describe the nursing application of mirror visual feedback in a patient suffering from long-term visual hallucinations. The intervention period was from May 15th to October 19th, 2015. Using the five facets of psychiatric nursing assessment, several health problems were observed, including disturbed sensory perceptions (prominent visual hallucinations) and poor self-care (e.g. limited abilities to self-bathe and put on clothing). Furthermore, "caregiver role strain" due to the related intense care burden was noted. After building up a therapeutic interpersonal relationship, the technique of brain plasticity and mirror visual feedback were performed using multiple nursing care methods in order to help the patient suppress her visual hallucinations by enhancing a different visual stimulus. We also taught her how to cope with visual hallucinations in a proper manner. The frequency and content of visual hallucinations were recorded to evaluate the effects of management. The therapeutic plan was formulated together with the patient in order to boost her self-confidence, and a behavior contract was implemented in order to improve her personal hygiene. In addition, psychoeducation on disease-related topics was provided to the patient's family, and they were encouraged to attend relevant therapeutic activities. As a result, her family became less passive and negative and more engaged in and positive about her future. The crisis of "caregiver role strain" was successfully resolved. The current experience is hoped to serve as a model for enhancing communication and cooperation between family and staff in similar medical settings.

  13. Unipedal balance in healthy adults: effect of visual environments yielding decreased lateral velocity feedback.

    Science.gov (United States)

    Deyer, T W; Ashton-Miller, J A

    1999-09-01

    To test the (null) hypotheses that the reliability of unipedal balance is unaffected by the attenuation of visual velocity feedback and that, relative to baseline performance, deterioration of balance success rates from attenuated visual velocity feedback will not differ between groups of young men and older women, and the presence (or absence) of a vertical foreground object will not affect balance success rates. Single blind, single case study. University research laboratory. Two volunteer samples: 26 healthy young men (mean age, 20.0yrs; SD, 1.6); 23 healthy older women (mean age, 64.9 yrs; SD, 7.8). Normalized success rates in unipedal balance task. Subjects were asked to transfer to and maintain unipedal stance for 5 seconds in a task near the limit of their balance capabilities. Subjects completed 64 trials: 54 trials of three experimental visual scenes in blocked randomized sequences of 18 trials and 10 trials in a normal visual environment. The experimental scenes included two that provided strong velocity/weak position feedback, one of which had a vertical foreground object (SVWP+) and one without (SVWP-), and one scene providing weak velocity/strong position (WVSP) feedback. Subjects' success rates in the experimental environments were normalized by the success rate in the normal environment in order to allow comparisons between subjects using a mixed model repeated measures analysis of variance. The normalized success rate was significantly greater in SVWP+ than in WVSP (p = .0001) and SVWP- (p = .013). Visual feedback significantly affected the normalized unipedal balance success rates (p = .001); neither the group effect nor the group X visual environment interaction was significant (p = .9362 and p = .5634, respectively). Normalized success rates did not differ significantly between the young men and older women in any visual environment. Near the limit of the young men's or older women's balance capability, the reliability of transfer to unipedal

  14. Teleoperation of steerable flexible needles by combining kinesthetic and vibratory feedback.

    Science.gov (United States)

    Pacchierotti, Claudio; Abayazid, Momen; Misra, Sarthak; Prattichizzo, Domenico

    2014-01-01

    Needle insertion into soft tissue is a minimally invasive surgical procedure that demands high accuracy. In this respect, robotic systems with autonomous control algorithms have been exploited as the main tool to achieve high accuracy and reliability. However, for reasons of safety and responsibility, autonomous robotic control is often not desirable. Therefore, it is necessary to also focus on techniques enabling clinicians to directly control the motion of the surgical tools. In this work, we address that challenge and present a novel teleoperated robotic system able to steer flexible needles. The proposed system tracks the position of the needle using an ultrasound imaging system and computes the needle's ideal position and orientation to reach a given target. The master haptic interface then provides the clinician with mixed kinesthetic-vibratory navigation cues to guide the needle toward the computed ideal position and orientation. Twenty participants carried out an experiment of teleoperated needle insertion into a soft-tissue phantom, considering four different experimental conditions. Participants were provided with either mixed kinesthetic-vibratory feedback or mixed kinesthetic-visual feedback. Moreover, we considered two different ways of computing the ideal position and orientation of the needle: with or without set-points. Vibratory feedback was found to be more effective than visual feedback in conveying navigation cues, with a mean targeting error of 0.72 mm when using set-points, and of 1.10 mm without set-points.

  15. Effects of continuous visual feedback during sitting balance training in chronic stroke survivors.

    Science.gov (United States)

    Pellegrino, Laura; Giannoni, Psiche; Marinelli, Lucio; Casadio, Maura

    2017-10-16

    Postural control deficits are common in stroke survivors, and rehabilitation programs often include balance training based on visual feedback to improve the control of body position or of the voluntary shift of body weight in space. In the present work, a group of chronic stroke survivors, while sitting on a force plate, exercised the ability to control their Center of Pressure with training based on continuous visual feedback. The goal of this study was to test if, and to what extent, chronic stroke survivors were able to learn the task and transfer the learned ability to a condition without visual feedback and to directions and displacement amplitudes different from those experienced during training. Eleven chronic stroke survivors (5 male, 6 female; age: 59.72 ± 12.84 years) participated in this study. Subjects were seated on a stool positioned on top of a custom-built force platform. Their Center of Pressure positions were mapped to the coordinates of a cursor on a computer monitor. During training, the cursor position was always displayed and the subjects had to reach targets by shifting their Center of Pressure through trunk movements. Pre- and post-training, subjects were required to reach the training targets, as well as other targets positioned in different directions and at different displacement amplitudes, without visual feedback of the cursor. During training, most stroke survivors were able to perform the required task and to improve their performance in terms of duration, smoothness, and movement extent, although not in terms of movement direction. However, when we removed the visual feedback, most of them showed no improvement with respect to their pre-training performance. This study suggests that postural training based exclusively on continuous visual feedback can provide limited benefits for stroke survivors, if administered alone. However, the positive gains observed during training justify the integration of this technology-based protocol in a well
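    The Center of Pressure (CoP) to cursor mapping described above can be sketched as follows; the CoP formulas assume a common force-plate convention (vertical force Fz and moments Mx, My about the plate origin at the surface), and the gain and screen size are illustrative values, not parameters taken from the study.

        def center_of_pressure(fz, mx, my):
            """CoP coordinates (m) on the plate surface for vertical force fz (N)."""
            if abs(fz) < 1e-6:
                return 0.0, 0.0                      # avoid dividing by a near-zero load
            return -my / fz, mx / fz

        def cop_to_cursor(cop_x, cop_y, gain_px_per_m=4000.0, screen=(1920, 1080)):
            """Map CoP displacement (m) to pixel coordinates centred on the screen."""
            cx, cy = screen[0] / 2.0, screen[1] / 2.0
            return cx + gain_px_per_m * cop_x, cy - gain_px_per_m * cop_y   # screen y grows downward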

  16. Design, Implementation and Evaluation of an Indoor Navigation System for Visually Impaired People.

    Science.gov (United States)

    Martinez-Sala, Alejandro Santos; Losilla, Fernando; Sánchez-Aarnoutse, Juan Carlos; García-Haro, Joan

    2015-12-21

    Indoor navigation is a challenging task for visually impaired people. Although there are guidance systems available for such purposes, they have some drawbacks that hamper their direct application in real-life situations. These systems are either too complex, inaccurate, or require very special conditions (i.e., rare in everyday life) to operate. In this regard, Ultra-Wideband (UWB) technology has been shown to be effective for indoor positioning, providing a high level of accuracy and low installation complexity. This paper presents SUGAR, an indoor navigation system for visually impaired people which uses UWB for positioning, a spatial database of the environment for pathfinding through the application of the A* algorithm, and a guidance module. The interaction with the user takes place using acoustic signals and voice commands played through headphones. The suitability of the system for indoor navigation has been verified by means of a functional and usable prototype through a field test with a blind person. In addition, other tests have been conducted in order to show the accuracy of different relevant parts of the system.
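    The pathfinding step mentioned above applies the A* algorithm over a spatial database of the environment; the following is a minimal grid-based A* sketch under simplifying assumptions of our own (occupancy grid, unit step costs, Manhattan heuristic) rather than the SUGAR implementation.

        import heapq

        def astar(grid, start, goal):
            """grid: 2-D list with 0 = free and 1 = blocked; start/goal: (row, col)."""
            def h(a, b):                              # Manhattan-distance heuristic
                return abs(a[0] - b[0]) + abs(a[1] - b[1])

            open_heap = [(h(start, goal), 0, start, None)]
            came_from, g_cost = {}, {start: 0}
            while open_heap:
                _, g, node, parent = heapq.heappop(open_heap)
                if node in came_from:                 # already expanded with a better cost
                    continue
                came_from[node] = parent
                if node == goal:                      # walk parents back to the start
                    path = []
                    while node is not None:
                        path.append(node)
                        node = came_from[node]
                    return path[::-1]
                r, c = node
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                        ng = g + 1
                        if ng < g_cost.get((nr, nc), float("inf")):
                            g_cost[(nr, nc)] = ng
                            heapq.heappush(open_heap, (ng + h((nr, nc), goal), ng, (nr, nc), node))
            return None                               # no route between start and goal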

  17. Design, Implementation and Evaluation of an Indoor Navigation System for Visually Impaired People

    Directory of Open Access Journals (Sweden)

    Alejandro Santos Martinez-Sala

    2015-12-01

    Full Text Available Indoor navigation is a challenging task for visually impaired people. Although there are guidance systems available for such purposes, they have some drawbacks that hamper their direct application in real-life situations. These systems are either too complex, inaccurate, or require very special conditions (i.e., rare in everyday life) to operate. In this regard, Ultra-Wideband (UWB) technology has been shown to be effective for indoor positioning, providing a high level of accuracy and low installation complexity. This paper presents SUGAR, an indoor navigation system for visually impaired people which uses UWB for positioning, a spatial database of the environment for pathfinding through the application of the A* algorithm, and a guidance module. The interaction with the user takes place using acoustic signals and voice commands played through headphones. The suitability of the system for indoor navigation has been verified by means of a functional and usable prototype through a field test with a blind person. In addition, other tests have been conducted in order to show the accuracy of different relevant parts of the system.

  18. A software module for implementing auditory and visual feedback on a video-based eye tracking system

    Science.gov (United States)

    Rosanlall, Bharat; Gertner, Izidor; Geri, George A.; Arrington, Karl F.

    2016-05-01

    We describe here the design and implementation of a software module that provides both auditory and visual feedback of the eye position measured by a commercially available eye tracking system. The present audio-visual feedback module (AVFM) serves as an extension to the Arrington Research ViewPoint EyeTracker, but it can be easily modified for use with other similar systems. Two modes of audio feedback and one mode of visual feedback are provided in reference to a circular area-of-interest (AOI). Auditory feedback can be either a click tone emitted when the user's gaze point enters or leaves the AOI, or a sinusoidal waveform with frequency inversely proportional to the distance from the gaze point to the center of the AOI. Visual feedback is in the form of a small circular light patch that is presented whenever the gaze-point is within the AOI. The AVFM processes data that are sent to a dynamic-link library by the EyeTracker. The AVFM's multithreaded implementation also allows real-time data collection (1 kHz sampling rate) and graphics processing that allow display of the current/past gaze-points as well as the AOI. The feedback provided by the AVFM described here has applications in military target acquisition and personnel training, as well as in visual experimentation, clinical research, marketing research, and sports training.
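
    The abstract specifies the feedback mapping only qualitatively: a click tone on AOI entry or exit, a sine tone whose frequency falls as the gaze point moves away from the AOI centre, and a light patch while the gaze is inside the AOI. A minimal sketch of one such mapping follows; the AOI geometry, frequency range, and function names are assumptions, not the AVFM's actual parameters.

```python
import math

# Illustrative constants; the actual AVFM parameters are not published here.
AOI_CENTER = (512.0, 384.0)   # px
AOI_RADIUS = 80.0             # px
F_MIN, F_MAX = 200.0, 2000.0  # Hz

def distance_to_center(gaze):
    return math.hypot(gaze[0] - AOI_CENTER[0], gaze[1] - AOI_CENTER[1])

def tone_frequency(gaze):
    """Sine-tone frequency inversely proportional to the gaze-to-centre
    distance, clamped to an audible range."""
    d = max(distance_to_center(gaze), 1.0)          # avoid division by zero
    return min(F_MAX, max(F_MIN, F_MAX * AOI_RADIUS / d))

def feedback_events(prev_gaze, gaze):
    """Return the feedback to present for the current gaze sample."""
    inside_now = distance_to_center(gaze) <= AOI_RADIUS
    inside_before = distance_to_center(prev_gaze) <= AOI_RADIUS
    return {
        "click": inside_now != inside_before,   # click tone on AOI entry/exit
        "tone_hz": tone_frequency(gaze),        # continuous auditory feedback
        "show_patch": inside_now,               # visual patch while inside AOI
    }

print(feedback_events((700.0, 384.0), (560.0, 400.0)))
```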

  19. OpinionSeer: interactive visualization of hotel customer feedback.

    Science.gov (United States)

    Wu, Yingcai; Wei, Furu; Liu, Shixia; Au, Norman; Cui, Weiwei; Zhou, Hong; Qu, Huamin

    2010-01-01

    The rapid development of Web technology has resulted in an increasing number of hotel customers sharing their opinions on the hotel services. Effective visual analysis of online customer opinions is needed, as it has a significant impact on building a successful business. In this paper, we present OpinionSeer, an interactive visualization system that could visually analyze a large collection of online hotel customer reviews. The system is built on a new visualization-centric opinion mining technique that considers uncertainty for faithfully modeling and analyzing customer opinions. A new visual representation is developed to convey customer opinions by augmenting well-established scatterplots and radial visualization. To provide multiple-level exploration, we introduce subjective logic to handle and organize subjective opinions with degrees of uncertainty. Several case studies illustrate the effectiveness and usefulness of OpinionSeer on analyzing relationships among multiple data dimensions and comparing opinions of different groups. Aside from data on hotel customer feedback, OpinionSeer could also be applied to visually analyze customer opinions on other products or services.

  20. Use of visual CO2 feedback as a retrofit solution for improving classroom air quality.

    Science.gov (United States)

    Wargocki, P; Da Silva, N A F

    2015-02-01

    Carbon dioxide (CO2) sensors that provide a visual indication were installed in classrooms during normal school operation. During 2-week periods, teachers and students were instructed to open the windows in response to the visual CO2 feedback in 1 week and open them, as they would normally do, without visual feedback, in the other week. In the heating season, two pairs of classrooms were monitored, one pair naturally and the other pair mechanically ventilated. In the cooling season, two pairs of naturally ventilated classrooms were monitored, one pair with split cooling in operation and the other pair with no cooling. Classrooms were matched by grade. Providing visual CO2 feedback reduced CO2 levels, as more windows were opened in this condition. This increased energy use for heating and reduced the cooling requirement in summertime. Split cooling reduced the frequency of window opening only when no visual CO2 feedback was present. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. The Role of Visual Feedback on Power Output During Intermittent Wingate Testing in Ice Hockey Players

    Directory of Open Access Journals (Sweden)

    Petr Stastny

    2018-04-01

    Full Text Available Background: Visual feedback may help elicit peak performance during different types of strength and power testing, but its effect during the anaerobic Wingate test is unexplored. Therefore, the purpose of this study was to determine the effect of visual feedback on power output during a hockey-specific intermittent Wingate test (AnWT6x6) consisting of 6 stages of 6 s intervals with a 1:1 work-to-rest ratio. Methods: Thirty elite college-aged hockey players performed the AnWT6x6 with either constant (n = 15) visual feedback during all 6 stages (CVF) or restricted (n = 15) visual feedback (RVF), where feedback was shown only during the 2nd through 5th stages. Results: In the first stage, there were moderate-to-large effect sizes for absolute peak power (PP) output and PP relative to body mass and PP relative to fat-free mass. However, the remaining stages (2–6) displayed small or negligible effects. Conclusions: These data indicate that visual feedback may play a role in optimizing power output in a non-fatigued state (1st stage), but likely does not play a role in the presence of extreme neuromuscular fatigue (6th stage) during Wingate testing. To achieve the highest peak power, coaches and researchers could provide visual feedback during Wingate testing, as it may positively influence performance in the early stages of testing, but does not result in residual fatigue or negatively affect performance during subsequent stages.

  2. Navigation system for a mobile robot with a visual sensor using a fish-eye lens

    Science.gov (United States)

    Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu

    1998-02-01

    Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been installed with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from a visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on PTP control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.
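
    The robot's pose is reckoned by combining the fish-eye observation of a ceiling reference target with a gyroscope. One simple way to fuse such measurements for the heading component is a complementary filter, sketched below under assumed rates, biases, and sampling; this illustrates the fusion idea only and is not the hybrid image-processing circuitry described in the paper.

```python
import math

def fuse_heading(gyro_rates, visual_headings, dt=0.02, alpha=0.9):
    """Complementary filter: integrate the gyro yaw rate every sample and,
    whenever the ceiling target is seen (entry is not None), nudge the
    estimate toward the absolute heading derived from the fish-eye image."""
    heading = visual_headings[0] if visual_headings[0] is not None else 0.0
    estimates = []
    for rate, vis in zip(gyro_rates, visual_headings):
        heading += rate * dt                           # fast but drifting
        if vis is not None:                            # slow but absolute
            err = math.atan2(math.sin(vis - heading), math.cos(vis - heading))
            heading += (1.0 - alpha) * err
        estimates.append(heading)
    return estimates

# A constant 0.1 rad/s turn measured by a gyro with a 0.02 rad/s bias; the
# visual heading (available every 5th sample) keeps the drift bounded.
true_rate, bias, dt = 0.1, 0.02, 0.02
gyro = [true_rate + bias] * 500
visual = [true_rate * dt * i if i % 5 == 0 else None for i in range(500)]
print(round(fuse_heading(gyro, visual, dt)[-1], 3), "vs true",
      round(true_rate * dt * 500, 3))
```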

  3. Motor sequence learning occurs despite disrupted visual and proprioceptive feedback

    Directory of Open Access Journals (Sweden)

    Boyd Lara A

    2008-07-01

    Full Text Available Abstract Background Recent work has demonstrated the importance of proprioception for the development of internal representations of the forces encountered during a task. Evidence also exists for a significant role for proprioception in the execution of sequential movements. However, little work has explored the role of proprioceptive sensation during the learning of continuous movement sequences. Here, we report that the repeated segment of a continuous tracking task can be learned despite peripherally altered arm proprioception and severely restricted visual feedback regarding motor output. Methods Healthy adults practiced a continuous tracking task over 2 days. Half of the participants experienced vibration that altered proprioception of shoulder flexion/extension of the active tracking arm (experimental condition) and half experienced vibration of the passive resting arm (control condition). Visual feedback was restricted for all participants. Retention testing was conducted on a separate day to assess motor learning. Results Regardless of vibration condition, participants learned the repeated segment, as demonstrated by significant improvements in accuracy for tracking repeated as compared to random continuous movement sequences. Conclusion These results suggest that with practice, participants were able to use residual afferent information to overcome initial interference of tracking ability related to altered proprioception and restricted visual feedback to learn a continuous motor sequence. Motor learning occurred despite an initial interference of tracking noted during acquisition practice.

  4. The absence or temporal offset of visual feedback does not influence adaptation to novel movement dynamics.

    Science.gov (United States)

    McKenna, Erin; Bray, Laurence C Jayet; Zhou, Weiwei; Joiner, Wilsaan M

    2017-10-01

    Delays in transmitting and processing sensory information require correctly associating delayed feedback to issued motor commands for accurate error compensation. The flexibility of this alignment between motor signals and feedback has been demonstrated for movement recalibration to visual manipulations, but the alignment dependence for adapting movement dynamics is largely unknown. Here we examined the effect of visual feedback manipulations on force-field adaptation. Three subject groups used a manipulandum while experiencing a lag in the corresponding cursor motion (0, 75, or 150 ms). When the offset was applied at the start of the session (continuous condition), adaptation was not significantly different between groups. However, these similarities may be due to acclimation to the offset before motor adaptation. We tested additional subjects who experienced the same delays concurrent with the introduction of the perturbation (abrupt condition). In this case adaptation was statistically indistinguishable from the continuous condition, indicating that acclimation to feedback delay was not a factor. In addition, end-point errors were not significantly different across the delay or onset conditions, but end-point correction (e.g., deceleration duration) was influenced by the temporal offset. As an additional control, we tested a group of subjects who performed without visual feedback and found comparable movement adaptation results. These results suggest that visual feedback manipulation (absence or temporal misalignment) does not affect adaptation to novel dynamics, independent of both acclimation and perceptual awareness. These findings could have implications for modeling how the motor system adjusts to errors despite concurrent delays in sensory feedback information. NEW & NOTEWORTHY A temporal offset between movement and distorted visual feedback (e.g., visuomotor rotation) influences the subsequent motor recalibration, but the effects of this offset for

  5. The effect of haptic guidance and visual feedback on learning a complex tennis task.

    Science.gov (United States)

    Marchal-Crespo, Laura; van Raai, Mark; Rauter, Georg; Wolf, Peter; Riener, Robert

    2013-11-01

    While haptic guidance can improve ongoing performance of a motor task, several studies have found that it ultimately impairs motor learning. However, some recent studies suggest that the haptic demonstration of optimal timing, rather than movement magnitude, enhances learning in subjects trained with haptic guidance. Timing of an action plays a crucial role in the proper accomplishment of many motor skills, such as hitting a moving object (discrete timing task) or learning a velocity profile (time-critical tracking task). The aim of the present study is to evaluate which feedback conditions-visual or haptic guidance-optimize learning of the discrete and continuous elements of a timing task. The experiment consisted in performing a fast tennis forehand stroke in a virtual environment. A tendon-based parallel robot connected to the end of a racket was used to apply haptic guidance during training. In two different experiments, we evaluated which feedback condition was more adequate for learning: (1) a time-dependent discrete task-learning to start a tennis stroke and (2) a tracking task-learning to follow a velocity profile. The effect that the task difficulty and subject's initial skill level have on the selection of the optimal training condition was further evaluated. Results showed that the training condition that maximizes learning of the discrete time-dependent motor task depends on the subjects' initial skill level. Haptic guidance was especially suitable for less-skilled subjects and in especially difficult discrete tasks, while visual feedback seems to benefit more skilled subjects. Additionally, haptic guidance seemed to promote learning in a time-critical tracking task, while visual feedback tended to deteriorate the performance independently of the task difficulty and subjects' initial skill level. Haptic guidance outperformed visual feedback, although additional studies are needed to further analyze the effect of other types of feedback visualization on

  6. Feature and Pose Constrained Visual Aided Inertial Navigation for Computationally Constrained Aerial Vehicles

    Science.gov (United States)

    Williams, Brian; Hudson, Nicolas; Tweddle, Brent; Brockers, Roland; Matthies, Larry

    2011-01-01

    A Feature and Pose Constrained Extended Kalman Filter (FPC-EKF) is developed for highly dynamic computationally constrained micro aerial vehicles. Vehicle localization is achieved using only a low performance inertial measurement unit and a single camera. The FPC-EKF framework augments the vehicle's state with both previous vehicle poses and critical environmental features, including vertical edges. This filter framework efficiently incorporates measurements from hundreds of opportunistic visual features to constrain the motion estimate, while allowing navigating and sustained tracking with respect to a few persistent features. In addition, vertical features in the environment are opportunistically used to provide global attitude references. Accurate pose estimation is demonstrated on a sequence including fast traversing, where visual features enter and exit the field-of-view quickly, as well as hover and ingress maneuvers where drift free navigation is achieved with respect to the environment.

  7. An interactive videogame designed to improve respiratory navigator efficiency in children undergoing cardiovascular magnetic resonance.

    Science.gov (United States)

    Hamlet, Sean M; Haggerty, Christopher M; Suever, Jonathan D; Wehner, Gregory J; Grabau, Jonathan D; Andres, Kristin N; Vandsburger, Moriel H; Powell, David K; Sorrell, Vincent L; Fornwalt, Brandon K

    2016-09-06

    Advanced cardiovascular magnetic resonance (CMR) acquisitions often require long scan durations that necessitate respiratory navigator gating. The tradeoff of navigator gating is reduced scan efficiency, particularly when the patient's breathing patterns are inconsistent, as is commonly seen in children. We hypothesized that engaging pediatric participants with a navigator-controlled videogame to help control breathing patterns would improve navigator efficiency and maintain image quality. We developed custom software that processed the Siemens respiratory navigator image in real-time during CMR and represented diaphragm position using a cartoon avatar, which was projected to the participant in the scanner as visual feedback. The game incentivized children to breathe such that the avatar was positioned within the navigator acceptance window (±3 mm) throughout image acquisition. Using a 3T Siemens Tim Trio, 50 children (Age: 14 ± 3 years, 48 % female) with no significant past medical history underwent a respiratory navigator-gated 2D spiral cine displacement encoding with stimulated echoes (DENSE) CMR acquisition first with no feedback (NF) and then with the feedback game (FG). Thirty of the 50 children were randomized to undergo extensive off-scanner training with the FG using a MRI simulator, or no off-scanner training. Navigator efficiency, signal-to-noise ratio (SNR), and global left-ventricular strains were determined for each participant and compared. Using the FG improved average navigator efficiency from 33 ± 15 to 58 ± 13 % (p < 0.001) and improved SNR by 5 % (p = 0.01) compared to acquisitions with NF. There was no difference in navigator efficiency (p = 0.90) or SNR (p = 0.77) between untrained and trained participants for FG acquisitions. Circumferential and radial strains derived from FG acquisitions were slightly reduced compared to NF acquisitions (-16 ± 2 % vs -17 ± 2 %, p < 0.001; 40 ± 10
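
    Navigator efficiency here is the proportion of acquisitions for which the diaphragm position falls inside the ±3 mm acceptance window. A minimal sketch of that metric follows, with made-up diaphragm traces standing in for the free-breathing and feedback-game conditions.

```python
def navigator_efficiency(diaphragm_mm, window_mm=3.0):
    """Percentage of navigator samples whose diaphragm position falls inside
    the acceptance window around the reference position (here 0 mm)."""
    accepted = sum(1 for d in diaphragm_mm if abs(d) <= window_mm)
    return 100.0 * accepted / len(diaphragm_mm)

free_breathing = [0.5, 2.8, 6.1, 9.0, 7.4, 3.2, 1.1, -0.6, 4.9, 8.3]
with_feedback  = [0.4, 1.2, 2.5, 2.9, 1.8, 0.7, -0.3, 2.2, 2.6, 1.5]
print(navigator_efficiency(free_breathing), navigator_efficiency(with_feedback))
```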

  8. Navigating on handheld displays: Dynamic versus Static Keyhole Navigation

    NARCIS (Netherlands)

    Mehra, S.; Werkhoven, P.; Worring, M.

    2006-01-01

    Handheld displays leave little space for the visualization and navigation of spatial layouts representing rich information spaces. The most common navigation method for handheld displays is static peephole navigation: The peephole is static and we move the spatial layout behind it (scrolling). A

  9. Probing feedforward and feedback contributions to awareness with visual masking and transcranial magnetic stimulation.

    Science.gov (United States)

    Tapia, Evelina; Beck, Diane M

    2014-01-01

    A number of influential theories posit that visual awareness relies not only on the initial, stimulus-driven (i.e., feedforward) sweep of activation but also on recurrent feedback activity within and between brain regions. These theories of awareness draw heavily on data from masking paradigms in which visibility of one stimulus is reduced due to the presence of another stimulus. More recently transcranial magnetic stimulation (TMS) has been used to study the temporal dynamics of visual awareness. TMS over occipital cortex affects performance on visual tasks at distinct time points and in a manner that is comparable to visual masking. We draw parallels between these two methods and examine evidence for the neural mechanisms by which visual masking and TMS suppress stimulus visibility. Specifically, both methods have been proposed to affect feedforward as well as feedback signals when applied at distinct time windows relative to stimulus onset and as a result modify visual awareness. Most recent empirical evidence, moreover, suggests that while visual masking and TMS impact stimulus visibility comparably, the processes these methods affect may not be as similar as previously thought. In addition to reviewing both masking and TMS studies that examine feedforward and feedback processes in vision, we raise questions to guide future studies and further probe the necessary conditions for visual awareness.

  10. Visual feedback of tongue movement for novel speech sound learning

    Directory of Open Access Journals (Sweden)

    William F Katz

    2015-11-01

    Full Text Available Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one’s own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker’s learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ̠/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers’ productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing.

  11. Influence of visual feedback on knee extensor isokinetic concentric ...

    African Journals Online (AJOL)

    Isokinetic normative data can be invaluable in identifying an individual's strengths and weaknesses, and thus lead to a more effective use of the individual's time to minimise or overcome his weaknesses while maintaining or improving existing strength. However, visual feedback (VF) may significantly affect the result of ...

  12. Audio-Visual Feedback for Self-monitoring Posture in Ballet Training

    DEFF Research Database (Denmark)

    Knudsen, Esben Winther; Hølledig, Malte Lindholm; Bach-Nielsen, Sebastian Siem

    2017-01-01

    An application for ballet training is presented that monitors the posture position (straightness of the spine and rotation of the pelvis) deviation from the ideal position in real-time. The human skeletal data is acquired through a Microsoft Kinect v2. The movement of the student is mirrored......-coded. In an experiment with 9-12 year-old dance students from a ballet school, comparing the audio-visual feedback modality with no feedback leads to an increase in posture accuracy (p

  13. Reducing Trunk Compensation in Stroke Survivors: A Randomized Crossover Trial Comparing Visual and Force Feedback Modalities.

    Science.gov (United States)

    Valdés, Bulmaro Adolfo; Schneider, Andrea Nicole; Van der Loos, H F Machiel

    2017-10-01

    To investigate whether the compensatory trunk movements of stroke survivors observed during reaching tasks can be decreased by force and visual feedback, and to examine whether one of these feedback modalities is more efficacious than the other in reducing this compensatory tendency. Randomized crossover trial. University research laboratory. Community-dwelling older adults (N=15; 5 women; mean age, 64±11y) with hemiplegia from nontraumatic hemorrhagic or ischemic stroke (>3mo poststroke), recruited from stroke recovery groups, the research group's website, and the community. In a single session, participants received augmented feedback about their trunk compensation during a bimanual reaching task. Visual feedback (60 trials) was delivered through a computer monitor, and force feedback (60 trials) was delivered through 2 robotic devices. Primary outcome measure included change in anterior trunk displacement measured by motion tracking camera. Secondary outcomes included trunk rotation, index of curvature (measure of straightness of hands' path toward target), root mean square error of hands' movement (differences between hand position on every iteration of the program), completion time for each trial, and posttest questionnaire to evaluate users' experience and system's usability. Both visual (-45.6% [45.8 SD] change from baseline, P=.004) and force (-41.1% [46.1 SD], P=.004) feedback were effective in reducing trunk compensation. Scores on secondary outcome measures did not improve with either feedback modality. Neither feedback condition was superior. Visual and force feedback show promise as 2 modalities that could be used to decrease trunk compensation in stroke survivors during reaching tasks. It remains to be established which one of these 2 feedback modalities is more efficacious than the other as a cue to reduce compensatory trunk movement. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  14. The persistence of a visual dominance effect in a telemanipulator task: A comparison between visual and electrotactile feedback

    Science.gov (United States)

    Gaillard, J. P.

    1981-01-01

    The possibility of using electrotactile stimulation in teleoperation, and of observing how such information is interpreted as feedback by the operator, was investigated. It is proposed that visual feedback is more informative than electrotactile feedback, and that complex electrotactile feedback slows down both the motor decision and motor response processes, is processed as an all-or-nothing signal, and bypasses the receptive structure to access directly a working memory in which information is sequentially processed and processing capacity is limited. The electrotactile stimulation is used as an alerting signal. It is suggested that the visual dominance effect results from the advantage of both a transfer function and a sensory memory register where information is pretreated and stored for a short time. It is found that dividing attention affects the acquisition of information but not the subsequent decision processes.

  15. Soft tissue navigation for laparoscopic prostatectomy: evaluation of camera pose estimation for enhanced visualization

    Science.gov (United States)

    Baumhauer, M.; Simpfendörfer, T.; Schwarz, R.; Seitel, M.; Müller-Stich, B. P.; Gutt, C. N.; Rassweiler, J.; Meinzer, H.-P.; Wolf, I.

    2007-03-01

    We introduce a novel navigation system to support minimally invasive prostate surgery. The system utilizes transrectal ultrasonography (TRUS) and needle-shaped navigation aids to visualize hidden structures via Augmented Reality. During the intervention, the navigation aids are segmented once from a 3D TRUS dataset and subsequently tracked by the endoscope camera. Camera Pose Estimation methods directly determine position and orientation of the camera in relation to the navigation aids. Accordingly, our system does not require any external tracking device for registration of endoscope camera and ultrasonography probe. In addition to a preoperative planning step in which the navigation targets are defined, the procedure consists of two main steps which are carried out during the intervention: First, the preoperatively prepared planning data is registered with an intraoperatively acquired 3D TRUS dataset and the segmented navigation aids. Second, the navigation aids are continuously tracked by the endoscope camera. The camera's pose can thereby be derived and relevant medical structures can be superimposed on the video image. This paper focuses on the latter step. We have implemented several promising real-time algorithms and incorporated them into the Open Source Toolkit MITK (www.mitk.org). Furthermore, we have evaluated them for minimally invasive surgery (MIS) navigation scenarios. For this purpose, a virtual evaluation environment has been developed, which allows for the simulation of navigation targets and navigation aids, including their measurement errors. Besides evaluating the accuracy of the computed pose, we have analyzed the impact of an inaccurate pose and the resulting displacement of navigation targets in Augmented Reality.
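
    Because the navigation aids' 3D positions are known from the segmented TRUS dataset, the camera pose can be recovered from their 2D detections in the endoscopic image, which is the classic Perspective-n-Point problem. The sketch below uses OpenCV's solvePnP on a hypothetical planar four-marker configuration with assumed intrinsics; it illustrates the pose-from-markers step and the overlay projection of a planned target, not the specific algorithms evaluated in the paper.

```python
import numpy as np
import cv2

# Hypothetical 3D positions (mm) of four coplanar navigation-aid tips in the
# TRUS dataset's coordinate frame, and their detected 2D image locations (px).
object_points = np.array([[0.0, 0.0, 0.0], [40.0, 0.0, 0.0],
                          [40.0, 30.0, 0.0], [0.0, 30.0, 0.0]])
image_points = np.array([[320.0, 240.0], [421.0, 239.0],
                         [419.0, 316.0], [318.0, 314.0]])

# Assumed pinhole intrinsics of the endoscope camera, no lens distortion.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)   # rotation: TRUS/world frame -> camera frame

# With the camera pose known, a planned target defined in the TRUS frame can
# be projected onto the endoscopic video for the Augmented Reality overlay.
target = np.array([[20.0, 15.0, -5.0]])
overlay_px, _ = cv2.projectPoints(target, rvec, tvec, K, dist)
print(ok, overlay_px.reshape(-1, 2))
```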

  16. Can explicit visual feedback of postural sway efface the effects of sensory manipulations on mediolateral balance performance?

    OpenAIRE

    Cofre Lizama, L.E.; Pijnappels, M.A.G.M.; Reeves, N.P.; Verschueren, S.M.; van Dieen, J.H.

    2016-01-01

    Explicit visual feedback on postural sway is often used in balance assessment and training. However, up-weighting of visual information may mask impairments of other sensory systems. We therefore aimed to determine whether the effects of somatosensory, vestibular, and proprioceptive manipulations on mediolateral balance are reduced by explicit visual feedback on mediolateral sway of the body center of mass and by the presence of visual information. We manipulated sensory inputs of the somatos...

  17. Active training and driving-specific feedback improve older drivers' visual search prior to lane changes.

    Science.gov (United States)

    Lavallière, Martin; Simoneau, Martin; Tremblay, Mathieu; Laurendeau, Denis; Teasdale, Normand

    2012-03-02

    Driving retraining classes may offer an opportunity to attenuate some effects of aging that may alter driving skills. Unfortunately, there is evidence that classroom programs (driving refresher courses) do not improve the driving performance of older drivers. The aim of the current study was to evaluate if simulator training sessions with video-based feedback can modify visual search behaviors of older drivers while changing lanes in urban driving. In order to evaluate the effectiveness of the video-based feedback training, 10 older drivers who received a driving refresher course and feedback about their driving performance were tested with an on-road standardized evaluation before and after participating to a simulator training program (Feedback group). Their results were compared to a Control group (12 older drivers) who received the same refresher course and in-simulator active practice as the Feedback group without receiving driving-specific feedback. After attending the training program, the Control group showed no increase in the frequency of the visual inspection of three regions of interests (rear view and left side mirrors, and blind spot). In contrast, for the Feedback group, combining active training and driving-specific feedbacks increased the frequency of blind spot inspection by 100% (32.3 to 64.9% of verification before changing lanes). These results suggest that simulator training combined with driving-specific feedbacks helped older drivers to improve their visual inspection strategies, and that in-simulator training transferred positively to on-road driving. In order to be effective, it is claimed that driving programs should include active practice sessions with driving-specific feedbacks. Simulators offer a unique environment for developing such programs adapted to older drivers' needs.

  18. Automated numerical simulation of biological pattern formation based on visual feedback simulation framework.

    Science.gov (United States)

    Sun, Mingzhu; Xu, Hui; Zeng, Xingjuan; Zhao, Xin

    2017-01-01

    Biological pattern formation exhibits a rich variety of striking phenomena. Mathematical modeling using reaction-diffusion partial differential equation systems is employed to study the mechanisms of pattern formation; however, selecting model parameters is both difficult and time consuming. In this paper, a visual feedback simulation framework is proposed to calculate the parameters of a mathematical model automatically, based on the basic principle of feedback control. In the simulation framework, the simulation results are visualized and image features are extracted as the system feedback. The unknown model parameters are then obtained by comparing the image features of the simulated image with those of the target biological pattern. The visual feedback simulation framework is applied to two typical applications: pattern formation simulations for vascular mesenchymal cells and for lung development. Within the framework, the spot, stripe, and labyrinthine patterns of vascular mesenchymal cells, as well as the normal branching pattern and a branching pattern lacking side branching for lung development, are obtained in a finite number of iterations. The simulation results indicate that the simulation targets are readily achieved, especially when the simulated patterns are sensitive to the model parameters. Moreover, the simulation framework can be extended to other types of biological pattern formation.

  19. Development and formative evaluation of a visual e-tool to help decision makers navigate the evidence around health financing.

    Science.gov (United States)

    Skordis-Worrall, Jolene; Pulkki-Brännström, Anni-Maria; Utley, Martin; Kembhavi, Gayatri; Bricki, Nouria; Dutoit, Xavier; Rosato, Mikey; Pagel, Christina

    2012-12-21

    There are calls for low and middle income countries to develop robust health financing policies to increase service coverage. However, existing evidence around financing options is complex and often difficult for policy makers to access. To summarize the evidence on the impact of financing health systems and develop an e-tool to help decision makers navigate the findings. After reviewing the literature, we used thematic analysis to summarize the impact of 7 common health financing mechanisms on 5 common health system goals. Information on the relevance of each study to a user's context was provided by 11 country indicators. A Web-based e-tool was then developed to assist users in navigating the literature review. This tool was evaluated using feedback from early users, collected using an online survey and in-depth interviews with key informants. The e-tool provides graphical summaries that allow a user to assess the following parameters with a single snapshot: the number of relevant studies available in the literature, the heterogeneity of evidence, where key evidence is lacking, and how closely the evidence matches their own context. Users particularly liked the visual display and found navigating the tool intuitive. However there was concern that a lack of evidence on positive impact might be construed as evidence against a financing option and that the tool might over-simplify the available financing options. Complex evidence can be made more easily accessible and potentially more understandable using basic Web-based technology and innovative graphical representations that match findings to the users' goals and context.

  20. Watch what you type: the role of visual feedback from the screen and hands in skilled typewriting.

    Science.gov (United States)

    Snyder, Kristy M; Logan, Gordon D; Yamaguchi, Motonori

    2015-01-01

    Skilled typing is controlled by two hierarchically structured processing loops (Logan & Crump, 2011): The outer loop, which produces words, commands the inner loop, which produces keystrokes. Here, we assessed the interplay between the two loops by investigating how visual feedback from the screen (responses either were or were not echoed on the screen) and the hands (the hands either were or were not covered with a box) influences the control of skilled typing. Our results indicated, first, that the reaction time of the first keystroke was longer when responses were not echoed than when they were. Also, the interkeystroke interval (IKSI) was longer when the hands were covered than when they were visible, and the IKSI for responses that were not echoed was longer when explicit error monitoring was required (Exp. 2) than when it was not required (Exp. 1). Finally, explicit error monitoring was more accurate when response echoes were present than when they were absent, and implicit error monitoring (i.e., posterror slowing) was not influenced by visual feedback from the screen or the hands. These findings suggest that the outer loop adjusts the inner-loop timing parameters to compensate for reductions in visual feedback. We suggest that these adjustments are preemptive control strategies designed to execute keystrokes more cautiously when visual feedback from the hands is absent, to generate more cautious motor programs when visual feedback from the screen is absent, and to enable enough time for the outer loop to monitor keystrokes when visual feedback from the screen is absent and explicit error reports are required.

  1. How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation.

    Science.gov (United States)

    Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul

    2016-02-01

    The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
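
    A common formalisation of such view matching is the rotational image difference function: the current panorama is rotated through all azimuths, compared with the stored view, and the best-matching rotation gives the heading. The sketch below computes this for low-resolution synthetic panoramas; the resolution, noise level, and sum-of-squared-differences measure are illustrative assumptions rather than the exact models used in the paper.

```python
import numpy as np

def best_heading(current, stored):
    """Rotational image difference: shift the current panoramic view through
    all azimuthal offsets, compare with the stored view by sum of squared
    differences, and return the shift (in columns) that matches best."""
    diffs = [np.sum((np.roll(current, -s, axis=1) - stored) ** 2)
             for s in range(current.shape[1])]
    return int(np.argmin(diffs)), diffs

rng = np.random.default_rng(0)
stored = rng.random((10, 72))          # 10 x 72 px panorama stored at the goal
current = np.roll(stored, 20, axis=1)  # same scene, agent rotated by 20 columns
current = current + 0.05 * rng.standard_normal(current.shape)  # sensor noise

shift, _ = best_heading(current, stored)
print(shift * 360 / 72, "degrees to turn")  # expect ~100 degrees (20 columns)
```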

  2. Active training and driving-specific feedback improve older drivers' visual search prior to lane changes

    Directory of Open Access Journals (Sweden)

    Lavallière Martin

    2012-03-01

    Full Text Available Abstract Background Driving retraining classes may offer an opportunity to attenuate some effects of aging that may alter driving skills. Unfortunately, there is evidence that classroom programs (driving refresher courses) do not improve the driving performance of older drivers. The aim of the current study was to evaluate if simulator training sessions with video-based feedback can modify visual search behaviors of older drivers while changing lanes in urban driving. Methods In order to evaluate the effectiveness of the video-based feedback training, 10 older drivers who received a driving refresher course and feedback about their driving performance were tested with an on-road standardized evaluation before and after participating to a simulator training program (Feedback group). Their results were compared to a Control group (12 older drivers) who received the same refresher course and in-simulator active practice as the Feedback group without receiving driving-specific feedback. Results After attending the training program, the Control group showed no increase in the frequency of the visual inspection of three regions of interests (rear view and left side mirrors, and blind spot). In contrast, for the Feedback group, combining active training and driving-specific feedbacks increased the frequency of blind spot inspection by 100% (32.3 to 64.9% of verification before changing lanes). Conclusions These results suggest that simulator training combined with driving-specific feedbacks helped older drivers to improve their visual inspection strategies, and that in-simulator training transferred positively to on-road driving. In order to be effective, it is claimed that driving programs should include active practice sessions with driving-specific feedbacks. Simulators offer a unique environment for developing such programs adapted to older drivers' needs.

  3. Effect of visual distraction and auditory feedback on patient effort during robot-assisted movement training after stroke.

    Science.gov (United States)

    Secoli, Riccardo; Milot, Marie-Helene; Rosati, Giulio; Reinkensmeyer, David J

    2011-04-23

    Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis and fourteen non-impaired healthy control participants, tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Visual distraction decreased participants effort during a standard robot-assisted movement training task. This effect was greater for the hemiparetic arm, suggesting that the increased demands associated
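
    The auditory feedback is described as a repeating beep whose repetition rate grows with tracking error. One plausible error-to-beep mapping is sketched below; the deadband, gain, and interval limits are illustrative values, not those used in the study.

```python
def beep_interval(tracking_error_deg, deadband=1.0,
                  min_interval=0.15, max_interval=1.5, gain=0.25):
    """Seconds between beeps: silent inside the deadband, then the interval
    shrinks (i.e., the repetition rate rises) as tracking error grows."""
    if tracking_error_deg <= deadband:
        return None                      # no beeping when tracking is accurate
    interval = max_interval - gain * (tracking_error_deg - deadband)
    return max(min_interval, interval)

for err in (0.5, 2.0, 5.0, 10.0):
    print(err, "deg ->", beep_interval(err))
```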

  4. 6-DOF Pose Estimation of a Robotic Navigation Aid by Tracking Visual and Geometric Features.

    Science.gov (United States)

    Ye, Cang; Hong, Soonhac; Tamjidi, Amirhossein

    2015-10-01

    This paper presents a 6-DOF Pose Estimation (PE) method for a Robotic Navigation Aid (RNA) for the visually impaired. The RNA uses a single 3D camera for PE and object detection. The proposed method processes the camera's intensity and range data to estimates the camera's egomotion that is then used by an Extended Kalman Filter (EKF) as the motion model to track a set of visual features for PE. A RANSAC process is employed in the EKF to identify inliers from the visual feature correspondences between two image frames. Only the inliers are used to update the EKF's state. The EKF integrates the egomotion into the camera's pose in the world coordinate system. To retain the EKF's consistency, the distance between the camera and the floor plane (extracted from the range data) is used by the EKF as the observation of the camera's z coordinate. Experimental results demonstrate that the proposed method results in accurate pose estimates for positioning the RNA in indoor environments. Based on the PE method, a wayfinding system is developed for localization of the RNA in a home environment. The system uses the estimated pose and the floorplan to locate the RNA user in the home environment and announces the points of interest and navigational commands to the user through a speech interface. This work was motivated by the limitations of the existing navigation technology for the visually impaired. Most of the existing methods use a point/line measurement sensor for indoor object detection. Therefore, they lack capability in detecting 3D objects and positioning a blind traveler. Stereovision has been used in recent research. However, it cannot provide reliable depth data for object detection. Also, it tends to produce a lower localization accuracy because its depth measurement error quadratically increases with the true distance. This paper suggests a new approach for navigating a blind traveler. The method uses a single 3D time-of-flight camera for both 6-DOF PE and 3D object
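
    The abstract describes a RANSAC step that separates inlier feature correspondences from outliers before the egomotion update. The sketch below shows the generic pattern for 3D-3D correspondences from a range camera: repeatedly fit a rigid transform (Kabsch) to a minimal sample, keep the hypothesis with the most inliers, and refit on those inliers. The data, threshold, and closed-form fit are assumptions for illustration, not the paper's EKF formulation.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rotation R and translation t such that Q ~ P @ R.T + t
    (Kabsch algorithm on paired 3D points)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def ransac_egomotion(P, Q, iters=200, thresh=0.03, seed=1):
    """Keep the rigid transform supported by the most inlier correspondences,
    then refit it using all of those inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        sample = rng.choice(len(P), 3, replace=False)
        R, t = rigid_fit(P[sample], Q[sample])
        residuals = np.linalg.norm(Q - (P @ R.T + t), axis=1)
        inliers = residuals < thresh
        if inliers.sum() > best.sum():
            best = inliers
    R, t = rigid_fit(P[best], Q[best])
    return R, t, best

# Synthetic frame-to-frame motion (small yaw plus a translation) with 20% of
# the feature matches corrupted, standing in for bad correspondences.
rng = np.random.default_rng(0)
P = rng.uniform(-1.0, 1.0, (50, 3))
yaw = 0.1
R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.05, 0.02, 0.0])
Q[:10] += rng.uniform(-0.5, 0.5, (10, 3))

R, t, inliers = ransac_egomotion(P, Q)
print(inliers.sum(), "inliers, recovered t =", np.round(t, 3))
```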

  5. Real-Time Knee Adduction Moment Feedback for Gait Retraining Through Visual and Tactile Displays

    KAUST Repository

    Wheeler, Jason W.; Shull, Pete B.; Besier, Thor F.

    2011-01-01

    The external knee adduction moment (KAM) measured during gait is an indicator of tibiofemoral joint osteoarthritis progression and various strategies have been proposed to lower it. Gait retraining has been shown to be an effective, noninvasive approach for lowering the KAM. We present a new gait retraining approach in which the KAM is fed back to subjects in real-time during ambulation. A study was conducted in which 16 healthy subjects learned to alter gait patterns to lower the KAM through visual or tactile (vibration) feedback. Participants converged on a comfortable gait in just a few minutes by using the feedback to iterate on various kinematic modifications. All subjects adopted altered gait patterns with lower KAM compared with normal ambulation (average reduction of 20.7%). Tactile and visual feedbacks were equally effective for real-time training, although subjects using tactile feedback took longer to converge on an acceptable gait. This study shows that real-time feedback of the KAM can greatly increase the effectiveness and efficiency of subject-specific gait retraining compared with conventional methods. © 2011 American Society of Mechanical Engineers.

  6. Navigation-aided visualization of lumbosacral nerves for anterior sacroiliac plate fixation: a case report.

    Science.gov (United States)

    Takao, Masaki; Nishii, Takashi; Sakai, Takashi; Sugano, Nobuhiko

    2014-06-01

    Anterior sacroiliac joint plate fixation for unstable pelvic ring fractures avoids soft tissue problems in the buttocks; however, the lumbosacral nerves lie in close proximity to the sacroiliac joint and may be injured during the procedure. A 49 year-old woman with a type C pelvic ring fracture was treated with an anterior sacroiliac plate using a computed tomography (CT)-three-dimensional (3D)-fluoroscopy matching navigation system, which visualized the lumbosacral nerves as well as the iliac and sacral bones. We used a flat panel detector 3D C-arm, which made it possible to superimpose our preoperative CT-based plan on the intra-operative 3D-fluoroscopic images. No postoperative complications were noted. Intra-operative lumbosacral nerve visualization using computer navigation was useful to recognize the 'at-risk' area for nerve injury during anterior sacroiliac plate fixation. Copyright © 2013 John Wiley & Sons, Ltd.

  7. Visual feedback alters force control and functional activity in the visuomotor network after stroke

    Directory of Open Access Journals (Sweden)

    Derek B. Archer

    2018-01-01

    Full Text Available Modulating visual feedback may be a viable option to improve motor function after stroke, but the neurophysiological basis for this improvement is not clear. Visual gain can be manipulated by increasing or decreasing the spatial amplitude of an error signal. Here, we combined a unilateral visually guided grip force task with functional MRI to understand how changes in the gain of visual feedback alter brain activity in the chronic phase after stroke. Analyses focused on brain activation when force was produced by the most impaired hand of the stroke group as compared to the non-dominant hand of the control group. Our experiment produced three novel results. First, gain-related improvements in force control were associated with an increase in activity in many regions within the visuomotor network in both the stroke and control groups. These regions include the extrastriate visual cortex, inferior parietal lobule, ventral premotor cortex, cerebellum, and supplementary motor area. Second, the stroke group showed gain-related increases in activity in additional regions of lobules VI and VIIb of the ipsilateral cerebellum. Third, relative to the control group, the stroke group showed increased activity in the ipsilateral primary motor cortex, and activity in this region did not vary as a function of visual feedback gain. The visuomotor network, cerebellum, and ipsilateral primary motor cortex have each been targeted in rehabilitation interventions after stroke. Our observations provide new insight into the role these regions play in processing visual gain during a precisely controlled visuomotor task in the chronic phase after stroke.

  8. Visual feedback training using WII Fit improves balance in Parkinson's disease.

    Science.gov (United States)

    Zalecki, Tomasz; Gorecka-Mazur, Agnieszka; Pietraszko, Wojciech; Surowka, Artur D; Novak, Pawel; Moskala, Marek; Krygowska-Wajs, Anna

    2013-01-01

    Postural instability including imbalance is the most disabling long term problem in Parkinson's disease (PD) that does not respond to pharmacotherapy. This study aimed at investigating the effectiveness of a novel visual-feedback training method, using the Wii Fit balance board, in improving balance in patients with PD. Twenty four patients with moderate PD were included in the study, which comprised a 6-week home-based balance training program using the Nintendo Wii Fit and balance board. The PD patients significantly improved their results in the Berg Balance Scale, Tinetti's Performance-Oriented Mobility Assessment, Timed Up-and-Go, Sit-to-stand test, 10-Meter Walk test and Activities-specific Balance Confidence scale at the end of the programme. This study suggests that visual feedback training using Wii Fit with balance board could improve dynamic and functional balance as well as motor disability in PD patients.

  9. Technology-Assisted Rehabilitation of Writing Skills in Parkinson’s Disease: Visual Cueing versus Intelligent Feedback

    Directory of Open Access Journals (Sweden)

    Evelien Nackaerts

    2017-01-01

    Full Text Available Recent research showed that visual cueing can have both beneficial and detrimental effects on handwriting of patients with Parkinson's disease (PD) and healthy controls depending on the circumstances. Hence, using other sensory modalities to deliver cueing or feedback may be a valuable alternative. Therefore, the current study compared the effects of short-term training with either continuous visual cues or intermittent intelligent verbal feedback. Ten PD patients and nine healthy controls were randomly assigned to one of these training modes. To assess transfer of learning, writing performance was assessed in the absence of cueing and feedback on both trained and untrained writing sequences. The feedback pen and a touch-sensitive writing tablet were used for testing. Both training types resulted in improved writing amplitudes for the trained and untrained sequences. In conclusion, these results suggest that the feedback pen is a valuable tool to implement writing training in a tailor-made fashion for people with PD. Future studies should include larger sample sizes and different subgroups of PD for long-term training with the feedback pen.

  10. Rubber hand illusion under delayed visual feedback.

    Directory of Open Access Journals (Sweden)

    Sotaro Shimada

    Full Text Available BACKGROUND: Rubber hand illusion (RHI) is a subject's illusion of the self-ownership of a rubber hand that was touched synchronously with their own hand. Although previous studies have confirmed that this illusion disappears when the rubber hand was touched asynchronously with the subject's hand, the minimum temporal discrepancy of these two events for attenuation of RHI has not been examined. METHODOLOGY/PRINCIPAL FINDINGS: In this study, various temporal discrepancies between visual and tactile stimulations were introduced by using a visual feedback delay experimental setup, and RHI effects in each temporal discrepancy condition were systematically tested. The results showed that subjects felt significantly greater RHI effects with temporal discrepancies of less than 300 ms compared with longer temporal discrepancies. The RHI effects on reaching performance (proprioceptive drift) showed similar conditional differences. CONCLUSIONS/SIGNIFICANCE: Our results first demonstrated that a temporal discrepancy of less than 300 ms between visual stimulation of the rubber hand and tactile stimulation to the subject's own hand is preferable to induce strong sensation of RHI. We suggest that the time window of less than 300 ms is critical for multi-sensory integration processes constituting the self-body image.

  11. A neural model of motion processing and visual navigation by cortical area MST.

    Science.gov (United States)

    Grossberg, S; Mingolla, E; Pack, C

    1999-12-01

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.

  12. Attainment and retention of force moderation following laparoscopic resection training with visual force feedback.

    Science.gov (United States)

    Hernandez, Rafael; Onar-Thomas, Arzu; Travascio, Francesco; Asfour, Shihab

    2017-11-01

    Laparoscopic training with visual force feedback can lead to immediate improvements in force moderation. However, the long-term retention of this kind of learning and its potential decay are yet unclear. A laparoscopic resection task and force sensing apparatus were designed to assess the benefits of visual force feedback training. Twenty-two male university students with no previous experience in laparoscopy underwent relevant FLS proficiency training. Participants were randomly assigned to either a control or treatment group. Both groups trained on the task for 2 weeks as follows: initial baseline, sixteen training trials, and post-test immediately after. The treatment group had visual force feedback during training, whereas the control group did not. Participants then performed four weekly test trials to assess long-term retention of training. Outcomes recorded were maximum pulling and pushing forces, completion time, and rated task difficulty. Extreme maximum pulling force values were tapered throughout both the training and retention periods. Average maximum pushing forces were significantly lowered towards the end of training and during retention period. No significant decay of applied force learning was found during the 4-week retention period. Completion time and rated task difficulty were higher during training, but results indicate that the difference eventually fades during the retention period. Significant differences in aptitude across participants were found. Visual force feedback training improves on certain aspects of force moderation in a laparoscopic resection task. Results suggest that with enough training there is no significant decay of learning within the first month of the retention period. It is essential to account for differences in aptitude between individuals in this type of longitudinal research. This study shows how an inexpensive force measuring system can be used with an FLS Trainer System after some retrofitting. Surgical

  13. A real-time articulatory visual feedback approach with target presentation for second language pronunciation learning.

    Science.gov (United States)

    Suemitsu, Atsuo; Dang, Jianwu; Ito, Takayuki; Tiede, Mark

    2015-10-01

    Articulatory information can support learning or remediating pronunciation of a second language (L2). This paper describes an electromagnetic articulometer-based visual-feedback approach using an articulatory target presented in real-time to facilitate L2 pronunciation learning. This approach trains learners to adjust articulatory positions to match targets for a L2 vowel estimated from productions of vowels that overlap in both L1 and L2. Training of Japanese learners for the American English vowel /æ/ that included visual training improved its pronunciation regardless of whether audio training was also included. Articulatory visual feedback is shown to be an effective method for facilitating L2 pronunciation learning.

  14. Effect of visual distraction and auditory feedback on patient effort during robot-assisted movement training after stroke

    Directory of Open Access Journals (Sweden)

    Reinkensmeyer David J

    2011-04-01

    Full Text Available Abstract Background Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Methods Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis and fourteen non-impaired healthy control participants, tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Results Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Conclusions Visual distraction decreased participants effort during a standard robot-assisted movement training task. This effect was greater for

  15. Parsimonious Ways to Use Vision for Navigation

    Directory of Open Access Journals (Sweden)

    Paul Graham

    2012-05-01

    Full Text Available The use of visual information for navigation appears to be a universal strategy for sighted animals, amongst which, one particular group of expert navigators are the ants. The broad interest in studies of ant navigation is in part due to their small brains, thus biomimetic engineers expect to be impressed by elegant control solutions, and psychologists might hope for a description of the minimal cognitive requirements for complex spatial behaviours. In this spirit, we have been taking an interdisciplinary approach to the visually guided navigation of ants in their natural habitat. Behavioural experiments and natural image statistics show that visual navigation need not depend on the remembering or recognition of objects. Further modelling work suggests how simple behavioural routines might enable navigation using familiarity detection rather than explicit recall, and we present a proof of concept that visual navigation using familiarity can be achieved without specifying when or what to learn, nor separating routes into sequences of waypoints. We suggest that our current model represents the only detailed and complete model of insect route guidance to date. What's more, we believe the suggested mechanisms represent useful parsimonious hypotheses for visually guided navigation in larger-brained animals.

  16. Promoting Increased Pitch Variation in Oral Presentations with Transient Visual Feedback

    Directory of Open Access Journals (Sweden)

    Rebecca Hincks

    2009-10-01

    Full Text Available This paper investigates learner response to a novel kind of intonation feedback generated from speech analysis. Instead of displays of pitch curves, our feedback is flashing lights that show how much pitch variation the speaker has produced. The variable used to generate the feedback is the standard deviation of fundamental frequency as measured in semitones. Flat speech causes the system to show yellow lights, while more expressive speech that has used pitch to give focus to any part of an utterance generates green lights. Participants in the study were 14 Chinese students of English at intermediate and advanced levels. A group that received visual feedback was compared with a group that received audio feedback. Pitch variation was measured at four stages: in a baseline oral presentation; for the first and second halves of three hours of training; and finally in the production of a new oral presentation. Both groups increased their pitch variation with training, and the effect lasted after the training had ended. The test group showed a significantly higher increase than the control group, indicating that the feedback is effective. These positive results imply that the feedback could be beneficially used in a system for practicing oral presentations.
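
    A minimal sketch (illustrative only, not code from the cited system) of the feedback variable described above: the standard deviation of fundamental frequency measured in semitones, computed from a voiced pitch track. The 100 Hz reference and the 2-semitone threshold separating "flat" from "expressive" speech are assumptions made for the example.

      import numpy as np

      def pitch_variation_semitones(f0_hz, ref_hz=100.0):
          # SD of F0 on a semitone scale; the reference only shifts the scale,
          # so the standard deviation does not depend on its choice.
          semitones = 12.0 * np.log2(np.asarray(f0_hz, dtype=float) / ref_hz)
          return float(np.std(semitones))

      def light_colour(sd_semitones, threshold=2.0):   # assumed threshold
          return "green" if sd_semitones >= threshold else "yellow"

      flat = 200.0 * 2 ** (np.random.normal(0.0, 0.5, 500) / 12.0)    # ~0.5 st SD
      lively = 200.0 * 2 ** (np.random.normal(0.0, 4.0, 500) / 12.0)  # ~4 st SD
      print(light_colour(pitch_variation_semitones(flat)),
            light_colour(pitch_variation_semitones(lively)))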

  17. Effects of visual feedback-induced variability on motor learning of handrim wheelchair propulsion.

    Science.gov (United States)

    Leving, Marika T; Vegter, Riemer J K; Hartog, Johanneke; Lamoth, Claudine J C; de Groot, Sonja; van der Woude, Lucas H V

    2015-01-01

    It has been suggested that a higher intra-individual variability benefits the motor learning of wheelchair propulsion. The present study evaluated whether feedback-induced variability on wheelchair propulsion technique variables would also enhance the motor learning process. Learning was operationalized as an improvement in mechanical efficiency and propulsion technique, which are thought to be closely related during the learning process. Seventeen participants received visual feedback-based practice (feedback group) and 15 participants received regular practice (natural learning group). Both groups received an equal practice dose of 80 min, over 3 weeks, at 0.24 W/kg at a treadmill speed of 1.11 m/s. To compare both groups the pre- and post-test were performed without feedback. The feedback group received real-time visual feedback on seven propulsion variables with instruction to manipulate the presented variable to achieve the highest possible variability (1st 4-min block) and optimize it in the prescribed direction (2nd 4-min block). To increase motor exploration the participants were unaware of the exact variable they received feedback on. Energy consumption and the propulsion technique variables with their respective coefficient of variation were calculated to evaluate the amount of intra-individual variability. The feedback group, which practiced with higher intra-individual variability, improved the propulsion technique between pre- and post-test to the same extent as the natural learning group. Mechanical efficiency improved between pre- and post-test in the natural learning group but remained unchanged in the feedback group. These results suggest that feedback-induced variability inhibited the improvement in mechanical efficiency. Moreover, since both groups improved propulsion technique but only the natural learning group improved mechanical efficiency, it can be concluded that the improvement in mechanical efficiency and propulsion technique do not always appear

  18. Effects of visual feedback-induced variability on motor learning of handrim wheelchair propulsion.

    Directory of Open Access Journals (Sweden)

    Marika T Leving

    Full Text Available It has been suggested that a higher intra-individual variability benefits the motor learning of wheelchair propulsion. The present study evaluated whether feedback-induced variability on wheelchair propulsion technique variables would also enhance the motor learning process. Learning was operationalized as an improvement in mechanical efficiency and propulsion technique, which are thought to be closely related during the learning process. Seventeen participants received visual feedback-based practice (feedback group) and 15 participants received regular practice (natural learning group). Both groups received an equal practice dose of 80 min, over 3 weeks, at 0.24 W/kg at a treadmill speed of 1.11 m/s. To compare both groups the pre- and post-test were performed without feedback. The feedback group received real-time visual feedback on seven propulsion variables with instruction to manipulate the presented variable to achieve the highest possible variability (1st 4-min block) and optimize it in the prescribed direction (2nd 4-min block). To increase motor exploration the participants were unaware of the exact variable they received feedback on. Energy consumption and the propulsion technique variables with their respective coefficient of variation were calculated to evaluate the amount of intra-individual variability. The feedback group, which practiced with higher intra-individual variability, improved the propulsion technique between pre- and post-test to the same extent as the natural learning group. Mechanical efficiency improved between pre- and post-test in the natural learning group but remained unchanged in the feedback group. These results suggest that feedback-induced variability inhibited the improvement in mechanical efficiency. Moreover, since both groups improved propulsion technique but only the natural learning group improved mechanical efficiency, it can be concluded that the improvement in mechanical efficiency and propulsion technique do not

  19. Haptic feedback for enhancing realism of walking simulations.

    Science.gov (United States)

    Turchet, Luca; Burelli, Paolo; Serafin, Stefania

    2013-01-01

    In this paper, we describe several experiments whose goal is to evaluate the role of plantar vibrotactile feedback in enhancing the realism of walking experiences in multimodal virtual environments. To achieve this goal we built an interactive and a noninteractive multimodal feedback system. While during the use of the interactive system subjects physically walked, during the use of the noninteractive system the locomotion was simulated while subjects were sitting on a chair. In both configurations subjects were exposed to auditory and audio-visual stimuli presented with and without the haptic feedback. Results of the experiments provide a clear preference toward the simulations enhanced with haptic feedback, showing that the haptic channel can lead to more realistic experiences in both interactive and noninteractive configurations. The majority of subjects clearly appreciated the added feedback. However, some subjects found the added feedback unpleasant. This might be due, on one hand, to the limits of the haptic simulation and, on the other hand, to the different individual desire to be involved in the simulations. Our findings can be applied to the context of physical navigation in multimodal virtual environments as well as to enhance the user experience of watching a movie or playing a video game.

  20. Peripheral visual feedback: a powerful means of supporting effective attention allocation in event-driven, data-rich environments.

    Science.gov (United States)

    Nikolic, M I; Sarter, N B

    2001-01-01

    Breakdowns in human-automation coordination in data-rich, event-driven domains such as aviation can be explained in part by a mismatch between the high degree of autonomy yet low observability of modern technology. To some extent, the latter is the result of an increasing reliance in feedback design on foveal vision--an approach that fails to support pilots in tracking system-induced changes and events in parallel with performing concurrent flight-related tasks. One possible solution to the problem is the distribution of tasks and information across sensory modalities and processing channels. A simulator study is presented that compared the effectiveness of current foveal feedback and two implementations of peripheral visual feedback for keeping pilots informed about uncommanded changes in the status of an automated cockpit system. Both peripheral visual displays resulted in higher detection rates and faster response times, without interfering with the performance of concurrent visual tasks any more than does currently available automation feedback. Potential applications include improved display designs that support effective attention allocation in a variety of complex dynamic environments, such as aviation, process control, and medicine.

  1. Enhance students’ motivation to learn programming by using direct visual feed-back

    DEFF Research Database (Denmark)

    Kofoed, Lise B.; Reng, Lars

    2011-01-01

    The technical subjects chosen are within programming. Image-processing algorithms are used as a means of providing direct visual feedback for learning basic C/C++. The pedagogical approach is within a PBL framework and is based on dialogue and collaborative learning. At the same time the intention...... was to establish a community of practice among the students and the teachers. A direct visual feedback and a higher level of merging between the artistic, creative, and technical lectures have been the focus of motivation as well as a complete restructuring of the elements of the technical lectures. The paper...... abilities and enhanced balance between the interdisciplinary disciplines of the study are analyzed. The conclusion is that the technical courses have gained a higher status for the students. The students now see it as a very important basis for their further study, and their learning results have improved...

  2. Effect of visual feedback on the occipito-parietal-motor network in Parkinson's disease patients with freezing of gait

    Directory of Open Access Journals (Sweden)

    Priya D Velu

    2014-01-01

    Full Text Available Freezing of gait (FOG) is an elusive phenomenon that debilitates a large number of Parkinson’s disease (PD) patients regardless of stage of disease, medication status, or DBS implantation. Sensory cues, especially visual feedback cues, have been shown to alleviate FOG episodes or prevent episodes from even occurring. Here, we examine cortical information flow between occipital, parietal, and motor areas during the pre-movement stage of gait in a PD-with-FOG patient who had a strong positive behavioral response to visual cues, a PD-with-FOG patient without any behavioral response to visual cues, and an age-matched healthy control, before and after training with visual feedback. Results for this case study show differences in cortical information flow between the responding PD-with-FOG patient and the other two subjects, notably, an increased information flow in the beta range. Tentatively suggesting the formation of an alternative cortical sensory-motor pathway during training with visual feedback, these results are proposed as a subject for further verification employing larger cohorts of patients.

  3. Development of an online radiology case review system featuring interactive navigation of volumetric image datasets using advanced visualization techniques

    International Nuclear Information System (INIS)

    Yang, Hyun Kyung; Kim, Boh Kyoung; Jung, Ju Hyun; Kang, Heung Sik; Lee, Kyoung Ho; Woo, Hyun Soo; Jo, Jae Min; Lee, Min Hee

    2015-01-01

    To develop an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques. Our Institutional Review Board approved the use of the patient data and waived the need for informed consent. We determined the following system requirements: volumetric navigation, accessibility, scalability, undemanding case management, trainee encouragement, and simulation of a busy practice. The system comprised a case registry server, client case review program, and commercially available cloud-based image viewing system. In the pilot test, we used 30 cases of low-dose abdomen computed tomography for the diagnosis of acute appendicitis. In each case, a trainee was required to navigate through the images and submit answers to the case questions. The trainee was then given the correct answers and key images, as well as the image dataset with annotations on the appendix. After evaluation of all cases, the system displayed the diagnostic accuracy and average review time, and the trainee was asked to reassess the failed cases. The pilot system was deployed successfully in a hands-on workshop course. We developed an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques

  4. Development of an online radiology case review system featuring interactive navigation of volumetric image datasets using advanced visualization techniques

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Hyun Kyung; Kim, Boh Kyoung; Jung, Ju Hyun; Kang, Heung Sik; Lee, Kyoung Ho [Dept. of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of); Woo, Hyun Soo [Dept. of Radiology, SMG-SNU Boramae Medical Center, Seoul (Korea, Republic of); Jo, Jae Min [Dept. of Computer Science and Engineering, Seoul National University, Seoul (Korea, Republic of); Lee, Min Hee [Dept. of Radiology, Soonchunhyang University Bucheon Hospital, Bucheon (Korea, Republic of)

    2015-11-15

    To develop an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques. Our Institutional Review Board approved the use of the patient data and waived the need for informed consent. We determined the following system requirements: volumetric navigation, accessibility, scalability, undemanding case management, trainee encouragement, and simulation of a busy practice. The system comprised a case registry server, client case review program, and commercially available cloud-based image viewing system. In the pilot test, we used 30 cases of low-dose abdomen computed tomography for the diagnosis of acute appendicitis. In each case, a trainee was required to navigate through the images and submit answers to the case questions. The trainee was then given the correct answers and key images, as well as the image dataset with annotations on the appendix. After evaluation of all cases, the system displayed the diagnostic accuracy and average review time, and the trainee was asked to reassess the failed cases. The pilot system was deployed successfully in a hands-on workshop course. We developed an online radiology case review system that allows interactive navigation of volumetric image datasets using advanced visualization techniques.

  5. Hand Motion-Based Remote Control Interface with Vibrotactile Feedback for Home Robots

    Directory of Open Access Journals (Sweden)

    Juan Wu

    2013-06-01

    Full Text Available This paper presents the design and implementation of a hand-held interface system for the locomotion control of home robots. A handheld controller is proposed to implement hand motion recognition and hand motion-based robot control. The handheld controller can provide a ‘connect-and-play’ service for the users to control the home robot with visual and vibrotactile feedback. Six natural hand gestures are defined for navigating the home robots. A three-axis accelerometer is used to detect the hand motions of the user. The recorded acceleration data are analysed and classified into corresponding control commands according to their characteristic curves. A vibration motor is used to provide vibrotactile feedback to the user when an improper operation is performed. The performances of the proposed hand motion-based interface and the traditional keyboard and mouse interface have been compared in robot navigation experiments. The experimental results of home robot navigation show that the success rate of the handheld controller is 13.33% higher than that of the PC-based controller. The precision of the handheld controller is 15.4% higher than that of the PC-based controller, and the execution time is 24.7% shorter. This means that the proposed hand motion-based interface is more efficient and flexible.
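
    The abstract above maps smoothed accelerometer data to control commands by their characteristic curves; as a purely illustrative sketch (not the authors' classifier), a static hand tilt can be thresholded into coarse navigation commands. The axis convention and the 0.5 g threshold are assumptions.

      def classify_tilt(ax, ay, az, g=9.81, thresh=0.5):
          # Map one smoothed accelerometer sample (m/s^2) to a coarse command.
          # Assumes z points out of the palm when the hand is level; the mapping
          # and the 0.5 g threshold are illustrative, not the published method.
          nx, ny = ax / g, ay / g
          if nx > thresh:
              return "FORWARD"
          if nx < -thresh:
              return "BACKWARD"
          if ny > thresh:
              return "TURN_LEFT"
          if ny < -thresh:
              return "TURN_RIGHT"
          return "STOP"

      print(classify_tilt(6.0, 0.5, 7.5))   # -> FORWARD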

  6. Explicit knowledge about the availability of visual feedback affects grasping with the left but not the right hand.

    Science.gov (United States)

    Tang, Rixin; Whitwell, Robert L; Goodale, Melvyn A

    2014-01-01

    Previous research (Whitwell et al. in Exp Brain Res 188:603-611, 2008; Whitwell and Goodale in Exp Brain Res 194:619-629, 2009) has shown that trial history, but not anticipatory knowledge about the presence or absence of visual feedback on an upcoming trial, plays a vital role in determining how that feedback is exploited when grasping with the right hand. Nothing is known about how the non-dominant left hand behaves under the same feedback regimens. In the present study, therefore, we compared peak grip aperture (PGA) for left- and right-hand grasps executed with and without visual feedback (i.e., closed- vs. open-loop conditions) in right-handed individuals under three different trial schedules: the feedback conditions were blocked separately, they were randomly interleaved, or they were alternated. When feedback conditions were blocked, the PGA was much larger for open-loop trials as compared to closed-loop trials, although this difference was more pronounced for right-hand grasps than left-hand grasps. Like Whitwell et al., we found that mixing open- and closed-loop trials together, compared to blocking them separately, homogenized the PGA for open- and closed-loop grasping in the right hand (i.e., the PGAs became smaller on open-loop trials and larger on closed-loop trials). In addition, the PGAs for right-hand grasps were entirely determined by trial history and not by knowledge of whether or not visual feedback would be available on an upcoming trial. In contrast to grasps made with the right hand, grasps made by the left hand were affected both by trial history and by anticipatory knowledge of the upcoming visual feedback condition. But these effects were observed only on closed-loop trials, i.e., the PGAs of grasps made with the left hand on closed-loop trials were smaller when participants could anticipate the availability of feedback on an upcoming trial (alternating trials) than when they could not (randomized trials). In contrast, grasps made with the

  7. Verbalizing, Visualizing, and Navigating: The Effect of Strategies on Encoding a Large-Scale Virtual Environment

    Science.gov (United States)

    Kraemer, David J. M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.

    2017-01-01

    Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In 2 experiments, participants watched videos of routes through 4 virtual cities and were subsequently tested on their memory for observed landmarks and their ability to…

  8. Effect of an auditory feedback substitution, tactilo-kinesthetic, or visual feedback on kinematics of pouring water from kettle into cup.

    Science.gov (United States)

    Portnoy, Sigal; Halaby, Orli; Dekel-Chen, Dotan; Dierick, Frédéric

    2015-11-01

    Pouring hot water from a kettle into a cup may prove a hazardous task, especially for the elderly or the visually-impaired. Individuals with deteriorating eyesight may endanger their hands by performing this task with both hands, relying on tactilo-kinesthetic feedback (TKF). Auditory feedback (AF) may allow them to perform the task singlehandedly, thereby reducing the risk for injury. However since relying on an AF is not intuitive and requires practice, we aimed to determine if AF supplied during the task of pouring water can be used as naturally as visual feedback (VF) following practice. For this purpose, we quantified, in young healthy sighted subjects (n = 20), the performance and kinematics of pouring water in the presence of three isolated feedbacks: visual, tactilo-kinesthetic, or auditory. There were no significant differences between the weights of spilled water in the AF condition compared to the TKF condition in the first, fifth or thirteenth trials. The subjectively-reported difficulty levels of using the TKF and the AF were significantly reduced between the first and thirteenth trials for both TKF (p = 0.01) and AF (p = 0.001). Trunk rotation during the first trial using the TKF was significantly lower than the trunk rotation while using VF. Also, shoulder adduction during the first trial using the TKF was significantly higher than the shoulder adduction while using the VF. During the AF trials, the median travel distance of the tip of the kettle was significantly reduced in the first trials so that in the thirteenth trial it did not differ significantly from the median travel distance during the thirteenth trial using TKF and VF. The maximal velocity of the tip of the kettle was constant for each of the feedback conditions but was higher by 10 cm s(-1) when using VF than TKF, which in turn was higher by 10 cm s(-1) than when using AF. The smoothness of movement of the TKF and AF conditions, expressed by the normalized jerk score (NJSM), was one and two orders

  9. Terminal attack trajectories of peregrine falcons are described by the proportional navigation guidance law of missiles.

    Science.gov (United States)

    Brighton, Caroline H; Thomas, Adrian L R; Taylor, Graham K

    2017-12-19

    The ability to intercept uncooperative targets is key to many diverse flight behaviors, from courtship to predation. Previous research has looked for simple geometric rules describing the attack trajectories of animals, but the underlying feedback laws have remained obscure. Here, we use GPS loggers and onboard video cameras to study peregrine falcons, Falco peregrinus , attacking stationary targets, maneuvering targets, and live prey. We show that the terminal attack trajectories of peregrines are not described by any simple geometric rule as previously claimed, and instead use system identification techniques to fit a phenomenological model of the dynamical system generating the observed trajectories. We find that these trajectories are best-and exceedingly well-modeled by the proportional navigation (PN) guidance law used by most guided missiles. Under this guidance law, turning is commanded at a rate proportional to the angular rate of the line-of-sight between the attacker and its target, with a constant of proportionality (i.e., feedback gain) called the navigation constant ( N ). Whereas most guided missiles use navigation constants falling on the interval 3 ≤ N ≤ 5, peregrine attack trajectories are best fitted by lower navigation constants (median N law could find use in small visually guided drones designed to remove other drones from protected airspace. Copyright © 2017 the Author(s). Published by PNAS.
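
    The guidance law described above commands a turn rate equal to N times the line-of-sight rate. The sketch below is an illustrative planar implementation (not the authors' fitting code), with an assumed navigation constant N = 3 and invented positions and velocities.

      import numpy as np

      def pn_turn_rate(p_att, v_att, p_tgt, v_tgt, N=3.0):
          # Planar proportional navigation: turn rate commanded in proportion
          # to the angular rate of the attacker-target line of sight (LOS).
          r = p_tgt - p_att                      # LOS vector
          v_rel = v_tgt - v_att                  # relative velocity
          los_rate = (r[0] * v_rel[1] - r[1] * v_rel[0]) / np.dot(r, r)
          return N * los_rate                    # commanded turn rate (rad/s)

      # One control step for an attacker closing on a crossing target
      p_att, v_att = np.array([0.0, 0.0]), np.array([30.0, 0.0])
      p_tgt, v_tgt = np.array([200.0, 50.0]), np.array([-5.0, 5.0])
      print(pn_turn_rate(p_att, v_att, p_tgt, v_tgt))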

  10. Quantifying the impact on navigation performance in visually impaired: Auditory information loss versus information gain enabled through electronic travel aids.

    Directory of Open Access Journals (Sweden)

    Alex Kreilinger

    Full Text Available This study's purpose was to analyze and quantify the impact of auditory information loss versus information gain provided by electronic travel aids (ETAs) on navigation performance in people with low vision. Navigation performance of ten subjects (age: 54.9±11.2 years) with visual acuities >1.0 LogMAR was assessed via the Graz Mobility Test (GMT). Subjects passed through a maze in three different modalities: 'Normal' with visual and auditory information available, 'Auditory Information Loss' with artificially reduced hearing (leaving only visual information), and 'ETA' with a vibrating ETA based on ultrasonic waves, thereby facilitating visual, auditory, and tactile information. Main performance measures comprised passage time and number of contacts. Additionally, head tracking was used to relate head movements to motion direction. When comparing 'Auditory Information Loss' to 'Normal', subjects needed significantly more time (p<0.001), made more contacts (p<0.001), had higher relative viewing angles (p = 0.002), and a higher percentage of orientation losses (p = 0.011). The only significant difference when comparing 'ETA' to 'Normal' was a reduced number of contacts (p<0.001). Our study provides objective, quantifiable measures of the impact of reduced hearing on the navigation performance in low vision subjects. Significant effects of 'Auditory Information Loss' were found for all measures; for example, passage time increased by 17.4%. These findings show that low vision subjects rely on auditory information for navigation. In contrast, the impact of the ETA was not significant but further analysis of head movements revealed two different coping strategies: half of the subjects used the ETA to increase speed, whereas the other half aimed at avoiding contacts.

  11. Changes in Pain Modulation Occur Soon After Whiplash Trauma but are not Related to Altered Perception of Distorted Visual Feedback.

    Science.gov (United States)

    Daenen, Liesbeth; Nijs, Jo; Cras, Patrick; Wouters, Kristien; Roussel, Nathalie

    2014-09-01

    Widespread sensory hypersensitivity has been observed in acute whiplash associated disorders (WAD). Changes in descending pain modulation play a part in central sensitization. However, endogenous pain modulation has never been investigated in acute WAD. Altered perception of distorted visual feedback has been observed in WAD. Both mechanisms (ie, pain modulation and perception of distorted visual feedback) may be different components of one integrated system orchestrated by the brain. This study evaluated conditioned pain modulation (CPM) in acute WAD. Secondly, we investigated whether changes in CPM are associated with altered perception of distorted visual feedback. Thirty patients with acute WAD, 35 patients with chronic WAD and 31 controls were subjected to an experiment evaluating CPM and a coordination task inducing visually mediated changes between sensory feedback and motor output. A significant CPM effect was observed in acute WAD (P = 0.012 and P = 0.006), which was significantly lower compared to controls (P = 0.004 and P = 0.020). No obvious differences in CPM were found between acute and chronic WAD (P = 0.098 and P = 0.041). Changes in CPM were unrelated to altered perception of distorted visual feedback (P > 0.01). Changes in CPM were observed in acute WAD, suggesting less efficient pain modulation. The results suggest that central pain and sensorimotor processing underlie distinctive mechanisms. © 2013 World Institute of Pain.

  12. A dual visual-local feedback model of the vergence eye movement system

    NARCIS (Netherlands)

    Erkelens, C.J.

    2011-01-01

    Pure vergence movements are the eye movements that we make when we change our binocular fixation between targets differing in distance but not in direction relative to the head. Pure vergence is slow and controlled by visual feedback. Saccades are the rapid eye movements that we make between targets

  13. Evolved Navigation Theory and Horizontal Visual Illusions

    Science.gov (United States)

    Jackson, Russell E.; Willey, Chela R.

    2011-01-01

    Environmental perception is prerequisite to most vertebrate behavior and its modern investigation initiated the founding of experimental psychology. Navigation costs may affect environmental perception, such as overestimating distances while encumbered (Solomon, 1949). However, little is known about how this occurs in real-world navigation or how…

  14. Pareto navigation-algorithmic foundation of interactive multi-criteria IMRT planning

    International Nuclear Information System (INIS)

    Monz, M; Kuefer, K H; Bortfeld, T R; Thieke, C

    2008-01-01

    Inherently, IMRT treatment planning involves compromising between different planning goals. Multi-criteria IMRT planning directly addresses this compromising and thus makes it more systematic. Usually, several plans are computed from which the planner selects the most promising following a certain procedure. Applying Pareto navigation for this selection step simultaneously increases the variety of planning options and eases the identification of the most promising plan. Pareto navigation is an interactive multi-criteria optimization method that consists of the two navigation mechanisms 'selection' and 'restriction'. The former allows the formulation of wishes whereas the latter allows the exclusion of unwanted plans. They are realized as optimization problems on the so-called plan bundle-a set constructed from pre-computed plans. They can be approximately reformulated so that their solution time is a small fraction of a second. Thus, the user can be provided with immediate feedback regarding his or her decisions. Pareto navigation was implemented in the MIRA navigator software and allows real-time manipulation of the current plan and the set of considered plans. The changes are triggered by simple mouse operations on the so-called navigation star and lead to real-time updates of the navigation star and the dose visualizations. Since any Pareto-optimal plan in the plan bundle can be found with just a few navigation operations the MIRA navigator allows a fast and directed plan determination. Besides, the concept allows for a refinement of the plan bundle, thus offering a middle course between single plan computation and multi-criteria optimization. Pareto navigation offers so far unmatched real-time interactions, ease of use and plan variety, setting it apart from the multi-criteria IMRT planning methods proposed so far

  15. Pareto navigation: algorithmic foundation of interactive multi-criteria IMRT planning.

    Science.gov (United States)

    Monz, M; Küfer, K H; Bortfeld, T R; Thieke, C

    2008-02-21

    Inherently, IMRT treatment planning involves compromising between different planning goals. Multi-criteria IMRT planning directly addresses this compromising and thus makes it more systematic. Usually, several plans are computed from which the planner selects the most promising following a certain procedure. Applying Pareto navigation for this selection step simultaneously increases the variety of planning options and eases the identification of the most promising plan. Pareto navigation is an interactive multi-criteria optimization method that consists of the two navigation mechanisms 'selection' and 'restriction'. The former allows the formulation of wishes whereas the latter allows the exclusion of unwanted plans. They are realized as optimization problems on the so-called plan bundle -- a set constructed from pre-computed plans. They can be approximately reformulated so that their solution time is a small fraction of a second. Thus, the user can be provided with immediate feedback regarding his or her decisions. Pareto navigation was implemented in the MIRA navigator software and allows real-time manipulation of the current plan and the set of considered plans. The changes are triggered by simple mouse operations on the so-called navigation star and lead to real-time updates of the navigation star and the dose visualizations. Since any Pareto-optimal plan in the plan bundle can be found with just a few navigation operations the MIRA navigator allows a fast and directed plan determination. Besides, the concept allows for a refinement of the plan bundle, thus offering a middle course between single plan computation and multi-criteria optimization. Pareto navigation offers so far unmatched real-time interactions, ease of use and plan variety, setting it apart from the multi-criteria IMRT planning methods proposed so far.
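
    The 'selection' mechanism described above can be illustrated, under the simplifying assumption that every planning objective is linear in the plan weights (e.g., mean doses over a convex combination of the pre-computed plans), as a small linear program: decrease the chosen objective while no other objective of the current navigated plan is allowed to worsen. This is only a sketch of the idea, not the MIRA navigator implementation; the objective matrix is invented.

      import numpy as np
      from scipy.optimize import linprog

      def select(F, w, k):
          # F: (objectives x plans) matrix of objective values for the
          #    pre-computed plans; w: current convex weights; k: objective
          #    the planner wants to decrease. Assumes linear objectives.
          m, n = F.shape
          current = F @ w
          A_ub = np.delete(F, k, axis=0)        # other objectives must not worsen
          b_ub = np.delete(current, k)
          A_eq, b_eq = np.ones((1, n)), [1.0]   # weights stay a convex combination
          res = linprog(F[k], A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                        bounds=[(0.0, 1.0)] * n, method="highs")
          return res.x                          # new weights of the navigated plan

      F = np.array([[10.0, 14.0, 12.0],         # e.g., mean dose to organ A
                    [ 8.0,  5.0,  9.0]])        # e.g., mean dose to organ B
      print(select(F, np.array([1/3, 1/3, 1/3]), k=0))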

  16. Integrating sentiment analysis and term associations with geo-temporal visualizations on customer feedback streams

    Science.gov (United States)

    Hao, Ming; Rohrdantz, Christian; Janetzko, Halldór; Keim, Daniel; Dayal, Umeshwar; Haug, Lars-Erik; Hsu, Mei-Chun

    2012-01-01

    Twitter currently receives over 190 million tweets (small text-based Web posts) and manufacturing companies receive over 10 thousand web product surveys a day, in which people share their thoughts regarding a wide range of products and their features. A large number of tweets and customer surveys include opinions about products and services. However, with Twitter being a relatively new phenomenon, these tweets are underutilized as a source for determining customer sentiments. To explore high-volume customer feedback streams, we integrate three time series-based visual analysis techniques: (1) feature-based sentiment analysis that extracts, measures, and maps customer feedback; (2) a novel idea of term associations that identify attributes, verbs, and adjectives frequently occurring together; and (3) new pixel cell-based sentiment calendars, geo-temporal map visualizations and self-organizing maps to identify co-occurring and influential opinions. We have combined these techniques into a well-fitted solution for an effective analysis of large customer feedback streams such as for movie reviews (e.g., Kung-Fu Panda) or web surveys (buyers).

  17. Survey of computer vision technology for UAV navigation

    Science.gov (United States)

    Xie, Bo; Fan, Xiang; Li, Sijian

    2017-11-01

    Navigation based on computer vision technology, which has the characteristics of strong independence and high precision and is not susceptible to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early navigation projects based on computer vision technology were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aircraft, deep space probes and underwater robots, which has further stimulated research on integrated navigation algorithms based on computer vision technology. In China, with the development of many types of UAVs and the start of the third phase of the lunar exploration project, there has been significant progress in the study of visual navigation. The paper reviews the development of navigation based on computer vision technology in the field of UAV navigation research and concludes that visual navigation is mainly applied to three aspects, as follows. (1) Acquisition of UAV navigation parameters. The parameters, including UAV attitude, position and velocity, can be obtained from the relationship between the sensor images and the carrier's attitude, the relationship between instantly matched images and reference images, and the relationship between the carrier's velocity and the characteristics of sequential images. (2) Autonomous obstacle avoidance. There are many ways to achieve obstacle avoidance in UAV navigation; the methods based on computer vision technology, including feature matching, template matching, image frames and so on, are mainly introduced. (3) Target tracking and positioning. Using the obtained images, UAV position is calculated using the optical flow method, the MeanShift and CamShift algorithms, Kalman filtering and particle filter algorithms. The paper also expounds three kinds of mainstream visual systems. (1) High-speed visual systems, which use a parallel structure in which image detection and processing are
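
    Of the image-based techniques listed above, template matching is among the simplest to sketch. The OpenCV-based example below is a generic illustration (not code from the surveyed systems): it locates a stored reference patch in the current frame, and the offset of the match from its expected location can serve as a crude position correction. The file names are hypothetical.

      import cv2

      def locate_reference(frame_gray, ref_patch):
          # Normalized cross-correlation search for the reference patch;
          # returns the top-left corner of the best match and its score.
          result = cv2.matchTemplate(frame_gray, ref_patch, cv2.TM_CCOEFF_NORMED)
          _, max_val, _, max_loc = cv2.minMaxLoc(result)
          return max_loc, max_val

      frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)            # hypothetical file
      patch = cv2.imread("reference_patch.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
      loc, score = locate_reference(frame, patch)
      print("match at", loc, "score", score)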

  18. Thoracic ROM measurement system with visual bio-feedback: system design and biofeedback evaluation.

    Science.gov (United States)

    Ando, Takeshi; Kawamura, Kazuya; Fujitani, Junko; Koike, Tomokazu; Fujimoto, Masashi; Fujie, Masakatsu G

    2011-01-01

    Patients with diseases such as chronic obstructive pulmonary disease (COPD) need to improve their thorax mobility. Thoracic ROM is one of the simplest and most useful indexes to evaluate the respiratory function. In this paper, we have proposed the prototype of a simple thoracic ROM measurement system with real-time visual bio-feedback in the chest expansion test. In this system, the thoracic ROM is measured using a wire-type linear encoder whose wire is wrapped around the thorax. In this paper, firstly, the repeatability and reliability of the measured thoracic ROM were confirmed as a first report on the developed prototype. Secondly, we analyzed the effect of the bio-feedback system on the respiratory function. The result of the experiment showed that it was easier to maintain a large and stable thoracic ROM during deep breathing by using the real-time visual biofeedback system of the thoracic ROM.

  19. Visual Odometry for Autonomous Deep-Space Navigation Project

    Science.gov (United States)

    Robinson, Shane; Pedrotty, Sam

    2016-01-01

    Autonomous rendezvous and docking (AR&D) is a critical need for manned spaceflight, especially in deep space where communication delays essentially leave crews on their own for critical operations like docking. Previously developed AR&D sensors have been large, heavy, power-hungry, and may still require further development (e.g. Flash LiDAR). Other approaches to vision-based navigation are not computationally efficient enough to operate quickly on slower, flight-like computers. The key technical challenge for visual odometry is to adapt it from the current terrestrial applications it was designed for to function in the harsh lighting conditions of space. This effort leveraged Draper Laboratory’s considerable prior development and expertise, benefitting both parties. The algorithm Draper has created is unique from other pose estimation efforts as it has a comparatively small computational footprint (suitable for use onboard a spacecraft, unlike alternatives) and potentially offers the accuracy and precision needed for docking. This presents a solution to the AR&D problem that only requires a camera, which is much smaller, lighter, and requires far less power than competing AR&D sensors. We have demonstrated the algorithm’s performance and ability to process ‘flight-like’ imagery formats with a ‘flight-like’ trajectory, positioning ourselves to easily process flight data from the upcoming ‘ISS Selfie’ activity and then compare the algorithm’s quantified performance to the simulated imagery. This will bring visual odometry beyond TRL 5, proving its readiness to be demonstrated as part of an integrated system. Once beyond TRL 5, visual odometry will be poised to be demonstrated as part of a system in an in-space demo where relative pose is critical, like Orion AR&D, ISS robotic operations, asteroid proximity operations, and more.

  20. Differential effects of absent visual feedback control on gait variability during different locomotion speeds.

    Science.gov (United States)

    Wuehr, M; Schniepp, R; Pradhan, C; Ilmberger, J; Strupp, M; Brandt, T; Jahn, K

    2013-01-01

    Healthy persons exhibit relatively small temporal and spatial gait variability when walking unimpeded. In contrast, patients with a sensory deficit (e.g., polyneuropathy) show an increased gait variability that depends on speed and is associated with an increased fall risk. The purpose of this study was to investigate the role of vision in gait stabilization by determining the effects of withdrawing visual information (eyes closed) on gait variability at different locomotion speeds. Ten healthy subjects (32.2 ± 7.9 years, 5 women) walked on a treadmill for 5-min periods at their preferred walking speed and at 20, 40, 70, and 80 % of maximal walking speed during the conditions of walking with eyes open (EO) and with eyes closed (EC). The coefficient of variation (CV) and fractal dimension (α) of the fluctuations in stride time, stride length, and base width were computed and analyzed. Withdrawing visual information increased the base width CV for all walking velocities (p < 0.001). The effects of absent visual information on CV and α of stride time and stride length were most pronounced during slow locomotion (p < 0.001) and declined during fast walking speeds. The results indicate that visual feedback control is used to stabilize the medio-lateral (i.e., base width) gait parameters at all speed sections. In contrast, sensory feedback control in the fore-aft direction (i.e., stride time and stride length) depends on speed. Sensory feedback contributes most to fore-aft gait stabilization during slow locomotion, whereas passive biomechanical mechanisms and an automated central pattern generation appear to control fast locomotion.
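
    The variability measure reported above, the coefficient of variation, is the standard deviation expressed as a percentage of the mean. A minimal sketch for a stride-time series follows (illustrative only, not the study's analysis pipeline; the synthetic numbers are invented).

      import numpy as np

      def coefficient_of_variation(series):
          # CV in percent: 100 * SD / mean, e.g. for stride times in seconds.
          series = np.asarray(series, dtype=float)
          return 100.0 * series.std(ddof=1) / series.mean()

      stride_eo = np.random.normal(1.05, 0.02, 300)   # eyes open (synthetic)
      stride_ec = np.random.normal(1.05, 0.05, 300)   # eyes closed (synthetic)
      print(coefficient_of_variation(stride_eo), coefficient_of_variation(stride_ec))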

  1. The influence of verbal training and visual feedback on manual wheelchair propulsion.

    Science.gov (United States)

    DeGroot, Keri K; Hollingsworth, Holly H; Morgan, Kerri A; Morris, Carrie L; Gray, David B

    2009-03-01

    To determine if verbal training with visual feedback improved manual wheelchair propulsion; to examine propulsion differences between an individual with paraplegia and an individual with tetraplegia. Quasi-experimental study: Nine manual wheelchair-using adults participated in propulsion assessments and training. Baseline propulsion performance was measured on several tasks on different surfaces. Participants were trained on a wheelchair treadmill with verbal and visual feedback to increase push length, reduce push frequency and to modify propulsion pattern. Handrim biomechanics were measured with an instrumented wheel. Changes in propulsion were assessed. Differences in propulsion characteristics between a participant with paraplegia and a participant with tetraplegia were examined. Push length increased (p propulsion characteristics between a participant with paraplegia and a participant with tetraplegia. Verbal training may produce changes in push biomechanics of manual wheelchair users. Longer training periods may be needed to sustain propulsion changes. Findings from this study support other studies that have shown propulsion differences between people with tetraplegia and paraplegia. Propulsion training for populations with upper-extremity impairments warrants further study.

  2. Can explicit visual feedback of postural sway efface the effects of sensory manipulations on mediolateral balance performance?

    NARCIS (Netherlands)

    Cofre Lizama, L.E.; Pijnappels, M.A.G.M.; Reeves, N.P.; Verschueren, S.M.; van Dieen, J.H.

    2016-01-01

    Explicit visual feedback on postural sway is often used in balance assessment and training. However, up-weighting of visual information may mask impairments of other sensory systems. We therefore aimed to determine whether the effects of somatosensory, vestibular, and proprioceptive manipulations on

  3. Vibrotactile Feedback for Brain-Computer Interface Operation

    Directory of Open Access Journals (Sweden)

    Febo Cincotti

    2007-01-01

    Full Text Available To be correctly mastered, brain-computer interfaces (BCIs) need an uninterrupted flow of feedback to the user. This feedback is usually delivered through the visual channel. Our aim was to explore the benefits of vibrotactile feedback during users’ training and control of EEG-based BCI applications. A protocol for delivering vibrotactile feedback, including specific hardware and software arrangements, was specified. In three studies with 33 subjects (including 3 with spinal cord injury), we compared vibrotactile and visual feedback, addressing: (I) the feasibility of subjects’ training to master their EEG rhythms using tactile feedback; (II) the compatibility of this form of feedback in the presence of a visual distracter; (III) the performance in the presence of a complex visual task on the same (visual) or different (tactile) sensory channel. The stimulation protocol we developed supports a general usage of the tactors; preliminary experimentations. All studies indicated that the vibrotactile channel can function as a valuable feedback modality with reliability comparable to the classical visual feedback. Advantages of using a vibrotactile feedback emerged when the visual channel was highly loaded by a complex task. In all experiments, vibrotactile feedback felt, after some training, more natural for both controls and SCI users.

  4. Impact of online visual feedback on motor acquisition and retention when learning to reach in a force field.

    Science.gov (United States)

    Batcho, C S; Gagné, M; Bouyer, L J; Roy, J S; Mercier, C

    2016-11-19

    When subjects learn a novel motor task, several sources of feedback (proprioceptive, visual or auditory) contribute to the performance. Over the past few years, several studies have investigated the role of visual feedback in motor learning, yet evidence remains conflicting. The aim of this study was therefore to investigate the role of online visual feedback (VFb) on the acquisition and retention stages of motor learning associated with training in a reaching task. Thirty healthy subjects made ballistic reaching movements with their dominant arm toward two targets, on 2 consecutive days using a robotized exoskeleton (KINARM). They were randomly assigned to a group with (VFb) or without (NoVFb) VFb of index position during movement. On day 1, the task was performed before (baseline) and during the application of a velocity-dependent resistive force field (adaptation). To assess retention, participants repeated the task with the force field on day 2. Motor learning was characterized by: (1) the final endpoint error (movement accuracy) and (2) the initial angle (iANG) of deviation (motor planning). Even though both groups showed motor adaptation, the NoVFb-group exhibited slower learning and higher final endpoint error than the VFb-group. In some conditions, subjects trained without visual feedback used more curved initial trajectories to anticipate the perturbation. This observation suggests that learning to reach targets in a velocity-dependent resistive force field is possible even when feedback is limited. However, the absence of VFb leads to different strategies that were only apparent when reaching toward the most challenging target. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Detecting delay in visual feedback of an action as a monitor of self recognition.

    Science.gov (United States)

    Hoover, Adria E N; Harris, Laurence R

    2012-10-01

    How do we distinguish "self" from "other"? The correlation between willing an action and seeing it occur is an important cue. We exploited the fact that this correlation needs to occur within a restricted temporal window in order to obtain a quantitative assessment of when a body part is identified as "self". We measured the threshold and sensitivity (d') for detecting a delay between movements of the finger (of both the dominant and non-dominant hands) and visual feedback as seen from four visual perspectives (the natural view, and mirror-reversed and/or inverted views). Each trial consisted of one presentation with minimum delay and another with a delay of between 33 and 150 ms. Participants indicated which presentation contained the delayed view. We varied the amount of efference copy available for this task by comparing performances for discrete movements and continuous movements. Discrete movements are associated with a stronger efference copy. Sensitivity to detect asynchrony between visual and proprioceptive information was significantly higher when movements were viewed from a "plausible" self perspective compared with when the view was reversed or inverted. Further, we found differences in performance between dominant and non-dominant hand finger movements across the continuous and single movements. Performance varied with the viewpoint from which the visual feedback was presented and on the efferent component such that optimal performance was obtained when the presentation was in the normal natural orientation and clear efferent information was available. Variations in sensitivity to visual/non-visual temporal incongruence with the viewpoint in which a movement is seen may help determine the arrangement of the underlying visual representation of the body.
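
    Sensitivity (d') in such delay-detection experiments is conventionally derived from hit and false-alarm rates with the inverse normal transform. The sketch below shows the standard yes/no formula; the study's two-interval design may apply a further correction, and the example rates are invented.

      from scipy.stats import norm

      def d_prime(hit_rate, false_alarm_rate):
          # Standard signal-detection sensitivity: z(hits) - z(false alarms).
          return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

      # Invented example: 85% hits on delayed presentations, 30% false alarms
      print(d_prime(0.85, 0.30))   # about 1.56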

  6. Stereo camera based virtual cane system with identifiable distance tactile feedback for the blind.

    Science.gov (United States)

    Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun

    2014-06-13

    In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and then the 3D pointing direction of the finger is estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and the distance is then estimated in real time. For the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger pointing gesture and tactile distance feedback is perfectly identifiable to the blind.
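
    The distance estimate along the pointing direction rests on standard stereo triangulation: depth equals focal length times baseline divided by disparity. The sketch below is a generic illustration, not the VIDA implementation; the focal length and baseline are assumed example values.

      def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
          # Pinhole stereo model: Z = f * B / d. The parameters here are
          # assumed example values, not those of the system described above.
          if disparity_px <= 0:
              raise ValueError("disparity must be positive for a valid match")
          return focal_px * baseline_m / disparity_px

      print(depth_from_disparity(42.0))   # 2.0 m with the assumed parameters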

  7. Distinct Feedforward and Feedback Effects of Microstimulation in Visual Cortex Reveal Neural Mechanisms of Texture Segregation.

    Science.gov (United States)

    Klink, P Christiaan; Dagnino, Bruno; Gariel-Mathis, Marie-Alice; Roelfsema, Pieter R

    2017-07-05

    The visual cortex is hierarchically organized, with low-level areas coding for simple features and higher areas for complex ones. Feedforward and feedback connections propagate information between areas in opposite directions, but their functional roles are only partially understood. We used electrical microstimulation to perturb the propagation of neuronal activity between areas V1 and V4 in monkeys performing a texture-segregation task. In both areas, microstimulation locally caused a brief phase of excitation, followed by inhibition. Both these effects propagated faithfully in the feedforward direction from V1 to V4. Stimulation of V4, however, caused little V1 excitation, but it did yield a delayed suppression during the late phase of visually driven activity. This suppression was pronounced for the V1 figure representation and weaker for background representations. Our results reveal functional differences between feedforward and feedback processing in texture segregation and suggest a specific modulating role for feedback connections in perceptual organization. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Outdoor navigation of an inspection robot by means of global heading feedback; Navegacion exterior de un robot de inspeccion mediante retroalimentacion de la orientacion global

    Energy Technology Data Exchange (ETDEWEB)

    Segovia de los R, A.; Bucio V, F. [ININ, 52750 La Marquesa, Estado de Mexico (Mexico); Garduno G, M. [Instituto Tecnologico de Toluca, Av. Instituto Tecnologico s/n, Metepec, Estado de Mexico 52140 (Mexico)]. e-mail: asegovia@nuclear.inin.mx

    2008-07-01

    The objective of this article is to present an inspection system for a mobile robot navigating outdoors by means of feedback of the instantaneous heading with respect to a global reference throughout the displacement. The robot moves in response to commands from a teleoperator, who indicates the desired directions through the operation console; the robot executes them using information provided by an electronic compass. The mobile robot used in the experiments is a Pioneer 3-AT, which has the set of sensors required for more autonomous operation. The electronic compass provides heading information coded in an SPI format, so an inexpensive general-purpose microcontroller (µC) was used to convert the information to the RS-232 format originally used by the Pioneer 3-AT. The orientation information received by the robot through its secondary RS-232 serial port is forwarded to the host computer, where a Java program is used to generate the commands for the robot navigation control and to display a graphical user interface through which the operator's orders are received. This research is part of an ambitious project intended to provide an inspection and monitoring system for sites where high radiation levels could exist, for which a navigation system for outdoor environments could be very useful. In addition to the robot's own sensors, the complete system will include a number of sensors appropriate to the variables to be monitored. The resulting measurement values will be visualized in real time in the graphical user interface, thanks to bidirectional wireless communication between the operating station and the mobile robot. (Author)
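
    The compass-based heading feedback described above amounts to closing a loop on the heading error. The sketch below is an illustrative proportional controller (not the authors' Java implementation); the gain, saturation limit and units are assumptions.

      def heading_command(desired_deg, measured_deg, k_p=1.5, max_rate=45.0):
          # Proportional yaw-rate command (deg/s) from compass feedback.
          # The error is wrapped to (-180, 180] so the robot turns the short way.
          error = (desired_deg - measured_deg + 180.0) % 360.0 - 180.0
          return max(-max_rate, min(max_rate, k_p * error))

      print(heading_command(90.0, 350.0))   # +45.0 deg/s (saturated)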

  9. 14 CFR 121.349 - Communication and navigation equipment for operations under VFR over routes not navigated by...

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Communication and navigation equipment for... § 121.349 Communication and navigation equipment for operations under VFR over routes not navigated by... receiver providing visual and aural signals; and (iii) One ILS receiver; and (3) Any RNAV system used to...

  10. A Dataset for Visual Navigation with Neuromorphic Methods

    Directory of Open Access Journals (Sweden)

    Francisco eBarranco

    2016-02-01

    Full Text Available Standardized benchmarks in Computer Vision have greatly contributed to the advance of approaches to many problems in the field. If we want to enhance the visibility of event-driven vision and increase its impact, we will need benchmarks that allow comparison among different neuromorphic methods as well as comparison to Computer Vision conventional approaches. We present datasets to evaluate the accuracy of frame-free and frame-based approaches for tasks of visual navigation. Similar to conventional Computer Vision datasets, we provide synthetic and real scenes, with the synthetic data created with graphics packages, and the real data recorded using a mobile robotic platform carrying a dynamic and active pixel vision sensor (DAVIS) and an RGB+Depth sensor. For both datasets the cameras move with a rigid motion in a static scene, and the data includes the images, events, optic flow, 3D camera motion, and the depth of the scene, along with calibration procedures. Finally, we also provide simulated event data generated synthetically from well-known frame-based optical flow datasets.

  11. Experimental System for Investigation of Visual Sensory Input in Postural Feedback Control

    Directory of Open Access Journals (Sweden)

    Jozef Pucik

    2012-01-01

    Full Text Available The human postural control system represents a biological feedback system responsible for maintenance of upright stance. Vestibular, proprioceptive and visual sensory inputs provide the most important information into the control system, which controls body centre of mass (COM in order to stabilize the human body resembling an inverted pendulum. The COM can be measured indirectly by means of a force plate as the centre of pressure (COP. Clinically used measurement method is referred to as posturography. In this paper, the conventional static posturography is extended by visual stimulation, which provides insight into a role of visual information in balance control. Visual stimuli have been designed to induce body sway in four specific directions – forward, backward, left and right. Stabilograms were measured using proposed single-PC based system and processed to calculate velocity waveforms and posturographic parameters. The parameters extracted from pre-stimulus and on-stimulus periods exhibit statistically significant differences.
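    For illustration, the sketch below derives COP coordinates from force-plate channels using the standard moment-to-force relations (COP_x = -My/Fz, COP_y = Mx/Fz) and computes two common posturographic parameters; the sign conventions and sampling rate are assumptions, not details taken from the study.

```python
import numpy as np

def cop_from_forceplate(fz, mx, my):
    """Centre of pressure (m) from vertical force and plate moments."""
    fz = np.asarray(fz, dtype=float)
    cop_x = -np.asarray(my, dtype=float) / fz   # anteroposterior (convention-dependent)
    cop_y = np.asarray(mx, dtype=float) / fz    # mediolateral
    return cop_x, cop_y

def sway_parameters(cop_x, cop_y, fs=100.0):
    """Mean sway velocity (m/s) and RMS displacement, two common posturographic measures."""
    vx, vy = np.diff(cop_x) * fs, np.diff(cop_y) * fs
    mean_velocity = np.mean(np.hypot(vx, vy))
    rms = np.sqrt(np.mean((cop_x - cop_x.mean()) ** 2 + (cop_y - cop_y.mean()) ** 2))
    return mean_velocity, rms
```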

  12. Assisting the Visually Impaired: Obstacle Detection and Warning System by Acoustic Feedback

    Directory of Open Access Journals (Sweden)

    Andrés Cela

    2012-12-01

    Full Text Available The aim of this article is focused on the design of an obstacle detection system for assisting visually impaired people. A dense disparity map is computed from the images of a stereo camera carried by the user. By using the dense disparity map, potential obstacles can be detected in 3D in indoor and outdoor scenarios. A ground plane estimation algorithm based on RANSAC plus filtering techniques allows the robust detection of the ground in every frame. A polar grid representation is proposed to account for the potential obstacles in the scene. The design is completed with acoustic feedback to assist visually impaired users while approaching obstacles. Beep sounds with different frequencies and repetitions inform the user about the presence of obstacles. Audio bone conducting technology is employed to play these sounds without interrupting the visually impaired user from hearing other important sounds from its local environment. A user study participated by four visually impaired volunteers supports the proposed system.
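    As an illustration of the acoustic feedback stage, the sketch below maps obstacle distance to a beep frequency and repetition rate; the distance bands and tone values are invented for the example and are not the parameters of the cited system.

```python
def beep_for_distance(distance_m: float):
    """Map obstacle distance to (tone frequency in Hz, beeps per second).
    Closer obstacles give higher, faster beeps; thresholds are illustrative only."""
    if distance_m < 1.0:
        return 1200, 8      # imminent obstacle
    elif distance_m < 2.0:
        return 900, 4
    elif distance_m < 4.0:
        return 600, 2
    return None             # nothing to report beyond 4 m

print(beep_for_distance(1.5))   # -> (900, 4)
```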

  13. Vibrotactile in-vehicle navigation system

    NARCIS (Netherlands)

    Erp, J.B.F. van; Veen, H.J. van

    2004-01-01

    A vibrotactile display, consisting ofeight vibrating elements or tactors mounted in a driver's seat, was tested in a driving simulator. Participants drove with visual, tactile and multimodal navigation displays through a built-up area. Workload and the reaction time to navigation messages were

  14. The Use of Visual Feedback during Signing: Evidence from Signers with Impaired Vision

    Science.gov (United States)

    Emmorey, Karen; Korpics, Franco; Petronio, Karen

    2009-01-01

    The role of visual feedback during the production of American Sign Language was investigated by comparing the size of signing space during conversations and narrative monologues for normally sighted signers, signers with tunnel vision due to Usher syndrome, and functionally blind signers. The interlocutor for all groups was a normally sighted deaf…

  15. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision.

    Science.gov (United States)

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.
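    The entropy criterion can be sketched in a few lines. The code below computes the Shannon entropy of an image's grey-level histogram and applies an illustrative threshold to separate single-object (landmark candidate) views from cluttered ones; the threshold value is an assumption, not taken from the paper.

```python
import numpy as np

def image_entropy(gray: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (bits) of the grey-level histogram of an image."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 255))
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def classify_view(gray: np.ndarray, threshold_bits: float = 5.0) -> str:
    """Low entropy -> likely a single object (candidate landmark);
    high entropy -> cluttered view (treat as potential obstacle)."""
    return "landmark-candidate" if image_entropy(gray) < threshold_bits else "cluttered/obstacle"
```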

  16. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision

    Directory of Open Access Journals (Sweden)

    Darío Maravall

    2017-08-01

    Full Text Available We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.

  17. Use of visual CO2 feedback as a retrofit solution for improving classroom air quality

    DEFF Research Database (Denmark)

    Wargocki, Pawel; Da Silva, Nuno Alexandre Faria

    2015-01-01

    Carbon dioxide (CO2) sensors that provide a visual indication were installed in classrooms during normal school operation. During 2-week periods, teachers and students were instructed to open the windows in response to the visual CO2 feedback in 1week and open them, as they would normally do, wit...

  18. Alpha and gamma oscillations characterize feedback and feedforward processing in monkey visual cortex.

    Science.gov (United States)

    van Kerkoerle, Timo; Self, Matthew W; Dagnino, Bruno; Gariel-Mathis, Marie-Alice; Poort, Jasper; van der Togt, Chris; Roelfsema, Pieter R

    2014-10-07

    Cognitive functions rely on the coordinated activity of neurons in many brain regions, but the interactions between cortical areas are not yet well understood. Here we investigated whether low-frequency (α) and high-frequency (γ) oscillations characterize different directions of information flow in monkey visual cortex. We recorded from all layers of the primary visual cortex (V1) and found that γ-waves are initiated in input layer 4 and propagate to the deep and superficial layers of cortex, whereas α-waves propagate in the opposite direction. Simultaneous recordings from V1 and downstream area V4 confirmed that γ- and α-waves propagate in the feedforward and feedback direction, respectively. Microstimulation in V1 elicited γ-oscillations in V4, whereas microstimulation in V4 elicited α-oscillations in V1, thus providing causal evidence for the opposite propagation of these rhythms. Furthermore, blocking NMDA receptors, thought to be involved in feedback processing, suppressed α while boosting γ. These results provide new insights into the relation between brain rhythms and cognition.

  19. Visual feedback attenuates mean concentric barbell velocity loss, and improves motivation, competitiveness, and perceived workload in male adolescent athletes.

    Science.gov (United States)

    Weakley, Jonathon Js; Wilson, Kyle M; Till, Kevin; Read, Dale B; Darrall-Jones, Joshua; Roe, Gregory; Phibbs, Padraic J; Jones, Ben

    2017-07-12

    It is unknown whether instantaneous visual feedback of resistance training outcomes can enhance barbell velocity in younger athletes. Therefore, the purpose of this study was to quantify the effects of visual feedback on mean concentric barbell velocity in the back squat, and to identify changes in motivation, competitiveness, and perceived workload. In a randomised-crossover design (Feedback vs. Control) feedback of mean concentric barbell velocity was or was not provided throughout a set of 10 repetitions in the barbell back squat. Magnitude-based inferences were used to assess changes between conditions, with almost certainly greater differences in mean concentric velocity between the Feedback (0.70 ±0.04 m·s-1) and Control (0.65 ±0.05 m·s-1) observed. Additionally, individual repetition mean concentric velocity ranged from possibly (repetition number two: 0.79 ±0.04 vs. 0.78 ±0.04 m·s-1) to almost certainly (repetition number 10: 0.58 ±0.05 vs. 0.49 ±0.05 m·s-1) greater when provided feedback, while almost certain differences were observed in motivation, competitiveness, and perceived workload, respectively. Providing adolescent male athletes with visual kinematic information while completing resistance training is beneficial for the maintenance of barbell velocity during a training set, potentially enhancing physical performance. Moreover, these improvements were observed alongside increases in motivation, competitiveness and perceived workload providing insight into the underlying mechanisms responsible for the performance gains observed. Given the observed maintenance of barbell velocity during a training set, practitioners can use this technique to manipulate training outcomes during resistance training.

  20. Honeybees as a model for the study of visually guided flight, navigation, and biologically inspired robotics.

    Science.gov (United States)

    Srinivasan, Mandyam V

    2011-04-01

    Research over the past century has revealed the impressive capacities of the honeybee, Apis mellifera, in relation to visual perception, flight guidance, navigation, and learning and memory. These observations, coupled with the relative ease with which these creatures can be trained, and the relative simplicity of their nervous systems, have made honeybees an attractive model in which to pursue general principles of sensorimotor function in a variety of contexts, many of which pertain not just to honeybees, but several other animal species, including humans. This review begins by describing the principles of visual guidance that underlie perception of the world in three dimensions, obstacle avoidance, control of flight speed, and orchestrating smooth landings. We then consider how navigation over long distances is accomplished, with particular reference to how bees use information from the celestial compass to determine their flight bearing, and information from the movement of the environment in their eyes to gauge how far they have flown. Finally, we illustrate how some of the principles gleaned from these studies are now being used to design novel, biologically inspired algorithms for the guidance of unmanned aerial vehicles.

  1. Augmented visual feedback of movement performance to enhance walking recovery after stroke: study protocol for a pilot randomised controlled trial

    Directory of Open Access Journals (Sweden)

    Thikey Heather

    2012-09-01

    Full Text Available Abstract Background Increasing evidence suggests that use of augmented visual feedback could be a useful approach to stroke rehabilitation. In current clinical practice, visual feedback of movement performance is often limited to the use of mirrors or video. However, neither approach is optimal since cognitive and self-image issues can distract or distress patients and their movement can be obscured by clothing or limited viewpoints. Three-dimensional motion capture has the potential to provide accurate kinematic data required for objective assessment and feedback in the clinical environment. However, such data are currently presented in numerical or graphical format, which is often impractical in a clinical setting. Our hypothesis is that presenting this kinematic data using bespoke visualisation software, which is tailored for gait rehabilitation after stroke, will provide a means whereby feedback of movement performance can be communicated in a more meaningful way to patients. This will result in increased patient understanding of their rehabilitation and will enable progress to be tracked in a more accessible way. Methods The hypothesis will be assessed using an exploratory (phase II) randomised controlled trial. Stroke survivors eligible for this trial will be in the subacute stage of stroke and have impaired walking ability (Functional Ambulation Classification of 1 or more). Participants (n = 45) will be randomised into three groups to compare the use of the visualisation software during overground physical therapy gait training against an intensity-matched and attention-matched placebo group and a usual care control group. The primary outcome measure will be walking speed. Secondary measures will be Functional Ambulation Category, Timed Up and Go, Rivermead Visual Gait Assessment, Stroke Impact Scale-16 and spatiotemporal parameters associated with walking. Additional qualitative measures will be used to assess the participant

  2. The Effects of Visual Feedback on CPR Skill Retention in Graduate Student Athletic Trainers

    Directory of Open Access Journals (Sweden)

    Michael G. Miller

    2015-09-01

    Full Text Available Context: Studies examining the effectiveness of cardiopulmonary resuscitation (CPR) chest compressions have found compression depth and rate to be less than optimal and recoil to full release to be incomplete. Objective: To determine if visual feedback affects the rate and depth of chest compressions and chest recoil values during CPR training of athletic trainers and to determine retention of proficiency over time. Design: Pre-test, post-test. Setting: Medical simulation laboratory. Participants: Eleven females and one male (23.08 ± .51 years old), from an Athletic Training Graduate Program. All participants were Certified Athletic Trainers (1.12 ± .46 years of experience) and certified in CPR for the Professional Rescuer. Interventions: Participants completed a pre-test, practice sessions, and a post-test on a SimMan® (Laerdal Medical) manikin with visual feedback of skills in real time. After the pre-test, participants received feedback from the investigators. Participants completed practice sessions as needed (range = 1-4 sessions), until they reached 100% skill proficiency. After achieving proficiency, participants returned 8 weeks later to perform the CPR skills. Main Outcome Measures: The average of all compression outcome measures (rate, depth, recoil) was captured every 10 seconds (6x per min). All participants performed 5 cycles of 30 compressions. A two-tailed paired samples t-test (pre to post) was used to compare rate of chest compressions, depth of chest compressions, and recoil of the chest. Significance was set a priori at p < .05. Results: There was a significant difference between pre and post-test compression depth average, p=.002. The pre-depth average was 41 mm ± 9.83 mm compared to the post-depth average of 52.26 mm ± 5 mm. There were no significant differences between pre and post-test chest compression rates and recoil. Conclusions: The use of a simulated manikin with visual feedback facilitated participants to reach the recommended compression
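    As a worked illustration of the statistical comparison described above, the sketch below runs a two-tailed paired-samples t-test on made-up pre/post compression depths; the numbers are hypothetical and are not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post compression depths (mm) for 12 participants.
pre = np.array([40, 35, 45, 52, 38, 41, 30, 44, 47, 39, 43, 36], dtype=float)
post = np.array([51, 49, 55, 56, 50, 53, 48, 54, 55, 52, 53, 51], dtype=float)

t, p = stats.ttest_rel(pre, post)           # two-tailed paired-samples t-test
print(f"t = {t:.2f}, p = {p:.4f}")
print(f"pre mean = {pre.mean():.1f} mm, post mean = {post.mean():.1f} mm")
```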

  3. Adaptation effects in static postural control by providing simultaneous visual feedback of center of pressure and center of gravity.

    Science.gov (United States)

    Takeda, Kenta; Mani, Hiroki; Hasegawa, Naoya; Sato, Yuki; Tanaka, Shintaro; Maejima, Hiroshi; Asaka, Tadayoshi

    2017-07-19

    The benefit of visual feedback of the center of pressure (COP) on quiet standing is still debatable. This study aimed to investigate the adaptation effects of visual feedback training using both the COP and center of gravity (COG) during quiet standing. Thirty-four healthy young adults were divided into three groups randomly (COP + COG, COP, and control groups). A force plate was used to calculate the coordinates of the COP in the anteroposterior (COP_AP) and mediolateral (COP_ML) directions. A motion analysis system was used to calculate the coordinates of the center of mass (COM) in both directions (COM_AP and COM_ML). The coordinates of the COG in the AP direction (COG_AP) were obtained from the force plate signals. Augmented visual feedback was presented on a screen in the form of fluctuation circles in the vertical direction that moved upward as the COP_AP and/or COG_AP moved forward and vice versa. The COP + COG group received the real-time COP_AP and COG_AP feedback simultaneously, whereas the COP group received the real-time COP_AP feedback only. The control group received no visual feedback. In the training session, the COP + COG group was required to maintain an even distance between the COP_AP and COG_AP and reduce the COG_AP fluctuation, whereas the COP group was required to reduce the COP_AP fluctuation while standing on a foam pad. In test sessions, participants were instructed to keep their standing posture as quiet as possible on the foam pad before (pre-session) and after (post-session) the training sessions. In the post-session, the velocity and root mean square of COM_AP in the COP + COG group were lower than those in the control group. In addition, the absolute value of the sum of the COP-COM distances in the COP + COG group was lower than that in the COP group. Furthermore, positive correlations were found between the COM_AP velocity and COP-COM parameters. The results suggest that the novel visual feedback
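    A rough sketch of the outcome measures referred to above: mean velocity and root mean square of the COM_AP trace, and the absolute summed COP-COM difference. The exact definitions used in the paper may differ; this is one plausible reading, with an assumed sampling rate.

```python
import numpy as np

def com_sway_metrics(com_ap, fs: float = 100.0):
    """Mean absolute velocity and RMS of the anteroposterior COM trace."""
    com_ap = np.asarray(com_ap, dtype=float)
    velocity = np.mean(np.abs(np.diff(com_ap))) * fs
    rms = np.sqrt(np.mean((com_ap - com_ap.mean()) ** 2))
    return velocity, rms

def cop_com_distance(cop_ap, com_ap) -> float:
    """Absolute value of the summed COP-COM differences, as used for the training cue."""
    return float(abs(np.sum(np.asarray(cop_ap) - np.asarray(com_ap))))
```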

  4. Guideline implementation in clinical practice: Use of statistical process control charts as visual feedback devices

    Directory of Open Access Journals (Sweden)

    Fahad A Al-Hussein

    2009-01-01

    Conclusions: A process of audits in the context of statistical process control is necessary for any improvement in the implementation of guidelines in primary care. Statistical process control charts are an effective means of visual feedback to the care providers.
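    As a brief illustration of the kind of chart the conclusion refers to, the sketch below computes the centre line and 3-sigma limits for a proportion (p) chart of guideline adherence; the audit counts are hypothetical and the chart type is an assumption, since the abstract does not specify it.

```python
import numpy as np

def p_chart_limits(adherent_counts, sample_sizes):
    """Centre line and 3-sigma control limits for a proportion (p) chart."""
    counts = np.asarray(adherent_counts, dtype=float)
    n = np.asarray(sample_sizes, dtype=float)
    p_bar = counts.sum() / n.sum()                  # overall adherence proportion
    sigma = np.sqrt(p_bar * (1 - p_bar) / n)        # per-sample standard error
    ucl = np.clip(p_bar + 3 * sigma, 0, 1)
    lcl = np.clip(p_bar - 3 * sigma, 0, 1)
    return p_bar, lcl, ucl

# Example: monthly audits of guideline-adherent prescriptions (hypothetical numbers).
p_bar, lcl, ucl = p_chart_limits([42, 45, 50, 38], [60, 62, 64, 58])
print(round(p_bar, 2), np.round(lcl, 2), np.round(ucl, 2))
```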

  5. SeSaMoNet 2.0: Improving a Navigation System for Visually Impaired People

    Science.gov (United States)

    Ceipidor, Ugo Biader; Medaglia, Carlo Maria; Sciarretta, Eliseo

    The authors present the improvements obtained during the work done for the latest installation of SeSaMoNet, a navigation system for blind people. First, the mobility issues faced by visually impaired people are described, together with strategies to address them. Then an overview of the system and its main elements is given. Afterward, the reasons that led to a redesign are explained, and finally the main features of the latest revision of the system are presented and compared with the previous one.

  6. Reproducibility of The Abdominal and Chest Wall Position by Voluntary Breath-Hold Technique Using a Laser-Based Monitoring and Visual Feedback System

    International Nuclear Information System (INIS)

    Nakamura, Katsumasa; Shioyama, Yoshiyuki; Nomoto, Satoru; Ohga, Saiji; Toba, Takashi; Yoshitake, Tadamasa; Anai, Shigeo; Terashima, Hiromi; Honda, Hiroshi

    2007-01-01

    Purpose: The voluntary breath-hold (BH) technique is a simple method to control the respiration-related motion of a tumor during irradiation. However, the abdominal and chest wall position may not be accurately reproduced using the BH technique. The purpose of this study was to examine whether visual feedback can reduce the fluctuation in wall motion during BH using a new respiratory monitoring device. Methods and Materials: We developed a laser-based BH monitoring and visual feedback system. For this study, five healthy volunteers were enrolled. The volunteers, practicing abdominal breathing, performed shallow end-expiration BH (SEBH), shallow end-inspiration BH (SIBH), and deep end-inspiration BH (DIBH) with or without visual feedback. The abdominal and chest wall positions were measured at 80-ms intervals during BHs. Results: The fluctuation in the chest wall position was smaller than that of the abdominal wall position. The reproducibility of the wall position was improved by visual feedback. With a monitoring device, visual feedback reduced the mean deviation of the abdominal wall from 2.1 ± 1.3 mm to 1.5 ± 0.5 mm, 2.5 ± 1.9 mm to 1.1 ± 0.4 mm, and 6.6 ± 2.4 mm to 2.6 ± 1.4 mm in SEBH, SIBH, and DIBH, respectively. Conclusions: Volunteers can perform the BH maneuver in a highly reproducible fashion when informed about the position of the wall, although in the case of DIBH, the deviation in the wall position remained substantial

  7. Effects of visual feedback balance training on the balance and ankle instability in adult men with functional ankle instability.

    Science.gov (United States)

    Nam, Seung-Min; Kim, Kyoung; Lee, Do Youn

    2018-01-01

    [Purpose] This study examined the effects of visual feedback balance training on balance and ankle instability in adult men with functional ankle instability. [Subjects and Methods] Twenty-eight adults with functional ankle instability were divided randomly into an experimental group, which performed visual feedback balance training for 20 minutes and ankle joint exercises for 10 minutes, and a control group, which performed ankle joint exercises for 30 minutes. Exercises were completed three times a week for 8 weeks. The Bio Rescue system was used to assess balance ability by measuring the limit of stability over one minute. Ankle instability was measured using the Cumberland Ankle Instability Tool (CAIT). These measures were taken before and after the experiment in each group. [Results] The experimental group showed a significant increase in the limit of stability and the CAIT score. The control group showed a significant increase in the CAIT score, while its limit of stability increased without reaching significance. [Conclusion] Visual feedback balance training can be recommended as a treatment method for patients with functional ankle instability.

  8. A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos.

    Science.gov (United States)

    Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian

    2016-04-01

    Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. Results: The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable

  9. Identification of critical areas of carotid stent navigation by measurement of resistive forces in vitro, using silicone phantoms

    International Nuclear Information System (INIS)

    Sengupta, A.; Kesavadas, T.; Baier, R.E.; Hoffmann, K.R.; Schafer, S.

    2007-01-01

    Manipulation of surgical tools in neuro-endovascular surgery presents problems that are unique to this procedure. Navigating tools through arterial complexities without appropriate visual or force feedback information often causes tool snagging, plaque dislocations and formation of thrombosis from the damage of the arterial wall by the tools. Identifying the critical areas in the vasculature during navigation of endovascular tools, will not only ensure safer surgical planning but also reduce risks of vessel damage. In the present research, resistive forces of stent navigation were measured in-vitro using silicone phantoms and clinically relevant surgical devices. The patterns of variation of the forces along the path of the stent movement were analyzed and mapped along the path of stent movement using a color code. It was observed that the forces changed along the length of the vessel, independent of the insertion length but based on the curvature of the vessel and the contact area of the device in the vessel lumen. (orig.)

  10. Stereo Camera Based Virtual Cane System with Identifiable Distance Tactile Feedback for the Blind

    Directory of Open Access Journals (Sweden)

    Donghun Kim

    2014-06-01

    Full Text Available In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and then the 3D pointing direction of the finger is estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. In the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger pointing gesture and the tactile distance feedback is fully identifiable to the blind user.
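    Two steps of the pipeline lend themselves to a short sketch: converting stereo disparity to depth, and finding the nearest object along the estimated pointing direction. The cone-angle test is a simplification assumed for the example, not the authors' detection method, and the parameter values are illustrative.

```python
import numpy as np

def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair (metres)."""
    return focal_px * baseline_m / disparity_px

def first_obstacle_along_ray(origin, direction, points, cone_deg: float = 5.0):
    """Distance to the nearest 3-D point lying within a small cone around the
    estimated pointing direction; returns None if the cone is empty."""
    origin = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    vecs = np.asarray(points, dtype=float) - origin
    dists = np.linalg.norm(vecs, axis=1)
    cosang = vecs @ d / np.maximum(dists, 1e-9)
    mask = cosang > np.cos(np.radians(cone_deg))
    return float(dists[mask].min()) if mask.any() else None
```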

  11. Shape Perception and Navigation in Blind Adults

    Science.gov (United States)

    Gori, Monica; Cappagli, Giulia; Baud-Bovy, Gabriel; Finocchietti, Sara

    2017-01-01

    Different sensory systems interact to generate a representation of space and to navigate. Vision plays a critical role in the representation of space development. During navigation, vision is integrated with auditory and mobility cues. In blind individuals, visual experience is not available and navigation therefore lacks this important sensory signal. In blind individuals, compensatory mechanisms can be adopted to improve spatial and navigation skills. On the other hand, the limitations of these compensatory mechanisms are not completely clear. Both enhanced and impaired reliance on auditory cues in blind individuals have been reported. Here, we develop a new paradigm to test both auditory perception and navigation skills in blind and sighted individuals and to investigate the effect that visual experience has on the ability to reproduce simple and complex paths. During the navigation task, early blind, late blind and sighted individuals were required first to listen to an audio shape and then to recognize and reproduce it by walking. After each audio shape was presented, a static sound was played and the participants were asked to reach it. Movements were recorded with a motion tracking system. Our results show three main impairments specific to early blind individuals. The first is the tendency to compress the shapes reproduced during navigation. The second is the difficulty to recognize complex audio stimuli, and finally, the third is the difficulty in reproducing the desired shape: early blind participants occasionally reported perceiving a square but they actually reproduced a circle during the navigation task. We discuss these results in terms of compromised spatial reference frames due to lack of visual input during the early period of development. PMID:28144226

  12. Kinesthetic and vestibular information modulate alpha activity during spatial navigation: a mobile EEG study.

    Science.gov (United States)

    Ehinger, Benedikt V; Fischer, Petra; Gert, Anna L; Kaufhold, Lilli; Weber, Felix; Pipa, Gordon; König, Peter

    2014-01-01

    In everyday life, spatial navigation involving locomotion provides congruent visual, vestibular, and kinesthetic information that need to be integrated. Yet, previous studies on human brain activity during navigation focus on stationary setups, neglecting vestibular and kinesthetic feedback. The aim of our work is to uncover the influence of those sensory modalities on cortical processing. We developed a fully immersive virtual reality setup combined with high-density mobile electroencephalography (EEG). Participants traversed one leg of a triangle, turned on the spot, continued along the second leg, and finally indicated the location of their starting position. Vestibular and kinesthetic information was provided either in combination, as isolated sources of information, or not at all within a 2 × 2 full factorial intra-subjects design. EEG data were processed by clustering independent components, and time-frequency spectrograms were calculated. In parietal, occipital, and temporal clusters, we detected alpha suppression during the turning movement, which is associated with a heightened demand of visuo-attentional processing and closely resembles results reported in previous stationary studies. This decrease is present in all conditions and therefore seems to generalize to more natural settings. Yet, in incongruent conditions, when different sensory modalities did not match, the decrease is significantly stronger. Additionally, in more anterior areas we found that providing only vestibular but no kinesthetic information results in alpha increase. These observations demonstrate that stationary experiments omit important aspects of sensory feedback. Therefore, it is important to develop more natural experimental settings in order to capture a more complete picture of neural correlates of spatial navigation.
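    The alpha-band analysis can be illustrated with a minimal sketch that computes a spectrogram for one EEG channel and averages power over 8-12 Hz; the sampling rate, window length, and synthetic test signal are assumptions, not details of the study's processing pipeline.

```python
import numpy as np
from scipy import signal

def alpha_power_timecourse(eeg: np.ndarray, fs: float = 500.0, band=(8.0, 12.0)):
    """Time course of alpha-band power for one EEG channel via a spectrogram."""
    f, t, Sxx = signal.spectrogram(eeg, fs=fs, nperseg=int(fs), noverlap=int(fs * 0.9))
    idx = (f >= band[0]) & (f <= band[1])
    return t, Sxx[idx].mean(axis=0)

# Example with synthetic data: 10 s of noise containing a brief 10 Hz burst.
fs = 500.0
t = np.arange(0, 10, 1 / fs)
eeg = np.random.randn(t.size) + 2 * np.sin(2 * np.pi * 10 * t) * ((t > 4) & (t < 6))
times, alpha = alpha_power_timecourse(eeg, fs)
```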

  13. Kinesthetic and Vestibular Information Modulate Alpha Activity during Spatial Navigation: A Mobile EEG Study

    Directory of Open Access Journals (Sweden)

    Benedikt Valerian Ehinger

    2014-02-01

    Full Text Available In everyday life, spatial navigation involving locomotion provides congruent visual, vestibular and kinesthetic information that need to be integrated. Yet, previous studies on human brain activity during navigation focus on stationary setups, neglecting vestibular and kinesthetic feedback. The aim of our work is to uncover the influence of those sensory modalities on cortical processing. We developed a fully immersive virtual reality setup combined with high-density mobile electroencephalography (EEG). Participants traversed one leg of a triangle, turned on the spot, continued along the second leg and finally indicated the location of their starting position. Vestibular and kinesthetic information was provided either in combination, as isolated sources of information or not at all within a 2 × 2 full factorial intra-subjects design. EEG data were processed by clustering independent components, and time-frequency spectrograms were calculated. In parietal, occipital and temporal clusters, we detected alpha suppression during the turning movement, which is associated with a heightened demand of visuo-attentional processing, and closely resembles results reported in previous stationary studies. This decrease is present in all conditions and therefore seems to generalize to more natural settings. Yet, in incongruent conditions, when different sensory modalities did not match, the decrease is significantly stronger. Additionally, in more anterior areas, we found that providing only vestibular but no kinesthetic information results in alpha increase. These observations demonstrate that stationary experiments omit important aspects of sensory feedback. Therefore, it is important to develop more natural experimental settings in order to capture a more complete picture of neural correlates of spatial navigation.

  14. Cloud-Induced Uncertainty for Visual Navigation

    Science.gov (United States)

    2014-12-26

    can occur due to interference, jamming, or signal blockage in urban canyons. In GPS-denied environments, a GPS/INS navigation system is forced to rely...physics-based approaches use equations that model fluid flow, thermodynamics, water condensation, and evaporation to generate clouds [4]. The drawback

  15. Bio-inspired modeling and implementation of the ocelli visual system of flying insects.

    Science.gov (United States)

    Gremillion, Gregory; Humbert, J Sean; Krapp, Holger G

    2014-12-01

    Two visual sensing modalities in insects, the ocelli and compound eyes, provide signals used for flight stabilization and navigation. In this article, a generalized model of the ocellar visual system is developed for a 3-D visual simulation environment based on behavioral, anatomical, and electrophysiological data from several species. A linear measurement model is estimated from Monte Carlo simulation in a cluttered urban environment relating state changes of the vehicle to the outputs of the ocellar model. A fully analog-printed circuit board sensor based on this model is designed and fabricated. Open-loop characterization of the sensor to visual stimuli induced by self motion is performed. Closed-loop stabilizing feedback of the sensor in combination with optic flow sensors is implemented onboard a quadrotor micro-air vehicle and its impulse response is characterized.

  16. A Navigation System for the Visually Impaired: A Fusion of Vision and Depth Sensor

    Science.gov (United States)

    Kanwal, Nadia; Bostanci, Erkan; Currie, Keith; Clark, Adrian F.

    2015-01-01

    For a number of years, scientists have been trying to develop aids that can make visually impaired people more independent and aware of their surroundings. Computer-based automatic navigation tools are one example of this, motivated by the increasing miniaturization of electronics and the improvement in processing power and sensing capabilities. This paper presents a complete navigation system based on low cost and physically unobtrusive sensors such as a camera and an infrared sensor. The system is based around corners and depth values from Kinect's infrared sensor. Obstacles are found in images from a camera using corner detection, while input from the depth sensor provides the corresponding distance. The combination is both efficient and robust. The system not only identifies hurdles but also suggests a safe path (if available) to the left or right side and tells the user to stop, move left, or move right. The system has been tested in real time by both blindfolded and blind people at different indoor and outdoor locations, demonstrating that it operates adequately. PMID:27057135
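    A minimal sketch of the corner-plus-depth decision logic described above, assuming OpenCV's goodFeaturesToTrack for corner detection and an aligned depth image in metres; the thresholds and the left/right rule are illustrative choices, not the published system's parameters.

```python
import numpy as np
import cv2  # OpenCV; assumed available

def navigation_hint(gray: np.ndarray, depth_m: np.ndarray,
                    stop_dist: float = 1.0, max_corners: int = 200) -> str:
    """Detect corners as obstacle cues, look up their depth, and suggest
    'stop', 'move left', 'move right', or 'go'."""
    corners = cv2.goodFeaturesToTrack(gray, max_corners, 0.01, 10)
    if corners is None:
        return "go"
    h, w = gray.shape
    near_left = near_right = 0
    for x, y in corners.reshape(-1, 2):
        d = depth_m[int(y), int(x)]
        if 0 < d < stop_dist:                 # ignore invalid (zero) depth readings
            if x < w / 2:
                near_left += 1
            else:
                near_right += 1
    if near_left and near_right:
        return "stop"
    if near_left:
        return "move right"
    if near_right:
        return "move left"
    return "go"
```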

  17. Real-time modulation of visual feedback on human full-body movements in a virtual mirror: development and proof-of-concept.

    Science.gov (United States)

    Roosink, Meyke; Robitaille, Nicolas; McFadyen, Bradford J; Hébert, Luc J; Jackson, Philip L; Bouyer, Laurent J; Mercier, Catherine

    2015-01-05

    Virtual reality (VR) provides interactive multimodal sensory stimuli and biofeedback, and can be a powerful tool for physical and cognitive rehabilitation. However, existing systems have generally not implemented realistic full-body avatars and/or a scaling of visual movement feedback. We developed a "virtual mirror" that displays a realistic full-body avatar that responds to full-body movements in all movement planes in real-time, and that allows for the scaling of visual feedback on movements in real-time. The primary objective of this proof-of-concept study was to assess the ability of healthy subjects to detect scaled feedback on trunk flexion movements. The "virtual mirror" was developed by integrating motion capture, virtual reality and projection systems. A protocol was developed to provide both augmented and reduced feedback on trunk flexion movements while sitting and standing. The task required reliance on both visual and proprioceptive feedback. The ability to detect scaled feedback was assessed in healthy subjects (n = 10) using a two-alternative forced choice paradigm. Additionally, immersion in the VR environment and task adherence (flexion angles, velocity, and fluency) were assessed. The ability to detect scaled feedback could be modelled using a sigmoid curve with a high goodness of fit (R2 range 89-98%). The point of subjective equivalence was not significantly different from 0 (i.e. not shifted), indicating an unbiased perception. The just noticeable difference was 0.035 ± 0.007, indicating that subjects were able to discriminate different scaling levels consistently. VR immersion was reported to be good, despite some perceived delays between movements and VR projections. Movement kinematic analysis confirmed task adherence. The new "virtual mirror" extends existing VR systems for motor and pain rehabilitation by enabling the use of realistic full-body avatars and scaled feedback. Proof-of-concept was demonstrated for the assessment of
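    The psychometric analysis can be illustrated as follows: fit a logistic function to the proportion of "larger" judgments across scaling levels, then read off the point of subjective equality and a slope parameter related to the just noticeable difference. The data points below are hypothetical, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Psychometric function: probability of judging the scaled movement as larger,
    with point of subjective equality (pse) and a slope parameter related to the JND."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

# Hypothetical scaling levels and proportions of 'larger' responses.
scaling = np.array([-0.10, -0.05, -0.02, 0.0, 0.02, 0.05, 0.10])
p_larger = np.array([0.05, 0.15, 0.35, 0.50, 0.68, 0.85, 0.97])

(pse, slope), _ = curve_fit(logistic, scaling, p_larger, p0=(0.0, 0.03))
print(f"PSE = {pse:.3f}, slope (JND-related) = {slope:.3f}")
```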

  18. A Study of Visual Descriptors for Outdoor Navigation Using Google Street View Images

    Directory of Open Access Journals (Sweden)

    L. Fernández

    2016-01-01

    Full Text Available A comparative analysis between several methods to describe outdoor panoramic images is presented. The main objective consists in studying the performance of these methods in the localization process of a mobile robot (vehicle in an outdoor environment, when a visual map that contains images acquired from different positions of the environment is available. With this aim, we make use of the database provided by Google Street View, which contains spherical panoramic images captured in urban environments and their GPS position. The main benefit of using these images resides in the fact that it permits testing any novel localization algorithm in countless outdoor environments anywhere in the world and under realistic capture conditions. The main contribution of this work consists in performing a comparative evaluation of different methods to describe images to solve the localization problem in an outdoor dense map using only visual information. We have tested our algorithms using several sets of panoramic images captured in different outdoor environments. The results obtained in the work can be useful to select an appropriate description method for visual navigation tasks in outdoor environments using the Google Street View database and taking into consideration both the accuracy in localization and the computational efficiency of the algorithm.

  19. Design and Evaluation of Shape-Changing Haptic Interfaces for Pedestrian Navigation Assistance.

    Science.gov (United States)

    Spiers, Adam J; Dollar, Aaron M

    2017-01-01

    Shape-changing interfaces are a category of device capable of altering their form in order to facilitate communication of information. In this work, we present a shape-changing device that has been designed for navigation assistance. 'The Animotus' (previously 'The Haptic Sandwich') resembles a cube with an articulated upper half that is able to rotate and extend (translate) relative to the bottom half, which is fixed in the user's grasp. This rotation and extension, generally felt via the user's fingers, is used to represent heading and proximity to navigational targets. The device is intended to provide an alternative to screen or audio based interfaces for visually impaired, hearing impaired, deafblind, and sighted pedestrians. The motivation and design of the haptic device is presented, followed by the results of a navigation experiment that aimed to determine the role of each device DOF in terms of facilitating guidance. An additional device, 'The Haptic Taco', which modulated its volume in response to target proximity (negating directional feedback), was also compared. Results indicate that while the heading (rotational) DOF benefited motion efficiency, the proximity (translational) DOF benefited velocity. Combination of the two DOF improved overall performance. The volumetric Taco performed comparably to the Animotus' extension DOF.
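    A sketch of how heading and proximity might be mapped onto the device's two degrees of freedom; the clamping range and extension scaling below are invented values for illustration, not the Animotus' actual specifications.

```python
def shape_change_command(user_heading_deg: float, bearing_to_target_deg: float,
                         distance_m: float, max_extension_m: float = 0.02,
                         full_scale_m: float = 20.0):
    """Map heading error to rotation of the upper half (degrees) and target
    proximity to its extension (metres); all scaling values are hypothetical."""
    error = (bearing_to_target_deg - user_heading_deg + 180.0) % 360.0 - 180.0
    rotation = max(-45.0, min(45.0, error))                 # clamp to an assumed device range
    extension = max_extension_m * min(distance_m, full_scale_m) / full_scale_m
    return rotation, extension

print(shape_change_command(user_heading_deg=10.0, bearing_to_target_deg=40.0, distance_m=5.0))
```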

  20. Validation of exposure visualization and audible distance emission for navigated temporal bone drilling in phantoms.

    Directory of Open Access Journals (Sweden)

    Eduard H J Voormolen

    Full Text Available BACKGROUND: A neuronavigation interface with extended function as compared with current systems was developed to aid during temporal bone surgery. The interface, named EVADE, updates the prior anatomical image and visualizes the bone drilling process virtually in real-time without need for intra-operative imaging. Furthermore, EVADE continuously calculates the distance from the drill tip to segmented temporal bone critical structures (e.g. the sigmoid sinus and facial nerve) and produces audiovisual warnings if the surgeon drills in too close vicinity. The aim of this study was to evaluate the accuracy and surgical utility of EVADE in physical phantoms. METHODOLOGY/PRINCIPAL FINDINGS: We performed 228 measurements assessing the position accuracy of tracking a navigated drill in the operating theatre. A mean target registration error of 1.33±0.61 mm with a maximum error of 3.04 mm was found. Five neurosurgeons each drilled two temporal bone phantoms, once using EVADE, and once using a standard neuronavigation interface. While using standard neuronavigation the surgeons damaged three modeled temporal bone critical structures. No structure was hit by surgeons utilizing EVADE. Surgeons felt better orientated and thought they had improved tumor exposure with EVADE. Furthermore, we compared the distances between surface meshes of the virtual drill cavities created by EVADE to actual drill cavities: average maximum errors of 2.54±0.49 mm and -2.70±0.48 mm were found. CONCLUSIONS/SIGNIFICANCE: These results demonstrate that EVADE gives accurate feedback which reduces risks of harming modeled critical structures compared to a standard neuronavigation interface during temporal bone phantom drilling.
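    The distance-warning logic can be sketched as a nearest-point query against a sampled critical-structure surface; the warning thresholds and the random point cloud below are assumptions for the example, not EVADE's actual parameters.

```python
import numpy as np

def proximity_warning(drill_tip: np.ndarray, structure_points: np.ndarray,
                      warn_mm: float = 3.0, stop_mm: float = 1.0) -> str:
    """Minimum distance from the tracked drill tip to a sampled critical-structure
    surface, mapped to a graded warning (thresholds are illustrative)."""
    d = np.linalg.norm(structure_points - drill_tip, axis=1).min()
    if d <= stop_mm:
        return f"STOP ({d:.1f} mm)"
    if d <= warn_mm:
        return f"warning ({d:.1f} mm)"
    return f"clear ({d:.1f} mm)"

# Example with a hypothetical point cloud standing in for a segmented sinus surface (mm).
sinus = np.random.rand(1000, 3) * 30.0
print(proximity_warning(np.array([15.0, 15.0, 14.0]), sinus))
```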

  1. A feedback model of visual attention.

    Science.gov (United States)

    Spratling, M W; Johnson, M H

    2004-03-01

    Feedback connections are a prominent feature of cortical anatomy and are likely to have a significant functional role in neural information processing. We present a neural network model of cortical feedback that successfully simulates neurophysiological data associated with attention. In this domain, our model can be considered a more detailed, and biologically plausible, implementation of the biased competition model of attention. However, our model is more general as it can also explain a variety of other top-down processes in vision, such as figure/ground segmentation and contextual cueing. This model thus suggests that a common mechanism, involving cortical feedback pathways, is responsible for a range of phenomena and provides a unified account of currently disparate areas of research.

  2. Age-specific effects of mirror-muscle activity on cross-limb adaptations under mirror and non-mirror visual feedback conditions.

    Directory of Open Access Journals (Sweden)

    Paola eReissig

    2015-12-01

    Full Text Available Cross-limb transfer (CLT) describes the observation of bilateral performance gains due to unilateral motor practice. Previous research has suggested that CLT may be reduced, or absent, in older adults, possibly due to age-related structural and functional brain changes. Based on research showing increases in CLT due to the provision of mirror visual feedback (MVF) during task execution in young adults, our study aimed to investigate whether MVF can facilitate CLT in older adults, who are known to be more reliant on visual feedback for accurate motor performance. Participants (N = 53) engaged in a short-term training regime (300 movements) involving a ballistic finger task using their dominant hand, while being provided with either visual feedback of their active limb, or a mirror reflection of their active limb (superimposed over the quiescent limb). Bilateral performance was examined before, during and following the training. Furthermore, we measured corticospinal excitability (using TMS) at these time points, and assessed muscle activity bilaterally during the task via EMG; these parameters were used to investigate the mechanisms mediating and predicting CLT. Training resulted in significant bilateral performance gains that did not differ as a result of age or visual feedback (all ps > 0.1). Training also elicited bilateral increases in corticospinal excitability (p < 0.05). For younger adults, CLT was significantly predicted by performance gains in the trained hand (β = 0.47), whereas for older adults it was significantly predicted by mirror activity in the untrained hand during training (β = 0.60). The present study suggests that older adults are capable of exhibiting CLT to a similar degree to younger adults. The prominent role of mirror activity in the untrained hand for CLT in older adults indicates that bilateral cortical activity during unilateral motor tasks is a compensatory mechanism. In this particular task, MVF did not facilitate the

  3. Randomized crossover trial of a pressure sensing visual feedback system to improve mask fitting in noninvasive ventilation.

    Science.gov (United States)

    Brill, Anne-Kathrin; Moghal, Mohammad; Morrell, Mary J; Simonds, Anita K

    2017-10-01

    A good mask fit, avoiding air leaks and pressure effects on the skin, is a key element of successful noninvasive ventilation (NIV). However, delivering practical training for NIV is challenging, and it takes time to build experience and competency. This study investigated whether a pressure sensing system with real-time visual feedback improved mask fitting. During an NIV training session, 30 healthcare professionals (14 trained in mask fitting and 16 untrained) performed two mask fittings on the same healthy volunteer in a randomized order: one using standard mask-fitting procedures and one with additional visual feedback on mask pressure on the nasal bridge. Participants were required to achieve a mask fit with low mask pressure and minimal air leak; mask fit and staff confidence were measured. Compared with standard mask fitting, a lower pressure was exerted on the nasal bridge using the feedback system (71.1 ± 17.6 mm Hg vs 63.2 ± 14.6 mm Hg). The use of visual feedback during mask-fitting training resulted in a lower pressure on the skin and better mask fit for the volunteer, with increased staff confidence. © 2017 Asian Pacific Society of Respirology.

  4. Artificial proprioceptive feedback for myoelectric control.

    Science.gov (United States)

    Pistohl, Tobias; Joshi, Deepak; Ganesh, Gowrishankar; Jackson, Andrew; Nazarpour, Kianoush

    2015-05-01

    The typical control of myoelectric interfaces, whether in laboratory settings or real-life prosthetic applications, largely relies on visual feedback because proprioceptive signals from the controlling muscles are either not available or very noisy. We conducted a set of experiments to test whether artificial proprioceptive feedback, delivered noninvasively to another limb, can improve control of a two-dimensional myoelectrically-controlled computer interface. In these experiments, participants were required to reach a target with a visual cursor that was controlled by electromyogram signals recorded from muscles of the left hand, while they were provided with an additional proprioceptive feedback on their right arm by moving it with a robotic manipulandum. Provision of additional artificial proprioceptive feedback improved the angular accuracy of their movements when compared to using visual feedback alone but did not increase the overall accuracy quantified with the average distance between the cursor and the target. The advantages conferred by proprioception were present only when the proprioceptive feedback had similar orientation to the visual feedback in the task space and not when it was mirrored, demonstrating the importance of congruency in feedback modalities for multi-sensory integration. Our results reveal the ability of the human motor system to learn new inter-limb sensory-motor associations; the motor system can utilize task-related sensory feedback, even when it is available on a limb distinct from the one being actuated. In addition, the proposed task structure provides a flexible test paradigm by which the effectiveness of various sensory feedback and multi-sensory integration for myoelectric prosthesis control can be evaluated.

  5. Man-systems evaluation of moving base vehicle simulation motion cues. [human acceleration perception involving visual feedback

    Science.gov (United States)

    Kirkpatrick, M.; Brye, R. G.

    1974-01-01

    A motion cue investigation program is reported that deals with human-factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory-cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.

  6. Adaptive learning in a compartmental model of visual cortex—how feedback enables stable category learning and refinement

    Science.gov (United States)

    Layher, Georg; Schrodt, Fabian; Butz, Martin V.; Neumann, Heiko

    2014-01-01

    The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, both of which are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in computational neuroscience. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of additional (sub-) category representations. We demonstrate the temporal evolution of such learning and show how the proposed combination of an associative memory with a modulatory feedback integration successfully establishes category and subcategory representations

  7. Psychophysical study of the visual sun location in pictures of cloudy and twilight skies inspired by Viking navigation.

    Science.gov (United States)

    Barta, András; Horváth, Gábor; Meyer-Rochow, Victor Benno

    2005-06-01

    In the late 1960s it was hypothesized that Vikings had been able to navigate the open seas, even when the sun was occluded by clouds or below the sea horizon, by using the angle of polarization of skylight. To detect the direction of skylight polarization, they were thought to have made use of birefringent crystals, called "sun-stones," and a large part of the scientific community still firmly believe that Vikings were capable of polarimetric navigation. However, there are some critics who treat the usefulness of skylight polarization for orientation under partly cloudy or twilight conditions with extreme skepticism. One of their counterarguments has been the assumption that solar positions or solar azimuth directions could be estimated quite accurately by the naked eye, even if the sun was behind clouds or below the sea horizon. Thus under partly cloudy or twilight conditions there might have been no serious need for a polarimetric method to determine the position of the sun. The aim of our study was to test quantitatively the validity of this qualitative counterargument. In our psychophysical laboratory experiments, test subjects were confronted with numerous 180 degrees field-of-view color photographs of partly cloudy skies with the sun occluded by clouds or of twilight skies with the sun below the horizon. The task of the subjects was to guess the position or the azimuth direction of the invisible sun with the naked eye. We calculated means and standard deviations of the estimated solar positions and azimuth angles to characterize the accuracy of the visual sun location. Our data do not support the common belief that the invisible sun can be located quite accurately from the celestial brightness and/or color patterns under cloudy or twilight conditions. Although our results underestimate the accuracy of visual sun location by experienced Viking navigators, the mentioned counterargument cannot be taken seriously as a valid criticism of the theory of the alleged

  8. Breath-hold monitoring and visual feedback for radiotherapy using a charge-coupled device camera and a head-mounted display. System development and feasibility

    International Nuclear Information System (INIS)

    Yoshitake, Tadamasa; Nakamura, Katsumasa; Shioyama, Yoshiyuki

    2008-01-01

    The aim of this study was to present the technical aspects of the breath-hold technique with respiratory monitoring and visual feedback and to evaluate the feasibility of this system in healthy volunteers. To monitor respiration, the vertical position of the fiducial marker placed on the patient's abdomen was tracked by a machine vision system with a charge-coupled device camera. A monocular head-mounted display was used to provide the patient with visual feedback about the breathing trace. Five healthy male volunteers were enrolled in this study. They held their breath at the end-inspiration and the end-expiration phases. They performed five repetitions of the same type of 15-s breath-holds with and without a head-mounted display, respectively. A standard deviation of five mean positions of the fiducial marker during a 15-s breath-hold in each breath-hold type was used as the reproducibility value of breath-hold. All five volunteers well tolerated the breath-hold maneuver. For the inspiration breath-hold, the standard deviations with and without visual feedback were 1.74 mm and 0.84 mm, respectively (P=0.20). For the expiration breath-hold, the standard deviations with and without visual feedback were 0.63 mm and 0.96 mm, respectively (P=0.025). Our newly developed system might help the patient achieve improved breath-hold reproducibility. (author)

  9. Patient DF's visual brain in action: Visual feedforward control in visual form agnosia.

    Science.gov (United States)

    Whitwell, Robert L; Milner, A David; Cavina-Pratesi, Cristiana; Barat, Masihullah; Goodale, Melvyn A

    2015-05-01

    Patient DF, who developed visual form agnosia following ventral-stream damage, is unable to discriminate the width of objects, performing at chance, for example, when asked to open her thumb and forefinger a matching amount. Remarkably, however, DF adjusts her hand aperture to accommodate the width of objects when reaching out to pick them up (grip scaling). While this spared ability to grasp objects is presumed to be mediated by visuomotor modules in her relatively intact dorsal stream, it is possible that it may rely abnormally on online visual or haptic feedback. We report here that DF's grip scaling remained intact when her vision was completely suppressed during grasp movements, and it still dissociated sharply from her poor perceptual estimates of target size. We then tested whether providing trial-by-trial haptic feedback after making such perceptual estimates might improve DF's performance, but found that they remained significantly impaired. In a final experiment, we re-examined whether DF's grip scaling depends on receiving veridical haptic feedback during grasping. In one condition, the haptic feedback was identical to the visual targets. In a second condition, the haptic feedback was of a constant intermediate width while the visual target varied trial by trial. Despite this incongruent feedback, DF still scaled her grip aperture to the visual widths of the target blocks, showing only normal adaptation to the false haptically-experienced width. Taken together, these results strengthen the view that DF's spared grasping relies on a normal mode of dorsal-stream functioning, based chiefly on visual feedforward processing. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Interwoven fluctuations during intermodal perception: fractality in head sway supports the use of visual feedback in haptic perceptual judgments by manual wielding.

    Science.gov (United States)

    Kelty-Stephen, Damian G; Dixon, James A

    2014-12-01

    Intermodal integration required for perceptual learning tasks is rife with individual differences. Participants vary in how they use perceptual information available to one modality. One participant alone might change her own response over time. Participants vary further in their use of feedback through one modality to inform another modality. Two experiments test the general hypothesis that perceptual-motor fluctuations reveal both information use within a modality and coordination among modalities. Experiment 1 focuses on perceptual learning in dynamic touch, in which participants use exploratory hand-wielding of unseen objects to make visually guided length judgments and use visual feedback to rescale their judgments of the same mechanical information. Previous research found that the degree of fractal temporal scaling (i.e., "fractality") in hand-wielding moderates the use of mechanical information. Experiment 1 shows that head-sway fractality moderates the use of visual information. Further, experience with feedback increases head-sway fractality and prolongs its effect on later hand-wielding fractality. Experiment 2 replicates the effect of head-sway fractality moderating the use of visual information in a purely visual-judgment task. Together, these findings suggest that fractal fluctuations may provide a modality-general window onto not just how participants use perceptual information but also how well they may integrate information among different modalities. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  11. Sensorimotor Learning of Acupuncture Needle Manipulation Using Visual Feedback.

    Directory of Open Access Journals (Sweden)

    Won-Mo Jung

    Full Text Available Humans can acquire a wide variety of motor skills using sensory feedback pertaining to discrepancies between intended and actual movements. Acupuncture needle manipulation involves sophisticated hand movements and represents a fundamental skill for acupuncturists. We investigated whether untrained students could improve their motor performance during acupuncture needle manipulation using visual feedback (VF). Twenty-one untrained medical students were included, randomly divided into concurrent (n = 10) and post-trial (n = 11) VF groups. Both groups were trained in simple lift/thrusting techniques during session 1, and in complicated lift/thrusting techniques in session 2 (eight training trials per session). We compared the motion patterns and error magnitudes of pre- and post-training tests. During motion pattern analysis, both the concurrent and post-trial VF groups exhibited greater improvements in motion patterns during the complicated lifting/thrusting session. In the magnitude error analysis, both groups also exhibited reduced error magnitudes during the simple lifting/thrusting session. For the training period, the concurrent VF group exhibited reduced error magnitudes across all training trials, whereas the post-trial VF group was characterized by greater error magnitudes during initial trials, which gradually reduced during later trials. Our findings suggest that novices can improve the sophisticated hand movements required for acupuncture needle manipulation using sensorimotor learning with VF. Use of two types of VF can be beneficial for untrained students in terms of learning how to manipulate acupuncture needles, using either automatic or cognitive processes.

  12. Observability Analysis of a Matrix Kalman Filter-Based Navigation System Using Visual/Inertial/Magnetic Sensors

    Directory of Open Access Journals (Sweden)

    Guohu Feng

    2012-06-01

    Full Text Available A matrix Kalman filter (MKF) has been implemented for an integrated navigation system using visual/inertial/magnetic sensors. The MKF rearranges the original nonlinear process model into a pseudo-linear process model. We employ the observability rank criterion based on Lie derivatives to verify the conditions under which the nonlinear system is observable. It has been proved that such observability conditions are: (a) at least one degree of rotational freedom is excited, and (b) at least two linearly independent horizontal lines and one vertical line are observed. Experimental results have validated the correctness of these observability conditions.
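    The abstract only states the observability conditions; as a generic illustration of a rank-based observability test, the sketch below stacks H, HA, HA^2, ... for a linear(ized) system and checks its rank. This is a textbook construction with placeholder matrices, not the Lie-derivative analysis of the nonlinear visual/inertial/magnetic model used in the paper.

      import numpy as np

      def observability_matrix(A, H):
          """Stack H, HA, ..., HA^(n-1) for an n-state system x' = Ax, y = Hx."""
          n = A.shape[0]
          return np.vstack([H @ np.linalg.matrix_power(A, k) for k in range(n)])

      def is_observable(A, H):
          return np.linalg.matrix_rank(observability_matrix(A, H)) == A.shape[0]

      # Placeholder 4-state chain-of-integrators example (not the paper's model).
      A = np.diag([1.0, 1.0, 1.0], k=1)
      H = np.array([[1.0, 0.0, 0.0, 0.0]])
      print("observable:", is_observable(A, H))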

  13. Crosswalk navigation for people with visual impairments on a wearable device

    Science.gov (United States)

    Cheng, Ruiqi; Wang, Kaiwei; Yang, Kailun; Long, Ningbo; Hu, Weijian; Chen, Hao; Bai, Jian; Liu, Dong

    2017-09-01

    Detecting and reminding of crosswalks at urban intersections is one of the most important demands for people with visual impairments. A real-time crosswalk detection algorithm, adaptive extraction and consistency analysis (AECA), is proposed. Compared with existing algorithms, which detect crosswalks in ideal scenarios, the AECA algorithm performs better in challenging scenarios, such as crosswalks at far distances, low-contrast crosswalks, pedestrian occlusion, various illuminances, and the limited resources of portable PCs. Bright stripes of crosswalks are extracted by adaptive thresholding and are gathered to form crosswalks by consistency analysis. On the testing dataset, the proposed algorithm achieves a precision of 84.6% and a recall of 60.1%, which are higher than those of the bipolarity-based algorithm. The position and orientation of crosswalks are conveyed to users by voice prompts so that they can align themselves with crosswalks and walk along them. The field tests carried out in various practical scenarios prove the effectiveness and reliability of the proposed navigation approach.
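    A loose sketch of the two stages named above: adaptive extraction of bright stripe candidates followed by a simple consistency check on their orientations. The thresholding parameters, size/aspect-ratio tests and grouping rule are assumptions for illustration, not the published AECA algorithm.

      import cv2
      import numpy as np

      def detect_crosswalk_stripes(gray):
          """Extract bright, elongated stripe candidates and keep those with consistent orientation."""
          # Adaptive thresholding: keep pixels brighter than their local neighbourhood mean.
          binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                         cv2.THRESH_BINARY, 31, -10)
          contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          candidates = []
          for c in contours:
              (cx, cy), (w, h), angle = cv2.minAreaRect(c)
              if w * h < 200 or max(w, h) < 3.0 * min(w, h):
                  continue                      # drop tiny or non-elongated blobs
              candidates.append(((cx, cy), (w, h), angle))
          if not candidates:
              return []
          # Consistency analysis (simplified): stripes of one crosswalk share an orientation.
          angles = np.array([r[2] % 90 for r in candidates])
          median_angle = np.median(angles)
          return [r for r, a in zip(candidates, angles) if abs(a - median_angle) < 10]

      # Usage (hypothetical file name):
      # stripes = detect_crosswalk_stripes(cv2.imread("intersection.png", cv2.IMREAD_GRAYSCALE))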

  14. FlyAR: augmented reality supported micro aerial vehicle navigation.

    Science.gov (United States)

    Zollmann, Stefanie; Hoppe, Christof; Langlotz, Tobias; Reitmayr, Gerhard

    2014-04-01

    Micro aerial vehicles equipped with high-resolution cameras can be used to create aerial reconstructions of an area of interest. In that context automatic flight path planning and autonomous flying is often applied but so far cannot fully replace the human in the loop, supervising the flight on-site to assure that there are no collisions with obstacles. Unfortunately, this workflow yields several issues, such as the need to mentally transfer the aerial vehicle’s position between 2D map positions and the physical environment, and the complicated depth perception of objects flying in the distance. Augmented Reality can address these issues by bringing the flight planning process on-site and visualizing the spatial relationship between the planned or current positions of the vehicle and the physical environment. In this paper, we present Augmented Reality supported navigation and flight planning of micro aerial vehicles by augmenting the user’s view with relevant information for flight planning and live feedback for flight supervision. Furthermore, we introduce additional depth hints supporting the user in understanding the spatial relationship of virtual waypoints in the physical world and investigate the effect of these visualization techniques on the spatial understanding.

  15. Mobile Screens: The Visual Regime of Navigation

    NARCIS (Netherlands)

    Verhoeff, N.

    2012-01-01

    In this book on screen media, space, and mobility I compare synchronically, as well as diachronically, diverse and variegated screen media - their technologies and practices – as sites for virtual mobility and navigation. Mobility as a central trope can be found on the multiple levels that are

  16. 14 CFR 125.203 - Communication and navigation equipment.

    Science.gov (United States)

    2010-01-01

    ... within the degree of accuracy required for ATC; (ii) One marker beacon receiver providing visual and... Equipment Requirements § 125.203 Communication and navigation equipment. (a) Communication equipment—general...

  17. Joint image restoration and location in visual navigation system

    Science.gov (United States)

    Wu, Yuefeng; Sang, Nong; Lin, Wei; Shao, Yuanjie

    2018-02-01

    Image location methods are the key technologies of visual navigation, and most previous image location methods simply assume ideal inputs without taking into account real-world degradations (e.g. low resolution and blur). In view of such degradations, conventional image location methods first perform image restoration and then match the restored image against the reference image. However, by dealing with restoration and location separately, a defective output of the image restoration can affect the result of localization. In this paper, we present a joint image restoration and location (JRL) method, which utilizes a sparse representation prior to handle the challenging problem of low-quality image location. The sparse representation prior states that the degraded input image, if correctly restored, will have a good sparse representation in terms of the dictionary constructed from the reference image. By iteratively solving the image restoration in pursuit of the sparsest representation, our method can achieve simultaneous restoration and location. Based on such a sparse representation prior, we demonstrate that the image restoration task and the location task can benefit greatly from each other. Extensive experiments on real scene images with Gaussian blur are carried out and our joint model outperforms the conventional methods of treating the two tasks independently.
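    A rough sketch of the sparse-representation idea behind the approach described above: for each candidate location, patches from the reference image form a dictionary, a sparse code of the degraded observation is estimated (here with a few ISTA iterations), and the location with the smallest reconstruction residual is chosen. The dictionary construction, all parameters, and the absence of an explicit blur model are simplifications; this is not the paper's full alternating restoration/location scheme.

      import numpy as np

      def ista(D, y, lam=0.05, n_iter=100):
          """A few ISTA iterations for min_x 0.5*||D x - y||^2 + lam*||x||_1."""
          L = np.linalg.norm(D, 2) ** 2 + 1e-12        # Lipschitz constant of the gradient
          x = np.zeros(D.shape[1])
          for _ in range(n_iter):
              z = x - D.T @ (D @ x - y) / L
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
          return x

      def locate(observation, reference_dictionaries):
          """Pick the candidate location whose dictionary best (sparsely) explains the observation.

          reference_dictionaries: dict mapping a location to a (dim, n_atoms) patch dictionary."""
          best_loc, best_residual = None, np.inf
          for loc, D in reference_dictionaries.items():
              D = D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)
              x = ista(D, observation)
              residual = np.linalg.norm(D @ x - observation)
              if residual < best_residual:
                  best_loc, best_residual = loc, residual
          return best_loc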

  18. Open Touch/Sound Maps: A system to convey street data through haptic and auditory feedback

    Science.gov (United States)

    Kaklanis, Nikolaos; Votis, Konstantinos; Tzovaras, Dimitrios

    2013-08-01

    The use of spatial (geographic) information is becoming ever more central and pervasive in today's internet society, but most of it is currently inaccessible to visually impaired users. Access to visual maps is severely restricted for visually impaired and blind people, due to their inability to interpret graphical information. Thus, alternative ways of presenting a map have to be explored in order to make maps accessible. Multiple types of sensory perception, such as touch and hearing, may work as substitutes for vision in the exploration of maps. The use of multimodal virtual environments seems to be a promising alternative for people with visual impairments. The present paper introduces a tool for automatic multimodal map generation with haptic and audio feedback using OpenStreetMap data. For a desired map area, an elevation map is automatically generated and can be explored by touch using a haptic device. A sonification and a text-to-speech (TTS) mechanism also provide audio navigation information during the haptic exploration of the map.

  19. A new visual feedback-based magnetorheological haptic master for robot-assisted minimally invasive surgery

    Science.gov (United States)

    Choi, Seung-Hyun; Kim, Soomin; Kim, Pyunghwa; Park, Jinhyuk; Choi, Seung-Bok

    2015-06-01

    In this study, we developed a novel four-degrees-of-freedom haptic master using controllable magnetorheological (MR) fluid. We also integrated the haptic master with a vision device with image processing for robot-assisted minimally invasive surgery (RMIS). The proposed master can be used in RMIS as a haptic interface to provide the surgeon with a sense of touch by using both kinetic and kinesthetic information. The slave robot, which is manipulated with a proportional-integrative-derivative controller, uses a force sensor to obtain the desired forces from tissue contact, and these desired repulsive forces are then embodied through the MR haptic master. To verify the effectiveness of the haptic master, the desired force and actual force are compared in the time domain. In addition, a visual feedback system is implemented in the RMIS experiment to distinguish between the tumor and organ more clearly and provide better visibility to the operator. The hue-saturation-value color space is adopted for the image processing since it is often more intuitive than other color spaces. The effects of the image processing and haptic feedback on surgery performance are then evaluated. In this work, tumor-cutting experiments are conducted under four different operating conditions: haptic feedback on, haptic feedback off, image processing on, and image processing off. The experimental realization shows that the performance index, which is a function of pixels, is different in the four operating conditions.
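    A minimal sketch of the HSV-based image-processing step mentioned above: a colour range in HSV space segments the (colour-marked) tumor region, and a pixel-count index summarizes the result. The colour bounds and the exact definition of the performance index are placeholders; the abstract does not specify them.

      import cv2
      import numpy as np

      def tumor_pixel_index(bgr_frame, lower_hsv=(35, 60, 60), upper_hsv=(85, 255, 255)):
          """Segment a colour-marked region in HSV space; return (pixel fraction, mask)."""
          hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
          mask = cv2.inRange(hsv, np.array(lower_hsv, np.uint8), np.array(upper_hsv, np.uint8))
          mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
          return cv2.countNonZero(mask) / mask.size, mask

      # Usage (hypothetical frame from the surgical camera):
      # index, mask = tumor_pixel_index(cv2.imread("frame.png"))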

  20. An Aerial-Ground Robotic System for Navigation and Obstacle Mapping in Large Outdoor Areas

    Directory of Open Access Journals (Sweden)

    David Zapata

    2013-01-01

    Full Text Available There are many outdoor robotic applications where a robot must reach a goal position or explore an area without previous knowledge of the environment around it. Additionally, other applications (like path planning) require the use of known maps or previous information about the environment. This work presents a system composed of a terrestrial and an aerial robot that cooperate and share sensor information in order to address those requirements. The ground robot is able to navigate in an unknown large environment aided by visual feedback from a camera on board the aerial robot. At the same time, the obstacles are mapped in real-time by putting together the information from the camera and the positioning system of the ground robot. A set of experiments was carried out with the purpose of verifying the system's applicability. The experiments were performed in a simulation environment and outdoors with a medium-sized ground robot and a mini quad-rotor. The proposed robotic system shows outstanding results in simultaneous navigation and mapping applications in large outdoor environments.

  1. The integration of temporally shifted visual feedback in a synchronization task: The role of perceptual stability in a visuo-proprioceptive conflict situation.

    Science.gov (United States)

    Ceux, Tanja; Montagne, Gilles; Buekers, Martinus J

    2010-12-01

    The present study examined whether the beneficial role of coherently grouped visual motion structures for performing complex (interlimb) coordination patterns can be generalized to synchronization behavior in a visuo-proprioceptive conflict situation. To achieve this goal, 17 participants had to synchronize a self-moved circle, representing the arm movement, with a visual target signal corresponding to five temporally shifted visual feedback conditions (0%, 25%, 50%, 75%, and 100% of the target cycle duration) in three synchronization modes (in-phase, anti-phase, and intermediate). The results showed that the perception of a newly generated perceptual Gestalt between the visual feedback of the arm and the target signal facilitated the synchronization performance in the preferred in-phase synchronization mode in contrast to the less stable anti-phase and intermediate mode. Our findings suggest that the complexity of the synchronization mode defines to what extent the visual and/or proprioceptive information source affects the synchronization performance in the present unimanual synchronization task. Copyright © 2010 Elsevier B.V. All rights reserved.

  2. Haptically facilitated bimanual training combined with augmented visual feedback in moderate to severe hemiplegia.

    Science.gov (United States)

    Boos, Amy; Qiu, Qinyin; Fluet, Gerard G; Adamovich, Sergei V

    2011-01-01

    This study describes the design and feasibility testing of a hand rehabilitation system that provides haptic assistance for hand opening in moderate to severe hemiplegia while subjects attempt to perform bilateral hand movements. A cable-actuated exoskeleton robot assists the subjects in performing impaired finger movements but is controlled by movement of the unimpaired hand. In an attempt to combine the neurophysiological stimuli of bilateral movement and action observation during training, visual feedback of the impaired hand is replaced by feedback of the unimpaired hand, either by using a sagittally oriented mirror or a virtual reality setup with a pair of virtual hands presented on a flat screen controlled with movement of the unimpaired hand, providing a visual image of their paretic hand moving normally. Joint angles for both hands are measured using data gloves. The system is programmed to maintain a symmetrical relationship between the two hands as they respond to commands to open and close simultaneously. Three persons with moderate to severe hemiplegia secondary to stroke trained with the system for eight 30- to 60-minute sessions without adverse events. Each demonstrated positive motor adaptations to training. The system was well tolerated by persons with moderate to severe upper extremity hemiplegia. Further testing of its effects on motor ability with a broader range of clinical presentations is indicated.

  3. Visualizing guided tours

    DEFF Research Database (Denmark)

    Poulsen, Signe Herbers; Fjord-Larsen, Mads; Hansen, Frank Allan

    This paper identifies several problems with navigating and visualizing guided tours in traditional hypermedia systems. We discuss solutions to these problems, including the representation of guided tours as 3D metro maps with content preview. Issues regarding navigation and disorientation...

  4. Virtual reality in neurosurgical education: part-task ventriculostomy simulation with dynamic visual and haptic feedback.

    Science.gov (United States)

    Lemole, G Michael; Banerjee, P Pat; Luciano, Cristian; Neckrysh, Sergey; Charbel, Fady T

    2007-07-01

    Mastery of the neurosurgical skill set involves many hours of supervised intraoperative training. Convergence of political, economic, and social forces has limited neurosurgical resident operative exposure. There is need to develop realistic neurosurgical simulations that reproduce the operative experience, unrestricted by time and patient safety constraints. Computer-based, virtual reality platforms offer just such a possibility. The combination of virtual reality with dynamic, three-dimensional stereoscopic visualization, and haptic feedback technologies makes realistic procedural simulation possible. Most neurosurgical procedures can be conceptualized and segmented into critical task components, which can be simulated independently or in conjunction with other modules to recreate the experience of a complex neurosurgical procedure. We use the ImmersiveTouch (ImmersiveTouch, Inc., Chicago, IL) virtual reality platform, developed at the University of Illinois at Chicago, to simulate the task of ventriculostomy catheter placement as a proof-of-concept. Computed tomographic data are used to create a virtual anatomic volume. Haptic feedback offers simulated resistance and relaxation with passage of a virtual three-dimensional ventriculostomy catheter through the brain parenchyma into the ventricle. A dynamic three-dimensional graphical interface renders changing visual perspective as the user's head moves. The simulation platform was found to have realistic visual, tactile, and handling characteristics, as assessed by neurosurgical faculty, residents, and medical students. We have developed a realistic, haptics-based virtual reality simulator for neurosurgical education. Our first module recreates a critical component of the ventriculostomy placement task. This approach to task simulation can be assembled in a modular manner to reproduce entire neurosurgical procedures.

  5. Control Framework for Dexterous Manipulation Using Dynamic Visual Servoing and Tactile Sensors’ Feedback

    Directory of Open Access Journals (Sweden)

    Carlos A. Jara

    2014-01-01

    Full Text Available Tactile sensors play an important role in robotics manipulation to perform dexterous and complex tasks. This paper presents a novel control framework to perform dexterous manipulation with multi-fingered robotic hands using feedback data from tactile and visual sensors. This control framework permits the definition of new visual controllers which allow the path tracking of the object motion taking into account both the dynamics model of the robot hand and the grasping force of the fingertips under a hybrid control scheme. In addition, the proposed general method employs optimal control to obtain the desired behaviour in the joint space of the fingers based on an indicated cost function which determines how the control effort is distributed over the joints of the robotic hand. Finally, the authors show experimental verifications on a real robotic manipulation system for some of the controllers derived from the control framework.

  6. 3D Web Visualization of Environmental Information - Integration of Heterogeneous Data Sources when Providing Navigation and Interaction

    Science.gov (United States)

    Herman, L.; Řezník, T.

    2015-08-01

    3D information is essential for a number of applications used daily in various domains such as crisis management, energy management, urban planning, and cultural heritage, as well as pollution and noise mapping, etc. This paper is devoted to the issue of 3D modelling from the levels of buildings to cities. The theoretical sections comprise an analysis of cartographic principles for the 3D visualization of spatial data as well as a review of technologies and data formats used in the visualization of 3D models. Emphasis was placed on the verification of available web technologies; for example, X3DOM library was chosen for the implementation of a proof-of-concept web application. The created web application displays a 3D model of the city district of Nový Lískovec in Brno, the Czech Republic. The developed 3D visualization shows a terrain model, 3D buildings, noise pollution, and other related information. Attention was paid to the areas important for handling heterogeneous input data, the design of interactive functionality, and navigation assistants. The advantages, limitations, and future development of the proposed concept are discussed in the conclusions.

  7. A new visual feedback-based magnetorheological haptic master for robot-assisted minimally invasive surgery

    International Nuclear Information System (INIS)

    Choi, Seung-Hyun; Kim, Soomin; Kim, Pyunghwa; Park, Jinhyuk; Choi, Seung-Bok

    2015-01-01

    In this study, we developed a novel four-degrees-of-freedom haptic master using controllable magnetorheological (MR) fluid. We also integrated the haptic master with a vision device with image processing for robot-assisted minimally invasive surgery (RMIS). The proposed master can be used in RMIS as a haptic interface to provide the surgeon with a sense of touch by using both kinetic and kinesthetic information. The slave robot, which is manipulated with a proportional-integrative-derivative controller, uses a force sensor to obtain the desired forces from tissue contact, and these desired repulsive forces are then embodied through the MR haptic master. To verify the effectiveness of the haptic master, the desired force and actual force are compared in the time domain. In addition, a visual feedback system is implemented in the RMIS experiment to distinguish between the tumor and organ more clearly and provide better visibility to the operator. The hue-saturation-value color space is adopted for the image processing since it is often more intuitive than other color spaces. The effects of the image processing and haptic feedback on surgery performance are then evaluated. In this work, tumor-cutting experiments are conducted under four different operating conditions: haptic feedback on, haptic feedback off, image processing on, and image processing off. The experimental realization shows that the performance index, which is a function of pixels, is different in the four operating conditions. (paper)

  8. The Effect of Delayed Visual Feedback on Synchrony Perception in a Tapping Task

    Directory of Open Access Journals (Sweden)

    Mirjam Keetels

    2011-10-01

    Full Text Available Sensory events following a motor action are, within limits, interpreted as a causal consequence of those actions. For example, the clapping of the hands is initiated by the motor system, but subsequently visual, auditory, and tactile information is provided and processed. In the present study we examine the effect of temporal disturbances in this chain of motor-sensory events. Participants are instructed to tap a surface with their finger in synchrony with a chain of 20 sound clicks (ISI 750 ms). We examined the effect of additional visual information on this ‘tap-sound’-synchronization task. During tapping, subjects will see a video of their own tapping hand on a screen in front of them. The video can either be in synchrony with the tap (real-time recording), or can be slightly delayed (∼40–160 ms). In a control condition, no video is provided. We explore whether ‘tap-sound’ synchrony will be shifted as a function of the delayed visual feedback. Results will provide fundamental insights into how the brain preserves a causal interpretation of motor actions and their sensory consequences.

  9. Utility estimation of the application of auditory-visual-tactile sense feedback in respiratory gated radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Jung Hun; Kim, Byeong Jin; Roh, Shi Won; Lee, Hyeon Chan; Jang, Hyeong Jun; Kim, Hoi Nam [Dept. of Radiation Oncology, Biomedical Engineering, Seoul St. Mary's Hospital, Seoul (Korea, Republic of); Song, Jae Hoon [Dept. of Biomedical Engineering, Seoul St. Mary's Hospital, Seoul (Korea, Republic of); Kim, Young Jae [Dept. of Radiological Technology, Gwang Yang Health College, Gwangyang (Korea, Republic of)

    2013-03-15

    The purpose of this study was to evaluate the possibility of optimizing the gated treatment delivery time and maintaining stable respiration by guiding the breathing with the assistance of auditory-visual-tactile feedback. The participants' respiration was measured with the ANZAI 4D system. We obtained a natural breathing signal, a monitor-induced breathing signal, a monitor- and ventilator-induced breathing signal, and a breath-hold signal using a real-time monitor during 10 minutes of beam-on time. In order to check the stability, the respiratory signals obtained in each group were compared in terms of the mean, standard deviation, variation value, and beam time of the respiratory signal. The stability of each respiratory signal was assessed from the change in deviation over the respiratory time course. The analysis of the respiratory signals showed that, for all participants, the breathing signal obtained using both the real-time monitor and the ventilator was the most stable and required the shortest time. In this study, respiratory gated radiation therapy was evaluated with and without auditory-visual-tactile feedback. The study showed that the respiratory gated radiation therapy delivery time could be significantly improved by the application of video feedback combined with audio-tactile assistance. This delivery technique proved its feasibility to limit tumor motion during treatment delivery to a defined value for all patients while maintaining accuracy, and demonstrated the applicability of the technique in a conventional clinical schedule.

  10. Utility estimation of the application of auditory-visual-tactile sense feedback in respiratory gated radiation therapy

    International Nuclear Information System (INIS)

    Jo, Jung Hun; Kim, Byeong Jin; Roh, Shi Won; Lee, Hyeon Chan; Jang, Hyeong Jun; Kim, Hoi Nam; Song, Jae Hoon; Kim, Young Jae

    2013-01-01

    The purpose of this study was to evaluate the possibility of optimizing the gated treatment delivery time and maintaining stable respiration by guiding the breathing with the assistance of auditory-visual-tactile feedback. The participants' respiration was measured with the ANZAI 4D system. We obtained a natural breathing signal, a monitor-induced breathing signal, a monitor- and ventilator-induced breathing signal, and a breath-hold signal using a real-time monitor during 10 minutes of beam-on time. In order to check the stability, the respiratory signals obtained in each group were compared in terms of the mean, standard deviation, variation value, and beam time of the respiratory signal. The stability of each respiratory signal was assessed from the change in deviation over the respiratory time course. The analysis of the respiratory signals showed that, for all participants, the breathing signal obtained using both the real-time monitor and the ventilator was the most stable and required the shortest time. In this study, respiratory gated radiation therapy was evaluated with and without auditory-visual-tactile feedback. The study showed that the respiratory gated radiation therapy delivery time could be significantly improved by the application of video feedback combined with audio-tactile assistance. This delivery technique proved its feasibility to limit tumor motion during treatment delivery to a defined value for all patients while maintaining accuracy, and demonstrated the applicability of the technique in a conventional clinical schedule.

  11. DEEP-SEE: Joint Object Detection, Tracking and Recognition with Application to Visually Impaired Navigational Assistance

    Directory of Open Access Journals (Sweden)

    Ruxandra Tapu

    2017-10-01

    Full Text Available In this paper, we introduce the so-called DEEP-SEE framework that jointly exploits computer vision algorithms and deep convolutional neural networks (CNNs) to detect, track and recognize in real time objects encountered during navigation in the outdoor environment. A first feature concerns an object detection technique designed to localize both static and dynamic objects without any a priori knowledge about their position, type or shape. The methodological core of the proposed approach relies on a novel object tracking method based on two convolutional neural networks trained offline. The key principle consists of alternating between tracking using motion information and predicting the object location in time based on visual similarity. The validation of the tracking technique is performed on standard benchmark VOT datasets, and shows that the proposed approach returns state-of-the-art results while minimizing the computational complexity. Then, the DEEP-SEE framework is integrated into a novel assistive device, designed to improve cognition of VI people and to increase their safety when navigating in crowded urban scenes. The validation of our assistive device is performed on a video dataset with 30 elements acquired with the help of VI users. The proposed system shows high accuracy (>90%) and robustness (>90%) scores regardless of the scene dynamics.

  12. 33 CFR 164.78 - Navigation under way: Towing vessels.

    Science.gov (United States)

    2010-07-01

    ...) Evaluates the danger of each closing visual or radar contact; (5) Knows and applies the variation and... type of correction; (6) Knows the speed and direction of the current, and the set, drift, and tidal...

  13. Design and test of a Microsoft Kinect-based system for delivering adaptive visual feedback to stroke patients during training of upper limb movement.

    Science.gov (United States)

    Simonsen, Daniel; Popovic, Mirjana B; Spaich, Erika G; Andersen, Ole Kæseler

    2017-11-01

    The present paper describes the design and test of a low-cost Microsoft Kinect-based system for delivering adaptive visual feedback to stroke patients during the execution of an upper limb exercise. Eleven sub-acute stroke patients with varying degrees of upper limb function were recruited. Each subject participated in a control session (repeated twice) and a feedback session (repeated twice). In each session, the subjects were presented with a rectangular pattern displayed on a vertically mounted monitor embedded in the table in front of the patient. The subjects were asked to move a marker inside the rectangular pattern by using their most affected hand. During the feedback session, the thickness of the rectangular pattern was changed according to the performance of the subject, and the color of the marker changed according to its position, thereby guiding the subject's movements. In the control session, the thickness of the rectangular pattern and the color of the marker did not change. The results showed that movement similarity and smoothness were higher in the feedback session than in the control session, while the duration of the movement was longer. The present study showed that adaptive visual feedback delivered by use of the Kinect sensor can increase the similarity and smoothness of upper limb movement in stroke patients.
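    A small sketch of the adaptive feedback logic described above: the corridor (rectangle thickness) widens or narrows with recent performance, and the marker colour encodes how far the hand is from the target path. The thresholds, gain and colour scheme are invented for illustration and are not taken from the study.

      def update_thickness(thickness_cm, recent_errors_cm, target_error_cm=1.0,
                           gain=0.2, min_cm=0.5, max_cm=6.0):
          """Shrink the corridor when the subject tracks well, widen it otherwise."""
          mean_error = sum(recent_errors_cm) / len(recent_errors_cm)
          thickness_cm += gain * (mean_error - target_error_cm)
          return max(min_cm, min(max_cm, thickness_cm))

      def marker_colour(distance_to_path_cm, thickness_cm):
          """Green inside the corridor, yellow near its edge, red outside."""
          if distance_to_path_cm <= thickness_cm / 2:
              return "green"
          if distance_to_path_cm <= thickness_cm:
              return "yellow"
          return "red"

      # Example: tracking slightly worse than the target error widens the corridor.
      t = update_thickness(2.0, [1.8, 2.1, 1.5])
      print(round(t, 2), marker_colour(0.8, t))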

  14. Event Displays for the Visualization of CMS Events

    CERN Document Server

    Jones, Christopher Duncan

    2010-01-01

    During the last year the CMS experiment engaged in consolidation of its existing event display programs. The core of the new system is based on the Fireworks event display program which was by-design directly integrated with the CMS Event Data Model (EDM) and the light version of the software framework (FWLite). The Event Visualization Environment (EVE) of the ROOT framework is used to manage a consistent set of 3D and 2D views, selection, user-feedback and user-interaction with the graphics windows; several EVE components were developed by CMS in collaboration with the ROOT project. In event display operation simple plugins are registered into the system to perform conversion from EDM collections into their visual representations which are then managed by the application. Full event navigation and filtering as well as collection-level filtering is supported. The same data-extraction principle can also be applied when Fireworks will eventually operate as a service within the full software framework.
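    The plugin mechanism mentioned above (converters registered per EDM collection type) follows a common registry pattern. The sketch below shows that pattern in Python with invented type and function names; it is not the actual Fireworks/FWLite API.

      from typing import Any, Callable, Dict, List

      _CONVERTERS: Dict[str, Callable[[Any], List[dict]]] = {}

      def register_converter(collection_type: str):
          """Decorator registering a converter for one (hypothetical) collection type."""
          def wrap(func):
              _CONVERTERS[collection_type] = func
              return func
          return wrap

      @register_converter("TrackCollection")          # hypothetical type name
      def tracks_to_polylines(collection):
          return [{"kind": "polyline", "points": trk["hits"]} for trk in collection]

      def build_scene(event: Dict[str, Any]) -> List[dict]:
          """Convert every collection in the event for which a converter is registered."""
          scene: List[dict] = []
          for type_name, collection in event.items():
              converter = _CONVERTERS.get(type_name)
              if converter is not None:
                  scene.extend(converter(collection))
          return scene

      print(build_scene({"TrackCollection": [{"hits": [(0, 0), (1, 2), (2, 3)]}]}))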

  15. Event Display for the Visualization of CMS Events

    Science.gov (United States)

    Bauerdick, L. A. T.; Eulisse, G.; Jones, C. D.; Kovalskyi, D.; McCauley, T.; Mrak Tadel, A.; Muelmenstaedt, J.; Osborne, I.; Tadel, M.; Tu, Y.; Yagil, A.

    2011-12-01

    During the last year the CMS experiment engaged in consolidation of its existing event display programs. The core of the new system is based on the Fireworks event display program which was by-design directly integrated with the CMS Event Data Model (EDM) and the light version of the software framework (FWLite). The Event Visualization Environment (EVE) of the ROOT framework is used to manage a consistent set of 3D and 2D views, selection, user-feedback and user-interaction with the graphics windows; several EVE components were developed by CMS in collaboration with the ROOT project. In event display operation simple plugins are registered into the system to perform conversion from EDM collections into their visual representations which are then managed by the application. Full event navigation and filtering as well as collection-level filtering is supported. The same data-extraction principle can also be applied when Fireworks will eventually operate as a service within the full software framework.

  16. Event Display for the Visualization of CMS Events

    International Nuclear Information System (INIS)

    Bauerdick, L A T; Eulisse, G; Jones, C D; McCauley, T; Osborne, I; Kovalskyi, D; Tadel, A Mrak; Muelmenstaedt, J; Tadel, M; Tu, Y; Yagil, A

    2011-01-01

    During the last year the CMS experiment engaged in consolidation of its existing event display programs. The core of the new system is based on the Fireworks event display program which was by-design directly integrated with the CMS Event Data Model (EDM) and the light version of the software framework (FWLite). The Event Visualization Environment (EVE) of the ROOT framework is used to manage a consistent set of 3D and 2D views, selection, user-feedback and user-interaction with the graphics windows; several EVE components were developed by CMS in collaboration with the ROOT project. In event display operation simple plugins are registered into the system to perform conversion from EDM collections into their visual representations which are then managed by the application. Full event navigation and filtering as well as collection-level filtering is supported. The same data-extraction principle can also be applied when Fireworks will eventually operate as a service within the full software framework.

  17. Using neural networks to understand the information that guides behavior: a case study in visual navigation.

    Science.gov (United States)

    Philippides, Andrew; Graham, Paul; Baddeley, Bart; Husbands, Philip

    2015-01-01

    To behave in a robust and adaptive way, animals must extract task-relevant sensory information efficiently. One way to understand how they achieve this is to explore regularities within the information animals perceive during natural behavior. In this chapter, we describe how we have used artificial neural networks (ANNs) to explore efficiencies in vision and memory that might underpin visually guided route navigation in complex worlds. Specifically, we use three types of neural network to learn the regularities within a series of views encountered during a single route traversal (the training route), in such a way that the networks output the familiarity of novel views presented to them. The problem of navigation is then reframed in terms of a search for familiar views, that is, views similar to those associated with the route. This approach has two major benefits. First, the ANN provides a compact holistic representation of the data and is thus an efficient way to encode a large set of views. Second, as we do not store the training views, we are not limited in the number of training views we use and the agent does not need to decide which views to learn.
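    A compact sketch of the familiarity idea described above: a holistic model is fit to the views seen along the training route, the familiarity of a novel view is taken as its negative reconstruction error, and a heading is chosen by keeping the most familiar candidate view. Plain PCA stands in here for the paper's neural networks, and the image sizes and scanning loop are illustrative assumptions.

      import numpy as np

      class FamiliarityModel:
          """Holistic route memory: PCA reconstruction error as (un)familiarity."""

          def __init__(self, n_components=16):
              self.n_components = n_components

          def fit(self, route_views):
              X = np.asarray(route_views, dtype=float)          # (n_views, n_pixels)
              self.mean_ = X.mean(axis=0)
              _, _, vt = np.linalg.svd(X - self.mean_, full_matrices=False)
              self.components_ = vt[:self.n_components]
              return self

          def familiarity(self, view):
              v = np.asarray(view, dtype=float) - self.mean_
              recon = (v @ self.components_.T) @ self.components_
              return -np.linalg.norm(v - recon)                  # higher = more familiar

      def choose_heading(model, candidate_views_by_heading):
          """Pick the heading whose current view looks most like the training route."""
          return max(candidate_views_by_heading,
                     key=lambda h: model.familiarity(candidate_views_by_heading[h]))

      # Tiny synthetic usage: 50 random "route views" of 100 pixels each.
      rng = np.random.default_rng(1)
      model = FamiliarityModel().fit(rng.normal(size=(50, 100)))
      views = {h: rng.normal(size=100) for h in (-30, 0, 30)}
      print(choose_heading(model, views))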

  18. Volunteers Oriented Interface Design for the Remote Navigation of Rescue Robots at Large-Scale Disaster Sites

    Science.gov (United States)

    Yang, Zhixiao; Ito, Kazuyuki; Saijo, Kazuhiko; Hirotsune, Kazuyuki; Gofuku, Akio; Matsuno, Fumitoshi

    This paper aims at constructing an efficient interface, similar to those widely used in daily life, to fulfill the needs of the many volunteer rescuers operating rescue robots at large-scale disaster sites. The developed system includes a force feedback steering wheel interface and an artificial neural network (ANN) based mouse-screen interface. The former consists of a force feedback steering control and a wall of six monitors; it provides manual, car-driving-like operation for navigating a rescue robot. The latter consists of a mouse and a camera view displayed on a monitor; it provides semi-autonomous operation, navigating a rescue robot by mouse clicks. Results of experiments show that a novice volunteer can skillfully navigate a tank rescue robot through either interface after 20 to 30 minutes of learning its operation. The steering wheel interface offers high navigation speed in open areas, without restriction from the terrain and surface conditions of a disaster site. The mouse-screen interface is good at exact navigation in complex structures, while bringing little tension to operators. The two interfaces can be switched into each other at any time to provide a combined, efficient navigation method.

  19. Learning of Temporal and Spatial Movement Aspects: A Comparison of Four Types of Haptic Control and Concurrent Visual Feedback.

    Science.gov (United States)

    Rauter, Georg; Sigrist, Roland; Riener, Robert; Wolf, Peter

    2015-01-01

    In the literature, the effectiveness of haptics for motor learning is controversially discussed. Haptics is believed to be effective for motor learning in general; however, different types of haptic control enhance different movement aspects. Thus, depending on the movement aspects of interest, one type of haptic control may be effective whereas another one is not. Therefore, in the current work, it was investigated if and how different types of haptic controllers affect learning of spatial and temporal movement aspects. In particular, haptic controllers that enforce active participation of the participants were expected to improve spatial aspects. Only haptic controllers that provide feedback about the task's velocity profile were expected to improve temporal aspects. In a study on learning a complex trunk-arm rowing task, the effect of training with four different types of haptic control was investigated: position control, path control, adaptive path control, and reactive path control. A fifth group (control) trained with visual concurrent augmented feedback. As hypothesized, the position controller was most effective for learning of temporal movement aspects, while the path controller was most effective in teaching spatial movement aspects of the rowing task. Visual feedback was also effective for learning temporal and spatial movement aspects.

  20. The Effects of Task Clarification, Visual Prompts, and Graphic Feedback on Customer Greeting and Up-Selling in a Restaurant

    Science.gov (United States)

    Squires, James; Wilder, David A.; Fixsen, Amanda; Hess, Erica; Rost, Kristen; Curran, Ryan; Zonneveld, Kimberly

    2007-01-01

    An intervention consisting of task clarification, visual prompts, and graphic feedback was evaluated to increase customer greeting and up-selling in a restaurant. A combination multiple baseline and reversal design was used to evaluate intervention effects. Although all interventions improved performance over baseline, the delivery of graphic…

  1. Real-time modulation of visual feedback on human full-body movements in a virtual mirror: development and proof-of-concept

    NARCIS (Netherlands)

    Roosink, M.; Robitaille, N.; McFadyen, B.J.; Hebert, L.J.; Jackson, P.L.; Bouyer, L.J.; Mercier, C.

    2015-01-01

    BACKGROUND: Virtual reality (VR) provides interactive multimodal sensory stimuli and biofeedback, and can be a powerful tool for physical and cognitive rehabilitation. However, existing systems have generally not implemented realistic full-body avatars and/or a scaling of visual movement feedback.

  2. Effect of task-related continuous auditory feedback during learning of tracking motion exercises

    Directory of Open Access Journals (Sweden)

    Rosati Giulio

    2012-10-01

    Full Text Available Abstract Background This paper presents the results of a set of experiments in which we used continuous auditory feedback to augment motor training exercises. This feedback modality is mostly underexploited in current robotic rehabilitation systems, which usually implement only very basic auditory interfaces. Our hypothesis is that properly designed continuous auditory feedback could be used to represent temporal and spatial information that could, in turn, improve performance and motor learning. Methods We implemented three different experiments on healthy subjects, who were asked to track a target on a screen by moving an input device (controller) with their hand. Different visual and auditory feedback modalities were envisaged. The first experiment investigated whether continuous task-related auditory feedback can help improve performance to a greater extent than error-related audio feedback, or visual feedback alone. In the second experiment we used sensory substitution to compare different types of auditory feedback with equivalent visual feedback, in order to find out whether mapping the same information on a different sensory channel (the visual channel) yielded comparable effects with those gained in the first experiment. The final experiment applied a continuously changing visuomotor transformation between the controller and the screen and mapped kinematic information, computed in either coordinate system (controller or video), to the audio channel, in order to investigate which information was more relevant to the user. Results Task-related audio feedback significantly improved performance with respect to visual feedback alone, whilst error-related feedback did not. Secondly, performance in audio tasks was significantly better with respect to the equivalent sensory-substituted visual tasks. Finally, with respect to visual feedback alone, video-task-related sound feedback decreased the tracking error during the learning of a novel
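    As an illustration of the kind of task-related continuous sonification discussed above, the sketch below maps a normalized kinematic quantity (for instance the target's instantaneous speed) to the pitch of a synthesized sine tone. The mapping range, frame rate and choice of quantity are arbitrary examples, not the study's actual design.

      import numpy as np

      def sonify(signal, sample_rate=44100, frame_rate=60, f_min=220.0, f_max=880.0):
          """Map a control signal (one value per visual frame, in [0, 1]) to a
          phase-continuous sine tone whose pitch rises with the signal."""
          samples_per_frame = sample_rate // frame_rate
          phase, chunks = 0.0, []
          for value in np.clip(signal, 0.0, 1.0):
              freq = f_min + value * (f_max - f_min)
              t = np.arange(samples_per_frame)
              chunks.append(np.sin(phase + 2 * np.pi * freq * t / sample_rate))
              phase += 2 * np.pi * freq * samples_per_frame / sample_rate
          return np.concatenate(chunks).astype(np.float32)

      # Example: sonify one second of a slowly oscillating target speed,
      # then write the result to a WAV file or stream it to the audio device.
      speed = 0.5 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, 60))
      tone = sonify(speed)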

  3. Reliability of Visual and Somatosensory Feedback in Skilled Movement: The Role of the Cerebellum.

    Science.gov (United States)

    Mizelle, J C; Oparah, Alexis; Wheaton, Lewis A

    2016-01-01

    The integration of vision and somatosensation is required to allow for accurate motor behavior. While both sensory systems contribute to an understanding of the state of the body through continuous updating and estimation, how the brain processes unreliable sensory information remains to be fully understood in the context of complex action. Using functional brain imaging, we sought to understand the role of the cerebellum in weighting visual and somatosensory feedback by selectively reducing the reliability of each sense individually during a tool use task. We broadly hypothesized upregulated activation of the sensorimotor and cerebellar areas during movement with reduced visual reliability, and upregulated activation of occipital brain areas during movement with reduced somatosensory reliability. As specifically compared to reduced somatosensory reliability, we expected greater activations of ipsilateral sensorimotor cerebellum for intact visual and somatosensory reliability. Further, we expected that ipsilateral posterior cognitive cerebellum would be affected with reduced visual reliability. We observed that reduced visual reliability results in a trend towards the relative consolidation of sensorimotor activation and an expansion of cerebellar activation. In contrast, reduced somatosensory reliability was characterized by the absence of cerebellar activations and a trend towards the increase of right frontal, left parietofrontal activation, and temporo-occipital areas. Our findings highlight the role of the cerebellum for specific aspects of skillful motor performance. This has relevance to understanding basic aspects of brain functions underlying sensorimotor integration, and provides a greater understanding of cerebellar function in tool use motor control.

  4. Real-time vision, tactile cues, and visual form agnosia in pantomimed grasping: removing haptic feedback induces a switch from natural to pantomime-like grasps

    Directory of Open Access Journals (Sweden)

    Robert Leslie Whitwell

    2015-05-01

    Full Text Available Investigators study the kinematics of grasping movements (prehension) under a variety of conditions to probe visuomotor function in normal and brain-damaged individuals. When patient DF, who suffers from visual form agnosia, performs natural grasps, her in-flight hand aperture is scaled to the widths of targets ('grip scaling') that she cannot discriminate amongst. In contrast, when DF's pantomime grasps are based on a memory of a previewed object, her grip scaling is very poor. Her failure on this task has been interpreted as additional support for the dissociation between the use of object vision for action and object vision for perception. Curiously, however, when DF directs her pantomimed grasps towards a displaced imagined copy of a visible object where her fingers make contact with the surface of the table, her grip scaling does not appear to be particularly poor. In the first of two experiments, we revisit this previous work and show that her grip scaling in this real-time pantomime grasping task does not differ from controls, suggesting that terminal tactile feedback from a proxy of the target can maintain DF's grip scaling. In a second experiment with healthy participants, we tested a recent variant of a grasping task in which no tactile feedback is available (i.e. no haptic feedback) by comparing the kinematics of target-directed grasps with and without haptic feedback to those of real-time pantomime grasps without haptic feedback. Compared to natural grasps, removing haptic feedback increased RT, slowed the velocity of the reach, reduced grip aperture, sharpened the slopes relating grip aperture to target width, and reduced the final grip aperture. All of these effects were also observed in the pantomime grasping task. Taken together, these results provide compelling support for the view that removing haptic feedback induces a switch from real-time visual control to one that depends more on visual perception and cognitive supervision.

  5. A 3D Model Based Indoor Navigation System for Hubei Provincial Museum

    Science.gov (United States)

    Xu, W.; Kruminaite, M.; Onrust, B.; Liu, H.; Xiong, Q.; Zlatanova, S.

    2013-11-01

    3D models are more powerful than 2D maps for indoor navigation in a complicated space like Hubei Provincial Museum because they can provide accurate descriptions of the locations of indoor objects (e.g., doors, windows, tables) and context information about these objects. In addition, the 3D model is the navigation environment preferred by users according to the survey. Therefore, a 3D model based indoor navigation system is developed for Hubei Provincial Museum to guide the visitors of the museum. The system consists of three layers: application, web service and navigation, which are built to support the localization, navigation and visualization functions of the system. There are three main strengths of this system: it stores all the data needed in one database and processes most calculations on the web server, which makes the mobile client very lightweight; the network used for navigation is extracted semi-automatically and is renewable; and the graphical user interface (GUI), which is based on a game engine, offers high performance when visualizing the 3D model on a mobile display.

  6. An online visual loop closure detection method for indoor robotic navigation

    Science.gov (United States)

    Erhan, Can; Sariyanidi, Evangelos; Sencan, Onur; Temeltas, Hakan

    2015-01-01

    In this paper, we present an enhanced loop closure method based on image-to-image matching that relies on quantized local Zernike moments. In contradistinction to previous methods, our approach uses additional depth information to extract Zernike moments in a local manner. These moments are used to represent holistic shape information inside the image. The moments in complex space that are extracted from both grayscale and depth images are coarsely quantized. In order to find the similarity between two locations, a nearest neighbour (NN) classification algorithm is performed. Exemplary results and a practical implementation of the method are also given, with the data gathered on a testbed using a Kinect. The method is evaluated on three datasets with different lighting conditions. Additional depth information alongside the actual image increases the detection rate, especially in dark environments. The results demonstrate a successful, high-fidelity online method for visual place recognition as well as for closing navigation loops, which is crucial information for the well-known simultaneous localization and mapping (SLAM) problem. This technique is also practically applicable because of its low computational complexity and its ability to run in real time with high loop-closing accuracy.
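    A schematic version of the image-to-image loop-closure matching described above: a coarse holistic descriptor computed from the grayscale and depth images (block means standing in for the quantized local Zernike moments of the paper) is compared to previously visited places with a nearest-neighbour search. The descriptor, distance threshold and data layout are assumptions for illustration.

      import numpy as np
      from sklearn.neighbors import NearestNeighbors

      def holistic_descriptor(gray, depth, grid=(8, 8)):
          """Block-mean descriptor over intensity and depth (Zernike-moment stand-in)."""
          def block_means(img):
              h, w = img.shape
              gh, gw = grid
              trimmed = img[:h - h % gh, :w - w % gw].astype(float)
              return trimmed.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3)).ravel()
          return np.concatenate([block_means(gray), block_means(depth)])

      class LoopCloser:
          def __init__(self, threshold=5.0):
              self.threshold = threshold
              self.descriptors = []

          def query_and_add(self, gray, depth):
              """Return the index of a matching earlier place (loop closure) or None, then store."""
              d = holistic_descriptor(gray, depth)
              match = None
              if self.descriptors:
                  nn = NearestNeighbors(n_neighbors=1).fit(np.vstack(self.descriptors))
                  dist, idx = nn.kneighbors(d[None, :])
                  if dist[0, 0] < self.threshold:
                      match = int(idx[0, 0])
              self.descriptors.append(d)
              return match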

  7. Analysis of Feedback in after Action Reviews

    Science.gov (United States)

    1987-06-01

    Contents: Introduction; A Perspective on Feedback; Overview of Current Research. ... part of their training program. The AAR is in marked contrast to the critique method of feedback which is often used in military training. The AAR... feedback is task-inherent feedback. Task-inherent feedback refers to human-machine interacting systems, e.g., computers, where in a visual tracking task...

  8. Evaluation of stiffness feedback for hard nodule identification on a phantom silicone model.

    Science.gov (United States)

    Li, Min; Konstantinova, Jelizaveta; Xu, Guanghua; He, Bo; Aminzadeh, Vahid; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar

    2017-01-01

    Haptic information in robotic surgery can significantly improve clinical outcomes and help detect hard soft-tissue inclusions that indicate potential abnormalities. Visual representation of tissue stiffness information is a cost-effective technique. Meanwhile, direct force feedback, although considerably more expensive than visual representation, is an intuitive method of conveying information regarding tissue stiffness to surgeons. In this study, real-time visual stiffness feedback by sliding indentation palpation is proposed, validated, and compared with force feedback involving human subjects. In an experimental tele-manipulation environment, a dynamically updated color map depicting the stiffness of probed soft tissue is presented via a graphical interface. The force feedback is provided, aided by a master haptic device. The haptic device uses data acquired from an F/T sensor attached to the end-effector of a tele-manipulated robot. Hard nodule detection performance is evaluated for 2 modes (force feedback and visual stiffness feedback) of stiffness feedback on an artificial organ containing buried stiff nodules. From this artificial organ, a virtual-environment tissue model is generated based on sliding indentation measurements. Employing this virtual-environment tissue model, we compare the performance of human participants in distinguishing differently sized hard nodules by force feedback and visual stiffness feedback. Results indicate that the proposed distributed visual representation of tissue stiffness can be used effectively for hard nodule identification. The representation can also be used as a sufficient substitute for force feedback in tissue palpation.
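    A minimal sketch of the visual stiffness-feedback idea above: stiffness estimates from sliding-indentation samples (force divided by indentation depth) are accumulated on a grid and rendered as a colour map that would be refreshed as new probe data arrive. Grid size, units, the synthetic data and the rendering choice are illustrative assumptions.

      import numpy as np
      import matplotlib.pyplot as plt

      def stiffness_map(samples, grid_shape=(40, 40), extent=(0.0, 0.1, 0.0, 0.1)):
          """Average stiffness (N/m) per grid cell from (x, y, force_N, indentation_m) samples."""
          x0, x1, y0, y1 = extent
          acc, cnt = np.zeros(grid_shape), np.zeros(grid_shape)
          for x, y, force, depth in samples:
              i = min(int((y - y0) / (y1 - y0) * grid_shape[0]), grid_shape[0] - 1)
              j = min(int((x - x0) / (x1 - x0) * grid_shape[1]), grid_shape[1] - 1)
              acc[i, j] += force / max(depth, 1e-6)
              cnt[i, j] += 1
          return np.where(cnt > 0, acc / np.maximum(cnt, 1), np.nan)

      # Hypothetical probing data: a stiffer inclusion near the centre of a 10 cm square.
      rng = np.random.default_rng(2)
      pts = rng.uniform(0, 0.1, size=(2000, 2))
      stiffness = 300 + 700 * np.exp(-((pts - 0.05) ** 2).sum(axis=1) / 0.0004)
      samples = [(x, y, k * 0.002, 0.002) for (x, y), k in zip(pts, stiffness)]
      plt.imshow(stiffness_map(samples), origin="lower", extent=(0, 0.1, 0, 0.1), cmap="viridis")
      plt.colorbar(label="estimated stiffness (N/m)")
      plt.show()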

  9. Evaluation of stiffness feedback for hard nodule identification on a phantom silicone model.

    Directory of Open Access Journals (Sweden)

    Min Li

    Full Text Available Haptic information in robotic surgery can significantly improve clinical outcomes and help detect hard soft-tissue inclusions that indicate potential abnormalities. Visual representation of tissue stiffness information is a cost-effective technique. Meanwhile, direct force feedback, although considerably more expensive than visual representation, is an intuitive method of conveying information regarding tissue stiffness to surgeons. In this study, real-time visual stiffness feedback by sliding indentation palpation is proposed, validated, and compared with force feedback involving human subjects. In an experimental tele-manipulation environment, a dynamically updated color map depicting the stiffness of probed soft tissue is presented via a graphical interface. The force feedback is provided, aided by a master haptic device. The haptic device uses data acquired from an F/T sensor attached to the end-effector of a tele-manipulated robot. Hard nodule detection performance is evaluated for 2 modes (force feedback and visual stiffness feedback) of stiffness feedback on an artificial organ containing buried stiff nodules. From this artificial organ, a virtual-environment tissue model is generated based on sliding indentation measurements. Employing this virtual-environment tissue model, we compare the performance of human participants in distinguishing differently sized hard nodules by force feedback and visual stiffness feedback. Results indicate that the proposed distributed visual representation of tissue stiffness can be used effectively for hard nodule identification. The representation can also be used as a sufficient substitute for force feedback in tissue palpation.

  10. Comprehension and navigation of networked hypertexts

    NARCIS (Netherlands)

    Blom, Helen; Segers, Eliane; Knoors, Harry; Hermans, Daan; Verhoeven, Ludo

    2018-01-01

    This study aims to investigate secondary school students' reading comprehension and navigation of networked hypertexts with and without a graphic overview compared to linear digital texts. Additionally, it was studied whether prior knowledge, vocabulary, verbal, and visual working memory moderated

  11. Does Top-Down Feedback Modulate the Encoding of Orthographic Representations During Visual-Word Recognition?

    Science.gov (United States)

    Perea, Manuel; Marcet, Ana; Vergara-Martínez, Marta

    2016-09-01

    In masked priming lexical decision experiments, there is a matched-case identity advantage for nonwords, but not for words (e.g., ERTAR-ERTAR). To examine whether this pattern also holds when top-down feedback is minimized, we employed a task that taps prelexical orthographic processes: the masked prime same-different task. For "same" trials, results showed faster response times for targets when preceded by a briefly presented matched-case identity prime than when preceded by a mismatched-case identity prime. Importantly, this advantage was similar in magnitude for nonwords and words. This finding constrains the interplay of bottom-up versus top-down mechanisms in models of visual-word identification.

  12. Adaptive learning in a compartmental model of visual cortex - how feedback enables stable category learning and refinement

    Directory of Open Access Journals (Sweden)

    Georg eLayher

    2014-12-01

    Full Text Available The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards form two separate mammalian categories, but both belong to the category of felines. In other words, tigers and leopards are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in the computational neurosciences. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of (sub-)category representations. We demonstrate the temporal evolution of such learning and show how the approach successfully establishes category and subcategory representations.
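
    The recruitment rule described in this record, in which the mismatch between a category's expected input and the current input either refines the category or triggers a new one, is close in spirit to competitive learning with a novelty threshold. The toy sketch below illustrates that general family of rules; the threshold, learning rate and feature space are assumptions, and it is not the authors' compartmental cortical model.

      # Sketch: competitive category learning with a novelty threshold. Each category
      # stores an "expected input" (weight vector); a small mismatch refines it, a
      # large mismatch recruits a new (sub)category node. Illustrative only.
      import numpy as np

      rng = np.random.default_rng(1)
      THRESHOLD, LR = 0.8, 0.2      # mismatch tolerance and learning rate (assumed)
      categories = []               # list of weight vectors ("expected inputs")

      def present(x):
          if categories:
              dists = [np.linalg.norm(x - w) for w in categories]
              best = int(np.argmin(dists))
              if dists[best] < THRESHOLD:                          # input close to expectation:
                  categories[best] += LR * (x - categories[best])  # refine that category
                  return best
          categories.append(x.copy())                              # large mismatch: recruit node
          return len(categories) - 1

      # Toy usage: two clusters of visual "feature vectors" (e.g. two subcategories).
      prototypes = [np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0, 1.0])]
      for _ in range(100):
          p = prototypes[rng.integers(2)]
          present(p + rng.normal(0, 0.1, 4))
      print(f"recruited {len(categories)} category nodes")         # typically 2 here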

  13. Visual Ecology and the Development of Visually Guided Behavior in the Cuttlefish

    OpenAIRE

    Darmaillacq, Anne-Sophie; Mezrai, Nawel; O'Brien, Caitlin E.; Dickel, Ludovic

    2017-01-01

    International audience; Cuttlefish are highly visual animals, a fact reflected in the large size of their eyes and visual-processing centers of their brain. Adults detect their prey visually, navigate using visual cues such as landmarks or the e-vector of polarized light and display intense visual patterns during mating and agonistic encounters. Although much is known about the visual system in adult cuttlefish, few studies have investigated its development and that of visually-guided behavio...

  14. Electrophysiological correlates of mental navigation in blind and sighted people.

    Science.gov (United States)

    Kober, Silvia Erika; Wood, Guilherme; Kampl, Christiane; Neuper, Christa; Ischebeck, Anja

    2014-10-15

    The aim of the present study was to investigate functional reorganization of the occipital cortex for a mental navigation task in blind people. Eight completely blind adults and eight sighted matched controls performed a mental navigation task, in which they mentally imagined walking along familiar routes of their hometown during a multi-channel EEG measurement. A motor imagery task was used as a control condition. Furthermore, electrophysiological activation patterns during a resting measurement with open and closed eyes were compared between blind and sighted participants. During the resting measurement with open eyes, no differences in EEG power were observed between groups, whereas sighted participants showed higher alpha (8-12 Hz) activity at occipital sites compared to blind participants during an eyes-closed resting condition. During the mental navigation task, blind participants showed a stronger event-related desynchronization in the alpha band over the visual cortex compared to sighted controls, indicating a stronger activation in this brain region in the blind. Furthermore, groups showed differences in functional brain connectivity between fronto-central and parietal-occipital brain networks during mental navigation, indicating stronger visuo-spatial processing in sighted than in blind people during mental navigation. Differences in electrophysiological parameters between groups were specific for mental navigation since no group differences were observed during motor imagery. These results indicate that in the absence of vision the visual cortex takes over other functions such as spatial navigation. Copyright © 2014 Elsevier B.V. All rights reserved.
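
    The "event-related desynchronization" reported above is conventionally expressed as a relative change in band power between a rest interval and the task interval. The snippet below shows that standard computation under one common sign convention; it is a generic illustration, not the authors' analysis pipeline.

      # Sketch: alpha-band event-related desynchronization, using the common convention
      # ERD% = (A - R) / R * 100, with R the band power at rest and A during the task.
      # Negative values indicate desynchronization (a power decrease). Illustrative only.
      import numpy as np

      def band_power(signal, fs, lo=8.0, hi=12.0):
          """Mean power of `signal` in the [lo, hi] Hz band from an FFT periodogram."""
          freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
          psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
          mask = (freqs >= lo) & (freqs <= hi)
          return psd[mask].mean()

      def erd_percent(rest_epoch, task_epoch, fs):
          r = band_power(rest_epoch, fs)
          a = band_power(task_epoch, fs)
          return (a - r) / r * 100.0

      # Toy usage with synthetic data: alpha amplitude halved during the "task".
      fs = 250
      t = np.arange(0, 2, 1 / fs)
      rest = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
      task = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
      print(f"ERD = {erd_percent(rest, task, fs):.1f} %")          # strongly negative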

  15. How to find home backwards? Navigation during rearward homing of Cataglyphis fortis desert ants.

    Science.gov (United States)

    Pfeffer, Sarah E; Wittlinger, Matthias

    2016-07-15

    Cataglyphis ants are renowned for their impressive navigation skills, which have been studied in numerous experiments during forward locomotion. However, the ants' navigational performance during backward homing when dragging large food loads has not been investigated until now. During backward locomotion, the odometer has to deal with unsteady motion and irregularities in inter-leg coordination. The legs' sensory feedback during backward walking is not just a simple reversal of the forward stepping movements: compared with forward homing, ants face in the opposite direction during backward dragging. Hence, the compass system has to cope with a flipped celestial view (in terms of the polarization pattern and the position of the sun) and an inverted retinotopic image of the visual panorama and landmark environment. The same is true for wind and olfactory cues. In this study we analyze for the first time backward-homing ants and evaluate their navigational performance in channel and open field experiments. Backward-homing Cataglyphis fortis desert ants show remarkable similarities in the performance of homing compared with forward-walking ants. Despite the numerous challenges emerging for the navigational system during backward walking, we show that ants perform quite well in our experiments. Direction and distance gauging was comparable to that of the forward-walking control groups. Interestingly, we found that backward-homing ants often put down the food item and performed foodless search loops around the food item they had left. These search loops were mainly centred around the drop-off position (and not around the nest position), and increased in length the closer the ants came to their fictive nest site. © 2016. Published by The Company of Biologists Ltd.

  16. A Systematic Review of the Literature on Parenting of Young Children with Visual Impairments and the Adaptions for Video-Feedback Intervention to Promote Positive Parenting (VIPP)

    NARCIS (Netherlands)

    van den Broek, Ellen G. C.; van Eijden, Ans J P M; Overbeek, Mathilde M.; Kef, Sabina; Sterkenburg, Paula S.; Schuengel, Carlo

    Secure parent-child attachment may help children to overcome the challenges of growing up with a visual or visual-and-intellectual impairment. A large literature exists that provides a blueprint for interventions that promote parental sensitivity and secure attachment. The Video-feedback

  17. Mapping, Navigation, and Learning for Off-Road Traversal

    DEFF Research Database (Denmark)

    Konolige, Kurt; Agrawal, Motilal; Blas, Morten Rufus

    2009-01-01

    The challenge in the DARPA Learning Applied to Ground Robots (LAGR) project is to autonomously navigate a small robot using stereo vision as the main sensor. During this project, we demonstrated a complete autonomous system for off-road navigation in unstructured environments, using stereo vision, online terrain traversability learning, visual odometry, map registration, planning, and control. At the end of 3 years, the system we developed outperformed all nine other teams in final blind tests over previously unseen terrain.

  18. Muscle involvement during intermittent contraction patterns with different target force feedback modes

    DEFF Research Database (Denmark)

    Sjøgaard, G; Jørgensen, L V; Ekner, D

    2000-01-01

    ... and following 30 min of intermittent contractions showed larger fatigue development with proprioceptive feedback than with visual feedback. Rating of perceived exertion also increased more during proprioceptive feedback than during visual feedback. This may in part be explained by small differences in the mechanics during... Feedback mode significantly affects the muscle involvement and fatigue during intermittent contractions. Relevance: Intermittent contractions are common in many work places and various feedback modes are being given regarding work requirements. The choice of feedback may significantly affect the muscle load... and consequently the development of muscle fatigue and disorders.

  19. Keeping Pace with Your Eating: Visual Feedback Affects Eating Rate in Humans.

    Directory of Open Access Journals (Sweden)

    Laura L Wilkinson

    Full Text Available Deliberately eating at a slower pace promotes satiation and eating quickly has been associated with a higher body mass index. Therefore, understanding factors that affect eating rate should be given high priority. Eating rate is affected by the physical/textural properties of a food, by motivational state, and by portion size and palatability. This study explored the prospect that eating rate is also influenced by a hitherto unexplored cognitive process that uses ongoing perceptual estimates of the volume of food remaining in a container to adjust intake during a meal. A 2 (amount seen; 300 ml or 500 ml) x 2 (amount eaten; 300 ml or 500 ml) between-subjects design was employed (10 participants in each condition). In two 'congruent' conditions, the same amount was seen at the outset and then subsequently consumed (300 ml or 500 ml). To dissociate visual feedback of portion size and actual amount consumed, food was covertly added or removed from a bowl using a peristaltic pump. This created two additional 'incongruent' conditions, in which 300 ml was seen but 500 ml was eaten or vice versa. We repeated these conditions using a savoury soup and a sweet dessert. Eating rate (ml per second) was assessed during lunch. After lunch we assessed fullness over a 60-minute period. In the congruent conditions, eating rate was unaffected by the actual volume of food that was consumed (300 ml or 500 ml). By contrast, we observed a marked difference across the incongruent conditions. Specifically, participants who saw 300 ml but actually consumed 500 ml ate at a faster rate than participants who saw 500 ml but actually consumed 300 ml. Participants were unaware that their portion size had been manipulated. Nevertheless, when it disappeared faster or slower than anticipated they adjusted their rate of eating accordingly. This suggests that the control of eating rate involves visual feedback and is not a simple reflexive response to orosensory stimulation.

  20. Keeping Pace with Your Eating: Visual Feedback Affects Eating Rate in Humans.

    Science.gov (United States)

    Wilkinson, Laura L; Ferriday, Danielle; Bosworth, Matthew L; Godinot, Nicolas; Martin, Nathalie; Rogers, Peter J; Brunstrom, Jeffrey M

    2016-01-01

    Deliberately eating at a slower pace promotes satiation and eating quickly has been associated with a higher body mass index. Therefore, understanding factors that affect eating rate should be given high priority. Eating rate is affected by the physical/textural properties of a food, by motivational state, and by portion size and palatability. This study explored the prospect that eating rate is also influenced by a hitherto unexplored cognitive process that uses ongoing perceptual estimates of the volume of food remaining in a container to adjust intake during a meal. A 2 (amount seen; 300 ml or 500 ml) x 2 (amount eaten; 300 ml or 500 ml) between-subjects design was employed (10 participants in each condition). In two 'congruent' conditions, the same amount was seen at the outset and then subsequently consumed (300 ml or 500 ml). To dissociate visual feedback of portion size and actual amount consumed, food was covertly added or removed from a bowl using a peristaltic pump. This created two additional 'incongruent' conditions, in which 300 ml was seen but 500 ml was eaten or vice versa. We repeated these conditions using a savoury soup and a sweet dessert. Eating rate (ml per second) was assessed during lunch. After lunch we assessed fullness over a 60-minute period. In the congruent conditions, eating rate was unaffected by the actual volume of food that was consumed (300 ml or 500 ml). By contrast, we observed a marked difference across the incongruent conditions. Specifically, participants who saw 300 ml but actually consumed 500 ml ate at a faster rate than participants who saw 500 ml but actually consumed 300 ml. Participants were unaware that their portion size had been manipulated. Nevertheless, when it disappeared faster or slower than anticipated they adjusted their rate of eating accordingly. This suggests that the control of eating rate involves visual feedback and is not a simple reflexive response to orosensory stimulation.

  1. The effects of spatially displaced visual feedback on remote manipulator performance

    Science.gov (United States)

    Smith, Randy L.; Stuart, Mark A.

    1993-01-01

    The results of this evaluation have important implications for the arrangement of remote manipulation worksites and the design of workstations for telerobot operations. This study clearly illustrates the deleterious effects that can accompany the performance of remote manipulator tasks when viewing conditions are less than optimal. Future evaluations should emphasize telerobot camera locations and the use of image/graphical enhancement techniques in an attempt to lessen the adverse effects of displaced visual feedback. An important finding in this evaluation is the extent to which results from previously performed direct manipulation studies can be generalized to remote manipulation studies. Even though the results obtained were very similar to those of the direct manipulation evaluations, there were differences as well. This evaluation has demonstrated that generalizations to remote manipulation applications based upon the results of direct manipulation studies are quite useful, but they should be made cautiously.

  2. Neurosurgical simulation and navigation with three-dimensional computer graphics.

    Science.gov (United States)

    Hayashi, N; Endo, S; Shibata, T; Ikeda, H; Takaku, A

    1999-01-01

    We developed a pre-operative simulation and intra-operative navigation system with three-dimensional computer graphics (3D-CG). Because the 3D-CG created by the present system enables visualization of lesions via semitransparent imaging of the scalp surface and brain, the expected operative field could be visualized on the computer display pre-operatively. We used two navigators of different configuration. One is assembled from an arciform arm and a laser pointer. The arciform arm consists of 3 joints mounted with rotary encoders forming an iso-center system. The distal end of the arm has a laser pointer, which has a CCD for measurement of the distance between the outlet of the laser beam and the position illuminated by the laser pointer. Using this navigator, surgeons could accurately estimate the trajectory to the target lesion and the boundaries of the lesion. Because the other navigator has six degrees of freedom and an interchangeable probe shaped like a bayonet on its tip, it can be used in deep structures through narrow openings. Our system proved efficient and yielded an unobstructed view of deep structures during microscopic neurosurgical procedures.

  3. Visual narratives : free-hand sketch for visual search and navigation of video.

    OpenAIRE

    James, Stuart

    2016-01-01

    Humans have an innate ability to communicate visually; the earliest forms of communication were cave drawings, and children can communicate visual descriptions of scenes through drawings well before they can write. Drawings and sketches offer an intuitive and efficient means for communicating visual concepts. Today, society faces a deluge of digital visual content driven by a surge in the generation of video on social media and the online availability of video archives. Mobile devices are...

  4. Neural Circuit to Integrate Opposing Motions in the Visual Field.

    Science.gov (United States)

    Mauss, Alex S; Pankova, Katarina; Arenz, Alexander; Nern, Aljoscha; Rubin, Gerald M; Borst, Alexander

    2015-07-16

    When navigating in their environment, animals use visual motion cues as feedback signals that are elicited by their own motion. Such signals are provided by wide-field neurons sampling motion directions at multiple image points as the animal maneuvers. Each one of these neurons responds selectively to a specific optic flow-field representing the spatial distribution of motion vectors on the retina. Here, we describe the discovery of a group of local, inhibitory interneurons in the fruit fly Drosophila key for filtering these cues. Using anatomy, molecular characterization, activity manipulation, and physiological recordings, we demonstrate that these interneurons convey direction-selective inhibition to wide-field neurons with opposite preferred direction and provide evidence for how their connectivity enables the computation required for integrating opposing motions. Our results indicate that, rather than sharpening directional selectivity per se, these circuit elements reduce noise by eliminating non-specific responses to complex visual information. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Evaluation of multimodal feedback effects on improving rowing competencies

    Directory of Open Access Journals (Sweden)

    Korman Maria

    2011-12-01

    Full Text Available This study focused on the selection and preliminary evaluation of different types of modal and information feedback in a virtual environment to facilitate acquisition and transfer of the complex motor-cognitive skill of rowing. Specifically, we addressed the effectiveness of immediate information feedback provided visually, as compared to sensory haptic feedback, on the improvement in hand kinematics and changes in cognitive load during the course of learning the basic rowing technique. Several pilot experiments described in this report led to the evaluation and optimization of the training protocol, to enhance the facilitatory effects of adding visual and haptic feedback during training.

  6. Vibrotactile Feedback for Brain-Computer Interface Operation

    OpenAIRE

    Cincotti, Febo; Kauhanen, Laura; Aloise, Fabio; Palomäki, Tapio; Caporusso, Nicholas; Jylänki, Pasi; Mattia, Donatella; Babiloni, Fabio; Vanacker, Gerolf; Nuttin, Marnix; Marciani, Maria Grazia; Millán, José del R.

    2007-01-01

    To be correctly mastered, brain-computer interfaces (BCIs) need an uninterrupted flow of feedback to the user. This feedback is usually delivered through the visual channel. Our aim was to explore the benefits of vibrotactile feedback during users' training and control of EEG-based BCI applications. A protocol for delivering vibrotactile feedback, including specific hardware and software arrangements, was specified. In three studies with 33 subjects (i...

  7. Simulating Navigation with Virtual 3d Geovisualizations - a Focus on Memory Related Factors

    Science.gov (United States)

    Lokka, I.; Çöltekin, A.

    2016-06-01

    The use of virtual environments (VE) for navigation-related studies, such as spatial cognition and path retrieval, has been widely adopted in cognitive psychology and related fields. What motivates the use of VEs for such studies is that, as opposed to the real world, we can control the confounding variables in simulated VEs. When simulating a geographic environment as a virtual world with the intention to train navigational memory in humans, an effective and efficient visual design is important to facilitate the amount of recall. However, it is not yet clear what amount of information should be included in such visual designs intended to facilitate remembering: there can be too little or too much of it. Besides the amount of information or level of detail, the types of visual features ('elements' in a visual scene) that should be included in the representations to create memorable scenes and paths must be defined. We analyzed the literature in cognitive psychology, geovisualization and information visualization, and identified the key factors for studying and evaluating geovisualization designs for their function to support and strengthen human navigational memory. The key factors we identified are: i) the individual abilities and age of the users, ii) the level of realism (LOR) included in the representations and iii) the context in which the navigation is performed, that is, specific tasks within a case scenario. Here we present a concise literature review and our conceptual development for follow-up experiments.

  8. A Sensor-Based Visual Effect Evaluation of Chevron Alignment Signs’ Colors on Drivers through the Curves in Snow and Ice Environment

    Directory of Open Access Journals (Sweden)

    Wei Zhao

    2017-01-01

    Full Text Available The ability to quantitatively evaluate the visual feedback of drivers has been considered a primary research need for reducing crashes in snow and ice (SI) environments. Different colored Chevron alignment signs cause diverse visual effects. However, the effect of Chevrons on visual feedback and on the driving reaction while navigating curves in SI environments has not been adequately evaluated. The objective of this study is twofold: (1) an effective and long-term experiment was designed and developed to test the effect of colored Chevrons on drivers' vision and vehicle speed; (2) a new quantitative effect evaluation model is employed to measure the effect of different colors of the Chevrons. Fixation duration and pupil size were used to describe the driver's visual response, and Cohen's d was used to evaluate the colors' psychological effect on drivers. The results showed the following: (1) after choosing the proper color for Chevrons, drivers reduced the speed of the vehicle while approaching the curves. (2) It was easier for drivers to identify the road alignment after setting the Chevrons. (3) Cohen's d values related to different colors of Chevrons show different effect sizes. The conclusions provide useful references for freeway warning products and the design of intelligent vehicles.
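
    Cohen's d, used above to quantify the psychological effect of each Chevron colour, is a standardized mean difference. The sketch below shows the common pooled-standard-deviation form; which exact variant the authors computed is an assumption, and the input numbers are invented for illustration.

      # Sketch: Cohen's d with a pooled standard deviation (the most common variant).
      import math

      def cohens_d(sample_a, sample_b):
          na, nb = len(sample_a), len(sample_b)
          mean_a = sum(sample_a) / na
          mean_b = sum(sample_b) / nb
          var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
          var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
          pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
          return (mean_a - mean_b) / pooled_sd

      # Toy usage: hypothetical fixation durations (ms) under two different sign colours.
      color_a = [310, 295, 330, 320, 305]
      color_b = [270, 280, 260, 275, 265]
      print(round(cohens_d(color_a, color_b), 2))   # |d| >= 0.8 is conventionally "large"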

  9. Project Management Using Modern Guidance, Navigation and Control Theory

    Science.gov (United States)

    Hill, Terry

    2010-01-01

    The idea of control theory and its application to project management is not new; however, literature on the topic and real-world applications is not as readily available and comprehensive in how all the principles of Guidance, Navigation and Control (GN&C) apply. This paper will address how the fundamental principles of modern GN&C theory have been applied to NASA's Constellation Space Suit project and the resulting ability to manage the project within cost, schedule and budget. As with physical systems, projects can be modeled and managed with the same guiding principles of GN&C as if they were a complex vehicle, system or software with time-varying processes, at times non-linear responses, multiple data inputs of varying accuracy and a range of operating points. With such systems the classic approach could be applied to small and well-defined projects; however, with larger, multi-year projects involving multiple organizational structures, external influences and a multitude of diverse resources, modern control theory is required to model and control the project. The fundamental principles of GN&C state that a system is comprised of these basic core concepts: State, Behavior, Control system, Navigation system, Guidance and Planning Logic, Feedback systems. The state of a system is a definition of the aspects of the dynamics of the system that can change, such as position, velocity, acceleration, coordinate-based attitude, temperature, etc. The behavior of the system is more a question of what changes are possible rather than what can change, which is captured in the state of the system. The behavior of a system is captured in the system modeling and, if properly done, will aid in accurate prediction of future system performance. The Control system uses the state and behavior of the system and the feedback systems to adjust the control inputs into the system. The Navigation system takes the multiple data inputs and based upon a priori knowledge of the input
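
    The paper's mapping of GN&C concepts onto project management can be made concrete with a toy closed-loop update: guidance supplies the plan, "navigation" measures actual progress, and a control action adjusts effort for the next period. The sketch below is purely an illustration of that analogy under assumed gains and noise, not the model described in the paper.

      # Sketch: the GN&C analogy as a discrete feedback loop. Guidance = planned work,
      # navigation = measured completed work, control = proportional adjustment of
      # effort based on the schedule variance. All numbers are assumptions.
      import random

      PLAN_RATE = 10.0      # planned work units per period (guidance)
      KP = 0.05             # proportional gain on schedule variance (control law)

      random.seed(1)
      planned = completed = 0.0
      effort = 1.0          # control input: staffing/effort multiplier
      for period in range(1, 13):
          planned += PLAN_RATE
          completed += effort * PLAN_RATE * random.uniform(0.7, 1.0)  # noisy "plant"
          variance = planned - completed            # navigation measurement vs. guidance
          effort += KP * variance                   # feedback action for the next period
          print(f"period {period:2d}: planned {planned:5.1f}  "
                f"done {completed:5.1f}  effort {effort:4.2f}")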

  10. Cortical feedback signals generalise across different spatial frequencies of feedforward inputs.

    Science.gov (United States)

    Revina, Yulia; Petro, Lucy S; Muckli, Lars

    2017-09-22

    Visual processing in cortex relies on feedback projections contextualising feedforward information flow. Primary visual cortex (V1) has small receptive fields and processes feedforward information at a fine-grained spatial scale, whereas higher visual areas have larger, spatially invariant receptive fields. Therefore, feedback could provide coarse information about the global scene structure or alternatively recover fine-grained structure by targeting small receptive fields in V1. We tested if feedback signals generalise across different spatial frequencies of feedforward inputs, or if they are tuned to the spatial scale of the visual scene. Using a partial occlusion paradigm, functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) we investigated whether feedback to V1 contains coarse or fine-grained information by manipulating the spatial frequency of the scene surround outside an occluded image portion. We show that feedback transmits both coarse and fine-grained information as it carries information about both low (LSF) and high spatial frequencies (HSF). Further, feedback signals containing LSF information are similar to feedback signals containing HSF information, even without a large overlap in spatial frequency bands of the HSF and LSF scenes. Lastly, we found that feedback carries similar information about the spatial frequency band across different scenes. We conclude that cortical feedback signals contain information which generalises across different spatial frequencies of feedforward inputs. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  11. Large-Scale Context-Aware Volume Navigation using Dynamic Insets

    KAUST Repository

    Al-Awami, Ali

    2012-07-01

    Latest developments in electron microscopy (EM) technology produce high resolution images that enable neuro-scientists to identify and put together the complex neural connections in a nervous system. However, because of the massive size and underlying complexity of this kind of data, processing, navigation and analysis suffer drastically in terms of time and effort. In this work, we propose the use of state-of-the-art navigation techniques, such as dynamic insets, built on a peta-scale volume visualization framework to provide focus and context-awareness to help neuro-scientists in their mission to analyze, reconstruct, navigate and explore EM neuroscience data.

  12. Object Persistence Enhances Spatial Navigation: A Case Study in Smartphone Vision Science.

    Science.gov (United States)

    Liverence, Brandon M; Scholl, Brian J

    2015-07-01

    Violations of spatiotemporal continuity disrupt performance in many tasks involving attention and working memory, but experiments on this topic have been limited to the study of moment-by-moment on-line perception, typically assessed by passive monitoring tasks. We tested whether persisting object representations also serve as underlying units of longer-term memory and active spatial navigation, using a novel paradigm inspired by the visual interfaces common to many smartphones. Participants used key presses to navigate through simple visual environments consisting of grids of icons (depicting real-world objects), only one of which was visible at a time through a static virtual window. Participants found target icons faster when navigation involved persistence cues (via sliding animations) than when persistence was disrupted (e.g., via temporally matched fading animations), with all transitions inspired by smartphone interfaces. Moreover, this difference occurred even after explicit memorization of the relevant information, which demonstrates that object persistence enhances spatial navigation in an automatic and irresistible fashion. © The Author(s) 2015.

  13. Does Narrative Feedback Enhance Children's Motor Learning in a Virtual Environment?

    Science.gov (United States)

    Levac, Danielle E; Lu, Amy S

    2018-04-30

    Augmented feedback has motivational and informational functions in motor learning, and is a key feature of practice in a virtual environment (VE). This study evaluated the impact of narrative (story-based) feedback as compared to standard feedback during practice of a novel task in a VE on typically developing children's motor learning, motivation and engagement. Thirty-eight children practiced navigating through a virtual path, receiving narrative or non-narrative feedback following each trial. All participants improved their performance on retention but not transfer, with no significant differences between groups. Self-reported engagement was associated with acquisition, retention and transfer for both groups. A narrative approach to feedback delivery did not offer an additive benefit; additional affective advantages of augmented feedback for motor learning in VEs should be explored.

  14. Noisy visual feedback training impairs detection of self-generated movement error: implications for anosognosia for hemiplegia

    Directory of Open Access Journals (Sweden)

    Catherine ePreston

    2014-06-01

    Full Text Available Anosognosia for hemiplegia (AHP) is characterised as a disorder in which patients are unaware of their contralateral motor deficit. Many current theories for unawareness in AHP are based on comparator model accounts of the normal experience of agency. According to such models, while small mismatches between predicted and actual feedback allow unconscious fine-tuning of normal actions, mismatches that surpass an inherent threshold reach conscious awareness and inform judgements of agency (whether a given movement is produced by the self or another agent). This theory depends on a threshold for consciousness that is greater than the intrinsic noise in the system to reduce the occurrence of incorrect rejections of self-generated movements and maintain a fluid experience of agency. Pathological increases to this threshold could account for reduced motor awareness following brain injury, including AHP. The current experiment tested this hypothesis in healthy controls by exposing them to training in which noise was applied to the visual feedback of their normal reaches. Subsequent self/other attribution tasks without noise revealed a decrease in the ability to detect manipulated (other) feedback compared to training without noise. This suggests a slackening of awareness thresholds in the comparator model that may help to explain clinical observations of decreased action awareness following stroke.
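
    At its simplest, the comparator-model account summarised above reduces to a threshold test on the prediction error: feedback is attributed to the self when the mismatch between predicted and observed feedback stays below an awareness threshold, and raising that threshold makes manipulated feedback harder to reject. The toy sketch below illustrates only this thresholding idea; the numbers and the uniform error model are assumptions.

      # Sketch: threshold-based self/other attribution in a comparator model.
      # A larger ("slackened") awareness threshold lets more externally manipulated
      # feedback pass as self-generated. Numbers are illustrative assumptions.
      import random

      def attributed_to_self(predicted_deg, observed_deg, threshold_deg):
          return abs(predicted_deg - observed_deg) < threshold_deg

      random.seed(0)
      manipulations = [random.uniform(5, 15) for _ in range(1000)]   # angular offsets (deg)
      for threshold in (4.0, 12.0):          # normal vs. slackened awareness threshold
          accepted = sum(attributed_to_self(0.0, m, threshold) for m in manipulations)
          print(f"threshold {threshold:4.1f} deg -> "
                f"{100 * accepted / len(manipulations):.1f}% of manipulated trials accepted as self")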

  15. Computer-assisted intraoperative visualization of dental implants. Augmented reality in medicine

    International Nuclear Information System (INIS)

    Ploder, O.; Wagner, A.; Enislidis, G.; Ewers, R.

    1995-01-01

    In this paper, a recently developed computer-based dental implant positioning system with an image-to-tissue interface is presented. On a computer monitor or in a head-up display, planned implant positions and the implant drill are graphically superimposed on the patient's anatomy. Electromagnetic 3D sensors track all skull and jaw movements; their signal feedback to the workstation induces continuous real-time updating of the virtual graphics' position. An experimental study and a clinical case demonstrate the concept of the augmented reality environment - the physician can see the operating field and superimposed virtual structures, such as dental implants and surgical instruments, without losing visual control of the operating field. Therefore, the system allows visualization of the CT-planned implant position and the implementation of important anatomical structures. The presented method for the first time links preoperatively acquired radiologic data, planned implant location and intraoperative navigation assistance for orthotopic positioning of dental implants. (orig.)

  16. MR respiratory navigator echo gated coronary angiography at 3 T

    International Nuclear Information System (INIS)

    Chang Shixin; Wang Yibin; Zong Genlin; Hao Nanxin; Du Yushan

    2007-01-01

    Objective: To investigate the techniques and influencing factors for respiratory navigator echo triggered whole-heart coronary MR angiography (WH-CMRA) and to evaluate its ability to visualize the coronary arteries and the resulting image quality. Methods: Ninety-two volunteers underwent WH-CMRA on a 3 T MR scanner using a respiratory navigator-echo gated TFE sequence. Image quality was visually graded 0-IV according to visual inspection and the average length, diameter and sharpness of the coronary arteries. The correlation between image quality and respiratory pattern, heart rate and navigator efficiency was analyzed. Results: Of the 92 cases, 28 were graded IV, 53 were graded III, 9 were graded II and 2 were graded I. The success rate of the scan was 88% (81/92). Image quality was mainly graded IV when the heart rate was less than 75 beats per minute (bpm), and the sharpness of the vessels was (48±11)%. When the heart rate was more than 75 bpm, the image quality was mostly graded III and the sharpness was (33±15)%. The correlation between heart rate and image quality score was negative (r = -0.726, P < 0.05). Conclusion: The 3 T WH-CMRA technique can facilitate visualization of the whole coronary arteries during free breathing, but image quality depends on heart rate. (authors)

  17. Interação de variáveis biomecânicas na composição de "feedback" visual aumentado para o ensino do ciclismo Interacción de variables biomecánicas en la composición de feedback visual aumentado para el enseñanza del ciclismo Interaction of biomechanical variables in the composition of visual augmented feedback for learning cycling

    Directory of Open Access Journals (Sweden)

    Guilherme Garcia Holderbaum

    2012-12-01

    Full Text Available The aim of this study was to test a methodology for teaching the cycling pedalling technique, using biomechanical variables to develop an augmented visual feedback (AVF) system. Nineteen individuals with no cycling experience took part, divided into an experimental group (n = 10) and a control group (n = 9). A pre-test was first conducted to determine maximal oxygen uptake (VO2max) and the workload to be used in the practice sessions, which corresponded to 60% of VO2max. Seven practice sessions were then carried out. The experimental group received the augmented visual feedback and the control group received augmented feedback (AF). The retention test showed a 21% increase in the mean effectiveness index (EI) of the experimental group compared with the control group. The results showed that biomechanical variables are appropriate for developing augmented visual feedback systems and can contribute to the teaching-learning process of the cycling pedalling technique.

  18. Online visual feedback during error-free channel trials leads to active unlearning of movement dynamics: evidence for adaptation to trajectory prediction errors.

    Directory of Open Access Journals (Sweden)

    Angel Lago-Rodriguez

    2016-09-01

    Full Text Available Prolonged exposure to movement perturbations leads to creation of motor memories which decay towards previous states when the perturbations are removed. However, it remains unclear whether this decay is due only to a spontaneous and passive recovery of the previous state. It has recently been reported that activation of reinforcement-based learning mechanisms delays the onset of the decay. This raises the question whether other motor learning mechanisms may also contribute to the retention and/or decay of the motor memory. Therefore, we aimed to test whether mechanisms of error-based motor adaptation are active during the decay of the motor memory. Forty-five right-handed participants performed point-to-point reaching movements under an external dynamic perturbation. We measured the expression of the motor memory through error-clamped (EC) trials, in which lateral forces constrained movements to a straight line towards the target. We found greater and faster decay of the motor memory for participants who had access to full online visual feedback during these EC trials (Cursor group), when compared with participants who had no EC feedback regarding movement trajectory (Arc group). Importantly, we did not find between-group differences in adaptation to the external perturbation. In addition, we found greater decay of the motor memory when we artificially increased feedback errors through the manipulation of visual feedback (Augmented-Error group). Our results then support the notion of an active decay of the motor memory, suggesting that adaptive mechanisms are involved in correcting for the mismatch between predicted movement trajectories and actual sensory feedback, which leads to greater and faster decay of the motor memory.

  19. Nonholonomic feedback control among moving obstacles

    Science.gov (United States)

    Armstrong, Stephen Gregory

    A feedback controller is developed for navigating a nonholonomic vehicle in an area with multiple stationary and possibly moving obstacles. Among other applications, the developed algorithms can be used for automatic parking of a passenger car in a parking lot with a complex configuration, or of a ground robot in a cluttered environment. Several approaches are explored that combine sliding-mode-based nonholonomic systems control with potential field methods.
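
    The classic potential-field idea that this record combines with sliding-mode control can be sketched in a few lines: an attractive gradient toward the goal plus repulsive gradients from nearby obstacles define a desired heading, and a simple steering law turns a unicycle-type vehicle toward it. The sketch below is a generic textbook construction with assumed gains, not the dissertation's controller.

      # Sketch: potential-field heading for a nonholonomic (unicycle) vehicle with a
      # proportional steering law. Generic construction; gains and geometry assumed.
      import math

      def potential_heading(pos, goal, obstacles, k_att=1.0, k_rep=2.0, influence=3.0):
          fx = k_att * (goal[0] - pos[0])
          fy = k_att * (goal[1] - pos[1])
          for ox, oy in obstacles:
              dx, dy = pos[0] - ox, pos[1] - oy
              d = math.hypot(dx, dy)
              if 1e-6 < d < influence:                                # repel only when close
                  mag = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
                  fx += mag * dx / d
                  fy += mag * dy / d
          return math.atan2(fy, fx)

      def unicycle_step(state, goal, obstacles, v=0.5, k_theta=2.0, dt=0.1):
          x, y, theta = state
          desired = potential_heading((x, y), goal, obstacles)
          err = math.atan2(math.sin(desired - theta), math.cos(desired - theta))
          return (x + v * math.cos(theta) * dt,
                  y + v * math.sin(theta) * dt,
                  theta + k_theta * err * dt)         # steer toward the desired heading

      state, goal, obstacles = (0.0, 0.0, 0.0), (10.0, 5.0), [(4.0, 2.0), (7.0, 4.0)]
      for _ in range(300):
          state = unicycle_step(state, goal, obstacles)
      print(f"final position: ({state[0]:.2f}, {state[1]:.2f})")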

  20. Short structured feedback training is equivalent to a mechanical feedback device in two-rescuer BLS: a randomised simulation study.

    Science.gov (United States)

    Pavo, Noemi; Goliasch, Georg; Nierscher, Franz Josef; Stumpf, Dominik; Haugk, Moritz; Breckwoldt, Jan; Ruetzler, Kurt; Greif, Robert; Fischer, Henrik

    2016-05-13

    Resuscitation guidelines encourage the use of cardiopulmonary resuscitation (CPR) feedback devices implying better outcomes after sudden cardiac arrest. Whether effective continuous feedback could also be given verbally by a second rescuer ("human feedback") has not been investigated yet. We, therefore, compared the effect of human feedback to a CPR feedback device. In an open, prospective, randomised, controlled trial, we compared CPR performance of three groups of medical students in a two-rescuer scenario. Group "sCPR" was taught standard BLS without continuous feedback, serving as control. Group "mfCPR" was taught BLS with mechanical audio-visual feedback (HeartStart MRx with Q-CPR-Technology™). Group "hfCPR" was taught standard BLS with human feedback. Afterwards, 326 medical students performed two-rescuer BLS on a manikin for 8 min. CPR quality parameters, such as "effective compression ratio" (ECR: compressions with correct hand position, depth and complete decompression multiplied by flow-time fraction), and other compression, ventilation and time-related parameters were assessed for all groups. ECR was comparable between the hfCPR and the mfCPR group (0.33 vs. 0.35, p = 0.435). The hfCPR group needed less time until starting chest compressions (2 vs. 8 s). Overall, the quality of CPR achieved after training with human feedback or with a mechanical audio-visual feedback device was similar. Further studies should investigate whether extended human feedback training could further increase CPR quality at comparable costs for training.
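
    The "effective compression ratio" defined in this record is the product of a compression-quality fraction and the flow-time fraction. The short sketch below shows that arithmetic on made-up numbers; the exact scoring rules of the Q-CPR device are not reproduced.

      # Sketch: effective compression ratio (ECR) = fraction of compressions with correct
      # hand position, depth and complete decompression, multiplied by the flow-time
      # fraction. Input numbers are invented for illustration.
      def effective_compression_ratio(correct_compressions, total_compressions,
                                      flow_time_s, scenario_time_s):
          quality_fraction = correct_compressions / total_compressions
          flow_time_fraction = flow_time_s / scenario_time_s
          return quality_fraction * flow_time_fraction

      # Toy usage: 360 of 800 compressions fully correct, 390 s of flow time out of 480 s.
      print(round(effective_compression_ratio(360, 800, 390, 480), 2))   # ~0.37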

  1. Neurocognitive Treatment for a Patient with Alzheimer's Disease Using a Virtual Reality Navigational Environment

    Directory of Open Access Journals (Sweden)

    Paul J.F. White

    2016-01-01

    Full Text Available In this case study, a man at the onset of Alzheimer's disease (AD) was enrolled in a cognitive treatment program based upon spatial navigation in a virtual reality (VR) environment. We trained him to navigate to targets in a symmetric, landmark-less virtual building. Our research goals were to determine whether an individual with AD could learn to navigate in a simple VR navigation (VRN) environment and whether that training could also bring real-life cognitive benefits. The results show that our participant learned to perfectly navigate to desired targets in the VRN environment over the course of the training program. Furthermore, subjective feedback from his primary caregiver (his wife) indicated that his skill at navigating while driving improved noticeably and that he enjoyed cognitive improvement in his daily life at home. These results suggest that VRN treatments might benefit other people with AD.

  2. The Effects of Mirror Feedback during Target Directed Movements on Ipsilateral Corticospinal Excitability

    Directory of Open Access Journals (Sweden)

    Mathew Yarossi

    2017-05-01

    Full Text Available Mirror visual feedback (MVF) training is a promising technique to promote activation in the lesioned hemisphere following stroke, and aid recovery. However, current outcomes of MVF training are mixed, in part, due to variability in the task undertaken during MVF. The present study investigated the hypothesis that movements directed toward visual targets may enhance MVF modulation of motor cortex (M1) excitability ipsilateral to the trained hand compared to movements without visual targets. Ten healthy subjects participated in a 2 × 2 factorial design in which feedback (veridical, mirror) and presence of a visual target (target present, target absent) for a right index-finger flexion task were systematically manipulated in a virtual environment. To measure M1 excitability, transcranial magnetic stimulation (TMS) was applied to the hemisphere ipsilateral to the trained hand to elicit motor evoked potentials (MEPs) in the untrained first dorsal interosseous (FDI) and abductor digiti minimi (ADM) muscles at rest prior to and following each of four 2-min blocks of 30 movements (B1–B4). Targeted movement kinematics without visual feedback was measured before and after training to assess learning and transfer. FDI MEPs were decreased in B1 and B2 when movements were made with veridical feedback and visual targets were absent. FDI MEPs were decreased in B2 and B3 when movements were made with mirror feedback and visual targets were absent. FDI MEPs were increased in B3 when movements were made with mirror feedback and visual targets were present. Significant MEP changes were not present for the uninvolved ADM, suggesting a task-specific effect. Analysis of kinematics revealed learning occurred in visual target-directed conditions, but transfer was not sensitive to mirror feedback. Results are discussed with respect to current theoretical mechanisms underlying MVF-induced changes in ipsilateral excitability.

  3. A virtual reality-based method of decreasing transmission time of visual feedback for a tele-operative robotic catheter operating system.

    Science.gov (United States)

    Guo, Jin; Guo, Shuxiang; Tamiya, Takashi; Hirata, Hideyuki; Ishihara, Hidenori

    2016-03-01

    An Internet-based tele-operative robotic catheter operating system was designed for vascular interventional surgery, to afford unskilled surgeons the opportunity to learn basic catheter/guidewire skills, while allowing experienced physicians to perform surgeries cooperatively. Remote surgical procedures, limited by variable transmission times for visual feedback, have been associated with deterioration in operability and vascular wall damage during surgery. At the patient's location, the catheter shape/position was detected in real time and converted into three-dimensional coordinates in a world coordinate system. At the operation location, the catheter shape was reconstructed in a virtual-reality environment, based on the coordinates received. The data volume reduction significantly reduced visual feedback transmission times. Remote transmission experiments, conducted over inter-country distances, demonstrated the improved performance of the proposed prototype. The maximum error for the catheter shape reconstruction was 0.93 mm and the transmission time was reduced considerably. The results were positive and demonstrate the feasibility of remote surgery using conventional network infrastructures. Copyright © 2015 John Wiley & Sons, Ltd.
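
    The data-reduction idea in this record, transmitting the detected catheter shape as a short list of 3D coordinates and re-rendering it in a local virtual environment instead of streaming video, can be illustrated with a back-of-the-envelope comparison. The resolution, frame rate and point count below are assumptions, not the paper's values.

      # Sketch: payload comparison between streaming raw video frames and streaming a
      # reconstructed 3D catheter shape. All numbers are illustrative assumptions.
      frame_w, frame_h, bytes_per_px, fps = 640, 480, 3, 30
      video_rate = frame_w * frame_h * bytes_per_px * fps             # bytes per second

      n_points, floats_per_point, bytes_per_float = 50, 3, 4          # catheter shape points
      shape_rate = n_points * floats_per_point * bytes_per_float * fps

      print(f"video stream : {video_rate / 1e6:7.2f} MB/s")
      print(f"shape stream : {shape_rate / 1e3:7.2f} kB/s "
            f"({video_rate / shape_rate:.0f}x smaller)")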

  4. Quantifying navigational information: The catchment volumes of panoramic snapshots in outdoor scenes.

    Directory of Open Access Journals (Sweden)

    Trevor Murray

    Full Text Available Panoramic views of natural environments provide visually navigating animals with two kinds of information: they define locations because image differences increase smoothly with distance from a reference location and they provide compass information, because image differences increase smoothly with rotation away from a reference orientation. The range over which a given reference image can provide navigational guidance (its 'catchment area' has to date been quantified from the perspective of walking animals by determining how image differences develop across the ground plane of natural habitats. However, to understand the information available to flying animals there is a need to characterize the 'catchment volumes' within which panoramic snapshots can provide navigational guidance. We used recently developed camera-based methods for constructing 3D models of natural environments and rendered panoramic views at defined locations within these models with the aim of mapping navigational information in three dimensions. We find that in relatively open woodland habitats, catchment volumes are surprisingly large extending for metres depending on the sensitivity of the viewer to image differences. The size and the shape of catchment volumes depend on the distance of visual features in the environment. Catchment volumes are smaller for reference images close to the ground and become larger for reference images at some distance from the ground and in more open environments. Interestingly, catchment volumes become smaller when only above horizon views are used and also when views include a 1 km distant panorama. We discuss the current limitations of mapping navigational information in natural environments and the relevance of our findings for our understanding of visual navigation in animals and autonomous robots.

  5. Quantifying navigational information: The catchment volumes of panoramic snapshots in outdoor scenes.

    Science.gov (United States)

    Murray, Trevor; Zeil, Jochen

    2017-01-01

    Panoramic views of natural environments provide visually navigating animals with two kinds of information: they define locations because image differences increase smoothly with distance from a reference location and they provide compass information, because image differences increase smoothly with rotation away from a reference orientation. The range over which a given reference image can provide navigational guidance (its 'catchment area') has to date been quantified from the perspective of walking animals by determining how image differences develop across the ground plane of natural habitats. However, to understand the information available to flying animals there is a need to characterize the 'catchment volumes' within which panoramic snapshots can provide navigational guidance. We used recently developed camera-based methods for constructing 3D models of natural environments and rendered panoramic views at defined locations within these models with the aim of mapping navigational information in three dimensions. We find that in relatively open woodland habitats, catchment volumes are surprisingly large extending for metres depending on the sensitivity of the viewer to image differences. The size and the shape of catchment volumes depend on the distance of visual features in the environment. Catchment volumes are smaller for reference images close to the ground and become larger for reference images at some distance from the ground and in more open environments. Interestingly, catchment volumes become smaller when only above horizon views are used and also when views include a 1 km distant panorama. We discuss the current limitations of mapping navigational information in natural environments and the relevance of our findings for our understanding of visual navigation in animals and autonomous robots.
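
    Both versions of this record rest on a scalar image-difference function between a reference panoramic snapshot and views rendered at test positions in a 3D model of the habitat. The sketch below uses a root-mean-square pixel difference, one common choice for such catchment analyses; that choice, the fake renderer and the array shapes are assumptions, not necessarily the measure used in the study.

      # Sketch: mapping navigational information as image differences to a reference
      # panoramic snapshot, evaluated at a set of 3D test positions.
      import numpy as np

      def rms_image_difference(view, reference):
          """Scalar dissimilarity between two equal-sized panoramic images."""
          return float(np.sqrt(np.mean((view.astype(float) - reference.astype(float)) ** 2)))

      def catchment_map(render, reference, positions):
          """Evaluate the difference function at each test position; `render(p)` is
          assumed to return a panorama rendered at position p from the 3D model."""
          return {tuple(p): rms_image_difference(render(p), reference) for p in positions}

      # Toy usage with a synthetic "panorama": the difference grows away from the origin.
      rng = np.random.default_rng(0)
      scene = rng.random((32, 128))                      # stand-in reference panorama
      def render(p):                                     # fake renderer: noisier with distance
          return scene + 0.05 * float(np.linalg.norm(p)) * rng.random(scene.shape)

      positions = [(0, 0, 0), (1, 0, 0), (0, 0, 2), (3, 3, 3)]
      for pos, diff in catchment_map(render, scene, positions).items():
          print(pos, round(diff, 3))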

  6. Learning receptive fields using predictive feedback.

    Science.gov (United States)

    Jehee, Janneke F M; Rothkopf, Constantin; Beck, Jeffrey M; Ballard, Dana H

    2006-01-01

    Previously, it was suggested that feedback connections from higher- to lower-level areas carry predictions of lower-level neural activities, whereas feedforward connections carry the residual error between the predictions and the actual lower-level activities [Rao, R.P.N., Ballard, D.H., 1999. Nature Neuroscience 2, 79-87.]. A computational model implementing the hypothesis learned simple cell receptive fields when exposed to natural images. Here, we use predictive feedback to explain tuning properties in medial superior temporal area (MST). We implement the hypothesis using a new, biologically plausible, algorithm based on matching pursuit, which retains all the features of the previous implementation, including its ability to efficiently encode input. When presented with natural images, the model developed receptive field properties as found in primary visual cortex. In addition, when exposed to visual motion input resulting from movements through space, the model learned receptive field properties resembling those in MST. These results corroborate the idea that predictive feedback is a general principle used by the visual system to efficiently encode natural input.
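
    The predictive-feedback idea summarised above, in which feedback carries predictions and feedforward carries the residual error, can be written as a two-line update. The sketch below is a generic linear predictive-coding iteration with assumed sizes and learning rate, not the matching-pursuit implementation the authors describe.

      # Sketch: a generic linear predictive-coding loop. Top-down feedback W @ r predicts
      # the input, the feedforward signal is the residual error, and the latent
      # representation r is nudged to reduce that residual. Illustrative only.
      import numpy as np

      rng = np.random.default_rng(0)
      n_input, n_latent, lr = 64, 16, 0.1

      W = rng.normal(scale=0.1, size=(n_input, n_latent))   # generative (feedback) weights
      x = rng.random(n_input)                                # one "image patch"
      r = np.zeros(n_latent)                                 # higher-area representation

      for step in range(50):
          prediction = W @ r        # feedback: higher area predicts lower-level activity
          error = x - prediction    # feedforward: residual not yet explained
          r += lr * (W.T @ error)   # update the representation to reduce the residual
          if step % 10 == 0:
              print(f"step {step:2d}  residual norm {np.linalg.norm(error):.3f}")

      # In a full model, W itself is also adapted from natural input so that
      # receptive-field-like features emerge, as in the account summarised above.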

  7. Performance Improvement of Inertial Navigation System by Using Magnetometer with Vehicle Dynamic Constraints

    Directory of Open Access Journals (Sweden)

    Daehee Won

    2015-01-01

    Full Text Available A navigation algorithm is proposed to increase the inertial navigation performance of a ground vehicle using magnetic measurements and dynamic constraints. The navigation solutions are estimated based on inertial measurements such as acceleration and angular velocity measurements. To improve the inertial navigation performance, a three-axis magnetometer is used to provide the heading angle, and nonholonomic constraints (NHCs) are introduced to increase the correlation between the velocity and the attitude equation. The NHCs provide a velocity feedback to the attitude, which makes the navigation solution more robust. Additionally, an acceleration-based roll and pitch estimation is applied to decrease the drift when the acceleration is within certain boundaries. The magnetometer and NHCs are combined with an extended Kalman filter. An experimental test was conducted to verify the proposed method, and a comprehensive analysis of the performance in terms of the position, velocity, and attitude showed that the navigation performance could be improved by using the magnetometer and NHCs. Moreover, the proposed method could improve the estimation performance for the position, velocity, and attitude without any additional hardware except an inertial sensor and magnetometer. Therefore, this method would be effective for ground vehicles, indoor navigation, mobile robots, vehicle navigation in urban canyons, or navigation in any global navigation satellite system-denied environment.
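
    To make the two aiding ideas in this record concrete, the sketch below shows (a) a tilt-compensated magnetometer heading and (b) the nonholonomic-constraint pseudo-measurement, namely that the lateral and vertical body-frame velocities of a wheeled vehicle are close to zero. A full filter would fuse both as extra EKF measurement updates; the frame conventions, signs and numbers here are assumptions that depend on sensor mounting.

      # Sketch: magnetometer heading and NHC pseudo-measurement in isolation (no EKF).
      import numpy as np

      def tilt_compensated_heading(mag_xyz, roll, pitch):
          """Heading (rad) from body-frame magnetometer readings, one common NED form."""
          mx, my, mz = mag_xyz
          xh = (mx * np.cos(pitch) + my * np.sin(roll) * np.sin(pitch)
                + mz * np.cos(roll) * np.sin(pitch))
          yh = my * np.cos(roll) - mz * np.sin(roll)
          return np.arctan2(-yh, xh)

      def nhc_innovation(v_nav, C_nav_to_body):
          """NHC residual: lateral and vertical body-frame velocity minus the assumed zero."""
          v_body = C_nav_to_body @ v_nav
          return v_body[1:3]        # ~0 for a non-slipping, non-jumping ground vehicle

      # Toy usage: a vehicle driving along its own x-axis while yawed 30 degrees in the
      # navigation frame gives an NHC residual near zero, as expected.
      yaw = np.radians(30.0)
      C_body_to_nav = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                                [np.sin(yaw),  np.cos(yaw), 0.0],
                                [0.0,          0.0,         1.0]])
      v_nav = C_body_to_nav @ np.array([5.0, 0.0, 0.0])       # 5 m/s straight ahead
      print(nhc_innovation(v_nav, C_body_to_nav.T))            # approx. [0, 0]
      print(np.degrees(tilt_compensated_heading((0.3, -0.1, 0.4), 0.0, 0.0)))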

  8. Interactive navigation and bronchial tube tracking in virtual bronchoscopy.

    Science.gov (United States)

    Heng, P A; Fung, P F; Wong, T T; Siu, Y H; Sun, H

    1999-01-01

    An interactive virtual environment for simulation of bronchoscopy is developed. Medical doctors can safely plan their surgical bronchoscopy using the virtual environment without any invasive diagnosis that may put the patient's health at risk. The 3D pen input device of the system allows the doctor to navigate and visualize the bronchial tree of the patient naturally and interactively. To navigate the patient's bronchial tree, a vessel tracking process is required. While manual tracking is tedious and labor-intensive, fully automatic tracking may not be reliable. We propose a semi-automatic tracking technique called Intelligent Path Tracker, which provides automation and enough user control during the vessel tracking. To support an interactive frame rate, we also introduce a new volume rendering acceleration technique, named IsoRegion Leaping. The volume rendering is further accelerated by distributed rendering on a TCP/IP-based network of low-cost PCs. With these approaches, a 256 x 256 x 256 volume dataset of a human lung can be navigated and visualized at a frame rate of over 10 Hz in our virtual bronchoscopy system.

  9. 3D photo mosaicing of Tagiri shallow vent field by an autonomous underwater vehicle (3rd report) - Mosaicing method based on navigation data and visual features -

    Science.gov (United States)

    Maki, Toshihiro; Ura, Tamaki; Singh, Hanumant; Sakamaki, Takashi

    Large-area seafloor imaging will bring significant benefits to various fields such as academia, resource surveying, marine development, security, and search-and-rescue. The authors have proposed a navigation method of an autonomous underwater vehicle for seafloor imaging, and verified its performance by mapping tubeworm colonies over an area of 3,000 square meters using the AUV Tri-Dog 1 at the Tagiri vent field, Kagoshima bay in Japan (Maki et al., 2008, 2009). This paper proposes a post-processing method to build a natural photo mosaic from a number of pictures taken by an underwater platform. The method first corrects each image for lens distortion and for variations in color and lighting, and then performs ortho-rectification based on the camera pose and seafloor geometry estimated from navigation data. The image alignment is based on both navigation data and visual features, implemented as an extension of the image-based method (Pizarro et al., 2003). Using the two types of information yields an image alignment that is consistent both globally and locally, and also makes the method applicable to data sets with few visual features. The method was evaluated using a data set obtained by the AUV Tri-Dog 1 at the vent field in Sep. 2009. A seamless, uniformly illuminated photo mosaic covering an area of around 500 square meters was created from 391 pictures, capturing unique features of the field such as bacteria mats and tubeworm colonies.
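
    The purely visual half of such a pairwise alignment can be sketched with standard feature matching and a robust homography fit; the navigation-based ortho-rectification and the global consistency step described above are not shown. The OpenCV-based sketch below is illustrative only, and the choice of ORB features and the RANSAC threshold are assumptions.

```python
import cv2
import numpy as np

def align_pair(img_ref, img_new):
    """Estimate a homography that maps img_new onto img_ref from matched local features."""
    orb = cv2.ORB_create(2000)
    kp_ref, des_ref = orb.detectAndCompute(img_ref, None)
    kp_new, des_new = orb.detectAndCompute(img_new, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_ref, des_new)
    src = np.float32([kp_new[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # robust to mismatched features
    return H
```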

  10. Duration reproduction with sensory feedback delay: Differential involvement of perception and action time

    Directory of Open Access Journals (Sweden)

    Stephanie eGanzenmüller

    2012-10-01

    Full Text Available Previous research has shown that voluntary action can attract subsequent, delayed feedback events towards the action, and adaptation to the sensorimotor delay can even reverse motor-sensory temporal-order judgments. However, whether and how sensorimotor delay affects duration reproduction is still unclear. To investigate this, we injected an onset- or offset-delay into the sensory feedback signal from a duration reproduction task. We compared duration reproductions within modalities (visual, auditory) and across audiovisual modalities under feedback-signal onset- and offset-delay manipulations. We found that the reproduced duration was lengthened in both visual and auditory feedback-signal onset-delay conditions. The lengthening effect was evident immediately, on the first trial with the onset delay. However, when the onset of the feedback signal preceded the action, the lengthening effect was diminished. In contrast, a shortening effect was found with feedback-signal offset-delay, though the effect was weaker and manifested only in the auditory offset-delay condition. These findings indicate that participants tend to conflate the onset of the action and the feedback signal when the feedback is delayed, and that they rely heavily on motor-stop signals for duration reproduction. Furthermore, auditory duration was overestimated compared to visual duration in crossmodal feedback conditions, and the overestimation of auditory duration (or the underestimation of visual duration) was independent of the delay manipulation.

  11. Cortical feedback signals generalise across different spatial frequencies of feedforward inputs

    OpenAIRE

    Revina, Yulia; Petro, Lucy S.; Muckli, Lars

    2017-01-01

    Visual processing in cortex relies on feedback projections contextualising feedforward information flow. Primary visual cortex (V1) has small receptive fields and processes feedforward information at a fine-grained spatial scale, whereas higher visual areas have larger, spatially invariant receptive fields. Therefore, feedback could provide coarse information about the global scene structure or alternatively recover fine-grained structure by targeting small receptive fields in V1. We tested i...

  12. Image navigation as a means to expand the boundaries of fluorescence-guided surgery.

    Science.gov (United States)

    Brouwer, Oscar R; Buckle, Tessa; Bunschoten, Anton; Kuil, Joeri; Vahrmeijer, Alexander L; Wendler, Thomas; Valdés-Olmos, Renato A; van der Poel, Henk G; van Leeuwen, Fijs W B

    2012-05-21

    Hybrid tracers that are both radioactive and fluorescent help extend the use of fluorescence-guided surgery to deeper structures. Such hybrid tracers facilitate preoperative surgical planning using (3D) scintigraphic images and enable synchronous intraoperative radio- and fluorescence guidance. Nevertheless, we previously found that improved orientation during laparoscopic surgery remains desirable. Here we illustrate how intraoperative navigation based on optical tracking of a fluorescence endoscope may help further improve the accuracy of hybrid surgical guidance. After feeding SPECT/CT images with an optical fiducial as a reference target to the navigation system, optical tracking could be used to position the tip of the fluorescence endoscope relative to the preoperative 3D imaging data. This hybrid navigation approach allowed us to accurately identify marker seeds in a phantom setup. The multispectral nature of the fluorescence endoscope enabled stepwise visualization of the two clinically approved fluorescent dyes, fluorescein and indocyanine green. In addition, the approach was used to navigate toward the prostate in a patient undergoing robot-assisted prostatectomy. Navigation of the tracked fluorescence endoscope toward the target identified on SPECT/CT resulted in real-time gradual visualization of the fluorescent signal in the prostate, thus providing an intraoperative confirmation of the navigation accuracy.

  13. Laparoscopic Navigated Liver Resection: Technical Aspects and Clinical Practice in Benign Liver Tumors

    Directory of Open Access Journals (Sweden)

    Markus Kleemann

    2012-01-01

    Full Text Available Laparoscopic liver resection has been performed mostly in centers with extensive expertise in both hepatobiliary and laparoscopic surgery and only in highly selected patients. In order to overcome the obstacles of this technique through improved intraoperative visualization, we developed a laparoscopic navigation system (LapAssistent) to register pre-operatively reconstructed three-dimensional CT or MRI scans within the intra-operative field. After experimental development of the navigation system, we commenced with the clinical use of navigation-assisted laparoscopic liver surgery in January 2010. In this paper we report the technical aspects of the navigation system and its clinical use in one patient with a large benign adenoma. Preoperative planning data were calculated by Fraunhofer MeVis, Bremen, Germany. After calibration of the system, including the camera, the laparoscopic instruments, and the intraoperative ultrasound scanner, we registered the surface of the liver. Using the navigated ultrasound, the preoperatively planned resection plane was then overlaid onto the patient's liver. The laparoscopic navigation system could be used under sterile conditions and it was possible to register and visualize the preoperatively planned resection plane. These first results now have to be validated and certified in a larger patient cohort. A nationwide prospective multicenter study (ProNavic I) has been conducted and launched.

  14. Using Screencasts to Enhance Assessment Feedback: Students' Perceptions and Preferences

    Science.gov (United States)

    Marriott, Pru; Teoh, Lim Keong

    2012-01-01

    In the UK, assessment and feedback have been regularly highlighted by the National Student Survey as critical aspects that require improvement. An innovative approach to delivering feedback that has proved successful in non-business-related disciplines is the delivery of audio and visual feedback using screencast technology. The feedback on…

  15. Visual information transfer. 1: Assessment of specific information needs. 2: The effects of degraded motion feedback. 3: Parameters of appropriate instrument scanning behavior

    Science.gov (United States)

    Comstock, J. R., Jr.; Kirby, R. H.; Coates, G. D.

    1984-01-01

    Pilot and flight crew assessment of visually displayed information is examined as well as the effects of degraded and uncorrected motion feedback, and instrument scanning efficiency by the pilot. Computerized flight simulation and appropriate physiological measurements are used to collect data for standardization.

  16. Integrated INS/GPS Navigation from a Popular Perspective

    Science.gov (United States)

    Omerbashich, Mensur

    2002-01-01

    Inertial navigation, blended with other navigation aids, Global Positioning System (GPS) in particular, has gained significance due to enhanced navigation and inertial reference performance and dissimilarity for fault tolerance and anti-jamming. Relatively new concepts based upon using Differential GPS (DGPS) blended with Inertial (and visual) Navigation Sensors (INS) offer the possibility of low cost, autonomous aircraft landing. The FAA has decided to implement the system in a sophisticated form as a new standard navigation tool during this decade. There have been a number of new inertial sensor concepts in the recent past that emphasize increased accuracy of INS/GPS versus INS and reliability of navigation, as well as lower size and weight, and higher power, fault tolerance, and long life. The principles of GPS are not discussed; rather the attention is directed towards general concepts and comparative advantages. A short introduction to the problems faced in kinematics is presented. The intention is to relate the basic principles of kinematics to probably the most used navigation method of the future: INS/GPS. An example of the airborne INS is presented, with emphasis on how it works. The discussion of the error types and sources in navigation, and of the role of filters in optimal estimation of the errors then follows. The main question this paper is trying to answer is 'What are the benefits of the integration of INS and GPS, and how is this navigation concept of the future achieved in reality?' The main goal is to communicate the idea about what stands behind a modern navigation method.
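
    The key benefit described here, GPS fixes bounding the drift of the dead-reckoned inertial solution, can be illustrated with a toy one-dimensional loosely coupled filter. The sketch below (Python/NumPy) uses assumed noise parameters and is a deliberately simplified illustration, not a description of any fielded system.

```python
import numpy as np

def ins_gps_step(x, P, accel, dt, gps_pos=None, q_acc=0.5, r_gps=3.0):
    """One cycle of a toy 1D loosely coupled INS/GPS filter.
    State x = [position, velocity]: the INS propagates at high rate,
    and an occasional GPS fix corrects the accumulated drift."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt ** 2, dt])
    x = F @ x + B * accel                     # INS mechanization (dead reckoning)
    P = F @ P @ F.T + q_acc ** 2 * np.outer(B, B)
    if gps_pos is not None:                   # GPS update, only when a fix is available
        H = np.array([[1.0, 0.0]])
        S = H @ P @ H.T + r_gps ** 2
        K = P @ H.T / S
        x = x + (K * (gps_pos - x[0])).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x, P
```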

  17. Understanding satellite navigation

    CERN Document Server

    Acharya, Rajat

    2014-01-01

    This book explains the basic principles of satellite navigation technology with the bare minimum of mathematics and without complex equations. It helps you to conceptualize the underlying theory from first principles, building up your knowledge gradually using practical demonstrations and worked examples. A full range of MATLAB simulations is used to visualize concepts and solve problems, allowing you to see what happens to signals and systems with different configurations. Implementation and applications are discussed, along with some special topics such as Kalman Filter and Ionosphere. W

  18. Research on robot navigation vision sensor based on grating projection stereo vision

    Science.gov (United States)

    Zhang, Xiaoling; Luo, Yinsheng; Lin, Yuchi; Zhu, Lei

    2016-10-01

    A novel visual navigation method based on grating projection stereo vision for mobile robots in dark environments is proposed. The method combines the grating projection profilometry of plane structured light with stereo vision technology. It can be employed to realize obstacle detection, SLAM (Simultaneous Localization and Mapping), and visual odometry for mobile robot navigation in dark environments, without the image matching of stereo vision technology and without the phase unwrapping of grating projection profilometry. First, we study the theory of the new vision sensor and build the geometric and mathematical model of the grating projection stereo vision system. Second, the computational method for the 3D coordinates of obstacles in the robot's visual field is studied, so that obstacles in the field can be located accurately. The results of simulation experiments and analysis show that this research helps address the problem of autonomous navigation for mobile robots in dark environments, and provides a theoretical basis and direction for further study on the navigation of space-exploring robots in dark, GPS-denied environments.
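
    Whatever the projection pattern, the depth computation in such projector-camera arrangements ultimately reduces to triangulation: the lateral shift (disparity) of a projected stripe in the camera image is inversely proportional to depth. The sketch below shows only this generic geometry; the paper's phase-free profilometry is not reproduced, and the calibration values are assumptions.

```python
import numpy as np

def triangulate_depth(disparity_px, focal_px, baseline_m):
    """Generic triangulation for a calibrated projector-camera (or stereo) pair:
    depth = focal_length * baseline / disparity."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return np.where(disparity_px > 0, focal_px * baseline_m / disparity_px, np.inf)
```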

  19. Landmarks or panoramas: what do navigating ants attend to for guidance?

    Directory of Open Access Journals (Sweden)

    Beugnon Guy

    2011-08-01

    Full Text Available Background: Insects are known to rely on terrestrial landmarks for navigation. Landmarks are used to chart a route or pinpoint a goal. The distant panorama, however, is often thought not to guide navigation directly during a familiar journey, but to act as a contextual cue that primes the correct memory of the landmarks. Results: We provided Melophorus bagoti ants with a huge artificial landmark located right near the nest entrance to find out whether navigating ants focus on such a prominent visual landmark for homing guidance. When the landmark was displaced by small or large distances, ant routes were affected differently. Certain behaviours appeared inconsistent with the hypothesis that guidance was based on the landmark only. Instead, comparisons of panoramic images recorded on the field, encompassing both landmark and distal panorama, could explain most aspects of the ant behaviours. Conclusion: Ants navigating along a familiar route do not focus on obvious landmarks or filter out distal panoramic cues, but appear to be guided by cues covering a large area of their panoramic visual field, including both landmarks and distal panorama. Using panoramic views seems an appropriate strategy to cope with the complexity of natural scenes and the poor resolution of insects' eyes. The ability to isolate landmarks from the rest of a scene may be beyond the capacity of animals that do not possess a dedicated object-perception visual stream like primates.

  20. Comprehension of Navigation Directions

    Science.gov (United States)

    Schneider, Vivian I.; Healy, Alice F.

    2000-01-01

    In an experiment simulating communication between air traffic controllers and pilots, subjects were given navigation instructions varying in length telling them to move in a space represented by grids on a computer screen. The subjects followed the instructions by clicking on the grids in the locations specified. Half of the subjects read the instructions, and half heard them. Half of the subjects in each modality condition repeated back the instructions before following them, and half did not. Performance was worse for the visual than for the auditory modality on the longer messages. Repetition of the instructions generally depressed performance, especially with the longer messages, which required more output than did the shorter messages, and especially with the visual modality, in which phonological recoding from the visual input to the spoken output was necessary. These results are explained in terms of the degrading effects of output interference on memory for instructions.

  1. Interação de variáveis biomecânicas na composição de "feedback" visual aumentado para o ensino do ciclismo Interacción de variables biomecánicas en la composición de feedback visual aumentado para el enseñanza del ciclismo Interaction of biomechanical variables in the composition of visual augmented feedback for learning cycling

    OpenAIRE

    Guilherme Garcia Holderbaum; Ricardo Demétrio de Souza Petersen; Antônio Carlos Stringhini Guimarães

    2012-01-01

    The aim of this study was to test a methodology for teaching the cycling pedalling technique, using biomechanical variables to develop an augmented visual feedback (AVF) system. Nineteen individuals with no cycling experience took part in the study, divided into an experimental group (n = 10) and a control group (n = 9). A pre-test was first carried out to determine maximal oxygen uptake (VO2max) as well as the workload used in the practice sessions, which correspond...

  2. Blind MuseumTourer: A System for Self-Guided Tours in Museums and Blind Indoor Navigation

    OpenAIRE

    Apostolos Meliones; Demetrios Sampson

    2018-01-01

    Notably valuable efforts have focused on helping people with special needs. In this work, we build upon the experience from the BlindHelper smartphone outdoor pedestrian navigation app and present Blind MuseumTourer, a system for indoor interactive autonomous navigation for blind and visually impaired persons and groups (e.g., pupils), which has primarily addressed blind or visually impaired (BVI) accessibility and self-guided tours in museums. A pilot prototype has been developed and is curr...

  3. Visual navigation in insects: coupling of egocentric and geocentric information

    Science.gov (United States)

    Wehner; Michel; Antonsen

    1996-01-01

    Social hymenopterans such as bees and ants are central-place foragers; they regularly depart from and return to fixed positions in their environment. In returning to the starting point of their foraging excursion or to any other point, they could resort to two fundamentally different ways of navigation by using either egocentric or geocentric systems of reference. In the first case, they would rely on information continuously collected en route (path integration, dead reckoning), i.e. integrate all angles steered and all distances covered into a mean home vector. In the second case, they are expected, at least by some authors, to use a map-based system of navigation, i.e. to obtain positional information by virtue of the spatial position they occupy within a larger environmental framework. In bees and ants, path integration employing a skylight compass is the predominant mechanism of navigation, but geocentred landmark-based information is used as well. This information is obtained while the animal is dead-reckoning and, hence, added to the vector course. For example, the image of the horizon skyline surrounding the nest entrance is retinotopically stored while the animal approaches the goal along its vector course. As shown in desert ants (genus Cataglyphis), there is neither interocular nor intraocular transfer of landmark information. Furthermore, this retinotopically fixed, and hence egocentred, neural snapshot is linked to an external (geocentred) system of reference. In this way, geocentred information might more and more complement and potentially even supersede the egocentred information provided by the path-integration system. In competition experiments, however, Cataglyphis never frees itself of its homeward-bound vector - its safety-line, so to speak - by which it is always linked to home. Vector information can also be transferred to a longer-lasting (higher-order) memory. There is no need to invoke the concept of the mental analogue of a topographic

  4. Toward Functional Augmented Reality in Marine Navigation : A Cognitive Work Analysis

    NARCIS (Netherlands)

    Procee, S.; Borst, C.; van Paassen, M.M.; Mulder, M.; Bertram, V.

    2017-01-01

    Augmented Reality, (AR) also known as vision-overlay, can help the navigator to visually detect a dangerous target by the overlay of a synthetic image, thus providing a visual cue over the real world. This is the first paper of a series about the practicalities and consequences of implementing AR in

  5. Skill learning from kinesthetic feedback.

    Science.gov (United States)

    Pinzon, David; Vega, Roberto; Sanchez, Yerly Paola; Zheng, Bin

    2017-10-01

    It is important for a surgeon to perform surgical tasks under appropriate guidance from visual and kinesthetic feedback. However, our knowledge of kinesthetic (muscle) memory and its role in learning motor skills remains elementary. The aim was to discover the effect of exclusive kinesthetic training on kinesthetic memory, in both performance and learning. In Phase 1, a total of twenty participants duplicated five two-dimensional movements of increasing complexity via passive kinesthetic guidance, without visual or auditory stimuli. Five participants were asked to repeat the task in Phase 2 over a period of three weeks, for a total of nine sessions. Subjects accurately recalled movement direction using kinesthetic memory, but recalling movement length was less precise. Over the nine training sessions, error occurrence dropped after the sixth session. Muscle memory constructs the foundation for kinesthetic training. The knowledge gained helps surgeons learn skills from kinesthetic information in conditions where visual feedback is limited.

  6. Multidisciplinary evaluation of an emergency department nurse navigator role: A mixed methods study.

    Science.gov (United States)

    Jessup, Melanie; Fulbrook, Paul; Kinnear, Frances B

    2017-09-20

    To utilise multidisciplinary staff feedback to assess their perceptions of a novel emergency department nurse navigator role and to understand the impact of the role on the department. Prolonged emergency department stays impact patients, staff and quality of care, and are linked to increased morbidity and mortality. One innovative strategy to facilitate patient flow is the navigator: a nurse supporting staff in care delivery to enhance efficient, timely movement of patients through the department. However, there is a lack of rigorous research into this emerging role. Sequential exploratory mixed methods. A supernumerary emergency department nurse navigator was implemented week-off-week-on, seven days a week for 20 weeks. Diaries, focus groups, and an online survey (24-item Navigator Role Evaluation tool) were used to collect and synthesise data from the perspectives of multidisciplinary departmental staff. Thematic content analysis of cumulative qualitative data drawn from the navigators' diaries, focus groups and survey revealed iterative processes of the navigators growing into the role and staff incorporating the role into departmental flow, manifested as: Reception of the role and relationships with staff; Defining the role; and Assimilation of the role. Statistical analysis of survey data revealed overall staff satisfaction with the role. Physicians, nurses and others assessed it similarly. However, only 44% felt the role was an overall success, less than half (44%) considered it necessary, and just over a third (38%) thought it positively impacted inter-professional relationships. Investigation of individual items revealed several areas of uncertainty about the role. Within-group differences between nursing grades were noted, junior nurses rating the role significantly higher than more senior nurses. Staff input yielded invaluable insider feedback for ensuing modification and optimal instigation of the navigator role, rendering a sense of departmental

  7. Stochastic two-delay differential model of delayed visual feedback effects on postural dynamics.

    Science.gov (United States)

    Boulet, Jason; Balasubramaniam, Ramesh; Daffertshofer, Andreas; Longtin, André

    2010-01-28

    We report on experiments and modelling involving the 'visuo-postural control loop' in the upright stance. We experimentally manipulated an artificial delay to the visual feedback during standing, presented at delays ranging from 0 to 1 s in increments of 250 ms. Using stochastic delay differential equations, we explicitly modelled the centre-of-pressure (COP) and centre-of-mass (COM) dynamics with two independent delay terms for vision and proprioception. A novel 'drifting fixed point' hypothesis was used to describe the fluctuations of the COM, with the COP being modelled as a faster, corrective process of the COM. The model was in good agreement with the data in terms of probability density functions, power spectral densities, short- and long-term correlations (Hurst exponents), as well as the critical time between the two ranges.
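
    A toy version of such a two-delay stochastic feedback model can be integrated with a simple Euler-Maruyama scheme, as sketched below in Python/NumPy. The linear drift term, the two delay values, and the noise level are illustrative assumptions and do not reproduce the authors' fitted COP/COM model.

```python
import numpy as np

def simulate_two_delay_sway(a=1.5, b=0.8, tau_prop=0.15, tau_vis=0.35,
                            sigma=0.05, dt=0.001, T=60.0, seed=0):
    """Euler-Maruyama integration of a toy two-delay stochastic feedback model:
    dx = [-a * x(t - tau_prop) - b * x(t - tau_vis)] dt + sigma dW."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    d1, d2 = int(tau_prop / dt), int(tau_vis / dt)
    x = np.zeros(n)
    for i in range(max(d1, d2), n - 1):
        drift = -a * x[i - d1] - b * x[i - d2]          # delayed proprioceptive + visual feedback
        x[i + 1] = x[i] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x
```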

  8. Effects of acoustic feedback training in elite-standard Para-Rowing.

    Science.gov (United States)

    Schaffert, Nina; Mattes, Klaus

    2015-01-01

    Assessment and feedback devices have been regularly used in technique training in high-performance sports. Biomechanical analysis is mainly visually based and so can exclude athletes with visual impairments. The aim of this study was to examine the effects of auditory feedback on mean boat speed during on-water training of visually impaired athletes. The German National Para-Rowing team (six athletes, mean ± s, age 34.8 ± 10.6 years, body mass 76.5 ± 13.5 kg, stature 179.3 ± 8.6 cm) participated in the study. Kinematics included boat acceleration and distance travelled, collected with Sofirow at two training intensities. The boat acceleration-time traces were converted online into acoustic feedback and presented via speakers during rowing (sections with and without feedback alternating). Repeated-measures within-participant factorial ANOVA showed greater boat speed with acoustic feedback than baseline (0.08 ± 0.01 m·s(-1)). The time structure of the rowing cycles was improved (extended time of positive acceleration). Questioning of the athletes showed acoustic feedback to be a supportive training aid, as it provided important functional information about the boat motion independent of vision. It gave visually impaired athletes access to biomechanical analysis via auditory information. The concept for adaptive athletes has been successfully integrated into the preparation for the Para-Rowing World Championships and Paralympics.

  9. The positive effect of mirror visual feedback on arm control in children with Spastic hemiparetic cerebral palsy is dependent on which arm is viewed

    NARCIS (Netherlands)

    Smorenburg, A; Ledebt, A.; Feltham, M.; Deconinck, F.; Savelsbergh, G.J.P.

    2011-01-01

    Mirror visual feedback has previously been found to reduce disproportionate interlimb variability and neuromuscular activity in the arm muscles in children with Spastic Hemiparetic Cerebral Palsy (SHCP). The aim of the current study was to determine whether these positive effects are generated by

  10. 14 CFR 135.165 - Communication and navigation equipment: Extended over-water or IFR operations.

    Science.gov (United States)

    2010-01-01

    14 CFR § 135.165 (Aeronautics and Space; Aircraft and Equipment) - Communication and navigation equipment: Extended over-water or IFR operations. ... aircraft used for IFR operations is equipped with at least— (i) One marker beacon receiver providing visual...

  11. Navigation using sensory substitution in real and virtual mazes.

    Science.gov (United States)

    Chebat, Daniel-Robert; Maidenbaum, Shachar; Amedi, Amir

    2015-01-01

    Under certain specific conditions people who are blind have a perception of space that is equivalent to that of sighted individuals. However, in most cases their spatial perception is impaired. Is this simply due to their current lack of access to visual information or does the lack of visual information throughout development prevent the proper integration of the neural systems underlying spatial cognition? Sensory Substitution devices (SSDs) can transfer visual information via other senses and provide a unique tool to examine this question. We hypothesize that the use of our SSD (The EyeCane: a device that translates distance information into sounds and vibrations) can enable blind people to attain a similar performance level as the sighted in a spatial navigation task. We gave fifty-six participants training with the EyeCane. They navigated in real life-size mazes using the EyeCane SSD and in virtual renditions of the same mazes using a virtual-EyeCane. The participants were divided into four groups according to visual experience: congenitally blind, low vision & late blind, blindfolded sighted and sighted visual controls. We found that with the EyeCane participants made fewer errors in the maze, had fewer collisions, and completed the maze in less time on the last session compared to the first. By the third session, participants improved to the point where individual trials were no longer significantly different from the initial performance of the sighted visual group in terms of errors, time and collision.

  12. Visual Inertial Navigation and Calibration

    OpenAIRE

    Skoglund, Martin A.

    2011-01-01

    Processing and interpretation of visual content is essential to many systems and applications. This requires knowledge of how the content is sensed and also what is sensed. Such knowledge is captured in models which, depending on the application, can be very advanced or simple. An application example is scene reconstruction using a camera; if a suitable model of the camera is known, then a model of the scene can be estimated from images acquired at different, unknown, locations, yet, the qual...

  13. Multi-focal Vision and Gaze Control Improve Navigation Performance

    Directory of Open Access Journals (Sweden)

    Kolja Kuehnlenz

    2008-11-01

    Full Text Available Multi-focal vision systems comprise cameras with various fields of view and measurement accuracies. This article presents a multi-focal approach to localization and mapping of mobile robots with active vision. An implementation of the novel concept is done considering a humanoid robot navigation scenario where the robot is visually guided through a structured environment with several landmarks. Various embodiments of multi-focal vision systems are investigated and the impact on navigation performance is evaluated in comparison to a conventional mono-focal stereo set-up. The comparative studies clearly show the benefits of multi-focal vision for mobile robot navigation: flexibility to assign the different available sensors optimally in each situation, enhancement of the visible field, higher localization accuracy, and, thus, better task performance, i.e. path following behavior of the mobile robot. It is shown that multi-focal vision may strongly improve navigation performance.

  14. Corticocortical feedback increases the spatial extent of normalization.

    Science.gov (United States)

    Nassi, Jonathan J; Gómez-Laberge, Camille; Kreiman, Gabriel; Born, Richard T

    2014-01-01

    Normalization has been proposed as a canonical computation operating across different brain regions, sensory modalities, and species. It provides a good phenomenological description of non-linear response properties in primary visual cortex (V1), including the contrast response function and surround suppression. Despite its widespread application throughout the visual system, the underlying neural mechanisms remain largely unknown. We recently observed that corticocortical feedback contributes to surround suppression in V1, raising the possibility that feedback acts through normalization. To test this idea, we characterized area summation and contrast response properties in V1 with and without feedback from V2 and V3 in alert macaques and applied a standard normalization model to the data. Area summation properties were well explained by a form of divisive normalization, which computes the ratio between a neuron's driving input and the spatially integrated activity of a "normalization pool." Feedback inactivation reduced surround suppression by shrinking the spatial extent of the normalization pool. This effect was independent of the gain modulation thought to mediate the influence of contrast on area summation, which remained intact during feedback inactivation. Contrast sensitivity within the receptive field center was also unaffected by feedback inactivation, providing further evidence that feedback participates in normalization independent of the circuit mechanisms involved in modulating contrast gain and saturation. These results suggest that corticocortical feedback contributes to surround suppression by increasing the visuotopic extent of normalization and, via this mechanism, feedback can play a critical role in contextual information processing.
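
    The ratio described above is commonly written in the following standard divisive-normalization form; the exponents and constants actually fitted in the study may differ, so this should be read as the textbook version of the model:

```latex
% Response of neuron i: its driving input L_i divided by a semi-saturation
% constant sigma plus the pooled activity of the normalization pool P_i.
R_i \;=\; R_{\max}\,\frac{L_i^{\,n}}{\sigma^{n} + \sum_{j \in \mathcal{P}_i} L_j^{\,n}}
```

    Shrinking the spatial extent of the pool, as reported after feedback inactivation, directly reduces the surround term in the denominator and hence the amount of surround suppression.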

  15. Stimulus-dependent modulation of visual neglect in a touch-screen cancellation task.

    Science.gov (United States)

    Keller, Ingo; Volkening, Katharina; Garbacenkaite, Ruta

    2015-05-01

    Patients with left-sided neglect frequently show omissions and repetitive behavior on cancellation tests. Using a touch-screen-based cancellation task, we tested how visual feedback and distracters influence the number of omissions and perseverations. Eighteen patients with left-sided visual neglect and 18 healthy controls performed four different cancellation tasks on an iPad touch screen: no feedback (the display did not change during the task), visual feedback (touched targets changed their color from black to green), visual feedback with distracters (20 distracters were evenly embedded in the display; detected targets changed their color from black to green), vanishing targets (touched targets disappeared from the screen). Except for the condition with vanishing targets, neglect patients had significantly more omissions and perseverations than healthy controls in the remaining three subtests. Both conditions providing feedback by changing the target color showed the highest number of omissions. Erasure of targets nearly diminished omissions completely. The highest rate of perseverations was observed in the no-feedback condition. The implementation of distracters led to a moderate number of perseverations. Visual feedback without distracters and vanishing targets abolished perseverations nearly completely. Visual feedback and the presence of distracters aggravated hemispatial neglect. This finding is compatible with impaired disengagement from the ipsilesional side as an important factor of visual neglect. Improvement of cancellation behavior with vanishing targets could have therapeutic implications.

  16. Evaluation of stiffness feedback for hard nodule identification on a phantom silicone model

    OpenAIRE

    Li, M.; Konstantinova, J.; Xu, G.; He, B.; Aminzadeh, V.; Xie, J.; Wurdemann, H.; Althoefer, K.

    2017-01-01

    Haptic information in robotic surgery can significantly improve clinical outcomes and help detect hard soft-tissue inclusions that indicate potential abnormalities. Visual representation of tissue stiffness information is a cost-effective technique. Meanwhile, direct force feedback, although considerably more expensive than visual representation, is an intuitive method of conveying information regarding tissue stiffness to surgeons. In this study, real-time visual stiffness feedback by slidin...

  17. Practical indoor mobile robot navigation using hybrid maps

    DEFF Research Database (Denmark)

    Özkil, Ali Gürcan; Fan, Zhun; Xiao, Jizhong

    2011-01-01

    This paper presents a practical navigation scheme for indoor mobile robots using hybrid maps. The method makes use of metric maps for local navigation and a topological map for global path planning. Metric maps are generated as 2D occupancy grids by a range sensor to represent local information about partial areas. The global topological map is used to indicate the connectivity of the 'places-of-interest' in the environment and the interconnectivity of the local maps. Visual tags on the ceiling to be detected by the robot provide valuable information and contribute to reliable localization. ... robot and evaluated in a hospital environment.
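
    The global, topological half of such a scheme can be sketched as a shortest-path search over a graph of places, with each edge weighted by the estimated cost of traversing between neighbouring local metric maps. The Python sketch below covers only this planning step; grid-based local navigation and the ceiling-tag localization are not shown, and the graph format is an assumption.

```python
import heapq

def plan_global_route(topo_graph, start, goal):
    """Dijkstra over the topological map. Nodes are 'places-of-interest';
    topo_graph maps each node to a list of (neighbour, traversal_cost) pairs."""
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for neighbour, cost in topo_graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour], prev[neighbour] = nd, node
                heapq.heappush(queue, (nd, neighbour))
    if goal != start and goal not in prev:
        return None                          # no topological route exists
    route, node = [goal], goal
    while node != start:
        node = prev[node]
        route.append(node)
    return route[::-1]
```

    For example, plan_global_route({'lobby': [('corridor', 5.0)], 'corridor': [('ward', 8.0)], 'ward': []}, 'lobby', 'ward') returns ['lobby', 'corridor', 'ward']; each hop would then be executed against the corresponding local occupancy grid.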

  18. Training based on mirror visual feedback influences transcallosal communication.

    Science.gov (United States)

    Avanzino, Laura; Raffo, Alessia; Pelosin, Elisa; Ogliastro, Carla; Marchese, Roberta; Ruggeri, Piero; Abbruzzese, Giovanni

    2014-08-01

    Mirror visual feedback (MVF) therapy has been demonstrated to be successful in neurorehabilitation, probably inducing neuroplasticity changes in the primary motor cortex (M1). However, it is not known whether MVF training influences the hemispheric balance between the M1s. This topic is of extreme relevance when MVF training is applied to stroke rehabilitation, as the competitive interaction between the two hemispheres induces abnormal interhemispheric inhibition (IHI) that weakens motor function in stroke patients. In the present study, we evaluated, in a group of healthy subjects, the effect of motor training and MVF training on the excitability of the two M1s and the IHI between M1s. The IHI from the 'active' M1 to the opposite M1 (where 'active' means the M1 contralateral to the moving hand in the motor training and the M1 of the seen hand in the MVF training) increased, after training, in both the experimental conditions. Only after motor training did we observe an increase in the excitability of the active M1. Our findings show that training based on MVF may influence the excitability of the transcallosal pathway and support its use in disorders where abnormal IHI is a potential target, such as stroke, where an imbalance between the affected and unaffected M1s has been documented.

  19. Evaluation of Augmented Reality Feedback in Surgical Training Environment.

    Science.gov (United States)

    Zahiri, Mohsen; Nelson, Carl A; Oleynikov, Dmitry; Siu, Ka-Chun

    2018-02-01

    Providing computer-based laparoscopic surgical training has several advantages that enhance the training process. Self-evaluation and real-time performance feedback are two of these advantages, which reduce trainees' dependency on expert feedback. The goal of this study was to investigate the use of a visual time indicator as real-time feedback correlated with laparoscopic surgical training. Twenty novices participated in this study, working with (and without) different presentations of time indicators. They performed a standard peg transfer task, and their completion times and muscle activity were recorded and compared. Also of interest was whether the use of this type of feedback induced any side effect in terms of motivation or muscle fatigue. Of the 20 participants, 15 (75%) preferred using a time indicator in the training process rather than having no feedback. However, time to task completion showed no significant difference in performance with the time indicator; furthermore, no significant differences in muscle activity or muscle fatigue were detected with/without time feedback. The absence of a significant difference between task performance with/without time feedback shows that using visual real-time feedback can be included in surgical training based on user preference. Trainees may benefit from this type of feedback in the form of increased motivation. The extent to which this can influence training frequency leading to performance improvement is a question for further study.

  20. The influence of visual feedback from the recent past on the programming of grip aperture is grasp-specific, shared between hands, and mediated by sensorimotor memory not task set.

    Science.gov (United States)

    Tang, Rixin; Whitwell, Robert L; Goodale, Melvyn A

    2015-05-01

    Goal-directed movements, such as reaching out to grasp an object, are necessarily constrained by the spatial properties of the target such as its size, shape, and position. For example, during a reach-to-grasp movement, the peak width of the aperture formed by the thumb and fingers in flight (peak grip aperture, PGA) is linearly related to the target's size. Suppressing vision throughout the movement (visual open loop) has a small though significant effect on this relationship. Visual open loop conditions also produce a large increase in the PGA compared to when vision is available throughout the movement (visual closed loop). Curiously, this differential effect of the availability of visual feedback is influenced by the presentation order: the difference in PGA between closed- and open-loop trials is smaller when these trials are intermixed (an effect we have called 'homogenization'). Thus, grasping movements are affected not only by the availability of visual feedback (closed loop or open loop) but also by what happened on the previous trial. It is not clear, however, whether this carry-over effect is mediated through motor (or sensorimotor) memory or through the interference of different task sets for closed-loop and open-loop feedback that determine when the movements are fully specified. We reasoned that sensorimotor memory, but not a task set for closed and open loop feedback, would be specific to the type of response. We tested this prediction in a condition in which pointing to targets was alternated with grasping those same targets. Critically, in this condition, when pointing was performed in open loop, grasping was always performed in closed loop (and vice versa). Despite the fact that closed- and open-loop trials were alternating in this condition, we found no evidence for homogenization of the PGA. Homogenization did occur, however, in a follow-up experiment in which grasping movements and visual feedback were alternated between the left and the right

  1. Moving in Dim Light: Behavioral and Visual Adaptations in Nocturnal Ants.

    Science.gov (United States)

    Narendra, Ajay; Kamhi, J Frances; Ogawa, Yuri

    2017-11-01

    Visual navigation is a benchmark information processing task that can be used to identify the consequence of being active in dim-light environments. Visual navigational information that animals use during the day includes celestial cues such as the sun or the pattern of polarized skylight and terrestrial cues such as the entire panorama, canopy pattern, or significant salient features in the landscape. At night, some of these navigational cues are either unavailable or are significantly dimmer or less conspicuous than during the day. Even under these circumstances, animals navigate between locations of importance. Ants are a tractable system for studying navigation during day and night because the fine scale movement of individual animals can be recorded in high spatial and temporal detail. Ant species range from being strictly diurnal, crepuscular, and nocturnal. In addition, a number of species have the ability to change from a day- to a night-active lifestyle owing to environmental demands. Ants also offer an opportunity to identify the evolution of sensory structures for discrete temporal niches not only between species but also within a single species. Their unique caste system with an exclusive pedestrian mode of locomotion in workers and an exclusive life on the wing in males allows us to disentangle sensory adaptations that cater for different lifestyles. In this article, we review the visual navigational abilities of nocturnal ants and identify the optical and physiological adaptations they have evolved for being efficient visual navigators in dim-light.

  2. Instrument-mounted displays for reducing cognitive load during surgical navigation.

    Science.gov (United States)

    Herrlich, Marc; Tavakol, Parnian; Black, David; Wenig, Dirk; Rieder, Christian; Malaka, Rainer; Kikinis, Ron

    2017-09-01

    Surgical navigation systems rely on a monitor placed in the operating room to relay information. Optimal monitor placement can be challenging in crowded rooms, and it is often not possible to place the monitor directly beside the situs. The operator must split attention between the navigation system and the situs. We present an approach for needle-based interventions to provide navigational feedback directly on the instrument and close to the situs by mounting a small display onto the needle. By mounting a small and lightweight smartwatch display directly onto the instrument, we are able to provide navigational guidance close to the situs and directly in the operator's field of view, thereby reducing the need to switch the focus of view between the situs and the navigation system. We devise a specific variant of the established crosshair metaphor suitable for the very limited screen space. We conduct an empirical user study comparing our approach to using a monitor and a combination of both. Results from the empirical user study show significant benefits for cognitive load, user preference, and general usability for the instrument-mounted display, while achieving the same level of performance in terms of time and accuracy compared to using a monitor. We successfully demonstrate the feasibility of our approach and potential benefits. With ongoing technological advancements, instrument-mounted displays might complement standard monitor setups for surgical navigation in order to lower cognitive demands and for improved usability of such systems.

  3. Visual Odometry and Place Recognition Fusion for Vehicle Position Tracking in Urban Environments.

    Science.gov (United States)

    Ouerghi, Safa; Boutteau, Rémi; Savatier, Xavier; Tlili, Fethi

    2018-03-22

    In this paper, we address the problem of vehicle localization in urban environments. We rely on visual odometry, calculating the incremental motion, to track the position of the vehicle, and on place recognition to correct the accumulated drift of visual odometry whenever a location is recognized. The algorithm used as a place recognition module is SeqSLAM, addressing challenging environments and achieving quite remarkable results. Specifically, we perform the long-term navigation of a vehicle based on the fusion of visual odometry and SeqSLAM. The template library for the latter is created online using navigation information from the visual odometry module. That is, when a location is recognized, the corresponding information is used as an observation of the filter. The fusion is done using the EKF and the UKF, the well-known nonlinear state estimation methods, to assess the superior alternative. The algorithm is evaluated using the KITTI dataset and the results show the reduction of the navigation errors by loop-closure detection. The overall position error of visual odometry with SeqSLAM is 0.22% of the trajectory, which is much smaller than the navigation error of visual odometry alone (0.45%). In addition, despite the superiority of the UKF in a variety of estimation problems, our results indicate that the UKF performs as efficiently as the EKF at the expense of an additional computational overhead. This leads to the conclusion that the EKF is a better choice for fusing visual odometry and SeqSLAM in a long-term navigation context.
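
    The fusion pattern described, incremental motion from visual odometry as the prediction step and a recognised place as an absolute correction, can be reduced to a very small EKF sketch (Python/NumPy). The additive motion model and identity measurement matrix are simplifying assumptions; the actual system estimates full poses and uses SeqSLAM matching for the corrections.

```python
import numpy as np

def vo_predict(x, P, delta, Q):
    """Propagate the position estimate with the incremental motion reported by visual odometry."""
    return x + delta, P + Q                  # uncertainty (drift) grows with every VO step

def place_recognition_update(x, P, z, R):
    """Correct the accumulated drift with the absolute position attached to a recognised place."""
    H = np.eye(len(x))
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```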

  4. Corticocortical feedback increases the spatial extent of normalization

    Science.gov (United States)

    Nassi, Jonathan J.; Gómez-Laberge, Camille; Kreiman, Gabriel; Born, Richard T.

    2014-01-01

    Normalization has been proposed as a canonical computation operating across different brain regions, sensory modalities, and species. It provides a good phenomenological description of non-linear response properties in primary visual cortex (V1), including the contrast response function and surround suppression. Despite its widespread application throughout the visual system, the underlying neural mechanisms remain largely unknown. We recently observed that corticocortical feedback contributes to surround suppression in V1, raising the possibility that feedback acts through normalization. To test this idea, we characterized area summation and contrast response properties in V1 with and without feedback from V2 and V3 in alert macaques and applied a standard normalization model to the data. Area summation properties were well explained by a form of divisive normalization, which computes the ratio between a neuron's driving input and the spatially integrated activity of a “normalization pool.” Feedback inactivation reduced surround suppression by shrinking the spatial extent of the normalization pool. This effect was independent of the gain modulation thought to mediate the influence of contrast on area summation, which remained intact during feedback inactivation. Contrast sensitivity within the receptive field center was also unaffected by feedback inactivation, providing further evidence that feedback participates in normalization independent of the circuit mechanisms involved in modulating contrast gain and saturation. These results suggest that corticocortical feedback contributes to surround suppression by increasing the visuotopic extent of normalization and, via this mechanism, feedback can play a critical role in contextual information processing. PMID:24910596

  5. 'Robot' Hand Illusion under Delayed Visual Feedback: Relationship between the Senses of Ownership and Agency.

    Directory of Open Access Journals (Sweden)

    Mohamad Arif Fahmi Ismail

    Full Text Available The rubber hand illusion (RHI) is an illusion of the self-ownership of a rubber hand that is touched synchronously with one's own hand. While the RHI relates to visual and tactile integration, we can also consider a similar illusion with visual and motor integration on a fake hand. We call this a "robot hand illusion" (RoHI), which relates to both the senses of ownership and agency. Here we investigate the effect of delayed visual feedback on the RoHI. Participants viewed a virtual computer graphic hand controlled by their hand movement recorded using a data glove device. We inserted delays of various lengths between the participant's hand and the virtual hand movements (90-590 ms), and the RoHI effects for each delay condition were systematically tested using a questionnaire. The results showed that the participants felt significantly greater RoHI effects with temporal discrepancies of less than 190 ms compared with longer temporal discrepancies, both in the senses of ownership and agency. Additionally, participants felt significant, but weaker, RoHI effects with temporal discrepancies of 290-490 ms in the sense of agency, but not in the sense of ownership. The participants did not feel a RoHI with temporal discrepancies of 590 ms in either the senses of agency or ownership. Our results suggest that a time window of less than 200 ms is critical for multi-sensory integration processes constituting self-body image.

  6. 'Robot' Hand Illusion under Delayed Visual Feedback: Relationship between the Senses of Ownership and Agency.

    Science.gov (United States)

    Ismail, Mohamad Arif Fahmi; Shimada, Sotaro

    2016-01-01

    The rubber hand illusion (RHI) is an illusion of the self-ownership of a rubber hand that is touched synchronously with one's own hand. While the RHI relates to visual and tactile integration, we can also consider a similar illusion with visual and motor integration on a fake hand. We call this a "robot hand illusion" (RoHI), which relates to both the senses of ownership and agency. Here we investigate the effect of delayed visual feedback on the RoHI. Participants viewed a virtual computer graphic hand controlled by their hand movement recorded using a data glove device. We inserted delays of various lengths between the participant's hand and the virtual hand movements (90-590 ms), and the RoHI effects for each delay condition were systematically tested using a questionnaire. The results showed that the participants felt significantly greater RoHI effects with temporal discrepancies of less than 190 ms compared with longer temporal discrepancies, both in the senses of ownership and agency. Additionally, participants felt significant, but weaker, RoHI effects with temporal discrepancies of 290-490 ms in the sense of agency, but not in the sense of ownership. The participants did not feel a RoHI with temporal discrepancies of 590 ms in either the senses of agency or ownership. Our results suggest that a time window of less than 200 ms is critical for multi-sensory integration processes constituting self-body image.

  7. The skill of surface registration in CT-based navigation system for total hip arthroplasty

    International Nuclear Information System (INIS)

    Hananouchi, T.; Sugano, N.; Nishii, T.; Miki, H.; Sakai, T.; Yoshikawa, H.; Iwana, D.; Yamamura, M.; Nakamura, N.

    2007-01-01

    Surface registration in a CT-based navigation system, which is a matching between the computational and real spatial spaces, is a key step to guarantee the accuracy of navigation. However, it has not been well described how the accuracy is affected by the registration skill of the surgeon. Here, we report the difference in registration error between eight surgeons with experience of navigation and six apprentice surgeons. A cadaveric pelvic model with an acetabular cup was made to measure the skill and learning curve of registration. After surface registration, two cup angles (inclination and anteversion) were recorded in the navigation system, and the variance of these cup angles over ten trials was compared between the experienced surgeons and the apprentices. In addition, we investigated whether the accuracy of registration by the apprentices was improved by visual information on how to take the surface points. The results showed that there was a statistically significant difference in the accuracy of registration between the two groups. The accuracy of the second ten trials, after receiving the visual information, showed great improvement. (orig.)

  8. Navigation using sensory substitution in real and virtual mazes.

    Directory of Open Access Journals (Sweden)

    Daniel-Robert Chebat

    Full Text Available Under certain specific conditions people who are blind have a perception of space that is equivalent to that of sighted individuals. However, in most cases their spatial perception is impaired. Is this simply due to their current lack of access to visual information or does the lack of visual information throughout development prevent the proper integration of the neural systems underlying spatial cognition? Sensory Substitution devices (SSDs) can transfer visual information via other senses and provide a unique tool to examine this question. We hypothesize that the use of our SSD (The EyeCane: a device that translates distance information into sounds and vibrations) can enable blind people to attain a similar performance level as the sighted in a spatial navigation task. We gave fifty-six participants training with the EyeCane. They navigated in real life-size mazes using the EyeCane SSD and in virtual renditions of the same mazes using a virtual-EyeCane. The participants were divided into four groups according to visual experience: congenitally blind, low vision & late blind, blindfolded sighted and sighted visual controls. We found that with the EyeCane participants made fewer errors in the maze, had fewer collisions, and completed the maze in less time on the last session compared to the first. By the third session, participants improved to the point where individual trials were no longer significantly different from the initial performance of the sighted visual group in terms of errors, time and collision.

  9. Sexual Orientation-Related Differences in Virtual Spatial Navigation and Spatial Search Strategies.

    Science.gov (United States)

    Rahman, Qazi; Sharp, Jonathan; McVeigh, Meadhbh; Ho, Man-Ling

    2017-07-01

    Spatial abilities are generally hypothesized to differ between men and women, and people with different sexual orientations. According to the cross-sex shift hypothesis, gay men are hypothesized to perform in the direction of heterosexual women and lesbian women in the direction of heterosexual men on cognitive tests. This study investigated sexual orientation differences in spatial navigation and strategy during a virtual Morris water maze task (VMWM). Forty-four heterosexual men, 43 heterosexual women, 39 gay men, and 34 lesbian/bisexual women (aged 18-54 years) navigated a desktop VMWM and completed measures of intelligence, handedness, and childhood gender nonconformity (CGN). We quantified spatial learning (hidden platform trials), probe trial performance, and cued navigation (visible platform trials). Spatial strategies during hidden and probe trials were classified into visual scanning, landmark use, thigmotaxis/circling, and enfilading. In general, heterosexual men scored better than women and gay men on some spatial learning and probe trial measures and used more visual scan strategies. However, some differences disappeared after controlling for age and estimated IQ (e.g., in visual scanning heterosexual men differed from women but not gay men). Heterosexual women did not differ from lesbian/bisexual women. For both sexes, visual scanning predicted probe trial performance. More feminine CGN scores were associated with lower performance among men and greater performance among women on specific spatial learning or probe trial measures. These results provide mixed evidence for the cross-sex shift hypothesis of sexual orientation-related differences in spatial cognition.

  10. A model of ant route navigation driven by scene familiarity.

    Directory of Open Access Journals (Sweden)

    Bart Baddeley

    2012-01-01

    Full Text Available In this paper we propose a model of visually guided route navigation in ants that captures the known properties of real behaviour whilst retaining mechanistic simplicity and thus biological plausibility. For an ant, the coupling of movement and viewing direction means that a familiar view specifies a familiar direction of movement. Since the views experienced along a habitual route will be more familiar, route navigation can be re-cast as a search for familiar views. This search can be performed with a simple scanning routine, a behaviour that ants have been observed to perform. We test this proposed route navigation strategy in simulation, by learning a series of routes through visually cluttered environments consisting of objects that are only distinguishable as silhouettes against the sky. In the first instance we determine view familiarity by exhaustive comparison with the set of views experienced during training. In further experiments we train an artificial neural network to perform familiarity discrimination using the training views. Our results indicate that, not only is the approach successful, but also that the routes that are learnt show many of the characteristics of the routes of desert ants. As such, we believe the model represents the only detailed and complete model of insect route guidance to date. What is more, the model provides a general demonstration that visually guided routes can be produced with parsimonious mechanisms that do not specify when or what to learn, nor separate routes into sequences of waypoints.
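    The scanning-for-familiarity strategy described above lends itself to a compact sketch. The snippet below is an illustrative reconstruction, not the authors' code: `sample_view` (a routine returning the panoramic view seen from a given pose) and the set of stored training views are assumed inputs, and familiarity is taken as the negative distance to the nearest stored view, as in the exhaustive-comparison variant of the model.

```python
import numpy as np

def familiarity(view, stored_views):
    """Familiarity of a candidate view: negative distance to the closest
    view experienced during training (exhaustive-comparison variant)."""
    return -min(np.sum((view - v) ** 2) for v in stored_views)

def choose_heading(position, heading, stored_views, sample_view,
                   scan_range_deg=120.0, step_deg=10.0):
    """Scan candidate headings and move in the most familiar direction.
    `sample_view(position, heading)` is a hypothetical routine returning the
    panoramic view (as a NumPy array) the agent would see from that pose."""
    offsets = np.arange(-scan_range_deg / 2, scan_range_deg / 2 + step_deg, step_deg)
    scores = [familiarity(sample_view(position, heading + o), stored_views)
              for o in offsets]
    return heading + offsets[int(np.argmax(scores))]
```

    The neural-network variant mentioned in the abstract would simply replace `familiarity` with the network's learned familiarity score for the candidate view.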

  11. Navigational efficiency of nocturnal Myrmecia ants suffers at low light levels.

    Directory of Open Access Journals (Sweden)

    Ajay Narendra

Full Text Available Insects face the challenge of navigating to specific goals in both bright, sun-lit and dim-lit environments. Diurnal and nocturnal insects use quite similar navigation strategies, even though the signal-to-noise ratio of the navigational cues is poor in low light conditions. To better understand the evolution of nocturnal life, we investigated the navigational efficiency of a nocturnal ant, Myrmecia pyriformis, at different light levels. Workers of M. pyriformis leave the nest individually in a narrow light-window in the evening twilight to forage on nest-specific Eucalyptus trees. The majority of foragers return to the nest in the morning twilight, while a few attempt to return to the nest throughout the night. We found that as light levels dropped, ants paused for longer, walked more slowly, were less successful in finding the nest, and followed less straight paths. We found that in both bright and dark conditions ants relied predominantly on visual landmark information for navigation and that landmark guidance became less reliable in low light conditions. It is perhaps due to this poor navigational efficiency at low light levels that the majority of foragers restrict navigational tasks to the twilight periods, when sufficient navigational information is still available.

  12. Sensor guided control and navigation with intelligent machines. Final technical report

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Bijoy K.

    2001-03-26

This item constitutes the final report on ''Visionics: An integrated approach to analysis and design of intelligent machines.'' The report discusses a dynamical systems approach to problems in robust control of possibly time-varying linear systems, problems in vision and visually guided control, and, finally, applications of these control techniques to intelligent navigation with a mobile platform. Robust design of a controller for a time-varying system essentially deals with the problem of synthesizing a controller that can adapt to sudden changes in the parameters of the plant and can maintain stability. The approach presented is to design a compensator that simultaneously stabilizes each and every possible mode of the plant as the parameters undergo sudden and unexpected changes. Such changes can in fact be detected by a visual sensor and, hence, visually guided control problems are studied as a natural consequence. The problem here is to detect parameters of the plant and maintain stability in the closed loop using a CCD camera as a sensor. The main result discussed in the report is the role of perspective systems theory that was developed in order to analyze such a detection and control problem. The robust control algorithms and the visually guided control algorithms are applied in the context of a PUMA 560 robot arm control where the goal is to visually locate a moving part on a mobile turntable. Such problems are of paramount importance in manufacturing with a certain lack of structure. Sensor guided control problems are extended to problems in robot navigation using a NOMADIC mobile platform with a CCD camera and a laser range finder as sensors. The localization and map building problems are studied with the objective of navigation in an unstructured terrain.

  13. Visual Semantic Navigation Based on Deep Learning for Indoor Mobile Robots

    Directory of Open Access Journals (Sweden)

    Li Wang

    2018-01-01

Full Text Available In order to improve the environmental perception ability of mobile robots during semantic navigation, a three-layer perception framework based on transfer learning is proposed, including a place recognition model, a rotation region recognition model, and a “side” recognition model. The first model is used to recognize different regions in rooms and corridors, the second one is used to determine where the robot should be rotated, and the third one is used to decide the walking side of corridors or aisles in the room. Furthermore, the “side” recognition model can also correct the motion of robots in real time, thereby guaranteeing accurate arrival at the specific target. Moreover, semantic navigation is accomplished using only one sensor (a camera). Several experiments are conducted in a real indoor environment, demonstrating the effectiveness and robustness of the proposed perception framework.

  14. Visual target distance, but not visual cursor path length produces shifts in motor behavior

    Directory of Open Access Journals (Sweden)

Nike Wendker

    2014-03-01

Full Text Available When using tools, effects in body space and in distant space often do not correspond. Findings so far have demonstrated that in this case visual feedback has more impact on action control than proprioceptive feedback. The present study varies the dimensional overlap between visual and proprioceptive action effects and investigates its impact on aftereffects in motor responses. In two experiments participants perform linear hand movements on a covered digitizer tablet to produce ∩-shaped cursor trajectories on the display. The shape of the hand motion and the cursor motion (linear vs. curved) is dissimilar and therefore does not overlap. In one condition the length of the hand amplitude and the visual target distance is similar and constant, while the length of the cursor path is dissimilar and varies. In another condition the length of the hand amplitude varies while the lengths of the visual target distance (similar or dissimilar) and the cursor path (dissimilar) are constant. First, we found that aftereffects depended on the relation between hand path length and visual target distance, and not on the relation between hand and cursor path length. Second, increasing contextual interference did not reveal larger aftereffects. Finally, data exploration demonstrated a considerable benefit from gain repetitions across trials when compared to gain switches. In conclusion, dimensional overlap between visual and proprioceptive action effects modulates human information processing in visually controlled actions. However, adjustment of the internal model seems to occur very fast for this kind of simple linear transformation, so that the impact of prior visual feedback is fleeting.

  15. Non-retinotopic motor-visual recalibration to temporal lag

    Directory of Open Access Journals (Sweden)

Masaki Tsujita

    2012-11-01

Full Text Available Temporal order judgment between a voluntary motor action and its perceptual feedback is important for distinguishing between sensory feedback caused by the observer’s own action and other stimuli that are irrelevant to that action. Prolonged exposure to a fixed temporal lag between motor action and visual feedback recalibrates the motor-visual temporal relationship and consequently shifts the point of subjective simultaneity (PSS). Previous studies on audio-visual temporal recalibration without voluntary action revealed that both low- and high-level processing are involved. However, it is not clear how low- and high-level processing affect the recalibration to a constant temporal lag between voluntary action and visual feedback. This study examined the retinotopic specificity of motor-visual temporal recalibration. During the adaptation phase, observers repeatedly pressed a key, and a visual stimulus was presented in the left or right visual field with a fixed temporal lag (0 or 200 ms). In the test phase, observers performed a temporal order judgment between their voluntary keypress and a test stimulus, which was presented either in the same visual field as in the adaptation phase or in the opposite one. We found that the PSS was shifted toward the exposed lag in both visual fields. These results suggest that low-level visual processing, which is retinotopically specific, makes only a minor contribution to this multimodal adaptation, and that the adaptation shifting the PSS mainly depends upon high-level processing such as attention to specific properties of the stimulus.

  16. Navigation Strategy by Contact Sensing Interaction for a Biped Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Hanafiah Yussof

    2008-11-01

Full Text Available This report presents a basic contact interaction-based navigation strategy for a biped humanoid robot to support current visual-based navigation. The robot's arms were equipped with force sensors to detect physical contact with objects. We proposed a motion algorithm consisting of searching tasks, self-localization tasks, correction of locomotion direction tasks and obstacle avoidance tasks. Priority was given to the right-side direction when navigating the robot's locomotion. Analysis of trajectory generation, biped gait pattern, and biped walking characteristics was performed to define an efficient navigation strategy for a biped walking humanoid robot. The proposed algorithm is evaluated in an experiment with a 21-DOF humanoid robot operating in a room with walls and obstacles. The experimental results reveal good robot performance when recognizing objects by touching, grasping, and continuously generating suitable trajectories to correct direction and avoid collisions.

  17. Towards Safe Navigation by Formalizing Navigation Rules

    Directory of Open Access Journals (Sweden)

    Arne Kreutzmann

    2013-06-01

Full Text Available One crucial aspect of safe navigation is to obey all applicable navigation regulations, in particular the collision regulations issued by the International Maritime Organization (IMO Colregs). Therefore, decision support systems for navigation need to respect the Colregs, and this feature should be verifiably correct. We tackle compliance with navigation regulations from the perspective of software verification. One common approach is to use formal logic, but it requires bridging a wide gap between navigation concepts and simple logic. We introduce a novel domain specification language based on a spatio-temporal logic that allows us to overcome this gap. We are able to capture complex navigation concepts in an easily comprehensible representation that can directly be utilized by various bridge systems and that allows for software verification.

  18. Environmental Feedback and Spatial Conditioning

    DEFF Research Database (Denmark)

    Foged, Isak Worre; Poulsen, Esben Skouboe

    2010-01-01

with structural integrity, where thermal energy flow through the prototype, to be understood as a membrane, can be controlled and the visual transparency altered. The work shows performance based feedback systems and physical prototype models driven by information streaming, screening and application....

  19. The visual attention network untangled

    NARCIS (Netherlands)

    Nieuwenhuis, S.; Donner, T.H.

    2011-01-01

    Goals are represented in prefrontal cortex and modulate sensory processing in visual cortex. A new study combines TMS, fMRI and EEG to understand how feedback improves retention of behaviorally relevant visual information.

  20. Oral and maxillofacial surgery with computer-assisted navigation system.

    Science.gov (United States)

    Kawachi, Homare; Kawachi, Yasuyuki; Ikeda, Chihaya; Takagi, Ryo; Katakura, Akira; Shibahara, Takahiko

    2010-01-01

Intraoperative computer-assisted navigation has gained acceptance in maxillofacial surgery with applications in an increasing number of indications. We adapted a commercially available wireless passive marker system which allows calibration and tracking of virtually every instrument in maxillofacial surgery. Virtual computer-generated anatomical structures are displayed intraoperatively in a semi-immersive head-up display. Continuous observation of the operating field facilitated by computer assistance enables surgical navigation in accordance with the physician's preoperative plans. This case report documents the potential for augmented visualization concepts in surgical resection of tumors in the oral and maxillofacial region. We report a case of T3N2bM0 carcinoma of the maxillary gingiva, which was surgically resected with the assistance of the Stryker Navigation Cart System. This system was found to be useful in assisting preoperative planning and intraoperative monitoring.

  1. Anatomy of hierarchy: Feedforward and feedback pathways in macaque visual cortex

    Science.gov (United States)

    Markov, Nikola T; Vezoli, Julien; Chameau, Pascal; Falchier, Arnaud; Quilodran, René; Huissoud, Cyril; Lamy, Camille; Misery, Pierre; Giroud, Pascale; Ullman, Shimon; Barone, Pascal; Dehay, Colette; Knoblauch, Kenneth; Kennedy, Henry

    2013-01-01

    The laminar location of the cell bodies and terminals of interareal connections determines the hierarchical structural organization of the cortex and has been intensively studied. However, we still have only a rudimentary understanding of the connectional principles of feedforward (FF) and feedback (FB) pathways. Quantitative analysis of retrograde tracers was used to extend the notion that the laminar distribution of neurons interconnecting visual areas provides an index of hierarchical distance (percentage of supragranular labeled neurons [SLN]). We show that: 1) SLN values constrain models of cortical hierarchy, revealing previously unsuspected areal relations; 2) SLN reflects the operation of a combinatorial distance rule acting differentially on sets of connections between areas; 3) Supragranular layers contain highly segregated bottom-up and top-down streams, both of which exhibit point-to-point connectivity. This contrasts with the infragranular layers, which contain diffuse bottom-up and top-down streams; 4) Cell filling of the parent neurons of FF and FB pathways provides further evidence of compartmentalization; 5) FF pathways have higher weights, cross fewer hierarchical levels, and are less numerous than FB pathways. Taken together, the present results suggest that cortical hierarchies are built from supra- and infragranular counterstreams. This compartmentalized dual counterstream organization allows point-to-point connectivity in both bottom-up and top-down directions. PMID:23983048

  2. Alpha-contingent EEG feedback reduces SPECT rCBF variability

    DEFF Research Database (Denmark)

    McLaughlin, Thomas; Steinberg, Bruce; Mulholland, Thomas

    2005-01-01

    EEG feedback methods, which link the occurrence of alpha to the presentation of repeated visual stimuli, reduce the relative variability of subsequent, alpha-blocking event durations. The temporal association between electro-cortical field activation and regional cerebral blood flow (rCBF) led us...... to investigate whether the reduced variability of alpha-blocking durations with feedback is associated with a reduction in rCBF variability. Reduced variability in the rCBF response domain under EEG feedback control might have methodological implications for future brain-imaging studies. Visual stimuli were...... to quantify the variance-reducing effects of ACS across multiple, distributed areas of the brain. Both EEG and rCBF measures demonstrated decreased variability under ACS. This improved control was seen for localized as well as anatomically distributed rCBF measures....

  3. Visualization of hierarchically structured information for human-computer interaction

    Energy Technology Data Exchange (ETDEWEB)

    Cheon, Suh Hyun; Lee, J. K.; Choi, I. K.; Kye, S. C.; Lee, N. K. [Dongguk University, Seoul (Korea)

    2001-11-01

Visualization techniques can be used to support operators' information navigation tasks in systems containing an enormous volume of information, such as the operating information display system and the computerized operating procedure system in the advanced control room of nuclear power plants. By offering an environment in which hierarchically structured information is easy to understand, these techniques can reduce the operator's supplementary navigation task load. As a result, operators can pay more attention to the primary tasks and ultimately improve cognitive task performance. In this report, an interface was designed and implemented using a hyperbolic visualization technique, which is expected to be applied as a means of optimizing operators' information navigation tasks. 15 refs., 19 figs., 32 tabs. (Author)

  4. Effects of head-slaved navigation and the use of teleports on spatial orientation in virtual environments

    NARCIS (Netherlands)

    Bakker, N.H.; Passenier, P.O.; Werkhoven, P.J.

    2003-01-01

    The type of navigation interface in a virtual environment (VE) - head slaved or indirect - determines whether or not proprioceptive feedback stimuli are present during movement. In addition, teleports can be used, which do not provide continuous movement but, rather, discontinuously displace the

  5. Olfaction Contributes to Pelagic Navigation in a Coastal Shark.

    Science.gov (United States)

    Nosal, Andrew P; Chao, Yi; Farrara, John D; Chai, Fei; Hastings, Philip A

    2016-01-01

    How animals navigate the constantly moving and visually uniform pelagic realm, often along straight paths between distant sites, is an enduring mystery. The mechanisms enabling pelagic navigation in cartilaginous fishes are particularly understudied. We used shoreward navigation by leopard sharks (Triakis semifasciata) as a model system to test whether olfaction contributes to pelagic navigation. Leopard sharks were captured alongshore, transported 9 km offshore, released, and acoustically tracked for approximately 4 h each until the transmitter released. Eleven sharks were rendered anosmic (nares occluded with cotton wool soaked in petroleum jelly); fifteen were sham controls. Mean swimming depth was 28.7 m. On average, tracks of control sharks ended 62.6% closer to shore, following relatively straight paths that were significantly directed over spatial scales exceeding 1600 m. In contrast, tracks of anosmic sharks ended 37.2% closer to shore, following significantly more tortuous paths that approximated correlated random walks. These results held after swimming paths were adjusted for current drift. This is the first study to demonstrate experimentally that olfaction contributes to pelagic navigation in sharks, likely mediated by chemical gradients as has been hypothesized for birds. Given the similarities between the fluid three-dimensional chemical atmosphere and ocean, further research comparing swimming and flying animals may lead to a unifying paradigm explaining their extraordinary navigational abilities.
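    One simple way to quantify the contrast drawn above between the "relatively straight" control paths and the "more tortuous" anosmic paths is a straightness index computed from the tracked positions. The sketch below is illustrative only, with hypothetical coordinates, and assumes the track has already been adjusted for current drift as in the study.

```python
import numpy as np

def straightness_index(track_xy):
    """Net displacement divided by total path length: 1 for a perfectly
    straight path, values near 0 for highly tortuous (random-walk-like) paths."""
    track_xy = np.asarray(track_xy, dtype=float)
    step_lengths = np.linalg.norm(np.diff(track_xy, axis=0), axis=1)
    net_displacement = np.linalg.norm(track_xy[-1] - track_xy[0])
    return net_displacement / step_lengths.sum()

# Hypothetical tracks (metres): a nearly straight shoreward path vs. a meandering one.
straight = [(0, 0), (400, 50), (800, 80), (1600, 60)]
tortuous = [(0, 0), (300, 400), (-100, 600), (200, 200), (500, 500)]
print(straightness_index(straight), straightness_index(tortuous))
```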

  6. THE DEVELOPMENT OF NAVIGATION SYSTEMS IN CIVIL AVIATION

    Directory of Open Access Journals (Sweden)

    Anastasiya Sergeyevna Stepanenko

    2017-01-01

Full Text Available The article describes the history of the formation of navigation systems, such as the "Cicada" system, which at that time could compete with the US "Transit", the European and Chinese Beidou navigation systems, and the Japanese Quasi-Zenith system. Detailed information is provided about improvements to the American GPS system, launched in 1978 and still operating today. The characteristics of GPS-III's counterpart "Transit", which became the platform for creating such modern global navigation systems as GLONASS and GPS, are given. The process of implementation of the GLONASS system in civil aviation, its segments, functions and features are considered. The stages of formation of the GLONASS satellite system's orbital grouping are analyzed. The author draws an analogy with the American GPS system and with the GALILEO system, which has a number of additional advantages, and remarks on the features of this European global navigation system. One of the goals of GALILEO is to provide a high-precision positioning system on which Europe can rely independently of the Russian GLONASS system, the US GPS and the Chinese Beidou. GALILEO offers a unique global search and rescue function called SAR, with an important feedback function. The peculiarities of the navigation system developed by Chinese scientists, the Beidou satellite system, and of the Japanese global Quasi-Zenith Satellite System are described. Development tendencies of global navigation systems are considered. The author dwells upon the path towards globalization of world satellite systems, a good example of which is the trend towards GLONASS and Beidou unification. Most attention is paid to the latest development of Russian scientists, the autonomous navigation system SINS 2015, a strap-down inertial navigation system that allows an aircraft to be navigated without being connected to a global satellite system. The ways of further development of navigation systems in Russia are determined. The two naturally opposite directions are

  7. Project Management Using Modern Guidance, Navigation and Control Theory

    Science.gov (United States)

    Hill, Terry R.

    2011-01-01

Implementing guidance, navigation, and control (GN&C) theory principles and applying them to the human element of project management and control is not a new concept. As both the literature on the subject and the real-world applications are neither readily available nor comprehensive with regard to how such principles might be applied, this paper has been written to educate the project manager on the "laws of physics" of his or her project (not to teach a GN&C engineer how to become a project manager) and to provide an intuitive, mathematical explanation as to the control and behavior of projects. This paper will also address how the fundamental principles of modern GN&C were applied to the National Aeronautics and Space Administration's (NASA) Constellation Program (CxP) space suit project, ensuring the project was managed within cost, schedule, and budget. A project that is akin to a physical system can be modeled and managed using the same overarching principles of GN&C that would be used if that project were a complex vehicle, a complex system(s), or complex software with time-varying processes (at times nonlinear) containing multiple data inputs of varying accuracy and a range of operating points. The classic GN&C theory approach could thus be applied to small, well-defined projects; yet when working with larger, multiyear projects necessitating multiple organizational structures, numerous external influences, and a multitude of diverse resources, modern GN&C principles are required to model and manage the project. The fundamental principles of a GN&C system incorporate these basic concepts: State, Behavior, Feedback Control, Navigation, Guidance and Planning Logic systems. The State of a system defines the aspects of the system that can change over time; e.g., position, velocity, acceleration, coordinate-based attitude, and temperature, etc. The Behavior of the system focuses more on what changes are possible within the system; this is denoted in the state

  8. A Behaviour-Based Architecture for Mapless Navigation Using Vision

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Guzel

    2012-04-01

Full Text Available Autonomous robots operating in an unknown and uncertain environment must be able to cope with dynamic changes to that environment. For a mobile robot in a cluttered environment to navigate successfully to a goal while avoiding obstacles is a challenging problem. This paper presents a new behaviour-based architecture design for mapless navigation. The architecture is composed of several modules and each module generates behaviours. A novel method, inspired by a visual homing strategy, is adapted to a monocular vision-based system to overcome goal-based navigation problems. A neural network-based obstacle avoidance strategy is designed using a 2-D scanning laser. To evaluate the performance of the proposed architecture, the system has been tested using Microsoft Robotics Studio (MRS), which is a very powerful 3D simulation environment. In addition, real experiments to guide a Pioneer 3-DX mobile robot, equipped with a pan-tilt-zoom camera, in a cluttered environment are presented. The analysis of the results allows us to validate the proposed behaviour-based navigation strategy.

  9. Development of a force-reflecting robotic platform for cardiac catheter navigation.

    Science.gov (United States)

    Park, Jun Woo; Choi, Jaesoon; Pak, Hui-Nam; Song, Seung Joon; Lee, Jung Chan; Park, Yongdoo; Shin, Seung Min; Sun, Kyung

    2010-11-01

Electrophysiological catheters are used for both diagnostics and clinical intervention. To facilitate more accurate and precise catheter navigation, robotic cardiac catheter navigation systems have been developed and commercialized. The authors have developed a novel force-reflecting robotic catheter navigation system. The system is a network-based master-slave configuration having a 3-degree of freedom robotic manipulator for operation with a conventional cardiac ablation catheter. The master manipulator implements a haptic user interface device with force feedback using a force or torque signal either measured with a sensor or estimated from the motor current signal in the slave manipulator. The slave manipulator is a robotic motion control platform on which the cardiac ablation catheter is mounted. The catheter motions (forward and backward movement, rolling, and catheter tip bending) are controlled by electromechanical actuators located in the slave manipulator. The control software runs on a real-time operating system-based workstation and implements the master/slave motion synchronization control of the robot system. The master/slave motion synchronization response was assessed with step, sinusoidal, and arbitrarily varying motion commands, and showed satisfactory performance with insignificant steady-state motion error. The current system successfully implemented the motion control function and will undergo safety and performance evaluation by means of animal experiments. Further studies on the force feedback control algorithm and on an active motion catheter with an embedded actuation mechanism are underway. © 2010, Copyright the Authors. Artificial Organs © 2010, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  10. Deep imitation learning for 3D navigation tasks.

    Science.gov (United States)

    Hussein, Ahmed; Elyan, Eyad; Gaber, Mohamed Medhat; Jayne, Chrisina

    2018-01-01

Deep learning techniques have shown success in learning from raw high-dimensional data in various applications. While deep reinforcement learning has recently gained popularity as a method to train intelligent agents, utilizing deep learning in imitation learning has been scarcely explored. Imitation learning can be an efficient method to teach intelligent agents by providing a set of demonstrations to learn from. However, generalizing to situations that are not represented in the demonstrations can be challenging, especially in 3D environments. In this paper, we propose a deep imitation learning method to learn navigation tasks from demonstrations in a 3D environment. The supervised policy is refined using active learning in order to generalize to unseen situations. This approach is compared to two popular deep reinforcement learning techniques: deep Q-networks and asynchronous advantage actor-critic (A3C). The proposed method as well as the reinforcement learning methods employ deep convolutional neural networks and learn directly from raw visual input. Methods for combining learning from demonstrations and experience are also investigated. This combination aims to join the generalization ability of learning by experience with the efficiency of learning by imitation. The proposed methods are evaluated on 4 navigation tasks in a 3D simulated environment. Navigation tasks are a typical problem that is relevant to many real applications. They pose the challenge of requiring demonstrations of long trajectories to reach the target and only providing delayed rewards (usually terminal) to the agent. The experiments show that the proposed method can successfully learn navigation tasks from raw visual input while learning from experience methods fail to learn an effective policy. Moreover, it is shown that active learning can significantly improve the performance of the initially learned policy using a small number of active samples.
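    As a rough illustration of the learning-from-demonstrations setup described above (not the authors' implementation), the sketch below trains a small convolutional policy by behaviour cloning on hypothetical (frame, expert action) pairs using PyTorch; the network sizes, action count, and data are placeholders, and the active-learning refinement step is not shown.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Minimal CNN policy: raw 84x84 RGB frame -> scores over discrete actions."""
    def __init__(self, n_actions=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(64 * 9 * 9, 256), nn.ReLU(),
                                  nn.Linear(256, n_actions))

    def forward(self, x):
        return self.head(self.features(x))

# Behaviour cloning on one hypothetical mini-batch of demonstration pairs.
policy = PolicyNet()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
frames = torch.rand(32, 3, 84, 84)             # placeholder demonstration frames
expert_actions = torch.randint(0, 4, (32,))    # placeholder expert action labels
for _ in range(10):                            # a few supervised updates
    optimizer.zero_grad()
    loss = loss_fn(policy(frames), expert_actions)
    loss.backward()
    optimizer.step()
```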

  11. Traffic Visualization

    DEFF Research Database (Denmark)

    Picozzi, Matteo; Verdezoto, Nervo; Pouke, Matti

    2013-01-01

In this paper, we present a space-time visualization to provide a city's decision-makers with the ability to analyse and uncover important "city events" in an understandable manner for city planning activities. An interactive Web mashup visualization is presented that integrates several visualization...... techniques to give a rapid overview of traffic data. We illustrate our approach as a case study for traffic visualization systems, using datasets from the city of Oulu that can be extended to other city planning activities. We also report the feedback of real users (traffic management employees, traffic police...

  12. The effects of link format and screen location on visual search of web pages.

    Science.gov (United States)

    Ling, Jonathan; Van Schaik, Paul

    2004-06-22

    Navigation of web pages is of critical importance to the usability of web-based systems such as the World Wide Web and intranets. The primary means of navigation is through the use of hyperlinks. However, few studies have examined the impact of the presentation format of these links on visual search. The present study used a two-factor mixed measures design to investigate whether there was an effect of link format (plain text, underlined, bold, or bold and underlined) upon speed and accuracy of visual search and subjective measures in both the navigation and content areas of web pages. An effect of link format on speed of visual search for both hits and correct rejections was found. This effect was observed in the navigation and the content areas. Link format did not influence accuracy in either screen location. Participants showed highest preference for links that were in bold and underlined, regardless of screen area. These results are discussed in the context of visual search processes and design recommendations are given.

  13. Task-dependent vestibular feedback responses in reaching.

    Science.gov (United States)

    Keyser, Johannes; Medendorp, W Pieter; Selen, Luc P J

    2017-07-01

    When reaching for an earth-fixed object during self-rotation, the motor system should appropriately integrate vestibular signals and sensory predictions to compensate for the intervening motion and its induced inertial forces. While it is well established that this integration occurs rapidly, it is unknown whether vestibular feedback is specifically processed dependent on the behavioral goal. Here, we studied whether vestibular signals evoke fixed responses with the aim to preserve the hand trajectory in space or are processed more flexibly, correcting trajectories only in task-relevant spatial dimensions. We used galvanic vestibular stimulation to perturb reaching movements toward a narrow or a wide target. Results show that the same vestibular stimulation led to smaller trajectory corrections to the wide than the narrow target. We interpret this reduced compensation as a task-dependent modulation of vestibular feedback responses, tuned to minimally intervene with the task-irrelevant dimension of the reach. These task-dependent vestibular feedback corrections are in accordance with a central prediction of optimal feedback control theory and mirror the sophistication seen in feedback responses to mechanical and visual perturbations of the upper limb. NEW & NOTEWORTHY Correcting limb movements for external perturbations is a hallmark of flexible sensorimotor behavior. While visual and mechanical perturbations are corrected in a task-dependent manner, it is unclear whether a vestibular perturbation, naturally arising when the body moves, is selectively processed in reach control. We show, using galvanic vestibular stimulation, that reach corrections to vestibular perturbations are task dependent, consistent with a prediction of optimal feedback control theory. Copyright © 2017 the American Physiological Society.

  14. Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach.

    Science.gov (United States)

    Kong, Weiwei; Hu, Tianjiang; Zhang, Daibing; Shen, Lincheng; Zhang, Jianwei

    2017-06-19

One of the greatest challenges for fixed-wing unmanned aircraft vehicles (UAVs) is safe landing. Hereafter, an on-ground deployed visual approach is developed in this paper. This approach is particularly suitable for landing in global navigation satellite system (GNSS)-denied environments. As for applications, the deployed guidance system makes full use of ground computing resources and feeds back the aircraft's real-time localization to its on-board autopilot. Under such circumstances, a separate long baseline stereo architecture is proposed, offering an extendable baseline and a wide-angle field of view (FOV) in contrast to traditional fixed-baseline schemes. Furthermore, an accuracy evaluation of the new type of architecture is conducted by theoretical modeling and computational analysis. Dataset-driven experimental results demonstrate the feasibility and effectiveness of the developed approach.

  15. Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach

    Directory of Open Access Journals (Sweden)

    Weiwei Kong

    2017-06-01

Full Text Available One of the greatest challenges for fixed-wing unmanned aircraft vehicles (UAVs) is safe landing. Hereafter, an on-ground deployed visual approach is developed in this paper. This approach is particularly suitable for landing in global navigation satellite system (GNSS)-denied environments. As for applications, the deployed guidance system makes full use of ground computing resources and feeds back the aircraft’s real-time localization to its on-board autopilot. Under such circumstances, a separate long baseline stereo architecture is proposed, offering an extendable baseline and a wide-angle field of view (FOV) in contrast to traditional fixed-baseline schemes. Furthermore, an accuracy evaluation of the new type of architecture is conducted by theoretical modeling and computational analysis. Dataset-driven experimental results demonstrate the feasibility and effectiveness of the developed approach.
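    The benefit of an extendable long baseline can be seen from the standard rectified-stereo relations: depth is Z = fB/d, and a disparity error of dd pixels propagates to a depth error of roughly Z²·dd/(fB), so lengthening the baseline B directly reduces far-range error. The sketch below only illustrates these textbook relations; the focal length, range, and baselines are hypothetical numbers, not values from the paper.

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_error(depth_m, focal_px, baseline_m, disparity_err_px=1.0):
    """First-order depth uncertainty: dZ ~= Z**2 * dd / (f * B)."""
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# Illustrative comparison at 300 m range with a 1500-pixel focal length:
for baseline in (0.5, 20.0):   # fixed on-board rig vs. extendable ground baseline
    print(baseline, depth_error(300.0, 1500.0, baseline))
```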

  16. MONTE: the next generation of mission design and navigation software

    Science.gov (United States)

    Evans, Scott; Taber, William; Drain, Theodore; Smith, Jonathon; Wu, Hsi-Cheng; Guevara, Michelle; Sunseri, Richard; Evans, James

    2018-03-01

    The Mission analysis, Operations and Navigation Toolkit Environment (MONTE) (Sunseri et al. in NASA Tech Briefs 36(9), 2012) is an astrodynamic toolkit produced by the Mission Design and Navigation Software Group at the Jet Propulsion Laboratory. It provides a single integrated environment for all phases of deep space and Earth orbiting missions. Capabilities include: trajectory optimization and analysis, operational orbit determination, flight path control, and 2D/3D visualization. MONTE is presented to the user as an importable Python language module. This allows a simple but powerful user interface via CLUI or script. In addition, the Python interface allows MONTE to be used seamlessly with other canonical scientific programming tools such as SciPy, NumPy, and Matplotlib. MONTE is the prime operational orbit determination software for all JPL navigated missions.

  17. Footprints: A Visual Search Tool that Supports Discovery and Coverage Tracking.

    Science.gov (United States)

    Isaacs, Ellen; Domico, Kelly; Ahern, Shane; Bart, Eugene; Singhal, Mudita

    2014-12-01

    Searching a large document collection to learn about a broad subject involves the iterative process of figuring out what to ask, filtering the results, identifying useful documents, and deciding when one has covered enough material to stop searching. We are calling this activity "discoverage," discovery of relevant material and tracking coverage of that material. We built a visual analytic tool called Footprints that uses multiple coordinated visualizations to help users navigate through the discoverage process. To support discovery, Footprints displays topics extracted from documents that provide an overview of the search space and are used to construct searches visuospatially. Footprints allows users to triage their search results by assigning a status to each document (To Read, Read, Useful), and those status markings are shown on interactive histograms depicting the user's coverage through the documents across dates, sources, and topics. Coverage histograms help users notice biases in their search and fill any gaps in their analytic process. To create Footprints, we used a highly iterative, user-centered approach in which we conducted many evaluations during both the design and implementation stages and continually modified the design in response to feedback.

  18. Navigation and Image Injection for Control of Bone Removal and Osteotomy Planes in Spine Surgery.

    Science.gov (United States)

    Kosterhon, Michael; Gutenberg, Angelika; Kantelhardt, Sven Rainer; Archavlis, Elefterios; Giese, Alf

    2017-04-01

    In contrast to cranial interventions, neuronavigation in spinal surgery is used in few applications, not tapping into its full technological potential. We have developed a method to preoperatively create virtual resection planes and volumes for spinal osteotomies and export 3-D operation plans to a navigation system controlling intraoperative visualization using a surgical microscope's head-up display. The method was developed using a Sawbone ® model of the lumbar spine, demonstrating feasibility with high precision. Computer tomographic and magnetic resonance image data were imported into Amira ® , a 3-D visualization software. Resection planes were positioned, and resection volumes representing intraoperative bone removal were defined. Fused to the original Digital Imaging and Communications in Medicine data, the osteotomy planes were exported to the cranial version of a Brainlab ® navigation system. A navigated surgical microscope with video connection to the navigation system allowed intraoperative image injection to visualize the preplanned resection planes. The workflow was applied to a patient presenting with a congenital hemivertebra of the thoracolumbar spine. Dorsal instrumentation with pedicle screws and rods was followed by resection of the deformed vertebra guided by the in-view image injection of the preplanned resection planes into the optical path of a surgical microscope. Postoperatively, the patient showed no neurological deficits, and the spine was found to be restored in near physiological posture. The intraoperative visualization of resection planes in a microscope's head-up display was found to assist the surgeon during the resection of a complex-shaped bone wedge and may help to further increase accuracy and patient safety. Copyright © 2017 by the Congress of Neurological Surgeons

  19. Multi-rover navigation on the lunar surface

    Science.gov (United States)

    Dabrowski, Borys; Banaszkiewicz, Marek

    2008-07-01

The paper presents a method for determining the accurate position of a target (rover, immobile sensor, astronaut) on the surface of the Moon or another celestial body devoid of navigation infrastructure (such as a Global Positioning System), by using a group of self-calibrating rovers, which serve as mobile reference points. The rovers are equipped with low-precision clocks, synchronized by an external broadcast signal, to measure the moments of receiving radio signals sent by the localized target. Based on the registered times, the distances between the transmitter and the receivers installed on the beacons are calculated. Each rover determines and corrects its own absolute position and orientation by using odometry navigation and measurements of relative distances and angles to other mobile reference points. Accuracy of navigation has been improved by the use of a calibration algorithm based on the extended Kalman filter, which uses internal encoder readings as inputs and relative measurements of distances and orientations between beacons as feedback information. The key idea in obtaining reliable values of the absolute position and orientation of beacons is to first calibrate one of the rovers, using the remaining ones as reference points, and then allow the whole group to move together and calibrate all the rovers in motion. We consider a number of cases in which basic modeling parameters such as terrain roughness, formation size and shape, as well as availability of distance and angle measurements, are varied.
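    A minimal sketch of the core geometric step, estimating the target position from the distances measured to the rover beacons, is given below (least-squares trilateration in 2-D). The beacon coordinates and ranges are hypothetical, and the paper's extended Kalman filter for in-motion calibration of the beacons themselves is not reproduced here.

```python
import numpy as np

def locate_target(beacons, distances):
    """Least-squares trilateration: estimate a 2-D target position from
    distances to known beacon (rover) positions. Subtracting the first
    beacon's range equation linearizes the problem into A x = b."""
    beacons = np.asarray(beacons, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0 = beacons[0]
    A = 2.0 * (beacons[1:] - beacons[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1) - (x0 ** 2 + y0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical example: four rover beacons and a target at (3, 4) metres.
beacons = [(0, 0), (10, 0), (0, 10), (10, 10)]
target = (3.0, 4.0)
dists = [np.hypot(target[0] - bx, target[1] - by) for bx, by in beacons]
print(locate_target(beacons, dists))   # ~ [3. 4.]
```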

  20. Robust exponential stabilization of nonholonomic wheeled mobile robots with unknown visual parameters

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

The visual servoing stabilization of a nonholonomic mobile robot with unknown camera parameters is investigated. A new kind of uncertain chained model of a nonholonomic kinematic system is obtained based on visual feedback and the standard chained form of a type (1,2) mobile robot. Then, a novel time-varying feedback controller is proposed for exponentially stabilizing the position and orientation of the robot using visual feedback and a switching strategy when the camera parameters are not known. The exponential s...

  1. Alterations in Neural Control of Constant Isometric Contraction with the Size of Error Feedback.

    Directory of Open Access Journals (Sweden)

    Ing-Shiou Hwang

Full Text Available Discharge patterns from a population of motor units (MUs) were estimated with multi-channel surface electromyogram and signal processing techniques to investigate parametric differences in low-frequency force fluctuations, MU discharges, and the force-discharge relation during static force-tracking with varying sizes of execution error presented via visual feedback. Fourteen healthy adults produced isometric force at 10% of maximal voluntary contraction through index abduction under three visual conditions that scaled execution errors with different amplification factors. Error-augmentation feedback that used a high amplification factor (HAF) to potentiate visualized error size resulted in higher sample entropy, mean frequency, ratio of high-frequency components, and spectral dispersion of force fluctuations than those of error-reducing feedback using a low amplification factor (LAF). In the HAF condition, MUs with relatively high recruitment thresholds in the dorsal interosseous muscle exhibited a larger coefficient of variation for inter-spike intervals and a greater spectral peak of the pooled MU coherence at 13-35 Hz than did those in the LAF condition. Manipulation of the size of error feedback altered the force-discharge relation, which was characterized with non-linear approaches such as mutual information and cross sample entropy. The association of force fluctuations and global discharge trace decreased with increasing error amplification factor. Our findings provide direct neurophysiological evidence that favors motor training using error-augmentation feedback. Amplification of the visualized error size of visual feedback could enrich force gradation strategies during static force-tracking, pertaining to selective increases in the discharge variability of higher-threshold MUs that receive greater common oscillatory inputs in the β-band.
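    Two of the variability measures named above, the coefficient of variation of inter-spike intervals and the sample entropy of the force signal, can be sketched as follows. The parameter choices (m = 2, r = 0.2·SD) are common defaults rather than the values used in the study.

```python
import numpy as np

def cv_of_isi(spike_times):
    """Coefficient of variation of inter-spike intervals (discharge variability)."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return np.std(isi) / np.mean(isi)

def sample_entropy(signal, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) of a 1-D signal: -ln(A/B), where B and A
    count template matches of length m and m+1 (self-matches excluded)."""
    x = np.asarray(signal, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def count_matches(length):
        # Use n - m templates for both lengths so the counts are comparable.
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev
            count += np.sum(dist <= r) - 1   # exclude the self-match
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Hypothetical data: irregular spike train and a noisy force trace.
rng = np.random.default_rng(1)
print(cv_of_isi(np.cumsum(rng.exponential(0.08, size=200))))
print(sample_entropy(np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.normal(size=500)))
```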

  2. Evaluation of navigation interfaces in virtual environments

    Science.gov (United States)

    Mestre, Daniel R.

    2014-02-01

When users are immersed in cave-like virtual reality systems, navigational interfaces have to be used when the size of the virtual environment becomes larger than the physical extent of the cave floor. However, using navigation interfaces, physically static users experience self-motion (visually-induced vection). As a consequence, sensorial incoherence between vision (indicating self-motion) and other proprioceptive inputs (indicating immobility) can make them feel dizzy and disoriented. We tested, in two experimental studies, different locomotion interfaces. The objective was twofold: testing spatial learning and cybersickness. In a first experiment, using first-person navigation with a flystick®, we tested the effect of sensorial aids, a spatialized sound or guiding arrows on the ground, attracting the user toward the goal of the navigation task. Results revealed that sensorial aids tended to negatively impact spatial learning. Moreover, subjects reported significant levels of cybersickness. In a second experiment, we tested whether such negative effects could be due to poorly controlled rotational motion during simulated self-motion. Subjects used a gamepad, in which rotational and translational displacements were independently controlled by two joysticks. Furthermore, we tested first- versus third-person navigation. No significant difference was observed between these two conditions. Overall, cybersickness tended to be lower, as compared to experiment 1, but the difference was not significant. Future research should evaluate further the hypothesis of the role of passively perceived optical flow in cybersickness, by manipulating the virtual environment's structure. It also seems that video-gaming experience might be involved in the user's sensitivity to cybersickness.

  3. Visual navigation in insects: coupling of egocentric and geocentric information

    OpenAIRE

    Wehner, R; Michel, B; Antonsen, P

    1996-01-01

    Social hymenopterans such as bees and ants are central-place foragers; they regularly depart from and return to fixed positions in their environment. In returning to the starting point of their foraging excursion or to any other point, they could resort to two fundamentally different ways of navigation by using either egocentric or geocentric systems of reference. In the first case, they would rely on information continuously collected en route (path integration, dead reckoning), i.e. integra...

  4. Intensive treatment with ultrasound visual feedback for speech sound errors in childhood apraxia

    Directory of Open Access Journals (Sweden)

    Jonathan L Preston

    2016-08-01

Full Text Available Ultrasound imaging is an adjunct to traditional speech therapy that has been shown to be beneficial in the remediation of speech sound errors. Ultrasound biofeedback can be utilized during therapy to provide clients with additional knowledge about their tongue shapes when attempting to produce sounds that are in error. The additional feedback may assist children with childhood apraxia of speech in stabilizing motor patterns, thereby facilitating more consistent and accurate productions of sounds and syllables. However, due to its specialized nature, ultrasound visual feedback is a technology that is not widely available to clients. Short-term intensive treatment programs are one option that can be utilized to expand access to ultrasound biofeedback. Schema-based motor learning theory suggests that short-term intensive treatment programs (massed practice) may assist children in acquiring more accurate motor patterns. In this case series, three participants, ages 10-14, diagnosed with childhood apraxia of speech attended 16 hours of speech therapy over a two-week period to address residual speech sound errors. Two participants had distortions on rhotic sounds, while the third participant demonstrated lateralization of sibilant sounds. During therapy, cues were provided to assist participants in obtaining a tongue shape that facilitated a correct production of the erred sound. Additional practice without ultrasound was also included. Results suggested that all participants showed signs of acquisition of sounds in error. Generalization and retention results were mixed. One participant showed generalization and retention of sounds that were treated; one showed generalization but limited retention; and the third showed no evidence of generalization or retention. Individual characteristics that may facilitate generalization are discussed. Short-term intensive treatment programs using ultrasound biofeedback may result in the acquisition of more accurate motor

  5. Navigating the MESSENGER Spacecraft through End of Mission

    Science.gov (United States)

    Bryan, C. G.; Williams, B. G.; Williams, K. E.; Taylor, A. H.; Carranza, E.; Page, B. R.; Stanbridge, D. R.; Mazarico, E.; Neumann, G. A.; O'Shaughnessy, D. J.; McAdams, J. V.; Calloway, A. B.

    2015-12-01

    The MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft orbited the planet Mercury from March 2011 until the end of April 2015, when it impacted the planetary surface after propellant reserves used to maintain the orbit were depleted. This highly successful mission was led by the principal investigator, Sean C. Solomon, of Columbia University. The Johns Hopkins University Applied Physics Laboratory (JHU/APL) designed and assembled the spacecraft and served as the home for spacecraft operations. Spacecraft navigation for the entirety of the mission was provided by the Space Navigation and Flight Dynamics Practice (SNAFD) of KinetX Aerospace. Orbit determination (OD) solutions were generated through processing of radiometric tracking data provided by NASA's Deep Space Network (DSN) using the MIRAGE suite of orbital analysis tools. The MESSENGER orbit was highly eccentric, with periapsis at a high northern latitude and periapsis altitude in the range 200-500 km for most of the orbital mission phase. In a low-altitude "hover campaign" during the final two months of the mission, periapsis altitudes were maintained within a narrow range between about 35 km and 5 km. Navigating a spacecraft so near a planetary surface presented special challenges. Tasks required to meet those challenges included the modeling and estimation of Mercury's gravity field and of solar and planetary radiation pressure, and the design of frequent orbit-correction maneuvers. Superior solar conjunction also presented observational modeling issues. One key to the overall success of the low-altitude hover campaign was a strategy to utilize data from an onboard laser altimeter as a cross-check on the navigation team's reconstructed and predicted estimates of periapsis altitude. Data obtained from the Mercury Laser Altimeter (MLA) on a daily basis provided near-real-time feedback that proved invaluable in evaluating alternative orbit estimation strategies, and

  6. Effect of vibrotactile feedback on an EMG-based proportional cursor control system.

    Science.gov (United States)

    Li, Shunchong; Chen, Xingyu; Zhang, Dingguo; Sheng, Xinjun; Zhu, Xiangyang

    2013-01-01

Surface electromyography (sEMG) has been introduced into bio-mechatronic systems; however, most of them lack sensory feedback. In this paper, the effect of vibrotactile feedback on an EMG-based proportional cursor control system is investigated quantitatively. Simultaneous and proportional control signals are extracted from the EMG using a muscle synergy model. Different types of feedback, including vibrotactile feedback and visual feedback, are added, assessed, and compared with each other. The results show that vibrotactile feedback is capable of improving the performance of an EMG-based human-machine interface.
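    A minimal sketch of the synergy-based proportional mapping described above is given below, assuming non-negative matrix factorization of rectified EMG envelopes. The training data, channel count, and the direct synergy-to-axis mapping are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
emg_train = np.abs(rng.normal(size=(1000, 8)))   # hypothetical rectified EMG envelopes

# Muscle synergy model: EMG ~ activations @ synergy_weights, here with 2 synergies.
model = NMF(n_components=2, init="nndsvda", max_iter=500)
activations_train = model.fit_transform(emg_train)   # synergy activations over time
synergy_weights = model.components_                   # synergy-to-muscle weights

def cursor_velocity(emg_sample, gain=5.0):
    """Map one EMG envelope sample to a proportional 2-D cursor velocity.
    Here each synergy drives one axis directly; practical systems typically
    use antagonistic synergy pairs per axis for bidirectional control."""
    activation = model.transform(emg_sample.reshape(1, -1))[0]
    return gain * activation   # (vx, vy) in this toy mapping

print(cursor_velocity(emg_train[0]))
```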

  7. Introduction: A Brief Note on Navigation: How Do We Get Around These Days?

    Directory of Open Access Journals (Sweden)

    Sean Scanlan

    2011-01-01

    Full Text Available The theme of the first issue of nano is Navigation. The usual suspects of navigation come to mind, don’t they? Map, sextant, and compass are essential to understanding how humans find their way from one place to another. But these technologies are not new, and they may not be the most important navigational technologies. Fast-forward to our present age and we must contend with navigating screens, pads, pods, and other information technologies. In fact, if you are reading this, then you know how to navigate several systems: button, login, address, page. The three essays in the first issue of nano speak of navigation as a complex, varied process. First, in “Algebra of the Visual: The London Underground Map and the Art It Has Inspired,” Alan Ashton-Smith explores the organizing principles of London Underground maps. Second, Robert Tally’s “On Literary Cartography: Narrative as a Spatially Symbolic Act” encourages us to consider how narratives operate much as maps do. A. Kendra Greene’s “Five Directions” presents examples of real-world navigation in which getting from A to B involves fitting pieces together, synthesizing.

  8. Master VISUALLY Excel 2010

    CERN Document Server

    Marmel, Elaine

    2010-01-01

The complete visual reference on Excel basics. Aimed at visual learners who are seeking an all-in-one reference that provides in-depth coverage of Excel from a visual viewpoint, this resource delves into all the newest features of Excel 2010. You'll explore Excel with helpful step-by-step instructions that show you, rather than tell you, how to navigate Excel, work with PivotTables and PivotCharts, use macros to streamline work, and collaborate with other users in one document. This two-color guide features screen shots with specific, numbered instructions so you can learn the actions you need

  9. Finding Home: Landmark Ambiguity in Human Navigation

    Directory of Open Access Journals (Sweden)

    Simon Jetzschke

    2017-07-01

Full Text Available Memories of places often include landmark cues, i.e., information provided by the spatial arrangement of distinct objects with respect to the target location. To study how humans combine landmark information for navigation, we conducted two experiments: participants were either provided with auditory landmarks while walking in a large sports hall or with visual landmarks while walking on a virtual-reality treadmill setup. We found that participants cannot reliably locate their home position due to ambiguities in the spatial arrangement when only one or two uniform landmarks provide cues with respect to the target. With three visual landmarks that look alike, the task is solved without ambiguity, while audio landmarks need to play three unique sounds for a similar performance. This reduction in ambiguity through integration of landmark information from 1, 2, and 3 landmarks is well modeled using a probabilistic approach based on maximum likelihood estimation. Unlike any deterministic model of human navigation (based, e.g., on distance or angle information), this probabilistic model predicted both the precision and accuracy of the human homing performance. To further examine how landmark cues are integrated we introduced systematic conflicts in the visual landmark configuration between training of the home position and tests of the homing performance. The participants integrated the spatial information from each landmark near-optimally to reduce spatial variability. When the conflict becomes large, this integration breaks down and precision is sacrificed for accuracy. That is, participants return again closer to the home position, because they start ignoring the deviant third landmark. Relying on two instead of three landmarks, however, goes along with responses that are scattered over a larger area, thus leading to higher variability. To model the breakdown of integration with increasing conflict, the probabilistic model based on a
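    For independent Gaussian cues, the maximum-likelihood combination referred to above reduces to inverse-variance weighting; the sketch below illustrates why precision improves with each added landmark and how a strongly conflicting cue pulls the fused estimate unless it is discounted, as the participants appear to do at large conflicts. The numbers are hypothetical, not data from the study.

```python
import numpy as np

def ml_combine(estimates, variances):
    """Maximum-likelihood fusion of independent Gaussian position estimates:
    weights are inverse variances, and the fused variance shrinks as cues are added."""
    estimates = np.asarray(estimates, dtype=float)   # shape (n_landmarks, 2)
    w = 1.0 / np.asarray(variances, dtype=float)     # one weight per landmark
    fused = (w[:, None] * estimates).sum(axis=0) / w.sum()
    fused_var = 1.0 / w.sum()
    return fused, fused_var

# Three landmark-based estimates of the home position (metres, hypothetical);
# the third cue is placed in conflict with the other two.
est = [(0.1, -0.2), (0.0, 0.1), (1.5, 1.4)]
var = [0.25, 0.25, 0.25]
print(ml_combine(est, var))
print(ml_combine(est[:2], var[:2]))   # discounting the deviant cue: less bias, more variance
```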

  10. Mirror Visual Feedback Training Improves Intermanual Transfer in a Sport-Specific Task: A Comparison between Different Skill Levels

    Directory of Open Access Journals (Sweden)

    Fabian Steinberg

    2016-01-01

    Full Text Available Mirror training therapy is a promising tool for initiating neural plasticity and facilitating the recovery of motor skills after diseases such as stroke or hemiparesis, as it improves the intermanual transfer of fine motor skills in healthy people as well as in patients. This study evaluated whether these augmented performance improvements by mirror visual feedback (MVF) could be used for learning a sport-specific skill and if the effects are modulated by skill level. A sample of 39 young, healthy, and experienced basketball and handball players and 41 novices performed a stationary basketball dribble task at a mirror box in a standing position and received either MVF or direct feedback. After four training days using only the right hand, performance of both hands improved from pre- to posttest measurements. Only the left-hand (untrained) improvement of the experienced participants receiving MVF was more pronounced than that of the control group. This indicates that intermanual motor transfer can be improved by MVF in a sport-specific task. However, this effect cannot be generalized to motor learning per se since it is modulated by individuals’ skill level, a factor that might be considered in mirror therapy research.

  11. Coding the presence of visual objects in a recurrent neural network of visual cortex.

    Science.gov (United States)

    Zwickel, Timm; Wachtler, Thomas; Eckhorn, Reinhard

    2007-01-01

    Before we can recognize a visual object, our visual system has to segregate it from its background. This requires a fast mechanism for establishing the presence and location of objects independently of their identity. Recently, border-ownership neurons were recorded in monkey visual cortex which might be involved in this task [Zhou, H., Friedmann, H., von der Heydt, R., 2000. Coding of border ownership in monkey visual cortex. J. Neurosci. 20 (17), 6594-6611]. In order to explain the basic mechanisms required for fast coding of object presence, we have developed a neural network model of visual cortex consisting of three stages. Feed-forward and lateral connections support coding of Gestalt properties, including similarity, good continuation, and convexity. Neurons of the highest area respond to the presence of an object and encode its position, invariant of its form. Feedback connections to the lowest area facilitate orientation detectors activated by contours belonging to potential objects, and thus generate the experimentally observed border-ownership property. This feedback control acts fast and significantly improves the figure-ground segregation required for the consecutive task of object recognition.

  12. Feedforward and feedback motor control abnormalities implicate cerebellar dysfunctions in autism spectrum disorder.

    Science.gov (United States)

    Mosconi, Matthew W; Mohanty, Suman; Greene, Rachel K; Cook, Edwin H; Vaillancourt, David E; Sweeney, John A

    2015-02-04

    Sensorimotor abnormalities are common in autism spectrum disorder (ASD) and among the earliest manifestations of the disorder. They have been studied far less than the social-communication and cognitive deficits that define ASD, but a mechanistic understanding of sensorimotor abnormalities in ASD may provide key insights into the neural underpinnings of the disorder. In this human study, we examined rapid, precision grip force contractions to determine whether feedforward mechanisms supporting initial motor output before sensory feedback can be processed are disrupted in ASD. Sustained force contractions also were examined to determine whether reactive adjustments to ongoing motor behavior based on visual feedback are altered. Sustained force was studied across multiple force levels and visual gains to assess motor and visuomotor mechanisms, respectively. Primary force contractions of individuals with ASD showed greater peak rate of force increases and large transient overshoots. Individuals with ASD also showed increased sustained force variability that scaled with force level and was more severe when visual gain was highly amplified or highly degraded. When sustaining a constant force level, their reactive adjustments were more periodic than controls, and they showed increased reliance on slower feedback mechanisms. Feedforward and feedback mechanism alterations each were associated with more severe social-communication impairments in ASD. These findings implicate anterior cerebellar circuits involved in feedforward motor control and posterior cerebellar circuits involved in transforming visual feedback into precise motor adjustments in ASD. Copyright © 2015 the authors 0270-6474/15/352015-11$15.00/0.

  13. Interface Prostheses With Classifier-Feedback-Based User Training.

    Science.gov (United States)

    Fang, Yinfeng; Zhou, Dalin; Li, Kairu; Liu, Honghai

    2017-11-01

    It is evident that user training significantly affects performance of pattern-recognition-based myoelectric prosthetic device control. Despite plausible classification accuracy on offline datasets, online accuracy usually suffers from the changes in physiological conditions and electrode displacement. The user's ability to generate consistent electromyographic (EMG) patterns can be enhanced via proper user training strategies in order to improve online performance. This study proposes a clustering-feedback strategy that provides real-time feedback to users by means of a visualized online EMG signal input as well as the centroids of the training samples, whose dimensionality is reduced to a minimal number by dimension reduction. Clustering feedback provides a criterion that guides users to adjust motion gestures and muscle contraction forces intentionally. The experimental results demonstrate that hand motion recognition accuracy increases steadily along the progress of the clustering-feedback-based user training, while conventional classifier-feedback methods, i.e., label feedback, hardly achieve any improvement. The result concludes that the use of proper classifier feedback can accelerate the process of user training, and implies a promising future for amputees with limited or no experience in pattern-recognition-based prosthetic device manipulation.
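    The clustering-feedback idea in the abstract above can be sketched in a few lines. The sketch assumes windowed EMG feature vectors and uses PCA as the dimension reduction; the paper's actual feature set and reduction method are not specified here, so this is only an illustration of the general scheme of projecting training centroids and live samples into the same low-dimensional feedback plane.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_feedback_space(train_features, train_labels):
    """Reduce training EMG features to 2-D and keep one centroid per gesture."""
    train_features = np.asarray(train_features, dtype=float)
    train_labels = np.asarray(train_labels)
    pca = PCA(n_components=2).fit(train_features)
    reduced = pca.transform(train_features)
    centroids = {label: reduced[train_labels == label].mean(axis=0)
                 for label in np.unique(train_labels)}
    return pca, centroids

def project_online_window(pca, window_features):
    """Project one online EMG feature vector into the 2-D feedback plane,
    so the user can see how close the current pattern lies to each centroid."""
    window_features = np.asarray(window_features, dtype=float)
    return pca.transform(window_features.reshape(1, -1))[0]
```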

  14. GPS/MEMS IMU/Microprocessor Board for Navigation

    Science.gov (United States)

    Gender, Thomas K.; Chow, James; Ott, William E.

    2009-01-01

    A miniaturized instrumentation package comprising (1) a Global Positioning System (GPS) receiver, (2) an inertial measurement unit (IMU) consisting largely of surface-micromachined sensors of the microelectromechanical systems (MEMS) type, and (3) a microprocessor, all residing on a single circuit board, is part of the navigation system of a compact robotic spacecraft intended to be released from a larger spacecraft [e.g., the International Space Station (ISS)] for exterior visual inspection of the larger spacecraft. Variants of the package may also be useful in terrestrial collision-detection and -avoidance applications. The navigation solution obtained by integrating the IMU outputs is fed back to a correlator in the GPS receiver to aid in tracking GPS signals. The raw GPS and IMU data are blended in a Kalman filter to obtain an optimal navigation solution, which can be supplemented by range and velocity data obtained by use of (1) a stereoscopic pair of electronic cameras aboard the robotic spacecraft and/or (2) a laser dynamic range imager aboard the ISS. The novelty of the package lies mostly in those aspects of the design of the MEMS IMU that pertain to controlling mechanical resonances and stabilizing scale factors and biases.
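    The GPS/IMU blending mentioned above is conventionally done with a Kalman filter whose prediction step is driven by IMU data and whose update step is driven by GPS fixes. The following is a minimal one-dimensional, constant-velocity sketch of that idea; the matrices, noise values, and sample rate are illustrative assumptions and not the flight implementation described in the record.

```python
import numpy as np

class GpsImuFilter:
    """1D Kalman filter: IMU acceleration drives prediction, GPS position corrects."""

    def __init__(self, dt=0.01):
        self.x = np.zeros((2, 1))                  # state: [position, velocity]
        self.P = np.eye(2)                         # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]]) # constant-velocity transition
        self.B = np.array([[0.5 * dt**2], [dt]])   # acceleration input model
        self.H = np.array([[1.0, 0.0]])            # GPS measures position only
        self.Q = np.diag([1e-4, 1e-3])             # process noise (assumed)
        self.R = np.array([[4.0]])                 # GPS position variance, m^2 (assumed)

    def imu_predict(self, accel):
        """Propagate the state with one IMU acceleration sample."""
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def gps_update(self, position):
        """Correct the state with one GPS position fix."""
        y = np.array([[position]]) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
```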

  15. Feedback from visual cortical area 7 to areas 17 and 18 in cats: How neural web is woven during feedback.

    Science.gov (United States)

    Yang, X; Ding, H; Lu, J

    2016-01-15

    To investigate the feedback effect from area 7 to areas 17 and 18, intrinsic signal optical imaging combined with pharmacological and morphological methods and functional magnetic resonance imaging (fMRI) was employed. A spatial frequency-dependent decrease in response amplitude of orientation maps was observed in areas 17 and 18 when area 7 was inactivated by a local injection of GABA, or by a lesion induced by liquid nitrogen freezing. The pattern of orientation maps of areas 17 and 18 after the inactivation of area 7, if they were not totally blurred, paralleled the normal one. In morphological experiments, after a point in the shallow layers within the center of the cat's orientation column of area 17 was injected electrophoretically with HRP (horseradish peroxidase), three sequential patches in layers 1, 2 and 3 of area 7 were observed. Employing fMRI, it was found that area 7 feeds back mainly to areas 17 and 18 on the ipsilateral hemisphere. Therefore, our conclusions are: (1) feedback from area 7 to areas 17 and 18 is spatial frequency modulated; (2) feedback from area 7 to areas 17 and 18 occurs mainly ipsilaterally; (3) the histological feedback pattern from area 7 to area 17 is weblike. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  16. Autonomous vision-based navigation for proximity operations around binary asteroids

    Science.gov (United States)

    Gil-Fernandez, Jesus; Ortega-Hernando, Guillermo

    2018-06-01

    Future missions to small bodies demand higher level of autonomy in the Guidance, Navigation and Control system for higher scientific return and lower operational costs. Different navigation strategies have been assessed for ESA's asteroid impact mission (AIM). The main objective of AIM is the detailed characterization of binary asteroid Didymos. The trajectories for the proximity operations shall be intrinsically safe, i.e., no collision in presence of failures (e.g., spacecraft entering safe mode), perturbations (e.g., non-spherical gravity field), and errors (e.g., maneuver execution error). Hyperbolic arcs with sufficient hyperbolic excess velocity are designed to fulfil the safety, scientific, and operational requirements. The trajectory relative to the asteroid is determined using visual camera images. The ground-based trajectory prediction error at some points is comparable to the camera Field Of View (FOV). Therefore, some images do not contain the entire asteroid. Autonomous navigation can update the state of the spacecraft relative to the asteroid at higher frequency. The objective of the autonomous navigation is to improve the on-board knowledge compared to the ground prediction. The algorithms shall fit in off-the-shelf, space-qualified avionics. This note presents suitable image processing and relative-state filter algorithms for autonomous navigation in proximity operations around binary asteroids.

  17. Network interactions underlying mirror feedback in stroke: A dynamic causal modeling study

    Directory of Open Access Journals (Sweden)

    Soha Saleh

    2017-01-01

    Full Text Available Mirror visual feedback (MVF) is potentially a powerful tool to facilitate recovery of disordered movement and stimulate activation of under-active brain areas due to stroke. The neural mechanisms underlying MVF have therefore been a focus of recent inquiry. Although it is known that sensorimotor areas can be activated via mirror feedback, the network interactions driving this effect remain unknown. The aim of the current study was to fill this gap by using dynamic causal modeling to test the interactions between regions in the frontal and parietal lobes that may be important for modulating the activation of the ipsilesional motor cortex during mirror visual feedback of unaffected hand movement in stroke patients. Our intent was to distinguish between two theoretical neural mechanisms that might mediate ipsilateral activation in response to mirror-feedback: transfer of information between bilateral motor cortices versus recruitment of regions comprising an action observation network which in turn modulate the motor cortex. In an event-related fMRI design, fourteen chronic stroke subjects performed goal-directed finger flexion movements with their unaffected hand while observing real-time visual feedback of the corresponding (veridical) or opposite (mirror) hand in virtual reality. Among 30 plausible network models that were tested, the winning model revealed significant mirror feedback-based modulation of the ipsilesional motor cortex arising from the contralesional parietal cortex, in a region along the rostral extent of the intraparietal sulcus. No winning model was identified for the veridical feedback condition. We discuss our findings in the context of supporting the latter hypothesis, that mirror feedback-based activation of motor cortex may be attributed to engagement of a contralateral (contralesional) action observation network. These findings may have important implications for identifying putative cortical areas, which may be targeted with

  18. Empirical evaluation of a practical indoor mobile robot navigation method using hybrid maps

    DEFF Research Database (Denmark)

    Özkil, Ali Gürcan; Fan, Zhun; Xiao, Jizhong

    2010-01-01

    This video presents a practical navigation scheme for indoor mobile robots using hybrid maps. The method makes use of metric maps for local navigation and a topological map for global path planning. Metric maps are generated as occupancy grids by a laser range finder to represent local information... about partial areas. The global topological map is used to indicate the connectivity of the ‘places-of-interests’ in the environment and the interconnectivity of the local maps. Visual tags on the ceiling to be detected by the robot provide valuable information and contribute to reliable localization... The method is implemented successfully on a physical robot in a hospital environment, which provides a practical solution for indoor navigation.

  19. Looking Good, Feeling Good – Tac Map: a navigation system for the blind

    OpenAIRE

    Chamberlain, Paul; Dieng, Patricia

    2011-01-01

    This paper describes the research and development of a navigation system for the blind that provides a tactile and visual language that can be understood by both sighted and blind users. It describes key work and issues in the development of graphical symbols and in particular the pioneering work of Neurath's ISOTYPES, as well as more specific communication systems for blind people. The paper focuses on the development of "TacMap", a navigation system for the blind. User engagement has been f...

  20. Autonomous Navigation for Autonomous Underwater Vehicles Based on Information Filters and Active Sensing

    Directory of Open Access Journals (Sweden)

    Tianhong Yan

    2011-11-01

    Full Text Available This paper addresses an autonomous navigation method for the autonomous underwater vehicle (AUV) C-Ranger applying information-filter-based simultaneous localization and mapping (SLAM), and its sea trial experiments in Tuandao Bay (Shandong Province, P.R. China). Weak links in the information matrix in an extended information filter (EIF) can be pruned to achieve an efficient approach, the sparse EIF algorithm (SEIF-SLAM). All the basic update formulae can be implemented in constant time irrespective of the size of the map; hence the computational complexity is significantly reduced. The mechanical scanning imaging sonar is chosen as the active sensing device for the underwater vehicle, and a compensation method based on feedback of the AUV pose is presented to overcome distortion of the acoustic images due to the vehicle motion. In order to verify the feasibility of the navigation methods proposed for the C-Ranger, a sea trial was conducted in Tuandao Bay. Experimental results and analysis show that the proposed navigation approach based on SEIF-SLAM improves the accuracy of the navigation compared with the conventional method; moreover, the algorithm has a low computational cost when compared with EKF-SLAM.

  1. Autonomous navigation for autonomous underwater vehicles based on information filters and active sensing.

    Science.gov (United States)

    He, Bo; Zhang, Hongjin; Li, Chao; Zhang, Shujing; Liang, Yan; Yan, Tianhong

    2011-01-01

    This paper addresses an autonomous navigation method for the autonomous underwater vehicle (AUV) C-Ranger applying information-filter-based simultaneous localization and mapping (SLAM), and its sea trial experiments in Tuandao Bay (Shandong Province, P.R. China). Weak links in the information matrix in an extended information filter (EIF) can be pruned to achieve an efficient approach, the sparse EIF algorithm (SEIF-SLAM). All the basic update formulae can be implemented in constant time irrespective of the size of the map; hence the computational complexity is significantly reduced. The mechanical scanning imaging sonar is chosen as the active sensing device for the underwater vehicle, and a compensation method based on feedback of the AUV pose is presented to overcome distortion of the acoustic images due to the vehicle motion. In order to verify the feasibility of the navigation methods proposed for the C-Ranger, a sea trial was conducted in Tuandao Bay. Experimental results and analysis show that the proposed navigation approach based on SEIF-SLAM improves the accuracy of the navigation compared with the conventional method; moreover, the algorithm has a low computational cost when compared with EKF-SLAM.
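    The constant-time property claimed above comes from working in the information (canonical) form, where a measurement update is purely additive and only touches the entries of the sparse information matrix linked to the observed landmark. The following is a minimal sketch of a single information-form measurement update under that assumption; variable names and dimensions are illustrative and do not reproduce the C-Ranger software.

```python
import numpy as np

def eif_measurement_update(Omega, xi, H, R, z, z_pred, x_lin):
    """One additive update of an extended information filter:

        Omega <- Omega + H^T R^-1 H
        xi    <- xi    + H^T R^-1 (z - z_pred + H x_lin)

    H is the measurement Jacobian linearized at x_lin, R the measurement
    noise covariance, z the measurement and z_pred its predicted value.
    Only the rows/columns of Omega selected by the non-zero columns of H
    change, which is what a sparse EIF exploits to keep the update cheap.
    """
    Rinv = np.linalg.inv(R)
    Omega_new = Omega + H.T @ Rinv @ H
    xi_new = xi + H.T @ Rinv @ (z - z_pred + H @ x_lin)
    return Omega_new, xi_new
```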

  2. Driven-Walking for Visually Impaired/Blind People through WiMAX

    Directory of Open Access Journals (Sweden)

    Panagiotis GIOANNIS

    2010-03-01

    Full Text Available It is known that people who are blind/visually impaired find it difficult to move, especially in unknown places. Usually the only help they have is their walking stick (white cane), a guide dog, and sometimes special warning sounds or road signals at specific positions. Material and Method: In this paper we try to find a solution for how to build an appropriate navigation system for blind people. Results: Based on the powerful properties of the mobile WiMAX standard, we suggest a navigation application which can translate a digital visual environment properly for blind/visually impaired users through a plethora of combinations such as voice, brain or tongue signals. Conclusions: We believe that such an idea will be an initial point for a plethora of applications which will eliminate walking disabilities of blind/visually impaired people.

  3. Odor supported place cell model and goal navigation in rodents

    DEFF Research Database (Denmark)

    Kulvicius, Tomas; Tamosiunaite, Minija; Ainge, James

    2008-01-01

    Experiments with rodents demonstrate that visual cues play an important role in the control of hippocampal place cells and spatial navigation. Nevertheless, rats may also rely on auditory, olfactory and somatosensory stimuli for orientation. It is also known that rats can track odors or self-generated scent marks to find a food source. Here we model odor-supported place cells by using a simple feed-forward network and analyze the impact of olfactory cues on place cell formation and spatial navigation. The obtained place cells are used to solve a goal navigation task by a novel mechanism based on self-marking by odor patches combined with a Q-learning algorithm. We also analyze the impact of place cell remapping on goal-directed behavior when switching between two environments. We emphasize the importance of olfactory cues in place cell formation and show that the utility of environmental and self
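    The goal-navigation component described above rests on standard tabular Q-learning. The sketch below shows that ingredient on a small grid world standing in for the place-cell-driven agent; the grid, reward, and parameters are illustrative assumptions rather than the authors' model.

```python
import random
import numpy as np

# Minimal tabular Q-learning for goal navigation on a 5x5 grid.
SIZE, GOAL = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.1              # learning rate, discount, exploration

def step(state, action_idx):
    """Move one cell, clipped to the grid; reward 1 only at the goal."""
    r, c = state
    dr, dc = ACTIONS[action_idx]
    nr = min(max(r + dr, 0), SIZE - 1)
    nc = min(max(c + dc, 0), SIZE - 1)
    reward = 1.0 if (nr, nc) == GOAL else 0.0
    return (nr, nc), reward

for episode in range(500):
    s = (0, 0)
    while s != GOAL:
        a = random.randrange(len(ACTIONS)) if random.random() < eps \
            else int(np.argmax(Q[s[0], s[1]]))
        s2, r = step(s, a)
        target = r + gamma * np.max(Q[s2[0], s2[1]])
        Q[s[0], s[1], a] += alpha * (target - Q[s[0], s[1], a])
        s = s2
```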

  4. A cognitive neuroprosthetic that uses cortical stimulation for somatosensory feedback

    Science.gov (United States)

    Klaes, Christian; Shi, Ying; Kellis, Spencer; Minxha, Juri; Revechkis, Boris; Andersen, Richard A.

    2014-10-01

    Objective. Present-day cortical brain-machine interfaces (BMIs) have made impressive advances using decoded brain signals to control extracorporeal devices. Although BMIs are used in a closed-loop fashion, sensory feedback typically is visual only. However, medical case studies have shown that the loss of somesthesis in a limb greatly reduces the agility of the limb even when visual feedback is available. Approach. To overcome this limitation, this study tested a closed-loop BMI that utilizes intracortical microstimulation to provide ‘tactile’ sensation to a non-human primate. Main result. Using stimulation electrodes in Brodmann area 1 of somatosensory cortex (BA1) and recording electrodes in the anterior intraparietal area, the parietal reach region and dorsal area 5 (area 5d), it was found that this form of feedback can be used in BMI tasks. Significance. Providing somatosensory feedback has the potential to greatly improve the performance of cognitive neuroprostheses, especially for fine control and object manipulation. Adding stimulation to a BMI system could therefore improve the quality of life for severely paralyzed patients.

  5. The neonicotinoid clothianidin interferes with navigation of the solitary bee Osmia cornuta in a laboratory test.

    Science.gov (United States)

    Jin, Nanxiang; Klein, Simon; Leimig, Fabian; Bischoff, Gabriela; Menzel, Randolf

    2015-09-01

    Pollinating insects provide a vital ecosystem service to crops and wild plants. Exposure to low doses of neonicotinoid insecticides has sub-lethal effects on social pollinators such as bumblebees and honeybees, disturbing their navigation and interfering with their development. Solitary Hymenoptera are also very important ecosystem service providers, but the sub-lethal effects of neonicotinoids have not yet been studied well in those animals. We analyzed the ability of walking Osmia to remember a feeding place in a small environment and found that Osmia remembers the feeding place well after 4 days of training. Uptake of field-realistic amounts of the neonicotinoid clothianidin (0.76 ng per bee) altered the animals' sensory responses to the visual environment and interfered with the retrieval of navigational memory. We conclude that the neonicotinoid clothianidin compromises visual guidance and the use of navigational memory in the solitary bee Osmia cornuta. © 2015. Published by The Company of Biologists Ltd.

  6. Effect of biased feedback on motor imagery learning in BCI-teleoperation system

    Directory of Open Access Journals (Sweden)

    Maryam eAlimardani

    2014-04-01

    Full Text Available Feedback design is an important issue in motor imagery BCI systems. However, to date it has not been reported how feedback presentation can optimize co-adaptation between a human brain and such systems. This paper assesses the effect of realistic visual feedback on users’ BCI performance and motor imagery skills. We previously developed a tele-operation system for a pair of humanlike robotic hands and showed that BCI control of such hands along with first-person perspective visual feedback of movements can arouse a sense of embodiment in the operators. In the first stage of this study, we found that the intensity of this ownership illusion was associated with feedback presentation and subjects’ performance during BCI motion control. In the second stage, we probed the effect of positive and negative feedback bias on subjects’ BCI performance and motor imagery skills. Although the subject-specific classifier, which was set up at the beginning of the experiment, detected no significant change in the subjects’ online performance, evaluation of brain activity patterns revealed that subjects’ self-regulation of motor imagery features improved due to a positive bias of feedback and a possible occurrence of ownership illusion. Our findings suggest that in general training protocols for BCIs, manipulation of feedback can play an important role in the optimization of subjects’ motor imagery skills.

  7. Self-Controlled Feedback for a Complex Motor Task

    Directory of Open Access Journals (Sweden)

    Wolf Peter

    2011-12-01

    Full Text Available Self-controlled augmented feedback enhances learning of simple motor tasks. Thereby, learners tend to request feedback after trials that were rated as good by themselves. Feedback after good trials promotes positive reinforcement, which enhances motor learning. The goal of this study was to investigate when naïve learners request terminal visual feedback in a complex motor task, as conclusions drawn on simple tasks can hardly be transferred to complex tasks. Indeed, seven of nine learners stated to have intended to request feedback predominantly after good trials, but in contrast to their intention, kinematic analysis showed that feedback was rather requested randomly (23% after good, 44% after intermediate, 33% after bad trials. Moreover, requesting feedback after good trials did not correlate with learning success. It seems that self-estimation of performance in complex tasks is challenging. As a consequence, learners might have focused on certain movement aspects rather than on the overall movement. Further studies should assess the current focus of the learner in detail to gain more insight in self-estimation capabilities during complex motor task learning.

  8. Improving Canada's Marine Navigation System through e-Navigation

    Directory of Open Access Journals (Sweden)

    Daniel Breton

    2016-06-01

    The conclusion proposed is that ongoing work with key partners and stakeholders can be used as the primary mechanism to identify e-Navigation-related innovation and needs, and to prioritize next steps. Moving forward in Canada, implementation of new e-navigation services will continue to be stakeholder-driven, and used to drive improvements to Canada's marine navigation system.

  9. Faster acquisition of laparoscopic skills in virtual reality with haptic feedback and 3D vision.

    Science.gov (United States)

    Hagelsteen, Kristine; Langegård, Anders; Lantz, Adam; Ekelund, Mikael; Anderberg, Magnus; Bergenfelz, Anders

    2017-10-01

    The study investigated whether 3D vision and haptic feedback in combination in a virtual reality environment lead to more efficient learning of laparoscopic skills in novices. Twenty novices were allocated to two groups. All completed a training course in the LapSim® virtual reality trainer consisting of four tasks: 'instrument navigation', 'grasping', 'fine dissection' and 'suturing'. The study group performed with haptic feedback and 3D vision and the control group without. Before and after the LapSim® course, the participants' metrics were recorded when tying a laparoscopic knot in the 2D video box trainer Simball® Box. The study group completed the training course in 146 (100-291) minutes compared to 215 (175-489) minutes in the control group (p = .002). The number of attempts to reach proficiency was significantly lower. The study group had significantly faster learning of skills in three out of four individual tasks: instrument navigation, grasping and suturing. Using the Simball® Box, no difference in laparoscopic knot tying after the LapSim® course was noted when comparing the groups. Laparoscopic training in virtual reality with 3D vision and haptic feedback made training more time-efficient and did not negatively affect later video box performance in 2D.

  10. Evaluating User Response to In-Car Haptic Feedback Touchscreens Using the Lane Change Test

    Directory of Open Access Journals (Sweden)

    Matthew J. Pitts

    2012-01-01

    Full Text Available Touchscreen interfaces are widely used in modern technology, from mobile devices to in-car infotainment systems. However, touchscreens impose significant visual workload demands on the user, which has safety implications for use in cars. Previous studies indicate that the application of haptic feedback can improve both performance of and affective response to user interfaces. This paper reports on and extends the findings of a 2009 study conducted to evaluate the effects of different combinations of touchscreen visual, audible, and haptic feedback on driving and task performance, affective response, and subjective workload; the initial findings of which were originally published in M. J. Pitts et al. (2009). A total of 48 non-expert users completed the study. A dual-task approach was applied, using the Lane Change Test as the driving task and realistic automotive use case touchscreen tasks. Results indicated that, while feedback type had no effect on driving or task performance, preference was expressed for multimodal feedback over visual alone. Issues relating to workload and cross-modal interaction were also identified.

  11. Spatial frequency-dependent feedback of visual cortical area 21a modulating functional orientation column maps in areas 17 and 18 of the cat.

    Science.gov (United States)

    Huang, Luoxiu; Chen, Xin; Shou, Tiande

    2004-02-20

    The feedback effect of activity of area 21a on orientation maps of areas 17 and 18 was investigated in cats using intrinsic signal optical imaging. A spatial frequency-dependent decrease in response amplitude of orientation maps to grating stimuli was observed in areas 17 and 18 when area 21a was inactivated by local injection of GABA, or by a lesion induced by liquid nitrogen freezing. The decrease in response amplitude of orientation maps of areas 17 and 18 after the area 21a inactivation paralleled the normal response without the inactivation. Application in area 21a of bicuculline, a GABAA receptor antagonist, caused an increase in response amplitude of orientation maps of area 17. The results indicate positive feedback from high-order visual cortical area 21a to lower-order areas underlying a spatial frequency-dependent mechanism.

  12. Tactile feedback improves auditory spatial localization

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2014-10-01

    Full Text Available Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial-bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three-sound sequence was spatially closer to the first or the third sound. The tactile-feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject’s forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal-feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially coherent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.

  13. Effects of four types of non-obtrusive feedback on computer behaviour, task performance and comfort

    NARCIS (Netherlands)

    Korte, E.M.; Huijsmans, M.A.; de Jong, A.M.; van de Ven, J.G.M.; Ruijsendaal, M.

    2012-01-01

    This study investigated the effects of non-obtrusive feedback on continuous lifted hand/finger behaviour, task performance and comfort. In an experiment with 24 participants the effects of two visual and two tactile feedback signals were compared to a no-feedback condition in a computer task.

  14. Systematic tracking, visualizing, and interpreting of consumer feedback for drinking water quality.

    Science.gov (United States)

    Dietrich, Andrea M; Phetxumphou, Katherine; Gallagher, Daniel L

    2014-12-01

    Consumer feedback and complaints provide utilities with useful data about consumer perceptions of aesthetic water quality in the distribution system. This research provides a systematic approach to interpret consumer complaint water quality data provided by four water utilities that recorded consumer complaints, but did not routinely process the data. The utilities tended to write down a myriad of descriptors that were too numerous or contained a variety of spellings so that electronic "harvesting" was not possible and much manual labor was required to categorize the complaints into major areas, such as those suggested by the Drinking Water Taste and Odor Wheel or existing check-sheets. When the consumer complaint data were categorized and visualized using spider (or radar) and run-time plots, major taste, odor, and appearance patterns emerged that clarified the issue and could provide guidance to the utility on the nature and extent of the problem. A caveat is that while humans readily identify visual issues with the water, such as color, cloudiness, or rust, describing specific tastes and odors in drinking water is acknowledged to be much more difficult for humans to achieve without training. This was demonstrated with two utility groups and a group of consumers identifying the odors of orange, 2-methylisoborneol, and dimethyl trisulfide. All three groups readily and succinctly identified the familiar orange odor. The two utility groups were much more able to identify the musty odor of 2-methylisoborneol, which was likely familiar to them from their work with raw and finished water. Dimethyl trisulfide, a garlic-onion odor associated with sulfur compounds in drinking water, was the least familiar to all three groups, although the laboratory staff did best. These results indicate that utility personnel should be tolerant of consumers who can assuredly say the water is different, but cannot describe the problem. Also, it indicates that a T&O program at a utility would

  15. Direct Visual Editing of Node Attributes in Graphs

    Directory of Open Access Journals (Sweden)

    Christian Eichner

    2016-10-01

    Full Text Available There are many expressive visualization techniques for analyzing graphs. Yet, there is only little research on how existing visual representations can be employed to support data editing. An increasingly relevant task when working with graphs is the editing of node attributes. We propose an integrated visualize-and-edit approach to editing attribute values via direct interaction with the visual representation. The visualize part is based on node-link diagrams paired with attribute-dependent layouts. The edit part is as easy as moving nodes via drag-and-drop gestures. We present dedicated interaction techniques for editing quantitative as well as qualitative attribute data values. The benefit of our novel integrated approach is that one can directly edit the data while the visualization constantly provides feedback on the implications of the data modifications. Preliminary user feedback indicates that our integrated approach can be a useful complement to standard non-visual editing via external tools.

  16. Distinct GABAergic targets of feedforward and feedback connections between lower and higher areas of rat visual cortex.

    Science.gov (United States)

    Gonchar, Yuri; Burkhalter, Andreas

    2003-11-26

    Processing of visual information is performed in different cortical areas that are interconnected by feedforward (FF) and feedback (FB) pathways. Although FF and FB inputs are excitatory, their influences on pyramidal neurons also depend on the outputs of GABAergic neurons, which receive FF and FB inputs. Rat visual cortex contains at least three different families of GABAergic neurons that express parvalbumin (PV), calretinin (CR), and somatostatin (SOM) (Gonchar and Burkhalter, 1997). To examine whether pathway-specific inhibition (Shao and Burkhalter, 1996) is attributable to distinct connections with GABAergic neurons, we traced FF and FB inputs to PV, CR, and SOM neurons in layers 1-2/3 of area 17 and the secondary lateromedial area in rat visual cortex. We found that in layer 2/3 maximally 2% of FF and FB inputs go to CR and SOM neurons. This contrasts with 12-13% of FF and FB inputs onto layer 2/3 PV neurons. Unlike inputs to layer 2/3, connections to layer 1, which contains CR but lacks SOM and PV somata, are pathway-specific: 21% of FB inputs go to CR neurons, whereas FF inputs to layer 1 and its CR neurons are absent. These findings suggest that FF and FB influences on layer 2/3 pyramidal neurons mainly involve disynaptic connections via PV neurons that control the spike outputs to axons and proximal dendrites. Unlike FF input, FB input in addition makes a disynaptic link via CR neurons, which may influence the excitability of distal pyramidal cell dendrites in layer 1.

  17. Control and navigation system for a fixed-wing unmanned aerial vehicle

    Directory of Open Access Journals (Sweden)

    Ruiyong Zhai

    2014-02-01

    Full Text Available This paper presents a flight control and navigation system for a fixed-wing unmanned aerial vehicle (UAV) with low-cost micro-electro-mechanical system (MEMS) sensors. The system is designed under the inner loop and outer loop strategy. The trajectory tracking navigation loop is the outer loop of the attitude loop, while the attitude control loop is the outer loop of the stabilization loop. Proportional-integral-derivative (PID) control was adopted for stabilization and attitude control. The three-dimensional (3D) trajectory tracking control of a UAV could be approximately divided into lateral control and longitudinal control. The longitudinal control employs traditional linear PID feedback to achieve the desired altitude of the UAV, while the lateral control uses a non-linear control method to complete the desired trajectory. The non-linear controller can automatically adapt to ground velocity change, which is usually caused by gust disturbance, so the UAV has good wind-resistance characteristics. Flight tests and survey missions were carried out with our self-developed delta fixed-wing UAV and MEMS-based autopilot to confirm the effectiveness and practicality of the proposed navigation method.
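    The cascaded PID structure described above (an outer loop commanding an inner loop) can be sketched for the longitudinal channel as follows. The gains, limits, and the particular pitch inner loop are illustrative assumptions and not the authors' autopilot parameters.

```python
class PID:
    """Basic PID controller with output saturation."""

    def __init__(self, kp, ki, kd, limit):
        self.kp, self.ki, self.kd, self.limit = kp, ki, kd, limit
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(-self.limit, min(self.limit, out))   # saturate output

# Outer loop: altitude error -> pitch command; inner loop: pitch error -> elevator.
altitude_pid = PID(kp=0.05, ki=0.005, kd=0.02, limit=0.35)   # pitch command, rad (assumed)
pitch_pid = PID(kp=1.2, ki=0.1, kd=0.05, limit=1.0)          # normalized elevator (assumed)

def longitudinal_control(alt_ref, alt, pitch, dt):
    """Run one step of the cascaded altitude/pitch loops."""
    pitch_cmd = altitude_pid.update(alt_ref - alt, dt)
    elevator = pitch_pid.update(pitch_cmd - pitch, dt)
    return elevator
```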

  18. Cost of Lightning Strike Related Outages of Visual Navigational Aids at Airports in the United States

    Science.gov (United States)

    Rakas, J.; Nikolic, M.; Bauranov, A.

    2017-12-01

    Lightning storms are a serious hazard that can cause damage to vital human infrastructure. In aviation, lightning strikes cause outages to air traffic control equipment and facilities that result in major disruptions in the network, causing delays and financial costs measured in the millions of dollars. Failure of critical systems, such as Visual Navigational Aids (Visual NAVAIDS), is particularly dangerous since NAVAIDS are an essential part of landing procedures. Precision instrument approach, an operation utilized during poor visibility conditions, utilizes several of these systems, and their failure leads to holding patterns and ultimately diversions to other airports. These disruptions lead to both ground and airborne delay. Accurate prediction of these outages and their costs is a key prerequisite for successful investment planning. The air traffic management and control sector needs accurate information to successfully plan maintenance and develop a more robust system under the threat of increasing lightning rates. To analyze the issue, we couple the Remote Monitoring and Logging System (RMLS) database and the Aviation System Performance Metrics (ASPM) databases to identify lightning-induced outages, and connect them with weather conditions, demand and landing runway to calculate the total delays induced by the outages, as well as the number of cancellations and diversions. The costs are then determined by calculating direct costs to aircraft operators and costs of passengers' time for delays, cancellations and diversions. The results indicate that 1) not all NAVAIDS are created equal, and 2) outside conditions matter. The cost of an outage depends on the importance of the failed system and the conditions that prevailed before, during and after the failure. The outage that occurs during high demand and poor weather conditions is more likely to result in more delays and higher costs.

  19. Intelligent Behavioral Action Aiding for Improved Autonomous Image Navigation

    Science.gov (United States)

    2012-09-13

    The platform's sensing suite comprises wheel odometry, a SICK laser scanning unit (Lidar), an Inertial Measurement Unit (IMU), and an ultrasonic distance measurement system.

  20. Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System

    Science.gov (United States)

    2015-03-26

    Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System. Thesis by David W. Jones, Capt, USAF (AFIT-ENG-MS-15-M-020). Distribution unlimited.

  1. Review of Designs for Haptic Data Visualization.

    Science.gov (United States)

    Paneels, Sabrina; Roberts, Jonathan C

    2010-01-01

    There are many different uses for haptics, such as training medical practitioners, teleoperation, or navigation of virtual environments. This review focuses on haptic methods that display data. The hypothesis is that haptic devices can be used to present information, and consequently, the user gains quantitative, qualitative, or holistic knowledge about the presented data. Not only is this useful for users who are blind or partially sighted (who can feel line graphs, for instance), but also the haptic modality can be used alongside other modalities, to increase the amount of variables being presented, or to duplicate some variables to reinforce the presentation. Over the last 20 years, a significant amount of research has been done in haptic data presentation; e.g., researchers have developed force feedback line graphs, bar charts, and other forms of haptic representations. However, previous research is published in different conferences and journals, with different application emphases. This paper gathers and collates these various designs to provide a comprehensive review of designs for haptic data visualization. The designs are classified by their representation: Charts, Maps, Signs, Networks, Diagrams, Images, and Tables. This review provides a comprehensive reference for researchers and learners, and highlights areas for further research.

  2. Introduction of a standardized multimodality image protocol for navigation-guided surgery of suspected low-grade gliomas.

    Science.gov (United States)

    Mert, Aygül; Kiesel, Barbara; Wöhrer, Adelheid; Martínez-Moreno, Mauricio; Minchev, Georgi; Furtner, Julia; Knosp, Engelbert; Wolfsberger, Stefan; Widhalm, Georg

    2015-01-01

    OBJECT Surgery of suspected low-grade gliomas (LGGs) poses a special challenge for neurosurgeons due to their diffusely infiltrative growth and histopathological heterogeneity. Consequently, neuronavigation with multimodality imaging data, such as structural and metabolic data, fiber tracking, and 3D brain visualization, has been proposed to optimize surgery. However, currently no standardized protocol has been established for multimodality imaging data in modern glioma surgery. The aim of this study was therefore to define a specific protocol for multimodality imaging and navigation for suspected LGG. METHODS Fifty-one patients who underwent surgery for a diffusely infiltrating glioma with nonsignificant contrast enhancement on MRI and available multimodality imaging data were included. In the first 40 patients with glioma, the authors retrospectively reviewed the imaging data, including structural MRI (contrast-enhanced T1-weighted, T2-weighted, and FLAIR sequences), metabolic images derived from PET, or MR spectroscopy chemical shift imaging, fiber tracking, and 3D brain surface/vessel visualization, to define standardized image settings and specific indications for each imaging modality. The feasibility and surgical relevance of this new protocol was subsequently prospectively investigated during surgery with the assistance of an advanced electromagnetic navigation system in the remaining 11 patients. Furthermore, specific surgical outcome parameters, including the extent of resection, histological analysis of the metabolic hotspot, presence of a new postoperative neurological deficit, and intraoperative accuracy of 3D brain visualization models, were assessed in each of these patients. RESULTS After reviewing these first 40 cases of glioma, the authors defined a specific protocol with standardized image settings and specific indications that allows for optimal and simultaneous visualization of structural and metabolic data, fiber tracking, and 3D brain

  3. The contribution of virtual reality to the diagnosis of spatial navigation disorders and to the study of the role of navigational aids: A systematic literature review.

    Science.gov (United States)

    Cogné, M; Taillade, M; N'Kaoua, B; Tarruella, A; Klinger, E; Larrue, F; Sauzéon, H; Joseph, P-A; Sorita, E

    2017-06-01

    Spatial navigation, which involves higher cognitive functions, is frequently implemented in daily activities, and is critical to the participation of human beings in mainstream environments. Virtual reality is an expanding tool, which enables on one hand the assessment of the cognitive functions involved in spatial navigation, and on the other the rehabilitation of patients with spatial navigation difficulties. Topographical disorientation is a frequent deficit among patients suffering from neurological diseases. The use of virtual environments enables the information incorporated into the virtual environment to be manipulated empirically. However, the impact of manipulations seems to differ according to their nature (quantity, occurrence, and characteristics of the stimuli) and the target population. We performed a systematic review of research on virtual spatial navigation covering the period from 2005 to 2015. We focused first on the contribution of virtual spatial navigation for patients with brain injury or schizophrenia, or in the context of ageing and dementia, and then on the impact of visual or auditory stimuli on virtual spatial navigation. On the basis of 6521 abstracts identified in 2 databases (Pubmed and Scopus) with the keywords "navigation" and "virtual", 1103 abstracts were selected by adding the keywords "ageing", "dementia", "brain injury", "stroke", "schizophrenia", "aid", "help", "stimulus" and "cue"; among these, 63 articles were included in the present qualitative analysis. Unlike pencil-and-paper tests, virtual reality is useful to assess large-scale navigation strategies in patients with brain injury or schizophrenia, or in the context of ageing and dementia. Better knowledge about both the impact of the different aids and the cognitive processes involved is essential for the use of aids in neurorehabilitation. Copyright © 2016. Published by Elsevier Masson SAS.

  4. The Two Visual Systems Hypothesis: new challenges and insights from visual form agnosic patient DF

    Directory of Open Access Journals (Sweden)

    Robert Leslie Whitwell

    2014-12-01

    Full Text Available Patient DF, who developed visual form agnosia following carbon monoxide poisoning, is still able to use vision to adjust the configuration of her grasping hand to the geometry of a goal object. This striking dissociation between perception and action in DF provided a key piece of evidence for the formulation of Goodale and Milner's Two Visual Systems Hypothesis (TVSH). According to the TVSH, the ventral stream plays a critical role in constructing our visual percepts, whereas the dorsal stream mediates the visual control of action, such as visually guided grasping. In this review, we discuss recent studies of DF that provide new insights into the functional organization of the dorsal and ventral streams. We confirm recent evidence that DF has dorsal as well as ventral brain damage, and that her dorsal-stream lesions and surrounding atrophy have increased in size since her first published brain scan. We argue that the damage to DF's dorsal stream explains her deficits in directing actions at targets in the periphery. We then focus on DF's ability to accurately adjust her in-flight hand aperture to changes in the width of goal objects (grip scaling) whose dimensions she cannot explicitly report. An examination of several studies of DF's grip scaling under natural conditions reveals a modest though significant deficit. Importantly, however, she continues to show a robust dissociation between form vision for perception and form vision for action. We also review recent studies that explore the role of online visual feedback and terminal haptic feedback in the programming and control of her grasping. These studies make it clear that DF is no more reliant on visual or haptic feedback than are neurologically-intact individuals. In short, we argue that her ability to grasp objects depends on visual feedforward processing carried out by visuomotor networks in her dorsal stream that function in much the same way as they do in neurologically-intact individuals.

  5. Navigation with ECDIS: Choosing the Proper Secondary Positioning Source

    Directory of Open Access Journals (Sweden)

    D. Brčic

    2015-09-01

    Full Text Available The completion of the ECDIS mandatory implementation period on board SOLAS vessels requires certain operational, functional and educational gaps to be closed. This especially concerns positioning and its redundancy, which represents a fundamental safety factor on board navigating vessels. The proposed paper deals with primary and secondary positioning used in the ECDIS system. Standard positioning methods are described, discussing the possibilities of automatic and manual implementation of the obtained positions in ECDIS, besides the default methods. With the aim of emphasizing the need for and importance of using a secondary positioning source in ECDIS, the positioning issue was elaborated from the standpoint of end-users, providing practical feedback on the topic. The survey was conducted in the form of an international questionnaire placed among OOWs, ranging from apprentice officers to captains. The resulting answers and discussion regarding the (non-)usage of secondary positioning sources in ECDIS were analysed and presented. Answers and statements were elaborated focusing not only on usage of the secondary positioning system in ECDIS, but on navigation in general. The study revealed potential risks arising from the lack of knowledge and even negligence. The paper concludes with a summary of findings related to discrepancies between theoretical background, good seamanship practice and real actions taken by OOWs. Further research activities are pointed out, together with planned practical actions in raising awareness regarding navigation with ECDIS.

  6. Mastoidectomy simulation with combined visual and haptic feedback.

    Science.gov (United States)

    Agus, Marco; Giachetti, Andrea; Gobbetti, Enrico; Zanetti, Gianluigi; Zorcolo, Antonio; John, Nigel W; Stone, Robert J

    2002-01-01

    Mastoidectomy is one of the most common surgical procedures relating to the petrous bone. In this paper we describe our preliminary results in the realization of a virtual reality mastoidectomy simulator. Our system is designed to work on patient-specific volumetric object models directly derived from 3D CT and MRI images. The paper summarizes the detailed task analysis performed in order to define the system requirements, introduces the architecture of the prototype simulator, and discusses the initial feedback received from selected end users.

  7. Optimizing MR imaging-guided navigation for focused ultrasound interventions in the brain

    Science.gov (United States)

    Werner, B.; Martin, E.; Bauer, R.; O'Gorman, R.

    2017-03-01

    MR imaging during transcranial MR imaging-guided Focused Ultrasound surgery (tcMRIgFUS) is challenging due to the complex ultrasound transducer setup and the water bolus used for acoustic coupling. Achievable image quality in the tcMRIgFUS setup using the standard body coil is significantly inferior to current neuroradiologic standards. As a consequence, MR image guidance for precise navigation in functional neurosurgical interventions using tcMRIgFUS is basically limited to the acquisition of MR coordinates of salient landmarks such as the anterior and posterior commissure for aligning a stereotactic atlas. Here, we show how improved MR image quality provided by a custom built MR coil and optimized MR imaging sequences can support imaging-guided navigation for functional tcMRIgFUS neurosurgery by visualizing anatomical landmarks that can be integrated into the navigation process to accommodate for patient specific anatomy.

  8. Enhancing fuzzy robot navigation systems by mimicking human visual perception of natural terrain traversibility

    Science.gov (United States)

    Tunstel, E.; Howard, A.; Edwards, D.; Carlson, A.

    2001-01-01

    This paper presents a technique for learning to assess terrain traversability for outdoor mobile robot navigation using human-embedded logic and real-time perception of terrain features extracted from image data.

  9. Blind MuseumTourer: A System for Self-Guided Tours in Museums and Blind Indoor Navigation

    Directory of Open Access Journals (Sweden)

    Apostolos Meliones

    2018-01-01

    Full Text Available Notably valuable efforts have focused on helping people with special needs. In this work, we build upon the experience from the BlindHelper smartphone outdoor pedestrian navigation app and present Blind MuseumTourer, a system for indoor interactive autonomous navigation for blind and visually impaired persons and groups (e.g., pupils), which has primarily addressed blind or visually impaired (BVI) accessibility and self-guided tours in museums. A pilot prototype has been developed and is currently under evaluation at the Tactual Museum with the collaboration of the Lighthouse for the Blind of Greece. This paper describes the functionality of the application and evaluates candidate indoor location determination technologies, such as wireless local area network (WLAN) and surface-mounted assistive tactile route indications combined with Bluetooth low energy (BLE) beacons and inertial dead-reckoning functionality, to come up with a reliable and highly accurate indoor positioning system adopting the latter solution. The developed concepts, including map matching, a key concept for indoor navigation, apply in a similar way to other indoor guidance use cases involving complex indoor places, such as in hospitals, shopping malls, airports, train stations, public and municipality buildings, office buildings, university buildings, hotel resorts, passenger ships, etc. The presented Android application is effectively a Blind IndoorGuide system for accurate and reliable blind indoor navigation.
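    The combination of inertial dead reckoning with fixed reference points described above can be sketched as follows: detected steps advance the position estimate along the current heading, and a recognized BLE beacon (or tactile route marker) with a known map position pulls the estimate back, bounding drift. The step length, beacon table, and the simple blending correction are illustrative assumptions, not the app's actual algorithm.

```python
import math

BEACONS = {"entrance": (0.0, 0.0), "hall_A": (12.0, 3.5)}   # surveyed map positions, metres (assumed)
STEP_LENGTH = 0.7                                           # metres per detected step (assumed)

class DeadReckoner:
    """Pedestrian dead reckoning with occasional beacon corrections."""

    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

    def on_step(self, heading_rad):
        """Advance one detected step along the compass heading (radians)."""
        self.x += STEP_LENGTH * math.cos(heading_rad)
        self.y += STEP_LENGTH * math.sin(heading_rad)

    def on_beacon(self, beacon_id, weight=0.8):
        """Blend the estimate toward the beacon's known position to bound drift."""
        bx, by = BEACONS[beacon_id]
        self.x = (1 - weight) * self.x + weight * bx
        self.y = (1 - weight) * self.y + weight * by
```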

  10. The serious game HearHere for elderly with age-related vision loss : effectively training the skill to use auditory information for navigation

    NARCIS (Netherlands)

    Hartendorp, Mijk; Braad, Eelco; Van Sloten, Janke; Steyvers, Frank; Pinkster, Christiaan

    2017-01-01

    More and more people suffer from age-related eye conditions, e.g. Macular Degeneration. One of the problems experienced by these people is navigation. A strategy shown by many juvenile visually impaired persons (VIPs) is using auditory information for navigation. Therefore, it is important to train the use of auditory information for navigation.

  11. Haptic Feedback for Enhancing Realism of Walking Simulations

    DEFF Research Database (Denmark)

    Turchet, Luca; Burelli, Paolo; Serafin, Stefania

    2013-01-01

    While using the interactive system subjects physically walked, whereas during the use of the non-interactive system the locomotion was simulated while subjects were sitting on a chair. In both configurations subjects were exposed to auditory and audio-visual stimuli presented with and without the haptic feedback. Results of the experiments provide a clear preference towards the simulations enhanced with haptic feedback, showing that the haptic channel can lead to more realistic experiences in both interactive and non-interactive configurations. The majority of subjects clearly appreciated the added feedback. However, some subjects found the added feedback disturbing and annoying. This might be due on the one hand to the limits of the haptic simulation and on the other hand to the different individual desire to be involved in the simulations. Our findings can be applied to the context...

  12. Feedback enhances feedforward figure-ground segmentation by changing firing mode.

    Science.gov (United States)

    Supèr, Hans; Romeo, August

    2011-01-01

    In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed 2-layered feedforward spiking network that is able to segregate figure from ground and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-ground stimulus. After including feedback the firing pattern changed into a regular (tonic) spiking pattern. In this state, we found an extra enhancement of figure responses and a further suppression of background responses resulting in a stronger figure-ground signal. Such a push-pull effect was confirmed by comparing the figure-ground responses with the responses to a homogenous texture. We propose that feedback controls figure-ground segregation by influencing the neural firing patterns of feedforward projecting neurons.

  13. Feedback enhances feedforward figure-ground segmentation by changing firing mode.

    Directory of Open Access Journals (Sweden)

    Hans Supèr

    Full Text Available In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed 2-layered feedforward spiking network that is able to segregate figure from ground and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-ground stimulus. After including feedback the firing pattern changed into a regular (tonic) spiking pattern. In this state, we found an extra enhancement of figure responses and a further suppression of background responses resulting in a stronger figure-ground signal. Such a push-pull effect was confirmed by comparing the figure-ground responses with the responses to a homogenous texture. We propose that feedback controls figure-ground segregation by influencing the neural firing patterns of feedforward projecting neurons.

  14. Feedback Enhances Feedforward Figure-Ground Segmentation by Changing Firing Mode

    Science.gov (United States)

    Supèr, Hans; Romeo, August

    2011-01-01

    In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed 2-layered feedforward spiking network that is able to segregate figure from ground and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-ground stimulus. After including feedback the firing pattern changed into a regular (tonic) spiking pattern. In this state, we found an extra enhancement of figure responses and a further suppression of background responses resulting in a stronger figure-ground signal. Such a push-pull effect was confirmed by comparing the figure-ground responses with the responses to a homogenous texture. We propose that feedback controls figure-ground segregation by influencing the neural firing patterns of feedforward projecting neurons. PMID:21738747

  15. Examining Evidence of Reliability, Validity, and Fairness for the "SuccessNavigator"™ Assessment. Research Report. ETS RR-13-12

    Science.gov (United States)

    Markle, Ross; Olivera-Aguilar, Margarita; Jackson, Teresa; Noeth, Richard; Robbins, Steven

    2013-01-01

    The "SuccessNavigator"™ assessment is an online, 30 minute self-assessment of psychosocial and study skills designed for students entering postsecondary education. In addition to providing feedback in areas such as classroom and study behaviors, commitment to educational goals, management of academic stress, and connection to social…

  16. OSIRIX: open source multimodality image navigation software

    Science.gov (United States)

    Rosset, Antoine; Pysher, Lance; Spadola, Luca; Ratib, Osman

    2005-04-01

    The goal of our project is to develop a completely new software platform that will allow users to efficiently and conveniently navigate through large sets of multidimensional data without the need of high-end expensive hardware or software. We also elected to develop our system on new open source software libraries allowing other institutions and developers to contribute to this project. OsiriX is a free and open-source imaging software designed to manipulate and visualize large sets of medical images: http://homepage.mac.com/rossetantoine/osirix/

  17. PeptideNavigator: An interactive tool for exploring large and complex data sets generated during peptide-based drug design projects.

    Science.gov (United States)

    Diller, Kyle I; Bayden, Alexander S; Audie, Joseph; Diller, David J

    2018-01-01

    There is growing interest in peptide-based drug design and discovery. Due to their relatively large size, polymeric nature, and chemical complexity, the design of peptide-based drugs presents an interesting "big data" challenge. Here, we describe an interactive computational environment, PeptideNavigator, for naturally exploring the tremendous amount of information generated during a peptide drug design project. The purpose of PeptideNavigator is the presentation of large and complex experimental and computational data sets, particularly 3D data, so as to enable multidisciplinary scientists to make optimal decisions during a peptide drug discovery project. PeptideNavigator provides users with numerous viewing options, such as scatter plots, sequence views, and sequence frequency diagrams. These views allow for the collective visualization and exploration of many peptides and their properties, ultimately enabling the user to focus on a small number of peptides of interest. To drill down into the details of individual peptides, PeptideNavigator provides users with a Ramachandran plot viewer and a fully featured 3D visualization tool. Each view is linked, allowing the user to seamlessly navigate from collective views of large peptide data sets to the details of individual peptides with promising property profiles. Two case studies, based on MHC-1A activating peptides and MDM2 scaffold design, are presented to demonstrate the utility of PeptideNavigator in the context of disparate peptide-design projects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Wayfinding in the Blind: Larger Hippocampal Volume and Supranormal Spatial Navigation

    Science.gov (United States)

    Fortin, Madeleine; Voss, Patrice; Lord, Catherine; Lassonde, Maryse; Pruessner, Jens; Saint-Amour, Dave; Rainville, Constant; Lepore, Franco

    2008-01-01

    In the absence of visual input, the question arises as to how complex spatial abilities develop and how the brain adapts to the absence of this modality. We explored navigational skills in both early and late blind individuals and structural differences in the hippocampus, a brain region well known to be involved in spatial processing.…

  19. Real-time feedback enhances forward propulsion during walking in old adults.

    Science.gov (United States)

    Franz, Jason R; Maletis, Michela; Kram, Rodger

    2014-01-01

    Reduced propulsive function during the push-off phase of walking plays a central role in the deterioration of walking ability with age. We used real-time propulsive feedback to test the hypothesis that old adults have an underutilized propulsive reserve available during walking. 8 old adults (mean [SD], age: 72.1 [3.9] years) and 11 young adults (age: 21.0 [1.5] years) participated. For our primary aim, old subjects walked: 1) normally, 2) with visual feedback of their peak propulsive ground reaction forces, and 3) with visual feedback of their medial gastrocnemius electromyographic activity during push-off. We asked those subjects to match a target set to 20% and 40% greater propulsive force or push-off muscle activity than normal walking. We tested young subjects walking normally only to provide reference ground reaction force values. Walking normally, old adults exerted 12.5% smaller peak propulsive forces than young adults. Old adults increased their propulsive forces and push-off muscle activities when we provided propulsive feedback. Most notably, force feedback elicited propulsive forces that were equal to or 10.5% greater than those of young adults (+20% target, P=0.87; +40% target, P=0.02). With electromyographic feedback, old adults significantly increased their push-off muscle activities but without increasing their propulsive forces. Old adults with propulsive deficits have a considerable and underutilized propulsive reserve available during level walking. Further, real-time propulsive feedback represents a promising therapeutic strategy to improve the forward propulsion of old adults and thus maintain their walking ability and independence. © 2013.
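    The feedback signal in this protocol is the peak propulsive (anterior) ground reaction force of each step, with targets set 20% and 40% above the subject's normal value. The fragment below is only a hedged sketch of that computation on synthetic data; the sampling rate, stance windows, and signal are invented, and this is not the authors' analysis code.

```python
import numpy as np

def peak_propulsive_forces(anterior_grf, stance_windows):
    """Peak anterior GRF within each stance window, given as (start, end) sample indices."""
    return np.array([anterior_grf[s:e].max() for s, e in stance_windows])

# Synthetic stand-in for a force-treadmill recording (assumed 1 kHz sampling).
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
anterior_grf = 150 * np.clip(np.sin(2 * np.pi * 1.8 * t), 0, None)  # toy signal (N)
stance_windows = [(0, 300), (550, 850)]

baseline_peak = peak_propulsive_forces(anterior_grf, stance_windows).mean()
targets = {"+20%": 1.2 * baseline_peak, "+40%": 1.4 * baseline_peak}
print(f"baseline peak propulsion: {baseline_peak:.1f} N; "
      + ", ".join(f"{k} target -> {v:.1f} N" for k, v in targets.items()))
```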

  20. Teaching Young Adults with Intellectual and Developmental Disabilities Community-Based Navigation Skills to Take Public Transportation.

    Science.gov (United States)

    Price, Richard; Marsh, Abbie J; Fisher, Marisa H

    2018-03-01

    Facilitating the use of public transportation enhances opportunities for independent living and competitive, community-based employment for individuals with intellectual and developmental disabilities (IDD). Four young adults with IDD were taught through total-task chaining to use the Google Maps application, a self-prompting, visual navigation system, to take the bus to locations around a college campus and the community. Three of four participants learned to use Google Maps to independently navigate public transportation. Google Maps may be helpful in supporting independent travel, highlighting the importance of future research in teaching navigation skills. Learning to independently use public transportation increases access to autonomous activities, such as opportunities to work and to attend postsecondary education programs on large college campuses. Individuals with IDD can be taught through chaining procedures to use the Google Maps application to navigate public transportation. Mobile map applications are an effective and functional modern tool that can be used to teach community navigation.

  1. Boosting the Motor Outcome of the Untrained Hand by Action Observation: Mirror Visual Feedback, Video Therapy, or Both Combined—What Is More Effective?

    Directory of Open Access Journals (Sweden)

    Florian Bähr

    2018-01-01

    Full Text Available Action observation (AO) allows access to a network that processes visuomotor and sensorimotor inputs and is believed to be involved in observational learning of motor skills. We conducted three consecutive experiments to examine the boosting effect of AO on the motor outcome of the untrained hand by either mirror visual feedback (MVF), video therapy (VT), or a combination of both. In the first experiment, healthy participants trained either with MVF or without mirror feedback while in the second experiment, participants either trained with VT or observed animal videos. In the third experiment, participants first observed video clips that were followed by either training with MVF or training without mirror feedback. The outcomes for the untrained hand were quantified by scores from five motor tasks. The results demonstrated that MVF and VT significantly increase the motor performance of the untrained hand by the use of AO. We found that MVF was the most effective approach to increase the performance of the target effector. On the contrary, the combination of MVF and VT turns out to be less effective from a clinical perspective. The gathered results suggest that action-related motor competence with the untrained hand is acquired by both mirror-based and video-based AO.

  2. Force control in the absence of visual and tactile feedback

    NARCIS (Netherlands)

    Mugge, W.; Abbink, D.A.; Schouten, Alfred Christiaan; van der Helm, F.C.T.; Arendzen, J.H.; Meskers, C.G.M.

    2013-01-01

    Motor control tasks like stance or object handling require sensory feedback from proprioception, vision and touch. The distinction between tactile and proprioceptive sensors is not frequently made in dynamic motor control tasks, and if so, mostly based on signal latency. We previously found that

  3. Perception of CPR quality: Influence of CPR feedback, Just-in-Time CPR training and provider role.

    Science.gov (United States)

    Cheng, Adam; Overly, Frank; Kessler, David; Nadkarni, Vinay M; Lin, Yiqun; Doan, Quynh; Duff, Jonathan P; Tofil, Nancy M; Bhanji, Farhan; Adler, Mark; Charnovich, Alex; Hunt, Elizabeth A; Brown, Linda L

    2015-02-01

    Many healthcare providers rely on visual perception to guide cardiopulmonary resuscitation (CPR), but little is known about the accuracy of provider perceptions of CPR quality. We aimed to describe the difference between perceived versus measured CPR quality, and to determine the impact of provider role, real-time visual CPR feedback and Just-in-Time (JIT) CPR training on provider perceptions. We conducted secondary analyses of data collected from a prospective, multicenter, randomized trial of 324 healthcare providers who participated in a simulated cardiac arrest scenario between July 2012 and April 2014. Participants were randomized to one of four permutations of JIT CPR training and real-time visual CPR feedback. We calculated the difference between perceived and measured quality of CPR and reported the proportion of subjects accurately estimating the quality of CPR within each study arm. Participants overestimated achieving adequate chest compression depth (mean difference range: 16.1-60.6%) and rate (range: 0.2-51%), and underestimated chest compression fraction (0.2-2.9%) across all arms. Compared to no intervention, the use of real-time feedback and JIT CPR training (alone or in combination) improved perception of depth. Accurate estimation of CPR quality was poor for chest compression depth (0-13%), rate (5-46%) and chest compression fraction (60-63%). Perception of depth is more accurate in CPR providers versus team leaders (27.8% vs. 7.4%; p=0.043) when using real-time feedback. Healthcare providers' visual perception of CPR quality is poor. Perceptions of CPR depth are improved by using real-time visual feedback and with prior JIT CPR training. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
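    The secondary analysis above hinges on two simple quantities: the signed difference between perceived and measured CPR quality, and the proportion of providers whose estimate falls within an accuracy window. A minimal sketch follows, with invented numbers and an assumed 10-percentage-point tolerance; it is not the study's analysis code.

```python
def perception_error(perceived, measured):
    """Signed difference in percentage points; positive values indicate overestimation."""
    return [p - m for p, m in zip(perceived, measured)]

def proportion_accurate(perceived, measured, tolerance=10.0):
    """Share of providers whose estimate is within +/- tolerance percentage points."""
    errors = perception_error(perceived, measured)
    return sum(abs(e) <= tolerance for e in errors) / len(errors)

# Example: percentage of compressions with adequate depth, perceived vs. measured.
perceived_depth = [80, 90, 70, 95, 60]
measured_depth = [35, 35, 35, 60, 50]
print(perception_error(perceived_depth, measured_depth))    # overestimation per provider
print(proportion_accurate(perceived_depth, measured_depth)) # 0.2 in this toy example
```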

  4. Improving lower limb weight distribution asymmetry during the squat using Nintendo Wii Balance Boards and real-time feedback.

    Science.gov (United States)

    McGough, Rian; Paterson, Kade; Bradshaw, Elizabeth J; Bryant, Adam L; Clark, Ross A

    2012-01-01

    Weight-bearing asymmetry (WBA) may be detrimental to performance and could increase the risk of injury; however, detecting and reducing it is difficult in a field setting. This study assessed whether a portable and simple-to-use system designed with multiple Nintendo Wii Balance Boards (NWBBs) and customized software can be used to evaluate and improve WBA. Fifteen elite Australian Rules Footballers and 32 age-matched, untrained participants were tested for measures of WBA while squatting. The NWBB and customized software provided real-time visual feedback of WBA during half of the trials. Outcome measures included the mean mass difference (MMD) between limbs, interlimb symmetry index (SI), and percentage of time spent favoring a single limb (TFSL). Significant reductions in MMD (p = 0.028) and SI (p = 0.007) with visual feedback were observed for the entire group data. Subgroup analysis revealed significant reductions in MMD (p = 0.047) and SI (p = 0.026) with visual feedback in the untrained sample; however, the reductions in the trained sample were nonsignificant. The trained group showed significantly less WBA for TFSL under both visual conditions (no feedback: p = 0.015, feedback: p = 0.017). Correlation analysis revealed that participants with high levels of WBA had the greatest response to feedback, whereas trained professional athletes do not possess the same magnitude of WBA. Inexpensive, portable, and widely available gaming technology may be used to evaluate and improve WBA in clinical and sporting settings.
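    The three outcome measures named above (MMD, SI, TFSL) can all be derived from the left and right load signals streamed by the two balance boards. The snippet below is an assumed formulation for illustration only; the study's customized software may define the symmetry index differently.

```python
import numpy as np

def wba_metrics(left_kg, right_kg):
    """Weight-bearing asymmetry metrics from synchronous left/right load samples."""
    left, right = np.asarray(left_kg, float), np.asarray(right_kg, float)
    mmd = np.mean(np.abs(left - right))                               # mean mass difference (kg)
    si = np.mean(np.abs(left - right) / ((left + right) / 2)) * 100   # symmetry index (%)
    tfsl = max(np.mean(left > right), np.mean(right > left)) * 100    # % of time favoring one limb
    return mmd, si, tfsl

# Toy five-sample recording of a squat (kg on each board).
left = [42, 44, 45, 43, 41]
right = [38, 36, 35, 37, 39]
mmd, si, tfsl = wba_metrics(left, right)
print(f"MMD = {mmd:.1f} kg, SI = {si:.1f} %, time favoring one limb = {tfsl:.0f} %")
```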

  5. The use of virtual surgical planning and navigation in the treatment of orbital trauma

    Directory of Open Access Journals (Sweden)

    Alan Scott Herford

    2017-02-01

    Full Text Available Virtual surgical planning (VSP) has recently been introduced in craniomaxillofacial surgery with the goal of improving efficiency and precision for complex surgical operations. Among many indications, VSP can also be applied for the treatment of congenital and acquired craniofacial defects, including orbital fractures. VSP permits the surgeon to visualize the complex anatomy of the craniofacial region, showing the relationship between bone and neurovascular structures. It can be used to design and print customized surgical models using three-dimensional (3D) printing technology. Additionally, intraoperative navigation may be useful as an aid in performing the surgery. Navigation is useful both for the surgical dissection and to confirm the placement of the implant. Navigation has been found to be especially useful for orbit and sinus surgery. The present paper reports a case describing the use of VSP and computerized navigation for the reconstruction of a large orbital floor defect with a custom implant.

  6. Tracking Architecture Based on Dual-Filter with State Feedback and Its Application in Ultra-Tight GPS/INS Integration.

    Science.gov (United States)

    Zhang, Xi; Miao, Lingjuan; Shao, Haijun

    2016-05-02

    If a Kalman Filter (KF) is applied to Global Positioning System (GPS) baseband signal preprocessing, the estimates of signal phase and frequency can have low variance, even in highly dynamic situations. This paper presents a novel preprocessing scheme based on a dual-filter structure. Compared with the traditional model utilizing a single KF, this structure avoids carrier tracking being subjected to code tracking errors. Meanwhile, as the loop filters are completely removed, state feedback values are adopted to generate local carrier and code. Although local carrier frequency has a wide fluctuation, the accuracy of Doppler shift estimation is improved. In the ultra-tight GPS/Inertial Navigation System (INS) integration, the carrier frequency derived from the external navigation information is not viewed as the local carrier frequency directly. That facilitates retaining the design principle of state feedback. However, under harsh conditions, the GPS outputs may still bear large errors which can destroy the estimation of INS errors. Thus, an innovative integrated navigation filter is constructed by modeling the non-negligible errors in the estimated Doppler shifts, to ensure INS is properly calibrated. Finally, field test and semi-physical simulation based on telemetered missile trajectory validate the effectiveness of methods proposed in this paper.
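    The core idea above is that the loop filters are removed and a Kalman filter's state estimate is fed back directly to steer the local carrier and code. The toy scalar-channel sketch below illustrates only that feedback principle on a two-state (phase error, Doppler) model; the matrices, noise values, and discriminator outputs are assumptions and bear no relation to the paper's actual preprocessing model.

```python
import numpy as np

dt = 0.001                                  # integration interval (s), assumed
F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-frequency phase-error model
H = np.array([[1.0, 0.0]])                  # discriminator observes phase error only
Q = np.diag([1e-8, 1e-2])                   # assumed process noise
R = np.array([[1e-4]])                      # assumed discriminator noise

x = np.zeros((2, 1))                        # state: [phase error (cycles); Doppler (Hz)]
P = np.eye(2)

def kf_step(x, P, z):
    """One predict/update cycle; the updated state is the feedback (NCO) command."""
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [0.02, 0.018, 0.015, 0.012]:       # toy phase-discriminator outputs (cycles)
    x, P = kf_step(x, P, z)
print(f"fed-back phase/Doppler estimate: {x[0, 0]:.4f} cycles, {x[1, 0]:.3f} Hz")
```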

  7. Selective and divided attention modulates auditory-vocal integration in the processing of pitch feedback errors.

    Science.gov (United States)

    Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun

    2015-08-01

    Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  8. Interactive balance training integrating sensor-based visual feedback of movement performance: a pilot study in older adults.

    Science.gov (United States)

    Schwenk, Michael; Grewal, Gurtej S; Honarvar, Bahareh; Schwenk, Stefanie; Mohler, Jane; Khalsa, Dharma S; Najafi, Bijan

    2014-12-13

    Wearable sensor technology can accurately measure body motion and provide incentive feedback during exercising. The aim of this pilot study was to evaluate the effectiveness and user experience of a balance training program in older adults integrating data from wearable sensors into a human-computer interface designed for interactive training. Senior living community residents (mean age 84.6) with confirmed fall risk were randomized to an intervention (IG, n = 17) or control group (CG, n = 16). The IG underwent 4 weeks (twice a week) of balance training including weight shifting and virtual obstacle crossing tasks with visual/auditory real-time joint movement feedback using wearable sensors. The CG received no intervention. Outcome measures included changes in center of mass (CoM) sway, ankle and hip joint sway measured during eyes open (EO) and eyes closed (EC) balance test at baseline and post-intervention. Ankle-hip postural coordination was quantified by a reciprocal compensatory index (RCI). Physical performance was quantified by the Alternate-Step-Test (AST), Timed-up-and-go (TUG), and gait assessment. User experience was measured by a standardized questionnaire. After the intervention sway of CoM, hip, and ankle were reduced in the IG compared to the CG during both EO and EC condition (p = .007-.042). Improvement was obtained for AST (p = .037), TUG (p = .024), fast gait speed (p = .010), but not normal gait speed (p = .264). Effect sizes were moderate for all outcomes. RCI did not change significantly. Users expressed a positive training experience including fun, safety, and helpfulness of sensor-feedback. Results of this proof-of-concept study suggest that older adults at risk of falling can benefit from the balance training program. Study findings may help to inform future exercise interventions integrating wearable sensors for guided game-based training in home- and community environments. Future studies should evaluate the
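    One of the sway outcomes above can be illustrated with a few lines of code: root-mean-square CoM sway computed from anterior-posterior and medio-lateral displacement traces, compared between baseline and post-intervention recordings. The example below uses synthetic data and an assumed definition of sway; it is not the study's processing pipeline.

```python
import numpy as np

def rms_sway(ap_cm, ml_cm):
    """RMS distance of the centre of mass from its mean position (cm)."""
    ap = np.asarray(ap_cm) - np.mean(ap_cm)
    ml = np.asarray(ml_cm) - np.mean(ml_cm)
    return float(np.sqrt(np.mean(ap**2 + ml**2)))

rng = np.random.default_rng(0)
baseline = rms_sway(rng.normal(0, 1.2, 3000), rng.normal(0, 0.9, 3000))
post = rms_sway(rng.normal(0, 0.8, 3000), rng.normal(0, 0.6, 3000))
print(f"CoM sway: baseline {baseline:.2f} cm, post-training {post:.2f} cm")
```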

  9. Attention to Color Sharpens Neural Population Tuning via Feedback Processing in the Human Visual Cortex Hierarchy.

    Science.gov (United States)

    Bartsch, Mandy V; Loewe, Kristian; Merkel, Christian; Heinze, Hans-Jochen; Schoenfeld, Mircea A; Tsotsos, John K; Hopf, Jens-Max

    2017-10-25

    Attention can facilitate the selection of elementary object features such as color, orientation, or motion. This is referred to as feature-based attention and it is commonly attributed to a modulation of the gain and tuning of feature-selective units in visual cortex. Although gain mechanisms are well characterized, little is known about the cortical processes underlying the sharpening of feature selectivity. Here, we show with high-resolution magnetoencephalography in human observers (men and women) that sharpened selectivity for a particular color arises from feedback processing in the human visual cortex hierarchy. To assess color selectivity, we analyze the response to a color probe that varies in color distance from an attended color target. We find that attention causes an initial gain enhancement in anterior ventral extrastriate cortex that is coarsely selective for the target color and transitions within ∼100 ms into a sharper tuned profile in more posterior ventral occipital cortex. We conclude that attention sharpens selectivity over time by attenuating the response at lower levels of the cortical hierarchy to color values neighboring the target in color space. These observations support computational models proposing that attention tunes feature selectivity in visual cortex through backward-propagating attenuation of units less tuned to the target. SIGNIFICANCE STATEMENT Whether searching for your car, a particular item of clothing, or just obeying traffic lights, in everyday life, we must select items based on color. But how does attention allow us to select a specific color? Here, we use high spatiotemporal resolution neuromagnetic recordings to examine how color selectivity emerges in the human brain. We find that color selectivity evolves as a coarse to fine process from higher to lower levels within the visual cortex hierarchy. Our observations support computational models proposing that feature selectivity increases over time by attenuating the

  10. 33 CFR 2.36 - Navigable waters of the United States, navigable waters, and territorial waters.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Navigable waters of the United States, navigable waters, and territorial waters. 2.36 Section 2.36 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY GENERAL JURISDICTION Jurisdictional Terms § 2.36 Navigable waters...

  11. Haptic and Visual feedback in 3D Audio Mixing Interfaces

    DEFF Research Database (Denmark)

    Gelineck, Steven; Overholt, Daniel

    2015-01-01

    This paper describes the implementation and informal evaluation of a user interface that explores haptic feedback for 3D audio mixing. The implementation compares different approaches using either the LEAP Motion for mid-air hand gesture control, or the Novint Falcon for active haptic feedback...

  12. Students' Feedback of mDPBL Approach and the Learning Impact towards Computer Networks Teaching and Learning

    Science.gov (United States)

    Winarno, Sri; Muthu, Kalaiarasi Sonai; Ling, Lew Sook

    2018-01-01

    This study presents students' feedback and learning impact on design and development of a multimedia learning in Direct Problem-Based Learning approach (mDPBL) for Computer Networks in Dian Nuswantoro University, Indonesia. This study examined the usefulness, contents and navigation of the multimedia learning as well as learning impacts towards…

  13. A Systematic Review of the Literature on Parenting of Young Children with Visual Impairments and the Adaptions for Video-Feedback Intervention to Promote Positive Parenting (VIPP).

    Science.gov (United States)

    van den Broek, Ellen G C; van Eijden, Ans J P M; Overbeek, Mathilde M; Kef, Sabina; Sterkenburg, Paula S; Schuengel, Carlo

    2017-01-01

    Secure parent-child attachment may help children to overcome the challenges of growing up with a visual or visual-and-intellectual impairment. A large literature exists that provides a blueprint for interventions that promote parental sensitivity and secure attachment. The Video-feedback Intervention to promote Positive Parenting (VIPP) is based on that blueprint. While it has been adapted to several specific at-risk populations, children with visual impairment may require additional adjustments. This study aimed to identify the themes that should be addressed in adapting VIPP and similar interventions. A Delphi-consultation was conducted with 13 professionals in the field of visual impairment to select the themes for relationship-focused intervention. These themes informed a systematic literature search. Interaction, intersubjectivity, joint attention, exploration, play and specific behavior were the themes mentioned in the Delphi-group. Paired with the terms visual impairment or vision disorders and infants or young children (and their parents), the search yielded 74 articles, making the six themes for intervention adaptation more specific and concrete. The rich literature on six visual impairment specific themes was dominated by the themes interaction, intersubjectivity, and joint attention. These themes need to be addressed in adapting intervention programs developed for other populations, such as VIPP which currently focuses on higher order constructs of sensitivity and attachment.

  14. How vision is shaped by language comprehension--top-down feedback based on low-spatial frequencies.

    Science.gov (United States)

    Hirschfeld, Gerrit; Zwitserlood, Pienie

    2011-03-04

    Effects of language comprehension on visual processing have been extensively studied within the embodied-language framework. However, it is unknown whether these effects are caused by passive repetition suppression in visual processing areas, or depend on active feedback, based on partial input, from prefrontal regions. Based on a model of top-down feedback during visual recognition, we predicted diminished effects when low-spatial frequencies were removed from targets. We compared low-pass and high-pass filtered pictures in a sentence-picture-verification task. Target pictures matched or mismatched the implied shape of an object mentioned in a preceding sentence, or were unrelated to the sentences. As predicted, there was a large match advantage when the targets contained low-spatial frequencies, but no effect of linguistic context when these frequencies were filtered out. The proposed top-down feedback model is superior to repetition suppression in explaining the current results, as well as earlier results about the lateralization of this effect, and peculiar color match effects. We discuss these findings in the context of recent general proposals of prediction and top-down feedback. Copyright © 2010 Elsevier B.V. All rights reserved.

  15. A unified framework for image retrieval using keyword and visual features.

    Science.gov (United States)

    Jing, Feng; Li, Mingling; Zhang, Hong-Jiang; Zhang, Bo

    2005-07-01

    In this paper, a unified image retrieval framework based on both keyword annotations and visual features is proposed. In this framework, a set of statistical models are built based on visual features of a small set of manually labeled images to represent semantic concepts and used to propagate keywords to other unlabeled images. These models are updated periodically when more images implicitly labeled by users become available through relevance feedback. In this sense, the keyword models serve the function of accumulation and memorization of knowledge learned from user-provided relevance feedback. Furthermore, two sets of effective and efficient similarity measures and relevance feedback schemes are proposed for query by keyword scenario and query by image example scenario, respectively. Keyword models are combined with visual features in these schemes. In particular, a new, entropy-based active learning strategy is introduced to improve the efficiency of relevance feedback for query by keyword. Furthermore, a new algorithm is proposed to estimate the keyword features of the search concept for query by image example. It is shown to be more appropriate than two existing relevance feedback algorithms. Experimental results demonstrate the effectiveness of the proposed framework.
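    The framework above fuses keyword-model scores with visual similarity, updates the keyword models from relevance feedback, and uses an entropy criterion to decide which image to ask the user about next. The sketch below illustrates those three ingredients in isolation; the weighting scheme, learning rate, and data structures are assumptions for the example, not the paper's algorithms.

```python
import math

def combined_score(keyword_score, visual_score, alpha=0.5):
    """Linear fusion of keyword and visual evidence; alpha is an assumed weight."""
    return alpha * keyword_score + (1 - alpha) * visual_score

def update_keyword_model(weights, image_keywords, relevant, lr=0.2):
    """Nudge keyword weights up for relevant feedback and down otherwise."""
    for kw in image_keywords:
        weights[kw] = weights.get(kw, 0.0) + (lr if relevant else -lr)
    return weights

def entropy(p):
    p = min(max(p, 1e-9), 1 - 1e-9)
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def most_informative(candidates):
    """Entropy-based active learning: query the most uncertain candidate."""
    return max(candidates, key=lambda c: entropy(c["p_relevant"]))

print(combined_score(keyword_score=0.8, visual_score=0.6))          # 0.7
print(update_keyword_model({}, ["beach", "sunset"], relevant=True)) # weights grow
candidates = [
    {"id": "img1", "p_relevant": 0.92},
    {"id": "img2", "p_relevant": 0.49},   # nearly maximal entropy -> queried next
    {"id": "img3", "p_relevant": 0.10},
]
print(most_informative(candidates)["id"])                           # img2
```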

  16. FlexiView: A Magnet-Based Approach for Visualizing Requirements Artifacts

    OpenAIRE

    Ghazi, Parisa; Seyff, Norbert; Glinz, Martin

    2015-01-01

    Requirements engineers create large numbers of artifacts when eliciting and documenting requirements. They need to navigate through these artifacts and display information details at points of interest for reviewing or editing information. [Question/problem] Traditional visualization mechanisms such as scrolling and opening multiple windows lose context when navigating and hence can be cumbersome to use. On the other hand, focus+context approaches can display details in context, but they dis...

  17. Deep Hierarchies in the Primate Visual Cortex

    DEFF Research Database (Denmark)

    Krüger, Norbert; Jannsen, Per; Kalkan, S.

    2013-01-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition or vision-based navigation and manipulation. This article r...

  18. Visual Ecology and the Development of Visually Guided Behavior in the Cuttlefish

    Directory of Open Access Journals (Sweden)

    Anne-Sophie Darmaillacq

    2017-06-01

    Full Text Available Cuttlefish are highly visual animals, a fact reflected in the large size of their eyes and visual-processing centers of their brain. Adults detect their prey visually, navigate using visual cues such as landmarks or the e-vector of polarized light and display intense visual patterns during mating and agonistic encounters. Although much is known about the visual system in adult cuttlefish, few studies have investigated its development and that of visually-guided behavior in juveniles. This review summarizes the results of studies of visual development in embryos and young juveniles. The visual system is the last to develop, as in vertebrates, and is functional before hatching. Indeed, embryonic exposure to prey, shelters or complex background alters postembryonic behavior. Visual acuity and lateralization, and polarization sensitivity improve throughout the first months after hatching. The production of body patterning in juveniles is not the simple stimulus-response process commonly presented in the literature. Rather, it likely requires the complex integration of visual information, and is subject to inter-individual differences. Though the focus of this review is vision in cuttlefish, it is important to note that other senses, particularly sensitivity to vibration and to waterborne chemical signals, also play a role in behavior. Considering the multimodal sensory dimensions of natural stimuli and their integration and processing by individuals offer new exciting avenues of future inquiry.

  19. Visual Ecology and the Development of Visually Guided Behavior in the Cuttlefish.

    Science.gov (United States)

    Darmaillacq, Anne-Sophie; Mezrai, Nawel; O'Brien, Caitlin E; Dickel, Ludovic

    2017-01-01

    Cuttlefish are highly visual animals, a fact reflected in the large size of their eyes and visual-processing centers of their brain. Adults detect their prey visually, navigate using visual cues such as landmarks or the e-vector of polarized light and display intense visual patterns during mating and agonistic encounters. Although much is known about the visual system in adult cuttlefish, few studies have investigated its development and that of visually-guided behavior in juveniles. This review summarizes the results of studies of visual development in embryos and young juveniles. The visual system is the last to develop, as in vertebrates, and is functional before hatching. Indeed, embryonic exposure to prey, shelters or complex background alters postembryonic behavior. Visual acuity and lateralization, and polarization sensitivity improve throughout the first months after hatching. The production of body patterning in juveniles is not the simple stimulus-response process commonly presented in the literature. Rather, it likely requires the complex integration of visual information, and is subject to inter-individual differences. Though the focus of this review is vision in cuttlefish, it is important to note that other senses, particularly sensitivity to vibration and to waterborne chemical signals, also play a role in behavior. Considering the multimodal sensory dimensions of natural stimuli and their integration and processing by individuals offer new exciting avenues of future inquiry.

  20. 77 FR 42637 - Navigation and Navigable Waters; Technical, Organizational, and Conforming Amendments; Corrections

    Science.gov (United States)

    2012-07-20

    ... DEPARTMENT OF HOMELAND SECURITY Coast Guard 33 CFR Parts 84 and 115 [Docket No. USCG-2012-0306] RIN 1625-AB86 Navigation and Navigable Waters; Technical, Organizational, and Conforming Amendments...), the Coast Guard published a final rule entitled ``Navigation and Navigable Waters; Technical...

  1. SOVEREIGN: An autonomous neural system for incrementally learning planned action sequences to navigate towards a rewarded goal.

    Science.gov (United States)

    Gnadt, William; Grossberg, Stephen

    2008-06-01

    How do reactive and planned behaviors interact in real time? How are sequences of such behaviors released at appropriate times during autonomous navigation to realize valued goals? Controllers for both animals and mobile robots, or animats, need reactive mechanisms for exploration, and learned plans to reach goal objects once an environment becomes familiar. The SOVEREIGN (Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation) animat model embodies these capabilities, and is tested in a 3D virtual reality environment. SOVEREIGN includes several interacting subsystems which model complementary properties of cortical What and Where processing streams and which clarify similarities between mechanisms for navigation and arm movement control. As the animat explores an environment, visual inputs are processed by networks that are sensitive to visual form and motion in the What and Where streams, respectively. Position-invariant and size-invariant recognition categories are learned by real-time incremental learning in the What stream. Estimates of target position relative to the animat are computed in the Where stream, and can activate approach movements toward the target. Motion cues from animat locomotion can elicit head-orienting movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement are derived from interacting proprioceptive and visual cues. Movement sequences are stored within a motor working memory. Sequences of visual categories are stored in a sensory working memory. These working memories trigger learning of sensory and motor sequence categories, or plans, which together control planned movements. Predictively effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. Selected planning chunks effect a gradual transition from variable reactive exploratory

  2. Sensory bases of navigation.

    Science.gov (United States)

    Gould, J L

    1998-10-08

    Navigating animals need to know both the bearing of their goal (the 'map' step), and how to determine that direction (the 'compass' step). Compasses are typically arranged in hierarchies, with magnetic backup as a last resort when celestial information is unavailable. Magnetic information is often essential to calibrating celestial cues, though, and repeated recalibration between celestial and magnetic compasses is important in many species. Most magnetic compasses are based on magnetite crystals, but others make use of induction or paramagnetic interactions between short-wavelength light and visual pigments. Though odors may be used in some cases, most if not all long-range maps probably depend on magnetite. Magnetite-based map senses are used to measure only latitude in some species, but provide the distance and direction of the goal in others.

  3. Effects of feedback reliability on feedback-related brain activity: A feedback valuation account.

    Science.gov (United States)

    Ernst, Benjamin; Steinhauser, Marco

    2018-04-06

    Adaptive decision making relies on learning from feedback. Because feedback sometimes can be misleading, optimal learning requires that knowledge about the feedback's reliability be utilized to adjust feedback processing. Although previous research has shown that feedback reliability indeed influences feedback processing, the underlying mechanisms through which this is accomplished remain unclear. Here we propose that feedback processing is adjusted by the adaptive, top-down valuation of feedback. We assume that unreliable feedback is devalued relative to reliable feedback, thus reducing the reward prediction errors that underlie feedback-related brain activity and learning. A crucial prediction of this account is that the effects of feedback reliability are susceptible to contrast effects. That is, the effects of feedback reliability should be enhanced when both reliable and unreliable feedback are experienced within the same context, as compared to when only one level of feedback reliability is experienced. To evaluate this prediction, we measured the event-related potentials elicited by feedback in two experiments in which feedback reliability was varied either within or between blocks. We found that the fronto-central valence effect, a correlate of reward prediction errors during reinforcement learning, was reduced for unreliable feedback. But this result was obtained only when feedback reliability was varied within blocks, thus indicating a contrast effect. This suggests that the adaptive valuation of feedback is one mechanism underlying the effects of feedback reliability on feedback processing.

  4. Pantomime-grasping: Advance knowledge of haptic feedback availability supports an absolute visuo-haptic calibration

    Directory of Open Access Journals (Sweden)

    Shirin eDavarpanah Jazi

    2016-05-01

    Full Text Available An emerging issue in movement neurosciences is whether haptic feedback influences the nature of the information supporting a simulated grasping response (i.e., pantomime-grasping). In particular, recent work by our group contrasted pantomime-grasping responses performed with (i.e., PH+ trials) and without (i.e., PH- trials) terminal haptic feedback in separate blocks of trials. Results showed that PH- trials were mediated via relative visual information. In contrast, PH+ trials showed evidence of an absolute visuo-haptic calibration – a finding attributed to an error signal derived from a comparison between expected and actual haptic feedback (i.e., an internal forward model). The present study examined whether advanced knowledge of haptic feedback availability influences the aforementioned calibration process. To that end, PH- and PH+ trials were completed in separate blocks (i.e., the feedback schedule used in our group's previous study) and in a block wherein PH- and PH+ trials were randomly interleaved on a trial-by-trial basis (i.e., a random feedback schedule). In other words, the random feedback schedule precluded participants from predicting whether haptic feedback would be available at the movement goal location. We computed just-noticeable-difference (JND) values to determine whether responses adhered to, or violated, the relative psychophysical principles of Weber's law. Results for the blocked feedback schedule replicated our group's previous work, whereas in the random feedback schedule PH- and PH+ trials were supported via relative visual information. Accordingly, we propose that a priori knowledge of haptic feedback is necessary to support an absolute visuo-haptic calibration. Moreover, our results demonstrate that the presence and expectancy of haptic feedback is an important consideration in contrasting the behavioral and neural properties of natural and simulated (i.e., pantomime) grasping.
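    Adherence to Weber's law is typically judged from how the just-noticeable difference scales with object size: a roughly constant Weber fraction (JND divided by size) indicates a relative code, while a fraction that shrinks with size points to an absolute calibration. The following lines only illustrate that check with invented numbers; they are not the study's data or analysis.

```python
# Hypothetical JNDs (e.g., derived from grip-aperture variability) for four object sizes.
object_sizes_mm = [20, 30, 40, 50]
jnd_mm = [1.1, 1.6, 2.1, 2.7]

weber_fractions = [j / s for j, s in zip(jnd_mm, object_sizes_mm)]
print([round(w, 3) for w in weber_fractions])
# Roughly constant fractions -> relative (Weber-like) visual coding;
# fractions decreasing with size -> absolute visuo-haptic calibration.
```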

  5. Radar and electronic navigation

    CERN Document Server

    Sonnenberg, G J

    2013-01-01

    Radar and Electronic Navigation, Sixth Edition discusses radar in marine navigation, underwater navigational aids, direction finding, the Decca navigator system, and the Omega system. The book also describes the Loran system for position fixing, the navy navigation satellite system, and the global positioning system (GPS). It reviews the principles, operation, presentations, specifications, and uses of radar. It also describes GPS, a real time position-fixing system in three dimensions (longitude, latitude, altitude), plus velocity information with Universal Time Coordinated (UTC). It is accur

  6. Comparative Visual Analysis of Large Customer Feedback Based on Self-Organizing Sentiment Maps

    OpenAIRE

    Janetzko, Halldór; Jäckle, Dominik; Schreck, Tobias

    2013-01-01

    Textual customer feedback data, e.g., received by surveys or incoming customer email notifications, can be a rich source of information with many applications in Customer Relationship Management (CRM). Nevertheless, to date this valuable source of information is often neglected in practice, as service managers would have to read manually through potentially large amounts of feedback text documents to extract actionable information. As in many cases, a purely manual approach is not feasible, w...

  7. Virtual reality visual feedback for hand-controlled scanning probe microscopy manipulation of single molecules

    Directory of Open Access Journals (Sweden)

    Philipp Leinen

    2015-11-01

    Full Text Available Controlled manipulation of single molecules is an important step towards the fabrication of single molecule devices and nanoscale molecular machines. Currently, scanning probe microscopy (SPM) is the only technique that facilitates direct imaging and manipulations of nanometer-sized molecular compounds on surfaces. The technique of hand-controlled manipulation (HCM) introduced recently in Beilstein J. Nanotechnol. 2014, 5, 1926–1932 simplifies the identification of successful manipulation protocols in situations when the interaction pattern of the manipulated molecule with its environment is not fully known. Here we present a further technical development that substantially improves the effectiveness of HCM. By adding Oculus Rift virtual reality goggles to our HCM set-up we provide the experimentalist with 3D visual feedback that displays the currently executed trajectory and the position of the SPM tip during manipulation in real time, while simultaneously plotting the experimentally measured frequency shift (Δf) of the non-contact atomic force microscope (NC-AFM) tuning fork sensor as well as the magnitude of the electric current (I) flowing between the tip and the surface. The advantages of the set-up are demonstrated by applying it to the model problem of the extraction of an individual PTCDA molecule from its hydrogen-bonded monolayer grown on Ag(111) surface.

  8. Virtual reality visual feedback for hand-controlled scanning probe microscopy manipulation of single molecules.

    Science.gov (United States)

    Leinen, Philipp; Green, Matthew F B; Esat, Taner; Wagner, Christian; Tautz, F Stefan; Temirov, Ruslan

    2015-01-01

    Controlled manipulation of single molecules is an important step towards the fabrication of single molecule devices and nanoscale molecular machines. Currently, scanning probe microscopy (SPM) is the only technique that facilitates direct imaging and manipulations of nanometer-sized molecular compounds on surfaces. The technique of hand-controlled manipulation (HCM) introduced recently in Beilstein J. Nanotechnol. 2014, 5, 1926-1932 simplifies the identification of successful manipulation protocols in situations when the interaction pattern of the manipulated molecule with its environment is not fully known. Here we present a further technical development that substantially improves the effectiveness of HCM. By adding Oculus Rift virtual reality goggles to our HCM set-up we provide the experimentalist with 3D visual feedback that displays the currently executed trajectory and the position of the SPM tip during manipulation in real time, while simultaneously plotting the experimentally measured frequency shift (Δf) of the non-contact atomic force microscope (NC-AFM) tuning fork sensor as well as the magnitude of the electric current (I) flowing between the tip and the surface. The advantages of the set-up are demonstrated by applying it to the model problem of the extraction of an individual PTCDA molecule from its hydrogen-bonded monolayer grown on Ag(111) surface.

  9. Ethical Navigation in Leadership Training

    Directory of Open Access Journals (Sweden)

    Øyvind Kvalnes

    2012-05-01

    Full Text Available Business leaders frequently face dilemmas, circumstances where whatever course of action they choose, something of important value will be offended. How can an organisation prepare its decision makers for such situations? This article presents a pedagogical approach to dilemma training for business leaders and managers. It has evolved through ten years of experience with human resource development, where ethics has been an integral part of programs designed to help individuals to become excellent in their professional roles. The core element in our approach is The Navigation Wheel, a figure used to keep track of relevant decision factors. Feedback from participants indicates that dilemma training has helped them to recognise the ethical dimension of leadership. They respond that the tools and concepts are highly relevant in relation to the challenges that occur in the working environment they return to after leadership training. http://dx.doi.org/10.5324/eip.v6i1.1778

  10. Virtual Hand Feedback Reduces Reaction Time in an Interactive Finger Reaching Task.

    Directory of Open Access Journals (Sweden)

    Johannes Brand

    Full Text Available Computer interaction via visually guided hand or finger movements is a ubiquitous part of daily computer usage in work or gaming. Surprisingly, however, little is known about the performance effects of using virtual limb representations versus simpler cursors. In this study 26 healthy right-handed adults performed cued index finger flexion-extension movements towards an on-screen target while wearing a data glove. They received each of four different types of real-time visual feedback: a simple circular cursor, a point light pattern indicating finger joint positions, a cartoon hand and a fully shaded virtual hand. We found that participants initiated the movements faster when receiving feedback in the form of a hand than when receiving circular cursor or point light feedback. This overall difference was robust for three out of four hand versus circle pairwise comparisons. The faster movement initiation for hand feedback was accompanied by a larger movement amplitude and a larger movement error. We suggest that the observed effect may be related to priming of hand information during action perception and execution affecting motor planning and execution. The results may have applications in the use of body representations in virtual reality applications.

  11. Using Augmented Feedback to Decrease Patellofemoral Pain in Runners: A Pilot Study

    Directory of Open Access Journals (Sweden)

    Lauren M. Cornwell

    2016-05-01

    Full Text Available Objective: Patellofemoral pain (PFP) is a common injury in running. The cause of patellofemoral pain is multifactorial in nature, which results in varied treatment approaches for this disorder. Many studies have examined the effect of using strengthening protocols targeted at subjects' hip and quadriceps strength. Although these studies have resulted in a reduction in short-term PFP for runners, many continue to experience PFP after undergoing these treatment strategies. A more recent theory regarding the treatment of PFP in runners involves the use of augmented verbal and visual feedback. This treatment strategy involves giving the runner scheduled visual feedback to adapt their running strategies in hopes of reducing their PFP. Much of this research has been done with experienced runners in the age range of 18-22 years old. The purpose of this study was to examine the effects of augmented verbal and real-time visual feedback on patellofemoral pain. The hypothesis was that training with the use of auditory and visual feedback would improve patellofemoral pain in this runner. In clinical practice, auditory and visual feedback to change hip and knee mechanics while running may be used as a treatment strategy for patellofemoral pain. Design and Setting: The study was conducted in a controlled laboratory setting and was an experimental design including a single subject. Participants: The subject was a recreational female runner who was 22 years of age. The subject was recruited via a flyer distributed on campus. Once the individual agreed to participate, they were given a date to begin the study. This study was approved by the Institutional Review Board at the institution. When the subject arrived at the first meeting, the informed consent was reviewed and signed by the subject. Intervention: At the first visit, the subject was given a PFP questionnaire to determine if they were eligible for the study. For this study, the subject was classified as

  12. Perceptual learning increases the strength of the earliest signals in visual cortex.

    Science.gov (United States)

    Bao, Min; Yang, Lin; Rios, Cristina; He, Bin; Engel, Stephen A

    2010-11-10

    Training improves performance on most visual tasks. Such perceptual learning can modify how information is read out from, and represented in, later visual areas, but effects on early visual cortex are controversial. In particular, it remains unknown whether learning can reshape neural response properties in early visual areas independent from feedback arising in later cortical areas. Here, we tested whether learning can modify feedforward signals in early visual cortex as measured by the human electroencephalogram. Fourteen subjects were trained for >24 d to detect a diagonal grating pattern in one quadrant of the visual field. Training improved performance, reducing the contrast needed for reliable detection, and also reliably increased the amplitude of the earliest component of the visual evoked potential, the C1. Control orientations and locations showed smaller effects of training. Because the C1 arises rapidly and has a source in early visual cortex, our results suggest that learning can increase early visual area response through local receptive field changes without feedback from later areas.

  13. A Damping Grid Strapdown Inertial Navigation System Based on a Kalman Filter for Ships in Polar Regions.

    Science.gov (United States)

    Huang, Weiquan; Fang, Tao; Luo, Li; Zhao, Lin; Che, Fengzhu

    2017-07-03

    Like a conventional SINS based on a geographic coordinate system, the grid strapdown inertial navigation system (SINS) used in polar navigation also exhibits three kinds of periodic oscillation errors. For ships that can use external information to reset the system regularly, suppressing the Schuler periodic oscillation is an effective way to enhance navigation accuracy. This paper establishes a Kalman filter based on the grid SINS error model applicable to ships. The errors of the grid-level attitude angles can be accurately estimated even when the external velocity contains a constant error, and correcting these attitude errors through feedback then effectively dampens the Schuler periodic oscillation. The simulation results show that, with the aid of an external reference velocity, the proposed external level damping algorithm based on the Kalman filter suppresses the Schuler periodic oscillation effectively. Compared with the traditional external level damping algorithm based on a damping network, the proposed algorithm reduces the overshoot errors when the grid SINS is switched from the non-damping state to the damping state, which effectively improves the navigation accuracy of the system.
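
    As an illustration of the damping idea in this record, the sketch below runs a Kalman filter on a drastically simplified one-dimensional SINS error channel (velocity error and tilt error coupled through Schuler dynamics) and applies feedback correction from a noisy external velocity; the state model, noise values and initial errors are illustrative assumptions, not the paper's grid-SINS error model.

```python
# Minimal sketch (not the paper's model): damping the Schuler oscillation of a simplified
# 1-D SINS error channel with a Kalman filter aided by an external reference velocity.
import numpy as np

g, R, dt = 9.81, 6.371e6, 1.0                  # gravity, Earth radius, time step [s]
F = np.array([[1.0, -g * dt],                  # state: [velocity error (m/s), tilt error (rad)]
              [dt / R, 1.0]])
H = np.array([[1.0, 0.0]])                     # external velocity observes the velocity error
Q = np.diag([1e-4, 1e-12])                     # illustrative process noise
Rm = np.array([[0.25]])                        # illustrative measurement noise, (0.5 m/s)^2

x_true = np.array([0.5, 1e-4])                 # assumed initial velocity/tilt errors
x_est, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(0)

for _ in range(3600):                          # one hour of 1 Hz external aiding
    x_true = F @ x_true                        # undamped Schuler propagation of the true error
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    z = np.array([x_true[0] + rng.normal(0.0, 0.5)])   # noisy external velocity error
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rm)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P
    x_true = x_true - x_est                    # feedback correction damps the oscillation
    x_est = np.zeros(2)                        # estimate reset after the correction is applied

print("residual velocity/tilt error after 1 h:", x_true)
```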

  14. Granger causal connectivity dissociates navigation networks that subserve allocentric and egocentric path integration.

    Science.gov (United States)

    Lin, Chin-Teng; Chiu, Te-Cheng; Wang, Yu-Kai; Chuang, Chun-Hsiang; Gramann, Klaus

    2018-01-15

    Studies on spatial navigation demonstrate a significant role of the retrosplenial complex (RSC) in the transformation of egocentric and allocentric information into complementary spatial reference frames (SRFs). The tight anatomical connections of the RSC with a wide range of other cortical regions processing spatial information support its vital role within the human navigation network. To better understand how different areas of the navigational network interact, we investigated the dynamic causal interactions of brain regions involved in solving a virtual navigation task. EEG signals were decomposed by independent component analysis (ICA) and subsequently examined for information flow between clusters of independent components (ICs) using direct short-time directed transfer function (sdDTF). The results revealed information flow between the anterior cingulate cortex and the left prefrontal cortex in the theta (4-7 Hz) frequency band and between the prefrontal, motor, parietal, and occipital cortices as well as the RSC in the alpha (8-13 Hz) frequency band. When participants' preference for distinct reference frames (egocentric vs. allocentric) during navigation was considered, a dominant occipito-parieto-RSC network was identified in allocentric navigators. These results are in line with the assumption that the RSC, parietal, and occipital cortices are involved in transforming egocentric visual-spatial information into an allocentric reference frame. Moreover, the RSC demonstrated the strongest causal flow during changes in orientation, suggesting that this structure directly provides information on heading changes in humans. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Development of performance measures based on visibility for effective placement of aids to navigation

    Science.gov (United States)

    Fang, Tae Hyun; Kim, Yeon-Gyu; Gong, In-Young; Park, Sekil; Kim, Ah-Young

    2015-09-01

    In order to improve the challenging process of placing Aids to Navigation (AtoN), we propose performance measures which quantify the effect of such placement. The best placement of AtoNs is that from which the navigator can best recognize the information provided by an AtoN. The visibility of AtoNs depends mostly on the light sources, the weather conditions and the position of the navigator. Visual recognition is enabled by achieving adequate contrast between the AtoN light source and the background light. Therefore, the performance measures can be formulated from the difference between these two lights. For simplification, this approach is based on the human-factor values suggested by the International Association of Marine Aids to Navigation and Lighthouse Authorities (IALA). Performance measures for AtoN placement can be evaluated with the AtoN Simulator, which is being developed by KIOST/KRISO in Korea and has been launched under the Korea National Research Program. Simulations for evaluation are carried out at a waterway in Busan port in Korea.

  16. Development of performance measures based on visibility for effective placement of aids to navigation

    Directory of Open Access Journals (Sweden)

    Tae Hyun Fang

    2015-05-01

    Full Text Available In order to improve the challenging process of placing Aids to Navigation (AtoN), we propose performance measures which quantify the effect of such placement. The best placement of AtoNs is that from which the navigator can best recognize the information provided by an AtoN. The visibility of AtoNs depends mostly on the light sources, the weather conditions and the position of the navigator. Visual recognition is enabled by achieving adequate contrast between the AtoN light source and the background light. Therefore, the performance measures can be formulated from the difference between these two lights. For simplification, this approach is based on the human-factor values suggested by the International Association of Marine Aids to Navigation and Lighthouse Authorities (IALA). Performance measures for AtoN placement can be evaluated with the AtoN Simulator, which is being developed by KIOST/KRISO in Korea and has been launched under the Korea National Research Program. Simulations for evaluation are carried out at a waterway in Busan port in Korea.
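
    Since the abstract only states that the measure is built from the difference between the AtoN light and the background light, the sketch below uses a generic contrast ratio and an assumed detection threshold in place of the IALA human-factor values; the function names and numbers are illustrative, not the paper's formulation.

```python
# Illustrative sketch of a visibility-based placement score: a Weber-type contrast between
# the AtoN light and its background, thresholded at an assumed detection level.
def contrast(source_luminance: float, background_luminance: float) -> float:
    """Weber-type contrast between an AtoN light and its background."""
    return (source_luminance - background_luminance) / background_luminance

def placement_score(luminances, threshold: float = 0.05) -> float:
    """Fraction of navigator positions from which the light exceeds the contrast threshold.

    `luminances` is a list of (source, background) pairs, one per evaluated position.
    """
    visible = sum(1 for src, bg in luminances if contrast(src, bg) >= threshold)
    return visible / len(luminances)

# Example: the light stands out clearly at two of three simulated navigator positions.
print(placement_score([(120.0, 100.0), (101.0, 100.0), (180.0, 100.0)]))
```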

  17. Brain-Computer Interfaces With Multi-Sensory Feedback for Stroke Rehabilitation: A Case Study.

    Science.gov (United States)

    Irimia, Danut C; Cho, Woosang; Ortner, Rupert; Allison, Brendan Z; Ignat, Bogdan E; Edlinger, Guenter; Guger, Christoph

    2017-11-01

    Conventional therapies do not provide paralyzed patients with closed-loop sensorimotor integration for motor rehabilitation. This work presents the recoveriX system, a hardware and software platform that combines a motor imagery (MI)-based brain-computer interface (BCI), functional electrical stimulation (FES), and visual feedback technologies for a complete sensorimotor closed-loop therapy system for poststroke rehabilitation. The proposed system was tested on two chronic stroke patients in a clinical environment. The patients were instructed to imagine the movement of either the left or right hand in random order. During these two MI tasks, two types of feedback were provided: a bar extending to the left or right side of a monitor as visual feedback and passive hand opening stimulated from FES as proprioceptive feedback. Both types of feedback relied on the BCI classification result achieved using common spatial patterns and a linear discriminant analysis classifier. After 10 sessions of recoveriX training, one patient partially regained control of wrist extension in her paretic wrist and the other patient increased the range of middle finger movement by 1 cm. A controlled group study is planned with a new version of the recoveriX system, which will have several improvements. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
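
    A rough sketch of the classification pipeline named in the abstract (common spatial patterns followed by linear discriminant analysis), applied here to synthetic two-class "EEG" trials; the trial sizes, filter count and data are invented, and the actual recoveriX preprocessing is not described in the record.

```python
# Sketch of a CSP + LDA classifier on synthetic two-class trials (channels x samples).
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 40, 8, 256

def make_trials(boost_channel):
    """Synthetic trials where one channel carries extra variance for that class."""
    x = rng.normal(size=(n_trials, n_channels, n_samples))
    x[:, boost_channel, :] *= 3.0
    return x

left, right = make_trials(1), make_trials(6)

def mean_cov(trials):
    return np.mean([t @ t.T / t.shape[1] for t in trials], axis=0)

# CSP: generalized eigendecomposition of the two class covariances.
c_left, c_right = mean_cov(left), mean_cov(right)
eigvals, eigvecs = eigh(c_left, c_left + c_right)
order = np.argsort(eigvals)
filters = eigvecs[:, np.r_[order[:2], order[-2:]]].T   # two filters per class

def features(trials):
    projected = np.einsum('fc,ncs->nfs', filters, trials)
    return np.log(projected.var(axis=2))                # log-variance CSP features

X = np.vstack([features(left), features(right)])
y = np.array([0] * n_trials + [1] * n_trials)
clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```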

  18. Effect of physical workload and modality of information presentation on pattern recognition and navigation task performance by high-fit young males.

    Science.gov (United States)

    Zahabi, Maryam; Zhang, Wenjuan; Pankok, Carl; Lau, Mei Ying; Shirley, James; Kaber, David

    2017-11-01

    Many occupations require both physical exertion and cognitive task performance. Knowledge of any interaction between physical demands and modalities of cognitive task information presentation can provide a basis for optimising performance. This study examined the effect of physical exertion and modality of information presentation on pattern recognition and navigation-related information processing. Results indicated males of equivalent high fitness, between the ages of 18 and 34, rely more on visual cues vs auditory or haptic for pattern recognition when exertion level is high. We found that navigation response time was shorter under low and medium exertion levels as compared to high intensity. Navigation accuracy was lower under high level exertion compared to medium and low levels. In general, findings indicated that use of the haptic modality for cognitive task cueing decreased accuracy in pattern recognition responses. Practitioner Summary: An examination was conducted on the effect of physical exertion and information presentation modality in pattern recognition and navigation. In occupations requiring information presentation to workers, who are simultaneously performing a physical task, the visual modality appears most effective under high level exertion while haptic cueing degrades performance.

  19. New Navigation Post-Processing Tools for Oceanographic Submersibles

    Science.gov (United States)

    Kinsey, J. C.; Whitcomb, L. L.; Yoerger, D. R.; Howland, J. C.; Ferrini, V. L.; Hegrenas, O.

    2006-12-01

    We report the development of Navproc, a new set of software tools for post-processing oceanographic submersible navigation data that exploits previously reported improvements in navigation sensing and estimation (e.g. Eos Trans. AGU, 84(46), Fall Meet. Suppl., Abstract OS32A-0225, 2003). The development of these tools is motivated by the need to have post-processing software that allows users to compensate for errors in vehicle navigation, recompute the vehicle position, and then save the results for use with quantitative science data (e.g. bathymetric sonar data) obtained during the mission. Navproc does not provide real-time navigation or display of data, nor is it capable of high-resolution, three-dimensional (3D) data display. Navproc supports the ASCII data formats employed by the vehicles of the National Deep Submergence Facility (NDSF) operated by the Woods Hole Oceanographic Institution (WHOI). Post-processing of navigation data with Navproc comprises three tasks. First, data is converted from the logged ASCII file to a binary Matlab file. When loaded into Matlab, each sensor has a data structure containing the time-stamped data sampled at the native update rate of the sensor. An additional structure contains the real-time vehicle navigation data. Second, the data can be displayed using a Graphical User Interface (GUI), allowing users to visually inspect the quality of the data and graphically extract portions of the data. Third, users can compensate for errors in the real-time vehicle navigation. Corrections include: (i) manual filtering and median filtering of long baseline (LBL) ranges; (ii) estimation of the Doppler/gyro alignment using previously reported methodologies; and (iii) sound velocity, tide, and LBL transponder corrections. Using these corrections, the Doppler and LBL positions can be recomputed to provide improved estimates of the vehicle position compared to those computed in real-time. The data can be saved in either binary or ASCII
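
    One of the corrections listed above is median filtering of long baseline (LBL) ranges; the fragment below shows a generic version of that idea (flagging samples that deviate from a running median), with the window size and deviation threshold chosen arbitrarily rather than taken from Navproc.

```python
# Illustrative outlier rejection for an LBL range series: samples far from the running
# median are marked invalid. Window length and threshold are assumptions, not Navproc's.
import numpy as np
from scipy.signal import medfilt

def reject_lbl_outliers(ranges, window=9, max_dev=15.0):
    """Return a copy of `ranges` with samples far from the running median set to NaN."""
    ranges = np.asarray(ranges, dtype=float)
    smooth = medfilt(ranges, kernel_size=window)
    cleaned = ranges.copy()
    cleaned[np.abs(ranges - smooth) > max_dev] = np.nan
    return cleaned

raw = np.concatenate([np.linspace(1500, 1520, 40), [2300.0], np.linspace(1520, 1510, 40)])
print(np.isnan(reject_lbl_outliers(raw)).sum(), "range(s) flagged as outliers")
```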

  20. Biophysical network modeling of the dLGN circuit: Effects of cortical feedback on spatial response properties of relay cells.

    Directory of Open Access Journals (Sweden)

    Pablo Martínez-Cañada

    2018-01-01

    Full Text Available Despite half-a-century of research since the seminal work of Hubel and Wiesel, the role of the dorsal lateral geniculate nucleus (dLGN) in shaping the visual signals is not properly understood. Placed en route from retina to primary visual cortex in the early visual pathway, a striking feature of the dLGN circuit is that both the relay cells (RCs) and interneurons (INs) not only receive feedforward input from retinal ganglion cells, but also a prominent feedback from cells in layer 6 of visual cortex. This feedback has been proposed to affect synchronicity and other temporal properties of the RC firing. It has also been seen to affect spatial properties such as the center-surround antagonism of thalamic receptive fields, i.e., the suppression of the response to very large stimuli compared to smaller, more optimal stimuli. Here we explore the spatial effects of cortical feedback on the RC response by means of a comprehensive network model with biophysically detailed, single-compartment and multicompartment neuron models of RCs, INs and a population of orientation-selective layer 6 simple cells, consisting of pyramidal cells (PY). We have considered two different arrangements of synaptic feedback from the ON and OFF zones in the visual cortex to the dLGN: phase-reversed ('push-pull') and phase-matched ('push-push'), as well as different spatial extents of the corticothalamic projection pattern. Our simulation results support that a phase-reversed arrangement provides a more effective way for cortical feedback to provide the increased center-surround antagonism seen in experiments both for flashing spots and, even more prominently, for patch gratings. This implies that ON-center RCs receive direct excitation from OFF-dominated cortical cells and indirect inhibitory feedback from ON-dominated cortical cells. The increased center-surround antagonism in the model is accompanied by spatial focusing, i.e., the maximum RC response occurs for smaller stimuli

  1. Biophysical network modeling of the dLGN circuit: Effects of cortical feedback on spatial response properties of relay cells

    Science.gov (United States)

    Martínez-Cañada, Pablo; Halnes, Geir; Fyhn, Marianne

    2018-01-01

    Despite half-a-century of research since the seminal work of Hubel and Wiesel, the role of the dorsal lateral geniculate nucleus (dLGN) in shaping the visual signals is not properly understood. Placed en route from retina to primary visual cortex in the early visual pathway, a striking feature of the dLGN circuit is that both the relay cells (RCs) and interneurons (INs) not only receive feedforward input from retinal ganglion cells, but also a prominent feedback from cells in layer 6 of visual cortex. This feedback has been proposed to affect synchronicity and other temporal properties of the RC firing. It has also been seen to affect spatial properties such as the center-surround antagonism of thalamic receptive fields, i.e., the suppression of the response to very large stimuli compared to smaller, more optimal stimuli. Here we explore the spatial effects of cortical feedback on the RC response by means of a comprehensive network model with biophysically detailed, single-compartment and multicompartment neuron models of RCs, INs and a population of orientation-selective layer 6 simple cells, consisting of pyramidal cells (PY). We have considered two different arrangements of synaptic feedback from the ON and OFF zones in the visual cortex to the dLGN: phase-reversed (‘push-pull’) and phase-matched (‘push-push’), as well as different spatial extents of the corticothalamic projection pattern. Our simulation results support that a phase-reversed arrangement provides a more effective way for cortical feedback to provide the increased center-surround antagonism seen in experiments both for flashing spots and, even more prominently, for patch gratings. This implies that ON-center RCs receive direct excitation from OFF-dominated cortical cells and indirect inhibitory feedback from ON-dominated cortical cells. The increased center-surround antagonism in the model is accompanied by spatial focusing, i.e., the maximum RC response occurs for smaller stimuli when

  2. Persuasive performance feedback: the effect of framing on self-efficacy.

    Science.gov (United States)

    Choe, Eun Kyoung; Lee, Bongshin; Munson, Sean; Pratt, Wanda; Kientz, Julie A

    2013-01-01

    Self-monitoring technologies have proliferated in recent years as they offer excellent potential for promoting healthy behaviors. Although these technologies have varied ways of providing real-time feedback on a user's current progress, we have a dearth of knowledge of the framing effects on the performance feedback these tools provide. With an aim to create influential, persuasive performance feedback that will nudge people toward healthy behaviors, we conducted an online experiment to investigate the effect of framing on an individual's self-efficacy. We identified 3 different types of framing that can be applicable in presenting performance feedback: (1) the valence of performance (remaining vs. achieved framing), (2) presentation type (text-only vs. text with visual), and (3) data unit (raw vs. percentage). Results show that the achieved framing could lead to an increased perception of individual's performance capabilities. This work provides empirical guidance for creating persuasive performance feedback, thereby helping people designing self-monitoring technologies to promote healthy behaviors.

  3. Persuasive Performance Feedback: The Effect of Framing on Self-Efficacy

    Science.gov (United States)

    Choe, Eun Kyoung; Lee, Bongshin; Munson, Sean; Pratt, Wanda; Kientz, Julie A.

    2013-01-01

    Self-monitoring technologies have proliferated in recent years as they offer excellent potential for promoting healthy behaviors. Although these technologies have varied ways of providing real-time feedback on a user’s current progress, we have a dearth of knowledge of the framing effects on the performance feedback these tools provide. With an aim to create influential, persuasive performance feedback that will nudge people toward healthy behaviors, we conducted an online experiment to investigate the effect of framing on an individual’s self-efficacy. We identified 3 different types of framing that can be applicable in presenting performance feedback: (1) the valence of performance (remaining vs. achieved framing), (2) presentation type (text-only vs. text with visual), and (3) data unit (raw vs. percentage). Results show that the achieved framing could lead to an increased perception of individual’s performance capabilities. This work provides empirical guidance for creating persuasive performance feedback, thereby helping people designing self-monitoring technologies to promote healthy behaviors. PMID:24551378

  4. Social Cognition as Reinforcement Learning: Feedback Modulates Emotion Inference.

    Science.gov (United States)

    Zaki, Jamil; Kallman, Seth; Wimmer, G Elliott; Ochsner, Kevin; Shohamy, Daphna

    2016-09-01

    Neuroscientific studies of social cognition typically employ paradigms in which perceivers draw single-shot inferences about the internal states of strangers. Real-world social inference features much different parameters: People often encounter and learn about particular social targets (e.g., friends) over time and receive feedback about whether their inferences are correct or incorrect. Here, we examined this process and, more broadly, the intersection between social cognition and reinforcement learning. Perceivers were scanned using fMRI while repeatedly encountering three social targets who produced conflicting visual and verbal emotional cues. Perceivers guessed how targets felt and received feedback about whether they had guessed correctly. Visual cues reliably predicted one target's emotion, verbal cues predicted a second target's emotion, and neither reliably predicted the third target's emotion. Perceivers successfully used this information to update their judgments over time. Furthermore, trial-by-trial learning signals-estimated using two reinforcement learning models-tracked activity in ventral striatum and ventromedial pFC, structures associated with reinforcement learning, and regions associated with updating social impressions, including TPJ. These data suggest that learning about others' emotions, like other forms of feedback learning, relies on domain-general reinforcement mechanisms as well as domain-specific social information processing.
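
    The trial-by-trial learning signals mentioned above were estimated with reinforcement learning models; the toy below uses a Rescorla-Wagner-style update of how much each cue type is trusted for a given target, with the learning rate, trial counts and reliability coding all assumed for illustration rather than taken from the study.

```python
# Toy reinforcement-learning sketch: prediction-error updates of trust in visual vs. verbal
# cues about a social target's emotion. All parameters are illustrative assumptions.
import random

def simulate_target(visual_reliability, n_trials=60, alpha=0.15, seed=1):
    """Learn weights for visual/verbal cues from feedback about guessed emotions."""
    rng = random.Random(seed)
    w = {"visual": 0.5, "verbal": 0.5}          # initial trust in each cue
    for _ in range(n_trials):
        cue = max(w, key=w.get)                  # guess using the currently trusted cue
        p_correct = visual_reliability if cue == "visual" else 1.0 - visual_reliability
        feedback = 1.0 if rng.random() < p_correct else 0.0
        w[cue] += alpha * (feedback - w[cue])    # Rescorla-Wagner prediction-error update
    return w

print("visual-cue target:", simulate_target(visual_reliability=0.9))
print("verbal-cue target:", simulate_target(visual_reliability=0.1))
```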

  5. 33 CFR 66.05-100 - Designation of navigable waters as State waters for private aids to navigation.

    Science.gov (United States)

    2010-07-01

    ... as State waters for private aids to navigation. 66.05-100 Section 66.05-100 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY AIDS TO NAVIGATION PRIVATE AIDS TO NAVIGATION State Aids to Navigation § 66.05-100 Designation of navigable waters as State waters for private aids to...

  6. Switching from reaching to navigation: differential cognitive strategies for spatial memory in children and adults.

    Science.gov (United States)

    Belmonti, Vittorio; Cioni, Giovanni; Berthoz, Alain

    2015-07-01

    Navigational and reaching spaces are known to involve different cognitive strategies and brain networks, whose development in humans is still debated. In fact, high-level spatial processing, including allocentric location encoding, is already available to very young children, but navigational strategies are not mature until late childhood. The Magic Carpet (MC) is a new electronic device translating the traditional Corsi Block-tapping Test (CBT) to navigational space. In this study, the MC and the CBT were used to assess spatial memory for navigation and for reaching, respectively. Our hypothesis was that school-age children would not treat MC stimuli as navigational paths, assimilating them to reaching sequences. Ninety-one healthy children aged 6 to 11 years and 18 adults were enrolled. Overall short-term memory performance (span) on both tests, effects of sequence geometry, and error patterns according to a new classification were studied. Span increased with age on both tests, but relatively more in navigational than in reaching space, particularly in males. Sequence geometry specifically influenced navigation, not reaching. The number of body rotations along the path affected MC performance in children more than in adults, and in women more than in men. Error patterns indicated that navigational sequences were increasingly retained as global paths across development, in contrast to separately stored reaching locations. A sequence of spatial locations can be coded as a navigational path only if a cognitive switch from a reaching mode to a navigation mode occurs. This implies the integration of egocentric and allocentric reference frames, of visual and idiothetic cues, and access to long-term memory. This switch is not yet fulfilled at school age due to immature executive functions. © 2014 John Wiley & Sons Ltd.

  7. Motor transfer from map ocular exploration to locomotion during spatial navigation from memory.

    Science.gov (United States)

    Demichelis, Alixia; Olivier, Gérard; Berthoz, Alain

    2013-02-01

    Spatial navigation from memory can rely on two different strategies: a mental simulation of a kinesthetic spatial navigation (egocentric route strategy) or visual-spatial memory using a mental map (allocentric survey strategy). We hypothesized that a previously performed "oculomotor navigation" on a map could be used by the brain to perform a locomotor memory task. Participants were instructed to (1) learn a path on a map through a sequence of vertical and horizontal eyes movements and (2) walk on the slabs of a "magic carpet" to recall this path. The main results showed that the anisotropy of ocular movements (horizontal ones being more efficient than vertical ones) influenced performances of participants when they change direction on the central slab of the magic carpet. These data suggest that, to find their way through locomotor space, subjects mentally repeated their past ocular exploration of the map, and this visuo-motor memory was used as a template for the locomotor performance.

  8. Method of mobile robot indoor navigation by artificial landmarks with use of computer vision

    Science.gov (United States)

    Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.

    2018-05-01

    The article describes an algorithm for mobile robot indoor navigation based on visual odometry. Results of an experiment identifying errors in the calculated distance traveled when the wheels slip are presented. It is shown that the use of computer vision allows one to correct erroneous robot coordinates with the help of artificial landmarks. A control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a Raspberry Pi 3 single-board computer. The results of an experiment on mobile robot navigation using this control system are presented.
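
    A minimal sketch of the correction step described above: dead-reckoned coordinates that have drifted (for example because of wheel slip) are pulled toward a position fix derived from a recognized artificial landmark with a known map location. The landmark table, measured offsets and blending weight are hypothetical, not the paper's values.

```python
# Sketch: blend a drifting odometry estimate with a fix computed from a recognized landmark.
LANDMARKS = {"door_A": (2.0, 0.0), "corner_B": (2.0, 3.0)}   # hypothetical map (metres)

def correct_pose(odom_xy, landmark_id, measured_offset_xy, weight=0.8):
    """Blend the odometry estimate with the landmark-derived position fix."""
    lx, ly = LANDMARKS[landmark_id]
    fix = (lx - measured_offset_xy[0], ly - measured_offset_xy[1])
    return tuple((1 - weight) * o + weight * f for o, f in zip(odom_xy, fix))

# Odometry says (1.4, 0.3), but the camera sees door_A 0.5 m ahead along x.
print(correct_pose((1.4, 0.3), "door_A", (0.5, 0.0)))
```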

  9. Navigation Problems in Blind-to-Blind Pedestrians Tele-assistance Navigation

    OpenAIRE

    Balata , Jan; Mikovec , Zdenek; Maly , Ivo

    2015-01-01

    International audience; We raise a question whether it is possible to build a large-scale navigation system for blind pedestrians where a blind person navigates another blind person remotely by mobile phone. We have conducted an experiment, in which we observed blind people navigating each other in a city center in 19 sessions. We focused on problems in the navigator’s attempts to direct the traveler to the destination. We observed 96 problems in total, classified them on the basis of the typ...

  10. Vision/INS Integrated Navigation System for Poor Vision Navigation Environments

    Directory of Open Access Journals (Sweden)

    Youngsun Kim

    2016-10-01

    Full Text Available In order to improve the performance of an inertial navigation system, many aiding sensors can be used. Among these aiding sensors, a vision sensor is of particular note due to its benefits in terms of weight, cost, and power consumption. This paper proposes an inertial and vision integrated navigation method for poor vision navigation environments. The proposed method uses focal plane measurements of landmarks in order to provide position, velocity and attitude outputs even when the number of landmarks on the focal plane is not enough for navigation. In order to verify the proposed method, computer simulations and van tests are carried out. The results show that the proposed method gives accurate and reliable position, velocity and attitude outputs when the number of landmarks is insufficient.

  11. A Depth-Based Head-Mounted Visual Display to Aid Navigation in Partially Sighted Individuals

    Science.gov (United States)

    Hicks, Stephen L.; Wilson, Iain; Muhammed, Louwai; Worsfold, John; Downes, Susan M.; Kennard, Christopher

    2013-01-01

    Independent navigation for blind individuals can be extremely difficult due to the inability to recognise and avoid obstacles. Assistive techniques such as white canes, guide dogs, and sensory substitution provide a degree of situational awareness by relying on touch or hearing but as yet there are no techniques that attempt to make use of any residual vision that the individual is likely to retain. Residual vision can be restricted to the awareness of the orientation of a light source, and hence any information presented on a wearable display would have to be limited and unambiguous. For improved situational awareness, i.e. for the detection of obstacles, displaying the size and position of nearby objects, rather than including finer surface details, may be sufficient. To test whether a depth-based display could be used to navigate a small obstacle course, we built a real-time head-mounted display with a depth camera and software to detect the distance to nearby objects. Distance was represented as brightness on a low-resolution display positioned close to the eyes without the benefit of focusing optics. A set of sighted participants were monitored as they learned to use this display to navigate the course. All were able to do so, and time and velocity rapidly improved with practice with no increase in the number of collisions. In a second experiment a cohort of severely sight-impaired individuals of varying aetiologies performed a search task using a similar low-resolution head-mounted display. The majority of participants were able to use the display to respond to objects in their central and peripheral fields at a similar rate to sighted controls. We conclude that the skill to use a depth-based display for obstacle avoidance can be rapidly acquired and the simplified nature of the display may be appropriate for the development of an aid for sight-impaired individuals. PMID:23844067
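
    The display described above encodes distance as brightness on a very low-resolution image; the sketch below shows one way such a mapping could work, with the 8x8 output grid and the 0.3-3.0 m working range assumed for illustration rather than taken from the device.

```python
# Sketch of a depth-to-brightness mapping: nearer objects become brighter cells in a
# low-resolution grid. Grid size and depth limits are assumptions.
import numpy as np

def depth_to_brightness(depth_m: np.ndarray, grid=(8, 8), near=0.3, far=3.0) -> np.ndarray:
    """Return a grid of 0-255 brightness values, 255 for the nearest obstacles."""
    h, w = depth_m.shape
    gh, gw = grid
    out = np.zeros(grid, dtype=np.uint8)
    for i in range(gh):
        for j in range(gw):
            cell = depth_m[i * h // gh:(i + 1) * h // gh, j * w // gw:(j + 1) * w // gw]
            d = np.clip(np.min(cell), near, far)          # nearest point in the cell
            out[i, j] = np.uint8(255 * (far - d) / (far - near))
    return out

frame = np.full((120, 160), 3.0)
frame[40:80, 60:100] = 0.6                                # an obstacle about 0.6 m ahead
print(depth_to_brightness(frame))
```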

  12. ACCURACY EVALUATION OF THE OBJECT LOCATION VISUALIZATION FOR GEO-INFORMATION AND DISPLAY SYSTEMS OF MANNED AIRCRAFTS NAVIGATION COMPLEXES

    Directory of Open Access Journals (Sweden)

    M. O. Kostishin

    2014-01-01

    Full Text Available The paper deals with estimating the accuracy of object location display in the geographic information and display systems of manned aircraft navigation complexes. Application features of liquid crystal screens with different numbers of vertical and horizontal pixels are considered when geographic information is displayed at different scales. Navigation parameter values are displayed on board the aircraft in two ways: a numeric value is shown directly on the screen of a multi-color indicator, and a silhouette of the object is formed on the screen over a substrate background, which is a graphical representation of the area map in the flight zone. Various scales of digital area map display currently used in the aviation industry are considered. Calculation results for the scale interval of one pixel, depending on the specifications of the liquid crystal screen and the zoom of the map display area on the multifunction digital display, are given. The paper contains experimental results of the accuracy evaluation for the area position display of the aircraft, based on data from the satellite navigation system and the inertial navigation system obtained during a flight program run on a real object. On the basis of these calculations, a family of graphs was created for the display error of the object reference point position using onboard indicators with liquid crystal screens of different resolutions (6"×8", 7.2"×9.6", 9"×12") for two map display scales (1:0.25 km and 1:2 km). These dependency graphs can be used both to assess the display error of the object's area position in existing navigation systems and to calculate the error when upgrading facilities.
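
    A worked example of the "scale interval of one pixel" quantity discussed above, i.e. how much ground distance a single pixel represents for a given screen and map scale; the interpretation of the scale as ground metres per screen centimetre and the screen parameters are assumptions for illustration only.

```python
# Worked example: ground distance represented by one pixel, from screen width, horizontal
# resolution and an assumed "metres of ground per screen centimetre" map scale.
def ground_metres_per_pixel(screen_width_in: float, horizontal_pixels: int,
                            metres_per_screen_cm: float) -> float:
    pixel_size_cm = screen_width_in * 2.54 / horizontal_pixels
    return pixel_size_cm * metres_per_screen_cm

# A hypothetical 9-inch-wide panel with 1024 horizontal pixels at 250 m per screen cm:
print(round(ground_metres_per_pixel(9.0, 1024, 250.0), 2), "m of ground per pixel")
```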

  13. Construct and face validity of a virtual reality-based camera navigation curriculum.

    Science.gov (United States)

    Shetty, Shohan; Panait, Lucian; Baranoski, Jacob; Dudrick, Stanley J; Bell, Robert L; Roberts, Kurt E; Duffy, Andrew J

    2012-10-01

    Camera handling and navigation are essential skills in laparoscopic surgery. Surgeons rely on camera operators, usually the least experienced members of the team, for visualization of the operative field. Essential skills for camera operators include maintaining orientation, an effective horizon, appropriate zoom control, and a clean lens. Virtual reality (VR) simulation may be a useful adjunct to developing camera skills in a novice population. No standardized VR-based camera navigation curriculum is currently available. We developed and implemented a novel curriculum on the LapSim VR simulator platform for our residents and students. We hypothesize that our curriculum will demonstrate construct and face validity in our trainee population, distinguishing levels of laparoscopic experience as part of a realistic training curriculum. Overall, 41 participants with various levels of laparoscopic training completed the curriculum. Participants included medical students, surgical residents (Postgraduate Years 1-5), fellows, and attendings. We stratified subjects into three groups (novice, intermediate, and advanced) based on previous laparoscopic experience. We assessed face validity with a questionnaire. The proficiency-based curriculum consists of three modules: camera navigation, coordination, and target visualization using 0° and 30° laparoscopes. Metrics include time, target misses, drift, path length, and tissue contact. We analyzed data using analysis of variance and Student's t-test. We noted significant differences in repetitions required to complete the curriculum: 41.8 for novices, 21.2 for intermediates, and 11.7 for the advanced group (P < 0.05). […] medical students during their surgery rotations. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Trade typhoon over Japan: Turbulence metaphor and spatial production cycles feedback loops of the Japanese economy, 1980–85–90

    Directory of Open Access Journals (Sweden)

    M. Sonis

    2002-01-01

    Full Text Available This paper deals with the turbulence similitude between whirlpool structure of atmosphere disturbances and the spatial production cycles. Such an analogy leads to the production cycles feedback loops superposition analysis of trade feedbacks reflecting the economic phenomena of horizontal and vertical trade specifications. Moreover, the visualization of this process is achieved with the help of coloring the different permutation matrices presenting the hierarchy of production cycles feedback loops. In this manner the qualitative presentation of Japan inter-regional and inter-industry trade, 1980–85–90, is visualized and interpreted.

  15. Attentional effects in the visual pathways

    DEFF Research Database (Denmark)

    Bundesen, Claus; Larsen, Axel; Kyllingsbæk, Søren

    2002-01-01

    nucleus. Frontal activations were found in a region that seems implicated in visual short-term memory (posterior parts of the superior sulcus and the middle gyrus). The reverse, color-shape comparison showed bilateral increases in rCBF in the anterior cingulate gyri, superior frontal gyri, and superior...... and middle temporal gyri. The attentional effects found by the shape-color comparison in the thalamus and the primary visual cortex may have been generated by feedback signals preserving visual representations of selected stimuli in short-term memory....

  16. Noise destroys feedback enhanced figure-ground segmentation but not feedforward figure-ground segmentation

    Science.gov (United States)

    Romeo, August; Arall, Marina; Supèr, Hans

    2012-01-01

    Figure-ground (FG) segmentation is the separation of visual information into background and foreground objects. In the visual cortex, FG responses are observed in the late stimulus response period, when neurons fire in tonic mode, and are accompanied by a switch in cortical state. When such a switch does not occur, FG segmentation fails. Currently, it is not known what happens in the brain on such occasions. A biologically plausible feedforward spiking neuron model was previously devised that performed FG segmentation successfully. After incorporating feedback the FG signal was enhanced, which was accompanied by a change in spiking regime. In a feedforward model neurons respond in a bursting mode whereas in the feedback model neurons fired in tonic mode. It is known that bursts can overcome noise, while tonic firing appears to be much more sensitive to noise. In the present study, we try to elucidate how the presence of noise can impair FG segmentation, and to what extent the feedforward and feedback pathways can overcome noise. We show that noise specifically destroys the feedback enhanced FG segmentation and leaves the feedforward FG segmentation largely intact. Our results predict that noise produces failure in FG perception. PMID:22934028

  17. Building a grid-semantic map for the navigation of service robots through human–robot interaction

    Directory of Open Access Journals (Sweden)

    Cheng Zhao

    2015-11-01

    Full Text Available This paper presents an interactive approach to the construction of a grid-semantic map for the navigation of service robots in an indoor environment. It is based on the Robot Operating System (ROS) framework and contains four modules, namely Interactive Module, Control Module, Navigation Module and Mapping Module. Three challenging issues were focused on during its development: (i) how human voice and robot visual information could be effectively deployed in the mapping and navigation process; (ii) how semantic names could combine with coordinate data in an online Grid-Semantic map; and (iii) how a localization–evaluate–relocalization method could be used in global localization based on the modified maximum particle weight of the particle swarm. A number of experiments are carried out in both simulated and real environments such as corridors and offices to verify its feasibility and performance.
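
    A sketch of the localization-evaluate-relocalization trigger mentioned in point (iii): global localization is judged lost when the best particle's normalized weight falls below a threshold, and the particle set is re-seeded over the map. The threshold, map bounds and weights are illustrative, not the paper's values.

```python
# Sketch: re-seed the particle filter when the maximum normalized particle weight is too low.
import random

def evaluate_and_relocalize(particles, weights, map_bounds, threshold=0.01, rng=random):
    """Return (particles, relocalized?) after checking the maximum particle weight."""
    total = sum(weights)
    best = max(weights) / total if total > 0 else 0.0
    if best >= threshold:
        return particles, False
    (xmin, xmax), (ymin, ymax) = map_bounds
    reseeded = [(rng.uniform(xmin, xmax), rng.uniform(ymin, ymax)) for _ in particles]
    return reseeded, True

particles = [(1.0, 2.0)] * 500
weights = [1e-6] * 500                      # every hypothesis fits the observation badly
print(evaluate_and_relocalize(particles, weights, ((0, 10), (0, 8)))[1])
```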

  18. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Full Text Available Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision, and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 ‘training’ steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration corresponding to training) with all feedback removed. The mean percentage error was 11.5% (SD ± 7.0%) for visual cues and 12.9% (SD ± 11.8%) for auditory cues. Visual cues elicit a high degree of accuracy both in training and follow-up un-cued tasks; despite the novelty of the auditory cues present for subjects, the mean accuracy of subjects approached that for visual cues, and initial results suggest a limited amount of practice using auditory cues can improve performance.

  19. Technology-Based Feedback and Its Efficacy in Improving Gait Parameters in Patients with Abnormal Gait: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Gema Chamorro-Moriana

    2018-01-01

    Full Text Available This systematic review synthesized and analyzed clinical findings related to the effectiveness of innovative technological feedback for tackling functional gait recovery. An electronic search of PUBMED, PEDro, WOS, CINAHL, and DIALNET was conducted from January 2011 to December 2016. The main inclusion criteria were: patients with modified or abnormal gait; application of technology-based feedback to deal with functional recovery of gait; any comparison between different kinds of feedback applied by means of technology, or any comparison between technological and non-technological feedback; and randomized controlled trials. Twenty papers were included. The populations were neurological patients (75%), orthopedic and healthy subjects. All participants were adults, bar one. Four studies used exoskeletons, 6 load platforms and 5 pressure sensors. The breakdown of the type of feedback used was as follows: 60% visual, 40% acoustic and 15% haptic. 55% used terminal feedback versus 65% simultaneous feedback. Prescriptive feedback was used in 60% of cases, while 50% used descriptive feedback. 62.5% and 58.33% of the trials showed a significant effect in improving step length and speed, respectively. Efficacy in improving other gait parameters such as balance or range of movement is observed in more than 75% of the studies with significant outcomes. Conclusion: Treatments based on feedback using innovative technology in patients with abnormal gait are mostly effective in improving gait parameters and therefore useful for the functional recovery of patients. The most frequently highlighted types of feedback were immediate visual feedback followed by terminal and immediate acoustic feedback.

  20. Continuous Auditory Feedback of Eye Movements: An Exploratory Study toward Improving Oculomotor Control

    Directory of Open Access Journals (Sweden)

    Eric O. Boyer

    2017-04-01

    Full Text Available As eye movements are mostly automatic and overtly generated to attain visual goals, individuals have a poor metacognitive knowledge of their own eye movements. We present an exploratory study on the effects of real-time continuous auditory feedback generated by eye movements. We considered both a tracking task and a production task where smooth pursuit eye movements (SPEM) can be endogenously generated. In particular, we used a visual paradigm which makes it possible to generate and control SPEM in the absence of a moving visual target. We investigated whether real-time auditory feedback of eye movement dynamics might improve learning in both tasks, through a training protocol over 8 days. The results indicate that real-time sonification of eye movements can actually modify the oculomotor behavior, and reinforce intrinsic oculomotor perception. Nevertheless, large inter-individual differences were observed, preventing us from reaching a strong conclusion on sensorimotor learning improvements.

  1. Influence of anatomic landmarks in the virtual environment on simulated angled laparoscope navigation

    Science.gov (United States)

    Christie, Lorna S.; Goossens, Richard H. M.; de Ridder, Huib; Jakimowicz, Jack J.

    2010-01-01

    Background: The aim of this study is to investigate the influence of the presence of anatomic landmarks on the performance of angled laparoscope navigation on the SimSurgery SEP simulator. Methods: Twenty-eight experienced laparoscopic surgeons (familiar with 30° angled laparoscope, >100 basic laparoscopic procedures, >5 advanced laparoscopic procedures) and 23 novices (no laparoscopy experience) performed the Camera Navigation task in an abstract virtual environment (CN-box) and in a virtual representation of the lower abdomen (CN-abdomen). They also rated the realism and added value of the virtual environments on seven-point scales. Results: Within both groups, the CN-box task was accomplished in less time and with shorter tip trajectory than the CN-abdomen task (Wilcoxon test, p < 0.05). In both groups, the CN tasks were perceived as hard work and more challenging than anticipated. Conclusions: Performance of the angled laparoscope navigation task is influenced by the virtual environment surrounding the exercise. The task was performed better in an abstract environment than in a virtual environment with anatomic landmarks. More insight is required into the influence and function of different types of intrinsic and extrinsic feedback on the effectiveness of preclinical simulator training. PMID:20419318

  2. A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents.

    Science.gov (United States)

    Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha

    2017-01-01

    Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector for guiding them back home on a straight path. They further acquire and retrieve path integration-based vector memories globally to the nest or based on visual landmarks. Although existing computational models reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control-enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation performed under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent linking behavioral observations to their possible underlying neural substrates.
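
    A minimal sketch of the path-integration step described above, collapsing the paper's circular-array population code into a plain 2-D vector: compass headings and step lengths are summed into a home vector whose reverse gives the heading back to the nest.

```python
# Sketch of path integration: accumulate compass + odometry cues into a home vector.
import math

def integrate_path(steps):
    """steps: iterable of (heading_rad, distance). Returns (distance, heading) back home."""
    x = sum(d * math.cos(h) for h, d in steps)
    y = sum(d * math.sin(h) for h, d in steps)
    home_distance = math.hypot(x, y)
    home_heading = math.atan2(-y, -x)          # point back toward the start
    return home_distance, home_heading

outbound = [(0.0, 3.0), (math.pi / 2, 4.0)]    # 3 m east, then 4 m north
dist, heading = integrate_path(outbound)
print(f"home vector: {dist:.1f} m at {math.degrees(heading):.1f} deg")
```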

  3. Improving Multisensor Positioning of Land Vehicles with Integrated Visual Odometry for Next-Generation Self-Driving Cars

    Directory of Open Access Journals (Sweden)

    Muhammed Tahsin Rahman

    2018-01-01

    Full Text Available For their complete realization, autonomous vehicles (AVs) fundamentally rely on the Global Navigation Satellite System (GNSS) to provide positioning and navigation information. However, in areas such as urban cores, parking lots, and under dense foliage, which are all commonly frequented by AVs, GNSS signals suffer from blockage, interference, and multipath. These effects cause high levels of errors and long durations of service discontinuity that mar the performance of current systems. The prevalence of vision and low-cost inertial sensors provides an attractive opportunity to further increase the positioning and navigation accuracy in such GNSS-challenged environments. This paper presents enhancements to existing multisensor integration systems utilizing the inertial navigation system (INS) to aid in Visual Odometry (VO) outlier feature rejection. A scheme called Aided Visual Odometry (AVO) is developed and integrated with a high performance mechanization architecture utilizing vehicle motion and orientation sensors. The resulting solution exhibits improved state covariance convergence and navigation accuracy, while reducing computational complexity. Experimental verification of the proposed solution is illustrated through three real road trajectories, over two different land vehicles, and using two low-cost inertial measurement units (IMUs).
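
    An illustrative sketch of INS-aided outlier gating in the spirit of the AVO scheme described above (the actual scheme is more involved): feature matches whose observed image motion departs too far from the INS-predicted motion are dropped before the visual odometry update. The pixel threshold and predicted flow are placeholders.

```python
# Sketch: reject feature matches whose image displacement disagrees with INS-predicted flow.
import numpy as np

def gate_matches(prev_pts, curr_pts, predicted_flow, max_residual_px=8.0):
    """Keep matches whose observed flow is close to the INS-predicted flow (in pixels)."""
    prev_pts, curr_pts = np.asarray(prev_pts, float), np.asarray(curr_pts, float)
    residual = np.linalg.norm((curr_pts - prev_pts) - np.asarray(predicted_flow), axis=1)
    keep = residual < max_residual_px
    return prev_pts[keep], curr_pts[keep]

prev = [(100, 100), (200, 150), (50, 60)]
curr = [(105, 101), (260, 150), (55, 61)]       # the second match is a gross outlier
p, c = gate_matches(prev, curr, predicted_flow=(5.0, 1.0))
print(len(p), "matches kept of", len(prev))
```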

  4. A Randomized Control Trial of Cardiopulmonary Feedback Devices and Their Impact on Infant Chest Compression Quality: A Simulation Study.

    Science.gov (United States)

    Austin, Andrea L; Spalding, Carmen N; Landa, Katrina N; Myer, Brian R; Donald, Cure; Smith, Jason E; Platt, Gerald; King, Heather C

    2017-10-27

    In an effort to improve chest compression quality among health care providers, numerous feedback devices have been developed. Few studies, however, have focused on the use of cardiopulmonary resuscitation feedback devices for infants and children. This study evaluated the quality of chest compressions with standard team-leader coaching, a metronome (MetroTimer by ONYX Apps), and visual feedback (SkillGuide Cardiopulmonary Feedback Device) during simulated infant cardiopulmonary resuscitation. Seventy voluntary health care providers who had recently completed Pediatric Advanced Life Support or Basic Life Support courses were randomized into 1 of 3 groups to perform simulated infant cardiopulmonary resuscitation: team-leader coaching alone (control), coaching plus metronome, or coaching plus SkillGuide for 2 minutes continuously. Rate, depth, and frequency of complete recoil during cardiopulmonary resuscitation were recorded by the Laerdal SimPad device for each participant. American Heart Association-approved compression techniques were randomized to either 2-finger or encircling thumbs. The metronome was associated with more ideal compression rate than visual feedback or coaching alone (104/min vs 112/min and 113/min; P = 0.003, 0.019). Visual feedback was associated with more ideal depth than auditory (41 mm vs 38.9; P = 0.03). There were no significant differences in complete recoil between groups. Secondary outcomes of compression technique revealed a difference of 1 mm. Subgroup analysis of male versus female showed no difference in mean number of compressions (221.76 vs 219.79; P = 0.72), mean compression depth (40.47 vs 39.25; P = 0.09), or rate of complete release (70.27% vs 64.96%; P = 0.54). In the adult literature, feedback devices often show an increase in quality of chest compressions. Although more studies are needed, this study did not demonstrate a clinically significant improvement in chest compressions with the addition of a metronome or visual

  5. Sensor-Based Interactive Balance Training with Visual Joint Movement Feedback for Improving Postural Stability in Diabetics with Peripheral Neuropathy: A Randomized Controlled Trial.

    Science.gov (United States)

    Grewal, Gurtej Singh; Schwenk, Michael; Lee-Eng, Jacqueline; Parvaneh, Saman; Bharara, Manish; Menzies, Robert A; Talal, Talal K; Armstrong, David G; Najafi, Bijan

    2015-01-01

    Individuals with diabetic peripheral neuropathy (DPN) have deficits in sensory and motor skills leading to inadequate proprioceptive feedback, impaired postural balance and higher fall risk. This study investigated the effect of sensor-based interactive balance training on postural stability and daily physical activity in older adults with diabetes. Thirty-nine older adults with DPN were enrolled (age 63.7 ± 8.2 years, BMI 30.6 ± 6, 54% females) and randomized to either an intervention (IG) or a control (CG) group. The IG received sensor-based interactive exercise training tailored for people with diabetes (twice a week for 4 weeks). The exercises focused on shifting weight and crossing virtual obstacles. Body-worn sensors were implemented to acquire kinematic data and provide real-time joint visual feedback during the training. Outcome measurements included changes in center of mass (CoM) sway, ankle and hip joint sway measured during a balance test while the eyes were open and closed at baseline and after the intervention. Daily physical activities were also measured during a 48-hour period at baseline and at follow-up. Analysis of covariance was performed for the post-training outcome comparison. Compared with the CG, the patients in the IG showed a significantly reduced CoM sway (58.31%; p = 0.009), ankle sway (62.7%; p = 0.008) and hip joint sway (72.4%; p = 0.017) during the balance test with open eyes. The ankle sway was also significantly reduced in the IG group (58.8%; p = 0.037) during measurements while the eyes were closed. The number of steps walked showed a substantial but nonsignificant increase (+27.68%; p = 0.064) in the IG following training. The results of this randomized controlled trial demonstrate that people with DPN can significantly improve their postural balance with diabetes-specific, tailored, sensor-based exercise training. The results promote the use of wearable technology in exercise training; however, future studies comparing this

  6. The quality of visual information about the lower extremities influences visuomotor coordination during virtual obstacle negotiation.

    Science.gov (United States)

    Kim, Aram; Kretch, Kari S; Zhou, Zixuan; Finley, James M

    2018-05-09

    Successful negotiation of obstacles during walking relies on the integration of visual information about the environment with ongoing locomotor commands. When information about the body and environment are removed through occlusion of the lower visual field, individuals increase downward head pitch angle, reduce foot placement precision, and increase safety margins during crossing. However, whether these effects are mediated by loss of visual information about the lower extremities, the obstacle, or both remains to be seen. Here, we used a fully immersive, virtual obstacle negotiation task to investigate how visual information about the lower extremities is integrated with information about the environment to facilitate skillful obstacle negotiation. Participants stepped over virtual obstacles while walking on a treadmill with one of three types of visual feedback about the lower extremities: no feedback, end-point feedback, or a link-segment model. We found that absence of visual information about the lower extremities led to an increase in the variability of leading foot placement after crossing. The presence of a visual representation of the lower extremities promoted greater downward head pitch angle during the approach to and subsequent crossing of an obstacle. In addition, having greater downward head pitch was associated with closer placement of the trailing foot to the obstacle, further placement of the leading foot after the obstacle, and higher trailing foot clearance. These results demonstrate that the fidelity of visual information about the lower extremities influences both feed-forward and feedback aspects of visuomotor coordination during obstacle negotiation.

  7. A Computerized Tablet with Visual Feedback of Hand Position for Functional Magnetic Resonance Imaging

    Directory of Open Access Journals (Sweden)

    Mahta Karimpoor

    2015-03-01

    Full Text Available Neuropsychological tests - behavioral tasks that very commonly involve handwriting and drawing - are widely used in the clinic to detect abnormal brain function. Functional magnetic resonance imaging (fMRI) may be useful in increasing the specificity of such tests. However, performing complex pen-and-paper tests during fMRI involves engineering challenges. Previously, we developed an fMRI-compatible, computerized tablet system to address this issue. However, the tablet did not include visual feedback of hand position (VFHP), a human factors component that may be important for fMRI of certain patient populations. A real-time system was thus developed to provide VFHP and integrated with the tablet in an augmented reality display. The effectiveness of the system was initially tested in young healthy adults who performed various handwriting tasks in front of a computer display with and without VFHP. Pilot fMRI of writing tasks were performed by two representative individuals with and without VFHP. Quantitative analysis of the behavioral results indicated improved writing performance with VFHP. The pilot fMRI results suggest that writing with VFHP requires less neural resources compared to the without VFHP condition, to maintain similar behavior. Thus, the tablet system with VFHP is recommended for future fMRI studies involving patients with impaired brain function and where ecologically valid behavior is important.

  8. A computerized tablet with visual feedback of hand position for functional magnetic resonance imaging

    Science.gov (United States)

    Karimpoor, Mahta; Tam, Fred; Strother, Stephen C.; Fischer, Corinne E.; Schweizer, Tom A.; Graham, Simon J.

    2015-01-01

    Neuropsychological tests, behavioral tasks that very commonly involve handwriting and drawing, are widely used in the clinic to detect abnormal brain function. Functional magnetic resonance imaging (fMRI) may be useful in increasing the specificity of such tests. However, performing complex pen-and-paper tests during fMRI involves engineering challenges. Previously, we developed an fMRI-compatible, computerized tablet system to address this issue. However, the tablet did not include visual feedback of hand position (VFHP), a human factors component that may be important for fMRI of certain patient populations. A real-time system was thus developed to provide VFHP and integrated with the tablet in an augmented reality display. The effectiveness of the system was initially tested in young healthy adults who performed various handwriting tasks in front of a computer display with and without VFHP. Pilot fMRI of writing tasks were performed by two representative individuals with and without VFHP. Quantitative analysis of the behavioral results indicated improved writing performance with VFHP. The pilot fMRI results suggest that writing with VFHP requires less neural resources compared to the without VFHP condition, to maintain similar behavior. Thus, the tablet system with VFHP is recommended for future fMRI studies involving patients with impaired brain function and where ecologically valid behavior is important. PMID:25859201

  9. Looking at the ventriloquist: visual outcome of eye movements calibrates sound localization.

    Directory of Open Access Journals (Sweden)

    Daniel S Pages

    Full Text Available A general problem in learning is how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
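
    The reported shift maps onto the imposed mismatch as a simple fraction; a minimal check of that arithmetic in Python, using the values quoted in the abstract above:

```python
# Fraction of the 6-degree visual-auditory mismatch compensated by the
# observed shift in subsequent auditory-only saccades (values from the abstract).
mismatch_deg = 6.0
for shift_deg in (1.3, 1.7):
    print(f"{shift_deg} deg shift -> {shift_deg / mismatch_deg:.0%} of the mismatch")
# Prints roughly 22% and 28%, matching the reported 22-28% range.
```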

  10. Follower-Centered Perspective on Feedback: Effects of Feedback Seeking on Identification and Feedback Environment

    OpenAIRE

    Gong, Zhenxing; Li, Miaomiao; Qi, Yaoyuan; Zhang, Na

    2017-01-01

    In the formation mechanism of the feedback environment, the existing research pays attention to external feedback sources and regards individuals as objects passively accepting feedback. Thus, the external source fails to realize the individuals’ need for feedback, and the feedback environment cannot provide them with useful information, leading to a feedback vacuum. The aim of this study is to examine the effect of feedback-seeking by different strategies on the supervisor-feedback environme...

  11. Virtual Reality Feedback Cues for Improvement of Gait in Patients with Parkinson's Disease

    Directory of Open Access Journals (Sweden)

    Samih Badarny

    2014-03-01

    Full Text Available Background: Our aim was to study the effects of visual feedback cues, responding dynamically to patient's self‐motion and provided through a portable see‐through virtual reality apparatus, on the walking abilities of patients with Parkinson's disease. Methods: Twenty patients participated. On‐line and residual effects on walking speed and stride length were measured. Results: Attaching the visual feedback device to the patient with the display turned off showed a negligible effect of about 2%. With the display turned on, 56% of the patients improved either their walking speed, or their stride length, or both, by over 20%. After device removal, and waiting for 15 minutes, the patients were instructed to walk again: 68% of the patients showed over 20% improvement in either walking speed or stride length or both. One week after participating in the first test, 36% of the patients showed over 20% improvement in baseline performance with respect to the previous test. Some of the patients reported that they still walked on the tiles in their minds. Discussion: Improvements in walking abilities were measured in patients with Parkinson's disease using virtual reality visual feedback cues. Residual effects suggest the examination of this approach in a comprehensive therapy program.

  12. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery.

    Science.gov (United States)

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10-12 Hz) over the sensorimotor cortices, thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-user of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant's MI, might be helpful for accomplishing successful MI and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation.
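
    Not part of the paper's pipeline, but the upper-alpha ERD it reports is conventionally computed as a relative band-power change between a reference (rest) epoch and an activity (imagery) epoch. A minimal sketch with synthetic signals and an assumed 10-12 Hz band; function names and parameters are illustrative:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Mean power of a 1-D EEG epoch x in the [lo, hi] Hz band."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), int(fs)))
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].mean()

def erd_percent(rest_epoch, imagery_epoch, fs, lo=10.0, hi=12.0):
    """ERD/ERS in percent (negative values = desynchronization)."""
    r = band_power(rest_epoch, fs, lo, hi)
    a = band_power(imagery_epoch, fs, lo, hi)
    return (a - r) / r * 100.0

# Toy usage with synthetic data (not the study's EEG):
fs = 256
t = np.arange(0, 2, 1 / fs)
rest = np.sin(2 * np.pi * 11 * t) + 0.1 * np.random.randn(t.size)
imagery = 0.6 * np.sin(2 * np.pi * 11 * t) + 0.1 * np.random.randn(t.size)
print(f"upper-alpha ERD: {erd_percent(rest, imagery, fs):.1f}%")
```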

  13. Navigating actions through the rodent parietal cortex

    Directory of Open Access Journals (Sweden)

    Jonathan R. Whitlock

    2014-05-01

    Full Text Available The posterior parietal cortex (PPC) participates in a manifold of cognitive functions, including visual attention, working memory, spatial processing and movement planning. Given the vast interconnectivity of PPC with sensory and motor areas, it is not surprising that neuronal recordings show that PPC often encodes mixtures of spatial information as well as the movements required to reach a goal. Recent work sought to discern the relative strength of spatial versus motor signaling in PPC by recording single unit activity in PPC of freely behaving rats during selective changes in either the spatial layout of the local environment or in the pattern of locomotor behaviors executed during navigational tasks. The results revealed unequivocally a predominant sensitivity of PPC neurons to locomotor action structure, with subsets of cells even encoding upcoming movements more than 1 second in advance. In light of these and other recent findings in the field, I propose that one of the key contributions of PPC to navigation is the synthesis of goal-directed behavioral sequences, and that the rodent PPC may serve as an apt system to investigate cellular mechanisms for spatial motor planning as traditionally studied in humans and monkeys.

  14. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery

    Directory of Open Access Journals (Sweden)

    Teresa Sollfrank

    2015-08-01

    Full Text Available A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during motor imagery. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronisation (ERD) of the upper alpha band (10-12 Hz) over the sensorimotor cortices, thereby potentially improving MI based BCI protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb motor imagery present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (2D vs. 3D). The largest upper alpha band power decrease was obtained during motor imagery after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-user of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D visualization modality group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during MI. Realistic visual feedback, consistent with the participant’s motor imagery, might be helpful for accomplishing successful motor imagery and the use of such feedback may assist in making BCI a more natural interface for motor imagery based BCI rehabilitation.

  15. Indoor wayfinding and navigation

    CERN Document Server

    2015-01-01

    Due to the widespread use of navigation systems for wayfinding and navigation in the outdoors, researchers have devoted their efforts in recent years to designing navigation systems that can be used indoors. This book is a comprehensive guide to designing and building indoor wayfinding and navigation systems. It covers all types of feasible sensors (for example, Wi-Fi, A-GPS), discussing the level of accuracy, the types of map data needed, the data sources, and the techniques for providing routes and directions within structures.

  16. Control algorithms for autonomous robot navigation

    International Nuclear Information System (INIS)

    Jorgensen, C.C.

    1985-01-01

    This paper examines control algorithm requirements for autonomous robot navigation outside laboratory environments. Three aspects of navigation are considered: navigation control in explored terrain, environment interactions with robot sensors, and navigation control in unanticipated situations. Major navigation methods are presented and relevance of traditional human learning theory is discussed. A new navigation technique linking graph theory and incidental learning is introduced.

  17. Using Inertial Sensors in Smartphones for Curriculum Experiments of Inertial Navigation Technology

    Directory of Open Access Journals (Sweden)

    Xiaoji Niu

    2015-03-01

    Full Text Available Inertial technology has been used in a wide range of applications such as guidance, navigation, and motion tracking. However, there are few undergraduate courses that focus on inertial technology. Traditional inertial navigation systems (INS) and relevant testing facilities are expensive and complicated in operation, which makes it inconvenient and risky to perform teaching experiments with such systems. To solve this issue, this paper proposes the idea of using smartphones, which are ubiquitous and commonly contain off-the-shelf inertial sensors, as the experimental devices. A series of curriculum experiments are designed, including the Allan variance test, the calibration test, the initial leveling test and the drift feature test. These experiments are well-selected and can be implemented simply with the smartphones and without any other specialized tools. The curriculum syllabus was designed and tentatively carried out on 14 undergraduate students with a science and engineering background. Feedback from the students shows that the curriculum can help them gain a comprehensive understanding of inertial technology, such as calibration and modeling of the sensor errors, determination of the device attitude and accumulation of the sensor errors in the navigation algorithm. The use of inertial sensors in smartphones provides the students with first-hand experience and an intuitive feel for the function of inertial sensors. Moreover, it can motivate students to utilize ubiquitous low-cost sensors in their future research.
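
    As a concrete illustration of the first of those experiments, a generic non-overlapping Allan deviation routine can be run directly on logged smartphone gyroscope data. This is a hedged sketch: the sample rate, noise level and averaging times below are placeholders, not values from the course:

```python
import numpy as np

def allan_deviation(rate, fs, taus):
    """Non-overlapping Allan deviation of a gyro/accel rate signal.

    rate : 1-D array of sensor samples (e.g. deg/s), sampled at fs (Hz)
    taus : iterable of cluster averaging times (s)
    """
    out = []
    for tau in taus:
        m = int(round(tau * fs))               # samples per cluster
        n_clusters = len(rate) // m
        if m < 1 or n_clusters < 2:
            out.append(np.nan)
            continue
        means = rate[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(means) ** 2)   # Allan variance
        out.append(np.sqrt(avar))
    return np.array(out)

# Toy usage with simulated white-noise gyro output (replace with phone data):
fs = 100.0                                      # assumed smartphone IMU rate
gyro = 0.05 * np.random.randn(int(fs * 600))    # 10 minutes of static data
taus = np.logspace(-1, 2, 20)                   # 0.1 s .. 100 s
print(allan_deviation(gyro, fs, taus))
```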

  18. How Do Batters Use Visual, Auditory, and Tactile Information about the Success of a Baseball Swing?

    Science.gov (United States)

    Gray, Rob

    2009-01-01

    Bat/ball contact produces visual (the ball leaving the bat), auditory (the "crack" of the bat), and tactile (bat vibration) feedback about the success of the swing. We used a batting simulation to investigate how college baseball players use visual, tactile, and auditory feedback. In Experiment 1, swing accuracy (i.e., the lateral separation…

  19. iShoes for blind and visually impaired people

    OpenAIRE

    Assairi, Bandar; Holmes, Violeta

    2013-01-01

    This paper presents the development of an iShoes system for blind and visually impaired people. The iShoes system utilizes a microcontroller with sound output interfaced with ultrasonic sensors. The prototype system is designed to be specifically mounted on/in the shoes to aid navigation in urban routes. The ultrasonic transducers determine the range from an obstacle and then play an audio message to reflect the distance from the target. This system will assist blind and visually impaired peo...

  20. A traffic priority language for collision-free navigation of autonomous mobile robots in dynamic environments.

    Science.gov (United States)

    Bourbakis, N G

    1997-01-01

    This paper presents a generic traffic priority language, called KYKLOFORTA, used by autonomous robots for collision-free navigation in a dynamic unknown or known navigation space. In a previous work by X. Grossmman (1988), a set of traffic control rules was developed for the navigation of robots on the lines of a two-dimensional (2-D) grid, and a control center coordinated and synchronized their movements. In this work, the robots are considered autonomous: they are moving anywhere and in any direction inside the free space, and there is no need for a central control to coordinate and synchronize them. The requirements for each robot are i) visual perception, ii) range sensors, and iii) the ability of each robot to detect other moving objects in the same free navigation space and determine the other objects' perceived size, their velocity and their direction. Based on these assumptions, a traffic priority language is needed for each robot, enabling it to make decisions during navigation and avoid possible collisions with other moving objects. The traffic priority language proposed here is based on a primitive traffic priority alphabet and a set of rules which compose patterns of corridors for the application of the traffic priority rules.
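
    The abstract does not give the KYKLOFORTA rules themselves. As a loose illustration of the kind of per-robot decision such a priority language encodes, the toy function below predicts the closest approach to a perceived moving object and yields when it violates a safety radius; the rule, names and thresholds are invented for illustration and are not the paper's rule set:

```python
import numpy as np

def must_yield(p_self, v_self, p_other, v_other, safety_radius=1.0, horizon=10.0):
    """Toy priority rule: yield (stop/replan) if the predicted closest approach
    to another moving object falls inside safety_radius within `horizon` seconds."""
    p_rel = np.asarray(p_other, float) - np.asarray(p_self, float)
    v_rel = np.asarray(v_other, float) - np.asarray(v_self, float)
    v2 = float(v_rel @ v_rel)
    # Time of closest approach, clamped to [0, horizon].
    t_star = 0.0 if v2 < 1e-9 else max(0.0, min(horizon, -float(p_rel @ v_rel) / v2))
    min_dist = np.linalg.norm(p_rel + t_star * v_rel)
    return min_dist < safety_radius

# Example: another robot crossing our path from the right -> we should yield.
print(must_yield(p_self=(0, 0), v_self=(1, 0), p_other=(5, -4), v_other=(0, 1)))
```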

  1. Piloting the feasibility of head-mounted video technology to augment student feedback during simulated clinical decision-making: An observational design pilot study.

    Science.gov (United States)

    Forbes, Helen; Bucknall, Tracey K; Hutchinson, Alison M

    2016-04-01

    Clinical decision-making is a complex activity that is critical to patient safety. Simulation, augmented by feedback, affords learners the opportunity to learn critical clinical decision-making skills. More detailed feedback following simulation exercises has the potential to further enhance student learning, particularly in relation to developing improved clinical decision-making skills. To investigate the feasibility of head-mounted video camera recordings, to augment feedback, following acute patient deterioration simulations. Pilot study using an observational design. Ten final-year nursing students participated in three simulation exercises, each focussed on detection and management of patient deterioration. Two observers collected behavioural data using an adapted version of Gaba's Clinical Simulation Tool, to provide verbal feedback to each participant, following each simulation exercise. Participants wore a head-mounted video camera during the second simulation exercise only. Video recordings were replayed to participants to augment feedback, following the second simulation exercise. Data were collected on: participant performance (observed and perceived); participant perceptions of feedback methods; and head-mounted video camera recording feasibility and capability for detailed audio-visual feedback. Management of patient deterioration improved for six participants (60%). Increased perceptions of confidence (70%) and competence (80%), were reported by the majority of participants. Few participants (20%) agreed that the video recording specifically enhanced their learning. The visual field of the head-mounted video camera was not always synchronised with the participant's field of vision, thus affecting the usefulness of some recordings. The usefulness of the video recordings, to enhance verbal feedback to participants on detection and management of simulated patient deterioration, was inconclusive. Modification of the video camera glasses, to improve

  2. Mobile Augmented Reality enhances indoor navigation for wheelchair users

    Directory of Open Access Journals (Sweden)

    Luciene Chagas de Oliveira

    Full Text Available Introduction: Individuals with mobility impairments associated with lower limb disabilities often face enormous challenges to participate in routine activities and to move around various environments. For many, the use of wheelchairs is paramount to provide mobility and social inclusion. Nevertheless, they still face a number of challenges to properly function in our society. Among the many difficulties, one in particular stands out: navigating in complex internal environments (indoors). The main objective of this work is to propose an architecture based on Mobile Augmented Reality to support the development of indoor navigation systems dedicated to wheelchair users, that is also capable of recording CAD drawings of the buildings and dealing with accessibility issues for that population. Methods: Overall, five main functional requirements are proposed: the ability to allow for indoor navigation by means of Mobile Augmented Reality techniques; the capacity to register and configure building CAD drawings and the position of fiducial markers, points of interest and obstacles to be avoided by the wheelchair user; the capacity to find the best route for wheelchair indoor navigation, taking stairs and other obstacles into account; allow for the visualization of virtual directional arrows in the smartphone displays; and incorporate touch or voice commands to interact with the application. The architecture is proposed as a combination of four layers: User interface; Control; Service; and Infrastructure. A proof-of-concept application was developed and tests were performed with disabled volunteers operating manual and electric wheelchairs. Results: The application was implemented in Java for the Android operational system. A local database was used to store the test building CAD drawings and the position of fiducial markers and points of interest. The Android Augmented Reality library was used to implement Augmented Reality and the Blender open source
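
    The route-finding requirement (the best wheelchair route that avoids stairs and other obstacles) is not spelled out in the abstract; one plausible way to realize it is a shortest-path search over the building graph with barrier-tagged edges excluded. The sketch below assumes a hypothetical graph encoding of the CAD drawing; node names, costs and tags are illustrative, not the authors' implementation:

```python
import heapq

def accessible_route(graph, start, goal, avoid=("stairs",)):
    """Dijkstra over an indoor graph, skipping edges tagged with barriers
    (e.g. stairs) that a wheelchair user cannot negotiate.

    graph: {node: [(neighbor, cost, tag), ...]}
    """
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost, tag in graph.get(node, ()):
            if tag in avoid:
                continue                       # respect accessibility constraints
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    if goal not in dist:
        return None
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

floor = {                                      # toy building graph
    "entrance": [("hall", 10, "corridor")],
    "hall": [("lab", 5, "stairs"), ("elevator", 8, "corridor")],
    "elevator": [("lab", 12, "corridor")],
}
print(accessible_route(floor, "entrance", "lab"))   # entrance -> hall -> elevator -> lab
```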

  3. An investigation of the roles of geomagnetic and acoustic cues in whale navigation and orientation

    Science.gov (United States)

    Allen, Ann Nichole

    Many species of whales migrate annually between high-latitude feeding grounds and low-latitude breeding grounds. Yet, very little is known about how these animals navigate during these migrations. This thesis takes a first look at the roles of geomagnetic and acoustic cues in humpback whale navigation and orientation, in addition to documenting some effects of human-produced sound on beaked whales. The tracks of satellite-tagged humpback whales migrating from Hawaii to Alaska were found to have systematic deviations from the most direct route to their destination. For each whale, a migration track was modeled using only geomagnetic inclination and intensity as navigation cues. The directions in which the observed and modeled tracks deviated from the direct route were compared and found to match for 7 out of 9 tracks, which suggests that migrating humpback whales may use geomagnetic cues for navigation. Additionally, in all cases the observed tracks followed a more direct route to the destination than the modeled tracks, indicating that the whales are likely using additional navigational cues to improve their routes. There is a significant amount of sound available in the ocean to aid in navigation and orientation of a migrating whale. This research investigates the possibility that humpback whales migrating near-shore listen to sounds of snapping shrimp to detect the presence of obstacles, such as rocky islands. A visual tracking study was used, together with hydrophone recordings near a rocky island, to determine whether the whales initiated an avoidance reaction at distances that varied with the acoustic detection range of the island. No avoidance reaction was found. Propagation modeling of the snapping shrimp sounds suggested that the detection range of the island was beyond the visual limit of the survey, indicating that snapping shrimp sounds may be suited as a long-range indicator of a rocky island. Lastly, this thesis identifies a prolonged avoidance

  4. Auditory display as feedback for a novel eye-tracking system for sterile operating room interaction.

    Science.gov (United States)

    Black, David; Unger, Michael; Fischer, Nele; Kikinis, Ron; Hahn, Horst; Neumuth, Thomas; Glaser, Bernhard

    2018-01-01

    The growing number of technical systems in the operating room has increased attention on developing touchless interaction methods for sterile conditions. However, touchless interaction paradigms lack the tactile feedback found in common input devices such as mice and keyboards. We propose a novel touchless eye-tracking interaction system with auditory display as a feedback method for completing typical operating room tasks. Auditory display provides feedback concerning the selected input into the eye-tracking system as well as a confirmation of the system response. An eye-tracking system with a novel auditory display using both earcons and parameter-mapping sonification was developed to allow touchless interaction for six typical scrub nurse tasks. An evaluation with novice participants compared auditory display with visual display with respect to reaction time and a series of subjective measures. When using auditory display to substitute for the lost tactile feedback during eye-tracking interaction, participants exhibit reduced reaction time compared to using visual-only display. In addition, the auditory feedback led to lower subjective workload and higher usefulness and system acceptance ratings. Due to the absence of tactile feedback for eye-tracking and other touchless interaction methods, auditory display is shown to be a useful and necessary addition to new interaction concepts for the sterile operating room, reducing reaction times while improving subjective measures, including usefulness, user satisfaction, and cognitive workload.
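
    Parameter-mapping sonification of the sort named here maps a continuous quantity onto a sound parameter such as pitch. The sketch below shows the generic idea only; the mapped quantity, ranges and frequencies are assumptions, not the study's earcon or sonification design:

```python
import numpy as np

def pmap_tone(value, v_min, v_max, f_min=220.0, f_max=880.0,
              duration=0.15, fs=44100):
    """Map a scalar (e.g. gaze-to-target distance) onto the pitch of a short tone."""
    x = np.clip((value - v_min) / (v_max - v_min), 0.0, 1.0)
    freq = f_min * (f_max / f_min) ** x            # log-spaced pitch mapping
    t = np.arange(int(duration * fs)) / fs
    tone = 0.5 * np.sin(2 * np.pi * freq * t)
    tone *= np.hanning(tone.size)                  # soft attack/decay (no clicks)
    return tone.astype(np.float32)                 # ready to hand to a sound API

samples = pmap_tone(value=30.0, v_min=0.0, v_max=100.0)
print(samples.shape, samples.dtype)
```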

  5. A Leapfrog Navigation System

    Science.gov (United States)

    Opshaug, Guttorm Ringstad

    There are times and places where conventional navigation systems, such as the Global Positioning System (GPS), are unavailable due to anything from temporary signal occultations to lack of navigation system infrastructure altogether. The goal of the Leapfrog Navigation System (LNS) is to provide localized positioning services for such cases. The concept behind leapfrog navigation is to advance a group of navigation units teamwise into an area of interest. In a practical 2-D case, leapfrogging assumes known initial positions of at least two currently stationary navigation units. Two or more mobile units can then start to advance into the area of interest. The positions of the mobiles are constantly being calculated based on cross-range distance measurements to the stationary units, as well as cross-ranges among the mobiles themselves. At some point the mobile units stop, and the stationary units are released to move. This second team of units (now mobile) can then overtake the first team (now stationary) and travel even further towards the common goal of the group. Since there always is one stationary team, the position of any unit can be referenced back to the initial positions. Thus, LNS provides absolute positioning. I developed the navigation algorithms needed to solve leapfrog positions based on cross-range measurements. I used statistical tools to predict how position errors would grow as a function of navigation unit geometry, cross-range measurement accuracy and previous position errors. Using this knowledge I predicted that a 4-unit Leapfrog Navigation System using 100 m baselines and 200 m leap distances could travel almost 15 km before accumulating absolute position errors of 10 m (1 sigma). Finally, I built a prototype leapfrog navigation system using 4 GPS transceiver ranging units. I placed the 4 units at the vertices of a 10 m x 10 m square, and leapfrogged the group 20 meters forwards, and then back again (40 m total travel). Average horizontal RMS position
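
    The core positioning step (solving a mobile unit's position from cross-range distances to known, stationary units) can be framed as a small nonlinear least-squares problem. This is a generic illustration with made-up geometry and noise, not the algorithm developed in the thesis; a rough prior guess is used to resolve the two-anchor mirror ambiguity:

```python
import numpy as np
from scipy.optimize import least_squares

def solve_position(anchors, ranges, guess):
    """Estimate a 2-D position from cross-range measurements to known units."""
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)

    def residuals(p):
        return np.linalg.norm(anchors - p, axis=1) - ranges

    return least_squares(residuals, np.asarray(guess, float)).x

# Toy example: two stationary units 100 m apart, mobile truly at (60, 80).
anchors = [(0.0, 0.0), (100.0, 0.0)]
truth = np.array([60.0, 80.0])
meas = np.linalg.norm(np.asarray(anchors) - truth, axis=1) + np.random.normal(0, 0.3, 2)
print(solve_position(anchors, meas, guess=(50.0, 50.0)))   # approximately (60, 80)
```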

  6. Motion-guided attention promotes adaptive communications during social navigation.

    Science.gov (United States)

    Lemasson, B H; Anderson, J J; Goodwin, R A

    2013-03-07

    Animals are capable of enhanced decision making through cooperation, whereby accurate decisions can occur quickly through decentralized consensus. These interactions often depend upon reliable social cues, which can result in highly coordinated activities in uncertain environments. Yet information within a crowd may be lost in translation, generating confusion and enhancing individual risk. As quantitative data detailing animal social interactions accumulate, the mechanisms enabling individuals to rapidly and accurately process competing social cues remain unresolved. Here, we model how motion-guided attention influences the exchange of visual information during social navigation. We also compare the performance of this mechanism to the hypothesis that robust social coordination requires individuals to numerically limit their attention to a set of n-nearest neighbours. While we find that such numerically limited attention does not generate robust social navigation across ecological contexts, several notable qualities arise from selective attention to motion cues. First, individuals can instantly become a local information hub when startled into action, without requiring changes in neighbour attention level. Second, individuals can circumvent speed-accuracy trade-offs by tuning their motion thresholds. In turn, these properties enable groups to collectively dampen or amplify social information. Lastly, the minority required to sway a group's short-term directional decisions can change substantially with social context. Our findings suggest that motion-guided attention is a fundamental and efficient mechanism underlying collaborative decision making during social navigation.

  7. TacTool: a tactile rapid prototyping tool for visual interfaces

    NARCIS (Netherlands)

    Keyson, D.V.; Tang, H.K.; Anzai, Y.; Ogawa, K.; Mori, H.

    1995-01-01

    This paper describes the TacTool development tool and input device for designing and evaluating visual user interfaces with tactile feedback. TacTool is currently supported by the IPO trackball with force feedback in the x and y directions. The tool is designed to enable both the designer and the

  8. Trunk motion visual feedback during walking improves dynamic balance in older adults: Assessor blinded randomized controlled trial.

    Science.gov (United States)

    Anson, Eric; Ma, Lei; Meetam, Tippawan; Thompson, Elizabeth; Rathore, Roshita; Dean, Victoria; Jeka, John

    2018-05-01

    Virtual reality and augmented feedback have become more prevalent as training methods to improve balance. Few reports exist on the benefits of providing trunk motion visual feedback (VFB) during treadmill walking, and most of those reports only describe within session changes. To determine whether trunk motion VFB treadmill walking would improve over-ground balance for older adults with self-reported balance problems. 40 adults (75.8 years (SD 6.5)) with self-reported balance difficulties or a history of falling were randomized to a control or experimental group. Everyone walked on a treadmill at a comfortable speed 3×/week for 4 weeks in 2 min bouts separated by a seated rest. The control group was instructed to look at a stationary bulls-eye target while the experimental group also saw a moving cursor superimposed on the stationary bulls-eye that represented VFB of their walking trunk motion. The experimental group was instructed to keep the cursor in the center of the bulls-eye. Somatosensory (monofilaments and joint position testing) and vestibular function (canal specific clinical head impulses) was evaluated prior to intervention. Balance and mobility were tested before and after the intervention using Berg Balance Test, BESTest, mini-BESTest, and Six Minute Walk. There were no significant differences between groups before the intervention. The experimental group significantly improved on the BESTest (p = 0.031) and the mini-BEST (p = 0.019). The control group did not improve significantly on any measure. Individuals with more profound sensory impairments had a larger improvement on dynamic balance subtests of the BESTest. Older adults with self-reported balance problems improve their dynamic balance after training using trunk motion VFB treadmill walking. Individuals with worse sensory function may benefit more from trunk motion VFB during walking than individuals with intact sensory function. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Optimal motion planning using navigation measure

    Science.gov (United States)

    Vaidya, Umesh

    2018-05-01

    We introduce navigation measure as a new tool to solve the motion planning problem in the presence of static obstacles. Existence of a navigation measure guarantees collision-free convergence to the final destination set beginning with almost every initial condition with respect to the Lebesgue measure. Navigation measure can be viewed as a dual to the navigation function. While the navigation function has its minimum at the final destination set and peaks at the obstacle set, navigation measure takes the maximum value at the destination set and is zero at the obstacle set. A linear programming formalism is proposed for the construction of navigation measure. Set-oriented numerical methods are utilised to obtain a finite dimensional approximation of this navigation measure. Application of the proposed navigation measure-based theoretical and computational framework is demonstrated for a motion planning problem in a complex fluid flow.

  10. Integrated navigation method of a marine strapdown inertial navigation system using a star sensor

    International Nuclear Information System (INIS)

    Wang, Qiuying; Diao, Ming; Gao, Wei; Zhu, Minghong; Xiao, Shu

    2015-01-01

    This paper presents an integrated navigation method of the strapdown inertial navigation system (SINS) using a star sensor. According to the principle of SINS, its own navigation information contains an error that increases with time. Hence, the inertial attitude matrix from the star sensor is introduced as the reference information to correct the increasing SINS error. For the integrated navigation method, the vehicle’s attitude can be obtained in two ways: one is calculated from SINS; the other, which we have called the star sensor attitude, is obtained as the product between the SINS position and the inertial attitude matrix from the star sensor. Therefore, the SINS position error is introduced into the star sensor attitude error. Based on the characteristics of the star sensor attitude error and the mathematical derivation, the SINS navigation errors can be obtained by the coupling calculation between the SINS attitude and the star sensor attitude. Unlike several current techniques, the navigation process of this method is non-radiating and invulnerable to jamming. The effectiveness of this approach was demonstrated by simulation and experimental study. The results show that this integrated navigation method can estimate the attitude error and the position error of SINS. Therefore, the SINS navigation accuracy is improved. (paper)

  11. Self-Management of Patient Body Position, Pose, and Motion Using Wide-Field, Real-Time Optical Measurement Feedback: Results of a Volunteer Study

    International Nuclear Information System (INIS)

    Parkhurst, James M.; Price, Gareth J.; Sharrock, Phil J.; Jackson, Andrew S.N.; Stratford, Julie; Moore, Christopher J.

    2013-01-01

    Purpose: We present the results of a clinical feasibility study, performed in 10 healthy volunteers undergoing a simulated treatment over 3 sessions, to investigate the use of a wide-field visual feedback technique intended to help patients control their pose while reducing motion during radiation therapy treatment. Methods and Materials: An optical surface sensor is used to capture wide-area measurements of a subject's body surface with visualizations of these data displayed back to them in real time. In this study we hypothesize that this active feedback mechanism will enable patients to control their motion and help them maintain their setup pose and position. A capability hierarchy of 3 different level-of-detail abstractions of the measured surface data is systematically compared. Results: Use of the device enabled volunteers to increase their conformance to a reference surface, as measured by decreased variability across their body surfaces. The use of visual feedback also enabled volunteers to reduce their respiratory motion amplitude to 1.7 ± 0.6 mm compared with 2.7 ± 1.4 mm without visual feedback. Conclusions: The use of live feedback of their optically measured body surfaces enabled a set of volunteers to better manage their pose and motion when compared with free breathing. The method is suitable to be taken forward to patient studies

  12. Visual feedback navigation for cable tracking by autonomous underwater vehicles; Jiritsugata kaichu robot no gazo shori ni motozuku cable jido tsuiju

    Energy Technology Data Exchange (ETDEWEB)

    Takai, M.; Ura, T. [The University of Tokyo, Tokyo (Japan). Institute of Industrial Science]; Balasuriya, B.; Lam, W. [The University of Tokyo, Tokyo (Japan)]; Kuroda, Y. [Meiji Univ., Tokyo (Japan)]

    1997-08-01

    A vision processing unit was introduced into an autonomous underwater vehicle (AUV) to judge the visual situation and to construct an environmental observation platform that can collect wide-range, high-precision measurement data. An automatic tracking technique is proposed in which a cable laid on the sea bottom is recognized by vision processing. As the component technologies required for automatic tracking, an estimator that compensates for the time delay of the Hough transform processing and a PSA controller, used as a target-value-setting mechanism and lower-level controller, were introduced. The feature of automatic tracking is that a general-purpose platform able to observe a prescribed range with high precision and density can be constructed, because the observation range required by the observer can be prescribed near the sea-bottom surface using the cable. Verification results off Omi Hachiman in Lake Biwa showed that the AUV can be used for high-precision environmental surveys over a range prescribed near the sea-bottom surface using a cable. 8 refs., 8 figs., 1 tab.
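
    The abstract names the Hough transform as the basis of cable recognition; a minimal OpenCV sketch of that idea on a single frame is shown below. The blur/Canny/Hough parameters and the longest-segment heuristic are assumptions for illustration, not the values, estimator or controller used on the AUV:

```python
import cv2
import numpy as np

def detect_cable(frame_bgr):
    """Rough Hough-based cable detection in a seafloor image.
    Returns the dominant line segment as (x1, y1, x2, y2), or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 60,
                            minLineLength=80, maxLineGap=20)
    if lines is None:
        return None
    # Keep the longest segment as the cable hypothesis; a tracker/estimator
    # (as in the paper) would smooth this over time and compensate for delay.
    return max((l[0] for l in lines),
               key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))

frame = np.zeros((240, 320, 3), np.uint8)
cv2.line(frame, (20, 200), (300, 40), (200, 200, 200), 3)   # synthetic "cable"
print(detect_cable(frame))
```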

  13. Sensitivity to the visual field origin of natural image patches in human low-level visual cortex

    Directory of Open Access Journals (Sweden)

    Damien J. Mannion

    2015-06-01

    Full Text Available Asymmetries in the response to visual patterns in the upper and lower visual fields (above and below the centre of gaze) have been associated with ecological factors relating to the structure of typical visual environments. Here, we investigated whether the content of the upper and lower visual field representations in low-level regions of human visual cortex are specialised for visual patterns that arise from the upper and lower visual fields in natural images. We presented image patches, drawn from above or below the centre of gaze of an observer navigating a natural environment, to either the upper or lower visual fields of human participants (n = 7) while we used functional magnetic resonance imaging (fMRI) to measure the magnitude of evoked activity in the visual areas V1, V2, and V3. We found a significant interaction between the presentation location (upper or lower visual field) and the image patch source location (above or below fixation); the responses to lower visual field presentation were significantly greater for image patches sourced from below than above fixation, while the responses in the upper visual field were not significantly different for image patches sourced from above and below fixation. This finding demonstrates an association between the representation of the lower visual field in human visual cortex and the structure of the visual input that is likely to be encountered below the centre of gaze.

  14. Motion Sensors and Transducers to Navigate an Intelligent Mechatronic Platform for Outdoor Applications

    Directory of Open Access Journals (Sweden)

    Michail G. PAPOUTSIDAKIS

    2016-03-01

    Full Text Available The initial goal of this project is to investigate whether different sensor types and their attached transducers can support everyday human needs. Nowadays, there is a constant need to automate many time-consuming applications, not only in industrial environments but also in smaller-scale applications; therefore robotics is a field that continuously attracts research interest. The area of human assistance by machines in everyday needs continues to grow and to keep users' interest very high. "Mechatronics" differs from Robotics in terms of integrated electronics, the advantage of being easily re-programmable and, moreover, the versatility of hosting all kinds of sensor types, sensor networks, transducers and actuators. In this research project, such an integrated autonomous device will be presented, focusing on the use of sensors and their feedback signals for proximity, position, motion, distance, placement and, finally, navigation. The final choice of sensor type for the task, as well as the management of all transducer signals, will also be highlighted. An up-to-date microcontroller will host all the above information and, moreover, move the mechatronic platform via motor actuators. The control algorithm designed for the application is responsible for receiving all feedback signals, processing them and safely navigating the system so that it can undertake its mission. The project scenario, the necessary electronic equipment and the controller design method are highlighted in the following paragraphs of this document. Conclusions and results on sensor usage, the platform's performance and solutions to problems encountered form the rest of this paper.
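
    Ultrasonic range sensing of the kind described here reduces to simple time-of-flight arithmetic; a minimal sketch is shown below (the temperature model and example timing are generic, not tied to the platform in the paper):

```python
def ultrasonic_distance_m(echo_time_s, temperature_c=20.0):
    """Convert an ultrasonic echo round-trip time to range in metres."""
    speed_of_sound = 331.3 + 0.606 * temperature_c   # m/s, approximation in air
    return speed_of_sound * echo_time_s / 2.0        # halved: out-and-back path

# Example: a 5.8 ms echo at 20 degC corresponds to roughly 1 m.
print(f"{ultrasonic_distance_m(5.8e-3):.2f} m")
```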

  15. Variations in Static Force Control and Motor Unit Behavior with Error Amplification Feedback in the Elderly

    Directory of Open Access Journals (Sweden)

    Yi-Ching Chen

    2017-11-01

    Full Text Available Error amplification (EA) feedback is a promising approach to advance visuomotor skill. As error detection and visuomotor processing at short time scales decline with age, this study examined whether older adults could benefit from EA feedback that included higher-frequency information to guide a force-tracking task. Fourteen young and 14 older adults performed low-level static isometric force-tracking with visual guidance of typical visual feedback and EA feedback containing augmented high-frequency errors. Stabilogram diffusion analysis was used to characterize force fluctuation dynamics. Also, the discharge behaviors of motor units and pooled motor unit coherence were assessed following the decomposition of multi-channel surface electromyography (EMG). EA produced different behavioral and neurophysiological impacts on young and older adults. Older adults exhibited inferior task accuracy with EA feedback compared with typical visual feedback, but not young adults. Although stabilogram diffusion analysis revealed that EA led to a significant decrease in critical time points for both groups, EA potentiated the critical point of force fluctuations <ΔFc2>, short-term effective diffusion coefficients (Ds), and short-term exponent scaling only for the older adults. Moreover, in older adults, EA added to the size of discharge variability of motor units and discharge regularity of cumulative discharge rate, but suppressed the pooled motor unit coherence in the 13–35 Hz band. Virtual EA alters the strategic balance between open-loop and closed-loop controls for force-tracking. Contrary to expectations, the prevailing use of closed-loop control with EA that contained high-frequency error information enhanced the motor unit discharge variability and undermined the force steadiness in the older group, concerning declines in physiological complexity in the neurobehavioral system and the common drive to the motoneuronal pool against force destabilization.
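
    The abstract describes EA feedback as the tracking error with its high-frequency content augmented before display; the sketch below illustrates that general idea with an assumed cutoff and gain (these parameters and the filtering choice are illustrative, not the study's implementation):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def amplify_high_freq_error(error, fs, cutoff_hz=0.8, gain=2.0):
    """Boost the high-frequency part of the force-tracking error before display."""
    b, a = butter(2, cutoff_hz / (fs / 2), btype="highpass")
    high = filtfilt(b, a, error)
    return error + gain * high        # low-frequency error unchanged, HF boosted

fs = 100.0
t = np.arange(0, 10, 1 / fs)
err = 0.3 * np.sin(2 * np.pi * 0.2 * t) + 0.05 * np.random.randn(t.size)
displayed = amplify_high_freq_error(err, fs)
print(np.std(err), np.std(displayed))
```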

  16. 14 CFR 129.17 - Aircraft communication and navigation equipment for operations under IFR or over the top.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Aircraft communication and navigation....S.-REGISTERED AIRCRAFT ENGAGED IN COMMON CARRIAGE General § 129.17 Aircraft communication and... accuracy required for ATC; (ii) One marker beacon receiver providing visual and aural signals; and (iii...

  17. Feedback control of one's own action: Self-other sensory attribution in motor control.

    Science.gov (United States)

    Asai, Tomohisa

    2015-12-15

    The sense of agency, the subjective experience of controlling one's own action, has an important function in motor control. When we move our own body or even external tools, we attribute that movement to ourselves and, in theory, utilize that sensory information to correct "our own" movement. The dynamic relationship between conscious self-other attribution and feedback control, however, is still unclear. Participants were required to make a sinusoidal reaching movement and received its visual feedback (i.e., a cursor). When participants received a fake movement that was spatio-temporally close to their actual movement, illusory self-attribution of the fake movement was observed. In this situation, since participants tried to control the cursor but it was impossible to do so, the movement error increased (Experiment 1). However, when the visual feedback was reduced to make self-other attribution difficult, there was no further increase in the movement error (Experiment 2). These results indicate that conscious self-other sensory attribution might coordinate sensory input and motor output. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Uncertainty of feedback and state estimation determines the speed of motor adaptation

    Directory of Open Access Journals (Sweden)

    Kunlin Wei

    2010-05-01

    Full Text Available Humans can adapt their motor behaviors to deal with ongoing changes. To achieve this, the nervous system needs to estimate central variables for our movement based on past knowledge and new feedback, both of which are uncertain. In the Bayesian framework, rates of adaptation characterize how noisy feedback is in comparison to the uncertainty of the state estimate. The predictions of Bayesian models are intuitive: the nervous system should adapt slower when sensory feedback is more noisy and faster when its state estimate is more uncertain. Here we want to quantitatively understand how uncertainty in these two factors affects motor adaptation. In a hand reaching experiment we measured trial-by-trial adaptation to a randomly changing visual perturbation to characterize the way the nervous system handles uncertainty in state estimation and feedback. We found both qualitative predictions of Bayesian models confirmed. Our study provides evidence that the nervous system represents and uses uncertainty in state estimate and feedback during motor adaptation.
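
    The Bayesian prediction described here has a standard closed form for Gaussian noise: the trial-by-trial adaptation rate is the familiar Kalman-style ratio of state uncertainty to total uncertainty. A minimal sketch follows; the numbers are illustrative, and the study's actual model also updates the state uncertainty across trials:

```python
def bayes_adaptation_rate(sigma_state, sigma_feedback):
    """Weight given to a single noisy observation when updating the state estimate.
    Larger feedback noise -> slower adaptation; larger state uncertainty -> faster."""
    return sigma_state**2 / (sigma_state**2 + sigma_feedback**2)

# Trial-by-trial update of an estimated visual perturbation (illustrative numbers).
estimate, sigma_state, sigma_feedback = 0.0, 1.0, 2.0
for observed_error in (3.0, 2.5, 2.8):
    k = bayes_adaptation_rate(sigma_state, sigma_feedback)
    estimate += k * (observed_error - estimate)
    print(f"rate={k:.2f}, estimate={estimate:.2f}")
```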

  19. Virtual Reality, 3D Stereo Visualization, and Applications in Robotics

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    2006-01-01

    , while little can be found about the advantages of stereoscopic visualization in mobile robot tele-guide applications. This work investigates stereoscopic robot tele-guide under different conditions, including typical navigation scenarios and the use of synthetic and real images. This work also...

  20. Feedforward, horizontal, and feedback processing in the visual cortex.

    NARCIS (Netherlands)

    Spekreijse, H.; Lamme, V.A.F.

    1998-01-01

    The cortical visual system consists of many richly interconnected areas. Each area is characterized by more or less specific receptive field tuning properties. However, these tuning properties reflect only a subset of the interactions that occur within and between areas. Neuronal responses may be