WorldWideScience

Sample records for visually guided movements

  1. Activation of Visuomotor Systems during Visually Guided Movements: A Functional MRI Study

    Science.gov (United States)

    Ellermann, Jutta M.; Siegal, Joel D.; Strupp, John P.; Ebner, Timothy J.; Ugurbil, Kâmil

    1998-04-01

    The dorsal stream is a dominant visuomotor pathway that connects the striate and extrastriate cortices to posterior parietal areas. In turn, the posterior parietal areas send projections to the frontal primary motor and premotor areas. This cortical pathway is hypothesized to be involved in the transformation of a visual input into the appropriate motor output. In this study we used functional magnetic resonance imaging (fMRI) of the entire brain to determine the patterns of activation that occurred while subjects performed a visually guided motor task. In nine human subjects, fMRI data were acquired on a 4-T whole-body MR system equipped with a head gradient coil and a birdcage RF coil using a T2*-weighted EPI sequence. Functional activation was determined for three different tasks: (1) a visuomotor task consisting of moving a cursor on a screen with a joystick in relation to various targets, (2) a hand movement task consisting of moving the joystick without visual input, and (3) an eye movement task consisting of moving the eyes alone without visual input. Blood oxygenation level-dependent (BOLD) contrast-based activation maps of each subject were generated using period cross-correlation statistics. Subsequently, each subject's brain was normalized to Talairach coordinates, and the individual maps were compared on a pixel-by-pixel basis. Significantly activated pixels common to at least four out of six subjects were retained to construct the final functional image. The pattern of activation during visually guided movements was consistent with the flow of information from striate and extrastriate visual areas, to the posterior parietal complex, and then to frontal motor areas. The extensive activation of this network and the reproducibility among subjects is consistent with a role for the dorsal stream in transforming visual information into motor behavior. Also extensively activated were the medial and lateral cerebellar structures, implicating the cortico
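    The period cross-correlation mapping described above can be sketched roughly as follows: correlate each voxel's time course with an idealized task reference and retain voxels above a threshold. This is a hedged illustration only, not the authors' pipeline; `boxcar_reference`, `pearson_r`, and the 0.5 threshold are assumptions for the example.

```python
# Hedged sketch of cross-correlation-based BOLD activation mapping.
# All names and the threshold value are illustrative, not from the study.
import math

def boxcar_reference(n_volumes, period, duty=0.5):
    """Idealized task reference: 1 during task blocks, 0 during rest."""
    return [1.0 if (t % period) < duty * period else 0.0
            for t in range(n_volumes)]

def pearson_r(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def activation_map(voxel_timeseries, reference, r_threshold=0.5):
    """Flag voxels whose time course correlates with the task reference."""
    return [pearson_r(ts, reference) >= r_threshold
            for ts in voxel_timeseries]
```

    In practice, real pipelines also correct for hemodynamic lag and multiple comparisons; the sketch shows only the core correlation step.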

  2. DEVELOPMENT OF VISUALLY GUIDED BEHAVIOR REQUIRES ORIENTED CONTOURS

    NARCIS (Netherlands)

    BRENNER, E; CORNELISSEN, FW

    1992-01-01

    Kittens do not learn to use visual information to guide their behaviour if they are deprived of the optic flow that accompanies their own movements. We show that the optic flow that is required for developing visually guided behaviour is derived from changes in contour orientations, rather than from

  3. Gaze strategies during visually-guided versus memory-guided grasping.

    Science.gov (United States)

    Prime, Steven L; Marotta, Jonathan J

    2013-03-01

    Vision plays a crucial role in guiding motor actions. But sometimes we cannot use vision and must rely on our memory to guide action, e.g. remembering where we placed our eyeglasses on the bedside table when reaching for them with the lights off. Recent studies show subjects look towards the index finger grasp position during visually-guided precision grasping. But, where do people look during memory-guided grasping? Here, we explored the gaze behaviour of subjects as they grasped a centrally placed symmetrical block under open- and closed-loop conditions. In Experiment 1, subjects performed grasps in either a visually-guided task or memory-guided task. The results show that during visually-guided grasping, gaze was first directed towards the index finger's grasp point on the block, suggesting gaze targets future grasp points during the planning of the grasp. Gaze during memory-guided grasping was aimed closer to the block's centre of mass from block presentation to the completion of the grasp. In Experiment 2, subjects performed an 'immediate grasping' task in which vision of the block was removed immediately at the onset of the reach. Similar to the visually-guided results from Experiment 1, gaze was primarily directed towards the index finger location. These results support the 2-stream theory of vision in that motor planning with visual feedback at the onset of the movement is driven primarily by real-time visuomotor computations of the dorsal stream, whereas grasping remembered objects without visual feedback is driven primarily by the perceptual memory representations mediated by the ventral stream.

  4. The consummatory origins of visually guided reaching in human infants: a dynamic integration of whole-body and upper-limb movements.

    Science.gov (United States)

    Foroud, Afra; Whishaw, Ian Q

    2012-06-01

    Reaching-to-eat (skilled reaching) is a natural behaviour that involves reaching for, grasping and withdrawing a target to be placed into the mouth for eating. It is an action performed daily by adults and is among the first complex behaviours to develop in infants. During development, visually guided reaching becomes increasingly refined to the point that grasping of small objects with precision grips of the digits occurs at about one year of age. Integration of the hand, upper limbs, and whole body is required for successful reaching, but the ontogeny of this integration has not been described. The present longitudinal study used Laban Movement Analysis, a behavioural descriptive method, to investigate the developmental progression of the use and integration of axial, proximal, and distal movements performed during visually guided reaching. Four infants (from 7 to 40 weeks of age) were presented with graspable objects (toys or food items). The first prereaching stage was associated with activation of mouth, limb, and hand movements to a visually presented target. Next, reaching attempts consisted of first, the advancement of the head with an opening mouth and then with the head, trunk and opening mouth. Eventually, the axial movements gave way to the refined action of one upper limb supported by axial adjustments. These findings are discussed in relation to the biological objective of reaching, the evolutionary origins of reaching, and the decomposition of reaching after neurological injury. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Memory-guided reaching in a patient with visual hemiagnosia.

    Science.gov (United States)

    Cornelsen, Sonja; Rennig, Johannes; Himmelbach, Marc

    2016-06-01

    The two-visual-systems hypothesis (TVSH) postulates that memory-guided movements rely on intact functions of the ventral stream. Its particular importance for memory-guided actions was initially inferred from behavioral dissociations in the well-known patient DF. Despite rather accurate reaching and grasping movements to visible targets, she demonstrated grossly impaired memory-guided grasping as well as impaired memory-guided reaching. These dissociations were later complemented by apparently reversed dissociations in patients with dorsal damage and optic ataxia. However, grasping studies in DF and optic ataxia patients differed with respect to the retinotopic position of target objects, questioning the interpretation of the respective findings as a double dissociation. In contrast, the findings for reaching errors in both types of patients came from similar peripheral target presentations. However, new data on brain structural changes and visuomotor deficits in DF also questioned the validity of a double dissociation in reaching. A severe visuospatial short-term memory deficit in DF further questioned the specificity of her memory-guided reaching deficit. Therefore, we compared movement accuracy in visually-guided and memory-guided reaching in a new patient who suffered confined unilateral damage to the ventral visual system due to stroke. Our results indeed support previous descriptions of inaccurate memory-guided movements in DF. Furthermore, our data suggest that the recently discovered optic-ataxia-like misreaching in DF is most likely caused by her parieto-occipital and not by her ventral stream damage. Finally, multiple visuospatial memory measurements in HWS suggest that inaccuracies in memory-guided reaching tasks in patients with ventral damage cannot be explained by visuospatial short-term memory or perceptual deficits, but by a specific deficit in visuomotor processing. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Eye movements in interception with delayed visual feedback.

    Science.gov (United States)

    Cámara, Clara; de la Malla, Cristina; López-Moliner, Joan; Brenner, Eli

    2018-04-19

    The increased reliance on electronic devices such as smartphones in our everyday life exposes us to various delays between our actions and their consequences. Whereas it is known that people can adapt to such delays, the mechanisms underlying such adaptation remain unclear. To better understand these mechanisms, the current study explored the role of eye movements in interception with delayed visual feedback. In two experiments, eye movements were recorded as participants tried to intercept a moving target with their unseen finger while receiving delayed visual feedback about their own movement. In Experiment 1, the target randomly moved in one of two different directions at one of two different velocities. The delay between the participant's finger movement and movement of the cursor that provided feedback about the finger movements was gradually increased. Despite the delay, participants followed the target with their gaze. They were quite successful at hitting the target with the cursor. Thus, they moved their finger to a position that was ahead of where they were looking. Removing the feedback showed that participants had adapted to the delay. In Experiment 2, the target always moved in the same direction and at the same velocity, while the cursor's delay varied across trials. Participants still always directed their gaze at the target. They adjusted their movement to the delay on each trial, often succeeding in intercepting the target with the cursor. Since their gaze was always directed at the target, and they could not know the delay until the cursor started moving, participants must have been using peripheral vision of the delayed cursor to guide it to the target. Thus, people deal with delays by directing their gaze at the target and using both experience from previous trials (Experiment 1) and peripheral visual information (Experiment 2) to guide their finger in a way that will make the cursor hit the target.
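    The delayed-feedback manipulation above amounts to rendering the cursor at the finger's position from a fixed number of samples earlier. A minimal sketch under that assumption (the `DelayedCursor` name and sample-indexed delay are illustrative, not the authors' implementation):

```python
# Hedged sketch of delayed visual feedback: the cursor shows the finger's
# position from `delay_samples` updates ago. Illustrative only.
from collections import deque

class DelayedCursor:
    def __init__(self, delay_samples):
        self.buffer = deque()
        self.delay = delay_samples

    def update(self, finger_pos):
        """Feed the current finger position; return the delayed cursor position."""
        self.buffer.append(finger_pos)
        if len(self.buffer) > self.delay:
            return self.buffer.popleft()
        return self.buffer[0]  # until the delay fills, hold the first sample
```

    With a 2-sample delay, the cursor at update t reproduces the finger position from update t − 2 once the buffer has filled.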

  7. The Effect of Sensory Uncertainty Due to Amblyopia (Lazy Eye) on the Planning and Execution of Visually-Guided 3D Reaching Movements

    Science.gov (United States)

    Niechwiej-Szwedo, Ewa; Goltz, Herbert C.; Chandrakumar, Manokaraananthan; Wong, Agnes M. F.

    2012-01-01

    Background: Impairment of spatiotemporal visual processing in amblyopia has been studied extensively, but its effects on visuomotor tasks have rarely been examined. Here, we investigate how visual deficits in amblyopia affect motor planning and online control of visually-guided, unconstrained reaching movements. Methods: Thirteen patients with mild amblyopia, 13 with severe amblyopia and 13 visually-normal participants were recruited. Participants reached and touched a visual target during binocular and monocular viewing. Motor planning was assessed by examining spatial variability of the trajectory at 50–100 ms after movement onset. Online control was assessed by examining the endpoint variability and by calculating the coefficient of determination (R2), which correlates the spatial position of the limb during the movement to endpoint position. Results: Patients with amblyopia had reduced precision of the motor plan in all viewing conditions as evidenced by increased variability of the reach early in the trajectory. Endpoint precision was comparable between patients with mild amblyopia and control participants. Patients with severe amblyopia had reduced endpoint precision along azimuth and elevation during amblyopic eye viewing only, and along the depth axis in all viewing conditions. In addition, they had significantly higher R2 values at 70% of movement time along the elevation and depth axes during amblyopic eye viewing. Conclusion: Sensory uncertainty due to amblyopia leads to reduced precision of the motor plan. The ability to implement online corrections depends on the severity of the visual deficit, viewing condition, and the axis of the reaching movement. Patients with mild amblyopia used online control effectively to compensate for the reduced precision of the motor plan. In contrast, patients with severe amblyopia were not able to use online control as effectively to amend the limb trajectory especially along the depth axis, which could be due to their
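    The R2 index described above relates limb position partway through the reach to the final endpoint across trials; for a simple linear fit it equals the squared Pearson correlation. A hedged sketch (variable names are illustrative, not from the study):

```python
# Hedged sketch of the online-control index: R^2 of a simple linear fit
# of endpoint position on limb position at some fraction of movement time.
def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    syy = sum((b - my) ** 2 for b in y)
    if sxx == 0 or syy == 0:
        return 0.0
    return (sxy * sxy) / (sxx * syy)
```

    A high R2 at, say, 70% of movement time means the limb's position already predicts the endpoint (little late correction); a low R2 suggests ongoing online adjustments, which is how the abstract interprets the patients' higher values.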

  8. Evaluation of the Leap Motion Controller during the performance of visually-guided upper limb movements.

    Science.gov (United States)

    Niechwiej-Szwedo, Ewa; Gonzalez, David; Nouredanesh, Mina; Tung, James

    2018-01-01

    Kinematic analysis of upper limb reaching provides insight into the central nervous system control of movements. Until recently, kinematic examination of motor control has been limited to studies conducted in traditional research laboratories because the motion capture equipment used for data collection is expensive and not easily portable. A recently developed markerless system, the Leap Motion Controller (LMC), is a portable and inexpensive tracking device that allows recording of 3D hand and finger position. The main goal of this study was to assess the concurrent reliability and validity of the LMC as compared to the Optotrak, a criterion-standard motion capture system, for measures of temporal accuracy and peak velocity during the performance of upper limb, visually-guided movements. In experiment 1, 14 participants executed aiming movements to visual targets presented on a computer monitor. Bland-Altman analysis was conducted to assess the validity and limits of agreement for measures of temporal accuracy (movement time, duration of deceleration interval), peak velocity, and spatial accuracy (endpoint accuracy). In addition, a one-sample t-test was used to test the hypothesis that the error difference between measures obtained from Optotrak and LMC is zero. In experiment 2, 15 participants performed a Fitts' type aiming task in order to assess whether the LMC is capable of assessing a well-known speed-accuracy trade-off relationship. Experiment 3 assessed the temporal coordination pattern during the performance of a sequence consisting of a reaching, grasping, and placement task in 15 participants. Results from the t-test showed that the error difference in temporal measures was significantly different from zero. Based on the results from the 3 experiments, the average temporal error in movement time was 40±44 ms, and the error in peak velocity was 0.024±0.103 m/s.
The limits of agreement between the LMC and Optotrak for spatial accuracy measures ranged between
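    The Bland-Altman analysis used above summarizes agreement between two measurement systems as the mean difference (bias) plus limits of agreement at bias ± 1.96 × SD of the differences. A minimal sketch, with illustrative names and sample data:

```python
# Hedged sketch of Bland-Altman limits of agreement between two devices
# (e.g. LMC vs. Optotrak). Names and data are illustrative.
import math

def bland_altman(measure_a, measure_b):
    """Return (bias, lower limit, upper limit) for paired measurements."""
    diffs = [a - b for a, b in zip(measure_a, measure_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

    The conventional Bland-Altman plot then shows each pair's difference against the pair's mean, with horizontal lines at the bias and the two limits.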

  9. Separate visual representations for perception and for visually guided behavior

    Science.gov (United States)

    Bridgeman, Bruce

    1989-01-01

    Converging evidence from several sources indicates that two distinct representations of visual space mediate perception and visually guided behavior, respectively. The two maps of visual space follow different rules; spatial values in either one can be biased without affecting the other. Ordinarily the two maps give equivalent responses because both are veridically in register with the world; special techniques are required to pull them apart. One such technique is saccadic suppression: small target displacements during saccadic eye movements are not perceived, though the displacements can change eye movements or pointing to the target. A second way to separate cognitive and motor-oriented maps is with induced motion: a slowly moving frame will make a fixed target appear to drift in the opposite direction, while motor behavior toward the target is unchanged. The same result occurs with stroboscopic induced motion, where the frame jumps abruptly and the target seems to jump in the opposite direction. A third method of separating cognitive and motor maps, requiring no motion of target, background or eye, is the Roelofs effect: a target surrounded by an off-center rectangular frame will appear to be off-center in the direction opposite the frame. Again the effect influences perception, but in half of the subjects it does not influence pointing to the target. This work also reveals more characteristics of the maps and their interactions with one another: the motor map apparently has little or no memory, and must be fed from the biased cognitive map if an enforced delay occurs between stimulus presentation and motor response. In designing spatial displays, the results mean that what you see isn't necessarily what you get. Displays must be designed with either perception or visually guided behavior in mind.

  10. Memory-guided saccade processing in visual form agnosia (patient DF).

    Science.gov (United States)

    Rossit, Stéphanie; Szymanek, Larissa; Butler, Stephen H; Harvey, Monika

    2010-01-01

    According to Milner and Goodale's model (The visual brain in action, Oxford University Press, Oxford, 2006) areas in the ventral visual stream mediate visual perception and off-line actions, whilst regions in the dorsal visual stream mediate the on-line visual control of action. Strong evidence for this model comes from a patient (DF), who suffers from visual form agnosia after bilateral damage to the ventro-lateral occipital region, sparing V1. It has been reported that she is normal in immediate reaching and grasping, yet severely impaired when asked to perform delayed actions. Here we investigated whether this dissociation would extend to saccade execution. Neurophysiological studies and TMS work in humans have shown that the posterior parietal cortex (PPC), on the right in particular (supposedly spared in DF), is involved in the control of memory-guided saccades. Surprisingly though, we found that, just as reported for reaching and grasping, DF's saccadic accuracy was much reduced in the memory compared to the stimulus-guided condition. These data support the idea of a tight coupling of eye and hand movements and further suggest that dorsal stream structures may not be sufficient to drive memory-guided saccadic performance.

  11. Accuracy of visually and memory-guided antisaccades in man.

    Science.gov (United States)

    Krappmann, P; Everling, S; Flohr, H

    1998-10-01

    Primary saccades to remembered targets are generally not precise, but rather undershoot target position. The major source of this saccadic undershoot may be (a) a memory-related process or (b) a poor spatial resolution in those processes which transfer the retinotopic target information into an intermediate memory-linked representation of space. The aim of this study was to investigate whether distortions of eye positions in the antisaccade task, which are characterized by inherent co-ordinate transformation processes, may completely account for the spatial inaccuracies of memory-guided antisaccades. The results show that the spatial inaccuracy of primary and secondary eye movements in the visually guided antisaccade task was comparable to that in the memory-guided antisaccade task. In both conditions, the direction error component was less dysmetric than the amplitude error component. Secondary eye movements were significantly corrective. This increase of eye position accuracy was achieved by reducing the amplitude error only. It is concluded from this study that at least some of the distortion of memory-guided saccades is due to inaccuracies in the sensorimotor co-ordinate transformations.
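    The amplitude and direction error components analyzed above can be obtained by decomposing the endpoint error into a component along the fixation-to-target axis and a component perpendicular to it. A hedged geometric sketch (function and argument names are illustrative):

```python
# Hedged sketch: split a saccadic endpoint error into an amplitude error
# (along the target direction; undershoot < 0) and a direction error
# (signed perpendicular deviation). Names are illustrative.
import math

def error_components(target, endpoint):
    """target, endpoint: (x, y) positions relative to fixation, in degrees."""
    tx, ty = target
    ex, ey = endpoint
    t_len = math.hypot(tx, ty)
    ux, uy = tx / t_len, ty / t_len        # unit vector toward the target
    along = ex * ux + ey * uy              # endpoint projected on target axis
    amplitude_error = along - t_len        # undershoot < 0, overshoot > 0
    direction_error = -ex * uy + ey * ux   # perpendicular (cross-product) term
    return amplitude_error, direction_error
```

    For example, with a target 10° to the right, an endpoint at 9° along the same axis yields a 1° undershoot in amplitude with zero direction error.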

  12. Visualization of Spatio-Temporal Relations in Movement Event Using Multi-View

    Science.gov (United States)

    Zheng, K.; Gu, D.; Fang, F.; Wang, Y.; Liu, H.; Zhao, W.; Zhang, M.; Li, Q.

    2017-09-01

    Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations within time and space, as well as intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when these relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, and the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts to derive and explain the spatio-temporal relations of movement events from taxi trajectory data.
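    The multi-granularity statistical summaries mentioned above can be illustrated by binning movement events at two temporal scales; the event structure and bin sizes below are illustrative, not from the paper.

```python
# Hedged sketch: summarize movement events (e.g. taxi pickups/dropoffs)
# at two temporal granularities. Event data are invented for illustration.
from collections import Counter

def summarize(events, bin_seconds):
    """events: list of (timestamp_seconds, event_type); count events per bin."""
    return Counter(int(t // bin_seconds) for t, _ in events)

events = [(30, "pickup"), (90, "dropoff"), (100, "pickup"), (3700, "pickup")]
coarse = summarize(events, 3600)   # hourly bins
fine = summarize(events, 60)       # minute bins
```

    A multi-view interface would render such summaries at the granularity the analyst selects, letting coarse bins reveal broad patterns and fine bins localize when relations emerge.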

  13. VISUALIZATION OF SPATIO-TEMPORAL RELATIONS IN MOVEMENT EVENT USING MULTI-VIEW

    Directory of Open Access Journals (Sweden)

    K. Zheng

    2017-09-01

    Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations within time and space, as well as intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when these relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, and the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts to derive and explain the spatio-temporal relations of movement events from taxi trajectory data.

  14. Covert oculo-manual coupling induced by visually guided saccades.

    Directory of Open Access Journals (Sweden)

    Luca Falciati

    2013-10-01

    Hand pointing to objects under visual guidance is one of the most common motor behaviors in everyday life. In natural conditions, gaze and arm movements are commonly aimed at the same target and the accuracy of both systems is considerably enhanced if eye and hand move together. Evidence supports the viewpoint that gaze and limb control systems are not independent but at least partially share a common neural controller. The aim of the present study was to verify whether saccade execution induces excitability changes in the upper-limb corticospinal system (CSS), even in the absence of a manual response. This effect would provide evidence for the existence of a common drive for ocular and arm motor systems during fast aiming movements. Single-pulse TMS was applied to the left motor cortex of 19 subjects during a task involving visually guided saccades, and motor evoked potentials (MEPs) induced in hand and wrist muscles of the contralateral relaxed arm were recorded. Subjects had to make visually guided saccades to one of 6 positions along the horizontal meridian (±5°, ±10° or ±15°). During each trial, TMS was randomly delivered at one of 3 different time delays: shortly after the end of the saccade or 300 ms or 540 ms after saccade onset. Fast eye movements towards a peripheral target were accompanied by changes in upper-limb CSS excitability. MEP amplitude was highest immediately after the end of the saccade and gradually decreased at longer TMS delays. In addition to the change in overall CSS excitability, MEPs were specifically modulated in different muscles, depending on the target position and the TMS delay. By applying a simple model of a manual pointing movement, we demonstrated that the observed changes in CSS excitability are compatible with the facilitation of an arm motor program for a movement aimed at the same target as the gaze. These results provide evidence in favor of the existence of a common drive for both eye and arm

  15. Visual explorer facilitator's guide

    CERN Document Server

    Palus, Charles J

    2010-01-01

    Grounded in research and practice, the Visual Explorer™ Facilitator's Guide provides a method for supporting collaborative, creative conversations about complex issues through the power of images. The guide is available as a component in the Visual Explorer Facilitator's Letter-sized Set, Visual Explorer Facilitator's Post card-sized Set, Visual Explorer Playing Card-sized Set, and is also available as a stand-alone title for purchase to assist multiple tool users in an organization.

  16. Visual reinforcement shapes eye movements in visual search.

    Science.gov (United States)

    Paeye, Céline; Schütz, Alexander C; Gegenfurtner, Karl R

    2016-08-01

    We use eye movements to gain information about our visual environment; this information can indirectly be used to affect the environment. Whereas eye movements are affected by explicit rewards such as points or money, it is not clear whether the information gained by finding a hidden target has a similar reward value. Here we tested whether finding a visual target can reinforce eye movements in visual search performed in a noise background, which conforms to natural scene statistics and contains a large number of possible target locations. First we tested whether presenting the target more often in one specific quadrant would modify eye movement search behavior. Surprisingly, participants did not learn to search for the target more often in high probability areas. Presumably, participants could not learn the reward structure of the environment. In two subsequent experiments we used a gaze-contingent display to gain full control over the reinforcement schedule. The target was presented more often after saccades into a specific quadrant or a specific direction. The proportions of saccades meeting the reinforcement criteria increased considerably, and participants matched their search behavior to the relative reinforcement rates of targets. Reinforcement learning seems to serve as the mechanism to optimize search behavior with respect to the statistics of the task.

  17. Visualizing guided tours

    DEFF Research Database (Denmark)

    Poulsen, Signe Herbers; Fjord-Larsen, Mads; Hansen, Frank Allan

    This paper identifies several problems with navigating and visualizing guided tours in traditional hypermedia systems. We discuss solutions to these problems, including the representation of guided tours as 3D metro maps with content preview. Issues regarding navigation and disorientation...

  18. Eye movements, visual search and scene memory, in an immersive virtual environment.

    Directory of Open Access Journals (Sweden)

    Dmitry Kit

    Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, however, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.

  19. Eye movements, visual search and scene memory, in an immersive virtual environment.

    Science.gov (United States)

    Kit, Dmitry; Katz, Leor; Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary

    2014-01-01

    Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, however, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.

  20. MODELLING SYNERGISTIC EYE MOVEMENTS IN THE VISUAL FIELD

    Directory of Open Access Journals (Sweden)

    BARITZ Mihaela

    2015-06-01

    Some theoretical and practical considerations about eye movements in the visual field are presented in the first part of this paper. These movements develop in the human body to be synergistic, enabling visual perception of 3D space. The theoretical background of the analysis of eye movements is the establishment of the equations of motion of the eyeball, treated as a rigid body with a fixed point. The exterior actions and the ordering and execution of the movements are ensured by the external neural and muscular systems, so the position, stability and movements of the eye can be quantified through inverse kinematics. The purpose of this research is the development of a simulation model of the human binocular visual system, together with an acquisition methodology and an experimental setup for processing and recording data on eye movements, presented in the second part of the paper. The modelling of ocular movements aims to establish binocular synergy and the limits of visual-field changes under ocular motor dysfunction. From the biomechanical movements of the eyeball, a modelling strategy is established for process parameters such as convergence, fixation and accommodation of the eye lens, in order to obtain responses on binocular balance. The results of the modelling process and the positions of the eyeball and its axes in the visual field are presented in the final part of the paper.

  1. Visual short-term memory guides infants' visual attention.

    Science.gov (United States)

    Mitsven, Samantha G; Cantrell, Lisa M; Luck, Steven J; Oakes, Lisa M

    2018-08-01

    Adults' visual attention is guided by the contents of visual short-term memory (VSTM). Here we asked whether 10-month-old infants' (N = 41) visual attention is also guided by the information stored in VSTM. In two experiments, we modified the one-shot change detection task (Oakes, Baumgartner, Barrett, Messenger, & Luck, 2013) to create a simplified cued visual search task to ask how information stored in VSTM influences where infants look. A single sample item (e.g., a colored circle) was presented at fixation for 500 ms, followed by a brief (300 ms) retention interval and then a test array consisting of two items, one on each side of fixation. One item in the test array matched the sample stimulus and the other did not. Infants were more likely to look at the non-matching item than at the matching item, demonstrating that the information stored rapidly in VSTM guided subsequent looking behavior. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. The effect of different brightness conditions on visually and memory guided saccades.

    Science.gov (United States)

    Felßberg, Anna-Maria; Dombrowe, Isabel

    2018-01-01

    It is commonly assumed that saccades in the dark are slower than saccades in a lit room. Early studies that investigated this issue using electrooculography (EOG) often compared memory guided saccades in darkness to visually guided saccades in an illuminated room. However, later studies showed that memory guided saccades are generally slower than visually guided saccades. Research on this topic is further complicated by the fact that the different existing eyetracking methods do not necessarily lead to consistent measurements. In the present study, we independently manipulated task (memory guided/visually guided) and screen brightness (dark, medium and light) in an otherwise completely dark room, and measured the peak velocity and the duration of the participants' saccades using a popular pupil-cornea reflection (p-cr) eyetracker (Eyelink 1000). Based on a critical reading of the literature, including a recent study using cornea-reflection (cr) eye tracking, we did not expect any velocity or duration differences between the three brightness conditions. We found that memory guided saccades were generally slower than visually guided saccades. In both tasks, eye movements on a medium and light background were equally fast and had similar durations. However, saccades on the dark background were slower and had shorter durations, even after we corrected for the effect of pupil size changes. This slowing is therefore most likely an artifact of current pupil-based eye tracking. We conclude that the common assumption that saccades in the dark are slower than in the light is probably not true; however, pupil-based eyetrackers tend to underestimate the peak velocity of saccades on very dark backgrounds, creating the impression that this might be the case. Copyright © 2017 Elsevier Ltd. All rights reserved.
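    The peak-velocity and duration measures discussed above can be sketched from raw gaze samples with a simple velocity-threshold criterion. This is a minimal illustration, not the Eyelink parser: the sampling rate, the 30 deg/s threshold, and the single-saccade assumption are all arbitrary choices:

    ```python
    def saccade_metrics(positions, fs=1000.0, threshold=30.0):
        """Peak velocity (deg/s) and duration (s) of a saccade in a 1-D gaze
        trace, via a simple velocity-threshold criterion. `positions` are
        gaze angles in degrees sampled at `fs` Hz; the 30 deg/s threshold is
        a common but arbitrary choice, and the trace is assumed to contain
        at most one saccade."""
        velocities = [(b - a) * fs for a, b in zip(positions, positions[1:])]
        above = [i for i, v in enumerate(velocities) if abs(v) > threshold]
        if not above:
            return None  # no saccade found
        onset, offset = above[0], above[-1]
        peak = max(abs(velocities[i]) for i in range(onset, offset + 1))
        duration = (offset - onset + 1) / fs
        return peak, duration

    # Synthetic trace: fixation, a 10-degree saccade over 10 ms, fixation.
    trace = [0.0] * 5 + [float(i) for i in range(1, 11)] + [10.0] * 5
    print(saccade_metrics(trace))  # -> (1000.0, 0.01)
    ```

    The study's point is that on dark backgrounds a pupil-based tracker can distort `positions` itself, so the computed peak velocity is biased even when the analysis is correct.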

  3. Data visualization a guide to visual storytelling for libraries

    CERN Document Server

    2016-01-01

    Data Visualization: A Guide to Visual Storytelling for Libraries is a practical guide to the skills and tools needed to create beautiful and meaningful visual stories through data visualization. Learn how to sift through complex datasets to better understand a variety of metrics, such as trends in user behavior and electronic resource usage, return on investment (ROI) and impact metrics, and learning and reference analytics. Sections include: identifying and interpreting datasets for visualization; tools and technologies for creating meaningful visualizations; and case studies in data visualization and dashboards. Understanding and communicating trends from your organization's data is essential. Whether you are looking to make more informed decisions by visualizing organizational data, or to tell the story of your library's impact on your community, this book will give you the tools to make it happen.

  4. Watching your foot move - an fMRI study of visuomotor interactions during foot movement

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Jensen, Jesper Lundbye; Petersen, Nicolas

    2007-01-01

    are activated during self-generated ankle movements guided by visual feedback as compared with externally generated movements under similar visual and proprioceptive conditions. We found a distinct network, comprising the posterior parietal cortex and lateral cerebellar hemispheres, which showed increased...... activation during visually guided self-generated ankle movements. Furthermore, we found differential activation in the cerebellum depending on the different main effects, that is, whether movements were self- or externally generated regardless of visual feedback, presence or absence of visual feedback...

  5. Looking at the ventriloquist: visual outcome of eye movements calibrates sound localization.

    Directory of Open Access Journals (Sweden)

    Daniel S Pages

    A general problem in learning is how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6 degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
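    The reported 22-28% shift follows directly from the 1.3-1.7 degree endpoint displacement against the 6 degree mismatch. A minimal sketch of that computation, with hypothetical saccade-endpoint data:

    ```python
    from statistics import mean

    def aftereffect_percent(pre_endpoints, post_endpoints, mismatch_deg=6.0):
        """Ventriloquism aftereffect: the mean shift of auditory-saccade
        endpoints (post minus pre exposure), expressed as a percentage of
        the audiovisual mismatch. The endpoint lists used below are
        hypothetical example data, not the study's."""
        shift = mean(post_endpoints) - mean(pre_endpoints)
        return 100.0 * shift / mismatch_deg

    # A 1.5-degree endpoint shift against a 6-degree mismatch: 25%.
    print(round(aftereffect_percent([0.0, 0.2, -0.2], [1.5, 1.7, 1.3]), 1))  # -> 25.0
    ```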

  6. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    Science.gov (United States)

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  7. Eye movements and attention in reading, scene perception, and visual search.

    Science.gov (United States)

    Rayner, Keith

    2009-08-01

    Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.

  8. Acting without seeing: eye movements reveal visual processing without awareness.

    Science.gov (United States)

    Spering, Miriam; Carrasco, Marisa

    2015-04-01

    Visual perception and eye movements are considered to be tightly linked. Diverse fields, ranging from developmental psychology to computer science, utilize eye tracking to measure visual perception. However, this prevailing view has been challenged by recent behavioral studies. Here, we review converging evidence revealing dissociations between the contents of perceptual awareness and different types of eye movement. Such dissociations reveal situations in which eye movements are sensitive to particular visual features that fail to modulate perceptual reports. We also discuss neurophysiological, neuroimaging, and clinical studies supporting the role of subcortical pathways for visual processing without awareness. Our review links awareness to perceptual-eye movement dissociations and furthers our understanding of the brain pathways underlying vision and movement with and without awareness. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Choice reaching with a LEGO arm robot (CoRLEGO): The motor system guides visual attention to movement-relevant information.

    Science.gov (United States)

    Strauss, Soeren; Woodgate, Philip J W; Sami, Saber A; Heinke, Dietmar

    2015-12-01

    We present an extension of a neurobiologically inspired robotics model, termed CoRLEGO (Choice reaching with a LEGO arm robot). CoRLEGO models experimental evidence from choice reaching tasks (CRT). In a CRT, participants are asked to rapidly reach and touch an item presented on the screen. These experiments show that non-target items can divert the reaching movement away from the ideal trajectory to the target item. This is seen as evidence that attentional selection of reaching targets can leak into the motor system. Using competitive target selection and topological representations of motor parameters (dynamic neural fields), CoRLEGO is able to mimic this leakage effect. Furthermore, if the reaching target is determined by its colour oddity (i.e. a green square among red squares or vice versa), the reaching trajectories become straighter with repetitions of the target colour (colour streaks). This colour priming effect can also be modelled with CoRLEGO. The paper also presents an extension of CoRLEGO that mimics findings that transcranial direct current stimulation (tDCS) over the motor cortex modulates the colour priming effect (Woodgate et al., 2015). The results with the new CoRLEGO suggest that feedback connections from the motor system to the brain's attentional system (parietal cortex) guide visual attention to extract movement-relevant information (i.e. colour) from visual stimuli. This paper adds to growing evidence that there is a close interaction between the motor system and the attention system. This evidence contradicts the traditional conceptualization of the motor system as the endpoint of a serial chain of processing stages. At the end of the paper we discuss CoRLEGO's predictions and also lessons for neurobiologically inspired robotics emerging from this work. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
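    The competitive target selection that CoRLEGO implements with dynamic neural fields can be illustrated, in drastically simplified form, as a winner-take-all competition between candidate targets with self-excitation and shared global inhibition. All parameters here are arbitrary illustrations, not the model's:

    ```python
    def select_target(inputs, steps=200, dt=0.1, h=-1.0, w_exc=1.2, w_inh=1.0):
        """Winner-take-all competition loosely in the spirit of a dynamic
        neural field: every node excites itself and all nodes share a global
        inhibition pool, so the node with the strongest input suppresses the
        rest. All parameters are illustrative, not CoRLEGO's."""
        u = [0.0] * len(inputs)                       # node activations
        f = lambda x: 1.0 if x > 0 else 0.0           # output nonlinearity
        for _ in range(steps):
            pool = sum(f(ui) for ui in u)             # global inhibition
            u = [ui + dt * (-ui + h + s + w_exc * f(ui) - w_inh * pool)
                 for ui, s in zip(u, inputs)]
        return max(range(len(u)), key=lambda i: u[i])

    # Two candidate reach targets; the stronger input wins the competition.
    print(select_target([2.0, 1.6]))  # -> 0
    ```

    In the full model the competing nodes form a continuous field over reach directions, so a not-yet-suppressed distractor node can transiently pull the commanded trajectory toward itself, which is the "leakage" effect described above.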

  10. Exploratory Visual Analysis for Animal Movement Ecology

    NARCIS (Netherlands)

    Slingsby, A.; van Loon, E.

    2016-01-01

    Movement ecologists study animals' movement to help understand their behaviours and interactions with each other and the environment. Data from GPS loggers are increasingly important for this. These data need to be processed, segmented and summarised for further visual and statistical analysis,

  11. Endpoints of arm movements to visual targets

    NARCIS (Netherlands)

    van den Dobbelsteen, John; Brenner, Eli; Smeets, Jeroen B J

    2001-01-01

    Reaching out for objects with an unseen arm involves using both visual and kinesthetic information. Neither visual nor kinesthetic information is perfect. Each is subject to both constant and variable errors. To evaluate how such errors influence performance in natural goal-directed movements, we

  12. Distinct eye movement patterns enhance dynamic visual acuity

    Science.gov (United States)

    Palidis, Dimitrios J.; Wyder-Hodge, Pearson A.; Fooken, Jolande; Spering, Miriam

    2017-01-01

    Dynamic visual acuity (DVA) is the ability to resolve fine spatial detail in dynamic objects during head fixation, or in static objects during head or body rotation. This ability is important for many activities such as ball sports, and a close relation has been shown between DVA and sports expertise. DVA tasks involve eye movements, yet, it is unclear which aspects of eye movements contribute to successful performance. Here we examined the relation between DVA and the kinematics of smooth pursuit and saccadic eye movements in a cohort of 23 varsity baseball players. In a computerized dynamic-object DVA test, observers reported the location of the gap in a small Landolt-C ring moving at various speeds while eye movements were recorded. Smooth pursuit kinematics—eye latency, acceleration, velocity gain, position error—and the direction and amplitude of saccadic eye movements were linked to perceptual performance. Results reveal that distinct eye movement patterns—minimizing eye position error, tracking smoothly, and inhibiting reverse saccades—were related to dynamic visual acuity. The close link between eye movement quality and DVA performance has important implications for the development of perceptual training programs to improve DVA. PMID:28187157
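    The pursuit kinematics linked to DVA above, velocity gain and position error, can be sketched from matched eye and target traces. This is a simplified illustration (real analyses remove saccades from the eye trace first), and the data format is an assumption:

    ```python
    def pursuit_metrics(eye_pos, target_pos, fs=1000.0):
        """Smooth-pursuit velocity gain (mean eye velocity over mean target
        velocity) and mean absolute position error, from matched 1-D
        position traces in degrees sampled at `fs` Hz. A simplified sketch:
        real analyses remove saccades from the eye trace first."""
        def velocity(p):
            return [(b - a) * fs for a, b in zip(p, p[1:])]
        eye_v, tgt_v = velocity(eye_pos), velocity(target_pos)
        gain = (sum(eye_v) / len(eye_v)) / (sum(tgt_v) / len(tgt_v))
        error = sum(abs(e - t) for e, t in zip(eye_pos, target_pos)) / len(eye_pos)
        return gain, error

    # The eye covers 90% of the target's path, so the gain comes out at 0.9.
    target = [float(i) for i in range(10)]
    eye = [0.9 * t for t in target]
    gain, error = pursuit_metrics(eye, target)
    ```

    For this synthetic ramp the gain is 0.9 and the mean position error about 0.45 degrees; in the study, smaller position error and gain closer to 1 went with better DVA performance.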

  14. Classification of visual and linguistic tasks using eye-movement features.

    Science.gov (United States)

    Coco, Moreno I; Keller, Frank

    2014-03-07

    The role of the task has received special attention in visual-cognition research because it can provide causal explanations of goal-directed eye-movement responses. The dependency between visual attention and task suggests that eye movements can be used to classify the task being performed. A recent study by Greene, Liu, and Wolfe (2012), however, fails to achieve accurate classification of visual tasks based on eye-movement features. In the present study, we hypothesize that tasks can be successfully classified when they differ with respect to the involvement of other cognitive domains, such as language processing. We extract the eye-movement features used by Greene et al. as well as additional features from the data of three different tasks: visual search, object naming, and scene description. First, we demonstrated that eye-movement responses make it possible to characterize the goals of these tasks. Then, we trained three different types of classifiers and predicted the task participants performed with an accuracy well above chance (a maximum of 88% for visual search). An analysis of the relative importance of features for classification accuracy reveals that just one feature, i.e., initiation time, is sufficient for above-chance performance (a maximum of 79% accuracy in object naming). Crucially, this feature is independent of task duration, which differs systematically across the three tasks we investigated. Overall, the best task classification performance was obtained with a set of seven features that included both spatial information (e.g., entropy of attention allocation) and temporal components (e.g., total fixation on objects) of the eye-movement record. This result confirms the task-dependent allocation of visual attention and extends previous work by showing that task classification is possible when tasks differ in the cognitive processes involved (purely visual tasks such as search vs. communicative tasks such as scene description).
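    Task classification from eye-movement features can be illustrated with a deliberately minimal nearest-centroid classifier. The study itself trained more sophisticated classifiers, and the feature values below are invented:

    ```python
    def train_centroids(samples):
        """samples: {task_label: [feature_vectors]}. Returns the mean
        feature vector (centroid) per task."""
        centroids = {}
        for label, vecs in samples.items():
            n = len(vecs)
            centroids[label] = [sum(col) / n for col in zip(*vecs)]
        return centroids

    def classify(centroids, vec):
        """Assign `vec` to the task whose centroid is nearest (squared
        Euclidean distance)."""
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        return min(centroids, key=lambda label: dist(centroids[label], vec))

    # Hypothetical [initiation_time_ms, mean_fixation_ms] feature vectors:
    data = {
        "search":      [[180.0, 190.0], [200.0, 210.0]],
        "description": [[420.0, 300.0], [440.0, 320.0]],
    }
    cents = train_centroids(data)
    print(classify(cents, [195.0, 205.0]))  # -> search
    ```

    The toy split mirrors the paper's finding that a single feature such as initiation time already separates tasks that differ in their cognitive demands.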

  15. The processing of visual and auditory information for reaching movements.

    Science.gov (United States)

    Glazebrook, Cheryl M; Welsh, Timothy N; Tremblay, Luc

    2016-09-01

    Presenting target and non-target information in different modalities influences target localization if the non-target is within the spatiotemporal limits of perceptual integration. When using auditory and visual stimuli, the influence of a visual non-target on auditory target localization is greater than the reverse. It is not known, however, whether or how such perceptual effects extend to goal-directed behaviours. To gain insight into how audio-visual stimuli are integrated for motor tasks, the kinematics of reaching movements towards visual or auditory targets with or without a non-target in the other modality were examined. When present, the simultaneously presented non-target could be spatially coincident, to the left, or to the right of the target. Results revealed that auditory non-targets did not influence reaching trajectories towards a visual target, whereas visual non-targets influenced trajectories towards an auditory target. Interestingly, the biases induced by visual non-targets were present early in the trajectory and persisted until movement end. Subsequent experimentation indicated that the magnitude of the biases was equivalent whether participants performed a perceptual or motor task, whereas variability was greater for the motor versus the perceptual tasks. We propose that visually induced trajectory biases were driven by the perceived mislocation of the auditory target, which in turn affected both the movement plan and subsequent control of the movement. Such findings provide further evidence of the dominant role visual information processing plays in encoding spatial locations as well as planning and executing reaching action, even when reaching towards auditory targets.

  16. Spatio-temporal flow maps for visualizing movement and contact patterns

    Directory of Open Access Journals (Sweden)

    Bing Ni

    2017-03-01

    Advanced telecom technologies and the massive number of smartphone users have yielded a huge amount of real-time data on people's telecommunication records, which we call telco big data. With telco data and domain knowledge of an urban city, we are now able to analyze the movement and contact patterns of humans at an unprecedented scale. Flow maps are widely used to display the movements of humans from a single source to multiple destinations by representing locations as nodes and movements as edges. However, they cannot visualize movement and contact data together. In addition, analysts often need to compare and examine the patterns side by side and perform various quantitative analyses. In this work, we propose a novel spatio-temporal flow map layout to visualize when and where people from different locations move into the same places and make contact. We also propose integrating the spatio-temporal flow maps into existing spatio-temporal visualization techniques to form a suite of techniques for visualizing movement and contact patterns. We report a potential application of the proposed techniques. The results show that our design and techniques properly unveil hidden information, while analysis can be achieved efficiently. Keywords: Spatio-temporal data, Flow map, Urban mobility
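    The raw ingredient of such flow maps, detecting when people from different locations end up in the same place at roughly the same time, can be sketched with simple spatio-temporal binning. The record format and bin sizes are assumptions, not the paper's pipeline:

    ```python
    from collections import defaultdict

    def find_contacts(records, cell=1.0, window=1):
        """records: (person_id, t, x, y) tuples. Two people are counted as
        'in contact' when their positions fall into the same spatial cell
        within `window` time steps of each other. Coarse binning is a crude
        stand-in for the localization available in telco data."""
        bins = defaultdict(set)
        for pid, t, x, y in records:
            bins[(int(x // cell), int(y // cell), t)].add(pid)
        contacts = set()
        for (cx, cy, t), people in bins.items():
            nearby = set(people)
            for dt in range(1, window + 1):
                nearby |= bins.get((cx, cy, t + dt), set())
            contacts |= {(a, b) for a in nearby for b in nearby if a < b}
        return contacts

    # alice (t=0) and bob (t=1) pass through the same cell; carol stays away.
    recs = [("alice", 0, 0.2, 0.3), ("bob", 1, 0.8, 0.1), ("carol", 0, 5.0, 5.0)]
    print(find_contacts(recs))  # -> {('alice', 'bob')}
    ```

    The resulting contact pairs, together with each person's origin, are exactly the node-and-edge data a spatio-temporal flow map lays out.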

  17. Memory-guided saccades show effect of a perceptual illusion whereas visually guided saccades do not.

    Science.gov (United States)

    Massendari, Delphine; Lisi, Matteo; Collins, Thérèse; Cavanagh, Patrick

    2018-01-01

    The double-drift stimulus (a drifting Gabor with orthogonal internal motion) generates a large discrepancy between its physical and perceived path. Surprisingly, saccades directed to the double-drift stimulus land along the physical, and not perceived, path (Lisi M, Cavanagh P. Curr Biol 25: 2535-2540, 2015). We asked whether memory-guided saccades exhibited the same dissociation from perception. Participants were asked to keep their gaze centered on a fixation dot while the double-drift stimulus moved back and forth on a linear path in the periphery. The offset of the fixation was the go signal to make a saccade to the target. In the visually guided saccade condition, the Gabor kept moving on its trajectory after the go signal but was removed once the saccade began. In the memory conditions, the Gabor disappeared before or at the same time as the go-signal (0- to 1,000-ms delay) and participants made a saccade to its remembered location. The results showed that visually guided saccades again targeted the physical rather than the perceived location. However, memory saccades, even with 0-ms delay, had landing positions shifted toward the perceived location. Our result shows that memory- and visually guided saccades are based on different spatial information. NEW & NOTEWORTHY We compared the effect of a perceptual illusion on two types of saccades, visually guided vs. memory-guided saccades, and found that whereas visually guided saccades were almost unaffected by the perceptual illusion, memory-guided saccades exhibited a strong effect of the illusion. Our result is the first evidence in the literature to show that visually and memory-guided saccades use different spatial representations.

  18. Visual attention and stability

    NARCIS (Netherlands)

    Mathot, Sebastiaan; Theeuwes, Jan

    2011-01-01

    In the present review, we address the relationship between attention and visual stability. Even though with each eye, head and body movement the retinal image changes dramatically, we perceive the world as stable and are able to perform visually guided actions. However, visual stability is not as

  19. Directional Tuning Curves, Elementary Movement Detectors, and the Estimation of the Direction of Visual Movement

    NARCIS (Netherlands)

    Hateren, J.H. van

    1990-01-01

    Both the insect brain and the vertebrate retina detect visual movement with neurons having broad, cosine-shaped directional tuning curves oriented in either of two perpendicular directions. This article shows that this arrangement can lead to isotropic estimates of the direction of movement: for any

  20. Effect of visual biofeedback of posterior tongue movement on articulation rehabilitation in dysarthria patients.

    Science.gov (United States)

    Yano, J; Shirahige, C; Oki, K; Oisaka, N; Kumakura, I; Tsubahara, A; Minagi, S

    2015-08-01

    Articulation is driven by various combinations of movements of the lip, tongue, soft palate, pharynx and larynx, where the tongue plays an especially important role. In patients with cerebrovascular disorder, lingual motor function is often affected, causing dysarthria. We aimed to evaluate the effect of visual biofeedback of posterior tongue movement on articulation rehabilitation in dysarthria patients with cerebrovascular disorder. Fifteen dysarthria patients (10 men and 5 women; mean age, 70.7 ± 10.3 years) agreed to participate in this study. A device for measuring the movement of the posterior part of the tongue was used for the visual biofeedback. Subjects were instructed to produce repetitive articulation of [ka] as fast and steadily as possible within a single breath, with or without visual biofeedback. For both the unaffected and affected sides, the range of ascending and descending movement of the posterior tongue with visual biofeedback was significantly larger than that without visual biofeedback. The coefficient of variation for these movements with visual biofeedback was significantly smaller than that without visual biofeedback. With visual biofeedback, the range of ascent exhibited a significant and strong correlation with that of descent for both the unaffected and affected sides. The results of this study revealed that the use of visual biofeedback leads to prompt and preferable change in the movement of the posterior part of the tongue. From the standpoint of pursuing necessary rehabilitation for patients with attention and memory disorders, visualization of tongue movement would be of marked clinical benefit. © 2015 John Wiley & Sons Ltd.

  1. Early, but not late visual distractors affect movement synchronization to a temporal-spatial visual cue

    Directory of Open Access Journals (Sweden)

    Ashley J Booth

    2015-06-01

    The ease of synchronising movements to a rhythmic cue is dependent on the modality of the cue presentation: timing accuracy is much higher when synchronising with discrete auditory rhythms than with an equivalent visual stimulus presented through flashes. However, timing accuracy is improved if the visual cue presents spatial as well as temporal information (e.g. a dot following an oscillatory trajectory). Similarly, when synchronising with an auditory target metronome in the presence of a second, distracting visual metronome, the distraction is stronger when the visual cue contains spatial-temporal information rather than temporal information only. The present study investigates individuals' ability to synchronise movements to a temporal-spatial visual cue in the presence of same-modality temporal-spatial distractors. Moreover, we investigated how increasing the number of distractor stimuli impacted on maintaining synchrony with the target cue. Participants made oscillatory vertical arm movements in time with a vertically oscillating white target dot centred on a large projection screen. The target dot was surrounded by 2, 8 or 14 distractor dots, which had an identical trajectory to the target but at a phase lead or lag of 0, 100 or 200 ms. We found participants' timing performance was only affected in the phase-lead conditions and when there were large numbers of distractors present (8 and 14). This asymmetry suggests participants still rely on salient events in the stimulus trajectory to synchronise movements. Subsequently, distractions occurring in the window of attention surrounding those events have the maximum impact on timing performance.

  2. Visual Ecology and the Development of Visually Guided Behavior in the Cuttlefish

    OpenAIRE

    Darmaillacq, Anne-Sophie; Mezrai, Nawel; O'Brien, Caitlin E.; Dickel, Ludovic

    2017-01-01

    Cuttlefish are highly visual animals, a fact reflected in the large size of their eyes and visual-processing centers of their brain. Adults detect their prey visually, navigate using visual cues such as landmarks or the e-vector of polarized light and display intense visual patterns during mating and agonistic encounters. Although much is known about the visual system in adult cuttlefish, few studies have investigated its development and that of visually-guided behavio...

  4. Tactile Gap Detection Deteriorates during Bimanual Symmetrical Movements under Mirror Visual Feedback.

    Directory of Open Access Journals (Sweden)

    Janet H Bultitude

    It has been suggested that incongruence between signals for motor intention and sensory input can cause pain and other sensory abnormalities. This claim is supported by reports that moving in an environment of induced sensorimotor conflict leads to elevated pain and sensory symptoms in those with certain painful conditions. Similar procedures can lead to reports of anomalous sensations in healthy volunteers too. In the present study, we used mirror visual feedback to investigate the effects of sensorimotor incongruence on responses to stimuli that arise from sources external to the body, in particular, touch. Incongruence between the sensory and motor signals for the right arm was manipulated by having the participants make symmetrical or asymmetrical movements while watching a reflection of their left arm in a parasagittal mirror, or the left hand surface of a similarly positioned opaque board. In contrast to our prediction, sensitivity to the presence of gaps in tactile stimulation of the right forearm was not reduced when participants made asymmetrical movements during mirror visual feedback, as compared to when they made symmetrical or asymmetrical movements with no visual feedback. Instead, sensitivity was reduced when participants made symmetrical movements during mirror visual feedback relative to the other three conditions. We suggest that small discrepancies between sensory and motor information, as they occur during mirror visual feedback with symmetrical movements, can impair tactile processing. In contrast, asymmetrical movements with mirror visual feedback may not impact tactile processing because the larger discrepancies between sensory and motor information may prevent the integration of these sources of information. These results contrast with previous reports of anomalous sensations during exposure to both low and high sensorimotor conflict, but are nevertheless in agreement with a forward model interpretation of perceptual

  5. Limitations of gaze transfer: without visual context, eye movements do not help to coordinate joint action, whereas mouse movements do.

    Science.gov (United States)

    Müller, Romy; Helmert, Jens R; Pannasch, Sebastian

    2014-10-01

    Remote cooperation can be improved by transferring the gaze of one participant to the other. However, based on a partner's gaze, an interpretation of his communicative intention can be difficult. Thus, gaze transfer has been inferior to mouse transfer in remote spatial referencing tasks where locations had to be pointed out explicitly. Given that eye movements serve as an indicator of visual attention, it remains to be investigated whether gaze and mouse transfer differentially affect the coordination of joint action when the situation demands an understanding of the partner's search strategies. In the present study, a gaze or mouse cursor was transferred from a searcher to an assistant in a hierarchical decision task. The assistant could use this cursor to guide his movement of a window which continuously opened up the display parts the searcher needed to find the right solution. In this context, we investigated how the ease of using gaze transfer depended on whether a link could be established between the partner's eye movements and the objects he was looking at. Therefore, in addition to the searcher's cursor, the assistant either saw the positions of these objects or only a grey background. When the objects were visible, performance and the number of spoken words were similar for gaze and mouse transfer. However, without them, gaze transfer resulted in longer solution times and more verbal effort as participants relied more strongly on speech to coordinate the window movement. Moreover, an analysis of the spatio-temporal coupling of the transmitted cursor and the window indicated that when no visual object information was available, assistants confidently followed the searcher's mouse but not his gaze cursor. Once again, the results highlight the importance of carefully considering task characteristics when applying gaze transfer in remote cooperation. Copyright © 2013 Elsevier B.V. All rights reserved.
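
    The spatio-temporal coupling analysis mentioned above pairs the transmitted cursor trace with the window trace over time. A minimal version of such an analysis is a lagged cross-correlation: find the delay at which one trace best predicts the other. The sketch below uses synthetic traces and invented function names; it illustrates the idea, not the authors' actual method.

```python
import numpy as np

def coupling_lag(leader, follower):
    """Lag (in samples) at which the follower trace best matches the leader,
    found via cross-correlation of the z-scored signals. A positive lag
    means the follower trails the leader."""
    a = (leader - leader.mean()) / leader.std()
    b = (follower - follower.mean()) / follower.std()
    xcorr = np.correlate(b, a, mode="full")
    lags = np.arange(-len(a) + 1, len(a))
    return int(lags[np.argmax(xcorr)])

# Synthetic traces: the assistant's window follows the searcher's cursor
# with a 5-sample delay.
rng = np.random.default_rng(42)
cursor = rng.standard_normal(240)
window = np.roll(cursor, 5)
print(coupling_lag(cursor, window))  # → 5
```

    In this framing, a tight, short-lag coupling corresponds to an assistant confidently following the cursor; a weak or long-lag coupling corresponds to hesitant following.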

  6. Cognitive Control Network Contributions to Memory-Guided Visual Attention.

    Science.gov (United States)

    Rosen, Maya L; Stern, Chantal E; Michalka, Samantha W; Devaney, Kathryn J; Somers, David C

    2016-05-01

    Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN, posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus, exhibited the greatest specificity for memory-guided attention. These 3 regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. Published by Oxford University Press 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  7. Eye movements during object recognition in visual agnosia.

    Science.gov (United States)

    Charles Leek, E; Patterson, Candy; Paul, Matthew A; Rafal, Robert; Cristino, Filipe

    2012-07-01

    This paper reports the first detailed study of eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within the object's bounding contour regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in the normal functional processes involved in integrating shape information across object structure during the visual perception of shape. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. MOVEMENT DATA: ALTERNATIVE OD MATRIX VISUALIZATIONS

    Directory of Open Access Journals (Sweden)

    M.-J. Kraak

    2017-01-01

    Full Text Available The content of an Origin and Destination matrix informs about the nature of movement and connectivity between locations. These could be point locations, like airports, or regions, like countries. The path of the flow can be known in detail (the path of an airplane) or only be abstract (migration between provinces). The type of movement or flow can be qualitative (different airlines flying between two airports) or quantitative (the number of migrants between two countries), or both. Traditionally, this type of data is visualized in flow maps. In these maps, flows are often represented by arrows whose colors and widths encode the flow between an origin and a destination.
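
    As a concrete illustration of the data structure behind such maps, the sketch below (a toy example with invented region names) turns an OD matrix into origin-to-destination flow records with arrow widths scaled to flow magnitude, following the classic flow-map encoding described above:

```python
# Toy OD matrix: od[i][j] = number of trips from region i to region j.
regions = ["A", "B", "C"]
od = [
    [0, 120, 30],
    [80,  0, 10],
    [5,  60,  0],
]

def flows(od, regions, max_width=10.0):
    """Turn an OD matrix into (origin, destination, count, arrow width)
    records, scaling arrow width linearly with flow magnitude."""
    peak = max(v for row in od for v in row)
    records = []
    for i, row in enumerate(od):
        for j, v in enumerate(row):
            if i != j and v > 0:
                records.append((regions[i], regions[j], v, max_width * v / peak))
    return sorted(records, key=lambda r: -r[2])

for origin, dest, count, width in flows(od, regions):
    print(f"{origin} -> {dest}: {count} trips, arrow width {width:.1f}")
```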

  9. Vibrating makes for better seeing: from the fly's micro eye movements to hyperacute visual sensors

    OpenAIRE

    Stéphane eViollet

    2014-01-01

    Active vision means that visual perception not only depends closely on the subject's own movements, but that these movements actually contribute to the visual perceptual processes. Vertebrates' and invertebrates' eye movements are probably part of an active visual process, but their exact role still remains to be determined. In this paper, studies on the retinal micro-movements occurring in the compound eye of the fly are reviewed. Several authors have located and identified the muscles invo...

  10. Eye Movements Reveal How Task Difficulty Moulds Visual Search

    Science.gov (United States)

    Young, Angela H.; Hulleman, Johan

    2013-01-01

    In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size…

  11. Eye Movements and Visual Search: A Bibliography,

    Science.gov (United States)

    1983-01-01

    duration and velocity. Neurology, 1975, 25, 1065-1070.

    Bard, C.; Fleury, M.; Carriere, L.; Halle, M. Analysis of Gymnastics Judges’ Visual...

    Nodine, C.F.; Carmody, D.P.; Herman, E. Eye Movements During Search for Artistically Embedded Targets. Bulletin of the Psychonomic Society, 1979, 13

  12. Hawk eyes I: diurnal raptors differ in visual fields and degree of eye movement.

    Directory of Open Access Journals (Sweden)

    Colleen T O'Rourke

    Full Text Available BACKGROUND: Different strategies to search and detect prey may place specific demands on sensory modalities. We studied visual field configuration, degree of eye movement, and orbit orientation in three diurnal raptors belonging to the Accipitridae and Falconidae families. METHODOLOGY/PRINCIPAL FINDINGS: We used an ophthalmoscopic reflex technique and an integrated 3D digitizer system. We found inter-specific variation in visual field configuration and degree of eye movement, but not in orbit orientation. Red-tailed Hawks have relatively small binocular areas (∼33°) and wide blind areas (∼82°), but an intermediate degree of eye movement (∼5°), which underscores the importance of lateral vision rather than binocular vision to scan for distant prey in open areas. Cooper's Hawks have relatively wide binocular fields (∼36°), small blind areas (∼60°), and a high degree of eye movement (∼8°), which may increase visual coverage and enhance prey detection in closed habitats. Additionally, we found that Cooper's Hawks can visually inspect the items held in the tip of the bill, which may facilitate food handling. American Kestrels have intermediate-sized binocular and lateral areas that may be used in prey detection at different distances through stereopsis and motion parallax, whereas the low degree of eye movement (∼1°) may help stabilize the image when hovering above prey before an attack. CONCLUSIONS: We conclude that: (a) there are between-species differences in visual field configuration in these diurnal raptors; (b) these differences are consistent with prey searching strategies and degree of visual obstruction in the environment (e.g., open and closed habitats); (c) variations in the degree of eye movement between species appear associated with foraging strategies; and (d) the size of the binocular and blind areas in hawks can vary substantially due to eye movements. Inter-specific variation in visual fields and eye movements can influence

  13. Hawk eyes I: diurnal raptors differ in visual fields and degree of eye movement.

    Science.gov (United States)

    O'Rourke, Colleen T; Hall, Margaret I; Pitlik, Todd; Fernández-Juricic, Esteban

    2010-09-22

    Different strategies to search and detect prey may place specific demands on sensory modalities. We studied visual field configuration, degree of eye movement, and orbit orientation in three diurnal raptors belonging to the Accipitridae and Falconidae families. We used an ophthalmoscopic reflex technique and an integrated 3D digitizer system. We found inter-specific variation in visual field configuration and degree of eye movement, but not in orbit orientation. Red-tailed Hawks have relatively small binocular areas (∼33°) and wide blind areas (∼82°), but intermediate degree of eye movement (∼5°), which underscores the importance of lateral vision rather than binocular vision to scan for distant prey in open areas. Cooper's Hawks have relatively wide binocular fields (∼36°), small blind areas (∼60°), and high degree of eye movement (∼8°), which may increase visual coverage and enhance prey detection in closed habitats. Additionally, we found that Cooper's Hawks can visually inspect the items held in the tip of the bill, which may facilitate food handling. American Kestrels have intermediate-sized binocular and lateral areas that may be used in prey detection at different distances through stereopsis and motion parallax, whereas the low degree of eye movement (∼1°) may help stabilize the image when hovering above prey before an attack. We conclude that: (a) there are between-species differences in visual field configuration in these diurnal raptors; (b) these differences are consistent with prey searching strategies and degree of visual obstruction in the environment (e.g., open and closed habitats); (c) variations in the degree of eye movement between species appear associated with foraging strategies; and (d) the size of the binocular and blind areas in hawks can vary substantially due to eye movements. Inter-specific variation in visual fields and eye movements can influence behavioral strategies to visually search for and track prey while

  14. Model-Based Synthesis of Visual Speech Movements from 3D Video

    Directory of Open Access Journals (Sweden)

    Edge JamesD

    2009-01-01

    Full Text Available We describe a method for the synthesis of visual speech movements using a hybrid unit selection/model-based approach. Speech lip movements are captured using a 3D stereo face capture system and split up into phonetic units. A dynamic parameterisation of this data is constructed which maintains the relationship between lip shapes and velocities; within this parameterisation, a model of how lips move is built and used in the animation of visual speech movements from speech audio input. The mapping from audio parameters to lip movements is disambiguated by selecting only the most similar stored phonetic units to the target utterance during synthesis. By combining properties of model-based synthesis (e.g., HMMs, neural nets) with unit selection, we improve the quality of our speech synthesis.
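
    The unit-selection step can be sketched as a nearest-neighbour lookup in parameter space: pick the stored phonetic unit whose parameters best match the target. The parameter vectors and unit names below are invented for illustration; the paper's dynamic parameterisation is far richer.

```python
import math

def select_unit(target_params, stored_units):
    """Unit-selection sketch: return the stored phonetic unit whose
    parameter vector is closest (Euclidean) to the target utterance's."""
    return min(stored_units, key=lambda u: math.dist(u["params"], target_params))

# Hypothetical stored units with 2-D parameter vectors.
units = [
    {"phone": "aa", "params": [0.9, 0.1]},
    {"phone": "iy", "params": [0.1, 0.8]},
    {"phone": "uw", "params": [0.2, 0.2]},
]
print(select_unit([0.15, 0.75], units)["phone"])  # → iy
```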

  15. Vibrating makes for better seeing: from the fly's micro eye movements to hyperacute visual sensors

    Directory of Open Access Journals (Sweden)

    Stéphane eViollet

    2014-04-01

    Full Text Available Active vision means that visual perception not only depends closely on the subject's own movements, but that these movements actually contribute to the visual perceptual processes. Vertebrates' and invertebrates' eye movements are probably part of an active visual process, but their exact role still remains to be determined. In this paper, studies on the retinal micro-movements occurring in the compound eye of the fly are reviewed. Several authors have located and identified the muscles involved in these small retinal movements. Others have established that these retinal micro-movements occur in walking and flying flies, but their exact functional role still remains to be determined. Many robotic studies have been performed in which animals' (flies' and spiders' miniature eye movements have been modelled, simulated and even implemented mechanically. Several robotic platforms have been endowed with artificial visual sensors performing periodic micro-scanning movements. Artificial eyes performing these active retinal micro-movements have some extremely interesting properties, such as hyperacuity and the ability to detect very slow movements (motion hyperacuity. The fundamental role of miniature eye movements still remains to be described in detail, but several studies on natural and artificial eyes have advanced considerably toward this goal.

  16. Top-down contextual knowledge guides visual attention in infancy.

    Science.gov (United States)

    Tummeltshammer, Kristen; Amso, Dima

    2017-10-26

    The visual context in which an object or face resides can provide useful top-down information for guiding attention orienting, object recognition, and visual search. Although infants have demonstrated sensitivity to covariation in spatial arrays, it is presently unclear whether they can use rapidly acquired contextual knowledge to guide attention during visual search. In this eye-tracking experiment, 6- and 10-month-old infants searched for a target face hidden among colorful distracter shapes. Targets appeared in Old or New visual contexts, depending on whether the visual search arrays (defined by the spatial configuration, shape and color of component items in the search display) were repeated or newly generated throughout the experiment. Targets in Old contexts appeared in the same location within the same configuration, such that context covaried with target location. Both 6- and 10-month-olds successfully distinguished between Old and New contexts, exhibiting faster search times, fewer looks at distracters, and more anticipation of targets when contexts repeated. This initial demonstration of contextual cueing effects in infants indicates that they can use top-down information to facilitate orienting during memory-guided visual search. © 2017 John Wiley & Sons Ltd.

  17. Calculation modelling of the RCCA movement through bowed FA guide tubes

    International Nuclear Information System (INIS)

    Razoumovsky, D.V.; Lihkachev, Yu.I.; Troyanov, V.M.

    2000-01-01

    Rod cluster control assembly (RCCA) movement through bowed fuel assembly guide tubes is considered. The equation of movement is presented along with its underlying assumptions, and special attention is paid to determining the mechanical friction force. The numerical algorithm is described and some results of parametric studies are presented. (author)

  18. Shade determination using camouflaged visual shade guides and an electronic spectrophotometer.

    Science.gov (United States)

    Kvalheim, S F; Øilo, M

    2014-03-01

    The aim of the present study was to compare a camouflaged visual shade guide with a spectrophotometer designed for restorative dentistry. Two operators performed analyses of 66 subjects. One central upper incisor was measured four times by each operator: twice with a camouflaged visual shade guide and twice with a spectrophotometer. Both methods had acceptable repeatability rates, but electronic shade determination showed higher repeatability. In general, the electronically determined shades were darker than the visually determined shades. The use of a camouflaged visual shade guide appears to be an adequate method for reducing operator bias.
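
    One simple way to quantify repeatability of repeated shade determinations is the fraction of subjects for whom the two measurements agree exactly. The sketch below uses made-up shade codes on four hypothetical subjects; the study's actual statistic may differ.

```python
def repeatability(first, second):
    """Fraction of paired repeated measurements that agree exactly."""
    assert len(first) == len(second)
    return sum(a == b for a, b in zip(first, second)) / len(first)

# Hypothetical repeated determinations (Vita-style shade codes).
visual = repeatability(["A2", "A3", "B1", "A2"], ["A2", "A3", "B2", "A2"])
electronic = repeatability(["A3", "A3", "B1", "A2"], ["A3", "A3", "B1", "A2"])
print(visual, electronic)  # → 0.75 1.0
```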

  19. Development of 4D jaw movement visualization system for dental diagnosis support

    Science.gov (United States)

    Aoki, Yoshimitsu; Terajima, Masahiko; Nakasima, Akihiko

    2004-10-01

    A person with an asymmetric morphology of the maxillofacial skeleton reportedly has asymmetric jaw function, and the risk of developing temporomandibular disorder is high. A comprehensive analysis of both morphology and function, covering maxillofacial and temporomandibular joint morphology, dental occlusion, and features of mandibular movement pathways, is essential. In this study, a 4D jaw movement visualization system was developed to support visual understanding of a patient's characteristic jaw movement, 3D maxillofacial skeletal structure, and the alignment of the upper and lower teeth. For this purpose, 3D reconstructed images of the cranial and mandibular bones obtained by computed tomography and morphological images of the teeth model measured with a non-contact 3D measuring device were integrated and animated using 6-DOF jaw movement data. The system was applied experimentally to a patient with jaw deformity, and its usability as a clinical diagnostic support system was verified.
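
    Animating a rigid model on 6-DOF movement data amounts to applying, at each time sample, a rotation plus a translation to the model vertices. A minimal sketch follows; the Euler-angle convention here is an assumption for illustration, not necessarily the one the system uses.

```python
import numpy as np

def rigid_transform(points, rx, ry, rz, t):
    """Apply one 6-DOF movement sample (Euler angles in radians, composed
    in Z*Y*X order, plus a translation vector) to model vertices (N x 3)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return points @ (Rz @ Ry @ Rx).T + np.asarray(t)

# A pure 2 mm downward translation (mouth-opening onset) keeps shape intact.
verts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
moved = rigid_transform(verts, 0.0, 0.0, 0.0, [0.0, 0.0, -2.0])
print(moved)
```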

  20. Invertebrate neurobiology: visual direction of arm movements in an octopus.

    Science.gov (United States)

    Niven, Jeremy E

    2011-03-22

    An operant task in which octopuses learn to locate food by a visual cue in a three-choice maze shows that they are capable of integrating visual and mechanosensory information to direct their arm movements to a goal. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Stimulation of the substantia nigra influences the specification of memory-guided saccades

    Science.gov (United States)

    Mahamed, Safraaz; Garrison, Tiffany J.; Shires, Joel

    2013-01-01

    In the absence of sensory information, we rely on past experience or memories to guide our actions. Because previous experimental and clinical reports implicate basal ganglia nuclei in the generation of movement in the absence of sensory stimuli, we ask here whether one output nucleus of the basal ganglia, the substantia nigra pars reticulata (nigra), influences the specification of an eye movement in the absence of sensory information to guide the movement. We manipulated the level of activity of neurons in the nigra by introducing electrical stimulation to the nigra at different time intervals while monkeys made saccades to different locations in two conditions: one in which the target location remained visible and a second in which the target location appeared only briefly, requiring information stored in memory to specify the movement. Electrical manipulation of the nigra occurring during the delay period of the task, when information about the target was maintained in memory, altered the direction and the occurrence of subsequent saccades. Stimulation during other intervals of the memory task or during the delay period of the visually guided saccade task had less effect on eye movements. On stimulated trials, and only when the visual stimulus was absent, monkeys occasionally (∼20% of the time) failed to make saccades. When monkeys made saccades in the absence of a visual stimulus, stimulation of the nigra resulted in a rotation of the endpoints ipsilaterally (∼2°) and increased the reaction time of contralaterally directed saccades. When the visual stimulus was present, stimulation of the nigra resulted in no significant rotation and decreased the reaction time of contralaterally directed saccades slightly. Based on these measurements, stimulation during the delay period of the memory-guided saccade task influenced the metrics of saccades much more than did stimulation during the same period of the visually guided saccade task. Because these effects

  2. Biometric recognition via texture features of eye movement trajectories in a visual searching task.

    Science.gov (United States)

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei; Zhang, Chenggang

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction methods, and feature recognition methods have been proposed to improve the performance of eye movement biometric systems. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and of eye trackers' temporal and spatial resolution, remain the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. To demonstrate the benefit of using this visual searching task in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results, as expected. In addition, the biometric performance of these four feature extraction methods was compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer advantages in long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases.
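
    The equal error rate used above is the operating point at which the false accept rate equals the false reject rate. A sketch of its computation from match-score distributions follows; the score distributions here are simulated, not from the paper's data.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: sweep the decision threshold and return the point where the
    false accept rate (impostor scores accepted) crosses the false
    reject rate (genuine scores rejected)."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

# Simulated match scores: higher means more similar to the enrolled template.
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 500)   # same-person comparisons
impostor = rng.normal(0.4, 0.1, 500)  # different-person comparisons
print(round(equal_error_rate(genuine, impostor), 3))
```

    Well-separated score distributions yield a low EER; identical distributions yield 0.5 (chance).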

  3. Independence of Movement Preparation and Movement Initiation.

    Science.gov (United States)

    Haith, Adrian M; Pakpoor, Jina; Krakauer, John W

    2016-03-09

    Initiating a movement in response to a visual stimulus takes significantly longer than might be expected on the basis of neural transmission delays, but it is unclear why. In a visually guided reaching task, we forced human participants to move at lower-than-normal reaction times to test whether normal reaction times are strictly necessary for accurate movement. We found that participants were, in fact, capable of moving accurately ∼80 ms earlier than their reaction times would suggest. Reaction times thus include a seemingly unnecessary delay that accounts for approximately one-third of their duration. Close examination of participants' behavior in conventional reaction-time conditions revealed that they generated occasional, spontaneous errors in trials in which their reaction time was unusually short. The pattern of these errors could be well accounted for by a simple model in which the timing of movement initiation is independent of the timing of movement preparation. This independence provides an explanation for why reaction times are usually so sluggish: delaying the mean time of movement initiation relative to preparation reduces the risk that a movement will be initiated before it has been appropriately prepared. Our results suggest that preparation and initiation of movement are mechanistically independent and may have a distinct neural basis. The results also demonstrate that, even in strongly stimulus-driven tasks, presentation of a stimulus does not directly trigger a movement. Rather, the stimulus appears to trigger an internal decision whether to make a movement, reflecting a volitional rather than reactive mode of control. Copyright © 2016 the authors.
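
    The independence model can be captured in a toy Monte Carlo simulation: preparation time and initiation time are drawn independently, and an error occurs whenever initiation precedes preparation. All parameter values below are invented for illustration.

```python
import random

def error_rate(init_mean, prep_mean=200.0, sd=30.0, trials=20000, seed=1):
    """Fraction of trials in which the movement is initiated before it has
    been prepared, with initiation and preparation times (ms) modelled as
    independent Gaussian variables."""
    rng = random.Random(seed)
    errors = sum(
        rng.gauss(init_mean, sd) < rng.gauss(prep_mean, sd)
        for _ in range(trials)
    )
    return errors / trials

# Delaying mean initiation relative to preparation trades speed for accuracy,
# which is the model's explanation for "sluggish" reaction times.
for mean in (200, 240, 280):
    print(mean, round(error_rate(mean), 3))
```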

  4. Magnifying visual target information and the role of eye movements in motor sequence learning.

    Science.gov (United States)

    Massing, Matthias; Blandin, Yannick; Panzer, Stefan

    2016-01-01

    An experiment investigated the influence of eye movements on learning a simple motor sequence task when the visual display was magnified. The task was to reproduce a 1300 ms spatial-temporal pattern of elbow flexions and extensions. The spatial-temporal pattern was displayed in front of the participants. Participants were randomly assigned to four groups differing in eye movements (free to use their eyes/instructed to fixate) and visual display (small/magnified). All participants performed a pre-test, an acquisition phase, a delayed retention test, and a transfer test. The results indicated that participants in each practice condition increased their performance during acquisition. The participants who were permitted to use their eyes with the magnified visual display outperformed those who were instructed to fixate on the magnified visual display. When a small visual display was used, the instruction to fixate induced no performance decrements compared to participants who were permitted to use their eyes during acquisition. The findings demonstrated that a spatial-temporal pattern can be learned without eye movements, but being permitted to use eye movements facilitates response production when the visual angle is increased. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Visually induced eye movements in Wallenberg's syndrome

    International Nuclear Information System (INIS)

    Kanayama, R.; Nakamura, T.; Ohki, M.; Kimura, Y.; Koike, Y.; Kato, I.

    1991-01-01

    Eighteen patients with Wallenberg's syndrome were investigated concerning visually induced eye movements. All results were analysed quantitatively using a computer. In 16 of the 18 patients, OKN slow-phase velocities were impaired; in the remaining 2 patients they were normal. All patients showed reduced visual suppression of caloric nystagmus during the slow phase of nystagmus toward the lesion side, except 3 patients who showed normal visual suppression in both directions. CT scan failed to detect either the brainstem or the cerebellar lesions in any case, but MRI performed on the most recent cases demonstrated the infarctions clearly. These findings suggest that the infarctions are localized in the medulla in the patients of group A, but extend to the cerebellum as well as the medulla in patients of group B. (au)

  6. Combined Influence of Visual Scene and Body Tilt on Arm Pointing Movements: Gravity Matters!

    Science.gov (United States)

    Scotto Di Cesare, Cécile; Sarlegna, Fabrice R.; Bourdin, Christophe; Mestre, Daniel R.; Bringoux, Lionel

    2014-01-01

    Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., ‘combined’ tilts equal to the sum of ‘single’ tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues. PMID:24925371
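
    The competing additive models can be made concrete with a toy linear sketch. The gains below are invented; the point is only that body-centered and gravity-centered coding of the scene tilt make different predictions when body and scene tilt together.

```python
# Hypothetical error gains (degrees of pointing error per degree of tilt).
K_BODY, K_SCENE = -0.3, 0.2

def predicted_error(body_tilt, scene_tilt_in_space, frame):
    """'Combined' error = sum of 'single'-tilt errors, with the scene tilt
    expressed either relative to the body or relative to gravity."""
    if frame == "body":
        scene_tilt = scene_tilt_in_space - body_tilt  # body-centered coding
    else:
        scene_tilt = scene_tilt_in_space              # gravity-centered coding
    return K_BODY * body_tilt + K_SCENE * scene_tilt

# With body and scene tilted together by 10 degrees, the frames disagree:
print(round(predicted_error(10, 10, "body"), 1))     # scene coded as upright
print(round(predicted_error(10, 10, "gravity"), 1))  # scene still tilted in space
```

    Distinguishing which summation fits the observed combined-tilt errors is precisely the model comparison the study performs, and it favoured the gravity-centered version.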

  7. A dual visual-local feedback model of the vergence eye movement system

    NARCIS (Netherlands)

    Erkelens, C.J.

    2011-01-01

    Pure vergence movements are the eye movements that we make when we change our binocular fixation between targets differing in distance but not in direction relative to the head. Pure vergence is slow and controlled by visual feedback. Saccades are the rapid eye movements that we make between targets

  8. Acting without seeing: Eye movements reveal visual processing without awareness

    OpenAIRE

    Spering, Miriam; Carrasco, Marisa

    2015-01-01

    Visual perception and eye movements are considered to be tightly linked. Diverse fields, ranging from developmental psychology to computer science, utilize eye tracking to measure visual perception. However, this prevailing view has been challenged by recent behavioral studies. We review converging evidence revealing dissociations between the contents of perceptual awareness and different types of eye movements. Such dissociations reveal situations in which eye movements are sensitive to part...

  9. A visual analytics design for studying rhythm patterns from human daily movement data

    Directory of Open Access Journals (Sweden)

    Wei Zeng

    2017-06-01

    Full Text Available Humans' daily movements exhibit high regularity in a space–time context, typically forming circadian rhythms. Understanding the rhythms of human daily movements is of high interest to a variety of parties, from urban planners and transportation analysts to business strategists. In this paper, we present an interactive visual analytics design for understanding and utilizing data collected from tracking human movements. The resulting system identifies and visually presents frequent human movement rhythms to support interactive exploration and analysis of the data over space and time. Case studies using real-world human movement data, including massive urban public transportation data in Singapore and the MIT reality mining dataset, and interviews with transportation researchers were conducted to demonstrate the effectiveness and usefulness of our system.
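
    A minimal building block of such rhythm analysis is binning movement records by hour of day. The sketch below, using made-up timestamps, computes a 24-bin daily rhythm profile:

```python
from collections import Counter
from datetime import datetime

def hourly_rhythm(timestamps):
    """Count movement records per hour of day: the simplest circadian
    summary of a movement dataset (timestamps as ISO 8601 strings)."""
    counts = Counter(datetime.fromisoformat(t).hour for t in timestamps)
    return [counts.get(h, 0) for h in range(24)]

# Toy trip records: morning and evening commute peaks.
trips = ["2017-06-01T08:15", "2017-06-01T08:40", "2017-06-01T18:05",
         "2017-06-02T08:30", "2017-06-02T18:20"]
rhythm = hourly_rhythm(trips)
print(rhythm[8], rhythm[18])  # → 3 2
```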

  10. Peripheral vision benefits spatial learning by guiding eye movements.

    Science.gov (United States)

    Yamamoto, Naohide; Philbeck, John W

    2013-01-01

    The loss of peripheral vision impairs spatial learning and navigation. However, the mechanisms underlying these impairments remain poorly understood. One advantage of having peripheral vision is that objects in an environment are easily detected and readily foveated via eye movements. The present study examined this potential benefit of peripheral vision by investigating whether competent performance in spatial learning requires effective eye movements. In Experiment 1, participants learned room-sized spatial layouts with or without restriction on direct eye movements to objects. Eye movements were restricted by having participants view the objects through small apertures in front of their eyes. Results showed that impeding effective eye movements made subsequent retrieval of spatial memory slower and less accurate. The small apertures also occluded much of the environmental surroundings, but the importance of this kind of occlusion was ruled out in Experiment 2 by showing that participants exhibited intact learning of the same spatial layouts when luminescent objects were viewed in an otherwise dark room. Together, these findings suggest that one of the roles of peripheral vision in spatial learning is to guide eye movements, highlighting the importance of spatial information derived from eye movements for learning environmental layouts.

  11. GeoVisual Analytics for the Exploration of Complex Movement Patterns on Arterial Roads

    DEFF Research Database (Denmark)

    Kveladze, Irma; Agerholm, Niels

    2018-01-01

    Visualization of complex spatio-temporal traffic movements on the road network is a challenging task, since it requires simultaneous representation of vehicle measurement characteristics and traffic network regulation rules. Previously proposed visual representations addressed issues related… Arterial roads are important for the mobility and connectivity of modern society, but they also have traffic regulations that are not always followed by vulnerable road users. In order to understand complex movement behaviors between vehicle drivers and pedestrians on arterial roads, a GeoVisual Analytics approach was developed in dialog with traffic experts. The exploratory interactive tools have assisted experts in extracting unknown information about movement patterns from large traffic data at different levels of detail. The results of the analysis revealed detailed patterns of speed variations…

  12. Semantic guidance of eye movements in real-world scenes

    OpenAIRE

    Hwang, Alex D.; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-01-01

    The perception of objects in our visual world is influenced not only by low-level visual features such as shape and color, but also by high-level features such as meaning and the semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements…

  13. The influence of artificial scotomas on eye movements during visual search

    NARCIS (Netherlands)

    Cornelissen, FW; Bruin, KJ; Kooijman, AC

    Purpose. Fixation durations are normally adapted to the difficulty of the foveal analysis task. We examine to what extent artificial central and peripheral visual field defects interfere with this adaptation process. Methods. Subjects performed a visual search task while their eye movements were

  14. Eye Movement Correlates of Expertise in Visual Arts.

    Science.gov (United States)

    Francuz, Piotr; Zaniewski, Iwo; Augustynowicz, Paweł; Kopiś, Natalia; Jankowski, Tomasz

    2018-01-01

    The aim of this study was to search for oculomotor correlates of expertise in visual arts, in particular with regard to paintings. Achieving this goal was possible by gathering data on eye movements of two groups of participants: experts and non-experts in visual arts who viewed and appreciated the aesthetics of paintings. In particular, we were interested in whether visual arts experts more accurately recognize a balanced composition in one of the two paintings being compared simultaneously, and whether people who correctly recognize harmonious paintings are characterized by a different visual scanning strategy than those who do not recognize them. For the purposes of this study, 25 paintings with an almost ideal balanced composition have been chosen. Some of these paintings are masterpieces of the world cultural heritage, and some of them are unknown. Using Photoshop, the artist developed three additional versions of each of these paintings, differing from the original in the degree of destruction of its harmonious composition: slight, moderate, or significant. The task of the participants was to look at all versions of the same painting in pairs (including the original) and decide which of them looked more pleasing. The study involved 23 experts in art, students of art history, art education or the Academy of Fine Arts, and 19 non-experts, students in the social sciences and the humanities. The experimental manipulation of comparing pairs of paintings, whose composition is at different levels of harmony, has proved to be an effective tool for differentiating people because of their ability to distinguish paintings with balanced composition from an unbalanced one. It turned out that this ability only partly coincides with expertise understood as the effect of education in the field of visual arts. We also found that the eye movements of people who more accurately appreciated paintings with balanced composition differ from those who more liked their altered

  16. Semantic Enrichment of Movement Behavior with Foursquare--A Visual Analytics Approach.

    Science.gov (United States)

    Krueger, Robert; Thom, Dennis; Ertl, Thomas

    2015-08-01

    In recent years, many approaches have been developed that efficiently and effectively visualize movement data, e.g., by providing suitable aggregation strategies to reduce visual clutter. Analysts can use them to identify distinct movement patterns, such as trajectories with similar direction, form, length, and speed. However, less effort has been spent on finding the semantics behind movements, i.e., why somebody or something is moving. This can be of great value for different applications, such as product usage and consumer analysis, better understanding of urban dynamics, and improved situational awareness. Unfortunately, semantic information often gets lost when data is recorded. Thus, we suggest enriching trajectory data with POI information using social media services and show how semantic insights can be gained. Furthermore, we show how to handle semantic uncertainties in time and space, which result from noisy, imprecise, and missing data, by introducing a POI decision model in combination with highly interactive visualizations. Finally, we evaluate our approach with two case studies on a large electric scooter data set and test our model on data with known ground truth.
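The enrichment step this abstract describes, attaching nearby POI semantics to trajectory points, can be sketched as a toy nearest-neighbour match. The helper name and the plain distance threshold below are illustrative assumptions, not the authors' POI decision model, which additionally weighs uncertainty in time and space:

```python
import math

def nearest_poi(stop, pois, max_dist):
    """Toy enrichment: label a trajectory stop with the name of the closest
    POI within max_dist, or None if no POI is near enough.
    `stop` is an (x, y) tuple; each POI is {"name": ..., "loc": (x, y)}."""
    best = min(pois, key=lambda p: math.dist(stop, p["loc"]), default=None)
    if best is None or math.dist(stop, best["loc"]) > max_dist:
        return None
    return best["name"]
```

A real pipeline would score several candidate POIs against visit time and category priors rather than committing to the single nearest one.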

  17. Visual tuning and metrical perception of realistic point-light dance movements

    Science.gov (United States)

    Su, Yi-Huang

    2016-01-01

    Humans move to music spontaneously, and this sensorimotor coupling underlies musical rhythm perception. The present research proposed that, based on common action representation, different metrical levels as in auditory rhythms could emerge visually when observing structured dance movements. Participants watched a point-light figure performing basic steps of Swing dance cyclically in different tempi, whereby the trunk bounced vertically at every beat and the limbs moved laterally at every second beat, yielding two possible metrical periodicities. In Experiment 1, participants freely identified a tempo of the movement and tapped along. While some observers only tuned to the bounce and some only to the limbs, the majority tuned to one level or the other depending on the movement tempo, which was also associated with individuals’ preferred tempo. In Experiment 2, participants reproduced the tempo of leg movements by four regular taps, and showed a slower perceived leg tempo with than without the trunk bouncing simultaneously in the stimuli. This mirrors previous findings of an auditory ‘subdivision effect’, suggesting the leg movements were perceived as beat while the bounce as subdivisions. Together these results support visual metrical perception of dance movements, which may employ similar action-based mechanisms to those underpinning auditory rhythm perception. PMID:26947252

  19. Visually Guided Step Descent in Children with Williams Syndrome

    Science.gov (United States)

    Cowie, Dorothy; Braddick, Oliver; Atkinson, Janette

    2012-01-01

    Individuals with Williams syndrome (WS) have impairments in visuospatial tasks and in manual visuomotor control, consistent with parietal and cerebellar abnormalities. Here we examined whether individuals with WS also have difficulties in visually controlling whole-body movements. We investigated visual control of stepping down at a change of…

  20. Politicizing Precarity, Producing Visual Dialogues on Migration: Transnational Public Spaces in Social Movements

    Directory of Open Access Journals (Sweden)

    Nicole Doerr

    2010-05-01

    In a period characterized by weak public consent over European integration, the purpose of this article is to analyze images created by transnational activists who aim to politicize the social question and migrants' subjectivity in the European Union (EU). I will explore the content of posters and images produced by social movement activists for their local and joint European protest actions and shared on blogs and homepages. I suspect that the underexplored visual dimension of emerging transnational public spaces created by activists offers a promising field of analysis. My aim is to give an empirical example of how we can study potential "visual dialogues" in transnational public spaces created within social movements. An interesting case for visual analysis is the grassroots network of local activist groups that created a joint "EuroMayday" against precarity and mobilized protest parades across Europe. I will first discuss the relevance of "visual dialogues" in the EuroMayday protests from the perspective of discursive theories of democracy and social movement studies. Then I discuss activists' transnational sharing of visual images as a potentially innovative cultural practice aimed at politicizing and re-interpreting official imaginaries of citizenship, labor flexibility and free mobility in Europe. I also discuss the limits on emerging transnational "visual dialogues" posed by place-specific visual cultures. URN: urn:nbn:de:0114-fqs1002308

  1. Visual information transfer across eye movements in the monkey

    NARCIS (Netherlands)

    Khayat, Paul S.; Spekreijse, Henk; Roelfsema, Pieter R.

    2004-01-01

    During normal viewing, the eyes move from one location to another in order to sample the visual environment. Information acquired before the eye movement facilitates post-saccadic processing. This "preview effect" indicates that some information is maintained in transsaccadic memory and combined

  2. Influence of social presence on eye movements in visual search tasks.

    Science.gov (United States)

    Liu, Na; Yu, Ruifeng

    2017-12-01

    This study employed an eye-tracking technique to investigate the influence of social presence on eye movements in visual search tasks. A total of 20 male subjects performed visual search tasks in a 2 (target presence: present vs. absent) × 2 (task complexity: complex vs. simple) × 2 (social presence: alone vs. a human audience) within-subject experiment. Results indicated that the presence of an audience could evoke a social facilitation effect on response time in visual search tasks. Compared with working alone, the participants made fewer and shorter fixations, larger saccades and shorter scan path in simple search tasks and more and longer fixations, smaller saccades and longer scan path in complex search tasks when working with an audience. The saccade velocity and pupil diameter in the audience-present condition were larger than those in the working-alone condition. No significant change in target fixation number was observed between two social presence conditions. Practitioner Summary: This study employed an eye-tracking technique to examine the influence of social presence on eye movements in visual search tasks. Results clarified the variation mechanism and characteristics of oculomotor scanning induced by social presence in visual search.
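The oculomotor measures compared in this record (fixation counts and durations, saccade amplitude, scan path length) are standard aggregates over a fixation sequence. As one hedged illustration with a function name of our own choosing, scan path length is simply the summed Euclidean distance between successive fixation points:

```python
import math

def scan_path_length(fixations):
    """Sum of Euclidean distances between successive fixation points.
    `fixations` is an ordered sequence of (x, y) screen coordinates;
    a single fixation (no saccades) yields a length of 0."""
    return sum(math.dist(a, b) for a, b in zip(fixations, fixations[1:]))
```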

  3. Rehearsal in serial memory for visual-spatial information: evidence from eye movements.

    Science.gov (United States)

    Tremblay, Sébastien; Saint-Aubin, Jean; Jalbert, Annie

    2006-06-01

    It is well established that rote rehearsal plays a key role in serial memory for lists of verbal items. Although a great deal of research has informed us about the nature of verbal rehearsal, much less attention has been devoted to rehearsal in serial memory for visual-spatial information. By using the dot task--a visual-spatial analogue of the classical verbal serial recall task--with delayed recall, performance and eyetracking data were recorded in order to establish whether visual-spatial rehearsal could be evidenced by eye movement. The use of eye movement as a form of rehearsal is detectable (Experiment 1), and it seems to contribute to serial memory performance over and above rehearsal based on shifts of spatial attention (Experiments 1 and 2).

  4. Contextual Cueing: Implicit Learning and Memory of Visual Context Guides Spatial Attention.

    Science.gov (United States)

    Chun, Marvin M.; Jiang, Yuhong

    1998-01-01

    Six experiments involving a total of 112 college students demonstrate that a robust memory for visual context exists to guide spatial attention. Results show how implicit learning and memory of visual context can guide spatial attention toward task-relevant aspects of a scene. (SLD)

  5. Design and test of a Microsoft Kinect-based system for delivering adaptive visual feedback to stroke patients during training of upper limb movement.

    Science.gov (United States)

    Simonsen, Daniel; Popovic, Mirjana B; Spaich, Erika G; Andersen, Ole Kæseler

    2017-11-01

    The present paper describes the design and test of a low-cost Microsoft Kinect-based system for delivering adaptive visual feedback to stroke patients during the execution of an upper limb exercise. Eleven sub-acute stroke patients with varying degrees of upper limb function were recruited. Each subject participated in a control session (repeated twice) and a feedback session (repeated twice). In each session, the subjects were presented with a rectangular pattern displayed on a vertically mounted monitor embedded in the table in front of the patient. The subjects were asked to move a marker inside the rectangular pattern using their most affected hand. During the feedback session, the thickness of the rectangular pattern was changed according to the performance of the subject, and the color of the marker changed according to its position, thereby guiding the subject's movements. In the control session, the thickness of the rectangular pattern and the color of the marker did not change. The results showed that movement similarity and smoothness were higher in the feedback session than in the control session, while the duration of the movement was longer. The present study showed that adaptive visual feedback delivered by use of the Kinect sensor can increase the similarity and smoothness of upper limb movement in stroke patients.

  6. Visual feedback of tongue movement for novel speech sound learning

    Directory of Open Access Journals (Sweden)

    William F Katz

    2015-11-01

    Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one's own speech articulation processes during speech training. The current study investigated whether real-time visual feedback for tongue movement can improve a speaker's learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ̠/, a voiced coronal palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers' productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing.

  7. Training on Movement Figure-Ground Discrimination Remediates Low-Level Visual Timing Deficits in the Dorsal Stream, Improving High-Level Cognitive Functioning, Including Attention, Reading Fluency, and Working Memory

    Directory of Open Access Journals (Sweden)

    Teri Lawton

    2017-05-01

    The purpose of this study was to determine whether neurotraining to discriminate a moving test pattern relative to a stationary background (figure-ground discrimination) improves vision and cognitive functioning in dyslexics, as well as in typically-developing normal students. We predict that improving the speed and sensitivity of figure-ground movement discrimination (PATH to Reading neurotraining) acts to remediate visual timing deficits in the dorsal stream, thereby improving processing speed, reading fluency, and the executive control functions of attention and working memory in both dyslexic and normal students who had PATH neurotraining more than in those students who had no neurotraining. This prediction was evaluated by measuring whether dyslexic and normal students improved on standardized tests of cognitive skills following neurotraining exercises more than following computer-based guided reading (Raz-Kids (RK)). The neurotraining used in this study was visually-based training designed to improve magnocellular function at both low and high levels in the dorsal stream: the input to the executive control networks coding working memory and attention. This approach represents a paradigm shift from the phonologically-based treatment for dyslexia, which concentrates on high-level speech and reading areas. This randomized controlled-validation study was conducted by training entire second and third grade classrooms (42 students) for 30 min twice a week before guided reading. Standardized tests were administered at the beginning and end of 12 weeks of intervention training to evaluate improvements in academic skills. Only movement-discrimination training remediated both low-level visual timing deficits and high-level cognitive functioning, including selective and sustained attention, reading fluency, and working memory, for both dyslexic and normal students. Remediating visual timing deficits in the dorsal stream revealed the causal role of visual movement…

  10. Contribution of execution noise to arm movement variability in three-dimensional space.

    Science.gov (United States)

    Apker, Gregory A; Buneo, Christopher A

    2012-01-01

    Reaching movements are subject to noise associated with planning and execution, but precisely how these noise sources interact to determine patterns of endpoint variability in three-dimensional space is not well understood. For frontal plane movements, variability is largest along the depth axis (the axis along which visual planning noise is greatest), with execution noise contributing to this variability along the movement direction. Here we tested whether these noise sources interact in a similar way for movements directed in depth. Subjects performed sequences of two movements from a single starting position to targets that were either both contained within a frontal plane ("frontal sequences") or where the first was within the frontal plane and the second was directed in depth ("depth sequences"). For both sequence types, movements were performed with or without visual feedback of the hand. When visual feedback was available, endpoint distributions for frontal and depth sequences were generally anisotropic, with the principal axes of variability being strongly aligned with the depth axis. Without visual feedback, endpoint distributions for frontal sequences were relatively isotropic and movement direction dependent, while those for depth sequences were similar to those with visual feedback. Overall, the results suggest that in the presence of visual feedback, endpoint variability is dominated by uncertainty associated with planning and updating visually guided movements. In addition, the results suggest that without visual feedback, increased uncertainty in hand position estimation effectively unmasks the effect of execution-related noise, resulting in patterns of endpoint variability that are highly movement direction dependent.
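The "principal axes of variability" referred to above are typically obtained from an eigen-decomposition of the endpoint covariance matrix. A minimal 2-D sketch (our own illustration, not the authors' analysis code) uses the closed-form eigenvalues of the 2x2 sample covariance:

```python
def principal_axes_2d(points):
    """Closed-form eigenvalues of the 2x2 sample covariance of endpoint
    scatter: the variances along the major and minor axes of the endpoint
    distribution. A large lambda_max / lambda_min ratio indicates
    anisotropic (elongated) endpoint variability."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in points) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in points) / (n - 1)
    half_trace = (sxx + syy) / 2
    disc = (half_trace ** 2 - (sxx * syy - sxy ** 2)) ** 0.5
    return half_trace + disc, half_trace - disc  # (lambda_max, lambda_min)
```

In three dimensions, as in this study, the same idea applies to the 3x3 covariance matrix, whose leading eigenvector here aligned with the depth axis.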

  11. Subconscious visual cues during movement execution allow correct online choice reactions.

    Directory of Open Access Journals (Sweden)

    Christian Leukel

    Part of the sensory information is processed by our central nervous system without conscious perception. Subconscious processing has been shown to be capable of triggering motor reactions. In the present study, we asked whether visual information that is not consciously perceived could influence decision-making in a choice reaction task. Ten healthy subjects (28 ± 5 years) executed two different experimental protocols. In the Motor reaction protocol, a visual target cue was shown on a computer screen. Depending on the displayed cue, subjects had to either complete a reaching movement (go-condition) or abort the movement (stop-condition). The cue was presented with different display durations (20-160 ms). In the second, Verbalization protocol, subjects verbalized what they experienced on the screen. Again, the cue was presented with different display durations. This second protocol tested for conscious perception of the visual cue. The results of this study show that subjects achieved significantly more correct responses in the Motor reaction protocol than in the Verbalization protocol. This difference was only observed at the very short display durations of the visual cue. Since correct responses in the Verbalization protocol required conscious perception of the visual information, our findings imply that the subjects performed correct motor responses to visual cues of which they were not conscious. It is therefore concluded that humans may reach decisions based on subconscious visual information in a choice reaction task.

  12. Knowledge scaffolding visualizations: A guiding framework

    Directory of Open Access Journals (Sweden)

    Elitsa Alexander

    2015-06-01

    In this paper we provide a guiding framework for understanding and selecting visual representations in knowledge management (KM) practice. We build on an interdisciplinary analogy between two connotations of the notion of "scaffolding": physical scaffolding from an architectural-engineering perspective and scaffolding of the "everyday knowing in practice" from a KM perspective. We classify visual structures for knowledge communication in teams into four types of scaffolds: grounded (corresponding, e.g., to perspectives diagrams or dynamic facilitation diagrams), suspended (e.g., negotiation sketches, argument maps), panel (e.g., roadmaps or timelines), and reinforcing (e.g., concept diagrams). The article concludes with a set of recommendations in the form of questions to ask whenever practitioners are choosing visualizations for specific KM needs. Our recommendations aim at providing a broad-brush framework to aid in choosing a suitable visualization template depending on the type of KM endeavour.

  13. Combined visual illusion effects on the perceived index of difficulty and movement outcomes in discrete and continuous Fitts' tapping.

    Science.gov (United States)

    Alphonsa, Sushma; Dai, Boyi; Benham-Deal, Tami; Zhu, Qin

    2016-01-01

    The speed-accuracy trade-off is a fundamental movement problem that has been extensively investigated. It has been established that the speed at which one can move to tap targets depends on how large the targets are and how far they are apart. These spatial properties of the targets can be quantified by the index of difficulty (ID). Two visual illusions are known to affect the perception of target size and movement amplitude: the Ebbinghaus illusion and Muller-Lyer illusion. We created visual images that combined these two visual illusions to manipulate the perceived ID, and then examined people's visual perception of the targets in illusory context as well as their performance in tapping those targets in both discrete and continuous manners. The findings revealed that the combined visual illusions affected the perceived ID similarly in both discrete and continuous judgment conditions. However, the movement outcomes were affected by the combined visual illusions according to the tapping mode. In discrete tapping, the combined visual illusions affected both movement accuracy and movement amplitude such that the effective ID resembled the perceived ID. In continuous tapping, none of the movement outcomes were affected by the combined visual illusions. Participants tapped the targets with higher speed and accuracy in all visual conditions. Based on these findings, we concluded that distinct visual-motor control mechanisms were responsible for execution of discrete and continuous Fitts' tapping. Although discrete tapping relies on allocentric information (object-centered) to plan for action, continuous tapping relies on egocentric information (self-centered) to control for action. The planning-control model for rapid aiming movements is supported.
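The index of difficulty that the authors manipulate is standard Fitts' law machinery; in the widely used Shannon formulation it depends only on movement amplitude A and target width W, so illusions that alter perceived W or A change the perceived ID without touching the physical layout. A minimal sketch:

```python
import math

def fitts_id(amplitude: float, width: float) -> float:
    """Index of difficulty (bits), Shannon formulation: ID = log2(A / W + 1).
    Larger movement amplitudes or narrower targets raise the difficulty."""
    return math.log2(amplitude / width + 1)
```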

  14. Short-Term Plasticity of the Visuomotor Map during Grasping Movements in Humans

    Science.gov (United States)

    Safstrom, Daniel; Edin, Benoni B.

    2005-01-01

    During visually guided grasping movements, visual information is transformed into motor commands. This transformation is known as the "visuomotor map." To investigate limitations in the short-term plasticity of the visuomotor map in normal humans, we studied the maximum grip aperture (MGA) during the reaching phase while subjects grasped objects…

  15. Tracking without perceiving: a dissociation between eye movements and motion perception.

    Science.gov (United States)

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-02-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.
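
    The pattern-motion prediction that the reflexive eye movements followed is simply the vector average of the two component drift vectors; for orthogonal gratings of equal strength it lies midway between the component directions. A toy illustration (the directions and speeds are arbitrary, not taken from the study):

```python
import numpy as np

def vector_average(directions_deg, speeds):
    """Vector average of component motions; returns (direction_deg, speed)."""
    theta = np.deg2rad(np.asarray(directions_deg, dtype=float))
    speeds = np.asarray(speeds, dtype=float)
    vx = np.mean(speeds * np.cos(theta))
    vy = np.mean(speeds * np.sin(theta))
    return float(np.rad2deg(np.arctan2(vy, vx)) % 360), float(np.hypot(vx, vy))

# Two orthogonal gratings drifting rightward (0 deg) and upward (90 deg)
# at equal speed:
direction, speed = vector_average([0.0, 90.0], [1.0, 1.0])
print(direction)  # 45.0 -- the pattern-motion direction the eyes tracked,
                  # while perception followed one 0 or 90 deg component
```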

  16. Updating visual memory across eye movements for ocular and arm motor control.

    Science.gov (United States)

    Thompson, Aidan A; Henriques, Denise Y P

    2008-11-01

    Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth-pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction of the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth-pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating.

  17. Saccadic Eye Movements Impose a Natural Bottleneck on Visual Short-Term Memory

    Science.gov (United States)

    Ohl, Sven; Rolfs, Martin

    2017-01-01

    Visual short-term memory (VSTM) is a crucial repository of information when events unfold rapidly before our eyes, yet it maintains only a fraction of the sensory information encoded by the visual system. Here, we tested the hypothesis that saccadic eye movements provide a natural bottleneck for the transition of fragile content in sensory memory…

  18. Basal ganglia neuronal activity during scanning eye movements in Parkinson's disease.

    Directory of Open Access Journals (Sweden)

    Tomáš Sieger

    Full Text Available The oculomotor role of the basal ganglia has been supported by extensive evidence, although their role in scanning eye movements is poorly understood. Nineteen Parkinson's disease patients, who underwent implantation of deep brain stimulation electrodes, were investigated with simultaneous intraoperative microelectrode recordings and single-channel electrooculography in a scanning eye movement task by viewing a series of colored pictures selected from the International Affective Picture System. Four patients additionally underwent a visually guided saccade task. Microelectrode recordings were analyzed selectively from the subthalamic nucleus, the substantia nigra pars reticulata, and the globus pallidus with the WaveClus program, which allowed for detection and sorting of individual neurons. The relationship between neuronal firing rate and eye movements was studied by cross-correlation analysis. Out of 183 neurons that were detected, 130 were found in the subthalamic nucleus, 30 in the substantia nigra and 23 in the globus pallidus. Twenty percent of the neurons in each of these structures showed eye movement-related activity. Neurons related to scanning eye movements were mostly unrelated to the visually guided saccades. We conclude that a relatively large number of basal ganglia neurons are involved in eye motion control. Surprisingly, neurons related to scanning eye movements differed from neurons activated during saccades, suggesting functional specialization and segregation of the two systems for eye movement control.

  19. Basal ganglia neuronal activity during scanning eye movements in Parkinson's disease.

    Science.gov (United States)

    Sieger, Tomáš; Bonnet, Cecilia; Serranová, Tereza; Wild, Jiří; Novák, Daniel; Růžička, Filip; Urgošík, Dušan; Růžička, Evžen; Gaymard, Bertrand; Jech, Robert

    2013-01-01

    The oculomotor role of the basal ganglia has been supported by extensive evidence, although their role in scanning eye movements is poorly understood. Nineteen Parkinson's disease patients, who underwent implantation of deep brain stimulation electrodes, were investigated with simultaneous intraoperative microelectrode recordings and single-channel electrooculography in a scanning eye movement task by viewing a series of colored pictures selected from the International Affective Picture System. Four patients additionally underwent a visually guided saccade task. Microelectrode recordings were analyzed selectively from the subthalamic nucleus, the substantia nigra pars reticulata, and the globus pallidus with the WaveClus program, which allowed for detection and sorting of individual neurons. The relationship between neuronal firing rate and eye movements was studied by cross-correlation analysis. Out of 183 neurons that were detected, 130 were found in the subthalamic nucleus, 30 in the substantia nigra and 23 in the globus pallidus. Twenty percent of the neurons in each of these structures showed eye movement-related activity. Neurons related to scanning eye movements were mostly unrelated to the visually guided saccades. We conclude that a relatively large number of basal ganglia neurons are involved in eye motion control. Surprisingly, neurons related to scanning eye movements differed from neurons activated during saccades, suggesting functional specialization and segregation of the two systems for eye movement control.
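
    The cross-correlation analysis named in both versions of this record can be sketched generically: bin the sorted unit's spikes into a rate signal, mark eye-movement onsets from the electrooculogram, and correlate the two at signed lags. The code below is an assumption-laden stand-in (synthetic signals, arbitrary bin width), not the study's pipeline:

```python
import numpy as np

def rate_event_xcorr(spike_rate, event_train, max_lag):
    """Normalized cross-correlation between a binned firing-rate signal and
    a binary eye-movement-onset train, for lags -max_lag..+max_lag bins.
    A positive peak lag means the rate change follows the eye movement."""
    r = spike_rate - spike_rate.mean()
    e = event_train - event_train.mean()
    full = np.correlate(r, e, mode="full")        # lags -(N-1) .. +(N-1)
    full = full / np.sqrt(np.sum(r**2) * np.sum(e**2))
    mid = len(r) - 1                              # index of zero lag
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, full[mid - max_lag: mid + max_lag + 1]

# Synthetic unit whose rate rises two bins after each eye-movement onset:
events = np.zeros(500)
events[[50, 150, 300, 400]] = 1.0
rate = 5.0 * np.roll(events, 2)
lags, corr = rate_event_xcorr(rate, events, max_lag=10)
print(lags[np.argmax(corr)])  # 2 -- the peak recovers the imposed lag
```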

  20. Semantic guidance of eye movements in real-world scenes.

    Science.gov (United States)

    Hwang, Alex D; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-05-25

    The perception of objects in our visual world is influenced not only by low-level visual features such as shape and color, but also by high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying latent semantic analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects' gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects' eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control.
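
    The semantic saliency maps described here reduce to cosine similarities between LSA vectors of object labels: each object in the scene is scored by how similar its label vector is to that of the currently fixated object or the search target. A toy sketch with made-up low-dimensional vectors (a real LSA space has hundreds of dimensions):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two LSA term vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 3-D "LSA" vectors for three labeled scene objects:
vectors = {
    "mug":     np.array([0.9, 0.1, 0.2]),
    "kettle":  np.array([0.8, 0.2, 0.3]),
    "bicycle": np.array([0.1, 0.9, 0.1]),
}

# Semantic saliency of the remaining objects relative to the fixated one:
fixated = "mug"
saliency = {name: cosine_similarity(vectors[fixated], vec)
            for name, vec in vectors.items() if name != fixated}
print(saliency["kettle"] > saliency["bicycle"])  # True: gaze transitions
# are predicted to favor the semantically closer object
```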

  1. The absence or temporal offset of visual feedback does not influence adaptation to novel movement dynamics.

    Science.gov (United States)

    McKenna, Erin; Bray, Laurence C Jayet; Zhou, Weiwei; Joiner, Wilsaan M

    2017-10-01

    Delays in transmitting and processing sensory information require correctly associating delayed feedback with issued motor commands for accurate error compensation. The flexibility of this alignment between motor signals and feedback has been demonstrated for movement recalibration to visual manipulations, but the alignment dependence for adapting movement dynamics is largely unknown. Here we examined the effect of visual feedback manipulations on force-field adaptation. Three subject groups used a manipulandum while experiencing a lag in the corresponding cursor motion (0, 75, or 150 ms). When the offset was applied at the start of the session (continuous condition), adaptation was not significantly different between groups. However, these similarities may be due to acclimation to the offset before motor adaptation. We tested additional subjects who experienced the same delays concurrent with the introduction of the perturbation (abrupt condition). In this case adaptation was statistically indistinguishable from the continuous condition, indicating that acclimation to feedback delay was not a factor. In addition, end-point errors were not significantly different across the delay or onset conditions, but end-point correction (e.g., deceleration duration) was influenced by the temporal offset. As an additional control, we tested a group of subjects who performed without visual feedback and found comparable movement adaptation results. These results suggest that visual feedback manipulation (absence or temporal misalignment) does not affect adaptation to novel dynamics, independent of both acclimation and perceptual awareness. These findings could have implications for modeling how the motor system adjusts to errors despite concurrent delays in sensory feedback information. NEW & NOTEWORTHY A temporal offset between movement and distorted visual feedback (e.g., visuomotor rotation) influences the subsequent motor recalibration, but the effects of this offset for

  2. Visually induced eye movements in Wallenberg's syndrome

    Energy Technology Data Exchange (ETDEWEB)

    Kanayama, R.; Nakamura, T.; Ohki, M.; Kimura, Y.; Koike, Y. (Dept. of Otolaryngology, Yamagata Univ. School of Medicine (Japan)); Kato, I. (Dept. of Otolaryngology, St. Marianna Univ. School of Medicine, Kawasaki (Japan))

    1991-01-01

    Eighteen patients with Wallenberg's syndrome were investigated concerning visually induced eye movements. All results were analysed quantitatively using a computer. In 16 out of 18 patients, OKN slow-phase velocities were impaired; in the remaining 2 patients they were normal. All patients showed reduced visual suppression of caloric nystagmus during the slow phase of nystagmus toward the lesion side, except 3 patients who showed normal visual suppression in both directions. CT scan failed to detect either the brainstem or the cerebellar lesions in any of the cases, but MRI performed on the most recent cases demonstrated the infarctions clearly. These findings suggest that the infarctions are localized in the medulla in the patients of group A, but extend to the cerebellum as well as to the medulla in patients of group B. (au).

  3. Left neglected, but only in far space: Spatial biases in healthy participants revealed in a visually-guided grasping task

    Directory of Open Access Journals (Sweden)

    Natalie de Bruin

    2014-01-01

    Full Text Available Hemispatial neglect is a common outcome of stroke that is characterised by the inability to orient towards, and attend to, stimuli in contralesional space. It is established that hemispatial neglect has a perceptual component; however, the presence and severity of motor impairments are controversial. Establishing the nature of space use and spatial biases during visually-guided actions amongst healthy individuals is critical to understanding the presence of visuomotor deficits in patients with neglect. Accordingly, three experiments were conducted to investigate the effect of object spatial location on patterns of grasping. Experiment 1 required right-handed participants to reach and grasp for blocks in order to construct 3D models. The blocks were scattered on a tabletop divided into equal size quadrants: left near, left far, right near, and right far. Identical sets of building blocks were available in each quadrant. Space use was dynamic, with participants initially grasping blocks from right near space and tending to ‘neglect’ left far space until the final stages of the task. Experiment 2 repeated the protocol with left-handed participants. Remarkably, left-handed participants displayed a similar pattern of space use to right-handed participants. In Experiment 3 eye movements were examined to investigate whether ‘neglect’ for grasping in left far reachable space had its origins in attentional biases. It was found that patterns of eye movements mirrored patterns of reach-to-grasp movements. We conclude that there are spatial biases during visually-guided grasping, specifically a tendency to neglect left far reachable space, and that this ‘neglect’ is attentional in origin. The results raise the possibility that visuomotor impairments reported among patients with right hemisphere lesions when working in contralesional space may result in part from this inherent tendency to ‘neglect’ left far space irrespective of the presence

  4. Visual straight-ahead preference in saccadic eye movements.

    Science.gov (United States)

    Camors, Damien; Trotter, Yves; Pouget, Pierre; Gilardeau, Sophie; Durand, Jean-Baptiste

    2016-03-15

    Ocular saccades bringing the gaze toward the straight-ahead direction (centripetal) exhibit higher dynamics than those steering the gaze away (centrifugal). This is generally explained by oculomotor determinants: centripetal saccades are more efficient because they pull the eyes back toward their primary orbital position. However, visual determinants might also be invoked: elements located straight-ahead trigger saccades more efficiently because they receive a privileged visual processing. Here, we addressed this issue by using both pro- and anti-saccade tasks in order to dissociate the centripetal/centrifugal directions of the saccades, from the straight-ahead/eccentric locations of the visual elements triggering those saccades. Twenty participants underwent alternating blocks of pro- and anti-saccades during which eye movements were recorded binocularly at 1 kHz. The results confirm that centripetal saccades are always executed faster than centrifugal ones, irrespective of whether the visual elements have straight-ahead or eccentric locations. However, by contrast, saccades triggered by elements located straight-ahead are consistently initiated more rapidly than those evoked by eccentric elements, irrespective of their centripetal or centrifugal direction. Importantly, this double dissociation reveals that the higher dynamics of centripetal pro-saccades stem from both oculomotor and visual determinants, which act respectively on the execution and initiation of ocular saccades.

  5. Visual Ecology and the Development of Visually Guided Behavior in the Cuttlefish

    Directory of Open Access Journals (Sweden)

    Anne-Sophie Darmaillacq

    2017-06-01

    Full Text Available Cuttlefish are highly visual animals, a fact reflected in the large size of their eyes and visual-processing centers of their brain. Adults detect their prey visually, navigate using visual cues such as landmarks or the e-vector of polarized light and display intense visual patterns during mating and agonistic encounters. Although much is known about the visual system in adult cuttlefish, few studies have investigated its development and that of visually-guided behavior in juveniles. This review summarizes the results of studies of visual development in embryos and young juveniles. The visual system is the last to develop, as in vertebrates, and is functional before hatching. Indeed, embryonic exposure to prey, shelters or complex background alters postembryonic behavior. Visual acuity and lateralization, and polarization sensitivity improve throughout the first months after hatching. The production of body patterning in juveniles is not the simple stimulus-response process commonly presented in the literature. Rather, it likely requires the complex integration of visual information, and is subject to inter-individual differences. Though the focus of this review is vision in cuttlefish, it is important to note that other senses, particularly sensitivity to vibration and to waterborne chemical signals, also play a role in behavior. Considering the multimodal sensory dimensions of natural stimuli and their integration and processing by individuals offers new and exciting avenues of future inquiry.

  6. Visual Ecology and the Development of Visually Guided Behavior in the Cuttlefish.

    Science.gov (United States)

    Darmaillacq, Anne-Sophie; Mezrai, Nawel; O'Brien, Caitlin E; Dickel, Ludovic

    2017-01-01

    Cuttlefish are highly visual animals, a fact reflected in the large size of their eyes and visual-processing centers of their brain. Adults detect their prey visually, navigate using visual cues such as landmarks or the e-vector of polarized light and display intense visual patterns during mating and agonistic encounters. Although much is known about the visual system in adult cuttlefish, few studies have investigated its development and that of visually-guided behavior in juveniles. This review summarizes the results of studies of visual development in embryos and young juveniles. The visual system is the last to develop, as in vertebrates, and is functional before hatching. Indeed, embryonic exposure to prey, shelters or complex background alters postembryonic behavior. Visual acuity and lateralization, and polarization sensitivity improve throughout the first months after hatching. The production of body patterning in juveniles is not the simple stimulus-response process commonly presented in the literature. Rather, it likely requires the complex integration of visual information, and is subject to inter-individual differences. Though the focus of this review is vision in cuttlefish, it is important to note that other senses, particularly sensitivity to vibration and to waterborne chemical signals, also play a role in behavior. Considering the multimodal sensory dimensions of natural stimuli and their integration and processing by individuals offers new and exciting avenues of future inquiry.

  7. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery.

    Science.gov (United States)

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10-12 Hz) over the sensorimotor cortices, thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In 12 out of 20 tasks, end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant's MI, might be helpful for accomplishing successful MI, and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation.
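
    The upper alpha band power decrease reported here is conventionally quantified as Pfurtscheller's relative band-power change, ERD% = (A - R)/R × 100, with R the band power in a reference interval and A the band power during the task; negative values indicate desynchronization. A sketch under assumed parameters (FFT periodogram, 10-12 Hz band), not the paper's exact pipeline:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean periodogram power in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0/fs)
    psd = np.abs(np.fft.rfft(signal))**2 / len(signal)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

def erd_percent(reference, task, fs, lo=10.0, hi=12.0):
    """Pfurtscheller's ERD% = (A - R) / R * 100; negative = desynchronization."""
    r = band_power(reference, fs, lo, hi)
    a = band_power(task, fs, lo, hi)
    return (a - r) / r * 100.0

# Synthetic check: an 11 Hz rhythm whose amplitude halves during motor imagery
fs = 256
t = np.arange(0, 2, 1.0/fs)
rest = np.sin(2*np.pi*11*t)
mi = 0.5*np.sin(2*np.pi*11*t)
print(erd_percent(rest, mi, fs))  # about -75 (power scales with amplitude squared)
```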

  8. Medical Visualization and Simulation for Customizable Surgical Guides

    NARCIS (Netherlands)

    Kroes, T.

    2015-01-01

    This thesis revolves around the development of medical visualization tools for the planning of CSG-based surgery. To this end, we performed an extensive computerassisted surgery (CAS) literature study, developed a novel optimization technique for customizable surgical guides (CSG), and introduce

  9. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery

    Directory of Open Access Journals (Sweden)

    Teresa eSollfrank

    2015-08-01

    Full Text Available A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during motor imagery. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronisation (ERD) of the upper alpha band (10-12 Hz) over the sensorimotor cortices, thereby potentially improving MI based BCI protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb motor imagery present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (2D vs. 3D). The largest upper alpha band power decrease was obtained during motor imagery after a 3-dimensional visualization. In 12 out of 20 tasks, end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D visualization modality group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during MI. Realistic visual feedback, consistent with the participant’s motor imagery, might be helpful for accomplishing successful motor imagery, and the use of such feedback may assist in making BCI a more natural interface for motor imagery based BCI rehabilitation.

  10. Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity.

    Science.gov (United States)

    Pouw, Wim T J L; Mavilidi, Myrto-Foteini; van Gog, Tamara; Paas, Fred

    2016-08-01

    Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One hypothesis is that gesturing is a means to spatially index mental simulations, thereby reducing the need for visually projecting the mental simulation onto the visual presentation of the task. If that hypothesis is correct, fewer eye movements should be made when participants gesture during problem solving than when they do not gesture. We therefore used mobile eye tracking to investigate the effect of co-thought gesturing and visual working memory capacity on eye movements during mental solving of the Tower of Hanoi problem. Results revealed that gesturing indeed reduced the number of eye movements (lower saccade counts), especially for participants with a relatively lower visual working memory capacity. Subsequent problem-solving performance was not affected by having (not) gestured during the mental solving phase. The current findings suggest that our understanding of gestures in problem solving could be improved by taking into account eye movements during gesturing.

  11. Developmental visual perception deficits with no indications of prosopagnosia in a child with abnormal eye movements.

    Science.gov (United States)

    Gilaie-Dotan, Sharon; Doron, Ravid

    2017-06-01

    Visual categories are associated with eccentricity biases in high-order visual cortex: Faces and reading with foveally-biased regions, while common objects and space with mid- and peripherally-biased regions. As face perception and reading are among the most challenging human visual skills, and are often regarded as the peak achievements of a distributed neural network supporting common objects perception, it is unclear why objects, which also rely on foveal vision to be processed, are associated with mid-peripheral rather than with a foveal bias. Here, we studied BN, a 9 y.o. boy who has normal basic-level vision, abnormal (limited) oculomotor pursuit and saccades, and shows developmental object and contour integration deficits but with no indication of prosopagnosia. Although we cannot infer causation from the data presented here, we suggest that normal pursuit and saccades could be critical for the development of contour integration and object perception. While faces and perhaps reading, when fixated upon, take up a small portion of central visual field and require only small eye movements to be properly processed, common objects typically prevail in mid-peripheral visual field and rely on longer-distance voluntary eye movements such as saccades to be brought to fixation. While retinal information feeds into early visual cortex in an orderly, eccentricity-based manner, we hypothesize that propagation of non-foveal information to mid and high-order visual cortex critically relies on circuitry involving eye movements. Limited or atypical eye movements, as in the case of BN, may hinder normal information flow to mid-eccentricity biased high-order visual cortex, adversely affecting its development and consequently inducing visual perceptual deficits predominantly for categories associated with these regions.

  12. Brain network involved in visual processing of movement stimuli used in upper limb robotic training: an fMRI study.

    Science.gov (United States)

    Nocchi, Federico; Gazzellini, Simone; Grisolia, Carmela; Petrarca, Maurizio; Cannatà, Vittorio; Cappa, Paolo; D'Alessio, Tommaso; Castelli, Enrico

    2012-07-24

    The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. In both conditions, activations were elicited in cerebral areas involved in visual

  13. Brain network involved in visual processing of movement stimuli used in upper limb robotic training: an fMRI study

    Directory of Open Access Journals (Sweden)

    Nocchi Federico

    2012-07-01

    Full Text Available Abstract Background The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. Methods A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. Results The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. Conclusions This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain’s ability to assimilate abstract object movements with human motor gestures. In both conditions

  14. Working Memory Representation Does Guide Visual Attention: Evidence from Eye Movements

    Institute of Scientific and Technical Information of China (English)

    张豹; 黄赛; 祁禄

    2013-01-01

    Whether working memory representations can guide visual attention to select matching stimuli in visual search is still controversial. By requiring participants to perform a visual search task while keeping some objects in working memory online, some researchers have observed stronger interference from a distractor when it was identical or related to the object held in memory. But other researchers did not observe such an attentional guidance effect even when using similar procedures. Olivers (2009) examined several possible influencing factors through a series of experiments and finally attributed the discrepancy to the search type, i.e., whether or not the search target varied across trials throughout the experiment. However, several factors may have confounded the results of the critical experiment in Olivers (2009). We therefore used the classic dual-task paradigm, combining a working memory task with a visual search task, together with eye movement tracking to reexamine the effect of search type on the top-down guidance of visual attention by working memory representations. Experiment 1 found significant guidance effects on early eye movement measures regardless of whether the search target varied, although the effect was stronger when the target was fixed. Experiment 2, after equating working memory load across the two search types, again found significant guidance effects under both search types, but the between-task difference observed in Experiment 1 disappeared. These results argue against a decisive influence of search type on the attentional guidance effect and suggest that working memory load may play an important role in it.

  15. Optimal sensorimotor control in eye movement sequences.

    Science.gov (United States)

    Munuera, Jérôme; Morel, Pierre; Duhamel, Jean-René; Deneve, Sophie

    2009-03-11

    Fast and accurate motor behavior requires combining noisy and delayed sensory information with knowledge of self-generated body motion; much evidence indicates that humans do this in a near-optimal manner during arm movements. However, it is unclear whether this principle applies to eye movements. We measured the relative contributions of visual sensory feedback and the motor efference copy (and/or proprioceptive feedback) when humans perform two saccades in rapid succession, the first saccade to a visual target and the second to a memorized target. Unbeknownst to the subject, we introduced an artificial motor error by randomly "jumping" the visual target during the first saccade. The correction of the memory-guided saccade allowed us to measure the relative contributions of visual feedback and efferent copy (and/or proprioceptive feedback) to motor-plan updating. In a control experiment, we extinguished the target during the saccade rather than changing its location to measure the relative contribution of motor noise and target localization error to saccade variability without any visual feedback. The motor noise contribution increased with saccade amplitude, but remained <30% of the total variability. Subjects adjusted the gain of their visual feedback for different saccade amplitudes as a function of its reliability. Even during trials where subjects performed a corrective saccade to compensate for the target-jump, the correction by the visual feedback, while stronger, remained far below 100%. In all conditions, an optimal controller predicted the visual feedback gain well, suggesting that humans combine optimally their efferent copy and sensory feedback when performing eye movements.
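The optimal-controller account in this record amounts to reliability-weighted cue combination: the gain on visual feedback should grow with the reliability of vision relative to the efference copy. A minimal 1-D Gaussian sketch of such minimum-variance combination (the function name and the independence assumptions are illustrative, not the authors' model):

```python
def combine_estimates(efference, sigma_eff, visual, sigma_vis):
    """Minimum-variance combination of two independent, noisy estimates of
    target position. The visual feedback gain is inversely proportional to
    the relative variance of the visual signal."""
    w_vis = sigma_eff**2 / (sigma_eff**2 + sigma_vis**2)
    estimate = efference + w_vis * (visual - efference)
    return estimate, w_vis
```

With equally reliable signals the gain is 0.5; as visual noise grows, the gain falls toward 0 and the efference copy dominates, matching the below-100% corrections the study reports.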

  16. Effect of Visual Angle on the Head Movement Caused by Changing Binocular Disparity

    Directory of Open Access Journals (Sweden)

    Toru Maekawa

    2011-10-01

    Full Text Available It has been shown that vertical binocular disparity has little or no effect on the perception of visual direction (Banks et al., 2002). On the other hand, our previous study reported that a continuous change of vertical disparity causes an involuntary sway of the head (Maekawa et al., 2009). We predict that the difference between those results is attributable to the dissociation between the processes for perception and action in the brain. The aim of this study was to investigate in more detail the conditions that influence the processing of disparity information. The present experiment varied the visual angle of stimulus presentation and measured the head movement and body sway caused by changing vertical disparity. Results showed that the head movement was greater as the visual angle of the stimulus was smaller. It has been reported that stimuli of only small visual angle affect depth perception (Erkelens et al., 1995). Thus, our result suggests that perception and action produced by vertical disparity are consistent as far as the effect of stimulus size is concerned.

  17. Visuomotor signals for reaching movements in the rostro-dorsal sector of the monkey thalamic reticular nucleus.

    Science.gov (United States)

    Saga, Yosuke; Nakayama, Yoshihisa; Inoue, Ken-Ichi; Yamagata, Tomoko; Hashimoto, Masashi; Tremblay, Léon; Takada, Masahiko; Hoshi, Eiji

    2017-05-01

    The thalamic reticular nucleus (TRN) collects inputs from the cerebral cortex and thalamus and, in turn, sends inhibitory outputs to the thalamic relay nuclei. This unique connectivity suggests that the TRN plays a pivotal role in regulating information flow through the thalamus. Here, we analyzed the roles of TRN neurons in visually guided reaching movements. We first used retrograde transneuronal labeling with rabies virus, and showed that the rostro-dorsal sector of the TRN (TRNrd) projected disynaptically to the ventral premotor cortex (PMv). In other experiments, we recorded neurons from the TRNrd or PMv while monkeys performed a visuomotor task. We found that neurons in the TRNrd and PMv showed visual-, set-, and movement-related activity modulation. These results indicate that the TRNrd, as well as the PMv, is involved in the reception of visual signals and in the preparation and execution of reaching movements. The fraction of neurons that were non-selective for the location of visual signals or the direction of reaching movements was greater in the TRNrd than in the PMv. Furthermore, the fraction of neurons whose activity increased from the baseline was greater in the TRNrd than in the PMv. The timing of activity modulation of visual-related and movement-related neurons was similar in TRNrd and PMv neurons. Overall, our data suggest that TRNrd neurons provide motor thalamic nuclei with inhibitory inputs that are predominantly devoid of spatial selectivity, and that these signals modulate how these nuclei engage in both sensory processing and motor output during visually guided reaching behavior. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  18. The Bauhaus movement and its influence in graphic design, visual communication and architecture in Greece

    Directory of Open Access Journals (Sweden)

    Konstantinos Kyriakopoulos

    2017-04-01

    Full Text Available This paper attempts to present the elements defining the philosophical approach, the characteristics and the style of the Bauhaus movement. More specifically, it presents the social background of the period during which this school was established and the vision of its main representatives. It analyzes the way it influenced graphic design, visual communication and architecture in Greece. A comparison has been made between typical Bauhaus works and works of contemporary graphics, aiming to find how the latter were influenced by the Bauhaus movement. In particular, it presents the projects (posters and buildings) and the artists who worked according to the Bauhaus rules. This is a small study of how the Bauhaus school has influenced modern graphic art and visual communication design in Greece until today. The conclusion of this research is that the Bauhaus movement, which was the first to combine art with technology to obtain clarity and functionality rather than mere aesthetics, still has a crucial effect on modern design, graphic arts and visual communication in Greece.

  20. Hand movement deviations in a visual search task with cross modal cuing

    Directory of Open Access Journals (Sweden)

    Hürol Aslan

    2007-01-01

    Full Text Available The purpose of this study is to demonstrate the cross-modal effects of an auditory organization on a visual search task and to investigate the influence of the level of detail in instructions describing or hinting at the associations between auditory stimuli and the possible locations of a visual target. In addition to measuring the participants’ reaction times, we paid special attention to tracking the hand movements toward the target. According to the results, the auditory stimuli unassociated with the target locations slightly, but significantly, increased the deviation of the hand movement from the path leading to the target location. The increase in the deviation depended on the degree of association between auditory stimuli and target locations, albeit not on the level of detail in the instructions about the task.
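Path deviation of the kind tracked here is commonly quantified as the maximum perpendicular distance of the hand trajectory from the straight start-to-target line. A minimal sketch of that measure (a hypothetical helper, not the authors' analysis code):

```python
import numpy as np

def max_path_deviation(path, start, target):
    """Maximum perpendicular distance of a 2-D movement path from the
    straight line joining the start and target positions."""
    p = np.asarray(path, dtype=float)
    a = np.asarray(start, dtype=float)
    b = np.asarray(target, dtype=float)
    direction = (b - a) / np.linalg.norm(b - a)
    rel = p - a
    along = rel @ direction                  # projection onto the straight path
    perp = rel - np.outer(along, direction)  # component off the straight path
    return float(np.linalg.norm(perp, axis=1).max())
```

For example, a path that arcs one unit above a horizontal start-to-target line has a deviation of 1.0.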

  1. On the barn owl's visual pre-attack behavior: 1. Structure of head movements and motion patterns

    NARCIS (Netherlands)

    Ohayon, S.; Willigen, R.F. van der; Wagner, H.; Katsman, I.; Rivlin, E.

    2006-01-01

    Barn owls exhibit a rich repertoire of head movements before taking off for prey capture. These movements occur mainly at light levels that allow for the visual detection of prey. To investigate these movements and their functional relevance, we filmed the pre-attack behavior of barn owls. Off-line

  2. Simple Smartphone-Based Guiding System for Visually Impaired People.

    Science.gov (United States)

    Lin, Bor-Shing; Lee, Cheng-Che; Chiang, Pei-Ying

    2017-06-13

    Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system that solves navigation problems for visually impaired people and achieves obstacle avoidance, enabling them to travel smoothly from a starting point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and a smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online and offline, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to the backend server for processing. The backend server uses the faster region-based convolutional neural network (Faster R-CNN) algorithm or the you-only-look-once (YOLO) algorithm to recognize multiple obstacles in each image, and it subsequently sends the results back to the smartphone. Obstacle recognition accuracy in this study reached 60%, which is sufficient for assisting visually impaired people in realizing the types and locations of obstacles around them.
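The final step of such a pipeline, turning detector output into guidance the user can act on, can be illustrated with a small mapping from detections to coarse location messages. A hypothetical sketch (the detection format, confidence threshold, and thirds-of-the-image partition are assumptions, not taken from the paper):

```python
def guidance_messages(detections, image_width, min_conf=0.5):
    """Map object detections of the form (label, confidence, (x1, y1, x2, y2))
    to coarse guidance strings: obstacle on the left / center / right,
    based on the horizontal center of each bounding box."""
    msgs = []
    for label, conf, (x1, y1, x2, y2) in detections:
        if conf < min_conf:          # drop low-confidence detections
            continue
        cx = (x1 + x2) / 2.0
        if cx < image_width / 3:
            side = "left"
        elif cx < 2 * image_width / 3:
            side = "center"
        else:
            side = "right"
        msgs.append(f"{label} ahead on the {side}")
    return msgs
```

On the phone, strings like these would typically be passed to a text-to-speech engine rather than displayed.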

  3. Simple Smartphone-Based Guiding System for Visually Impaired People

    Directory of Open Access Journals (Sweden)

    Bor-Shing Lin

    2017-06-01

    Full Text Available Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system for solving the navigation problems for visually impaired people and achieving obstacle avoidance to enable visually impaired people to travel smoothly from a beginning point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online mode and offline mode, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to the backend server to be processed. The backend server uses the faster region convolutional neural network algorithm or the you only look once algorithm to recognize multiple obstacles in every image, and it subsequently sends the results back to the smartphone. The results of obstacle recognition in this study reached 60%, which is sufficient for assisting visually impaired people in realizing the types and locations of obstacles around them.

  4. Coordination of eye and head components of movements evoked by stimulation of the paramedian pontine reticular formation

    Science.gov (United States)

    Barton, Ellen J.; Sparks, David L.

    2013-01-01

    Constant frequency microstimulation of the paramedian pontine reticular formation (PPRF) in head-restrained monkeys evokes a constant velocity eye movement. Since the PPRF receives significant projections from structures that control coordinated eye-head movements, we asked whether stimulation of the pontine reticular formation in the head-unrestrained animal generates a combined eye-head movement or only an eye movement. Microstimulation of most sites yielded a constant-velocity gaze shift executed as a coordinated eye-head movement, although eye-only movements were evoked from some sites. The eye and head contributions to the stimulation-evoked movements varied across stimulation sites and were drastically different from the lawful relationship observed for visually-guided gaze shifts. These results indicate that the microstimulation activated elements that issued movement commands to the extraocular and, for most sites, neck motoneurons. In addition, the stimulation-evoked changes in gaze were similar in the head-restrained and head-unrestrained conditions despite the assortment of eye and head contributions, suggesting that the vestibuloocular reflex (VOR) gain must be near unity during the coordinated eye-head movements evoked by stimulation of the PPRF. These findings contrast the attenuation of VOR gain associated with visually-guided gaze shifts and suggest that the vestibulo-ocular pathway processes volitional and PPRF stimulation-evoked gaze shifts differently. PMID:18458891

  5. Race Guides Attention in Visual Search.

    Directory of Open Access Journals (Sweden)

    Marte Otten

    Full Text Available It is known that faces are rapidly and even unconsciously categorized into social groups (black vs. white, male vs. female). Here, I test whether preferences for specific social groups guide attention, using a visual search paradigm. In Experiment 1 participants searched displays of neutral faces for an angry or frightened target face. Black target faces were detected more efficiently than white targets, indicating that black faces attracted more attention. Experiment 2 showed that attention differences between black and white faces were correlated with individual differences in automatic race preference. In Experiment 3, using happy target faces, the attentional preference for black over white faces was eliminated. Taken together, these results suggest that automatic preferences for social groups guide attention to individuals from negatively valenced groups, when people are searching for a negative emotion such as anger or fear.

  6. Application for TJ-II Signals Visualization: User's Guide

    International Nuclear Information System (INIS)

    Sanchez, E.; Portas, A. B.; Cremy, C.; Vega, J.

    2000-01-01

    This document describes the functionalities of the application developed by the Data Acquisition Group for TJ-II signal visualization. There are two versions of the application: the On-line version, used for signal visualization during TJ-II operation, and the Off-line version, used for signal visualization outside TJ-II operation. Both versions consist of a graphical user interface developed with X/Motif, in which most actions can be performed using the mouse buttons. The functionalities of both versions are described in this user's guide, beginning at the application start-up and explaining in detail all the options it provides and the actions that can be performed with each graphic control. (Author) 8 refs

  7. The eye movements of dyslexic children during reading and visual search: impact of the visual attention span.

    Science.gov (United States)

    Prado, Chloé; Dubois, Matthieu; Valdois, Sylviane

    2007-09-01

    The eye movements of 14 French dyslexic children with a visual attention (VA) span reduction and 14 normal readers were compared in two tasks: visual search and text reading. The dyslexic participants made a higher number of rightward fixations in reading only. They simultaneously processed the same low number of letters in both tasks, whereas normal readers processed far more letters in reading. Importantly, the children's VA span abilities related to the number of letters simultaneously processed in reading. The atypical eye movements of some dyslexic readers in reading thus appear to reflect difficulties in increasing their VA span according to the task demands.

  8. The Orientation of Visual Space from the Perspective of Hummingbirds

    Directory of Open Access Journals (Sweden)

    Luke P. Tyrrell

    2018-01-01

    Full Text Available Vision is a key component of hummingbird behavior. Hummingbirds hover in front of flowers, guide their bills into them for foraging, and maneuver backwards to undock from them. Capturing insects is also an important foraging strategy for most hummingbirds. However, little is known about the visual sensory specializations hummingbirds use to guide these two foraging strategies. We characterized the hummingbird visual field configuration, degree of eye movement, and orientation of the centers of acute vision. Hummingbirds had a relatively narrow binocular field (~30°) that extended above and behind their heads. Their blind area was also relatively narrow (~23°), which increased their visual coverage (about 98% of their celestial hemisphere). Additionally, eye movement amplitude was relatively low (~9°), so their ability to converge or diverge their eyes was limited. We confirmed that hummingbirds have two centers of acute vision: a fovea centralis, projecting laterally, and an area temporalis, projecting more frontally. This retinal configuration is similar to other predatory species, which may allow hummingbirds to enhance their success at preying on insects. However, there is no evidence that their temporal area could visualize the bill tip or that eye movements could compensate for this constraint. Therefore, guidance of precise bill position during the process of docking occurs via indirect cues or directly with low visual acuity despite having a temporal center of acute vision. The large visual coverage may favor the detection of predators and competitors even while docking into a flower. Overall, hummingbird visual configuration does not seem specialized for flower docking.

  9. The Orientation of Visual Space from the Perspective of Hummingbirds.

    Science.gov (United States)

    Tyrrell, Luke P; Goller, Benjamin; Moore, Bret A; Altshuler, Douglas L; Fernández-Juricic, Esteban

    2018-01-01

    Vision is a key component of hummingbird behavior. Hummingbirds hover in front of flowers, guide their bills into them for foraging, and maneuver backwards to undock from them. Capturing insects is also an important foraging strategy for most hummingbirds. However, little is known about the visual sensory specializations hummingbirds use to guide these two foraging strategies. We characterized the hummingbird visual field configuration, degree of eye movement, and orientation of the centers of acute vision. Hummingbirds had a relatively narrow binocular field (~30°) that extended above and behind their heads. Their blind area was also relatively narrow (~23°), which increased their visual coverage (about 98% of their celestial hemisphere). Additionally, eye movement amplitude was relatively low (~9°), so their ability to converge or diverge their eyes was limited. We confirmed that hummingbirds have two centers of acute vision: a fovea centralis , projecting laterally, and an area temporalis , projecting more frontally. This retinal configuration is similar to other predatory species, which may allow hummingbirds to enhance their success at preying on insects. However, there is no evidence that their temporal area could visualize the bill tip or that eye movements could compensate for this constraint. Therefore, guidance of precise bill position during the process of docking occurs via indirect cues or directly with low visual acuity despite having a temporal center of acute vision. The large visual coverage may favor the detection of predators and competitors even while docking into a flower. Overall, hummingbird visual configuration does not seem specialized for flower docking.

  10. Effects of kinesthetic versus visual imagery practice on two technical dance movements: a pilot study.

    Science.gov (United States)

    Girón, Elizabeth Coker; McIsaac, Tara; Nilsen, Dawn

    2012-03-01

    Motor imagery is a type of mental practice that involves imagining the body performing a movement in the absence of motor output. Dance training traditionally incorporates mental practice techniques, but quantitative effects of motor imagery on the performance of dance movements are largely unknown. This pilot study compared the effects of two different imagery modalities, external visual imagery and kinesthetic imagery, on pelvis and hip kinematics during two technical dance movements, plié and sauté. Each of three female dance students (mean age = 19.7 years, mean years of training = 10.7) was assigned to use a type of imagery practice: visual imagery, kinesthetic imagery, or no imagery. Effects of motor imagery on peak external hip rotation varied by both modality and task. Kinesthetic imagery increased peak external hip rotation for pliés, while visual imagery increased peak external hip rotation for sautés. Findings suggest that the success of motor imagery in improving performance may be task-specific. Dancers may benefit from matching imagery modality to technical tasks in order to improve alignment and thereby avoid chronic injury.

  11. Context-dependent adaptation of visually-guided arm movements and vestibular eye movements: role of the cerebellum

    Science.gov (United States)

    Lewis, Richard F.

    2003-01-01

    Accurate motor control requires adaptive processes that correct for gradual and rapid perturbations in the properties of the controlled object. The ability to quickly switch between different movement synergies using sensory cues, referred to as context-dependent adaptation, is a subject of considerable interest at present. The potential function of the cerebellum in context-dependent adaptation remains uncertain, but the data reviewed below suggest that it may play a fundamental role in this process.

  12. The cost of making an eye movement : A direct link between visual working memory and saccade execution

    NARCIS (Netherlands)

    Schut, Martijn J; Van der Stoep, Nathan; Postma, Albert; Van der Stigchel, Stefan

    2017-01-01

    To facilitate visual continuity across eye movements, the visual system must presaccadically acquire information about the future foveal image. Previous studies have indicated that visual working memory (VWM) affects saccade execution. However, the reverse relation, the effect of saccade execution

  13. Caulimoviridae Tubule-Guided Transport Is Dictated by Movement Protein Properties

    Science.gov (United States)

    Sánchez-Navarro, Jesús; Fajardo, Thor; Zicca, Stefania; Pallás, Vicente; Stavolone, Livia

    2010-01-01

    Plant viruses move through plasmodesmata (PD) either as nucleoprotein complexes (NPCs) or as tubule-guided encapsidated particles with the help of movement proteins (MPs). To explore how and why MPs specialize in one mechanism or the other, we tested the exchangeability of MPs encoded by DNA and RNA virus genomes by means of an engineered alfalfa mosaic virus (AMV) system. We show that Caulimoviridae (DNA genome virus) MPs are competent for RNA virus particle transport but are unable to mediate NPC movement, and we discuss this restriction in terms of the evolution of DNA virus MPs as a means of mediating DNA viral genome entry into the RNA-trafficking PD pathway. PMID:20130061

  14. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition

    NARCIS (Netherlands)

    Jesse, A.; McQueen, J.M.

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes

  15. A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements.

    Science.gov (United States)

    Mohsenzadeh, Yalda; Dash, Suryadeep; Crawford, J Douglas

    2016-01-01

    In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon, and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement; saccades and smooth pursuit. Our proposed model is a non-linear SSM and implemented through a recurrent radial-basis-function neural network in a dual Extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks.
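The core computation in gaze-centered spatial updating is a prediction step driven by the efference copy: after each eye movement the remembered retinal position of the target shifts opposite to the eye displacement, while noise in the efference copy makes positional uncertainty accumulate, consistent with the model's prediction that memory activity broadens around saccades. A 1-D linear-Gaussian sketch of one such step (an illustration of the principle, not the authors' dual-EKF radial-basis-function network):

```python
def predict_update(mean, var, eye_displacement, efference_noise_var):
    """One prediction step of a 1-D linear-Gaussian state-space model for a
    remembered, gaze-centered target position. The eye movement shifts the
    remembered position in the opposite direction (remapping), and
    efference-copy noise accumulates, so uncertainty grows with each step."""
    new_mean = mean - eye_displacement   # gaze-centered remapping
    new_var = var + efference_noise_var  # uncertainty accumulates
    return new_mean, new_var
```

Chaining this step across a saccade sequence reproduces the qualitative behavior described above: the estimate stays gaze-centered while its variance grows until new visual input resets it.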

  16. Increased central common drive to ankle plantar flexor and dorsiflexor muscles during visually guided gait

    DEFF Research Database (Denmark)

    Jensen, Peter; Jensen, Nicole Jacqueline; Terkildsen, Cecilie Ulbæk

    2018-01-01

    When we walk in a challenging environment, we use visual information to modify our gait and place our feet carefully on the ground. Here, we explored how central common drive to ankle muscles changes in relation to visually guided foot placement. Sixteen healthy adults aged 23 ± 5 years...

  17. Move faster, think later: Women who play action video games have quicker visually-guided responses with later onset visuomotor-related brain activity.

    Science.gov (United States)

    Gorbet, Diana J; Sergio, Lauren E

    2018-01-01

    A history of action video game (AVG) playing is associated with improvements in several visuospatial and attention-related skills and these improvements may be transferable to unrelated tasks. These facts make video games a potential medium for skill-training and rehabilitation. However, examinations of the neural correlates underlying these observations are almost non-existent in the visuomotor system. Further, the vast majority of studies on the effects of a history of AVG play have been done using almost exclusively male participants. Therefore, to begin to fill these gaps in the literature, we present findings from two experiments. In the first, we use functional MRI to examine brain activity in experienced, female AVG players during visually-guided reaching. In the second, we examine the kinematics of visually-guided reaching in this population. Imaging data demonstrate that relative to women who do not play, AVG players have less motor-related preparatory activity in the cuneus, middle occipital gyrus, and cerebellum. This decrease is correlated with estimates of time spent playing. Further, these correlations are strongest during the performance of a visuomotor mapping that spatially dissociates eye and arm movements. However, further examinations of the full time-course of visuomotor-related activity in the AVG players revealed that the decreased activity during motor preparation likely results from a later onset of activity in AVG players, which occurs closer to beginning motor execution relative to the non-playing group. Further, the data presented here suggest that this later onset of preparatory activity represents greater neural efficiency that is associated with faster visually-guided responses. PMID:29364891

  19. The human oculomotor response to simultaneous visual and physical movements at two different frequencies

    Science.gov (United States)

    Wall, C.; Assad, A.; Aharon, G.; Dimitri, P. S.; Harris, L. R.

    2001-01-01

In order to investigate interactions in the visual and vestibular systems' oculomotor response to linear movement, we developed a two-frequency stimulation technique. Thirteen subjects lay on their backs and were oscillated sinusoidally along their z-axes at between 0.31 and 0.81 Hz. During the oscillation, subjects viewed a large, high-contrast visual pattern oscillating in the same direction as the physical motion but at a different, non-harmonically related frequency. The evoked eye movements were measured by video-oculography and spectrally analysed. We found significant signal levels at the sum and difference frequencies, as well as at other frequencies not present in either stimulus. The emergence of new frequencies indicates non-linear processing, consistent with the agreement-detector system that we have previously proposed.
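The signature of non-linear processing described above can be illustrated with a short simulation: a multiplicative interaction between two sinusoids necessarily creates spectral energy at their sum and difference frequencies. The sketch below is illustrative only; the frequencies, sampling rate, and interaction term are assumed values, not the study's actual stimuli or analysis code.

```python
import numpy as np

def amplitude_spectrum(x, fs):
    """Single-sided amplitude spectrum of a real signal."""
    n = len(x)
    amps = np.abs(np.fft.rfft(x)) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, amps

# Two non-harmonically related stimulus frequencies (hypothetical values
# within the range used in the study).
f_vest, f_vis = 0.4, 0.7          # Hz: physical and visual oscillation
fs, dur = 64.0, 100.0             # sampling rate (Hz) and duration (s)
t = np.arange(0, dur, 1.0 / fs)

# A purely linear system would respond only at f_vest and f_vis.
# A multiplicative (non-linear) interaction term creates energy at the
# sum and difference frequencies: sin(a)*sin(b) = 0.5[cos(a-b) - cos(a+b)].
linear = np.sin(2 * np.pi * f_vest * t) + np.sin(2 * np.pi * f_vis * t)
interaction = 0.5 * np.sin(2 * np.pi * f_vest * t) * np.sin(2 * np.pi * f_vis * t)
response = linear + interaction

freqs, amps = amplitude_spectrum(response, fs)

def amp_at(f):
    """Amplitude at the spectral bin closest to frequency f."""
    return amps[np.argmin(np.abs(freqs - f))]

# Peaks appear at the sum (1.1 Hz) and difference (0.3 Hz) frequencies,
# which are present in neither stimulus.
print(amp_at(f_vis + f_vest), amp_at(f_vis - f_vest))
```

Spectral analysis of the recorded eye movements works the same way: energy at frequencies absent from both stimuli is direct evidence of a non-linear combination stage.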

  20. Eye movements as an index of pathologist visual expertise: a pilot study.

    Directory of Open Access Journals (Sweden)

    Tad T Brunyé

    Full Text Available A pilot study examined the extent to which eye movements occurring during interpretation of digitized breast biopsy whole slide images (WSI can distinguish novice interpreters from experts, informing assessments of competency progression during training and across the physician-learning continuum. A pathologist with fellowship training in breast pathology interpreted digital WSI of breast tissue and marked the region of highest diagnostic relevance (dROI. These same images were then evaluated using computer vision techniques to identify visually salient regions of interest (vROI without diagnostic relevance. A non-invasive eye tracking system recorded pathologists' (N = 7 visual behavior during image interpretation, and we measured differential viewing of vROIs versus dROIs according to their level of expertise. Pathologists with relatively low expertise in interpreting breast pathology were more likely to fixate on, and subsequently return to, diagnostically irrelevant vROIs relative to experts. Repeatedly fixating on the distracting vROI showed limited value in predicting diagnostic failure. These preliminary results suggest that eye movements occurring during digital slide interpretation can characterize expertise development by demonstrating differential attraction to diagnostically relevant versus visually distracting image regions. These results carry both theoretical implications and potential for monitoring and evaluating student progress and providing automated feedback and scanning guidance in educational settings.
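The core measure in this design, differential viewing of diagnostically relevant versus merely salient regions, reduces to classifying each fixation by the region of interest it lands in. The following sketch shows that classification step under assumed rectangular ROIs; the coordinates, ROI names, and fixation data are illustrative, not from the study.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    """Axis-aligned region of interest in screen pixel coordinates."""
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x, y):
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def roi_fixation_counts(fixations, rois):
    """Count fixations landing in each ROI.
    fixations: list of (x, y) gaze positions; rois: dict name -> Rect."""
    counts = {name: 0 for name in rois}
    for x, y in fixations:
        for name, rect in rois.items():
            if rect.contains(x, y):
                counts[name] += 1
    return counts

# Hypothetical dROI (diagnostically relevant) and vROI (visually salient
# but irrelevant) regions, with synthetic fixation sequences.
rois = {"dROI": Rect(100, 100, 300, 300), "vROI": Rect(600, 400, 800, 600)}
novice = [(150, 150), (650, 450), (700, 500), (120, 260), (620, 580)]
expert = [(150, 150), (200, 220), (110, 140), (290, 290), (650, 450)]

print(roi_fixation_counts(novice, rois))  # novice draws more vROI hits
print(roi_fixation_counts(expert, rois))
```

A fuller analysis would also track returns to the vROI (refixations), which the study used to probe whether distraction predicts diagnostic failure.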

  1. When vision guides movement: a functional imaging study of the monkey brain.

    Science.gov (United States)

    Gregoriou, Georgia G; Savaki, Helen E

    2003-07-01

    Goal-directed reaching requires a precise neural representation of the arm position and the target location. Parietal and frontal cortical areas rely on visual, somatosensory, and motor signals to guide the reaching arm to the desired position in space. To dissociate the regions processing these signals, we applied the quantitative [(14)C]-deoxyglucose method on monkeys reaching either in the light or in the dark. Nonvisual (somatosensory and memory-related) guidance of the arm, during reaching in the dark, induced activation of discrete regions in the parietal, premotor, and motor cortices. These included the dorsal part of the medial bank of the intraparietal sulcus, the ventral premotor area F4, the dorsal premotor area F2 below the superior precentral dimple, and the primary somatosensory and motor cortices. Additional parietal and premotor regions comprising the ventral intraparietal cortex, ventral premotor area F5, and the ventral part of dorsal premotor area F2 were activated by visual guidance of the arm during reaching in the light. This study provides evidence that different regions of the parieto-premotor circuit process the visual, somatosensory, and motor-memory-related signals which guide the moving arm.

  2. Visualization of bed material movement in a simulated fluidized bed heat exchanger by neutron radiography

    International Nuclear Information System (INIS)

    Umekawa, Hisashi; Ozawa, Mamoru; Takenaka, Nobuyuki; Matsubayashi, Masahito

    1999-01-01

The bulk movement of fluidized bed material was visualized by neutron radiography by introducing tracers into the bed material. The simulated fluidized bed consisted of aluminum plates, and the bed material was sand of 99.7% SiO2 (mean diameter: 0.218 mm, density: 2555 kg/m3). Both materials were almost transparent to neutrons, so the sand was colored by contaminating it with sand coated with CdSO4. Tracer particles of about 2 mm diameter were made of B4C bonded with vinyl resin. Each tracer was about ten times as large as a particle of the bed material, but its traceability was sufficient for observing the bulk movement of the bed material, owing to the large effective viscosity of the fluidized bed. The visualized images indicated that bubbles and/or wakes were an important mechanism in the behavior of the fluidized bed movement.

  3. Documentation and user's guide for DOSTOMAN: a pathways computer model of radionuclide movement

    International Nuclear Information System (INIS)

    Root, R.W. Jr.

    1980-01-01

This report documents the mathematical development and the computer implementation of the Savannah River Laboratory computer code used to simulate radionuclide movement in the environment. The user's guide provides all the information necessary for a prospective user to input the required data, execute the computer program, and display the results.
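Pathways codes of this kind are typically compartment models: radionuclide inventories in environmental compartments coupled by first-order transfer rates plus radioactive decay. The sketch below is a generic minimal example of that structure, not the DOSTOMAN code itself; the compartment names, rate constants, and half-life are all assumed for illustration.

```python
import numpy as np

def step(inv, K, lam, dt):
    """One forward-Euler step of a first-order compartment model.
    inv: inventory per compartment (Bq); K[i][j]: transfer rate i -> j (1/y);
    lam: radioactive decay constant (1/y); dt: time step (y)."""
    flow = K * inv[:, None]                       # activity leaving i for j
    dinv = flow.sum(axis=0) - flow.sum(axis=1) - lam * inv
    return inv + dt * dinv

lam = np.log(2) / 30.0                            # e.g. ~30-year half-life
# Hypothetical chain: soil -> groundwater -> river
K = np.array([[0.0, 0.05, 0.0],
              [0.0, 0.0, 0.01],
              [0.0, 0.0, 0.0]])
inv = np.array([1.0e6, 0.0, 0.0])                 # all activity starts in soil

for _ in range(1000):                             # 100 years at dt = 0.1 y
    inv = step(inv, K, lam, 0.1)
print(inv)                                        # inventory after 100 years
```

Because inter-compartment transfers cancel in the total, the summed inventory declines only through decay, which provides a useful conservation check on any such implementation.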

  4. Segmentation of dance movement: Effects of expertise, visual familiarity, motor experience and music

    Directory of Open Access Journals (Sweden)

    Bettina E. Bläsing

    2015-01-01

Full Text Available According to event segmentation theory, action perception depends on sensory cues and prior knowledge, and the segmentation of observed actions is crucial for understanding and memorizing these actions. While most activities in everyday life are characterized by external goals and interaction with objects or persons, this does not necessarily apply to dance-like actions. We investigated to what extent visual familiarity with the observed movement and accompanying music influence the segmentation of a dance phrase in dancers of different skill levels and in non-dancers. In Experiment 1, dancers and non-dancers repeatedly watched a video clip showing a dancer performing a choreographed dance phrase and indicated segment boundaries by key press. Dancers generally defined fewer segment boundaries than non-dancers, specifically in the first trials, in which visual familiarity with the phrase was low. Music increased the number of segment boundaries in the non-dancers and decreased it in the dancers. The results suggest that dance expertise reduces the number of perceived segment boundaries in an observed dance phrase, and that the ways visual familiarity and music affect movement segmentation are modulated by dance expertise. In Experiment 2, motor experience was added as a factor, based on empirical evidence suggesting that action perception is modified by visual and motor expertise in different ways: the same task as in Experiment 1 was performed by dance amateurs, and was repeated by the same participants after they had learned to dance the presented dance phrase. Fewer segment boundaries were defined in the middle trials after participants had learned to dance the phrase, and music reduced the number of segment boundaries before learning. The results suggest that specific motor experience of the observed movement influences its perception and anticipation and makes segmentation broader, but not to the same degree as dance expertise.

  5. Selective weighting of action-related feature dimensions in visual working memory.

    Science.gov (United States)

    Heuer, Anna; Schubö, Anna

    2017-08-01

    Planning an action primes feature dimensions that are relevant for that particular action, increasing the impact of these dimensions on perceptual processing. Here, we investigated whether action planning also affects the short-term maintenance of visual information. In a combined memory and movement task, participants were to memorize items defined by size or color while preparing either a grasping or a pointing movement. Whereas size is a relevant feature dimension for grasping, color can be used to localize the goal object and guide a pointing movement. The results showed that memory for items defined by size was better during the preparation of a grasping movement than during the preparation of a pointing movement. Conversely, memory for color tended to be better when a pointing movement rather than a grasping movement was being planned. This pattern was not only observed when the memory task was embedded within the preparation period of the movement, but also when the movement to be performed was only indicated during the retention interval of the memory task. These findings reveal that a weighting of information in visual working memory according to action relevance can even be implemented at the representational level during maintenance, demonstrating that our actions continue to influence visual processing beyond the perceptual stage.

  6. Comparison of accuracies of an intraoral spectrophotometer and conventional visual method for shade matching using two shade guide systems.

    Science.gov (United States)

    Parameswaran, Vidhya; Anilkumar, S; Lylajam, S; Rajesh, C; Narayan, Vivek

    2016-01-01

This in vitro study compared the shade-matching abilities of an intraoral spectrophotometer and the conventional visual method using two shade guides. Previous comparisons between color perceived by human observers and color assessed by instruments have been inconclusive. The objectives were to determine the accuracy and interrater agreement of both methods, and the effectiveness of the two shade guides with either method. In the visual method, 10 examiners with normal color vision matched target control shade tabs taken from the two shade guides (VITAPAN Classical™ and VITAPAN 3D Master™) against full sets of the respective shade guides. Each tab was matched 3 times to determine the repeatability of the visual examiners. The spectrophotometric shade matching was performed by two independent examiners using an intraoral spectrophotometer (VITA Easyshade™), with five repetitions for each tab. Results revealed that the visual method had greater accuracy than the spectrophotometer; the spectrophotometer, however, exhibited significantly better interrater agreement. While the VITAPAN Classical shade guide was more accurate with the spectrophotometer, the VITAPAN 3D Master shade guide proved better with the visual method. This in vitro study clearly delineates the advantages and limitations of both methods. There were significant differences between the methods, with the visual method producing more accurate results and the spectrophotometer showing far better interrater agreement irrespective of the shade guide used. Even though visual shade matching is subjective, it is not inferior and should not be underrated. A judicious combination of both techniques is imperative to attain a successful and esthetic outcome.
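Interrater agreement of the kind compared here is commonly quantified with Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. This is a generic sketch of that statistic, not the study's analysis code; the shade labels and ratings are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[label] * cb[label] for label in set(ca) | set(cb)) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical shade assignments by two examiners for eight tabs.
a = ["A1", "A2", "B1", "A1", "C2", "A2", "B1", "A1"]
b = ["A1", "A2", "B2", "A1", "C2", "A2", "B1", "A2"]
print(round(cohens_kappa(a, b), 3))
```

A kappa near 1 indicates near-perfect agreement; values around 0.6 to 0.8, as in this toy example, are conventionally read as substantial agreement.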

  7. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition

    OpenAIRE

    Jesse, A.; McQueen, J.

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker...

  8. Influence of the Perspectives on the Movement of One-Leg Lifting in an Interactive-Visual Virtual Environment: A Pilot Study.

    Directory of Open Access Journals (Sweden)

    Chien-Hua Huang

Full Text Available Numerous studies have confirmed the feasibility of active video games for clinical rehabilitation. To maximize training effectiveness, a personal program is necessary; however, little evidence is available to guide individualized game design for rehabilitation. This study assessed the perspectives and the kinematic and temporal parameters of a participant's postural control in an interactive-visual virtual environment. Twenty-four healthy participants performed one-leg standing by leg lifting when a posture frame appeared in either a first- or third-person perspective of a virtual environment. A foot force plate was used to detect the displacement of the center of pressure. A three-way mixed factor design was applied, where the perspective was the between-participant factor, and the leg-lifting times (0.7 and 2.7 seconds) and leg-lifting angles (30° and 90°) were the within-participant factors. The reaction time, accuracy of the movement, and ability to shift weight were the dependent variables. Regarding the reaction time and accuracy of the movement, there were no significant main effects of the perspective, leg-lifting time, or angle. For the ability to shift weight, however, both the perspective and time exerted significant main effects, F(1,22) = 6.429 and F(1,22) = 13.978, respectively. Participants could shift their weight more effectively in the third-person perspective of the virtual environment. The results can serve as a reference for future designs of interactive-visual virtual environments as applied to rehabilitation.
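A common force-plate measure of weight-shifting ability is the total path length traced by the center of pressure (COP) during the movement. The sketch below shows that computation in its simplest form; the function name and sample data are illustrative assumptions, not taken from the study.

```python
import math

def cop_path_length(samples):
    """Total COP path length from a time-ordered list of (x, y) positions in cm."""
    return sum(math.dist(p, q) for p, q in zip(samples, samples[1:]))

# Hypothetical COP trajectory during one leg lift.
trial = [(0.0, 0.0), (0.3, 0.4), (0.6, 0.8), (0.6, 1.8)]
print(cop_path_length(trial))  # 0.5 + 0.5 + 1.0 = 2.0 cm
```

Per-trial path lengths (or peak COP excursions) can then be entered as the dependent variable in a mixed-design analysis like the one described above.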

  9. Action Planning Mediates Guidance of Visual Attention from Working Memory.

    Science.gov (United States)

    Feldmann-Wüstefeld, Tobias; Schubö, Anna

    2015-01-01

    Visual search is impaired when a salient task-irrelevant stimulus is presented together with the target. Recent research has shown that this attentional capture effect is enhanced when the salient stimulus matches working memory (WM) content, arguing in favor of attention guidance from WM. Visual attention was also shown to be closely coupled with action planning. Preparing a movement renders action-relevant perceptual dimensions more salient and thus increases search efficiency for stimuli sharing that dimension. The present study aimed at revealing common underlying mechanisms for selective attention, WM, and action planning. Participants both prepared a specific movement (grasping or pointing) and memorized a color hue. Before the movement was executed towards an object of the memorized color, a visual search task (additional singleton) was performed. Results showed that distraction from target was more pronounced when the additional singleton had a memorized color. This WM-guided attention deployment was more pronounced when participants prepared a grasping movement. We argue that preparing a grasping movement mediates attention guidance from WM content by enhancing representations of memory content that matches the distractor shape (i.e., circles), thus encouraging attentional capture by circle distractors of the memorized color. We conclude that templates for visual search, action planning, and WM compete for resources and thus cause interferences.

  10. Spatial constancy of attention across eye movements is mediated by the presence of visual objects.

    Science.gov (United States)

    Lisi, Matteo; Cavanagh, Patrick; Zorzi, Marco

    2015-05-01

    Recent studies have shown that attentional facilitation lingers at the retinotopic coordinates of a previously attended position after an eye movement. These results are intriguing, because the retinotopic location becomes behaviorally irrelevant once the eyes have moved. Critically, in these studies participants were asked to maintain attention on a blank location of the screen. In the present study, we examined whether the continuing presence of a visual object at the cued location could affect the allocation of attention across eye movements. We used a trans-saccadic cueing paradigm in which the relevant positions could be defined or not by visual objects (simple square outlines). We find an attentional benefit at the spatiotopic location of the cue only when the object (the placeholder) has been continuously present at that location. We conclude that the presence of an object at the attended location is a critical factor for the maintenance of spatial constancy of attention across eye movements, a finding that helps to reconcile previous conflicting results.

  11. A closer look at visually guided saccades in autism and Asperger’s disorder

    Directory of Open Access Journals (Sweden)

    Beth eJohnson

    2012-11-01

Full Text Available Motor impairments have been found to be a significant clinical feature associated with autism and Asperger's disorder (AD), in addition to core symptoms of communication and social cognition deficits. Motor deficits in high-functioning autism (HFA) and AD may differentiate these disorders, particularly with respect to the role of the cerebellum in motor functioning. Current neuroimaging and behavioural evidence suggests greater disruption of the cerebellum in HFA than in AD. Investigations of ocular motor functioning have previously been used in clinical populations to assess the integrity of cerebellar networks, through examination of saccade accuracy and the integrity of saccade dynamics. Previous investigations of visually guided saccades in HFA and AD have assessed only basic saccade metrics, such as latency, amplitude, gain, and peak velocity. We used a simple visually guided saccade paradigm to further characterize the profile of visually guided saccade metrics and dynamics in HFA and AD. We found that children with HFA, but not AD, were more inaccurate across both small (5°) and large (10°) target amplitudes, and that final eye position was hypometric at 10°. These findings suggest greater functional disturbance of the cerebellum in HFA than in AD, and fundamental difficulties with visual error monitoring in HFA.

  12. Target position uncertainty during visually guided deep-inspiration breath-hold radiotherapy in locally advanced lung cancer

    DEFF Research Database (Denmark)

    Rydhog, Jonas Scherman; de Blanck, Steen Riisgaard; Josipovic, Mirjana

    2017-01-01

Purpose: The purpose of this study was to estimate the uncertainty in voluntary deep-inspiration breath-hold (DIBH) radiotherapy for locally advanced non-small cell lung cancer (NSCLC) patients. Methods: Perpendicular fluoroscopic movies were acquired in free breathing (FB) and DIBH during a course...... of visually guided DIBH radiotherapy of nine patients with NSCLC. Patients had liquid markers injected in mediastinal lymph nodes and primary tumours. Excursion, systematic and random errors, and inter-breath-hold position uncertainty were investigated using an image-based tracking algorithm. Results: A mean...... small in visually guided breath-hold radiotherapy of NSCLC. Target motion could be substantially reduced, but not eliminated, using visually guided DIBH. (C) 2017 Elsevier B.V. All rights reserved....

  13. Does Visual Attention Span Relate to Eye Movements during Reading and Copying?

    Science.gov (United States)

    Bosse, Marie-Line; Kandel, Sonia; Prado, Chloé; Valdois, Sylviane

    2014-01-01

    This research investigated whether text reading and copying involve visual attention-processing skills. Children in grades 3 and 5 read and copied the same text. We measured eye movements while reading and the number of gaze lifts (GL) during copying. The children were also administered letter report tasks that constitute an estimation of the…

  14. Attention in natural scenes: Affective-motivational factors guide gaze independently of visual salience.

    Science.gov (United States)

    Schomaker, Judith; Walper, Daniel; Wittmann, Bianca C; Einhäuser, Wolfgang

    2017-04-01

    In addition to low-level stimulus characteristics and current goals, our previous experience with stimuli can also guide attentional deployment. It remains unclear, however, if such effects act independently or whether they interact in guiding attention. In the current study, we presented natural scenes including every-day objects that differed in affective-motivational impact. In the first free-viewing experiment, we presented visually-matched triads of scenes in which one critical object was replaced that varied mainly in terms of motivational value, but also in terms of valence and arousal, as confirmed by ratings by a large set of observers. Treating motivation as a categorical factor, we found that it affected gaze. A linear-effect model showed that arousal, valence, and motivation predicted fixations above and beyond visual characteristics, like object size, eccentricity, or visual salience. In a second experiment, we experimentally investigated whether the effects of emotion and motivation could be modulated by visual salience. In a medium-salience condition, we presented the same unmodified scenes as in the first experiment. In a high-salience condition, we retained the saturation of the critical object in the scene, and decreased the saturation of the background, and in a low-salience condition, we desaturated the critical object while retaining the original saturation of the background. We found that highly salient objects guided gaze, but still found additional additive effects of arousal, valence and motivation, confirming that higher-level factors can also guide attention, as measured by fixations towards objects in natural scenes. Copyright © 2017 Elsevier Ltd. All rights reserved.
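The claim that arousal, valence, and motivation predict fixations "above and beyond" visual characteristics corresponds to comparing a linear model with only visual predictors against one that adds the affective predictors, and checking that the fit improves. The sketch below illustrates that model-comparison logic on synthetic data; the predictor names, effect sizes, and noise level are all assumptions, not the study's data or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic per-object predictors.
salience = rng.normal(size=n)       # visual salience
size = rng.normal(size=n)           # object size
arousal = rng.normal(size=n)        # affective predictor

# Synthetic ground truth: fixations depend on visual AND affective factors.
fixations = (1.0 + 0.8 * salience + 0.3 * size + 0.5 * arousal
             + rng.normal(scale=0.1, size=n))

def fit(X, y):
    """Ordinary least squares with an intercept; returns (beta, rss)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = float(((X1 @ beta - y) ** 2).sum())
    return beta, rss

# Baseline model: visual characteristics only.
_, rss_base = fit(np.column_stack([salience, size]), fixations)
# Full model: adds the affective predictor.
beta_full, rss_full = fit(np.column_stack([salience, size, arousal]), fixations)

print(rss_full < rss_base, beta_full.round(2))
```

In practice such comparisons are made with nested-model tests (e.g., an F-test or likelihood-ratio test on the residual sums of squares) rather than a raw RSS comparison, but the structure is the same.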

  15. The Impact of Visual Guided Order Picking on Ocular Comfort, Ocular Surface and Tear Function.

    Directory of Open Access Journals (Sweden)

    Angelika Klein-Theyer

Full Text Available We investigated the effects of a visual picking system on ocular comfort, the ocular surface, and tear function, compared with those of a voice-guided picking solution. Design: Prospective, observational cohort study. Setting: Institutional. A total of 25 young asymptomatic volunteers performed commissioning over 10 hours on two consecutive days. The operators were guided in the picking process by two different picking solutions, either visually or by voice, while their subjective symptoms and ocular surface and tear function parameters were recorded. The visual analogue scale (VAS) values for subjective dry eye symptoms in the visual condition were significantly higher at the end of the commissioning than at baseline. In the voice condition, the VAS values remained stable during the commissioning. The tear break-up time (BUT) values declined significantly in the visual condition (pre-task: 16.6 sec; post-task: 9.6 sec) in the right eyes, which were exposed to the displays; the left eyes in the visual condition showed only a minor decline, whereas the BUT values in the voice condition remained constant (right eyes) or even increased (left eyes) over time. No significant differences in tear meniscus height before and after the commissioning were observed in either condition. In our study, the use of visually guided picking solutions was correlated with post-task subjective symptoms and tear film instability.

  16. Visual rehabilitation: visual scanning, multisensory stimulation and vision restoration trainings

    Directory of Open Access Journals (Sweden)

    Neil M. Dundon

    2015-07-01

Full Text Available Neuropsychological training methods of visual rehabilitation for homonymous vision loss caused by postchiasmatic damage fall into two fundamental paradigms: compensation and restoration. Existing methods can be classified into three groups: Visual Scanning Training (VST), Audio-Visual Scanning Training (AViST), and Vision Restoration Training (VRT). VST and AViST aim at compensating for vision loss by training eye scanning movements, whereas VRT aims at improving lost vision by activating residual visual functions through training of light detection and discrimination of visual stimuli. This review discusses the rationale underlying these paradigms and summarizes the available evidence with respect to treatment efficacy. The issues raised in our review should help guide clinical care and stimulate new ideas for future research uncovering the underlying neural correlates of the different treatment paradigms. We propose that both local within-system interactions (i.e., relying on plasticity within peri-lesional spared tissue) and changes in more global between-system networks (i.e., recruiting alternative visual pathways) contribute to both vision restoration and compensatory rehabilitation, which ultimately have implications for the rehabilitation of cognitive functions.

  17. Macular degeneration affects eye movement behaviour during visual search

    Directory of Open Access Journals (Sweden)

    Stefan eVan Der Stigchel

    2013-09-01

Full Text Available Patients with a scotoma in their central vision (e.g., due to macular degeneration, MD) commonly adopt a strategy of directing the eyes so that the image falls onto a peripheral location on the retina, referred to as the preferred retinal locus (PRL). Although previous research has investigated the characteristics of this PRL, it is unclear whether eye movement metrics are modulated by peripheral viewing with a PRL as measured during a visual search paradigm. To this end, we tested four MD patients in a visual search paradigm and contrasted their performance with that of a healthy control group and a healthy control group performing the same experiment with a simulated scotoma. The experiment contained two conditions: in the first, the target was an unfilled circle hidden among c-shaped distractors (serial condition), and in the second, the target was a filled circle (pop-out condition). Saccadic search latencies for the MD group were significantly longer in both conditions compared with both control groups. Results of a subsequent experiment indicated that this difference between the MD and control groups could not be explained by a difference in target selection sensitivity. Furthermore, search behaviour of MD patients was associated with saccades of smaller amplitude towards the scotoma, an increased intersaccadic interval, and an increased number of eye movements needed to locate the target. Some of these characteristics, such as the increased intersaccadic interval, were also observed in the simulation group, indicating that they are related to the peripheral viewing itself. We suggest that the combination of the central scotoma and peripheral viewing can explain the altered search behaviour; no behavioural evidence was found for a possible reorganization of the visual system associated with the use of a PRL. Thus the switch from a fovea-based to a PRL-based reference frame impairs search.

  18. Neurons in cortical area MST remap the memory trace of visual motion across saccadic eye movements.

    Science.gov (United States)

    Inaba, Naoko; Kawano, Kenji

    2014-05-27

    Perception of a stable visual world despite eye motion requires integration of visual information across saccadic eye movements. To investigate how the visual system deals with localization of moving visual stimuli across saccades, we observed spatiotemporal changes of receptive fields (RFs) of motion-sensitive neurons across periods of saccades in the middle temporal (MT) and medial superior temporal (MST) areas. We found that the location of the RFs moved with shifts of eye position due to saccades, indicating that motion-sensitive neurons in both areas have retinotopic RFs across saccades. Different characteristic responses emerged when the moving visual stimulus was turned off before the saccades. For MT neurons, virtually no response was observed after the saccade, suggesting that the responses of these neurons simply reflect the reafferent visual information. In contrast, most MST neurons increased their firing rates when a saccade brought the location of the visual stimulus into their RFs, where the visual stimulus itself no longer existed. These findings suggest that the responses of such MST neurons after saccades were evoked by a memory of the stimulus that had preexisted in the postsaccadic RFs ("memory remapping"). A delayed-saccade paradigm further revealed that memory remapping in MST was linked to the saccade itself, rather than to a shift in attention. Thus, the visual motion information across saccades was integrated in spatiotopic coordinates and represented in the activity of MST neurons. This is likely to contribute to the perception of a stable visual world in the presence of eye movements.

  19. Reliability of Visual and Somatosensory Feedback in Skilled Movement: The Role of the Cerebellum.

    Science.gov (United States)

    Mizelle, J C; Oparah, Alexis; Wheaton, Lewis A

    2016-01-01

    The integration of vision and somatosensation is required to allow for accurate motor behavior. While both sensory systems contribute to an understanding of the state of the body through continuous updating and estimation, how the brain processes unreliable sensory information remains to be fully understood in the context of complex action. Using functional brain imaging, we sought to understand the role of the cerebellum in weighting visual and somatosensory feedback by selectively reducing the reliability of each sense individually during a tool use task. We broadly hypothesized upregulated activation of the sensorimotor and cerebellar areas during movement with reduced visual reliability, and upregulated activation of occipital brain areas during movement with reduced somatosensory reliability. As specifically compared to reduced somatosensory reliability, we expected greater activations of ipsilateral sensorimotor cerebellum for intact visual and somatosensory reliability. Further, we expected that ipsilateral posterior cognitive cerebellum would be affected with reduced visual reliability. We observed that reduced visual reliability results in a trend towards the relative consolidation of sensorimotor activation and an expansion of cerebellar activation. In contrast, reduced somatosensory reliability was characterized by the absence of cerebellar activations and a trend towards the increase of right frontal, left parietofrontal activation, and temporo-occipital areas. Our findings highlight the role of the cerebellum for specific aspects of skillful motor performance. This has relevance to understanding basic aspects of brain functions underlying sensorimotor integration, and provides a greater understanding of cerebellar function in tool use motor control.

  20. Humans use visual and remembered information about object location to plan pointing movements

    NARCIS (Netherlands)

    Brouwer, A.-M.; Knill, D.C.

    2009-01-01

    We investigated whether humans use a target's remembered location to plan reaching movements to targets according to the relative reliabilities of visual and remembered information. Using their index finger, subjects moved a virtual object from one side of a table to the other, and then went back to

  1. Unimodal and crossmodal working memory representations of visual and kinesthetic movement trajectories.

    Science.gov (United States)

    Seemüller, Anna; Fiehler, Katja; Rösler, Frank

    2011-01-01

    The present study investigated whether visual and kinesthetic stimuli are stored as multisensory or modality-specific representations in unimodal and crossmodal working memory tasks. To this end, angle-shaped movement trajectories were presented to 16 subjects in delayed matching-to-sample tasks either visually or kinesthetically during encoding and recognition. During the retention interval, a secondary visual or kinesthetic interference task was inserted either immediately or with a delay after encoding. The modality of the interference task interacted significantly with the encoding modality. After visual encoding, memory was more impaired by a visual than by a kinesthetic secondary task, while after kinesthetic encoding the pattern was reversed. The time at which the secondary task had to be performed also interacted with the encoding modality. For visual encoding, memory was more impaired when the secondary task had to be performed at the beginning of the retention interval. In contrast, memory after kinesthetic encoding was more affected when the secondary task was introduced later in the retention interval. The findings suggest that working memory traces are maintained in a modality-specific format characterized by distinct consolidation processes that take longer after kinesthetic than after visual encoding. Copyright © 2010 Elsevier B.V. All rights reserved.

  2. An Annotated Guide to Audio-Visual Materials for Teaching Shakespeare.

    Science.gov (United States)

    Albert, Richard N.

    Audio-visual materials, found in a variety of periodicals, catalogs, and reference works, are listed in this guide to expedite the process of finding appropriate materials for classroom study of William Shakespeare. Separate listings of films, filmstrips, and recordings are provided, with subdivisions for "The Plays"…

  3. Visually and memory-guided grasping: aperture shaping exhibits a time-dependent scaling to Weber's law.

    Science.gov (United States)

    Holmes, Scott A; Mulla, Ali; Binsted, Gordon; Heath, Matthew

    2011-09-01

    The 'just noticeable difference' (JND) represents the minimum amount by which a stimulus must change to produce a noticeable variation in one's perceptual experience and is related to initial stimulus magnitude (i.e., Weber's law). The goal of the present study was to determine whether aperture shaping for visually derived and memory-guided grasping elicits a temporally dependent or temporally independent adherence to Weber's law. Participants were instructed to grasp differently sized objects (20, 30, 40, 50 and 60 mm) in conditions wherein vision of the grasping environment was available throughout the response (i.e., closed-loop), occluded at movement onset (i.e., open-loop), or occluded for a brief (i.e., 0 ms) or longer (i.e., 2000 ms) delay in advance of movement onset. Within-participant standard deviations of grip aperture (i.e., the JNDs) computed at decile increments of normalized grasping time were used to determine participants' sensitivity to detecting changes in object size. Results showed that JNDs increased linearly with increasing object size from 10% to 40% of grasping time; that is, the trial-to-trial stability (i.e., visuomotor certainty) of grip aperture (i.e., the comparator) decreased with increasing object size (i.e., the initial stimulus). However, a null JND/object size scaling was observed during the middle and late stages of the response (i.e., >50% of grasping time). Most notably, the temporal relationship between JNDs and object size scaling was similar across the different visual conditions used here. Thus, our results provide evidence that aperture shaping elicits a time-dependent early, but not late, adherence to the psychophysical principles of Weber's law. Copyright © 2011 Elsevier Ltd. All rights reserved.
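
    The study's JND measure, the within-participant standard deviation of grip aperture at each decile of normalized movement time, and the test for Weber-law scaling can be sketched as follows. The simulated apertures below are hypothetical, generated only to illustrate the computation:

```python
import numpy as np

rng = np.random.default_rng(0)
object_sizes = [20, 30, 40, 50, 60]  # mm

def jnd_per_decile(apertures):
    """Within-participant SD of grip aperture at each decile of
    normalized grasping time (rows = trials, columns = deciles)."""
    return apertures.std(axis=0, ddof=1)

# Simulated trials with Weber-like noise: aperture SD proportional to
# object size (hypothetical 5% coefficient, 30 trials, 10 deciles).
jnds_at_20pct = []
for size in object_sizes:
    apertures = rng.normal(loc=size, scale=0.05 * size, size=(30, 10))
    jnds_at_20pct.append(jnd_per_decile(apertures)[1])  # 20% decile

# Adherence to Weber's law shows up as a positive linear
# JND/object-size slope at this stage of the movement.
slope, intercept = np.polyfit(object_sizes, jnds_at_20pct, 1)
```

In the study this regression was run at every decile, which is what reveals the early (10-40%) versus late (>50%) dissociation.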

  4. Action Planning Mediates Guidance of Visual Attention from Working Memory

    Directory of Open Access Journals (Sweden)

    Tobias Feldmann-Wüstefeld

    2015-01-01

    Visual search is impaired when a salient task-irrelevant stimulus is presented together with the target. Recent research has shown that this attentional capture effect is enhanced when the salient stimulus matches working memory (WM) content, arguing in favor of attention guidance from WM. Visual attention has also been shown to be closely coupled with action planning. Preparing a movement renders action-relevant perceptual dimensions more salient and thus increases search efficiency for stimuli sharing that dimension. The present study aimed at revealing common underlying mechanisms for selective attention, WM, and action planning. Participants both prepared a specific movement (grasping or pointing) and memorized a color hue. Before the movement was executed towards an object of the memorized color, a visual search task (additional singleton) was performed. Results showed that distraction from the target was more pronounced when the additional singleton had the memorized color. This WM-guided attention deployment was more pronounced when participants prepared a grasping movement. We argue that preparing a grasping movement mediates attention guidance from WM content by enhancing representations of memory content that matches the distractor shape (i.e., circles), thus encouraging attentional capture by circle distractors of the memorized color. We conclude that templates for visual search, action planning, and WM compete for resources and thus cause interference.

  5. Eye Movements Affect Postural Control in Young and Older Females.

    Science.gov (United States)

    Thomas, Neil M; Bampouras, Theodoros M; Donovan, Tim; Dewhurst, Susan

    2016-01-01

    Visual information is used for postural stabilization in humans. However, little is known about how eye movements prevalent in everyday life interact with the postural control system in older individuals. Therefore, the present study assessed the effects of stationary gaze fixations, smooth pursuits, and saccadic eye movements, with combinations of absent, fixed, and oscillating large-field visual backgrounds to generate different forms of retinal flow, on postural control in healthy young and older females. Participants were presented with computer-generated visual stimuli, whilst postural sway and gaze fixations were simultaneously assessed with a force platform and eye tracking equipment, respectively. The results showed that fixed backgrounds and stationary gaze fixations attenuated postural sway. In contrast, oscillating backgrounds and smooth pursuits increased postural sway. There were no differences regarding saccades. There were also no differences in postural sway or gaze errors between age groups in any visual condition. The stabilizing effect of the fixed visual stimuli shows how retinal flow and extraocular factors guide postural adjustments. The destabilizing effect of oscillating visual backgrounds and smooth pursuits may be related to more challenging conditions for determining body shifts from retinal flow, and to more complex extraocular signals, respectively. Because the older participants matched the young group's performance in all conditions, declines in postural and gaze control during stance may not be a direct consequence of healthy aging. Further research examining extraocular and retinal mechanisms of balance control, and the effects of eye movements during locomotion, is needed to better inform fall prevention interventions.

  6. High contrast sensitivity for visually guided flight control in bumblebees.

    Science.gov (United States)

    Chakravarthi, Aravin; Kelber, Almut; Baird, Emily; Dacke, Marie

    2017-12-01

    Many insects rely on vision to find food, to return to their nest and to carefully control their flight between these two locations. The amount of information available to support these tasks is, in part, dictated by the spatial resolution and contrast sensitivity of their visual systems. Here, we investigate the absolute limits of these visual properties for visually guided position and speed control in Bombus terrestris. Our results indicate that the limit of spatial vision in the translational motion detection system of B. terrestris lies at 0.21 cycles deg⁻¹ with a peak contrast sensitivity of at least 33. In light of earlier findings, these results indicate that bumblebees have higher contrast sensitivity in the motion detection system underlying position control than in their object discrimination system. This suggests that bumblebees, and most likely also other insects, have different visual thresholds depending on the behavioral context.
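
    Contrast sensitivity is the reciprocal of the Michelson contrast detection threshold, so the reported sensitivity of 33 corresponds to detecting gratings of roughly 3% contrast. A small sketch with illustrative luminance values:

```python
def michelson_contrast(l_max, l_min):
    """Michelson contrast of a grating from its luminance extremes."""
    return (l_max - l_min) / (l_max + l_min)

# A peak contrast sensitivity of 33 implies a detection threshold of
# 1/33, about 3% Michelson contrast.
threshold = 1.0 / 33.0

# Illustrative grating: luminance modulating between 104 and 96 cd/m^2
# has 4% contrast, just above that threshold.
grating = michelson_contrast(104.0, 96.0)  # = 0.04
```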

  7. Functional Asymmetries Revealed in Visually Guided Saccades: An fMRI Study

    Energy Technology Data Exchange (ETDEWEB)

    Petit, L.; Zago, L.; Vigneau, M.; Crivello, F.; Mazoyer, B.; Mellet, E.; Tzourio-Mazoyer, N. [Centre for Imaging, Neurosciences and Applications to Pathologies, UMR6232 CNRS CEA (France); Mazoyer, B. [Centre Hospitalier Universitaire, Caen (France); Andersson, F. [Institut Federatif de Recherche 135, Imagerie fonctionnelle, Tours (France); Mazoyer, B. [Institut Universitaire de France, Paris (France)

    2009-07-01

    Because eye movements are a fundamental tool for spatial exploration, we hypothesized that the neural bases of these movements in humans should be under right cerebral dominance, as already described for spatial attention. We used functional magnetic resonance imaging in 27 right-handed participants who alternated central fixation with either large or small visually guided saccades (VGS), performed equally in both directions. Hemispheric functional asymmetry was analyzed to identify whether brain regions showing VGS activation exhibited hemispheric asymmetries. Hemispheric anatomical asymmetry was also estimated to assess its influence on the VGS functional lateralization. Right asymmetrical activations of a saccadic/attentional system were observed in the lateral frontal eye fields (FEF), the anterior part of the intra-parietal sulcus (aIPS), the posterior third of the superior temporal sulcus (STS), the occipito-temporal junction (MT/V5 area), the middle occipital gyrus, and medially along the calcarine fissure (V1). The present rightward functional asymmetries were not related to differences in gray matter (GM) density/sulci positions between right and left hemispheres in the pre-central, intra-parietal, superior temporal, and extrastriate regions. Only the V1 asymmetries were partly explained (almost 20% of the variance) by a difference in the position of the right and left calcarine fissures. Left asymmetrical activations of a saccadic motor system were observed in the medial FEF and in the motor strip eye field along the Rolando sulcus. They were not explained by GM asymmetries. We suggest that the leftward saccadic motor asymmetry is part of a general dominance of the left motor cortex in right-handers, which must include an effect of sighting dominance. 
Our results demonstrate that, although bilateral by nature, the brain network involved in the execution of VGSs, irrespective of their direction, presented specific right and left asymmetries that were not related to

  8. Functional Asymmetries Revealed in Visually Guided Saccades: An fMRI Study

    International Nuclear Information System (INIS)

    Petit, L.; Zago, L.; Vigneau, M.; Crivello, F.; Mazoyer, B.; Mellet, E.; Tzourio-Mazoyer, N.; Mazoyer, B.; Andersson, F.; Mazoyer, B.

    2009-01-01

    Because eye movements are a fundamental tool for spatial exploration, we hypothesized that the neural bases of these movements in humans should be under right cerebral dominance, as already described for spatial attention. We used functional magnetic resonance imaging in 27 right-handed participants who alternated central fixation with either large or small visually guided saccades (VGS), performed equally in both directions. Hemispheric functional asymmetry was analyzed to identify whether brain regions showing VGS activation exhibited hemispheric asymmetries. Hemispheric anatomical asymmetry was also estimated to assess its influence on the VGS functional lateralization. Right asymmetrical activations of a saccadic/attentional system were observed in the lateral frontal eye fields (FEF), the anterior part of the intra-parietal sulcus (aIPS), the posterior third of the superior temporal sulcus (STS), the occipito-temporal junction (MT/V5 area), the middle occipital gyrus, and medially along the calcarine fissure (V1). The present rightward functional asymmetries were not related to differences in gray matter (GM) density/sulci positions between right and left hemispheres in the pre-central, intra-parietal, superior temporal, and extrastriate regions. Only the V1 asymmetries were partly explained (almost 20% of the variance) by a difference in the position of the right and left calcarine fissures. Left asymmetrical activations of a saccadic motor system were observed in the medial FEF and in the motor strip eye field along the Rolando sulcus. They were not explained by GM asymmetries. We suggest that the leftward saccadic motor asymmetry is part of a general dominance of the left motor cortex in right-handers, which must include an effect of sighting dominance. 
Our results demonstrate that, although bilateral by nature, the brain network involved in the execution of VGSs, irrespective of their direction, presented specific right and left asymmetries that were not related to

  9. The Use of Music to Promote Purposeful Movement in Children with Visual Impairments

    Science.gov (United States)

    Coleman, Jeremy

    2017-01-01

    Music plays a major role in the education and development of all children. Although the use of music in the education process may seem obvious to most professionals, there are only a few studies that discuss the effect of music on the purposeful movement of students with visual impairments (DePountis, Cady, & Hallak, 2013; Desrochers, Oshlag,…

  10. A self-organizing model of perisaccadic visual receptive field dynamics in primate visual and oculomotor system.

    Science.gov (United States)

    Mender, Bedeho M W; Stringer, Simon M

    2015-01-01

    We propose and examine a model for how perisaccadic visual receptive field dynamics, observed in a range of primate brain areas such as LIP, FEF, SC, V3, V3A, V2, and V1, may develop through a biologically plausible process of unsupervised visually guided learning. These dynamics are associated with remapping, which is the phenomenon where receptive fields anticipate the consequences of saccadic eye movements. We find that a neural network model using a local associative synaptic learning rule, when exposed to visual scenes in conjunction with saccades, can account for a range of associated phenomena. In particular, our model demonstrates predictive and pre-saccadic remapping, responsiveness shifts around the time of saccades, and remapping from multiple directions.

  11. Posterior α EEG Dynamics Dissociate Current from Future Goals in Working Memory-Guided Visual Search

    NARCIS (Netherlands)

    de Vries, I.E.J.; van Driel, J.; Olivers, C.N.L.

    2017-01-01

    Current models of visual search assume that search is guided by an active visual working memory representation of what we are currently looking for. This attentional template for currently relevant stimuli can be dissociated from accessory memory representations that are only needed prospectively,

  12. Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain.

    Science.gov (United States)

    Schindler, Andreas; Bartels, Andreas

    2018-05-15

    Our phenomenological experience of a stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. Here we circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable air cushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion, which was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing congruent with incongruent conditions we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv), and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Drivers’ Visual Behavior-Guided RRT Motion Planner for Autonomous On-Road Driving

    Directory of Open Access Journals (Sweden)

    Mingbo Du

    2016-01-01

    This paper describes a real-time motion planner based on a drivers’ visual behavior-guided rapidly exploring random tree (RRT) approach, applicable to on-road driving of autonomous vehicles. The primary novelty is the use of guidance from drivers’ visual search behavior within the framework of an RRT motion planner. RRT is an incremental sampling-based method that is widely used to solve robotic motion planning problems. However, RRT is often unreliable in practical applications such as autonomous vehicles used for on-road driving because of unnatural trajectories, useless sampling, and slow exploration. To address these problems, we present an RRT algorithm that introduces an effective guided sampling strategy based on drivers’ on-road visual search behavior and a continuous-curvature smoothing method based on B-splines. The proposed algorithm is implemented on a real autonomous vehicle and verified against several different traffic scenarios. Extensive experimental results demonstrate that our algorithm is feasible and efficient for on-road autonomous driving. Furthermore, comparative tests and statistical analyses illustrate that it outperforms previous algorithms.

  14. Drivers' Visual Behavior-Guided RRT Motion Planner for Autonomous On-Road Driving.

    Science.gov (United States)

    Du, Mingbo; Mei, Tao; Liang, Huawei; Chen, Jiajia; Huang, Rulin; Zhao, Pan

    2016-01-15

    This paper describes a real-time motion planner based on a drivers' visual behavior-guided rapidly exploring random tree (RRT) approach, applicable to on-road driving of autonomous vehicles. The primary novelty is the use of guidance from drivers' visual search behavior within the framework of an RRT motion planner. RRT is an incremental sampling-based method that is widely used to solve robotic motion planning problems. However, RRT is often unreliable in practical applications such as autonomous vehicles used for on-road driving because of unnatural trajectories, useless sampling, and slow exploration. To address these problems, we present an RRT algorithm that introduces an effective guided sampling strategy based on drivers' on-road visual search behavior and a continuous-curvature smoothing method based on B-splines. The proposed algorithm is implemented on a real autonomous vehicle and verified against several different traffic scenarios. Extensive experimental results demonstrate that our algorithm is feasible and efficient for on-road autonomous driving. Furthermore, comparative tests and statistical analyses illustrate that it outperforms previous algorithms.

  15. Drivers’ Visual Behavior-Guided RRT Motion Planner for Autonomous On-Road Driving

    Science.gov (United States)

    Du, Mingbo; Mei, Tao; Liang, Huawei; Chen, Jiajia; Huang, Rulin; Zhao, Pan

    2016-01-01

    This paper describes a real-time motion planner based on a drivers’ visual behavior-guided rapidly exploring random tree (RRT) approach, applicable to on-road driving of autonomous vehicles. The primary novelty is the use of guidance from drivers’ visual search behavior within the framework of an RRT motion planner. RRT is an incremental sampling-based method that is widely used to solve robotic motion planning problems. However, RRT is often unreliable in practical applications such as autonomous vehicles used for on-road driving because of unnatural trajectories, useless sampling, and slow exploration. To address these problems, we present an RRT algorithm that introduces an effective guided sampling strategy based on drivers’ on-road visual search behavior and a continuous-curvature smoothing method based on B-splines. The proposed algorithm is implemented on a real autonomous vehicle and verified against several different traffic scenarios. Extensive experimental results demonstrate that our algorithm is feasible and efficient for on-road autonomous driving. Furthermore, comparative tests and statistical analyses illustrate that it outperforms previous algorithms. PMID:26784203
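
    The core idea shared by the records above, biasing RRT sampling toward the region a driver would visually search, can be sketched in a minimal 2D form. The corridor model below (a Gaussian band around the lane toward the goal, plus a small goal bias) is a simplified stand-in for the paper's actual visual-search model, and obstacles and curvature smoothing are omitted:

```python
import math
import random

random.seed(1)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def guided_sample(goal, corridor_bias=0.3, goal_bias=0.05):
    """Guided sampling: occasionally sample the goal itself, often
    sample the corridor a driver would look along (a Gaussian band
    around the goal's lateral position), otherwise sample uniformly."""
    r = random.random()
    if r < goal_bias:
        return goal
    if r < goal_bias + corridor_bias:
        return (random.uniform(0.0, 100.0), random.gauss(goal[1], 5.0))
    return (random.uniform(0.0, 100.0), random.uniform(0.0, 100.0))

def rrt(start, goal, step=2.0, iters=5000, goal_tol=3.0):
    """Basic RRT in free space: extend the nearest tree node one step
    toward each sample; stop when a node lands within goal_tol."""
    nodes = [start]
    parent = {start: None}
    for _ in range(iters):
        q = guided_sample(goal)
        near = min(nodes, key=lambda n: dist(n, q))
        d = dist(near, q)
        if d < 1e-9:
            continue
        new = (near[0] + step * (q[0] - near[0]) / d,
               near[1] + step * (q[1] - near[1]) / d)
        nodes.append(new)
        parent[new] = near
        if dist(new, goal) < goal_tol:
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None

path = rrt(start=(0.0, 50.0), goal=(100.0, 50.0))
```

Concentrating samples in the corridor addresses exactly the failure modes the abstract lists: it reduces useless sampling off-road, speeds exploration toward the goal, and yields more natural, lane-following trajectories before any B-spline smoothing is applied.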

  16. Proprioception contributes to the sense of agency during visual observation of hand movements: evidence from temporal judgments of action

    DEFF Research Database (Denmark)

    Balslev, Daniela; Cole, Jonathan; Miall, R Chris

    2007-01-01

    The ability to recognize visually one's own movement is important for motor control and, through attribution of agency, for social interactions. Agency of actions may be decided by comparisons of visual feedback, efferent signals, and proprioceptive inputs. Because the ability to identify one's own...

  17. Beyond scene gist: Objects guide search more than scene background.

    Science.gov (United States)

    Koehler, Kathryn; Eckstein, Miguel P

    2017-06-01

    Although the facilitation of visual search by contextual information is well established, there is little understanding of the independent contributions of different types of contextual cues in scenes. Here we manipulated 3 types of contextual information: object co-occurrence, multiple object configurations, and background category. We isolated the benefits of each contextual cue to target detectability, its impact on decision bias, confidence, and the guidance of eye movements. We find that object-based information guides eye movements and facilitates perceptual judgments more than scene background. The degree of guidance and facilitation of each contextual cue can be related to its inherent informativeness about the target spatial location as measured by human explicit judgments about likely target locations. Our results improve the understanding of the contributions of distinct contextual scene components to search and suggest that the brain's utilization of cues to guide eye movements is linked to the cue's informativeness about the target's location. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. Learning of Temporal and Spatial Movement Aspects: A Comparison of Four Types of Haptic Control and Concurrent Visual Feedback.

    Science.gov (United States)

    Rauter, Georg; Sigrist, Roland; Riener, Robert; Wolf, Peter

    2015-01-01

    In the literature, the effectiveness of haptics for motor learning is controversially discussed. Haptics is believed to be effective for motor learning in general; however, different types of haptic control enhance different movement aspects. Thus, depending on the movement aspects of interest, one type of haptic control may be effective whereas another is not. Therefore, in the current work, it was investigated whether and how different types of haptic controllers affect learning of spatial and temporal movement aspects. In particular, haptic controllers that enforce active participation of the participants were expected to improve spatial aspects. Only haptic controllers that provide feedback about the task's velocity profile were expected to improve temporal aspects. In a study on learning a complex trunk-arm rowing task, the effect of training with four different types of haptic control was investigated: position control, path control, adaptive path control, and reactive path control. A fifth group (control) trained with concurrent visual augmented feedback. As hypothesized, the position controller was most effective for learning temporal movement aspects, while the path controller was most effective in teaching spatial movement aspects of the rowing task. Visual feedback was also effective for learning temporal and spatial movement aspects.

  19. Assessment of atherosclerotic luminal narrowing of coronary arteries based on morphometrically generated visual guides.

    Science.gov (United States)

    Barth, Rolf F; Kellough, David A; Allenby, Patricia; Blower, Luke E; Hammond, Scott H; Allenby, Greg M; Buja, L Maximilian

    Determination of the degree of stenosis of atherosclerotic coronary arteries is an important part of postmortem examination of the heart, but estimation of the degree of luminal narrowing can be imprecise and tends to be approximate. Visual guides can be useful for this assessment, but earlier attempts to develop such guides did not employ digital technology. Using digital technology, we have developed two computer-generated morphometric guides to estimate the degree of luminal narrowing of atherosclerotic coronary arteries. The first is based on symmetric or eccentric circular or crescentic narrowing of the vessel lumen and the second on either slit-like or irregularly shaped narrowing of the vessel lumens. Using the Aperio ScanScope XT at a magnification of 20× we created digital whole-slide images of 20 representative microscopic cross sections of the left anterior descending (LAD) coronary artery, stained with either hematoxylin and eosin (H&E) or Movat's pentachrome stain. These cross sections illustrated a variety of luminal profiles and degrees of stenosis. Three representative types of images were selected and a visual guide was constructed with Adobe Photoshop CS5. Using the "Scale" and "Measurement" tools, we created a series of representations of stenosis with luminal cross sections depicting 20%, 40%, 60%, 70%, 80%, and 90% occlusion of the LAD branch. Four pathologists independently reviewed and scored the degree of atherosclerotic luminal narrowing based on our visual guides. In addition, digital technology was employed to determine the degree of narrowing by measuring the cross-sectional area of the 20 microscopic sections of the vessels, first assuming no narrowing and then comparing this to the percent of narrowing determined by precise measurement. Two of the observers were very experienced general autopsy pathologists, one was a first-year pathology resident on his first rotation on the autopsy service, and the fourth observer was a
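
    The digital cross-checking step, comparing visual estimates against measured lumen areas, reduces to a simple area ratio. A sketch assuming idealized circular cross sections (the diameters are illustrative, not measurements from the study):

```python
import math

def percent_stenosis(reference_area, lumen_area):
    """Percent luminal narrowing: loss of cross-sectional area
    relative to the reference (non-stenosed) lumen."""
    return 100.0 * (1.0 - lumen_area / reference_area)

# Illustrative circular lumen: a 4 mm reference diameter narrowed to
# 2 mm. Note that a 50% diameter stenosis is a 75% area stenosis,
# one reason purely visual estimates tend to be imprecise.
reference = math.pi * (4.0 / 2.0) ** 2
residual = math.pi * (2.0 / 2.0) ** 2
occlusion = percent_stenosis(reference, residual)  # 75.0
```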

  20. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation.

    Directory of Open Access Journals (Sweden)

    Jeff A Tracey

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species--giant panda, dugong, and California condor--to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.

  1. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation

    Science.gov (United States)

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fu-Wen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.

  2. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation.

    Science.gov (United States)

    Tracey, Jeff A; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R; Fisher, Robert N

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species--giant panda, dugong, and California condor--to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.
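
    A plain fixed-bandwidth 3D Gaussian kernel density estimate illustrates the basic computation these movement-based estimators extend (the movement-based versions additionally condition on the track between fixes). The telemetry fixes below are simulated, not data from the paper:

```python
import numpy as np

def kde3d(points, grid, bandwidth=1.0):
    """Fixed-bandwidth Gaussian kernel density estimate over 3D
    (x, y, z) locations, evaluated at each row of `grid`."""
    diffs = grid[:, None, :] - points[None, :, :]    # (g, n, 3)
    sq = (diffs ** 2).sum(axis=2) / bandwidth ** 2   # (g, n)
    norm = (2.0 * np.pi) ** 1.5 * bandwidth ** 3
    return np.exp(-0.5 * sq).sum(axis=1) / (len(points) * norm)

rng = np.random.default_rng(42)
# Simulated telemetry fixes (x, y, altitude) clustered around a roost.
fixes = rng.normal(loc=[0.0, 0.0, 100.0], scale=[10.0, 10.0, 20.0],
                   size=(500, 3))
# Density at the cluster center versus a point far outside the range.
query = np.array([[0.0, 0.0, 100.0], [200.0, 200.0, 100.0]])
density = kde3d(fixes, query, bandwidth=8.0)
```

Thresholding such a density surface at, say, the 95% isopleth yields the 3D home-range volumes that the paper visualizes for the panda, dugong, and condor case studies.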

  3. Vision-guided ocular growth in a mutant chicken model with diminished visual acuity.

    Science.gov (United States)

    Ritchey, Eric R; Zelinka, Christopher; Tang, Junhua; Liu, Jun; Code, Kimberly A; Petersen-Jones, Simon; Fischer, Andy J

    2012-09-01

    Visual experience is known to guide ocular growth. We tested the hypothesis that vision-guided ocular growth is disrupted in a model system with diminished visual acuity. We examined whether ocular elongation is influenced by form-deprivation (FD) and lens-imposed defocus in the Retinopathy, Globe Enlarged (RGE) chicken. Young RGE chicks have poor visual acuity, without significant retinal pathology, resulting from a mutation in guanine nucleotide-binding protein β3 (GNB3), also known as transducin β3 or Gβ3. The mutation in GNB3 destabilizes the protein and causes a loss of Gβ3 from photoreceptors and ON-bipolar cells (Ritchey et al., 2010). FD increased ocular elongation in RGE eyes in a manner similar to that seen in wild-type (WT) eyes. By comparison, the excessive ocular elongation that results from hyperopic defocus was increased, whereas myopic defocus failed to significantly decrease ocular elongation in RGE eyes. Brief daily periods of unrestricted vision interrupting FD prevented ocular elongation in RGE chicks in a manner similar to that seen in WT chicks. Glucagonergic amacrine cells differentially expressed the immediate early gene Egr1 in response to growth-guiding stimuli in RGE retinas, but the defocus-dependent up-regulation of Egr1 was smaller in RGE retinas than in WT retinas. We conclude that high visual acuity, and the retinal signaling mediated by Gβ3, is not required for emmetropization or for the excessive ocular elongation caused by FD and hyperopic defocus. However, the loss of acuity and Gβ3 from RGE retinas causes enhanced responses to hyperopic defocus and diminished responses to myopic defocus. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. USING THE SELECTIVE FUNCTIONAL MOVEMENT ASSESSMENT AND REGIONAL INTERDEPENDENCE THEORY TO GUIDE TREATMENT OF AN ATHLETE WITH BACK PAIN: A CASE REPORT.

    Science.gov (United States)

    Goshtigian, Gabriella R; Swanson, Brian T

    2016-08-01

    Despite the multidirectional quality of human movement, common measurement procedures used in physical therapy examination are often uni-planar and lack the ability to assess functional complexities involved in daily activities. Currently, there is no widely accepted, validated standard to assess movement quality. The Selective Functional Movement Assessment (SFMA) is one possible system to objectively assess complex functional movements. The purpose of this case report is to illustrate the application of the SFMA as a guide to the examination, evaluation, and management of a patient with non-specific low back pain (LBP). An adolescent male athlete with LBP was evaluated using the SFMA. It was determined that the patient had mobility limitations remote to the site of pain (thoracic spine and hips) which therapists hypothesized were leading to compensatory hypermobility at the lumbar spine. Guided by the SFMA, initial interventions focused on local (lumbar) symptom management, progressing to remote mobility deficits, and then addressing the local stability deficit. All movement patterns became functional/non-painful except the right upper extremity medial rotation-extension pattern. At discharge, the patient demonstrated increased soft tissue extensibility of hip musculature and joint mobility of the thoracic spine along with normalization of lumbopelvic motor control. Improvements in pain exceeded minimal clinically important differences, from 2-7/10 on a verbal analog scale at initial exam to 0-2/10 at discharge. Developing and progressing a plan of care for an otherwise healthy and active adolescent with non-specific LBP can be challenging. Human movement is a collaborative effort of muscle groups that are interdependent; the use of a movement-based assessment model can help identify weak links affecting overall function. The SFMA helped guide therapists to dysfunctional movements not seen with more conventional examination procedures. Level 4.

  5. Effect of rectal enema on intrafraction prostate movement during image-guided radiotherapy.

    Science.gov (United States)

    Choi, Youngmin; Kwak, Dong-Won; Lee, Hyung-Sik; Hur, Won-Joo; Cho, Won-Yeol; Sung, Gyung Tak; Kim, Tae-Hyo; Kim, Soo-Dong; Yun, Seong-Guk

    2015-04-01

    Rectal volume and movement are major factors that influence prostate location. The aim of this study was to assess the effect of a rectal enema on intrafraction prostate motion. The data from 12 patients with localised prostate cancer were analysed. Each patient underwent image-guided radiotherapy (RT), receiving a total dose of 70 Gy in 28 fractions. Rectal enemas were administered to all of the patients before each RT fraction. The location of the prostate was determined by implanting three fiducial markers under the guidance of transrectal ultrasound. Each patient underwent preparation for IGRT twice before an RT fraction and in the middle of the fraction. The intrafraction displacement of the prostate was calculated by comparing fiducial marker locations before and in the middle of an RT fraction. The rectal enemas were well tolerated by patients. The mean intrafraction prostate movement in 336 RT fractions was 1.11 ± 0.77 mm (range 0.08-7.20 mm). Intrafraction motions of 1, 2 and 3 mm were observed in 56.0%, 89.0% and 97.6% of all RT fractions, respectively. The intrafraction movements on supero-inferior and anteroposterior axes were larger than on the right-to-left axes (P < 0.05). The CTV-to-PTV margin necessary to allow for movement, calculated using the van Herk formula (2.5Σ + 0.7σ), was 1.50 mm. A daily rectal enema before each RT fraction was tolerable and yielded little intrafraction prostate displacement. We think the use of rectal enemas is a feasible method to reduce prostate movement during RT. © 2014 The Royal Australian and New Zealand College of Radiologists.
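The margin recipe quoted in the abstract can be reproduced directly. The Σ and σ values below are illustrative placeholders chosen only to make the sketch runnable, not the study's measured deviations (the paper reports a resulting margin of 1.50 mm):

```python
def van_herk_margin(systematic_sd, random_sd):
    """CTV-to-PTV margin (mm) via the van Herk formula: 2.5*Sigma + 0.7*sigma."""
    return 2.5 * systematic_sd + 0.7 * random_sd

# Illustrative standard deviations in mm (not the study's values)
print(round(van_herk_margin(0.5, 0.35), 3))  # → 1.495
```

The formula weights the systematic (treatment-preparation) standard deviation Σ far more heavily than the random (day-to-day) standard deviation σ, because a systematic error shifts the whole dose distribution in every fraction.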

  6. Effect of rectal enema on intrafraction prostate movement during image-guided radiotherapy

    International Nuclear Information System (INIS)

    Choi, Youngmin; Kwak, Dong-Won; Lee, Hyung-Sik; Hur, Won-Joo; Cho, Won-Yeol; Sung, Gyung Tak; Kim, Tae-Hyo; Kim, Soo-Dong; Yun, Seong-Guk

    2015-01-01

    Rectal volume and movement are major factors that influence prostate location. The aim of this study was to assess the effect of a rectal enema on intrafraction prostate motion. The data from 12 patients with localised prostate cancer were analysed. Each patient underwent image-guided radiotherapy (RT), receiving a total dose of 70 Gy in 28 fractions. Rectal enemas were administered to all of the patients before each RT fraction. The location of the prostate was determined by implanting three fiducial markers under the guidance of transrectal ultrasound. Each patient underwent preparation for IGRT twice before an RT fraction and in the middle of the fraction. The intrafraction displacement of the prostate was calculated by comparing fiducial marker locations before and in the middle of an RT fraction. The rectal enemas were well tolerated by patients. The mean intrafraction prostate movement in 336 RT fractions was 1.11 ± 0.77 mm (range 0.08–7.20 mm). Intrafraction motions of 1, 2 and 3 mm were observed in 56.0%, 89.0% and 97.6% of all RT fractions, respectively. The intrafraction movements on supero-inferior and anteroposterior axes were larger than on the right-to-left axes (P < 0.05). The CTV-to-PTV margin necessary to allow for movement, calculated using the van Herk formula (2.5Σ + 0.7σ), was 1.50 mm. A daily rectal enema before each RT fraction was tolerable and yielded little intrafraction prostate displacement. We think the use of rectal enemas is a feasible method to reduce prostate movement during RT.

  7. Effect of visual distraction and auditory feedback on patient effort during robot-assisted movement training after stroke.

    Science.gov (United States)

    Secoli, Riccardo; Milot, Marie-Helene; Rosati, Giulio; Reinkensmeyer, David J

    2011-04-23

    Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis and fourteen non-impaired healthy control participants, tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Visual distraction decreased participants' effort during a standard robot-assisted movement training task. This effect was greater for the hemiparetic arm, suggesting that the increased demands associated

  8. Modeling eye movements in visual agnosia with a saliency map approach: bottom-up guidance or top-down strategy?

    Science.gov (United States)

    Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey

    2011-08-01

    Two recent papers (Foulsham, Barton, Kingstone, Dewhurst, & Underwood, 2009; Mannan, Kennard, & Husain, 2009) report that neuropsychological patients with a profound object recognition problem (visual agnosic subjects) show differences from healthy observers in the way their eye movements are controlled when looking at images. The interpretation of these papers is that eye movements can be modeled as the selection of points on a saliency map, and that agnosic subjects show an increased reliance on visual saliency, i.e., brightness and contrast in low-level stimulus features. Here we review this approach and present new data from our own experiments with an agnosic patient that quantifies the relationship between saliency and fixation location. In addition, we consider whether the perceptual difficulties of individual patients might be modeled by selectively weighting the different features involved in a saliency map. Our data indicate that saliency is not always a good predictor of fixation in agnosia: even for our agnosic subject, as for normal observers, the saliency-fixation relationship varied as a function of the task. This means that top-down processes still have a significant effect on the earliest stages of scanning in the setting of visual agnosia, indicating severe limitations for the saliency map model. Top-down, active strategies, which are the hallmark of the human visual system, play a vital role in eye movement control, whether we know what we are looking at or not. Copyright © 2011 Elsevier Ltd. All rights reserved.
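A common way to quantify the saliency-fixation relationship discussed above is normalized scanpath saliency (NSS): z-score the saliency map and average it at the fixated pixels, so chance-level fixations score near zero. The map and fixations below are synthetic, not the patient data, and the paper does not specify this particular metric:

```python
import numpy as np

def nss(saliency_map, fixations):
    """Mean z-scored saliency at fixated (row, col) pixels."""
    z = (saliency_map - saliency_map.mean()) / saliency_map.std()
    rows, cols = zip(*fixations)
    return float(z[list(rows), list(cols)].mean())

rng = np.random.default_rng(1)
smap = rng.random((48, 64))                       # synthetic saliency map
peak = np.unravel_index(int(smap.argmax()), smap.shape)
rand_fix = [(int(r), int(c)) for r, c in
            zip(rng.integers(0, 48, 100), rng.integers(0, 64, 100))]

print(nss(smap, [peak]) > 1.0)       # fixating the peak: well above chance
print(abs(nss(smap, rand_fix)) < 0.5)  # random fixations: near zero
```

Comparing NSS across tasks, as the authors do for their saliency-fixation analysis, shows whether gaze tracks low-level saliency more in one condition than another.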

  9. Real-time modulation of visual feedback on human full-body movements in a virtual mirror: development and proof-of-concept.

    Science.gov (United States)

    Roosink, Meyke; Robitaille, Nicolas; McFadyen, Bradford J; Hébert, Luc J; Jackson, Philip L; Bouyer, Laurent J; Mercier, Catherine

    2015-01-05

    Virtual reality (VR) provides interactive multimodal sensory stimuli and biofeedback, and can be a powerful tool for physical and cognitive rehabilitation. However, existing systems have generally not implemented realistic full-body avatars and/or a scaling of visual movement feedback. We developed a "virtual mirror" that displays a realistic full-body avatar that responds to full-body movements in all movement planes in real-time, and that allows for the scaling of visual feedback on movements in real-time. The primary objective of this proof-of-concept study was to assess the ability of healthy subjects to detect scaled feedback on trunk flexion movements. The "virtual mirror" was developed by integrating motion capture, virtual reality and projection systems. A protocol was developed to provide both augmented and reduced feedback on trunk flexion movements while sitting and standing. The task required reliance on both visual and proprioceptive feedback. The ability to detect scaled feedback was assessed in healthy subjects (n = 10) using a two-alternative forced choice paradigm. Additionally, immersion in the VR environment and task adherence (flexion angles, velocity, and fluency) were assessed. The ability to detect scaled feedback could be modelled using a sigmoid curve with a high goodness of fit (R2 range 89-98%). The point of subjective equivalence was not significantly different from 0 (i.e. not shifted), indicating an unbiased perception. The just noticeable difference was 0.035 ± 0.007, indicating that subjects were able to discriminate different scaling levels consistently. VR immersion was reported to be good, despite some perceived delays between movements and VR projections. Movement kinematic analysis confirmed task adherence. The new "virtual mirror" extends existing VR systems for motor and pain rehabilitation by enabling the use of realistic full-body avatars and scaled feedback. Proof-of-concept was demonstrated for the assessment of
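The sigmoid-fitting step can be sketched by linearizing a logistic psychometric function with a least-squares logit fit. The scaling levels and response proportions below are invented for illustration, not the study's data (which yielded a PSE near 0 and a JND of about 0.035):

```python
import numpy as np

# Proportion of "amplified" responses per feedback scaling level (synthetic)
scaling = np.array([-0.10, -0.06, -0.03, 0.0, 0.03, 0.06, 0.10])
p_resp  = np.array([0.05, 0.15, 0.35, 0.50, 0.65, 0.85, 0.95])

# Logistic model p = 1 / (1 + exp(-(x - PSE)/s)) linearizes to
# logit(p) = (x - PSE)/s, a straight line in x.
logit = np.log(p_resp / (1.0 - p_resp))
slope, intercept = np.polyfit(scaling, logit, 1)

pse = -intercept / slope          # 50% point; 0 means unbiased perception
jnd = np.log(3.0) / slope         # distance from the 50% to the 75% point

print(abs(pse) < 1e-6, round(float(jnd), 3))  # → True 0.038
```

A full analysis would fit the sigmoid by maximum likelihood on the raw two-alternative forced-choice responses; the logit regression above is the quick linear approximation.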

  10. Visual Sample Plan Version 7.0 User's Guide

    Energy Technology Data Exchange (ETDEWEB)

    Matzke, Brett D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Newburn, Lisa LN [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hathaway, John E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bramer, Lisa M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wilson, John E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Dowson, Scott T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sego, Landon H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Pulsipher, Brent A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-03-01

    This user's guide describes Visual Sample Plan (VSP) Version 7.0 and provides instructions for using the software. VSP selects the appropriate number and location of environmental samples to ensure that the results of statistical tests performed to provide input to risk decisions have the required confidence and performance. VSP Version 7.0 provides sample-size equations or algorithms needed by specific statistical tests appropriate for specific environmental sampling objectives. It also provides data quality assessment and statistical analysis functions to support evaluation of the data and determine whether the data support decisions regarding sites suspected of contamination. The easy-to-use program is highly visual and graphic. VSP runs on personal computers with Microsoft Windows operating systems (XP, Vista, Windows 7, and Windows 8). Designed primarily for project managers and users without expertise in statistics, VSP is applicable to two- and three-dimensional populations to be sampled (e.g., rooms and buildings, surface soil, a defined layer of subsurface soil, water bodies, and other similar applications) for studies of environmental quality. VSP is also applicable for designing sampling plans for assessing chem/rad/bio threat and hazard identification within rooms and buildings, and for designing geophysical surveys for unexploded ordnance (UXO) identification.
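A representative example of the kind of sample-size equation VSP implements is the normal-approximation formula for a one-sample, one-sided mean test, n = ((z_{1-α} + z_{1-β})·σ/Δ)². This sketch is not taken from the VSP manual; the exact equations VSP uses depend on the sampling design chosen, and the parameter values here are illustrative:

```python
import math
from statistics import NormalDist

def sample_size(alpha, beta, sigma, delta):
    """Samples needed to detect a mean shift delta with sd sigma,
    false-positive rate alpha and false-negative rate beta."""
    z = NormalDist().inv_cdf
    n = ((z(1.0 - alpha) + z(1.0 - beta)) * sigma / delta) ** 2
    return math.ceil(n)

# 5% false positives, 80% power, sd 2.0, detectable difference 1.0
print(sample_size(0.05, 0.20, 2.0, 1.0))  # → 25
```

Halving the detectable difference quadruples the required sample size, which is why tools like VSP let planners trade off confidence, power, and cost interactively.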

  11. The association of visually-assessed quality of movement during jump-landing with ankle dorsiflexion range-of-motion and hip abductor muscle strength among healthy female athletes.

    Science.gov (United States)

    Rabin, Alon; Einstein, Ofira; Kozol, Zvi

    2018-05-01

    To explore the association of ankle dorsiflexion (DF) range of motion (ROM) and hip abductor muscle strength with visually-assessed quality of movement during jump-landing. Cross-sectional. Gymnasium of participating teams. 37 female volleyball players. Quality of movement in the frontal-plane, sagittal-plane, and overall (both planes) was visually rated as "good/moderate" or "poor". Weight-bearing ankle DF ROM and hip abductor muscle strength were compared between participants with differing quality of movement. Weight-bearing DF ROM on both sides was decreased among participants with "poor" sagittal-plane quality of movement (dominant side: 50.8° versus 43.6°, P = .02; non-dominant side: 54.6° versus 45.9°, P = .01), as well as among participants with an overall "poor" quality of movement (dominant side: 51.8° versus 44.0°; non-dominant side: 53.9° versus 46.0°, P = .02). No differences in hip abductor muscle strength were noted between participants with differing quality of movement. Visual assessment of jump-landing can detect differences in quality of movement that are associated with ankle DF ROM. Clinicians observing a poor quality of movement may wish to assess ankle DF ROM. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Visually guided male urinary catheterization: a feasibility study.

    Science.gov (United States)

    Willette, Paul A; Banks, Kevin; Shaffer, Lynn

    2013-01-01

    Ten percent to 15% of urinary catheterizations involve complications. New techniques to reduce risks and pain are indicated. This study examines the feasibility and safety of male urinary catheterization by nursing personnel using a visually guided device in a clinical setting. The device, a 0.6-mm fiber-optic bundle inside a 14F triple-lumen flexible urinary catheter with a lubricious coating, irrigation port, and angled tip, connects to a camera, allowing real-time viewing of progress on a color monitor. Two emergency nurses were trained to use the device. Male patients 18 years or older presenting to the emergency department with an indication for urinary catheterization using a standard Foley or Coudé catheter were eligible to participate in the study. Exclusion criteria were a current suprapubic tube or gross hematuria prior to the procedure. Twenty-five patients were enrolled. Data collected included success of placement, total procedure time, pre-procedure pain and maximum pain during the procedure, gross hematuria, abnormalities or injuries identified if catheterization failed, occurrence of and reason for equipment failures, and number of passes required for placement. All catheters were successfully placed. The median number of passes required was 1. For all but one patient, procedure time was ≤ 17 minutes. A median increase in pain scores of 1 point from baseline to the maximum was reported. Gross hematuria was observed in 2 patients. The success rate for placement of a Foley catheter with the visually guided device was 100%, indicating its safety, accuracy, and feasibility in a clinical setting. Minimal pain was associated with the procedure. Copyright © 2013 Emergency Nurses Association. Published by Mosby, Inc. All rights reserved.

  13. A Somatic Movement Approach to Fostering Emotional Resiliency through Laban Movement Analysis

    Directory of Open Access Journals (Sweden)

    Rachelle P. Tsachor

    2017-09-01

    Although movement has long been recognized as expressing emotion and as an agent of change for emotional state, there was a dearth of scientific evidence specifying which aspects of movement influence specific emotions. The recent identification of clusters of Laban movement components which elicit and enhance the basic emotions of anger, fear, sadness and happiness indicates which types of movements can affect these emotions (Shafir et al., 2016), but not how best to apply this knowledge. This perspective paper lays out a conceptual groundwork for how to effectively use these new findings to support emotional resiliency through voluntary choice of one's posture and movements. We suggest that three theoretical principles from Laban Movement Analysis (LMA) can guide the gradual change in movement components in one's daily movements to somatically support a shift in affective state: (A) introduce new movement components in developmental order; (B) use LMA affinities among components to guide the expansion of expressive movement range; and (C) sequence change among components based on Laban's Space Harmony theory to support the gradual integration of that new range. The methods postulated in this article have potential to foster resiliency and provide resources for self-efficacy by expanding our capacity to adapt emotionally to challenges through modulating our movement responses.

  14. Visual Analytics of Complex Genomics Data to Guide Effective Treatment Decisions

    Directory of Open Access Journals (Sweden)

    Quang Vinh Nguyen

    2016-09-01

    In cancer biology, genomics represents a big data problem that needs accurate visual data processing and analytics. The human genome is very complex with thousands of genes that contain the information about the individual patients and the biological mechanisms of their disease. Therefore, when building a framework for personalised treatment, the complexity of the genome must be captured in meaningful and actionable ways. This paper presents a novel visual analytics framework that enables effective analysis of large and complex genomics data. By providing interactive visualisations from the overview of the entire patient cohort to the detail view of individual genes, our work potentially guides effective treatment decisions for childhood cancer patients. The framework consists of multiple components enabling the complete analytics supporting personalised medicine, including similarity space construction, automated analysis, visualisation, gene-to-gene comparison and user-centric interaction and exploration based on feature selection. In addition to the traditional way to visualise data, we utilise the Unity3D platform for developing a smooth and interactive visual presentation of the information. This aims to provide better rendering, image quality, ergonomics and user experience to non-specialists or young users who are familiar with 3D gaming environments and interfaces. We illustrate the effectiveness of our approach through case studies with datasets from childhood cancers, B-cell Acute Lymphoblastic Leukaemia (ALL) and Rhabdomyosarcoma (RMS) patients, on how to guide the effective treatment decision in the cohort.

  15. Lateralization of visually guided detour behaviour in the common chameleon, Chamaeleo chameleon, a reptile with highly independent eye movements.

    Science.gov (United States)

    Lustig, Avichai; Ketter-Katz, Hadas; Katzir, Gadi

    2013-11-01

    Chameleons (Chamaeleonidae, Reptilia), in common with most ectotherms, show full optic nerve decussation and sparse inter-hemispheric commissures. Chameleons are unique in their capacity for highly independent, large-amplitude eye movements. We address the question: Do common chameleons, Chamaeleo chameleon, during detour, show patterns of lateralization of motion and of eye use that differ from those shown by other ectotherms? To reach a target (prey) in passing an obstacle in a Y-maze, chameleons were required to make a left or a right detour. We analyzed the direction of detours and eye use and found that: (i) individuals differed in their preferred detour direction, (ii) eye use was lateralized at the group level, with significantly longer durations of viewing the target with the right eye, compared with the left eye, (iii) during left side, but not during right side, detours the durations of viewing the target with the right eye were significantly longer than the durations with the left eye. Thus, despite the uniqueness of chameleons' visual system, they display patterns of lateralization of motion and of eye use, typical of other ectotherms. These findings are discussed in relation to hemispheric functions. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Visual sensitivity for luminance and chromatic stimuli during the execution of smooth pursuit and saccadic eye movements.

    Science.gov (United States)

    Braun, Doris I; Schütz, Alexander C; Gegenfurtner, Karl R

    2017-07-01

    Visual sensitivity is dynamically modulated by eye movements. During saccadic eye movements, sensitivity is reduced selectively for low-spatial frequency luminance stimuli and largely unaffected for high-spatial frequency luminance and chromatic stimuli (Nature 371 (1994), 511-513). During smooth pursuit eye movements, sensitivity for low-spatial frequency luminance stimuli is moderately reduced while sensitivity for chromatic and high-spatial frequency luminance stimuli is even increased (Nature Neuroscience, 11 (2008), 1211-1216). Since these effects are at least partly of different polarity, we investigated the combined effects of saccades and smooth pursuit on visual sensitivity. For the time course of chromatic sensitivity, we found that detection rates increased slightly around pursuit onset. During saccades to static and moving targets, detection rates dropped briefly before the saccade and reached a minimum at saccade onset. This reduction of chromatic sensitivity was present whenever a saccade was executed and it was not modified by subsequent pursuit. We also measured contrast sensitivity for flashed high- and low-spatial frequency luminance and chromatic stimuli during saccades and pursuit. During saccades, the reduction of contrast sensitivity was strongest for low-spatial frequency luminance stimuli (about 90%). However, a significant reduction was also present for chromatic stimuli (about 58%). Chromatic sensitivity was increased during smooth pursuit (about 12%). These results suggest that the modulation of visual sensitivity during saccades and smooth pursuit is more complex than previously assumed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Rapid Eye Movements (REMs) and visual dream recall in both congenitally blind and sighted subjects

    Science.gov (United States)

    Bértolo, Helder; Mestre, Tiago; Barrio, Ana; Antona, Beatriz

    2017-08-01

    Our objective was to evaluate rapid eye movements (REMs) associated with visual dream recall in sighted subjects and in the congenitally blind. During two consecutive nights, polysomnographic recordings were performed at the subjects' homes. REMs were detected by visual inspection on both EOG channels (EOG-H, EOG-V) and further classified as occurring isolated or in bursts. Dream recall was defined by the existence of a dream report. The two groups were compared using a t-test and also a two-way ANOVA with a post-hoc Fisher test, for the factors diagnosis (blind vs. sighted) and dream recall (yes or no) as a function of time. The average number of REM awakenings per subject and the recall ability were identical in both groups. The congenitally blind (CB) had a lower REM density than the sighted controls (CS); the same applied to REM bursts and isolated eye movements. In the two-way ANOVA, REM bursts and REM density were significantly different for positive dream recall, mainly for the CB group and for diagnosis; furthermore, for both features, significant results were obtained for the interaction of time, recall and diagnosis; the interaction of recall and time was, however, stronger. In line with previous findings, the data show that the blind have lower REM density. However, the ability of dream recall in congenitally blind and sighted controls is identical. In both groups, visual dream recall is associated with an increase in REM bursts and density. REM bursts also show differences in the temporal profile. REM visual dream recall is associated with increased REM activity.

  18. The Role of Clarity and Blur in Guiding Visual Attention in Photographs

    Science.gov (United States)

    Enns, James T.; MacDonald, Sarah C.

    2013-01-01

    Visual artists and photographers believe that a viewer's gaze can be guided by selective use of image clarity and blur, but there is little systematic research. In this study, participants performed several eye-tracking tasks with the same naturalistic photographs, including recognition memory for the entire photo, as well as recognition memory…

  19. Effect of visual distraction and auditory feedback on patient effort during robot-assisted movement training after stroke

    Directory of Open Access Journals (Sweden)

    Reinkensmeyer David J

    2011-04-01

    Background Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Methods Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis and fourteen non-impaired healthy control participants, tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Results Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Conclusions Visual distraction decreased participants' effort during a standard robot-assisted movement training task. 
This effect was greater for

  20. Influence of semantic consistency and perceptual features on visual attention during scene viewing in toddlers.

    Science.gov (United States)

    Helo, Andrea; van Ommen, Sandrien; Pannasch, Sebastian; Danteny-Dordoigne, Lucile; Rämä, Pia

    2017-11-01

    Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e. saliency, centre distance, clutter and object size) and linguistic properties (i.e. object label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while exploring everyday scenes which either contained an inconsistent (e.g., soap on a breakfast table) or consistent (e.g., soap in a bathroom) object. Perceptual features such as saliency, centre distance and clutter of the scene affected looking times in the toddler group during the whole viewing time, whereas looking times in adults were affected only by centre distance during the early viewing time. Adults looked longer at inconsistent than consistent objects whether the objects had high or low saliency. In contrast, toddlers showed a semantic consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, while toddlers with higher vocabulary skills looked equally long at both consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye movement guidance in toddlers than in adults. Our results also indicate that language skills influence cognitive but not perceptual guidance of eye movements during scene perception in toddlers. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Vection is the main contributor to motion sickness induced by visual yaw rotation: Implications for conflict and eye movement theories.

    Directory of Open Access Journals (Sweden)

    Suzanne A E Nooij

    This study investigated the role of vection (i.e., a visually induced sense of self-motion), optokinetic nystagmus (OKN), and inadvertent head movements in visually induced motion sickness (VIMS), evoked by yaw rotation of the visual surround. These three elements have all been proposed as contributing factors in VIMS, as they can be linked to different motion sickness theories. However, a full understanding of the role of each factor is still lacking because independent manipulation has proven difficult in the past. We adopted an integrative approach to the problem by obtaining measures of potentially relevant parameters in four experimental conditions and subsequently combining them in a linear mixed regression model. To that end, participants were exposed to visual yaw rotation in four separate sessions. Using a full factorial design, the OKN was manipulated by a fixation target (present/absent), and vection strength by introducing a conflict in the motion direction of the central and peripheral field of view (present/absent). In all conditions, head movements were minimized as much as possible. Measured parameters included vection strength, vection variability, OKN slow phase velocity, OKN frequency, the number of inadvertent head movements, and inadvertent head tilt. Results show that VIMS increases with vection strength, but that this relation varies among participants (R2 = 0.48). Regression parameters for vection variability, head and eye movement parameters were not significant. These results may seem to be in line with the Sensory Conflict theory on motion sickness, but we argue that a more detailed definition of the exact nature of the conflict is required to fully appreciate the relationship between vection and VIMS.
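The core regression step (relating VIMS scores to vection strength and reporting explained variance) can be sketched with an ordinary least-squares fit on synthetic scores. The paper's actual analysis was a linear mixed model with per-participant terms, which this simplification omits; the data and coefficients here are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
vection = rng.uniform(0.0, 10.0, 80)                  # vection strength ratings
vims = 0.6 * vection + rng.normal(0.0, 2.0, 80)       # synthetic sickness scores

slope, intercept = np.polyfit(vection, vims, 1)
pred = slope * vection + intercept
ss_res = np.sum((vims - pred) ** 2)
ss_tot = np.sum((vims - vims.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                            # explained variance

print(slope > 0.0, 0.0 < r2 < 1.0)  # → True True
```

A mixed model (e.g. statsmodels' `mixedlm` with a random slope per participant) would additionally capture the between-participant variation in the vection-VIMS relation that the abstract emphasises.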

  2. The Impact of Task Demands on Fixation-Related Brain Potentials during Guided Search.

    Directory of Open Access Journals (Sweden)

    Anthony J Ries

    Full Text Available Recording synchronous EEG and eye-tracking data provides a unique methodological approach for measuring the sensory and cognitive processes of overt visual search. Using this approach we obtained fixation-related potentials (FRPs) during a guided visual search task, focusing specifically on the lambda and P3 components. An outstanding question is whether the lambda and P3 FRP components are influenced by concurrent task demands. We addressed this question by obtaining simultaneous eye-movement and electroencephalographic (EEG) measures during a guided visual search task while parametrically modulating working memory load using an auditory N-back task. Participants performed the guided search task alone, while ignoring binaurally presented digits, or while using the auditory information in a 0-, 1-, or 2-back task. The results showed increased reaction time and decreased accuracy in both the visual search and N-back tasks as a function of auditory load. Moreover, high auditory task demands increased the latency of the P3 but not the lambda component, and reduced the amplitude of both. These results show that both early and late stages of visual processing indexed by FRPs are significantly affected by concurrent task demands imposed by auditory working memory.

  3. An assessment of auditory-guided locomotion in an obstacle circumvention task.

    Science.gov (United States)

    Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina

    2016-06-01

    This study investigated how effectively audition can be used to guide navigation around an obstacle. Ten blindfolded normally sighted participants navigated around a 0.6 × 2 m obstacle while producing self-generated mouth click sounds. Objective movement performance was measured using a Vicon motion capture system. Performance with full vision without generating sound was used as a baseline for comparison. The obstacle's location was varied randomly from trial to trial: it was either straight ahead or 25 cm to the left or right relative to the participant. Although audition provided sufficient information to detect the obstacle and guide participants around it without collision in the majority of trials, buffer space (clearance between the shoulder and obstacle), overall movement times, and number of velocity corrections were significantly (p < 0.05) greater with auditory guidance than visual guidance. Collisions sometimes occurred under auditory guidance, suggesting that audition did not always provide an accurate estimate of the space between the participant and obstacle. Unlike visual guidance, participants did not always walk around the side that afforded the most space during auditory guidance. Mean buffer space was 1.8 times higher under auditory than under visual guidance. Results suggest that sound can be used to generate buffer space when vision is unavailable, allowing navigation around an obstacle without collision in the majority of trials.

  4. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    Science.gov (United States)

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as the active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Second, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line

  5. Context-dependent neural activation: internally and externally guided rhythmic lower limb movement in individuals with and without neurodegenerative disease

    Directory of Open Access Journals (Sweden)

    Madeleine Eve Hackney

    2015-12-01

    Full Text Available Parkinson's Disease (PD) is a neurodegenerative disorder that has received considerable attention in allopathic medicine over the past decades. However, it is clear that, to date, pharmacological and surgical interventions do not fully address symptoms of PD and patients' quality of life. As both an alternative therapy and as an adjuvant to conventional approaches, several types of rhythmic movement (e.g., movement strategies, dance, tandem biking, tai chi) have shown improvements to motor symptoms, lower limb control and postural stability in people with PD (Amano, Nocera, Vallabhajosula, Juncos, Gregor, Waddell et al., 2013; Earhart, 2009; M. E. Hackney & Earhart, 2008; Kadivar, Corcos, Foto, & Hondzinski, 2011; Morris, Iansek, & Kirkwood, 2009; Ridgel, Vitek, & Alberts, 2009). However, while these programs are increasing in number, still little is known about the neural mechanisms underlying motor improvements attained with such interventions. Studying limb motor control under task-specific contexts can help determine the mechanisms of rehabilitation effectiveness. Both internally guided (IG) and externally guided (EG) movement strategies have evidence to support their use in rehabilitative programs. However, there appears to be a degree of differentiation in the neural substrates involved in IG versus EG designs. Because of the potential task-specific benefits of rhythmic training within a rehabilitative context, this report will consider the use of IG and EG movement strategies, and observations produced by functional magnetic resonance imaging (fMRI) and other imaging techniques. This review will present findings from lower limb imaging studies, under IG and EG conditions, for populations with and without movement disorders. We will discuss how these studies might inform movement disorders rehabilitation (in the form of rhythmic, music-based movement training) and highlight research gaps. We believe better understanding of lower limb neural

  6. Augmented visual feedback of movement performance to enhance walking recovery after stroke: study protocol for a pilot randomised controlled trial

    Directory of Open Access Journals (Sweden)

    Thikey Heather

    2012-09-01

    Full Text Available Abstract Background Increasing evidence suggests that use of augmented visual feedback could be a useful approach to stroke rehabilitation. In current clinical practice, visual feedback of movement performance is often limited to the use of mirrors or video. However, neither approach is optimal since cognitive and self-image issues can distract or distress patients and their movement can be obscured by clothing or limited viewpoints. Three-dimensional motion capture has the potential to provide accurate kinematic data required for objective assessment and feedback in the clinical environment. However, such data are currently presented in numerical or graphical format, which is often impractical in a clinical setting. Our hypothesis is that presenting this kinematic data using bespoke visualisation software, which is tailored for gait rehabilitation after stroke, will provide a means whereby feedback of movement performance can be communicated in a more meaningful way to patients. This will result in increased patient understanding of their rehabilitation and will enable progress to be tracked in a more accessible way. Methods The hypothesis will be assessed using an exploratory (phase II) randomised controlled trial. Stroke survivors eligible for this trial will be in the subacute stage of stroke and have impaired walking ability (Functional Ambulation Classification of 1 or more). Participants (n = 45) will be randomised into three groups to compare the use of the visualisation software during overground physical therapy gait training against an intensity-matched and attention-matched placebo group and a usual care control group. The primary outcome measure will be walking speed. Secondary measures will be Functional Ambulation Category, Timed Up and Go, Rivermead Visual Gait Assessment, Stroke Impact Scale-16 and spatiotemporal parameters associated with walking. Additional qualitative measures will be used to assess the participant

  7. A guide to the visual analysis and communication of biomolecular structural data.

    Science.gov (United States)

    Johnson, Graham T; Hertig, Samuel

    2014-10-01

    Biologists regularly face an increasingly difficult task - to effectively communicate bigger and more complex structural data using an ever-expanding suite of visualization tools. Whether presenting results to peers or educating an outreach audience, a scientist can achieve maximal impact with minimal production time by systematically identifying an audience's needs, planning solutions from a variety of visual communication techniques and then applying the most appropriate software tools. A guide to available resources that range from software tools to professional illustrators can help researchers to generate better figures and presentations tailored to any audience's needs, and enable artistically inclined scientists to create captivating outreach imagery.

  8. Chess players' eye movements reveal rapid recognition of complex visual patterns: Evidence from a chess-related visual search task.

    Science.gov (United States)

    Sheridan, Heather; Reingold, Eyal M

    2017-03-01

    To explore the perceptual component of chess expertise, we monitored the eye movements of expert and novice chess players during a chess-related visual search task that tested anecdotal reports that a key differentiator of chess skill is the ability to visualize the complex moves of the knight piece. Specifically, chess players viewed an array of four minimized chessboards, and they rapidly searched for the target board that allowed a knight piece to reach a target square in three moves. On each trial, there was only one target board (i.e., the "Yes" board), and for the remaining "lure" boards, the knight's path was blocked on either the first move (the "Easy No" board) or the second move (the "Difficult No" board). As evidence that chess experts can rapidly differentiate complex chess-related visual patterns, the experts (but not the novices) showed longer first-fixation durations on the "Yes" board relative to the "Difficult No" board. Moreover, as hypothesized, the task strongly differentiated chess skill: Reaction times were more than four times faster for the experts relative to novices, and reaction times were correlated with within-group measures of expertise (i.e., official chess ratings, number of hours of practice). These results indicate that a key component of chess expertise is the ability to rapidly recognize complex visual patterns.

  9. Eye movements reveal epistemic curiosity in human observers.

    Science.gov (United States)

    Baranes, Adrien; Oudeyer, Pierre-Yves; Gottlieb, Jacqueline

    2015-12-01

    Saccadic (rapid) eye movements are primary means by which humans and non-human primates sample visual information. However, while saccadic decisions are intensively investigated in instrumental contexts where saccades guide subsequent actions, it is largely unknown how they may be influenced by curiosity - the intrinsic desire to learn. While saccades are sensitive to visual novelty and visual surprise, no study has examined their relation to epistemic curiosity - interest in symbolic, semantic information. To investigate this question, we tracked the eye movements of human observers while they read trivia questions and, after a brief delay, were visually given the answer. We show that higher curiosity was associated with earlier anticipatory orienting of gaze toward the answer location without changes in other metrics of saccades or fixations, and that these influences were distinct from those produced by variations in confidence and surprise. Across subjects, the enhancement of anticipatory gaze was correlated with measures of trait curiosity from personality questionnaires. Finally, a machine learning algorithm could predict curiosity in a cross-subject manner, relying primarily on statistical features of the gaze position before the answer onset and independently of covariations in confidence or surprise, suggesting potential practical applications for educational technologies, recommender systems and research in cognitive sciences. With this article, we provide full access to the annotated database allowing readers to reproduce the results. Epistemic curiosity produces specific effects on oculomotor anticipation that can be used to read out curiosity states. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. Visual Soccer Analytics: Understanding the Characteristics of Collective Team Movement Based on Feature-Driven Analysis and Abstraction

    Directory of Open Access Journals (Sweden)

    Manuel Stein

    2015-10-01

    Full Text Available With recent advances in sensor technologies, large amounts of movement data have become available in many application areas. A novel, promising application is the data-driven analysis of team sport. Specifically, soccer matches comprise rich, multivariate movement data at high temporal and geospatial resolution. Capturing and analyzing complex movement patterns and interdependencies between the players with respect to various characteristics is challenging. So far, soccer experts have manually post-analyzed game situations, identifying patterns based on their experience. We propose a visual analysis system for interactive identification of soccer patterns and situations of interest to the analyst. Our approach builds on a preliminary system, which is enhanced by semantic features defined together with a soccer domain expert. The system includes a range of useful visualizations to show the ranking of features over time and plots the change of game play situations, both helping the analyst to interpret complex game situations. A novel workflow improves the analysis process through a learning stage that takes user feedback into account. We evaluate our approach by analyzing real-world soccer matches, illustrate several use cases and collect additional expert feedback. The resulting findings are discussed with subject matter experts.

  11. ConnectomeExplorer: Query-guided visual analysis of large volumetric neuroscience data

    KAUST Repository

    Beyer, Johanna

    2013-12-01

    This paper presents ConnectomeExplorer, an application for the interactive exploration and query-guided visual analysis of large volumetric electron microscopy (EM) data sets in connectomics research. Our system incorporates a knowledge-based query algebra that supports the interactive specification of dynamically evaluated queries, which enable neuroscientists to pose and answer domain-specific questions in an intuitive manner. Queries are built step by step in a visual query builder, building more complex queries from combinations of simpler queries. Our application is based on a scalable volume visualization framework that scales to multiple volumes of several teravoxels each, enabling the concurrent visualization and querying of the original EM volume, additional segmentation volumes, neuronal connectivity, and additional meta data comprising a variety of neuronal data attributes. We evaluate our application on a data set of roughly one terabyte of EM data and 750 GB of segmentation data, containing over 4,000 segmented structures and 1,000 synapses. We demonstrate typical use-case scenarios of our collaborators in neuroscience, where our system has enabled them to answer specific scientific questions using interactive querying and analysis on the full-size data for the first time. © 1995-2012 IEEE.

  12. Comparative evaluation of toric intraocular lens alignment and visual quality with image-guided surgery and conventional three-step manual marking.

    Science.gov (United States)

    Titiyal, Jeewan S; Kaur, Manpreet; Jose, Cijin P; Falera, Ruchita; Kinkar, Ashutosh; Bageshwar, Lalit Ms

    2018-01-01

    To compare toric intraocular lens (IOL) alignment assisted by image-guided surgery or manual marking methods and its impact on visual quality. This prospective comparative study enrolled 80 eyes with cataract and astigmatism ≥1.5 D to undergo phacoemulsification with toric IOL alignment by the manual marking method using a bubble marker (group I, n=40) or Callisto eye and Z align (group II, n=40). Postoperatively, accuracy of alignment and visual quality were assessed with a ray tracing aberrometer. The primary outcome measure was deviation from the target axis of implantation. Secondary outcome measures were visual quality and acuity. Follow-up was performed on postoperative days (PODs) 1 and 30. Deviation from the target axis of implantation was significantly less in group II on PODs 1 and 30 (group I: 5.5°±3.3°, group II: 3.6°±2.6°; p=0.005). Postoperative refractive cylinder was -0.89±0.35 D in group I and -0.64±0.36 D in group II (p=0.003). Visual acuity was comparable between the two groups. Visual quality, measured in terms of Strehl ratio, was significantly better in the image-guided surgery group. A significant negative correlation was observed between deviation from the target axis and visual quality parameters (Strehl ratio and MTF). Image-guided surgery allows precise alignment of toric IOL without the need for reference marking. It is associated with superior visual quality, which correlates with the precision of IOL alignment.

  13. Abnormal externally guided movement preparation in recent-onset schizophrenia is associated with impaired selective attention to external input.

    Science.gov (United States)

    Smid, Henderikus G O M; Westenbroek, Joanna M; Bruggeman, Richard; Knegtering, Henderikus; Van den Bosch, Robert J

    2009-11-30

    Several theories propose that the primary cognitive impairment in schizophrenia concerns a deficit in the processing of external input information. There is also evidence, however, for impaired motor preparation in schizophrenia. This raises the question of whether the impaired motor preparation in schizophrenia is a secondary consequence of disturbed (selective) processing of the input needed for that preparation, or an independent primary deficit. The aim of the present study was to discriminate between these hypotheses by investigating externally guided movement preparation in relation to selective stimulus processing. The sample comprised 16 recent-onset schizophrenia patients and 16 controls who performed a movement-precuing task. In this task, a precue delivered information about one, two or no parameters of a movement summoned by a subsequent stimulus. Performance measures and measures derived from the electroencephalogram showed that patients derived smaller benefits from the precues and showed less cue-based preparatory activity in advance of the imperative stimulus than the controls, suggesting a response preparation deficit. However, patients also showed less activity reflecting selective attention to the precue. We therefore conclude that the existing evidence for an impairment of externally guided motor preparation in schizophrenia is most likely due to a deficit in selective attention to the external input, which lends support to theories proposing that the primary cognitive deficit in schizophrenia concerns the processing of input information.

  14. Subthalamic nucleus detects unnatural android movement.

    Science.gov (United States)

    Ikeda, Takashi; Hirata, Masayuki; Kasaki, Masashi; Alimardani, Maryam; Matsushita, Kojiro; Yamamoto, Tomoyuki; Nishio, Shuichi; Ishiguro, Hiroshi

    2017-12-19

    An android, i.e., a realistic humanoid robot with human-like capabilities, may induce an uncanny feeling in human observers. The uncanny feeling about an android has two main causes: its appearance and movement. The uncanny feeling about an android increases when its appearance is almost human-like but its movement is not fully natural or comparable to human movement. Even if an android has human-like flexible joints, its slightly jerky movements cause a human observer to detect subtle unnaturalness in them. However, the neural mechanism underlying the detection of unnatural movements remains unclear. We conducted an fMRI experiment to compare the observation of an android and the observation of a human on which the android is modelled, and we found differences in the activation pattern of the brain regions that are responsible for the production of smooth and natural movement. More specifically, we found that the visual observation of the android, compared with that of the human model, caused greater activation in the subthalamic nucleus (STN). When the android's slightly jerky movements are visually observed, the STN detects their subtle unnaturalness. This finding suggests that the detection of unnatural movements is attributed to an error signal resulting from a mismatch between a visual input and an internal model for smooth movement.

  15. Masked Visual Analysis: Minimizing Type I Error in Visually Guided Single-Case Design for Communication Disorders.

    Science.gov (United States)

    Byun, Tara McAllister; Hitchcock, Elaine R; Ferron, John

    2017-06-10

    Single-case experimental designs are widely used to study interventions for communication disorders. Traditionally, single-case experiments follow a response-guided approach, where design decisions during the study are based on participants' observed patterns of behavior. However, this approach has been criticized for its high rate of Type I error. In masked visual analysis (MVA), response-guided decisions are made by a researcher who is blinded to participants' identities and treatment assignments. MVA also makes it possible to conduct a hypothesis test assessing the significance of treatment effects. This tutorial describes the principles of MVA, including both how experiments can be set up and how results can be used for hypothesis testing. We then report a case study showing how MVA was deployed in a multiple-baseline across-subjects study investigating treatment for residual errors affecting rhotics. Strengths and weaknesses of MVA are discussed. Given their important role in the evidence base that informs clinical decision making, it is critical for single-case experimental studies to be conducted in a way that allows researchers to draw valid inferences. As a method that can increase the rigor of single-case studies while preserving the benefits of a response-guided approach, MVA warrants expanded attention from researchers in communication disorders.

  16. MR-eyetracker: a new method for eye movement recording in functional magnetic resonance imaging.

    Science.gov (United States)

    Kimmig, H; Greenlee, M W; Huethe, F; Mergner, T

    1999-06-01

    We present a method for recording saccadic and pursuit eye movements in the magnetic resonance tomograph designed for visual functional magnetic resonance imaging (fMRI) experiments. To reliably classify brain areas as pursuit or saccade related it is important to carefully measure the actual eye movements. For this purpose, infrared light, created outside the scanner by light-emitting diodes (LEDs), is guided via optic fibers into the head coil and onto the eye of the subject. Two additional fiber optical cables pick up the light reflected by the iris. The illuminating and detecting cables are mounted in a plastic eyepiece that is manually lowered to the level of the eye. By means of differential amplification, we obtain a signal that covaries with the horizontal position of the eye. Calibration of eye position within the scanner yields an estimate of eye position with a resolution of 0.2 degrees at a sampling rate of 1000 Hz. Experiments are presented that employ echoplanar imaging with 12 image planes through visual, parietal and frontal cortex while subjects performed saccadic and pursuit eye movements. The distribution of BOLD (blood oxygen level dependent) responses is shown to depend on the type of eye movement performed. Our method yields high temporal and spatial resolution of the horizontal component of eye movements during fMRI scanning. Since the signal is purely optical, there is no interaction between the eye movement signals and the echoplanar images. This reasonably priced eye tracker can be used to control eye position and monitor eye movements during fMRI.
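    The conversion from a differentially amplified optical signal to horizontal eye position can be sketched as a two-point linear calibration. This is a hypothetical illustration only: the paper's actual calibration procedure is not detailed in the abstract, and the voltages and target angles below are invented.

    ```python
    def make_calibration(v1, deg1, v2, deg2):
        """Build a volts-to-degrees converter from two calibration fixations.

        v1, v2   -- differential signal (volts) recorded at two known targets
        deg1, deg2 -- the corresponding horizontal eye positions (degrees)
        """
        gain = (deg2 - deg1) / (v2 - v1)          # degrees per volt
        return lambda v: deg1 + gain * (v - v1)   # linear interpolation/extrapolation

    # Hypothetical calibration: the subject fixates targets at -10 deg and
    # +10 deg while the signal at each fixation is recorded.
    to_degrees = make_calibration(-0.8, -10.0, 0.8, 10.0)

    print(to_degrees(0.0))   # signal midway between the targets: straight ahead
    print(to_degrees(0.4))   # a rightward deflection of the eye
    ```

    A real system would use more calibration points and check linearity, but a two-point fit captures the basic gain-and-offset mapping the differential amplification provides.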

  17. Priming and the guidance by visual and categorical templates in visual search

    NARCIS (Netherlands)

    Wilschut, A.M.; Theeuwes, J.; Olivers, C.N.L.

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual

  18. FUNdamental Movement in Early Childhood.

    Science.gov (United States)

    Campbell, Linley

    2001-01-01

    Noting that the development of fundamental movement skills is basic to children's motor development, this booklet provides a guide for early childhood educators in planning movement experiences for children between 4 and 8 years. The booklet introduces a wide variety of appropriate practices to promote movement skill acquisition and increased…

  19. CUE: counterfeit-resistant usable eye movement-based authentication via oculomotor plant characteristics and complex eye movement patterns

    Science.gov (United States)

    Komogortsev, Oleg V.; Karpov, Alexey; Holland, Corey D.

    2012-06-01

    The widespread use of computers throughout modern society introduces the necessity for usable and counterfeit-resistant authentication methods to ensure secure access to personal resources such as bank accounts, e-mail, and social media. Current authentication methods require tedious memorization of lengthy pass phrases, are often prone to shoulder-surfing, and may be easily replicated (either by counterfeiting parts of the human body or by guessing an authentication token based on readily available information). This paper describes preliminary work toward a counterfeit-resistant usable eye movement-based (CUE) authentication method. CUE does not require any passwords (improving the memorability aspect of the authentication system), and aims to provide high resistance to spoofing and shoulder-surfing by employing the combined biometric capabilities of two behavioral biometric traits: 1) oculomotor plant characteristics (OPC), which represent the internal, non-visible, anatomical structure of the eye; 2) complex eye movement patterns (CEM), which represent the strategies employed by the brain to guide visual attention. Both OPC and CEM are extracted from the eye movement signal provided by an eye tracking system. Preliminary results indicate that the fusion of OPC and CEM traits is capable of providing a 30% reduction in authentication error when compared to the authentication accuracy of individual traits.
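    The fusion of the two traits can be illustrated at the score level. This is a hedged sketch of generic weighted-sum score fusion, not the authors' actual fusion method; the match scores, weight, and acceptance threshold are all invented, and "OPC"/"CEM" here are just labels for the two score streams.

    ```python
    def fuse(opc_score, cem_score, w=0.5):
        """Weighted-sum fusion of two match scores normalized to [0, 1]."""
        return w * opc_score + (1 - w) * cem_score

    def decide(score, threshold=0.5):
        """Accept the authentication attempt if the fused score clears the threshold."""
        return score >= threshold

    # Fabricated attempts: (opc_score, cem_score, is_genuine_user)
    attempts = [
        (0.9, 0.4, True),   # OPC alone strong, CEM weak
        (0.3, 0.8, True),   # CEM alone strong, OPC weak
        (0.2, 0.3, False),  # impostor: both scores weak
    ]

    decisions = [decide(fuse(o, c)) for o, c, _ in attempts]
    errors = sum(d != g for d, (_, _, g) in zip(decisions, attempts))
    print(errors)  # fused decisions on this toy set
    ```

    The point of fusion is visible in the first two attempts: either trait alone would reject a genuine user, while the combined score accepts both without admitting the impostor.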

  20. Keep your eyes on the ball: smooth pursuit eye movements enhance prediction of visual motion.

    Science.gov (United States)

    Spering, Miriam; Schütz, Alexander C; Braun, Doris I; Gegenfurtner, Karl R

    2011-04-01

    Success of motor behavior often depends on the ability to predict the path of moving objects. Here we asked whether tracking a visual object with smooth pursuit eye movements helps to predict its motion direction. We developed a paradigm, "eye soccer," in which observers had to either track or fixate a visual target (ball) and judge whether it would have hit or missed a stationary vertical line segment (goal). Ball and goal were presented briefly for 100-500 ms and disappeared from the screen together before the perceptual judgment was prompted. In pursuit conditions, the ball moved towards the goal; in fixation conditions, the goal moved towards the stationary ball, resulting in similar retinal stimulation during pursuit and fixation. We also tested the condition in which the goal was fixated and the ball moved. Motion direction prediction was significantly better in pursuit than in fixation trials, regardless of whether ball or goal served as fixation target. In both fixation and pursuit trials, prediction performance was better when eye movements were accurate. Performance also increased with shorter ball-goal distance and longer presentation duration. A longer trajectory did not affect performance. During pursuit, an efference copy signal might provide additional motion information, leading to the advantage in motion prediction.

  1. Seeing your way to health: the visual pedagogy of Bess Mensendieck's physical culture system.

    Science.gov (United States)

    Veder, Robin

    2011-01-01

    This essay examines the images and looking practices central to Bess M. Mensendieck's (c.1866-1959) 'functional exercise' system, as documented in physical culture treatises published in Germany and the United States between 1906 and 1937. Believing that muscular realignment could not occur without seeing how the body worked, Mensendieck taught adult non-athletes to see skeletal alignment and muscular movement in their own and others' bodies. Three levels of looking practices are examined: didactic sequences; penetrating inspection and appreciation of physiological structures; and ideokinetic visual metaphors for guiding movement. With these techniques, Mensendieck's work bridged the body cultures of German Nacktkultur (nudism), American labour efficiency and the emerging physical education profession. This case study demonstrates how sport historians could expand their analyses to include practices of looking as well as questions of visual representation.

  2. Using eye movement analysis to study auditory effects on visual memory recall.

    Science.gov (United States)

    Marandi, Ramtin Zargari; Sabzpoushan, Seyed Hojjat

    2014-01-01

    Recent studies in affective computing are focused on sensing human cognitive context using biosignals. In this study, electrooculography (EOG) was utilized to investigate memory recall accessibility via eye movement patterns. Twelve subjects participated in our experiment, wherein pictures from four categories were presented. Each category contained nine pictures, of which three were presented twice and the rest were presented once only. Each picture presentation took five seconds with an adjoining three-second interval. Similarly, this task was performed with new pictures together with related sounds. The task was free viewing and participants were not informed about the task's purpose. Using pattern recognition techniques, participants' EOG signals in response to repeated and non-repeated pictures were classified for the with-sound and without-sound stages. The method was validated with eight different participants. The recognition rate in the "with sound" stage was significantly reduced compared with the "without sound" stage. The result demonstrated that the familiarity of visual-auditory stimuli can be detected from EOG signals and that auditory input potentially improves the visual recall process.
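    The pattern-recognition step can be sketched with a minimal classifier. The abstract does not name the classifier used, so a nearest-centroid rule stands in here, and the two-dimensional features (mean fixation duration, saccade rate) and all values are fabricated for illustration.

    ```python
    def centroid(vectors):
        """Component-wise mean of a list of equal-length feature vectors."""
        n = len(vectors)
        return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

    def nearest_centroid(sample, centroids):
        """Return the label whose centroid is closest in Euclidean distance."""
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(centroids, key=lambda label: dist2(sample, centroids[label]))

    # Fabricated training features per class, e.g. extracted from EOG traces:
    # [mean fixation duration (s), saccade rate (per s)]
    train = {
        "repeated":     [[0.30, 2.1], [0.32, 2.0], [0.29, 2.2]],
        "non-repeated": [[0.45, 3.0], [0.47, 2.9], [0.44, 3.1]],
    }
    centroids = {label: centroid(vs) for label, vs in train.items()}

    print(nearest_centroid([0.31, 2.1], centroids))  # falls in the "repeated" cluster
    ```

    Any standard classifier could replace the centroid rule; the key idea is the same train-on-one-group, validate-on-held-out-participants scheme the abstract describes.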

  4. The Bauhaus movement and its influence in graphic design, visual communication and architecture in Greece

    OpenAIRE

    Konstantinos Kyriakopoulos

    2016-01-01

    This paper attempts to present the elements defining the philosophical approach, the characteristics and the style of the Bauhaus movement. More specifically, it presents the social background of the period during which the school was established and the vision of its main representatives. It analyzes how the movement influenced graphic design, visual communication and architecture in Greece. A comparison has been made between typical Bauhaus works and works of contemporary graphics aiming to find how ...

  5. Image-guided robotic surgery.

    Science.gov (United States)

    Marescaux, Jacques; Soler, Luc

    2004-06-01

    Medical image processing leads to an improvement in patient care by guiding the surgical gesture. Three-dimensional models of patients that are generated from computed tomographic scans or magnetic resonance imaging allow improved surgical planning and surgical simulation, which offers the surgeon the opportunity to rehearse the surgical gesture before performing it for real. These two preoperative steps can be used intraoperatively because of the development of augmented reality, which consists of superimposing the preoperative three-dimensional model of the patient onto the real intraoperative view. Augmented reality provides the surgeon with a view of the patient in transparency and can also guide the surgeon, thanks to real-time tracking of surgical tools during the procedure. When adapted to robotic surgery, this tool tracking enables visual servoing, with the ability to automatically position and control surgical robotic arms in three dimensions. It is also now possible to filter physiologic movements such as breathing or the heartbeat. In the future, by combining augmented reality and robotics, these image-guided robotic systems will enable automation of the surgical procedure, which will be the next revolution in surgery.

  6. Revisiting the link between body and agency: visual movement congruency enhances intentional binding but is not body-specific.

    Science.gov (United States)

    Zopf, Regine; Polito, Vince; Moore, James

    2018-01-09

    Embodiment and agency are key aspects of how we perceive ourselves that have typically been associated with independent mechanisms. Recent work, however, has suggested that these mechanisms are related. The sense of agency arises from recognising a causal influence on the external world. This influence is typically realised through bodily movements and thus the perception of the bodily self could also be crucial for agency. We investigated whether a key index of agency - intentional binding - was modulated by body-specific information. Participants judged the interval between pressing a button and a subsequent tone. We used virtual reality to manipulate two aspects of movement feedback. First, form: participants viewed a virtual hand or sphere. Second, movement congruency: the viewed object moved congruently or incongruently with the participant's hidden hand. Both factors, form and movement congruency, significantly influenced embodiment. However, only movement congruency influenced intentional binding. Binding was increased for congruent compared to incongruent movement feedback irrespective of form. This shows that the comparison between viewed and performed movements provides an important cue for agency, whereas body-specific visual form does not. We suggest that embodiment and agency mechanisms both depend on comparisons across sensorimotor signals but that they are influenced by distinct factors.

  7. Motor imagery beyond the motor repertoire: Activity in the primary visual cortex during kinesthetic motor imagery of difficult whole body movements.

    Science.gov (United States)

    Mizuguchi, N; Nakata, H; Kanosue, K

    2016-02-19

    To elucidate the neural substrate associated with capabilities for kinesthetic motor imagery of difficult whole-body movements, we measured brain activity during a trial involving both kinesthetic motor imagery and action observation as well as during a trial with action observation alone. Brain activity was assessed with functional magnetic resonance imaging (fMRI). Nineteen participants imagined three types of whole-body movements with the horizontal bar: the giant swing, kip, and chin-up during action observation. No participant had previously tried to perform the giant swing. The vividness of kinesthetic motor imagery as assessed by questionnaire was highest for the chin-up, less for the kip and lowest for the giant swing. Activity in the primary visual cortex (V1) during kinesthetic motor imagery with action observation minus that during action observation alone was significantly greater in the giant swing condition than in the chin-up condition within participants. Across participants, V1 activity of kinesthetic motor imagery of the kip during action observation minus that during action observation alone was negatively correlated with vividness of the kip imagery. These results suggest that activity in V1 is dependent upon the capability of kinesthetic motor imagery for difficult whole-body movements. Since V1 activity is likely related to the creation of a visual image, we speculate that visual motor imagery is recruited unintentionally for the less vivid kinesthetic motor imagery of difficult whole-body movements. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. The influence of action video game playing on eye movement behaviour during visual search in abstract, in-game and natural scenes.

    Science.gov (United States)

    Azizi, Elham; Abel, Larry A; Stainer, Matthew J

    2017-02-01

    Action game playing has been associated with several improvements in visual attention tasks. However, it is not clear how such changes might influence the way we overtly select information from our visual world (i.e. eye movements). We examined whether action-video-game training changed eye movement behaviour in a series of visual search tasks including conjunctive search (relatively abstracted from natural behaviour), game-related search, and more naturalistic scene search. Forty nongamers were trained in either an action first-person shooter game or a card game (control) for 10 hours. As a further control, we recorded eye movements of 20 experienced action gamers on the same tasks. The results did not show any change in duration of fixations or saccade amplitude either from before to after the training or between all nongamers (pretraining) and experienced action gamers. However, we observed a change in search strategy, reflected by a reduction in the vertical distribution of fixations for the game-related search task in the action-game-trained group. This might suggest learning the likely distribution of targets. In other words, game training only skilled participants to search game images for targets important to the game, with no indication of transfer to the more natural scene search. Taken together, these results suggest no modification in overt allocation of attention. Either the skills that can be trained with action gaming are not powerful enough to influence information selection through eye movements, or action-game-learned skills are not used when deciding where to move the eyes.

  9. Dynamic Stimuli And Active Processing In Human Visual Perception

    Science.gov (United States)

    Haber, Ralph N.

    1990-03-01

    Theories of visual perception have traditionally taken a static retinal image as the starting point for processing, and have treated processing as passive and as a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted with newer ones that use dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered as dynamic: humans process the systematic changes in patterned light that occur when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies about what the visual environment contains. These developments suggest a very different approach to the computational analysis of object location and identification, and of the visual guidance of locomotion.

  10. Effect of 4-Horizontal Rectus Muscle Tenotomy on Visual Function and Eye Movement Records in Patients with Infantile Nystagmus Syndrome without Abnormal Head Posture and Strabismus: A Prospective Study

    Directory of Open Access Journals (Sweden)

    Ahmad Ameri

    2013-10-01

    Purpose: To evaluate the effect of tenotomy on visual function and eye movement recordings in patients with infantile nystagmus syndrome (INS) without abnormal head posture (AHP) and strabismus. Methods: A prospective interventional case series of patients with INS and no AHP or strabismus. Patients underwent 4-horizontal muscle tenotomy. Best corrected visual acuity (BCVA) and eye movement recordings were compared pre- and postoperatively. Results: Eight patients were recruited in this study, with 3 to 15.5 months of follow-up. Patients showed significant improvement in visual function. Overall nystagmus amplitude and velocity decreased by 30.7% and 19.8%, respectively. Improvements were more marked at right and left gazes. Conclusion: Tenotomy improves both visual function and eye movement recordings in INS with no strabismus and an eccentric null point. The procedure has a greater effect on lateral gazes with worse waveforms, and thus can broaden the area of better visual function. We recommend this surgery in patients with INS but no associated AHP or strabismus.

  11. Visual and non-visual motion information processing during pursuit eye tracking in schizophrenia and bipolar disorder.

    Science.gov (United States)

    Trillenberg, Peter; Sprenger, Andreas; Talamo, Silke; Herold, Kirsten; Helmchen, Christoph; Verleger, Rolf; Lencer, Rebekka

    2017-04-01

    Despite many reports on visual processing deficits in psychotic disorders, studies are needed on the integration of visual and non-visual components of eye movement control to improve the understanding of sensorimotor information processing in these disorders. Non-visual inputs to eye movement control include prediction of future target velocity from extrapolation of past visual target movement and anticipation of future target movements. It is unclear whether non-visual input is impaired in patients with schizophrenia. We recorded smooth pursuit eye movements in 21 patients with schizophrenia spectrum disorder, 22 patients with bipolar disorder, and 24 controls. In a foveo-fugal ramp task, the target was either continuously visible or was blanked during movement. We determined peak gain (measuring overall performance), initial eye acceleration (measuring visually driven pursuit), deceleration after target extinction (measuring prediction), eye velocity drifts before onset of target visibility (measuring anticipation), and residual gain during blanking intervals (measuring anticipation and prediction). In both patient groups, initial eye acceleration was decreased and the ability to adjust eye acceleration to increasing target acceleration was impaired. In contrast, neither deceleration nor eye drift velocity was reduced in patients, implying unimpaired non-visual contributions to pursuit drive. Disturbances of eye movement control in psychotic disorders appear to be a consequence of deficits in sensorimotor transformation rather than a pure failure in adding cognitive contributions to pursuit drive in higher-order cortical circuits. More generally, this deficit might reflect a fundamental imbalance between processing external input and acting according to internal preferences.

  12. The Space-Time Cube as part of a GeoVisual Analytics Environment to support the understanding of movement data

    DEFF Research Database (Denmark)

    Kveladze, Irma; Kraak, M. J.; van Elzakker, C. P. J. M.

    2015-01-01

    This paper reports the results of an empirical usability experiment on the performance of the space-time cube in a GeoVisual analytics environment. It was developed to explore movement data based on the requirements of human geographers. The interactive environment consists of multiple coordinated...

  13. Visualizing the Impacts of Movement Infrastructures on Social Inclusion: Graph-Based Methods for Observing Community Formations in Contrasting Geographic Contexts

    Directory of Open Access Journals (Sweden)

    Jamie O'Brien

    2017-12-01

    In this article we describe some innovative methods for observing the possible impacts of roads, junctions and pathways (movement infrastructures) on community life, in terms of their affordances and hindrances for social connectivity. In seeking to observe these impacts, we combined a range of visualization research methods, based on qualitative points-data mapping, graphic representation and urban morphological analysis at local and global geographic scales. Our overall aim in this study was to develop exploratory methods for combining and visualizing various kinds of data that relate to urban community formations in contrasting urban contexts. We focused our enquiry on the perspectives of adolescents in two urban contexts: Liverpool, UK, and Medellín, Colombia. While they contrast in their geo-political and cultural characteristics, these two cities each present polarized socio-economic inequalities across distinctive spatial patterns. We found that adolescents in these cities offer generally localized, pedestrian perspectives of their local areas, and unique insights into the opportunities and challenges for place-making in their local community spaces. We gathered the communities’ local perspectives through map-making workshops, in which participants used given iconographic symbols to select and weight the social and structural assets that they deemed to be significant features of their community spaces. We then sampled and visualized these selective points data to observe ways in which local community assets relate to infrastructural affordances for movement (in terms of network integration). This analysis was based on the theory and method of Space Syntax, which provides a model of affordances for movement across the urban network over various scales of network configuration. In particular, we sought to determine how city-scale movement infrastructures interact with local-scale infrastructures, and to develop methods for observing ways
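    The notion of network integration invoked above can be illustrated with a toy computation. This is a simplified, closeness-style proxy (mean topological depth over an unweighted graph), not the axial/angular analysis of actual Space Syntax software, and the street graph below is hypothetical:

```python
# Simplified sketch of a Space Syntax-style "integration" measure: for each
# node of a street network (an unweighted adjacency dict), compute its mean
# topological depth to all other nodes via BFS; lower mean depth means
# higher integration.
from collections import deque

def mean_depth(graph, start):
    """Average BFS distance from `start` to every other node."""
    depth = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nb in graph[node]:
            if nb not in depth:
                depth[nb] = depth[node] + 1
                queue.append(nb)
    return sum(depth.values()) / (len(depth) - 1)

# Hypothetical network: a main road (A-B-C) with two side streets (D off A,
# E off C).
streets = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B", "E"],
           "D": ["A"], "E": ["C"]}
integration = {n: 1.0 / mean_depth(streets, n) for n in streets}
best = max(integration, key=integration.get)
print(best)  # B - the central segment is the most integrated
```

    The same idea scales to city-sized graphs, where highly integrated segments tend to coincide with the movement infrastructures the article discusses.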

  14. Entropic Movement Complexity Reflects Subjective Creativity Rankings of Visualized Hand Motion Trajectories

    Science.gov (United States)

    Peng, Zhen; Braun, Daniel A.

    2015-01-01

    In a previous study we have shown that human motion trajectories can be characterized by translating continuous trajectories into symbol sequences with well-defined complexity measures. Here we test the hypothesis that the motion complexity individuals generate in their movements might be correlated to the degree of creativity assigned by a human observer to the visualized motion trajectories. We asked participants to generate 55 novel hand movement patterns in virtual reality, where each pattern had to be repeated 10 times in a row to ensure reproducibility. This allowed us to estimate a probability distribution over trajectories for each pattern. We assessed motion complexity not only by the previously proposed complexity measures on symbolic sequences, but we also propose two novel complexity measures that can be directly applied to the distributions over trajectories based on the frameworks of Gaussian Processes and Probabilistic Movement Primitives. In contrast to previous studies, these new methods allow computing complexities of individual motion patterns from very few sample trajectories. We compared the different complexity measures to how a group of independent jurors rank ordered the recorded motion trajectories according to their personal creativity judgment. We found three entropic complexity measures that correlate significantly with human creativity judgment and discuss differences between the measures. We also test whether these complexity measures correlate with individual creativity in divergent thinking tasks, but do not find any consistent correlation. Our results suggest that entropic complexity measures of hand motion may reveal domain-specific individual differences in kinesthetic creativity. PMID:26733896
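    One family of entropic complexity measures can be sketched as follows. The symbolization scheme (one direction symbol per step) and the toy trajectories are illustrative choices, not the paper's actual measures or its Gaussian-process variants:

```python
# Hedged sketch: discretize a 2D hand trajectory into a symbol sequence by
# dominant movement direction per step, then take the Shannon entropy of the
# symbol distribution as a complexity score.
import math

def symbolize(points):
    """Map each step of a 2D trajectory to one of 4 direction symbols."""
    symbols = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        symbols.append("R" if abs(dx) >= abs(dy) and dx >= 0 else
                       "L" if abs(dx) >= abs(dy) else
                       "U" if dy >= 0 else "D")
    return symbols

def shannon_entropy(symbols):
    """Entropy (bits) of the empirical symbol distribution."""
    counts = {s: symbols.count(s) for s in set(symbols)}
    n = len(symbols)
    h = -sum(c / n * math.log2(c / n) for c in counts.values())
    return h + 0.0  # normalize -0.0 to 0.0 for the single-symbol case

straight = [(i, 0) for i in range(9)]  # monotone rightward motion
zigzag = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 2), (1, 2), (1, 3), (0, 3), (0, 4)]
print(shannon_entropy(symbolize(straight)))  # 0.0 - one symbol, no complexity
print(shannon_entropy(symbolize(zigzag)))    # 1.5 - more varied motion
```

    On this toy scale, a repetitive straight movement scores zero while a zigzag scores higher, matching the intuition that jurors might rate more varied motion as more creative.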

  15. Eye movement accuracy determines natural interception strategies.

    Science.gov (United States)

    Fooken, Jolande; Yeo, Sang-Hoon; Pai, Dinesh K; Spering, Miriam

    2016-11-01

    Eye movements aid visual perception and guide actions such as reaching or grasping. Most previous work on eye-hand coordination has focused on saccadic eye movements. Here we show that smooth pursuit eye movement accuracy strongly predicts both interception accuracy and the strategy used to intercept a moving object. We developed a naturalistic task in which participants (n = 42 varsity baseball players) intercepted a moving dot (a "2D fly ball") with their index finger in a designated "hit zone." Participants were instructed to track the ball with their eyes, but were only shown its initial launch (100-300 ms). Better smooth pursuit resulted in more accurate interceptions and determined the strategy used for interception, i.e., whether interception was early or late in the hit zone. Even though early and late interceptors showed equally accurate interceptions, they may have relied on distinct tactics: early interceptors used cognitive heuristics, whereas late interceptors' performance was best predicted by pursuit accuracy. Late interception may be beneficial in real-world tasks as it provides more time for decision and adjustment. Supporting this view, baseball players who were more senior were more likely to be late interceptors. Our findings suggest that interception strategies are optimally adapted to the proficiency of the pursuit system.
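    Pursuit accuracy in tasks like this is commonly summarized as velocity gain, the ratio of eye speed to target speed, with a gain near 1.0 meaning the eye keeps pace with the ball. A minimal sketch with made-up gaze samples (the study's actual accuracy measures may differ):

```python
# Hedged sketch: compute smooth pursuit velocity gain from sampled 2D
# positions of gaze and target. All numbers below are invented.

def velocity(samples, dt):
    """Mean speed from successive (x, y) positions sampled every dt seconds."""
    total = sum(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
                for (x0, y0), (x1, y1) in zip(samples, samples[1:]))
    return total / (dt * (len(samples) - 1))

def pursuit_gain(eye, target, dt=0.01):
    """Eye speed relative to target speed; ~1.0 indicates accurate pursuit."""
    return velocity(eye, dt) / velocity(target, dt)

target = [(0.1 * i, 0.05 * i) for i in range(20)]        # ball path
good_eye = [(0.1 * i, 0.05 * i) for i in range(20)]      # tracks perfectly
lagging_eye = [(0.06 * i, 0.03 * i) for i in range(20)]  # undershoots velocity
print(round(pursuit_gain(good_eye, target), 2))     # 1.0
print(round(pursuit_gain(lagging_eye, target), 2))  # 0.6
```

    A per-trial gain like this is the kind of predictor that could then be correlated with interception accuracy or strategy.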

  16. Inhibition in movement plan competition: reach trajectories curve away from remembered and task-irrelevant present but not from task-irrelevant past visual stimuli.

    Science.gov (United States)

    Moehler, Tobias; Fiehler, Katja

    2017-11-01

    The current study investigated the role of automatic encoding and maintenance of remembered, past, and present visual distractors in reach movement planning. Previous research on eye movements has shown that saccades curve away from locations actively kept in working memory and also from task-irrelevant perceptually present visual distractors, but not from task-irrelevant past distractors. Curvature away has been associated with an inhibitory mechanism resolving the competition between multiple active movement plans. Here, we examined whether reach movements are subject to a similar inhibitory mechanism and thus show systematic modulation of reach trajectories when the location of a previously presented distractor has to be (a) maintained in working memory or (b) ignored, or (c) when the distractor is perceptually present. Participants performed vertical reach movements on a computer monitor from a home to a target location. Distractors appeared laterally and near or far from the target (equidistant from central fixation). We found that reaches curved away from distractors located close to the target when the distractor location had to be memorized and when it was perceptually present, but not when the past distractor had to be ignored. Our findings suggest that automatically encoding present distractors and actively maintaining the location of past distractors in working memory evoke a similar response competition resolved by inhibition, as has been previously shown for saccadic eye movements.

  17. Context-dependent effects of substantia nigra stimulation on eye movements.

    Science.gov (United States)

    Basso, Michele A; Liu, Ping

    2007-06-01

    In a series of now classic experiments, an output structure of the basal ganglia (BG), the substantia nigra pars reticulata (SNr), was shown to be involved in the generation of saccades made in particular behavioral contexts, such as when memory was required for guidance. Recent electrophysiological experiments, however, call this original hypothesis into question. Here we test the hypothesis that the SNr is involved preferentially in nonvisually guided saccades using electrical stimulation. Monkeys performed visually guided and memory-guided saccades to locations throughout the visual field. On 50% of the trials, electrical stimulation of the SNr occurred. Stimulation of the SNr altered the direction, amplitude, latency, and probability of saccades. Visually guided saccades tended to be rotated toward the field contralateral to the side of stimulation, whereas memory-guided saccades tended to be rotated toward the hemifield ipsilateral to the side of stimulation. Overall, the changes in saccade vector direction were larger for memory-guided than for visually guided saccades. Both memory- and visually guided saccades were hypometric during stimulation trials, but the stimulation preferentially affected the length of memory-guided saccades. Electrical stimulation of the SNr produced decreases in visually guided saccades bilaterally. In contrast, memory-guided saccades often had increases in saccade latency bilaterally. Finally, we found approximately 10% reduction in the probability of memory-guided saccades bilaterally. Visually guided saccade probability was unaltered. Taken together the results are consistent with the hypothesis that SNr primarily influences nonvisually guided saccades. The pattern of stimulation effects suggests that SNr influence is widespread, altering the pattern of activity bilaterally across the superior colliculus map of saccades.

  18. Real-time modulation of visual feedback on human full-body movements in a virtual mirror: development and proof-of-concept

    NARCIS (Netherlands)

    Roosink, M.; Robitaille, N.; McFadyen, B.J.; Hebert, L.J.; Jackson, P.L.; Bouyer, L.J.; Mercier, C.

    2015-01-01

    BACKGROUND: Virtual reality (VR) provides interactive multimodal sensory stimuli and biofeedback, and can be a powerful tool for physical and cognitive rehabilitation. However, existing systems have generally not implemented realistic full-body avatars and/or a scaling of visual movement feedback.

  19. Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition.

    Science.gov (United States)

    Jesse, Alexandra; McQueen, James M

    2014-01-01

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., 'ca-vi from cavia "guinea pig" vs. 'ka-vi from kaviaar "caviar"). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-'jec from projector "projector" vs. 'pro-jec from projectiel "projectile"), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.

  20. Synchronizing the tracking eye movements with the motion of a visual target: Basic neural processes.

    Science.gov (United States)

    Goffart, Laurent; Bourrelly, Clara; Quinet, Julie

    2017-01-01

    In primates, the appearance of an object moving in the peripheral visual field elicits an interceptive saccade that brings the target image onto the foveae. This foveation is then maintained more or less efficiently by slow pursuit eye movements and subsequent catch-up saccades. Sometimes, the tracking is such that the gaze direction looks spatiotemporally locked onto the moving object. Such spatial synchronism is quite spectacular when one considers that the target-related signals are transmitted to the motor neurons through multiple parallel channels connecting separate neural populations with different conduction speeds and delays. Because of the delays between the changes of retinal activity and the changes of extraocular muscle tension, the maintenance of the target image on the fovea cannot be driven by the current retinal signals, as they correspond to past positions of the target. Yet, the spatiotemporal coincidence observed during pursuit suggests that the oculomotor system is driven by a command estimating continuously the current location of the target, i.e., where it is here and now. This inference is also supported by experimental perturbation studies: when the trajectory of an interceptive saccade is experimentally perturbed, a correction saccade is produced in flight or after a short delay, and brings the gaze next to the location where unperturbed saccades would have landed at about the same time, in the absence of visual feedback. In this chapter, we explain how such correction can be supported by previous visual signals without assuming "predictive" signals encoding future target locations. We also describe the basic neural processes which gradually yield the synchronization of eye movements with the target motion. When the process fails, the gaze is driven by signals related to past locations of the target, not by estimates of its upcoming locations, and a catch-up saccade is made to reinitiate the synchronization. © 2017 Elsevier B.V. All rights reserved.

  1. Visual Guided Navigation

    National Research Council Canada - National Science Library

    Banks, Martin

    1999-01-01

    .... Similarly, the problem of visual navigation is the recovery of an observer's self-motion with respect to the environment from the moving pattern of light reaching the eyes and the complex of extra...

  2. Separating timing, movement conditions and individual differences in the analysis of human movement

    DEFF Research Database (Denmark)

    Raket, Lars Lau; Grimme, Britta; Schöner, Gregor

    2016-01-01

    mixed-effects models as viable alternatives to conventional analysis frameworks. The model is then combined with a novel factor-analysis model that estimates the low-dimensional subspace within which movements vary when the task demands vary. Our framework enables us to visualize different dimensions......A central task in the analysis of human movement behavior is to determine systematic patterns and differences across experimental conditions, participants and repetitions. This is possible because human movement is highly regular, being constrained by invariance principles. Movement timing...

  3. Unfolding Visual Lexical Decision in Time

    Science.gov (United States)

    Barca, Laura; Pezzulo, Giovanni

    2012-01-01

    Visual lexical decision is a classical paradigm in psycholinguistics, and numerous studies have assessed the so-called "lexicality effect" (i.e., better performance with lexical than non-lexical stimuli). Far less is known about the dynamics of choice, because many studies measured overall reaction times, which are not informative about underlying processes. To unfold visual lexical decision in (over) time, we measured participants' hand movements toward one of two item alternatives by recording the streaming x,y coordinates of the computer mouse. Participants categorized four kinds of stimuli as "lexical" or "non-lexical": high and low frequency words, pseudowords, and letter strings. Spatial attraction toward the opposite category was present for low frequency words and pseudowords. Increasing the ambiguity of the stimuli led to greater movement complexity and trajectory attraction to competitors, whereas no such effect was present for high frequency words and letter strings. Results fit well with dynamic models of perceptual decision-making, which describe the process as a competition between alternatives guided by the continuous accumulation of evidence. More broadly, our results point to a key role of statistical decision theory in studying linguistic processing in terms of dynamic and non-modular mechanisms. PMID:22563419
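    Trajectory attraction of this kind is often quantified as the maximum perpendicular deviation of the cursor path from the straight start-to-response line. A minimal sketch (the metric variant and the toy paths are illustrative, not taken from the paper):

```python
# Hedged sketch: maximum signed perpendicular deviation of a mouse path
# from the straight line between its start and end points.

def max_deviation(path):
    """Largest (by magnitude) signed perpendicular distance of any sample
    from the start-end line; sign indicates which side the path bends to."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    length = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    def signed_dist(p):
        x, y = p
        # 2D cross product of the start-end vector with the start-sample
        # vector, normalized by the line length.
        return ((x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)) / length
    return max((signed_dist(p) for p in path), key=abs)

direct = [(0, 0), (0.1, 1), (0.05, 2), (0, 3)]      # nearly straight movement
attracted = [(0, 0), (-0.8, 1), (-1.2, 2), (0, 3)]  # bends toward competitor
print(round(abs(max_deviation(direct)), 2))     # 0.1
print(round(abs(max_deviation(attracted)), 2))  # 1.2
```

    Larger deviations on ambiguous trials (low frequency words, pseudowords) would correspond to the competitor attraction the abstract describes.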

  4. Rapid steroid influences on visually guided sexual behavior in male goldfish

    Science.gov (United States)

    Lord, Louis-David; Bond, Julia; Thompson, Richmond R.

    2013-01-01

    The ability of steroid hormones to rapidly influence cell physiology through nongenomic mechanisms raises the possibility that these molecules may play a role in the dynamic regulation of social behavior, particularly in species in which social stimuli can rapidly influence circulating steroid levels. We therefore tested if testosterone (T), which increases in male goldfish in response to sexual stimuli, can rapidly influence approach responses towards females. Injections of T stimulated approach responses towards the visual cues of females 30–45 min after the injection but did not stimulate approach responses towards stimulus males or affect general activity, indicating that the effect is stimulus-specific and not a secondary consequence of increased arousal. Estradiol produced the same effect 30–45 min and even 10–25 min after administration, and treatment with the aromatase inhibitor fadrozole blocked exogenous T’s behavioral effect, indicating that T’s rapid stimulation of visual approach responses depends on aromatization. We suggest that T surges induced by sexual stimuli, including preovulatory pheromones, rapidly prime males to mate by increasing sensitivity within visual pathways that guide approach responses towards females and/or by increasing the motivation to approach potential mates through actions within traditional limbic circuits. PMID:19751737

  5. Octopus vulgaris uses visual information to determine the location of its arm.

    Science.gov (United States)

    Gutnick, Tamar; Byrne, Ruth A; Hochner, Binyamin; Kuba, Michael

    2011-03-22

    Octopuses are intelligent, soft-bodied animals with keen senses that perform reliably in a variety of visual and tactile learning tasks. However, researchers have found them disappointing in that they consistently fail in operant tasks that require them to combine central nervous system reward information with visual and peripheral knowledge of the location of their arms. Wells claimed that in order to filter and integrate an abundance of multisensory inputs that might inform the animal of the position of a single arm, octopuses would need an exceptional computing mechanism, and "There is no evidence that such a system exists in Octopus, or in any other soft bodied animal." Recent electrophysiological experiments, which found no clear somatotopic organization in the higher motor centers, support this claim. We developed a three-choice maze that required an octopus to use a single arm to reach a visually marked goal compartment. Using this operant task, we show for the first time that Octopus vulgaris is capable of guiding a single arm in a complex movement to a location. Thus, we claim that octopuses can combine peripheral arm location information with visual input to control goal-directed complex movements. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Integration of intraoperative stereovision imaging for brain shift visualization during image-guided cranial procedures

    Science.gov (United States)

    Schaewe, Timothy J.; Fan, Xiaoyao; Ji, Songbai; Roberts, David W.; Paulsen, Keith D.; Simon, David A.

    2014-03-01

    Dartmouth and Medtronic Navigation have established an academic-industrial partnership to develop, validate, and evaluate a multi-modality neurosurgical image-guidance platform for brain tumor resection surgery that is capable of updating the spatial relationships between preoperative images and the current surgical field. A stereovision system has been developed and optimized for intraoperative use through integration with a surgical microscope and an image-guided surgery system. The microscope optics and stereovision CCD sensors are localized relative to the surgical field using optical tracking and can efficiently acquire stereo image pairs from which a localized 3D profile of the exposed surface is reconstructed. This paper reports the first demonstration of intraoperative acquisition, reconstruction and visualization of 3D stereovision surface data in the context of an industry-standard image-guided surgery system. The integrated system is capable of computing and presenting a stereovision-based update of the exposed cortical surface in less than one minute. Alternative methods for visualization of high-resolution, texture-mapped stereovision surface data are also investigated with the objective of determining the technical feasibility of direct incorporation of intraoperative stereo imaging into future iterations of Medtronic's navigation platform.
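The record does not give the reconstruction equations, but for a calibrated, rectified stereo pair the core disparity-to-depth step is the standard triangulation Z = f·B/d. A minimal numpy sketch with illustrative calibration values (a real microscope-mounted rig additionally needs rectification and optical-tracker registration):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Per-pixel depth for a rectified stereo pair: Z = f * B / d.
    Pixels with zero disparity (no stereo match) map to infinite depth."""
    d = np.asarray(disparity_px, dtype=float)
    z = np.full_like(d, np.inf)
    valid = d > 0
    z[valid] = focal_px * baseline_mm / d[valid]
    return z

# A 2x2 toy disparity map from a matched stereo image pair:
disp = np.array([[8.0, 10.0], [0.0, 16.0]])   # pixels; 0 = no match
z = depth_from_disparity(disp, focal_px=1200.0, baseline_mm=6.0)
print(z)  # depths in mm; the unmatched pixel is inf
```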

  7. Internal attention to features in visual short-term memory guides object learning.

    Science.gov (United States)

    Fan, Judith E; Turk-Browne, Nicholas B

    2013-11-01

    Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. An Active System for Visually-Guided Reaching in 3D across Binocular Fixations

    Directory of Open Access Journals (Sweden)

    Ester Martinez-Martin

    2014-01-01

    Full Text Available Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. Motor information is coded in egocentric coordinates obtained from the allocentric representation of space (in terms of disparity), which is in turn generated from the egocentric representation of the visual information (image coordinates). In that way, the different aspects of visuomotor coordination are integrated: an active vision system, composed of two vergent cameras; a module for 2D binocular disparity estimation based on a local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator to perform the corresponding tasks (visually-guided reaching). The approach’s performance is evaluated through experiments on both simulated and real data.
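The phase-difference principle behind the Gabor-filter disparity module can be shown in one dimension: a spatial shift between the left and right image patches appears as a phase difference in the complex Gabor responses, and disparity is recovered as Δφ / (2πf). The 2-D filter-bank details of the actual system are not in the record; the filter parameters and toy signals below are assumptions for illustration:

```python
import numpy as np

def gabor_phase(signal, freq, sigma):
    """Complex Gabor response at the window centre; its angle is the
    local phase of `signal` at spatial frequency `freq` (cycles/pixel)."""
    n = len(signal)
    x = np.arange(n) - n // 2
    gabor = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * freq * x)
    return np.sum(signal * gabor)

def disparity_estimate(left, right, freq=0.05, sigma=8.0):
    """Phase-based disparity: shift = phase difference / (2*pi*freq).
    Positive means the right patch is shifted rightward relative to left."""
    dphi = np.angle(gabor_phase(right, freq, sigma) *
                    np.conj(gabor_phase(left, freq, sigma)))
    return dphi / (2 * np.pi * freq)

# Right image patch is the left patch shifted by 3 pixels.
x = np.arange(64)
left = np.cos(2 * np.pi * 0.05 * (x - 32))
right = np.cos(2 * np.pi * 0.05 * (x - 32 - 3))
print(round(disparity_estimate(left, right), 2))  # 3.0
```

Phase-based estimates are only unambiguous for shifts within half a filter period, which is why real systems pool over a bank of filters at several frequencies.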

  9. Visual Impairment Screening Assessment (VISA) tool: pilot validation.

    Science.gov (United States)

    Rowe, Fiona J; Hepworth, Lauren R; Hanna, Kerry L; Howard, Claire

    2018-03-06

    To report and evaluate a new Vision Impairment Screening Assessment (VISA) tool intended for use by the stroke team to improve identification of visual impairment in stroke survivors. Prospective case cohort comparative study. Stroke units at two secondary care hospitals and one tertiary centre. 116 stroke survivors were screened, 62 by naïve and 54 by non-naïve screeners. Both the VISA screening tool and the comprehensive specialist vision assessment measured case history, visual acuity, eye alignment, eye movements, visual field and visual inattention. Full completion of VISA tool and specialist vision assessment was achieved for 89 stroke survivors. Missing data for one or more sections typically related to patient's inability to complete the assessment. Sensitivity and specificity of the VISA screening tool were 90.24% and 85.29%, respectively; the positive and negative predictive values were 93.67% and 78.36%, respectively. Overall agreement was significant; k=0.736. Lowest agreement was found for screening of eye movement and visual inattention deficits. This early validation of the VISA screening tool shows promise in improving detection accuracy for clinicians involved in stroke care who are not specialists in vision problems and lack formal eye training, with potential to lead to more prompt referral with fewer false positives and negatives. Pilot validation indicates acceptability of the VISA tool for screening of visual impairment in stroke survivors. Sensitivity and specificity were high indicating the potential accuracy of the VISA tool for screening purposes. Results of this study have guided the revision of the VISA screening tool ahead of full clinical validation. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
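The reported rates follow directly from a 2x2 table of screening result against the specialist reference assessment. The counts below are illustrative, chosen so that sensitivity and specificity match the published 90.24% and 85.29% (the raw table is not in the record, so PPV, NPV and kappa from these counts are only approximate):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and Cohen's kappa from the
    2x2 table of screening result vs. specialist reference assessment."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    po = (tp + tn) / n                                            # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    return sens, spec, ppv, npv, kappa

# Hypothetical counts reproducing the published sensitivity/specificity:
sens, spec, ppv, npv, kappa = screening_metrics(tp=37, fp=5, fn=4, tn=29)
print(f"sensitivity {sens:.2%}, specificity {spec:.2%}, kappa {kappa:.3f}")
```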

  10. Encoding of movement in near extrapersonal space in primate area VIP

    Directory of Open Access Journals (Sweden)

    Frank eBremmer

    2013-02-01

    Full Text Available Many neurons in the macaque ventral intraparietal area (VIP) are multimodal, i.e., they respond not only to visual but also to tactile, auditory and vestibular stimulation. Anatomical studies have shown distinct projections between area VIP and a region of premotor cortex controlling head movements. A specific function of area VIP could be to guide movements in order to head for and/or to avoid objects in near extra-personal space. This behavioral role would require a consistent representation of visual motion within 3-D space and enhanced activity for nearby motion signals. Accordingly, in our present study we investigated whether neurons in area VIP are sensitive to moving visual stimuli containing depth signals from horizontal disparity. We recorded single unit activity from area VIP of two awake behaving monkeys (M. mulatta) fixating a central target on a projection screen. Sensitivity of neurons to horizontal disparity was assessed by presenting large field moving images (random dot fields) stereoscopically to the two eyes by means of LCD shutter goggles synchronized with the stimulus computer. During an individual trial, stimuli had one of seven different disparity values ranging from 3 degrees uncrossed (far) to 3 degrees crossed (near) disparity in 1-degree steps. Stimuli moved at constant speed in all simulated depth planes. Different disparity values were presented across trials in pseudo-randomized order. 61% of the motion-sensitive cells had a statistically significant selectivity for the horizontal disparity of the stimulus (p<0.05, distribution free ANOVA). 75% of them preferred crossed-disparity values, i.e. moving stimuli in near space, with the highest mean activity for the nearest stimulus. At the population level, preferred direction of visual stimulus motion was not affected by horizontal disparity. Thus, our findings are in agreement with the behavioral role of area VIP in the representation of movement in near extra-personal space.
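The disparity-selectivity test amounts to comparing firing rates grouped by the seven disparity conditions. The paper used a distribution-free ANOVA; the parametric one-way F below is the simplest stand-in for the same question, applied to fabricated firing rates for a cell that, like the reported majority, prefers the nearest condition (taken here as +3 deg purely for illustration):

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA across conditions: ratio of
    between-group to within-group mean squares."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Fabricated firing rates (spikes/s) per trial at disparities -3 ... +3 deg:
disparities = [-3, -2, -1, 0, 1, 2, 3]
rates = [[4, 5, 6], [6, 5, 7], [8, 9, 7], [10, 11, 9],
         [14, 15, 13], [20, 19, 21], [26, 25, 27]]
F = one_way_anova_F(rates)
preferred = disparities[int(np.argmax([np.mean(r) for r in rates]))]
print(F > 10, preferred)  # strong tuning; preferred disparity = +3 (near)
```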

  11. Visual Attention to Movement and Color in Children with Cortical Visual Impairment

    Science.gov (United States)

    Cohen-Maitre, Stacey Ann; Haerich, Paul

    2005-01-01

    This study investigated the ability of color and motion to elicit and maintain visual attention in a sample of children with cortical visual impairment (CVI). It found that colorful and moving objects may be used to engage children with CVI, increase their motivation to use their residual vision, and promote visual learning.

  12. Neural correlates of tactile perception during pre-, peri-, and post-movement.

    Science.gov (United States)

    Juravle, Georgiana; Heed, Tobias; Spence, Charles; Röder, Brigitte

    2016-05-01

    Tactile information is differentially processed over the various phases of goal-directed movements. Here, event-related potentials (ERPs) were used to investigate the neural correlates of tactile and visual information processing during movement. Participants performed goal-directed reaches for an object placed centrally on the table in front of them. Tactile and visual stimulation (100 ms) was presented in separate trials during the different phases of the movement (i.e. preparation, execution, and post-movement). These stimuli were independently delivered to either the moving or resting hand. In a control condition, the participants only performed the movement, while omission (i.e. movement-only) ERPs were recorded. Participants were instructed to ignore the presence or absence of any sensory events and to concentrate solely on the execution of the movement. Enhanced ERPs were observed 80-200 ms after tactile stimulation, as well as 100-250 ms after visual stimulation: These modulations were greatest during the execution of the goal-directed movement, and they were effector based (i.e. significantly more negative for stimuli presented to the moving hand). Furthermore, ERPs revealed enhanced sensory processing during goal-directed movements for visual stimuli as well. Such enhanced processing of both tactile and visual information during the execution phase suggests that incoming sensory information is continuously monitored for a potential adjustment of the current motor plan. Furthermore, the results reported here also highlight a tight coupling between spatial attention and the execution of motor actions.

  13. Contextual effects on motion perception and smooth pursuit eye movements.

    Science.gov (United States)

    Spering, Miriam; Gegenfurtner, Karl R

    2008-08-15

    Smooth pursuit eye movements are continuous, slow rotations of the eyes that allow us to follow the motion of a visual object of interest. These movements are closely related to sensory inputs from the visual motion processing system. To track a moving object in the natural environment, its motion first has to be segregated from the motion signals provided by surrounding stimuli. Here, we review experiments on the effect of the visual context on motion processing with a focus on the relationship between motion perception and smooth pursuit eye movements. While perception and pursuit are closely linked, we show that they can behave quite distinctly when required by the visual context.

  14. Eye-movements and ongoing task processing.

    Science.gov (United States)

    Burke, David T; Meleger, Alec; Schneider, Jeffrey C; Snyder, Jim; Dorvlo, Atsu S S; Al-Adawi, Samir

    2003-06-01

    This study tests the relation between eye-movements and thought processing. Subjects were given specific modality tasks (visual, gustatory, kinesthetic) and assessed on whether they responded with distinct eye-movements. Some subjects' eye-movements reflected ongoing thought processing. Instead of a universal pattern, as suggested by the neurolinguistic programming hypothesis, this study yielded subject-specific idiosyncratic eye-movements across all modalities. Included is a discussion of the neurolinguistic programming hypothesis regarding eye-movements and its implications for the eye-movement desensitization and reprocessing theory.

  15. Visual Search for Motion-Form Conjunctions: Selective Attention to Movement Direction.

    Science.gov (United States)

    Von Mühlenen, Adrian; Müller, Hermann J

    1999-07-01

    In 2 experiments requiring visual search for conjunctions of motion and form, the authors reinvestigated whether motion-based filtering (e.g., P. McLeod, J. Driver, Z. Dienes, & J. Crisp, 1991) is direction selective and whether cuing of the target direction promotes efficient search performance. In both experiments, the authors varied the number of movement directions in the display and the predictability of the target direction. Search was less efficient when items moved in multiple (2, 3, and 4) directions as compared with just 1 direction. Furthermore, precuing of the target direction facilitated the search, even with "wrap-around" displays, relatively more when items moved in multiple directions. The authors proposed 2 principles to explain that pattern of effects: (a) interference on direction computation between items moving in different directions (e.g., N. Qian & R. A. Andersen, 1994) and (b) selective direction tuning of motion detectors involving a receptive-field contraction (cf. J. Moran & R. Desimone, 1985; S. Treue & J. H. R. Maunsell, 1996).

  16. Preliminary study of ergonomic behavior during simulated ultrasound-guided regional anesthesia using a head-mounted display.

    Science.gov (United States)

    Udani, Ankeet D; Harrison, T Kyle; Howard, Steven K; Kim, T Edward; Brock-Utne, John G; Gaba, David M; Mariano, Edward R

    2012-08-01

    A head-mounted display provides continuous real-time imaging within the practitioner's visual field. We evaluated the feasibility of using head-mounted display technology to improve ergonomics in ultrasound-guided regional anesthesia in a simulated environment. Two anesthesiologists performed an equal number of ultrasound-guided popliteal-sciatic nerve blocks using the head-mounted display on a porcine hindquarter, and an independent observer assessed each practitioner's ergonomics (eg, head turning, arching, eye movements, and needle manipulation) and the overall block quality based on the injectate spread around the target nerve for each procedure. Both practitioners performed their procedures without directly viewing the ultrasound monitor, and neither practitioner showed poor ergonomic behavior. Head-mounted display technology may offer potential advantages during ultrasound-guided regional anesthesia.

  17. Fixation and saliency during search of natural scenes: the case of visual agnosia.

    Science.gov (United States)

    Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey

    2009-07-01

    Models of eye movement control in natural scenes often distinguish between stimulus-driven processes (which guide the eyes to visually salient regions) and those based on task and object knowledge (which depend on expectations or identification of objects and scene gist). In the present investigation, the eye movements of a patient with visual agnosia were recorded while she searched for objects within photographs of natural scenes and compared to those made by students and age-matched controls. Agnosia is assumed to disrupt the top-down knowledge available in this task, and so may increase the reliance on bottom-up cues. The patient's deficit in object recognition was seen in poor search performance and inefficient scanning. The low-level saliency of target objects had an effect on responses in visual agnosia, and the most salient region in the scene was more likely to be fixated by the patient than by controls. An analysis of model-predicted saliency at fixation locations indicated a closer match between fixations and low-level saliency in agnosia than in controls. These findings are discussed in relation to saliency-map models and the balance between high and low-level factors in eye guidance.

  18. Neural circuits of eye movements during performance of the visual exploration task, which is similar to the responsive search score task, in schizophrenia patients and normal subjects

    International Nuclear Information System (INIS)

    Nemoto, Yasundo; Matsuda, Tetsuya; Matsuura, Masato

    2004-01-01

    Abnormal exploratory eye movements have been studied as a biological marker for schizophrenia. Using functional MRI (fMRI), we investigated brain activations of 12 healthy and 8 schizophrenic subjects during performance of a visual exploration task that is similar to the responsive search score task to clarify the neural basis of the abnormal exploratory eye movement. Performance data, such as the number of eye movements, the reaction time, and the percentage of correct answers showed no significant differences between the two groups. Only the normal subjects showed activations at the bilateral thalamus and the left anterior medial frontal cortex during the visual exploration tasks. In contrast, only the schizophrenic subjects showed activations at the right anterior cingulate gyrus during the same tasks. The activation at the different locations between the two groups, the left anterior medial frontal cortex in normal subjects and the right anterior cingulate gyrus in schizophrenia subjects, was explained by the feature of the visual tasks. Hypoactivation at the bilateral thalamus supports a dysfunctional filtering theory of schizophrenia. (author)

  19. Dynamic representations of human body movement.

    Science.gov (United States)

    Kourtzi, Z; Shiffrar, M

    1999-01-01

    Psychophysical and neurophysiological studies suggest that human body motions can be readily recognized. Human bodies are highly articulated and can move in a nonrigid manner. As a result, we perceive highly dissimilar views of the human form in motion. How does the visual system integrate multiple views of a human body in motion so that we can perceive human movement as a continuous event? The results of a set of priming experiments suggest that motion can readily facilitate the linkage of different views of a moving human. Positive priming was found for novel views of a human body that fell within the path of human movement. However, no priming was observed for novel views outside the path of motion. Furthermore, priming was restricted to those views that satisfied the biomechanical constraints of human movement. These results suggest that visual representation of human movement may be based upon the movement limitations of the human body and may reflect a dynamic interaction of motion and object-recognition processes.

  20. Brain circuits underlying visual stability across eye movements - converging evidence for a neuro-computational model of area LIP

    Directory of Open Access Journals (Sweden)

    Arnold eZiesche

    2014-03-01

    Full Text Available The understanding of the subjective experience of a visually stable world despite the occurrence of an observer's eye movements has been the focus of extensive research for over 20 years. These studies have revealed fundamental mechanisms such as anticipatory receptive field shifts and the saccadic suppression of stimulus displacements, yet there currently exists no single explanatory framework for these observations. We show that a previously presented neuro-computational model of peri-saccadic mislocalization accounts for the phenomenon of predictive remapping and for the observation of saccadic suppression of displacement (SSD. This converging evidence allows us to identify the potential ingredients of perceptual stability that generalize beyond different data sets in a formal physiology-based model. In particular we propose that predictive remapping stabilizes the visual world across saccades by introducing a feedback loop and, as an emergent result, small displacements of stimuli are not noticed by the visual system. The model provides a link from neural dynamics, to neural mechanism and finally to behavior, and thus offers a testable comprehensive framework of visual stability.

  1. Seeing emotion with your ears: emotional prosody implicitly guides visual attention to faces.

    Directory of Open Access Journals (Sweden)

    Simon Rigoulot

    Full Text Available Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0-1250 ms], [1250-2500 ms], [2500-5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.

  2. Seeing Emotion with Your Ears: Emotional Prosody Implicitly Guides Visual Attention to Faces

    Science.gov (United States)

    Rigoulot, Simon; Pell, Marc D.

    2012-01-01

    Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions. PMID:22303454

  3. Reward guides vision when it's your thing: trait reward-seeking in reward-mediated visual priming.

    Directory of Open Access Journals (Sweden)

    Clayton Hickey

    Full Text Available Reward-related mesolimbic dopamine is thought to play an important role in guiding animal behaviour, biasing approach towards potentially beneficial environmental stimuli and away from objects unlikely to garner positive outcome. This is considered to result in part from an impact on perceptual and attentional processes: dopamine initiates a series of cognitive events that result in the priming of reward-associated perceptual features. We have provided behavioural and electrophysiological evidence that this mechanism guides human vision in search, an effect we refer to as reward priming. We have also demonstrated that there is substantial individual variability in this effect. Here we show that behavioural differences in reward priming are predicted remarkably well by a personality index that captures the degree to which a person's behaviour is driven by reward outcome. Participants with reward-seeking personalities are found to be those who allocate visual resources to objects characterized by reward-associated visual features. These results add to a rapidly developing literature demonstrating the crucial role reward plays in attentional control. They additionally illustrate the striking impact personality traits can have on low-level cognitive processes like perception and selective attention.

  4. The cost of making an eye movement: A direct link between visual working memory and saccade execution.

    Science.gov (United States)

    Schut, Martijn J; Van der Stoep, Nathan; Postma, Albert; Van der Stigchel, Stefan

    2017-06-01

    To facilitate visual continuity across eye movements, the visual system must presaccadically acquire information about the future foveal image. Previous studies have indicated that visual working memory (VWM) affects saccade execution. However, the reverse relation, the effect of saccade execution on VWM load is less clear. To investigate the causal link between saccade execution and VWM, we combined a VWM task and a saccade task. Participants were instructed to remember one, two, or three shapes and performed either a No Saccade-, a Single Saccade- or a Dual (corrective) Saccade-task. The results indicate that items stored in VWM are reported less accurately if a single saccade-or a dual saccade-task is performed next to retaining items in VWM. Importantly, the loss of response accuracy for items retained in VWM by performing a saccade was similar to committing an extra item to VWM. In a second experiment, we observed no cost of executing a saccade for auditory working memory performance, indicating that executing a saccade exclusively taxes the VWM system. Our results suggest that the visual system presaccadically stores the upcoming retinal image, which has a similar VWM load as committing one extra item to memory and interferes with stored VWM content. After the saccade, the visual system can retrieve this item from VWM to evaluate saccade accuracy. Our results support the idea that VWM is a system which is directly linked to saccade execution and promotes visual continuity across saccades.

  5. Visual quality analysis of femtosecond LASIK and iris location guided mechanical SBK for high myopia

    Directory of Open Access Journals (Sweden)

    Hong-Su Jiang

    2015-07-01

    Full Text Available AIM: To analyze the visual quality of iris-location-guided femtosecond laser-assisted in situ keratomileusis (LASIK) and iris-location-guided mechanical sub-Bowman keratomileusis (SBK) for the treatment of high myopia. METHODS: Femtosecond LASIK (study group) was performed on 102 eyes of 51 patients with high myopia, and 70 eyes of 35 patients received mechanical SBK (control group), from January to October 2013. The spherical refraction of all patients ranged from -6.00 to -9.50 D, and best corrected visual acuity (BCVA) was ≥1.0. Uncorrected visual acuity (UCVA), BCVA, corneal flap thickness, contrast sensitivity function (CSF) and higher-order ocular aberrations were examined, with follow-up of 1a. RESULTS: At 1a after surgery, 94.1% of eyes in the study group and 94.3% in the control group reached UCVA ≥1.0, with no significant difference between groups (P>0.05). Residual refraction was -0.08±0.10 D in the study group and -0.10±0.07 D in the control group, also without significant difference (P>0.05). The higher-order aberration terms C12 and C8 and RMSH were lower in the study group than in the control group: C12 amplitude was 0.1642±0.0519 vs. 0.2229±0.0382 (t=8.077, P<0.05; t=0.556, P>0.05), and C8 was 0.0950±0.069 vs. 0.1858±0.095 (t=7.261, P<0.05; t=12.801, P<0.05; P>0.05). CONCLUSION: Both femtosecond LASIK and mechanical SBK are effective for high myopia. Compared with mechanical SBK, femtosecond LASIK shows advantages in higher-order ocular aberrations and visual quality, and the corneal flap is more regular from the central to the peripheral area with the femtosecond laser.

  6. Attending and Inhibiting Stimuli That Match the Contents of Visual Working Memory: Evidence from Eye Movements and Pupillometry (2015 GDR Vision meeting)

    OpenAIRE

    Mathôt, Sebastiaan; Heusden, Elle van; Stigchel, Stefan Van der

    2015-01-01

    Slides for: Mathôt, S., & Van Heusden, E., & Van der Stigchel, S. (2015, Dec). Attending and Inhibiting Stimuli That Match the Contents of Visual Working Memory: Evidence from Eye Movements and Pupillometry. Talk presented at the GDR Vision Meeting, Grenoble, France.

  7. Eye movements in depth to visual illusions

    NARCIS (Netherlands)

    Wismeijer, D.A.

    2009-01-01

    We perceive the three-dimensional (3D) environment that surrounds us with deceptive effortlessness. In fact, we are far from comprehending how the visual system provides us with this stable perception of the (3D) world around us. This thesis will focus on the interplay between visual perception of

  8. Online visual feedback during error-free channel trials leads to active unlearning of movement dynamics: evidence for adaptation to trajectory prediction errors.

    Directory of Open Access Journals (Sweden)

    Angel Lago-Rodriguez

    2016-09-01

    Full Text Available Prolonged exposure to movement perturbations leads to the creation of motor memories, which decay towards previous states when the perturbations are removed. However, it remains unclear whether this decay is due only to a spontaneous and passive recovery of the previous state. It has recently been reported that activation of reinforcement-based learning mechanisms delays the onset of the decay. This raises the question of whether other motor learning mechanisms may also contribute to the retention and/or decay of the motor memory. Therefore, we aimed to test whether mechanisms of error-based motor adaptation are active during the decay of the motor memory. Forty-five right-handed participants performed point-to-point reaching movements under an external dynamic perturbation. We measured the expression of the motor memory through error-clamped (EC) trials, in which lateral forces constrained movements to a straight line towards the target. We found greater and faster decay of the motor memory for participants who had access to full online visual feedback during these EC trials (Cursor group), when compared with participants who had no EC feedback regarding movement trajectory (Arc group). Importantly, we did not find between-group differences in adaptation to the external perturbation. In addition, we found greater decay of the motor memory when we artificially increased feedback errors through the manipulation of visual feedback (Augmented-Error group). Our results thus support the notion of an active decay of the motor memory, suggesting that adaptive mechanisms are involved in correcting for the mismatch between predicted movement trajectories and actual sensory feedback, which leads to greater and faster decay of the motor memory.
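    The error-based adaptation mechanism discussed above is commonly formalized as a single-state state-space model, in which a motor memory is updated by a retention factor and an error-driven learning term. The sketch below is an illustrative simulation of that standard formalism, not the authors' analysis; all parameter values (retention, learning rate, clamp errors) are hypothetical.

```python
def simulate_adaptation(n_trials, retention=0.95, learning_rate=0.2,
                        perturbation=1.0, clamp_error=None, x0=0.0):
    """Single-state state-space model of error-based adaptation:
        x[n+1] = A * x[n] + B * e[n]
    where A is a retention factor, B a learning rate, and e[n] the
    experienced error. On error-clamp trials the experienced error is
    imposed directly (clamp_error); otherwise it is the mismatch
    between the perturbation and the current internal state.
    """
    x = x0
    states = []
    for _ in range(n_trials):
        error = clamp_error if clamp_error is not None else perturbation - x
        x = retention * x + learning_rate * error
        states.append(x)
    return states

# Adaptation: the motor memory builds up toward the perturbation.
adapt = simulate_adaptation(100)

# Error-clamp phase: with zero experienced error the memory decays
# passively at the retention rate; an artificially amplified (opposing)
# error, as for an augmented-error manipulation, drives faster decay.
passive = simulate_adaptation(50, clamp_error=0.0, x0=adapt[-1])
active = simulate_adaptation(50, clamp_error=-0.5, x0=adapt[-1])
```

    In this toy model, the Cursor and Augmented-Error results correspond to the experienced error during EC trials being nonzero, so the same learning rule that built the memory actively unlearns it.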

  9. Getting a grip: different actions and visual guidance of the thumb and finger in precision grasping.

    Science.gov (United States)

    Melmoth, Dean R; Grant, Simon

    2012-10-01

    We manipulated the visual information available for grasping to examine what is visually guided when subjects get a precision grip on a common class of object (upright cylinders). In Experiment 1, objects (2 sizes) were placed at different eccentricities to vary the relative proximity to the participant's (n = 6) body of their thumb and finger contact positions in the final grip orientations, with vision available throughout or only for movement programming. Thumb trajectories were straighter and less variable than finger paths, and the thumb normally made initial contact with the objects at a relatively invariant landing site, but consistent thumb first-contacts were disrupted without visual guidance. Finger deviations were more affected by the object's properties and increased when vision was unavailable after movement onset. In Experiment 2, participants (n = 12) grasped 'glow-in-the-dark' objects wearing different luminous gloves in which the whole hand was visible or the thumb or the index finger was selectively occluded. Grip closure times were prolonged and thumb first-contacts disrupted when subjects could not see their thumb, whereas occluding the finger resulted in wider grips at contact because this digit remained distant from the object. Results were together consistent with visual feedback guiding the thumb in the period just prior to contacting the object, with the finger more involved in opening the grip and avoiding collision with the opposite contact surface. As people can overtly fixate only one object contact point at a time, we suggest that selecting one digit for online guidance represents an optimal strategy for initial grip placement. Other grasping tasks, in which the finger appears to be used for this purpose, are discussed.

  10. Op art and visual perception.

    Science.gov (United States)

    Wade, N J

    1978-01-01

    An attempt is made to list the visual phenomena exploited in op art. These include moiré fringes, afterimages, Hermann grid effects, Gestalt grouping principles, blurring and movement due to astigmatic fluctuations in accommodation, scintillation and streaming possibly due to eye movements, and visual persistence. The historical origins of these phenomena are also noted.

  11. The role of eye movement driven attention in functional strabismic amblyopia.

    Science.gov (United States)

    Wang, Hao; Crewther, Sheila Gillard; Yin, Zheng Qin

    2015-01-01

    Strabismic amblyopia "blunt vision" is a developmental anomaly that affects binocular vision and results in lowered visual acuity. Strabismus is a term for a misalignment of the visual axes and is usually characterized by impaired ability of the strabismic eye to take up fixation. Such impaired fixation is usually a function of the temporally and spatially impaired binocular eye movements that normally underlie binocular shifts in visual attention. In this review, we discuss how abnormal eye movement function in children with misaligned eyes influences the development of normal binocular visual attention and results in deficits in visual function such as depth perception. We also discuss how eye movement function deficits in adult amblyopia patients can also lead to other abnormalities in visual perception. Finally, we examine how the nonamblyopic eye of an amblyope is also affected in strabismic amblyopia.

  12. The Role of Eye Movement Driven Attention in Functional Strabismic Amblyopia

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2015-01-01

    Full Text Available Strabismic amblyopia “blunt vision” is a developmental anomaly that affects binocular vision and results in lowered visual acuity. Strabismus is a term for a misalignment of the visual axes and is usually characterized by impaired ability of the strabismic eye to take up fixation. Such impaired fixation is usually a function of the temporally and spatially impaired binocular eye movements that normally underlie binocular shifts in visual attention. In this review, we discuss how abnormal eye movement function in children with misaligned eyes influences the development of normal binocular visual attention and results in deficits in visual function such as depth perception. We also discuss how eye movement function deficits in adult amblyopia patients can also lead to other abnormalities in visual perception. Finally, we examine how the nonamblyopic eye of an amblyope is also affected in strabismic amblyopia.

  13. Modulation of neuronal responses during covert search for visual feature conjunctions.

    Science.gov (United States)

    Buracas, Giedrius T; Albright, Thomas D

    2009-09-29

    While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in the RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for the motion cue but also for the color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions.

  14. Visual Basic 2012 programmer's reference

    CERN Document Server

    Stephens, Rod

    2012-01-01

    The comprehensive guide to Visual Basic 2012 Microsoft Visual Basic (VB) is the most popular programming language in the world, with millions of lines of code used in businesses and applications of all types and sizes. In this edition of the bestselling Wrox guide, Visual Basic expert Rod Stephens offers novice and experienced developers a comprehensive tutorial and reference to Visual Basic 2012. This latest edition introduces major changes to the Visual Studio development platform, including support for developing mobile applications that can take advantage of the Windows 8 operating system

  15. Visual abilities in two raptors with different ecology.

    Science.gov (United States)

    Potier, Simon; Bonadonna, Francesco; Kelber, Almut; Martin, Graham R; Isard, Pierre-François; Dulaurent, Thomas; Duriez, Olivier

    2016-09-01

    Differences in visual capabilities are known to reflect differences in foraging behaviour even among closely related species. Among birds, the foraging of diurnal raptors is assumed to be guided mainly by vision but their foraging tactics include both scavenging upon immobile prey and the aerial pursuit of highly mobile prey. We studied how visual capabilities differ between two diurnal raptor species of similar size: Harris's hawks, Parabuteo unicinctus, which take mobile prey, and black kites, Milvus migrans, which are primarily carrion eaters. We measured visual acuity, foveal characteristics and visual fields in both species. Visual acuity was determined using a behavioural training technique; foveal characteristics were determined using ultra-high resolution spectral-domain optical coherence tomography (OCT); and visual field parameters were determined using an ophthalmoscopic reflex technique. We found that these two raptors differ in their visual capacities. Harris's hawks have a visual acuity slightly higher than that of black kites. Among the five Harris's hawks tested, individuals with higher estimated visual acuity made more horizontal head movements before making a decision. This may reflect an increase in the use of monocular vision. Harris's hawks have two foveas (one central and one temporal), while black kites have only one central fovea and a temporal area. Black kites have a wider visual field than Harris's hawks. This may facilitate the detection of conspecifics when they are scavenging. These differences in the visual capabilities of these two raptors may reflect differences in the perceptual demands of their foraging behaviours. © 2016. Published by The Company of Biologists Ltd.

  16. An automatic registration method for frameless stereotaxy, image guided surgery, and enhanced reality visualization

    International Nuclear Information System (INIS)

    Grimson, W.E.L.; Lozano-Perez, T.; White, S.J.; Wells, W.M. III; Kikinis, R.

    1996-01-01

    There is a need for frameless guidance systems to help surgeons plan the exact location for incisions, to define the margins of tumors, and to precisely identify locations of neighboring critical structures. The authors have developed an automatic technique for registering clinical data, such as segmented magnetic resonance imaging (MRI) or computed tomography (CT) reconstructions, with any view of the patient on the operating table. They demonstrate the method on the specific example of neurosurgery. The method enables a visual mix of live video of the patient and the segmented three-dimensional (3-D) MRI or CT model. This supports enhanced reality techniques for planning and guiding neurosurgical procedures and allows surgeons to interactively view extracranial or intracranial structures nonintrusively. Extensions of the method include image guided biopsies, focused therapeutic procedures, and clinical studies involving change detection over time sequences of images

  17. Sensory Agreement Guides Kinetic Energy Optimization of Arm Movements during Object Manipulation.

    Directory of Open Access Journals (Sweden)

    Ali Farshchiansadegh

    2016-04-01

    Full Text Available The laws of physics establish the energetic efficiency of our movements. In some cases, like locomotion, the mechanics of the body dominate in determining the energetically optimal course of action. In other tasks, such as manipulation, energetic costs depend critically upon the variable properties of objects in the environment. Can the brain identify and follow energy-optimal motions when these motions require moving along unfamiliar trajectories? What feedback information is required for such optimal behavior to occur? To answer these questions, we asked participants to move their dominant hand between different positions while holding a virtual mechanical system with complex dynamics (a planar double pendulum). In this task, trajectories of minimum kinetic energy were along curvilinear paths. Our findings demonstrate that participants were capable of finding the energy-optimal paths, but only when provided with veridical visual and haptic information pertaining to the object; lacking this, the trajectories were executed along rectilinear paths.

  18. The Effects of Mirror Feedback during Target Directed Movements on Ipsilateral Corticospinal Excitability

    Directory of Open Access Journals (Sweden)

    Mathew Yarossi

    2017-05-01

    Full Text Available Mirror visual feedback (MVF) training is a promising technique to promote activation in the lesioned hemisphere following stroke, and aid recovery. However, current outcomes of MVF training are mixed, in part due to variability in the task undertaken during MVF. The present study investigated the hypothesis that movements directed toward visual targets may enhance MVF modulation of motor cortex (M1) excitability ipsilateral to the trained hand compared to movements without visual targets. Ten healthy subjects participated in a 2 × 2 factorial design in which feedback (veridical, mirror) and the presence of a visual target (target present, target absent) for a right index-finger flexion task were systematically manipulated in a virtual environment. To measure M1 excitability, transcranial magnetic stimulation (TMS) was applied to the hemisphere ipsilateral to the trained hand to elicit motor evoked potentials (MEPs) in the untrained first dorsal interosseous (FDI) and abductor digiti minimi (ADM) muscles at rest, prior to and following each of four 2-min blocks of 30 movements (B1–B4). Targeted movement kinematics without visual feedback was measured before and after training to assess learning and transfer. FDI MEPs were decreased in B1 and B2 when movements were made with veridical feedback and visual targets were absent. FDI MEPs were decreased in B2 and B3 when movements were made with mirror feedback and visual targets were absent. FDI MEPs were increased in B3 when movements were made with mirror feedback and visual targets were present. Significant MEP changes were not present for the uninvolved ADM, suggesting a task-specific effect. Analysis of kinematics revealed that learning occurred in visual target-directed conditions, but transfer was not sensitive to mirror feedback. Results are discussed with respect to current theoretical mechanisms underlying MVF-induced changes in ipsilateral excitability.

  19. Hawk eyes II: diurnal raptors differ in head movement strategies when scanning from perches.

    Science.gov (United States)

    O'Rourke, Colleen T; Pitlik, Todd; Hoover, Melissa; Fernández-Juricic, Esteban

    2010-09-22

    Relatively little is known about the degree of inter-specific variability in visual scanning strategies in species with laterally placed eyes (e.g., birds). This is relevant because many species detect prey while perching; therefore, head movement behavior may be an indicator of prey detection rate, a central parameter in foraging models. We studied head movement strategies in three diurnal raptors belonging to the Accipitridae and Falconidae families. We used behavioral recording of individuals under field and captive conditions to calculate the rate of two types of head movements and the interval between consecutive head movements. Cooper's Hawks had the highest rate of regular head movements, which can facilitate tracking prey items in the visually cluttered environment they inhabit (e.g., forested habitats). On the other hand, Red-tailed Hawks showed long intervals between consecutive head movements, which is consistent with prey searching in less visually obstructed environments (e.g., open habitats) and with detecting prey movement from a distance with their central foveae. Finally, American Kestrels have the highest rates of translational head movements (vertical or frontal displacements of the head keeping the bill in the same direction), which have been associated with depth perception through motion parallax. Higher translational head movement rates may be a strategy to compensate for the reduced degree of eye movement of this species. Cooper's Hawks, Red-tailed Hawks, and American Kestrels use both regular and translational head movements, but to different extents. We conclude that these diurnal raptors have species-specific strategies to gather visual information while perching. These strategies may optimize prey search and detection with different visual systems in habitat types with different degrees of visual obstruction.

  20. A neural model of motion processing and visual navigation by cortical area MST.

    Science.gov (United States)

    Grossberg, S; Mingolla, E; Pack, C

    1999-12-01

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.
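    Heading estimation from optic flow, one of the behaviors such models account for, rests on a simple geometric fact: for pure forward translation, every flow vector points away from the focus of expansion (FOE), the image projection of the heading direction. The toy sketch below recovers the FOE by least squares from synthetic flow; it is an illustration of that geometry under assumed data, not the MSTd model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth focus of expansion (heading projected onto the image plane).
foe_true = np.array([0.3, -0.2])

# Random image points with radial flow for pure forward translation:
# each flow vector points away from the FOE, scaled by inverse depth.
pts = rng.uniform(-1, 1, size=(200, 2))
depths = rng.uniform(0.5, 2.0, size=200)
flow = (pts - foe_true) / depths[:, None]

def estimate_foe(pts, flow):
    """Least-squares FOE estimate. Each flow vector's line passes through
    the FOE, giving one linear constraint per point:
        flow_y * foe_x - flow_x * foe_y = flow_y * x - flow_x * y
    """
    A = np.column_stack([flow[:, 1], -flow[:, 0]])
    b = flow[:, 1] * pts[:, 0] - flow[:, 0] * pts[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

foe_est = estimate_foe(pts, flow)
```

    With eye rotation added, the flow field is no longer radial, which is why models like the one above must incorporate extraretinal eye movement signals to recover heading.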

  1. Teach yourself visually Apple Watch

    CERN Document Server

    Hart-Davis, Guy

    2015-01-01

    Master your new smartwatch quickly and easily with this highly visual guide Teach Yourself VISUALLY Apple Watch is a practical, accessible guide to mastering the powerful features and functionality of your new smartwatch. For Apple devotees and new users alike, this easy-to-follow guide features visually rich tutorials and step-by-step instructions that show you how to take advantage of all of the Apple watch's capabilities. You'll learn how to track your health, control household devices, download and install apps, sync your music, sync other Apple devices, and efficiently use the current O

  2. Measuring miniature eye movements by means of a SQUID magnetometer

    NARCIS (Netherlands)

    Peters, M.J.; Dunajski, Z.; Meijzssen, T.E.M.; Breukink, E.W.; Wevers-Henke, J.J.

    1982-01-01

    A new technique to measure small eye movements is reported. The precise recording of human eye movements is necessary for research on visual fatigue induced by visual display units.1 So far all methods used have disadvantages: especially those which are sensitive or are rather painful.2,3 Our method

  3. Visual intelligence Microsoft tools and techniques for visualizing data

    CERN Document Server

    Stacey, Mark; Jorgensen, Adam

    2013-01-01

    Go beyond design concepts and learn to build state-of-the-art visualizations The visualization experts at Microsoft's Pragmatic Works have created a full-color, step-by-step guide to building specific types of visualizations. The book thoroughly covers the Microsoft toolset for data analysis and visualization, including Excel, and explores best practices for choosing a data visualization design, selecting tools from the Microsoft stack, and building a dynamic data visualization from start to finish. You'll examine different types of visualizations, their strengths and weaknesses, a

  4. Evaluating the Performance of a Visually Guided Hearing Aid Using a Dynamic Auditory-Visual Word Congruence Task.

    Science.gov (United States)

    Roverud, Elin; Best, Virginia; Mason, Christine R; Streeter, Timothy; Kidd, Gerald

    2017-12-15

    The "visually guided hearing aid" (VGHA), consisting of a beamforming microphone array steered by eye gaze, is an experimental device being tested for effectiveness in laboratory settings. Previous studies have found that beamforming without visual steering can provide significant benefits (relative to natural binaural listening) for speech identification in spatialized speech or noise maskers when sound sources are fixed in location. The aim of the present study was to evaluate the performance of the VGHA in listening conditions in which target speech could switch locations unpredictably, requiring visual steering of the beamforming. To address this aim, the present study tested an experimental simulation of the VGHA in a newly designed dynamic auditory-visual word congruence task. Ten young normal-hearing (NH) and 11 young hearing-impaired (HI) adults participated. On each trial, three simultaneous spoken words were presented from three source positions (-30°, 0°, and 30° azimuth). An auditory-visual word congruence task was used in which participants indicated whether there was a match between the word printed on a screen at a location corresponding to the target source and the spoken target word presented acoustically from that location. Performance was compared for a natural binaural condition (stimuli presented using impulse responses measured on KEMAR), a simulated VGHA condition (BEAM), and a hybrid condition that combined lowpass-filtered KEMAR and highpass-filtered BEAM information (BEAMAR). In some blocks, the target remained fixed at one location across trials, and in other blocks, the target could transition in location between one trial and the next with a fixed but low probability. Large individual variability in performance was observed. There were significant benefits for the hybrid BEAMAR condition relative to the KEMAR condition on average for both NH and HI groups when the targets were fixed. Although not apparent in the averaged data, some
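    The beamforming at the heart of such a device can be illustrated with a minimal delay-and-sum sketch: each microphone channel is delayed so that sound arriving from the steered direction adds coherently when the channels are averaged. This is an illustrative simplification (the array geometry, sampling rate, and integer-sample delays are assumptions), not the VGHA's actual processing.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, steer_angle_deg, fs, c=343.0):
    """Steer a linear microphone array toward steer_angle_deg
    (0 = broadside) by delaying each channel so the steered direction's
    wavefront is aligned across microphones, then averaging channels.

    signals: array (n_mics, n_samples); mic_positions: (n_mics,) in meters.
    Integer-sample delays are used for simplicity.
    """
    angle = np.deg2rad(steer_angle_deg)
    delays = mic_positions * np.sin(angle) / c            # seconds per mic
    shifts = np.round((delays - delays.min()) * fs).astype(int)
    n = signals.shape[1] - int(shifts.max())
    aligned = np.stack([s[k:k + n] for s, k in zip(signals, shifts)])
    return aligned.mean(axis=0)

# Hypothetical 4-microphone array with 5 cm spacing, steered broadside.
fs = 16000
mics = np.array([0.0, 0.05, 0.10, 0.15])
tone = np.sin(2 * np.pi * 440 * np.arange(1600) / fs)
output = delay_and_sum(np.tile(tone, (4, 1)), mics, 0.0, fs)
```

    In a gaze-steered system, `steer_angle_deg` would be updated continuously from the eye tracker, which is what makes rapid, unpredictable target switches a challenge for the approach.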

  5. Posterior α EEG Dynamics Dissociate Current from Future Goals in Working Memory-Guided Visual Search.

    Science.gov (United States)

    de Vries, Ingmar E J; van Driel, Joram; Olivers, Christian N L

    2017-02-08

    Current models of visual search assume that search is guided by an active visual working memory representation of what we are currently looking for. This attentional template for currently relevant stimuli can be dissociated from accessory memory representations that are only needed prospectively, for a future task, and that should be prevented from guiding current attention. However, it remains unclear what electrophysiological mechanisms dissociate currently relevant (serving upcoming selection) from prospectively relevant memories (serving future selection). We measured EEG of 20 human subjects while they performed two consecutive visual search tasks. Before the search tasks, a cue instructed observers which item to look for first (current template) and which second (prospective template). During the delay leading up to the first search display, we found clear suppression of α band (8-14 Hz) activity in regions contralateral to remembered items, comprising both local power and interregional phase synchronization within a posterior parietal network. Importantly, these lateralization effects were stronger when the memory item was currently relevant (i.e., for the first search) compared with when it was prospectively relevant (i.e., for the second search), consistent with current templates being prioritized over future templates. In contrast, event-related potential analysis revealed that the contralateral delay activity was similar for all conditions, suggesting no difference in storage. Together, these findings support the idea that posterior α oscillations represent a state of increased processing or excitability in task-relevant cortical regions, and reflect enhanced cortical prioritization of memory representations that serve as a current selection filter. SIGNIFICANCE STATEMENT Our days are filled with looking for relevant objects while ignoring irrelevant visual information. Such visual search activity is thought to be driven by current goals activated in

  6. Proprioceptive deafferentation slows down the processing of visual hand feedback

    DEFF Research Database (Denmark)

    Balslev, Daniela; Miall, R Chris; Cole, Jonathan

    2007-01-01

    During visually guided movements both vision and proprioception inform the brain about the position of the hand, so interaction between these two modalities is presumed. Current theories suggest that this interaction occurs by sensory information from both sources being fused into a more reliable, multimodal, percept of hand location. In the literature on perception, however, there is evidence that different sensory modalities interact in the allocation of attention, so that a stimulus in one modality facilitates the processing of a stimulus in a different modality. We investigated whether proprioception facilitates the processing of visual information during motor control. Subjects used a computer mouse to move a cursor to a screen target. In 28% of the trials, pseudorandomly, the cursor was rotated or the target jumped. Reaction time for the trajectory correction in response to this perturbation

  7. Lateral information transfer across saccadic eye movements.

    Science.gov (United States)

    Jüttner, M; Röhler, R

    1993-02-01

    Our perception of the visual world remains stable and continuous despite the disruptions caused by retinal image displacements during saccadic eye movements. The problem of visual stability is closely related to the question of whether information is transferred across such eye movements--and if so, what sort of information is transferred. We report experiments carried out to investigate how presaccadic signals at the location of the saccade goal influence the visibility of postsaccadic test signals presented at the fovea. The signals were Landolt rings of different orientations. If the orientations of pre- and postsaccadic Landolt rings were different, the thresholds of the test signals were elevated by about 20%-25% relative to those at the static control condition. When the orientations were identical, no such elevation occurred. This selective threshold elevation effect proved to be a phenomenon different from ordinary saccadic suppression, although it was closely related to the execution of the saccadic eye movement. The consequences for visual stability are discussed.

  8. Learning rational temporal eye movement strategies.

    Science.gov (United States)

    Hoppe, David; Rothkopf, Constantin A

    2016-07-19

    During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is highly efficient, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies in sportsmen suggest that timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events humans adopt strategies that can be understood through a computational model that includes perceptual and acting uncertainties, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate with behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded actor model the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient in learning temporal regularities in the environment and that it can use these regularities to control the timing of eye movements to detect behaviorally relevant events.
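    The trade-off described above, between the event-detection rate and the intrinsic cost of carrying out eye movements, can be illustrated with a toy expected-utility calculation over candidate checking intervals. This is a deliberately simplified sketch of the idea of a rational bounded actor, not the authors' model; all parameter values below are hypothetical.

```python
import numpy as np

def expected_utility(interval, window=0.5, event_rate=0.2,
                     reward=1.0, cost_per_look=0.05):
    """Expected utility per second of checking a location every
    `interval` seconds. An event is detected only if a look falls
    within `window` seconds of it, so the detection probability for a
    randomly timed event is min(window / interval, 1). Each look
    carries a fixed behavioral cost.
    """
    p_detect = min(window / interval, 1.0)
    return event_rate * reward * p_detect - cost_per_look / interval

# Grid search over candidate inter-look intervals: looking too often
# wastes effort, looking too rarely misses events.
intervals = np.linspace(0.1, 5.0, 500)
utilities = [expected_utility(t) for t in intervals]
best_interval = intervals[int(np.argmax(utilities))]
```

    With these assumed parameters the optimum sits where the detection window just covers the gap between looks; changing the cost or event statistics shifts the rational timing, which is the kind of adjustment the study reports humans learning.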

  9. Evaluation of Sports Visualization Based on Wearable Devices

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2017-12-01

    Full Text Available In order to visualize physical education classes in schools, we created a visualized movement management system that records students' exercise data efficiently and stores the data in a database that a virtual reality client can access. Each individual's exercise data are gathered as source material to study the laws of group movement, playing a strategic role in managing physical education. Through the combination of wearable devices, virtual reality and network technology, student movement data (time, space, rate, etc.) are collected in real time to drive the role model in virtual scenes, which visualizes the movement data. Moreover, a Markov chain-based algorithm is used to predict the movement state. The test results show that this method can quantify the student movement data. Therefore, the application of this system in PE classes can help teachers observe the students' real-time movement amount and state, so as to improve the teaching quality.
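    A first-order Markov chain predictor of the kind the abstract mentions can be sketched as follows: transition probabilities are estimated by maximum-likelihood counts from an observed sequence of movement states, and the most likely next state is read off the fitted matrix. The state names and example sequence below are hypothetical, not from the paper.

```python
from collections import Counter

STATES = ["rest", "walk", "run"]  # hypothetical activity states

def fit_transition_matrix(sequence, states=STATES):
    """Estimate first-order Markov transition probabilities from an
    observed sequence of movement states (maximum-likelihood counts).
    States never observed as a source fall back to a uniform row."""
    counts = Counter(zip(sequence, sequence[1:]))
    matrix = {}
    for s in states:
        total = sum(counts[(s, t)] for t in states)
        matrix[s] = {t: (counts[(s, t)] / total if total else 1.0 / len(states))
                     for t in states}
    return matrix

def predict_next(matrix, current):
    """Most probable next state given the current one."""
    return max(matrix[current], key=matrix[current].get)

# Example: one student's sampled movement states during a class session.
observed = ["rest", "walk", "walk", "run", "walk", "walk", "rest",
            "walk", "walk", "run", "walk", "rest", "walk", "walk"]
model = fit_transition_matrix(observed)
```

    In a live system the sequence would come from the wearable sensor stream, and the predicted state could pre-position the student's avatar in the virtual scene before the next sample arrives.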

  10. Analysis of exploratory eye movements in patients with schizophrenia during visual scanning of projective tests' figures

    Directory of Open Access Journals (Sweden)

    Katerina Lukasova

    2010-01-01

    Full Text Available OBJECTIVE: To compare the pattern of exploratory eye movements during visual scanning of Rorschach and TAT test cards in people with schizophrenia and in controls. METHOD: Ten participants with schizophrenia and ten controls matched by age, schooling and intellectual level took part in the study. Severity of symptoms was evaluated with the Positive and Negative Syndrome Scale. Test cards were divided into three groups: TAT cards with scene content, TAT cards with interaction content (TAT-faces), and Rorschach cards with abstract images. Eye movements were analyzed for the total number, duration and location of fixations and for the length of saccadic movements. RESULTS: A different pattern of eye movement was found, with participants with schizophrenia showing a lower number of fixations but longer fixation durations on the Rorschach and TAT-faces cards. The biggest difference was observed for the Rorschach cards, followed by the TAT-faces and TAT-scene cards. CONCLUSIONS: The results suggest an alteration in visual exploration mechanisms, possibly related to the integration of abstract visual information.

  11. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    The first section unearths what characterizes the literature on camera movement. The second section of the dissertation delineates the history of camera movement itself within narrative cinema. Several organizational principles subtending the on-screen effect of camera movement are revealed in section two, … but they are not organized into a coherent framework. This is the task that section three meets in proposing a functional taxonomy for camera movement in narrative cinema. Two presumptions subtend the taxonomy: that camera movement actively contributes to the way in which we understand the sound and images on the screen …, commentative or valuative manner. 4) Focalization: associating the movement of the camera with the viewpoints of characters or entities in the story world. 5) Reflexive: inviting spectators to engage with the artifice of camera movement. 6) Abstract: visualizing abstract ideas and concepts. In order …

  12. Study of Movement Speeds Down Stairs

    CERN Document Server

    Hoskins, Bryan L

    2013-01-01

    The Study of Movement Speeds Down Stairs closely examines forty-three unique case studies on movement patterns down stairwells. These studies include observations made during evacuation drills, others made during normal usage, interviews with people after fire evacuations, recommendations made from compiled studies, and detailed results from laboratory studies. The methodology used in each study for calculating density and movement speed, when known, are also presented, and this book identifies an additional seventeen variables linked to altering movement speeds. The Study of Movement Speeds Down Stairs is intended for researchers as a reference guide for evaluating pedestrian evacuation dynamics down stairwells. Practitioners working in a related field may also find this book invaluable.

  13. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid

    Science.gov (United States)

    2017-01-01

    Purpose Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired “target” talker while ignoring the speech from unwanted “masker” talkers and other sources of sound. This listening situation forms the classic “cocktail party problem” described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. Method This approach, embodied in a prototype “visually guided hearing aid” (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. Results The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based “spatial filter” operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in “informational masking.” The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in “energetic masking.” Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Conclusions Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic
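    The acoustic beamforming at the heart of the VGHA can be illustrated with a generic far-field delay-and-sum sketch, with the steering angle standing in for eye gaze. This is a textbook simplification, not the VGHA's actual signal processing:

    ```python
    import math

    SPEED_OF_SOUND = 343.0  # m/s

    def delay_and_sum(mic_signals, mic_positions, steer_deg, fs):
        """Far-field delay-and-sum beamformer for a linear microphone array.

        mic_signals: list of equal-length sample lists, one per microphone.
        mic_positions: microphone x-coordinates in metres along the array axis.
        steer_deg: look direction (0 deg = broadside), standing in for eye gaze.
        Integer-sample delays only -- a simplified sketch of the beamforming
        idea, not the VGHA's implementation.
        """
        delays = [x * math.sin(math.radians(steer_deg)) / SPEED_OF_SOUND
                  for x in mic_positions]
        base = min(delays)
        shifts = [round((d - base) * fs) for d in delays]
        n = len(mic_signals[0])
        out = [0.0] * n
        for sig, s in zip(mic_signals, shifts):
            for t in range(n - s):
                out[t] += sig[t + s]      # advance each channel by its delay
        return [v / len(mic_signals) for v in out]
    ```

    Signals arriving from the look direction add coherently after the per-channel shifts, while off-axis sources are attenuated, which is how the beamformer improves signal-to-noise ratio under "energetic masking."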

  14. Novel names extend for how long preschool children sample visual information.

    Science.gov (United States)

    Carvalho, Paulo F; Vales, Catarina; Fausey, Caitlin M; Smith, Linda B

    2018-04-01

    Known words can guide visual attention, affecting how information is sampled. How do novel words, those that do not provide any top-down information, affect preschoolers' visual sampling in a conceptual task? We proposed that novel names can also change visual sampling by influencing how long children look. We investigated this possibility by analyzing how children sample visual information when they hear a sentence with a novel name versus without a novel name. Children completed a match-to-sample task while their moment-to-moment eye movements were recorded using eye-tracking technology. Our analyses were designed to provide specific information on the properties of visual sampling that novel names may change. Overall, we found that novel words prolonged the duration of each sampling event but did not affect sampling allocation (which objects children looked at) or sampling organization (how children transitioned from one object to the next). These results demonstrate that novel words change one important dynamic property of gaze: Novel words can entrain the cognitive system toward longer periods of sustained attention early in development. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. An Indoor Navigation System for the Visually Impaired

    Directory of Open Access Journals (Sweden)

    Luis A. Guerrero

    2012-06-01

    Full Text Available Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have shown to be useful in real scenarios, they involve an important deployment effort or use artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed taking into consideration usability as the quality requirement to be maximized. This solution enables one to identify the position of a person and to calculate the velocity and direction of the user’s movements. Using this information, the system determines the user’s trajectory, locates possible obstacles in that route, and offers navigation information to the user. The solution has been evaluated using two experimental scenarios. Although the results are still not enough to provide strong conclusions, they indicate that the system is suitable to guide visually impaired people through an unknown built environment.

  16. An indoor navigation system for the visually impaired.

    Science.gov (United States)

    Guerrero, Luis A; Vasquez, Francisco; Ochoa, Sergio F

    2012-01-01

    Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have shown to be useful in real scenarios, they involve an important deployment effort or use artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed taking into consideration usability as the quality requirement to be maximized. This solution enables one to identify the position of a person and to calculate the velocity and direction of the user's movements. Using this information, the system determines the user's trajectory, locates possible obstacles in that route, and offers navigation information to the user. The solution has been evaluated using two experimental scenarios. Although the results are still not enough to provide strong conclusions, they indicate that the system is suitable to guide visually impaired people through an unknown built environment.
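    The velocity-and-direction step described in this abstract amounts to a simple dead-reckoning computation over successive position fixes. A minimal sketch; the coordinate frame and heading convention below are assumptions:

    ```python
    import math

    def velocity_and_heading(positions, dt):
        """Estimate speed (m/s) and heading (degrees, 0 deg = +x axis,
        counter-clockwise) from the two most recent (x, y) position fixes,
        `dt` seconds apart. A minimal sketch of the dead-reckoning step;
        the paper does not specify its exact formulation.
        """
        (x0, y0), (x1, y1) = positions[-2], positions[-1]
        dx, dy = x1 - x0, y1 - y0
        speed = math.hypot(dx, dy) / dt
        heading = math.degrees(math.atan2(dy, dx)) % 360.0
        return speed, heading
    ```

    Extrapolating the current speed and heading forward gives the user's projected trajectory, against which obstacles on the route can be checked.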

  17. Interactive Sonification of Spontaneous Movement of Children - Cross-modal Mapping and the Perception of Body Movement Qualities through Sound

    Directory of Open Access Journals (Sweden)

    Emma Frid

    2016-11-01

    Full Text Available In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with other movement characteristics than a model characterized by abrupt variation in amplitude, and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system consisting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3-4 children were simultaneously tracked and sonified, producing 3-4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed children’s spontaneous movement in terms of energy-, smoothness- and directness index. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g. expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g. energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way. We argue that the results from these studies support the existence of a cross

  18. Visualize This The FlowingData Guide to Design, Visualization, and Statistics

    CERN Document Server

    Yau, Nathan

    2011-01-01

    Practical data design tips from a data visualization expert of the modern age Data doesn't decrease; it is ever-increasing and can be overwhelming to organize in a way that makes sense to its intended audience. Wouldn't it be wonderful if we could actually visualize data in such a way that we could maximize its potential and tell a story in a clear, concise manner? Thanks to the creative genius of Nathan Yau, we can. With this full-color book, data visualization guru and author Nathan Yau uses step-by-step tutorials to show you how to visualize and tell stories with data. He explains how to ga

  19. Influence of age, spatial memory, and ocular fixation on localization of auditory, visual, and bimodal targets by human subjects.

    Science.gov (United States)

    Dobreva, Marina S; O'Neill, William E; Paige, Gary D

    2012-12-01

    A common complaint of the elderly is difficulty identifying and localizing auditory and visual sources, particularly in competing background noise. Spatial errors in the elderly may pose challenges and even threats to self and others during everyday activities, such as localizing sounds in a crowded room or driving in traffic. In this study, we investigated the influence of aging, spatial memory, and ocular fixation on the localization of auditory, visual, and combined auditory-visual (bimodal) targets. Head-restrained young and elderly subjects localized targets in a dark, echo-attenuated room using a manual laser pointer. Localization accuracy and precision (repeatability) were quantified for both ongoing and transient (remembered) targets at response delays up to 10 s. Because eye movements bias auditory spatial perception, localization was assessed under target fixation (eyes free, pointer guided by foveal vision) and central fixation (eyes fixed straight ahead, pointer guided by peripheral vision) conditions. Spatial localization across the frontal field in young adults demonstrated (1) horizontal overshoot and vertical undershoot for ongoing auditory targets under target fixation conditions, but near-ideal horizontal localization with central fixation; (2) accurate and precise localization of ongoing visual targets guided by foveal vision under target fixation that degraded when guided by peripheral vision during central fixation; (3) overestimation in horizontal central space (±10°) of remembered auditory, visual, and bimodal targets with increasing response delay. In comparison with young adults, elderly subjects showed (1) worse precision in most paradigms, especially when localizing with peripheral vision under central fixation; (2) greatly impaired vertical localization of auditory and bimodal targets; (3) increased horizontal overshoot in the central field for remembered visual and bimodal targets across response delays; (4) greater vulnerability to
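    The accuracy/precision split used in this study can be quantified directly from repeated pointing responses. A minimal sketch, taking accuracy as the mean signed error (constant bias, e.g. overshoot) and precision as response repeatability; this is the conventional reading, not necessarily the authors' exact formulas:

    ```python
    import statistics

    def localization_accuracy_precision(responses, target):
        """Accuracy and precision of repeated localization responses.

        responses: pointed azimuths (degrees) for one target azimuth.
        accuracy  = mean signed error (systematic over/undershoot);
        precision = standard deviation of responses (repeatability).
        """
        errors = [r - target for r in responses]
        accuracy = statistics.mean(errors)
        precision = statistics.stdev(responses)
        return accuracy, precision
    ```

    Under this split, the elderly group's "worse precision" corresponds to a larger response SD even when the mean error (bias) is unchanged.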

  20. Interactive Sonification of Spontaneous Movement of Children-Cross-Modal Mapping and the Perception of Body Movement Qualities through Sound.

    Science.gov (United States)

    Frid, Emma; Bresin, Roberto; Alborno, Paolo; Elblaus, Ludvig

    2016-01-01

    In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with other movement characteristics than a model characterized by abrupt variation in amplitude and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system constituting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3-4 children were simultaneously tracked and sonified, producing 3-4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed children's spontaneous movement in terms of energy-, smoothness- and directness-index. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g., expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g., energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way. We argue that the results from these studies support the existence of a
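    Two of the movement descriptors named in this abstract, directness and energy, can be approximated from tracked head positions. The definitions below (path-straightness ratio and mean squared speed) are plausible readings of those indices, not the authors' exact formulas:

    ```python
    import math

    def movement_indices(path, dt):
        """Directness and energy indices for a 2-D trajectory.

        path: (x, y) positions sampled every `dt` seconds.
        directness = straight-line distance / travelled path length
                     (1.0 means a perfectly straight movement);
        energy     = mean squared speed over the trajectory.
        Both definitions are illustrative stand-ins for the indices
        named in the abstract.
        """
        steps = [math.dist(p, q) for p, q in zip(path, path[1:])]
        travelled = sum(steps)
        direct = math.dist(path[0], path[-1])
        directness = direct / travelled if travelled else 1.0
        energy = sum((s / dt) ** 2 for s in steps) / len(steps)
        return directness, energy
    ```

    A child wandering around the room scores low on directness; fast, vigorous movement scores high on energy regardless of where it goes.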

  1. Interactive Sonification of Spontaneous Movement of Children—Cross-Modal Mapping and the Perception of Body Movement Qualities through Sound

    Science.gov (United States)

    Frid, Emma; Bresin, Roberto; Alborno, Paolo; Elblaus, Ludvig

    2016-01-01

    In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with other movement characteristics than a model characterized by abrupt variation in amplitude and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system constituting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3–4 children were simultaneously tracked and sonified, producing 3–4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed children's spontaneous movement in terms of energy-, smoothness- and directness-index. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g., expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g., energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way. We argue that the results from these studies support the existence of a

  2. The interplay of bottom-up and top-down mechanisms in visual guidance during object naming.

    Science.gov (United States)

    Coco, Moreno I; Malcolm, George L; Keller, Frank

    2014-01-01

    An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data of a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects because they deviate from scene context and hence need longer processing. Overall, this study suggests that different sources of information are interactively used to guide visual attention on the targets to be named and raises new questions for existing theories of visual attention.

  3. fMRI evidence of improved visual function in patients with progressive retinitis pigmentosa by eye-movement training.

    Science.gov (United States)

    Yoshida, Masako; Origuchi, Maki; Urayama, Shin-Ichi; Takatsuki, Akira; Kan, Shigeyuki; Aso, Toshihiko; Shiose, Takayuki; Sawamoto, Nobukatsu; Miyauchi, Satoru; Fukuyama, Hidenao; Seiyama, Akitoshi

    2014-01-01

    To evaluate changes in the visual processing of patients with progressive retinitis pigmentosa (RP) who acquired improved reading capability by eye-movement training (EMT), we performed functional magnetic resonance imaging (fMRI) before and after EMT. Six patients with bilateral concentric contraction caused by pigmentary degeneration of the retina and 6 normal volunteers were recruited. Patients were given EMT for 5 min every day for 8-10 months. fMRI data were acquired on a 3.0-Tesla scanner while subjects were performing reading tasks. In separate experiments (before fMRI scanning), visual performances for readings were measured by the number of letters read correctly in 5 min. Before EMT, activation areas of the primary visual cortex of patients were 48.8% of those of the controls. The number of letters read correctly in 5 min was 36.6% of those by the normal volunteers. After EMT, the activation areas of patients were not changed or slightly decreased; however, reading performance increased in 5 of 6 patients, which was 46.6% of that of the normal volunteers (p< 0.05). After EMT, increased activity was observed in the frontal eye fields (FEFs) of all patients; however, increases in the activity of the parietal eye fields (PEFs) were observed only in patients who showed greater improvement in reading capability. The improvement in reading ability of the patients after EMT is regarded as an effect of the increased activity of FEF and PEF, which play important roles in attention and working memory as well as the regulation of eye movements.

  4. fMRI evidence of improved visual function in patients with progressive retinitis pigmentosa by eye-movement training

    Directory of Open Access Journals (Sweden)

    Masako Yoshida

    2014-01-01

    Full Text Available To evaluate changes in the visual processing of patients with progressive retinitis pigmentosa (RP) who acquired improved reading capability by eye-movement training (EMT), we performed functional magnetic resonance imaging (fMRI) before and after EMT. Six patients with bilateral concentric contraction caused by pigmentary degeneration of the retina and 6 normal volunteers were recruited. Patients were given EMT for 5 min every day for 8–10 months. fMRI data were acquired on a 3.0-Tesla scanner while subjects were performing reading tasks. In separate experiments (before fMRI scanning), visual performances for readings were measured by the number of letters read correctly in 5 min. Before EMT, activation areas of the primary visual cortex of patients were 48.8% of those of the controls. The number of letters read correctly in 5 min was 36.6% of those by the normal volunteers. After EMT, the activation areas of patients were not changed or slightly decreased; however, reading performance increased in 5 of 6 patients, which was 46.6% of that of the normal volunteers (p < 0.05). After EMT, increased activity was observed in the frontal eye fields (FEFs) of all patients; however, increases in the activity of the parietal eye fields (PEFs) were observed only in patients who showed greater improvement in reading capability. The improvement in reading ability of the patients after EMT is regarded as an effect of the increased activity of FEF and PEF, which play important roles in attention and working memory as well as the regulation of eye movements.

  5. Saccadic updating of object orientation for grasping movements

    NARCIS (Netherlands)

    Selen, L.P.J.; Medendorp, W.P.

    2011-01-01

    Reach and grasp movements are a fundamental part of our daily interactions with the environment. This spatially-guided behavior is often directed to memorized objects because of intervening eye movements that caused them to disappear from sight. How does the brain store and maintain the spatial

  6. Impaired Visual Motor Coordination in Obese Adults.

    LENUS (Irish Health Repository)

    Gaul, David

    2016-09-01

    Objective. To investigate whether obesity alters the sensory motor integration process and movement outcome during a visual rhythmic coordination task. Methods. 88 participants (44 obese and 44 matched controls) sat on a chair equipped with a wrist pendulum oscillating in the sagittal plane. The task was to swing the pendulum in synchrony with a moving visual stimulus displayed on a screen. Results. Obese participants demonstrated significantly (p < 0.01) higher values for continuous relative phase (CRP), indicating a poorer level of coordination, increased movement variability (p < 0.05), and a larger movement amplitude (p < 0.05) than their healthy-weight counterparts. Conclusion. These results highlight the existence of visual sensory integration deficiencies in obese participants. The obese group had greater difficulty synchronizing their movement with a visual stimulus. Considering that visual motor coordination is an essential component of many activities of daily living, any impairment could significantly affect quality of life.
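    The continuous relative phase (CRP) measure reported above is computed from position and velocity time series. A minimal Python sketch using the normalized phase-plane construction, which is one common way to define CRP; the study's exact pipeline is not specified in the abstract:

    ```python
    import math

    def continuous_relative_phase(x, y, dt):
        """Continuous relative phase (CRP) between two oscillating signals.

        Each signal's phase angle is taken from its normalized
        position-velocity phase plane (a common CRP construction).
        Returns one CRP value in degrees per interior sample,
        wrapped to (-180, 180]; 0 deg means perfect in-phase coordination.
        """
        def phases(sig):
            # central-difference velocity, then normalize both axes
            vel = [(sig[i + 1] - sig[i - 1]) / (2 * dt)
                   for i in range(1, len(sig) - 1)]
            pos = sig[1:-1]
            pmax = max(abs(p) for p in pos) or 1.0
            vmax = max(abs(v) for v in vel) or 1.0
            return [math.atan2(v / vmax, p / pmax) for p, v in zip(pos, vel)]
        return [((math.degrees(a - b) + 180.0) % 360.0) - 180.0
                for a, b in zip(phases(x), phases(y))]
    ```

    Larger mean absolute CRP between the pendulum and the visual stimulus, as found for the obese group, indicates a looser phase lock between movement and stimulus.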

  7. The selective disruption of spatial working memory by eye movements.

    Science.gov (United States)

    Postle, Bradley R; Idzikowski, Christopher; Sala, Sergio Della; Logie, Robert H; Baddeley, Alan D

    2006-01-01

    In the late 1970s/early 1980s, Baddeley and colleagues conducted a series of experiments investigating the role of eye movements in visual working memory. Although only described briefly in a book, these studies have influenced a remarkable number of empirical and theoretical developments in fields ranging from experimental psychology to human neuropsychology to nonhuman primate electrophysiology. This paper presents, in full detail, three critical studies from this series, together with a recently performed study that includes a level of eye movement measurement and control that was not available for the older studies. Together, the results demonstrate several facts about the sensitivity of visuospatial working memory to eye movements. First, it is eye movement control, not movement per se, that produces the disruptive effects. Second, these effects are limited to working memory for locations and do not generalize to visual working memory for shapes. Third, they can be isolated to the storage/maintenance components of working memory (e.g., to the delay period of the delayed-recognition task). These facts have important implications for models of visual working memory.

  8. Accessing forgotten memory traces from long-term memory via visual movements

    Directory of Open Access Journals (Sweden)

    Estela Camara

    2014-11-01

    Full Text Available Because memory retrieval often requires overt responses, it is difficult to determine to what extent forgetting occurs as a problem in explicitly accessing long-term memory traces. In this study, we used eye-tracking measures in combination with a behavioural task that favoured high forgetting rates to investigate the existence of memory traces from long-term memory in spite of failure in accessing them consciously. In 2 experiments, participants were encouraged to encode a large set of sound-picture-location associations. In a later test, sounds were presented and participants were instructed to visually scan, before a verbal memory report, for the correct location of the associated pictures on an empty screen. We found that the reactivation of associated memories by sound cues at test biased oculomotor behaviour towards locations congruent with memory representations, even when participants failed to consciously provide a memory report. These findings reveal the emergence of a memory-guided behaviour that can be used to map internal representations of forgotten memories from long-term memory.

  9. Control of aperture closure initiation during reach-to-grasp movements under manipulations of visual feedback and trunk involvement in Parkinson's disease.

    Science.gov (United States)

    Rand, Miya Kato; Lemay, Martin; Squire, Linda M; Shimansky, Yury P; Stelmach, George E

    2010-03-01

    The present project was aimed at investigating how two distinct and important difficulties (coordination difficulty and pronounced dependency on visual feedback) in Parkinson's disease (PD) affect each other for the coordination between hand transport toward an object and the initiation of finger closure during reach-to-grasp movement. Subjects with PD and age-matched healthy subjects made reach-to-grasp movements to a dowel under conditions in which the target object and/or the hand were either visible or not visible. The involvement of the trunk in task performance was manipulated by positioning the target object within or beyond the participant's outstretched arm to evaluate the effects of increasing the complexity of intersegmental coordination under different conditions related to the availability of visual feedback in subjects with PD. General kinematic characteristics of the reach-to-grasp movements of the subjects with PD were altered substantially by the removal of target object visibility. Compared with the controls, the subjects with PD considerably lengthened transport time, especially during the aperture closure period, and decreased peak velocity of wrist and trunk movement without target object visibility. Most of these differences were accentuated when the trunk was involved. In contrast, these kinematic parameters did not change depending on the visibility of the hand for both groups. The transport-aperture coordination was assessed in terms of the control law according to which the initiation of aperture closure during the reach occurred when the hand distance-to-target crossed a hand-target distance threshold for grasp initiation that is a function of peak aperture, hand velocity and acceleration, trunk velocity and acceleration, and trunk-target distance at the time of aperture closure initiation. When the hand or the target object was not visible, both groups increased the hand-target distance threshold for grasp initiation compared to its
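    The control law described here, under which finger closure starts once hand-target distance falls below a threshold that depends on peak aperture and hand/trunk kinematics, can be sketched as a simple rule. The linear form and all coefficients below are placeholder assumptions for illustration, not fitted values from the study:

    ```python
    def aperture_closure_initiated(dist_to_target, peak_aperture,
                                   hand_speed, trunk_speed,
                                   coeffs=(0.4, 0.12, 0.08), base=0.02):
        """Illustrative form of the transport-aperture control law.

        Closure is initiated when the hand-target distance drops below a
        threshold that grows with peak aperture and with hand and trunk
        velocity. Distances in metres, speeds in m/s. The linear form and
        every coefficient are arbitrary placeholders.
        """
        k_a, k_h, k_t = coeffs
        threshold = base + k_a * peak_aperture + k_h * hand_speed + k_t * trunk_speed
        return dist_to_target <= threshold
    ```

    The finding that removing target visibility raised the distance threshold corresponds, in this sketch, to enlarging `base`: closure begins farther from the object, a more conservative strategy.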

  10. A newly developed technique of wireless remote controlled visual inspection system for neutron guides of cold neutron research facilities at HANARO

    International Nuclear Information System (INIS)

    Huh, Hyung; Cho, Yeong Garp; Kim, Jong In

    2012-01-01

    KAERI developed a neutron guide system for cold neutron research facilities at HANARO from 2003 to 2010. In 2008, the old plug shutter and instruments were removed, and a new plug and primary shutter were installed as the first cold neutron delivery system at HANARO. At the beginning of 2010, all the neutron guides and accessories had been successfully installed as well. The neutron guide system of HANARO consists of the in-pile plug assembly with in-pile guides, the primary shutter with in-shutter guides, the neutron guides in the guide shielding room with a secondary shutter, and the neutron guides in the neutron guide hall. Three kinds of glass materials were selected with optimum lengths by considering their lifetime, shielding, maintainability and cost. Radiation damage to the guides can occur on the coating and glass through neutron capture in the glass. Inspecting a guide failure is a big challenge because of the difficult surrounding environment, such as the high radiation level, the limited working space, and the laborious work of removing and reinstalling the shielding blocks, as shown in Fig 1. Therefore, KAERI has developed a wireless remote controlled visual inspection system for neutron guides using an infrared light camera mounted on the vehicle moving in the guide

  11. Towards real-time cardiovascular magnetic resonance-guided transarterial aortic valve implantation: In vitro evaluation and modification of existing devices

    Directory of Open Access Journals (Sweden)

    Ladd Mark E

    2010-10-01

    Full Text Available Abstract Background Cardiovascular magnetic resonance (CMR) is considered an attractive alternative for guiding transarterial aortic valve implantation (TAVI), featuring unlimited scan plane orientation and unsurpassed soft-tissue contrast with simultaneous device visualization. We sought to evaluate the CMR characteristics of both currently commercially available transcatheter heart valves (Edwards SAPIEN™, Medtronic CoreValve®), including their dedicated delivery devices, and of a custom-built, CMR-compatible delivery device for the Medtronic CoreValve® prosthesis as an initial step towards real-time CMR-guided TAVI. Methods The devices were systematically examined in phantom models on a 1.5-Tesla scanner using high-resolution T1-weighted 3D FLASH, real-time TrueFISP and flow-sensitive phase-contrast sequences. Images were analyzed for device visualization quality, device-related susceptibility artifacts, and radiofrequency signal shielding. Results CMR revealed major susceptibility artifacts for the two commercial delivery devices caused by considerable metal braiding and precluding in vivo application. The stainless steel-based Edwards SAPIEN™ prosthesis was also regarded as not suitable for CMR-guided TAVI due to susceptibility artifacts exceeding the valve's dimensions and hindering an exact placement. In contrast, the nitinol-based Medtronic CoreValve® prosthesis was excellently visualized with delineation even of small details and, thus, regarded as suitable for CMR-guided TAVI, particularly since reengineering of its delivery device toward CMR-compatibility resulted in artifact elimination and excellent visualization during catheter movement and valve deployment on real-time TrueFISP imaging. Reliable flow measurements could be performed for both stent-valves after deployment using phase-contrast sequences. Conclusions The present study shows that the Medtronic CoreValve® prosthesis is potentially suited for real-time CMR-guided placement

  12. Analysis of EEG Related Saccadic Eye Movement

    Science.gov (United States)

    Funase, Arao; Kuno, Yoshiaki; Okuma, Shigeru; Yagi, Tohru

    Our final goal is to establish a model of saccadic eye movement that connects the saccade and the electroencephalogram (EEG). As a first step toward this goal, we recorded and analyzed saccade-related EEG. In the study reported in this paper, we tried to detect EEG activity peculiar to eye movement. In these experiments, each subject was instructed to point their eyes toward visual targets (LEDs) or in the direction of sound sources (buzzers). In the control cases, the EEG was recorded with no eye movements. In the visual experiments, we found that the EEG potential changed sharply over the occipital lobe just before eye movement, and similar results were observed in the auditory experiments. In the visual and auditory experiments without eye movement, no such sharp change in the EEG was observed. Moreover, when the subject moved his/her eyes toward a right-side target, the change in EEG potential was found over the right occipital lobe; conversely, when the subject moved his/her eyes toward a left-side target, the sharp change in EEG potential was found over the left occipital lobe.
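    The kind of pre-movement potential change described above can be illustrated with a toy detector. The sketch below is illustrative only; the sampling rate, baseline rule, and threshold factor are assumptions, not parameters from the study. It flags samples whose sample-to-sample slope departs sharply from the baseline slope variability:

```python
# Hypothetical sketch: flag a "sharp change" in a single occipital EEG
# channel as a large sample-to-sample slope relative to baseline noise.
# Thresholds and sampling rate are illustrative, not from the study.

def detect_sharp_change(samples, fs=256, k=4.0):
    """Return diff indices where the slope exceeds the baseline mean
    slope by more than k baseline standard deviations (baseline = the
    first 0.25 s of the trace)."""
    n_base = max(2, int(0.25 * fs))
    diffs = [samples[i + 1] - samples[i] for i in range(len(samples) - 1)]
    base = diffs[:n_base]
    mean = sum(base) / len(base)
    var = sum((d - mean) ** 2 for d in base) / len(base)
    thresh = k * (var ** 0.5)
    return [i for i, d in enumerate(diffs) if abs(d - mean) > thresh]

# Flat baseline followed by an abrupt negative deflection:
trace = [0.0] * 100 + [-5.0 * i for i in range(1, 10)]
onsets = detect_sharp_change(trace)
```

    A real analysis would of course operate on filtered, epoched, multi-channel data; this only shows the shape of a slope-threshold detector.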

  13. The innate responses of bumble bees to flower patterns: separating the nectar guide from the nectary changes bee movements and search time

    Science.gov (United States)

    Goodale, Eben; Kim, Edward; Nabors, Annika; Henrichon, Sara; Nieh, James C.

    2014-06-01

    Nectar guides can enhance pollinator efficiency and plant fitness by allowing pollinators to more rapidly find and remember the location of floral nectar. We tested if a radiating nectar guide around a nectary would enhance the ability of naïve bumble bee foragers to find nectar. Most experiments that test nectar guide efficacy, specifically radiating linear guides, have used guides positioned around the center of a radially symmetric flower, where nectaries are often found. However, the flower center may be intrinsically attractive. We therefore used an off-center guide and nectary and compared "conjunct" feeders with a nectar guide surrounding the nectary to "disjunct" feeders with a nectar guide separated from the nectary. We focused on the innate response of novice bee foragers that had never previously visited such feeders. We hypothesized that a disjunct nectar guide would conflict with the visual information provided by the nectary and negatively affect foraging. Approximately equal numbers of bumble bees (Bombus impatiens) found nectar on both feeder types. On disjunct feeders, however, unsuccessful foragers spent significantly more time (on average 1.6-fold longer) searching for nectar than any other forager group. Successful foragers on disjunct feeders approached these feeders from random directions, unlike successful foragers on conjunct feeders, which preferentially approached the combined nectary and nectar guide. Thus, the nectary and a surrounding nectar guide can be considered a combination of two signals that attract naïve foragers even when not in the floral center.

  14. Object-based target templates guide attention during visual search.

    Science.gov (United States)

    Berggren, Nick; Eimer, Martin

    2018-05-03

    During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (sustained posterior contralateral negativity; SPCN) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target features (incorrect conjunction objects, e.g., blue squares). Because feature-based guidance cannot distinguish these objects from targets, any selective bias for targets will reflect object-based attentional control. In Experiment 1, where search displays always contained only one object with target-matching features, targets and incorrect conjunction objects elicited identical N2pc and SPCN components, demonstrating that attentional guidance was entirely feature-based. In Experiment 2, where targets and incorrect conjunction objects could appear in the same display, clear evidence for object-based attentional control was found. The target N2pc became larger than the N2pc to incorrect conjunction objects from 250 ms poststimulus, and only targets elicited SPCN components. This demonstrates that after an initial feature-based guidance phase, object-based templates are activated when they are required to distinguish target and nontarget objects. These templates modulate visual processing and control access to working memory, and their activation may coincide with the start of feature integration processes. Results also suggest that while multiple feature templates can be activated concurrently, only a single object-based target template can guide attention at any given time. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  15. Eye movements and serial memory for visual-spatial information: does time spent fixating contribute to recall?

    Science.gov (United States)

    Saint-Aubin, Jean; Tremblay, Sébastien; Jalbert, Annie

    2007-01-01

    This research investigated the nature of encoding and its contribution to serial recall for visual-spatial information. In order to do so, we examined the relationship between fixation duration and recall performance. Using the dot task--a series of seven dots spatially distributed on a monitor screen is presented sequentially for immediate recall--performance and eye-tracking data were recorded during the presentation of the to-be-remembered items. When participants were free to move their eyes at their will, both fixation durations and probability of correct recall decreased as a function of serial position. Furthermore, imposing constant durations of fixation across all serial positions had a beneficial impact (though relatively small) on item but not order recall. Great care was taken to isolate the effect of fixation duration from that of presentation duration. Although eye movement at encoding contributes to immediate memory, it is not decisive in shaping serial recall performance. Our results also provide further evidence that the distinction between item and order information, well-established in the verbal domain, extends to visual-spatial information.

  16. Continuous Auditory Feedback of Eye Movements: An Exploratory Study toward Improving Oculomotor Control

    Directory of Open Access Journals (Sweden)

    Eric O. Boyer

    2017-04-01

    Full Text Available As eye movements are mostly automatic and overtly generated to attain visual goals, individuals have poor metacognitive knowledge of their own eye movements. We present an exploratory study on the effects of real-time continuous auditory feedback generated by eye movements. We considered both a tracking task and a production task in which smooth pursuit eye movements (SPEM) can be endogenously generated. In particular, we used a visual paradigm which makes it possible to generate and control SPEM in the absence of a moving visual target. We investigated whether real-time auditory feedback of eye movement dynamics might improve learning in both tasks, through a training protocol over 8 days. The results indicate that real-time sonification of eye movements can actually modify oculomotor behavior and reinforce intrinsic oculomotor perception. Nevertheless, large inter-individual differences were observed, preventing us from reaching a strong conclusion on sensorimotor learning improvements.

  17. Are the surgeon's movements repeatable? An analysis of the feasibility and expediency of implementing support procedures guiding the surgical tools and increasing motion accuracy during the performance of stereotypical movements by the surgeon.

    Science.gov (United States)

    Podsędkowski, Leszek Robert; Moll, Jacek; Moll, Maciej; Frącczak, Łukasz

    2014-03-01

    The developments in surgical robotics suggest that it will be possible to entrust surgical robots with a wider range of tasks. So far, it has not been possible to automate the surgery procedures related to soft tissue. Thus, the objective of the conducted studies was to confirm the hypothesis that the surgery telemanipulator can be equipped with certain routines supporting the surgeon in leading the surgical tools and increasing motion accuracy during stereotypical movements. As the first step in facilitating the surgery, an algorithm will be developed which will concurrently provide automation and allow the surgeon to maintain full control over the slave robot. The algorithm will assist the surgeon in performing typical movement sequences. This kind of support must, however, be preceded by determining the reference points for accurately defining the position of the stitched tissue. It is in relation to these points that the tool's trajectory will be created, along which the master manipulator will guide the surgeon's hand. The paper presents the first stage, concerning the selection of movements for which the support algorithm will be used. The work also contains an analysis of surgical movement repeatability. The suturing movement was investigated in detail by experimental research in order to determine motion repeatability and verify the position of the stitched tissue. Tool trajectory was determined by a motion capture stereovision system. The study has demonstrated that the suturing movement could be considered as repeatable; however, the trajectories performed by different surgeons exhibit some individual characteristics.
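    One simple way to quantify the kind of trajectory repeatability examined above is to resample two recorded tool paths to a common number of points and average the point-to-point distance. The sketch below is a hypothetical illustration of that idea, not the paper's actual motion-capture analysis; all names and sample trajectories are invented:

```python
# Illustrative repeatability score for two (x, y, z) tool trajectories:
# linearly resample each to n points, then take the mean point-to-point
# Euclidean distance. Not the paper's method; purely a sketch.
import math

def resample(traj, n=50):
    """Linearly resample a list of (x, y, z) points to n points."""
    out = []
    for k in range(n):
        t = k * (len(traj) - 1) / (n - 1)
        i, frac = int(t), t - int(t)
        j = min(i + 1, len(traj) - 1)
        out.append(tuple(a + frac * (b - a) for a, b in zip(traj[i], traj[j])))
    return out

def mean_deviation(traj_a, traj_b, n=50):
    ra, rb = resample(traj_a, n), resample(traj_b, n)
    return sum(math.dist(p, q) for p, q in zip(ra, rb)) / n

# Two passes of the same suturing arc, one shifted by 1 unit in x:
arc = [(math.cos(t / 10), math.sin(t / 10), 0.0) for t in range(30)]
shifted = [(x + 1.0, y, z) for x, y, z in arc]
```

    `math.dist` requires Python 3.8 or later; a time-normalizing method such as dynamic time warping would be more robust for movements performed at different speeds.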

  18. Analysis and visualization of animal movement

    NARCIS (Netherlands)

    Shamoun-Baranes, J.; van Loon, E.E.; Purves, R.S.; Speckmann, B.; Weiskopf, D.; Camphuysen, C.J.

    2012-01-01

    The interdisciplinary workshop ‘Analysis and Visualization of Moving Objects’ was held at the Lorentz Centre in Leiden, The Netherlands, from 27 June to 1 July 2011. It brought together international specialists from ecology, computer science and geographical information science actively involved in

  19. Eye Movements When Viewing Advertisements

    Directory of Open Access Journals (Sweden)

    Emily Higgins

    2014-03-01

    Full Text Available In this selective review, we examine key findings on eye movements when viewing advertisements. We begin with a brief, general introduction to the properties and neural underpinnings of saccadic eye movements. Next, we provide an overview of eye movement behavior during reading, scene perception, and visual search, since each of these activities is, at various times, involved in viewing ads. We then review the literature on eye movements when viewing print ads and warning labels (of the kind that appear on alcohol and tobacco ads, before turning to a consideration of advertisements in dynamic media (television and the Internet. Finally, we propose topics and methodological approaches that may prove to be useful in future research.

  20. Distinct roles of visual, parietal, and frontal motor cortices in memory-guided sensorimotor decisions.

    Science.gov (United States)

    Goard, Michael J; Pho, Gerald N; Woodson, Jonathan; Sur, Mriganka

    2016-08-04

    Mapping specific sensory features to future motor actions is a crucial capability of mammalian nervous systems. We investigated the role of visual (V1), posterior parietal (PPC), and frontal motor (fMC) cortices for sensorimotor mapping in mice during performance of a memory-guided visual discrimination task. Large-scale calcium imaging revealed that V1, PPC, and fMC neurons exhibited heterogeneous responses spanning all task epochs (stimulus, delay, response). Population analyses demonstrated unique encoding of stimulus identity and behavioral choice information across regions, with V1 encoding stimulus, fMC encoding choice even early in the trial, and PPC multiplexing the two variables. Optogenetic inhibition during behavior revealed that all regions were necessary during the stimulus epoch, but only fMC was required during the delay and response epochs. Stimulus identity can thus be rapidly transformed into behavioral choice, requiring V1, PPC, and fMC during the transformation period, but only fMC for maintaining the choice in memory prior to execution.
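    The population-decoding idea behind "unique encoding of stimulus identity and behavioral choice" can be illustrated with a toy nearest-centroid readout on simulated two-neuron data. The study's actual analyses were far richer; everything below is invented purely for illustration:

```python
# Toy population decoder: label a trial's activity vector by the
# nearer of two class centroids (trial-averaged activity patterns).
# Simulated data; not the study's imaging analysis.
import math

def centroid(trials):
    n = len(trials)
    return [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]

def decode(trial, c_a, c_b):
    """Return 'A' if the trial vector is closer to centroid c_a."""
    return "A" if math.dist(trial, c_a) < math.dist(trial, c_b) else "B"

# Two simulated "neurons": class A drives neuron 0, class B neuron 1.
trials_A = [[5.0, 1.0], [4.5, 1.2], [5.2, 0.8]]
trials_B = [[1.0, 5.0], [1.3, 4.6], [0.9, 5.1]]
c_a, c_b = centroid(trials_A), centroid(trials_B)
labels = [decode(t, c_a, c_b) for t in trials_A + trials_B]
```

    In practice such decoders are cross-validated and applied epoch by epoch, which is how choice information can be shown to appear in one region earlier than another.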

  1. Quantification of retinal tangential movement in epiretinal membranes

    DEFF Research Database (Denmark)

    Kofod, Mads; la Cour, Morten

    2012-01-01

    To describe a technique of quantifying retinal vessel movement in eyes with epiretinal membrane (ERM) and correlate the retinal vessel movement with changes in best-corrected visual acuity (BCVA), central macular thickness (CMT), and patients' subjective reports about experienced symptoms (sympto...

  2. Using visuo-kinetic virtual reality to induce illusory spinal movement: the MoOVi Illusion.

    Science.gov (United States)

    Harvie, Daniel S; Smith, Ross T; Hunter, Estin V; Davis, Miles G; Sterling, Michele; Moseley, G Lorimer

    2017-01-01

    Illusions that alter perception of the body provide novel opportunities to target brain-based contributions to problems such as persistent pain. One example of this, mirror therapy, uses vision to augment perceived movement of a painful limb to treat pain. Since mirrors can't be used to induce augmented neck or other spinal movement, we aimed to test whether such an illusion could be achieved using virtual reality, in advance of testing its potential therapeutic benefit. We hypothesised that perceived head rotation would depend on visually suggested movement. In a within-subjects repeated measures experiment, 24 healthy volunteers performed neck movements to 50° of rotation, while a virtual reality system delivered corresponding visual feedback that was offset by a factor of 50%-200% (the Motor Offset Visual Illusion, or MoOVi), thus simulating more or less movement than that actually occurring. At 50° of real-world head rotation, participants pointed in the direction that they perceived they were facing. The discrepancy between actual and perceived direction was measured and compared between conditions. The impact of including multisensory (auditory and visual) feedback, the presence of a virtual body reference, and the use of 360° immersive virtual reality with and without three-dimensional properties, was also investigated. Perception of head movement was dependent on visual-kinaesthetic feedback (p = 0.001, partial eta squared = 0.17). That is, altered visual feedback caused a kinaesthetic drift in the direction of the visually suggested movement. The magnitude of the drift was not moderated by secondary variables such as the addition of illusory auditory feedback, the presence of a virtual body reference, or three-dimensionality of the scene. Virtual reality can be used to augment perceived movement and body position, such that one can perform a small movement, yet perceive a large one. The MoOVi technique tested here has clear potential for assessment and

  3. Using visuo-kinetic virtual reality to induce illusory spinal movement: the MoOVi Illusion

    Directory of Open Access Journals (Sweden)

    Daniel S. Harvie

    2017-02-01

    Full Text Available Background Illusions that alter perception of the body provide novel opportunities to target brain-based contributions to problems such as persistent pain. One example of this, mirror therapy, uses vision to augment perceived movement of a painful limb to treat pain. Since mirrors can’t be used to induce augmented neck or other spinal movement, we aimed to test whether such an illusion could be achieved using virtual reality, in advance of testing its potential therapeutic benefit. We hypothesised that perceived head rotation would depend on visually suggested movement. Method In a within-subjects repeated measures experiment, 24 healthy volunteers performed neck movements to 50° of rotation, while a virtual reality system delivered corresponding visual feedback that was offset by a factor of 50%–200% (the Motor Offset Visual Illusion, or MoOVi), thus simulating more or less movement than that actually occurring. At 50° of real-world head rotation, participants pointed in the direction that they perceived they were facing. The discrepancy between actual and perceived direction was measured and compared between conditions. The impact of including multisensory (auditory and visual) feedback, the presence of a virtual body reference, and the use of 360° immersive virtual reality with and without three-dimensional properties, was also investigated. Results Perception of head movement was dependent on visual-kinaesthetic feedback (p = 0.001, partial eta squared = 0.17). That is, altered visual feedback caused a kinaesthetic drift in the direction of the visually suggested movement. The magnitude of the drift was not moderated by secondary variables such as the addition of illusory auditory feedback, the presence of a virtual body reference, or three-dimensionality of the scene. Discussion Virtual reality can be used to augment perceived movement and body position, such that one can perform a small movement, yet perceive a large one. The Mo

  4. Attractive Flicker--Guiding Attention in Dynamic Narrative Visualizations.

    Science.gov (United States)

    Waldner, Manuela; Le Muzic, Mathieu; Bernhard, Matthias; Purgathofer, Werner; Viola, Ivan

    2014-12-01

    Focus+context techniques provide visual guidance in visualizations by giving strong visual prominence to elements of interest while the context is suppressed. However, finding a visual feature to enhance for the focus to pop out from its context in a large dynamic scene, while leading to minimal visual deformation and subjective disturbance, is challenging. This paper proposes Attractive Flicker, a novel technique for visual guidance in dynamic narrative visualizations. We first show that flicker is a strong visual attractor in the entire visual field, without distorting, suppressing, or adding any scene elements. The novel aspect of our Attractive Flicker technique is that it consists of two signal stages: The first "orientation stage" is a short but intensive flicker stimulus to attract the attention to elements of interest. Subsequently, the intensive flicker is reduced to a minimally disturbing luminance oscillation ("engagement stage") as visual support to keep track of the focus elements. To find a good trade-off between attraction effectiveness and subjective annoyance caused by flicker, we conducted two perceptual studies to find suitable signal parameters. We showcase Attractive Flicker with the parameters obtained from the perceptual statistics in a study of molecular interactions. With Attractive Flicker, users were able to easily follow the narrative of the visualization on a large display, while the flickering of focus elements was not disturbing when observing the context.
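    As a rough sketch of the two-stage signal described above, the luminance offset of a focus element can be written as a sinusoid whose amplitude drops after a brief orientation stage. All durations, frequencies, and amplitudes below are placeholders, not the parameters derived from the paper's perceptual studies:

```python
# Hypothetical two-stage flicker: a brief high-amplitude burst
# ("orientation stage") followed by a sustained low-amplitude
# luminance oscillation ("engagement stage"). All parameters invented.
import math

def flicker_signal(t, t_orient=0.5, f=8.0, amp_orient=0.8, amp_engage=0.1):
    """Luminance offset at time t (seconds) for a focus element."""
    amp = amp_orient if t < t_orient else amp_engage
    return amp * math.sin(2 * math.pi * f * t)

# Sample the envelope over one 8 Hz cycle (0.125 s) in each stage:
early = max(abs(flicker_signal(0.1 + k / 1000)) for k in range(125))
late = max(abs(flicker_signal(2.0 + k / 1000)) for k in range(125))
```

    A renderer would add this offset to each focus element's luminance per frame; in practice the orientation-to-engagement transition would also be ramped smoothly to avoid an abrupt visual step.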

  5. Agreement Between Visual Assessment and 2-Dimensional Analysis During Jump Landing Among Healthy Female Athletes.

    Science.gov (United States)

    Rabin, Alon; Einstein, Ofira; Kozol, Zvi

    2018-04-01

      Context: Altered movement patterns, including increased frontal-plane knee movement and decreased sagittal-plane hip and knee movement, have been associated with several knee disorders. Nevertheless, the ability of clinicians to visually detect such altered movement patterns during high-speed athletic tasks is relatively unknown. Objective: To explore the association between visual assessment and 2-dimensional (2D) analysis of frontal-plane knee movement and sagittal-plane hip and knee movement during a jump-landing task among healthy female athletes. Design: Cross-sectional study. Setting: Gymnasiums of participating volleyball teams. Participants: A total of 39 healthy female volleyball players (age = 21.0 ± 5.2 years, height = 172.0 ± 8.6 cm, mass = 64.2 ± 7.2 kg) from Divisions I and II of the Israeli Volleyball Association. Main Outcome Measures: Frontal-plane knee movement and sagittal-plane hip and knee movement during jump landing were visually rated as good, moderate, or poor based on previously established criteria. Frontal-plane knee excursion and sagittal-plane hip and knee excursions were measured using free motion-analysis software and compared among athletes with different visual ratings of the corresponding movements. Results: Participants with different visual ratings of frontal-plane knee movement displayed differences in 2D frontal-plane knee excursion (P < .01), whereas participants with different visual ratings of sagittal-plane hip and knee movement displayed differences in 2D sagittal-plane hip and knee excursions (P < .01). Conclusions: Visual ratings of frontal-plane knee movement and sagittal-plane hip and knee movement were associated with differences in the corresponding 2D hip and knee excursions. Visual rating of these movements may serve as an initial screening tool for detecting altered movement patterns during jump landings.
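    A common 2D measure behind such analyses (offered as a hedged illustration, not necessarily the exact output of the software used in this study) is the frontal-plane projection angle at the knee, i.e. how far the hip-knee-ankle line deviates from straight, computed from 2D marker coordinates. The coordinates below are invented:

```python
# Frontal-plane projection angle (FPPA) at the knee from 2D markers:
# deviation of the hip-knee-ankle angle from 180 degrees.
# Illustrative sketch only; coordinates are made up.
import math

def knee_fppa(hip, knee, ankle):
    """Deviation (degrees) of the hip-knee-ankle angle from straight."""
    v1 = (hip[0] - knee[0], hip[1] - knee[1])
    v2 = (ankle[0] - knee[0], ankle[1] - knee[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    angle = math.degrees(math.acos(dot / norm))
    return 180.0 - angle

# Straight leg vs. knee drifted 5 cm medially at peak flexion (m):
straight = knee_fppa((0.0, 1.0), (0.0, 0.5), (0.0, 0.0))
valgus = knee_fppa((0.0, 1.0), (0.05, 0.5), (0.0, 0.0))
excursion = valgus - straight
```

    Excursion over a landing would then be the change in this angle between initial contact and peak knee flexion, computed frame by frame from the video.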

  6. The Influence of Visual and Auditory Information on the Perception of Speech and Non-Speech Oral Movements in Patients with Left Hemisphere Lesions

    Science.gov (United States)

    Schmid, Gabriele; Thielmann, Anke; Ziegler, Wolfram

    2009-01-01

    Patients with lesions of the left hemisphere often suffer from oral-facial apraxia, apraxia of speech, and aphasia. In these patients, visual features often play a critical role in speech and language therapy, when pictured lip shapes or the therapist's visible mouth movements are used to facilitate speech production and articulation. This demands…

  7. The Visual Geophysical Exploration Environment: A Multi-dimensional Scientific Visualization

    Science.gov (United States)

    Pandya, R. E.; Domenico, B.; Murray, D.; Marlino, M. R.

    2003-12-01

    The Visual Geophysical Exploration Environment (VGEE) is an online learning environment designed to help undergraduate students understand fundamental Earth system science concepts. The guiding principle of the VGEE is the importance of hands-on interaction with scientific visualization and data. The VGEE consists of four elements: 1) an online, inquiry-based curriculum for guiding student exploration; 2) a suite of El Nino-related data sets adapted for student use; 3) a learner-centered interface to a scientific visualization tool; and 4) a set of concept models (interactive tools that help students understand fundamental scientific concepts). There are two key innovations featured in this interactive poster session. One is the integration of concept models and the visualization tool. Concept models are simple, interactive, Java-based illustrations of fundamental physical principles. We developed eight concept models and integrated them into the visualization tool to enable students to probe data. The ability to probe data using a concept model addresses the common problem of transfer: the difficulty students have in applying theoretical knowledge to everyday phenomenon. The other innovation is a visualization environment and data that are discoverable in digital libraries, and installed, configured, and used for investigations over the web. By collaborating with the Integrated Data Viewer developers, we were able to embed a web-launchable visualization tool and access to distributed data sets into the online curricula. The Thematic Real-time Environmental Data Distributed Services (THREDDS) project is working to provide catalogs of datasets that can be used in new VGEE curricula under development. By cataloging this curricula in the Digital Library for Earth System Education (DLESE), learners and educators can discover the data and visualization tool within a framework that guides their use.

  8. Exploring the impact of visual and movement based priming on a motor intervention in the acute phase post-stroke in persons with severe hemiparesis of the upper extremity

    Science.gov (United States)

    Patel, Jigna; Qiu, Qinyin; Yarossi, Mathew; Merians, Alma; Massood, Supriya; Tunik, Eugene; Adamovich, Sergei; Fluet, Gerard

    2016-01-01

    Purpose Explore the potential benefits of using priming methods prior to an active hand task in the acute phase post-stroke in persons with severe upper extremity hemiparesis. Methods Five individuals were trained using priming techniques including virtual reality (VR) based visual mirror feedback and contralaterally controlled passive movement strategies prior to training with an active pinch force modulation task. Clinical, kinetic, and neurophysiological measurements were taken pre and post the training period. Clinical measures were taken at six months post training. Results The two priming simulations and active training were well tolerated early after stroke. Priming effects were suggested by increased maximal pinch force immediately after visual and movement based priming. Despite having no clinically observable movement distally, the subjects were able to volitionally coordinate isometric force and muscle activity (EMG) in a pinch tracing task. The Root Mean Square Error (RMSE) of force during the pinch trace task gradually decreased over the training period suggesting learning may have occurred. Changes in motor cortical neurophysiology were seen in the unaffected hemisphere using Transcranial Magnetic Stimulation (TMS) mapping. Significant improvements in motor recovery as measured by the Action Research Arm Test (ARAT) and the Upper Extremity Fugl Meyer Assessment (UEFMA) were demonstrated at six months post training by three of the five subjects. Conclusion This study suggests that an early hand-based intervention using visual and movement based priming activities and a scaled motor task allows participation by persons without the motor control required for traditionally presented rehabilitation and testing. PMID:27636200
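    The force-tracing error metric mentioned above reduces to a standard root mean square error between the produced and target force traces. The sketch below uses invented sample values purely for illustration:

```python
# RMSE between a produced pinch-force trace and the target trace.
# Sample forces (in newtons) are invented for illustration.
import math

def rmse(produced, target):
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(produced, target))
                     / len(target))

target = [2.0, 4.0, 6.0, 4.0, 2.0]          # target force ramp
early_session = [1.0, 5.0, 7.0, 3.0, 1.0]   # noisy early attempt
late_session = [1.8, 4.2, 6.1, 3.9, 2.0]    # closer late attempt
```

    A decreasing RMSE across sessions, as reported in the study, is the usual operational sign that tracking accuracy (and possibly learning) improved.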

  9. Lip Movement Exaggerations during Infant-Directed Speech

    Science.gov (United States)

    Green, Jordan R.; Nip, Ignatius S. B.; Wilson, Erin M.; Mefferd, Antje S.; Yunusova, Yana

    2010-01-01

    Purpose: Although a growing body of literature has identified the positive effects of visual speech on speech and language learning, oral movements of infant-directed speech (IDS) have rarely been studied. This investigation used 3-dimensional motion capture technology to describe how mothers modify their lip movements when talking to their…

  10. Origins of superior dynamic visual acuity in baseball players: superior eye movements or superior image processing.

    Directory of Open Access Journals (Sweden)

    Yusuke Uchida

    Full Text Available Dynamic visual acuity (DVA) is defined as the ability to discriminate the fine parts of a moving object. DVA is generally better in athletes than in non-athletes, and the better DVA of athletes has been attributed to a better ability to track moving objects. In the present study, we hypothesized that the better DVA of athletes is partly derived from better perception of moving images on the retina through some kind of perceptual learning. To test this hypothesis, we quantitatively measured DVA in baseball players and non-athletes using moving Landolt rings in two conditions. In the first experiment, the participants were allowed to move their eyes (free-eye-movement conditions), whereas in the second they were required to fixate on a fixation target (fixation conditions). The athletes displayed significantly better DVA than the non-athletes in the free-eye-movement conditions. However, there was no significant difference between the groups in the fixation conditions. These results suggest that the better DVA of athletes is primarily due to an improved ability to track moving targets with their eyes, rather than to improved perception of moving images on the retina.

  11. Origins of superior dynamic visual acuity in baseball players: superior eye movements or superior image processing.

    Science.gov (United States)

    Uchida, Yusuke; Kudoh, Daisuke; Murakami, Akira; Honda, Masaaki; Kitazawa, Shigeru

    2012-01-01

    Dynamic visual acuity (DVA) is defined as the ability to discriminate the fine parts of a moving object. DVA is generally better in athletes than in non-athletes, and the better DVA of athletes has been attributed to a better ability to track moving objects. In the present study, we hypothesized that the better DVA of athletes is partly derived from better perception of moving images on the retina through some kind of perceptual learning. To test this hypothesis, we quantitatively measured DVA in baseball players and non-athletes using moving Landolt rings in two conditions. In the first experiment, the participants were allowed to move their eyes (free-eye-movement conditions), whereas in the second they were required to fixate on a fixation target (fixation conditions). The athletes displayed significantly better DVA than the non-athletes in the free-eye-movement conditions. However, there was no significant difference between the groups in the fixation conditions. These results suggest that the better DVA of athletes is primarily due to an improved ability to track moving targets with their eyes, rather than to improved perception of moving images on the retina.

  12. Dual effects of guide-based guidance on pedestrian evacuation

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Yi, E-mail: yima23-c@my.cityu.edu.hk; Lee, Eric Wai Ming; Shi, Meng

    2017-06-15

    This study investigates the effects of guide-based guidance on pedestrian evacuation under limited visibility via simulations based on an extended social force model. The results show that the effects of guides on pedestrian evacuation under limited visibility are dual and related to the neighbor density within the visual field. On the one hand, in many cases, the effects of guides are positive, particularly when the neighbor density within the visual field is moderate; in this case, a few guides can already assist the evacuation effectively and efficiently. However, when the neighbor density within the visual field is particularly small or large, the effects of guides may be adverse and make the evacuation time longer. Our results not only provide a new insight into the effects of guides on pedestrian evacuation under limited visibility, but also give some practical suggestions as to how to assign guides to assist the evacuation under different evacuation conditions. - Highlights: • Extended social force model is used to simulate guided pedestrian evacuation. • Effects of guides on pedestrian evacuation under limited visibility are dual. • Effects of guides on pedestrian evacuation under limited visibility are related to neighbor density within visual field.
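    For readers unfamiliar with the underlying model: the classic social force formulation drives each pedestrian's velocity toward a desired direction at a preferred speed. The sketch below shows only that driving term, with the desired direction aimed at a guide; it is a minimal stand-in (after Helbing's original model), not the paper's extended model, and all parameters are illustrative:

```python
# Driving term of a social force model: m * (v0 * e - v) / tau relaxes
# a pedestrian's velocity toward desired speed v0 along direction e.
# Here e points at a guide. Repulsive wall/pedestrian forces omitted.
import math

MASS = 80.0  # illustrative pedestrian mass (kg)

def driving_force(v, e_desired, v0=1.34, tau=0.5):
    """Helbing-style driving term for a 2D velocity tuple."""
    return tuple(MASS * (v0 * e - vi) / tau for e, vi in zip(e_desired, v))

def step(pos, v, guide_pos, dt=0.05):
    """One Euler step with the desired direction aimed at the guide."""
    dx = (guide_pos[0] - pos[0], guide_pos[1] - pos[1])
    dist = math.hypot(*dx)
    e = (dx[0] / dist, dx[1] / dist)  # unit vector toward the guide
    f = driving_force(v, e)
    v = tuple(vi + fi / MASS * dt for vi, fi in zip(v, f))
    pos = tuple(p + vi * dt for p, vi in zip(pos, v))
    return pos, v

pos, v = (0.0, 0.0), (0.0, 0.0)
for _ in range(100):  # 5 simulated seconds
    pos, v = step(pos, v, guide_pos=(10.0, 0.0))
# the pedestrian accelerates toward the guide, settling near v0
```

    The paper's "dual effect" arises once the omitted repulsive terms and limited-visibility neighbor interactions are added, so guides can either speed up or slow down the crowd depending on local density.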

  13. Segregation of Form, Color, Movement, and Depth: Anatomy, Physiology, and Perception

    Science.gov (United States)

    Livingstone, Margaret; Hubel, David

    1988-05-01

    Anatomical and physiological observations in monkeys indicate that the primate visual system consists of several separate and independent subdivisions that analyze different aspects of the same retinal image: cells in cortical visual areas 1 and 2 and higher visual areas are segregated into three interdigitating subdivisions that differ in their selectivity for color, stereopsis, movement, and orientation. The pathways selective for form and color seem to be derived mainly from the parvocellular geniculate subdivisions, the depth- and movement-selective components from the magnocellular. At lower levels, in the retina and in the geniculate, cells in these two subdivisions differ in their color selectivity, contrast sensitivity, temporal properties, and spatial resolution. These major differences in the properties of cells at lower levels in each of the subdivisions led to the prediction that different visual functions, such as color, depth, movement, and form perception, should exhibit corresponding differences. Human perceptual experiments are remarkably consistent with these predictions. Moreover, perceptual experiments can be designed to ask which subdivisions of the system are responsible for particular visual abilities, such as figure/ground discrimination or perception of depth from perspective or relative movement--functions that might be difficult to deduce from single-cell response properties.

  14. Visual problems associated with traumatic brain injury.

    Science.gov (United States)

    Armstrong, Richard A

    2018-02-28

    Traumatic brain injury (TBI) and its associated concussion are major causes of disability and death. All ages can be affected, but children, young adults and the elderly are particularly susceptible. A decline in mortality has resulted in many more individuals living with a disability caused by TBI, including disabilities affecting vision. This review describes: (1) the major clinical and pathological features of TBI; (2) the visual signs and symptoms associated with the disorder; and (3) the assessment of quality of life and visual rehabilitation of the patient. Defects in primary vision such as visual acuity and visual fields, in eye movement including vergence, saccadic and smooth pursuit movements, and in more complex aspects of vision involving visual perception, motion vision ('akinopsia'), and visuo-spatial function have all been reported in TBI. Eye movement dysfunction may be an early sign of TBI. Hence, TBI can result in a variety of visual problems, with many patients exhibiting multiple visual defects in combination with a decline in overall health. Patients with chronic dysfunction following TBI may require occupational, vestibular, cognitive and other forms of physical therapy. Such patients may also benefit from visual rehabilitation, including reading-related oculomotor training and the prescribing of spectacles with a variety of tints and prism combinations. © 2018 Optometry Australia.

  15. Memory and Culture in Social Movements

    DEFF Research Database (Denmark)

    Doerr, Nicole

    2014-01-01

    on psychoanalytical, visual, and historical approaches. Movement scholars who focused on narrative, discourse, framing, and performance show how activists actively construct and mobilize collective memory. We know much less, however, about interactions between multiple layers and forms of remembering stored in images…, stories, or performances, or discursive forms. How do conflicting or contradictory memories about the past inside movement groups condition activists’ ability to speak, write, and even think about the future? While previous work conceived of memory in movements as a subcategory of narrative, discourse...

  16. Ventromedial Frontal Cortex Is Critical for Guiding Attention to Reward-Predictive Visual Features in Humans.

    Science.gov (United States)

    Vaidya, Avinash R; Fellows, Lesley K

    2015-09-16

    Adaptively interacting with our environment requires extracting information that will allow us to successfully predict reward. This can be a challenge, particularly when there are many candidate cues, and when rewards are probabilistic. Recent work has demonstrated that visual attention is allocated to stimulus features that have been associated with reward on previous trials. The ventromedial frontal lobe (VMF) has been implicated in learning in dynamic environments of this kind, but the mechanism by which this region influences this process is not clear. Here, we hypothesized that the VMF plays a critical role in guiding attention to reward-predictive stimulus features based on feedback. We tested the effects of VMF damage in human subjects on a visual search task in which subjects were primed to attend to task-irrelevant colors associated with different levels of reward, incidental to the search task. Consistent with previous work, we found that distractors had a greater influence on reaction time when they appeared in colors associated with high reward in the previous trial compared with colors associated with low reward in healthy control subjects and patients with prefrontal damage sparing the VMF. However, this reward modulation of attentional priming was absent in patients with VMF damage. Thus, an intact VMF is necessary for directing attention based on experience with cue-reward associations. We suggest that this region plays a role in selecting reward-predictive cues to facilitate future learning. There has been a swell of interest recently in the ventromedial frontal cortex (VMF), a brain region critical to associative learning. However, the underlying mechanism by which this region guides learning is not well understood. Here, we tested the effects of damage to this region in humans on a task in which rewards were linked incidentally to visual features, resulting in trial-by-trial attentional priming. Controls and subjects with prefrontal damage

  17. Comparison and analysis of FDA reported visual outcomes of the three latest platforms for LASIK: wavefront guided Visx iDesign, topography guided WaveLight Allegro Contoura, and topography guided Nidek EC-5000 CATz

    Directory of Open Access Journals (Sweden)

    Moshirfar M

    2017-01-01

    , respectively. Conclusion: FDA data for the three platforms shows all three were excellent with respect to efficacy, safety, accuracy, and stability. However, there are some differences between the platforms with certain outcome measurements. Overall, patients using all three lasers showed significant improvements in primary and secondary visual outcomes after LASIK surgery. Keywords: wavefront-guided, topography-guided, LASIK, wavefront optimized

  18. Location memory biases reveal the challenges of coordinating visual and kinesthetic reference frames

    Science.gov (United States)

    Simmering, Vanessa R.; Peterson, Clayton; Darling, Warren; Spencer, John P.

    2008-01-01

    Five experiments explored the influence of visual and kinesthetic/proprioceptive reference frames on location memory. Experiments 1 and 2 compared visual and kinesthetic reference frames in a memory task using visually-specified locations and a visually-guided response. When the environment was visible, results replicated previous findings of biases away from the midline symmetry axis of the task space, with stability for targets aligned with this axis. When the environment was not visible, results showed some evidence of bias away from a kinesthetically-specified midline (trunk anterior–posterior [a–p] axis), but there was little evidence of stability when targets were aligned with body midline. This lack of stability may reflect the challenges of coordinating visual and kinesthetic information in the absence of an environmental reference frame. Thus, Experiments 3–5 examined kinesthetic guidance of hand movement to kinesthetically-defined targets. Performance in these experiments was generally accurate with no evidence of consistent biases away from the trunk a–p axis. We discuss these results in the context of the challenges of coordinating reference frames within versus between multiple sensori-motor systems. PMID:17703284

  19. Teach yourself visually complete Excel

    CERN Document Server

    McFedries, Paul

    2013-01-01

    Get the basics of Excel and then go beyond with this new instructional visual guide While many users need Excel just to create simple worksheets, many businesses and professionals rely on the advanced features of Excel to handle things like database creation and data analysis. Whatever project you have in mind, this visual guide takes you through it step by step, showing you what each stage should look like. Veteran author Paul McFedries first presents the basics and then gradually takes it further with his coverage of designing worksheets, collaborating between worksheets, working with visual data

  20. Recognition of dance-like actions: memory for static posture or dynamic movement?

    Science.gov (United States)

    Vicary, Staci A; Robbins, Rachel A; Calvo-Merino, Beatriz; Stevens, Catherine J

    2014-07-01

    Dance-like actions are complex visual stimuli involving multiple changes in body posture across time and space. Visual perception research has demonstrated a difference between the processing of dynamic body movement and the processing of static body posture. Yet, it is unclear whether this processing dissociation continues during the retention of body movement and body form in visual working memory (VWM). When observing a dance-like action, it is likely that static snapshot images of body posture will be retained alongside dynamic images of the complete motion. Therefore, we hypothesized that, as in perception, posture and movement would differ in VWM. Additionally, if body posture and body movement are separable in VWM, as form- and motion-based items, respectively, then differential interference from intervening form and motion tasks should occur during recognition. In two experiments, we examined these hypotheses. In Experiment 1, the recognition of postures and movements was tested in conditions in which the formats of the study and test stimuli matched (movement-study to movement-test, posture-study to posture-test) or mismatched (movement-study to posture-test, posture-study to movement-test). In Experiment 2, the recognition of postures and movements was compared after intervening form and motion tasks. These results indicated that (1) the recognition of body movement based only on posture is possible, but it is significantly poorer than recognition based on the entire movement stimulus, and (2) form-based interference does not impair memory for movements, although motion-based interference does. We concluded that, whereas static posture information is encoded during the observation of dance-like actions, body movement and body posture differ in VWM.

  2. Directive and Non-Directive Movement in Child Therapy.

    Science.gov (United States)

    Krason, Katarzyna; Szafraniec, Grazyna

    1999-01-01

    Presents a new, original method of child therapy based on visualization through motion. Maintains that this method stimulates motor development and musical receptiveness, and promotes personality development. Suggests that improvised movement to music facilitates the projection mechanism and that directed movement starts the channeling phase.…

  3. Eye movement during retrieval of emotional autobiographical memories.

    Science.gov (United States)

    El Haj, Mohamad; Nandrino, Jean-Louis; Antoine, Pascal; Boucart, Muriel; Lenoble, Quentin

    2017-03-01

    This study assessed whether specific eye movement patterns are observed during emotional autobiographical retrieval. Participants were asked to retrieve positive, negative and neutral memories while their scan path was recorded by an eye-tracker. Results showed that positive and negative emotional memories triggered more fixations and saccades but shorter fixation duration than neutral memories. No significant differences were observed between emotional and neutral memories for duration and amplitude of saccades. Positive and negative retrieval triggered similar eye movement (i.e., similar number of fixations and saccades, fixation duration, duration of saccades, and amplitude of saccades). Interestingly, the participants reported higher visual imagery for emotional memories than for neutral memories. The findings demonstrate similarities and differences in eye movement during retrieval of neutral and emotional memories. Eye movement during autobiographical retrieval seems to be triggered by the creation of visual mental images as the latter are indexed by autobiographical reconstruction. Copyright © 2016. Published by Elsevier B.V.

  4. Visual Field Preferences of Object Analysis for Grasping with One Hand

    Directory of Open Access Journals (Sweden)

    Ada Le

    2014-10-01

    Full Text Available When we grasp an object using one hand, the opposite hemisphere predominantly guides the motor control of grasp movements (Davare et al. 2007; Rice et al. 2007). However, it is unclear whether visual object analysis for grasp control relies more on inputs (a) from the contralateral than the ipsilateral visual field, (b) from one dominant visual field regardless of the grasping hand, or (c) from both visual fields equally. For bimanual grasping of a single object we have recently demonstrated a visual field preference for the left visual field (Le and Niemeier 2013a, 2013b), consistent with a general right-hemisphere dominance for sensorimotor control of bimanual grasps (Le et al., 2013). But visual field differences have never been tested for unimanual grasping. Therefore, here we asked right-handed participants to fixate to the left or right of an object and then grasp the object either with their right or left hand using a precision grip. We found that participants grasping with their right hand performed better with objects in the right visual field: maximum grip apertures (MGAs) were more closely matched to the object width and were smaller than for objects in the left visual field. In contrast, when people grasped with their left hand, preferences switched to the left visual field. What is more, MGA scaling showed greater visual field differences compared to right-hand grasping. Our data suggest that visual object analysis for unimanual grasping shows a preference for visual information from the ipsilateral visual field, and that the left hemisphere is better equipped to control grasps in both visual fields.

  5. Lack of Multisensory Integration in Hemianopia: No Influence of Visual Stimuli on Aurally Guided Saccades to the Blind Hemifield

    Science.gov (United States)

    Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan

    2015-01-01

    In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952

  7. Suppression of Face Perception during Saccadic Eye Movements

    Directory of Open Access Journals (Sweden)

    Mehrdad Seirafi

    2014-01-01

    Full Text Available Lack of awareness of a stimulus briefly presented during saccadic eye movement is known as saccadic omission. Studying the reduced visibility of visual stimuli around the time of a saccade—known as saccadic suppression—is a key step to investigate saccadic omission. To date, almost all studies have focused on the reduced visibility of simple stimuli such as flashes and bars. The extension of the results from simple stimuli to more complex objects has been neglected. In two experimental tasks, we measured the subjective and objective awareness of a briefly presented face stimulus during saccadic eye movement. In the first task, we measured the subjective awareness of the visual stimuli and showed that in most of the trials there is no conscious awareness of the faces. In the second task, we measured objective sensitivity in a two-alternative forced choice (2AFC) face detection task, which demonstrated chance-level performance. Here, we provide the first evidence of complete suppression of complex visual stimuli during the saccadic eye movement.

  8. Target templates: the precision of mental representations affects attentional guidance and decision-making in visual search.

    Science.gov (United States)

    Hout, Michael C; Goldinger, Stephen D

    2015-01-01

    When people look for things in the environment, they use target templates-mental representations of the objects they are attempting to locate-to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: 1) by contaminating searchers' templates with inaccurate features, and 2) by introducing extraneous features to the template that were unhelpful. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search.

  9. The Coding and Effector Transfer of Movement Sequences

    Science.gov (United States)

    Kovacs, Attila J.; Muhlbauer, Thomas; Shea, Charles H.

    2009-01-01

    Three experiments utilizing a 14-element arm movement sequence were designed to determine if reinstating the visual-spatial coordinates, which require movements to the same spatial locations utilized during acquisition, results in better effector transfer than reinstating the motor coordinates, which require the same pattern of homologous muscle…

  10. Interacting noise sources shape patterns of arm movement variability in three-dimensional space.

    Science.gov (United States)

    Apker, Gregory A; Darling, Timothy K; Buneo, Christopher A

    2010-11-01

    Reaching movements are subject to noise in both the planning and execution phases of movement production. The interaction of these noise sources during natural movements is not well understood, despite its importance for understanding movement variability in neurologically intact and impaired individuals. Here we examined the interaction of planning and execution related noise during the production of unconstrained reaching movements. Subjects performed sequences of two movements to targets arranged in three vertical planes separated in depth. The starting position for each sequence was also varied in depth with the target plane; thus required movement sequences were largely contained within the vertical plane of the targets. Each final target in a sequence was approached from two different directions, and these movements were made with or without visual feedback of the moving hand. These combined aspects of the design allowed us to probe the interaction of execution and planning related noise with respect to reach endpoint variability. In agreement with previous studies, we found that reach endpoint distributions were highly anisotropic. The principal axes of movement variability were largely aligned with the depth axis, i.e., the axis along which visual planning related noise would be expected to dominate, and were not generally well aligned with the direction of the movement vector. Our results suggest that visual planning-related noise plays a dominant role in determining anisotropic patterns of endpoint variability in three-dimensional space, with execution noise adding to this variability in a movement direction-dependent manner.
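    The principal axes of an endpoint cloud, as analyzed here, can be obtained by eigendecomposition of its covariance matrix. The sketch below is a generic illustration of that computation, not the authors' analysis pipeline; the variable names are arbitrary.

```python
import numpy as np

def endpoint_principal_axes(endpoints):
    """Principal axes of a cloud of 3-D reach endpoints (rows = trials).

    Returns the variances along each axis (descending) and the matching
    unit axis vectors (columns of `axes`)."""
    centered = endpoints - endpoints.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)   # eigh returns ascending order
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]
```

Anisotropy can then be summarized as the ratio of the largest to the smallest variance, and alignment with the depth axis as the depth component of the first axis vector.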

  11. Exploration of spatio-temporal patterns of students' movement in field trip by visualizing the log data

    Science.gov (United States)

    Cho, Nahye; Kang, Youngok

    2018-05-01

    Numerous log data, in addition to user input data, are being generated as the number of mobile and web users continues to increase, and studies that explore the patterns and meanings of various movement activities using these log data are also rising rapidly. Meanwhile, in the field of education, the importance of field trips has been recognized as creative education is emphasized, and examples of utilizing mobile devices on field trips are growing with the development of information technology. In this study, we explore the patterns of students' activity by visualizing the log data generated from high school students' field trips with mobile devices.

  12. The Effects of Visual Beats on Prosodic Prominence: Acoustic Analyses, Auditory Perception and Visual Perception

    Science.gov (United States)

    Krahmer, Emiel; Swerts, Marc

    2007-01-01

    Speakers employ acoustic cues (pitch accents) to indicate that a word is important, but may also use visual cues (beat gestures, head nods, eyebrow movements) for this purpose. Even though these acoustic and visual cues are related, the exact nature of this relationship is far from well understood. We investigate whether producing a visual beat…

  13. Noisy visual feedback training impairs detection of self-generated movement error: implications for anosognosia for hemiplegia

    Directory of Open Access Journals (Sweden)

    Catherine Preston

    2014-06-01

    Full Text Available Anosognosia for hemiplegia (AHP) is characterised as a disorder in which patients are unaware of their contralateral motor deficit. Many current theories of unawareness in AHP are based on comparator model accounts of the normal experience of agency. According to such models, while small mismatches between predicted and actual feedback allow unconscious fine-tuning of normal actions, mismatches that surpass an inherent threshold reach conscious awareness and inform judgements of agency (whether a given movement is produced by the self or another agent). This theory depends on a threshold for consciousness that is greater than the intrinsic noise in the system, to reduce the occurrence of incorrect rejections of self-generated movements and maintain a fluid experience of agency. Pathological increases to this threshold could account for reduced motor awareness following brain injury, including AHP. The current experiment tested this hypothesis in healthy controls by exposing them to training in which noise was applied to the visual feedback of their normal reaches. Subsequent self/other attribution tasks without noise revealed a decrease in the ability to detect manipulated (other) feedback compared to training without noise. This suggests a slackening of awareness thresholds in the comparator model that may help to explain clinical observations of decreased action awareness following stroke.
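    The comparator account described here reduces, in caricature, to a threshold on prediction error. The function below is a hypothetical illustration of that idea only, not a model from the paper; the error metric and threshold values are arbitrary assumptions.

```python
import numpy as np

def attribute_agency(predicted, actual, threshold):
    """Comparator-model caricature: a movement is judged self-generated
    when the prediction error stays below an awareness threshold."""
    error = np.linalg.norm(np.asarray(actual, dtype=float)
                           - np.asarray(predicted, dtype=float))
    return "self" if error < threshold else "other"
```

Raising the threshold makes the same manipulated feedback more likely to be accepted as self-generated, which is the proposed mechanism for reduced detection after noisy-feedback training or brain injury.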

  14. Implied Movement in Static Images Reveals Biological Timing Processing

    Directory of Open Access Journals (Sweden)

    Francisco Carlos Nather

    2015-08-01

    Full Text Available Visual perception is adapted toward a better understanding of our own movements than those of non-conspecifics. The present study determined whether time perception is affected by pictures of different species by considering the evolutionary scale. Static (“S”) and implied movement (“M”) images of a dog, cheetah, chimpanzee, and man were presented to undergraduate students. S and M images of the same species were presented in random order or one after the other (S-M or M-S) for two groups of participants. Movement, Velocity, and Arousal semantic scales were used to characterize some properties of the images. Implied movement affected time perception, with M images being overestimated. The results are discussed in terms of visual motion perception related to biological timing processing that could have been established early in the adaptation of humankind to the environment.

  15. Visualization of the sequence of a couple splitting outside shop

    DEFF Research Database (Denmark)

    2015-01-01

    Visualization of tracks of a couple walking together before splitting, when one goes into a shop and the other waits outside. The visualization represents the sequence described in figure 7 in the publication 'Taking the temperature of pedestrian movement in public spaces'.

  16. Exploring the impact of visual and movement based priming on a motor intervention in the acute phase post-stroke in persons with severe hemiparesis of the upper extremity.

    Science.gov (United States)

    Patel, Jigna; Qiu, Qinyin; Yarossi, Mathew; Merians, Alma; Massood, Supriya; Tunik, Eugene; Adamovich, Sergei; Fluet, Gerard

    2017-07-01

    Explore the potential benefits of using priming methods prior to an active hand task in the acute phase post-stroke in persons with severe upper extremity hemiparesis. Five individuals were trained using priming techniques, including virtual reality (VR) based visual mirror feedback and contralaterally controlled passive movement strategies, prior to training with an active pinch force modulation task. Clinical, kinetic, and neurophysiological measurements were taken before and after the training period. Clinical measures were taken at six months post training. The two priming simulations and active training were well tolerated early after stroke. Priming effects were suggested by increased maximal pinch force immediately after visual and movement based priming. Despite having no clinically observable movement distally, the subjects were able to volitionally coordinate isometric force and muscle activity (EMG) in a pinch tracing task. The Root Mean Square Error (RMSE) of force during the pinch trace task gradually decreased over the training period, suggesting learning may have occurred. Changes in motor cortical neurophysiology were seen in the unaffected hemisphere using Transcranial Magnetic Stimulation (TMS) mapping. Significant improvements in motor recovery as measured by the Action Research Arm Test (ARAT) and the Upper Extremity Fugl Meyer Assessment (UEFMA) were demonstrated at six months post training by three of the five subjects. This study suggests that an early hand-based intervention using visual and movement based priming activities and a scaled motor task allows participation by persons without the motor control required for traditionally presented rehabilitation and testing. Implications for Rehabilitation Rehabilitation of individuals with severely paretic upper extremities after stroke is challenging due to limited movement capacity and few options for therapeutic training. 
Long-term functional recovery of the arm after stroke depends on early return

  17. Signature movements lead to efficient search for threatening actions.

    Directory of Open Access Journals (Sweden)

    Jeroen J A van Boxtel

    Full Text Available The ability to find and evade fighting persons in a crowd is potentially life-saving. To investigate how the visual system processes threatening actions, we employed a visual search paradigm with threatening boxer targets among emotionally-neutral walker distractors, and vice versa. We found that a boxer popped out for both intact and scrambled actions, whereas walkers did not. A reverse correlation analysis revealed that observers' responses clustered around the time of the "punch", a signature movement of boxing actions, but not around specific movements of the walker. These findings support the existence of a detector for signature movements in action perception. This detector helps in rapidly detecting aggressive behavior in a crowd, potentially through an expedited (subcortical) threat-detection mechanism.

  18. Temporal Expectations Guide Dynamic Prioritization in Visual Working Memory through Attenuated α Oscillations.

    Science.gov (United States)

    van Ede, Freek; Niklaus, Marcel; Nobre, Anna C

    2017-01-11

    Although working memory is generally considered a highly dynamic mnemonic store, popular laboratory tasks used to understand its psychological and neural mechanisms (such as change detection and continuous reproduction) often remain relatively "static," involving the retention of a set number of items throughout a shared delay interval. In the current study, we investigated visual working memory in a more dynamic setting, and assessed the following: (1) whether internally guided temporal expectations can dynamically and reversibly prioritize individual mnemonic items at specific times at which they are deemed most relevant; and (2) the neural substrates that support such dynamic prioritization. Participants encoded two differently colored oriented bars into visual working memory to retrieve the orientation of one bar with a precision judgment when subsequently probed. To test for the flexible temporal control to access and retrieve remembered items, we manipulated the probability for each of the two bars to be probed over time, and recorded EEG in healthy human volunteers. Temporal expectations had a profound influence on working memory performance, leading to faster access times as well as more accurate orientation reproductions for items that were probed at expected times. Furthermore, this dynamic prioritization was associated with the temporally specific attenuation of contralateral α (8-14 Hz) oscillations that, moreover, predicted working memory access times on a trial-by-trial basis. We conclude that attentional prioritization in working memory can be dynamically steered by internally guided temporal expectations, and is supported by the attenuation of α oscillations in task-relevant sensory brain areas. In dynamic, everyday-like, environments, flexible goal-directed behavior requires that mental representations that are kept in an active (working memory) store are dynamic, too. We investigated working memory in a more dynamic setting than is conventional

  19. Museums for all: evaluation of an audio descriptive guide for visually impaired visitors at the science museum

    Directory of Open Access Journals (Sweden)

    Silvia Soler Gallego

    2014-12-01

    Full Text Available Translation and interpreting are valuable tools to improve accessibility at museums. These tools permit the museum to communicate with visitors of different capabilities. The aim of this article is to show the results of a study carried out within the TACTO project, aimed at creating and evaluating an audio descriptive guide for visually impaired visitors at the Science Museum of Granada. The project focused on the linguistic aspects of the guide’s contents and its evaluation, which combined participatory observation with a survey and an interview. The results from this study allow us to conclude that the proposed design improves visually impaired visitors’ access to the museum. However, the expectations and specific needs of each visitor change considerably depending on individual factors such as their level of disability and museum visiting habits.

  20. Impaired Saccadic Eye Movement in Primary Open-angle Glaucoma

    DEFF Research Database (Denmark)

    Lamirel, Cédric; Milea, Dan; Cochereau, Isabelle

    2013-01-01

    PURPOSE:: Our study aimed at investigating the extent to which saccadic eye movements are disrupted in patients with primary open-angle glaucoma (POAG). This approach followed upon the discovery of differences in the eye-movement behavior of POAG patients during the exploration of complex visual...

  1. Hypothesized eye movements of neurolinguistic programming: a statistical artifact.

    Science.gov (United States)

    Farmer, A; Rooney, R; Cunningham, J R

    1985-12-01

    Neurolinguistic programming's hypothesized eye-movements were measured independently from videotapes of 30 subjects, aged 15 to 76 yr., who were asked to recall visual pictures, recorded audio sounds, and textural objects. χ² tests indicated that subjects' responses were significantly different from those predicted. When the χ² comparisons were weighted by the number of eye positions assigned to each modality (3 visual, 3 auditory, 1 kinesthetic), subjects' responses did not differ significantly from the expected pattern. These data indicate that the eye-movement hypothesis may represent randomly occurring rather than sensory-modality-related positions.
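The weighting step described above can be illustrated numerically. The counts below are hypothetical (the paper's data are not reproduced here); the sketch only shows how weighting the expected frequencies by the number of eye positions per modality changes the χ² statistic:

```python
def chi_square(observed, expected):
    """Pearson chi-square goodness-of-fit statistic."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical response counts: visual, auditory, kinesthetic
observed = [40, 35, 15]
total = sum(observed)

# Naive expectation: all three modalities equally likely
chi2_uniform = chi_square(observed, [total / 3] * 3)

# Expectation weighted by eye positions per modality
# (3 visual, 3 auditory, 1 kinesthetic)
weights = [3, 3, 1]
chi2_weighted = chi_square(observed, [total * w / sum(weights) for w in weights])
```

With these illustrative counts the weighted expectation fits far better (the statistic drops from about 11.7 to about 0.7), mirroring the paper's point that an apparent modality effect can be an artifact of unequal category sizes.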

  2. Seeing via miniature eye movements: A dynamic hypothesis for vision

    Directory of Open Access Journals (Sweden)

    Ehud Ahissar

    2012-11-01

    Full Text Available During natural viewing, the eyes are never still. Even during fixation, miniature movements of the eyes move the retinal image across tens of foveal photoreceptors. Most theories of vision implicitly assume that the visual system ignores these movements and somehow overcomes the resulting smearing. However, evidence has accumulated to indicate that fixational eye movements cannot be ignored by the visual system if fine spatial details are to be resolved. We argue that the only way the visual system can achieve its high resolution given its fixational movements is by seeing via these movements. Seeing via eye movements also eliminates the instability of the image, which would otherwise be induced by them. Here we present a hypothesis for vision, in which coarse details are spatially encoded in gaze-related coordinates, and fine spatial details are temporally encoded in relative retinal coordinates. The temporal encoding presented here achieves its highest resolution by encoding along the elongated axes of simple-cell receptive fields and not across these axes, as suggested by spatial models of vision. According to our hypothesis, fine details of shape are encoded by inter-receptor temporal phases, texture by instantaneous intra-burst rates of individual receptors, and motion by inter-burst temporal frequencies. We further describe the ability of the visual system to read out the encoded information and recode it internally. We show how readout of retinal signals can be facilitated by neuronal phase-locked loops (NPLLs), which lock to the retinal jitter; this locking enables recoding of motion information and temporal framing of shape and texture processing. A possible implementation of this locking-and-recoding process by specific thalamocortical loops is suggested. Overall, it is suggested that high-acuity vision is based primarily on temporal mechanisms of the sort presented here and low-acuity vision is based primarily on spatial mechanisms.

  3. Visual analysis and quantitative assessment of human movement

    NARCIS (Netherlands)

    Soancatl Aguilar, Venustiano

    2018-01-01

    Our ability to navigate in our environment depends on the condition of the musculoskeletal and nervous systems. Any deterioration of a component of these two systems can cause instability or disability of body movements. Such deterioration can happen as a consequence of natural age-related changes,

  4. Auditory motion capturing ambiguous visual motion

    Directory of Open Access Journals (Sweden)

    Arjen Alink

    2012-01-01

    Full Text Available In this study, it is demonstrated that moving sounds have an effect on the direction in which one sees visual stimuli move. During the main experiment sounds were presented consecutively at four speaker locations inducing left- or rightwards auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived visual apparent motion stimuli that were ambiguous (equally likely to be perceived as moving left- or rightwards) more often as moving in the same direction than in the opposite direction of auditory apparent motion. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when visual motion direction is insufficiently determinate, without affecting eye movements.

  5. The differences of movement between children at risk of developmental coordination disorder and those not at risk

    Directory of Open Access Journals (Sweden)

    Adrián Agricola

    2015-09-01

    Full Text Available Background: Developmental coordination disorder (DCD) is a syndrome unexplained by medical condition, which is marked by defects in the development of motor coordination. Children with this impairment are more dependent on visual information to perform movements than their typically developing (TD) peers. Objective: The main aim of the research was to create a checklist for the evaluation of head and limb movement while walking and then, based on this tool, to find differences in the movement of various body segments in children at risk of DCD (DCDr) compared to typically developing children under different visual conditions. Methods: A total of 32 children aged 8.7 ± 1.1 years participated in this study. The Movement Assessment Battery for Children - 2nd edition (MABC-2) was used to classify the motor competence level of the participants. PLATO goggles were used to create four different visual conditions. All trials were recorded. Based on the video analysis we completed a qualitative checklist. Results: The analysis between the children from the DCDr group and TD children showed significant differences in the head (p = .023) and the arm (p = .005) movements, in body position (p = .002) and in the total summary score (p = .001). The main effects of visual conditions showed significant differences in all cases: in the head (p = .015), arm (p = .006), trunk (p = .009) and leg (p = .001) movements, in body position (p = .001) and also in the total summary score (p = .001). The interaction between groups and visual conditions was significant in leg movements (p = .007) and body position (p = .002). Conclusions: This study has shown which movements of body segments are most affected by different visual conditions and how children at risk of DCD are dependent on visual perception.

  6. THE MOVEMENT SYSTEM IN EDUCATION.

    Science.gov (United States)

    Hoogenboom, Barbara J; Sulavik, Mark

    2017-11-01

    Although many physical therapists have begun to focus on movement and function in clinical practice, a significant number continue to focus on impairments or pathoanatomic models to direct interventions. This paradigm may be driven by the current models used to direct and guide curricula used for physical therapist education. The methods by which students are educated may contribute to a focus on independent systems, rather than viewing the body as a functional whole. Students who enter practice must be able to integrate information across multiple systems that affect a patient or client's movement and function. Such integration must be taught to students, and it is the responsibility of those in physical therapist education to embrace and teach the next generation of students this identifying professional paradigm of the movement system. The purpose of this clinical commentary is to describe the current state of the movement system in physical therapy education, suggest strategies for enhancing movement system focus in entry-level education, and envision the future of physical therapy education related to the movement system. Contributions by a student author offer depth and perspective to the ideas and suggestions presented. Level of Evidence: 5.

  7. Seeing through rose-colored glasses: How optimistic expectancies guide visual attention.

    Science.gov (United States)

    Kress, Laura; Bristle, Mirko; Aue, Tatjana

    2018-01-01

    Optimism bias and positive attention bias have important, highly similar implications for mental health but have only been examined in isolation. Investigating the causal relationships between these biases can improve the understanding of their underlying cognitive mechanisms, leading to new directions in neurocognitive research and revealing important information about normal functioning as well as the development, maintenance, and treatment of psychological diseases. In the current project, we hypothesized that optimistic expectancies can exert causal influences on attention deployment. To test this causal relation, we conducted two experiments in which we manipulated optimistic and pessimistic expectancies regarding future rewards and punishments. In a subsequent visual search task, we examined participants' attention to positive (i.e., rewarding) and negative (i.e., punishing) target stimuli, measuring their eye gaze behavior and reaction times. In both experiments, participants' attention was guided toward reward compared with punishment when optimistic expectancies were induced. Additionally, in Experiment 2, participants' attention was guided toward punishment compared with reward when pessimistic expectancies were induced. However, the effect of optimistic (rather than pessimistic) expectancies on attention deployment was stronger. A key characteristic of optimism bias is that people selectively update expectancies in an optimistic direction, not in a pessimistic direction, when receiving feedback. As revealed in our studies, selective attention to rewarding versus punishing evidence when people are optimistic might explain this updating asymmetry. Thus, the current data can help clarify why optimistic expectancies are difficult to overcome. Our findings elucidate the cognitive mechanisms underlying optimism and attention bias, which can yield a better understanding of their benefits for mental health.

  8. Multisensory Integration in the Virtual Hand Illusion with Active Movement.

    Science.gov (United States)

    Choi, Woong; Li, Liang; Satoh, Satoru; Hachimura, Kozaburo

    2016-01-01

    Improving the sense of immersion is one of the core issues in virtual reality. Perceptual illusions of ownership can be perceived over a virtual body in a multisensory virtual reality environment. Rubber Hand and Virtual Hand Illusions showed that body ownership can be manipulated by applying suitable visual and tactile stimulation. In this study, we investigate the effects of multisensory integration in the Virtual Hand Illusion with active movement. A virtual xylophone playing system which can interactively provide synchronous visual, tactile, and auditory stimulation was constructed. We conducted two experiments regarding different movement conditions and different sensory stimulations. Our results demonstrate that multisensory integration with free active movement can improve the sense of immersion in virtual reality.

  9. Otolith dysfunction alters exploratory movement in mice.

    Science.gov (United States)

    Blankenship, Philip A; Cherep, Lucia A; Donaldson, Tia N; Brockman, Sarah N; Trainer, Alexandria D; Yoder, Ryan M; Wallace, Douglas G

    2017-05-15

    The organization of rodent exploratory behavior appears to depend on self-movement cue processing. As of yet, however, no studies have directly examined the vestibular system's contribution to the organization of exploratory movement. The current study sequentially segmented open field behavior into progressions and stops in order to characterize differences in movement organization between control and otoconia-deficient tilted mice under conditions with and without access to visual cues. Under completely dark conditions, tilted mice exhibited similar distance traveled and stop times overall, but had significantly more circuitous progressions, larger changes in heading between progressions, and less stable clustering of home bases, relative to control mice. In light conditions, control and tilted mice were similar on all measures except for the change in heading between progressions. This pattern of results is consistent with otoconia-deficient tilted mice using visual cues to compensate for impaired self-movement cue processing. This work provides the first empirical evidence that signals from the otolithic organs mediate the organization of exploratory behavior, based on a novel assessment of spatial orientation. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Visual Categorization and the Parietal Cortex

    Directory of Open Access Journals (Sweden)

    Jamie K Fitzgerald

    2012-05-01

    Full Text Available The primate brain is adept at rapidly grouping items and events into functional classes, or categories, in order to recognize the significance of stimuli and guide behavior. Higher cognitive functions have traditionally been considered the domain of frontal areas. However, increasing evidence suggests that parietal cortex is also involved in categorical and associative processes. Previous work showed that the parietal cortex is highly involved in spatial processing, attention and saccadic eye movement planning, and more recent studies have found decision-making signals in LIP. We recently found that a subdivision of parietal cortex, the lateral intraparietal area (LIP), reflects learned categories for multiple types of visual stimuli. Additionally, a comparison of categorization signals in parietal and frontal areas found stronger and earlier categorization signals in parietal cortex, arguing that parietal abstract association or category signals are unlikely to arise via feedback from prefrontal cortex (PFC).

  11. A Prospective Curriculum Using Visual Literacy.

    Science.gov (United States)

    Hortin, John A.

    This report describes the uses of visual literacy programs in the schools and outlines four categories for incorporating training in visual thinking into school curriculums as part of the back to basics movement in education. The report recommends that curriculum writers include materials pertaining to: (1) reading visual language and…

  12. Zero-fluoroscopy cryothermal ablation of atrioventricular nodal re-entry tachycardia guided by endovascular and endocardial catheter visualization using intracardiac echocardiography (Ice&ICE Trial).

    Science.gov (United States)

    Luani, Blerim; Zrenner, Bernhard; Basho, Maksim; Genz, Conrad; Rauwolf, Thomas; Tanev, Ivan; Schmeisser, Alexander; Braun-Dullaeus, Rüdiger C

    2018-01-01

    Stochastic damage of the ionizing radiation to both patients and medical staff is a drawback of fluoroscopic guidance during catheter ablation of cardiac arrhythmias. Therefore, emerging zero-fluoroscopy catheter-guidance techniques are of great interest. We investigated, in a prospective pilot study, the feasibility and safety of the cryothermal (CA) slow-pathway ablation in patients with symptomatic atrioventricular-nodal-re-entry-tachycardia (AVNRT) using solely intracardiac echocardiography (ICE) for endovascular and endocardial catheter visualization. Twenty-five consecutive patients (mean age 55.6 ± 12.0 years, 17 female) with ECG-documentation or symptoms suggesting AVNRT underwent an electrophysiology study (EPS) in our laboratory utilizing ICE for catheter navigation. Supraventricular tachycardia was inducible in 23 (92%) patients; AVNRT was confirmed by appropriate stimulation maneuvers in 20 (80%) patients. All EPS in the AVNRT subgroup could be accomplished without need for fluoroscopy, relying solely on ICE-guidance. CA guided by anatomical location and slow-pathway potentials was successful in all patients, median cryo-mappings = 6 (IQR:3-10), median cryo-ablations = 2 (IQR:1-3). Fluoroscopy was used to facilitate the trans-septal puncture and localization of the ablation substrate in the remaining 3 patients (one focal atrial tachycardia and two atrioventricular-re-entry-tachycardias). Mean EPS duration in the AVNRT subgroup was 99.8 ± 39.6 minutes, ICE guided catheter placement 11.9 ± 5.8 minutes, time needed for diagnostic evaluation 27.1 ± 10.8 minutes, and cryo-application duration 26.3 ± 30.8 minutes. ICE-guided zero-fluoroscopy CA in AVNRT patients is feasible and safe. Real-time visualization of the true endovascular borders and cardiac structures allow for safe catheter navigation during the ICE-guided EPS and might be an alternative to visualization technologies using geometry reconstructions. © 2017 Wiley Periodicals, Inc.

  13. Visual strategies underpinning the development of visual-motor expertise when hitting a ball.

    Science.gov (United States)

    Sarpeshkar, Vishnu; Abernethy, Bruce; Mann, David L

    2017-10-01

    It is well known that skilled batters in fast-ball sports do not align their gaze with the ball throughout ball-flight, but instead adopt a unique sequence of eye and head movements that contribute toward their skill. However, much of what we know about visual-motor behavior in hitting is based on studies that have employed case study designs, and/or used simplified tasks that fall short of replicating the spatiotemporal demands experienced in the natural environment. The aim of this study was to provide a comprehensive examination of the eye and head movement strategies that underpin the development of visual-motor expertise when intercepting a fast-moving target. Eye and head movements were examined in situ for 4 groups of cricket batters, who were crossed for playing level (elite or club) and age (U19 or adult), when hitting balls that followed either straight or curving ('swinging') trajectories. The results provide support for some widely cited markers of expertise in batting, while questioning the legitimacy of others. Swinging trajectories alter the visual-motor behavior of all batters, though in large part because of the uncertainty generated by the possibility of a variation in trajectory rather than any actual change in trajectory per se. Moreover, curving trajectories influence visual-motor behavior in a nonlinear fashion, with targets that curve away from the observer influencing behavior more than those that curve inward. The findings provide a more comprehensive understanding of the development of visual-motor expertise in interception. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Exploring eye movements in patients with glaucoma when viewing a driving scene.

    Science.gov (United States)

    Crabb, David P; Smith, Nicholas D; Rauscher, Franziska G; Chisholm, Catharine M; Barbur, John L; Edgar, David F; Garway-Heath, David F

    2010-03-16

    Glaucoma is a progressive eye disease and a leading cause of visual disability. Automated assessment of the visual field determines the different stages in the disease process: it would be desirable to link these measurements taken in the clinic with patients' actual function, or establish if patients compensate for their restricted field of view when performing everyday tasks. Hence, this study investigated eye movements in glaucomatous patients when viewing driving scenes in a hazard perception test (HPT). The HPT is a component of the UK driving licence test consisting of a series of short film clips of various traffic scenes viewed from the driver's perspective, each containing hazardous situations that require the camera car to change direction or slow down. Data from nine glaucomatous patients with binocular visual field defects and ten age-matched control subjects were considered (all experienced drivers). Each subject viewed 26 different films with eye movements simultaneously monitored by an eye tracker. Computer software was purpose written to pre-process the data, co-register it to the film clips and to quantify eye movements and point-of-regard (using a dynamic bivariate contour ellipse analysis). On average, and across all HPT films, patients exhibited different eye movement characteristics to controls, making, for example, significantly more saccades (P<0.001; 95% confidence interval for mean increase: 9.2 to 22.4%). Whilst the average region of 'point-of-regard' of the patients did not differ significantly from that of the controls, there were revealing cases where patients failed to see a hazard in relation to their binocular visual field defect. Characteristics of eye movement patterns in patients with bilateral glaucoma can differ significantly from age-matched controls when viewing a traffic scene.
Further studies of eye movements made by glaucomatous patients could provide useful information about the definition of the visual field component required for
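The bivariate contour ellipse analysis mentioned above assumes gaze samples are approximately bivariate normal; the ellipse expected to enclose a proportion p of samples then has area 2kπσxσy√(1−ρ²) with k = −ln(1−p). A minimal static sketch (the function name and data are illustrative, not from the study, which used a dynamic variant):

```python
import math

def bcea(x, y, p=0.682):
    """Bivariate contour ellipse area: area of the ellipse expected to
    enclose a proportion p of gaze samples, assuming bivariate normality."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / (n - 1))
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / (n - 1))
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    rho = cov / (sx * sy)                 # Pearson correlation of x and y
    k = -math.log(1 - p)                  # scaling for coverage proportion p
    return 2 * k * math.pi * sx * sy * math.sqrt(1 - rho ** 2)
```

A smaller area indicates a more stable point-of-regard; a group difference between patients and controls would appear as systematically larger areas in one group.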

  15. Teach yourself visually Mac Mini

    CERN Document Server

    Hart-Davis, Guy

    2012-01-01

    The perfect how-to guide for visual learners. Apple's Mac Mini packs a powerful punch in a small package, including both HDMI and Thunderbolt ports plus the acclaimed OS X. But if you want to get the very most from all this power and versatility, be sure to get this practical visual guide. With full-color, step-by-step instructions as well as screenshots and illustrations on every page, it clearly shows you how to accomplish tasks rather than burying you in pages of text. Discover helpful visuals and how-tos on the OS, hardware specs, Launchpad, the App Store, multimedia capabilities (such

  16. Eye tracking for visual marketing

    NARCIS (Netherlands)

    Wedel, M.; Pieters, R.

    2008-01-01

    We provide the theory of visual attention and eye-movements that serves as a basis for evaluating eye-tracking research and for discussing salient and emerging issues in visual marketing. Motivated from its rising importance in marketing practice and its potential for theoretical contribution, we

  17. Eye movements during listening reveal spontaneous grammatical processing.

    Science.gov (United States)

    Huette, Stephanie; Winter, Bodo; Matlock, Teenie; Ardell, David H; Spivey, Michael

    2014-01-01

    Recent research using eye-tracking typically relies on constrained visual contexts, in particular goal-oriented contexts: viewing a small array of objects on a computer screen and performing some overt decision or identification. Eye-tracking paradigms that use pictures as a measure of word or sentence comprehension are sometimes criticized as ecologically invalid because pictures and explicit tasks are not always present during language comprehension. This study compared the comprehension of sentences with two different grammatical forms: the past progressive (e.g., was walking), which emphasizes the ongoing nature of actions, and the simple past (e.g., walked), which emphasizes the end-state of an action. The results showed that the distribution and timing of eye movements mirror the underlying conceptual structure of this linguistic difference in the absence of any visual stimuli or task constraint: fixations were shorter and saccades were more dispersed across the screen, as if participants were thinking about more dynamic events when listening to the past progressive stories. Thus, eye movement data suggest that visual inputs or an explicit task are unnecessary to solicit analog representations of features such as movement, which could be a key perceptual component of grammatical comprehension.

  18. Eye movements during listening reveal spontaneous grammatical processing

    Directory of Open Access Journals (Sweden)

    Stephanie Huette

    2014-05-01

    Full Text Available Recent research using eye-tracking typically relies on constrained visual contexts, in particular goal-oriented contexts: viewing a small array of objects on a computer screen and performing some overt decision or identification. Eye-tracking paradigms that use pictures as a measure of word or sentence comprehension are sometimes criticized as ecologically invalid because pictures and explicit tasks are not always present during language comprehension. This study compared the comprehension of sentences with two different grammatical forms: the past progressive (e.g., was walking), which emphasizes the ongoing nature of actions, and the simple past (e.g., walked), which emphasizes the end-state of an action. The results showed that the distribution and timing of eye movements mirror the underlying conceptual structure of this linguistic difference in the absence of any visual stimuli or task constraint: fixations were shorter and saccades were more dispersed across the screen, as if participants were thinking about more dynamic events when listening to the past progressive stories. Thus, eye movement data suggest that visual inputs or an explicit task are unnecessary to solicit analogue representations of features such as movement, which could be a key perceptual component of grammatical comprehension.

  19. Priming and the guidance by visual and categorical templates in visual search

    Directory of Open Access Journals (Sweden)

    Anna eWilschut

    2014-02-01

    Full Text Available Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity towards the target feature, i.e. the extent to which observers searched selectively among items of the cued versus uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if the priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.

  20. Priming and the guidance by visual and categorical templates in visual search.

    Science.gov (United States)

    Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if the priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.

  1. Listening to music reduces eye movements.

    Science.gov (United States)

    Schäfer, Thomas; Fachner, Jörg

    2015-02-01

    Listening to music can change the way that people visually experience the environment, probably as a result of an inwardly directed shift of attention. We investigated whether this attentional shift can be demonstrated by reduced eye movement activity, and if so, whether that reduction depends on absorption. Participants listened to their preferred music, to unknown neutral music, or to no music while viewing a visual stimulus (a picture or a film clip). Preference and absorption were significantly higher for the preferred music than for the unknown music. Participants exhibited longer fixations, fewer saccades, and more blinks when they listened to music than when they sat in silence. However, no differences emerged between the preferred music condition and the neutral music condition. Thus, music significantly reduces eye movement activity, but an attentional shift from the outer to the inner world (i.e., to the emotions and memories evoked by the music) emerged as only one potential explanation. Other explanations, such as a shift of attention from visual to auditory input, are discussed.
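Fixation and saccade counts like those reported above are commonly derived from raw gaze samples with a velocity-threshold (I-VT) classifier. A minimal sketch under assumed units (gaze in degrees of visual angle, sampling rate in Hz); this is an illustration of the general technique, not the study's actual pipeline:

```python
def count_saccades(x, y, fs, threshold=30.0):
    """Count saccades with a simple velocity-threshold (I-VT) rule.

    x, y: gaze coordinates in degrees of visual angle, one sample per 1/fs s.
    threshold: angular velocity (deg/s) above which a sample counts as saccadic.
    """
    in_saccade = False
    count = 0
    for i in range(1, len(x)):
        velocity = ((x[i] - x[i - 1]) ** 2 + (y[i] - y[i - 1]) ** 2) ** 0.5 * fs
        if velocity > threshold:
            if not in_saccade:      # rising edge: a new saccade begins
                count += 1
                in_saccade = True
        else:
            in_saccade = False      # back below threshold: fixation resumes
    return count
```

Fewer saccades in a music condition would show up directly as a lower count over equal-length viewing periods; fixation durations fall out of the same classification as the lengths of the below-threshold runs.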

  2. Woman Suffrage Movement: 1848-1920.

    Science.gov (United States)

    Eisenberg, Bonnie

    This unit is designed to be used in a history or government class in grades 5-12. It introduces students to individuals, organizations, and the political processes of the women's suffrage movement. In addition, the guide links past women's organizations to today's women's organizations, and helps students understand political strategies used in…

  3. Comparison of visual biofeedback system with a guiding waveform and abdomen-chest motion self-control system for respiratory motion management

    International Nuclear Information System (INIS)

    Nakajima, Yujiro; Kadoya, Noriyuki; Kanai, Takayuki; Ito, Kengo; Sato, Kiyokazu; Dobashi, Suguru; Yamamoto, Takaya; Ishikawa, Yojiro; Matsushita, Haruo; Takeda, Ken; Jingu, Keiichi

    2016-01-01

    Irregular breathing can influence the outcome of 4D computed tomography imaging and cause artifacts. Visual biofeedback systems associated with a patient-specific guiding waveform are known to reduce respiratory irregularities. In Japan, abdomen and chest motion self-control devices (Abches) (representing simpler visual coaching techniques without a guiding waveform) are used instead; however, no studies have compared these two systems to date. Here, we evaluate the effectiveness of respiratory coaching in reducing respiratory irregularities by comparing two respiratory management systems. We collected data from 11 healthy volunteers. Bar and wave models were used as visual biofeedback systems. Abches consisted of a respiratory indicator indicating the end of each expiration and inspiration motion. Respiratory variations were quantified as root mean squared error (RMSE) of displacement and period of breathing cycles. All coaching techniques improved respiratory variation, compared with free-breathing. Displacement RMSEs were 1.43 ± 0.84, 1.22 ± 1.13, 1.21 ± 0.86 and 0.98 ± 0.47 mm for free-breathing, Abches, bar model and wave model, respectively. Period RMSEs were 0.48 ± 0.42, 0.33 ± 0.31, 0.23 ± 0.18 and 0.17 ± 0.05 s for free-breathing, Abches, bar model and wave model, respectively. The average reduction in displacement and period RMSE compared with the wave model were 27% and 47%, respectively. For variation in both displacement and period, wave model was superior to the other techniques. Our results showed that visual biofeedback combined with a wave model could potentially provide clinical benefits in respiratory management, although all techniques were able to reduce respiratory irregularities
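    The displacement and period RMSE figures above are plain root-mean-squared errors between a measured breathing trace and its reference. A minimal sketch, using hypothetical displacement values in mm rather than the study's data:

    ```python
    import numpy as np

    def rmse(signal, reference):
        """Root mean squared error between a breathing trace and its guide."""
        signal = np.asarray(signal, dtype=float)
        reference = np.asarray(reference, dtype=float)
        return float(np.sqrt(np.mean((signal - reference) ** 2)))

    # Hypothetical abdominal displacement trace (mm) vs. a guiding waveform
    trace = [0.0, 1.0, 2.0, 1.0, 0.0]
    guide = [0.0, 1.0, 1.0, 1.0, 0.0]
    print(rmse(trace, guide))  # sqrt(0.2) ≈ 0.447
    ```

    The same function applies to period RMSE by passing per-cycle durations in seconds instead of displacements.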

  4. Visual Semiotics & Uncertainty Visualization: An Empirical Study.

    Science.gov (United States)

    MacEachren, A M; Roth, R E; O'Brien, J; Li, B; Swingley, D; Gahegan, M

    2012-12-01

    This paper presents two linked empirical studies focused on uncertainty visualization. The experiments are framed from two conceptual perspectives. First, a typology of uncertainty is used to delineate kinds of uncertainty matched with space, time, and attribute components of data. Second, concepts from visual semiotics are applied to characterize the kind of visual signification that is appropriate for representing those different categories of uncertainty. This framework guided the two experiments reported here. The first addresses representation intuitiveness, considering both visual variables and iconicity of representation. The second addresses relative performance of the most intuitive abstract and iconic representations of uncertainty on a map reading task. Combined results suggest initial guidelines for representing uncertainty and discussion focuses on practical applicability of results.

  5. Visually guided obstacle avoidance in the box jellyfish Tripedalia cystophora and Chiropsella bronzie

    DEFF Research Database (Denmark)

    Garm, A; O'Connor, M; Parkefelt, L

    2007-01-01

    Box jellyfish, cubomedusae, possess an impressive total of 24 eyes of four morphologically different types. Two of these eye types, called the upper and lower lens eyes, are camera-type eyes with spherical fish-like lenses. Compared with other cnidarians, cubomedusae also have an elaborate...... behavioral repertoire, which seems to be predominantly visually guided. Still, positive phototaxis is the only behavior described so far that is likely to be correlated with the eyes. We have explored the obstacle avoidance response of the Caribbean species Tripedalia cystophora and the Australian species...... a tendency to follow the intensity contrast between the obstacle and the surroundings (chamber walls). In the flow chamber Tripedalia cystophora displayed a stronger obstacle avoidance response than Chiropsella bronzie since they had less contact with the obstacles. This seems to follow differences...

  6. Design and analysis of ultrasonic monaural audio guiding device for the visually impaired.

    Science.gov (United States)

    Kim, Keonwook; Kim, Hyunjai; Yun, Gihun; Kim, Myungsoo

    2009-01-01

    The novel Audio Guiding Device (AGD) based on ultrasonics, named SonicID, has been developed to localize points of interest for the visually impaired. SonicID requires an infrastructure of transmitters that broadcast location information over an ultrasonic carrier. The user with an ultrasonic headset receives the information with an amplitude that varies with the user's location and direction, owing to ultrasonic propagation characteristics and the modulation method. This paper proposes a monaural headset form factor for SonicID, which improves the daily life of the beneficiary compared with the previous version, which uses both ears. Experimental results from SonicID, Bluetooth, and audible sound show that SonicID achieves localization performance comparable to audible sound while remaining silent to bystanders.

  7. Deciding Which Way to Go: How Do Insects alter Movements to Negotiate Barriers?

    Directory of Open Access Journals (Sweden)

    Roy E. Ritzmann

    2012-07-01

    Full Text Available Animals must routinely deal with barriers as they move through their natural environment. These challenges require directed changes in leg movements and posture performed in the context of ever changing internal and external conditions. In particular, cockroaches use a combination of tactile and visual information to evaluate objects in their path in order to effectively guide their movements in complex terrain. When encountering a large block, the insect uses its antennae to evaluate the object’s height then rears upward accordingly before climbing. A shelf presents a choice between climbing and tunneling that depends on how the antennae strike the shelf; tapping from above yields climbing, while tapping from below causes tunneling. However, ambient light conditions detected by the ocelli can bias that decision. Similarly, in a T-maze turning is determined by antennal contact but influenced by visual cues. These multi-sensory behaviors led us to look at the central complex as a center for sensori-motor integration within the insect brain. Visual and antennal tactile cues are processed within the central complex and, in tethered preparations, several central complex units changed firing rates in tandem with or prior to altered step frequency or turning, while stimulation through the implanted electrodes evoked these same behavioral changes. To further test for a central complex role in these decisions, we examined behavioral effects of brain lesions. Electrolytic lesions in restricted regions of the central complex generated site specific behavioral deficits. Similar changes were also found in reversible effects of procaine injections in the brain. Finally, we are examining these kinds of decisions made in a large arena that more closely matches the conditions under which cockroaches forage. Overall, our studies suggest that CC circuits may indeed influence the descending commands associated with navigational decisions, thereby making them

  8. Fuel assembly guide tube

    International Nuclear Information System (INIS)

    Jabsen, F.S.

    1979-01-01

    This invention is directed toward a nuclear fuel assembly guide tube arrangement which restrains spacer grid movement due to coolant flow and which offers secondary means for supporting a fuel assembly during handling and transfer operations

  9. Feasibility assessment of visual quality analyzer KR-1W guiding personalized aspheric IOL implantation

    Directory of Open Access Journals (Sweden)

    Xiao-Li Wang

    2015-01-01

    Full Text Available AIM: To discuss the feasibility of using the visual quality analyzer KR-1W to guide relatively personalized aspheric intraocular lens (IOL) implants so as to bring whole-eye spherical aberration close to 0.1μm. METHODS: In this prospective case series study, the corneal spherical aberration with a 6mm aperture of 73 patients (100 eyes) was measured with the KR-1W Visual Function Analyzer 1d before surgery. So that the whole postoperative spherical aberration would be close to 0.1μm, 9 cases (16 eyes) with corneal spherical aberration 0.35μm were implanted with the Tecnis ZA9003 IOL, named the Tecnis group. The aspherical IOL was implanted after phacoemulsification through a 2.75mm corneal incision without suture. Uncorrected visual acuity, best corrected visual acuity, and spherical aberration of the whole eye and internal optics (mainly IOL) at 6mm pupil diameter were examined at 3mo postoperatively. The relevant data were analyzed using t-test and variance analysis. RESULTS: The whole ocular spherical aberration at 6mm pupil diameter in all postoperative eyes was 0.084±0.032μm; in the Tecnis group, 0.091±0.021μm; in the AO group, 0.081±0.013μm; in the IQ group, 0.093±0.042μm. There was no significant difference between the predicted and actual values of ocular spherical aberration at 6mm pupil diameter postoperatively (t=1.932, P=0.061) or among the three groups. The difference between the predicted preoperative whole-eye spherical aberration values and the actual values after surgery was 0.013±0.041μm; this was not statistically significant (F=2.537, P=0.091). Comparisons of uncorrected visual acuity and best corrected visual acuity among the three postoperative groups likewise found no significant differences (F=0.897, P=0.421; F=1.423, P=0.097). CONCLUSION: Personalized selection of an aspheric IOL based on a patient's preoperative corneal spherical aberration is feasible and produces satisfactory target postoperative

  10. Exploratory eye movements to pictures in childhood-onset schizophrenia and attention-deficit/hyperactivity disorder (ADHD).

    Science.gov (United States)

    Karatekin, C; Asarnow, R F

    1999-02-01

    We investigated exploratory eye movements to thematic pictures in schizophrenic, attention-deficit/hyperactivity disorder (ADHD), and normal children. For each picture, children were asked three questions varying in amount of structure. We tested if schizophrenic children would stare or scan extensively and if their scan patterns were differentially affected by the question. Time spent viewing relevant and irrelevant regions, fixation duration (an estimate of processing rate), and distance between fixations (an estimate of breadth of attention) were measured. ADHD children showed a trend toward shorter fixations than normals on the question requiring the most detailed analysis. Schizophrenic children looked at fewer relevant, but not more irrelevant, regions than normals. They showed a tendency to stare more when asked to decide what was happening but not when asked to attend to specific regions. Thus, lower levels of visual attention (e.g., basic control of eye movements) were intact in schizophrenic children. In contrast, they had difficulty with top-down control of selective attention in the service of self-guided behavior.

  11. Shape of magnifiers affects controllability in children with visual impairment.

    Science.gov (United States)

    Liebrand-Schurink, Joyce; Boonstra, F Nienke; van Rens, Ger H M B; Cillessen, Antonius H N; Meulenbroek, Ruud G J; Cox, Ralf F A

    2016-12-01

    This study aimed to examine the controllability of cylinder-shaped and dome-shaped magnifiers in young children with visual impairment. This study investigates goal-directed arm movements in low-vision aid use (stand and dome magnifier-like object) in a group of young children with visual impairment (n = 56) compared to a group of children with normal sight (n = 66). Children with visual impairment and children with normal sight aged 4-8 years executed two types of movements (cyclic and discrete) in two orientations (vertical or horizontal) over two distances (10 cm and 20 cm) with two objects resembling the size and shape of regularly prescribed stand and dome magnifiers. The visually impaired children performed slower movements than the normally sighted children. In both groups, the accuracy and speed of the reciprocal aiming movements improved significantly with age. Surprisingly, in both groups, the performance with the dome-shaped object was significantly faster (in the 10 cm condition and 20 cm condition with discrete movements) and more accurate (in the 20 cm condition) than with the stand-shaped object. From a controllability perspective, this study suggests that it is better to prescribe dome-shaped than cylinder-shaped magnifiers to young children with visual impairment. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  12. Visual communication, reproductive behavior, and home range of Hylodes dactylocinus (Anura, Leptodactylidae

    Directory of Open Access Journals (Sweden)

    Patrícia Narvaes

    2005-12-01

    Full Text Available We studied the signaling, reproductive and courtship behaviors of the diurnal stream-dwelling frog Hylodes dactylocinus. The repertoire of visual signals of H. dactylocinus includes foot-flagging, leg-stretching, body movements, and toe-wiggling. The visual signals are performed only by males and are used to defend territories against intruders and to attract females. Home range size varied from 0.12 to 13.12 m2 for males (N = 44), and from 0.45 to 7.98 m2 for females (N = 24); residency time varied from one to 12 months for males, and from two to 10 months for females. During the courtship of H. dactylocinus the male gives an encounter call towards an approaching female, touches her snout, and guides her to a previously dug nest. After oviposition, the female leaves the nest and returns to her own home range; the male remains calling after concealing the nest entrance.

  13. Multisensory Integration in the Virtual Hand Illusion with Active Movement

    Directory of Open Access Journals (Sweden)

    Woong Choi

    2016-01-01

    Full Text Available Improving the sense of immersion is one of the core issues in virtual reality. Perceptual illusions of ownership can be perceived over a virtual body in a multisensory virtual reality environment. Rubber Hand and Virtual Hand Illusions showed that body ownership can be manipulated by applying suitable visual and tactile stimulation. In this study, we investigate the effects of multisensory integration in the Virtual Hand Illusion with active movement. A virtual xylophone playing system which can interactively provide synchronous visual, tactile, and auditory stimulation was constructed. We conducted two experiments regarding different movement conditions and different sensory stimulations. Our results demonstrate that multisensory integration with free active movement can improve the sense of immersion in virtual reality.

  14. Dancers Entrain More Effectively than Non-Dancers to Another Actor’s Movements

    Directory of Open Access Journals (Sweden)

    Auriel eWashburn

    2014-10-01

    Full Text Available For many everyday sensorimotor tasks, trained dancers have been found to exhibit distinct and sometimes superior (more stable or robust patterns of behavior compared to non-dancers. Past research has demonstrated that experts in fields requiring specialized physical training and behavioral control exhibit superior interpersonal coordination capabilities for expertise-related tasks. To date, however, no published studies have compared dancers’ abilities to coordinate their movements with the movements of another individual—i.e., during a so-called visual-motor interpersonal coordination task. The current study was designed to investigate whether trained dancers would be better able to coordinate with a partner performing short sequences of dance-like movements than non-dancers. Movement time series were recorded for individual dancers and non-dancers asked to synchronize with a confederate during three different movement sequences characterized by distinct dance styles (i.e., dance team routine, contemporary ballet, mixed style without hearing any auditory signals or music. A diverse range of linear and nonlinear analyses (i.e., Cross-correlation, Cross-Recurrence Quantification Analysis (CRQA, and Cross-Wavelet analysis provided converging measures of coordination across multiple time scales. While overall levels of interpersonal coordination were influenced by differences in movement sequence for both groups, dancers consistently displayed higher levels of coordination with the confederate at both short and long time scales. These findings demonstrate that the visual-motor coordination capabilities of trained dancers allow them to better synchronize with other individuals performing dance-like movements than non-dancers. Further investigation of similar tasks may help to increase the understanding of visual-motor entrainment in general, as well as provide insight into the effects of focused training on visual-motor and interpersonal
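    Cross-correlation, one of the linear measures named above, can be sketched as a search for the lag that maximizes the Pearson correlation between two movement time series. The function and the synthetic leader/follower traces below are illustrative assumptions, not the authors' analysis pipeline:

    ```python
    import numpy as np

    def max_lagged_corr(x, y, max_lag):
        """Best (lag, Pearson r) aligning y against x over lags in
        [-max_lag, max_lag]; a negative lag means y trails x."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        best_lag, best_r = 0, -2.0
        for lag in range(-max_lag, max_lag + 1):
            if lag < 0:
                a, b = x[:lag], y[-lag:]
            elif lag > 0:
                a, b = x[lag:], y[:-lag]
            else:
                a, b = x, y
            r = np.corrcoef(a, b)[0, 1]
            if r > best_r:
                best_lag, best_r = lag, r
        return best_lag, best_r

    # Synthetic traces: the "follower" copies the "leader" two samples late
    t = np.linspace(0, 4 * np.pi, 200)
    leader = np.sin(t)
    follower = np.roll(leader, 2)
    print(max_lagged_corr(leader, follower, 5))  # lag -2, r ≈ 1
    ```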

  15. Detecting delay in visual feedback of an action as a monitor of self recognition.

    Science.gov (United States)

    Hoover, Adria E N; Harris, Laurence R

    2012-10-01

    How do we distinguish "self" from "other"? The correlation between willing an action and seeing it occur is an important cue. We exploited the fact that this correlation needs to occur within a restricted temporal window in order to obtain a quantitative assessment of when a body part is identified as "self". We measured the threshold and sensitivity (d') for detecting a delay between movements of the finger (of both the dominant and non-dominant hands) and visual feedback as seen from four visual perspectives (the natural view, and mirror-reversed and/or inverted views). Each trial consisted of one presentation with minimum delay and another with a delay of between 33 and 150 ms. Participants indicated which presentation contained the delayed view. We varied the amount of efference copy available for this task by comparing performances for discrete movements and continuous movements. Discrete movements are associated with a stronger efference copy. Sensitivity to detect asynchrony between visual and proprioceptive information was significantly higher when movements were viewed from a "plausible" self perspective compared with when the view was reversed or inverted. Further, we found differences in performance between dominant and non-dominant hand finger movements across the continuous and single movements. Performance varied with the viewpoint from which the visual feedback was presented and on the efferent component such that optimal performance was obtained when the presentation was in the normal natural orientation and clear efferent information was available. Variations in sensitivity to visual/non-visual temporal incongruence with the viewpoint in which a movement is seen may help determine the arrangement of the underlying visual representation of the body.
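    The sensitivity measure d' above is the standard signal-detection index z(H) − z(F). The sketch below uses hypothetical hit and false-alarm rates and a simple clamp for extreme rates; the paper's exact correction, if any, is not stated:

    ```python
    from statistics import NormalDist

    def d_prime(hit_rate, fa_rate, n_trials=None):
        """Sensitivity index d' = z(H) - z(F). If n_trials is given, rates of
        exactly 0 or 1 are clamped to 1/(2n) and 1 - 1/(2n) to keep z finite."""
        z = NormalDist().inv_cdf
        if n_trials:
            lo, hi = 1 / (2 * n_trials), 1 - 1 / (2 * n_trials)
            hit_rate = min(max(hit_rate, lo), hi)
            fa_rate = min(max(fa_rate, lo), hi)
        return z(hit_rate) - z(fa_rate)

    # Hypothetical rates from a delay-detection session
    print(round(d_prime(0.84, 0.16), 2))  # ≈ 1.99
    ```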

  16. Effect of Canister Movement on Water Turbidity

    International Nuclear Information System (INIS)

    TRIMBLE, D.J.

    2000-01-01

    Requirements for evaluating the adherence characteristics of sludge on the fuel stored in the K East Basin and the effect of canister movement on basin water turbidity are documented in Briggs (1996). The results of the sludge adherence testing have been documented (Bergmann 1996). This report documents the results of the canister movement tests. The purpose of the canister movement tests was to characterize water turbidity under controlled canister movements (Briggs 1996). The tests were designed to evaluate methods for minimizing the plumes and controlling water turbidity during fuel movements leading to multi-canister overpack (MCO) loading. It was expected that the test data would provide qualitative visual information for use in the design of the fuel retrieval and water treatment systems. Video recordings of the tests were to be the only information collected

  17. Perceptual learning modifies untrained pursuit eye movements

    OpenAIRE

    Szpiro, Sarit F. A.; Spering, Miriam; Carrasco, Marisa

    2014-01-01

    Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training...

  18. Seeing through rose-colored glasses: How optimistic expectancies guide visual attention.

    Directory of Open Access Journals (Sweden)

    Laura Kress

    Full Text Available Optimism bias and positive attention bias have important, highly similar implications for mental health but have only been examined in isolation. Investigating the causal relationships between these biases can improve the understanding of their underlying cognitive mechanisms, leading to new directions in neurocognitive research and revealing important information about normal functioning as well as the development, maintenance, and treatment of psychological diseases. In the current project, we hypothesized that optimistic expectancies can exert causal influences on attention deployment. To test this causal relation, we conducted two experiments in which we manipulated optimistic and pessimistic expectancies regarding future rewards and punishments. In a subsequent visual search task, we examined participants' attention to positive (i.e., rewarding) and negative (i.e., punishing) target stimuli, measuring their eye gaze behavior and reaction times. In both experiments, participants' attention was guided toward reward compared with punishment when optimistic expectancies were induced. Additionally, in Experiment 2, participants' attention was guided toward punishment compared with reward when pessimistic expectancies were induced. However, the effect of optimistic (rather than pessimistic) expectancies on attention deployment was stronger. A key characteristic of optimism bias is that people selectively update expectancies in an optimistic direction, not in a pessimistic direction, when receiving feedback. As revealed in our studies, selective attention to rewarding versus punishing evidence when people are optimistic might explain this updating asymmetry. Thus, the current data can help clarify why optimistic expectancies are difficult to overcome. Our findings elucidate the cognitive mechanisms underlying optimism and attention bias, which can yield a better understanding of their benefits for mental health.

  19. Role of the cerebellum in reaching movements in humans. II. A neural model of the intermediate cerebellum.

    Science.gov (United States)

    Schweighofer, N; Spoelstra, J; Arbib, M A; Kawato, M

    1998-01-01

    The cerebellum is essential for the control of multijoint movements; when the cerebellum is lesioned, the performance error is more than the summed errors produced by single joints. In the companion paper (Schweighofer et al., 1998), a functional anatomical model for visually guided arm movement was proposed. The model comprised a basic feedforward/feedback controller with realistic transmission delays and was connected to a two-link, six-muscle, planar arm. In the present study, we examined the role of the cerebellum in reaching movements by embedding a novel, detailed cerebellar neural network in this functional control model. We could derive realistic cerebellar inputs and the role of the cerebellum in learning to control the arm was assessed. This cerebellar network learned the part of the inverse dynamics of the arm not provided by the basic feedforward/feedback controller. Despite realistically low inferior olive firing rates and noisy mossy fibre inputs, the model could reduce the error between intended and planned movements. The responses of the different cell groups were comparable to those of biological cell groups. In particular, the modelled Purkinje cells exhibited directional tuning after learning and the parallel fibres, due to their length, provide Purkinje cells with the input required for this coordination task. The inferior olive responses contained two different components; the earlier response, locked to movement onset, was always present and the later response disappeared after learning. These results support the theory that the cerebellum is involved in motor learning.

  20. The analysis of the influence of fractal structure of stimuli on fractal dynamics in fixational eye movements and EEG signal

    Science.gov (United States)

    Namazi, Hamidreza; Kulish, Vladimir V.; Akrami, Amin

    2016-05-01

    One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has yet been discovered between the structure of the visual stimulus and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to a ‘complex’ visual stimulus. We demonstrated that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain, as the main part of the nervous system engaged in eye movements, we analyzed the recorded Electroencephalogram (EEG) signal during fixation. We found that there is a coupling between the fractality of the image, the EEG, and fixational eye movements. The capability observed in this research can be further investigated and applied for treatment of different vision disorders.
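    Fractality of a fixation or EEG time series is commonly estimated with detrended fluctuation analysis (DFA); the sketch below is one such generic estimator, not necessarily the method used in this study:

    ```python
    import numpy as np

    def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
        """Detrended fluctuation analysis: slope of log F(s) vs. log s.
        alpha ~ 0.5 for white noise, ~ 1.0 for 1/f noise, ~ 1.5 for Brownian."""
        x = np.asarray(x, float)
        profile = np.cumsum(x - x.mean())  # integrated signal
        fluctuations = []
        for s in scales:
            n_windows = len(profile) // s
            t = np.arange(s)
            sq = []
            for i in range(n_windows):
                seg = profile[i * s:(i + 1) * s]
                trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
                sq.append(np.mean((seg - trend) ** 2))
            fluctuations.append(np.sqrt(np.mean(sq)))
        slope, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
        return slope

    rng = np.random.default_rng(0)
    print(round(dfa_alpha(rng.standard_normal(4096)), 2))  # near 0.5 for white noise
    ```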

  1. Precision of jaw-closing movements for different jaw gaps.

    Science.gov (United States)

    Hellmann, Daniel; Becker, Georg; Giannakopoulos, Nikolaos N; Eberhard, Lydia; Fingerhut, Christopher; Rammelsberg, Peter; Schindler, Hans J

    2014-02-01

    Jaw-closing movements are basic components of physiological motor actions precisely achieving intercuspation without significant interference. The main purpose of this study was to test the hypothesis that, despite an imperfect intercuspal position, the precision of jaw-closing movements fluctuates within the range of physiological closing movements indispensable for meeting intercuspation without significant interference. For 35 healthy subjects, condylar and incisal point positions for fast and slow jaw-closing, interrupted at different jaw gaps by the use of frontal occlusal plateaus, were compared with uninterrupted physiological jaw closing, with identical jaw gaps, using a telemetric system for measuring jaw position. Examiner-guided centric relation served as a clinically relevant reference position. For jaw gaps ≤4 mm, no significant horizontal or vertical displacement differences were observed for the incisal or condylar points among physiological, fast, and slow jaw-closing. However, the jaw positions under these three closing conditions differed significantly from guided centric relation for nearly all experimental jaw gaps. The findings provide evidence of stringent neuromuscular control of jaw-closing movements in the vicinity of intercuspation. These results might be of clinical relevance to occlusal intervention with different objectives. © 2013 Eur J Oral Sci.

  2. Social experience does not abolish cultural diversity in eye movements

    Directory of Open Access Journals (Sweden)

    David J Kelly

    2011-05-01

    Full Text Available Adults from Eastern (e.g., China) and Western (e.g., USA) cultural groups display pronounced differences in a range of visual processing tasks. For example, the eye movement strategies used for information extraction during a variety of face processing tasks (e.g., identification and facial expressions of emotion categorization) differ across cultural groups. Many previous studies have asserted that culture itself is responsible for shaping the way we process visual information, yet this has never been directly investigated. In the current study, we assessed the relative contribution of genetic and cultural factors by testing face processing in a population of British Born Chinese (BBC) adults using face recognition and expression classification tasks. Contrary to predictions made by the cultural differences framework, the majority of BBC adults deployed ‘Eastern’ eye movement strategies, while approximately 25% of participants displayed ‘Western’ strategies. Furthermore, the cultural eye movement strategies used by individuals were consistent across recognition and expression tasks. These findings suggest that ‘culture’ alone cannot straightforwardly account for diversity in eye movement patterns. Instead a more complex understanding of how the environment and individual experiences can influence the mechanisms that govern visual processing is required.

  3. Interactive balance training integrating sensor-based visual feedback of movement performance: a pilot study in older adults.

    Science.gov (United States)

    Schwenk, Michael; Grewal, Gurtej S; Honarvar, Bahareh; Schwenk, Stefanie; Mohler, Jane; Khalsa, Dharma S; Najafi, Bijan

    2014-12-13

    Wearable sensor technology can accurately measure body motion and provide incentive feedback during exercising. The aim of this pilot study was to evaluate the effectiveness and user experience of a balance training program in older adults integrating data from wearable sensors into a human-computer interface designed for interactive training. Senior living community residents (mean age 84.6) with confirmed fall risk were randomized to an intervention (IG, n = 17) or control group (CG, n = 16). The IG underwent 4 weeks (twice a week) of balance training including weight shifting and virtual obstacle crossing tasks with visual/auditory real-time joint movement feedback using wearable sensors. The CG received no intervention. Outcome measures included changes in center of mass (CoM) sway, ankle and hip joint sway measured during eyes open (EO) and eyes closed (EC) balance test at baseline and post-intervention. Ankle-hip postural coordination was quantified by a reciprocal compensatory index (RCI). Physical performance was quantified by the Alternate-Step-Test (AST), Timed-up-and-go (TUG), and gait assessment. User experience was measured by a standardized questionnaire. After the intervention sway of CoM, hip, and ankle were reduced in the IG compared to the CG during both EO and EC condition (p = .007-.042). Improvement was obtained for AST (p = .037), TUG (p = .024), fast gait speed (p = .010), but not normal gait speed (p = .264). Effect sizes were moderate for all outcomes. RCI did not change significantly. Users expressed a positive training experience including fun, safety, and helpfulness of sensor-feedback. Results of this proof-of-concept study suggest that older adults at risk of falling can benefit from the balance training program. Study findings may help to inform future exercise interventions integrating wearable sensors for guided game-based training in home- and community environments. Future studies should evaluate the
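    Center-of-mass sway of the kind reported above is often summarized as an RMS amplitude or a path length over the trial. The sketch below is a generic illustration with hypothetical coordinates, not the study's sensor-fusion pipeline:

    ```python
    import numpy as np

    def sway_metrics(ap, ml):
        """RMS sway amplitude and total path length of a center-of-mass
        trajectory from anterior-posterior and medio-lateral coordinates."""
        ap, ml = np.asarray(ap, float), np.asarray(ml, float)
        rms = np.sqrt(np.mean((ap - ap.mean()) ** 2 + (ml - ml.mean()) ** 2))
        path = np.sum(np.hypot(np.diff(ap), np.diff(ml)))
        return float(rms), float(path)

    # Hypothetical CoM trace (cm): pure medio-lateral oscillation
    print(sway_metrics([0, 0, 0, 0], [0, 1, 0, 1]))  # (0.5, 3.0)
    ```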

  4. A flow visualization study of single-arm sculling movement emulating cephalopod thrust generation

    Science.gov (United States)

    Kazakidi, Asimina; Gnanamanickam, Ebenezer P.; Tsakiris, Dimitris P.; Ekaterinaris, John A.

    2014-11-01

    In addition to jet propulsion, octopuses use arm-swimming motion as an effective means of generating bursts of thrust for hunting, defense, or escape. The individual role of their arms, acting as thrust generators during this motion, is still under investigation, in view of increasing interest in octopus-inspired robotic modes of propulsion. Computational studies have revealed that thrust generation is associated with complex vortical flow patterns in the wake of the moving arm; however, further experimental validation is required. Using the hydrogen bubble technique, we studied the flow disturbance around a single octopus-like robotic arm undergoing two-stroke sculling movements in quiescent fluid. Although simplified, sculling profiles have been found to adequately capture the fundamental kinematics of octopus arm-swimming behavior. In fact, variation of the sculling parameters considerably alters the generation of forward thrust. Flow visualization revealed the generation of complex vortical structures around both rigid and compliant arms. Increased disturbance was evident near the tip, particularly at the transitional phase between recovery and power strokes. These results are in good qualitative agreement with computational and robotic studies. Work funded by the ESF-GSRT HYDRO-ROB Project PE7(281).

  5. Auditory and Visual Memories in PTSD Patients Targeted with Eye Movements and Counting: The Effect of Modality-Specific Loading of Working Memory

    Directory of Open Access Journals (Sweden)

    Suzy J. M. A. Matthijssen

    2017-11-01

    Full Text Available Introduction: Eye movement desensitization and reprocessing (EMDR) therapy is an evidence-based treatment for post-traumatic stress disorder (PTSD). A key element of this therapy is simultaneously recalling an emotionally disturbing memory and performing a dual task that loads working memory. Memories targeted with this therapy are mainly visual, though there is some evidence that auditory memories can also be targeted. Objective: The present study tested whether auditory memories can be targeted with EMDR in PTSD patients. A second objective was to test whether taxing the patient (performing a dual task while recalling a memory) in a modality-specific way (auditory demanding for auditory memories and visually demanding for visual memories) was more effective in reducing the emotionality experienced than taxing cross-modally. Methods: Thirty-six patients diagnosed with PTSD were asked to recall two disturbing memories, one mainly visual, the other mainly auditory. They rated the emotionality of the memories before being exposed to any condition. Both memories were then recalled under three alternating conditions [visual taxation, auditory taxation, and a control condition (CC), which comprised staring at a non-moving dot] – counterbalanced in order – and patients rerated emotionality after each condition. Results: All three conditions were equally effective in reducing the emotionality of the auditory memory. Auditory loading was more effective than the CC in reducing the emotionality of the visual intrusion, but did not differ from the visual load. Conclusion: Auditory and visual aversive memories were less emotional after working memory taxation (WMT). This has clinical implications for EMDR therapy, where mainly visual intrusions are targeted. In this study, there was no benefit of modality specificity. Further fundamental research should be conducted to specify the best protocol for WMT.

  6. Auditory and Visual Memories in PTSD Patients Targeted with Eye Movements and Counting: The Effect of Modality-Specific Loading of Working Memory.

    Science.gov (United States)

    Matthijssen, Suzy J M A; Verhoeven, Liselotte C M; van den Hout, Marcel A; Heitland, Ivo

    2017-01-01

    Introduction: Eye movement desensitization and reprocessing (EMDR) therapy is an evidence-based treatment for post-traumatic stress disorder (PTSD). A key element of this therapy is simultaneously recalling an emotionally disturbing memory and performing a dual task that loads working memory. Memories targeted with this therapy are mainly visual, though there is some evidence that auditory memories can also be targeted. Objective: The present study tested whether auditory memories can be targeted with EMDR in PTSD patients. A second objective was to test whether taxing the patient (performing a dual task while recalling a memory) in a modality-specific way (auditory demanding for auditory memories and visually demanding for visual memories) was more effective in reducing the emotionality experienced than taxing cross-modally. Methods: Thirty-six patients diagnosed with PTSD were asked to recall two disturbing memories, one mainly visual, the other mainly auditory. They rated the emotionality of the memories before being exposed to any condition. Both memories were then recalled under three alternating conditions [visual taxation, auditory taxation, and a control condition (CC), which comprised staring at a non-moving dot] - counterbalanced in order - and patients rerated emotionality after each condition. Results: All three conditions were equally effective in reducing the emotionality of the auditory memory. Auditory loading was more effective than the CC in reducing the emotionality of the visual intrusion, but did not differ from the visual load. Conclusion: Auditory and visual aversive memories were less emotional after working memory taxation (WMT). This has clinical implications for EMDR therapy, where mainly visual intrusions are targeted. In this study, there was no benefit of modality specificity. Further fundamental research should be conducted to specify the best protocol for WMT.

  7. Mental Imagery as Revealed by Eye Movements and Spoken Predicates: A Test of Neurolinguistic Programming.

    Science.gov (United States)

    Elich, Matthew; And Others

    1985-01-01

    Tested Bandler and Grinder's proposal that eye movement direction and spoken predicates are indicative of sensory modality of imagery. Subjects reported images in the three modes, but no relation between imagery and eye movements or predicates was found. Visual images were most vivid and often reported. Most subjects rated themselves as visual,…

  8. Kinesthetic information disambiguates visual motion signals.

    Science.gov (United States)

    Hu, Bo; Knill, David C

    2010-05-25

    Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction. Copyright 2010 Elsevier Ltd. All rights reserved.

  9. Gymnasts utilize visual and auditory information for behavioural synchronization in trampolining.

    Science.gov (United States)

    Heinen, T; Koschnick, J; Schmidt-Maaß, D; Vinken, P M

    2014-08-01

    In synchronized trampolining, two gymnasts perform the same routine at the same time. While trained gymnasts are thought to coordinate their own movements with the movements of another gymnast by detecting relevant movement information, the question arises how visual and auditory information contribute to the emergence of synchronicity between both gymnasts. Therefore the aim of this study was to examine the role of visual and auditory information in the emergence of coordinated behaviour in synchronized trampolining. Twenty female gymnasts were asked to synchronize their leaps with the leaps of a model gymnast, while visual and auditory information was manipulated. The results revealed that gymnasts needed more leaps to reach synchronicity when only either auditory (12.9 leaps) or visual information (10.8 leaps) was available, as compared to when both auditory and visual information was available (8.1 leaps). It is concluded that visual and auditory information play significant roles in synchronized trampolining, whilst visual information seems to be the dominant source for emerging behavioural synchronization, and auditory information supports this emergence.

  10. Contextual cueing of pop-out visual search: when context guides the deployment of attention.

    Science.gov (United States)

    Geyer, Thomas; Zehetleitner, Michael; Müller, Hermann J

    2010-05-01

    Visual context information can guide attention in demanding (i.e., inefficient) search tasks. When participants are repeatedly presented with identically arranged ('repeated') displays, reaction times are faster relative to newly composed ('non-repeated') displays. The present article examines whether this 'contextual cueing' effect also operates in simple (i.e., efficient) search tasks and, if so, whether it influences target, rather than response, selection. The results were that singleton-feature targets were detected faster when the search items were presented in repeated, rather than non-repeated, arrangements. Importantly, repeated, relative to novel, displays also led to an increase in signal detection accuracy. Thus, contextual cueing can expedite the selection of pop-out targets, most likely by enhancing feature contrast signals at the overall-salience computation stage.

  11. Application for TJ-II Signals Visualization: User's Guide; Aplicacion para la Visualizacion de Senales de TJ-II: Guia del Usuario

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, E.; Portas, A. B.; Vega, J. [Ciemat, Madrid (Spain)

    2000-07-01

    This document describes the functionality of the application developed by the Data Acquisition Group for TJ-II signal visualization. There are two versions of the application: the on-line version, used for signal visualization during TJ-II operation, and the off-line version, used for signal visualization outside TJ-II operation. Both versions consist of a graphical user interface developed with X/Motif, in which most actions can be performed using the mouse buttons. This user's guide describes the functionality of both versions, beginning with application start-up and explaining in detail all the options provided and the actions that can be performed with each graphic control. (Author) 8 refs.

  12. Does the brain use sliding variables for the control of movements?

    Science.gov (United States)

    Hanneton, S; Berthoz, A; Droulez, J; Slotine, J J

    1997-12-01

    tracking error and its derivatives should be correlated at a particular time lag before movement onset. A peak of correlation was found for a physiologically plausible reaction time, corresponding to a stable composite variable. The direction and amplitude of the ongoing stereotyped movements also seemed to be adjusted to minimize this variable. These findings suggest that, during visually guided movements, human subjects attempt to minimize such a composite variable rather than the instantaneous error. This minimization seems to be achieved by the execution of stereotyped corrective movements.
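
A sliding (composite) variable of the kind studied here combines the instantaneous tracking error with a weighted error derivative, s = e + λ·ė, and the analysis looks for the time lag at which this variable best predicts the subsequent corrective-movement signal. A minimal sketch of that idea (the λ weighting, the function names, and the synthetic signals are illustrative assumptions, not the authors' actual parameters or analysis pipeline):

```python
import numpy as np

def composite_variable(error, dt, lam=1.0):
    """Sliding-mode-style composite variable s = e + lam * de/dt,
    combining the instantaneous tracking error and its derivative."""
    return error + lam * np.gradient(error, dt)

def peak_correlation_lag(s, correction, dt, max_lag_s=0.5):
    """Find the lag (in seconds) at which the composite variable
    correlates best with the later corrective-movement signal."""
    max_lag = int(round(max_lag_s / dt))
    lags = range(1, max_lag + 1)
    corrs = [np.corrcoef(s[:-k], correction[k:])[0, 1] for k in lags]
    best = int(np.argmax(corrs))
    return lags[best] * dt, corrs[best]
```

With a suitable λ, the lag of the correlation peak would correspond to the reaction time discussed in the abstract.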

  13. The anti-vaccination movement and resistance to allergen-immunotherapy: a guide for clinical allergists

    Directory of Open Access Journals (Sweden)

    Behrmann Jason

    2010-09-01

    Full Text Available Abstract Despite over a century of clinical use and a well-documented record of efficacy and safety, a growing minority in society questions the validity of vaccination and fear that this common public health intervention is the root-cause of severe health problems. This article questions whether growing public anti-vaccine sentiments might have the potential to spill-over into other therapies distinct from vaccination, namely allergen-immunotherapy. Allergen-immunotherapy shares certain medical vernacular with vaccination (e.g., allergy shots, allergy vaccines, and thus may become "guilty by association" due to these similarities. Indeed, this article demonstrates that anti-vaccine websites have begun unduly discrediting this allergy treatment regimen. Following an explanation of the anti-vaccine movement, the article aims to provide guidance on how clinicians can respond to patient fears towards allergen-immunotherapy in the clinical setting. This guide focuses on the provision of reliable information to patients in order to dispel misconceived associations between vaccination and allergen-immunotherapy, and the discussion of the risks and benefits of both therapies in order to assist patients in making autonomous decisions about their choice of allergy treatment.

  14. Teach yourself visually Photoshop CC

    CERN Document Server

    Wooldridge, Mike

    2013-01-01

    Get savvy with the newest features and enhancements of Photoshop CC The newest version of Photoshop boasts enhanced and new features that afford you some amazing and creative ways to create images with impact, and this popular guide gets visual learners up to speed quickly. Packed with colorful screen shots that illustrate the step-by-step instructions, this visual guide is perfect for Photoshop newcomers as well as experienced users who are looking for some beginning to intermediate-level techniques to give their projects the "wow" factor! Veteran and bestselling authors Mik

  15. Astronomy a visual guide

    CERN Document Server

    Garlick, Mark A

    2004-01-01

    Space has fascinated man and challenged scientists for centuries and astronomy is the oldest and one of the most dynamic of the sciences. Here is a book that will stimulate your curiosity and feed your imagination. Detailed and fascinating text is clearly and richly illustrated with fabulous, vibrant photographs and diagrams. This is a comprehensive guide to understanding and observing the night sky, from distant stars and galaxies to our neighbouring planets; from comets to shooting stars; from eclipses to black holes. With details of the latest space probes, a series of monthly sky maps to provide guidance for the amateur observer and the latest photos from space, this book brings the beauty and wonder of our universe into your living room and will have you reaching for the telescope!

  16. Modeling the shape hierarchy for visually guided grasping

    CSIR Research Space (South Africa)

    Rezai, O

    2014-10-01

    Full Text Available The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. We modeled shape tuning in visual AIP neurons and its relationship with curvature and gradient...

  17. Exploring eye movements in patients with glaucoma when viewing a driving scene.

    Directory of Open Access Journals (Sweden)

    David P Crabb

    Full Text Available BACKGROUND: Glaucoma is a progressive eye disease and a leading cause of visual disability. Automated assessment of the visual field determines the different stages in the disease process: it would be desirable to link these measurements taken in the clinic with patients' actual function, or establish if patients compensate for their restricted field of view when performing everyday tasks. Hence, this study investigated eye movements in glaucomatous patients when viewing driving scenes in a hazard perception test (HPT). METHODOLOGY/PRINCIPAL FINDINGS: The HPT is a component of the UK driving licence test consisting of a series of short film clips of various traffic scenes viewed from the driver's perspective, each containing hazardous situations that require the camera car to change direction or slow down. Data from nine glaucomatous patients with binocular visual field defects and ten age-matched control subjects were considered (all experienced drivers). Each subject viewed 26 different films with eye movements simultaneously monitored by an eye tracker. Computer software was purpose written to pre-process the data, co-register it to the film clips, and quantify eye movements and point-of-regard (using a dynamic bivariate contour ellipse analysis). On average, and across all HPT films, patients exhibited different eye movement characteristics from those of controls, making, for example, significantly more saccades (P<0.001; 95% confidence interval for mean increase: 9.2 to 22.4%). Whilst the average region of 'point-of-regard' of the patients did not differ significantly from the controls, there were revealing cases where patients failed to see a hazard in relation to their binocular visual field defect. CONCLUSIONS/SIGNIFICANCE: Characteristics of eye movement patterns in patients with bilateral glaucoma can differ significantly from age-matched controls when viewing a traffic scene. Further studies of eye movements made by glaucomatous patients could
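
The bivariate contour ellipse used here to quantify the spread of point-of-regard has a standard closed form: the area of the ellipse enclosing a proportion p of gaze samples is 2kπσxσy√(1−ρ²), with k = −ln(1−p). A hedged static sketch (function name and the p default are assumptions; the study's dynamic variant additionally tracks this quantity over time):

```python
import numpy as np

def bcea(x, y, p=0.682):
    """Bivariate contour ellipse area: area of the ellipse that
    encloses a proportion p of the gaze samples (x, y)."""
    k = -np.log(1.0 - p)                    # chi-square scaling for p
    sx, sy = np.std(x, ddof=1), np.std(y, ddof=1)
    rho = np.corrcoef(x, y)[0, 1]           # horizontal-vertical correlation
    return 2.0 * k * np.pi * sx * sy * np.sqrt(1.0 - rho ** 2)
```

A larger BCEA indicates gaze spread over a wider region, which is one way to compare scanning behavior between patient and control groups.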

  18. Differences in Sequential Eye Movement Behavior between Taiwanese and American Viewers

    Directory of Open Access Journals (Sweden)

    Yen Ju Lee

    2016-05-01

    Full Text Available Knowledge of how information is sought in the visual world is useful for predicting and simulating human behavior. Taiwanese participants and American participants were instructed to judge the facial expression of a focal face that was flanked horizontally by other faces while their eye movements were monitored. The Taiwanese participants distributed their eye fixations more widely than American participants, started to look away from the focal face earlier than American participants, and spent a higher percentage of time looking at the flanking faces. Eye movement transition matrices also provided evidence that Taiwanese participants continually and systematically shifted gaze between focal and flanking faces. Eye movement patterns were less systematic and less prevalent in American participants. This suggests that the two cultures utilized different attention allocation strategies. The results highlight the importance of determining sequential eye movement statistics in cross-cultural research on the utilization of visual context.
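
A first-order eye movement transition matrix of the kind analyzed here counts how often a fixation in one area of interest (AOI) is followed by a fixation in another, then normalizes each row into transition probabilities. A rough sketch (the AOI coding and function name are illustrative, not the authors' implementation):

```python
import numpy as np

def transition_matrix(aoi_sequence, n_aois):
    """Row-normalized first-order transition matrix: entry [i, j] is
    the probability that a fixation in AOI i is followed by AOI j."""
    counts = np.zeros((n_aois, n_aois))
    for a, b in zip(aoi_sequence[:-1], aoi_sequence[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Avoid division by zero for AOIs that are never fixated
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)
```

With a two-AOI coding (e.g., 0 = focal face, 1 = flanking faces), the systematic back-and-forth gaze shifts described above would show up as large off-diagonal probabilities.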

  19. Python data visualization cookbook

    CERN Document Server

    Milovanovic, Igor

    2013-01-01

    This book is written in a Cookbook style targeted towards an advanced audience. It covers the advanced topics of data visualization in Python. Python Data Visualization Cookbook is for developers who already know about Python programming in general. If you have heard about data visualization but you don't know where to start, then this book will guide you from the start and help you understand data, data formats, data visualization, and how to use Python to visualize data. You will need to know some general programming concepts, and any kind of programming experience will be helpful, but the co

  20. Data points visualization that means something

    CERN Document Server

    Yau, Nathan

    2013-01-01

    A fresh look at visualization from the author of Visualize This Whether it's statistical charts, geographic maps, or the snappy graphical statistics you see on your favorite news sites, the art of data graphics or visualization is fast becoming a movement of its own. In Data Points: Visualization That Means Something, author Nathan Yau presents an intriguing complement to his bestseller Visualize This, this time focusing on the graphics side of data analysis. Using examples from art, design, business, statistics, cartography, and online media, he explores both

  1. Shared periodic performer movements coordinate interactions in duo improvisations

    Science.gov (United States)

    Jakubowski, Kelly; Moran, Nikki; Keller, Peter E.

    2018-01-01

    Human interaction involves the exchange of temporally coordinated, multimodal cues. Our work focused on interaction in the visual domain, using music performance as a case for analysis due to its temporally diverse and hierarchical structures. We made use of two improvising duo datasets—(i) performances of a jazz standard with a regular pulse and (ii) non-pulsed, free improvisations—to investigate whether human judgements of moments of interaction between co-performers are influenced by body movement coordination at multiple timescales. Bouts of interaction in the performances were manually annotated by experts and the performers’ movements were quantified using computer vision techniques. The annotated interaction bouts were then predicted using several quantitative movement and audio features. Over 80% of the interaction bouts were successfully predicted by a broadband measure of the energy of the cross-wavelet transform of the co-performers’ movements in non-pulsed duos. A more complex model, with multiple predictors that captured more specific, interacting features of the movements, was needed to explain a significant amount of variance in the pulsed duos. The methods developed here have key implications for future work on measuring visual coordination in musical ensemble performances, and can be easily adapted to other musical contexts, ensemble types and traditions. PMID:29515867

  2. Staging Visual Methods

    DEFF Research Database (Denmark)

    Flensborg, Ingelise

    2009-01-01

    A visual methodological approach to exploring postures and movements in young children's communication with art. How do we translate bodily postures and movements into methodological categories to access data of the interactive processes? These issues will be discussed through video materials...

  3. Binocular eye movement control and motion perception: what is being tracked?

    Science.gov (United States)

    van der Steen, Johannes; Dits, Joyce

    2012-10-19

    We investigated under what conditions humans can make independent slow phase eye movements. The ability to make independent movements of the two eyes is generally attributed to a few specialized lateral-eyed animal species, for example chameleons. In our study, we showed that humans can also move the eyes in different directions. To maintain binocular retinal correspondence, independent slow phase movements of each eye are produced. We used the scleral search coil method to measure binocular eye movements in response to dichoptically viewed visual stimuli oscillating in orthogonal directions. Correlated stimuli led to orthogonal slow eye movements, while the binocularly perceived motion was the vector sum of the motion presented to each eye. The importance of binocular fusion for the independence of the movements of the two eyes was investigated with anti-correlated stimuli. The global motion pattern of anti-correlated dichoptic stimuli was perceived as an oblique oscillatory motion, and resulted in a conjugate oblique motion of the eyes. We propose that the ability to make independent slow phase eye movements in humans is used to maintain binocular retinal correspondence. Eye-of-origin and binocular information are used during the processing of binocular visual information, and it is decided at an early stage whether binocular or monocular motion information and independent slow phase eye movements of each eye are produced during binocular tracking.

  4. "Multisensory brand search: How the meaning of sounds guides consumers' visual attention": Correction to Knoeferle et al. (2016).

    Science.gov (United States)

    2017-03-01

    Reports an error in "Multisensory brand search: How the meaning of sounds guides consumers' visual attention" by Klemens M. Knoeferle, Pia Knoeferle, Carlos Velasco and Charles Spence ( Journal of Experimental Psychology: Applied , 2016[Jun], Vol 22[2], 196-210). In the article, under Experiment 2, Design and Stimuli, the set number of target products and visual distractors reported in the second paragraph should be 20 and 13, respectively: "On each trial, the 16 products shown in the display were randomly selected from a set of 20 products belonging to different categories. Out of the set of 20 products, seven were potential targets, whereas the other 13 were used as visual distractors only throughout the experiment (since they were not linked to specific usage or consumption sounds)." Consequently, Appendix A in the supplemental materials has been updated. (The following abstract of the original article appeared in record 2016-28876-002.) Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. 
Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short

  5. Excessive sensitivity to uncertain visual input in L-dopa induced dyskinesias in Parkinson’s disease: further implications for cerebellar involvement

    Directory of Open Access Journals (Sweden)

    James Stevenson

    2014-02-01

    Full Text Available When faced with visual uncertainty during motor performance, humans rely more on predictive forward models and proprioception and attribute less importance to the ambiguous visual feedback. Though disrupted predictive control is typical of patients with cerebellar disease, sensorimotor deficits associated with the involuntary and often unconscious nature of L-dopa-induced dyskinesias in Parkinson’s disease (PD) suggest dyskinetic subjects may also demonstrate impaired predictive motor control. Methods: We investigated the motor performance of 9 dyskinetic and 10 non-dyskinetic PD subjects on and off L-dopa, and of 10 age-matched control subjects, during a large-amplitude, overlearned, visually-guided tracking task. Ambiguous visual feedback was introduced by adding ‘jitter’ to a moving target that followed a Lissajous pattern. Root mean square (RMS) tracking error was calculated, and ANOVA, robust multivariate linear regression, and linear dynamical system analyses were used to determine the contribution of speed and ambiguity to tracking performance. Results: Increasing target ambiguity and speed contributed significantly more to the RMS error of dyskinetic subjects off medication. L-dopa improved the RMS tracking performance of both PD groups. At higher speeds, controls and PDs without dyskinesia were able to effectively de-weight ambiguous visual information. Conclusions: PDs’ visually-guided motor performance degrades with visual jitter and speed of movement to a greater degree than age-matched controls. However, there are fundamental differences between PDs with and without dyskinesia: subjects without dyskinesia are generally slow and less responsive to dynamic changes in motor task requirements, but in PDs with dyskinesia there was a trade-off between overall performance and inappropriate reliance on ambiguous visual feedback. This is likely associated with functional changes in posterior parietal-ponto-cerebellar pathways.
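
The RMS tracking error reported here is the root-mean-square Euclidean distance between the cursor and the moving Lissajous target. A minimal sketch of both pieces (the amplitude, frequency, and phase parameters are illustrative assumptions, not the study's actual task settings):

```python
import numpy as np

def lissajous(t, ax=1.0, ay=1.0, fx=1.0, fy=2.0, phase=np.pi / 2):
    """2-D Lissajous target trajectory sampled at times t."""
    return np.column_stack([ax * np.sin(2 * np.pi * fx * t + phase),
                            ay * np.sin(2 * np.pi * fy * t)])

def rms_tracking_error(cursor, target):
    """Root-mean-square Euclidean distance between cursor and target,
    both given as (n_samples, 2) position arrays."""
    d = np.linalg.norm(cursor - target, axis=1)
    return np.sqrt(np.mean(d ** 2))
```

In the study's design, 'jitter' would be modeled as noise added to the target positions before presentation, while the error is still computed against the underlying noiseless trajectory.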

  6. Where to attend next: guiding refreshing of visual, spatial, and verbal representations in working memory.

    Science.gov (United States)

    Souza, Alessandra S; Vergauwe, Evie; Oberauer, Klaus

    2018-04-23

    One of the functions that attention may serve in working memory (WM) is boosting information accessibility, a mechanism known as attentional refreshing. Refreshing is assumed to be a domain-general process operating on visual, spatial, and verbal representations alike. So far, few studies have directly manipulated refreshing of individual WM representations to measure the WM benefits of refreshing. Recently, a guided-refreshing method was developed, which consists of presenting cues during the retention interval of a WM task to instruct people to refresh (i.e., attend to) the cued items. Using a continuous-color reconstruction task, previous studies demonstrated that the error in reporting a color varies linearly with the frequency with which it was refreshed. Here, we extend this approach to assess the WM benefits of refreshing different representation types, from colors to spatial locations and words. Across six experiments, we show that refreshing frequency modulates performance in all stimulus domains in accordance with the tenet that refreshing is a domain-general process in WM. The benefits of refreshing were, however, larger for visual-spatial than verbal materials. © 2018 New York Academy of Sciences.

  7. Data visualization with D3.js cookbook

    CERN Document Server

    Zhu, Nick Qi

    2013-01-01

    Packed with practical recipes, this is a step-by-step guide to learning data visualization with D3 with the help of detailed illustrations and code samples.If you are a developer familiar with HTML, CSS, and JavaScript, and you wish to get the most out of D3, then this book is for you. This book can also serve as a desktop quick-reference guide for experienced data visualization developers.

  8. Differences in visual attention between those who correctly and incorrectly answer physics problems

    Directory of Open Access Journals (Sweden)

    N. Sanjay Rebello

    2012-05-01

    Full Text Available This study investigated how visual attention differed between those who correctly versus incorrectly answered introductory physics problems. We recorded eye movements of 24 individuals on six different conceptual physics problems where the necessary information to solve the problem was contained in a diagram. The problems also contained areas consistent with a novicelike response and areas of high perceptual salience. Participants ranged from those who had only taken one high school physics course to those who had completed a Physics Ph.D. We found that participants who answered correctly spent a higher percentage of time looking at the relevant areas of the diagram, and those who answered incorrectly spent a higher percentage of time looking in areas of the diagram consistent with a novicelike answer. Thus, when solving physics problems, top-down processing plays a key role in guiding visual selective attention either to thematically relevant areas or novicelike areas depending on the accuracy of a student’s physics knowledge. This result has implications for the use of visual cues to redirect individuals’ attention to relevant portions of the diagrams and may potentially influence the way they reason about these problems.
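
The key measure in this study, the percentage of total fixation time spent in each region of the diagram, can be computed from fixation positions, durations, and rectangular areas of interest. A hedged sketch (the AOI names and the fixation tuple format are assumptions, not the authors' implementation):

```python
def dwell_percentages(fixations, aois):
    """Percentage of total fixation duration spent inside each named
    rectangular AOI, given fixations as (x, y, duration) tuples and
    AOIs as (x_min, y_min, x_max, y_max) rectangles."""
    totals = {name: 0.0 for name in aois}
    grand_total = 0.0
    for x, y, duration in fixations:
        grand_total += duration
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += duration
    return {name: 100.0 * t / grand_total for name, t in totals.items()}
```

Comparing these percentages between AOIs coded as "thematically relevant" and "novicelike" is what distinguishes correct from incorrect solvers in the analysis above.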

  9. Web-based Visual Analytics for Extreme Scale Climate Science

    Energy Technology Data Exchange (ETDEWEB)

    Steed, Chad A [ORNL; Evans, Katherine J [ORNL; Harney, John F [ORNL; Jewell, Brian C [ORNL; Shipman, Galen M [ORNL; Smith, Brian E [ORNL; Thornton, Peter E [ORNL; Williams, Dean N. [Lawrence Livermore National Laboratory (LLNL)

    2014-01-01

    In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.
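The multi-scale representation idea mentioned above can be sketched as server-side aggregation of a simulation time series, so a Web client receives a cheap coarse overview first and requests finer levels on demand. This is an illustration of the general technique, not the ORNL framework's actual API:

```python
# Illustrative sketch (not the framework's code): multi-scale aggregation
# of a time series by averaging non-overlapping windows.

def aggregate(series, factor):
    """Downsample by averaging non-overlapping windows of `factor` samples."""
    return [sum(series[i:i + factor]) / len(series[i:i + factor])
            for i in range(0, len(series), factor)]

def multiscale(series, factors=(1, 4, 16)):
    """Map each aggregation factor to its downsampled series;
    coarser levels are cheaper to transfer to the browser."""
    return {f: aggregate(series, f) for f in factors}

# Hypothetical daily values from a land-model output variable.
daily = [float(i % 10) for i in range(32)]
levels = multiscale(daily)
print({f: len(s) for f, s in levels.items()})  # coarser levels -> fewer points
```

A RESTful endpoint would then expose one URL per aggregation level, keeping raw data on the HPC side.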

  10. The Competency-Based Movement in Student Affairs: Implications for Curriculum and Professional Development

    Science.gov (United States)

    Eaton, Paul William

    2016-01-01

    This article examines the limitations and possibilities of the emerging competency-based movement in student affairs. Using complexity theory and postmodern educational theory as guiding frameworks, examination of the competency-based movement will raise questions about overapplication of competencies in graduate preparation programs and…

  11. Honeybees as a model for the study of visually guided flight, navigation, and biologically inspired robotics.

    Science.gov (United States)

    Srinivasan, Mandyam V

    2011-04-01

    Research over the past century has revealed the impressive capacities of the honeybee, Apis mellifera, in relation to visual perception, flight guidance, navigation, and learning and memory. These observations, coupled with the relative ease with which these creatures can be trained, and the relative simplicity of their nervous systems, have made honeybees an attractive model in which to pursue general principles of sensorimotor function in a variety of contexts, many of which pertain not just to honeybees, but several other animal species, including humans. This review begins by describing the principles of visual guidance that underlie perception of the world in three dimensions, obstacle avoidance, control of flight speed, and orchestrating smooth landings. We then consider how navigation over long distances is accomplished, with particular reference to how bees use information from the celestial compass to determine their flight bearing, and information from the movement of the environment in their eyes to gauge how far they have flown. Finally, we illustrate how some of the principles gleaned from these studies are now being used to design novel, biologically inspired algorithms for the guidance of unmanned aerial vehicles.

  12. Visual acuity and visual skills in Malaysian children with learning disabilities

    Directory of Open Access Journals (Sweden)

    Muzaliha MN

    2012-09-01

    Full Text Available Mohd-Nor Muzaliha,1 Buang Nurhamiza,1 Adil Hussein,1 Abdul-Rani Norabibas,1 Jaafar Mohd-Hisham-Basrun,1 Abdullah Sarimah,2 Seo-Wei Leo,3 Ismail Shatriah1 1Department of Ophthalmology, 2Biostatistics and Research Methodology Unit, School of Medical Sciences, Universiti Sains Malaysia, Kelantan, Malaysia; 3Paediatric Ophthalmology and Strabismus Unit, Department of Ophthalmology, Tan Tock Seng Hospital, Singapore. Background: There are limited data in the literature concerning the visual status and skills of children with learning disabilities, particularly within the Asian population. This study aimed to determine visual acuity and visual skills in children with learning disabilities in primary schools within the suburban Kota Bharu district in Malaysia. Methods: We examined 1010 children with learning disabilities, aged 8–12 years, from 40 primary schools in the Kota Bharu district, Malaysia, from January 2009 to March 2010. These children were identified based on their performance in a screening test known as the Early Intervention Class for Reading and Writing Screening Test conducted by the Ministry of Education, Malaysia. Complete ocular examinations and visual skills assessment included near point of convergence, amplitude of accommodation, accommodative facility, convergence break and recovery, divergence break and recovery, and developmental eye movement tests for all subjects. Results: A total of 4.8% of students had visual acuity worse than 6/12 (20/40), 14.0% had convergence insufficiency, 28.3% displayed poor accommodative amplitude, and 26.0% showed signs of accommodative infacility. A total of 12.1% of the students had poor convergence break, 45.7% displayed poor convergence recovery, 37.4% showed poor divergence break, and 66.3% were noted to have poor divergence recovery. The mean horizontal developmental eye movement was significantly prolonged. Conclusion: Although their visual acuity was satisfactory, nearly 30% of the

  13. Magnetic-resonance-guided biopsy of focal liver lesions

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Ethan A. [University of Michigan Health System, Section of Pediatric Radiology, C.S. Mott Children's Hospital, Department of Radiology, Ann Arbor, MI (United States); Grove, Jason J. [University of Michigan Health System, Division of Interventional Radiology, C.S. Mott Children's Hospital, Department of Radiology, Ann Arbor, MI (United States); Der Spek, Abraham F.L.V. [University of Michigan Health System, Department of Anesthesiology, C.S. Mott Children's Hospital, Ann Arbor, MI (United States); Jarboe, Marcus D. [University of Michigan Health System, Division of Interventional Radiology, C.S. Mott Children's Hospital, Department of Radiology, Ann Arbor, MI (United States); University of Michigan Health System, Section of Pediatric Surgery, C.S. Mott Children's Hospital, Department of Surgery, Ann Arbor, MI (United States)

    2017-05-15

    Image-guided biopsy techniques are widely used in clinical practice. Commonly used methods employ either ultrasound (US) or computed tomography (CT) for image guidance. In certain patients, US or CT guidance may be suboptimal, or even impossible, because of artifacts, suboptimal lesion visualization, or both. We recently began performing magnetic resonance (MR)-guided biopsy of focal liver lesions in select pediatric patients with lesions that are not well visualized by US or CT. This report describes our experience performing MR-guided biopsy of focal liver lesions, with case examples to illustrate innovative techniques and novel aspects of these procedures. (orig.)

  14. Coronary angioscopy: a monorail angioscope with movable guide wire.

    Science.gov (United States)

    Nanto, S; Ohara, T; Mishima, M; Hirayama, A; Komamura, K; Matsumura, Y; Kodama, K

    1991-03-01

    A new angioscope was devised for easier visualization of the coronary artery. At its tip, the angioscope (Olympus), with an outer diameter of 0.8 mm, had a metal lumen through which a 0.014-in steerable guide wire passed. Using an 8F guiding catheter and a guide wire, it was introduced into the distal coronary artery. With injection of warmed saline through the guiding catheter, the coronary segments were visualized. Of the 70 attempted vessels (32 left anterior descending [LAD], 10 right coronary [RCA], 28 left circumflex [LCX]) from 48 patients, 60 vessels (86%) were successfully examined. Of the 22 patients who underwent attempted examination of both the LAD and LCX, both coronary arteries were visualized in 19 (86%). Proximal to the lesion, 40 patients had a diagonal branch or an obtuse marginal branch; in 34 of these patients (85%) the angioscope was inserted beyond these branches. Of 12 very tortuous vessels, eight (67%) were examined. In conclusion, the new monorail coronary angioscope with a movable guide wire is useful for examining stenotic lesions of the coronary artery.

  15. Visual evoked responses during standing and walking

    Directory of Open Access Journals (Sweden)

    Klaus Gramann

    2010-10-01

    Full Text Available Human cognition has been shaped both by our body structure and by its complex interactions with its environment. Our cognition is thus inextricably linked to our own and others' motor behavior. To model brain activity associated with natural cognition, we propose recording the concurrent brain dynamics and body movements of human subjects performing normal actions. Here we tested the feasibility of such a mobile brain/body (MoBI) imaging approach by recording high-density electroencephalographic (EEG) activity and body movements of subjects standing or walking on a treadmill while performing a visual oddball response task. Independent component analysis (ICA) of the EEG data revealed visual event-related potentials (ERPs) that did not differ across the standing, slow-walking, and fast-walking conditions, demonstrating the viability of recording brain activity accompanying cognitive processes during whole-body movement. Non-invasive and relatively low-cost MoBI studies of normal, motivated actions might improve understanding of interactions between brain and body dynamics, leading to more complete biological models of cognition.
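The core analysis step named above, extracting event-related potentials, amounts to averaging signal epochs time-locked to event markers. A toy sketch on a plain Python list (no EEG toolbox assumed; real pipelines also filter, reject artifacts, and run ICA):

```python
# Minimal ERP-averaging sketch: average epochs of a single "channel"
# around event markers. Synthetic data; not the study's pipeline.

def erp_average(signal, event_samples, pre, post):
    """Average signal epochs spanning event-pre to event+post samples."""
    epochs = [signal[e - pre:e + post] for e in event_samples
              if e - pre >= 0 and e + post <= len(signal)]
    n = len(epochs)
    return [sum(vals) / n for vals in zip(*epochs)]

# Synthetic channel: a +1.0 deflection two samples after each event.
signal = [0.0] * 100
events = [20, 50, 80]
for e in events:
    signal[e + 2] = 1.0

erp = erp_average(signal, events, pre=5, post=10)
print(erp[7])  # epoch index 7 corresponds to event + 2
```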

  16. Visual-Motor Learning Using Haptic Devices: How Best to Train Surgeons?

    Directory of Open Access Journals (Sweden)

    Oscar Giles

    2012-05-01

    Full Text Available Laparoscopic surgery has revolutionised medicine but requires surgeons to learn new visual-motor mappings. The optimal method for training surgeons is unknown. For instance, it may be easier to learn planar movements when training is constrained to a plane, since this forces the surgeon to develop an appropriate perceptual-motor map. In contrast, allowing the surgeon to move without constraints could improve performance because this provides greater experience of the control dynamics of the device. In order to test between these alternatives, we created an experimental tool that connected a commercially available robotic arm with specialised software that presents visual stimuli and objectively records kinematics. Participants were given the task of generating a series of aiming movements to move a visual cursor to a series of targets. The actions required movement along a horizontal plane, whereas the visual display was a screen positioned perpendicular to this plane (i.e., vertically). One group (n=8) received training where the force field constrained their movement to the correct plane of action, whilst a second group (n=8) trained without constraints. On test trials (after training) the unconstrained group showed better performance, as indexed by reduced movement duration and reduced path length. These results show that participants who explored the entire action space had an advantage, which highlights the importance of experiencing the full dynamics of a control device and the action space when learning a new visual-motor mapping.
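The two performance measures used above, movement duration and path length, are simple to compute from a recorded cursor trajectory. A sketch, where the (t, x, y) sample format is an assumption rather than the study's logging format:

```python
# Hedged sketch of the two kinematic measures: duration and path length
# of a trajectory recorded as (t_seconds, x, y) samples.
import math

def path_length(samples):
    """Sum of Euclidean distances between consecutive (x, y) positions."""
    return sum(math.dist(samples[i][1:], samples[i + 1][1:])
               for i in range(len(samples) - 1))

def movement_duration(samples):
    """Time from first to last sample."""
    return samples[-1][0] - samples[0][0]

traj = [(0.0, 0, 0), (0.1, 3, 4), (0.2, 3, 4)]
print(path_length(traj), movement_duration(traj))
```

Shorter duration and shorter path for the same target distance indicate more efficient aiming.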

  17. Language-driven anticipatory eye movements in virtual reality.

    Science.gov (United States)

    Eichert, Nicole; Peeters, David; Hagoort, Peter

    2018-06-01

    Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.

  18. Object-based processes in the planning of goal-directed hand movements

    NARCIS (Netherlands)

    Bekkering, H.; Pratt, J.

    2004-01-01

    Theories in motor control suggest that the parameters specified during the planning of goal-directed hand movements to a visual target are defined in spatial parameters like direction and amplitude. Recent findings in the visual attention literature, however, argue widely for early object-based

  19. Frontal eye field sends delay activity related to movement, memory, and vision to the superior colliculus.

    Science.gov (United States)

    Sommer, M A; Wurtz, R H

    2001-04-01

    Many neurons within prefrontal cortex exhibit a tonic discharge between visual stimulation and motor response. This delay activity may contribute to movement, memory, and vision. We studied delay activity sent from the frontal eye field (FEF) in prefrontal cortex to the superior colliculus (SC). We evaluated whether this efferent delay activity was related to movement, memory, or vision, to establish its possible functions. Using antidromic stimulation, we identified 66 FEF neurons projecting to the SC and we recorded from them while monkeys performed a Go/Nogo task. Early in every trial, a monkey was instructed as to whether it would have to make a saccade (Go) or not (Nogo) to a target location, which permitted identification of delay activity related to movement. In half of the trials (memory trials), the target disappeared, which permitted identification of delay activity related to memory. In the remaining trials (visual trials), the target remained visible, which permitted identification of delay activity related to vision. We found that 77% (51/66) of the FEF output neurons had delay activity. In 53% (27/51) of these neurons, delay activity was modulated by Go/Nogo instructions. The modulation preceded saccades made into only part of the visual field, indicating that the modulation was movement-related. In some neurons, delay activity was modulated by Go/Nogo instructions in both memory and visual trials and seemed to represent where to move in general. In other neurons, delay activity was modulated by Go/Nogo instructions only in memory trials, which suggested that it was a correlate of working memory, or only in visual trials, which suggested that it was a correlate of visual attention. In 47% (24/51) of FEF output neurons, delay activity was unaffected by Go/Nogo instructions, which indicated that the activity was related to the visual stimulus. In some of these neurons, delay activity occurred in both memory and visual trials and seemed to represent a

  20. [Cortical potentials evoked to response to a signal to make a memory-guided saccade].

    Science.gov (United States)

    Slavutskaia, M V; Moiseeva, V V; Shul'govskiĭ, V V

    2010-01-01

    The difference in parameters of visually guided and memory-guided saccades was shown. The increase in memory-guided saccade latency compared with that of visually guided saccades may indicate slower saccadic programming when information must be retrieved from memory. Comparison of the parameters and topography of the evoked-potential components N1 and P1 in response to the signal to make a memory- or visually guided saccade suggests that the early stage of saccade programming, associated with spatial information processing, is performed predominantly by a top-down attention mechanism before a memory-guided saccade and by a bottom-up mechanism before a visually guided saccade. The findings show that the increase in the latency of memory-guided saccades is connected with decision making at the central stage of saccade programming. We propose that wave N2, which develops in the middle of the latent period of memory-guided saccades, correlates with this process. The topography and spatial dynamics of components N1, P1, and N2 indicate that memory-guided saccade programming is controlled by the frontal mediothalamic system of selective attention and left-hemispheric brain mechanisms of motor attention.


  2. Determination of mandibular border and functional movement protocols using an electromagnetic articulograph (EMA)

    Science.gov (United States)

    Fuentes, Ramon; Navarro, Pablo; Curiqueo, Aldo; Ottone, Nicolas E

    2015-01-01

    The electromagnetic articulograph (EMA) is a device that can collect movement data by positioning sensors at multiple points, measuring displacements of the structure in real time, as well as the acoustics and mechanics of speech using a microphone connected to the measurement system. The aim of this study is to describe protocols for the generation, measurement and visualization of mandibular border and functional movements in the three spatial planes (frontal, sagittal and horizontal) using the EMA. The EMA has transmitter coils that determine magnetic fields to collect information about movements from sensors located on different structures (tongue, palate, mouth, incisors, skin, etc.) and in every direction in an area of 300 mm. After measurement with the EMA, the information is transferred to a computer and read with the Visartico software to visualize the recording of the mandibular movements registered by the EMA. The sensors placed in the space between the three axes XYZ are observed, and then the plots created from the mandibular movements included in the corresponding protocol can be visualized, enabling interpretation of these data. Four protocols for the obtaining of images of the opening and closing mandibular movements were defined and developed, as well as border movements in the frontal, sagittal and horizontal planes, managing to accurately reproduce Posselt’s diagram and Gothic arch on the latter two axes. Measurements with the EMA will allow more exact data to be collected in relation to the mandibular clinical physiology and morphology, which will permit more accurate diagnoses and application of more precise and adjusted treatments in the future. PMID:26884903
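From XYZ sensor samples like those the EMA records, a measure such as maximum mandibular opening reduces to a largest-distance computation between paired sensor positions. An illustrative sketch only; Visartico and the EMA export format are not modeled, and the values are invented:

```python
# Illustrative sketch: maximum mandibular opening from 3-D positions of a
# lower-incisor sensor relative to a reference sensor over one open-close
# cycle. Sample values are hypothetical, in millimetres.
import math

def max_opening(incisor_xyz, reference_xyz):
    """Largest Euclidean distance between paired (x, y, z) samples."""
    return max(math.dist(p, q) for p, q in zip(incisor_xyz, reference_xyz))

ref = [(0.0, 0.0, 0.0)] * 4
inc = [(0.0, 0.0, 0.0), (0.0, 10.0, 30.0), (0.0, 12.0, 40.0), (0.0, 5.0, 15.0)]
print(max_opening(inc, ref))
```

Tracing the same paired samples in the frontal, sagittal, and horizontal planes yields the border-movement plots (e.g., Posselt's diagram) described above.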

  3. The argumentative role of visual metaphor and visual antithesis in ‘fly-on-the-wall’ documentary

    NARCIS (Netherlands)

    Tseronis, A.; Forceville, C.; Grannetia, M.; Garssen, B.; Godden, D.; Mitchell, G.; Snoeck Henkemans, F.

    2015-01-01

    In this paper, we explore the argumentative role of visual metaphor and visual antithesis in the so-called 'fly-on-the-wall' documentary. In this subtype of documentary, which emphatically renounces voice-over narration, the filmmakers guide their viewers into reaching certain conclusions by making

  4. Movement Sonification: Audiovisual benefits on motor learning

    Directory of Open Access Journals (Sweden)

    Weber Andreas

    2011-12-01

    Full Text Available Processes of motor control and learning in sports, as well as in motor rehabilitation, are based on perceptual functions and emergent motor representations. Here a new method of movement sonification is described, designed to engage the auditory system more fully in motor perception and thereby enhance motor learning. Usually silent features of the cyclic movement pattern "indoor rowing" are sonified in real time to make them additionally available to the auditory system when executing the movement. Via real-time sonification, movement perception can be enhanced in terms of temporal precision and multi-channel integration. Beyond the contribution of a single perceptual channel to motor perception and motor representation, mechanisms of multisensory integration can also be addressed if movement sonification is configured adequately: multimodal motor representations, consisting of at least visual, auditory, and proprioceptive components, can be shaped subtly, resulting in more precise motor control and enhanced motor learning.
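At its simplest, real-time movement sonification is a parameter mapping: a kinematic quantity is converted continuously into a sound parameter. A sketch mapping instantaneous velocity to pitch; the ranges are arbitrary choices for illustration, not values from the rowing study:

```python
# Parameter-mapping sonification sketch: linearly map a movement velocity
# (m/s) onto a pitch (Hz), clamped to the velocity range. All constants
# are illustrative assumptions.

def velocity_to_pitch(v, v_min=0.0, v_max=2.0, f_min=220.0, f_max=880.0):
    """Linearly map velocity to frequency, clamping out-of-range input."""
    v = max(v_min, min(v_max, v))
    frac = (v - v_min) / (v_max - v_min)
    return f_min + frac * (f_max - f_min)

print(velocity_to_pitch(1.0))  # mid-range velocity -> mid-range pitch
```

In a live system this function would feed a synthesizer once per motion-capture frame.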

  5. Recess for Students with Visual Impairments

    Science.gov (United States)

    Lucas, Matthew D.

    2010-01-01

    During recess, the participation of a student with visual impairments in terms of movement can often be both challenging and rewarding for the student and general education teacher. This paper will address common characteristics of students with visual impairments and present basic solutions to improve the participation of these students in the…

  6. A method to evaluate visual ability in infants

    Directory of Open Access Journals (Sweden)

    Heloisa G.R. Gardon Gagliardo

    2004-06-01

    Full Text Available The purpose of this study is to introduce a method to evaluate visual functions in infants in the first three months of life. An adaptation of the Guide for the Assessment of Visual Ability in Infants (Gagliardo, 1997) was used. The instrument was a ring with string. A pilot study was implemented with 33 infants, selected according to the following criteria: neonates well enough to go home within two days of birth; 1 to 3 months of chronological age; monthly evaluation with no absence; subjects living in the Campinas/SP metropolitan area. In the first month we observed: visual fixation (93.9%); eye contact (90.9%); horizontal tracking (72.7%); inspects surroundings (97.0%). In the third month, we observed: inspects own hands (42.4%) and increased movements of arms (36.4%). This method allowed the evaluation of visual functions in infants according to chronological age. Alterations in this function will facilitate immediate referral to medical services for diagnosis.

  7. "The only way is up" : Location and movement in product packaging as predictors of sensorial impressions and brand identity

    NARCIS (Netherlands)

    van Rompay, Thomas J.L.; Fransen, M.L.; Borgelink, B.; Brassett, J.; McDonnell, J.; Malpass, M.; Hekkert, P.P.M.; Ludden, G.D.S.

    2012-01-01

    Based on embodiment research linking visual-spatial design parameters to symbolic meaning portrayal, this study investigates to what extent location of imagery on product packaging and visual devices portraying movement (i.e., an arrow indicating movement along an upward-headed or downward-headed

  8. Action recognition and movement direction discrimination tasks are associated with different adaptation patterns

    Directory of Open Access Journals (Sweden)

    Stephan eDe La Rosa

    2016-02-01

    Full Text Available The ability to discriminate between different actions is essential for action recognition and social interaction. Surprisingly, previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g. left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g. when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target visual processes specific to action and direction discrimination. In separate conditions, participants visually adapted to forward- and backward-moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action, but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms.

  9. GuideLiner™ as guide catheter extension for the unreachable mammary bypass graft.

    Science.gov (United States)

    Vishnevsky, Alec; Savage, Michael P; Fischman, David L

    2018-03-09

    Percutaneous coronary intervention (PCI) of mammary artery bypass grafts through a trans-radial (TR) approach can present unique challenges, including coaxial vessel engagement of the guiding catheter, adequate visualization of the target lesion, sufficient backup support for equipment delivery, and the ability to reach very distal lesions. The GuideLiner catheter, a rapid exchange monorail mother-in-daughter system, facilitates successful interventions in such challenging anatomy. We present a case of a patient undergoing PCI of a right internal mammary artery (RIMA) graft via TR access in whom the graft could not be engaged with any guiding catheter. Using a balloon tracking technique over a guidewire, a GuideLiner was placed as an extension of the guiding catheter and facilitated TR-PCI by overcoming technical challenges associated with difficult anatomy. © 2018 Wiley Periodicals, Inc.

  10. An interactive visualization tool for mobile objects

    Science.gov (United States)

    Kobayashi, Tetsuo

    Recent advancements in mobile devices---such as Global Positioning System (GPS), cellular phones, car navigation system, and radio-frequency identification (RFID)---have greatly influenced the nature and volume of data about individual-based movement in space and time. Due to the prevalence of mobile devices, vast amounts of mobile objects data are being produced and stored in databases, overwhelming the capacity of traditional spatial analytical methods. There is a growing need for discovering unexpected patterns, trends, and relationships that are hidden in the massive mobile objects data. Geographic visualization (GVis) and knowledge discovery in databases (KDD) are two major research fields that are associated with knowledge discovery and construction. Their major research challenges are the integration of GVis and KDD, enhancing the ability to handle large volume mobile objects data, and high interactivity between the computer and users of GVis and KDD tools. This dissertation proposes a visualization toolkit to enable highly interactive visual data exploration for mobile objects datasets. Vector algebraic representation and online analytical processing (OLAP) are utilized for managing and querying the mobile object data to accomplish high interactivity of the visualization tool. In addition, reconstructing trajectories at user-defined levels of temporal granularity with time aggregation methods allows exploration of the individual objects at different levels of movement generality. At a given level of generality, individual paths can be combined into synthetic summary paths based on three similarity measures, namely, locational similarity, directional similarity, and geometric similarity functions. A visualization toolkit based on the space-time cube concept exploits these functionalities to create a user-interactive environment for exploring mobile objects data. 
Furthermore, the characteristics of visualized trajectories are exported to be utilized for data
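One of the three similarity measures named above, locational similarity, can be sketched as the mean distance between two trajectories after resampling both to a common number of points. The resampling-by-index scheme and the parameters here are assumptions, not the dissertation's definitions:

```python
# Hedged sketch of a locational similarity function between two
# trajectories stored as lists of (x, y) points.
import math

def resample(traj, n):
    """Pick n points evenly spaced along the stored samples (by index)."""
    step = (len(traj) - 1) / (n - 1)
    return [traj[round(i * step)] for i in range(n)]

def locational_similarity(a, b, n=10):
    """Mean Euclidean distance between corresponding resampled points
    (smaller values = more similar locations)."""
    pa, pb = resample(a, n), resample(b, n)
    return sum(math.dist(p, q) for p, q in zip(pa, pb)) / n

t1 = [(float(i), 0.0) for i in range(20)]
t2 = [(float(i), 3.0) for i in range(20)]
print(locational_similarity(t1, t2))  # parallel tracks 3 units apart
```

Paths scoring below a threshold on such a measure could then be merged into one synthetic summary path for display in the space-time cube.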

  11. Self-Taught Visually-Guided Pointing for a Humanoid Robot

    National Research Council Canada - National Science Library

    Marjanovic, Matthew; Scassellati, Brian; Williamson, Matthew

    2006-01-01

    .... This task requires systems for learning saccade to visual targets, generating smooth arm trajectories, locating the arm in the visual field, and learning the map between gaze direction and correct...

  12. Integration of egocentric and allocentric information during memory-guided reaching to images of a natural environment

    Directory of Open Access Journals (Sweden)

    Katja eFiehler

    2014-08-01

    Full Text Available When interacting with our environment we generally make use of egocentric and allocentric object information by coding object positions relative to the observer or relative to the environment, respectively. Bayesian theories suggest that the brain integrates both sources of information optimally for perception and action. However, experimental evidence for egocentric and allocentric integration is sparse and has only been studied using abstract stimuli lacking ecological relevance. Here, we investigated the use of egocentric and allocentric information during memory-guided reaching to images of naturalistic scenes. Participants encoded a breakfast scene containing six objects on a table (local objects) and three objects in the environment (global objects). After a 2 s delay, a visual test scene reappeared for 1 s in which one local object was missing (the target) and, of the remaining objects, one, three, or five local objects or one of the global objects were shifted to the left or to the right. The offset of the test scene prompted participants to reach to the target as precisely as possible. Only local objects served as potential reach targets and thus were task-relevant. When shifting objects, we predicted accurate reaching if participants used only egocentric coding of object position, and systematic shifts of reach endpoints if allocentric information was used for movement planning. We found that reaching movements were largely affected by allocentric shifts, showing an increase in endpoint errors in the direction of object shifts with the number of local objects shifted. No effect occurred when one local or one global object was shifted. Our findings suggest that allocentric cues are indeed used by the brain for memory-guided reaching towards targets in naturalistic visual scenes. Moreover, the integration of egocentric and allocentric object information seems to depend on the extent of changes in the scene.
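The optimal integration the abstract alludes to is the standard minimum-variance combination of independent Gaussian cues: each estimate is weighted by its reliability (inverse variance). A sketch with invented numbers, not data from the study:

```python
# Reliability-weighted (Bayesian) cue combination sketch: egocentric and
# allocentric position estimates are averaged with weights inversely
# proportional to their variances. All values are illustrative.

def integrate(mu_ego, var_ego, mu_allo, var_allo):
    """Minimum-variance combination of two independent Gaussian cues."""
    w_ego = (1 / var_ego) / (1 / var_ego + 1 / var_allo)
    mu = w_ego * mu_ego + (1 - w_ego) * mu_allo
    var = 1 / (1 / var_ego + 1 / var_allo)  # combined variance is smaller
    return mu, var

mu, var = integrate(mu_ego=10.0, var_ego=1.0, mu_allo=14.0, var_allo=3.0)
print(mu, var)  # estimate pulled toward the more reliable (egocentric) cue
```

The endpoint shifts reported above are consistent with a nonzero allocentric weight in such a scheme.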

  13. Teach yourself visually Windows 8

    CERN Document Server

    McFedries, Paul

    2012-01-01

    A practical guide for visual learners eager to get started with Windows 8 If you learn more quickly when you can see how things are done, this Visual guide is the easiest way to get up and running on Windows 8. It covers more than 150 essential Windows tasks, using full-color screen shots and step-by-step instructions to show you just what to do. Learn your way around the interface and how to install programs, set up user accounts, play music and other media files, download photos from your digital camera, go online, set up and secure an e-mail account, and much more. The tried-and-true format

  14. Reflexive Learning through Visual Methods

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2014-01-01

What. This chapter concerns how visual methods and visual materials can support visually oriented, collaborative, and creative learning processes in education. The focus is on facilitation (guiding, teaching) with visual methods in learning processes that are designerly or involve design. Visual methods are exemplified through two university classroom cases about collaborative idea generation processes. The visual methods and materials in the cases are photo elicitation using photo cards, and modeling with LEGO Serious Play sets. Why. The goal is to encourage the reader, whether student or professional, to facilitate with visual methods in a critical, reflective, and experimental way. The chapter offers recommendations for facilitating with visual methods to support playful, emergent designerly processes. The chapter also has a critical, situated perspective. Where. This chapter offers case...

  15. Visual and Proprioceptive Cue Weighting in Children with Developmental Coordination Disorder, Autism Spectrum Disorder and Typical Development

    Directory of Open Access Journals (Sweden)

    L Miller

    2013-10-01

Full Text Available Accurate movement of the body and the perception of the body's position in space usually rely on both visual and proprioceptive cues. These cues are weighted differently depending on task, visual conditions and neurological factors. Children with Developmental Coordination Disorder (DCD) and often also children with Autism Spectrum Disorder (ASD) have movement deficits, and there is evidence that cue weightings may differ between these groups. It is often reported that ASD is linked to an increased reliance on proprioceptive information at the expense of visual information (Haswell et al., 2009; Gepner et al., 1995). The inverse appears to be true for DCD (Wann et al., 1998; Biancotto et al., 2011). I will report experiments comparing, for the first time, relative weightings of visual and proprioceptive information in children aged 8-14 with ASD, DCD and typical development. Children completed the Movement Assessment Battery for Children (MABC-II) to assess motor ability and a visual-proprioceptive matching task to assess relative cue weighting. Results from the movement battery provided evidence for movement deficits in ASD similar to those in DCD. Cue weightings in the matching task did not differentiate the clinical groups, however those children with ASD with relatively spared movement skills tended to weight visual cues less heavily than those with DCD-like movement deficits. These findings will be discussed with reference to previous DSM-IV diagnostic criteria and also relevant revisions in the DSM-V.

  16. AVS user's guide on the basis of practice

    International Nuclear Information System (INIS)

    Masuko, Kenji; Kato, Katsumi; Kume, Etsuo; Fujii, Minoru.

    1997-07-01

Special guides for the use of the visualization software AVS have been developed at the Japan Atomic Energy Research Institute (JAERI). The purpose of these guides is to help AVS users understand the software easily, because the original manuals are difficult for beginners. In this report, 'Transportation Evacuation Simulation' is taken up as an object of visualization, and the procedures for visualization and image recording with AVS are described. By working through this report, users can acquire the full series of procedures necessary to use AVS. (author)

  17. Act quickly, decide later: long-latency visual processing underlies perceptual decisions but not reflexive behavior

    NARCIS (Netherlands)

    Jolij, J.; Scholte, H.S.; van Gaal, S.; Hodgson, T.L.; Lamme, V.A.F.

    2011-01-01

    Humans largely guide their behavior by their visual representation of the world. Recent studies have shown that visual information can trigger behavior within 150 msec, suggesting that visually guided responses to external events, in fact, precede conscious awareness of those events. However, is

  18. Act quickly, decide later : long latency visual processing underlies perceptual decisions but not reflexive behaviour

    NARCIS (Netherlands)

    Jolij, Jacob; Scholte, H. Steven; van Gaal, Simon; Hodgson, Timothy L.; Lamme, Victor A. F.

    2011-01-01

    Humans largely guide their behavior by their visual representation of the world. Recent studies have shown that visual information can trigger behavior within 150 msec, suggesting that visually guided responses to external events, in fact, precede conscious awareness of those events. However, is

  19. Memory and visual search in naturalistic 2D and 3D environments.

    Science.gov (United States)

    Li, Chia-Ling; Aivar, M Pilar; Kit, Dmitry M; Tong, Matthew H; Hayhoe, Mary M

    2016-06-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D.

  20. Bending it like Beckham: how to visually fool the goalkeeper.

    Directory of Open Access Journals (Sweden)

    Joost C Dessing

    2010-10-01

Full Text Available As bending free-kicks become the norm in modern day soccer, the implications for goalkeepers have largely been ignored. Although it has been reported that poor sensitivity to visual acceleration makes it harder for expert goalkeepers to perceptually judge where curved free-kicks will cross the goal line, it is unknown how this affects the goalkeeper's actual movements. Here, an in-depth analysis of goalkeepers' hand movements in immersive, interactive virtual reality shows that they do not fully account for spin-induced lateral ball acceleration. Hand movements were found to be biased in the direction of initial ball heading, and for curved free-kicks this resulted in biases in a direction opposite to those necessary to save the free-kick. These movement errors leave less time to cover a now greater distance to stop the ball entering the goal. These and other details of the interceptive behaviour are explained using a simple mathematical model which shows how the goalkeeper controls his movements online with respect to the ball's current heading direction. Furthermore, our results and model suggest how visual landmarks, such as the goalposts in this instance, may constrain the extent of the movement biases. While it has previously been shown that humans can internalize the effects of gravitational acceleration, these results show that it is much more difficult for goalkeepers to account for spin-induced visual acceleration, which varies from situation to situation. The limited sensitivity of the human visual system for detecting acceleration suggests that curved free-kicks are an important goal-scoring opportunity in the game of soccer.

  1. The influence of the immediate visual context on incremental thematic role-assignment: evidence from eye-movements in depicted events.

    Science.gov (United States)

    Knoeferle, Pia; Crocker, Matthew W; Scheepers, Christoph; Pickering, Martin J

    2005-02-01

    Studies monitoring eye-movements in scenes containing entities have provided robust evidence for incremental reference resolution processes. This paper addresses the less studied question of whether depicted event scenes can affect processes of incremental thematic role-assignment. In Experiments 1 and 2, participants inspected agent-action-patient events while listening to German verb-second sentences with initial structural and role ambiguity. The experiments investigated the time course with which listeners could resolve this ambiguity by relating the verb to the depicted events. Such verb-mediated visual event information allowed early disambiguation on-line, as evidenced by anticipatory eye-movements to the appropriate agent/patient role filler. We replicated this finding while investigating the effects of intonation. Experiment 3 demonstrated that when the verb was sentence-final and thus did not establish early reference to the depicted events, linguistic cues alone enabled disambiguation before people encountered the verb. Our results reveal the on-line influence of depicted events on incremental thematic role-assignment and disambiguation of local structural and role ambiguity. In consequence, our findings require a notion of reference that includes actions and events in addition to entities (e.g. Semantics and Cognition, 1983), and argue for a theory of on-line sentence comprehension that exploits a rich inventory of semantic categories.

  2. Interpersonal Movement Synchrony Responds to High- and Low-Level Conversational Constraints

    Directory of Open Access Journals (Sweden)

    Alexandra Paxton

    2017-07-01

Full Text Available Much work on communication and joint action conceptualizes interaction as a dynamical system. Under this view, dynamic properties of interaction should be shaped by the context in which the interaction is taking place. Here we explore interpersonal movement coordination or synchrony—the degree to which individuals move in similar ways over time—as one such context-sensitive property. Studies of coordination have typically investigated how these dynamics are influenced by either high-level constraints (i.e., slow-changing factors) or low-level constraints (i.e., fast-changing factors like movement). Focusing on nonverbal communication behaviors during naturalistic conversation, we analyzed how interacting participants' head movement dynamics were shaped simultaneously by high-level constraints (i.e., conversation type; friendly conversations vs. arguments) and low-level constraints (i.e., perceptual stimuli; non-informative visual stimuli vs. informative visual stimuli). We found that high- and low-level constraints interacted non-additively to affect interpersonal movement dynamics, highlighting the context sensitivity of interaction and supporting the view of joint action as a complex adaptive system.

  3. Does motor expertise facilitate amplitude differentiation of lower limb-movements in an asymmetrical bipedal coordination task?

    Science.gov (United States)

    Roelofsen, Eefje G J; Brown, Derrick D; Nijhuis-van der Sanden, Maria W G; Staal, J Bart; Meulenbroek, Ruud G J

    2018-04-30

The motor system's natural tendency is to move the limbs over equal amplitudes, for example in walking. However, in many situations in which people must perform complex movements, a certain degree of amplitude differentiation of the limbs is required. Visual and haptic feedback have recently been shown to facilitate such independence of limb movements. However, it is unknown whether motor expertise moderates the extent to which individuals are able to differentiate the amplitudes of their limb movements while being supported with visual and haptic feedback. To answer this question, 14 pre-professional dancers were compared to 14 non-dancers on simultaneously generating a small displacement with one foot, and a larger one with the other foot, in four different feedback conditions. In two conditions, haptic guidance was offered, either in a passive or active mode. In the other two conditions, veridical and enhanced visual feedback were provided. Surprisingly, no group differences were found regarding the degree to which the visual or haptic feedback assisted the generation of the different target amplitudes of the feet (mean amplitude difference between the feet). The correlation between the displacements of the feet and the standard deviation of the continuous relative phase between the feet, reflecting the degree of independence of the foot movements, also failed to show between-group differences. Sample entropy measures, indicating the predictability of the foot movements, did show a group difference. In the haptically-assisted conditions, the dancers demonstrated more predictable coordination patterns than the non-dancers as reflected by lower sample entropy values, whereas the reverse was true in the visual-feedback conditions. The results demonstrate that motor expertise does not moderate the extent to which haptic tracking facilitates the differentiation of the amplitudes of the lower limb movements in an asymmetrical bipedal coordination task.
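Sample entropy, the predictability index used above, can be sketched in pure Python. The embedding dimension m = 2 and tolerance r = 0.2·SD below are common defaults, and the toy signals merely stand in for the recorded foot trajectories:

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of series x: lower values = more regular/predictable.
    The tolerance r is expressed as a fraction of the series' SD."""
    n = len(x)
    mean = sum(x) / n
    sd = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    tol = r * sd

    def matches(length):
        # Count template pairs (excluding self-matches) whose Chebyshev
        # distance stays within the tolerance.
        templates = [x[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= tol:
                    count += 1
        return count

    b = matches(m)      # similar patterns of length m
    a = matches(m + 1)  # ... that remain similar one step longer
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A strictly periodic signal is highly predictable (low sample entropy),
# while an irregular one is not.
periodic = [0, 1, 0, 1] * 25
random.seed(0)
noisy = [random.random() for _ in range(100)]
```

With these toy signals, `sample_entropy(periodic)` comes out far below `sample_entropy(noisy)`, mirroring the dancers' more predictable haptically-assisted coordination.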

  4. 3D Scientific Visualization with Blender

    Science.gov (United States)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.

  5. Prey capture behaviour evoked by simple visual stimuli in larval zebrafish

    Directory of Open Access Journals (Sweden)

    Isaac Henry Bianco

    2011-12-01

Full Text Available Understanding how the nervous system recognises salient stimuli in the environment and selects and executes the appropriate behavioural responses is a fundamental question in systems neuroscience. To facilitate the neuroethological study of visually-guided behaviour in larval zebrafish, we developed virtual reality assays in which precisely controlled visual cues can be presented to larvae whilst their behaviour is automatically monitored using machine-vision algorithms. Freely swimming larvae responded to moving stimuli in a size-dependent manner: they directed multiple low-amplitude orienting turns (~20°) towards small moving spots (1°) but reacted to larger spots (10°) with high-amplitude aversive turns (~60°). The tracking of small spots led us to examine how larvae respond to prey during hunting routines. By analysing movie sequences of larvae hunting paramecia, we discovered that all prey capture routines commence with eye convergence and larvae maintain their eyes in a highly converged position for the duration of the prey-tracking and capture swim phases. We adapted our virtual reality assay to deliver artificial visual cues to partially restrained larvae and found that small moving spots evoked convergent eye movements and J-turns of the tail, which are defining features of natural hunting. We propose that eye convergence represents the engagement of a predatory mode of behaviour in larval fish and serves to increase the region of binocular visual space to enable stereoscopic targeting of prey.

  6. [Influence of "prehistory" of sequential movements of the right and the left hand on reproduction: coding of positions, movements and sequence structure].

    Science.gov (United States)

    Bobrova, E V; Liakhovetskiĭ, V A; Borshchevskaia, E R

    2011-01-01

    The dependence of errors during reproduction of a sequence of hand movements without visual feedback on the previous right- and left-hand performance ("prehistory") and on positions in space of sequence elements (random or ordered by the explicit rule) was analyzed. It was shown that the preceding information about the ordered positions of the sequence elements was used during right-hand movements, whereas left-hand movements were performed with involvement of the information about the random sequence. The data testify to a central mechanism of the analysis of spatial structure of sequence elements. This mechanism activates movement coding specific for the left hemisphere (vector coding) in case of an ordered sequence structure and positional coding specific for the right hemisphere in case of a random sequence structure.

  7. Conveying Clinical Reasoning Based on Visual Observation via Eye-Movement Modelling Examples

    Science.gov (United States)

    Jarodzka, Halszka; Balslev, Thomas; Holmqvist, Kenneth; Nystrom, Marcus; Scheiter, Katharina; Gerjets, Peter; Eika, Berit

    2012-01-01

    Complex perceptual tasks, like clinical reasoning based on visual observations of patients, require not only conceptual knowledge about diagnostic classes but also the skills to visually search for symptoms and interpret these observations. However, medical education so far has focused very little on how visual observation skills can be…

  8. Contrast and assimilation in motion perception and smooth pursuit eye movements.

    Science.gov (United States)

    Spering, Miriam; Gegenfurtner, Karl R

    2007-09-01

    The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently perturbed simultaneously to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.
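The dissociation reported above amounts to two different combinations of the same two motion signals. A toy sketch; the subtraction gain and averaging weight are illustrative assumptions, not fitted values from the study:

```python
# Same inputs, two read-outs: perception subtracts context motion,
# pursuit averages it in. Velocities in deg/s; numbers are illustrative.

def perceived_velocity(target, context, gain=1.0):
    """Motion contrast: context motion is subtracted from target motion."""
    return target - gain * context

def pursuit_velocity(target, context, weight=0.5):
    """Motion assimilation: pursuit follows a weighted average of both."""
    return (1 - weight) * target + weight * context

target, context = 10.0, 2.0  # target and context drift in the same direction
perceived = perceived_velocity(target, context)  # 8.0: target looks slower
pursuit = pursuit_velocity(target, context)      # 6.0: eye pulled toward context
```

The point of the sketch is that a single perturbation of `context` moves the two read-outs in opposite directions, which is exactly the signature the experiment exploits.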

  9. Learning to See: Guiding Students' Attention via a Model's Eye Movements Fosters Learning

    Science.gov (United States)

    Jarodzka, Halszka; van Gog, Tamara; Dorr, Michael; Scheiter, Katharina; Gerjets, Peter

    2013-01-01

    This study investigated how to teach perceptual tasks, that is, classifying fish locomotion, through eye movement modeling examples (EMME). EMME consisted of a replay of eye movements of a didactically behaving domain expert (model), which had been recorded while he executed the task, superimposed onto the video stimulus. Seventy-five students…

  10. An information maximization model of eye movements

    Science.gov (United States)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
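The fixation-selection rule above can be sketched as a greedy loop over candidate locations. The one-dimensional uncertainty map and the resolution fall-off below are toy assumptions for illustration, not the authors' fitted model:

```python
# Greedy "fixate where uncertainty drops most" rule in one dimension.

def info_gain(fix, uncertainty):
    """Information gained by fixating `fix`: each location's uncertainty is
    reduced more the closer it lies to the fovea (resolution fall-off)."""
    gain = 0.0
    for loc, u in uncertainty.items():
        resolution = 1.0 / (1.0 + abs(loc - fix))  # falls off with eccentricity
        gain += u * resolution
    return gain

def next_fixation(candidates, uncertainty):
    """Fixate next at the location that maximizes expected information."""
    return max(candidates, key=lambda fix: info_gain(fix, uncertainty))

# Uncertainty concentrated around location 7, so the model should look there:
uncertainty = {0: 0.1, 3: 0.2, 7: 1.0, 8: 0.9}
fix = next_fixation(range(10), uncertainty)  # -> 7
```

In a full model the chosen fixation would reduce the map's uncertainty and the loop would repeat, yielding a predicted fixation sequence.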

  11. Alcohol badly affects eye movements linked to steering, providing for automatic in-car detection of drink driving.

    Science.gov (United States)

    Marple-Horvat, Dilwyn E; Cooper, Hannah L; Gilbey, Steven L; Watson, Jessica C; Mehta, Neena; Kaur-Mann, Daljit; Wilson, Mark; Keil, Damian

    2008-03-01

Driving is a classic example of visually guided behavior in which the eyes move before some other action. When approaching a bend in the road, a driver looks across to the inside of the curve before turning the steering wheel. Eye and steering movements are tightly linked, with the eyes leading, which allows the parts of the brain that move the eyes to assist the parts of the brain that control the hands on the wheel. We show here that this optimal relationship deteriorates at breath alcohol levels well within the current UK legal limit for driving. The eyes move later, and coordination is reduced. These changes lead to poor performance and can be detected by an automated in-car system, which warns that the driver is no longer fit to drive.
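One way such an automated detector could quantify the eye-steering relationship is to find the lag at which the two signals correlate best; the eyes normally lead, and a shrinking lead or weaker peak correlation would be the warning sign. A pure-Python sketch on synthetic data; the paper does not specify its algorithm:

```python
import math

def best_lag(eye, steer, max_lag):
    """Lag (in samples) at which `eye` best predicts `steer`; a positive
    lag means the eyes lead the steering wheel."""
    def corr_at(lag):
        pairs = [(eye[i], steer[i + lag]) for i in range(len(eye) - lag)]
        n = len(pairs)
        mx = sum(e for e, _ in pairs) / n
        my = sum(s for _, s in pairs) / n
        num = sum((e - mx) * (s - my) for e, s in pairs)
        dx = sum((e - mx) ** 2 for e, _ in pairs) ** 0.5
        dy = sum((s - my) ** 2 for _, s in pairs) ** 0.5
        return num / (dx * dy) if dx > 0 and dy > 0 else 0.0
    return max(range(max_lag + 1), key=corr_at)

# Synthetic drive: steering reproduces the eye signal five samples later.
eye = [math.sin(0.1 * t) for t in range(200)]
steer = [0.0] * 5 + eye[:-5]
lag = best_lag(eye, steer, max_lag=20)  # -> 5
```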

  12. Constraining eye movement in individuals with Parkinson's disease during walking turns.

    Science.gov (United States)

    Ambati, V N Pradeep; Saucedo, Fabricio; Murray, Nicholas G; Powell, Douglas W; Reed-Jones, Rebecca J

    2016-10-01

    Walking and turning is a movement that places individuals with Parkinson's disease (PD) at increased risk for fall-related injury. However, turning is an essential movement in activities of daily living, making up to 45 % of the total steps taken in a given day. Hypotheses regarding how turning is controlled suggest an essential role of anticipatory eye movements to provide feedforward information for body coordination. However, little research has investigated control of turning in individuals with PD with specific consideration for eye movements. The purpose of this study was to examine eye movement behavior and body segment coordination in individuals with PD during walking turns. Three experimental groups, a group of individuals with PD, a group of healthy young adults (YAC), and a group of healthy older adults (OAC), performed walking and turning tasks under two visual conditions: free gaze and fixed gaze. Whole-body motion capture and eye tracking characterized body segment coordination and eye movement behavior during walking trials. Statistical analysis revealed significant main effects of group (PD, YAC, and OAC) and visual condition (free and fixed gaze) on timing of segment rotation and horizontal eye movement. Within group comparisons, revealed timing of eye and head movement was significantly different between the free and fixed gaze conditions for YAC (p  0.05). In addition, while intersegment timings (reflecting segment coordination) were significantly different for YAC and OAC during free gaze (p training programs for those with PD, possibly promoting better coordination during turning and potentially reducing the risk of falls.

  13. Gravity-dependent estimates of object mass underlie the generation of motor commands for horizontal limb movements.

    Science.gov (United States)

    Crevecoeur, F; McIntyre, J; Thonnard, J-L; Lefèvre, P

    2014-07-15

    Moving requires handling gravitational and inertial constraints pulling on our body and on the objects that we manipulate. Although previous work emphasized that the brain uses internal models of each type of mechanical load, little is known about their interaction during motor planning and execution. In this report, we examine visually guided reaching movements in the horizontal plane performed by naive participants exposed to changes in gravity during parabolic flight. This approach allowed us to isolate the effect of gravity because the environmental dynamics along the horizontal axis remained unchanged. We show that gravity has a direct effect on movement kinematics, with faster movements observed after transitions from normal gravity to hypergravity (1.8g), followed by significant movement slowing after the transition from hypergravity to zero gravity. We recorded finger forces applied on an object held in precision grip and found that the coupling between grip force and inertial loads displayed a similar effect, with an increase in grip force modulation gain under hypergravity followed by a reduction of modulation gain after entering the zero-gravity environment. We present a computational model to illustrate that these effects are compatible with the hypothesis that participants partially attribute changes in weight to changes in mass and scale incorrectly their motor commands with changes in gravity. These results highlight a rather direct internal mapping between the force generated during stationary holding against gravity and the estimation of inertial loads that limb and hand motor commands must overcome. Copyright © 2014 the American Physiological Society.
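The hypothesis above, that changes in weight are partly attributed to changes in mass, reduces to simple arithmetic on weight = mass × g. The attribution factor below is an assumed free parameter and the numbers are illustrative, not the study's estimates:

```python
G = 9.81  # normal gravity (m/s^2)

def estimated_mass(true_mass, g_new, attribution=1.0):
    """Mass estimate when a fraction `attribution` of the weight change
    (weight = mass * g) is wrongly ascribed to mass rather than gravity."""
    felt_weight = true_mass * g_new
    naive = felt_weight / G  # full misattribution: m_est = weight / g_normal
    return (1 - attribution) * true_mass + attribution * naive

m = 0.3  # kg, object held in precision grip
hyper = estimated_mass(m, 1.8 * G)  # 0.54 kg: overestimated in hypergravity
zero_g = estimated_mass(m, 0.0)     # 0.0 kg: underestimated in weightlessness
```

If grip-force modulation gain scales with this mass estimate, the sketch reproduces the reported pattern: higher gain under 1.8 g, lower gain after entering 0 g.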

  14. Interactive map of refugee movement in Europe

    Directory of Open Access Journals (Sweden)

    Calka Beata

    2016-12-01

Full Text Available Considering the recent mass movement of people fleeing war and oppression, an analysis of changes in migration, in particular an analysis of the final destination refugees choose, seems to be of utmost importance. Many international organisations like UNHCR (the United Nations High Commissioner for Refugees) or EuroStat gather and provide information on the number of refugees and the routes they follow. What is also needed to study the state of affairs closely is a visual form presenting the rapidly changing situation. An analysis of the problem together with up-to-date statistical data presented in the visual form of a map is essential. This article describes methods of preparing such interactive maps displaying movement of refugees in European Union countries. Those maps would show changes taking place throughout recent years but also the dynamics of the development of the refugee crisis in Europe. The ArcGIS software was applied to make the map accessible on the Internet. Additionally, online sources and newspaper articles were used to present the movement of migrants. The interactive map makes it possible to watch spatial data with an opportunity to navigate within the map window. Because of that it is a clear and convenient tool to visualise such processes as refugee migration in Europe.

  15. Interactive map of refugee movement in Europe

    Science.gov (United States)

    Calka, Beata; Cahan, Bruce

    2016-12-01

    Considering the recent mass movement of people fleeing war and oppression, an analysis of changes in migration, in particular an analysis of the final destination refugees choose, seems to be of utmost importance. Many international organisations like UNHCR (the United Nations High Commissioner for Refugees) or EuroStat gather and provide information on the number of refugees and the routes they follow. What is also needed to study the state of affairs closely is a visual form presenting the rapidly changing situation. An analysis of the problem together with up-to-date statistical data presented in the visual form of a map is essential. This article describes methods of preparing such interactive maps displaying movement of refugees in European Union countries. Those maps would show changes taking place throughout recent years but also the dynamics of the development of the refugee crisis in Europe. The ArcGIS software was applied to make the map accessible on the Internet. Additionally, online sources and newspaper articles were used to present the movement of migrants. The interactive map makes it possible to watch spatial data with an opportunity to navigate within the map window. Because of that it is a clear and convenient tool to visualise such processes as refugee migration in Europe.

  16. Contextual effects on smooth-pursuit eye movements.

    Science.gov (United States)

    Spering, Miriam; Gegenfurtner, Karl R

    2007-02-01

    Segregating a moving object from its visual context is particularly relevant for the control of smooth-pursuit eye movements. We examined the interaction between a moving object and a stationary or moving visual context to determine the role of the context motion signal in driving pursuit. Eye movements were recorded from human observers to a medium-contrast Gaussian dot that moved horizontally at constant velocity. A peripheral context consisted of two vertically oriented sinusoidal gratings, one above and one below the stimulus trajectory, that were either stationary or drifted into the same or opposite direction as that of the target at different velocities. We found that a stationary context impaired pursuit acceleration and velocity and prolonged pursuit latency. A drifting context enhanced pursuit performance, irrespective of its motion direction. This effect was modulated by context contrast and orientation. When a context was briefly perturbed to move faster or slower eye velocity changed accordingly, but only when the context was drifting along with the target. Perturbing a context into the direction orthogonal to target motion evoked a deviation of the eye opposite to the perturbation direction. We therefore provide evidence for the use of absolute and relative motion cues, or motion assimilation and motion contrast, for the control of smooth-pursuit eye movements.

  17. Learning QlikView data visualization

    CERN Document Server

    Pover, Karl

    2013-01-01

A practical and fast-paced guide that gives you all the information you need to start developing charts from your data. Learning QlikView Data Visualization is for anybody interested in performing powerful data analysis and crafting insightful data visualization, independent of any previous knowledge of QlikView. Experience with spreadsheet software will help you understand QlikView functions.

  18. How context information and target information guide the eyes from the first epoch of search in real-world scenes.

    Science.gov (United States)

    Spotorno, Sara; Malcolm, George L; Tatler, Benjamin W

    2014-02-11

    This study investigated how the visual system utilizes context and task information during the different phases of a visual search task. The specificity of the target template (the picture or the name of the target) and the plausibility of target position in real-world scenes were manipulated orthogonally. Our findings showed that both target template information and guidance of spatial context are utilized to guide eye movements from the beginning of scene inspection. In both search initiation and subsequent scene scanning, the availability of a specific visual template was particularly useful when the spatial context of the scene was misleading and the availability of a reliable scene context facilitated search mainly when the template was abstract. Target verification was affected principally by the level of detail of target template, and was quicker in the case of a picture cue. The results indicate that the visual system can utilize target template guidance and context guidance flexibly from the beginning of scene inspection, depending upon the amount and the quality of the available information supplied by either of these high-level sources. This allows for optimization of oculomotor behavior throughout the different phases of search within a real-world scene.

  19. A Citizen's Guide to Vapor Intrusion Mitigation

    Science.gov (United States)

    This guide explains vapor intrusion: the movement of chemical vapors from contaminated soil and groundwater into nearby buildings. Vapors primarily enter through openings in the building foundation or basement walls.

  20. Influences of Long-Term Memory-Guided Attention and Stimulus-Guided Attention on Visuospatial Representations within Human Intraparietal Sulcus.

    Science.gov (United States)

    Rosen, Maya L; Stern, Chantal E; Michalka, Samantha W; Devaney, Kathryn J; Somers, David C

    2015-08-12

    Human parietal cortex plays a central role in encoding visuospatial information and multiple visual maps exist within the intraparietal sulcus (IPS), with each hemisphere symmetrically representing contralateral visual space. Two forms of hemispheric asymmetries have been identified in parietal cortex ventrolateral to visuotopic IPS. Key attentional processes are localized to right lateral parietal cortex in the temporoparietal junction and long-term memory (LTM) retrieval processes are localized to the left lateral parietal cortex in the angular gyrus. Here, using fMRI, we investigate how spatial representations of visuotopic IPS are influenced by stimulus-guided visuospatial attention and by LTM-guided visuospatial attention. We replicate prior findings that a hemispheric asymmetry emerges under stimulus-guided attention: in the right hemisphere (RH), visual maps IPS0, IPS1, and IPS2 code attentional targets across the visual field; in the left hemisphere (LH), IPS0-2 codes primarily contralateral targets. We report the novel finding that, under LTM-guided attention, both RH and LH IPS0-2 exhibit bilateral responses and hemispheric symmetry re-emerges. Therefore, we demonstrate that both hemispheres of IPS0-2 are independently capable of dynamically changing spatial coding properties as attentional task demands change. These findings have important implications for understanding visuospatial and memory-retrieval deficits in patients with parietal lobe damage. The human parietal lobe contains multiple maps of the external world that spatially guide perception, action, and cognition. Maps in each cerebral hemisphere code information from the opposite side of space, not from the same side, and the two hemispheres are symmetric. Paradoxically, damage to specific parietal regions that lack spatial maps can cause patients to ignore half of space (hemispatial neglect syndrome), but only for right (not left) hemisphere damage. Conversely, the left parietal cortex has

  1. Effects of reward on the accuracy and dynamics of smooth pursuit eye movements.

    Science.gov (United States)

    Brielmann, Aenne A; Spering, Miriam

    2015-08-01

    Reward modulates behavioral choices and biases goal-oriented behavior, such as eye or hand movements, toward locations or stimuli associated with higher rewards. We investigated reward effects on the accuracy and timing of smooth pursuit eye movements in 4 experiments. Eye movements were recorded in participants tracking a moving visual target on a computer monitor. Before target motion onset, a monetary reward cue indicated whether participants could earn money by tracking accurately, or whether the trial was unrewarded (Experiments 1 and 2, n = 11 each). Reward significantly improved eye-movement accuracy across different levels of task difficulty. Improvements were seen even in the earliest phase of the eye movement, within 70 ms of tracking onset, indicating that reward impacts visual-motor processing at an early level. We obtained similar findings when reward was not precued but explicitly associated with the pursuit target (Experiment 3, n = 16); critically, these results were not driven by stimulus prevalence or other factors such as preparation or motivation. Numerical cues (Experiment 4, n = 9) were not effective.

  2. 3D Scientific Visualization with Blender

    Science.gov (United States)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender (an open source visualization suite widely used in the entertainment and gaming industries) for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.

  3. Action-blindsight in healthy subjects after transcranial magnetic stimulation

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Kristiansen, Lasse; Rowe, James B.

    2008-01-01

    Clinical cases of blindsight have shown that visually guided movements can be accomplished without conscious visual perception. Here, we show that blindsight can be induced in healthy subjects by using transcranial magnetic stimulation over the visual cortex. Transcranial magnetic stimulation...

  4. Receptive fields for smooth pursuit eye movements and motion perception.

    Science.gov (United States)

    Debono, Kurt; Schütz, Alexander C; Spering, Miriam; Gegenfurtner, Karl R

    2010-12-01

    Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a different direction than the main motion including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT).
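    The abstract reports direction bandwidth as full width at half height (FWHM), a measure commonly read off a Gaussian tuning curve. A minimal sketch of that relationship, assuming Gaussian tuning (the function names and the peak-normalized form are illustrative assumptions, not the authors' code):

    ```python
    import math

    def fwhm_to_sigma(fwhm):
        """Convert the full width at half maximum of a Gaussian tuning
        curve to its standard deviation: FWHM = 2 * sqrt(2 * ln 2) * sigma."""
        return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

    def gaussian_tuning(angle_deg, preferred_deg, fwhm_deg):
        """Response of a Gaussian direction-tuned unit, peak normalized to 1.
        The response falls to exactly 0.5 at fwhm_deg / 2 from the peak."""
        sigma = fwhm_to_sigma(fwhm_deg)
        return math.exp(-0.5 * ((angle_deg - preferred_deg) / sigma) ** 2)
    ```

    For the reported 26-degree FWHM, this corresponds to a tuning standard deviation of about 11 degrees, in the range of direction tuning widths measured for MT neurons.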

  5. Visualization system on the earth simulator user's guide

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Sai, Kazunori

    2002-08-01

    A visualization system on the Earth Simulator is developed. The system enables users to see a graphic representation of simulation results on a client terminal while they are being computed on the Earth Simulator. Moreover, the system makes it possible to change parameters of the calculation and its visualization in the middle of a calculation. The graphical user interface (GUI) of the system is constructed as a Java applet; consequently, the client needs only a web browser and is independent of the operating system. The system consists of a server function, a post-processing function, and a client function. The server and post-processing functions run on the Earth Simulator, and the client function runs on the client terminal. The server function is provided as a library so that users can easily invoke real-time visualization functions from their own code. The post-processing function is likewise provided as a library and additionally supplies a load module. This report mainly describes the usage of the server and post-processing functions. (author)

  6. Lip movements affect infants' audiovisual speech perception.

    Science.gov (United States)

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  7. Transcranial magnetic stimulation and preparation of visually-guided reaching movements

    Directory of Open Access Journals (Sweden)

    Pierpaolo eBusan

    2012-08-01

    To better define the neural networks related to the preparation of reaching, we applied transcranial magnetic stimulation (TMS) to the lateral parietal and frontal cortex. TMS did not evoke effects closely related to the preparation of reaching, suggesting that the neural networks already identified by our group are not larger than previously thought. We also replicated previous TMS/EEG data by applying TMS to the parietal cortex: new analyses were performed to better support the reliability of already reported findings (Zanon et al., 2010, Brain Topography, 22, 307-317). We showed the existence of neural circuits ranging from posterior to frontal regions of the brain after stimulation of the parietal cortex, supporting the idea of strong connections among these areas and suggesting their possible temporal dynamics. A connection with the ventral stream was confirmed. The present work helps to define the areas involved in the preparation of natural reaching in humans. They correspond to parieto-occipital, parietal, and premotor medial regions of the left hemisphere, i.e., the hemisphere contralateral to the moving hand, as suggested by previous studies. Behavioral data support the existence of a discrete stream involved in reaching. Besides the serial flow of activation in the posterior-to-anterior direction, a parallel elaboration of information among parietal and premotor areas also seems to exist. The present cortico-cortical interactions (TMS/EEG experiment) show propagation of activity to frontal, temporal, parietal, and more posterior regions, exhibiting distributed communication among various areas of the brain. The neural system highlighted by the TMS/EEG experiments is wider than the one disclosed by the TMS behavioral approach; further studies are needed to unravel this paucity of overlap. Moreover, the understanding of these mechanisms is crucial for the comprehension of response inhibition and changes in prepared actions, which are common behaviors in everyday life.

  8. Preliminary study of visual effect of multiplex hologram

    Science.gov (United States)

    Fu, Huaiping; Xiong, Bingheng; Yang, Hong; Zhang, Xueguo

    2004-06-01

    The process of any movement of a real object can be recorded and displayed by a multiplex holographic stereogram. We made an embossed multiplex holographic stereogram and a multiplex rainbow holographic stereogram: the rainbow stereogram reconstructs dynamic 2D line drawings of the speech organs, and the embossed stereogram reconstructs the process of an old man drinking water. In this paper, we studied the visual result of an embossed multiplex holographic stereogram made from 80 frames of 2D pictures. Forty-eight people aged 13 to 67 were asked to view the hologram and then answer questions about the viewing experience. The results indicate that this kind of hologram can be accepted by the human visual system without any problem. This paper also discusses the visual effect of multiplex holographic stereograms on the basis of visual perceptual psychology. It shows that planar multiplex holograms can record and present the movement of real animals and objects, and that viewers exhibit perceptual constancy not only for shape, size, and color, but also for binocular parallax.

  9. Predictive and tempo-flexible synchronization to a visual metronome in monkeys.

    Science.gov (United States)

    Takeya, Ryuji; Kameda, Masashi; Patel, Aniruddh D; Tanaka, Masaki

    2017-07-21

    Predictive and tempo-flexible synchronization to an auditory beat is a fundamental component of human music. To date, only certain vocal learning species show this behaviour spontaneously. Prior research training macaques (vocal non-learners) to tap to an auditory or visual metronome found their movements to be largely reactive, not predictive. Does this reflect the lack of capacity for predictive synchronization in monkeys, or lack of motivation to exhibit this behaviour? To discriminate these possibilities, we trained monkeys to make synchronized eye movements to a visual metronome. We found that monkeys could generate predictive saccades synchronized to periodic visual stimuli when an immediate reward was given for every predictive movement. This behaviour generalized to novel tempi, and the monkeys could maintain the tempo internally. Furthermore, monkeys could flexibly switch from predictive to reactive saccades when a reward was given for each reactive response. In contrast, when humans were asked to make a sequence of reactive saccades to a visual metronome, they often unintentionally generated predictive movements. These results suggest that even vocal non-learners may have the capacity for predictive and tempo-flexible synchronization to a beat, but that only certain vocal learning species are intrinsically motivated to do it.

  10. Highly Realistic 3D Presentation Agents with Visual Attention Capability

    NARCIS (Netherlands)

    Hoekstra, A; Prendinger, H.; Bee, N.; Heylen, Dirk K.J.; Ishizuka, M.

    2007-01-01

    This research proposes 3D graphical agents in the role of virtual presenters with a new type of functionality – the capability to process and respond to visual attention of users communicated by their eye movements. Eye gaze is an excellent clue to users’ attention, visual interest, and visual

  11. Visual updating across saccades by working memory integration

    NARCIS (Netherlands)

    Oostwoud Wijdenes, L.; Marshall, L.; Bays, P.M.

    2015-01-01

    We explore the visual world through saccadic eye movements, but saccades also present a challenge to visual processing, by shifting externally-stable objects from one retinal location to another. The brain could solve this problem in two ways: by overwriting preceding input and starting afresh with

  12. Visualization of vessel traffic

    NARCIS (Netherlands)

    Willems, C.M.E.

    2011-01-01

    Moving objects are captured in multivariate trajectories, often large data with multiple attributes. We focus on vessel traffic as a source of such data. Patterns appearing from visually analyzing attributes are used to explain why certain movements have occurred. In this research, we have developed

  13. Reporting with Visual Studio and Crystal Reports

    CERN Document Server

    Elkoush, Mahmoud

    2013-01-01

    A fast-paced, example-based guide to learn how to create a reporting application using Visual Studio and Crystal Reports. "Reporting with Visual Studio and Crystal Reports" is for developers new to Crystal Reports. It will also prove useful to intermediate users who wish to explore some new techniques in Crystal Reports using Microsoft Visual Studio. Readers are expected to have basic knowledge of C#, Microsoft Visual Studio, and Structured Query Language (SQL).

  14. Visual screening of incarcerated juvenile delinquents: a study of ...

    African Journals Online (AJOL)

    Further analysis revealed significantly higher frequencies for convergence, phoria, hyperopia, pursuit and saccadic eye movement subtest (P<0.05) in male offenders as compared with male control population. Also higher frequencies for visual acuity (distant and near), hyperopia, phoria, pursuit and saccadic eye movement ...

  15. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Science.gov (United States)

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  16. Directional asymmetries in human smooth pursuit eye movements.

    Science.gov (United States)

    Ke, Sally R; Lam, Jessica; Pai, Dinesh K; Spering, Miriam

    2013-06-27

    Humans make smooth pursuit eye movements to bring the image of a moving object onto the fovea. Although pursuit accuracy is critical to prevent motion blur, the eye often falls behind the target. Previous studies suggest that pursuit accuracy differs between motion directions. Here, we systematically assess asymmetries in smooth pursuit. In experiment 1, binocular eye movements were recorded while observers (n = 20) tracked a small spot of light moving along one of four cardinal or diagonal axes across a featureless background. We analyzed pursuit latency, acceleration, peak velocity, gain, and catch-up saccade latency, number, and amplitude. In experiment 2 (n = 22), we examined the effects of spatial location and constrained stimulus motion within the upper or lower visual field. Pursuit was significantly faster (higher acceleration, peak velocity, and gain) and smoother (fewer and later catch-up saccades) in response to downward versus upward motion in both the upper and the lower visual fields. Pursuit was also more accurate and smoother in response to horizontal versus vertical motion. In conclusion, our study is the first to report a consistent up-down asymmetry in human adults, regardless of visual field. Our findings suggest that pursuit asymmetries are adaptive responses to the requirements of the visual context: preferred motion directions (horizontal and downward) are more critical to our survival than nonpreferred ones.
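    Pursuit gain, one of the accuracy measures analyzed in this abstract, is conventionally defined as eye velocity divided by target velocity (a gain of 1 means perfect tracking). A minimal sketch of that computation from position samples; the function name and the simple mean-velocity estimate are illustrative assumptions, not the authors' analysis pipeline:

    ```python
    import numpy as np

    def pursuit_gain(eye_pos_deg, target_pos_deg, dt):
        """Steady-state pursuit gain: mean eye velocity divided by mean
        target velocity, estimated from position traces (in degrees of
        visual angle) sampled at a fixed interval dt (in seconds)."""
        eye_vel = np.diff(eye_pos_deg) / dt        # deg/s, per sample
        target_vel = np.diff(target_pos_deg) / dt  # deg/s, per sample
        return np.mean(eye_vel) / np.mean(target_vel)
    ```

    For example, an eye moving at a steady 9 deg/s while tracking a 10 deg/s target yields a gain of 0.9, i.e., the eye falls slightly behind the target, as the abstract notes is common.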

  17. The spatiotopic 'visual' cortex of the blind

    Science.gov (United States)

    Likova, Lora

    2012-03-01

    Visual cortex activity in the blind has been shown in sensory tasks. Can it be activated in memory tasks? If so, are inherent features of its organization meaningfully employed? Our recent results in short-term blindfolded subjects imply that human primary visual cortex (V1) may operate as a modality-independent 'sketchpad' for working memory (Likova, 2010a). Interestingly, the spread of the V1 activation approximately corresponded to the spatial extent of the images in terms of their angle of projection to the subject. We now raise the questions of whether under long-term visual deprivation V1 is also employed in non-visual memory tasks, in particular in congenitally blind individuals, who have never had visual stimulation to guide the development of visual area organization, and whether such spatial organization still holds for the same paradigm that was used in blindfolded individuals. The outcome has implications for an emerging reconceptualization of the principles of brain architecture and its reorganization under sensory deprivation. Methods: We used a novel fMRI drawing paradigm in congenitally and late-onset blind subjects, compared with sighted and blindfolded subjects, in three conditions of 20 s duration separated by 20 s rest intervals: (i) Tactile Exploration: raised-line images explored and memorized; (ii) Tactile Memory Drawing: drawing the explored image from memory; (iii) Scribble: mindless drawing movements with no memory component. Results and Conclusions: V1 was strongly activated for Tactile Memory Drawing and Tactile Exploration in these totally blind subjects. Remarkably, after training, even in the memory task, the mapping of V1 activation largely corresponded to the angular projection of the tactile stimuli relative to the ego-center (i.e., the effective visual angle at the head); beyond this projective boundary, peripheral V1 signals were dramatically reduced or even suppressed.
    The matching extent of the activation in the congenitally blind

  18. Structural and functional changes across the visual cortex of a patient with visual form agnosia.

    Science.gov (United States)

    Bridge, Holly; Thomas, Owen M; Minini, Loredana; Cavina-Pratesi, Cristiana; Milner, A David; Parker, Andrew J

    2013-07-31

    Loss of shape recognition in visual-form agnosia occurs without equivalent losses in the use of vision to guide actions, providing support for the hypothesis of two visual systems (for "perception" and "action"). The human individual DF received a toxic exposure to carbon monoxide some years ago, which resulted in a persisting visual-form agnosia that has been extensively characterized at the behavioral level. We conducted a detailed high-resolution MRI study of DF's cortex, combining structural and functional measurements. We present the first accurate quantification of the changes in thickness across DF's occipital cortex, finding the most substantial loss in the lateral occipital cortex (LOC). There are reduced white matter connections between LOC and other areas. Functional measures show pockets of activity that survive within structurally damaged areas. The topographic mapping of visual areas showed that ordered retinotopic maps were evident for DF in the ventral portions of visual cortical areas V1, V2, V3, and hV4. Although V1 shows evidence of topographic order in its dorsal portion, such maps could not be found in the dorsal parts of V2 and V3. We conclude that it is not possible to understand fully the deficits in object perception in visual-form agnosia without the exploitation of both structural and functional measurements. Our results also highlight for DF the cortical routes through which visual information is able to pass to support her well-documented abilities to use visual information to guide actions.

  19. Magnetic stimulation of the dorsolateral prefrontal cortex dissociates fragile visual short-term memory from visual working memory

    NARCIS (Netherlands)

    Sligte, I.G.; Wokke, M.E.; Tesselaar, J.P.; Scholte, H.S.; Lamme, V.A.F.

    2011-01-01

    To guide our behavior in successful ways, we often need to rely on information that is no longer in view, but maintained in visual short-term memory (VSTM). While VSTM is usually broken down into iconic memory (brief and high-capacity store) and visual working memory (sustained, yet limited-capacity

  20. Real-time recording and classification of eye movements in an immersive virtual environment.

    Science.gov (United States)

    Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary

    2013-10-10

    Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the QuickTime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements.
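    The authors' actual algorithms are distributed at the SourceForge link above; the sketch below only illustrates the two core ideas the abstract names: angular distance between a gaze direction and the direction toward a virtual object, and a simple velocity-threshold split of gaze samples into fixations and saccades. The function names and the 100 deg/s threshold are assumptions for illustration, not the published toolkit:

    ```python
    import numpy as np

    def angular_distance(gaze_dir, object_dir):
        """Angle in degrees between a gaze direction vector and the vector
        from the eye toward a virtual object, both in world coordinates."""
        g = np.asarray(gaze_dir, dtype=float)
        o = np.asarray(object_dir, dtype=float)
        cos_theta = np.dot(g, o) / (np.linalg.norm(g) * np.linalg.norm(o))
        # Clip guards against floating-point values just outside [-1, 1].
        return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

    def classify_samples(gaze_angles_deg, timestamps_s, saccade_thresh=100.0):
        """Label each inter-sample interval as 'saccade' or 'fixation' by
        comparing angular velocity (deg/s) against a fixed threshold."""
        velocities = np.diff(gaze_angles_deg) / np.diff(timestamps_s)
        return ["saccade" if abs(v) > saccade_thresh else "fixation"
                for v in velocities]
    ```

    In practice, pursuit detection requires an additional, lower velocity band between the fixation and saccade thresholds, which is why the published library treats it as a third class.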