WorldWideScience

Sample records for enhanced visual motion

  1. Visuotactile motion congruence enhances gamma-band activity in visual and somatosensory cortices.

    Science.gov (United States)

    Krebber, Martin; Harwood, James; Spitzer, Bernhard; Keil, Julian; Senkowski, Daniel

    2015-08-15

    When touching and viewing a moving surface our visual and somatosensory systems receive congruent spatiotemporal input. Behavioral studies have shown that motion congruence facilitates interplay between visual and tactile stimuli, but the neural mechanisms underlying this interplay are not well understood. Neural oscillations play a role in motion processing and multisensory integration. They may also be crucial for visuotactile motion processing. In this electroencephalography study, we applied linear beamforming to examine the impact of visuotactile motion congruence on beta and gamma band activity (GBA) in visual and somatosensory cortices. Visual and tactile inputs consisted of gratings that moved either in the same or different directions. Participants performed a target detection task that was unrelated to motion congruence. While there were no effects in the beta band (13-21 Hz), the power of GBA (50-80 Hz) in visual and somatosensory cortices was larger for congruent compared with incongruent motion stimuli. This suggests enhanced bottom-up multisensory processing when visual and tactile gratings moved in the same direction. Supporting its behavioral relevance, GBA was correlated with shorter reaction times in the target detection task. We conclude that motion congruence plays an important role in the integrative processing of visuotactile stimuli in sensory cortices, as reflected by oscillatory responses in the gamma band. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Saccade-induced image motion cannot account for post-saccadic enhancement of visual processing in primate MST

    Directory of Open Access Journals (Sweden)

    Shaun L Cloherty

    2015-09-01

    Full Text Available Primates use saccadic eye movements to make gaze changes. In many visual areas, including the dorsal medial superior temporal area (MSTd of macaques, neural responses to visual stimuli are reduced during saccades but enhanced afterwards. How does this enhancement arise – from an internal mechanism associated with saccade generation or through visual mechanisms activated by the saccade sweeping the image of the visual scene across the retina? Spontaneous activity in MSTd is elevated even after saccades made in darkness, suggesting a central mechanism for post-saccadic enhancement. However, based on the timing of this effect, it may arise from a different mechanism than occurs in normal vision. Like neural responses in MSTd, initial ocular following eye speed is enhanced after saccades, with evidence suggesting both internal and visually mediated mechanisms. Here we recorded from visual neurons in MSTd and measured responses to motion stimuli presented soon after saccades and soon after simulated saccades – saccade-like displacements of the background image during fixation. We found that neural responses in MSTd were enhanced when preceded by real saccades but not when preceded by simulated saccades. Furthermore, we also observed enhancement following real saccades made across a blank screen that generated no motion signal within the recorded neurons’ receptive fields. We conclude that in MSTd the mechanism leading to post-saccadic enhancement has internal origins.

  3. Can walking motions improve visually induced rotational self-motion illusions in virtual reality?

    Science.gov (United States)

    Riecke, Bernhard E; Freiberg, Jacob B; Grechkin, Timofey Y

    2015-02-04

    Illusions of self-motion (vection) can provide compelling sensations of moving through virtual environments without the need for complex motion simulators or large tracked physical walking spaces. Here we explore the interaction between biomechanical cues (stepping along a rotating circular treadmill) and visual cues (viewing simulated self-rotation) for providing stationary users a compelling sensation of rotational self-motion (circular vection). When tested individually, biomechanical and visual cues were similarly effective in eliciting self-motion illusions. However, in combination they yielded significantly more intense self-motion illusions. These findings provide the first compelling evidence that walking motions can be used to significantly enhance visually induced rotational self-motion perception in virtual environments (and vice versa) without having to provide for physical self-motion or motion platforms. This is noteworthy, as linear treadmills have been found to actually impair visually induced translational self-motion perception (Ash, Palmisano, Apthorp, & Allison, 2013). Given the predominant focus on linear walking interfaces for virtual-reality locomotion, our findings suggest that investigating circular and curvilinear walking interfaces offers a promising direction for future research and development and can help to enhance self-motion illusions, presence and immersion in virtual-reality systems. © 2015 ARVO.

  4. Auditory Motion Elicits a Visual Motion Aftereffect.

    Science.gov (United States)

    Berger, Christopher C; Ehrsson, H Henrik

    2016-01-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect-an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  5. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    Full Text Available The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  6. Auditory motion capturing ambiguous visual motion

    Directory of Open Access Journals (Sweden)

    Arjen eAlink

    2012-01-01

    Full Text Available In this study, it is demonstrated that moving sounds have an effect on the direction in which one sees visual stimuli move. During the main experiment sounds were presented consecutively at four speaker locations inducing left- or rightward auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived visual apparent motion stimuli that were ambiguous (equally likely to be perceived as moving left- or rightward) more often as moving in the same direction as the auditory apparent motion than in the opposite direction. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when visual motion direction is insufficiently determinate, without affecting eye movements.

  7. An Enhanced Intelligent Handheld Instrument with Visual Servo Control for 2-DOF Hand Motion Error Compensation

    Directory of Open Access Journals (Sweden)

    Yan Naing Aye

    2013-10-01

    Full Text Available The intelligent handheld instrument, ITrem2, enhances manual positioning accuracy by cancelling erroneous hand movements and, at the same time, provides automatic micromanipulation functions. Visual data is acquired from a high speed monovision camera attached to the optical surgical microscope and acceleration measurements are acquired from the inertial measurement unit (IMU) on board ITrem2. Tremor estimation and cancelling is implemented via a Band-limited Multiple Fourier Linear Combiner (BMFLC) filter. The piezoelectric actuated micromanipulator in ITrem2 generates the 3D motion to compensate erroneous hand motion. Preliminary bench-top 2-DOF experiments have been conducted. The error motions simulated by a motion stage were reduced by 67% for multiple frequency oscillatory motions and by 56.16% for pre-conditioned recorded physiological tremor.
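The BMFLC filter named in this record can be sketched briefly: tremor is modelled as a sum of sine/cosine pairs at fixed frequencies spanning the tremor band, and their amplitudes are adapted sample-by-sample with an LMS rule. This is a generic illustration only, not the ITrem2 implementation; the function name, the 6-14 Hz band, and the step size are assumptions.

```python
import numpy as np

def bmflc_estimate(signal, fs, f_lo=6.0, f_hi=14.0, df=0.5, mu=0.01):
    """Estimate a band-limited quasi-periodic component (e.g. physiological
    tremor) with a Band-limited Multiple Fourier Linear Combiner (BMFLC).

    The signal is modelled as a weighted sum of sin/cos pairs at fixed
    frequencies spanning [f_lo, f_hi]; the weights are adapted at every
    sample with an LMS update.  Returns the tremor estimate per sample.
    """
    freqs = np.arange(f_lo, f_hi + df, df)
    n_f = len(freqs)
    w = np.zeros(2 * n_f)                    # adaptive sin/cos amplitudes
    estimate = np.zeros(len(signal))
    for k, s_k in enumerate(signal):
        t = k / fs
        x = np.concatenate([np.sin(2 * np.pi * freqs * t),
                            np.cos(2 * np.pi * freqs * t)])  # reference input
        y = w @ x                            # current tremor estimate
        err = s_k - y                        # estimation error
        w += 2 * mu * err * x                # LMS weight update
        estimate[k] = y
    return estimate
```

With the dominant tremor frequency inside the modelled band, the weights converge within a fraction of a second at typical sampling rates, and the estimate can then be subtracted (or counter-actuated) to cancel the tremor.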

  8. The Right Hemisphere Planum Temporale Supports Enhanced Visual Motion Detection Ability in Deaf People: Evidence from Cortical Thickness.

    Science.gov (United States)

    Shiell, Martha M; Champoux, François; Zatorre, Robert J

    2016-01-01

    After sensory loss, the deprived cortex can reorganize to process information from the remaining modalities, a phenomenon known as cross-modal reorganization. In blind people this cross-modal processing supports compensatory behavioural enhancements in the nondeprived modalities. Deaf people also show some compensatory visual enhancements, but a direct relationship between these abilities and cross-modally reorganized auditory cortex has only been established in an animal model, the congenitally deaf cat, and not in humans. Using T1-weighted magnetic resonance imaging, we measured cortical thickness in the planum temporale, Heschl's gyrus and sulcus, the middle temporal area MT+, and the calcarine sulcus, in early-deaf persons. We tested for a correlation between this measure and visual motion detection thresholds, a visual function where deaf people show enhancements as compared to hearing. We found that the cortical thickness of a region in the right hemisphere planum temporale, typically an auditory region, was greater in deaf individuals with better visual motion detection thresholds. This same region has previously been implicated in functional imaging studies as important for functional reorganization. The structure-behaviour correlation observed here demonstrates this area's involvement in compensatory vision and indicates an anatomical correlate, increased cortical thickness, of cross-modal plasticity.

  9. Differential contribution of visual and auditory information to accurately predict the direction and rotational motion of a visual stimulus.

    Science.gov (United States)

    Park, Seoung Hoon; Kim, Seonjin; Kwon, MinHyuk; Christou, Evangelos A

    2016-03-01

    Visual and auditory information are critical for perception and enhance the ability of an individual to respond accurately to a stimulus. However, it is unknown whether visual and auditory information contribute differentially to identifying the direction and rotational motion of a stimulus. The purpose of this study was to determine the ability of an individual to accurately predict the direction and rotational motion of a stimulus based on visual and auditory information. We recruited 9 expert table-tennis players and used the table-tennis service as our experimental model. Participants watched recorded services with different levels of visual and auditory information. The goal was to anticipate the direction of the service (left or right) and the rotational motion of the service (topspin, sidespin, or cut). We recorded their responses and quantified the following outcomes: (i) directional accuracy and (ii) rotational motion accuracy. Response accuracy was the number of accurate predictions relative to the total number of trials. The ability of the participants to predict the direction of the service accurately increased with additional visual information but not with auditory information. In contrast, the ability of the participants to predict the rotational motion of the service accurately increased with the addition of auditory information to visual information but not with additional visual information alone. In conclusion, this finding demonstrates that visual information enhances the ability of an individual to accurately predict the direction of the stimulus, whereas additional auditory information enhances the ability of an individual to accurately predict its rotational motion.

  10. Perceptual Training Strongly Improves Visual Motion Perception in Schizophrenia

    Science.gov (United States)

    Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue

    2011-01-01

    Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…

  11. Direct Contribution of Auditory Motion Information to Sound-Induced Visual Motion Perception

    Directory of Open Access Journals (Sweden)

    Souta Hidaka

    2011-10-01

    Full Text Available We have recently demonstrated that alternating left-right sound sources induce motion perception in static visual stimuli along the horizontal plane (sound-induced visual motion perception, SIVM; Hidaka et al., 2009). The aim of the current study was to elucidate whether auditory motion signals, rather than auditory positional signals, can directly contribute to the SIVM. We presented static visual flashes at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move in a situation where auditory positional information would have little influence on the perceived position of visual stimuli; the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the auditory motion altered visual motion perception in a global motion display; in this display, different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion perception, so that there was no clear one-to-one correspondence between the auditory stimuli and each visual stimulus. These findings suggest the existence of direct interactions between the auditory and visual modalities in motion processing and motion perception.
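The global motion display described above combines many local motion signals into one coherent percept; a random-dot kinematogram is the standard way to build such a stimulus. In a minimal sketch, only a fraction of dots carries the signal direction on each frame, so no single dot identifies the global motion (the function name and parameters are illustrative assumptions, not the authors' stimulus code):

```python
import numpy as np

def rdk_step(pos, coherence, direction, speed=1.0, rng=None):
    """Advance a random-dot kinematogram one frame.

    A `coherence` fraction of the dots steps in the signal direction
    (angle in radians); the remaining dots step in random directions.
    `pos` is an (n, 2) array of dot positions; returns the new positions.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(pos)
    n_sig = int(round(coherence * n))
    angles = np.full(n, float(direction))
    angles[n_sig:] = rng.uniform(0, 2 * np.pi, n - n_sig)  # noise dots
    step = speed * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return pos + step
```

Averaged over many dots, the mean displacement points in the signal direction with magnitude roughly `coherence * speed`, which is what makes the global direction recoverable only by pooling local signals.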

  12. Sound-contingent visual motion aftereffect

    Directory of Open Access Journals (Sweden)

    Kobayashi Maori

    2011-05-01

    Full Text Available Abstract Background After a prolonged exposure to a paired presentation of different types of signals (e.g., color and motion), one of the signals (color) becomes a driver for the other signal (motion). This phenomenon, known as the contingent motion aftereffect, indicates that the brain can establish new neural representations even in the adult brain. However, the contingent motion aftereffect has been reported only within the visual or auditory domain. Here, we demonstrate that a visual motion aftereffect can be contingent on a specific sound. Results Dynamic random dots moving in an alternating right or left direction were presented to the participants. Each direction of motion was accompanied by an auditory tone of a unique and specific frequency. After a 3-minute exposure, the tones began to exert a marked influence on visual motion perception, and the percentage of dots required to trigger motion perception systematically changed depending on the tones. Furthermore, this effect lasted for at least 2 days. Conclusions These results indicate that a new neural representation can be rapidly established between the auditory and visual modalities.

  13. Influence of Visual Motion, Suggestion, and Illusory Motion on Self-Motion Perception in the Horizontal Plane.

    Science.gov (United States)

    Rosenblatt, Steven David; Crane, Benjamin Thomas

    2015-01-01

    A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p < 0.001). Perception was shifted in the direction consistent with the visual stimulus. 
Arrows had a small effect on self-motion
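The outcome measure in this record, the stimulus velocity at which subjects are equally likely to report motion in either direction, is a point of subjective equality (PSE). A minimal sketch of reading a PSE off a fitted logistic psychometric function (the function name and the simple logit-regression fit are illustrative assumptions, not the authors' analysis):

```python
import numpy as np

def fit_pse(velocities, p_right):
    """Fit a logistic psychometric function p(v) = 1/(1+exp(-(a+b*v)))
    to the proportion of 'rightward' reports at each inertial velocity,
    and return the point of subjective equality (PSE): the velocity at
    which both responses are equally likely (p = 0.5)."""
    p = np.clip(np.asarray(p_right, float), 0.01, 0.99)  # avoid infinities
    logit = np.log(p / (1 - p))                # linearize the logistic
    b, a = np.polyfit(velocities, logit, 1)    # logit(p) = a + b*v
    return -a / b                              # p = 0.5 where a + b*v = 0
```

A visual stimulus that biases self-motion perception shows up as a horizontal shift of the PSE relative to the control condition.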

  14. Influence of Visual Motion, Suggestion, and Illusory Motion on Self-Motion Perception in the Horizontal Plane.

    Directory of Open Access Journals (Sweden)

    Steven David Rosenblatt

    Full Text Available A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p < 0.001). Perception was shifted in the direction consistent with the visual stimulus, whereas neither illusory motion stimulus produced a significant shift (p > 0.1 for both). 
Thus, although a true moving visual field can induce self-motion, results of this

  15. Visual motion transforms visual space representations similarly throughout the human visual hierarchy.

    Science.gov (United States)

    Harvey, Ben M; Dumoulin, Serge O

    2016-02-15

    Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
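The pRF approach referenced above models each cortical site's position preference as a 2D Gaussian over visual space and predicts the fMRI response from the Gaussian's overlap with the stimulus aperture; motion-induced changes in preferred position and size are changes in the fitted Gaussian. A minimal sketch of that forward model (function name and grid are illustrative; this is not the authors' analysis pipeline):

```python
import numpy as np

def prf_response(stim, xs, ys, x0, y0, sigma):
    """Predicted response of a population receptive field (pRF) modelled
    as an isotropic 2D Gaussian centred at (x0, y0) with size sigma.

    `stim` is a stack of binary aperture frames (frames, len(ys), len(xs));
    the prediction per frame is the overlap (sum of pointwise products)
    between the aperture and the Gaussian."""
    gx, gy = np.meshgrid(xs, ys)
    g = np.exp(-((gx - x0) ** 2 + (gy - y0) ** 2) / (2 * sigma ** 2))
    return np.tensordot(stim, g, axes=([1, 2], [0, 1]))  # one value per frame
```

Fitting then amounts to searching over (x0, y0, sigma) for the prediction that, after convolution with a hemodynamic response function, best matches the measured time series.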

  16. Shape representation modulating the effect of motion on visual search performance.

    Science.gov (United States)

    Yang, Lindong; Yu, Ruifeng; Lin, Xuelian; Liu, Na

    2017-11-02

    The effect of motion on visual search has been extensively investigated, but the effect of uniform linear motion of the display on search performance for tasks with different target-distractor shape representations has rarely been explored. The present study conducted three visual search experiments. In Experiments 1 and 2, participants completed two search tasks that differed in target-distractor shape representation under static and dynamic conditions. Two tasks with clear and blurred stimuli were performed in Experiment 3. The experiments revealed that target-distractor shape representation modulated the effect of motion on visual search performance. For tasks with low target-distractor shape similarity, motion negatively affected search performance, consistent with previous studies. However, for tasks with high target-distractor shape similarity, motion positively influenced search performance when the target differed from the distractors by a gap with a linear contour where the corresponding part of the distractors had a curved contour. Motion blur contributed to the performance enhancement under dynamic conditions. These findings are useful for understanding the influence of target-distractor shape representation on dynamic visual search performance when the display has uniform linear motion.

  17. Characterizing head motion in three planes during combined visual and base of support disturbances in healthy and visually sensitive subjects.

    Science.gov (United States)

    Keshner, E A; Dhaher, Y

    2008-07-01

    Multiplanar environmental motion could generate head instability, particularly if the visual surround moves in planes orthogonal to a physical disturbance. We combined sagittal plane surface translations with visual field disturbances in 12 healthy (29-31 years) and 3 visually sensitive (27-57 years) adults. Center of pressure (COP), peak head angles, and RMS values of head motion were calculated, and a three-dimensional model of joint motion was developed to examine gross head motion in three planes. We found that subjects standing quietly in front of a visual scene translating in the sagittal plane produced significantly greater head motion, that head motion in the plane of platform motion significantly increased, and that subjects with a history of vestibular disorder produced large, delayed compensatory head motion. Orthogonal head motions were significantly greater in visually sensitive than in healthy subjects in the dark and in planes orthogonal to the direction of a physical perturbation. These results suggest that the mechanisms controlling head orientation in space are distinct from those that control trunk orientation in space. These behaviors would have been missed if only COP data were considered. Data suggest that rehabilitation training can be enhanced by combining visual and mechanical perturbation paradigms.

  18. Perception of biological motion in visual agnosia

    Directory of Open Access Journals (Sweden)

    Elisabeth eHuberle

    2012-08-01

    Full Text Available Over the past twenty-five years, visual processing has been discussed in the context of the dual stream hypothesis, consisting of a ventral (‘what’) and a dorsal (‘where’) visual information processing pathway. Patients with brain damage to the ventral pathway typically present with signs of visual agnosia, the inability to identify and discriminate objects by visual exploration, but show normal motion perception. A dissociation between the perception of biological motion and non-biological motion has been suggested: perception of biological motion might be impaired when non-biological motion perception is intact, and vice versa. The impact of object recognition on the perception of biological motion remains unclear. We thus investigated this question in a patient with severe visual agnosia who showed normal perception of non-biological motion. The data suggested that the patient's perception of biological motion remained largely intact. However, when tested with objects constructed of coherently moving dots (‘Shape-from-Motion’), recognition was severely impaired. The results are discussed in the context of possible mechanisms of biological motion perception.

  19. Auditory motion in the sighted and blind: Early visual deprivation triggers a large-scale imbalance between auditory and "visual" brain regions.

    Science.gov (United States)

    Dormal, Giulia; Rezk, Mohamed; Yakobov, Esther; Lepore, Franco; Collignon, Olivier

    2016-07-01

    How early blindness reorganizes the brain circuitry that supports auditory motion processing remains controversial. We used fMRI to characterize brain responses to in-depth, laterally moving, and static sounds in early blind and sighted individuals. Whole-brain univariate analyses revealed that the right posterior middle temporal gyrus and superior occipital gyrus selectively responded to both in-depth and laterally moving sounds only in the blind. These regions overlapped with regions selective for visual motion (hMT+/V5 and V3A) that were independently localized in the sighted. In the early blind, the right planum temporale showed enhanced functional connectivity with right occipito-temporal regions during auditory motion processing and a concomitant reduced functional connectivity with parietal and frontal regions. Whole-brain searchlight multivariate analyses demonstrated higher auditory motion decoding in the right posterior middle temporal gyrus in the blind compared to the sighted, while decoding accuracy was enhanced in the auditory cortex bilaterally in the sighted compared to the blind. Analyses targeting individually defined visual area hMT+/V5 however indicated that auditory motion information could be reliably decoded within this area even in the sighted group. Taken together, the present findings demonstrate that early visual deprivation triggers a large-scale imbalance between auditory and "visual" brain regions that typically support the processing of motion information. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Visual motion influences the contingent auditory motion aftereffect

    NARCIS (Netherlands)

    Vroomen, J.; de Gelder, B.

    2003-01-01

    In this study, we show that the contingent auditory motion aftereffect is strongly influenced by visual motion information. During an induction phase, participants listened to rightward-moving sounds with falling pitch alternated with leftward-moving sounds with rising pitch (or vice versa).

  1. Auditory capture of visual motion: effects on perception and discrimination.

    Science.gov (United States)

    McCourt, Mark E; Leone, Lynnette M

    2016-09-28

    We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.
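The stimulus construction described above relies on the identity that a counterphasing grating is the sum of two equal-contrast gratings drifting in opposite directions, so unequal component contrasts give the patch net directional motion energy. A small numerical sketch of that identity (the function name and parameter values are illustrative):

```python
import numpy as np

def drifting_sum(x, t, k, w, c_left, c_right):
    """Luminance profile of two superimposed gratings (spatial frequency k,
    temporal frequency w) drifting in opposite directions; unequal contrasts
    c_left != c_right give the stimulus net motion energy."""
    return (c_left * np.cos(k * x + w * t)      # leftward-drifting component
            + c_right * np.cos(k * x - w * t))  # rightward-drifting component
```

With c_left = c_right = c the two components sum to a standing wave, 2c cos(kx) cos(wt), which flickers in place with zero net motion energy; biasing one contrast recreates the "unequal motion energy" interval used in the discrimination task.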

  2. The role of human ventral visual cortex in motion perception

    Science.gov (United States)

    Saygin, Ayse P.; Lorenzi, Lauren J.; Egan, Ryan; Rees, Geraint; Behrmann, Marlene

    2013-01-01

    Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral ‘form’ (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion. PMID:23983030

  3. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    Science.gov (United States)

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828

  4. Neurons Responsive to Global Visual Motion Have Unique Tuning Properties in Hummingbirds.

    Science.gov (United States)

    Gaede, Andrea H; Goller, Benjamin; Lam, Jessica P M; Wylie, Douglas R; Altshuler, Douglas L

    2017-01-23

    Neurons in animal visual systems that respond to global optic flow exhibit selectivity for motion direction and/or velocity. The avian lentiformis mesencephali (LM), known in mammals as the nucleus of the optic tract (NOT), is a key nucleus for global motion processing [1-4]. In all animals tested, it has been found that the majority of LM and NOT neurons are tuned to temporo-nasal (back-to-front) motion [4-11]. Moreover, the monocular gain of the optokinetic response is higher in this direction, compared to naso-temporal (front-to-back) motion [12, 13]. Hummingbirds are sensitive to small visual perturbations while hovering, and they drift to compensate for optic flow in all directions [14]. Interestingly, the LM, but not other visual nuclei, is hypertrophied in hummingbirds relative to other birds [15], which suggests enhanced perception of global visual motion. Using extracellular recording techniques, we found that there is a uniform distribution of preferred directions in the LM in Anna's hummingbirds, whereas zebra finch and pigeon LM populations, as in other tetrapods, show a strong bias toward temporo-nasal motion. Furthermore, LM and NOT neurons are generally classified as tuned to "fast" or "slow" motion [10, 16, 17], and we predicted that most neurons would be tuned to slow visual motion as an adaptation for slow hovering. However, we found the opposite result: most hummingbird LM neurons are tuned to fast pattern velocities, compared to zebra finches and pigeons. Collectively, these results suggest a role in rapid responses during hovering, as well as in velocity control and collision avoidance during forward flight of hummingbirds. Copyright © 2017 Elsevier Ltd. All rights reserved.
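    The distribution of preferred directions reported here is typically computed per neuron as the rate-weighted circular (vector) mean of its direction tuning curve; a sketch under that assumption, with invented tuning data:

```python
import numpy as np

def preferred_direction(directions_deg, firing_rates):
    """Preferred direction as the circular (vector) mean of a tuning curve.

    directions_deg: tested motion directions; firing_rates: mean spike rates.
    Returns the angle of the rate-weighted resultant vector, in degrees [0, 360).
    """
    theta = np.deg2rad(np.asarray(directions_deg, dtype=float))
    rates = np.asarray(firing_rates, dtype=float)
    resultant = np.sum(rates * np.exp(1j * theta))
    return np.rad2deg(np.angle(resultant)) % 360.0

# Hypothetical cosine tuning curve peaking at 90 degrees:
dirs = np.arange(0, 360, 45)
rates = 10 + 8 * np.cos(np.deg2rad(dirs - 90))
assert abs(preferred_direction(dirs, rates) - 90.0) < 1e-6
```

    A uniform distribution of such preferred directions across the population (as in hummingbird LM) versus a temporo-nasal bias (as in zebra finch and pigeon) is then a population-level histogram of these angles.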

  5. Motion processing after sight restoration: No competition between visual recovery and auditory compensation.

    Science.gov (United States)

    Bottari, Davide; Kekunnaya, Ramesh; Hense, Marlene; Troje, Nikolaus F; Sourav, Suddha; Röder, Brigitte

    2018-02-15

    contrast, beta oscillatory activity in the auditory task, which varied as a function of SNR in all groups, was overall enhanced in congenital cataract reversal individuals. These results suggest that intramodal plasticity elicited by a transient phase of blindness was maintained and might mediate the prevailing auditory processing advantages in congenital cataract reversal individuals. By contrast, auditory and visual motion processing do not seem to compete for the same neural resources. We speculate that incomplete visual recovery is due to impaired neural network tuning, which seems to depend on early visual input. The present results demonstrate a privilege of the first-arriving input for shaping neural circuits mediating both auditory and visual functions. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Audiovisual associations alter the perception of low-level visual motion

    Directory of Open Access Journals (Sweden)

    Hulusi eKafaligonul

    2015-03-01

    Full Text Available Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions, which isolate low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system and that early-level visual motion processing also plays a role.
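    The reverse-phi stimulus used to isolate low-level motion processing can be sketched as a random dot kinematogram whose dots flip contrast polarity on every displacement, which low-level motion detectors read as motion in the opposite direction (a minimal sketch; names and parameters are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def rdk_frames(n_dots=100, n_frames=10, dx=1.0, reverse_phi=False):
    """Generate x-positions and contrast polarities for a random dot stimulus.

    Regular motion: dots keep their polarity across displacements.
    Reverse-phi: every dot's contrast polarity flips on each displacement.
    Returns a list of (positions, polarities) per frame.
    """
    x = rng.uniform(0, 100, n_dots)
    polarity = np.ones(n_dots)
    frames = []
    for _ in range(n_frames):
        frames.append((x.copy(), polarity.copy()))
        x = (x + dx) % 100.0          # physical displacement, rightward
        if reverse_phi:
            polarity = -polarity      # polarity inversion each frame
    return frames

frames = rdk_frames(reverse_phi=True)
# In reverse-phi motion the polarity alternates between consecutive frames:
assert np.all(frames[0][1] == -frames[1][1])
```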

  7. Filling gaps in visual motion for target capture

    Directory of Open Access Journals (Sweden)

    Gianfranco eBosco

    2015-02-01

    Full Text Available A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation.
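    The short-term-memory component of motion extrapolation can be illustrated with the simplest possible predictor: estimate velocity from the visible samples and project the last seen position through the occlusion (a sketch of the idea only, not a model from the review; names are ours):

```python
def extrapolate_through_occlusion(t_visible, x_visible, t_query):
    """Predict target position during an occlusion by constant-velocity
    extrapolation from the pre-occlusion samples.

    Fits velocity by least squares over the visible samples and projects
    the last seen position forward to the queried time.
    """
    n = len(t_visible)
    t_mean = sum(t_visible) / n
    x_mean = sum(x_visible) / n
    v = sum((t - t_mean) * (x - x_mean)
            for t, x in zip(t_visible, x_visible)) \
        / sum((t - t_mean) ** 2 for t in t_visible)
    t0, x0 = t_visible[-1], x_visible[-1]
    return x0 + v * (t_query - t0)

# Target moves at 5 deg/s and disappears at t = 1.0 s;
# where should gaze or the hand be at t = 1.4 s?
t_vis = [0.0, 0.25, 0.5, 0.75, 1.0]
x_vis = [0.0, 1.25, 2.5, 3.75, 5.0]
assert abs(extrapolate_through_occlusion(t_vis, x_vis, 1.4) - 7.0) < 1e-9
```

    The longer-term representations the review discusses (e.g. expectations about gravity or familiar object dynamics) would replace the constant-velocity assumption with a richer internal model.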

  8. Filling gaps in visual motion for target capture

    Science.gov (United States)

    Bosco, Gianfranco; Delle Monache, Sergio; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka; Lacquaniti, Francesco

    2015-01-01

    A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation. PMID:25755637

  10. Visual Motion Perception and Visual Attentive Processes.

    Science.gov (United States)

    1988-04-01

    George Sperling, New York University. Grant AFOSR 85-0364. [Abstract not recoverable; the record is OCR residue of a DTIC report form. Recoverable citations: Sperling, G. HIPS: A Unix-based image processing system. Computer Vision, Graphics, and Image Processing, 1984, 25, 331-347 ('HIPS is the Human Information Processing Laboratory's Image Processing System'); van Santen, J. P. H., & Sperling, G. Elaborated Reichardt detectors. Journal of the Optical Society of America A, 1985.]
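    The elaborated Reichardt detector cited in this record builds on a simple correlation scheme: each subunit multiplies one input by a delayed copy of its neighbor, and subtracting the mirror-image subunit gives a signed, direction-selective output. A minimal sketch (our own simplification, not the elaborated model):

```python
import numpy as np

def reichardt_response(left, right, delay=1):
    """Opponent output of a minimal Reichardt correlation detector.

    left/right: time series sampled at two adjacent spatial positions.
    Positive output indicates left-to-right motion, negative right-to-left.
    """
    l = np.asarray(left, dtype=float)
    r = np.asarray(right, dtype=float)
    rightward = l[:-delay] * r[delay:]   # left input leads right by `delay`
    leftward = r[:-delay] * l[delay:]    # right input leads left
    return float(np.mean(rightward) - np.mean(leftward))

# A pulse travelling left-to-right reaches the left input one sample
# before the right input:
pulse = np.zeros(10)
pulse[3] = 1.0
left = pulse
right = np.roll(pulse, 1)    # same pulse, one sample later
assert reichardt_response(left, right, delay=1) > 0
assert reichardt_response(right, left, delay=1) < 0
```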

  11. Visual motion detection and habitat preference in Anolis lizards.

    Science.gov (United States)

    Steinberg, David S; Leal, Manuel

    2016-11-01

    The perception of visual stimuli has been a major area of inquiry in sensory ecology, and much of this work has focused on coloration. However, for visually oriented organisms, the process of visual motion detection is often equally crucial to survival and reproduction. Despite the importance of motion detection to many organisms' daily activities, the degree of interspecific variation in the perception of visual motion remains largely unexplored. Furthermore, the factors driving this potential variation (e.g., ecology or evolutionary history) along with the effects of such variation on behavior are unknown. We used a behavioral assay under laboratory conditions to quantify the visual motion detection systems of three species of Puerto Rican Anolis lizard that prefer distinct structural habitat types. We then compared our results to data previously collected for anoles from Cuba, Puerto Rico, and Central America. Our findings indicate that general visual motion detection parameters are similar across species, regardless of habitat preference or evolutionary history. We argue that these conserved sensory properties may drive the evolution of visual communication behavior in this clade.

  12. Visual motion perception predicts driving hazard perception ability.

    Science.gov (United States)

    Lacherez, Philippe; Au, Sandra; Wood, Joanne M

    2014-02-01

    To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity, and to examine whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests including visual acuity, contrast sensitivity and automated visual fields, and two tests of motion perception: sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured participants' response times to hazards embedded in video recordings of real-world driving and has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.
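    For concreteness, the reported Dmin values are in log10 minutes of arc, so the displacement thresholds themselves span roughly 0.13 to 0.76 arcmin (variable names are ours):

```python
# Dmin is reported in log10 minutes of arc; convert the reported range
# of minimum displacement thresholds back to minutes of arc:
dmin_log = [-0.88, -0.12]
dmin_minarc = [10.0 ** v for v in dmin_log]
assert 0.13 < dmin_minarc[0] < 0.14   # most sensitive observer, ~0.13 arcmin
assert 0.75 < dmin_minarc[1] < 0.76   # least sensitive observer, ~0.76 arcmin
```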

  13. Images of illusory motion in primary visual cortex

    DEFF Research Database (Denmark)

    Larsen, A.; Madsen, Kristoffer Hougaard; Lund, T.E.

    2006-01-01

    Illusory motion can be generated by successively flashing a stationary visual stimulus in two spatial locations separated by several degrees of visual angle. In appropriate conditions, the apparent motion is indistinguishable from real motion: The observer experiences a luminous object traversing a continuous path from one stimulus location to the other through intervening positions where no physical stimuli exist. The phenomenon has been extensively investigated for nearly a century but little is known about its neurophysiological foundation. Here we present images of activations in the primary visual...

  14. Experience-dependent plasticity from eye opening enables lasting, visual cortex-dependent enhancement of motion vision.

    Science.gov (United States)

    Prusky, Glen T; Silver, Byron D; Tschetter, Wayne W; Alam, Nazia M; Douglas, Robert M

    2008-09-24

    Developmentally regulated plasticity of vision has generally been associated with "sensitive" or "critical" periods in juvenile life, wherein visual deprivation leads to loss of visual function. Here we report an enabling form of visual plasticity that commences in infant rats from eye opening, in which daily threshold testing of optokinetic tracking, amid otherwise normal visual experience, stimulates enduring, visual cortex-dependent enhancement (>60%) of the spatial frequency threshold for tracking. The perceptual ability to use spatial frequency in discriminating between moving visual stimuli is also improved by the testing experience. The capacity for inducing enhancement is transitory and effectively limited to infancy; however, enhanced responses are not consolidated and maintained unless in-kind testing experience continues uninterrupted into juvenile life. The data show that selective visual experience from infancy can alone enable visual function. They also indicate that plasticity associated with visual deprivation may not be the only cause of developmental visual dysfunction, because we found that experientially inducing enhancement in late infancy, without subsequent reinforcement of the experience in early juvenile life, can lead to enduring loss of function.

  15. Kinesthetic information disambiguates visual motion signals.

    Science.gov (United States)

    Hu, Bo; Knill, David C

    2010-05-25

    Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction. Copyright 2010 Elsevier Ltd. All rights reserved.
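    The aperture problem at the heart of this study can be stated compactly: a 1D grating constrains only the velocity component along its normal (v · n = s), so many direction/speed pairs are equally consistent, and the hand's movement direction can select among them (a sketch; function and variable names are ours):

```python
import numpy as np

def consistent_velocities(normal_dir_deg, normal_speed, candidate_dirs_deg):
    """Speeds along candidate directions consistent with a 1D grating's motion.

    The grating only constrains the velocity component along its normal
    (v . n = s), so every direction within 90 degrees of the normal admits
    a consistent speed s / cos(angle to the normal).
    """
    n = np.deg2rad(normal_dir_deg)
    speeds = {}
    for d_deg in candidate_dirs_deg:
        c = np.cos(np.deg2rad(d_deg) - n)
        if c > 1e-9:
            speeds[d_deg] = normal_speed / c
    return speeds

# Grating normal at 0 deg, drifting at 2 deg/s along the normal:
s = consistent_velocities(0.0, 2.0, [0, 45, 60])
assert abs(s[0] - 2.0) < 1e-9                # default percept: normal direction
assert abs(s[45] - 2.0 * 2 ** 0.5) < 1e-9    # a 45-deg hand movement is equally consistent
```

    Kinesthetic capture, in this framing, is the visual system picking the solution on this constraint line that matches the hand's movement direction.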

  16. Visualization system of swirl motion

    International Nuclear Information System (INIS)

    Nakayama, K.; Umeda, K.; Ichikawa, T.; Nagano, T.; Sakata, H.

    2004-01-01

    A system composed of an experimental device and numerical analysis is presented to visualize flow and identify swirling motion. The experiment is performed with a transparent material and PIV (Particle Image Velocimetry) instrumentation, from which the velocity vector field is obtained. This vector field is then analyzed numerically by 'swirling flow analysis', which estimates its velocity gradient tensor and the corresponding eigenvalue (the swirling function). Since PIV captures an instantaneous flow field in steady or unsteady states, the flow field can be analyzed and the existence and locations of vortices or swirling motions identified, regardless of their size; the intensity of swirling is also evaluated. The analysis allows swirling motion to emerge even when it is hidden in a uniform flow whose velocity field does not itself indicate any swirling. This visualization system can be applied to investigate conditions for controlling or designing flow. (authors)
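    The 'swirling flow analysis' described here can be sketched for a 2D velocity field: the imaginary part of the velocity gradient tensor's eigenvalues (the standard swirling-strength criterion) is positive inside a vortex and zero in pure shear, which is what lets swirling emerge even when hidden in a uniform flow (a simplified sketch, not the authors' code):

```python
import numpy as np

def swirling_strength(dudx, dudy, dvdx, dvdy):
    """Swirling function of a 2D velocity gradient tensor.

    Returns the magnitude of the imaginary part of the tensor's eigenvalues:
    positive inside a vortex (complex eigenvalues), zero in pure shear or
    uniform flow (real eigenvalues).
    """
    J = np.array([[dudx, dudy], [dvdx, dvdy]])
    eig = np.linalg.eigvals(J)
    return float(np.max(np.abs(eig.imag)))

# Solid-body rotation u = -omega*y, v = omega*x: swirling at rate omega = 2.
assert abs(swirling_strength(0.0, -2.0, 2.0, 0.0) - 2.0) < 1e-9
# Pure shear u = k*y, v = 0: real eigenvalues, no swirl.
assert swirling_strength(0.0, 3.0, 0.0, 0.0) < 1e-12
```

    On PIV data the gradients would come from finite differences of the measured vector field, evaluated at every grid point to map vortex locations.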

  17. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion.

    Science.gov (United States)

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-05-05

    In mobile augmented/virtual reality (AR/VR), real-time 6-degree-of-freedom (DoF) motion tracking is essential for registration between virtual scenes and the real world. However, due to the limited computational capacity of today's mobile terminals, the latency between consecutively arriving poses degrades the user experience in mobile AR/VR. Thus, a visual-inertial-based real-time motion tracking method for mobile AR/VR is proposed in this paper. By means of the high-frequency, passive outputs of the inertial sensor, real-time delivery of poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling real-time 6-DoF motion tracking that balances jitter against latency. Furthermore, the robustness of traditional visual-only motion tracking is enhanced, giving better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that it is capable of providing smooth and robust 6-DoF motion tracking for mobile AR/VR in real time.
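    The jitter/latency trade-off the abstract describes can be illustrated with a toy adaptive complementary filter that weights the low-latency inertial pose more during fast motion and the stable visual pose more during slow motion (a sketch of the general idea only; the paper's actual filter framework is more elaborate, and all names and gains here are ours):

```python
def fuse_pose(visual_pose, inertial_pose, motion_speed,
              k_slow=0.2, k_fast=0.8):
    """One step of a toy adaptive complementary filter for a 1D pose.

    Blends the drift-free but laggy visual pose with the low-latency
    inertial pose: weight the inertial side more when motion is fast
    (to cut latency), the visual side more when motion is slow
    (to cut jitter).
    """
    k = k_fast if motion_speed > 1.0 else k_slow   # crude adaptation rule
    return k * inertial_pose + (1.0 - k) * visual_pose

# Slow motion: output stays close to the stable visual estimate.
assert abs(fuse_pose(10.0, 10.5, motion_speed=0.2) - 10.1) < 1e-9
# Fast motion: output follows the fresher inertial estimate.
assert abs(fuse_pose(10.0, 12.0, motion_speed=3.0) - 11.6) < 1e-9
```

    A real 6-DoF implementation would apply this blending per degree of freedom (with quaternion interpolation for orientation) and adapt the gain continuously rather than by a single threshold.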

  18. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    Science.gov (United States)

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.
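    The aggregation MotionFlow builds on, formulating each sequence as transitions between static poses and pooling sequences into shared patterns, can be sketched as transition counting over discrete pose states (our own simplification; the pose names are invented):

```python
from collections import Counter

def aggregate_transitions(sequences):
    """Aggregate motion sequences, each a list of discrete pose states,
    into pose-to-pose transition counts: the data behind a flow diagram
    where shared prefixes merge and patterns branch apart.
    """
    counts = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
    return counts

gestures = [
    ["rest", "raise", "swipe", "rest"],
    ["rest", "raise", "hold", "rest"],
    ["rest", "raise", "swipe", "rest"],
]
counts = aggregate_transitions(gestures)
assert counts[("rest", "raise")] == 3    # all three gestures share this prefix
assert counts[("raise", "swipe")] == 2   # branch point: two patterns diverge
```

    In the full system the discrete pose states themselves come from a partition-based clustering of raw tracking data that users can steer interactively.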

  19. Implied motion language can influence visual spatial memory.

    Science.gov (United States)

    Vinson, David W; Engelen, Jan; Zwaan, Rolf A; Matlock, Teenie; Dale, Rick

    2017-07-01

    How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it actually is, can be explained as perceptual achievements that are driven by our ability to anticipate future events. In two experiments, we tested whether the prior presentation of motion language influences visual spatial memory in ways that afford greater perceptual prediction. Experiment 1 showed that motion language influenced judgments for the spatial memory of an object beyond the known effects of implied motion present in the image itself. Experiment 2 replicated this finding. Our findings support a theory of perception as prediction.

  20. Visual Hierarchy and Mind Motion in Advertising Design

    Directory of Open Access Journals (Sweden)

    Doaa Farouk Badawy Eldesouky

    2013-06-01

    Full Text Available Visual hierarchy is a significant concept in the field of advertising, a field dominated by effective communication, visual recognition and motion. Designers of advertisements have always tried to organize the visual hierarchy of their designs to aid the eye in recognizing information in the desired order, achieving the ultimate goals of clear perception and effective delivery of the advertising message. However, many assumptions and questions usually arise about how to create effective hierarchy in advertising designs and lead the eye and mind of the viewer in the most favorable way. This paper studies visual hierarchy and mind motion in advertising designs and why it is important to develop visual paths when designing an advertisement. It explores the theory behind them and how these principles can be put into practice. The paper analyzes several advertising samples that apply visual hierarchy and mind motion, demonstrating the application of the basics and discussing the results.

  2. Precision of working memory for visual motion sequences and transparent motion surfaces.

    Science.gov (United States)

    Zokaei, Nahid; Gorgoraptis, Nikos; Bahrami, Bahador; Bays, Paul M; Husain, Masud

    2011-12-01

    Recent studies investigating working memory for location, color, and orientation support a dynamic resource model. We examined whether this might also apply to motion, using random dot kinematograms (RDKs) presented sequentially or simultaneously. Mean precision for motion direction declined as sequence length increased, with precision being lower for earlier RDKs. Two alternative models of working memory were compared specifically to distinguish between the contributions of different sources of error that corrupt memory (W. Zhang & S. J. Luck, 2008 vs. P. M. Bays, R. F. G. Catalao, & M. Husain, 2009). The latter provided a significantly better fit for the data, revealing that decrease in memory precision for earlier items is explained by an increase in interference from other items in a sequence rather than random guessing or a temporal decay of information. Misbinding feature attributes is an important source of error in working memory. Precision of memory for motion direction decreased when two RDKs were presented simultaneously as transparent surfaces, compared to sequential RDKs. However, precision was enhanced when one motion surface was prioritized, demonstrating that selective attention can improve recall precision. These results are consistent with a resource model that can be used as a general conceptual framework for understanding working memory across a range of visual features.
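    The model comparison described here contrasts mixture accounts of recall error; the Bays-style account models each response as a draw from a target component (von Mises around the true direction), a non-target (misbinding) component, or a uniform guess. A stdlib-only sketch of such a three-component density (parameter values are invented; this is an illustration, not the authors' fitting code):

```python
import math

def bessel_i0(x, terms=25):
    """Modified Bessel function I0 via its power series (stdlib-only)."""
    return sum((x / 2.0) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def von_mises_pdf(x, kappa):
    """Von Mises density on the circle, centered at 0."""
    return math.exp(kappa * math.cos(x)) / (2 * math.pi * bessel_i0(kappa))

def mixture_density(error, p_t, p_nt, p_u, nontarget_offsets, kappa):
    """Density of a recall error (radians) under a three-component mixture:
    target report, misbinding to a non-target item, or a uniform guess.
    """
    assert abs(p_t + p_nt + p_u - 1.0) < 1e-9
    target = p_t * von_mises_pdf(error, kappa)
    nontarget = 0.0
    if nontarget_offsets:
        nontarget = p_nt * sum(von_mises_pdf(error - d, kappa)
                               for d in nontarget_offsets) / len(nontarget_offsets)
    guess = p_u / (2 * math.pi)
    return target + nontarget + guess

# More interference from other items (higher misbinding probability) lowers
# the density at the target direction -- the error source the fits favored
# over random guessing or temporal decay.
low_interference = mixture_density(0.0, 0.9, 0.05, 0.05, [math.pi], 8.0)
high_interference = mixture_density(0.0, 0.6, 0.35, 0.05, [math.pi], 8.0)
assert high_interference < low_interference
```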

  3. Trend-Centric Motion Visualization: Designing and Applying a New Strategy for Analyzing Scientific Motion Collections.

    Science.gov (United States)

    Schroeder, David; Korsakov, Fedor; Knipe, Carissa Mai-Ping; Thorson, Lauren; Ellingson, Arin M; Nuckley, David; Carlis, John; Keefe, Daniel F

    2014-12-01

    In biomechanics studies, researchers collect, via experiments or simulations, datasets with hundreds or thousands of trials, each describing the same type of motion (e.g., a neck flexion-extension exercise) but under different conditions (e.g., different patients, different disease states, pre- and post-treatment). Analyzing similarities and differences across all of the trials in these collections is a major challenge. Visualizing a single trial at a time does not work, and the typical alternative of juxtaposing multiple trials in a single visual display leads to complex, difficult-to-interpret visualizations. We address this problem via a new strategy that organizes the analysis around motion trends rather than trials. This new strategy matches the cognitive approach that scientists would like to take when analyzing motion collections. We introduce several technical innovations making trend-centric motion visualization possible. First, an algorithm detects a motion collection's trends via time-dependent clustering. Second, a 2D graphical technique visualizes how trials leave and join trends. Third, a 3D graphical technique, using a median 3D motion plus a visual variance indicator, visualizes the biomechanics of the set of trials within each trend. These innovations are combined to create an interactive exploratory visualization tool, which we designed through an iterative process in collaboration with both domain scientists and a traditionally-trained graphic designer. We report on insights generated during this design process and demonstrate the tool's effectiveness via a validation study with synthetic data and feedback from expert musculoskeletal biomechanics researchers who used the tool to analyze the effects of disc degeneration on human spinal kinematics.

  4. Keep your eyes on the ball: smooth pursuit eye movements enhance prediction of visual motion.

    Science.gov (United States)

    Spering, Miriam; Schütz, Alexander C; Braun, Doris I; Gegenfurtner, Karl R

    2011-04-01

    Success of motor behavior often depends on the ability to predict the path of moving objects. Here we asked whether tracking a visual object with smooth pursuit eye movements helps to predict its motion direction. We developed a paradigm, "eye soccer," in which observers had to either track or fixate a visual target (ball) and judge whether it would have hit or missed a stationary vertical line segment (goal). Ball and goal were presented briefly for 100-500 ms and disappeared from the screen together before the perceptual judgment was prompted. In pursuit conditions, the ball moved towards the goal; in fixation conditions, the goal moved towards the stationary ball, resulting in similar retinal stimulation during pursuit and fixation. We also tested the condition in which the goal was fixated and the ball moved. Motion direction prediction was significantly better in pursuit than in fixation trials, regardless of whether ball or goal served as fixation target. In both fixation and pursuit trials, prediction performance was better when eye movements were accurate. Performance also increased with shorter ball-goal distance and longer presentation duration. A longer trajectory did not affect performance. During pursuit, an efference copy signal might provide additional motion information, leading to the advantage in motion prediction.
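    The 'eye soccer' judgment itself reduces to linear extrapolation: project the ball's motion direction to the goal's plane and test whether the crossing point falls inside the segment (a sketch; coordinates and names are ours):

```python
def hits_goal(ball_xy, velocity_xy, goal_x, goal_y_min, goal_y_max):
    """Would the ball's linear trajectory cross the goal segment?

    Extrapolates the ball's motion to the goal's x position and tests
    whether the crossing height lies within the segment.
    """
    (x, y), (vx, vy) = ball_xy, velocity_xy
    if vx == 0 or (goal_x - x) / vx < 0:
        return False               # ball never reaches the goal plane
    t = (goal_x - x) / vx          # time to reach the goal's x position
    y_at_goal = y + vy * t
    return goal_y_min <= y_at_goal <= goal_y_max

# Ball at the origin heading toward a goal at x = 10, y in [1, 3]:
assert hits_goal((0.0, 0.0), (2.0, 0.4), 10.0, 1.0, 3.0)       # crosses at y = 2: hit
assert not hits_goal((0.0, 0.0), (2.0, 1.0), 10.0, 1.0, 3.0)   # crosses at y = 5: miss
```

    The study's finding is that observers make this judgment more accurately while pursuing the ball than while fixating, consistent with an efference copy signal adding motion information during pursuit.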

  5. Cross-sensory facilitation reveals neural interactions between visual and tactile motion in humans

    Directory of Open Access Journals (Sweden)

Monica Gori

    2011-04-01

Many recent studies show that the human brain integrates information across the different senses and that stimuli of one sensory modality can enhance the perception of other modalities. Here we study the processes that mediate cross-modal facilitation and summation between visual and tactile motion. We find that while summation produced a generic, non-specific improvement of thresholds, probably reflecting higher-order interaction of decision signals, facilitation reveals a strong, direction-specific interaction, which we believe reflects sensory interactions. We measured visual and tactile velocity discrimination thresholds over a wide range of base velocities and conditions. Thresholds for both visual and tactile stimuli showed the characteristic dipper function, with the minimum thresholds occurring at a given pedestal speed. When visual and tactile coherent stimuli were combined (summation condition), the thresholds for these multi-sensory stimuli also showed a dipper function, with the minimum thresholds occurring in a similar range to that for unisensory signals. However, the improvement of multisensory thresholds was weak and not directionally specific, and was well predicted by the maximum likelihood estimation model (agreeing with previous research). A different technique (facilitation) did, however, reveal direction-specific enhancement. Adding a non-informative pedestal motion stimulus in one sensory modality (vision or touch) selectively lowered thresholds in the other, by the same amount as pedestals in the same modality. Facilitation did not occur for neutral stimuli like sounds (which would also have reduced temporal uncertainty), nor for motion in the opposite direction, even in blocked trials where the subjects knew that the motion was in the opposite direction, showing that the facilitation was not under subject control. Cross-sensory facilitation is strong evidence for functionally relevant cross-sensory integration at early levels of sensory processing.

  6. The roles of non-retinotopic motions in visual search

    Directory of Open Access Journals (Sweden)

Ryohei Nakayama

    2016-06-01

In visual search, a moving target among stationary distracters is detected more rapidly and more efficiently than a static target among moving distracters. Here we examined how this search asymmetry depends on motion signals from three distinct coordinate systems – retinal, relative, and spatiotopic (head/body-centered). Our search display consisted of a target element, distracter elements, and a fixation point tracked by observers. Each element was composed of a spatial carrier grating windowed by a Gaussian envelope, and the motions of carriers, windows, and fixation were manipulated independently and used in various combinations to decouple the respective effects of the motion coordinate systems on visual search asymmetry. We found that retinal motion hardly contributes to reaction times and search slopes but that relative and spatiotopic motions contribute to them substantially. The results highlight the important roles of non-retinotopic motions in guiding observer attention during visual search.

  7. Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.

    Science.gov (United States)

    Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi

    2017-07-01

Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion in static visual stimuli (sound-induced visual motion: SIVM): a visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activity in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually induced motion perception, and that neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. The notion of the motion: the neurocognition of motion lines in visual narratives.

    Science.gov (United States)

    Cohn, Neil; Maher, Stephen

    2015-03-19

Motion lines appear ubiquitously in graphic representation to depict the path of a moving object, most popularly in comics. Some researchers have argued that these graphic signs tie directly to the "streaks" that appear in the visual system when a viewer tracks an object (Burr, 2000), despite the fact that previous studies have been limited to offline measurements. Here, we directly examine the cognition of motion lines by comparing images in comic strips that depicted normal motion lines with those that had either no lines or anomalous, reversed lines. In Experiment 1, viewing times were shorter for images with normal lines than for those with no lines, which in turn were shorter than for those with anomalous lines. In Experiment 2, measurements of event-related potentials (ERPs) showed that, compared to normal lines, panels with no lines elicited a posterior positivity that was distinct from the frontal positivity evoked by anomalous lines. These results suggest that motion lines aid in the comprehension of depicted events. LORETA source localization implicated greater activation of visual and language areas when understanding was made more difficult by anomalous lines. Furthermore, in both experiments, participants' experience reading comics modulated these effects, suggesting that motion lines are not tied to aspects of the visual system, but rather are conventionalized parts of the "vocabulary" of the visual language of comics. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Visualization of Kepler's Laws of Planetary Motion

    Science.gov (United States)

    Lu, Meishu; Su, Jun; Wang, Weiguo; Lu, Jianlong

    2017-01-01

For this article, we use a 3D printer to print a surface that approximates a gravitational potential well for demonstrating and investigating Kepler's laws of planetary motion, describing the motion of a small ball on the surface. This novel experimental method allows Kepler's laws of planetary motion to be visualized and will contribute to improving the…

  10. Visual Motion Processing Subserves Faster Visuomotor Reaction in Badminton Players.

    Science.gov (United States)

    Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas

    2017-06-01

    Athletes participating in ball or racquet sports have to respond to visual stimuli under critical time pressure. Previous studies used visual contrast stimuli to determine visual perception and visuomotor reaction in athletes and nonathletes; however, ball and racquet sports are characterized by motion rather than contrast visual cues. Because visual contrast and motion signals are processed in different cortical regions, this study aimed to determine differences in perception and processing of visual motion between athletes and nonathletes. Twenty-five skilled badminton players and 28 age-matched nonathletic controls participated in this study. Using a 64-channel EEG system, we investigated visual motion perception/processing in the motion-sensitive middle temporal (MT) cortical area in response to radial motion of different velocities. In a simple visuomotor reaction task, visuomotor transformation in Brodmann area 6 (BA6) and BA4 as well as muscular activation (EMG onset) and visuomotor reaction time (VMRT) were investigated. Stimulus- and response-locked potentials were determined to differentiate between perceptual and motor-related processes. As compared with nonathletes, athletes showed earlier EMG onset times (217 vs 178 ms, P < 0.001), accompanied by a faster VMRT (274 vs 243 ms, P < 0.001). Furthermore, athletes showed an earlier stimulus-locked peak activation of MT (200 vs 182 ms, P = 0.002) and BA6 (161 vs 137 ms, P = 0.009). Response-locked peak activation in MT was later in athletes (-7 vs 26 ms, P < 0.001), whereas no group differences were observed in BA6 and BA4. Multiple regression analyses with stimulus- and response-locked cortical potentials predicted EMG onset (r = 0.83) and VMRT (r = 0.77). The athletes' superior visuomotor performance in response to visual motion is primarily related to visual perception and, to a minor degree, to motor-related processes.

  11. Visual gravitational motion and the vestibular system in humans

    Directory of Open Access Journals (Sweden)

Francesco Lacquaniti

    2013-12-01

The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity.

  12. Visual gravitational motion and the vestibular system in humans.

    Science.gov (United States)

    Lacquaniti, Francesco; Bosco, Gianfranco; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Moscatelli, Alessandro; Zago, Myrka

    2013-12-26

    The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity.

  13. Vection and visually induced motion sickness: How are they related?

    Directory of Open Access Journals (Sweden)

Behrang Keshavarz

    2015-04-01

The occurrence of visually induced motion sickness has frequently been linked to the sensation of illusory self-motion (so-called vection); however, the precise nature of this relationship is still not fully understood. To date, it is still a matter of debate whether or not vection is a necessary prerequisite for visually induced motion sickness (VIMS). That is, can there be visually induced motion sickness without any sensation of self-motion? In this paper, we describe the possible nature of this relationship, review the literature that may speak to it (including theoretical accounts of vection and VIMS), and offer suggestions with respect to operationally defining and reporting these phenomena in the future.

  14. Integration of Visual and Vestibular Information Used to Discriminate Rotational Self-Motion

    Directory of Open Access Journals (Sweden)

    Florian Soyka

    2011-10-01

Do humans integrate visual and vestibular information in a statistically optimal fashion when discriminating rotational self-motion stimuli? Recent studies are inconclusive as to whether such integration occurs when discriminating heading direction. In the present study, eight participants were consecutively rotated twice (2 s sinusoidal acceleration) on a chair about an earth-vertical axis in vestibular-only, visual-only, and visual-vestibular trials. The visual stimulus was a video of a moving stripe pattern, synchronized with the inertial motion. Peak acceleration of the reference stimulus was varied, and participants reported which rotation was perceived as faster. Just-noticeable differences (JNDs) were estimated by fitting psychometric functions. The visual-vestibular JND measurements are too high compared to the predictions based on the unimodal JND estimates, and there is no JND reduction between the visual-vestibular and visual-alone estimates. These findings may be explained by visual capture. Alternatively, the visual precision may not be equal between the visual-vestibular and visual-alone conditions, since it has been shown that visual motion sensitivity is reduced during inertial self-motion. Therefore, measuring visual-alone JNDs with an underlying uncorrelated inertial motion might yield higher visual-alone JNDs compared to the stationary measurement. Theoretical calculations show that higher visual-alone JNDs would result in predictions consistent with the JND measurements for the visual-vestibular condition.
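The statistically optimal prediction tested in this record is the standard maximum-likelihood cue-combination rule: the bimodal JND should fall below both unimodal JNDs. A minimal sketch (the JND values are invented for illustration, not the study's data):

```python
import math

# Maximum-likelihood (optimal) integration predicts the bimodal variance:
#   sigma_bi^2 = (sigma_vis^2 * sigma_vest^2) / (sigma_vis^2 + sigma_vest^2)
# so the predicted bimodal JND is always below either unimodal JND.

def predicted_bimodal_jnd(jnd_visual, jnd_vestibular):
    return math.sqrt((jnd_visual**2 * jnd_vestibular**2)
                     / (jnd_visual**2 + jnd_vestibular**2))

jnd_vis, jnd_vest = 0.8, 1.2   # hypothetical unimodal JNDs (deg/s)
jnd_both = predicted_bimodal_jnd(jnd_vis, jnd_vest)
print(round(jnd_both, 3))      # below min(jnd_vis, jnd_vest)
```

The study's point is that the measured visual-vestibular JNDs exceeded this prediction, which is why the authors consider visual capture and condition-dependent visual precision as explanations.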

15. Visual Form Cues, Biological Motions, Auditory Cues, and Even Olfactory Cues Interact to Affect Visual Sex Discriminations

    OpenAIRE

    Rick Van Der Zwan; Anna Brooks; Duncan Blair; Coralia Machatch; Graeme Hacker

    2011-01-01

    Johnson and Tassinary (2005) proposed that visually perceived sex is signalled by structural or form cues. They suggested also that biological motion cues signal sex, but do so indirectly. We previously have shown that auditory cues can mediate visual sex perceptions (van der Zwan et al., 2009). Here we demonstrate that structural cues to body shape are alone sufficient for visual sex discriminations but that biological motion cues alone are not. Interestingly, biological motions can resolve ...

  16. Usage of stereoscopic visualization in the learning contents of rotational motion.

    Science.gov (United States)

    Matsuura, Shu

    2013-01-01

Rotational motion plays an essential role in physics, even at an introductory level. In addition, the stereoscopic display of three-dimensional graphics is advantageous for the presentation of rotational motions, particularly for depth recognition. However, the immersive visualization of rotational motion has been known to lead to dizziness and even nausea for some viewers. Therefore, the purpose of this study is to examine the onset of nausea and visual fatigue when learning rotational motion through the use of a stereoscopic display. The findings show that an instruction method with intermittent exposure to the stereoscopic display and a simplification of its visual components reduced the onset of nausea and visual fatigue for the viewers, while maintaining the overall effect of instantaneous spatial recognition.

  17. Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain.

    Science.gov (United States)

    Schindler, Andreas; Bartels, Andreas

    2018-05-15

Our phenomenological experience of a stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. We here circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable air cushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion, which was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing congruent with incongruent conditions, we found evidence consistent with the multimodal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv), and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Peripheral vision of youths with low vision: motion perception, crowding, and visual search.

    Science.gov (United States)

    Tadin, Duje; Nyquist, Jeffrey B; Lusk, Kelly E; Corn, Anne L; Lappin, Joseph S

    2012-08-24

    Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10-17) and low vision (n = 24, ages 9-18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function.

  19. S4-3: Spatial Processing of Visual Motion

    Directory of Open Access Journals (Sweden)

    Shin'ya Nishida

    2012-10-01

Local motion signals are extracted in parallel by a bank of motion detectors, and their spatiotemporal interactions are processed in subsequent stages. In this talk, I will review our recent studies on spatial interactions in visual motion processing. First, we found two types of spatial pooling of local motion signals. Directionally ambiguous 1D local motion signals are pooled across orientation and space for solution of the aperture problem, while 2D local motion signals are pooled for estimation of the global vector average (e.g., Amano et al., 2009, Journal of Vision, 9(3):4, 1–25). Second, when stimulus presentation is brief, coherent motion detection in a dynamic random-dot kinematogram is not efficient. Nevertheless, it is significantly improved by transient and synchronous presentation of a stationary surround pattern. This suggests that centre-surround spatial interaction may help rapid perception of motion (Linares et al., submitted). Third, to learn how the visual system encodes pairwise relationships between remote motion signals, we measured the temporal rate limit for perceiving the relationship of two motion directions presented at the same time at different spatial locations. Compared with similar tasks with luminance or orientation signals, motion comparison was more rapid and hence efficient. This high performance was affected little by inter-element separation, even when it was increased up to 100 deg. These findings indicate the existence of specialized processes that encode long-range relationships between motion signals for quick appreciation of global dynamic scene structure (Maruya et al., in preparation).

  20. Peripheral Vision of Youths with Low Vision: Motion Perception, Crowding, and Visual Search

    Science.gov (United States)

    Tadin, Duje; Nyquist, Jeffrey B.; Lusk, Kelly E.; Corn, Anne L.; Lappin, Joseph S.

    2012-01-01

    Purpose. Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. Methods. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10–17) and low vision (n = 24, ages 9–18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. Results. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Conclusions. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function. PMID:22836766

  1. Task-specific impairments and enhancements induced by magnetic stimulation of human visual area V5.

    Science.gov (United States)

    Walsh, V; Ellison, A; Battelli, L; Cowey, A

    1998-03-22

    Transcranial magnetic stimulation (TMS) can be used to simulate the effects of highly circumscribed brain damage permanently present in some neuropsychological patients, by reversibly disrupting the normal functioning of the cortical area to which it is applied. By using TMS we attempted to recreate deficits similar to those reported in a motion-blind patient and to assess the specificity of deficits when TMS is applied over human area V5. We used six visual search tasks and showed that subjects were impaired in a motion but not a form 'pop-out' task when TMS was applied over V5. When motion was present, but irrelevant, or when attention to colour and form were required, TMS applied to V5 enhanced performance. When attention to motion was required in a motion-form conjunction search task, irrespective of whether the target was moving or stationary, TMS disrupted performance. These data suggest that attention to different visual attributes involves mutual inhibition between different extrastriate visual areas.

  2. Representation of visual gravitational motion in the human vestibular cortex.

    Science.gov (United States)

    Indovina, Iole; Maffei, Vincenzo; Bosco, Gianfranco; Zago, Myrka; Macaluso, Emiliano; Lacquaniti, Francesco

    2005-04-15

    How do we perceive the visual motion of objects that are accelerated by gravity? We propose that, because vision is poorly sensitive to accelerations, an internal model that calculates the effects of gravity is derived from graviceptive information, is stored in the vestibular cortex, and is activated by visual motion that appears to be coherent with natural gravity. The acceleration of visual targets was manipulated while brain activity was measured using functional magnetic resonance imaging. In agreement with the internal model hypothesis, we found that the vestibular network was selectively engaged when acceleration was consistent with natural gravity. These findings demonstrate that predictive mechanisms of physical laws of motion are represented in the human brain.

  3. Implied motion language can influence visual spatial memory

    NARCIS (Netherlands)

    Vinson, David; Engelen, Jan; Zwaan, Rolf A; Matlock, Teenie; Dale, Rick

How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? And what about phenomena typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it actually is?

  4. Absence of direction-specific cross-modal visual-auditory adaptation in motion-onset event-related potentials.

    Science.gov (United States)

    Grzeschik, Ramona; Lewald, Jörg; Verhey, Jesko L; Hoffmann, Michael B; Getzmann, Stephan

    2016-01-01

Adaptation to visual or auditory motion affects within-modality motion processing as reflected by visual or auditory free-field motion-onset evoked potentials (VEPs, AEPs). Here, a visual-auditory motion adaptation paradigm was used to investigate the effect of visual motion adaptation on VEPs and AEPs to leftward motion-onset test stimuli. Effects of visual adaptation to (i) scattered light flashes, and motion in the (ii) same or in the (iii) opposite direction of the test stimulus were compared. For the motion-onset VEPs, i.e. the intra-modal adaptation conditions, direction-specific adaptation was observed – the change-N2 (cN2) and change-P2 (cP2) amplitudes were significantly smaller after motion adaptation in the same than in the opposite direction. For the motion-onset AEPs, i.e. the cross-modal adaptation condition, there was an effect of motion history only in the change-P1 (cP1), and this effect was not direction-specific – cP1 was smaller after scatter than after motion adaptation to either direction. No effects were found for later components of motion-onset AEPs. While the VEP results provided clear evidence for the existence of a direction-specific effect of motion adaptation within the visual modality, the AEP findings suggested merely a motion-related, but not a direction-specific, effect. In conclusion, the adaptation of veridical auditory motion detectors by visual motion is not reflected by the AEPs of the present study. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  5. Implied motion because of instability in Hokusai Manga activates the human motion-sensitive extrastriate visual cortex: an fMRI study of the impact of visual art.

    Science.gov (United States)

    Osaka, Naoyuki; Matsuyoshi, Daisuke; Ikeda, Takashi; Osaka, Mariko

    2010-03-10

The recent development of cognitive neuroscience has invited inference about the neurosensory events underlying the experience of visual arts involving implied motion. We report a functional magnetic resonance imaging study demonstrating activation of the human extrastriate motion-sensitive cortex by static images showing implied motion because of instability. We used static line-drawing cartoons of humans by Hokusai Katsushika (called 'Hokusai Manga'), an outstanding Japanese cartoonist as well as a famous Ukiyo-e artist. We found that 'Hokusai Manga' with implied motion, depicting human bodies engaged in challenging tonic postures, significantly activated the motion-sensitive visual cortex, including MT+, in the human extrastriate cortex, while an illustration that does not imply motion, for either humans or objects, did not activate these areas under the same tasks. We conclude that the motion-sensitive extrastriate cortex is a critical region for the perception of implied motion due to instability.

  6. Visual Benefits in Apparent Motion Displays: Automatically Driven Spatial and Temporal Anticipation Are Partially Dissociated.

    Directory of Open Access Journals (Sweden)

    Merle-Marie Ahrens

Many behaviourally relevant sensory events, such as motion stimuli and speech, have an intrinsic spatio-temporal structure. This engages intentional and most likely unintentional (automatic) prediction mechanisms that enhance the perception of upcoming stimuli in the event stream. Here we sought to probe the anticipatory processes that are automatically driven by rhythmic input streams in terms of their spatial and temporal components. To this end, we employed an apparent visual motion paradigm testing the effects of pre-target motion on lateralized visual target discrimination. The motion stimuli either moved towards or away from peripheral target positions (valid vs. invalid spatial motion cueing) at a rhythmic or arrhythmic pace (valid vs. invalid temporal motion cueing). Crucially, we emphasized automatic motion-induced anticipatory processes by rendering the motion stimuli non-predictive of the upcoming target position (by design) and task-irrelevant (by instruction), and by creating instead endogenous (orthogonal) expectations using symbolic cueing. Our data revealed that the apparent motion cues automatically engaged both spatial and temporal anticipatory processes, but that these processes were dissociated. We further found evidence for lateralisation of anticipatory temporal but not spatial processes. This indicates that distinct mechanisms may drive the automatic spatial and temporal extrapolation of upcoming events from rhythmic event streams. This contrasts with previous findings that instead suggest an interaction between spatial and temporal attention processes when endogenously driven. Our results further highlight the need to isolate intentional from unintentional processes to better understand the various anticipatory mechanisms engaged in processing behaviourally relevant stimuli with a predictable spatio-temporal structure, such as motion and speech.

  7. Visual fatigue modeling for stereoscopic video shot based on camera motion

    Science.gov (United States)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3-D display technology. Causes of visual discomfort in stereoscopic video include conflict between accommodation and convergence, excessive binocular parallax, and fast motion of objects. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and comfortable zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on specific objects when cameras and background are static; relative motion must be considered for other camera conditions, which determines different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue predicting model is presented. Visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be computed according to the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
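
The prediction step described above amounts to an ordinary multiple linear regression from per-shot factor scores to subjective fatigue ratings. A minimal sketch of that idea (factor names taken from the abstract; all scores, ratings and coefficients below are invented, not the paper's actual data):

```python
import numpy as np

# Hypothetical per-shot factor scores: spatial structure, motion scale,
# comfortable-zone score (names from the abstract; values invented).
X = np.array([
    [0.2, 0.1, 0.9],
    [0.5, 0.4, 0.6],
    [0.8, 0.7, 0.3],
    [0.9, 0.9, 0.1],
    [0.3, 0.2, 0.8],
    [0.6, 0.5, 0.5],
])
y = np.array([1.2, 2.4, 3.9, 4.6, 1.5, 2.9])  # invented subjective fatigue ratings

# Ordinary least squares fit of y ~ X @ w + b, with an explicit intercept column.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w, b = coef[:-1], coef[-1]

predicted = X @ w + b  # predicted visual fatigue score per shot
```

In the paper the regression weights would be fitted against the subjective evaluation scores collected for each camera condition.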

  8. Attraction of posture and motion-trajectory elements of conspecific biological motion in medaka fish.

    Science.gov (United States)

    Shibai, Atsushi; Arimoto, Tsunehiro; Yoshinaga, Tsukasa; Tsuchizawa, Yuta; Khureltulga, Dashdavaa; Brown, Zuben P; Kakizuka, Taishi; Hosoda, Kazufumi

    2018-06-05

    Visual recognition of conspecifics is necessary for a wide range of social behaviours in many animals. Medaka (Japanese rice fish), a commonly used model organism, are known to be attracted by the biological motion of conspecifics. However, biological motion is a composite of both body-shape motion and entire-field motion trajectory (i.e., posture or motion-trajectory elements, respectively), and it has not been revealed which element mediates the attractiveness. Here, we show that either posture or motion-trajectory elements alone can attract medaka. We decomposed biological motion of the medaka into the two elements and synthesized visual stimuli that contain both, either, or none of the two elements. We found that medaka were attracted by visual stimuli that contain at least one of the two elements. Together with previously known static visual cues in medaka, these results add to the growing evidence that multiple sources of information support conspecific recognition. Our strategy of decomposing biological motion into these partial elements is applicable to other animals, and further studies using this technique will enhance the basic understanding of visual recognition of conspecifics.
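
The decomposition the abstract describes can be sketched as splitting tracked body-point trajectories into an entire-field trajectory (the per-frame centroid) and a posture residual. This is an illustrative reconstruction, not the authors' code; the point data here are random:

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented data: 50 frames of 8 tracked body points in 2-D.
frames = rng.normal(size=(50, 8, 2))

# Entire-field motion trajectory: the per-frame centroid of all points.
trajectory = frames.mean(axis=1, keepdims=True)   # shape (50, 1, 2)

# Posture element: body-shape motion with the trajectory removed.
posture = frames - trajectory                     # centred point cloud per frame

# A "trajectory only" stimulus would replay just the centroid path;
# a "posture only" stimulus would replay the shape motion at a fixed spot.
recombined = posture + trajectory                 # reconstructs the original
```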

  9. Primary visual cortex activity along the apparent-motion trace reflects illusory perception.

    Directory of Open Access Journals (Sweden)

    Lars Muckli

    2005-08-01

    The illusion of apparent motion can be induced when visual stimuli are successively presented at different locations. It has been shown in previous studies that motion-sensitive regions in extrastriate cortex are relevant for the processing of apparent motion, but it is unclear whether primary visual cortex (V1) is also involved in the representation of the illusory motion path. We investigated, in human subjects, apparent-motion-related activity in patches of V1 representing locations along the path of illusory stimulus motion using functional magnetic resonance imaging. Here we show that apparent motion caused a blood-oxygenation-level-dependent response along the V1 representations of the apparent-motion path, including regions that were not directly activated by the apparent-motion-inducing stimuli. This response was unaltered when participants had to perform an attention-demanding task that diverted their attention away from the stimulus. With a bistable motion quartet, we confirmed that the activity was related to the conscious perception of movement. Our data suggest that V1 is part of the network that represents the illusory path of apparent motion. The activation in V1 can be explained either by lateral interactions within V1 or by feedback mechanisms from higher visual areas, especially the motion-sensitive human MT/V5 complex.

  10. Parallax visualization of full motion video using the Pursuer GUI

    Science.gov (United States)

    Mayhew, Christopher A.; Forgues, Mark B.

    2014-06-01

    In 2013, the authors reported to the SPIE on the Phase 1 development of a Parallax Visualization (PV) plug-in toolset for Wide Area Motion Imaging (WAMI) data using the Pursuer Graphical User Interface (GUI) [1]. In addition to the ability to apply PV to WAMI data, the Phase 1 plug-in toolset also featured a limited ability to visualize Full Motion Video (FMV) data. The ability to visualize both WAMI and FMV data is a highly advantageous capability for an Electric Light Table (ELT) toolset. This paper reports on the Phase 2 development and addition of a full-featured FMV capability to the Pursuer WAMI PV plug-in.

  11. Vestibular nuclei and cerebellum put visual gravitational motion in context.

    Science.gov (United States)

    Miller, William L; Maffei, Vincenzo; Bosco, Gianfranco; Iosa, Marco; Zago, Myrka; Macaluso, Emiliano; Lacquaniti, Francesco

    2008-04-01

    Animal survival in the forest, and human success on the sports field, often depend on the ability to seize a target on the fly. All bodies fall at the same rate in the gravitational field, but the corresponding retinal motion varies with apparent viewing distance. How then does the brain predict time-to-collision under gravity? A perspective context from natural or pictorial settings might afford accurate predictions of gravity's effects via the recovery of an environmental reference from the scene structure. We report that embedding motion in a pictorial scene facilitates interception of gravitational acceleration over unnatural acceleration, whereas a blank scene eliminates such bias. Functional magnetic resonance imaging (fMRI) revealed blood-oxygen-level-dependent correlates of these visual context effects on gravitational motion processing in the vestibular nuclei and posterior cerebellar vermis. Our results suggest an early stage of integration of high-level visual analysis with gravity-related motion information, which may represent the substrate for perceptual constancy of ubiquitous gravitational motion.
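
A worked example of why the gravity context matters: extrapolating at constant velocity misestimates time-to-collision for an accelerating object, while the standard kinematics d = vt + ½gt² gives the true value. The distances and speeds below are invented for illustration:

```python
import math

g = 9.81   # gravitational acceleration, m/s^2
d = 2.0    # remaining distance to the interception point, m (invented)
v = 3.0    # current speed along the fall, m/s (invented)

# First-order estimate: extrapolate the retinal motion at constant velocity.
ttc_const_velocity = d / v

# True time-to-collision under constant acceleration g:
# d = v*t + 0.5*g*t^2  =>  t = (-v + sqrt(v^2 + 2*g*d)) / g
ttc_gravity = (-v + math.sqrt(v * v + 2 * g * d)) / g
```

The gravitational prediction is shorter than the constant-velocity one, so an interceptor relying on first-order extrapolation alone would arrive late.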

  12. Neural Circuit to Integrate Opposing Motions in the Visual Field.

    Science.gov (United States)

    Mauss, Alex S; Pankova, Katarina; Arenz, Alexander; Nern, Aljoscha; Rubin, Gerald M; Borst, Alexander

    2015-07-16

    When navigating in their environment, animals use visual motion cues as feedback signals that are elicited by their own motion. Such signals are provided by wide-field neurons sampling motion directions at multiple image points as the animal maneuvers. Each one of these neurons responds selectively to a specific optic flow-field representing the spatial distribution of motion vectors on the retina. Here, we describe the discovery of a group of local, inhibitory interneurons in the fruit fly Drosophila that are key for filtering these cues. Using anatomy, molecular characterization, activity manipulation, and physiological recordings, we demonstrate that these interneurons convey direction-selective inhibition to wide-field neurons with opposite preferred direction and provide evidence for how their connectivity enables the computation required for integrating opposing motions. Our results indicate that, rather than sharpening directional selectivity per se, these circuit elements reduce noise by eliminating non-specific responses to complex visual information. Copyright © 2015 Elsevier Inc. All rights reserved.
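
The opponent computation described here can be caricatured as excitation in the preferred direction minus direction-selective inhibition from the opposite direction, followed by rectification. A toy sketch, not the measured circuit; the function and drive values are invented:

```python
import numpy as np

def wide_field_response(preferred_drive, null_drive, inhibition=1.0):
    """Rectified opponent response: preferred-direction excitation minus
    inhibition driven by motion in the opposite (null) direction."""
    return np.maximum(preferred_drive - inhibition * null_drive, 0.0)

# Pure preferred-direction motion: strong, selective response.
selective = wide_field_response(preferred_drive=1.0, null_drive=0.0)

# Non-specific motion energy (e.g., flicker) drives both directions equally,
# so opponent inhibition cancels it -- noise reduction rather than
# sharpening of directional tuning per se.
non_specific = wide_field_response(preferred_drive=1.0, null_drive=1.0)
```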

  13. Visual working memory contents bias ambiguous structure from motion perception.

    Directory of Open Access Journals (Sweden)

    Lisa Scocchia

    The way we perceive the visual world depends crucially on the state of the observer. In the present study we show that what we are holding in working memory (WM) can bias the way we perceive ambiguous structure from motion stimuli. Holding in memory the percept of an unambiguously rotating sphere influenced the perceived direction of motion of an ambiguously rotating sphere presented shortly thereafter. In particular, we found a systematic difference between congruent dominance periods where the perceived direction of the ambiguous stimulus corresponded to the direction of the unambiguous one and incongruent dominance periods. Congruent dominance periods were more frequent when participants memorized the speed of the unambiguous sphere for delayed discrimination than when they performed an immediate judgment on a change in its speed. The analysis of dominance time-course showed that a sustained tendency to perceive the same direction of motion as the prior stimulus emerged only in the WM condition, whereas in the attention condition perceptual dominance dropped to chance levels at the end of the trial. The results are explained in terms of a direct involvement of early visual areas in the active representation of visual motion in WM.

  14. Visual event-related potentials to biological motion stimuli in autism spectrum disorders

    Science.gov (United States)

    Bletsch, Anke; Krick, Christoph; Siniatchkin, Michael; Jarczok, Tomasz A.; Freitag, Christine M.; Bender, Stephan

    2014-01-01

    Atypical visual processing of biological motion contributes to social impairments in autism spectrum disorders (ASD). However, the exact temporal sequence of deficits of cortical biological motion processing in ASD has not been studied to date. We used 64-channel electroencephalography to study event-related potentials associated with human motion perception in 17 children and adolescents with ASD and 21 typical controls. A spatio-temporal source analysis was performed to assess the brain structures involved in these processes. We expected altered activity already during early stimulus processing and reduced activity during subsequent biological motion-specific processes in ASD. In response to both random and biological motion, the P100 amplitude was decreased, suggesting unspecific deficits in visual processing, and the occipito-temporal N200 showed atypical lateralization in ASD, suggesting altered hemispheric specialization. A slow positive deflection after 400 ms, reflecting top-down processes, and human motion-specific dipole activation differed slightly between groups, with reduced and more diffuse activation in the ASD group. The latter could be an indicator of a disrupted neuronal network for biological motion processing in ASD. Furthermore, early visual processing (P100) seems to be correlated with biological motion-specific activation. This emphasizes the relevance of early sensory processing for higher order processing deficits in ASD. PMID:23887808

  15. Effects of visual motion consistent or inconsistent with gravity on postural sway.

    Science.gov (United States)

    Balestrucci, Priscilla; Daprati, Elena; Lacquaniti, Francesco; Maffei, Vincenzo

    2017-07-01

    Vision plays an important role in postural control, and visual perception of the gravity-defined vertical helps maintain upright stance. In addition, the influence of the gravity field on objects' motion is known to provide a reference for motor and non-motor behavior. However, the role of dynamic visual cues related to gravity in the control of postural balance has been little investigated. In order to understand whether visual cues about gravitational acceleration are relevant for postural control, we assessed the relation between postural sway and visual motion congruent or incongruent with gravitational acceleration. Postural sway of 44 healthy volunteers was recorded by means of force platforms while they watched virtual targets moving in different directions and with different accelerations. Small but significant differences emerged in sway parameters with respect to the characteristics of target motion. Namely, for vertically accelerated targets, gravitational motion (GM) was associated with smaller oscillations of the center of pressure than anti-GM motion. The present findings support the hypothesis that not only static, but also dynamic visual cues about the direction and magnitude of the gravitational field are relevant for balance control during upright stance.

  16. Sound frequency and aural selectivity in sound-contingent visual motion aftereffect.

    Directory of Open Access Journals (Sweden)

    Maori Kobayashi

    BACKGROUND: One possible strategy to evaluate whether signals in different modalities originate from a common external event or object is to form associations between inputs from different senses. This strategy would be quite effective because signals in different modalities from a common external event would then be aligned spatially and temporally. Indeed, it has been demonstrated that after adaptation to visual apparent motion paired with alternating auditory tones, the tones begin to trigger illusory motion perception to a static visual stimulus, where the perceived direction of visual lateral motion depends on the order in which the tones are replayed. The mechanisms underlying this phenomenon remain unclear. One important approach to understanding the mechanisms is to examine whether the effect has some selectivity in auditory processing. However, it has not yet been determined whether this aftereffect can be transferred across sound frequencies and between ears. METHODOLOGY/PRINCIPAL FINDINGS: Two circles placed side by side were presented in alternation, producing apparent motion perception, and each onset was accompanied by a tone burst of a specific and unique frequency. After exposure to this visual apparent motion with tones for a few minutes, the tones became drivers for illusory motion perception. However, the aftereffect was observed only when the adapter and test tones were presented at the same frequency and to the same ear. CONCLUSIONS/SIGNIFICANCE: These findings suggest that the auditory processing underlying the establishment of novel audiovisual associations is selective, potentially but not necessarily indicating that this processing occurs at an early stage.

  17. Hummingbirds control hovering flight by stabilizing visual motion.

    Science.gov (United States)

    Goller, Benjamin; Altshuler, Douglas L

    2014-12-23

    Relatively little is known about how sensory information is used for controlling flight in birds. A powerful method is to immerse an animal in a dynamic virtual reality environment to examine behavioral responses. Here, we investigated the role of vision during free-flight hovering in hummingbirds to determine how optic flow (image movement across the retina) is used to control body position. We filmed hummingbirds hovering in front of a projection screen with the prediction that projecting moving patterns would disrupt hovering stability but stationary patterns would allow the hummingbird to stabilize position. When hovering in the presence of moving gratings and spirals, hummingbirds lost positional stability and responded to the specific orientation of the moving visual stimulus. There was no loss of stability with stationary versions of the same stimulus patterns. When exposed to a single stimulus many times or to a weakened stimulus that combined a moving spiral with a stationary checkerboard, the response to looming motion declined. However, even minimal visual motion was sufficient to cause a loss of positional stability despite prominent stationary features. Collectively, these experiments demonstrate that hummingbirds control hovering position by stabilizing motions in their visual field. The high sensitivity and persistence of this disruptive response is surprising, given that the hummingbird brain is highly specialized for sensory processing and spatial mapping, providing other potential mechanisms for controlling position.

  18. 3D geospatial visualizations: Animation and motion effects on spatial objects

    Science.gov (United States)

    Evangelidis, Konstantinos; Papadopoulos, Theofilos; Papatheodorou, Konstantinos; Mastorokostas, Paris; Hilas, Constantinos

    2018-02-01

    Digital Elevation Models (DEMs), in combination with high quality raster graphics, provide realistic three-dimensional (3D) representations of the globe (virtual globe) and an amazing navigation experience over the terrain through earth browsers. In addition, the adoption of interoperable geospatial mark-up languages (e.g. KML) and open programming libraries (Javascript) makes it also possible to create 3D spatial objects and convey on them the sensation of any type of texture by utilizing open 3D representation models (e.g. Collada). One step beyond, by employing WebGL frameworks (e.g. Cesium.js, three.js), animation and motion effects can be attributed to 3D models. However, major GIS-based functionalities in combination with all of the above-mentioned visualization capabilities, such as animation effects on selected areas of the terrain texture (e.g. sea waves) as well as motion effects on 3D objects moving along dynamically defined georeferenced terrain paths (e.g. the motion of an animal over a hill, or of a big fish in an ocean), are not widely supported, at least by open geospatial applications or development frameworks. Towards this end, we developed and made available to the research community an open geospatial software application prototype that provides high-level capabilities for dynamically creating user-defined virtual geospatial worlds populated by selected animated and moving 3D models on user-specified locations, paths and areas. At the same time, the generated code may enhance existing open visualization frameworks and programming libraries dealing with 3D simulations with the geospatial aspect of a virtual world.
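
One core piece of such a framework, moving a 3D model along a user-defined georeferenced path, reduces to time-parameterized interpolation between waypoints. A minimal sketch (the waypoints and function name are invented; a real implementation would hand the interpolated positions to a renderer such as Cesium.js or three.js):

```python
import numpy as np

# Invented georeferenced waypoints: (longitude, latitude, altitude in m).
waypoints = np.array([
    [23.72, 37.98, 100.0],
    [23.73, 37.99, 150.0],
    [23.75, 38.00, 120.0],
])

def position_at(t):
    """Piecewise-linear position along the waypoint path for t in [0, 1]."""
    seg_t = t * (len(waypoints) - 1)          # position in segment units
    i = min(int(seg_t), len(waypoints) - 2)   # index of the current segment
    frac = seg_t - i                          # fraction along that segment
    return (1 - frac) * waypoints[i] + frac * waypoints[i + 1]

start, midpoint, end = position_at(0.0), position_at(0.5), position_at(1.0)
```

Calling `position_at` once per animation frame with a normalized clock value yields the model's location along the path.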

  19. Ventral aspect of the visual form pathway is not critical for the perception of biological motion

    Science.gov (United States)

    Gilaie-Dotan, Sharon; Saygin, Ayse Pinar; Lorenzi, Lauren J.; Rees, Geraint; Behrmann, Marlene

    2015-01-01

    Identifying the movements of those around us is fundamental for many daily activities, such as recognizing actions, detecting predators, and interacting with others socially. A key question concerns the neurobiological substrates underlying biological motion perception. Although the ventral “form” visual cortex is standardly activated by biologically moving stimuli, whether these activations are functionally critical for biological motion perception or are epiphenomenal remains unknown. To address this question, we examined whether focal damage to regions of the ventral visual cortex, resulting in significant deficits in form perception, adversely affects biological motion perception. Six patients with damage to the ventral cortex were tested with sensitive point-light display paradigms. All patients were able to recognize unmasked point-light displays and their perceptual thresholds were not significantly different from those of three different control groups, one of which comprised brain-damaged patients with spared ventral cortex (n > 50). Importantly, these six patients performed significantly better than patients with damage to regions critical for biological motion perception. To assess the necessary contribution of different regions in the ventral pathway to biological motion perception, we complement the behavioral findings with a fine-grained comparison between the lesion location and extent, and the cortical regions standardly implicated in biological motion processing. This analysis revealed that the ventral aspects of the form pathway (e.g., fusiform regions, ventral extrastriate body area) are not critical for biological motion perception. We hypothesize that the role of these ventral regions is to provide enhanced multiview/posture representations of the moving person rather than to represent biological motion perception per se. PMID:25583504

  20. Postural sway and gaze can track the complex motion of a visual target.

    Directory of Open Access Journals (Sweden)

    Vassilia Hatzitaki

    Variability is an inherent and important feature of human movement. This variability has form, exhibiting a chaotic structure. Visual feedback training using regular predictive visual target motions does not take into account this essential characteristic of human movement, and may result in task-specific learning and loss of visuo-motor adaptability. In this study, we asked how well healthy young adults can track visual target cues of varying degrees of complexity during whole-body swaying in the Anterior-Posterior (AP) and Medio-Lateral (ML) direction. Participants were asked to track three visual target motions: a complex (Lorenz attractor), a noise (brown) and a periodic (sine) moving target while receiving online visual feedback about their performance. Postural sway, gaze and target motion were synchronously recorded and the degree of force-target and gaze-target coupling was quantified using spectral coherence and Cross-Approximate entropy. Analysis revealed that both force-target and gaze-target coupling was sensitive to the complexity of the visual stimulus motions. Postural sway showed a higher degree of coherence with the Lorenz attractor than the brown noise or sinusoidal stimulus motion. Similarly, gaze was more synchronous with the Lorenz attractor than the brown noise and sinusoidal stimulus motion. These results were similar regardless of whether tracking was performed in the AP or ML direction. Based on the theoretical model of optimal movement variability, tracking of a complex signal may provide a better stimulus to improve visuo-motor adaptation and learning in postural control.
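
The complex target motion in this study is driven by the Lorenz attractor. A sketch of generating such a chaotic target signal by simple Euler integration (the classic Lorenz parameters are standard; the step size and initial state here are chosen only for illustration):

```python
import numpy as np

def lorenz_trajectory(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-integrate the Lorenz system; one coordinate can then drive a
    complex, chaotic visual target as in the tracking task."""
    xyz = np.empty((n_steps, 3))
    x, y, z = 1.0, 1.0, 1.0
    for i in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xyz[i] = (x, y, z)
    return xyz

target = lorenz_trajectory(5000)[:, 0]  # chaotic target-motion signal
```

The coupling measures in the study (spectral coherence, Cross-Approximate entropy) would then be computed between this target signal and the recorded sway or gaze traces.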

  1. Circuit Mechanisms Governing Local vs. Global Motion Processing in Mouse Visual Cortex

    Directory of Open Access Journals (Sweden)

    Rune Rasmussen

    2017-12-01

    A long-standing question in neuroscience is how neural circuits encode representations and perceptions of the external world. A particularly well-defined visual computation is the representation of global object motion by pattern direction-selective (PDS) cells from convergence of motion of local components represented by component direction-selective (CDS) cells. However, how PDS and CDS cells develop their distinct response properties is still unresolved. The visual cortex of the mouse is an attractive model for experimentally solving this issue due to the large molecular and genetic toolbox available. Although mouse visual cortex lacks the highly ordered orientation columns of primates, it is organized in functional sub-networks and contains striate- and extrastriate areas like its primate counterparts. In this Perspective article, we provide an overview of the experimental and theoretical literature on global motion processing based on works in primates and mice. Lastly, we propose what types of experiments could illuminate what circuit mechanisms are governing cortical global visual motion processing. We propose that PDS cells in mouse visual cortex appear as the perfect arena for delineating and solving how individual sensory features extracted by neural circuits in peripheral brain areas are integrated to build our rich cohesive sensory experiences.

  2. Decoding conjunctions of direction-of-motion and binocular disparity from human visual cortex.

    Science.gov (United States)

    Seymour, Kiley J; Clifford, Colin W G

    2012-05-01

    Motion and binocular disparity are two features in our environment that share a common correspondence problem. Decades of psychophysical research dedicated to understanding stereopsis suggest that these features interact early in human visual processing to disambiguate depth. Single-unit recordings in the monkey also provide evidence for the joint encoding of motion and disparity across much of the dorsal visual stream. Here, we used functional MRI and multivariate pattern analysis to examine where in the human brain conjunctions of motion and disparity are encoded. Subjects sequentially viewed two stimuli that could be distinguished only by their conjunctions of motion and disparity. Specifically, each stimulus contained the same feature information (leftward and rightward motion and crossed and uncrossed disparity) but differed exclusively in the way these features were paired. Our results revealed that a linear classifier could accurately decode which stimulus a subject was viewing based on voxel activation patterns throughout the dorsal visual areas and as early as V2. This decoding success was conditional on some voxels being individually sensitive to the unique conjunctions comprising each stimulus, thus a classifier could not rely on independent information about motion and binocular disparity to distinguish these conjunctions. This study expands on evidence that disparity and motion interact at many levels of human visual processing, particularly within the dorsal stream. It also lends support to the idea that stereopsis is subserved by early mechanisms also tuned to direction of motion.
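
The logic of the multivariate analysis, that decoding can succeed only if some voxels are tuned to the feature pairing itself, can be illustrated with synthetic data and a simple linear read-out. This is a nearest-centroid stand-in for the paper's classifier, with all numbers invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_voxels = 40, 50

# Both stimuli contain the same motions and disparities; they differ only in
# how the features are PAIRED. Simulate a few conjunction-tuned voxels that
# respond to one pairing and not the other.
conj_tuning = np.zeros(n_voxels)
conj_tuning[:5] = 1.0                       # 5 conjunction-sensitive voxels

def simulate(condition):                    # condition = +1 or -1 (pairing)
    base = rng.normal(size=(n_trials, n_voxels))   # shared feature energy
    return base + condition * conj_tuning          # pairing-specific shift

a_train, b_train = simulate(+1), simulate(-1)
a_test, b_test = simulate(+1), simulate(-1)

# Nearest-centroid linear read-out on held-out trials.
w = a_train.mean(0) - b_train.mean(0)
thresh = w @ (a_train.mean(0) + b_train.mean(0)) / 2
acc = np.mean(np.r_[a_test @ w > thresh, b_test @ w < thresh])
```

With `conj_tuning` set to all zeros the two conditions become statistically identical and accuracy falls to chance, mirroring the paper's point that independent motion and disparity information cannot distinguish the stimuli.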

  3. Electrophysiological correlates of learning-induced modulation of visual motion processing in humans

    Directory of Open Access Journals (Sweden)

    Viktor Gál

    2010-01-01

    Training on a visual task leads to increased perceptual and neural responses to visual features that were attended during training as well as decreased responses to neglected distractor features. However, the time course of these attention-based modulations of neural sensitivity for visual features has not been investigated before. Here we measured event-related potentials (ERPs) in response to motion stimuli with different coherence levels before and after training on a speed discrimination task requiring object-based attentional selection of one of the two competing motion stimuli. We found that two peaks on the ERP waveform were modulated by the strength of the coherent motion signal; the response amplitude associated with motion directions that were neglected during training was smaller than the response amplitude associated with motion directions that were attended during training. The first peak of motion coherence-dependent modulation of the ERP responses was at 300 ms after stimulus onset and it was most pronounced over the occipitotemporal cortex. The second peak was around 500 ms and was focused over the parietal cortex. A control experiment suggests that the earlier motion coherence-related response modulation reflects the extraction of the coherent motion signal whereas the later peak might index accumulation and readout of motion signals by parietal decision mechanisms. These findings suggest that attention-based learning affects neural responses both at the sensory and decision processing stages.
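
Event-related potentials like those analyzed here are obtained by averaging many stimulus-locked epochs, which suppresses non-phase-locked noise and exposes condition-dependent peaks. A toy sketch with an invented component near 300 ms whose amplitude depends on condition (all timings, amplitudes and noise levels are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 500                                   # sampling rate, Hz
t = np.arange(0, 0.8, 1 / fs)              # 0-800 ms epoch

def simulate_trials(peak_amp, n_trials=100):
    """Noisy single trials containing a Gaussian-shaped component ~300 ms
    after stimulus onset."""
    component = peak_amp * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return component + rng.normal(scale=2.0, size=(n_trials, t.size))

# Larger coherent-motion signal -> larger ~300 ms component (as in the study).
erp_attended = simulate_trials(peak_amp=3.0).mean(axis=0)
erp_neglected = simulate_trials(peak_amp=1.0).mean(axis=0)

peak_window = (t > 0.25) & (t < 0.35)      # window around the first peak
```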

  4. The economics of motion perception and invariants of visual sensitivity.

    Science.gov (United States)

    Gepshtein, Sergei; Tyukin, Ivan; Kubovy, Michael

    2007-06-21

    Neural systems face the challenge of optimizing their performance with limited resources, just as economic systems do. Here, we use tools of neoclassical economic theory to explore how a frugal visual system should use a limited number of neurons to optimize perception of motion. The theory prescribes that vision should allocate its resources to different conditions of stimulation according to the degree of balance between measurement uncertainties and stimulus uncertainties. We find that human vision approximately follows the optimal prescription. The equilibrium theory explains why human visual sensitivity is distributed the way it is and why qualitatively different regimes of apparent motion are observed at different speeds. The theory offers a new normative framework for understanding the mechanisms of visual sensitivity at the threshold of visibility and above the threshold and predicts large-scale changes in visual sensitivity in response to changes in the statistics of stimulation and system goals.

  5. A Sensory-Driven Trade-Off between Coordinated Motion in Social Prey and a Predator's Visual Confusion.

    Directory of Open Access Journals (Sweden)

    Bertrand H Lemasson

    2016-02-01

    Social animals are capable of enhancing their awareness by paying attention to their neighbors, and prey found in groups can also confuse their predators. Both sides of these sensory benefits have long been appreciated, yet less is known of how the perception of events from the perspectives of both prey and predator can interact to influence their encounters. Here we examined how a visual sensory mechanism impacts the collective motion of prey and, subsequently, how their resulting movements influenced predator confusion and capture ability. We presented virtual prey to human players in a targeting game and measured the speed and accuracy with which participants caught designated prey. As prey paid more attention to neighbor movements their collective coordination increased, yet increases in prey coordination were positively associated with increases in the speed and accuracy of attacks. However, while attack speed was unaffected by the initial state of the prey, accuracy dropped significantly if the prey were already organized at the start of the attack, rather than in the process of self-organizing. By repeating attack scenarios and masking the targeted prey's neighbors we were able to visually isolate them and conclusively demonstrate how visual confusion impacted capture ability. Delays in capture caused by decreased coordination amongst the prey depended upon the collective motion of neighboring prey, while it was primarily the motion of the targets themselves that determined capture accuracy. Interestingly, while a complete loss of coordination in the prey (e.g., a flash expansion) caused the greatest delay in capture, such behavior had little effect on capture accuracy. Lastly, while increases in collective coordination in prey enhanced personal risk, traveling in coordinated groups was still better than appearing alone. These findings demonstrate a trade-off between the sensory mechanisms that can enhance the collective properties that

  6. Circuit Mechanisms Governing Local vs. Global Motion Processing in Mouse Visual Cortex

    DEFF Research Database (Denmark)

    Rasmussen, Rune; Yonehara, Keisuke

    2017-01-01

    components represented by component direction-selective (CDS) cells. However, how PDS and CDS cells develop their distinct response properties is still unresolved. The visual cortex of the mouse is an attractive model for experimentally solving this issue due to the large molecular and genetic toolbox...... literature on global motion processing based on works in primates and mice. Lastly, we propose what types of experiments could illuminate what circuit mechanisms are governing cortical global visual motion processing. We propose that PDS cells in mouse visual cortex appear as the perfect arena...

  7. Perception of linear horizontal self-motion induced by peripheral vision /linearvection/ - Basic characteristics and visual-vestibular interactions

    Science.gov (United States)

    Berthoz, A.; Pavard, B.; Young, L. R.

    1975-01-01

    The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self motion) by the visual system. Latencies of onset are around 1 sec and short-term adaptation has been shown. The dynamic range of the visual analyzer, as judged by frequency analysis, is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision, which supports the idea of an essential although not independent role of vision in self motion perception.

  8. Perception of the dynamic visual vertical during sinusoidal linear motion.

    Science.gov (United States)

    Pomante, A; Selen, L P J; Medendorp, W P

    2017-10-01

    The vestibular system provides information for spatial orientation. However, this information is ambiguous: because the otoliths sense the gravitoinertial force, they cannot distinguish gravitational and inertial components. As a consequence, prolonged linear acceleration of the head can be interpreted as tilt, referred to as the somatogravic effect. Previous modeling work suggests that the brain disambiguates the otolith signal according to the rules of Bayesian inference, combining noisy canal cues with the a priori assumption that prolonged linear accelerations are unlikely. Within this modeling framework the noise of the vestibular signals affects the dynamic characteristics of the tilt percept during linear whole-body motion. To test this prediction, we devised a novel paradigm to psychometrically characterize the dynamic visual vertical-as a proxy for the tilt percept-during passive sinusoidal linear motion along the interaural axis (0.33 Hz motion frequency, 1.75 m/s 2 peak acceleration, 80 cm displacement). While subjects ( n =10) kept fixation on a central body-fixed light, a line was briefly flashed (5 ms) at different phases of the motion, the orientation of which had to be judged relative to gravity. Consistent with the model's prediction, subjects showed a phase-dependent modulation of the dynamic visual vertical, with a subject-specific phase shift with respect to the imposed acceleration signal. The magnitude of this modulation was smaller than predicted, suggesting a contribution of nonvestibular signals to the dynamic visual vertical. Despite their dampening effect, our findings may point to a link between the noise components in the vestibular system and the characteristics of dynamic visual vertical. NEW & NOTEWORTHY A fundamental question in neuroscience is how the brain processes vestibular signals to infer the orientation of the body and objects in space. We show that, under sinusoidal linear motion, systematic error patterns appear in the
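    The Bayesian disambiguation this abstract refers to can be illustrated with a minimal linear-Gaussian sketch. The small-angle generative model, the noise levels, and the canal cue below are all illustrative assumptions, not values or equations from the study:

```python
import numpy as np

# Linear-Gaussian sketch of otolith disambiguation. Small-angle model:
# the otolith senses f = g*theta + a (tilt theta plus acceleration a),
# the canals give a noisy tilt cue, and a zero-mean prior encodes the
# assumption that sustained accelerations are unlikely. All numbers are
# illustrative, not taken from the study.

g = 9.81
theta_true, a_true = 0.1, 1.0            # rad, m/s^2
f = g * theta_true + a_true              # otolith (gravitoinertial) reading
c = theta_true                           # canal-derived tilt reading

sigma_f, sigma_c, sigma_a = 0.2, 0.05, 1.5

# Weighted least squares over unknowns x = [theta, a]:
#   otolith row: g*theta + a = f
#   canal row:   theta       = c
#   prior row:   a           = 0
A = np.array([[g, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
b = np.array([f, c, 0.0])
w = np.array([1 / sigma_f, 1 / sigma_c, 1 / sigma_a])
theta_hat, a_hat = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)[0]

# From f alone, tilt and translation are unidentifiable; the extra cue
# and prior pull the estimates near the true values.
assert abs(theta_hat - theta_true) < 0.05
assert abs(a_hat - a_true) < 0.5
```

    With the acceleration prior and the canal cue included, the otherwise unidentifiable split of gravitoinertial force into tilt and translation is recovered near the true values; stronger noise in either cue shifts the posterior, which is the mechanism the study probes psychometrically.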

  9. A Compact VLSI System for Bio-Inspired Visual Motion Estimation.

    Science.gov (United States)

    Shi, Cong; Luo, Gang

    2018-04-01

    This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.
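    The motion-energy stage that such a pipeline mimics can be sketched with a quadrature pair of spatiotemporal filters. This floating-point toy illustrates the general motion-energy technique only; the paper's VLSI design uses ternary edges and integer arithmetic, and every filter parameter below is an assumption:

```python
import numpy as np

# Quadrature-pair motion-energy sketch in 1-D space plus time. Summing
# the squared even and odd filter responses gives a phase-invariant
# measure of motion in the filter's preferred direction.

def gabor_pair(x, t, fx, ft):
    """Even/odd spatiotemporal filters tuned to velocity -ft/fx."""
    env = np.exp(-x**2 / 2.0 - t**2 / 2.0)
    phase = 2 * np.pi * (fx * x + ft * t)
    return env * np.cos(phase), env * np.sin(phase)

def motion_energy(stimulus, fx, ft):
    """Phase-invariant energy: sum of squared quadrature responses."""
    nx, nt = stimulus.shape
    x = np.linspace(-2, 2, nx)[:, None]
    t = np.linspace(-2, 2, nt)[None, :]
    even, odd = gabor_pair(x, t, fx, ft)
    return np.sum(stimulus * even) ** 2 + np.sum(stimulus * odd) ** 2

# A rightward-drifting grating excites the matched filter far more than
# the filter tuned to the opposite direction.
x = np.linspace(-2, 2, 32)[:, None]
t = np.linspace(-2, 2, 32)[None, :]
grating = np.cos(2 * np.pi * (x - t))        # drifts in +x over time
e_pref = motion_energy(grating, 1.0, -1.0)   # matched direction
e_null = motion_energy(grating, 1.0, 1.0)    # opposite direction
assert e_pref > e_null
```

    Comparing energies across a bank of such filters at different tunings yields the velocity estimate; a confidence map like the paper's can be read off from how decisively one filter wins.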

  10. Defective motion processing in children with cerebral visual impairment due to periventricular white matter damage.

    Science.gov (United States)

    Weinstein, Joel M; Gilmore, Rick O; Shaikh, Sumera M; Kunselman, Allen R; Trescher, William V; Tashima, Lauren M; Boltz, Marianne E; McAuliffe, Matthew B; Cheung, Albert; Fesi, Jeremy D

    2012-07-01

    We sought to characterize visual motion processing in children with cerebral visual impairment (CVI) due to periventricular white matter damage caused by either hydrocephalus (eight individuals) or periventricular leukomalacia (PVL) associated with prematurity (11 individuals). Using steady-state visually evoked potentials (ssVEP), we measured cortical activity related to motion processing for two distinct types of visual stimuli: 'local' motion patterns thought to activate mainly primary visual cortex (V1), and 'global' or coherent patterns thought to activate higher cortical visual association areas (V3, V5, etc.). We studied three groups of children: (1) 19 children with CVI (mean age 9y 6mo [SD 3y 8mo]; 9 male; 10 female); (2) 40 neurologically and visually normal comparison children (mean age 9y 6mo [SD 3y 1mo]; 18 male; 22 female); and (3) because strabismus and amblyopia are common in children with CVI, a group of 41 children without neurological problems who had visual deficits due to amblyopia and/or strabismus (mean age 7y 8mo [SD 2y 8mo]; 28 male; 13 female). We found that the processing of global as opposed to local motion was preferentially impaired in individuals with CVI, especially for slower target velocities (p=0.028). Motion processing is impaired in children with CVI. ssVEP may provide useful and objective information about the development of higher visual function in children at risk for CVI. © The Authors. Journal compilation © Mac Keith Press 2011.

  11. Single-unit studies of visual motion processing in cat extrastriate areas

    NARCIS (Netherlands)

    Vajda, Ildiko

    2003-01-01

    Motion vision has high survival value and is a fundamental property of all visual systems. The old Greeks already studied motion vision, but the physiological basis of it first came under scrutiny in the late nineteenth century. Later, with the introduction of single-cell (single-unit)

  12. Interactions between motion and form processing in the human visual system

    OpenAIRE

    Mather, G.; Pavan, A.; Bellacosa Marotti, R.; Campana, G.; Casco, C.

    2013-01-01

    The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing...

  13. Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.

    Science.gov (United States)

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L

    2017-05-01

    Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.

  14. Integration of visual and inertial cues in perceived heading of self-motion

    NARCIS (Netherlands)

    Winkel, K.N. de; Weesie, H.M.; Werkhoven, P.J.; Groen, E.L.

    2010-01-01

    In the present study, we investigated whether the perception of heading of linear self-motion can be explained by Maximum Likelihood Integration (MLI) of visual and non-visual sensory cues. MLI predicts smaller variance for multisensory judgments compared to unisensory judgments. Nine participants
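    The MLI prediction tested here (smaller variance for multisensory than for unisensory judgments) follows from reliability-weighted fusion of Gaussian cue estimates. A minimal sketch with hypothetical cue means and noise levels, not values from the study:

```python
import numpy as np

# Maximum Likelihood Integration of two Gaussian heading estimates:
# each cue is weighted by its inverse variance, and the fused variance
# is smaller than either unisensory variance.

def mli_combine(mu_v, sigma_v, mu_i, sigma_i):
    w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_i**2)
    mu = w_v * mu_v + (1 - w_v) * mu_i
    sigma = np.sqrt((sigma_v**2 * sigma_i**2) / (sigma_v**2 + sigma_i**2))
    return mu, sigma

# Visual cue: 10 deg heading, noisier; inertial cue: 6 deg, more reliable.
mu, sigma = mli_combine(mu_v=10.0, sigma_v=4.0, mu_i=6.0, sigma_i=2.0)
assert 6.0 < mu < 8.0    # pulled toward the more reliable inertial cue
assert sigma < 2.0       # fused variance below both unisensory variances
```

    Comparing measured multisensory variance against this predicted reduction is the standard test of whether observers integrate cues optimally.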

  15. A neural model of motion processing and visual navigation by cortical area MST.

    Science.gov (United States)

    Grossberg, S; Mingolla, E; Pack, C

    1999-12-01

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.
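    The heading computation attributed to MSTd can be illustrated by recovering the focus of expansion (FOE) of a radial flow field. This is a generic heading-from-optic-flow sketch, not the model in the paper; the flow field, noise level, and FOE location are made up:

```python
import numpy as np

# During pure forward translation, each optic-flow vector points away
# from the FOE, which marks the heading direction in the image.

rng = np.random.default_rng(1)
foe = np.array([0.3, -0.2])                  # true FOE in image coordinates
pts = rng.uniform(-1, 1, size=(200, 2))      # sampled image locations
flow = 0.5 * (pts - foe) + rng.normal(0, 0.01, size=(200, 2))

# Each flow vector is parallel to (p - foe), so the 2-D cross product
# vanishes: v_y*(p_x - foe_x) - v_x*(p_y - foe_y) = 0, giving one linear
# equation per point in the unknowns (foe_x, foe_y).
A = np.stack([flow[:, 1], -flow[:, 0]], axis=1)
b = flow[:, 1] * pts[:, 0] - flow[:, 0] * pts[:, 1]
foe_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(foe_hat, foe, atol=0.05)
```

    The model in the abstract reaches a similar answer through pooled motion-sensitive receptive fields rather than explicit least squares, which is what lets it also account for eye-rotation effects on heading judgments.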

  16. Hand based visual intent recognition algorithm for wheelchair motion

    CSIR Research Space (South Africa)

    Luhandjula, T

    2010-05-01

    Full Text Available This paper describes an algorithm for a visual human-machine interface that infers a person’s intention from the motion of the hand. Work in progress shows a proof of concept tested on static images. The context for which this solution is intended...

  17. Development of visual motion perception for prospective control: Brain and behavioural studies in infants

    Directory of Open Access Journals (Sweden)

    Seth B. Agyei

    2016-02-01

    Full Text Available During infancy, smart perceptual mechanisms develop allowing infants to judge time-space motion dynamics more efficiently with age and locomotor experience. This emerging capacity may be vital to enable preparedness for upcoming events and to be able to navigate in a changing environment. Little is known about brain changes that support the development of prospective control and about processes, such as preterm birth, that may compromise it. As a function of perception of visual motion, this paper will describe behavioural and brain studies with young infants investigating the development of visual perception for prospective control. By means of the three visual motion paradigms of occlusion, looming, and optic flow, our research shows the importance of including behavioural data when studying the neural correlates of prospective control.

  18. Detection of visual events along the apparent motion trace in patients with paranoid schizophrenia.

    Science.gov (United States)

    Sanders, Lia Lira Olivier; Muckli, Lars; de Millas, Walter; Lautenschlager, Marion; Heinz, Andreas; Kathmann, Norbert; Sterzer, Philipp

    2012-07-30

    Dysfunctional prediction in sensory processing has been suggested as a possible causal mechanism in the development of delusions in patients with schizophrenia. Previous studies in healthy subjects have shown that while the perception of apparent motion can mask visual events along the illusory motion trace, such motion masking is reduced when events are spatio-temporally compatible with the illusion, and, therefore, predictable. Here we tested the hypothesis that this specific detection advantage for predictable target stimuli on the apparent motion trace is reduced in patients with paranoid schizophrenia. Our data show that, although target detection along the illusory motion trace is generally impaired, both patients and healthy control participants detect predictable targets more often than unpredictable targets. Patients had a stronger motion masking effect when compared to controls. However, patients showed the same advantage in the detection of predictable targets as healthy control subjects. Our findings reveal stronger motion masking but intact prediction of visual events along the apparent motion trace in patients with paranoid schizophrenia and suggest that the sensory prediction mechanism underlying apparent motion is not impaired in paranoid schizophrenia. Copyright © 2012. Published by Elsevier Ireland Ltd.

  19. Neural correlates of visually induced self-motion illusion in depth.

    Science.gov (United States)

    Kovács, Gyula; Raabe, Markus; Greenlee, Mark W

    2008-08-01

    Optic-flow fields can induce the conscious illusion of self-motion in a stationary observer. Here we used functional magnetic resonance imaging to reveal the differential processing of self- and object-motion in the human brain. Subjects were presented a constantly expanding optic-flow stimulus, composed of disparate red-blue dots, viewed through red-blue glasses to generate a vivid percept of three-dimensional motion. We compared the activity obtained during periods of illusory self-motion with periods of object-motion percept. We found that the right MT+, precuneus, as well as areas located bilaterally along the dorsal part of the intraparietal sulcus and along the left posterior intraparietal sulcus were more active during self-motion perception than during object-motion. Additional signal increases were located in the depth of the left superior frontal sulcus, over the ventral part of the left anterior cingulate, in the depth of the right central sulcus and in the caudate nucleus/putamen. We found no significant deactivations associated with self-motion perception. Our results suggest that the illusory percept of self-motion is correlated with the activation of a network of areas, ranging from motion-specific areas to regions involved in visuo-vestibular integration, visual imagery, decision making, and introspection.

  20. Evaluation of the Leap Motion Controller for music visualization

    OpenAIRE

    Uvman, Oliver

    2016-01-01

    An experiment was carried out, attempting to ascertain whether the Leap Motion Controller can be a useful input device for dynamically controlling graphic visualizations, e.g. by artists who use video and interactive visual arts to enhance music performances. The Leap Motion Controller was found to be too unreliable to be used as the primary controller in a professional visual arts performance.

  1. Neurons in cortical area MST remap the memory trace of visual motion across saccadic eye movements.

    Science.gov (United States)

    Inaba, Naoko; Kawano, Kenji

    2014-05-27

    Perception of a stable visual world despite eye motion requires integration of visual information across saccadic eye movements. To investigate how the visual system deals with localization of moving visual stimuli across saccades, we observed spatiotemporal changes of receptive fields (RFs) of motion-sensitive neurons across periods of saccades in the middle temporal (MT) and medial superior temporal (MST) areas. We found that the location of the RFs moved with shifts of eye position due to saccades, indicating that motion-sensitive neurons in both areas have retinotopic RFs across saccades. Different characteristic responses emerged when the moving visual stimulus was turned off before the saccades. For MT neurons, virtually no response was observed after the saccade, suggesting that the responses of these neurons simply reflect the reafferent visual information. In contrast, most MST neurons increased their firing rates when a saccade brought the location of the visual stimulus into their RFs, where the visual stimulus itself no longer existed. These findings suggest that the responses of such MST neurons after saccades were evoked by a memory of the stimulus that had preexisted in the postsaccadic RFs ("memory remapping"). A delayed-saccade paradigm further revealed that memory remapping in MST was linked to the saccade itself, rather than to a shift in attention. Thus, the visual motion information across saccades was integrated in spatiotopic coordinates and represented in the activity of MST neurons. This is likely to contribute to the perception of a stable visual world in the presence of eye movements.

  2. Frequency of gamma oscillations in humans is modulated by velocity of visual motion

    Science.gov (United States)

    Butorina, Anna V.; Sysoeva, Olga V.; Prokofyev, Andrey O.; Nikolaeva, Anastasia Yu.; Stroganova, Tatiana A.

    2015-01-01

    Gamma oscillations are generated in networks of inhibitory fast-spiking (FS) parvalbumin-positive (PV) interneurons and pyramidal cells. In animals, gamma frequency is modulated by the velocity of visual motion; the effect of velocity has not been evaluated in humans. In this work, we have studied velocity-related modulations of gamma frequency in children using MEG/EEG. We also investigated whether such modulations predict the prominence of the “spatial suppression” effect (Tadin D, Lappin JS, Gilroy LA, Blake R. Nature 424: 312-315, 2003) that is thought to depend on cortical center-surround inhibitory mechanisms. MEG/EEG was recorded in 27 normal boys aged 8–15 yr while they watched high-contrast black-and-white annular gratings drifting with velocities of 1.2, 3.6, and 6.0°/s and performed a simple detection task. The spatial suppression effect was assessed in a separate psychophysical experiment. MEG gamma oscillation frequency increased while power decreased with increasing velocity of visual motion. In EEG, the effects were less reliable. The frequencies of the velocity-specific gamma peaks were 64.9, 74.8, and 87.1 Hz for the slow, medium, and fast motions, respectively. The frequency of the gamma response elicited during slow and medium velocity of visual motion decreased with subject age, whereas the range of gamma frequency modulation by velocity increased with age. The frequency modulation range predicted spatial suppression even after controlling for the effect of age. We suggest that the modulation of the MEG gamma frequency by velocity of visual motion reflects excitability of cortical inhibitory circuits and can be used to investigate their normal and pathological development in the human brain. PMID:25925324

  3. Facial motion engages predictive visual mechanisms.

    Directory of Open Access Journals (Sweden)

    Jordy Kaufman

    Full Text Available We employed a novel cuing paradigm to assess whether dynamically versus statically presented facial expressions differentially engaged predictive visual mechanisms. Participants were presented with a cueing stimulus that was either the static depiction of a low-intensity expressed emotion, or a dynamic sequence evolving from a neutral expression to the low-intensity expressed emotion. Following this cue and a backwards mask, participants were presented with a probe face that displayed either the same emotion (congruent) or a different emotion (incongruent) with respect to that displayed by the cue, although expressed at a high intensity. The probe face had either the same or different identity from the cued face. The participants' task was to indicate whether or not the probe face showed the same emotion as the cue. Dynamic cues and same-identity cues both led to a greater tendency towards congruent responding, although these factors did not interact. Facial motion also led to faster responding when the probe face was emotionally congruent to the cue. We interpret these results as indicating that dynamic facial displays preferentially invoke predictive visual mechanisms, and suggest that motoric simulation may provide an important basis for the generation of predictions in the visual system.

  4. Drivers’ Visual Behavior-Guided RRT Motion Planner for Autonomous On-Road Driving

    Directory of Open Access Journals (Sweden)

    Mingbo Du

    2016-01-01

    Full Text Available This paper describes a real-time motion planner based on the drivers’ visual behavior-guided rapidly exploring random tree (RRT) approach, which is applicable to on-road driving of autonomous vehicles. The primary novelty is in the use of the guidance of drivers’ visual search behavior in the framework of RRT motion planner. RRT is an incremental sampling-based method that is widely used to solve the robotic motion planning problems. However, RRT is often unreliable in a number of practical applications such as autonomous vehicles used for on-road driving because of the unnatural trajectory, useless sampling, and slow exploration. To address these problems, we present an interesting RRT algorithm that introduces an effective guided sampling strategy based on the drivers’ visual search behavior on road and a continuous-curvature smooth method based on B-spline. The proposed algorithm is implemented on a real autonomous vehicle and verified against several different traffic scenarios. A large number of the experimental results demonstrate that our algorithm is feasible and efficient for on-road autonomous driving. Furthermore, the comparative test and statistical analyses illustrate that its excellent performance is superior to other previous algorithms.

  5. Drivers' Visual Behavior-Guided RRT Motion Planner for Autonomous On-Road Driving.

    Science.gov (United States)

    Du, Mingbo; Mei, Tao; Liang, Huawei; Chen, Jiajia; Huang, Rulin; Zhao, Pan

    2016-01-15

    This paper describes a real-time motion planner based on the drivers' visual behavior-guided rapidly exploring random tree (RRT) approach, which is applicable to on-road driving of autonomous vehicles. The primary novelty is in the use of the guidance of drivers' visual search behavior in the framework of RRT motion planner. RRT is an incremental sampling-based method that is widely used to solve the robotic motion planning problems. However, RRT is often unreliable in a number of practical applications such as autonomous vehicles used for on-road driving because of the unnatural trajectory, useless sampling, and slow exploration. To address these problems, we present an interesting RRT algorithm that introduces an effective guided sampling strategy based on the drivers' visual search behavior on road and a continuous-curvature smooth method based on B-spline. The proposed algorithm is implemented on a real autonomous vehicle and verified against several different traffic scenarios. A large number of the experimental results demonstrate that our algorithm is feasible and efficient for on-road autonomous driving. Furthermore, the comparative test and statistical analyses illustrate that its excellent performance is superior to other previous algorithms.
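    The guided-sampling idea can be sketched with a toy 2-D RRT whose sampler is biased toward a region of interest. The gaze point, bias probability, and workspace below are illustrative stand-ins, not the authors' visual-search model, and the B-spline smoothing stage is omitted:

```python
import math
import random

# Toy 2-D RRT with a biased ("guided") sampling strategy. A fraction of
# samples is drawn near a gaze point instead of uniformly, loosely
# mimicking the idea of steering exploration toward where a driver
# looks. All parameters here are illustrative.

def guided_rrt(start, goal, gaze, n_iters=3000, step=0.5, bias=0.3):
    nodes = [start]
    parent = {start: None}
    for _ in range(n_iters):
        if random.random() < bias:
            # Guided sample: Gaussian around the gaze point.
            sample = (gaze[0] + random.gauss(0, 1),
                      gaze[1] + random.gauss(0, 1))
        else:
            # Uniform sample over the workspace.
            sample = (random.uniform(0, 10), random.uniform(0, 10))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Extend the tree by one step toward the sample.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < step:
            path = [new]                      # goal reached: backtrack
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None

random.seed(0)
path = guided_rrt(start=(1.0, 1.0), goal=(9.0, 9.0), gaze=(9.0, 9.0))
assert path is not None and math.dist(path[-1], (9.0, 9.0)) < 0.5
```

    Biasing the sampler this way reduces the useless sampling and slow exploration the abstract describes; a smoothing pass over the returned waypoints would then produce the continuous-curvature trajectory.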

  7. Interactions between motion and form processing in the human visual system.

    Science.gov (United States)

    Mather, George; Pavan, Andrea; Bellacosa Marotti, Rosilari; Campana, Gianluca; Casco, Clara

    2013-01-01

    The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing in the well-known Gestalt principle of common fate; texture elements which share a common motion property are grouped into a single contour or texture region. However, recent research in psychophysics and neuroscience indicates that the influence of form signals on motion processing is more extensive than previously thought. First, the salience and apparent direction of moving lines depends on how the local orientation and direction of motion combine to match the receptive field properties of motion-selective neurons. Second, orientation signals generated by "motion-streaks" influence motion processing; motion sensitivity, apparent direction and adaptation are affected by simultaneously present orientation signals. Third, form signals generated by human body shape influence biological motion processing, as revealed by studies using point-light motion stimuli. Thus, form-motion integration seems to occur at several different levels of cortical processing, from V1 to STS.

  8. Sharpened cortical tuning and enhanced cortico-cortical communication contribute to the long-term neural mechanisms of visual motion perceptual learning.

    Science.gov (United States)

    Chen, Nihong; Bi, Taiyong; Zhou, Tiangang; Li, Sheng; Liu, Zili; Fang, Fang

    2015-07-15

    Much has been debated about whether the neural plasticity mediating perceptual learning takes place at the sensory or decision-making stage in the brain. To investigate this, we trained human subjects in a visual motion direction discrimination task. Behavioral performance and BOLD signals were measured before, immediately after, and two weeks after training. Parallel to subjects' long-lasting behavioral improvement, the neural selectivity in V3A and the effective connectivity from V3A to IPS (intraparietal sulcus, a motion decision-making area) exhibited a persistent increase for the trained direction. Moreover, the improvement was well explained by a linear combination of the selectivity and connectivity increases. These findings suggest that the long-term neural mechanisms of motion perceptual learning are implemented by sharpening cortical tuning to trained stimuli at the sensory processing stage, as well as by optimizing the connections between sensory and decision-making areas in the brain. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Age differences in visual-auditory self-motion perception during a simulated driving task

    Directory of Open Access Journals (Sweden)

    Robert eRamkhalawansingh

    2016-04-01

    Full Text Available Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e. optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e. engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.

  10. Increase in MST activity correlates with visual motion learning: A functional MRI study of perceptual learning.

    Science.gov (United States)

    Larcombe, Stephanie J; Kennard, Chris; Bridge, Holly

    2018-01-01

    Repeated practice of a specific task can improve visual performance, but the neural mechanisms underlying this improvement in performance are not yet well understood. Here we trained healthy participants on a visual motion task daily for 5 days in one visual hemifield. Before and after training, we used functional magnetic resonance imaging (fMRI) to measure the change in neural activity. We also imaged a control group of participants on two occasions who did not receive any task training. While in the MRI scanner, all participants completed the motion task in the trained and untrained visual hemifields separately. Following training, participants improved their ability to discriminate motion direction in the trained hemifield and, to a lesser extent, in the untrained hemifield. The amount of task learning correlated positively with the change in activity in the medial superior temporal (MST) area. MST is the anterior portion of the human motion complex (hMT+). MST changes were localized to the hemisphere contralateral to the region of the visual field, where perceptual training was delivered. Visual areas V2 and V3a showed an increase in activity between the first and second scan in the training group, but this was not correlated with performance. The contralateral anterior hippocampus and bilateral dorsolateral prefrontal cortex (DLPFC) and frontal pole showed changes in neural activity that also correlated with the amount of task learning. These findings emphasize the importance of MST in perceptual learning of a visual motion task. Hum Brain Mapp 39:145-156, 2018. © 2017 Wiley Periodicals, Inc. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  11. Motion-induced blindness and microsaccades: cause and effect

    NARCIS (Netherlands)

    Bonneh, Y.S.; Donner, T.H.; Sagi, D.; Fried, M.; Heeger, D.J.; Arieli, A.

    2010-01-01

    It has been suggested that subjective disappearance of visual stimuli results from a spontaneous reduction of microsaccade rate causing image stabilization, enhanced adaptation, and a consequent fading. In motion-induced blindness (MIB), salient visual targets disappear intermittently when

  12. Integration of visual and inertial cues in the perception of angular self-motion

    NARCIS (Netherlands)

    Winkel, K.N. de; Soyka, F.; Barnett-Cowan, M.; Bülthoff, H.H.; Groen, E.L.; Werkhoven, P.J.

    2013-01-01

    The brain is able to determine angular self-motion from visual, vestibular, and kinesthetic information. There is compelling evidence that both humans and non-human primates integrate visual and inertial (i.e., vestibular and kinesthetic) information in a statistically optimal fashion when

  13. Anatomical alterations of the visual motion processing network in migraine with and without aura.

    Directory of Open Access Journals (Sweden)

    Cristina Granziera

    2006-10-01

    Patients suffering from migraine with aura (MWA) and migraine without aura (MWoA) show abnormalities in visual motion perception during and between attacks. Whether this represents the consequences of structural changes in motion-processing networks in migraineurs is unknown. Moreover, the diagnosis of migraine relies on the patient's history, and finding differences in the brain of migraineurs might help to contribute to basic research aimed at better understanding the pathophysiology of migraine. To investigate a common potential anatomical basis for these disturbances, we used high-resolution cortical thickness measurement and diffusion tensor imaging (DTI) to examine the motion-processing network in 24 migraine patients (12 with MWA and 12 with MWoA) and 15 age-matched healthy controls (HCs). We found increased cortical thickness of motion-processing visual areas MT+ and V3A in migraineurs compared to HCs. Cortical thickness increases were accompanied by abnormalities of the subjacent white matter. In addition, DTI revealed that migraineurs have alterations in the superior colliculus and the lateral geniculate nucleus, which are also involved in visual processing. A structural abnormality in the network of motion-processing areas could account for, or be the result of, the cortical hyperexcitability observed in migraineurs. The finding in patients with both MWA and MWoA of thickness abnormalities in area V3A, previously described as a source of spreading changes involved in visual aura, raises the question as to whether a "silent" cortical spreading depression develops as well in MWoA. In addition, these experimental data may provide clinicians and researchers with a noninvasively acquirable migraine biomarker.

  14. Study of the perception of visual motion in amblyopia using functional MRI

    International Nuclear Information System (INIS)

    Lu Guangming; Zhang Zhiqiang; Zhou Wenzhen; Zheng Ling; Yin Jie; Liang Ping

    2006-01-01

    Objective: To investigate the pathophysiological mechanism of anisometropic and strabismic amblyopia by observing cortical activation under visual motion stimulation using functional MRI (fMRI). Methods: Seven patients with anisometropic amblyopia and 10 patients with strabismic amblyopia were examined on a 1.5 T MR scanner, using a paradigm in which the task and control states were a rotating and a stationary grating, respectively. The data were processed offline with SPM software and analyzed for each subject individually. An index of interocular difference of activation (IDA) was defined to quantify the difference between the activation elicited by each eye, and compared between groups with the Mann-Whitney rank sum test. Results: Bilateral occipital activation appeared in both groups of amblyopia patients. There was mild frontal lobe activation when the amblyopic eyes were stimulated, but none when the sound eyes were stimulated. With the MT area as the region of interest, activation from the sound eye was stronger than from the amblyopic eye in all 7 anisometropic amblyopia patients. Among the strabismic amblyopia patients, activation from the amblyopic eye was lower than from the sound eye in 5 patients and higher in 4, while one patient showed no activation. The IDA values of the two groups differed significantly (Z=2.382, P=0.017). Conclusion: More cortical areas are activated by the amblyopic eye than by the sound eye under monocular stimulation. The function of visual motion may be affected in anisometropic amblyopia, whereas in strabismic amblyopia it may relate to the underlying mechanism of strabismus. This suggests that, with respect to impairment of visual motion perception, the two types of amblyopia differ. (authors)

  15. Speed and accuracy of visual motion discrimination by rats.

    Directory of Open Access Journals (Sweden)

    Pamela Reinagel

    Animals must continuously evaluate sensory information to select the preferable among possible actions in a given context, including the option to wait for more information before committing to another course of action. In experimental sensory decision tasks that replicate these features, reaction time distributions can be informative about the implicit rules by which animals determine when to commit and what to do. We measured reaction times of Long-Evans rats discriminating the direction of motion in a coherent random dot motion stimulus, using a self-paced two-alternative forced-choice (2-AFC) reaction time task. Our main findings are: (1) When motion strength was constant across trials, the error trials had shorter reaction times than correct trials; in other words, accuracy increased with response latency. (2) When motion strength was varied in randomly interleaved trials, accuracy increased with motion strength, whereas reaction time decreased. (3) Accuracy increased with reaction time for each motion strength considered separately, and in the interleaved motion strength experiment overall. (4) When stimulus duration was limited, accuracy improved with stimulus duration, whereas reaction time decreased. (5) Accuracy decreased with response latency after stimulus offset. This was the case for each stimulus duration considered separately, and in the interleaved duration experiment overall. We conclude that rats integrate visual evidence over time, but in this task the time of their response is governed more by elapsed time than by a criterion for sufficient evidence.

  16. Interactions between motion and form processing in the human visual system

    Directory of Open Access Journals (Sweden)

    George Mather

    2013-05-01

    The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing, as in the well-known Gestalt principle of common fate: texture elements which share a common motion property are grouped into a single contour or texture region. However, recent research in psychophysics and neuroscience indicates that the influence of form signals on motion processing is more extensive than previously thought. First, the salience and apparent direction of moving lines depend on how the local orientation and direction of motion combine to match the receptive field properties of motion-selective neurons. Second, orientation signals generated by 'motion-streaks' influence motion processing; motion sensitivity, apparent direction and adaptation are affected by simultaneously present orientation signals. Third, form signals generated by human body shape influence biological motion processing, as revealed by studies using point-light motion stimuli. Thus form-motion integration seems to occur at several different levels of cortical processing, from V1 to STS.

  17. Simulated self-motion in a visual gravity field: sensitivity to vertical and horizontal heading in the human brain.

    Science.gov (United States)

    Indovina, Iole; Maffei, Vincenzo; Pauwels, Karl; Macaluso, Emiliano; Orban, Guy A; Lacquaniti, Francesco

    2013-05-01

    Multiple visual signals are relevant to perception of heading direction. While the role of optic flow and depth cues has been studied extensively, little is known about the visual effects of gravity on heading perception. We used fMRI to investigate the contribution of gravity-related visual cues on the processing of vertical versus horizontal apparent self-motion. Participants experienced virtual roller-coaster rides in different scenarios, at constant speed or 1g-acceleration/deceleration. Imaging results showed that vertical self-motion coherent with gravity engaged the posterior insula and other brain regions that have been previously associated with vertical object motion under gravity. This selective pattern of activation was also found in a second experiment that included rectilinear motion in tunnels, whose direction was cued by the preceding open-air curves only. We argue that the posterior insula might perform high-order computations on visual motion patterns, combining different sensory cues and prior information about the effects of gravity. Medial-temporal regions including para-hippocampus and hippocampus were more activated by horizontal motion, preferably at constant speed, consistent with a role in inertial navigation. Overall, the results suggest partially distinct neural representations of the cardinal axes of self-motion (horizontal and vertical). Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Improved motion description for action classification

    Directory of Open Access Journals (Sweden)

    Mihir Jain

    2016-01-01

    Even though the importance of explicitly integrating motion characteristics in video descriptions has been demonstrated by several recent papers on action classification, our current work concludes that adequately decomposing visual motion into dominant and residual motions, i.e., camera and scene motion, significantly improves action recognition algorithms. This holds true both for the extraction of the space-time trajectories and for the computation of descriptors. We designed a new motion descriptor, the DCS descriptor, that captures additional information on local motion patterns, enhancing results based on the differential motion scalar quantities divergence, curl, and shear. Finally, applying the recent VLAD coding technique proposed in image retrieval provides a substantial improvement for action recognition. These findings are complementary to each other, and they outperformed all previously reported results by a significant margin on three challenging datasets, Hollywood 2, HMDB51, and Olympic Sports, as reported in Jain et al. (2013). These results were further improved by Oneata et al. (2013), Wang and Schmid (2013), and Zhu et al. (2013) through the use of the Fisher vector encoding. We therefore also employ the Fisher vector in this paper, and we further enhance our approach by combining trajectories from both optical flow and compensated flow. We also provide additional details of the DCS descriptors, including visualization. To extend the evaluation, a novel dataset with 101 action classes, UCF101, was added.
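
    The scalar motion quantities the DCS descriptor builds on (divergence, curl, and shear) can be computed from a dense optical-flow field with simple finite differences. The sketch below is illustrative only: it assumes a flow field given as two nested Python lists u and v, and omits the actual descriptor's cell aggregation and normalization steps.

```python
def flow_kinematics(u, v):
    """First-order kinematic features of a dense optical-flow field (u, v).

    Uses central differences on the grid interior and returns per-pixel
    divergence, curl, and the two shear components.
    """
    h, w = len(u), len(u[0])
    div, curl, shear1, shear2 = {}, {}, {}, {}
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            du_dx = (u[y][x + 1] - u[y][x - 1]) / 2.0
            du_dy = (u[y + 1][x] - u[y - 1][x]) / 2.0
            dv_dx = (v[y][x + 1] - v[y][x - 1]) / 2.0
            dv_dy = (v[y + 1][x] - v[y - 1][x]) / 2.0
            div[(y, x)] = du_dx + dv_dy      # local expansion/contraction
            curl[(y, x)] = dv_dx - du_dy     # local rotation
            shear1[(y, x)] = du_dx - dv_dy   # hyperbolic (stretching) terms
            shear2[(y, x)] = du_dy + dv_dx
    return div, curl, shear1, shear2
```

    For a purely divergent field such as u = x, v = y, every interior pixel yields divergence 2 and zero curl and shear, as expected.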

  19. Impaired working memory for visual motion direction in schizophrenia: Absence of recency effects and association with psychopathology.

    Science.gov (United States)

    Stäblein, Michael; Sieprath, Lore; Knöchel, Christian; Landertinger, Axel; Schmied, Claudia; Ghinea, Denisa; Mayer, Jutta S; Bittner, Robert A; Reif, Andreas; Oertel-Knöchel, Viola

    2016-09-01

    Working memory (WM) impairments are a prominent neurocognitive symptom in schizophrenia (SZ) and include deficits in memory for serial order and abnormalities in serial position effects (i.e., primacy and recency effects). Former studies predominantly focused on investigating these deficits applying verbal or static visual stimuli, but little is known about WM processes that involve dynamic visual movements. We examined WM for visual motion directions, its susceptibility to distraction and the effect of serial positioning. Twenty-three patients with paranoid SZ and 23 healthy control subjects (HC) took part in the study. We conducted an adapted Sternberg-type recognition paradigm: three random dot kinematograms (RDKs) that depicted coherent visual motion were used as stimuli and a distractor stimulus was incorporated into the task. SZ patients performed significantly worse in the WM visual motion task, when a distractor stimulus was presented. While HC showed a recency effect for later RDKs, the effect was absent in SZ patients. WM deficits were associated with more severe psychopathological symptoms, poor visual and verbal learning, and a longer duration of illness. Furthermore, SZ patients showed impairments in several other neurocognitive domains. Findings suggest that early WM processing of visual motion is susceptible to interruption and that WM impairments are associated with clinical symptoms in SZ. The absence of a recency effect is discussed in respect of 3 theoretical approaches-impaired WM for serial order information, abnormalities in early visual representations (i.e., masking effects), and deficits in later visual processing (i.e., attentional blink effect). (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. Flow visualization in science and mathematics

    Energy Technology Data Exchange (ETDEWEB)

    Max, Nelson; Correa, Carlos; Muelder, Chris; Yan Shi; Chen, Cheng-Kai; Ma, Kwan-Liu, E-mail: max@cs.ucdavis.ed [Department of Computer Science, University of California, Davis 1 Shields Ave., Davis California, 95616 (United States)

    2009-07-01

    We present several methods for visualizing motion, vector fields, and flows, including polygonal surface advection, visibility driven transfer functions, feature extraction and tracking, and motion frequency analysis and enhancement. They are applied to chaotic attractors, turbulent vortices, supernovae, and seismic data.
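
    Several of the listed methods rest on advecting particles or surfaces through a vector field. A minimal forward-Euler sketch of that basic ingredient (the velocity field here is a stand-in function, not the paper's data):

```python
def advect(points, field, dt, steps):
    """Advect 2D particles through a steady vector field with forward Euler.

    `field(x, y)` returns the local velocity (vx, vy); each step moves every
    particle by dt times its local velocity. Returns the full trajectory.
    """
    trajectory = [list(points)]
    for _ in range(steps):
        points = [(x + dt * field(x, y)[0], y + dt * field(x, y)[1])
                  for (x, y) in points]
        trajectory.append(points)
    return trajectory
```

    With a uniform field (1, 0), a particle starting at the origin ends up near (0.5, 0) after five steps of dt = 0.1; real visualization systems replace forward Euler with higher-order integrators for accuracy.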

  1. Technique for Measuring Speed and Visual Motion Sensitivity in Lizards

    Science.gov (United States)

    Woo, Kevin L.; Burke, Darren

    2008-01-01

    Testing sensory characteristics on herpetological species has been difficult due to a range of properties related to physiology, responsiveness, performance ability, and the type of reinforcer used. Using the Jacky lizard as a model, we outline a successfully established procedure in which to test the visual sensitivity to motion characteristics.…

  2. The Right Hemisphere Planum Temporale Supports Enhanced Visual Motion Detection Ability in Deaf People: Evidence from Cortical Thickness

    OpenAIRE

    Shiell, Martha M.; Champoux, François; Zatorre, Robert J.

    2016-01-01

    After sensory loss, the deprived cortex can reorganize to process information from the remaining modalities, a phenomenon known as cross-modal reorganization. In blind people this cross-modal processing supports compensatory behavioural enhancements in the nondeprived modalities. Deaf people also show some compensatory visual enhancements, but a direct relationship between these abilities and cross-modally reorganized auditory cortex has only been established in an animal model, the congenita...

  3. First-person and third-person verbs in visual motion-perception regions.

    Science.gov (United States)

    Papeo, Liuba; Lingnau, Angelika

    2015-02-01

    Verb-related activity is consistently found in the left posterior lateral temporal cortex (PLTC), encompassing also regions that respond to visual-motion perception. Besides motion, those regions appear sensitive to distinctions among the entities beyond motion, including that between first- vs. third-person ("third-person bias"). In two experiments, using functional magnetic resonance imaging (fMRI), we studied whether the implied subject (first/third-person) and/or the semantic content (motor/non-motor) of verbs modulate the neural activity in the left PLTC regions responsive during basic- and biological-motion perception. In those sites, we found higher activity for verbs than for nouns. This activity was modulated by the person (but not the semantic content) of the verbs, with stronger responses to third- than first-person verbs. The third-person bias elicited by verbs supports a role of motion-processing regions in encoding information about the entity beyond (and independently from) motion, and casts the role of these regions in verb processing in a new light. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Independent and additive repetition priming of motion direction and color in visual search.

    Science.gov (United States)

    Kristjánsson, Arni

    2009-03-01

    Priming of visual search for Gabor patch stimuli, varying in color and local drift direction, was investigated. The task relevance of each feature varied between the different experimental conditions compared. When the target-defining dimension was color, a large effect of color repetition was seen, as well as a smaller effect of the repetition of motion direction. The opposite priming pattern was seen when motion direction defined the target: this time the effect of motion direction repetition was larger than that of color repetition. Finally, when neither was task relevant and the target-defining dimension was the spatial frequency of the Gabor patch, priming was seen for repetition of both color and motion direction, but the effects were smaller than in the previous two conditions. These results show that features do not necessarily have to be task relevant for priming to occur. There is little interaction between priming following repetition of color and motion: the two features show independent and additive priming effects, most likely reflecting that they are processed at separate sites in the nervous system, consistent with previous findings from neuropsychology and neurophysiology. The implications of the findings for theoretical accounts of priming in visual search are discussed.

  5. A computer-assisted test for the electrophysiological and psychophysical measurement of dynamic visual function based on motion contrast.

    Science.gov (United States)

    Wist, E R; Ehrenstein, W H; Schrauf, M

    1998-03-13

    A new test is described that allows for electrophysiological and psychophysical measurement of visual function based on motion contrast. In a computer-generated random-dot display, completely camouflaged Landolt rings become visible only when dots within the target area are moved briefly while those of the background remain stationary. Thus, detection of contours and the location of the gap in the ring rely on motion contrast (form-from-motion) instead of luminance contrast. A standard version of this test has been used to assess visual performance in relation to age, in screening professional groups (truck drivers) and in clinical groups (glaucoma patients). Aside from this standard version, the computer program easily allows for various modifications. These include the option of a synchronizing trigger signal to allow for recording of time-locked motion-onset visual-evoked responses, the reversal of target and background motion, and the displacement of random-dot targets across stationary backgrounds. In all instances, task difficulty is manipulated by changing the percentage of moving dots within the target (or background). The present test offers a short, convenient method to probe dynamic visual functions relying on suprathreshold motion-contrast stimuli and complements other routine tests of form, contrast, depth, and color vision.
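
    The display logic described above (a target defined purely by motion contrast, with difficulty controlled by the percentage of moving dots) can be sketched as follows. This is an illustrative reconstruction, not the published program, and a disc-shaped region stands in for the Landolt ring:

```python
import random

def in_target(x, y, cx=0.5, cy=0.5, r=0.2):
    # Disc-shaped target region (a stand-in for the Landolt ring).
    return (x - cx) ** 2 + (y - cy) ** 2 < r ** 2

def frame_pair(n_dots=500, coherence=1.0, dx=0.02, seed=1):
    """One update of a motion-contrast (form-from-motion) display.

    Dots inside the target region shift horizontally by dx while background
    dots stay put, so the target is defined by motion rather than luminance.
    `coherence` is the fraction of target dots that actually move, which
    controls task difficulty.
    """
    rng = random.Random(seed)
    dots = [(rng.random(), rng.random()) for _ in range(n_dots)]
    moved = [((x + dx) % 1.0, y) if in_target(x, y) and rng.random() < coherence
             else (x, y)
             for (x, y) in dots]
    return dots, moved
```

    With coherence at 1.0 every target dot moves between frames and every background dot stays fixed; lowering coherence degrades the motion-defined contour gracefully.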

  6. Laser spectroscopic visualization of hydrogen bond motions in liquid water

    Science.gov (United States)

    Bratos, S.; Leicknam, J.-Cl.; Pommeret, S.; Gallot, G.

    2004-12-01

    Ultrafast pump-probe experiments are described permitting a visualization of molecular motions in diluted HDO/D2O solutions. The experiments were realized in the mid-infrared spectral region with a time resolution of 150 fs. They were interpreted by a careful theoretical analysis, based on the correlation function approach of statistical mechanics. Combining experiment and theory, stretching motions of the OH⋯O bonds as well as HDO rotations were 'filmed' in real time. It was found that molecular rotations are the principal agent of hydrogen bond breaking and making in water. The recent literature covering the subject, including molecular dynamics simulations, is reviewed in detail.

  7. Whole-field visual motion drives swimming in larval zebrafish via a stochastic process.

    Science.gov (United States)

    Portugues, Ruben; Haesemeyer, Martin; Blum, Mirella L; Engert, Florian

    2015-05-01

    Caudo-rostral whole-field visual motion elicits forward locomotion in many organisms, including larval zebrafish. Here, we investigate how the latency to initiate this forward swimming depends on the speed of the visual motion. We show that latency is highly dependent on speed for slow speeds, where it can exceed 1.5 s, which is much longer than neuronal transduction processes. What mechanisms underlie these long latencies? We propose two alternative, biologically inspired models that could account for this latency to initiate swimming: an integrate-and-fire model, which is history dependent, and a stochastic Poisson model, which has no history dependence. We use these models to predict the behavior of larvae when presented with whole-field motion of varying speed and find that the stochastic process shows better agreement with the experimental data. Finally, we discuss possible neuronal implementations of these models. © 2015. Published by The Company of Biologists Ltd.
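
    The memoryless model favored by the authors can be illustrated with a Poisson process whose rate increases with stimulus speed: waiting times to the first bout are then exponentially distributed, which naturally yields long mean latencies at slow speeds. In this sketch the linear rate-speed mapping is a made-up assumption, not the paper's fitted relationship:

```python
import random

def swim_rate(speed, k=1.0):
    # Hypothetical mapping from stimulus speed to bout-initiation rate (Hz);
    # the paper estimates this dependence from data rather than assuming it.
    return k * speed

def mean_latency(speed, n=20000, seed=0):
    """Mean latency to the first swim bout under a Poisson initiation model.

    For a memoryless process with rate r, individual latencies are drawn
    from an exponential distribution with mean 1/r.
    """
    rng = random.Random(seed)
    r = swim_rate(speed)
    return sum(rng.expovariate(r) for _ in range(n)) / n
```

    Halving the stimulus speed roughly doubles the mean latency, and because the process is memoryless, the predicted latency is independent of stimulus history, which is the feature that distinguishes it from the integrate-and-fire alternative.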

  8. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    Science.gov (United States)

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. 
Visual working memory allows for maintaining such visual information in the mind

  9. Visual error augmentation enhances learning in three dimensions.

    Science.gov (United States)

    Sharp, Ian; Huang, Felix; Patton, James

    2011-09-02

    Because recent preliminary evidence points to the use of error augmentation (EA) for motor learning enhancements, we visually enhanced deviations from a straight line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal--rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm Maximum Perpendicular Trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions for this group and smaller errors. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removing the flip, all subjects quickly returned to baseline within 6 trials.
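
    The parenthetical claim that a 180-degree rotation about an arbitrary axis reverses 2 of the 3 coordinates follows from the rotation formula for a half turn: for a unit axis n, Rodrigues' formula reduces to R v = 2(n·v)n − v, so the component along the axis is preserved and the two orthogonal components are negated. A small sketch (illustrative, not the authors' code):

```python
def rotate180(p, axis):
    """Rotate point p by 180 degrees about a unit axis.

    For a half turn, Rodrigues' formula reduces to R v = 2 (n . v) n - v:
    the component of v along the axis survives, and the two coordinates
    orthogonal to the axis are negated.
    """
    d = sum(a * b for a, b in zip(axis, p))  # n . v
    return tuple(2 * d * a - b for a, b in zip(axis, p))
```

    Rotating (1, 2, 3) about the z-axis gives (-1, -2, 3): the z coordinate is kept and the other two are reversed, exactly the "2 of 3 coordinates" property the task exploits.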

  10. Visual error augmentation enhances learning in three dimensions

    Directory of Open Access Journals (Sweden)

    Huang Felix

    2011-09-01

    Because recent preliminary evidence points to the use of error augmentation (EA) for motor learning enhancements, we visually enhanced deviations from a straight line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal--rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm Maximum Perpendicular Trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions for this group and smaller errors. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removing the flip, all subjects quickly returned to baseline within 6 trials.

  11. Falcons pursue prey using visual motion cues: new perspectives from animal-borne cameras.

    Science.gov (United States)

    Kane, Suzanne Amador; Zamani, Marjon

    2014-01-15

    This study reports on experiments on falcons wearing miniature videocameras mounted on their backs or heads while pursuing flying prey. Videos of hunts by a gyrfalcon (Falco rusticolus), gyrfalcon (F. rusticolus)/Saker falcon (F. cherrug) hybrids and peregrine falcons (F. peregrinus) were analyzed to determine apparent prey positions on their visual fields during pursuits. These video data were then interpreted using computer simulations of pursuit steering laws observed in insects and mammals. A comparison of the empirical and modeling data indicates that falcons use cues due to the apparent motion of prey on the falcon's visual field to track and capture flying prey via a form of motion camouflage. The falcons also were found to maintain their prey's image at visual angles consistent with using their shallow fovea. These results should prove relevant for understanding the co-evolution of pursuit and evasion, as well as the development of computer models of predation and the integration of sensory and locomotion systems in biomimetic robots.

  12. Bottlenecks of motion processing during a visual glance: the leaky flask model.

    Directory of Open Access Journals (Sweden)

    Haluk Öğmen

    Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay and we used the time-constant of the exponential-decay to demarcate sensory memory from VSTM. We then decomposed performance in terms of quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck for the stimulus-encoding stage is dominated by the selection compared to the filtering function of attention. We also found that the filtering function of attention is operating mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information processing and storage bottlenecks for motion processing.

  13. Bottlenecks of motion processing during a visual glance: the leaky flask model.

    Science.gov (United States)

    Öğmen, Haluk; Ekiz, Onur; Huynh, Duong; Bedell, Harold E; Tripathy, Srimant P

    2013-01-01

    Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay and we used the time-constant of the exponential-decay to demarcate sensory memory from VSTM. We then decomposed performance in terms of quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck for the stimulus-encoding stage is dominated by the selection compared to the filtering function of attention. We also found that the filtering function of attention is operating mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information processing and storage bottlenecks for motion processing.
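
    The demarcation procedure described above (fit an exponential decay to performance as a function of cue delay, and read the sensory-memory/VSTM boundary off the time constant) can be sketched as follows. The model form, the grid search, and the synthetic numbers are illustrative assumptions, not the paper's actual fitting pipeline:

```python
import math

def decay(t, a, tau, b):
    # Partial-report performance vs cue delay: exponential decay from a
    # sensory-memory level (a + b at t = 0) down to a VSTM plateau b.
    return a * math.exp(-t / tau) + b

def fit_tau(delays, scores, a, b, candidate_taus):
    # Crude least-squares grid search over candidate time constants.
    def sse(tau):
        return sum((decay(t, a, tau, b) - s) ** 2
                   for t, s in zip(delays, scores))
    return min(candidate_taus, key=sse)
```

    With noiseless synthetic data generated at tau = 0.3 s, the grid search recovers 0.3 s exactly; with real data one would fit a, tau, and b jointly rather than fix the asymptotes.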

  14. Modulation of orientation-selective neurons by motion: when additive, when multiplicative?

    Directory of Open Access Journals (Sweden)

    Torsten Lüdge

    2014-06-01

    The recurrent interaction among orientation-selective neurons in the primary visual cortex (V1) is suited to enhance contours in a noisy visual scene. Motion is known to have a strong pop-out effect in perceiving contours, but how motion-sensitive neurons in V1 support contour detection remains largely elusive. Here we suggest how the various types of motion-sensitive neurons observed in V1 should be wired together in a micro-circuitry to optimally extract contours in the visual scene. Motion-sensitive neurons can be selective about the direction of motion occurring at some spot or respond equally to all directions (pandirectional). We show that, in the light of figure-ground segregation, direction-selective motion neurons should additively modulate the corresponding orientation-selective neurons with preferred orientation orthogonal to the motion direction. In turn, to maximally enhance contours, pandirectional motion neurons should multiplicatively modulate all orientation-selective neurons with co-localized receptive fields. This multiplicative modulation amplifies the local V1-circuitry among co-aligned orientation-selective neurons for detecting elongated contours. We suggest that the additive modulation by direction-specific motion neurons is achieved through synaptic projections to the somatic region, and the multiplicative modulation by pandirectional motion neurons through projections to the apical region of orientation-specific pyramidal neurons. For the purpose of contour detection, the V1-intrinsic integration of motion information is advantageous over a downstream integration as it exploits the recurrent V1-circuitry designed for that task.

  15. Visual Contrast Enhancement Algorithm Based on Histogram Equalization

    Science.gov (United States)

    Ting, Chih-Chung; Wu, Bing-Fei; Chung, Meng-Liang; Chiu, Chung-Cheng; Wu, Ya-Ching

    2015-01-01

    Image enhancement techniques primarily improve the contrast of an image to lend it a better appearance. One of the popular enhancement methods is histogram equalization (HE) because of its simplicity and effectiveness. However, it is rarely applied to consumer electronics products because it can cause excessive contrast enhancement and feature loss problems. These problems make the images processed by HE look unnatural and introduce unwanted artifacts in them. In this study, a visual contrast enhancement algorithm (VCEA) based on HE is proposed. VCEA considers the requirements of the human visual perception in order to address the drawbacks of HE. It effectively solves the excessive contrast enhancement problem by adjusting the spaces between two adjacent gray values of the HE histogram. In addition, VCEA reduces the effects of the feature loss problem by using the obtained spaces. Furthermore, VCEA enhances the detailed textures of an image to generate an enhanced image with better visual quality. Experimental results show that images obtained by applying VCEA have higher contrast and are more suited to human visual perception than those processed by HE and other HE-based methods. PMID:26184219
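The HE baseline that VCEA builds on maps each gray value through the image's normalized cumulative histogram. The sketch below shows only this standard HE step, not VCEA's additional adjustment of the spacing between adjacent output levels:

```python
import numpy as np

def histogram_equalize(img, levels=256):
    """Classic histogram equalization: map each gray value through the
    normalized CDF so the output occupies the full dynamic range."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]               # first nonzero CDF value
    # Standard HE transfer function, scaled to [0, levels-1].
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1)),
                  0, levels - 1).astype(np.uint8)
    return lut[img]

# A low-contrast ramp confined to [100, 140] spreads over the full range.
img = np.tile(np.linspace(100, 140, 41).astype(np.uint8), (8, 1))
out = histogram_equalize(img)
print(out.min(), out.max())  # 0 255
```

The excessive-contrast and feature-loss problems the abstract mentions arise precisely because this mapping stretches sparsely populated gray levels as aggressively as dense ones.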

  16. Visual Contrast Enhancement Algorithm Based on Histogram Equalization

    Directory of Open Access Journals (Sweden)

    Chih-Chung Ting

    2015-07-01

    Image enhancement techniques primarily improve the contrast of an image to lend it a better appearance. One of the popular enhancement methods is histogram equalization (HE) because of its simplicity and effectiveness. However, it is rarely applied to consumer electronics products because it can cause excessive contrast enhancement and feature loss problems. These problems make the images processed by HE look unnatural and introduce unwanted artifacts in them. In this study, a visual contrast enhancement algorithm (VCEA) based on HE is proposed. VCEA considers the requirements of the human visual perception in order to address the drawbacks of HE. It effectively solves the excessive contrast enhancement problem by adjusting the spaces between two adjacent gray values of the HE histogram. In addition, VCEA reduces the effects of the feature loss problem by using the obtained spaces. Furthermore, VCEA enhances the detailed textures of an image to generate an enhanced image with better visual quality. Experimental results show that images obtained by applying VCEA have higher contrast and are more suited to human visual perception than those processed by HE and other HE-based methods.

  17. The 50s cliff: a decline in perceptuo-motor learning, not a deficit in visual motion perception.

    Science.gov (United States)

    Ren, Jie; Huang, Shaochen; Zhang, Jiancheng; Zhu, Qin; Wilson, Andrew D; Snapp-Childs, Winona; Bingham, Geoffrey P

    2015-01-01

    Previously, we measured perceptuo-motor learning rates across the lifespan and found a sudden drop in learning rates between ages 50 and 60, called the "50s cliff." The task was a unimanual visual rhythmic coordination task in which participants used a joystick to oscillate one dot in a display in coordination with another dot oscillated by a computer. Participants learned to produce a coordination with a 90° relative phase relation between the dots. Learning rates for participants over 60 were half those of younger participants. Given existing evidence for visual motion perception deficits in people over 60 and the role of visual motion perception in the coordination task, it remained unclear whether the 50s cliff reflected onset of this deficit or a genuine decline in perceptuo-motor learning. The current work addressed this question. Two groups of 12 participants in each of four age ranges (20s, 50s, 60s, 70s) learned to perform a bimanual coordination of 90° relative phase. One group trained with only haptic information and the other group with both haptic and visual information about relative phase. Both groups were tested in both information conditions at baseline and post-test. If the 50s cliff was caused by an age dependent deficit in visual motion perception, then older participants in the visual group should have exhibited less learning than those in the haptic group, which should not exhibit the 50s cliff, and older participants in both groups should have performed less well when tested with visual information. Neither of these expectations was confirmed by the results, so we concluded that the 50s cliff reflects a genuine decline in perceptuo-motor learning with aging, not the onset of a deficit in visual motion perception.
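The 90° relative phase that participants learn in these coordination tasks can be quantified from the analytic signal of the two movement traces. A sketch with simulated movements (sample rate, frequency, and signal shapes are assumed for illustration):

```python
import numpy as np
from scipy.signal import hilbert

def relative_phase_deg(x, y):
    """Continuous relative phase between two oscillatory signals via the
    analytic (Hilbert) signal; returns the circular mean in degrees."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.degrees(np.angle(np.mean(np.exp(1j * dphi))))

fs = 500.0                      # sample rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)    # 10 s of movement
target = np.sin(2 * np.pi * 1.0 * t)                 # computer-driven dot
response = np.sin(2 * np.pi * 1.0 * t - np.pi / 2)   # learner lags by 90 deg
print(round(relative_phase_deg(target, response)))   # 90
```

Learning in such tasks is typically scored as the reduction, over sessions, of the deviation of this measure from the target 90°.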

  18. Peripheral visual performance enhancement by neurofeedback training.

    Science.gov (United States)

    Nan, Wenya; Wan, Feng; Lou, Chin Ian; Vai, Mang I; Rosa, Agostinho

    2013-12-01

    Peripheral visual performance is an important ability for everyone, and a positive inter-individual correlation is found between the peripheral visual performance and the alpha amplitude during the performance test. This study investigated the effect of alpha neurofeedback training on the peripheral visual performance. A neurofeedback group of 13 subjects finished 20 sessions of alpha enhancement feedback within 20 days. The peripheral visual performance was assessed by a new dynamic peripheral visual test on the first and last training day. The results revealed that the neurofeedback group showed significant enhancement of the peripheral visual performance as well as the relative alpha amplitude during the peripheral visual test. It was not the case in the non-neurofeedback control group, which performed the tests within the same time frame as the neurofeedback group but without any training sessions. These findings suggest that alpha neurofeedback training was effective in improving peripheral visual performance. To the best of our knowledge, this is the first study to show evidence for performance improvement in peripheral vision via alpha neurofeedback training.

  19. Effects of spatial attention on motion discrimination are greater in the left than right visual field.

    Science.gov (United States)

    Bosworth, Rain G; Petrich, Jennifer A F; Dobkins, Karen R

    2012-01-01

    In order to investigate differences in the effects of spatial attention between the left visual field (LVF) and the right visual field (RVF), we employed a full/poor attention paradigm using stimuli presented in the LVF vs. RVF. In addition, to investigate differences in the effects of spatial attention between the dorsal and ventral processing streams, we obtained motion thresholds (motion coherence thresholds and fine direction discrimination thresholds) and orientation thresholds, respectively. The results of this study showed negligible effects of attention on the orientation task, in either the LVF or RVF. In contrast, for both motion tasks, there was a significant effect of attention in the LVF, but not in the RVF. These data provide psychophysical evidence for greater effects of spatial attention in the LVF/right hemisphere, specifically, for motion processing in the dorsal stream. Published by Elsevier Ltd.

  20. An Assessment of a Low-Cost Visual Tracking System (VTS) to Detect and Compensate for Patient Motion During SPECT

    Science.gov (United States)

    McNamara, Joseph E.; Bruyant, Philippe; Johnson, Karen; Feng, Bing; Lehovich, Andre; Gu, Songxiang; Gennert, Michael A.; King, Michael A.

    2008-06-01

    Patient motion is inevitable in SPECT and PET because of the lengthy periods over which patients are imaged, and such motion can degrade diagnostic accuracy. The goal of our studies is to perfect a methodology for tracking and correcting patient motion when it occurs. In this paper we report on enhancements to the calibration, camera stability, accuracy of motion tracking, and temporal synchronization of a low-cost visual tracking system (VTS) we are developing. The purpose of the VTS is to track the motion of retro-reflective markers on stretchy bands wrapped about the chest and abdomen of patients. We have improved the accuracy of 3D spatial calibration by using a MATLAB optical camera calibration package with a planar calibration pattern. This allowed us to determine the intrinsic and extrinsic parameters for stereo-imaging with our CCD cameras. Locations in the VTS coordinate system are transformed to the SPECT coordinate system by a VTS/SPECT mapping using a phantom of 7 retro-reflective spheres each filled with a drop of Tc99m. We switched from pan, tilt and zoom (PTZ) network cameras to fixed network cameras to reduce the amount of camera drift. The improved stability was verified by tracking the positions of fixed retro-reflective markers on a wall. The ability of our VTS to track movement, on average, with sub-millimeter and sub-degree accuracy was established with the 7-sphere phantom for 1 cm vertical and axial steps as well as for an arbitrary rotation and translation. The difference in the time of optical image acquisition as decoded from the image headers relative to synchronization signals sent to the SPECT system was used to establish temporal synchrony between optical and list-mode SPECT acquisition. Two experiments showed better than 100 ms agreement between VTS and SPECT observed motion for three axial translations. We were able to track 3 reflective markers on an anthropomorphic phantom with a precision that allowed us to correct motion such that no
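Deriving the VTS-to-SPECT mapping from matched positions of the 7-sphere phantom is a classic rigid-registration problem. A sketch assuming a least-squares rotation-plus-translation (Kabsch) fit, which is a standard approach but not necessarily the authors' exact calibration code:

```python
import numpy as np

def fit_rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P + t
    (Kabsch). P, Q are (N, 3) matched marker positions, e.g. phantom
    spheres seen by the VTS and by SPECT."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the optimal orthogonal matrix.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Check recovery on 7 synthetic markers and a known pose.
rng = np.random.default_rng(0)
P = rng.uniform(-10, 10, (7, 3))
angle = np.radians(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
Q = P @ R_true.T + t_true
R, t = fit_rigid_transform(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

With seven well-spread markers the fit is heavily overdetermined, which is what makes sub-millimeter mapping accuracy plausible.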

  1. Asymmetry of Drosophila ON and OFF motion detectors enhances real-world velocity estimation.

    Science.gov (United States)

    Leonhardt, Aljoscha; Ammer, Georg; Meier, Matthias; Serbe, Etienne; Bahl, Armin; Borst, Alexander

    2016-05-01

    The reliable estimation of motion across varied surroundings represents a survival-critical task for sighted animals. How neural circuits have adapted to the particular demands of natural environments, however, is not well understood. We explored this question in the visual system of Drosophila melanogaster. Here, as in many mammalian retinas, motion is computed in parallel streams for brightness increments (ON) and decrements (OFF). When genetically isolated, ON and OFF pathways proved equally capable of accurately matching walking responses to realistic motion. To our surprise, detailed characterization of their functional tuning properties through in vivo calcium imaging and electrophysiology revealed stark differences in temporal tuning between ON and OFF channels. We trained an in silico motion estimation model on natural scenes and discovered that our optimized detector exhibited differences similar to those of the biological system. Thus, functional ON-OFF asymmetries in fly visual circuitry may reflect ON-OFF asymmetries in natural environments.

  2. Gravity Cues Embedded in the Kinematics of Human Motion Are Detected in Form-from-Motion Areas of the Visual System and in Motor-Related Areas.

    Science.gov (United States)

    Cignetti, Fabien; Chabeauti, Pierre-Yves; Menant, Jasmine; Anton, Jean-Luc J J; Schmitz, Christina; Vaugoyeau, Marianne; Assaiante, Christine

    2017-01-01

    The present study investigated the cortical areas engaged in the perception of graviceptive information embedded in biological motion (BM). To this end, functional magnetic resonance imaging was used to assess the cortical areas active during the observation of human movements performed under normogravity and microgravity (parabolic flight). Movements were defined by motion cues alone using point-light displays. We found that gravity modulated the activation of a restricted set of regions of the network subtending BM perception, including form-from-motion areas of the visual system (kinetic occipital region, lingual gyrus, cuneus) and motor-related areas (primary motor and somatosensory cortices). These findings suggest that compliance of observed movements with normal gravity was carried out by mapping them onto the observer's motor system and by extracting their overall form from local motion of the moving light points. We propose that judgment on graviceptive information embedded in BM can be established based on motor resonance and visual familiarity mechanisms and not necessarily by accessing the internal model of gravitational motion stored in the vestibular cortex.

  3. Visual Motion Perception

    Science.gov (United States)

    1991-08-15

    [Abstract unavailable; only OCR fragments of the report's references survive:] "…displacement limit for motion in random dots," Vision Res., 24, 293-300; Pantle, A. & K. Turano (1986) "Direct comparisons of apparent motions…"; Hicks & A.J. Pantle (1978) "Apparent movement of successively generated subjective figures," Perception, 7, 371-383; Ramachandran, V.S. & S.M. Anstis…

  4. Man-systems evaluation of moving base vehicle simulation motion cues. [human acceleration perception involving visual feedback

    Science.gov (United States)

    Kirkpatrick, M.; Brye, R. G.

    1974-01-01

    A motion-cue investigation program is reported that deals with human-factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory-cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.
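Signal-detectability analyses of forced-choice or detection data usually reduce to the sensitivity index d′. A minimal sketch (the response counts are invented for illustration, not data from the study):

```python
from statistics import NormalDist

def d_prime(hits, misses, fas, crs):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate), with the
    standard 1/(2N) correction for rates of exactly 0 or 1."""
    z = NormalDist().inv_cdf
    def rate(k, n):
        return min(max(k / n, 1 / (2 * n)), 1 - 1 / (2 * n))
    return z(rate(hits, hits + misses)) - z(rate(fas, fas + crs))

# 80 hits / 20 misses vs. 30 false alarms / 70 correct rejections.
print(round(d_prime(80, 20, 30, 70), 2))  # 1.37
```

A d′ of zero means responses carry no information about the signal; thresholds are then the stimulus levels at which d′ reaches a criterion value.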

  5. The effect of visual-motion time delays on pilot performance in a pursuit tracking task

    Science.gov (United States)

    Miller, G. K., Jr.; Riley, D. R.

    1976-01-01

    A study has been made to determine the effect of visual-motion time delays on pilot performance of a simulated pursuit tracking task. Three interrelated major effects have been identified: task difficulty, motion cues, and time delays. As task difficulty, as determined by airplane handling qualities or target frequency, increases, the amount of acceptable time delay decreases. However, when relatively complete motion cues are included in the simulation, the pilot can maintain his performance for considerably longer time delays. In addition, the number of degrees of freedom of motion employed is a significant factor.

  6. Visual motion-sensitive neurons in the bumblebee brain convey information about landmarks during a navigational task

    Directory of Open Access Journals (Sweden)

    Marcel Mertes

    2014-09-01

    Bees use visual memories to find the spatial location of previously learnt food sites. Characteristic learning flights help to acquire these memories at newly discovered foraging locations, where landmarks - salient objects in the vicinity of the goal location - can play an important role in guiding the animal’s homing behavior. Although behavioral experiments have shown that bees can use a variety of visual cues to distinguish objects as landmarks, the question of how landmark features are encoded by the visual system is still open. Recently, it could be shown that motion cues are sufficient to allow bees to localize their goal using landmarks that can hardly be discriminated from the background texture. Here, we tested the hypothesis that motion-sensitive neurons in the bee’s visual pathway provide information about such landmarks during a learning flight and might, thus, play a role in goal localization. We tracked learning flights of free-flying bumblebees (Bombus terrestris) in an arena with distinct visual landmarks, reconstructed the visual input during these flights, and replayed ego-perspective movies to tethered bumblebees while recording the activity of direction-selective wide-field neurons in their optic lobe. By comparing neuronal responses during a typical learning flight and targeted modifications of landmark properties in this movie, we demonstrate that these objects are indeed represented in the bee’s visual motion pathway. We find that object-induced responses vary little with object texture, which is in agreement with behavioral evidence. These neurons thus convey information about landmark properties that are useful for view-based homing.

  7. Impact of the motion and visual complexity of the background on players' performance in video game-like displays.

    Science.gov (United States)

    Caroux, Loïc; Le Bigot, Ludovic; Vibert, Nicolas

    2013-01-01

    The visual interfaces of virtual environments such as video games often show scenes where objects are superimposed on a moving background. Three experiments were designed to better understand the impact of the complexity and/or overall motion of two types of visual backgrounds often used in video games on the detection and use of superimposed, stationary items. The impact of background complexity and motion was assessed during two typical video game tasks: a relatively complex visual search task and a classic, less demanding shooting task. Background motion impaired participants' performance only when they performed the shooting game task, and only when the simplest of the two backgrounds was used. In contrast, and independently of background motion, performance on both tasks was impaired when the complexity of the background increased. Eye movement recordings demonstrated that most of the findings reflected the impact of low-level features of the two backgrounds on gaze control.

  8. Active contour-based visual tracking by integrating colors, shapes, and motions.

    Science.gov (United States)

    Hu, Weiming; Zhou, Xue; Li, Wei; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen

    2013-05-01

    In this paper, we present a framework for active contour-based visual tracking using level sets. The main components of our framework include contour-based tracking initialization, color-based contour evolution, adaptive shape-based contour evolution for non-periodic motions, dynamic shape-based contour evolution for periodic motions, and the handling of abrupt motions. For the initialization of contour-based tracking, we develop an optical flow-based algorithm for automatically initializing contours at the first frame. For the color-based contour evolution, Markov random field theory is used to measure correlations between values of neighboring pixels for posterior probability estimation. For adaptive shape-based contour evolution, the global shape information and the local color information are combined to hierarchically evolve the contour, and a flexible shape updating model is constructed. For the dynamic shape-based contour evolution, a shape mode transition matrix is learnt to characterize the temporal correlations of object shapes. For the handling of abrupt motions, particle swarm optimization is adopted to capture the global motion which is applied to the contour in the current frame to produce an initial contour in the next frame.

  9. Perception of visual apparent motion is modulated by a gap within concurrent auditory glides, even when it is illusory

    Directory of Open Access Journals (Sweden)

    Qingcui Wang

    2015-05-01

    Auditory and visual events often happen concurrently, and how they group together can have a strong effect on what is perceived. We investigated whether and how intra- or cross-modal temporal grouping influenced the perceptual decision of otherwise ambiguous visual apparent motion. To achieve this, we juxtaposed the auditory gap transfer illusion with the visual Ternus display. The Ternus display involves a multi-element stimulus that can induce either of two different percepts of apparent motion: ‘element motion’ or ‘group motion’. In element motion, the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary; in group motion, both disks appear to move laterally as a whole. The gap transfer illusion refers to the illusory subjective transfer of a short gap (around 100 ms) from the long glide to the short continuous glide when the two glides intersect at the temporal middle point. In our experiments, observers were required to make a perceptual discrimination of Ternus motion in the presence of concurrent auditory glides (with or without a gap inside). Results showed that a gap within a short glide imposed a remarkable effect on separating visual events and led to a dominant perception of group motion. The auditory configuration with the gap transfer illusion triggered the same auditory capture effect. Further investigations showed that a visual interval which coincided with the gap interval (50-230 ms) in the long glide was perceived to be shorter than that within both the short glide and the ‘gap-transfer’ auditory configurations at the same physical intervals (gaps). The results indicate that auditory temporal perceptual grouping takes priority over the cross-modal interaction in determining the final readout of the visual perception, and that the mechanism of selective attention to auditory events also plays a role.

  10. Visualization of a Lifeboat Motion During Lowering Along Ship’s Side

    Directory of Open Access Journals (Sweden)

    Kniat Aleksander

    2017-12-01

    This paper describes a computer program for visualizing the motion of a lifeboat lowered along a ship’s side. The program is a post-processor which reads the results of numerical simulations of the objects’ motions. These data are used to create a scene composed of 3D surfaces that visualizes the mutual spatial positions of the lifeboat, the ship’s side, and the waving water surface. Since the numerical data describe the simulation as a function of time, it is possible to display a static scene showing the simulated objects at an arbitrary instant. The program can also reproduce a sequence of scenes in the form of an animation and control its speed. The static mode allows the user to view an arbitrary cross-section of the scene, rotate and enlarge specific details, and make the image more realistic by hiding invisible lines or shading. The program is intended to enable assessment and analysis of numerical calculation results in advance of their experimental verification.

  11. Anticipating the effects of visual gravity during simulated self-motion: estimates of time-to-passage along vertical and horizontal paths.

    Science.gov (United States)

    Indovina, Iole; Maffei, Vincenzo; Lacquaniti, Francesco

    2013-09-01

    By simulating self-motion on a virtual rollercoaster, we investigated whether acceleration cued by the optic flow affected the estimate of time-to-passage (TTP) to a target. In particular, we studied the role of a visual acceleration (1 g = 9.8 m/s²) simulating the effects of gravity in the scene, by manipulating motion law (accelerated or decelerated at 1 g, constant speed) and motion orientation (vertical, horizontal). Thus, 1-g-accelerated motion in the downward direction or decelerated motion in the upward direction was congruent with the effects of visual gravity. We found that acceleration (positive or negative) is taken into account but is overestimated in magnitude in the calculation of TTP, independently of orientation. In addition, participants signaled TTP earlier when the rollercoaster accelerated downward at 1 g (as during free fall), with respect to when the same acceleration occurred along the horizontal orientation. This time shift indicates an influence of the orientation relative to visual gravity on response timing that could be attributed to the anticipation of the effects of visual gravity on self-motion along the vertical, but not the horizontal orientation. Finally, precision in TTP estimates was higher during vertical fall than when traveling at constant speed along the vertical orientation, consistent with a higher noise in TTP estimates when the motion violates gravity constraints.
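A first-order TTP estimate simply extrapolates the current speed, whereas taking a constant acceleration such as visual gravity into account shortens the predicted arrival time. A sketch of both estimates (the distance and speed values are illustrative):

```python
import math

def time_to_passage(distance, v0, a):
    """Time until a target at `distance` is reached given current speed
    v0 and constant acceleration a (e.g. a = 9.8 m/s^2 for visual
    gravity). Solves d = v0*t + 0.5*a*t^2; uses d/v0 when a == 0."""
    if a == 0:
        return distance / v0
    # Positive root of the quadratic in t.
    return (-v0 + math.sqrt(v0 ** 2 + 2 * a * distance)) / a

d, v0 = 40.0, 10.0
print(round(time_to_passage(d, v0, 0.0), 2))   # constant speed: 4.0 s
print(round(time_to_passage(d, v0, 9.8), 2))   # 1 g downward: 2.01 s
```

The gap between the two estimates is what an observer who ignores (or over-weights) acceleration would mis-time, which is the effect the study measures.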

  12. Auditory Emotional Cues Enhance Visual Perception

    Science.gov (United States)

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  13. Making the invisible visible: verbal but not visual cues enhance visual detection.

    Science.gov (United States)

    Lupyan, Gary; Spivey, Michael J

    2010-07-07

    Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.

  14. Making the invisible visible: verbal but not visual cues enhance visual detection.

    Directory of Open Access Journals (Sweden)

    Gary Lupyan

    BACKGROUND: Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. METHODOLOGY/PRINCIPAL FINDINGS: Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. CONCLUSIONS/SIGNIFICANCE: Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.

  15. Transformations of visual memory induced by implied motions of pattern elements.

    Science.gov (United States)

    Finke, R A; Freyd, J J

    1985-10-01

    Four experiments measured distortions in short-term visual memory induced by displays depicting independent translations of the elements of a pattern. In each experiment, observers saw a sequence of 4 dot patterns and were instructed to remember the third pattern and to compare it with the fourth. The first three patterns depicted translations of the dots in consistent, but separate directions. Error rates and reaction times for rejecting the fourth pattern as different from the third were substantially higher when the dots in that pattern were displaced slightly forward, in the same directions as the implied motions, compared with when the dots were displaced in the opposite, backward directions. These effects showed little variation across interstimulus intervals ranging from 250 to 2,000 ms, and did not depend on whether the displays gave rise to visual apparent motion. However, they were eliminated when the dots in the fourth pattern were displaced by larger amounts in each direction, corresponding to the dot positions in the next and previous patterns in the same inducing sequence. These findings extend our initial report of the phenomenon of "representational momentum" (Freyd & Finke, 1984a), and help to rule out alternatives to the proposal that visual memories tend to undergo, at least to some extent, the transformations implied by a prior sequence of observed events.

  16. Action Video Games Improve Direction Discrimination of Parafoveal Translational Global Motion but Not Reaction Times.

    Science.gov (United States)

    Pavan, Andrea; Boyce, Matthew; Ghin, Filippo

    2016-10-01

    Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in the fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in terms of distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field, rather than response speed. © The Author(s) 2016.
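Random dot kinematograms of the kind used to measure coherence thresholds assign a "signal" fraction of dots a common direction while the remaining "noise" dots step in random directions. A minimal per-frame update sketch (function and parameter names are illustrative, not from the study):

```python
import numpy as np

def rdk_step(xy, coherence, direction_deg, step=1.0, rng=None):
    """Advance one frame of a random dot kinematogram: a `coherence`
    fraction of dots moves in the signal direction, the rest randomly."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(xy)
    n_signal = int(round(coherence * n))
    theta = np.full(n, np.radians(direction_deg))
    theta[n_signal:] = rng.uniform(0, 2 * np.pi, n - n_signal)  # noise dots
    return xy + step * np.column_stack([np.cos(theta), np.sin(theta)])

rng = np.random.default_rng(1)
xy0 = rng.uniform(0, 100, (200, 2))
xy1 = rdk_step(xy0, coherence=0.5, direction_deg=0.0, rng=rng)
moved_right = np.isclose((xy1 - xy0)[:, 0], 1.0)
print(moved_right[:100].all())  # first 50% are signal dots -> True
```

A coherence threshold is then the smallest `coherence` at which observers reliably report the signal direction; lower thresholds, as found for action video game players, mean the direction is recovered from sparser signal.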

  17. Deficient Biological Motion Perception in Schizophrenia: Results from a Motion Noise Paradigm

    Directory of Open Access Journals (Sweden)

    Jejoong eKim

    2013-07-01

    Full Text Available Background: Schizophrenia patients exhibit deficient processing of perceptual and cognitive information. However, it is not well understood how basic perceptual deficits contribute to higher-level cognitive problems in this mental disorder. Perception of biological motion, a motion-based cognitive recognition task, relies on both basic visual motion processing and social cognitive processing, thus providing a useful paradigm to evaluate the potentially hierarchical relationship between these two levels of information processing. Methods: In this study, we designed a biological motion paradigm in which basic visual motion signals were manipulated systematically by incorporating different levels of motion noise. We measured the performance of schizophrenia patients (n=21) and healthy controls (n=22) in this biological motion perception task, as well as in coherent motion detection, theory of mind, and a widely used biological motion recognition task. Results: Schizophrenia patients performed the biological motion perception task with significantly lower accuracy than healthy controls when perceptual signals were moderately degraded by noise. A more substantial degradation of perceptual signals, through additional noise, impaired biological motion perception in both groups. Performance levels on the biological motion recognition, coherent motion detection, and theory of mind tasks were also reduced in patients. Conclusion: The results from the motion-noise biological motion paradigm indicate that, in the presence of visual motion noise, the processing of biological motion information in schizophrenia is deficient. Combined with the results of poor basic visual motion perception (coherent motion task) and biological motion recognition, the association between basic motion signals and biological motion perception suggests that social cognitive remediation should incorporate the improvement of visual motion perception.

  18. Enhancing physics demos using iPhone slow motion

    Science.gov (United States)

    Lincoln, James

    2017-12-01

    Slow motion video enhances our ability to perceive and experience the physical world. This can help students and teachers especially in cases of fast moving objects or detailed events that happen too quickly for the eye to follow. As often as possible, demonstrations should be performed by the students themselves and luckily many of them will already have this technology in their pockets. The "S" series of iPhone has the slow motion video feature standard, which also includes simultaneous sound recording (somewhat unusual among slow motion cameras). In this article I share some of my experiences using this feature and provide advice on how to successfully use this technology in the classroom.

  19. On-chip visual perception of motion: a bio-inspired connectionist model on FPGA.

    Science.gov (United States)

    Torres-Huitzil, César; Girau, Bernard; Castellanos-Sánchez, Claudio

    2005-01-01

    Visual motion provides useful information for understanding the dynamics of a scene, allowing intelligent systems to interact with their environment. Motion computation is usually constrained by real-time requirements that call for the design and implementation of specific hardware architectures. In this paper, the design of a hardware architecture for a bio-inspired neural model of motion estimation is presented. The motion estimation is based on a strongly localized bio-inspired connectionist model with a particular adaptation of spatio-temporal Gabor-like filtering. The architecture consists of three main modules that perform spatial, temporal, and excitatory-inhibitory connectionist processing. The biomimetic architecture is modeled, simulated, and validated in VHDL. Synthesis results on a Field Programmable Gate Array (FPGA) device show that real-time performance is achievable at an affordable silicon area.

  20. Suppressive mechanisms in visual motion processing: From perception to intelligence.

    Science.gov (United States)

    Tadin, Duje

    2015-10-01

    Perception operates on an immense amount of incoming information that greatly exceeds the brain's processing capacity. Because of this fundamental limitation, the ability to suppress irrelevant information is a key determinant of perceptual efficiency. Here, I will review a series of studies investigating suppressive mechanisms in visual motion processing, namely perceptual suppression of large, background-like motions. These spatial suppression mechanisms are adaptive, operating only when sensory inputs are sufficiently robust to guarantee visibility. Converging correlational and causal evidence links these behavioral results with inhibitory center-surround mechanisms, notably those in cortical area MT. Spatial suppression is abnormally weak in several special populations, including the elderly and individuals with schizophrenia, a deficit that is evidenced by better-than-normal direction discrimination of large moving stimuli. Theoretical work shows that this abnormal weakening of spatial suppression should result in motion segregation deficits, but direct behavioral support for this hypothesis is lacking. Finally, I will argue that the ability to suppress information is a fundamental neural process that applies not only to perception but also to cognition in general. Supporting this argument, I will discuss recent research showing that individual differences in spatial suppression of motion signals strongly predict individual variations in IQ scores. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. The efficacy of airflow and seat vibration on reducing visually induced motion sickness

    NARCIS (Netherlands)

    D’Amour, Sarah; Bos, Jelte E.; Keshavarz, Behrang

    2017-01-01

    Visually induced motion sickness (VIMS) is a well-known sensation in virtual environments and simulators, typically characterized by a variety of symptoms such as pallor, sweating, dizziness, fatigue, and/or nausea. Numerous methods to reduce VIMS have been previously introduced; however, a reliable

  2. Beta, but not gamma, band oscillations index visual form-motion integration.

    Directory of Open Access Journals (Sweden)

    Charles Aissani

    Full Text Available Electrophysiological oscillations in different frequency bands co-occur with perceptual, motor, and cognitive processes, but their function and respective contributions to these processes need further investigation. Here, we recorded MEG signals and searched for percept-related modulations of alpha, beta, and gamma band activity during a perceptual form/motion integration task. Participants reported their bound or unbound perception of ambiguously moving displays that could either be seen as a whole square-like shape moving along a Lissajous figure (bound percept) or as pairs of bars oscillating independently along cardinal axes (unbound percept). We found that beta (15-25 Hz), but not gamma (55-85 Hz), oscillations index perceptual states at the individual and group level. The gamma band activity found in the occipital lobe, although significantly higher during visual stimulation than during baseline, is similar across perceptual states. Similarly, the decrease in alpha activity during visual stimulation does not differ between percepts. Trial-by-trial classification of perceptual reports based on beta band oscillations was significant in most observers, further supporting the view that modulation of beta power reliably indexes the perceptual integration of form/motion stimuli, even at the individual level.
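    Trial-by-trial classification from band power can be illustrated with a toy example. The sketch below is not the authors' MEG pipeline: it simulates per-trial beta power for the two percepts and classifies trials with a simple midpoint threshold; all distributions and parameter values are assumptions.

    ```python
    import random
    import statistics

    random.seed(0)

    # Simulated per-trial log beta-band (15-25 Hz) power, slightly higher
    # for "bound" than "unbound" percepts; all values are assumptions.
    bound = [random.gauss(1.2, 0.5) for _ in range(200)]
    unbound = [random.gauss(0.8, 0.5) for _ in range(200)]

    # Minimal classifier: threshold at the midpoint of the class means
    # (a stand-in for the study's actual classification procedure).
    thr = (statistics.mean(bound) + statistics.mean(unbound)) / 2
    hits = sum(p > thr for p in bound) + sum(p <= thr for p in unbound)
    accuracy = hits / (len(bound) + len(unbound))
    print(round(accuracy, 2))  # above chance (0.5) when the means differ
    ```

    Above-chance accuracy of such a classifier is what "significant trial-by-trial classification" amounts to here: the percept can be read out from beta power on single trials.
    
    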

  3. Visual Search for Motion-Form Conjunctions: Selective Attention to Movement Direction.

    Science.gov (United States)

    Von Mühlenen, Adrian; Müller, Hermann J

    1999-07-01

    In 2 experiments requiring visual search for conjunctions of motion and form, the authors reinvestigated whether motion-based filtering (e.g., P. McLeod, J. Driver, Z. Dienes, & J. Crisp, 1991) is direction selective and whether cuing of the target direction promotes efficient search performance. In both experiments, the authors varied the number of movement directions in the display and the predictability of the target direction. Search was less efficient when items moved in multiple (2, 3, and 4) directions as compared with just 1 direction. Furthermore, precuing of the target direction facilitated the search, even with "wrap-around" displays, relatively more when items moved in multiple directions. The authors proposed 2 principles to explain that pattern of effects: (a) interference on direction computation between items moving in different directions (e.g., N. Qian & R. A. Andersen, 1994) and (b) selective direction tuning of motion detectors involving a receptive-field contraction (cf. J. Moran & R. Desimone, 1985; S. Treue & J. H. R. Maunsell, 1996).

  4. Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation.

    Science.gov (United States)

    Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina

    2017-01-01

    Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.

  5. Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation.

    Directory of Open Access Journals (Sweden)

    Andrew J Kolarik

    Full Text Available Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.

  6. Art in Motion: A Sailboat Regatta

    Science.gov (United States)

    Angle, Julie; Foster, Gayla

    2011-01-01

    The activity described here uses the creative natures of visual art and music to enhance students' potential for creativity while increasing their understanding of the science associated with force and motion. Students design, test, and redesign a sailboat vehicle; collect data; make interpretations; and then defend their design. Music is used to…

  7. The effect of visual-motion time-delays on pilot performance in a simulated pursuit tracking task

    Science.gov (United States)

    Miller, G. K., Jr.; Riley, D. R.

    1977-01-01

    An experimental study was made to determine the effect on pilot performance of time delays in the visual and motion feedback loops of a simulated pursuit tracking task. Three major interrelated factors were identified: task difficulty either in the form of airplane handling qualities or target frequency, the amount and type of motion cues, and time delay itself. In general, the greater the task difficulty, the smaller the time delay that could exist without degrading pilot performance. Conversely, the greater the motion fidelity, the greater the time delay that could be tolerated. The effect of motion was, however, pilot dependent.

  8. Visual-vestibular integration motion perception reporting

    Science.gov (United States)

    Harm, Deborah L.; Reschke, Millard R.; Parker, Donald E.

    1999-01-01

    the development of preflight and in-flight training to help astronauts acquire and maintain a dual adaptive state. Despite the considerable experience with, and use of, an extensive set of countermeasures in the Russian space program, SMS and perceptual disturbances remain an unresolved problem on long-term flights. Reliable, valid perceptual reports are required to develop and refine stimulus rearrangements presented in the PAT devices currently being developed as countermeasures for the prevention of motion sickness and perceptual disturbances during spaceflight, and to ensure a less hazardous return to Earth. Prior to STS-8, crew member descriptions of their perceptual experiences were, at best, anecdotal. Crew members were not schooled in the physiology or psychology of sensory perception, nor were they exposed to the appropriate professional vocabulary. However, beginning with the STS-8 Shuttle flight, a serious effort was initiated to teach astronauts a systematic method to classify and quantify their perceptual responses in space, during entry, and after flight. Understanding, categorizing, and characterizing perceptual responses to spaceflight has been greatly enhanced by implementation of that training system.

  9. Three-dimensional visualization of myocardial motion and blood flow with cine-MR images

    International Nuclear Information System (INIS)

    Oshiro, Osamu; Matani, Ayumu; Chihara, Kunihiro; Mikami, Taisei; Kitabatake, Akira.

    1997-01-01

    This paper describes a three-dimensional (3D) reconstruction and presentation method for visualizing myocardial motion and blood flow in the heart using cine-MR (magnetic resonance) images. First, the myocardial and blood regions were segmented using threshold gray values. Second, intermediate slices were interpolated linearly to reconstruct a 3D static image. Finally, a 3D dynamic image was presented by displaying the 3D static images sequentially. The experimental results indicate that this method makes it possible to visualize not only normal but also abnormal blood flow in cine mode. (author)

  10. Visual distinctiveness can enhance recency effects.

    Science.gov (United States)

    Bornstein, B H; Neely, C B; LeCompte, D C

    1995-05-01

    Experimental efforts to meliorate the modality effect have included attempts to make the visual stimulus more distinctive. McDowd and Madigan (1991) failed to find an enhanced recency effect in serial recall when the last item was made more distinct in terms of its color. In an attempt to extend this finding, three experiments were conducted in which visual distinctiveness was manipulated in a different manner, by combining the dimensions of physical size and coloration (i.e., whether the stimuli were solid or outlined in relief). Contrary to previous findings, recency was enhanced when the size and coloration of the last item differed from the other items in the list, regardless of whether the "distinctive" item was larger or smaller than the remaining items. The findings are considered in light of other research that has failed to obtain a similar enhanced recency effect, and their implications for current theories of the modality effect are discussed.

  11. Effects of virtual speaker density and room reverberation on spatiotemporal thresholds of audio-visual motion coherence.

    Directory of Open Access Journals (Sweden)

    Narayan Sankaran

    Full Text Available The present study examined the effects of spatial sound-source density and reverberation on the spatiotemporal window for audio-visual motion coherence. Three different acoustic stimuli were generated in Virtual Auditory Space: two acoustically "dry" stimuli via the measurement of anechoic head-related impulse responses recorded at either 1° or 5° spatial intervals (Experiment 1), and a reverberant stimulus rendered from binaural room impulse responses recorded at 5° intervals in situ in order to capture reverberant acoustics in addition to head-related cues (Experiment 2). A moving visual stimulus with invariant localization cues was generated by sequentially activating LEDs along the same radial path as the virtual auditory motion. Stimuli were presented at 25°/s, 50°/s and 100°/s with a random spatial offset between audition and vision. In a 2AFC task, subjects judged the leading modality (auditory or visual). No significant differences were observed in the spatial threshold based on the point of subjective equivalence (PSE) or the slope of the psychometric functions (β) across the three acoustic conditions. Additionally, neither the PSE nor β differed significantly across velocity, suggesting a fixed spatial window of audio-visual separation. The findings suggest that there was no loss of spatial information accompanying the reduction in spatial cues and the reverberation levels tested, and establish a perceptual measure for assessing the veracity of motion generated from discrete locations and in echoic environments.

  12. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. How to provide suitable assistance for the human operator is therefore an important issue. Given that the use of customized video analysis systems is increasingly established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive, and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface can help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design that aims to combine the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch tracking after track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  13. Internal models and prediction of visual gravitational motion.

    Science.gov (United States)

    Zago, Myrka; McIntyre, Joseph; Senot, Patrice; Lacquaniti, Francesco

    2008-06-01

    Baurès et al. [Baurès, R., Benguigui, N., Amorim, M.-A., & Siegler, I. A. (2007). Intercepting free falling objects: Better use Occam's razor than internalize Newton's law. Vision Research, 47, 2982-2991] rejected the hypothesis that free-falling objects are intercepted using a predictive model of gravity. They argued instead for "a continuous guide for action timing" based on visual information updated till target capture. Here we show that their arguments are flawed, because they fail to consider the impact of sensori-motor delays on interception behaviour and the need for neural compensation of such delays. When intercepting a free-falling object, the delays can be overcome by a predictive model of the effects of gravity on target motion.

  14. MotionExplorer: exploratory search in human motion capture data based on hierarchical aggregation.

    Science.gov (United States)

    Bernard, Jürgen; Wilhelm, Nils; Krüger, Björn; May, Thorsten; Schreck, Tobias; Kohlhammer, Jörn

    2013-12-01

    We present MotionExplorer, an exploratory search and analysis system for sequences of human motion in large motion capture data collections. This special type of multivariate time series data is relevant in many research fields including medicine, sports and animation. Key tasks in working with motion data include analysis of motion states and transitions, and synthesis of motion vectors by interpolation and combination. In the practice of research and application of human motion data, challenges exist in providing visual summaries and drill-down functionality for handling large motion data collections. We find that this domain can benefit from appropriate visual retrieval and analysis support to handle these tasks in presence of large motion data. To address this need, we developed MotionExplorer together with domain experts as an exploratory search system based on interactive aggregation and visualization of motion states as a basis for data navigation, exploration, and search. Based on an overview-first type visualization, users are able to search for interesting sub-sequences of motion based on a query-by-example metaphor, and explore search results by details on demand. We developed MotionExplorer in close collaboration with the targeted users who are researchers working on human motion synthesis and analysis, including a summative field study. Additionally, we conducted a laboratory design study to substantially improve MotionExplorer towards an intuitive, usable and robust design. MotionExplorer enables the search in human motion capture data with only a few mouse clicks. The researchers unanimously confirm that the system can efficiently support their work.

  15. Entropic Movement Complexity Reflects Subjective Creativity Rankings of Visualized Hand Motion Trajectories

    Science.gov (United States)

    Peng, Zhen; Braun, Daniel A.

    2015-01-01

    In a previous study we have shown that human motion trajectories can be characterized by translating continuous trajectories into symbol sequences with well-defined complexity measures. Here we test the hypothesis that the motion complexity individuals generate in their movements might be correlated to the degree of creativity assigned by a human observer to the visualized motion trajectories. We asked participants to generate 55 novel hand movement patterns in virtual reality, where each pattern had to be repeated 10 times in a row to ensure reproducibility. This allowed us to estimate a probability distribution over trajectories for each pattern. We assessed motion complexity not only by the previously proposed complexity measures on symbolic sequences, but we also propose two novel complexity measures that can be directly applied to the distributions over trajectories based on the frameworks of Gaussian Processes and Probabilistic Movement Primitives. In contrast to previous studies, these new methods allow computing complexities of individual motion patterns from very few sample trajectories. We compared the different complexity measures to how a group of independent jurors rank ordered the recorded motion trajectories according to their personal creativity judgment. We found three entropic complexity measures that correlate significantly with human creativity judgment and discuss differences between the measures. We also test whether these complexity measures correlate with individual creativity in divergent thinking tasks, but do not find any consistent correlation. Our results suggest that entropic complexity measures of hand motion may reveal domain-specific individual differences in kinesthetic creativity. PMID:26733896
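    The symbolic complexity measures the abstract refers to can be illustrated with Shannon entropy over a symbol sequence. This is a generic sketch, not the authors' exact measure; discretizing the continuous trajectory into symbols is assumed to have happened upstream.

    ```python
    import math
    from collections import Counter

    def shannon_entropy(symbols):
        # Shannon entropy (bits per symbol) of a symbol sequence obtained
        # by discretizing a movement trajectory (discretization scheme
        # assumed, e.g. binning movement directions into labeled regions).
        n = len(symbols)
        return -sum((c / n) * math.log2(c / n)
                    for c in Counter(symbols).values())

    # A repetitive pattern scores lower than a more varied one.
    print(shannon_entropy("ABABABAB"))  # 1.0
    print(shannon_entropy("ABCDABCD"))  # 2.0
    ```

    Under this kind of measure, movement patterns that visit more symbols more evenly score as more complex, which is the quantity the study correlates with observers' creativity rankings.
    
    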

  16. Prefrontal Neurons Represent Motion Signals from Across the Visual Field But for Memory-Guided Comparisons Depend on Neurons Providing These Signals.

    Science.gov (United States)

    Wimmer, Klaus; Spinelli, Philip; Pasternak, Tatiana

    2016-09-07

    Visual decisions often involve comparisons of sequential stimuli that can appear at any location in the visual field. The lateral prefrontal cortex (LPFC) in nonhuman primates, shown to play an important role in such comparisons, receives information about contralateral stimuli directly from sensory neurons in the same hemisphere, and about ipsilateral stimuli indirectly from neurons in the opposite hemisphere. This asymmetry of sensory inputs into the LPFC poses the question of whether and how its neurons incorporate sensory information arriving from the two hemispheres during memory-guided comparisons of visual motion. We found that, although responses of individual LPFC neurons to contralateral stimuli were stronger and emerged 40 ms earlier, they carried remarkably similar signals about motion direction in the two hemifields, with comparable direction selectivity and similar direction preferences. This similarity was also apparent around the time of the comparison between the current and remembered stimulus because both ipsilateral and contralateral responses showed similar signals reflecting the remembered direction. However, despite the availability in the LPFC of motion information from across the visual field, these "comparison effects" required the comparison stimuli to appear at the same retinal location. This strict dependence on spatial overlap of the comparison stimuli suggests participation of neurons with localized receptive fields in the comparison process. These results suggest that while the LPFC incorporates many key aspects of the information arriving from sensory neurons residing in opposite hemispheres, it continues relying on interactions with these neurons at the time of generating signals leading to successful perceptual decisions. Visual decisions often involve comparisons of sequential visual motion that can appear at any location in the visual field. We show that during such comparisons, the lateral prefrontal cortex (LPFC) contains

  17. Comparison of visual biofeedback system with a guiding waveform and abdomen-chest motion self-control system for respiratory motion management

    International Nuclear Information System (INIS)

    Nakajima, Yujiro; Kadoya, Noriyuki; Kanai, Takayuki; Ito, Kengo; Sato, Kiyokazu; Dobashi, Suguru; Yamamoto, Takaya; Ishikawa, Yojiro; Matsushita, Haruo; Takeda, Ken; Jingu, Keiichi

    2016-01-01

    Irregular breathing can influence the outcome of 4D computed tomography imaging and cause artifacts. Visual biofeedback systems with a patient-specific guiding waveform are known to reduce respiratory irregularities. In Japan, abdomen and chest motion self-control devices (Abches), representing a simpler visual coaching technique without a guiding waveform, are used instead; however, no studies have compared these two systems to date. Here, we evaluate the effectiveness of respiratory coaching in reducing respiratory irregularities by comparing the two respiratory management systems. We collected data from 11 healthy volunteers. Bar and wave models were used as visual biofeedback systems. Abches consisted of a respiratory indicator marking the end of each expiration and inspiration motion. Respiratory variation was quantified as the root mean squared error (RMSE) of the displacement and period of the breathing cycles. All coaching techniques reduced respiratory variation compared with free-breathing. Displacement RMSEs were 1.43 ± 0.84, 1.22 ± 1.13, 1.21 ± 0.86 and 0.98 ± 0.47 mm for free-breathing, Abches, the bar model and the wave model, respectively. Period RMSEs were 0.48 ± 0.42, 0.33 ± 0.31, 0.23 ± 0.18 and 0.17 ± 0.05 s for free-breathing, Abches, the bar model and the wave model, respectively. The average reductions in displacement and period RMSE compared with the wave model were 27% and 47%, respectively. For variation in both displacement and period, the wave model was superior to the other techniques. Our results show that visual biofeedback combined with a wave model could provide clinical benefits in respiratory management, although all techniques were able to reduce respiratory irregularities.
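    The RMSE measures used here are straightforward to compute once displacement samples or cycle periods have been extracted from the breathing trace. A minimal sketch (pairing each sample against a reference such as the guiding waveform is an assumption, not necessarily the paper's exact definition):

    ```python
    import math

    def rmse(values, reference):
        # Root mean squared error of breathing-trace samples (e.g. marker
        # displacement in mm, or cycle periods in s) against a reference
        # such as the guiding waveform; this pairing is an assumption.
        return math.sqrt(sum((v - r) ** 2 for v, r in zip(values, reference))
                         / len(values))

    periods = [3.9, 4.2, 4.1, 3.8]  # observed breathing-cycle periods (s)
    target = [4.0, 4.0, 4.0, 4.0]   # guided period (s)
    err = rmse(periods, target)
    print(round(err, 3))  # 0.158
    ```

    Lower RMSE values, as reported for the wave model, correspond to breathing that follows the guide more consistently in both amplitude and timing.
    
    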

  18. The influence of visual motion on interceptive actions and perception.

    Science.gov (United States)

    Marinovic, Welber; Plooy, Annaliese M; Arnold, Derek H

    2012-05-01

    Visual information is an essential guide when interacting with moving objects, yet it can also be deceiving. For instance, motion can induce illusory position shifts, such that a moving ball can seem to have bounced past its true point of contact with the ground. Some evidence suggests illusory motion-induced position shifts bias pointing tasks to a greater extent than they do perceptual judgments. This, however, appears at odds with other findings and with our success when intercepting moving objects. Here we examined the accuracy of interceptive movements and of perceptual judgments in relation to simulated bounces. Participants were asked to intercept a moving disc at its bounce location by positioning a virtual paddle, and then to report where the disc had landed. Results showed that interceptive actions were accurate whereas perceptual judgments were inaccurate, biased in the direction of motion. Successful interceptions necessitated accurate information concerning both the location and timing of the bounce, so motor planning evidently had privileged access to an accurate forward model of bounce timing and location. This would explain why people can be accurate when intercepting a moving object, but lack insight into the accurate information that had guided their actions when asked to make a perceptual judgment. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Higher-order neural processing tunes motion neurons to visual ecology in three species of hawkmoths.

    Science.gov (United States)

    Stöckl, A L; O'Carroll, D; Warrant, E J

    2017-06-28

    To sample information optimally, sensory systems must adapt to the ecological demands of each animal species. These adaptations can occur peripherally, in the anatomical structures of sensory organs and their receptors, and centrally, as higher-order neural processing in the brain. While a rich body of investigations has focused on peripheral adaptations, our understanding of central mechanisms is sparse. We quantified how peripheral adaptations in the eyes, and central adaptations in the wide-field motion vision system, set the trade-off between resolution and sensitivity in three species of hawkmoths active at very different light levels: the nocturnal Deilephila elpenor, the crepuscular Manduca sexta, and the diurnal Macroglossum stellatarum. Using optical measurements and physiological recordings from the photoreceptors and wide-field motion neurons in the lobula complex, we demonstrate that all three species use spatial and temporal summation to improve visual performance in dim light. The diurnal Macroglossum relies least on summation, but can only see at brighter intensities. Manduca, with large sensitive eyes, relies less on neural summation than the smaller-eyed Deilephila, but both species attain similar visual performance at nocturnal light levels. Our results reveal how the visual systems of these three hawkmoth species are intimately matched to their visual ecologies. © 2017 The Author(s).

  20. The fate of task-irrelevant visual motion: perceptual load versus feature-based attention.

    Science.gov (United States)

    Taya, Shuichiro; Adams, Wendy J; Graf, Erich W; Lavie, Nilli

    2009-11-18

    We tested contrasting predictions derived from perceptual load theory and from recent feature-based selection accounts. Observers viewed moving, colored stimuli and performed low or high load tasks associated with one stimulus feature, either color or motion. The resultant motion aftereffect (MAE) was used to evaluate attentional allocation. We found that task-irrelevant visual features received less attention than co-localized task-relevant features of the same objects. Moreover, when color and motion features were co-localized yet perceived to belong to two distinct surfaces, feature-based selection was further increased at the expense of object-based co-selection. Load theory predicts that the MAE for task-irrelevant motion would be reduced with a higher load color task. However, this was not seen for co-localized features; perceptual load only modulated the MAE for task-irrelevant motion when this was spatially separated from the attended color location. Our results suggest that perceptual load effects are mediated by spatial selection and do not generalize to the feature domain. Feature-based selection operates to suppress processing of task-irrelevant, co-localized features, irrespective of perceptual load.

  1. Evaluation of adaptation to visually induced motion sickness based on the maximum cross-correlation between pulse transmission time and heart rate

    Directory of Open Access Journals (Sweden)

    Chiba Shigeru

    2007-09-01

Background: Computer graphics and virtual reality techniques are useful for developing automatic and effective rehabilitation systems. However, virtual environments that present unstable visual images on a wide-field screen or a head-mounted display tend to induce motion sickness. Motion sickness induced by a rehabilitation system not only inhibits effective training but may also harm patients' health. Few studies have objectively evaluated the effects of repeated exposure to these stimuli on humans. The purpose of this study was to investigate adaptation to visually induced motion sickness using physiological data. Methods: An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of motion sickness using a subjective score and the physiological index ρmax, defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time, which is considered to reflect autonomic nervous activity. Results: The results showed adaptation to visually induced motion sickness with repeated presentation of the same image, in both the subjective and the objective indices. However, there were some subjects whose intensity of sickness increased. It was also possible to identify the parts of the video image related to motion sickness by analyzing changes in ρmax over time. Conclusion: The physiological index ρmax will be a good index for assessing the adaptation process to visually induced motion sickness and may be useful in checking the safety of rehabilitation systems that use new image technologies.
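
    The ρmax index described in this record can be sketched directly. The following is a minimal Python implementation under the assumption of uniformly sampled, equal-length heart-rate and pulse-transmission-time series, searching integer lags; the paper's exact windowing and preprocessing may differ.

    ```python
    import numpy as np

    def rho_max(heart_rate, ptt, max_lag=30):
        """Maximum absolute cross-correlation coefficient between two
        physiological time series, searched over lags of +/- max_lag
        samples. Both inputs are z-scored before correlating."""
        hr = (heart_rate - heart_rate.mean()) / heart_rate.std()
        pt = (ptt - ptt.mean()) / ptt.std()
        best = 0.0
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                a, b = hr[lag:], pt[:len(pt) - lag]
            else:
                a, b = hr[:lag], pt[-lag:]
            r = np.corrcoef(a, b)[0, 1]
            best = max(best, abs(r))
        return best
    ```

    Tracking this index over successive exposures to the same video, as in the study, would then indicate whether autonomic coupling (and hence sickness) diminishes with repetition.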

  2. The effect of internal and external fields of view on visually induced motion sickness

    NARCIS (Netherlands)

    Bos, J.E.; Vries, S.C. de; Emmerik, M.L. van; Groen, E.L.

    2010-01-01

    Field of view (FOV) is said to affect visually induced motion sickness. FOV, however, is characterized by an internal setting used by the graphics generator (iFOV) and an external factor determined by screen size and viewing distance (eFOV). We hypothesized that especially the incongruence between

  3. The influence of naturalistic, directionally non-specific motion on the spatial deployment of visual attention in right-hemispheric stroke.

    Science.gov (United States)

    Cazzoli, Dario; Hopfner, Simone; Preisig, Basil; Zito, Giuseppe; Vanbellingen, Tim; Jäger, Michael; Nef, Tobias; Mosimann, Urs; Bohlhalter, Stephan; Müri, René M; Nyffeler, Thomas

    2016-11-01

    An impairment of the spatial deployment of visual attention during exploration of static (i.e., motionless) stimuli is a common finding after an acute, right-hemispheric stroke. However, less is known about how these deficits: (a) are modulated through naturalistic motion (i.e., without directional, specific spatial features); and, (b) evolve in the subacute/chronic post-stroke phase. In the present study, we investigated free visual exploration in three patient groups with subacute/chronic right-hemispheric stroke and in healthy subjects. The first group included patients with left visual neglect and a left visual field defect (VFD), the second patients with a left VFD but no neglect, and the third patients without neglect or VFD. Eye movements were measured in all participants while they freely explored a traffic scene without (static condition) and with (dynamic condition) naturalistic motion, i.e., cars moving from the right or left. In the static condition, all patient groups showed similar deployment of visual exploration (i.e., as measured by the cumulative fixation duration) as compared to healthy subjects, suggesting that recovery processes took place, with normal spatial allocation of attention. However, the more demanding dynamic condition with moving cars elicited different re-distribution patterns of visual attention, quite similar to those typically observed in acute stroke. Neglect patients with VFD showed a significant decrease of visual exploration in the contralesional space, whereas patients with VFD but no neglect showed a significant increase of visual exploration in the contralesional space. No differences, as compared to healthy subjects, were found in patients without neglect or VFD. These results suggest that naturalistic motion, without directional, specific spatial features, may critically influence the spatial distribution of visual attention in subacute/chronic stroke patients. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Transcranial magnetic stimulation reveals the content of visual short-term memory in the visual cortex.

    Science.gov (United States)

    Silvanto, Juha; Cattaneo, Zaira

    2010-05-01

Cortical areas involved in sensory analysis are also believed to be involved in short-term storage of that sensory information. Here we investigated whether transcranial magnetic stimulation (TMS) can reveal the content of visual short-term memory (VSTM) by bringing this information to visual awareness. Subjects were presented with two random-dot displays (moving either to the left or to the right) and they were required to maintain one of these in VSTM. In Experiment 1, TMS was applied over the motion-selective area V5/MT+ above phosphene threshold during the maintenance phase. The reported phosphene contained motion features of the memory item, when the phosphene spatially overlapped with memory item. Specifically, phosphene motion was enhanced when the memory item moved in the same direction as the subjects' V5/MT+ baseline phosphene, whereas it was reduced when the motion direction of the memory item was incongruent with that of the baseline V5/MT+ phosphene. There was no effect on phosphene reports when there was no spatial overlap between the phosphene and the memory item. In Experiment 2, VSTM maintenance did not influence the appearance of phosphenes induced from the lateral occipital region. These interactions between VSTM maintenance and phosphene appearance demonstrate that activity in V5/MT+ reflects the motion qualities of items maintained in VSTM. Furthermore, these results also demonstrate that information in VSTM can modulate the pattern of visual activation reaching awareness, providing evidence for the view that overlapping neuronal populations are involved in conscious visual perception and VSTM. Copyright © 2010. Published by Elsevier Inc.

  5. The direct, not V1-mediated, functional influence between the thalamus and middle temporal complex in the human brain is modulated by the speed of visual motion.

    Science.gov (United States)

    Gaglianese, A; Costagli, M; Ueno, K; Ricciardi, E; Bernardi, G; Pietrini, P; Cheng, K

    2015-01-22

    The main visual pathway that conveys motion information to the middle temporal complex (hMT+) originates from the primary visual cortex (V1), which, in turn, receives spatial and temporal features of the perceived stimuli from the lateral geniculate nucleus (LGN). In addition, visual motion information reaches hMT+ directly from the thalamus, bypassing the V1, through a direct pathway. We aimed at elucidating whether this direct route between LGN and hMT+ represents a 'fast lane' reserved to high-speed motion, as proposed previously, or it is merely involved in processing motion information irrespective of speeds. We evaluated functional magnetic resonance imaging (fMRI) responses elicited by moving visual stimuli and applied connectivity analyses to investigate the effect of motion speed on the causal influence between LGN and hMT+, independent of V1, using the Conditional Granger Causality (CGC) in the presence of slow and fast visual stimuli. Our results showed that at least part of the visual motion information from LGN reaches hMT+, bypassing V1, in response to both slow and fast motion speeds of the perceived stimuli. We also investigated whether motion speeds have different effects on the connections between LGN and functional subdivisions within hMT+: direct connections between LGN and MT-proper carry mainly slow motion information, while connections between LGN and MST carry mainly fast motion information. The existence of a parallel pathway that connects the LGN directly to hMT+ in response to both slow and fast speeds may explain why MT and MST can still respond in the presence of V1 lesions. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  6. Motion-related resource allocation in dynamic wireless visual sensor network environments.

    Science.gov (United States)

    Katsenou, Angeliki V; Kondi, Lisimachos P; Parsopoulos, Konstantinos E

    2014-01-01

    This paper investigates quality-driven cross-layer optimization for resource allocation in direct sequence code division multiple access wireless visual sensor networks. We consider a single-hop network topology, where each sensor transmits directly to a centralized control unit (CCU) that manages the available network resources. Our aim is to enable the CCU to jointly allocate the transmission power and source-channel coding rates for each node, under four different quality-driven criteria that take into consideration the varying motion characteristics of each recorded video. For this purpose, we studied two approaches with a different tradeoff of quality and complexity. The first one allocates the resources individually for each sensor, whereas the second clusters them according to the recorded level of motion. In order to address the dynamic nature of the recorded scenery and re-allocate the resources whenever it is dictated by the changes in the amount of motion in the scenery, we propose a mechanism based on the particle swarm optimization algorithm, combined with two restarting schemes that either exploit the previously determined resource allocation or conduct a rough estimation of it. Experimental simulations demonstrate the efficiency of the proposed approaches.
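
    The re-allocation mechanism above is built on particle swarm optimization. As a generic, minimal sketch of PSO itself (minimization of an arbitrary objective `f`; the paper's quality-driven objectives, sensor clustering, and restart schemes are not reproduced here):

    ```python
    import random

    def pso(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0,
            w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimizer (minimization).
        Returns the best position found and its objective value."""
        rng = random.Random(seed)
        pos = [[rng.uniform(lo, hi) for _ in range(dim)]
               for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]              # personal bests
        pbest_val = [f(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    # clamp positions to the search box
                    pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                val = f(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val
    ```

    The paper's warm-restart scheme would correspond to re-seeding the swarm around the previously determined resource allocation rather than uniformly at random, so that small changes in scene motion require only small corrections.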

  7. Novelty enhances visual perception.

    Directory of Open Access Journals (Sweden)

    Judith Schomaker

The effects of novelty on low-level visual perception were investigated in two experiments using a two-alternative forced-choice tilt detection task. A target, consisting of a Gabor patch, was preceded by a cue that was either a novel or a familiar fractal image. Participants had to indicate whether the Gabor stimulus was vertically oriented or slightly tilted. In the first experiment tilt angle was manipulated; in the second contrast of the Gabor patch was varied. In the first, we found that sensitivity was enhanced after a novel compared to a familiar cue, and in the second we found sensitivity to be enhanced for novel cues in later experimental blocks when participants became more and more familiarized with the familiar cue. These effects were not caused by a shift in the response criterion. This shows for the first time that novel stimuli affect low-level characteristics of perception. We suggest that novelty can elicit a transient attentional response, thereby enhancing perception.

  9. Normal aging affects movement execution but not visual motion working memory and decision-making delay during cue-dependent memory-based smooth-pursuit.

    Science.gov (United States)

    Fukushima, Kikuro; Barnes, Graham R; Ito, Norie; Olley, Peter M; Warabi, Tateo

    2014-07-01

    Aging affects virtually all functions including sensory/motor and cognitive activities. While retinal image motion is the primary input for smooth-pursuit, its efficiency/accuracy depends on cognitive processes. Elderly subjects exhibit gain decrease during initial and steady-state pursuit, but reports on latencies are conflicting. Using a cue-dependent memory-based smooth-pursuit task, we identified important extra-retinal mechanisms for initial pursuit in young adults including cue information priming and extra-retinal drive components (Ito et al. in Exp Brain Res 229:23-35, 2013). We examined aging effects on parameters for smooth-pursuit using the same tasks. Elderly subjects were tested during three task conditions as previously described: memory-based pursuit, simple ramp-pursuit just to follow motion of a single spot, and popping-out of the correct spot during memory-based pursuit to enhance retinal image motion. Simple ramp-pursuit was used as a task that did not require visual motion working memory. To clarify aging effects, we then compared the results with the previous young subject data. During memory-based pursuit, elderly subjects exhibited normal working memory of cue information. Most movement-parameters including pursuit latencies differed significantly between memory-based pursuit and simple ramp-pursuit and also between young and elderly subjects. Popping-out of the correct spot motion was ineffective for enhancing initial pursuit in elderly subjects. However, the latency difference between memory-based pursuit and simple ramp-pursuit in individual subjects, which includes decision-making delay in the memory task, was similar between the two groups. Our results suggest that smooth-pursuit latencies depend on task conditions and that, although the extra-retinal mechanisms were functional for initial pursuit in elderly subjects, they were less effective.

  10. Recent Visual Experience Shapes Visual Processing in Rats through Stimulus-Specific Adaptation and Response Enhancement.

    Science.gov (United States)

    Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans

    2017-03-20

    From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Motion sickness symptoms in a ship motion simulator: effects of inside, outside, and no view

    NARCIS (Netherlands)

    Bos, J.E.; MacKinnon, S.N.; Patterson, A.

    2005-01-01

    Vehicle motion characteristics differ between air, road, and sea environments, both vestibularly and visually. Effects of vision on motion sickness have been studied before, though less systematically in a naval setting. It is hypothesized that appropriate visual information on self-motion is

  12. Motion adaptation leads to parsimonious encoding of natural optic flow by blowfly motion vision system

    NARCIS (Netherlands)

    Heitwerth, J.; Kern, R.; Hateren, J.H. van; Egelhaaf, M.

    Neurons sensitive to visual motion change their response properties during prolonged motion stimulation. These changes have been interpreted as adaptive and were concluded, for instance, to adjust the sensitivity of the visual motion pathway to velocity changes or to increase the reliability of

  13. Application and API for Real-time Visualization of Ground-motions and Tsunami

    Science.gov (United States)

    Aoi, S.; Kunugi, T.; Suzuki, W.; Kubo, T.; Nakamura, H.; Azuma, H.; Fujiwara, H.

    2015-12-01

Due to recent progress in seismographs and communication environments, real-time and continuous ground-motion observation has become technically and economically feasible. K-NET and KiK-net, the nationwide strong-motion networks operated by NIED, cover all of Japan with about 1750 stations in total. More than half of the stations transmit ground-motion indexes and/or waveform data every second. Traditionally, strong-motion data were recorded by event-triggered instruments over non-continuous telephone lines connected only after an earthquake. While data from such networks mainly contribute to preparations for future earthquakes, the huge amount of real-time data from a dense network is expected to contribute directly to the mitigation of ongoing earthquake disasters through, e.g., automatic plant shutdown and support for initial-response decision-making. By generating distribution maps of these indexes and uploading them to the website, we implemented the real-time ground-motion monitoring system, Kyoshin (strong-motion in Japanese) monitor. This web service (www.kyoshin.bosai.go.jp) started in 2008, and anyone can view the current ground motions of Japan. Since this service provides only a ground-motion map in GIF format, digital data are important for taking full advantage of real-time strong-motion data to mitigate ongoing disasters. We have developed a WebAPI to provide real-time data and related information such as ground motions (5 km mesh) and arrival times estimated from EEW (earthquake early warning). All response data from this WebAPI are in JSON format and are easy to parse. We also developed a Kyoshin monitor application for smartphones, 'Kmoni view', using the API. In this application, ground motions estimated from EEW are overlaid on the map together with the observed one-second-interval indexes. The application can play back previous earthquakes for demonstration or disaster drills. In mobile environments, data traffic and battery are
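
    Since the WebAPI described above returns JSON, client-side parsing is straightforward. The sketch below works over an invented payload shape; the field names `stations`, `station`, and `intensity` are illustrative assumptions, not the actual NIED schema.

    ```python
    import json

    def parse_ground_motion(payload: str):
        """Extract (station, intensity) pairs from a hypothetical JSON
        ground-motion response; field names are illustrative only."""
        data = json.loads(payload)
        return [(s["station"], s["intensity"]) for s in data["stations"]]

    # A made-up example payload in the assumed shape:
    sample = json.dumps({"stations": [
        {"station": "TKY007", "intensity": 3.1},
        {"station": "CHB014", "intensity": 2.4},
    ]})
    ```

    A real client would poll the endpoint at the one-second cadence of the observed indexes and redraw the overlay map from the returned pairs.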

  14. Pleasant music as a countermeasure against visually induced motion sickness.

    Science.gov (United States)

    Keshavarz, Behrang; Hecht, Heiko

    2014-05-01

    Visually induced motion sickness (VIMS) is a well-known side-effect in virtual environments or simulators. However, effective behavioral countermeasures against VIMS are still sparse. In this study, we tested whether music can reduce the severity of VIMS. Ninety-three volunteers were immersed in an approximately 14-minute-long video taken during a bicycle ride. Participants were randomly assigned to one of four experimental groups, either including relaxing music, neutral music, stressful music, or no music. Sickness scores were collected using the Fast Motion Sickness Scale and the Simulator Sickness Questionnaire. Results showed an overall trend for relaxing music to reduce the severity of VIMS. When factoring in the subjective pleasantness of the music, a significant reduction of VIMS occurred only when the presented music was perceived as pleasant, regardless of the music type. In addition, we found a gender effect with women reporting more sickness than men. We assume that the presentation of pleasant music can be an effective, low-cost, and easy-to-administer method to reduce VIMS. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  15. Vection is the main contributor to motion sickness induced by visual yaw rotation: Implications for conflict and eye movement theories.

    Directory of Open Access Journals (Sweden)

    Suzanne A E Nooij

This study investigated the role of vection (i.e., a visually induced sense of self-motion), optokinetic nystagmus (OKN), and inadvertent head movements in visually induced motion sickness (VIMS) evoked by yaw rotation of the visual surround. These three elements have all been proposed as contributing factors in VIMS, as they can be linked to different motion sickness theories. However, a full understanding of the role of each factor is still lacking because independent manipulation has proven difficult in the past. We adopted an integrative approach to the problem by obtaining measures of potentially relevant parameters in four experimental conditions and subsequently combining them in a linear mixed regression model. To that end, participants were exposed to visual yaw rotation in four separate sessions. Using a full factorial design, the OKN was manipulated by a fixation target (present/absent), and vection strength by introducing a conflict in the motion direction of the central and peripheral field of view (present/absent). In all conditions, head movements were minimized as much as possible. Measured parameters included vection strength, vection variability, OKN slow phase velocity, OKN frequency, the number of inadvertent head movements, and inadvertent head tilt. Results show that VIMS increases with vection strength, but that this relation varies among participants (R2 = 0.48). Regression parameters for vection variability, head movements, and eye movements were not significant. These results may seem to be in line with the Sensory Conflict theory of motion sickness, but we argue that a more detailed definition of the exact nature of the conflict is required to fully appreciate the relationship between vection and VIMS.
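
    The reported R2 = 0.48 comes from a linear mixed regression model. Fitting one properly requires a mixed-model library, but the coefficient of determination for a simple fixed-effects fit of sickness score on vection strength can be sketched as follows (ordinary least squares, not the study's model or data):

    ```python
    import numpy as np

    def r_squared(x, y):
        """Fit y ~ a*x + b by ordinary least squares and return the
        coefficient of determination R^2."""
        A = np.column_stack([x, np.ones_like(x)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residuals = y - A @ coef
        ss_res = float(np.sum(residuals ** 2))
        ss_tot = float(np.sum((y - y.mean()) ** 2))
        return 1.0 - ss_res / ss_tot
    ```

    A mixed model would additionally let the slope vary per participant, which is how the study captured the between-subject variability in the vection-VIMS relation.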

  16. Enhanced alpha-oscillations in visual cortex during anticipation of self-generated visual stimulation.

    Science.gov (United States)

    Stenner, Max-Philipp; Bauer, Markus; Haggard, Patrick; Heinze, Hans-Jochen; Dolan, Ray

    2014-11-01

    The perceived intensity of sensory stimuli is reduced when these stimuli are caused by the observer's actions. This phenomenon is traditionally explained by forward models of sensory action-outcome, which arise from motor processing. Although these forward models critically predict anticipatory modulation of sensory neural processing, neurophysiological evidence for anticipatory modulation is sparse and has not been linked to perceptual data showing sensory attenuation. By combining a psychophysical task involving contrast discrimination with source-level time-frequency analysis of MEG data, we demonstrate that the amplitude of alpha-oscillations in visual cortex is enhanced before the onset of a visual stimulus when the identity and onset of the stimulus are controlled by participants' motor actions. Critically, this prestimulus enhancement of alpha-amplitude is paralleled by psychophysical judgments of a reduced contrast for this stimulus. We suggest that alpha-oscillations in visual cortex preceding self-generated visual stimulation are a likely neurophysiological signature of motor-induced sensory anticipation and mediate sensory attenuation. We discuss our results in relation to proposals that attribute generic inhibitory functions to alpha-oscillations in prioritizing and gating sensory information via top-down control.

  17. Simulator study of the effect of visual-motion time delays on pilot tracking performance with an audio side task

    Science.gov (United States)

    Riley, D. R.; Miller, G. K., Jr.

    1978-01-01

We determined the effect of time delay in the visual and motion cues of a flight simulator on pilot performance in tracking a target aircraft that oscillated sinusoidally in altitude only. An audio side task was used to ensure the subject was fully occupied at all times. The results indicate that, within the test grid employed, about the same acceptable time delay (250 msec) was obtained for a single aircraft (fighter type) by each of two subjects for both fixed-base and motion-base conditions. Acceptable time delay is defined as the largest amount of delay that can be inserted simultaneously into the visual and motion cues before performance degradation occurs. A statistical analysis of the data was made to establish this value of time delay. The audio side task provided quantitative data that documented the subject's work level.

  18. Detection of motion artifacts in optical coherence tomography using the fundus enhancement system

    International Nuclear Information System (INIS)

    Schaudig, U.; Skevas, C.; Scholz, F.

    2007-01-01

Extensive artifacts due to major eye movements are detectable in optical coherence tomography (OCT) images, but the frequency and extent of small eye movements have not been studied. Our aim was to investigate the frequency and extent of irregularities in OCT imaging due to eye movements during the scanning process. A fundus enhancement system (FES), originally designed to improve retinal image quality in a conventional OCT device (Zeiss Stratus OCT) in order to integrate OCT images into fluorescence angiographies, was used to record the scanning process and review OCT scan acquisition in slow motion. A horizontal and a vertical single line scan of 5 mm length through the center of fixation were obtained in 40 eyes of 20 normal healthy subjects, all with a visual acuity of at least 20/20. Scans were investigated for loss of fixation and eye movements during the scanning process. Outcome measures were the presence or absence of eye movements during the scanning process and the mean deviation from the intended scan position, measured in millimeters. 7 of 20 patients showed no eye movements in either eye. 4 of the remaining 13 patients showed eye movements in only one eye. In the eyes with detectable movements, the mean deviation from the intended scan position was 0.2 mm in the horizontal and 0.28 mm in the vertical scans. Fundus imaging in conventional systems can be enhanced to detect artifacts due to minimal eye movements during the scanning process. In this first series with normal healthy subjects, minimal eye movements were present in over half of the investigations. Although the deviations from the intended scan positions appear to be small in normal eyes, further investigation of the phenomenon is necessary: OCT is increasingly used as the primary tool for monitoring therapy in macular diseases, and the extent of motion artifacts is not known.

  19. Enhanced learning of natural visual sequences in newborn chicks.

    Science.gov (United States)

    Wood, Justin N; Prasad, Aditya; Goldman, Jason G; Wood, Samantha M W

    2016-07-01

    To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks' object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.

  20. Modeling and visualization of carrier motion in organic films by optical second harmonic generation and Maxwell-displacement current

    Science.gov (United States)

    Iwamoto, Mitsumasa; Manaka, Takaaki; Taguchi, Dai

    2015-09-01

    The probing and modeling of carrier motions in materials as well as in electronic devices is a fundamental research subject in science and electronics. According to the Maxwell electromagnetic field theory, carriers are a source of electric field. Therefore, by probing the dielectric polarization caused by the electric field arising from moving carriers and dipoles, we can find a way to visualize the carrier motions in materials and in devices. The techniques used here are an electrical Maxwell-displacement current (MDC) measurement and a novel optical method based on the electric field induced optical second harmonic generation (EFISHG) measurement. The MDC measurement probes changes of induced charge on electrodes, while the EFISHG probes nonlinear polarization induced in organic active layers due to the coupling of electron clouds of molecules and electro-magnetic waves of an incident laser beam in the presence of a DC field caused by electrons and holes. Both measurements allow us to probe dynamical carrier motions in solids through the detection of dielectric polarization phenomena originated from dipolar motions and electron transport. In this topical review, on the basis of Maxwell’s electro-magnetism theory of 1873, which stems from Faraday’s idea, the concept for probing electron and hole transport in solids by using the EFISHG is discussed in comparison with the conventional time of flight (TOF) measurement. We then visualize carrier transit in organic devices, i.e. organic field effect transistors, organic light emitting diodes, organic solar cells, and others. We also show that visualizing an EFISHG microscopic image is a novel way for characterizing anisotropic carrier transport in organic thin films. We also discuss the concept of the detection of rotational dipolar motions in monolayers by means of the MDC measurement, which is capable of probing the change of dielectric spontaneous polarization formed by dipoles in organic monolayers. Finally we

  1. Characterizing the effects of feature salience and top-down attention in the early visual system.

    Science.gov (United States)

    Poltoratski, Sonia; Ling, Sam; McCormack, Devin; Tong, Frank

    2017-07-01

    The visual system employs a sophisticated balance of attentional mechanisms: salient stimuli are prioritized for visual processing, yet observers can also ignore such stimuli when their goals require directing attention elsewhere. A powerful determinant of visual salience is local feature contrast: if a local region differs from its immediate surround along one or more feature dimensions, it will appear more salient. We used high-resolution functional MRI (fMRI) at 7T to characterize the modulatory effects of bottom-up salience and top-down voluntary attention within multiple sites along the early visual pathway, including visual areas V1-V4 and the lateral geniculate nucleus (LGN). Observers viewed arrays of spatially distributed gratings, where one of the gratings immediately to the left or right of fixation differed from all other items in orientation or motion direction, making it salient. To investigate the effects of directed attention, observers were cued to attend to the grating to the left or right of fixation, which was either salient or nonsalient. Results revealed reliable additive effects of top-down attention and stimulus-driven salience throughout visual areas V1-hV4. In comparison, the LGN exhibited significant attentional enhancement but was not reliably modulated by orientation- or motion-defined salience. Our findings indicate that top-down effects of spatial attention can influence visual processing at the earliest possible site along the visual pathway, including the LGN, whereas the processing of orientation- and motion-driven salience primarily involves feature-selective interactions that take place in early cortical visual areas. NEW & NOTEWORTHY While spatial attention allows for specific, goal-driven enhancement of stimuli, salient items outside of the current focus of attention must also be prioritized. We used 7T fMRI to compare salience and spatial attentional enhancement along the early visual hierarchy. 
We report additive effects of

  2. Enhancement of vortex induced forces and motion through surface roughness control

    Science.gov (United States)

    Bernitsas, Michael M [Saline, MI; Raghavan, Kamaldev [Houston, TX

    2011-11-01

    Roughness is added to the surface of a bluff body in relative motion with respect to a fluid. The amount, size, and distribution of roughness on the body surface are controlled passively or actively to modify the flow around the body and subsequently the Vortex Induced Forces and Motion (VIFM). The added roughness, when designed and implemented appropriately, affects in a predetermined way the boundary layer, the separation of the boundary layer, the level of turbulence, the wake, the drag and lift forces, and consequently the Vortex Induced Motion (VIM) and the fluid-structure interaction. The goal of surface roughness control is to increase Vortex Induced Forces and Motion. Enhancement is needed in such applications as harnessing clean and renewable energy from ocean/river currents using the ocean energy converter VIVACE (Vortex Induced Vibration for Aquatic Clean Energy).

  3. Unilateral prefrontal lesions impair memory-guided comparisons of contralateral visual motion.

    Science.gov (United States)

    Pasternak, Tatiana; Lui, Leo L; Spinelli, Philip M

    2015-05-06

    The contribution of the lateral prefrontal cortex (LPFC) to working memory is the topic of active debate. On the one hand, it has been argued that the persistent delay activity in LPFC recorded during some working memory tasks is a reflection of sensory storage, a notion supported by some lesion studies. On the other hand, there is emerging evidence that the LPFC plays a key role in the maintenance of sensory information not by storing relevant visual signals but by allocating visual attention to such stimuli. In this study, we addressed this question by examining the effects of unilateral LPFC lesions during a working memory task requiring monkeys to compare the directions of two moving stimuli separated by a delay. The lesions resulted in impaired thresholds for contralesional stimuli at longer delays, and these deficits were most dramatic when the task required rapid reallocation of spatial attention. In addition, these effects were equally pronounced whether the remembered stimuli were at threshold or moved coherently. The contralesional nature of the deficits points to the importance of the interactions between the LPFC and the motion-processing neurons residing in extrastriate area MT. Delay-specificity of the deficit supports LPFC involvement in the maintenance stage of the comparison task. However, because this deficit was independent of the stimulus features giving rise to the remembered direction and was most pronounced during rapid shifts of attention, the LPFC's role is more likely attending to and accessing the preserved motion signals than storing them. Copyright © 2015 the authors 0270-6474/15/357095-11$15.00/0.

  4. 3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.

    Science.gov (United States)

    Beveridge, R; Wilson, S; Coyle, D

    2016-01-01

    A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities, including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses with variations in graphical complexity and style, in-game distractions, and display parameters surrounding mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances of up to 90%, with variations in graphical complexity having limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming. © 2016 Elsevier B.V. All rights reserved.

  5. The effect of internal and external fields of view on visually induced motion sickness.

    Science.gov (United States)

    Bos, Jelte E; de Vries, Sjoerd C; van Emmerik, Martijn L; Groen, Eric L

    2010-07-01

    Field of view (FOV) is said to affect visually induced motion sickness. FOV, however, is characterized by an internal setting used by the graphics generator (iFOV) and an external factor determined by screen size and viewing distance (eFOV). We hypothesized that especially the incongruence between iFOV and eFOV would lead to sickness. To that end we used a computer game environment with different iFOV and eFOV settings, and found the opposite effect. We speculate that the relatively large differences between iFOV and eFOV used in this experiment caused the discrepancy, as may be explained by assuming an observer model controlling body motion. Copyright 2009 Elsevier Ltd. All rights reserved.

  6. Perception of visual apparent motion is modulated by a gap within concurrent auditory glides, even when it is illusory

    Science.gov (United States)

    Wang, Qingcui; Guo, Lu; Bao, Ming; Chen, Lihan

    2015-01-01

    Auditory and visual events often happen concurrently, and how they group together can have a strong effect on what is perceived. We investigated whether/how intra- or cross-modal temporal grouping influenced the perceptual decision of otherwise ambiguous visual apparent motion. To achieve this, we juxtaposed the auditory gap transfer illusion with a visual Ternus display. The Ternus display involves a multi-element stimulus that can induce either of two different percepts of apparent motion: ‘element motion’ (EM) or ‘group motion’ (GM). In “EM,” the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary; in “GM,” both disks appear to move laterally as a whole. The gap transfer illusion refers to the illusory subjective transfer of a short gap (around 100 ms) from the long glide to the short continuous glide when the two glides intersect at the temporal middle point. In our experiments, observers were required to make a perceptual discrimination of Ternus motion in the presence of concurrent auditory glides (with or without a gap inside). Results showed that a gap within a short glide imposed a remarkable effect on separating visual events, and led to a dominant perception of GM as well. The auditory configuration with the gap transfer illusion triggered the same auditory capture effect. Further investigation showed that the visual interval which coincided with the gap interval (50–230 ms) in the long glide was perceived to be shorter than that within both the short glide and the ‘gap-transfer’ auditory configurations at the same physical intervals (gaps). The results indicate that auditory temporal perceptual grouping takes priority over cross-modal interaction in determining the final readout of the visual perception, and that the mechanism of selective attention to auditory events also plays a role. PMID:26042055

  7. Roll motion stimuli : sensory conflict, perceptual weighting and motion sickness

    NARCIS (Netherlands)

    Graaf, B. de; Bles, W.; Bos, J.E.

    1998-01-01

    In an experiment with seventeen subjects, interactions of visual roll motion stimuli and vestibular body tilt stimuli were examined in determining the subjective vertical. Interindividual differences in weighting the visual information were observed, but in general visual and vestibular responses

  8. Enhancing online timeline visualizations with events and images

    Science.gov (United States)

    Pandya, Abhishek; Mulye, Aniket; Teoh, Soon Tee

    2011-01-01

    The use of a timeline to visualize time-series data is one of the most intuitive and commonly used methods, appearing in widely used applications such as stock market data visualization and the tracking of election candidates' poll data over time. While useful, these timeline visualizations lack contextual information about the events that relate to, or cause, changes in the data. We have developed a system that enhances timeline visualization with the display of relevant news events and their corresponding images, so that users can not only see the changes in the data but also understand the reasons behind them. We have also conducted a user study to test the effectiveness of our ideas.
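    The core data step such a system needs, aligning each news event to the data point it annotates, can be sketched in a few lines. This is a hypothetical illustration of the idea, not the authors' implementation; the function name and data shapes are invented for the example.

```python
from datetime import date

def attach_events(series, events):
    """Align each (day, headline) event to the nearest (day, value) point
    in a date-sorted time series, so a renderer can draw the headline and
    its image beside the data point it helps explain."""
    days = [d for d, _ in series]
    annotated = []
    for day, headline in events:
        # index of the sample whose date is closest to the event date
        i = min(range(len(days)), key=lambda k: abs(days[k] - day))
        annotated.append((days[i], series[i][1], headline))
    return annotated

# Toy stock series and one news event falling between sample points
series = [(date(2020, 1, 1), 10.0), (date(2020, 1, 5), 12.0), (date(2020, 1, 9), 9.0)]
events = [(date(2020, 1, 4), "Earnings beat expectations")]
```

    Here `attach_events(series, events)` pins the headline to the January 5 point, the closest sample to the event date.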

  9. Performance analysis of visual tracking algorithms for motion-based user interfaces on mobile devices

    Science.gov (United States)

    Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing

    2008-02-01

    Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality-based applications on phones with built-in cameras. In this paper, we compare the performance of three feature- or landmark-guided motion-tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted in two stages: the first stage uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare tracking accuracy, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame-skipping.

  10. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or presented with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Haptic and Audio-visual Stimuli: Enhancing Experiences and Interaction

    NARCIS (Netherlands)

    Nijholt, Antinus; Dijk, Esko O.; Lemmens, Paul M.C.; Luitjens, S.B.

    2010-01-01

    The intention of the symposium on Haptic and Audio-visual stimuli at the EuroHaptics 2010 conference is to deepen the understanding of the effect of combined Haptic and Audio-visual stimuli. The knowledge gained will be used to enhance experiences and interactions in daily life. To this end, a

  12. Self versus environment motion in postural control.

    Directory of Open Access Journals (Sweden)

    Kalpana Dokka

    2010-02-01

    Full Text Available To stabilize our position in space we use visual information as well as non-visual physical motion cues. However, visual cues can be ambiguous: visually perceived motion may be caused by self-movement, movement of the environment, or both. The nervous system must combine the ambiguous visual cues with noisy physical motion cues to resolve this ambiguity and control our body posture. Here we have developed a Bayesian model that formalizes how the nervous system could solve this problem. In this model, the nervous system combines the sensory cues to estimate the movement of the body. We analytically demonstrate that, as long as visual stimulation is fast in comparison to the uncertainty in our perception of body movement, the optimal strategy is to weight visually perceived movement velocities proportional to a power law. We find that this model accounts for the nonlinear influence of experimentally induced visual motion on human postural behavior both in our data and in previously published results.
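    The reliability-weighted cue combination at the heart of such a Bayesian model can be sketched in a few lines. This is a toy illustration, not the authors' actual model: the power-law discounting of fast visual motion (the `exponent` parameter) and the noise values are assumptions made only for demonstration.

```python
def combine_velocity_estimates(v_visual, v_body, sigma_visual, sigma_body, exponent=0.5):
    """Toy Bayesian cue combination for postural control.

    Each cue is weighted by its inverse variance; the effective visual noise
    is inflated for faster visual motion via an assumed power law, so fast
    visual stimulation is down-weighted relative to body-motion cues.
    """
    eff_sigma_visual = sigma_visual * (1.0 + abs(v_visual)) ** exponent
    w_visual = 1.0 / eff_sigma_visual ** 2
    w_body = 1.0 / sigma_body ** 2
    return (w_visual * v_visual + w_body * v_body) / (w_visual + w_body)
```

    With equal baseline noise, slow visual motion dominates the combined estimate, while fast visual motion is progressively discounted, which is the qualitative signature of the power-law weighting described above.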

  13. A Pursuit Theory Account for the Perception of Common Motion in Motion Parallax.

    Science.gov (United States)

    Ratzlaff, Michael; Nawrot, Mark

    2016-09-01

    The visual system uses an extraretinal pursuit eye movement signal to disambiguate the perception of depth from motion parallax. Visual motion in the same direction as the pursuit is perceived nearer in depth while visual motion in the opposite direction to pursuit is perceived farther in depth. This explanation of depth sign applies to either an allocentric frame of reference centered on the fixation point or an egocentric frame of reference centered on the observer. A related problem is that of depth order when two stimuli have a common direction of motion. The first psychophysical study determined whether perception of egocentric depth order is adequately explained by a model employing an allocentric framework, especially when the motion parallax stimuli have common rather than divergent motion. A second study determined whether a reversal in perceived depth order, produced by a reduction in pursuit velocity, is also explained by this allocentric model. The results show that an allocentric model can explain both the egocentric perception of depth order with common motion and the perceptual depth order reversal created by a reduction in pursuit velocity. We conclude that an egocentric model is not the only explanation for perceived depth order in these common motion conditions. © The Author(s) 2016.

  14. Audio-visual speech timing sensitivity is enhanced in cluttered conditions.

    Directory of Open Access Journals (Sweden)

    Warrick Roseboom

    2011-04-01

    Full Text Available Events encoded in separate sensory modalities, such as audition and vision, can seem to be synchronous across a relatively broad range of physical timing differences. This may suggest that the precision of audio-visual timing judgments is inherently poor. Here we show that this is not necessarily true. We contrast timing sensitivity for isolated streams of audio and visual speech, and for streams of audio and visual speech accompanied by additional, temporally offset, visual speech streams. We find that the precision with which synchronous streams of audio and visual speech are identified is enhanced by the presence of additional streams of asynchronous visual speech. Our data suggest that timing perception is shaped by selective grouping processes, which can result in enhanced precision in temporally cluttered environments. The imprecision suggested by previous studies might therefore be a consequence of examining isolated pairs of audio and visual events. We argue that when an isolated pair of cross-modal events is presented, they tend to group perceptually and to seem synchronous as a consequence. We have revealed greater precision by providing multiple visual signals, possibly allowing a single auditory speech stream to group selectively with the most synchronous visual candidate. The grouping processes we have identified might be important in daily life, such as when we attempt to follow a conversation in a crowded room.

  15. Working memory can enhance unconscious visual perception.

    Science.gov (United States)

    Pan, Yi; Cheng, Qiu-Ping; Luo, Qian-Ying

    2012-06-01

    We demonstrate that unconscious processing of a stimulus property can be enhanced when there is a match between the contents of working memory and the stimulus presented in the visual field. Participants first held a cue (a colored circle) in working memory and then searched for a brief masked target shape presented simultaneously with a distractor shape. When participants reported having no awareness of the target shape at all, search performance was more accurate in the valid condition, where the target matched the cue in color, than in the neutral condition, where the target mismatched the cue. This effect cannot be attributed to bottom-up perceptual priming from the presentation of a memory cue, because unconscious perception was not enhanced when the cue was merely perceptually identified but not actively held in working memory. These findings suggest that reentrant feedback from the contents of working memory modulates unconscious visual perception.

  16. Emergence of realism: Enhanced visual artistry and high accuracy of visual numerosity representation after left prefrontal damage.

    Science.gov (United States)

    Takahata, Keisuke; Saito, Fumie; Muramatsu, Taro; Yamada, Makiko; Shirahase, Joichiro; Tabuchi, Hajime; Suhara, Tetsuya; Mimura, Masaru; Kato, Motoichiro

    2014-05-01

    Over the last two decades, evidence of enhancement of drawing and painting skills due to focal prefrontal damage has accumulated. It is of special interest that most artworks created by such patients were highly realistic ones, but the mechanism underlying this phenomenon remains to be understood. Our hypothesis is that the enhanced tendency toward realism was associated with the accuracy of visual numerosity representation, which has been shown to be mediated predominantly by right parietal functions. Here, we report a case of left prefrontal stroke, where the patient showed enhancement of artistic skills of realistic painting after the onset of brain damage. We investigated cognitive, functional and esthetic characteristics of the patient's visual artistry and visual numerosity representation. Neuropsychological tests revealed impaired executive function after the stroke. Despite that, the patient's visual artistry related to realism was rather promoted across the onset of brain damage, as demonstrated by blind evaluation of the paintings by professional art reviewers. On visual numerical cognition tasks, the patient showed higher performance in comparison with age-matched healthy controls. These results paralleled increased perfusion in the right parietal cortex, including the precuneus and intraparietal sulcus. Our data provide new insight into the mechanisms underlying change in artistic style due to focal prefrontal lesions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Visual-vestibular interaction in motion perception

    NARCIS (Netherlands)

    Hosman, Ruud J A W; Cardullo, Frank M.; Bos, Jelte E.

    2011-01-01

    Correct perception of self motion is of vital importance for both the control of our position and posture when moving around in our environment. With the development of human controlled vehicles as bicycles, cars and aircraft motion perception became of interest for the understanding of vehicle

  18. Conditions for the Effectiveness of Multiple Visual Representations in Enhancing STEM Learning

    Science.gov (United States)

    Rau, Martina A.

    2017-01-01

    Visual representations play a critical role in enhancing science, technology, engineering, and mathematics (STEM) learning. Educational psychology research shows that adding visual representations to text can enhance students' learning of content knowledge, compared to text-only. But should students learn with a single type of visual…

  19. Visual training improves perceptual grouping based on basic stimulus features.

    Science.gov (United States)

    Kurylo, Daniel D; Waxman, Richard; Kidron, Rachel; Silverstein, Steven M

    2017-10-01

    Training on visual tasks improves performance on basic and higher order visual capacities. Such improvement has been linked to changes in connectivity among mediating neurons. We investigated whether training effects occur for perceptual grouping. It was hypothesized that repeated engagement of integration mechanisms would enhance grouping processes. Thirty-six participants underwent 15 sessions of training on a visual discrimination task that required perceptual grouping. Participants viewed 20 × 20 arrays of dots or Gabor patches and indicated whether the array appeared grouped as vertical or horizontal lines. Across trials stimuli became progressively disorganized, contingent upon successful discrimination. Four visual dimensions were examined, in which grouping was based on similarity in luminance, color, orientation, and motion. Psychophysical thresholds of grouping were assessed before and after training. Results indicate that performance in all four dimensions improved with training. Training on a control condition, which paralleled the discrimination task but without a grouping component, produced no improvement. In addition, training on only the luminance and orientation dimensions improved performance for those conditions as well as for grouping by color, on which training had not occurred. However, improvement from partial training did not generalize to motion. Results demonstrate that a training protocol emphasizing stimulus integration enhanced perceptual grouping. Results suggest that neural mechanisms mediating grouping by common luminance and/or orientation contribute to those mediating grouping by color but do not share resources for grouping by common motion. Results are consistent with theories of perceptual learning emphasizing plasticity in early visual processing regions.
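    The adaptive procedure described, with stimuli becoming progressively disorganized contingent on successful discrimination, is a classic staircase. Below is a minimal 1-up/1-down sketch; the study's actual step rules and stimulus parameters are not given in the abstract, so every number here is illustrative, and the simulated observer is a generic logistic psychometric function.

```python
import math
import random

def run_staircase(true_threshold, n_trials=400, start=1.0, step=0.05, seed=7):
    """Toy 1-up/1-down staircase on an 'organization' level in [0, 1].

    A correct grouping response makes the next array less organized
    (harder); an error makes it more organized (easier). The level then
    converges near the 50%-correct point of the simulated observer.
    """
    rng = random.Random(seed)
    level, last_direction, reversals = start, None, []
    for _ in range(n_trials):
        # Simulated observer: logistic psychometric function of organization
        p_correct = 1.0 / (1.0 + math.exp(-(level - true_threshold) / 0.05))
        correct = rng.random() < p_correct
        direction = -1 if correct else +1
        if last_direction is not None and direction != last_direction:
            reversals.append(level)  # record the level at each reversal
        last_direction = direction
        level = min(1.0, max(0.0, level + direction * step))
    return sum(reversals[-8:]) / len(reversals[-8:])  # mean of last reversals
```

    Averaging the last few reversal levels is the standard way to read a threshold off a staircase run; `run_staircase(0.5)` returns an estimate close to the simulated observer's 0.5 threshold.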

  20. Visual working memory contaminates perception.

    Science.gov (United States)

    Kang, Min-Suk; Hong, Sang Wook; Blake, Randolph; Woodman, Geoffrey F

    2011-10-01

    Indirect evidence suggests that the contents of visual working memory may be maintained within sensory areas early in the visual hierarchy. We tested this possibility using a well-studied motion repulsion phenomenon in which perception of one direction of motion is distorted when another direction of motion is viewed simultaneously. We found that observers misperceived the actual direction of motion of a single motion stimulus if, while viewing that stimulus, they were holding a different motion direction in visual working memory. Control experiments showed that none of a variety of alternative explanations could account for this repulsion effect induced by working memory. Our findings provide compelling evidence that visual working memory representations directly interact with the same neural mechanisms as those involved in processing basic sensory events.

  1. TH-CD-206-12: Image-Based Motion Estimation for Plaque Visualization in Coronary Computed Tomography Angiography

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, X; Sisniega, A; Zbijewski, W; Stayman, J [Johns Hopkins University, Baltimore, MD (United States); Contijoch, F; McVeigh, E [University of California, San Diego, San Diego, CA (United States)

    2016-06-15

    Purpose: Visualization and quantification of coronary artery calcification and atherosclerotic plaque benefit from the elimination of coronary artery motion (CAM) artifacts. This work applies a rigid linear motion model to a Volume of Interest (VoI) for motion estimation and compensation of image degradation in Coronary Computed Tomography Angiography (CCTA). Methods: In both simulation and testbench experiments, translational CAM was generated by displacement of the imaging object (i.e. a simulated coronary artery and an explanted human heart) by ∼8 mm, approximating the motion of a main coronary branch. Rotation was assumed to be negligible. A motion-degraded region containing a calcification was selected as the VoI. Local residual motion was assumed to be rigid and linear over the acquisition window, simulating motion observed during diastasis. The (negative) magnitude of the image gradient of the reconstructed VoI was chosen as the motion estimation objective and was minimized with Covariance Matrix Adaptation Evolution Strategy (CMAES). Results: Reconstruction incorporating the estimated CAM yielded significant recovery of fine calcification structures as well as reduced motion artifacts within the selected local region. The compensated reconstruction was further evaluated using two image similarity metrics, the structural similarity index (SSIM) and Root Mean Square Error (RMSE). At the calcification site, the compensated data achieved a 3% increase in SSIM and a 91.2% decrease in RMSE in comparison with the uncompensated reconstruction. Conclusion: Results demonstrate the feasibility of our image-based motion estimation method exploiting a local rigid linear model for CAM compensation. The method shows promising preliminary results for the application of such estimation in CCTA. Further work will involve motion estimation of complex motion-corrupted patient data acquired from clinical CT scanners.
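    The core of the method, scoring candidate motions by the sharpness of the compensated image and keeping the best, can be sketched with a toy 2-D example. Here a brute-force search over integer per-frame shifts stands in for the CMA-ES optimizer, a gradient-energy score stands in for the paper's (negative) gradient-magnitude objective, and the synthetic "calcification" and all sizes are illustrative only.

```python
import numpy as np

def gradient_energy(img):
    """Sharpness score: summed squared image gradient (higher = sharper)."""
    gy, gx = np.gradient(img.astype(float))
    return float((gx ** 2 + gy ** 2).sum())

def estimate_translation(frames, max_shift=4):
    """Brute-force stand-in for CMA-ES: try each per-frame shift dx,
    undo it on every frame, and keep the dx whose compensated sum
    scores sharpest."""
    best_dx, best_score = 0, -np.inf
    for dx in range(-max_shift, max_shift + 1):
        compensated = sum(np.roll(f, -dx * i, axis=1) for i, f in enumerate(frames))
        score = gradient_energy(compensated)
        if score > best_score:
            best_dx, best_score = dx, score
    return best_dx

# Synthetic VoI: a bright 4x4 'calcification' drifting 2 px/frame rightward
base = np.zeros((32, 32))
base[14:18, 14:18] = 1.0
frames = [np.roll(base, 2 * i, axis=1) for i in range(5)]
```

    Only the correct shift re-aligns all copies of the bright block, so the gradient-energy objective peaks there and `estimate_translation(frames)` recovers the 2 px/frame drift.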

  2. WiseView: Visualizing motion and variability of faint WISE sources

    Science.gov (United States)

    Caselden, Dan; Westin, Paul, III; Meisner, Aaron; Kuchner, Marc; Colin, Guillaume

    2018-06-01

    WiseView renders image blinks of Wide-field Infrared Survey Explorer (WISE) coadds spanning a multi-year time baseline in a browser. The software allows for easy visual identification of motion and variability for sources far beyond the single-frame detection limit, a key threshold not surmounted by many studies. WiseView transparently gathers small image cutouts drawn from many terabytes of unWISE coadds, facilitating access to this large and unique dataset. Users need only input the coordinates of interest and can interactively tune parameters including the image stretch, colormap and blink rate. WiseView was developed in the context of the Backyard Worlds: Planet 9 citizen science project, and has enabled hundreds of brown dwarf candidate discoveries by citizen scientists and professional astronomers.

  3. Enhancing Assisted Living Technology with Extended Visual Memory

    Directory of Open Access Journals (Sweden)

    Joo-Hwee Lim

    2011-05-01

    Full Text Available Human vision and memory are powerful cognitive faculties by which we understand the world. However, they are imperfect and, further, subject to deterioration with age. We propose a cognitive-inspired computational model, Extended Visual Memory (EVM), within the Computer-Aided Vision (CAV) framework, to assist humans in vision-related tasks. We exploit wearable sensors such as cameras, GPS and ambient computing facilities to complement a user's vision and memory functions by answering four types of queries central to visual activities, namely Retrieval, Understanding, Navigation and Search. Learning of EVM relies on both frequency-based and attention-driven mechanisms to store view-based visual fragments (VF), which are abstracted into high-level visual schemas (VS), both in the visual long-term memory. During inference, the visual short-term memory plays a key role in visual similarity computation between input (or its schematic representation) and VF, exemplified from VS when necessary. We present an assisted living scenario, termed EViMAL (Extended Visual Memory for Assisted Living), targeted at mild dementia patients, to provide novel functions such as hazard-warning, visual reminders, object look-up and event review. We envisage EVM having potential benefits in alleviating memory loss, improving recall precision and enhancing memory capacity through external support.

  4. Motion perception in motion : how we perceive object motion during smooth pursuit eye movements

    NARCIS (Netherlands)

    Souman, J.L.

    2005-01-01

    Eye movements change the retinal image motion of objects in the visual field. When we make an eye movement, the image of a stationary object will move across the retinae, while the retinal image of an object that we follow with the eyes is approximately stationary. To enable us to perceive motion in

  5. The enhanced nodal equilibrium ocean tide and polar motion

    Science.gov (United States)

    Sanchez, B. V.

    1979-01-01

    The tidal response of the ocean to long period forcing functions was investigated. The results indicate the possibility of excitation of a wobble component with the amplitude and frequency indicated by the data. An enhancement function for the equilibrium tide was postulated in the form of an expansion in zonal harmonics and the coefficients of such an expansion were estimated so as to obtain polar motion components of the required magnitude.
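    Estimating the coefficients of a zonal-harmonic (Legendre) expansion from sampled values is a standard least-squares problem. The sketch below uses a made-up enhancement function purely to show the fitting machinery; the paper's actual coefficients were estimated so as to reproduce observed polar-motion components, not fit to a known curve like this.

```python
import numpy as np
from numpy.polynomial import legendre

# Zonal harmonics are Legendre polynomials P_n(cos(theta)) of colatitude theta
theta = np.linspace(0.0, np.pi, 181)
x = np.cos(theta)

# Hypothetical enhancement function: 1*P0 + 0.3*P1 + 0.5*P2, with P2 = (3x^2 - 1)/2
f = 1.0 + 0.3 * x + 0.5 * (1.5 * x ** 2 - 0.5)

# Least-squares estimate of the expansion coefficients up to degree 4
coeffs = legendre.legfit(x, f, deg=4)
```

    Because the toy function is exactly a degree-2 Legendre series, the fit recovers the coefficients (1.0, 0.3, 0.5, 0, 0) to numerical precision; with noisy geophysical data the same call returns the best-fitting coefficients in the least-squares sense.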

  6. Micro-calibration of space and motion by photoreceptors synchronized in parallel with cortical oscillations: A unified theory of visual perception.

    Science.gov (United States)

    Jerath, Ravinder; Cearley, Shannon M; Barnes, Vernon A; Jensen, Mike

    2018-01-01

    A fundamental function of the visual system is detecting motion, yet visual perception is poorly understood. Current research has determined that the retina and ganglion cells elicit responses for motion detection; however, the underlying mechanism for this is incompletely understood. Previously we proposed that retinogeniculo-cortical oscillations and photoreceptors work in parallel to process vision. Here we propose that motion could also be processed within the retina, and not in the brain as current theory suggests. In this paper, we discuss: 1) internal neural space formation; 2) primary, secondary, and tertiary roles of vision; 3) gamma as the secondary role; and 4) synchronization and coherence. Movement within the external field is instantly detected by primary processing within the space formed by the retina, providing a unified view of the world from an internal point of view. Our new theory begins to answer questions about: 1) perception of space, erect images, and motion, 2) purpose of lateral inhibition, 3) speed of visual perception, and 4) how peripheral color vision occurs without a large population of cones located peripherally in the retina. We explain that strong oscillatory activity influences brain activity and is necessary for: 1) visual processing, and 2) formation of the internal visuospatial area necessary for visual consciousness, which could allow rods to receive precise visual and visuospatial information, while retinal waves could link the lateral geniculate body with the cortex to form a neural space formed by membrane potential-based oscillations and photoreceptors. We propose that vision is tripartite, with three components that allow a person to make sense of the world, terming them "primary, secondary, and tertiary roles" of vision. Finally, we propose that gamma waves that are higher in strength and volume allow communication among the retina, thalamus, and various areas of the cortex, and synchronization brings cortical

  7. Cholinergic enhancement of visual attention and neural oscillations in the human brain.

    Science.gov (United States)

    Bauer, Markus; Kluge, Christian; Bach, Dominik; Bradbury, David; Heinze, Hans Jochen; Dolan, Raymond J; Driver, Jon

    2012-03-06

    Cognitive processes such as visual perception and selective attention induce specific patterns of brain oscillations. The neurochemical bases of these spectral changes in neural activity are largely unknown, but neuromodulators are thought to regulate processing. The cholinergic system is linked to attentional function in vivo, whereas separate in vitro studies show that cholinergic agonists induce high-frequency oscillations in slice preparations. This has led to theoretical proposals that cholinergic enhancement of visual attention might operate via gamma oscillations in visual cortex, although low-frequency alpha/beta modulation may also play a key role. Here we used MEG to record cortical oscillations in the context of administration of a cholinergic agonist (physostigmine) during a spatial visual attention task in humans. This cholinergic agonist enhanced spatial attention effects on low-frequency alpha/beta oscillations in visual cortex, an effect correlating with a drug-induced speeding of performance. By contrast, the cholinergic agonist did not alter high-frequency gamma oscillations in visual cortex. Thus, our findings show that cholinergic neuromodulation enhances attentional selection via an impact on oscillatory synchrony in visual cortex, for low rather than high frequencies. We discuss this dissociation between high- and low-frequency oscillations in relation to proposals that lower-frequency oscillations are generated by feedback pathways within visual cortex. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    Science.gov (United States)

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problem of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively identify areas, such as occlusions and deformed structures, where no reliable motion vectors are available. We also propose an adaptive frame interpolation scheme for occlusion areas based on an analysis of their surrounding motion distribution. As a result, frames interpolated using the proposed scheme have clearer structure edges, and ghost artifacts are greatly reduced. Experimental results show that our interpolated frames have better visual quality than those produced by other methods. In addition, the proposed scheme is robust even for video sequences that contain multiple and fast motions.
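The pipeline described in this abstract (classify motion-vector reliability, correct unreliable vectors from their neighbours, then interpolate bidirectionally) can be illustrated with a toy sketch. This is not the authors' algorithm: the neighbourhood-median reliability test, the dense per-pixel vector field, and the threshold value below are simplifications invented for illustration.

```python
import numpy as np

def interpolate_midframe(f0, f1, mv, reliab_thresh=2.0):
    """Toy motion-compensated frame interpolation (grayscale frames).

    f0, f1 : (H, W) float arrays, two consecutive frames.
    mv     : (H, W, 2) integer motion vectors (dy, dx) mapping f0 -> f1.

    A motion vector is flagged unreliable when it deviates strongly from
    its 3x3 neighbourhood median -- a crude stand-in for the paper's
    correlation-based reliability classification.
    """
    H, W = f0.shape
    mid = np.zeros_like(f0)
    for y in range(H):
        for x in range(W):
            # neighbourhood median of motion vectors
            ys = slice(max(0, y - 1), min(H, y + 2))
            xs = slice(max(0, x - 1), min(W, x + 2))
            med = np.median(mv[ys, xs].reshape(-1, 2), axis=0)
            v = mv[y, x]
            if np.linalg.norm(v - med) > reliab_thresh:
                v = med.round().astype(int)   # correct the outlier vector
            dy, dx = int(round(v[0] / 2)), int(round(v[1] / 2))
            y0, x0 = np.clip(y - dy, 0, H - 1), np.clip(x - dx, 0, W - 1)
            y1, x1 = np.clip(y + dy, 0, H - 1), np.clip(x + dx, 0, W - 1)
            # bidirectional average halfway along the (corrected) vector
            mid[y, x] = 0.5 * (f0[y0, x0] + f1[y1, x1])
    return mid
```

For a vertical bar moving two pixels to the right between f0 and f1 with a constant motion field, the sketch places the bar one pixel along in the interpolated midframe.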

  9. Dynamic stereoscopic selective visual attention (dssva): integrating motion and shape with depth in video segmentation

    OpenAIRE

    López Bonal, María Teresa; Fernández Caballero, Antonio; Saiz Valverde, Sergio

    2008-01-01

    Depth inclusion as an important parameter for dynamic selective visual attention is presented in this article. The model introduced in this paper is based on two previously developed models, dynamic selective visual attention and visual stereoscopy, giving rise to the so-called dynamic stereoscopic selective visual attention method. The three models are based on the accumulative computation problem-solving method. This paper shows how software reusability enables enhancing results in vision r...

  10. Optimal margin and edge-enhanced intensity maps in the presence of motion and uncertainty

    International Nuclear Information System (INIS)

    Chan, Timothy C Y; Tsitsiklis, John N; Bortfeld, Thomas

    2010-01-01

    In radiation therapy, intensity maps involving margins have long been used to counteract the effects of dose blurring arising from motion. More recently, intensity maps with increased intensity near the edge of the tumour (edge enhancements) have been studied to evaluate their ability to offset similar effects that affect tumour coverage. In this paper, we present a mathematical methodology to derive margin and edge-enhanced intensity maps that aim to provide tumour coverage while delivering minimum total dose. We show that if the tumour is at most about twice as large as the standard deviation of the blurring distribution, the optimal intensity map is a pure scaling increase of the static intensity map without any margins or edge enhancements. Otherwise, if the tumour size is roughly twice (or more) the standard deviation of motion, then margins and edge enhancements are preferred, and we present formulae to calculate the exact dimensions of these intensity maps. Furthermore, we extend our analysis to include scenarios where the parameters of the motion distribution are not known with certainty, but rather can take any value in some range. In these cases, we derive a similar threshold to determine the structure of an optimal margin intensity map.
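The dose-blurring model underlying this abstract can be sketched in one dimension: the delivered (expected) dose is the planned intensity map convolved with the motion probability distribution. The snippet below illustrates the small-tumour regime the abstract describes, where a pure scaling of the static map restores coverage; all geometry and numbers are invented for illustration and do not reproduce the paper's derivation.

```python
import numpy as np

def delivered_dose(intensity, motion_pdf):
    """Expected dose under random motion: the planned 1-D intensity map
    convolved with the motion probability distribution."""
    return np.convolve(intensity, motion_pdf, mode="same")

# Hypothetical 1-D geometry (all numbers invented): tumour voxels at
# |x| <= 3, zero-mean Gaussian motion with sigma = 4, so the tumour is
# less than about twice sigma wide.
x = np.arange(-20, 21)
sigma = 4.0
pdf = np.exp(-x**2 / (2 * sigma**2))
pdf /= pdf.sum()
tumour = (np.abs(x) <= 3).astype(float)

# Pure scaling of the static map (no margin, no edge enhancement):
# raise the intensity until the blurred dose reaches unit dose
# everywhere on the tumour.
blur = delivered_dose(tumour, pdf)
scale = 1.0 / blur[tumour > 0].min()
covered = delivered_dose(scale * tumour, pdf)
```

In this regime the scaled static map achieves unit coverage; for tumours wider than roughly twice sigma, the abstract indicates that margins or edge enhancements deliver less total dose.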

  11. Comparison of transient severe motion in gadoxetate disodium and gadopentetate dimeglumine-enhanced MRI. Effect of modified breath-holding method

    International Nuclear Information System (INIS)

    Song, Ji Soo; Choi, Eun Jung; Park, Eun Hae; Lee, Ju-Hyung

    2018-01-01

    To compare the occurrence of transient severe motion (TSM) between gadoxetate disodium- and gadopentetate dimeglumine-enhanced MRI and between gadoxetate disodium-enhanced MRI scans obtained with and without the application of a modified breath-holding technique. We reviewed 80 patients who underwent two magnetic resonance examinations (gadoxetate disodium-enhanced MRI and gadopentetate dimeglumine-enhanced MRI) with the application of a modified breath-holding technique (dual group). This group was compared with 100 patients who underwent gadoxetate disodium-enhanced MRI without the application of the modified breath-holding technique (single group). Patient risk factors and motion scores (1 [none] to 5 [non-diagnostic]) for each dynamic-phase imaging were analysed. In the dual group, mean motion scores did not differ significantly between gadoxetate disodium- and gadopentetate dimeglumine-enhanced MRI (p=0.096-0.807) in any phase. However, in all phases except the late dynamic phase, mean motion scores of the dual group were significantly lower than those in the single group. TSM incidence did not differ significantly between gadoxetate disodium- and gadopentetate dimeglumine-enhanced MRI in the dual group (3.8% vs. 1.3%, p=0.620). With proper application of the modified breath-holding technique, TSM occurrence with gadoxetate disodium-enhanced MRI was comparable to that associated with gadopentetate dimeglumine-enhanced MRI. (orig.)

  12. Comparison of transient severe motion in gadoxetate disodium and gadopentetate dimeglumine-enhanced MRI. Effect of modified breath-holding method

    Energy Technology Data Exchange (ETDEWEB)

    Song, Ji Soo; Choi, Eun Jung; Park, Eun Hae [Chonbuk National University Medical School and Hospital, Department of Radiology, Jeonju (Korea, Republic of); Research Institute of Clinical Medicine of Chonbuk National University, Jeonju (Korea, Republic of); Biomedical Research Institute of Chonbuk National University Hospital, Jeonju (Korea, Republic of); Lee, Ju-Hyung [Chonbuk National University Medical School, Department of Preventive Medicine, Jeonju (Korea, Republic of)

    2018-03-15

    To compare the occurrence of transient severe motion (TSM) between gadoxetate disodium- and gadopentetate dimeglumine-enhanced MRI and between gadoxetate disodium-enhanced MRI scans obtained with and without the application of a modified breath-holding technique. We reviewed 80 patients who underwent two magnetic resonance examinations (gadoxetate disodium-enhanced MRI and gadopentetate dimeglumine-enhanced MRI) with the application of a modified breath-holding technique (dual group). This group was compared with 100 patients who underwent gadoxetate disodium-enhanced MRI without the application of the modified breath-holding technique (single group). Patient risk factors and motion scores (1 [none] to 5 [non-diagnostic]) for each dynamic-phase imaging were analysed. In the dual group, mean motion scores did not differ significantly between gadoxetate disodium- and gadopentetate dimeglumine-enhanced MRI (p=0.096-0.807) in any phase. However, in all phases except the late dynamic phase, mean motion scores of the dual group were significantly lower than those in the single group. TSM incidence did not differ significantly between gadoxetate disodium- and gadopentetate dimeglumine-enhanced MRI in the dual group (3.8% vs. 1.3%, p=0.620). With proper application of the modified breath-holding technique, TSM occurrence with gadoxetate disodium-enhanced MRI was comparable to that associated with gadopentetate dimeglumine-enhanced MRI. (orig.)

  13. Synchronous Sounds Enhance Visual Sensitivity without Reducing Target Uncertainty

    Directory of Open Access Journals (Sweden)

    Yi-Chuan Chen

    2011-10-01

    Full Text Available We examined the crossmodal effect of the presentation of a simultaneous sound on visual detection and discrimination sensitivity using the equivalent noise paradigm (Dosher & Lu, 1998). In each trial, a tilted Gabor patch was presented in either the first or second of two intervals consisting of dynamic 2D white noise with one of seven possible contrast levels. The results revealed that participants' visual detection and discrimination sensitivity were both enhanced by the presentation of a simultaneous sound, though only close to the noise level at which participants' target contrast thresholds started to increase with increasing noise contrast. A further analysis of the psychometric function at this noise level revealed that the increase in sensitivity could not be explained by a reduction of participants' uncertainty regarding the onset time of the visual target. We suggest that this crossmodal facilitatory effect may be accounted for by perceptual enhancement elicited by a simultaneously-presented sound, and that the crossmodal facilitation was easier to observe when the visual system encountered a level of noise that happened to be close to the level of internal noise embedded within the system.
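The equivalent noise paradigm mentioned above models contrast threshold as flat while external noise sits below the observer's internal (equivalent) noise, then rising roughly in proportion to the noise contrast; the abstract's "knee" of the threshold-versus-noise function falls where the two are comparable. A minimal sketch of that standard model (parameter names are ours, not the authors'):

```python
import numpy as np

def equivalent_noise_threshold(n_ext, n_eq, k):
    """Contrast threshold vs. external noise contrast under the
    equivalent-noise model.

    n_ext : external noise contrast
    n_eq  : observer's internal (equivalent) noise contrast
    k     : overall efficiency scaling
    Thresholds stay near k * n_eq while n_ext << n_eq, then rise
    approximately linearly once n_ext dominates.
    """
    return k * np.sqrt(n_eq**2 + n_ext**2)
```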

  14. Visual search for motion-form conjunctions: is form discriminated within the motion system?

    Science.gov (United States)

    von Mühlenen, A; Müller, H J

    2001-06-01

    Motion-form conjunction search can be more efficient when the target is moving (a moving 45 degrees tilted line among moving vertical and stationary 45 degrees tilted lines) rather than stationary. This asymmetry may be due to aspects of form being discriminated within a motion system representing only moving items, whereas discrimination of stationary items relies on a static form system (J. Driver & P. McLeod, 1992). Alternatively, it may be due to search exploiting differential motion velocity and direction signals generated by the moving-target and distractor lines. To decide between these alternatives, 4 experiments systematically varied the motion-signal information conveyed by the moving target and distractors while keeping their form difference salient. Moving-target search was found to be facilitated only when differential motion-signal information was available. Thus, there is no need to assume that form is discriminated within the motion system.

  15. Motion Learning Based on Bayesian Program Learning

    Directory of Open Access Journals (Sweden)

    Cheng Meng-Zhen

    2017-01-01

    Full Text Available The concept of the virtual human has been highly anticipated since the 1980s. Using computer technology, human motion simulation can generate visual effects authentic enough to deceive the human eye. Bayesian Program Learning trains on one or a few motion examples and generates new motion data by decomposition and recombination; the generated motion is more realistic and natural than traditionally synthesized motion. In this paper, motion learning based on Bayesian Program Learning allows us to quickly generate new motion data, reduce workload, improve work efficiency, lower the cost of motion capture, and improve the reusability of data.

  16. Balancing awareness: Vestibular signals modulate visual consciousness in the absence of awareness.

    Science.gov (United States)

    Salomon, Roy; Kaliuzhna, Mariia; Herbelin, Bruno; Blanke, Olaf

    2015-11-01

    The processing of visual and vestibular information is crucial for perceiving self-motion. Visual cues, such as optic flow, have been shown to induce and alter vestibular percepts, yet the role of vestibular information in shaping visual awareness remains unclear. Here we investigated if vestibular signals influence the access to awareness of invisible visual signals. Using natural vestibular stimulation (passive yaw rotations) on a vestibular self-motion platform, and optic flow masked through continuous flash suppression (CFS) we tested if congruent visual-vestibular information would break interocular suppression more rapidly than incongruent information. We found that when the unseen optic flow was congruent with the vestibular signals perceptual suppression as quantified with the CFS paradigm was broken more rapidly than when it was incongruent. We argue that vestibular signals impact the formation of visual awareness through enhanced access to awareness for congruent multisensory stimulation. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Attention Determines Contextual Enhancement versus Suppression in Human Primary Visual Cortex.

    Science.gov (United States)

    Flevaris, Anastasia V; Murray, Scott O

    2015-09-02

    Neural responses in primary visual cortex (V1) depend on stimulus context in seemingly complex ways. For example, responses to an oriented stimulus can be suppressed when it is flanked by iso-oriented versus orthogonally oriented stimuli but can also be enhanced when attention is directed to iso-oriented versus orthogonal flanking stimuli. Thus the exact same contextual stimulus arrangement can have completely opposite effects on neural responses-in some cases leading to orientation-tuned suppression and in other cases leading to orientation-tuned enhancement. Here we show that stimulus-based suppression and enhancement of fMRI responses in humans depends on small changes in the focus of attention and can be explained by a model that combines feature-based attention with response normalization. Neurons in the primary visual cortex (V1) respond to stimuli within a restricted portion of the visual field, termed their "receptive field." However, neuronal responses can also be influenced by stimuli that surround a receptive field, although the nature of these contextual interactions and underlying neural mechanisms are debated. Here we show that the response in V1 to a stimulus in the same context can either be suppressed or enhanced depending on the focus of attention. We are able to explain the results using a simple computational model that combines two well established properties of visual cortical responses: response normalization and feature-based enhancement. Copyright © 2015 the authors 0270-6474/15/3512273-08$15.00/0.

  18. Contextual effects on motion perception and smooth pursuit eye movements.

    Science.gov (United States)

    Spering, Miriam; Gegenfurtner, Karl R

    2008-08-15

    Smooth pursuit eye movements are continuous, slow rotations of the eyes that allow us to follow the motion of a visual object of interest. These movements are closely related to sensory inputs from the visual motion processing system. To track a moving object in the natural environment, its motion first has to be segregated from the motion signals provided by surrounding stimuli. Here, we review experiments on the effect of the visual context on motion processing with a focus on the relationship between motion perception and smooth pursuit eye movements. While perception and pursuit are closely linked, we show that they can behave quite distinctly when required by the visual context.

  19. Helicopter flight simulation motion platform requirements

    Science.gov (United States)

    Schroeder, Jeffery Allyn

    Flight simulators attempt to reproduce in-flight pilot-vehicle behavior on the ground. This reproduction is challenging for helicopter simulators, as the pilot is often inextricably dependent on external cues for pilot-vehicle stabilization. One important simulator cue is platform motion; however, its required fidelity is unknown. To determine the required motion fidelity, several unique experiments were performed. A large displacement motion platform was used that allowed pilots to fly tasks with matched motion and visual cues. Then, the platform motion was modified to give cues varying from full motion to no motion. Several key results were found. First, lateral and vertical translational platform cues had significant effects on fidelity. Their presence improved performance and reduced pilot workload. Second, yaw and roll rotational platform cues were not as important as the translational platform cues. In particular, the yaw rotational motion platform cue did not appear at all useful in improving performance or reducing workload. Third, when the lateral translational platform cue was combined with visual yaw rotational cues, pilots believed the platform was rotating when it was not. Thus, simulator systems can be made more efficient by proper combination of platform and visual cues. Fourth, motion fidelity specifications were revised that now provide simulator users with a better prediction of motion fidelity based upon the frequency responses of their motion control laws. Fifth, vertical platform motion affected pilot estimates of steady-state altitude during altitude repositionings. This refutes the view that pilots estimate altitude and altitude rate in simulation solely from visual cues. Finally, the combined results led to a general method for configuring helicopter motion systems and for developing simulator tasks that more likely represent actual flight. The overall results can serve as a guide to future simulator designers and to today's operators.

  20. Comparable mechanisms of working memory interference by auditory and visual motion in youth and aging.

    Science.gov (United States)

    Mishra, Jyoti; Zanto, Theodore; Nilakantan, Aneesha; Gazzaley, Adam

    2013-08-01

    Intrasensory interference during visual working memory (WM) maintenance by object stimuli (such as faces and scenes), has been shown to negatively impact WM performance, with greater detrimental impacts of interference observed in aging. Here we assessed age-related impacts by intrasensory WM interference from lower-level stimulus features such as visual and auditory motion stimuli. We consistently found that interference in the form of ignored distractions and secondary task interruptions presented during a WM maintenance period, degraded memory accuracy in both the visual and auditory domain. However, in contrast to prior studies assessing WM for visual object stimuli, feature-based interference effects were not observed to be significantly greater in older adults. Analyses of neural oscillations in the alpha frequency band further revealed preserved mechanisms of interference processing in terms of post-stimulus alpha suppression, which was observed maximally for secondary task interruptions in visual and auditory modalities in both younger and older adults. These results suggest that age-related sensitivity of WM to interference may be limited to complex object stimuli, at least at low WM loads. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Applications of Phase-Based Motion Processing

    Science.gov (United States)

    Branch, Nicholas A.; Stewart, Eric C.

    2018-01-01

    Image pyramids provide useful information in determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Instead of implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique and the necessary software to display the phase changes of high-frequency signals within video. The present technique quickly identifies the regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but requires use of the computationally intensive Fourier transform. While Riesz pyramids present an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still produces large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software is presented for quickly identifying structural response through optical flow and phase visualization in both Python and MATLAB.
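As a rough illustration of phase-based motion visualization, the sketch below filters two frames with a single complex band-pass filter built directly in the Fourier domain (a one-band stand-in for a full complex steerable pyramid) and returns the per-pixel phase change, which is proportional to sub-pixel displacement along the filter orientation for small motions. The filter centre frequency and bandwidth are arbitrary choices, not values from the paper.

```python
import numpy as np

def phase_change_map(frame0, frame1, fx=0.125, fy=0.0, bw=0.05):
    """Per-pixel phase change between two frames in one frequency band.

    Each frame is multiplied in the Fourier domain by a Gaussian window
    centred on spatial frequency (fx, fy) in cycles/pixel. Keeping only
    this positive-frequency lobe yields a complex-valued response whose
    local phase shifts in proportion to small displacements.
    """
    H, W = frame0.shape
    U, V = np.meshgrid(np.fft.fftfreq(W), np.fft.fftfreq(H))
    G = np.exp(-((U - fx)**2 + (V - fy)**2) / (2 * bw**2))  # one-sided band
    a0 = np.fft.ifft2(np.fft.fft2(frame0) * G)
    a1 = np.fft.ifft2(np.fft.fft2(frame1) * G)
    return np.angle(a1 * np.conj(a0))  # radians of phase change per pixel
```

For a sinusoidal pattern of frequency fx shifted by d pixels, the map is approximately a constant -2*pi*fx*d everywhere, which is what makes a single phase visualization highlight where motion occurs.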

  2. Motion as a phenotype: the use of live-cell imaging and machine visual screening to characterize transcription-dependent chromosome dynamics

    Directory of Open Access Journals (Sweden)

    Silver Pamela A

    2006-04-01

    Full Text Available Abstract Background: Gene transcriptional activity is well correlated with intra-nuclear position, especially relative to the nuclear periphery, which is a region classically associated with gene silencing. Recently, however, actively transcribed genes have also been found localized to the nuclear periphery in the yeast Saccharomyces cerevisiae. When genes are activated, they become associated with the nuclear pore complex (NPC) at the nuclear envelope. Furthermore, chromosomes are not static structures, but exhibit constrained diffusion in real-time, live-cell studies of particular loci. The relationship of chromosome motion with transcriptional activation and active-gene recruitment to the nuclear periphery has not yet been investigated. Results: We have generated a yeast strain that enables us to observe the motion of the galactose-inducible GAL gene locus relative to the nuclear periphery in real-time under transcriptionally active and repressed conditions. Using segmented geometric particle tracking, we show that the repressed GAL locus undergoes constrained diffusive movement, and that transcriptional induction with galactose is associated with an enrichment in cells with GAL loci that are both associated with the nuclear periphery and much more constrained in their movement. Furthermore, we report that the mRNA export factor Sac3 is involved in this galactose-induced enrichment of GAL loci at the nuclear periphery. In parallel, using a novel machine visual screening technique, we find that the motion of constrained GAL loci correlates with the motion of the cognate nuclei in galactose-induced cells. Conclusion: Transcriptional activation of the GAL genes is associated with their tethering and motion constraint at the nuclear periphery. We describe a model of gene recruitment to the nuclear periphery involving gene diffusion and the mRNA export factor Sac3 that can be used as a framework for further experimentation.
In addition, we applied to

  3. Cholinergic pairing with visual activation results in long-term enhancement of visual evoked potentials.

    Directory of Open Access Journals (Sweden)

    Jun Il Kang

    Full Text Available Acetylcholine (ACh) contributes to learning processes by modulating cortical plasticity in terms of intensity of neuronal activity and selectivity properties of cortical neurons. However, it is not known if ACh induces long term effects within the primary visual cortex (V1) that could sustain visual learning mechanisms. In the present study we analyzed visual evoked potentials (VEPs) in V1 of rats during a 4-8 h period after coupling visual stimulation to an intracortical injection of the ACh analog carbachol or stimulation of the basal forebrain. To clarify the action of ACh on VEP activity in V1, we individually pre-injected muscarinic (scopolamine), nicotinic (mecamylamine), alpha7 (methyllycaconitine), and NMDA (CPP) receptor antagonists before carbachol infusion. Stimulation of the cholinergic system paired with visual stimulation significantly increased VEP amplitude (56%) during a 6 h period. Pre-treatment with scopolamine, mecamylamine and CPP completely abolished this long-term enhancement, while alpha7 inhibition induced an instant increase of VEP amplitude. This suggests a role of ACh in facilitating visual stimuli responsiveness through mechanisms comparable to LTP, which involve nicotinic and muscarinic receptors with an interaction of NMDA transmission in the visual cortex.

  4. Efficient spiking neural network model of pattern motion selectivity in visual cortex.

    Science.gov (United States)

    Beyeler, Michael; Richert, Micah; Dutt, Nikil D; Krichmar, Jeffrey L

    2014-07-01

    Simulating large-scale models of biological motion perception is challenging, due to the required memory to store the network structure and the computational power needed to quickly solve the neuronal dynamics. A low-cost yet high-performance approach to simulating large-scale neural network models in real-time is to leverage the parallel processing capability of graphics processing units (GPUs). Based on this approach, we present a two-stage model of visual area MT that we believe to be the first large-scale spiking network to demonstrate pattern direction selectivity. In this model, component-direction-selective (CDS) cells in MT linearly combine inputs from V1 cells that have spatiotemporal receptive fields according to the motion energy model of Simoncelli and Heeger. Pattern-direction-selective (PDS) cells in MT are constructed by pooling over MT CDS cells with a wide range of preferred directions. Responses of our model neurons are comparable to electrophysiological results for grating and plaid stimuli as well as speed tuning. The behavioral response of the network in a motion discrimination task is in agreement with psychophysical data. Moreover, our implementation outperforms a previous implementation of the motion energy model by orders of magnitude in terms of computational speed and memory usage. The full network, which comprises 153,216 neurons and approximately 40 million synapses, processes 20 frames per second of a 40 × 40 input video in real-time using a single off-the-shelf GPU. To promote the use of this algorithm among neuroscientists and computer vision researchers, the source code for the simulator, the network, and analysis scripts are publicly available.
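The motion energy front end this model builds on (Adelson-Bergen style quadrature filtering, extended by Simoncelli and Heeger) can be sketched in a few lines. The global Fourier-projection version below replaces the model's local spatiotemporal receptive fields, spiking dynamics, and GPU implementation; it only illustrates how opponent energy from quadrature pairs signals motion direction.

```python
import numpy as np

def motion_energy(stimulus, f_s=0.125, f_t=0.125):
    """Opponent motion energy for a space-time luminance array.

    stimulus : (T, X) array, time by one spatial dimension.
    Projects the stimulus onto quadrature pairs of space-time oriented
    sinusoids tuned to spatial frequency f_s and temporal frequency f_t
    (a global, whole-stimulus simplification of local V1 receptive
    fields). Positive output indicates rightward motion.
    """
    T, X = stimulus.shape
    t = np.arange(T)[:, None]
    x = np.arange(X)[None, :]

    def energy(sign):
        # quadrature pair: squared even- and odd-phase responses sum to
        # a phase-invariant energy for one preferred direction
        arg = 2 * np.pi * (f_s * x + sign * f_t * t)
        even = np.sum(stimulus * np.cos(arg))
        odd = np.sum(stimulus * np.sin(arg))
        return even**2 + odd**2

    # rightward-tuned energy minus leftward-tuned energy
    return energy(-1) - energy(+1)
```

A drifting grating matched to one direction drives that direction's energy and leaves the opponent channel near zero, giving a signed direction estimate.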

  5. Long-term effects of serial anodal tDCS on motion perception in subjects with occipital stroke measured in the unaffected visual hemifield

    Directory of Open Access Journals (Sweden)

    Manuel C Olma

    2013-06-01

    Full Text Available Transcranial direct current stimulation (tDCS) is a novel neuromodulatory tool that has seen early transition to clinical trials, although the high variability of these findings necessitates further studies in clinically relevant populations. The majority of evidence on the effects of repeated tDCS is based on research in the human motor system, but it is unclear whether the long-term effects of serial tDCS are motor-specific or transferable to other brain areas. This study aimed to examine whether serial anodal tDCS over the visual cortex can exogenously induce long-term neuroplastic changes in the visual cortex. However, when the visual cortex is affected by a cortical lesion, up-regulated endogenous neuroplastic adaptation processes may alter the susceptibility to tDCS. To this end, motion perception was investigated in the unaffected hemifield of subjects with unilateral visual cortex lesions. Twelve subjects with occipital ischaemic lesions participated in a within-subject, sham-controlled, double-blind study. MRI-registered sham or anodal tDCS (1.5 mA, 20 minutes) was applied on five consecutive days over the visual cortex. Motion perception was tested before and after stimulation sessions and at 14- and 28-day follow-up. After a 16-day interval an identical study block with the other stimulation condition (anodal or sham tDCS) followed. Serial anodal tDCS over the visual cortex resulted in an improvement in motion perception, a function attributed to MT/V5. This effect was still measurable at 14- and 28-day follow-up measurements. Thus, this may represent evidence for long-term tDCS-induced plasticity and has implications for the design of studies examining the time course of tDCS effects in both the visual and motor systems.

  6. Visualization of disciplinary profiles: Enhanced science overlay maps

    NARCIS (Netherlands)

    Carley, S.; Porter, A.L.; Rafols, I.; Leydesdorff, L.

    Purpose: The purpose of this study is to modernize previous work on science overlay maps by updating the underlying citation matrix, generating new clusters of scientific disciplines, enhancing visualizations, and providing more accessible means for analysts to generate their own maps.

  7. A preliminary study of MR sickness evaluation using visual motion aftereffect for advanced driver assistance systems.

    Science.gov (United States)

    Nakajima, Sawako; Ino, Shuichi; Ifukube, Tohru

    2007-01-01

    Mixed Reality (MR) technologies have recently been explored in many areas of Human-Machine Interface (HMI) such as medicine, manufacturing, entertainment and education. However, MR sickness, a kind of motion sickness, is caused by sensory conflicts between the real world and the virtual world. The purpose of this paper is to establish a new evaluation method for motion and MR sickness. This paper investigates the relationship between whole-body vibration related to MR technologies and the motion aftereffect (MAE) phenomenon in the human visual system. The MR environment is modeled after advanced driver assistance systems in near-future vehicles. Seated subjects in the MR simulator were shaken in the pitch direction at frequencies ranging from 0.1 to 2.0 Hz. Results show that the MAE is useful for evaluating the incidence of MR sickness. In addition, a method to reduce MR sickness by auditory stimulation is proposed.

  8. Drifting while stepping in place in old adults: Association of self-motion perception with reference frame reliance and ground optic flow sensitivity.

    Science.gov (United States)

    Agathos, Catherine P; Bernardin, Delphine; Baranton, Konogan; Assaiante, Christine; Isableu, Brice

    2017-04-07

    Optic flow provides visual self-motion information and is shown to modulate gait and provoke postural reactions. We have previously reported an increased reliance on the visual, as opposed to the somatosensory-based egocentric, frame of reference (FoR) for spatial orientation with age. In this study, we evaluated FoR reliance for self-motion perception with respect to the ground surface. We examined how effects of ground optic flow direction on posture may be enhanced by intermittent podal contact with the ground, by reliance on the visual FoR, and by aging. Young, middle-aged and old adults stood quietly (QS) or stepped in place (SIP) for 30 s under static stimulation, approaching and receding optic flow on the ground, and a control condition. We calculated center of pressure (COP) translation; optic flow sensitivity was defined as the ratio of COP translation velocity over absolute optic flow velocity: the visual self-motion quotient (VSQ). COP translation was more influenced by receding flow during QS and by approaching flow during SIP. In addition, old adults drifted forward while SIP without any imposed visual stimulation. Approaching flow limited this natural drift and receding flow enhanced it, as indicated by the VSQ. The VSQ appears to be a motor index of reliance on the visual FoR during SIP and is associated with greater reliance on the visual and reduced reliance on the egocentric FoR. Exploitation of the egocentric FoR for self-motion perception with respect to the ground surface is compromised by age and associated with greater sensitivity to optic flow. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
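
    The VSQ described above is simply a velocity ratio. A minimal sketch of its computation, where the function name, sampling rate, units, and toy numbers are all illustrative assumptions rather than the study's data:

```python
import numpy as np

def visual_self_motion_quotient(cop_positions, dt, flow_velocity):
    """VSQ: mean COP translation velocity divided by absolute optic flow
    velocity. Variable names and units (mm, s) are illustrative assumptions."""
    cop_velocity = np.mean(np.diff(cop_positions)) / dt  # mean velocity, mm/s
    return cop_velocity / abs(flow_velocity)

# Toy numbers: a 60 mm forward drift over 30 s of stepping in place,
# under 100 mm/s receding ground flow (not the study's data).
cop = np.linspace(0.0, 60.0, 301)  # anteroposterior COP, sampled at 10 Hz
vsq = visual_self_motion_quotient(cop, dt=0.1, flow_velocity=-100.0)
print(round(vsq, 3))  # 0.02
```

    A larger VSQ would indicate that more of the observed COP drift is attributable to the imposed flow, i.e., greater reliance on the visual frame of reference.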

  9. Encoding audio motion: spatial impairment in early blind individuals

    Directory of Open Access Journals (Sweden)

    Sara Finocchietti

    2015-09-01

    Full Text Available The consequence of blindness on auditory spatial localization has been an interesting issue of research in the last decade, providing mixed results. Enhanced auditory spatial skills in individuals with visual impairment have been reported by multiple studies, while some aspects of spatial hearing seem to be impaired in the absence of vision. In this study, the ability to encode the trajectory of a two-dimensional sound motion, reproducing the complete movement and reaching the correct end-point sound position, was evaluated in 12 early blind individuals, 8 late blind individuals, and 20 age-matched sighted blindfolded controls. Early blind individuals correctly determine the direction of the sound motion on the horizontal axis, but show a clear deficit in encoding the sound motion in the lower side of the plane. On the contrary, late blind individuals and blindfolded controls perform much better, with no deficit in the lower side of the plane. In fact, the mean localization error was 271 ± 10 mm for early blind individuals, 65 ± 4 mm for late blind individuals, and 68 ± 2 mm for sighted blindfolded controls. These results support the hypotheses that (i) there exists a trade-off between the development of enhanced perceptual abilities and the role of vision in the sound localization abilities of early blind individuals, and (ii) visual information is fundamental in calibrating some aspects of the representation of auditory space in the brain.

  10. Neural Mechanisms of Illusory Motion: Evidence from ERP Study

    Directory of Open Access Journals (Sweden)

    Xu Y. A. N. Yun

    2011-05-01

    Full Text Available ERPs were used to examine the neural correlates of illusory motion by presenting the Rice Wave illusion (CI), its two variants (WI and NI), and a real motion video (RM). Results showed that, first, RM elicited a more negative deflection than CI, NI and WI between 200–350 ms. Second, between 500–600 ms, CI elicited a more positive deflection than NI and WI, and RM elicited a more positive deflection than CI; more interestingly, brain activity was sequentially enhanced with the corresponding motion strength. We inferred that the former component might reflect the successful encoding of the local motion signals in detectors at the lower stage, while the latter might be involved in the intensive representations of visual input in real/illusory motion perception, i.e., the whole motion-signal organization at the later stage of motion perception. Finally, between 1185–1450 ms, a significantly more positive component was found for illusory/real motion tasks than for NI (no motion). Overall, we demonstrated a stronger deflection under correspondingly larger motion strength. These results reflect not only the different temporal patterns of illusory and real motion but also their distinct working memory representation and storage.

  11. Semi-automatic motion compensation of contrast-enhanced ultrasound images from abdominal organs for perfusion analysis

    Czech Academy of Sciences Publication Activity Database

    Schafer, S.; Nylund, K.; Saevik, F.; Engjom, T.; Mézl, M.; Jiřík, Radovan; Dimcevski, G.; Gilja, O.H.; Tönnies, K.

    2015-01-01

    Roč. 63, AUG 1 (2015), s. 229-237 ISSN 0010-4825 R&D Projects: GA ČR GAP102/12/2380 Institutional support: RVO:68081731 Keywords : ultrasonography * motion analysis * motion compensation * registration * CEUS * contrast-enhanced ultrasound * perfusion * perfusion modeling Subject RIV: FS - Medical Facilities ; Equipment Impact factor: 1.521, year: 2015

  12. Similar effects of feature-based attention on motion perception and pursuit eye movements at different levels of awareness

    OpenAIRE

    Spering, Miriam; Carrasco, Marisa

    2012-01-01

    Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth pursuit eye movements in response to moving dichoptic plaids–stimuli composed of two orthogonally-drifting gratings, presented separately to each eye–in ...

  13. Visual working memory contaminates perception

    OpenAIRE

    Kang, Min-Suk; Hong, Sang Wook; Blake, Randolph; Woodman, Geoffrey F.

    2011-01-01

    Indirect evidence suggests that the contents of visual working memory may be maintained within sensory areas early in the visual hierarchy. We tested this possibility using a well-studied motion repulsion phenomenon in which perception of one direction of motion is distorted when another direction of motion is viewed simultaneously. We found that observers misperceived the actual direction of motion of a single motion stimulus if, while viewing that stimulus, they were holding a different mot...

  14. Supporting Knowledge Integration in Chemistry with a Visualization-Enhanced Inquiry Unit

    Science.gov (United States)

    Chiu, Jennifer L.; Linn, Marcia C.

    2014-01-01

    This paper describes the design and impact of an inquiry-oriented online curriculum that takes advantage of dynamic molecular visualizations to improve students' understanding of chemical reactions. The visualization-enhanced unit uses research-based guidelines following the knowledge integration framework to help students develop coherent…

  15. The cerebellum and visual perceptual learning: evidence from a motion extrapolation task.

    Science.gov (United States)

    Deluca, Cristina; Golzar, Ashkan; Santandrea, Elisa; Lo Gerfo, Emanuele; Eštočinová, Jana; Moretto, Giuseppe; Fiaschi, Antonio; Panzeri, Marta; Mariotti, Caterina; Tinazzi, Michele; Chelazzi, Leonardo

    2014-09-01

    Visual perceptual learning is widely assumed to reflect plastic changes occurring along the cerebro-cortical visual pathways, including at the earliest stages of processing, though increasing evidence indicates that higher-level brain areas are also involved. Here we addressed the possibility that the cerebellum plays an important role in visual perceptual learning. Within the realm of motor control, the cerebellum supports learning of new skills and recalibration of motor commands when movement execution is consistently perturbed (adaptation). Growing evidence indicates that the cerebellum is also involved in cognition and mediates forms of cognitive learning. Therefore, the obvious question arises whether the cerebellum might play a similar role in learning and adaptation within the perceptual domain. We explored a possible deficit in visual perceptual learning (and adaptation) in patients with cerebellar damage using variants of a novel motion extrapolation, psychophysical paradigm. Compared to their age- and gender-matched controls, patients with focal damage to the posterior (but not the anterior) cerebellum showed strongly diminished learning, in terms of both rate and amount of improvement over time. Consistent with a double-dissociation pattern, patients with focal damage to the anterior cerebellum instead showed more severe clinical motor deficits, indicative of a distinct role of the anterior cerebellum in the motor domain. The collected evidence demonstrates that a pure form of slow-incremental visual perceptual learning is crucially dependent on the intact cerebellum, bearing the notion that the human cerebellum acts as a learning device for motor, cognitive and perceptual functions. We interpret the deficit in terms of an inability to fine-tune predictive models of the incoming flow of visual perceptual input over time. Moreover, our results suggest a strong dissociation between the role of different portions of the cerebellum in motor versus

  16. Dendro-dendritic interactions between motion-sensitive large-field neurons in the fly.

    Science.gov (United States)

    Haag, Juergen; Borst, Alexander

    2002-04-15

    For visual course control, flies rely on a set of motion-sensitive neurons called lobula plate tangential cells (LPTCs). Among these cells, the so-called CH (centrifugal horizontal) cells shape by their inhibitory action the receptive field properties of other LPTCs called FD (figure detection) cells specialized for figure-ground discrimination based on relative motion. Studying the ipsilateral input circuitry of CH cells by means of dual-electrode and combined electrical-optical recordings, we find that CH cells receive graded input from HS (large-field horizontal system) cells via dendro-dendritic electrical synapses. This particular wiring scheme leads to a spatial blur of the motion image on the CH cell dendrite, and, after inhibiting FD cells, to an enhancement of motion contrast. This could be crucial for enabling FD cells to discriminate object from self motion.

  17. Enhancing Sensitivity to Visual Motion.

    Science.gov (United States)

    1980-05-01

    for certain amblyopes, repeated testing enhanced sensitivity several fold. Amblyopia refers to any of a class of diseases in which there is a loss in... See SEKULER, 1980 for a full treatment of these models. The predictions for the Simultaneous and Random conditions from the different models are...

  18. A material-independent cell–environment niche based on microreciprocating motion for cell growth enhancement

    International Nuclear Information System (INIS)

    Li, Ching-Wen; Wang, Gou-Jen

    2013-01-01

    In tissue engineering, cell–cell, cell–scaffold and cell–environment communication balances regulate how cell populations participate in tissue generation, maintenance and repair. These communication balances are called niches. In this study, an easily implemented and material-independent cell–environment niche based on microreciprocating motions is developed to enhance cell growth. A micropositioning piezoelectric lead zirconate titanate stage is used to provide precise microreciprocating shear stress motions. Various shear stresses were applied to bovine endothelial cells (BECs) that were cultured on the artificially synthesized materials to obtain the suitable shear stress for growth enhancement. It was found that the suitable shear stress for apparent enhancement of BEC growth ranges from 1.8 to 2.2 N m⁻². Biopolymers were further used to verify the feasibility of the proposed approach using the optimized shear stress obtained from the culture on artificially synthesized polymers. The results further confirmed that the growth of BECs was enhanced as expected under the calculated reciprocating frequencies based on the suitable shear stress. It is hoped that the proposed microshear-stress-based niche could be a more cost- and time-effective solution for the enhancement of cell growth in tissue engineering applications. (paper)

  19. Motion in images is essential to cause motion sickness symptoms, but not to increase postural sway

    NARCIS (Netherlands)

    Lubeck, A.J.A.; Bos, J.E.; Stins, J.F.

    2015-01-01

    Abstract Objective It is generally assumed that motion in motion images is responsible for increased postural sway as well as for visually induced motion sickness (VIMS). However, this has not yet been tested. To that end, we studied postural sway and VIMS induced by motion and still images. Method

  20. Rapid visualization of latent fingermarks using gold seed-mediated enhancement

    Directory of Open Access Journals (Sweden)

    Chia-Hao Su

    2016-11-01

    Full Text Available Abstract Background Fingermarks are one of the most important and useful forms of physical evidence in forensic investigations. However, latent fingermarks are not directly visible, but can be visualized due to the presence of other residues (such as inorganic salts, proteins, polypeptides, enzymes and human metabolites) which can be detected or recognized through various strategies. Convenient and rapid techniques are still needed to provide obvious contrast between the background and the fingermark ridges and thus visualize latent fingermarks with a high degree of selectivity and sensitivity. Results In this work, lysozyme-binding aptamer-conjugated Au nanoparticles (NPs) are used to recognize and target lysozyme in the fingermark ridges, and an Au+-complex solution is used as a growth agent to reduce Au+ to Au0 on the surface of the Au NPs. Distinct fingermark patterns were visualized on a range of professional forensic substrates within 3 min; the resulting images could be observed by the naked eye without background interference. The entire process from fingermark collection to visualization entails only two steps and can be completed in less than 10 min. The proposed method provides cost and time savings over current fingermark visualization methods. Conclusions We report a simple, inexpensive, and fast method for the rapid visualization of latent fingermarks on non-porous substrates using Au seed-mediated enhancement. Au seed-mediated enhancement is used to achieve the rapid visualization of latent fingermarks on non-porous substrates by the naked eye without the use of expensive or sophisticated instruments. The proposed approach offers faster detection and visualization of latent fingermarks than existing methods. The proposed method is expected to increase detection efficiency for latent fingermarks and reduce time requirements and costs for forensic investigations.

  1. A synchronous surround increases the motion strength gain of motion.

    Science.gov (United States)

    Linares, Daniel; Nishida, Shin'ya

    2013-11-12

    Coherent motion detection is greatly enhanced by the synchronous presentation of a static surround (Linares, Motoyoshi, & Nishida, 2012). To further understand this contextual enhancement, here we measured the sensitivity to discriminate motion strength for several pedestal strengths with and without a surround. We found that the surround improved discrimination of low and medium motion strengths, but did not improve or even impaired discrimination of high motion strengths. We used motion strength discriminability to estimate the perceptual response function assuming additive noise and found that the surround increased the motion strength gain, rather than the response gain. Given that eye and body movements continuously introduce transients in the retinal image, it is possible that this strength gain occurs in natural vision.
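
    The distinction drawn above between a motion strength gain and a response gain can be illustrated with a hypothetical saturating (Naka–Rushton-style) response function; the function choice and all parameter values below are assumptions for illustration, not the paper's fitted model:

```python
import numpy as np

def naka_rushton(s, r_max=1.0, s50=0.3, n=2.0):
    """Hypothetical saturating response to motion strength s in [0, 1]."""
    return r_max * s**n / (s**n + s50**n)

s = np.linspace(0.0, 1.0, 101)
baseline = naka_rushton(s)
strength_gain = naka_rushton(2.0 * s)  # surround scales the effective input strength
response_gain = 2.0 * naka_rushton(s)  # alternative: surround scales the output

# A strength-gain change shifts the curve along the strength axis but cannot
# exceed the saturation level; a response-gain change rescales the whole curve.
print(strength_gain[-1] <= 1.0, response_gain[-1] > 1.0)  # True True
```

    The two hypotheses thus make different predictions at high motion strengths, which is where the surround's benefit disappeared in the data.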

  2. Neural Mechanisms of Cortical Motion Computation Based on a Neuromorphic Sensory System

    Science.gov (United States)

    Abdul-Kreem, Luma Issa; Neumann, Heiko

    2015-01-01

    The visual cortex analyzes motion information along hierarchically arranged visual areas that interact through bidirectional interconnections. This work suggests a bio-inspired visual model focusing on the interactions of the cortical areas, in which new mechanisms of feedforward and feedback processing are introduced. The model uses a neuromorphic vision sensor (silicon retina) that simulates the spike-generation functionality of the biological retina. Our model takes into account two main model visual areas, namely V1 and MT, with different feature selectivities. The initial motion is estimated in model area V1 using spatiotemporal filters to locally detect the direction of motion. Here, we adapt the filtering scheme originally suggested by Adelson and Bergen to make it consistent with the spike representation of the DVS. The responses of area V1 are weighted and pooled by area MT cells which are selective to different velocities, i.e. direction and speed. Such feature selectivity is here derived from compositions of activities in the spatio-temporal domain, integrating over larger space-time regions (receptive fields). In order to account for the bidirectional coupling of cortical areas, we match properties of the feature selectivity in both areas for feedback processing. For such linkage, we integrate the responses over different speeds along a particular preferred direction. Normalization of activities is carried out over the spatial as well as the feature domains to balance the activities of individual neurons in model areas V1 and MT. Our model was tested using different stimuli that moved in different directions. The results reveal that the error margin between the estimated motion and synthetic ground truth is decreased in area MT compared with the initial estimation in area V1. In addition, the modulated V1 cell activations show an enhancement of the initial motion estimation that is steered by feedback signals from MT cells.
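
    The V1 stage described above builds on Adelson–Bergen-style motion energy. A minimal, hypothetical sketch of opponent motion energy for a 1-D space × time stimulus (the global quadrature filters and frequencies are illustrative simplifications; the actual model operates on DVS spike streams):

```python
import numpy as np

def motion_energy(stimulus, sf=0.1, tf=0.1):
    """Opponent motion energy for a (time x space) stimulus, in the spirit of
    Adelson & Bergen's spatiotemporal energy model. Frequencies are in
    cycles per sample and are illustrative assumptions."""
    nt, nx = stimulus.shape
    t = np.arange(nt)[:, None]
    x = np.arange(nx)[None, :]

    def energy(direction):
        # Quadrature pair of space-time oriented filters for one direction.
        phase = 2 * np.pi * (sf * x + direction * tf * t)
        even = np.sum(stimulus * np.cos(phase))
        odd = np.sum(stimulus * np.sin(phase))
        return even**2 + odd**2

    # Opponent stage: positive for rightward drift, negative for leftward.
    return energy(-1) - energy(+1)

t = np.arange(64)[:, None]
x = np.arange(64)[None, :]
rightward = np.cos(2 * np.pi * (0.1 * x - 0.1 * t))  # grating drifting right
leftward = np.cos(2 * np.pi * (0.1 * x + 0.1 * t))   # grating drifting left
print(motion_energy(rightward) > 0 > motion_energy(leftward))  # True
```

    An MT-like stage would then pool such directional energies over space and over a bank of spatiotemporal frequencies tuned to different speeds.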

  3. Neural Mechanisms of Cortical Motion Computation Based on a Neuromorphic Sensory System.

    Directory of Open Access Journals (Sweden)

    Luma Issa Abdul-Kreem

    Full Text Available The visual cortex analyzes motion information along hierarchically arranged visual areas that interact through bidirectional interconnections. This work suggests a bio-inspired visual model focusing on the interactions of the cortical areas, in which new mechanisms of feedforward and feedback processing are introduced. The model uses a neuromorphic vision sensor (silicon retina) that simulates the spike-generation functionality of the biological retina. Our model takes into account two main model visual areas, namely V1 and MT, with different feature selectivities. The initial motion is estimated in model area V1 using spatiotemporal filters to locally detect the direction of motion. Here, we adapt the filtering scheme originally suggested by Adelson and Bergen to make it consistent with the spike representation of the DVS. The responses of area V1 are weighted and pooled by area MT cells which are selective to different velocities, i.e. direction and speed. Such feature selectivity is here derived from compositions of activities in the spatio-temporal domain, integrating over larger space-time regions (receptive fields). In order to account for the bidirectional coupling of cortical areas, we match properties of the feature selectivity in both areas for feedback processing. For such linkage, we integrate the responses over different speeds along a particular preferred direction. Normalization of activities is carried out over the spatial as well as the feature domains to balance the activities of individual neurons in model areas V1 and MT. Our model was tested using different stimuli that moved in different directions. The results reveal that the error margin between the estimated motion and synthetic ground truth is decreased in area MT compared with the initial estimation in area V1. In addition, the modulated V1 cell activations show an enhancement of the initial motion estimation that is steered by feedback signals from MT cells.

  4. Enhanced dimension-specific visual working memory in grapheme–color synesthesia

    Science.gov (United States)

    Terhune, Devin Blair; Wudarczyk, Olga Anna; Kochuparampil, Priya; Cohen Kadosh, Roi

    2013-01-01

    There is emerging evidence that the encoding of visual information and the maintenance of this information in a temporarily accessible state in working memory rely on the same neural mechanisms. A consequence of this overlap is that atypical forms of perception should influence working memory. We examined this by investigating whether having grapheme–color synesthesia, a condition characterized by the involuntary experience of color photisms when reading or representing graphemes, would confer benefits on working memory. Two competing hypotheses propose that superior memory in synesthesia results from information being coded in two information channels (dual-coding) or from superior dimension-specific visual processing (enhanced processing). We discriminated between these hypotheses in three n-back experiments in which controls and synesthetes viewed inducer and non-inducer graphemes and maintained color or grapheme information in working memory. Synesthetes displayed superior color working memory than controls for both grapheme types, whereas the two groups did not differ in grapheme working memory. Further analyses excluded the possibilities of enhanced working memory among synesthetes being due to greater color discrimination, stimulus color familiarity, or bidirectionality. These results reveal enhanced dimension-specific visual working memory in this population and supply further evidence for a close relationship between sensory processing and the maintenance of sensory information in working memory. PMID:23892185

  5. Ultrawide Band Gap β-Ga2O3 Nanomechanical Resonators with Spatially Visualized Multimode Motion.

    Science.gov (United States)

    Zheng, Xu-Qian; Lee, Jaesung; Rafique, Subrina; Han, Lu; Zorman, Christian A; Zhao, Hongping; Feng, Philip X-L

    2017-12-13

    Beta gallium oxide (β-Ga2O3) is an emerging ultrawide band gap (4.5–4.9 eV) semiconductor with attractive properties for future power electronics, optoelectronics, and sensors for detecting gases and ultraviolet radiation. β-Ga2O3 thin films made by various methods are being actively studied toward such devices. Here, we report on the experimental demonstration of single-crystal β-Ga2O3 nanomechanical resonators using β-Ga2O3 nanoflakes grown via low-pressure chemical vapor deposition (LPCVD). By investigating β-Ga2O3 circular drumhead structures, we demonstrate multimode nanoresonators up to the sixth mode in high and very high frequency (HF/VHF) bands, and also realize spatial mapping and visualization of the multimode motion. These measurements reveal a Young's modulus of E_Y = 261 GPa and anisotropic biaxial built-in tension of 37.5 MPa and 107.5 MPa. We find that thermal annealing can considerably improve the resonance characteristics, including ∼40% upshift in frequency and ∼90% enhancement in quality (Q) factor. This study lays a foundation for future exploration and development of mechanically coupled and tunable β-Ga2O3 electronic, optoelectronic, and physical sensing devices.
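
    The link between built-in tension and multimode resonance frequencies mentioned above follows, in the isotropic membrane limit, the circular drumhead formula; the sketch below uses illustrative numbers, not the device's measured parameters, and ignores the reported tension anisotropy:

```python
import math

# First zeros alpha_mn of the Bessel functions J_m, which set the mode
# frequencies of an ideal circular membrane (values to 3 decimals).
BESSEL_ZEROS = {(0, 1): 2.405, (1, 1): 3.832, (2, 1): 5.136, (0, 2): 5.520}

def drumhead_frequency(mode, radius, stress, density):
    """f_mn = alpha_mn / (2*pi*a) * sqrt(sigma / rho) for an isotropically
    tensioned circular membrane (the thickness cancels out)."""
    alpha = BESSEL_ZEROS[mode]
    return alpha / (2 * math.pi * radius) * math.sqrt(stress / density)

# Illustrative numbers, not the measured device: 10-um radius, 70 MPa
# built-in stress, beta-Ga2O3 density ~5950 kg/m^3.
f01 = drumhead_frequency((0, 1), radius=10e-6, stress=70e6, density=5950.0)
f11 = drumhead_frequency((1, 1), radius=10e-6, stress=70e6, density=5950.0)
print(round(f01 / 1e6, 1), round(f11 / f01, 2))  # ~4.2 MHz fundamental; ratio 1.59
```

    In the pure membrane limit, the mode ratios depend only on the Bessel zeros, so deviations of the measured ratios from these values are one way tension anisotropy and flexural rigidity reveal themselves.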

  6. Schizophrenia and visual backward masking: a general deficit of target enhancement

    Directory of Open Access Journals (Sweden)

    Michael H Herzog

    2013-05-01

    Full Text Available The obvious symptoms of schizophrenia are of cognitive and psychopathological nature. However, schizophrenia also affects visual processing, which becomes particularly evident when stimuli are presented for short durations and are followed by a masking stimulus. Visual deficits are of great interest because they might be related to the genetic variations underlying the disease (endophenotype concept). Visual masking deficits are usually attributed to specific dysfunctions of the visual system such as a hypo- or hyper-active magnocellular system. Here, we propose that visual deficits are a manifestation of a general deficit related to the enhancement of weak neural signals, as occurs in all other sorts of information processing. We summarize previous findings with the shine-through masking paradigm, where a shortly presented vernier target is followed by a masking grating. The mask deteriorates visual processing in schizophrenic patients by almost an order of magnitude compared to healthy controls. We propose that these deficits are caused by dysfunctions of attention and of the cholinergic system, leading to weak neural activity corresponding to the vernier. High-density electrophysiological recordings (EEG) show that neural activity is indeed strongly reduced in schizophrenic patients, which we attribute to the lack of vernier enhancement. When only the masking grating is presented, EEG responses are roughly comparable between patients and controls. Our hypothesis is supported by findings relating visual masking to genetic variants of the nicotinic α7 receptor (CHRNA7).

  7. Contributions of the 12 neuron classes in the fly lamina to motion vision.

    Science.gov (United States)

    Tuthill, John C; Nern, Aljoscha; Holtz, Stephen L; Rubin, Gerald M; Reiser, Michael B

    2013-07-10

    Motion detection is a fundamental neural computation performed by many sensory systems. In the fly, local motion computation is thought to occur within the first two layers of the visual system, the lamina and medulla. We constructed specific genetic driver lines for each of the 12 neuron classes in the lamina. We then depolarized and hyperpolarized each neuron type and quantified fly behavioral responses to a diverse set of motion stimuli. We found that only a small number of lamina output neurons are essential for motion detection, while most neurons serve to sculpt and enhance these feedforward pathways. Two classes of feedback neurons (C2 and C3), and lamina output neurons (L2 and L4), are required for normal detection of directional motion stimuli. Our results reveal a prominent role for feedback and lateral interactions in motion processing and demonstrate that motion-dependent behaviors rely on contributions from nearly all lamina neuron classes. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Neuromorphic Configurable Architecture for Robust Motion Estimation

    Directory of Open Access Journals (Sweden)

    Guillermo Botella

    2008-01-01

    Full Text Available The robustness of the human visual system recovering motion estimation in almost any visual situation is enviable, performing enormous calculation tasks continuously, robustly, efficiently, and effortlessly. There is obviously a great deal we can learn from our own visual system. Currently, there are several optical flow algorithms, although none of them deals efficiently with noise, illumination changes, second-order motion, occlusions, and so on. The main contribution of this work is the efficient implementation of a biologically inspired motion algorithm that borrows nature templates as inspiration in the design of architectures and makes use of a specific model of human visual motion perception: Multichannel Gradient Model (McGM. This novel customizable architecture of a neuromorphic robust optical flow can be constructed with FPGA or ASIC device using properties of the cortical motion pathway, constituting a useful framework for building future complex bioinspired systems running in real time with high computational complexity. This work includes the resource usage and performance data, and the comparison with actual systems. This hardware has many application fields like object recognition, navigation, or tracking in difficult environments due to its bioinspired and robustness properties.
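
    At its core, the McGM referenced above is a gradient-based optical flow scheme. A drastically simplified, hypothetical 1-D sketch of the gradient constraint such models elaborate (this toy is not the multichannel model itself, and all names and numbers are illustrative):

```python
import numpy as np

def gradient_flow(frame0, frame1, eps=1e-9):
    """Toy 1-D gradient-constraint velocity estimate: solve Ix*v + It = 0 in
    the least-squares sense over the whole window. A drastically simplified
    stand-in for the multichannel gradient model (McGM)."""
    ix = np.gradient(0.5 * (frame0 + frame1))  # spatial derivative (central diff)
    it = frame1 - frame0                       # temporal derivative
    return -np.sum(ix * it) / (np.sum(ix * ix) + eps)

x = np.arange(256)
f0 = np.sin(2 * np.pi * x / 64.0)
f1 = np.sin(2 * np.pi * (x - 2) / 64.0)  # same pattern shifted 2 px rightward
v = gradient_flow(f0, f1)
print(round(v, 1))  # close to the true shift of 2 px/frame
```

    The full McGM replaces these two derivatives with a bank of higher-order spatiotemporal derivative filters and combines their ratios, which is what gives it robustness to noise and illumination change.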

  9. Visual spatial attention enhances the amplitude of positive and negative fMRI responses to visual stimulation in an eccentricity-dependent manner

    Science.gov (United States)

    Bressler, David W.; Fortenbaugh, Francesca C.; Robertson, Lynn C.; Silver, Michael A.

    2013-01-01

    Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas.

  10. Improved motion description for action classification

    NARCIS (Netherlands)

    Jain, M.; Jégou, H.; Bouthemy, P.

    2016-01-01

    Even though the importance of explicitly integrating motion characteristics in video descriptions has been demonstrated by several recent papers on action classification, our current work concludes that adequately decomposing visual motion into dominant and residual motions, i.e., camera and scene

  11. Integration of motion energy from overlapping random background noise increases perceived speed of coherently moving stimuli.

    Science.gov (United States)

    Chuang, Jason; Ausloos, Emily C; Schwebach, Courtney A; Huang, Xin

    2016-12-01

    The perception of visual motion can be profoundly influenced by visual context. To gain insight into how the visual system represents motion speed, we investigated how a background stimulus that did not move in a net direction influenced the perceived speed of a center stimulus. Visual stimuli were two overlapping random-dot patterns. The center stimulus moved coherently in a fixed direction, whereas the background stimulus moved randomly. We found that human subjects perceived the speed of the center stimulus to be significantly faster than its veridical speed when the background contained motion noise. Interestingly, the perceived speed was tuned to the noise level of the background. When the speed of the center stimulus was low, the highest perceived speed was reached when the background had a low level of motion noise. As the center speed increased, the peak perceived speed was reached at a progressively higher background noise level. The effect of speed overestimation required the center stimulus to overlap with the background. Increasing the background size within a certain range enhanced the effect, suggesting spatial integration. The speed overestimation was significantly reduced or abolished when the center and background stimuli had different colors, or when they were placed at different depths. When the center and background stimuli were perceptually separable, speed overestimation was correlated with the perceptual similarity between them. These results suggest that integration of motion energy from random motion noise has a significant impact on speed perception. Our findings put new constraints on models of the neural basis of speed perception. Copyright © 2016 the American Physiological Society.

  12. Enhancing Nuclear Training with 3D Visualization

    International Nuclear Information System (INIS)

    Gagnon, V.; Gagnon, B.

    2016-01-01

    Full text: While the nuclear power industry is trying to reinforce its safety and regain public support post-Fukushima, it is also faced with a very real challenge that affects its day-to-day activities: a rapidly aging workforce. Statistics show that close to 40% of the current nuclear power industry workforce will retire within the next five years. For newcomer countries, the challenge is even greater: they must develop a completely new workforce. The workforce replacement effort brings into the industry a new generation of workers with different backgrounds and affinities. Major lifestyle differences between the two generations of workers result, amongst other things, in different learning habits and needs for this new breed of learners. Interactivity, high visual content and quick access to information are now necessary to achieve a high level of retention. To enhance existing training programmes, or to support the establishment of new training programmes in newcomer countries, L-3 MAPPS has devised learning tools focused on the “Practice-by-Doing” principle. L-3 MAPPS has coupled 3D computer visualization with high-fidelity simulation to bring real-time, simulation-driven animated components and systems, allowing immersive and participatory, individual or classroom learning. (author)

  13. Clinical significance of perceptible fetal motion.

    Science.gov (United States)

    Rayburn, W F

    1980-09-15

    The monitoring of fetal activity during the last trimester of pregnancy has been proposed as useful in assessing fetal welfare. The maternal perception of fetal activity was tested among 82 patients using real-time ultrasonography. All perceived fetal movements were visualized on the scanner and involved motion of the lower limbs. Conversely, 82% of all visualized motions of fetal limbs were perceived by the patients. All combined motions of fetal trunk with limbs were perceived by the patients and described as strong movements, whereas clusters of isolated, weak motions of the fetal limbs were less accurately perceived (56% accuracy). The number of fetal movements perceived during the 15-minute test period was significantly greater when visualized fetal motion was present (44 of 45 cases) than when it was absent (five of 10 cases). These findings reveal that perceived fetal motion is: (1) reliable; (2) related to the strength of lower limb motion; (3) increased with ruptured amniotic membranes; and (4) reassuring if considered to be active.

  14. Demonstrating the potential for dynamic auditory stimulation to contribute to motion sickness.

    Directory of Open Access Journals (Sweden)

    Behrang Keshavarz

    Full Text Available Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only cues, auditory-only cues, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. In all conditions, participants tilted their heads alternately towards the right or left shoulder during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood of motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness or postural steadiness, but it did reduce vection onset times and increase vection strength compared with pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions that included visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation, and two of these six stopped the pure auditory test session because of motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as "auditorily induced motion sickness".

  15. Improved analysis and visualization of friction loop data: unraveling the energy dissipation of meso-scale stick-slip motion

    Science.gov (United States)

    Kokorian, Jaap; Merlijn van Spengen, W.

    2017-11-01

    In this paper we demonstrate a new method for analyzing and visualizing friction force measurements of meso-scale stick-slip motion, and introduce a method for extracting two separate dissipative energy components. Using a microelectromechanical system tribometer, we execute 2 million reciprocating sliding cycles, during which we measure the static friction force at high resolution.

  16. A neural model of the temporal dynamics of figure-ground segregation in motion perception.

    Science.gov (United States)

    Raudies, Florian; Neumann, Heiko

    2010-03-01

    How does the visual system manage to segment a visual scene into surfaces and objects and to attend to a target object? Based on psychological and physiological investigations, it has been proposed that the perceptual organization and segmentation of a scene are achieved by processing at different levels of the visual cortical hierarchy. According to this view, motion onset detection, motion-defined shape segregation, and target selection are accomplished by processes which bind together simple features into fragments of increasingly complex configurations at different levels in the processing hierarchy. As an alternative to this hierarchical processing hypothesis, it has been proposed that the processing stages for feature detection and segregation are reflected in different temporal episodes in the response patterns of individual neurons. Such temporal epochs have been observed in the activation patterns of neurons as early as area V1. Here, we present a neural network model of motion detection, figure-ground segregation and attentive selection which explains these response patterns in a unifying framework. Based on known principles of the functional architecture of the visual cortex, we propose that initial motion and motion boundaries are detected at different, hierarchically organized stages in the dorsal pathway. Visual shapes defined by boundaries generated from juxtaposed opponent motions are represented at different stages in the ventral pathway. Model areas in the different pathways interact through feedforward and modulating feedback, while mutual interactions enable communication between motion and form representations. Selective attention is devoted to shape representations by sending modulating feedback signals from higher levels (working memory) to intermediate levels to enhance their responses. Areas in the motion and form pathways are coupled through top-down feedback with V1 cells at the bottom of the hierarchy.

  17. Enhanced dimension-specific visual working memory in grapheme-color synesthesia.

    Science.gov (United States)

    Terhune, Devin Blair; Wudarczyk, Olga Anna; Kochuparampil, Priya; Cohen Kadosh, Roi

    2013-10-01

    There is emerging evidence that the encoding of visual information and the maintenance of this information in a temporarily accessible state in working memory rely on the same neural mechanisms. A consequence of this overlap is that atypical forms of perception should influence working memory. We examined this by investigating whether grapheme-color synesthesia, a condition characterized by the involuntary experience of color photisms when reading or representing graphemes, confers benefits on working memory. Two competing hypotheses propose that superior memory in synesthesia results from information being coded in two information channels (dual-coding) or from superior dimension-specific visual processing (enhanced processing). We discriminated between these hypotheses in three n-back experiments in which controls and synesthetes viewed inducer and non-inducer graphemes and maintained color or grapheme information in working memory. Synesthetes displayed better color working memory than controls for both grapheme types, whereas the two groups did not differ in grapheme working memory. Further analyses excluded the possibilities that the enhanced working memory among synesthetes was due to greater color discrimination, stimulus color familiarity, or bidirectionality. These results reveal enhanced dimension-specific visual working memory in this population and supply further evidence for a close relationship between sensory processing and the maintenance of sensory information in working memory. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  18. Receptive fields for smooth pursuit eye movements and motion perception.

    Science.gov (United States)

    Debono, Kurt; Schütz, Alexander C; Spering, Miriam; Gegenfurtner, Karl R

    2010-12-01

    Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a different direction than the main motion including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT). Copyright © 2010 Elsevier Ltd. All rights reserved.

  19. Getting more from visual working memory: Retro-cues enhance retrieval and protect from visual interference.

    Science.gov (United States)

    Souza, Alessandra S; Rerko, Laura; Oberauer, Klaus

    2016-06-01

    Visual working memory (VWM) has a limited capacity. This limitation can be mitigated by the use of focused attention: if attention is drawn to the relevant working memory content before test, performance improves (the so-called retro-cue benefit). This study tests 2 explanations of the retro-cue benefit: (a) focused attention protects memory representations from interference by visual input at test, and (b) focusing attention enhances retrieval. Across 6 experiments using color recognition and color reproduction tasks, we varied the amount of color interference at test, and the delay between a retrieval cue (i.e., the retro-cue) and the memory test. Retro-cue benefits were larger when the memory test introduced interfering visual stimuli, showing that the retro-cue effect is due in part to protection from visual interference. However, when visual interference was held constant, retro-cue benefits were still obtained whenever the retro-cue enabled retrieval of an object from VWM but delayed response selection. Our results show that accessible information in VWM might be lost in the process of testing memory, because of visual interference and incomplete retrieval. This is not an inevitable state of affairs, though: focused attention can be used to get the most out of VWM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. Visualization of the collective vortex-like motions in liquid argon and water: Molecular dynamics simulation

    Science.gov (United States)

    Anikeenko, A. V.; Malenkov, G. G.; Naberukhin, Yu. I.

    2018-03-01

    We propose a new measure of collectivity of molecular motion in the liquid: the average vector of displacement of the particles, ⟨ΔR⟩, which initially have been localized within a sphere of radius Rsph and then have executed the diffusive motion during a time interval Δt. The more correlated the motion of the particles is, the longer will be the vector ⟨ΔR⟩. We visualize the picture of collective motions in molecular dynamics (MD) models of liquids by constructing the ⟨ΔR⟩ vectors and pinning them to the sites of the uniform grid which divides each of the edges of the model box into equal parts. MD models of liquid argon and water have been studied by this method. Qualitatively, the patterns of ⟨ΔR⟩ vectors are similar for these two liquids but differ in minor details. The most important result of our research is the revealing of the aggregates of ⟨ΔR⟩ vectors which have the form of extended flows which sometimes look like the parts of vortices. These vortex-like clusters of ⟨ΔR⟩ vectors have the mesoscopic size (of the order of 10 nm) and persist for tens of picoseconds. Dependence of the ⟨ΔR⟩ vector field on parameters Rsph, Δt, and on the model size has been investigated. This field in the models of liquids differs essentially from that in a random-walk model.
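
    The collectivity measure defined above is straightforward to compute from two MD snapshots. A minimal sketch with synthetic data (the box size, drift, and noise scale are invented for illustration and are not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_displacement(pos0, pos1, center, r_sph):
    """<dR>: average displacement vector over dt of the particles that
    initially lay inside a sphere of radius r_sph around `center`."""
    inside = np.linalg.norm(pos0 - center, axis=1) <= r_sph
    return (pos1[inside] - pos0[inside]).mean(axis=0)

# toy "MD" data: 5000 particles in a 10x10x10 box sharing a weak
# collective drift of (0.5, 0, 0) on top of random thermal jitter
pos0 = rng.uniform(0.0, 10.0, size=(5000, 3))
pos1 = pos0 + np.array([0.5, 0.0, 0.0]) + rng.normal(0.0, 0.1, size=(5000, 3))
dR = mean_displacement(pos0, pos1, center=np.array([5.0, 5.0, 5.0]), r_sph=3.0)
```

    Random thermal motion averages out in the mean, so the length of dR reflects only the correlated (collective) component, which is exactly why the authors use it to reveal vortex-like flows; pinning such vectors to a uniform grid of sphere centers reproduces their visualization.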

  1. Visual motion imagery neurofeedback based on the hMT+/V5 complex: evidence for a feedback-specific neural circuit involving neocortical and cerebellar regions

    Science.gov (United States)

    Banca, Paula; Sousa, Teresa; Catarina Duarte, Isabel; Castelo-Branco, Miguel

    2015-12-01

    Objective. Current approaches in neurofeedback/brain-computer interface research often focus on identifying, on a subject-by-subject basis, the neural regions that are best suited for self-driven modulation. It is known that the hMT+/V5 complex, an early visual cortical region, is recruited during explicit and implicit motion imagery, in addition to real motion perception. This study tests the feasibility of training healthy volunteers to regulate the level of activation in their hMT+/V5 complex using real-time fMRI neurofeedback and visual motion imagery strategies. Approach. We functionally localized the hMT+/V5 complex for later use as the neurofeedback target region. A uniform strategy based on motion imagery was used to guide subjects to neuromodulate hMT+/V5. Main results. We found that 15/20 participants achieved successful neurofeedback. This modulation led to the recruitment of a specific network, as further assessed by psychophysiological interaction analysis. This specific circuit, including hMT+/V5, putative V6 and medial cerebellum, was activated during successful neurofeedback runs. The putamen and anterior insula were recruited for both successful and non-successful runs. Significance. Our findings indicate that hMT+/V5 is a region that can be modulated by focused imagery and that a specific cortico-cerebellar circuit is recruited during visual motion imagery leading to successful neurofeedback. These findings contribute to the debate on the relative potential of extrinsic (sensory) versus intrinsic (default-mode) brain regions in the clinical application of neurofeedback paradigms. This novel circuit might be a good target for future neurofeedback approaches that aim, for example, at training focused attention in disorders such as ADHD.

  2. Motion-guided attention promotes adaptive communications during social navigation.

    Science.gov (United States)

    Lemasson, B H; Anderson, J J; Goodwin, R A

    2013-03-07

    Animals are capable of enhanced decision making through cooperation, whereby accurate decisions can occur quickly through decentralized consensus. These interactions often depend upon reliable social cues, which can result in highly coordinated activities in uncertain environments. Yet information within a crowd may be lost in translation, generating confusion and enhancing individual risk. As quantitative data detailing animal social interactions accumulate, the mechanisms enabling individuals to rapidly and accurately process competing social cues remain unresolved. Here, we model how motion-guided attention influences the exchange of visual information during social navigation. We also compare the performance of this mechanism to the hypothesis that robust social coordination requires individuals to numerically limit their attention to a set of n-nearest neighbours. While we find that such numerically limited attention does not generate robust social navigation across ecological contexts, several notable qualities arise from selective attention to motion cues. First, individuals can instantly become a local information hub when startled into action, without requiring changes in neighbour attention level. Second, individuals can circumvent speed-accuracy trade-offs by tuning their motion thresholds. In turn, these properties enable groups to collectively dampen or amplify social information. Lastly, the minority required to sway a group's short-term directional decisions can change substantially with social context. Our findings suggest that motion-guided attention is a fundamental and efficient mechanism underlying collaborative decision making during social navigation.

  3. The efficacy of airflow and seat vibration on reducing visually induced motion sickness.

    Science.gov (United States)

    D'Amour, Sarah; Bos, Jelte E; Keshavarz, Behrang

    2017-09-01

    Visually induced motion sickness (VIMS) is a well-known sensation in virtual environments and simulators, typically characterized by symptoms such as pallor, sweating, dizziness, fatigue, and/or nausea. Numerous methods to reduce VIMS have been introduced previously; however, a reliable countermeasure is still missing. In the present study, the effectiveness of airflow and seat vibration in alleviating VIMS was investigated. Eighty-two participants were randomly assigned to one of four groups (airflow, vibration, combined airflow and vibration, control) and then exposed to a 15 min long video of a bicycle ride shot from the first-person view. VIMS was measured using the Fast Motion Sickness Scale (FMS) and the Simulator Sickness Questionnaire (SSQ). Results showed that exposure to airflow significantly reduced VIMS, whereas the presence of seat vibration did not have an impact on VIMS. Additionally, we found that females reported higher FMS scores than males; however, this sex difference was not found in the SSQ scores. Our findings demonstrate that airflow can be an effective and easy-to-apply technique to reduce VIMS in virtual environments and simulators, while vibration applied to the seat is not a successful method.

  4. Modeling a space-variant cortical representation for apparent motion.

    Science.gov (United States)

    Wurbs, Jeremy; Mingolla, Ennio; Yazdanbakhsh, Arash

    2013-08-06

    Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements while the periphery is suited for fast, coarse movements. In either the fovea or periphery discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as an apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitative linear increase of Dmax data without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.

  5. Effects of regular aerobic exercise on visual perceptual learning.

    Science.gov (United States)

    Connell, Charlotte J W; Thompson, Benjamin; Green, Hayden; Sullivan, Rachel K; Gant, Nicholas

    2017-12-02

    This study investigated the influence of five days of moderate intensity aerobic exercise on the acquisition and consolidation of visual perceptual learning using a motion direction discrimination (MDD) task. The timing of exercise relative to learning was manipulated by administering exercise either before or after perceptual training. Within a matched-subjects design, twenty-seven healthy participants (n = 9 per group) completed five consecutive days of perceptual training on a MDD task under one of three interventions: no exercise, exercise before the MDD task, or exercise after the MDD task. MDD task accuracy improved in all groups over the five-day period, but there was a trend for impaired learning when exercise was performed before visual perceptual training. MDD task accuracy (mean ± SD) increased in exercise before by 4.5 ± 6.5%; exercise after by 11.8 ± 6.4%; and no exercise by 11.3 ± 7.2%. All intervention groups displayed similar MDD threshold reductions for the trained and untrained motion axes after training. These findings suggest that moderate daily exercise does not enhance the rate of visual perceptual learning for an MDD task or the transfer of learning to an untrained motion axis. Furthermore, exercise performed immediately prior to a visual perceptual learning task may impair learning. Further research with larger groups is required in order to better understand these effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. A multistage motion vector processing method for motion-compensated frame interpolation.

    Science.gov (United States)

    Huang, Ai- Mei; Nguyen, Truong Q

    2008-05-01

    In this paper, a novel, low-complexity motion vector processing algorithm at the decoder is proposed for motion-compensated frame interpolation or frame rate up-conversion. We address the problems of broken edges and deformed structures in an interpolated frame by hierarchically refining motion vectors on different block sizes. Our method explicitly considers the reliability of each received motion vector and has the capability of preserving structure information. This is achieved by analyzing the distribution of residual energies and effectively merging blocks that have unreliable motion vectors. The motion vector reliability information is also used as prior knowledge in motion vector refinement using a constrained vector median filter, to avoid choosing an equally unreliable vector. We also propose using chrominance information in our method. Experimental results show that the proposed scheme has better visual quality and is also robust, even in video sequences with complex scenes and fast motion.
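
    The vector median operation at the heart of such filters can be sketched as follows. This is the standard, unconstrained vector median; the paper's constrained variant additionally restricts the candidate set using the reliability information, which is omitted here.

```python
import numpy as np

def vector_median(candidates):
    """Vector median filter core: return the candidate motion vector that
    minimizes the summed Euclidean distance to all candidates."""
    c = np.asarray(candidates, dtype=float)
    # pairwise distance matrix, summed per candidate
    cost = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2).sum(axis=1)
    return c[np.argmin(cost)]

# a wildly inconsistent vector among consistent neighbours is rejected
mvs = [(2, 1), (2, 1), (3, 1), (2, 2), (15, -9)]
mv = vector_median(mvs)   # the outlier (15, -9) cannot win
```

    Unlike a componentwise median, the vector median always outputs one of the input vectors, which is why it preserves genuine motion candidates instead of fabricating new ones.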

  7. Enhanced visual statistical learning in adults with autism

    Science.gov (United States)

    Roser, Matthew E.; Aslin, Richard N.; McKenzie, Rebecca; Zahra, Daniel; Fiser, József

    2014-01-01

    Individuals with autism spectrum disorder (ASD) are often characterized as having social engagement and language deficiencies, but a sparing of visuo-spatial processing and short-term memory, with some evidence of supra-normal levels of performance in these domains. The present study expanded on this evidence by investigating the observational learning of visuospatial concepts from patterns of covariation across multiple exemplars. Child and adult participants with ASD, and age-matched control participants, viewed multi-shape arrays composed from a random combination of pairs of shapes that were each positioned in a fixed spatial arrangement. After this passive exposure phase, a post-test revealed that all participant groups could discriminate pairs of shapes with high covariation from randomly paired shapes with low covariation. Moreover, learning of these shape-pairs with high covariation was superior in adults with ASD relative to age-matched controls, while performance in children with ASD was no different from controls. These results extend previous observations of visuospatial enhancement in ASD into the domain of learning, and suggest that enhanced visual statistical learning may have arisen from a sustained bias to attend to local details in complex arrays of visual features. PMID:25151115

  8. Tracking without perceiving: a dissociation between eye movements and motion perception.

    Science.gov (United States)

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-02-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.

  9. Motion tracking-enhanced MART for tomographic PIV

    International Nuclear Information System (INIS)

    Novara, Matteo; Scarano, Fulvio; Batenburg, Kees Joost

    2010-01-01

    A novel technique is presented that increases the accuracy of multiplicative algebraic reconstruction technique (MART) reconstruction from tomographic particle image velocimetry (PIV) recordings at higher seeding densities than currently possible. The motion tracking enhancement (MTE) method is based on the combined utilization of images from two or more exposures to enhance the reconstruction of individual intensity fields. The working principle is first introduced qualitatively, followed by the mathematical background explaining how the MART reconstruction can be improved using an improved first-guess object, obtained by combining non-simultaneous views reduced to the same time instant by deforming the 3D objects with an estimate of the particle motion field. The performance of MTE is quantitatively evaluated by numerical simulation of the imaging, reconstruction and image correlation processes. The cases of two or more exposures obtained from time-resolved experiments are considered. The iterative application of MTE appears to significantly improve the reconstruction quality, first by decreasing the intensity of the ghost images and second by increasing the intensity and the reconstruction precision for the actual particles. Based on computer simulations, the maximum imaged seeding density that can be dealt with is tripled with respect to MART analysis applied to a single exposure. The analysis also illustrates that the maximum effect of the MTE method is comparable to that of doubling the number of cameras in the tomographic system. Experiments performed on a transitional jet at Re = 5000 apply the MTE method to double-frame recordings. The velocity measurement precision is increased for a system with fewer views (two or three cameras compared with four cameras). The ghost particles' intensity is also visibly reduced, although to a lesser extent than in the computer simulations.
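
    For readers unfamiliar with MART itself, the multiplicative update that MTE seeds with a better first guess can be sketched on a toy system (the weight matrix, projections, and relaxation factor below are invented; real tomographic PIV systems are vastly larger and use weighted line integrals through the volume):

```python
import numpy as np

def mart(W, p, n_iter=50, mu=0.5):
    """Plain MART: per-ray multiplicative updates of the voxel field f.
    W is the (rays x voxels) weight matrix, p the measured projections;
    positivity of f is preserved by construction."""
    f = np.ones(W.shape[1])                 # uniform first guess
    for _ in range(n_iter):
        for i in range(len(p)):
            proj = W[i] @ f                 # current projection along ray i
            if proj > 0:
                f *= (p[i] / proj) ** (mu * W[i])
    return f

# toy 2x2 "volume" probed by its two row sums and two column sums
W = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
p = np.array([3.0, 7.0, 4.0, 6.0])
f = mart(W, p)                              # consistent with all projections
```

    MTE's contribution, in these terms, is to replace the uniform first guess with one assembled from motion-deformed neighbouring exposures, which suppresses ghost particles that plain MART cannot distinguish from real ones.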

  10. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction.

    Science.gov (United States)

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. Recently, a deep neural network based on predictive coding, a video prediction machine called PredNet, was reported. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not physically moving, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures in which people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by predictive coding theory is one basis of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research.

  11. The Posture of Putting One's Palms Together Modulates Visual Motion Event Perception.

    Science.gov (United States)

    Saito, Godai; Gyoba, Jiro

    2018-02-01

    We investigated the effect of an observer's hand postures on visual motion perception using the stream/bounce display. When two identical visual objects move across collinear horizontal trajectories toward each other in a two-dimensional display, observers perceive them as either streaming or bouncing. In our previous study, we found that when observers put their palms together just below the coincidence point of the two objects, the percentage of bouncing responses increased, mainly depending on the proprioceptive information from their own hands. However, it remains unclear if the tactile or haptic (force) information produced by the postures mostly influences the stream/bounce perception. We solved this problem by changing the tactile and haptic information on the palms of the hands. Experiment 1 showed that the promotion of bouncing perception was observed only when the posture of directly putting one's palms together was used, while there was no effect when a brick was sandwiched between the participant's palms. Experiment 2 demonstrated that the strength of force used when putting the palms together had no effect on increasing bounce perception. Our findings indicate that the hands-induced bounce effect derives from the tactile information produced by the direct contact between both palms.

  12. Contrast and assimilation in motion perception and smooth pursuit eye movements.

    Science.gov (United States)

    Spering, Miriam; Gegenfurtner, Karl R

    2007-09-01

    The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently perturbed simultaneously to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.
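
The two computations the abstract contrasts can be written down directly. The linear forms and equal weighting below are simplifying assumptions for illustration, not the model fitted in the study.

```python
# Illustrative contrast vs. assimilation computations (velocities in
# deg/s; full subtraction and equal averaging are assumptions).
def perceived_velocity(target_v, context_v):
    """Motion contrast: context motion is subtracted from target motion."""
    return target_v - context_v

def pursuit_velocity(target_v, context_v):
    """Motion assimilation: pursuit follows the average of both motions."""
    return (target_v + context_v) / 2.0

# A context drifting with the target lowers perceived target speed
# (contrast) while pulling the pursuit estimate toward the average:
perceived = perceived_velocity(10.0, 2.0)
pursuit = pursuit_velocity(10.0, 2.0)
```

The same pair of motion signals thus yields different outputs for perception and for oculomotor control, which is the paper's central dissociation.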

  13. Electrophysiological evidence of altered visual processing in adults who experienced visual deprivation during infancy.

    Science.gov (United States)

    Segalowitz, Sidney J; Sternin, Avital; Lewis, Terri L; Dywan, Jane; Maurer, Daphne

    2017-04-01

    We examined the role of early visual input in visual system development by testing adults who had been born with dense bilateral cataracts that blocked all patterned visual input during infancy until the cataractous lenses were removed surgically and the eyes fitted with compensatory contact lenses. Patients viewed checkerboards and textures to explore early processing regions (V1, V2), Glass patterns to examine global form processing (V4), and moving stimuli to explore global motion processing (V5). Patients' ERPs differed from those of controls in that (1) the V1 component was much smaller for all but the simplest stimuli and (2) extrastriate components did not differentiate amongst texture stimuli, Glass patterns, or motion stimuli. The results indicate that early visual deprivation contributes to permanent abnormalities at early and mid levels of visual processing, consistent with enduring behavioral deficits in the ability to process complex textures, global form, and global motion. © 2017 Wiley Periodicals, Inc.

  14. Adaptive order search and tangent-weighted trade-off for motion estimation in H.264

    Directory of Open Access Journals (Sweden)

    Srinivas Bachu

    2018-04-01

    Full Text Available Motion estimation and compensation play a major role in video compression to reduce the temporal redundancies of the input videos. A variety of block search patterns have been developed for matching the blocks with reduced computational complexity, without affecting the visual quality. In this paper, block motion estimation is achieved through integrating the square as well as the hexagonal search patterns with adaptive order. The proposed algorithm is called AOSH (Adaptive Order Square Hexagonal Search), and it finds the best matching block with a reduced number of search points. The searching function is formulated as a trade-off criterion here. Hence, the tangent-weighted function is newly developed to evaluate the matching point. The proposed AOSH search algorithm and the tangent-weighted trade-off criterion are effectively applied to the block estimation process to enhance the visual quality and the compression performance. The proposed method is validated using three videos namely, football, garden and tennis. The quantitative performance of the proposed method and the existing methods is analysed using the Structural Similarity Index (SSIM) and the Peak Signal to Noise Ratio (PSNR). The results prove that the proposed method offers better visual quality than the existing methods. Keywords: Block motion estimation, Square search, Hexagon search, H.264, Video coding
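
At the core of any block-matching search, including pattern-based methods like AOSH, is a block cost such as the sum of absolute differences (SAD) minimized over candidate displacements. The sketch below uses an exhaustive window search for clarity; the square/hexagon patterns and the tangent-weighted trade-off of the paper are not reproduced, and all names and frame values are illustrative.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def get_block(frame, top, left, size):
    return [row[left:left + size] for row in frame[top:top + size]]

def full_search(cur_frame, ref_frame, top, left, size, radius):
    """Exhaustive search over a (2*radius+1)^2 window; search-pattern
    algorithms (square, hexagon, AOSH) visit far fewer candidates."""
    target = get_block(cur_frame, top, left, size)
    best_vec, best_cost = None, float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ty, tx = top + dy, left + dx
            if ty < 0 or tx < 0 or ty + size > len(ref_frame) \
                    or tx + size > len(ref_frame[0]):
                continue
            cost = sad(target, get_block(ref_frame, ty, tx, size))
            if cost < best_cost:
                best_vec, best_cost = (dy, dx), cost
    return best_vec, best_cost

# Toy frames: a 4x4 patch moves from (4, 4) in the reference frame to
# (6, 5) in the current frame, so the motion vector points back (-2, -1).
ref = [[0] * 16 for _ in range(16)]
cur = [[0] * 16 for _ in range(16)]
for i in range(4):
    for j in range(4):
        ref[4 + i][4 + j] = 10 + i + j
        cur[6 + i][5 + j] = 10 + i + j
mv, cost = full_search(cur, ref, 6, 5, 4, 3)
```

Pattern searches trade the guarantee of the exhaustive minimum for far fewer SAD evaluations, which is why the choice and ordering of patterns (as in AOSH) matters for both speed and quality.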

  15. Impaired Velocity Processing Reveals an Agnosia for Motion in Depth.

    Science.gov (United States)

    Barendregt, Martijn; Dumoulin, Serge O; Rokers, Bas

    2016-11-01

    Many individuals with normal visual acuity are unable to discriminate the direction of 3-D motion in a portion of their visual field, a deficit previously referred to as a stereomotion scotoma. The origin of this visual deficit has remained unclear. We hypothesized that the impairment is due to a failure in the processing of one of the two binocular cues to motion in depth: changes in binocular disparity over time or interocular velocity differences. We isolated the contributions of these two cues and found that sensitivity to interocular velocity differences, but not changes in binocular disparity, varied systematically with observers' ability to judge motion direction. We therefore conclude that the inability to interpret motion in depth is due to a failure in the neural mechanisms that combine velocity signals from the two eyes. Given these results, we argue that the deficit should be considered a prevalent but previously unrecognized agnosia specific to the perception of visual motion. © The Author(s) 2016.

  16. Modulation of neuronal responses during covert search for visual feature conjunctions.

    Science.gov (United States)

    Buracas, Giedrius T; Albright, Thomas D

    2009-09-29

    While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for the motion cue but also for the color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions.

  17. Similar effects of feature-based attention on motion perception and pursuit eye movements at different levels of awareness.

    Science.gov (United States)

    Spering, Miriam; Carrasco, Marisa

    2012-05-30

    Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth-pursuit eye movements in response to moving dichoptic plaids--stimuli composed of two orthogonally drifting gratings, presented separately to each eye--in human observers. Monocular adaptation to one grating before the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating's motion direction or to both (neutral condition). We show that observers were better at detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating's motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted toward the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it.

  18. Enhancement and suppression in the visual field under perceptual load.

    Science.gov (United States)

    Parks, Nathan A; Beck, Diane M; Kramer, Arthur F

    2013-01-01

    The perceptual load theory of attention proposes that the degree to which visual distractors are processed is a function of the attentional demands of a task: greater demands increase filtering of irrelevant distractors. The spatial configuration of such filtering is unknown. Here, we used steady-state visual evoked potentials (SSVEPs) in conjunction with time-domain event-related potentials (ERPs) to investigate the distribution of load-induced distractor suppression and task-relevant enhancement in the visual field. Electroencephalogram (EEG) was recorded while subjects performed a foveal go/no-go task that varied in perceptual load. Load-dependent distractor suppression was assessed by presenting a contrast reversing ring at one of three eccentricities (2, 6, or 11°) during performance of the go/no-go task. Rings contrast reversed at 8.3 Hz, allowing load-dependent changes in distractor processing to be tracked in the frequency-domain. ERPs were calculated to the onset of stimuli in the load task to examine load-dependent modulation of task-relevant processing. Results showed that the amplitude of the distractor SSVEP (8.3 Hz) was attenuated under high perceptual load (relative to low load) at the most proximal (2°) eccentricity but not at more eccentric locations (6 or 11°). Task-relevant ERPs revealed a significant increase in N1 amplitude under high load. These results are consistent with a center-surround configuration of load-induced enhancement and suppression in the visual field.
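
Frequency-tagging analyses of this kind come down to estimating response amplitude at the tag frequency. A minimal sketch, assuming a clean synthetic signal rather than the study's EEG, projects the signal onto a sine/cosine pair at 8.3 Hz (a single-bin discrete Fourier transform); all amplitudes below are made-up values.

```python
import math

def amplitude_at(signal, fs, freq):
    """Amplitude of the `freq` Hz component (single-bin DFT)."""
    n = len(signal)
    c = sum(x * math.cos(2 * math.pi * freq * i / fs)
            for i, x in enumerate(signal))
    s = sum(x * math.sin(2 * math.pi * freq * i / fs)
            for i, x in enumerate(signal))
    return 2.0 * math.hypot(c, s) / n

fs = 250.0                                  # sampling rate, Hz (assumed)
t = [i / fs for i in range(int(10 * fs))]   # 10 s window = 83 full cycles
low_load = [1.0 * math.sin(2 * math.pi * 8.3 * x) for x in t]
high_load = [0.6 * math.sin(2 * math.pi * 8.3 * x) for x in t]  # attenuated
```

Comparing `amplitude_at(high_load, fs, 8.3)` against `amplitude_at(low_load, fs, 8.3)` mirrors the load-dependent SSVEP attenuation the study reports; real analyses would average over trials and choose the window so the tag frequency falls on an exact frequency bin, as the 10 s window does here.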

  19. Enhancement and Suppression in the Visual Field under Perceptual Load

    Directory of Open Access Journals (Sweden)

    Nathan A Parks

    2013-05-01

    Full Text Available The perceptual load theory of attention proposes that the degree to which visual distractors are processed is a function of the attentional demands of a task – greater demands increase filtering of irrelevant distractors. The spatial configuration of such filtering is unknown. Here, we used steady-state visual evoked potentials (SSVEPs) in conjunction with time-domain event-related potentials (ERPs) to investigate the distribution of load-induced distractor suppression and task-relevant enhancement in the visual field. Electroencephalogram (EEG) was recorded while subjects performed a foveal go/no-go task that varied in perceptual load. Load-dependent distractor suppression was assessed by presenting a contrast reversing ring at one of three eccentricities (2°, 6°, or 11°) during performance of the go/no-go task. Rings contrast reversed at 8.3 Hz, allowing load-dependent changes in distractor processing to be tracked in the frequency-domain. ERPs were calculated to the onset of stimuli in the load task to examine load-dependent modulation of task-relevant processing. Results showed that the amplitude of the distractor SSVEP (8.3 Hz) was attenuated under high perceptual load (relative to low load) at the most proximal (2°) eccentricity but not at more eccentric locations (6° or 11°). Task-relevant ERPs revealed a significant increase in N1 amplitude under high load. These results are consistent with a center-surround configuration of load-induced enhancement and suppression in the visual field.

  20. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction

    Directory of Open Access Journals (Sweden)

    Eiji Watanabe

    2018-03-01

    Full Text Available The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research.

  1. Attention and apparent motion.

    Science.gov (United States)

    Horowitz, T; Treisman, A

    1994-01-01

    Two dissociations between short- and long-range motion in visual search are reported. Previous research has shown parallel processing for short-range motion and apparently serial processing for long-range motion. This finding has been replicated and it has also been found that search for short-range targets can be impaired both by using bicontrast stimuli, and by prior adaptation to the target direction of motion. Neither factor impaired search in long-range motion displays. Adaptation actually facilitated search with long-range displays, which is attributed to response-level effects. A feature-integration account of apparent motion is proposed. In this theory, short-range motion depends on specialized motion feature detectors operating in parallel across the display, but subject to selective adaptation, whereas attention is needed to link successive elements when they appear at greater separations, or across opposite contrasts.

  2. Oswestry Disability Index is a better indicator of lumbar motion than the Visual Analogue Scale.

    Science.gov (United States)

    Ruiz, Ferrin K; Bohl, Daniel D; Webb, Matthew L; Russo, Glenn S; Grauer, Jonathan N

    2014-09-01

    Lumbar pathology is often associated with axial pain or neurologic complaints. It is often presumed that such pain is associated with decreased lumbar motion; however, this correlation is not well established. The utility of various outcome measures that are used in both research and clinical practice has been studied, but the connection with range of motion (ROM) has not been well documented. The current study was performed to assess objectively the postulated correlation of lumbar complaints (based on standardized outcome measures) with extremes of lumbar ROM and functional ROM (fROM) with activities of daily living (ADLs) as assessed with an electrogoniometer. This study was a clinical cohort study. Subjects slated to undergo a lumbar intervention (injection, decompression, and/or fusion) were enrolled voluntarily in the study. The two outcome measures used in the study were the Visual Analogue Scale (VAS) for axial extremity, lower extremity, and combined axial and lower extremity, as well as the Oswestry Disability Index (ODI). Pain and disability scores were assessed with the VAS score and ODI. A previously validated electrogoniometer was used to measure ROM (extremes of motion in three planes) and fROM (functional motion during 15 simulated activities of daily living). Pain and disability scores were analyzed for statistically significant association with the motion assessments using linear regression analyses. Twenty-eight men and 39 women were enrolled, with an average age of 55.6 years (range, 18-79 years). The ODI and VAS were associated positively (p<.001). Combined axial and lower extremity VAS scores were associated with lateral and rotational ROM (p<.05), but not with flexion/extension or any fROM. Similar findings were noted for separately analyzed axial and lower extremity VAS scores. On the other hand, the ODI correlated inversely with ROM in all planes, and fROM in at least one plane for 10 of 15 ADLs (p<.05). Extremes of lumbar motion and

  3. Ambiguity in Tactile Apparent Motion Perception.

    Directory of Open Access Journals (Sweden)

    Emanuela Liaci

    Full Text Available In von Schiller's Stroboscopic Alternative Motion (SAM) stimulus, two visually presented diagonal dot pairs, located on the corners of an imaginary rectangle, alternate with each other and induce either horizontal, vertical or, rarely, rotational motion percepts. SAM motion perception can be described by a psychometric function of the dot aspect ratio ("AR", i.e. the relation between vertical and horizontal dot distances). Further, with equal horizontal and vertical dot distances (AR = 1) perception is biased towards vertical motion. In a series of five experiments, we presented tactile SAM versions and studied the role of AR and of different reference frames for the perception of tactile apparent motion.We presented tactile SAM stimuli and varied the ARs, while participants reported the perceived motion directions. Pairs of vibration stimulators were attached to the participants' forearms and stimulator distances were varied within and between forearms. We compared straight and rotated forearm conditions with each other in order to disentangle the roles of exogenous and endogenous reference frames.Increasing the tactile SAM's AR biased perception towards vertical motion, but the effect was weak compared to the visual modality. We found no horizontal disambiguation, even for very small tactile ARs. A forearm rotation by 90° kept the vertical bias, even though it was now coupled with small ARs. A 45° rotation condition with crossed forearms, however, evoked a strong horizontal motion bias.Existing approaches to explain the visual SAM bias fail to explain the current tactile results. Particularly puzzling is the strong horizontal bias in the crossed-forearm conditions. In the case of tactile apparent motion, there seems to be no fixed priority rule for perceptual disambiguation. Rather the weighting of available evidence seems to depend on the degree of stimulus ambiguity, the current situation and on the perceptual strategy of the individual.

  4. Tuning self-motion perception in virtual reality with visual illusions.

    Science.gov (United States)

    Bruder, Gerd; Steinicke, Frank; Wieland, Phil; Lappe, Markus

    2012-07-01

    Motion perception in immersive virtual environments significantly differs from the real world. For example, previous work has shown that users tend to underestimate travel distances in virtual environments (VEs). As a solution to this problem, researchers proposed to scale the mapped virtual camera motion relative to the tracked real-world movement of a user until real and virtual motion are perceived as equal, i.e., real-world movements could be mapped with a larger gain to the VE in order to compensate for the underestimation. However, introducing discrepancies between real and virtual motion can become a problem, in particular, due to misalignments of both worlds and distorted space cognition. In this paper, we describe a different approach that introduces apparent self-motion illusions by manipulating optic flow fields during movements in VEs. These manipulations can affect self-motion perception in VEs, but omit a quantitative discrepancy between real and virtual motions. In particular, we consider to which regions of the virtual view these apparent self-motion illusions can be applied, i.e., the ground plane or peripheral vision. Therefore, we introduce four illusions and show in experiments that optic flow manipulation can significantly affect users' self-motion judgments. Furthermore, we show that with such manipulations of optic flow fields the underestimation of travel distances can be compensated.

  5. Direct Visualization of Valence Electron Motion Using Strong-Field Photoelectron Holography

    Science.gov (United States)

    He, Mingrui; Li, Yang; Zhou, Yueming; Li, Min; Cao, Wei; Lu, Peixiang

    2018-03-01

    Watching the valence electron move in molecules on its intrinsic timescale has been one of the central goals of attosecond science and it requires measurements with subatomic spatial and attosecond temporal resolutions. The time-resolved photoelectron holography in strong-field tunneling ionization holds the promise to access this realm. However, it has hitherto remained a challenging task. Here we reveal how the information of valence electron motion is encoded in the hologram of the photoelectron momentum distribution (PEMD) and develop a novel retrieval approach. As a demonstration, applying it to the PEMDs obtained by solving the time-dependent Schrödinger equation for the prototypical molecule H2+ , the attosecond charge migration is directly visualized with picometer spatial and attosecond temporal resolutions. Our method represents a general approach for monitoring attosecond charge migration in more complex polyatomic and biological molecules, which is one of the central tasks in the newly emerging attosecond chemistry.

  6. Attention directed by expectations enhances receptive fields in cortical area MT.

    Science.gov (United States)

    Ghose, Geoffrey M; Bearl, David W

    2010-02-22

    Expectations, especially those formed on the basis of extensive training, can substantially enhance visual performance. However, it is not clear that the physiological mechanisms underlying this enhancement are identical to those examined by experiments in which attention is directed by explicit instructions rather than strong expectations. To study the changes in visual representations associated with strong expectations, we trained animals to detect a brief motion pulse that was embedded in noise. Because the nature of the pulse and the statistics of its appearance were well known to the animals, they formed strong expectations which determined their behavioral performance. We used white-noise methods to infer the receptive field structure of single neurons in area MT while they were performing this task. Incorporating non-linearities, we compared receptive fields during periods of time when the animals were expecting the motion pulse with periods of time when they were not. We found receptive field changes consistent with an increased reliability in signaling pulse occurrence. Moreover, these changes were not consistent with a simple gain modulation. The results suggest that strong expectations can create very specific changes in the visual representations at a cellular level to enhance performance. Copyright 2009 Elsevier Ltd. All rights reserved.

  7. Motion simulator with exchangeable unit

    NARCIS (Netherlands)

    Mulder, J.A.; Beukers, A.; Baarspul, M.; Van Tooren, M.J.; De Winter, S.E.E.

    2001-01-01

    A motion simulator provided with a movable housing, preferably carried by a number of length-adjustable legs, in which housing projection means are arranged for visual information supply, while in the housing a control environment of a motion apparatus to be simulated is situated, the control

  8. Extrapolation of vertical target motion through a brief visual occlusion.

    Science.gov (United States)

    Zago, Myrka; Iosa, Marco; Maffei, Vincenzo; Lacquaniti, Francesco

    2010-03-01

    It is known that arbitrary target accelerations along the horizontal generally are extrapolated much less accurately than target speed through a visual occlusion. The extent to which vertical accelerations can be extrapolated through an occlusion is much less understood. Here, we presented a virtual target rapidly descending on a blank screen with different motion laws. The target accelerated under gravity (1g), decelerated under reversed gravity (-1g), or moved at constant speed (0g). Probability of each type of acceleration differed across experiments: one acceleration at a time, or two to three different accelerations randomly intermingled could be presented. After a given viewing period, the target disappeared for a brief, variable period until arrival (occluded trials) or it remained visible throughout (visible trials). Subjects were asked to press a button when the target arrived at destination. We found that, in visible trials, the average performance with 1g targets could be better or worse than that with 0g targets depending on the acceleration probability, and both were always superior to the performance with -1g targets. By contrast, the average performance with 1g targets was always superior to that with 0g and -1g targets in occluded trials. Moreover, the response times of 1g trials tended to approach the ideal value with practice in occluded protocols. To gain insight into the mechanisms of extrapolation, we modeled the response timing based on different types of threshold models. We found that occlusion was accompanied by an adaptation of model parameters (threshold time and central processing time) in a direction that suggests a strategy oriented to the interception of 1g targets at the expense of the interception of the other types of tested targets. We argue that the prediction of occluded vertical motion may incorporate an expectation of gravity effects.

  9. Free-breathing motion-corrected late-gadolinium-enhancement imaging improves image quality in children

    International Nuclear Information System (INIS)

    Olivieri, Laura; O'Brien, Kendall J.; Cross, Russell; Xue, Hui; Kellman, Peter; Hansen, Michael S.

    2016-01-01

    The value of late-gadolinium-enhancement (LGE) imaging in the diagnosis and management of pediatric and congenital heart disease is clear; however current acquisition techniques are susceptible to error and artifacts when performed in children because of children's higher heart rates, higher prevalence of sinus arrhythmia, and inability to breath-hold. Commonly used techniques in pediatric LGE imaging include breath-held segmented FLASH (segFLASH) and steady-state free precession-based (segSSFP) imaging. More recently, single-shot SSFP techniques with respiratory motion-corrected averaging have emerged. This study tested and compared single-shot free-breathing LGE techniques with standard segmented breath-held techniques in children undergoing LGE imaging. Thirty-two consecutive children underwent clinically indicated late-enhancement imaging using intravenous gadobutrol 0.15 mmol/kg. Breath-held segSSFP, breath-held segFLASH, and free-breathing single-shot SSFP LGE sequences were performed in consecutive series in each child. Two blinded reviewers evaluated the quality of the images and rated them on a scale of 1-5 (1 = poor, 5 = superior) based on blood pool-myocardial definition, presence of cardiac motion, presence of respiratory motion artifacts, and image acquisition artifact. We used analysis of variance (ANOVA) to compare groups. Patients ranged in age from 9 months to 18 years, with a mean +/- standard deviation (SD) of 13.3 +/- 4.8 years. R-R interval at the time of acquisition ranged 366-1,265 milliseconds (ms) (47-164 beats per minute [bpm]), mean +/- SD of 843+/-231 ms (72+/-21 bpm). Mean +/- SD quality ratings for long-axis imaging for segFLASH, segSSFP and single-shot SSFP were 3.1+/-0.9, 3.4+/-0.9 and 4.0+/-0.9, respectively (P < 0.01 by ANOVA). Mean +/- SD quality ratings for short-axis imaging for segFLASH, segSSFP and single-shot SSFP were 3.4+/-1, 3.8+/-0.9 and 4.3+/-0.7, respectively (P < 0.01 by ANOVA). Single-shot late-enhancement

  10. Free-breathing motion-corrected late-gadolinium-enhancement imaging improves image quality in children

    Energy Technology Data Exchange (ETDEWEB)

    Olivieri, Laura; O'Brien, Kendall J. [Children's National Health System, Division of Cardiology, Washington, DC (United States); National Heart, Lung and Blood Institute, National Institutes of Health, Bethesda, MD (United States); Cross, Russell [Children's National Health System, Division of Cardiology, Washington, DC (United States); Xue, Hui; Kellman, Peter; Hansen, Michael S. [National Heart, Lung and Blood Institute, National Institutes of Health, Bethesda, MD (United States)

    2016-06-15

    The value of late-gadolinium-enhancement (LGE) imaging in the diagnosis and management of pediatric and congenital heart disease is clear; however current acquisition techniques are susceptible to error and artifacts when performed in children because of children's higher heart rates, higher prevalence of sinus arrhythmia, and inability to breath-hold. Commonly used techniques in pediatric LGE imaging include breath-held segmented FLASH (segFLASH) and steady-state free precession-based (segSSFP) imaging. More recently, single-shot SSFP techniques with respiratory motion-corrected averaging have emerged. This study tested and compared single-shot free-breathing LGE techniques with standard segmented breath-held techniques in children undergoing LGE imaging. Thirty-two consecutive children underwent clinically indicated late-enhancement imaging using intravenous gadobutrol 0.15 mmol/kg. Breath-held segSSFP, breath-held segFLASH, and free-breathing single-shot SSFP LGE sequences were performed in consecutive series in each child. Two blinded reviewers evaluated the quality of the images and rated them on a scale of 1-5 (1 = poor, 5 = superior) based on blood pool-myocardial definition, presence of cardiac motion, presence of respiratory motion artifacts, and image acquisition artifact. We used analysis of variance (ANOVA) to compare groups. Patients ranged in age from 9 months to 18 years, with a mean +/- standard deviation (SD) of 13.3 +/- 4.8 years. R-R interval at the time of acquisition ranged 366-1,265 milliseconds (ms) (47-164 beats per minute [bpm]), mean +/- SD of 843+/-231 ms (72+/-21 bpm). Mean +/- SD quality ratings for long-axis imaging for segFLASH, segSSFP and single-shot SSFP were 3.1+/-0.9, 3.4+/-0.9 and 4.0+/-0.9, respectively (P < 0.01 by ANOVA). Mean +/- SD quality ratings for short-axis imaging for segFLASH, segSSFP and single-shot SSFP were 3.4+/-1, 3.8+/-0.9 and 4.3+/-0.7, respectively (P < 0.01 by ANOVA). Single-shot late-enhancement
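
The group comparison reported above is a one-way ANOVA across the three sequences' quality ratings. A minimal sketch of the F statistic, using made-up rating lists rather than the study's data:

```python
def one_way_anova_f(*groups):
    """F statistic: between-group mean square / within-group mean square."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Made-up quality ratings on the study's 1-5 scale (illustrative only):
seg_flash = [3, 3, 4, 2, 3]
seg_ssfp = [3, 4, 4, 3, 4]
single_shot_ssfp = [4, 4, 5, 4, 3]
f_stat = one_way_anova_f(seg_flash, seg_ssfp, single_shot_ssfp)
```

The F statistic is then compared against the F distribution with (k-1, n-k) degrees of freedom to obtain the p-value the abstract reports (in practice via a statistics package such as `scipy.stats.f_oneway`).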

  11. Evaluating Effectiveness of Modeling Motion System Feedback in the Enhanced Hess Structural Model of the Human Operator

    Science.gov (United States)

    Zaychik, Kirill; Cardullo, Frank; George, Gary; Kelly, Lon C.

    2009-01-01

    In order to use the Hess Structural Model to predict the need for certain cueing systems, George and Cardullo significantly expanded it by adding motion feedback to the model and incorporating models of the motion system dynamics, the motion cueing algorithm and the vestibular system. This paper proposes a methodology to evaluate the effectiveness of these innovations by comparing the model's performance with and without the expanded motion feedback. The proposed methodology is composed of two stages. The first stage involves fine-tuning the parameters of the original Hess structural model to match the actual control behavior recorded during experiments at the NASA Visual Motion Simulator (VMS) facility. The parameter-tuning procedure utilizes a new automated parameter identification technique developed at the Man-Machine Systems Lab at SUNY Binghamton. In the second stage, the expanded motion feedback is added to the structural model, and the resulting performance is compared to that of the original model. As proposed by Hess, the metrics used to evaluate the models include comparison against the standards that the crossover model imposes on the crossover frequency and phase margin of the overall man-machine system. Preliminary results indicate the advantage of incorporating the models of the motion system and motion cueing into the model of the human operator. It is also demonstrated that the crossover frequency and the phase margin of the expanded model are well within the limits imposed by the crossover model.
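
    The crossover frequency and phase margin mentioned above have a simple closed form for McRuer's crossover model, in which the combined man-machine open loop is Y_ol(jw) = wc * exp(-j*w*tau) / (jw). A numerical sketch follows; the wc and tau values are illustrative assumptions, not parameters identified in the paper.

```python
# Crossover frequency and phase margin for McRuer's crossover model.
import cmath, math

def open_loop(w, wc, tau):
    """Crossover-model open-loop response at frequency w (rad/s)."""
    return wc * cmath.exp(-1j * w * tau) / (1j * w)

wc, tau = 2.0, 0.3   # hypothetical crossover gain (rad/s) and effective delay (s)

# Gain crossover: |Y_ol| == 1 exactly at w == wc for this model.
assert abs(abs(open_loop(wc, wc, tau)) - 1.0) < 1e-12

# Phase margin = 180 deg + phase(Y_ol) at crossover = 90 - wc*tau (in degrees).
pm_deg = 180.0 + math.degrees(cmath.phase(open_loop(wc, wc, tau)))
print(round(pm_deg, 2))
```

    Checking that the identified model's wc and phase margin fall inside such crossover-model bounds is the kind of comparison the paper's second stage performs.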

  12. Enhanced recognition memory in grapheme-color synaesthesia for different categories of visual stimuli.

    Science.gov (United States)

    Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas

    2013-01-01

    Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status.

  13. Understanding and Visualizing Multitasking and Task Switching Activities: A Time Motion Study to Capture Nursing Workflow.

    Science.gov (United States)

    Yen, Po-Yin; Kelley, Marjorie; Lopetegui, Marcelo; Rosado, Amber L; Migliore, Elaina M; Chipps, Esther M; Buck, Jacalyn

    2016-01-01

    A fundamental understanding of multitasking within nursing workflow is important in today's dynamic and complex healthcare environment. We conducted a time motion study to understand nursing workflow, specifically multitasking and task switching activities. We used TimeCaT, a comprehensive electronic time capture tool, to capture observational data, and established inter-observer reliability prior to data collection. We completed 56 hours of observation of 10 registered nurses. We found that, on average, nurses had 124 communications and 208 hands-on tasks per 4-hour block of time. They multitasked (carrying out communication and hands-on tasks simultaneously) 131 times, representing 39.48% of all recorded events; the total multitasking duration ranged from 14.6 minutes to 109 minutes, averaging 44.98 minutes (18.63%). We also reviewed workflow visualizations to uncover the multitasking events. Our study design and methods provide a practical and reliable approach to conducting and analyzing time motion studies from both quantitative and qualitative perspectives.
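
    Multitasking duration of the kind reported above can be derived from time-stamped observation intervals as the overlap between communication episodes and hands-on tasks. A minimal sketch follows; the intervals (minutes from the start of a 240-minute block) are invented for illustration, and TimeCaT's actual data model is not shown.

```python
# Total multitasking time as the pairwise overlap of two interval lists.
# Assumes intervals within each list do not overlap one another.
def overlap_minutes(comms, tasks):
    """Total time during which a communication and a hands-on task co-occur."""
    total = 0.0
    for c_start, c_end in comms:
        for t_start, t_end in tasks:
            total += max(0.0, min(c_end, t_end) - max(c_start, t_start))
    return total

comms = [(0, 5), (10, 12), (30, 40)]   # hypothetical communication episodes
tasks = [(3, 11), (35, 50)]            # hypothetical hands-on tasks

multi = overlap_minutes(comms, tasks)
print(multi, round(100 * multi / 240, 2))   # minutes and % of a 4-hour block
```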

  14. Perceived state of self during motion can differentially modulate numerical magnitude allocation.

    Science.gov (United States)

    Arshad, Q; Nigmatullina, Y; Roberts, R E; Goga, U; Pikovsky, M; Khan, S; Lobo, R; Flury, A-S; Pettorossi, V E; Cohen-Kadosh, R; Malhotra, P A; Bronstein, A M

    2016-09-01

    Although a direct relationship between numerical allocation and spatial attention has been proposed, recent research suggests that these processes are not directly coupled. In keeping with this, spatial attention shifts induced either via visual or vestibular motion can modulate numerical allocation in some circumstances but not in others. In addition to shifting spatial attention, visual or vestibular motion paradigms also (i) elicit compensatory eye movements which themselves can influence numerical processing and (ii) alter the perceptual state of 'self', inducing changes in bodily self-consciousness that impact upon cognitive mechanisms. Thus, the precise mechanism by which motion modulates numerical allocation remains unknown. We sought to investigate the influence that different perceptual experiences of motion have upon numerical magnitude allocation while controlling for both eye movements and task-related effects. We first used optokinetic visual motion stimulation (OKS) to elicit the perceptual experience of either 'visual world' or 'self'-motion, during which eye movements were identical. In a second experiment, we used a vestibular protocol examining the effects of perceived and subliminal angular rotations in darkness, which also provoked identical eye movements. We observed that during the perceptual experience of 'visual world' motion, rightward OKS biased judgments towards smaller numbers, whereas leftward OKS biased judgments towards larger numbers. During the perceptual experience of 'self-motion', judgments were biased towards larger numbers irrespective of the OKS direction. By contrast, vestibular motion perception was found not to modulate numerical magnitude allocation, nor was there any differential modulation when comparing 'perceived' vs. 'subliminal' rotations. We provide a novel demonstration that numerical magnitude allocation can be differentially modulated by the perceptual state of self during visually mediated but not vestibularly mediated motion.

  15. Abnormal Size-Dependent Modulation of Motion Perception in Children with Autism Spectrum Disorder (ASD).

    Science.gov (United States)

    Sysoeva, Olga V; Galuta, Ilia A; Davletshina, Maria S; Orekhova, Elena V; Stroganova, Tatiana A

    2017-01-01

    Excitation/Inhibition (E/I) imbalance in neural networks is now considered among the core neural underpinnings of autism psychopathology. In motion perception at least two phenomena critically depend on E/I balance in visual cortex: spatial suppression (SS), and spatial facilitation (SF) corresponding to impoverished or improved motion perception with increasing stimuli size, respectively. While SS is dominant at high contrast, SF is evident for low contrast stimuli, due to the prevalence of inhibitory contextual modulations in the former, and excitatory ones in the latter case. Only one previous study (Foss-Feig et al., 2013) investigated SS and SF in Autism Spectrum Disorder (ASD). Our study aimed to replicate previous findings, and to explore the putative contribution of deficient inhibitory influences into an enhanced SF index in ASD-a cornerstone for interpretation proposed by Foss-Feig et al. (2013). The SS and SF were examined in 40 boys with ASD, broad spectrum of intellectual abilities (63 ASD. The presence of abnormally enhanced SF in children with ASD was the only consistent finding between our study and that of Foss-Feig et al. While the SS and SF indexes were strongly interrelated in TD participants, this correlation was absent in their peers with ASD. In addition, the SF index but not the SS index correlated with the severity of autism and the poor registration abilities. The pattern of results is partially consistent with the idea of hypofunctional inhibitory transmission in visual areas in ASD. Nonetheless, the absence of correlation between SF and SS indexes paired with a strong direct link between abnormally enhanced SF and autism symptoms in our ASD sample emphasizes the role of the enhanced excitatory influences by themselves in the observed abnormalities in low-level visual phenomena found in ASD.

  16. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    International Nuclear Information System (INIS)

    George, Rohini; Chung, Theodore D.; Vedam, Sastry S.; Ramakrishnan, Viswanathan; Mohan, Radhe; Weiss, Elisabeth; Keall, Paul J.

    2006-01-01

    Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating
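
    The residual-motion metric defined above (the standard deviation of the respiratory displacement inside the gating window) can be sketched for displacement-based exhale gating. The sinusoidal trace and the duty-cycle values below are illustrative, not patient data.

```python
# Residual motion for displacement-based exhale gating on a synthetic trace.
import math
from statistics import pstdev

# Synthetic respiratory trace: displacement over one cycle (peak exhale at +1).
n = 400
trace = [math.cos(2 * math.pi * (i / n)) for i in range(n)]

def residual_motion(trace, duty_cycle):
    """SD of displacement over the duty_cycle fraction of samples closest
    to peak exhalation (displacement-based gating window)."""
    k = max(1, int(len(trace) * duty_cycle))
    gated = sorted(trace, reverse=True)[:k]   # samples nearest the exhale peak
    return pstdev(gated)

for dc in (0.3, 0.5, 0.8):
    print(dc, round(residual_motion(trace, dc), 4))
```

    As the abstract reports, widening the duty cycle admits more of the respiratory excursion into the gating window, so residual motion grows with duty cycle.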

  17. Cholinergic enhancement reduces orientation-specific surround suppression but not visual crowding

    Directory of Open Access Journals (Sweden)

    Anna A. Kosovicheva

    2012-09-01

    Full Text Available Acetylcholine (ACh) reduces the spatial spread of excitatory fMRI responses in early visual cortex and the receptive field sizes of V1 neurons. We investigated the perceptual consequences of these physiological effects of ACh with surround suppression and crowding, two tasks that involve spatial interactions between visual field locations. Surround suppression refers to the reduction in perceived stimulus contrast by a high-contrast surround stimulus. For grating stimuli, surround suppression is selective for the relative orientations of the center and surround, suggesting that it results from inhibitory interactions in early visual cortex. Crowding refers to impaired identification of a peripheral stimulus in the presence of flankers and is thought to result from excessive integration of visual features. We increased synaptic ACh levels by administering the cholinesterase inhibitor donepezil to healthy human subjects in a placebo-controlled, double-blind design. In Exp. 1, we measured surround suppression of a central grating using a contrast discrimination task with three conditions: (1) a surround grating with the same orientation as the center (parallel), (2) a surround orthogonal to the center, or (3) no surround. Contrast discrimination thresholds were higher in the parallel than in the orthogonal condition, demonstrating orientation-specific surround suppression (OSSS). Cholinergic enhancement reduced thresholds only in the parallel condition, thereby reducing OSSS. In Exp. 2, subjects performed a crowding task in which they reported the identity of a peripheral letter flanked by letters on either side. We measured the critical spacing between the target and flanking letters that allowed reliable identification. Cholinergic enhancement had no effect on critical spacing. Our findings suggest that ACh reduces spatial interactions in tasks involving segmentation of visual field locations but that these effects may be limited to early visual cortical

  18. Teaching Motion with the Global Positioning System

    Science.gov (United States)

    Budisa, Marko; Planinsic, Gorazd

    2003-01-01

    We have used the GPS receiver and a PC interface to track different types of motion. Various hands-on experiments that enlighten the physics of motion at the secondary school level are suggested (visualization of 2D and 3D motion, measuring car drag coefficient and fuel consumption). (Contains 8 figures.)

  19. Disappearance of the inversion effect during memory-guided tracking of scrambled biological motion.

    Science.gov (United States)

    Jiang, Changhao; Yue, Guang H; Chen, Tingting; Ding, Jinhong

    2016-08-01

    The human visual system is highly sensitive to biological motion. Even when a point-light walker is temporarily occluded from view by other objects, our eyes are still able to maintain tracking continuity. To investigate how the visual system establishes a correspondence between the biological-motion stimuli visible before and after the disruption, we used the occlusion paradigm with biological-motion stimuli that were intact or scrambled. The results showed that during visually guided tracking, both the observers' predicted times and predictive smooth pursuit were more accurate for upright biological motion (intact and scrambled) than for inverted biological motion. During memory-guided tracking, however, the processing advantage for upright as compared with inverted biological motion was not found in the scrambled condition, but in the intact condition only. This suggests that spatial location information alone is not sufficient to build and maintain the representational continuity of the biological motion across the occlusion, and that the object identity may act as an important information source in visual tracking. The inversion effect disappeared when the scrambled biological motion was occluded, which indicates that when biological motion is temporarily occluded and there is a complete absence of visual feedback signals, an oculomotor prediction is executed to maintain the tracking continuity, which is established not only by updating the target's spatial location, but also by the retrieval of identity information stored in long-term memory.

  20. Motion discrimination under uncertainty and ambiguity.

    NARCIS (Netherlands)

    Kalisvaart, J.P.; Klaver, I.; Goossens, J.

    2011-01-01

    Speed and accuracy of visual motion discrimination depend systematically on motion strength. This behavior is traditionally explained by diffusion models that assume accumulation of sensory evidence over time to a decision bound. However, how does the brain decide when sensory evidence is ambiguous,

  1. Visual Input Enhancement and Grammar Learning: A Meta-Analytic Review

    Science.gov (United States)

    Lee, Sang-Ki; Huang, Hung-Tzu

    2008-01-01

    Effects of pedagogical interventions with visual input enhancement on grammar learning have been investigated by a number of researchers during the past decade and a half. The present review delineates this research domain via a systematic synthesis of 16 primary studies (comprising 20 unique study samples) retrieved through an exhaustive…

  2. Evaluation of the Leap Motion Controller during the performance of visually-guided upper limb movements.

    Science.gov (United States)

    Niechwiej-Szwedo, Ewa; Gonzalez, David; Nouredanesh, Mina; Tung, James

    2018-01-01

    Kinematic analysis of upper limb reaching provides insight into the central nervous system control of movements. Until recently, kinematic examination of motor control has been limited to studies conducted in traditional research laboratories because the motion capture equipment used for data collection is expensive and not easily portable. A recently developed markerless system, the Leap Motion Controller (LMC), is a portable and inexpensive tracking device that allows recording of 3D hand and finger position. The main goal of this study was to assess the concurrent reliability and validity of the LMC as compared to the Optotrak, a criterion-standard motion capture system, for measures of temporal accuracy and peak velocity during the performance of upper limb, visually-guided movements. In experiment 1, 14 participants executed aiming movements to visual targets presented on a computer monitor. Bland-Altman analysis was conducted to assess the validity and limits of agreement for measures of temporal accuracy (movement time, duration of deceleration interval), peak velocity, and spatial accuracy (endpoint accuracy). In addition, a one-sample t-test was used to test the hypothesis that the error difference between measures obtained from the Optotrak and the LMC is zero. In experiment 2, 15 participants performed a Fitts' type aiming task in order to assess whether the LMC is capable of capturing the well-known speed-accuracy trade-off relationship. Experiment 3 assessed the temporal coordination pattern during the performance of a sequence consisting of reaching, grasping, and placement tasks in 15 participants. Results from the t-test showed that the error difference in temporal measures was significantly different from zero. Based on the results from the 3 experiments, the average temporal error in movement time was 40±44 ms, and the error in peak velocity was 0.024±0.103 m/s.
The limits of agreement between the LMC and Optotrak for spatial accuracy measures ranged between
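
    The Bland-Altman analysis used in experiment 1 reduces to a bias (mean of the paired differences) and 95% limits of agreement at bias ± 1.96 SD of the differences. A pure-Python sketch follows; the paired movement-time values (ms) are invented for illustration, not the LMC/Optotrak data.

```python
# Bland-Altman bias and 95% limits of agreement for paired measurements.
from statistics import mean, stdev

def bland_altman(a, b):
    """Return (bias, lower limit, upper limit) for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    half_width = 1.96 * stdev(diffs)     # 95% limits of agreement
    return bias, bias - half_width, bias + half_width

lmc      = [540, 610, 480, 700, 655]    # hypothetical LMC movement times (ms)
optotrak = [500, 575, 450, 650, 620]    # hypothetical Optotrak times (ms)

bias, lo, hi = bland_altman(lmc, optotrak)
print(round(bias, 1), round(lo, 1), round(hi, 1))
```

    A systematic offset like the one-sample t-test detected would show up here as a bias interval that excludes zero.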

  3. Mental imagery of gravitational motion.

    Science.gov (United States)

    Gravano, Silvio; Zago, Myrka; Lacquaniti, Francesco

    2017-10-01

    There is considerable evidence that gravitational acceleration is taken into account in the interaction with falling targets through an internal model of Earth gravity. Here we asked whether this internal model is accessed also when target motion is imagined rather than real. In the main experiments, naïve participants grasped an imaginary ball, threw it against the ceiling, and caught it on rebound. In different blocks of trials, they had to imagine that the ball moved under terrestrial gravity (1g condition) or under microgravity (0g) as during a space flight. We measured the speed and timing of the throwing and catching actions, and plotted ball flight duration versus throwing speed. Best-fitting duration-speed curves estimate the laws of ball motion implicit in the participant's performance. Surprisingly, we found duration-speed curves compatible with 0g for both the imaginary 0g condition and the imaginary 1g condition, despite the familiarity with Earth gravity effects and the added realism of performing the throwing and catching actions. In a control experiment, naïve participants were asked to throw the imaginary ball vertically upwards at different heights, without hitting the ceiling, and to catch it on its way down. All participants overestimated ball flight durations relative to the durations predicted by the effects of Earth gravity. Overall, the results indicate that mental imagery of motion does not have access to the internal model of Earth gravity, but resorts to a simulation of visual motion. Because visual processing of accelerating/decelerating motion is poor, visual imagery of motion at constant speed or slowly varying speed appears to be the preferred mode to perform the tasks. Copyright © 2017 Elsevier Ltd. All rights reserved.
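
    The duration-speed curves described above have simple closed forms under the two imagined gravity conditions: with the ceiling at distance d and an elastic rebound, flight time is 2d/v under 0g (constant speed up and back) and 2(v - sqrt(v^2 - 2gd))/g under 1g. A sketch follows; d and the throwing speeds are illustrative assumptions, not the study's parameters.

```python
# Ball flight duration vs. throwing speed under 0g and 1g, for a throw
# against a ceiling a distance D away with an elastic rebound.
import math

G, D = 9.81, 1.5        # gravity (m/s^2) and hand-to-ceiling distance (m)

def flight_time_0g(v):
    """Constant speed up and back: t = 2D/v."""
    return 2 * D / v

def flight_time_1g(v):
    """Decelerated rise to the ceiling, symmetric accelerated fall.
    Requires v^2 > 2*G*D so the ball actually reaches the ceiling."""
    v_ceiling = math.sqrt(v * v - 2 * G * D)   # speed at ceiling impact
    return 2 * (v - v_ceiling) / G

for v in (6.0, 8.0, 10.0):                     # throwing speeds (m/s)
    print(v, round(flight_time_0g(v), 3), round(flight_time_1g(v), 3))
```

    Fitting observed duration-speed pairs against these two curve families is one way to estimate which law of motion is implicit in a participant's performance.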

  4. Visual information transfer. 1: Assessment of specific information needs. 2: The effects of degraded motion feedback. 3: Parameters of appropriate instrument scanning behavior

    Science.gov (United States)

    Comstock, J. R., Jr.; Kirby, R. H.; Coates, G. D.

    1984-01-01

    Pilot and flight crew assessment of visually displayed information is examined as well as the effects of degraded and uncorrected motion feedback, and instrument scanning efficiency by the pilot. Computerized flight simulation and appropriate physiological measurements are used to collect data for standardization.

  5. Effect of Power Point Enhanced Teaching (Visual Input) on Iranian Intermediate EFL Learners' Listening Comprehension Ability

    Science.gov (United States)

    Sehati, Samira; Khodabandehlou, Morteza

    2017-01-01

    The present investigation was an attempt to study on the effect of power point enhanced teaching (visual input) on Iranian Intermediate EFL learners' listening comprehension ability. To that end, a null hypothesis was formulated as power point enhanced teaching (visual input) has no effect on Iranian Intermediate EFL learners' listening…

  6. Enhanced Recognition Memory in Grapheme-Colour Synaesthesia for Different Categories of Visual Stimuli

    Directory of Open Access Journals (Sweden)

    Jamie eWard

    2013-10-01

    Full Text Available Memory has been shown to be enhanced in grapheme-colour synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g. free recall, recognition, associative learning), making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, nonwords, scenes, and fractals) and also check which memorisation strategies were used. We demonstrate that grapheme-colour synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing colour, orientation, or object presence). Again, grapheme-colour synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which colour can be used to discriminate old/new status.

  7. Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale

    2015-10-01

    Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched NH listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Countermeasures to Enhance Sensorimotor Adaptability

    Science.gov (United States)

    Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Brady, R. A.; Batson, C. C.; Miller, C. A.; Cohen, H. S.

    2011-01-01

    During exploration-class missions, sensorimotor disturbances may lead to disruption in the ability to ambulate and perform functional tasks during the initial introduction to a novel gravitational environment following a landing on a planetary surface. The goal of our current project is to develop a sensorimotor adaptability (SA) training program to facilitate rapid adaptation to novel gravitational environments. We have developed a unique training system comprised of a treadmill placed on a motion-base facing a virtual visual scene that provides an unstable walking surface combined with incongruent visual flow designed to enhance sensorimotor adaptability. We have conducted a series of studies that have shown: Training using a combination of modified visual flow and support surface motion during treadmill walking enhances locomotor adaptability to a novel sensorimotor environment. Trained individuals become more proficient at performing multiple competing tasks while walking during adaptation to novel discordant sensorimotor conditions. Trained subjects can retain their increased level of adaptability over a six months period. SA training is effective in producing increased adaptability in a more complex over-ground ambulatory task on an obstacle course. This confirms that for a complex task like walking, treadmill training contains enough of the critical features of overground walking to be an effective training modality. The structure of individual training sessions can be optimized to promote fast/strategic motor learning. Training sessions that each contain short-duration exposures to multiple perturbation stimuli allows subjects to acquire a greater ability to rapidly reorganize appropriate response strategies when encountering a novel sensory environment. Individual sensory biases (i.e. increased visual dependency) can predict adaptive responses to novel sensory environments suggesting that customized training prescriptions can be developed to enhance

  9. Perception of biological motion from size-invariant body representations

    Directory of Open Access Journals (Sweden)

    Markus eLappe

    2015-03-01

    Full Text Available The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.

  10. Does 3D produce more symptoms of visually induced motion sickness?

    Science.gov (United States)

    Naqvi, Syed Ali Arsalan; Badruddin, Nasreen; Malik, Aamir Saeed; Hazabbah, Wan; Abdullah, Baharudin

    2013-01-01

    3D stereoscopy technology with high-quality images and depth perception provides entertainment to its viewers. However, the technology is not mature yet and sometimes may have adverse effects on viewers: some viewers have reported discomfort when watching videos with 3D technology. In this research we performed an experiment showing a movie to participants in 2D and 3D environments. Subjective and objective data were recorded and compared in both conditions. Results from subjective reporting show that Visually Induced Motion Sickness (VIMS) is significantly higher in the 3D condition. For objective measurement, ECG data were recorded to derive Heart Rate Variability (HRV), and the LF/HF ratio, an index of sympathetic nerve activity, was analyzed to track changes in the participants' state over time. The average scores of nausea, disorientation and the total score of the SSQ show a significant difference between the 3D and 2D conditions. However, the LF/HF ratio did not show a significant difference throughout the experiment.
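
    The LF/HF index analyzed above is the ratio of spectral power in the low-frequency (0.04-0.15 Hz) and high-frequency (0.15-0.40 Hz) bands of the heart-rate series. A sketch with a plain DFT over a synthetic, evenly resampled RR-interval signal follows; the component frequencies and amplitudes are illustrative, not participant ECG data.

```python
# LF/HF ratio from band powers of a synthetic RR-interval series.
import cmath, math

FS = 4.0                   # resampling rate of the RR series (Hz)
N = 512
# Hypothetical RR fluctuation: a 0.125 Hz (LF) plus a 0.25 Hz (HF) component.
rr = [0.05 * math.sin(2 * math.pi * 0.125 * (i / FS)) +
      0.02 * math.sin(2 * math.pi * 0.25 * (i / FS)) for i in range(N)]

def band_power(signal, fs, lo, hi):
    """Sum of DFT power over frequency bins in [lo, hi] Hz."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f <= hi:
            coeff = sum(signal[i] * cmath.exp(-2j * math.pi * k * i / n)
                        for i in range(n))
            power += abs(coeff) ** 2
    return power

lf = band_power(rr, FS, 0.04, 0.15)   # low-frequency band
hf = band_power(rr, FS, 0.15, 0.40)   # high-frequency band
print(round(lf / hf, 2))
```

    In practice the ratio would be computed on windowed segments over the viewing session to track sympathovagal changes over time, as the study describes.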

  11. Scientific visualization for enhanced interpretation and communication of geoscientific information

    International Nuclear Information System (INIS)

    Vorauer, A.; Cotesta, L.

    2006-01-01

    Ontario Power Generation's Deep Geologic Repository Technology Program has undertaken applied research into the application of scientific visualization technologies to: i) improve the interpretation and synthesis of complex geoscientific field data; ii) facilitate the development of defensible conceptual site descriptive models; and iii) enhance communication between multi-disciplinary site investigation teams and other stakeholders. Two scientific visualization projects are summarized that benefited from the use of the Gocad earth modelling software and were supported by an immersive virtual reality laboratory: i) the Moderately Fractured Rock experiment at the 125,000 m³ block scale; and ii) the Sub-regional Flow System Modelling Project at the 100 km² scale. (author)

  12. Visual Attention to Movement and Color in Children with Cortical Visual Impairment

    Science.gov (United States)

    Cohen-Maitre, Stacey Ann; Haerich, Paul

    2005-01-01

    This study investigated the ability of color and motion to elicit and maintain visual attention in a sample of children with cortical visual impairment (CVI). It found that colorful and moving objects may be used to engage children with CVI, increase their motivation to use their residual vision, and promote visual learning.

  13. Enhancements to VTK enabling Scientific Visualization in Immersive Environments

    Energy Technology Data Exchange (ETDEWEB)

    O' Leary, Patrick; Jhaveri, Sankhesh; Chaudhary, Aashish; Sherman, William; Martin, Ken; Lonie, David; Whiting, Eric; Money, James

    2017-04-01

    Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis provides insight into this data, scientific visualization is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can greatly benefit from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has been attempted with only varying degrees of success. In this paper, we demonstrate two new approaches to simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both the Vrui and OpenVR immersive environments in example applications.

  14. A Live-Time Relation: Motion Graphics meets Classical Music

    DEFF Research Database (Denmark)

    Steijn, Arthur

    2014-01-01

    In our digital age, we frequently meet fine examples of live performances of classical music with accompanying visuals. Yet, we find very little theoretical or analytical work on the relation between classical music and digital temporal visuals, nor on the process of creating them. In this paper, I present segments of my work toward a working model for the process of design of visuals and motion graphics applied in spatial contexts. I show how various design elements and components: line and shape, tone and colour, time and timing, rhythm and movement interact with conceptualizations of space, liveness and atmosphere. The design model will be a framework for both academic analytical studies as well as for designing time-based narratives and visual concepts involving motion graphics in spatial contexts. I focus on cases in which both pre-rendered and live generated motion graphics are designed…

  15. Modulation of visually evoked postural responses by contextual visual, haptic and auditory information: a 'virtual reality check'.

    Science.gov (United States)

    Meyer, Georg F; Shao, Fei; White, Mark D; Hopkins, Carl; Robotham, Antony J

    2013-01-01

    Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR.

  16. A novel role for visual perspective cues in the neural computation of depth.

    Science.gov (United States)

    Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C

    2015-01-01

    As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extraretinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We found that incorporating these 'dynamic perspective' cues allowed the visual system to generate selectivity for depth sign from motion parallax in macaque cortical area MT, a computation that was previously thought to require extraretinal signals regarding eye velocity. Our findings suggest neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.

  17. Visual cues in low-level flight - Implications for pilotage, training, simulation, and enhanced/synthetic vision systems

    Science.gov (United States)

    Foyle, David C.; Kaiser, Mary K.; Johnson, Walter W.

    1992-01-01

    This paper reviews some of the sources of visual information that are available in the out-the-window scene and describes how these visual cues are important for routine pilotage and training, as well as the development of simulator visual systems and enhanced or synthetic vision systems for aircraft cockpits. It is shown how these visual cues may change or disappear under environmental or sensor conditions, and how the visual scene can be augmented by advanced displays to capitalize on the pilot's excellent ability to extract visual information from the visual scene.

  18. Visualization of Disciplinary Profiles: Enhanced Science Overlay Maps

    Directory of Open Access Journals (Sweden)

    Stephen Carley

    2017-08-01

    Purpose: The purpose of this study is to modernize previous work on science overlay maps by updating the underlying citation matrix, generating new clusters of scientific disciplines, enhancing visualizations, and providing more accessible means for analysts to generate their own maps. Design/methodology/approach: We use the combined set of 2015 Journal Citation Reports for the Science Citation Index (n of journals = 8,778) and the Social Sciences Citation Index (n = 3,212), for a total of 11,365 journals. The set of Web of Science Categories in the Science Citation Index and the Social Sciences Citation Index increased from 224 in 2010 to 227 in 2015. Using dedicated software, a matrix of 227 × 227 cells is generated on the basis of whole-number citation counting. We normalize this matrix using the cosine function. We first develop the citing-side, cosine-normalized map using 2015 data and VOSviewer visualization with default parameter values. A routine for making overlays on the basis of the map (“wc15.exe”) is available at http://www.leydesdorff.net/wc15/index.htm. Findings: Findings appear in the form of visuals throughout the manuscript. In Figures 1–9 we provide basemaps of science and science overlay maps for a number of companies, universities, and technologies. Research limitations: As Web of Science Categories change and/or are updated, so is the need to update the routine we provide. Also, to apply the routine we provide, users need access to the Web of Science. Practical implications: Visualization of science overlay maps is now more accurate and true to the 2015 Journal Citation Reports than was the case with the previous version of the routine advanced in our paper. Originality/value: The routine we advance allows users to visualize science overlay maps in VOSviewer using data from more recent Journal Citation Reports.
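The cosine-normalization step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' dedicated software: each cell (i, j) of the normalized matrix holds the cosine similarity between the citing profiles (rows) of categories i and j, so the result is symmetric with a unit diagonal. The toy 3-category citation counts are invented.

```python
import numpy as np

def cosine_normalize(c):
    """Cosine-normalize a square citing matrix (rows = citing categories)."""
    c = np.asarray(c, dtype=float)
    norms = np.linalg.norm(c, axis=1)
    norms[norms == 0] = 1.0            # guard against empty categories
    # (c @ c.T)[i, j] is the dot product of citing profiles i and j;
    # dividing by the outer product of row norms yields cosine similarity.
    return (c @ c.T) / np.outer(norms, norms)

# Toy 3-category citation matrix (a real one would be 227 x 227).
cites = np.array([[10, 2, 0],
                  [2, 8, 1],
                  [0, 1, 5]])
sim = cosine_normalize(cites)
```

Clustering and visualization tools such as VOSviewer can then operate directly on this similarity matrix.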

  19. Communicative interactions improve visual detection of biological motion.

    Directory of Open Access Journals (Sweden)

    Valeria Manera

    BACKGROUND: In the context of interacting activities requiring close-body contact, such as fighting or dancing, the actions of one agent can be used to predict the actions of the second agent. In the present study, we investigated whether interpersonal predictive coding extends to interactive activities, such as communicative interactions, in which no physical contingency is implied between the movements of the interacting individuals. METHODOLOGY/PRINCIPAL FINDINGS: Participants observed point-light displays of two agents (A and B) performing separate actions. In the communicative condition, the action performed by agent B responded to a communicative gesture performed by agent A. In the individual condition, agent A's communicative action was substituted with a non-communicative action. Using a simultaneous masking detection task, we demonstrate that observing the communicative gesture performed by agent A enhanced visual discrimination of agent B. CONCLUSIONS/SIGNIFICANCE: Our finding complements and extends previous evidence for interpersonal predictive coding, suggesting that the communicative gestures of one agent can serve as a predictor for the expected actions of the respondent, even if no physical contact between agents is implied.

  20. Figure-ground segregation modulates apparent motion.

    Science.gov (United States)

    Ramachandran, V S; Anstis, S

    1986-01-01

    We explored the relationship between figure-ground segmentation and apparent motion. Results suggest that: static elements in the surround can eliminate apparent motion of a cluster of dots in the centre, but only if the cluster and surround have similar "grain" or texture; outlines that define occluding surfaces are taken into account by the motion mechanism; the brain uses a hierarchy of precedence rules in attributing motion to different segments of the visual scene. Being designated as "figure" confers a high rank in this scheme of priorities.

  1. Implementation of Motion Simulation Software and Visual-Auditory Electronics for Use in a Low Gravity Robotic Testbed

    Science.gov (United States)

    Martin, William Campbell

    2011-01-01

    The Jet Propulsion Laboratory (JPL) is developing the All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) to assist in manned space missions. One of the proposed targets for this robotic vehicle is a near-Earth asteroid (NEA), which typically exhibits a surface gravity of only a few micro-g. In order to properly test ATHLETE in such an environment, the development team has constructed an inverted Stewart platform testbed that acts as a robotic motion simulator. This project focused on creating physical simulation software that is able to predict how ATHLETE will function on and around a NEA. The corresponding platform configurations are calculated and then passed to the testbed to control ATHLETE's motion. In addition, imitation attitude control thrusters were designed and fabricated for use on ATHLETE. These utilize a combination of high power LEDs and audio amplifiers to provide visual and auditory cues that correspond to the physics simulation.
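A Stewart platform testbed like the one described is typically commanded through inverse kinematics: given a desired platform pose (translation plus roll/pitch/yaw), compute the six actuator leg lengths. The sketch below shows that core computation under invented geometry; the attachment points, dimensions, and function names are illustrative assumptions, not ATHLETE testbed values.

```python
import numpy as np

def leg_lengths(base_pts, plat_pts, translation, rpy):
    """Inverse kinematics of a 6-leg Stewart platform.

    base_pts, plat_pts: (6, 3) leg attachment points in the base and
    platform frames; translation: (3,) platform origin in the base frame;
    rpy: (roll, pitch, yaw) in radians.
    """
    r, p, y = rpy
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y), np.cos(y), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx                      # ZYX (yaw-pitch-roll) convention
    world = (R @ plat_pts.T).T + translation   # platform points in base frame
    return np.linalg.norm(world - base_pts, axis=1)

# Toy geometry: attachment points on two concentric circles.
ang = np.deg2rad([0, 60, 120, 180, 240, 300])
base = np.stack([np.cos(ang), np.sin(ang), np.zeros(6)], axis=1)
plat = 0.5 * np.stack([np.cos(ang), np.sin(ang), np.zeros(6)], axis=1)

# Neutral pose: platform raised 1 m, no rotation.
lengths = leg_lengths(base, plat, np.array([0.0, 0.0, 1.0]), (0.0, 0.0, 0.0))
```

At this symmetric neutral pose all six legs come out equal (sqrt(0.5² + 1²) ≈ 1.118 m); a simulator loop would recompute these lengths for each pose produced by the physics simulation and stream them to the actuators.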

  2. Determining the Effectiveness of Visual Input Enhancement across Multiple Linguistic Cues

    Science.gov (United States)

    Comeaux, Ian; McDonald, Janet L.

    2018-01-01

    Visual input enhancement (VIE) increases the salience of grammatical forms, potentially facilitating acquisition through attention mechanisms. Native English speakers were exposed to an artificial language containing four linguistic cues (verb agreement, case marking, animacy, word order), with morphological cues either unmarked, marked in the…

  3. Breaking cover: neural responses to slow and fast camouflage-breaking motion.

    Science.gov (United States)

    Yin, Jiapeng; Gong, Hongliang; An, Xu; Chen, Zheyuan; Lu, Yiliang; Andolina, Ian M; McLoughlin, Niall; Wang, Wei

    2015-08-22

    Primates need to detect and recognize camouflaged animals in natural environments. Camouflage-breaking movements are often the only visual cue available to accomplish this. Specifically, sudden movements are often detected before full recognition of the camouflaged animal is made, suggesting that initial processing of motion precedes the recognition of motion-defined contours or shapes. What are the neuronal mechanisms underlying this initial processing of camouflaged motion in the primate visual brain? We investigated this question using intrinsic-signal optical imaging of macaque V1, V2 and V4, along with computer simulations of the neural population responses. We found that camouflaged motion at low speed was processed as a direction signal by both direction- and orientation-selective neurons, whereas at high speed, camouflaged motion was encoded as a motion-streak signal primarily by orientation-selective neurons. No population responses were found to be invariant to the camouflage contours. These results suggest that the initial processing of camouflaged motion at low and high speeds is encoded as direction and motion-streak signals in primate early visual cortices. These processes are consistent with a spatio-temporal filter mechanism that provides for fast processing of motion signals, prior to full recognition of camouflage-breaking animals. © 2015 The Authors.

  4. Unpredictable visual changes cause temporal memory averaging.

    Science.gov (United States)

    Ohyama, Junji; Watanabe, Katsumi

    2007-09-01

    Various factors influence the perceived timing of visual events. Yet, little is known about the ways in which transient visual stimuli affect the estimation of the timing of other visual events. In the present study, we examined how a sudden color change of an object would influence the remembered timing of another transient event. In each trial, subjects saw a green or red disk travel in circular motion. A visual flash (white frame) occurred at random times during the motion sequence. The color of the disk changed either at random times (unpredictable condition), at a fixed time relative to the motion sequence (predictable condition), or it did not change (no-change condition). The subjects' temporal memory of the visual flash in the predictable condition was as veridical as that in the no-change condition. In the unpredictable condition, however, the flash was reported to occur closer to the timing of the color change than actual timing. Thus, an unpredictable visual change distorts the temporal memory of another visual event such that the remembered moment of the event is closer to the timing of the unpredictable visual change.

  5. Transient Severe Motion Artifact Related to Gadoxetate Disodium-Enhanced Liver MRI: Frequency and Risk Evaluation at a German Institution.

    Science.gov (United States)

    Well, Lennart; Rausch, Vanessa Hanna; Adam, Gerhard; Henes, Frank Oliver; Bannas, Peter

    2017-07-01

    Purpose: Varying frequencies (5 - 18 %) of contrast-related transient severe motion (TSM) imaging artifacts during gadoxetate disodium-enhanced arterial phase liver MRI have been reported. Since previous reports originated from the United States and Japan, we aimed to determine the frequency of TSM at a German institution and to correlate it with potential risk factors and previously published results. Materials and Methods: Two age- and sex-matched groups were retrospectively selected (gadoxetate disodium n = 89; gadobenate dimeglumine n = 89) from dynamic contrast-enhanced MRI examinations in a single center. Respiratory motion-related artifacts in non-enhanced and dynamic phases were assessed independently by two readers blinded to contrast agents on a 4-point scale. Scores of ≥ 3 were considered severe motion artifacts. Severe motion artifacts in arterial phases were considered TSM if scores in all other phases were < 3. Potential risk factors for TSM were evaluated via logistic regression analysis. Results: For gadoxetate disodium, the mean score for respiratory motion artifacts was significantly higher in the arterial phase (2.2 ± 0.9) compared to all other phases (1.6 ± 0.7) (p < 0.05). Logistic regression did not identify any risk factors (all p > 0.05). Conclusion: We revealed a high frequency of TSM after injection of gadoxetate disodium at a German institution, substantiating the importance of a diagnosis-limiting phenomenon that so far had only been reported from the United States and Japan. In accordance with previous studies, we did not identify associated risk factors for TSM. Key Points: · Gadoxetate disodium causes TSM in a relevant number of patients. · The frequency of TSM is similar between the USA, Japan and Germany. · To date, no validated risk factors for TSM have been identified. Citation Format: Well L, Rausch VH, Adam G et al. Transient Severe Motion Artifact Related to Gadoxetate Disodium-Enhanced Liver MRI: Frequency and Risk Evaluation at a German Institution.
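The classification rule in the abstract can be encoded as a small function. This is our assumed reading of the rule (motion scored 1-4 per phase; an examination counts as TSM when the arterial phase scores ≥ 3 while every other phase scores below 3), not the authors' code; phase names are illustrative.

```python
def is_tsm(scores):
    """Classify one examination as transient severe motion (TSM).

    scores: dict mapping phase name -> respiratory motion score (1-4).
    TSM = severe artifact (>= 3) confined to the arterial phase.
    """
    arterial = scores["arterial"]
    others = [v for phase, v in scores.items() if phase != "arterial"]
    return arterial >= 3 and all(v < 3 for v in others)

# Severe motion only in the arterial phase -> TSM.
tsm_case = is_tsm({"precontrast": 1, "arterial": 3, "venous": 2, "delayed": 1})
# Severe motion also in precontrast -> generalized motion, not TSM.
not_tsm = is_tsm({"precontrast": 3, "arterial": 3, "venous": 2, "delayed": 1})
```

In a study like this one, the function would be applied to each reader's scores per examination before computing TSM frequencies and running the logistic regression.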

  6. Visual and proprioceptive interaction in patients with bilateral vestibular loss.

    Science.gov (United States)

    Cutfield, Nicholas J; Scott, Gregory; Waldman, Adam D; Sharp, David J; Bronstein, Adolfo M

    2014-01-01

    Following bilateral vestibular loss (BVL), patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high level (100 Hz) and low level (30 Hz) control stimulus was applied over the left splenius capitis; only the high frequency stimulus generates a significant proprioceptive stimulus. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high level neck proprioceptive input had more cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions, but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function.

  7. Short-term visual deprivation does not enhance passive tactile spatial acuity.

    Directory of Open Access Journals (Sweden)

    Michael Wong

    An important unresolved question in sensory neuroscience is whether, and if so with what time course, tactile perception is enhanced by visual deprivation. In three experiments involving 158 normally sighted human participants, we assessed whether tactile spatial acuity improves with short-term visual deprivation over periods ranging from under 10 to over 110 minutes. We used an automated, precisely controlled two-interval forced-choice grating orientation task to assess each participant's ability to discern the orientation of square-wave gratings pressed against the stationary index finger pad of the dominant hand. A two-down one-up staircase (Experiment 1) or a Bayesian adaptive procedure (Experiments 2 and 3) was used to determine the groove width of the grating whose orientation each participant could reliably discriminate. The experiments consistently showed that tactile grating orientation discrimination does not improve with short-term visual deprivation. In fact, we found that tactile performance degraded slightly but significantly upon a brief period of visual deprivation (Experiment 1) and did not improve over periods of up to 110 minutes of deprivation (Experiments 2 and 3). The results additionally showed that grating orientation discrimination tends to improve upon repeated testing, and confirmed that women significantly outperform men on the grating orientation task. We conclude that, contrary to two recent reports but consistent with an earlier literature, passive tactile spatial acuity is not enhanced by short-term visual deprivation. Our findings have important theoretical and practical implications. On the theoretical side, the findings set limits on the time course over which neural mechanisms such as crossmodal plasticity may operate to drive sensory changes; on the practical side, the findings suggest that researchers who compare tactile acuity of blind and sighted participants should not blindfold the sighted participants.
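The two-down one-up staircase used in Experiment 1 can be simulated as below: groove width shrinks after two consecutive correct responses and grows after each error, so the track converges on the ~70.7%-correct point of the psychometric function. The logistic observer model and every numeric parameter (threshold, slope, step size, reversal count) are invented for illustration, not the study's values.

```python
import math
import random

random.seed(1)

def p_correct(groove, threshold=1.5, slope=4.0):
    """Assumed logistic observer: wider grooves are easier to discriminate."""
    return 0.5 + 0.5 / (1.0 + math.exp(-slope * (groove - threshold)))

groove, step = 3.0, 0.2       # start easy; fixed step size (mm, hypothetical)
correct_streak = 0
reversals = []                # groove widths where the track changed direction
last_dir = 0                  # +1 = last change made task easier, -1 = harder

while len(reversals) < 12:
    if random.random() < p_correct(groove):
        correct_streak += 1
        if correct_streak == 2:          # two correct in a row -> harder
            correct_streak = 0
            groove = max(groove - step, 0.1)
            if last_dir == +1:
                reversals.append(groove)
            last_dir = -1
    else:                                # any error -> easier
        correct_streak = 0
        groove += step
        if last_dir == -1:
            reversals.append(groove)
        last_dir = +1

# Threshold estimate: mean groove width over the last 8 reversals.
estimate = sum(reversals[-8:]) / 8.0
```

With these assumed parameters the track oscillates near the observer's 70.7% point (just below the 1.5 "threshold" of the logistic model), which is why averaging late reversals gives a stable acuity estimate.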

  8. Global motion perception is associated with motor function in 2-year-old children.

    Science.gov (United States)

    Thompson, Benjamin; McKinlay, Christopher J D; Chakraborty, Arijit; Anstice, Nicola S; Jacobs, Robert J; Paudel, Nabin; Yu, Tzu-Ying; Ansell, Judith M; Wouldes, Trecia A; Harding, Jane E

    2017-09-29

    The dorsal visual processing stream, which includes V1, motion sensitive area V5 and the posterior parietal lobe, supports visually guided motor function. Two recent studies have reported associations between global motion perception, a behavioural measure of processing in V5, and motor function in pre-school and school aged children. This indicates a relationship between visual and motor development and also supports the use of global motion perception to assess overall dorsal stream function in studies of human neurodevelopment. We investigated whether associations between vision and motor function were present at 2 years of age, a substantially earlier stage of development. The Bayley III test of Infant and Toddler Development and measures of vision including visual acuity (Cardiff Acuity Cards), stereopsis (Lang stereotest) and global motion perception were attempted in 404 2-year-old children (±4 weeks). Global motion perception (quantified as a motion coherence threshold) was assessed by observing optokinetic nystagmus in response to random dot kinematograms of varying coherence. Linear regression revealed that global motion perception was modestly but statistically significantly associated with Bayley III composite motor scores (r² = 0.06, p < 0.05) and fine motor scores (r² = 0.06, p < 0.05). Stereopsis was also associated with gross motor and fine motor scores, but unaided visual acuity was not statistically significantly associated with any of the motor scores. These results demonstrate that global motion perception and binocular vision are associated with motor function at an early stage of development. Global motion perception can be used as a partial measure of dorsal stream function from early childhood. Copyright © 2017 Elsevier B.V. All rights reserved.
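The stimulus class used here, a random dot kinematogram (RDK), can be sketched as a single frame update: a 'coherence' fraction of dots steps in the signal direction while the rest relocate at random, and the coherence at which the observer first detects the global direction is the motion coherence threshold. The dot count, step size, and aperture below are invented parameters for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def update_dots(xy, coherence, step=0.02, extent=1.0):
    """One frame update of a random dot kinematogram.

    xy: (n, 2) dot positions in [0, extent); coherence: fraction of dots
    carrying the rightward signal. Noise dots are fully replotted.
    """
    n = len(xy)
    n_signal = int(round(coherence * n))
    idx = rng.permutation(n)
    new = xy.copy()
    new[idx[:n_signal], 0] += step                    # coherent rightward step
    new[idx[n_signal:]] = rng.uniform(0.0, extent,    # noise dots replotted
                                      (n - n_signal, 2))
    return np.mod(new, extent)                        # wrap at aperture edge

dots = rng.uniform(0.0, 1.0, (100, 2))
moved = update_dots(dots, coherence=0.3)
```

A threshold procedure would present sequences of such frames at progressively varied coherence (here, driving optokinetic nystagmus rather than a button press) to find the weakest global signal the visual system can extract.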

  9. A TMS study on the contribution of visual area V5 to the perception of implied motion in art and its appreciation.

    Science.gov (United States)

    Cattaneo, Zaira; Schiavi, Susanna; Silvanto, Juha; Nadal, Marcos

    2017-01-01

    Over the last decade, researchers have sought to understand the brain mechanisms involved in the appreciation of art. Previous studies reported increased activity in sensory processing regions for artworks that participants find more appealing. Here we investigated the intriguing possibility that activity in cortical area V5, a region in the occipital cortex mediating physical and implied motion detection, is related not only to the generation of a sense of motion from visual cues used in artworks, but also to the appreciation of those artworks. Art-naïve participants viewed a series of paintings and quickly judged whether or not the paintings conveyed a sense of motion, and whether or not they liked them. Triple-pulse TMS applied over V5 while viewing the paintings significantly decreased the perceived sense of motion, and also significantly reduced liking of abstract (but not representational) paintings. Our data demonstrate that V5 is involved in extracting motion information even when the objects whose motion is implied are pictorial representations (as opposed to photographs or film frames), and even in the absence of any figurative content. Moreover, our study suggests that, in the case of untrained people, V5 activity plays a causal role in the appreciation of abstract but not of representational art.

  10. Intermittently-visual Tracking Experiments Reveal the Roles of Error-correction and Predictive Mechanisms in the Human Visual-motor Control System

    Science.gov (United States)

    Hayashi, Yoshikatsu; Tamura, Yurie; Sase, Kazuya; Sugawara, Ken; Sawada, Yasuji

    A prediction mechanism is necessary in human visual-motor control to compensate for the delay of the sensory-motor system. In a previous study, “proactive control” was discussed as one example of the predictive function of human beings, in which the motion of the hands preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit is segmented into target-visible and target-invisible regions. The main results of this research were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli is shortened by more than 10%. This shortening of the period of the rhythm in the brain accelerates the hand motion as soon as the visual information is cut off, and causes the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.

  11. CREATING AUDIO VISUAL DIALOGUE TASK AS STUDENTS’ SELF ASSESSMENT TO ENHANCE THEIR SPEAKING ABILITY

    Directory of Open Access Journals (Sweden)

    Novia Trisanti

    2017-04-01

    The study gives an overview of employing an audio visual dialogue task as a student creativity task and self-assessment in an EFL speaking class in tertiary education, to enhance the students' speaking ability. The qualitative research was done in one of the speaking classes at the English Department, Semarang State University, Central Java, Indonesia. The results, as seen from the self-assessment rubric, show that the oral performance through audio visual recorded tasks done by the students as their self-assessment gave positive evidence. The audio visual dialogue task can be very beneficial since it can motivate the students' learning and increase their learning experiences. The self-assessment can be a valuable additional means to improve their speaking ability, since it is one of the motives that drive self-evaluation, along with self-verification and self-enhancement.

  12. Trunk motion visual feedback during walking improves dynamic balance in older adults: Assessor blinded randomized controlled trial.

    Science.gov (United States)

    Anson, Eric; Ma, Lei; Meetam, Tippawan; Thompson, Elizabeth; Rathore, Roshita; Dean, Victoria; Jeka, John

    2018-05-01

    Virtual reality and augmented feedback have become more prevalent as training methods to improve balance. Few reports exist on the benefits of providing trunk motion visual feedback (VFB) during treadmill walking, and most of those reports only describe within session changes. To determine whether trunk motion VFB treadmill walking would improve over-ground balance for older adults with self-reported balance problems. 40 adults (75.8 years (SD 6.5)) with self-reported balance difficulties or a history of falling were randomized to a control or experimental group. Everyone walked on a treadmill at a comfortable speed 3×/week for 4 weeks in 2 min bouts separated by a seated rest. The control group was instructed to look at a stationary bulls-eye target while the experimental group also saw a moving cursor superimposed on the stationary bulls-eye that represented VFB of their walking trunk motion. The experimental group was instructed to keep the cursor in the center of the bulls-eye. Somatosensory (monofilaments and joint position testing) and vestibular function (canal specific clinical head impulses) was evaluated prior to intervention. Balance and mobility were tested before and after the intervention using Berg Balance Test, BESTest, mini-BESTest, and Six Minute Walk. There were no significant differences between groups before the intervention. The experimental group significantly improved on the BESTest (p = 0.031) and the mini-BEST (p = 0.019). The control group did not improve significantly on any measure. Individuals with more profound sensory impairments had a larger improvement on dynamic balance subtests of the BESTest. Older adults with self-reported balance problems improve their dynamic balance after training using trunk motion VFB treadmill walking. Individuals with worse sensory function may benefit more from trunk motion VFB during walking than individuals with intact sensory function. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Enhanced visual short-term memory in action video game players.

    Science.gov (United States)

    Blacker, Kara J; Curby, Kim M

    2013-08-01

    Visual short-term memory (VSTM) is critical for acquiring visual knowledge and shows marked individual variability. Previous work has illustrated a VSTM advantage among action video game players (Boot et al. Acta Psychologica 129:387-398, 2008). A growing body of literature has suggested that action video game playing can bolster visual cognitive abilities in a domain-general manner, including abilities related to visual attention and the speed of processing, providing some potential bases for this VSTM advantage. In the present study, we investigated the VSTM advantage among video game players and assessed whether enhanced processing speed can account for this advantage. Experiment 1, using simple colored stimuli, revealed that action video game players demonstrate a similar VSTM advantage over nongamers, regardless of whether they are given limited or ample time to encode items into memory. Experiment 2, using complex shapes as the stimuli to increase the processing demands of the task, replicated this VSTM advantage, irrespective of encoding duration. These findings are inconsistent with a speed-of-processing account of this advantage. An alternative, attentional account, grounded in the existing literature on the visuo-cognitive consequences of video game play, is discussed.

  14. Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.

    Science.gov (United States)

    Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James

    2016-03-21

    Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time-which enhances the coarser and slower features of the scene at the expense of noisier finer and faster features-has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21]. Copyright © 2016 Elsevier Ltd. All rights reserved.
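
    The statistical basis of the summation described above can be sketched with a toy simulation (this is not the hawkmoth model itself): pooling n independent noisy photoreceptor samples averages out noise, so the signal-to-noise ratio grows roughly as the square root of n.

```python
import math
import random

# Toy illustration: SNR gain from pooling n independent noisy samples.
random.seed(1)
SIGNAL, NOISE_SD = 1.0, 2.0

def snr_after_pooling(n, trials=2000):
    estimates = [sum(SIGNAL + random.gauss(0.0, NOISE_SD) for _ in range(n)) / n
                 for _ in range(trials)]
    m = sum(estimates) / trials
    sd = math.sqrt(sum((e - m) ** 2 for e in estimates) / trials)
    return m / sd

snr_1 = snr_after_pooling(1)    # no summation
snr_16 = snr_after_pooling(16)  # pooling 16 samples: roughly 4x the SNR
```

    The trade-off the abstract notes follows directly: the pooled samples span space and time, so the SNR gain is paid for in spatial and temporal resolution.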

  15. Modulation of Visually Evoked Postural Responses by Contextual Visual, Haptic and Auditory Information: A ‘Virtual Reality Check’

    Science.gov (United States)

    Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.

    2013-01-01

    Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760
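
    Postural responses such as those above are often summarised as root-mean-square (RMS) displacement about the mean position; the measure below is an assumed, generic example, not the paper's specified analysis pipeline:

```python
import math

# RMS of lateral centre-of-pressure displacement (synthetic data).
def rms_sway(samples):
    m = sum(samples) / len(samples)
    return math.sqrt(sum((s - m) ** 2 for s in samples) / len(samples))

lateral_cm = [0.0, 1.0, -1.0, 2.0, -2.0]  # made-up lateral positions
r = rms_sway(lateral_cm)
```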

  16. Projectile Motion Hoop Challenge

    Science.gov (United States)

    Jordan, Connor; Dunn, Amy; Armstrong, Zachary; Adams, Wendy K.

    2018-04-01

    Projectile motion is a common phenomenon that is used in introductory physics courses to help students understand motion in two dimensions. Authors have shared a range of ideas for teaching this concept and the associated kinematics in The Physics Teacher; however, the "Hoop Challenge" is a new setup not before described in TPT. In this article an experiment is illustrated to explore projectile motion in a fun and challenging manner that has been used with both high school and university students. With a few simple materials, students have a vested interest in being able to calculate the height of the projectile at a given distance from its launch site. They also have an exciting visual demonstration of projectile motion when the lab is over.
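
    The calculation the Hoop Challenge asks of students follows from the standard kinematic trajectory equation; the launch values below are illustrative, not from the article:

```python
import math

# Height y of a projectile at horizontal distance x, for launch speed v0
# and launch angle theta:
#   y = x*tan(theta) - g*x**2 / (2*(v0*cos(theta))**2)
def height_at(x, v0, theta_deg, g=9.81):
    th = math.radians(theta_deg)
    return x * math.tan(th) - g * x ** 2 / (2.0 * (v0 * math.cos(th)) ** 2)

# Example: launched at 5 m/s and 45 degrees, hoop placed 1.5 m downrange.
y_hoop = height_at(1.5, 5.0, 45.0)  # about 0.62 m above launch height
```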

  17. Modulation of visually evoked postural responses by contextual visual, haptic and auditory information: a 'virtual reality check'.

    Directory of Open Access Journals (Sweden)

    Georg F Meyer

    Full Text Available Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR.

  18. General principles in motion vision: color blindness of object motion depends on pattern velocity in honeybee and goldfish.

    Science.gov (United States)

    Stojcev, Maja; Radtke, Nils; D'Amaro, Daniele; Dyer, Adrian G; Neumeyer, Christa

    2011-07-01

    Visual systems can undergo striking adaptations to specific visual environments during evolution, but they can also be very "conservative." This seems to be the case in motion vision, which is surprisingly similar in species as distant as honeybee and goldfish. In both visual systems, motion vision measured with the optomotor response is color blind and mediated by one photoreceptor type only. Here, we ask whether this is also the case if the moving stimulus is restricted to a small part of the visual field, and test what influence velocity may have on chromatic motion perception. Honeybees were trained to discriminate between clockwise- and counterclockwise-rotating sector disks. Six types of disk stimuli differing in green receptor contrast were tested using three different rotational velocities. When green receptor contrast was at a minimum, bees were able to discriminate rotation directions with all colored disks at slow velocities of 6 and 12 Hz contrast frequency but not with a relatively high velocity of 24 Hz. In the goldfish experiment, the animals were trained to detect a moving red or blue disk presented in a green surround. Discrimination ability between this stimulus and a homogeneous green background was poor when the M-cone type was not or only slightly modulated at a high stimulus velocity (7 cm/s). However, discrimination was improved with slower stimulus velocities (4 and 2 cm/s). These behavioral results indicate that there is potentially an object motion system in both honeybee and goldfish, which is able to incorporate color information at relatively low velocities but is color blind at higher speeds. We thus propose that both honeybees and goldfish have multiple subsystems of object motion, which include achromatic as well as chromatic processing.
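
    The quantity such studies minimise or vary is the receptor-specific Michelson contrast between stimulus and background, computed from the excitations of a single photoreceptor class. The sketch below uses illustrative numbers, not the study's calibration:

```python
# Receptor-specific Michelson contrast from photoreceptor excitations.
def receptor_contrast(e_stimulus, e_background):
    return (e_stimulus - e_background) / (e_stimulus + e_background)

c_green = receptor_contrast(0.55, 0.45)  # near-minimal green-receptor contrast
```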

  19. Utilizing visual art to enhance the clinical observation skills of medical students.

    Science.gov (United States)

    Jasani, Sona K; Saks, Norma S

    2013-07-01

    Clinical observation is fundamental in practicing medicine, but these skills are rarely taught. Currently no evidence-based exercises/courses exist for medical student training in observation skills. The goal was to develop and teach a visual arts-based exercise for medical students, and to evaluate its usefulness in enhancing observation skills in clinical diagnosis. A pre- and posttest and evaluation survey were developed for a three-hour exercise presented to medical students just before starting clerkships. Students were provided with questions to guide discussion of both representational and non-representational works of art. Quantitative analysis revealed that the mean number of observations between pre- and posttests was not significantly different (n=70: 8.63 vs. 9.13, p=0.22). Qualitative analysis of written responses identified four themes: (1) use of subjective terminology, (2) scope of interpretations, (3) speculative thinking, and (4) use of visual analogies. Evaluative comments indicated that students felt the exercise enhanced both mindfulness and skills. Using visual art images with guided questions can train medical students in observation skills. This exercise can be replicated without specially trained personnel or art museum partnerships.

  20. X-ray visualization of a mosquito's head

    International Nuclear Information System (INIS)

    Kikuchi, Kenji; Mochizuki, Osamu

    2007-01-01

    A technology to visualize the internal anatomy of living animals has been developed for medical diagnostics and biology using synchrotron X-rays produced at the Photon Factory. The dynamic motion of the organs, muscles, and respiratory system of small insects is difficult to observe with conventional X-ray imaging because of a lack of spatial and temporal resolution. We visualized the motion of pumps located in a mosquito's head with a phase-contrast X-ray imaging technique using synchrotron X-rays. Isovue370 was fed with a 10% dilute glucose solution to visualize the flow. We found that the phase difference between the motions of the oral cavity pump and the pharynx pump was 180 degrees. (author)
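
    A 180-degree phase relation like the one reported can be recovered from two periodic signals via the lag that maximises their cross-correlation. The signals below are synthetic sinusoids for illustration, not the X-ray data:

```python
import math

# Two pump signals half a cycle out of phase (synthetic).
N, PERIOD = 500, 50
a = [math.sin(2 * math.pi * t / PERIOD) for t in range(N)]
b = [math.sin(2 * math.pi * t / PERIOD + math.pi) for t in range(N)]

def best_lag(x, y, max_lag):
    """Lag in samples that maximises the cross-correlation of x and y."""
    def xcorr(lag):
        return sum(x[t] * y[t - lag] for t in range(max_lag, len(x)))
    return max(range(max_lag + 1), key=xcorr)

lag = best_lag(a, b, PERIOD - 1)
phase_deg = 360.0 * lag / PERIOD  # half a period -> 180 degrees
```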

  1. Artificial horizon effects on motion sickness and performance.

    Science.gov (United States)

    Tal, Dror; Gonen, Adi; Wiener, Guy; Bar, Ronen; Gil, Amnon; Nachum, Zohar; Shupak, Avi

    2012-07-01

    To investigate whether the projection of Earth-referenced scenes during provocative motion can alleviate motion sickness severity and prevent motion sickness-induced degradation of performance. Exposure to unfamiliar motion patterns commonly results in motion sickness and decreased performance. Thirty subjects with moderate-to-severe motion sickness susceptibility were exposed to the recorded motion profile of a missile boat under moderate sea conditions in a 3-degrees-of-freedom ship motion simulator. During a 120-minute simulated voyage, the study participants were repeatedly put through a performance test battery and completed a motion sickness susceptibility questionnaire, while self-referenced and Earth-referenced visual scenes were projected inside the closed simulator cabin. A significant decrease was found in the maximal motion sickness severity score, from 9.83 ± 9.77 (mean ± standard deviation) to 7.23 ± 7.14, with the projection of Earth-referenced scenes during the roll, pitch, and heave movements of the simulator. Although there was a significant decrease in sickness severity, substantial symptoms still persisted. Decision making, vision, concentration, memory, simple reasoning, and psychomotor skills all deteriorated under the motion conditions. However, no significant differences between the projection conditions could be found in the scores of any of the performance tests. Visual information regarding the vessel's movement provided by an artificial horizon device might decrease motion sickness symptoms. However, although this device might be suitable for passive transportation, the continued deterioration in performance measures indicates that it provides no significant advantage for personnel engaged in the active operation of modern vessels.

  2. Pharmacological Mechanisms of Cortical Enhancement Induced by the Repetitive Pairing of Visual/Cholinergic Stimulation.

    Directory of Open Access Journals (Sweden)

    Jun-Il Kang

    Full Text Available Repetitive visual training paired with electrical activation of cholinergic projections to the primary visual cortex (V1) induces long-term enhancement of cortical processing in response to the visual training stimulus. To better determine the receptor subtypes mediating this effect, selective pharmacological blockade of V1 nicotinic (nAChR), M1 and M2 muscarinic (mAChR), or GABAergic A (GABAAR) receptors was performed during the training session and visual evoked potentials (VEPs) were recorded before and after training. The training session consisted of the exposure of awake, adult rats to an orientation-specific 0.12 CPD grating paired with an electrical stimulation of the basal forebrain for a duration of 1 week for 10 minutes per day. Pharmacological agents were infused intracortically during this period. The post-training VEP amplitude was significantly increased compared to the pre-training values for the trained spatial frequency and for adjacent spatial frequencies up to 0.3 CPD, suggesting a long-term increase of V1 sensitivity. This increase was totally blocked by the nAChR antagonist as well as by the M2 mAChR subtype and GABAAR antagonists. Moreover, administration of the M2 mAChR antagonist also significantly decreased the amplitude of the control VEPs, suggesting a suppressive effect on cortical responsiveness. However, the M1 mAChR antagonist blocked the increase of the VEP amplitude only for the high spatial frequency (0.3 CPD), suggesting that M1's role was limited to the spread of the enhancement effect to a higher spatial frequency. More generally, all the drugs used blocked the VEP increase at 0.3 CPD. Further, use of each of the aforementioned receptor antagonists blocked training-induced changes in gamma and beta band oscillations. These findings demonstrate that visual training coupled with cholinergic stimulation improved perceptual sensitivity by enhancing cortical responsiveness in V1. This enhancement is mainly mediated by nAChR.

  3. Visual field measurement with motion sensitivity screening test

    African Journals Online (AJOL)

    has been shown that early ocular lesions which manifest as visual field defects or ... easy-to-understand computer perimetry that could be useful in monitoring visual field changes in onchocer- .... education with the equivalent of ordinary level.

  4. The Effect of Auditory and Visual Motion Picture Descriptive Modalities in Teaching Perceptual-Motor Skills Used in the Grading of Cereal Grains.

    Science.gov (United States)

    Hannemann, James William

    This study was designed to discover whether a student learns to imitate the skills demonstrated in a motion picture more accurately when the supportive descriptive terminology is presented in an auditory (spoken) form or in a visual (captions) form. A six-minute color 16mm film was produced--"Determining the Test Weight per Bushel of Yellow Corn".…

  5. The Perception of the Higher Derivatives of Visual Motion.

    Science.gov (United States)

    1986-06-24

    the two runs the motion was uniform. It was found that sensitivity to acceleration (as indicated by the proportion of correct discriminations) decreased… that discrimination of direction of motion in depth has submaxima… whose size alternately expanded or contracted at a fixed rate, with the transition… stereocinetici. Archivio Italiano di Psicologia, 1924, 3, 105-120. …detection: Comparison of postadaptation thresholds. Journal of the Optical Society of America, 1983.

  6. A model for the pilot's use of motion cues in roll-axis tracking tasks

    Science.gov (United States)

    Levison, W. H.; Junker, A. M.

    1977-01-01

    Simulated target-following and disturbance-regulation tasks were explored with subjects using visual-only and combined visual and motion cues. The effects of motion cues on task performance and pilot response behavior were appreciably different for the two task configurations and were consistent with data reported in earlier studies for similar task configurations. The optimal-control model for pilot/vehicle systems provided a task-independent framework for accounting for the pilot's use of motion cues. Specifically, the availability of motion cues was modeled by augmenting the set of perceptual variables to include position, rate, acceleration, and acceleration-rate of the motion simulator, and results were consistent with the hypothesis of attention-sharing between visual and motion variables. This straightforward informational model allowed accurate model predictions of the effects of motion cues on a variety of response measures for both the target-following and disturbance-regulation tasks.

  7. A combined brain-computer interface based on P300 potentials and motion-onset visual evoked potentials.

    Science.gov (United States)

    Jin, Jing; Allison, Brendan Z; Wang, Xingyu; Neuper, Christa

    2012-04-15

    Brain-computer interfaces (BCIs) allow users to communicate via brain activity alone. Many BCIs rely on the P300 and other event-related potentials (ERPs) that are elicited when target stimuli flash. Although there has been considerable research exploring ways to improve P300 BCIs, surprisingly little work has focused on new ways to change visual stimuli to elicit more recognizable ERPs. In this paper, we introduce a "combined" BCI based on P300 potentials and motion-onset visual evoked potentials (M-VEPs) and compare it with BCIs based on each simple approach (P300 and M-VEP). Offline data suggested that performance would be best in the combined paradigm. Online tests with adaptive BCIs confirmed that our combined approach is practical in an online BCI, and yielded better performance than the other two approaches (P<0.05) without annoying or overburdening the subject. The highest mean classification accuracy (96%) and practical bit rate (26.7 bit/s) were obtained from the combined condition. Copyright © 2012 Elsevier B.V. All rights reserved.
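
    BCI bit rates of this kind are commonly computed with the Wolpaw information-transfer formula. The sketch below applies it with an assumed class count for illustration; the paper's exact configuration is not reproduced here:

```python
import math

# Wolpaw bits per selection:
#   B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
def bits_per_selection(n_classes, accuracy):
    if accuracy >= 1.0:
        return math.log2(n_classes)
    return (math.log2(n_classes)
            + accuracy * math.log2(accuracy)
            + (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_classes - 1)))

b = bits_per_selection(12, 0.96)  # e.g. 12 targets at 96% accuracy
```

    Multiplying bits per selection by selections per unit time gives the practical bit rate reported in such studies.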

  8. Transient severe motion artifact related to gadoxetate disodium-enhanced liver MRI. Frequency and risk evaluation at a German institution

    Energy Technology Data Exchange (ETDEWEB)

    Well, Lennart; Rausch, Vanessa Hanna; Adam, Gerhard; Henes, Frank Oliver; Bannas, Peter [Univ. Medical Center Hamburg-Eppendorf, Hamburg (Germany). Dept. for Diagnostic and Interventional Radiology and Nuclear Medicine

    2017-07-15

    Varying frequencies (5 - 18%) of contrast-related transient severe motion (TSM) imaging artifacts during gadoxetate disodium-enhanced arterial phase liver MRI have been reported. Since previous reports originated from the United States and Japan, we aimed to determine the frequency of TSM at a German institution and to correlate it with potential risk factors and previously published results. Two age- and sex-matched groups were retrospectively selected (gadoxetate disodium n = 89; gadobenate dimeglumine n = 89) from dynamic contrast-enhanced MRI examinations in a single center. Respiratory motion-related artifacts in non-enhanced and dynamic phases were assessed independently by two readers blinded to contrast agents on a 4-point scale. Scores of ≥3 were considered as severe motion artifacts. Severe motion artifacts in arterial phases were considered as TSM if scores in all other phases were < 3. Potential risk factors for TSM were evaluated via logistic regression analysis. For gadoxetate disodium, the mean score for respiratory motion artifacts was significantly higher in the arterial phase (2.2 ± 0.9) compared to all other phases (1.6 ± 0.7) (p < 0.05). The frequency of TSM was significantly higher with gadoxetate disodium (n = 19; 21.1 %) than with gadobenate dimeglumine (n = 1; 1.1%) (p < 0.001). The frequency of TSM at our institution is similar to some, but not all previously published findings. Logistic regression analysis did not show any significant correlation between TSM and risk factors (all p>0.05). We revealed a high frequency of TSM after injection of gadoxetate disodium at a German institution, substantiating the importance of a diagnosis-limiting phenomenon that so far has only been reported from the United States and Japan. In accordance with previous studies, we did not identify associated risk factors for TSM.
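
    A univariate risk-factor screen of the kind that feeds such a logistic regression can be sketched with an odds ratio and its Wald 95% confidence interval from a 2x2 table. The counts below are made up for illustration, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """a/b: TSM yes/no among exposed; c/d: TSM yes/no among unexposed."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(10, 30, 9, 40)
significant = not (lo <= 1.0 <= hi)  # a CI spanning 1 -> not significant
```

    A confidence interval that contains 1, as here, mirrors the study's finding of no significant risk factor association.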

  9. Decision-level adaptation in motion perception.

    Science.gov (United States)

    Mather, George; Sharman, Rebecca J

    2015-12-01

    Prolonged exposure to visual stimuli causes a bias in observers' responses to subsequent stimuli. Such adaptation-induced biases are usually explained in terms of changes in the relative activity of sensory neurons in the visual system which respond selectively to the properties of visual stimuli. However, the bias could also be due to a shift in the observer's criterion for selecting one response rather than the alternative; adaptation at the decision level of processing rather than the sensory level. We investigated whether adaptation to implied motion is best attributed to sensory-level or decision-level bias. Three experiments sought to isolate decision factors by changing the nature of the participants' task while keeping the sensory stimulus unchanged. Results showed that adaptation-induced bias in reported stimulus direction only occurred when the participants' task involved a directional judgement, and disappeared when adaptation was measured using a non-directional task (reporting where motion was present in the display, regardless of its direction). We conclude that adaptation to implied motion is due to decision-level bias, and that a propensity towards such biases may be widespread in sensory decision-making.
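
    The sensory-versus-decision distinction above maps directly onto signal detection theory: a sensory change alters sensitivity (d'), while a decision-level change shifts the response criterion (c). The rates below are illustrative, not the study's data:

```python
from statistics import NormalDist

def d_prime_and_criterion(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

d_pre, c_pre = d_prime_and_criterion(0.84, 0.16)    # before adaptation
d_post, c_post = d_prime_and_criterion(0.70, 0.07)  # biased responding after
```

    Here d' is nearly unchanged while c shifts: the signature of a decision-level bias rather than a sensory one.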

  10. Subjective Vertical Conflict Theory and Space Motion Sickness.

    Science.gov (United States)

    Chen, Wei; Chao, Jian-Gang; Wang, Jin-Kun; Chen, Xue-Wen; Tan, Cheng

    2016-02-01

    Space motion sickness (SMS) remains a troublesome problem during spaceflight. The subjective vertical (SV) conflict theory postulates that all motion sickness provoking situations are characterized by a condition in which the SV sensed from gravity and visual and idiotropic cues differs from the expected vertical. This theory has been successfully used to predict motion sickness in different vehicles on Earth. We have summarized the most outstanding and recent studies on the illusions and characteristics associated with spatial disorientation and SMS during weightlessness, such as cognitive map and mental rotation, the visual reorientation and inversion illusions, and orientation preferences between visual scenes and the internal z-axis of the body. The relationships between the SV and the incidence of and susceptibility to SMS as well as spatial disorientation were addressed. A consistent framework was presented to understand and explain SMS characteristics in more detail on the basis of the SV conflict theory, which is expected to be more advantageous in SMS prediction, prevention, and training.

  11. High-level, but not low-level, motion perception is impaired in patients with schizophrenia.

    Science.gov (United States)

    Kandil, Farid I; Pedersen, Anya; Wehnes, Jana; Ohrmann, Patricia

    2013-01-01

    Smooth pursuit eye movements are compromised in patients with schizophrenia and their first-degree relatives. Although research has demonstrated that the motor components of smooth pursuit eye movements are intact, motion perception has been shown to be impaired. In particular, studies have consistently revealed deficits in performance on tasks specific to the high-order motion area V5 (middle temporal area, MT) in patients with schizophrenia. In contrast, data from low-level motion detectors in the primary visual cortex (V1) have been inconsistent. To differentiate between low-level and high-level visual motion processing, we applied a temporal-order judgment task for motion events and a motion-defined figure-ground segregation task using patients with schizophrenia and healthy controls. Successful judgments in both tasks rely on the same low-level motion detectors in the V1; however, the first task is further processed in the higher-order motion area MT in the magnocellular (dorsal) pathway, whereas the second task requires subsequent computations in the parvocellular (ventral) pathway in visual area V4 and the inferotemporal cortex (IT). These latter structures are supposed to be intact in schizophrenia. Patients with schizophrenia revealed a significantly impaired temporal resolution on the motion-based temporal-order judgment task but only mild impairment in the motion-based segregation task. These results imply that low-level motion detection in V1 is not, or is only slightly, compromised; furthermore, our data restrain the locus of the well-known deficit in motion detection to areas beyond the primary visual cortex.

  12. New human-centered linear and nonlinear motion cueing algorithms for control of simulator motion systems

    Science.gov (United States)

    Telban, Robert J.

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input
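
    The Riccati equation mentioned above underlies optimal-control cueing. As a minimal sketch (not the thesis's neurocomputing solver), the scalar steady-state case for dx/dt = a*x + b*u with cost q*x^2 + r*u^2 reduces to solving -(b^2/r)*p^2 + 2*a*p + q = 0 for p > 0:

```python
import math

def scalar_care(a, b, q, r):
    """Positive root of the scalar continuous algebraic Riccati equation."""
    return r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)

def lqr_gain(a, b, q, r):
    return b * scalar_care(a, b, q, r) / r  # feedback law u = -k*x

a, b, q, r = -1.0, 1.0, 1.0, 1.0
p = scalar_care(a, b, q, r)
k = lqr_gain(a, b, q, r)
```

    The matrix case with time-varying weights has no such closed form, which is why the algorithm described above must solve it in real time.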

  13. Modality-dependent effect of motion information in sensory-motor synchronised tapping.

    Science.gov (United States)

    Ono, Kentaro

    2018-05-14

    Synchronised action is important for everyday life. Generally, the auditory domain is more sensitive for coding temporal information, and previous studies have shown that auditory-motor synchronisation is much more precise than visuo-motor synchronisation. Interestingly, adding motion information improves synchronisation with visual stimuli and the advantage of the auditory modality seems to diminish. However, whether adding motion information also improves auditory-motor synchronisation remains unknown. This study compared tapping accuracy with a stationary or moving stimulus in both auditory and visual modalities. Participants were instructed to tap in synchrony with the onset of a sound or flash in the stationary condition, while these stimuli were perceived as moving from side to side in the motion condition. The results demonstrated that synchronised tapping with a moving visual stimulus was significantly more accurate than tapping with a stationary visual stimulus, as previous studies have shown. However, tapping with a moving auditory stimulus was significantly poorer than tapping with a stationary auditory stimulus. Although motion information impaired audio-motor synchronisation, an advantage of auditory modality compared to visual modality still existed. These findings are likely the result of higher temporal resolution in the auditory domain, which is likely due to the physiological and structural differences in the auditory and visual pathways in the brain. Copyright © 2018 Elsevier B.V. All rights reserved.
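
    Tapping accuracy in such tasks is typically summarised by the mean and variability of tap-minus-stimulus asynchronies; taps usually precede the pacing stimulus, giving a negative mean asynchrony. The values below are made up for illustration:

```python
from statistics import mean, stdev

# Tap onset minus stimulus onset, in milliseconds (synthetic data).
asynchronies_ms = [-42, -35, -50, -38, -45]
mean_async = mean(asynchronies_ms)  # negative: taps anticipate the beat
sd_async = stdev(asynchronies_ms)   # smaller SD = more precise synchronisation
```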

  14. S4-1: Motion Detection Based on Recurrent Network Dynamics

    Directory of Open Access Journals (Sweden)

    Bart Krekelberg

    2012-10-01

    Full Text Available The detection of a sequence of events requires memory. The detection of visual motion is a well-studied example; there the memory allows the comparison of current with earlier visual input. This comparison results in an estimate of direction and speed of motion. The dominant model of motion detection in primates—the motion energy model—assumes that this memory resides in subclasses of cells with slower temporal dynamics. It is not clear, however, how such slow dynamics could arise. We used extracellularly recorded responses of neurons in the macaque middle temporal area to train an artificial neural network with recurrent connectivity. The trained network successfully reproduced the population response, and had many properties also found in the visual cortex (e.g., Gabor-like receptive fields, a hierarchy of simple and complex cells, motion opponency). When probed with reverse-correlation methods, the network's response was very similar to that of a feed-forward motion energy model, even though recurrent feedback is an essential part of its architecture. These findings show that a strongly recurrent network can masquerade as a feed-forward network. Moreover, they suggest a conceptually novel role for recurrent network connectivity: the creation of flexible temporal delays to implement short term memory and compute velocity.
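
    The comparison of current with delayed input at the heart of motion detection can be sketched with a toy Reichardt-style correlator (purely illustrative; not the trained recurrent network or the full motion energy model): multiply the current signal at one location by the earlier signal at a neighbouring location, and subtract the opponent direction.

```python
def correlator_response(frames):
    """frames: successive [left, right] luminance pairs; sign gives direction."""
    response = 0.0
    for prev, cur in zip(frames, frames[1:]):
        response += prev[0] * cur[1] - prev[1] * cur[0]  # rightward - leftward
    return response

rightward = correlator_response([[1, 0], [0, 1]])  # bar steps left -> right
leftward = correlator_response([[0, 1], [1, 0]])   # bar steps right -> left
```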

  15. Density of Visual Input Enhancement and Grammar Learning: A Research Proposal

    Science.gov (United States)

    Tran, Thu Hoang

    2009-01-01

    Research in the field of second language acquisition (SLA) has been done to ascertain the effectiveness of visual input enhancement (VIE) on grammar learning. However, one issue remains unexplored: the effects of VIE density on grammar learning. This paper presents a research proposal to investigate the effects of the density of VIE on English…

  16. Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds.

    Science.gov (United States)

    Wright, W Geoffrey

    2014-01-01

    Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not, remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed.

  17. Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds

    Directory of Open Access Journals (Sweden)

    W. Geoffrey Wright

    2014-04-01

Full Text Available Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini-review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not, remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed.

  18. How to use body tilt for the simulation of linear self motion

    NARCIS (Netherlands)

    Groen, E.L.; Bles, W.

    2004-01-01

    We examined to what extent body tilt may augment the perception of visually simulated linear self acceleration. Fourteen subjects judged visual motion profiles of fore-aft motion at four different frequencies between 0.04-0.33 Hz, and at three different acceleration amplitudes (0.44, 0.88 and 1.76

  19. Effects of auditory vection speed and directional congruence on perceptions of visual vection

    Science.gov (United States)

    Gagliano, Isabella Alexis

Spatial disorientation is a major contributor to aircraft mishaps. One potential contributing factor is vection, an illusion of self-motion. Although vection is commonly thought of as a visual illusion, it can also be produced through audition. The purpose of the current experiment was to explore interactions between conflicting visual and auditory vection cues, specifically with regard to the speed and direction of rotation. The ultimate goal was to explore the extent to which aural vection could diminish or enhance the perception of visual vection. The study used a 3 x 2 within-groups factorial design. Participants were exposed to three levels of aural rotation velocity (slower, matched, and faster, relative to visual rotation speed) and two levels of aural rotational congruence (congruent or incongruent rotation), plus two control conditions (visual-only and aural-only). Dependent measures included vection onset time, vection direction judgements, subjective vection strength ratings, vection speed ratings, and horizontal nystagmus frequency. Subjective responses to motion were assessed pre- and post-treatment, and oculomotor responses were assessed before, during, and following exposure to circular vection. The results revealed a significant effect of stimulus condition on vection strength. Specifically, directionally-congruent aural-visual vection resulted in significantly stronger vection than visual and aural vection alone. Perceptions of directionally-congruent aural-visual vection were slightly stronger than directionally-incongruent aural-visual vection, but not significantly so. No significant effects of aural rotation velocity on vection strength were observed. The results suggest directionally-incongruent aural vection could be used as a countermeasure for visual vection and directionally-congruent aural vection could be used to improve vection in virtual environments, provided further research is done.

  20. New insights into the role of motion and form vision in neurodevelopmental disorders.

    Science.gov (United States)

    Johnston, Richard; Pitchford, Nicola J; Roach, Neil W; Ledgeway, Timothy

    2017-12-01

A selective deficit in processing the global (overall) motion, but not form, of spatially extensive objects in the visual scene is frequently associated with several neurodevelopmental disorders, including preterm birth. Existing theories proposed to explain the origin of this visual impairment are, however, challenged by recent research. In this review, we explore alternative hypotheses for why deficits in the processing of global motion, relative to global form, might arise. We describe recent evidence that has utilised novel tasks of global motion and global form to elucidate the underlying nature of the visual deficit reported in different neurodevelopmental disorders. We also examine the role of IQ and how the sex of an individual can influence performance on these tasks, as these are factors that are associated with performance on global motion tasks but have not been systematically controlled for in previous studies exploring visual processing in clinical populations. Finally, we suggest that a new theoretical framework is needed for visual processing in neurodevelopmental disorders and present recommendations for future research. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. A Visual Arts Education pedagogical approach for enhancing quality of life for persons with dementia (innovative practice).

    Science.gov (United States)

    Tietyen, Ann C; Richards, Allan G

    2017-01-01

This paper reports a new and innovative pedagogical approach, based on the field of Visual Arts Education, that administers hands-on visual arts activities to persons with dementia. The aims of this approach are to enhance cognition and improve quality of life. These aims were explored in a small qualitative study with eight individuals with moderate dementia, and the results are published as a thesis. In this paper, we summarize and report the results of this small qualitative study and expand upon the rationale for the Visual Arts Education pedagogical approach, which has shown promise for enhancing cognitive processes and improving quality of life for persons with dementia.

  2. Effects of aging on perception of motion

    Science.gov (United States)

    Kaur, Manpreet; Wilder, Joseph; Hung, George; Julesz, Bela

    1997-09-01

Driving requires two basic visual components: 'visual sensory function' and 'higher order skills.' Among the elderly, it has been observed that when attention must be divided among multiple objects, attentional skills and relational processes are markedly impaired, along with basic visual sensory function. A high frame rate imaging system was developed to assess the elderly driver's ability to locate and distinguish computer generated images of vehicles and to determine their direction of motion in a simulated intersection. Preliminary experiments were performed at varying target speeds and angular displacements to study the effect of these parameters on motion perception. Results for subjects in four different age groups, ranging from mid-twenties to mid-sixties, show significantly better performance for the younger subjects as compared to the older ones.

  3. Beyond visual, aural and haptic movement perception: hMT+ is activated by electrotactile motion stimulation of the tongue in sighted and in congenitally blind individuals

    DEFF Research Database (Denmark)

    Matteau, Isabelle; Kupers, Ron; Ricciardi, Emiliano

    2010-01-01

    imaging (fMRI), we investigated brain responses in eight congenitally blind and nine sighted volunteers who had been trained to use the tongue display unit (TDU), a sensory substitution device which converts visual information into electrotactile pulses delivered to the tongue, to resolve a tactile motion...... discrimination task. Stimuli consisted of either static dots, dots moving coherently or dots moving in random directions. Both groups learned the task at the same rate and activated the hMT+ complex during tactile motion discrimination, although at different anatomical locations. Furthermore, the congenitally...

  4. Four-dimensional microscope- integrated optical coherence tomography to enhance visualization in glaucoma surgeries.

    Science.gov (United States)

    Pasricha, Neel Dave; Bhullar, Paramjit Kaur; Shieh, Christine; Viehland, Christian; Carrasco-Zevallos, Oscar Mijail; Keller, Brenton; Izatt, Joseph Adam; Toth, Cynthia Ann; Challa, Pratap; Kuo, Anthony Nanlin

    2017-01-01

    We report the first use of swept-source microscope-integrated optical coherence tomography (SS-MIOCT) capable of live four-dimensional (4D) (three-dimensional across time) imaging intraoperatively to directly visualize tube shunt placement and trabeculectomy surgeries in two patients with severe open-angle glaucoma and elevated intraocular pressure (IOP) that was not adequately managed by medical intervention or prior surgery. We performed tube shunt placement and trabeculectomy surgery and used SS-MIOCT to visualize and record surgical steps that benefitted from the enhanced visualization. In the case of tube shunt placement, SS-MIOCT successfully visualized the scleral tunneling, tube shunt positioning in the anterior chamber, and tube shunt suturing. For the trabeculectomy, SS-MIOCT successfully visualized the scleral flap creation, sclerotomy, and iridectomy. Postoperatively, both patients did well, with IOPs decreasing to the target goal. We found the benefit of SS-MIOCT was greatest in surgical steps requiring depth-based assessments. This technology has the potential to improve clinical outcomes.

  5. Visual working memory enhances the neural response to matching visual input

    NARCIS (Netherlands)

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-01-01

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw

  6. [Vestibular testing abnormalities in individuals with motion sickness].

    Science.gov (United States)

    Ma, Yan; Ou, Yongkang; Chen, Ling; Zheng, Yiqing

    2009-08-01

To evaluate the vestibular function in motion sickness. VNG, which tests the vestibular function of the horizontal semicircular canal, and CPT, which tests the vestibulospinal reflex and judges proprioceptive, visual, and vestibular status, were performed in 30 motion sickness patients and 20 healthy volunteers (control group). The Graybiel score was recorded at the same time. The two groups differed significantly in Graybiel score (12.67 +/- 11.78 vs 2.10 +/- 6.23; rank test, P<0.05) and in caloric test labyrinth value [(19.02 +/- 8.59) degrees/s vs (13.58 +/- 5.25) degrees/s; t test, P<0.05]; the caloric test labyrinth value of three patients in the motion sickness group exceeded 75 degrees/s. In computerized posturography testing (CPT), motion sickness patients were of the central type (66.7%) and the disperse type (23.3%); all of the control group were of the central type. The difference in CPT area between the two groups was statistically significant, the motion sickness group being obviously higher than the control group. When the vestibular system was stimulated during CPT, abnormalities (35%-50%) were found in the motion sickness group and none in the control group. In the overall CPT evaluation, there were only 2 cases of proprioceptive hypofunction and 3 of visual hypofunction, with no vestibular hypofunction; no hypofunction was found in the control group. Motion sickness patients have high vestibular susceptibility, some with vestibular hyperfunction. In posturography, a large number of motion sickness patients are of the central type without vestibular hypofunction, but find it hard to keep balance during vestibular stimulation.

  7. Motion of the esophagus due to cardiac motion.

    Directory of Open Access Journals (Sweden)

    Jacob Palmer

Full Text Available When imaging studies (e.g., CT) are used to quantify morphological changes in an anatomical structure, it is necessary to understand the extent and source of motion which can give imaging artifacts (e.g., blurring or local distortion). The objective of this study was to assess the magnitude of esophageal motion due to cardiac motion. We used retrospective electrocardiogram-gated contrast-enhanced computed tomography angiography images for this study. The anatomic region from the carina to the bottom of the heart was taken at deep-inspiration breath hold with the patients' arms raised above their shoulders, in a position similar to that used for radiation therapy. The esophagus was delineated on the diastolic phase of cardiac motion, and deformable registration was used to sequentially deform the images in nearest-neighbor phases among the 10 cardiac phases, starting from the diastolic phase. Using the 10 deformation fields generated from the deformable registration, the magnitude of the extreme displacements was then calculated for each voxel, and the mean and maximum displacement was calculated for each computed tomography slice for each patient. The average maximum esophageal displacement due to cardiac motion for all patients was 5.8 mm (standard deviation: 1.6 mm, maximum: 10.0 mm) in the transverse direction. For 21 of 26 patients, the largest esophageal motion was found in the inferior region of the heart; for the other patients, esophageal motion was approximately independent of superior-inferior position. The esophageal motion was larger at cardiac phases where the electrocardiogram R-wave occurs. In conclusion, the magnitude of esophageal motion near the heart due to cardiac motion is similar to that due to other sources of motion, including respiratory motion and intra-fraction motion. Larger cardiac motion will result in larger esophageal motion in a cardiac cycle.
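The per-voxel extreme-displacement step described above (take the maximum displacement magnitude over the cardiac phases, then summarize per slice) can be illustrated with a short sketch. This is not the authors' code; the array layout, function name, and voxel size are assumptions made for the example.

```python
import numpy as np

def max_displacement_per_slice(fields, voxel_mm=(1.0, 1.0)):
    """Given deformation fields of shape (phases, Z, Y, X, 2) holding
    in-plane (y, x) displacements in voxels, return the per-slice mean
    and maximum of the per-voxel extreme transverse displacement in mm."""
    disp = fields * np.asarray(voxel_mm)            # voxels -> mm
    mag = np.linalg.norm(disp, axis=-1)             # (phases, Z, Y, X)
    per_voxel_max = mag.max(axis=0)                 # extreme over cardiac phases
    return per_voxel_max.mean(axis=(1, 2)), per_voxel_max.max(axis=(1, 2))

# Toy example: 10 phases, 3 slices, 4x4 voxels, random small displacements.
rng = np.random.default_rng(0)
fields = rng.normal(scale=2.0, size=(10, 3, 4, 4, 2))
slice_mean, slice_max = max_displacement_per_slice(fields)
```

Per-slice statistics like these are what allow the per-patient mean and maximum displacement figures quoted in the abstract.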

  8. Effects of Different Heave Motion Components on Pilot Pitch Control Behavior

    Science.gov (United States)

    Zaal, Petrus M. T.; Zavala, Melinda A.

    2016-01-01

The study described in this paper had two objectives. The first objective was to investigate if a different weighting of heave motion components decomposed at the center of gravity, allowing for a higher fidelity of individual components, would result in pilot manual pitch control behavior and performance closer to that observed with full aircraft motion. The second objective was to investigate if decomposing the heave components at the aircraft's instantaneous center of rotation rather than at the center of gravity could result in additional improvements in heave motion fidelity. Twenty-one general aviation pilots performed a pitch attitude control task in an experiment conducted on the Vertical Motion Simulator at NASA Ames under different hexapod motion conditions. The large motion capability of the Vertical Motion Simulator also allowed for a full aircraft motion condition, which served as a baseline. The controlled dynamics were of a transport category aircraft trimmed close to the stall point. When the ratio of center of gravity pitch heave to center of gravity heave increased in the hexapod motion conditions, pilot manual control behavior and performance became increasingly more similar to what is observed with full aircraft motion. Pilot visual and motion gains significantly increased, while the visual lead time constant decreased. The pilot visual and motion time delays remained approximately constant and decreased, respectively. The neuromuscular damping and frequency both decreased, with their values more similar to what is observed with real aircraft motion when there was an equal weighting of the heave of the center of gravity and heave due to rotations about the center of gravity. In terms of open-loop performance, the disturbance and target crossover frequency increased and decreased, respectively, and their corresponding phase margins remained constant and increased, respectively. The decomposition point of the heave components only had limited

  9. Enhancing Nuclear Newcomer Training with 3D Visualization Learning Tools

    International Nuclear Information System (INIS)

    Gagnon, V.

    2016-01-01

Full text: While the nuclear power industry is trying to reinforce its safety and regain public support post-Fukushima, it is also faced with a very real challenge that affects its day-to-day activities: a rapidly aging workforce. Statistics show that close to 40% of the current nuclear power industry workforce will retire within the next five years. For newcomer countries, the challenge is even greater, having to develop a completely new workforce. The workforce replacement effort introduces nuclear newcomers of a new generation with different backgrounds and affinities. Major lifestyle differences between the two generations of workers result, amongst other things, in different learning habits and needs for this new breed of learners. Interactivity, high visual content and quick access to information are now necessary to achieve a high level of retention. To enhance existing training programmes or to support the establishment of new training programmes for newcomer countries, L-3 MAPPS has devised learning tools to enhance these training programmes focused on the “Practice-by-Doing” principle. L-3 MAPPS has coupled 3D computer visualization with high-fidelity simulation to bring real-time, simulation-driven animated components and systems allowing immersive and participatory, individual or classroom learning. (author)

  10. Embodied learning of a generative neural model for biological motion perception and inference.

    Science.gov (United States)

    Schrodt, Fabian; Layher, Georg; Neumann, Heiko; Butz, Martin V

    2015-01-01

Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons.

  11. Embodied Learning of a Generative Neural Model for Biological Motion Perception and Inference

    Directory of Open Access Journals (Sweden)

    Fabian eSchrodt

    2015-07-01

Full Text Available Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons.

  12. Motion Pattern Extraction and Event Detection for Automatic Visual Surveillance

    Directory of Open Access Journals (Sweden)

    Benabbas Yassine

    2011-01-01

Full Text Available Efficient analysis of human behavior in video surveillance scenes is a very challenging problem. Most traditional approaches fail when applied in real conditions and contexts with large numbers of persons, appearance ambiguity, and occlusion. In this work, we propose to deal with this problem by modeling the global motion information obtained from optical flow vectors. The obtained direction and magnitude models learn the dominant motion orientations and magnitudes at each spatial location of the scene and are used to detect the major motion patterns. The applied region-based segmentation algorithm groups local blocks that share the same motion direction and speed and allows a subregion of the scene to appear in different patterns. The second part of the approach consists of detecting events related to groups of people, namely merge, split, walk, run, local dispersion, and evacuation, by analyzing the instantaneous optical flow vectors and comparing them with the learned models. The approach is validated and experimented on standard datasets of the computer vision community. The qualitative and quantitative results are discussed.
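The direction-model idea (a dominant motion orientation per spatial block, learned from magnitude-weighted orientation histograms of flow vectors) can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the flow field is assumed already computed by some dense optical-flow routine, and the function name, block size, and thresholds are illustrative choices.

```python
import numpy as np

def dominant_directions(flow, block=8, n_bins=8, min_mag=0.5):
    """For each block x block region, histogram flow-vector orientations
    weighted by magnitude and return the mean direction (radians) of the
    dominant bin; blocks with too little motion get NaN.
    flow: (H, W, 2) array of per-pixel (dx, dy) displacements."""
    H, W, _ = flow.shape
    out = np.full((H // block, W // block), np.nan)
    for by in range(H // block):
        for bx in range(W // block):
            f = flow[by*block:(by+1)*block, bx*block:(bx+1)*block].reshape(-1, 2)
            mag = np.linalg.norm(f, axis=1)
            keep = mag > min_mag                     # ignore near-static pixels
            if not keep.any():
                continue
            ang = np.arctan2(f[keep, 1], f[keep, 0])
            hist, edges = np.histogram(ang, bins=n_bins, range=(-np.pi, np.pi),
                                       weights=mag[keep])
            dom = hist.argmax()
            in_dom = (ang >= edges[dom]) & (ang <= edges[dom + 1])
            # Mean vector of the dominant bin gives the block's direction.
            out[by, bx] = np.arctan2(f[keep][in_dom, 1].mean(),
                                     f[keep][in_dom, 0].mean())
    return out

# Uniform rightward flow: every block's dominant direction is ~0 radians.
flow = np.zeros((16, 16, 2))
flow[..., 0] = 1.0
d = dominant_directions(flow)
```

Event detection in the abstract then reduces to comparing instantaneous flow against such learned per-block models (e.g., divergence of block directions suggesting dispersion).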

  13. Echocardiogram enhancement using supervised manifold denoising.

    Science.gov (United States)

    Wu, Hui; Huynh, Toan T; Souvenir, Richard

    2015-08-01

    This paper presents data-driven methods for echocardiogram enhancement. Existing denoising algorithms typically rely on a single noise model, and do not generalize to the composite noise sources typically found in real-world echocardiograms. Our methods leverage the low-dimensional intrinsic structure of echocardiogram videos. We assume that echocardiogram images are noisy samples from an underlying manifold parametrized by cardiac motion and denoise images via back-projection onto a learned (non-linear) manifold. Our methods incorporate synchronized side information (e.g., electrocardiography), which is often collected alongside the visual data. We evaluate the proposed methods on a synthetic data set and real-world echocardiograms. Quantitative results show improved performance of our methods over recent image despeckling methods and video denoising methods, and a visual analysis of real-world data shows noticeable image enhancement, even in the challenging case of noise due to dropout artifacts. Copyright © 2015 Elsevier B.V. All rights reserved.
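The back-projection idea above can be shown with a deliberately simplified stand-in: instead of the paper's learned non-linear manifold, a linear subspace fitted by PCA, onto which noisy frames are projected and reconstructed. Everything here (the function name, flattened frame shapes, component count, and the synthetic "cardiac phase" data) is an illustrative assumption.

```python
import numpy as np

def pca_backproject(frames, n_components=2):
    """Denoise frames (N, D) by projecting onto the top principal
    components and reconstructing; a linear stand-in for back-projection
    onto a learned (non-linear) manifold."""
    mean = frames.mean(axis=0)
    X = frames - mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:n_components]                  # (k, D) basis of the "manifold"
    return (X @ V.T) @ V + mean            # project, then back-project

# Frames lying on a low-dimensional curve (like cardiac phase) plus noise.
rng = np.random.default_rng(1)
phase = np.linspace(0, 2 * np.pi, 200)[:, None]
clean = np.hstack([np.sin(phase), np.cos(phase), 0.5 * np.sin(phase)])
noisy = clean + rng.normal(scale=0.3, size=clean.shape)
denoised = pca_backproject(noisy, n_components=2)
assert np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

The paper's method additionally conditions the manifold on synchronized side information such as ECG, which a linear projection cannot capture; the sketch only conveys the project-and-reconstruct denoising step.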

  14. Visual identification and similarity measures used for on-line motion planning of autonomous robots in unknown environments

    Science.gov (United States)

    Martínez, Fredy; Martínez, Fernando; Jacinto, Edwar

    2017-02-01

In this paper we propose an on-line motion planning strategy for autonomous robots in dynamic and locally observable environments. In this approach, we first visually identify geometric shapes in the environment by filtering images. Then, an ART-2 network is used to establish the similarity between patterns. The proposed algorithm allows a robot to establish its relative location in the environment and define its navigation path based on images of the environment and their similarity to reference images. This is an efficient and minimalist method that uses the similarity of landmark view patterns to navigate to the desired destination. Laboratory tests on real prototypes demonstrate the performance of the algorithm.

  15. Experiential Learning in Vehicle Dynamics Education via Motion Simulation and Interactive Gaming

    Directory of Open Access Journals (Sweden)

    Kevin Hulme

    2009-01-01

    Full Text Available Creating active, student-centered learning situations in postsecondary education is an ongoing challenge for engineering educators. Contemporary students familiar with visually engaging and fast-paced games can find traditional classroom methods of lecture and guided laboratory experiments limiting. This paper presents a methodology that incorporates driving simulation, motion simulation, and educational practices into an engaging, gaming-inspired simulation framework for a vehicle dynamics curriculum. The approach is designed to promote active student participation in authentic engineering experiences that enhance learning about road vehicle dynamics. The paper presents the student use of physical simulation and large-scale visualization to discover the impact that design decisions have on vehicle design using a gaming interface. The approach is evaluated using two experiments incorporated into a sequence of two upper level mechanical engineering courses.

  16. Visual Enhancement for Sports Entertainment by Vision-Based Augmented Reality

    OpenAIRE

    Uematsu, Yuko; Saito, Hideo

    2008-01-01

    This paper presents visually enhanced sports entertainment applications: AR Baseball Presentation System and Interactive AR Bowling System. We utilize vision-based augmented reality for getting immersive feeling. First application is an observation system of a virtual baseball game on the tabletop. 3D virtual players are playing a game on a real baseball field model, so that users can observe the game from favorite view points through a handheld monitor with a web camera....

  17. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2017-01-01

Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat...... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of the multisensory orientation response.

  18. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2018-01-01

Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat...... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of the multisensory orientation response.

  19. The Effects of Using the Kinect Motion-Sensing Interactive System to Enhance English Learning for Elementary Students

    Science.gov (United States)

    Pan, Wen Fu

    2017-01-01

    The objective of this study was to test whether the Kinect motion-sensing interactive system (KMIS) enhanced students' English vocabulary learning, while also comparing the system's effectiveness against a traditional computer-mouse interface. Both interfaces utilized an interactive game with a questioning strategy. One-hundred and twenty…

  20. Functional roles of 10 Hz alpha-band power modulating engagement and disengagement of cortical networks in a complex visual motion task.

    Directory of Open Access Journals (Sweden)

    Kunjan D Rana

Full Text Available Alpha band power, particularly at the 10 Hz frequency, is significantly involved in sensory inhibition, attention modulation, and working memory. However, the interactions between cortical areas and their relationship to the different functional roles of the alpha band oscillations are still poorly understood. Here we examined alpha band power and the cortico-cortical interregional phase synchrony in a psychophysical task involving the detection of an object moving in depth by an observer in forward self-motion. Wavelet filtering at the 10 Hz frequency revealed differences in the profile of cortical activation in the visual processing regions (occipital and parietal lobes) and in the frontoparietal regions. The alpha rhythm driving the visual processing areas was found to be asynchronous with the frontoparietal regions. These findings suggest a decoupling of the 10 Hz frequency into separate functional roles: sensory inhibition in the visual processing regions and spatial attention in the frontoparietal regions.
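The wavelet-filtering step at 10 Hz can be sketched as convolution with a complex Morlet wavelet, whose squared magnitude gives instantaneous alpha power. This is a generic illustration, not the study's analysis pipeline; the sampling rate, cycle count, and normalization are arbitrary choices.

```python
import numpy as np

def morlet_power(signal, fs, freq=10.0, n_cycles=7):
    """Instantaneous power at `freq` Hz via convolution with a complex
    Morlet wavelet (n_cycles sets the time/frequency trade-off)."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sum(np.abs(wavelet))          # crude amplitude normalization
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.abs(analytic) ** 2

fs = 250.0
t = np.arange(0, 2, 1 / fs)
# 10 Hz alpha burst confined to the second half of the epoch.
sig = np.sin(2 * np.pi * 10 * t) * (t > 1.0)
power = morlet_power(sig, fs)
```

The phase of the same complex output is what interregional phase-synchrony measures (as examined in the abstract) are computed from.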

  1. Altered Insular and Occipital Responses to Simulated Vertical Self-Motion in Patients with Persistent Postural-Perceptual Dizziness

    Directory of Open Access Journals (Sweden)

    Roberta Riccelli

    2017-10-01

    Full Text Available Background: Persistent postural-perceptual dizziness (PPPD) is a common functional vestibular disorder characterized by persistent symptoms of non-vertiginous dizziness and unsteadiness that are exacerbated by upright posture, self-motion, and exposure to complex or moving visual stimuli. Recent physiologic and neuroimaging data suggest that greater reliance on visual cues for postural control (as opposed to vestibular cues), a phenomenon termed visual dependence, and dysfunction in central visuo-vestibular networks may be important pathophysiologic mechanisms underlying PPPD. Dysfunctions are thought to involve insular regions that encode recognition of the visual effects of motion in the gravitational field. Methods: We tested for altered activity in vestibular and visual cortices during self-motion simulation, obtained via a visual virtual-reality rollercoaster stimulation, using functional magnetic resonance imaging in 15 patients with PPPD and 15 healthy controls (HCs). We compared between-group differences in brain responses to simulated displacements in vertical vs horizontal directions and correlated the difference in directional responses with dizziness handicap in patients with PPPD. Results: HCs showed increased activity in the anterior bank of the central insular sulcus during vertical relative to horizontal motion, which was not seen in patients with PPPD. However, for the same comparison, dizziness handicap correlated positively with activity in the visual cortex (V1, V2, and V3) in patients with PPPD. Conclusion: We provide novel insight into the pathophysiologic mechanisms underlying PPPD, including functional alterations in brain processes that affect balance control and reweighting of space-motion inputs to favor visual cues. For patients with PPPD, difficulties using visual data to discern the effects of gravity on self-motion may adversely affect balance control, particularly for individuals who simultaneously rely too heavily on visual

  2. Motion depending on the strategies of players enhances cooperation in a co-evolutionary prisoner's dilemma game

    International Nuclear Information System (INIS)

    Cheng Hongyan; Li Haihong; Dai Qionglin; Zhu Yun; Yang Junzhong

    2010-01-01

    In the evolution of cooperation, the motion of players plays an important role. In this paper, we incorporate into an evolutionary prisoner's dilemma game on networks a new factor: cooperators and defectors move with different probabilities. By investigating the dependence of the cooperator frequency on the moving probabilities of cooperators and defectors, μ_c and μ_d, we find that cooperation is greatly enhanced in the parameter regime μ_c < μ_d. The snapshots of strategy patterns and the evolutions of cooperator clusters and defector clusters reveal that either the fast motion of defectors or the slow motion of cooperators always favors the formation of large cooperator clusters. The model is investigated on different types of networks, such as square lattices, Erdős–Rényi networks and scale-free networks, and with different types of strategy-updating rules, such as the richest-following rule and the Fermi rule. The numerical results show that the observed phenomena are robust to different networks and to different strategy-updating rules.
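The mechanism the abstract describes can be sketched in a few lines. Below is a minimal, illustrative simulation (the lattice size, payoff b, moving probabilities, and all names are our assumptions, not the paper's settings): agents on a periodic square lattice imitate their richest neighbour, then swap places with a random neighbour with a strategy-dependent probability μ_c or μ_d.

```python
import random

L, B = 20, 1.05          # lattice size and temptation payoff (assumed values)
MU_C, MU_D = 0.1, 0.9    # cooperators move rarely, defectors often (mu_c < mu_d)

def payoff(me, other):
    """Weak prisoner's dilemma payoff for one pairwise interaction."""
    if me == 'C':
        return 1.0 if other == 'C' else 0.0
    return B if other == 'C' else 0.0

def neighbours(i, j):
    """Four nearest neighbours on a periodic (toroidal) lattice."""
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def step(grid, rng):
    # 1) each site accumulates payoff from its four neighbours
    score = {(i, j): sum(payoff(grid[i][j], grid[x][y]) for x, y in neighbours(i, j))
             for i in range(L) for j in range(L)}
    # 2) richest-following imitation (one of the update rules named in the abstract)
    new = [[grid[i][j] for j in range(L)] for i in range(L)]
    for i in range(L):
        for j in range(L):
            bi, bj = max(neighbours(i, j) + [(i, j)], key=lambda p: score[p])
            new[i][j] = grid[bi][bj]
    # 3) strategy-dependent motion: swap with a random neighbour
    for i in range(L):
        for j in range(L):
            mu = MU_C if new[i][j] == 'C' else MU_D
            if rng.random() < mu:
                x, y = rng.choice(neighbours(i, j))
                new[i][j], new[x][y] = new[x][y], new[i][j]
    return new

def cooperator_frequency(grid):
    return sum(row.count('C') for row in grid) / (L * L)

rng = random.Random(0)
grid = [[rng.choice('CD') for _ in range(L)] for _ in range(L)]
for _ in range(50):
    grid = step(grid, rng)
```

Tracking `cooperator_frequency` while sweeping μ_c and μ_d reproduces the kind of dependence the authors investigate.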

  3. Motion perception tasks as potential correlates to driving difficulty in the elderly

    Science.gov (United States)

    Raghuram, A.; Lakshminarayanan, V.

    2006-09-01

    Demographic changes indicate that the population older than 65 is on the rise because of the aging of the ‘baby boom’ generation. This aging trend and driving-related accident statistics reveal the need for procedures and tests that would assess the driving ability of older adults and predict whether they would be safe or unsafe drivers. The literature shows that an attention-based test called the useful field of view (UFOV) was a more significant predictor of accident rates than any other visual function test. The present study evaluates a qualitative trend in using motion perception tasks as potential visual perceptual correlates for screening elderly drivers who might have difficulty driving. Data were collected from 15 older subjects with a mean age of 71. Motion perception tasks included speed discrimination with radial and lamellar motion, time to collision using prediction motion, and estimating direction of heading. A motion index score was calculated that was indicative of performance on all of the above-mentioned motion tasks. Visual attention was assessed using the UFOV. A driving habit questionnaire was also administered for a self-report on driving difficulties and accident rates. A qualitative trend based on frequency distributions shows that thresholds on the motion perception tasks are successful in identifying subjects who reported having had difficulty in certain aspects of driving and having had accidents. The correlation between UFOV and motion index scores was not significant, indicating that different aspects of visual information processing crucial to driving behaviour are probably being tapped by these two paradigms. UFOV and motion perception tasks together can be a better predictor for identifying at-risk or safe drivers than either one alone.

  4. N1 enhancement in synesthesia during visual and audio-visual perception in semantic cross-modal conflict situations: an ERP study

    Directory of Open Access Journals (Sweden)

    Christopher eSinke

    2014-01-01

    Full Text Available Synesthesia entails a special kind of sensory perception, in which stimulation in one sensory modality leads to an internally generated perceptual experience of another, non-stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as here the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed changed multimodal integration, thus suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task in combination with animate and inanimate objects presented visually or audio-visually, in an audio-visually congruent or incongruent manner. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found an enhanced amplitude of the N1 component over occipital electrode sites for synesthetes compared with controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.

  5. Visualization techniques in plasma numerical simulations

    International Nuclear Information System (INIS)

    Kulhanek, P.; Smetana, M.

    2004-01-01

    Numerical simulations of plasma processes usually yield a huge amount of raw numerical data. Information about electric and magnetic fields and particle positions and velocities can be typically obtained. There are two major ways of elaborating these data. First of them is called plasma diagnostics. We can calculate average values, variances, correlations of variables, etc. These results may be directly comparable with experiments and serve as the typical quantitative output of plasma simulations. The second possibility is the plasma visualization. The results are qualitative only, but serve as vivid display of phenomena in the plasma followed-up. An experience with visualizing electric and magnetic fields via Line Integral Convolution method is described in the first part of the paper. The LIC method serves for visualization of vector fields in two dimensional section of the three dimensional plasma. The field values can be known only in grid points of three-dimensional grid. The second part of the paper is devoted to the visualization techniques of the charged particle motion. The colour tint can be used for particle temperature representation. The motion can be visualized by a trace fading away with the distance from the particle. In this manner the impressive animations of the particle motion can be achieved. (author)
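The Line Integral Convolution idea described above can be illustrated concretely. The sketch below is our own minimal version (function names, the step count, and the field are illustrative assumptions): each output pixel averages an input noise texture along the local streamline of the vector field, so the texture becomes smeared along field lines.

```python
import numpy as np

def lic(vx, vy, noise, length=10):
    """Naive LIC: convolve a noise texture along streamlines of (vx, vy)."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for i in range(h):
        for j in range(w):
            acc, n = noise[i, j], 1
            for sign in (+1, -1):            # integrate forward and backward
                y, x = float(i), float(j)
                for _ in range(length):
                    u, v = vx[int(y), int(x)], vy[int(y), int(x)]
                    norm = np.hypot(u, v) or 1.0
                    x += sign * u / norm     # unit step along the field line
                    y += sign * v / norm
                    if not (0 <= x < w and 0 <= y < h):
                        break
                    acc += noise[int(y), int(x)]
                    n += 1
            out[i, j] = acc / n
    return out

rng = np.random.default_rng(0)
vx = np.ones((32, 32))       # uniform horizontal field -> horizontal streaks
vy = np.zeros((32, 32))
noise = rng.random((32, 32))
img = lic(vx, vy, noise)
```

For a field from a 2-D section of a simulation grid, the streaks in `img` trace the field lines, which is the qualitative output the paper describes.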

  6. Light Video Game Play is Associated with Enhanced Visual Processing of Rapid Serial Visual Presentation Targets.

    Science.gov (United States)

    Howard, Christina J; Wilding, Robert; Guest, Duncan

    2017-02-01

    There is mixed evidence that video game players (VGPs) may demonstrate better performance in perceptual and attentional tasks than non-VGPs (NVGPs). The rapid serial visual presentation task is one such case, where observers respond to two successive targets embedded within a stream of serially presented items. We tested light VGPs (LVGPs) and NVGPs on this task. LVGPs were better at correct identification of second targets whether or not they were also attempting to respond to the first target. This performance benefit seen for LVGPs suggests enhanced visual processing for briefly presented stimuli even with only very moderate game play. Observers were less accurate at discriminating the orientation of a second target within the stream if it occurred shortly after presentation of the first target, that is to say, they were subject to the attentional blink (AB). We find no evidence for any reduction in AB in LVGPs compared with NVGPs.

  7. Task-specific impairments and enhancements induced by magnetic stimulation of human visual area V5.

    OpenAIRE

    Walsh, V; Ellison, A; Battelli, L; Cowey, A

    1998-01-01

    Transcranial magnetic stimulation (TMS) can be used to simulate the effects of highly circumscribed brain damage permanently present in some neuropsychological patients, by reversibly disrupting the normal functioning of the cortical area to which it is applied. By using TMS we attempted to recreate deficits similar to those reported in a motion-blind patient and to assess the specificity of deficits when TMS is applied over human area V5. We used six visual search tasks and showed that subje...

  8. Markov Processes: Exploring the Use of Dynamic Visualizations to Enhance Student Understanding

    Science.gov (United States)

    Pfannkuch, Maxine; Budgett, Stephanie

    2016-01-01

    Finding ways to enhance introductory students' understanding of probability ideas and theory is a goal of many first-year probability courses. In this article, we explore the potential of a prototype tool for Markov processes using dynamic visualizations to develop in students a deeper understanding of the equilibrium and hitting times…
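The two quantities the prototype tool visualizes, equilibrium behaviour and hitting times, can be computed directly for a small chain. The sketch below is our own illustrative example (not the article's tool) for a three-state random walk.

```python
import numpy as np

# Transition matrix of a small reflecting random walk (illustrative example)
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# Equilibrium: left eigenvector of P for eigenvalue 1, normalized to sum to 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

# Expected hitting times h_i of state 2: h_2 = 0 and h_i = 1 + sum_j P_ij h_j.
# Solve the linear system restricted to the non-target states {0, 1}.
idx = [0, 1]
A = np.eye(2) - P[np.ix_(idx, idx)]
h = np.linalg.solve(A, np.ones(2))
```

Here `pi` comes out as (0.25, 0.5, 0.25) and `h` as (8, 6) steps; a dynamic visualization animates many sample paths so students can watch these values emerge empirically.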

  9. Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a ‘Cocktail Party’

    Science.gov (United States)

    Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David

    2013-01-01

    Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218
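The core "envelope tracking" measure can be illustrated with a toy computation. This is a deliberately simplified sketch under our own assumptions (the study used richer MEG methods): extract each stream's slow amplitude envelope via the analytic signal and ask which envelope a neural signal correlates with more strongly.

```python
import numpy as np

def envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    H = np.zeros(n)
    H[0] = 1
    H[1:(n + 1) // 2] = 2        # keep positive frequencies, doubled
    if n % 2 == 0:
        H[n // 2] = 1            # Nyquist bin for even-length signals
    return np.abs(np.fft.ifft(X * H))

rng = np.random.default_rng(1)
t = np.linspace(0, 2, 2000)
# Two "speech" streams: carriers modulated at different slow rates
attended = np.sin(2 * np.pi * 3 * t) * rng.standard_normal(2000)
ignored = np.sin(2 * np.pi * 5 * t) * rng.standard_normal(2000)

# Toy cortical signal that follows the attended stream's envelope plus noise
neural = envelope(attended) + 0.5 * rng.standard_normal(2000)

r_att = np.corrcoef(neural, envelope(attended))[0, 1]
r_ign = np.corrcoef(neural, envelope(ignored))[0, 1]
```

The selective-attention effect reported in the abstract corresponds to `r_att` exceeding `r_ign` for the attended speaker.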

  10. Enhanced visual memory during hypnosis as mediated by hypnotic responsiveness and cognitive strategies.

    Science.gov (United States)

    Crawford, H J; Allen, S N

    1983-12-01

    To investigate the hypothesis that hypnosis has an enhancing effect on imagery processing, as mediated by hypnotic responsiveness and cognitive strategies, four experiments compared performance of low and high, or low, medium, and high, hypnotically responsive subjects in waking and hypnosis conditions on a successive visual memory discrimination task that required detecting differences between successively presented picture pairs in which one member of the pair was slightly altered. Consistently, hypnotically responsive individuals showed enhanced performance during hypnosis, whereas nonresponsive ones did not. Hypnotic responsiveness correlated .52 (p less than .001) with enhanced performance during hypnosis, but it was uncorrelated with waking performance (Experiment 3). Reaction time was not affected by hypnosis, although high hypnotizables were faster than lows in their responses (Experiments 1 and 2). Subjects reported enhanced imagery vividness on the self-report Vividness of Visual Imagery Questionnaire during hypnosis. The differential effect between lows and highs was in the anticipated direction but not significant (Experiments 1 and 2). As anticipated, hypnosis had no significant effect on a discrimination task that required determining whether there were differences between pairs of simultaneously presented pictures. Two cognitive strategies that appeared to mediate visual memory performance were reported: (a) detail strategy, which involved the memorization and rehearsal of individual details for memory, and (b) holistic strategy, which involved looking at and remembering the whole picture with accompanying imagery. Both lows and highs reported similar, predominantly detail-oriented strategies during waking; only highs shifted to a significantly more holistic strategy during hypnosis. These findings suggest that high hypnotizables have a greater capacity for cognitive flexibility (Battig, 1979) than do lows. Results are discussed in terms of several

  11. Are Visual Peripheries Forever Young?

    Directory of Open Access Journals (Sweden)

    Kalina Burnat

    2015-01-01

    Full Text Available The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about motion stimuli features and redirects foveal attention to new objects, it can also take over functions typical for central vision. Here I review the data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though it is well established that afferent projections from central and peripheral retinal regions are not established simultaneously during early postnatal life, central vision is commonly used as a general model of development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations separately rely on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.

  12. Are visual peripheries forever young?

    Science.gov (United States)

    Burnat, Kalina

    2015-01-01

    The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about motion stimuli features and redirects foveal attention to new objects, it can also take over functions typical for central vision. Here I review the data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though it is well established that afferent projections from central and peripheral retinal regions are not established simultaneously during early postnatal life, central vision is commonly used as a general model of development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations separately rely on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.

  13. Enhancing performance expectancies through visual illusions facilitates motor learning in children.

    Science.gov (United States)

    Bahmani, Moslem; Wulf, Gabriele; Ghadiri, Farhad; Karimi, Saeed; Lewthwaite, Rebecca

    2017-10-01

    In a recent study by Chauvel, Wulf, and Maquestiaux (2015), golf putting performance was found to be affected by the Ebbinghaus illusion. Specifically, adult participants demonstrated more effective learning when they practiced with a hole that was surrounded by small circles, making it look larger, than when the hole was surrounded by large circles, making it look smaller. The present study examined whether this learning advantage would generalize to children who are assumed to be less sensitive to the visual illusion. Two groups of 10-year olds practiced putting golf balls from a distance of 2m, with perceived larger or smaller holes resulting from the visual illusion. Self-efficacy was increased in the group with the perceived larger hole. The latter group also demonstrated more accurate putting performance during practice. Importantly, learning (i.e., delayed retention performance without the illusion) was enhanced in the group that practiced with the perceived larger hole. The findings replicate previous results with adult learners and are in line with the notion that enhanced performance expectancies are key to optimal motor learning (Wulf & Lewthwaite, 2016). Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Sensitivity to synchronicity of biological motion in normal and amblyopic vision

    Science.gov (United States)

    Luu, Jennifer Y.; Levi, Dennis M.

    2017-01-01

    Amblyopia is a developmental disorder of spatial vision that results from abnormal early visual experience usually due to the presence of strabismus, anisometropia, or both strabismus and anisometropia. Amblyopia results in a range of visual deficits that cannot be corrected by optics because the deficits reflect neural abnormalities. Biological motion refers to the motion patterns of living organisms, and is normally displayed as points of lights positioned at the major joints of the body. In this experiment, our goal was twofold. We wished to examine whether the human visual system in people with amblyopia retained the higher-level processing capabilities to extract visual information from the synchronized actions of others, therefore retaining the ability to detect biological motion. Specifically, we wanted to determine if the synchronized interaction of two agents performing a dancing routine allowed the amblyopic observer to use the actions of one agent to predict the expected actions of a second agent. We also wished to establish whether synchronicity sensitivity (detection of synchronized versus desynchronized interactions) is impaired in amblyopic observers relative to normal observers. The two aims are differentiated in that the first aim looks at whether synchronized actions result in improved expected action predictions while the second aim quantitatively compares synchronicity sensitivity, or the ratio of desynchronized to synchronized detection sensitivities, to determine if there is a difference between normal and amblyopic observers. Our results show that the ability to detect biological motion requires more samples in both eyes of amblyopes than in normal control observers. The increased sample threshold is not the result of low-level losses but may reflect losses in feature integration due to undersampling in the amblyopic visual system. However, like normal observers, amblyopes are more sensitive to synchronized versus desynchronized interactions

  15. Alterations to global but not local motion processing in long-term ecstasy (MDMA) users.

    Science.gov (United States)

    White, Claire; Brown, John; Edwards, Mark

    2014-07-01

    Growing evidence indicates that the main psychoactive ingredient in the illegal drug "ecstasy" (methylenedioxymethamphetamine) causes reduced activity in the serotonin and gamma-aminobutyric acid (GABA) systems in humans. On the basis of substantial serotonin input to the occipital lobe, recent research investigated visual processing in long-term users and found a larger magnitude of the tilt aftereffect, interpreted to reflect broadened orientation tuning bandwidths. Further research found higher orientation discrimination thresholds and reduced long-range interactions in the primary visual area of ecstasy users. The aim of the present research was to investigate whether serotonin-mediated V1 visual processing deficits in ecstasy users extend to motion processing mechanisms. Forty-five participants (21 controls, 24 drug users) completed two psychophysical studies: a direction discrimination study directly measured local motion processing in V1, while a motion coherence task tested global motion processing in area V5/MT. "Primary" ecstasy users (n = 18), those without substantial polydrug use, had significantly lower global motion thresholds than controls [p = 0.027, Cohen's d = 0.78 (large)], indicating increased sensitivity to global motion stimuli, but no difference in local motion processing (p = 0.365). These results extend previous research investigating the long-term effects of illicit drugs on visual processing. Two possible explanations are explored: diffuse attentional processes may be facilitating spatial pooling of motion signals in users. Alternatively, it may be that a GABA-mediated disruption to V5/MT processing is reducing spatial suppression and therefore improving global motion perception in ecstasy users.

  16. Perception of animacy from the motion of a single sound object

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Høll; Vuust, Peter; Wallentin, Mikkel

    2015-01-01

    Research in the visual modality has shown that the presence of certain dynamics in the motion of an object has a strong effect on whether or not the entity is perceived as animate. Cues for animacy are, among others, self-propelled motion and direction changes that are seemingly not caused...... that a change in the velocity of motion is positively correlated with perceived animacy, and changes in direction were found to influence animacy judgment as well. This suggests that an ability to facilitate and sustain self-movement is perceived as a living quality not only in the visual domain...

  17. Chaos in balance: non-linear measures of postural control predict individual variations in visual illusions of motion.

    Directory of Open Access Journals (Sweden)

    Deborah Apthorp

    Full Text Available Visually-induced illusions of self-motion (vection) can be compelling for some people, but they are subject to large individual variations in strength. Do these variations depend, at least in part, on the extent to which people rely on vision to maintain their postural stability? We investigated this by comparing physical posture measures to subjective vection ratings. Using a Bertec balance plate in a brightly-lit room, we measured 13 participants' excursions of the centre of foot pressure (CoP) over a 60-second period with eyes open and with eyes closed during quiet stance. Subsequently, we collected vection strength ratings for large optic flow displays while seated, using both verbal ratings and online throttle measures. We also collected measures of postural sway (changes in anterior-posterior CoP) in response to the same visual motion stimuli while standing on the plate. The magnitude of standing sway in response to expanding optic flow (in comparison to blank fixation periods) was predictive of both verbal and throttle measures for seated vection. In addition, the ratio between eyes-open and eyes-closed CoP excursions during quiet stance (using the area of postural sway) significantly predicted seated vection for both measures. Interestingly, these relationships were weaker for contracting optic flow displays, though these produced both stronger vection and more sway. Next we used a non-linear analysis (recurrence quantification analysis, RQA) of the fluctuations in anterior-posterior position during quiet stance (both with eyes closed and eyes open); this was a much stronger predictor of seated vection for both expanding and contracting stimuli. Given the complex multisensory integration involved in postural control, our study adds to the growing evidence that non-linear measures drawn from complexity theory may provide a more informative measure of postural sway than the conventional linear measures.
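The recurrence quantification analysis mentioned in the abstract can be sketched briefly. This is an illustrative toy with assumed embedding parameters, not the study's pipeline: embed the 1-D sway series with time delays, threshold pairwise distances into a recurrence matrix, then report recurrence rate and determinism.

```python
import numpy as np

def rqa(x, dim=3, delay=2, radius=0.5, lmin=2):
    """Recurrence rate and determinism of a 1-D series (toy implementation)."""
    n = len(x) - (dim - 1) * delay
    # time-delay embedding into dim-dimensional state vectors
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    R = (d < radius).astype(int)             # recurrence matrix
    rr = R.mean()                            # recurrence rate
    # determinism: fraction of recurrent points lying on diagonal lines >= lmin
    diag_points = 0
    for k in range(1, n):                    # off-diagonals above the main one
        run = 0
        for v in np.append(np.diag(R, k), 0):
            if v:
                run += 1
            else:
                if run >= lmin:
                    diag_points += run
                run = 0
    upper = np.triu(R, 1).sum()
    det = diag_points / upper if upper else 0.0
    return rr, det

t = np.linspace(0, 8 * np.pi, 400)
rr_sine, det_sine = rqa(np.sin(t))                    # deterministic signal
rng = np.random.default_rng(0)
rr_noise, det_noise = rqa(rng.standard_normal(400))   # white noise
```

A deterministic signal yields long diagonal structures (high determinism), while noise yields isolated recurrences; it is this kind of structure in CoP fluctuations that predicted vection in the study.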

  18. Atrioventricular junction (AVJ) motion tracking: a software tool with ITK/VTK/Qt.

    Science.gov (United States)

    Pengdong Xiao; Shuang Leng; Xiaodan Zhao; Hua Zou; Ru San Tan; Wong, Philip; Liang Zhong

    2016-08-01

    The quantitative measurement of atrioventricular junction (AVJ) motion is an important index of ventricular function over the cardiac cycle, including systole and diastole. In this paper, a software tool that can conduct AVJ motion tracking from cardiovascular magnetic resonance (CMR) images is presented, built with the Insight Segmentation and Registration Toolkit (ITK), the Visualization Toolkit (VTK) and Qt. The software tool is written in C++ using the Visual Studio Community 2013 integrated development environment (IDE), which contains both an editor and a Microsoft compiler. The software package has been successfully implemented. From this software engineering practice, it is concluded that ITK, VTK, and Qt are very handy software systems for implementing automatic image analysis functions for CMR images, such as the quantitative measurement of motion by visual tracking.

  19. Training Effectiveness of Visual and Motion Simulation

    Science.gov (United States)

    1981-01-01

    and checkride scores. No statistical differences between the two groups were found. Creelman (1959) reported that students trained in the SNJ Link with...simulated and aircraft hours or sorties (Dricisom & Burger, 1976; Brown, Matheny, & Flexman, 1951; Creelman, 1959; Gray et al., 1969; Payne et al., 1976...relationship between flight simulator motion and training requirements. Human Factors, 1979, 21, 493-501. Creelman, J.A. Evaluation of approach

  20. Visual Neuroscience: Unique Neural System for Flight Stabilization in Hummingbirds.

    Science.gov (United States)

    Ibbotson, M R

    2017-01-23

    The pretectal visual motion processing area in the hummingbird brain is unlike that in other birds: instead of emphasizing detection of horizontal movements, it codes for motion in all directions through 360°, possibly offering precise visual stability control during hovering. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Prediction of visual saliency in video with deep CNNs

    Science.gov (United States)

    Chaabouni, Souad; Benois-Pineau, Jenny; Hadar, Ofer

    2016-09-01

    Prediction of visual saliency in images and video is a highly researched topic. Target applications include quality assessment of multimedia services in a mobile context, video compression techniques, recognition of objects in video streams, etc. In the framework of mobile and egocentric perspectives, visual saliency models cannot be founded only on bottom-up features, as suggested by feature integration theory. Nor is the central bias hypothesis respected. In this case, the top-down component of human visual attention becomes prevalent. Visual saliency can be predicted on the basis of seen data. Deep Convolutional Neural Networks (CNNs) have proven to be a powerful tool for prediction of salient areas in stills. In our work we also focus on the sensitivity of the human visual system to residual motion in a video. A Deep CNN architecture is designed in which we incorporate input primary maps as color values of pixels and the magnitude of local residual motion. Complementary contrast maps allow for a slight increase in accuracy compared to the use of color and residual motion only. The experiments show that the choice of input features for the Deep CNN depends on the visual task: for interest in dynamic content, the 4K model with residual motion is more efficient, while for object recognition in egocentric video the pure spatial input is more appropriate.
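The input-map construction the abstract describes, colour planes stacked with a residual-motion magnitude map, can be sketched roughly as follows. The frame-difference stand-in for residual motion and all names here are our simplification; the paper derives residual motion from proper motion estimation.

```python
import numpy as np

def input_maps(frame_prev, frame_cur):
    """Stack RGB planes with a normalized residual-motion magnitude map."""
    # crude residual motion: frame difference with the global (camera)
    # component approximated by the mean difference and subtracted out
    diff = frame_cur.mean(axis=2) - frame_prev.mean(axis=2)
    residual = np.abs(diff - diff.mean())
    residual /= max(residual.max(), 1e-9)
    return np.dstack([frame_cur, residual])

rng = np.random.default_rng(0)
f0 = rng.random((8, 8, 3))
f1 = f0.copy()
f1[2:4, 2:4] += 0.5          # a small locally moving patch
maps = input_maps(f0, f1)    # shape (8, 8, 4): R, G, B + residual motion
```

In the paper's setting, such four-channel patches would be the CNN's training input, with the residual channel highlighting regions that move relative to the background.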

  2. P1-17: Pseudo-Haptics Using Motion-in-Depth Stimulus and Second-Order Motion Stimulus

    Directory of Open Access Journals (Sweden)

    Shuichi Sato

    2012-10-01

    Full Text Available Modification of the motion of the computer cursor during manipulation by the observer evokes an illusory haptic sensation (Lecuyer et al., 2004, ACM SIGCHI '04, 239–246). This study investigates pseudo-haptics using motion-in-depth and second-order motion. A stereoscopic display and a PHANTOM were used in the first experiment. A subject was asked to move a visual target at a constant speed in the horizontal, vertical, or front-back direction. During the manipulation, the speed was reduced to 50% for 500 msec. The haptic sensation was measured using the magnitude estimation method. The result indicates that the perceived haptic sensation from motion-in-depth was about 30% of that from horizontal or vertical motion. A 2D display and the PHANTOM were used in the second experiment. The motion cue was second-order: in each frame, the dots in a square patch reverse in contrast (i.e., all black dots become white and all white dots become black). The patch was moved in a horizontal direction. The result indicates that the perceived haptic sensation from second-order motion was about 90% of that from first-order motion.

  3. Illusory Speed is Retained in Memory during Invisible Motion

    Directory of Open Access Journals (Sweden)

    Luca Battaglini

    2013-05-01

    Full Text Available The brain can retain speed information in early visual short-term memory in an astonishingly precise manner. We investigated whether this (early) visual memory system is active during the extrapolation of occluded motion and whether it reflects speed misperception due to contrast and size. Experiments 1A and 2A showed that reducing target contrast or increasing its size led to an illusory speed underestimation. Experiments 1B, 2B, and 3 showed that this illusory phenomenon is reflected in the memory of speed during occluded motion, independent of the range of visible speeds, of the length of the visible trajectory or the invisible trajectory, and of the type of task. These results suggest that illusory speed is retained in memory during invisible motion.

  4. Neural dynamics of motion processing and speed discrimination.

    Science.gov (United States)

    Chey, J; Grossberg, S; Mingolla, E

    1998-09-01

    A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning, one that realizes a size-speed correlation, can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different size are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. These mechanisms are proposed to occur in the V1-->MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs, and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception.

  5. Learning Program for Enhancing Visual Literacy for Non-Design Students Using a CMS to Share Outcomes

    Science.gov (United States)

    Ariga, Taeko; Watanabe, Takashi; Otani, Toshio; Masuzawa, Toshimitsu

    2016-01-01

    This study proposes a basic learning program for enhancing visual literacy using an original Web content management system (Web CMS) to share students' outcomes in class as blog posts. It seeks to reinforce students' understanding and awareness of the design of visual content. The learning program described in this research focuses on addressing…

  6. Defining the computational structure of the motion detector in Drosophila.

    Science.gov (United States)

    Clark, Damon A; Bursztyn, Limor; Horowitz, Mark A; Schnitzer, Mark J; Clandinin, Thomas R

    2011-06-23

    Many animals rely on visual motion detection for survival. Motion information is extracted from spatiotemporal intensity patterns on the retina, a paradigmatic neural computation. A phenomenological model, the Hassenstein-Reichardt correlator (HRC), relates visual inputs to neural activity and behavioral responses to motion, but the circuits that implement this computation remain unknown. By using cell-type specific genetic silencing, minimal motion stimuli, and in vivo calcium imaging, we examine two critical HRC inputs. These two pathways respond preferentially to light and dark moving edges. We demonstrate that these pathways perform overlapping but complementary subsets of the computations underlying the HRC. A numerical model implementing differential weighting of these operations displays the observed edge preferences. Intriguingly, these pathways are distinguished by their sensitivities to a stimulus correlation that corresponds to an illusory percept, "reverse phi," that affects many species. Thus, this computational architecture may be widely used to achieve edge selectivity in motion detection. Copyright © 2011 Elsevier Inc. All rights reserved.
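    The Hassenstein-Reichardt correlator examined above multiplies the delayed signal from one photoreceptor with the undelayed signal from its neighbor, then subtracts the mirror-symmetric product, yielding a direction-selective output. A minimal sketch under illustrative assumptions (function name, delay in samples), not the paper's implementation:

```python
import numpy as np

def hrc_response(left, right, tau=1):
    """Hassenstein-Reichardt correlator: correlate each input's delayed
    signal with its neighbor's instantaneous signal, then take the
    difference of the two mirror-symmetric half-correlators.
    `left` and `right` are 1-D luminance time series from two adjacent
    retinal points; `tau` is the delay in samples."""
    delayed_left = np.roll(left, tau)
    delayed_right = np.roll(right, tau)
    delayed_left[:tau] = 0.0   # no signal before stimulus onset
    delayed_right[:tau] = 0.0
    # opponent subtraction: net positive for left-to-right motion
    return delayed_left * right - left * delayed_right

# A bright edge moving left-to-right: the right input is a delayed
# copy of the left input, so the summed correlator output is positive.
t = np.arange(50)
left = (t >= 10).astype(float)
right = (t >= 13).astype(float)   # same edge arrives 3 samples later
print(hrc_response(left, right, tau=3).sum() > 0)  # True for rightward motion
```

    The reverse-phi sensitivity mentioned in the abstract falls out of the same multiplication: contrast-reversing inputs flip the sign of the product and hence the perceived direction.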

  7. Slushy weightings for the optimal pilot model. [considering visual tracking task

    Science.gov (United States)

    Dillow, J. D.; Picha, D. G.; Anderson, R. O.

    1975-01-01

    A pilot model is described which accounts for the effect of motion cues in a well defined visual tracking task. The effect of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Secondly, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.

  8. In Vivo Evaluation of the Visual Pathway in Streptozotocin-Induced Diabetes by Diffusion Tensor MRI and Contrast Enhanced MRI.

    Directory of Open Access Journals (Sweden)

    Swarupa Kancherla

    Full Text Available Visual function has been shown to deteriorate prior to the onset of retinopathy in some diabetic patients and experimental animal models. This suggests the involvement of the brain's visual system in the early stages of diabetes. In this study, we tested this hypothesis by examining the integrity of the visual pathway in a diabetic rat model using in vivo multi-modal magnetic resonance imaging (MRI. Ten-week-old Sprague-Dawley rats were divided into an experimental diabetic group by intraperitoneal injection of 65 mg/kg streptozotocin in 0.01 M citric acid, and a sham control group by intraperitoneal injection of citric acid only. One month later, diffusion tensor MRI (DTI was performed to examine the white matter integrity in the brain, followed by chromium-enhanced MRI of retinal integrity and manganese-enhanced MRI of anterograde manganese transport along the visual pathway. Prior to MRI experiments, the streptozotocin-induced diabetic rats showed significantly smaller weight gain and higher blood glucose level than the control rats. DTI revealed significantly lower fractional anisotropy and higher radial diffusivity in the prechiasmatic optic nerve of the diabetic rats compared to the control rats. No apparent difference was observed in the axial diffusivity of the optic nerve, the chromium enhancement in the retina, or the manganese enhancement in the lateral geniculate nucleus and superior colliculus between groups. Our results suggest that streptozotocin-induced diabetes leads to early injury in the optic nerve when no substantial change in retinal integrity or anterograde transport along the visual pathways was observed in MRI using contrast agent enhancement. DTI may be a useful tool for detecting and monitoring early pathophysiological changes in the visual system of experimental diabetes non-invasively.

  9. Temporal Audiovisual Motion Prediction in 2D- vs. 3D-Environments.

    Science.gov (United States)

    Dittrich, Sandra; Noesselt, Tömme

    2018-01-01

    Predicting motion is essential for many everyday activities, e.g., in road traffic. Previous studies on motion prediction failed to find consistent results, which might be due to the use of very different stimulus material and behavioural tasks. Here, we directly tested the influence of task (detection, extrapolation) and stimulus features (visual vs. audiovisual and three-dimensional vs. non-three-dimensional) on temporal motion prediction in two psychophysical experiments. In both experiments a ball followed a trajectory toward the observer and temporarily disappeared behind an occluder. In audiovisual conditions a moving white noise (congruent or incongruent with the visual motion direction) was presented concurrently. In experiment 1 the ball reappeared on a predictable or a non-predictable trajectory and participants detected when the ball reappeared. In experiment 2 the ball did not reappear after occlusion and participants judged when the ball would reach a specified position at two possible distances from the occluder (extrapolation task). Both experiments were conducted in three-dimensional space (using a stereoscopic screen and polarised glasses) and also without stereoscopic presentation. Participants benefitted from visually predictable trajectories and concurrent sounds during detection. Additionally, visual facilitation was more pronounced for non-3D stimulation during the detection task. In contrast, for the more complex extrapolation task, group mean results indicated that auditory information impaired motion prediction. However, a post hoc cross-validation procedure (split-half) revealed that participants varied in their ability to use sounds during motion extrapolation. Most participants selectively profited from either near or far extrapolation distances but were impaired for the other one. We propose that interindividual differences in extrapolation efficiency might be the mechanism governing this effect. Together, our results indicate that both a

  10. Enhanced Visualization of Hematoxylin and Eosin Stained Pathological Characteristics by Phasor Approach.

    Science.gov (United States)

    Luo, Teng; Lu, Yuan; Liu, Shaoxiong; Lin, Danying; Qu, Junle

    2017-09-05

    The phasor approach to fluorescence lifetime imaging microscopy (FLIM) is used to identify different types of tissues from hematoxylin and eosin (H&E) stained basal cell carcinoma (BCC) sections. The results suggest that working directly in phasor space with cluster assignment achieves immunofluorescence-like simultaneous five- or six-color imaging by using the multiplexed fluorescence lifetimes of H&E. The phasor approach is particularly effective for enhanced visualization of the abnormal morphology of a suspected nidus. Moreover, applying the phasor approach to H&E FLIM data can determine the actual paths, or infiltrating trajectories, of basophils and immune cells associated with preneoplastic or neoplastic skin lesions. The integration of the phasor approach with routine histology proved its value for skin cancer prevention and early detection. We therefore believe that phasor analysis of H&E tissue sections is an enhanced visualization tool with the potential to simplify the preparation process of special staining and serve as color-contrast-aided imaging in clinical pathological examination.
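    For context, the standard phasor transform maps each pixel's fluorescence decay onto a point (g, s) at the laser repetition frequency; clustering in this (g, s) space is what enables the multi-color tissue segmentation described above. A minimal sketch with illustrative names, not the authors' code:

```python
import numpy as np

def phasor(decay, dt, f):
    """Map a fluorescence decay onto phasor coordinates (g, s) at
    angular frequency w = 2*pi*f (laser repetition rate).
    Mono-exponential decays fall on the 'universal semicircle'
    (g - 1/2)^2 + s^2 = 1/4; mixtures fall inside it, which is what
    makes cluster-based segmentation of FLIM images possible."""
    t = np.arange(len(decay)) * dt
    w = 2 * np.pi * f
    total = decay.sum()
    g = (decay * np.cos(w * t)).sum() / total
    s = (decay * np.sin(w * t)).sum() / total
    return g, s

# A mono-exponential decay with lifetime tau should land at
# g = 1/(1+(w*tau)^2), s = w*tau/(1+(w*tau)^2).
tau, dt, f = 2e-9, 1e-11, 80e6          # 2 ns lifetime, 80 MHz laser
decay = np.exp(-np.arange(1250) * dt / tau)
g, s = phasor(decay, dt, f)
w_tau = 2 * np.pi * f * tau
print(np.allclose([g, s], [1/(1+w_tau**2), w_tau/(1+w_tau**2)], atol=1e-2))
```

    Each H&E pixel becomes one such (g, s) point; assigning pixels to clusters in this space yields the pseudo-color tissue maps.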

  11. Signal enhancement, not active suppression, follows the contingent capture of visual attention.

    Science.gov (United States)

    Livingstone, Ashley C; Christie, Gregory J; Wright, Richard D; McDonald, John J

    2017-02-01

    Irrelevant visual cues capture attention when they possess a task-relevant feature. Electrophysiologically, this contingent capture of attention is evidenced by the N2pc component of the visual event-related potential (ERP) and an enlarged ERP positivity over the occipital hemisphere contralateral to the cued location. The N2pc reflects an early stage of attentional selection, but presently it is unclear what the contralateral ERP positivity reflects. One hypothesis is that it reflects the perceptual enhancement of the cued search-array item; another hypothesis is that it is time-locked to the preceding cue display and reflects active suppression of the cue itself. Here, we varied the time interval between a cue display and a subsequent target display to evaluate these competing hypotheses. The results demonstrated that the contralateral ERP positivity is tightly time-locked to the appearance of the search display rather than the cue display, thereby supporting the perceptual enhancement hypothesis and disconfirming the cue-suppression hypothesis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Conditioned sounds enhance visual processing.

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    Full Text Available This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (= conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, -50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifields. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.

  13. Is Visual Selective Attention in Deaf Individuals Enhanced or Deficient? The Case of the Useful Field of View

    Science.gov (United States)

    Dye, Matthew W. G.; Hauser, Peter C.; Bavelier, Daphne

    2009-01-01

    Background Early deafness leads to enhanced attention in the visual periphery. Yet, whether this enhancement confers advantages in everyday life remains unknown, as deaf individuals have been shown to be more distracted by irrelevant information in the periphery than their hearing peers. Here, we show that, in a complex attentional task, a performance advantage results for deaf individuals. Methodology/Principal Findings We employed the Useful Field of View (UFOV) which requires central target identification concurrent with peripheral target localization in the presence of distractors – a divided, selective attention task. First, the comparison of deaf and hearing adults with or without sign language skills establishes that deafness and not sign language use drives UFOV enhancement. Second, UFOV performance was enhanced in deaf children, but only after 11 years of age. Conclusions/Significance This work demonstrates that, following early auditory deprivation, visual attention resources toward the periphery slowly get augmented to eventually result in a clear behavioral advantage by pre-adolescence on a selective visual attention task. PMID:19462009

  14. Orientation and direction-of-motion response in the middle temporal visual area (MT) of New World owl monkeys as revealed by intrinsic-signal optical imaging

    Directory of Open Access Journals (Sweden)

    Peter M Kaskan

    2010-07-01

    Full Text Available Intrinsic-signal optical imaging was used to evaluate relationships of domains of neurons in visual area MT selective for stimulus orientation and direction of motion. Maps of activation were elicited in MT of owl monkeys by gratings drifting back-and-forth, flashed stationary gratings and unidirectionally drifting fields of random dots. Drifting gratings, typically used to reveal orientation preference domains, contain a motion component that may be represented in MT. Consequently, this stimulus could activate groups of cells responsive to the motion of the grating, its orientation or a combination of both. Domains elicited from either moving or static gratings were remarkably similar, indicating that these groups of cells are responding to orientation, although they may also encode information about motion. To assess the relationship between domains defined by drifting oriented gratings and those responsive to direction of motion, the response to drifting fields of random dots was measured within domains defined from thresholded maps of activation elicited by the drifting gratings. The optical response elicited by drifting fields of random dots was maximal in a direction orthogonal to the map of orientation preference. Thus, neurons in domains selective for stimulus orientation are also selective for motion orthogonal to the preferred stimulus orientation.

  15. Shared sensory estimates for human motion perception and pursuit eye movements.

    Science.gov (United States)

    Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio; Osborne, Leslie C

    2015-06-03

    Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. Copyright © 2015 the authors 0270-6474/15/358515-16$15.00/0.

  16. Visual Guided Navigation

    National Research Council Canada - National Science Library

    Banks, Martin

    1999-01-01

    .... Similarly, the problem of visual navigation is the recovery of an observer's self-motion with respect to the environment from the moving pattern of light reaching the eyes and the complex of extra...

  17. Altered modulation of gamma oscillation frequency by speed of visual motion in children with autism spectrum disorders.

    Science.gov (United States)

    Stroganova, Tatiana A; Butorina, Anna V; Sysoeva, Olga V; Prokofyev, Andrey O; Nikolaeva, Anastasia Yu; Tsetlin, Marina M; Orekhova, Elena V

    2015-01-01

    Recent studies link autism spectrum disorders (ASD) with an altered balance between excitation and inhibition (E/I balance) in cortical networks. Brain oscillations in the high gamma band (50-120 Hz) are sensitive to the E/I balance and may serve as useful biomarkers of certain ASD subtypes. The frequency of gamma oscillations is mediated by the level of excitation of fast-spiking inhibitory basket cells, which are recruited with increasing strength of excitatory input. Therefore, experimental manipulations affecting gamma frequency may throw light on inhibitory network dysfunction in ASD. Here, we used magnetoencephalography (MEG) to investigate modulation of visual gamma oscillation frequency by the speed of drifting annular gratings (1.2, 3.6, 6.0 °/s) in 21 boys with ASD and 26 typically developing boys aged 7-15 years. The multitaper method was used for analysis of the spectra of gamma power change upon stimulus presentation, and a permutation test was applied for statistical comparisons. We also assessed our participants' visual orientation discrimination thresholds, which are thought to depend on the excitability of inhibitory networks in the visual cortex. Although the frequency of the oscillatory gamma response increased with increasing velocity of visual motion in both groups of participants, the velocity effect was reduced in a substantial proportion of children with ASD. The range of velocity-related gamma frequency modulation correlated inversely with the ability to discriminate oblique line orientation in the ASD group, while no such correlation was observed in the group of typically developing participants. Our findings suggest that abnormal velocity-related gamma frequency modulation in ASD may constitute a potential biomarker for reduced excitability of fast-spiking inhibitory neurons in a subset of children with ASD.

  18. Motor Simulation without Motor Expertise: Enhanced Corticospinal Excitability in Visually Experienced Dance Spectators

    Science.gov (United States)

    Jola, Corinne; Abedian-Amiri, Ali; Kuppuswamy, Annapoorna; Pollick, Frank E.; Grosbras, Marie-Hélène

    2012-01-01

    The human “mirror-system” is suggested to play a crucial role in action observation and execution, and is characterized by activity in the premotor and parietal cortices during the passive observation of movements. The previous motor experience of the observer has been shown to enhance the activity in this network. Yet visual experience could also have a determinant influence when watching more complex actions, as in dance performances. Here we tested the impact visual experience has on motor simulation when watching dance, by measuring changes in corticospinal excitability. We also tested the effects of empathic abilities. To fully match the participants' long-term visual experience with the present experimental setting, we used three live solo dance performances: ballet, Indian dance, and non-dance. Participants were either frequent dance spectators of ballet or Indian dance, or “novices” who never watched dance. None of the spectators had been physically trained in these dance styles. Transcranial magnetic stimulation was used to measure corticospinal excitability by means of motor-evoked potentials (MEPs) in both the hand and the arm, because the hand is specifically used in Indian dance and the arm is frequently engaged in ballet dance movements. We observed that frequent ballet spectators showed larger MEP amplitudes in the arm muscles when watching ballet compared to when they watched other performances. We also found that the higher Indian dance spectators scored on the fantasy subscale of the Interpersonal Reactivity Index, the larger their MEPs were in the arms when watching Indian dance. Our results show that even without physical training, corticospinal excitability can be enhanced as a function of either visual experience or the tendency to imaginatively transpose oneself into fictional characters. We suggest that spectators covertly simulate the movements for which they have acquired visual experience, and that empathic abilities heighten

  19. Coherent Motion Sensitivity Predicts Individual Differences in Subtraction

    Science.gov (United States)

    Boets, Bart; De Smedt, Bert; Ghesquiere, Pol

    2011-01-01

    Recent findings suggest deficits in coherent motion sensitivity, an index of visual dorsal stream functioning, in children with poor mathematical skills or dyscalculia, a specific learning disability in mathematics. We extended these data using a longitudinal design to unravel whether visual dorsal stream functioning is able to "predict"…

  20. Motion extrapolation in the central fovea.

    Directory of Open Access Journals (Sweden)

    Zhuanghua Shi

    Full Text Available Neural transmission latency would introduce a spatial lag when an object moves across the visual field, if the latency were not compensated. A visual predictive mechanism has been proposed, which overcomes such spatial lag by extrapolating the position of the moving object forward. However, a forward position shift is often absent if the object abruptly stops moving (motion-termination). A recent "correction-for-extrapolation" hypothesis suggests that the absence of forward shifts is caused by sensory signals representing 'failed' predictions. Thus far, this hypothesis has been tested only for extra-foveal retinal locations. We tested this hypothesis using two foveal scotomas: the scotoma to dim light and the scotoma to blue light. We found that the perceived position of a dim dot is extrapolated into the fovea during motion-termination. Next, we compared the perceived position shifts of a blue versus a green moving dot. As predicted, the extrapolation at motion-termination was only found with the blue moving dot. The results provide new evidence for the correction-for-extrapolation hypothesis in the region with the highest spatial acuity, the fovea.

  1. Visual Processing of Object Velocity and Acceleration

    Science.gov (United States)

    1994-02-04

    A failure of motion deblurring in the human visual system. Investigative Ophthalmology and Visual Sciences (Suppl), 34, 1230. Watamaniuk, S.N.J. and ... McKee, S.P. Why is a trajectory more detectable in noise than correlated signal dots? Investigative Ophthalmology and Visual Sciences (Suppl), 34, 1364.

  2. More than visual literacy: art and the enhancement of tolerance for ambiguity and empathy.

    Science.gov (United States)

    Bentwich, Miriam Ethel; Gilbey, Peter

    2017-11-10

    Comfort with ambiguity, mostly associated with the acceptance of multiple meanings, is a core characteristic of successful clinicians. Yet past studies indicate that medical students and junior physicians feel uncomfortable with ambiguity. Visual Thinking Strategies (VTS) is a pedagogic approach involving discussions of art works and deciphering the different possible meanings entailed in them. However, the contribution of art to the possible enhancement of tolerance for ambiguity among medical students has not yet been adequately investigated. We aimed to offer a novel perspective on the effect of art, as experienced through VTS, on medical students' tolerance of ambiguity and its possible relation to empathy. A quantitative method was used, utilizing a short survey administered after an interactive VTS session conducted within a mandatory medical humanities course for first-year medical students. The intervention consisted of a 90-min session in the form of a combined lecture and interactive discussion of art images. The VTS session and survey were completed by 67 students in two consecutive cohorts of first-year students. 67% of the respondents thought that the intervention contributed to their acceptance of multiple possible meanings, 52% thought their visual observation ability was enhanced, and 34% thought that their ability to feel the suffering of others was enhanced. Statistically significant moderate-to-high correlations were found between the contribution to ambiguity tolerance and the contribution to empathy (0.528-0.744; p ≤ 0.01). Art may contribute especially to the development of medical students' tolerance of ambiguity, which is also related to the enhancement of empathy. The potential contribution of visual art works used in VTS to the enhancement of tolerance for ambiguity and empathy is explained on the basis of relevant literature regarding the embeddedness of ambiguity within art works, coupled with reference to John Dewey's theory of learning. Given the

  3. The role of temporal synchrony as a binding cue for visual persistence in early visual areas: an fMRI study.

    Science.gov (United States)

    Wong, Yvonne J; Aldcroft, Adrian J; Large, Mary-Ellen; Culham, Jody C; Vilis, Tutis

    2009-12-01

    We examined the role of temporal synchrony-the simultaneous appearance of visual features-in the perceptual and neural processes underlying object persistence. When a binding cue (such as color or motion) momentarily exposes an object from a background of similar elements, viewers remain aware of the object for several seconds before it perceptually fades into the background, a phenomenon known as object persistence. We showed that persistence from temporal stimulus synchrony, like that arising from motion and color, is associated with activation in the lateral occipital (LO) area, as measured by functional magnetic resonance imaging. We also compared the distribution of occipital cortex activity related to persistence to that of iconic visual memory. Although activation related to iconic memory was largely confined to LO, activation related to object persistence was present across V1 to LO, peaking in V3 and V4, regardless of the binding cue (temporal synchrony, motion, or color). Although persistence from motion cues was not associated with higher activation in the MT+ motion complex, persistence from color cues was associated with increased activation in V4. Taken together, these results demonstrate that although persistence is a form of visual memory, it relies on neural mechanisms different from those of iconic memory. That is, persistence not only activates LO in a cue-independent manner, it also recruits visual areas that may be necessary to maintain binding between object elements.

  4. The Role of Visual Cues in Microgravity Spatial Orientation

    Science.gov (United States)

    Oman, Charles M.; Howard, Ian P.; Smith, Theodore; Beall, Andrew C.; Natapoff, Alan; Zacher, James E.; Jenkin, Heather L.

    2003-01-01

    In weightlessness, astronauts must rely on vision to remain spatially oriented. Although gravitational down cues are missing, most astronauts maintain a subjective vertical, a subjective sense of which way is up. This is evidenced by anecdotal reports of crewmembers feeling upside down (inversion illusions) or feeling that a floor has become a ceiling and vice versa (visual reorientation illusions). Instability in the subjective vertical direction can trigger disorientation and space motion sickness. On Neurolab, a virtual environment display system was used to conduct five interrelated experiments, which quantified: (a) how the direction of each person's subjective vertical depends on the orientation of the surrounding visual environment, (b) whether rolling the virtual visual environment produces stronger illusions of circular self-motion (circular vection) and more visual reorientation illusions than on Earth, (c) whether a virtual scene moving past the subject produces a stronger linear self-motion illusion (linear vection), and (d) whether deliberate manipulation of the subjective vertical changes a crewmember's interpretation of shading or the ability to recognize objects. None of the crew's subjective vertical indications became more independent of environmental cues in weightlessness. Three who were either strongly dependent on or independent of stationary visual cues in preflight tests remained so inflight. One other became more visually dependent inflight, but recovered postflight. Susceptibility to illusions of circular self-motion increased in flight. The time to the onset of linear self-motion illusions decreased and the illusion magnitude significantly increased for most subjects while free floating in weightlessness. These decreased toward one-G levels when the subject 'stood up' in weightlessness by wearing constant-force springs. For several subjects, changing the relative direction of the subjective vertical in weightlessness, either by body

  5. UROKIN: A Software to Enhance Our Understanding of Urogenital Motion.

    Science.gov (United States)

    Czyrnyj, Catriona S; Labrosse, Michel R; Graham, Ryan B; McLean, Linda

    2018-05-01

Transperineal ultrasound (TPUS) allows for objective quantification of mid-sagittal urogenital mechanics, yet current practice omits dynamic motion information in favor of analyzing only a rest frame and a peak-motion frame. This work details the development of UROKIN, a semi-automated software package which calculates kinematic curves of urogenital landmark motion. A proof-of-concept analysis was performed using UROKIN on TPUS videos recorded from 20 women with and 10 women without stress urinary incontinence (SUI) as they performed maximum voluntary contractions of the pelvic floor muscles. The anorectal angle and bladder neck were tracked, while the motion of the pubic symphysis was used to compensate for the error incurred by TPUS probe motion during imaging. Kinematic curves of landmark motion were generated for each video, and the curves were smoothed, time-normalized, and averaged within groups. Kinematic data yielded by the UROKIN software showed statistically significant differences between women with and without SUI in terms of the magnitude and timing characteristics of the kinematic curves depicting landmark motion. The results provide insight into the ways in which UROKIN may be useful for studying differences in pelvic floor muscle contraction mechanics between women with and without SUI and other pelvic floor disorders. The UROKIN software improves on methods described in the literature and provides a unique capacity to further our understanding of urogenital biomechanics.
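The curve-processing pipeline described above (smoothing, time normalization onto a common 0-100% task cycle, and within-group averaging) can be sketched as follows; the window size, resampling count, and example trajectories are illustrative assumptions, not values from the paper:

```python
import numpy as np

def time_normalize(curve, n_points=101):
    """Resample a kinematic curve onto 0-100% of the task duration."""
    t_old = np.linspace(0.0, 1.0, len(curve))
    t_new = np.linspace(0.0, 1.0, n_points)
    return np.interp(t_new, t_old, curve)

def smooth(curve, window=5):
    """Moving-average smoothing (a simple stand-in for a low-pass filter)."""
    kernel = np.ones(window) / window
    return np.convolve(curve, kernel, mode="same")

# Hypothetical landmark trajectories (e.g. bladder-neck displacement, mm),
# recorded over trials of different durations (frame counts).
trials = [np.sin(np.linspace(0.0, np.pi, n)) for n in (40, 55, 72)]
normalized = np.vstack([time_normalize(smooth(tr)) for tr in trials])
group_mean = normalized.mean(axis=0)   # one average kinematic curve per group
```

Averaging is only meaningful after time normalization, since raw trials differ in frame count and duration.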

  6. Tracking Without Perceiving: A Dissociation Between Eye Movements and Motion Perception

    OpenAIRE

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2010-01-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adapta...

  7. Impaired Velocity Processing Reveals an Agnosia for Motion in Depth

    NARCIS (Netherlands)

    Barendregt, Martijn; Dumoulin, Serge O.; Rokers, Bas

    2016-01-01

    Many individuals with normal visual acuity are unable to discriminate the direction of 3-D motion in a portion of their visual field, a deficit previously referred to as a stereomotion scotoma. The origin of this visual deficit has remained unclear. We hypothesized that the impairment is due to a

  8. Secondary motion in three-dimensional branching networks

    Science.gov (United States)

    Guha, Abhijit; Pradhan, Kaustav

    2017-06-01

Three parameters (ES/P, δSF, and δGn) are introduced for a quantitative description of the overall features of the secondary flow field. δSF represents a non-uniformity index of the secondary flow in an individual branch, ES/P represents the mass-flow-averaged relative kinetic energy of the secondary motion in an individual branch, and δGn provides a measure of the non-uniformity of the secondary flow between the various branches of the same generation Gn. The repeated enhancement of the secondary kinetic energy in the bifurcation modules is responsible for the occurrence of significant values of ES/P even in generation G5. For both configurations, it is found that for any bifurcation module, the value of ES/P is greater in the daughter branch in which the mass-flow rate is greater. Even though the various contour plots of the complex secondary flow structure appear visually very different from one another, the values of δSF are found to lie within a small range (0.37 ≤ δSF ≤ 0.66) for the six-generation networks studied. It is shown that δGn grows as the generation number Gn increases. It is established that the out-of-plane configuration, in general, creates more secondary kinetic energy (higher ES/P), a similar level of non-uniformity in the secondary flow in an individual branch (similar δSF), and a significantly lower level of non-uniformity in the distribution of secondary motion among the various branches of the same generation (much lower δGn), compared to the in-plane arrangement of the same branches.
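The two per-branch parameters can be illustrated on a synthetic cross-sectional velocity field. The exact definitions used below (a mass-flow-averaged secondary kinetic energy fraction and a coefficient-of-variation non-uniformity index) are assumptions made for the sketch, not the paper's formulas:

```python
import numpy as np

# Hypothetical velocity field on a branch cross-section:
# w = streamwise component, (u, v) = in-plane (secondary) components.
ny, nx = 32, 32
y, x = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx), indexing="ij")
w = 1.0 - 0.5 * (x**2 + y**2)          # blunted streamwise profile (>= 0 here)
u, v = -0.2 * y, 0.2 * x               # a simple swirl as the secondary motion

rho = 1.2                               # air density, kg/m^3
k_sec = 0.5 * (u**2 + v**2)            # secondary kinetic energy per unit mass

# Mass-flow-averaged relative secondary kinetic energy (assumed definition,
# analogous in spirit to the paper's ES/P):
mdot = rho * w                          # mass flux per unit area
E_sp = np.sum(mdot * k_sec) / np.sum(mdot * 0.5 * (u**2 + v**2 + w**2))

# A non-uniformity index of the secondary flow in the branch (standing in
# for δSF): coefficient of variation of the secondary speed magnitude.
v_sec = np.hypot(u, v)
delta_sf = v_sec.std() / v_sec.mean()
```

On real data the sums would be weighted by cell areas of the CFD mesh rather than taken over a uniform grid.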

  9. Capturing Motion and Depth Before Cinematography.

    Science.gov (United States)

    Wade, Nicholas J

    2016-01-01

    Visual representations of biological states have traditionally faced two problems: they lacked motion and depth. Attempts were made to supply these wants over many centuries, but the major advances were made in the early-nineteenth century. Motion was synthesized by sequences of slightly different images presented in rapid succession and depth was added by presenting slightly different images to each eye. Apparent motion and depth were combined some years later, but they tended to be applied separately. The major figures in this early period were Wheatstone, Plateau, Horner, Duboscq, Claudet, and Purkinje. Others later in the century, like Marey and Muybridge, were stimulated to extend the uses to which apparent motion and photography could be applied to examining body movements. These developments occurred before the birth of cinematography, and significant insights were derived from attempts to combine motion and depth.

  10. Addition of visual noise boosts evoked potential-based brain-computer interface.

    Science.gov (United States)

    Xie, Jun; Xu, Guanghua; Wang, Jing; Zhang, Sicong; Zhang, Feng; Li, Yeping; Han, Chengcheng; Li, Lili

    2014-05-14

Although noise has a proven beneficial role in brain function, there have been no attempts to exploit the stochastic resonance effect in neural engineering applications, especially in research on brain-computer interfaces (BCIs). In our study, a steady-state motion visual evoked potential (SSMVEP)-based BCI combining periodic visual stimulation with moderate spatiotemporal noise achieved better offline and online performance, owing to an enhancement of the periodic components in brain responses accompanied by a suppression of higher harmonics. Offline results followed a bell-shaped, resonance-like function, and online performance improvements of 7-36% were achieved when identical visual noise was adopted for different stimulation frequencies. Using neural encoding modeling, these phenomena can be explained as noise-induced input-output synchronization in human sensory systems, which commonly possess a low-pass property. Our work demonstrates that noise can boost BCIs in addressing human needs.
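The stochastic resonance mechanism invoked here, in which a subthreshold periodic input is transmitted through a nonlinearity only when moderate noise is added, can be demonstrated with a minimal sketch (the threshold, frequencies, and noise levels are illustrative assumptions, not the study's stimulus parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f_sig, dur = 500.0, 10.0, 4.0             # Hz, Hz, seconds (illustrative)
t = np.arange(0.0, dur, 1.0 / fs)
signal = 0.8 * np.sin(2 * np.pi * f_sig * t)  # subthreshold periodic input
threshold = 1.0                               # the nonlinearity: fire above this

def output_power_at_f(noise_sd):
    """Spectral power of the thresholded response at the stimulus frequency."""
    x = signal + rng.normal(0.0, noise_sd, t.size)
    y = (x > threshold).astype(float)         # crude threshold detector
    spec = np.abs(np.fft.rfft(y - y.mean())) ** 2
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    return spec[np.argmin(np.abs(freqs - f_sig))]

powers = {sd: output_power_at_f(sd) for sd in (0.05, 0.4, 10.0)}
# Too little noise: the signal never crosses threshold; too much: the output
# is noise-dominated. Moderate noise maximizes transmission (the "bell").
```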

  11. Perception Enhancement using Visual Attributes in Sequence Motif Visualization

    OpenAIRE

    Oon, Yin; Lee, Nung; Kok, Wei

    2016-01-01

Sequence logos are a well-accepted scientific method for visualizing the conservation characteristics of biological sequence motifs. Previous studies found that using the sequence logo graphical representation for scientific evidence reports or arguments could cause serious biases and misinterpretation by users. This study investigates the performance of the visual attributes of a sequence logo in helping users to perceive and interpret the information, based on preattentive theories and Gestalt principles...

  12. Enhancement of spin Hall effect induced torques for current-driven magnetic domain wall motion: Inner interface effect

    KAUST Repository

    Bang, Do; Yu, Jiawei; Qiu, Xuepeng; Wang, Yi; Awano, Hiroyuki; Manchon, Aurelien; Yang, Hyunsoo

    2016-01-01

We investigate the current-induced domain wall motion in perpendicularly magnetized Tb/Co wires with structure inversion asymmetry and different layered structures. We find that the critical current density to drive domain wall motion strongly depends on the layered structure. The lowest critical current density, ∼15 MA/cm², and the highest slope of the domain wall velocity curve are obtained for the wire having thin Co sublayers and more inner Tb/Co interfaces, while the largest critical current density, ∼26 MA/cm², required to drive domain walls is observed in the Tb-Co alloy magnetic wire. It is found that the Co/Tb interface contributes negligibly to the Dzyaloshinskii-Moriya interaction, while the effective spin-orbit torque strongly depends on the number of Tb/Co inner interfaces (n). An enhancement of the antidamping torques by an extrinsic spin Hall effect due to Tb rare-earth impurity-induced skew scattering is suggested to explain the high efficiency of the current-induced domain wall motion.

  13. Enhancement of spin Hall effect induced torques for current-driven magnetic domain wall motion: Inner interface effect

    KAUST Repository

    Bang, Do

    2016-05-23

We investigate the current-induced domain wall motion in perpendicularly magnetized Tb/Co wires with structure inversion asymmetry and different layered structures. We find that the critical current density to drive domain wall motion strongly depends on the layered structure. The lowest critical current density, ∼15 MA/cm², and the highest slope of the domain wall velocity curve are obtained for the wire having thin Co sublayers and more inner Tb/Co interfaces, while the largest critical current density, ∼26 MA/cm², required to drive domain walls is observed in the Tb-Co alloy magnetic wire. It is found that the Co/Tb interface contributes negligibly to the Dzyaloshinskii-Moriya interaction, while the effective spin-orbit torque strongly depends on the number of Tb/Co inner interfaces (n). An enhancement of the antidamping torques by an extrinsic spin Hall effect due to Tb rare-earth impurity-induced skew scattering is suggested to explain the high efficiency of the current-induced domain wall motion.

  14. Parkinson-related changes of activation in visuomotor brain regions during perceived forward self-motion.

    Directory of Open Access Journals (Sweden)

    Anouk van der Hoorn

Radial expanding optic flow is a visual consequence of forward locomotion. Presented on screen, it generates illusory forward self-motion, pointing at a close vision-gait interrelation. As parkinsonian gait in particular is vulnerable to external stimuli, the effects of optic flow on motor-related cerebral circuitry were explored with functional magnetic resonance imaging in healthy controls (HC) and patients with Parkinson's disease (PD). Fifteen HC and 22 PD patients, of whom 7 experienced freezing of gait (FOG), watched wide-field flow, interruptions by narrowing or deceleration, and equivalent control conditions with static dots. Statistical parametric mapping revealed that wide-field flow interruption evoked activation of the (pre-)supplementary motor area (SMA) in HC, which was decreased in PD. During wide-field flow, dorsal occipito-parietal activations were reduced in PD relative to HC, with stronger functional connectivity between right visual motion area V5, pre-SMA, and cerebellum (in PD without FOG). Non-specific 'changes' in stimulus patterns activated dorsolateral fronto-parietal regions and the fusiform gyrus. This attention-associated network was more strongly activated in HC than in PD. PD patients thus appeared compromised in recruiting medial frontal regions facilitating internally generated virtual locomotion when visual motion support falls away. Reduced dorsal visual and parietal activations during wide-field optic flow in PD were explained by impaired feedforward visual and visuomotor processing within a magnocellular (visual motion) functional chain. Compensation of impaired feedforward processing by distant fronto-cerebellar circuitry in PD is consistent with motor responses to visual motion stimuli being either too strong or too weak. The 'change'-related activations pointed at covert (stimulus-driven) attention.

  15. Criterion-free measurement of motion transparency perception at different speeds

    Science.gov (United States)

    Rocchi, Francesca; Ledgeway, Timothy; Webb, Ben S.

    2018-01-01

Transparency perception often occurs when objects within the visual scene partially occlude each other or move at the same time, at different velocities, across the same spatial region. Although transparent motion perception has been extensively studied, we still do not understand how the distribution of velocities within a visual scene contributes to transparent perception. Here we use a novel psychophysical procedure to characterize the distribution of velocities in a scene that gives rise to transparent motion perception. To prevent participants from adopting a subjective decision criterion when discriminating transparent motion, we used an "odd-one-out," three-alternative forced-choice procedure. Two intervals contained the standard: a random-dot kinematogram with dot speeds or directions sampled from a uniform distribution. The other interval contained the comparison: speeds or directions sampled from a distribution with the same range as the standard, but with a notch of different widths removed. Our results suggest that transparent motion perception is driven primarily by relatively slow speeds, and does not emerge when only very fast speeds are present within a visual scene. Transparent perception of moving surfaces is modulated by stimulus-based characteristics, such as the separation between the means of the overlapping distributions or the range of speeds presented within an image. Our work illustrates the utility of using objective, forced-choice methods to reveal the mechanisms underlying motion transparency perception. PMID:29614154
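The notched-distribution stimulus described above, a uniform speed distribution with a sub-range removed, can be sampled with a short sketch (the speed range and notch bounds are hypothetical, in deg/s):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_speeds(n, lo=1.0, hi=9.0, notch=None):
    """Draw dot speeds from a uniform distribution on [lo, hi].

    `notch=(a, b)` removes that sub-range, as in the comparison interval of
    the odd-one-out task (the range and notch values are hypothetical).
    """
    if notch is None:
        return rng.uniform(lo, hi, n)
    a, b = notch
    keep = (hi - lo) - (b - a)                 # total length remaining
    u = rng.uniform(0.0, keep, n) + lo
    return np.where(u < a, u, u + (b - a))     # jump over the removed notch

standard = sample_speeds(500)                      # uniform: one surface
comparison = sample_speeds(500, notch=(4.0, 6.0))  # bimodal: transparency
```

Shifting the uniform draw past the notch keeps the density uniform on the two remaining sub-ranges, which is what makes the comparison interval bimodal while preserving the overall range.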

  16. Designing and testing scene enhancement algorithms for patients with retina degenerative disorders

    Directory of Open Access Journals (Sweden)

    Downes Susan M

    2010-06-01

Abstract Background Retina degenerative disorders represent the primary cause of blindness in the UK and in the developed world. In particular, Age-Related Macular Degeneration (AMD) and Retinitis Pigmentosa (RP) are of interest to this study. We have therefore created new image processing algorithms for enhancing the visual scenes presented to these patients. Methods In this paper we present three novel image enhancement techniques aimed at enhancing the remaining visual information for patients suffering from retinal dystrophies. Currently, the only effective way to test novel technology for visual enhancement is to undergo testing on large numbers of patients. To test our techniques, we have therefore built a retinal image processing model and compared the results to data from patient testing. In particular we focus on the ability of our image processing techniques to achieve improved face detection and enhanced edge perception. Results Results from our model are compared to actual data obtained from testing the performance of these algorithms on 27 patients with an average visual acuity of 0.63 and an average contrast sensitivity of 1.22. Results show that the Tinted Reduced Outlined Nature (TRON) and Edge Overlaying algorithms are most beneficial for dynamic scenes such as motion detection. Image Cartoonization was most beneficial for spatial feature detection such as face detection. Patients stated that they would most like to see Cartoonized images for use in daily life. Conclusions Results obtained from our retinal model and from patients show that there is potential for these image processing techniques to improve visual function amongst the visually impaired community. In addition, our methodology, using face detection and the efficiency of perceived edges to determine the potential benefit derived from different image enhancement algorithms, could also prove useful in quantitatively assessing algorithms in future studies.
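An edge-overlay enhancement of the kind evaluated here can be sketched in a few lines: compute an edge map, then paint the strong edges back onto the image at maximum brightness. The Sobel operator, threshold, and test image below are illustrative choices, not the paper's exact pipeline:

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via 3x3 Sobel filters (pure NumPy, valid region)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

def edge_overlay(img, thresh=50.0):
    """Paint strong edges back onto the image at maximum brightness."""
    out = img.astype(float).copy()
    mask = sobel_edges(img) > thresh   # aligned with the interior of `out`
    out[1:-1, 1:-1][mask] = 255.0
    return out

# A dim vertical boundary becomes a bright overlaid contour.
img = np.zeros((8, 8))
img[:, 4:] = 60.0
enhanced = edge_overlay(img)
```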

  17. Sunglasses with thick temples and frame constrict temporal visual field.

    Science.gov (United States)

    Denion, Eric; Dugué, Audrey Emmanuelle; Augy, Sylvain; Coffin-Pichonnet, Sophie; Mouriaux, Frédéric

    2013-12-01

Our aim was to compare the impact of two types of sunglasses on the visual field and on glare: one ("thick sunglasses") with a thick plastic frame and wide temples, and one ("thin sunglasses") with a thin metal frame and thin temples. Using the Goldmann perimeter, visual field surface areas (cm²) were calculated as projections on a 30-cm virtual cupola. A V4 test object was used, from seen to unseen, in 15 healthy volunteers in the primary position of gaze ("base visual field"), then allowing eye motion ("eye motion visual field") without glasses, then with the "thin sunglasses," followed by the "thick sunglasses." Visual field surface area differences greater than the 14% reproducibility error of the method and reaching statistical significance were retained. The visual field was significantly decreased with the "thick sunglasses"; this decrease was most severe in the temporal quadrant (-33%). The "eye motion visual field" was lower with the "thick sunglasses" than with the "thin sunglasses." The wider temporal coverage of the "thick sunglasses" is offset by the much poorer ability to use lateral space exploration; this results in a loss of most, if not all, of the additional visual field gained through eye motion.
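The projection of angular field extents onto a 30-cm cupola can be reproduced with a spherical-cap integration; the isopter sampling scheme below is an illustrative assumption, not the authors' exact method:

```python
import numpy as np

R = 30.0  # cm, radius of the virtual cupola used for the projection

def field_surface_area(meridians_deg, eccentricities_deg):
    """Surface area (cm^2) enclosed by an isopter, given the eccentricity
    (angle away from the line of sight) reached on each meridian.

    Uses the spherical-cap relation dA = R^2 * (1 - cos(ecc)) * d(theta),
    integrated around the contour with the trapezoidal rule."""
    th = np.radians(np.asarray(meridians_deg, dtype=float))
    ecc = np.radians(np.asarray(eccentricities_deg, dtype=float))
    th = np.append(th, th[0] + 2.0 * np.pi)   # close the contour
    ecc = np.append(ecc, ecc[0])
    f = 1.0 - np.cos(ecc)
    return R ** 2 * np.sum(np.diff(th) * (f[:-1] + f[1:]) / 2.0)

# Sanity check: a uniform 90-degree field covers a hemisphere, 2*pi*R^2.
area = field_surface_area(np.arange(0, 360, 45), np.full(8, 90.0))
```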

  18. A Motion-Adaptive Deinterlacer via Hybrid Motion Detection and Edge-Pattern Recognition

    Directory of Open Access Journals (Sweden)

    He-Yuan Lin

    2008-03-01

A novel motion-adaptive deinterlacing algorithm with edge-pattern recognition and hybrid motion detection is introduced. The great variety of video content makes the processing of assorted motion, edges, textures, and combinations of them very difficult with a single algorithm. The edge-pattern recognition algorithm introduced in this paper exhibits flexibility in processing both textures and edges, which previously had to be handled separately by line averaging and edge-based line averaging. Moreover, predicting the neighboring pixels for pattern analysis and interpolation further enhances the adaptability of the edge-pattern recognition unit when motion detection is incorporated. Our hybrid motion detection features accurate detection of fast and slow motion in interlaced video, as well as motion with edges. Using only three fields for detection also yields higher temporal correlation for interpolation. The better performance of our deinterlacing algorithm, with higher content-adaptability and lower memory cost than state-of-the-art 4-field motion detection algorithms, can be seen from the subjective and objective experimental results on the CIF and PAL video sequences.
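Edge-based line averaging, one of the baseline interpolators mentioned above, can be sketched as follows: for each missing pixel, compare three candidate directions through the lines above and below, and average along the direction with the smallest difference. This is a minimal single-line sketch, not the paper's full edge-pattern recognition unit:

```python
import numpy as np

def ela_interpolate(above, below):
    """Edge-based line average (ELA): reconstruct a missing field line from
    the lines above and below, interpolating along the direction of least
    difference (135 degrees, vertical, or 45 degrees)."""
    out = np.empty_like(above, dtype=float)
    n = above.size
    for j in range(n):
        jl, jr = max(j - 1, 0), min(j + 1, n - 1)
        diffs = [abs(above[jl] - below[jr]),   # 135-degree direction
                 abs(above[j] - below[j]),     # vertical
                 abs(above[jr] - below[jl])]   # 45-degree direction
        pairs = [(above[jl], below[jr]),
                 (above[j], below[j]),
                 (above[jr], below[jl])]
        a, b = pairs[int(np.argmin(diffs))]
        out[j] = (float(a) + float(b)) / 2.0
    return out

# A diagonal edge: plain vertical averaging would blur it, ELA follows it.
above = np.array([0, 0, 0, 9, 9, 9], dtype=float)
below = np.array([0, 9, 9, 9, 9, 9], dtype=float)
print(ela_interpolate(above, below))  # -> [0. 0. 9. 9. 9. 9.]
```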

  19. A Motion-Adaptive Deinterlacer via Hybrid Motion Detection and Edge-Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Li Hsin-Te

    2008-01-01

A novel motion-adaptive deinterlacing algorithm with edge-pattern recognition and hybrid motion detection is introduced. The great variety of video content makes the processing of assorted motion, edges, textures, and combinations of them very difficult with a single algorithm. The edge-pattern recognition algorithm introduced in this paper exhibits flexibility in processing both textures and edges, which previously had to be handled separately by line averaging and edge-based line averaging. Moreover, predicting the neighboring pixels for pattern analysis and interpolation further enhances the adaptability of the edge-pattern recognition unit when motion detection is incorporated. Our hybrid motion detection features accurate detection of fast and slow motion in interlaced video, as well as motion with edges. Using only three fields for detection also yields higher temporal correlation for interpolation. The better performance of our deinterlacing algorithm, with higher content-adaptability and lower memory cost than state-of-the-art 4-field motion detection algorithms, can be seen from the subjective and objective experimental results on the CIF and PAL video sequences.

  20. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    Directory of Open Access Journals (Sweden)

    Richard Chiou

    2010-06-01

This paper discusses a real-time e-Lab learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3-dimensional scheme for viewing the robotic laboratory has been introduced in addition to remote control of the robots. The uniqueness of the project lies in making this process Internet-based, with the robots remotely operated and visualized in 3D. This 3D system approach provides the students with a more realistic feel of the 3D robotic laboratory even though they are working remotely. As a result, the 3D visualization technology has been tested as part of a laboratory in the MET 205 Robotics and Mechatronics class and has received positive feedback from most of the students. This type of research has introduced a new level of realism and visual communication to online laboratory learning in a remote classroom.

  1. An Investigation of the Differential Effects of Visual Input Enhancement on the Vocabulary Learning of Iranian EFL Learners

    Directory of Open Access Journals (Sweden)

    Zhila Mohammadnia

    2014-07-01

This study investigated the effect of visual input enhancement on the vocabulary learning of Iranian EFL learners. One hundred and thirty-two EFL learners from elementary, intermediate, and advanced proficiency levels were assigned to six groups, two groups at each proficiency level, with one being an experimental and the other a control group. The study employed pretests, treatment reading texts, and posttests. T-tests were used for the analysis of the data. The results revealed positive effects for visual input enhancement at the advanced level, based on within-group and between-groups comparisons. However, this positive effect was not found for the elementary and intermediate levels based on between-groups comparisons. It was concluded that although visual input enhancement may have beneficial effects for elementary and intermediate levels, it is much more effective for advanced EFL learners. This study may provide useful guiding principles for EFL teachers and syllabus designers.

  2. Local and global limits on visual processing in schizophrenia.

    Directory of Open Access Journals (Sweden)

    Marc S Tibber

Schizophrenia has been linked to impaired performance on a range of visual processing tasks (e.g., detection of coherent motion and contour detection). It has been proposed that this is due to a general inability to integrate visual information at a global level. To test this theory, we assessed the performance of people with schizophrenia on a battery of tasks designed to probe voluntary averaging in different visual domains. Twenty-three outpatients with schizophrenia (mean age: 40±8 years; 3 female) and 20 age-matched control participants (mean age: 39±9 years; 3 female) performed a motion coherence task and three equivalent noise (averaging) tasks, the latter allowing independent quantification of local and global limits on visual processing of motion, orientation, and size. All performance measures were indistinguishable between the two groups (ps > 0.05, one-way ANCOVAs), with one exception: participants with schizophrenia pooled fewer estimates of local orientation than controls when estimating average orientation (p = 0.01, one-way ANCOVA). These data do not support the notion of a generalised visual integration deficit in schizophrenia. Instead, they suggest that distinct visual dimensions are differentially affected in schizophrenia, with a specific impairment in the integration of visual orientation information.
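The equivalent-noise logic behind these averaging tasks, in which observed variance is approximately internal variance plus external variance divided by the number of pooled samples, can be simulated with a short sketch (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def observer_estimate(sigma_ext, n_pooled, sigma_int=2.0, n_elements=64):
    """One trial: the display holds `n_elements` local orientations drawn
    from N(0, sigma_ext); the observer pools `n_pooled` of them and adds
    internal (late) noise. Units are degrees; all values are illustrative."""
    samples = rng.normal(0.0, sigma_ext, n_elements)
    pooled = rng.choice(samples, n_pooled, replace=False).mean()
    return pooled + rng.normal(0.0, sigma_int)

def estimation_sd(sigma_ext, n_pooled, trials=4000):
    return np.std([observer_estimate(sigma_ext, n_pooled) for _ in range(trials)])

# Prediction: sd^2 ~ sigma_int^2 + sigma_ext^2 / n_pooled, so pooling more
# local samples helps a lot in high external noise and barely at all in low.
high_ext = estimation_sd(16.0, n_pooled=2), estimation_sd(16.0, n_pooled=16)
low_ext = estimation_sd(1.0, n_pooled=2), estimation_sd(1.0, n_pooled=16)
```

Fitting this two-parameter model to thresholds measured at several external noise levels is what lets the equivalent-noise method separate internal noise from the number of pooled samples.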

  3. Synchronizing the tracking eye movements with the motion of a visual target: Basic neural processes.

    Science.gov (United States)

    Goffart, Laurent; Bourrelly, Clara; Quinet, Julie

    2017-01-01

In primates, the appearance of an object moving in the peripheral visual field elicits an interceptive saccade that brings the target image onto the foveae. This foveation is then maintained more or less efficiently by slow pursuit eye movements and subsequent catch-up saccades. Sometimes, the tracking is such that the gaze direction looks spatiotemporally locked onto the moving object. Such spatial synchronism is quite spectacular when one considers that the target-related signals are transmitted to the motor neurons through multiple parallel channels connecting separate neural populations with different conduction speeds and delays. Because of the delays between changes of retinal activity and changes of extraocular muscle tension, the maintenance of the target image on the fovea cannot be driven by the current retinal signals, as they correspond to past positions of the target. Yet the spatiotemporal coincidence observed during pursuit suggests that the oculomotor system is driven by a command continuously estimating the current location of the target, i.e., where it is here and now. This inference is also supported by experimental perturbation studies: when the trajectory of an interceptive saccade is experimentally perturbed, a correction saccade is produced in flight or after a short delay, and brings the gaze next to the location where unperturbed saccades would have landed at about the same time, in the absence of visual feedback. In this chapter, we explain how such correction can be supported by previous visual signals without assuming "predictive" signals encoding future target locations. We also describe the basic neural processes which gradually yield the synchronization of eye movements with the target motion. When the process fails, the gaze is driven by signals related to past locations of the target, not by estimates of its upcoming locations, and a catch-up saccade is made to reinitiate the synchronization. © 2017 Elsevier B.V. All rights reserved.
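The core idea, compensating a sensorimotor delay by extrapolating delayed position samples with an estimated velocity to obtain a "here and now" command, can be illustrated numerically (the delay, sampling rate, and constant-velocity target are illustrative assumptions, not a model from the chapter):

```python
import numpy as np

dt, delay_steps = 0.01, 8              # 10-ms steps, 80-ms visuomotor delay
t = np.arange(0.0, 2.0, dt)
target = 10.0 * t                      # target position, deg (10 deg/s ramp)

# The retinal signal reports where the target WAS, one delay ago:
delayed = np.concatenate([np.full(delay_steps, target[0]),
                          target[:-delay_steps]])

# "Here and now" command: extrapolate the delayed sample forward using a
# velocity estimate derived from the delayed signal itself.
vel_est = np.gradient(delayed, dt)
estimate = delayed + vel_est * (delay_steps * dt)

steady = slice(delay_steps + 1, None)  # ignore the start-up transient
lag_raw = np.abs(target - delayed)[steady].max()    # raw tracking lag
lag_comp = np.abs(target - estimate)[steady].max()  # lag after compensation
```

For this constant-velocity target the raw command lags by velocity times delay (0.8 deg), while the extrapolated command coincides with the target's current position, using only past visual samples and no signal encoding future locations.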

  4. Perceived state of self during motion can differentially modulate numerical magnitude allocation.

    OpenAIRE

    Arshad, Q; Nigmatullina, Y; Roberts, RE; Goga, U; Pikovsky, M; Khan, S; Lobo, R; Flury, AS; Pettorossi, VE; Cohen-Kadosh, R; Malhotra, PA; Bronstein, AM

    2016-01-01

Although a direct relationship between numerical allocation and spatial attention has been proposed, recent research suggests these processes are not directly coupled. In keeping with this, spatial attention shifts induced either via visual or vestibular motion can modulate numerical allocation in some circumstances but not in others. In addition to shifting spatial attention, visual or vestibular motion paradigms also (i) elicit compensatory eye movements, which themselves can influence numer...

  5. Computed and experimental motion picture determination of bubble and solids motion in a two-dimensional fluidized-bed with a jet and immersed obstacle

    International Nuclear Information System (INIS)

    Lyczkowski, R.W.; Bouillard, J.; Gidaspow, D.

    1986-01-01

Bubble and solids motion in a two-dimensional rectangular fluidized bed having a high-speed central jet with a rectangular obstacle above it and secondary air flow at minimum fluidization has been computer modeled. Computer-generated motion pictures have been found to be necessary for analyzing the computations, since there are so many time-dependent, complex phenomena that are difficult to comprehend otherwise. Comparison of the computer-generated motion pictures with high-speed motion pictures of a flow visualization experiment reveals good agreement.