WorldWideScience

Sample records for motion perception

  1. Perception of biological motion in visual agnosia

    Directory of Open Access Journals (Sweden)

    Elisabeth Huberle

    2012-08-01

    Over the past twenty-five years, visual processing has been discussed in the context of the dual-stream hypothesis, which distinguishes a ventral ('what') and a dorsal ('where') visual information processing pathway. Patients with brain damage to the ventral pathway typically present with signs of visual agnosia, the inability to identify and discriminate objects by visual exploration, but show normal motion perception. A dissociation between the perception of biological motion and non-biological motion has been suggested: perception of biological motion might be impaired when non-biological motion perception is intact, and vice versa. The impact of object recognition on the perception of biological motion remains unclear. We thus investigated this question in a patient with severe visual agnosia, who showed normal perception of non-biological motion. The data suggested that the patient's perception of biological motion remained largely intact. However, when tested with objects constructed of coherently moving dots ('shape-from-motion'), recognition was severely impaired. The results are discussed in the context of possible mechanisms of biological motion perception.

  2. Deficient Biological Motion Perception in Schizophrenia: Results from a Motion Noise Paradigm

    Directory of Open Access Journals (Sweden)

    Jejoong Kim

    2013-07-01

    Background: Schizophrenia patients exhibit deficient processing of perceptual and cognitive information. However, it is not well understood how basic perceptual deficits contribute to higher-level cognitive problems in this mental disorder. Perception of biological motion, a motion-based cognitive recognition task, relies on both basic visual motion processing and social cognitive processing, thus providing a useful paradigm to evaluate the potentially hierarchical relationship between these two levels of information processing. Methods: In this study, we designed a biological motion paradigm in which basic visual motion signals were manipulated systematically by incorporating different levels of motion noise. We measured the performance of schizophrenia patients (n = 21) and healthy controls (n = 22) in this biological motion perception task, as well as in coherent motion detection, theory of mind, and a widely used biological motion recognition task. Results: Schizophrenia patients performed the biological motion perception task with significantly lower accuracy than healthy controls when perceptual signals were moderately degraded by noise. A more substantial degradation of perceptual signals, through additional noise, impaired biological motion perception in both groups. Performance levels on biological motion recognition, coherent motion detection, and theory of mind tasks were also reduced in patients. Conclusion: The results from the motion-noise biological motion paradigm indicate that in the presence of visual motion noise, the processing of biological motion information in schizophrenia is deficient. Combined with the results of poor basic visual motion perception (coherent motion task) and biological motion recognition, the association between basic motion signals and biological motion perception suggests a need to incorporate the improvement of visual motion perception into social cognitive remediation.
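
    The noise manipulation described in this record can be illustrated with a short sketch: a hypothetical stimulus generator in which a fixed set of point-light "signal" dots is embedded among noise dots whose directions are drawn at random, so that the number of noise dots sets the noise level. The dot counts, field size, and speeds below are illustrative assumptions, not the study's actual parameters.

      import numpy as np

      def make_noisy_display(signal_xy, signal_vel, n_noise, noise_speed, rng):
          """Embed point-light 'signal' dots among noise dots moving in random directions.

          signal_xy  : (n_signal, 2) dot positions (deg)
          signal_vel : (n_signal, 2) dot velocities (deg/s)
          n_noise    : number of noise dots added (the noise level)
          noise_speed: speed of every noise dot (deg/s)
          """
          noise_xy = rng.uniform(-5.0, 5.0, size=(n_noise, 2))      # scatter noise dots over the field
          theta = rng.uniform(0.0, 2.0 * np.pi, size=n_noise)       # one random direction per noise dot
          noise_vel = noise_speed * np.column_stack((np.cos(theta), np.sin(theta)))
          return np.vstack((signal_xy, noise_xy)), np.vstack((signal_vel, noise_vel))

      rng = np.random.default_rng(0)
      walker_xy = rng.uniform(-1.0, 1.0, size=(13, 2))    # stand-in for 13 point-light joints
      walker_vel = np.tile([0.5, 0.0], (13, 1))           # stand-in joint velocities
      dots_xy, dots_vel = make_noisy_display(walker_xy, walker_vel, n_noise=50, noise_speed=0.5, rng=rng)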

  3. Direct Contribution of Auditory Motion Information to Sound-Induced Visual Motion Perception

    Directory of Open Access Journals (Sweden)

    Souta Hidaka

    2011-10-01

    We have recently demonstrated that alternating left-right sound sources induce motion perception in static visual stimuli along the horizontal plane (SIVM: sound-induced visual motion perception; Hidaka et al., 2009). The aim of the current study was to elucidate whether auditory motion signals, rather than auditory positional signals, can directly contribute to the SIVM. We presented static visual flashes at retinal locations outside the fovea together with lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move in a situation where auditory positional information would have little influence on the perceived position of visual stimuli; the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the auditory motion altered visual motion perception in a global motion display; in this display, different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion perception, so that there was no clear one-to-one correspondence between the auditory stimuli and each visual stimulus. These findings suggest the existence of direct interactions between the auditory and visual modalities in motion processing and motion perception.

  4. Influence of Visual Motion, Suggestion, and Illusory Motion on Self-Motion Perception in the Horizontal Plane.

    Science.gov (United States)

    Rosenblatt, Steven David; Crane, Benjamin Thomas

    2015-01-01

    A moving visual field can induce the feeling of self-motion, or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image from which the illusory motion effect had been removed. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8-s viewing interval, with the inertial stimulus occurring over the final 1 s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1-s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation; perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion
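
    The dependent measure described here (the inertial velocity at which subjects report either direction equally often) is a point of subjective equality (PSE). A minimal sketch of how such a PSE can be estimated by fitting a cumulative-Gaussian psychometric function to the proportion of rightward reports is shown below; the velocities and response proportions are made up for illustration and are not the study's data.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.stats import norm

      def psychometric(v, pse, sigma):
          """Probability of reporting rightward self-motion at inertial velocity v (cm/s)."""
          return norm.cdf(v, loc=pse, scale=sigma)

      velocities = np.array([-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0])    # hypothetical cm/s
      p_right = np.array([0.05, 0.15, 0.30, 0.55, 0.75, 0.90, 0.98])   # made-up response proportions

      (pse, sigma), _ = curve_fit(psychometric, velocities, p_right, p0=(0.0, 1.0))
      print(f"PSE = {pse:.2f} cm/s, slope parameter = {sigma:.2f}")
      # A visual condition that shifts the PSE away from zero would indicate that the
      # visual stimulus biased self-motion perception toward one direction.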

  5. Visual motion perception predicts driving hazard perception ability.

    Science.gov (United States)

    Lacherez, Philippe; Au, Sandra; Wood, Joanne M

    2014-02-01

    To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity, and to examine whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests, including visual acuity, contrast sensitivity and automated visual fields, and two tests of motion perception: sensitivity to movement of a drifting Gabor stimulus and sensitivity to displacement in a random dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured participants' response times to hazards embedded in video recordings of real-world driving, a measure that has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship between the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.
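
    The statement that the Gabor-based relationship "remained significant even after controlling for these variables" corresponds to a partial correlation (equivalently, a regression with covariates). The sketch below shows one way to compute it, using simulated data in place of the acuity, contrast-sensitivity, motion, and hazard-perception measures; the variable names and effect sizes are assumptions for illustration only.

      import numpy as np

      def partial_corr(x, y, covariates):
          """Correlation between x and y after regressing out the covariates from both."""
          Z = np.column_stack([np.ones(len(x))] + list(covariates))
          rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # residual of x given covariates
          ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # residual of y given covariates
          return np.corrcoef(rx, ry)[0, 1]

      rng = np.random.default_rng(1)
      acuity = rng.normal(size=36)                     # simulated covariate (36 observers)
      contrast = rng.normal(size=36)                   # simulated covariate
      gabor_sens = 0.5 * acuity + rng.normal(size=36)  # simulated motion sensitivity
      hpt_rt = 0.6 * gabor_sens + rng.normal(size=36)  # simulated hazard-perception response time
      print(partial_corr(gabor_sens, hpt_rt, [acuity, contrast]))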

  6. Ambiguity in Tactile Apparent Motion Perception.

    Directory of Open Access Journals (Sweden)

    Emanuela Liaci

    In von Schiller's Stroboscopic Alternative Motion (SAM) stimulus, two visually presented diagonal dot pairs, located on the corners of an imaginary rectangle, alternate with each other and induce either horizontal, vertical or, rarely, rotational motion percepts. SAM motion perception can be described by a psychometric function of the dot aspect ratio ("AR"), i.e. the relation between vertical and horizontal dot distances. Further, with equal horizontal and vertical dot distances (AR = 1), perception is biased towards vertical motion. In a series of five experiments, we presented tactile SAM versions and studied the role of AR and of different reference frames for the perception of tactile apparent motion. We presented tactile SAM stimuli and varied the ARs, while participants reported the perceived motion directions. Pairs of vibration stimulators were attached to the participants' forearms and stimulator distances were varied within and between forearms. We compared straight and rotated forearm conditions with each other in order to disentangle the roles of exogenous and endogenous reference frames. Increasing the tactile SAM's AR biased perception towards vertical motion, but the effect was weak compared to the visual modality. We found no horizontal disambiguation, even for very small tactile ARs. A forearm rotation by 90° kept the vertical bias, even though it was now coupled with small ARs. A 45° rotation condition with crossed forearms, however, evoked a strong horizontal motion bias. Existing approaches to explain the visual SAM bias fail to explain the current tactile results. Particularly puzzling is the strong horizontal bias in the crossed-forearm conditions. In the case of tactile apparent motion, there seems to be no fixed priority rule for perceptual disambiguation. Rather, the weighting of available evidence seems to depend on the degree of stimulus ambiguity, the current situation, and on the perceptual strategy of the individual

  7. Influence of Visual Motion, Suggestion, and Illusory Motion on Self-Motion Perception in the Horizontal Plane.

    Directory of Open Access Journals (Sweden)

    Steven David Rosenblatt

    A moving visual field can induce the feeling of self-motion, or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image from which the illusory motion effect had been removed. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8-s viewing interval, with the inertial stimulus occurring over the final 1 s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1-s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation, whereas illusory motion did not significantly shift self-motion perception (p > 0.1 for both illusory motion blocks). Thus, although a true moving visual field can induce self-motion, results of this

  8. Visual-vestibular interaction in motion perception

    NARCIS (Netherlands)

    Hosman, Ruud J A W; Cardullo, Frank M.; Bos, Jelte E.

    2011-01-01

    Correct perception of self-motion is of vital importance for the control of both our position and our posture when moving around in our environment. With the development of human-controlled vehicles such as bicycles, cars and aircraft, motion perception became of interest for the understanding of vehicle

  9. The role of human ventral visual cortex in motion perception

    Science.gov (United States)

    Saygin, Ayse P.; Lorenzi, Lauren J.; Egan, Ryan; Rees, Geraint; Behrmann, Marlene

    2013-01-01

    Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway, and the involvement of the ventral 'form' (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast to the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion. PMID:23983030

  10. Impaired Perception of Biological Motion in Parkinson’s Disease

    Science.gov (United States)

    Jaywant, Abhishek; Shiffrar, Maggie; Roy, Serge; Cronin-Golomb, Alice

    2016-01-01

    Objective: We examined biological motion perception in Parkinson's disease (PD). Biological motion perception is related to one's own motor function and depends on the integrity of brain areas affected in PD, including the posterior superior temporal sulcus. If deficits in biological motion perception exist, they may be specific to perceiving natural/fast walking patterns that individuals with PD can no longer perform, and may correlate with disease-related motor dysfunction. Method: 26 non-demented individuals with PD and 24 control participants viewed videos of point-light walkers and scrambled versions that served as foils, and indicated whether each video depicted a human walking. Point-light walkers varied by gait type (natural, parkinsonian) and speed (0.5, 1.0, 1.5 m/s). Participants also completed control tasks (object motion, coherent motion perception), a contrast sensitivity assessment, and a walking assessment. Results: The PD group demonstrated significantly less sensitivity to biological motion than the control group, and also showed poorer object motion perception (p = .02, Cohen's d = .68). There was no group difference in coherent motion perception. Although individuals with PD had slower walking speed and shorter stride length than control participants, gait parameters did not correlate with biological motion perception. Contrast sensitivity and coherent motion perception also did not correlate with biological motion perception. Conclusion: PD leads to a deficit in perceiving biological motion, which is independent of gait dysfunction and low-level vision changes, and may therefore arise from difficulty perceptually integrating form and motion cues in the posterior superior temporal sulcus. PMID:26949927

  11. Visual Motion Perception

    Science.gov (United States)

    1991-08-15


  12. Neck proprioception shapes body orientation and perception of motion.

    Science.gov (United States)

    Pettorossi, Vito Enrico; Schieppati, Marco

    2014-01-01

    This review article deals with some effects of neck muscle proprioception on human balance, gait trajectory, subjective straight-ahead (SSA), and self-motion perception. These effects are easily observed during neck muscle vibration, a strong stimulus for the spindle primary afferent fibers. We first recall the early findings on human balance, gait trajectory, and SSA induced by limb and neck muscle vibration. Then, more recent findings on self-motion perception of vestibular origin are described. The use of a vestibular asymmetric yaw-rotation stimulus for emphasizing the proprioceptive modulation of motion perception from the neck is mentioned. In addition, an attempt has been made to conjointly discuss the effects of unilateral neck proprioception on motion perception, SSA, and walking trajectory. Neck vibration also induces persistent aftereffects on the SSA and on self-motion perception of vestibular origin. These perceptive effects depend on the intensity, duration, and side of the conditioning vibratory stimulation, and on muscle status. These effects can be maintained for hours when prolonged high-frequency vibration is superimposed on muscle contraction. Overall, this brief outline emphasizes the contribution of neck muscle inflow to the construction and fine-tuning of the perception of body orientation and motion. Furthermore, it indicates that tonic neck-proprioceptive input may induce persistent influences on the subject's mental representation of space. These plastic changes might adapt motion sensitivity to lasting or permanent head positional or motor changes.

  13. Neck proprioception shapes body orientation and perception of motion

    Directory of Open Access Journals (Sweden)

    Vito Enrico Pettorossi

    2014-11-01

    This review article deals with some effects of neck muscle proprioception on human balance, gait trajectory, subjective straight-ahead, and self-motion perception. These effects are easily observed during neck muscle vibration, a strong stimulus for the spindle primary afferent fibers. We first recall the early findings on human balance, gait trajectory, and subjective straight-ahead induced by limb and neck muscle vibration. Then, more recent findings on self-motion perception of vestibular origin are described. The use of a vestibular asymmetric yaw-rotation stimulus for emphasizing the proprioceptive modulation of motion perception from the neck is mentioned. In addition, an attempt has been made to conjointly discuss the effects of unilateral neck proprioception on motion perception, subjective straight-ahead and walking trajectory. Neck vibration also induces persistent aftereffects on the subjective straight-ahead and on self-motion perception of vestibular origin. These perceptive effects depend on the intensity, duration, and side of the conditioning vibratory stimulation, and on muscle status. These effects can be maintained for hours when prolonged high-frequency vibration is superimposed on muscle contraction. Overall, this brief outline emphasizes the contribution of neck muscle inflow to the construction and fine-tuning of the perception of body orientation and motion. Furthermore, it indicates that tonic neck proprioceptive input may induce persistent influences on the subject's mental representation of space. These plastic changes might adapt motion sensitivity to lasting or permanent head positional or motor changes.

  14. The perception of object versus objectless motion.

    Science.gov (United States)

    Hock, Howard S; Nichols, David F

    2013-05-01

    Wertheimer's classical distinction (Zeitschrift für Psychologie und Physiologie der Sinnesorgane, 61:161-265, 1912) between beta (object) and phi (objectless) motion is elaborated here in a series of experiments concerning competition between two qualitatively different motion percepts, induced by sequential changes in luminance for two-dimensional geometric objects composed of rectangular surfaces. One of these percepts is of spreading-luminance motion that continuously sweeps across the entire object; it exhibits shape invariance and is perceived most strongly for fast speeds. Significantly for the characterization of phi as objectless motion, the spreading luminance does not involve surface boundaries or any other feature; the percept is driven solely by spatiotemporal changes in luminance. Alternatively, and for relatively slow speeds, a discrete series of edge motions can be perceived in the direction opposite to spreading-luminance motion. Akin to beta motion, the edges appear to move through intermediate positions within the object's changing surfaces. Significantly for the characterization of beta as object motion, edge motion exhibits shape dependence and is based on the detection of oppositely signed changes in contrast (i.e., counterchange) for features essential to the determination of an object's shape, the boundaries separating its surfaces. These results are consistent with area MT neurons that differ with respect to speed preference (Newsome et al., Journal of Neurophysiology, 55:1340-1351, 1986) and shape dependence (Zeki, Journal of Physiology, 236:549-573, 1974).

  15. A Pursuit Theory Account for the Perception of Common Motion in Motion Parallax.

    Science.gov (United States)

    Ratzlaff, Michael; Nawrot, Mark

    2016-09-01

    The visual system uses an extraretinal pursuit eye movement signal to disambiguate the perception of depth from motion parallax. Visual motion in the same direction as the pursuit is perceived as nearer in depth, while visual motion in the direction opposite to the pursuit is perceived as farther in depth. This explanation of depth sign applies to either an allocentric frame of reference centered on the fixation point or an egocentric frame of reference centered on the observer. A related problem is that of depth order when two stimuli have a common direction of motion. The first psychophysical study determined whether perception of egocentric depth order is adequately explained by a model employing an allocentric framework, especially when the motion parallax stimuli have common rather than divergent motion. A second study determined whether a reversal in perceived depth order, produced by a reduction in pursuit velocity, is also explained by this model employing this allocentric framework. The results show that an allocentric model can explain both the egocentric perception of depth order with common motion and the perceptual depth-order reversal created by a reduction in pursuit velocity. We conclude that an egocentric model is not the only explanation for perceived depth order in these common motion conditions. © The Author(s) 2016.
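
    The depth-sign rule stated in the opening sentences of this abstract (motion with the pursuit is seen as near, motion against it as far) is often expressed as a motion/pursuit ratio, i.e., relative depth proportional to retinal image velocity divided by pursuit velocity. The tiny sketch below simply encodes the stated rule under that assumed sign convention; it is an illustration, not the model tested in the paper.

      # Toy depth-sign rule for motion parallax (assumed sign convention):
      # a negative motion/pursuit ratio -> "near", a positive ratio -> "far".
      def depth_sign(retinal_vel_deg_s: float, pursuit_vel_deg_s: float) -> str:
          ratio = -retinal_vel_deg_s / pursuit_vel_deg_s
          return "near" if ratio < 0 else "far"

      print(depth_sign(+1.0, +5.0))   # retinal motion in the pursuit direction -> near
      print(depth_sign(-1.0, +5.0))   # retinal motion opposite to pursuit      -> far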

  16. Tracking without perceiving: a dissociation between eye movements and motion perception.

    Science.gov (United States)

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-02-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.

  17. Contrast and assimilation in motion perception and smooth pursuit eye movements.

    Science.gov (United States)

    Spering, Miriam; Gegenfurtner, Karl R

    2007-09-01

    The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently perturbed simultaneously to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.
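
    The two read-outs contrasted in this abstract can each be written in one line: the perceptual estimate follows target velocity minus context velocity (motion contrast), whereas the pursuit command follows their average (motion assimilation). The numbers below are arbitrary, and the unweighted forms are a simplification of whatever weighting the data actually support.

      target_vel = 12.0    # deg/s, example target speed during the perturbation
      context_vel = 2.0    # deg/s, example context (background) speed

      perceived_vel = target_vel - context_vel         # motion contrast: perception
      pursuit_vel = (target_vel + context_vel) / 2.0   # motion assimilation: smooth pursuit

      print(perceived_vel, pursuit_vel)   # 10.0 vs 7.0: same inputs, different computations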

  18. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    Science.gov (United States)

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of the visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information to determining the direction of self-motion perception. To examine this, a visual stimulus projected on a hemispherical screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on the experimental condition. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the direction opposite to the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction as the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine the perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of the visual and auditory information. PMID:26113828

  19. Audiovisual associations alter the perception of low-level visual motion

    Directory of Open Access Journals (Sweden)

    Hulusi Kafaligonul

    2015-03-01

    Motion perception is a pervasive feature of vision and is affected by both the immediate pattern of sensory inputs and prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system and that early-level visual motion processing has some potential role.
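
    Reverse-phi motion, mentioned as one of the stimulus classes here, is conventionally produced by displacing dots while inverting their contrast polarity on each frame, which makes low-level motion detectors signal the opposite direction. A minimal two-frame generator is sketched below; the dot count, field size, and step size are illustrative and not the study's parameters.

      import numpy as np

      def two_frame_dots(n_dots=100, step=0.2, reverse_phi=False, rng=None):
          """Return two frames of (x, y, contrast) dots displaced rightward by `step` deg."""
          if rng is None:
              rng = np.random.default_rng(0)
          xy = rng.uniform(-5.0, 5.0, size=(n_dots, 2))
          c1 = rng.choice([-1.0, 1.0], size=n_dots)    # frame-1 contrast polarity per dot
          c2 = -c1 if reverse_phi else c1              # invert polarity on frame 2 for reverse-phi
          frame1 = np.column_stack((xy, c1))
          frame2 = np.column_stack((xy + [step, 0.0], c2))
          return frame1, frame2

      regular = two_frame_dots(reverse_phi=False)
      reverse = two_frame_dots(reverse_phi=True)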

  20. Self-motion perception: assessment by real-time computer-generated animations

    Science.gov (United States)

    Parker, D. E.; Phillips, J. O.

    2001-01-01

    We report a new procedure for assessing complex self-motion perception. In three experiments, subjects manipulated a 6 degree-of-freedom magnetic-field tracker which controlled the motion of a virtual avatar so that its motion corresponded to the subjects' perceived self-motion. The real-time animation created by this procedure was stored using a virtual video recorder for subsequent analysis. Combined real and illusory self-motion and vestibulo-ocular reflex eye movements were evoked by cross-coupled angular accelerations produced by roll and pitch head movements during passive yaw rotation in a chair. Contrary to previous reports, illusory self-motion did not correspond to expectations based on semicircular canal stimulation. Illusory pitch head-motion directions were as predicted for only 37% of trials, whereas slow-phase eye movements were in the predicted direction for 98% of the trials. The real-time computer-generated animation procedure permits the use of naive, untrained subjects who lack a vocabulary for reporting motion perception, and is applicable to basic self-motion perception studies, evaluation of motion simulators, assessment of balance disorders and so on.

  1. Criterion-free measurement of motion transparency perception at different speeds

    Science.gov (United States)

    Rocchi, Francesca; Ledgeway, Timothy; Webb, Ben S.

    2018-01-01

    Transparency perception often occurs when objects within the visual scene partially occlude each other or move at the same time, at different velocities, across the same spatial region. Although transparent motion perception has been extensively studied, we still do not understand how the distribution of velocities within a visual scene contributes to transparent motion perception. Here we use a novel psychophysical procedure to characterize the distribution of velocities in a scene that gives rise to transparent motion perception. To prevent participants from adopting a subjective decision criterion when discriminating transparent motion, we used an "odd-one-out," three-alternative forced-choice procedure. Two intervals contained the standard: a random-dot kinematogram with dot speeds or directions sampled from a uniform distribution. The other interval contained the comparison: speeds or directions sampled from a distribution with the same range as the standard, but with a notch of different widths removed. Our results suggest that transparent motion perception is driven primarily by relatively slow speeds, and does not emerge when only very fast speeds are present within a visual scene. Transparent perception of moving surfaces is modulated by stimulus-based characteristics, such as the separation between the means of the overlapping distributions or the range of speeds presented within an image. Our work illustrates the utility of using objective, forced-choice methods to reveal the mechanisms underlying motion transparency perception. PMID:29614154
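
    The standard/comparison construction described above can be sketched directly: both intervals draw dot speeds from the same overall range, but the comparison interval has a central "notch" of a given width removed. The range and notch width below are placeholders rather than the study's values.

      import numpy as np

      def sample_speeds(n_dots, lo, hi, notch_width=0.0, rng=None):
          """Sample dot speeds uniformly on [lo, hi], excluding a central notch of the given width."""
          if rng is None:
              rng = np.random.default_rng(0)
          centre = 0.5 * (lo + hi)
          speeds = []
          while len(speeds) < n_dots:
              s = rng.uniform(lo, hi)
              if abs(s - centre) >= notch_width / 2.0:   # reject speeds falling inside the notch
                  speeds.append(s)
          return np.array(speeds)

      standard = sample_speeds(100, lo=1.0, hi=9.0)                     # full uniform speed range
      comparison = sample_speeds(100, lo=1.0, hi=9.0, notch_width=4.0)  # same range, central notch removed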

  2. IQ Predicts Biological Motion Perception in Autism Spectrum Disorders

    Science.gov (United States)

    Rutherford, M. D.; Troje, Nikolaus F.

    2012-01-01

    Biological motion is easily perceived by neurotypical observers when encoded in point-light displays. Some but not all relevant research shows significant deficits in biological motion perception among those with ASD, especially with respect to emotional displays. We tested adults with and without ASD on the perception of masked biological motion…

  3. Contextual effects on motion perception and smooth pursuit eye movements.

    Science.gov (United States)

    Spering, Miriam; Gegenfurtner, Karl R

    2008-08-15

    Smooth pursuit eye movements are continuous, slow rotations of the eyes that allow us to follow the motion of a visual object of interest. These movements are closely related to sensory inputs from the visual motion processing system. To track a moving object in the natural environment, its motion first has to be segregated from the motion signals provided by surrounding stimuli. Here, we review experiments on the effect of the visual context on motion processing with a focus on the relationship between motion perception and smooth pursuit eye movements. While perception and pursuit are closely linked, we show that they can behave quite distinctly when required by the visual context.

  4. A research on motion design for APP's loading pages based on time perception

    Science.gov (United States)

    Cao, Huai; Hu, Xiaoyun

    2018-04-01

    Owing to objective constraints such as network bandwidth and hardware performance, waiting is still an inevitable part of using mobile products. Relevant research shows that users' feelings in a waiting scenario can affect their evaluation of the whole product and the services it provides. With the development of user experience and interface design as disciplines, the role of motion effects in interface design has attracted more and more scholarly attention. Current research on motion design in waiting scenarios, however, remains incomplete. This article uses basic theory and experimental research methods from cognitive psychology to explore the impact of motion design on users' time perception while they wait for an app's pages to load. The article first analyzes the factors that affect the waiting experience of loading pages based on the theory of time perception, and then discusses how motion design influences perceived waiting time during loading and the corresponding design strategy. Moreover, through an analysis of existing loading motion designs, the article classifies existing loading motions and designs an experiment to verify the impact of different types of motion on users' time perception. The results show that perceived waiting time in mobile apps is related to the type of loading motion, and that the combined type of loading motion can effectively shorten perceived waiting time, as it scored a higher mean value on the time-perception measure.

  5. Clinical significance of perceptible fetal motion.

    Science.gov (United States)

    Rayburn, W F

    1980-09-15

    The monitoring of fetal activity during the last trimester of pregnancy has been proposed to be useful in assessing fetal welfare. The maternal perception of fetal activity was tested among 82 patients using real-time ultrasonography. All perceived fetal movements were visualized on the scanner and involved motion of the lower limbs. Conversely, 82% of all visualized motions of fetal limbs were perceived by the patients. All combined motions of fetal trunk with limbs were perceived by the patients and described as strong movements, whereas clusters of isolated, weak motions of the fetal limbs were less accurately perceived (56% accuracy). The number of fetal movements perceived during the 15-minute test period was significantly greater when visualized fetal motion was present (44 of 45 cases) than when it was absent (five of 10 cases). These findings reveal that perceived fetal motion is: (1) reliable; (2) related to the strength of lower limb motion; (3) increased with ruptured amniotic membranes; and (4) reassuring if considered to be active.

  6. Eye Movements in Darkness Modulate Self-Motion Perception.

    Science.gov (United States)

    Clemens, Ivar Adrianus H; Selen, Luc P J; Pomante, Antonella; MacNeilage, Paul R; Medendorp, W Pieter

    2017-01-01

    During self-motion, humans typically move the eyes to maintain fixation on the stationary environment around them. These eye movements could in principle be used to estimate self-motion, but their impact on perception is unknown. We had participants judge self-motion during different eye-movement conditions in the absence of full-field optic flow. In a two-alternative forced-choice task, participants indicated whether the second of two successive passive lateral whole-body translations was longer or shorter than the first. This task was used in two experiments. In the first (n = 8), eye movements were constrained differently in the two translation intervals by presenting either a world-fixed or body-fixed fixation point or no fixation point at all (allowing free gaze). Results show that perceived translations were shorter with a body-fixed than with a world-fixed fixation point. A linear model indicated that eye-movement signals received a weight of ∼25% for the self-motion percept. This model was independently validated in the trials without a fixation point (free gaze). In the second experiment (n = 10), gaze was free during both translation intervals. Results show that the translation with the larger eye-movement excursion was judged to be larger more often than predicted by chance, based on an oculomotor choice probability analysis. We conclude that eye-movement signals influence self-motion perception, even in the absence of visual stimulation.
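
    The linear model mentioned here assigned eye-movement signals a weight of roughly 25% in the translation estimate. A weighted-sum sketch of that idea is given below; the additive form and the unit handling are illustrative assumptions, with only the ~25% weight taken from the abstract.

      def perceived_translation(vestibular_cm, eye_signal_cm, w_eye=0.25):
          """Weighted combination of vestibular and eye-movement displacement estimates.

          w_eye ~ 0.25 reflects the ~25% weight reported for eye-movement signals;
          both inputs are expressed as equivalent displacements in cm for simplicity.
          """
          return (1.0 - w_eye) * vestibular_cm + w_eye * eye_signal_cm

      # With a body-fixed fixation point the eyes barely move, so the estimate shrinks:
      print(perceived_translation(10.0, 10.0))   # world-fixed fixation -> 10.0
      print(perceived_translation(10.0, 0.0))    # body-fixed fixation  ->  7.5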

  7. Global motion perception is associated with motor function in 2-year-old children.

    Science.gov (United States)

    Thompson, Benjamin; McKinlay, Christopher J D; Chakraborty, Arijit; Anstice, Nicola S; Jacobs, Robert J; Paudel, Nabin; Yu, Tzu-Ying; Ansell, Judith M; Wouldes, Trecia A; Harding, Jane E

    2017-09-29

    The dorsal visual processing stream, which includes V1, motion-sensitive area V5 and the posterior parietal lobe, supports visually guided motor function. Two recent studies have reported associations between global motion perception, a behavioural measure of processing in V5, and motor function in pre-school and school-aged children. This indicates a relationship between visual and motor development and also supports the use of global motion perception to assess overall dorsal stream function in studies of human neurodevelopment. We investigated whether associations between vision and motor function were present at 2 years of age, a substantially earlier stage of development. The Bayley III test of Infant and Toddler Development and measures of vision including visual acuity (Cardiff Acuity Cards), stereopsis (Lang stereotest) and global motion perception were attempted in 404 2-year-old children (±4 weeks). Global motion perception (quantified as a motion coherence threshold) was assessed by observing optokinetic nystagmus in response to random dot kinematograms of varying coherence. Linear regression revealed that global motion perception was modestly, but statistically significantly, associated with Bayley III composite motor scores (r² = 0.06) and fine motor scores (r² = 0.06). Stereopsis was also associated with gross motor and fine motor scores, but unaided visual acuity was not statistically significantly associated with any of the motor scores. These results demonstrate that global motion perception and binocular vision are associated with motor function at an early stage of development. Global motion perception can be used as a partial measure of dorsal stream function from early childhood. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Biological Motion Perception in Autism

    Directory of Open Access Journals (Sweden)

    J Cusack

    2011-04-01

    Typically developing adults can readily recognize human actions, even when conveyed to them via point-like markers placed on the body of the actor (Johansson, 1973). Previous research has suggested that children affected by autism spectrum disorder (ASD) are not equally sensitive to this type of visual information (Blake et al., 2003), but it remains unknown why ASD would impact the ability to perceive biological motion. We present evidence on how adolescents and adults with autism are affected by specific factors that are important in biological motion perception, such as inter-agent synchronicity, upright/inverted presentation, etc.

  9. Being moved by the self and others: influence of empathy on self-motion perception.

    Directory of Open Access Journals (Sweden)

    Christophe Lopez

    BACKGROUND: The observation of conspecifics influences our bodily perceptions and actions: contagious yawning, contagious itching, and empathy for pain are all examples of mechanisms based on resonance between our own body and others. While there is evidence for the involvement of the mirror neuron system in the processing of motor, auditory and tactile information, it has not yet been associated with the perception of self-motion. METHODOLOGY/PRINCIPAL FINDINGS: We investigated whether viewing our own body, the body of another, and an object in motion influences self-motion perception. We found a visual-vestibular congruency effect for self-motion perception when observing self and object motion, and a reduction in this effect when observing someone else's body motion. The congruency effect was correlated with empathy scores, revealing the importance of empathy in mirroring mechanisms. CONCLUSIONS/SIGNIFICANCE: The data show that vestibular perception is modulated by agent-specific mirroring mechanisms. The observation of conspecifics in motion is an essential component of social life, and self-motion perception is crucial for the distinction between the self and the other. Finally, our results hint at the presence of a "vestibular mirror neuron system".

  10. An Adaptive Neural Mechanism for Acoustic Motion Perception with Varying Sparsity.

    Science.gov (United States)

    Shaikh, Danish; Manoonpong, Poramate

    2017-01-01

    Biological motion-sensitive neural circuits are quite adept at perceiving the relative motion of a relevant stimulus. Motion perception is a fundamental ability in neural sensory processing and crucial in target tracking tasks. Tracking a stimulus entails the ability to perceive its motion, i.e., extracting information about its direction and velocity. Here we focus on auditory motion perception of sound stimuli, which is poorly understood as compared to its visual counterpart. In earlier work we developed a bio-inspired neural learning mechanism for acoustic motion perception. The mechanism extracts directional information via a model of the peripheral auditory system of lizards. The mechanism uses only this directional information, obtained via specific motor behaviour, to learn the angular velocity of unoccluded sound stimuli in motion. In nature, however, the stimulus being tracked may be occluded by artefacts in the environment, such as an escaping prey momentarily disappearing behind a cover of trees. This article extends the earlier work by presenting a comparative investigation of auditory motion perception for unoccluded and occluded tonal sound stimuli with a frequency of 2.2 kHz in both simulation and practice. Three instances of each stimulus are employed, differing in their movement velocities: 0.5°/time step, 1.0°/time step, and 1.5°/time step. To validate the approach in practice, we implement the proposed neural mechanism on a wheeled mobile robot and evaluate its performance in auditory tracking.

  11. Balancing bistable perception during self-motion.

    Science.gov (United States)

    van Elk, Michiel; Blanke, Olaf

    2012-10-01

    In two experiments we investigated whether bistable visual perception is influenced by passive own-body displacements due to vestibular stimulation. For this we passively rotated our participants around the vertical (yaw) axis while they observed different rotating bistable stimuli (bodily or non-bodily) with different ambiguous motion directions. Based on previous work on multimodal effects on bistable perception, we hypothesized that vestibular stimulation should alter bistable perception and that the effects should differ for bodily versus non-bodily stimuli. In the first experiment, it was found that the rotation bias (i.e., the difference between the percentage of time that a CW or CCW rotation was perceived) was selectively modulated by vestibular stimulation: the perceived duration of the bodily stimuli was longer for the rotation direction congruent with the subject's own body rotation, whereas the opposite was true for the non-bodily stimulus (Necker cube). The results of the second experiment extend the findings from the first experiment and show that these vestibular effects on bistable perception only occur when the axis of rotation of the bodily stimulus matches the axis of passive own-body rotation. These findings indicate that the effect of vestibular stimulation on the rotation bias depends on the stimulus that is presented and on the rotation axis of the stimulus. Although most studies on vestibular processing have traditionally focused on multisensory signal integration for posture, balance, and heading direction, the present data show that vestibular self-motion influences the perception of bistable bodily stimuli, revealing the importance of vestibular mechanisms for visual consciousness.

  12. Motion perception and driving: predicting performance through testing and shortening braking reaction times through training.

    Science.gov (United States)

    Wilkins, Luke; Gray, Rob; Gaska, James; Winterbottom, Marc

    2013-12-30

    A driving simulator was used to examine the relationship between motion perception and driving performance. Although motion perception test scores have been shown to be related to driving safety, it is not clear which combination of tests are the best predictors and whether motion perception training can improve driving performance. In experiment 1, 60 younger drivers (22.4 ± 2.5 years) completed three motion perception tests (2-dimensional [2D] motion-defined letter [MDL] identification, 3D motion in depth sensitivity [MID], and dynamic visual acuity [DVA]) followed by two driving tests (emergency braking [EB] and hazard perception [HP]). In experiment 2, 20 drivers (21.6 ± 2.1 years) completed 6 weeks of motion perception training (using the MDL, MID, and DVA tests), while 20 control drivers (22.0 ± 2.7 years) completed an online driving safety course. The EB performance was measured before and after training. In experiment 1, MDL (r = 0.34) and MID (r = 0.46) significantly correlated with EB score. The change in DVA score as a function of target speed (i.e., "velocity susceptibility") was correlated most strongly with HP score (r = -0.61). In experiment 2, the motion perception training group had a significant decrease in brake reaction time on the EB test from pre- to posttreatment, while there was no significant change for the control group: t(38) = 2.24, P = 0.03. Tests of 3D motion perception are the best predictor of EB, while DVA velocity susceptibility is the best predictor of hazard perception. Motion perception training appears to result in faster braking responses.

  13. Tracking Without Perceiving: A Dissociation Between Eye Movements and Motion Perception

    OpenAIRE

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2010-01-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adapta...

  14. Visual-vestibular integration motion perception reporting

    Science.gov (United States)

    Harm, Deborah L.; Reschke, Millard R.; Parker, Donald E.

    1999-01-01

    Self-orientation and self/surround-motion perception derive from a multimodal sensory process that integrates information from the eyes, vestibular apparatus, and proprioceptive and somatosensory receptors. Results from short- and long-duration spaceflight investigations indicate that: (1) perceptual and sensorimotor function was disrupted during the initial exposure to microgravity and gradually improved over hours to days (individuals adapt), (2) the presence and/or absence of information from different sensory modalities differentially affected the perception of orientation, self-motion and surround-motion, (3) perceptual and sensorimotor function was initially disrupted upon return to Earth-normal gravity and gradually recovered to preflight levels (individuals readapt), and (4) the longer the exposure to microgravity, the more complete the adaptation, the more profound the postflight disturbances, and the longer the recovery period to preflight levels. While much has been learned about perceptual and sensorimotor reactions and adaptation to microgravity, there is much remaining to be learned about the mechanisms underlying the adaptive changes, and about how intersensory interactions affect perceptual and sensorimotor function during voluntary movements. During space flight, Space Motion Sickness (SMS) and perceptual disturbances have led to reductions in performance efficiency and sense of well-being. During entry and immediately after landing, such disturbances could have a serious impact on the ability of the commander to land the Orbiter and on the ability of all crew members to egress from the Orbiter, particularly in a non-nominal condition or following extended stays in microgravity. An understanding of spatial orientation and motion perception is essential for developing countermeasures for SMS and perceptual disturbances during spaceflight and upon return to Earth. Countermeasures for optimal performance in flight and a successful return to Earth require

  15. Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.

    Science.gov (United States)

    Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi

    2017-07-01

    Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion to static visual stimuli (sound-induced visual motion: SIVM): a visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activity in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually-induced motion perception, and that neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. The Importance of Spatiotemporal Information in Biological Motion Perception: White Noise Presented with a Step-like Motion Activates the Biological Motion Area.

    Science.gov (United States)

    Callan, Akiko; Callan, Daniel; Ando, Hiroshi

    2017-02-01

    Humans can easily recognize the motion of living creatures using only a handful of point-lights that describe the motion of the main joints (biological motion perception). This special ability to perceive the motion of animate objects signifies the importance of the spatiotemporal information in perceiving biological motion. The posterior STS (pSTS) and posterior middle temporal gyrus (pMTG) region have been established by many functional neuroimaging studies as a locus for biological motion perception. Because listening to a walking human also activates the pSTS/pMTG region, the region has been proposed to be supramodal in nature. In this study, we investigated whether the spatiotemporal information from simple auditory stimuli is sufficient to activate this biological motion area. We compared spatially moving white noise, having a running-like tempo that was consistent with biological motion, with stationary white noise. The moving-minus-stationary contrast showed significant differences in activation of the pSTS/pMTG region. Our results suggest that the spatiotemporal information of the auditory stimuli is sufficient to activate the biological motion area.

  17. Human Perception of Ambiguous Inertial Motion Cues

    Science.gov (United States)

    Zhang, Guan-Lu

    2010-01-01

    Human daily activities on Earth involve motions that elicit both tilt and translation components of the head (i.e. gazing and locomotion). With otolith cues alone, tilt and translation can be ambiguous, since both motions can potentially displace the otolithic membrane by the same magnitude and direction. Transitions between gravity environments (i.e. Earth, microgravity and lunar) have been demonstrated to alter the functions of the vestibular system and exacerbate the ambiguity between tilt and translational motion cues. Symptoms of motion sickness and spatial disorientation can impair human performance during critical mission phases. Specifically, Space Shuttle landing records show that particular cases of tilt-translation illusions have impaired the performance of seasoned commanders. This sensorimotor condition is one of many operational risks that may have dire implications on future human space exploration missions. The neural strategy with which the human central nervous system distinguishes ambiguous inertial motion cues remains the subject of intense research. A prevailing theory in the neuroscience field proposes that the human brain is able to formulate a neural internal model of ambiguous motion cues such that tilt and translation components can be perceptually decomposed in order to elicit the appropriate bodily response. The present work uses this theory, known as the GIF resolution hypothesis, as the framework for the experimental hypotheses. Specifically, two novel motion paradigms are employed to validate the neural capacity of ambiguous inertial motion decomposition in ground-based human subjects. The experimental setup involves the Tilt-Translation Sled at the Neuroscience Laboratory of NASA JSC. This two degree-of-freedom motion system is able to tilt subjects in the pitch plane and translate them along the fore-aft axis. Perception data will be gathered through subject verbal reports. Preliminary analysis of perceptual data does not indicate that

  18. Vestibular signals in primate cortex for self-motion perception.

    Science.gov (United States)

    Gu, Yong

    2018-04-21

    The vestibular peripheral organs in our inner ears detect transient motion of the head in everyday life. This information is sent to the central nervous system for automatic processes such as vestibulo-ocular reflexes, balance and postural control, and higher cognitive functions including perception of self-motion and spatial orientation. Recent neurophysiological studies have discovered a prominent vestibular network in the primate cerebral cortex. Many of the areas involved are multisensory: their neurons are modulated by both vestibular signals and visual optic flow, potentially facilitating more robust heading estimation through cue integration. Combining psychophysics, computation, physiological recording and causal manipulation techniques, recent work has addressed both the encoding and decoding of vestibular signals for self-motion perception. Copyright © 2018. Published by Elsevier Ltd.

  19. Perceptual Training Strongly Improves Visual Motion Perception in Schizophrenia

    Science.gov (United States)

    Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue

    2011-01-01

    Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…

  20. The upper spatial limit for perception of displacement is affected by preceding motion.

    Science.gov (United States)

    Stefanova, Miroslava; Mateeff, Stefan; Hohnsbein, Joachim

    2009-03-01

    The upper spatial limit Dmax for perception of apparent motion of a random dot pattern may be strongly affected by another, collinear, motion that precedes it [Mateeff, S., Stefanova, M., & Hohnsbein, J. (2007). Perceived global direction of a compound of real and apparent motion. Vision Research, 47, 1455-1463]. In the present study this phenomenon was studied with two-dimensional motion stimuli. A random dot pattern moved alternately in the vertical and oblique direction (zig-zag motion). The vertical motion was 1.04 degrees in length; it was produced by three discrete spatial steps of the dots. Thereafter the dots were displaced by a single spatial step in the oblique direction. Each motion lasted for 57 ms. The upper spatial limit for perception of the oblique motion was measured under two conditions: the vertical component of the oblique motion and the vertical motion were either in the same or in opposite directions. It was found that the perception of the oblique motion was strongly influenced by the relative direction of the vertical motion that preceded it; in the "same" condition the upper spatial limit was much smaller than in the "opposite" condition. Decreasing the speed of the vertical motion reversed this effect. Interpretations based on networks of motion detectors and on Gestalt theory are discussed.

  1. The effect of occlusion therapy on motion perception deficits in amblyopia.

    Science.gov (United States)

    Giaschi, Deborah; Chapman, Christine; Meier, Kimberly; Narasimhan, Sathyasri; Regan, David

    2015-09-01

    There is growing evidence for deficits in motion perception in amblyopia, but these are rarely assessed clinically. In this prospective study we examined the effect of occlusion therapy on motion-defined form perception and multiple-object tracking. Participants included children (3-10 years old) with unilateral anisometropic and/or strabismic amblyopia who were currently undergoing occlusion therapy and age-matched control children with normal vision. At the start of the study, deficits in motion-defined form perception were present in at least one eye in 69% of the children with amblyopia. These deficits were still present at the end of the study in 55% of the amblyopia group. For multiple-object tracking, deficits were present initially in 64% and finally in 55% of the children with amblyopia, even after completion of occlusion therapy. Many of these deficits persisted in spite of an improvement in amblyopic eye visual acuity in response to occlusion therapy. The prevalence of motion perception deficits in amblyopia, as well as their resistance to occlusion therapy, supports the need for new approaches to amblyopia treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Neural representations of kinematic laws of motion: evidence for action-perception coupling.

    Science.gov (United States)

    Dayan, Eran; Casile, Antonino; Levit-Binnun, Nava; Giese, Martin A; Hendler, Talma; Flash, Tamar

    2007-12-18

    Behavioral and modeling studies have established that curved and drawing human hand movements obey the 2/3 power law, which dictates a strong coupling between movement curvature and velocity. Human motion perception seems to reflect this constraint. The functional MRI study reported here demonstrates that the brain's response to this law of motion is much stronger and more widespread than to other types of motion. Compliance with this law is reflected in the activation of a large network of brain areas subserving motor production, visual motion processing, and action observation functions. Hence, these results strongly support the notion of similar neural coding for motion perception and production. These findings suggest that cortical motion representations are optimally tuned to the kinematic and geometrical invariants characterizing biological actions.

  3. Unaffected perceptual thresholds for biological and non-biological form-from-motion perception in autism spectrum conditions.

    Directory of Open Access Journals (Sweden)

    Ayse Pinar Saygin

    2010-10-01

    Full Text Available Perception of biological motion is linked to the action perception system in the human brain, abnormalities within which have been suggested to underlie impairments in social domains observed in autism spectrum conditions (ASC). However, the literature on biological motion perception in ASC is heterogeneous and it is unclear whether deficits are specific to biological motion, or might generalize to form-from-motion perception. We compared psychophysical thresholds for both biological and non-biological form-from-motion perception in adults with ASC and controls. Participants viewed point-light displays depicting a walking person (Biological Motion), a translating rectangle (Structured Object), or a translating unfamiliar shape (Unstructured Object). The figures were embedded in noise dots that moved similarly and the task was to determine the direction of movement. The number of noise dots varied on each trial and perceptual thresholds were estimated adaptively. We found no evidence for an impairment in biological or non-biological object motion perception in individuals with ASC. Perceptual thresholds in the three conditions were almost identical between the ASC and control groups. Impairments in biological motion and non-biological form-from-motion perception are therefore not universal in ASC, and are only found for some stimuli and tasks. We discuss our results in relation to other findings in the literature, the heterogeneity of which likely relates to the different tasks performed. It appears that individuals with ASC are unaffected in perceptual processing of form-from-motion, but may exhibit impairments in higher order judgments such as emotion processing. It is important to identify more specifically which processes of motion perception are impacted in ASC before a link can be made between perceptual deficits and the higher-level features of the disorder.

  4. Tuning self-motion perception in virtual reality with visual illusions.

    Science.gov (United States)

    Bruder, Gerd; Steinicke, Frank; Wieland, Phil; Lappe, Markus

    2012-07-01

    Motion perception in immersive virtual environments significantly differs from the real world. For example, previous work has shown that users tend to underestimate travel distances in virtual environments (VEs). As a solution to this problem, researchers proposed to scale the mapped virtual camera motion relative to the tracked real-world movement of a user until real and virtual motion are perceived as equal, i.e., real-world movements could be mapped with a larger gain to the VE in order to compensate for the underestimation. However, introducing discrepancies between real and virtual motion can become a problem, in particular, due to misalignments of both worlds and distorted space cognition. In this paper, we describe a different approach that introduces apparent self-motion illusions by manipulating optic flow fields during movements in VEs. These manipulations can affect self-motion perception in VEs, but omit a quantitative discrepancy between real and virtual motions. In particular, we consider to which regions of the virtual view these apparent self-motion illusions can be applied, i.e., the ground plane or peripheral vision. Therefore, we introduce four illusions and show in experiments that optic flow manipulation can significantly affect users' self-motion judgments. Furthermore, we show that with such manipulations of optic flow fields the underestimation of travel distances can be compensated.

  5. Motion perception tasks as potential correlates to driving difficulty in the elderly

    Science.gov (United States)

    Raghuram, A.; Lakshminarayanan, V.

    2006-09-01

    Changes in demographics indicate that the population older than 65 is on the rise because of the aging of the 'baby boom' generation. This aging trend and driving-related accident statistics reveal the need for procedures and tests that would assess the driving ability of older adults and predict whether they would be safe or unsafe drivers. The literature shows that an attention-based test called the useful field of view (UFOV) was a significant predictor of accident rates compared to other visual function tests. The present study evaluates a qualitative trend on using motion perception tasks as potential visual perceptual correlates in screening elderly drivers who might have difficulty in driving. Data were collected from 15 older subjects with a mean age of 71. Motion perception tasks included speed discrimination with radial and lamellar motion, time to collision using prediction motion, and estimation of heading direction. A motion index score was calculated that was indicative of performance on all of the above-mentioned motion tasks. Visual attention was assessed using the UFOV. A driving habit questionnaire was also administered for a self-report of driving difficulties and accident rates. A qualitative trend based on frequency distributions shows that thresholds on the motion perception tasks are successful in identifying subjects who reported difficulty in certain aspects of driving and had accidents. The correlation between UFOV and motion index scores was not significant, indicating that the two paradigms probably tap different aspects of visual information processing that are crucial to driving behaviour. UFOV and motion perception tasks together may be a better predictor for identifying at-risk or safe drivers than either one alone.

  6. Neural mechanisms of speed perception: transparent motion

    NARCIS (Netherlands)

    Krekelberg, Bart; van Wezel, Richard Jack Anton

    2013-01-01

    Visual motion on the macaque retina is processed by direction- and speed-selective neurons in extrastriate middle temporal cortex (MT). There is strong evidence for a link between the activity of these neurons and direction perception. However, there is conflicting evidence for a link between speed

  7. Ventral aspect of the visual form pathway is not critical for the perception of biological motion

    Science.gov (United States)

    Gilaie-Dotan, Sharon; Saygin, Ayse Pinar; Lorenzi, Lauren J.; Rees, Geraint; Behrmann, Marlene

    2015-01-01

    Identifying the movements of those around us is fundamental for many daily activities, such as recognizing actions, detecting predators, and interacting with others socially. A key question concerns the neurobiological substrates underlying biological motion perception. Although the ventral “form” visual cortex is standardly activated by biologically moving stimuli, whether these activations are functionally critical for biological motion perception or are epiphenomenal remains unknown. To address this question, we examined whether focal damage to regions of the ventral visual cortex, resulting in significant deficits in form perception, adversely affects biological motion perception. Six patients with damage to the ventral cortex were tested with sensitive point-light display paradigms. All patients were able to recognize unmasked point-light displays and their perceptual thresholds were not significantly different from those of three different control groups, one of which comprised brain-damaged patients with spared ventral cortex (n > 50). Importantly, these six patients performed significantly better than patients with damage to regions critical for biological motion perception. To assess the necessary contribution of different regions in the ventral pathway to biological motion perception, we complement the behavioral findings with a fine-grained comparison between the lesion location and extent, and the cortical regions standardly implicated in biological motion processing. This analysis revealed that the ventral aspects of the form pathway (e.g., fusiform regions, ventral extrastriate body area) are not critical for biological motion perception. We hypothesize that the role of these ventral regions is to provide enhanced multiview/posture representations of the moving person rather than to represent biological motion perception per se. PMID:25583504

  8. Applications of computer-graphics animation for motion-perception research

    Science.gov (United States)

    Proffitt, D. R.; Kaiser, M. K.

    1986-01-01

    The advantages and limitations of using computer-animated stimuli in studying motion perception are presented and discussed. Most current programs of motion perception research could not be pursued without the use of computer graphics animation. Computer-generated displays afford latitudes of freedom and control that are almost impossible to attain through conventional methods. There are, however, limitations to this presentational medium. At present, computer-generated displays present simplified approximations of the dynamics in natural events. Very little is known about how the differences between natural events and computer simulations influence perceptual processing. In practice, the differences are assumed to be irrelevant to the questions under study, and findings with computer-generated stimuli are assumed to generalize to natural events.

  9. S1-3: Perception of Biological Motion in Schizophrenia and Obsessive-Compulsive Disorder

    Directory of Open Access Journals (Sweden)

    Jejoong Kim

    2012-10-01

    Full Text Available Major mental disorders including schizophrenia, autism, and obsessive-compulsive disorder (OCD) are characterized by impaired social functioning regardless of a wide range of clinical symptoms. Past studies have also revealed that people with these mental illnesses exhibit perceptual problems with altered neural activation. For example, schizophrenia patients are deficient in processing rapid and dynamic visual stimuli. As is well documented, people are very sensitive to motion signals generated by others (i.e., biological motion), even when those motions are portrayed by point-light displays. Therefore, the ability to perceive biological motion is important for both visual perception and social functioning. Nevertheless, until about a decade ago there had been no systematic attempts to investigate biological motion perception in people with mental illnesses associated with impaired social functioning. Recently, a series of studies newly revealed abnormal patterns of biological motion perception and associated neural activations in schizophrenia and OCD. These new findings will be reviewed, focusing on perceptual and neural differences between patients with schizophrenia/OCD and healthy individuals. Implications and possible future research will then be discussed in this talk.

  10. First-person and third-person verbs in visual motion-perception regions.

    Science.gov (United States)

    Papeo, Liuba; Lingnau, Angelika

    2015-02-01

    Verb-related activity is consistently found in the left posterior lateral temporal cortex (PLTC), also encompassing regions that respond during visual motion perception. Beyond motion, those regions appear sensitive to distinctions among the entities involved, including that between first and third person ("third-person bias"). In two experiments, using functional magnetic resonance imaging (fMRI), we studied whether the implied subject (first/third person) and/or the semantic content (motor/non-motor) of verbs modulate the neural activity in the left PLTC regions responsive during basic- and biological-motion perception. In those sites, we found higher activity for verbs than for nouns. This activity was modulated by the person (but not the semantic content) of the verbs, with stronger responses to third- than first-person verbs. The third-person bias elicited by verbs supports a role of motion-processing regions in encoding information about the entity beyond (and independently from) motion, and casts the role of these regions in verb processing in a new light. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Perception of biological motion from size-invariant body representations

    Directory of Open Access Journals (Sweden)

    Markus eLappe

    2015-03-01

    Full Text Available The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.

  12. Neural dynamics of motion perception: direction fields, apertures, and resonant grouping.

    Science.gov (United States)

    Grossberg, S; Mingolla, E

    1993-03-01

    A neural network model of global motion segmentation by visual cortex is described. Called the motion boundary contour system (BCS), the model clarifies how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyze how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model hereby suggests a solution to the global aperture problem. The motion BCS describes how preprocessing of motion signals by a motion oriented contrast (MOC) filter is joined to long-range cooperative grouping mechanisms in a motion cooperative-competitive (MOCC) loop to control phenomena such as motion capture. The motion BCS is computed in parallel with the static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the motion BCS and the static BCS, specialized to process motion directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions about microscopic computational differences of the parallel cortical streams V1-->MT and V1-->V2-->MT are made--notably, the magnocellular thick stripe and parvocellular interstripe streams. It is shown how the motion BCS can compute motion directions that may be synthesized from multiple orientations with opposite directions of contrast. Interactions of model simple cells, complex cells, hyper-complex cells, and bipole cells are described, with special emphasis given to new functional roles in direction disambiguation for endstopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions.
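
    As background for the aperture problem mentioned above, the intersection-of-constraints construction shows how a single 2D velocity can be recovered from the ambiguous normal-component motions measured through one-dimensional apertures. This is a textbook computation, not the motion BCS model itself; the sketch below assumes two gratings whose normal directions and normal speeds are known:

        import numpy as np

        def ioc_velocity(normal_dirs_deg, normal_speeds):
            """Intersection of constraints: recover the 2D pattern velocity from the
            normal-component speeds of two 1D gratings (each constrains v . n = s)."""
            n = np.array([[np.cos(np.radians(a)), np.sin(np.radians(a))]
                          for a in normal_dirs_deg])      # unit normals of the gratings
            return np.linalg.solve(n, np.array(normal_speeds, dtype=float))

        # Two gratings drifting at 1 deg/s along normals of +45 and -45 deg jointly
        # constrain a single rightward pattern motion of about 1.41 deg/s.
        print(ioc_velocity([45.0, -45.0], [1.0, 1.0]))    # ~[1.414, 0.0] up to float rounding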

  13. Rocking or rolling--perception of ambiguous motion after returning from space.

    Directory of Open Access Journals (Sweden)

    Gilles Clément

    Full Text Available The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Adaptive changes during spaceflight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination, vertigo, spatial disorientation, and perceptual illusions after return to Earth. The purpose of this study was to compare tilt and translation motion perception in astronauts before and after returning from spaceflight. We hypothesized that these stimuli would be the most ambiguous in the low-frequency range (i.e., at about 0.3 Hz), where the linear acceleration can be interpreted either as a translation or as a tilt relative to gravity. Verbal reports were obtained in eleven astronauts tested using a motion-based tilt-translation device and a variable radius centrifuge before and after flying for two weeks on board the Space Shuttle. Consistent with previous studies, roll tilt perception was overestimated shortly after spaceflight and then recovered within 1-2 days. During dynamic linear acceleration (0.15-0.6 Hz, ±1.7 m/s²), perception of translation was also overestimated immediately after flight. Recovery to baseline was observed after 2 days for lateral translation and 8 days for fore-aft translation. These results suggest that there was a shift in the frequency dynamics of tilt-translation motion perception after adaptation to weightlessness. These results have implications for manual control during landing of a space vehicle after exposure to microgravity, as will be the case for human asteroid and Mars missions.

  14. Prolonged asymmetric vestibular stimulation induces opposite, long-term effects on self-motion perception and ocular responses.

    Science.gov (United States)

    Pettorossi, V E; Panichi, R; Botti, F M; Kyriakareli, A; Ferraresi, A; Faralli, M; Schieppati, M; Bronstein, A M

    2013-04-01

    Self-motion perception and the vestibulo-ocular reflex (VOR) were investigated in healthy subjects during asymmetric whole body yaw plane oscillations while standing on a platform in the dark. Platform oscillation consisted of two half-sinusoidal cycles of the same amplitude (40°) but different duration, featuring a fast (FHC) and a slow half-cycle (SHC). Rotation consisted of four or 20 consecutive cycles to probe adaptation further with the longer duration protocol. Self-motion perception was estimated by subjects tracking with a pointer the remembered position of an earth-fixed visual target. VOR was measured by electro-oculography. The asymmetric stimulation pattern consistently induced a progressive increase of asymmetry in motion perception, whereby the gain of the tracking response gradually increased during FHCs and decreased during SHCs. The effect was observed already during the first few cycles and further increased during 20 cycles, leading to a totally distorted location of the initial straight-ahead. In contrast, after some initial interindividual variability, the gain of the slow phase VOR became symmetric, decreasing for FHCs and increasing for SHCs. These oppositely directed adaptive effects in motion perception and VOR persisted for nearly an hour. Control conditions using prolonged but symmetrical stimuli produced no adaptive effects on either motion perception or VOR. These findings show that prolonged asymmetric activation of the vestibular system leads to opposite patterns of adaptation of self-motion perception and VOR. The results provide strong evidence that semicircular canal inputs are processed centrally by independent mechanisms for perception of body motion and eye movement control. These divergent adaptation mechanisms enhance awareness of movement toward the faster body rotation, while improving the eye stabilizing properties of the VOR.

  15. Multisensory perception of spatial orientation and self-motion

    NARCIS (Netherlands)

    de Winkel, K.N.

    2013-01-01

    The aim of this project was to improve our insight in how the brain combines information from different sensory systems (e.g. vestibular and visual system) into an integrated percept of self-motion and spatial orientation. Based on evidence from other research in different areas, such as hand-eye

  16. Two independent mechanisms for motion-in-depth perception: evidence from individual differences

    Directory of Open Access Journals (Sweden)

    Harold T Nefs

    2010-10-01

    Full Text Available Our forward-facing eyes allow us the advantage of binocular visual information: using the tiny differences between right and left eye views to learn about depth and location in three dimensions. Our visual systems also contain specialized mechanisms to detect motion-in-depth from binocular vision, but the nature of these mechanisms remains controversial. Binocular motion-in-depth perception could theoretically be based on first detecting binocular disparity and then monitoring how it changes over time. The alternative is to monitor the motion in the right and left eye separately and then compare these motion signals. Here we used an individual differences approach to test whether the two sources of information are processed via dissociated mechanisms, and to measure the relative importance of those mechanisms. Our results suggest the existence of two distinct mechanisms, each contributing to the perception of motion in depth in most observers. Additionally, for the first time, we demonstrate the relative prevalence of the two mechanisms within a normal population. In general, visual systems appear to rely mostly on the mechanism sensitive to changing binocular disparity, but perception of motion in depth is augmented by the presence of a less sensitive mechanism that uses interocular velocity differences. Occasionally, we find observers with the opposite pattern of sensitivity. More generally this work showcases the power of the individual differences approach in studying the functional organisation of cognitive systems.
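
    The two motion-in-depth cues contrasted above can be written down directly: the changing-disparity (CD) signal is the time derivative of the left-right positional difference, while the interocular velocity difference (IOVD) is the difference of the monocular velocities. For a rigid target the two are algebraically identical, which is why dissociating the underlying mechanisms requires specially constructed stimuli or, as here, an individual-differences approach. A small sketch with hypothetical monocular trajectories (all values invented for illustration):

        import numpy as np

        dt = 0.01
        t = np.arange(0.0, 1.0, dt)
        # Hypothetical image positions (deg) of a target approaching the observer:
        theta_left = 1.0 + 0.5 * t     # drifts one way in the left eye
        theta_right = -1.0 - 0.5 * t   # drifts the opposite way in the right eye

        cd = np.gradient(theta_left - theta_right, dt)                     # changing disparity
        iovd = np.gradient(theta_left, dt) - np.gradient(theta_right, dt)  # velocity difference

        # The two signals agree for rigid motion in depth; the mechanisms differ in
        # which monocular/binocular stages extract them, not in the end result.
        print(np.allclose(cd, iovd))   # -> True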

  17. Asymmetric vestibular stimulation reveals persistent disruption of motion perception in unilateral vestibular lesions.

    Science.gov (United States)

    Panichi, R; Faralli, M; Bruni, R; Kiriakarely, A; Occhigrossi, C; Ferraresi, A; Bronstein, A M; Pettorossi, V E

    2017-11-01

    Self-motion perception was studied in patients with unilateral vestibular lesions (UVL) due to acute vestibular neuritis at 1 wk and 4, 8, and 12 mo after the acute episode. We assessed vestibularly mediated self-motion perception by measuring the error in reproducing the position of a remembered visual target at the end of four cycles of asymmetric whole-body rotation. The oscillatory stimulus consists of a slow (0.09 Hz) and a fast (0.38 Hz) half cycle. A large error was present in UVL patients when the slow half cycle was delivered toward the lesion side, but minimal toward the healthy side. This asymmetry diminished over time, but it remained abnormally large at 12 mo. In contrast, vestibulo-ocular reflex responses showed a large direction-dependent error only initially, then they normalized. Normalization also occurred for conventional reflex vestibular measures (caloric tests, subjective visual vertical, and head shaking nystagmus) and for perceptual function during symmetric rotation. Vestibular-related handicap, measured with the Dizziness Handicap Inventory (DHI) at 12 mo, correlated with self-motion perception asymmetry but not with abnormalities in vestibulo-ocular function. We conclude that 1) a persistent self-motion perceptual bias is revealed by asymmetric rotation in UVLs despite vestibulo-ocular function becoming symmetric over time, 2) this dissociation is caused by differential perceptual-reflex adaptation to high- and low-frequency rotations when these are combined as with our asymmetric stimulus, 3) the findings imply differential central compensation for vestibuloperceptual and vestibulo-ocular reflex functions, and 4) self-motion perception disruption may mediate long-term vestibular-related handicap in UVL patients. NEW & NOTEWORTHY A novel vestibular stimulus, combining asymmetric slow and fast sinusoidal half cycles, revealed persistent vestibuloperceptual dysfunction in unilateral vestibular lesion (UVL) patients. The compensation of
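
    A sketch of one cycle of the asymmetric rotation stimulus described above: a fast half-sinusoid in one direction followed by a slow half-sinusoid back, with equal angular excursions. The 0.09 and 0.38 Hz half-cycle frequencies come from this record; the 40 deg excursion is taken from the companion protocol in record 14 above and is assumed to apply here:

        import numpy as np

        A = 40.0                       # angular excursion of each half cycle, deg (assumed)
        F_FAST, F_SLOW = 0.38, 0.09    # half-cycle frequencies, Hz

        def asymmetric_cycle(dt=0.01):
            """One cycle of the asymmetric yaw profile: position rises 0 -> A along a
            fast half-sinusoid, then returns A -> 0 along a slow half-sinusoid."""
            t_fast = np.arange(0.0, 1.0 / (2.0 * F_FAST), dt)
            t_slow = np.arange(0.0, 1.0 / (2.0 * F_SLOW), dt)
            fast = (A / 2.0) * (1.0 - np.cos(2.0 * np.pi * F_FAST * t_fast))
            slow = (A / 2.0) * (1.0 + np.cos(2.0 * np.pi * F_SLOW * t_slow))
            return np.concatenate([fast, slow])

        profile = asymmetric_cycle()
        # Both half cycles cover the same 40 deg, but peak velocity is ~48 deg/s in the
        # fast direction versus ~11 deg/s in the slow one, which drives the asymmetry.
        print(round(profile.max(), 1), round(len(profile) * 0.01, 1))   # -> 40.0 6.9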

  18. Visual Motion Perception and Visual Attentive Processes.

    Science.gov (United States)

    1988-04-01

    88-0551. Visual Motion Perception and Visual Attentive Processes. George Sperling, New York University. Grant AFOSR 85-0364. Sperling, HIPS: A Unix-based image processing system. Computer Vision, Graphics, and Image Processing, 1984, 25, 331-347. (HIPS is the Human Information Processing Laboratory's Image Processing System.) 1985: van Santen, Jan P. H., and George Sperling. Elaborated Reichardt detectors. Journal of the Optical

  19. Synesthesia for color is linked to improved color perception but reduced motion perception.

    Science.gov (United States)

    Banissy, Michael J; Tester, Victoria; Muggleton, Neil G; Janik, Agnieszka B; Davenport, Aimee; Franklin, Anna; Walsh, Vincent; Ward, Jamie

    2013-12-01

    Synesthesia is a rare condition in which one property of a stimulus (e.g., shape) triggers a secondary percept (e.g., color) not typically associated with the first. Work on synesthesia has predominantly focused on confirming the authenticity of synesthetic experience, but much less research has been conducted to examine the extent to which synesthesia is linked to broader perceptual differences. In the research reported here, we examined whether synesthesia is associated with differences in color and motion processing by comparing these abilities in synesthetes who experience color as their evoked sensation with nonsynesthetic participants. We show that synesthesia for color is linked to facilitated color sensitivity but decreased motion sensitivity. These findings are discussed in relation to the neurocognitive mechanisms of synesthesia and interactions between color and motion processing in typical adults.

  20. Suppressive mechanisms in visual motion processing: From perception to intelligence.

    Science.gov (United States)

    Tadin, Duje

    2015-10-01

    Perception operates on an immense amount of incoming information that greatly exceeds the brain's processing capacity. Because of this fundamental limitation, the ability to suppress irrelevant information is a key determinant of perceptual efficiency. Here, I will review a series of studies investigating suppressive mechanisms in visual motion processing, namely perceptual suppression of large, background-like motions. These spatial suppression mechanisms are adaptive, operating only when sensory inputs are sufficiently robust to guarantee visibility. Converging correlational and causal evidence links these behavioral results with inhibitory center-surround mechanisms, namely those in cortical area MT. Spatial suppression is abnormally weak in several special populations, including the elderly and individuals with schizophrenia-a deficit that is evidenced by better-than-normal direction discriminations of large moving stimuli. Theoretical work shows that this abnormal weakening of spatial suppression should result in motion segregation deficits, but direct behavioral support of this hypothesis is lacking. Finally, I will argue that the ability to suppress information is a fundamental neural process that applies not only to perception but also to cognition in general. Supporting this argument, I will discuss recent research that shows individual differences in spatial suppression of motion signals strongly predict individual variations in IQ scores. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Shared sensory estimates for human motion perception and pursuit eye movements.

    Science.gov (United States)

    Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio; Osborne, Leslie C

    2015-06-03

    Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. Copyright © 2015 the authors 0270-6474/15/358515-16$15.00/0.

  2. Auditory Motion Elicits a Visual Motion Aftereffect.

    Science.gov (United States)

    Berger, Christopher C; Ehrsson, H Henrik

    2016-01-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect-an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  3. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    Full Text Available The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  4. Self-motion perception: assessment by computer-generated animations

    Science.gov (United States)

    Parker, D. E.; Harm, D. L.; Sandoz, G. R.; Skinner, N. C.

    1998-01-01

    The goal of this research is more precise description of adaptation to sensory rearrangements, including microgravity, by development of improved procedures for assessing spatial orientation perception. Thirty-six subjects reported perceived self-motion following exposure to complex inertial-visual motion. Twelve subjects were assigned to each of 3 perceptual reporting procedures: (a) animation movie selection, (b) written report selection and (c) verbal report generation. The question addressed was: do reports produced by these procedures differ with respect to complexity and reliability? Following repeated (within-day and across-day) exposures to 4 different "motion profiles," subjects either (a) selected movies presented on a laptop computer, or (b) selected written descriptions from a booklet, or (c) generated self-motion verbal descriptions that corresponded most closely with their motion experience. One "complexity" and 2 reliability "scores" were calculated. Contrary to expectations, reliability and complexity scores were essentially equivalent for the animation movie selection and written report selection procedures. Verbal report generation subjects exhibited less complexity than did subjects in the other conditions and their reports were often ambiguous. The results suggest that, when selecting from carefully written descriptions and following appropriate training, people may be better able to describe their self-motion experience with words than is usually believed.

  5. Enhancing Motion-In-Depth Perception of Random-Dot Stereograms.

    Science.gov (United States)

    Zhang, Di; Nourrit, Vincent; De Bougrenet de la Tocnaye, Jean-Louis

    2018-07-01

    Random-dot stereograms have been widely used to explore the neural mechanisms underlying binocular vision. Although they are a powerful tool to stimulate motion-in-depth (MID) perception, published results report some difficulties in the capacity to perceive MID generated by random-dot stereograms. The purpose of this study was to investigate whether the performance of MID perception could be improved using an appropriate stimulus design. Sixteen inexperienced observers participated in the experiment. A training session was carried out to improve the accuracy of MID detection before the experiment. Four aspects of stimulus design were investigated: presence of a static reference, background texture, relative disparity, and stimulus contrast. Participants' performance in MID direction discrimination was recorded and compared to evaluate whether varying these factors helped MID perception. Results showed that only the presence of background texture had a significant effect on MID direction perception. This study provides suggestions for the design of 3D stimuli in order to facilitate MID perception.

  6. Motion Perception and Manual Control Performance During Passive Tilt and Translation Following Space Flight

    Science.gov (United States)

    Clement, Gilles; Wood, Scott J.

    2010-01-01

    This joint ESA-NASA study is examining changes in motion perception following Space Shuttle flights and the operational implications of post-flight tilt-translation ambiguity for manual control performance. Vibrotactile feedback of tilt orientation is also being evaluated as a countermeasure to improve performance during a closed-loop nulling task. METHODS. Data has been collected on 5 astronaut subjects during 3 preflight sessions and during the first 8 days after Shuttle landings. Variable radius centrifugation (216 deg/s) combined with body translation (12-22 cm, peak-to-peak) is utilized to elicit roll-tilt perception (equivalent to 20 deg, peak-to-peak). A forward-backward moving sled (24-390 cm, peak-to-peak) with or without chair tilting in pitch is utilized to elicit pitch tilt perception (equivalent to 20 deg, peak-to-peak). These combinations are elicited at 0.15, 0.3, and 0.6 Hz for evaluating the effect of motion frequency on tilt-translation ambiguity. In both devices, a closed-loop nulling task is also performed during pseudorandom motion with and without vibrotactile feedback of tilt. All tests are performed in complete darkness. PRELIMINARY RESULTS. Data collection is currently ongoing. Results to date suggest there is a trend for translation motion perception to be increased at the low and medium frequencies on landing day compared to pre-flight. Manual control performance is improved with vibrotactile feedback. DISCUSSION. The results of this study indicate that post-flight recovery of motion perception and manual control performance is complete within 8 days following short-duration space missions. Vibrotactile feedback of tilt improves manual control performance both before and after flight.
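
    The centrifuge numbers above are internally consistent under a simple reading: at a constant yaw velocity ω, oscillating the radius by Δr modulates the lateral specific force by roughly Δr·(ω² + (2πf)²) (the centripetal term plus the radial-oscillation term), and the equivalent roll tilt is the angle whose gravito-inertial direction matches that force. This reading, and the assumption of sinusoidal radius modulation, are mine and not stated in the record:

        import math

        G = 9.81
        OMEGA = math.radians(216.0)    # constant yaw velocity, rad/s

        def equivalent_roll_tilt_p2p_deg(radius_p2p_m, freq_hz):
            """Peak-to-peak roll tilt whose gravito-inertial direction matches the
            lateral specific-force modulation of a sinusoidal radius oscillation."""
            dr = radius_p2p_m / 2.0
            a = dr * (OMEGA ** 2 + (2.0 * math.pi * freq_hz) ** 2)
            return 2.0 * math.degrees(math.atan2(a, G))

        # The quoted 12-22 cm peak-to-peak translations at 0.6 and 0.15 Hz both land
        # near the stated 20 deg peak-to-peak roll-tilt equivalence.
        print(round(equivalent_roll_tilt_p2p_deg(0.12, 0.60), 1))   # -> 19.7
        print(round(equivalent_roll_tilt_p2p_deg(0.22, 0.15), 1))   # -> 19.2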

  7. Integration time for the perception of depth from motion parallax.

    Science.gov (United States)

    Nawrot, Mark; Stroyan, Keith

    2012-04-15

    The perception of depth from relative motion is believed to be a slow process that "builds-up" over a period of observation. However, in the case of motion parallax, the potential accuracy of the depth estimate suffers as the observer translates during the viewing period. Our recent quantitative model for the perception of depth from motion parallax proposes that relative object depth (d) can be determined from retinal image motion (dθ/dt), pursuit eye movement (dα/dt), and fixation distance (f) by the formula: d/f≈dθ/dα. Given the model's dynamics, it is important to know the integration time required by the visual system to recover dα and dθ, and then estimate d. Knowing the minimum integration time reveals the incumbent error in this process. A depth-phase discrimination task was used to determine the time necessary to perceive depth-sign from motion parallax. Observers remained stationary and viewed a briefly translating random-dot motion parallax stimulus. Stimulus duration varied between trials. Fixation on the translating stimulus was monitored and enforced with an eye-tracker. The study found that relative depth discrimination can be performed with presentations as brief as 16.6 ms, with only two stimulus frames providing both retinal image motion and the stimulus window motion for pursuit (mean range=16.6-33.2 ms). This was found for conditions in which, prior to stimulus presentation, the eye was engaged in ongoing pursuit or the eye was stationary. A large high-contrast masking stimulus disrupted depth-discrimination for stimulus presentations less than 70-75 ms in both pursuit and stationary conditions. This interval might be linked to ocular-following response eye-movement latencies. We conclude that neural mechanisms serving depth from motion parallax generate a depth estimate much more quickly than previously believed. We propose that additional sluggishness might be due to the visual system's attempt to determine the maximum dθ/dα ratio
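
    The quoted formula d/f ≈ dθ/dα turns two measurable rates (retinal image motion and pursuit) plus fixation distance into a relative depth estimate. A minimal sketch with hypothetical numbers (the function name and example values are illustrative, not taken from the study):

        def depth_from_motion_parallax(retinal_motion_deg_s, pursuit_rate_deg_s, fixation_dist_m):
            """Relative depth from the motion/pursuit ratio d/f ~= dtheta/dalpha."""
            return (retinal_motion_deg_s / pursuit_rate_deg_s) * fixation_dist_m

        # 1 deg/s of retinal slip against 5 deg/s of pursuit at a 1 m fixation
        # distance implies an object roughly 0.2 m from the fixation plane.
        print(depth_from_motion_parallax(1.0, 5.0, 1.0))   # -> 0.2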

  8. Psilocybin impairs high-level but not low-level motion perception.

    Science.gov (United States)

    Carter, Olivia L; Pettigrew, John D; Burr, David C; Alais, David; Hasler, Felix; Vollenweider, Franz X

    2004-08-26

    The hallucinogenic serotonin 1A/2A receptor agonist psilocybin is known for its ability to induce illusions of motion in otherwise stationary objects or textured surfaces. This study investigated the effect of psilocybin on local and global motion processing in nine human volunteers. Using a forced-choice direction-of-motion discrimination task, we show that psilocybin selectively impairs coherence sensitivity for random dot patterns, likely mediated by high-level global motion detectors, but not contrast sensitivity for drifting gratings, believed to be mediated by low-level detectors. These results are in line with those observed within schizophrenic populations and are discussed in respect to the proposition that psilocybin may provide a model to investigate clinical psychosis and the pharmacological underpinnings of visual perception in normal populations.

  9. Perception of the dynamic visual vertical during sinusoidal linear motion.

    Science.gov (United States)

    Pomante, A; Selen, L P J; Medendorp, W P

    2017-10-01

    The vestibular system provides information for spatial orientation. However, this information is ambiguous: because the otoliths sense the gravitoinertial force, they cannot distinguish gravitational and inertial components. As a consequence, prolonged linear acceleration of the head can be interpreted as tilt, referred to as the somatogravic effect. Previous modeling work suggests that the brain disambiguates the otolith signal according to the rules of Bayesian inference, combining noisy canal cues with the a priori assumption that prolonged linear accelerations are unlikely. Within this modeling framework the noise of the vestibular signals affects the dynamic characteristics of the tilt percept during linear whole-body motion. To test this prediction, we devised a novel paradigm to psychometrically characterize the dynamic visual vertical-as a proxy for the tilt percept-during passive sinusoidal linear motion along the interaural axis (0.33 Hz motion frequency, 1.75 m/s² peak acceleration, 80 cm displacement). While subjects (n = 10) kept fixation on a central body-fixed light, a line was briefly flashed (5 ms) at different phases of the motion, the orientation of which had to be judged relative to gravity. Consistent with the model's prediction, subjects showed a phase-dependent modulation of the dynamic visual vertical, with a subject-specific phase shift with respect to the imposed acceleration signal. The magnitude of this modulation was smaller than predicted, suggesting a contribution of nonvestibular signals to the dynamic visual vertical. Despite their dampening effect, our findings may point to a link between the noise components in the vestibular system and the characteristics of dynamic visual vertical. NEW & NOTEWORTHY A fundamental question in neuroscience is how the brain processes vestibular signals to infer the orientation of the body and objects in space. We show that, under sinusoidal linear motion, systematic error patterns appear in the
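
    The stimulus parameters above are mutually consistent under standard sinusoidal kinematics, and they bound the somatogravic effect the model allows: peak acceleration follows from displacement and frequency, and the largest possible tilt of the gravito-inertial vector is atan(a_peak/g). The sketch assumes the 80 cm figure is the peak-to-peak displacement:

        import math

        G = 9.81
        FREQ_HZ = 0.33
        DISPLACEMENT_P2P_M = 0.80      # assumed to be peak-to-peak

        amplitude = DISPLACEMENT_P2P_M / 2.0
        peak_acc = amplitude * (2.0 * math.pi * FREQ_HZ) ** 2    # cf. the quoted 1.75 m/s^2
        peak_gif_tilt = math.degrees(math.atan2(peak_acc, G))    # upper bound on somatogravic tilt

        print(round(peak_acc, 2), round(peak_gif_tilt, 1))       # -> 1.72 9.9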

  10. Effects of aging on perception of motion

    Science.gov (United States)

    Kaur, Manpreet; Wilder, Joseph; Hung, George; Julesz, Bela

    1997-09-01

    Driving requires two basic visual components: 'visual sensory function' and 'higher order skills.' Among the elderly, it has been observed that attentional skills and relational processes are markedly impaired when attention must be divided in the presence of multiple objects, in addition to impairments of basic visual sensory function. A high frame rate imaging system was developed to assess the elderly driver's ability to locate and distinguish computer generated images of vehicles and to determine their direction of motion in a simulated intersection. Preliminary experiments were performed at varying target speeds and angular displacements to study the effect of these parameters on motion perception. Results for subjects in four different age groups, ranging from mid-twenties to mid-sixties, show significantly better performance for the younger subjects as compared to the older ones.

  11. Similar effects of feature-based attention on motion perception and pursuit eye movements at different levels of awareness.

    Science.gov (United States)

    Spering, Miriam; Carrasco, Marisa

    2012-05-30

    Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth-pursuit eye movements in response to moving dichoptic plaids--stimuli composed of two orthogonally drifting gratings, presented separately to each eye--in human observers. Monocular adaptation to one grating before the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating's motion direction or to both (neutral condition). We show that observers were better at detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating's motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted toward the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it.

  12. Effects of Frequency and Motion Paradigm on Perception of Tilt and Translation During Periodic Linear Acceleration

    Science.gov (United States)

    Beaton, K. H.; Holly, J. E.; Clement, G. R.; Wood, Scott J.

    2009-01-01

    Previous studies have demonstrated an effect of frequency on the gain of tilt and translation perception. Results from different motion paradigms are often combined to extend the stimulus frequency range. For example, Off-Vertical Axis Rotation (OVAR) and Variable Radius Centrifugation (VRC) are useful to test low frequencies of linear acceleration at amplitudes that would require impractical sled lengths. The purpose of this study was to compare roll-tilt and lateral translation motion perception in 12 healthy subjects across four paradigms: OVAR, VRC, sled translation and rotation about an earth-horizontal axis. Subjects were oscillated in darkness at six frequencies from 0.01875 to 0.6 Hz (peak acceleration equivalent to 10 deg, less for sled motion below 0.15 Hz). Subjects verbally described the amplitude of perceived tilt and translation, and used a joystick to indicate the direction of motion. Consistent with previous reports, tilt perception gain decreased as a function of stimulus frequency in the motion paradigms without concordant canal tilt cues (OVAR, VRC and Sled). Translation perception gain was negligible at low stimulus frequencies and increased at higher frequencies. There were no significant differences between the phase of tilt and translation, nor did the phase significantly vary across stimulus frequency. There were differences in perception gain across the different paradigms. Paradigms that included actual tilt stimuli had the larger tilt gains, and paradigms that included actual translation stimuli had larger translation gains. In addition, the frequency at which there was a crossover of tilt and translation gains appeared to vary across motion paradigm between 0.15 and 0.3 Hz. Since the linear acceleration in the head lateral plane was equivalent across paradigms, differences in gain may be attributable to the presence of linear accelerations in orthogonal directions and/or cognitive aspects based on the expected motion paths.

  13. Modification of Motion Perception and Manual Control Following Short-Duration Spaceflight

    Science.gov (United States)

    Wood, S. J.; Vanya, R. D.; Esteves, J. T.; Rupert, A. H.; Clement, G.

    2011-01-01

    Adaptive changes during space flight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination and spatial disorientation following G-transitions. This ESA-NASA study was designed to examine both the physiological basis and operational implications for disorientation and tilt-translation disturbances following short-duration spaceflights. The goals of this study were to (1) examine the effects of stimulus frequency on adaptive changes in motion perception during passive tilt and translation motion, (2) quantify decrements in manual control of tilt motion, and (3) evaluate vibrotactile feedback as a sensorimotor countermeasure.

  14. Contribution of self-motion perception to acoustic target localization.

    Science.gov (United States)

    Pettorossi, V E; Brosch, M; Panichi, R; Botti, F; Grassi, S; Troiani, D

    2005-05-01

    The findings of this study suggest that acoustic spatial perception during head movement is achieved by the vestibular system, which is responsible for the correct dynamics of acoustic target pursuit. The ability to localize sounds in space during whole-body rotation relies on the auditory localization system, which recognizes the position of sound in a head-related frame, and on the sensory systems, namely the vestibular system, which perceive head and body movement. The aim of this study was to analyse the contribution of head motion cues to the spatial representation of acoustic targets in humans. Healthy subjects standing on a rotating platform in the dark were asked to pursue with a laser pointer an acoustic target which was horizontally rotated while the body was kept stationary or maintained stationary while the whole body was rotated. The contribution of head motion to the spatial acoustic representation could be inferred by comparing the gains and phases of the pursuit in the two experimental conditions when the frequency was varied. During acoustic target rotation there was a reduction in the gain and an increase in the phase lag, while during whole-body rotations the gain tended to increase and the phase remained constant. The different contributions of the vestibular and acoustic systems were confirmed by analysing the acoustic pursuit during asymmetric body rotation. In this particular condition, in which self-motion perception gradually diminished, an increasing delay in target pursuit was observed.

  15. Motion perception during tilt and translation after space flight

    Science.gov (United States)

    Clément, Gilles; Wood, Scott J.

    2013-11-01

    Preliminary results of an ongoing study examining the effects of space flight on astronauts' motion perception induced by independent tilt and translation motions are presented. This experiment used a sled and a variable radius centrifuge that translated the subjects forward-backward or laterally, and simultaneously tilted them in pitch or roll, respectively. Tests were performed on the ground prior to and immediately after landing. The astronauts were asked to report their perceived motion in response to different combinations of body tilt and translation in darkness. Their ability to manually control their own orientation was also evaluated using a joystick with which they nulled out the perceived tilt while the sled and centrifuge were in motion. Preliminary results confirm that the magnitude of perceived tilt increased during static tilt in roll after space flight. A deterioration in the crewmembers' ability to control tilt using non-visual inertial cues was also observed post-flight. However, the use of a tactile prosthesis indicating the direction of down on the subject's trunk improved manual control performance both before and after space flight.

  16. Development of visual motion perception for prospective control: Brain and behavioural studies in infants

    Directory of Open Access Journals (Sweden)

    Seth B. Agyei

    2016-02-01

    Full Text Available During infancy, smart perceptual mechanisms develop allowing infants to judge time-space motion dynamics more efficiently with age and locomotor experience. This emerging capacity may be vital to enable preparedness for upcoming events and to be able to navigate in a changing environment. Little is known about brain changes that support the development of prospective control and about processes, such as preterm birth, that may compromise it. Focusing on the perception of visual motion, this paper describes behavioural and brain studies with young infants investigating the development of visual perception for prospective control. By means of the three visual motion paradigms of occlusion, looming, and optic flow, our research shows the importance of including behavioural data when studying the neural correlates of prospective control.

  17. Receptive fields for smooth pursuit eye movements and motion perception.

    Science.gov (United States)

    Debono, Kurt; Schütz, Alexander C; Spering, Miriam; Gegenfurtner, Karl R

    2010-12-01

    Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a direction that differed from the main motion by including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT). Copyright © 2010 Elsevier Ltd. All rights reserved.
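
    The two tuning estimates above are reported in different units (a full width at half height for direction, a standard deviation for space). As a rough point of comparison, and under the additional assumption of an approximately Gaussian direction-tuning profile (an assumption made here for illustration, not stated in the abstract), the bandwidth can be converted to an equivalent standard deviation:

```latex
\mathrm{FWHM} = 2\sqrt{2\ln 2}\,\sigma \approx 2.355\,\sigma
\quad\Longrightarrow\quad
\sigma_{\mathrm{direction}} \approx \frac{26^{\circ}}{2.355} \approx 11^{\circ}.
```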

  18. Self-Motion Perception: Assessment by Real-Time Computer Generated Animations

    Science.gov (United States)

    Parker, Donald E.

    1999-01-01

    Our overall goal is to develop materials and procedures for assessing vestibular contributions to spatial cognition. The specific objective of the research described in this paper is to evaluate computer-generated animations as potential tools for studying self-orientation and self-motion perception. Specific questions addressed in this study included the following. First, does a non-verbal perceptual reporting procedure using real-time animations improve assessment of spatial orientation? Are reports reliable? Second, do reports confirm expectations based on stimuli to vestibular apparatus? Third, can reliable reports be obtained when self-motion description vocabulary training is omitted?

  19. Perception of linear horizontal self-motion induced by peripheral vision /linearvection/ - Basic characteristics and visual-vestibular interactions

    Science.gov (United States)

    Berthoz, A.; Pavard, B.; Young, L. R.

    1975-01-01

    The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self motion) by the visual system. Latencies of onset are around 1 sec and short-term adaptation has been shown. The dynamic range of the visual analyzer as judged by frequency analysis is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision which supports the idea of an essential although not independent role of vision in self motion perception.

  20. The Coordination Dynamics of Observational Learning: Relative Motion Direction and Relative Phase as Informational Content Linking Action-Perception to Action-Production.

    Science.gov (United States)

    Buchanan, John J

    2016-01-01

    The primary goal of this chapter is to merge together the visual perception perspective of observational learning and the coordination dynamics theory of pattern formation in perception and action. Emphasis is placed on identifying movement features that constrain and inform action-perception and action-production processes. Two sources of visual information are examined, relative motion direction and relative phase. The visual perception perspective states that the topological features of relative motion between limbs and joints remain invariant across an actor's motion and therefore are available for pickup by an observer. Relative phase has been put forth as an informational variable that links perception to action within the coordination dynamics theory. A primary assumption of the coordination dynamics approach is that environmental information is meaningful only in terms of the behavior it modifies. Across a series of single limb tasks and bimanual tasks it is shown that the relative motion and relative phase between limbs and joints are picked up through visual processes and support observational learning of motor skills. Moreover, internal estimations of motor skill proficiency and competency are linked to the informational content found in relative motion and relative phase. Thus, the chapter links action to perception and vice versa and also links cognitive evaluations to the coordination dynamics that support action-perception and action-production processes.

  1. Perception of self motion during and after passive rotation of the body around an earth-vertical axis.

    Science.gov (United States)

    Sinha, N; Zaher, N; Shaikh, A G; Lasker, A G; Zee, D S; Tarnutzer, A A

    2008-01-01

    We investigated the perception of self-rotation using constant-velocity chair rotations. Subjects signalled self motion during three independent tasks: (1) by pushing a button when rotation was first sensed, when velocity reached a peak, when velocity began to decrease, and when velocity reached zero, (2) by rotating a disc to match the perceived motion of the body, or (3) by changing the static position of the dial such that a bigger change in its position correlated with a larger perceived velocity. All three tasks gave a consistent quantitative measure of perceived angular velocity. We found a delay in the time at which peak velocity of self-rotation was perceived (2-5 s) relative to the beginning or to the end of chair rotation. In addition, the decay of the perception of self-rotation was preceded by a sensed constant-velocity interval or plateau (9-14 s). This delay in the rise of self-motion perception, and the plateau for the maximum perceived velocity, contrasts with the rapid rise and the immediate decay of the angular vestibuloocular reflex (aVOR). This difference suggests that the sensory signal from the semicircular canals undergoes additional neural processing, beyond the contribution of the velocity-storage mechanism of the aVOR, to compute the percept of self-motion.

  2. Object Manipulation and Motion Perception: Evidence of an Influence of Action Planning on Visual Processing

    NARCIS (Netherlands)

    Lindemann, O.; Bekkering, H.

    2009-01-01

    In 3 experiments, the authors investigated the bidirectional coupling of perception and action in the context of object manipulations and motion perception. Participants prepared to grasp an X-shaped object along one of its 2 diagonals and to rotate it in a clockwise or a counterclockwise direction.

  3. Primary visual cortex activity along the apparent-motion trace reflects illusory perception.

    Directory of Open Access Journals (Sweden)

    Lars Muckli

    2005-08-01

    Full Text Available The illusion of apparent motion can be induced when visual stimuli are successively presented at different locations. It has been shown in previous studies that motion-sensitive regions in extrastriate cortex are relevant for the processing of apparent motion, but it is unclear whether primary visual cortex (V1 is also involved in the representation of the illusory motion path. We investigated, in human subjects, apparent-motion-related activity in patches of V1 representing locations along the path of illusory stimulus motion using functional magnetic resonance imaging. Here we show that apparent motion caused a blood-oxygenation-level-dependent response along the V1 representations of the apparent-motion path, including regions that were not directly activated by the apparent-motion-inducing stimuli. This response was unaltered when participants had to perform an attention-demanding task that diverted their attention away from the stimulus. With a bistable motion quartet, we confirmed that the activity was related to the conscious perception of movement. Our data suggest that V1 is part of the network that represents the illusory path of apparent motion. The activation in V1 can be explained either by lateral interactions within V1 or by feedback mechanisms from higher visual areas, especially the motion-sensitive human MT/V5 complex.

  4. What women like: influence of motion and form on esthetic body perception

    Directory of Open Access Journals (Sweden)

    Valentina eCazzato

    2012-07-01

    Full Text Available Several studies have shown the distinct contribution of motion and form to the esthetic evaluation of female bodies. Here, we investigated how variations of implied motion and body size interact in the esthetic evaluation of female and male bodies in a sample of young healthy women. Participants provided attractiveness, beauty, and liking ratings for the shape and posture of virtual renderings of human bodies with variable body size and implied motion. The esthetic judgments for both shape and posture of human models were influenced by body size and implied motion, with a preference for thinner and more dynamic stimuli. Implied motion, however, attenuated the impact of extreme body size on the esthetic evaluation of body postures, and body size variations did not affect the preference for more dynamic stimuli. Results show that body form and action cues interact in esthetic perception, but the final esthetic appreciation of human bodies is predicted by a mixture of perceptual and affective evaluative components.

  5. Motion interactive video games in home training for children with cerebral palsy: parents' perceptions.

    Science.gov (United States)

    Sandlund, Marlene; Dock, Katarina; Häger, Charlotte K; Waterworth, Eva Lindh

    2012-01-01

    To explore parents' perceptions of using low-cost motion interactive video games as home training for their children with mild/moderate cerebral palsy. Semi-structured interviews were carried out with parents from 15 families after participation in an intervention where motion interactive games were used daily in home training for their child. A qualitative content analysis approach was applied. The parents' perception of the training was very positive. They expressed the view that motion interactive video games may promote positive experiences of physical training in rehabilitation, where the social aspects of gaming were especially valued. Further, the parents experienced less need to take on coaching while gaming stimulated independent training. However, there was a desire for more controlled and individualized games to better challenge the specific rehabilitative need of each child. Low-cost motion interactive games may provide increased motivation and social interaction to home training and promote independent training with reduced coaching efforts for the parents. In future designs of interactive games for rehabilitation purposes, it is important to preserve the motivational and social features of games while optimizing the individualized physical exercise.

  6. Combined fMRI- and eye movement-based decoding of bistable plaid motion perception.

    Science.gov (United States)

    Wilbertz, Gregor; Ketkar, Madhura; Guggenmos, Matthias; Sterzer, Philipp

    2018-05-01

    The phenomenon of bistable perception, in which perception alternates spontaneously despite constant sensory stimulation, has been particularly useful in probing the neural bases of conscious perception. The study of such bistability requires access to the observer's perceptual dynamics, which is usually achieved via active report. This report, however, constitutes a confounding factor in the study of conscious perception and can also be biased in the context of certain experimental manipulations. One approach to circumvent these problems is to track perceptual alternations using signals from the eyes or the brain instead of observers' reports. Here we aimed to optimize such decoding of perceptual alternations by combining eye and brain signals. Eye-tracking and functional magnetic resonance imaging (fMRI) were performed in twenty participants while they viewed a bistable visual plaid motion stimulus and reported perceptual alternations. Multivoxel pattern analysis (MVPA) for fMRI was combined with eye-tracking in a support vector machine to decode participants' perceptual time courses from fMRI and eye-movement signals. While both measures individually already yielded high decoding accuracies (on average 86% and 88% correct, respectively), classification based on the two measures together further improved the accuracy (91% correct). These findings show that leveraging both fMRI and eye movement data may pave the way for optimized no-report paradigms through improved decodability of bistable motion perception and hence for a better understanding of the neural correlates of consciousness. Copyright © 2018 Elsevier Inc. All rights reserved.
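
    A minimal sketch of the kind of combined decoder described above: a linear support vector machine trained on concatenated fMRI multivoxel patterns and eye-movement features, with cross-validated accuracy. The array shapes, feature names, and cross-validation scheme below are illustrative assumptions, not details taken from the study.

```python
# Hedged sketch: decode the reported percept from combined fMRI and eye-movement
# features with a linear SVM. All data here are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples = 200
fmri_patterns = rng.normal(size=(n_samples, 500))  # multivoxel activity patterns (hypothetical shape)
eye_features = rng.normal(size=(n_samples, 10))    # eye-movement features, e.g. slow-phase velocity (hypothetical)
percept = rng.integers(0, 2, n_samples)            # reported percept label for each time bin

# Concatenate the two feature sets so the classifier can exploit both sources.
X_combined = np.hstack([fmri_patterns, eye_features])
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X_combined, percept, cv=5)
print("combined decoding accuracy: %.2f" % scores.mean())
```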

  7. Visual working memory contents bias ambiguous structure from motion perception.

    Directory of Open Access Journals (Sweden)

    Lisa Scocchia

    Full Text Available The way we perceive the visual world depends crucially on the state of the observer. In the present study we show that what we are holding in working memory (WM can bias the way we perceive ambiguous structure from motion stimuli. Holding in memory the percept of an unambiguously rotating sphere influenced the perceived direction of motion of an ambiguously rotating sphere presented shortly thereafter. In particular, we found a systematic difference between congruent dominance periods where the perceived direction of the ambiguous stimulus corresponded to the direction of the unambiguous one and incongruent dominance periods. Congruent dominance periods were more frequent when participants memorized the speed of the unambiguous sphere for delayed discrimination than when they performed an immediate judgment on a change in its speed. The analysis of dominance time-course showed that a sustained tendency to perceive the same direction of motion as the prior stimulus emerged only in the WM condition, whereas in the attention condition perceptual dominance dropped to chance levels at the end of the trial. The results are explained in terms of a direct involvement of early visual areas in the active representation of visual motion in WM.

  8. Morphing technique reveals intact perception of object motion and disturbed perception of emotional expressions by low-functioning adolescents with Autism Spectrum Disorder.

    Science.gov (United States)

    Han, Bora; Tijus, Charles; Le Barillier, Florence; Nadel, Jacqueline

    2015-12-01

    A morphing procedure has been designed to compare directly the perception of emotional expressions and of moving objects. Morphing tasks were presented to 12 low-functioning teenagers with Autism Spectrum Disorder (LF ASD) compared to 12 developmental age-matched typical children and a group presenting ceiling performance. In a first study, when presented with morphed stimuli of objects and emotional faces, LF ASD showed an intact perception of object change of state together with an impaired perception of emotional facial change of state. In a second study, an eye-tracker recorded visual exploration of morphed emotional stimuli displayed by a human face and a robotic set-up. Facing the morphed robotic stimuli, LF ASD displayed equal duration of fixations toward emotional regions and toward mechanical sources of motion, while the typical groups tracked the emotional regions only. Altogether the findings of the two studies suggest that individuals with ASD process motion rather than emotional signals when facing facial expressions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Color improves speed of processing but not perception in a motion illusion

    Directory of Open Access Journals (Sweden)

    Carolyn J Perry

    2012-03-01

    Full Text Available When two superimposed surfaces of dots move in different directions, the perceived directions are shifted away from each other. This perceptual illusion has been termed direction repulsion and is thought to be due to mutual inhibition between the representations of the two directions. It has further been shown that a speed difference between the two surfaces attenuates direction repulsion. As speed and direction are both necessary components of representing motion, the reduction in direction repulsion can be attributed to the additional motion information strengthening the representations of the two directions and thus reducing the mutual inhibition. We tested whether bottom-up attention and top-down task demands, in the form of color differences between the two surfaces, would also enhance motion processing, reducing direction repulsion. We found that the addition of color differences did not improve direction discrimination or reduce direction repulsion. However, we did find that adding a color difference improved performance on the task. We hypothesized that the performance differences were due to the limited presentation time of the stimuli. We tested this in a follow-up experiment where we varied the time of presentation to determine the duration needed to successfully perform the task with and without the color difference. As we expected, the addition of color reduced the amount of time needed to process and encode both directions of motion. Thus we find a dissociation between the effects of attention on the speed of processing and conscious perception. We propose 4 potential mechanisms wherein color speeds figure-ground segmentation of an object, attentional switching between objects, direction discrimination and/or the accumulation of motion information for decision-making, without affecting conscious perception. Potential neural bases are also explored.

  10. Color improves speed of processing but not perception in a motion illusion.

    Science.gov (United States)

    Perry, Carolyn J; Fallah, Mazyar

    2012-01-01

    When two superimposed surfaces of dots move in different directions, the perceived directions are shifted away from each other. This perceptual illusion has been termed direction repulsion and is thought to be due to mutual inhibition between the representations of the two directions. It has further been shown that a speed difference between the two surfaces attenuates direction repulsion. As speed and direction are both necessary components of representing motion, the reduction in direction repulsion can be attributed to the additional motion information strengthening the representations of the two directions and thus reducing the mutual inhibition. We tested whether bottom-up attention and top-down task demands, in the form of color differences between the two surfaces, would also enhance motion processing, reducing direction repulsion. We found that the addition of color differences did not improve direction discrimination or reduce direction repulsion. However, we did find that adding a color difference improved performance on the task. We hypothesized that the performance differences were due to the limited presentation time of the stimuli. We tested this in a follow-up experiment where we varied the time of presentation to determine the duration needed to successfully perform the task with and without the color difference. As we expected, color segmentation reduced the amount of time needed to process and encode both directions of motion. Thus we find a dissociation between the effects of attention on the speed of processing and conscious perception of direction. We propose four potential mechanisms wherein color speeds figure-ground segmentation of an object, attentional switching between objects, direction discrimination and/or the accumulation of motion information for decision-making, without affecting conscious perception of the direction. Potential neural bases are also explored.

  11. Do we track what we see? Common versus independent processing for motion perception and smooth pursuit eye movements: a review.

    Science.gov (United States)

    Spering, Miriam; Montagnini, Anna

    2011-04-22

    Many neurophysiological studies in monkeys have indicated that visual motion information for the guidance of perception and smooth pursuit eye movements is - at an early stage - processed in the same visual pathway in the brain, crucially involving the middle temporal area (MT). However, these studies left some questions unanswered: Are perception and pursuit driven by the same or independent neuronal signals within this pathway? Are the perceptual interpretation of visual motion information and the motor response to visual signals limited by the same source of neuronal noise? Here, we review psychophysical studies that were motivated by these questions and compared perception and pursuit behaviorally in healthy human observers. We further review studies that focused on the interaction between perception and pursuit. The majority of results point to similarities between perception and pursuit, but dissociations were also reported. We discuss recent developments in this research area and conclude with suggestions for common and separate principles for the guidance of perceptual and motor responses to visual motion information. Copyright © 2010 Elsevier Ltd. All rights reserved.

  12. Whole-Motion Model of Perception during Forward- and Backward-Facing Centrifuge Runs

    Science.gov (United States)

    Holly, Jan E.; Vrublevskis, Arturs; Carlson, Lindsay E.

    2009-01-01

    Illusory perceptions of motion and orientation arise during human centrifuge runs without vision. Asymmetries have been found between acceleration and deceleration, and between forward-facing and backward-facing runs. Perceived roll tilt has been studied extensively during upright fixed-carriage centrifuge runs, and other components have been studied to a lesser extent. Certain, but not all, perceptual asymmetries in acceleration-vs-deceleration and forward-vs-backward motion can be explained by existing analyses. The immediate acceleration-deceleration roll-tilt asymmetry can be explained by the three-dimensional physics of the external stimulus; in addition, longer-term data has been modeled in a standard way using physiological time constants. However, the standard modeling approach is shown in the present research to predict forward-vs-backward-facing symmetry in perceived roll tilt, contradicting experimental data, and to predict perceived sideways motion, rather than forward or backward motion, around a curve. The present work develops a different whole-motion-based model taking into account the three-dimensional form of perceived motion and orientation. This model predicts perceived forward or backward motion around a curve, and predicts additional asymmetries such as the forward-backward difference in roll tilt. This model is based upon many of the same principles as the standard model, but includes an additional concept of familiarity of motions as a whole. PMID:19208962
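
    As background to the "three-dimensional physics of the external stimulus" invoked above, the standard starting point for fixed-carriage centrifuge analyses is the tilt of the gravito-inertial acceleration (GIA) sensed by the otoliths; the relations below are textbook physics, not the specific whole-motion model developed in the paper:

```latex
\tan\theta_{\mathrm{GIA}} = \frac{\omega^{2} r}{g},
\qquad
\lvert \mathrm{GIA} \rvert = \sqrt{g^{2} + \left(\omega^{2} r\right)^{2}},
```

    where ω is the angular velocity of the centrifuge and r the radius of rotation. During acceleration or deceleration a tangential component (the radius times the angular acceleration) is added as well; its direction relative to the rider's body axes depends on whether the rider faces forward or backward, which is one conventional way of framing the facing-direction asymmetries discussed above.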

  13. Velocity storage contribution to vestibular self-motion perception in healthy human subjects.

    Science.gov (United States)

    Bertolini, G; Ramat, S; Laurens, J; Bockisch, C J; Marti, S; Straumann, D; Palla, A

    2011-01-01

    Self-motion perception after a sudden stop from a sustained rotation in darkness lasts approximately as long as reflexive eye movements. We hypothesized that, after an angular velocity step, self-motion perception and reflexive eye movements are driven by the same vestibular pathways. In 16 healthy subjects (25-71 years of age), perceived rotational velocity (PRV) and the vestibulo-ocular reflex (rVOR) after sudden decelerations (90°/s²) from constant-velocity (90°/s) earth-vertical axis rotations were simultaneously measured (PRV reported by hand-lever turning; rVOR recorded by search coils). Subjects were upright (yaw) or 90° left-ear-down (pitch). After both yaw and pitch decelerations, PRV rose rapidly and showed a plateau before decaying. In contrast, slow-phase eye velocity (SPV) decayed immediately after the initial increase. SPV and PRV were fitted with the sum of two exponentials: one time constant accounting for the semicircular canal (SCC) dynamics and one time constant accounting for a central process, known as velocity storage mechanism (VSM). Parameters were constrained by requiring equal SCC time constant and VSM time constant for SPV and PRV. The gains weighting the two exponential functions were free to change. SPV (variance-accounted-for: 0.85 ± 0.10) and PRV (variance-accounted-for: 0.86 ± 0.07) were accurately fitted, showing that SPV and PRV curve differences can be explained by a greater relative weight of VSM in PRV compared with SPV (twofold for yaw, threefold for pitch). These results support our hypothesis that self-motion perception after angular velocity steps is driven by the same central vestibular processes as reflexive eye movements and that no additional mechanisms are required to explain the perceptual dynamics.
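
    The two-exponential decomposition with shared time constants described above amounts to fitting v(t) = A_SCC · exp(−t/τ_SCC) + A_VSM · exp(−t/τ_VSM), with τ_SCC and τ_VSM constrained to be equal for SPV and PRV while the gains are free. A minimal sketch of such a constrained joint fit follows; the sampling grid, starting values, and synthetic data are illustrative assumptions, not the study's actual data or fitting code.

```python
# Hedged sketch: joint two-exponential fit of SPV and PRV with shared time
# constants (tau_scc, tau_vsm) and independent gains. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 60.0, 121)  # seconds after the velocity step (hypothetical sampling)

def two_exp(t, a_scc, a_vsm, tau_scc, tau_vsm):
    """Canal (SCC) component plus velocity-storage (VSM) component."""
    return a_scc * np.exp(-t / tau_scc) + a_vsm * np.exp(-t / tau_vsm)

def joint_model(t_stacked, a_scc_spv, a_vsm_spv, a_scc_prv, a_vsm_prv, tau_scc, tau_vsm):
    """SPV and PRV share the two time constants but have independent gains."""
    half = t_stacked.size // 2
    spv = two_exp(t_stacked[:half], a_scc_spv, a_vsm_spv, tau_scc, tau_vsm)
    prv = two_exp(t_stacked[half:], a_scc_prv, a_vsm_prv, tau_scc, tau_vsm)
    return np.concatenate([spv, prv])

# Synthetic responses standing in for measured SPV and PRV (deg/s).
rng = np.random.default_rng(0)
spv_data = two_exp(t, 60.0, 20.0, 6.0, 15.0) + rng.normal(0.0, 2.0, t.size)
prv_data = two_exp(t, 40.0, 50.0, 6.0, 15.0) + rng.normal(0.0, 2.0, t.size)

popt, _ = curve_fit(joint_model,
                    np.concatenate([t, t]),
                    np.concatenate([spv_data, prv_data]),
                    p0=[50.0, 30.0, 30.0, 50.0, 5.0, 20.0])
print("shared tau_SCC = %.1f s, shared tau_VSM = %.1f s" % (popt[4], popt[5]))
```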

  14. Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.

    Science.gov (United States)

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L

    2017-05-01

    Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.

  15. Self-motion perception and vestibulo-ocular reflex during whole body yaw rotation in standing subjects: the role of head position and neck proprioception.

    Science.gov (United States)

    Panichi, Roberto; Botti, Fabio Massimo; Ferraresi, Aldo; Faralli, Mario; Kyriakareli, Artemis; Schieppati, Marco; Pettorossi, Vito Enrico

    2011-04-01

    Self-motion perception and vestibulo-ocular reflex (VOR) were studied during whole body yaw rotation in the dark at different static head positions. Rotations consisted of four cycles of symmetric sinusoidal and asymmetric oscillations. Self-motion perception was evaluated by measuring the ability of subjects to manually track a static remembered target. VOR was recorded separately and the slow phase eye position (SPEP) was computed. Three different head static yaw deviations (active and passive) relative to the trunk (0°, 45° to right and 45° to left) were examined. Active head deviations had a significant effect during asymmetric oscillation: the movement perception was enhanced when the head was kept turned toward the side of body rotation and decreased in the opposite direction. Conversely, passive head deviations had no effect on movement perception. Further, vibration (100 Hz) of the neck muscles splenius capitis and sternocleidomastoideus remarkably influenced perceived rotation during asymmetric oscillation. On the other hand, SPEP of VOR was modulated by active head deviation, but was not influenced by neck muscle vibration. Through its effects on motion perception and reflex gain, head position improved gaze stability and enhanced self-motion perception in the direction of the head deviation. Copyright © 2010 Elsevier B.V. All rights reserved.

  16. Comparison of two Simon tasks: neuronal correlates of conflict resolution based on coherent motion perception.

    Science.gov (United States)

    Wittfoth, Matthias; Buck, Daniela; Fahle, Manfred; Herrmann, Manfred

    2006-08-15

    The present study aimed at characterizing the neural correlates of conflict resolution in two variations of the Simon effect. We introduced two different Simon tasks where subjects had to identify shapes on the basis of form-from-motion perception (FFMo) within a randomly moving dot field, while (1) motion direction (motion-based Simon task) or (2) stimulus location (location-based Simon task) had to be ignored. Behavioral data revealed that both types of Simon tasks induced highly significant interference effects. Using event-related fMRI, we could demonstrate that both tasks share a common cluster of activated brain regions during conflict resolution (pre-supplementary motor area (pre-SMA), superior parietal lobule (SPL), and cuneus) but also show task-specific activation patterns (left superior temporal cortex in the motion-based, and the left fusiform gyrus in the location-based Simon task). Although motion-based and location-based Simon tasks are conceptually very similar (Type 3 stimulus-response ensembles according to the taxonomy of [Kornblum, S., Stevens, G. (2002). Sequential effects of dimensional overlap: findings and issues. In: Prinz, W., Hommel, B. (Eds.), Common mechanism in perception and action. Oxford University Press, Oxford, pp. 9-54]), conflict resolution in both tasks results in the activation of different task-specific regions probably related to the different sources of task-irrelevant information. Furthermore, the present data give evidence that those task-specific regions are most likely to detect the relationship between task-relevant and task-irrelevant information.

  17. The economics of motion perception and invariants of visual sensitivity.

    Science.gov (United States)

    Gepshtein, Sergei; Tyukin, Ivan; Kubovy, Michael

    2007-06-21

    Neural systems face the challenge of optimizing their performance with limited resources, just as economic systems do. Here, we use tools of neoclassical economic theory to explore how a frugal visual system should use a limited number of neurons to optimize perception of motion. The theory prescribes that vision should allocate its resources to different conditions of stimulation according to the degree of balance between measurement uncertainties and stimulus uncertainties. We find that human vision approximately follows the optimal prescription. The equilibrium theory explains why human visual sensitivity is distributed the way it is and why qualitatively different regimes of apparent motion are observed at different speeds. The theory offers a new normative framework for understanding the mechanisms of visual sensitivity at the threshold of visibility and above the threshold and predicts large-scale changes in visual sensitivity in response to changes in the statistics of stimulation and system goals.

  18. Spectral fingerprints of large-scale cortical dynamics during ambiguous motion perception.

    Science.gov (United States)

    Helfrich, Randolph F; Knepper, Hannah; Nolte, Guido; Sengelmann, Malte; König, Peter; Schneider, Till R; Engel, Andreas K

    2016-11-01

    Ambiguous stimuli have been widely used to study the neuronal correlates of consciousness. Recently, it has been suggested that conscious perception might arise from the dynamic interplay of functionally specialized but widely distributed cortical areas. While previous research mainly focused on phase coupling as a correlate of cortical communication, more recent findings indicated that additional coupling modes might coexist and possibly subserve distinct cortical functions. Here, we studied two coupling modes, namely phase and envelope coupling, which might differ in their origins, putative functions and dynamics. Therefore, we recorded 128-channel EEG while participants performed a bistable motion task and utilized state-of-the-art source-space connectivity analysis techniques to study the functional relevance of different coupling modes for cortical communication. Our results indicate that gamma-band phase coupling in extrastriate visual cortex might mediate the integration of visual tokens into a moving stimulus during ambiguous visual stimulation. Furthermore, our results suggest that long-range fronto-occipital gamma-band envelope coupling sustains the horizontal percept during ambiguous motion perception. Additionally, our results support the idea that local parieto-occipital alpha-band phase coupling controls the inter-hemispheric information transfer. These findings provide correlative evidence for the notion that synchronized oscillatory brain activity reflects the processing of sensory input as well as the information integration across several spatiotemporal scales. The results indicate that distinct coupling modes are involved in different cortical computations and that the rich spatiotemporal correlation structure of the brain might constitute the functional architecture for cortical processing and specific multi-site communication. Hum Brain Mapp 37:4099-4111, 2016. © 2016 Wiley Periodicals, Inc.
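
    As an illustration of what "phase coupling" and "envelope coupling" mean computationally, the sketch below derives both from the analytic signal of two band-limited time series (a phase-locking value and an amplitude-envelope correlation). These are generic connectivity measures chosen for illustration; they are not necessarily the specific source-space metrics used in the study.

```python
# Hedged sketch: two coupling modes for a pair of narrow-band signals,
# computed from the Hilbert analytic signal. Toy data only.
import numpy as np
from scipy.signal import hilbert

def coupling_modes(x, y):
    """Return (phase-locking value, amplitude-envelope correlation)."""
    ax, ay = hilbert(x), hilbert(y)
    phase_diff = np.angle(ax) - np.angle(ay)
    plv = np.abs(np.mean(np.exp(1j * phase_diff)))        # phase coupling
    env_corr = np.corrcoef(np.abs(ax), np.abs(ay))[0, 1]  # envelope coupling
    return plv, env_corr

# Toy gamma-band-like signals sharing a slow envelope modulation.
fs = 500.0
t = np.arange(0.0, 5.0, 1.0 / fs)
rng = np.random.default_rng(1)
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t)
x = envelope * np.sin(2 * np.pi * 40.0 * t)
y = envelope * np.sin(2 * np.pi * 40.0 * t + 0.3) + 0.1 * rng.normal(size=t.size)

print("PLV = %.2f, envelope correlation = %.2f" % coupling_modes(x, y))
```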

  19. Long-lasting effects of neck muscle vibration and contraction on self-motion perception of vestibular origin.

    Science.gov (United States)

    Pettorossi, Vito Enrico; Panichi, Roberto; Botti, Fabio Massimo; Biscarini, Andrea; Filippi, Guido Maria; Schieppati, Marco

    2015-10-01

    To show that neck proprioceptive input can induce long-term effects on vestibular-dependent self-motion perception. Motion perception was assessed by measuring the subject's error in tracking in the dark the remembered position of a fixed target during whole-body yaw asymmetric rotation of a supporting platform, consisting of a fast rightward half-cycle and a slow leftward half-cycle returning the subject to the initial position. Neck muscles were relaxed or voluntarily contracted, and/or vibrated. Whole-body rotation was administered during or at various intervals after the vibration train. The tracking position error (TPE) at the end of the platform rotation was measured during and after the muscle conditioning maneuvers. Neck input produced immediate and sustained changes in the vestibular perceptual response to whole-body rotation. Vibration of the left sterno-cleido-mastoideus (SCM) or right splenius capitis (SC) or isometric neck muscle effort to rotate the head to the right enhanced the TPE by decreasing the perception of the slow rotation. The reverse effect was observed by activating the contralateral muscle. The effects persisted after the end of SCM conditioning, and slowly vanished within several hours, as tested by late asymmetric rotations. The aftereffect increased in amplitude and persistence by extending the duration of the vibration train (from 1 to 10 min), augmenting the vibration frequency (from 5 to 100 Hz) or contracting the vibrated muscle. Symmetric yaw rotation elicited a negligible TPE, upon which neck muscle vibrations were ineffective. Neck proprioceptive input induces enduring changes in vestibular-dependent self-motion perception, conditional on the vestibular stimulus feature, and on the side and the characteristics of vibration and status of vibrated muscles. This shows that our perception of whole-body yaw-rotation is not only dependent on accurate vestibular information, but is modulated by proprioceptive information related to

  20. The 50s cliff: a decline in perceptuo-motor learning, not a deficit in visual motion perception.

    Science.gov (United States)

    Ren, Jie; Huang, Shaochen; Zhang, Jiancheng; Zhu, Qin; Wilson, Andrew D; Snapp-Childs, Winona; Bingham, Geoffrey P

    2015-01-01

    Previously, we measured perceptuo-motor learning rates across the lifespan and found a sudden drop in learning rates between ages 50 and 60, called the "50s cliff." The task was a unimanual visual rhythmic coordination task in which participants used a joystick to oscillate one dot in a display in coordination with another dot oscillated by a computer. Participants learned to produce a coordination with a 90° relative phase relation between the dots. Learning rates for participants over 60 were half those of younger participants. Given existing evidence for visual motion perception deficits in people over 60 and the role of visual motion perception in the coordination task, it remained unclear whether the 50s cliff reflected onset of this deficit or a genuine decline in perceptuo-motor learning. The current work addressed this question. Two groups of 12 participants in each of four age ranges (20s, 50s, 60s, 70s) learned to perform a bimanual coordination of 90° relative phase. One group trained with only haptic information and the other group with both haptic and visual information about relative phase. Both groups were tested in both information conditions at baseline and post-test. If the 50s cliff was caused by an age-dependent deficit in visual motion perception, then older participants in the visual group should have exhibited less learning than those in the haptic group, which should not exhibit the 50s cliff, and older participants in both groups should have performed less well when tested with visual information. Neither of these expectations was confirmed by the results, so we concluded that the 50s cliff reflects a genuine decline in perceptuo-motor learning with aging, not the onset of a deficit in visual motion perception.

  1. Structure-from-motion: dissociating perception, neural persistence, and sensory memory of illusory depth and illusory rotation.

    Science.gov (United States)

    Pastukhov, Alexander; Braun, Jochen

    2013-02-01

    In the structure-from-motion paradigm, physical motion on a screen produces the vivid illusion of an object rotating in depth. Here, we show how to dissociate illusory depth and illusory rotation in a structure-from-motion stimulus using a rotationally asymmetric shape and reversals of physical motion. Reversals of physical motion create a conflict between the original illusory states and the new physical motion: Either illusory depth remains constant and illusory rotation reverses, or illusory rotation stays the same and illusory depth reverses. When physical motion reverses after the interruption in presentation, we find that illusory rotation tends to remain constant for long blank durations (T_blank ≥ 0.5 s), but illusory depth is stabilized if interruptions are short (T_blank ≤ 0.1 s). The stability of illusory depth over brief interruptions is consistent with the effect of neural persistence. When this is curtailed using a mask, stability of ambiguous vision (for either illusory depth or illusory rotation) is disrupted. We also examined the selectivity of the neural persistence of illusory depth. We found that it relies on a static representation of an interpolated illusory object, since changes to low-level display properties had little detrimental effect. We discuss our findings with respect to other types of history dependence in multistable displays (sensory stabilization memory, neural fatigue, etc.). Our results suggest that when brief interruptions are used during the presentation of multistable displays, switches in perception are likely to rely on the same neural mechanisms as spontaneous switches, rather than switches due to the initial percept choice at the stimulus onset.

  2. The effect of oxytocin on biological motion perception in dogs (Canis familiaris).

    Science.gov (United States)

    Kovács, Krisztina; Kis, Anna; Kanizsár, Orsolya; Hernádi, Anna; Gácsi, Márta; Topál, József

    2016-05-01

    Recent studies have shown that the neuropeptide oxytocin is involved in the regulation of several complex human social behaviours. There is, however, little research on the effect of oxytocin on basic mechanisms underlying human sociality, such as the perception of biological motion. In the present study, we investigated the effect of oxytocin on biological motion perception in dogs (Canis familiaris), a species adapted to the human social environment and thus widely used to model many aspects of human social behaviour. In a within-subjects design, dogs (N = 39), after having received either oxytocin or placebo treatment, were presented with 2D projection of a moving point-light human figure and the inverted and scrambled version of the same movie. Heart rate (HR) and heart rate variability (HRV) were measured as physiological responses, and behavioural response was evaluated by observing dogs' looking time. Subjects were also rated on the personality traits of Neuroticism and Agreeableness by their owners. As expected, placebo-pretreated (control) dogs showed a spontaneous preference for the biological motion pattern; however, there was no such preference after oxytocin pretreatment. Furthermore, following the oxytocin pretreatment female subjects looked more at the moving point-light figure than males. The individual variations along the dimensions of Agreeableness and Neuroticism also modulated dogs' behaviour. Furthermore, HR and HRV measures were affected by oxytocin treatment and in turn played a role in subjects' looking behaviour. We discuss how these findings contribute to our understanding of the neurohormonal regulatory mechanisms of human (and non-human) social skills.

  3. High-level, but not low-level, motion perception is impaired in patients with schizophrenia.

    Science.gov (United States)

    Kandil, Farid I; Pedersen, Anya; Wehnes, Jana; Ohrmann, Patricia

    2013-01-01

    Smooth pursuit eye movements are compromised in patients with schizophrenia and their first-degree relatives. Although research has demonstrated that the motor components of smooth pursuit eye movements are intact, motion perception has been shown to be impaired. In particular, studies have consistently revealed deficits in performance on tasks specific to the high-order motion area V5 (middle temporal area, MT) in patients with schizophrenia. In contrast, data from low-level motion detectors in the primary visual cortex (V1) have been inconsistent. To differentiate between low-level and high-level visual motion processing, we applied a temporal-order judgment task for motion events and a motion-defined figure-ground segregation task using patients with schizophrenia and healthy controls. Successful judgments in both tasks rely on the same low-level motion detectors in the V1; however, the first task is further processed in the higher-order motion area MT in the magnocellular (dorsal) pathway, whereas the second task requires subsequent computations in the parvocellular (ventral) pathway in visual area V4 and the inferotemporal cortex (IT). These latter structures are supposed to be intact in schizophrenia. Patients with schizophrenia revealed a significantly impaired temporal resolution on the motion-based temporal-order judgment task but only mild impairment in the motion-based segregation task. These results imply that low-level motion detection in V1 is not, or is only slightly, compromised; furthermore, our data restrict the locus of the well-known deficit in motion detection to areas beyond the primary visual cortex.

  4. Biological motion perception links diverse facets of theory of mind during middle childhood.

    Science.gov (United States)

    Rice, Katherine; Anderson, Laura C; Velnoskey, Kayla; Thompson, James C; Redcay, Elizabeth

    2016-06-01

    Two cornerstones of social development--social perception and theory of mind--undergo brain and behavioral changes during middle childhood, but the link between these developing domains is unclear. One theoretical perspective argues that these skills represent domain-specific areas of social development, whereas other perspectives suggest that both skills may reflect a more integrated social system. Given recent evidence from adults that these superficially different domains may be related, the current study examined the developmental relation between these social processes in 52 children aged 7 to 12 years. Controlling for age and IQ, social perception (perception of biological motion in noise) was significantly correlated with two measures of theory of mind: one in which children made mental state inferences based on photographs of the eye region of the face and another in which children made mental state inferences based on stories. Social perception, however, was not correlated with children's ability to make physical inferences from stories about people. Furthermore, the mental state inference tasks were not correlated with each other, suggesting a role for social perception in linking various facets of theory of mind. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Embodied learning of a generative neural model for biological motion perception and inference.

    Science.gov (United States)

    Schrodt, Fabian; Layher, Georg; Neumann, Heiko; Butz, Martin V

    2015-01-01

    Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons.

  6. Embodied Learning of a Generative Neural Model for Biological Motion Perception and Inference

    Directory of Open Access Journals (Sweden)

    Fabian eSchrodt

    2015-07-01

    Full Text Available Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons.

  7. Perception of visual apparent motion is modulated by a gap within concurrent auditory glides, even when it is illusory

    Directory of Open Access Journals (Sweden)

    Qingcui eWang

    2015-05-01

    Full Text Available Auditory and visual events often happen concurrently, and how they group together can have a strong effect on what is perceived. We investigated whether/how intra- or cross-modal temporal grouping influenced the perceptual decision of otherwise ambiguous visual apparent motion. To achieve this, we juxtaposed the auditory gap transfer illusion with the visual Ternus display. The Ternus display involves a multi-element stimulus that can induce either of two different percepts of apparent motion: ‘element motion’ or ‘group motion’. In element motion, the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary; in group motion, both disks appear to move laterally as a whole. The gap transfer illusion refers to the illusory subjective transfer of a short gap (around 100 ms) from the long glide to the short continuous glide when the two glides cross at the temporal middle point. In our experiments, observers were required to make a perceptual discrimination of Ternus motion in the presence of concurrent auditory glides (with or without a gap inside). Results showed that a gap within a short glide imposed a remarkable effect on separating visual events, and led to a dominant perception of group motion as well. The auditory configuration with gap transfer illusion triggered the same auditory capture effect. Further investigations showed that the visual interval which coincided with the gap interval (50-230 ms) in the long glide was perceived to be shorter than that within both the short glide and the ‘gap-transfer’ auditory configurations at the same physical intervals (gaps). The results indicated that auditory temporal perceptual grouping takes priority over the cross-modal interaction in determining the final readout of the visual perception, and the mechanism of selective attention on auditory events also plays a role.

  8. Apparent motion perception in lower limb amputees with phantom sensations: "obstacle shunning" and "obstacle tolerance".

    Science.gov (United States)

    Saetta, Gianluca; Grond, Ilva; Brugger, Peter; Lenggenhager, Bigna; Tsay, Anthony J; Giummarra, Melita J

    2018-03-21

    Phantom limbs are the phenomenal persistence of postural and sensorimotor features of an amputated limb. Although immaterial, their characteristics can be modulated by the presence of physical matter. For instance, the phantom may disappear when its phenomenal space is invaded by objects ("obstacle shunning"). Alternatively, "obstacle tolerance" occurs when the phantom is not limited by the law of impenetrability and co-exists with physical objects. Here we examined the link between this under-investigated aspect of phantom limbs and apparent motion perception. The illusion of apparent motion of human limbs involves the perception that a limb moves through or around an object, depending on the stimulus onset asynchrony (SOA) for the two images. Participants included 12 unilateral lower limb amputees matched for obstacle shunning (n = 6) and obstacle tolerance (n = 6) experiences, and 14 non-amputees. Using multilevel linear models, we replicated robust biases for short perceived trajectories for short SOA (moving through the object), and long trajectories (circumventing the object) for long SOAs in both groups. Importantly, however, amputees with obstacle shunning perceived leg stimuli to predominantly move through the object, whereas amputees with obstacle tolerance perceived leg stimuli to predominantly move around the object. That is, in people who experience obstacle shunning, apparent motion perception of lower limbs was not constrained to the laws of impenetrability (as the phantom disappears when invaded by objects), and legs can therefore move through physical objects. Amputees who experience obstacle tolerance, however, had stronger solidity constraints for lower limb apparent motion, perhaps because they must avoid co-location of the phantom with physical objects. Phantom limb experience does, therefore, appear to be modulated by intuitive physics, but not in the same way for everyone. This may have important implications for limb experience post

  9. Facilitating Effects of Emotion on the Perception of Biological Motion: Evidence for a Happiness Superiority Effect.

    Science.gov (United States)

    Lee, Hannah; Kim, Jejoong

    2017-06-01

    It has been reported that visual perception can be influenced not only by the physical features of a stimulus but also by the emotional valence of the stimulus, even without explicit emotion recognition. Some previous studies reported an anger superiority effect while others found a happiness superiority effect during visual perception. It thus remains unclear as to which emotion is more influential. In the present study, we conducted two experiments using biological motion (BM) stimuli to examine whether emotional valence of the stimuli would affect BM perception; and if so, whether a specific type of emotion is associated with a superiority effect. Point-light walkers with three emotion types (anger, happiness, and neutral) were used, and the threshold to detect BM within noise was measured in Experiment 1. Participants showed higher performance in detecting happy walkers compared with the angry and neutral walkers. Follow-up motion velocity analysis revealed that physical difference among the stimuli was not the main factor causing the effect. The results of the emotion recognition task in Experiment 2 also showed a happiness superiority effect, as in Experiment 1. These results show that emotional valence (happiness) of the stimuli can facilitate the processing of BM.

  10. The Posture of Putting One's Palms Together Modulates Visual Motion Event Perception.

    Science.gov (United States)

    Saito, Godai; Gyoba, Jiro

    2018-02-01

    We investigated the effect of an observer's hand postures on visual motion perception using the stream/bounce display. When two identical visual objects move along collinear horizontal trajectories toward each other in a two-dimensional display, observers perceive them as either streaming or bouncing. In our previous study, we found that when observers put their palms together just below the coincidence point of the two objects, the percentage of bouncing responses increased, mainly depending on the proprioceptive information from their own hands. However, it remains unclear whether the tactile or the haptic (force) information produced by the posture is the main influence on stream/bounce perception. We addressed this question by changing the tactile and haptic information on the palms of the hands. Experiment 1 showed that the promotion of bouncing perception was observed only when the posture of directly putting one's palms together was used, while there was no effect when a brick was sandwiched between the participant's palms. Experiment 2 demonstrated that the strength of force used when putting the palms together had no effect on increasing bounce perception. Our findings indicate that the hands-induced bounce effect derives from the tactile information produced by the direct contact between both palms.

  11. S1-1: Individual Differences in the Perception of Biological Motion

    Directory of Open Access Journals (Sweden)

    Ian Thornton

    2012-10-01

    Full Text Available Our ability to accurately perceive the actions of others based on reduced visual cues has been well documented. Previous work has suggested that this ability is probably made possible by separable mechanisms that can operate in either a passive, bottom-up fashion or an active, top-down fashion (Thornton, Rensink, & Shiffrar, 2002, Perception, 31, 837–853). One line of evidence for exploring the contribution of top-down mechanisms is to consider the extent to which individual differences in more general cognitive abilities, such as attention and working memory, predict performance on biological motion tasks. In this talk, I will begin by reviewing previous work that has looked at biological motion processing in clinical settings and as a function of domain-specific expertise. I will then introduce a new task that we are using in my lab to explore individual variation in action matching as a function of independently assessed attentional control and working memory capacity.

  12. Auditory motion capturing ambiguous visual motion

    Directory of Open Access Journals (Sweden)

    Arjen eAlink

    2012-01-01

    Full Text Available In this study, it is demonstrated that moving sounds have an effect on the direction in which one sees visual stimuli move. During the main experiment sounds were presented consecutively at four speaker locations inducing left- or rightwards auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived visual apparent motion stimuli that were ambiguous (equally likely to be perceived as moving left- or rightwards) more often as moving in the same direction than in the opposite direction of auditory apparent motion. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when visual motion direction is insufficiently determinate without affecting eye movements.

  13. Modification of Otolith-Ocular Reflexes, Motion Perception and Manual Control During Variable Radius Centrifugation Following Space Flight

    Science.gov (United States)

    Wood, Scott J.; Clarke, A. H.; Rupert, A. H.; Harm, D. L.; Clement, G. R.

    2009-01-01

    Two joint ESA-NASA studies are examining changes in otolith-ocular reflexes and motion perception following short duration space flights, and the operational implications of post-flight tilt-translation ambiguity for manual control performance. Vibrotactile feedback of tilt orientation is also being evaluated as a countermeasure to improve performance during a closed-loop nulling task. Data is currently being collected on astronaut subjects during 3 preflight sessions and during the first 8 days after Shuttle landings. Variable radius centrifugation is utilized to elicit otolith reflexes in the lateral plane without concordant roll canal cues. Unilateral centrifugation (400 deg/s, 3.5 cm radius) stimulates one otolith positioned off-axis while the opposite side is centered over the axis of rotation. During this paradigm, roll-tilt perception is measured using a subjective visual vertical task and ocular counter-rolling is obtained using binocular video-oculography. During a second paradigm (216 deg/s, less than 20 cm radius), the effects of stimulus frequency (0.15 - 0.6 Hz) are examined on eye movements and motion perception. A closed-loop nulling task is also performed with and without vibrotactile display feedback of chair radial position. Data collection is currently ongoing. Results to date suggest there is a trend for perceived tilt and translation amplitudes to be increased at the low and medium frequencies on landing day compared to pre-flight. Manual control performance is improved with vibrotactile feedback. One result of this study will be to characterize the variability (gain, asymmetry) in both otolith-ocular responses and motion perception during variable radius centrifugation, and measure the time course of post-flight recovery. This study will also address how adaptive changes in otolith-mediated reflexes correspond to one's ability to perform closed-loop nulling tasks following G-transitions, and whether manual control performance can be improved

  14. Detecting Biological Motion for Human–Robot Interaction: A Link between Perception and Action

    Directory of Open Access Journals (Sweden)

    Alessia Vignolo

    2017-06-01

    Full Text Available One of the fundamental skills supporting safe and comfortable interaction between humans is their capability to understand intuitively each other’s actions and intentions. At the basis of this ability is a special-purpose visual processing that the human brain has developed to comprehend human motion. Among the first “building blocks” enabling the bootstrapping of such visual processing is the ability to detect movements performed by biological agents in the scene, a skill mastered by human babies in the first days of their life. In this paper, we present a computational model based on the assumption that such visual ability must be based on local low-level visual motion features, which are independent of shape cues such as the configuration of the body and perspective. Moreover, we implement it on the humanoid robot iCub, embedding it into a software architecture that leverages the regularities of biological motion also to control robot attention and oculomotor behaviors. In essence, we put forth a model in which the regularities of biological motion link perception and action enabling a robotic agent to follow a human-inspired sensory-motor behavior. We posit that this choice facilitates mutual understanding and goal prediction during collaboration, increasing the pleasantness and safety of the interaction.

  15. Age differences in visual-auditory self-motion perception during a simulated driving task

    Directory of Open Access Journals (Sweden)

    Robert eRamkhalawansingh

    2016-04-01

    Full Text Available Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.
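
    As a rough illustration of the dependent measure described above (hypothetical values and names, not the authors' analysis code), speed variability can be computed per cue condition and compared across conditions:

        import numpy as np

        def speed_variability(speed_trace):
            """Standard deviation of instantaneous driving speed over a trial."""
            return np.std(speed_trace)

        # Hypothetical speed samples (m/s) from one participant.
        visual_only = np.array([27.1, 29.4, 25.8, 30.2, 26.5])
        visual_plus_audio = np.array([27.9, 28.4, 27.2, 28.8, 27.5])

        reduction = 1 - speed_variability(visual_plus_audio) / speed_variability(visual_only)
        print(f"Proportional reduction in speed variability: {reduction:.2f}")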

  16. Similar effects of feature-based attention on motion perception and pursuit eye movements at different levels of awareness

    OpenAIRE

    Spering, Miriam; Carrasco, Marisa

    2012-01-01

    Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth pursuit eye movements in response to moving dichoptic plaids–stimuli composed of two orthogonally-drifting gratings, presented separately to each eye–in ...

  17. Can walking motions improve visually induced rotational self-motion illusions in virtual reality?

    Science.gov (United States)

    Riecke, Bernhard E; Freiberg, Jacob B; Grechkin, Timofey Y

    2015-02-04

    Illusions of self-motion (vection) can provide compelling sensations of moving through virtual environments without the need for complex motion simulators or large tracked physical walking spaces. Here we explore the interaction between biomechanical cues (stepping along a rotating circular treadmill) and visual cues (viewing simulated self-rotation) for providing stationary users a compelling sensation of rotational self-motion (circular vection). When tested individually, biomechanical and visual cues were similarly effective in eliciting self-motion illusions. However, in combination they yielded significantly more intense self-motion illusions. These findings provide the first compelling evidence that walking motions can be used to significantly enhance visually induced rotational self-motion perception in virtual environments (and vice versa) without having to provide for physical self-motion or motion platforms. This is noteworthy, as linear treadmills have been found to actually impair visually induced translational self-motion perception (Ash, Palmisano, Apthorp, & Allison, 2013). Given the predominant focus on linear walking interfaces for virtual-reality locomotion, our findings suggest that investigating circular and curvilinear walking interfaces offers a promising direction for future research and development and can help to enhance self-motion illusions, presence and immersion in virtual-reality systems. © 2015 ARVO.

  18. Reprint of "Biological motion perception links diverse facets of theory of mind during middle childhood".

    Science.gov (United States)

    Rice, Katherine; Anderson, Laura C; Velnoskey, Kayla; Thompson, James C; Redcay, Elizabeth

    2016-09-01

    Two cornerstones of social development-social perception and theory of mind-undergo brain and behavioral changes during middle childhood, but the link between these developing domains is unclear. One theoretical perspective argues that these skills represent domain-specific areas of social development, whereas other perspectives suggest that both skills may reflect a more integrated social system. Given recent evidence from adults that these superficially different domains may be related, the current study examined the developmental relation between these social processes in 52 children aged 7 to 12 years. Controlling for age and IQ, social perception (perception of biological motion in noise) was significantly correlated with two measures of theory of mind: one in which children made mental state inferences based on photographs of the eye region of the face and another in which children made mental state inferences based on stories. Social perception, however, was not correlated with children's ability to make physical inferences from stories about people. Furthermore, the mental state inference tasks were not correlated with each other, suggesting a role for social perception in linking various facets of theory of mind. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Unconscious Local Motion Alters Global Image Speed

    Science.gov (United States)

    Khuu, Sieu K.; Chung, Charles Y. L.; Lord, Stephanie; Pearson, Joel

    2014-01-01

    Accurate motion perception of self and object speed is crucial for successful interaction in the world. The context in which we make such speed judgments has a profound effect on their accuracy. Misperceptions of motion speed caused by the context can have drastic consequences in real world situations, but they also reveal much about the underlying mechanisms of motion perception. Here we show that motion signals suppressed from awareness can warp simultaneous conscious speed perception. In Experiment 1, we measured global speed discrimination thresholds using an annulus of 8 local Gabor elements. We show that physically removing local elements from the array attenuated global speed discrimination. However, removing awareness of the local elements only had a small effect on speed discrimination. That is, unconscious local motion elements contributed to global conscious speed perception. In Experiment 2 we measured the global speed of the moving Gabor patterns, when half the elements moved at different speeds. We show that global speed averaging occurred regardless of whether local elements were removed from awareness, such that the speed of invisible elements continued to be averaged together with the visible elements to determine the global speed. These data suggest that contextual motion signals outside of awareness can both boost and affect our experience of motion speed, and suggest that such pooling of motion signals occurs before the conscious extraction of the surround motion speed. PMID:25503603
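
    A minimal sketch of the speed-averaging account suggested by Experiment 2 (hypothetical speeds, not the study's stimulus values): the global speed estimate is taken as the mean of the local element speeds, whether or not those elements reach awareness:

        import numpy as np

        # Local drift speeds (deg/s) of the Gabor elements in the annulus.
        visible_speeds = np.array([4.0, 4.0, 4.0, 4.0])
        suppressed_speeds = np.array([8.0, 8.0, 8.0, 8.0])  # removed from awareness, not from the display

        # Averaging over all elements predicts the conscious global speed percept.
        global_speed = np.mean(np.concatenate([visible_speeds, suppressed_speeds]))
        print(f"Predicted global speed: {global_speed:.1f} deg/s")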

  20. Neural correlates of visually induced self-motion illusion in depth.

    Science.gov (United States)

    Kovács, Gyula; Raabe, Markus; Greenlee, Mark W

    2008-08-01

    Optic-flow fields can induce the conscious illusion of self-motion in a stationary observer. Here we used functional magnetic resonance imaging to reveal the differential processing of self- and object-motion in the human brain. Subjects were presented a constantly expanding optic-flow stimulus, composed of disparate red-blue dots, viewed through red-blue glasses to generate a vivid percept of three-dimensional motion. We compared the activity obtained during periods of illusory self-motion with periods of object-motion percept. We found that the right MT+, precuneus, as well as areas located bilaterally along the dorsal part of the intraparietal sulcus and along the left posterior intraparietal sulcus were more active during self-motion perception than during object-motion. Additional signal increases were located in the depth of the left superior frontal sulcus, over the ventral part of the left anterior cingulate, in the depth of the right central sulcus and in the caudate nucleus/putamen. We found no significant deactivations associated with self-motion perception. Our results suggest that the illusory percept of self-motion is correlated with the activation of a network of areas, ranging from motion-specific areas to regions involved in visuo-vestibular integration, visual imagery, decision making, and introspection.

  1. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    Science.gov (United States)

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.
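
    The aggregation step can be illustrated with a short sketch (illustrative only, not the MotionFlow implementation): motion sequences are treated as transitions between discrete pose states and pooled into counts that a flow or tree diagram could then display:

        from collections import Counter

        # Each sequence is a list of pose-state labels produced by clustering.
        sequences = [
            ["stand", "reach", "grab", "stand"],
            ["stand", "reach", "wave"],
            ["stand", "reach", "grab", "stand"],
        ]

        # Aggregate pairwise pose transitions across all tracked sequences.
        transitions = Counter()
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                transitions[(a, b)] += 1

        for (a, b), n in transitions.most_common():
            print(f"{a} -> {b}: {n}")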

  2. Sound-contingent visual motion aftereffect

    Directory of Open Access Journals (Sweden)

    Kobayashi Maori

    2011-05-01

    Full Text Available Background: After a prolonged exposure to a paired presentation of different types of signals (e.g., color and motion), one of the signals (color) becomes a driver for the other signal (motion). This phenomenon, which is known as contingent motion aftereffect, indicates that the brain can establish new neural representations even in the adult brain. However, contingent motion aftereffect has been reported only within the visual or auditory domain. Here, we demonstrate that a visual motion aftereffect can be contingent on a specific sound. Results: Dynamic random dots moving in an alternating right or left direction were presented to the participants. Each direction of motion was accompanied by an auditory tone of a unique and specific frequency. After a 3-minute exposure, the tones began to exert marked influence on the visual motion perception, and the percentage of dots required to trigger motion perception systematically changed depending on the tones. Furthermore, this effect lasted for at least 2 days. Conclusions: These results indicate that a new neural representation can be rapidly established between auditory and visual modalities.

  3. Social network size relates to developmental neural sensitivity to biological motion

    Directory of Open Access Journals (Sweden)

    L.A. Kirby

    2018-04-01

    Full Text Available The ability to perceive others’ actions and goals from human motion (i.e., biological motion perception) is a critical component of social perception and may be linked to the development of real-world social relationships. Adult research demonstrates two key nodes of the brain’s biological motion perception system—amygdala and posterior superior temporal sulcus (pSTS)—are linked to variability in social network properties. The relation between social perception and social network properties, however, has not yet been investigated in middle childhood—a time when individual differences in social experiences and social perception are growing. The aims of this study were to (1) replicate past work showing amygdala and pSTS sensitivity to biological motion in middle childhood; (2) examine age-related changes in the neural sensitivity for biological motion, and (3) determine whether neural sensitivity for biological motion relates to social network characteristics in children. Consistent with past work, we demonstrate a significant relation between social network size and neural sensitivity for biological motion in left pSTS, but do not find age-related change in biological motion perception. This finding offers evidence for the interplay between real-world social experiences and functional brain development and has important implications for understanding disorders of atypical social experience. Keywords: Biological motion, Social networks, Middle childhood, Neural specialization, Brain-behavior relations, pSTS

  4. Motion perception in motion : how we perceive object motion during smooth pursuit eye movements

    NARCIS (Netherlands)

    Souman, J.L.

    2005-01-01

    Eye movements change the retinal image motion of objects in the visual field. When we make an eye movement, the image of a stationary object will move across the retinae, while the retinal image of an object that we follow with the eyes is approximately stationary. To enable us to perceive motion in

  5. The Relative Importance of Spatial Versus Temporal Structure in the Perception of Biological Motion: An Event-Related Potential Study

    Science.gov (United States)

    Hirai, Masahiro; Hiraki, Kazuo

    2006-01-01

    We investigated how the spatiotemporal structure of animations of biological motion (BM) affects brain activity. We measured event-related potentials (ERPs) during the perception of BM under four conditions: normal spatial and temporal structure; scrambled spatial and normal temporal structure; normal spatial and scrambled temporal structure; and…

  6. Study of the perception of visual motion in amblyopia using functional MRI

    International Nuclear Information System (INIS)

    Lu Guangming; Zhang Zhiqiang; Zhou Wenzhen; Zheng Ling; Yin Jie; Liang Ping

    2006-01-01

    Objective: To investigate the pathophysiological mechanism of anisometropic and strabismic amblyopia by observing cortical activation under visual motion stimulation using functional MRI (fMRI). Methods: Seven patients with anisometropic amblyopia and 10 patients with strabismic amblyopia were examined on 1.5 T MR scanners with a paradigm in which the task and control states were a rotating and a stationary grating, respectively. The data were processed offline with SPM software and analyzed at the single-subject level. An index of interocular difference of activation (IDA) was computed and compared between groups with the Mann-Whitney rank sum test to quantify the difference in activation between the two eyes. Results: Activation appeared bilaterally in the occipital lobe in both groups of amblyopia patients. There was mild activation in the frontal lobe when the amblyopic eyes were stimulated, but none when the sound eyes were stimulated. With the MT area taken as the region of interest, activation from the sound eye was stronger than from the amblyopic eye in all 7 anisometropic amblyopia patients. Among the strabismic amblyopia patients, activation from the amblyopic eye was lower than from the sound eye in 5 patients and higher in 4 patients, and one patient showed no activation. The IDA values of the two groups differed significantly (Z=2.382, P=0.017). Conclusion: More cortical areas are activated by the amblyopic eye than by the sound eye under monocular stimulation. Visual motion function may be affected in anisometropic amblyopia, whereas in strabismic amblyopia it may relate to the underlying mechanism of strabismus; this suggests that the two types of amblyopia differ with regard to the impairment of visual motion perception. (authors)

  7. Neural dynamics of motion processing and speed discrimination.

    Science.gov (United States)

    Chey, J; Grossberg, S; Mingolla, E

    1998-09-01

    A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning, that realizes a size-speed correlation, can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different size are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. These mechanisms are proposed to occur in the V1-->MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs, and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception.

  8. Efficacy of manual and manipulative therapy in the perception of pain and cervical motion in patients with tension-type headache: a randomized, controlled clinical trial.

    Science.gov (United States)

    Espí-López, Gemma V; Gómez-Conesa, Antonia

    2014-03-01

    The purpose of this study was to evaluate the efficacy of manipulative and manual therapy treatments with regard to pain perception and neck mobility in patients with tension-type headache. A randomized clinical trial was conducted on 84 adults diagnosed with tension-type headache. Eighty-four subjects were enrolled in this study: 68 women and 16 men. Mean age was 39.76 years, ranging from 18 to 65 years. A total of 57.1% were diagnosed with chronic tension-type headache and 42.9% with tension-type headache. Participants were divided into 3 treatment groups (manual therapy, manipulative therapy, a combination of manual and manipulative therapy) and a control group. Four treatment sessions were administered during 4 weeks, with posttreatment assessment and follow-up at 1 month. Cervical ranges of motion, pain perception, and frequency and intensity of headaches were assessed. All 3 treatment groups showed significant improvements in the different dimensions of pain perception. Manual therapy and manipulative treatment improved some cervical ranges of motion. Headache frequency was reduced with manipulative treatment, and patients reported improvement after the treatment and at follow-up with manipulative therapy. All treatments, administered both separately and combined together, showed efficacy for patients with tension-type headache with regard to pain perception. As for cervical ranges of motion, treatments produced greater effect when separately administered.

  9. Deficient motion-defined and texture-defined figure-ground segregation in amblyopic children.

    Science.gov (United States)

    Wang, Jane; Ho, Cindy S; Giaschi, Deborah E

    2007-01-01

    Motion-defined form deficits in the fellow eye and the amblyopic eye of children with amblyopia implicate possible direction-selective motion processing or static figure-ground segregation deficits. Deficient motion-defined form perception in the fellow eye of amblyopic children may not be fully accounted for by a general motion processing deficit. This study investigates the contribution of figure-ground segregation deficits to the motion-defined form perception deficits in amblyopia. Performances of 6 amblyopic children (5 anisometropic, 1 anisostrabismic) and 32 control children with normal vision were assessed on motion-defined form, texture-defined form, and global motion tasks. Performance on motion-defined and texture-defined form tasks was significantly worse in amblyopic children than in control children. Performance on global motion tasks was not significantly different between the 2 groups. Faulty figure-ground segregation mechanisms are likely responsible for the observed motion-defined form perception deficits in amblyopia.

  10. Temporal ventriloquism along the path of apparent motion: speed perception under different spatial grouping principles.

    Science.gov (United States)

    Ogulmus, Cansu; Karacaoglu, Merve; Kafaligonul, Hulusi

    2018-03-01

    The coordination of intramodal perceptual grouping and crossmodal interactions plays a critical role in constructing coherent multisensory percepts. However, the basic principles underlying such coordinating mechanisms still remain unclear. By taking advantage of an illusion called temporal ventriloquism and its influences on perceived speed, we investigated how audiovisual interactions in time are modulated by the spatial grouping principles of vision. In our experiments, we manipulated the spatial grouping principles of proximity, uniform connectedness, and similarity/common fate in apparent motion displays. Observers compared the speed of apparent motions across different sound timing conditions. Our results revealed that the effects of sound timing (i.e., temporal ventriloquism effects) on perceived speed also existed in visual displays containing more than one object and were modulated by different spatial grouping principles. In particular, uniform connectedness was found to modulate these audiovisual interactions in time. The effect of sound timing on perceived speed was smaller when horizontal connecting bars were introduced along the path of apparent motion. When the objects in each apparent motion frame were not connected or connected with vertical bars, the sound timing was more influential compared to the horizontal bar conditions. Overall, our findings here suggest that the effects of sound timing on perceived speed exist in different spatial configurations and can be modulated by certain intramodal spatial grouping principles such as uniform connectedness.

  11. Perception of visual apparent motion is modulated by a gap within concurrent auditory glides, even when it is illusory

    Science.gov (United States)

    Wang, Qingcui; Guo, Lu; Bao, Ming; Chen, Lihan

    2015-01-01

    Auditory and visual events often happen concurrently, and how they group together can have a strong effect on what is perceived. We investigated whether/how intra- or cross-modal temporal grouping influenced the perceptual decision of otherwise ambiguous visual apparent motion. To achieve this, we juxtaposed auditory gap transfer illusion with visual Ternus display. The Ternus display involves a multi-element stimulus that can induce either of two different percepts of apparent motion: ‘element motion’ (EM) or ‘group motion’ (GM). In “EM,” the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary; while in “GM,” both disks appear to move laterally as a whole. The gap transfer illusion refers to the illusory subjective transfer of a short gap (around 100 ms) from the long glide to the short continuous glide when the two glides intercede at the temporal middle point. In our experiments, observers were required to make a perceptual discrimination of Ternus motion in the presence of concurrent auditory glides (with or without a gap inside). Results showed that a gap within a short glide imposed a remarkable effect on separating visual events, and led to a dominant perception of GM as well. The auditory configuration with gap transfer illusion triggered the same auditory capture effect. Further investigations showed that visual interval which coincided with the gap interval (50–230 ms) in the long glide was perceived to be shorter than that within both the short glide and the ‘gap-transfer’ auditory configurations in the same physical intervals (gaps). The results indicated that auditory temporal perceptual grouping takes priority over the cross-modal interaction in determining the final readout of the visual perception, and the mechanism of selective attention on auditory events also plays a role. PMID:26042055

  12. Effect of pictorial depth cues, binocular disparity cues and motion parallax depth cues on lightness perception in three-dimensional virtual scenes.

    Directory of Open Access Journals (Sweden)

    Michiteru Kitazaki

    2008-09-01

    Full Text Available Surface lightness perception is affected by scene interpretation. There is some experimental evidence that perceived lightness under bi-ocular viewing conditions is different from perceived lightness in actual scenes but there are also reports that viewing conditions have little or no effect on perceived color. We investigated how mixes of depth cues affect perception of lightness in three-dimensional rendered scenes containing strong gradients of illumination in depth. Observers viewed a virtual room (4 m width x 5 m height x 17.5 m depth) with checkerboard walls and floor. In four conditions, the room was presented with or without binocular disparity (BD) depth cues and with or without motion parallax (MP) depth cues. In all conditions, observers were asked to adjust the luminance of a comparison surface to match the lightness of test surfaces placed at seven different depths (8.5-17.5 m) in the scene. We estimated lightness versus depth profiles in all four depth cue conditions. Even when observers had only pictorial depth cues (no MP, no BD), they partially but significantly discounted the illumination gradient in judging lightness. Adding either MP or BD led to significantly greater discounting and both cues together produced the greatest discounting. The effects of MP and BD were approximately additive. BD had greater influence at near distances than far. These results suggest that surface lightness perception is modulated by three-dimensional perception/interpretation using pictorial, binocular-disparity, and motion-parallax cues additively. We propose a two-stage (2D and 3D) processing model for lightness perception.

  13. Kinesthetic information disambiguates visual motion signals.

    Science.gov (United States)

    Hu, Bo; Knill, David C

    2010-05-25

    Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction. Copyright 2010 Elsevier Ltd. All rights reserved.

  14. Modeling a space-variant cortical representation for apparent motion.

    Science.gov (United States)

    Wurbs, Jeremy; Mingolla, Ennio; Yazdanbakhsh, Arash

    2013-08-06

    Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements while the periphery is suited for fast, coarse movements. In either the fovea or periphery discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as an apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitative linear increase of Dmax data without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.
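
    The reported linear dependence can be stated compactly; the sketch below (assumed constants, not parameters from the model) treats Dmax as a linear function of eccentricity:

        def dmax_deg(eccentricity_deg, intercept=0.25, slope=0.12):
            """Largest flash displacement (deg) still perceived as apparent motion,
            modeled as increasing linearly with eccentricity (illustrative constants)."""
            return intercept + slope * eccentricity_deg

        for ecc in (0, 5, 10, 20, 40):
            print(f"eccentricity {ecc:>2} deg -> Dmax ~ {dmax_deg(ecc):.2f} deg")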

  15. Modulation frequency as a cue for auditory speed perception.

    Science.gov (United States)

    Senna, Irene; Parise, Cesare V; Ernst, Marc O

    2017-07-12

    Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities. © 2017 The Author(s).

  16. Differences between Perception and Eye Movements during Complex Motions

    Science.gov (United States)

    Holly, Jan E.; Davis, Saralin M.; Sullivan, Kelly E.

    2013-01-01

    During passive whole-body motion in the dark, the motion perceived by subjects may or may not be veridical. Either way, reflexive eye movements are typically compensatory for the perceived motion. However, studies are discovering that for certain motions, the perceived motion and eye movements are incompatible. The incompatibility has not been explained by basic differences in gain or time constants of decay. This paper uses three-dimensional modeling to investigate gondola centrifugation (with a tilting carriage) and off-vertical axis rotation. The first goal was to determine whether known differences between perceived motions and eye movements are true differences when all three-dimensional combinations of angular and linear components are considered. The second goal was to identify the likely areas of processing in which perceived motions match or differ from eye movements, whether in angular components, linear components and/or dynamics. The results were that perceived motions are more compatible with eye movements in three dimensions than the one-dimensional components indicate, and that they differ more in their linear than their angular components. In addition, while eye movements are consistent with linear filtering processes, perceived motion has dynamics that cannot be explained by basic differences in time constants, filtering, or standard GIF-resolution processes. PMID:21846952

  17. Alpha motion based on a motion detector, but not on the Müller-Lyer illusion

    Science.gov (United States)

    Suzuki, Masahiro

    2014-07-01

    This study examined the mechanism of alpha motion, the apparent motion of the Müller-Lyer figure's shaft that occurs when the arrowheads and arrow tails are alternately presented. The following facts were found: (a) reduced exposure duration decreased the amount of alpha motion, and this phenomenon was not explainable by the amount of the Müller-Lyer illusion; (b) the motion aftereffect occurred after adaptation to alpha motion; (c) occurrence of alpha motion became difficult when the temporal frequency increased, and this characteristic of alpha motion was similar to the characteristic of a motion detector that motion detection became difficult when the temporal frequency increased from the optimal frequency. These findings indicated that alpha motion occurs on the basis of a motion detector but not on the Müller-Lyer illusion, and that the mechanism of alpha motion is the same as that of general motion perception.

  18. Auditory capture of visual motion: effects on perception and discrimination.

    Science.gov (United States)

    McCourt, Mark E; Leone, Lynnette M

    2016-09-28

    We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.

  19. Decision-level adaptation in motion perception.

    Science.gov (United States)

    Mather, George; Sharman, Rebecca J

    2015-12-01

    Prolonged exposure to visual stimuli causes a bias in observers' responses to subsequent stimuli. Such adaptation-induced biases are usually explained in terms of changes in the relative activity of sensory neurons in the visual system which respond selectively to the properties of visual stimuli. However, the bias could also be due to a shift in the observer's criterion for selecting one response rather than the alternative; adaptation at the decision level of processing rather than the sensory level. We investigated whether adaptation to implied motion is best attributed to sensory-level or decision-level bias. Three experiments sought to isolate decision factors by changing the nature of the participants' task while keeping the sensory stimulus unchanged. Results showed that adaptation-induced bias in reported stimulus direction only occurred when the participants' task involved a directional judgement, and disappeared when adaptation was measured using a non-directional task (reporting where motion was present in the display, regardless of its direction). We conclude that adaptation to implied motion is due to decision-level bias, and that a propensity towards such biases may be widespread in sensory decision-making.

  20. Abnormal Size-Dependent Modulation of Motion Perception in Children with Autism Spectrum Disorder (ASD).

    Science.gov (United States)

    Sysoeva, Olga V; Galuta, Ilia A; Davletshina, Maria S; Orekhova, Elena V; Stroganova, Tatiana A

    2017-01-01

    Excitation/Inhibition (E/I) imbalance in neural networks is now considered among the core neural underpinnings of autism psychopathology. In motion perception at least two phenomena critically depend on E/I balance in visual cortex: spatial suppression (SS), and spatial facilitation (SF), corresponding to impoverished or improved motion perception with increasing stimulus size, respectively. While SS is dominant at high contrast, SF is evident for low contrast stimuli, due to the prevalence of inhibitory contextual modulations in the former, and excitatory ones in the latter case. Only one previous study (Foss-Feig et al., 2013) investigated SS and SF in Autism Spectrum Disorder (ASD). Our study aimed to replicate previous findings, and to explore the putative contribution of deficient inhibitory influences to an enhanced SF index in ASD-a cornerstone for the interpretation proposed by Foss-Feig et al. (2013). The SS and SF were examined in 40 boys with ASD with a broad spectrum of intellectual abilities and in typically developing (TD) boys. The presence of abnormally enhanced SF in children with ASD was the only consistent finding between our study and that of Foss-Feig et al. While the SS and SF indexes were strongly interrelated in TD participants, this correlation was absent in their peers with ASD. In addition, the SF index but not the SS index correlated with the severity of autism and the poor registration abilities. The pattern of results is partially consistent with the idea of hypofunctional inhibitory transmission in visual areas in ASD. Nonetheless, the absence of correlation between SF and SS indexes paired with a strong direct link between abnormally enhanced SF and autism symptoms in our ASD sample emphasizes the role of the enhanced excitatory influences by themselves in the observed abnormalities in low-level visual phenomena found in ASD.

  1. Biases in the perception of self-motion during whole-body acceleration and deceleration

    Directory of Open Access Journals (Sweden)

    Luc eTremblay

    2013-12-01

    Full Text Available Several studies have investigated whether vestibular signals can be processed to determine the magnitude of passive body motions. Many of them required subjects to report their perceived displacements offline, i.e. after being submitted to passive displacements. Here, we used a protocol that allowed us to complement these results by asking subjects to report their introspective estimation of their displacement continuously, i.e. during the ongoing body rotation. To this end, participants rotated the handle of a manipulandum around a vertical axis to indicate their perceived change of angular position in space at the same time as they were passively rotated in the dark. The rotation acceleration (Acc) and deceleration (Dec) lasted either 1.5 s (peak of 60 deg/s2, referred to as being "High") or 3 s (peak of 33 deg/s2, referred to as being "Low"). The participants were rotated either counter-clockwise or clockwise, and all combinations of acceleration and deceleration were tested (i.e., AccLow-DecLow; AccLow-DecHigh; AccHigh-DecLow; AccHigh-DecHigh). The participants' perception of body rotation was assessed by computing the gain, i.e. the ratio between the amplitude of the perceived rotations (as measured by the rotating manipulandum’s handle) and the amplitude of the actual chair rotations. The gain was measured at the end of the rotations, and was also computed separately for the acceleration and deceleration phases. Three salient findings resulted from this experiment: (i) the gain was much greater during body acceleration than during body deceleration, (ii) the gain was greater during High compared to Low accelerations, and (iii) the gain measured during the deceleration was influenced by the preceding acceleration (i.e., Low or High). These different effects of the angular stimuli on the perception of body motion can be interpreted in relation to the consequences of body acceleration and deceleration on the vestibular system and on higher-order cognitive
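
    The gain measure described above is a simple ratio; the sketch below (hypothetical trial values) divides the rotation amplitude reported via the manipulandum handle by the actual chair rotation amplitude, separately for each phase of the motion profile:

        def rotation_gain(perceived_amplitude_deg, actual_amplitude_deg):
            """Ratio of perceived to actual angular displacement."""
            return perceived_amplitude_deg / actual_amplitude_deg

        # Hypothetical amplitudes (deg) for one trial, split by motion phase.
        print("acceleration gain:", rotation_gain(perceived_amplitude_deg=54.0, actual_amplitude_deg=60.0))
        print("deceleration gain:", rotation_gain(perceived_amplitude_deg=31.0, actual_amplitude_deg=60.0))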

  2. Contrast gain control in first- and second-order motion perception.

    Science.gov (United States)

    Lu, Z L; Sperling, G

    1996-12-01

    A novel pedestal-plus-test paradigm is used to determine the nonlinear gain-control properties of the first-order (luminance) and the second-order (texture-contrast) motion systems, that is, how these systems' responses to motion stimuli are reduced by pedestals and other masking stimuli. Motion-direction thresholds were measured for test stimuli consisting of drifting luminance and texture-contrast-modulation stimuli superimposed on pedestals of various amplitudes. (A pedestal is a static sine-wave grating of the same type and same spatial frequency as the moving test grating.) It was found that first-order motion-direction thresholds are unaffected by small pedestals, but at pedestal contrasts above 1-2% (5-10 x pedestal threshold), motion thresholds increase proportionally to pedestal amplitude (a Weber law). For first-order stimuli, pedestal masking is specific to the spatial frequency of the test. On the other hand, motion-direction thresholds for texture-contrast stimuli are independent of pedestal amplitude (no gain control whatever) throughout the accessible pedestal amplitude range (from 0 to 40%). However, when baseline carrier contrast increases (with constant pedestal modulation amplitude), motion thresholds increase, showing that gain control in second-order motion is determined not by the modulator (as in first-order motion) but by the carrier. Note that baseline contrast of the carrier is inherently independent of spatial frequency of the modulator. The drastically different gain-control properties of the two motion systems and prior observations of motion masking and motion saturation are all encompassed in a functional theory. The stimulus inputs to both first- and second-order motion process are normalized by feedforward, shunting gain control. The different properties arise because the modulator is used to control the first-order gain and the carrier is used to control the second-order gain.
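
    A minimal sketch of the Weber-law masking pattern reported for the first-order system (illustrative constants, not values from the paper): the direction threshold stays at baseline for small pedestals and then grows in proportion to pedestal contrast:

        def first_order_threshold(pedestal_contrast, baseline=0.005, weber_fraction=0.3):
            """Motion-direction threshold (contrast units): flat at baseline for small
            pedestals, then proportional to pedestal contrast (a Weber law).
            Second-order thresholds, by contrast, are reported to be pedestal-independent."""
            return max(baseline, weber_fraction * pedestal_contrast)

        for pedestal in (0.0, 0.01, 0.02, 0.08, 0.32):
            print(f"pedestal {pedestal:.2f} -> threshold {first_order_threshold(pedestal):.4f}")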

  3. Directional Limits on Motion Transparency Assessed Through Colour-Motion Binding.

    Science.gov (United States)

    Maloney, Ryan T; Clifford, Colin W G; Mareschal, Isabelle

    2018-03-01

    Motion-defined transparency is the perception of two or more distinct moving surfaces at the same retinal location. We explored the limits of motion transparency using superimposed surfaces of randomly positioned dots defined by differences in motion direction and colour. In one experiment, dots were red or green and we varied the proportion of dots of a single colour that moved in a single direction ('colour-motion coherence') and measured the threshold direction difference for discriminating between two directions. When colour-motion coherences were high (e.g., 90% of red dots moving in one direction), a smaller direction difference was required to correctly bind colour with direction than at low coherences. In another experiment, we varied the direction difference between the surfaces and measured the threshold colour-motion coherence required to discriminate between them. Generally, colour-motion coherence thresholds decreased with increasing direction differences, stabilising at direction differences around 45°. Different stimulus durations were compared, and thresholds were higher at the shortest (150 ms) compared with the longest (1,000 ms) duration. These results highlight different yet interrelated aspects of the task and the fundamental limits of the mechanisms involved: the resolution of narrowly separated directions in motion processing and the local sampling of dot colours from each surface.
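
    The coherence manipulation can be sketched as follows (illustrative parameters, not the authors' stimulus code): for a given colour-motion coherence, that proportion of the dots of one colour is assigned a common direction and the remainder the other direction:

        import numpy as np

        def assign_directions(n_dots, coherence, directions=(20.0, 65.0), rng=None):
            """Return a motion direction (deg) for each dot of one colour;
            `coherence` is the proportion of those dots moving in directions[0]."""
            rng = np.random.default_rng() if rng is None else rng
            dirs = np.full(n_dots, directions[1])
            dirs[: int(round(coherence * n_dots))] = directions[0]
            rng.shuffle(dirs)
            return dirs

        red_dirs = assign_directions(n_dots=100, coherence=0.9)  # 90% colour-motion coherence
        print("proportion of red dots in direction 0:", np.mean(red_dirs == 20.0))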

  4. Neural Mechanisms of Illusory Motion: Evidence from ERP Study

    Directory of Open Access Journals (Sweden)

    Xu Y. A. N. Yun

    2011-05-01

    Full Text Available ERPs were used to examine the neural correlates of illusory motion, by presenting the Rice Wave illusion (CI), its two variants (WI and NI), and a real motion video (RM). Results showed that: firstly, RM elicited a more negative deflection than CI, NI and WI between 200–350 ms. Secondly, between 500–600 ms, CI elicited a more positive deflection than NI and WI, and RM elicited a more positive deflection than CI; more interestingly, brain activity was sequentially enhanced with the corresponding motion strength. We inferred that the former component might reflect the successful encoding of the local motion signals in detectors at the lower stage, while the latter one might be involved in the intensive representation of visual input in real/illusory motion perception, that is, the global organization of motion signals at the later stage of motion perception. Finally, between 1185–1450 ms, a significantly more positive component was found for the illusory/real motion tasks than for NI (no motion). Overall, we demonstrated that there was a stronger deflection under larger motion strength. These results reflected not only the different temporal patterns of illusory and real motion but also differences in their working memory representation and storage.

  5. Long-term effects of serial anodal tDCS on motion perception in subjects with occipital stroke measured in the unaffected visual hemifield

    Directory of Open Access Journals (Sweden)

    Manuel C Olma

    2013-06-01

    Full Text Available Transcranial direct current stimulation (tDCS) is a novel neuromodulatory tool that has seen early transition to clinical trials, although the high variability of these findings necessitates further studies in clinically relevant populations. The majority of evidence into effects of repeated tDCS is based on research in the human motor system, but it is unclear whether the long-term effects of serial tDCS are motor-specific or transferable to other brain areas. This study aimed to examine whether serial anodal tDCS over the visual cortex can exogenously induce long-term neuroplastic changes in the visual cortex. However, when the visual cortex is affected by a cortical lesion, up-regulated endogenous neuroplastic adaptation processes may alter the susceptibility to tDCS. To this end, motion perception was investigated in the unaffected hemifield of subjects with unilateral visual cortex lesions. Twelve subjects with occipital ischaemic lesions participated in a within-subject, sham-controlled, double-blind study. MRI-registered sham or anodal tDCS (1.5 mA, 20 minutes) was applied on five consecutive days over the visual cortex. Motion perception was tested before and after stimulation sessions and at 14- and 28-day follow-up. After a 16-day interval an identical study block with the other stimulation condition (anodal or sham tDCS) followed. Serial anodal tDCS over the visual cortex resulted in an improvement in motion perception, a function attributed to MT/V5. This effect was still measurable at 14- and 28-day follow-up measurements. Thus, this may represent evidence for long-term tDCS-induced plasticity and has implications for the design of studies examining the time course of tDCS effects in both the visual and motor systems.

  6. Micro-calibration of space and motion by photoreceptors synchronized in parallel with cortical oscillations: A unified theory of visual perception.

    Science.gov (United States)

    Jerath, Ravinder; Cearley, Shannon M; Barnes, Vernon A; Jensen, Mike

    2018-01-01

    A fundamental function of the visual system is detecting motion, yet visual perception is poorly understood. Current research has determined that the retina and ganglion cells elicit responses for motion detection; however, the underlying mechanism for this is incompletely understood. Previously we proposed that retinogeniculo-cortical oscillations and photoreceptors work in parallel to process vision. Here we propose that motion could also be processed within the retina, and not in the brain as current theory suggests. In this paper, we discuss: 1) internal neural space formation; 2) primary, secondary, and tertiary roles of vision; 3) gamma as the secondary role; and 4) synchronization and coherence. Movement within the external field is instantly detected by primary processing within the space formed by the retina, providing a unified view of the world from an internal point of view. Our new theory begins to answer questions about: 1) perception of space, erect images, and motion, 2) purpose of lateral inhibition, 3) speed of visual perception, and 4) how peripheral color vision occurs without a large population of cones located peripherally in the retina. We explain that strong oscillatory activity influences brain activity and is necessary for: 1) visual processing, and 2) formation of the internal visuospatial area necessary for visual consciousness, which could allow rods to receive precise visual and visuospatial information, while retinal waves could link the lateral geniculate body with the cortex to form a neural space formed by membrane potential-based oscillations and photoreceptors. We propose that vision is tripartite, with three components that allow a person to make sense of the world, terming them "primary, secondary, and tertiary roles" of vision. Finally, we propose that gamma waves that are higher in strength and volume allow communication among the retina, thalamus, and various areas of the cortex, and synchronization brings cortical

  7. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction.

    Science.gov (United States)

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research.
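
    The predictive-coding account summarized above can be illustrated with a toy loop: an internal model predicts the next sensory frame, and the prediction error is used to refine the model. The sketch below is a deliberately minimal stand-in for that principle, not PredNet or any architecture from the study; the one-dimensional signal, the shift-based "motion", and the update rule are all illustrative assumptions.

      # Toy predictive-coding loop: predict the next frame, use the error to refine
      # the internal model (here, a single estimated shift per frame).
      import numpy as np

      x = np.sin(np.linspace(0, 4 * np.pi, 64, endpoint=False))  # smooth "scene"
      true_shift = 3                                             # hidden motion per frame

      def shift(v, k):
          return np.roll(v, k)

      est_shift = 0                     # internal model: estimated motion per frame
      frame = x
      for _ in range(10):
          next_frame = shift(frame, true_shift)                  # actual sensory input
          # compare prediction errors for candidate models around the current estimate
          errors = {k: np.mean((next_frame - shift(frame, k)) ** 2)
                    for k in (est_shift - 1, est_shift, est_shift + 1)}
          est_shift = min(errors, key=errors.get)                # error refines the model
          frame = next_frame

      print("estimated motion per frame:", est_shift)            # converges to 3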

  8. Drifting while stepping in place in old adults: Association of self-motion perception with reference frame reliance and ground optic flow sensitivity.

    Science.gov (United States)

    Agathos, Catherine P; Bernardin, Delphine; Baranton, Konogan; Assaiante, Christine; Isableu, Brice

    2017-04-07

    Optic flow provides visual self-motion information and is shown to modulate gait and provoke postural reactions. We have previously reported an increased reliance on the visual, as opposed to the somatosensory-based egocentric, frame of reference (FoR) for spatial orientation with age. In this study, we evaluated FoR reliance for self-motion perception with respect to the ground surface. We examined how effects of ground optic flow direction on posture may be enhanced by an intermittent podal contact with the ground, and reliance on the visual FoR and aging. Young, middle-aged and old adults stood quietly (QS) or stepped in place (SIP) for 30s under static stimulation, approaching and receding optic flow on the ground and a control condition. We calculated center of pressure (COP) translation and optic flow sensitivity was defined as the ratio of COP translation velocity over absolute optic flow velocity: the visual self-motion quotient (VSQ). COP translation was more influenced by receding flow during QS and by approaching flow during SIP. In addition, old adults drifted forward while SIP without any imposed visual stimulation. Approaching flow limited this natural drift and receding flow enhanced it, as indicated by the VSQ. The VSQ appears to be a motor index of reliance on the visual FoR during SIP and is associated with greater reliance on the visual and reduced reliance on the egocentric FoR. Exploitation of the egocentric FoR for self-motion perception with respect to the ground surface is compromised by age and associated with greater sensitivity to optic flow. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
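
    A minimal sketch of the visual self-motion quotient (VSQ) defined above, i.e. the ratio of COP translation velocity to the absolute optic flow velocity. The sampling rate, flow speed, and the synthetic COP trace below are illustrative assumptions, not values from the study.

      import numpy as np

      fs = 100.0                         # force-plate sampling rate (Hz), assumed
      duration = 30.0                    # trial length (s), matching the 30 s trials
      n = int(fs * duration)

      optic_flow_velocity = 0.2          # ground optic flow speed (m/s), assumed
      # synthetic anterior-posterior COP trace (m), standing in for recorded data
      cop_ap = 0.0005 * np.cumsum(np.random.randn(n))

      # COP translation velocity: net anterior-posterior displacement over the trial
      cop_translation_velocity = (cop_ap[-1] - cop_ap[0]) / duration

      vsq = cop_translation_velocity / abs(optic_flow_velocity)
      print(f"VSQ = {vsq:.4f}")          # >0: drift with the flow; <0: drift against it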

  9. Visual working memory contaminates perception

    OpenAIRE

    Kang, Min-Suk; Hong, Sang Wook; Blake, Randolph; Woodman, Geoffrey F.

    2011-01-01

    Indirect evidence suggests that the contents of visual working memory may be maintained within sensory areas early in the visual hierarchy. We tested this possibility using a well-studied motion repulsion phenomenon in which perception of one direction of motion is distorted when another direction of motion is viewed simultaneously. We found that observers misperceived the actual direction of motion of a single motion stimulus if, while viewing that stimulus, they were holding a different mot...

  10. Visual Motion Processing Subserves Faster Visuomotor Reaction in Badminton Players.

    Science.gov (United States)

    Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas

    2017-06-01

    Athletes participating in ball or racquet sports have to respond to visual stimuli under critical time pressure. Previous studies used visual contrast stimuli to determine visual perception and visuomotor reaction in athletes and nonathletes; however, ball and racquet sports are characterized by motion rather than contrast visual cues. Because visual contrast and motion signals are processed in different cortical regions, this study aimed to determine differences in perception and processing of visual motion between athletes and nonathletes. Twenty-five skilled badminton players and 28 age-matched nonathletic controls participated in this study. Using a 64-channel EEG system, we investigated visual motion perception/processing in the motion-sensitive middle temporal (MT) cortical area in response to radial motion of different velocities. In a simple visuomotor reaction task, visuomotor transformation in Brodmann area 6 (BA6) and BA4 as well as muscular activation (EMG onset) and visuomotor reaction time (VMRT) were investigated. Stimulus- and response-locked potentials were determined to differentiate between perceptual and motor-related processes. As compared with nonathletes, athletes showed earlier EMG onset times (217 vs 178 ms, P < 0.001), accompanied by a faster VMRT (274 vs 243 ms, P < 0.001). Furthermore, athletes showed an earlier stimulus-locked peak activation of MT (200 vs 182 ms, P = 0.002) and BA6 (161 vs 137 ms, P = 0.009). Response-locked peak activation in MT was later in athletes (-7 vs 26 ms, P < 0.001), whereas no group differences were observed in BA6 and BA4. Multiple regression analyses with stimulus- and response-locked cortical potentials predicted EMG onset (r = 0.83) and VMRT (r = 0.77). The athletes' superior visuomotor performance in response to visual motion is primarily related to visual perception and, to a minor degree, to motor-related processes.

  11. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction

    Directory of Open Access Journals (Sweden)

    Eiji Watanabe

    2018-03-01

    Full Text Available The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research.

  12. Stream/Bounce Event Perception Reveals a Temporal Limit of Motion Correspondence Based on Surface Feature over Space and Time

    Directory of Open Access Journals (Sweden)

    Yousuke Kawachi

    2011-06-01

    Full Text Available We examined how stream/bounce event perception is affected by motion correspondence based on the surface features of moving objects passing behind an occlusion. In the stream/bounce display two identical objects moving across each other in a two-dimensional display can be perceived as either streaming through or bouncing off each other at coincidence. Here, surface features such as colour (Experiments 1 and 2) or luminance (Experiment 3) were switched between the two objects at coincidence. The moment of coincidence was invisible to observers due to an occluder. Additionally, the presentation of the moving objects was manipulated in duration after the feature switch at coincidence. The results revealed that a postcoincidence duration of approximately 200 ms was required for the visual system to stabilize judgments of stream/bounce events by determining motion correspondence between the objects across the occlusion on the basis of the surface feature. The critical duration was similar across motion speeds of objects and types of surface features. Moreover, controls (Experiments 4a–4c) showed that cognitive bias based on feature (colour/luminance) congruency across the occlusion could not fully account for the effects of surface features on the stream/bounce judgments. We discuss the roles of motion correspondence, visual feature processing, and attentive tracking in the stream/bounce judgments.

  13. The importance of stimulus noise analysis for self-motion studies.

    Directory of Open Access Journals (Sweden)

    Alessandro Nesti

    Full Text Available Motion simulators are widely employed in basic and applied research to study the neural mechanisms of perception and action during inertial stimulation. In these studies, uncontrolled simulator-introduced noise inevitably leads to a disparity between the reproduced motion and the trajectories meticulously designed by the experimenter, possibly resulting in undesired motion cues to the investigated system. Understanding actual simulator responses to different motion commands is therefore a crucial yet often underestimated step towards the interpretation of experimental results. In this work, we developed analysis methods based on signal processing techniques to quantify the noise in the actual motion, and its deterministic and stochastic components. Our methods allow comparisons between commanded and actual motion as well as between different actual motion profiles. A specific practical example from one of our studies is used to illustrate the methodologies and their relevance, but this does not detract from its general applicability. Analyses of the simulator's inertial recordings show direction-dependent noise and nonlinearity related to the command amplitude. The Signal-to-Noise Ratio is one order of magnitude higher for the larger motion amplitudes we tested, compared to the smaller motion amplitudes. Simulator-introduced noise is found to be primarily of deterministic nature, particularly for the stronger motion intensities. The effect of simulator noise on quantification of animal/human motion sensitivity is discussed. We conclude that accurate recording and characterization of executed simulator motion are a crucial prerequisite for the investigation of uncertainty in self-motion perception.
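
    The decomposition described above can be sketched as follows: given several recordings of the same commanded profile, the across-repetition mean estimates the deterministic component of the executed motion and the residuals estimate the stochastic component, from which a Signal-to-Noise Ratio can be formed. The array shapes, the synthetic data, and this particular SNR definition are assumptions for illustration, not the authors' pipeline.

      # Separate deterministic and stochastic components of executed simulator motion
      # from repeated recordings of one commanded profile, and compute an SNR.
      import numpy as np

      def decompose(recordings, command):
          """recordings: (n_repetitions, n_samples) inertial measurements of one profile."""
          deterministic = recordings.mean(axis=0)            # reproducible component
          stochastic = recordings - deterministic            # trial-to-trial variability
          deterministic_error = deterministic - command      # reproducible deviation from command
          snr = np.var(command) / np.var(recordings - command)  # signal vs total noise power
          return deterministic_error, stochastic, snr

      # Example with synthetic data standing in for accelerometer recordings.
      t = np.linspace(0, 2, 500)
      command = np.sin(2 * np.pi * 1.0 * t)                  # commanded acceleration
      reps = (command + 0.05 * np.sin(2 * np.pi * 3.0 * t)   # reproducible distortion
              + 0.02 * np.random.randn(8, t.size))           # 8 noisy executions
      det_err, stoch, snr = decompose(reps, command)
      print(f"SNR = {snr:.1f}, deterministic error RMS = {np.sqrt(np.mean(det_err**2)):.3f}")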

  14. The Verriest Lecture: Color lessons from space, time, and motion

    Science.gov (United States)

    Shevell, Steven K.

    2012-01-01

    The appearance of a chromatic stimulus depends on more than the wavelengths composing it. The scientific literature has countless examples showing that spatial and temporal features of light influence the colors we see. Studying chromatic stimuli that vary over space, time or direction of motion has a further benefit beyond predicting color appearance: the unveiling of otherwise concealed neural processes of color vision. Spatial or temporal stimulus variation uncovers multiple mechanisms of brightness and color perception at distinct levels of the visual pathway. Spatial variation in chromaticity and luminance can change perceived three-dimensional shape, an example of chromatic signals that affect a percept other than color. Chromatic objects in motion expose the surprisingly weak link between the chromaticity of objects and their physical direction of motion, and the role of color in inducing an illusory motion direction. Space, time and motion – color’s colleagues – reveal the richness of chromatic neural processing. PMID:22330398

  15. Visual motion detection and habitat preference in Anolis lizards.

    Science.gov (United States)

    Steinberg, David S; Leal, Manuel

    2016-11-01

    The perception of visual stimuli has been a major area of inquiry in sensory ecology, and much of this work has focused on coloration. However, for visually oriented organisms, the process of visual motion detection is often equally crucial to survival and reproduction. Despite the importance of motion detection to many organisms' daily activities, the degree of interspecific variation in the perception of visual motion remains largely unexplored. Furthermore, the factors driving this potential variation (e.g., ecology or evolutionary history) along with the effects of such variation on behavior are unknown. We used a behavioral assay under laboratory conditions to quantify the visual motion detection systems of three species of Puerto Rican Anolis lizard that prefer distinct structural habitat types. We then compared our results to data previously collected for anoles from Cuba, Puerto Rico, and Central America. Our findings indicate that general visual motion detection parameters are similar across species, regardless of habitat preference or evolutionary history. We argue that these conserved sensory properties may drive the evolution of visual communication behavior in this clade.

  16. Pre-coincidence brain activity predicts the perceptual outcome of streaming/bouncing motion display.

    Science.gov (United States)

    Zhao, Song; Wang, Yajie; Jia, Lina; Feng, Chengzhi; Liao, Yu; Feng, Wenfeng

    2017-08-18

    When two identical visual discs move toward each other on a two-dimensional visual display, they can be perceived as either "streaming through" or "bouncing off" each other after their coincidence. Previous studies have observed a strong bias toward the streaming percept. Additionally, the incidence of the bouncing percept in this ambiguous display could be increased by various factors, such as a brief sound at the moment of coincidence and a momentary pause of the two discs. The streaming/bouncing bistable motion phenomenon has been studied intensively since its discovery. However, little is known regarding the neural basis underlying the perceptual ambiguity in the classic version of the streaming/bouncing motion display. The present study investigated the neural basis of the perceptual disambiguation underlying the processing of the streaming/bouncing bistable motion display using event-related potential (ERP) recordings. Surprisingly, the amplitude of frontal central P2 (220-260 ms) that was elicited by the moving discs ~200 ms before the coincidence of the two discs was observed to be predictive of the subsequent streaming or bouncing percept. A larger P2 amplitude was observed for the streaming percept than for the bouncing percept. These findings suggest that the streaming/bouncing bistable perception may have been disambiguated unconsciously ~200 ms before the coincidence of the two discs.

  17. Integration of canal and otolith inputs by central vestibular neurons is subadditive for both active and passive self-motion: implication for perception.

    Science.gov (United States)

    Carriot, Jerome; Jamali, Mohsen; Brooks, Jessica X; Cullen, Kathleen E

    2015-02-25

    Traditionally, the neural encoding of vestibular information is studied by applying either passive rotations or translations in isolation. However, natural vestibular stimuli are typically more complex. During everyday life, our self-motion is generally not restricted to one dimension, but rather comprises both rotational and translational motion that will simultaneously stimulate receptors in the semicircular canals and otoliths. In addition, natural self-motion is the result of self-generated and externally generated movements. However, to date, it remains unknown how information about rotational and translational components of self-motion is integrated by vestibular pathways during active and/or passive motion. Accordingly, here, we compared the responses of neurons at the first central stage of vestibular processing to rotation, translation, and combined motion. Recordings were made in alert macaques from neurons in the vestibular nuclei involved in postural control and self-motion perception. In response to passive stimulation, neurons did not combine canal and otolith afferent information linearly. Instead, inputs were subadditively integrated with a weighting that was frequency dependent. Although canal inputs were more heavily weighted at low frequencies, the weighting of otolith input increased with frequency. In response to active stimulation, neuronal modulation was significantly attenuated (∼ 70%) relative to passive stimulation for rotations and translations and even more profoundly attenuated for combined motion due to subadditive input integration. Together, these findings provide insights into neural computations underlying the integration of semicircular canal and otolith inputs required for accurate posture and motor control, as well as perceptual stability, during everyday life. Copyright © 2015 the authors 0270-6474/15/353555-11$15.00/0.

  18. The application of biological motion research: biometrics, sport, and the military.

    Science.gov (United States)

    Steel, Kylie; Ellem, Eathan; Baxter, David

    2015-02-01

    The body of research that examines the perception of biological motion is extensive and explores the factors that are perceived from biological motion and how this information is processed. This research demonstrates that individuals are able to use relative (temporal and spatial) information from a person's movement to recognize factors, including gender, age, deception, emotion, intention, and action. The research also demonstrates that movement presents idiosyncratic properties that allow individual discrimination, thus providing the basis for significant exploration in the domain of biometrics and social signal processing. The domains of medical forensics, safety garments, and victim selection also have a history of research on applications of biological motion perception; however, a number of additional domains present opportunities for application that have not been explored in depth. Therefore, the purpose of this paper is to present an overview of the current applications of biological motion-based research and to propose a number of areas where biological motion research, specific to recognition, could be applied in the future.

  19. The Perception of the Higher Derivatives of Visual Motion.

    Science.gov (United States)

    1986-06-24

    In the two runs the motion was uniform. It was found that sensitivity to acceleration (as indicated by the proportion of correct discriminations) decreased... It was also found that discrimination of direction of motion in depth has submaxima with the transition... for a stimulus whose size alternately expanded or contracted at a fixed rate...

  20. Gravity Cues Embedded in the Kinematics of Human Motion Are Detected in Form-from-Motion Areas of the Visual System and in Motor-Related Areas.

    Science.gov (United States)

    Cignetti, Fabien; Chabeauti, Pierre-Yves; Menant, Jasmine; Anton, Jean-Luc J J; Schmitz, Christina; Vaugoyeau, Marianne; Assaiante, Christine

    2017-01-01

    The present study investigated the cortical areas engaged in the perception of graviceptive information embedded in biological motion (BM). To this end, functional magnetic resonance imaging was used to assess the cortical areas active during the observation of human movements performed under normogravity and microgravity (parabolic flight). Movements were defined by motion cues alone using point-light displays. We found that gravity modulated the activation of a restricted set of regions of the network subtending BM perception, including form-from-motion areas of the visual system (kinetic occipital region, lingual gyrus, cuneus) and motor-related areas (primary motor and somatosensory cortices). These findings suggest that compliance of observed movements with normal gravity was carried out by mapping them onto the observer's motor system and by extracting their overall form from local motion of the moving light points. We propose that judgment on graviceptive information embedded in BM can be established based on motor resonance and visual familiarity mechanisms and not necessarily by accessing the internal model of gravitational motion stored in the vestibular cortex.

  1. NATO Symposium entitled "Symposium on the Study of Motion Perception : Recent Developments and Applications"

    CERN Document Server

    Wagenaar, Willem; Leibowitz, Herschel

    1982-01-01

    From August 24-29, 1980 the international "Symposium on the Study of Motion Perception; Recent Developments and Applications", sponsored by NATO and organized by the editors of this book, was held in Veldhoven, the Netherlands. The meeting was attended by about eighty scholars, including psychologists, neurologists, physicists and other scientists, from fourteen different countries. During the symposium some fifty research papers were presented and a series of tutorial review papers were read and discussed. The research presentations have been published in a special issue of the international journal of psychonomics "Acta Psychologica" (Vol. 48, 1981). The present book is a compilation of the tutorial papers. The tutorials were arranged around early versions of the chapters now appearing in this book. The long discussions at the Veldhoven tutorial sessions resulted in extensive revisions of the texts prior to this publication. Unfortunately this led to a delay in publication, but we feel that this was justifi...

  2. Sound frequency and aural selectivity in sound-contingent visual motion aftereffect.

    Directory of Open Access Journals (Sweden)

    Maori Kobayashi

    Full Text Available BACKGROUND: One possible strategy to evaluate whether signals in different modalities originate from a common external event or object is to form associations between inputs from different senses. This strategy would be quite effective because signals in different modalities from a common external event would then be aligned spatially and temporally. Indeed, it has been demonstrated that after adaptation to visual apparent motion paired with alternating auditory tones, the tones begin to trigger illusory motion perception to a static visual stimulus, where the perceived direction of visual lateral motion depends on the order in which the tones are replayed. The mechanisms underlying this phenomenon remain unclear. One important approach to understanding the mechanisms is to examine whether the effect has some selectivity in auditory processing. However, it has not yet been determined whether this aftereffect can be transferred across sound frequencies and between ears. METHODOLOGY/PRINCIPAL FINDINGS: Two circles placed side by side were presented in alternation, producing apparent motion perception, and each onset was accompanied by a tone burst of a specific and unique frequency. After exposure to this visual apparent motion with tones for a few minutes, the tones became drivers for illusory motion perception. However, the aftereffect was observed only when the adapter and test tones were presented at the same frequency and to the same ear. CONCLUSIONS/SIGNIFICANCE: These findings suggest that the auditory processing underlying the establishment of novel audiovisual associations is selective, potentially but not necessarily indicating that this processing occurs at an early stage.

  3. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    Science.gov (United States)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.
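
    For context on what a motion cueing algorithm must do, the sketch below shows the classical "washout" ingredient: a high-pass filter that passes onset accelerations to the motion base and washes out sustained ones. It is a generic textbook washout, not the adaptive, optimal, or nonlinear algorithms described in the record above; the gain and break frequency are illustrative.

      # Generic second-order washout filter: pass onset cues, wash out sustained motion.
      import numpy as np
      from scipy import signal

      wn, zeta, gain = 1.0, 1.0, 0.8          # break frequency (rad/s), damping, scale
      # H(s) = gain * s^2 / (s^2 + 2*zeta*wn*s + wn^2)
      washout = signal.TransferFunction([gain, 0, 0], [1, 2 * zeta * wn, wn ** 2])

      t = np.linspace(0, 10, 1000)
      aircraft_accel = np.where(t > 1, 2.0, 0.0)        # sustained 2 m/s^2 step at t = 1 s
      _, sim_accel, _ = signal.lsim(washout, aircraft_accel, t)

      # sim_accel shows an onset cue near t = 1 s that decays back toward zero,
      # which is the transient the washout is meant to preserve.
      print(f"peak simulator cue: {sim_accel.max():.2f} m/s^2, final value: {sim_accel[-1]:.3f}")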

  4. Visual working memory contaminates perception.

    Science.gov (United States)

    Kang, Min-Suk; Hong, Sang Wook; Blake, Randolph; Woodman, Geoffrey F

    2011-10-01

    Indirect evidence suggests that the contents of visual working memory may be maintained within sensory areas early in the visual hierarchy. We tested this possibility using a well-studied motion repulsion phenomenon in which perception of one direction of motion is distorted when another direction of motion is viewed simultaneously. We found that observers misperceived the actual direction of motion of a single motion stimulus if, while viewing that stimulus, they were holding a different motion direction in visual working memory. Control experiments showed that none of a variety of alternative explanations could account for this repulsion effect induced by working memory. Our findings provide compelling evidence that visual working memory representations directly interact with the same neural mechanisms as those involved in processing basic sensory events.

  5. ZAG-Otolith: Modification of Otolith-Ocular Reflexes, Motion Perception and Manual Control during Variable Radius Centrifugation Following Space Flight

    Science.gov (United States)

    Wood, S. J.; Clarke, A. H.; Rupert, A. H.; Harm, D. L.; Clement, G. R.

    2009-01-01

    Two joint ESA-NASA studies are examining changes in otolith-ocular reflexes and motion perception following short duration space flights, and the operational implications of post-flight tilt-translation ambiguity for manual control performance. Vibrotactile feedback of tilt orientation is also being evaluated as a countermeasure to improve performance during a closed-loop nulling task. METHODS. Data is currently being collected on astronaut subjects during 3 preflight sessions and during the first 8 days after Shuttle landings. Variable radius centrifugation is utilized to elicit otolith reflexes in the lateral plane without concordant roll canal cues. Unilateral centrifugation (400 deg/s, 3.5 cm radius) stimulates one otolith positioned off-axis while the opposite side is centered over the axis of rotation. During this paradigm, roll-tilt perception is measured using a subjective visual vertical task and ocular counter-rolling is obtained using binocular video-oculography. During a second paradigm (216 deg/s, otolith-mediated reflexes correspond to one's ability to perform closed-loop nulling tasks following G-transitions, and whether manual control performance can be improved with vibrotactile feedback of orientation.

  6. Perceptually Uniform Motion Space.

    Science.gov (United States)

    Birkeland, Asmund; Turkay, Cagatay; Viola, Ivan

    2014-11-01

    Flow data is often visualized by animated particles inserted into a flow field. The velocity of a particle on the screen is typically linearly scaled by the velocities in the data. However, the perception of velocity magnitude in animated particles is not necessarily linear. We present a study on how different parameters affect relative motion perception. We have investigated the impact of four parameters. The parameters consist of speed multiplier, direction, contrast type and the global velocity scale. In addition, we investigated if multiple motion cues, and point distribution, affect the speed estimation. Several studies were executed to investigate the impact of each parameter. In the initial results, we noticed trends in scale and multiplier. Using the trends for the significant parameters, we designed a compensation model, which adjusts the particle speed to compensate for the effect of the parameters. We then performed a second study to investigate the performance of the compensation model. From the second study we detected a constant estimation error, which we adjusted for in the last study. In addition, we connect our work to established theories in psychophysics by comparing our model to a model based on Stevens' Power Law.
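
    A hedged sketch of the kind of power-law compensation the record above relates to Stevens' Power Law: if perceived speed grows as a power of displayed speed, an inverse-power mapping can make equal ratios in the data appear as equal perceived ratios. The exponent and reference speed below are placeholders, not the parameters fitted in the study.

      # Stevens-power-law style speed compensation for animated particles.
      import numpy as np

      def compensate_speed(v_data, beta=0.8, v_ref=1.0):
          """Map data velocity magnitudes to display speeds so perceived speed ~ v_data."""
          return v_ref * (v_data / v_ref) ** (1.0 / beta)

      v_data = np.array([0.25, 0.5, 1.0, 2.0, 4.0])     # flow-field speeds (data units)
      v_display = compensate_speed(v_data)
      perceived = v_display ** 0.8                      # Stevens' law with the same exponent
      print(np.round(perceived / perceived[2], 2))      # ratios now match the data ratios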

  7. Binocular eye movement control and motion perception: what is being tracked?

    Science.gov (United States)

    van der Steen, Johannes; Dits, Joyce

    2012-10-19

    We investigated under what conditions humans can make independent slow phase eye movements. The ability to make independent movements of the two eyes generally is attributed to a few specialized lateral-eyed animal species, for example chameleons. In our study, we showed that humans also can move the eyes in different directions. To maintain binocular retinal correspondence, independent slow phase movements of each eye are produced. We used the scleral search coil method to measure binocular eye movements in response to dichoptically viewed visual stimuli oscillating in orthogonal directions. Correlated stimuli led to orthogonal slow eye movements, while the binocularly perceived motion was the vector sum of the motion presented to each eye. The importance of binocular fusion on the independency of the movements of the two eyes was investigated with anti-correlated stimuli. The global motion pattern of anti-correlated dichoptic stimuli was perceived as an oblique oscillatory motion and resulted in a conjugate oblique motion of the eyes. We propose that the ability to make independent slow phase eye movements in humans is used to maintain binocular retinal correspondence. Eye-of-origin and binocular information are used during the processing of binocular visual information, and it is decided at an early stage whether binocular or monocular motion information and independent slow phase eye movements of each eye are produced during binocular tracking.

  8. GABA shapes the dynamics of bistable perception.

    Science.gov (United States)

    van Loon, Anouk M; Knapen, Tomas; Scholte, H Steven; St John-Saaltink, Elexa; Donner, Tobias H; Lamme, Victor A F

    2013-05-06

    Sometimes, perception fluctuates spontaneously between two distinct interpretations of a constant sensory input. These bistable perceptual phenomena provide a unique window into the neural mechanisms that create the contents of conscious perception. Models of bistable perception posit that mutual inhibition between stimulus-selective neural populations in visual cortex plays a key role in these spontaneous perceptual fluctuations. However, a direct link between neural inhibition and bistable perception has not yet been established experimentally. Here, we link perceptual dynamics in three distinct bistable visual illusions (binocular rivalry, motion-induced blindness, and structure from motion) to measurements of gamma-aminobutyric acid (GABA) concentrations in human visual cortex (as measured with magnetic resonance spectroscopy) and to pharmacological stimulation of the GABAA receptor by means of lorazepam. As predicted by a model of neural interactions underlying bistability, both higher GABA concentrations in visual cortex and lorazepam administration induced slower perceptual dynamics, as reflected in a reduced number of perceptual switches and a lengthening of percept durations. Thus, we show that GABA, the main inhibitory neurotransmitter, shapes the dynamics of bistable perception. These results pave the way for future studies into the competitive neural interactions across the visual cortical hierarchy that elicit conscious perception. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Action Video Games Improve Direction Discrimination of Parafoveal Translational Global Motion but Not Reaction Times.

    Science.gov (United States)

    Pavan, Andrea; Boyce, Matthew; Ghin, Filippo

    2016-10-01

    Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in the fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in terms of distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field, rather than response speed. © The Author(s) 2016.
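
    As context for the stimulus, the sketch below shows how a random dot kinematogram frame update with a motion coherence parameter is typically implemented: a proportion of dots steps in the signal direction and the rest step in random directions. Dot count, speed, and the aperture handling are illustrative choices, not the study's parameters.

      # Random dot kinematogram (RDK) frame update with a coherence parameter.
      import numpy as np

      rng = np.random.default_rng(1)

      def rdk_step(xy, coherence, direction_deg, speed=0.02, radius=1.0):
          n = xy.shape[0]
          signal_dots = rng.random(n) < coherence                  # dots moving coherently
          angles = np.where(signal_dots,
                            np.deg2rad(direction_deg),
                            rng.uniform(0, 2 * np.pi, n))          # noise dots: random headings
          xy = xy + speed * np.column_stack([np.cos(angles), np.sin(angles)])
          # re-plot dots that drift out of the circular aperture at a new random position
          outside = np.linalg.norm(xy, axis=1) > radius
          xy[outside] = rng.uniform(-radius, radius, (outside.sum(), 2))
          return xy

      dots = rng.uniform(-1, 1, (100, 2))
      for _ in range(60):                                          # ~1 s of frames at 60 Hz
          dots = rdk_step(dots, coherence=0.3, direction_deg=90)   # 30% upward coherence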

  10. New human-centered linear and nonlinear motion cueing algorithms for control of simulator motion systems

    Science.gov (United States)

    Telban, Robert J.

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input

  11. Reversed stereo depth and motion direction with anti-correlated stimuli.

    Science.gov (United States)

    Read, J C; Eagle, R A

    2000-01-01

    We used anti-correlated stimuli to compare the correspondence problem in stereo and motion. Subjects performed a two-interval forced-choice disparity/motion direction discrimination task for different displacements. For anti-correlated 1d band-pass noise, we found weak reversed depth and motion. With 2d anti-correlated stimuli, stereo performance was impaired, but the perception of reversed motion was enhanced. We can explain the main features of our data in terms of channels tuned to different spatial frequencies and orientation. We suggest that a key difference between the solution of the correspondence problem by the motion and stereo systems concerns the integration of information at different orientations.

  12. Two-year-olds with autism orient to nonsocial contingencies rather than biological motion

    Science.gov (United States)

    Klin, Ami; Lin, David J.; Gorrindo, Phillip; Ramsay, Gordon; Jones, Warren

    2009-01-01

    Typically-developing human infants preferentially attend to biological motion within the first days of life1. This ability is highly conserved across species2,3 and is believed to be critical for filial attachment and for detection of predators4. The neural underpinnings of biological motion perception are overlapping with brain regions involved in perception of basic social signals such as facial expression and gaze direction5, and preferential attention to biological motion is seen as a precursor to the capacity for attributing intentions to others6. However, in a serendipitous observation7, we recently found that an infant with autism failed to recognize point-light displays of biological motion but was instead highly sensitive to the presence of a non-social, physical contingency that occurred within the stimuli by chance. This observation raised the hypothesis that perception of biological motion may be altered in children with autism from a very early age, with cascading consequences for both social development and for the lifelong impairments in social interaction that are a hallmark of autism spectrum disorders8. Here we show that two-year-olds with autism fail to orient towards point-light displays of biological motion, and that their viewing behavior when watching these point-light displays can be explained instead as a response to non-social, physical contingencies that are disregarded by control children. This observation has far-reaching implications for understanding the altered neurodevelopmental trajectory of brain specialization in autism9. PMID:19329996

  13. Vibro-Perception of Optical Bio-Inspired Fiber-Skin.

    Science.gov (United States)

    Li, Tao; Zhang, Sheng; Lu, Guo-Wei; Sunami, Yuta

    2018-05-12

    In this research, based on the principle of optical interferometry, the Mach-Zehnder and Optical Phase-locked Loop (OPLL) vibro-perception systems of bio-inspired fiber-skin are designed to mimic the tactile perception of human skin. The fiber-skin is made of an optical fiber embedded in a silicone elastomer. The optical fiber is an instinctive and alternative sensor for tactile perception with high sensitivity and reliability, as well as low cost and low susceptibility to magnetic interference. The silicone elastomer serves as a substrate with high flexibility and biocompatibility, and the optical fiber core serves as the vibro-perception sensor to detect physical motions like tapping and sliding. According to the experimental results, the designed optical fiber-skin demonstrates the ability to detect physical motions like tapping and sliding in both the Mach-Zehnder and OPLL vibro-perception systems. For the direct contact condition, the OPLL vibro-perception system shows better performance compared with the Mach-Zehnder vibro-perception system. However, the Mach-Zehnder vibro-perception system is preferable to the OPLL system in the indirect contact experiment. In summary, the fiber-skin is validated to have a light-touch character and excellent repeatability, which makes it highly suitable for skin-mimicking sensing.

  14. Perception-oriented methodology for robust motion estimation design

    NARCIS (Netherlands)

    Heinrich, A.; Vleuten, van der R.J.; Haan, de G.

    2014-01-01

    Optimizing a motion estimator (ME) for picture rate conversion is challenging. This is because there are many types of MEs and, within each type, many parameters, which makes subjective assessment of all the alternatives impractical. To solve this problem, we propose an automatic design methodology

  15. Perception of animacy from the motion of a single sound object

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Høll; Vuust, Peter; Wallentin, Mikkel

    2015-01-01

    Research in the visual modality has shown that the presence of certain dynamics in the motion of an object has a strong effect on whether or not the entity is perceived as animate. Cues for animacy are, among others, self-propelled motion and direction changes that are seemingly not caused...... that a change in the velocity of motion is positively correlated with perceived animacy, and changes in direction were found to influence animacy judgment as well. This suggests that an ability to facilitate and sustain self-movement is perceived as a living quality not only in the visual domain...

  16. Beta, but not gamma, band oscillations index visual form-motion integration.

    Directory of Open Access Journals (Sweden)

    Charles Aissani

    Full Text Available Electrophysiological oscillations in different frequency bands co-occur with perceptual, motor and cognitive processes but their function and respective contributions to these processes need further investigation. Here, we recorded MEG signals and sought percept-related modulations of alpha, beta and gamma band activity during a perceptual form/motion integration task. Participants reported their bound or unbound perception of ambiguously moving displays that could either be seen as a whole square-like shape moving along a Lissajous figure (bound percept) or as pairs of bars oscillating independently along cardinal axes (unbound percept). We found that beta (15-25 Hz), but not gamma (55-85 Hz), oscillations index perceptual states at the individual and group level. The gamma band activity found in the occipital lobe, although significantly higher during visual stimulation than during baseline, is similar in all perceptual states. Similarly, decreased alpha activity during visual stimulation is not different for the different percepts. Trial-by-trial classification of perceptual reports based on beta band oscillations was significant in most observers, further supporting the view that modulation of beta power reliably indexes perceptual integration of form/motion stimuli, even at the individual level.
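
    The trial-by-trial classification mentioned above can be sketched as band-limiting the signal to the beta range, taking log power per trial, and classifying the reported percept. The sampling rate, data shapes, synthetic data, and the use of logistic regression below are assumptions, not the authors' MEG pipeline.

      # Beta-band (15-25 Hz) log power per trial, used to classify perceptual reports.
      import numpy as np
      from scipy.signal import butter, filtfilt
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      fs = 600.0                                   # sampling rate (Hz), assumed
      b, a = butter(4, [15, 25], btype="bandpass", fs=fs)

      def beta_log_power(trials):
          """trials: (n_trials, n_samples) sensor time series -> (n_trials, 1) features."""
          filtered = filtfilt(b, a, trials, axis=1)
          return np.log(np.mean(filtered ** 2, axis=1, keepdims=True))

      # Synthetic stand-in: 200 trials, percept label 0/1, beta power slightly higher for label 1.
      rng = np.random.default_rng(0)
      labels = rng.integers(0, 2, 200)
      trials = rng.standard_normal((200, 1200))
      trials += (0.3 * labels[:, None]) * np.sin(2 * np.pi * 20 * np.arange(1200) / fs)

      scores = cross_val_score(LogisticRegression(), beta_log_power(trials), labels, cv=5)
      print("decoding accuracy:", scores.mean())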

  17. Implied motion language can influence visual spatial memory.

    Science.gov (United States)

    Vinson, David W; Engelen, Jan; Zwaan, Rolf A; Matlock, Teenie; Dale, Rick

    2017-07-01

    How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it actually is, can be explained as perceptual achievements that are driven by our ability to anticipate future events. In two experiments, we tested whether the prior presentation of motion language influences visual spatial memory in ways that afford greater perceptual prediction. Experiment 1 showed that motion language influenced judgments for the spatial memory of an object beyond the known effects of implied motion present in the image itself. Experiment 2 replicated this finding. Our findings support a theory of perception as prediction.

  18. Comparison of Flight Simulators Based on Human Motion Perception Metrics

    Science.gov (United States)

    Valente Pais, Ana R.; Correia Gracio, Bruno J.; Kelly, Lon C.; Houck, Jacob A.

    2015-01-01

    In flight simulation, motion filters are used to transform aircraft motion into simulator motion. When looking for the best match between visual and inertial amplitude in a simulator, researchers have found that there is a range of inertial amplitudes, rather than a single inertial value, that is perceived by subjects as optimal. This zone, hereafter referred to as the optimal zone, seems to correlate to the perceptual coherence zones measured in flight simulators. However, no studies were found in which these two zones were compared. This study investigates the relation between the optimal and the coherence zone measurements within and between different simulators. Results show that for the sway axis, the optimal zone lies within the lower part of the coherence zone. In addition, it was found that, whereas the width of the coherence zone depends on the visual amplitude and frequency, the width of the optimal zone remains constant.

  19. Imagined Spaces: Motion Graphics in Performance Spaces

    DEFF Research Database (Denmark)

    Steijn, Arthur

    2016-01-01

    through theories drawn from two different fields. The first is from the field of direct visual perception as explored and described by the American psychologist J. J. Gibson. I supplement this angle by introducing relevant new media theories extracted from writings from L. Manovich. I also briefly...... introduce a second theoretic perspective from neuroscience, especially neurological theories related to aesthetic experiences as studied, categorized and explained by V. S. Ramachandran. Key Words: Motion graphics, video projections, space, direct visual perception, design process, new media, neuroscience...

  20. Performance characterization of Watson Ahumada motion detector using random dot rotary motion stimuli.

    Directory of Open Access Journals (Sweden)

    Siddharth Jain

    Full Text Available The performance of Watson & Ahumada's model of human visual motion sensing is compared against human psychophysical performance. The stimulus consists of random dots undergoing rotary motion, displayed in a circular annulus. The model matches psychophysical observer performance with respect to most parameters. It is able to replicate some key psychophysical findings such as invariance of observer performance to dot density in the display, and decrease of observer performance with frame duration of the display. Associated with the concept of rotary motion is the notion of a center about which rotation occurs. One might think that for accurate estimation of rotary motion in the display, this center must be accurately known. A simple vector analysis reveals that this need not be the case. Numerical simulations confirm this result, and may explain the position invariance of MST(d) cells. Position invariance is the experimental finding that rotary motion sensitive cells are insensitive to where in their receptive field rotation occurs. When all the dots in the display are randomly drawn from a uniform distribution, illusory rotary motion is perceived. This case was investigated by Rose & Blake previously, who termed the illusory rotary motion the omega effect. Two important experimental findings are reported concerning this effect. First, although the display of random dots evokes perception of rotary motion, the direction of motion perceived does not depend on what dot pattern is shown. Second, the time interval between spontaneous flips in perceived direction is lognormally distributed (mode approximately 2 s). These findings suggest the omega effect fits in the category of a typical bistable illusion, and therefore the processes that give rise to this illusion may be the same processes that underlie many other bistable phenomena.
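
    The vector-analysis point above, that the center of rotation need not be known accurately to recover rotary motion, can be checked numerically: the sum over dots of the 2-D cross product between position (taken relative to a deliberately misplaced center) and velocity still has the sign of the true angular velocity, provided the dots surround the true center. The sketch below assumes a simple uniform dot field and noiseless velocities.

      # Rotation-direction estimate is robust to a misplaced center of rotation.
      import numpy as np

      rng = np.random.default_rng(2)
      omega = -1.0                                       # true angular velocity (clockwise)
      true_center = np.array([0.0, 0.0])
      assumed_center = np.array([0.4, -0.3])             # deliberately wrong

      dots = true_center + rng.uniform(-1, 1, (500, 2))  # dot positions in the field
      rel_true = dots - true_center
      velocity = omega * np.column_stack([-rel_true[:, 1], rel_true[:, 0]])   # v = omega x r

      rel_assumed = dots - assumed_center
      cross = rel_assumed[:, 0] * velocity[:, 1] - rel_assumed[:, 1] * velocity[:, 0]
      print("estimated direction:", np.sign(cross.sum()), " true:", np.sign(omega))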

  1. Further explorations of the facing bias in biological motion perception: perspective cues, observer sex, and response times.

    Directory of Open Access Journals (Sweden)

    Ben Schouten

    Full Text Available The human visual system has evolved to be highly sensitive to visual information about other persons and their movements as is illustrated by the effortless perception of point-light figures or 'biological motion'. When presented orthographically, a point-light walker is interpreted in two anatomically plausible ways: As 'facing the viewer' or as 'facing away' from the viewer. However, human observers show a 'facing bias': They perceive such a point-light walker as facing towards them in about 70-80% of the cases. In studies exploring the role of social and biological relevance as a possible account for the facing bias, we found a 'figure gender effect': Male point-light figures elicit a stronger facing bias than female point-light figures. Moreover, we also found an 'observer gender effect': The 'figure gender effect' was stronger for male than for female observers. In the present study we presented to 11 males and 11 females point-light walkers of which, very subtly, the perspective information was manipulated by modifying the earlier reported 'perspective technique'. Proportions of 'facing the viewer' responses and reaction times were recorded. Results show that human observers, even in the absence of local shape or size cues, easily pick up on perspective cues, confirming recent demonstrations of high visual sensitivity to cues on whether another person is potentially approaching. We also found a consistent difference in how male and female observers respond to stimulus variations (figure gender or perspective cues that cause variations in the perceived in-depth orientation of a point-light walker. Thus, the 'figure gender effect' is possibly caused by changes in the relative locations and motions of the dots that the perceptual system tends to interpret as perspective cues. Third, reaction time measures confirmed the existence of the facing bias and recent research showing faster detection of approaching than receding biological motion.

  2. How to use body tilt for the simulation of linear self motion

    NARCIS (Netherlands)

    Groen, E.L.; Bles, W.

    2004-01-01

    We examined to what extent body tilt may augment the perception of visually simulated linear self acceleration. Fourteen subjects judged visual motion profiles of fore-aft motion at four different frequencies between 0.04-0.33 Hz, and at three different acceleration amplitudes (0.44, 0.88 and 1.76

  3. The Independent and Shared Mechanisms of Intrinsic Brain Dynamics: Insights From Bistable Perception

    Directory of Open Access Journals (Sweden)

    Teng Cao

    2018-04-01

    Full Text Available In bistable perception, constant input leads to alternating perception. The dynamics of the changing perception reflects the intrinsic dynamic properties of the "unconscious inferential" process in the brain. Under the same condition, individuals differ in how fast they experience the perceptual alternation. In this study, testing many forms of bistable perception in a large number of observers, we investigated the key question of whether there is a general and common mechanism or multiple and independent mechanisms that control the dynamics of the inferential brain. Bistable phenomena tested include binocular rivalry, vase-face, Necker cube, moving plaid, motion induced blindness, biological motion, spinning dancer, rotating cylinder, Lissajous-figure, rolling wheel, and translating diamond. Switching dynamics for each bistable percept were measured in 100 observers. Results show that the switching rates of subsets of bistable percepts are highly correlated. The clustering of dynamic properties of some bistable phenomena, but not an overall general control of switching dynamics, implies that the brain's inferential processes are both shared and independent – faster in constructing 3D structure from motion does not mean faster in integrating components into objects.
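
    The analysis summarized above can be sketched as a pairwise correlation of per-observer switching rates across phenomena, with subsets of phenomena emerging as blocks of high correlation. The data below are synthetic and the two-factor structure is imposed purely for illustration.

      # Pairwise correlation of switching rates across bistable phenomena.
      import numpy as np

      rng = np.random.default_rng(3)
      n_observers = 100
      phenomena = ["binocular rivalry", "Necker cube", "moving plaid",
                   "structure from motion", "motion-induced blindness"]

      # Two latent "speeds" per observer generate correlated subsets of phenomena.
      latent = rng.standard_normal((n_observers, 2))
      loadings = np.array([[1.0, 0.0],    # rivalry       -> factor 1
                           [0.9, 0.1],    # Necker cube   -> factor 1
                           [0.1, 0.9],    # moving plaid  -> factor 2
                           [0.0, 1.0],    # SFM           -> factor 2
                           [0.5, 0.5]])   # MIB           -> mixed
      rates = latent @ loadings.T + 0.3 * rng.standard_normal((n_observers, len(phenomena)))

      corr = np.corrcoef(rates, rowvar=False)   # 5 x 5 correlation matrix across observers
      print(np.round(corr, 2))                  # high within-cluster, low between-cluster values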

  4. Neural Integration of Information Specifying Human Structure from Form, Motion, and Depth

    Science.gov (United States)

    Jackson, Stuart; Blake, Randolph

    2010-01-01

    Recent computational models of biological motion perception operate on ambiguous two-dimensional representations of the body (e.g., snapshots, posture templates) and contain no explicit means for disambiguating the three-dimensional orientation of a perceived human figure. Are there neural mechanisms in the visual system that represent a moving human figure’s orientation in three dimensions? To isolate and characterize the neural mechanisms mediating perception of biological motion, we used an adaptation paradigm together with bistable point-light (PL) animations whose perceived direction of heading fluctuates over time. After exposure to a PL walker with a particular stereoscopically defined heading direction, observers experienced a consistent aftereffect: a bistable PL walker, which could be perceived in the adapted orientation or reversed in depth, was perceived predominantly reversed in depth. A phase-scrambled adaptor produced no aftereffect, yet when adapting and test walkers differed in size or appeared on opposite sides of fixation aftereffects did occur. Thus, this heading direction aftereffect cannot be explained by local, disparity-specific motion adaptation, and the properties of scale and position invariance imply higher-level origins of neural adaptation. Nor is disparity essential for producing adaptation: when suspended on top of a stereoscopically defined, rotating globe, a context-disambiguated “globetrotter” was sufficient to bias the bistable walker’s direction, as were full-body adaptors. In sum, these results imply that the neural signals supporting biomotion perception integrate information on the form, motion, and three-dimensional depth orientation of the moving human figure. Models of biomotion perception should incorporate mechanisms to disambiguate depth ambiguities in two-dimensional body representations. PMID:20089892

  5. Neurons compute internal models of the physical laws of motion.

    Science.gov (United States)

    Angelaki, Dora E; Shaikh, Aasef G; Green, Andrea M; Dickman, J David

    2004-07-29

    A critical step in self-motion perception and spatial awareness is the integration of motion cues from multiple sensory organs that individually do not provide an accurate representation of the physical world. One of the best-studied sensory ambiguities is found in visual processing, and arises because of the inherent uncertainty in detecting the motion direction of an untextured contour moving within a small aperture. A similar sensory ambiguity arises in identifying the actual motion associated with linear accelerations sensed by the otolith organs in the inner ear. These internal linear accelerometers respond identically during translational motion (for example, running forward) and gravitational accelerations experienced as we reorient the head relative to gravity (that is, head tilt). Using new stimulus combinations, we identify here cerebellar and brainstem motion-sensitive neurons that compute a solution to the inertial motion detection problem. We show that the firing rates of these populations of neurons reflect the computations necessary to construct an internal model representation of the physical equations of motion.
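
    The otolith ambiguity described above can be written down in a one-axis toy form: the otoliths sense a gravito-inertial quantity that confounds the gravity component due to head tilt with linear acceleration, so an independent (for example canal-derived) tilt estimate is needed to recover translation. The sign convention and numbers below are simplifying assumptions, not the neural computation itself.

      # One-axis tilt/translation ambiguity and its disambiguation with a tilt estimate.
      import numpy as np

      g = 9.81

      def otolith_signal(tilt_rad, linear_accel):
          """Specific force along one head axis (simplified sign convention)."""
          return g * np.sin(tilt_rad) - linear_accel

      # Two different situations produce (nearly) the same otolith signal:
      print(otolith_signal(np.deg2rad(5.9), 0.0))      # ~1.0 m/s^2 from pure tilt
      print(otolith_signal(0.0, -1.0))                 # ~1.0 m/s^2 from pure translation

      def recover_translation(f_sensed, tilt_estimate_rad):
          """Internal-model style disambiguation using an independent tilt estimate."""
          return g * np.sin(tilt_estimate_rad) - f_sensed

      print(recover_translation(1.0, 0.0))             # -> -1.0: backward acceleration
      print(recover_translation(1.0, np.deg2rad(5.9))) # -> ~0.0: no translation, just tilt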

  6. Simple 3-D stimulus for motion parallax and its simulation.

    Science.gov (United States)

    Ono, Hiroshi; Chornenkyy, Yevgen; D'Amour, Sarah

    2013-01-01

    Simulation of a given stimulus situation should produce the same perception as the original. Rogers et al (2009, Perception, 38, 907-911) simulated Wheeler's (1982, PhD thesis, Rutgers University, NJ) motion parallax stimulus and obtained quite different perceptions. Wheeler's observers were unable to reliably report the correct direction of depth, whereas Rogers's were. With three experiments we explored the possible reasons for the discrepancy. Our results suggest that Rogers was able to see depth from the simulation partly due to his experience seeing depth with random dot surfaces.

  7. Implied motion because of instability in Hokusai Manga activates the human motion-sensitive extrastriate visual cortex: an fMRI study of the impact of visual art.

    Science.gov (United States)

    Osaka, Naoyuki; Matsuyoshi, Daisuke; Ikeda, Takashi; Osaka, Mariko

    2010-03-10

    The recent development of cognitive neuroscience has invited inference about the neurosensory events underlying the experience of visual arts involving implied motion. We report a functional magnetic resonance imaging study demonstrating activation of the human extrastriate motion-sensitive cortex by static images showing implied motion because of instability. We used static line-drawing cartoons of humans by Hokusai Katsushika (called 'Hokusai Manga'), an outstanding Japanese cartoonist as well as a famous Ukiyoe artist. We found that 'Hokusai Manga' images with implied motion, depicting human bodies engaged in a challenging tonic posture, significantly activated the motion-sensitive visual cortex, including MT+, in the human extrastriate cortex, while an illustration that does not imply motion, for either humans or objects, did not activate these areas under the same tasks. We conclude that the motion-sensitive extrastriate cortex would be a critical region for perception of implied motion in instability.

  8. The difference between the perception of absolute and relative motion: A reaction time study

    NARCIS (Netherlands)

    J.B.J. Smeets (Jeroen); E. Brenner (Eli)

    1994-01-01

    We used a reaction-time paradigm to examine the extent to which motion detection depends on relative motion. In the absence of relative motion, the responses could be described by a simple model based on the detection of a fixed change in position. If relative motion was present, the

  9. A review on otolith models in human perception.

    Science.gov (United States)

    Asadi, Houshyar; Mohamed, Shady; Lim, Chee Peng; Nahavandi, Saeid

    2016-08-01

    The vestibular system, which consists of the semicircular canals and otolith organs, is the main sensory system mammals use to perceive rotational and linear motions. Identifying the most suitable and consistent mathematical model of the vestibular system is important for research related to driving perception. An appropriate vestibular model is essential for implementation of the Motion Cueing Algorithm (MCA) for motion simulation purposes, because the quality of the MCA is directly dependent on the vestibular model used. In this review, the history and development process of otolith models are presented and analyzed. The otolith organs can detect linear acceleration and transmit information about sensed applied specific forces on the human body. The main purpose of this review is to determine the appropriate otolith models that agree with theoretical analyses and experimental results as well as provide reliable estimation for the vestibular system functions. Formulating and selecting the most appropriate mathematical model of the vestibular system is important to ensure successful human perception modelling and simulation when implementing the model into the MCA for motion analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
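
    Otolith models of the kind surveyed here are usually summarized as low-order transfer functions from specific force to afferent or perceived output; a generic form (the gain K and the time constants are placeholders, not values endorsed by the review) is:

        \[ H(s) \;=\; \frac{\hat{f}(s)}{f(s)} \;=\; K\,\frac{1 + \tau_a s}{(1 + \tau_1 s)(1 + \tau_2 s)}. \]

    Within an MCA, a model of this form is used to predict the specific force a simulator rider will actually perceive, so that the cueing can be optimized against perceived rather than physical motion.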

  10. The influence of sleep deprivation and oscillating motion on sleepiness, motion sickness, and cognitive and motor performance.

    Science.gov (United States)

    Kaplan, Janna; Ventura, Joel; Bakshi, Avijit; Pierobon, Alberto; Lackner, James R; DiZio, Paul

    2017-01-01

    Our goal was to determine how sleep deprivation, nauseogenic motion, and a combination of motion and sleep deprivation affect cognitive vigilance, visual-spatial perception, motor learning and retention, and balance. We exposed four groups of subjects to different combinations of normal 8 h sleep or 4 h sleep for two nights combined with testing under stationary conditions or during 0.28 Hz horizontal linear oscillation. On the two days following controlled sleep, all subjects underwent four test sessions per day that included evaluations of fatigue, motion sickness, vigilance, perceptual discrimination, perceptual learning, motor performance and learning, and balance. Sleep loss and exposure to linear oscillation had additive or multiplicative relationships to sleepiness, motion sickness severity, and decreases in vigilance and in perceptual discrimination and learning. Sleep loss also decelerated the rate of adaptation to motion sickness over repeated sessions. Sleep loss degraded the capacity to compensate for novel robotically induced perturbations of reaching movements but did not adversely affect adaptive recovery of accurate reaching. Overall, tasks requiring substantial attention to cognitive and motor demands were degraded more than tasks that were more automatic. Our findings indicate that predicting performance needs to take into account, in addition to sleep loss, the attentional demands and novelty of tasks, the motion environment in which individuals will be performing, and their prior susceptibility to motion sickness during exposure to provocative motion stimulation. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Babies in traffic: infant vocalizations and listener sex modulate auditory motion perception.

    Science.gov (United States)

    Neuhoff, John G; Hamilton, Grace R; Gittleson, Amanda L; Mejia, Adolfo

    2014-04-01

    Infant vocalizations and "looming sounds" are classes of environmental stimuli that are critically important to survival but can have dramatically different emotional valences. Here, we simultaneously presented listeners with a stationary infant vocalization and a 3D virtual looming tone for which listeners made auditory time-to-arrival judgments. Negatively valenced infant cries produced more cautious (anticipatory) estimates of auditory arrival time of the tone over a no-vocalization control. Positively valenced laughs had the opposite effect, and across all conditions, men showed smaller anticipatory biases than women. In Experiment 2, vocalization-matched vocoded noise stimuli did not influence concurrent auditory time-to-arrival estimates compared with a control condition. In Experiment 3, listeners estimated the egocentric distance of a looming tone that stopped before arriving. For distant stopping points, women estimated the stopping point as closer when the tone was presented with an infant cry than when it was presented with a laugh. For near stopping points, women showed no differential effect of vocalization type. Men did not show differential effects of vocalization type at either distance. Our results support the idea that both the sex of the listener and the emotional valence of infant vocalizations can influence auditory motion perception and can modulate motor responses to other behaviorally relevant environmental sounds. We also find support for previous work that shows sex differences in emotion processing are diminished under conditions of higher stress.

  12. Frequency-Domain Joint Motion and Disparity Estimation Using Steerable Filters

    Directory of Open Access Journals (Sweden)

    Dimitrios Alexiadis

    2018-02-01

    Full Text Available In this paper, the problem of joint disparity and motion estimation from stereo image sequences is formulated in the spatiotemporal frequency domain, and a novel steerable filter-based approach is proposed. Our rationale behind coupling the two problems is that according to experimental evidence in the literature, the biological visual mechanisms for depth and motion are not independent of each other. Furthermore, our motivation to study the problem in the frequency domain and search for a filter-based solution is based on the fact that, according to early experimental studies, the biological visual mechanisms can be modelled based on frequency-domain or filter-based considerations, for both the perception of depth and the perception of motion. The proposed framework constitutes the first attempt to solve the joint estimation problem through a filter-based solution, based on frequency-domain considerations. Thus, the presented ideas provide a new direction of work and could be the basis for further developments. From an algorithmic point of view, we additionally extend state-of-the-art ideas from the disparity estimation literature to handle the joint disparity-motion estimation problem and formulate an algorithm that is evaluated through a number of experimental results. Comparisons with state-of-the-art methods demonstrate the accuracy of the proposed approach.
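
    The frequency-domain rationale rests on two standard facts (stated in our notation; the paper's exact formulation may differ): a pattern translating at image velocity (v_x, v_y) concentrates its spatiotemporal energy on a plane through the origin, and a horizontal disparity d between the left and right images appears as a phase shift between corresponding frequency components:

        \[ \omega_t + v_x\,\omega_x + v_y\,\omega_y = 0, \qquad \Delta\phi(\omega_x) = d\,\omega_x . \]

    Oriented (steerable) spatiotemporal filters can therefore be tuned jointly to a candidate velocity and a candidate disparity, which is what makes a single filter bank a natural substrate for the coupled estimation problem.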

  13. Perception of Animacy from the Motion of a Single Sound Object.

    Science.gov (United States)

    Nielsen, Rasmus Høll; Vuust, Peter; Wallentin, Mikkel

    2015-02-01

    Research in the visual modality has shown that the presence of certain dynamics in the motion of an object has a strong effect on whether or not the entity is perceived as animate. Cues for animacy are, among others, self-propelled motion and direction changes that are seemingly not caused by entities external to, or in direct contact with, the moving object. The present study aimed to extend this research into the auditory domain by determining if similar dynamics could influence the perceived animacy of a sound source. In two experiments, participants were presented with single, synthetically generated 'mosquito' sounds moving along trajectories in space, and asked to rate how certain they were that each sound-emitting entity was alive. At a random point on a linear motion trajectory, the sound source would deviate from its initial path and speed. Results confirm findings from the visual domain that a change in the velocity of motion is positively correlated with perceived animacy, and changes in direction were found to influence animacy judgment as well. This suggests that an ability to facilitate and sustain self-movement is perceived as a living quality not only in the visual domain, but in the auditory domain as well. © 2015 SAGE Publications.

  14. Self versus environment motion in postural control.

    Directory of Open Access Journals (Sweden)

    Kalpana Dokka

    2010-02-01

    Full Text Available To stabilize our position in space we use visual information as well as non-visual physical motion cues. However, visual cues can be ambiguous: visually perceived motion may be caused by self-movement, movement of the environment, or both. The nervous system must combine the ambiguous visual cues with noisy physical motion cues to resolve this ambiguity and control our body posture. Here we have developed a Bayesian model that formalizes how the nervous system could solve this problem. In this model, the nervous system combines the sensory cues to estimate the movement of the body. We analytically demonstrate that, as long as visual stimulation is fast in comparison to the uncertainty in our perception of body movement, the optimal strategy is to weight visually perceived movement velocities proportional to a power law. We find that this model accounts for the nonlinear influence of experimentally induced visual motion on human postural behavior both in our data and in previously published results.

  15. Direction detection thresholds of passive self-motion in artistic gymnasts.

    Science.gov (United States)

    Hartmann, Matthias; Haller, Katia; Moser, Ivan; Hossner, Ernst-Joachim; Mast, Fred W

    2014-04-01

    In this study, we compared direction detection thresholds of passive self-motion in the dark between artistic gymnasts and controls. Twenty-four professional female artistic gymnasts (ranging from 7 to 20 years) and age-matched controls were seated on a motion platform and asked to discriminate the direction of angular (yaw, pitch, roll) and linear (leftward-rightward) motion. Gymnasts showed lower thresholds for the linear leftward-rightward motion. Interestingly, there was no difference for the angular motions. These results show that the outstanding self-motion abilities in artistic gymnasts are not related to an overall higher sensitivity in self-motion perception. With respect to vestibular processing, our results suggest that gymnastic expertise is exclusively linked to superior interpretation of otolith signals when no change in canal signals is present. In addition, thresholds were overall lower for the older (14-20 years) than for the younger (7-13 years) participants, indicating the maturation of vestibular sensitivity from childhood to adolescence.

  16. Peripheral vision of youths with low vision: motion perception, crowding, and visual search.

    Science.gov (United States)

    Tadin, Duje; Nyquist, Jeffrey B; Lusk, Kelly E; Corn, Anne L; Lappin, Joseph S

    2012-08-24

    Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10-17) and low vision (n = 24, ages 9-18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function.

  17. Age-related changes in perception of movement in driving scenes.

    Science.gov (United States)

    Lacherez, Philippe; Turner, Laura; Lester, Robert; Burns, Zoe; Wood, Joanne M

    2014-07-01

    Age-related changes in motion sensitivity have been found to relate to reductions in various indices of driving performance and safety. The aim of this study was to investigate the basis of this relationship in terms of determining which aspects of motion perception are most relevant to driving. Participants included 61 regular drivers (age range 22-87 years). Visual performance was measured binocularly. Measures included visual acuity, contrast sensitivity and motion sensitivity assessed using four different approaches: (1) threshold minimum drift rate for a drifting Gabor patch, (2) Dmin from a random dot display, (3) threshold coherence from a random dot display, and (4) threshold drift rate for a second-order (contrast modulated) sinusoidal grating. Participants then completed the Hazard Perception Test (HPT) in which they were required to identify moving hazards in videos of real driving scenes, and also a Direction of Heading task (DOH) in which they identified deviations from normal lane keeping in brief videos of driving filmed from the interior of a vehicle. In bivariate correlation analyses, all motion sensitivity measures significantly declined with age. Motion coherence thresholds, and minimum drift rate threshold for the first-order stimulus (Gabor patch) both significantly predicted HPT performance even after controlling for age, visual acuity and contrast sensitivity. Bootstrap mediation analysis showed that individual differences in DOH accuracy partly explained these relationships, where those individuals with poorer motion sensitivity on the coherence and Gabor tests showed decreased ability to perceive deviations in motion in the driving videos, which related in turn to their ability to detect the moving hazards. The ability to detect subtle movements in the driving environment (as determined by the DOH task) may be an important contributor to effective hazard perception, and is associated with age, and an individual's performance on tests of

  18. Spatial Attention and Audiovisual Interactions in Apparent Motion

    Science.gov (United States)

    Sanabria, Daniel; Soto-Faraco, Salvador; Spence, Charles

    2007-01-01

    In this study, the authors combined the cross-modal dynamic capture task (involving the horizontal apparent movement of visual and auditory stimuli) with spatial cuing in the vertical dimension to investigate the role of spatial attention in cross-modal interactions during motion perception. Spatial attention was manipulated endogenously, either…

  19. Both physical exercise and progressive muscle relaxation reduce the facing-the-viewer bias in biological motion perception.

    Directory of Open Access Journals (Sweden)

    Adam Heenan

    Full Text Available Biological motion stimuli, such as orthographically projected stick figure walkers, are ambiguous about their orientation in depth. The projection of a stick figure walker oriented towards the viewer, therefore, is the same as its projection when oriented away. Even though such figures are depth-ambiguous, however, observers tend to interpret them as facing towards them more often than facing away. Some have speculated that this facing-the-viewer bias may exist for sociobiological reasons: Mistaking another human as retreating when they are actually approaching could have more severe consequences than the opposite error. Implied in this hypothesis is that the facing-towards percept of biological motion stimuli is potentially more threatening. Measures of anxiety and the facing-the-viewer bias should therefore be related, as researchers have consistently found that anxious individuals display an attentional bias towards more threatening stimuli. The goal of this study was to assess whether physical exercise (Experiment 1) or an anxiety induction/reduction task (Experiment 2) would significantly affect facing-the-viewer biases. We hypothesized that both physical exercise and progressive muscle relaxation would decrease facing-the-viewer biases for full stick figure walkers, but not for bottom- or top-half-only human stimuli, as these carry less sociobiological relevance. On the other hand, we expected that the anxiety induction task (Experiment 2) would increase facing-the-viewer biases for full stick figure walkers only. In both experiments, participants completed anxiety questionnaires, exercised on a treadmill (Experiment 1) or performed an anxiety induction/reduction task (Experiment 2), and then immediately completed a perceptual task that allowed us to assess their facing-the-viewer bias. As hypothesized, we found that physical exercise and progressive muscle relaxation reduced facing-the-viewer biases for full stick figure walkers only. Our

  20. Peripheral Vision of Youths with Low Vision: Motion Perception, Crowding, and Visual Search

    Science.gov (United States)

    Tadin, Duje; Nyquist, Jeffrey B.; Lusk, Kelly E.; Corn, Anne L.; Lappin, Joseph S.

    2012-01-01

    Purpose. Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. Methods. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10–17) and low vision (n = 24, ages 9–18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. Results. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Conclusions. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function. PMID:22836766

  1. The influence of visual motion on interceptive actions and perception.

    Science.gov (United States)

    Marinovic, Welber; Plooy, Annaliese M; Arnold, Derek H

    2012-05-01

    Visual information is an essential guide when interacting with moving objects, yet it can also be deceiving. For instance, motion can induce illusory position shifts, such that a moving ball can seem to have bounced past its true point of contact with the ground. Some evidence suggests illusory motion-induced position shifts bias pointing tasks to a greater extent than they do perceptual judgments. This, however, appears at odds with other findings and with our success when intercepting moving objects. Here we examined the accuracy of interceptive movements and of perceptual judgments in relation to simulated bounces. Participants were asked to intercept a moving disc at its bounce location by positioning a virtual paddle, and then to report where the disc had landed. Results showed that interceptive actions were accurate whereas perceptual judgments were inaccurate, biased in the direction of motion. Successful interceptions necessitated accurate information concerning both the location and timing of the bounce, so motor planning evidently had privileged access to an accurate forward model of bounce timing and location. This would explain why people can be accurate when intercepting a moving object, but lack insight into the accurate information that had guided their actions when asked to make a perceptual judgment. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. 3D motion analysis via energy minimization

    Energy Technology Data Exchange (ETDEWEB)

    Wedel, Andreas

    2009-10-16

    This work deals with 3D motion analysis from stereo image sequences for driver assistance systems. It consists of two parts: the estimation of motion from the image data and the segmentation of moving objects in the input images. The content can be summarized with the technical term machine visual kinesthesia, the sensation or perception and cognition of motion. In the first three chapters, the importance of motion information is discussed for driver assistance systems, for machine vision in general, and for the estimation of ego motion. The next two chapters focus on motion perception, analyzing the apparent movement of pixels in image sequences for both a monocular and binocular camera setup. Then, the obtained motion information is used to segment moving objects in the input video. Thus, one can clearly identify the thread from analyzing the input images to describing the input images by means of stationary and moving objects. Finally, I present possibilities for future applications based on the contents of this thesis. Previous work in each case is presented in the respective chapters. Although the overarching issue of motion estimation from image sequences is related to practice, there is nothing as practical as a good theory (Kurt Lewin). Several problems in computer vision are formulated as intricate energy minimization problems. In this thesis, motion analysis in image sequences is thoroughly investigated, showing that splitting an original complex problem into simplified sub-problems yields improved accuracy, increased robustness, and a clear and accessible approach to state-of-the-art motion estimation techniques. In Chapter 4, optical flow is considered. Optical flow is commonly estimated by minimizing the combined energy, consisting of a data term and a smoothness term. These two parts are decoupled, yielding a novel and iterative approach to optical flow. The derived Refinement Optical Flow framework is a clear and straight-forward approach to

  3. Second-order processing of four-stroke apparent motion.

    Science.gov (United States)

    Mather, G; Murdoch, L

    1999-05-01

    In four-stroke apparent motion displays, pattern elements oscillate between two adjacent positions and synchronously reverse in contrast, but appear to move unidirectionally. For example, if rightward shifts preserve contrast but leftward shifts reverse contrast, consistent rightward motion is seen. In conventional first-order displays, elements reverse in luminance contrast (e.g. light elements become dark, and vice-versa). The resulting perception can be explained by responses in elementary motion detectors tuned to spatio-temporal orientation. Second-order motion displays contain texture-defined elements, and there is some evidence that they excite second-order motion detectors that extract spatio-temporal orientation following the application of a non-linear 'texture-grabbing' transform by the visual system. We generated a variety of second-order four-stroke displays, containing texture-contrast reversals instead of luminance contrast reversals, and used their effectiveness as a diagnostic test for the presence of various forms of non-linear transform in the second-order motion system. Displays containing only forward or only reversed phi motion sequences were also tested. Displays defined by variation in luminance, contrast, orientation, and size were effective. Displays defined by variation in motion, dynamism, and stereo were partially or wholly ineffective. Results obtained with contrast-reversing and four-stroke displays indicate that only relatively simple non-linear transforms (involving spatial filtering and rectification) are available during second-order energy-based motion analysis.

  4. Mom's shadow: structure-from-motion in newly hatched chicks as revealed by an imprinting procedure.

    Science.gov (United States)

    Mascalzoni, Elena; Regolin, Lucia; Vallortigara, Giorgio

    2009-03-01

    The ability to recognize three-dimensional objects from two-dimensional (2-D) displays was investigated in domestic chicks, focusing on the role of the object's motion. In Experiment 1 newly hatched chicks, imprinted on a three-dimensional (3-D) object, were allowed to choose between the shadows of the familiar object and of an object never seen before. In Experiments 2 and 3 random-dot displays were used to produce the perception of a solid shape only when set in motion. Overall, the results showed that domestic chicks were able to recognize familiar shapes from 2-D motion stimuli. It is likely that similar general mechanisms underlying the perception of structure-from-motion and the extraction of 3-D information are shared by humans and animals. The present data show that these processes occur similarly in birds as they do in mammals, two separate vertebrate classes; this possibly indicates a common phylogenetic origin of these processes.

  5. Discrimination of curvature from motion during smooth pursuit eye movements and fixation.

    Science.gov (United States)

    Ross, Nicholas M; Goettker, Alexander; Schütz, Alexander C; Braun, Doris I; Gegenfurtner, Karl R

    2017-09-01

    Smooth pursuit and motion perception have mainly been investigated with stimuli moving along linear trajectories. Here we studied the quality of pursuit movements to curved motion trajectories in human observers and examined whether the pursuit responses would be sensitive enough to discriminate various degrees of curvature. In a two-interval forced-choice task subjects pursued a Gaussian blob moving along a curved trajectory and then indicated in which interval the curve was flatter. We also measured discrimination thresholds for the same curvatures during fixation. Motion curvature had some specific effects on smooth pursuit properties: trajectories with larger amounts of curvature elicited lower open-loop acceleration, lower pursuit gain, and larger catch-up saccades compared with less curved trajectories. Initially, target motion curvatures were underestimated; however, ∼300 ms after pursuit onset pursuit responses closely matched the actual curved trajectory. We calculated perceptual thresholds for curvature discrimination, which were on the order of 1.5 degrees of visual angle (°) for a 7.9° curvature standard. Oculometric sensitivity to curvature discrimination based on the whole pursuit trajectory was quite similar to perceptual performance. Oculometric thresholds based on smaller time windows were higher. Thus smooth pursuit can quite accurately follow moving targets with curved trajectories, but temporal integration over longer periods is necessary to reach perceptual thresholds for curvature discrimination. NEW & NOTEWORTHY Even though motion trajectories in the real world are frequently curved, most studies of smooth pursuit and motion perception have investigated linear motion. We show that pursuit initially underestimates the curvature of target motion and is able to reproduce the target curvature ∼300 ms after pursuit onset. Temporal integration of target motion over longer periods is necessary for pursuit to reach the level of precision found

  6. Neuromorphic Configurable Architecture for Robust Motion Estimation

    Directory of Open Access Journals (Sweden)

    Guillermo Botella

    2008-01-01

    Full Text Available The robustness of the human visual system recovering motion estimation in almost any visual situation is enviable, performing enormous calculation tasks continuously, robustly, efficiently, and effortlessly. There is obviously a great deal we can learn from our own visual system. Currently, there are several optical flow algorithms, although none of them deals efficiently with noise, illumination changes, second-order motion, occlusions, and so on. The main contribution of this work is the efficient implementation of a biologically inspired motion algorithm that borrows nature templates as inspiration in the design of architectures and makes use of a specific model of human visual motion perception: Multichannel Gradient Model (McGM. This novel customizable architecture of a neuromorphic robust optical flow can be constructed with FPGA or ASIC device using properties of the cortical motion pathway, constituting a useful framework for building future complex bioinspired systems running in real time with high computational complexity. This work includes the resource usage and performance data, and the comparison with actual systems. This hardware has many application fields like object recognition, navigation, or tracking in difficult environments due to its bioinspired and robustness properties.

  7. Motion correction in neurological fan beam SPECT using motion tracking and fully 3D reconstruction

    International Nuclear Information System (INIS)

    Fulton, R.R.; Hutton, B.; Eberl, S.; Meikle, S.; Braun, M.; Westmead Hospital, Westmead, NSW; University of Technology, Sydney, NSW

    1998-01-01

    Full text: We have previously proposed the use of fully three-dimensional (3D) reconstruction and continuous monitoring of head position to correct for motion artifacts in neurological SPECT and PET. Knowledge of the motion during acquisition provided by a head tracking system can be used to reposition the projection data in space in such a way as to negate motion effects during reconstruction. The reconstruction algorithm must deal with variations in the projection geometry resulting from differences in the timing and nature of motion between patients. Rotational movements about any axis other than the camera's axis of rotation give rise to projection geometries which necessitate the use of a fully 3D reconstruction algorithm. Our previous work with computer simulations assuming parallel hole collimation demonstrated the feasibility of correcting for motion. We have now refined our iterative 3D reconstruction algorithm to support fan beam data and attenuation correction, and developed a practical head tracking system for use on a Trionix Triad SPECT system. The correction technique has been tested in fan beam SPECT studies of the 3D Hoffman brain phantom. Arbitrary movements were applied to the phantom during acquisition and recorded by the head tracker which monitored the position and orientation of the phantom throughout the study. 3D reconstruction was then performed using the motion data provided by the tracker. The accuracy of correction was assessed by comparing the corrected images with a motion free study acquired immediately beforehand, visually and by calculating mean squared error (MSE). Motion correction reduced distortion perceptibly and, depending on the motions applied, improved MSE by up to an order of magnitude. 3D reconstruction of the 128x128x128 data set took 20 minutes on a SUN Ultra 1 workstation. The results of these phantom experiments suggest that the technique can effectively compensate for head motion under clinical SPECT imaging

  8. S4-3: Spatial Processing of Visual Motion

    Directory of Open Access Journals (Sweden)

    Shin'ya Nishida

    2012-10-01

    Full Text Available Local motion signals are extracted in parallel by a bank of motion detectors, and their spatiotemporal interactions are processed in subsequent stages. In this talk, I will review our recent studies on spatial interactions in visual motion processing. First, we found two types of spatial pooling of local motion signals. Directionally ambiguous 1D local motion signals are pooled across orientation and space for solution of the aperture problem, while 2D local motion signals are pooled for estimation of global vector average (e.g., Amano et al., 2009, Journal of Vision, 9(3):4, 1–25). Second, when stimulus presentation is brief, coherent motion detection of dynamic random-dot kinematogram is not efficient. Nevertheless, it is significantly improved by transient and synchronous presentation of a stationary surround pattern. This suggests that centre-surround spatial interaction may help rapid perception of motion (Linares et al., submitted). Third, to know how the visual system encodes pairwise relationships between remote motion signals, we measured the temporal rate limit for perceiving the relationship of two motion directions presented at the same time at different spatial locations. Compared with similar tasks with luminance or orientation signals, motion comparison was more rapid and hence efficient. This high performance was affected little by inter-element separation even when it was increased up to 100 deg. These findings indicate the existence of specialized processes to encode long-range relationships between motion signals for quick appreciation of global dynamic scene structure (Maruya et al., in preparation).
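
    The two pooling schemes mentioned above (solving the aperture problem versus vector averaging) can be illustrated with a toy computation (hypothetical numbers; a least-squares intersection-of-constraints is our stand-in for the pooling across orientation and space):

        import numpy as np

        # Each 1D (edge) detector only reports the velocity component along its
        # unit normal n_i:  s_i = v . n_i  (the aperture problem).
        normals = np.array([[1.0, 0.0], [0.7071, 0.7071], [0.0, 1.0]])
        normal_speeds = np.array([2.0, 2.8284, 2.0])           # consistent with v = (2, 2)

        # Intersection-of-constraints: least-squares solution for the global velocity.
        v_ioc, *_ = np.linalg.lstsq(normals, normal_speeds, rcond=None)

        # Unambiguous 2D local motion signals are instead pooled by vector averaging.
        local_2d = np.array([[2.1, 1.9], [1.8, 2.2], [2.0, 2.0]])
        v_avg = local_2d.mean(axis=0)

        print(v_ioc, v_avg)                                     # both close to (2, 2)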

  9. On-chip visual perception of motion: a bio-inspired connectionist model on FPGA.

    Science.gov (United States)

    Torres-Huitzil, César; Girau, Bernard; Castellanos-Sánchez, Claudio

    2005-01-01

    Visual motion provides useful information to understand the dynamics of a scene and to allow intelligent systems to interact with their environment. Motion computation is usually restricted by real-time requirements that need the design and implementation of specific hardware architectures. In this paper, the design of a hardware architecture for a bio-inspired neural model for motion estimation is presented. The motion estimation is based on a strongly localized bio-inspired connectionist model with a particular adaptation of spatio-temporal Gabor-like filtering. The architecture is constituted by three main modules that perform spatial, temporal, and excitatory-inhibitory connectionist processing. The biomimetic architecture is modeled, simulated and validated in VHDL. The synthesis results on a Field Programmable Gate Array (FPGA) device show the potential achievement of real-time performance at an affordable silicon area.
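
    A minimal sketch of the kind of spatio-temporal Gabor-like filtering such models build on (a generic space-time Gabor in Python, not the paper's FPGA design; all parameter values are illustrative):

        import numpy as np

        def spatiotemporal_gabor(size=32, frames=16, fx=0.10, ft=0.05,
                                 sigma_x=6.0, sigma_t=4.0):
            """Space-time Gabor kernel; the tilted carrier makes it respond best to
            motion at roughly ft/fx pixels per frame (direction set by the signs)."""
            x = np.arange(size) - size / 2.0
            t = np.arange(frames) - frames / 2.0
            X, T = np.meshgrid(x, t, indexing="ij")             # space x time
            envelope = np.exp(-X**2 / (2 * sigma_x**2) - T**2 / (2 * sigma_t**2))
            carrier = np.cos(2 * np.pi * (fx * X + ft * T))
            return envelope * carrier

        # Response = inner product of the kernel with an (x, t) slice of the stimulus.
        kernel = spatiotemporal_gabor()
        stimulus = np.random.rand(32, 16)                       # placeholder x-t slice
        response = float(np.sum(kernel * stimulus))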

  10. Methodology for estimating human perception to tremors in high-rise buildings

    Science.gov (United States)

    Du, Wenqi; Goh, Key Seng; Pan, Tso-Chien

    2017-07-01

    Human perception to tremors during earthquakes in high-rise buildings is usually associated with psychological discomfort such as fear and anxiety. This paper presents a methodology for estimating the level of perception to tremors for occupants living in high-rise buildings subjected to ground motion excitations. Unlike other approaches based on empirical or historical data, the proposed methodology performs a regression analysis using the analytical results of two generic models of 15 and 30 stories. The recorded ground motions in Singapore are collected and modified for structural response analyses. Simple predictive models are then developed to estimate the perception level to tremors based on a proposed ground motion intensity parameter—the average response spectrum intensity in the period range between 0.1 and 2.0 s. These models can be used to predict the percentage of occupants in high-rise buildings who may perceive the tremors at a given ground motion intensity. Furthermore, the models are validated with two recent tremor events reportedly felt in Singapore. It is found that the estimated results match reasonably well with the reports in the local newspapers and from the authorities. The proposed methodology is applicable to urban regions where people living in high-rise buildings might feel tremors during earthquakes.
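
    A sketch of the ground motion intensity parameter as we read it (average spectral acceleration over the 0.1 to 2.0 s period band; the function name and the example spectrum are ours, not the authors'):

        import numpy as np

        def average_spectrum_intensity(periods, sa, t_min=0.1, t_max=2.0):
            """Integral of the response spectrum Sa(T) over [t_min, t_max],
            divided by the band width (trapezoidal rule)."""
            mask = (periods >= t_min) & (periods <= t_max)
            p, a = periods[mask], sa[mask]
            area = np.sum(0.5 * (a[1:] + a[:-1]) * np.diff(p))
            return area / (t_max - t_min)

        # Example with a placeholder response spectrum (period in s, Sa in g).
        periods = np.linspace(0.02, 4.0, 400)
        sa = 0.05 * np.exp(-((periods - 0.3) ** 2) / 0.2)       # hypothetical spectral shape
        print(average_spectrum_intensity(periods, sa))

    A regression of reported perception levels against this intensity measure then yields the predictive models described in the abstract.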

  11. Perception of the Body in Space: Mechanisms

    Science.gov (United States)

    Young, Laurence R.

    1991-01-01

    The principal topic is the perception of body orientation and motion in space and the extent to which these perceptual abstraction can be related directly to the knowledge of sensory mechanisms, particularly for the vestibular apparatus. Spatial orientation is firmly based on the underlying sensory mechanisms and their central integration. For some of the simplest situations, like rotation about a vertical axis in darkness, the dynamic response of the semicircular canals furnishes almost enough information to explain the sensations of turning and stopping. For more complex conditions involving multiple sensory systems and possible conflicts among their messages, a mechanistic response requires significant speculative assumptions. The models that exist for multisensory spatial orientation are still largely of the non-rational parameter variety. They are capable of predicting relationships among input motions and output perceptions of motion, but they involve computational functions that do not now and perhaps never will have their counterpart in central nervous system machinery. The challenge continues to be in the iterative process of testing models by experiment, correcting them where necessary, and testing them again.

  12. Airsickness and aircraft motion during short-haul flights.

    Science.gov (United States)

    Turner, M; Griffin, M J; Holland, I

    2000-12-01

    There is little quantitative information that can be used to predict the incidence of airsickness from the motions experienced in military or civil aviation. This study examines the relationship between low-frequency aircraft motion and passenger sickness in short-haul turboprop flights within the United Kingdom. A questionnaire survey of 923 fare-paying passengers was conducted on 38 commercial airline flights. Concurrent measurements of aircraft motion were made on all journeys, yielding approximately 30 h of aircraft motion data. Overall, 0.5% of passengers reported vomiting, 8.4% reported nausea (range 0% to 34.8%) and 16.2% reported illness (range 0% to 47.8%) during flight. Positive correlations were found between the percentage of passengers who experienced nausea or felt ill and the magnitude of low-frequency lateral and vertical motion, although neither motion uniquely predicted airsickness. The incidence of motion sickness also varied with passenger age, gender, food consumption and activity during air travel. No differences in sickness were found between passengers located in different seating sections of the aircraft, or as a function of moderate levels of alcohol consumption. The passenger responses suggest that a useful prediction of airsickness can be obtained from magnitudes of low frequency aircraft motion. However, some variations in airsickness may also be explained by individual differences between passengers and their psychological perception of flying.

  13. Synchronous and asynchronous perceptual bindings of colour and motion following identical stimulations.

    Science.gov (United States)

    McIntyre, Morgan E; Arnold, Derek H

    2018-05-01

    When a moving surface alternates in colour and direction, perceptual couplings of colour and motion can differ from their physical correspondence. Periods of motion tend to be perceptually bound with physically delayed colours - a colour/motion perceptual asynchrony. This can be eliminated by motion transparency. Here we show that the colour/motion perceptual asynchrony is not invariably eliminated by motion transparency. Nor is it an inevitable consequence given a particular physical input. Instead, it can emerge when moving surfaces are perceived as alternating in direction, even if those surfaces seem transparent, and it is eliminated when surfaces are perceived as moving invariably. For a given observer either situation can result from exposure to a common input. Our findings suggest that neural events that promote the perception of motion reversals are causal of the colour/motion perceptual asynchrony. Moreover, they suggest that motion transparency and coherence can be signalled simultaneously by subpopulations of direction-selective neurons, with this conflict instantaneously resolved by a competitive winner-takes-all interaction, which can instantiate or eliminate colour/motion perceptual asynchrony. Copyright © 2017. Published by Elsevier Ltd.

  14. Optic Flow Information Influencing Heading Perception during Rotation

    Directory of Open Access Journals (Sweden)

    Diederick C. Niehorster

    2011-05-01

    Full Text Available We investigated what roles global spatial frequency, surface structure, and foreground motion play in heading perception during simulated rotation from optic flow. The display (110° H x 94° V) simulated walking on a straight path over a ground plane (depth range: 1.4–50 m) at 2 m/s while fixating a target off to one side (mean R/T ratios: ±1, ±2, ±3) under six display conditions. Four displays consisted of nonexpanding dots that were distributed so as to manipulate the amount of foreground motion and the presence of surface structure. In one further display the ground was covered with disks that expanded during the trial, and lastly a textured ground display was created with the same spatial frequency power spectrum as the disk ground. At the end of each 1 s trial, observers indicated their perceived heading along a line at the display's center. Mean heading biases were smaller for the textured than for the disk ground, for the displays with more foreground motion, and for the displays with surface structure defined by dot motion than without. We conclude that while spatial frequency content is not a crucial factor, dense motion parallax and surface structure in optic flow are important for accurate heading perception during rotation.
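
    The R/T ratios above refer to the standard decomposition of retinal flow into a depth-dependent translational term and a depth-independent rotational term; for a point at depth Z imaged at (x, y) under unit focal length (our notation, with signs depending on the axis conventions chosen):

        \[ \dot{x} = \frac{x T_z - T_x}{Z} + x y\,\Omega_x - (1 + x^2)\,\Omega_y + y\,\Omega_z, \qquad \dot{y} = \frac{y T_z - T_y}{Z} + (1 + y^2)\,\Omega_x - x y\,\Omega_y - x\,\Omega_z . \]

    Only the first term carries heading information, which is broadly consistent with the conclusion above that dense motion parallax and surface structure matter for heading recovery during rotation.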

  15. Brain mechanisms for social perception: lessons from autism and typical development.

    Science.gov (United States)

    Pelphrey, Kevin A; Carter, Elizabeth J

    2008-12-01

    In this review, we summarize our research program, which has as its goal charting the typical and atypical development of the social brain in children, adolescents, and adults with and without autism. We highlight recent work using virtual reality stimuli, eye tracking, and functional magnetic resonance imaging that has implicated the superior temporal sulcus (STS) region as an important component of the network of brain regions that support various aspects of social cognition and social perception. Our work in typically developing adults has led to the conclusion that the STS region is involved in social perception via its role in the visual analysis of others' actions and intentions from biological-motion cues. Our work in high-functioning adolescents and adults with autism has implicated the STS region as a mechanism underlying social perception dysfunction in this neurodevelopmental disorder. We also report novel findings from a study of biological-motion perception in young children with and without autism.

  16. Integration of motion energy from overlapping random background noise increases perceived speed of coherently moving stimuli.

    Science.gov (United States)

    Chuang, Jason; Ausloos, Emily C; Schwebach, Courtney A; Huang, Xin

    2016-12-01

    The perception of visual motion can be profoundly influenced by visual context. To gain insight into how the visual system represents motion speed, we investigated how a background stimulus that did not move in a net direction influenced the perceived speed of a center stimulus. Visual stimuli were two overlapping random-dot patterns. The center stimulus moved coherently in a fixed direction, whereas the background stimulus moved randomly. We found that human subjects perceived the speed of the center stimulus to be significantly faster than its veridical speed when the background contained motion noise. Interestingly, the perceived speed was tuned to the noise level of the background. When the speed of the center stimulus was low, the highest perceived speed was reached when the background had a low level of motion noise. As the center speed increased, the peak perceived speed was reached at a progressively higher background noise level. The effect of speed overestimation required the center stimulus to overlap with the background. Increasing the background size within a certain range enhanced the effect, suggesting spatial integration. The speed overestimation was significantly reduced or abolished when the center stimulus and the background stimulus had different colors, or when they were placed at different depths. When the center and background stimuli were perceptually separable, speed overestimation was correlated with the perceptual similarity between them. These results suggest that integration of motion energy from random motion noise has a significant impact on speed perception. Our findings put new constraints on models regarding the neural basis of speed perception. Copyright © 2016 the American Physiological Society.

  17. Gravito-Inertial Force Resolution in Perception of Synchronized Tilt and Translation

    Science.gov (United States)

    Wood, Scott J.; Holly, Jan; Zhang, Guen-Lu

    2011-01-01

    Natural movements in the sagittal plane involve pitch tilt relative to gravity combined with translation motion. The Gravito-Inertial Force (GIF) resolution hypothesis states that the resultant force on the body is perceptually resolved into tilt and translation consistently with the laws of physics. The purpose of this study was to test this hypothesis for human perception during combined tilt and translation motion. EXPERIMENTAL METHODS: Twelve subjects provided verbal reports during 0.3 Hz motion in the dark with 4 types of tilt and/or translation motion: 1) pitch tilt about an interaural axis at +/-10deg or +/-20deg, 2) fore-aft translation with acceleration equivalent to +/-10deg or +/-20deg, 3) combined "in phase" tilt and translation motion resulting in acceleration equivalent to +/-20deg, and 4) "out of phase" tilt and translation motion that maintained the resultant gravito-inertial force aligned with the longitudinal body axis. The amplitude of perceived pitch tilt and translation at the head were obtained during separate trials. MODELING METHODS: Three-dimensional mathematical modeling was performed to test the GIF-resolution hypothesis using a dynamical model. The model encoded GIF-resolution using the standard vector equation, and used an internal model of motion parameters, including gravity. Differential equations conveyed time-varying predictions. The six motion profiles were tested, resulting in predicted perceived amplitude of tilt and translation for each. RESULTS: The modeling results exhibited the same pattern as the experimental results. Most importantly, both modeling and experimental results showed greater perceived tilt during the "in phase" profile than the "out of phase" profile, and greater perceived tilt during combined "in phase" motion than during pure tilt of the same amplitude. However, the model did not predict as much perceived translation as reported by subjects during pure tilt. CONCLUSION: Human perception is consistent with
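
    The "standard vector equation" encoding GIF resolution is commonly written as (our notation; sign conventions vary across papers):

        \[ \mathbf{f} = \mathbf{g} - \mathbf{a}, \qquad \lVert \hat{\mathbf{g}} \rVert = 9.81\ \mathrm{m/s^2}, \]

    so that any measured gravito-inertial force f must be split into a tilt estimate (the orientation of the internal gravity estimate) and a translation estimate that together reproduce it; the "in phase" and "out of phase" profiles above differ in how the resultant f relates to the body axis, which is what the model exploits.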

  18. Defining the computational structure of the motion detector in Drosophila.

    Science.gov (United States)

    Clark, Damon A; Bursztyn, Limor; Horowitz, Mark A; Schnitzer, Mark J; Clandinin, Thomas R

    2011-06-23

    Many animals rely on visual motion detection for survival. Motion information is extracted from spatiotemporal intensity patterns on the retina, a paradigmatic neural computation. A phenomenological model, the Hassenstein-Reichardt correlator (HRC), relates visual inputs to neural activity and behavioral responses to motion, but the circuits that implement this computation remain unknown. By using cell-type specific genetic silencing, minimal motion stimuli, and in vivo calcium imaging, we examine two critical HRC inputs. These two pathways respond preferentially to light and dark moving edges. We demonstrate that these pathways perform overlapping but complementary subsets of the computations underlying the HRC. A numerical model implementing differential weighting of these operations displays the observed edge preferences. Intriguingly, these pathways are distinguished by their sensitivities to a stimulus correlation that corresponds to an illusory percept, "reverse phi," that affects many species. Thus, this computational architecture may be widely used to achieve edge selectivity in motion detection. Copyright © 2011 Elsevier Inc. All rights reserved.
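
    For reference, the HRC named above has a simple canonical delay-and-correlate form; the sketch below is our toy implementation of one detector over two input signals, not the circuit identified in the paper:

        import numpy as np

        def hrc_response(left, right, delay=1):
            """Hassenstein-Reichardt correlator on two photoreceptor signals
            (1D arrays over time). Positive output ~ motion from 'left' toward 'right'."""
            # Delay one input, multiply with the undelayed neighbour, then take the
            # difference of the two mirror-symmetric half-correlators.
            # (np.roll wraps around; harmless for this toy input.)
            l_del = np.roll(left, delay)
            r_del = np.roll(right, delay)
            return l_del * right - r_del * left

        # Example: a bright edge passing the left receptor one step before the right one.
        t = np.arange(20)
        left = (t >= 5).astype(float)
        right = (t >= 6).astype(float)
        print(hrc_response(left, right).sum())   # > 0 for left-to-right motion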

  19. Impaired Velocity Processing Reveals an Agnosia for Motion in Depth.

    Science.gov (United States)

    Barendregt, Martijn; Dumoulin, Serge O; Rokers, Bas

    2016-11-01

    Many individuals with normal visual acuity are unable to discriminate the direction of 3-D motion in a portion of their visual field, a deficit previously referred to as a stereomotion scotoma. The origin of this visual deficit has remained unclear. We hypothesized that the impairment is due to a failure in the processing of one of the two binocular cues to motion in depth: changes in binocular disparity over time or interocular velocity differences. We isolated the contributions of these two cues and found that sensitivity to interocular velocity differences, but not changes in binocular disparity, varied systematically with observers' ability to judge motion direction. We therefore conclude that the inability to interpret motion in depth is due to a failure in the neural mechanisms that combine velocity signals from the two eyes. Given these results, we argue that the deficit should be considered a prevalent but previously unrecognized agnosia specific to the perception of visual motion. © The Author(s) 2016.

  20. Stronger misdirection in curved than in straight motion

    Directory of Open Access Journals (Sweden)

    Jorge eOtero-Millan

    2011-11-01

    Full Text Available Illusions developed by magicians are a rich and largely untapped source of insight into perception and cognition. Here we show that curved motion, as employed by the magician in a classic sleight of hand trick, generates stronger misdirection than rectilinear motion, and that this difference can be explained by the differential engagement of the smooth pursuit and the saccadic oculomotor systems. This research moreover exemplifies how the magician’s intuitive understanding of the spectator’s mindset can surpass that of the cognitive scientist in specific instances, and that observation-based behavioral insights developed by magicians are worthy of quantitative investigation in the neuroscience laboratory.

  1. Procedural Audio in Computer Games Using Motion Controllers: An Evaluation on the Effect and Perception

    Directory of Open Access Journals (Sweden)

    Niels Böttcher

    2013-01-01

    Full Text Available A study has been conducted into whether the use of procedural audio affects players in computer games using motion controllers. It was investigated whether or not (1) players perceive a difference between detailed and interactive procedural audio and prerecorded audio, (2) the use of procedural audio affects their motor behavior, and (3) procedural audio affects their perception of control. Three experimental surveys were devised, two consisting of game sessions and the third consisting of watching videos of gameplay. A skiing game controlled by a Nintendo Wii balance board and a sword-fighting game controlled by a Wii remote were implemented with two versions of sound, one sample based and the other procedural based. The procedural models were designed using a perceptual approach and by alternative combinations of well-known synthesis techniques. The experimental results showed that, when being actively involved in playing or purely observing a video recording of a game, the majority of participants did not notice any difference in sound. Additionally, it was not possible to show that the use of procedural audio caused any consistent change in the motor behavior. In the skiing experiment, a portion of players perceived the control of the procedural version as being more sensitive.

  2. Role of Cerebellum in Motion Perception and Vestibulo-ocular Reflex—Similarities and Disparities

    Science.gov (United States)

    Shaikh, Aasef G.; Palla, Antonella; Marti, Sarah; Olasagasti, Itsaso; Optican, Lance M.; Zee, David S.; Straumann, Dominik

    2012-01-01

    Vestibular velocity storage enhances the efficacy of the angular vestibulo-ocular reflex (VOR) during relatively low-frequency head rotations. This function is modulated by GABA-mediated inhibitory cerebellar projections. Velocity storage also exists in the perceptual pathway and has similar functional principles as the VOR. However, it is not known whether the neural substrates for perception and the VOR overlap. We propose two possibilities. First, there is the same velocity storage for both the VOR and perception; second, there are nonoverlapping neural networks: one might be involved in perception and the other in the VOR. We investigated these possibilities by measuring VOR and perceptual responses in healthy human subjects during whole-body, constant-velocity rotation steps about all three dimensions (yaw, pitch, and roll) before and after 10 mg of 4-aminopyridine (4-AP). 4-AP, a selective blocker of inward rectifier potassium conductance, can lead to increased synchronization and precision of Purkinje neuron discharge and possibly enhance the GABAergic action. Hence 4-AP could reduce the decay time constant of the perceived angular velocity and the VOR. We found that 4-AP reduced the decay time constant, but the amount of reduction in the two processes, perception and VOR, was not the same, suggesting the possibility of nonoverlapping or partially overlapping neural substrates for the VOR and perception. We also noted that, unlike the VOR, the perceived angular velocity gradually built up and plateaued prior to decay. Hence, the perception pathway may have an additional mechanism that changes the dynamics of perceived angular velocity beyond the velocity storage. 4-AP had no effects on the duration of build-up of perceived angular velocity, suggesting that the higher-order processing of perception, beyond the velocity storage, might not occur under the influence of a mechanism that could be influenced by 4-AP. PMID:22777507

  3. Near-optimal integration of facial form and motion.

    Science.gov (United States)

    Dobs, Katharina; Ma, Wei Ji; Reddy, Leila

    2017-09-08

    Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it is fairly well established that humans use an optimal strategy when integrating low-level cues, weighting them in proportion to their relative reliability, the integration processes underlying high-level perception are much less well understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
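
    The optimal (maximum-likelihood) cue-combination rule referred to in this record is commonly written as a reliability-weighted average; the notation below is the standard textbook formulation, not symbols taken from the paper itself:

        \hat{s}_{fm} = w_f \hat{s}_f + w_m \hat{s}_m, \qquad w_f = \frac{1/\sigma_f^2}{1/\sigma_f^2 + 1/\sigma_m^2}, \qquad w_m = 1 - w_f

        \sigma_{fm}^2 = \frac{\sigma_f^2\,\sigma_m^2}{\sigma_f^2 + \sigma_m^2} \le \min(\sigma_f^2, \sigma_m^2)

    where \hat{s}_f and \hat{s}_m are the identity estimates based on facial form and facial motion, and \sigma_f^2 and \sigma_m^2 are their variances. The combined estimate is never less reliable than the better single cue, which is the signature of optimal integration that such studies test against subjects' choices.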

  4. Simulated self-motion in a visual gravity field: sensitivity to vertical and horizontal heading in the human brain.

    Science.gov (United States)

    Indovina, Iole; Maffei, Vincenzo; Pauwels, Karl; Macaluso, Emiliano; Orban, Guy A; Lacquaniti, Francesco

    2013-05-01

    Multiple visual signals are relevant to the perception of heading direction. While the role of optic flow and depth cues has been studied extensively, little is known about the visual effects of gravity on heading perception. We used fMRI to investigate the contribution of gravity-related visual cues to the processing of vertical versus horizontal apparent self-motion. Participants experienced virtual roller-coaster rides in different scenarios, at constant speed or with 1 g acceleration/deceleration. Imaging results showed that vertical self-motion coherent with gravity engaged the posterior insula and other brain regions that have previously been associated with vertical object motion under gravity. This selective pattern of activation was also found in a second experiment that included rectilinear motion in tunnels, whose direction was cued only by the preceding open-air curves. We argue that the posterior insula might perform high-order computations on visual motion patterns, combining different sensory cues and prior information about the effects of gravity. Medial-temporal regions including the para-hippocampus and hippocampus were more activated by horizontal motion, preferentially at constant speed, consistent with a role in inertial navigation. Overall, the results suggest partially distinct neural representations of the cardinal axes of self-motion (horizontal and vertical). Copyright © 2013 Elsevier Inc. All rights reserved.

  5. The First Time Ever I Saw Your Feet: Inversion Effect in Newborns' Sensitivity to Biological Motion

    Science.gov (United States)

    Bardi, Lara; Regolin, Lucia; Simion, Francesca

    2014-01-01

    Inversion effect in biological motion perception has been recently attributed to an innate sensitivity of the visual system to the gravity-dependent dynamic of the motion. However, the specific cues that determine the inversion effect in naïve subjects were never investigated. In the present study, we have assessed the contribution of the local…

  6. A neural model of the temporal dynamics of figure-ground segregation in motion perception.

    Science.gov (United States)

    Raudies, Florian; Neumann, Heiko

    2010-03-01

    How does the visual system manage to segment a visual scene into surfaces and objects and to attend to a target object? Based on psychological and physiological investigations, it has been proposed that the perceptual organization and segmentation of a scene is achieved by processing at different levels of the visual cortical hierarchy. According to this, motion onset detection, motion-defined shape segregation, and target selection are accomplished by processes which bind together simple features into fragments of increasingly complex configurations at different levels in the processing hierarchy. As an alternative to this hierarchical processing hypothesis, it has been proposed that the processing stages for feature detection and segregation are reflected in different temporal episodes in the response patterns of individual neurons. Such temporal epochs have been observed in the activation pattern of neurons as low as in area V1. Here, we present a neural network model of motion detection, figure-ground segregation and attentive selection which explains these response patterns in a unifying framework. Based on known principles of the functional architecture of the visual cortex, we propose that initial motion and motion boundaries are detected at different and hierarchically organized stages in the dorsal pathway. Visual shapes that are defined by boundaries, which were generated from juxtaposed opponent motions, are represented at different stages in the ventral pathway. Model areas in the different pathways interact through feedforward and modulating feedback, while mutual interactions enable the communication between motion and form representations. Selective attention is devoted to shape representations by sending modulating feedback signals from higher levels (working memory) to intermediate levels to enhance their responses. Areas in the motion and form pathway are coupled through top-down feedback with V1 cells at the bottom end of the hierarchy.

  7. Averaging, not internal noise, limits the development of coherent motion processing

    Directory of Open Access Journals (Sweden)

    Catherine Manning

    2014-10-01

    Full Text Available The development of motion processing is a critical part of visual development, allowing children to interact with moving objects and navigate within a dynamic environment. However, global motion processing, which requires pooling motion information across space, develops late, reaching adult-like levels only by mid-to-late childhood. The reasons underlying this protracted development are not yet fully understood. In this study, we sought to determine whether the development of motion coherence sensitivity is limited by internal noise (i.e., imprecision in estimating the directions of individual elements) and/or global pooling across local estimates. To this end, we presented equivalent noise direction discrimination tasks and motion coherence tasks at both slow (1.5°/s) and fast (6°/s) speeds to children aged 5, 7, 9 and 11 years, and adults. We show that, as children get older, their levels of internal noise reduce, and they are able to average across more local motion estimates. Regression analyses indicated, however, that age-related improvements in coherent motion perception are driven solely by improvements in averaging and not by reductions in internal noise. Our results suggest that the development of coherent motion sensitivity is primarily limited by developmental changes within brain regions involved in integrating motion signals (e.g., MT/V5).
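
    The equivalent noise analysis used in this record is typically expressed by relating the observed direction-discrimination threshold to two limiting factors; this is the standard form of the model (a sketch with generic symbols, not the authors' notation):

        \sigma_{obs}^2 = \frac{\sigma_{int}^2 + \sigma_{ext}^2}{n}

    where \sigma_{int} is the internal (local) noise, \sigma_{ext} is the external directional variability added to the stimulus, and n is the effective number of local motion estimates averaged. Fitting thresholds measured at several external-noise levels yields separate estimates of \sigma_{int} and n, which is how internal noise and averaging can be dissociated developmentally.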

  8. Differences in Otolith and Abdominal Viscera Graviceptor Dynamics: Implications for Motion Sickness and Perceived Body Position

    Science.gov (United States)

    vonGierke, Henning E.; Parker, Donald E.

    1993-01-01

    Human graviceptors, localized to the trunk by Mittelstaedt, probably transduce acceleration via motion of the abdominal viscera. As demonstrated previously in biodynamic vibration and impact tolerance research, the thoraco-abdominal viscera exhibit a resonance at 4 to 6 Hz. Behavioral observations and mechanical models of otolith graviceptor response indicate a phase shift increasing with frequency between 0.01 and 0.5 Hz. Consequently, the potential exists for intermodality sensory conflict between vestibular and visceral graviceptor signals, at least at the mechanical receptor level. The frequency range of this potential conflict corresponds with the primary frequency range for motion sickness incidence in transportation, in subjects rotated about Earth-horizontal axes (barbecue-spit stimulation), and in periodic parabolic-flight microgravity research, as well as for erroneous perception of vertical oscillations in helicopters. We discuss the implications of this hypothesis for previous self-motion perception research and offer suggestions for future studies.

  9. Self-recognition of avatar motion: how do I know it's me?

    Science.gov (United States)

    Cook, Richard; Johnston, Alan; Heyes, Cecilia

    2012-02-22

    When motion is isolated from form cues and viewed from third-person perspectives, individuals are able to recognize their own whole body movements better than those of friends. Because we rarely see our own bodies in motion from third-person viewpoints, this self-recognition advantage may indicate a contribution to perception from the motor system. Our first experiment provides evidence that recognition of self-produced and friends' motion dissociate, with only the latter showing sensitivity to orientation. Through the use of selectively disrupted avatar motion, our second experiment shows that self-recognition of facial motion is mediated by knowledge of the local temporal characteristics of one's own actions. Specifically, inverted self-recognition was unaffected by disruption of feature configurations and trajectories, but eliminated by temporal distortion. While actors lack third-person visual experience of their actions, they have a lifetime of proprioceptive, somatosensory, vestibular and first-person-visual experience. These sources of contingent feedback may provide actors with knowledge about the temporal properties of their actions, potentially supporting recognition of characteristic rhythmic variation when viewing self-produced motion. In contrast, the ability to recognize the motion signatures of familiar others may be dependent on configural topographic cues.

  10. Autogenic-feedback training - A treatment for motion and space sickness

    Science.gov (United States)

    Cowings, Patricia S.

    1990-01-01

    A training method for preventing the occurrence of motion sickness in humans, called autogenic-feedback training (AFT), is described. AFT is based on a combination of biofeedback and autogenic therapy which involves training physiological self-regulation as an alternative to pharmacological management. AFT was used to reliably increase tolerance to motion-sickness-inducing tests in both men and women ranging in age from 18 to 54 years. The effectiveness of AFT is found to be significantly higher than that of protective adaptation training. Data obtained show that there is no apparent effect from AFT on measures of vestibular perception and no side effects.

  11. Integration of visual and inertial cues in perceived heading of self-motion

    NARCIS (Netherlands)

    Winkel, K.N. de; Weesie, H.M.; Werkhoven, P.J.; Groen, E.L.

    2010-01-01

    In the present study, we investigated whether the perception of heading of linear self-motion can be explained by Maximum Likelihood Integration (MLI) of visual and non-visual sensory cues. MLI predicts smaller variance for multisensory judgments compared to unisensory judgments. Nine participants…

  12. Visual Hierarchy and Mind Motion in Advertising Design

    Directory of Open Access Journals (Sweden)

    Doaa Farouk Badawy Eldesouky

    2013-06-01

    Full Text Available Visual hierarchy is a significant concept in the field of advertising, a field that is dominated by effective communication, visual recognition and motion. Designers of advertisements have always tried to organize the visual hierarchy throughout their designs to help the eye recognize information in the desired order, so as to achieve the ultimate goals of clear perception and effective delivery of the advertising message. However, many assumptions and questions usually arise about how to create an effective hierarchy throughout advertising designs and lead the eye and mind of the viewer in the most favorable way. This paper attempts to study visual hierarchy and mind motion in advertising designs and why it is important to develop visual paths when designing an advertisement. It explores the theory behind it, and how the very principles can be used to put these concepts into practice. The paper demonstrates some advertising samples applying visual hierarchy and mind motion, as a representation of applying the basics, and discusses the results.

  13. Visual form Cues, Biological Motions, Auditory Cues, and Even Olfactory Cues Interact to Affect Visual Sex Discriminations

    OpenAIRE

    Rick Van Der Zwan; Anna Brooks; Duncan Blair; Coralia Machatch; Graeme Hacker

    2011-01-01

    Johnson and Tassinary (2005) proposed that visually perceived sex is signalled by structural or form cues. They suggested also that biological motion cues signal sex, but do so indirectly. We previously have shown that auditory cues can mediate visual sex perceptions (van der Zwan et al., 2009). Here we demonstrate that structural cues to body shape are alone sufficient for visual sex discriminations but that biological motion cues alone are not. Interestingly, biological motions can resolve ...

  14. He Throws like a Girl (but Only when He's Sad): Emotion Affects Sex-Decoding of Biological Motion Displays

    Science.gov (United States)

    Johnson, Kerri L.; McKay, Lawrie S.; Pollick, Frank E.

    2011-01-01

    Gender stereotypes have been implicated in sex-typed perceptions of facial emotion. Such interpretations were recently called into question because facial cues of emotion are confounded with sexually dimorphic facial cues. Here we examine the role of visual cues and gender stereotypes in perceptions of biological motion displays, thus overcoming…

  15. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise them as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity to three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  16. Illusory object motion in the centre of a radial pattern: The Pursuit–Pursuing illusion

    Science.gov (United States)

    Ito, Hiroyuki

    2012-01-01

    A circular object placed in the centre of a radial pattern consisting of thin sectors was found to cause a robust motion illusion. During eye-movement pursuit of a moving target, the presently described stimulus produced illusory background-object motion in the same direction as that of the eye movement. In addition, the display induced illusory stationary perception of a moving object against the whole display motion. In seven experiments, the characteristics of the illusion were examined in terms of luminance relationships and figural characteristics of the radial pattern. Some potential explanations for these findings are discussed. PMID:23145267

  17. Illusory object motion in the centre of a radial pattern: The Pursuit-Pursuing illusion.

    Science.gov (United States)

    Ito, Hiroyuki

    2012-01-01

    A circular object placed in the centre of a radial pattern consisting of thin sectors was found to cause a robust motion illusion. During eye-movement pursuit of a moving target, the presently described stimulus produced illusory background-object motion in the same direction as that of the eye movement. In addition, the display induced illusory stationary perception of a moving object against the whole display motion. In seven experiments, the characteristics of the illusion were examined in terms of luminance relationships and figural characteristics of the radial pattern. Some potential explanations for these findings are discussed.

  18. Stability of Kinesthetic Perception in Efferent-Afferent Spaces: The Concept of Iso-perceptual Manifold.

    Science.gov (United States)

    Latash, Mark L

    2018-02-21

    The main goal of this paper is to introduce the concept of an iso-perceptual manifold for the perception of body configuration and related variables (kinesthetic perception) and to discuss its relation to the equilibrium-point hypothesis and the concepts of reference coordinate and uncontrolled manifold. Hierarchical control of action is postulated, with abundant transformations between sets of spatial reference coordinates for salient effectors at different levels. The iso-perceptual manifold is defined in the combined space of afferent and efferent variables as the subspace corresponding to a stable percept. Examples of motion along an iso-perceptual manifold (perceptually equivalent motion) are considered during various natural actions. Some combinations of afferent and efferent signals, in particular those implying a violation of the body's integrity, give rise to variable percepts by artificial projection onto iso-perceptual manifolds. This framework is used to interpret unusual features of vibration-induced kinesthetic illusions and to predict new illusions not yet reported in the literature. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  19. Evaluation of simulation motion fidelity criteria in the vertical and directional axes

    Science.gov (United States)

    Schroeder, Jeffery A.

    1993-01-01

    An evaluation of existing motion fidelity criteria was conducted on the NASA Ames Vertical Motion Simulator. Experienced test pilots flew single-axis repositioning tasks in both the vertical and the directional axes. Using a first-order approximation of a hovering helicopter, tasks were flown with variations only in the filters that attenuate the commands to the simulator motion system. These filters had second-order high-pass characteristics, and the variations were made in the filter gain and natural frequency. The variations spanned motion response characteristics from nearly full math-model motion to fixed-base. Between configurations, pilots recalibrated their motion response perception by flying the task with full motion. Pilots subjectively rated the motion fidelity of subsequent configurations relative to this full motion case, which was considered the standard for comparison. The results suggested that the existing vertical-axis criterion was accurate for combinations of gain and natural frequency changes. However, if only the gain or the natural frequency was changed, the rated motion fidelity was better than the criterion predicted. In the vertical axis, the objective and subjective results indicated that a larger gain reduction was tolerated than the existing criterion allowed. The limited data collected in the yaw axis revealed that pilots had difficulty in distinguishing among the variations in the pure yaw motion cues.
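
    The motion filters described above, second-order high-pass with a gain and a natural frequency, are conventionally written as a washout transfer function; the form below is the generic one (the damping term is an assumption, as the record does not state it):

        H(s) = \frac{K\,s^{2}}{s^{2} + 2\zeta\omega_{n}s + \omega_{n}^{2}}

    where K is the filter gain and \omega_{n} the natural frequency, the two parameters varied in the experiment, and \zeta is a damping ratio. The simulator acceleration command is the math-model acceleration passed through H(s), so increasing \omega_{n} or reducing K attenuates sustained, low-frequency motion cues.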

  1. Dynamic Stimuli And Active Processing In Human Visual Perception

    Science.gov (United States)

    Haber, Ralph N.

    1990-03-01

    Theories of visual perception traditionally have considered a static retinal image to be the starting point for processing, and have considered processing to be both passive and a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted with newer ones that utilize dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered as dynamic: humans process the systematic changes in patterned light when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies of what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.

  2. Does chronic idiopathic dizziness reflect an impairment of sensory predictions of self-motion?

    Directory of Open Access Journals (Sweden)

    Joern K Pomper

    2013-11-01

    Full Text Available Most patients suffering from chronic idiopathic dizziness do not present signs of vestibular dysfunction or organic failures of other kinds. Hence, this kind of dizziness is commonly seen as psychogenic in nature, sharing commonalities with specific phobias, panic disorder and generalized anxiety. A more specific concept put forward by Brandt and Dieterich (1986) states that these patients suffer from dizziness because of an inadequate compensation of self-induced sensory stimulation. According to this hypothesis, self-motion-induced reafferent visual stimulation is interpreted as motion in the world, since the predictive signal reflecting the consequences of self-motion, needed to compensate the reafferent stimulus, is inadequate. While conceptually intriguing, experimental evidence supporting the idea of an inadequate prediction of the sensory consequences of one's own movements has as yet been lacking. Here we tested this hypothesis by applying it to the perception of background motion induced by smooth-pursuit eye movements. As a matter of fact, we found the same mildly undercompensating prediction, responsible for the perception of slight illusory world motion (the 'Filehne illusion'), in the 15 patients tested and in their age-matched controls. Likewise, the ability to adapt this prediction to the needs of the visual context was not impaired in patients. Finally, we could not find any correlation between measures of the individual severity of dizziness and the ability to predict. In sum, our results do not support the concept of a deviant prediction of self-induced sensory stimulation as the cause of chronic idiopathic dizziness.

  3. Type of featural attention differentially modulates hMT+ responses to illusory motion aftereffects.

    Science.gov (United States)

    Castelo-Branco, Miguel; Kozak, Lajos R; Formisano, Elia; Teixeira, João; Xavier, João; Goebel, Rainer

    2009-11-01

    Activity in the human motion complex (hMT+/V5) is related to the perception of motion, be it real surface motion or an illusion of motion such as apparent motion (AM) or the motion aftereffect (MAE). It has been a long-standing debate whether illusory motion-related activations in hMT+ represent the motion itself or attention to it. We asked whether hMT+ responses to MAEs are present when shifts in arousal are suppressed and attention is focused on concurrent motion versus nonmotion features. Significant enhancement of hMT+ activity was observed during MAEs when attention was focused either on concurrent spatial angle or on color features. This observation was confirmed by direct comparison of adapting (MAE-inducing) versus nonadapting conditions. In contrast, this effect was diminished when subjects had to report on concomitant speed changes of superimposed AM. The same finding was observed for concomitant orthogonal real motion (RM), suggesting that selective attention to concurrent illusory or real motion interfered with the saliency of MAE signals in hMT+. We conclude that MAE-related changes in the global activity of hMT+ are present provided selective attention is not focused on an interfering feature such as concurrent motion. Accordingly, there is a genuine MAE-related motion signal in hMT+ that is explained neither by shifts in arousal nor by selective attention.

  4. An adaptive neural mechanism for acoustic motion perception with varying sparsity

    DEFF Research Database (Denmark)

    Shaikh, Danish; Manoonpong, Poramate

    2017-01-01

    extracts directional information via a model of the peripheral auditory system of lizards. The mechanism uses only this directional information obtained via specific motor behaviour to learn the angular velocity of unoccluded sound stimuli in motion. In nature however the stimulus being tracked may...

  5. Motion planning for autonomous vehicle based on radial basis function neural network in unstructured environment.

    Science.gov (United States)

    Chen, Jiajia; Zhao, Pan; Liang, Huawei; Mei, Tao

    2014-09-18

    The autonomous vehicle is an automated system equipped with features such as environment perception, decision-making, motion planning, and control and execution technology. Navigating in an unstructured and complex environment is a huge challenge for autonomous vehicles, due to the irregular shape of the road, the requirement of real-time planning, and the nonholonomic constraints of the vehicle. This paper presents a motion planning method, based on the Radial Basis Function (RBF) neural network, to guide the autonomous vehicle in unstructured environments. The proposed algorithm extracts the drivable region from the perception grid map based on the global path, which is available in the road network. Sample points are randomly selected in the drivable region, and a gradient descent method is used to train the RBF network. The parameters of the motion-planning algorithm are verified through simulation and experiment. It is observed that the proposed approach produces a flexible, smooth, and safe path that can fit any road shape. The method is implemented on an autonomous vehicle and verified against many outdoor scenes; furthermore, a comparison of the proposed method with the existing well-known Rapidly-exploring Random Tree (RRT) method is presented. The experimental results show that the proposed method is highly effective in planning the vehicle path and offers better motion quality.
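
    As an illustration of the kind of fit described in this record, the sketch below trains a small radial basis function network by gradient descent on points sampled from a made-up drivable region; the data, layout and parameters are purely illustrative and are not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in for points sampled from the drivable region of a perception grid map:
        # lateral offset y as a function of distance x along the global path.
        x = np.linspace(0.0, 50.0, 80)
        y = 2.0 * np.sin(x / 10.0) + rng.normal(0.0, 0.1, x.size)

        centers = np.linspace(0.0, 50.0, 10)   # RBF centers spread along the path
        width = 5.0                            # shared Gaussian width

        def design_matrix(xs):
            # One Gaussian basis function per center.
            return np.exp(-((xs[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

        Phi = design_matrix(x)
        w = np.zeros(centers.size)             # output weights to be learned
        lr = 0.05
        for _ in range(2000):                  # plain batch gradient descent on squared error
            err = Phi @ w - y
            w -= lr * Phi.T @ err / x.size

        # The fitted curve can then serve as a smooth candidate path for the planner.
        path = design_matrix(np.linspace(0.0, 50.0, 200)) @ w
        print(f"training RMSE: {np.sqrt(np.mean((Phi @ w - y) ** 2)):.3f}")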

  6. Hierarchical Motion Control for a Team of Humanoid Soccer Robots

    Directory of Open Access Journals (Sweden)

    Seung-Joon Yi

    2016-02-01

    Full Text Available Robot soccer has become an effective benchmarking problem for robotics research, as it requires many aspects of robotics, including perception, self-localization, motion planning and distributed coordination, to work in uncertain and adversarial environments. Especially with humanoid robots that lack inherent stability, a capable and robust motion controller is crucial for generating walking and kicking motions without losing balance. In this paper, we describe the details of a motion controller that controls a team of humanoid soccer robots and consists of a hierarchy of controllers with different time frames and abstraction levels. A low-level controller governs the real-time control of each joint angle, using either target joint angles or target endpoint transforms. A mid-level controller handles bipedal locomotion and balancing of the robot. A high-level controller decides the long-term behavior of the robot, and finally the team-level controller coordinates the behavior of a group of robots by means of asynchronous communication between the robots. The suggested motion system has been successfully used by many humanoid robot teams at the RoboCup international robot soccer competitions, which has earned us five championships in a row.

  7. Visual event-related potentials to biological motion stimuli in autism spectrum disorders

    Science.gov (United States)

    Bletsch, Anke; Krick, Christoph; Siniatchkin, Michael; Jarczok, Tomasz A.; Freitag, Christine M.; Bender, Stephan

    2014-01-01

    Atypical visual processing of biological motion contributes to social impairments in autism spectrum disorders (ASD). However, the exact temporal sequence of deficits in cortical biological motion processing in ASD has not been studied to date. We used 64-channel electroencephalography to study event-related potentials associated with human motion perception in 17 children and adolescents with ASD and 21 typical controls. A spatio-temporal source analysis was performed to assess the brain structures involved in these processes. We expected altered activity already during early stimulus processing and reduced activity during subsequent biological-motion-specific processes in ASD. In response to both random and biological motion, the P100 amplitude was decreased, suggesting unspecific deficits in visual processing, and the occipito-temporal N200 showed atypical lateralization in ASD, suggesting altered hemispheric specialization. A slow positive deflection after 400 ms, reflecting top-down processes, and human motion-specific dipole activation differed slightly between groups, with reduced and more diffuse activation in the ASD group. The latter could be an indicator of a disrupted neuronal network for biological motion processing in ASD. Furthermore, early visual processing (P100) seems to be correlated with biological motion-specific activation. This emphasizes the relevance of early sensory processing for higher-order processing deficits in ASD. PMID:23887808

  8. Detection of visual events along the apparent motion trace in patients with paranoid schizophrenia.

    Science.gov (United States)

    Sanders, Lia Lira Olivier; Muckli, Lars; de Millas, Walter; Lautenschlager, Marion; Heinz, Andreas; Kathmann, Norbert; Sterzer, Philipp

    2012-07-30

    Dysfunctional prediction in sensory processing has been suggested as a possible causal mechanism in the development of delusions in patients with schizophrenia. Previous studies in healthy subjects have shown that while the perception of apparent motion can mask visual events along the illusory motion trace, such motion masking is reduced when events are spatio-temporally compatible with the illusion, and, therefore, predictable. Here we tested the hypothesis that this specific detection advantage for predictable target stimuli on the apparent motion trace is reduced in patients with paranoid schizophrenia. Our data show that, although target detection along the illusory motion trace is generally impaired, both patients and healthy control participants detect predictable targets more often than unpredictable targets. Patients had a stronger motion masking effect when compared to controls. However, patients showed the same advantage in the detection of predictable targets as healthy control subjects. Our findings reveal stronger motion masking but intact prediction of visual events along the apparent motion trace in patients with paranoid schizophrenia and suggest that the sensory prediction mechanism underlying apparent motion is not impaired in paranoid schizophrenia. Copyright © 2012. Published by Elsevier Ireland Ltd.

  9. The Temporal Dynamics of Feature Integration for Color, form, and Motion

    Directory of Open Access Journals (Sweden)

    KS Pilz

    2012-07-01

    Full Text Available When two similar visual stimuli are presented in rapid succession, only their fused image is perceived, without conscious access to the individual stimuli. Such feature fusion occurs both for color (e.g., Efron, 1973) and form (e.g., Scharnowski et al., 2007). For verniers, the fusion process lasts for more than 400 ms, as has been shown using TMS (Scharnowski et al., 2009). In three experiments, we used light masks to investigate the time course of feature fusion for color, form, and motion. In experiment one, two verniers were presented in rapid succession with opposite offset directions. Subjects had to indicate the offset direction of the vernier. In a second experiment, a red and a green disk were presented in rapid succession, and subjects had to indicate whether they perceived the fused yellow disk rather than red or green. In a third experiment, three frames of random dots were presented successively. The first two frames created a percept of apparent motion to the upper right and the last two frames one to the upper left, or vice versa. Subjects had to indicate the direction of motion. All stimuli were presented foveally. In all three experiments, we first balanced performance so that neither the first nor the second stimulus dominated the fused percept. In a second step, a light mask was presented either before, during, or after stimulus presentation. Depending on its presentation time, the light mask modulated the fusion process so that either the first or the second stimulus dominated the percept. Our results show that unconscious feature fusion lasts more than five times longer than the actual stimulus duration, which indicates that individual features are stored for a substantial amount of time before they are integrated.

  10. The Flash-Lag Effect as a Motion-Based Predictive Shift.

    Directory of Open Access Journals (Sweden)

    Mina A Khoei

    2017-01-01

    Full Text Available Due to its inherent neural delays, the visual system has an outdated access to sensory information about the current position of moving objects. In contrast, living organisms are remarkably able to track and intercept moving objects under a large range of challenging environmental conditions. Physiological, behavioral and psychophysical evidence strongly suggests that position coding is extrapolated using an explicit and reliable representation of the object's motion, but it is still unclear how these two representations interact. For instance, the so-called flash-lag effect supports the idea of a differential processing of position between moving and static objects. Although elucidating such mechanisms is crucial in our understanding of the dynamics of visual processing, a theory is still missing to explain the different facets of this visual illusion. Here, we reconsider several of the key aspects of the flash-lag effect in order to explore the role of motion upon neural coding of objects' position. First, we formalize the problem using a Bayesian modeling framework which includes a graded representation of the degree of belief about visual motion. We introduce a motion-based prediction model as a candidate explanation for the perception of coherent motion. By including the knowledge of a fixed delay, we can model the dynamics of sensory information integration by extrapolating the information acquired at previous instants in time. Next, we simulate the optimal estimation of object position with and without delay compensation and compare it with human perception under a broad range of different psychophysical conditions. Our computational study suggests that the explicit, probabilistic representation of velocity information is crucial in explaining position coding, and therefore the flash-lag effect. We discuss these theoretical results in light of the putative corrective mechanisms that can be used to cancel out the detrimental effects of neural delays.
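
    In its simplest deterministic reading (a sketch of the idea, not the paper's full probabilistic model), the motion-based predictive shift amounts to extrapolating the delayed position estimate along the represented velocity to compensate a fixed neural delay \tau:

        \hat{x}(t) = x(t - \tau) + \hat{v}(t - \tau)\,\tau

    A flashed, stationary object carries no velocity estimate (\hat{v} = 0) and receives no such shift, so a co-localized moving object is perceived ahead of the flash by roughly \hat{v}\tau, which is the flash-lag offset.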

  11. A neural model of motion processing and visual navigation by cortical area MST.

    Science.gov (United States)

    Grossberg, S; Mingolla, E; Pack, C

    1999-12-01

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.

  12. Remote operation: a selective review of research into visual depth perception.

    Science.gov (United States)

    Reinhardt-Rutland, A H

    1996-07-01

    Some perceptual motor operations are performed remotely; examples include the handling of life-threatening materials and surgical procedures. A camera conveys the site of operation to a TV monitor, so depth perception relies mainly on pictorial information, perhaps with enhancement of the occlusion cue by motion. However, motion information such as motion parallax is not likely to be important. The effectiveness of pictorial information is diminished by monocular and binocular information conveying flatness of the screen and by difficulties in scaling: Only a degree of relative depth can be conveyed. Furthermore, pictorial information can mislead. Depth perception is probably adequate in remote operation, if target objects are well separated, with well-defined edges and familiar shapes. Stereoscopic viewing systems are being developed to introduce binocular information to remote operation. However, stereoscopic viewing is problematic because binocular disparity conflicts with convergence and monocular information. An alternative strategy to improve precision in remote operation may be to rely on individuals who lack binocular function: There is redundancy in depth information, and such individuals seem to compensate for the lack of binocular function.

  13. Multimodal Perception and Multicriterion Control of Nested Systems. 1; Coordination of Postural Control and Vehicular Control

    Science.gov (United States)

    Riccio, Gary E.; McDonald, P. Vernon

    1998-01-01

    The purpose of this report is to identify the essential characteristics of goal-directed whole-body motion. The report is organized into three major sections (Sections 2, 3, and 4). Section 2 reviews general themes from ecological psychology and control-systems engineering that are relevant to the perception and control of whole-body motion. These themes provide an organizational framework for analyzing the complex and interrelated phenomena that are the defining characteristics of whole-body motion. Section 3 of this report applies the organizational framework from the first section to the problem of perception and control of aircraft motion. This is a familiar problem in control-systems engineering and ecological psychology. Section 4 examines an essential but generally neglected aspect of vehicular control: coordination of postural control and vehicular control. To facilitate presentation of this new idea, postural control and its coordination with vehicular control are analyzed in terms of conceptual categories that are familiar in the analysis of vehicular control.

  14. Cross-category adaptation: objects produce gender adaptation in the perception of faces.

    Directory of Open Access Journals (Sweden)

    Amir Homayoun Javadi

    Full Text Available Adaptation aftereffects have been found for low-level visual features such as colour, motion and shape perception, as well as higher-level features such as gender, race and identity in domains such as faces and biological motion. It is not yet clear if adaptation effects in humans extend beyond this set of higher order features. The aim of this study was to investigate whether objects highly associated with one gender, e.g. high heels for females or electric shavers for males can modulate gender perception of a face. In two separate experiments, we adapted subjects to a series of objects highly associated with one gender and subsequently asked participants to judge the gender of an ambiguous face. Results showed that participants are more likely to perceive an ambiguous face as male after being exposed to objects highly associated to females and vice versa. A gender adaptation aftereffect was obtained despite the adaptor and test stimuli being from different global categories (objects and faces respectively. These findings show that our perception of gender from faces is highly affected by our environment and recent experience. This suggests two possible mechanisms: (a that perception of the gender associated with an object shares at least some brain areas with those responsible for gender perception of faces and (b adaptation to gender, which is a high-level concept, can modulate brain areas that are involved in facial gender perception through top-down processes.

  15. Cross-category adaptation: objects produce gender adaptation in the perception of faces.

    Science.gov (United States)

    Javadi, Amir Homayoun; Wee, Natalie

    2012-01-01

    Adaptation aftereffects have been found for low-level visual features such as colour, motion and shape perception, as well as higher-level features such as gender, race and identity in domains such as faces and biological motion. It is not yet clear if adaptation effects in humans extend beyond this set of higher order features. The aim of this study was to investigate whether objects highly associated with one gender, e.g. high heels for females or electric shavers for males can modulate gender perception of a face. In two separate experiments, we adapted subjects to a series of objects highly associated with one gender and subsequently asked participants to judge the gender of an ambiguous face. Results showed that participants are more likely to perceive an ambiguous face as male after being exposed to objects highly associated to females and vice versa. A gender adaptation aftereffect was obtained despite the adaptor and test stimuli being from different global categories (objects and faces respectively). These findings show that our perception of gender from faces is highly affected by our environment and recent experience. This suggests two possible mechanisms: (a) that perception of the gender associated with an object shares at least some brain areas with those responsible for gender perception of faces and (b) adaptation to gender, which is a high-level concept, can modulate brain areas that are involved in facial gender perception through top-down processes.

  16. Neural Response to Biological Motion in Healthy Adults Varies as a Function of Autistic-Like Traits

    Directory of Open Access Journals (Sweden)

    Meghan H. Puglia

    2017-07-01

    Full Text Available Perception of biological motion is an important social cognitive ability that has been mapped to specialized brain regions. Perceptual deficits and neural differences during biological motion perception have previously been associated with autism, a disorder characterized by social and communication difficulties and repetitive and restricted interests and behaviors. However, the traits associated with autism are not limited to diagnostic categories, but are normally distributed within the general population and show the same patterns of heritability across the continuum. In the current study, we investigate whether self-reported autistic-like traits in healthy adults are associated with variable neural response during passive viewing of biological motion displays. Results show that more autistic-like traits, particularly those associated with the communication domain, are associated with increased neural response in key regions involved in social cognitive processes, including prefrontal and left temporal cortices. This distinct pattern of activation might reflect differential neurodevelopmental processes for individuals with varying autistic-like traits, and highlights the importance of considering the full trait continuum in future work.

  17. Figure-ground segregation can rely on differences in motion direction.

    Science.gov (United States)

    Kandil, Farid I; Fahle, Manfred

    2004-12-01

    If the elements within a figure move synchronously while those in the surround move at a different time, the figure is easily segregated from the surround and thus perceived. Lee and Blake (1999) [Visual form created solely from temporal structure. Science, 284, 1165-1168] demonstrated that this figure-ground separation may be based not only on time differences between motion onsets, but also on the differences between reversals of motion direction. However, Farid and Adelson (2001) [Synchrony does not promote grouping in temporally structured displays. Nature Neuroscience, 4, 875-876] argued that figure-ground segregation in the motion-reversal experiment might have been based on a contrast artefact and concluded that (a)synchrony as such was 'not responsible for the perception of form in these or earlier displays'. Here, we present experiments that avoid contrast artefacts but still produce figure-ground segregation based on purely temporal cues. Our results show that subjects can segregate figure from ground even though they are unable to use motion reversals as such. Subjects detect the figure when either (i) motion stops (leading to contrast artefacts), or (ii) motion directions differ between figure and ground. Segregation requires minimum delays of about 15 ms. We argue that, whatever the underlying cues and mechanisms, a second stage beyond motion detection is required to globally compare the outputs of local motion detectors and to segregate figure from ground. Since analogous changes take place in both figure and ground in rapid succession, this second stage has to detect the asynchrony with high temporal precision.

  18. Dynamic facial expressions evoke distinct activation in the face perception network: a connectivity analysis study.

    Science.gov (United States)

    Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl

    2012-02-01

    Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.

  19. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues: two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Cognitive suppression of tilt sensations during linear horizontal ego-motion in the dark

    NARCIS (Netherlands)

    Wertheim, A.H.; Mesland, B.S.; Bles, W.

    2001-01-01

    On the basis of models of otolith functioning, one would expect that, during sinusoidal linear self-motion in darkness, percepts of body tilt are experienced. However, this is normally not the case, which suggests that the otoliths are not responsive to small deviations from the vertical of the…

  1. A Compact VLSI System for Bio-Inspired Visual Motion Estimation.

    Science.gov (United States)

    Shi, Cong; Luo, Gang

    2018-04-01

    This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.
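
    The sketch below illustrates the motion-energy stage named in this record with a minimal one-dimensional opponent-energy estimator built from quadrature spatiotemporal filters (in the spirit of the classic Adelson-Bergen scheme); the filter sizes, frequencies and test stimulus are illustrative and are not the parameters of the VLSI design.

        import numpy as np

        def gabor_pair(size, freq):
            # Even (cosine) and odd (sine) filters under a Gaussian envelope.
            t = np.arange(size) - size // 2
            env = np.exp(-t**2 / (2.0 * (size / 6.0) ** 2))
            return env * np.cos(2 * np.pi * freq * t), env * np.sin(2 * np.pi * freq * t)

        def opponent_energy(stimulus, f_space=0.1, f_time=0.1):
            # stimulus: 2-D array indexed by (time, space)
            se, so = gabor_pair(21, f_space)   # even/odd spatial filters
            te, to = gabor_pair(21, f_time)    # even/odd temporal filters

            def st_filter(sp, tp):
                # Separable spatiotemporal convolution: filter space first, then time.
                r = np.apply_along_axis(lambda row: np.convolve(row, sp, 'same'), 1, stimulus)
                return np.apply_along_axis(lambda col: np.convolve(col, tp, 'same'), 0, r)

            # Quadrature pairs oriented for rightward and leftward motion.
            r1, r2 = st_filter(se, te) + st_filter(so, to), st_filter(so, te) - st_filter(se, to)
            l1, l2 = st_filter(se, te) - st_filter(so, to), st_filter(so, te) + st_filter(se, to)
            e_right = r1 ** 2 + r2 ** 2
            e_left = l1 ** 2 + l2 ** 2
            return (e_right - e_left).mean()   # > 0 indicates net rightward motion

        # Drifting sine grating moving rightward at one pixel per frame.
        t, x = np.meshgrid(np.arange(64), np.arange(64), indexing='ij')
        grating = np.sin(2 * np.pi * 0.1 * (x - t))
        print("opponent energy:", opponent_energy(grating))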

  2. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory.

    Science.gov (United States)

    Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E

    2010-05-01

    The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
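
    The same reliability-weighted integration rule sketched earlier in this section yields the quantitative prediction that such studies test psychophysically: the bimodal heading-discrimination threshold should fall below both unisensory thresholds. A tiny illustration follows; the threshold values are made up.

        import numpy as np

        def predicted_bimodal_threshold(sigma_visual, sigma_vestibular):
            # Thresholds are proportional to the standard deviation of each cue's estimate.
            return np.sqrt((sigma_visual**2 * sigma_vestibular**2) /
                           (sigma_visual**2 + sigma_vestibular**2))

        def visual_weight(sigma_visual, sigma_vestibular):
            # Reliability-weighted contribution of the visual cue to the combined heading estimate.
            return (1.0 / sigma_visual**2) / (1.0 / sigma_visual**2 + 1.0 / sigma_vestibular**2)

        print(predicted_bimodal_threshold(3.0, 4.0))  # 2.4 deg: below either cue alone
        print(visual_weight(3.0, 4.0))                # ~0.64: the more reliable cue dominates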

  3. The contribution of the body and motion to whole person recognition.

    Science.gov (United States)

    Simhi, Noa; Yovel, Galit

    2016-05-01

    While the importance of faces in person recognition has been the subject of many studies, there are relatively few studies examining recognition of the whole person in motion even though this most closely resembles daily experience. Most studies examining the whole body in motion use point light displays, which have many advantages but are impoverished and unnatural compared to real life. To determine which factors are used when recognizing the whole person in motion we conducted two experiments using naturalistic videos. In Experiment 1 we used a matching task in which the first stimulus in each pair could either be a video or multiple still images from a video of the full body. The second stimulus, on which person recognition was performed, could be an image of either the full body or face alone. We found that the body contributed to person recognition beyond the face, but only after exposure to motion. Since person recognition was performed on still images, the contribution of motion to person recognition was mediated by form-from-motion processes. To assess whether dynamic identity signatures may also contribute to person recognition, in Experiment 2 we presented people in motion and examined person recognition from videos compared to still images. Results show that dynamic identity signatures did not contribute to person recognition beyond form-from-motion processes. We conclude that the face, body and form-from-motion processes all appear to play a role in unfamiliar person recognition, suggesting the importance of considering the whole body and motion when examining person perception. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Is perception of self-motion speed a necessary condition for intercepting a moving target while walking?

    Science.gov (United States)

    Morice, Antoine H P; Wallet, Grégory; Montagne, Gilles

    2014-04-30

    While it has been shown that the Global Optic Flow Rate (GOFR) is used in the control of self-motion speed, this study examined its relevance in the control of interceptive actions while walking. We asked participants to intercept approaching targets by adjusting their walking speed in a virtual environment, and predicted that the influence of the GOFR would depend on their interception strategy. Indeed, unlike the Constant Bearing Angle (CBA) strategy, the Modified Required Velocity (MRV) strategy relies on the perception of self-displacement speed. Conversely, the CBA strategy involves specific speed adjustments depending on the curvature of the target's trajectory, whereas the MRV does not. We hypothesized that one of these two strategies is selected depending on the informational content of the environment. We thus manipulated the curvature and display of the target's trajectory, and the relationship between physical walking speed and the GOFR (through eye-height manipulations). Our results showed that when the target trajectory was not displayed, walking speed profiles were affected by curvature manipulations. When it was displayed, walking speed profiles were less affected by curvature manipulations and were instead affected by the GOFR manipulations. Taken together, these results show that the use of the GOFR for intercepting a moving target while walking depends on the informational content of the environment. Finally, we discuss the complementary roles of these two perceptual-motor strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
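
    For readers unfamiliar with the CBA strategy mentioned above, the sketch below gives one common formalization: a nulling law in which walking acceleration is driven by the rate of change of the target's bearing angle. The gain, geometry, and parameter values are illustrative assumptions, not the model or parameters used in this study.

    ```python
    import numpy as np

    def simulate_cba(target_x0=8.0, target_y=6.0, target_speed=1.0,
                     walker_speed=1.2, k=4.0, dt=0.05, n_steps=160):
        """Minimal simulation of speed regulation by a constant-bearing-angle law.

        Geometry (illustrative): the walker moves along the +y axis; the target
        moves leftward along the line y = target_y and should be intercepted
        where it crosses x = 0. The walker accelerates so as to null any change
        in the target's bearing angle (accel = -k * d(bearing)/dt; the sign of
        the law follows from this particular geometry).
        """
        walker_y = 0.0
        target_x = target_x0
        prev_bearing = np.arctan2(target_x, target_y - walker_y)
        for _ in range(n_steps):
            target_x -= target_speed * dt
            bearing = np.arctan2(target_x, target_y - walker_y)
            bearing_rate = (bearing - prev_bearing) / dt
            walker_speed = max(0.0, walker_speed - k * bearing_rate * dt)
            walker_y += walker_speed * dt
            prev_bearing = bearing
        return walker_y, walker_speed

    # Prints the walker's final position and speed at the moment the target
    # reaches x = 0 (with these defaults, 8 s of simulated time).
    print(simulate_cba())
    ```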

  5. Temporal dynamics of 2D motion integration for ocular following in macaque monkeys.

    Science.gov (United States)

    Barthélemy, Fréderic V; Fleuriet, Jérome; Masson, Guillaume S

    2010-03-01

    Several recent studies have shown that extracting pattern motion direction is a dynamical process where edge motion is first extracted and pattern-related information is encoded with a small time lag by MT neurons. Similar dynamics were found for human reflexive or voluntary tracking. Here, we bring an essential, but still missing, piece of information by documenting macaque ocular following responses to gratings, unikinetic plaids, and barber-poles. We found that ocular tracking was always initiated first in the grating motion direction with ultra-short latencies (approximately 55 ms). A second component was driven only 10-15 ms later, rotating tracking toward pattern motion direction. At the end of the open-loop period, tracking direction was aligned with pattern motion direction (plaids) or the average of the line-ending motion directions (barber-poles). We characterized the contrast dependency of each component. Both timing and direction of ocular following were quantitatively very consistent with the dynamics of neuronal responses reported by others. Overall, we found a remarkable consistency between neuronal dynamics and monkey behavior, advocating for a direct link between the neuronal solution of the aperture problem and primate perception and action.

  6. Self-Organizing Neural Integration of Pose-Motion Features for Human Action Recognition

    Directory of Open Access Journals (Sweden)

    German Ignacio Parisi

    2015-06-01

    Full Text Available The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented towards human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatiotemporal dependencies. During training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best 21 results for a public benchmark of domestic daily actions.
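
    As background on the Growing When Required mechanism referenced above, the following is a minimal sketch of a GWR update step (after the general algorithm of Marsland and colleagues), in which a new node is inserted only when the best-matching node is both a poor match and already well trained. The thresholds, learning rates, and simplified habituation rule are illustrative assumptions, not the parameters of the model described in this record.

    ```python
    import numpy as np

    class GWR:
        """Minimal Growing When Required network (a simplified sketch; the
        thresholds, learning rates and habituation rule are illustrative)."""

        def __init__(self, dim, act_thresh=0.35, fire_thresh=0.1,
                     eps_b=0.1, eps_n=0.01, max_age=50):
            self.w = [np.random.rand(dim), np.random.rand(dim)]  # node weights
            self.h = [1.0, 1.0]                                  # firing counters
            self.edges = {}                                      # (i, j) -> age
            self.act_thresh, self.fire_thresh = act_thresh, fire_thresh
            self.eps_b, self.eps_n, self.max_age = eps_b, eps_n, max_age

        def _neighbors(self, i):
            return [j for (a, b) in self.edges for j in (a, b)
                    if i in (a, b) and j != i]

        def step(self, x):
            # 1. Find the best- and second-best-matching nodes.
            d = [np.linalg.norm(x - w) for w in self.w]
            b, s = np.argsort(d)[:2]
            self.edges[tuple(sorted((b, s)))] = 0          # connect / refresh edge
            activity = np.exp(-d[b])

            if activity < self.act_thresh and self.h[b] < self.fire_thresh:
                # 2a. Grow: the best node is a poor match and already well trained.
                r = len(self.w)
                self.w.append((self.w[b] + x) / 2.0)
                self.h.append(1.0)
                self.edges.pop(tuple(sorted((b, s))), None)
                self.edges[tuple(sorted((r, b)))] = 0
                self.edges[tuple(sorted((r, s)))] = 0
            else:
                # 2b. Adapt the best node and its topological neighbors.
                self.w[b] = self.w[b] + self.eps_b * self.h[b] * (x - self.w[b])
                for n in self._neighbors(b):
                    self.w[n] = self.w[n] + self.eps_n * self.h[n] * (x - self.w[n])

            # 3. Habituate the winner (simplified exponential decay) and age edges.
            self.h[b] *= 0.9
            self.edges = {e: age + 1 for e, age in self.edges.items()
                          if age + 1 <= self.max_age}
    ```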

  7. Synaptic Correlates of Low-Level Perception in V1.

    Science.gov (United States)

    Gerard-Mercier, Florian; Carelli, Pedro V; Pananceau, Marc; Troncoso, Xoana G; Frégnac, Yves

    2016-04-06

    The computational role of primary visual cortex (V1) in low-level perception remains largely debated. A dominant view assumes the prevalence of higher cortical areas and top-down processes in binding information across the visual field. Here, we investigated the role of long-distance intracortical connections in form and motion processing by measuring, with intracellular recordings, their synaptic impact on neurons in area 17 (V1) of the anesthetized cat. By systematically mapping synaptic responses to stimuli presented in the nonspiking surround of V1 receptive fields, we provide the first quantitative characterization of the lateral functional connectivity kernel of V1 neurons. Our results revealed at the population level two structural-functional biases in the synaptic integration and dynamic association properties of V1 neurons. First, subthreshold responses to oriented stimuli flashed in isolation in the nonspiking surround exhibited a geometric organization around the preferred orientation axis mirroring the psychophysical "association field" for collinear contour perception. Second, apparent motion stimuli, for which horizontal and feedforward synaptic inputs summed in-phase, evoked dominantly facilitatory nonlinear interactions, specifically during centripetal collinear activation along the preferred orientation axis, at saccadic-like speeds. This spatiotemporal integration property, which could constitute the neural correlate of a human perceptual bias in speed detection, suggests that local (orientation) and global (motion) information is already linked within V1. We propose the existence of a "dynamic association field" in V1 neurons, whose spatial extent and anisotropy are transiently updated and reshaped as a function of changes in the retinal flow statistics imposed during natural oculomotor exploration. The computational role of primary visual cortex in low-level perception remains debated. The expression of this "pop-out" perception is often assumed

  8. The Perception is a Prism: body, presence and technologies

    Directory of Open Access Journals (Sweden)

    Enrico

    2014-05-01

    Full Text Available Starting from an interdisciplinary perspective on the concepts of body, perception, and technologies in the contemporary scene, this text attempts to define the general aesthetic notion of bodyscape as an extension of the performer's perception. Through a survey of key practices from the contemporary scene, such as the choreographic compositions of Myriam Gourfink and Isabelle Choinière and the motion-signature project of Martine Époque and Denis Poulin, it analyses the impact of technologies on the performer's perceptual process in the composition of movement and on the changing notion of presence. In this sense, a series of modifications that also influence the spectator's perception is presented. Finally, the notion of empathy is discussed, along with how it applies to a digital image of the body.

  9. General principles in motion vision: color blindness of object motion depends on pattern velocity in honeybee and goldfish.

    Science.gov (United States)

    Stojcev, Maja; Radtke, Nils; D'Amaro, Daniele; Dyer, Adrian G; Neumeyer, Christa

    2011-07-01

    Visual systems can undergo striking adaptations to specific visual environments during evolution, but they can also be very "conservative." This seems to be the case in motion vision, which is surprisingly similar in species as distant as honeybee and goldfish. In both visual systems, motion vision measured with the optomotor response is color blind and mediated by one photoreceptor type only. Here, we ask whether this is also the case if the moving stimulus is restricted to a small part of the visual field, and test what influence velocity may have on chromatic motion perception. Honeybees were trained to discriminate between clockwise- and counterclockwise-rotating sector disks. Six types of disk stimuli differing in green receptor contrast were tested using three different rotational velocities. When green receptor contrast was at a minimum, bees were able to discriminate rotation directions with all colored disks at slow velocities of 6 and 12 Hz contrast frequency, but not at a relatively high velocity of 24 Hz. In the goldfish experiment, the animals were trained to detect a moving red or blue disk presented in a green surround. The ability to discriminate this stimulus from a homogeneous green background was poor when the M-cone type was not modulated, or only slightly modulated, and the stimulus velocity was high (7 cm/s). However, discrimination improved at slower stimulus velocities (4 and 2 cm/s). These behavioral results indicate that there is potentially an object motion system in both honeybee and goldfish, which is able to incorporate color information at relatively low velocities but is color blind at higher speeds. We thus propose that both honeybees and goldfish have multiple subsystems of object motion, which include achromatic as well as chromatic processing.

  10. A Common Framework for the Analysis of Complex Motion? Standstill and Capture Illusions

    Directory of Open Access Journals (Sweden)

    Max Reinhard Dürsteler

    2014-12-01

    Full Text Available A series of illusions was created by presenting stimuli consisting of two overlapping surfaces, each defined by textures of independent visual features (i.e., modulation of luminance, color, depth, etc.). When presented concurrently with a stationary 2-D luminance texture, observers often fail to perceive the motion of an overlapping stereoscopically defined depth texture. This illusory motion standstill arises due to a failure to represent two independent surfaces (one for the luminance and one for the depth texture) and motion transparency (the ability to perceive motion of both surfaces simultaneously). Instead, the stimulus is represented as a single non-transparent surface taking on the stationary nature of the luminance-defined texture. By contrast, if it is the 2-D luminance-defined texture that is in motion, observers often perceive the stationary depth texture as also moving. In this latter case, the failure to represent the motion transparency of the two textures gives rise to illusory motion capture. Our past work demonstrated that the illusions of motion standstill and motion capture can occur for depth textures that are rotating, expanding/contracting, or spiraling. Here I extend these findings to include stereo-shearing. More importantly, it is the motion (or lack thereof) of the luminance texture that determines how the motion of the depth texture will be perceived. This observation is strongly in favor of a single pathway for complex motion that operates on luminance-defined texture motion signals only. In addition, these complex motion illusions arise with chromatically defined textures with smooth transitions between their colors. This suggests that, with respect to color motion perception, the complex motion pathway is only able to accurately process signals from isoluminant colored textures with sharp transitions between colors, and/or moving at high speeds, which is conceivable if it relies on inputs from a hypothetical dual

  11. Trading of dynamic interaural time and level difference cues and its effect on the auditory motion-onset response measured with electroencephalography.

    Science.gov (United States)

    Altmann, Christian F; Ueda, Ryuhei; Bucher, Benoit; Furukawa, Shigeto; Ono, Kentaro; Kashino, Makio; Mima, Tatsuya; Fukuyama, Hidenao

    2017-10-01

    Interaural time (ITD) and level differences (ILD) constitute the two main cues for sound localization in the horizontal plane. Despite extensive research in animal models and humans, the mechanism of how these two cues are integrated into a unified percept is still far from clear. In this study, our aim was to test with human electroencephalography (EEG) whether integration of dynamic ITD and ILD cues is reflected in the so-called motion-onset response (MOR), an evoked potential elicited by moving sound sources. To this end, ITD and ILD trajectories were determined individually by cue trading psychophysics. We then measured EEG while subjects were presented with either static click-trains or click-trains that contained a dynamic portion at the end. The dynamic part was created by combining ITD with ILD either congruently to elicit the percept of a right/leftward moving sound, or incongruently to elicit the percept of a static sound. In two experiments that differed in the method to derive individual dynamic cue trading stimuli, we observed an MOR with at least a change-N1 (cN1) component for both the congruent and incongruent conditions at about 160-190 ms after motion-onset. A significant change-P2 (cP2) component for both the congruent and incongruent ITD/ILD combination was found only in the second experiment peaking at about 250 ms after motion onset. In sum, this study shows that a sound which - by a combination of counter-balanced ITD and ILD cues - induces a static percept can still elicit a motion-onset response, indicative of independent ITD and ILD processing at the level of the MOR - a component that has been proposed to be, at least partly, generated in non-primary auditory cortex. Copyright © 2017 Elsevier Inc. All rights reserved.
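
    As an illustration of how congruent and incongruent dynamic ITD/ILD combinations of the kind described above could be constructed from an individually measured trading ratio, the sketch below builds a pair of cue trajectories; the linear ramp, sampling rate, and trading-ratio value are illustrative assumptions, not the stimulus parameters of this study.

    ```python
    import numpy as np

    def itd_ild_trajectories(duration_s=0.5, fs=1000, max_itd_us=300.0,
                             trading_ratio_db_per_100us=4.0, congruent=True):
        """Build dynamic ITD and ILD trajectories for a lateral-motion stimulus.

        In the congruent case both cues point to the same moving location; in
        the incongruent case the ILD is mirrored so that, at the listener's
        individual trading ratio, the two cues cancel and the sound should be
        heard as static. All parameter values here are illustrative assumptions.
        """
        t = np.arange(int(duration_s * fs)) / fs
        itd_us = max_itd_us * t / duration_s                 # linear ITD ramp
        ild_db = (trading_ratio_db_per_100us / 100.0) * itd_us
        if not congruent:
            ild_db = -ild_db                                 # cues cancel
        return t, itd_us, ild_db
    ```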

  12. Pivotal role of hMT+ in long-range disambiguation of interhemispheric bistable surface motion.

    Science.gov (United States)

    Duarte, João Valente; Costa, Gabriel Nascimento; Martins, Ricardo; Castelo-Branco, Miguel

    2017-10-01

    It remains an open question whether long-range disambiguation of ambiguous surface motion can be achieved in early visual cortex or instead in higher level regions, which concerns object/surface segmentation/integration mechanisms. We used a bistable moving stimulus that can be perceived as a pattern comprising both visual hemi-fields moving coherently downward or as two widely segregated nonoverlapping component objects (in each visual hemi-field) moving separately inward. This paradigm requires long-range integration across the vertical meridian leading to interhemispheric binding. Our fMRI study (n = 30) revealed a close relation between activity in hMT+ and perceptual switches involving interhemispheric segregation/integration of motion signals, crucially under nonlocal conditions where components do not overlap and belong to distinct hemispheres. Higher signal changes were found in hMT+ in response to spatially segregated component (incoherent) percepts than to pattern (coherent) percepts. This did not occur in early visual cortex, unlike apparent motion, which does not entail surface segmentation. We also identified a role for top-down mechanisms in state transitions. Deconvolution analysis of switch-related changes revealed prefrontal, insula, and cingulate areas, with the right superior parietal lobule (SPL) being particularly involved. We observed that directed influences could emerge either from left or right hMT+ during bistable motion integration/segregation. SPL also exhibited significant directed functional connectivity with hMT+ during perceptual state maintenance (Granger causality analysis). Our results suggest that long-range interhemispheric binding of ambiguous motion representations mainly reflects bottom-up processes from hMT+ during perceptual state maintenance. In contrast, state transitions may be influenced by high-level regions such as the SPL. Hum Brain Mapp 38:4882-4897, 2017. © 2017 Wiley Periodicals, Inc.

  13. Discrimination of animate and inanimate motion in 9-month-old infants: an ERP study.

    Science.gov (United States)

    Kaduk, Katharina; Elsner, Birgit; Reid, Vincent M

    2013-10-01

    Simple geometric shapes that move in a self-propelled manner and violate Newtonian laws of motion by acting against gravitational forces tend to induce the judgement that an object is animate. Objects that change their motion only due to external causes are more likely to be judged as inanimate. How the developing brain is employed in the perception of animacy in early ontogeny is currently unknown. The aim of this study was to use ERP techniques to determine if the negative central component (Nc), a waveform related to attention allocation, was differentially affected when an infant observed animate or inanimate motion. Short animated movies comprising a marble moving along a marble run in either an animate or an inanimate manner were presented to 15 infants who were 9 months of age. The ERPs were time-locked to a still frame representing animate or inanimate motion that was displayed following each movie. We found that 9-month-olds are able to discriminate between animate and inanimate motion based on motion cues alone and most likely allocate more attentional resources to the inanimate motion. The present data contribute to our understanding of the animate-inanimate distinction and the Nc as a correlate of infant cognitive processing. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Conjunctions between motion and disparity are encoded with the same spatial resolution as disparity alone.

    Science.gov (United States)

    Allenmark, Fredrik; Read, Jenny C A

    2012-10-10

    Neurons in cortical area MT respond well to transparent streaming motion in distinct depth planes, such as caused by observer self-motion, but do not contain subregions excited by opposite directions of motion. We therefore predicted that spatial resolution for transparent motion/disparity conjunctions would be limited by the size of MT receptive fields, just as spatial resolution for disparity is limited by the much smaller receptive fields found in primary visual cortex, V1. We measured this using a novel "joint motion/disparity grating," on which human observers detected motion/disparity conjunctions in transparent random-dot patterns containing dots streaming in opposite directions on two depth planes. Surprisingly, observers showed the same spatial resolution for these as for pure disparity gratings. We estimate the limiting receptive field diameter at 11 arcmin, similar to V1 and much smaller than MT. Higher internal noise for detecting joint motion/disparity produces a slightly lower high-frequency cutoff of 2.5 cycles per degree (cpd) versus 3.3 cpd for disparity. This suggests that information on motion/disparity conjunctions is available in the population activity of V1 and that this information can be decoded for perception even when it is invisible to neurons in MT.

  15. Perception and the strongest sensory memory trace of multi-stable displays both form shortly after the stimulus onset.

    Science.gov (United States)

    Pastukhov, Alexander

    2016-02-01

    We investigated the relation between perception and sensory memory of multi-stable structure-from-motion displays. The latter is an implicit visual memory that reflects a recent history of perceptual dominance and influences only the initial perception of multi-stable displays. First, we established the earliest time point when the direction of an illusory rotation can be reversed after the display onset (29-114 ms). Because our display manipulation did not bias perception towards a specific direction of illusory rotation but only signaled the change in motion, this means that the perceptual dominance was established no later than 29-114 ms after the stimulus onset. Second, we used orientation-selectivity of sensory memory to establish which display orientation produced the strongest memory trace and when this orientation was presented during the preceding prime interval (80-140 ms). Surprisingly, both estimates point towards the time interval immediately after the display onset, indicating that both perception and sensory memory form at approximately the same time. This suggests a tighter integration between perception and sensory memory than previously thought, warrants a reconsideration of its role in visual perception, and indicates that sensory memory could be a unique behavioral correlate of the earlier perceptual inference that can be studied post hoc.

  16. Motion of the esophagus due to cardiac motion.

    Directory of Open Access Journals (Sweden)

    Jacob Palmer

    Full Text Available When imaging studies (e.g., CT) are used to quantify morphological changes in an anatomical structure, it is necessary to understand the extent and source of motion, which can give imaging artifacts (e.g., blurring or local distortion). The objective of this study was to assess the magnitude of esophageal motion due to cardiac motion. We used retrospective electrocardiogram-gated contrast-enhanced computed tomography angiography images for this study. The anatomic region from the carina to the bottom of the heart was imaged at deep-inspiration breath hold with the patients' arms raised above their shoulders, in a position similar to that used for radiation therapy. The esophagus was delineated on the diastolic phase of cardiac motion, and deformable registration was used to sequentially deform the images in nearest-neighbor phases among the 10 cardiac phases, starting from the diastolic phase. Using the 10 deformation fields generated from the deformable registration, the magnitude of the extreme displacements was then calculated for each voxel, and the mean and maximum displacement was calculated for each computed tomography slice for each patient. The average maximum esophageal displacement due to cardiac motion across all patients was 5.8 mm (standard deviation: 1.6 mm, maximum: 10.0 mm) in the transverse direction. For 21 of 26 patients, the largest esophageal motion was found in the inferior region of the heart; for the other patients, esophageal motion was approximately independent of superior-inferior position. Esophageal motion was larger at cardiac phases where the electrocardiogram R-wave occurs. In conclusion, the magnitude of esophageal motion near the heart due to cardiac motion is similar to that due to other sources of motion, including respiratory motion and intra-fraction motion. A larger cardiac motion will result in a larger esophageal motion over a cardiac cycle.
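
    A minimal sketch of the per-voxel displacement analysis described above, assuming a particular array layout for the deformation fields and the esophagus mask; the shapes, units, component order, and in-plane ("transverse") restriction are assumptions for illustration, not the study's actual code.

    ```python
    import numpy as np

    def esophageal_displacement_stats(deformation_fields, esophagus_mask,
                                      voxel_size_mm=(1.0, 1.0, 2.5)):
        """Summarize esophageal displacement over the cardiac cycle.

        deformation_fields: array of shape (n_phases, z, y, x, 3) holding each
        voxel's displacement in voxels (components ordered x, y, z) at every
        cardiac phase relative to the reference (diastolic) phase.
        esophagus_mask: boolean array of shape (z, y, x) marking the delineated
        esophagus. voxel_size_mm is the (x, y, z) voxel spacing.
        Returns per-slice mean and maximum transverse displacement in mm.
        """
        # Convert voxel displacements to mm and keep only in-plane (x, y) motion.
        disp_mm = deformation_fields * np.asarray(voxel_size_mm)
        transverse = np.linalg.norm(disp_mm[..., :2], axis=-1)  # (phases, z, y, x)

        # Extreme displacement of each voxel across the cardiac phases.
        peak = transverse.max(axis=0)                            # (z, y, x)

        mean_per_slice, max_per_slice = [], []
        for z in range(peak.shape[0]):
            vals = peak[z][esophagus_mask[z]]
            if vals.size:
                mean_per_slice.append(vals.mean())
                max_per_slice.append(vals.max())
        return np.array(mean_per_slice), np.array(max_per_slice)
    ```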

  17. Integration of visual and inertial cues in the perception of angular self-motion

    NARCIS (Netherlands)

    Winkel, K.N. de; Soyka, F.; Barnett-Cowan, M.; Bülthoff, H.H.; Groen, E.L.; Werkhoven, P.J.

    2013-01-01

    The brain is able to determine angular self-motion from visual, vestibular, and kinesthetic information. There is compelling evidence that both humans and non-human primates integrate visual and inertial (i.e., vestibular and kinesthetic) information in a statistically optimal fashion when

  18. Load-sensitive impairment of working memory for biological motion in schizophrenia.

    Science.gov (United States)

    Lee, Hannah; Kim, Jejoong

    2017-01-01

    Impaired working memory (WM) is a core cognitive deficit in schizophrenia. Nevertheless, past studies have reported that patients may also benefit from increasing salience of memory stimuli. Such efficient encoding largely depends upon precise perception. Thus an investigation on the relationship between perceptual processing and WM would be worthwhile. Here, we used biological motion (BM), a socially relevant stimulus that schizophrenics have difficulty discriminating from similar meaningless motions, in a delayed-response task. Non-BM stimuli and static polygons were also used for comparison. In each trial, one of the three types of stimuli was presented followed by two probes, with a short delay in between. Participants were asked to indicate whether one of them was identical to the memory item or both were novel. The number of memory items was one or two. Healthy controls were more accurate in recognizing BM than non-BM regardless of memory loads. Patients with schizophrenia exhibited similar accuracy patterns to those of controls in the Load 1 condition only. These results suggest that information contained in BM could facilitate WM encoding in general, but the effect is vulnerable to the increase of cognitive load in schizophrenia, implying inefficient encoding driven by imprecise perception.

  19. Implied Dynamics Biases the Visual Perception of Velocity

    Science.gov (United States)

    La Scaleia, Barbara; Zago, Myrka; Moscatelli, Alessandro; Lacquaniti, Francesco; Viviani, Paolo

    2014-01-01

    We expand the anecdotic report by Johansson that back-and-forth linear harmonic motions appear uniform. Six experiments explore the role of shape and spatial orientation of the trajectory of a point-light target in the perceptual judgment of uniform motion. In Experiment 1, the target oscillated back-and-forth along a circular arc around an invisible pivot. The imaginary segment from the pivot to the midpoint of the trajectory could be oriented vertically downward (consistent with an upright pendulum), horizontally leftward, or vertically upward (upside-down). In Experiments 2 to 5, the target moved uni-directionally. The effect of suppressing the alternation of movement directions was tested with curvilinear (Experiment 2 and 3) or rectilinear (Experiment 4 and 5) paths. Experiment 6 replicated the upright condition of Experiment 1, but participants were asked to hold the gaze on a fixation point. When some features of the trajectory evoked the motion of either a simple pendulum or a mass-spring system, observers identified as uniform the kinematic profiles close to harmonic motion. The bias towards harmonic motion was most consistent in the upright orientation of Experiment 1 and 6. The bias disappeared when the stimuli were incompatible with both pendulum and mass-spring models (Experiments 3 to 5). The results are compatible with the hypothesis that the perception of dynamic stimuli is biased by the laws of motion obeyed by natural events, so that only natural motions appear uniform. PMID:24667578

  20. Implied dynamics biases the visual perception of velocity.

    Directory of Open Access Journals (Sweden)

    Barbara La Scaleia

    Full Text Available We expand the anecdotic report by Johansson that back-and-forth linear harmonic motions appear uniform. Six experiments explore the role of shape and spatial orientation of the trajectory of a point-light target in the perceptual judgment of uniform motion. In Experiment 1, the target oscillated back-and-forth along a circular arc around an invisible pivot. The imaginary segment from the pivot to the midpoint of the trajectory could be oriented vertically downward (consistent with an upright pendulum), horizontally leftward, or vertically upward (upside-down). In Experiments 2 to 5, the target moved uni-directionally. The effect of suppressing the alternation of movement directions was tested with curvilinear (Experiment 2 and 3) or rectilinear (Experiment 4 and 5) paths. Experiment 6 replicated the upright condition of Experiment 1, but participants were asked to hold the gaze on a fixation point. When some features of the trajectory evoked the motion of either a simple pendulum or a mass-spring system, observers identified as uniform the kinematic profiles close to harmonic motion. The bias towards harmonic motion was most consistent in the upright orientation of Experiment 1 and 6. The bias disappeared when the stimuli were incompatible with both pendulum and mass-spring models (Experiments 3 to 5). The results are compatible with the hypothesis that the perception of dynamic stimuli is biased by the laws of motion obeyed by natural events, so that only natural motions appear uniform.

  1. Implied dynamics biases the visual perception of velocity.

    Science.gov (United States)

    La Scaleia, Barbara; Zago, Myrka; Moscatelli, Alessandro; Lacquaniti, Francesco; Viviani, Paolo

    2014-01-01

    We expand the anecdotic report by Johansson that back-and-forth linear harmonic motions appear uniform. Six experiments explore the role of shape and spatial orientation of the trajectory of a point-light target in the perceptual judgment of uniform motion. In Experiment 1, the target oscillated back-and-forth along a circular arc around an invisible pivot. The imaginary segment from the pivot to the midpoint of the trajectory could be oriented vertically downward (consistent with an upright pendulum), horizontally leftward, or vertically upward (upside-down). In Experiments 2 to 5, the target moved uni-directionally. The effect of suppressing the alternation of movement directions was tested with curvilinear (Experiment 2 and 3) or rectilinear (Experiment 4 and 5) paths. Experiment 6 replicated the upright condition of Experiment 1, but participants were asked to hold the gaze on a fixation point. When some features of the trajectory evoked the motion of either a simple pendulum or a mass-spring system, observers identified as uniform the kinematic profiles close to harmonic motion. The bias towards harmonic motion was most consistent in the upright orientation of Experiment 1 and 6. The bias disappeared when the stimuli were incompatible with both pendulum and mass-spring models (Experiments 3 to 5). The results are compatible with the hypothesis that the perception of dynamic stimuli is biased by the laws of motion obeyed by natural events, so that only natural motions appear uniform.

  2. Perceived state of self during motion can differentially modulate numerical magnitude allocation.

    Science.gov (United States)

    Arshad, Q; Nigmatullina, Y; Roberts, R E; Goga, U; Pikovsky, M; Khan, S; Lobo, R; Flury, A-S; Pettorossi, V E; Cohen-Kadosh, R; Malhotra, P A; Bronstein, A M

    2016-09-01

    Although a direct relationship between numerical allocation and spatial attention has been proposed, recent research suggests that these processes are not directly coupled. In keeping with this, spatial attention shifts induced either via visual or vestibular motion can modulate numerical allocation in some circumstances but not in others. In addition to shifting spatial attention, visual or vestibular motion paradigms also (i) elicit compensatory eye movements which themselves can influence numerical processing and (ii) alter the perceptual state of 'self', inducing changes in bodily self-consciousness impacting upon cognitive mechanisms. Thus, the precise mechanism by which motion modulates numerical allocation remains unknown. We sought to investigate the influence that different perceptual experiences of motion have upon numerical magnitude allocation while controlling for both eye movements and task-related effects. We first used optokinetic visual motion stimulation (OKS) to elicit the perceptual experience of either 'visual world' or 'self'-motion during which eye movements were identical. In a second experiment, we used a vestibular protocol examining the effects of perceived and subliminal angular rotations in darkness, which also provoked identical eye movements. We observed that during the perceptual experience of 'visual world' motion, rightward OKS biased judgments towards smaller numbers, whereas leftward OKS biased judgments towards larger numbers. During the perceptual experience of 'self-motion', judgments were biased towards larger numbers irrespective of the OKS direction. Contrastingly, vestibular motion perception was found not to modulate numerical magnitude allocation, nor was there any differential modulation when comparing 'perceived' vs. 'subliminal' rotations. We provide a novel demonstration that numerical magnitude allocation can be differentially modulated by the perceptual state of self during visual but not vestibular-mediated motion

  3. Développement de la pupillométrie pour la mesure objective des émotions dans le contexte de la consommation alimentaire

    OpenAIRE

    Lemercier, Anaïs

    2014-01-01

    Sensory and hedonic perceptions result from complex integration processes that are not only rational but are also grounded in feelings, emotions, and memories. To better understand consumer behavior, it has become essential to measure emotions in order to grasp their fundamental role in decision-making. In consumer science, emotions are mainly measured with questionnaires. Unfortunately, this measure...

  4. Attentional Networks and Biological Motion

    Directory of Open Access Journals (Sweden)

    Chandramouli Chandrasekaran

    2010-03-01

    Full Text Available Our ability to see meaningful actions when presented with point-light traces of human movement is commonly referred to as the perception of biological motion. While traditional explanations have emphasized the spontaneous and automatic nature of this ability, more recent findings suggest that attention may play a larger role than is typically assumed. In two studies we show that the speed and accuracy of responding to point-light stimuli is highly correlated with the ability to control selective attention. In our first experiment we measured thresholds for determining the walking direction of a masked point-light figure, and performance on a range of attention-related tasks in the same set of observers. Mask-density thresholds for the direction discrimination task varied quite considerably from observer to observer, and this variation was highly correlated with performance on both Stroop and flanker interference tasks. Other components of attention, such as orienting, alerting and visual search efficiency, showed no such relationship. In a second experiment, we examined the relationship between the ability to determine the orientation of unmasked point-light actions and Stroop interference, again finding a strong correlation. Our results are consistent with previous research suggesting that biological motion processing may require attention, and specifically implicate networks of attention related to executive control and selection.

  5. Shaking Takete and Flowing Maluma. Non-Sense Words Are Associated with Motion Patterns.

    Directory of Open Access Journals (Sweden)

    Markus Koppensteiner

    Full Text Available People assign the artificial words takete and kiki to spiky, angular figures and the artificial words maluma and bouba to rounded figures. We examined whether such a cross-modal correspondence could also be found for human body motion. We transferred the body movements of speakers onto two-dimensional coordinates and created animated stick-figures based on this data. Then we invited people to judge these stimuli using the words takete-maluma, bouba-kiki, and several verbal descriptors that served as measures of angularity/smoothness. In addition to this we extracted the quantity of motion, the velocity of motion and the average angle between motion vectors from the coordinate data. Judgments of takete (and kiki) were related to verbal descriptors of angularity, a high quantity of motion, high velocity and sharper angles. Judgments of maluma (or bouba) were related to smooth movements, a low velocity, a lower quantity of motion and blunter angles. A forced-choice experiment during which we presented subsets with low and high rankers on our motion measures revealed that people preferably assigned stimuli displaying fast movements with sharp angles in motion vectors to takete and stimuli displaying slow movements with blunter angles in motion vectors to maluma. Results indicated that body movements share features with information inherent in words such as takete and maluma and that people perceive the body movements of speakers on the level of changes in motion direction (e.g., body moves to the left and then back to the right). Follow-up studies are needed to clarify whether impressions of angularity and smoothness have similar communicative values across different modalities and how this affects social judgments and person perception.

  6. Shaking Takete and Flowing Maluma. Non-Sense Words Are Associated with Motion Patterns.

    Science.gov (United States)

    Koppensteiner, Markus; Stephan, Pia; Jäschke, Johannes Paul Michael

    2016-01-01

    People assign the artificial words takete and kiki to spiky, angular figures and the artificial words maluma and bouba to rounded figures. We examined whether such a cross-modal correspondence could also be found for human body motion. We transferred the body movements of speakers onto two-dimensional coordinates and created animated stick-figures based on this data. Then we invited people to judge these stimuli using the words takete-maluma, bouba-kiki, and several verbal descriptors that served as measures of angularity/smoothness. In addition to this we extracted the quantity of motion, the velocity of motion and the average angle between motion vectors from the coordinate data. Judgments of takete (and kiki) were related to verbal descriptors of angularity, a high quantity of motion, high velocity and sharper angles. Judgments of maluma (or bouba) were related to smooth movements, a low velocity, a lower quantity of motion and blunter angles. A forced-choice experiment during which we presented subsets with low and high rankers on our motion measures revealed that people preferably assigned stimuli displaying fast movements with sharp angles in motion vectors to takete and stimuli displaying slow movements with blunter angles in motion vectors to maluma. Results indicated that body movements share features with information inherent in words such as takete and maluma and that people perceive the body movements of speakers on the level of changes in motion direction (e.g., body moves to the left and then back to the right). Follow-up studies are needed to clarify whether impressions of angularity and smoothness have similar communicative values across different modalities and how this affects social judgments and person perception.
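
    A minimal sketch of how the three motion measures described above (quantity of motion, velocity, and the average angle between successive motion vectors) could be computed from tracked 2-D coordinates; the array layout and per-frame units are illustrative assumptions rather than the authors' implementation.

    ```python
    import numpy as np

    def motion_features(coords):
        """Compute simple motion measures from 2-D coordinate data.

        coords: array of shape (n_frames, n_points, 2) with the tracked body
        points of one stick-figure animation (layout and frame rate are
        illustrative assumptions). Returns quantity of motion (summed path
        length), mean velocity (displacement per frame), and the mean angle
        between successive motion vectors (one way to quantify how sharply
        the direction of motion changes).
        """
        vecs = np.diff(coords, axis=0)                 # motion vectors per frame
        step_len = np.linalg.norm(vecs, axis=-1)       # (n_frames-1, n_points)

        quantity_of_motion = step_len.sum()
        mean_velocity = step_len.mean()

        # Angle between each pair of successive motion vectors.
        v1, v2 = vecs[:-1], vecs[1:]
        dot = (v1 * v2).sum(axis=-1)
        norms = np.linalg.norm(v1, axis=-1) * np.linalg.norm(v2, axis=-1)
        cos_angle = np.clip(dot / np.maximum(norms, 1e-9), -1.0, 1.0)
        mean_angle_deg = np.degrees(np.arccos(cos_angle)).mean()

        return quantity_of_motion, mean_velocity, mean_angle_deg
    ```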

  7. Separate visual representations for perception and for visually guided behavior

    Science.gov (United States)

    Bridgeman, Bruce

    1989-01-01

    Converging evidence from several sources indicates that two distinct representations of visual space mediate perception and visually guided behavior, respectively. The two maps of visual space follow different rules; spatial values in either one can be biased without affecting the other. Ordinarily the two maps give equivalent responses because both are veridically in register with the world; special techniques are required to pull them apart. One such technique is saccadic suppression: small target displacements during saccadic eye movements are not perceived, though the displacements can change eye movements or pointing to the target. A second way to separate cognitive and motor-oriented maps is with induced motion: a slowly moving frame will make a fixed target appear to drift in the opposite direction, while motor behavior toward the target is unchanged. The same result occurs with stroboscopic induced motion, where the frame jumps abruptly and the target seems to jump in the opposite direction. A third method of separating cognitive and motor maps, requiring no motion of target, background or eye, is the Roelofs effect: a target surrounded by an off-center rectangular frame will appear to be off-center in the direction opposite the frame. Again the effect influences perception, but in half of the subjects it does not influence pointing to the target. This work also reveals more characteristics of the maps and their interactions with one another: the motor map apparently has little or no memory, and must be fed from the biased cognitive map if an enforced delay occurs between stimulus presentation and motor response. In designing spatial displays, the results mean that what you see isn't necessarily what you get. Displays must be designed with either perception or visually guided behavior in mind.

  8. Effects of visual motion consistent or inconsistent with gravity on postural sway.

    Science.gov (United States)

    Balestrucci, Priscilla; Daprati, Elena; Lacquaniti, Francesco; Maffei, Vincenzo

    2017-07-01

    Vision plays an important role in postural control, and visual perception of the gravity-defined vertical helps maintain upright stance. In addition, the influence of the gravity field on objects' motion is known to provide a reference for motor and non-motor behavior. However, the role of dynamic visual cues related to gravity in the control of postural balance has been little investigated. In order to understand whether visual cues about gravitational acceleration are relevant for postural control, we assessed the relation between postural sway and visual motion congruent or incongruent with gravitational acceleration. Postural sway of 44 healthy volunteers was recorded by means of force platforms while they watched virtual targets moving in different directions and with different accelerations. Small but significant differences emerged in sway parameters with respect to the characteristics of target motion. Namely, for vertically accelerated targets, gravitational motion (GM) was associated with smaller oscillations of the center of pressure than anti-GM. The present findings support the hypothesis that not only static, but also dynamic visual cues about the direction and magnitude of the gravitational field are relevant for balance control during upright stance.

  9. Neurons Responsive to Global Visual Motion Have Unique Tuning Properties in Hummingbirds.

    Science.gov (United States)

    Gaede, Andrea H; Goller, Benjamin; Lam, Jessica P M; Wylie, Douglas R; Altshuler, Douglas L

    2017-01-23

    Neurons in animal visual systems that respond to global optic flow exhibit selectivity for motion direction and/or velocity. The avian lentiformis mesencephali (LM), known in mammals as the nucleus of the optic tract (NOT), is a key nucleus for global motion processing [1-4]. In all animals tested, it has been found that the majority of LM and NOT neurons are tuned to temporo-nasal (back-to-front) motion [4-11]. Moreover, the monocular gain of the optokinetic response is higher in this direction, compared to naso-temporal (front-to-back) motion [12, 13]. Hummingbirds are sensitive to small visual perturbations while hovering, and they drift to compensate for optic flow in all directions [14]. Interestingly, the LM, but not other visual nuclei, is hypertrophied in hummingbirds relative to other birds [15], which suggests enhanced perception of global visual motion. Using extracellular recording techniques, we found that there is a uniform distribution of preferred directions in the LM in Anna's hummingbirds, whereas zebra finch and pigeon LM populations, as in other tetrapods, show a strong bias toward temporo-nasal motion. Furthermore, LM and NOT neurons are generally classified as tuned to "fast" or "slow" motion [10, 16, 17], and we predicted that most neurons would be tuned to slow visual motion as an adaptation for slow hovering. However, we found the opposite result: most hummingbird LM neurons are tuned to fast pattern velocities, compared to zebra finches and pigeons. Collectively, these results suggest a role in rapid responses during hovering, as well as in velocity control and collision avoidance during forward flight of hummingbirds. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. He throws like a girl (but only when he's sad): emotion affects sex-decoding of biological motion displays.

    Science.gov (United States)

    Johnson, Kerri L; McKay, Lawrie S; Pollick, Frank E

    2011-05-01

    Gender stereotypes have been implicated in sex-typed perceptions of facial emotion. Such interpretations were recently called into question because facial cues of emotion are confounded with sexually dimorphic facial cues. Here we examine the role of visual cues and gender stereotypes in perceptions of biological motion displays, thus overcoming the morphological confounding inherent in facial displays. In four studies, participants' judgments revealed gender stereotyping. Observers accurately perceived emotion from biological motion displays (Study 1), and this affected sex categorizations. Angry displays were overwhelmingly judged to be men; sad displays were judged to be women (Studies 2-4). Moreover, this pattern remained strong when stimuli were equated for velocity (Study 3). We argue that these results were obtained because perceivers applied gender stereotypes of emotion to infer sex category (Study 4). Implications for both vision sciences and social psychology are discussed. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Rubber hand illusion affects joint angle perception.

    Directory of Open Access Journals (Sweden)

    Martin V Butz

    Full Text Available The Rubber Hand Illusion (RHI) is a well-established experimental paradigm. It has been shown that the RHI can affect hand location estimates, arm and hand motion towards goals, the subjective visual appearance of one's own hand, and the feeling of body ownership. Several studies also indicate that the peri-hand space is partially remapped around the rubber hand. Nonetheless, the question remains if and to what extent the RHI can affect the perception of other body parts. In this study we ask if the RHI can alter the perception of the elbow joint. Participants had to adjust an angular representation on a screen according to their proprioceptive perception of their own elbow joint angle. The results show that the RHI does indeed alter the elbow joint estimation, increasing the agreement with the position and orientation of the artificial hand. Thus, the results show that the brain does not only adjust the perception of the hand in body-relative space, but it also modifies the perception of other body parts. In conclusion, we propose that the brain continuously strives to maintain a consistent internal body image and that this image can be influenced by the available sensory information sources, which are mediated and mapped onto each other by means of a postural, kinematic body model.

  12. Alterations to global but not local motion processing in long-term ecstasy (MDMA) users.

    Science.gov (United States)

    White, Claire; Brown, John; Edwards, Mark

    2014-07-01

    Growing evidence indicates that the main psychoactive ingredient in the illegal drug "ecstasy" (methylenedioxymethamphetamine) causes reduced activity in the serotonin and gamma-aminobutyric acid (GABA) systems in humans. On the basis of substantial serotonin input to the occipital lobe, recent research investigated visual processing in long-term users and found a larger magnitude of the tilt aftereffect, interpreted to reflect broadened orientation tuning bandwidths. Further research found higher orientation discrimination thresholds and reduced long-range interactions in the primary visual area of ecstasy users. The aim of the present research was to investigate whether serotonin-mediated V1 visual processing deficits in ecstasy users extend to motion processing mechanisms. Forty-five participants (21 controls, 24 drug users) completed two psychophysical studies: a direction discrimination study directly measured local motion processing in V1, while a motion coherence task tested global motion processing in area V5/MT. "Primary" ecstasy users (n = 18), those without substantial polydrug use, had significantly lower global motion thresholds than controls [p = 0.027, Cohen's d = 0.78 (large)], indicating increased sensitivity to global motion stimuli, but no difference in local motion processing (p = 0.365). These results extend previous research investigating the long-term effects of illicit drugs on visual processing. Two possible explanations are explored: diffuse attentional processes may be facilitating spatial pooling of motion signals in users. Alternatively, it may be that a GABA-mediated disruption to V5/MT processing is reducing spatial suppression and therefore improving global motion perception in ecstasy users.
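
    For context on the motion coherence task mentioned above, the sketch below generates a generic random-dot kinematogram in which only a specified proportion of dots carries the signal direction; all stimulus parameters are illustrative assumptions, not those of this study.

    ```python
    import numpy as np

    def rdk_frames(n_dots=100, n_frames=60, coherence=0.2, speed=0.05,
                   direction_deg=0.0, seed=0):
        """Generate dot positions for a random-dot kinematogram.

        On each frame a 'coherence' proportion of dots steps in the signal
        direction; the rest step in random directions. The observer's
        coherence threshold is the smallest proportion supporting reliable
        direction judgments. The field is the unit square with wrap-around.
        """
        rng = np.random.default_rng(seed)
        pos = rng.random((n_dots, 2))
        frames = [pos.copy()]
        signal_vec = speed * np.array([np.cos(np.deg2rad(direction_deg)),
                                       np.sin(np.deg2rad(direction_deg))])
        for _ in range(n_frames - 1):
            is_signal = rng.random(n_dots) < coherence
            angles = rng.uniform(0, 2 * np.pi, n_dots)
            noise_vec = speed * np.stack([np.cos(angles), np.sin(angles)], axis=1)
            pos = pos + np.where(is_signal[:, None], signal_vec, noise_vec)
            pos %= 1.0                                   # wrap around the field
            frames.append(pos.copy())
        return np.array(frames)                          # (n_frames, n_dots, 2)
    ```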

  13. Modeling Crossing Behavior of Drivers at Unsignalized Intersections with Consideration of Risk Perception

    Directory of Open Access Journals (Sweden)

    Liu Miaomiao

    2016-01-01

    Full Text Available Drivers’ risk perception is vital to driving behavior and traffic safety. In the dynamic interaction of a driver-vehicle-environment system, drivers’ risk perception changes dynamically. This study focused on drivers’ risk perception at unsignalized intersections in China and analyzed drivers’ crossing behavior. Based on cognitive psychology theory and an adaptive neuro-fuzzy inference system, quantitative models of drivers’ risk perception were established for the crossing processes between two straight-moving vehicles from the orthogonal direction. The acceptable risk perception levels of drivers were identified using a self-developed data analysis method. Based on game theory, the relationship among the quantitative value of drivers’ risk perception, acceptable risk perception level, and vehicle motion state was analyzed. The models of drivers’ crossing behavior were then established. Finally, the behavior models were validated using data collected from real-world vehicle movements and driver decisions. The results showed that the developed behavior models had both high accuracy and good applicability. This study would provide theoretical and algorithmic references for the microscopic simulation and active safety control system of vehicles.

  14. Visual Perception Based Rate Control Algorithm for HEVC

    Science.gov (United States)

    Feng, Zeqi; Liu, PengYu; Jia, Kebin

    2018-01-01

    For HEVC, rate control is an indispensable video coding technology for balancing video quality against limited encoding resources during video communication. However, the benchmark rate control algorithm of HEVC ignores subjective visual perception: for key focus regions, bit allocation at the largest coding unit (LCU) level is not ideal and subjective quality is unsatisfactory. In this paper, a visual perception based rate control algorithm for HEVC is proposed. First, the LCU-level bit allocation weight is optimized based on the visual perception of luminance and motion to improve subjective video quality. Then λ and QP are adjusted in combination with the bit allocation weight to improve rate-distortion performance. Experimental results show that the proposed algorithm reduces BD-BR by 0.5% on average and by up to 1.09%, with no loss in bit rate accuracy, compared with HEVC (HM15.0). The proposed algorithm is devoted to improving subjective video quality across various video applications.
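
    To make the LCU-level allocation concrete, the sketch below distributes a picture-level bit budget in proportion to a perceptual weight and then maps the resulting bits-per-pixel to λ and QP. The weight combination rule and the α/β values are illustrative assumptions rather than the paper's algorithm; the λ-to-QP constants are those commonly cited for the HM R-λ rate control model.

    ```python
    import numpy as np

    def allocate_lcu_bits(picture_bits, luminance_weights, motion_weights,
                          alpha=3.2, beta=-1.367, pixels_per_lcu=64 * 64):
        """Perceptually weighted LCU-level bit allocation (illustrative sketch).

        Each LCU gets a share of the picture bit budget proportional to a
        perceptual weight combining its luminance- and motion-based saliency
        (the product rule and alpha/beta values are assumptions, not the
        paper's model). Lambda follows the R-lambda model lambda = alpha*bpp^beta,
        and QP uses the mapping commonly cited for HM rate control.
        """
        w = np.asarray(luminance_weights) * np.asarray(motion_weights)
        w = w / w.sum()                                  # normalized weights
        lcu_bits = picture_bits * w                      # weighted bit budget
        bpp = np.maximum(lcu_bits / pixels_per_lcu, 1e-8)
        lam = alpha * np.power(bpp, beta)
        qp = np.clip(np.round(4.2005 * np.log(lam) + 13.7122), 0, 51)
        return lcu_bits, lam, qp
    ```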

  15. The Role of Motion Concepts in Understanding Non-Motion Concepts

    Directory of Open Access Journals (Sweden)

    Omid Khatin-Zadeh

    2017-12-01

    Full Text Available This article discusses a specific type of metaphor in which an abstract non-motion domain is described in terms of a motion event. Abstract non-motion domains are inherently different from concrete motion domains. However, motion domains are used to describe abstract non-motion domains in many metaphors. Three main reasons are suggested for the suitability of motion events in such metaphorical descriptions. Firstly, motion events usually have high degrees of concreteness. Secondly, motion events are highly imageable. Thirdly, components of any motion event can be imagined almost simultaneously within a three-dimensional space. These three characteristics make motion events suitable domains for describing abstract non-motion domains, and facilitate the process of online comprehension throughout language processing. Extending the main point into the field of mathematics, this article discusses the process of transforming abstract mathematical problems into imageable geometric representations within the three-dimensional space. This strategy is widely used by mathematicians to solve highly abstract and complex problems.

  16. Neurons in cortical area MST remap the memory trace of visual motion across saccadic eye movements.

    Science.gov (United States)

    Inaba, Naoko; Kawano, Kenji

    2014-05-27

    Perception of a stable visual world despite eye motion requires integration of visual information across saccadic eye movements. To investigate how the visual system deals with localization of moving visual stimuli across saccades, we observed spatiotemporal changes of receptive fields (RFs) of motion-sensitive neurons across periods of saccades in the middle temporal (MT) and medial superior temporal (MST) areas. We found that the location of the RFs moved with shifts of eye position due to saccades, indicating that motion-sensitive neurons in both areas have retinotopic RFs across saccades. Different characteristic responses emerged when the moving visual stimulus was turned off before the saccades. For MT neurons, virtually no response was observed after the saccade, suggesting that the responses of these neurons simply reflect the reafferent visual information. In contrast, most MST neurons increased their firing rates when a saccade brought the location of the visual stimulus into their RFs, where the visual stimulus itself no longer existed. These findings suggest that the responses of such MST neurons after saccades were evoked by a memory of the stimulus that had preexisted in the postsaccadic RFs ("memory remapping"). A delayed-saccade paradigm further revealed that memory remapping in MST was linked to the saccade itself, rather than to a shift in attention. Thus, the visual motion information across saccades was integrated in spatiotopic coordinates and represented in the activity of MST neurons. This is likely to contribute to the perception of a stable visual world in the presence of eye movements.

  17. Illusory motion reveals velocity matching, not foveation, drives smooth pursuit of large objects.

    Science.gov (United States)

    Ma, Zheng; Watamaniuk, Scott N J; Heinen, Stephen J

    2017-10-01

    When small objects move in a scene, we keep them foveated with smooth pursuit eye movements. Although large objects such as people and animals are common, it is nonetheless unknown how we pursue them since they cannot be foveated. It might be that the brain calculates an object's centroid, and then centers the eyes on it during pursuit as a foveation mechanism might. Alternatively, the brain merely matches the velocity by motion integration. We test these alternatives with an illusory motion stimulus that translates at a speed different from its retinal motion. The stimulus was a Gabor array that translated at a fixed velocity, with component Gabors that drifted with motion consistent or inconsistent with the translation. Velocity matching predicts different pursuit behaviors across drift conditions, while centroid matching predicts no difference. We also tested whether pursuit can segregate and ignore irrelevant local drifts when motion and centroid information are consistent by surrounding the Gabors with solid frames. Finally, observers judged the global translational speed of the Gabors to determine whether smooth pursuit and motion perception share mechanisms. We found that consistent Gabor motion enhanced pursuit gain while inconsistent, opposite motion diminished it, drawing the eyes away from the center of the stimulus and supporting a motion-based pursuit drive. Catch-up saccades tended to counter the position offset, directing the eyes opposite to the deviation caused by the pursuit gain change. Surrounding the Gabors with visible frames canceled both the gain increase and the compensatory saccades. Perceived speed was modulated analogous to pursuit gain. The results suggest that smooth pursuit of large stimuli depends on the magnitude of integrated retinal motion information, not its retinal location, and that the position system might be unnecessary for generating smooth velocity to large pursuit targets.

  18. Sporadic frame dropping impact on quality perception

    Science.gov (United States)

    Pastrana-Vidal, Ricardo R.; Gicquel, Jean Charles; Colomes, Catherine; Cherifi, Hocine

    2004-06-01

    Over the past few years there has been an increasing interest in real-time video services over packet networks. When considering quality, it is essential to quantify user perception of the received sequence. Severe motion discontinuities are one of the most common degradations in video streaming. The end-user perceives a jerky motion when the discontinuities are uniformly distributed over time, whereas an instantaneous fluidity break is perceived when the motion loss is isolated or irregularly distributed. Bit rate adaptation techniques, transmission errors in the packet networks, or the restitution strategy could be the origin of this perceived jerkiness. In this paper we present a psychovisual experiment performed to quantify the effect of sporadically dropped pictures on the overall perceived quality. First, the perceptual detection thresholds of generated temporal discontinuities were measured. Then, the quality function was estimated in relation to a single frame dropping for different durations. Finally, a set of tests was performed to quantify the effect of several impairments distributed over time. We have found that the detection thresholds are content-, duration- and motion-dependent. The assessment results show how quality is impaired by a single burst of dropped frames in a 10 sec sequence. The effect of several bursts of discarded frames, irregularly distributed over time, is also discussed.

  19. Acquiring neural signals for developing a perception and cognition model

    Science.gov (United States)

    Li, Wei; Li, Yunyi; Chen, Genshe; Shen, Dan; Blasch, Erik; Pham, Khanh; Lynch, Robert

    2012-06-01

    The understanding of how humans process information, determine salience, and combine seemingly unrelated information is essential to automated processing of large amounts of information that is partially relevant, or of unknown relevance. Recent neurological science research in human perception, and in information science regarding context-based modeling, provides us with a theoretical basis for using a bottom-up approach for automating the management of large amounts of information in ways directly useful for human operators. However, integration of human intelligence into a game theoretic framework for dynamic and adaptive decision support needs a perception and cognition model. For the purpose of cognitive modeling, we present a brain-computer-interface (BCI) based humanoid robot system to acquire brainwaves during human mental activities of imagining a humanoid robot-walking behavior. We use the neural signals to investigate relationships between complex humanoid robot behaviors and human mental activities for developing the perception and cognition model. The BCI system consists of a data acquisition unit with an electroencephalograph (EEG), a humanoid robot, and a charge-coupled device (CCD) camera. An EEG electrode cup acquires brainwaves from the skin surface of the scalp. The humanoid robot has 20 degrees of freedom (DOFs): 12 DOFs located on the hips, knees, and ankles for humanoid robot walking, 6 DOFs on the shoulders and arms for arm motion, and 2 DOFs for head yaw and pitch motion. The CCD camera takes video clips of the human subject's hand postures to identify mental activities that are correlated to the robot-walking behaviors.

  20. Visual Benefits in Apparent Motion Displays: Automatically Driven Spatial and Temporal Anticipation Are Partially Dissociated.

    Directory of Open Access Journals (Sweden)

    Merle-Marie Ahrens

    Full Text Available Many behaviourally relevant sensory events such as motion stimuli and speech have an intrinsic spatio-temporal structure. This will engage intentional and most likely unintentional (automatic) prediction mechanisms enhancing the perception of upcoming stimuli in the event stream. Here we sought to probe the anticipatory processes that are automatically driven by rhythmic input streams in terms of their spatial and temporal components. To this end, we employed an apparent visual motion paradigm testing the effects of pre-target motion on lateralized visual target discrimination. The motion stimuli either moved towards or away from peripheral target positions (valid vs. invalid spatial motion cueing) at a rhythmic or arrhythmic pace (valid vs. invalid temporal motion cueing). Crucially, we emphasized automatic motion-induced anticipatory processes by rendering the motion stimuli non-predictive of upcoming target position (by design) and task-irrelevant (by instruction), and by creating instead endogenous (orthogonal) expectations using symbolic cueing. Our data revealed that the apparent motion cues automatically engaged both spatial and temporal anticipatory processes, but that these processes were dissociated. We further found evidence for lateralisation of anticipatory temporal but not spatial processes. This indicates that distinct mechanisms may drive automatic spatial and temporal extrapolation of upcoming events from rhythmic event streams. This contrasts with previous findings that instead suggest an interaction between spatial and temporal attention processes when endogenously driven. Our results further highlight the need for isolating intentional from unintentional processes for better understanding the various anticipatory mechanisms engaged in processing behaviourally relevant stimuli with predictable spatio-temporal structure such as motion and speech.

  1. A hierarchical stochastic model for bistable perception.

    Directory of Open Access Journals (Sweden)

    Stefan Albert

    2017-11-01

    Full Text Available Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group

  2. A hierarchical stochastic model for bistable perception.

    Science.gov (United States)

    Albert, Stefan; Schmack, Katharina; Sterzer, Philipp; Schneider, Gaby

    2017-11-01

    Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group differences to
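
    A minimal simulation can make the hierarchical Brownian model described in these two records concrete. The Python sketch below is an illustration based only on the abstract, not the authors' code: the differential activity between two competing neuronal populations is modelled as Brownian motion with drift, and a percept switch is recorded whenever that activity difference reaches a bound. All parameter values are arbitrary assumptions.

      import numpy as np

      def simulate_dominance_durations(n_switches=50, drift=0.5, sigma=1.0,
                                       bound=1.0, dt=0.01, rng=None):
          """Illustrative sketch: differential activity between two competing
          neuronal populations drifts and diffuses; reaching the bound flips
          the percept. Parameters are arbitrary, not fitted values."""
          rng = np.random.default_rng() if rng is None else rng
          durations = []
          for _ in range(n_switches):
              x, t = 0.0, 0.0
              while abs(x) < bound:      # accumulate differential activity
                  x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                  t += dt
              durations.append(t)        # dominance duration until the switch
          return np.array(durations)

      durations = simulate_dominance_durations()
      print(f"median dominance duration: {np.median(durations):.2f} (arbitrary units)")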

  3. Ambiguous Tilt and Translation Motion Cues in Astronauts after Space Flight

    Science.gov (United States)

    Clement, G.; Harm, D. L.; Rupert, A. H.; Beaton, K. H.; Wood, S. J.

    2008-01-01

    Adaptive changes during space flight in how the brain integrates vestibular cues with visual, proprioceptive, and somatosensory information can lead to impaired movement coordination, vertigo, spatial disorientation, and perceptual illusions following transitions between gravity levels. This joint ESA-NASA pre- and post-flight experiment is designed to examine both the physiological basis and operational implications for disorientation and tilt-translation disturbances in astronauts following short-duration space flights. The first specific aim is to examine the effects of stimulus frequency on adaptive changes in eye movements and motion perception during independent tilt and translation motion profiles. Roll motion is provided by a variable radius centrifuge. Pitch motion is provided by NASA's Tilt-Translation Sled in which the resultant gravitoinertial vector remains aligned with the body longitudinal axis during tilt motion (referred to as the Z-axis gravitoinertial or ZAG paradigm). We hypothesize that the adaptation of otolith-mediated responses to these stimuli will have specific frequency characteristics, being greatest in the mid-frequency range where there is a crossover of tilt and translation. The second specific aim is to employ a closed-loop nulling task in which subjects are tasked to use a joystick to null-out tilt motion disturbances on these two devices. The stimuli consist of random steps or sum-of-sinusoids stimuli, including the ZAG profiles on the Tilt-Translation Sled. We hypothesize that the ability to control tilt orientation will be compromised following space flight, with increased control errors corresponding to changes in self-motion perception. The third specific aim is to evaluate how sensory substitution aids can be used to improve manual control performance. During the closed-loop nulling task on both devices, small tactors placed around the torso vibrate according to the actual body tilt angle relative to gravity. We hypothesize

  4. Man-systems evaluation of moving base vehicle simulation motion cues. [human acceleration perception involving visual feedback

    Science.gov (United States)

    Kirkpatrick, M.; Brye, R. G.

    1974-01-01

    A motion cue investigation program is reported that deals with human factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.

  5. OMNIDIRECTIONAL PERCEPTION FOR LIGHTWEIGHT UAVS USING A CONTINUOUSLY ROTATING 3D LASER SCANNER

    Directory of Open Access Journals (Sweden)

    D. Droeschel

    2013-08-01

    Full Text Available Many popular unmanned aerial vehicles (UAVs) are restricted in their size and weight, making the design of sensory systems for these robots challenging. We designed a small and lightweight continuously rotating 3D laser scanner – allowing for environment perception in a range of 30 m in almost all directions. This sensor is well suited for applications such as 3D obstacle detection, 6D motion estimation, localization, and mapping. We aggregate the distance measurements in a robot-centric grid-based map. To estimate the motion of our multicopter, we register 3D laser scans towards this local map. In experiments, we compare the laser-based ego-motion estimate with ground-truth from a motion capture system. Overall, we can build an accurate 3D obstacle map and can estimate the vehicle's trajectory by 3D scan registration.
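
    As a rough illustration of the scan-to-map registration step described above, the sketch below aligns a new 3D laser scan against an aggregated local map with point-to-plane ICP using the open-source Open3D library. It is a generic stand-in rather than the authors' implementation; the file names, search radii, and distance threshold are assumptions.

      import numpy as np
      import open3d as o3d

      # Hypothetical inputs: the latest laser scan and the robot-centric local map.
      scan = o3d.io.read_point_cloud("scan.pcd")
      local_map = o3d.io.read_point_cloud("local_map.pcd")

      # Point-to-plane ICP needs surface normals on both clouds.
      for cloud in (scan, local_map):
          cloud.estimate_normals(
              o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))

      # Register the scan against the map; in a real pipeline the previous pose
      # estimate would seed the optimisation instead of the identity.
      result = o3d.pipelines.registration.registration_icp(
          scan, local_map, 0.5, np.eye(4),
          o3d.pipelines.registration.TransformationEstimationPointToPlane())

      print("estimated ego-motion (4x4 transform):")
      print(result.transformation)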

  6. Retrogressive harmonic motion as structural and stylistic characteristic of pop-rock music

    Science.gov (United States)

    Carter, Paul S.

    The central issue addressed in this dissertation is that of progressive and retrogressive harmonic motion as it is utilized in the repertoire of pop-rock music. I believe that analysis in these terms may prove to be a valuable tool for the understanding of the structure, style and perception of this music. Throughout my study of this music, various patterns of progressive and retrogressive harmonic motions within a piece reveal a kind of musical character about it, a character on which much of a work's style, organization and extramusical nature often depends. Several influential theorists, especially Jean-Philippe Rameau, Hugo Riemann, and Arnold Schoenberg, have addressed the issues of functional harmony and the nature of the motion between chords of a tonal harmonic space. After assessing these views, I have found that it is possible to differentiate between two fundamental types of harmonic motions. This difference, one that I believe is instrumental in characterizing pop-rock music, is the basis for the analytical perspective I wish to embrace. After establishing a method of evaluating tonal harmonic root motions in these terms, I wish to examine a corpus of this music in order to discover what a characterization of its harmonic motion may reveal about each piece. Determining this harmonic character may help to establish structural and stylistic traits for that piece, its genre, composer, period, or even its sociological purpose. Conclusions may then be drawn regarding the role these patterns play in defining musical style traits of pop-rock. Partly as a tool for serving the study mentioned above, I develop a graphical method of accounting for root motion that I name the tonal "Space-Plot"; this apparatus allows the analyst to measure several facets of the harmonic motion of the music, and to see a wide scope of relations in and around a diatonic key.

  7. Modeling human perception of orientation in altered gravity

    Science.gov (United States)

    Clark, Torin K.; Newman, Michael C.; Oman, Charles M.; Merfeld, Daniel M.; Young, Laurence R.

    2015-01-01

    Altered gravity environments, such as those experienced by astronauts, impact spatial orientation perception, and can lead to spatial disorientation and sensorimotor impairment. To more fully understand and quantify the impact of altered gravity on orientation perception, several mathematical models have been proposed. The utricular shear, tangent, and the idiotropic vector models aim to predict static perception of tilt in hyper-gravity. Predictions from these prior models are compared to the available data, but are found to systematically err from the perceptions experimentally observed. Alternatively, we propose a modified utricular shear model for static tilt perception in hyper-gravity. Previous dynamic models of vestibular function and orientation perception are limited to 1 G. Specifically, they fail to predict the characteristic overestimation of roll tilt observed in hyper-gravity environments. To address this, we have proposed a modification to a previous observer-type canal-otolith interaction model based upon the hypothesis that the central nervous system (CNS) treats otolith stimulation in the utricular plane differently than stimulation out of the utricular plane. Here we evaluate our modified utricular shear and modified observer models in four altered gravity motion paradigms: (a) static roll tilt in hyper-gravity, (b) static pitch tilt in hyper-gravity, (c) static roll tilt in hypo-gravity, and (d) static pitch tilt in hypo-gravity. The modified models match available data in each of the conditions considered. Our static modified utricular shear model and dynamic modified observer model may be used to help quantitatively predict astronaut perception of orientation in altered gravity environments. PMID:25999822

  8. Modeling Human Perception of Orientation in Altered Gravity

    Directory of Open Access Journals (Sweden)

    Torin K. Clark

    2015-05-01

    Full Text Available Altered gravity environments, such as those experienced by astronauts, impact spatial orientation perception and can lead to spatial disorientation and sensorimotor impairment. To more fully understand and quantify the impact of altered gravity on orientation perception, several mathematical models have been proposed. The utricular shear, tangent, and the idiotropic vector models aim to predict static perception of tilt in hyper-gravity. Predictions from these prior models are compared to the available data, but are found to systematically err from the perceptions experimentally observed. Alternatively, we propose a modified utricular shear model for static tilt perception in hyper-gravity. Previous dynamic models of vestibular function and orientation perception are limited to 1 G. Specifically, they fail to predict the characteristic overestimation of roll tilt observed in hyper-gravity environments. To address this, we have proposed a modification to a previous observer-type canal-otolith interaction model based upon the hypothesis that the central nervous system treats otolith stimulation in the utricular plane differently than stimulation out of the utricular plane. Here we evaluate our modified utricular shear and modified observer models in four altered gravity motion paradigms: (a) static roll tilt in hyper-gravity, (b) static pitch tilt in hyper-gravity, (c) static roll tilt in hypo-gravity, and (d) static pitch tilt in hypo-gravity. The modified models match available data in each of the conditions considered. Our static modified utricular shear model and dynamic modified observer model may be used to help quantitatively predict astronaut perception of orientation in altered gravity environments.
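
    For readers unfamiliar with the utricular shear idea referenced in both records above, the classic (unmodified) formulation can be written compactly; this is the textbook statement of the shear hypothesis, not necessarily the authors' modified model:

      \[
        G \sin\theta \;=\; \sin\hat{\theta}
        \qquad\Longrightarrow\qquad
        \hat{\theta} \;=\; \arcsin\!\bigl(\min(G\sin\theta,\; 1)\bigr),
      \]

    where \(\theta\) is the actual tilt, \(\hat{\theta}\) the perceived tilt, and \(G\) the gravito-inertial magnitude in units of 1 G. For \(G > 1\) the shear along the utricular plane exceeds its 1-G value, so perceived tilt is overestimated, which is the hyper-gravity behaviour the modified models above are built to capture.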

  9. Psychophysical scaling of circular vection (CV) produced by optokinetic (OKN) motion: individual differences and effects of practice.

    Science.gov (United States)

    Kennedy, R S; Hettinger, L J; Harm, D L; Ordy, J M; Dunlap, W P

    1996-01-01

    Vection (V) refers to the compelling visual illusion of self-motion experienced by stationary individuals when viewing moving visual surrounds. The phenomenon is of theoretical interest because of its relevance for understanding the neural basis of ordinary self-motion perception, and of practical importance because it is the experience that makes simulation, virtual reality displays, and entertainment devices more vicarious. This experiment was performed to address whether an optokinetically induced vection illusion exhibits monotonic and stable psychometric properties and whether individuals differ reliably in these (V) perceptions. Subjects were exposed to varying velocities of the circular vection (CV) display in an optokinetic (OKN) drum 2 meters in diameter in 5 one-hour daily sessions extending over a 1-week period. For grouped data, psychophysical scalings of velocity estimates showed that exponents in a Stevens-type power function were essentially linear (slope = 0.95) and largely stable over sessions. Latencies were slightly longer for the slowest and fastest induction stimuli, and average latency increased over sessions as a function of practice, implying time-course adaptation effects. Test-retest reliabilities for individual slope and intercept measures were moderately strong (r = 0.45) and showed no evidence of superdiagonal form. This implies stability of the individual circular vection (CV) sensitivities. Because the individual CV scores were stable, reliabilities were improved by averaging 4 sessions in order to provide a stronger retest reliability (r = 0.80). Individual latency responses were highly reliable (r = 0.80). Mean CV latency and motion sickness symptoms were greater in males than in females. These individual differences in CV could be predictive of other outcomes, such as susceptibility to disorientation or motion sickness, and for CNS localization of visual-vestibular interactions in the experience of self-motion.
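
    The Stevens-type power-function scaling reported above can be illustrated in a few lines of Python. The velocity and magnitude-estimate values below are made-up stand-ins for the kind of data described, not the study's data; the point is that the exponent n in S = k*I^n is simply the slope of a straight-line fit in log-log coordinates, with a slope near 1 corresponding to the essentially linear scaling reported.

      import numpy as np

      # Hypothetical drum velocities (deg/s) and mean magnitude estimates of vection.
      velocity = np.array([10.0, 20.0, 30.0, 45.0, 60.0, 90.0])
      estimate = np.array([9.0, 19.5, 27.0, 44.0, 61.0, 85.0])

      # Stevens' law S = k * I**n becomes log S = log k + n * log I,
      # so the exponent n is the slope of a line fitted in log-log space.
      n, log_k = np.polyfit(np.log(velocity), np.log(estimate), 1)
      print(f"estimated exponent n = {n:.2f}, scale factor k = {np.exp(log_k):.2f}")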

  10. Quantitative evaluation of impedance perception characteristics of humans in the man-machine interface

    International Nuclear Information System (INIS)

    Onish, Keiichi; Kim, Young Woo; Obinata, Goro; Hase, Kazunori

    2013-01-01

    We investigated impedance perception characteristics of humans in the man-machine interface. Sensibility or operational feel about physical properties of machine dynamics is obtained through perception process. We evaluated the impedance perception characteristics of humans who are operating a mechanical system, based on extended Scheffe's subjective evaluation method in full consideration of the influence of impedance level, impedance difference, experiment order, individual difference and so on. Constant method based quantitative evaluation was adopted to investigate the influence of motion frequency and change of the impedance on human impedance perception characteristics. Experimental results indicate that humans perceive impedance of mechanical systems based on comparison process of the dynamical characteristics of the systems. The proposed method can be applied to quantify the design requirement of man-machine interface. The effectiveness of the proposed method is verified through experimental results.

  11. Quantitative evaluation of impedance perception characteristics of humans in the man-machine interface

    Energy Technology Data Exchange (ETDEWEB)

    Onish, Keiichi [Yamaha Motor Co., Shizuoka (Japan); Kim, Young Woo [Daegu Techno Park R and D Center, Seoul (Korea, Republic of); Obinata, Goro [Nagoya University, Nagoya (Japan); Hase, Kazunori [Tokyo Metropolitan University, Tokyo (Japan)

    2013-05-15

    We investigated impedance perception characteristics of humans in the man-machine interface. Sensibility or operational feel about physical properties of machine dynamics is obtained through perception process. We evaluated the impedance perception characteristics of humans who are operating a mechanical system, based on extended Scheffe's subjective evaluation method in full consideration of the influence of impedance level, impedance difference, experiment order, individual difference and so on. Constant method based quantitative evaluation was adopted to investigate the influence of motion frequency and change of the impedance on human impedance perception characteristics. Experimental results indicate that humans perceive impedance of mechanical systems based on comparison process of the dynamical characteristics of the systems. The proposed method can be applied to quantify the design requirement of man-machine interface. The effectiveness of the proposed method is verified through experimental results.

  12. Circuit Mechanisms Governing Local vs. Global Motion Processing in Mouse Visual Cortex

    Directory of Open Access Journals (Sweden)

    Rune Rasmussen

    2017-12-01

    Full Text Available A long-standing question in neuroscience is how neural circuits encode representations and perceptions of the external world. A particularly well-defined visual computation is the representation of global object motion by pattern direction-selective (PDS) cells from convergence of motion of local components represented by component direction-selective (CDS) cells. However, how PDS and CDS cells develop their distinct response properties is still unresolved. The visual cortex of the mouse is an attractive model for experimentally solving this issue due to the large molecular and genetic toolbox available. Although mouse visual cortex lacks the highly ordered orientation columns of primates, it is organized in functional sub-networks and contains striate and extrastriate areas like its primate counterparts. In this Perspective article, we provide an overview of the experimental and theoretical literature on global motion processing based on works in primates and mice. Lastly, we propose what types of experiments could illuminate which circuit mechanisms govern cortical global visual motion processing. We propose that PDS cells in mouse visual cortex appear as the perfect arena for delineating and solving how individual sensory features extracted by neural circuits in peripheral brain areas are integrated to build our rich cohesive sensory experiences.

  13. Motion-Dependent Filling-In of Spatiotemporal Information at the Blind Spot.

    Science.gov (United States)

    Maus, Gerrit W; Whitney, David

    2016-01-01

    We usually do not notice the blind spot, a receptor-free region on the retina. Stimuli extending through the blind spot appear filled in. However, if an object does not reach through but ends in the blind spot, it is perceived as "cut off" at the boundary. Here we show that even when there is no corresponding stimulation at opposing edges of the blind spot, well known motion-induced position shifts also extend into the blind spot and elicit a dynamic filling-in process that allows spatial structure to be extrapolated into the blind spot. We presented observers with sinusoidal gratings that drifted into or out of the blind spot, or flickered in counterphase. Gratings moving into the blind spot were perceived to be longer than those moving out of the blind spot or flickering, revealing motion-dependent filling-in. Further, observers could perceive more of a grating's spatial structure inside the blind spot than would be predicted from simple filling-in of luminance information from the blind spot edge. This is evidence for a dynamic filling-in process that uses spatiotemporal information from the motion system to extrapolate visual percepts into the scotoma of the blind spot. Our findings also provide further support for the notion that an explicit spatial shift of topographic representations contributes to motion-induced position illusions.

  14. Exhibition of Stochastic Resonance in Vestibular Perception

    Science.gov (United States)

    Galvan-Garza, R. C.; Clark, T. K.; Merfeld, D. M.; Bloomberg, J. J.; Oman, C. M.; Mulavara, A. P.

    2016-01-01

    Astronauts experience sensorimotor changes during spaceflight, particularly during G-transitions. Post flight sensorimotor changes include spatial disorientation, along with postural and gait instability that may degrade operational capabilities of the astronauts and endanger the crew. A sensorimotor countermeasure that mitigates these effects would improve crewmember safety and decrease risk. The goal of this research is to investigate the potential use of stochastic vestibular stimulation (SVS) as a technology to improve sensorimotor function. We hypothesize that low levels of SVS will improve sensorimotor perception through the phenomenon of stochastic resonance (SR), when the response of a nonlinear system to a weak input signal is enhanced by the application of a particular nonzero level of noise. This study aims to advance the development of SVS as a potential countermeasure by 1) demonstrating the exhibition of stochastic resonance in vestibular perception, a vital component of sensorimotor function, 2) investigating the repeatability of SR exhibition, and 3) determining the relative contribution of the semicircular canals (SCC) and otolith (OTO) organs to vestibular perceptual SR. A constant current stimulator was used to deliver bilateral bipolar SVS via electrodes placed on each of the mastoid processes, as previously done. Vestibular perceptual motion recognition thresholds were measured using a 6-degree of freedom MOOG platform and a 150 trial 3-down/1-up staircase procedure. In the first test session, we measured vestibular perceptual thresholds in upright roll-tilt at 0.2 Hz (SCC+OTO) with SVS ranging from 0-700 µA. In a second test session a week later, we re-measured roll-tilt thresholds with 0, optimal (from test session 1), and 1500 µA SVS levels. A subset of these subjects, plus naive subjects, participated in two additional test sessions in which we measured thresholds in supine roll-rotation at 0.2 Hz (SCC) and upright y-translation at 1 Hz
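
    The 3-down/1-up staircase rule mentioned above is a standard adaptive procedure: the stimulus is made harder (smaller) after three consecutive correct responses and easier (larger) after any error, so the track converges near the 79%-correct point. The Python sketch below is a generic simulation of that rule, not the study's MOOG control software; the observer model, step size, and starting level are assumptions.

      import random

      def staircase_3down_1up(true_threshold=2.0, start_level=8.0,
                              step=1.0, n_trials=150, seed=1):
          """Generic 3-down/1-up staircase run against a simulated observer
          whose accuracy grows with stimulus level (all values arbitrary)."""
          random.seed(seed)
          level, run, last_direction, reversals = start_level, 0, None, []
          for _ in range(n_trials):
              p_correct = 0.5 + 0.5 * min(1.0, level / (2.0 * true_threshold))
              if random.random() < p_correct:          # correct response
                  run += 1
                  if run < 3:
                      continue                         # wait for 3 in a row
                  run, direction = 0, "down"
                  level = max(level - step, 0.1)       # make the task harder
              else:                                    # any error
                  run, direction = 0, "up"
                  level += step                        # make the task easier
              if last_direction and direction != last_direction:
                  reversals.append(level)              # track reversal levels
              last_direction = direction
          tail = reversals[-6:] if len(reversals) >= 6 else reversals
          return sum(tail) / len(tail) if tail else level

      print(f"estimated threshold: {staircase_3down_1up():.2f} (arbitrary units)")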

  15. The Bicycle Illusion: Sidewalk Science Informs the Integration of Motion and Shape Perception

    Science.gov (United States)

    Masson, Michael E. J.; Dodd, Michael D.; Enns, James T.

    2009-01-01

    The authors describe a new visual illusion first discovered in a natural setting. A cyclist riding beside a pair of sagging chains that connect fence posts appears to move up and down with the chains. In this illusion, a static shape (the chains) affects the perception of a moving shape (the bicycle), and this influence involves assimilation…

  16. Measurement of shoulder motion fraction and motion ratio

    International Nuclear Information System (INIS)

    Kang, Yeong Han

    2006-01-01

    This study examined the measurement of shoulder motion fraction and motion ratio, and proposed a radiological criterion for the glenohumeral-to-scapulothoracic movement ratio. We measured the motion fraction of glenohumeral and scapulothoracic movement using a computed radiography (CR) system during arm elevation at neutral, 90°, and full elevation. The central ray was angled 15°, 19°, and 22° cephalad to remain parallel to the scapular spine, and the torso was rotated 40°, 36°, and 22° externally oblique to keep the beam perpendicular to the glenohumeral surface. One hundred healthy volunteers were divided into 5 groups by age (20s, 30s, 40s, 50s, 60s). The angles of glenohumeral and scapulothoracic motion were derived from the gross arm angle and the radiological arm angle. We acquired 3 images at the neutral, 90°, and full-elevation positions and measured the radiographic angles of glenohumeral and scapulothoracic movement, respectively. At 90° of arm elevation, the shoulder motion fraction was 1.22 (men) and 1.70 (women) in the right arm and 1.31 and 1.54 in the left. At full elevation, the right-arm fraction was 1.63 and 1.84 and the left was 1.57 and 1.32. For right-dominant arms (78%), the 90° and full-elevation motion fractions were 1.58 and 1.43; for left-dominant arms (22%), 1.82 and 1.94. In the 20s age group, the 90° and full-elevation motion fractions were 1.56 and 1.52; in the 30s, 1.82 and 1.43; in the 40s, 1.23 and 1.16; in the 50s, 1.80 and 1.28; and in the 60s, 1.24 and 1.75. There were no significant differences by gender, dominant arm, or age. The proposed motion fraction criteria provide a useful reference for the clinical diagnosis of shoulder instability.
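
    One way to read the reported fractions (an interpretation of this abstract; the authors' exact definition is not spelled out in the record) is as the ratio of glenohumeral to scapulothoracic rotation, with the scapulothoracic share recovered as the difference between the gross arm angle and the radiographic glenohumeral angle. A worked example under that assumption, for a gross elevation of 90° and a measured glenohumeral angle of 50°:

      \[
        \theta_{ST} \;=\; 90^{\circ} - \theta_{GH} \;=\; 90^{\circ} - 50^{\circ} \;=\; 40^{\circ},
        \qquad
        \text{motion fraction} \;=\; \frac{\theta_{GH}}{\theta_{ST}} \;=\; \frac{50}{40} \;=\; 1.25,
      \]

    which falls within the 1.2-1.9 range reported above.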

  17. Markerless motion estimation for motion-compensated clinical brain imaging

    Science.gov (United States)

    Kyme, Andre Z.; Se, Stephen; Meikle, Steven R.; Fulton, Roger R.

    2018-05-01

    Motion-compensated brain imaging can dramatically reduce the artifacts and quantitative degradation associated with voluntary and involuntary subject head motion during positron emission tomography (PET), single photon emission computed tomography (SPECT) and computed tomography (CT). However, motion-compensated imaging protocols are not in widespread clinical use for these modalities. A key reason for this seems to be the lack of a practical motion tracking technology that allows for smooth and reliable integration of motion-compensated imaging protocols in the clinical setting. We seek to address this problem by investigating the feasibility of a highly versatile optical motion tracking method for PET, SPECT and CT geometries. The method requires no attached markers, relying exclusively on the detection and matching of distinctive facial features. We studied the accuracy of this method in 16 volunteers in a mock imaging scenario by comparing the estimated motion with an accurate marker-based method used in applications such as image guided surgery. A range of techniques to optimize performance of the method were also studied. Our results show that the markerless motion tracking method is highly accurate for brain imaging and holds good promise for a practical implementation in clinical PET, SPECT and CT systems.

  18. Perception of animacy in dogs and humans.

    Science.gov (United States)

    Abdai, Judit; Ferdinandy, Bence; Terencio, Cristina Baño; Pogány, Ákos; Miklósi, Ádám

    2017-06-01

    Humans have a tendency to perceive inanimate objects as animate based on simple motion cues. Although animacy is considered as a complex cognitive property, this recognition seems to be spontaneous. Researchers have found that young human infants discriminate between dependent and independent movement patterns. However, quick visual perception of animate entities may be crucial to non-human species as well. Based on general mammalian homology, dogs may possess similar skills to humans. Here, we investigated whether dogs and humans discriminate similarly between dependent and independent motion patterns performed by geometric shapes. We projected a side-by-side video display of the two patterns and measured looking times towards each side, in two trials. We found that in Trial 1, both dogs and humans were equally interested in the two patterns, but in Trial 2 of both species, looking times towards the dependent pattern decreased, whereas they increased towards the independent pattern. We argue that dogs and humans spontaneously recognized the specific pattern and habituated to it rapidly, but continued to show interest in the 'puzzling' pattern. This suggests that both species tend to recognize inanimate agents as animate relying solely on their motions. © 2017 The Author(s).

  19. Motion makes sense: an adaptive motor-sensory strategy underlies the perception of object location in rats.

    Science.gov (United States)

    Saraf-Sinik, Inbar; Assa, Eldad; Ahissar, Ehud

    2015-06-10

    Tactile perception is obtained by coordinated motor-sensory processes. We studied the processes underlying the perception of object location in freely moving rats. We trained rats to identify the relative location of two vertical poles placed in front of them and measured at high resolution the motor and sensory variables (19 and 2 variables, respectively) associated with this whiskers-based perceptual process. We found that the rats developed stereotypic head and whisker movements to solve this task, in a manner that can be described by several distinct behavioral phases. During two of these phases, the rats' whiskers coded object position by first temporal and then angular coding schemes. We then introduced wind (in two opposite directions) and remeasured their perceptual performance and motor-sensory variables. Our rats continued to perceive object location in a consistent manner under wind perturbations while maintaining all behavioral phases and relatively constant sensory coding. Constant sensory coding was achieved by keeping one group of motor variables (the "controlled variables") constant, despite the perturbing wind, at the cost of strongly modulating another group of motor variables (the "modulated variables"). The controlled variables included coding-relevant variables, such as head azimuth and whisker velocity. These results indicate that consistent perception of location in the rat is obtained actively, via a selective control of perception-relevant motor variables.

  20. PET motion correction using PRESTO with ITK motion estimation

    Energy Technology Data Exchange (ETDEWEB)

    Botelho, Melissa [Institute of Biophysics and Biomedical Engineering, Science Faculty of University of Lisbon (Portugal); Caldeira, Liliana; Scheins, Juergen [Institute of Neuroscience and Medicine (INM-4), Forschungszentrum Jülich (Germany); Matela, Nuno [Institute of Biophysics and Biomedical Engineering, Science Faculty of University of Lisbon (Portugal); Kops, Elena Rota; Shah, N Jon [Institute of Neuroscience and Medicine (INM-4), Forschungszentrum Jülich (Germany)

    2014-07-29

    The Siemens BrainPET scanner is a hybrid MRI/PET system. PET images are prone to motion artefacts which degrade the image quality. Therefore, motion correction is essential. The library PRESTO converts motion-corrected LORs into highly accurate generic projection data [1], providing high-resolution PET images. ITK is open-source software used for registering multidimensional data. ITK provides the motion estimation necessary for PRESTO.

  1. PET motion correction using PRESTO with ITK motion estimation

    International Nuclear Information System (INIS)

    Botelho, Melissa; Caldeira, Liliana; Scheins, Juergen; Matela, Nuno; Kops, Elena Rota; Shah, N Jon

    2014-01-01

    The Siemens BrainPET scanner is a hybrid MRI/PET system. PET images are prone to motion artefacts which degrade the image quality. Therefore, motion correction is essential. The library PRESTO converts motion-corrected LORs into highly accurate generic projection data [1], providing high-resolution PET images. ITK is open-source software used for registering multidimensional data. ITK provides the motion estimation necessary for PRESTO.
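
    The ITK-based motion estimation referred to in these two records amounts to rigid registration between image frames. The sketch below uses SimpleITK (the simplified Python wrapping of ITK) to estimate a rigid transform between two frames; it is a generic illustration rather than the PRESTO pipeline itself, and the file names and optimizer settings are assumptions.

      import SimpleITK as sitk

      # Hypothetical reference frame and a later, possibly moved, frame.
      fixed = sitk.ReadImage("frame_reference.nii", sitk.sitkFloat32)
      moving = sitk.ReadImage("frame_moved.nii", sitk.sitkFloat32)

      reg = sitk.ImageRegistrationMethod()
      reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
      reg.SetOptimizerAsRegularStepGradientDescent(
          learningRate=1.0, minStep=1e-4, numberOfIterations=200)
      reg.SetInterpolator(sitk.sitkLinear)
      reg.SetInitialTransform(
          sitk.CenteredTransformInitializer(
              fixed, moving, sitk.Euler3DTransform(),
              sitk.CenteredTransformInitializerFilter.GEOMETRY),
          inPlace=False)

      rigid = reg.Execute(fixed, moving)   # rigid head-motion estimate
      print(rigid)                         # parameters that could drive LOR correction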

  2. Correction of head motion artifacts in SPECT with fully 3-D OS-EM reconstruction

    International Nuclear Information System (INIS)

    Fulton, R.R.

    1998-01-01

    Full text: A method which relies on continuous monitoring of head position has been developed to correct for head motion in SPECT studies of the brain. Head position and orientation are monitored during data acquisition by an inexpensive head tracking system (ADL-1, Shooting Star Technology, Rosedale, British Columbia). Motion correction involves changing the projection geometry to compensate for motion (using data from the head tracker), and reconstructing with a fully 3-D OS-EM algorithm. The reconstruction algorithm can accommodate any number of movements and any projection geometry. A single iteration of 3-D OS-EM using all available projections provides a satisfactory 3-D reconstruction, essentially free of motion artifacts. The method has been validated in studies of the 3-D Hoffman brain phantom. Multiple 360-degree acquisitions, each with the phantom in a different position, were performed on a Trionix triple head camera. Movements were simulated by combining projections from the different acquisitions. Accuracy was assessed by comparison with a motion-free reconstruction, visually and by calculating mean squared error (MSE). Motion correction reduced distortion perceptibly and, depending on the motions applied, improved MSE by up to an order of magnitude. Three-dimensional reconstruction of the 128 x 128 x 128 data set took 2 minutes on a SUN Ultra 1 workstation. This motion correction technique can be retro-fitted to existing SPECT systems and could be incorporated in future SPECT camera designs. It appears to be applicable in PET as well as SPECT, to be able to correct for any head movements, and to have the potential to improve the accuracy of tomographic brain studies under clinical imaging conditions.
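
    For reference, a fully 3-D OS-EM update of the kind used above has the standard form below (written generically; in the motion-corrected case the head-tracker data enter through the projection geometry, i.e. through the system matrix elements a_ij):

      \[
        f_j^{(k,\,m+1)} \;=\;
        \frac{f_j^{(k,\,m)}}{\sum_{i \in S_m} a_{ij}}
        \sum_{i \in S_m} a_{ij}\,
        \frac{p_i}{\sum_{j'} a_{ij'}\, f_{j'}^{(k,\,m)}},
      \]

    where \(p_i\) are the measured projections, \(S_m\) is the m-th subset of projections, and \(a_{ij}\) is the probability that activity in voxel j is detected in projection bin i under the (motion-adjusted) geometry.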

  3. Perceptual learning of motion direction discrimination with suppressed and unsuppressed MT in humans: an fMRI study.

    Directory of Open Access Journals (Sweden)

    Benjamin Thompson

    Full Text Available The middle temporal area of the extrastriate visual cortex (area MT) is integral to motion perception and is thought to play a key role in the perceptual learning of motion tasks. We have previously found, however, that perceptual learning of a motion discrimination task is possible even when the training stimulus contains locally balanced, motion opponent signals that putatively suppress the response of MT. Assuming at least partial suppression of MT, possible explanations for this learning are that (1) training made MT more responsive by reducing motion opponency, (2) MT remained suppressed and alternative visual areas such as V1 enabled learning, and/or (3) suppression of MT increased with training, possibly to reduce noise. Here we used fMRI to test these possibilities. We first confirmed that the motion opponent stimulus did indeed suppress the BOLD response within hMT+ compared to an almost identical stimulus without locally balanced motion signals. We then trained participants on motion opponent or non-opponent stimuli. Training with the motion opponent stimulus reduced the BOLD response within hMT+ and greater reductions in BOLD response were correlated with greater amounts of learning. The opposite relationship between BOLD and behaviour was found at V1 for the group trained on the motion-opponent stimulus and at both V1 and hMT+ for the group trained on the non-opponent motion stimulus. As the average response of many cells within MT to motion opponent stimuli is the same as their response to non-directional flickering noise, the reduced activation of hMT+ after training may reflect noise reduction.

  4. Motion control report

    CERN Document Server

    2013-01-01

    Please note this is a short discount publication. In today's manufacturing environment, Motion Control plays a major role in virtually every project.The Motion Control Report provides a comprehensive overview of the technology of Motion Control:* Design Considerations* Technologies* Methods to Control Motion* Examples of Motion Control in Systems* A Detailed Vendors List

  5. Typical use of inverse dynamics in perceiving motion in autistic adults: Exploring computational principles of perception and action.

    Science.gov (United States)

    Takamuku, Shinya; Forbes, Paul A G; Hamilton, Antonia F de C; Gomi, Hiroaki

    2018-05-07

    There is increasing evidence for motor difficulties in many people with autism spectrum condition (ASC). These difficulties could be linked to differences in the use of internal models which represent relations between motions and forces/efforts. The use of these internal models may be dependent on the cerebellum which has been shown to be abnormal in autism. Several studies have examined internal computations of forward dynamics (motion from force information) in autism, but few have tested the inverse dynamics computation, that is, the determination of force-related information from motion information. Here, we examined this ability in autistic adults by measuring two perceptual biases which depend on the inverse computation. First, we asked participants whether they experienced a feeling of resistance when moving a delayed cursor, which corresponds to the inertial force of the cursor implied by its motion-both typical and ASC participants reported similar feelings of resistance. Second, participants completed a psychophysical task in which they judged the velocity of a moving hand with or without a visual cue implying inertial force. Both typical and ASC participants perceived the hand moving with the inertial cue to be slower than the hand without it. In both cases, the magnitude of the effects did not differ between the two groups. Our results suggest that the neural systems engaged in the inverse dynamics computation are preserved in ASC, at least in the observed conditions. We tested the ability to estimate force information from motion information, which arises from a specific "inverse dynamics" computation. Autistic adults and a matched control group reported feeling a resistive sensation when moving a delayed cursor and also judged a moving hand to be slower when it was pulling a load. These findings both suggest that the ability to estimate force information from

  6. Manual exploration and the perception of slipperiness.

    Science.gov (United States)

    Grierson, Lawrence E M; Carnahan, Heather

    2006-10-01

    In this article, we report on two experiments that examined the haptic perception of slipperiness. The first experiment aimed to determine whether the type of finger motion across a surface influenced the ability to accurately judge the frictional coefficient (or slipperiness) of that surface. Results showed that when using static contact, participants were not as good at distinguishing between various surfaces, compared with when their finger moved across the surface. This raises the issue of how humans are able to generate the appropriate forces in response to friction during grasping (which involves static finger contact). In a second study, participants lifted objects with surfaces of varying coefficients of friction. The participants were able to accurately perceive the slipperiness of the surfaces that were lifted; however, the grasping forces were not scaled appropriately for the friction. That is, there was a dissociation between haptic perception and motor output.

  7. Atypical activation of the mirror neuron system during perception of hand motion in autism.

    Science.gov (United States)

    Martineau, Joëlle; Andersson, Frédéric; Barthélémy, Catherine; Cottier, Jean-Philippe; Destrieux, Christophe

    2010-03-12

    Disorders in the autism spectrum are characterized by deficits in social and communication skills such as imitation, pragmatic language, theory of mind, and empathy. The discovery of the "mirror neuron system" (MNS) in macaque monkeys may provide a basis from which to explain some of the behavioral dysfunctions seen in individuals with autism spectrum disorders (ASD). We studied seven right-handed high-functioning male autistic and eight normal subjects (TD group) using functional magnetic resonance imaging during observation and execution of hand movements compared to a control condition (rest). The between-group comparison of the contrast [observation versus rest] provided evidence of greater bilateral activation of the inferior frontal gyrus during observation of human motion, relative to rest, in the ASD group than in the TD group. This hyperactivation of the pars opercularis (belonging to the MNS) during observation of human motion in autistic subjects provides strong support for the hypothesis of atypical activity of the MNS that may be at the core of the social deficits in autism.

  8. Does the road go up the mountain? Fictive motion between linguistic conventions and cognitive motivations.

    Science.gov (United States)

    Stosic, Dejan; Fagard, Benjamin; Sarda, Laure; Colin, Camille

    2015-09-01

    Fictive motion (FM) characterizes the use of dynamic expressions to describe static scenes. This phenomenon is crucial in terms of cognitive motivations for language use; several explanations have been proposed to account for it, among which mental simulation (Talmy in Toward a cognitive semantics, vol 1. MIT Press, Cambridge, 2000) and visual scanning (Matlock in Studies in linguistic motivation. Mouton de Gruyter, Berlin and New York, pp 221-248, 2004a). The aims of this paper were to test these competing explanations and identify language-specific constraints. To do this, we compared the linguistic strategies for expressing several types of static configurations in four languages, French, Italian, German and Serbian, with an experimental set-up (59 participants). The experiment yielded significant differences for motion-affordance versus no motion-affordance, for all four languages. Significant differences between languages included mean frequency of FM expressions. In order to refine the picture, and more specifically to disentangle the respective roles of language-specific conventions and language-independent (i.e. possibly cognitive) motivations, we completed our study with a corpus approach (besides the four initial languages, we added English and Polish). The corpus study showed low frequency of FM across languages, but a higher frequency and translation ratio for some FM types--among which those best accounted for by enactive perception. The importance of enactive perception could thus explain both the universality of FM and the fact that language-specific conventions appear mainly in very specific contexts--the ones furthest from enaction.

  9. Neural correlates of sensory prediction errors in monkeys: evidence for internal models of voluntary self-motion in the cerebellum.

    Science.gov (United States)

    Cullen, Kathleen E; Brooks, Jessica X

    2015-02-01

    During self-motion, the vestibular system makes essential contributions to postural stability and self-motion perception. To ensure accurate perception and motor control, it is critical to distinguish between vestibular sensory inputs that are the result of externally applied motion (exafference) and that are the result of our own actions (reafference). Indeed, although the vestibular sensors encode vestibular afference and reafference with equal fidelity, neurons at the first central stage of sensory processing selectively encode vestibular exafference. The mechanism underlying this reafferent suppression compares the brain's motor-based expectation of sensory feedback with the actual sensory consequences of voluntary self-motion, effectively computing the sensory prediction error (i.e., exafference). It is generally thought that sensory prediction errors are computed in the cerebellum, yet it has been challenging to explicitly demonstrate this. We have recently addressed this question and found that deep cerebellar nuclei neurons explicitly encode sensory prediction errors during self-motion. Importantly, in everyday life, sensory prediction errors occur in response to changes in the effector or world (muscle strength, load, etc.), as well as in response to externally applied sensory stimulation. Accordingly, we hypothesize that altering the relationship between motor commands and the actual movement parameters will result in the updating in the cerebellum-based computation of exafference. If our hypothesis is correct, under these conditions, neuronal responses should initially be increased--consistent with a sudden increase in the sensory prediction error. Then, over time, as the internal model is updated, response modulation should decrease in parallel with a reduction in sensory prediction error, until vestibular reafference is again suppressed. The finding that the internal model predicting the sensory consequences of motor commands adapts for new

  10. A synchronous surround increases the motion strength gain of motion.

    Science.gov (United States)

    Linares, Daniel; Nishida, Shin'ya

    2013-11-12

    Coherent motion detection is greatly enhanced by the synchronous presentation of a static surround (Linares, Motoyoshi, & Nishida, 2012). To further understand this contextual enhancement, here we measured the sensitivity to discriminate motion strength for several pedestal strengths with and without a surround. We found that the surround improved discrimination of low and medium motion strengths, but did not improve or even impaired discrimination of high motion strengths. We used motion strength discriminability to estimate the perceptual response function assuming additive noise and found that the surround increased the motion strength gain, rather than the response gain. Given that eye and body movements continuously introduce transients in the retinal image, it is possible that this strength gain occurs in natural vision.

  11. A multistage motion vector processing method for motion-compensated frame interpolation.

    Science.gov (United States)

    Huang, Ai- Mei; Nguyen, Truong Q

    2008-05-01

    In this paper, a novel, low-complexity motion vector processing algorithm at the decoder is proposed for motion-compensated frame interpolation or frame rate up-conversion. We address the problems of having broken edges and deformed structures in an interpolated frame by hierarchically refining motion vectors on different block sizes. Our method explicitly considers the reliability of each received motion vector and has the capability of preserving the structure information. This is achieved by analyzing the distribution of residual energies and effectively merging blocks that have unreliable motion vectors. The motion vector reliability information is also used as prior knowledge in motion vector refinement using a constrained vector median filter to avoid choosing identical unreliable ones. We also propose using chrominance information in our method. Experimental results show that the proposed scheme has better visual quality and is also robust, even in video sequences with complex scenes and fast motion.
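
    The constrained vector median step mentioned above builds on the plain vector median filter, sketched below in Python for illustration (the paper's reliability constraints and block merging are not reproduced): among a set of candidate motion vectors, the filter returns the one whose summed distance to all the others is smallest, which rejects isolated outliers.

      import numpy as np

      def vector_median(candidates):
          """Plain vector median filter: return the candidate motion vector
          minimising the summed Euclidean distance to all other candidates."""
          c = np.asarray(candidates, dtype=float)            # shape (N, 2): (dx, dy)
          dists = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
          return c[np.argmin(dists.sum(axis=1))]

      # Neighbouring block motion vectors, one of them unreliable (an outlier).
      neighbours = [(2, 1), (2, 2), (3, 1), (15, -12), (2, 1)]
      print(vector_median(neighbours))                       # -> [2. 1.], outlier rejected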

  12. Motion in radiotherapy

    DEFF Research Database (Denmark)

    Korreman, Stine Sofia

    2012-01-01

    This review considers the management of motion in photon radiation therapy. An overview is given of magnitudes and variability of motion of various structures and organs, and how the motion affects images by producing artifacts and blurring. Imaging of motion is described, including 4DCT and 4DPE...

  13. Is perception of vertical impaired in individuals with chronic stroke with a history of 'pushing'?

    Science.gov (United States)

    Mansfield, Avril; Fraser, Lindsey; Rajachandrakumar, Roshanth; Danells, Cynthia J; Knorr, Svetlana; Campos, Jennifer

    2015-03-17

    Post-stroke 'pushing' behaviour appears to be caused by impaired perception of vertical in the roll plane. While pushing behaviour typically resolves with stroke recovery, it is not known if misperception of vertical persists. The purpose of this study was to determine if perception of vertical is impaired amongst stroke survivors with a history of pushing behaviour. Fourteen individuals with chronic stroke (7 with history of pushing) and 10 age-matched healthy controls participated. Participants sat upright on a chair surrounded by a curved projection screen in a laboratory mounted on a motion base. Subjective visual vertical (SVV) was assessed using a 30 trial, forced-choice protocol. For each trial participants viewed a line projected on the screen and indicated if the line was tilted to the right or the left. For the subjective postural vertical (SPV), participants wore a blindfold and the motion base was tilted to the left or right by 10-20°. Participants were asked to adjust the angular movements of the motion base until they felt upright. SPV was not different between groups. SVV was significantly more biased towards the contralesional side for participants with history of pushing (-3.6 ± 4.1°) than those without (-0.1 ± 1.4°). Two individuals with history of pushing had SVV or SPV outside the maximum for healthy controls. Impaired vertical perception may persist in some individuals with prior post-stroke pushing, despite resolution of pushing behaviours, which could have consequences for functional mobility and falls. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.

  14. Smoothing Motion Estimates for Radar Motion Compensation.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

    Simple motion models for complex motion environments are often not adequate for keeping radar data coherent. Even perfect motion samples applied to imperfect models may lead to interim calculations exhibiting errors that lead to degraded processing results. Herein we discuss a specific issue involving calculating motion for groups of pulses, with measurements only available at pulse-group boundaries. Acknowledgements: This report was funded by General Atomics Aeronautical Systems, Inc. (GA-ASI) Mission Systems under Cooperative Research and Development Agreement (CRADA) SC08/01749 between Sandia National Laboratories and GA-ASI. General Atomics Aeronautical Systems, Inc. (GA-ASI), an affiliate of privately-held General Atomics, is a leading manufacturer of Remotely Piloted Aircraft (RPA) systems, radars, and electro-optic and related mission systems, including the Predator(r)/Gray Eagle(r)-series and Lynx(r) Multi-mode Radar.

  15. The Perception of Prototypical Motion: Synchronization Is Enhanced with Quantitatively Morphed Gestures of Musical Conductors

    Science.gov (United States)

    Wollner, Clemens; Deconinck, Frederik J. A.; Parkinson, Jim; Hove, Michael J.; Keller, Peter E.

    2012-01-01

    Aesthetic theories have long suggested perceptual advantages for prototypical exemplars of a given class of objects or events. Empirical evidence confirmed that morphed (quantitatively averaged) human faces, musical interpretations, and human voices are preferred over most individual ones. In this study, biological human motion was morphed and…

  16. MotionExplorer: exploratory search in human motion capture data based on hierarchical aggregation.

    Science.gov (United States)

    Bernard, Jürgen; Wilhelm, Nils; Krüger, Björn; May, Thorsten; Schreck, Tobias; Kohlhammer, Jörn

    2013-12-01

    We present MotionExplorer, an exploratory search and analysis system for sequences of human motion in large motion capture data collections. This special type of multivariate time series data is relevant in many research fields including medicine, sports and animation. Key tasks in working with motion data include analysis of motion states and transitions, and synthesis of motion vectors by interpolation and combination. In the practice of research and application of human motion data, challenges exist in providing visual summaries and drill-down functionality for handling large motion data collections. We find that this domain can benefit from appropriate visual retrieval and analysis support to handle these tasks in presence of large motion data. To address this need, we developed MotionExplorer together with domain experts as an exploratory search system based on interactive aggregation and visualization of motion states as a basis for data navigation, exploration, and search. Based on an overview-first type visualization, users are able to search for interesting sub-sequences of motion based on a query-by-example metaphor, and explore search results by details on demand. We developed MotionExplorer in close collaboration with the targeted users who are researchers working on human motion synthesis and analysis, including a summative field study. Additionally, we conducted a laboratory design study to substantially improve MotionExplorer towards an intuitive, usable and robust design. MotionExplorer enables the search in human motion capture data with only a few mouse clicks. The researchers unanimously confirm that the system can efficiently support their work.

  17. A programmable motion phantom for quality assurance of motion management in radiotherapy

    International Nuclear Information System (INIS)

    Dunn, L.; Franich, R.D.; Kron, T.; Taylor, M.L.; Johnston, P.N.; McDermott, L.N.; Callahan, J.

    2012-01-01

    A commercially available motion phantom (QUASAR, Modus Medical) was modified for programmable motion control with the aim of reproducing patient respiratory motion in one dimension in both the anterior–posterior and superior–inferior directions, as well as providing controllable breath-hold and sinusoidal patterns for the testing of radiotherapy gating systems. In order to simulate realistic patient motion, the DC motor was replaced by a stepper motor. A separate 'chest-wall' motion platform was also designed to accommodate a variety of surrogate marker systems. The platform employs a second stepper motor that allows for the decoupling of the chest-wall and insert motion. The platform's accuracy was tested by replicating patient traces recorded with the Varian real-time position management (RPM) system and comparing the motion platform's recorded motion trace with the original patient data. Six lung cancer patient traces recorded with the RPM system were uploaded to the motion platform's in-house control software and subsequently replicated through the phantom motion platform. The phantom's motion profile was recorded with the RPM system and compared to the original patient data. Sinusoidal and breath-hold patterns were simulated with the motion platform and recorded with the RPM system to verify the system's potential for routine quality assurance of commercial radiotherapy gating systems. There was good correlation between replicated and actual patient data (P < 0.003). Mean differences between the location of maxima in replicated and patient data-sets for six patients amounted to 0.034 cm, with the corresponding minima mean equal to 0.010 cm. The upgraded motion phantom was found to replicate patient motion accurately as well as provide useful test patterns to aid in the quality assurance of motion management methods and technologies.
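
    A minimal sketch of the kind of comparison described above (correlating a replicated trace with the original patient trace and comparing the amplitudes of their maxima) might look as follows; the sampling rate, peak-detection settings, and function names are illustrative assumptions, not the study's actual analysis.

    ```python
    import numpy as np
    from scipy.signal import find_peaks
    from scipy.stats import pearsonr

    def compare_traces(patient_trace, phantom_trace):
        """Correlate an original patient respiratory trace with the phantom's
        replicated trace and compare the amplitudes of their maxima.

        Both traces are assumed to be aligned, equally sampled and in cm;
        returns (Pearson r, p-value, mean |difference between peak amplitudes|).
        """
        patient = np.asarray(patient_trace, dtype=float)
        phantom = np.asarray(phantom_trace, dtype=float)

        r, p = pearsonr(patient, phantom)

        peaks_patient, _ = find_peaks(patient, distance=20)
        peaks_phantom, _ = find_peaks(phantom, distance=20)
        n = min(len(peaks_patient), len(peaks_phantom))
        peak_diff = np.abs(patient[peaks_patient[:n]] - phantom[peaks_phantom[:n]])

        return r, p, peak_diff.mean()

    # Example: a slightly noisy replication of a 4 s breathing cycle sampled at 25 Hz.
    t = np.arange(0, 60, 0.04)
    patient = 0.8 * np.sin(2 * np.pi * t / 4.0)                    # cm
    phantom = 0.8 * np.sin(2 * np.pi * (t - 0.05) / 4.0) + 0.02 * np.random.randn(t.size)
    print(compare_traces(patient, phantom))
    ```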

  18. Revisiting the Lissajous figure as a tool to study bistable perception.

    Science.gov (United States)

    Weilnhammer, V A; Ludwig, K; Sterzer, P; Hesselmann, G

    2014-05-01

    During bistable vision, perception spontaneously "switches" between two mutually exclusive percepts despite constant sensory input. The endogenous nature of these perceptual transitions has motivated extensive research aimed at the underlying mechanisms, since spontaneous perceptual transitions of bistable stimuli should in principle allow for a dissociation of processes related to sensory stimulation from those related to conscious perception. However, transitions from one conscious percept to another are often not instantaneous, and participants usually report a considerable amount of mixed or unclear percepts. This feature of bistable vision makes it difficult to isolate transition-related visual processes. Here, we revisited an ambiguous depth-from-motion stimulus which was first introduced to experimental psychology more than 80 years ago. This rotating Lissajous figure might prove useful in complementing other bistable stimuli, since its perceptual transitions only occur at critical stimulus configurations and are virtually instantaneous, thus facilitating the construction of a perceptually equivalent replay condition. We found that three parameters of the Lissajous figure - complexity, line width, and rotational speed - differentially modulated its perceptual dominance durations and transition probabilities, thus providing experimenters with a versatile tool to study the perceptual dynamics of bistable vision. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Structural motion engineering

    CERN Document Server

    Connor, Jerome

    2014-01-01

    This innovative volume provides a systematic treatment of the basic concepts and computational procedures for structural motion design and engineering for civil installations. The authors illustrate the application of motion control to a wide spectrum of buildings through many examples. Topics covered include optimal stiffness distributions for building-type structures, the role of damping in controlling motion, tuned mass dampers, base isolation systems, linear control, and nonlinear control. The book's primary objective is the satisfaction of motion-related design requirements, such as restrictions on displacement and acceleration. The book is ideal for practicing engineers and graduate students. This book also broadens practitioners' understanding of structural motion control, the enabling technology for motion-based design, and provides readers the tools to satisfy requirements of modern, ultra-high strength materials that lack corresponding stiffness, where the motion re...

  20. Gaze direction effects on perceptions of upper limb kinesthetic coordinate system axes.

    Science.gov (United States)

    Darling, W G; Hondzinski, J M; Harper, J G

    2000-12-01

    The effects of varying gaze direction on perceptions of the upper limb kinesthetic coordinate system axes and of the median plane location were studied in nine subjects with no history of neuromuscular disorders. In two experiments, six subjects aligned the unseen forearm to the trunk-fixed anterior-posterior (a/p) axis and earth-fixed vertical while gazing at different visual targets using either head or eye motion to vary gaze direction in different conditions. Effects of support of the upper limb on perceptual errors were also tested in different conditions. Absolute constant errors and variable errors associated with forearm alignment to the trunk-fixed a/p axis and earth-fixed vertical were similar for different gaze directions whether the head or eyes were moved to control gaze direction. Such errors were decreased by support of the upper limb when aligning to the vertical but not when aligning to the a/p axis. Regression analysis showed that single trial errors in individual subjects were poorly correlated with gaze direction, but showed a dependence on shoulder angles for alignment to both axes. Thus, changes in position of the head and eyes do not influence perceptions of upper limb kinesthetic coordinate system axes. However, dependence of the errors on arm configuration suggests that such perceptions are generated from sensations of shoulder and elbow joint angle information. In a third experiment, perceptions of median plane location were tested by instructing four subjects to place the unseen right index fingertip directly in front of the sternum either by motion of the straight arm at the shoulder or by elbow flexion/extension with shoulder angle varied. Gaze angles were varied to the right and left by 0.5 radians to determine effects of gaze direction on such perceptions. These tasks were also carried out with subjects blind-folded and head orientation varied to test for effects of head orientation on perceptions of median plane location. Constant

  1. Spatio-Temporal Saliency Perception via Hypercomplex Frequency Spectral Contrast

    Directory of Open Access Journals (Sweden)

    Zhiqiang Tian

    2013-03-01

    Full Text Available Salient object perception is the process of sensing salient information from spatio-temporal visual scenes, a rapid pre-attention mechanism for target location in a visual smart sensor. In recent decades, many successful models of visual saliency perception have been proposed to simulate pre-attention behavior. Since most methods need ad hoc parameters or high-cost preprocessing, they are difficult to use for rapid salient-object detection or to implement with computational parallelism in a smart sensor. In this paper, we propose a novel spatio-temporal saliency perception method based on spatio-temporal hypercomplex spectral contrast (HSC). Firstly, the proposed HSC algorithm represents features in the HSV (hue, saturation, and value) color space together with motion features as a hypercomplex number. Secondly, spatio-temporal salient objects are efficiently detected by hypercomplex Fourier spectral contrast in parallel. Finally, our saliency perception model also incorporates non-uniform sampling, reflecting the common tendency of human vision to direct attention toward the logarithmic center of the image/video in natural scenes. The experimental results on public saliency perception datasets demonstrate the effectiveness of the proposed approach compared to eleven state-of-the-art approaches. In addition, we extend the proposed model to moving object extraction in dynamic scenes, where the proposed algorithm is superior to traditional algorithms.
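
    The frequency-domain contrast idea can be sketched in a simplified, single-channel form. The code below computes a spectral-residual style saliency map with plain FFTs; the full method operates on a hypercomplex (quaternion) representation of HSV and motion channels, which this sketch deliberately omits, and the kernel size is an illustrative choice.

    ```python
    import numpy as np

    def spectral_saliency(image):
        """Single-channel spectral-contrast saliency (a simplified stand-in
        for the hypercomplex version described above).

        The phase spectrum is kept and the log-amplitude spectrum is replaced
        by its deviation from a local average (spectral residual), then the
        result is transformed back to the spatial domain.
        """
        img = np.asarray(image, dtype=float)
        f = np.fft.fft2(img)
        log_amp = np.log(np.abs(f) + 1e-8)
        phase = np.angle(f)

        # Local average of the log-amplitude spectrum (3x3 box filter).
        kernel = np.ones((3, 3)) / 9.0
        pad = np.pad(log_amp, 1, mode="edge")
        smoothed = sum(
            pad[i:i + log_amp.shape[0], j:j + log_amp.shape[1]] * kernel[i, j]
            for i in range(3) for j in range(3)
        )

        residual = log_amp - smoothed
        saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        # Normalize to [0, 1] for display.
        return (saliency - saliency.min()) / (np.ptp(saliency) + 1e-8)

    # Example: saliency map of dim noise with one bright square "object".
    img = np.random.rand(64, 64) * 0.1
    img[20:30, 20:30] += 1.0
    print(spectral_saliency(img).shape)
    ```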

  2. Human walking in virtual environments perception, technology, and applications

    CERN Document Server

    Visell, Yon; Campos, Jennifer; Lécuyer, Anatole

    2013-01-01

    This book presents a survey of past and recent developments on human walking in virtual environments with an emphasis on human self-motion perception, the multisensory nature of experiences of walking, conceptual design approaches, current technologies, and applications. The use of virtual reality and movement simulation systems is becoming increasingly popular and more accessible to a wide variety of research fields and applications. While, in the past, simulation technologies have focused on developing realistic, interactive visual environments, it is becoming increasingly obvious that our everyday interactions are highly multisensory. Therefore, investigators are beginning to understand the critical importance of developing and validating locomotor interfaces that can allow for realistic, natural behaviours. The book aims to present an overview of what is currently understood about human perception and performance when moving in virtual environments and to situate it relative to the broader scientific and ...

  3. Embodied memory: effective and stable perception by combining optic flow and image structure.

    Science.gov (United States)

    Pan, Jing Samantha; Bingham, Ned; Bingham, Geoffrey P

    2013-12-01

    Visual perception studies typically focus either on optic flow structure or image structure, but not on the combination and interaction of these two sources of information. Each offers unique strengths in contrast to the other's weaknesses. Optic flow yields intrinsically powerful information about 3D structure, but is ephemeral. It ceases when motion stops. Image structure is less powerful in specifying 3D structure, but is stable. It remains when motion stops. Optic flow and image structure are intrinsically related in vision because the optic flow carries one image to the next. This relation is especially important in the context of progressive occlusion, in which optic flow provides information about the location of targets hidden in subsequent image structure. In four experiments, we investigated the role of image structure in "embodied memory" in contrast to memory that is only in the head. We found that either optic flow (Experiment 1) or image structure (Experiment 2) alone was relatively ineffective, whereas the combination was effective and, in contrast to conditions requiring reliance on memory-in-the-head, much more stable over extended time (Experiments 2 through 4). Limits well documented for visual short-term memory (that is, memory-in-the-head) were strongly exceeded by embodied memory. The findings support J. J. Gibson's (1979/1986, The Ecological Approach to Visual Perception, Boston, MA, Houghton Mifflin) insights about progressive occlusion and the embodied nature of perception and memory.

  4. Is Diaphragm Motion a Good Surrogate for Liver Tumor Motion?

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Juan [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina (United States); School of Information Science and Engineering, Shandong University, Jinan, Shandong (China); Cai, Jing [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina (United States); Wang, Hongjun [School of Information Science and Engineering, Shandong University, Jinan, Shandong (China); Chang, Zheng; Czito, Brian G. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina (United States); Bashir, Mustafa R. [Department of Radiology, Duke University Medical Center, Durham, North Carolina (United States); Palta, Manisha [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina (United States); Yin, Fang-Fang, E-mail: fangfang.yin@duke.edu [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina (United States)

    2014-11-15

    Purpose: To evaluate the relationship between liver tumor motion and diaphragm motion. Methods and Materials: Fourteen patients with hepatocellular carcinoma (10 of 14) or liver metastases (4 of 14) undergoing radiation therapy were included in this study. All patients underwent single-slice cine–magnetic resonance imaging simulations across the center of the tumor in 3 orthogonal planes. Tumor and diaphragm motion trajectories in the superior–inferior (SI), anterior–posterior (AP), and medial–lateral (ML) directions were obtained using an in-house-developed normalized cross-correlation–based tracking technique. Agreement between the tumor and diaphragm motion was assessed by calculating phase difference percentage, intraclass correlation coefficient, and Bland-Altman analysis (Diff). The distance between the tumor and tracked diaphragm area was analyzed to understand its impact on the correlation between the 2 motions. Results: Of all patients, the mean (±standard deviation) phase difference percentage values were 7.1% ± 1.1%, 4.5% ± 0.5%, and 17.5% ± 4.5% in the SI, AP, and ML directions, respectively. The mean intraclass correlation coefficient values were 0.98 ± 0.02, 0.97 ± 0.02, and 0.08 ± 0.06 in the SI, AP, and ML directions, respectively. The mean Diff values were 2.8 ± 1.4 mm, 2.4 ± 1.1 mm, and 2.2 ± 0.5 mm in the SI, AP, and ML directions, respectively. Tumor and diaphragm motions had high concordance when the distance between the tumor and tracked diaphragm area was small. Conclusions: This study showed that liver tumor motion had good correlation with diaphragm motion in the SI and AP directions, indicating diaphragm motion in the SI and AP directions could potentially be used as a reliable surrogate for liver tumor motion.
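
    The agreement measures reported above (mean difference "Diff" and correlation between tumor and diaphragm trajectories) can be sketched as follows; note the paper uses the intraclass correlation coefficient and a phase difference percentage, whereas this illustrative snippet uses a Pearson correlation and a basic Bland-Altman computation, under the assumption of aligned, equally sampled traces.

    ```python
    import numpy as np

    def bland_altman(tumor_motion, diaphragm_motion):
        """Bland-Altman style agreement between tumor and diaphragm
        displacement traces (both in mm, same sampling): returns the mean
        difference, the 95% limits of agreement, and the Pearson correlation.
        """
        tumor = np.asarray(tumor_motion, dtype=float)
        diaphragm = np.asarray(diaphragm_motion, dtype=float)

        diff = tumor - diaphragm
        mean_diff = diff.mean()
        loa = 1.96 * diff.std(ddof=1)          # limits-of-agreement half-width
        r = np.corrcoef(tumor, diaphragm)[0, 1]
        return mean_diff, (mean_diff - loa, mean_diff + loa), r

    # Example: diaphragm trace as a scaled and shifted copy of the tumor trace.
    t = np.arange(0, 30, 0.1)
    tumor = 8 * np.sin(2 * np.pi * t / 4)              # mm, SI direction
    diaphragm = 10 * np.sin(2 * np.pi * t / 4) + 1.0
    print(bland_altman(tumor, diaphragm))
    ```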

  5. Curves from Motion, Motion from Curves

    Science.gov (United States)

    2000-01-01

    De linearum curvarum cum lineis rectis comparatione dissertatio geometrica - an appendix to a treatise by de Lalouvère (this was the only publication... correct solution to the problem of motion in the gravity of a permeable rotating Earth, considered by Torricelli (see §3). If the Earth is a homogeneous... in 1686, which contains the correct solution as part of a remarkably comprehensive theory of orbital motions under centripetal forces. It is a

  6. Spontaneous local alpha oscillations predict motion-induced blindness.

    Science.gov (United States)

    Händel, Barbara F; Jensen, Ole

    2014-11-01

    Bistable visual illusions are well suited for exploring the neuronal states of the brain underlying changes in perception. In this study, we investigated oscillatory activity associated with 'motion-induced blindness' (MIB), which denotes the perceptual disappearance of salient target stimuli when a moving pattern is superimposed on them (Bonneh et al., ). We applied an MIB paradigm in which illusory target disappearances would occur independently in the left and right hemifields. Both illusory and real target disappearance were followed by an alpha lateralization with weaker contralateral than ipsilateral alpha activity (~10 Hz). However, only the illusion showed early alpha lateralization in the opposite direction, which preceded the alpha effect present for both conditions and coincided with the estimated onset of the illusion. The duration of the illusory disappearance was further predicted by the magnitude of this early lateralization when considered over subjects. In the gamma band (60-80 Hz), we found an increase in activity contralateral relative to ipsilateral only after a real disappearance. Whereas early alpha activity was predictive of onset and length of the illusory percept, gamma activity showed no modulation in relation to the illusion. Our study demonstrates that the spontaneous changes in visual alpha activity have perceptual consequences. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  7. Playing with Senses in VR: Alternate Perceptions Combining Vision and Touch.

    Science.gov (United States)

    Lecuyer, Anatole

    2017-01-01

    Virtual reality is an immersive experience based on computer-generated stimulations perceived with multiple sensory channels. It is possible to manipulate these sensory stimulations independently and create conflicting situations in which, for instance, vision and touch are spatially and/or temporally inconsistent. This article discusses how to exploit these ambiguous sensorial situations to generate new kinds of percept using three types of examples: pseudo-haptic effects, self-motion sensations, and body-ownership illusions.

  8. A causal role for V5/MT neurons coding motion-disparity conjunctions in resolving perceptual ambiguity.

    Science.gov (United States)

    Krug, Kristine; Cicmil, Nela; Parker, Andrew J; Cumming, Bruce G

    2013-08-05

    Judgments about the perceptual appearance of visual objects require the combination of multiple parameters, like location, direction, color, speed, and depth. Our understanding of perceptual judgments has been greatly informed by studies of ambiguous figures, which take on different appearances depending upon the brain state of the observer. Here we probe the neural mechanisms hypothesized as responsible for judging the apparent direction of rotation of ambiguous structure from motion (SFM) stimuli. Resolving the rotation direction of SFM cylinders requires the conjoint decoding of direction of motion and binocular depth signals [1, 2]. Within cortical visual area V5/MT of two macaque monkeys, we applied electrical stimulation at sites with consistent multiunit tuning to combinations of binocular depth and direction of motion, while the monkey made perceptual decisions about the rotation of SFM stimuli. For both ambiguous and unambiguous SFM figures, rotation judgments shifted as if we had added a specific conjunction of disparity and motion signals to the stimulus elements. This is the first causal demonstration that the activity of neurons in V5/MT contributes directly to the perception of SFM stimuli and by implication to decoding the specific conjunction of disparity and motion, the two different visual cues whose combination drives the perceptual judgment. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  9. Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds.

    Science.gov (United States)

    Wright, W Geoffrey

    2014-01-01

    Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not, remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed.

  10. Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds

    Directory of Open Access Journals (Sweden)

    W. Geoffrey Wright

    2014-04-01

    Full Text Available Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini-review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not, remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed.

  11. Visual motion influences the contingent auditory motion aftereffect

    NARCIS (Netherlands)

    Vroomen, J.; de Gelder, B.

    2003-01-01

    In this study, we show that the contingent auditory motion aftereffect is strongly influenced by visual motion information. During an induction phase, participants listened to rightward-moving sounds with falling pitch alternated with leftward-moving sounds with rising pitch (or vice versa).

  12. Encoding of naturalistic optic flow by motion sensitive neurons of nucleus rotundus in the zebra finch (Taeniopygia guttata).

    Directory of Open Access Journals (Sweden)

    Dennis eEckmeier

    2013-09-01

    Full Text Available The retinal image changes that occur during locomotion, the optic flow, carry information about self-motion and the three-dimensional structure of the environment. Especially fast-moving animals with only little binocular vision depend on these depth cues for manoeuvring. They actively control their gaze to facilitate perception of depth based on cues in the optic flow. In the visual system of birds, nucleus rotundus neurons were originally found to respond to object motion but not to background motion. However, when background and object were both moving, responses increased the more the direction and velocity of object and background motion on the retina differed. These properties may play a role in representing depth cues in the optic flow. We therefore investigated how neurons in nucleus rotundus respond to optic flow that contains depth cues. We presented simplified and naturalistic optic flow on a panoramic LED display while recording from single neurons in nucleus rotundus of anaesthetized zebra finches. Unlike most studies on motion vision in birds, our stimuli included depth information. We found extensive responses of motion selective neurons in nucleus rotundus to optic flow stimuli. Simplified stimuli revealed preferences for optic flow reflecting translational or rotational self-motion. Naturalistic optic flow stimuli elicited complex response modulations, but the presence of objects was signalled by only a few neurons. The neurons that did respond to objects in the optic flow, however, show interesting properties.

  13. Motion control, motion sickness, and the postural dynamics of mobile devices.

    Science.gov (United States)

    Stoffregen, Thomas A; Chen, Yi-Chou; Koslucher, Frank C

    2014-04-01

    Drivers are less likely than passengers to experience motion sickness, an effect that is important for any theoretical account of motion sickness etiology. We asked whether different types of control would affect the incidence of motion sickness, and whether any such effects would be related to participants' control of their own bodies. Participants played a video game on a tablet computer. In the Touch condition, the device was stationary and participants controlled the game exclusively through fingertip inputs via the device's touch screen. In the Tilt condition, participants held the device in their hands and moved the device to control some game functions. Results revealed that the incidence of motion sickness was greater in the Touch condition than in the Tilt condition. During game play, movement of the head and torso differed as a function of the type of game control. Before the onset of subjective symptoms of motion sickness, movement of the head and torso differed between participants who later reported motion sickness and those that did not. We discuss implications of these results for theories of motion sickness etiology.

  14. Single-Trial Decoding of Bistable Perception Based on Sparse Nonnegative Tensor Decomposition

    Science.gov (United States)

    Wang, Zhisong; Maier, Alexander; Logothetis, Nikos K.; Liang, Hualou

    2008-01-01

    The study of the neuronal correlates of the spontaneous alternation in perception elicited by bistable visual stimuli is promising for understanding the mechanism of neural information processing and the neural basis of visual perception and perceptual decision-making. In this paper, we develop a sparse nonnegative tensor factorization (NTF)-based method to extract features from the local field potential (LFP), collected from the middle temporal (MT) visual cortex in a macaque monkey, for decoding its bistable structure-from-motion (SFM) perception. We apply the feature extraction approach to the multichannel time-frequency representation of the intracortical LFP data. The advantages of the sparse NTF-based feature extraction approach lie in its capability to yield components common across the space, time, and frequency domains yet discriminative across different conditions without prior knowledge of the discriminating frequency bands and temporal windows for a specific subject. We employ a support vector machine (SVM) classifier based on the features of the NTF components for single-trial decoding of the reported perception. Our results suggest that although other bands also have certain discriminability, the gamma band feature carries the most discriminative information for bistable perception, and that imposing sparseness constraints on the nonnegative tensor factorization improves extraction of this feature. PMID:18528515
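
    A rough sketch of the decoding pipeline is given below. The paper applies a sparse nonnegative tensor factorization that preserves the channel/frequency/time structure; for illustration only, this snippet unfolds the tensor and uses plain NMF from scikit-learn before feeding per-trial loadings to a linear SVM. All shapes, parameters, and function names are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def decode_bistable_perception(tf_power, labels, n_components=5):
        """Sketch of the decoding idea: nonnegative factorization of a
        trials x (channel x frequency x time) power representation, then an
        SVM on the per-trial loadings.

        NOTE: this is a matrix NMF on the unfolded tensor, not the sparse
        nonnegative *tensor* factorization used in the paper.
        """
        n_trials = tf_power.shape[0]
        X = np.abs(tf_power).reshape(n_trials, -1)     # unfold to trials x features

        nmf = NMF(n_components=n_components, init="nndsvda", max_iter=500)
        trial_loadings = nmf.fit_transform(X)           # per-trial component weights

        clf = SVC(kernel="linear")
        return cross_val_score(clf, trial_loadings, labels, cv=5).mean()

    # Example with random data: 100 trials, 4 channels, 30 frequencies, 50 time bins.
    rng = np.random.default_rng(1)
    power = rng.random((100, 4, 30, 50))
    labels = rng.integers(0, 2, size=100)
    print(decode_bistable_perception(power, labels))
    ```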

  15. A TMS study on the contribution of visual area V5 to the perception of implied motion in art and its appreciation.

    Science.gov (United States)

    Cattaneo, Zaira; Schiavi, Susanna; Silvanto, Juha; Nadal, Marcos

    2017-01-01

    Over the last decade, researchers have sought to understand the brain mechanisms involved in the appreciation of art. Previous studies reported increased activity in sensory processing regions for artworks that participants find more appealing. Here we investigated the intriguing possibility that activity in cortical area V5, a region in the occipital cortex mediating physical and implied motion detection, is related not only to the generation of a sense of motion from visual cues used in artworks, but also to the appreciation of those artworks. Art-naïve participants viewed a series of paintings and quickly judged whether or not the paintings conveyed a sense of motion, and whether or not they liked them. Triple-pulse TMS applied over V5 while viewing the paintings significantly decreased the perceived sense of motion, and also significantly reduced liking of abstract (but not representational) paintings. Our data demonstrate that V5 is involved in extracting motion information even when the objects whose motion is implied are pictorial representations (as opposed to photographs or film frames), and even in the absence of any figurative content. Moreover, our study suggests that, in the case of untrained people, V5 activity plays a causal role in the appreciation of abstract but not of representational art.

  16. Attention and apparent motion.

    Science.gov (United States)

    Horowitz, T; Treisman, A

    1994-01-01

    Two dissociations between short- and long-range motion in visual search are reported. Previous research has shown parallel processing for short-range motion and apparently serial processing for long-range motion. This finding has been replicated and it has also been found that search for short-range targets can be impaired both by using bicontrast stimuli, and by prior adaptation to the target direction of motion. Neither factor impaired search in long-range motion displays. Adaptation actually facilitated search with long-range displays, which is attributed to response-level effects. A feature-integration account of apparent motion is proposed. In this theory, short-range motion depends on specialized motion feature detectors operating in parallel across the display, but subject to selective adaptation, whereas attention is needed to link successive elements when they appear at greater separations, or across opposite contrasts.

  17. The impact of the perception of rhythmic music on self-paced oscillatory movements.

    Science.gov (United States)

    Peckel, Mathieu; Pozzo, Thierry; Bigand, Emmanuel

    2014-01-01

    Inspired by theories of perception-action coupling and embodied music cognition, we investigated how rhythmic music perception impacts self-paced oscillatory movements. In a pilot study, we examined the kinematic parameters of self-paced oscillatory movements, walking and finger tapping using optical motion capture. In accordance with biomechanical constraints accounts of motion, we found that movements followed a hierarchical organization depending on the proximal/distal characteristic of the limb used. Based on these findings, we were interested in knowing how and when the perception of rhythmic music could resonate with the motor system in the context of these constrained oscillatory movements. In order to test this, we conducted an experiment where participants performed four different effector-specific movements (lower leg, whole arm and forearm oscillation and finger tapping) while rhythmic music was playing in the background. Musical stimuli consisted of computer-generated MIDI musical pieces with a 4/4 metrical structure. The musical tempo of each song increased from 60 BPM to 120 BPM by 6 BPM increments. A specific tempo was maintained for 20 s before a 2 s transition to the higher tempo. The task of the participant was to maintain a comfortable pace for the four movements (self-paced) while not paying attention to the music. No instruction on whether to synchronize with the music was given. Results showed that participants were distinctively influenced by the background music depending on the movement used with the tapping task being consistently the most influenced. Furthermore, eight strategies put in place by participants to cope with the task were unveiled. Despite not instructed to do so, participants also occasionally synchronized with music. Results are discussed in terms of the link between perception and action (i.e., motor/perceptual resonance). In general, our results give support to the notion that rhythmic music is processed in a motoric

  18. The impact of the perception of rhythmic music on oscillatory self-paced movements

    Directory of Open Access Journals (Sweden)

    Mathieu ePeckel

    2014-09-01

    Full Text Available Inspired by theories of perception-action coupling and embodied music cognition, we investigated how rhythmic music perception impacts self-paced oscillatory movements. In a pilot study, we examined the kinematic parameters of self-paced oscillatory movements, walking and finger tapping using optical motion capture. In accordance with biomechanical constraints accounts of motion, we found that movements followed a hierarchical organization depending on the proximal/distal characteristic of the limb used. Based on these findings, we were interested in knowing how and when the perception of rhythmic music could resonate with the motor system in the context of these constrained oscillatory movements. In order to test this, we conducted an experiment where participants performed four different effector-specific movements (lower leg, whole arm and forearm oscillation, and finger tapping) while rhythmic music was playing in the background. Musical stimuli consisted of computer-generated MIDI musical pieces with a 4/4 metrical structure. The musical tempo of each song increased from 60 BPM to 120 BPM by 6 BPM increments. A specific tempo was maintained for 20 s before a 2 s transition to the higher tempo. The task of the participant was to maintain a comfortable pace for the four movements (self-paced) while not paying attention to the music. No instruction on whether to synchronize with the music was given. Results showed that participants were distinctively influenced by the background music depending on the movement used, with the tapping task being consistently the most influenced. Furthermore, eight strategies put in place by participants to cope with the task were unveiled. Despite not being instructed to do so, participants also occasionally synchronized with music. Results are discussed in terms of the link between perception and action (i.e., motor/perceptual resonance). In general, our results give support to the notion that rhythmic music is processed in a

  19. Thinking Sound and Body-Motion Shapes in Music: Public Peer Review of “Gesture and the Sonic Event in Karnatak Music” by Lara Pearson

    Directory of Open Access Journals (Sweden)

    Rolfe Inge Godøy

    2013-12-01

    Full Text Available It seems that the majority of research on music-related body motion has so far been focused on Western music, so this paper by Lara Pearson on music-related body motion in Indian vocal music is a most welcome contribution to this field. But research on music-related body motion does present us with a number of challenges, ranging from issues of method to fundamental issues of perception and multi-modal integration in music. In such research, thinking of perceptually salient features in different modalities (sound, motion, touch, etc.) as shapes seems to go well with our cognitive apparatus, and also to be quite practical in representing the features in question. The research reported in this paper gives us an insight into how tracing shapes by hand motion is an integral part of teaching Indian vocal music, and the approach of this paper also holds promise for fruitful future research.

  20. From elements to perception: local and global processing in visual neurons.

    Science.gov (United States)

    Spillmann, L

    1999-01-01

    Gestalt psychologists in the early part of the century challenged psychophysical notions that perceptual phenomena can be understood from a punctate (atomistic) analysis of the elements present in the stimulus. Their ideas slowed later attempts to explain vision in terms of single-cell recordings from individual neurons. A rapprochement between Gestalt phenomenology and neurophysiology seemed unlikely when the first ECVP was held in Marburg, Germany, in 1978. Since that time, response properties of neurons have been discovered that invite an interpretation of visual phenomena (including illusions) in terms of neuronal processing by long-range interactions, as first proposed by Mach and Hering in the last century. This article traces a personal journey into the early days of neurophysiological vision research to illustrate the progress that has taken place from the first attempts to correlate single-cell responses with visual perceptions. Whereas initially the receptive-field properties of individual classes of cells--e.g., contrast, wavelength, orientation, motion, disparity, and spatial-frequency detectors--were used to account for relatively simple visual phenomena, nowadays complex perceptions are interpreted in terms of long-range interactions, involving many neurons. This change in paradigm from local to global processing was made possible by recent findings, in the cortex, on horizontal interactions and backward propagation (feedback loops) in addition to classical feedforward processing. These mechanisms are exemplified by studies of the tilt effect and tilt aftereffect, direction-specific motion adaptation, illusory contours, filling-in and fading, figure--ground segregation by orientation and motion contrast, and pop-out in dynamic visual-noise patterns. Major questions for future research and a discussion of their epistemological implications conclude the article.

  1. Example-based human motion denoising.

    Science.gov (United States)

    Lou, Hui; Chai, Jinxiang

    2010-01-01

    With the proliferation of motion capture data, interest in removing noise and outliers from motion capture data has increased. In this paper, we introduce an efficient human motion denoising technique for the simultaneous removal of noise and outliers from input human motion data. The key idea of our approach is to learn a series of filter bases from precaptured motion data and use them along with robust statistics techniques to filter noisy motion data. Mathematically, we formulate the motion denoising process in a nonlinear optimization framework. The objective function measures the distance between the noisy input and the filtered motion in addition to how well the filtered motion preserves spatial-temporal patterns embedded in captured human motion data. Optimizing the objective function produces an optimal filtered motion that keeps spatial-temporal patterns in captured motion data. We also extend the algorithm to fill in missing values in input motion data. We demonstrate the effectiveness of our system by experimenting with both real and simulated motion data. We also show the superior performance of our algorithm by comparing it with three baseline algorithms and with state-of-the-art motion capture data processing software such as Vicon Blade.
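
    The objective described above (a data term plus a term measuring how well the filtered motion matches learned filter bases) can be sketched, under an additive-noise assumption and without the robust-statistics weighting, as a simple alternating scheme; the basis layout, the lambda weight, and the function names are illustrative, not the paper's formulation.

    ```python
    import numpy as np

    def denoise_motion(noisy, bases, lam=1.0, n_iters=20):
        """Minimal sketch of example-based denoising: alternate between
        (1) finding the best basis coefficients for the current estimate and
        (2) blending the basis reconstruction with the noisy input.

        Simplified objective:  ||y - x||^2 + lam * ||x - B c||^2
        `noisy` is (n_frames, n_dofs); `bases` is (n_frames, n_bases) learned
        from clean capture data (both are assumptions of this sketch).
        """
        y = np.asarray(noisy, dtype=float)
        B = np.asarray(bases, dtype=float)
        B_pinv = np.linalg.pinv(B)

        x = y.copy()
        for _ in range(n_iters):
            c = B_pinv @ x                       # best coefficients for current x
            x = (y + lam * (B @ c)) / (1 + lam)  # closed-form update of the estimate
        return x

    # Example: denoise a noisy sine trajectory with a small Fourier-like basis.
    T = 200
    frames = np.arange(T)
    clean = np.sin(2 * np.pi * frames / 50)[:, None]
    bases = np.stack([np.sin(2 * np.pi * frames * k / 50) for k in (1, 2)], axis=1)
    noisy = clean + 0.2 * np.random.randn(T, 1)
    print(np.abs(denoise_motion(noisy, bases, lam=5.0) - clean).mean())
    ```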

  2. A Motion-Adaptive Deinterlacer via Hybrid Motion Detection and Edge-Pattern Recognition

    Directory of Open Access Journals (Sweden)

    He-Yuan Lin

    2008-03-01

    Full Text Available A novel motion-adaptive deinterlacing algorithm with edge-pattern recognition and hybrid motion detection is introduced. The great variety of video contents makes the processing of assorted motion, edges, textures, and the combination of them very difficult with a single algorithm. The edge-pattern recognition algorithm introduced in this paper exhibits the flexibility in processing both textures and edges which need to be separately accomplished by line average and edge-based line average before. Moreover, predicting the neighboring pixels for pattern analysis and interpolation further enhances the adaptability of the edge-pattern recognition unit when motion detection is incorporated. Our hybrid motion detection features accurate detection of fast and slow motion in interlaced video and also the motion with edges. Using only three fields for detection also renders higher temporal correlation for interpolation. The better performance of our deinterlacing algorithm with higher content-adaptability and less memory cost than the state-of-the-art 4-field motion detection algorithms can be seen from the subjective and objective experimental results of the CIF and PAL video sequences.

  3. A Motion-Adaptive Deinterlacer via Hybrid Motion Detection and Edge-Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Li Hsin-Te

    2008-01-01

    Full Text Available A novel motion-adaptive deinterlacing algorithm with edge-pattern recognition and hybrid motion detection is introduced. The great variety of video contents makes the processing of assorted motion, edges, textures, and the combination of them very difficult with a single algorithm. The edge-pattern recognition algorithm introduced in this paper exhibits the flexibility in processing both textures and edges which need to be separately accomplished by line average and edge-based line average before. Moreover, predicting the neighboring pixels for pattern analysis and interpolation further enhances the adaptability of the edge-pattern recognition unit when motion detection is incorporated. Our hybrid motion detection features accurate detection of fast and slow motion in interlaced video and also the motion with edges. Using only three fields for detection also renders higher temporal correlation for interpolation. The better performance of our deinterlacing algorithm with higher content-adaptability and less memory cost than the state-of-the-art 4-field motion detection algorithms can be seen from the subjective and objective experimental results of the CIF and PAL video sequences.
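
    The basic motion-adaptive skeleton shared by this family of deinterlacers can be sketched as follows; this shows only the generic detect-then-interpolate structure (weave static pixels, line-average moving ones), not the edge-pattern recognition or three-field hybrid detection proposed in the paper, and the threshold and field layout are assumptions.

    ```python
    import numpy as np

    def deinterlace_field(prev_frame, curr_top_field, motion_threshold=12.0):
        """Basic motion-adaptive deinterlacing skeleton: the missing lines of
        the current frame are woven from the previous frame where pixels are
        static, and line-averaged from the current field where motion is found.

        `prev_frame` is a full H x W luma frame; `curr_top_field` holds its
        even lines for the current instant (H//2 x W).
        """
        H, W = prev_frame.shape
        out = np.empty((H, W), dtype=float)
        out[0::2] = curr_top_field                    # keep the transmitted field

        # Per-pixel motion measure: change of the transmitted lines since last frame.
        motion = np.abs(curr_top_field - prev_frame[0::2])

        for y in range(1, H - 1, 2):                  # reconstruct the missing lines
            line_avg = 0.5 * (out[y - 1] + out[y + 1])  # intra-field interpolation
            weave = prev_frame[y]                       # inter-field copy
            moving = motion[(y - 1) // 2] > motion_threshold
            out[y] = np.where(moving, line_avg, weave)

        if H % 2 == 0:                                # last missing line: replicate
            out[H - 1] = out[H - 2]
        return out

    # Example: previous frame plus the even lines of a brighter current frame.
    prev = np.tile(np.arange(8, dtype=float), (8, 1))
    top_field = prev[0::2] + 20.0                     # large change -> line average
    print(deinterlace_field(prev, top_field))
    ```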

  4. Auditory event-related potentials associated with perceptual reversals of bistable pitch motion.

    Science.gov (United States)

    Davidson, Gray D; Pitts, Michael A

    2014-01-01

    Previous event-related potential (ERP) experiments have consistently identified two components associated with perceptual transitions of bistable visual stimuli, the "reversal negativity" (RN) and the "late positive complex" (LPC). The RN (~200 ms post-stimulus, bilateral occipital-parietal distribution) is thought to reflect transitions between neural representations that form the moment-to-moment contents of conscious perception, while the LPC (~400 ms, central-parietal) is considered an index of post-perceptual processing related to accessing and reporting one's percept. To explore the generality of these components across sensory modalities, the present experiment utilized a novel bistable auditory stimulus. Pairs of complex tones with ambiguous pitch relationships were presented sequentially while subjects reported whether they perceived the tone pairs as ascending or descending in pitch. ERPs elicited by the tones were compared according to whether perceived pitch motion changed direction or remained the same across successive trials. An auditory reversal negativity (aRN) component was evident at ~170 ms post-stimulus over bilateral fronto-central scalp locations. An auditory LPC component (aLPC) was evident at subsequent latencies (~350 ms, fronto-central distribution). These two components may be auditory analogs of the visual RN and LPC, suggesting functionally equivalent but anatomically distinct processes in auditory vs. visual bistable perception.

  5. Learning Motion Features for Example-Based Finger Motion Estimation for Virtual Characters

    Science.gov (United States)

    Mousas, Christos; Anagnostopoulos, Christos-Nikolaos

    2017-09-01

    This paper presents a methodology for estimating the motion of a character's fingers based on the use of motion features provided by a virtual character's hand. In the presented methodology, firstly, the motion data is segmented into discrete phases. Then, a number of motion features are computed for each motion segment of a character's hand. The motion features are pre-processed using restricted Boltzmann machines, and by using the different variations of semantically similar finger gestures in a support vector machine learning mechanism, the optimal weights for each feature assigned to a metric are computed. The advantages of the presented methodology in comparison to previous solutions are the following: First, we automate the computation of optimal weights that are assigned to each motion feature counted in our metric. Second, the presented methodology achieves an increase (about 17%) in correctly estimated finger gestures in comparison to a previous method.

  6. [Complex visual hallucinations following occipital infarct and perception of optical illusions].

    Science.gov (United States)

    Renou, P; Deltour, S; Samson, Y

    2008-05-01

    The physiopathology of visual hallucinations in the hemianopic field secondary to occipital infarct is uncertain. We report the case of a patient with a history of occipital infarct who presented nonstereotyped complex hallucinations in the quadrantanopic field resulting from a second contralateral occipital infarct. Based on an experience with motion optical illusions, we suggested that the association of these two occipital lesions, involving the V5 motion area on one side and the V1 area on the other side, could have produced the complex hallucinations due to a release phenomenon. The patient experienced simultaneously a double visual consciousness, with both hallucinations and real visual perceptions. The study of perceptual illusions in patients with visual hallucinations could illustrate the innovative theory of visual consciousness as being not unified but constituted of multiple microconsciousnesses.

  7. Structural Motion Grammar for Universal Use of Leap Motion: Amusement and Functional Contents Focused

    Directory of Open Access Journals (Sweden)

    Byungseok Lee

    2018-01-01

    Full Text Available Motions performed with the Leap Motion controller are not standardized, even though its use in media contents is spreading. Each content defines its own motions, thereby creating confusion for users. Therefore, to alleviate user inconvenience, this study categorized the motions commonly used in Amusement and Functional Contents and, based on this classification, defined a Structural Motion Grammar that can be used universally. To this end, the Motion Lexicon, a fundamental motion vocabulary, was defined, and an algorithm that enables real-time recognition of the Structural Motion Grammar was developed. Moreover, the proposed method was verified by user evaluation and quantitative comparison tests.

  8. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    Science.gov (United States)

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no motion is reliable to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for those video sequences that contain multiple and fast motions.

  9. Anatomical alterations of the visual motion processing network in migraine with and without aura.

    Directory of Open Access Journals (Sweden)

    Cristina Granziera

    2006-10-01

    Full Text Available Patients suffering from migraine with aura (MWA) and migraine without aura (MWoA) show abnormalities in visual motion perception during and between attacks. Whether this represents the consequences of structural changes in motion-processing networks in migraineurs is unknown. Moreover, the diagnosis of migraine relies on the patient's history, and finding differences in the brain of migraineurs might help to contribute to basic research aimed at better understanding the pathophysiology of migraine. To investigate a common potential anatomical basis for these disturbances, we used high-resolution cortical thickness measurement and diffusion tensor imaging (DTI) to examine the motion-processing network in 24 migraine patients (12 with MWA and 12 with MWoA) and 15 age-matched healthy controls (HCs). We found increased cortical thickness of motion-processing visual areas MT+ and V3A in migraineurs compared to HCs. Cortical thickness increases were accompanied by abnormalities of the subjacent white matter. In addition, DTI revealed that migraineurs have alterations in the superior colliculus and the lateral geniculate nucleus, which are also involved in visual processing. A structural abnormality in the network of motion-processing areas could account for, or be the result of, the cortical hyperexcitability observed in migraineurs. The finding in patients with both MWA and MWoA of thickness abnormalities in area V3A, previously described as a source in spreading changes involved in visual aura, raises the question as to whether a "silent" cortical spreading depression develops as well in MWoA. In addition, these experimental data may provide clinicians and researchers with a noninvasively acquirable migraine biomarker.

  10. Time pressure and attention allocation effect on upper limb motion steadiness.

    Science.gov (United States)

    Liu, Sicong; Eklund, Robert C; Tenenbaum, Gershon

    2015-01-01

    Following ironic process theory (IPT), the authors aimed to investigate how attentional allocation affects participants' upper limb motion steadiness under low and high levels of mental load. A secondary purpose was to examine the validity of skin conductance level in measuring perception of pressure. The study consisted of 1 within-participant factor (i.e., phase: baseline, test) and 4 between-participant factors (i.e., gender: male, female; mental load: fake time constraints, no time constraints; attention: positive, suppressive; order: baseline → test, test → baseline). Eighty college students (40 men and 40 women, M(age) = 20.20 years, SD(age) = 1.52 years) participated in the study. Gender-stratified random assignment was employed in a 2 × 2 × 2 × 2 × 2 mixed experimental design. The findings generally support IPT, but its predictions for motor performance under mental load may not be entirely accurate. Unlike men, women's performance was not susceptible to manipulations of mental load and attention allocation. The validity of skin conductance readings as an index of pressure perception was called into question.

  11. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    Directory of Open Access Journals (Sweden)

    Gyungho Khim

    2015-01-01

    Full Text Available We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement.

  12. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    Science.gov (United States)

    Khim, Gyungho; Park, Chun Hong; Oh, Jeong Seok

    2015-01-01

    We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement. PMID:25705715

  13. Motion and relativity

    CERN Document Server

    Infeld, Leopold

    1960-01-01

    Motion and Relativity focuses on the methodologies, solutions, and approaches involved in the study of motion and relativity, including the general relativity theory, gravitation, and approximation. The publication first offers information on notation and gravitational interaction and the general theory of motion. Discussions focus on the notation of the general relativity theory, field values on the world-lines, general statement of the physical problem, Newton's theory of gravitation, and forms for the equation of motion of the second kind. The text then takes a look at the approximation methods

  14. Predictive local receptive fields based respiratory motion tracking for motion-adaptive radiotherapy.

    Science.gov (United States)

    Yubo Wang; Tatinati, Sivanagaraja; Liyu Huang; Kim Jeong Hong; Shafiq, Ghufran; Veluvolu, Kalyana C; Khong, Andy W H

    2017-07-01

    Extracranial robotic radiotherapy employs external markers and a correlation model to trace the tumor motion caused by respiration. Real-time tracking of tumor motion, however, requires a prediction model to compensate for the latencies induced by the software (image data acquisition and processing) and hardware (mechanical and kinematic) limitations of the treatment system. A new prediction algorithm based on local receptive fields extreme learning machines (pLRF-ELM) is proposed for respiratory motion prediction. Existing respiratory motion prediction methods model the non-stationary respiratory motion traces directly to predict future values. Unlike these methods, pLRF-ELM performs prediction by modeling higher-level features obtained by mapping the raw respiratory motion into the random feature space of the ELM, rather than by modeling the raw respiratory motion directly. The developed method is evaluated using a dataset acquired from 31 patients for two prediction horizons in line with the latencies of treatment systems such as CyberKnife. Results showed that pLRF-ELM is superior to existing prediction methods. The results further highlight that the abstracted higher-level features are suitable for approximating the nonlinear and non-stationary characteristics of respiratory motion for accurate prediction.
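
    A minimal sketch of the extreme-learning-machine idea behind such predictors: recent motion samples are mapped through a fixed random hidden layer, and the output weights are solved in closed form. It illustrates plain ELM regression on a synthetic breathing trace, not the pLRF-ELM variant described above; the trace, embedding order, horizon, and hidden-layer size are all assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic quasi-periodic "respiratory" trace (assumption, for illustration only).
    t = np.arange(0, 120, 0.1)
    trace = np.sin(2 * np.pi * 0.25 * t) * (1 + 0.1 * np.sin(2 * np.pi * 0.02 * t))

    def embed(signal, order, horizon):
        """Build (past-samples, future-value) pairs for ahead-of-time prediction."""
        X = np.array([signal[i:i + order] for i in range(len(signal) - order - horizon)])
        y = signal[order + horizon:]
        return X, y

    order, horizon = 20, 5            # 2 s history, 0.5 s prediction horizon (assumed)
    X, y = embed(trace, order, horizon)

    # ELM: fixed random hidden layer, analytically solved output weights.
    n_hidden = 100
    W = rng.normal(size=(order, n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # random higher-level features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares output weights

    pred = np.tanh(X @ W + b) @ beta
    rmse = np.sqrt(np.mean((pred - y) ** 2))
    print(f"in-sample RMSE: {rmse:.4f}")
    ```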

  15. Slow motion in films and video clips: Music influences perceived duration and emotion, autonomic physiological activation and pupillary responses.

    Science.gov (United States)

    Wöllner, Clemens; Hammerschmidt, David; Albrecht, Henning

    2018-01-01

    Slow motion scenes are ubiquitous in screen-based audiovisual media and are typically accompanied by emotional music. The strong effects of slow motion on observers are hypothetically related to heightened emotional states in which time seems to pass more slowly. These states are simulated in films and video clips, and seem to resemble such experiences in daily life. The current study investigated time perception and emotional responses to media clips containing decelerated human motion, with or without music, using psychometric and psychophysiological testing methods. Participants were presented with slow-motion scenes taken from commercial films, ballet and sports footage, as well as the same scenes converted to real time. Results reveal that slow-motion scenes, compared to adapted real-time scenes, led to systematic underestimations of duration, lower perceived arousal but higher valence, lower respiration rates and smaller pupillary diameters. The presence of music, compared to visual-only presentations, strongly affected results in terms of higher accuracy in duration estimates, higher perceived arousal and valence, higher physiological activation and larger pupillary diameters, indicating higher arousal. Video genre additionally affected responses. These findings suggest that perceiving slow motion is not related to states of high arousal, but rather affects cognitive dimensions of perceived time and valence. Music influences these experiences profoundly, thus strengthening the impact of stretched time in audiovisual media.

  16. Motion Transplantation Techniques: A Survey

    NARCIS (Netherlands)

    van Basten, Ben; Egges, Arjan

    2012-01-01

    During the past decade, researchers have developed several techniques for transplanting motions. These techniques transplant a partial auxiliary motion, possibly defined for a small set of degrees of freedom, on a base motion. Motion transplantation improves motion databases' expressiveness and

  17. Adaptive Changes in the Perception of Fast and Slow Movement at Different Head Positions.

    Science.gov (United States)

    Panichi, Roberto; Occhigrossi, Chiara; Ferraresi, Aldo; Faralli, Mario; Lucertini, Marco; Pettorossi, Vito E

    2017-05-01

    This paper examines the subjective sense of orientation during asymmetric body rotations in normal subjects. Self-motion perception was investigated in 10 healthy individuals during asymmetric whole-body rotation with different head orientations. Both on-vertical axis and off-vertical axis rotations were employed. Subjects tracked a remembered earth-fixed visual target while rotating in the dark for four cycles of asymmetric rotation (two half-sinusoidal cycles of the same amplitude, but of different duration). The rotations induced a bias in the perception of velocity (more pronounced with fast than with slow motion). At the end of rotation, a marked target position error (TPE) was present. For the on-vertical axis rotations, the TPE was no different whether the rotations were performed with a 30° nose-down, a 60° nose-up, or a 90° side-down head tilt. With off-vertical axis rotations, the simultaneous activation of the semicircular canals and otolithic receptors produced a significant increase of the TPE for all head positions. This difference between on-vertical and off-vertical axis rotation was probably due partly to the vestibular transfer function and partly to different adaptation to the speed of rotation. Such a phenomenon might be generated in different components of the vestibular system. The adaptive process enhancing the perception of dynamic movement around the vertical axis is not related to the specific semicircular canals that are activated; the addition of an otolithic component results in a significant increase of the TPE. Panichi R, Occhigrossi C, Ferraresi A, Faralli M, Lucertini M, Pettorossi VE. Adaptive changes in the perception of fast and slow movement at different head positions. Aerosp Med Hum Perform. 2017; 88(5):463-468.

  18. Objects in Motion

    Science.gov (United States)

    Damonte, Kathleen

    2004-01-01

    One thing scientists study is how objects move. A famous scientist named Sir Isaac Newton (1642-1727) spent a lot of time observing objects in motion and came up with three laws that describe how things move. This explanation only deals with the first of his three laws of motion. Newton's First Law of Motion says that moving objects will continue…

  19. Marker-Free Human Motion Capture

    DEFF Research Database (Denmark)

    Grest, Daniel

    Human Motion Capture is a widely used technique to obtain motion data for animation of virtual characters. Commercial optical motion capture systems are marker-based. This book is about marker-free motion capture and its possibilities to acquire motion from a single viewing direction. The focus...

  20. The influence of action-effect anticipation on bistable perception: Differences between onset rivalry and ambiguous motion

    NARCIS (Netherlands)

    Dogge, M.; Gayet, S.; Custers, R.; Aarts, H.A.G.

    2018-01-01

    Perception is strongly shaped by the actions we perform. According to the theory of event coding, and forward models of motor control, goal-directed action preparation activates representations of desired effects. These expectations about the precise stimulus identity of one's action-outcomes (i.e.

  1. Respiratory impact on motion sickness induced by linear motion

    NARCIS (Netherlands)

    Mert, A.; Klöpping-Ketelaars, I.; Bles, W.

    2009-01-01

    Motion sickness incidence (MSI) for vertical sinusoidal motion reaches a maximum at 0.167 Hz. Normal breathing frequency is close to this frequency. There is some evidence for synchronization of breathing with this stimulus frequency. If this enforced breathing takes place over a larger frequency

  2. A unified model of heading and path perception in primate MSTd.

    Directory of Open Access Journals (Sweden)

    Oliver W Layton

    2014-02-01

    Full Text Available Self-motion, steering, and obstacle avoidance during navigation in the real world require humans to travel along curved paths. Many perceptual models have been proposed that focus on heading, which specifies the direction of travel along straight paths, but not on path curvature, which humans accurately perceive and which is critical to everyday locomotion. In primates, including humans, dorsal medial superior temporal area (MSTd) has been implicated in heading perception. However, the majority of MSTd neurons respond optimally to spiral patterns, rather than to the radial expansion patterns associated with heading. No existing theory of curved path perception explains the neural mechanisms by which humans accurately assess path, and no functional role for spiral-tuned cells has yet been proposed. Here we present a computational model that demonstrates how the continuum of observed cells (radial to circular) in MSTd can simultaneously code curvature and heading across the neural population. Curvature is encoded through the spirality of the most active cell, and heading is encoded through the visuotopic location of the center of the most active cell's receptive field. Model curvature and heading errors fit those made by humans. Our model challenges the view that the function of MSTd is heading estimation; based on our analysis, we claim that it is primarily concerned with trajectory estimation and the simultaneous representation of both curvature and heading. In our model, temporal dynamics afford time-history in the neural representation of optic flow, which may modulate its structure. This has far-reaching implications for the interpretation of studies that assume that optic flow is, and should be, represented as an instantaneous vector field. Our results suggest that spiral motion patterns that emerge in spatio-temporal optic flow are essential for guiding self-motion along complex trajectories, and that cells in MSTd are specifically tuned to extract

  3. A Unified Model of Heading and Path Perception in Primate MSTd

    Science.gov (United States)

    Layton, Oliver W.; Browning, N. Andrew

    2014-01-01

    Self-motion, steering, and obstacle avoidance during navigation in the real world require humans to travel along curved paths. Many perceptual models have been proposed that focus on heading, which specifies the direction of travel along straight paths, but not on path curvature, which humans accurately perceive and which is critical to everyday locomotion. In primates, including humans, dorsal medial superior temporal area (MSTd) has been implicated in heading perception. However, the majority of MSTd neurons respond optimally to spiral patterns, rather than to the radial expansion patterns associated with heading. No existing theory of curved path perception explains the neural mechanisms by which humans accurately assess path, and no functional role for spiral-tuned cells has yet been proposed. Here we present a computational model that demonstrates how the continuum of observed cells (radial to circular) in MSTd can simultaneously code curvature and heading across the neural population. Curvature is encoded through the spirality of the most active cell, and heading is encoded through the visuotopic location of the center of the most active cell's receptive field. Model curvature and heading errors fit those made by humans. Our model challenges the view that the function of MSTd is heading estimation; based on our analysis, we claim that it is primarily concerned with trajectory estimation and the simultaneous representation of both curvature and heading. In our model, temporal dynamics afford time-history in the neural representation of optic flow, which may modulate its structure. This has far-reaching implications for the interpretation of studies that assume that optic flow is, and should be, represented as an instantaneous vector field. Our results suggest that spiral motion patterns that emerge in spatio-temporal optic flow are essential for guiding self-motion along complex trajectories, and that cells in MSTd are specifically tuned to extract complex trajectory

  4. Magnetosensitive e-skins with directional perception for augmented reality

    Science.gov (United States)

    Cañón Bermúdez, Gilbert Santiago; Karnaushenko, Dmitriy D.; Karnaushenko, Daniil; Lebanov, Ana; Bischoff, Lothar; Kaltenbrunner, Martin; Fassbender, Jürgen; Schmidt, Oliver G.; Makarov, Denys

    2018-01-01

    Electronic skins equipped with artificial receptors are able to extend our perception beyond the modalities that have naturally evolved. These synthetic receptors offer complementary information on our surroundings and endow us with novel means of manipulating physical or even virtual objects. We realize highly compliant magnetosensitive skins with directional perception that enable magnetic cognition, body position tracking, and touchless object manipulation. Transfer printing of eight high-performance spin valve sensors arranged into two Wheatstone bridges onto 1.7-μm-thick polyimide foils ensures mechanical imperceptibility. This represents a new class of interactive devices that extract information from the surroundings through magnetic tags. We demonstrate this concept in augmented reality systems with virtual knob-turning functions and the operation of virtual dialing pads, based on the interaction with magnetic fields. This technology will enable a cornucopia of applications from navigation, motion tracking in robotics, regenerative medicine, and sports and gaming to interaction in supplemented reality. PMID:29376121
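
    A toy illustration of the directional-perception principle: two orthogonal Wheatstone-bridge outputs that vary roughly as the cosine and sine of the in-plane field angle can be combined with atan2 to recover the direction of a magnetic tag. The idealized sensor model and noise level are assumptions, not the authors' calibration or readout chain.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def bridge_outputs(angle_rad, amplitude=1.0, noise=0.02):
        """Idealized outputs of two orthogonal spin-valve bridges (assumed model)."""
        vx = amplitude * np.cos(angle_rad) + rng.normal(0, noise)
        vy = amplitude * np.sin(angle_rad) + rng.normal(0, noise)
        return vx, vy

    def decode_direction(vx, vy):
        """Recover the in-plane field direction from the two bridge voltages."""
        return np.arctan2(vy, vx)

    true_angle = np.deg2rad(37.0)
    vx, vy = bridge_outputs(true_angle)
    print(f"decoded direction: {np.rad2deg(decode_direction(vx, vy)):.1f} deg (true 37.0 deg)")
    ```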

  5. Teaching Newton's 3rd law of motion using learning by design approach

    Science.gov (United States)

    Aquino, Jiezel G.; Caliguid, Mariel P.; Buan, Amelia T.; Magsayod, Joy R.; Lahoylahoy, Myrna E.

    2018-01-01

    This paper presents the process and implementation of the Learning by Design approach in teaching Newton's 3rd Law of Motion. A lesson activity from integrative STEM education was adapted, modified and enhanced through pilot testing. After revisions, the implementation was carried out in one class. The respondents' prior knowledge was first assessed by a pretest. PPIT (present the scenario, plan, implement and test) was the framework followed in the implementation of Learning by Design. Worksheets were then utilized to measure the students' conceptual understanding and perception. A scoring guide was also used to evaluate the students' output. Paired t-test analysis showed a significant difference between the pretest and posttest achievement scores. This implies that the performance of the students improved during the implementation of Learning by Design. The analysis of variance also shows that low-, average- and high-achieving students all benefited from the Learning by Design approach. The results of this study suggest that Learning by Design is an effective approach to teaching Newton's 3rd Law of Motion and can thus be used in the science classroom.

  6. Multi-Modal Inference in Animacy Perception for Artificial Object

    Directory of Open Access Journals (Sweden)

    Kohske Takahashi

    2011-10-01

    Full Text Available Sometimes we feel animacy for artificial objects and their motion. Animals usually interact with environments through multiple sensory modalities. Here we investigated how the sensory responsiveness of artificial objects to the environment contributes to animacy judgments about them. In a 90-s trial, observers freely viewed four objects moving in a virtual 3D space. The objects, whose position and motion were determined following Perlin-noise series, kept drifting independently in the space. Visual flashes, auditory bursts, or synchronous flashes and bursts appeared at 1–2 s intervals. The first object abruptly accelerated its motion just after visual flashes, giving an impression of responding to the flash. The second object responded to the bursts. The third object responded to synchronous flashes and bursts. The fourth object accelerated at a random timing independent of flashes and bursts. The observers rated how strongly they felt animacy for each object. The results showed that the object responding to the auditory bursts was rated as having weaker animacy than the other objects. This implies that the sensory modality through which an object interacts with the environment may be a factor in animacy perception and may serve as the basis of multi-modal and cross-modal inference of animacy.

  7. Neural correlates of the perception of dynamic versus static facial expressions of emotion.

    Science.gov (United States)

    Kessler, Henrik; Doyen-Waldecker, Cornelia; Hofer, Christian; Hoffmann, Holger; Traue, Harald C; Abler, Birgit

    2011-04-20

    This study investigated brain areas involved in the perception of dynamic facial expressions of emotion. A group of 30 healthy subjects was measured with fMRI while passively viewing prototypical facial expressions of fear, disgust, sadness and happiness. Using morphing techniques, all faces were displayed as still images and also dynamically as a film clip with the expressions evolving from neutral to emotional. Irrespective of the specific emotion, dynamic stimuli selectively activated the bilateral superior temporal sulcus, visual area V5, fusiform gyrus, thalamus and other frontal and parietal areas. Interaction effects of emotion and mode of presentation (static/dynamic) were only found for the expression of happiness, where static faces evoked greater activity in the medial prefrontal cortex. Our results confirm previous findings on the neural correlates of the perception of dynamic facial expressions and are in line with studies showing the importance of the superior temporal sulcus and V5 in the perception of biological motion. The differential activation in the fusiform gyrus for dynamic stimuli stands in contrast to classical models of face perception but is consistent with new findings arguing for a more general role of the fusiform gyrus in the processing of socially relevant stimuli.

  8. Distance and Size Perception in Astronauts during Long-Duration Spaceflight

    Directory of Open Access Journals (Sweden)

    Gilles Clément

    2013-12-01

    Full Text Available Exposure to microgravity during spaceflight is known to elicit orientation illusions, errors in sensory localization, postural imbalance, changes in vestibulo-spinal and vestibulo-ocular reflexes, and space motion sickness. The objective of this experiment was to investigate whether an alteration in cognitive visual-spatial processing, such as the perception of the distance and size of objects, also takes place during prolonged exposure to microgravity. Our results show that astronauts on board the International Space Station exhibit biases in the perception of their environment. Objects' heights and depths were perceived as taller and shallower, respectively, and distances were generally underestimated in orbit compared to on Earth. These changes may occur because the perspective cues for depth are less salient in microgravity, or because the eye-height scaling of size is different when an observer is not standing on the ground. This finding has operational implications for human space exploration missions.

  9. What motion is: William Neile and the laws of motion.

    Science.gov (United States)

    Kemeny, Max

    2017-07-01

    In 1668-1669 William Neile and John Wallis engaged in a protracted correspondence regarding the nature of motion. Neile was unhappy with the laws of motion that had been established by the Royal Society in three papers published in 1668, deeming them not explanations of motion at all, but mere descriptions. Neile insisted that science could not be informative without a discussion of causes, meaning that Wallis's purely kinematic account of collision could not be complete. Wallis, however, did not consider Neile's objections to his work to be serious. Rather than engage in a discussion of the proper place of natural philosophy in science, Wallis decided to show how Neile's preferred treatment of motion led to absurd conclusions. This dispute is offered as a case study of dispute resolution within the early Royal Society.

  10. Early Improper Motion Detection in Golf Swings Using Wearable Motion Sensors: The First Approach

    Science.gov (United States)

    Stančin, Sara; Tomažič, Sašo

    2013-01-01

    This paper presents an analysis of the golf swing aimed at detecting improper motion in the early phase of the swing. Led by the desire to achieve a consistent shot outcome, a particular golfer would (over multiple trials) prefer to perform completely identical golf swings. In reality, some deviations from the desired motion are always present due to the comprehensive nature of the swing motion. Swing motion deviations that are not detrimental to performance are acceptable. The analysis is conducted using a golfer's leading-arm kinematic data, obtained from a golfer wearing a motion sensor composed of gyroscopes and accelerometers. Applying principal component analysis (PCA) to reference observations of properly performed swings, the PCA components of acceptable swing motion deviations are established. Using these components, the motion deviations in observations of other swings are examined. Any unacceptable deviations that are detected indicate an improper swing motion. Arbitrarily long observations of an individual player's swing sequences can be included in the analysis. The results obtained for the considered example show an improper swing motion in the early phase of the swing, i.e., the first part of the backswing. An early detection method for improper swing motion, applied on an individual basis, provides assistance for performance improvement. PMID:23752563
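
    A minimal sketch of the PCA-based deviation check described above: fit PCA to feature vectors from properly performed reference swings, then flag a new swing whose residual outside the retained components exceeds what the reference set exhibits. The feature extraction, component count, and threshold rule are assumptions, not the authors' exact pipeline.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)

    # Hypothetical data: each row is a feature vector describing the early phase of one
    # swing (e.g., resampled leading-arm angular velocities from the gyroscopes).
    reference_swings = rng.normal(size=(30, 50)) * 0.1 + np.sin(np.linspace(0, 3, 50))

    pca = PCA(n_components=5).fit(reference_swings)

    def residual(swing):
        """Reconstruction error outside the subspace of acceptable swing variation."""
        recon = pca.inverse_transform(pca.transform(swing[None, :]))
        return np.linalg.norm(swing - recon)

    # Threshold taken from the reference swings themselves (assumed rule: mean + 3 SD).
    ref_res = np.array([residual(s) for s in reference_swings])
    threshold = ref_res.mean() + 3 * ref_res.std()

    new_swing = reference_swings[0] + 0.8 * rng.normal(size=50)   # deliberately corrupted
    print("improper motion detected" if residual(new_swing) > threshold else "swing acceptable")
    ```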

  11. Motion illusions in optical art presented for long durations are temporally distorted.

    Science.gov (United States)

    Nather, Francisco Carlos; Mecca, Fernando Figueiredo; Bueno, José Lino Oliveira

    2013-01-01

    Static figurative images implying human body movement affect the perception of time when observed for shorter or longer durations. This study examined whether images of static geometric shapes would also affect the perception of time. Undergraduate participants observed two Optical Art paintings by Bridget Riley for 9 or 36 s (groups G9 and G36, respectively). Paintings implying different intensities of movement (2.0- and 6.0-point stimuli) were randomly presented. The prospective paradigm with the reproduction method was used to record time estimations. Data analysis did not show time distortions in the G9 group. In the G36 group the paintings were perceived differently: the duration of the 2.0-point painting was estimated to be shorter than that of the 6.0-point painting. Also in G36, the 2.0-point painting was underestimated in comparison with the actual time of exposure. Motion illusions in static images affected time estimation according to the attention given to the complexity of movement by the observer, probably leading to changes in the storage velocity of internal clock pulses.

  12. S5-2: Shifting the Perspective on Biological Movement Perception

    Directory of Open Access Journals (Sweden)

    Zsolt Palatinus

    2012-10-01

    Full Text Available Most efforts to understand biological movement perception seem to agree in assuming a key role for some version of 2D projective geometry as the basis of either computation or invariant detection somewhere between the stimulus and perception. Recent studies invite considering alternatives. Beintema and Lappe (2002, PNAS 99, 5661–5663) constructed sequential walker displays in which points were assigned to random positions along the limb segments at each frame, and still reported correct responses. In our study, point-light displays were prepared from motion-capture data of humans performing everyday activities. Animations were rendered either from a fixed camera position or from a curvilinear trajectory around the target. The same mean response time in the two conditions suggests that information for making correct judgments remained accessible despite the superposition of camera-movement-induced changes on the projection. An alternative approach is offered, based on Gibson's (1986, Lawrence Erlbaum Associates) conceptualization of the ambient optic array, a continuous energy field in which perception-action systems are immersed. The registration of these energy distributions manifests in fractal, scale-invariant fluctuations of exploratory movements, suggesting that there may be subtle contributions of previously unrecognized fluctuations. Fractal fluctuations may serve as a modality-general substrate for the detection of information for perception and even cognition (Dixon et al., 2012, Topics in Cognitive Science 4, 51–62; Stephen & Hajnal, 2011, Attention, Perception, & Psychophysics 73, 1302–1328). In this work, we consider the possibility that subtle fluctuations in seated posture and head sway moderate the effects of optical energy arrays upon the perceptual system.

  13. Efficient spiking neural network model of pattern motion selectivity in visual cortex.

    Science.gov (United States)

    Beyeler, Michael; Richert, Micah; Dutt, Nikil D; Krichmar, Jeffrey L

    2014-07-01

    Simulating large-scale models of biological motion perception is challenging, due to the required memory to store the network structure and the computational power needed to quickly solve the neuronal dynamics. A low-cost yet high-performance approach to simulating large-scale neural network models in real-time is to leverage the parallel processing capability of graphics processing units (GPUs). Based on this approach, we present a two-stage model of visual area MT that we believe to be the first large-scale spiking network to demonstrate pattern direction selectivity. In this model, component-direction-selective (CDS) cells in MT linearly combine inputs from V1 cells that have spatiotemporal receptive fields according to the motion energy model of Simoncelli and Heeger. Pattern-direction-selective (PDS) cells in MT are constructed by pooling over MT CDS cells with a wide range of preferred directions. Responses of our model neurons are comparable to electrophysiological results for grating and plaid stimuli as well as speed tuning. The behavioral response of the network in a motion discrimination task is in agreement with psychophysical data. Moreover, our implementation outperforms a previous implementation of the motion energy model by orders of magnitude in terms of computational speed and memory usage. The full network, which comprises 153,216 neurons and approximately 40 million synapses, processes 20 frames per second of a 40 × 40 input video in real-time using a single off-the-shelf GPU. To promote the use of this algorithm among neuroscientists and computer vision researchers, the source code for the simulator, the network, and analysis scripts are publicly available.
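
    A toy numpy illustration (not the authors' spiking GPU implementation) of the pooling step described above: component-direction-selective (CDS) responses to the two gratings of a plaid are combined by a pattern cell that pools CDS cells across preferred directions with cosine (opponent) weights, yielding a response peaked at the plaid's pattern direction. The tuning widths and the pooling kernel are assumptions chosen for clarity.

    ```python
    import numpy as np

    directions = np.deg2rad(np.arange(0, 360, 5))        # preferred directions of CDS cells

    def cds_response(preferred, component_dirs, kappa=4.0):
        """Component cell: sums von Mises tuning to each grating's motion direction."""
        return sum(np.exp(kappa * (np.cos(preferred - d) - 1)) for d in component_dirs)

    def pds_response(preferred, component_dirs):
        """Pattern cell: pools CDS cells with cosine weights (excitation near its own
        preference, inhibition for the opposite direction), then half-rectifies."""
        weights = np.cos(directions - preferred)
        cds = np.array([cds_response(d, component_dirs) for d in directions])
        return max(weights @ cds, 0.0)

    # Plaid made of two gratings moving at +60 and -60 deg around a 0-deg pattern direction.
    components = [np.deg2rad(60), np.deg2rad(-60)]
    pds = np.array([pds_response(d, components) for d in directions])
    print("PDS tuning peaks at", np.rad2deg(directions[np.argmax(pds)]), "deg")
    ```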

  14. Motion compensated digital tomosynthesis

    NARCIS (Netherlands)

    van der Reijden, Anneke; van Herk, Marcel; Sonke, Jan-Jakob

    2013-01-01

    Digital tomosynthesis (DTS) is a limited angle image reconstruction method for cone beam projections that offers patient surveillance capabilities during VMAT based SBRT delivery. Motion compensation (MC) has the potential to mitigate motion artifacts caused by respiratory motion, such as blur. The

  15. Rolling Shutter Motion Deblurring

    KAUST Repository

    Su, Shuochen

    2015-06-07

    Although motion blur and rolling shutter deformations are closely coupled artifacts in images taken with CMOS image sensors, the two phenomena have so far mostly been treated separately, with deblurring algorithms being unable to handle rolling shutter wobble, and rolling shutter algorithms being incapable of dealing with motion blur. We propose an approach that delivers sharp and undistorted output given a single rolling shutter motion blurred image. The key to achieving this is a global modeling of the camera motion trajectory, which enables each scanline of the image to be deblurred with the corresponding motion segment. We show the results of the proposed framework through experiments on synthetic and real data.

  16. Stochastic ground motion simulation

    Science.gov (United States)

    Rezaeian, Sanaz; Xiaodan, Sun; Beer, Michael; Kougioumtzoglou, Ioannis A.; Patelli, Edoardo; Siu-Kui Au, Ivan

    2014-01-01

    Strong earthquake ground motion records are fundamental in engineering applications. Ground motion time series are used in response-history dynamic analysis of structural or geotechnical systems. In such analysis, the validity of predicted responses depends on the validity of the input excitations. Ground motion records are also used to develop ground motion prediction equations (GMPEs) for intensity measures such as spectral accelerations that are used in response-spectrum dynamic analysis. Despite the thousands of available strong ground motion records, there remains a shortage of records for large-magnitude earthquakes at short distances or in specific regions, as well as records that sample specific combinations of source, path, and site characteristics.

  17. Novel true-motion estimation algorithm and its application to motion-compensated temporal frame interpolation.

    Science.gov (United States)

    Dikbas, Salih; Altunbasak, Yucel

    2013-08-01

    In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) that minimize temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better-quality interpolated frames, the dense motion field at the interpolation instant is obtained for both forward and backward MVs; then, bidirectional motion compensation is applied by blending the forward and backward predictions. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and against the smoothness-constrained optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better than that of the compared MCFRUC techniques.
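
    A minimal sketch of the general idea only: plain full-search block matching followed by a simple bidirectional blend at the mid-frame. It does not implement the proposed TME algorithm or its smoothness constraints; the block size, search range, and blending rule are assumptions.

    ```python
    import numpy as np

    def block_match(prev, nxt, block=8, search=4):
        """Brute-force block matching: one motion vector per block, from prev into nxt."""
        H, W = prev.shape
        mvs = np.zeros((H // block, W // block, 2), dtype=int)
        for by in range(H // block):
            for bx in range(W // block):
                y0, x0 = by * block, bx * block
                ref = prev[y0:y0 + block, x0:x0 + block]
                best, best_mv = np.inf, (0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y1, x1 = y0 + dy, x0 + dx
                        if 0 <= y1 <= H - block and 0 <= x1 <= W - block:
                            sad = np.abs(ref - nxt[y1:y1 + block, x1:x1 + block]).sum()
                            if sad < best:
                                best, best_mv = sad, (dy, dx)
                mvs[by, bx] = best_mv
        return mvs

    def interpolate_midframe(prev, nxt, mvs, block=8):
        """Bidirectional compensation: blend the previous frame shifted forward by half
        the motion vector with the next frame shifted back by the remaining half."""
        H, W = prev.shape
        out = np.zeros((H, W), dtype=float)
        for by in range(H // block):
            for bx in range(W // block):
                dy, dx = mvs[by, bx]
                y0, x0 = by * block, bx * block
                yp = np.clip(y0 - dy // 2, 0, H - block)
                xp = np.clip(x0 - dx // 2, 0, W - block)
                yn = np.clip(y0 + dy - dy // 2, 0, H - block)
                xn = np.clip(x0 + dx - dx // 2, 0, W - block)
                out[y0:y0 + block, x0:x0 + block] = 0.5 * (prev[yp:yp + block, xp:xp + block]
                                                           + nxt[yn:yn + block, xn:xn + block])
        return out

    # Tiny example: a bright square translating 4 pixels to the right between two frames.
    prev = np.zeros((32, 32)); prev[12:20, 8:16] = 1.0
    nxt = np.zeros((32, 32));  nxt[12:20, 12:20] = 1.0
    mid = interpolate_midframe(prev, nxt, block_match(prev, nxt))
    print("interpolated square occupies columns", np.where(mid.sum(axis=0) > 0)[0])
    ```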

  18. Attraction of posture and motion-trajectory elements of conspecific biological motion in medaka fish.

    Science.gov (United States)

    Shibai, Atsushi; Arimoto, Tsunehiro; Yoshinaga, Tsukasa; Tsuchizawa, Yuta; Khureltulga, Dashdavaa; Brown, Zuben P; Kakizuka, Taishi; Hosoda, Kazufumi

    2018-06-05

    Visual recognition of conspecifics is necessary for a wide range of social behaviours in many animals. Medaka (Japanese rice fish), a commonly used model organism, are known to be attracted by the biological motion of conspecifics. However, biological motion is a composite of both body-shape motion and entire-field motion trajectory (i.e., posture and motion-trajectory elements, respectively), and it has not been established which element mediates the attractiveness. Here, we show that either the posture or the motion-trajectory element alone can attract medaka. We decomposed the biological motion of medaka into the two elements and synthesized visual stimuli that contained both, either, or neither of the two elements. We found that medaka were attracted by visual stimuli that contained at least one of the two elements. Considered alongside other known static visual cues in medaka, these results add to the accumulating evidence that multiple kinds of information contribute to conspecific recognition. Our strategy of decomposing biological motion into these partial elements is applicable to other animals, and further studies using this technique will enhance the basic understanding of visual recognition of conspecifics.

  19. 41 CFR 60-30.8 - Motions; disposition of motions.

    Science.gov (United States)

    2010-07-01

    ... a supporting memorandum. Within 10 days after a written motion is served, or such other time period... writing. If made at the hearing, motions may be stated orally; but the Administrative Law Judge may require that they be reduced to writing and filed and served on all parties in the same manner as a formal...

  20. Impaired global, and compensatory local, biological motion processing in people with high levels of autistic traits.

    Science.gov (United States)

    van Boxtel, Jeroen J A; Lu, Hongjing

    2013-01-01

    People with Autism Spectrum Disorder (ASD) are hypothesized to have poor high-level processing but superior low-level processing, causing impaired social recognition, and a focus on non-social stimulus contingencies. Biological motion perception provides an ideal domain to investigate exactly how ASD modulates the interaction between low and high-level processing, because it involves multiple processing stages, and carries many important social cues. We investigated individual differences among typically developing observers in biological motion processing, and whether such individual differences associate with the number of autistic traits. In Experiment 1, we found that individuals with fewer autistic traits were automatically and involuntarily attracted to global biological motion information, whereas individuals with more autistic traits did not show this pre-attentional distraction. We employed an action adaptation paradigm in the second study to show that individuals with more autistic traits were able to compensate for deficits in global processing with an increased involvement in local processing. Our findings can be interpreted within a predictive coding framework, which characterizes the functional relationship between local and global processing stages, and explains how these stages contribute to the perceptual difficulties associated with ASD.

  1. Impaired global, and compensatory local, biological motion processing in people with high levels of autistic traits

    Directory of Open Access Journals (Sweden)

    Jeroen J A Van Boxtel

    2013-04-01

    Full Text Available People with Autism Spectrum Disorder (ASD) are hypothesized to have poor high-level processing but superior low-level processing, causing impaired social recognition, and a focus on non-social stimulus contingencies. Biological motion perception provides an ideal domain to investigate exactly how ASD modulates the interaction between low and high-level processing, because it involves multiple processing stages, and carries many important social cues. We investigated individual differences among typically developing observers in biological motion processing, and whether such individual differences associate with the number of autistic traits. In Experiment 1, we found that individuals with fewer autistic traits were automatically and involuntarily attracted to global biological motion information, whereas individuals with more autistic traits did not show this pre-attentional distraction. We employed an action adaptation paradigm in the second study to show that individuals with more autistic traits were able to compensate for deficits in global processing with an increased involvement in local processing. Our findings can be interpreted within a predictive coding framework, which characterizes the functional relationship between local and global processing stages, and explains how these stages contribute to the perceptual difficulties associated with ASD.

  2. Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain.

    Science.gov (United States)

    Schindler, Andreas; Bartels, Andreas

    2018-05-15

    Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. Here we circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head had been stabilized by inflatable air cushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion, which was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing congruent with incongruent conditions we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in the precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Visual search for motion-form conjunctions: is form discriminated within the motion system?

    Science.gov (United States)

    von Mühlenen, A; Müller, H J

    2001-06-01

    Motion-form conjunction search can be more efficient when the target is moving (a moving 45 degrees tilted line among moving vertical and stationary 45 degrees tilted lines) rather than stationary. This asymmetry may be due to aspects of form being discriminated within a motion system representing only moving items, whereas discrimination of stationary items relies on a static form system (J. Driver & P. McLeod, 1992). Alternatively, it may be due to search exploiting differential motion velocity and direction signals generated by the moving-target and distractor lines. To decide between these alternatives, 4 experiments systematically varied the motion-signal information conveyed by the moving target and distractors while keeping their form difference salient. Moving-target search was found to be facilitated only when differential motion-signal information was available. Thus, there is no need to assume that form is discriminated within the motion system.

  4. Sensitivity of neurons in the middle temporal area of marmoset monkeys to random dot motion.

    Science.gov (United States)

    Chaplin, Tristan A; Allitt, Benjamin J; Hagan, Maureen A; Price, Nicholas S C; Rajan, Ramesh; Rosa, Marcello G P; Lui, Leo L

    2017-09-01

    basis of motion perception in the marmoset, a small primate species that is becoming increasingly popular as an experimental model. Copyright © 2017 the American Physiological Society.

  5. Orbit-attitude coupled motion around small bodies: Sun-synchronous orbits with Sun-tracking attitude motion

    Science.gov (United States)

    Kikuchi, Shota; Howell, Kathleen C.; Tsuda, Yuichi; Kawaguchi, Jun'ichiro

    2017-11-01

    The motion of a spacecraft in proximity to a small body is significantly perturbed due to its irregular gravity field and solar radiation pressure. In such a strongly perturbed environment, the coupling effect of the orbital and attitude motions exerts a large influence that cannot be neglected. However, natural orbit-attitude coupled dynamics around small bodies that are stationary in both orbital and attitude motions have yet to be observed. The present study therefore investigates natural coupled motion that involves both a Sun-synchronous orbit and Sun-tracking attitude motion. This orbit-attitude coupled motion enables a spacecraft to maintain its orbital geometry and attitude state with respect to the Sun without requiring active control. Therefore, the proposed method can reduce the use of an orbit and attitude control system. This paper first presents analytical conditions to achieve Sun-synchronous orbits and Sun-tracking attitude motion. These analytical solutions are then numerically propagated based on non-linear coupled orbit-attitude equations of motion. Consequently, the possibility of implementing Sun-synchronous orbits with Sun-tracking attitude motion is demonstrated.

  6. AQUA-motion domain and metaphorization patterns in European Portuguese: AQUA-motion metaphor in AERO-motion and abstract domains

    Directory of Open Access Journals (Sweden)

    Hanna Jakubowicz Batoréo

    2016-03-01

    Full Text Available The AQUA-motion verbs – as studied by Majsak & Rahilina 2003 and 2007, Lander, Majsak & Rahilina [2005] 2008, 2012 and 2013, and Divjak & Lemmens 2007, and in European Portuguese (EP) by Batoréo, 2007, 2008, 2009; Batoréo et al., 2007; Casadinho, 2007 – typically allow metaphorical uses, which we postulate can be organized into patterns. Our study shows that in European Portuguese two metaphorization patterns can be observed: (i) the AQUA-motion metaphor in the AERO-motion domain and (ii) the AQUA-motion metaphor in abstract domains (e.g. abundance, arts, politics, etc.). In the first case, where the target domain of the metaphorization is the air, in EP we navigate through a crowd or we float in a waltz, whereas in the second, where it is abstract, we swim in money or in blood, and politicians navigate at sea or face floating currency in finance. In the present paper we survey the EP verbs of AQUA-motion metaphors in non-elicited data from electronically available language corpora (cf. Linguateca). In some cases comparisons are made with typologically different languages (e.g. Polish; cf. Prokofjeva 2007, Batoréo 2009).

  7. Probing links between action perception and action production in Parkinson's disease using Fitts' law.

    Science.gov (United States)

    Sakurada, Takeshi; Knoblich, Guenther; Sebanz, Natalie; Muramatsu, Shin-Ichi; Hirai, Masahiro

    2018-03-01

    Little is known about how the subcortical brain encodes the information required to execute actions or to evaluate others' actions. To clarify this link, Fitts'-law tasks for perception and execution were tested in patients with Parkinson's disease (PD). For the perception task, participants were shown apparent-motion displays of a person moving their arm between two identical targets and reported whether they judged that the person could realistically move at the perceived speed without missing the targets. For the motor task, participants were required to touch the two targets as quickly and accurately as possible, similarly to the person observed in the perception task. In both tasks, the PD group exhibited, or imputed to others, significantly slower performance than the control group. However, in both groups, the relationships of perception and execution with task difficulty were exactly those predicted by Fitts' law. This suggests that, despite dysfunction of the subcortical region, motor simulation abilities reflected mechanisms of compensation in the PD group. Moreover, we found that patients with PD had difficulty switching their strategy for estimating others' actions when asked to do so. Copyright © 2018 Elsevier Ltd. All rights reserved.
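
    For reference, the relationship probed in both the perception and execution tasks is Fitts' law, which links movement time to an index of difficulty determined by the distance between targets and the target width; a and b are empirical constants fitted per participant. The classical formulation:

    ```latex
    % Fitts' law (classical form): movement time MT grows linearly with the
    % index of difficulty ID, set by target distance D and target width W.
    \mathrm{ID} = \log_2\!\frac{2D}{W}, \qquad \mathrm{MT} = a + b\,\mathrm{ID}
    ```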

  8. Feasibility of Using Low-Cost Motion Capture for Automated Screening of Shoulder Motion Limitation after Breast Cancer Surgery.

    Directory of Open Access Journals (Sweden)

    Valeriya Gritsenko

    Full Text Available To determine if a low-cost, automated motion analysis system using Microsoft Kinect could accurately measure shoulder motion and detect motion impairments in women following breast cancer surgery. Descriptive study of motion measured via 2 methods. Academic cancer center oncology clinic. 20 women (mean age = 60 yrs) were assessed for active and passive shoulder motions during a routine post-operative clinic visit (mean = 18 days after surgery) following mastectomy (n = 4) or lumpectomy (n = 16) for breast cancer. Participants performed 3 repetitions of active and passive shoulder motions on the side of the breast surgery. Arm motion was recorded using motion capture by the Kinect for Windows sensor and on video. Goniometric values were determined from the video recordings, while motion capture data were transformed to joint angles using 2 methods (body angle and projection angle). Correlation of motion capture with goniometry and detection of motion limitation. Active shoulder motion measured with low-cost motion capture agreed well with goniometry (r = 0.70-0.80), while passive shoulder motion measurements did not correlate well. Using motion capture, it was possible to reliably identify participants whose range of shoulder motion was reduced by 40% or more. Low-cost, automated motion analysis may be acceptable to screen for moderate to severe motion impairments in active shoulder motion. Automatic detection of motion limitation may allow quick screening to be performed in an oncologist's office and trigger timely referrals for rehabilitation.

  9. Feasibility of Using Low-Cost Motion Capture for Automated Screening of Shoulder Motion Limitation after Breast Cancer Surgery.

    Science.gov (United States)

    Gritsenko, Valeriya; Dailey, Eric; Kyle, Nicholas; Taylor, Matt; Whittacre, Sean; Swisher, Anne K

    2015-01-01

    To determine if a low-cost, automated motion analysis system using Microsoft Kinect could accurately measure shoulder motion and detect motion impairments in women following breast cancer surgery. Descriptive study of motion measured via 2 methods. Academic cancer center oncology clinic. 20 women (mean age = 60 yrs) were assessed for active and passive shoulder motions during a routine post-operative clinic visit (mean = 18 days after surgery) following mastectomy (n = 4) or lumpectomy (n = 16) for breast cancer. Participants performed 3 repetitions of active and passive shoulder motions on the side of the breast surgery. Arm motion was recorded using motion capture by Kinect for Windows sensor and on video. Goniometric values were determined from video recordings, while motion capture data were transformed to joint angles using 2 methods (body angle and projection angle). Correlation of motion capture with goniometry and detection of motion limitation. Active shoulder motion measured with low-cost motion capture agreed well with goniometry (r = 0.70-0.80), while passive shoulder motion measurements did not correlate well. Using motion capture, it was possible to reliably identify participants whose range of shoulder motion was reduced by 40% or more. Low-cost, automated motion analysis may be acceptable to screen for moderate to severe motion impairments in active shoulder motion. Automatic detection of motion limitation may allow quick screening to be performed in an oncologist's office and trigger timely referrals for rehabilitation.
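
    A minimal sketch of the kind of joint-angle computation involved (a "body angle" style measure: the angle between a trunk segment and the upper-arm segment derived from tracked 3D joint positions). The joint names and coordinates below are hypothetical and do not reproduce the authors' processing pipeline.

    ```python
    import numpy as np

    def segment_angle(prox_a, dist_a, prox_b, dist_b):
        """Angle in degrees between two body segments defined by 3D joint positions."""
        va = np.asarray(dist_a, float) - np.asarray(prox_a, float)
        vb = np.asarray(dist_b, float) - np.asarray(prox_b, float)
        cos = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    # Hypothetical Kinect joint positions (metres) for one frame.
    spine_shoulder = [0.00, 0.45, 2.00]
    spine_base     = [0.00, 0.00, 2.00]
    shoulder       = [0.18, 0.42, 2.00]
    elbow          = [0.30, 0.70, 2.00]

    # With the trunk vector pointing down, this angle is near 0 deg for an arm hanging
    # at the side and grows toward 180 deg as the arm is elevated.
    angle = segment_angle(spine_shoulder, spine_base, shoulder, elbow)
    print(f"trunk-to-upper-arm angle: {angle:.1f} deg")
    ```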

  10. A single theoretical framework for circular features processing in humans: orientation and direction of motion compared

    Directory of Open Access Journals (Sweden)

    Tzvetomir eTzvetanov

    2012-05-01

    Full Text Available Common computational principles underlie the processing of various visual features in the cortex. They are considered to create similar patterns of contextual modulation in behavioral studies of different features such as orientation and direction of motion. Here, I studied the possibility that a single theoretical framework of circular feature coding and processing, implemented in different visual areas, could explain these similarities in observations. Stimuli were created that allowed a direct comparison of the contextual effects on orientation and motion direction with two different psychophysical probes: changes in weak and strong signal perception. A single simplified theoretical model of circular feature coding, including only inhibitory interactions and decoding through a standard vector average, successfully predicted the similarities in the two domains, while different feature population characteristics explained well the differences in modulation for both experimental probes. These results demonstrate how a single computational principle can underlie the processing of various features across the cortices.
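
    A minimal sketch of the standard vector-average decoding mentioned above: each unit in a bank of circularly tuned units votes for its preferred angle with a weight given by its activity, and the decoded value is the direction of the resulting sum vector. The von Mises tuning model and noise level are assumptions; for a direction-of-motion population the angles are used directly (orientation would use doubled angles).

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    preferred = np.deg2rad(np.arange(0, 360, 10))     # preferred directions of the units

    def population_response(stimulus_rad, kappa=3.0, noise=0.05):
        """Von Mises tuning curves plus additive noise (assumed response model)."""
        r = np.exp(kappa * (np.cos(preferred - stimulus_rad) - 1))
        return r + rng.normal(0, noise, size=r.shape)

    def vector_average(responses):
        """Decode the circular feature as the angle of the population vector sum."""
        x = np.sum(responses * np.cos(preferred))
        y = np.sum(responses * np.sin(preferred))
        return np.arctan2(y, x)

    stimulus = np.deg2rad(130.0)
    decoded = vector_average(population_response(stimulus))
    print(f"decoded: {np.rad2deg(decoded) % 360:.1f} deg (true 130.0 deg)")
    ```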

  11. Neural networks for perception human and machine perception

    CERN Document Server

    Wechsler, Harry

    1991-01-01

    Neural Networks for Perception, Volume 1: Human and Machine Perception focuses on models for understanding human perception in terms of distributed computation and examples of PDP models for machine perception. This book addresses both theoretical and practical issues related to the feasibility of both explaining human perception and implementing machine perception in terms of neural network models. The book is organized into two parts. The first part focuses on human perception. Topics on network model of object recognition in human vision, the self-organization of functional architecture in t

  12. Designing a compact MRI motion phantom

    Directory of Open Access Journals (Sweden)

    Schmiedel Max

    2016-09-01

    Full Text Available Even today, dealing with motion artifacts in magnetic resonance imaging (MRI is a challenging task. Image corruption due to spontaneous body motion complicates diagnosis. In this work, an MRI phantom for rigid motion is presented. It is used to generate motion-corrupted data, which can serve for evaluation of blind motion compensation algorithms. In contrast to commercially available MRI motion phantoms, the presented setup works on small animal MRI systems. Furthermore, retrospective gating is performed on the data, which can be used as a reference for novel motion compensation approaches. The motion of the signal source can be reconstructed using motor trigger signals and be utilized as the ground truth for motion estimation. The proposed setup results in motion corrected images. Moreover, the importance of preprocessing the MRI raw data, e.g. phase-drift correction, is demonstrated. The gained knowledge can be used to design an MRI phantom for elastic motion.

  13. 12 CFR 747.23 - Motions.

    Science.gov (United States)

    2010-01-01

    ... written motions except as otherwise directed by the administrative law judge. Written memorandum, briefs... Procedure § 747.23 Motions. (a) In writing. (1) Except as otherwise provided herein, an application or request for an order or ruling must be made by written motion. (2) All written motions must state with...

  14. Brain Image Motion Correction

    DEFF Research Database (Denmark)

    Jensen, Rasmus Ramsbøl; Benjaminsen, Claus; Larsen, Rasmus

    2015-01-01

    The application of motion tracking is wide, including: industrial production lines, motion interaction in gaming, computer-aided surgery and motion correction in medical brain imaging. Several devices for motion tracking exist using a variety of different methodologies. In order to use such devices...... offset and tracking noise in medical brain imaging. The data are generated from a phantom mounted on a rotary stage and have been collected using a Siemens High Resolution Research Tomograph for positron emission tomography. During acquisition the phantom was tracked with our latest tracking prototype...

  15. Motion-to-Motion Gauge for the Electroweak Interaction of Leptons

    Directory of Open Access Journals (Sweden)

    Tselnik F.

    2015-01-01

    Full Text Available Comprised of rods and clocks, a reference system is a mere intermediary between the motion that is of interest in the problem and the motions of auxiliary test bodies the reference system is to be gauged with. However, a theory based on such reference systems might hide some features of this actual motion-to-motion correspondence, thus leaving these features incomprehensible. It is therefore desirable to consider this correspondence explicitly, if only to substantiate a particular scheme. To this end, the very existence of a (local) top-speed signal is shown to be sufficient to explain some peculiarities of the weak interaction, using symmetrical configurations of auxiliary trajectories as a means for the gauge. In particular, the unification of the electromagnetic and weak interactions, parity violation, the SU(2)_L × U(1) group structure with the values of its coupling constants, and the intermediate vector boson are found to be a direct consequence of this gauge procedure.

  16. Brain mechanisms for simple perception and bistable perception.

    Science.gov (United States)

    Wang, Megan; Arteaga, Daniel; He, Biyu J

    2013-08-27

    When faced with ambiguous sensory inputs, subjective perception alternates between the different interpretations in a stochastic manner. Such multistable perception phenomena have intrigued scientists and laymen alike for over a century. Despite rigorous investigations, the underlying mechanisms of multistable perception remain elusive. Recent studies using multivariate pattern analysis revealed that activity patterns in posterior visual areas correlate with fluctuating percepts. However, increasing evidence suggests that vision--and perception at large--is an active inferential process involving hierarchical brain systems. We applied searchlight multivariate pattern analysis to functional magnetic resonance imaging signals across the human brain to decode perceptual content during bistable perception and simple unambiguous perception. Although perceptually reflective activity patterns during simple perception localized predominantly to posterior visual regions, bistable perception involved additionally many higher-order frontoparietal and temporal regions. Moreover, compared with simple perception, both top-down and bottom-up influences were dramatically enhanced during bistable perception. We further studied the intermittent presentation of ambiguous images--a condition that is known to elicit perceptual memory. Compared with continuous presentation, intermittent presentation recruited even more higher-order regions and was accompanied by further strengthened top-down influences but relatively weakened bottom-up influences. Taken together, these results strongly support an active top-down inferential process in perception.
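
    A minimal sketch of the searchlight idea referenced above: slide a small sphere over the volume, train and cross-validate a classifier on the voxels inside each sphere, and store the accuracy at the centre voxel. The data here are synthetic, and the radius, classifier, and cross-validation scheme are assumptions, not the authors' pipeline.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)

    # Synthetic data: 40 trials (20 per percept) in a small 8x8x8 volume.
    n_trials, shape = 40, (8, 8, 8)
    labels = np.repeat([0, 1], n_trials // 2)
    data = rng.normal(size=(n_trials,) + shape)
    data[labels == 1, 3:5, 3:5, 3:5] += 0.8          # informative voxels near the centre

    def searchlight(data, labels, radius=1):
        """Cross-validated decoding accuracy for a sphere centred on each voxel."""
        acc = np.zeros(data.shape[1:])
        grid = np.indices(data.shape[1:]).reshape(3, -1).T
        for cx, cy, cz in grid:
            dist = np.linalg.norm(grid - np.array([cx, cy, cz]), axis=1)
            sphere = grid[dist <= radius]
            X = data[:, sphere[:, 0], sphere[:, 1], sphere[:, 2]]
            scores = cross_val_score(LinearSVC(), X, labels, cv=5)
            acc[cx, cy, cz] = scores.mean()
        return acc

    acc_map = searchlight(data, labels)
    peak = np.unravel_index(acc_map.argmax(), acc_map.shape)
    print(f"peak decoding accuracy {acc_map.max():.2f} at voxel {peak}")
    ```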

  17. Dazzle camouflage affects speed perception.

    Directory of Open Access Journals (Sweden)

    Nicholas E Scott-Samuel

    Full Text Available Movement is the enemy of camouflage: most attempts at concealment are disrupted by motion of the target. Faced with this problem, navies in both World Wars in the twentieth century painted their warships with high contrast geometric patterns: so-called "dazzle camouflage". Rather than attempting to hide individual units, it was claimed that this patterning would disrupt the perception of their range, heading, size, shape and speed, and hence reduce losses from, in particular, torpedo attacks by submarines. Similar arguments had been advanced earlier for biological camouflage. Whilst there are good reasons to believe that most of these perceptual distortions may have occurred, there is no evidence for the last claim: changing perceived speed. Here we show that dazzle patterns can distort speed perception, and that this effect is greatest at high speeds. The effect should obtain in predators launching ballistic attacks against rapidly moving prey, or modern, low-tech battlefields where handheld weapons are fired from short ranges against moving vehicles. In the latter case, we demonstrate that in a typical situation involving an RPG7 attack on a Land Rover the reduction in perceived speed is sufficient to make the grenade miss where it was aimed by about a metre, which could be the difference between survival or not for the occupants of the vehicle.

  18. The reference frame for encoding and retention of motion depends on stimulus set size.

    Science.gov (United States)

    Huynh, Duong; Tripathy, Srimant P; Bedell, Harold E; Öğmen, Haluk

    2017-04-01

    The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame had the most significant contribution with some additional contribution from the retinotopic reference frame. When the number of items increased (Set Sizes 3 to 7), the spatiotopic reference frame was able to account for the performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping in order to simplify the complexity of stimuli or resort to a nonmetric abstract coding of motion information.

  19. 6 CFR 13.28 - Motions.

    Science.gov (United States)

    2010-01-01

    ... 6 Domestic Security 1 2010-01-01 2010-01-01 false Motions. 13.28 Section 13.28 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY PROGRAM FRAUD CIVIL REMEDIES § 13.28 Motions. (a) Any application to the ALJ for an order or ruling will be by motion. Motions will state the relief...

  20. 7 CFR 1.327 - Motions.

    Science.gov (United States)

    2010-01-01

    ... be in writing. The ALJ may require that oral motions be reduced to writing. (c) The ALJ may require written motions to be accompanied by supporting memorandums. (d) Within 15 days after a written motion is...) The ALJ may not grant a written motion prior to expiration of the time for filing responses thereto...

  1. Decreased coherent motion discrimination in autism spectrum disorder: the role of attentional zoom-out deficit.

    Directory of Open Access Journals (Sweden)

    Luca Ronconi

    Full Text Available Autism spectrum disorder (ASD) has been associated with decreased coherent dot motion (CDM) performance, a task that measures magnocellular sensitivity as well as fronto-parietal attentional integration processing. In order to clarify the role of spatial attention in CDM tasks, we measured the perception of coherently moving dots displayed in the central or peripheral visual field in ASD and typically developing children. A dorsal-stream deficit in children with ASD should predict a generally poorer performance in both conditions. In our study, however, we show that in children with ASD, CDM perception was selectively impaired in the central condition. In addition, in the ASD group, CDM efficiency was correlated to the ability to zoom out the attentional focus. Importantly, autism symptom severity was related to both the CDM and attentional zooming-out impairment. These findings suggest that a dysfunction in the attentional network might help to explain decreased CDM discrimination as well as the "core" social cognition deficits of ASD.

  2. Visual perception of spatial subjects

    International Nuclear Information System (INIS)

    Osterloh, K.R.S.; Ewert, U.

    2007-01-01

    Principally, any imaging technology consists of two consecutive, though strictly separated processes: data acquisition and subsequent processing to generate an image that can be looked at, either on a monitor screen or printed on paper. Likewise, the physiological process of viewing can be separated into vision and perception, though these processes overlap much more. Understanding the appearance of a subject requires the entire sequence from receiving the information carried e.g. by photons up to an appropriate processing leading to the perception of the subject shown. As a consequence, the imagination of a subject is a result of both technological and physiological processes. Whenever an evaluation of an image is critical, the physiological part of the processing should also be considered. However, an image has only two dimensions, whereas reality is spatial and has three. This problem has been tackled on a philosophical level at least since Plato's famous discussion of the shadow image in a dark cave. The mere practical point is which structural details can be perceived and what may remain undetected depending on the mode of presentation. This problem cannot be resolved without considering each single step of visual perception. Physiologically, there are three 'tools' available to understanding the spatial structure of a subject: binocular viewing, following the course of perspective projection and motion to collect multiple aspects. Artificially, an object may be cut in various ways to display the interior, or covering parts could be made transparent within a model. Examples will show how certain details of a subject can be emphasised or hidden depending on the mode of presentation. It needs to be discussed what might help to perceive the true spatial structure of a subject with all relevant details and what could be misleading. (authors)

  3. Visual perception of spatial subjects

    Energy Technology Data Exchange (ETDEWEB)

    Osterloh, K.R.S.; Ewert, U. [Federal Institute for Materials Research and Testing (BAM), Berlin (Germany)

    2007-07-01

    Principally, any imaging technology consists of two consecutive, though strictly separated processes: data acquisition and subsequent processing to generate an image that can be looked at, either on a monitor screen or printed on paper. Likewise, the physiological process of viewing can be separated into vision and perception, though these processes overlap much more. Understanding the appearance of a subject requires the entire sequence from receiving the information carried e.g. by photons up to an appropriate processing leading to the perception of the subject shown. As a consequence, the imagination of a subject is a result of both technological and physiological processes. Whenever an evaluation of an image is critical, the physiological part of the processing should also be considered. However, an image has only two dimensions, whereas reality is spatial and has three. This problem has been tackled on a philosophical level at least since Plato's famous discussion of the shadow image in a dark cave. The mere practical point is which structural details can be perceived and what may remain undetected depending on the mode of presentation. This problem cannot be resolved without considering each single step of visual perception. Physiologically, there are three 'tools' available to understanding the spatial structure of a subject: binocular viewing, following the course of perspective projection and motion to collect multiple aspects. Artificially, an object may be cut in various ways to display the interior, or covering parts could be made transparent within a model. Examples will show how certain details of a subject can be emphasised or hidden depending on the mode of presentation. It needs to be discussed what might help to perceive the true spatial structure of a subject with all relevant details and what could be misleading. (authors)

  4. Smoothing of respiratory motion traces for motion-compensated radiotherapy.

    Science.gov (United States)

    Ernst, Floris; Schlaefer, Alexander; Schweikard, Achim

    2010-01-01

    The CyberKnife system has been used successfully for several years to radiosurgically treat tumors without the need for stereotactic fixation or sedation of the patient. It has been shown that tumor motion in the lung, liver, and pancreas can be tracked with acceptable accuracy and repeatability. However, highly precise targeting for tumors in the lower abdomen, especially for tumors which exhibit strong motion, remains problematic. Reasons for this are manifold, like the slow tracking system operating at 26.5 Hz, and using the signal from the tracking camera "as is." Since the motion recorded with the camera is used to compensate for system latency by prediction and the predicted signal is subsequently used to infer the tumor position from a correlation model based on x-ray imaging of gold fiducials around the tumor, camera noise directly influences the targeting accuracy. The goal of this work is to establish the suitability of a new smoothing method for respiratory motion traces used in motion-compensated radiotherapy. The authors endeavor to show that better prediction--with a lower rms error of the predicted signal--and/or smoother prediction is possible using this method. The authors evaluated six commercially available tracking systems (NDI Aurora, PolarisClassic, Polaris Vicra, MicronTracker2 H40, FP5000, and accuTrack compact). The authors first tracked markers both stationary and while in motion to establish the systems' noise characteristics. Then the authors applied a smoothing method based on the à trous wavelet decomposition to reduce the devices' noise level. Additionally, the smoothed signal of the moving target and a motion trace from actual human respiratory motion were subjected to prediction using the MULIN and the nLMS2 algorithms. The authors established that the noise distribution for a static target is Gaussian and that when the probe is moved such as to mimic human respiration, it remains Gaussian with the exception of the FP5000 and the

  5. Smoothing of respiratory motion traces for motion-compensated radiotherapy

    International Nuclear Information System (INIS)

    Ernst, Floris; Schlaefer, Alexander; Schweikard, Achim

    2010-01-01

    Purpose: The CyberKnife system has been used successfully for several years to radiosurgically treat tumors without the need for stereotactic fixation or sedation of the patient. It has been shown that tumor motion in the lung, liver, and pancreas can be tracked with acceptable accuracy and repeatability. However, highly precise targeting for tumors in the lower abdomen, especially for tumors which exhibit strong motion, remains problematic. Reasons for this are manifold, like the slow tracking system operating at 26.5 Hz, and using the signal from the tracking camera "as is". Since the motion recorded with the camera is used to compensate for system latency by prediction and the predicted signal is subsequently used to infer the tumor position from a correlation model based on x-ray imaging of gold fiducials around the tumor, camera noise directly influences the targeting accuracy. The goal of this work is to establish the suitability of a new smoothing method for respiratory motion traces used in motion-compensated radiotherapy. The authors endeavor to show that better prediction--with a lower rms error of the predicted signal--and/or smoother prediction is possible using this method. Methods: The authors evaluated six commercially available tracking systems (NDI Aurora, PolarisClassic, Polaris Vicra, MicronTracker2 H40, FP5000, and accuTrack compact). The authors first tracked markers both stationary and while in motion to establish the systems' noise characteristics. Then the authors applied a smoothing method based on the à trous wavelet decomposition to reduce the devices' noise level. Additionally, the smoothed signal of the moving target and a motion trace from actual human respiratory motion were subjected to prediction using the MULIN and the nLMS2 algorithms. Results: The authors established that the noise distribution for a static target is Gaussian and that when the probe is moved such as to mimic human respiration, it remains Gaussian with the
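
    As a rough illustration of the smoothing step described above, the sketch below applies an à trous (stationary) wavelet smoothing pass to a one-dimensional motion trace: at each level the low-pass kernel is dilated by inserting zeros ("holes") between its taps, and only the final approximation is kept. The B3-spline kernel, the number of levels, and the border handling are illustrative assumptions, not the authors' exact implementation.

    ```python
    import numpy as np

    def a_trous_smooth(signal, levels=3):
        """Stationary ('a trous') wavelet smoothing of a 1-D trace (sketch).

        At level j the B3-spline kernel is dilated by a factor of 2**j; keeping
        only the final approximation discards the fine-scale (noisy) detail.
        """
        kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
        approx = np.asarray(signal, dtype=float)
        n = len(approx)
        for level in range(levels):
            step = 2 ** level
            smoothed = np.empty(n)
            for i in range(n):
                acc = 0.0
                for k, w in enumerate(kernel):
                    j = i + (k - 2) * step       # tap position with inserted holes
                    j = min(max(j, 0), n - 1)    # clamp at the borders
                    acc += w * approx[j]
                smoothed[i] = acc
            approx = smoothed
        return approx

    # e.g. smoothing a noisy breathing-like trace:
    t = np.linspace(0.0, 10.0, 500)
    noisy = np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.randn(500)
    clean = a_trous_smooth(noisy, levels=3)
    ```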

  6. Differential contribution of visual and auditory information to accurately predict the direction and rotational motion of a visual stimulus.

    Science.gov (United States)

    Park, Seoung Hoon; Kim, Seonjin; Kwon, MinHyuk; Christou, Evangelos A

    2016-03-01

    Visual and auditory information are critical for perception and enhance the ability of an individual to respond accurately to a stimulus. However, it is unknown whether visual and auditory information contribute differentially to identifying the direction and rotational motion of the stimulus. The purpose of this study was to determine the ability of an individual to accurately predict the direction and rotational motion of the stimulus based on visual and auditory information. In this study, we recruited 9 expert table-tennis players and used table-tennis service as our experimental model. Participants watched recorded services with different levels of visual and auditory information. The goal was to anticipate the direction of the service (left or right) and the rotational motion of service (topspin, sidespin, or cut). We recorded their responses and quantified the following outcomes: (i) directional accuracy and (ii) rotational motion accuracy. The response accuracy was the number of accurate predictions relative to the total number of trials. The ability of the participants to predict the direction of the service accurately increased with additional visual information but not with auditory information. In contrast, the ability of the participants to predict the rotational motion of the service accurately increased with the addition of auditory information to visual information but not with additional visual information alone. In conclusion, this finding demonstrates that visual information enhances the ability of an individual to accurately predict the direction of the stimulus, whereas additional auditory information enhances the ability of an individual to accurately predict the rotational motion of stimulus.

  7. Audiovisual biofeedback improves the correlation between internal/external surrogate motion and lung tumor motion.

    Science.gov (United States)

    Lee, Danny; Greer, Peter B; Paganelli, Chiara; Ludbrook, Joanna Jane; Kim, Taeho; Keall, Paul

    2018-03-01

    Breathing management can reduce breath-to-breath (intrafraction) and day-by-day (interfraction) variability in breathing motion while utilizing the respiratory motion of internal and external surrogates for respiratory guidance. Audiovisual (AV) biofeedback, an interactive personalized breathing motion management system, has been developed to improve reproducibility of intra- and interfraction breathing motion. However, the assumption of the correlation of respiratory motion between surrogates and tumors is not always verified during medical imaging and radiation treatment. Therefore, the aim of the study was to test the hypothesis that the correlation of respiratory motion between surrogates and tumors is the same under free breathing without guidance (FB) and with AV biofeedback guidance for voluntary motion management. For 13 lung cancer patients receiving radiotherapy, 2D coronal and sagittal cine-MR images were acquired across two MRI sessions (pre- and mid-treatment) with two breathing conditions: (a) FB and (b) AV biofeedback, totaling 88 patient measurements. Simultaneously, the external respiratory motion of the abdomen was measured. The internal respiratory motion of the diaphragm and lung tumor was retrospectively measured from 2D coronal and sagittal cine-MR images. The correlation of respiratory motion between surrogates and tumors was calculated using Pearson's correlation coefficient for: (a) abdomen to tumor (abdomen-tumor) and (b) diaphragm to tumor (diaphragm-tumor). The correlations were compared between FB and AV biofeedback using several metrics: abdomen-tumor and diaphragm-tumor correlations with/without ≥5 mm tumor motion range and with/without adjusting for phase shifts between the signals. Compared to FB, AV biofeedback improved abdomen-tumor correlation by 11% (p = 0.12) from 0.53 to 0.59 and diaphragm-tumor correlation by 13% (p = 0.02) from 0.55 to 0.62. Compared to FB, AV biofeedback improved abdomen-tumor correlation by 17% (p = 0
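
    A minimal way to compute the kind of surrogate-tumor agreement reported above is a Pearson correlation, optionally scanned over small time lags to absorb a phase shift between the two traces. The sketch below assumes equally sampled one-dimensional displacement traces; the lag scan stands in for the paper's phase-adjustment step, whose exact procedure is not given in the record.

    ```python
    import numpy as np

    def pearson_r(x, y):
        return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

    def best_lag_correlation(surrogate, tumor, max_lag=20):
        """Pearson correlation between two motion traces after scanning integer
        sample lags to compensate a phase shift (illustrative only)."""
        n = min(len(surrogate), len(tumor))
        best = -1.0
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                a, b = surrogate[lag:n], tumor[:n - lag]
            else:
                a, b = surrogate[:n + lag], tumor[-lag:n]
            best = max(best, pearson_r(a, b))
        return best
    ```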

  8. Human motion simulation predictive dynamics

    CERN Document Server

    Abdel-Malek, Karim

    2013-01-01

    Simulate realistic human motion in a virtual world with an optimization-based approach to motion prediction. With this approach, motion is governed by human performance measures, such as speed and energy, which act as objective functions to be optimized. Constraints on joint torques and angles are imposed quite easily. Predicting motion in this way allows one to use avatars to study how and why humans move the way they do, given specific scenarios. It also enables avatars to react to infinitely many scenarios with substantial autonomy. With this approach it is possible to predict dynamic motion without having to integrate equations of motion -- rather than solving equations of motion, this approach solves for a continuous time-dependent curve characterizing joint variables (also called joint profiles) for every degree of freedom. Introduces rigorous mathematical methods for digital human modelling and simulation. Focuses on understanding and representing spatial relationships (3D) of biomechanics. Develops an i...

  9. The moving minimum audible angle is smaller during self motion than during source motion.

    Directory of Open Access Journals (Sweden)

    W. Owen eBrimijoin

    2014-09-01

    Full Text Available We are rarely perfectly still: our heads rotate in three axes and move in three dimensions, constantly varying the spectral and binaural cues at the ear drums. In spite of this motion, static sound sources in the world are typically perceived as stable objects. This argues that the auditory system – in a manner not unlike the vestibulo-ocular reflex – works to compensate for self motion and stabilize our sensory representation of the world. We tested a prediction arising from this postulate: that self motion should be processed more accurately than source motion. We used an infrared motion tracking system to measure head angle, and real-time interpolation of head related impulse responses to create head-stabilized signals that appeared to remain fixed in space as the head turned. After being presented with pairs of simultaneous signals consisting of a man and a woman speaking a snippet of speech, normal and hearing impaired listeners were asked to report whether the female voice was to the left or the right of the male voice. In this way we measured the moving minimum audible angle (MMAA). This measurement was made while listeners were asked to turn their heads back and forth between ± 15° and the signals were stabilized in space. After this self-motion condition we measured MMAA in a second source-motion condition when listeners remained still and the virtual locations of the signals were moved using the trajectories from the first condition. For both normal and hearing impaired listeners, we found that the MMAA for signals moving relative to the head was ~1-2° smaller when the movement was the result of self motion than when it was the result of source motion, even though the motion with respect to the head was identical. These results as well as the results of past experiments suggest that spatial processing involves an ongoing and highly accurate comparison of spatial acoustic cues with self-motion cues.

  10. Integration Method of Emphatic Motions and Adverbial Expressions with Scalar Parameters for Robotic Motion Coaching System

    Science.gov (United States)

    Okuno, Keisuke; Inamura, Tetsunari

    A robotic coaching system can improve humans' learning of motions through intelligent use of emphatic motions and adverbial expressions according to user reactions. In robotics, however, methods to control both the motions and the expressions, and to bind them together, had not been adequately discussed from an engineering point of view. In this paper, we propose a method for controlling and binding emphatic motions and adverbial expressions by using two scalar parameters in a phase space. In this phase space, a variety of motion patterns and verbal expressions are connected and can be expressed as static points. We show the feasibility of the proposed method through experiments on actual sport-coaching tasks for beginners. From participants' improvements in motion learning, we confirmed the feasibility of the method for controlling and binding emphatic motions and adverbial expressions, as well as the contribution of emphatic motions and the positive correlation of adverbial expressions with participants' improvements. Based on these results, we introduce the hypothesis that an individually optimized method for binding adverbial expressions is required.

  11. Knee Motion Generation Method for Transfemoral Prosthesis Based on Kinematic Synergy and Inertial Motion.

    Science.gov (United States)

    Sano, Hiroshi; Wada, Takahiro

    2017-12-01

    Previous research has shown that the effective use of inertial motion (i.e., less or no torque input at the knee joint) plays an important role in achieving a smooth gait of transfemoral prostheses in the swing phase. In our previous research, a method for generating a timed knee trajectory close to able-bodied individuals, which leads to sufficient clearance between the foot and the floor and the knee extension, was proposed using the inertial motion. Limb motions are known to correlate with each other during walking. This phenomenon is called kinematic synergy. In this paper, we measure gaits in level walking of able-bodied individuals with a wide range of walking velocities. We show that this kinematic synergy also exists between the motions of the intact limbs and those of the knee as determined by the inertial motion technique. We then propose a new method for generating the motion of the knee joint using its inertial motion close to the able-bodied individuals in mid-swing based on its kinematic synergy, such that the method can adapt to the changes in the motion velocity. The numerical simulation results show that the proposed method achieves prosthetic walking similar to that of able-bodied individuals with a wide range of constant walking velocities and termination of walking from steady-state walking. Further investigations have found that a kinematic synergy also exists at the start of walking. Overall, our method successfully achieves knee motion generation from the initiation of walking through steady-state walking with different velocities until termination of walking.

  12. Motion sickness: a negative reinforcement model.

    Science.gov (United States)

    Bowins, Brad

    2010-01-15

    Theories pertaining to the "why" of motion sickness are in short supply relative to those detailing the "how." Considering the profoundly disturbing and dysfunctional symptoms of motion sickness, it is difficult to conceive of why this condition is so strongly biologically based in humans and most other mammalian and primate species. It is posited that motion sickness evolved as a potent negative reinforcement system designed to terminate motion involving sensory conflict or postural instability. During our evolution and that of many other species, motion of this type would have impaired evolutionary fitness via injury and/or signaling weakness and vulnerability to predators. The symptoms of motion sickness strongly motivate the individual to terminate the offending motion by early avoidance, cessation of movement, or removal of oneself from the source. The motion sickness negative reinforcement mechanism functions much like pain to strongly motivate evolutionary fitness preserving behavior. Alternative why theories focusing on the elimination of neurotoxins and the discouragement of motion programs yielding vestibular conflict suffer from several problems, foremost that neither can account for the rarity of motion sickness in infants and toddlers. The negative reinforcement model proposed here readily accounts for the absence of motion sickness in infants and toddlers, in that providing strong motivation to terminate aberrant motion does not make sense until a child is old enough to act on this motivation.

  13. Neural theory for the perception of causal actions.

    Science.gov (United States)

    Fleischer, Falk; Christensen, Andrea; Caggiano, Vittorio; Thier, Peter; Giese, Martin A

    2012-07-01

    The efficient prediction of the behavior of others requires the recognition of their actions and an understanding of their action goals. In humans, this process is fast and extremely robust, as demonstrated by classical experiments showing that human observers reliably judge causal relationships and attribute interactive social behavior to strongly simplified stimuli consisting of simple moving geometrical shapes. While psychophysical experiments have identified critical visual features that determine the perception of causality and agency from such stimuli, the underlying detailed neural mechanisms remain largely unclear, and it is an open question why humans developed this advanced visual capability at all. We created pairs of naturalistic and abstract stimuli of hand actions that were exactly matched in terms of their motion parameters. We show that varying critical stimulus parameters for both stimulus types leads to very similar modulations of the perception of causality. However, the additional form information about the hand shape and its relationship with the object supports more fine-grained distinctions for the naturalistic stimuli. Moreover, we show that a physiologically plausible model for the recognition of goal-directed hand actions reproduces the observed dependencies of causality perception on critical stimulus parameters. These results support the hypothesis that selectivity for abstract action stimuli might emerge from the same neural mechanisms that underlie the visual processing of natural goal-directed action stimuli. Furthermore, the model proposes specific detailed neural circuits underlying this visual function, which can be evaluated in future experiments.

  14. Robust motion estimation using connected operators

    OpenAIRE

    Salembier Clairon, Philippe Jean; Sanson, H

    1997-01-01

    This paper discusses the use of connected operators for robust motion estimation. The proposed strategy involves a motion estimation step extracting the dominant motion and a filtering step relying on connected operators that remove objects that do not follow the dominant motion. These two steps are iterated in order to obtain an accurate motion estimation and a precise definition of the objects following this motion. This strategy can be applied on the entire frame or on individual connected c...
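
    The iterate-estimate-and-filter idea can be caricatured in a few lines: fit a dominant (here, purely translational) motion to a dense flow field, mark the pixels that do not follow it, and refit on the remaining pixels. This toy loop only mirrors the outer structure of the strategy; the paper's actual filtering step uses connected operators on a tree representation of the image, which is not reproduced here.

    ```python
    import numpy as np

    def dominant_motion_iteration(flow, iters=3, thresh=1.0):
        """Iteratively estimate a dominant translational motion from an (H, W, 2)
        optical-flow field and keep only the pixels consistent with it (sketch)."""
        mask = np.ones(flow.shape[:2], dtype=bool)
        dominant = np.zeros(2)
        for _ in range(iters):
            if not mask.any():
                break
            dominant = flow[mask].mean(axis=0)               # current dominant-motion estimate
            residual = np.linalg.norm(flow - dominant, axis=-1)
            mask = residual < thresh                         # pixels following the dominant motion
        return dominant, mask
    ```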

  15. P1-17: Pseudo-Haptics Using Motion-in-Depth Stimulus and Second-Order Motion Stimulus

    Directory of Open Access Journals (Sweden)

    Shuichi Sato

    2012-10-01

    Full Text Available Modification of motion of the computer cursor during the manipulation by the observer evokes illusory haptic sensation (Lecuyer et al., 2004 ACM SIGCHI '04 239–246). This study investigates the pseudo-haptics using motion-in-depth and second-order motion. A stereoscopic display and a PHANTOM were used in the first experiment. A subject was asked to move a visual target at a constant speed in horizontal, vertical, or front-back direction. During the manipulation, the speed was reduced to 50% for 500 msec. The haptic sensation was measured using the magnitude estimation method. The result indicates that perceived haptic sensation from motion-in-depth was about 30% of that from horizontal or vertical motion. A 2D display and the PHANTOM were used in the second experiment. The motion cue was second order—in each frame, dots in a square patch reverse in contrast (i.e., all black dots become white and all white dots become black). The patch was moved in a horizontal direction. The result indicates that perceived haptic sensation from second-order motion was about 90% of that from first-order motion.

  16. Prediction of Motion Induced Image Degradation Using a Markerless Motion Tracker

    DEFF Research Database (Denmark)

    Olsen, Rasmus Munch; Johannesen, Helle Hjorth; Henriksen, Otto Mølby

    In this work a markerless motion tracker, TCL2, is used to predict image quality in 3D T1 weighted MPRAGE MRI brain scans. An experienced radiologist scored the image quality for 172 scans as being usable or not usable, i.e. if a repeated scan was required. Based on five motion parameters, a classification algorithm was trained, and an accuracy of 95.9% for identifying not-usable images was obtained, with a sensitivity of 91.7% and specificity of 96.3%. This work shows the feasibility of the markerless motion tracker for predicting image quality with a high accuracy.
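
    A minimal version of such a classifier could look like the sketch below: five motion parameters per scan feed a cross-validated binary classifier, and accuracy, sensitivity, and specificity are read off the confusion matrix. The record does not state which algorithm was used, so the logistic regression and the synthetic placeholder data here are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import cross_val_predict

    # Placeholder data: 172 scans x 5 motion parameters; 1 = "not usable", 0 = "usable".
    rng = np.random.default_rng(0)
    X = rng.normal(size=(172, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)

    clf = LogisticRegression(max_iter=1000)      # assumed model; the record does not name one
    pred = cross_val_predict(clf, X, y, cv=5)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()

    accuracy = (tp + tn) / len(y)
    sensitivity = tp / (tp + fn)                 # not-usable scans correctly flagged
    specificity = tn / (tn + fp)                 # usable scans correctly passed
    print(accuracy, sensitivity, specificity)
    ```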

  17. Strong Motion Earthquake Data Values of Digitized Strong-Motion Accelerograms, 1933-1994

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Strong Motion Earthquake Data Values of Digitized Strong-Motion Accelerograms is a database of over 15,000 digitized and processed accelerograph records from...

  18. Motion in images is essential to cause motion sickness symptoms, but not to increase postural sway

    NARCIS (Netherlands)

    Lubeck, A.J.A.; Bos, J.E.; Stins, J.F.

    2015-01-01

    Objective: It is generally assumed that motion in motion images is responsible for increased postural sway as well as for visually induced motion sickness (VIMS). However, this has not yet been tested. To that end, we studied postural sway and VIMS induced by motion and still images. Method:

  19. Kinematic parameters that influence the aesthetic perception of beauty in contemporary dance.

    Science.gov (United States)

    Torrents, Carlota; Castañer, Marta; Jofre, Toni; Morey, Gaspar; Reverter, Ferran

    2013-01-01

    Some experiments have established that certain kinematic parameters can influence the subjective aesthetic perception of the dance audience. Neave, McCarty, Freynik, Caplan, Hönekopp, and Fink (2010, Biology Letters 7 221-224) reported eleven movement parameters in non-expert male dancers, showing a significant positive correlation with perceived dance quality. We aim to identify some of the kinematic parameters of expert dancers' movements that influence the subjective aesthetic perception of observers in relation to specific skills of contemporary dance. Four experienced contemporary dancers performed three repetitions of four dance-related motor skills. Motion was captured by a VICON-MX system. The resulting 48 animations were viewed by 108 observers. The observers judged beauty using a semantic differential. The data were then subjected to multiple factor analysis. The results suggested that there were strong associations between higher beauty scores and certain kinematic parameters, especially those related to amplitude of movement.

  20. Motion correction options in PET/MRI.

    Science.gov (United States)

    Catana, Ciprian

    2015-05-01

    Subject motion is unavoidable in clinical and research imaging studies. Breathing is the most important source of motion in whole-body PET and MRI studies, affecting not only thoracic organs but also those in the upper and even lower abdomen. The motion related to the pumping action of the heart is obviously relevant in high-resolution cardiac studies. These two sources of motion are periodic and predictable, at least to a first approximation, which means certain techniques can be used to control the motion (eg, by acquiring the data when the organ of interest is relatively at rest). Additionally, nonperiodic and unpredictable motion can also occur during the scan. One obvious limitation of methods relying on external devices (eg, respiratory bellows or the electrocardiogram signal to monitor the respiratory or cardiac cycle, respectively) to trigger or gate the data acquisition is that the complex motion of internal organs cannot be fully characterized. However, detailed information can be obtained using either the PET or MRI data (or both) allowing the more complete characterization of the motion field so that a motion model can be built. Such a model and the information derived from simple external devices can be used to minimize the effects of motion on the collected data. In the ideal case, all the events recorded during the PET scan would be used to generate a motion-free or corrected PET image. The detailed motion field can be used for this purpose by applying it to the PET data before, during, or after the image reconstruction. Integrating all these methods for motion control, characterization, and correction into a workflow that can be used for routine clinical studies is challenging but could potentially be extremely valuable given the improvement in image quality and reduction of motion-related image artifacts.

  1. Clonal selection versus clonal cooperation: the integrated perception of immune objects [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Serge Nataf

    2016-09-01

    Full Text Available Analogies between the immune and nervous systems were first envisioned by the immunologist Niels Jerne, who introduced the concepts of antigen "recognition" and immune "memory". However, since then, it appears that only the cognitive immunology paradigm proposed by Irun Cohen attempted to further theorize the immune system functions through the prism of neurosciences. The present paper is aimed at revisiting this analogy-based reasoning. In particular, a parallel is drawn between the brain pathways of visual perception and the processes allowing the global perception of an "immune object". Thus, in the visual system, distinct features of a visual object (shape, color, motion) are perceived separately by distinct neuronal populations during a primary perception task. The output signals generated during this first step then instruct an integrated perception task performed by other neuronal networks. Such a higher order perception step is by essence a cooperative task that is mandatory for the global perception of visual objects. Based on a re-interpretation of recent experimental data, it is suggested that similar general principles drive the integrated perception of immune objects in secondary lymphoid organs (SLOs). In this scheme, the four main categories of signals characterizing an immune object (antigenic, contextual, temporal and localization signals) are first perceived separately by distinct networks of immunocompetent cells. Then, in a multitude of SLO niches, the output signals generated during this primary perception step are integrated by TH-cells at the single cell level. This process eventually generates a multitude of T-cell and B-cell clones that perform, at the scale of SLOs, an integrated perception of immune objects. Overall, this new framework proposes that integrated immune perception and, consequently, integrated immune responses, rely essentially on clonal cooperation rather than clonal selection.

  2. Synthesis of High-Frequency Ground Motion Using Information Extracted from Low-Frequency Ground Motion

    Science.gov (United States)

    Iwaki, A.; Fujiwara, H.

    2012-12-01

    Broadband ground motion computations of scenario earthquakes are often based on hybrid methods that combine a deterministic approach in the lower frequency band with a stochastic approach in the higher frequency band. Typical computation methods for low-frequency and high-frequency (LF and HF, respectively) ground motions are numerical simulations, such as finite-difference and finite-element methods based on a three-dimensional velocity structure model, and the stochastic Green's function method, respectively. In such hybrid methods, LF and HF wave fields are generated through two different methods that are completely independent of each other, and are combined at the matching frequency. However, LF and HF wave fields are essentially not independent as long as they are from the same event. In this study, we focus on the relation among acceleration envelopes at different frequency bands, and attempt to synthesize HF ground motion using the information extracted from LF ground motion, aiming to propose a new method for broadband strong motion prediction. Our study area is the Kanto area, Japan. We use the K-NET and KiK-net surface acceleration data and compute RMS envelopes in five frequency bands: 0.5-1.0 Hz, 1.0-2.0 Hz, 2.0-4.0 Hz, 4.0-8.0 Hz, and 8.0-16.0 Hz. Taking the ratio of the envelopes of adjacent bands, we find that the envelope ratios have stable shapes at each site. The empirical envelope-ratio characteristics are combined with the low-frequency envelope of the target earthquake to synthesize HF ground motion. We have applied the method to M5-class earthquakes and a M7 target earthquake that occurred in the vicinity of the Kanto area, and successfully reproduced the observed HF ground motion of the target earthquake. The method can be applied to broadband ground motion simulation for a scenario earthquake by combining numerically computed low-frequency (~1 Hz) ground motion with the empirical envelope-ratio characteristics to generate broadband ground motion
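
    The core quantities in this approach are band-limited RMS envelopes and their ratios between adjacent bands. The sketch below covers those two steps; the filter order, envelope window, and the way the empirical ratio would be averaged across records are assumptions rather than the authors' exact processing.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def band_rms_envelope(acc, fs, f_lo, f_hi, win_s=1.0):
        """RMS envelope of an accelerogram in one frequency band (sketch)."""
        sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
        x = sosfiltfilt(sos, acc)
        n = max(int(win_s * fs), 1)
        return np.sqrt(np.convolve(x ** 2, np.ones(n) / n, mode="same"))

    def synthesize_hf_envelope(lf_envelope, empirical_ratio):
        """Scale a low-frequency envelope by the site-specific ratio of adjacent-band
        envelopes (the ratio itself would be estimated from K-NET/KiK-net records)."""
        return lf_envelope * empirical_ratio
    ```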

  3. Decreased cortical activation in response to a motion stimulus in anisometropic amblyopic eyes using functional magnetic resonance imaging.

    Science.gov (United States)

    Bonhomme, Gabrielle R; Liu, Grant T; Miki, Atsushi; Francis, Ellie; Dobre, M-C; Modestino, Edward J; Aleman, David O; Haselgrove, John C

    2006-12-01

    Motion perception abnormalities and extrastriate abnormalities have been suggested in amblyopia. Functional MRI (fMRI) and motion stimuli were used to study whether interocular differences in activation are detectable in motion-sensitive cortical areas in patients with anisometropic amblyopia. We performed fMRI at 1.5 T in 4 control subjects (20/20 OU), 1 with monocular suppression (20/25), and 2 with anisometropic amblyopia (20/60, 20/800). Monocular suppression was thought to be a forme fruste of amblyopia. The experimental stimulus consisted of expanding and contracting concentric rings, whereas the control condition consisted of stationary concentric rings. Activation was determined by contrasting the 2 conditions for each eye. Significant fMRI activation and comparable right and left eye activation was found in V3a and V5 in all control subjects (Average z-values in L vs R contrast 0.42, 0.43) and in the subject with monocular suppression (z = 0.19). The anisometropes exhibited decreased extrastriate activation in their amblyopic eyes compared with the fellow eyes (zs = 2.12, 2.76). Our data suggest motion-sensitive cortical structures may be less active when anisometropic amblyopic eyes are stimulated with moving rings. These results support the hypothesis that extrastriate cortex is affected in anisometropic amblyopia. Although suggestive of a magnocellular defect, the exact mechanism is unclear.

  4. How much motion is too much motion? Determining motion thresholds by sample size for reproducibility in developmental resting-state MRI

    Directory of Open Access Journals (Sweden)

    Julia Leonard

    2017-03-01

    Full Text Available A constant problem developmental neuroimagers face is in-scanner head motion. Children move more than adults and this has led to concerns that developmental changes in resting-state connectivity measures may be artefactual. Furthermore, children are challenging to recruit into studies and therefore researchers have tended to take a permissive stance when setting exclusion criteria on head motion. The literature is not clear regarding our central question: How much motion is too much? Here, we systematically examine the effects of multiple motion exclusion criteria at different sample sizes and age ranges in a large openly available developmental cohort (ABIDE; http://preprocessed-connectomes-project.org/abide). We checked (1) the reliability of resting-state functional magnetic resonance imaging (rs-fMRI) pairwise connectivity measures across the brain and (2) the accuracy with which we can separate participants with autism spectrum disorder from typically developing controls based on their rs-fMRI scans using machine learning. We find that reliability on average is primarily sensitive to the number of participants considered, but that increasingly permissive motion thresholds lower case-control prediction accuracy for all sample sizes.
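
    One common way to operationalize "how much motion is too much" is a framewise-displacement rule of the kind sketched below: exclude a participant when more than a given fraction of volumes exceeds a displacement threshold. The specific numbers, and the rule itself, are illustrative assumptions rather than the criteria evaluated in the paper.

    ```python
    import numpy as np

    def apply_motion_exclusion(fd_traces, fd_thresh_mm=0.5, max_bad_fraction=0.2):
        """Keep participants whose fraction of high-motion volumes stays below a cutoff.

        fd_traces: dict mapping participant id -> framewise displacement per volume (mm).
        The thresholds are placeholders for one of many possible exclusion criteria.
        """
        kept = []
        for pid, fd in fd_traces.items():
            bad_fraction = float(np.mean(np.asarray(fd) > fd_thresh_mm))
            if bad_fraction <= max_bad_fraction:
                kept.append(pid)
        return kept
    ```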

  5. Combining Motion-Induced Blindness with Binocular Rivalry

    Directory of Open Access Journals (Sweden)

    K Jaworska

    2011-04-01

    Full Text Available Motion-induced blindness (MIB) and binocular rivalry (BR) are examples of multistable phenomena in which our perception varies despite constant retinal input. It has been suggested that both phenomena are related and share a common underlying mechanism. We tried to determine whether experimental manipulations of the target dot and the mask systematically affect MIB and BR in an experimental paradigm that can elicit both phenomena. Eighteen observers fixated the center of a split-screen stereo display that consisted of a distracter mask and a superimposed target dot with different colour (isoluminant Red/Green) in corresponding peripheral areas of the left and right eye. Observers reported perceived colour and disappearance of the target dot by pressing and releasing corresponding keys. In a within-subjects design the mask was presented in rivalry or not—with orthogonal drift in the left and right eye or with the same drift in both eyes. In control conditions the mask remained stationary. In addition, the size of the target dot was varied (small, medium, and large). Our results suggest that MIB measured by normalized frequency and duration of target disappearance and BR measured by normalized frequency and duration of colour reversals of the target were both affected by motion in the mask. Surprisingly, binocular rivalry in the mask had only a small effect on BR of the target and virtually no effect on MIB. The overall pattern of normalized MIB and BR measures, however, differed across experimental conditions. In conclusion, the results show some degree of dissociation between MIB and BR. Further analyses will inform whether or not the two phenomena occur independently of each other.

  6. Autonomous vehicle motion control, approximate maps, and fuzzy logic

    Science.gov (United States)

    Ruspini, Enrique H.

    1993-01-01

    Progress on research on the control of actions of autonomous mobile agents using fuzzy logic is presented. The innovations described encompass theoretical and applied developments. At the theoretical level, results of research leading to the combined utilization of conventional artificial planning techniques with fuzzy logic approaches for the control of local motion and perception actions are presented. Formulations of dynamic programming approaches to optimal control in the context of the analysis of approximate models of the real world are also examined, together with a new approach to goal conflict resolution that does not require specification of numerical values representing relative goal importance. Applied developments include the introduction of the notion of approximate map. A fuzzy relational database structure for the representation of vague and imprecise information about the robot's environment is proposed. Also the central notions of control point and control structure are discussed.

  7. Motion video analysis using planar parallax

    Science.gov (United States)

    Sawhney, Harpreet S.

    1994-04-01

    Motion and structure analysis in video sequences can lead to efficient descriptions of objects and their motions. Interesting events in videos can be detected using such an analysis--for instance independent object motion when the camera itself is moving, figure-ground segregation based on the saliency of a structure compared to its surroundings. In this paper we present a method for 3D motion and structure analysis that uses a planar surface in the environment as a reference coordinate system to describe a video sequence. The motion in the video sequence is described as the motion of the reference plane, and the parallax motion of all the non-planar components of the scene. It is shown how this method simplifies the otherwise hard general 3D motion analysis problem. In addition, a natural coordinate system in the environment is used to describe the scene which can simplify motion based segmentation. This work is a part of an ongoing effort in our group towards video annotation and analysis for indexing and retrieval. Results from a demonstration system being developed are presented.

  8. Motion sickness increases the risk of accidental hypothermia.

    Science.gov (United States)

    Nobel, Gerard; Eiken, Ola; Tribukait, Arne; Kölegård, Roger; Mekjavic, Igor B

    2006-09-01

    Motion sickness (MS) has been found to increase body-core cooling during immersion in 28 degrees C water, an effect ascribed to attenuation of the cold-induced peripheral vasoconstriction (Mekjavic et al. in J Physiol 535(2):619-623, 2001). The present study tested the hypothesis that a more profound cold stimulus would override the MS effect on peripheral vasoconstriction and hence on the core cooling rate. Eleven healthy subjects underwent two separate head-out immersions in 15 degrees C water. In the control trial (CN), subjects were immersed after baseline measurements. In the MS-trial, subjects were rendered motion sick prior to immersion, by using a rotating chair in combination with a regimen of standardized head movements. During immersion in the MS-trial, subjects were exposed to an optokinetic stimulus (rotating drum). At 5-min intervals subjects rated their temperature perception, thermal comfort and MS discomfort. During immersion mean skin temperature, rectal temperature, the difference in temperature between the non-immersed right forearm and 3rd finger of the right hand (DeltaTff), oxygen uptake and heart rate were recorded. In the MS-trial, rectal temperature decreased substantially faster (33%, P < 0.01). Also, the DeltaTff response, an index of peripheral vasomotor tone, as well as the oxygen uptake, indicative of the shivering response, were significantly attenuated (P < 0.01 and P < 0.001, respectively) by MS. Thus, MS may predispose individuals to hypothermia by enhancing heat loss and attenuating heat production. This might have significant implications for survival in maritime accidents.

  9. Tactile motion adaptation reduces perceived speed but shows no evidence of direction sensitivity.

    Directory of Open Access Journals (Sweden)

    Sarah McIntyre

    Full Text Available INTRODUCTION: While the directionality of tactile motion processing has been studied extensively, tactile speed processing and its relationship to direction is little-researched and poorly understood. We investigated this relationship in humans using the 'tactile speed aftereffect' (tSAE), in which the speed of motion appears slower following prolonged exposure to a moving surface. METHOD: We used psychophysical methods to test whether the tSAE is direction sensitive. After adapting to a ridged moving surface with one hand, participants compared the speed of test stimuli on the adapted and unadapted hands. We varied the direction of the adapting stimulus relative to the test stimulus. RESULTS: Perceived speed of the surface moving at 81 mm/s was reduced by about 30% regardless of the direction of the adapting stimulus (when adapted in the same direction, mean reduction = 23 mm/s, SD = 11; with the opposite direction, mean reduction = 26 mm/s, SD = 9). In addition to a large reduction in perceived speed due to adaptation, we also report that this effect is not direction sensitive. CONCLUSIONS: Tactile motion is susceptible to speed adaptation. This result complements previous reports of reliable direction aftereffects when using a dynamic test stimulus, as together they describe how perception of a moving stimulus in touch depends on the immediate history of stimulation. Given that the tSAE is not direction sensitive, we argue that peripheral adaptation does not explain it, because primary afferents are direction sensitive with friction-creating stimuli like ours (thus motion in their preferred direction should result in greater adaptation), and if perceived speed were critically dependent on these afferents' response intensity, the tSAE should be direction sensitive. The adaptation that reduces perceived speed therefore seems to be of central origin.
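
    The "about 30%" figure can be checked directly from the reported reductions relative to the 81 mm/s test speed:

    ```python
    base_speed = 81.0  # mm/s test speed from the record
    for label, reduction in [("same-direction adaptor", 23.0), ("opposite-direction adaptor", 26.0)]:
        print(f"{label}: {100.0 * reduction / base_speed:.0f}% slower")
    # -> roughly 28% and 32%, i.e. about 30% in both cases
    ```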

  10. Cross-sensory facilitation reveals neural interactions between visual and tactile motion in humans

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2011-04-01

    Full Text Available Many recent studies show that the human brain integrates information across the different senses and that stimuli of one sensory modality can enhance the perception of other modalities. Here we study the processes that mediate cross-modal facilitation and summation between visual and tactile motion. We find that while summation produced a generic, non-specific improvement of thresholds, probably reflecting higher-order interaction of decision signals, facilitation reveals a strong, direction-specific interaction, which we believe reflects sensory interactions. We measured visual and tactile velocity discrimination thresholds over a wide range of base velocities and conditions. Thresholds for both visual and tactile stimuli showed the characteristic dipper function, with the minimum thresholds occurring at a given pedestal speed. When visual and tactile coherent stimuli were combined (summation condition) the thresholds for these multi-sensory stimuli also showed a dipper function with the minimum thresholds occurring in a similar range to that for unisensory signals. However, the improvement of multisensory thresholds was weak and not directionally specific, well predicted by the maximum likelihood estimation model (agreeing with previous research). A different technique (facilitation) did, however, reveal direction-specific enhancement. Adding a non-informative pedestal motion stimulus in one sensory modality (vision or touch) selectively lowered thresholds in the other, by the same amount as pedestals in the same modality. Facilitation did not occur for neutral stimuli like sounds (that would also have reduced temporal uncertainty), nor for motion in the opposite direction, even in blocked trials where the subjects knew that the motion was in the opposite direction, showing that the facilitation was not under subject control. Cross-sensory facilitation is strong evidence for functionally relevant cross-sensory integration at early levels of sensory
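
    The maximum likelihood estimation prediction referred to above has a simple closed form: if the visual and tactile estimates have discrimination thresholds sigma_v and sigma_t, the optimally combined threshold satisfies 1/sigma_comb^2 = 1/sigma_v^2 + 1/sigma_t^2. A minimal sketch:

    ```python
    import math

    def mle_combined_threshold(sigma_v, sigma_t):
        """Optimal-combination (MLE) prediction for the bimodal discrimination
        threshold given the two unimodal thresholds."""
        return math.sqrt((sigma_v ** 2 * sigma_t ** 2) / (sigma_v ** 2 + sigma_t ** 2))

    # Equal unimodal thresholds predict at most a sqrt(2) improvement:
    print(mle_combined_threshold(2.0, 2.0))   # ~1.41
    ```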

  11. Helicopter flight simulation motion platform requirements

    Science.gov (United States)

    Schroeder, Jeffery Allyn

    Flight simulators attempt to reproduce in-flight pilot-vehicle behavior on the ground. This reproduction is challenging for helicopter simulators, as the pilot is often inextricably dependent on external cues for pilot-vehicle stabilization. One important simulator cue is platform motion; however, its required fidelity is unknown. To determine the required motion fidelity, several unique experiments were performed. A large displacement motion platform was used that allowed pilots to fly tasks with matched motion and visual cues. Then, the platform motion was modified to give cues varying from full motion to no motion. Several key results were found. First, lateral and vertical translational platform cues had significant effects on fidelity. Their presence improved performance and reduced pilot workload. Second, yaw and roll rotational platform cues were not as important as the translational platform cues. In particular, the yaw rotational motion platform cue did not appear at all useful in improving performance or reducing workload. Third, when the lateral translational platform cue was combined with visual yaw rotational cues, pilots believed the platform was rotating when it was not. Thus, simulator systems can be made more efficient by proper combination of platform and visual cues. Fourth, motion fidelity specifications were revised that now provide simulator users with a better prediction of motion fidelity based upon the frequency responses of their motion control laws. Fifth, vertical platform motion affected pilot estimates of steady-state altitude during altitude repositionings. This refutes the view that pilots estimate altitude and altitude rate in simulation solely from visual cues. Finally, the combined results led to a general method for configuring helicopter motion systems and for developing simulator tasks that more likely represent actual flight. The overall results can serve as a guide to future simulator designers and to today's operators.

  12. Analytical Analysis of Motion Separability

    Directory of Open Access Journals (Sweden)

    Marjan Hadian Jazi

    2013-01-01

    Full Text Available Motion segmentation is an important task in computer vision and several practical approaches have already been developed. A common approach to motion segmentation is to use the optical flow and formulate the segmentation problem using a linear approximation of the brightness constancy constraints. Although there are numerous solutions to solve this problem and their accuracies and reliabilities have been studied, the exact definition of the segmentation problem, its theoretical feasibility and the conditions for successful motion segmentation are yet to be derived. This paper presents a simplified theoretical framework for predicting the feasibility of segmentation of a two-dimensional linear equation system. A statistical definition of a separable motion (structure) is presented and a relatively straightforward criterion for predicting the separability of two different motions in this framework is derived. The applicability of the proposed criterion for prediction of the existence of multiple motions in practice is examined using both synthetic and real image sequences. The prescribed separability criterion is useful in designing computer vision applications as it is solely based on the amount of relative motion and the scale of measurement noise.

  13. Projectile Motion Hoop Challenge

    Science.gov (United States)

    Jordan, Connor; Dunn, Amy; Armstrong, Zachary; Adams, Wendy K.

    2018-04-01

    Projectile motion is a common phenomenon that is used in introductory physics courses to help students understand motion in two dimensions. Authors have shared a range of ideas for teaching this concept and the associated kinematics in The Physics Teacher; however, the "Hoop Challenge" is a new setup not before described in TPT. In this article an experiment is illustrated to explore projectile motion in a fun and challenging manner that has been used with both high school and university students. With a few simple materials, students have a vested interest in being able to calculate the height of the projectile at a given distance from its launch site. They also have an exciting visual demonstration of projectile motion when the lab is over.
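
    The calculation students need for the Hoop Challenge is the standard constant-acceleration result y(x) = y0 + x*tan(theta) - g*x^2 / (2*v0^2*cos^2(theta)). A short sketch follows; the launch speed, angle, and distance are made-up example values, not numbers from the article.

    ```python
    import math

    def projectile_height(v0, theta_deg, x, y0=0.0, g=9.81):
        """Height of a projectile after traveling horizontal distance x."""
        theta = math.radians(theta_deg)
        return y0 + x * math.tan(theta) - g * x ** 2 / (2.0 * (v0 * math.cos(theta)) ** 2)

    # Example: hoop centre height 2.0 m downrange for a ball launched
    # at 5 m/s and 50 degrees from a height of 1.0 m.
    print(round(projectile_height(v0=5.0, theta_deg=50.0, x=2.0, y0=1.0), 2))  # ~1.48 m
    ```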

  14. WE-G-18C-06: Is Diaphragm Motion a Good Surrogate for Liver Tumor Motion?

    Energy Technology Data Exchange (ETDEWEB)

    Yang, J [Department of Radiation Oncology, Duke University Medical Center, Durham, NC (United States); School of Information Science and Engineering, Shandong University, Jinan, Shandong (China); Cai, J; Zheng, C; Czito, B; Palta, M; Yin, F [Department of Radiation Oncology, Duke University Medical Center, Durham, NC (United States); Wang, H [School of Information Science and Engineering, Shandong University, Jinan, Shandong (China); Bashir, M [Department of Radiology, Duke University Medical Center, Durham, NC (United States)

    2014-06-15

    Purpose: To investigate whether diaphragm motion is a good surrogate for liver tumor motion by comparing their motion trajectories obtained from cine-MRI. Methods: Fourteen patients with hepatocellular carcinoma (10/14) or liver metastases (4/14) undergoing radiation therapy were included in this study. All patients underwent single-slice 2D cine-MRI simulations across the center of the tumor in three orthogonal planes. Tumor and diaphragm motion trajectories in the superior-inferior (SI), anterior-posterior (AP), and medial-lateral (ML) directions were obtained using a normalized cross-correlation-based tracking technique. Agreement between tumor and diaphragm motions was assessed by calculating the phase difference percentage (PDP), intra-class correlation coefficient (ICC), Bland-Altman analysis (Diffs) and paired t-test. The distance (D) between tumor and tracked diaphragm area was analyzed to understand its impact on the correlation between tumor and diaphragm motions. Results: Of all patients, the means (±standard deviations) of PDP were 7.1 (±1.1)%, 4.5 (±0.5)% and 17.5 (±4.5)% in the SI, AP and ML directions, respectively. The means of ICC were 0.98 (±0.02), 0.97 (±0.02), and 0.08 (±0.06) in the SI, AP and ML directions, respectively. The Diffs were 2.8 (±1.4) mm, 2.4 (±1.1) mm, and 2.2 (±0.5) mm in the SI, AP and ML directions, respectively. The p-values derived from the paired t-test were < 0.02 in the SI and AP directions, whereas they were > 0.58 in the ML direction, primarily due to the small motion in that direction. Tumor and diaphragmatic motion had high concordance when the distance between the tumor and tracked diaphragm areas was small. Conclusion: Preliminary results showed that liver tumor motion had good correlations with diaphragm motion in the SI and AP directions, indicating that diaphragm motion in the SI and AP directions could potentially be a reliable surrogate for liver tumor motion. NIH (1R21CA165384-01A1), Golfers Against Cancer (GAC
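
    A minimal sketch of this kind of agreement analysis is given below, using two synthetic SI motion traces as stand-ins for tracked tumor and diaphragm trajectories; it reports the Pearson correlation and Bland-Altman statistics (mean difference and 95% limits of agreement), which are simpler substitutes for the PDP and ICC metrics used in the study.

      import numpy as np

      # assumed synthetic SI traces (mm) sampled at 4 Hz: the tumor lags the diaphragm slightly
      t = np.arange(0, 60, 0.25)
      diaphragm = 8.0 * np.sin(2 * np.pi * t / 4.0)
      tumor = 7.0 * np.sin(2 * np.pi * (t - 0.2) / 4.0) \
              + 0.5 * np.random.default_rng(1).standard_normal(t.size)

      # Pearson correlation between the two traces
      r = np.corrcoef(tumor, diaphragm)[0, 1]

      # Bland-Altman statistics: mean difference and 95% limits of agreement
      diff = tumor - diaphragm
      bias = diff.mean()
      loa = 1.96 * diff.std(ddof=1)
      print("r = %.3f, bias = %.2f mm, limits of agreement = [%.2f, %.2f] mm"
            % (r, bias, bias - loa, bias + loa))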

  15. Haptically Induced Illusory Self-motion and the Influence of Context of Motion

    DEFF Research Database (Denmark)

    Nilsson, Niels Christian; Nordahl, Rolf; Sikström, Erik

    2012-01-01

    of the feet. The experiment was based on a within-subjects design and included four conditions, each representing one context of motion: an elevator, a train compartment, a bathroom, and a completely dark environment. The audiohaptic stimuli were identical across all conditions. The participants’ sensation...... of movement was assessed by means of existing measures of illusory self-motion, namely, reported self-motion illusion per stimulus type, illusion compellingness, intensity and onset time. Finally, the participants were also asked to estimate the experienced direction of movement. While the data obtained from...

  16. Motion sickness symptoms in a ship motion simulator: effects of inside, outside, and no view

    NARCIS (Netherlands)

    Bos, J.E.; MacKinnon, S.N.; Patterson, A.

    2005-01-01

    Vehicle motion characteristics differ between air, road, and sea environments, both vestibularly and visually. Effects of vision on motion sickness have been studied before, though less systematically in a naval setting. It is hypothesized that appropriate visual information on self-motion is

  17. Motion adaptation leads to parsimonious encoding of natural optic flow by blowfly motion vision system

    NARCIS (Netherlands)

    Heitwerth, J.; Kern, R.; Hateren, J.H. van; Egelhaaf, M.

    Neurons sensitive to visual motion change their response properties during prolonged motion stimulation. These changes have been interpreted as adaptive and were concluded, for instance, to adjust the sensitivity of the visual motion pathway to velocity changes or to increase the reliability of

  18. 19 CFR 210.15 - Motions.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Motions. 210.15 Section 210.15 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE ADJUDICATION AND ENFORCEMENT Motions § 210.15 Motions. (a) Presentation and disposition. (1) During the period...

  19. A motion algorithm to extract physical and motion parameters of mobile targets from cone-beam computed tomographic images.

    Science.gov (United States)

    Alsbou, Nesreen; Ahmad, Salahuddin; Ali, Imad

    2016-05-17

    A motion algorithm has been developed to extract the length, CT number level and motion amplitude of a mobile target from cone-beam CT (CBCT) images. The algorithm uses three measurable parameters, the apparent length, CT number level and gradient of the mobile target obtained from CBCT images, to determine the length and CT number of the stationary target and the motion amplitude. The predictions of this algorithm are tested with mobile targets of different well-known sizes that are made from tissue-equivalent gel and inserted into a thorax phantom. The phantom moves sinusoidally in one direction to simulate respiratory motion using eight amplitudes ranging from 0 to 20 mm. Using this motion algorithm, three unknown parameters are extracted from the CBCT images: the length of the target, the CT number level, and the speed or motion amplitude of the mobile target. The motion algorithm solves for the three unknown parameters using the measured length, CT number level and gradient of a well-defined mobile target obtained from CBCT images. The motion model agrees with the measured lengths, which depend on the target length and motion amplitude. The gradient of the CT number distribution of the mobile target depends on the stationary CT number level, the target length and the motion amplitude. Motion frequency and phase do not affect the elongation and CT number distribution of the mobile target and could not be determined. In summary, a motion algorithm has been developed to extract three parameters, the length, CT number level and motion amplitude or speed of mobile targets, directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without requiring motion tracking or sorting of the images into different breathing phases. The motion model developed here works well for tumors that have simple shapes and high contrast relative to surrounding tissues and that move in a nearly regular motion pattern.
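
    The core geometric idea, that sinusoidal motion of amplitude A smears a target of true length L over a support of roughly L + 2A in a slow CBCT acquisition, can be illustrated with the toy 1D simulation below; the grid, the boxcar target profile, and the chosen length and amplitude are assumptions made for illustration and are not the paper's phantom or algorithm.

      import numpy as np

      dx = 0.1                                  # grid spacing (mm)
      x = np.arange(-40, 40, dx)
      L, A, ct_level = 20.0, 5.0, 100.0         # assumed target length, motion amplitude, CT number

      def profile(center):
          """1D boxcar target of length L centred at 'center' with CT number ct_level."""
          return ct_level * (np.abs(x - center) <= L / 2)

      # time-average the moving target over one sinusoidal breathing cycle
      phases = np.linspace(0, 2 * np.pi, 400, endpoint=False)
      blurred = np.mean([profile(A * np.sin(p)) for p in phases], axis=0)

      apparent_length = dx * np.count_nonzero(blurred > 0)   # support of the blurred profile
      amplitude_est = (apparent_length - L) / 2.0
      print("apparent length %.1f mm (expect ~%.1f), estimated amplitude %.1f mm"
            % (apparent_length, L + 2 * A, amplitude_est))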

  20. Motion-compensated processing of image signals

    NARCIS (Netherlands)

    2010-01-01

    In a motion-compensated processing of images, input images are down-scaled (scl) to obtain down-scaled images, the down-scaled images are subjected to motion- compensated processing (ME UPC) to obtain motion-compensated images, the motion- compensated images are up-scaled (sc2) to obtain up-scaled

  1. 19 CFR 210.26 - Other motions.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Other motions. 210.26 Section 210.26 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE ADJUDICATION AND ENFORCEMENT Motions § 210.26 Other motions. Motions pertaining to discovery shall be filed in...

  2. Brain activity dynamics in human parietal regions during spontaneous switches in bistable perception.

    Science.gov (United States)

    Megumi, Fukuda; Bahrami, Bahador; Kanai, Ryota; Rees, Geraint

    2015-02-15

    The neural mechanisms underlying conscious visual perception have been extensively investigated using bistable perception paradigms. Previous functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) studies suggest that the right anterior superior parietal (r-aSPL) and the right posterior superior parietal lobule (r-pSPL) have opposite roles in triggering perceptual reversals. It has been proposed that these two areas are part of a hierarchical network whose dynamics determine perceptual switches. However, how these two parietal regions interact with each other and with the rest of the brain during bistable perception is not known. Here, we investigated such a model by recording brain activity using fMRI while participants viewed a bistable structure-from-motion stimulus. Using dynamic causal modeling (DCM), we found that resolving such perceptual ambiguity was specifically associated with reciprocal interactions between these parietal regions and V5/MT. Strikingly, the strength of bottom-up coupling from V5/MT to r-pSPL and from r-pSPL to r-aSPL predicted individual mean dominance duration. Our findings are consistent with a hierarchical predictive coding model of parietal involvement in bistable perception and suggest that visual information processing underlying spontaneous perceptual switches can be described as changes in connectivity strength between parietal and visual cortical regions. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Simulated earthquake ground motions

    International Nuclear Information System (INIS)

    Vanmarcke, E.H.; Gasparini, D.A.

    1977-01-01

    The paper reviews current methods for generating synthetic earthquake ground motions. Emphasis is on the special requirements demanded of procedures to generate motions for use in nuclear power plant seismic response analysis. Specifically, very close agreement is usually sought between the response spectra of the simulated motions and prescribed, smooth design response spectra. The features and capabilities of the computer program SIMQKE, which has been widely used in power plant seismic work, are described. Problems and pitfalls associated with the use of synthetic ground motions in seismic safety assessment are also pointed out. The limitations and paucity of recorded accelerograms together with the widespread use of time-history dynamic analysis for obtaining structural and secondary systems' response have motivated the development of earthquake simulation capabilities. A common model for synthesizing earthquakes is that of superposing sinusoidal components with random phase angles. The input parameters for such a model are, then, the amplitudes and phase angles of the contributing sinusoids as well as the characteristics of the variation of motion intensity with time, especially the duration of the motion. The amplitudes are determined from estimates of the Fourier spectrum or the spectral density function of the ground motion. These amplitudes may be assumed to be varying in time or constant for the duration of the earthquake. In the nuclear industry, the common procedure is to specify a set of smooth response spectra for use in aseismic design. This development and the need for time histories have generated much practical interest in synthesizing earthquakes whose response spectra 'match', or are compatible with, a set of specified smooth response spectra.
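
    A minimal sketch of the sinusoid-superposition model described above is shown below; the flat target amplitude spectrum, the trapezoidal intensity envelope, and the duration are illustrative assumptions and not the SIMQKE implementation, in which the amplitudes would come from a target spectral density and be iterated to match the design response spectrum.

      import numpy as np

      rng = np.random.default_rng(42)
      dt, duration = 0.01, 20.0                 # time step (s) and record duration (s)
      t = np.arange(0.0, duration, dt)

      # contributing sinusoids: frequencies, assumed (flat) amplitudes, random phase angles
      freqs = np.arange(0.2, 25.0, 0.2)         # Hz
      amps = np.ones_like(freqs)                # stand-in for amplitudes from a target spectrum
      phases = rng.uniform(0.0, 2 * np.pi, freqs.size)

      # stationary motion: superposition of sinusoids with random phases
      motion = (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None])).sum(axis=0)

      # trapezoidal intensity envelope: 2 s rise, strong shaking, 6 s decay
      envelope = np.clip(np.minimum(t / 2.0, (duration - t) / 6.0), 0.0, 1.0)
      accel = envelope * motion / np.abs(motion).max()   # normalised synthetic accelerogram
      print("peak ground acceleration (normalised):", np.abs(accel).max())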

  4. Design and Voluntary Motion Intention Estimation of a Novel Wearable Full-Body Flexible Exoskeleton Robot

    Directory of Open Access Journals (Sweden)

    Chunjie Chen

    2017-01-01

    Full Text Available The wearable full-body exoskeleton robot developed in this study is one application of a mobile cyberphysical system (CPS), which is a complex mobile system integrating mechanics, electronics, computer science, and artificial intelligence. Steel wire was used as the flexible transmission medium and a group of special wire-locking structures was designed. Additionally, we designed passive joints for some of the exoskeleton's joints. Finally, we proposed a novel gait phase recognition method for full-body exoskeletons using only joint angular sensors, plantar pressure sensors, and inclination sensors. The method consists of four procedures. Firstly, we classified the three types of main motion patterns: normal walking on the ground, stair-climbing and stair-descending, and sit-to-stand movement. Secondly, we segmented the experimental data into single gait cycles. Thirdly, we divided one gait cycle into eight gait phases. Finally, we built a gait phase recognition model based on a k-Nearest Neighbor classifier and trained it with the phase-labeled gait data. The experimental results show that the model has a 98.52% average correct rate of classification of the main motion patterns on the testing set and a 95.32% average correct rate of phase recognition on the testing set. Thus, the exoskeleton robot can recognize human motion intention in real time and coordinate its movement with the wearer.
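
    The final step, training a k-Nearest Neighbor model on phase-labeled sensor features, might look like the sketch below; the feature layout (joint angles, plantar pressures, trunk inclination) and the synthetic training data are assumptions for illustration, not the study's recorded gait data.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n_per_phase, n_features = 60, 10   # assumed: e.g. joint angles + plantar pressures + inclination

      # synthetic stand-in for phase-labeled gait data: 8 phases, each a noisy cluster in feature space
      X = np.vstack([rng.normal(loc=phase, scale=0.4, size=(n_per_phase, n_features))
                     for phase in range(8)])
      y = np.repeat(np.arange(8), n_per_phase)

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
      model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
      print("phase recognition accuracy on the held-out set: %.2f%%"
            % (100 * model.score(X_test, y_test)))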

  5. Development of a robotic evaluation system for the ability of proprioceptive sensation in slow hand motion.

    Science.gov (United States)

    Tanaka, Yoshiyuki; Mizoe, Genki; Kawaguchi, Tomohiro

    2015-01-01

    This paper proposes a simple diagnostic methodology for assessing proprioceptive/kinesthetic sensation using a robotic device. The ability to perceive virtual frictional forces is examined while the subject operates the robotic device by hand at a uniform slow velocity along a virtual straight or circular path. Experimental results from healthy subjects demonstrate that the percentage of correct answers on the designed perceptual tests changes with the motion direction as well as with the arm configuration and the HFM (human force manipulability) measure. These results suggest that the proposed methodology could be applied to the early detection of neuromuscular/neurological disorders.

  6. Perception of social interaction compresses subjective duration in an oxytocin-dependent manner.

    Science.gov (United States)

    Liu, Rui; Yuan, Xiangyong; Chen, Kepu; Jiang, Yi; Zhou, Wen

    2018-05-22

    Communication through body gestures permeates our daily life. Efficient perception of the message therein reflects one's social cognitive competency. Here we report that such competency is manifested temporally as shortened subjective duration of social interactions: motion sequences showing agents acting communicatively are perceived to be significantly shorter in duration as compared with those acting noncommunicatively. The strength of this effect is negatively correlated with one's autistic-like tendency. Critically, intranasal oxytocin administration restores the temporal compression effect in socially less proficient individuals, whereas the administration of atosiban, a competitive antagonist of oxytocin, diminishes the effect in socially proficient individuals. These findings indicate that perceived time, rather than being a faithful representation of physical time, is highly idiosyncratic and ingrained with one's personality trait. Moreover, they suggest that oxytocin is involved in mediating time perception of social interaction, further supporting the role of oxytocin in human social cognition. © 2018, Liu et al.

  7. Trajectory of coronary motion and its significance in robotic motion cancellation.

    Science.gov (United States)

    Cattin, Philippe; Dave, Hitendu; Grünenfelder, Jürg; Szekely, Gabor; Turina, Marko; Zünd, Gregor

    2004-05-01

    To characterize remaining coronary artery motion of beating pig hearts after stabilization with an 'Octopus' using an optical remote analysis technique. Three pigs (40, 60 and 65 kg) underwent full sternotomy after receiving general anesthesia. An 8-bit high speed black and white video camera (50 frames/s) coupled with a laser sensor (60 microm resolution) were used to capture heart wall motion in all three dimensions. Dopamine infusion was used to deliberately modulate cardiac contractility. Synchronized ECG, blood pressure, airway pressure and video data of the region around the first branching point of the left anterior descending (LAD) coronary artery after Octopus stabilization were captured for stretches of 8 s each. Several sequences of the same region were captured over a period of several minutes. Computerized off-line analysis allowed us to perform minute characterization of the heart wall motion. The movement of the points of interest on the LAD ranged from 0.22 to 0.81 mm in the lateral plane (x/y-axis) and 0.5-2.6 mm out of the plane (z-axis). Fast excursions (>50 microm/s in the lateral plane) occurred corresponding to the QRS complex and the T wave; while slow excursion phases (movement of the coronary artery after stabilization appears to be still significant. Minute characterization of the trajectory of motion could provide the substrate for achieving motion cancellation for existing robotic systems. Velocity plots could also help improve gated cardiac imaging.

  8. Modeling repetitive motions using structured light.

    Science.gov (United States)

    Xu, Yi; Aliaga, Daniel G

    2010-01-01

    Obtaining models of dynamic 3D objects is an important part of content generation for computer graphics. Numerous methods have been extended from static scenarios to model dynamic scenes. If the states or poses of the dynamic object repeat often during a sequence (but not necessarily periodically), we call such motion repetitive. There are many objects, such as toys, machines, and humans, that undergo repetitive motions. Our key observation is that when a motion state repeats, we can sample the scene under the same motion state again but using a different set of parameters, thus providing more information about each motion state. This enables the robust acquisition of dense 3D information, which is otherwise difficult for objects with repetitive motions, using only simple hardware. After the motion sequence, we group temporally disjoint observations of the same motion state together and produce a smooth space-time reconstruction of the scene. Effectively, the dynamic scene modeling problem is converted to a series of static scene reconstructions, which are easier to tackle. The varying sampling parameters can be, for example, structured-light patterns, illumination directions, and viewpoints, resulting in different modeling techniques. Based on this observation, we present an image-based motion-state framework and demonstrate our paradigm using either a synchronized or an unsynchronized structured-light acquisition method.

  9. Ground motion input in seismic evaluation studies

    International Nuclear Information System (INIS)

    Sewell, R.T.; Wu, S.C.

    1996-07-01

    This report documents research pertaining to conservatism and variability in seismic risk estimates. Specifically, it examines whether or not artificial motions produce unrealistic evaluation demands, i.e., demands significantly inconsistent with those expected from real earthquake motions. To study these issues, two types of artificial motions are considered: (a) motions with smooth response spectra, and (b) motions with realistic variations in spectral amplitude across vibration frequency. For both types of artificial motion, time histories are generated to match target spectral shapes. For comparison, empirical motions representative of those that might result from strong earthquakes in the Eastern U.S. are also considered. The study findings suggest that artificial motions resulting from typical simulation approaches (aimed at matching a given target spectrum) are generally adequate and appropriate in representing the peak-response demands that may be induced in linear structures and equipment responding to real earthquake motions. Also, given similar input Fourier energies at high frequencies, levels of input Fourier energy at low frequencies observed for artificial motions are substantially similar to those levels noted in real earthquake motions. In addition, the study reveals specific problems resulting from the application of Western U.S.-type motions for seismic evaluation of Eastern U.S. nuclear power plants.

  10. Speed and direction changes induce the perception of animacy in 7-month-old infants

    Directory of Open Access Journals (Sweden)

    Birgit eTräuble

    2014-10-01

    Full Text Available A large body of research has documented infants’ ability to classify animate and inanimate objects based on static or dynamic information. It has been shown that infants less than one year of age transfer animacy-specific expectations from dynamic point-light displays to static images. The present study examined whether basic motion cues that typically trigger judgments of perceptual animacy in older children and adults lead 7-month-olds to infer an ambiguous object’s identity from dynamic information. Infants were tested with a novel paradigm that required inferring the animacy status of an ambiguous moving shape. An ambiguous shape emerged from behind a screen and its identity could only be inferred from its motion. Its motion pattern varied distinctively between scenes: it either changed speed and direction in an animate way, or it moved along a straight path at a constant speed (i.e., in an inanimate way). At test, the identity of the shape was revealed and it was either consistent or inconsistent with its motion pattern. Infants looked longer on trials with the inconsistent outcome. We conclude that 7-month-olds’ representations of animates and inanimates include category-specific associations between static and dynamic attributes. Moreover, these associations seem to hold for simple dynamic cues that are considered minimal conditions for animacy perception.

  11. Fractional Brownian motion and motion governed by the fractional Langevin equation in confined geometries.

    Science.gov (United States)

    Jeon, Jae-Hyung; Metzler, Ralf

    2010-02-01

    Motivated by subdiffusive motion of biomolecules observed in living cells, we study the stochastic properties of a non-Brownian particle whose motion is governed by either fractional Brownian motion or the fractional Langevin equation and restricted to a finite domain. We investigate by analytic calculations and simulations how time-averaged observables (e.g., the time-averaged mean-squared displacement and displacement correlation) are affected by spatial confinement and dimensionality. In particular, we study the degree of weak ergodicity breaking and scatter between different single trajectories for this confined motion in the subdiffusive domain. The general trend is that deviations from ergodicity are decreased with decreasing size of the movement volume and with increasing dimensionality. We define the displacement correlation function and find that this quantity shows distinct features for fractional Brownian motion, fractional Langevin equation, and continuous time subdiffusion, such that it appears an efficient measure to distinguish these different processes based on single-particle trajectory data.
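
    A small sketch of the kind of quantity studied here, simulating fractional Brownian motion via the Cholesky factor of the fractional Gaussian noise covariance and computing the time-averaged mean-squared displacement, is given below; the Hurst exponent, trajectory length, and lag values are assumed, and unconfined motion is used rather than the paper's confined geometries.

      import numpy as np

      def fbm(n, hurst, rng):
          """Fractional Brownian motion of length n via Cholesky factorisation of the fGn covariance."""
          k = np.arange(n)
          gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst) - 2 * np.abs(k) ** (2 * hurst)
                         + np.abs(k - 1) ** (2 * hurst))    # fractional Gaussian noise autocovariance
          cov = gamma[np.abs(k[:, None] - k[None, :])]
          increments = np.linalg.cholesky(cov) @ rng.standard_normal(n)
          return np.concatenate(([0.0], np.cumsum(increments)))

      def time_averaged_msd(x, lag):
          """Time-averaged mean-squared displacement of trajectory x at a given lag (in samples)."""
          return np.mean((x[lag:] - x[:-lag]) ** 2)

      rng = np.random.default_rng(3)
      x = fbm(1024, hurst=0.3, rng=rng)                     # subdiffusive trajectory (H < 0.5)
      for lag in (1, 4, 16, 64):
          print("lag %3d  TA-MSD %.3f" % (lag, time_averaged_msd(x, lag)))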

  12. Motion Learning Based on Bayesian Program Learning

    Directory of Open Access Journals (Sweden)

    Cheng Meng-Zhen

    2017-01-01

    Full Text Available The concept of the virtual human has been highly anticipated since the 1980s. Using computer technology, human motion simulation can generate visual effects authentic enough to deceive the human eye. Bayesian Program Learning trains on one or a few motion sequences and generates new motion data by decomposing and recombining them, and the generated motion is more realistic and natural than that produced by traditional approaches. In this paper, motion learning based on Bayesian Program Learning allows us to quickly generate new motion data, reduce workload, improve work efficiency, reduce the cost of motion capture, and improve the reusability of data.

  13. Simultaneous PET-MR acquisition and MR-derived motion fields for correction of non-rigid motion in PET

    International Nuclear Information System (INIS)

    Tsoumpas, C.; Mackewn, J.E.; Halsted, P.; King, A.P.; Buerger, C.; Totman, J.J.; Schaeffter, T.; Marsden, P.K.

    2010-01-01

    Positron emission tomography (PET) provides an accurate measurement of radiotracer concentration in vivo, but performance can be limited by subject motion which degrades spatial resolution and quantitative accuracy. This effect may become a limiting factor for PET studies in the body as PET scanner technology improves. In this work, we propose a new approach to address this problem by employing motion information from images measured simultaneously using a magnetic resonance (MR) scanner. The approach is demonstrated using an MR-compatible PET scanner and PET-MR acquisition with a purpose-designed phantom capable of non-rigid deformations. Measured, simultaneously acquired MR data were used to correct for motion in PET, and results were compared with those obtained using motion information from PET images alone. Motion artefacts were significantly reduced and the PET image quality and quantification was significantly improved by the use of MR motion fields, whilst the use of PET-only motion information was less successful. Combined PET-MR acquisitions potentially allow PET motion compensation in whole-body acquisitions without prolonging PET acquisition time or increasing radiation dose. This, to the best of our knowledge, is the first study to demonstrate that simultaneously acquired MR data can be used to estimate and correct for the effects of non-rigid motion in PET. (author)

  14. COMPARISON OF BACKGROUND SUBTRACTION, SOBEL, ADAPTIVE MOTION DETECTION, FRAME DIFFERENCES, AND ACCUMULATIVE DIFFERENCES IMAGES ON MOTION DETECTION

    Directory of Open Access Journals (Sweden)

    Dara Incam Ramadhan

    2018-02-01

    Full Text Available Nowadays, digital image processing is used not only to recognize motionless objects but also to recognize moving objects in video. One use of moving-object recognition in video is motion detection, which can be applied in security cameras. Various methods for detecting motion have been developed, so this research compares several of them, namely Background Subtraction, Adaptive Motion Detection, Sobel, Frame Differences, and Accumulative Differences Images (ADI). Each method has a different level of accuracy. The background subtraction method achieved 86.1% accuracy indoors and 88.3% outdoors. In the Sobel method, the result of motion detection depends on the lighting conditions of the room being supervised: when the room is bright, the accuracy of the system decreases, and when the room is dark, the accuracy increases, reaching 80%. In the adaptive motion detection method, motion can be detected provided that there is no easily movable object in the camera's field of view. In the frame difference method, testing on RGB images using average computation with a threshold of 35 gives the best result. In the ADI method, the accuracy of motion detection reached 95.12%.
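
    The frame-difference variant with a fixed threshold of 35 could be sketched roughly as follows using OpenCV; the video path and the changed-pixel-count criterion for declaring motion are assumptions, not values reported in the study.

      import cv2

      cap = cv2.VideoCapture("surveillance.avi")    # assumed input video
      ok, prev = cap.read()
      prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

      THRESHOLD = 35          # grey-level difference threshold from the comparison above
      MIN_PIXELS = 500        # assumed: minimum number of changed pixels to report motion

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          diff = cv2.absdiff(gray, prev_gray)                       # per-pixel frame difference
          _, mask = cv2.threshold(diff, THRESHOLD, 255, cv2.THRESH_BINARY)
          if cv2.countNonZero(mask) > MIN_PIXELS:
              print("motion detected at frame", int(cap.get(cv2.CAP_PROP_POS_FRAMES)))
          prev_gray = gray

      cap.release()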

  15. Linearized motion estimation for articulated planes.

    Science.gov (United States)

    Datta, Ankur; Sheikh, Yaser; Kanade, Takeo

    2011-04-01

    In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.
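
    As a generic illustration of how linearized equality constraints can be folded into a linear least-squares motion system via a Karush-Kuhn-Tucker (KKT) solve, the numpy sketch below minimizes ||Ax - b||^2 subject to Cx = d; the matrices are random stand-ins, not the paper's articulated-plane homography parametrization.

      import numpy as np

      rng = np.random.default_rng(7)
      A = rng.standard_normal((40, 6))     # stand-in for stacked motion (e.g. brightness-constancy) equations
      b = rng.standard_normal(40)
      C = rng.standard_normal((2, 6))      # stand-in for linearized articulation equality constraints
      d = rng.standard_normal(2)

      # KKT system for min ||Ax - b||^2 subject to Cx = d:
      #   [ 2 A^T A   C^T ] [ x      ]   [ 2 A^T b ]
      #   [   C        0  ] [ lambda ] = [    d    ]
      kkt = np.block([[2 * A.T @ A, C.T],
                      [C, np.zeros((C.shape[0], C.shape[0]))]])
      rhs = np.concatenate([2 * A.T @ b, d])
      x = np.linalg.solve(kkt, rhs)[:A.shape[1]]

      print("constraint residual:", np.linalg.norm(C @ x - d))   # ~0: constraints hold exactly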

  16. Performance assessment of a programmable five degrees-of-freedom motion platform for quality assurance of motion management techniques in radiotherapy.

    Science.gov (United States)

    Huang, Chen-Yu; Keall, Paul; Rice, Adam; Colvill, Emma; Ng, Jin Aun; Booth, Jeremy T

    2017-09-01

    Inter-fraction and intra-fraction motion management methods are increasingly applied clinically and require the development of advanced motion platforms to facilitate testing and quality assurance program development. The aim of this study was to assess the performance of a 5 degrees-of-freedom (DoF) programmable motion platform, HexaMotion (ScandiDos, Uppsala, Sweden), against clinically observed tumor motion range, velocity and acceleration and the accuracy requirements of SABR prescribed in AAPM Task Group 142. Performance specifications for the motion platform were derived from literature regarding the motion characteristics of prostate and lung tumor targets required for real time motion management. The performance of the programmable motion platform was evaluated against (1) maximum range, velocity and acceleration (5 DoF), (2) static position accuracy (5 DoF) and (3) dynamic position accuracy using patient-derived prostate and lung tumor motion traces (3 DoF). Translational motion accuracy was compared against electromagnetic transponder measurements. Rotation was benchmarked with a digital inclinometer. The measured static accuracy and reproducibility for translation and rotation support the use of the platform for quality assurance and commissioning of motion management systems in radiation oncology.

  17. Characteristics of near-field earthquake ground motion

    International Nuclear Information System (INIS)

    Kim, H. K.; Choi, I. G.; Jeon, Y. S.; Seo, J. M.

    2002-01-01

    The near-field ground motions exhibit special response characteristics that differ from those of ordinary ground motions in the velocity and displacement response. This study first examines the characteristics of near-field ground motion depending on fault directivity and on the fault-normal and fault-parallel components. The response spectra of the near-field ground motions are then statistically processed and compared with the Regulatory Guide 1.60 spectrum, which is the current design spectrum for nuclear power plants. The response spectrum of the near-field ground motions shows large spectral velocity and displacement in the low frequency range. The spectral accelerations of near-field ground motion are greatly amplified in the high frequency range for the rock site motions, and in the low frequency range for the soil site motions. As a result, near-field ground motion effects should be considered in the seismic design and seismic safety evaluation of nuclear power plant structures and equipment.

  18. Human motion retrieval from hand-drawn sketch.

    Science.gov (United States)

    Chao, Min-Wen; Lin, Chao-Hung; Assa, Jackie; Lee, Tong-Yee

    2012-05-01

    The rapid growth of motion capture data increases the importance of motion retrieval. The majority of existing motion retrieval approaches are based on a labor-intensive step in which the user browses and selects a desired query motion clip from a large motion clip database. In this work, a novel sketching interface for defining the query is presented. This simple approach allows users to define the required motion by sketching several motion strokes over a drawn character, which requires less effort and extends the users’ expressiveness. To support the real-time interface, a specialized encoding of the motions and the hand-drawn query is required. Here, we introduce a novel hierarchical encoding scheme based on a set of orthonormal spherical harmonic (SH) basis functions, which provides a compact representation and avoids the CPU-intensive stage of temporal alignment used by previous solutions. Experimental results show that the proposed approach retrieves motions well and is capable of retrieving logically and numerically similar motions, which is superior to previous approaches. The user study shows that the proposed system can be a useful tool for inputting motion queries once users are familiar with it. Finally, an application that generates a 3D animation from a hand-drawn comic strip is demonstrated.

  19. A Control Strategy with Tactile Perception Feedback for EMG Prosthetic Hand

    Directory of Open Access Journals (Sweden)

    Changcheng Wu

    2015-01-01

    Full Text Available To improve the control effectiveness and make the prosthetic hand not only controllable but also perceivable, an EMG prosthetic hand control strategy is proposed in this paper. The control strategy consists of EMG self-learning motion recognition, a backstepping controller with stiffness fuzzy observation, and force tactile representation. EMG self-learning motion recognition is used to reduce the influence on EMG signals caused by the uncertainty of the contact position of the EMG sensors. The backstepping controller with stiffness fuzzy observation is used to realize position control and grasp force control. Velocity proportional control in free space and grasp force tracking control in restricted space can be realized by the same controller. The force tactile representation helps the user perceive the states of the prosthetic hand. Several experiments were carried out to verify the effectiveness of the proposed control strategy. The results indicate that the proposed strategy is effective. During the experiments, the comments of the participants showed that the proposed strategy is a better choice for amputees because of the improved controllability and perceptibility.

  20. Coupled transverse motion

    International Nuclear Information System (INIS)

    Teng, L.C.

    1989-01-01

    The magnetic field in an accelerator or a storage ring is usually so designed that the horizontal (x) and the vertical (y) motions of an ion are uncoupled. However, because of imperfections in construction and alignment, some small coupling is unavoidable. In this lecture, we discuss in a general way what is known about the behaviors of coupled motions in two degrees-of-freedom. 11 refs., 6 figs

  1. Digital anthropomorphic phantoms of non-rigid human respiratory and voluntary body motion for investigating motion correction in emission imaging

    International Nuclear Information System (INIS)

    Könik, Arda; Johnson, Karen L; Dasari, Paul; Pretorius, P H; Dey, Joyoni; King, Michael A; Connolly, Caitlin M; Segars, Paul W; Lindsay, Clifford

    2014-01-01

    The development of methods for correcting patient motion in emission tomography has been receiving increased attention. Often the performance of these methods is evaluated through simulations using digital anthropomorphic phantoms, such as the commonly used extended cardiac torso (XCAT) phantom, which models both respiratory and cardiac motion based on human studies. However, non-rigid body motion, which is frequently seen in clinical studies, is not present in the standard XCAT phantom. In addition, respiratory motion in the standard phantom is limited to a single generic trend. In this work, to obtain a more realistic representation of motion, we developed a series of individual-specific XCAT phantoms, modeling non-rigid respiratory and non-rigid body motions derived from the magnetic resonance imaging (MRI) acquisitions of volunteers. Acquisitions were performed in the sagittal orientation using the Navigator methodology. Baseline (no motion) acquisitions at end-expiration were obtained at the beginning of each imaging session for each volunteer. For the body motion studies, MRI was again acquired only at end-expiration for five body motion poses (shoulder stretch, shoulder twist, lateral bend, side roll, and axial slide). For the respiratory motion studies, an MRI was acquired during free/regular breathing. The magnetic resonance slices were then retrospectively sorted into 14 amplitude-binned respiratory states, end-expiration, end-inspiration, six intermediary states during inspiration, and six during expiration using the recorded Navigator signal. XCAT phantoms were then generated based on these MRI data by interactive alignment of the organ contours of the XCAT with the MRI slices using a graphical user interface. Thus far we have created five body motion and five respiratory motion XCAT phantoms from the MRI acquisitions of six healthy volunteers (three males and three females). Non-rigid motion exhibited by the volunteers was reflected in both respiratory

  2. Digital anthropomorphic phantoms of non-rigid human respiratory and voluntary body motion for investigating motion correction in emission imaging

    Science.gov (United States)

    Könik, Arda; Connolly, Caitlin M.; Johnson, Karen L.; Dasari, Paul; Segars, Paul W.; Pretorius, P. H.; Lindsay, Clifford; Dey, Joyoni; King, Michael A.

    2014-07-01

    The development of methods for correcting patient motion in emission tomography has been receiving increased attention. Often the performance of these methods is evaluated through simulations using digital anthropomorphic phantoms, such as the commonly used extended cardiac torso (XCAT) phantom, which models both respiratory and cardiac motion based on human studies. However, non-rigid body motion, which is frequently seen in clinical studies, is not present in the standard XCAT phantom. In addition, respiratory motion in the standard phantom is limited to a single generic trend. In this work, to obtain a more realistic representation of motion, we developed a series of individual-specific XCAT phantoms, modeling non-rigid respiratory and non-rigid body motions derived from the magnetic resonance imaging (MRI) acquisitions of volunteers. Acquisitions were performed in the sagittal orientation using the Navigator methodology. Baseline (no motion) acquisitions at end-expiration were obtained at the beginning of each imaging session for each volunteer. For the body motion studies, MRI was again acquired only at end-expiration for five body motion poses (shoulder stretch, shoulder twist, lateral bend, side roll, and axial slide). For the respiratory motion studies, an MRI was acquired during free/regular breathing. The magnetic resonance slices were then retrospectively sorted into 14 amplitude-binned respiratory states, end-expiration, end-inspiration, six intermediary states during inspiration, and six during expiration using the recorded Navigator signal. XCAT phantoms were then generated based on these MRI data by interactive alignment of the organ contours of the XCAT with the MRI slices using a graphical user interface. Thus far we have created five body motion and five respiratory motion XCAT phantoms from the MRI acquisitions of six healthy volunteers (three males and three females). Non-rigid motion exhibited by the volunteers was reflected in both respiratory

  3. Respiratory lung motion analysis using a nonlinear motion correction technique for respiratory-gated lung perfusion SPECT images

    International Nuclear Information System (INIS)

    Ue, Hidenori; Haneishi, Hideaki; Iwanaga, Hideyuki; Suga, Kazuyoshi

    2007-01-01

    This study evaluated the respiratory motion of lungs using a nonlinear motion correction technique for respiratory-gated single photon emission computed tomography (SPECT) images. The motion correction technique corrects the respiratory motion of the lungs nonlinearly between two-phase images obtained by respiratory-gated SPECT. The displacement vectors resulting from respiration can be computed at every location of the lungs. Respiratory lung motion analysis is carried out by calculating the mean value of the body axis component of the displacement vector in each of the 12 small regions into which the lungs were divided. In order to enable inter-patient comparison, the 12 mean values were normalized by the length of the lung region along the direction of the body axis. This method was applied to 25 Technetium (Tc)-99m-macroaggregated albumin (MAA) perfusion SPECT images, and motion analysis results were compared with the diagnostic results. It was confirmed that the respiratory lung motion reflects the ventilation function. A statistically significant difference in the amount of the respiratory lung motion was observed between the obstructive pulmonary diseases and other conditions, based on an unpaired Student's t test (P<0.0001). A difference in the motion between normal lungs and lungs with a ventilation obstruction was detected by the proposed method. This method is effective for evaluating obstructive pulmonary diseases such as pulmonary emphysema and diffuse panbronchiolitis. (author)

  4. Five-dimensional motion compensation for respiratory and cardiac motion with cone-beam CT of the thorax region

    Science.gov (United States)

    Sauppe, Sebastian; Hahn, Andreas; Brehm, Marcus; Paysan, Pascal; Seghers, Dieter; Kachelrieß, Marc

    2016-03-01

    We propose an adapted method of our previously published five-dimensional (5D) motion compensation (MoCo) algorithm, developed for micro-CT imaging of small animals, to provide for the first time motion artifact-free 5D cone-beam CT (CBCT) images from a conventional flat detector-based CBCT scan of clinical patients. The image quality of retrospectively respiratory- and cardiac-gated volumes from flat detector CBCT scans is degraded by severe sparse-projection artifacts. These artifacts further complicate motion estimation, which is required for MoCo image reconstruction. To obtain high quality 5D CBCT images at the same x-ray dose and the same number of projections as today's 3D CBCT, we developed a double MoCo approach based on motion vector fields (MVFs) for respiratory and cardiac motion. In a first step, our previously published four-dimensional (4D) artifact-specific cyclic motion-compensation (acMoCo) approach is applied to compensate for the respiratory patient motion. With this information, a cyclic phase-gated deformable heart registration algorithm is applied to the respiratory motion-compensated 4D CBCT data, resulting in cardiac MVFs. We apply these MVFs to double-gated images, thereby obtaining respiratory and cardiac motion-compensated 5D CBCT images. Our 5D MoCo approach was applied to patient data acquired with the TrueBeam 4D CBCT system (Varian Medical Systems). The double MoCo approach turned out to be very efficient and removed nearly all streak artifacts by making use of 100% of the projection data for each reconstructed frame. The 5D MoCo patient data show fine details and no motion blurring, even in regions close to the heart where motion is fastest.

  5. Temporal logic motion planning

    CSIR Research Space (South Africa)

    Seotsanyana, M

    2010-01-01

    Full Text Available In this paper, a critical review on temporal logic motion planning is presented. The review paper aims to address the following problems: (a) In a realistic situation, the motion planning problem is carried out in real-time, in a dynamic, uncertain...

  6. Evaluation of a direct motion estimation/correction method in respiratory-gated PET/MRI with motion-adjusted attenuation.

    Science.gov (United States)

    Bousse, Alexandre; Manber, Richard; Holman, Beverley F; Atkinson, David; Arridge, Simon; Ourselin, Sébastien; Hutton, Brian F; Thielemans, Kris

    2017-06-01

    Respiratory motion compensation in PET/CT and PET/MRI is essential as motion is a source of image degradation (motion blur, attenuation artifacts). In previous work, we developed a direct method for joint image reconstruction/motion estimation (JRM) for attenuation-corrected (AC) respiratory-gated PET, which uses a single attenuation-map (μ-map). This approach was successfully implemented for respiratory-gated PET/CT, but since it relied on an accurate μ-map for motion estimation, the question of its applicability in PET/MRI is open. The purpose of this work is to investigate the feasibility of JRM in PET/MRI and to assess the robustness of the motion estimation when a degraded μ-map is used. We performed a series of JRM reconstructions from simulated PET data using a range of simulated Dixon MRI sequence derived μ-maps with wrong attenuation values in the lungs, from -100% (no attenuation) to +100% (double attenuation), as well as truncated arms. We compared the estimated motions with the one obtained from JRM in ideal conditions (no noise, true μ-map as an input). We also applied JRM on 4 patient datasets of the chest, 3 of them containing hot lesions. Patient list-mode data were gated using a principal component analysis method. We compared SUV max values of the JRM reconstructed activity images and non motion-corrected images. We also assessed the estimated motion fields by comparing the deformed JRM-reconstructed activity with individually non-AC reconstructed gates. Experiments on simulated data showed that JRM-motion estimation is robust to μ-map degradation in the sense that it produces motion fields similar to the ones obtained when using the true μ-map, regardless of the attenuation errors in the lungs (PET/MRI clinical datasets. It provides a potential alternative to existing methods where the motion fields are pre-estimated from separate MRI measurements. © 2017 University College London (UCL). Medical Physics published by Wiley Periodicals, Inc

  7. Blind retrospective motion correction of MR images.

    Science.gov (United States)

    Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard

    2013-12-01

    Subject motion can severely degrade MR images. A retrospective motion correction algorithm, Gradient-based motion correction, is proposed that significantly reduces ghosting and blurring artifacts due to subject motion. The technique uses the raw data of standard imaging sequences; no sequence modifications or additional equipment such as tracking devices are required. Rigid motion is assumed. The approach iteratively searches for the motion trajectory yielding the sharpest image as measured by the entropy of spatial gradients. The vast space of motion parameters is efficiently explored by gradient-based optimization with a convergence guarantee. The method has been evaluated on both synthetic and real data in two and three dimensions using standard imaging techniques. MR images are consistently improved over different kinds of motion trajectories. Using a graphics processing unit implementation, computation times are on the order of a few minutes for a full three-dimensional volume. The presented technique can be an alternative or a complement to prospective motion correction methods and is able to improve images with strong motion artifacts from standard imaging sequences without requiring additional data. Copyright © 2013 Wiley Periodicals, Inc., a Wiley company.
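
    The sharpness objective, the entropy of spatial gradients, and the idea of searching for motion parameters that minimize it can be illustrated with the toy sketch below: half of the k-space lines of a synthetic image are corrupted by a translation, and the shift is recovered by brute-force search. The phantom, the single-shift motion model, and the search grid are assumptions; the actual algorithm handles general rigid trajectories with gradient-based optimization rather than a grid.

      import numpy as np

      def gradient_entropy(img):
          """Entropy of the spatial-gradient magnitude: lower means a sharper image."""
          gx, gy = np.gradient(np.abs(img))
          h = np.sqrt(gx ** 2 + gy ** 2)
          p = h / (h.sum() + 1e-12)
          return float(-(p * np.log(p + 1e-12)).sum())

      # synthetic object and its k-space
      N = 128
      obj = np.zeros((N, N)); obj[40:90, 50:80] = 1.0
      k = np.fft.fft2(obj)
      ky = np.fft.fftfreq(N)[:, None]           # spatial frequency along axis 0, one per k-space row

      # simulate a 3-pixel shift of the object during the second half of the acquisition
      second_half = (np.arange(N) >= N // 2)[:, None]
      k_corrupt = np.where(second_half, k * np.exp(-2j * np.pi * ky * 3.0), k)

      # brute-force search for the correction that yields the sharpest image
      scores = []
      for s in np.arange(-6.0, 6.25, 0.25):
          k_try = np.where(second_half, k_corrupt * np.exp(2j * np.pi * ky * s), k_corrupt)
          scores.append((gradient_entropy(np.fft.ifft2(k_try)), s))
      print("estimated shift (pixels):", min(scores)[1])   # should recover ~3.0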

  8. Motion correction in thoracic positron emission tomography

    CERN Document Server

    Gigengack, Fabian; Dawood, Mohammad; Schäfers, Klaus P

    2015-01-01

    Respiratory and cardiac motion leads to image degradation in Positron Emission Tomography (PET), which impairs quantification. In this book, the authors present approaches to motion estimation and motion correction in thoracic PET. The approaches for motion estimation are based on dual gating and mass-preserving image registration (VAMPIRE) and mass-preserving optical flow (MPOF). With mass-preservation, image intensity modulations caused by highly non-rigid cardiac motion are accounted for. Within the image registration framework different data terms, different variants of regularization and parametric and non-parametric motion models are examined. Within the optical flow framework, different data terms and further non-quadratic penalization are also discussed. The approaches for motion correction particularly focus on pipelines in dual gated PET. A quantitative evaluation of the proposed approaches is performed on software phantom data with accompanied ground-truth motion information. Further, clinical appl...

  9. Embodied perception: A proposal to reconcile affordance and spatial perception

    NARCIS (Netherlands)

    Canal Bruland, R.; van der Kamp, J.

    2015-01-01

    Proffitt's embodied approach to perception is deeply indebted to Gibson's ecological approach to visual perception, in particular the idea that the primary objects of perception are affordances or what the environment offers for action. Yet, rather than directly addressing affordance perception,

  10. The perception of emotion in body expressions.

    Science.gov (United States)

    de Gelder, B; de Borst, A W; Watson, R

    2015-01-01

    During communication, we perceive and express emotional information through many different channels, including facial expressions, prosody, body motion, and posture. Although historically the human body has been perceived primarily as a tool for actions, there is now increased understanding that the body is also an important medium for emotional expression. Indeed, research on emotional body language is rapidly emerging as a new field in cognitive and affective neuroscience. This article reviews how whole-body signals are processed and understood, at the behavioral and neural levels, with specific reference to their role in emotional communication. The first part of this review outlines brain regions and spectrotemporal dynamics underlying perception of isolated neutral and affective bodies, the second part details the contextual effects on body emotion recognition, and the final part discusses body processing on a subconscious level. More specifically, research has shown that body expressions as compared with neutral bodies draw upon a larger network of regions responsible for action observation and preparation, emotion processing, body processing, and integrative processes. Results from neurotypical populations and masking paradigms suggest that subconscious processing of affective bodies relies on a specific subset of these regions. Moreover, recent evidence has shown that emotional information from the face, voice, and body interacts, with body motion and posture often highlighting and intensifying the emotion expressed in the face and voice. © 2014 John Wiley & Sons, Ltd.

  11. Superluminal motion (review)

    Science.gov (United States)

    Malykin, G. B.; Romanets, E. A.

    2012-06-01

    Prior to the development of Special Relativity, no restrictions were imposed on the velocity of the motion of particles and material bodies, as well as on energy transfer and signal propagation. At the end of the 19th century and the beginning of the 20th century, it was shown that a charge that moves at a velocity faster than the speed of light in an optical medium, in particular, in vacuum, gives rise to impact radiation, which later was termed the Vavilov-Cherenkov radiation. Shortly after the development of Special Relativity, some researchers considered the possibility of superluminal motion. In 1923, the Soviet physicist L.Ya. Strum suggested the existence of tachyons, which, however, have not been discovered yet. Superluminal motions can occur only for images, e.g., for so-called "light spots," which were considered in 1972 by V.L. Ginzburg and B.M. Bolotovskii. These spots can move with a superluminal phase velocity but are incapable of transferring energy and information. Nevertheless, these light spots may induce quite real generation of microwave radiation in closed waveguides and create the Vavilov-Cherenkov radiation in vacuum. In this work, we consider various paradoxes, illusions, and artifacts associated with superluminal motion.

  12. Programmable motion of DNA origami mechanisms.

    Science.gov (United States)

    Marras, Alexander E; Zhou, Lifeng; Su, Hai-Jun; Castro, Carlos E

    2015-01-20

    DNA origami enables the precise fabrication of nanoscale geometries. We demonstrate an approach to engineer complex and reversible motion of nanoscale DNA origami machine elements. We first design, fabricate, and characterize the mechanical behavior of flexible DNA origami rotational and linear joints that integrate stiff double-stranded DNA components and flexible single-stranded DNA components to constrain motion along a single degree of freedom and demonstrate the ability to tune the flexibility and range of motion. Multiple joints with simple 1D motion were then integrated into higher order mechanisms. One mechanism is a crank-slider that couples rotational and linear motion, and the other is a Bennett linkage that moves between a compacted bundle and an expanded frame configuration with a constrained 3D motion path. Finally, we demonstrate distributed actuation of the linkage using DNA input strands to achieve reversible conformational changes of the entire structure on ∼ minute timescales. Our results demonstrate programmable motion of 2D and 3D DNA origami mechanisms constructed following a macroscopic machine design approach.

  13. Programmable motion of DNA origami mechanisms

    Science.gov (United States)

    Marras, Alexander E.; Zhou, Lifeng; Su, Hai-Jun; Castro, Carlos E.

    2015-01-01

    DNA origami enables the precise fabrication of nanoscale geometries. We demonstrate an approach to engineer complex and reversible motion of nanoscale DNA origami machine elements. We first design, fabricate, and characterize the mechanical behavior of flexible DNA origami rotational and linear joints that integrate stiff double-stranded DNA components and flexible single-stranded DNA components to constrain motion along a single degree of freedom and demonstrate the ability to tune the flexibility and range of motion. Multiple joints with simple 1D motion were then integrated into higher order mechanisms. One mechanism is a crank–slider that couples rotational and linear motion, and the other is a Bennett linkage that moves between a compacted bundle and an expanded frame configuration with a constrained 3D motion path. Finally, we demonstrate distributed actuation of the linkage using DNA input strands to achieve reversible conformational changes of the entire structure on ∼minute timescales. Our results demonstrate programmable motion of 2D and 3D DNA origami mechanisms constructed following a macroscopic machine design approach. PMID:25561550

  14. Applications of Phase-Based Motion Processing

    Science.gov (United States)

    Branch, Nicholas A.; Stewart, Eric C.

    2018-01-01

    Image pyramids provide useful information for determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Instead of implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique and the necessary software to display the phase changes of high-frequency signals within video. The presented technique quickly identifies regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but requires use of the computationally intensive Fourier transform. While Riesz pyramids present an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still presents large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software is presented for quickly identifying structural response through optical flow and phase visualization in both Python and MATLAB.

  15. Psychophysical evidence for auditory motion parallax.

    Science.gov (United States)

    Genzel, Daria; Schutte, Michael; Brimijoin, W Owen; MacNeilage, Paul R; Wiegrebe, Lutz

    2018-04-17

    Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax. Here we demonstrate with formal psychophysics that humans can exploit auditory motion parallax, i.e., the change in the dynamic binaural cues elicited by self-motion, to assess the relative depths of two sound sources. Our data show that sensitivity to relative depth is best when subjects move actively; performance deteriorates when subjects are moved by a motion platform or when the sound sources themselves move. This is true even though the dynamic binaural cues elicited by these three types of motion are identical. Our data demonstrate a perceptual strategy to segregate intermittent sound sources in depth and highlight the tight interaction between self-motion and binaural processing that allows assessment of the spatial layout of complex acoustic scenes.

  16. Semi-automatic detection and correction of body organ motion, particularly cardiac motion in SPECT studies

    International Nuclear Information System (INIS)

    Quintana, J.C.; Caceres, F.; Vargas, P.

    2002-01-01

    Aim: To detect patient motion during SPECT imaging. Material and Methods: A SPECT study is carried out on a patient's body organ, such as the heart, and frames of image data are thereby acquired. The image data in these frames are subjected to a series of mappings and computations, from which frames containing a significant amount of organ motion can be identified. Motion is quantified by shifting some of the mapped data within a predetermined range and selecting the shift that minimizes the magnitude of a motion-sensitive mathematical function. This function is constructed from the full set of image frames using the pixel data within a region covering the body organ. Using a cine display of the planar image data, the operator defines the working region by marking two points, which define two horizontal lines bounding the area of the body organ; this is the only operator intervention. The mathematical function integrates pixel data from the full set of image frames and therefore does not use derivatives, which can distort noisy data. Moreover, as a global function, this method is superior to one that uses a frame-to-frame cross-correlation function to identify motion between adjacent frames. The method was implemented computationally using standard image-processing software. Ten SPECT studies with movement (Sestamibi cardiac studies and 99mTc-ECD brain SPECT studies) were selected, plus two with no movement. The acquisition protocol for the cardiac studies was as follows: step-and-shoot mode, non-circular orbit, 64 stops of 20 s each, 64x64x16 matrix, and LEHR collimator. For the brain SPECT, 128 stops over 360° were used. Artificial vertical displacements (±1-2 pixels) over several frames were introduced into the studies with no movement to simulate patient motion. Results: The method was successfully tested in all cases and was capable of recognizing SPECT studies with no body motion as well as those with body motion (both from the
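    The abstract does not give the exact form of the motion-sensitive function, so the sketch below uses a sum of squared differences against the ensemble mean of all frames (within the operator-defined row band) as a stand-in; projection-angle dependence is ignored for simplicity, and the function names are hypothetical, not from the original software.

    # Minimal sketch: estimate per-frame vertical shifts by minimizing a global cost
    # over a small shift range, using all frames (no frame-to-frame derivatives).
    import numpy as np

    def estimate_vertical_shifts(frames, row_lo, row_hi, max_shift=2):
        """For each projection frame, find the integer vertical shift within
        +/- max_shift pixels that best aligns the operator-defined row band
        [row_lo:row_hi] with the mean of all frames over that band."""
        frames = np.asarray(frames, dtype=float)
        reference = frames[:, row_lo:row_hi, :].mean(axis=0)   # built from all frames
        shifts = []
        for frame in frames:
            best_shift, best_cost = 0, np.inf
            for s in range(-max_shift, max_shift + 1):
                shifted = np.roll(frame, s, axis=0)[row_lo:row_hi, :]
                cost = np.sum((shifted - reference) ** 2)      # global, derivative-free cost
                if cost < best_cost:
                    best_cost, best_shift = cost, s
            shifts.append(best_shift)
        return shifts

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        y, x = np.mgrid[0:64, 0:64]
        organ = 100.0 * np.exp(-((y - 32) ** 2 + (x - 32) ** 2) / 80.0)   # synthetic organ
        study = np.stack([rng.poisson(organ + 5.0) for _ in range(64)]).astype(float)
        study[30] = np.roll(study[30], 2, axis=0)   # simulate a 2-pixel patient shift
        print(estimate_vertical_shifts(study, row_lo=16, row_hi=48))      # frame 30 -> -2

    Correction would then amount to rolling each frame back by its estimated shift before reconstruction.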

  17. Cervical spine motion: radiographic study

    International Nuclear Information System (INIS)

    Morgan, J.P.; Miyabayashi, T.; Choy, S.

    1986-01-01

    Knowledge of the acceptable range of motion of the cervical spine of the dog is used in the radiographic diagnosis of both developmental and degenerative diseases. A series of radiographs of mature Beagle dogs was used to identify motion within the sagittal and transverse planes. Positioning of the dog's head and neck was standardized using a restraining board and mimicked positions thought to be of value in diagnostic radiology. The range of motion was greatest between C2 and C5. Reports of severe disk degeneration in the cervical spine of the Beagle describe the most severely involved disks as C4 through C7; thus, a high range of motion between vertebral segments does not seem to be the cause of the severe degenerative disk disease. Dorsoventral slippage between vertebral segments was seen but was not accurately measured. Wedging of disks was clearly identified. At the atlantoaxio-occipital region, there was a high degree of motion within the sagittal plane at the atlantoaxial and atlanto-occipital joints; this measurement can serve as a guideline in the radiographic diagnosis of instability due to developmental anomalies in this region. Lateral motion within the transverse plane was detected at these two joints; however, the motion was minimal, and the measurements seemed less accurate because of rotation of the cervical spine. The height of the vertebral canal was consistently greater at the caudal orifice, warning of the possibility of overdiagnosis in suspected instances of cervical spondylopathy.

  18. Sensory memory of illusory depth in structure-from-motion.

    Science.gov (United States)

    Pastukhov, Alexander; Lissner, Anna; Füllekrug, Jana; Braun, Jochen

    2014-01-01

    When multistable displays (stimuli consistent with two or more equally plausible perceptual interpretations) are presented intermittently, their perceptions are stabilized by sensory memory. Independent memory traces are generated not only for different types of multistable displays (Maier, Wilke, Logothetis, & Leopold, Current Biology 13:1076-1085, 2003), but also for different ambiguous features of binocular rivalry (Pearson & Clifford, Journal of Vision 4:196-202, 2004). In the present study, we examined whether a similar independence of sensory memories is observed in structure-from-motion (SFM), a multistable display with two ambiguous properties. In SFM, a 2-D planar motion creates a vivid impression of a rotating 3-D volume. Both the illusory rotation and illusory depth (i.e., how close parts of an object appear to the observer) of an SFM object are ambiguous. We dissociated the sensory memories of these two ambiguous properties by using an intermittent presentation in combination with a forced-ambiguous-switch paradigm (Pastukhov, Vonau, & Braun, PLoS ONE 7:e37734, 2012). We demonstrated that the illusory depth of SFM generates a sensory memory trace that is independent from that of illusory rotation. Despite this independence, the specificity of the sensory memories was identical for illusory depth and illusory rotation. The history effect was weakened by a change in the volumetric property of a shape (whether it was a hollow band or a filled drum volume), but not by changes in color or size. We discuss how these new results constrain models of sensory memory and SFM processing.
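    The ambiguity exploited in such experiments comes from the stimulus construction itself. The sketch below (not taken from the cited study) generates the classic ambiguous SFM display: random dots on a transparent rotating cylinder are orthographically projected to 2-D, so the same planar dot motion is consistent with either direction of 3-D rotation. All names and parameters are illustrative.

    # Minimal sketch: ambiguous structure-from-motion dot stimulus (rotating cylinder).
    import numpy as np

    def sfm_dot_frames(n_dots=200, n_frames=120, omega=2 * np.pi / 120,
                       radius=1.0, height=2.0, seed=0):
        """Return an (n_frames, n_dots, 2) array of projected (x, y) dot positions."""
        rng = np.random.default_rng(seed)
        phi = rng.uniform(0.0, 2.0 * np.pi, n_dots)            # azimuth on the cylinder
        y = rng.uniform(-height / 2.0, height / 2.0, n_dots)   # fixed vertical positions
        frames = np.empty((n_frames, n_dots, 2))
        for t in range(n_frames):
            frames[t, :, 0] = radius * np.cos(phi + omega * t)  # orthographic projection:
            frames[t, :, 1] = y                                  # the depth (sine) component is
        return frames                                            # discarded, so rotation direction is ambiguous

    frames = sfm_dot_frames()
    print(frames.shape)   # (120, 200, 2); render the dots frame by frame to display the stimulus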

  19. Method through motion

    DEFF Research Database (Denmark)

    Steijn, Arthur

    2016-01-01

    Contemporary scenography often consists of video-projected motion graphics. The field is lacking in academic methods and rigour: descriptions and models relevant for the creation as well as the analysis of existing works. In order to understand the phenomenon of motion graphics in a scenographic context, I have been conducting a practice-led research project. Central to the project is the construction of a design model describing sets of procedures, concepts and terminology relevant for the design and study of motion graphics in spatial contexts. The focus of this paper is the role of model construction as a support to working systematically in a practice-led research project. The design model is being developed through design laboratories and workshops with students and professionals who provide feedback that leads to incremental improvements. Working with this model construction-as-method reveals...

  20. Embodied Perception: A Proposal to Reconcile Affordance and Spatial Perception

    OpenAIRE

    Cañal-Bruland, Rouwen; van der Kamp, John

    2015-01-01

    Proffitt's embodied approach to perception is deeply indebted to Gibson's ecological approach to visual perception, in particular the idea that the primary objects of perception are affordances or what the environment offers for action. Yet, rather than directly addressing affordance perception, most of the empirical work evaluating Proffitt's approach focuses on the perception of spatial properties of the environment. We propose that theoretical and empirical efforts should be directed towar...