WorldWideScience

Sample records for gaze-following behavior

  1. Exploration of Computer Game Interventions in Improving Gaze Following Behavior in Children with Autism Spectrum Disorders

    OpenAIRE

    Kane, Jessi Lynn

    2011-01-01

Statistics show the prevalence of autism spectrum disorder (ASD), a developmental delay disorder, is now 1 in 110 children in the United States (Rice, 2009), nearing 1% of the population. Therefore, this study looked at ways modern technology could assist these children and their families. One deficit in ASD is the inability to respond to gaze referencing (i.e., follow the eye gaze of another adult, child, etc.), a correlate of the responding to joint attention (RJA) process. This not only aff...

  2. Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting.

    Science.gov (United States)

    Atabaki, Artin; Marciniak, Karolina; Dicke, Peter W; Thier, Peter

    2015-07-01

Despite the ecological importance of gaze following, little is known about the underlying neuronal processes which allow us to extract gaze direction from the geometric features of the eye and head of a conspecific. In order to understand the neuronal mechanisms underlying this ability, a careful description of the capacity and the limitations of gaze following at the behavioral level is needed. Previous studies of gaze following, which relied on naturalistic settings, have the disadvantage of allowing only very limited control of potentially relevant visual features guiding gaze following, such as the contrast of iris and sclera or the shape of the eyelids, and, in the case of photographs, they lack depth. Hence, in order to gain full control of potentially relevant features, we decided to study gaze following of human observers guided by the gaze of a human avatar seen stereoscopically. To this end we established a stereoscopic 3D virtual reality setup in which we tested human subjects' ability to detect which target a human avatar was looking at. Following the gaze of the avatar showed all the features of the gaze following of a natural person, namely a substantial degree of precision associated with a consistent pattern of systematic deviations from the target. Poor stereo vision affected performance surprisingly little (only in certain experimental conditions). Only gaze following guided by targets at larger downward eccentricities exhibited a differential effect of the presence or absence of accompanying movements of the avatar's eyelids and eyebrows. Copyright © 2015 Elsevier Ltd. All rights reserved.
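The precision measure at the heart of such a setup, the angular deviation between the avatar's gaze ray and each candidate target, is easy to make concrete. Below is a minimal, hypothetical sketch (not the authors' code; the positions, units, and nearest-target decision rule are illustrative assumptions):

```python
import numpy as np

def angular_error_deg(gaze_dir, eye_pos, target_pos):
    """Angle (degrees) between a gaze direction and the eye-to-target vector."""
    gaze = np.asarray(gaze_dir, dtype=float)
    to_target = np.asarray(target_pos, dtype=float) - np.asarray(eye_pos, dtype=float)
    cos = np.dot(gaze, to_target) / (np.linalg.norm(gaze) * np.linalg.norm(to_target))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def chosen_target(gaze_dir, eye_pos, targets):
    """Pick the target closest (in angle) to the gaze ray, as an observer might."""
    errors = [angular_error_deg(gaze_dir, eye_pos, t) for t in targets]
    return int(np.argmin(errors)), errors

# Avatar's eye at the origin looking straight ahead (+z); three targets 1 m away.
targets = [(-0.2, 0.0, 1.0), (0.0, 0.0, 1.0), (0.2, 0.0, 1.0)]
idx, errs = chosen_target((0, 0, 1), (0, 0, 0), targets)
```

With the avatar gazing straight ahead, the middle target wins with zero angular error, while the flanking targets sit roughly 11° off the gaze ray; systematic deviations of the kind the study reports would show up as a consistent bias in these error values.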

  3. Face age modulates gaze following in young adults

    OpenAIRE

    Francesca Ciardo; Barbara F. M. Marino; Rossana Actis-Grosso; Angela Rossetti; Paola Ricciardelli

    2014-01-01

    Gaze-following behaviour is considered crucial for social interactions which are influenced by social similarity. We investigated whether the degree of similarity, as indicated by the perceived age of another person, can modulate gaze following. Participants of three different age-groups (18–25; 35–45; over 65) performed an eye movement (a saccade) towards an instructed target while ignoring the gaze-shift of distracters of different age-ranges (6–10; 18–25; 35–45; over 70). The results show ...

  4. Training for eye contact modulates gaze following in dogs.

    Science.gov (United States)

    Wallis, Lisa J; Range, Friederike; Müller, Corsin A; Serisier, Samuel; Huber, Ludwig; Virányi, Zsófia

    2015-08-01

    Following human gaze in dogs and human infants can be considered a socially facilitated orientation response, which in object choice tasks is modulated by human-given ostensive cues. Despite their similarities to human infants, and extensive skills in reading human cues in foraging contexts, no evidence that dogs follow gaze into distant space has been found. We re-examined this question, and additionally whether dogs' propensity to follow gaze was affected by age and/or training to pay attention to humans. We tested a cross-sectional sample of 145 border collies aged 6 months to 14 years with different amounts of training over their lives. The dogs' gaze-following response in test and control conditions before and after training for initiating eye contact with the experimenter was compared with that of a second group of 13 border collies trained to touch a ball with their paw. Our results provide the first evidence that dogs can follow human gaze into distant space. Although we found no age effect on gaze following, the youngest and oldest age groups were more distractible, which resulted in a higher number of looks in the test and control conditions. Extensive lifelong formal training as well as short-term training for eye contact decreased dogs' tendency to follow gaze and increased their duration of gaze to the face. The reduction in gaze following after training for eye contact cannot be explained by fatigue or short-term habituation, as in the second group gaze following increased after a different training of the same length. Training for eye contact created a competing tendency to fixate the face, which prevented the dogs from following the directional cues. We conclude that following human gaze into distant space in dogs is modulated by training, which may explain why dogs perform poorly in comparison to other species in this task.

  5. Face age modulates gaze following in young adults.

    Science.gov (United States)

    Ciardo, Francesca; Marino, Barbara F M; Actis-Grosso, Rossana; Rossetti, Angela; Ricciardelli, Paola

    2014-04-22

Gaze-following behaviour is considered crucial for social interactions, which are influenced by social similarity. We investigated whether the degree of similarity, as indicated by the perceived age of another person, can modulate gaze following. Participants of three different age-groups (18-25; 35-45; over 65) performed an eye movement (a saccade) towards an instructed target while ignoring the gaze-shift of distracters of different age-ranges (6-10; 18-25; 35-45; over 70). The results show that gaze following was modulated by the distracter face age only for young adults. In particular, the over-70 distracters exerted the least interference effect. The distracters of a similar age-range as the young adults (18-25; 35-45) had the most effect, indicating a blurred own-age bias (OAB) only for the young age group. These findings suggest that face age can modulate gaze following, but this modulation could be due to factors other than just OAB (e.g., familiarity).

  6. Attention to the Mouth and Gaze Following in Infancy Predict Language Development

    Science.gov (United States)

Tenenbaum, Elena J.; Sobel, David M.; Sheinkopf, Stephen J.; Malle, Bertram F.; Morgan, James L.

    2015-01-01

    We investigated longitudinal relations among gaze following and face scanning in infancy and later language development. At 12 months, infants watched videos of a woman describing an object while their passive viewing was measured with an eye-tracker. We examined the relation between infants' face scanning behavior and their tendency to follow the…

  7. Facial Expressions Modulate the Ontogenetic Trajectory of Gaze-Following among Monkeys

    Science.gov (United States)

    Teufel, Christoph; Gutmann, Anke; Pirow, Ralph; Fischer, Julia

    2010-01-01

    Gaze-following, the tendency to direct one's attention to locations looked at by others, is a crucial aspect of social cognition in human and nonhuman primates. Whereas the development of gaze-following has been intensely studied in human infants, its early ontogeny in nonhuman primates has received little attention. Combining longitudinal and…

  8. Is gaze following purely reflexive or goal-directed instead? Revisiting the automaticity of orienting attention by gaze cues.

    Science.gov (United States)

    Ricciardelli, Paola; Carcagno, Samuele; Vallar, Giuseppe; Bricolo, Emanuela

    2013-01-01

Distracting gaze has been shown to elicit automatic gaze following. However, it is still debated whether the effects of perceived gaze are a simple automatic spatial orienting response or are instead sensitive to the context (i.e., goals and task demands). In three experiments, we investigated the conditions under which gaze following occurs. Participants were instructed to saccade towards one of two lateral targets. A face distracter, always present in the background, could gaze towards: (a) a task-relevant target ("matching" goal-directed gaze shift), congruent or incongruent with the instructed direction; (b) a task-irrelevant target, orthogonal to the one instructed ("non-matching" goal-directed gaze shift); or (c) an empty spatial location (no-goal-directed gaze shift). Eye movement recordings showed faster saccadic latencies in correct trials in congruent conditions, especially when the distracting gaze shift occurred before the instruction to make a saccade. Interestingly, while participants made a higher proportion of gaze-following errors (i.e., errors in the direction of the distracting gaze) in the incongruent conditions when the distracter's gaze shift preceded the instruction onset, indicating automatic gaze following, they never followed the distracting gaze when it was directed towards an empty location or a stimulus that was never the target. Taken together, these findings suggest that gaze following is likely to be a product of both automatic and goal-driven orienting mechanisms.

  9. Embodied social robots trigger gaze following in real-time

    OpenAIRE

    Wiese, Eva; Weis, Patrick; Lofaro, Daniel

    2018-01-01

    In human-human interaction, we use information from gestures, facial expressions and gaze direction to make inferences about what interaction partners think, feel or intend to do next. Observing changes in gaze direction triggers shifts of attention to gazed-at locations and helps establish shared attention between gazer and observer - a prerequisite for more complex social skills like mentalizing, action understanding and joint action. The ability to follow others’ gaze develops early in lif...

  10. Age differences in conscious versus subconscious social perception: The influence of face age and valence on gaze following.

    OpenAIRE

    Bailey, P.E.; Slessor, G.; Rendell, P.G.; Bennetts, Rachel; Campbell, A.; Ruffman, T.

    2014-01-01

    Gaze following is the primary means of establishing joint attention with others and is subject to age-related decline. In addition, young but not older adults experience an own-age bias in gaze following. The current research assessed the effects of subconscious processing on these age-related differences. Participants responded to targets that were either congruent or incongruent with the direction of gaze displayed in supraliminal and subliminal images of young and older faces. These faces ...

  11. The Interplay between Gaze Following, Emotion Recognition, and Empathy across Adolescence; a Pubertal Dip in Performance?

    Directory of Open Access Journals (Sweden)

    Rianne van Rooijen

    2018-02-01

During puberty a dip in face recognition is often observed, possibly caused by heightened levels of gonadal hormones, which in turn affect the reorganization of relevant cortical circuitry. In the current study we investigated whether a pubertal dip could be observed in three other abilities related to social information processing: gaze following, emotion recognition from the eyes, and empathizing abilities. Across these abilities we further explored whether the measurements revealed sex differences, as another way to understand how gonadal hormones affect the processing of social information. Results show that across adolescence there are improvements in emotion recognition from the eyes and in empathizing abilities. These improvements did not show a dip but rather a plateau. The gaze cueing effect did not change over adolescence. We only observed sex differences in empathizing abilities, with girls showing higher scores than boys. Based on these results it appears that gonadal hormones do not exert a unified influence on higher levels of social information processing. Further research should also explore changes in (visual) information processing around puberty onset to find a better-fitting explanation for changes in social behavior across adolescence.

  12. Age differences in conscious versus subconscious social perception: the influence of face age and valence on gaze following.

    Science.gov (United States)

    Bailey, Phoebe E; Slessor, Gillian; Rendell, Peter G; Bennetts, Rachel J; Campbell, Anna; Ruffman, Ted

    2014-09-01

    Gaze following is the primary means of establishing joint attention with others and is subject to age-related decline. In addition, young but not older adults experience an own-age bias in gaze following. The current research assessed the effects of subconscious processing on these age-related differences. Participants responded to targets that were either congruent or incongruent with the direction of gaze displayed in supraliminal and subliminal images of young and older faces. These faces displayed either neutral (Study 1) or happy and fearful (Study 2) expressions. In Studies 1 and 2, both age groups demonstrated gaze-directed attention by responding faster to targets that were congruent as opposed to incongruent with gaze-cues. In Study 1, subliminal stimuli did not attenuate the age-related decline in gaze-cuing, but did result in an own-age bias among older participants. In Study 2, gaze-cuing was reduced for older relative to young adults in response to supraliminal stimuli, and this could not be attributed to reduced visual acuity or age group differences in the perceived emotional intensity of the gaze-cue faces. Moreover, there were no age differences in gaze-cuing when responding to subliminal faces that were emotionally arousing. In addition, older adults demonstrated an own-age bias for both conscious and subconscious gaze-cuing when faces expressed happiness but not fear. We discuss growing evidence for age-related preservation of subconscious relative to conscious social perception, as well as an interaction between face age and valence in social perception. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  13. Acute oxytocin improves memory and gaze following in male but not female nursery-reared infant macaques.

    Science.gov (United States)

    Simpson, Elizabeth A; Paukner, Annika; Sclafani, Valentina; Kaburu, Stefano S K; Suomi, Stephen J; Ferrari, Pier F

    2017-02-01

Exogenous oxytocin administration is widely reported to improve social cognition in human and nonhuman primate adults. Risk factors of impaired social cognition, however, emerge in infancy. Early interventions, when plasticity is greatest, are critical to reverse negative outcomes. We tested the hypothesis that oxytocin may exert similar positive effects on infant social cognition, as in adults. To test this idea, we assessed the effectiveness of acute, aerosolized oxytocin on two foundational social cognitive skills: working memory (i.e., ability to briefly hold and process information) and social gaze (i.e., tracking the direction of others' gaze) in 1-month-old nursery-reared macaque monkeys (Macaca mulatta). We did not predict sex differences, but we included sex as a factor in our analyses to test whether our effects would be generalizable across both males and females. In a double-blind, placebo-controlled design, we found that females were more socially skilled at baseline compared to males, and that oxytocin improved working memory and gaze following, but only in males. These sex differences, while unexpected, may be due to interactions with gonadal steroids and may be relevant to sexually dimorphic disorders of social cognition, such as male-biased autism spectrum disorder, for which oxytocin has been proposed as a potential treatment. In sum, we report the first evidence that oxytocin may influence primate infant cognitive abilities. Moreover, these behavioral effects appear sexually dimorphic, highlighting the importance of considering sex differences. Oxytocin effects observed in one sex may not be generalizable to the other sex.

  14. Gaze-Following and Reaction to an Aversive Social Interaction Have Corresponding Associations with Variation in the OXTR Gene in Dogs but Not in Human Infants.

    Science.gov (United States)

    Oláh, Katalin; Topál, József; Kovács, Krisztina; Kis, Anna; Koller, Dóra; Young Park, Soon; Virányi, Zsófia

    2017-01-01

It has been suggested that dogs' remarkable capacity to use human communicative signals lies in their comparable social cognitive skills; however, this view has been questioned recently. The present study investigated associations between oxytocin receptor gene (OXTR) polymorphisms and social behavior in human infants and dogs with the aim to unravel potentially differential mechanisms behind their responsiveness to human gaze. Sixteen-month-old human infants (N = 99) and adult Border Collie dogs (N = 71) participated in two tasks designed to test (1) their use of gaze-direction as a cue to locate a hidden object, and (2) their reactions to an aversive social interaction (using the still face task for children and a threatening approach task for dogs). Moreover, we obtained DNA samples to analyze associations between single nucleotide polymorphisms (SNP) in the OXTR (dogs: -213AG, -94TC, -74CG, rs8679682; children: rs53576, rs1042778, rs2254298) and behavior. We found that OXTR genotype was significantly associated with reactions to an aversive social interaction both in dogs and children, confirming the anxiolytic effect of oxytocin in both species. In dogs, the genotypes linked to less fearful behavior were associated also with a higher willingness to follow gaze, whereas in children, OXTR gene polymorphisms did not affect gaze following success. This pattern of gene-behavior associations suggests that for dogs the two situations are more alike (potentially fear-inducing or competitive) than for human children. This raises the possibility that, in contrast to former studies proposing human-like cooperativeness in dogs, dogs may perceive human gaze in an object-choice task in a more antagonistic manner than children.

  15. Gaze-Following and Reaction to an Aversive Social Interaction Have Corresponding Associations with Variation in the OXTR Gene in Dogs but Not in Human Infants

    Directory of Open Access Journals (Sweden)

    Katalin Oláh

    2017-12-01

It has been suggested that dogs' remarkable capacity to use human communicative signals lies in their comparable social cognitive skills; however, this view has been questioned recently. The present study investigated associations between oxytocin receptor gene (OXTR) polymorphisms and social behavior in human infants and dogs with the aim to unravel potentially differential mechanisms behind their responsiveness to human gaze. Sixteen-month-old human infants (N = 99) and adult Border Collie dogs (N = 71) participated in two tasks designed to test (1) their use of gaze-direction as a cue to locate a hidden object, and (2) their reactions to an aversive social interaction (using the still face task for children and a threatening approach task for dogs). Moreover, we obtained DNA samples to analyze associations between single nucleotide polymorphisms (SNP) in the OXTR (dogs: −213AG, −94TC, −74CG, rs8679682; children: rs53576, rs1042778, rs2254298) and behavior. We found that OXTR genotype was significantly associated with reactions to an aversive social interaction both in dogs and children, confirming the anxiolytic effect of oxytocin in both species. In dogs, the genotypes linked to less fearful behavior were associated also with a higher willingness to follow gaze, whereas in children, OXTR gene polymorphisms did not affect gaze following success. This pattern of gene-behavior associations suggests that for dogs the two situations are more alike (potentially fear-inducing or competitive) than for human children. This raises the possibility that, in contrast to former studies proposing human-like cooperativeness in dogs, dogs may perceive human gaze in an object-choice task in a more antagonistic manner than children.

  16. Effects of Observing Eye Contact on Gaze Following in High-Functioning Autism

    NARCIS (Netherlands)

    Böckler, A.; Timmermans, B.; Sebanz, N.; Vogeley, K.; Schilbach, L.

    2014-01-01

Observing eye contact between others enhances the tendency to subsequently follow their gaze and has been suggested to function as a social signal that adds meaning to an upcoming action or event. The present study investigated effects of observed eye contact in high-functioning autism (HFA). Two…

  17. Gaze shifts and fixations dominate gaze behavior of walking cats

    Science.gov (United States)

    Rivers, Trevor J.; Sirota, Mikhail G.; Guttentag, Andrew I.; Ogorodnikov, Dmitri A.; Shah, Neet A.; Beloozerova, Irina N.

    2014-01-01

Vision is important for locomotion in complex environments. How it is used to guide stepping is not well understood. We used an eye search coil technique combined with an active marker-based head recording system to characterize the gaze patterns of cats walking over terrains of different complexity: (1) on a flat surface in the dark when no visual information was available, (2) on the flat surface in light when visual information was available but not required, (3) along the highly structured but regular and familiar surface of a horizontal ladder, a task for which visual guidance of stepping was required, and (4) along a pathway cluttered with many small stones, an irregularly structured surface that was new each day. Three cats walked in a 2.5 m corridor, and 958 passages were analyzed. Gaze activity during the time when the gaze was directed at the walking surface was subdivided into four behaviors based on speed of gaze movement along the surface: gaze shift (fast movement), gaze fixation (no movement), constant gaze (movement at the body’s speed), and slow gaze (the remainder). We found that gaze shifts and fixations dominated the cats’ gaze behavior during all locomotor tasks, jointly occupying 62–84% of the time when the gaze was directed at the surface. As visual complexity of the surface and demand on visual guidance of stepping increased, cats spent more time looking at the surface, looked closer to them, and switched between gaze behaviors more often. During both visually guided locomotor tasks, gaze behaviors predominantly followed a repeated cycle of forward gaze shift followed by fixation. We call this behavior “gaze stepping”. Each gaze shift took gaze to a site approximately 75–80 cm in front of the cat, which the cat reached in 0.7–1.2 s and 1.1–1.6 strides. Constant gaze occupied only 5–21% of the time cats spent looking at the walking surface. PMID:24973656
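The four gaze behaviors above are distinguished purely by how fast the gaze point moves along the walking surface relative to the body. A minimal sketch of such a classifier (the threshold values are illustrative assumptions, not values from the study):

```python
def classify_gaze(gaze_speed, body_speed, fast=100.0, still=5.0, tol=0.15):
    """Classify one gaze sample by the speed of the gaze point along the surface.

    gaze_speed and body_speed are in cm/s; fast/still/tol are hypothetical
    thresholds chosen only to illustrate the four-way scheme.
    """
    if gaze_speed >= fast:
        return "gaze shift"        # fast movement along the surface
    if gaze_speed <= still:
        return "gaze fixation"     # gaze point effectively stationary
    if abs(gaze_speed - body_speed) <= tol * body_speed:
        return "constant gaze"     # gaze point travels at the body's speed
    return "slow gaze"             # the remainder
```

Applied sample-by-sample to a walking bout, such a rule yields the per-behavior time budgets (e.g., the 62–84% share of shifts plus fixations) that the abstract reports.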

  18. Follow My Eyes: The Gaze of Politicians Reflexively Captures the Gaze of Ingroup Voters

    Science.gov (United States)

    Liuzza, Marco Tullio; Cazzato, Valentina; Vecchione, Michele; Crostella, Filippo; Caprara, Gian Vittorio; Aglioti, Salvatore Maria

    2011-01-01

    Studies in human and non-human primates indicate that basic socio-cognitive operations are inherently linked to the power of gaze in capturing reflexively the attention of an observer. Although monkey studies indicate that the automatic tendency to follow the gaze of a conspecific is modulated by the leader-follower social status, evidence for such effects in humans is meager. Here, we used a gaze following paradigm where the directional gaze of right- or left-wing Italian political characters could influence the oculomotor behavior of ingroup or outgroup voters. We show that the gaze of Berlusconi, the right-wing leader currently dominating the Italian political landscape, potentiates and inhibits gaze following behavior in ingroup and outgroup voters, respectively. Importantly, the higher the perceived similarity in personality traits between voters and Berlusconi, the stronger the gaze interference effect. Thus, higher-order social variables such as political leadership and affiliation prepotently affect reflexive shifts of attention. PMID:21957479

  19. New perspectives in gaze sensitivity research.

    Science.gov (United States)

    Davidson, Gabrielle L; Clayton, Nicola S

    2016-03-01

    Attending to where others are looking is thought to be of great adaptive benefit for animals when avoiding predators and interacting with group members. Many animals have been reported to respond to the gaze of others, by co-orienting their gaze with group members (gaze following) and/or responding fearfully to the gaze of predators or competitors (i.e., gaze aversion). Much of the literature has focused on the cognitive underpinnings of gaze sensitivity, namely whether animals have an understanding of the attention and visual perspectives in others. Yet there remain several unanswered questions regarding how animals learn to follow or avoid gaze and how experience may influence their behavioral responses. Many studies on the ontogeny of gaze sensitivity have shed light on how and when gaze abilities emerge and change across development, indicating the necessity to explore gaze sensitivity when animals are exposed to additional information from their environment as adults. Gaze aversion may be dependent upon experience and proximity to different predator types, other cues of predation risk, and the salience of gaze cues. Gaze following in the context of information transfer within social groups may also be dependent upon experience with group-members; therefore we propose novel means to explore the degree to which animals respond to gaze in a flexible manner, namely by inhibiting or enhancing gaze following responses. We hope this review will stimulate gaze sensitivity research to expand beyond the narrow scope of investigating underlying cognitive mechanisms, and to explore how gaze cues may function to communicate information other than attention.

  20. Wolves (Canis lupus) and Dogs (Canis familiaris) Differ in Following Human Gaze Into Distant Space But Respond Similar to Their Packmates’ Gaze

    Science.gov (United States)

    Werhahn, Geraldine; Virányi, Zsófia; Barrera, Gabriela; Sommese, Andrea; Range, Friederike

    2017-01-01

    Gaze following into distant space is defined as visual co-orientation with another individual’s head direction allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills might have been altered through domestication that may have influenced their performance in Study 1. Because following human gaze in dogs might be influenced by special evolutionary as well as developmental adaptations to interactions with humans, we suggest that comparing dogs to other animal species might be more informative when done in intraspecific social contexts. PMID:27244538

  1. Wolves (Canis lupus) and dogs (Canis familiaris) differ in following human gaze into distant space but respond similar to their packmates' gaze.

    Science.gov (United States)

    Werhahn, Geraldine; Virányi, Zsófia; Barrera, Gabriela; Sommese, Andrea; Range, Friederike

    2016-08-01

    Gaze following into distant space is defined as visual co-orientation with another individual's head direction allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills might have been altered through domestication that may have influenced their performance in Study 1. Because following human gaze in dogs might be influenced by special evolutionary as well as developmental adaptations to interactions with humans, we suggest that comparing dogs to other animal species might be more informative when done in intraspecific social contexts. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. Owners' direct gazes increase dogs' attention-getting behaviors.

    Science.gov (United States)

Ohkita, Midori; Nagasawa, Miho; Mogi, Kazutaka; Kikusui, Takefumi

    2016-04-01

This study examined whether dogs gain information about humans' attention via their gazes and whether they change their attention-getting behaviors (i.e., whining and whimpering, looking at their owners' faces, pawing, and approaching their owners) in response to their owners' direct gazes. The results showed that when the owners gazed at their dogs, the durations of whining and whimpering and looking at the owners' faces were longer than when the owners averted their gazes. In contrast, there were no differences in duration of pawing and likelihood of approaching the owners between the direct and averted gaze conditions. Therefore, owners' direct gazes increased the behaviors that acted as distant signals and did not necessarily involve touching the owners. We suggest that dogs are sensitive to human gazes, that this sensitivity may underlie attachment signaling to humans, and that it may contribute to close relationships between humans and dogs. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. "Wolves (Canis lupus) and dogs (Canis familiaris) differ in following human gaze into distant space but respond similar to their packmates' gaze": Correction to Werhahn et al. (2016).

    Science.gov (United States)

    2017-02-01

Reports an error in "Wolves (Canis lupus) and dogs (Canis familiaris) differ in following human gaze into distant space but respond similar to their packmates' gaze" by Geraldine Werhahn, Zsófia Virányi, Gabriela Barrera, Andrea Sommese and Friederike Range (Journal of Comparative Psychology, 2016[Aug], Vol 130[3], 288-298). In the article, the affiliations for the second and fifth authors should be Wolf Science Center, Ernstbrunn, Austria, and Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna/Medical University of Vienna/University of Vienna. The online version of this article has been corrected. (The following abstract of the original article appeared in record 2016-26311-001.) Gaze following into distant space is defined as visual co-orientation with another individual's head direction allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills…

  4. Gazes

    DEFF Research Database (Denmark)

    Khawaja, Iram

    , and the different strategies of positioning they utilize are studied and identified. The first strategy is to confront stereotyping prejudices and gazes, thereby attempting to position oneself in a counteracting way. The second is to transform and try to normalise external characteristics, such as clothing...... and other symbols that indicate Muslimness. A third strategy is to play along and allow the prejudice in question to remain unchallenged. A fourth is to join and participate in religious communities and develop an alternate sense of belonging to a wider community of Muslims. The concept of panoptical gazes...

  5. Utilizing Gaze Behavior for Inferring Task Transitions Using Abstract Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Daniel Fernando Tello Gamarra

    2016-12-01

    We demonstrate an improved method for utilizing observed gaze behavior and show that it is useful in inferring hand movement intent during goal-directed tasks. The task dynamics and the relationship between hand and gaze behavior are learned using an Abstract Hidden Markov Model (AHMM). We show that the predicted hand movement transitions occur consistently earlier in AHMM models that include gaze than in models that do not include gaze observations.
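
    The record does not describe the AHMM's structure. To illustrate the underlying idea of inferring a hidden task phase from gaze observations, here is a minimal flat-HMM forward-filtering sketch; the two states, two gaze observations, and all probabilities below are hypothetical, not the authors' model.

```python
# Hypothetical 2-state task model ("reach" = 0, "place" = 1) with discrete
# gaze observations (0 = gaze on the object, 1 = gaze on the target location).
A = [[0.9, 0.1],   # P(next state | current state)
     [0.2, 0.8]]
B = [[0.8, 0.2],   # P(gaze observation | state)
     [0.3, 0.7]]
PI = [0.5, 0.5]    # initial state distribution

def forward_filter(obs):
    """Return P(state_t | obs_1..t) for each t (normalized forward pass)."""
    alpha = [PI[s] * B[s][obs[0]] for s in range(2)]
    norm = sum(alpha)
    beliefs = [[a / norm for a in alpha]]
    for o in obs[1:]:
        alpha = [sum(beliefs[-1][p] * A[p][s] for p in range(2)) * B[s][o]
                 for s in range(2)]
        norm = sum(alpha)
        beliefs.append([a / norm for a in alpha])
    return beliefs

# Gaze typically shifts to the target before the hand moves, so the filtered
# belief switches to the "place" state early in the observation stream.
beliefs = forward_filter([0, 0, 1, 1, 1])
print([round(b[1], 3) for b in beliefs])
```

    The study's point maps onto this sketch: adding gaze observations to the model lets the belief over the task state flip before the hand movement itself is observed.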

  6. Gender and facial dominance in gaze cuing: Emotional context matters in the eyes that we follow

    NARCIS (Netherlands)

    Ohlsen, G.; van Zoest, W.; van Vugt, M.

    2013-01-01

    Gaze following is a socio-cognitive process that provides adaptive information about potential threats and opportunities in the individual's environment. The aim of the present study was to investigate the potential interaction between emotional context and facial dominance in gaze following. We

  7. "Gaze Leading": Initiating Simulated Joint Attention Influences Eye Movements and Choice Behavior

    Science.gov (United States)

    Bayliss, Andrew P.; Murphy, Emily; Naughtin, Claire K.; Kritikos, Ada; Schilbach, Leonhard; Becker, Stefanie I.

    2013-01-01

    Recent research in adults has made great use of the gaze cuing paradigm to understand the behavior of the follower in joint attention episodes. We implemented a gaze leading task to investigate the initiator--the other person in these triadic interactions. In a series of gaze-contingent eye-tracking studies, we show that fixation dwell time upon…

  8. There is more to gaze than meets the eye: How animals perceive the visual behaviour of others

    NARCIS (Netherlands)

    Goossens, B.M.A.

    2008-01-01

    Gaze following and the ability to understand that another individual sees something different from oneself are considered important components of human and animal social cognition. In animals, gaze following has been documented in various species, however, the underlying cognitive mechanisms and the

  9. A comparison of facial color pattern and gazing behavior in canid species suggests gaze communication in gray wolves (Canis lupus).

    Directory of Open Access Journals (Sweden)

    Sayoko Ueda

    As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication.

  10. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability

    Science.gov (United States)

    Willemse, Cesco; Marchesi, Serena; Wykowska, Agnieszka

    2018-01-01

    Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive–affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot’s characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition) and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human–human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot-preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub that followed participants’ gaze to the one with disjoint attention behavior, rating it as more human-like and more likeable. Taken together, our findings show a preference for robots that follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints about the design of behavioral characteristics of humanoid robots in more naturalistic settings. PMID:29459842

  13. Gaze Behavior of Children with ASD toward Pictures of Facial Expressions.

    Science.gov (United States)

    Matsuda, Soichiro; Minagawa, Yasuyo; Yamamoto, Junichi

    2015-01-01

    Atypical gaze behavior in response to a face has been well documented in individuals with autism spectrum disorders (ASDs). Children with ASD appear to differ from typically developing (TD) children in gaze behavior for spoken and dynamic face stimuli but not for nonspeaking, static face stimuli. Furthermore, children with ASD and TD children show a difference in their gaze behavior for certain expressions. However, few studies have examined the relationship between autism severity and gaze behavior toward certain facial expressions. The present study replicated and extended previous studies by examining gaze behavior towards pictures of facial expressions. We presented ASD and TD children with pictures of surprised, happy, neutral, angry, and sad facial expressions. Autism severity was assessed using the Childhood Autism Rating Scale (CARS). The results showed that there was no group difference in gaze behavior when looking at pictures of facial expressions. Conversely, the children with ASD who had more severe autistic symptomatology had a tendency to gaze at angry facial expressions for a shorter duration in comparison to other facial expressions. These findings suggest that autism severity should be considered when examining atypical responses to certain facial expressions.

  14. Gaze Behavior, Believability, Likability and the iCat

    NARCIS (Netherlands)

    Poel, Mannes; Heylen, Dirk K.J.; Meulemans, M.; Nijholt, Antinus; Stock, O.; Nishida, T.

    2007-01-01

    The iCat is a user-interface robot with the ability to express a range of emotions through its facial features. This paper summarizes our research on whether we can increase the believability and likability of the iCat for its human partners through the application of gaze behaviour. Gaze behaviour

  15. Gaze Behavior, Believability, Likability and the iCat

    NARCIS (Netherlands)

    Nijholt, Antinus; Poel, Mannes; Heylen, Dirk K.J.; Stock, O.; Nishida, T.; Meulemans, M.; van Bremen, A.

    2009-01-01

    The iCat is a user-interface robot with the ability to express a range of emotions through its facial features. This paper summarizes our research on whether we can increase the believability and likability of the iCat for its human partners through the application of gaze behaviour. Gaze behaviour

  16. You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human–Robot Interaction

    OpenAIRE

    Abubshait, Abdulaziz; Wiese, Eva

    2017-01-01

    Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents, but mind can also be ascribed to non-human agents lik...

  17. The Interplay between Gaze Following, Emotion Recognition, and Empathy across Adolescence; a Pubertal Dip in Performance?

    NARCIS (Netherlands)

    van Rooijen, R.; Junge, C.M.M.; Kemner, C.

    2018-01-01

    During puberty a dip in face recognition is often observed, possibly caused by heightened levels of gonadal hormones which in turn affects the re-organization of relevant cortical circuitry. In the current study we investigated whether a pubertal dip could be observed in three other abilities

  18. Infant Gaze Following during Parent-Infant Coviewing of Baby Videos

    Science.gov (United States)

    Demers, Lindsay B.; Hanson, Katherine G.; Kirkorian, Heather L.; Pempek, Tiffany A.; Anderson, Daniel R.

    2013-01-01

    A total of 122 parent–infant dyads were observed as they watched a familiar or novel infant-directed video in a laboratory setting. Infants were between 12-15 and 18-21 months old. Infants were more likely to look toward the TV immediately following their parents' look toward the TV. This apparent social influence on infant looking at television…

  19. Impact of cognitive and linguistic ability on gaze behavior in children with hearing impairment

    Directory of Open Access Journals (Sweden)

    Olof Sandgren

    2013-11-01

    In order to explore verbal-nonverbal integration, we investigated the influence of cognitive and linguistic ability on gaze behavior during spoken language conversation between children with mild-to-moderate hearing impairment (HI) and normal-hearing (NH) peers. Ten HI-NH and ten NH-NH dyads performed a referential communication task requiring description of faces. During task performance, eye movements and speech were tracked. Cox proportional hazards regression was used to model associations between performance on cognitive and linguistic tasks and the probability of gaze to the conversational partner’s face. Analyses compare the listeners in each dyad (HI: n = 10, mean age = 12;6 years, SD = 2;0, mean better ear pure-tone average 33.0 dB HL, SD = 7.8; NH: n = 10, mean age = 13;7 years, SD = 1;11). Group differences in gaze behavior – with HI gazing more to the conversational partner than NH – remained significant despite adjustment for ability on receptive grammar, expressive vocabulary, and complex working memory. Adjustment for phonological short term memory, as measured by nonword repetition, removed group differences, revealing an interaction between group membership and nonword repetition ability. Stratified analysis showed a twofold increase of the probability of gaze-to-partner for HI with low phonological short term memory capacity, and a decreased probability for HI with high capacity, as compared to NH peers. The results revealed differences in gaze behavior attributable to performance on a phonological short term memory task. Participants with hearing impairment and low phonological short term memory capacity showed a doubled probability of gaze to the conversational partner, indicative of a visual bias. The results stress the need to look beyond the hearing impairment in diagnostics and intervention. Acknowledgment of the finding requires clinical assessment of children with hearing impairment to be supported by tasks tapping

  20. Modeling eye gaze patterns in clinician-patient interaction with lag sequential analysis.

    Science.gov (United States)

    Montague, Enid; Xu, Jie; Chen, Ping-Yu; Asan, Onur; Barrett, Bruce P; Chewning, Betty

    2011-10-01

    The aim of this study was to examine whether lag sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multiuser health care settings in which trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Nonverbal communication patterns are important aspects of clinician-patient interactions and may affect patient outcomes. The eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag sequential method to identify significant behavior sequences. Lag sequential analysis included both event-based lag and time-based lag. Results from event-based lag analysis showed that the patient's gaze followed that of the clinician, whereas the clinician's gaze did not follow the patient's. Time-based sequential analysis showed that responses from the patient usually occurred within 2 s after the initial behavior of the clinician. Our data suggest that the clinician's gaze significantly affects the medical encounter but that the converse is not true. Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs.
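
    The record does not show the computation itself. A common event-based formulation counts lag-1 transitions between coded behaviors and compares the observed count against the target behavior's base rate; the sketch below uses a simple binomial z approximation (a simplification of the Allison-Liker statistic used in lag sequential analysis), and the coded event stream is hypothetical.

```python
from math import sqrt

# Hypothetical coded gaze-event stream from one encounter:
# "C>P" = clinician gazes at patient, "P>C" = patient gazes at clinician,
# "C>chart" = clinician gazes at the chart.
events = ["C>P", "P>C", "C>chart", "P>C", "C>P", "P>C", "C>P", "P>C"]

def lag1_zscore(events, given, target):
    """Does `target` follow `given` at lag 1 more often than expected from
    the base rate of `target`? Simple binomial z approximation."""
    pairs = list(zip(events, events[1:]))        # all lag-1 transitions
    n = len(pairs)
    n_given = sum(1 for a, _ in pairs if a == given)
    n_joint = sum(1 for a, b in pairs if a == given and b == target)
    p_target = sum(1 for _, b in pairs if b == target) / n
    expected = n_given * p_target                # count expected by chance
    return (n_joint - expected) / sqrt(n_given * p_target * (1 - p_target))

# A positive z-score indicates the patient's gaze tends to follow the
# clinician's more often than chance would predict.
print(round(lag1_zscore(events, "C>P", "P>C"), 2))
```

    Applied to real coded sequences, an asymmetry like the one reported (patient follows clinician, but not the converse) would show up as a significant z in one direction only.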

  1. The Oxytocin Receptor Gene (OXTR) and gazing behavior during social interaction: An observational study in young adults

    NARCIS (Netherlands)

    Verhagen, M.; Engels, R.C.M.E.; Roekel, G.H. van

    2014-01-01

    Background: In the present study, the relation between a polymorphic marker within the OXTR gene (rs53576) and gazing behavior during two separate social interaction tasks was examined. Gazing behavior was considered to be an integral part of belonging regulation processes. Methods: We conducted an

  2. Intranasal Oxytocin Treatment Increases Eye-Gaze Behavior toward the Owner in Ancient Japanese Dog Breeds

    Directory of Open Access Journals (Sweden)

    Miho Nagasawa

    2017-09-01

    Dogs acquired unique cognitive abilities during domestication, which is thought to have contributed to the formation of the human-dog bond. In European breeds, but not in wolves, a dog’s gazing behavior plays an important role in affiliative interactions with humans and stimulates oxytocin secretion in both humans and dogs, which suggests that this interspecies oxytocin- and gaze-mediated bonding was also acquired during domestication. In this study, we investigated whether Japanese breeds, which are classified as ancient breeds and are relatively close to wolves genetically, establish a bond with their owners through gazing behavior. The subject dogs were treated with either oxytocin or saline before the start of behavioral testing. We also evaluated physiological changes in the owners during mutual gazing by analyzing their heart rate variability (HRV) and subsequent urinary oxytocin levels in both dogs and their owners. We found that oxytocin treatment enhanced the gazing behavior of Japanese dogs and increased their owners’ urinary oxytocin levels, as was seen with European breeds; however, the measured durations of skin contact and proximity to their owners were relatively low. In the owners’ HRV readings, inter-beat (R-R) intervals (RRI), the standard deviation of normal-to-normal inter-beat (R-R) intervals (SDNN), and the root mean square of successive heartbeat interval differences (RMSSD) were lower when the dogs were treated with oxytocin compared with saline. Furthermore, the owners of female dogs showed lower SDNN than the owners of male dogs. These results suggest that the owners of female Japanese dogs exhibit more tension during interactions, and apart from gazing behavior, the dogs may show sex differences in their interactions with humans as well. They also suggest that Japanese dogs use eye-gazing as an attachment behavior toward humans similar to European breeds; however, there is a disparity between the dog sexes when
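
    The three HRV measures named in the abstract have standard time-domain definitions. A minimal sketch (the interval series is illustrative, not data from the study):

```python
import math

def hrv_time_domain(rr_ms):
    """Mean RRI, SDNN, and RMSSD from a series of R-R intervals in ms.

    SDNN  = sample standard deviation of all normal-to-normal intervals
    RMSSD = root mean square of successive interval differences
    """
    n = len(rr_ms)
    mean_rri = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean_rri) ** 2 for x in rr_ms) / (n - 1))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mean_rri, sdnn, rmssd

# Illustrative (made-up) interval series in milliseconds.
mean_rri, sdnn, rmssd = hrv_time_domain([812, 790, 805, 830, 798])
print(round(mean_rri, 1), round(sdnn, 1), round(rmssd, 1))
```

    Lower SDNN and RMSSD, as reported for the owners in the oxytocin condition, correspond to reduced beat-to-beat variability, which is commonly read as higher sympathetic arousal or tension.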

  3. Sociability and gazing toward humans in dogs and wolves: Simple behaviors with broad implications.

    Science.gov (United States)

    Bentosela, Mariana; Wynne, C D L; D'Orazio, M; Elgier, A; Udell, M A R

    2016-01-01

    Sociability, defined as the tendency to approach and interact with unfamiliar people, has been found to modulate some communicative responses in domestic dogs, including gaze behavior toward the human face. The objective of this study was to compare sociability and gaze behavior in pet domestic dogs and in human-socialized captive wolves in order to identify the relative influence of domestication and learning in the development of the dog-human bond. In Experiment 1, we assessed the approach behavior and social tendencies of dogs and wolves to a familiar and an unfamiliar person. In Experiment 2, we compared the animal's duration of gaze toward a person's face in the presence of food, which the animals could see but not access. Dogs showed higher levels of interspecific sociability than wolves in all conditions, including those where attention was unavailable. In addition, dogs gazed longer at the person's face than wolves in the presence of out-of-reach food. The potential contributions of domestication, associative learning, and experiences during ontogeny to prosocial behavior toward humans are discussed. © 2016 Society for the Experimental Analysis of Behavior.

  4. The effect of varying talker identity and listening conditions on gaze behavior during audiovisual speech perception.

    Science.gov (United States)

    Buchan, Julie N; Paré, Martin; Munhall, Kevin G

    2008-11-25

    During face-to-face conversation the face provides auditory and visual linguistic information, and also conveys information about the identity of the speaker. This study investigated behavioral strategies involved in gathering visual information while watching talking faces. The effects of varying talker identity and varying the intelligibility of speech (by adding acoustic noise) on gaze behavior were measured with an eyetracker. Varying the intelligibility of the speech by adding noise had a noticeable effect on the location and duration of fixations. When noise was present subjects adopted a vantage point that was more centralized on the face by reducing the frequency of the fixations on the eyes and mouth and lengthening the duration of their gaze fixations on the nose and mouth. Varying talker identity resulted in a more modest change in gaze behavior that was modulated by the intelligibility of the speech. Although subjects generally used similar strategies to extract visual information in both talker variability conditions, when noise was absent there were more fixations on the mouth when viewing a different talker every trial as opposed to the same talker every trial. These findings provide a useful baseline for studies examining gaze behavior during audiovisual speech perception and perception of dynamic faces.

  5. Gaze Behavior of Gymnastics Judges: Where Do Experienced Judges and Gymnasts Look While Judging?

    Science.gov (United States)

    Pizzera, Alexandra; Möller, Carsten; Plessner, Henning

    2018-01-01

    Gymnastics judges and former gymnasts have been shown to be quite accurate in detecting errors and accurately judging performance. Purpose: The purpose of the current study was to examine if this superior judging performance is reflected in judges' gaze behavior. Method: Thirty-five judges were asked to judge 21 gymnasts who performed a skill on…

  6. Gazing behavior reactions of Vietnamese and Austrian consumers to Austrian wafers and their relations to wanting, expected and tasted liking.

    Science.gov (United States)

    Vu, Thi Minh Hang; Tu, Viet Phu; Duerrschmid, Klaus

    2018-05-01

    Predictability of consumers' food choice based on their gazing behavior using eye-tracking has been shown and discussed in recent research. By applying this observational technique and conventional methods to a specific food product, this study aims at investigating consumers' reactions associated with gazing behavior, wanting, building up expectations, and the experience of tasting. The tested food products were wafers from Austria with hazelnut, whole wheat, lemon and vanilla flavors, which are very well known in Austria and not known in Vietnam. 114 Vietnamese and 128 Austrian participants took part in three test sections. The results indicate that: i) the gazing behavior parameters are strongly positively correlated with the wanting-to-try choice; ii) wanting to try is in compliance with the expected liking for the Austrian consumer panel only, which is very familiar with the products; iii) the expected and tasted liking of the products are highly country and product dependent. The expected liking is strongly correlated with the tasted liking for the Austrian panel only. Differences between the reactions of the Vietnamese and Austrian consumers are discussed in detail. The results, which reflect the complex process from gazing for "wanting to try" to the expected and tasted liking, are discussed in the context of cognitive theory and the food choice habits of the consumers. Copyright © 2018. Published by Elsevier Ltd.

  7. Gaze Behavior in a Natural Environment with a Task-Relevant Distractor: How the Presence of a Goalkeeper Distracts the Penalty Taker

    Directory of Open Access Journals (Sweden)

    Johannes Kurz

    2018-01-01

    Gaze behavior in natural scenes has been shown to be influenced not only by top–down factors such as task demands and action goals but also by bottom–up factors such as stimulus salience and scene context. Whereas gaze behavior in the context of static pictures emphasizes spatial accuracy, gazing in natural scenes seems to rely more on deciding where to direct the gaze, involving both anticipative components and an evaluation of ongoing actions. Not much is known about gaze behavior in far-aiming tasks in which multiple task-relevant targets and distractors compete for the allocation of visual attention via gaze. In the present study, we examined gaze behavior in the far-aiming task of taking a soccer penalty. This task contains a proximal target, the ball; a distal target, an empty location within the goal; and a salient distractor, the goalkeeper. Our aim was to investigate where participants direct their gaze in a natural environment with multiple potential fixation targets that differ in task relevance and salience. Results showed that the early phase of the run-up seems to be driven by both the salience of the stimulus setting and the need to perform a spatial calibration of the environment. The late run-up, in contrast, seems to be controlled by the attentional demands of the task, with penalty takers having habitualized a visual routine that is not disrupted by external influences (e.g., the goalkeeper). In addition, when trying to shoot a ball as accurately as possible, penalty takers directed their gaze toward the ball in order to achieve optimal foot-ball contact. These results indicate that whether gaze is driven by the salience of the stimulus setting or by attentional demands depends on the phase of the actual task.

  8. Risk and Ambiguity in Information Seeking: Eye Gaze Patterns Reveal Contextual Behavior in Dealing with Uncertainty.

    Science.gov (United States)

    Wittek, Peter; Liu, Ying-Hsang; Darányi, Sándor; Gedeon, Tom; Lim, Ik Soo

    2016-01-01

    Information foraging connects optimal foraging theory in ecology with how humans search for information. The theory suggests that, following an information scent, the information seeker must optimize the tradeoff between exploration by repeated steps in the search space vs. exploitation, using the resources encountered. We conjecture that this tradeoff characterizes how a user deals with uncertainty and its two aspects, risk and ambiguity in economic theory. Risk is related to the perceived quality of the actually visited patch of information, and can be reduced by exploiting and understanding the patch to a better extent. Ambiguity, on the other hand, is the opportunity cost of having higher quality patches elsewhere in the search space. The aforementioned tradeoff depends on many attributes, including traits of the user: at the two extreme ends of the spectrum, analytic and wholistic searchers employ entirely different strategies. The former type focuses on exploitation first, interspersed with bouts of exploration, whereas the latter type prefers to explore the search space first and consume later. Our findings from an eye-tracking study of experts' interactions with novel search interfaces in the biomedical domain suggest that user traits of cognitive styles and perceived search task difficulty are significantly correlated with eye gaze and search behavior. We also demonstrate that perceived risk shifts the balance between exploration and exploitation in either type of users, tilting it against vs. in favor of ambiguity minimization. Since the pattern of behavior in information foraging is quintessentially sequential, risk and ambiguity minimization cannot happen simultaneously, leading to a fundamental limit on how good such a tradeoff can be. This in turn connects information seeking with the emergent field of quantum decision theory.
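
    The exploration/exploitation tradeoff that information foraging inherits from optimal foraging theory is often illustrated with Charnov's marginal value theorem: leave the current patch when its instantaneous gain rate drops to the best achievable long-run average rate. A toy numerical sketch, assuming a hypothetical diminishing-returns gain curve gain(t) = 1 - exp(-t) for one information patch:

```python
import math

def optimal_leave_time(travel_cost, dt=1e-3, t_max=10.0):
    """Grid-search the patch residence time t that maximizes the long-run
    rate gain(t) / (t + travel_cost), where travel_cost is the time spent
    moving between patches and gain(t) = 1 - exp(-t) models diminishing
    returns from exploiting a single patch."""
    best_t, best_rate = dt, 0.0
    steps = int(t_max / dt)
    for i in range(1, steps + 1):
        t = i * dt
        rate = (1 - math.exp(-t)) / (t + travel_cost)
        if rate > best_rate:
            best_t, best_rate = t, rate
    return best_t

# Costlier travel between patches favors longer exploitation of the
# current patch before exploring elsewhere.
print(optimal_leave_time(1.0), optimal_leave_time(4.0))
```

    The qualitative behavior matches the abstract's framing: when moving on is expensive (or its payoff ambiguous), exploitation of the current patch dominates; when alternatives are cheap to reach, earlier exploration wins.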

  9. Gender and facial dominance in gaze cuing: emotional context matters in the eyes that we follow.

    Directory of Open Access Journals (Sweden)

    Garian Ohlsen

    Gaze following is a socio-cognitive process that provides adaptive information about potential threats and opportunities in the individual's environment. The aim of the present study was to investigate the potential interaction between emotional context and facial dominance in gaze following. We used the gaze cue task to induce attention to or away from the location of a target stimulus. In the experiment, the gaze cue belonged either to a (dominant-looking) male face or a (non-dominant-looking) female face. Critically, prior to the task, individuals were primed with pictures of threat or no threat to induce either a dangerous or a safe environment. Findings revealed that the primed emotional context critically influenced the gaze cuing effect. While a gaze cue from the dominant male face influenced performance in both the threat and no-threat conditions, the gaze cue from the non-dominant female face only influenced performance in the no-threat condition. This research suggests an implicit, context-dependent follower bias, which carries implications for research on visual attention, social cognition, and leadership.

  10. Gazing behavior during mixed-sex interactions: Sex and attractiveness effects

    NARCIS (Netherlands)

    Straaten, I. van; Holland, R.W.; Finkenauer, C.; Hollenstein, T.P.; Engels, R.C.M.E.

    2010-01-01

    We investigated to what extent the length of people's gazes during conversations with opposite-sex persons is affected by the physical attractiveness of the partner. Single participants (N = 115) conversed for 5 min with confederates who were rated either as low or high on physical attractiveness.

  11. What happens when a robot favors someone? How a tour guide robot uses gaze behavior to address multiple persons while storytelling about art

    NARCIS (Netherlands)

    Karreman, Daphne Eleonora; Sépulveda Bradford, Gilberto; van Dijk, Elisabeth M.A.G.; Lohse, M.; Evers, Vanessa

    2013-01-01

    We report intermediate results of an ongoing study into the effectiveness of robot gaze behaviors when addressing multiple persons. The work is being carried out as part of the EU FP7 project FROG and concerns the design and evaluation of interactive behaviors of a tour guide robot. Our objective is

  12. Latvijas gaze

    International Nuclear Information System (INIS)

    1994-04-01

    A collection of photocopies of materials (overheads etc.) used at a seminar organized by the Board of Directors of the company ''Latvijas Gaze'' in cooperation with DONG, the national oil and gas company of Denmark. The materials comprise an analysis of training needs with regard to the marketing of gas technology and consultancy to countries in Europe, especially Latvia. (AB)

  13. Toward understanding social cues and signals in human–robot interaction: effects of robot gaze and proxemic behavior

    Science.gov (United States)

    Fiore, Stephen M.; Wiltshire, Travis J.; Lobato, Emilio J. C.; Jentsch, Florian G.; Huang, Wesley H.; Axelrod, Benjamin

    2013-01-01

    As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human–robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava™ mobile robotics platform in a hallway navigation scenario. Cues associated with the robot’s proxemic behavior were found to significantly affect participant perceptions of the robot’s social presence and emotional state while cues associated with the robot’s gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot’s mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals. PMID:24348434

  14. Toward understanding social cues and signals in human-robot interaction: effects of robot gaze and proxemic behavior.

    Science.gov (United States)

    Fiore, Stephen M; Wiltshire, Travis J; Lobato, Emilio J C; Jentsch, Florian G; Huang, Wesley H; Axelrod, Benjamin

    2013-01-01

    As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava(TM) mobile robotics platform in a hallway navigation scenario. Cues associated with the robot's proxemic behavior were found to significantly affect participant perceptions of the robot's social presence and emotional state while cues associated with the robot's gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot's mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.

  15. Towards understanding social cues and signals in human-robot interaction: Effects of robot gaze and proxemic behavior

    Directory of Open Access Journals (Sweden)

    Stephen M. Fiore

    2013-11-01

    Full Text Available As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava™ Mobile Robotics Platform in a hallway navigation scenario. Cues associated with the robot’s proxemic behavior were found to significantly affect participant perceptions of the robot’s social presence and emotional state while cues associated with the robot’s gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot’s mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.

  16. Accuracy of outcome anticipation, but not gaze behavior, differs against left- and right-handed penalties in team-handball goalkeeping

    Directory of Open Access Journals (Sweden)

    Florian Loffing

    2015-12-01

    Full Text Available Low perceptual familiarity with relatively rarer left-handed as opposed to more common right-handed individuals may result in athletes’ poorer ability to anticipate the former’s action intentions. Part of such left-right asymmetry in visual anticipation could be due to an inefficient gaze strategy during confrontation with left-handed individuals. To exemplify, observers may not mirror their gaze when viewing left- vs. right-handed actions but preferentially fixate on an opponent’s right body side, irrespective of an opponent’s handedness, owing to the predominant exposure to right-handed actions. So far empirical verification of such assumption, however, is lacking. Here we report on an experiment where team-handball goalkeepers’ and non-goalkeepers’ gaze behavior was recorded while they predicted throw direction of left- and right-handed seven-meter penalties shown as videos on a computer monitor. As expected, goalkeepers were considerably more accurate than non-goalkeepers and prediction was better against right- than left-handed penalties. However, there was no indication of differences in gaze measures (i.e., number of fixations, overall and final fixation duration, time-course of horizontal or vertical fixation deviation) as a function of skill group or the penalty-takers’ handedness. Findings suggest that inferior anticipation of left-handed compared to right-handed individuals’ action intentions may not be associated with misalignment in gaze behavior. Rather, albeit looking similarly, accuracy differences could be due to observers’ differential ability of picking up and interpreting the visual information provided by left- vs. right-handed movements.
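The gaze measures named in the abstract (number of fixations, fixation durations) presuppose a fixation-detection step over raw gaze samples. A common approach, not necessarily the one used in this study, is dispersion-threshold identification (I-DT); a minimal sketch with purely illustrative thresholds:

```python
def idt_fixations(samples, max_dispersion=1.0, min_duration=0.1):
    """Dispersion-threshold (I-DT) fixation detection sketch. `samples` is a
    time-ordered list of (t, x, y) gaze samples; returns fixations as
    (t_start, t_end, centroid_x, centroid_y). Threshold values here are
    illustrative, not taken from the study."""
    def dispersion(w):
        xs = [p[1] for p in w]
        ys = [p[2] for p in w]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # grow an initial window covering at least min_duration
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= n:
            break
        if dispersion(samples[i:j + 1]) <= max_dispersion:
            # extend the window while the points stay tightly clustered
            while j + 1 < n and dispersion(samples[i:j + 2]) <= max_dispersion:
                j += 1
            w = samples[i:j + 1]
            cx = sum(p[1] for p in w) / len(w)
            cy = sum(p[2] for p in w) / len(w)
            fixations.append((w[0][0], w[-1][0], cx, cy))
            i = j + 1
        else:
            i += 1  # slide past a sample belonging to a saccade
    return fixations
```

Fixation count, overall duration, and final fixation duration then follow directly from the returned list.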

  17. Accuracy of Outcome Anticipation, But Not Gaze Behavior, Differs Against Left- and Right-Handed Penalties in Team-Handball Goalkeeping

    Science.gov (United States)

    Loffing, Florian; Sölter, Florian; Hagemann, Norbert; Strauss, Bernd

    2015-01-01

    Low perceptual familiarity with relatively rarer left-handed as opposed to more common right-handed individuals may result in athletes' poorer ability to anticipate the former's action intentions. Part of such left-right asymmetry in visual anticipation could be due to an inefficient gaze strategy during confrontation with left-handed individuals. To exemplify, observers may not mirror their gaze when viewing left- vs. right-handed actions but preferentially fixate on an opponent's right body side, irrespective of an opponent's handedness, owing to the predominant exposure to right-handed actions. So far empirical verification of such assumption, however, is lacking. Here we report on an experiment where team-handball goalkeepers' and non-goalkeepers' gaze behavior was recorded while they predicted throw direction of left- and right-handed 7-m penalties shown as videos on a computer monitor. As expected, goalkeepers were considerably more accurate than non-goalkeepers and prediction was better against right- than left-handed penalties. However, there was no indication of differences in gaze measures (i.e., number of fixations, overall and final fixation duration, time-course of horizontal or vertical fixation deviation) as a function of skill group or the penalty-takers' handedness. Findings suggest that inferior anticipation of left-handed compared to right-handed individuals' action intentions may not be associated with misalignment in gaze behavior. Rather, albeit looking similarly, accuracy differences could be due to observers' differential ability of picking up and interpreting the visual information provided by left- vs. right-handed movements. PMID:26648887

  18. Accuracy of Outcome Anticipation, But Not Gaze Behavior, Differs Against Left- and Right-Handed Penalties in Team-Handball Goalkeeping.

    Science.gov (United States)

    Loffing, Florian; Sölter, Florian; Hagemann, Norbert; Strauss, Bernd

    2015-01-01

    Low perceptual familiarity with relatively rarer left-handed as opposed to more common right-handed individuals may result in athletes' poorer ability to anticipate the former's action intentions. Part of such left-right asymmetry in visual anticipation could be due to an inefficient gaze strategy during confrontation with left-handed individuals. To exemplify, observers may not mirror their gaze when viewing left- vs. right-handed actions but preferentially fixate on an opponent's right body side, irrespective of an opponent's handedness, owing to the predominant exposure to right-handed actions. So far empirical verification of such assumption, however, is lacking. Here we report on an experiment where team-handball goalkeepers' and non-goalkeepers' gaze behavior was recorded while they predicted throw direction of left- and right-handed 7-m penalties shown as videos on a computer monitor. As expected, goalkeepers were considerably more accurate than non-goalkeepers and prediction was better against right- than left-handed penalties. However, there was no indication of differences in gaze measures (i.e., number of fixations, overall and final fixation duration, time-course of horizontal or vertical fixation deviation) as a function of skill group or the penalty-takers' handedness. Findings suggest that inferior anticipation of left-handed compared to right-handed individuals' action intentions may not be associated with misalignment in gaze behavior. Rather, albeit looking similarly, accuracy differences could be due to observers' differential ability of picking up and interpreting the visual information provided by left- vs. right-handed movements.

  19. Between Gazes

    DEFF Research Database (Denmark)

    Elias, Camelia

    2009-01-01

    In the film documentary Zizek! (2006) Astra Taylor, the film’s director, introduces Slavoj Zizek and his central notions of Lacanian psychoanalysis as they tie in with Marxism, ideology, and culture. Apart from following Zizek from New York to his home in Ljubljana, the documentary presents ... delivers his thoughts on philosophy while in bed or in the bathroom. It is clear that one of the devices that the documentary uses in its portrayal of Zizek is the palimpsest, and what is being layered is the gaze. My essay introduces the idea of layering as a case of intermediality between different art...

  20. You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human–Robot Interaction

    Directory of Open Access Journals (Sweden)

    Abdulaziz Abubshait

    2017-08-01

    Full Text Available Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents, but mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception to robot agents, and positively affect attitudes and performance in human–robot interaction. What has not been investigated so far is whether different triggers of mind perception have an independent or interactive effect on attitudes and performance in human–robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm and examine how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human–robot interaction. The results show that both appearance and behavior affect human–robot interaction but that the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes, and behavior more strongly affecting performance. The implications of these findings for human–robot interaction are discussed.

  1. You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human-Robot Interaction.

    Science.gov (United States)

    Abubshait, Abdulaziz; Wiese, Eva

    2017-01-01

    Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents, but mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception to robot agents, and positively affect attitudes and performance in human-robot interaction. What has not been investigated so far is whether different triggers of mind perception have an independent or interactive effect on attitudes and performance in human-robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm and examine how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human-robot interaction. The results show that both appearance and behavior affect human-robot interaction but that the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes, and behavior more strongly affecting performance. The implications of these findings for human-robot interaction are discussed.

  2. Eye-Tracking Technology and the Dynamics of Natural Gaze Behavior in Sports: A Systematic Review of 40 Years of Research.

    Science.gov (United States)

    Kredel, Ralf; Vater, Christian; Klostermann, André; Hossner, Ernst-Joachim

    2017-01-01

    Reviewing 60 studies on natural gaze behavior in sports, it becomes clear that, over the last 40 years, the use of eye-tracking devices has considerably increased. Specifically, this review reveals the large variance of methods applied, analyses performed, and measures derived within the field. The results of sub-sample analyses suggest that sports-related eye-tracking research strives, on the one hand, for ecologically valid test settings (i.e., viewing conditions and response modes), while on the other, for experimental control along with high measurement accuracy (i.e., controlled test conditions with high-frequency eye-trackers linked to algorithmic analyses). To meet both demands, some promising compromises of methodological solutions have been proposed-in particular, the integration of robust mobile eye-trackers in motion-capture systems. However, as the fundamental trade-off between laboratory and field research cannot be solved by technological means, researchers need to carefully weigh the arguments for one or the other approach by accounting for the respective consequences. Nevertheless, for future research on dynamic gaze behavior in sports, further development of the current mobile eye-tracking methodology seems highly advisable to allow for the acquisition and algorithmic analyses of larger amounts of gaze-data and further, to increase the explanatory power of the derived results.

  3. Eye-Tracking Technology and the Dynamics of Natural Gaze Behavior in Sports: A Systematic Review of 40 Years of Research

    Directory of Open Access Journals (Sweden)

    Ralf Kredel

    2017-10-01

    Full Text Available Reviewing 60 studies on natural gaze behavior in sports, it becomes clear that, over the last 40 years, the use of eye-tracking devices has considerably increased. Specifically, this review reveals the large variance of methods applied, analyses performed, and measures derived within the field. The results of sub-sample analyses suggest that sports-related eye-tracking research strives, on the one hand, for ecologically valid test settings (i.e., viewing conditions and response modes), while on the other, for experimental control along with high measurement accuracy (i.e., controlled test conditions with high-frequency eye-trackers linked to algorithmic analyses). To meet both demands, some promising compromises of methodological solutions have been proposed—in particular, the integration of robust mobile eye-trackers in motion-capture systems. However, as the fundamental trade-off between laboratory and field research cannot be solved by technological means, researchers need to carefully weigh the arguments for one or the other approach by accounting for the respective consequences. Nevertheless, for future research on dynamic gaze behavior in sports, further development of the current mobile eye-tracking methodology seems highly advisable to allow for the acquisition and algorithmic analyses of larger amounts of gaze-data and further, to increase the explanatory power of the derived results.

  4. Human place and response learning: navigation strategy selection, pupil size and gaze behavior.

    Science.gov (United States)

    de Condappa, Olivier; Wiener, Jan M

    2016-01-01

    In this study, we examined the cognitive processes and ocular behavior associated with on-going navigation strategy choice using a route learning paradigm that distinguishes between three different wayfinding strategies: an allocentric place strategy, and the egocentric associative cue and beacon response strategies. Participants approached intersections of a known route from a variety of directions, and were asked to indicate the direction in which the original route continued. Their responses in a subset of these test trials allowed the assessment of strategy choice over the course of six experimental blocks. The behavioral data revealed an initial maladaptive bias for a beacon response strategy, with shifts in favor of the optimal configuration place strategy occurring over the course of the experiment. Response time analysis suggests that the configuration strategy relied on spatial transformations applied to a viewpoint-dependent spatial representation, rather than direct access to an allocentric representation. Furthermore, pupillary measures reflected the employment of place and response strategies throughout the experiment, with increasing use of the more cognitively demanding configuration strategy associated with increases in pupil dilation. During test trials in which known intersections were approached from different directions, visual attention was directed to the landmark encoded during learning as well as the intended movement direction. Interestingly, the encoded landmark did not differ between the three navigation strategies, which is discussed in the context of initial strategy choice and the parallel acquisition of place and response knowledge.

  5. The coupling between gaze behavior and opponent kinematics during anticipation of badminton shots.

    Science.gov (United States)

    Alder, David; Ford, Paul R; Causer, Joe; Williams, A Mark

    2014-10-01

    We examined links between the kinematics of an opponent's actions and the visual search behaviors of badminton players responding to those actions. A kinematic analysis of international standard badminton players (n = 4) was undertaken as they completed a range of serves. Video of these players serving was used to create a life-size temporal occlusion test to measure anticipation responses. Expert (n = 8) and novice (n = 8) badminton players anticipated serve location while wearing an eye movement registration system. During the execution phase of the opponent's movement, the kinematic analysis showed between-shot differences in distance traveled and peak acceleration at the shoulder, elbow, wrist and racket. Experts were more accurate at responding to the serves compared to novice players. Expert players fixated on the kinematic locations that were most discriminating between serve types more frequently and for a longer duration compared to novice players. Moreover, players were generally more accurate at responding to serves when they fixated vision upon the discriminating arm and racket kinematics. Findings extend previous literature by providing empirical evidence that expert athletes' visual search behaviors and anticipatory responses are inextricably linked to the opponent action being observed. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Conjugate Gaze Palsies

    Science.gov (United States)

    Consumer-health overview of conjugate gaze palsies, covering horizontal and vertical gaze palsy, filed under cranial nerve disorders.

  7. Orienting of attention via observed eye gaze is head-centred.

    Science.gov (United States)

    Bayliss, Andrew P; di Pellegrino, Giuseppe; Tipper, Steven P

    2004-11-01

    Observing averted eye gaze results in the automatic allocation of attention to the gazed-at location. The role of the orientation of the face that produces the gaze cue was investigated. The eyes in the face could look left or right in a head-centred frame, but the face itself could be oriented 90 degrees clockwise or anticlockwise such that the eyes were gazing up or down. Significant cueing effects to targets presented to the left or right of the screen were found in these head orientation conditions. This suggests that attention was directed to the side to which the eyes would have been looking towards, had the face been presented upright. This finding provides evidence that head orientation can affect gaze following, even when the head orientation alone is not a social cue. It also shows that the mechanism responsible for the allocation of attention following a gaze cue can be influenced by intrinsic object-based (i.e. head-centred) properties of the task-irrelevant cue.

  8. Gaze beats mouse

    DEFF Research Database (Denmark)

    Mateo, Julio C.; San Agustin, Javier; Hansen, John Paulin

    2008-01-01

    Facial EMG for selection is fast, easy and, combined with gaze pointing, it can provide completely hands-free interaction. In this pilot study, 5 participants performed a simple point-and-select task using mouse or gaze for pointing and a mouse button or a facial-EMG switch for selection. Gaze...

  9. Single gaze gestures

    DEFF Research Database (Denmark)

    Møllenbach, Emilie; Lilholm, Martin; Gail, Alastair

    2010-01-01

    This paper examines gaze gestures and their applicability as a generic selection method for gaze-only controlled interfaces. The method explored here is the Single Gaze Gesture (SGG), i.e. gestures consisting of a single point-to-point eye movement. Horizontal and vertical, long and short SGGs were...

  10. Look Together: Analyzing Gaze Coordination with Epistemic Network Analysis

    Directory of Open Access Journals (Sweden)

    Sean Andrist

    2015-07-01

    Full Text Available When conversing and collaborating in everyday situations, people naturally and interactively align their behaviors with each other across various communication channels, including speech, gesture, posture, and gaze. Having access to a partner's referential gaze behavior has been shown to be particularly important in achieving collaborative outcomes, but the process in which people's gaze behaviors unfold over the course of an interaction and become tightly coordinated is not well understood. In this paper, we present work to develop a deeper and more nuanced understanding of coordinated referential gaze in collaborating dyads. We recruited 13 dyads to participate in a collaborative sandwich-making task and used dual mobile eye tracking to synchronously record each participant's gaze behavior. We used a relatively new analysis technique—epistemic network analysis—to jointly model the gaze behaviors of both conversational participants. In this analysis, network nodes represent gaze targets for each participant, and edge strengths convey the likelihood of simultaneous gaze to the connected target nodes during a given time-slice. We divided collaborative task sequences into discrete phases to examine how the networks of shared gaze evolved over longer time windows. We conducted three separate analyses of the data to reveal (1) properties and patterns of how gaze coordination unfolds throughout an interaction sequence, (2) optimal time lags of gaze alignment within a dyad at different phases of the interaction, and (3) differences in gaze coordination patterns for interaction sequences that lead to breakdowns and repairs. In addition to contributing to the growing body of knowledge on the coordination of gaze behaviors in joint activities, this work has implications for the design of future technologies that engage in situated interactions with human users.
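The edge definition described in the abstract (likelihood of simultaneous gaze to a pair of targets during a time slice) amounts to a co-occurrence count over the two synchronized gaze streams. The sketch below shows only that counting step; full epistemic network analysis adds normalization and dimensional reduction beyond this, and the target names are hypothetical.

```python
from collections import Counter

def gaze_network(gaze_a, gaze_b):
    """Co-occurrence sketch of ENA-style edge weights. `gaze_a` and `gaze_b`
    are equal-length lists of gaze targets, one entry per time slice for
    each participant of the dyad. The weight of edge (target_a, target_b)
    is the fraction of slices in which the two participants looked at
    those targets simultaneously."""
    assert len(gaze_a) == len(gaze_b), "streams must be synchronized"
    counts = Counter(zip(gaze_a, gaze_b))
    n = len(gaze_a)
    return {edge: c / n for edge, c in counts.items()}
```

Computing such networks separately for each task phase gives the phase-by-phase evolution the authors examine.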

  11. Gaze Tracking Through Smartphones

    DEFF Research Database (Denmark)

    Skovsgaard, Henrik; Hansen, John Paulin; Møllenbach, Emilie

    Mobile gaze trackers embedded in smartphones or tablets provide a powerful personal link to game devices, head-mounted micro-displays, pc´s, and TV’s. This link may offer a main road to the mass market for gaze interaction, we suggest.

  12. Gaze interaction from bed

    DEFF Research Database (Denmark)

    Hansen, John Paulin; San Agustin, Javier; Jensen, Henrik Tomra Skovsgaard Hegner

    2011-01-01

    This paper presents a low-cost gaze tracking solution for bedbound people composed of free-ware tracking software and commodity hardware. Gaze interaction is done on a large wall-projected image, visible to all people present in the room. The hardware equipment leaves physical space free to assis...

  13. Gazing and Performing

    DEFF Research Database (Denmark)

    Larsen, Jonas; Urry, John

    2011-01-01

    The Tourist Gaze [Urry J, 1990 (Sage, London)] is one of the most discussed and cited tourism books (with about 4000 citations on Google scholar). Whilst wide ranging in scope, the book is known for the Foucault-inspired concept of the tourist gaze that brings out the fundamentally visual and image...

  14. Gaze Bias in Preference Judgments by Younger and Older Adults

    Directory of Open Access Journals (Sweden)

    Toshiki Saito

    2017-08-01

    Full Text Available Individuals’ gaze behavior reflects the choice they will ultimately make. For example, people confronting a choice among multiple stimuli tend to look longer at stimuli that are subsequently chosen than at other stimuli. This tendency, called the gaze bias effect, is a key aspect of visual decision-making. Nevertheless, no study has examined the generality of the gaze bias effect in older adults. Here, we used a two-alternative forced-choice task (2AFC) to compare the gaze behavior reflective of different stages of decision processes demonstrated by younger and older adults. Participants viewed two faces and were instructed to choose the one that they liked/disliked or the one that they judged to be more/less similar to their own face. Their eye movements were tracked while they chose. The results show that the gaze bias effect occurred during the remaining time in both age groups irrespective of the decision type. However, no gaze bias effect was observed for the preference judgment during the first dwell time. Our study demonstrated that the gaze bias during the remaining time occurred regardless of decision-making task and age. Further study using diverse participants, such as clinical patients or infants, may help to generalize the gaze bias effect and to elucidate the mechanisms underlying the gaze bias.
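The dwell-time measures behind the gaze bias effect can be sketched as follows. The function names and the first-dwell vs. remaining-time split are illustrative reconstructions, not the study's actual analysis code.

```python
def gaze_bias(fixations, chosen):
    """Share of total dwell time spent on the chosen item in a
    two-alternative trial. `fixations` is a list of (item, duration)
    pairs in viewing order. Values above 0.5 indicate a gaze bias
    toward the eventual choice."""
    total = sum(d for _, d in fixations)
    on_chosen = sum(d for item, d in fixations if item == chosen)
    return on_chosen / total if total else 0.0

def first_dwell_and_rest(fixations, chosen):
    """Split dwell time on the chosen item into the first dwell (the
    initial unbroken run of fixations on one item) and the remaining
    viewing time, mirroring the abstract's distinction."""
    if not fixations:
        return 0.0, 0.0
    first_item = fixations[0][0]
    i = 0
    while i < len(fixations) and fixations[i][0] == first_item:
        i += 1
    first = sum(d for item, d in fixations[:i] if item == chosen)
    rest = sum(d for item, d in fixations[i:] if item == chosen)
    return first, rest
```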

  15. Revisiting the Relationship between the Processing of Gaze Direction and the Processing of Facial Expression

    Science.gov (United States)

    Ganel, Tzvi

    2011-01-01

    There is mixed evidence on the nature of the relationship between the perception of gaze direction and the perception of facial expressions. Major support for shared processing of gaze and expression comes from behavioral studies that showed that observers cannot process expression or gaze and ignore irrelevant variations in the other dimension.…

  16. Gaze as a biometric

    Science.gov (United States)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2014-03-01

    Two people may analyze a visual scene in two completely different ways. Our study sought to determine whether human gaze may be used to establish the identity of an individual. To accomplish this objective we investigated the gaze pattern of twelve individuals viewing still images with different spatial relationships. Specifically, we created 5 visual "dot-pattern" tests to be shown on a standard computer monitor. These tests challenged the viewer's capacity to distinguish proximity, alignment, and perceptual organization. Each test included 50 images of varying difficulty (total of 250 images). Eye-tracking data were collected from each individual while taking the tests. The eye-tracking data were converted into gaze velocities and analyzed with Hidden Markov Models to develop personalized gaze profiles. Using leave-one-out cross-validation, we observed that these personalized profiles could differentiate among the 12 users with classification accuracy ranging between 53% and 76%, depending on the test. This was statistically significantly better than random guessing (i.e., 8.3% or 1 out of 12). Classification accuracy was higher for the tests where the users' average gaze velocity per case was lower. The study findings support the feasibility of using gaze as a biometric or personalized biomarker. These findings could have implications in Radiology training and the development of personalized e-learning environments.
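    The first step of the pipeline described — converting raw eye-tracking samples into gaze velocities before any modeling — can be sketched as follows (the 60 Hz sampling rate and pixel units are assumptions for illustration; the study does not specify them here):

    ```python
    import numpy as np

    def gaze_velocities(x, y, hz=60.0):
        """Convert raw gaze coordinates to point-to-point velocity magnitudes.

        x, y : sequences of gaze positions (pixels) sampled at `hz` Hz.
        Returns the speed (pixels/second) between successive samples.
        """
        dx = np.diff(np.asarray(x, dtype=float))
        dy = np.diff(np.asarray(y, dtype=float))
        return np.hypot(dx, dy) * hz  # Euclidean step length times sample rate

    # A gaze trace moving 3 px right and 4 px up per sample at 60 Hz:
    # each step covers 5 px, i.e. 300 px/s.
    v = gaze_velocities([0, 3, 6], [0, 4, 8])
    ```

    The resulting velocity sequences would then be fed to a per-user Hidden Markov Model (e.g. one Gaussian HMM per user, scoring held-out sequences against each profile), which is the classification stage the abstract describes.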

  17. Gaze as a biometric

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Hong-Jun [ORNL]; Carmichael, Tandy [Tennessee Technological University]; Tourassi, Georgia [ORNL]

    2014-01-01

    Two people may analyze a visual scene in two completely different ways. Our study sought to determine whether human gaze may be used to establish the identity of an individual. To accomplish this objective we investigated the gaze pattern of twelve individuals viewing different still images with different spatial relationships. Specifically, we created 5 visual dot-pattern tests to be shown on a standard computer monitor. These tests challenged the viewer's capacity to distinguish proximity, alignment, and perceptual organization. Each test included 50 images of varying difficulty (total of 250 images). Eye-tracking data were collected from each individual while taking the tests. The eye-tracking data were converted into gaze velocities and analyzed with Hidden Markov Models to develop personalized gaze profiles. Using leave-one-out cross-validation, we observed that these personalized profiles could differentiate among the 12 users with classification accuracy ranging between 53% and 76%, depending on the test. This was statistically significantly better than random guessing (i.e., 8.3% or 1 out of 12). Classification accuracy was higher for the tests where the users' average gaze velocity per case was lower. The study findings support the feasibility of using gaze as a biometric or personalized biomarker. These findings could have implications in Radiology training and the development of personalized e-learning environments.

  18. Infants' Developing Understanding of Social Gaze

    Science.gov (United States)

    Beier, Jonathan S.; Spelke, Elizabeth S.

    2012-01-01

    Young infants are sensitive to self-directed social actions, but do they appreciate the intentional, target-directed nature of such behaviors? The authors addressed this question by investigating infants' understanding of social gaze in third-party interactions (N = 104). Ten-month-old infants discriminated between 2 people in mutual versus…

  19. Intermediate view synthesis for eye-gazing

    Science.gov (United States)

    Baek, Eu-Ttuem; Ho, Yo-Sung

    2015-01-01

    Nonverbal communication, also known as body language, is an important form of communication. Nonverbal behaviors such as posture, eye contact, and gestures send strong messages. Among these, eye contact is one of the most important cues an individual can use. However, eye contact is lost when we use a video conferencing system: the disparity between the locations of the eyes and the camera gets in the way. This lack of eye gaze can give an unapproachable and unpleasant impression. In this paper, we propose an eye-gaze correction method for video conferencing. We use two cameras installed at the top and the bottom of the television. The two captured images are rendered with 2D warping at a virtual position. We apply view morphing to the detected face, and synthesize the face and the warped image. Experimental results verify that the proposed system is effective in generating natural gaze-corrected images.

  20. La mirada de los porteros de fútbol sala ante diferentes tipos de respuesta motriz. [Futsal goalkeepers’ gaze behavior with different type of motor response].

    Directory of Open Access Journals (Sweden)

    José Luis Graupera Sanz

    2013-07-01

    This study explored and analyzed the visual behavior of a group of expert futsal goalkeepers in order to determine whether the type of motor response requested influenced their visual behavior. Four goalkeepers were shown a total of 48 video clips on a life-size screen under two response conditions: with and without the stopping movement. Gaze was recorded with the ASL Mobile Eye eye tracker during two penalty-kick conditions and analyzed in the interval from -250 to 205 ms around the kick. The results showed that when goalkeepers responded with their usual stopping action, fixations were found in only half of the cases; these fixations were of short duration and located mainly on the area of the ground just in front of the ball. In contrast, when goalkeepers remained in a static position, their gaze was directed toward the zone between the ball and the supporting leg, with fixations of longer duration. It can be concluded that visual behavior differed between the two conditions as a result of adaptation to the specific spatio-temporal demands of each condition, since the degree of movement in the requested response influenced the associated visual behavior.

  1. E-gaze: create gaze communication for people with visual disability

    NARCIS (Netherlands)

    Qiu, S.; Osawa, H.; Hu, J.; Rauterberg, G.W.M.

    2015-01-01

    Gaze signals are frequently used by the sighted in social interactions as visual cues. However, these signals and cues are hardly accessible to people with visual disability. A conceptual design of E-Gaze glasses is proposed, an assistive device intended to create gaze communication between blind and sighted people.

  2. Perceptual-cognitive changes during motor learning: The influence of mental and physical practice on mental representation, gaze behavior, and performance of a complex action

    Directory of Open Access Journals (Sweden)

    Cornelia eFrank

    2016-01-01

    Despite the wealth of research on differences between experts and novices with respect to their perceptual-cognitive background (e.g., mental representations, gaze behavior), little is known about the change of these perceptual-cognitive components over the course of motor learning. In the present study, changes in one's mental representation, quiet eye behavior, and outcome performance were examined over the course of skill acquisition as it related to physical and mental practice. Novices (N = 45) were assigned to one of three conditions: physical practice, physical practice plus mental practice, and no practice. Participants in the practice groups trained on a golf putting task over the course of three days, either by repeatedly executing the putt, or by both executing and imaging the putt. Findings revealed improvements in putting performance across both practice conditions. Regarding the perceptual-cognitive changes, participants practicing mentally and physically revealed longer quiet eye durations as well as more elaborate representation structures in comparison to the control group, while this was not the case for participants who underwent physical practice only. Thus, in the present study, combined mental and physical practice led to both formation of mental representations in long-term memory and longer quiet eye durations. Interestingly, the length of the quiet eye directly related to the degree of elaborateness of the underlying mental representation, supporting the notion that the quiet eye reflects cognitive processing. This study is the first to show that the quiet eye becomes longer in novices practicing a motor action. Moreover, the findings of the present study suggest that perceptual and cognitive adaptations co-occur over the course of motor learning.

  3. A GazeWatch Prototype

    DEFF Research Database (Denmark)

    Paulin Hansen, John; Biermann, Florian; Møllenbach, Emile

    2015-01-01

    We demonstrate the potential of adding a gaze tracking unit to a smartwatch, allowing hands-free interaction with the watch itself and control of the environment. Users give commands via gaze gestures, i.e. looking away from and back to the GazeWatch. Rapid presentation of single words on the watch display provides a rich and effective textual interface. Finally, we exemplify how the GazeWatch can be used as a ubiquitous pointer on large displays.

  4. Gazes and Performances

    DEFF Research Database (Denmark)

    Larsen, Jonas

    Abstract: Recent literature has critiqued the notion of the 'tourist gaze' for reducing tourism to visual experiences ('sightseeing') and neglecting other senses and bodily experiences of doing tourism. A so-called 'performance turn' within tourist studies highlights how tourists experience places... to conceptualise the corporeality of tourist bodies and the embodied actions of and interactions between tourist workers, tourists and 'locals' on various stages. It has been suggested that it is necessary to choose between gazing and performing as the tourism paradigm (Perkin and Thorns 2001). Rather than... ethnographic studies I spell out the embodied, hybridised, mobile and performative nature of tourist gazing, especially with regard to tourist photography. The talk draws on my recent book Tourism, Performance and the Everyday: Consuming the Orient (Routledge, 2009, with M. Haldrup) and the substantially...

  5. Eye Movements in Gaze Interaction

    DEFF Research Database (Denmark)

    Møllenbach, Emilie; Hansen, John Paulin; Lillholm, Martin

    2013-01-01

    Gaze as a sole input modality must support complex navigation and selection tasks. Gaze interaction combines specific eye movements and graphic display objects (GDOs). This paper suggests a unifying taxonomy of gaze interaction principles. The taxonomy deals with three types of eye movements...

  6. The Epistemology of the Gaze

    DEFF Research Database (Denmark)

    Kramer, Mette

    2007-01-01

    In psycho-semiotic film theory the gaze is often considered to be a straitjacket for the female spectator. If we approach the gaze through an empirical, so-called 'naturalised' lens, it is possible to regard the gaze as a functional device through which the spectator can obtain knowledge essential for her self-preservation.

  7. Social Referencing Gaze Behavior during a Videogame Task: Eye Tracking Evidence from Children with and without ASD

    Science.gov (United States)

    Finke, Erinn H.; Wilkinson, Krista M.; Hickerson, Benjamin D.

    2017-01-01

    The purpose of this study was to understand the social referencing behaviors of children with and without autism spectrum disorder (ASD) while visually attending to a videogame stimulus depicting both the face of the videogame player and the videogame play action. Videogames appear to offer a uniquely well-suited environment for the emergence of…

  8. Directing gaze: the effect of disclaimer labels on women's visual attention to fashion magazine advertisements.

    Science.gov (United States)

    Bury, Belinda; Tiggemann, Marika; Slater, Amy

    2014-09-01

    In an effort to combat the known negative effects of exposure to unrealistic thin ideal images, there is increasing worldwide pressure on fashion, media and advertising industries to disclose when images have been digitally altered. The current study used eye tracking technology to investigate experimentally how digital alteration disclaimer labels impact women's visual attention to fashion magazine advertisements. Participants were 60 female undergraduate students who viewed four thin ideal advertisements with either no disclaimer, a generic disclaimer, or a specific more detailed disclaimer. It was established that women did attend to the disclaimers. The nature of the disclaimer had no effect on time spent looking at particular body parts, but did affect the direction of gaze following reading of the disclaimer. This latter effect was found to be greater for women high on trait appearance comparison. Further research is paramount in guiding effective policy around the use of disclaimer labels. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Gaze-controlled Driving

    DEFF Research Database (Denmark)

    Tall, Martin; Alapetite, Alexandre; San Agustin, Javier

    2009-01-01

    We investigate if the gaze (point of regard) can control a remote vehicle driving on a racing track. Five different input devices (on-screen buttons, mouse-pointing low-cost webcam eye tracker and two commercial eye tracking systems) provide heading and speed control on the scene view transmitted...

  10. Gaze Interactive Building Instructions

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Ahmed, Zaheer; Mardanbeigi, Diako

    We combine eye tracking technology and mobile tablets to support hands-free interaction with digital building instructions. As a proof-of-concept we have developed a small interactive 3D environment where one can interact with digital blocks by gaze, keystroke and head gestures. Blocks may be moved...

  11. Narcolepsy, REM sleep behavior disorder, and supranuclear gaze palsy associated with Ma1 and Ma2 antibodies and tonsillar carcinoma.

    Science.gov (United States)

    Adams, Chris; McKeon, Andrew; Silber, Michael H; Kumar, Rajeev

    2011-04-01

    To describe a patient with diencephalic and mesencephalic presentation of a Ma1 and Ma2 antibody-associated paraneoplastic neurological disorder. Case report. The Colorado Neurological Institute Movement Disorders Center in Englewood, Colorado, and the Mayo Clinic in Rochester, Minnesota. A 55-year-old man with a paraneoplastic neurological disorder characterized by rapid eye movement sleep behavior disorder, narcolepsy, and a progressive supranuclear palsy-like syndrome in the setting of tonsillar carcinoma. Immunotherapy for paraneoplastic neurological disorder, surgery and radiotherapy for cancer, and symptomatic treatment for parkinsonism and sleep disorders. Polysomnography, multiple sleep latency test, and neurological examination. The cancer was detected at a limited stage and treatable. After oncological therapy and immunotherapy, symptoms stabilized. Treatment with modafinil improved daytime somnolence. Rapid onset and progression of multifocal deficits may be a clue to paraneoplastic etiology. Early treatment of a limited stage cancer (with or without immunotherapy) may possibly slow progression of neurological symptoms. Symptomatic treatment may be beneficial.

  12. Tonic noradrenergic activity modulates explorative behavior and attentional set shifting: Evidence from pupillometry and gaze pattern analysis.

    Science.gov (United States)

    Pajkossy, Péter; Szőllősi, Ágnes; Demeter, Gyula; Racsmány, Mihály

    2017-12-01

    A constant task for every living organism is to decide whether to exploit rewards associated with current behavior or to explore the environment for more rewarding options. Current empirical evidence indicates that exploitation is related to phasic whereas exploration is related to tonic firing mode of noradrenergic neurons in the locus coeruleus. In humans, this exploration-exploitation trade-off is subserved by the ability to flexibly switch attention between task-related and task-irrelevant information. Here, we investigated whether this function, called attentional set shifting, is related to exploration and tonic noradrenergic discharge. We measured pretrial baseline pupil dilation, proved to be strongly correlated with the activity of the locus coeruleus, while human participants took part in well-known tasks of attentional set shifting. Study 1 used the Wisconsin Card Sorting Task, whereas in Study 2, the Intra/Extradimensional Set Shifting Task was used. Both tasks require participants to choose between different compound stimuli based on feedback provided for their previous decisions. During the task, stimulus-reward contingencies change periodically, thus participants are repeatedly required to reassess which stimulus features are relevant (i.e., they shift their attentional set). Our results showed that baseline pupil diameter steadily decreased when the stimulus-reward contingencies were stable, whereas it suddenly increased when these contingencies changed. Analysis of looking patterns also confirmed the presence of exploratory behavior during attentional set shifting. Thus, our results suggest that tonic firing mode of noradrenergic neurons in the locus coeruleus is implicated in attentional set shifting, as it regulates the amount of exploration. © 2017 Society for Psychophysiological Research.

  13. The “Social Gaze Space”: A Taxonomy for Gaze-Based Communication in Triadic Interactions

    Directory of Open Access Journals (Sweden)

    Mathis Jording

    2018-02-01

    Humans substantially rely on non-verbal cues in their communication and interaction with others. The eyes represent a “simultaneous input-output device”: While we observe others and obtain information about their mental states (including feelings, thoughts, and intentions-to-act), our gaze simultaneously provides information about our own attention and inner experiences. This substantiates its pivotal role for the coordination of communication. The communicative and coordinative capacities – and their phylogenetic and ontogenetic impacts – become fully apparent in triadic interactions, constituted in their simplest form by two persons and an object. Technological advances have sparked renewed interest in social gaze and provide new methodological approaches. Here we introduce the ‘Social Gaze Space’ as a new conceptual framework for the systematic study of gaze behavior during social information processing. It covers all possible categorical states, namely ‘partner-oriented,’ ‘object-oriented,’ ‘introspective,’ ‘initiating joint attention,’ and ‘responding joint attention.’ Different combinations of these states explain several interpersonal phenomena. We argue that this taxonomy distinguishes the most relevant interactional states along their distinctive features, and will showcase the implications for prominent social gaze phenomena. The taxonomy allows us to identify research desiderata that have been neglected so far. We argue for a systematic investigation of these phenomena and discuss some related methodological issues.

  14. EYE GAZE TRACKING

    DEFF Research Database (Denmark)

    2017-01-01

    This invention relates to a method of performing eye gaze tracking of at least one eye of a user by determining the position of the center of the eye, said method comprising the steps of: detecting the position of at least three reflections on said eye; transforming said positions to a normalized coordinate system spanning a frame of reference, wherein said transformation is performed based on a bilinear transformation or a non-linear transformation, e.g. a Möbius transformation or a homographic transformation; detecting the position of said center of the eye relative to the position of said reflections and transforming this position to said normalized coordinate system; and tracking the eye gaze by tracking the movement of said eye in said normalized coordinate system. Thereby calibration of a camera, such as knowledge of the exact position and zoom level of the camera, is avoided.
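    The normalization step claimed here — expressing the eye center in a reference frame spanned by the reflections via a homographic transformation — can be illustrated with a minimal sketch. Assumptions for illustration: four corneal reflections with made-up pixel coordinates, mapped onto the unit square by a direct linear transform (DLT); the patent itself also admits three reflections and other transform families.

    ```python
    import numpy as np

    def homography(src, dst):
        """Estimate the 3x3 homography mapping src points to dst points (DLT)."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        # The homography is the null vector of the stacked constraint matrix.
        _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
        H = Vt[-1].reshape(3, 3)
        return H / H[2, 2]

    def normalize_point(H, p):
        """Apply homography H to image point p, returning normalized coordinates."""
        q = H @ np.array([p[0], p[1], 1.0])
        return q[:2] / q[2]

    # Four reflections (glints) in image pixels, mapped to the unit square;
    # the detected eye center is then expressed in that normalized frame,
    # independent of camera position and zoom.
    glints = [(100, 100), (200, 110), (210, 190), (95, 185)]
    unit = [(0, 0), (1, 0), (1, 1), (0, 1)]
    H = homography(glints, unit)
    eye_center_norm = normalize_point(H, (150, 150))
    ```

    Because the frame of reference travels with the reflections, gaze can be tracked as movement of the eye center within this normalized system, which is what lets the method skip explicit camera calibration.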

  15. Gaze behaviour during space perception and spatial decision making.

    Science.gov (United States)

    Wiener, Jan M; Hölscher, Christoph; Büchner, Simon; Konieczny, Lars

    2012-11-01

    A series of four experiments investigating gaze behavior and decision making in the context of wayfinding is reported. Participants were presented with screenshots of choice points taken in large virtual environments. Each screenshot depicted alternative path options. In Experiment 1, participants had to decide between them to find an object hidden in the environment. In Experiment 2, participants were first informed about which path option to take as if following a guided route. Subsequently, they were presented with the same images in random order and had to indicate which path option they chose during initial exposure. In Experiment 1, we demonstrate (1) that participants have a tendency to choose the path option that featured the longer line of sight, and (2) a robust gaze bias towards the eventually chosen path option. In Experiment 2, systematic differences in gaze behavior towards the alternative path options between encoding and decoding were observed. Based on data from Experiments 1 and 2 and two control experiments ensuring that fixation patterns were specific to the spatial tasks, we develop a tentative model of gaze behavior during wayfinding decision making, suggesting that particular attention was paid to image areas depicting changes in the local geometry of the environments such as corners, openings, and occlusions. Together, the results suggest that gaze during wayfinding tasks is directed toward, and can be predicted by, a subset of environmental features, and that gaze bias effects are a general phenomenon of visual decision making.

  16. Demo of Gaze Controlled Flying

    DEFF Research Database (Denmark)

    Alapetite, Alexandre; Hansen, John Paulin; Scott MacKenzie, I.

    2012-01-01

    Development of a control paradigm for unmanned aerial vehicles (UAV) is a new challenge to HCI. The demo explores how to use gaze as input for locomotion in 3D. A low-cost drone will be controlled by tracking the user’s point of regard (gaze) on a live video stream from the UAV.

  17. Social evolution. Oxytocin-gaze positive loop and the coevolution of human-dog bonds.

    Science.gov (United States)

    Nagasawa, Miho; Mitsui, Shouhei; En, Shiori; Ohtani, Nobuyo; Ohta, Mitsuaki; Sakuma, Yasuo; Onaka, Tatsushi; Mogi, Kazutaka; Kikusui, Takefumi

    2015-04-17

    Human-like modes of communication, including mutual gaze, in dogs may have been acquired during domestication with humans. We show that gazing behavior from dogs, but not wolves, increased urinary oxytocin concentrations in owners, which consequently facilitated owners' affiliation and increased oxytocin concentration in dogs. Further, nasally administered oxytocin increased gazing behavior in dogs, which in turn increased urinary oxytocin concentrations in owners. These findings support the existence of an interspecies oxytocin-mediated positive loop facilitated and modulated by gazing, which may have supported the coevolution of human-dog bonding by engaging common modes of communicating social attachment. Copyright © 2015, American Association for the Advancement of Science.

  18. Gaze stabilization reflexes in the mouse: New tools to study vision and sensorimotor

    NARCIS (Netherlands)

    B. van Alphen (Bart)

    2010-01-01

    Gaze stabilization reflexes are a popular model system in neuroscience for connecting neurophysiology and behavior, as well as for studying the neural correlates of behavioral plasticity. These compensatory eye movements are one of the simplest motor behaviors.

  19. Investigating gaze of children with ASD in naturalistic settings.

    Directory of Open Access Journals (Sweden)

    Basilio Noris

    BACKGROUND: Visual behavior is known to be atypical in Autism Spectrum Disorders (ASD). Monitor-based eye-tracking studies have measured several of these atypicalities in individuals with Autism. While atypical behaviors are known to be accentuated during natural interactions, few studies have examined gaze behavior in natural interactions. In this study we focused on (i) whether the findings obtained in laboratory settings are also visible in a naturalistic interaction; (ii) whether new atypical elements appear when studying visual behavior across the whole field of view. METHODOLOGY/PRINCIPAL FINDINGS: Ten children with ASD and ten typically developing children participated in a dyadic interaction with an experimenter administering items from the Early Social Communication Scale (ESCS). The children wore a novel head-mounted eye-tracker, measuring gaze direction and presence of faces across the child's field of view. The analysis of gaze episodes to faces revealed that children with ASD looked significantly less, and for shorter lapses of time, at the experimenter. The analysis of gaze patterns across the child's field of view revealed that children with ASD looked downwards and made more extensive use of their lateral field of view when exploring the environment. CONCLUSIONS/SIGNIFICANCE: The data gathered in naturalistic settings confirm findings previously obtained only in monitor-based studies. Moreover, the study allowed us to observe a generalized strategy of lateral gaze in children with ASD when they were looking at the objects in their environment.

  20. Gaze Strategies in Skateboard Trick Jumps: Spatiotemporal Constraints in Complex Locomotion

    Science.gov (United States)

    Klostermann, André; Küng, Philip

    2017-01-01

    Purpose: This study aimed to further the knowledge on gaze behavior in locomotion by studying gaze strategies in skateboard jumps of different difficulty that had to be performed either with or without an obstacle. Method: Nine experienced skateboarders performed "Ollie" and "Kickflip" jumps either over an obstacle or over a…

  1. AmbiGaze : direct control of ambient devices by gaze

    OpenAIRE

    Velloso, Eduardo; Wirth, Markus; Weichel, Christian; Abreu Esteves, Augusto Emanuel; Gellersen, Hans-Werner Georg

    2016-01-01

    Eye tracking offers many opportunities for direct device control in smart environments, but issues such as the need for calibration and the Midas touch problem make it impractical. In this paper, we propose AmbiGaze, a smart environment that employs the animation of targets to provide users with direct control of devices by gaze only through smooth pursuit tracking. We propose a design space of means of exposing functionality through movement and illustrate the concept through four prototypes...

  2. The Role of Global and Local Visual Information during Gaze-Cued Orienting of Attention.

    Science.gov (United States)

    Munsters, Nicolette M; van den Boomen, Carlijn; Hooge, Ignace T C; Kemner, Chantal

    2016-01-01

    Gaze direction is an important social communication tool. Global and local visual information are known to play specific roles in processing socially relevant information from a face. The current study investigated whether global visual information has a primary role during gaze-cued orienting of attention and, as such, may influence quality of interaction. Adults performed a gaze-cueing task in which a centrally presented face cued (valid or invalid) the location of a peripheral target through a gaze shift. We measured brain activity (electroencephalography) towards the cue and target and behavioral responses (manual and saccadic reaction times) towards the target. The faces contained global (i.e. lower spatial frequencies), local (i.e. higher spatial frequencies), or a selection of both global and local (i.e. mid-band spatial frequencies) visual information. We found a gaze cue-validity effect (i.e. valid versus invalid), but no interaction effects with spatial frequency content. Furthermore, behavioral responses towards the target were slower in all cue conditions when lower spatial frequencies were not present in the gaze cue. These results suggest that whereas gaze-cued orienting of attention can be driven by both global and local visual information, global visual information determines the speed of behavioral responses towards other entities appearing in the surroundings of the gaze cue stimuli.
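    The stimulus manipulation described — face images containing only lower (global) or higher (local) spatial frequencies — amounts to band-filtering the image. A minimal numpy sketch of the idea (the local-mean filter and kernel size are illustrative stand-ins; the study's actual cutoff frequencies are not given here):

    ```python
    import numpy as np

    def lowpass(img, kernel_size=9):
        """Crude low-pass filter: local mean, keeping the 'global' content."""
        k = kernel_size // 2
        padded = np.pad(img, k, mode="edge")
        out = np.zeros_like(img, dtype=float)
        h, w = img.shape
        for i in range(h):
            for j in range(w):
                out[i, j] = padded[i:i + kernel_size, j:j + kernel_size].mean()
        return out

    def split_bands(img, kernel_size=9):
        """Split an image into low- and high-spatial-frequency components.

        The low band carries coarse, global structure; the residual (image
        minus low band) carries fine, local detail such as edges.
        """
        img = np.asarray(img, dtype=float)
        low = lowpass(img, kernel_size)
        high = img - low
        return low, high

    # A uniform image has no fine detail: all content ends up in the low band.
    flat = np.full((16, 16), 5.0)
    low, high = split_bands(flat)
    ```

    In practice such filtering is done with Gaussian kernels or in the Fourier domain with cycles-per-face cutoffs, but the decomposition into complementary bands is the same.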

  3. Gaze training enhances laparoscopic technical skill acquisition and multi-tasking performance: a randomized, controlled study.

    Science.gov (United States)

    Wilson, Mark R; Vine, Samuel J; Bright, Elizabeth; Masters, Rich S W; Defriend, David; McGrath, John S

    2011-12-01

    The operating room environment is replete with stressors and distractions that increase the attention demands of what are already complex psychomotor procedures. Contemporary research in other fields (e.g., sport) has revealed that gaze training interventions may support the development of robust movement skills. The current study was designed to examine the utility of gaze training for technical laparoscopic skills and to test performance under multitasking conditions. Thirty medical trainees with no laparoscopic experience were divided randomly into one of three treatment groups: gaze trained (GAZE), movement trained (MOVE), and discovery learning/control (DISCOVERY). Participants were fitted with a Mobile Eye gaze registration system, which measures eye-line of gaze at 25 Hz. Training consisted of ten repetitions of the "eye-hand coordination" task from the LAP Mentor VR laparoscopic surgical simulator while receiving instruction and video feedback (specific to each treatment condition). After training, all participants completed a control test (designed to assess learning) and a multitasking transfer test, in which they completed the procedure while performing a concurrent tone counting task. Not only did the GAZE group learn more quickly than the MOVE and DISCOVERY groups (faster completion times in the control test), but the performance difference was even more pronounced when multitasking. Differences in gaze control (target locking fixations), rather than tool movement measures (tool path length), underpinned this performance advantage for GAZE training. These results suggest that although the GAZE intervention focused on training gaze behavior only, there were indirect benefits for movement behaviors and performance efficiency. Additionally, focusing on a single external target when learning, rather than on complex movement patterns, may have freed up attentional resources that could be applied to concurrent cognitive tasks.

  4. Autonomous social gaze model for an interactive virtual character in real-life settings

    OpenAIRE

    Yumak, Zerrin; van den Brink, Bram; Egges, Arjan

    2018-01-01

    This paper presents a gaze behavior model for an interactive virtual character situated in the real world. We are interested in estimating which user has an intention to interact, in other words which user is engaged with the virtual character. The model takes into account behavioral cues such as proximity, velocity, posture and sound, estimates an engagement score and drives the gaze behavior of the virtual character. Initially, we assign equal weights to these fea...
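The weighted engagement score described above can be sketched as follows. The cue names and the equal-weight initialization follow the abstract; the scoring function itself and the toy values are a hypothetical illustration, not the paper's model:

```python
def engagement_score(cues, weights=None):
    """Weighted engagement estimate from normalized behavioral cues in [0, 1].

    cues: dict with values for 'proximity', 'velocity', 'posture', 'sound'
          (hypothetical key names; the paper's exact features may differ).
    """
    keys = ("proximity", "velocity", "posture", "sound")
    if weights is None:
        # Equal initial weights, as in the abstract.
        weights = {k: 1.0 / len(keys) for k in keys}
    return sum(weights[k] * cues.get(k, 0.0) for k in keys)

user = {"proximity": 0.9, "velocity": 0.2, "posture": 0.8, "sound": 0.5}
print(round(engagement_score(user), 2))  # 0.6
```

The character's gaze would then be directed at the user with the highest score.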

  5. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor

    Directory of Open Access Journals (Sweden)

    Rizwan Ali Naqvi

    2018-02-01

Full Text Available A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver’s point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.

  6. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor.

    Science.gov (United States)

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-02-03

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.
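Classic PCCR systems map the pupil-to-glint vector to screen coordinates through a per-user calibration, which is the step the deep-learning approach above is designed to avoid. A minimal sketch of such a mapping, assuming a simple affine model (real systems typically use higher-order polynomials and 3-D eye models):

```python
import numpy as np

def pccr_vector(pupil_center, glint_center):
    """Pupil-center-to-corneal-reflection (PCCR) vector in image coordinates."""
    return np.asarray(pupil_center, float) - np.asarray(glint_center, float)

def fit_calibration(pccr_vectors, screen_points):
    """Least-squares affine map from PCCR vectors to screen coordinates."""
    V = np.hstack([pccr_vectors, np.ones((len(pccr_vectors), 1))])
    coeffs, *_ = np.linalg.lstsq(V, screen_points, rcond=None)
    return coeffs  # 3x2 matrix of affine parameters

def estimate_gaze(coeffs, pupil_center, glint_center):
    """Map one PCCR measurement to an on-screen gaze point."""
    v = pccr_vector(pupil_center, glint_center)
    return np.append(v, 1.0) @ coeffs

# Synthetic check: recover a known affine mapping from 5 calibration points.
rng = np.random.default_rng(0)
vecs = rng.uniform(-10, 10, (5, 2))
true_A = np.array([[30.0, 0.5], [-0.5, 30.0], [960.0, 540.0]])
screens = np.hstack([vecs, np.ones((5, 1))]) @ true_A
A = fit_calibration(vecs, screens)
gaze_pt = estimate_gaze(A, pupil_center=(320, 240), glint_center=(318, 236))
# PCCR vector (2, 4) maps to approximately (1018, 661) on a 1920x1080 screen
```

In a car, glints and pupil edges are corrupted by lighting changes and motion blur, which is why the abstract argues for an appearance-based deep model instead.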

  7. The Effectiveness of Gaze-Contingent Control in Computer Games.

    Science.gov (United States)

    Orlov, Paul A; Apraksin, Nikolay

    2015-01-01

Eye-tracking technology and gaze-contingent control in human-computer interaction have become an objective reality. This article reports on a series of eye-tracking experiments, in which we concentrated on one aspect of gaze-contingent interaction: its effectiveness compared with mouse-based control in a computer strategy game. We propose a measure for evaluating the effectiveness of interaction based on "the time of recognition" of a game unit. In this article, we use this measure to compare gaze- and mouse-contingent systems, and we present the analysis of the differences as a function of the number of game units. Our results indicate that performance of gaze-contingent interaction is typically higher than mouse manipulation in a visual searching task. When tested on 60 subjects, the results showed that the effectiveness of gaze-contingent systems was over 1.5 times higher. In addition, we found that eye behavior stays quite stable with or without mouse interaction. © The Author(s) 2015.
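The proposed measure compares the "time of recognition" of a game unit across input modalities; the reported 1.5x effectiveness figure corresponds to a ratio of mean recognition times. A minimal sketch with hypothetical per-trial times (not the study's data):

```python
from statistics import mean

def effectiveness_ratio(mouse_times, gaze_times):
    """Ratio of mean recognition times (mouse / gaze); > 1 favors gaze.

    Arguments are per-trial 'time of recognition' values in seconds.
    """
    return mean(mouse_times) / mean(gaze_times)

mouse = [1.8, 2.1, 1.5, 2.4]  # hypothetical mouse-control trials
gaze = [1.2, 1.0, 1.3, 0.7]   # hypothetical gaze-control trials
print(round(effectiveness_ratio(mouse, gaze), 2))  # 1.86
```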

  8. Gaze-based hints during child-robot gameplay

    NARCIS (Netherlands)

    Mwangi, E.; Barakova, Emilia I.; Diaz, M.L.Z.; Mallofre, A.C.; Rauterberg, G.W.M.; Kheddar, A.; Yoshida, E.; Sam Ge, S.; Suzuki, K.; Cabibihan, J.J.; Eyssel, F.; He, H.

    2017-01-01

    This paper presents a study that examines whether gaze hints provided by a robot tutor influences the behavior of children in a card matching game. In this regard, we conducted a within-subjects experiment, in which children played a card game “Memory” in the presence of a robot tutor in two

  9. Depth Compensation Model for Gaze Estimation in Sport Analysis

    DEFF Research Database (Denmark)

    Batista Narcizo, Fabricio; Hansen, Dan Witzner

    2015-01-01

is tested in a totally controlled environment with the aim of checking the influences of eye tracker parameters and ocular biometric parameters on its behavior. We also present a gaze estimation method based on epipolar geometry for binocular eye tracking setups. The depth compensation model has shown very...

  10. Quality control of geological voxel models using experts' gaze

    NARCIS (Netherlands)

    Maanen, P.P. van; Busschers, F.S.; Brouwer, A.M.; Meulen, M.J. van der; Erp, J.B.F. van

    2015-01-01

    Due to an expected increase in geological voxel model data-flow and user demands, the development of improved quality control for such models is crucial. This study explores the potential of a new type of quality control that improves the detection of errors by just using gaze behavior of 12

  11. Quality Control of Geological Voxel Models using Experts' Gaze

    NARCIS (Netherlands)

    van Maanen, Peter-Paul; Busschers, Freek S.; Brouwer, Anne-Marie; van der Meulendijk, Michiel J.; van Erp, Johannes Bernardus Fransiscus

    Due to an expected increase in geological voxel model data-flow and user demands, the development of improved quality control for such models is crucial. This study explores the potential of a new type of quality control that improves the detection of errors by just using gaze behavior of 12

  12. Experimental test of spatial updating models for monkey eye-head gaze shifts.

    Directory of Open Access Journals (Sweden)

    Tom J Van Grootel

Full Text Available How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static) or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye- and head positions rather than relative eye- and head displacements.
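The competing updating schemes can be contrasted with a toy computation. This is a hypothetical 1-D simplification (real gaze shifts here are 2-D eye-plus-head movements); it only illustrates why a retinal-only scheme mislocalizes a flash delivered mid-gaze-shift while a motor-feedback scheme does not:

```python
def second_saccade(target_retinal, gaze_at_flash, gaze_now):
    """Gaze shift needed to acquire the 2nd target under two schemes.

    target_retinal: retinal eccentricity of the 2nd flash, relative to the
        gaze direction at flash time.
    gaze_at_flash, gaze_now: gaze direction when the flash occurred and
        when the 2nd gaze shift is programmed.
    All values are hypothetical 1-D horizontal angles in degrees.
    """
    target_space = gaze_at_flash + target_retinal   # true spatial goal
    retinal_only = target_retinal                   # no updating at all
    feedback = target_space - gaze_now              # motor-feedback update
    return retinal_only, feedback

# Flash 10 deg right of gaze, delivered mid-flight through a gaze shift
# that was at 5 deg when the flash occurred and lands at 25 deg.
no_update, updated = second_saccade(10.0, 5.0, 25.0)
print(no_update, updated)  # 10.0 -10.0
```

The retinal-only command (10 deg rightward) would miss the target by the intervening 20 deg gaze displacement; the feedback-updated command (10 deg leftward) lands on it.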

  13. Creating Gaze Annotations in Head Mounted Displays

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Qvarfordt, Pernilla

    2015-01-01

To facilitate distributed communication in mobile settings, we developed GazeNote for creating and sharing gaze annotations in head mounted displays (HMDs). With gaze annotations it is possible to point out objects of interest within an image and add a verbal description. To create an annotation...

  14. Eye Gaze in Creative Sign Language

    Science.gov (United States)

    Kaneko, Michiko; Mesch, Johanna

    2013-01-01

    This article discusses the role of eye gaze in creative sign language. Because eye gaze conveys various types of linguistic and poetic information, it is an intrinsic part of sign language linguistics in general and of creative signing in particular. We discuss various functions of eye gaze in poetic signing and propose a classification of gaze…

  15. A Gaze Interactive Textual Smartwatch Interface

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Biermann, Florian; Askø Madsen, Janus

    2015-01-01

Mobile gaze interaction is challenged by inherent motor noise. We examined the gaze tracking accuracy and precision of twelve subjects wearing a gaze tracker on their wrist while standing and walking. Results suggest that it will be possible to detect whether people are glancing at the watch, but no...

  16. TabletGaze: Unconstrained Appearance-based Gaze Estimation in Mobile Tablets

    OpenAIRE

    Huang, Qiong; Veeraraghavan, Ashok; Sabharwal, Ashutosh

    2015-01-01

We study gaze estimation on tablets; our key design goal is uncalibrated gaze estimation using the front-facing camera during natural use of tablets, where the posture and method of holding the tablet is not constrained. We collected the first large unconstrained gaze dataset of tablet users, labeled the Rice TabletGaze dataset. The dataset consists of 51 subjects, each with 4 different postures and 35 gaze locations. Subjects vary in race, gender and in their need for prescription glasses, all o...

  17. Investigating the link between radiologists’ gaze, diagnostic decision, and image content

    Science.gov (United States)

    Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent; Krupinski, Elizabeth

    2013-01-01

    Objective To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods Gaze data and diagnostic decisions were collected from three breast imaging radiologists and three radiology residents who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Image analysis was performed in mammographic regions that attracted radiologists’ attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results By pooling the data from all readers, machine learning produced highly accurate predictive models linking image content, gaze, and cognition. Potential linking of those with diagnostic error was also supported to some extent. Merging readers’ gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the readers’ diagnostic errors while confirming 97.3% of their correct diagnoses. The readers’ individual perceptual and cognitive behaviors could be adequately predicted by modeling the behavior of others. However, personalized tuning was in many cases beneficial for capturing more accurately individual behavior. Conclusions There is clearly an interaction between radiologists’ gaze, diagnostic decision, and image content which can be modeled with machine learning algorithms. PMID:23788627
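The reported figures (59% of diagnostic errors identified while confirming 97.3% of correct diagnoses) are, respectively, the sensitivity and specificity of the error-prediction model. A minimal sketch of how such rates are computed, using toy labels rather than the study's data:

```python
def confusion_rates(y_true, y_pred):
    """Sensitivity and specificity for error-detection labels (1 = error)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # errors identified
    specificity = tn / (tn + fp) if tn + fp else 0.0  # correct dx confirmed
    return sensitivity, specificity

# Toy labels: 3 true diagnostic errors among 8 interpretations.
truth = [1, 1, 1, 0, 0, 0, 0, 0]
model = [1, 0, 1, 0, 0, 0, 0, 1]
sens, spec = confusion_rates(truth, model)
print(round(sens, 2), round(spec, 2))  # 0.67 0.8
```

The inputs to the actual model would be the merged feature vector of gaze metrics, cognitive opinions, and computer-extracted image features described in the abstract.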

  18. Investigating social gaze as an action-perception online performance.

    Science.gov (United States)

    Grynszpan, Ouriel; Simonin, Jérôme; Martin, Jean-Claude; Nadel, Jacqueline

    2012-01-01

    Gaze represents a major non-verbal communication channel in social interactions. In this respect, when facing another person, one's gaze should not be examined as a purely perceptive process but also as an action-perception online performance. However, little is known about processes involved in the real-time self-regulation of social gaze. The present study investigates the impact of a gaze-contingent viewing window on fixation patterns and the awareness of being the agent moving the window. In face-to-face scenarios played by a virtual human character, the task for the 18 adult participants was to interpret an equivocal sentence which could be disambiguated by examining the emotional expressions of the character speaking. The virtual character was embedded in naturalistic backgrounds to enhance realism. Eye-tracking data showed that the viewing window induced changes in gaze behavior, notably longer visual fixations. Notwithstanding, only half of the participants ascribed the window displacements to their eye movements. These participants also spent more time looking at the eyes and mouth regions of the virtual human character. The outcomes of the study highlight the dissociation between non-volitional gaze adaptation and the self-ascription of agency. Such dissociation provides support for a two-step account of the sense of agency composed of pre-noetic monitoring mechanisms and reflexive processes, linked by bottom-up and top-down processes. We comment upon these results, which illustrate the relevance of our method for studying online social cognition, in particular concerning autism spectrum disorders (ASD) where the poor pragmatic understanding of oral speech is considered linked to visual peculiarities that impede facial exploration.

  19. Role of Gaze Cues in Interpersonal Motor Coordination: Towards Higher Affiliation in Human-Robot Interaction.

    Directory of Open Access Journals (Sweden)

    Mahdi Khoramshahi

Full Text Available The ability to follow one another's gaze plays an important role in our social cognition, especially when we synchronously perform tasks together. We investigate how gaze cues can improve performance in a simple coordination task (i.e., the mirror game), whereby two players mirror each other's hand motions. In this game, each player is either a leader or a follower. To study the effect of gaze in a systematic manner, the leader's role is played by a robotic avatar. We contrast two conditions, in which the avatar does or does not provide explicit gaze cues that indicate the next location of its hand. Specifically, we investigated (a) whether participants are able to exploit these gaze cues to improve their coordination, (b) how gaze cues affect action prediction and temporal coordination, and (c) whether introducing active gaze behavior for avatars makes them more realistic and human-like (from the user point of view). Forty-three subjects participated in 8 trials of the mirror game. Each subject performed the game in the two conditions (with and without gaze cues). In this within-subject study, the order of the conditions was randomized across participants, and subjective assessment of the avatar's realism was assessed by administering a post-hoc questionnaire. When gaze cues were provided, a quantitative assessment of synchrony between participants and the avatar revealed a significant improvement in subject reaction time (RT). This confirms our hypothesis that gaze cues improve the follower's ability to predict the avatar's action. An analysis of the pattern of frequency across the two players' hand movements reveals that the gaze cues improve the overall temporal coordination across the two players. Finally, analysis of the subjective evaluations from the questionnaires reveals that, in the presence of gaze cues, participants found the avatar not only more human-like/realistic, but also easier to interact with. This work confirms that people can exploit gaze cues to

  20. Gaze-Based Controlling a Vehicle

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Witzner Hansen, Dan

... modality if gaze trackers are embedded into the head-mounted devices. The domain of gaze-based interactive applications increases dramatically as interaction is no longer constrained to 2D displays. This paper proposes a general framework for gaze-based controlling of a non-stationary robot (vehicle) as an example of a complex gaze-based task in the environment. The paper discusses the possibilities and limitations of how gaze interaction can be performed for controlling vehicles, not only using a remote gaze tracker but also in generally challenging situations where the user and robot are mobile and the movements may be governed by several degrees of freedom (e.g. flying). A case study is also introduced where the mobile gaze tracker is used for controlling a Roomba vacuum cleaner.

  1. Investigating the Link Between Radiologists Gaze, Diagnostic Decision, and Image Content

    Energy Technology Data Exchange (ETDEWEB)

    Tourassi, Georgia [ORNL; Voisin, Sophie [ORNL; Paquit, Vincent C [ORNL; Krupinski, Elizabeth [University of Arizona

    2013-01-01

Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. The radiologists' individual errors could be adequately predicted by modeling the behavior of their peers. However, personalized tuning appears beneficial in many cases to capture individual behavior more accurately. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.

  2. Differences in gaze anticipation for locomotion with and without vision

    Science.gov (United States)

    Authié, Colas N.; Hilt, Pauline M.; N'Guyen, Steve; Berthoz, Alain; Bennequin, Daniel

    2015-01-01

    Previous experimental studies have shown a spontaneous anticipation of locomotor trajectory by the head and gaze direction during human locomotion. This anticipatory behavior could serve several functions: an optimal selection of visual information, for instance through landmarks and optic flow, as well as trajectory planning and motor control. This would imply that anticipation remains in darkness but with different characteristics. We asked 10 participants to walk along two predefined complex trajectories (limaçon and figure eight) without any cue on the trajectory to follow. Two visual conditions were used: (i) in light and (ii) in complete darkness with eyes open. The whole body kinematics were recorded by motion capture, along with the participant's right eye movements. We showed that in darkness and in light, horizontal gaze anticipates the orientation of the head which itself anticipates the trajectory direction. However, the horizontal angular anticipation decreases by a half in darkness for both gaze and head. In both visual conditions we observed an eye nystagmus with similar properties (frequency and amplitude). The main difference comes from the fact that in light, there is a shift of the orientations of the eye nystagmus and the head in the direction of the trajectory. These results suggest that a fundamental function of gaze is to represent self motion, stabilize the perception of space during locomotion, and to simulate the future trajectory, regardless of the vision condition. PMID:26106313

  3. Dysfunctional gaze processing in bipolar disorder

    Directory of Open Access Journals (Sweden)

    Cristina Berchio

    2017-01-01

    The present study provides neurophysiological evidence for abnormal gaze processing in BP and suggests dysfunctional processing of direct eye contact as a prominent characteristic of bipolar disorder.

  4. Culture and Listeners' Gaze Responses to Stuttering

    Science.gov (United States)

    Zhang, Jianliang; Kalinowski, Joseph

    2012-01-01

    Background: It is frequently observed that listeners demonstrate gaze aversion to stuttering. This response may have profound social/communicative implications for both fluent and stuttering individuals. However, there is a lack of empirical examination of listeners' eye gaze responses to stuttering, and it is unclear whether cultural background…

  5. Adaptive gaze control for object detection

    NARCIS (Netherlands)

    De Croon, G.C.H.E.; Postma, E.O.; Van den Herik, H.J.

    2011-01-01

    We propose a novel gaze-control model for detecting objects in images. The model, named act-detect, uses the information from local image samples in order to shift its gaze towards object locations. The model constitutes two main contributions. The first contribution is that the model’s setup makes

  6. Reading the mind from eye gaze.

    NARCIS (Netherlands)

    Christoffels, I.; Young, A.W.; Owen, A.M.; Scott, S.K.; Keane, J.; Lawrence, A.D.

    2002-01-01

    S. Baron-Cohen (1997) has suggested that the interpretation of gaze plays an important role in a normal functioning theory of mind (ToM) system. Consistent with this suggestion, functional imaging research has shown that both ToM tasks and eye gaze processing engage a similar region of the posterior

  7. Gaze Cueing by Pareidolia Faces

    OpenAIRE

    Kohske Takahashi; Katsumi Watanabe

    2013-01-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cuei...

  8. Brief Report: High and Low Level Initiations of Joint Attention, and Response to Joint Attention--Differential Relationships with Language and Imitation

    Science.gov (United States)

    Pickard, Katherine E.; Ingersoll, Brooke R.

    2015-01-01

    Frequency of high-level (showing/pointing) and low-level (coordinated gaze shifts) behaviors on the Early Social Communication Scales are often used as a measure of joint attention initiations (IJA). This study examined the degree to which these skills and response to joint attention (RJA; e.g. gaze following) were differentially related to…

  9. Dynamic modeling of patient and physician eye gaze to understand the effects of electronic health records on doctor-patient communication and attention.

    Science.gov (United States)

    Montague, Enid; Asan, Onur

    2014-03-01

The aim of this study was to examine eye gaze patterns between patients and physicians while electronic health records were used to support patient care. Eye gaze provides an indication of physician attention to patient, patient/physician interaction, and physician behaviors such as searching for information and documenting information. A field study was conducted where 100 patient visits were observed and video recorded in a primary care clinic. Videos were then coded for gaze behaviors where patients' and physicians' gaze at each other and artifacts such as electronic health records were coded using a pre-established objective coding scheme. Gaze data were then analyzed using lag sequential methods. Results showed that there are several eye gaze patterns significantly dependent on each other. All doctor-initiated gaze patterns were followed by patient gaze patterns. Some patient-initiated gaze patterns were also significantly followed by doctor gaze patterns, unlike the findings in previous studies. Health information technology appears to contribute to some of the new significant patterns that have emerged. Differences were also found in gaze patterns related to technology that differ from patterns identified in studies with paper charts. Several sequences related to patient-doctor-technology were also significant. Electronic health records affect the patient-physician eye contact dynamic differently than paper charts. This study identified several patterns of patient-physician interaction with electronic health record systems. Consistent with previous studies, physician-initiated gaze is an important driver of the interactions between patient and physician and patient and technology. Published by Elsevier Ireland Ltd.

  10. Gazes

    DEFF Research Database (Denmark)

    Khawaja, Iram

    2015-01-01

    of passing. The analysis of the young Muslim men and women’s narratives points towards the particular embodied, intersectional and local possibilities for becoming and being visible as a legitimate Muslim subject in a society fraught with stereotypical and often negative images and discourses on Islam...

  11. Mobile gaze input system for pervasive interaction

    DEFF Research Database (Denmark)

    2017-01-01

feedback to the user in response to the received command input. The unit provides feedback to the user on how to position the mobile unit in front of his eyes. The gaze tracking unit interacts with one or more controlled devices via wireless or wired communications. Example devices include a lock, a thermostat, a light or a TV. The connection between the gaze tracking unit may be temporary or longer-lasting. The gaze tracking unit may detect features of the eye that provide information about the identity of the user.

  12. Wrist-worn pervasive gaze interaction

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Lund, Haakon; Biermann, Florian

    2016-01-01

    This paper addresses gaze interaction for smart home control, conducted from a wrist-worn unit. First we asked ten people to enact the gaze movements they would propose for e.g. opening a door or adjusting the room temperature. On basis of their suggestions we built and tested different versions...... selection. Their subjective evaluations were positive with regard to the speed of the interaction. We conclude that gaze gesture input seems feasible for fast and brief remote control of smart home technology provided that robustness of tracking is improved....

  13. Real-time gaze estimation via pupil center tracking

    Directory of Open Access Journals (Sweden)

    Cazzato Dario

    2018-02-01

Full Text Available Automatic gaze estimation not based on commercial and expensive eye tracking hardware solutions can enable several applications in the fields of human computer interaction (HCI and human behavior analysis. It is therefore not surprising that several related techniques and methods have been investigated in recent years. However, very few camera-based systems proposed in the literature are both real-time and robust. In this work, we propose a real-time user-calibration-free gaze estimation system that does not need person-dependent calibration, can deal with illumination changes and head pose variations, and can work with a wide range of distances from the camera. Our solution is based on a 3-D appearance-based method that processes the images from a built-in laptop camera. Real-time performance is obtained by combining head pose information with geometrical eye features to train a machine learning algorithm. Our method has been validated on a data set of images of users in natural environments, and shows promising results. The possibility of a real-time implementation, combined with the good quality of gaze tracking, makes this system suitable for various HCI applications.

  14. Gaze Cueing by Pareidolia Faces

    Directory of Open Access Journals (Sweden)

    Kohske Takahashi

    2013-12-01

Full Text Available Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process.

  15. Gaze cueing by pareidolia faces.

    Science.gov (United States)

    Takahashi, Kohske; Watanabe, Katsumi

    2013-01-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process.
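The gaze-cueing effect in such paradigms is conventionally quantified as the mean reaction-time difference between invalidly and validly cued targets. A minimal sketch with hypothetical RT values (not the study's data):

```python
from statistics import mean

def cueing_effect(rt_valid, rt_invalid):
    """Gaze-cueing effect in ms: mean RT for invalid minus valid cues.

    A positive value indicates attention shifted toward the gazed-at
    location. RTs here are hypothetical illustrative values.
    """
    return mean(rt_invalid) - mean(rt_valid)

valid = [310, 295, 320, 305]    # target appeared where the face gazed
invalid = [345, 360, 350, 341]  # target appeared opposite the gaze
print(cueing_effect(valid, invalid))  # 41.5
```

Comparing this magnitude across pareidolia faces, cartoon faces, and non-face percepts is what supports the abstract's conclusion about a face-specific attentional process.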

  16. Automatic and strategic measures as predictors of mirror gazing among individuals with body dysmorphic disorder symptoms.

    Science.gov (United States)

    Clerkin, Elise M; Teachman, Bethany A

    2009-08-01

    The current study tests cognitive-behavioral models of body dysmorphic disorder (BDD) by examining the relationship between cognitive biases and correlates of mirror gazing. To provide a more comprehensive picture, we investigated both relatively strategic (i.e., available for conscious introspection) and automatic (i.e., outside conscious control) measures of cognitive biases in a sample with either high (n = 32) or low (n = 31) BDD symptoms. Specifically, we examined the extent that (1) explicit interpretations tied to appearance, as well as (2) automatic associations and (3) strategic evaluations of the importance of attractiveness predict anxiety and avoidance associated with mirror gazing. Results indicated that interpretations tied to appearance uniquely predicted self-reported desire to avoid, whereas strategic evaluations of appearance uniquely predicted peak anxiety associated with mirror gazing, and automatic appearance associations uniquely predicted behavioral avoidance. These results offer considerable support for cognitive models of BDD, and suggest a dissociation between automatic and strategic measures.

  17. Automatic and Strategic Measures as Predictors of Mirror Gazing Among Individuals with Body Dysmorphic Disorder Symptoms

    Science.gov (United States)

    Clerkin, Elise M.; Teachman, Bethany A.

    2011-01-01

    The current study tests cognitive-behavioral models of body dysmorphic disorder (BDD) by examining the relationship between cognitive biases and correlates of mirror gazing. To provide a more comprehensive picture, we investigated both relatively strategic (i.e., available for conscious introspection) and automatic (i.e., outside conscious control) measures of cognitive biases in a sample with either high (n=32) or low (n=31) BDD symptoms. Specifically, we examined the extent that 1) explicit interpretations tied to appearance, as well as 2) automatic associations and 3) strategic evaluations of the importance of attractiveness predict anxiety and avoidance associated with mirror gazing. Results indicated that interpretations tied to appearance uniquely predicted self-reported desire to avoid, while strategic evaluations of appearance uniquely predicted peak anxiety associated with mirror gazing, and automatic appearance associations uniquely predicted behavioral avoidance. These results offer considerable support for cognitive models of BDD, and suggest a dissociation between automatic and strategic measures. PMID:19684496

  18. Latvijas Gaze buyback likely to flop

    Index Scriptorium Estoniae

    2003-01-01

    Itera, which owns a quarter of the Latvian gas company Latvijas Gaze, intends to complete the sale of nine percent of the Latvian firm's shares to Gazprom in the near future. Gazprom currently controls 25 percent of Latvijas Gaze shares, Ruhrgas 28.66 percent, and E.ON Energie AG 18.06 percent.

  19. Attention to gaze and emotion in schizophrenia.

    Science.gov (United States)

    Schwartz, Barbara L; Vaidya, Chandan J; Howard, James H; Deutsch, Stephen I

    2010-11-01

    Individuals with schizophrenia have difficulty interpreting social and emotional cues such as facial expression, gaze direction, body position, and voice intonation. Nonverbal cues are powerful social signals but are often processed implicitly, outside the focus of attention. The aim of this research was to assess implicit processing of social cues in individuals with schizophrenia. Patients with schizophrenia or schizoaffective disorder and matched controls performed a primary task of word classification with social cues in the background. Participants were asked to classify target words (LEFT/RIGHT) by pressing a key that corresponded to the word, in the context of facial expressions with eye gaze averted to the left or right. Although facial expression and gaze direction were irrelevant to the task, these facial cues influenced word classification performance. Participants were slower to classify target words (e.g., LEFT) that were incongruent to gaze direction (e.g., eyes averted to the right) compared to target words (e.g., LEFT) that were congruent to gaze direction (e.g., eyes averted to the left), but this only occurred for expressions of fear. This pattern did not differ for patients and controls. The results showed that threat-related signals capture the attention of individuals with schizophrenia. These data suggest that implicit processing of eye gaze and fearful expressions is intact in schizophrenia. (c) 2010 APA, all rights reserved

  20. Using Variable Dwell Time to Accelerate Gaze-based Web Browsing with Two-step Selection

    OpenAIRE

    Chen, Zhaokang; Shi, Bertram E.

    2017-01-01

    In order to avoid the "Midas Touch" problem, gaze-based interfaces for selection often introduce a dwell time: a fixed amount of time the user must fixate upon an object before it is selected. Past interfaces have used a uniform dwell time across all objects. Here, we propose an algorithm for adjusting the dwell times of different objects based on the inferred probability that the user intends to select them. In particular, we introduce a probabilistic model of natural gaze behavior while sur...
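The variable-dwell-time idea in this record can be illustrated with a minimal sketch: dwell time shrinks as the inferred selection probability grows, so likely targets are selected quickly while unlikely ones keep a long, Midas-touch-resistant dwell. The interpolation scheme and timing constants below are assumptions for illustration, not the authors' probabilistic model of gaze behavior.

```python
def dwell_time(p_select, t_min=0.15, t_max=1.0):
    """Interpolate dwell time (seconds) between t_max (p=0) and t_min (p=1).

    p_select is the inferred probability that the user intends to select
    this object; values outside [0, 1] are clipped.
    """
    p = min(max(p_select, 0.0), 1.0)
    return t_max - p * (t_max - t_min)

# A likely link gets a short dwell; an unlikely one keeps the long default.
likely = dwell_time(0.9)
unlikely = dwell_time(0.05)
```

Any monotone decreasing mapping from probability to dwell time would serve; the linear form is just the simplest choice that keeps a hard floor (`t_min`) against accidental selections.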

  1. Joint Attention in Autism: Teaching Smiling Coordinated with Gaze to Respond to Joint Attention Bids

    Science.gov (United States)

    Krstovska-Guerrero, Ivana; Jones, Emily A.

    2013-01-01

    Children with autism demonstrate early deficits in joint attention and expressions of affect. Interventions to teach joint attention have addressed gaze behavior, gestures, and vocalizations, but have not specifically taught an expression of positive affect such as smiling that tends to occur during joint attention interactions. Intervention was…

  2. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    Science.gov (United States)

    Wilson, Amanda H.; Alsius, Agnès; Parè, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  3. The gap between clinical gaze and systematic assessment of movement disorders after stroke

    NARCIS (Netherlands)

    Van der Krogt, H.J.M.; Meskers, C.G.M.; De Groot, J.H.; Klomp, A.; Arendzen, J.H.

    2012-01-01

    Background: Movement disorders after stroke are still captured by clinical gaze and translated to ordinal scores of low resolution. There is a clear need for objective quantification, with outcome measures related to pathophysiological background. Neural and non-neural contributors to joint behavior

  4. Gaze-informed, task-situated representation of space in primate hippocampus during virtual navigation

    Science.gov (United States)

    Wirth, Sylvia; Baraduc, Pierre; Planté, Aurélie; Pinède, Serge; Duhamel, Jean-René

    2017-01-01

    To elucidate how gaze informs the construction of mental space during wayfinding in visual species like primates, we jointly examined navigation behavior, visual exploration, and hippocampal activity as macaque monkeys searched a virtual reality maze for a reward. Cells sensitive to place also responded to one or more variables like head direction, point of gaze, or task context. Many cells fired at the sight (and in anticipation) of a single landmark in a viewpoint- or task-dependent manner, simultaneously encoding the animal’s logical situation within a set of actions leading to the goal. Overall, hippocampal activity was best fit by a fine-grained state space comprising current position, view, and action contexts. Our findings indicate that counterparts of rodent place cells in primates embody multidimensional, task-situated knowledge pertaining to the target of gaze, therein supporting self-awareness in the construction of space. PMID:28241007

  5. Visual Foraging With Fingers and Eye Gaze

    Directory of Open Access Journals (Sweden)

    Ómar I. Jóhannesson

    2016-03-01

    Full Text Available A popular model of the function of selective visual attention involves search where a single target is to be found among distractors. For many scenarios, a more realistic model involves search for multiple targets of various types, since natural tasks typically do not involve a single target. Here we present results from a novel multiple-target foraging paradigm. We compare finger foraging, where observers cancel a set of predesignated targets by tapping them, to gaze foraging, where observers cancel items by fixating them for 100 ms. During finger foraging, for most observers, there was a large difference between foraging based on a single feature, where observers switch easily between target types, and foraging based on a conjunction of features, where observers tended to stick to one target type. The pattern was notably different during gaze foraging, where these condition differences were smaller. Two conclusions follow: (a) The fact that a sizeable number of observers (in particular during gaze foraging) had little trouble switching between different target types raises challenges for many prominent theoretical accounts of visual attention and working memory. (b) While caveats must be noted for the comparison of gaze and finger foraging, the results suggest that selection mechanisms for gaze and pointing have different operational constraints.

  6. Predictive Gaze Cues and Personality Judgments: Should Eye Trust You?

    OpenAIRE

    Bayliss, Andrew P.; Tipper, Steven P.

    2006-01-01

    Although following another person's gaze is essential in fluent social interactions, the reflexive nature of this gaze-cuing effect means that gaze can be used to deceive. In a gaze-cuing procedure, participants were presented with several faces that looked to the left or right. Some faces always looked to the target (predictive-valid), some never looked to the target (predictive-invalid), and others looked toward and away from the target in equal proportions (nonpredictive). The standard gaz...

  7. The Gaze as constituent and annihilator

    Directory of Open Access Journals (Sweden)

    Mats Carlsson

    2012-11-01

    Full Text Available This article aims to join the contemporary effort to promote a psychoanalytic renaissance within cinema studies, post Post-Theory. In trying to shake off the burden of the 1970s film theory's distortion of the Lacanian Gaze, rejuvenating it with the strength of the Real and fusing it with Freudian thoughts on the uncanny, hopefully this new dawn can be reached. I aspire to conceptualize the Gaze in a straightforward manner. This in order to obtain an instrument for the identification of certain strategies within the filmic realm aimed at depicting the subjective destabilizing of diegetic characters as well as thwarting techniques directed at the spectorial subject. In setting this capricious Gaze against the uncanny phenomena described by Freud, we find that these two ideas easily intertwine into a draft description of a powerful, potentially reconstitutive force worth being highlighted.

  8. Exploiting Three-Dimensional Gaze Tracking for Action Recognition During Bimanual Manipulation to Enhance Human–Robot Collaboration

    Directory of Open Access Journals (Sweden)

    Alireza Haji Fathaliyan

    2018-04-01

    Full Text Available Human–robot collaboration could be advanced by facilitating the intuitive, gaze-based control of robots, and enabling robots to recognize human actions, infer human intent, and plan actions that support human goals. Traditionally, gaze tracking approaches to action recognition have relied upon computer vision-based analyses of two-dimensional egocentric camera videos. The objective of this study was to identify useful features that can be extracted from three-dimensional (3D) gaze behavior and used as inputs to machine learning algorithms for human action recognition. We investigated human gaze behavior and gaze–object interactions in 3D during the performance of a bimanual, instrumental activity of daily living: the preparation of a powdered drink. A marker-based motion capture system and binocular eye tracker were used to reconstruct 3D gaze vectors and their intersection with 3D point clouds of objects being manipulated. Statistical analyses of gaze fixation duration and saccade size suggested that some actions (pouring and stirring) may require more visual attention than other actions (reach, pick up, set down, and move). 3D gaze saliency maps, generated with high spatial resolution for six subtasks, appeared to encode action-relevant information. The “gaze object sequence” was used to capture information about the identity of objects in concert with the temporal sequence in which the objects were visually regarded. Dynamic time warping barycentric averaging was used to create a population-based set of characteristic gaze object sequences that accounted for intra- and inter-subject variability. The gaze object sequence was used to demonstrate the feasibility of a simple action recognition algorithm that utilized a dynamic time warping Euclidean distance metric. Averaged over the six subtasks, the action recognition algorithm yielded an accuracy of 96.4%, precision of 89.5%, and recall of 89.2%. This level of performance suggests that
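The "gaze object sequence" classification step can be sketched with a plain dynamic-time-warping distance and nearest-template matching. The object labels, the two templates, and the 0/1 local cost are illustrative assumptions; the study itself built templates via DTW barycentric averaging over population data and used a Euclidean metric.

```python
def dtw(a, b):
    """Dynamic time warping distance between two label sequences (0/1 local cost)."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if a[i - 1] == b[j - 1] else 1.0
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Hypothetical per-action template gaze object sequences
templates = {
    "pour": ["pitcher", "glass", "pitcher", "glass"],
    "stir": ["spoon", "glass", "glass", "spoon"],
}

def recognize(sequence):
    """Return the action whose template is nearest under DTW."""
    return min(templates, key=lambda action: dtw(sequence, templates[action]))

observed = ["pitcher", "pitcher", "glass", "glass"]
```

Because DTW tolerates local stretching, the observed sequence still matches the "pour" template despite the repeated fixations.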

  9. Altered attentional and perceptual processes as indexed by N170 during gaze perception in schizophrenia: Relationship with perceived threat and paranoid delusions.

    Science.gov (United States)

    Tso, Ivy F; Calwas, Anita M; Chun, Jinsoo; Mueller, Savanna A; Taylor, Stephan F; Deldin, Patricia J

    2015-08-01

    Using gaze information to orient attention and guide behavior is critical to social adaptation. Previous studies have suggested that abnormal gaze perception in schizophrenia (SCZ) may originate in abnormal early attentional and perceptual processes and may be related to paranoid symptoms. Using event-related brain potentials (ERPs), this study investigated altered early attentional and perceptual processes during gaze perception and their relationship to paranoid delusions in SCZ. Twenty-eight individuals with SCZ or schizoaffective disorder and 32 demographically matched healthy controls (HCs) completed a gaze-discrimination task with face stimuli varying in gaze direction (direct, averted), head orientation (forward, deviated), and emotion (neutral, fearful). ERPs were recorded during the task. Participants rated experienced threat from each face after the task. Participants with SCZ were as accurate as, though slower than, HCs on the task. Participants with SCZ displayed enlarged N170 responses over the left hemisphere to averted gaze presented in fearful relative to neutral faces, indicating a heightened encoding sensitivity to faces signaling external threat. This abnormality was correlated with increased perceived threat and paranoid delusions. Participants with SCZ also showed a reduction of N170 modulation by head orientation (normally increased amplitude to deviated faces relative to forward faces), suggesting less integration of contextual cues of head orientation in gaze perception. The psychophysiological deviations observed during gaze discrimination in SCZ underscore the role of early attentional and perceptual abnormalities in social information processing and paranoid symptoms of SCZ. (c) 2015 APA, all rights reserved.

  10. Eye gaze in intelligent user interfaces gaze-based analyses, models and applications

    CERN Document Server

    Nakano, Yukiko I; Bader, Thomas

    2013-01-01

    Remarkable progress in eye-tracking technologies opened the way to design novel attention-based intelligent user interfaces, and highlighted the importance of better understanding of eye-gaze in human-computer interaction and human-human communication. For instance, a user's focus of attention is useful in interpreting the user's intentions, their understanding of the conversation, and their attitude towards the conversation. In human face-to-face communication, eye gaze plays an important role in floor management, grounding, and engagement in conversation.Eye Gaze in Intelligent User Interfac

  11. Estimating the gaze of a virtuality human.

    Science.gov (United States)

    Roberts, David J; Rae, John; Duckworth, Tobias W; Moore, Carl M; Aspin, Rob

    2013-04-01

    The aim of our experiment is to determine if eye-gaze can be estimated from a virtuality human: to within the accuracies that underpin social interaction; and reliably across gaze poses and camera arrangements likely in everyday settings. The scene is set by explaining why Immersive Virtuality Telepresence has the potential to meet the grand challenge of faithfully communicating both the appearance and the focus of attention of a remote human participant within a shared 3D computer-supported context. Within the experiment n=22 participants rotated static 3D virtuality humans, reconstructed from surround images, until they felt most looked at. The dependent variable was absolute angular error, which was compared to that underpinning social gaze behaviour in the natural world. Independent variables were 1) relative orientations of eye, head and body of captured subject; and 2) subset of cameras used to texture the form. Analysis looked for statistical and practical significance and qualitative corroborating evidence. The analysed results tell us much about the importance and detail of the relationship between gaze pose, method of video based reconstruction, and camera arrangement. They tell us that virtuality can reproduce gaze to an accuracy useful in social interaction, but with the adopted method of Video Based Reconstruction, this is highly dependent on combination of gaze pose and camera arrangement. This suggests changes in the VBR approach in order to allow more flexible camera arrangements. The work is of interest to those wanting to support expressive meetings that are both socially and spatially situated, and particular those using or building Immersive Virtuality Telepresence to accomplish this. It is also of relevance to the use of virtuality humans in applications ranging from the study of human interactions to gaming and the crossing of the stage line in films and TV.

  12. Design gaze simulation for people with visual disability

    NARCIS (Netherlands)

    Qiu, S.

    2017-01-01

    In face-to-face communication, eye gaze is integral to a conversation to supplement verbal language. Sighted people often use eye gaze to convey nonverbal information in social interactions, which a blind conversation partner cannot access or react to. My doctoral research is to design gaze simulation

  13. Towards Wearable Gaze Supported Augmented Cognition

    DEFF Research Database (Denmark)

    Toshiaki Kurauchi, Andrew; Hitoshi Morimoto, Carlos; Mardanbeigi, Diako

    Augmented cognition applications must deal with the problem of how to exhibit information in an orderly, understandable, and timely fashion. Though context has been suggested to control the kind, amount, and timing of the information delivered, we argue that gaze can be a fundamental tool...... by the wearable computing community to develop a gaze supported augmented cognition application with three interaction modes. The application provides information of the person being looked at. The continuous mode updates information every time the user looks at a different face. The key activated discrete mode...

  14. Spatiotemporal characteristics of gaze of children with autism spectrum disorders while looking at classroom scenes.

    Directory of Open Access Journals (Sweden)

    Takahiro Higuchi

    Full Text Available Children with autism spectrum disorders (ASD) who have neurodevelopmental impairments in social communication often refuse to go to school because of difficulties in learning in class. The exact cause of maladaptation to school in such children is unknown. We hypothesized that these children have difficulty in paying attention to objects at which teachers are pointing. We performed gaze behavior analysis of children with ASD to understand their difficulties in the classroom. The subjects were 26 children with ASD (19 boys and 7 girls; mean age, 8.6 years) and 27 age-matched children with typical development (TD) (14 boys and 13 girls; mean age, 8.2 years). We measured eye movements of the children while they performed free viewing of two movies depicting actual classes: a Japanese class in which a teacher pointed at cartoon characters and an arithmetic class in which the teacher pointed at geometric figures. In the analysis, we defined the regions of interest (ROIs) as the teacher's face and finger, the cartoon characters and geometric figures at which the teacher pointed, and the classroom wall that contained no objects. We then compared total gaze time for each ROI between the children with ASD and TD by two-way ANOVA. Children with ASD spent less gaze time on the cartoon characters pointed at by the teacher; they spent more gaze time on the wall in both classroom scenes. We could differentiate children with ASD from those with TD almost perfectly by the proportion of total gaze time that children with ASD spent looking at the wall. These results suggest that children with ASD do not follow the teacher's instructions in class and persist in gazing at inappropriate visual areas such as walls. Thus, they may have difficulties in understanding content in class, leading to maladaptation to school.

  15. How do we update faces? Effects of gaze direction and facial expressions on working memory updating

    Directory of Open Access Journals (Sweden)

    Caterina eArtuso

    2012-09-01

    Full Text Available The aim of the study was to investigate how the biological binding between different facial dimensions, and their social and communicative relevance, may impact updating processes in working memory (WM). We focused on WM updating because it plays a key role in ongoing processing. Gaze direction and facial expression are crucial and changeable components of face processing. Direct gaze enhances the processing of approach-oriented facial emotional expressions (e.g., joy), while averted gaze enhances the processing of avoidance-oriented facial emotional expressions (e.g., fear). Thus, the way in which these two facial dimensions are combined communicates to the observer important behavioral and social information. Updating of these two facial dimensions and their bindings has not been investigated before, despite the fact that they provide a piece of social information essential for building and maintaining an internal ongoing representation of our social environment. In Experiment 1 we created a task in which the binding between gaze direction and facial expression was manipulated: high binding conditions (e.g., joy-direct gaze) were compared to low binding conditions (e.g., joy-averted gaze). Participants had to study and update continuously a number of faces, displaying different bindings between the two dimensions. In Experiment 2 we tested whether updating was affected by the social and communicative value of the facial dimension binding; to this end, we manipulated bindings between eye and hair color, two less communicative facial dimensions. Two new results emerged. First, faster response times were found in updating combinations of facial dimensions highly bound together. Second, our data showed that the ease of the ongoing updating processing varied depending on the communicative meaning of the binding that had to be updated. The results are discussed with reference to the role of WM updating in social cognition and appraisal processes.

  16. iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker

    Science.gov (United States)

    Mayberry, Addison; Hu, Pan; Marlin, Benjamin; Salthouse, Christopher; Ganesan, Deepak

    2015-01-01

    Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees. PMID:26539565
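The sparse-pixel idea in this record (minimize gaze prediction error while simultaneously minimizing the number of pixels used) can be sketched with an L1-regularized linear model trained by proximal gradient descent (ISTA) on synthetic data. This is an assumption-laden stand-in: the paper uses a multi-layer neural network with a sparsity-inducing regularizer, and the pixel count and data here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_samples = 64, 400

# Synthetic "eye images": the gaze angle actually depends on only 3 pixels.
X = rng.normal(size=(n_samples, n_pixels))
w_true = np.zeros(n_pixels)
w_true[[5, 20, 40]] = [1.0, -0.8, 0.5]
y = X @ w_true

# ISTA: gradient step on the squared error, then soft-threshold (the L1 proximal
# operator), which drives most per-pixel weights exactly to zero.
w = np.zeros(n_pixels)
step = 1.0 / (np.linalg.norm(X, 2) ** 2)  # 1 / Lipschitz constant of the gradient
lam = 0.05 * n_samples                    # L1 penalty weight (illustrative)
for _ in range(500):
    grad = X.T @ (X @ w - y)
    w = w - step * grad
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)

# Only these pixels would need to be sensed at run time.
active = np.flatnonzero(np.abs(w) > 1e-3)
```

The recovered active set concentrates on the truly informative pixels, which is the property that lets the hardware read a small random-access pixel subset instead of full frames.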

  17. iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker.

    Science.gov (United States)

    Mayberry, Addison; Hu, Pan; Marlin, Benjamin; Salthouse, Christopher; Ganesan, Deepak

    2014-06-01

    Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees.

  18. Spatiotemporal characteristics of gaze of children with autism spectrum disorders while looking at classroom scenes.

    Science.gov (United States)

    Higuchi, Takahiro; Ishizaki, Yuko; Noritake, Atsushi; Yanagimoto, Yoshitoki; Kobayashi, Hodaka; Nakamura, Kae; Kaneko, Kazunari

    2017-01-01

    Children with autism spectrum disorders (ASD) who have neurodevelopmental impairments in social communication often refuse to go to school because of difficulties in learning in class. The exact cause of maladaptation to school in such children is unknown. We hypothesized that these children have difficulty in paying attention to objects at which teachers are pointing. We performed gaze behavior analysis of children with ASD to understand their difficulties in the classroom. The subjects were 26 children with ASD (19 boys and 7 girls; mean age, 8.6 years) and 27 age-matched children with typical development (TD) (14 boys and 13 girls; mean age, 8.2 years). We measured eye movements of the children while they performed free viewing of two movies depicting actual classes: a Japanese class in which a teacher pointed at cartoon characters and an arithmetic class in which the teacher pointed at geometric figures. In the analysis, we defined the regions of interest (ROIs) as the teacher's face and finger, the cartoon characters and geometric figures at which the teacher pointed, and the classroom wall that contained no objects. We then compared total gaze time for each ROI between the children with ASD and TD by two-way ANOVA. Children with ASD spent less gaze time on the cartoon characters pointed at by the teacher; they spent more gaze time on the wall in both classroom scenes. We could differentiate children with ASD from those with TD almost perfectly by the proportion of total gaze time that children with ASD spent looking at the wall. These results suggest that children with ASD do not follow the teacher's instructions in class and persist in gazing at inappropriate visual areas such as walls. Thus, they may have difficulties in understanding content in class, leading to maladaptation to school.
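The ROI analysis above (total gaze time per region, then near-perfect group separation by the proportion spent on the wall) can be sketched in a few lines. The fixation data, ROI names, and threshold below are invented for illustration; the study compared gaze times with a two-way ANOVA rather than a fixed cutoff.

```python
def roi_proportions(fixations):
    """fixations: list of (roi_name, duration_ms) -> {roi: proportion of total}."""
    total = sum(d for _, d in fixations)
    out = {}
    for roi, d in fixations:
        out[roi] = out.get(roi, 0.0) + d / total
    return out

# Hypothetical per-child fixation logs from the classroom movies
asd_child = [("face", 800), ("character", 400), ("wall", 1800)]
td_child = [("face", 1200), ("character", 1400), ("wall", 400)]

WALL_THRESHOLD = 0.3  # illustrative cutoff, not the study's value

def classify(fixations):
    return "ASD-like" if roi_proportions(fixations).get("wall", 0.0) > WALL_THRESHOLD else "TD-like"
```

A single-feature threshold like this is only as good as the feature; the study's point is precisely that wall-gazing proportion is an unusually discriminative feature in this setting.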

  19. How do we update faces? Effects of gaze direction and facial expressions on working memory updating.

    Science.gov (United States)

    Artuso, Caterina; Palladino, Paola; Ricciardelli, Paola

    2012-01-01

    The aim of the study was to investigate how the biological binding between different facial dimensions, and their social and communicative relevance, may impact updating processes in working memory (WM). We focused on WM updating because it plays a key role in ongoing processing. Gaze direction and facial expression are crucial and changeable components of face processing. Direct gaze enhances the processing of approach-oriented facial emotional expressions (e.g., joy), while averted gaze enhances the processing of avoidance-oriented facial emotional expressions (e.g., fear). Thus, the way in which these two facial dimensions are combined communicates to the observer important behavioral and social information. Updating of these two facial dimensions and their bindings has not been investigated before, despite the fact that they provide a piece of social information essential for building and maintaining an internal ongoing representation of our social environment. In Experiment 1 we created a task in which the binding between gaze direction and facial expression was manipulated: high binding conditions (e.g., joy-direct gaze) were compared to low binding conditions (e.g., joy-averted gaze). Participants had to study and update continuously a number of faces, displaying different bindings between the two dimensions. In Experiment 2 we tested whether updating was affected by the social and communicative value of the facial dimension binding; to this end, we manipulated bindings between eye and hair color, two less communicative facial dimensions. Two new results emerged. First, faster response times were found in updating combinations of facial dimensions highly bound together. Second, our data showed that the ease of the ongoing updating processing varied depending on the communicative meaning of the binding that had to be updated. The results are discussed with reference to the role of WM updating in social cognition and appraisal processes.

  20. Off-the-Shelf Gaze Interaction

    DEFF Research Database (Denmark)

    San Agustin, Javier

    People with severe motor-skill disabilities are often unable to use standard input devices such as a mouse or a keyboard to control a computer and are therefore in strong need of alternative input devices. Gaze tracking offers them the possibility to use the movements of their eyes to int...

  1. The Use of Gaze to Control Drones

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Alapetite, Alexandre; MacKenzie, I. Scott

    2014-01-01

    This paper presents an experimental investigation of gaze-based control modes for unmanned aerial vehicles (UAVs or “drones”). Ten participants performed a simple flying task. We gathered empirical measures, including task completion time, and examined the user experience for difficulty, reliabil...

  2. Between Gazes: Feminist, Queer, and 'Other' Films

    DEFF Research Database (Denmark)

    Elias, Camelia

    In this book Camelia Elias introduces key terms in feminist, queer, and postcolonial/diaspora film. Taking her point of departure in the question, "what do you want from me?" she detours through Lacanian theory of the gaze and reframes questions of subjectivity and representation in an entertaining...

  3. Gaze interaction with textual user interface

    DEFF Research Database (Denmark)

    Paulin Hansen, John; Lund, Haakon; Madsen, Janus Askø

    2015-01-01

    ” option for text navigation. People readily understood how to execute RSVP command prompts and a majority of them preferred gaze input to a pen pointer. We present the concept of a smartwatch that can track eye movements and mediate command options whenever in proximity of intelligent devices...

  4. Emotion Unchained: Facial Expression Modulates Gaze Cueing under Cognitive Load.

    Science.gov (United States)

    Pecchinenda, Anna; Petrucci, Manuel

    2016-01-01

    Direction of eye gaze cues spatial attention, and typically this cueing effect is not modulated by the expression of a face unless top-down processes are explicitly or implicitly involved. To investigate the role of cognitive control on gaze cueing by emotional faces, participants performed a gaze cueing task with happy, angry, or neutral faces under high (i.e., counting backward by 7) or low cognitive load (i.e., counting forward by 2). Results show that high cognitive load enhances gaze cueing effects for angry facial expressions. In addition, cognitive load reduces gaze cueing for neutral faces, whereas happy facial expressions and gaze affected object preferences regardless of load. This evidence clearly indicates a differential role of cognitive control in processing gaze direction and facial expression, suggesting that under typical conditions, when we shift attention based on social cues from another person, cognitive control processes are used to reduce interference from emotional information.

  5. ANALYSIS OF THE GAZE BEHAVIOUR OF THE WORKER ON THE CARBURETOR ASSEMBLY TASK

    Directory of Open Access Journals (Sweden)

    Novie Susanto

    2015-06-01

    Full Text Available This study presents an analysis of the area of interest (AOI) and the gaze behavior of a worker during an assembly task. It investigates human behavior in detail with an eye-tracking system during assembly of a product made from LEGO bricks and an actual manufactured product, a carburetor. Heat-map data derived from the videos recorded by the eye-tracking system are used to examine the gaze behavior. The results of this study show that the carburetor assembly requires more attention than the product made from LEGO bricks. About 50% of the participants needed to visually inspect the interim state of the work object during the on-screen simulation of the assembly sequence. They also tended to seek additional certainty about part fitting in the actual work object.

  6. Reasoning about Beliefs: A Human Specialization?

    Science.gov (United States)

    Povinelli, Daniel J.; Giambrone, Steve

    2001-01-01

    Asserts that theory of mind is unique to humans and that its original function was to provide a more abstract level of describing ancient behavioral patterns, such as deception, reconciliation, and gaze following. Suggests that initial selective advantage of theory of mind may have been increased flexibility of already-existing behaviors, not…

  7. Fearful gaze cueing: gaze direction and facial expression independently influence overt orienting responses in 12-month-olds.

    Directory of Open Access Journals (Sweden)

    Reiko Matsunaka

    Full Text Available Gaze direction cues and facial expressions have been shown to influence object processing in infants. For example, infants around 12 months of age utilize others' gaze directions and facial expressions to regulate their own behaviour toward an ambiguous target (i.e., social referencing). However, the mechanism by which social signals influence overt orienting in infants is unclear. The present study examined the effects of static gaze direction cues and facial expressions (neutral vs. fearful) on overt orienting using a gaze-cueing paradigm in 6- and 12-month-old infants. Two experiments were conducted: in Experiment 1, a face with a leftward or rightward gaze direction was used as a cue, and a face with a forward gaze direction was added in Experiment 2. In both experiments, an effect of facial expression was found in 12-month-olds; no effect was found in 6-month-olds. Twelve-month-old infants exhibited more rapid overt orienting in response to fearful expressions than neutral expressions, irrespective of gaze direction. These findings suggest that gaze direction information and facial expressions independently influence overt orienting in infants, and the effect of facial expression emerges earlier than that of static gaze direction. Implications for the development of gaze direction and facial expression processing systems are discussed.

  8. Towards gaze-controlled platform games

    DEFF Research Database (Denmark)

    Muñoz, Jorge; Yannakakis, Georgios N.; Mulvey, Fiona

    2011-01-01

    This paper introduces the concept of using gaze as a sole modality for fully controlling player characters of fast-paced action computer games. A user experiment is devised to collect gaze and gameplay data from subjects playing a version of the popular Super Mario Bros platform game. The initial analysis shows that there is a rather limited grid around Mario where the efficient player focuses her attention the most while playing the game. The useful grid, as we name it, projects the amount of meaningful visual information a designer should use towards creating successful player character controllers with the use of artificial intelligence for a platform game like Super Mario. Information about the eyes' position on the screen and the state of the game are utilized as inputs of an artificial neural network, which is trained to approximate which keyboard action is to be performed at each game...

  9. Gaze perception in social anxiety and social anxiety disorder

    Directory of Open Access Journals (Sweden)

    Lars eSchulze

    2013-12-01

    Full Text Available Clinical observations suggest abnormal gaze perception to be an important indicator of social anxiety disorder (SAD). Experimental research has so far paid relatively little attention to the study of gaze perception in SAD. In this article we first discuss gaze perception in healthy human beings before reviewing self-referential and threat-related biases of gaze perception in clinical and non-clinical socially anxious samples. Relative to controls, socially anxious individuals exhibit an enhanced self-directed perception of gaze directions and demonstrate a pronounced fear of direct eye contact, though findings are less consistent regarding the avoidance of mutual gaze in SAD. Prospects for future research and clinical implications are discussed.

  10. Mental state attribution and the gaze cueing effect.

    Science.gov (United States)

    Cole, Geoff G; Smith, Daniel T; Atkinson, Mark A

    2015-05-01

    Theory of mind is said to be possessed by an individual if he or she is able to impute mental states to others. Recently, some authors have demonstrated that such mental state attributions can mediate the "gaze cueing" effect, in which observation of another individual shifts an observer's attention. One question that follows from this work is whether such mental state attributions produce mandatory modulations of gaze cueing. Employing the basic gaze cueing paradigm, together with a technique commonly used to assess mental-state attribution in nonhuman animals, we manipulated whether the gazing agent could see the same thing as the participant (i.e., the target) or had this view obstructed by a physical barrier. We found robust gaze cueing effects, even when the observed agent in the display could not see the same thing as the participant. These results suggest that the attribution of "seeing" does not necessarily modulate the gaze cueing effect.

  11. Clinician's gaze behaviour in simulated paediatric emergencies.

    Science.gov (United States)

    McNaughten, Ben; Hart, Caroline; Gallagher, Stephen; Junk, Carol; Coulter, Patricia; Thompson, Andrew; Bourke, Thomas

    2018-03-07

    Differences in the gaze behaviour of experts and novices are described in aviation and surgery. This study sought to describe the gaze behaviour of clinicians from different training backgrounds during a simulated paediatric emergency. Clinicians from four clinical areas undertook a simulated emergency. Participants wore SMI (SensoMotoric Instruments) eye tracking glasses. We measured the fixation count and dwell time on predefined areas of interest and the time taken to key clinical interventions. Paediatric intensive care unit (PICU) consultants performed best and focused longer on the chest and airway. Paediatric consultants and trainees spent longer looking at the defibrillator and algorithm (51 180 ms and 50 551 ms, respectively) than the PICU and paediatric emergency medicine consultants. This study is the first to describe differences in the gaze behaviour between experts and novices in a resuscitation. They mirror those described in aviation and surgery. Further research is needed to evaluate the potential use of eye tracking as an educational tool. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  12. The impact of visual gaze direction on auditory object tracking

    OpenAIRE

    Pomper, U.; Chait, M.

    2017-01-01

    Subjective experience suggests that we are able to direct our auditory attention independently of our visual gaze, e.g. when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated both auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention wh...

  13. Speaker gaze increases information coupling between infant and adult brains.

    Science.gov (United States)

    Leong, Victoria; Byrne, Elizabeth; Clackson, Kaili; Georgieva, Stanimira; Lam, Sarah; Wass, Sam

    2017-12-12

    When infants and adults communicate, they exchange social signals of availability and communicative intention such as eye gaze. Previous research indicates that when communication is successful, close temporal dependencies arise between adult speakers' and listeners' neural activity. However, it is not known whether similar neural contingencies exist within adult-infant dyads. Here, we used dual-electroencephalography to assess whether direct gaze increases neural coupling between adults and infants during screen-based and live interactions. In experiment 1 (n = 17), infants viewed videos of an adult who was singing nursery rhymes with (i) direct gaze (looking forward), (ii) indirect gaze (head and eyes averted by 20°), or (iii) direct-oblique gaze (head averted but eyes orientated forward). In experiment 2 (n = 19), infants viewed the same adult in a live context, singing with direct or indirect gaze. Gaze-related changes in adult-infant neural network connectivity were measured using partial directed coherence. Across both experiments, the adult had a significant (Granger) causal influence on infants' neural activity, which was stronger during direct and direct-oblique gaze relative to indirect gaze. During live interactions, infants also influenced the adult more during direct than indirect gaze. Further, infants vocalized more frequently during live direct gaze, and individual infants who vocalized longer also elicited stronger synchronization from the adult. These results demonstrate that direct gaze strengthens bidirectional adult-infant neural connectivity during communication. Thus, ostensive social signals could act to bring brains into mutual temporal alignment, creating a joint-networked state that is structured to facilitate information transfer during early communication and learning. Copyright © 2017 the Author(s). Published by PNAS.
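    The (Granger) causal-influence idea in this record can be illustrated with a minimal time-domain sketch: a signal x Granger-causes y if x's past improves prediction of y beyond y's own past. This is a simplified stand-in for the study's partial directed coherence analysis; the model order, coupling strength, and "adult"/"infant" toy signals below are illustrative assumptions, not the paper's data or method.

```python
import numpy as np

def granger_gain(x, y, lag=2):
    """Ratio of residual sum of squares for predicting y from its own past
    vs. its own past plus x's past. Values well above 1 suggest that x
    (Granger-)causally influences y."""
    n = len(y)
    idx = np.arange(lag, n)
    Y = y[idx]
    own = np.column_stack([y[idx - k] for k in range(1, lag + 1)])
    both = np.column_stack([own] + [x[idx - k] for k in range(1, lag + 1)])

    def rss(X):
        X = np.column_stack([np.ones(len(Y)), X])  # add intercept column
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return float(resid @ resid)

    return rss(own) / rss(both)

# Toy signals: the "infant" signal follows the "adult" signal with a
# one-sample delay, so influence should run adult -> infant only.
rng = np.random.default_rng(0)
adult = rng.standard_normal(2000)
infant = 0.8 * np.roll(adult, 1) + 0.2 * rng.standard_normal(2000)
infant[0] = 0.0  # discard the wrapped-around first sample

print(granger_gain(adult, infant) > granger_gain(infant, adult))
```

    In the coupled direction the gain is far above 1; in the reverse direction it hovers near 1, which is the asymmetry that a directed-coherence measure exploits.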

  14. Facilitated orienting underlies fearful face-enhanced gaze cueing of spatial location

    Directory of Open Access Journals (Sweden)

    Joshua M. Carlson

    2016-12-01

    Full Text Available Faces provide a platform for non-verbal communication through emotional expression and eye gaze. Fearful facial expressions are salient indicators of potential threat within the environment, which automatically capture observers’ attention. However, the degree to which fearful facial expressions facilitate attention to others’ gaze is unresolved. Given that fearful gaze indicates the location of potential threat, it was hypothesized that fearful gaze facilitates location processing. To test this hypothesis, a gaze cueing study with fearful and neutral faces assessing target localization was conducted. The task consisted of leftward, rightward, and forward/straight gaze trials. The inclusion of forward gaze trials allowed for the isolation of orienting and disengagement components of gaze-directed attention. The results suggest that both neutral and fearful gaze modulate attention through orienting and disengagement components. Fearful gaze, however, resulted in quicker orienting than neutral gaze. Thus, fearful faces enhance gaze cueing of spatial location through facilitated orienting.

  15. Gliding and Saccadic Gaze Gesture Recognition in Real Time

    DEFF Research Database (Denmark)

    Rozado, David; San Agustin, Javier; Rodriguez, Francisco

    2012-01-01

    , and their corresponding real-time recognition algorithms, Hierarchical Temporal Memory networks and the Needleman-Wunsch algorithm for sequence alignment. Our results show how a specific combination of gaze gesture modality, namely saccadic gaze gestures, and recognition algorithm, Needleman-Wunsch, allows for reliable usage of intentional gaze gestures to interact with a computer with accuracy rates of up to 98% and acceptable completion speed. Furthermore, the gesture recognition engine does not interfere with otherwise standard human-machine gaze interaction, generating therefore very low false positive rates...
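    The Needleman-Wunsch algorithm named in this record is a classic dynamic-programming method for global sequence alignment. A minimal sketch of how it could score an observed saccade-direction sequence against a stored gesture template follows; the direction tokens and the scoring parameters are illustrative assumptions, not the values used in the paper.

```python
# Minimal Needleman-Wunsch global alignment. A gesture recognizer of this
# kind picks the template whose alignment score with the observed saccade
# sequence is highest.

def needleman_wunsch(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    """Return the optimal global alignment score of two token sequences."""
    rows, cols = len(seq_a) + 1, len(seq_b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(rows):          # aligning a prefix against nothing
        score[i][0] = i * gap      # costs one gap per token
    for j in range(cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (
                match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
            score[i][j] = max(diag,                  # substitute/match
                              score[i - 1][j] + gap,  # gap in seq_b
                              score[i][j - 1] + gap)  # gap in seq_a
    return score[-1][-1]

# Saccade directions coded as tokens: a clockwise-square template vs. an
# observed sequence containing one spurious extra saccade.
template = ["R", "D", "L", "U"]
observed = ["R", "D", "D", "L", "U"]
print(needleman_wunsch(template, observed))  # prints 3
```

    Because alignment tolerates insertions and deletions, a gesture still scores close to its template even when the eye produces an extra or missing saccade, which is what makes the method robust for noisy gaze input.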

  16. Gaze movements and spatial working memory in collision avoidance: a traffic intersection task

    Directory of Open Access Journals (Sweden)

    Gregor eHardiess

    2013-06-01

    Full Text Available Street crossing under traffic is an everyday activity including collision detection as well as avoidance of objects in the path of motion. Such tasks demand extraction and representation of spatio-temporal information about relevant obstacles in an optimized format. Relevant task information is extracted visually by the use of gaze movements and represented in spatial working memory. In a virtual reality traffic intersection task, subjects are confronted with a two-lane intersection where cars are appearing with different frequencies, corresponding to high and low traffic densities. Under free observation and exploration of the scenery (using unrestricted eye and head movements), the overall task for the subjects was to predict the potential-of-collision (POC) of the cars or to adjust an adequate driving speed in order to cross the intersection without collision (i.e., to find the free space for crossing). In a series of experiments, gaze movement parameters, task performance, and the representation of car positions within working memory at distinct time points were assessed in normal subjects as well as in neurological patients suffering from homonymous hemianopia. In the following, we review the findings of these experiments together with other studies and provide a new perspective of the role of gaze behavior and spatial memory in collision detection and avoidance, focusing on the following questions: (i) which sensory variables can be identified supporting adequate collision detection? (ii) How do gaze movements and working memory contribute to collision avoidance when multiple moving objects are present and (iii) how do they correlate with task performance? (iv) How do patients with homonymous visual field defects use gaze movements and working memory to compensate for visual field loss? In conclusion, we extend the theory of collision detection and avoidance in the case of multiple moving objects and provide a new perspective on the combined

  17. Going deeper on the tourist gaze: considerations about its social determinants

    OpenAIRE

    Erick Silva Omena de Melo

    2009-01-01

    This article aims at a better understanding of the social processes that lead tourists to different kinds of behavior at the places they visit, framed by the way social classes are related in modern Western society. Theoretical contributions from scholars such as John Urry, Pierre Bourdieu and Jost Krippendorf are developed in search of new approaches that favour processual over propositional knowledge in tourism. It was found that the tourist gaze is a construction crossed by di...

  18. Is the Theory of Mind deficit observed in visual paradigms in schizophrenia explained by an impaired attention toward gaze orientation?

    Science.gov (United States)

    Roux, Paul; Forgeot d'Arc, Baudoin; Passerieux, Christine; Ramus, Franck

    2014-08-01

    Schizophrenia is associated with poor Theory of Mind (ToM), particularly in goal and belief attribution to others. It is also associated with abnormal gaze behaviors toward others: individuals with schizophrenia usually look less to others' face and gaze, which are crucial epistemic cues that contribute to correct mental states inferences. This study tests the hypothesis that impaired ToM in schizophrenia might be related to a deficit in visual attention toward gaze orientation. We adapted a previous non-verbal ToM paradigm consisting of animated cartoons allowing the assessment of goal and belief attribution. In the true and false belief conditions, an object was displaced while an agent was either looking at it or away, respectively. Eye movements were recorded to quantify visual attention to gaze orientation (proportion of time participants spent looking at the head of the agent while the target object changed locations). 29 patients with schizophrenia and 29 matched controls were tested. Compared to controls, patients looked significantly less at the agent's head and had lower performance in belief and goal attribution. Performance in belief and goal attribution significantly increased with the head looking percentage. When the head looking percentage was entered as a covariate, the group effect on belief and goal attribution performance was not significant anymore. Patients' deficit on this visual ToM paradigm is thus entirely explained by a decreased visual attention toward gaze. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Wearable Gaze Trackers: Mapping Visual Attention in 3D

    DEFF Research Database (Denmark)

    Jensen, Rasmus Ramsbøl; Stets, Jonathan Dyssel; Suurmets, Seidi

    2017-01-01

    gaze trackers allows respondents to move freely in any real world 3D environment, removing the previous restrictions. In this paper we propose a novel approach for processing visual attention of respondents using mobile wearable gaze trackers in a 3D environment. The pipeline consists of 3 steps...

  20. Gaze Shift as an Interactional Resource for Very Young Children

    Science.gov (United States)

    Kidwell, Mardi

    2009-01-01

    This article examines how very young children in a day care center make use of their peers' gaze shifts to differentially locate and prepare for the possibility of a caregiver intervention during situations of their biting, hitting, pushing, and the like. At issue is how the visible character of a gaze shift--that is, the manner in which it is…

  1. Just one look: Direct gaze briefly disrupts visual working memory.

    Science.gov (United States)

    Wang, J Jessica; Apperly, Ian A

    2017-04-01

    Direct gaze is a salient social cue that affords rapid detection. A body of research suggests that direct gaze enhances performance on memory tasks (e.g., Hood, Macrae, Cole-Davies, & Dias, Developmental Science, 1, 67-71, 2003). Nonetheless, other studies highlight the disruptive effect direct gaze has on concurrent cognitive processes (e.g., Conty, Gimmig, Belletier, George, & Huguet, Cognition, 115(1), 133-139, 2010). This discrepancy raises questions about the effects direct gaze may have on concurrent memory tasks. We addressed this topic by employing a change detection paradigm, where participants retained information about the color of small sets of agents. Experiment 1 revealed that, despite the irrelevance of the agents' eye gaze to the memory task at hand, participants were worse at detecting changes when the agents looked directly at them compared to when the agents looked away. Experiment 2 showed that the disruptive effect was relatively short-lived. Prolonged presentation of direct gaze led to recovery from the initial disruption, rather than a sustained disruption on change detection performance. The present study provides the first evidence that direct gaze impairs visual working memory with a rapidly-developing yet short-lived effect even when there is no need to attend to agents' gaze.

  2. Predicting gaze direction from head pose yaw and pitch

    NARCIS (Netherlands)

    Johnson, D.O.; Cuijpers, R.H.; Arabnia, H.R.; Deligiannidis, L.; Lu, J.; Tinetti, F.G.; You, J.

    2013-01-01

    Abstract - Socially assistive robots (SARs) must be able to interpret non-verbal communication from a human. A person’s gaze direction informs the observer where the visual attention is directed to. Therefore it is useful if a robot can interpret the gaze direction, so that it can assess whether a

  3. Face Age and Eye Gaze Influence Older Adults' Emotion Recognition.

    Science.gov (United States)

    Campbell, Anna; Murray, Janice E; Atkinson, Lianne; Ruffman, Ted

    2017-07-01

    Eye gaze has been shown to influence emotion recognition. In addition, older adults (over 65 years) are not as influenced by gaze direction cues as young adults (18-30 years). Nevertheless, these differences might stem from the use of young to middle-aged faces in emotion recognition research because older adults have an attention bias toward old-age faces. Therefore, using older face stimuli might allow older adults to process gaze direction cues to influence emotion recognition. To investigate this idea, young and older adults completed an emotion recognition task with young and older face stimuli displaying direct and averted gaze, assessing labeling accuracy for angry, disgusted, fearful, happy, and sad faces. Direct gaze rather than averted gaze improved young adults' recognition of emotions in young and older faces, but for older adults this was true only for older faces. The current study highlights the impact of stimulus face age and gaze direction on emotion recognition in young and older adults. The use of young face stimuli with direct gaze in most research might contribute to age-related emotion recognition differences. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. Neurocognitive mechanisms of gaze-expression interactions in face processing and social attention.

    Science.gov (United States)

    Graham, Reiko; Labar, Kevin S

    2012-04-01

    The face conveys a rich source of non-verbal information used during social communication. While research has revealed how specific facial channels such as emotional expression are processed, little is known about the prioritization and integration of multiple cues in the face during dyadic exchanges. Classic models of face perception have emphasized the segregation of dynamic vs. static facial features along independent information processing pathways. Here we review recent behavioral and neuroscientific evidence suggesting that within the dynamic stream, concurrent changes in eye gaze and emotional expression can yield early independent effects on face judgments and covert shifts of visuospatial attention. These effects are partially segregated within initial visual afferent processing volleys, but are subsequently integrated in limbic regions such as the amygdala or via reentrant visual processing volleys. This spatiotemporal pattern may help to resolve otherwise perplexing discrepancies across behavioral studies of emotional influences on gaze-directed attentional cueing. Theoretical explanations of gaze-expression interactions are discussed, with special consideration of speed-of-processing (discriminability) and contextual (ambiguity) accounts. Future research in this area promises to reveal the mental chronometry of face processing and interpersonal attention, with implications for understanding how social referencing develops in infancy and is impaired in autism and other disorders of social cognition. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. State-dependent sensorimotor processing: gaze and posture stability during simulated flight in birds.

    Science.gov (United States)

    McArthur, Kimberly L; Dickman, J David

    2011-04-01

    Vestibular responses play an important role in maintaining gaze and posture stability during rotational motion. Previous studies suggest that these responses are state dependent, their expression varying with the environmental and locomotor conditions of the animal. In this study, we simulated an ethologically relevant state in the laboratory to study state-dependent vestibular responses in birds. We used frontal airflow to simulate gliding flight and measured pigeons' eye, head, and tail responses to rotational motion in darkness, under both head-fixed and head-free conditions. We show that both eye and head response gains are significantly higher during flight, thus enhancing gaze and head-in-space stability. We also characterize state-specific tail responses to pitch and roll rotation that would help to maintain body-in-space orientation during flight. These results demonstrate that vestibular sensorimotor processing is not fixed but depends instead on the animal's behavioral state.

  6. Eye gaze performance for children with severe physical impairments using gaze-based assistive technology-A longitudinal study.

    Science.gov (United States)

    Borgestig, Maria; Sandqvist, Jan; Parsons, Richard; Falkmer, Torbjörn; Hemmingsson, Helena

    2016-01-01

    Gaze-based assistive technology (gaze-based AT) has the potential to provide children affected by severe physical impairments with opportunities for communication and activities. This study aimed to examine changes in eye gaze performance over time (time on task and accuracy) in children with severe physical impairments, without speaking ability, using gaze-based AT. A longitudinal study with a before and after design was conducted on 10 children (aged 1-15 years) with severe physical impairments, who were beginners to gaze-based AT at baseline. Thereafter, all children used the gaze-based AT in daily activities over the course of the study. Compass computer software was used to measure time on task and accuracy with eye selection of targets on screen, and tests were performed with the children at baseline, after 5 months, 9-11 months, and after 15-20 months. Findings showed that the children improved in time on task after 5 months and became more accurate in selecting targets after 15-20 months. This study indicates that these children with severe physical impairments, who were unable to speak, could improve in eye gaze performance. However, the children needed time to practice on a long-term basis to acquire skills needed to develop fast and accurate eye gaze performance.

  7. Text Entry by Gazing and Smiling

    Directory of Open Access Journals (Sweden)

    Outi Tuisku

    2013-01-01

    Full Text Available Face Interface is a wearable prototype that combines the use of voluntary gaze direction and facial activations for pointing at and selecting objects on a computer screen, respectively. The aim was to investigate the functionality of the prototype for entering text. First, three on-screen keyboard layout designs were developed and tested (n=10) to find a layout that would be more suitable for text entry with the prototype than the traditional QWERTY layout. The task was to enter one word ten times with each of the layouts by pointing at letters with gaze and selecting them by smiling. Subjective ratings showed that a layout with large keys on the edge and small keys near the center of the keyboard was rated as the most enjoyable, clearest, and most functional. Second, using this layout, the aim of the second experiment (n=12) was to compare entering text with Face Interface to entering text with a mouse. The results showed that the text entry rate for Face Interface was 20 characters per minute (cpm) and 27 cpm for the mouse. For Face Interface, the keystrokes per character (KSPC) value was 1.1 and the minimum string distance (MSD) error rate was 0.12. These values compare especially well with other similar techniques.

  8. "The Gaze Heuristic:" Biography of an Adaptively Rational Decision Process.

    Science.gov (United States)

    Hamlin, Robert P

    2017-04-01

    This article is a case study that describes the natural and human history of the gaze heuristic. The gaze heuristic is an interception heuristic that utilizes a single input (deviation from a constant angle of approach) repeatedly as a task is performed. Its architecture, advantages, and limitations are described in detail. A history of the gaze heuristic is then presented. In natural history, the gaze heuristic is the only known technique used by predators to intercept prey. In human history the gaze heuristic was discovered accidentally by Royal Air Force (RAF) fighter command just prior to World War II. As it was never discovered by the Luftwaffe, the technique conferred a decisive advantage upon the RAF throughout the war. After the end of the war in America, German technology was combined with the British heuristic to create the Sidewinder AIM9 missile, the most successful autonomous weapon ever built. There are no plans to withdraw it or replace its guiding gaze heuristic. The case study demonstrates that the gaze heuristic is a specific heuristic type that takes a single best input at the best time (take the best). Its use is an adaptively rational response to specific, rapidly evolving decision environments that has allowed those animals/humans/machines who use it to survive, prosper, and multiply relative to those who do not. Copyright © 2017 Cognitive Science Society, Inc.
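    The single input the heuristic uses, deviation from a constant bearing angle, can be sketched in a toy pursuit simulation: the pursuer never predicts the target's path, it only turns enough to cancel any drift in the bearing to the target. The scenario geometry, speeds, and correction gain below are illustrative assumptions, not values from the article.

```python
import math

def bearing(px, py, tx, ty):
    """Angle of the line of sight from pursuer to target."""
    return math.atan2(ty - py, tx - px)

def min_approach(speed=1.2, gain=4.0, dt=0.02, steps=3000):
    """Chase a target moving at unit speed; return the closest approach."""
    px, py = 0.0, 0.0        # pursuer start
    tx, ty = 10.0, 0.0       # target start, moving straight "north"
    heading = bearing(px, py, tx, ty)
    prev = heading           # bearing angle the heuristic tries to hold
    closest = math.hypot(tx - px, ty - py)
    for _ in range(steps):
        ty += 1.0 * dt                            # target moves
        b = bearing(px, py, tx, ty)
        heading += gain * (b - prev)              # turn to cancel bearing drift
        prev = b
        px += speed * math.cos(heading) * dt      # pursuer moves
        py += speed * math.sin(heading) * dt
        closest = min(closest, math.hypot(tx - px, ty - py))
    return closest

print(min_approach() < min_approach(gain=0.0))
```

    With the bearing-drift correction disabled (gain=0.0) the pursuer runs straight at the target's initial position and misses by several units; with it enabled, the pursuer settles onto a collision course without ever computing where the target will be, which is the heuristic's whole appeal.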

  9. Modelling Virtual Camera Behaviour Through Player Gaze

    DEFF Research Database (Denmark)

    Picardi, Andrea; Burelli, Paolo; Yannakakis, Georgios N.

    2012-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction largely depend on the placement and animation of the virtual camera. Therefore, virtual camera control plays a critical role in player experience and, thereby, in the overall quality of a computer game. Both game industry and game AI research focus on the development of increasingly sophisticated systems to automate the control of the virtual camera, integrating artificial intelligence algorithms within physical simulations. However, in both industry and academia little research has been carried out on the relationship between virtual camera, game-play and player behaviour. We run a game user experiment to shed some light on this relationship and identify relevant differences between camera behaviours through different game sessions, playing behaviours and player gaze patterns. Results show that users can…

  10. Isolated Horizontal Gaze Palsy: Observations and Explanations

    Directory of Open Access Journals (Sweden)

    Renee Ewe

    2017-11-01

    We present three cases that we suggest require a novel diagnosis and a reconsideration of current understandings of pontine anatomy. In this case series, we highlight a series of patients with monophasic, fully recovering inflammatory lesions in the pontine tegmentum not due to any of the currently recognized causes of this syndrome. We highlight other similar cases in the literature and suggest there may be a particular epitope for an as-yet-undiscovered antibody underlying the tropism for this area. We highlight the potential harm of misdiagnosis with relapsing inflammatory or other serious diagnoses with significant adverse impact on the patient. In addition, we propose that this would support a reinterpretation of the currently accepted anatomy of the pontine gaze inputs to the medial longitudinal fasciculus and paramedian pontine reticular formation.

  11. The Politics of the Gaze: Foucault, Lacan and Žižek

    Directory of Open Access Journals (Sweden)

    Henry Krips

    2010-03-01

    Joan Copjec accuses orthodox film theory of misrepresenting the Lacanian gaze by assimilating it to the Foucauldian panopticon (Copjec 1994: 18-19). Although Copjec is correct that orthodox film theory misrepresents the Lacanian gaze, she, in turn, misrepresents Foucault by choosing to focus exclusively upon those aspects of his work on the panopticon that have been taken up by orthodox film theory (Copjec 1994: 4). In so doing, I argue, Copjec misses key parallels between the Lacanian and Foucauldian concepts of the gaze. More than a narrow academic dispute about how to read Foucault and Lacan, this debate has wider political significance. In particular, using Slavoj Žižek's work, I show that a correct account of the panoptic gaze leads us to rethink the question of how to oppose modern techniques of surveillance.

  12. The Rhetorics of Gaze in Luhrmann's "Postmodern Great Gatsby"

    Directory of Open Access Journals (Sweden)

    Paola Fallerini

    2014-05-01

    Adopting the perspective suggested by the rhetoric of the gaze (Laura Mulvey), the article highlights the metalinguistic and metatextual reflection through which this movie contributes to the critical interpretation of Fitzgerald's novel.

  13. WOMAN AS OBJECT OF MALE GAZE IN SOME WORKS OF ...

    African Journals Online (AJOL)

    Nkiruka

    marketing and sale of the product, but also an object of the male gaze. ... Édouard Manet, whose painting Olympia, thought to be inspired by Titian's ... encountered in Western art history, whereas unidentifiable nude males were infrequently…

  14. Interaction between gaze and visual and proprioceptive position judgements.

    Science.gov (United States)

    Fiehler, Katja; Rösler, Frank; Henriques, Denise Y P

    2010-06-01

    There is considerable evidence that targets for action are represented in a dynamic gaze-centered frame of reference, such that each gaze shift requires an internal updating of the target. Here, we investigated the effect of eye movements on the spatial representation of targets used for position judgements. Participants had their hand passively placed at a location, and then judged whether this location was left or right of a remembered visual or remembered proprioceptive target, while gaze direction was varied. Estimates of the position of the remembered targets relative to the unseen position of the hand were assessed with an adaptive psychophysical procedure. These positional judgements varied significantly relative to gaze for both remembered visual and remembered proprioceptive targets. Our results suggest that relative target positions may also be represented in eye-centered coordinates. This implies similar spatial reference frames for action control and space perception when positions are coded relative to the hand.

  15. Latvian government in double jeopardy with EU, Latvijas Gaze

    Index Scriptorium Estoniae

    2005-01-01

    Latvia is seeking an extension from the European Commission to postpone liberalisation of its gas market until 2010 and promises, in that case, to conclude an agreement with Latvijas Gaze under which the latter would give up its exclusive right to supply gas in Latvia.

  16. The sensation of the look: The gazes in Laurence Anyways

    OpenAIRE

    Schultz, Corey Kai Nelson

    2018-01-01

    This article analyses the gazes, looks, stares and glares in Laurence Anyways (Xavier Dolan, 2012), and examines their affective, interpretive, and symbolic qualities, and their potential to create viewer empathy through affect. The cinematic gaze can produce sensations of shame and fear, by offering a sequence of varied “encounters” to which viewers can react, before we have been given a character onto which we can deflect them, thus bypassing the representational, narrative and even the sym...

  17. Investigating gaze-controlled input in a cognitive selection test

    OpenAIRE

    Gayraud, Katja; Hasse, Catrin; Eißfeldt, Hinnerk; Pannasch, Sebastian

    2017-01-01

    In the field of aviation, there is a growing interest in developing more natural forms of interaction between operators and systems to enhance safety and efficiency. These efforts also include eye gaze as an input channel for human-machine interaction. The present study investigates the application of gaze-controlled input in a cognitive selection test called Eye Movement Conflict Detection Test. The test enables eye movements to be studied as an indicator for psychological test performance a...

  18. Segmentation of object-based video of gaze communication

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Stegmann, Mikkel Bille; Forchhammer, Søren

    2005-01-01

    Aspects of video communication based on gaze interaction are considered. The overall idea is to use gaze interaction to control video, e.g. for video conferencing. Towards this goal, animation of a facial mask is demonstrated. The animation is based on images using Active Appearance Models (AAM). Good quality reproduction of (low-resolution) coded video of an animated facial mask at rates as low as 10-20 kbit/s using MPEG-4 object-based video is demonstrated.

  19. CULTURAL DISPLAY RULES DRIVE EYE GAZE DURING THINKING.

    Science.gov (United States)

    McCarthy, Anjanie; Lee, Kang; Itakura, Shoji; Muir, Darwin W

    2006-11-01

    The authors measured the eye gaze displays of Canadian, Trinidadian, and Japanese participants as they answered questions for which they either knew, or had to derive, the answers. When they knew the answers, Trinidadians maintained the most eye contact, whereas Japanese maintained the least. When thinking about the answers to questions, Canadians and Trinidadians looked up, whereas Japanese looked down. Thus, for humans, gaze displays while thinking are at least in part culturally determined.

  20. The impact of visual gaze direction on auditory object tracking.

    Science.gov (United States)

    Pomper, Ulrich; Chait, Maria

    2017-07-05

    Subjective experience suggests that we are able to direct our auditory attention independently of our visual gaze, e.g. when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention while participants detected targets presented from one of three loudspeakers. We observed increased response times when gaze was directed away from the locus of auditory attention. Further, we found an increase in occipital alpha-band power contralateral to the direction of gaze, indicative of a suppression of distracting input. Finally, this condition also led to stronger central theta-band power, which correlated with the observed effect in response times, indicative of differences in top-down processing. Our data suggest that a misalignment between gaze and auditory attention both reduces behavioural performance and modulates the underlying neural processes. The involvement of central theta-band and occipital alpha-band effects is in line with compensatory neural mechanisms such as increased cognitive control and the suppression of task-irrelevant inputs.
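
    The occipital alpha-band effect reported above is an estimate of spectral power in roughly the 8-12 Hz band. An illustrative FFT-based computation on synthetic data (not the authors' actual EEG pipeline, which would also involve epoching and artifact rejection):

    ```python
    import numpy as np

    def band_power(signal, fs, lo=8.0, hi=12.0):
        """Mean spectral power in [lo, hi] Hz from an FFT periodogram."""
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
        mask = (freqs >= lo) & (freqs <= hi)
        return psd[mask].mean()

    # Synthetic check: a 10 Hz ("alpha") oscillation carries far more
    # 8-12 Hz power than a 40 Hz ("gamma") one.
    fs = 250
    t = np.arange(0, 2, 1 / fs)
    alpha = np.sin(2 * np.pi * 10 * t)
    gamma = np.sin(2 * np.pi * 40 * t)
    ```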

  1. Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements.

    Science.gov (United States)

    Kreysa, Helene; Kessler, Luise; Schweinberger, Stefan R

    2016-01-01

    A speaker's gaze behaviour can provide perceivers with a multitude of cues which are relevant for communication, thus constituting an important non-verbal interaction channel. The present study investigated whether direct eye gaze of a speaker affects the likelihood of listeners believing truth-ambiguous statements. Participants were presented with videos in which a speaker produced such statements with either direct or averted gaze. The statements were selected through a rating study to ensure that participants were unlikely to know a-priori whether they were true or not (e.g., "sniffer dogs cannot smell the difference between identical twins"). Participants indicated in a forced-choice task whether or not they believed each statement. We found that participants were more likely to believe statements by a speaker looking at them directly, compared to a speaker with averted gaze. Moreover, when participants disagreed with a statement, they were slower to do so when the statement was uttered with direct (compared to averted) gaze, suggesting that the process of rejecting a statement as untrue may be inhibited when that statement is accompanied by direct gaze.

  3. Gaze-Stabilizing Central Vestibular Neurons Project Asymmetrically to Extraocular Motoneuron Pools.

    Science.gov (United States)

    Schoppik, David; Bianco, Isaac H; Prober, David A; Douglass, Adam D; Robson, Drew N; Li, Jennifer M B; Greenwood, Joel S F; Soucy, Edward; Engert, Florian; Schier, Alexander F

    2017-11-22

    Within reflex circuits, specific anatomical projections allow central neurons to relay sensations to effectors that generate movements. A major challenge is to relate anatomical features of central neural populations, such as asymmetric connectivity, to the computations the populations perform. To address this problem, we mapped the anatomy, modeled the function, and discovered a new behavioral role for a genetically defined population of central vestibular neurons in rhombomeres 5-7 of larval zebrafish. First, we found that neurons within this central population project preferentially to motoneurons that move the eyes downward. Concordantly, when the entire population of asymmetrically projecting neurons was stimulated collectively, only downward eye rotations were observed, demonstrating a functional correlate of the anatomical bias. When these neurons were ablated, fish failed to rotate their eyes following either nose-up or nose-down body tilts. This asymmetrically projecting central population thus participates in both upward and downward gaze stabilization. In addition to projecting to motoneurons, central vestibular neurons also receive direct sensory input from peripheral afferents. To infer whether asymmetric projections can facilitate sensory encoding or motor output, we modeled differentially projecting sets of central vestibular neurons. Whereas motor command strength was independent of projection allocation, asymmetric projections enabled more accurate representation of nose-up stimuli. The model shows how asymmetric connectivity could enhance the representation of imbalance during nose-up postures while preserving gaze stabilization performance. Finally, we found that central vestibular neurons were necessary for a vital behavior requiring maintenance of a nose-up posture: swim bladder inflation. These observations suggest that asymmetric connectivity in the vestibular system facilitates representation of ethologically relevant stimuli without…

  5. Judgments at Gaze Value: Gaze Cuing in Banner Advertisements, Its Effect on Attention Allocation and Product Judgments.

    Science.gov (United States)

    Palcu, Johanna; Sudkamp, Jennifer; Florack, Arnd

    2017-01-01

    Banner advertising is a popular means of promoting products and brands online. Although banner advertisements are often designed to be particularly attention grabbing, they frequently go unnoticed. Applying an eye-tracking procedure, the present research aimed to (a) determine whether presenting human faces (static or animated) in banner advertisements is an adequate tool for capturing consumers' attention and thus overcoming the frequently observed phenomenon of banner blindness, (b) to examine whether the gaze of a featured face possesses the ability to direct consumers' attention toward specific elements (i.e., the product) in an advertisement, and (c) to establish whether the gaze direction of an advertised face influences consumers' subsequent evaluation of the advertised product. We recorded participants' eye gaze while they viewed a fictional online shopping page displaying banner advertisements that featured either no human face or a human face that was either static or animated and involved different gaze directions (toward or away from the advertised product). Moreover, we asked participants to subsequently evaluate a set of products, one of which was the product previously featured in the banner advertisement. Results showed that, when advertisements included a human face, participants' attention was more attracted by and they looked longer at animated compared with static banner advertisements. Moreover, when a face gazed toward the product region, participants' likelihood of looking at the advertised product increased regardless of whether the face was animated or not. Most important, gaze direction influenced subsequent product evaluations; that is, consumers indicated a higher intention to buy a product when it was previously presented in a banner advertisement that featured a face that gazed toward the product. The results suggest that while animation in banner advertising constitutes a salient feature that captures consumers' visual attention, gaze…

  6. Perceptual Training in Beach Volleyball Defence: Different Effects of Gaze-Path Cueing on Gaze and Decision-Making

    Directory of Open Access Journals (Sweden)

    André Klostermann

    2015-12-01

    For perceptual-cognitive skill training, a variety of intervention methods has been proposed, including the so-called colour-cueing method, which aims at superior gaze-path learning by applying visual markers. However, recent findings challenge this method, especially with regard to its actual effects on gaze behaviour. Consequently, after a preparatory study on the identification of appropriate visual cues for life-size displays, a perceptual-training experiment on decision-making in beach volleyball was conducted, contrasting two cueing interventions (functional vs. dysfunctional gaze path) with a conservative control condition (anticipation-related instructions). Gaze analyses revealed learning effects for the dysfunctional group only. Regarding decision-making, all groups showed enhanced performance, with the largest improvements for the control group, followed by the functional and the dysfunctional group. Hence, the results confirm cueing effects on gaze behaviour, but they also question its benefit for enhancing decision-making. However, before completely denying the method's value, optimisations should be checked regarding, for instance, cueing-pattern characteristics and gaze-related feedback.

  7. Carryover Effect of Joint Attention to Repeated Events in Chimpanzees and Young Children

    Science.gov (United States)

    Okamoto-Barth, Sanae; Moore, Chris; Barth, Jochen; Subiaul, Francys; Povinelli, Daniel J.

    2011-01-01

    Gaze following is a fundamental component of triadic social interaction which includes events and an object shared with other individuals and is found in both human and nonhuman primates. Most previous work has focused only on the immediate reaction after following another's gaze. In contrast, this study investigated whether gaze following is…

  8. Eye gazing direction inspection based on image processing technique

    Science.gov (United States)

    Hao, Qun; Song, Yong

    2005-02-01

    According to findings in neurobiology, human eyes achieve high resolution only at the center of the field of view. In our research on a Virtual Reality helmet, we aim to detect the gazing direction of human eyes in real time and feed it back to the control system to improve the resolution of the graphics at the center of the field of view. Given current display instruments, this method can serve both the field of view of the virtual scene and its resolution, and greatly improve the immersion of the virtual system. Therefore, detecting the gazing direction of human eyes rapidly and exactly is the basis of realizing the design scheme of this novel VR helmet. In this paper, the conventional method of gazing-direction detection based on the Purkinje spot is introduced first. To overcome the disadvantages of the Purkinje-spot method, we propose a method based on image processing to detect and determine the gazing direction. The locations of the pupils and the shapes of the eye sockets change with gazing direction. By analyzing these changes in eye images captured by the cameras, the gazing direction of human eyes can be determined. Experiments have been done to validate the efficiency of this method by analyzing the images. The algorithm detects gazing direction directly from normal eye images, eliminating the need for special hardware. Experimental results show that the method is easy to implement and has high precision.
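
    The core of such an image-processing approach, locating the pupil in a plain eye image without special hardware, can be illustrated by a toy threshold-and-centroid step (a deliberate simplification; the paper's method also exploits eye-socket shape):

    ```python
    import numpy as np

    def pupil_center(gray, dark_thresh=50):
        """Estimate the pupil centre as the centroid of dark pixels.

        gray: 2D array of 0-255 intensities; the pupil is assumed to be
        the darkest region in the eye image.  Returns (x, y) or None.
        """
        ys, xs = np.nonzero(gray < dark_thresh)
        if len(xs) == 0:
            return None
        return xs.mean(), ys.mean()

    # Toy image: bright background with a dark "pupil" blob centred at (30, 20).
    img = np.full((40, 60), 200, dtype=np.uint8)
    img[15:26, 25:36] = 10
    ```

    A production system would additionally smooth the image and reject reflections before taking the centroid.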

  9. Anxiety symptoms and children's eye gaze during fear learning.

    Science.gov (United States)

    Michalska, Kalina J; Machlin, Laura; Moroney, Elizabeth; Lowet, Daniel S; Hettema, John M; Roberson-Nay, Roxann; Averbeck, Bruno B; Brotman, Melissa A; Nelson, Eric E; Leibenluft, Ellen; Pine, Daniel S

    2017-11-01

    The eye region of the face is particularly relevant for decoding threat-related signals, such as fear. However, it is unclear if gaze patterns to the eyes can be influenced by fear learning. Previous studies examining gaze patterns in adults find an association between anxiety and eye gaze avoidance, although no studies to date examine how associations between anxiety symptoms and eye-viewing patterns manifest in children. The current study examined the effects of learning and trait anxiety on eye gaze using a face-based fear conditioning task developed for use in children. Participants were 82 youth from a general population sample of twins (aged 9-13 years), exhibiting a range of anxiety symptoms. Participants underwent a fear conditioning paradigm where the conditioned stimuli (CS+) were two neutral faces, one of which was randomly selected to be paired with an aversive scream. Eye tracking, physiological, and subjective data were acquired. Children and parents reported their child's anxiety using the Screen for Child Anxiety Related Emotional Disorders. Conditioning influenced eye gaze patterns in that children looked longer and more frequently to the eye region of the CS+ than CS- face; this effect was present only during fear acquisition, not at baseline or extinction. Furthermore, consistent with past work in adults, anxiety symptoms were associated with eye gaze avoidance. Finally, gaze duration to the eye region mediated the effect of anxious traits on self-reported fear during acquisition. Anxiety symptoms in children relate to face-viewing strategies deployed in the context of a fear learning experiment. This relationship may inform attempts to understand the relationship between pediatric anxiety symptoms and learning. © 2017 Association for Child and Adolescent Mental Health.

  10. A novel approach to training attention and gaze in ASD: A feasibility and efficacy pilot study.

    Science.gov (United States)

    Chukoskie, Leanne; Westerfield, Marissa; Townsend, Jeanne

    2018-05-01

    In addition to the social, communicative and behavioral symptoms that define the disorder, individuals with ASD have difficulty re-orienting attention quickly and accurately. Similarly, fast re-orienting saccadic eye movements are also inaccurate and more variable in both endpoint and timing. Atypical gaze and attention are among the earliest symptoms observed in ASD. Disruption of these foundation skills critically affects the development of higher-level cognitive and social behavior. We propose that interventions aimed at these early deficits that support social and cognitive skills will be broadly effective. We conducted a pilot clinical trial designed to demonstrate the feasibility and preliminary efficacy of using gaze-contingent video games for low-cost in-home training of attention and eye movement. Eight adolescents with ASD participated in an 8-week training, with pre-, mid- and post-testing of eye movement and attention control. Six of the eight adolescents completed the 8 weeks of training, and all six showed improvement in attention (orienting, disengagement), eye movement control, or both. All game systems remained intact for the duration of training and all participants could use the system independently. We delivered a robust, low-cost, gaze-contingent game system for home use that, in our pilot training sample, improved the attention orienting and eye movement performance of adolescent participants in 8 weeks of training. We are currently conducting a clinical trial to replicate these results and to examine what, if any, aspects of training transfer to more real-world tasks. © 2017 Wiley Periodicals, Inc. Develop Neurobiol 78: 546-554, 2018.

  11. Remote gaze tracking system for 3D environments.

    Science.gov (United States)

    Liu, Congcong; Herrup, Karl; Shi, Bertram E

    2017-07-01

    Eye tracking systems are typically divided into two categories: remote and mobile. Remote systems, where the eye tracker is located near the object being viewed by the subject, have the advantage of being less intrusive, but are typically used for tracking gaze points on fixed two-dimensional (2D) computer screens. Mobile systems such as eye tracking glasses, where the eye tracker is attached to the subject, are more intrusive, but are better suited for cases where subjects are viewing objects in the three-dimensional (3D) environment. In this paper, we describe how remote gaze tracking systems developed for 2D computer screens can be used to track gaze points in a 3D environment. The system is non-intrusive. It compensates for small head movements by the user, so that the head need not be stabilized by a chin rest or bite bar. The system maps the 3D gaze points of the user onto 2D images from a scene camera and is also located remotely from the subject. Measurement results from this system indicate that it is able to estimate gaze points in the scene camera to within one degree over a wide range of head positions.
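
    Mapping a 3D gaze point onto the 2D scene-camera image, as the system above does, amounts to a perspective projection. A pinhole-camera sketch with made-up intrinsic parameters (fx, fy, cx, cy are illustrative values, not the system's calibration):

    ```python
    def project_to_scene_camera(point_3d, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
        """Project a 3D gaze point (scene-camera coordinates, Z forward,
        in metres) to pixel coordinates with a pinhole camera model."""
        X, Y, Z = point_3d
        if Z <= 0:
            raise ValueError("point must lie in front of the camera")
        return (fx * X / Z + cx, fy * Y / Z + cy)
    ```

    Real systems also apply a lens-distortion model and calibrate the intrinsics per device.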

  12. Gaze Toward Naturalistic Social Scenes by Individuals With Intellectual and Developmental Disabilities: Implications for Augmentative and Alternative Communication Designs.

    Science.gov (United States)

    Liang, Jiali; Wilkinson, Krista

    2018-04-18

    A striking characteristic of the social communication deficits in individuals with autism is atypical patterns of eye contact during social interactions. We used eye-tracking technology to evaluate how the number of human figures depicted and the presence of sharing activity between the human figures in still photographs influenced visual attention by individuals with autism, typical development, or Down syndrome. We sought to examine visual attention to the contents of visual scene displays, a growing form of augmentative and alternative communication support. Eye-tracking technology recorded point-of-gaze while participants viewed 32 photographs in which either 2 or 3 human figures were depicted. Sharing activities between these human figures are either present or absent. The sampling rate was 60 Hz; that is, the technology gathered 60 samples of gaze behavior per second, per participant. Gaze behaviors, including latency to fixate and time spent fixating, were quantified. The overall gaze behaviors were quite similar across groups, regardless of the social content depicted. However, individuals with autism were significantly slower than the other groups in latency to first view the human figures, especially when there were 3 people depicted in the photographs (as compared with 2 people). When participants' own viewing pace was considered, individuals with autism resembled those with Down syndrome. The current study supports the inclusion of social content with various numbers of human figures and sharing activities between human figures into visual scene displays, regardless of the population served. Study design and reporting practices in eye-tracking literature as it relates to autism and Down syndrome are discussed. https://doi.org/10.23641/asha.6066545.
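
    Gaze measures such as latency to first fixate a region and total time spent fixating can be derived from 60 Hz point-of-gaze samples by counting samples inside an area of interest (AOI). A simplified sketch (real pipelines first aggregate raw samples into fixations; names are ours):

    ```python
    def aoi_metrics(samples, aoi, fs=60.0):
        """Latency to the first sample in the AOI and total time spent there.

        samples: sequence of (x, y) gaze points sampled at fs Hz.
        aoi: (x_min, y_min, x_max, y_max) rectangle.
        Returns (latency_s or None, dwell_s).
        """
        x0, y0, x1, y1 = aoi
        latency, hits = None, 0
        for i, (x, y) in enumerate(samples):
            if x0 <= x <= x1 and y0 <= y <= y1:
                hits += 1
                if latency is None:
                    latency = i / fs   # first sample index -> seconds
        return latency, hits / fs
    ```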

  13. Gaze strategy in the free flying zebra finch (Taeniopygia guttata).

    Directory of Open Access Journals (Sweden)

    Dennis Eckmeier

    Fast-moving animals depend on cues derived from the optic flow on their retina. Optic flow from translational locomotion includes information about the three-dimensional composition of the environment, while optic flow experienced during rotational self-motion does not. Thus, a saccadic gaze strategy that segregates rotations from translational movements during locomotion will facilitate extraction of spatial information from the visual input. We analysed whether birds use such a strategy by high-speed video recording of zebra finches from two directions during an obstacle avoidance task. Each frame of the recording was examined to derive the position and orientation of the beak in three-dimensional space. The data show that in all flights the head orientation was shifted in a saccadic fashion and was kept straight between saccades. Therefore, birds use a gaze strategy that actively stabilizes their gaze during translation to simplify optic-flow-based navigation. This is the first evidence of birds actively optimizing optic flow during flight.
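
    The saccade-and-stabilize pattern reported here is typically segmented from orientation traces with an angular-velocity threshold. An illustrative sketch (the threshold value and frame-by-frame differencing are our assumptions, not the authors' exact analysis):

    ```python
    def detect_saccades(angles_deg, fs, vel_thresh=300.0):
        """Return (start, end) sample indices of saccades, defined as runs
        where frame-to-frame angular velocity exceeds vel_thresh (deg/s)."""
        saccades, start = [], None
        for i in range(1, len(angles_deg)):
            vel = abs(angles_deg[i] - angles_deg[i - 1]) * fs
            fast = vel > vel_thresh
            if fast and start is None:
                start = i - 1                      # saccade onset
            elif not fast and start is not None:
                saccades.append((start, i - 1))    # saccade offset
                start = None
        if start is not None:
            saccades.append((start, len(angles_deg) - 1))
        return saccades
    ```

    Samples outside the detected saccades correspond to the stabilized, translation-only gaze intervals from which spatial information can be extracted.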

  14. COGAIN2009 - "Gaze interaction for those who want it most"

    DEFF Research Database (Denmark)

    …with substantial amounts of applications to support communication, learning and entertainment already in use. However, there are still some uncertainties about this new technology amongst communication specialists and funding institutions. The 5th COGAIN conference will focus on spreading the experiences of people using gaze interaction in their daily life to potential users and specialists who have yet to benefit from it. The theme of the conference is "Gaze interaction for those who want it most". We present a total of 18 papers that have been reviewed and accepted by leading researchers and communication specialists. Several papers address gaze-based access to computer applications and several papers focus on environmental control. Previous COGAIN conferences have been a most effective launch pad for original new research ideas. Some of them have since found their way into journals and other conferences…

  15. Investigating social gaze as an action-perception online performance

    Directory of Open Access Journals (Sweden)

    Ouriel eGrynszpan

    2012-04-01

    Full Text Available In interpersonal interactions, linguistic information is complemented by non-linguistic information originating largely from facial expressions. The study of online face-to-face social interaction thus entails investigating the multimodal simultaneous processing of oral and visual percepts. Moreover, gaze in and of itself functions as a powerful communicative channel. In this respect, gaze should not be examined as a purely perceptive process but also as an active social performance. We designed a task involving multimodal deciphering of social information based on virtual characters, embedded in naturalistic backgrounds, who directly address the participant with non-literal speech and meaningful facial expressions. Eighteen adult participants were to interpret an equivocal sentence which could be disambiguated by examining the emotional expressions of the character speaking to them face-to-face. To examine self-control and self-awareness of gaze in this context, visual feedback is provided to the participant by a real-time gaze-contingent viewing window centered on the focal point, while the rest of the display is blurred. Eye-tracking data showed that the viewing window induced changes in gaze behaviour, notably longer visual fixations. Notwithstanding, only half the participants ascribed the window displacements to their eye movements. These results highlight the dissociation between non-volitional gaze adaptation and self-ascription of agency. Such dissociation provides support for a two-step account of the sense of agency composed of pre-noetic monitoring mechanisms and reflexive processes. We comment upon these results, which illustrate the relevance of our method for studying online social cognition, especially concerning Autism Spectrum Disorders (ASD), where poor pragmatic understanding of oral speech is considered linked to visual peculiarities that impede face exploration.

  16. Model-driven gaze simulation for the blind person in face-to-face communication

    NARCIS (Netherlands)

    Qiu, S.; Anas, S.A.B.; Osawa, H.; Rauterberg, G.W.M.; Hu, J.

    2016-01-01

    In face-to-face communication, eye gaze is integral to a conversation, supplementing verbal language. Sighted people often use eye gaze to convey nonverbal information in social interactions, which a blind conversation partner cannot access or react to. In this paper, we present E-Gaze glasses

  17. Right Hemispheric Dominance in Gaze-Triggered Reflexive Shift of Attention in Humans

    Science.gov (United States)

    Okada, Takashi; Sato, Wataru; Toichi, Motomi

    2006-01-01

    Recent findings suggest a right hemispheric dominance in gaze-triggered shifts of attention. The aim of this study was to clarify the dominant hemisphere in the gaze processing that mediates attentional shift. A target localization task, with preceding non-predictive gaze cues presented to each visual field, was undertaken by 44 healthy subjects,…

  18. Look together : Using gaze for assisting co-located collaborative search

    NARCIS (Netherlands)

    Zhang, Y.; Pfeuffer, Ken; Chong, Ming Ki; Alexander, Jason; Bulling, Andreas; Gellersen, Hans

    2017-01-01

    Gaze information provides an indication of users' focus, which complements remote collaboration tasks, as distant users can see their partner’s focus. In this paper, we apply gaze to co-located collaboration, where users’ gaze locations are presented on the same display, to help collaboration between

  19. Gaze Step Distributions Reflect Fixations and Saccades: A Comment on Stephen and Mirman (2010)

    Science.gov (United States)

    Bogartz, Richard S.; Staub, Adrian

    2012-01-01

    In three experimental tasks Stephen and Mirman (2010) measured gaze steps, the distance in pixels between gaze positions on successive samples from an eyetracker. They argued that the distribution of gaze steps is best fit by the lognormal distribution, and based on this analysis they concluded that interactive cognitive processes underlie eye…

  20. In the presence of conflicting gaze cues, fearful expression and eye-size guide attention.

    Science.gov (United States)

    Carlson, Joshua M; Aday, Jacob

    2017-10-19

    Humans are social beings that often interact in multi-individual environments. As such, we are frequently confronted with nonverbal social signals, including eye-gaze direction, from multiple individuals. Yet, the factors that allow for the prioritisation of certain gaze cues over others are poorly understood. Using a modified conflicting gaze paradigm, we tested the hypothesis that fearful gaze would be favoured amongst competing gaze cues. We further hypothesised that this effect is related to the increased sclera exposure, which is characteristic of fearful expressions. Across three experiments, we found that fearful, but not happy, gaze guides observers' attention over competing non-emotional gaze. The guidance of attention by fearful gaze appears to be linked to increased sclera exposure. However, differences in sclera exposure do not prioritise competing gazes of other types. Thus, fearful gaze guides attention among competing cues and this effect is facilitated by increased sclera exposure - but increased sclera exposure per se does not guide attention. The prioritisation of fearful gaze over non-emotional gaze likely represents an adaptive means of selectively attending to survival-relevant spatial locations.

  1. A comprehensive gaze stabilization controller based on cerebellar internal models

    DEFF Research Database (Denmark)

    Vannucci, Lorenzo; Falotico, Egidio; Tolu, Silvia

    2017-01-01

    . The VOR works in conjunction with the opto-kinetic reflex (OKR), which is a visual feedback mechanism that allows the eye to move at the same speed as the observed scene. Together they keep the image stationary on the retina. In this work we implement on a humanoid robot a model of gaze stabilization...... based on the coordination of the VCR, VOR and OKR. The model, inspired by neuroscientific cerebellar theories, is provided with learning and adaptation capabilities based on internal models. We present the results for the gaze stabilization model on three sets of experiments conducted on the SABIAN robot...

  2. Eye gaze tracking based on the shape of pupil image

    Science.gov (United States)

    Wang, Rui; Qiu, Jian; Luo, Kaiqing; Peng, Li; Han, Peng

    2018-01-01

    The eye tracker is an important instrument for research in psychology, widely used in attention, visual perception, reading and other fields of research. Because of its potential role in human-computer interaction, eye-gaze tracking has been a topic of research in many fields over the last decades. Nowadays, with the development of technology, non-intrusive methods are more and more welcome. In this paper, we present a method based on the shape of the pupil image to estimate the gaze point of human eyes without any intrusive devices such as a hat or a pair of glasses. After fitting an ellipse to the captured pupil image, we can determine the direction of fixation from the shape of the pupil. The innovative aspect of this method is that it uses the shape of the pupil directly, avoiding much more complicated algorithms. The proposed approach is helpful for the study of eye-gaze tracking: it needs only one camera, without infrared light, and determines the direction of gaze from changes in the shape of the pupil; no additional equipment is required.
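    A minimal sketch of the geometric idea behind shape-based gaze estimation: a circular pupil viewed off-axis projects to an ellipse whose minor-to-major axis ratio equals the cosine of the viewing angle. The axis values below are illustrative; the method described above would first fit an ellipse to the segmented pupil image.

```python
import math

def gaze_angle_deg(major_axis, minor_axis):
    """Estimate the angle (degrees) between the gaze direction and the
    camera axis from the fitted pupil ellipse's axis lengths."""
    ratio = min(minor_axis / major_axis, 1.0)  # guard against rounding noise
    return math.degrees(math.acos(ratio))

print(gaze_angle_deg(10.0, 10.0))           # circular pupil: looking at the camera
print(round(gaze_angle_deg(10.0, 5.0), 1))  # half as wide: 60 degrees off-axis
```

    Note that this recovers only the magnitude of the off-axis angle; determining which way the eye is turned additionally requires the orientation of the fitted ellipse.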

  3. Examining the durability of incidentally learned trust from gaze cues.

    Science.gov (United States)

    Strachan, James W A; Tipper, Steven P

    2017-10-01

    In everyday interactions we find our attention follows the eye gaze of faces around us. As this cueing is so powerful and difficult to inhibit, gaze can therefore be used to facilitate or disrupt visual processing of the environment, and when we experience this we infer information about the trustworthiness of the cueing face. However, to date no studies have investigated how long these impressions last. To explore this we used a gaze-cueing paradigm where faces consistently demonstrated either valid or invalid cueing behaviours. Previous experiments show that valid faces are subsequently rated as more trustworthy than invalid faces. We replicate this effect (Experiment 1) and then include a brief interference task in Experiment 2 between gaze cueing and trustworthiness rating, which weakens but does not completely eliminate the effect. In Experiment 3, we explore whether greater familiarity with the faces improves the durability of trust learning and find that the effect is more resilient with familiar faces. Finally, in Experiment 4, we push this further and show that evidence of trust learning can be seen up to an hour after cueing has ended. Taken together, our results suggest that incidentally learned trust can be durable, especially for faces that deceive.

  4. Attention, Exposure Duration, and Gaze Shifting in Naming Performance

    Science.gov (United States)

    Roelofs, Ardi

    2011-01-01

    Two experiments are reported in which the role of attribute exposure duration in naming performance was examined by tracking eye movements. Participants were presented with color-word Stroop stimuli and left- or right-pointing arrows on different sides of a computer screen. They named the color attribute and shifted their gaze to the arrow to…

  5. Towards emotion modeling based on gaze dynamics in generic interfaces

    DEFF Research Database (Denmark)

    Vester-Christensen, Martin; Leimberg, Denis; Ersbøll, Bjarne Kjær

    2005-01-01

    Gaze detection can be a useful ingredient in generic human computer interfaces if current technical barriers are overcome. We discuss the feasibility of concurrent posture and eye-tracking in the context of single (low cost) camera imagery. The ingredients in the approach are posture and eye region...

  6. Biasing moral decisions by exploiting the dynamics of eye gaze.

    Science.gov (United States)

    Pärnamets, Philip; Johansson, Petter; Hall, Lars; Balkenius, Christian; Spivey, Michael J; Richardson, Daniel C

    2015-03-31

    Eye gaze is a window onto cognitive processing in tasks such as spatial memory, linguistic processing, and decision making. We present evidence that information derived from eye gaze can be used to change the course of individuals' decisions, even when they are reasoning about high-level, moral issues. Previous studies have shown that when an experimenter actively controls what an individual sees, the experimenter can affect simple decisions with alternatives of almost equal valence. Here we show that if an experimenter passively knows when individuals move their eyes, the experimenter can change complex moral decisions. This causal effect is achieved by simply adjusting the timing of the decisions. We monitored participants' eye movements during a two-alternative forced-choice task with moral questions. One option was randomly predetermined as a target. At the moment participants had fixated the target option for a set amount of time, we terminated their deliberation and prompted them to choose between the two alternatives. Although participants were unaware of this gaze-contingent manipulation, their choices were systematically biased toward the target option. We conclude that even abstract moral cognition is partly constituted by interactions with the immediate environment and is likely supported by gaze-dependent decision processes. By tracking the interplay between individuals, their sensorimotor systems, and the environment, we can influence the outcome of a decision without directly manipulating the content of the information available to them.
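    The gaze-contingent timing rule described above can be sketched as follows: accumulate dwell time on the randomly pre-chosen target option and trigger the choice prompt once it crosses a threshold. The sample period, dwell threshold, and gaze sequence are illustrative assumptions, not the study's parameters.

```python
def prompt_time_ms(samples, target, dwell_ms=750, sample_ms=50):
    """Return the elapsed time (ms) at which to interrupt deliberation,
    or None if the dwell criterion is never met.

    samples: fixated option label for each eye-tracker sample.
    """
    dwell = 0
    for i, fixated in enumerate(samples):
        if fixated == target:
            dwell += sample_ms
        if dwell >= dwell_ms:
            return (i + 1) * sample_ms
    return None

# Participant looks at option A, then settles on option B (the target):
# the prompt fires once 750 ms of total dwell on B has accumulated.
gaze = ["A"] * 10 + ["B"] * 20
print(prompt_time_ms(gaze, target="B"))
```

    Because the prompt arrives while the participant happens to be looking at the target, the choice is biased toward it without any change to the information displayed.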

  7. Self-Monitoring of Gaze in High Functioning Autism

    Science.gov (United States)

    Grynszpan, Ouriel; Nadel, Jacqueline; Martin, Jean-Claude; Simonin, Jerome; Bailleul, Pauline; Wang, Yun; Gepner, Daniel; Le Barillier, Florence; Constant, Jacques

    2012-01-01

    Atypical visual behaviour has been recently proposed to account for much of social misunderstanding in autism. Using an eye-tracking system and a gaze-contingent lens display, the present study explores self-monitoring of eye motion in two conditions: free visual exploration and guided exploration via blurring the visual field except for the focal…

  8. The Relationship between Children's Gaze Reporting and Theory of Mind

    Science.gov (United States)

    D'Entremont, Barbara; Seamans, Elizabeth; Boudreau, Elyse

    2012-01-01

    Seventy-nine 3- and 4-year-old children were tested on gaze-reporting ability and Wellman and Liu's (2004) continuous measure of theory of mind (ToM). Children were better able to report where someone was looking when eye and head direction were provided as a cue compared with when only eye direction cues were provided. With the exception of…

  9. Maori in the Kingdom of the Gaze: Subjects or Critics?

    Science.gov (United States)

    Mika, Carl; Stewart, Georgina

    2016-01-01

    For Maori, a real opportunity exists to flesh out some terms and concepts that Western thinkers have adopted and that precede disciplines but necessarily inform them. In this article, we are intent on describing one of these precursory phenomena--Foucault's Gaze--within a framework that accords with a Maori philosophical framework. Our discussion…

  10. Gaze Embeddings for Zero-Shot Image Classification

    NARCIS (Netherlands)

    Karessli, N.; Akata, Z.; Schiele, B.; Bulling, A.

    2017-01-01

    Zero-shot image classification using auxiliary information, such as attributes describing discriminative object properties, requires time-consuming annotation by domain experts. We instead propose a method that relies on human gaze as auxiliary information, exploiting that even non-expert users have

  11. Strange-face illusions during inter-subjective gazing.

    Science.gov (United States)

    Caputo, Giovanni B

    2013-03-01

    In normal observers, gazing at one's own face in the mirror for a few minutes, at a low illumination level, triggers the perception of strange faces, a new visual illusion that has been named 'strange-face in the mirror'. Individuals see huge distortions of their own faces, but they often see monstrous beings, archetypal faces, faces of relatives and deceased, and animals. In the experiment described here, strange-face illusions were perceived when two individuals, in a dimly lit room, gazed at each other in the face. Inter-subjective gazing compared to mirror-gazing produced a higher number of different strange-faces. Inter-subjective strange-face illusions were always dissociative of the subject's self and supported a moderate feeling of their reality, indicating a temporary loss of self-agency. Unconscious synchronization of event-related responses to illusions was found between members in some pairs. Synchrony of illusions may indicate that unconscious response-coordination is caused by the illusion-conjunction of crossed dissociative strange-faces, which are perceived as projections into each other's visual face of reciprocal embodied representations within the pair. Inter-subjective strange-face illusions may be explained by the subject's embodied representations (somaesthetic, kinaesthetic and motor facial pattern) and the other's visual face binding. Unconscious facial mimicry may promote inter-subjective illusion-conjunction, then unconscious joint-action and response-coordination. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Children's Bricolage under the Gaze of Teachers in Sociodramatic Play

    Science.gov (United States)

    Tam, Po Chi

    2013-01-01

    Drawing on the theory of dialogism and the literature on children's culture and cultural resistance, this article investigates the contextual and textual features of the cultural making of a group of children in sociodramatic play in a Hong Kong kindergarten. Different from other, similar studies, this study reports that under the gaze of the…

  13. Learning to interact with a computer by gaze

    DEFF Research Database (Denmark)

    Aoki, Hirotaka; Hansen, John Paulin; Itoh, Kenji

    2008-01-01

    that inefficient eye movements were dramatically reduced after only 15 to 25 sentences of typing, equal to approximately 3-4 hours of practice. The performance data fit a general learning model based on the power law of practice. The learning model can be used to estimate further improvements in gaze typing...

  14. Watch out! Magnetoencephalographic evidence for early modulation of attention orienting by fearful gaze cueing.

    Directory of Open Access Journals (Sweden)

    Fanny Lachat

    Full Text Available Others' gaze and emotional facial expression are important cues for the process of attention orienting. Here, we investigated with magnetoencephalography (MEG) whether the combination of averted gaze and fearful expression may elicit a selectively early effect of attention orienting on the brain responses to targets. We used the direction of gaze of centrally presented fearful and happy faces as the spatial attention orienting cue in a Posner-like paradigm where the subjects had to detect a target checkerboard presented at gazed-at (valid trials) or non-gazed-at (invalid trials) locations of the screen. We showed that the combination of averted gaze and fearful expression resulted in a very early attention orienting effect in the form of additional parietal activity between 55 and 70 ms for the valid versus invalid targets following fearful gaze cues. No such effect was obtained for the targets following happy gaze cues. This early cue-target validity effect, selective for fearful gaze cues, involved the left superior parietal region and the left lateral middle occipital region. These findings provide the first evidence for an effect of attention orienting induced by fearful gaze in the time range of the C1. In doing so, they demonstrate the selective impact of combined gaze and fearful expression cues in the process of attention orienting.

  15. Flexible coordination of stationary and mobile conversations with gaze: Resource allocation among multiple joint activities

    Directory of Open Access Journals (Sweden)

    Eric Mayor

    2016-10-01

    Full Text Available Gaze is instrumental in coordinating face-to-face social interactions. But little is known about gaze use when social interactions co-occur with other joint activities. We investigated the case of walking while talking. We assessed how gaze gets allocated among various targets in mobile conversations, whether allocation of gaze to other targets affects conversational coordination, and whether reduced availability of gaze for conversational coordination affects conversational performance and content. In an experimental study, pairs were videotaped in four conditions of mobility (standing still, talking while walking along a straight-line itinerary, talking while walking along a complex itinerary, or walking along a complex itinerary with no conversational task). Gaze to partners was substantially reduced in mobile conversations, but gaze was still used to coordinate conversation via displays of mutual orientation, and conversational performance and content were not different between stationary and mobile conditions. Results expand the phenomena of multitasking to joint activities.
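    The gaze-allocation measure described above reduces to computing, from per-frame annotations of what each walker is looking at, the share of time spent on each target. The labels and counts below are illustrative, not the study's data.

```python
from collections import Counter

def gaze_allocation(frames):
    """Map each annotated gaze target to its proportion of total frames."""
    counts = Counter(frames)
    total = len(frames)
    return {target: n / total for target, n in counts.items()}

# Hypothetical mobile conversation: gaze to the partner is reduced
# because the path and the surroundings also demand visual attention.
walking = ["partner"] * 2 + ["path"] * 6 + ["surroundings"] * 2
print(gaze_allocation(walking))
```

    Comparing such proportions between the stationary and mobile conditions quantifies how much gaze is withdrawn from the partner when walking competes for it.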

  16. Baby schema in human and animal faces induces cuteness perception and gaze allocation in children

    Directory of Open Access Journals (Sweden)

    Marta eBorgi

    2014-05-01

    Full Text Available The baby schema concept was originally proposed as a set of infantile traits with high appeal for humans, subsequently shown to elicit caretaking behavior and to affect cuteness perception and attentional processes. However, it is unclear whether the response to the baby schema may be extended to the human-animal bond context. Moreover, questions remain as to whether the cute response is constant and persistent or whether it changes with development. In the present study we parametrically manipulated the baby schema in images of humans, dogs and cats. We analyzed responses of 3-6-year-old children, using both explicit (i.e. cuteness ratings) and implicit (i.e. eye gaze patterns) measures. By means of eye-tracking, we assessed children’s preferential attention to images varying only in the degree of baby schema and explored participants’ fixation patterns during a cuteness task. For comparative purposes, cuteness ratings were also obtained in a sample of adults. Overall our results show that the response to an infantile facial configuration emerges early during development. In children, the baby schema affects both cuteness perception and gaze allocation to infantile stimuli and to specific facial features, an effect not simply limited to human faces. In line with previous research, the results confirm humans' positive appraisal of animals and inform both educational and therapeutic interventions involving pets, helping to minimize risk factors (e.g. dog bites).

  17. Deficient gaze pattern during virtual multiparty conversation in patients with schizophrenia.

    Science.gov (United States)

    Han, Kiwan; Shin, Jungeun; Yoon, Sang Young; Jang, Dong-Pyo; Kim, Jae-Jin

    2014-06-01

    Virtual reality has been used to measure abnormal social characteristics, particularly in one-to-one situations. In real life, however, conversations with multiple companions are common and more complicated than two-party conversations. In this study, we explored the features of social behaviors in patients with schizophrenia during virtual multiparty conversations. Twenty-three patients with schizophrenia and 22 healthy controls performed the virtual three-party conversation task, which included leading and aiding avatars, positive- and negative-emotion-laden situations, and listening and speaking phases. Patients showed a significant negative correlation in the listening phase between the amount of gaze on the between-avatar space and reasoning ability, and demonstrated increased gaze on the between-avatar space in the speaking phase that was uncorrelated with attentional ability. These results suggest that patients with schizophrenia have active avoidance of eye contact during three-party conversations. Virtual reality may provide a useful way to measure abnormal social characteristics during multiparty conversations in schizophrenia. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Adaptive eye-gaze tracking using neural-network-based user profiles to assist people with motor disability.

    Science.gov (United States)

    Sesin, Anaelis; Adjouadi, Malek; Cabrerizo, Mercedes; Ayala, Melvin; Barreto, Armando

    2008-01-01

    This study developed an adaptive real-time human-computer interface (HCI) that serves as an assistive technology tool for people with severe motor disability. The proposed HCI design uses eye gaze as the primary computer input device. Controlling the mouse cursor with raw eye coordinates results in sporadic motion of the pointer because of the saccadic nature of the eye. Even though eye movements are subtle and completely imperceptible under normal circumstances, they considerably affect the accuracy of an eye-gaze-based HCI. The proposed HCI system is novel because it adapts to each specific user's different and potentially changing jitter characteristics through the configuration and training of an artificial neural network (ANN) that is structured to minimize the mouse jitter. This task is based on feeding the ANN a user's initially recorded eye-gaze behavior through a short training session. The ANN finds the relationship between the gaze coordinates and the mouse cursor position based on the multilayer perceptron model. An embedded graphical interface is used during the training session to generate user profiles that make up these unique ANN configurations. The results with 12 subjects in test 1, which involved following a moving target, showed an average jitter reduction of 35%; the results with 9 subjects in test 2, which involved following the contour of a square object, showed an average jitter reduction of 53%. In both tests, the outcomes were trajectories that were significantly smoother and able to reach fixed or moving targets with relative ease, within a 5% error margin or deviation from the desired trajectories. The positive effects of such jitter reduction are presented graphically for visual appreciation.
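    The study trains a per-user multilayer perceptron; as a much simpler stand-in, the sketch below shows the effect such smoothing aims for, using an exponential moving average on raw gaze coordinates. The smoothing factor and sample data are illustrative assumptions, and the jitter metric here is a plain mean frame-to-frame displacement, not the paper's measure.

```python
def smooth(points, alpha=0.3):
    """Exponentially smooth a sequence of (x, y) gaze samples."""
    sx, sy = points[0]
    out = [(sx, sy)]
    for x, y in points[1:]:
        sx = alpha * x + (1 - alpha) * sx
        sy = alpha * y + (1 - alpha) * sy
        out.append((sx, sy))
    return out

def jitter(points):
    """Mean absolute frame-to-frame displacement (city-block distance)."""
    steps = [abs(x2 - x1) + abs(y2 - y1)
             for (x1, y1), (x2, y2) in zip(points, points[1:])]
    return sum(steps) / len(steps)

# Noisy fixation around (100, 100): smoothing reduces pointer jitter.
raw = [(100, 100), (108, 95), (99, 104), (107, 98), (101, 103)]
print(jitter(raw) > jitter(smooth(raw)))
```

    The trade-off of any such filter is lag during genuine saccades, which is one motivation for learning user-specific smoothing instead of a fixed filter.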

  19. Early Left Parietal Activity Elicited by Direct Gaze: A High-Density EEG Study

    Science.gov (United States)

    Burra, Nicolas; Kerzel, Dirk; George, Nathalie

    2016-01-01

    Gaze is one of the most important cues for human communication and social interaction. In particular, gaze contact is the most primary form of social contact and it is thought to capture attention. A very early-differentiated brain response to direct versus averted gaze has been hypothesized. Here, we used high-density electroencephalography to test this hypothesis. Topographical analysis allowed us to uncover a very early topographic modulation (40–80 ms) of event-related responses to faces with direct as compared to averted gaze. This modulation was obtained only in the condition where intact broadband faces–as opposed to high-pass or low-pass filtered faces–were presented. Source estimation indicated that this early modulation involved the posterior parietal region, encompassing the left precuneus and inferior parietal lobule. This supports the idea that it reflected an early orienting response to direct versus averted gaze. Accordingly, in a follow-up behavioural experiment, we found faster response times to the direct gaze than to the averted gaze broadband faces. In addition, classical evoked potential analysis showed that the N170 peak amplitude was larger for averted gaze than for direct gaze. Taken together, these results suggest that direct gaze may be detected at a very early processing stage, involving a parallel route to the ventral occipito-temporal route of face perceptual analysis. PMID:27880776

  20. Proximity and Gaze Influences Facial Temperature: A Thermal Infrared Imaging Study.

    Directory of Open Access Journals (Sweden)

    Stephanos eIoannou

    2014-08-01

    Full Text Available Direct gaze and interpersonal proximity are known to lead to changes in psycho-physiology, behaviour and brain function. We know little, however, about subtler facial reactions such as rise and fall in temperature, which may be sensitive to contextual effects and functional in social interactions. Using thermal infrared imaging cameras, 18 female adult participants were filmed at two interpersonal distances (intimate and social) and two gaze conditions (averted and direct). The order of variation in distance was counterbalanced: half the participants experienced a female experimenter’s gaze at the social distance first, before the intimate distance (a socially ‘normal’ order), and half experienced the intimate distance first and then the social distance (an odd social order). At both distances averted gaze always preceded direct gaze. We found strong correlations in thermal changes between six areas of the face (forehead, chin, cheeks, nose, maxillary and periorbital regions) for all experimental conditions and developed a composite measure of thermal shifts for all analyses. Interpersonal proximity led to a thermal rise, but only in the ‘normal’ social order. Direct gaze, compared to averted gaze, led to a thermal increase at both distances, with a stronger effect at intimate distance, in both orders of distance variation. Participants reported direct gaze as more intrusive than averted gaze, especially at the intimate distance. These results demonstrate the powerful effects of another person’s gaze on psycho-physiological responses, even at a distance and independent of context.

  1. The Eyes Are the Windows to the Mind: Direct Eye Gaze Triggers the Ascription of Others' Minds.

    Science.gov (United States)

    Khalid, Saara; Deska, Jason C; Hugenberg, Kurt

    2016-12-01

    Eye gaze is a potent source of social information with direct eye gaze signaling the desire to approach and averted eye gaze signaling avoidance. In the current work, we proposed that eye gaze signals whether or not to impute minds into others. Across four studies, we manipulated targets' eye gaze (i.e., direct vs. averted eye gaze) and measured explicit mind ascriptions (Study 1a, Study 1b, and Study 2) and beliefs about the likelihood of targets having mind (Study 3). In all four studies, we find novel evidence that the ascription of sophisticated humanlike minds to others is signaled by the display of direct eye gaze relative to averted eye gaze. Moreover, we provide evidence suggesting that this differential mentalization is due, at least in part, to beliefs that direct gaze targets are more likely to instigate social interaction. In short, eye contact triggers mind perception. © 2016 by the Society for Personality and Social Psychology, Inc.

  2. Influence of Gaze Direction on Face Recognition: A Sensitive Effect

    Directory of Open Access Journals (Sweden)

    Noémy Daury

    2011-08-01

    Full Text Available This study was aimed at determining the conditions in which eye contact may improve recognition memory for faces. Different stimuli and procedures were tested in four experiments. The effect of gaze direction on memory was found when a simple “yes-no” recognition task was used, but not when the recognition task was more complex (e.g., including “Remember-Know” judgements, cf. Experiment 2, or confidence ratings, cf. Experiment 4). Moreover, even when a “yes-no” recognition paradigm was used, the effect occurred with one series of stimuli (cf. Experiment 1) but not with another (cf. Experiment 3). The difficulty of producing the positive effect of gaze direction on memory is discussed.

  3. Horizontal gaze palsy with progressive scoliosis: CT and MR findings

    Energy Technology Data Exchange (ETDEWEB)

    Bomfim, Rodrigo C.; Tavora, Daniel G.F.; Nakayama, Mauro; Gama, Romulo L. [Sarah Network of Rehabilitation Hospitals, Department of Radiology, Ceara (Brazil)

    2009-02-15

    Horizontal gaze palsy with progressive scoliosis (HGPPS) is a rare congenital disorder characterized by absence of conjugate horizontal eye movements and progressive scoliosis developing in childhood and adolescence. We present a child with clinical and neuroimaging findings typical of HGPPS. CT and MRI of the brain demonstrated pons hypoplasia, absence of the facial colliculi, butterfly configuration of the medulla and a deep midline pontine cleft. We briefly discuss the imaging aspects of this rare entity in light of the current literature. (orig.)

  4. Does the 'P300' speller depend on eye gaze?

    Science.gov (United States)

    Brunner, P.; Joshi, S.; Briskin, S.; Wolpaw, J. R.; Bischof, H.; Schalk, G.

    2010-10-01

    Many people affected by debilitating neuromuscular disorders such as amyotrophic lateral sclerosis, brainstem stroke or spinal cord injury are impaired in their ability to, or are even unable to, communicate. A brain-computer interface (BCI) uses brain signals, rather than muscles, to re-establish communication with the outside world. One particular BCI approach is the so-called 'P300 matrix speller' that was first described by Farwell and Donchin (1988 Electroencephalogr. Clin. Neurophysiol. 70 510-23). It has been widely assumed that this method does not depend on the ability to focus on the desired character, because it was thought that it relies primarily on the P300-evoked potential and minimally, if at all, on other EEG features such as the visual-evoked potential (VEP). This issue is highly relevant for the clinical application of this BCI method, because eye movements may be impaired or lost in the relevant user population. This study investigated the extent to which the performance in a 'P300' speller BCI depends on eye gaze. We evaluated the performance of 17 healthy subjects using a 'P300' matrix speller under two conditions. Under one condition ('letter'), the subjects focused their eye gaze on the intended letter, while under the second condition ('center'), the subjects focused their eye gaze on a fixation cross that was located in the center of the matrix. The results show that the performance of the 'P300' matrix speller in normal subjects depends in considerable measure on gaze direction. They thereby disprove a widespread assumption in BCI research, and suggest that this BCI might function more effectively for people who retain some eye-movement control. The applicability of these findings to people with severe neuromuscular disabilities (particularly in eye-movements) remains to be determined.

  5. Going deeper on the tourist gaze: considerations about its social determinants

    Directory of Open Access Journals (Sweden)

    Erick Silva Omena de Melo

    2009-09-01

    Full Text Available This article aims at a better understanding of the social processes that lead tourists to different kinds of behavior in the places they visit, framed by the way social classes relate in modern Western society. Theoretical contributions from important scholars such as John Urry, Pierre Bourdieu and Jost Krippendorf are developed in search of new approaches that enhance processual rather than propositional knowledge in tourism. It was found that the tourist gaze is a construction shaped by dialectic and complex relations among several determinant factors, embedded in the different kinds of roles played by subjects in the social arena, which have been consolidated during a long process beginning in the 17th century and taking its present form in the middle of the 20th.

  6. A preliminary study to understand tacit knowledge and visual routines of medical experts through gaze tracking.

    Science.gov (United States)

    Anderson, Blake; Shyu, Chi-Ren

    2010-11-13

    Many decisions made by medical experts are based on scans from advanced imaging technologies. Interpreting a medical image is a trained, systematic procedure and an excellent target for identifying potential visual routines through image informatics. These visual routines derived from experts contain many clues about visual knowledge and its representation. This study uses an inexpensive webcam-based gaze tracking method to collect data from multiple technologists' survey of medical and non-medical images. Through computational analysis of the results, we expect to provide insight into the behaviors and properties related to medical visual routines. Discovering the visual processes associated with medical images will help us recognize and understand the tacit knowledge gained from extensive experience with medical imagery. These expert routines could potentially be used to reduce medical error, train new experts, and provide an understanding of the human visual system in medicine.

  7. Eye blinking in an avian species is associated with gaze shifts.

    Science.gov (United States)

    Yorzinski, Jessica L

    2016-08-30

    Even when animals are actively monitoring their environment, they lose access to visual information whenever they blink. They can strategically time their blinks to minimize information loss and improve visual functioning but we have little understanding of how this process operates in birds. This study therefore examined blinking in freely-moving peacocks (Pavo cristatus) to determine the relationship between their blinks, gaze shifts, and context. Peacocks wearing a telemetric eye-tracker were exposed to a taxidermy predator (Vulpes vulpes) and their blinks and gaze shifts were recorded. Peacocks blinked during the majority of their gaze shifts, especially when gaze shifts were large, thereby timing their blinks to coincide with periods when visual information is already suppressed. They inhibited their blinks the most when they exhibited high rates of gaze shifts and were thus highly alert. Alternative hypotheses explaining the link between blinks and gaze shifts are discussed.

  8. Eye and head movements shape gaze shifts in Indian peafowl.

    Science.gov (United States)

    Yorzinski, Jessica L; Patricelli, Gail L; Platt, Michael L; Land, Michael F

    2015-12-01

    Animals selectively direct their visual attention toward relevant aspects of their environments. They can shift their attention using a combination of eye, head and body movements. While we have a growing understanding of eye and head movements in mammals, we know little about these processes in birds. We therefore measured the eye and head movements of freely behaving Indian peafowl (Pavo cristatus) using a telemetric eye-tracker. Both eye and head movements contributed to gaze changes in peafowl. When gaze shifts were smaller, eye movements played a larger role than when gaze shifts were larger. The duration and velocity of eye and head movements were positively related to the size of the eye and head movements, respectively. In addition, the coordination of eye and head movements in peafowl differed from that in mammals; peafowl exhibited a near-absence of the vestibulo-ocular reflex, which may partly result from the peafowl's ability to move their heads as quickly as their eyes. © 2015. Published by The Company of Biologists Ltd.

  9. Gazes and Bodies in the Pornographies of Desire

    Directory of Open Access Journals (Sweden)

    Mirko Lino

    2013-06-01

    Full Text Available This essay proposes a social reading of cinematic pornography using the category of the gaze and its gendered declinations. Starting from Laura Mulvey's famous essay, Visual Pleasure and Narrative Cinema (1975), the analysis shifts from traditional narrative cinema to pornographic cinema, treating the latter as a model that counterpoints the voyeurism of the male gaze that Mulvey identified in certain films by Sternberg and Hitchcock. To demonstrate this counterpointing nature of pornographic cinema, the essay compares the staging of a desiring gaze in two very different films, Hitchcock's Rear Window (1954) and the Mitchell brothers' Behind the Green Door (1972). The counterpointing nature of the porn movie can also be traced in the dialogue it establishes with mainstream cinema about the limits of the visible in sexual matters. Finally, pornography satisfies the representation of the desire of other gender identities by staging corresponding types of gaze (the female gaze and the queer gaze), which soon become instruments for the social affirmation of sexual diversity.

  10. Subjects and Objects of the Embodied Gaze: Abbas Kiarostami and the Real of the Individual Perspective

    OpenAIRE

    Gyenge Zsolt

    2016-01-01

    It is widely accepted that Abbas Kiarostami’s cinema revolves around the representation of the gaze. Many critics argue that he should be considered a late modernist who repeats the self-reflexive gestures of modernist European cinema decades after they were first introduced. The present paper will contradict this assertion by investigating the problematic of the Kiarostamian gaze and analyzing the perceptual side of the act of looking. I will argue that instead of focusing on the gaze of the...

  11. Assessing the Usability of Gaze-Adapted Interface against Conventional Eye-based Input Emulation

    OpenAIRE

    Kumar, Chandan; Menges, Raphael; Staab, Steffen

    2017-01-01

    In recent years, eye tracking systems have greatly improved, beginning to play a promising role as an input medium. Eye trackers can be used for application control either by simply emulating the mouse and keyboard devices in the traditional graphical user interface, or by customized interfaces for eye gaze events. In this work, we evaluate these two approaches to assess their impact in usability. We present a gaze-adapted Twitter application interface with direct interaction of eye gaze inpu...

  12. How Do We Update Faces? Effects of Gaze Direction and Facial Expressions on Working Memory Updating

    OpenAIRE

    Artuso, Caterina; Palladino, Paola; Ricciardelli, Paola

    2012-01-01

    The aim of the study was to investigate how the biological binding between different facial dimensions, and their social and communicative relevance, may impact updating processes in working memory (WM). We focused on WM updating because it plays a key role in ongoing processing. Gaze direction and facial expression are crucial and changeable components of face processing. Direct gaze enhances the processing of approach-oriented facial emotional expressions (e.g., joy), while averted gaze enh...

  13. Fusing Eye-gaze and Speech Recognition for Tracking in an Automatic Reading Tutor

    DEFF Research Database (Denmark)

    Rasmussen, Morten Højfeldt; Tan, Zheng-Hua

    2013-01-01

    In this paper we present a novel approach for automatically tracking the reading progress using a combination of eye-gaze tracking and speech recognition. The two are fused by first generating word probabilities based on eye-gaze information and then using these probabilities to augment the language …
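    The fusion scheme this record describes — gaze-derived word probabilities used to re-weight the recognizer's word scores — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the Gaussian falloff around fixations, the `sigma` scale, and the log-linear mixing weight `alpha` are all assumptions.

    ```python
    import math

    def gaze_word_probs(fixations, word_positions, sigma=50.0):
        """Assign each on-screen word a probability from its distance (pixels)
        to recent gaze fixations, using a Gaussian falloff (assumed model)."""
        scores = []
        for wx, wy in word_positions:
            s = sum(math.exp(-((wx - fx) ** 2 + (wy - fy) ** 2) / (2 * sigma ** 2))
                    for fx, fy in fixations)
            scores.append(s)
        total = sum(scores) or 1.0
        return [s / total for s in scores]

    def fuse(asr_probs, gaze_probs, alpha=0.5):
        """Log-linear interpolation of ASR word probabilities with the
        gaze-derived probabilities; alpha is an assumed mixing weight."""
        fused = [(a ** (1 - alpha)) * (g ** alpha)
                 for a, g in zip(asr_probs, gaze_probs)]
        total = sum(fused) or 1.0
        return [f / total for f in fused]
    ```

    With two candidate words that the recognizer scores equally, a fixation near the first word tips the fused distribution toward it, which is the intuition behind augmenting the recognizer with gaze evidence.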

  14. Conflict Tasks of Different Types Divergently Affect the Attentional Processing of Gaze and Arrow.

    Science.gov (United States)

    Fan, Lingxia; Yu, Huan; Zhang, Xuemin; Feng, Qing; Sun, Mengdan; Xu, Mengsi

    2018-01-01

    The present study explored the attentional processing mechanisms of gaze and arrow cues in two different types of conflict tasks. In Experiment 1, participants performed a flanker task in which gaze and arrow cues were presented as central targets or bilateral distractors. The congruency between the direction of the target and the distractors was manipulated. Results showed that arrow distractors greatly interfered with the attentional processing of gaze, while the processing of arrow direction was immune to conflict from gaze distractors. Using a spatial compatibility task, Experiment 2 explored the conflict effects exerted on gaze and arrow processing by their relative spatial locations. When the direction of the arrow was in conflict with its spatial layout on screen, response times were slowed; however, the encoding of gaze was unaffected by spatial location. In general, processing of an arrow cue is less influenced by bilateral gaze cues but is affected by irrelevant spatial information, while processing of a gaze cue is greatly disturbed by bilateral arrows but is unaffected by irrelevant spatial information. These different effects of different conflict types on gaze and arrow cues may reflect two relatively distinct modes of attentional processing.

  15. Direct gaze elicits atypical activation of the theory-of-mind network in autism spectrum conditions.

    Science.gov (United States)

    von dem Hagen, Elisabeth A H; Stoyanova, Raliza S; Rowe, James B; Baron-Cohen, Simon; Calder, Andrew J

    2014-06-01

    Eye contact plays a key role in social interaction and is frequently reported to be atypical in individuals with autism spectrum conditions (ASCs). Despite the importance of direct gaze, previous functional magnetic resonance imaging in ASC has generally focused on paradigms using averted gaze. The current study sought to determine the neural processing of faces displaying direct and averted gaze in 18 males with ASC and 23 matched controls. Controls showed an increased response to direct gaze in brain areas implicated in theory-of-mind and gaze perception, including medial prefrontal cortex, temporoparietal junction, posterior superior temporal sulcus region, and amygdala. In contrast, the same regions showed an increased response to averted gaze in individuals with an ASC. This difference was confirmed by a significant gaze direction × group interaction. Relative to controls, participants with ASC also showed reduced functional connectivity between these regions. We suggest that, in the typical brain, perceiving another person gazing directly at you triggers spontaneous attributions of mental states (e.g. he is "interested" in me), and that such mental state attributions to direct gaze may be reduced or absent in the autistic brain.

  16. Simple gaze-contingent cues guide eye movements in a realistic driving simulator

    Science.gov (United States)

    Pomarjanschi, Laura; Dorr, Michael; Bex, Peter J.; Barth, Erhardt

    2013-03-01

    Looking at the right place at the right time is a critical component of driving skill. Therefore, gaze guidance has the potential to become a valuable driving assistance system. In previous work, we have already shown that complex gaze-contingent stimuli can guide attention and reduce the number of accidents in a simple driving simulator. We here set out to investigate whether cues that are simple enough to be implemented in a real car can also capture gaze during a more realistic driving task in a high-fidelity driving simulator. We used a state-of-the-art, wide-field-of-view driving simulator with an integrated eye tracker. Gaze-contingent warnings were implemented using two arrays of light-emitting diodes horizontally fitted below and above the simulated windshield. Thirteen volunteering subjects drove along predetermined routes in a simulated environment populated with autonomous traffic. Warnings were triggered during the approach to half of the intersections, cueing either towards the right or to the left. The remaining intersections were not cued, and served as controls. The analysis of the recorded gaze data revealed that the gaze-contingent cues did indeed have a gaze guiding effect, triggering a significant shift in gaze position towards the highlighted direction. This gaze shift was not accompanied by changes in driving behaviour, suggesting that the cues do not interfere with the driving task itself.

  17. Influences of High-Level Features, Gaze, and Scene Transitions on the Reliability of BOLD Responses to Natural Movie Stimuli

    Science.gov (United States)

    Lu, Kun-Han; Hung, Shao-Chin; Wen, Haiguang; Marussich, Lauren; Liu, Zhongming

    2016-01-01

    Complex, sustained, dynamic, and naturalistic visual stimulation can evoke distributed brain activities that are highly reproducible within and across individuals. However, the precise origins of such reproducible responses remain incompletely understood. Here, we employed concurrent functional magnetic resonance imaging (fMRI) and eye tracking to investigate the experimental and behavioral factors that influence fMRI activity and its intra- and inter-subject reproducibility during repeated movie stimuli. We found that widely distributed and highly reproducible fMRI responses were attributed primarily to the high-level natural content in the movie. In the absence of such natural content, low-level visual features alone in a spatiotemporally scrambled control stimulus evoked significantly reduced degree and extent of reproducible responses, which were mostly confined to the primary visual cortex (V1). We also found that the varying gaze behavior affected the cortical response at the peripheral part of V1 and in the oculomotor network, with minor effects on the response reproducibility over the extrastriate visual areas. Lastly, scene transitions in the movie stimulus due to film editing partly caused the reproducible fMRI responses at widespread cortical areas, especially along the ventral visual pathway. Therefore, the naturalistic nature of a movie stimulus is necessary for driving highly reliable visual activations. In a movie-stimulation paradigm, scene transitions and individuals’ gaze behavior should be taken as potential confounding factors in order to properly interpret cortical activity that supports natural vision. PMID:27564573

  18. Adaptive Gaze Strategies for Locomotion with Constricted Visual Field

    Directory of Open Access Journals (Sweden)

    Colas N. Authié

    2017-07-01

    Full Text Available In retinitis pigmentosa (RP, loss of peripheral visual field accounts for most difficulties encountered in visuo-motor coordination during locomotion. The purpose of this study was to accurately assess the impact of peripheral visual field loss on gaze strategies during locomotion, and identify compensatory mechanisms. Nine RP subjects presenting a central visual field limited to 10–25° in diameter, and nine healthy subjects were asked to walk in one of three directions—straight ahead to a visual target, leftward and rightward through a door frame, with or without obstacle on the way. Whole body kinematics were recorded by motion capture, and gaze direction in space was reconstructed using an eye-tracker. Changes in gaze strategies were identified in RP subjects, including extensive exploration prior to walking, frequent fixations of the ground (even knowing no obstacle was present), of door edges, essentially of the proximal one, of obstacle edge/corner, and alternating door edges fixations when approaching the door. This was associated with more frequent, sometimes larger rapid-eye-movements, larger movements, and forward tilting of the head. Despite the visual handicap, the trajectory geometry was identical between groups, with a small decrease in walking speed in RPs. These findings identify the adaptive changes in sensory-motor coordination, in order to ensure visual awareness of the surrounding, detect changes in spatial configuration, collect information for self-motion, update the postural reference frame, and update egocentric distances to environmental objects. They are of crucial importance for the design of optimized rehabilitation procedures.

  19. Adaptive Gaze Strategies for Locomotion with Constricted Visual Field

    Science.gov (United States)

    Authié, Colas N.; Berthoz, Alain; Sahel, José-Alain; Safran, Avinoam B.

    2017-01-01

    In retinitis pigmentosa (RP), loss of peripheral visual field accounts for most difficulties encountered in visuo-motor coordination during locomotion. The purpose of this study was to accurately assess the impact of peripheral visual field loss on gaze strategies during locomotion, and identify compensatory mechanisms. Nine RP subjects presenting a central visual field limited to 10–25° in diameter, and nine healthy subjects were asked to walk in one of three directions—straight ahead to a visual target, leftward and rightward through a door frame, with or without obstacle on the way. Whole body kinematics were recorded by motion capture, and gaze direction in space was reconstructed using an eye-tracker. Changes in gaze strategies were identified in RP subjects, including extensive exploration prior to walking, frequent fixations of the ground (even knowing no obstacle was present), of door edges, essentially of the proximal one, of obstacle edge/corner, and alternating door edges fixations when approaching the door. This was associated with more frequent, sometimes larger rapid-eye-movements, larger movements, and forward tilting of the head. Despite the visual handicap, the trajectory geometry was identical between groups, with a small decrease in walking speed in RPs. These findings identify the adaptive changes in sensory-motor coordination, in order to ensure visual awareness of the surrounding, detect changes in spatial configuration, collect information for self-motion, update the postural reference frame, and update egocentric distances to environmental objects. They are of crucial importance for the design of optimized rehabilitation procedures. PMID:28798674

  20. Gaze stability of observers watching Op Art pictures.

    Science.gov (United States)

    Zanker, Johannes M; Doyle, Melanie; Walker, Robin

    2003-01-01

    It has been a matter of some debate why we can experience vivid dynamic illusions when looking at static pictures composed from simple black and white patterns. The impression of illusory motion is particularly strong when viewing some of the works of 'Op Artists', such as Bridget Riley's painting Fall. Explanations of the illusory motion have ranged from retinal to cortical mechanisms, and an important role has been attributed to eye movements. To assess the possible contribution of eye movements to the illusory-motion percept we studied the strength of the illusion under different viewing conditions, and analysed the gaze stability of observers viewing the Riley painting and control patterns that do not produce the illusion. Whereas the illusion was reduced, but not abolished, when watching the painting through a pinhole, which reduces the effects of accommodation, it was not perceived in flash afterimages, suggesting an important role for eye movements in generating the illusion for this image. Recordings of eye movements revealed an abundance of small involuntary saccades when looking at the Riley pattern, despite the fact that gaze was kept within the dedicated fixation region. The frequency and particular characteristics of these rapid eye movements can vary considerably between different observers, but, although there was a tendency for gaze stability to deteriorate while viewing a Riley painting, there was no significant difference in saccade frequency between the stimulus and control patterns. Theoretical considerations indicate that such small image displacements can generate patterns of motion signals in a motion-detector network, which may serve as a simple and sufficient, but not necessarily exclusive, explanation for the illusion. Why such image displacements lead to perceptual results with a group of Op Art and similar patterns, but remain invisible for other stimuli, is discussed.
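    The motion-detector account invoked in this abstract can be illustrated with a minimal Reichardt-type correlator. The following toy model is an independent sketch, not the authors' simulation; the 1-D luminance profile and the one-frame delay are assumptions, but they suffice to show how a small image shift between "frames" (as produced by an involuntary microsaccade) yields a net directional motion signal.

    ```python
    def reichardt_response(prev, curr):
        """Minimal Reichardt correlator over a 1-D luminance profile:
        each sample's delayed value is correlated with its neighbour's
        current value, and the mirror-symmetric subunit is subtracted.
        A rightward shift of the pattern gives a positive response,
        a leftward shift a negative one, and no shift gives zero."""
        resp = 0.0
        for i in range(len(curr) - 1):
            resp += prev[i] * curr[i + 1] - prev[i + 1] * curr[i]
        return resp
    ```

    Feeding the detector two copies of a pattern displaced by one sample in either direction produces opposite-signed responses, while an unshifted pattern produces none — the sense in which tiny gaze-driven displacements alone can populate a motion-detector network with signals.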

  1. Mutual Disambiguation of Eye Gaze and Speech for Sight Translation and Reading

    DEFF Research Database (Denmark)

    Kulkarni, Rucha; Jain, Kritika; Bansal, Himanshu

    2013-01-01

    and composition of the two modalities was used for integration. F-measure for Eye-Gaze and Word Accuracy for ASR were used as metrics to evaluate our results. In reading task, we demonstrated a significant improvement in both Eye-Gaze f-measure and speech Word Accuracy. In sight translation task, significant...

  2. Tactile band : accessing gaze signals from the sighted in face-to-face communication

    NARCIS (Netherlands)

    Qiu, S.; Rauterberg, G.W.M.; Hu, J.

    2016-01-01

    Gaze signals, frequently used by the sighted in social interactions as visual cues, are hardly accessible for low-vision and blind people. A concept is proposed to help the blind people access and react to gaze signals in face-to-face communication. 20 blind and low-vision participants were

  3. Attention and Gaze Control in Picture Naming, Word Reading, and Word Categorizing

    Science.gov (United States)

    Roelofs, Ardi

    2007-01-01

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture-word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to…

  4. Gaze-based interaction with public displays using off-the-shelf components

    DEFF Research Database (Denmark)

    San Agustin, Javier; Hansen, John Paulin; Tall, Martin Henrik

    Eye gaze can be used to interact with high-density information presented on large displays. We have built a system employing off-the-shelf hardware components and open-source gaze tracking software that enables users to interact with an interface displayed on a 55” screen using their eye movement...

  5. Face and gaze perception in borderline personality disorder: An electrical neuroimaging study.

    Science.gov (United States)

    Berchio, Cristina; Piguet, Camille; Gentsch, Kornelia; Küng, Anne-Lise; Rihs, Tonia A; Hasler, Roland; Aubry, Jean-Michel; Dayer, Alexandre; Michel, Christoph M; Perroud, Nader

    2017-11-30

    Humans are sensitive to gaze direction from early life, and gaze has social and affective values. Borderline personality disorder (BPD) is a clinical condition characterized by emotional dysregulation and enhanced sensitivity to affective and social cues. In this study we wanted to investigate the temporal-spatial dynamics of spontaneous gaze processing in BPD. We used a 2-back working-memory task, in which neutral faces with direct and averted gaze were presented. Gaze was used as an emotional modulator of event-related potentials to faces. High-density EEG data were acquired in 19 females with BPD and 19 healthy women, and analyzed with a spatio-temporal microstates analysis approach. Independently of gaze direction, BPD patients showed altered N170 and P200 topographies for neutral faces. Source localization revealed that the anterior cingulate and other prefrontal regions were abnormally activated during the N170 component related to face encoding, while middle temporal deactivations were observed during the P200 component. Post-task affective ratings showed that BPD patients had difficulty disambiguating neutral gaze. This study provides first evidence for an early neural bias toward neutral faces in BPD independent of gaze direction and also suggests the importance of considering basic aspects of social cognition in identifying biological risk factors of BPD. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  6. It Takes Time and Experience to Learn How to Interpret Gaze in Mentalistic Terms

    Science.gov (United States)

    Leavens, David A.

    2006-01-01

    What capabilities are required for an organism to evince an "explicit" understanding of gaze as a mentalistic phenomenon? One possibility is that mentalistic interpretations of gaze, like concepts of unseen, supernatural beings, are culturally-specific concepts, acquired through cultural learning. These abstract concepts may either require a…

  7. Coding gaze tracking data with chromatic gradients for VR Exposure Therapy

    DEFF Research Database (Denmark)

    Herbelin, Bruno; Grillon, Helena; De Heras Ciechomski, Pablo

    2007-01-01

    This article presents a simple and intuitive way to represent the eye-tracking data gathered during immersive virtual reality exposure therapy sessions. Eye-tracking technology is used to observe gaze movements during virtual reality sessions, and the gaze-map chromatic gradient coding allows … The approach is fully compatible with different VR exposure systems and provides clinically meaningful data.

  8. Reading pleasure : Light in August and the theory of the gendered gaze

    NARCIS (Netherlands)

    Visser

    1997-01-01

    This article discusses various components of the theory of the gendered gaze, in order to construct a framework for an analysis of how William Faulkner understood and fictionally presented the mechanism of the gaze in Light in August. Linking theory and praxis, this article explores the relationship

  9. Gaze3DFix: Detecting 3D fixations with an ellipsoidal bounding volume.

    Science.gov (United States)

    Weber, Sascha; Schubert, Rebekka S; Vogt, Stefan; Velichkovsky, Boris M; Pannasch, Sebastian

    2017-10-26

    Nowadays, the use of eyetracking to determine 2-D gaze positions is common practice, and several approaches to the detection of 2-D fixations exist, but ready-to-use algorithms to determine eye movements in three dimensions are still missing. Here we present a dispersion-based algorithm with an ellipsoidal bounding volume that estimates 3-D fixations. To this end, 3-D gaze points are obtained using a vector-based approach and are further processed with our algorithm. To evaluate the accuracy of our method, we performed experimental studies with real and virtual stimuli. We obtained good congruence between stimulus position and both the 3-D gaze points and the 3-D fixation locations within the tested range of 200-600 mm. The mean deviation of the 3-D fixations from the stimulus positions was 17 mm for the real as well as for the virtual stimuli, with larger variances at increasing stimulus distances. The described algorithms are implemented in two dynamic-link libraries (Gaze3D.dll and Fixation3D.dll), and we provide a graphical user interface (Gaze3DFixGUI.exe) designed for importing 2-D binocular eyetracking data and calculating both 3-D gaze points and 3-D fixations using the libraries. The Gaze3DFix toolkit, including both libraries and the graphical user interface, is available as open-source software at https://github.com/applied-cognition-research/Gaze3DFix .
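    A dispersion-based fixation detector with an ellipsoidal bounding volume, of the kind this abstract describes, might be sketched as below. This is an independent sketch, not the Gaze3DFix code: the semi-axes `rx`, `ry`, `rz` (mm) and the `min_samples` threshold are illustrative assumptions, with a longer depth axis reflecting the lower depth accuracy of vergence-based 3-D gaze estimates.

    ```python
    def detect_3d_fixations(gaze_points, rx=30.0, ry=30.0, rz=60.0, min_samples=5):
        """Group consecutive 3-D gaze points into a fixation while each new
        point stays inside an ellipsoid centred on the running centroid of
        the current cluster; emit the centroid when the cluster breaks."""
        fixations, cluster = [], []

        def centroid(pts):
            n = len(pts)
            return (sum(p[0] for p in pts) / n,
                    sum(p[1] for p in pts) / n,
                    sum(p[2] for p in pts) / n)

        for p in gaze_points:
            if cluster:
                cx, cy, cz = centroid(cluster)
                inside = (((p[0] - cx) / rx) ** 2 +
                          ((p[1] - cy) / ry) ** 2 +
                          ((p[2] - cz) / rz) ** 2) <= 1.0
                if not inside:
                    if len(cluster) >= min_samples:
                        fixations.append(centroid(cluster))
                    cluster = []
            cluster.append(p)
        if len(cluster) >= min_samples:
            fixations.append(centroid(cluster))
        return fixations
    ```

    For example, six samples near one depth plane followed by six samples 200 mm to the side yield two fixation centroids, since the lateral jump leaves the ellipsoid around the first cluster's centroid.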

  10. Does Gaze Direction Modulate Facial Expression Processing in Children with Autism Spectrum Disorder?

    Science.gov (United States)

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent…

  11. On uniform resampling and gaze analysis of bidirectional texture functions

    Czech Academy of Sciences Publication Activity Database

    Filip, Jiří; Chantler, M.J.; Haindl, Michal

    2009-01-01

    Roč. 6, č. 3 (2009), s. 1-15 ISSN 1544-3558 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593 Grant - others:EC Marie Curie(BE) 41358 Institutional research plan: CEZ:AV0Z10750506 Keywords : BTF * texture * eye tracking Subject RIV: BD - Theory of Information Impact factor: 1.447, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/haindl-on uniform resampling and gaze analysis of bidirectional texture functions.pdf

  12. Refracted Gazes: A Woman Photographer during Mandate Lebanon

    Directory of Open Access Journals (Sweden)

    Yasmine Nachabe

    2012-11-01

    This paper explores the relationship between modernity and femininity as manifested through women's activity, gaze and attire in the photographs of Marie al-Khazen, as well as other photographs taken in the Middle East between the 1930s and the 1940s. Al-Khazen and the women represented in the photographs of this period were part of a cosmopolitan sensibility that reveals the Middle East to be far more international than one might have imagined. Caught between projecting a self-image of the cosmopolitan woman and one of the traditional bedouin, these photographs provide a rich field for tracking the ambiguities of the modern.

  13. Sexualized Branded Entertainment and the Male Consumer Gaze

    Directory of Open Access Journals (Sweden)

    Matthew P. McAllister

    2014-03-01

    Full Text Available This article applies the “male consumer gaze” – integrating work influenced by Erving Goffman and Laura Mulvey – to two branded televised events: the 2011 Victoria’s Secret Fashion Show and the 2012 Hooters International Swimsuit Pageant. Critiqued elements include gendered body positioning, televisual and narrativizing techniques, social and integrated media, and branding strategies that combine to create a flow of consumption-based male gazing. Such trends may intensify with changes in media economics and niche marketing.

  14. The effects of social pressure and emotional expression on the cone of gaze in patients with social anxiety disorder.

    Science.gov (United States)

    Harbort, Johannes; Spiegel, Julia; Witthöft, Michael; Hecht, Heiko

    2017-06-01

    Patients with social anxiety disorder suffer from pronounced fears in social situations. As gaze perception is crucial in these situations, we examined which factors influence the range of gaze directions where mutual gaze is experienced (the cone of gaze). The social stimulus was modified by changing the number of people (heads) present and the emotional expression of their faces. Participants completed a psychophysical task, in which they had to adjust the eyes of a virtual head to gaze at the edge of the range where mutual eye-contact was experienced. The number of heads affected the width of the gaze cone: the more heads, the wider the gaze cone. The emotional expression of the virtual head had no consistent effect on the width of the gaze cone, it did however affect the emotional state of the participants. Angry expressions produced the highest arousal values. Highest valence emerged from happy faces, lowest valence from angry faces. These results suggest that the widening of the gaze cone in social anxiety disorder is not primarily mediated by their altered emotional reactivity. Implications for gaze assessment and gaze training in therapeutic contexts are discussed. Due to interindividual variability, enlarged gaze cones are not necessarily indicative of social anxiety disorder, they merely constitute a correlate at the group level. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Looking at Eye Gaze Processing and Its Neural Correlates in Infancy--Implications for Social Development and Autism Spectrum Disorder

    Science.gov (United States)

    Hoehl, Stefanie; Reid, Vincent M.; Parise, Eugenio; Handl, Andrea; Palumbo, Letizia; Striano, Tricia

    2009-01-01

    The importance of eye gaze as a means of communication is indisputable. However, there is debate about whether there is a dedicated neural module, which functions as an eye gaze detector and when infants are able to use eye gaze cues in a referential way. The application of neuroscience methodologies to developmental psychology has provided new…

  16. Subjects and Objects of the Embodied Gaze: Abbas Kiarostami and the Real of the Individual Perspective

    Directory of Open Access Journals (Sweden)

    Gyenge Zsolt

    2016-12-01

It is widely accepted that Abbas Kiarostami’s cinema revolves around the representation of the gaze. Many critics argue that he should be considered a late modernist who repeats the self-reflexive gestures of modernist European cinema decades after they were first introduced. The present paper will contradict this assertion by investigating the problematic of the Kiarostamian gaze and analyzing the perceptual side of the act of looking. I will argue that instead of focusing on the gaze of the spectator directed towards the filmic image, he exposes a gaze that is fully integrated into the reality to be captured on film. The second part of the paper will explain this by linking the concept of gaze to the Lacanian concept of the order of the Real. Finally, I will contextualize all this by discussing the Iranian director’s position between Eastern and Western traditions of representation.

  17. A testimony to Muzil: Hervé Guibert, Foucault, and the medical gaze.

    Science.gov (United States)

    Rendell, Joanne

    2004-01-01

Testimony to Muzil: Hervé Guibert, Michel Foucault, and the "Medical Gaze" examines the fictional/autobiographical AIDS writings of the French writer Hervé Guibert. Locating Guibert's writings alongside the work of his friend Michel Foucault, the article explores how they echo Foucault's evolving notions of the "medical gaze." The article also explores how Guibert's narrators and Guibert himself (as writer) resist and challenge the medical gaze, a gaze which, particularly in the era of AIDS, has subjected, objectified, and even sometimes punished the body of the gay man. It is argued that these resistances to the gaze offer a literary extension to Foucault's later work on power and resistance strategies.

  18. DIAGNOSIS OF MYASTHENIA GRAVIS USING FUZZY GAZE TRACKING SOFTWARE

    Directory of Open Access Journals (Sweden)

    Javad Rasti

    2015-04-01

Myasthenia Gravis (MG) is an autoimmune disorder, which may lead to paralysis and even death if not treated in time. One of its primary symptoms is severe muscular weakness, initially arising in the eye muscles. Testing the mobility of the eyeball can help in early detection of MG. In this study, software was designed to analyze the ability of the eye muscles to focus in various directions, thus estimating the MG risk. Progressive weakness in gazing in the directions prompted by the software can reveal abnormal fatigue of the eye muscles, which is an alert sign for MG. To assess the user’s ability to keep gazing in a specified direction, a fuzzy algorithm was applied to images of the user’s eyes to determine the position of the iris in relation to the sclera. The results of the tests performed on 18 healthy volunteers and 18 volunteers in early stages of MG confirmed the validity of the suggested software.
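The abstract gives no implementation details; the following is a minimal, hypothetical sketch of the general idea of locating the iris relative to the sclera and assigning fuzzy gaze-direction memberships. All thresholds, function names, and membership shapes here are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: estimate horizontal iris position from a grayscale eye strip
# (dark iris on bright sclera), then compute fuzzy memberships in
# "left" / "center" / "right" using triangular membership functions.
import numpy as np

def iris_offset(eye_row, iris_threshold=80):
    """Return the normalized iris centre in [-1, 1] for one image row.

    Pixels darker than iris_threshold are treated as iris, brighter
    pixels as sclera; the centroid of the dark pixels gives the position.
    """
    xs = np.nonzero(eye_row < iris_threshold)[0]
    if xs.size == 0:
        return 0.0  # no iris found; assume centred
    half_width = (len(eye_row) - 1) / 2
    return (xs.mean() - half_width) / half_width

def triangular(x, a, b, c):
    """Triangular fuzzy membership with peak at b and support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_gaze(offset):
    """Fuzzy membership of the normalized offset in each gaze class."""
    return {
        "left":   triangular(offset, -1.5, -1.0, 0.0),
        "center": triangular(offset, -1.0,  0.0, 1.0),
        "right":  triangular(offset,  0.0,  1.0, 1.5),
    }

# Synthetic eye strip: bright sclera (200) with a dark iris (30) on the right.
row = np.full(100, 200)
row[70:90] = 30
m = fuzzy_gaze(iris_offset(row))
print(max(m, key=m.get))  # → right
```

A real system would of course segment the eye region from camera frames first; the fuzzy step only turns a noisy geometric measurement into graded direction labels.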

  19. Vicarious Social Touch Biases Gazing at Faces and Facial Emotions.

    Science.gov (United States)

    Schirmer, Annett; Ng, Tabitha; Ebstein, Richard P

    2018-02-01

    Research has suggested that interpersonal touch promotes social processing and other-concern, and that women may respond to it more sensitively than men. In this study, we asked whether this phenomenon would extend to third-party observers who experience touch vicariously. In an eye-tracking experiment, participants (N = 64, 32 men and 32 women) viewed prime and target images with the intention of remembering them. Primes comprised line drawings of dyadic interactions with and without touch. Targets comprised two faces shown side-by-side, with one being neutral and the other being happy or sad. Analysis of prime fixations revealed that faces in touch interactions attracted longer gazing than faces in no-touch interactions. In addition, touch enhanced gazing at the area of touch in women but not men. Analysis of target fixations revealed that touch priming increased looking at both faces immediately after target onset, and subsequently, at the emotional face in the pair. Sex differences in target processing were nonsignificant. Together, the present results imply that vicarious touch biases visual attention to faces and promotes emotion sensitivity. In addition, they suggest that, compared with men, women are more aware of tactile exchanges in their environment. As such, vicarious touch appears to share important qualities with actual physical touch. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  20. Trait Anxiety Impacts the Perceived Gaze Direction of Fearful But Not Angry Faces

    Directory of Open Access Journals (Sweden)

    Zhonghua Hu

    2017-07-01

Facial expression and gaze direction play an important role in social communication. Previous research has demonstrated that the perception of anger is enhanced by direct gaze, whereas it is unclear whether perception of fear is enhanced by averted gaze. In addition, previous research has shown that anxiety affects the processing of facial expression and gaze direction, but has not measured or controlled for depression. As a result, firm conclusions cannot be made regarding the impact of individual differences in anxiety and depression on perceptions of facial expressions and gaze direction. The current study attempted to reexamine the effect of anxiety level on the processing of facial expressions and gaze direction by matching participants on depression scores. A reliable psychophysical index of the range of eye gaze angles judged as being directed at oneself [the cone of direct gaze (CoDG)] was used as the dependent variable in this study. Participants were stratified into high/low trait anxiety groups and asked to judge the gaze of angry, fearful, and neutral faces across a range of gaze directions. The results showed that: (1) the perception of gaze direction was influenced by facial expression, and this was modulated by trait anxiety. For the high trait anxiety group, the CoDG for angry expressions was wider than for fearful and neutral expressions, and no significant difference emerged between fearful and neutral expressions; for the low trait anxiety group, the CoDG for both angry and fearful expressions was wider than for neutral, and no significant difference emerged between angry and fearful expressions. (2) Trait anxiety modulated the perception of gaze direction only in the fearful condition, such that the fearful CoDG for the high trait anxiety group was narrower than for the low trait anxiety group. This demonstrated that anxiety distinctly affected gaze perception in expressions that convey threat (angry, fearful), such that a high trait anxiety…

  1. P2-23: Deficits on Preference but Not Attention in Patients with Depression: Evidence from Gaze Cue

    Directory of Open Access Journals (Sweden)

    Jingling Li

    2012-10-01

Gaze is an important social cue and can easily capture attention. Our preference judgments are biased by others' gaze; that is, we prefer objects gazed at by happy or neutral faces and dislike objects gazed at by disgusted faces. Since patients with depression have a negative bias in emotional perception, we hypothesized that they may judge gazed-at objects differently than healthy controls. Twenty-one patients with major depressive disorder and 21 healthy age-matched controls completed an object categorization task and then rated their preference for those objects. In the categorization task, a schematic face either gazed toward or away from the to-be-categorized object. The results showed that both groups categorized gazed-at objects faster than non-gazed objects, suggesting that patients did not have deficits in their attention to gaze cues. Nevertheless, healthy controls preferred gazed-at objects more than non-gazed objects, while patients showed no significant preference. Our results indicate that patients with depression have deficits in social cognition rather than in basic attentional mechanisms.

  2. A closer look at the size of the gaze-liking effect: a preregistered replication.

    Science.gov (United States)

    Tipples, Jason; Pecchinenda, Anna

    2018-04-30

This study is a direct replication of the gaze-liking effect using the same design, stimuli and procedure. The gaze-liking effect describes the tendency for people to rate objects as more likeable when they have recently seen a person repeatedly gaze toward rather than away from the object. However, as subsequent studies show considerable variability in the size of this effect, we sampled a larger number of participants (N = 98) than the original study (N = 24) to gain a more precise estimate of the gaze-liking effect size. Our results indicate a much smaller standardised effect size (dz = 0.02) than that of the original study (dz = 0.94). Our smaller effect size was not due to general insensitivity to eye-gaze effects, because the same sample showed a clear (dz = 1.09) gaze-cuing effect: faster reaction times when eyes looked toward vs. away from target objects. We discuss the implications of our findings for future studies wishing to study the gaze-liking effect.

  3. Gaze-Contingent Music Reward Therapy for Social Anxiety Disorder: A Randomized Controlled Trial.

    Science.gov (United States)

    Lazarov, Amit; Pine, Daniel S; Bar-Haim, Yair

    2017-07-01

    Patients with social anxiety disorder exhibit increased attentional dwelling on social threats, providing a viable target for therapeutics. This randomized controlled trial examined the efficacy of a novel gaze-contingent music reward therapy for social anxiety disorder designed to reduce attention dwelling on threats. Forty patients with social anxiety disorder were randomly assigned to eight sessions of either gaze-contingent music reward therapy, designed to divert patients' gaze toward neutral stimuli rather than threat stimuli, or to a control condition. Clinician and self-report measures of social anxiety were acquired pretreatment, posttreatment, and at 3-month follow-up. Dwell time on socially threatening faces was assessed during the training sessions and at pre- and posttreatment. Gaze-contingent music reward therapy yielded greater reductions of symptoms of social anxiety disorder than the control condition on both clinician-rated and self-reported measures. Therapeutic effects were maintained at follow-up. Gaze-contingent music reward therapy, but not the control condition, also reduced dwell time on threat, which partially mediated clinical effects. Finally, gaze-contingent music reward therapy, but not the control condition, also altered dwell time on socially threatening faces not used in training, reflecting near-transfer training generalization. This is the first randomized controlled trial to examine a gaze-contingent intervention in social anxiety disorder. The results demonstrate target engagement and clinical effects. This study sets the stage for larger randomized controlled trials and testing in other emotional disorders.

  4. Aversive eye gaze during a speech in virtual environment in patients with social anxiety disorder.

    Science.gov (United States)

    Kim, Haena; Shin, Jung Eun; Hong, Yeon-Ju; Shin, Yu-Bin; Shin, Young Seok; Han, Kiwan; Kim, Jae-Jin; Choi, Soo-Hee

    2018-03-01

One of the main characteristics of social anxiety disorder is excessive fear of social evaluation. In such situations, anxiety can influence gaze behaviour. Thus, the current study adopted virtual reality to examine the eye gaze patterns of social anxiety disorder patients while they presented different types of speeches. A total of 79 social anxiety disorder patients and 51 healthy controls presented prepared speeches on general topics and impromptu speeches on self-related topics to a virtual audience while their eye gaze was recorded. Their presentation performance was also evaluated. Overall, social anxiety disorder patients showed less eye gaze towards the audience than healthy controls. Type of speech did not influence social anxiety disorder patients' gaze allocation towards the audience. However, in patients with social anxiety disorder, the amount of eye gaze towards the audience while presenting self-related speeches correlated significantly with social anxiety cognitions. The current study confirms that the eye gaze behaviour of social anxiety disorder patients is aversive and that their anxiety symptoms are more dependent on the nature of the topic.

  5. Speaking and Listening with the Eyes: Gaze Signaling during Dyadic Interactions.

    Science.gov (United States)

    Ho, Simon; Foulsham, Tom; Kingstone, Alan

    2015-01-01

    Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest.
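The cross-correlational approach described above can be illustrated with a toy sketch (all signal values and names below are synthetic assumptions, not the study's data): encode each participant's speech and gaze as binary time series, correlate them at a range of lags, and look for the lag of peak correlation.

```python
# Sketch: lagged Pearson correlation between two binary behavioural
# signals, e.g. "speaking" and "gazing at partner", sampled over time.
import numpy as np

def cross_correlation(x, y, max_lag):
    """Pearson correlation of x against y shifted by each lag (in samples).

    Positive lags mean y is delayed relative to x.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[: len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[: len(y) + lag]
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out

# Synthetic example: the gaze signal follows the speech signal by 2 samples.
speech = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0])
gaze = np.roll(speech, 2)  # gaze pattern lags speech by 2 samples
corr = cross_correlation(speech, gaze, max_lag=3)
best_lag = max(corr, key=corr.get)
print(best_lag)  # → 2, the lag at which the two signals align best
```

In the actual analysis the sign and position of the correlation peak is what carries the signalling interpretation, e.g. whether gaze changes lead or trail turn boundaries.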

  6. Photographic but not line-drawn faces show early perceptual neural sensitivity to eye gaze direction

    Directory of Open Access Journals (Sweden)

    Alejandra eRossi

    2015-04-01

Our brains readily decode facial movements and changes in social attention, reflected in earlier and larger N170 event-related potentials (ERPs) to viewing gaze aversions vs. direct gaze in real faces (Puce et al., 2000). In contrast, gaze aversions in line-drawn faces do not produce these N170 differences (Rossi et al., 2014), suggesting that physical stimulus properties or experimental context may drive these effects. Here we investigated the role of stimulus-induced context on neurophysiological responses to dynamic gaze. Sixteen healthy adults viewed line-drawn and real faces, with dynamic eye aversion and direct gaze transitions, and control stimuli (scrambled arrays and checkerboards) while continuous electroencephalographic (EEG) activity was recorded. EEG data from two temporo-occipital clusters of 9 electrodes in each hemisphere, where N170 activity is known to be maximal, were selected for analysis. N170 peak amplitude and latency, and temporal dynamics from event-related spectral perturbations (ERSPs), were measured in 16 healthy subjects. Real faces generated larger N170s for averted vs. direct gaze motion; however, N170s to real and direct gaze were as large as those to respective controls. N170 amplitude did not differ across line-drawn gaze changes. Overall, bilateral mean gamma power changes for faces relative to control stimuli occurred between 150-350 ms, potentially reflecting signal detection of facial motion. Our data indicate that experimental context does not drive N170 differences to viewed gaze changes. Low-level stimulus properties, such as the high sclera/iris contrast change in real eyes, likely drive the N170 changes to viewed aversive movements.

  8. Animating Flames: Recovering Fire-Gazing as a Moving-Image Technology

    Directory of Open Access Journals (Sweden)

    Anne Sullivan

    2017-12-01

In nineteenth-century England, the industrialization of heat and light rendered fire-gazing increasingly obsolete. Fire-gazing is a form of flame-based reverie that typically involves a solitary viewer who perceives animated, moving images dissolving into and out of view in a wood or coal fire. When fire-gazing, the viewer may perceive arbitrary pictures, fantastic landscapes, or more familiar forms, such as the faces of friends and family. This article recovers fire-gazing as an early and more intimate animation technology by examining remediations of fire-gazing in print. After reviewing why an analysis of fire-gazing requires a joint literary and media history approach, I build from Michael Faraday’s mid-nineteenth-century theorization of flame as a moving image to argue that fire-gazing must be included in the history of animation technologies. I then demonstrate the uneasy connections that form between automatism, mechanical reproduction, and creativity in Leigh Hunt’s description of fire-gazing in his 1811 essay ‘A Day by the Fire’. The tension between conscious and unconscious modes of production culminates in a discussion of fireside scenes of (re)animation in Charles Dickens’s 'Our Mutual Friend' (1864–65), including those featuring one of his more famous fire-gazers, Lizzie Hexam. The article concludes with a brief discussion of the 1908 silent film 'Fireside Reminiscences' as an example of the continued remediations of fire-gazing beyond the nineteenth century.

  9. Perceiving where another person is looking: The integration of head and body information in estimating another person’s gaze.

    Directory of Open Access Journals (Sweden)

    Pieter eMoors

    2015-06-01

The process through which an observer allocates his/her attention based on the attention of another person is known as joint attention. To be able to do this, the observer effectively has to compute where the other person is looking. It has been shown that observers integrate information from the head and the eyes to determine the gaze of another person. Most studies have documented that observers show a bias called the overshoot effect when eyes and head are misaligned. The present study addresses whether body information is also used as a cue to compute perceived gaze direction. In Experiment 1, we observed a similar overshoot effect in both behavioral and saccadic responses when manipulating body orientation. In Experiment 2, we explored whether the overshoot effect was due to observers assuming that the eyes are oriented further than the head when head and body orientation are misaligned. We removed horizontal eye information by presenting the stimulus from a side view. Head orientation was now manipulated in a vertical direction and the overshoot effect was replicated. In summary, this study shows that body orientation is indeed used as a cue to determine where another person is looking.

  10. Gaze and power. A post-structuralist interpretation on Perseus’ myth

    Directory of Open Access Journals (Sweden)

    Olaya Fernández Guerrero

    2015-12-01

Gaze hierarchizes, manages and labels reality. According to Foucault, gaze can thus be understood as a practice of power. This paper is inspired by his theories and applies them to one of the most powerful symbolic spheres of Western culture: Greek myth. Notions such as visibility, invisibility and panopticism bring new light to the story of Perseus and Medusa, and they enable a re-reading of this myth focused on the different forms of power that emerge from the gaze.

  11. Clinical Approach to Supranuclear Brainstem Saccadic Gaze Palsies

    Directory of Open Access Journals (Sweden)

    Alexandra Lloyd-Smith Sequeira

    2017-08-01

Failure of brainstem supranuclear centers for saccadic eye movements results in the clinical presence of a brainstem-mediated supranuclear saccadic gaze palsy (SGP), which is manifested as slowing of saccades, with or without range-of-motion limitation of eye movements, and as loss of the quick phases of optokinetic nystagmus. Limitation in the range of motion of eye movements is typically worse with saccades than with smooth pursuit and is overcome with vestibulo-ocular reflexive eye movements. The differential diagnosis of SGPs is broad, although acute-onset SGP is most often caused by brainstem infarction and chronic vertical SGP is most commonly caused by the neurodegenerative condition progressive supranuclear palsy. In this review, we discuss the anatomy and physiology of the brainstem saccade-generating network; we discuss the clinical features of SGPs, with an emphasis on insights from quantitative ocular motor recordings; and we consider the broad differential diagnosis of SGPs.

  12. Gaze inspired subtitle position evaluation for MOOCs videos

    Science.gov (United States)

    Chen, Hongli; Yan, Mengzhen; Liu, Sijiang; Jiang, Bo

    2017-06-01

Online educational resources, such as MOOCs, are becoming increasingly popular, especially in higher education. The most important media type for MOOCs is the course video. Besides the traditional bottom-positioned subtitles accompanying videos, in recent years researchers have tried to develop more advanced algorithms to generate speaker-following subtitles. However, the effectiveness of such subtitles is still unclear. In this paper, we investigate the relationship between subtitle position and the learning effect after watching the video on tablet devices. Drawing on image-based eye-tracking techniques, this work combines objective gaze-estimation statistics with a subjective user study to reach a convincing conclusion: speaker-following subtitles are more suitable for online educational videos.
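The abstract does not specify which gaze statistics were computed; one plausible, hypothetical example of such an objective measure (coordinates and sample values below are invented for illustration) is the fraction of gaze samples that land inside a subtitle's area of interest on the video frame.

```python
# Sketch: dwell fraction of gaze samples inside a rectangular
# area of interest (AOI), e.g. a subtitle strip on a video frame.
def aoi_dwell_fraction(gaze_points, aoi):
    """Fraction of (x, y) gaze samples inside aoi = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = aoi
    hits = sum(1 for x, y in gaze_points if x0 <= x <= x1 and y0 <= y <= y1)
    return hits / len(gaze_points)

# Synthetic gaze samples on a 1280x720 frame.
samples = [(640, 680), (650, 690), (630, 300), (620, 695), (400, 200)]
bottom_subtitle = (200, 650, 1080, 720)  # traditional bottom strip
print(aoi_dwell_fraction(samples, bottom_subtitle))  # → 0.6
```

Comparing this fraction between bottom-positioned and speaker-following AOIs, alongside comprehension scores, is one way to connect gaze behaviour with learning effect.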

  13. Real-time estimation of horizontal gaze angle by saccade integration using in-ear electrooculography.

    Science.gov (United States)

    Hládek, Ľuboš; Porr, Bernd; Brimijoin, W Owen

    2018-01-01

The manuscript proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, we used an algorithm that calculates absolute eye gaze angle via statistical analysis of detected saccades. The estimated eye positions of the new algorithm were still noisy; however, the performance in terms of Pearson product-moment correlation coefficients was significantly better than that of the conventional approach in some instances. The results suggest that in-ear EOG signals captured with conductive ear moulds could serve as a basis for light-weight and portable horizontal eye gaze angle estimation suitable for a broad range of applications; for instance, hearing aids could steer the directivity of their microphones in the direction of the user's eye gaze.
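The abstract only outlines the approach, but the core idea of recovering gaze angle from detected saccades rather than from the drifting raw signal can be sketched as follows. The thresholds, gain, and median re-centering rule below are illustrative assumptions, not the authors' algorithm.

```python
# Sketch: saccade integration on a single-channel EOG trace. Saccades
# are detected as large sample-to-sample jumps; their amplitudes are
# scaled to degrees and summed, while slow drift is ignored. The running
# estimate is re-centred on its median, reflecting an assumption that
# gaze is straight ahead on average.
import numpy as np

def gaze_from_eog(eog, deg_per_unit=1.0, jump_threshold=5.0):
    """Estimate gaze angle (degrees) per sample from single-channel EOG."""
    diffs = np.diff(eog)
    saccades = np.where(np.abs(diffs) > jump_threshold, diffs, 0.0)
    angle = np.concatenate([[0.0], np.cumsum(saccades)]) * deg_per_unit
    return angle - np.median(angle)  # re-centre on the median position

# Synthetic EOG: slow baseline drift plus two saccades (+10, then -10 units).
t = np.arange(200)
eog = 0.01 * t          # drift, which plain integration would accumulate
eog[50:] += 10.0        # saccade to the right at sample 50
eog[150:] -= 10.0       # saccade back at sample 150
angle = gaze_from_eog(eog)
print(round(angle[100] - angle[10]))  # → 10 degrees between the saccades
```

Thresholding the derivative is what makes the estimate robust to the electrode drift that defeats naive integration of the raw signal.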

  14. Gaze recognition in high-functioning autistic patients. Evidence from functional MRI

    International Nuclear Information System (INIS)

    Takebayashi, Hiroko; Ogai, Masahiro; Matsumoto, Hideo

    2006-01-01

    We examined whether patients with high-functioning autistic disorder (AD) would exhibit abnormal activation in brain regions implicated in the functioning of theory of mind (TOM) during gaze recognition. We investigated brain activity during gaze recognition in 5 patients with high-functioning AD and 9 normal subjects, using functional magnetic resonance imaging. On the gaze task, more activation was found in the left middle frontal gyrus, the right intraparietal sulcus, and the precentral and inferior parietal gyri bilaterally in controls than in AD patients, whereas the patient group showed more powerful signal changes in the left superior temporal gyrus, the right insula, and the right medial frontal gyrus. These results suggest that high-functioning AD patients have functional abnormalities not only in TOM-related brain regions, but also in widely distributed brain regions that are not normally activated upon the processing of information from another person's gaze. (author)

  15. Children with ASD Can Use Gaze in Support of Word Recognition and Learning

    Science.gov (United States)

    McGregor, Karla K.; Rost, Gwyneth; Arenas, Rick; Farris-Trimble, Ashley; Stiles, Derek

    2013-01-01

Background: Many children with autism spectrum disorders (ASD) struggle to understand familiar words and learn unfamiliar words. We explored the extent to which these problems reflect deficient use of probabilistic gaze in the…

  16. A neural-based remote eye gaze tracker under natural head motion.

    Science.gov (United States)

    Torricelli, Diego; Conforto, Silvia; Schmid, Maurizio; D'Alessio, Tommaso

    2008-10-01

A novel approach to view-based eye gaze tracking for human-computer interfaces (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low-cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen robustness to lighting conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed; rather, a simple commercial webcam working in the visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees, comparable with the vast majority of existing remote gaze trackers.
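As a toy illustration of neural gaze mapping of this kind (synthetic data and an arbitrarily small architecture, not the authors' network or features), a one-hidden-layer network can learn to classify 2-D eye-feature vectors into 15 screen zones laid out as a 5x3 grid:

```python
# Sketch: one-hidden-layer network (tanh hidden units, softmax output)
# trained with full-batch gradient descent to map synthetic 2-D
# eye-feature vectors to one of 15 screen zones.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: noisy feature vectors around each zone centre.
zones = np.array([(c, r) for r in range(3) for c in range(5)], float)
X = np.repeat(zones, 40, axis=0) + rng.normal(0, 0.15, (600, 2))
y = np.repeat(np.arange(15), 40)
Xn = (X - X.mean(0)) / X.std(0)   # standardize features
T = np.eye(15)[y]                 # one-hot targets

W1 = rng.normal(0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 15)); b2 = np.zeros(15)
for _ in range(3000):
    H = np.tanh(Xn @ W1 + b1)
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)
    G = (P - T) / len(Xn)            # softmax cross-entropy gradient
    GH = (G @ W2.T) * (1 - H ** 2)   # backprop through tanh
    W2 -= H.T @ G; b2 -= G.sum(0)
    W1 -= Xn.T @ GH; b1 -= GH.sum(0)

pred = np.argmax(np.tanh(Xn @ W1 + b1) @ W2 + b2, axis=1)
print(round(float((pred == y).mean()), 2))  # training accuracy over 15 zones
```

The hidden layer is what absorbs the non-linearity of the feature-to-screen mapping that the abstract refers to; with real webcam features the mapping is far less clean than these synthetic clusters.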

  18. Impact of stride-coupled gaze shifts of walking blowflies on the neuronal representation of visual targets

    Directory of Open Access Journals (Sweden)

    Daniel eKress

    2014-09-01

During locomotion animals rely heavily on visual cues gained from the environment to guide their behavior. Examples are basic behaviors like collision avoidance or the approach to a goal. The saccadic gaze strategy of flying flies, which separates translational from rotational phases of locomotion, has been suggested to facilitate the extraction of environmental information, because only image flow evoked by translational self-motion contains relevant distance information about the surrounding world. In contrast to the translational phases of flight, during which gaze direction is kept largely constant, walking flies experience continuous rotational image flow that is coupled to their stride cycle. The consequences of these self-produced image shifts for the extraction of environmental information are still unclear. To assess the impact of stride-coupled image shifts on visual information processing, we performed electrophysiological recordings from the HSE cell, a motion-sensitive wide-field neuron in the blowfly visual system. This cell has been concluded to play a key role in mediating optomotor behavior, self-motion estimation and spatial information processing. We used visual stimuli that were based on the visual input experienced by walking blowflies while approaching a black vertical bar. The response of HSE to these stimuli was dominated by periodic membrane potential fluctuations evoked by stride-coupled image shifts. Nevertheless, during the approach the cell’s response contained information about the bar and its background. The response components evoked by the bar were larger than the responses to its background, especially during the last phase of the approach. However, as revealed by targeted modifications of the visual input during walking, the extraction of distance information on the basis of HSE responses is much impaired by stride-coupled retinal image shifts. Possible mechanisms that may cope with these stride…

  19. The duality of gaze: Eyes extract and signal social information during sustained cooperative and competitive dyadic gaze

    Directory of Open Access Journals (Sweden)

    Michelle eJarick

    2015-09-01

    Full Text Available In contrast to nonhuman primate eyes, which have a dark sclera surrounding a dark iris, human eyes have a white sclera that surrounds a dark iris. This high contrast morphology allows humans to determine quickly and easily where others are looking and infer what they are attending to. In recent years an enormous body of work has used photos and schematic images of faces to study these aspects of social attention, e.g., the selection of the eyes of others and the shift of attention to where those eyes are directed. However, evolutionary theory holds that humans did not develop a high contrast morphology simply to use the eyes of others as attentional cues; rather they sacrificed camouflage for communication, that is, to signal their thoughts and intentions to others. In the present study we demonstrate the importance of this by taking as our starting point the hypothesis that a cornerstone of nonverbal communication is the eye contact between individuals and the time that it is held. In a single simple study we show experimentally that the effect of eye contact can be quickly and profoundly altered merely by having participants, who had never met before, play a game in a cooperative or competitive manner. After the game participants were asked to make eye contact for a prolonged period of time (10 minutes). Those who had played the game cooperatively found this terribly difficult to do, repeatedly talking and breaking gaze. In contrast, those who had played the game competitively were able to stare quietly at each other for a sustained period. Collectively these data demonstrate that when looking at the eyes of a real person one both acquires and signals information to the other person. This duality of gaze is critical to nonverbal communication, with the nature of that communication shaped by the relationship between individuals, e.g., cooperative or competitive.

  20. Eye Pull, Eye Push: Moving Objects between Large Screens and Personal Devices with Gaze and Touch

    OpenAIRE

    Turner, Jayson; Alexander, Jason; Bulling, Andreas; Schmidt, Dominik; Gellersen, Hans

    2013-01-01

    Part 4: Gaze-Enabled Interaction Design; International audience; Previous work has validated the eyes and mobile input as a viable approach for pointing at, and selecting, out-of-reach objects. This work presents Eye Pull, Eye Push, a novel interaction concept for content transfer between public and personal devices using gaze and touch. We present three techniques that enable this interaction: Eye Cut & Paste, Eye Drag & Drop, and Eye Summon & Cast. We outline and discuss several scenarios in...

  1. Infants in control: rapid anticipation of action outcomes in a gaze-contingent paradigm.

    Directory of Open Access Journals (Sweden)

    Quan Wang

    Full Text Available Infants' poor motor abilities limit their interaction with their environment and render studying infant cognition notoriously difficult. Exceptions are eye movements, which reach high accuracy early, but generally do not allow manipulation of the physical environment. In this study, real-time eye tracking is used to put 6- and 8-month-old infants in direct control of their visual surroundings to study the fundamental problem of discovery of agency, i.e. the ability to infer that certain sensory events are caused by one's own actions. We demonstrate that infants quickly learn to perform eye movements to trigger the appearance of new stimuli and that they anticipate the consequences of their actions in as few as 3 trials. Our findings show that infants can rapidly discover new ways of controlling their environment. We suggest that gaze-contingent paradigms offer effective new ways for studying many aspects of infant learning and cognition in an interactive fashion and provide new opportunities for behavioral training and treatment in infants.
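The core of a gaze-contingent paradigm like the one described above is a trigger loop: a stimulus appears once gaze has dwelt inside an area of interest (AOI) for long enough. The sketch below is a minimal, hypothetical version of that logic; the sampling rate, coordinates, and dwell threshold are illustrative assumptions, not parameters from the study.

```python
def gaze_contingent_trigger(samples, aoi, dwell_ms):
    """Return the timestamp (ms) at which gaze has dwelt inside the
    AOI rectangle (x0, y0, x1, y1) for dwell_ms, or None if it never does."""
    x0, y0, x1, y1 = aoi
    enter_t = None
    for t, x, y in samples:          # samples sorted by time, in ms
        inside = x0 <= x <= x1 and y0 <= y <= y1
        if inside:
            if enter_t is None:
                enter_t = t          # gaze just entered the AOI
            elif t - enter_t >= dwell_ms:
                return t             # dwell criterion met -> trigger stimulus
        else:
            enter_t = None           # gaze left the AOI: reset the dwell timer
    return None

# synthetic ~60 Hz gaze stream: first outside the AOI, then fixating inside it
stream = [(i * 16, 50, 50) for i in range(10)] + \
         [(160 + i * 16, 210, 210) for i in range(20)]
print(gaze_contingent_trigger(stream, (200, 200, 300, 300), 200))  # → 368
```

In a real experiment this loop would consume live samples from the eye tracker's API rather than a prebuilt list, and the reset-on-exit rule might be relaxed to tolerate blinks.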

  2. Attention training through gaze-contingent feedback: Effects on reappraisal and negative emotions.

    Science.gov (United States)

    Sanchez, Alvaro; Everaert, Jonas; Koster, Ernst H W

    2016-10-01

    Reappraisal is central to emotion regulation but its mechanisms are unclear. This study tested the theoretical prediction that emotional attention bias is linked to reappraisal of negative emotion-eliciting stimuli and subsequent emotional responding using a novel attentional control training. Thirty-six undergraduates were randomly assigned to either the control or the attention training condition and were provided with different task instructions while they performed an interpretation task. Whereas control participants freely created interpretations, participants in the training condition were instructed to allocate attention toward positive words to efficiently create positive interpretations (i.e., recruiting attentional control) while they were provided with gaze-contingent feedback on their viewing behavior. Transfer to attention bias and reappraisal success was evaluated using a dot-probe task and an emotion regulation task which were administered before and after the training. The training condition was effective at increasing attentional control and resulted in beneficial effects on the transfer tasks. Analyses supported a serial indirect effect with larger attentional control acquisition in the training condition leading to negative attention bias reduction, in turn predicting greater reappraisal success which reduced negative emotions. Our results indicate that attentional mechanisms influence the use of reappraisal strategies and its impact on negative emotions. The novel attention training highlights the importance of tailored feedback to train attentional control. The findings provide an important step toward personalized delivery of attention training. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Human-like object tracking and gaze estimation with PKD android.

    Science.gov (United States)

    Wijayasinghe, Indika B; Miller, Haylie L; Das, Sumit K; Bugnariu, Nicoleta L; Popa, Dan O

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.

  5. Holistic integration of gaze cues in visual face and body perception: Evidence from the composite design.

    Science.gov (United States)

    Vrancken, Leia; Germeys, Filip; Verfaillie, Karl

    2017-01-01

    A considerable amount of research on identity recognition and emotion identification with the composite design points to the holistic processing of these aspects in faces and bodies. In this paradigm, the interference from a nonattended face half on the perception of the attended half is taken as evidence for holistic processing (i.e., a composite effect). Far less research, however, has been dedicated to the concept of gaze. Nonetheless, gaze perception is a substantial component of face and body perception, and holds critical information for everyday communicative interactions. Furthermore, the ability of human observers to detect direct versus averted eye gaze is effortless, perhaps similar to identity perception and emotion recognition. However, the hypothesis of holistic perception of eye gaze has never been tested directly. Research on gaze perception with the composite design could facilitate further systematic comparison with other aspects of face and body perception that have been investigated using the composite design (i.e., identity and emotion). In the present research, a composite design was administered to assess holistic processing of gaze cues in faces (Experiment 1) and bodies (Experiment 2). Results confirmed that eye and head orientation (Experiment 1A) and head and body orientation (Experiment 2A) are integrated in a holistic manner. However, the composite effect was not completely disrupted by inversion (Experiments 1B and 2B), a finding that will be discussed together with implications for future research.

  6. Cerebellar inactivation impairs memory of learned prism gaze-reach calibrations.

    Science.gov (United States)

    Norris, Scott A; Hathaway, Emily N; Taylor, Jordan A; Thach, W Thomas

    2011-05-01

    Three monkeys performed a visually guided reach-touch task with and without laterally displacing prisms. The prisms offset the normally aligned gaze/reach and subsequent touch. Naive monkeys showed adaptation, such that on repeated prism trials the gaze-reach angle widened and touches hit nearer the target. On the first subsequent no-prism trial the monkeys exhibited an aftereffect, such that the widened gaze-reach angle persisted and touches missed the target in the direction opposite that of initial prism-induced error. After 20-30 days of training, monkeys showed long-term learning and storage of the prism gaze-reach calibration: they switched between prism and no-prism and touched the target on the first trials without adaptation or aftereffect. Injections of lidocaine into posterolateral cerebellar cortex or muscimol or lidocaine into dentate nucleus temporarily inactivated these structures. Immediately after injections into cortex or dentate, reaches were displaced in the direction of prism-displaced gaze, but no-prism reaches were relatively unimpaired. There was little or no adaptation on the day of injection. On days after injection, there was no adaptation and both prism and no-prism reaches were horizontally, and often vertically, displaced. A single permanent lesion (kainic acid) in the lateral dentate nucleus of one monkey immediately impaired only the learned prism gaze-reach calibration and in subsequent days disrupted both learning and performance. This effect persisted for the 18 days of observation, with little or no adaptation.

  7. Investigating positioning and gaze behaviors of social robots : people's preferences, perceptions, and behaviors

    NARCIS (Netherlands)

    Joosse, Michiel Pieter

    2017-01-01

    As technology advances, application areas for robots are no longer limited to the factories where they perform repetitive tasks behind fences. Robots are envisioned to provide services to us in everyday public spaces - in which they will encounter and interact with people. These types of robots can

  8. Gaze-based assistive technology used in daily life by children with severe physical impairments - parents' experiences.

    Science.gov (United States)

    Borgestig, Maria; Rytterström, Patrik; Hemmingsson, Helena

    2017-07-01

    To describe and explore parents' experiences when their children with severe physical impairments receive gaze-based assistive technology (gaze-based AT) for use in daily life. Semi-structured interviews were conducted twice, with one year in between, with parents of eight children with cerebral palsy who used gaze-based AT in their daily activities. To understand the parents' experiences, hermeneutical interpretations were used during data analysis. The findings demonstrate that for parents, children's gaze-based AT usage meant that children demonstrated agency, provided them with opportunities to show personality and competencies, and gave children possibilities to develop. Overall, children's gaze-based AT use provides hope for a better future for their children with severe physical impairments; a future in which the children can develop and gain influence in life. Gaze-based AT provides children with new opportunities to perform activities and take initiatives to communicate, giving parents hope about the children's future.

  9. Gaze direction effects on perceptions of upper limb kinesthetic coordinate system axes.

    Science.gov (United States)

    Darling, W G; Hondzinski, J M; Harper, J G

    2000-12-01

    The effects of varying gaze direction on perceptions of the upper limb kinesthetic coordinate system axes and of the median plane location were studied in nine subjects with no history of neuromuscular disorders. In two experiments, six subjects aligned the unseen forearm to the trunk-fixed anterior-posterior (a/p) axis and earth-fixed vertical while gazing at different visual targets using either head or eye motion to vary gaze direction in different conditions. Effects of support of the upper limb on perceptual errors were also tested in different conditions. Absolute constant errors and variable errors associated with forearm alignment to the trunk-fixed a/p axis and earth-fixed vertical were similar for different gaze directions whether the head or eyes were moved to control gaze direction. Such errors were decreased by support of the upper limb when aligning to the vertical but not when aligning to the a/p axis. Regression analysis showed that single trial errors in individual subjects were poorly correlated with gaze direction, but showed a dependence on shoulder angles for alignment to both axes. Thus, changes in position of the head and eyes do not influence perceptions of upper limb kinesthetic coordinate system axes. However, dependence of the errors on arm configuration suggests that such perceptions are generated from sensations of shoulder and elbow joint angle information. In a third experiment, perceptions of median plane location were tested by instructing four subjects to place the unseen right index fingertip directly in front of the sternum either by motion of the straight arm at the shoulder or by elbow flexion/extension with shoulder angle varied. Gaze angles were varied to the right and left by 0.5 radians to determine effects of gaze direction on such perceptions. These tasks were also carried out with subjects blind-folded and head orientation varied to test for effects of head orientation on perceptions of median plane location. Constant

  10. Gaze-based assistive technology used in daily life by children with severe physical impairments - parents' experiences

    OpenAIRE

    Borgestig, Maria; Rytterstrom, Patrik; Hemmingsson, Helena

    2017-01-01

    Objective: To describe and explore parents' experiences when their children with severe physical impairments receive gaze-based assistive technology (gaze-based AT) for use in daily life. Methods: Semi-structured interviews were conducted twice, with one year in between, with parents of eight children with cerebral palsy who used gaze-based AT in their daily activities. To understand the parents' experiences, hermeneutical interpretations were used during data analysis...

  11. Race perception and gaze direction differently impair visual working memory for faces: An event-related potential study.

    Science.gov (United States)

    Sessa, Paola; Dalmaso, Mario

    2016-01-01

    Humans are amazingly expert at processing and recognizing faces; however, there are moderating factors of this ability. In the present study, we used the event-related potential technique to investigate the influence of both race and gaze direction on visual working memory (i.e., VWM) face representations. In a change detection task, we orthogonally manipulated race (own-race vs. other-race faces) and eye-gaze direction (direct gaze vs. averted gaze). Participants were required to encode the identities of these faces. We quantified the amount of information encoded in VWM by monitoring the amplitude of the sustained posterior contralateral negativity (SPCN) time-locked to the faces. Notably, race and eye-gaze direction differently modulated SPCN amplitude, such that other-race faces elicited reduced SPCN amplitudes compared with own-race faces only when displaying a direct gaze. On the other hand, faces displaying averted gaze, independently of their race, elicited increased SPCN amplitudes compared with faces displaying direct gaze. We interpret these findings as denoting that race and eye-gaze direction affect different face processing stages.

  12. Ezra Pound and Du Fu: Gazing at Mt. Tai

    Directory of Open Access Journals (Sweden)

    Kent Su

    2017-08-01

    Full Text Available Confined to a six-by-six-foot outdoor steel cage, Ezra Pound saw a series of mountain hills a few miles to the east of Pisa. The poet compared one of these small 800-metre hills to the sacred Chinese Mt. Tai, which becomes the most common geographical name in The Pisan Cantos. Pound's poetic summoning of this particular mountain is related to the fact that Mt. Tai is historically and culturally connected to the philosophy of Confucius, who personally ascended the mountain several times. Pound, as a devout Confucian disciple, closely follows the philosophical doctrines and attempts to mentally trace the footsteps of Confucius. This paper will argue that Pound's poetic evocation of the mountain shares a striking similarity to an eighth-century Chinese poem called "Gazing at Mt. Tai," which was written by the famous literatus Du Fu 杜甫 (712–770). In spite of living in two completely different eras and countries, Pound's and Du Fu's references to Mt. Tai demonstrate the confluence of their poetic spirits. Neither of them ascended the mountain personally. They instead made use of their poetic imagination to follow the paths of Confucius and perceived the mountain as an earthly paradise, one which represents tranquillity and serenity away from the moral and physical corruption of the external world.

  13. Gaze-contingent training enhances perceptual skill acquisition.

    Science.gov (United States)

    Ryu, Donghyun; Mann, David L; Abernethy, Bruce; Poolton, Jamie M

    2016-01-01

    The purpose of this study was to determine whether decision-making skill in perceptual-cognitive tasks could be enhanced using a training technique that impaired selective areas of the visual field. Recreational basketball players performed perceptual training over 3 days while viewing with a gaze-contingent manipulation that displayed either (a) a moving window (clear central and blurred peripheral vision), (b) a moving mask (blurred central and clear peripheral vision), or (c) full (unrestricted) vision. During the training, participants watched video clips of basketball play and at the conclusion of each clip made a decision about to which teammate the player in possession of the ball should pass. A further control group watched unrelated videos with full vision. The effects of training were assessed using separate tests of decision-making skill conducted in a pretest, posttest, and 2-week retention test. The accuracy of decision making was greater in the posttest than in the pretest for all three intervention groups when compared with the control group. Remarkably, training with blurred peripheral vision resulted in a further improvement in performance from posttest to retention test that was not apparent for the other groups. The type of training had no measurable impact on the visual search strategies of the participants, and so the training improvements appear to be grounded in changes in information pickup. The findings show that learning with impaired peripheral vision offers a promising form of training to support improvements in perceptual skill.

  14. 3D recovery of human gaze in natural environments

    Science.gov (United States)

    Paletta, Lucas; Santner, Katrin; Fritz, Gerald; Mayer, Heinz

    2013-01-01

    The estimation of human attention has recently been addressed in the context of human-robot interaction. Today, joint work spaces already exist and challenge cooperating systems to jointly focus on common objects, scenes and work niches. With the advent of Google Glass and increasingly affordable wearable eye-tracking, monitoring of human attention will soon become ubiquitous. The presented work describes for the first time a method for the estimation of human fixations in 3D environments that does not require any artificial landmarks in the field of view and enables attention mapping in 3D models. It enables full 3D recovery of the human view frustum and the gaze pointer in a previously acquired 3D model of the environment in real time. The study on the precision of this method reports a mean projection error of ≈1.1 cm and a mean angle error of ≈0.6° within the chosen 3D model; the precision does not fall below that of the technical instrument (≈1°). This innovative methodology will open new opportunities for joint attention studies as well as for bringing new potential into automated processing for human factors technologies.

  15. Gaze Dynamics in the Recognition of Facial Expressions of Emotion.

    Science.gov (United States)

    Barabanschikov, Vladimir A

    2015-01-01

    We studied the preferentially fixated parts and features of the human face during recognition of facial expressions of emotion. Photographs of facial expressions were used. Participants were to categorize these as basic emotions; during this process, eye movements were registered. It was found that variation in the intensity of an expression is mirrored in the accuracy of emotion recognition; it was also reflected by several indices of oculomotor function: duration of inspection of certain areas of the face (its upper and lower parts, right and left sides); location, number and duration of fixations; and viewing trajectory. In particular, for low-intensity expressions, the right side of the face was found to be attended predominantly (right-side dominance); the right-side dominance effect was, however, absent for expressions of high intensity. For both low- and high-intensity expressions, the upper part of the face was predominantly fixated, though with greater fixation for high-intensity expressions. The majority of trials (70%), in line with findings in previous studies, revealed a V-shaped pattern of inspection trajectory. No relationship was found, though, between accuracy of recognition of emotional expressions and either the location and duration of fixations or the pattern of gaze directedness in the face. © The Author(s) 2015.

  16. Quantifying the cognitive cost of laparo-endoscopic single-site surgeries: Gaze-based indices.

    Science.gov (United States)

    Di Stasi, Leandro L; Díaz-Piedra, Carolina; Ruiz-Rabelo, Juan Francisco; Rieiro, Héctor; Sanchez Carrion, Jose M; Catena, Andrés

    2017-11-01

    Despite the growing interest concerning the laparo-endoscopic single-site surgery (LESS) procedure, LESS presents multiple difficulties and challenges that are likely to increase the surgeon's cognitive cost, in terms of both cognitive load and performance. Nevertheless, there is currently no objective index capable of assessing the surgeon's cognitive cost while performing LESS. We assessed whether gaze-based indices might offer unique and unbiased measures to quantify LESS complexity and its cognitive cost. We expect the assessment of surgeons' cognitive cost to improve patient safety by measuring fitness-for-duty and reducing surgeon overload. Using a wearable eye tracker device, we measured gaze entropy and velocity of surgical trainees and attending surgeons during two surgical procedures (LESS vs. multiport laparoscopy surgery [MPS]). None of the participants had previous experience with LESS. They performed two exercises with different complexity levels (Low: Pattern Cut vs. High: Peg Transfer). We also collected performance and subjective data. LESS caused higher cognitive demand than MPS, as indicated by increased gaze entropy in both surgical trainees and attending surgeons (the exploration pattern became more random). Furthermore, gaze velocity was higher (the exploration pattern became more rapid) for the LESS procedure independently of the surgeon's expertise. Perceived task complexity and laparoscopic accuracy confirmed the gaze-based results. Gaze-based indices have great potential as objective and non-intrusive measures to assess surgeons' cognitive cost and fitness-for-duty. Furthermore, gaze-based indices might play a relevant role in defining future guidelines on surgeons' examinations to mark their achievements during the entire training (e.g. analyzing surgical learning curves). Copyright © 2017 Elsevier Ltd. All rights reserved.
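Gaze entropy of the kind reported here can be computed in several ways; one common formulation is stationary gaze entropy, the Shannon entropy of fixation positions over a spatial grid (higher entropy = a more random exploration pattern). The sketch below assumes normalized screen coordinates and a hypothetical 4×4 grid; it illustrates the idea rather than reproducing the authors' exact index.

```python
import math
from collections import Counter

def stationary_gaze_entropy(fixations, bins=4, extent=1.0):
    """Shannon entropy (bits) of fixation positions over a bins x bins grid.

    fixations: iterable of (x, y) in [0, extent); higher values indicate
    a more random (dispersed) scan pattern, as in the LESS condition above.
    """
    cells = Counter()
    for x, y in fixations:
        i = min(int(x / extent * bins), bins - 1)   # clamp to the last cell
        j = min(int(y / extent * bins), bins - 1)
        cells[(i, j)] += 1
    n = sum(cells.values())
    return -sum((c / n) * math.log2(c / n) for c in cells.values())

# a concentrated scan vs. a dispersed one (normalized screen coordinates)
focused   = [(0.5, 0.5)] * 8
dispersed = [(0.1, 0.1), (0.9, 0.1), (0.1, 0.9), (0.9, 0.9)] * 2
print(stationary_gaze_entropy(focused))    # → 0.0
print(stationary_gaze_entropy(dispersed))  # → 2.0
```

A transition-based variant (entropy of the cell-to-cell transition matrix) is also used in the workload literature; which one a given study reports should be checked against its methods section.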

  17. Gaze Estimation for Off-Angle Iris Recognition Based on the Biometric Eye Model

    Energy Technology Data Exchange (ETDEWEB)

    Karakaya, Mahmut [ORNL]; Barstow, Del R [ORNL]; Santos-Villalobos, Hector J [ORNL]; Thompson, Joseph W [ORNL]; Bolme, David S [ORNL]; Boehnen, Chris Bensing [ORNL]

    2013-01-01

    Iris recognition is among the highest-accuracy biometrics. However, its accuracy relies on controlled, high-quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ANONYMIZED biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from elliptical features of the iris image. Typically, additional aids such as well-controlled light sources, head-mounted equipment, and multiple cameras are not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up table generated by using our biologically inspired biometric eye model and find the closest feature point in the look-up table to estimate the gaze. Based on results from real images, the proposed method shows effectiveness in gaze estimation accuracy for our biometric eye model, with an average error of approximately 3.5 degrees over a 50-degree range.
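The table-driven matching described above can be illustrated with a deliberately simplified eye model: assume the circular iris, viewed off-axis, foreshortens to an ellipse whose minor-to-major axis ratio is cos(gaze angle), precompute a look-up table of angle-to-ratio pairs, and pick the nearest entry to the observed ratio. This toy cosine model and the function names are illustrative assumptions, not the biometric eye model used in the paper.

```python
import math

def build_lut(angles_deg):
    """Hypothetical look-up table: off-axis gaze angle (deg) -> expected
    minor/major axis ratio of the iris ellipse. A circle foreshortens to
    an ellipse with ratio cos(angle) under this simple projection model."""
    return [(a, math.cos(math.radians(a))) for a in angles_deg]

def estimate_gaze(observed_ratio, lut):
    """Return the LUT angle whose predicted axis ratio is closest to the
    observed one (nearest-neighbor search over the precomputed table)."""
    return min(lut, key=lambda entry: abs(entry[1] - observed_ratio))[0]

lut = build_lut(range(0, 51, 5))   # 0..50 degrees in 5-degree steps
print(estimate_gaze(0.82, lut))    # → 35  (cos(35°) ≈ 0.819)
```

The real method matches richer segmentation features against a table generated from an anatomically grounded eye model, but the nearest-neighbor lookup structure is the same.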

  18. Right hemispheric dominance and interhemispheric cooperation in gaze-triggered reflexive shift of attention.

    Science.gov (United States)

    Okada, Takashi; Sato, Wataru; Kubota, Yasutaka; Toichi, Motomi; Murai, Toshiya

    2012-03-01

    The neural substrate for the processing of gaze remains unknown. The aim of the present study was to clarify which hemisphere dominantly processes gaze and whether the bilateral hemispheres cooperate with each other in the gaze-triggered reflexive shift of attention. Twenty-eight normal subjects were tested. Non-predictive gaze cues were presented in either unilateral or bilateral visual fields. The subjects localized the target as soon as possible. Reaction times (RT) were shorter when gaze cues were congruent toward than away from targets, whichever visual field they were presented in. RT were shorter in left than in right visual field presentations. RT in mono-directional bilateral presentations were shorter than those in both left and right unilateral presentations. When bi-directional bilateral cues were presented, RT were faster when valid cues were presented in the left than in the right visual field. The right hemisphere appears to be dominant, and there is interhemispheric cooperation in the gaze-triggered reflexive shift of attention. © 2012 The Authors. Psychiatry and Clinical Neurosciences © 2012 Japanese Society of Psychiatry and Neurology.

  19. Gazing toward humans: a study on water rescue dogs using the impossible task paradigm.

    Science.gov (United States)

    D'Aniello, Biagio; Scandurra, Anna; Prato-Previde, Emanuela; Valsecchi, Paola

    2015-01-01

    Various studies have assessed the role of life experiences, including learning opportunities, living conditions and the quality of dog-human relationships, in the use of human cues and problem-solving ability. The current study investigates how and to what extent training affects the behaviour of dogs and their communication with humans by comparing dogs trained for a water rescue service and untrained pet dogs in the impossible task paradigm. Twenty-three certified water rescue dogs (the water rescue group) and 17 dogs with no training experience (the untrained group) were tested using a modified version of the impossible task described by Marshall-Pescini et al. in 2009. The results demonstrated that the water rescue dogs directed their first gaze significantly more often towards the owner and spent more time gazing toward two people compared to the untrained pet dogs. There was no difference between the dogs of the two groups in the amount of time spent gazing at the owner or the stranger, nor in the interaction with the apparatus while attempting to obtain food. The specific training regime, aimed at promoting cooperation during the performance of water rescue, could account for the longer gazing behaviour shown toward people by the water rescue dogs and their priority of gazing toward the owner. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Gaze differences in processing pictures with emotional content.

    Science.gov (United States)

    Budimir, Sanja; Palmović, Marijan

    2011-01-01

    The International Affective Picture System (IAPS) is a set of standardized emotionally evocative color photographs developed by the NIMH Center for Emotion and Attention at the University of Florida. It contains more than 900 emotional pictures indexed by emotional valence, arousal and dominance. However, when IAPS pictures were used in studying emotions with event-related potentials, the results have shown a great deal of variation and inconsistency. In this research, arousal and dominance of the pictures were controlled while emotional valence was manipulated as 3 categories: pleasant, neutral and unpleasant pictures. Two experiments were conducted with an eye-tracker in order to determine to what the participants turn their gaze. Participants were 25 psychology students with normal vision. Every participant saw all pictures in color and the same pictures in a black/white version. This makes 200 analyzed units for color pictures and 200 for black and white pictures. Every picture was divided into figure and ground. Considering that perception can be influenced by color, edges, luminosity and contrast, and since all those factors are collapsed in the IAPS pictures, we compared color pictures with the same black and white pictures. In the first eye-tracking IAPS experiment we analyzed 12 emotional pictures and showed that participants have a higher number of fixations on the ground for neutral and unpleasant pictures and on the figure for pleasant pictures. The second experiment was conducted with 4 sets of emotionally complementary pictures (pleasant/unpleasant) which differ only in the content of the figure area, and it showed that participants were more focused on the figure area than on the ground area. Future ERP (event-related potential) research with IAPS pictures should take these findings into consideration and either choose pictures with a blank ground or adjust pictures so that the ground is blank. For following experiments, the suggestion is to put emotional content in the figure

  1. The Gaze-Cueing Effect in the United States and Japan: Influence of Cultural Differences in Cognitive Strategies on Control of Attention

    OpenAIRE

    Saki Takao; Yusuke Yamani; Atsunori Ariga

    2018-01-01

    The direction of gaze automatically and exogenously guides visual spatial attention, a phenomenon termed the gaze-cueing effect. Although this effect arises even when the stimulus onset asynchrony (SOA) between a non-predictive gaze cue and the target is relatively long, no empirical research has examined the factors underlying this extended cueing effect. Two experiments compared the gaze-cueing effect at longer SOAs (700 ms) in Japanese and American participants. Cross-cultural st...

  2. Behaviorism

    Science.gov (United States)

    Moore, J.

    2011-01-01

    Early forms of psychology assumed that mental life was the appropriate subject matter for psychology, and introspection was an appropriate method to engage that subject matter. In 1913, John B. Watson proposed an alternative: classical S-R behaviorism. According to Watson, behavior was a subject matter in its own right, to be studied by the…

  3. The neurophysiology of human touch and eye gaze and its effects on therapeutic relationships and healing: a scoping review protocol.

    Science.gov (United States)

    Kerr, Fiona; Wiechula, Rick; Feo, Rebecca; Schultz, Tim; Kitson, Alison

    2016-04-01

    The objective of this scoping review is to examine and map the range of neurophysiological impacts of human touch and eye gaze, and better understand their possible links to the therapeutic relationship and the process of healing. The specific question is "what neurophysiological impacts of human touch and eye gaze have been reported in relation to therapeutic relationships and healing?"

  4. When What You See Is What You Get: The Consequences of the Objectifying Gaze for Women and Men

    Science.gov (United States)

    Gervais, Sarah J.; Vescio, Theresa K.; Allen, Jill

    2011-01-01

    This research examined the effects of the objectifying gaze on math performance, interaction motivation, body surveillance, body shame, and body dissatisfaction. In an experiment, undergraduate participants (67 women and 83 men) received an objectifying gaze during an interaction with a trained confederate of the other sex. As hypothesized, the…

  5. Attention and gaze shifting in dual-task and go/no-go performance with vocal responding

    NARCIS (Netherlands)

    Lamers, M.J.M.; Roelofs, A.P.A.

    2011-01-01

    Evidence from go/no-go performance on the Eriksen flanker task with manual responding suggests that individuals gaze at stimuli just as long as needed to identify them (e.g., Sanders, 1998). In contrast, evidence from dual-task performance with vocal responding suggests that gaze shifts occur after

  6. Can gaze-contingent mirror-feedback from unfamiliar faces alter self-recognition?

    Science.gov (United States)

    Estudillo, Alejandro J; Bindemann, Markus

    2017-05-01

    This study focuses on learning of the self, by examining how human observers update internal representations of their own face. For this purpose, we present a novel gaze-contingent paradigm, in which an onscreen face mimics observers' own eye-gaze behaviour (congruent condition), moves its eyes in different directions from those of the observers (incongruent condition), or remains static and unresponsive (neutral condition). Across three experiments, the mimicry of the onscreen face did not affect observers' perceptual self-representations. However, this paradigm influenced observers' reports of their own face. This effect was such that observers felt the onscreen face to be their own and that, if the onscreen gaze had moved of its own accord, observers expected their own eyes to move too. The theoretical implications of these findings are discussed.

  7. Seeing to hear? Patterns of gaze to speaking faces in children with autism spectrum disorders.

    Directory of Open Access Journals (Sweden)

    Julia eIrwin

    2014-05-01

    Using eye-tracking methodology, gaze to a speaking face was compared in a group of children with autism spectrum disorders (ASD) and a group with typical development (TD). Patterns of gaze were observed under three conditions: audiovisual (AV) speech in auditory noise, visual-only speech, and an AV non-face, non-speech control. Children with ASD looked less at the face of the speaker and fixated less on the speaker's mouth than TD controls. No differences in gaze were found for the non-face, non-speech control task. Since the mouth holds much of the articulatory information available on the face, these findings suggest that children with ASD may have reduced access to critical linguistic information. This reduced access to visible articulatory information could be a contributor to the communication and language problems exhibited by children with ASD.

  8. The influence of social and symbolic cues on observers' gaze behaviour.

    Science.gov (United States)

    Hermens, Frouke; Walker, Robin

    2016-08-01

    Research has shown that social and symbolic cues presented in isolation and at fixation have strong effects on observers, but it is unclear how cues compare when they are presented away from fixation and embedded in natural scenes. We here compare the effects of two types of social cue (gaze and pointing gestures) and one type of symbolic cue (arrow signs) on eye movements of observers under two viewing conditions (free viewing vs. a memory task). The results suggest that social cues are looked at more quickly, for longer and more frequently than the symbolic arrow cues. An analysis of saccades initiated from the cue suggests that the pointing cue leads to stronger cueing than the gaze and the arrow cue. While the task had only a weak influence on gaze orienting to the cues, stronger cue following was found for free viewing compared to the memory task. © 2015 The British Psychological Society.

  9. Investigating preferences for color-shape combinations with gaze driven optimization method based on evolutionary algorithms.

    Science.gov (United States)

    Holmes, Tim; Zanker, Johannes M

    2013-01-01

    Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioral measures that directly reflect subjective choice. To determine individual preferences for simple composition rules, we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been demonstrated as a tool to identify aesthetic preferences (Holmes and Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combinations of color and shape which have been promoted in the Bauhaus art school. We used the same three shapes (square, circle, triangle) used by Kandinsky (1923), with the three-color palette from the original experiment (A), an extended seven-color palette (B), and eight different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with eight stimuli of different shapes, colors and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in an unbiased experimental design with an extended set of possible combinations. We tested six participants extensively on the different conditions and found consistent preferences for color-shape combinations for individuals, but little evidence at the group level for a clear color/shape preference consistent with Kandinsky's claims, apart from a weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of color and shapes, but also that these associations are robust within a single individual. These individual differences go some way toward challenging the claims of a universal preference for color/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the

  10. Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.

    Science.gov (United States)

    Barbosa, Sara; Pires, Gabriel; Nunes, Urbano

    2016-03-01

    Brain computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent due to their eye impairment. However, unimodal gaze-independent approaches typically present levels of performance substantially lower than gaze-dependent approaches. The combination of multimodal stimuli has been pointed to as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneous visual and auditory stimulation is proposed. Auditory stimuli are based on natural, meaningful spoken words, increasing stimulus discrimination and decreasing the user's mental effort in associating stimuli with symbols. The visual part of the interface is covertly controlled, ensuring gaze-independence. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU) and covert HVA. Average online accuracy for the hybrid approach was 85.3%, which is more than 32% above the VC and AU approaches. Questionnaire results indicate that the HVA approach was the least demanding gaze-independent interface. Interestingly, the P300 grand average for the HVA approach coincides with an almost perfect sum of the P300s evoked separately by the VC and AU tasks. The proposed HVA-BCI is the first solution simultaneously embedding natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with the state of the art. The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. How does image noise affect actual and predicted human gaze allocation in assessing image quality?

    Science.gov (United States)

    Röhrbein, Florian; Goddard, Peter; Schneider, Michael; James, Georgina; Guo, Kun

    2015-07-01

    A central research question in natural vision is how we allocate fixations to extract informative cues for scene perception. With high-quality images, psychological and computational studies have made significant progress in understanding and predicting human gaze allocation during scene exploration. However, it is unclear whether these findings generalise to degraded naturalistic visual inputs. In this eye-tracking and computational study, we methodically distorted both man-made and natural scenes with a Gaussian low-pass filter, a circular averaging filter and additive Gaussian white noise, and monitored participants' gaze behaviour while they assessed perceived image quality. Compared with the original high-quality images, distorted images attracted fewer fixations but longer fixation durations, shorter saccade distances and a stronger central fixation bias. This impact of image noise manipulation on gaze distribution was mainly determined by noise intensity rather than noise type, and was more pronounced for natural scenes than for man-made scenes. We further compared four high-performing visual attention models in predicting human gaze allocation in degraded scenes, and found that model performance lacked human-like sensitivity to noise type and intensity, and was considerably worse than human performance measured as inter-observer variance. Furthermore, the central fixation bias is a major predictor of human gaze allocation, and becomes more prominent with increased noise intensity. Our results indicate a crucial role of external noise intensity in determining scene-viewing gaze behaviour, which should be considered in the development of realistic human-vision-inspired attention models. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Gaze stability, dynamic balance and participation deficits in people with multiple sclerosis at fall-risk.

    Science.gov (United States)

    Garg, Hina; Dibble, Leland E; Schubert, Michael C; Sibthorp, Jim; Foreman, K Bo; Gappmaier, Eduard

    2018-05-05

    Despite common complaints of dizziness, and demyelination of the afferent and efferent pathways to and from the vestibular nuclei, which may adversely affect the angular Vestibulo-Ocular Reflex (aVOR) and vestibulo-spinal function in persons with Multiple Sclerosis (PwMS), few studies have examined gaze and dynamic balance function in PwMS. The aims were to: 1) determine the differences in gaze stability, dynamic balance and participation measures between PwMS and controls; 2) examine the relationships between gaze stability, dynamic balance and participation. Nineteen ambulatory PwMS at fall-risk and 14 age-matched controls were recruited. Outcomes included (a) gaze stability [angular Vestibulo-Ocular Reflex (aVOR) gain (ratio of eye to head velocity); number of Compensatory Saccades (CS) per head rotation; CS latency; gaze position error; Coefficient of Variation (CV) of aVOR gain], (b) dynamic balance [Functional Gait Assessment, FGA; four square step test], and (c) participation [dizziness handicap inventory; activities-specific balance confidence scale]. Separate independent t-tests and Pearson's correlations were calculated. PwMS were aged 53 ± 11.7 years and had 4.2 ± 3.3 falls/year. PwMS demonstrated significantly poorer gaze stability, dynamic balance and participation measures compared to controls. CV of aVOR gain and CS latency were significantly correlated with FGA. Deficits and correlations across a spectrum of disability measures highlight the relevance of gaze and dynamic balance assessment in PwMS. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.
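The outcomes above define aVOR gain as the ratio of eye to head velocity and use its Coefficient of Variation (CV) across head impulses as a consistency measure. A minimal sketch of those two computations; the per-impulse peak velocities below are hypothetical and are not data from the study:

```python
import numpy as np

def avor_gain(eye_velocity, head_velocity):
    """aVOR gain per head impulse: eye velocity divided by head velocity."""
    return np.asarray(eye_velocity, dtype=float) / np.asarray(head_velocity, dtype=float)

def coefficient_of_variation(gains):
    """CV of aVOR gain across impulses: sample SD divided by the mean."""
    g = np.asarray(gains, dtype=float)
    return g.std(ddof=1) / g.mean()

# Hypothetical per-impulse peak velocities in deg/s (illustrative only):
eye = [148.0, 152.0, 139.0, 160.0]
head = [150.0, 155.0, 150.0, 158.0]
gains = avor_gain(eye, head)
cv = coefficient_of_variation(gains)  # higher CV = less consistent gain
```

A CV well above that of controls would indicate inconsistent aVOR performance across impulses, the kind of variability the study relates to the FGA.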

  13. Eye Contact and Fear of Being Laughed at in a Gaze Discrimination Task

    Directory of Open Access Journals (Sweden)

    Jorge Torres-Marín

    2017-11-01

    Current approaches conceptualize gelotophobia as a personality trait characterized by a disproportionate fear of being laughed at by others. Consistent with this perspective, gelotophobes are also described as neurotic and introverted and as having a paranoid tendency to anticipate derision and mockery. Although research on gelotophobia has progressed significantly over the past two decades, no evidence exists concerning the potential effects of gelotophobia on reactions to eye contact. Previous research has pointed to difficulties in discriminating gaze direction as the basis of possible misinterpretations of others' intentions or mental states. The aim of the present research was to examine whether gelotophobia predisposition modulates the effects of eye contact (i.e., gaze discrimination) when processing faces portraying various emotional expressions. In two experiments, participants performed a gaze discrimination task in which they responded, as quickly and accurately as possible, to the eyes' direction on faces displaying a happy, angry, fearful, neutral, or sad emotional expression. In particular, we expected trait gelotophobia to modulate the eye contact effect, showing specific group differences in the happiness condition. The results of Study 1 (N = 40) indicated that gelotophobes made more errors than non-gelotophobes in the gaze discrimination task. In contrast to our initial hypothesis, the happiness expression did not have any special role in the observed differences between individuals with high vs. low trait gelotophobia. In Study 2 (N = 40), we replicated the pattern of data concerning gaze discrimination ability, even after controlling for individuals' scores on social anxiety. Furthermore, in our second experiment, we found that gelotophobes did not exhibit any problem with identifying others' emotions, or a general incorrect attribution of affective features, such as valence

  14. Synergistic convergence and split pons in horizontal gaze palsy and progressive scoliosis in two sisters

    Directory of Open Access Journals (Sweden)

    Jain Nitin

    2011-01-01

    Synergistic convergence is an ocular motor anomaly in which both eyes converge on attempted abduction or attempted horizontal gaze. It has been related to peripheral causes such as congenital fibrosis of the extraocular muscles (CFEOM), congenital cranial dysinnervation syndrome and ocular misinnervation, or, rarely, to central causes such as horizontal gaze palsy with progressive scoliosis and brainstem dysplasia. We hereby report the occurrence of synergistic convergence in two sisters. Both also had kyphoscoliosis. Magnetic resonance imaging (MRI) of the brain and spine in both patients showed signs of brainstem dysplasia (the split pons sign) differing in degree (the younger sister had more marked changes).

  15. Head mounted device for point-of-gaze estimation in three dimensions

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Lidegaard, Morten; Krüger, Norbert

    2014-01-01

    This paper presents a fully calibrated extended geometric approach for gaze estimation in three dimensions (3D). The methodology is based on a geometric approach utilising a fully calibrated binocular setup constructed as a head-mounted system. The approach is based on utilisation of two ordinary...... in the horizontal and vertical dimensions regarding fixations. However, even though the workspace is limited, because the system is designed as a head-mounted device the workspace volume is positioned relative to the pose of the device. Hence gaze can be estimated in 3D with relatively free head...

  16. Evaluation of a low-cost open-source gaze tracker

    DEFF Research Database (Denmark)

    San Agustin, Javier; Jensen, Henrik Tomra Skovsgaard Hegner; Møllenbach, Emilie

    2010-01-01

    This paper presents a low-cost gaze tracking system that is based on a webcam mounted close to the user's eye. The performance of the gaze tracker was evaluated in an eye-typing task using two different typing applications. Participants could type between 3.56 and 6.78 words per minute, depending...... on the typing system used. A pilot study to assess the usability of the system was also carried out in the home of a user with severe motor impairments. The user successfully typed on a wall-projected interface using his eye movements....

  17. EyeDroid: An Open Source Mobile Gaze Tracker on Android for Eyewear Computers

    DEFF Research Database (Denmark)

    Jalaliniya, Shahram; Mardanbeigi, Diako; Sintos, Ioannis

    2015-01-01

    In this paper we report on the development and evaluation of a video-based mobile gaze tracker for eyewear computers. Unlike most previous work, our system performs all its processing workload on an Android device and sends the coordinates of the gaze point to an eyewear device through wireless...... connection. We propose a lightweight software architecture for Android to increase the efficiency of the image processing needed for eye tracking. The evaluation of the system indicated an accuracy of 1.06° and a battery lifetime of approximately 4.5 hours....

  18. Look at my poster! Active gaze, preference and memory during a poster session.

    Science.gov (United States)

    Foulsham, Tom; Kingstone, Alan

    2011-01-01

    In science, as in advertising, people often present information on a poster, yet little is known about attention during a poster session. A mobile eye-tracker was used to record participants' gaze during a mock poster session featuring a range of academic psychology posters. Participants spent the most time looking at introductions and conclusions. Larger posters were looked at for longer, as were posters rated more interesting (but not necessarily more aesthetically pleasing). Interestingly, gaze did not correlate with memory for poster details or liking, suggesting that attracting someone towards your poster may not be enough.

  19. Gaze characteristics of elite and near-elite athletes in ice hockey defensive tactics.

    Science.gov (United States)

    Martell, Stephen G; Vickers, Joan N

    2004-04-01

    Traditional visual search experiments, in which the researcher pre-selects video-based scenes for the participant to respond to, show that elite players make more efficient decisions than non-elites, but disagree on how they temporally regulate their gaze. Using the vision-in-action [J.N. Vickers, J. Exp. Psychol.: Human Percept. Perform. 22 (1996) 342] approach, we tested whether the significant gaze that differentiates elite and non-elite athletes occurred either early in the task and was of more rapid duration [A.M. Williams et al., Res. Quart. Exer. Sport 65 (1994) 127; A.M. Williams and K. Davids, Res. Quart. Exer. Sport 69 (1998) 111], or late in the task and was of longer duration [W. Helsen, J.M. Pauwels, A cognitive approach to visual search in sport, in: D. Brogan, K. Carr (Eds.), Visual Search, vol. II, Taylor and Francis, London, 1992], or whether a more complex gaze control strategy was used that consisted of both early and rapid fixations followed by a late fixation of long duration prior to the final execution. We tested this using a live defensive zone task in ice hockey. Results indicated that athletes temporally regulated their gaze using two different gaze control strategies. First, fixation/tracking (F/T) gazes early in the trial were significantly shorter than the final F/T, confirming that the elite group fixated the tactical locations more rapidly than the non-elite group on successful plays. Second, the final F/T prior to critical movement initiation (i.e. F/T-1) was significantly longer for both groups, averaging 30% of the final part of the phase, and occurred as the athletes isolated a single object or location to end the play. The results imply that expertise in defensive tactics is defined by a cascade of F/Ts, which began with the athletes fixating or tracking specific locations for short durations at the beginning of the play and concluded with a final gaze of long duration to a relatively stable target at the end. The results are

  20. Radiologically defining horizontal gaze using EOS imaging-a prospective study of healthy subjects and a retrospective audit.

    Science.gov (United States)

    Hey, Hwee Weng Dennis; Tan, Kimberly-Anne; Ho, Vivienne Chien-Lin; Azhar, Syifa Bte; Lim, Joel-Louis; Liu, Gabriel Ka-Po; Wong, Hee-Kit

    2018-06-01

    As sagittal alignment of the cervical spine is important for maintaining horizontal gaze, it is important to determine the former for surgical correction. However, horizontal gaze remains poorly-defined from a radiological point of view. The objective of this study was to establish radiological criteria to define horizontal gaze. This study was conducted at a tertiary health-care institution over a 1-month period. A prospective cohort of healthy patients was used to determine the best radiological criteria for defining horizontal gaze. A retrospective cohort of patients without rigid spinal deformities was used to audit the incidence of horizontal gaze. Two categories of radiological parameters for determining horizontal gaze were tested: (1) the vertical offset distances of key identifiable structures from the horizontal gaze axis and (2) imaginary lines convergent with the horizontal gaze axis. Sixty-seven healthy subjects underwent whole-body EOS radiographs taken in a directed standing posture. Horizontal gaze was radiologically defined using each parameter, as represented by their means, 95% confidence intervals (CIs), and associated 2 standard deviations (SDs). Subsequently, applying the radiological criteria, we conducted a retrospective audit of such radiographs (before the implementation of a strict radioimaging standardization). The mean age of our prospective cohort was 46.8 years, whereas that of our retrospective cohort was 37.2 years. Gender was evenly distributed across both cohorts. The four parameters with the lowest 95% CI and 2 SD were the distance offsets of the midpoint of the hard palate (A) and the base of the sella turcica (B), the horizontal convergents formed by the tangential line to the hard palate (C), and the line joining the center of the orbital orifice with the internal occipital protuberance (D). In the prospective cohort, good sensitivity (>98%) was attained when two or more parameters were used. Audit using Criterion B

  1. In situ examination of decision-making skills and gaze behaviour of basketball players

    NARCIS (Netherlands)

    van Maarseveen, Mariëtte J.J.; Savelsbergh, Geert J.P.; Oudejans, Raôul R.D.

    2018-01-01

    In this study we examined in situ decision-making skills and gaze behaviour of skilled female basketball players. Players participated as ball carriers in a specific 3 vs 3 pick-and-roll basketball play. Playing both on the right and left side of the court and facing three types of defensive play,

  2. Towards grim voyeurism: The poetics of the gaze on Africa | Gahutu ...

    African Journals Online (AJOL)

    In literature as well as in the press, television and cinema, voyeurism is an attitude very typical of the Western gaze on Africa. By definition, the voyeur is in a morally inferior position. This is the connotation we wish to give to the touristic attitude inclined to enjoy the misfortune of the other. Concerning the case in ...

  3. Prior Knowledge Facilitates Mutual Gaze Convergence and Head Nodding Synchrony in Face-to-face Communication.

    Science.gov (United States)

    Thepsoonthorn, C; Yokozuka, T; Miura, S; Ogawa, K; Miyake, Y

    2016-12-02

    As prior knowledge is claimed to be an essential key to effective education, we were interested in exploring whether prior knowledge enhances communication effectiveness. To demonstrate the effects of prior knowledge, mutual gaze convergence and head nodding synchrony were observed as indicators of communication effectiveness. We conducted an experiment on a lecture task between lecturer and student under two conditions: prior knowledge and non-prior knowledge. Students in the prior-knowledge condition were given basic information about the lecture content and had their understanding assessed by the experimenter before the lecture started, while students in the non-prior-knowledge condition received none. The result shows that interaction in the prior-knowledge condition establishes significantly higher mutual gaze convergence (t(15.03) = 6.72, p < 0.0001; α = 0.05, n = 20) and head nodding synchrony (t(16.67) = 1.83, p = 0.04; α = 0.05, n = 19) than the non-prior-knowledge condition. This study reveals that prior knowledge facilitates mutual gaze convergence and head nodding synchrony. Furthermore, interaction with and without prior knowledge can be evaluated by measuring or observing mutual gaze convergence and head nodding synchrony.
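The fractional degrees of freedom reported above (e.g. t(15.03)) are characteristic of Welch's unequal-variances t-test. A minimal sketch of that computation from group summary statistics; the means, variances and group sizes below are made up for illustration and are not taken from the study:

```python
from math import sqrt

def welch_t(mean1, var1, n1, mean2, var2, n2):
    """Welch's t statistic and (fractional) Welch-Satterthwaite degrees of
    freedom for two independent samples with unequal variances."""
    se1, se2 = var1 / n1, var2 / n2          # squared standard errors
    t = (mean1 - mean2) / sqrt(se1 + se2)
    df = (se1 + se2) ** 2 / (se1 ** 2 / (n1 - 1) + se2 ** 2 / (n2 - 1))
    return t, df

# Hypothetical mutual-gaze-convergence summaries per condition (illustrative):
t, df = welch_t(0.62, 0.010, 10, 0.35, 0.012, 10)
```

The Welch correction shrinks the degrees of freedom below n1 + n2 − 2 when the group variances differ, which is why a non-integer value such as 15.03 can appear in the report.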

  4. Single dose testosterone administration alleviates gaze avoidance in women with Social Anxiety Disorder

    NARCIS (Netherlands)

    Enter, Dorien; Terburg, David|info:eu-repo/dai/nl/32304087X; Harrewijn, Anita; Spinhoven, Philip; Roelofs, Karin

    2015-01-01

    Gaze avoidance is one of the most characteristic and persistent social features in people with Social Anxiety Disorder (SAD). It signals social submissiveness and hampers adequate social interactions. Patients with SAD typically show reduced testosterone levels, a hormone that facilitates socially

  5. Relationship between abstract thinking and eye gaze pattern in patients with schizophrenia

    Science.gov (United States)

    2014-01-01

    Background Effective integration of visual information is necessary for abstract thinking, but patients with schizophrenia have slow eye movements and usually explore limited visual information. This study examines the relationship between abstract thinking ability and the pattern of eye gaze in patients with schizophrenia using a novel theme identification task. Methods Twenty patients with schizophrenia and 22 healthy controls completed the theme identification task, in which subjects selected the word, from a set of provided words, that best described the theme of a picture. Eye gaze while performing the task was recorded by an eye tracker. Results Patients exhibited a significantly lower correct rate for theme identification and fewer fixations and saccades than controls. The correct rate was significantly correlated with the fixation count in patients, but not in controls. Conclusions Patients with schizophrenia showed impaired abstract thinking and decreased quality of gaze, which were positively associated with each other. Theme identification and eye gaze appear to be useful tools for the objective measurement of abstract thinking in patients with schizophrenia. PMID:24739356

  6. Queering the Adult Gaze: Young Male Hustlers and Their Alliances with Older Gay Men

    Science.gov (United States)

    Raible, John

    2011-01-01

    Based on ethnographic data collected at a gay bar with sexual minority youths as dancers or strippers, this study calls attention to the gazes through which adults view and position male youths. It highlights a dancer named Austin, who at times engaged in the underground hustling economy centered in the bar. The findings suggest that the social…

  7. Illusory shadow person causing paradoxical gaze deviations during temporal lobe seizures

    NARCIS (Netherlands)

    Zijlmans, M.; van Eijsden, P.; Ferrier, C. H.; Kho, K. H.; van Rijen, P. C.; Leijten, F. S. S.

    Generally, activation of the frontal eye field during seizures can cause versive (forced) gaze deviation, while non-versive head deviation is hypothesised to result from ictal neglect after inactivation of the ipsilateral temporoparietal area. Almost all non-versive head deviations occurring during

  8. Fear of Negative Evaluation Influences Eye Gaze in Adolescents with Autism Spectrum Disorder: A Pilot Study

    Science.gov (United States)

    White, Susan W.; Maddox, Brenna B.; Panneton, Robin K.

    2015-01-01

    Social anxiety is common among adolescents with Autism Spectrum Disorder (ASD). In this modest-sized pilot study, we examined the relationship between social worries and gaze patterns to static social stimuli in adolescents with ASD (n = 15) and gender-matched adolescents without ASD (control; n = 18). Among cognitively unimpaired adolescents with…

  9. The head tracks and gaze predicts: how the world's best batters hit the ball

    NARCIS (Netherlands)

    Mann, D.L.; Spratford, W.; Abernethy, B.

    2013-01-01

    Hitters in fast ball-sports do not align their gaze with the ball throughout ball-flight; rather, they use predictive eye movement strategies that contribute towards their level of interceptive skill. Existing studies claim that (i) baseball and cricket batters cannot track the ball because it moves

  10. An Open Conversation on Using Eye-Gaze Methods in Studies of Neurodevelopmental Disorders

    Science.gov (United States)

    Venker, Courtney E.; Kover, Sara T.

    2015-01-01

    Purpose: Eye-gaze methods have the potential to advance the study of neurodevelopmental disorders. Despite their increasing use, challenges arise in using these methods with individuals with neurodevelopmental disorders and in reporting sufficient methodological detail such that the resulting research is replicable and interpretable. Method: This…

  11. Eye gaze during comprehension of American Sign Language by native and beginning signers.

    Science.gov (United States)

    Emmorey, Karen; Thompson, Robin; Colvin, Rachael

    2009-01-01

    An eye-tracking experiment investigated where deaf native signers (N = 9) and hearing beginning signers (N = 10) look while comprehending a short narrative and a spatial description in American Sign Language produced live by a fluent signer. Both groups fixated primarily on the signer's face (more than 80% of the time) but differed with respect to fixation location. Beginning signers fixated on or near the signer's mouth, perhaps to better perceive English mouthing, whereas native signers tended to fixate on or near the eyes. Beginning signers shifted gaze away from the signer's face more frequently than native signers, but the pattern of gaze shifts was similar for both groups. When a shift in gaze occurred, the sign narrator was almost always looking at his or her hands and was most often producing a classifier construction. We conclude that joint visual attention and attention to mouthing (for beginning signers), rather than linguistic complexity or processing load, affect gaze fixation patterns during sign language comprehension.

  12. Can children take advantage of nao gaze-based hints during gameplay?

    NARCIS (Netherlands)

    Mwangi, E.N.; Diaz, M.; Barakova, E.I.; Catala, A.; Rauterberg, G.W.M.

    2017-01-01

    This paper presents a study that analyzes the effects of robots' gaze hints on children's performance in a card-matching game. We conducted a within-subjects study, in which children played a card matching game "Memory" in the presence of a robot tutor in two sessions. In one session, the robot gave

  13. The effects of varying contextual demands on age-related positive gaze preferences.

    Science.gov (United States)

    Noh, Soo Rim; Isaacowitz, Derek M

    2015-06-01

    Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy-neutral and fearful-neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise but was present when there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults' positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. (c) 2015 APA, all rights reserved.
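
The "positive gaze preference" described above is typically quantified as the share of looking time directed at the emotional member of a face pair. A minimal sketch of such an index (the function name and the 0.5 neutrality point are illustrative assumptions, not details from the study):

```python
def gaze_preference(time_on_emotional_ms, time_on_neutral_ms):
    """Proportion of fixation time spent on the emotional face of a
    happy-neutral or fearful-neutral pair.  Values above 0.5 indicate
    a preference toward the emotional face; values below 0.5 indicate
    looking away from it."""
    total = time_on_emotional_ms + time_on_neutral_ms
    if total == 0:
        return None  # no usable fixations on either face
    return time_on_emotional_ms / total
```

A preference toward happy faces and away from fearful faces, as reported for the older adults, would then appear as scores above 0.5 for happy-neutral pairs and below 0.5 for fearful-neutral pairs.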

  14. Does beauty catch the eye?: Sex differences in gazing at attractive opposite-sex targets

    NARCIS (Netherlands)

    van Straaten, I.; Holland, R.; Finkenauer, C.; Hollenstein, T.; Engels, R.C.M.E.

    2010-01-01

    We investigated to what extent the length of people's gazes during conversations with opposite-sex persons is affected by the physical attractiveness of the partner. Single participants (N = 115) conversed for 5 min with confederates who were rated either as low or high on physical attractiveness.

  15. Eye-gaze patterns as students study worked-out examples in mechanics

    Directory of Open Access Journals (Sweden)

    Brian H. Ross

    2010-10-01

    Full Text Available This study explores what introductory physics students actually look at when studying worked-out examples. Our classroom experiences indicate that introductory physics students neither discuss nor refer to the conceptual information contained in the text of worked-out examples. This study is an effort to determine to what extent students incorporate the textual information into the way they study. Student eye-gaze patterns were recorded as they studied the examples to aid them in solving a target problem. Contrary to our expectations from classroom interactions, students spent 40±3% of their gaze time reading the textual information. Their gaze patterns were also characterized by numerous jumps between corresponding mathematical and textual information, implying that they were combining information from both sources. Despite this large fraction of time spent reading the text, student recall of the conceptual information contained therein remained very poor. We also found that having a particular problem in mind had no significant effects on the gaze-patterns or conceptual information retention.

  16. MEG evidence for dynamic amygdala modulations by gaze and facial emotions.

    Directory of Open Access Journals (Sweden)

    Thibaud Dumas

    Full Text Available Amygdala is a key brain region for face perception. While the role of amygdala in the perception of facial emotion and gaze has been extensively highlighted with fMRI, the unfolding in time of amygdala responses to emotional versus neutral faces with different gaze directions is scarcely known. Here we addressed this question in healthy subjects using MEG combined with an original source imaging method based on individual amygdala volume segmentation and the localization of sources in the amygdala volume. We found an early peak of amygdala activity that was enhanced for fearful relative to neutral faces between 130 and 170 ms. The effect of emotion was again significant in a later time range (310-350 ms). Moreover, the amygdala response was greater for direct relative to averted gaze between 190 and 350 ms, and this effect was selective for fearful faces in the right amygdala. Altogether, our results show that the amygdala is involved in the processing and integration of emotion and gaze cues from faces in different time ranges, thus underlining its role in multiple stages of face perception.

  17. Social eye gaze modulates processing of speech and co-speech gesture.

    Science.gov (United States)

    Holler, Judith; Schubotz, Louise; Kelly, Spencer; Hagoort, Peter; Schuetze, Manuela; Özyürek, Aslı

    2014-12-01

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech+gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker's preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients' speech processing suffers, gestures can enhance the comprehension of a speaker's message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Home-Based Computer Gaming in Vestibular Rehabilitation of Gaze and Balance Impairment.

    Science.gov (United States)

    Szturm, Tony; Reimer, Karen M; Hochman, Jordan

    2015-06-01

    Disease or damage of the vestibular sense organs cause a range of distressing symptoms and functional problems that could include loss of balance, gaze instability, disorientation, and dizziness. A novel computer-based rehabilitation system with therapeutic gaming application has been developed. This method allows different gaze and head movement exercises to be coupled to a wide range of inexpensive, commercial computer games. It can be used in standing, and thus graded balance demands using a sponge pad can be incorporated into the program. A case series pre- and postintervention study was conducted of nine adults diagnosed with peripheral vestibular dysfunction who received a 12-week home rehabilitation program. The feasibility and usability of the home computer-based therapeutic program were established. Study findings revealed that using head rotation to interact with computer games, when coupled to demanding balance conditions, resulted in significant improvements in standing balance, dynamic visual acuity, gaze control, and walking performance. Perception of dizziness as measured by the Dizziness Handicap Inventory also decreased significantly. These preliminary findings provide support that a low-cost home game-based exercise program is well suited to train standing balance and gaze control (with active and passive head motion).

  19. LAND WHERE YOU LOOK? – FUNCTIONAL RELATIONSHIPS BETWEEN GAZE AND MOVEMENT BEHAVIOUR IN A BACKWARD SALTO

    Directory of Open Access Journals (Sweden)

    Thomas Heinen

    2012-07-01

    Full Text Available In most everyday actions the eyes look towards objects and locations they are engaged with in a specific task, and this information is used to guide the corresponding action. The question is, however, whether this strategy also holds for skills incorporating a whole-body rotation in sport. Therefore, the goal of this study was to investigate relationships between gaze behaviour and movement behaviour in a complex gymnastics skill, namely the backward salto performed as a dismount on the uneven bars. Thirteen expert gymnasts were instructed to fixate a light spot on the landing mat during the downswing phase when performing a backward salto as dismount. The location of the light spot was varied systematically with regard to each gymnast’s individual landing distance. Time-discrete kinematic parameters of the swing motion and the dismount were measured. It was expected that fixating the gaze towards different locations of the light spot on the landing mat would directly affect the landing location. We had, however, no specific predictions on the effects of manipulating gaze direction on the remaining kinematic parameters. The hip angle at the top of the backswing, the duration of the downswing phase, the hip angle prior to kick-through, and the landing distance varied clearly as a function of the location of the light spot. It is concluded that fixating the gaze on the landing mat serves to guide execution of the skill so as to land at a particular location.

  20. Look over There! Unilateral Gaze Increases Geographical Memory of the 50 United States

    Science.gov (United States)

    Propper, Ruth E.; Brunye, Tad T.; Christman, Stephen D.; Januszewskia, Ashley

    2012-01-01

    Based on their specialized processing abilities, the left and right hemispheres of the brain may not contribute equally to recall of general world knowledge. US college students recalled the verbal names and spatial locations of the 50 US states while sustaining leftward or rightward unilateral gaze, a procedure that selectively activates the…

  1. The Iron Cage and the Gaze: Interpreting Medical Control in the English Health System

    Directory of Open Access Journals (Sweden)

    Mark Exworthy

    2015-05-01

    Full Text Available This paper seeks to determine the value of theoretical ideal-types of medical control. Whilst ideal types (such as the iron cage and gaze) need revision in their application to medical settings, they remain useful in describing and explaining patterns of control and autonomy in the medical profession. The apparent transition from the cage to the gaze has often been over-stated, since both types are found in many contemporary health reforms. Indeed, forms of neo-bureaucracy have emerged alongside surveillance of the gaze. These types are contextualised and elaborated in terms of two empirical examples: the management of medical performance and financial incentives for senior hospital doctors in England. Findings point towards the reformulation of medical control, an on-going re-stratification of the medical profession, and the internalisation of managerial discourses. The cumulative effect involves the medical profession’s ability to re-cast and enhance its position (vis-à-vis managerial interests). Keywords: medical profession, medical control, iron cage, gaze

  2. MEG Evidence for Dynamic Amygdala Modulations by Gaze and Facial Emotions

    Science.gov (United States)

    Dumas, Thibaud; Dubal, Stéphanie; Attal, Yohan; Chupin, Marie; Jouvent, Roland; Morel, Shasha; George, Nathalie

    2013-01-01

    Background: Amygdala is a key brain region for face perception. While the role of amygdala in the perception of facial emotion and gaze has been extensively highlighted with fMRI, the unfolding in time of amygdala responses to emotional versus neutral faces with different gaze directions is scarcely known. Methodology/Principal Findings: Here we addressed this question in healthy subjects using MEG combined with an original source imaging method based on individual amygdala volume segmentation and the localization of sources in the amygdala volume. We found an early peak of amygdala activity that was enhanced for fearful relative to neutral faces between 130 and 170 ms. The effect of emotion was again significant in a later time range (310–350 ms). Moreover, the amygdala response was greater for direct relative to averted gaze between 190 and 350 ms, and this effect was selective for fearful faces in the right amygdala. Conclusion: Altogether, our results show that the amygdala is involved in the processing and integration of emotion and gaze cues from faces in different time ranges, thus underlining its role in multiple stages of face perception. PMID:24040190

  3. [Left lateral gaze paresis due to subcortical hematoma in the right precentral gyrus].

    Science.gov (United States)

    Sato, K; Takamori, M

    1998-03-01

    We report a case of transient left lateral gaze paresis due to a hemorrhagic lesion restricted in the right precentral gyrus. A 74-year-old female experienced a sudden clumsiness of the left upper extremity. A neurological examination revealed a left central facial paresis, distal dominant muscle weakness in the left upper limb and left lateral gaze paresis. There were no other focal neurological signs. Laboratory data were all normal. Brain CTs and MRIs demonstrated a subcortical hematoma in the right precentral gyrus. The neurological symptoms and signs disappeared over seven days. A recent physiological study suggested that the human frontal eye field (FEF) is located in the posterior part of the middle frontal gyrus (Brodmann's area 8) and the precentral gyrus around the precentral sulcus. More recent studies stressed the role of the precentral sulcus and the precentral gyrus. Our case supports those physiological findings. The hematoma affected both the FEF and its underlying white matter in our case. We assume the lateral gaze paresis is attributable to the disruption of the fibers from the FEF. It is likely that fibers for motor control of the face, upper extremity, and lateral gaze lie adjacently in the subcortical area.

  4. Eye Gaze Correlates of Motor Impairment in VR Observation of Motor Actions.

    Science.gov (United States)

    Alves, J; Vourvopoulos, A; Bernardino, A; Bermúdez I Badia, S

    2016-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Methodologies, Models and Algorithms for Patients Rehabilitation". The objective was to identify eye gaze correlates of motor impairment in a virtual reality motor observation task in a study with healthy participants and stroke patients. Participants consisted of a group of healthy subjects (N = 20) and a group of stroke survivors (N = 10). Both groups were required to observe a simple reach-and-grab and place-and-release task in a virtual environment. Additionally, healthy subjects were required to observe the task in a normal condition and a constrained movement condition. Eye movements were recorded during the observation task for later analysis. For healthy participants, results showed differences in gaze metrics when comparing the normal and arm-constrained conditions. Differences in gaze metrics were also found when comparing dominant and non-dominant arm for saccades and smooth pursuit events. For stroke patients, results showed longer smooth pursuit segments in action observation when observing the paretic arm, thus providing evidence that the affected circuitry may be activated for eye gaze control during observation of the simulated motor action. This study suggests that neural motor circuits are involved, at multiple levels, in observation of motor actions displayed in a virtual reality environment. Thus, eye tracking combined with action observation tasks in a virtual reality display can be used to monitor motor deficits derived from stroke, and consequently can also be used for rehabilitation of stroke patients.

  5. Differences in How 12- and 24-Month-Olds Interpret the Gaze of Adults

    Science.gov (United States)

    Moore, Chris; Povinelli, Daniel J.

    2007-01-01

    This study examined the hypothesis that toddlers interpret an adult's head turn as evidence that the adult was looking at something, whereas younger infants interpret gaze based on an expectancy that an interesting object will be present on the side to which the adult has turned. Infants of 12 months and toddlers of 24 months were first shown that…

  6. Spatial updating depends on gaze direction even after loss of vision.

    Science.gov (United States)

    Reuschel, Johanna; Rösler, Frank; Henriques, Denise Y P; Fiehler, Katja

    2012-02-15

    Direction of gaze (eye angle + head angle) has been shown to be important for representing space for action, implying a crucial role of vision for spatial updating. However, blind people have no access to vision yet are able to perform goal-directed actions successfully. Here, we investigated the role of visual experience for localizing and updating targets as a function of intervening gaze shifts in humans. People who differed in visual experience (late blind, congenitally blind, or sighted) were briefly presented with a proprioceptive reach target while facing it. Before they reached to the target's remembered location, they turned their head toward an eccentric direction that also induced corresponding eye movements in sighted and late blind individuals. We found that reaching errors varied systematically as a function of shift in gaze direction only in participants with early visual experience (sighted and late blind). In the late blind, this effect was solely present in people with moveable eyes but not in people with at least one glass eye. Our results suggest that the effect of gaze shifts on spatial updating develops on the basis of visual experience early in life and remains even after loss of vision as long as feedback from the eyes and head is available.

  7. Gaze Aversion during Social Style Interactions in Autism Spectrum Disorder and Williams Syndrome

    Science.gov (United States)

    Doherty-Sneddon, Gwyneth; Whittle, Lisa; Riby, Deborah M.

    2013-01-01

    During face-to-face interactions typically developing individuals use gaze aversion (GA), away from their questioner, when thinking. GA is also used when individuals with autism (ASD) and Williams syndrome (WS) are thinking during question-answer interactions. We investigated GA strategies during face-to-face social style interactions with…

  8. Attention and gaze control in picture naming, word reading, and word categorizing

    NARCIS (Netherlands)

    Roelofs, A.P.A.

    2007-01-01

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture–word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment

  9. Exploring associations between gaze patterns and putative human mirror neuron system activity.

    Science.gov (United States)

    Donaldson, Peter H; Gurvich, Caroline; Fielding, Joanne; Enticott, Peter G

    2015-01-01

    The human mirror neuron system (MNS) is hypothesized to be crucial to social cognition. Given that key MNS-input regions such as the superior temporal sulcus are involved in biological motion processing, and mirror neuron activity in monkeys has been shown to vary with visual attention, aberrant MNS function may be partly attributable to atypical visual input. To examine the relationship between gaze pattern and interpersonal motor resonance (IMR; an index of putative MNS activity), healthy right-handed participants aged 18-40 (n = 26) viewed videos of transitive grasping actions or static hands, whilst the left primary motor cortex received transcranial magnetic stimulation. Motor-evoked potentials recorded in contralateral hand muscles were used to determine IMR. Participants also underwent eyetracking analysis to assess gaze patterns whilst viewing the same videos. No relationship was observed between predictive gaze and IMR. However, IMR was positively associated with fixation counts in areas of biological motion in the videos, and negatively associated with object areas. These findings are discussed with reference to visual influences on the MNS, and the possibility that MNS atypicalities might be influenced by visual processes such as aberrant gaze pattern.
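
The association reported above rests on counting fixations that land inside predefined regions of interest (areas of biological motion versus object areas). A minimal sketch of that ROI-counting step, assuming axis-aligned rectangular ROIs in screen coordinates (the labels and geometry are illustrative, not taken from the study):

```python
def roi_fixation_counts(fixations, rois):
    """Count fixations falling inside each named rectangular ROI.

    `fixations`: list of (x, y) fixation centroids in screen pixels.
    `rois`: mapping from a label (e.g. 'biological_motion', 'object')
    to an axis-aligned rectangle (xmin, ymin, xmax, ymax).
    """
    counts = {name: 0 for name in rois}
    for x, y in fixations:
        for name, (x0, y0, x1, y1) in rois.items():
            # A fixation on a rectangle boundary counts as inside it.
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    return counts
```

Per-participant counts of this kind could then be related to the MEP-derived resonance index, as in the positive and negative associations the authors report.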

  10. Exploring associations between gaze patterns and putative human mirror neuron system activity

    Directory of Open Access Journals (Sweden)

    Peter Hugh Donaldson

    2015-07-01

    Full Text Available The human mirror neuron system (MNS) is hypothesised to be crucial to social cognition. Given that key MNS-input regions such as the superior temporal sulcus are involved in biological motion processing, and mirror neuron activity in monkeys has been shown to vary with visual attention, aberrant MNS function may be partly attributable to atypical visual input. To examine the relationship between gaze pattern and interpersonal motor resonance (IMR; an index of putative MNS activity), healthy right-handed participants aged 18-40 (n = 26) viewed videos of transitive grasping actions or static hands, whilst the left primary motor cortex received transcranial magnetic stimulation (TMS). Motor-evoked potentials (MEPs) recorded in contralateral hand muscles were used to determine IMR. Participants also underwent eyetracking analysis to assess gaze patterns whilst viewing the same videos. No relationship was observed between predictive gaze (PG) and IMR. However, IMR was positively associated with fixation counts in areas of biological motion in the videos, and negatively associated with object areas. These findings are discussed with reference to visual influences on the MNS, and the possibility that MNS atypicalities might be influenced by visual processes such as aberrant gaze pattern.

  11. Looking for Action: Talk and Gaze Home Position in the Airline Cockpit

    Science.gov (United States)

    Nevile, Maurice

    2010-01-01

    This paper considers the embodied nature of discourse for a professional work setting. It examines language in interaction in the airline cockpit, and specifically how shifts in pilots' eye gaze direction can indicate the action of talk, that is, what talk is doing and its relative contribution to work-in-progress. Looking towards the other…

  12. The effect of arousal and eye gaze direction on trust evaluations of stranger's faces: A potential pathway to paranoid thinking.

    Science.gov (United States)

    Abbott, Jennie; Middlemiss, Megan; Bruce, Vicki; Smailes, David; Dudley, Robert

    2018-09-01

    When asked to evaluate faces of strangers, people with paranoia show a tendency to rate others as less trustworthy. The present study investigated the impact of arousal on this interpersonal bias, and whether this bias was specific to evaluations of trust or additionally affected other trait judgements. The study also examined the impact of eye gaze direction, as direct eye gaze has been shown to heighten arousal. In two experiments, non-clinical participants completed face rating tasks before and after either an arousal manipulation or control manipulation. Experiment one examined the effects of heightened arousal on judgements of trustworthiness. Experiment two examined the specificity of the bias, and the impact of gaze direction. Experiment one indicated that the arousal manipulation led to lower trustworthiness ratings. Experiment two showed that heightened arousal reduced trust evaluations of trustworthy faces, particularly trustworthy faces with averted gaze. The control group rated trustworthy faces with direct gaze as more trustworthy post-manipulation. There was some evidence that attractiveness ratings were affected similarly to the trust judgements, whereas judgements of intelligence were not affected by higher arousal. In both studies, participants reported low levels of arousal even after the manipulation and the use of a non-clinical sample limits the generalisability to clinical samples. There is a complex interplay between arousal, evaluations of trustworthiness and gaze direction. Heightened arousal influences judgements of trustworthiness, but within the context of face type and gaze direction. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Gaze-based assistive technology in daily activities in children with severe physical impairments-An intervention study.

    Science.gov (United States)

    Borgestig, Maria; Sandqvist, Jan; Ahlsten, Gunnar; Falkmer, Torbjörn; Hemmingsson, Helena

    2017-04-01

    The aims were to establish the impact of a gaze-based assistive technology (AT) intervention on activity repertoire, autonomous use, and goal attainment in children with severe physical impairments, and to examine parents' satisfaction with the gaze-based AT and with services related to the gaze-based AT intervention. The design was a non-experimental multiple case study with a before, after, and follow-up design. Ten children with severe physical impairments without speaking ability (aged 1-15 years) participated in gaze-based AT intervention for 9-10 months, during which period the gaze-based AT was implemented in daily activities. Repertoire of computer activities increased for seven children. All children had sustained usage of gaze-based AT in daily activities at follow-up, all had attained goals, and parents' satisfaction with the AT and with services was high. The gaze-based AT intervention was effective in guiding parents and teachers to continue supporting the children to perform activities with the AT after the intervention program.

  14. Use of a Remote Eye-Tracker for the Analysis of Gaze during Treadmill Walking and Visual Stimuli Exposition

    Directory of Open Access Journals (Sweden)

    V. Serchi

    2016-01-01

    Full Text Available The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, the point of gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study assesses the usability limits and measurement reliability of a remote eye-tracker during treadmill walking while visual stimuli are projected. During treadmill walking, the head remained within the remote eye-tracker workspace. Generally, the quality of the point of gaze measurements declined as the distance from the remote eye-tracker increased and data loss occurred for large gaze angles. The stimulus location (a dot-target did not influence the point of gaze accuracy, precision, and trackability during both standing and walking. Similar results were obtained when the dot-target was replaced by a static or moving 2D target and “region of interest” analysis was applied. These findings foster the feasibility of the use of a remote eye-tracker for the analysis of gaze during treadmill walking in virtual reality environments.
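
The accuracy, precision, and trackability figures discussed above have conventional operational definitions in remote eye-tracking work. A minimal sketch of how they can be computed for a block of samples aimed at a known dot-target (function and variable names are illustrative; the study's exact definitions may differ):

```python
import math

def gaze_quality(samples, target):
    """Accuracy, precision, and trackability of point-of-gaze data.

    `samples`: list of (x, y) gaze coordinates, with None for samples
    the tracker lost.  `target`: true (x, y) of the dot-target.
    Accuracy     = mean offset of valid samples from the target.
    Precision    = RMS of sample-to-sample dispersion.
    Trackability = fraction of samples that were valid.
    """
    valid = [s for s in samples if s is not None]
    trackability = len(valid) / len(samples)
    # Mean Euclidean offset between each valid gaze sample and the target.
    accuracy = sum(math.dist(s, target) for s in valid) / len(valid)
    # RMS of distances between successive valid samples.
    steps = [math.dist(a, b) for a, b in zip(valid, valid[1:])]
    precision = math.sqrt(sum(d * d for d in steps) / len(steps))
    return accuracy, precision, trackability
```

Data loss at large gaze angles, as reported above, would show up here as a drop in trackability rather than in accuracy or precision.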

  15. Objective eye-gaze behaviour during face-to-face communication with proficient alaryngeal speakers: a preliminary study.

    Science.gov (United States)

    Evitts, Paul; Gallop, Robert

    2011-01-01

    There is a large body of research demonstrating the impact of visual information on speaker intelligibility in both normal and disordered speaker populations. However, there is minimal information on which specific visual features listeners find salient during conversational discourse. This study investigated listeners' eye-gaze behaviour during face-to-face conversation with normal, laryngeal, and proficient alaryngeal speakers. Sixty participants individually participated in a 10-min conversation with one of four speakers (typical laryngeal, tracheoesophageal, oesophageal, electrolaryngeal; 15 participants randomly assigned to one mode of speech). All speakers were > 85% intelligible and were judged to be 'proficient' by two certified speech-language pathologists. Participants were fitted with a head-mounted eye-gaze tracking device (Mobile Eye, ASL) that calculated the region of interest and mean duration of eye-gaze. Self-reported gaze behaviour was also obtained following the conversation using a 10 cm visual analogue scale. While listening, participants viewed the lower facial region of the oesophageal speaker more than the normal or tracheoesophageal speaker. Results of non-hierarchical cluster analyses showed that while listening, the pattern of eye-gaze was predominantly directed at the lower face of the oesophageal and electrolaryngeal speaker and more evenly dispersed among the background, lower face, and eyes of the normal and tracheoesophageal speakers. Finally, results show a low correlation between self-reported eye-gaze behaviour and objective regions of interest data. Overall, results suggest similar eye-gaze behaviour when healthy controls converse with normal and tracheoesophageal speakers and that participants had significantly different eye-gaze patterns when conversing with an oesophageal speaker. Results are discussed in terms of existing eye-gaze data and its potential implications on auditory-visual speech perception.
© 2011 Royal College of Speech

  16. Cheating experience: Guiding novices to adopt the gaze strategies of experts expedites the learning of technical laparoscopic skills.

    Science.gov (United States)

    Vine, Samuel J; Masters, Rich S W; McGrath, John S; Bright, Elizabeth; Wilson, Mark R

    2012-07-01

    Previous research has demonstrated that trainees can be taught (via explicit verbal instruction) to adopt the gaze strategies of expert laparoscopic surgeons. The current study examined a software template designed to guide trainees to adopt expert gaze control strategies passively, without being provided with explicit instructions. We examined 27 novices (who had no laparoscopic training) performing 50 learning trials of a laparoscopic training task in either a discovery-learning (DL) group or a gaze-training (GT) group while wearing an eye tracker to assess gaze control. The GT group performed trials using a surgery-training template (STT); software that is designed to guide expert-like gaze strategies by highlighting the key locations on the monitor screen. The DL group had a normal, unrestricted view of the scene on the monitor screen. Both groups then took part in a nondelayed retention test (to assess learning) and a stress test (under social evaluative threat) with a normal view of the scene. The STT was successful in guiding the GT group to adopt an expert-like gaze strategy (displaying more target-locking fixations). Adopting expert gaze strategies led to an improvement in performance for the GT group, which outperformed the DL group in both retention and stress tests (faster completion time and fewer errors). The STT is a practical and cost-effective training interface that automatically promotes an optimal gaze strategy. Trainees who are trained to adopt the efficient target-locking gaze strategy of experts gain a performance advantage over trainees left to discover their own strategies for task completion. Copyright © 2012 Mosby, Inc. All rights reserved.

  17. Eye Gaze Patterns Associated with Aggressive Tendencies in Adolescence

    NARCIS (Netherlands)

    Laue, Cameron; Griffey, Marcus; Lin, Ping I.; Wallace, Kirk; van der Schoot, Menno; Horn, Paul; Pedapati, Ernest; Barzman, Drew

    2018-01-01

    Social information processing theory hypothesizes that aggressive children pay more attention to cues of hostility and threat in others’ behavior, consequently leading to over-interpretation of others’ behavior as hostile. While there is abundant evidence of aggressive children demonstrating hostile

  18. Keeping Your Eye on the Rail: Gaze Behaviour of Horse Riders Approaching a Jump

    Science.gov (United States)

    Hall, Carol; Varley, Ian; Kay, Rachel; Crundall, David

    2014-01-01

    The gaze behaviour of riders during their approach to a jump was investigated using a mobile eye tracking device (ASL Mobile Eye). The timing, frequency and duration of fixations on the jump and the percentage of time when their point of gaze (POG) was located elsewhere were assessed. Fixations were identified when the POG remained on the jump for 100 ms or longer. The jumping skill of experienced but non-elite riders (n = 10) was assessed by means of a questionnaire. Their gaze behaviour was recorded as they completed a course of three identical jumps five times. The speed and timing of the approach were calculated. Gaze behaviour throughout the overall approach and during the last five strides before take-off was assessed following frame-by-frame analyses. Differences in relation to both round and jump number were found. Significantly longer was spent fixated on the jump during round 2, both during the overall approach and during the last five strides. Jump 1 was fixated on significantly earlier and more frequently than jump 2 or 3. More errors were made with jump 3 than with jump 1 (p = 0.01), but there was no difference in errors made between rounds. Although no significant correlations between gaze behaviour and skill scores were found, the riders who scored higher for jumping skill tended to fixate on the jump earlier (p = 0.07), when the horse was further from the jump (p = 0.09), and their first fixation on the jump was of a longer duration (p = 0.06). Trials with elite riders are now needed to further identify sport-specific visual skills and their relationship with performance. Visual training should be included in preparation for equestrian sports participation, the positive impact of which has been clearly demonstrated in other sports. PMID:24846055

  19. The Disturbance of Gaze in Progressive Supranuclear Palsy (PSP): Implications for Pathogenesis

    Directory of Open Access Journals (Sweden)

    Athena L Chen

    2010-12-01

    Progressive supranuclear palsy (PSP) is a disease of later life that is currently regarded as a form of neurodegenerative tauopathy. Disturbance of gaze is a cardinal clinical feature of PSP that often helps clinicians to establish the diagnosis. Since the neurobiology of gaze control is now well understood, it is possible to use eye movements as investigational tools to understand aspects of the pathogenesis of PSP. In this review, we summarize each disorder of gaze control that occurs in PSP, drawing on our studies of fifty patients, and on reports from other laboratories that have measured the disturbances of eye movements. When these gaze disorders are approached by considering each functional class of eye movements and its neurobiological basis, a distinct pattern of eye movement deficits emerges that provides insight into the pathogenesis of PSP. Although some aspects of all forms of eye movements are affected in PSP, the predominant defects concern vertical saccades (slow and hypometric, both up and down), impaired vergence, and inability to modulate the linear vestibulo-ocular reflex appropriately for viewing distance. These vertical and vergence eye movements habitually work in concert to enable visuomotor skills that are important during locomotion with the hands free. Taken with the prominent early feature of falls, these findings suggest that PSP tauopathy impairs a recently-evolved neural system concerned with bipedal locomotion in an erect posture and frequent gaze shifts between the distant environment and proximate hands. This approach provides a conceptual framework that can be used to address the nosological challenge posed by overlapping clinical and neuropathological features of neurodegenerative tauopathies.

  20. Gaze strategies during visually-guided versus memory-guided grasping.

    Science.gov (United States)

    Prime, Steven L; Marotta, Jonathan J

    2013-03-01

    Vision plays a crucial role in guiding motor actions. But sometimes we cannot use vision and must rely on our memory to guide action, e.g. remembering where we placed our eyeglasses on the bedside table when reaching for them with the lights off. Recent studies show subjects look towards the index finger grasp position during visually-guided precision grasping. But where do people look during memory-guided grasping? Here, we explored the gaze behaviour of subjects as they grasped a centrally placed symmetrical block under open- and closed-loop conditions. In Experiment 1, subjects performed grasps in either a visually-guided task or a memory-guided task. The results show that during visually-guided grasping, gaze was first directed towards the index finger's grasp point on the block, suggesting gaze targets future grasp points during the planning of the grasp. Gaze during memory-guided grasping was aimed closer to the block's centre of mass from block presentation to the completion of the grasp. In Experiment 2, subjects performed an 'immediate grasping' task in which vision of the block was removed immediately at the onset of the reach. Similar to the visually-guided results from Experiment 1, gaze was primarily directed towards the index finger location. These results support the two-stream theory of vision in that motor planning with visual feedback at the onset of the movement is driven primarily by real-time visuomotor computations of the dorsal stream, whereas grasping remembered objects without visual feedback is driven primarily by perceptual memory representations mediated by the ventral stream.

  1. A novel attention training paradigm based on operant conditioning of eye gaze: Preliminary findings.

    Science.gov (United States)

    Price, Rebecca B; Greven, Inez M; Siegle, Greg J; Koster, Ernst H W; De Raedt, Rudi

    2016-02-01

    Inability to engage with positive stimuli is a widespread problem associated with negative mood states across many conditions, from low self-esteem to anhedonic depression. Though attention retraining procedures have shown promise as interventions in some clinical populations, novel procedures may be necessary to reliably attenuate chronic negative mood in refractory clinical populations (e.g., clinical depression) through, for example, more active, adaptive learning processes. In addition, a focus on individual difference variables predicting intervention outcome may improve the ability to provide such targeted interventions efficiently. To provide preliminary proof-of-principle, we tested a novel paradigm using operant conditioning to train eye gaze patterns toward happy faces. Thirty-two healthy undergraduates were randomized to receive operant conditioning of eye gaze toward happy faces (train-happy) or neutral faces (train-neutral). At the group level, the train-happy condition attenuated sad mood increases following a stressful task, in comparison to train-neutral. In an individual differences analysis, greater physiological reactivity (pupil dilation) in response to happy faces (during an emotional face-search task at baseline) predicted decreased mood reactivity after stress. These preliminary results suggest that operant conditioning of eye gaze toward happy faces buffers against stress-induced effects on mood, particularly in individuals who show sufficient baseline neural engagement with happy faces. Eye gaze patterns to emotional face arrays may have a causal relationship with mood reactivity. Personalized medicine research in depression may benefit from novel cognitive training paradigms that shape eye gaze patterns through feedback. Baseline neural function (pupil dilation) may be a key mechanism, aiding in iterative refinement of this approach. (c) 2016 APA, all rights reserved.

  2. Studying the influence of race on the gaze cueing effect using eye tracking method

    Directory of Open Access Journals (Sweden)

    Galina Ya. Menshikova

    2017-06-01

    The gaze direction of another person is an important social cue, allowing us to orient quickly in social interactions. The effect of short-term redirection of visual attention to the same object that other people are looking at is known as the gaze cueing effect. There is evidence that the strength of this effect depends on many social factors, such as the trust in a partner, her/his gender, social attitudes, etc. In our study we investigated the influence of the race of face stimuli on the strength of the gaze cueing effect. Using a modified Posner cueing task, an attentional shift was assessed in a scene where avatar faces of different races were used as distractors. Participants were instructed to fixate a black dot in the centre of the screen until it changed colour, and then to make a rightward or leftward saccade as quickly as possible, depending on the colour of the fixation point. A male distractor face was shown in the centre of the screen simultaneously with the fixation point. The gaze direction of the distractor face changed from straight ahead to rightward or leftward at the moment the colour of the fixation point changed. It could be either congruent or incongruent with the saccade direction. We used face distractors of three race categories: Caucasian (own-race faces), Asian and African (other-race faces). Twenty-five Caucasian participants took part in our study. The results showed that the race of the face distractors influenced the strength of the gaze cueing effect, which was manifested in changes in the latency and velocity of the ensuing saccades.

  3. Context-sensitivity in Conversation. Eye gaze and the German Repair Initiator ‘bitte?’ (´pardon?´)

    DEFF Research Database (Denmark)

    Egbert, Maria

    1996-01-01

    In addition, repair is sensitive to certain characteristics of social situations. The selection of a particular repair initiator, German bitte? 'pardon?', indexes that there is no mutual gaze between interlocutors; i.e., there is no common course of action. The selection of bitte? not only initiates repair; it also spurs establishment of mutual gaze, and thus displays that there is attention to a common focus. (Conversation analysis, context, cross-linguistic analysis, repair, gaze, telephone conversation, co-present interaction, grammar and interaction)

  4. Gaze stabilization in chronic vestibular-loss and in cerebellar ataxia: interactions of feedforward and sensory feedback mechanisms.

    Science.gov (United States)

    Sağlam, M; Lehnen, N

    2014-01-01

    During gaze shifts, humans can use visual, vestibular, and proprioceptive feedback, as well as feedforward mechanisms, for stabilization against active and passive head movements. The contributions of feedforward and sensory feedback control, and the role of the cerebellum, are still under debate. To quantify these contributions, we increased the head moment of inertia in three groups (ten healthy subjects, five chronic vestibular-loss patients and nine cerebellar-ataxia patients) while they performed large gaze shifts to flashed targets in darkness. This induces undesired head oscillations. Consequently, both active (desired) and passive (undesired) head movements had to be compensated for to stabilize gaze. All groups compensated for active and passive head movements, vestibular-loss patients less than the other groups. Feedforward mechanisms therefore substantially contribute to gaze stabilization. Proprioception alone is not sufficient (gain 0.2). Stabilization against active and passive head movements was not impaired in our cerebellar ataxia patients.

  5. PyGaze: an open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments.

    Science.gov (United States)

    Dalmaijer, Edwin S; Mathôt, Sebastiaan; Van der Stigchel, Stefan

    2014-12-01

    The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility. PyGaze can be used for visual and auditory stimulus presentation; for response collection via keyboard, mouse, joystick, and other external hardware; and for the online detection of eye movements using a custom algorithm. A wide range of eyetrackers of different brands (EyeLink, SMI, and Tobii systems) are supported. The novelty of PyGaze lies in providing an easy-to-use layer on top of the many different software libraries that are required for implementing eyetracking experiments. Essentially, PyGaze is a software bridge for eyetracking research.
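    As a loose illustration of the kind of online eye-movement detection the toolbox performs, a bare-bones velocity-threshold saccade detector can be written in a few lines of Python. This is not PyGaze's actual algorithm; the function name, threshold value, and sample format below are invented for the sketch.

```python
# Illustrative velocity-threshold saccade detector (hypothetical sketch;
# PyGaze's built-in event detection is not reproduced here).

def detect_saccades(samples, hz=1000.0, vel_threshold=40.0):
    """samples: list of (x, y) gaze positions in pixels, sampled at `hz`.
    Returns (start_index, end_index) intervals where point-to-point
    velocity exceeds `vel_threshold` pixels per second."""
    saccades = []
    start = None
    for i in range(1, len(samples)):
        dx = samples[i][0] - samples[i - 1][0]
        dy = samples[i][1] - samples[i - 1][1]
        velocity = (dx * dx + dy * dy) ** 0.5 * hz  # pixels per second
        if velocity > vel_threshold:
            if start is None:
                start = i - 1                 # saccade onset
        elif start is not None:
            saccades.append((start, i - 1))   # saccade offset
            start = None
    if start is not None:
        saccades.append((start, len(samples) - 1))
    return saccades

# Fixation at (100, 100), a rapid jump, then fixation at (300, 300).
trace = [(100.0, 100.0)] * 5 + [(200.0, 200.0)] + [(300.0, 300.0)] * 5
print(detect_saccades(trace, hz=1000.0))  # prints [(4, 6)]
```

    In a real experiment the same loop would run over samples streamed from the tracker rather than a prerecorded list.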

  6. I spy with my little eye: Analysis of airline pilots' gaze patterns in a manual instrument flight scenario.

    Science.gov (United States)

    Haslbeck, Andreas; Zhang, Bo

    2017-09-01

    The aim of this study was to analyze pilots' visual scanning in a manual approach and landing scenario. Manual flying skills suffer from increasing use of automation. In addition, predominantly long-haul pilots with only a few opportunities to practice these skills experience this decline. Airline pilots representing different levels of practice (short-haul vs. long-haul) had to perform a manual raw data precision approach while their visual scanning was recorded by an eye-tracking device. The analysis of gaze patterns, which are based on predominant saccades, revealed one main group of saccades among long-haul pilots. In contrast, short-haul pilots showed more balanced scanning using two different groups of saccades. Short-haul pilots generally demonstrated better manual flight performance and within this group, one type of scan pattern was found to facilitate the manual landing task more. Long-haul pilots tend to utilize visual scanning behaviors that are inappropriate for the manual ILS landing task. This lack of skills needs to be addressed by providing specific training and more practice. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Spotting expertise in the eyes: billiards knowledge as revealed by gaze shifts in a dynamic visual prediction task.

    Science.gov (United States)

    Crespi, Sofia; Robino, Carlo; Silva, Ottavia; de'Sperati, Claudio

    2012-10-31

    In sports, as in other activities and knowledge domains, expertise is a highly valuable asset. We assessed whether expertise in billiards is associated with specific patterns of eye movements in a visual prediction task. Professional players and novices were presented a number of simplified billiard shots on a computer screen, previously filmed in a real set, with the last part of the ball trajectory occluded. They had to predict whether or not the ball would have hit the central skittle. Experts performed better than novices, in terms of both accuracy and response time. By analyzing eye movements, we found that during occlusion, experts rarely extrapolated with their gaze the occluded part of the ball trajectory (a behavior that was widespread among novices), even when the unseen path was long and had two bounces interposed. Rather, they looked selectively at specific diagnostic points on the cushions along the ball's visible trajectory, in accordance with a formal metrical system used by professional players to calculate the shot coordinates. Thus, the eye movements of expert observers contained a clear signature of billiard expertise and documented empirically a strategy upgrade in visual problem solving from dynamic, analog simulation in imagery to more efficient rule-based, conceptual knowledge.

  8. Eye Gaze Controlled Projected Display in Automotive and Military Aviation Environments

    Directory of Open Access Journals (Sweden)

    Gowdham Prabhakar

    2018-01-01

    This paper presents an eye gaze controlled projected display that can be used in aviation and automotive environments as a head-up display. We have presented details of the hardware and software used in developing the display and an algorithm to improve performance of point-and-selection tasks in an eye gaze controlled graphical user interface. The algorithm does not require changing the layout of an interface; rather, it puts a set of hotspots on clickable targets using a Simulated Annealing algorithm. Four user studies involving driving and flight simulators have found that the proposed projected display can improve driving and flying performance and significantly reduce pointing and selection times for secondary mission control tasks compared to existing interaction systems.
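    The abstract does not spell out the cost function or neighbourhood used for hotspot placement, but the general shape of a Simulated Annealing loop can be sketched as follows. The targets, gaze estimate, and cost function here are purely hypothetical illustration values, not the paper's method.

```python
import math
import random

# Toy simulated-annealing loop in the spirit of hotspot placement.
# Targets, gaze point, and cost are made up for the example.
random.seed(42)

targets = [(100, 100), (400, 120), (250, 300)]   # clickable target centres
gaze = (260, 180)                                 # hypothetical gaze estimate

def cost(hotspots):
    # Sum of squared distances from each hotspot to the gaze estimate,
    # with a penalty for drifting away from the hotspot's own target.
    total = 0.0
    for (hx, hy), (tx, ty) in zip(hotspots, targets):
        total += (hx - gaze[0]) ** 2 + (hy - gaze[1]) ** 2
        total += 4.0 * ((hx - tx) ** 2 + (hy - ty) ** 2)
    return total

def anneal(initial, steps=2000, t0=100.0):
    state = [list(p) for p in initial]
    best, best_cost = state, cost(state)
    current_cost = best_cost
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9   # cooling schedule
        candidate = [[x + random.uniform(-5, 5), y + random.uniform(-5, 5)]
                     for x, y in state]
        c = cost(candidate)
        # Accept improvements always; accept worse moves with a
        # temperature-dependent probability (Metropolis criterion).
        if c < current_cost or random.random() < math.exp((current_cost - c) / temp):
            state, current_cost = candidate, c
            if c < best_cost:
                best, best_cost = candidate, c
    return best, best_cost

best, best_cost = anneal(targets)
print(best_cost <= cost(targets))  # prints True
```

    The interesting design question, left open by the abstract, is what the real cost trades off: hotspot visibility, target size, and expected gaze travel are all plausible terms.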

  9. Speaking through the flesh: Affective encounters, gazes and desire in Harlequin romances

    Directory of Open Access Journals (Sweden)

    Anja Hirdman

    2016-12-01

    In the wake of the affective turn, emotion and embodiment have emerged as key terms in cultural studies in order to acknowledge the affective dimension of media texts (Gibbs, 2002; Gregg & Seigworth, 2010). Drawing from the cross-disciplinary field of affect theory, the article examines the writing of desire in Harlequin romances through the delineation of gendered encounters. Against the backdrop of earlier feminist critiques of romance fiction, it argues that Harlequin's intense focus on corporeal sensations and gazes encompasses a looking relationship that differs significantly from the visual mediation of gender and desire. With its use of an extended literary transvestism, a double narrator perspective, and the appropriation of a female gaze, Harlequin offers readers an affective imaginary space in which the significance of the gendered body is re-made, re-versed, and the male body is stripped of its unique position.

  10. Adaptive gaze stabilization through cerebellar internal models in a humanoid robot

    DEFF Research Database (Denmark)

    Vannucci, Lorenzo; Tolu, Silvia; Falotico, Egidio

    2016-01-01

    Two main classes of reflexes relying on the vestibular system are involved in the stabilization of the human gaze: the vestibulocollic reflex (VCR), which stabilizes the head in space, and the vestibulo-ocular reflex (VOR), which stabilizes the visual axis to minimize retinal image motion. The VOR works in conjunction with the opto-kinetic reflex (OKR), which is a visual feedback mechanism for moving the eye at the same speed as the observed scene. Together they keep the image stationary on the retina. In this work we present the first complete model of gaze stabilization based on the coordination of VCR, VOR and OKR. The model, inspired by neuroscientific cerebellar theories, is provided with learning and adaptation capabilities based on internal models. Tests on a simulated humanoid platform confirm the effectiveness of our approach.

  11. Electronic medical records in diabetes consultations: participants' gaze as an interactional resource.

    Science.gov (United States)

    Rhodes, Penny; Small, Neil; Rowley, Emma; Langdon, Mark; Ariss, Steven; Wright, John

    2008-09-01

    Two routine consultations in primary care diabetes clinics are compared using extracts from video recordings of interactions between nurses and patients. The consultations were chosen to present different styles of interaction, in which the nurse's gaze was either primarily toward the computer screen or directed more toward the patient. Using conversation analysis, the ways in which nurses shift both gaze and body orientation between the computer screen and patient to influence the style, pace, content, and structure of the consultation were investigated. By examining the effects of different levels of engagement between the electronic medical record and the embodied patient in the consultation room, we argue for the need to consider the contingent nature of the interface of technology and the person in the consultation. Policy initiatives designed to deliver what is considered best-evidenced practice are modified in the micro context of the interactions of the consultation.

  12. 3D gaze tracking system for NVidia 3D Vision®.

    Science.gov (United States)

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

    Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking health issues into account, understanding how users gaze in 3D directions in virtual space is currently an important research topic. In this paper, we report the development of a novel 3D gaze tracking system for Nvidia 3D Vision® for use with desktop stereoscopic displays. We suggest an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
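    The paper's optimized geometric method is not detailed in the abstract, but a textbook baseline for binocular 3D gaze estimation is to intersect the two eyes' gaze rays approximately by taking the midpoint of their closest approach. The sketch below uses made-up eye positions and directions (centimetres) to illustrate that baseline only.

```python
# Midpoint-of-closest-approach between two gaze rays: a standard geometric
# baseline for binocular 3D gaze estimation (not the paper's exact method).

def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def dot(a, b):
    return sum(a[i] * b[i] for i in range(3))

def gaze_point_3d(left_eye, left_dir, right_eye, right_dir):
    """Midpoint of the shortest segment between two gaze rays."""
    w0 = sub(left_eye, right_eye)
    a, b, c = dot(left_dir, left_dir), dot(left_dir, right_dir), dot(right_dir, right_dir)
    d, e = dot(left_dir, w0), dot(right_dir, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:       # rays (nearly) parallel: fix s = 0
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    p_left = [left_eye[i] + s * left_dir[i] for i in range(3)]
    p_right = [right_eye[i] + t * right_dir[i] for i in range(3)]
    return [(p_left[i] + p_right[i]) / 2.0 for i in range(3)]

# Eyes 6 cm apart, both looking at a point 40 cm ahead on the midline.
target = [0.0, 0.0, 40.0]
left_eye, right_eye = [-3.0, 0.0, 0.0], [3.0, 0.0, 0.0]
left_dir = sub(target, left_eye)
right_dir = sub(target, right_eye)
print(gaze_point_3d(left_eye, left_dir, right_eye, right_dir))  # prints [0.0, 0.0, 40.0]
```

    With noisy per-eye gaze directions the two rays rarely intersect exactly, which is why the midpoint of the shortest connecting segment is used rather than a true intersection.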

  13. Postural sway and gaze can track the complex motion of a visual target.

    Directory of Open Access Journals (Sweden)

    Vassilia Hatzitaki

    Variability is an inherent and important feature of human movement. This variability has a form exhibiting a chaotic structure. Visual feedback training using regular, predictive visual target motions does not take into account this essential characteristic of human movement, and may result in task-specific learning and loss of visuo-motor adaptability. In this study, we asked how well healthy young adults can track visual target cues of varying degrees of complexity during whole-body swaying in the Anterior-Posterior (AP) and Medio-Lateral (ML) directions. Participants were asked to track three visual target motions: a complex (Lorenz attractor), a noise (brown) and a periodic (sine) moving target while receiving online visual feedback about their performance. Postural sway, gaze and target motion were synchronously recorded and the degree of force-target and gaze-target coupling was quantified using spectral coherence and cross-approximate entropy. Analysis revealed that both force-target and gaze-target coupling were sensitive to the complexity of the visual stimulus motions. Postural sway showed a higher degree of coherence with the Lorenz attractor than with the brown noise or sinusoidal stimulus motion. Similarly, gaze was more synchronous with the Lorenz attractor than with the brown noise and sinusoidal stimulus motion. These results were similar regardless of whether tracking was performed in the AP or ML direction. Based on the theoretical model of optimal movement variability, tracking of a complex signal may provide a better stimulus to improve visuo-motor adaptation and learning in postural control.

  14. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how to best relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  15. Influences of dwell time and cursor control on the performance in gaze driven typing

    OpenAIRE

    Helmert, Jens R.; Pannasch, Sebastian; Velichkovsky, Boris M.

    2008-01-01

    In gaze controlled computer interfaces the dwell time is often used as the selection criterion. But this solution comes with several problems, especially in the temporal domain: eye movement studies on scene perception have demonstrated that fixations of different durations serve different purposes and should therefore be differentiated. The use of dwell time for selection implies the need to distinguish intentional selections from merely perceptual processes, described as the Midas touch ...
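    The dwell-time selection criterion discussed above can be sketched as a simple loop over gaze samples. The radius, dwell duration, and sampling interval below are arbitrary illustration values, not those used in the cited study.

```python
# Minimal dwell-time selection loop (illustrative parameter values).

def dwell_select(gaze_samples, target, radius=50.0, dwell_ms=500, sample_ms=20):
    """Return the sample index at which a dwell selection fires, or None.
    A selection fires once gaze has stayed within `radius` pixels of
    `target` for at least `dwell_ms` milliseconds."""
    needed = dwell_ms // sample_ms   # consecutive samples required
    run = 0
    for i, (x, y) in enumerate(gaze_samples):
        if ((x - target[0]) ** 2 + (y - target[1]) ** 2) ** 0.5 <= radius:
            run += 1
            if run >= needed:
                return i             # selection event
        else:
            run = 0                  # gaze left the target: reset the timer
    return None

# 30 samples (600 ms) on the target: selection fires after 500 ms of dwell.
on_target = [(400.0, 300.0)] * 30
print(dwell_select(on_target, (400, 300)))  # prints 24
```

    The Midas-touch problem is visible directly in this loop: any fixation long enough to exceed `dwell_ms` triggers a selection, whether the user intended one or was merely looking.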

  16. Strange-face Illusions During Interpersonal-Gazing and Personality Differences of Spirituality.

    Science.gov (United States)

    Caputo, Giovanni B

    Strange-face illusions are produced when two individuals gaze into each other's eyes in low illumination for more than a few minutes. Usually, the members of the dyad perceive numinous apparitions, such as deformations of the other's face and the perception of a stranger or a monster in place of the other, and feel a short-lasting dissociation. In the present experiment, the influence of the spirituality personality trait on the strength and number of strange-face illusions was investigated. Thirty participants were preliminarily tested for superstition (Paranormal Belief Scale, PBS) and spirituality (Spiritual Transcendence Scale, STS); then, they were randomly assigned to 15 dyads. Dyads performed the intersubjective gazing task for 10 minutes and, finally, strange-face illusions (measured through the Strange-Face Questionnaire, SFQ) were evaluated. The first finding was that SFQ was independent of PBS; hence, strange-face illusions during intersubjective gazing are authentically perceptual, hallucination-like phenomena, and not due to superstition. The second finding was that SFQ depended on the spiritual-universality scale of STS (a belief in the unitive nature of life; e.g., "there is a higher plane of consciousness or spirituality that binds all people") and the two variables were negatively correlated. Thus, strange-face illusions, in particular monstrous apparitions, could potentially disrupt binding among human beings. Strange-face illusions can be considered as 'projections' of the subject's unconscious into the other's face. In conclusion, intersubjective gazing at low illumination can be a tool for conscious integration of unconscious 'shadows of the Self' in order to reach completeness of the Self. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Design of a Binocular Pupil and Gaze Point Detection System Utilizing High Definition Images

    Directory of Open Access Journals (Sweden)

    Yilmaz Durna

    2017-05-01

    This study proposes a novel binocular pupil and gaze detection system utilizing a remote full high definition (full HD) camera and employing LabVIEW. LabVIEW is inherently parallel and has fewer time-consuming algorithms. Many eye tracker applications are monocular and use low-resolution cameras due to real-time image processing difficulties. We utilized the computer's direct memory access channel for rapid data transmission and processed full HD images with LabVIEW. Full HD images make it easier to determine the center coordinates and sizes of the pupil and corneal reflection. We modified the camera so that the camera sensor passed only infrared (IR) images. Glints were taken as reference points for region of interest (ROI) selection of the eye region in the face image. A morphologic filter was applied to erode noise, and a weighted average technique was used for center detection. To test system accuracy with 11 participants, we produced a visual stimulus setup to analyze each eye's movement. A nonlinear mapping function was utilized for gaze estimation. Pupil size, pupil position, glint position and gaze point coordinates were obtained with free natural head movements in our system. The system works at 2046 × 1086 resolution at 40 frames per second; at 640 × 480 pixels, an estimated 280 frames per second would be attainable. Experimental results show that the average gaze detection error for the 11 participants was 0.76° for the left eye, 0.89° for the right eye and 0.83° for the mean of the two eyes.
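    The weighted-average technique for centre detection mentioned above can be illustrated on a toy grayscale frame. The threshold and the darker-pixels-weigh-more scheme are assumptions made for this sketch, not the authors' exact pipeline (which also includes morphological filtering).

```python
# Sketch of pupil-centre estimation by intensity-weighted averaging over a
# thresholded region (illustrative threshold and weighting, toy image).

def weighted_center(image, threshold):
    """image: 2D list of grayscale values (dark pupil on bright background).
    Pixels darker than `threshold` vote for the centre, weighted by how
    far below the threshold they fall."""
    total_w = wx = wy = 0.0
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if value < threshold:
                w = threshold - value   # darker pixels weigh more
                total_w += w
                wx += w * x
                wy += w * y
    if total_w == 0:
        return None                     # no pupil-like pixels found
    return (wx / total_w, wy / total_w)

# 5x5 toy frame: bright background (200) with a dark 'pupil' around (2, 2).
frame = [[200] * 5 for _ in range(5)]
for y in (1, 2, 3):
    for x in (1, 2, 3):
        frame[y][x] = 20
frame[2][2] = 5
print(weighted_center(frame, threshold=100))  # prints (2.0, 2.0)
```

    Because every dark pixel contributes in proportion to its darkness, this estimator yields sub-pixel centre coordinates, which matters when a gaze angle of under one degree is the target accuracy.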

  18. Gaze Awareness in Agent-Based Early-Childhood Learning Application

    OpenAIRE

    Akkil , Deepak; Dey , Prasenjit; Salian , Deepshika; Rajput , Nitendra

    2017-01-01

    Use of technological devices for early childhood learning is increasing. Now, kindergarten and primary school children use interactive applications on mobile phones and tablet computers to support and complement classroom learning. With the increase in cognitive technologies, there is further potential to make such applications more engaging by understanding the user context. In this paper, we present the Little Bear, a gaze aware pedagog...

  19. Visual perception during mirror gazing at one's own face in schizophrenia.

    Science.gov (United States)

    Caputo, Giovanni B; Ferrucci, Roberta; Bortolomasi, Marco; Giacopuzzi, Mario; Priori, Alberto; Zago, Stefano

    2012-09-01

    In normal observers, gazing at one's own face in the mirror for some minutes, at a low illumination level, triggers the perception of strange faces, a new perceptual illusion that has been named 'strange-face in the mirror'. Subjects see distortions of their own faces, but often they see monsters, archetypical faces, faces of dead relatives, and of animals. We designed this study primarily to compare strange-face apparitions in response to mirror gazing in patients with schizophrenia and healthy controls. The study included 16 patients with schizophrenia and 21 healthy controls. We administered a 7-minute mirror gazing test (MGT). Before the mirror gazing session, all subjects underwent assessment with the Cardiff Anomalous Perception Scale (CAPS). When the 7-minute MGT ended, the experimenter assessed patients and controls with a specifically designed questionnaire and interviewed them, asking them to describe strange-face perceptions. Apparitions of strange faces in the mirror were significantly more intense in schizophrenic patients than in controls. All the following variables were higher in patients than in healthy controls: frequency (p<.005) and cumulative duration of apparitions (p<.009), number and types of strange faces (p<.002), self-evaluation scores on Likert-type scales of apparition strength (p<.03) and of reality of apparitions (p<.001). In schizophrenic patients, these Likert-type scales showed correlations (p<.05) with CAPS total scores. These results suggest that the increase of strange-face apparitions in schizophrenia can be produced by ego dysfunction, by body dysmorphic disorder and by misattribution of self-agency. The MGT may help in completing the standard assessment of patients with schizophrenia, independently of hallucinatory psychopathology. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. The Benslimane's Artistic Model for Females' Gaze Beauty: An Original Assessment Tool.

    Science.gov (United States)

    Benslimane, Fahd; van Harpen, Laura; Myers, Simon R; Ingallina, Fabio; Ghanem, Ali M

    2017-02-01

    The aim of this paper is to analyze the aesthetic characteristics of the human female gaze using anthropometry and to present an artistic model to represent it: "The Frame Concept." In this model, the eye fissure represents a painting, and the most peripheral shadows around it represent the frame of this painting. The narrower the frame, the more aesthetically pleasing and youthful the gaze appears. This study included a literature review of the features that make the gaze appear attractive. Photographs of models with attractive gazes were examined, and old photographs of patients were compared to recent photographs. The frame ratio was defined by anthropometric measurements of modern portraits of twenty consecutive Miss World winners. The concept was then validated for age and attractiveness across centuries by analysis of modern female photographs and works of art acknowledged for portraying beautiful young and older women in classical paintings. The frame height inversely correlated with attractiveness in modern female portrait photographs. The eye fissure frame ratio of modern idealized female portraits was similar to that of beautiful female portraits idealized by classical artists. In contrast, the eye fissure frames of classical artists' mothers' portraits were significantly wider than those of beautiful younger women. The Frame Concept is a valid artistic tool that provides an understanding of both the aesthetic and aging characteristics of the female periorbital region, enabling the practitioner to plan appropriate aesthetic interventions. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors. www.springer.com/00266.

  1. Gaze-related mimic word activates the frontal eye field and related network in the human brain: an fMRI study.

    Science.gov (United States)

    Osaka, Naoyuki; Osaka, Mariko

    2009-09-18

    This fMRI study presents new evidence that a mimic word highly suggestive of eye gaze, presented auditorily, significantly activates the frontal eye field (FEF), inferior frontal gyrus (IFG), dorsolateral premotor area (PMdr) and superior parietal lobule (SPL), connected within the frontal-parietal network. However, hearing a nonsense word that did not imply gaze under the same task did not activate these areas. We conclude that the FEF is a critical area for generating/processing an active gaze evoked by an onomatopoeic word implying gaze, which is closely associated with social skill. We suggest that the implied active gaze may depend on prefrontal-parietal interactions that modify cognitive gaze led by spatial visual attention associated with the SPL.

  2. EDITORIAL: Special section on gaze-independent brain-computer interfaces

    Science.gov (United States)

    Treder, Matthias S.

    2012-08-01

    Restoring the ability to communicate and interact with the environment in patients with severe motor disabilities is a vision that has been the main catalyst of early brain-computer interface (BCI) research. The past decade has brought a diversification of the field. BCIs have been examined as a tool for motor rehabilitation and their benefit in non-medical applications such as mental-state monitoring for improved human-computer interaction and gaming has been confirmed. At the same time, the weaknesses of some approaches have been pointed out. One of these weaknesses is gaze-dependence, that is, the requirement that the user of a BCI system voluntarily directs his or her eye gaze towards a visual target in order to efficiently operate a BCI. This not only contradicts the main doctrine of BCI research, namely that BCIs should be independent of muscle activity, but it can also limit its real-world applicability both in clinical and non-medical settings. It is only in a scenario devoid of any motor activity that a BCI solution is without alternative. Gaze-dependencies have surfaced at two different points in the BCI loop. Firstly, a BCI that relies on visual stimulation may require users to fixate on the target location. Secondly, feedback is often presented visually, which implies that the user may have to move his or her eyes in order to perceive the feedback. This special section was borne out of a BCI workshop on gaze-independent BCIs held at the 2011 Society for Applied Neurosciences (SAN) Conference and has then been extended with additional contributions from other research groups. It compiles experimental and methodological work that aims toward gaze-independent communication and mental-state monitoring. Riccio et al review the current state-of-the-art in research on gaze-independent BCIs [1]. Van der Waal et al present a tactile speller that builds on the stimulation of the fingers of the right and left hand [2]. Höhne et al analyze the ergonomic aspects

  3. Affine Transform to Reform Pixel Coordinates of EOG Signals for Controlling Robot Manipulators Using Gaze Motions

    Directory of Open Access Journals (Sweden)

    Muhammad Ilhamdi Rusydi

    2014-06-01

    Full Text Available Biosignals will play an important role in building communication between machines and humans. One of the types of biosignals that is widely used in neuroscience are electrooculography (EOG signals. An EOG has a linear relationship with eye movement displacement. Experiments were performed to construct a gaze motion tracking method indicated by robot manipulator movements. Three operators looked at 24 target points displayed on a monitor that was 40 cm in front of them. Two channels (Ch1 and Ch2 produced EOG signals for every single eye movement. These signals were converted to pixel units by using the linear relationship between EOG signals and gaze motion distances. The conversion outcomes were actual pixel locations. An affine transform method is proposed to determine the shift of actual pixels to target pixels. This method consisted of sequences of five geometry processes, which are translation-1, rotation, translation-2, shear and dilatation. The accuracy was approximately 0.86° ± 0.67° in the horizontal direction and 0.54° ± 0.34° in the vertical. This system successfully tracked the gaze motions not only in direction, but also in distance. Using this system, three operators could operate a robot manipulator to point at some targets. This result shows that the method is reliable in building communication between humans and machines using EOGs.
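    The five-step transform described above (translation-1, rotation, translation-2, shear, dilatation) can be sketched as a composition of homogeneous 2D matrices. This is an illustrative reconstruction from the abstract, not the authors' implementation; all function and parameter names are assumptions, and the calibration step that fits the parameters to the target points is omitted.

    ```python
    import math

    def mat_mul(a, b):
        """Multiply two 3x3 matrices given as row-major nested lists."""
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    def affine_chain(tx1, ty1, theta, tx2, ty2, shx, sx, sy):
        """Compose the five geometry steps named in the paper:
        translation-1 -> rotation -> translation-2 -> shear -> dilatation.
        Parameter names here are illustrative assumptions."""
        t1 = [[1, 0, tx1], [0, 1, ty1], [0, 0, 1]]
        r = [[math.cos(theta), -math.sin(theta), 0],
             [math.sin(theta), math.cos(theta), 0],
             [0, 0, 1]]
        t2 = [[1, 0, tx2], [0, 1, ty2], [0, 0, 1]]
        sh = [[1, shx, 0], [0, 1, 0], [0, 0, 1]]   # horizontal shear
        d = [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]    # dilatation (scaling)
        m = t1
        for step in (r, t2, sh, d):
            m = mat_mul(step, m)  # left-multiply: later steps act after earlier ones
        return m

    def apply(m, x, y):
        """Map an actual gaze pixel (x, y) toward a target pixel."""
        return (m[0][0] * x + m[0][1] * y + m[0][2],
                m[1][0] * x + m[1][1] * y + m[1][2])
    ```

    With all parameters at their neutral values the chain reduces to the identity, so a calibrated parameter set only encodes the residual shift between actual and target pixels.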

  4. Gaze cuing of attention in snake phobic women: the influence of facial expression

    Directory of Open Access Journals (Sweden)

    Carolina ePletti

    2015-04-01

    Full Text Available Only a few studies investigated whether animal phobics exhibit attentional biases in contexts where no phobic stimuli are present. Among these, recent studies provided evidence for a bias toward facial expressions of fear and disgust in animal phobics. Such findings may be due to the fact that these expressions could signal the presence of a phobic object in the surroundings. To test this hypothesis and further investigate attentional biases for emotional faces in animal phobics, we conducted an experiment using a gaze-cuing paradigm in which participants’ attention was driven by the task-irrelevant gaze of a centrally presented face. We employed dynamic negative facial expressions of disgust, fear and anger and found an enhanced gaze-cuing effect in snake phobics as compared to controls, irrespective of facial expression. These results provide evidence of a general hypervigilance in animal phobics in the absence of phobic stimuli, and indicate that research on specific phobias should not be limited to symptom provocation paradigms.

  5. The Tourist Gaze and ‘Family Treasure Trails’ in Museums

    DEFF Research Database (Denmark)

    Larsen, Jonas; Svabo, Connie

    2014-01-01

    Museums are largely neglected in the tourist research literature. This is even more striking given that they are arguably designed for gazing. There is little doubt that the “graying” of the Western population adds to the number and range of museums. And yet, even in adult museums, there will be children who are “dragged along.” Museums are increasingly aware of such conflicts and dilemmas. Many museums offer printed booklets with “treasure trails.” They afford a trail through the museum that forms a treasure hunt for specific objects and correct answers to questions related to the objects. This article draws attention to this overlooked, mundane technology and gives it its deserved share of the limelight. We are concerned with exploring ethnographically how trails are designed and especially used by young families in museums for gazing. The article gives insight into how children, broadly...

  6. A 2D eye gaze estimation system with low-resolution webcam images

    Directory of Open Access Journals (Sweden)

    Kim Jin

    2011-01-01

    Full Text Available Abstract In this article, a low-cost system for 2D eye gaze estimation with low-resolution webcam images is presented. Two algorithms are proposed for this purpose: one for eyeball detection with a stable approximate pupil center, and one for detecting the direction of eye movements. The eyeball is detected using the deformable angular integral search by minimum intensity (DAISMI) algorithm. The deformable template-based 2D gaze estimation (DTBGE) algorithm is employed as a noise filter for deciding stable movement decisions. While DTBGE employs binary images, DAISMI employs gray-scale images. Right- and left-eye estimates are evaluated separately. DAISMI finds the stable approximate pupil-center location by calculating the mass-center of the eyeball border vertices, which is used for initial deformable template alignment. DTBGE starts with this initial alignment and updates the template alignment with the resulting eye movements and eyeball size frame by frame. The horizontal and vertical deviation of eye movements relative to eyeball size is treated as directly proportional to the deviation of cursor movements for a given screen size and resolution. The core advantage of the system is that it does not employ the real pupil center as a reference point for gaze estimation, which makes it more robust against corneal reflection. Visual angle accuracy is used for the evaluation and benchmarking of the system. The effectiveness of the proposed system is presented and experimental results are shown.
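    The mass-center step that DAISMI uses to approximate the pupil center can be illustrated as a plain centroid over detected border vertices. This is a hedged sketch of that one step only; the angular integral search that actually finds the border vertices is the substance of the algorithm and is not reconstructed here.

    ```python
    def approximate_pupil_center(border_vertices):
        """Approximate the pupil center as the mass-center (centroid)
        of detected eyeball-border vertices, as described for DAISMI.
        `border_vertices` is a list of (x, y) pixel coordinates
        (an assumed representation)."""
        n = len(border_vertices)
        if n == 0:
            raise ValueError("no border vertices detected")
        cx = sum(x for x, _ in border_vertices) / n
        cy = sum(y for _, y in border_vertices) / n
        return cx, cy
    ```

    The appeal of the centroid is robustness: a bright corneal reflection that distorts the apparent pupil shifts individual border points, but the average over all vertices moves far less.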

  7. The head tracks and gaze predicts: how the world's best batters hit a ball.

    Directory of Open Access Journals (Sweden)

    David L Mann

    Full Text Available Hitters in fast ball-sports do not align their gaze with the ball throughout ball-flight; rather, they use predictive eye movement strategies that contribute towards their level of interceptive skill. Existing studies claim that (i) baseball and cricket batters cannot track the ball because it moves too quickly to be tracked by the eyes, and that consequently (ii) batters do not - and possibly cannot - watch the ball at the moment they hit it. However, to date no studies have examined the gaze of truly elite batters. We examined the eye and head movements of two of the world's best cricket batters and found that both claims do not apply to these batters. Remarkably, the batters coupled the rotation of their head to the movement of the ball, ensuring the ball remained in a consistent direction relative to their head. Accordingly, the ball could be followed if the batters simply moved their head and kept their eyes still. Instead of doing so, we show that the elite batters used distinctive eye movement strategies, usually relying on two predictive saccades to anticipate (i) the location of ball-bounce, and (ii) the location of bat-ball contact, ensuring they could direct their gaze towards the ball as they hit it. These specific head and eye movement strategies play important functional roles in contributing towards interceptive expertise.

  8. Gaze-independent BCI-spelling using rapid serial visual presentation (RSVP).

    Science.gov (United States)

    Acqualagna, Laura; Blankertz, Benjamin

    2013-05-01

    A brain-computer interface (BCI) speller is a communication device that patients suffering from neurodegenerative diseases can use to select symbols in a computer application. For patients unable to overtly fixate the target symbol, it is crucial to develop a speller independent of gaze shifts. In the present online study, we investigated rapid serial visual presentation (RSVP) as a paradigm for mental typewriting. We investigated the RSVP speller in three conditions, varying the stimulus onset asynchrony (SOA) and the use of color features. A vocabulary of 30 symbols was presented one by one in a pseudo-random sequence at the same location of the display. All twelve participants were able to successfully operate the RSVP speller. The results show a mean online spelling rate of 1.43 symb/min and a mean symbol selection accuracy of 94.8% in the best condition. We conclude that RSVP is a promising paradigm for BCI spelling, with performance competitive with the fastest gaze-independent spellers in the literature. The RSVP speller does not require gaze shifts towards different target locations and can be operated by non-spatial visual attention; it can therefore be considered a valid paradigm in applications for patients with impaired oculomotor control. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  9. Facing death, gazing inward: end-of-life and the transformation of clinical subjectivity in Thailand.

    Science.gov (United States)

    Stonington, Scott

    2011-06-01

    In this article, I describe a new form of clinical subjectivity in Thailand, emerging out of public debate over medical care at the end of life. Following the controversial high-tech death of the famous Buddhist monk Buddhadasa, many began to denounce modern death as falling prey to social ills in Thai society, such as consumerism, technology-worship, and the desire to escape the realities of existence. As a result, governmental and non-governmental organizations have begun to focus on the end-of-life as a locus for transforming Thai society. Moving beyond the classic outward focus of the medical gaze, they have begun teaching clinicians and patients to gaze inward instead, to use the suffering inherent in medicine and illness to face the nature of existence and attain inner wisdom. In this article, I describe the emergence of this new gaze and its major conceptual components, including a novel idea of what it means to be 'human,' as well as a series of technologies used to craft this humanity: confession, "facing suffering," and untying "knots" in the heart. I also describe how this new subjectivity has begun to change the long-stable Buddhist concept of death as taking place at a moment in time, giving way for a new concept of "end-of-life," an elongated interval to be experienced, studied, and used for inner wisdom.

  10. Method of Menu Selection by Gaze Movement Using AC EOG Signals

    Science.gov (United States)

    Kanoh, Shin'ichiro; Futami, Ryoko; Yoshinobu, Tatsuo; Hoshimiya, Nozomu

    A method to detect the direction and distance of voluntary eye gaze movements from EOG (electrooculogram) signals was proposed and tested. In this method, AC-amplified vertical and horizontal transient EOG signals were classified into 8 classes of direction and 2 classes of distance of voluntary eye gaze movements. The horizontal and vertical EOGs at each sampling time during an eye gaze movement were treated as a two-dimensional vector, and the center of gravity of the sample vectors whose norms were more than 80% of the maximum norm was used as the feature vector to be classified. Using the k-nearest neighbor algorithm for classification, the averaged correct detection rates for each subject were 98.9%, 98.7%, and 94.4%, respectively. This method avoids strict EOG-based eye tracking, which requires DC amplification of very small signals. It would be useful for developing robust human interfacing systems based on menu selection for severely paralyzed patients.
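    The feature extraction and classification pipeline described above can be sketched as follows. This is an illustrative reconstruction from the abstract, not the authors' code: sample (horizontal, vertical) EOG vectors with norms of at least 80% of the maximum norm are averaged into a feature vector, which is then classified by a k-nearest-neighbor vote against labeled prototype vectors. All names and the prototype representation are assumptions.

    ```python
    import math
    from collections import Counter

    def feature_vector(samples, ratio=0.8):
        """Center of gravity of the sample vectors whose norm is at least
        `ratio` of the maximum norm (80% in the paper).
        `samples` is a list of (horizontal, vertical) EOG pairs."""
        norms = [math.hypot(h, v) for h, v in samples]
        max_norm = max(norms)
        kept = [s for s, n in zip(samples, norms) if n >= ratio * max_norm]
        return (sum(h for h, _ in kept) / len(kept),
                sum(v for _, v in kept) / len(kept))

    def knn_classify(feature, prototypes, k=3):
        """Majority vote among the k nearest labeled prototypes,
        given as (vector, label) pairs."""
        nearest = sorted(prototypes, key=lambda p: math.dist(feature, p[0]))[:k]
        votes = Counter(label for _, label in nearest)
        return votes.most_common(1)[0][0]
    ```

    Restricting the feature to near-maximum-norm samples is what makes AC amplification workable: only the transient peak of the gaze movement, not the drifting baseline, determines the class.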

  11. Goats display audience-dependent human-directed gazing behaviour in a problem-solving task.

    Science.gov (United States)

    Nawroth, Christian; Brett, Jemma M; McElligott, Alan G

    2016-07-01

    Domestication is an important factor driving changes in animal cognition and behaviour. In particular, the capacity of dogs to communicate in a referential and intentional way with humans is considered a key outcome of how domestication as a companion animal shaped the canid brain. However, the lack of comparison with other domestic animals makes general conclusions about how domestication has affected these important cognitive features difficult. We investigated human-directed behaviour in an 'unsolvable problem' task in a domestic, but non-companion species: goats. During the test, goats experienced a forward-facing or an away-facing person. They gazed towards the forward-facing person earlier and for longer and showed more gaze alternations and a lower latency until the first gaze alternation when the person was forward-facing. Our results provide strong evidence for audience-dependent human-directed visual orienting behaviour in a species that was domesticated primarily for production, and show similarities with the referential and intentional communicative behaviour exhibited by domestic companion animals such as dogs and horses. This indicates that domestication has a much broader impact on heterospecific communication than previously believed. © 2016 The Author(s).

  12. Affine transform to reform pixel coordinates of EOG signals for controlling robot manipulators using gaze motions.

    Science.gov (United States)

    Rusydi, Muhammad Ilhamdi; Sasaki, Minoru; Ito, Satoshi

    2014-06-10

    Biosignals will play an important role in building communication between machines and humans. One of the types of biosignals that is widely used in neuroscience are electrooculography (EOG) signals. An EOG has a linear relationship with eye movement displacement. Experiments were performed to construct a gaze motion tracking method indicated by robot manipulator movements. Three operators looked at 24 target points displayed on a monitor that was 40 cm in front of them. Two channels (Ch1 and Ch2) produced EOG signals for every single eye movement. These signals were converted to pixel units by using the linear relationship between EOG signals and gaze motion distances. The conversion outcomes were actual pixel locations. An affine transform method is proposed to determine the shift of actual pixels to target pixels. This method consisted of sequences of five geometry processes, which are translation-1, rotation, translation-2, shear and dilatation. The accuracy was approximately 0.86° ± 0.67° in the horizontal direction and 0.54° ± 0.34° in the vertical. This system successfully tracked the gaze motions not only in direction, but also in distance. Using this system, three operators could operate a robot manipulator to point at some targets. This result shows that the method is reliable in building communication between humans and machines using EOGs.

  13. Cultural differences in gaze and emotion recognition: Americans contrast more than Chinese.

    Science.gov (United States)

    Stanley, Jennifer Tehan; Zhang, Xin; Fung, Helene H; Isaacowitz, Derek M

    2013-02-01

    We investigated the influence of contextual expressions on emotion recognition accuracy and gaze patterns among American and Chinese participants. We expected Chinese participants would be more influenced by, and attend more to, contextual information than Americans. Consistent with our hypothesis, Americans were more accurate than Chinese participants at recognizing emotions embedded in the context of other emotional expressions. Eye-tracking data suggest that, for some emotions, Americans attended more to the target faces, and they made more gaze transitions to the target face than Chinese. For all emotions except anger and disgust, Americans appeared to use more of a contrasting strategy where each face was individually contrasted with the target face, compared with Chinese who used less of a contrasting strategy. Both cultures were influenced by contextual information, although the benefit of contextual information depended upon the perceptual dissimilarity of the contextual emotions to the target emotion and the gaze pattern employed during the recognition task. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  14. Looking into the future: An inward bias in aesthetic experience driven only by gaze cues.

    Science.gov (United States)

    Chen, Yi-Chia; Colombatto, Clara; Scholl, Brian J

    2018-07-01

    The inward bias is an especially powerful principle of aesthetic experience: In framed images (e.g. photographs), we prefer peripheral figures that face inward (vs. outward). Why does this bias exist? Since agents tend to act in the direction in which they are facing, one intriguing possibility is that the inward bias reflects a preference to view scenes from a perspective that will allow us to witness those predicted future actions. This account has been difficult to test with previous displays, in which facing direction is often confounded with either global shape profiles or the relative locations of salient features (since e.g. someone's face is generally more visually interesting than the back of their head). But here we demonstrate a robust inward bias in aesthetic judgment driven by a cue that is socially powerful but visually subtle: averted gaze. Subjects adjusted the positions of people in images to maximize the images' aesthetic appeal. People with direct gaze were not placed preferentially in particular regions, but people with averted gaze were reliably placed so that they appeared to be looking inward. This demonstrates that the inward bias can arise from visually subtle features, when those features signal how future events may unfold. Copyright © 2018. Published by Elsevier B.V.

  15. Eye-gaze control of the computer interface: Discrimination of zoom intent

    International Nuclear Information System (INIS)

    Goldberg, J.H.

    1993-01-01

    An analysis methodology and associated experiment were developed to assess whether definable and repeatable signatures of eye-gaze characteristics are evident preceding a decision to zoom in, zoom out, or not zoom at a computer interface. This user-intent discrimination procedure could have broad application in disability aids and telerobotic control. Eye-gaze data were collected from 10 subjects in a controlled experiment requiring zoom decisions. The eye-gaze data were clustered, then fed into a multiple discriminant analysis (MDA) for optimal definition of heuristics separating the zoom-in, zoom-out, and no-zoom conditions. Confusion matrix analyses showed that a number of variable combinations classified at a statistically significant level, but practical significance was more difficult to establish. Composite contour plots demonstrated the regions in parameter space consistently assigned by the MDA to unique zoom conditions. Peak classification occurred at about 1200--1600 msec. Improvements in the methodology to achieve practical real-time zoom control are considered.

  16. Improved remote gaze estimation using corneal reflection-adaptive geometric transforms

    Science.gov (United States)

    Ma, Chunfei; Baek, Seung-Jin; Choi, Kang-A.; Ko, Sung-Jea

    2014-05-01

    Recently, the remote gaze estimation (RGE) technique has been widely applied to consumer devices as a more natural interface. In general, the conventional RGE method estimates a user's point of gaze using a geometric transform, which represents the relationship between several infrared (IR) light sources and their corresponding corneal reflections (CRs) in the eye image. Among various methods, the homography normalization (HN) method achieves state-of-the-art performance. However, the geometric transform of the HN method requiring four CRs is infeasible for the case when fewer than four CRs are available. To solve this problem, this paper proposes a new RGE method based on three alternative geometric transforms, which are adaptive to the number of CRs. Unlike the HN method, the proposed method not only can operate with two or three CRs, but can also provide superior accuracy. To further enhance the performance, an effective error correction method is also proposed. By combining the introduced transforms with the error-correction method, the proposed method not only provides high accuracy and robustness for gaze estimation, but also allows for a more flexible system setup with a different number of IR light sources. Experimental results demonstrate the effectiveness of the proposed method.
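    To illustrate the general idea of a geometric transform that adapts to the number of available corneal reflections: with only two CRs, a similarity transform (rotation, uniform scale, translation) is fully determined by the two point correspondences, and complex arithmetic gives it in closed form. The sketch below is a hedged illustration of that reduced two-CR case, not the specific transforms proposed in the paper; all names are assumptions.

    ```python
    def similarity_from_two_points(p1, p2, q1, q2):
        """Closed-form similarity transform mapping p1 -> q1 and p2 -> q2.
        Points are (x, y) pairs; the plane is modeled as complex numbers,
        so the map is z -> a*z + b (rotation + uniform scale + translation).
        Illustrative sketch only, not the paper's exact formulation."""
        z = lambda p: complex(p[0], p[1])
        a = (z(q2) - z(q1)) / (z(p2) - z(p1))  # rotation and scale factor
        b = z(q1) - a * z(p1)                  # translation

        def transform(pt):
            w = a * z(pt) + b
            return (w.real, w.imag)
        return transform
    ```

    In a gaze-estimation setting, p1 and p2 would be the two detected CRs in the eye image and q1, q2 their known counterparts in a normalized space; the pupil center is then mapped through the same transform. A similarity cannot absorb the perspective distortion a four-point homography captures, which is one reason reduced-CR transforms need the kind of error correction the paper proposes.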

  17. Stay tuned: Inter-individual neural synchronization during mutual gaze and joint attention

    Directory of Open Access Journals (Sweden)

    Daisuke N Saito

    2010-11-01

    Full Text Available Eye contact provides a communicative link between humans, prompting joint attention. As spontaneous brain activity may play an important role in coordinating neuronal processing within the brain, inter-subject synchronization of such activity may occur during eye contact. To test this, we conducted simultaneous functional MRI in pairs of adults. Eye contact was maintained at baseline while the subjects engaged in real-time gaze exchange in a joint attention task. Averted gaze activated the bilateral occipital pole extending to the right posterior superior temporal sulcus, the dorso-medial prefrontal cortex, and the bilateral inferior frontal gyrus. Following a partner’s gaze towards an object activated the left intraparietal sulcus. After all task-related effects were modeled out, an inter-individual correlation analysis of the residual time courses was performed. Paired subjects showed more prominent correlations than non-paired subjects in the right inferior frontal gyrus, suggesting that this region is involved in sharing intention during eye contact, which provides the context for joint attention.

  18. Examining Vision and Attention in Sports Performance Using a Gaze-Contingent Paradigm

    Directory of Open Access Journals (Sweden)

    Donghyun Ryu

    2012-10-01

    Full Text Available In time-constrained activities, such as competitive sports, the rapid acquisition and comprehension of visual information is vital for successful performance. Currently our understanding of how and what visual information is acquired, and how this changes with skill development, is quite rudimentary. Interpretation of eye movement behaviour is limited by uncertainties surrounding the relationship between attention, line-of-gaze data, and the mechanism of information pick-up from different sectors of the visual field. We used a gaze-contingent display methodology to selectively present information to the central and peripheral parts of the visual field during a decision-making task. Eleven skilled and 11 less-skilled players watched videos of basketball scenarios under three different vision conditions (tunnel, masked, and full vision) and, in a forced-choice paradigm, responded whether it was more appropriate for the ball carrier to pass or drive. In the tunnel and mask conditions, vision was selectively restricted to, or occluded from, 5° around the line of gaze, respectively. The skilled players showed significantly higher response accuracy and faster response times compared to their lesser-skilled counterparts irrespective of the vision condition, demonstrating the skilled players' superiority in information extraction irrespective of the sector of the visual field they rely on. Findings suggest that the capability to interpret visual information, rather than the sector of the visual field in which the information is detected, appears to be the key limiting factor for expert performance.

  19. Neural Temporal Dynamics of Social Exclusion Elicited by Averted Gaze: An Event-Related Potentials Study

    Directory of Open Access Journals (Sweden)

    Yue Leng

    2018-02-01

    Full Text Available Eye gaze plays a fundamental role in social communication. Averted eye gaze during social interaction, as the most common form of silent treatment, conveys a signal of social exclusion. In the present study, we examined the time course of the brain response to social exclusion using a modified version of the Eye-gaze paradigm. The event-related potential (ERP) data and the subjective rating data showed that the frontocentral P200 was positively correlated with the negative mood evoked by excluded events, whereas the centroparietal late positive potential (LPP) was positively correlated with the perceived ostracism intensity. Both the P200 and LPP were more positive-going for excluded events than for included events. These findings suggest that brain responses sensitive to social exclusion can be divided into an early affective processing stage, linked to an early pre-cognitive warning system, and a late higher-order processing stage, demanding attentional resources for elaborate stimulus evaluation and categorization generally, rather than under a specific situation.

  20. The Gaze-Cueing Effect in the United States and Japan: Influence of Cultural Differences in Cognitive Strategies on Control of Attention

    Directory of Open Access Journals (Sweden)

    Saki Takao

    2018-01-01

    Full Text Available The direction of gaze automatically and exogenously guides visual spatial attention, a phenomenon termed the gaze-cueing effect. Although this effect arises when the duration of stimulus onset asynchrony (SOA) between a non-predictive gaze cue and the target is relatively long, no empirical research has examined the factors underlying this extended cueing effect. Two experiments compared the gaze-cueing effect at longer SOAs (700 ms) in Japanese and American participants. Cross-cultural studies on cognition suggest that Westerners tend to use a context-independent analytical strategy to process visual environments, whereas Asians use a context-dependent holistic approach. We hypothesized that Japanese participants would not demonstrate the gaze-cueing effect at longer SOAs because they are more sensitive to contextual information, such as the knowledge that the direction of a gaze is not predictive. Furthermore, we hypothesized that American participants would demonstrate the gaze-cueing effect at the long SOAs because they tend to follow gaze direction whether it is predictive or not. In Experiment 1, American participants demonstrated the gaze-cueing effect at the long SOA, indicating that their attention was driven by the central non-predictive gaze direction regardless of the SOAs. In Experiment 2, Japanese participants demonstrated no gaze-cueing effect at the long SOA, suggesting that the Japanese participants exercised voluntary control of their attention, which inhibited the gaze-cueing effect with the long SOA. Our findings suggest that the control of visual spatial attention elicited by social stimuli systematically differs between American and Japanese individuals.

  1. The Gaze-Cueing Effect in the United States and Japan: Influence of Cultural Differences in Cognitive Strategies on Control of Attention.

    Science.gov (United States)

    Takao, Saki; Yamani, Yusuke; Ariga, Atsunori

    2017-01-01

    The direction of gaze automatically and exogenously guides visual spatial attention, a phenomenon termed the gaze-cueing effect. Although this effect arises when the duration of stimulus onset asynchrony (SOA) between a non-predictive gaze cue and the target is relatively long, no empirical research has examined the factors underlying this extended cueing effect. Two experiments compared the gaze-cueing effect at longer SOAs (700 ms) in Japanese and American participants. Cross-cultural studies on cognition suggest that Westerners tend to use a context-independent analytical strategy to process visual environments, whereas Asians use a context-dependent holistic approach. We hypothesized that Japanese participants would not demonstrate the gaze-cueing effect at longer SOAs because they are more sensitive to contextual information, such as the knowledge that the direction of a gaze is not predictive. Furthermore, we hypothesized that American participants would demonstrate the gaze-cueing effect at the long SOAs because they tend to follow gaze direction whether it is predictive or not. In Experiment 1, American participants demonstrated the gaze-cueing effect at the long SOA, indicating that their attention was driven by the central non-predictive gaze direction regardless of the SOAs. In Experiment 2, Japanese participants demonstrated no gaze-cueing effect at the long SOA, suggesting that the Japanese participants exercised voluntary control of their attention, which inhibited the gaze-cueing effect with the long SOA. Our findings suggest that the control of visual spatial attention elicited by social stimuli systematically differs between American and Japanese individuals.

  2. Gaze and motor behavior of people with PD during obstacle circumvention.

    Science.gov (United States)

    Simieli, Lucas; Vitório, Rodrigo; Rodrigues, Sérgio Tosi; Zago, Paula Fávaro Polastri; Ignacio Pereira, Vinícius Alota; Baptista, André Macari; de Paula, Pedro Henrique Alves; Penedo, Tiago; Almeida, Quincy J; Barbieri, Fabio Augusto

    2017-10-01

    The aim of this study was to analyze the motor and visual strategies used when walking around (circumventing) an obstacle in patients with Parkinson's disease (PD), in addition to the effects of dopaminergic medication on these strategies. To answer the study question, people with PD (n = 15) and neurologically healthy individuals (control group, CG; n = 15) performed an obstacle circumvention task during walking (5 trials of unobstructed walking and obstacle circumvention). The following parameters were analyzed: body clearance (the largest mediolateral distance from the center of mass, CoM, to the obstacle during circumvention), horizontal distance (distance from the CoM to the obstacle at the beginning of circumvention), circumvention strategy ("lead-out" or "lead-in"), spatiotemporal parameters of each step, and gaze parameters (number of fixations, mean fixation duration, and fixation time per area of interest). In addition, the variability of each parameter was calculated. The results indicated that people with PD and the CG used similar obstacle circumvention strategies (no group differences in body clearance, horizontal distance to the obstacle, or circumvention strategy), but the groups made different adjustments to implement these strategies: people with PD adjusted during both the approach and the circumvention steps and showed greater visual dependence on the obstacle, whereas the CG adjusted only the final step before circumvention. Moreover, without dopaminergic medication, people with PD reduced body clearance and increased their use of the "lead-out" strategy, the variability of spatiotemporal parameters, and their dependency on obstacle information, increasing the risk of contact with the obstacle during circumvention. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Effects of Language Background on Gaze Behavior: A Crosslinguistic Comparison Between Korean and German Speakers

    Science.gov (United States)

    Goller, Florian; Lee, Donghoon; Ansorge, Ulrich; Choi, Soonja

    2017-01-01

    Languages differ in how they categorize spatial relations: While German differentiates between containment (in) and support (auf) with distinct spatial words—(a) den Kuli IN die Kappe stecken (“put pen in cap”); (b) die Kappe AUF den Kuli stecken (“put cap on pen”)—Korean uses a single spatial word (kkita) collapsing (a) and (b) into one semantic category, particularly when the spatial enclosure is tight-fit. Korean uses a different word (i.e., netha) for loose-fits (e.g., apple in bowl). We tested whether these differences influence the attention of the speaker. In a crosslinguistic study, we compared native German speakers with native Korean speakers. Participants rated the similarity of two successive video clips of several scenes where two objects were joined or nested (either in a tight or loose manner). The rating data show that Korean speakers base their rating of similarity more on tight- versus loose-fit, whereas German speakers base their rating more on containment versus support (in vs. auf). Throughout the experiment, we also measured the participants’ eye movements. Korean speakers looked equally long at the moving Figure object and at the stationary Ground object, whereas German speakers were more biased to look at the Ground object. Additionally, Korean speakers also looked more at the region where the two objects touched than did German speakers. We discuss our data in the light of crosslinguistic semantics and the extent of their influence on spatial cognition and perception. PMID:29362644

  4. Audiovisual Speech Perception and Eye Gaze Behavior of Adults with Asperger Syndrome

    Science.gov (United States)

    Saalasti, Satu; Katsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-01-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age, sex and IQ matched controls. When a voice saying /p/ was presented with a face…

  5. Differences in gaze behaviour of expert and junior surgeons performing open inguinal hernia repair.

    Science.gov (United States)

    Tien, Tony; Pucher, Philip H; Sodergren, Mikael H; Sriskandarajah, Kumuthan; Yang, Guang-Zhong; Darzi, Ara

    2015-02-01

    Various fields have used gaze behaviour to evaluate task proficiency. This may also apply to surgery for the assessment of technical skill, but has not previously been explored in live surgery. The aim was to assess differences in gaze behaviour between expert and junior surgeons during open inguinal hernia repair. Gaze behaviour of expert and junior surgeons (defined by operative experience) performing the operation was recorded using eye-tracking glasses (SMI Eye Tracking Glasses 2.0, SensoMotoric Instruments, Germany). Primary endpoints were fixation frequency (rate of steady eye gaze events) and dwell time (duration of fixations and saccades) and were analysed for designated areas of interest in the subject's visual field. Secondary endpoints were maximum pupil size, pupil rate of change (change frequency in pupil size) and pupil entropy (predictability of pupil change). The NASA TLX scale measured perceived workload. Recorded metrics were compared between groups for the entire procedure and for comparable procedural segments. Twenty-five cases were recorded and 13 operations from 9 surgeons were analysed, giving 630 min of data recorded at 30 Hz. Experts demonstrated higher fixation frequency (median[IQR] 1.86 [0.3] vs 0.96 [0.3]; P = 0.006) and dwell time on the operative site during application of mesh (792 [159] vs 469 [109] s; P = 0.028) and closure of the external oblique (1.79 [0.2] vs 1.20 [0.6]; P = 0.003) (625 [154] vs 448 [147] s; P = 0.032), and dwelled more on the sterile field during cutting of mesh (716 [173] vs 268 [297] s; P = 0.019). NASA TLX scores indicated experts found the procedure less mentally demanding than juniors (3 [2] vs 12 [5.2]; P = 0.038). No subjects reported problems with wearing of the device, or obstruction of view. Use of portable eye-tracking technology in open surgery is feasible, without impinging surgical performance. Differences in gaze behaviour during open inguinal hernia repair can be seen between expert and junior surgeons and may have
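    The two primary endpoints above (fixation frequency and per-AOI dwell time) can be computed from a stream of fixation events. The following is a minimal illustrative sketch, not the study's analysis pipeline; the AOI names and the sample numbers are invented:

    ```python
    # Sketch: fixation frequency (fixations per second) and dwell time
    # (summed fixation duration) per area of interest (AOI).
    # AOI labels and data below are hypothetical, for illustration only.
    from collections import defaultdict

    def gaze_metrics(fixations, segment_duration_s):
        """fixations: list of (aoi_name, duration_s) fixation events."""
        dwell = defaultdict(float)   # total fixation time per AOI (s)
        count = defaultdict(int)     # number of fixations per AOI
        for aoi, duration in fixations:
            dwell[aoi] += duration
            count[aoi] += 1
        freq = {aoi: n / segment_duration_s for aoi, n in count.items()}
        return dict(dwell), freq

    fixations = [("operative_site", 0.8), ("operative_site", 1.1),
                 ("sterile_field", 0.4), ("operative_site", 0.6)]
    dwell, freq = gaze_metrics(fixations, segment_duration_s=10.0)
    print(dwell["operative_site"])   # total dwell time on the AOI (~2.5 s)
    print(freq["operative_site"])    # 3 fixations over 10 s -> 0.3 per s
    ```

    A real pipeline would first segment raw 30 Hz gaze samples into fixations and saccades (e.g. by a dispersion or velocity threshold) before aggregating per AOI.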

  6. Intentional gaze shift to neglected space: a compensatory strategy during recovery after unilateral spatial neglect.

    Science.gov (United States)

    Takamura, Yusaku; Imanishi, Maho; Osaka, Madoka; Ohmatsu, Satoko; Tominaga, Takanori; Yamanaka, Kentaro; Morioka, Shu; Kawashima, Noritaka

    2016-11-01

    Unilateral spatial neglect is a common neurological syndrome following predominantly right hemispheric stroke. While most patients lack insight into their neglect behaviour and do not initiate compensatory behaviours in the early recovery phase, some patients recognize it and start to pay attention towards the neglected space. We aimed to characterize visual attention capacity in patients with unilateral spatial neglect with specific focus on cortical processes underlying compensatory gaze shift towards the neglected space during the recovery process. Based on the Behavioural Inattention Test score and presence or absence of experience of neglect in their daily life from stroke onset to the enrolment date, participants were divided into USN++ (do not compensate, n = 15), USN+ (compensate, n = 10), and right hemisphere damage groups (no neglect, n = 24). The patients participated in eye pursuit-based choice reaction tasks and were asked to pursue one of five horizontally located circular objects flashed on a computer display. The task consisted of 25 trials with 4-s intervals, and the order of highlighted objects was randomly determined. From the recorded eye tracking data, eye movement onset and gaze shift were calculated. To elucidate the cortical mechanism underlying behavioural results, electroencephalogram activities were recorded in three USN++, 13 USN+ and eight patients with right hemisphere damage. We found that while lower Behavioural Inattention Test scoring patients (USN++) showed gaze shift to non-neglected space, some higher scoring patients (USN+) showed clear leftward gaze shift at visual stimuli onset. Moreover, we found a significant correlation between Behavioural Inattention Test score and gaze shift extent in the unilateral spatial neglect group (r = -0.62, P attention to the neglected space) and its neural correlates in patients with unilateral spatial neglect. In conclusion, patients with unilateral spatial neglect who recognized

  7. The influence of banner advertisements on attention and memory: human faces with averted gaze can enhance advertising effectiveness.

    Science.gov (United States)

    Sajjacholapunt, Pitch; Ball, Linden J

    2014-01-01

    Research suggests that banner advertisements used in online marketing are often overlooked, especially when positioned horizontally on webpages. Such inattention invariably gives rise to an inability to remember advertising brands and messages, undermining the effectiveness of this marketing method. Recent interest has focused on whether human faces within banner advertisements can increase attention to the information they contain, since the gaze cues conveyed by faces can influence where observers look. We report an experiment that investigated the efficacy of faces located in banner advertisements to enhance the attentional processing and memorability of banner contents. We tracked participants' eye movements when they examined webpages containing either bottom-right vertical banners or bottom-center horizontal banners. We also manipulated facial information such that banners either contained no face, a face with mutual gaze or a face with averted gaze. We additionally assessed people's memories for brands and advertising messages. Results indicated that relative to other conditions, the condition involving faces with averted gaze increased attention to the banner overall, as well as to the advertising text and product. Memorability of the brand and advertising message was also enhanced. Conversely, in the condition involving faces with mutual gaze, the focus of attention was localized more on the face region rather than on the text or product, weakening any memory benefits for the brand and advertising message. This detrimental impact of mutual gaze on attention to advertised products was especially marked for vertical banners. These results demonstrate that the inclusion of human faces with averted gaze in banner advertisements provides a promising means for marketers to increase the attention paid to such adverts, thereby enhancing memory for advertising information.

  8. The influence of banner advertisements on attention and memory: Human faces with averted gaze can enhance advertising effectiveness

    Directory of Open Access Journals (Sweden)

    Pitch eSajjacholapunt

    2014-03-01

    Full Text Available Research suggests that banner advertisements used in online marketing are often overlooked, especially when positioned horizontally on webpages. Such inattention invariably gives rise to an inability to remember advertising brands and messages, undermining the effectiveness of this marketing method. Recent interest has focused on whether human faces within banner advertisements can increase attention to the information they contain, since the gaze cues conveyed by faces can influence where observers look. We report an experiment that investigated the efficacy of faces located in banner advertisements to enhance the attentional processing and memorability of banner contents. We tracked participants’ eye movements when they examined webpages containing either bottom-right vertical banners or bottom-centre horizontal banners. We also manipulated facial information such that banners either contained no face, a face with mutual gaze or a face with averted gaze. We additionally assessed people’s memories for brands and advertising messages. Results indicated that relative to other conditions, the condition involving faces with averted gaze increased attention to the banner overall, as well as to the advertising text and product. Memorability of the brand and advertising message was also enhanced. Conversely, in the condition involving faces with mutual gaze, the focus of attention was localised more on the face region rather than on the text or product, weakening any memory benefits for the brand and advertising message. This detrimental impact of mutual gaze on attention to advertised products was especially marked for vertical banners. These results demonstrate that the inclusion of human faces with averted gaze in banner advertisements provides a promising means for marketers to increase the attention paid to such adverts, thereby enhancing memory for advertising information.

  9. Cortical Activation during Landmark-Centered vs. Gaze-Centered Memory of Saccade Targets in the Human: An FMRI Study

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2017-06-01

    Full Text Available A remembered saccade target could be encoded in egocentric coordinates such as gaze-centered, or relative to some external allocentric landmark that is independent of the target or gaze (landmark-centered). In comparison to egocentric mechanisms, very little is known about such a landmark-centered representation. Here, we used an event-related fMRI design to identify brain areas supporting these two types of spatial coding (i.e., landmark-centered vs. gaze-centered) for target memory during the Delay phase, where only target location, not saccade direction, was specified. The paradigm included three tasks with identical display of visual stimuli but different auditory instructions: Landmark Saccade (remember target location relative to a visual landmark, independent of gaze), Control Saccade (remember the original target location relative to gaze fixation, independent of the landmark), and a non-spatial control, Color Report (report the target color). During the Delay phase, the Control and Landmark Saccade tasks activated overlapping areas in posterior parietal cortex (PPC) and frontal cortex as compared to the color control, but with higher activation in PPC for target coding in the Control Saccade task and higher activation in temporal and occipital cortex for target coding in the Landmark Saccade task. Gaze-centered directional selectivity was observed in superior occipital gyrus and inferior occipital gyrus, whereas landmark-centered directional selectivity was observed in precuneus and midposterior intraparietal sulcus. During the Response phase, after saccade direction was specified, the parietofrontal network in the left hemisphere showed higher activation for rightward than leftward saccades. Our results suggest that cortical activation for coding saccade target direction relative to a visual landmark differs from gaze-centered directional selectivity for target memory, from the mechanisms for other types of allocentric tasks, and from the directionally

  10. Evidence for a link between changes to gaze behaviour and risk of falling in older adults during adaptive locomotion.

    Science.gov (United States)

    Chapman, G J; Hollands, M A

    2006-11-01

    There is increasing evidence that gaze stabilization with respect to footfall targets plays a crucial role in the control of visually guided stepping and that there are significant changes to gaze behaviour as we age. However, past research has not examined whether age-related changes in gaze behaviour are associated with changes to stepping performance. This paper aims to identify differences in gaze behaviour between young adults (n=8), older adults determined to be at low risk of falling (low-risk, n=4) and older adults prone to falling (high-risk, n=4) performing an adaptive locomotor task, and attempts to relate observed differences in gaze behaviour to decline in stepping performance. Participants walked at a self-selected pace along a 9m pathway, stepping into two footfall target locations en route. Gaze behaviour and lower limb kinematics were recorded using an ASL 500 gaze tracker interfaced with a Vicon motion analysis system. Results showed that older adults looked significantly sooner to targets, and fixated the targets for longer, than younger adults. There were also significant differences in these measures between high and low-risk older adults. On average, high-risk older adults looked away from targets significantly sooner and demonstrated less accurate and more variable foot placements than younger adults and low-risk older adults. These findings suggest that, as we age, we need more time to plan precise stepping movements, and clearly demonstrate that there are differences between low-risk and high-risk older adults in both where and when they look at future stepping targets and the precision with which they subsequently step. We propose that high-risk older adults may prioritize the planning of future actions over the accurate execution of ongoing movements and that adoption of this strategy may contribute to an increased likelihood of falls. Copyright 2005 Elsevier B.V.

  11. Social communication with virtual agents: The effects of body and gaze direction on attention and emotional responding in human observers.

    Science.gov (United States)

    Marschner, Linda; Pannasch, Sebastian; Schulz, Johannes; Graupner, Sven-Thomas

    2015-08-01

    In social communication, the gaze direction of other persons provides important information to perceive and interpret their emotional response. Previous research investigated the influence of gaze by manipulating mutual eye contact; in doing so, gaze and body direction were changed as a whole, so that the other person's gaze and body directions were always congruent (both averted or both directed). Here, we aimed to disentangle these effects by using short animated sequences of virtual agents posing with either direct or averted body or gaze. Attention allocation by means of eye movements, facial muscle response, and emotional experience to agents of different gender and facial expressions were investigated. Eye movement data revealed longer fixation durations, i.e., a stronger allocation of attention, when gaze and body direction were not congruent with each other or when both were directed towards the observer. This suggests that direct interaction as well as incongruous signals increase the demands on attentional resources in the observer. For the facial muscle response, only the zygomaticus major muscle revealed an effect of body direction, expressed by stronger activity in response to happy expressions for direct compared to averted gaze when the virtual character's body was directed towards the observer. Finally, body direction also influenced the emotional experience ratings towards happy expressions. While earlier findings suggested that mutual eye contact is the main source for increased emotional responding and attentional allocation, the present results indicate that the direction of the virtual agent's body and head also plays a minor but significant role. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. How do children learn to follow gaze, share joint attention, imitate their teachers, and use tools during social interactions?

    Science.gov (United States)

    Grossberg, Stephen; Vladusich, Tony

    2010-01-01

    How does an infant learn through visual experience to imitate actions of adult teachers, despite the fact that the infant and adult view one another and the world from different perspectives? To accomplish this, an infant needs to learn how to share joint attention with adult teachers and to follow their gaze towards valued goal objects. The infant also needs to be capable of view-invariant object learning and recognition whereby it can carry out goal-directed behaviors, such as the use of tools, using different object views than the ones that its teachers use. Such capabilities are often attributed to "mirror neurons". This attribution does not, however, explain the brain processes whereby these competences arise. This article describes the CRIB (Circular Reactions for Imitative Behavior) neural model of how the brain achieves these goals through inter-personal circular reactions. Inter-personal circular reactions generalize the intra-personal circular reactions of Piaget, which clarify how infants learn from their own babbled arm movements and reactive eye movements how to carry out volitional reaches, with or without tools, towards valued goal objects. The article proposes how intra-personal circular reactions create a foundation for inter-personal circular reactions when infants and other learners interact with external teachers in space. Both types of circular reactions involve learned coordinate transformations between body-centered arm movement commands and retinotopic visual feedback, and coordination of processes within and between the What and Where cortical processing streams. Specific breakdowns of model processes generate formal symptoms similar to clinical symptoms of autism. Copyright © 2010 Elsevier Ltd. All rights reserved.

  13. Multi-focal Vision and Gaze Control Improve Navigation Performance

    Directory of Open Access Journals (Sweden)

    Kolja Kuehnlenz

    2008-11-01

    Full Text Available Multi-focal vision systems comprise cameras with various fields of view and measurement accuracies. This article presents a multi-focal approach to localization and mapping of mobile robots with active vision. An implementation of the novel concept is done considering a humanoid robot navigation scenario where the robot is visually guided through a structured environment with several landmarks. Various embodiments of multi-focal vision systems are investigated and the impact on navigation performance is evaluated in comparison to a conventional mono-focal stereo set-up. The comparative studies clearly show the benefits of multi-focal vision for mobile robot navigation: flexibility to assign the different available sensors optimally in each situation, enhancement of the visible field, higher localization accuracy, and, thus, better task performance, i.e. path following behavior of the mobile robot. It is shown that multi-focal vision may strongly improve navigation performance.

  14. Effects of Facial Symmetry and Gaze Direction on Perception of Social Attributes: A Study in Experimental Art History

    Directory of Open Access Journals (Sweden)

    Per Olav Folgerø

    2016-09-01

    Full Text Available This article explores the possibility of testing hypotheses about art production in the past by collecting data in the present. We call this enterprise experimental art history. Why did medieval artists prefer to paint Christ with his face directed towards the beholder, while profane faces were noticeably more often painted in different degrees of profile? Is a preference for frontal faces motivated by deeper evolutionary and biological considerations? Head and gaze direction is a significant factor for detecting the intentions of others, and accurate detection of gaze direction depends on strong contrast between a dark iris and a bright sclera, a combination that is only found in humans among the primates. One uniquely human capacity is language acquisition, where the detection of shared or joint attention, for example through detection of gaze direction, contributes significantly to the ease of acquisition. The perceived face and gaze direction is also related to fundamental emotional reactions such as fear, aggression, empathy and sympathy. The fast-track modulator model presents a related fast and unconscious subcortical route that involves many central brain areas. Activity in this pathway mediates the affective valence of the stimulus. In particular, different sub-regions of the amygdala show specific activation as response to gaze direction, head orientation, and the valence of facial expression. We present three experiments on the effects of face orientation and gaze direction on the judgments of social attributes. We observed that frontal faces with direct gaze were more highly associated with positive adjectives. Does this help to associate positive values to the Holy Face in a Western context? The formal result indicates that the Holy Face is perceived more positively than profiles with both direct and averted gaze. Two control studies, using a Brazilian and a Dutch database of photographs, showed a similar but weaker effect with a

  15. Hovering by Gazing: A Novel Strategy for Implementing Saccadic Flight-Based Navigation in GPS-Denied Environments

    Directory of Open Access Journals (Sweden)

    Augustin Manecy

    2014-04-01

    Full Text Available Hovering flies are able to stay still in place when hovering above flowers and burst into movement towards a new object of interest (a target. This suggests that the sensorimotor control loops implemented onboard could be usefully mimicked for controlling Unmanned Aerial Vehicles (UAVs). In this study, the fundamental head-body movements occurring in free-flying insects were simulated in a sighted twin-engine robot with a mechanical decoupling inserted between its eye (or gaze) and its body. The robot based on this gaze control system achieved robust and accurate hovering performance, without an accelerometer, over a ground target despite a narrow eye field of view (±5°). The gaze stabilization strategy, validated under Processor-In-the-Loop (PIL) and inspired by three biological Oculomotor Reflexes (ORs), enables the aerial robot to lock its gaze onto a fixed target regardless of its roll angle. In addition, the gaze control mechanism allows the robot to perform short range target-to-target navigation by triggering an automatic fast “target jump” behaviour based on a saccadic eye movement.

  16. Novel Eye Movement Disorders in Whipple’s Disease—Staircase Horizontal Saccades, Gaze-Evoked Nystagmus, and Esotropia

    Directory of Open Access Journals (Sweden)

    Aasef G. Shaikh

    2017-07-01

    Full Text Available Whipple’s disease, a rare systemic infectious disorder, is complicated by the involvement of the central nervous system in about 5% of cases. Oscillations of the eyes and the jaw, called oculo-masticatory myorhythmia, are pathognomonic of the central nervous system involvement but are often absent. Typical manifestations of central nervous system Whipple’s disease are cognitive impairment, parkinsonism mimicking progressive supranuclear palsy with vertical saccade slowing, and up-gaze range limitation. We describe a unique patient with central nervous system Whipple’s disease who had typical features, including parkinsonism, cognitive impairment, and up-gaze limitation; but also had diplopia, esotropia with mild horizontal (abduction more than adduction) limitation, and vertigo. The patient also had gaze-evoked nystagmus and staircase horizontal saccades. The latter were thought to be due to mal-programmed small saccades followed by a series of corrective saccades. The saccades were disconjugate due to the concurrent strabismus. Also, we noted disconjugacy in the slow phase of gaze-evoked nystagmus. The disconjugacy of the slow phase of gaze-evoked nystagmus was larger during the monocular viewing condition. We propose that interaction of the strabismic drifts of the covered eye and the nystagmus drift, putatively at the final common pathway, might lead to such disconjugacy.

  17. Testing the dual-route model of perceived gaze direction: Linear combination of eye and head cues.

    Science.gov (United States)

    Otsuka, Yumiko; Mareschal, Isabelle; Clifford, Colin W G

    2016-06-01

    We have recently proposed a dual-route model of the effect of head orientation on perceived gaze direction (Otsuka, Mareschal, Calder, & Clifford, 2014; Otsuka, Mareschal, & Clifford, 2015), which computes perceived gaze direction as a linear combination of eye orientation and head orientation. By parametrically manipulating eye orientation and head orientation, we tested the adequacy of a linear model to account for the effect of horizontal head orientation on perceived direction of gaze. Here, participants adjusted an on-screen pointer toward the perceived gaze direction in two image conditions: Normal condition and Wollaston condition. Images in the Normal condition included a change in the visible part of the eye along with the change in head orientation, while images in the Wollaston condition were manipulated to have identical eye regions across head orientations. Multiple regression analysis with explanatory variables of eye orientation and head orientation revealed that linear models account for most of the variance both in the Normal condition and in the Wollaston condition. Further, we found no evidence that the model with a nonlinear term explains significantly more variance. Thus, the current study supports the dual-route model that computes the perceived gaze direction as a linear combination of eye orientation and head orientation.
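    The regression analysis described above, in which perceived gaze direction is modeled as a linear combination of eye orientation and head orientation, can be sketched as follows. This is an illustrative reconstruction, not the authors' code, and the data and weights are synthetic:

    ```python
    # Sketch of a dual-route linear fit: perceived gaze direction
    # = w_eye * eye_orientation + w_head * head_orientation + intercept.
    # Orientations are in degrees; the data are synthetic (noise-free)
    # so the regression recovers the generating weights exactly.
    import numpy as np

    eye = np.array([-20.0, -10.0, 0.0, 10.0, 20.0, -10.0, 10.0])
    head = np.array([-15.0, 0.0, 15.0, -15.0, 0.0, 15.0, 15.0])
    # Hypothetical generating model: heavy weight on eyes, light on head.
    perceived = 0.9 * eye + 0.3 * head

    # Ordinary least squares via the design matrix [eye, head, 1].
    X = np.column_stack([eye, head, np.ones_like(eye)])
    coef, *_ = np.linalg.lstsq(X, perceived, rcond=None)
    w_eye, w_head, intercept = coef
    print(round(w_eye, 3), round(w_head, 3))  # recovers 0.9 and 0.3
    ```

    In the actual study, `perceived` would be the participants' pointer settings, and comparing this linear fit against a model with an added nonlinear (e.g. interaction) term tests whether the linear combination suffices.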

  18. The response of guide dogs and pet dogs (Canis familiaris) to cues of human referential communication (pointing and gaze).

    Science.gov (United States)

    Ittyerah, Miriam; Gaunet, Florence

    2009-03-01

    The study raises the question of whether guide dogs and pet dogs are expected to differ in response to cues of referential communication given by their owners; especially since guide dogs grow up among sighted humans, and while living with their blind owners, they still have interactions with several sighted people. Guide dogs and pet dogs were required to respond to point, point and gaze, gaze, and control cues of referential communication given by their owners. Results indicate that the two groups of dogs do not differ from each other, revealing that the visual status of the owner is not a factor in the use of cues of referential communication. Both groups of dogs showed higher frequencies of performance and faster latencies for the point and the point-and-gaze cues as compared to the gaze cue only. However, responses to control cues were below chance for the guide dogs, whereas the pet dogs performed at chance. The below-chance performance of the guide dogs may be explained by a tendency among them to go and stand by the owner. The study indicates that both groups of dogs respond similarly in normal daily dyadic interaction with their owners, and that the lower comprehension of the gaze cue suggests human gaze is a less salient cue for dogs than the pointing gesture.

  19. Gaze distribution analysis and saliency prediction across age groups.

    Science.gov (United States)

    Krishna, Onkar; Helo, Andrea; Rämä, Pia; Aizawa, Kiyoharu

    2018-01-01

    Knowledge of the human visual system helps to develop better computational models of visual attention. State-of-the-art models have been developed to mimic the visual attention system of young adults that, however, largely ignore the variations that occur with age. In this paper, we investigated how visual scene processing changes with age and we propose an age-adapted framework that helps to develop a computational model that can predict saliency across different age groups. Our analysis uncovers how the explorativeness of an observer varies with age, how well saliency maps of an age group agree with fixation points of observers from the same or different age groups, and how age influences the center bias tendency. We analyzed the eye movement behavior of 82 observers belonging to four age groups while they explored visual scenes. Explorativeness was quantified in terms of the entropy of a saliency map, and the area under the curve (AUC) metric was used to quantify the agreement analysis and the center bias tendency. Analysis results were used to develop age-adapted saliency models. Our results suggest that the proposed age-adapted saliency model outperforms existing saliency models in predicting the regions of interest across age groups.
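    The two metrics named in the abstract can be sketched in a few lines. The toy saliency map and fixation list below are fabricated for illustration; only the definitions follow the abstract: Shannon entropy of the normalized map, and AUC measuring how well saliency values separate fixated from random locations.

```python
import numpy as np

rng = np.random.default_rng(42)

def saliency_entropy(s):
    """Shannon entropy (bits) of a saliency map normalized to a distribution."""
    p = s.ravel() / s.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fixation_auc(s, fixations, n_negatives=1000):
    """AUC: probability that a fixated pixel outranks a random pixel in saliency."""
    pos = np.array([s[r, c] for r, c in fixations])
    neg = s[rng.integers(0, s.shape[0], n_negatives),
            rng.integers(0, s.shape[1], n_negatives)]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(greater + 0.5 * ties)

# Toy map: a single salient blob; fixations land on the blob.
yy, xx = np.mgrid[0:64, 0:64]
smap = np.exp(-(((yy - 20) ** 2 + (xx - 40) ** 2) / (2 * 6.0 ** 2)))
fixs = [(20, 40), (21, 39), (19, 41)]

print(f"entropy = {saliency_entropy(smap):.2f} bits, "
      f"AUC = {fixation_auc(smap, fixs):.2f}")
```

    A uniform map gives the maximum entropy (log2 of the pixel count), a highly explorative gaze pattern; a peaked map gives less, and fixations that land on the peak drive the AUC toward 1.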

  20. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity--Evidence from Gazing Patterns.

    Directory of Open Access Journals (Sweden)

    Sanni Somppi

    Full Text Available Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore if domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly and this evaluation led to attentional bias, which was dependent on the depicted species: threatening conspecifics' faces evoked heightened attention, whereas threatening human faces instead evoked an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs.
The findings provide a novel

  1. Gaze-and-brain-controlled interfaces for human-computer and human-robot interaction

    Directory of Open Access Journals (Sweden)

    Shishkin S. L.

    2017-09-01

    Full Text Available Background. Human-machine interaction technology has greatly evolved during the last decades, but manual and speech modalities remain single output channels with their typical constraints imposed by the motor system’s information transfer limits. Will brain-computer interfaces (BCIs and gaze-based control be able to convey human commands or even intentions to machines in the near future? We provide an overview of basic approaches in this new area of applied cognitive research. Objective. We test the hypothesis that the use of communication paradigms and a combination of eye tracking with unobtrusive forms of registering brain activity can improve human-machine interaction. Methods and Results. Three groups of ongoing experiments at the Kurchatov Institute are reported. First, we discuss the communicative nature of human-robot interaction, and approaches to building a more efficient technology. Specifically, “communicative” patterns of interaction can be based on joint attention paradigms from developmental psychology, including a mutual “eye-to-eye” exchange of looks between human and robot. Further, we provide an example of “eye mouse” superiority over the computer mouse, here in emulating the task of selecting a moving robot from a swarm. Finally, we demonstrate a passive, noninvasive BCI that uses EEG correlates of expectation. This may become an important filter to separate intentional gaze dwells from non-intentional ones. Conclusion. The current noninvasive BCIs are not well suited for human-robot interaction, and their performance, when they are employed by healthy users, is critically dependent on the impact of the gaze on selection of spatial locations. The new approaches discussed show a high potential for creating alternative output pathways for the human brain. When support from passive BCIs becomes mature, the hybrid technology of the eye-brain-computer (EBCI interface will have a chance to enable natural, fluent, and the

  2. Older Adult Multitasking Performance Using a Gaze-Contingent Useful Field of View.

    Science.gov (United States)

    Ward, Nathan; Gaspar, John G; Neider, Mark B; Crowell, James; Carbonari, Ronald; Kaczmarski, Hank; Ringer, Ryan V; Johnson, Aaron P; Loschky, Lester C; Kramer, Arthur F

    2018-03-01

    Objective We implemented a gaze-contingent useful field of view paradigm to examine older adult multitasking performance in a simulated driving environment. Background Multitasking refers to the ability to manage multiple simultaneous streams of information. Recent work suggests that multitasking declines with age, yet the mechanisms supporting these declines are still debated. One possible framework to better understand this phenomenon is the useful field of view, or the area in the visual field where information can be attended and processed. In particular, the useful field of view allows for the discrimination of two competing theories of real-time multitasking, a general interference account and a tunneling account. Methods Twenty-five older adult subjects completed a useful field of view task that involved discriminating the orientation of lines in gaze-contingent Gabor patches appearing at varying eccentricities (based on distance from the fovea) as they operated a vehicle in a driving simulator. In half of the driving scenarios, subjects also completed an auditory two-back task to manipulate cognitive workload, and during some trials, wind was introduced as a means to alter general driving difficulty. Results Consistent with prior work, indices of driving performance were sensitive to both wind and workload. Interestingly, we also observed a decline in Gabor patch discrimination accuracy under high cognitive workload regardless of eccentricity, which provides support for a general interference account of multitasking. Conclusion The results showed that our gaze-contingent useful field of view paradigm was able to successfully examine older adult multitasking performance in a simulated driving environment. Application This study represents the first attempt to successfully measure dynamic changes in the useful field of view for older adults completing a multitasking scenario involving driving.

  3. Parent Perception of Two Eye-Gaze Control Technology Systems in Young Children with Cerebral Palsy: Pilot Study.

    Science.gov (United States)

    Karlsson, Petra; Wallen, Margaret

    2017-01-01

    Eye-gaze control technology enables people with significant physical disability to access computers for communication, play, learning and environmental control. This pilot study used a multiple case study design with repeated baseline assessment and parents' evaluations to compare two eye-gaze control technology systems to identify any differences in factors such as ease of use and impact of the systems for their young children. Five children, aged 3 to 5 years, with dyskinetic cerebral palsy, and their families participated. Overall, families were satisfied with both the Tobii PCEye Go and myGaze® eye tracker, found them easy to position and use, and children learned to operate them quickly. This technology provides young children with important opportunities for learning, play, leisure, and developing communication.

  4. Nostalgia for a Childhood Without: Implications of the Adult Gaze on Childhood and Young Adult Sexuality

    OpenAIRE

    Lareau, Kristina

    2012-01-01

    This paper examines the adult gaze on children’s literature through the lens of Eric Tribunella’s article “From Kiddie Lit to Kiddie Porn” (2008) which explores the implications of child sexuality through an examination of Chris Kent’s parodies of The Coral Island by R. M. Ballantyne and Tom Brown’s Schooldays by Thomas Hughes. Introducing Kincaid’s term ‘child-loving,’ I explore the implications of the types of ‘child-loving’ as they are examined in children’s and young adult literature. Thi...

  5. Gaze Estimation Method Using Analysis of Electrooculogram Signals and Kinect Sensor

    Directory of Open Access Journals (Sweden)

    Keiko Sakurai

    2017-01-01

    Full Text Available A gaze estimation system is one of the communication methods for severely disabled people who cannot perform gestures and speech. We previously developed an eye tracking method using a compact and light electrooculogram (EOG) signal, but its accuracy is not very high. In the present study, we conducted experiments to investigate the EOG component strongly correlated with the change of eye movements. The experiments in this study are of two types: experiments to see objects only by eye movements and experiments to see objects by face and eye movements. The experimental results show the possibility of an eye tracking method using EOG signals and a Kinect sensor.

  6. RETURNING THE GAZE: CULTURE AND THE POLITICS OF SURVEILLANCE IN IRELAND

    Directory of Open Access Journals (Sweden)

    Spurgeon Thompson

    2002-12-01

    Full Text Available This essay seeks to examine the modalities of colonial state surveillance as well as several ways in which they have been problematised in recent Irish literary writing, film, painting, photography and practice. Works by Ciaran Carson, Willie Doherty, Dave Fox, Terry George and Jim Sheridan, and Dermot Seymour are all therefore examined with the thematic of "returning the gaze" in mind. Further, this essay seeks to advance contemporary theories of surveillance away from an information-based or textual model to one which considers the spatial violence of surveillance and the subject positions it delimits, particularly in the context of colonialism and postcolonial theory.

  7. Predicting Moves-on-Stills for Comic Art Using Viewer Gaze Data.

    Science.gov (United States)

    Jain, Eakta; Sheikh, Yaser; Hodgins, Jessica

    2016-01-01

    Comic art consists of a sequence of panels of different shapes and sizes that visually communicate the narrative to the reader. The move-on-stills technique allows such still images to be retargeted for digital displays via camera moves. Today, moves-on-stills can be created by software applications given user-provided parameters for each desired camera move. The proposed algorithm uses viewer gaze as input to computationally predict camera move parameters. The authors demonstrate their algorithm on various comic book panels and evaluate its performance by comparing their results with a professional DVD.

  8. Contextual analysis of human non-verbal guide behaviors to inform the development of FROG, the Fun Robotic Outdoor Guide

    NARCIS (Netherlands)

    Karreman, Daphne Eleonora; van Dijk, Elisabeth M.A.G.; Evers, Vanessa

    2012-01-01

    This paper reports the first step in a series of studies to design the interaction behaviors of an outdoor robotic guide. We describe and report the use case development carried out to identify effective human tour guide behaviors. In this paper we focus on non-verbal communication cues in gaze,

  9. Impairment of holistic face perception following right occipito-temporal damage in prosopagnosia: converging evidence from gaze-contingency.

    Science.gov (United States)

    Van Belle, Goedele; Busigny, Thomas; Lefèvre, Philippe; Joubert, Sven; Felician, Olivier; Gentile, Francesco; Rossion, Bruno

    2011-09-01

    Gaze-contingency is a method traditionally used to investigate the perceptual span in reading by selectively revealing/masking a portion of the visual field in real time. Introducing this approach in face perception research showed that the performance pattern of a brain-damaged patient with acquired prosopagnosia (PS) in a face matching task was reversed, as compared to normal observers: the patient showed almost no further decrease of performance when only one facial part (eye, mouth, nose, etc.) was available at a time (foveal window condition, forcing part-based analysis), but a very large impairment when the fixated part was selectively masked (mask condition, promoting holistic perception) (Van Belle, De Graef, Verfaillie, Busigny, & Rossion, 2010a; Van Belle, De Graef, Verfaillie, Rossion, & Lefèvre, 2010b). Here we tested the same manipulation in a recently reported case of pure prosopagnosia (GG) with unilateral right hemisphere damage (Busigny, Joubert, Felician, Ceccaldi, & Rossion, 2010). Contrary to normal observers, GG was also significantly more impaired with a mask than with a window, demonstrating impairment with holistic face perception. Together with our previous study, these observations support a generalized account of acquired prosopagnosia as a critical impairment of holistic (individual) face perception, implying that this function is a key element of normal human face recognition. Furthermore, the similar behavioral pattern of the two patients despite different lesion localizations supports a distributed network view of the neural face processing structures, suggesting that the key function of human face processing, namely holistic perception of individual faces, requires the activity of several brain areas of the right hemisphere and their mutual connectivity. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Is improved lane keeping during cognitive load caused by increased physical arousal or gaze concentration toward the road center?

    Science.gov (United States)

    Li, Penghui; Markkula, Gustav; Li, Yibing; Merat, Natasha

    2018-08-01

    Driver distraction is one of the main causes of motor-vehicle accidents. However, the impact on traffic safety of tasks that impose cognitive (non-visual) distraction remains debated. One particularly intriguing finding is that cognitive load seems to improve lane keeping performance, most often quantified as reduced standard deviation of lateral position (SDLP). The main competing hypotheses, supported by current empirical evidence, suggest that cognitive load improves lane keeping via either increased physical arousal, or higher gaze concentration toward the road center, but views are mixed regarding if, and how, these possible mediators influence lane keeping performance. Hence, a simulator study was conducted, with participants driving on a straight city road section whilst completing a cognitive task at different levels of difficulty. In line with previous studies, cognitive load led to increased physical arousal, higher gaze concentration toward the road center, and higher levels of micro-steering activity, accompanied by improved lane keeping performance. More importantly, during the high cognitive task, both physical arousal and gaze concentration changed earlier in time than micro-steering activity, which in turn changed earlier than lane keeping performance. In addition, our results did not show a significant correlation between gaze concentration and physical arousal on the level of individual task recordings. Based on these findings, various multilevel models for micro-steering activity and lane keeping performance were conducted and compared, and the results suggest that all of the mechanisms proposed by existing hypotheses could be simultaneously involved. 
In other words, it is suggested that cognitive load leads to: (i) an increase in arousal, causing increased micro-steering activity, which in turn improves lane keeping performance, and (ii) an increase in gaze concentration, causing lane keeping improvement through both (a) further increased micro

  11. "I would like to get close to you": Making robot personal space invasion less intrusive with a social gaze cue

    DEFF Research Database (Denmark)

    Suvei, Stefan-Daniel; Vroon, Jered; Somoza Sanchez, Vella Veronica

    2018-01-01

    How can a social robot get physically close to the people it needs to interact with? We investigated the effect of a social gaze cue by a human-sized mobile robot on the effects of personal space invasion by that robot. In our 2x2 between-subject experiment, our robot would approach our participants (n=83), with/without personal space invasion, and with/without a social gaze cue. With a questionnaire, we measured subjective perception of warmth, competence, and comfort after such an interaction. In addition, we used on-board sensors and a tracking system to measure the dynamics of social…

  12. Does the Vigilance-Avoidance Gazing Behavior of Children with Separation Anxiety Disorder Change after Cognitive-Behavioral Therapy?

    Science.gov (United States)

    In-Albon, Tina; Schneider, Silvia

    2012-01-01

    Cognitive biases are of interest in understanding the development of anxiety disorders. They also play a significant role during psychotherapy, where cognitive biases are modified in order to break the vicious cycle responsible for maintaining anxiety disorders. In a previous study, the vigilance-avoidance pattern was shown in children with…

  13. Limitations of gaze transfer: without visual context, eye movements do not help to coordinate joint action, whereas mouse movements do.

    Science.gov (United States)

    Müller, Romy; Helmert, Jens R; Pannasch, Sebastian

    2014-10-01

    Remote cooperation can be improved by transferring the gaze of one participant to the other. However, based on a partner's gaze, an interpretation of his communicative intention can be difficult. Thus, gaze transfer has been inferior to mouse transfer in remote spatial referencing tasks where locations had to be pointed out explicitly. Given that eye movements serve as an indicator of visual attention, it remains to be investigated whether gaze and mouse transfer differentially affect the coordination of joint action when the situation demands an understanding of the partner's search strategies. In the present study, a gaze or mouse cursor was transferred from a searcher to an assistant in a hierarchical decision task. The assistant could use this cursor to guide his movement of a window which continuously opened up the display parts the searcher needed to find the right solution. In this context, we investigated how the ease of using gaze transfer depended on whether a link could be established between the partner's eye movements and the objects he was looking at. Therefore, in addition to the searcher's cursor, the assistant either saw the positions of these objects or only a grey background. When the objects were visible, performance and the number of spoken words were similar for gaze and mouse transfer. However, without them, gaze transfer resulted in longer solution times and more verbal effort as participants relied more strongly on speech to coordinate the window movement. Moreover, an analysis of the spatio-temporal coupling of the transmitted cursor and the window indicated that when no visual object information was available, assistants confidently followed the searcher's mouse but not his gaze cursor. Once again, the results highlight the importance of carefully considering task characteristics when applying gaze transfer in remote cooperation. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception.

    Science.gov (United States)

    Hisanaga, Satoko; Sekiyama, Kaoru; Igasaki, Tomohiko; Murayama, Nobuki

    2016-10-13

    Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs' response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs' early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception.

  15. The abject gaze and the homosexual body: Flandrin's Figure d'Etude.

    Science.gov (United States)

    Camille, M

    1994-01-01

    This article charts the history of the reception, reproduction and appropriation of a single image that has recently become a kind of "gay icon"--the Figure d'Etude in the Louvre, painted by Hippolyte Flandrin in 1835. Initially no more than a neo-classical academic exercise, the formal emptiness of this picture meant that it could be re-invested and reinscribed with new meanings and new titles at every turn. Emblematic of the anxious visibility/invisibility of the newly discovered homosexual body during a period when the gaze still had to be kept a dark secret, Flandrin's image only "came out" in its later photographic reworkings by Frederick Holland Day and Baron von Gloeden. After being reproduced for a specifically homosexual audience early this century, the popular Romantic pose of the young man curled-up in profile became a standard one, reappearing recently in the photographs of Robert Mapplethorpe. The inactive, abject and inward-turned isolation of the figure with its narcissistic self-absorption makes it, in my view, a profoundly negative stereotype of the gay gaze and the homosexual body. Flandrin's figure nonetheless appears today on gay merchandise world-wide as a sign of our separate and secluded subject positions and our community's unwillingness to radically alter older imposed and inherited classical stereotypes.

  16. Eyes that bind us: Gaze leading induces an implicit sense of agency.

    Science.gov (United States)

    Stephenson, Lisa J; Edwards, S Gareth; Howard, Emma E; Bayliss, Andrew P

    2018-03-01

    Humans feel a sense of agency over the effects their motor system causes. This is the case for manual actions such as pushing buttons, kicking footballs, and all acts that affect the physical environment. We ask whether initiating joint attention - causing another person to follow our eye movement - can elicit an implicit sense of agency over this congruent gaze response. Eye movements themselves cannot directly affect the physical environment, but joint attention is an example of how eye movements can indirectly cause social outcomes. Here we show that leading the gaze of an on-screen face induces an underestimation of the temporal gap between action and consequence (Experiments 1 and 2). This underestimation effect, named 'temporal binding,' is thought to be a measure of an implicit sense of agency. Experiment 3 asked whether merely making an eye movement in a non-agentic, non-social context might also affect temporal estimation, and no reliable effects were detected, implying that inconsequential oculomotor acts do not reliably affect temporal estimations under these conditions. Together, these findings suggest that an implicit sense of agency is generated when initiating joint attention interactions. This is important for understanding how humans can efficiently detect and understand the social consequences of their actions. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Gaze-independent ERP-BCIs: augmenting performance through location-congruent bimodal stimuli

    Science.gov (United States)

    Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Werkhoven, Peter

    2014-01-01

    Gaze-independent event-related potential (ERP) based brain-computer interfaces (BCIs) yield relatively low BCI performance and traditionally employ unimodal stimuli. Bimodal ERP-BCIs may increase BCI performance due to multisensory integration or summation in the brain. An additional advantage of bimodal BCIs may be that the user can choose which modality or modalities to attend to. We studied bimodal, visual-tactile, gaze-independent BCIs and investigated whether or not ERP components’ tAUCs and subsequent classification accuracies are increased for (1) bimodal vs. unimodal stimuli; (2) location-congruent vs. location-incongruent bimodal stimuli; and (3) attending to both modalities vs. to either one modality. We observed an enhanced bimodal (compared to unimodal) P300 tAUC, which appeared to be positively affected by location-congruency (p = 0.056) and resulted in higher classification accuracies. Attending either to one or to both modalities of the bimodal location-congruent stimuli resulted in differences between ERP components, but not in classification performance. We conclude that location-congruent bimodal stimuli improve ERP-BCIs, and offer the user the possibility to switch the attended modality without losing performance. PMID:25249947

  18. The Metamorphosis of Polyphemus's Gaze in Marij Pregelj's Painting (1913-1967)

    Directory of Open Access Journals (Sweden)

    Jure Mikuž

    2015-04-01

    Full Text Available In 1949-1951 Marij Pregelj, one of the most interesting Slovenian modernist painters, illustrated his version of Homer's Iliad and Odyssey. His illustrations, presented at the time of socialist realist aesthetics, announce a reintegration of Slovenian art into the global (Western) context. Among the illustrations is the figure of the Cyclops devouring Odysseus' comrades. The image of the one-eyed giant Polyphemus is one that concerned Pregelj all his life: the painter, whose vocation depends most on the gaze, can show only one eye in profile. And the profiles of others' faces and of his own face interested Pregelj his whole life through. Not only people but also objects were one-eyed: the rosette of a cathedral, which changes into a human figure, a washing machine door, a meat grinder's orifice, a blind “windeye” or window, and so on. The themes of his final two paintings, which he executed but did not complete more than a year before his senseless death at the age of 54, are Polyphemus and the Portrait of His Son Vasko. In the first, blood flows from the pricked-out eye towards a stylized camera; in the second, the gaze of the son, an enthusiastic filmmaker, extends to the camera that will displace the father's brush.

  19. Beggars, Black Bears, and Butterflies: The Scientific Gaze and Ink Painting in Modern China

    Directory of Open Access Journals (Sweden)

    Lisa Claypool

    2015-03-01

    Full Text Available The ink brushes of the painters Chen Shizeng (1876–1923), Liu Kuiling (1885–1967), and Gao Jianfu (1879–1951) were employed as tools of the nation in early twentieth-century China. Yet the expression of a radical idealism about the new republic in their ink paintings was tempered early on by a tentative and self-conscious exploration of new ways of seeing. By synthesizing a “universal” scientific gaze with their idiosyncratically trained vision as artists, they created pictures that encouraged their viewers to cross the boundaries and binaries that would come to define the discourse about guohua, or “national painting”: East versus West, oil versus ink, modernity versus tradition, painting versus graphic arts, and elite versus folk. This article explores that extended moment of synthesis and experimentation. It argues that it was through the scientific gaze of these brush-and-ink artists that idealism and learning came to cooperate, and through their paintings that possibilities for new ways of seeing the nation emerged.

  20. A Gaze-Driven Evolutionary Algorithm to Study Aesthetic Evaluation of Visual Symmetry

    Directory of Open Access Journals (Sweden)

    Alexis D. J. Makin

    2016-03-01

    Full Text Available Empirical work has shown that people like visual symmetry. We used a gaze-driven evolutionary algorithm technique to answer three questions about symmetry preference. First, do people automatically evaluate symmetry without explicit instruction? Second, is perfect symmetry the best stimulus, or do people prefer a degree of imperfection? Third, does initial preference for symmetry diminish after familiarity sets in? Stimuli were generated as phenotypes from an algorithmic genotype, with genes for symmetry (coded as deviation from a symmetrical template; deviation–symmetry, DS gene) and orientation (0° to 90°; orientation, ORI gene). An eye tracker identified phenotypes that were good at attracting and retaining the gaze of the observer. Resulting fitness scores determined the genotypes that passed to the next generation. We recorded changes to the distribution of DS and ORI genes over 20 generations. When participants looked for symmetry, there was an increase in high-symmetry genes. When participants looked for the patterns they preferred, there was a smaller increase in symmetry, indicating that people tolerated some imperfection. Conversely, there was no increase in symmetry during free viewing, and no effect of familiarity or orientation. This work demonstrates the viability of the evolutionary algorithm approach as a quantitative measure of aesthetic preference.
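    The genotype-selection loop described in this abstract can be sketched as follows. The fitness function here is a stand-in: in the study, fitness came from eye-tracking dwell times, which this sketch simulates (an assumption) by rewarding low deviation-from-symmetry, so that the DS gene distribution drifts toward symmetry over generations.

```python
import numpy as np

rng = np.random.default_rng(1)
POP, GENERATIONS = 30, 20

# Genotype: DS gene (deviation from a symmetrical template, 0 = perfect
# symmetry) and ORI gene (orientation, 0-90 degrees).
pop = np.column_stack([rng.uniform(0, 1, POP), rng.uniform(0, 90, POP)])
initial_mean_ds = pop[:, 0].mean()

def simulated_dwell(genes):
    # Stand-in for gaze-derived fitness: symmetric phenotypes hold gaze longer.
    return 1.0 - genes[:, 0] + rng.normal(0, 0.05, len(genes))

for _ in range(GENERATIONS):
    fit = simulated_dwell(pop)
    parents = pop[np.argsort(fit)[-POP // 2:]]  # fittest half survives
    children = parents + rng.normal(0, [0.05, 3.0], parents.shape)  # mutate
    children[:, 0] = np.clip(children[:, 0], 0.0, 1.0)
    children[:, 1] = np.clip(children[:, 1], 0.0, 90.0)
    pop = np.vstack([parents, children])

final_mean_ds = pop[:, 0].mean()
print(f"mean DS gene: {initial_mean_ds:.2f} -> {final_mean_ds:.2f}")
```

    With a gaze-neutral fitness (free viewing), the same loop would leave the DS distribution essentially unchanged, which is the study's control comparison.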

  1. Love is in the gaze: an eye-tracking study of love and sexual desire.

    Science.gov (United States)

    Bolmont, Mylene; Cacioppo, John T; Cacioppo, Stephanie

    2014-09-01

    Reading other people's eyes is a valuable skill during interpersonal interaction. Although a number of studies have investigated visual patterns in relation to the perceiver's interest, intentions, and goals, little is known about eye gaze when it comes to differentiating intentions to love from intentions to lust (sexual desire). To address this question, we conducted two experiments: one testing whether the visual pattern related to the perception of love differs from that related to lust and one testing whether the visual pattern related to the expression of love differs from that related to lust. Our results show that a person's eye gaze shifts as a function of his or her goal (love vs. lust) when looking at a visual stimulus. Such identification of distinct visual patterns for love and lust could have theoretical and clinical importance in couples therapy when these two phenomena are difficult to disentangle from one another on the basis of patients' self-reports. © The Author(s) 2014.

  2. Eye-Gaze Analysis of Facial Emotion Recognition and Expression in Adolescents with ASD.

    Science.gov (United States)

    Wieckowski, Andrea Trubanova; White, Susan W

    2017-01-01

    Impaired emotion recognition and expression in individuals with autism spectrum disorder (ASD) may contribute to observed social impairment. The aim of this study was to examine the role of visual attention directed toward nonsocial aspects of a scene as a possible mechanism underlying recognition and expressive ability deficiency in ASD. One recognition and two expression tasks were administered. Recognition was assessed in a forced-choice paradigm, and expression was assessed during scripted and free-choice response (in response to emotional stimuli) tasks in youth with ASD (n = 20) and an age-matched sample of typically developing youth (n = 20). During stimulus presentation prior to response in each task, participants' eye gaze was tracked. Youth with ASD were less accurate at identifying disgust and sadness in the recognition task. They fixated less on the eye region of stimuli showing surprise. A group difference was found during the free-choice response task, such that those with ASD expressed emotion less clearly; no group difference was found during the scripted task. Results suggest altered eye gaze to the mouth region but not the eye region as a candidate mechanism for decreased ability to recognize or express emotion. Findings inform our understanding of the association between social attention and emotion recognition and expression deficits.

  3. Visual Perception during Mirror-Gazing at One’s Own Face in Patients with Depression

    Directory of Open Access Journals (Sweden)

    Giovanni B. Caputo

    2014-01-01

    Full Text Available In normal observers, gazing at one’s own face in the mirror for a few minutes, at a low illumination level, produces the apparition of strange faces. Observers see distortions of their own faces, but they often see hallucinations like monsters, archetypical faces, faces of relatives and deceased, and animals. In this research, patients with depression were compared to healthy controls with respect to strange-face apparitions. The experiment was a 7-minute mirror-gazing test (MGT under low illumination. When the MGT ended, the experimenter assessed patients and controls with a specifically designed questionnaire and interviewed them, asking them to describe strange-face apparitions. Apparitions of strange faces in the mirror were very reduced in depression patients compared to healthy controls. Depression patients compared to healthy controls showed a shorter duration of apparitions; a smaller number of strange faces; a lower self-evaluation rating of apparition strength; and a lower self-evaluation rating of provoked emotion. These decreases in depression may be produced by deficits of facial expression and facial recognition of emotions, which are involved in the relationship between the patient (or the patient’s ego) and his face image (or the patient’s bodily self) that is reflected in the mirror.

  4. Patriarchal Regime of the Spectacle: Racial and Gendered Gaze in Jhumpa Lahiri’s Fiction

    Directory of Open Access Journals (Sweden)

    Moussa Pourya Asl

    2017-01-01

    Full Text Available This article attempts to evince the political, cultural and affective consequences of Jhumpa Lahiri’s diasporic writings and their particular enunciations of the literary gaze. To do so, it details the manner in which the stories’ exercise of visual operations rigidly corresponds with those of the Panopticon. The essay argues that Lahiri’s narrative produces a kind of panoptic machine that underpins the ‘modes of social regulation and control’ that Foucault has explained as disciplinary technologies. By situating Lahiri’s stories, “A Real Durwan” and “Only Goodness,” within a historical-political context, this essay aims at identifying the way in which panopticism defines her fiction as both a record of and a participant in the social, sexual and political ‘paranoia’ behind the propaganda of America’s self-image as the land of freedom. We maintain that Lahiri’s fiction situates itself in complex relation to the postcolonial concerns of the late twentieth century, suggesting that through their fascination with a visual literalization of the panoptic machine, and by privileging the masculine gaze, the stories legitimate the perpetuation of socially prescribed notion of sexual difference.

  5. The EyeHarp: A Gaze-Controlled Digital Musical Instrument.

    Science.gov (United States)

    Vamvakousis, Zacharias; Ramirez, Rafael

    2016-01-01

    We present and evaluate the EyeHarp, a new gaze-controlled Digital Musical Instrument, which aims to enable people with severe motor disabilities to learn, perform, and compose music using only their gaze as the control mechanism. It consists of (1) a step-sequencer layer, which serves for constructing chords/arpeggios, and (2) a melody layer, for playing melodies and changing the chords/arpeggios. We have conducted a pilot evaluation of the EyeHarp involving 39 participants with no disabilities from both a performer and an audience perspective. In the first case, eight people with normal vision and no motor disability participated in a music-playing session in which both quantitative and qualitative data were collected. In the second case, 31 people qualitatively evaluated the EyeHarp in a concert setting consisting of two parts: a solo performance part and an ensemble (EyeHarp, two guitars, and flute) performance part. The obtained results indicate that, similarly to traditional music instruments, the proposed digital musical instrument has a steep learning curve, and allows the production of performances that are perceived as expressive from both the performer and audience perspectives.

  6. Visual perception during mirror-gazing at one's own face in patients with depression.

    Science.gov (United States)

    Caputo, Giovanni B; Bortolomasi, Marco; Ferrucci, Roberta; Giacopuzzi, Mario; Priori, Alberto; Zago, Stefano

    2014-01-01

    In normal observers, gazing at one's own face in the mirror for a few minutes, at a low illumination level, produces the apparition of strange faces. Observers see distortions of their own faces, but they often see hallucinations like monsters, archetypical faces, faces of relatives and deceased, and animals. In this research, patients with depression were compared to healthy controls with respect to strange-face apparitions. The experiment was a 7-minute mirror-gazing test (MGT) under low illumination. When the MGT ended, the experimenter assessed patients and controls with a specifically designed questionnaire and interviewed them, asking them to describe strange-face apparitions. Apparitions of strange faces in the mirror were very reduced in depression patients compared to healthy controls. Depression patients compared to healthy controls showed a shorter duration of apparitions; a smaller number of strange faces; a lower self-evaluation rating of apparition strength; and a lower self-evaluation rating of provoked emotion. These decreases in depression may be produced by deficits of facial expression and facial recognition of emotions, which are involved in the relationship between the patient (or the patient's ego) and his face image (or the patient's bodily self) that is reflected in the mirror.

  7. Evaluating gaze-driven power wheelchair with navigation support for persons with disabilities.

    Science.gov (United States)

    Wästlund, Erik; Sponseller, Kay; Pettersson, Ola; Bared, Anders

    2015-01-01

    This article describes a novel add-on for powered wheelchairs that is composed of a gaze-driven control system and a navigation support system. The add-on was tested by three users. All of the users were individuals with severe disabilities and no possibility of moving independently. The system is an add-on to a standard power wheelchair and can be customized for different levels of support according to the cognitive level, motor control, perceptual skills, and specific needs of the user. The primary aim of this study was to test the functionality and safety of the system in the user's home environment. The secondary aim was to evaluate whether access to a gaze-driven powered wheelchair with navigation support is perceived as meaningful in terms of independence and participation. The results show that the system has the potential to provide safe, independent indoor mobility and that the users perceive doing so as fun, meaningful, and a way to reduce dependency on others. Independent mobility has numerous benefits in addition to psychological and emotional well-being. By observing users' actions, caregivers and healthcare professionals can assess the individual's capabilities, which was not previously possible. Rehabilitation can be better adapted to the individual's specific needs, and driving a wheelchair independently can be a valuable, motivating training tool.

  8. A gaze-contingent display to study contrast sensitivity under natural viewing conditions

    Science.gov (United States)

    Dorr, Michael; Bex, Peter J.

    2011-03-01

    Contrast sensitivity has been extensively studied over the last decades and there are well-established models of early vision that were derived by presenting the visual system with synthetic stimuli such as sine-wave gratings near threshold contrasts. Natural scenes, however, contain a much wider distribution of orientations, spatial frequencies, and both luminance and contrast values. Furthermore, humans typically move their eyes two to three times per second under natural viewing conditions, but most laboratory experiments require subjects to maintain central fixation. Here we describe a gaze-contingent display capable of performing real-time contrast modulations of video in retinal coordinates, thus allowing us to study contrast sensitivity when dynamically viewing dynamic scenes. Our system is based on a Laplacian pyramid for each frame that efficiently represents individual frequency bands. Each output pixel is then computed as a locally weighted sum of pyramid levels to introduce local contrast changes as a function of gaze. Our GPU implementation achieves real-time performance with more than 100 fps on high-resolution video (1920 by 1080 pixels) and a synthesis latency of only 1.5 ms. Psychophysical data show that contrast sensitivity is greatly decreased in natural videos and under dynamic viewing conditions. Synthetic stimuli therefore only poorly characterize natural vision.
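    The core of such a system, band-pass decomposition followed by gaze-weighted recombination, can be sketched as follows. This is a simplified, assumed illustration in NumPy rather than the authors' GPU pyramid: it uses a same-resolution band-pass stack instead of a subsampled pyramid, and the names `modulate_contrast`, the binary gain map, and the `radius` parameter are invented for the sketch.

```python
import numpy as np

def blur(img):
    """3x3 binomial blur with edge padding (stand-in for Gaussian filtering)."""
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, :-2] + 2*p[:-2, 1:-1] + p[:-2, 2:]
            + 2*p[1:-1, :-2] + 4*p[1:-1, 1:-1] + 2*p[1:-1, 2:]
            + p[2:, :-2] + 2*p[2:, 1:-1] + p[2:, 2:]) / 16.0

def laplacian_stack(img, levels=4):
    """Same-resolution band-pass stack: bands plus the residual sum back to img."""
    gauss = [img]
    for _ in range(levels):
        gauss.append(blur(gauss[-1]))
    bands = [g0 - g1 for g0, g1 in zip(gauss, gauss[1:])]
    return bands, gauss[-1]          # band-pass levels + low-pass residual

def modulate_contrast(img, gaze_xy, radius=32.0, outer_gain=0.3, levels=4):
    """Attenuate band energy (local contrast) outside a window around gaze."""
    bands, residual = laplacian_stack(img, levels)
    ys, xs = np.indices(img.shape)
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    gain = np.where(dist < radius, 1.0, outer_gain)  # per-pixel contrast gain
    return residual + sum(gain * b for b in bands)

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
out = modulate_contrast(frame, gaze_xy=(32, 32))
# With outer_gain=1.0 the stack reconstructs the frame exactly.
identity = modulate_contrast(frame, gaze_xy=(32, 32), outer_gain=1.0)
```

    The perfect-reconstruction property (unit gains return the input frame) is what lets such a display modulate contrast only where intended; the real-time system additionally smooths the gain map and recomputes it per frame from the tracked gaze position.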

  9. Postural control and head stability during natural gaze behaviour in 6- to 12-year-old children.

    Science.gov (United States)

    Schärli, A M; van de Langenberg, R; Murer, K; Müller, R M

    2013-06-01

    We investigated how the influence of natural exploratory gaze behaviour on postural control develops from childhood into adulthood. In a cross-sectional design, we compared four age groups: 6-, 9-, and 12-year-olds and young adults. Two experimental trials were performed: quiet stance with a fixed gaze (fixed) and quiet stance with natural exploratory gaze behaviour (exploratory). The latter was elicited by having participants watch an animated short film on a large screen in front of them. 3D head rotations in space and centre of pressure (COP) excursions on the ground plane were measured. Across conditions, both head rotation and COP displacement decreased with increasing age. Head movement was greater in the exploratory condition in all age groups. In all children, but not in adults, COP displacement was markedly greater in the exploratory condition. Bivariate correlations across groups showed highly significant positive correlations between COP displacement in the ML direction and head rotation in yaw, roll, and pitch in both conditions. The regularity of COP displacements did not show a clear developmental trend, which indicates that COP dynamics were qualitatively similar across age groups. Together, the results suggest that the contribution of head movement to eye-head saccades decreases with age and that head instability, in part resulting from such gaze-related head movements, is an important limiting factor in children's postural control. The lack of head stabilisation might particularly affect children in everyday activities in which both postural control and visual exploration are required.

  10. Multisensory teamwork: using a tactile or an auditory display to exchange gaze information improves performance in joint visual search.

    Science.gov (United States)

    Wahn, Basil; Schwandt, Jessika; Krüger, Matti; Crafa, Daina; Nunnendorf, Vanessa; König, Peter

    2016-06-01

    In joint tasks, adjusting to the actions of others is critical for success. For joint visual search tasks, research has shown that when search partners visually receive information about each other's gaze, they use this information to adjust to each other's actions, resulting in faster search performance. The present study used a visual, a tactile and an auditory display, respectively, to provide search partners with information about each other's gaze. Results showed that search partners performed faster when the gaze information was received via a tactile or auditory display in comparison to receiving it via a visual display or receiving no gaze information. Findings demonstrate the effectiveness of tactile and auditory displays for receiving task-relevant information in joint tasks and are applicable to circumstances in which little or no visual information is available or the visual modality is already taxed with a demanding task such as air-traffic control. Practitioner Summary: The present study demonstrates that tactile and auditory displays are effective for receiving information about actions of others in joint tasks. Findings are either applicable to circumstances in which little or no visual information is available or when the visual modality is already taxed with a demanding task.

  11. Why Do We Move Our Eyes while Trying to Remember? The Relationship between Non-Visual Gaze Patterns and Memory

    Science.gov (United States)

    Micic, Dragana; Ehrlichman, Howard; Chen, Rebecca

    2010-01-01

    Non-visual gaze patterns (NVGPs) involve saccades and fixations that spontaneously occur in cognitive activities that are not ostensibly visual. While reasons for their appearance remain obscure, convergent empirical evidence suggests that NVGPs change according to processing requirements of tasks. We examined NVGPs in tasks with long-term memory…

  12. Attention and Social Cognition in Virtual Reality : The effect of engagement mode and character eye-gaze

    NARCIS (Netherlands)

    Rooney, Brendan; Bálint, Katalin; Parsons, Thomas; Burke, Colin; O'Leary, T; Lee, C.T.; Mantei, C.

    2017-01-01

    Technical developments in virtual humans are manifest in modern character design. Specifically, eye gaze offers a significant aspect of such design. There is need to consider the contribution of participant control of engagement. In the current study, we manipulated participants’ engagement with an…

  13. Does social presence or the potential for interaction reduce social gaze in online social scenarios? Introducing the "live lab" paradigm.

    Science.gov (United States)

    Gregory, Nicola J; Antolin, Jastine V

    2018-05-01

    Research has shown that people's gaze is biased away from faces in the real world but towards them when they are viewed onscreen. Non-equivalent stimulus conditions may have represented a confound in this research, however, as participants viewed onscreen stimuli as pre-recordings where interaction was not possible, compared with real-world stimuli which were viewed in real time where interaction was possible. We assessed the independent contributions of online social presence and ability for interaction on social gaze by developing the "live lab" paradigm. Participants in three groups (N = 132) viewed a confederate as (1) a live webcam stream where interaction was not possible (one-way), (2) a live webcam stream where an interaction was possible (two-way), or (3) a pre-recording. Potential for interaction, rather than online social presence, was the primary influence on gaze behaviour: participants in the pre-recorded and one-way conditions looked more to the face than those in the two-way condition, particularly when the confederate made "eye contact." Fixation durations to the face were shorter when the scene was viewed live, particularly during a bid for eye contact. Our findings support the dual function of gaze but suggest that online social presence alone is not sufficient to activate social norms of civil inattention. Implications for the reinterpretation of previous research are discussed.

  14. Controlling Attention to Gaze and Arrows in Childhood: An fMRI Study of Typical Development and Autism Spectrum Disorders

    Science.gov (United States)

    Vaidya, Chandan J.; Foss-Feig, Jennifer; Shook, Devon; Kaplan, Lauren; Kenworthy, Lauren; Gaillard, William D.

    2011-01-01

    Functional magnetic resonance imaging was used to examine functional anatomy of attention to social (eye gaze) and nonsocial (arrow) communicative stimuli in late childhood and in a disorder defined by atypical processing of social stimuli, Autism Spectrum Disorders (ASD). Children responded to a target word ("LEFT"/"RIGHT") in the context of a…

  15. Eye Gaze During Face Processing in Children and Adolescents with 22q11.2 Deletion Syndrome

    Science.gov (United States)

    Glaser, Bronwyn; Debbane, Martin; Ottet, Marie-Christine; Vuilleumier, Patrik; Zesiger, Pascal; Antonarakis, Stylianos E.; Eliez, Stephan

    2010-01-01

    Objective: The 22q11.2 deletion syndrome (22q11DS) is a neurogenetic syndrome with high risk for the development of psychiatric disorder. There is interest in identifying reliable markers for measuring and monitoring socio-emotional impairments in 22q11DS during development. The current study investigated eye gaze as a potential marker during a…

  16. An Exploration of the Use of Eye-Gaze Tracking to Study Problem-Solving on Standardized Science Assessments

    Science.gov (United States)

    Tai, Robert H.; Loehr, John F.; Brigham, Frederick J.

    2006-01-01

    This pilot study investigated the capacity of eye-gaze tracking to identify differences in problem-solving behaviours within a group of individuals who possessed varying degrees of knowledge and expertise in three disciplines of science (biology, chemistry and physics). The six participants, all pre-service science teachers, completed an 18-item…

  17. Can Gaze Avoidance Explain Why Individuals with Asperger's Syndrome Can't Recognise Emotions from Facial Expressions?

    Science.gov (United States)

    Sawyer, Alyssa C. P.; Williamson, Paul; Young, Robyn L.

    2012-01-01

    Research has shown that individuals with Autism Spectrum Disorders (ASD) have difficulties recognising emotions from facial expressions. Since eye contact is important for accurate emotion recognition, and individuals with ASD tend to avoid eye contact, this tendency for gaze aversion has been proposed as an explanation for the emotion recognition…

  18. I Reach Faster When I See You Look: Gaze Effects in Human–Human and Human–Robot Face-to-Face Cooperation

    Science.gov (United States)

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human–human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human–human cooperation experiment demonstrating that an agent’s vision of her/his partner’s gaze can significantly improve that agent’s performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human–robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human–robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times. PMID:22563315

  19. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.

    Science.gov (United States)

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.

  20. Exploring combinations of different color and facial expression stimuli for gaze-independent BCIs

    Directory of Open Access Journals (Sweden)

    Long Chen

    2016-01-01

    Full Text Available Background: Some studies have proven that a conventional visual brain computer interface (BCI) based on overt attention cannot be used effectively when eye movement control is not possible. To solve this problem, a novel visual-based BCI system based on covert attention and feature attention has been proposed, called the gaze-independent BCI. Color and shape differences between stimuli and backgrounds have generally been used in examples of gaze-independent BCIs. Recently, a new paradigm based on facial expression change was presented and obtained high performance. However, some facial expressions were so similar that users couldn’t tell them apart, especially when they were presented at the same position in a rapid serial visual presentation (RSVP) paradigm. Consequently, the performance of BCIs is reduced. New Method: In this paper, we combined facial expressions and colors to optimize the stimulus presentation in the gaze-independent BCI. This optimized paradigm was called the colored dummy face pattern. It is suggested that different colors and facial expressions could help subjects to locate the target and evoke larger event-related potentials (ERPs). In order to evaluate the performance of this new paradigm, two other paradigms were presented, called the grey dummy face pattern and the colored ball pattern. Comparison with Existing Method(s): The key point that determined the value of the colored dummy face stimuli in BCI systems was whether they could obtain higher performance than grey face or colored ball stimuli. Ten healthy subjects (7 male, aged 21-26 years, mean 24.5±1.25) participated in our experiment. Online and offline results of four different paradigms were obtained and comparatively analyzed. Results: The results showed that the colored dummy face pattern could evoke higher P300 and N400 ERP amplitudes, compared with the grey dummy face pattern and the colored ball pattern. Online results showed…

  1. Media multitasking behavior: concurrent television and computer usage.

    Science.gov (United States)

    Brasel, S Adam; Gips, James

    2011-09-01

    Changes in the media landscape have made simultaneous usage of the computer and television increasingly commonplace, but little research has explored how individuals navigate this media multitasking environment. Prior work suggests that self-insight may be limited in media consumption and multitasking environments, reinforcing a rising need for direct observational research. A laboratory experiment recorded both younger and older individuals as they used a computer and television concurrently, multitasking across television and Internet content. Results show that individuals are attending primarily to the computer during media multitasking. Although gazes last longer on the computer when compared to the television, the overall distribution of gazes is strongly skewed toward very short gazes only a few seconds in duration. People switched between media at an extreme rate, averaging more than 4 switches per min and 120 switches over the 27.5-minute study exposure. Participants had little insight into their switching activity and recalled their switching behavior at an average of only 12 percent of their actual switching rate revealed in the objective data. Younger individuals switched more often than older individuals, but other individual differences such as stated multitasking preference and polychronicity had little effect on switching patterns or gaze duration. This overall pattern of results highlights the importance of exploring new media environments, such as the current drive toward media multitasking, and reinforces that self-monitoring, post hoc surveying, and lay theory may offer only limited insight into how individuals interact with media.

  2. Head movements evoked in alert rhesus monkey by vestibular prosthesis stimulation: implications for postural and gaze stabilization.

    Directory of Open Access Journals (Sweden)

    Diana E Mitchell

    Full Text Available The vestibular system detects motion of the head in space and in turn generates reflexes that are vital for our daily activities. The eye movements produced by the vestibulo-ocular reflex (VOR) play an essential role in stabilizing the visual axis (gaze), while vestibulo-spinal reflexes ensure the maintenance of head and body posture. The neuronal pathways from the vestibular periphery to the cervical spinal cord potentially serve a dual role, since they function to stabilize the head relative to inertial space and could thus contribute to gaze (eye-in-head + head-in-space) and posture stabilization. To date, however, the functional significance of vestibular-neck pathways in alert primates remains a matter of debate. Here we used a vestibular prosthesis to (1) quantify vestibularly-driven head movements in primates, and (2) assess whether these evoked head movements make a significant contribution to gaze as well as postural stabilization. We stimulated electrodes implanted in the horizontal semicircular canal of alert rhesus monkeys, and measured the head and eye movements evoked during a 100 ms time period for which the contribution of longer latency voluntary inputs to the neck would be minimal. Our results show that prosthetic stimulation evoked significant head movements with latencies consistent with known vestibulo-spinal pathways. Furthermore, while the evoked head movements were substantially smaller than the coincidently evoked eye movements, they made a significant contribution to gaze stabilization, complementing the VOR to ensure that the appropriate gaze response is achieved. We speculate that analogous compensatory head movements will be evoked when implanted prosthetic devices are transitioned to human patients.

  3. Poor Receptive Joint Attention Skills Are Associated with Atypical Grey Matter Asymmetry in the Posterior Superior Temporal Gyrus of Chimpanzees (Pan troglodytes)

    Directory of Open Access Journals (Sweden)

    William Hopkins

    2014-01-01

    Full Text Available Clinical and experimental data have implicated the posterior superior temporal gyrus as an important cortical region in the processing of socially relevant stimuli such as gaze following, eye direction, and head orientation. Gaze following and responding to different socio-communicative signals is an important and highly adaptive skill in primates, including humans. Here, we examined whether individual differences in responding to socio-communicative cues were associated with variation in either grey matter volume or asymmetry in a sample of chimpanzees. MRI scans and behavioral data on receptive joint attention (RJA) were obtained from a sample of 191 chimpanzees. We found that chimpanzees that performed poorly on the RJA task had more rightward asymmetries in the posterior but not anterior superior temporal gyrus. We further found that middle-aged and elderly chimpanzees performed more poorly on the RJA task and had significantly less grey matter than young-adult and sub-adult chimpanzees. The results are consistent with previous studies implicating the posterior temporal gyrus in the processing of socially relevant information.

  4. Poor receptive joint attention skills are associated with atypical gray matter asymmetry in the posterior superior temporal gyrus of chimpanzees (Pan troglodytes).

    Science.gov (United States)

    Hopkins, William D; Misiura, Maria; Reamer, Lisa A; Schaeffer, Jennifer A; Mareno, Mary C; Schapiro, Steven J

    2014-01-01

    Clinical and experimental data have implicated the posterior superior temporal gyrus as an important cortical region in the processing of socially relevant stimuli such as gaze following, eye direction, and head orientation. Gaze following and responding to different socio-communicative signals is an important and highly adaptive skill in primates, including humans. Here, we examined whether individual differences in responding to socio-communicative cues were associated with variation in either gray matter (GM) volume or asymmetry in a sample of chimpanzees. Magnetic resonance image scans and behavioral data on receptive joint attention (RJA) were obtained from a sample of 191 chimpanzees. We found that chimpanzees that performed poorly on the RJA task had less GM in the right compared to left hemisphere in the posterior but not anterior superior temporal gyrus. We further found that middle-aged and elderly chimpanzees performed more poorly on the RJA task and had significantly less GM than young-adult and sub-adult chimpanzees. The results are consistent with previous studies implicating the posterior temporal gyrus in the processing of socially relevant information.

  5. Sex-related differences in behavioral and amygdalar responses to compound facial threat cues.

    Science.gov (United States)

    Im, Hee Yeon; Adams, Reginald B; Cushing, Cody A; Boshyan, Jasmine; Ward, Noreen; Kveraga, Kestutis

    2018-03-08

    During face perception, we integrate facial expression and eye gaze to take advantage of their shared signals. For example, fear with averted gaze provides a congruent avoidance cue, signaling both threat presence and its location, whereas fear with direct gaze sends an incongruent cue, leaving threat location ambiguous. It has been proposed that the processing of different combinations of threat cues is mediated by dual processing routes: reflexive processing via the magnocellular (M) pathway and reflective processing via the parvocellular (P) pathway. Because growing evidence has identified a variety of sex differences in emotional perception, here we also investigated how M and P processing of fear and eye gaze might be modulated by the observer's sex, focusing on the amygdala, a structure important to threat perception and affective appraisal. We adjusted the luminance and color of face stimuli to selectively engage M or P processing and asked observers to identify the emotion of the face. Female observers showed more accurate behavioral responses to faces with averted gaze and greater left amygdala reactivity to both fearful and neutral faces. Conversely, males showed greater right amygdala activation only for M-biased averted-gaze fear faces. In addition to functional reactivity differences, females had proportionately greater bilateral amygdala volumes, which positively correlated with behavioral accuracy for M-biased fear. Conversely, in males only the right amygdala volume was positively correlated with accuracy for M-biased fear faces. Our findings suggest that M and P processing of facial threat cues is modulated by functional and structural differences in the amygdalae associated with the observer's sex. © 2018 Wiley Periodicals, Inc.
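
    One common way to bias a stimulus toward magnocellular (M) processing is to present it as a low-luminance-contrast greyscale image. The sketch below illustrates that idea only; the contrast fraction and the exact manipulation used in the study are assumptions, not taken from the abstract:

```python
import numpy as np


def m_biased(image: np.ndarray, contrast: float = 0.05) -> np.ndarray:
    """Return a low-luminance-contrast greyscale version of `image`
    (values in 0..255), compressing deviations from the mean luminance.
    `contrast` is the fraction of the original contrast retained; 0.05
    is a hypothetical value, not the study's actual calibration.
    """
    grey = image.mean(axis=-1) if image.ndim == 3 else image.astype(float)
    mean = grey.mean()
    # Shrink deviations from the mean, keeping mean luminance constant.
    return np.clip(mean + contrast * (grey - mean), 0.0, 255.0)
```

    A P-biased counterpart would instead use isoluminant chromatic contrast, which this sketch does not cover.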

  6. Complicating Eroticism and the Male Gaze: Feminism and Georges Bataille’s Story of the Eye

    Directory of Open Access Journals (Sweden)

    Chris Vanderwees

    2014-01-01

    Full Text Available This article explores the relationship between feminist criticism and Georges Bataille’s Story of the Eye . Much of the critical work on Bataille assimilates his psychosocial theories in Erotism with the manifestation of those theories in his fiction without acknowledging potential contradictions between the two bodies of work. The conflation of important distinctions between representations of sex and death in Story of the Eye and the writings of Erotism forecloses the possibility of reading Bataille’s novel as a critique of gender relations. This article unravels some of the distinctions between Erotism and Story of the Eye in order to complicate the assumption that the novel simply reproduces phallogocentric sexual fantasies of transgression. Drawing from the work of Angela Carter and Laura Mulvey, the author proposes the possibility of reading Story of the Eye as a pornographic critique of gender relations through an analysis of the novel’s displacement and destruction of the male gaze.

  7. Dynamic Eye Tracking Based Metrics for Infant Gaze Patterns in the Face-Distractor Competition Paradigm

    Science.gov (United States)

    Ahtola, Eero; Stjerna, Susanna; Yrttiaho, Santeri; Nelson, Charles A.; Leppänen, Jukka M.; Vanhatalo, Sampsa

    2014-01-01

    Objective To develop new standardized eye tracking based measures and metrics for infants’ gaze dynamics in the face-distractor competition paradigm. Method Eye tracking data were collected from two samples of healthy 7-month-old infants (total n = 45), as well as one sample of 5-month-old infants (n = 22) in a paradigm with a picture of a face or a non-face pattern as a central stimulus, and a geometric shape as a lateral stimulus. The data were analyzed by using conventional measures of infants’ initial disengagement from the central to the lateral stimulus (i.e., saccadic reaction time and probability) and, additionally, novel measures reflecting infants’ gaze dynamics after the initial disengagement (i.e., cumulative allocation of attention to the central vs. peripheral stimulus). Results The results showed that the initial saccade away from the centrally presented stimulus is followed by a rapid re-engagement of attention with the central stimulus, leading to cumulative preference for the central stimulus over the lateral stimulus over time. This pattern tended to be stronger for salient facial expressions as compared to non-face patterns, was replicable across two independent samples of 7-month-old infants, and differentiated between 7- and 5-month-old infants. Conclusion The results suggest that eye tracking based assessments of infants’ cumulative preference for faces over time can be readily parameterized and standardized, and may provide valuable techniques for future studies examining normative developmental changes in preference for social signals. Significance Standardized measures of early developing face preferences may have potential to become surrogate biomarkers of neurocognitive and social development. PMID:24845102
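
    The two metric families named above (initial disengagement and cumulative allocation) can be sketched from labeled gaze samples. The data layout, sampling rate, and AOI labels below are hypothetical, not the paper's actual pipeline:

```python
def saccadic_rt(samples, lateral_onset_ms):
    """Saccadic reaction time: time from lateral-stimulus onset until the
    gaze first enters the lateral AOI; None if the infant never disengages.
    `samples` is a list of (timestamp_ms, aoi) pairs, with aoi in
    {"central", "lateral", "other"} (a hypothetical data layout).
    """
    for t, aoi in samples:
        if t >= lateral_onset_ms and aoi == "lateral":
            return t - lateral_onset_ms
    return None


def cumulative_central_preference(samples):
    """Cumulative allocation of attention: fraction of on-AOI samples
    spent on the central stimulus (1.0 = exclusive central preference)."""
    central = sum(1 for _, aoi in samples if aoi == "central")
    lateral = sum(1 for _, aoi in samples if aoi == "lateral")
    total = central + lateral
    return central / total if total else None


# Hypothetical 100 ms samples: gaze starts on the face (central),
# disengages once, then re-engages with the central stimulus.
demo = [(0, "central"), (100, "central"), (200, "lateral"),
        (300, "central"), (400, "central")]
srt = saccadic_rt(demo, lateral_onset_ms=100)       # 100 ms
preference = cumulative_central_preference(demo)    # 0.8
```

    The re-engagement pattern reported in the abstract corresponds to a preference value well above 0.5 despite a finite saccadic reaction time.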

  8. Exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Xingwei An

    Full Text Available For Brain-Computer Interface (BCI) systems that are designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in the gaze-independent event-related potential (ERP) speller and to enhance BCI performance, we designed a visual-auditory speller. We investigated the possibility of enhancing stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities are proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than unimodal paradigms, without sacrificing spelling performance. In addition, shorter latencies, lower amplitudes, and a shift of the temporal and spatial distribution of discriminative information were observed for the Combined-Speller. These results are important and should inspire future studies to investigate the reasons for these differences. For the more innovative and demanding Parallel-Speller, where the auditory and visual domains are independent of each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level < 3%), showing a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel. It brings new insight into truly multisensory stimulus paradigms. Novel approaches for combining two sensory modalities were designed here, which are valuable for the development of ERP-based BCI paradigms.
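
    The reported accuracy and speed can be combined into the standard Wolpaw information transfer rate (ITR), a common BCI benchmark. The 36-symbol matrix below is an assumption (it is consistent with the stated chance level < 3%), not a figure from the abstract:

```python
from math import log2


def wolpaw_itr_bits_per_selection(n_classes: int, accuracy: float) -> float:
    """Wolpaw ITR in bits per selection, assuming equiprobable classes
    and errors distributed uniformly over the remaining classes:

        B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    """
    p = accuracy
    bits = log2(n_classes)
    if 0 < p < 1:
        bits += p * log2(p) + (1 - p) * log2((1 - p) / (n_classes - 1))
    return bits


# Reported Parallel-Speller figures: 87.7% accuracy, 1.65 symbols/min.
# The 36-class assumption is ours; with it, throughput is ~6.6 bits/min.
bits_per_min = wolpaw_itr_bits_per_selection(36, 0.877) * 1.65
```

    Because the Parallel-Speller needs only one selection period per symbol, its ITR scales directly with the selection rate.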

  9. Attention in natural scenes: Affective-motivational factors guide gaze independently of visual salience.

    Science.gov (United States)

    Schomaker, Judith; Walper, Daniel; Wittmann, Bianca C; Einhäuser, Wolfgang

    2017-04-01

    In addition to low-level stimulus characteristics and current goals, our previous experience with stimuli can also guide attentional deployment. It remains unclear, however, if such effects act independently or whether they interact in guiding attention. In the current study, we presented natural scenes including every-day objects that differed in affective-motivational impact. In the first free-viewing experiment, we presented visually-matched triads of scenes in which one critical object was replaced that varied mainly in terms of motivational value, but also in terms of valence and arousal, as confirmed by ratings by a large set of observers. Treating motivation as a categorical factor, we found that it affected gaze. A linear-effect model showed that arousal, valence, and motivation predicted fixations above and beyond visual characteristics, like object size, eccentricity, or visual salience. In a second experiment, we experimentally investigated whether the effects of emotion and motivation could be modulated by visual salience. In a medium-salience condition, we presented the same unmodified scenes as in the first experiment. In a high-salience condition, we retained the saturation of the critical object in the scene, and decreased the saturation of the background, and in a low-salience condition, we desaturated the critical object while retaining the original saturation of the background. We found that highly salient objects guided gaze, but still found additional additive effects of arousal, valence and motivation, confirming that higher-level factors can also guide attention, as measured by fixations towards objects in natural scenes. Copyright © 2017 Elsevier Ltd. All rights reserved.
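
    A claim that affective-motivational factors predict fixations "above and beyond" visual characteristics is typically tested by comparing a baseline model against an extended one. The least-squares sketch below illustrates that comparison on synthetic data; the variable names and generating model are hypothetical, not the study's actual analysis:

```python
import numpy as np


def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an ordinary least-squares fit (intercept added)."""
    Xi = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)
    resid = y - Xi @ beta
    return 1.0 - resid.var() / y.var()


# Synthetic per-object data: fixations depend on salience AND motivation.
rng = np.random.default_rng(1)
salience = rng.normal(size=200)
arousal, valence, motivation = rng.normal(size=(3, 200))
fixations = 0.5 * salience + 0.4 * motivation + rng.normal(scale=0.5, size=200)

# Baseline: visual characteristics only. Full: add affective predictors.
r2_base = r_squared(salience[:, None], fixations)
r2_full = r_squared(np.column_stack([salience, arousal, valence, motivation]),
                    fixations)
# r2_full > r2_base mirrors the "above and beyond" pattern in the abstract.
```

    In practice an F-test (or mixed-effects model, as the abstract suggests) would assess whether the R^2 increment is significant; this sketch shows only the model comparison itself.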

  10. Gaze-Aware Streaming Solutions for the Next Generation of Mobile VR Experiences.

    Science.gov (United States)

    Lungaro, Pietro; Sjoberg, Rickard; Valero, Alfredo Jose Fanghella; Mittal, Ashutosh; Tollmar, Konrad

    2018-04-01

    This paper presents a novel approach to content delivery for video streaming services. It exploits information from connected eye-trackers embedded in the next generation of VR Head Mounted Displays (HMDs). The proposed solution aims to deliver high visual quality, in real time, around the users' fixation points while lowering the quality everywhere else. The goal of the proposed approach is to substantially reduce the overall bandwidth requirements for supporting VR video experiences while delivering high levels of user perceived quality. The prerequisites to achieve these results are: (1) mechanisms that can cope with different degrees of latency in the system and (2) solutions that support fast adaptation of video quality in different parts of a frame, without requiring a large increase in bitrate. A novel codec configuration, capable of supporting near-instantaneous video quality adaptation in specific portions of a video frame, is presented. The proposed method exploits built-in properties of HEVC encoders; while it introduces a moderate amount of error, these errors are undetectable by users. Fast adaptation is the key to enable gaze-aware streaming and its reduction in bandwidth. A testbed implementing gaze-aware streaming, together with a prototype HMD with a built-in eye tracker, is presented and was used for testing with real users. The studies quantified the bandwidth savings achievable by the proposed approach and characterized the relationship between Quality of Experience (QoE) and network latency. The results showed that up to 83% less bandwidth is required to deliver high QoE levels to the users, as compared to conventional solutions.
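
    The core idea (high quality near the fixation point, low quality elsewhere) can be sketched as a per-tile quality map plus a bandwidth estimate. The grid size, foveal radius, two-level quality scheme, and cost ratio below are illustrative assumptions, not the paper's actual HEVC configuration:

```python
def tile_quality_map(grid_w, grid_h, gaze_tile, fovea_radius=1):
    """Assign 'high' quality to tiles within `fovea_radius` (Chebyshev
    distance) of the gaze tile and 'low' elsewhere. The radius and the
    two-level scheme are illustrative, not the paper's codec settings."""
    gx, gy = gaze_tile
    return [["high" if max(abs(x - gx), abs(y - gy)) <= fovea_radius else "low"
             for x in range(grid_w)]
            for y in range(grid_h)]


def bandwidth_fraction(quality_map, high_cost=1.0, low_cost=0.15):
    """Bandwidth relative to sending every tile at high quality.
    The 0.15 low/high bitrate ratio is a hypothetical figure."""
    tiles = [q for row in quality_map for q in row]
    cost = sum(high_cost if q == "high" else low_cost for q in tiles)
    return cost / (high_cost * len(tiles))


# 8x8 tile grid, gaze at the center: a 3x3 high-quality foveal block.
qmap = tile_quality_map(8, 8, gaze_tile=(4, 4))
saving = 1.0 - bandwidth_fraction(qmap)  # ~73% less bandwidth here
```

    With these illustrative numbers the saving is about 73%, the same order as the paper's reported "up to 83%"; the real figure depends on latency handling and how aggressively peripheral quality is lowered.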

  11. Eye-gaze independent EEG-based brain-computer interfaces for communication

    Science.gov (United States)

    Riccio, A.; Mattia, D.; Simione, L.; Olivetti, M.; Cincotti, F.

    2012-08-01

    The present review systematically examines the literature reporting gaze independent interaction modalities in non-invasive brain-computer interfaces (BCIs) for communication. BCIs measure signals related to specific brain activity and translate them into device control signals. This technology can be used to provide users with severe motor disability (e.g. late stage amyotrophic lateral sclerosis (ALS); acquired brain injury) with an assistive device that does not rely on muscular contraction. Most of the studies on BCIs explored mental tasks and paradigms using the visual modality. Considering that in ALS patients the oculomotor control can deteriorate and that other potential users could have impaired visual function, tactile and auditory modalities have been investigated over the past years to seek alternative BCI systems that are independent of vision. In addition, various attentional mechanisms, such as covert attention and feature-directed attention, have been investigated to develop gaze independent visual-based BCI paradigms. Three areas of research were considered in the present review: (i) auditory BCIs, (ii) tactile BCIs and (iii) independent visual BCIs. Out of a total of 130 search results, 34 articles were selected on the basis of pre-defined exclusion criteria. Thirteen articles dealt with independent visual BCIs, 15 reported on auditory BCIs, and six on tactile BCIs. From the review of the available literature, it can be concluded that a crucial point is represented by the trade-off between BCI systems/paradigms with high accuracy and speed, but highly demanding in terms of attention and memory load, and systems requiring lower cognitive effort but with a limited amount of communicable information. These issues should be considered as priorities to be explored in future studies to meet users’ requirements in a real-life scenario.

  12. Dynamic eye tracking based metrics for infant gaze patterns in the face-distractor competition paradigm.

    Directory of Open Access Journals (Sweden)

    Eero Ahtola

    Full Text Available To develop new standardized eye tracking based measures and metrics for infants' gaze dynamics in the face-distractor competition paradigm. Eye tracking data were collected from two samples of healthy 7-month-old infants (total n = 45), as well as one sample of 5-month-old infants (n = 22), in a paradigm with a picture of a face or a non-face pattern as a central stimulus, and a geometric shape as a lateral stimulus. The data were analyzed by using conventional measures of infants' initial disengagement from the central to the lateral stimulus (i.e., saccadic reaction time and probability) and, additionally, novel measures reflecting infants' gaze dynamics after the initial disengagement (i.e., cumulative allocation of attention to the central vs. peripheral stimulus). The results showed that the initial saccade away from the centrally presented stimulus is followed by a rapid re-engagement of attention with the central stimulus, leading to cumulative preference for the central stimulus over the lateral stimulus over time. This pattern tended to be stronger for salient facial expressions as compared to non-face patterns, was replicable across two independent samples of 7-month-old infants, and differentiated between 7- and 5-month-old infants. The results suggest that eye tracking based assessments of infants' cumulative preference for faces over time can be readily parameterized and standardized, and may provide valuable techniques for future studies examining normative developmental changes in preference for social signals. Standardized measures of early developing face preferences may have potential to become surrogate biomarkers of neurocognitive and social development.

  13. Evidence for impairments in using static line drawings of eye gaze cues to orient visual-spatial attention in children with high functioning autism.

    Science.gov (United States)

    Goldberg, Melissa C; Mostow, Allison J; Vecera, Shaun P; Larson, Jennifer C Gidley; Mostofsky, Stewart H; Mahone, E Mark; Denckla, Martha B

    2008-09-01

    We examined the ability to use static line drawings of eye gaze cues to orient visual-spatial attention in children with high functioning autism (HFA) compared to typically developing children (TD). The task was organized such that on valid trials, gaze cues were directed toward the same spatial location as the appearance of an upcoming target, while on invalid trials gaze cues were directed to an opposite location. Unlike TD children, children with HFA showed no advantage in reaction time (RT) on valid trials compared to invalid trials (i.e., no significant validity effect). The two stimulus onset asynchronies (200 ms, 700 ms) did not differentially affect these findings. The results suggest that children with HFA show impairments in utilizing static line drawings of gaze cues to orient visual-spatial attention.
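
    The validity effect described above is simply the difference between mean reaction times on invalid and valid trials. A minimal sketch, with hypothetical RT values chosen to mimic the reported group patterns:

```python
from statistics import mean


def validity_effect(valid_rts, invalid_rts):
    """Gaze-cueing validity effect in ms: mean RT on invalid trials minus
    mean RT on valid trials. A positive value means responses were faster
    when the gaze cue pointed at the upcoming target location."""
    return mean(invalid_rts) - mean(valid_rts)


# Hypothetical RTs (ms): a TD-like pattern shows a clear positive effect,
# an HFA-like pattern shows essentially none, as in the abstract.
td_effect = validity_effect([310, 322, 305], [355, 348, 361])
hfa_effect = validity_effect([330, 341, 336], [333, 339, 334])
```

    In the study, the absence of a significant validity effect in the HFA group held at both stimulus onset asynchronies (200 ms and 700 ms).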

  14. The effect of gaze angle on the evaluations of SAR and temperature rise in human eye under plane-wave exposures from 0.9 to 10 GHz

    International Nuclear Information System (INIS)

    Diao, Yinliang; Leung, Sai-Wing; Sun, Weinong; Siu, Yun-Ming; Kong, Richard; Hung Chan, Kwok

    2016-01-01

    This article investigates the effect of gaze angle on the specific absorption rate (SAR) and temperature rise in the human eye under electromagnetic exposures from 0.9 to 10 GHz. Eye models with different gaze angles are developed based on biometric data. The spatial-average SARs in the eyes are investigated using the finite-difference time-domain method, and the corresponding maximum temperature rises in the lens are calculated by the finite-difference method. It is found that changes in the gaze angle produce a maximum variation of 35, 12 and 20% in the eye-averaged SAR, peak 10 g average SAR and temperature rise, respectively. Results also reveal that the eye-averaged SAR is more sensitive to changes in the gaze angle than the peak 10 g average SAR, especially at higher frequencies. (authors)
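
    The eye-averaged SAR in studies like this is a mass-weighted average of the local SAR, which follows the standard definition SAR = sigma * |E|^2 / rho. The sketch below uses that definition; the tissue values in the test are illustrative, not the paper's dielectric data:

```python
def voxel_sar(sigma, e_rms, rho):
    """Local SAR (W/kg) from tissue conductivity sigma (S/m), RMS electric
    field magnitude e_rms (V/m), and mass density rho (kg/m^3):

        SAR = sigma * |E|^2 / rho
    """
    return sigma * e_rms ** 2 / rho


def mass_averaged_sar(voxels):
    """Mass-weighted average SAR over voxels given as
    (sigma, e_rms, rho, volume_m3) tuples - the standard way an
    'eye-averaged SAR' is computed from FDTD field results."""
    total_power = sum(voxel_sar(s, e, r) * r * v for s, e, r, v in voxels)
    total_mass = sum(r * v for s, e, r, v in voxels)
    return total_power / total_mass
```

    In the paper's setting, changing the gaze angle changes which voxels lie in the strongest field region, which is why the eye-averaged value varies more than the peak 10 g average.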

  15. Gaze patterns reveal how texts are remembered: A mental model of what was described is favored over the text itself

    DEFF Research Database (Denmark)

    Traub, Franziska; Johansson, Roger; Holmqvist, Kenneth

    or incongruent with the spatial layout of the text itself. 28 participants read and recalled three texts: (1) a scene description congruent with the spatial layout of the text; (2) a scene description incongruent with the spatial layout of the text; and (3) a control text without any spatial scene content....... Recollection was performed orally while gazing at a blank screen. 
Results demonstrate that participants' gaze patterns during recall more closely reflect the spatial layout of the scene than the physical locations of the text. We conclude that participants formed a mental model that represents the content...... of what was described, i.e., visuospatial information of the scene, which then guided the retrieval process. During their retellings, they moved their eyes across the blank screen as if they saw the scene in front of them. Whereas previous studies on the involvement of eye movements in mental imagery tasks...

  16. Effects of objectifying gaze on female cognitive performance: The role of flow experience and internalization of beauty ideals.

    Science.gov (United States)

    Guizzo, Francesca; Cadinu, Mara

    2017-06-01

    Although previous research has demonstrated that objectification impairs female cognitive performance, no research to date has investigated the mechanisms underlying such decrement. Therefore, we tested the role of flow experience as one mechanism leading to performance decrement under sexual objectification. Gaze gender was manipulated by having male versus female experimenters take body pictures of female participants (N = 107) who then performed a Sustained Attention to Response Task. As predicted, a moderated mediation model showed that under male versus female gaze, higher internalization of beauty ideals was associated with lower flow, which in turn decreased performance. The implications of these results are discussed in relation to objectification theory and strategies to prevent sexually objectifying experiences. © 2016 The British Psychological Society.

  17. Olhar e contato ocular: desenvolvimento típico e comparação na Síndrome de Down Gaze and eye contact: typical development and comparison in Down syndrome

    Directory of Open Access Journals (Sweden)

    Aline Elise Gerbelli Belini

    2008-03-01

    babies with normal development were video recorded once a month, between the first and the fifth months of life, at home, interacting with their mothers. The frequency of gaze directed toward 11 different targets, including "mother's eye" was registered. RESULTS: Babies with normal development presented statistically significant evolution, throughout the observed period, in the amount of "eye closed" and of gaze direction to "objects", "researcher", "surroundings", "own body", "mother's face" and "mother's eye". The sample presented statistical stability in the areas of "looking to other person", "looking to mother's body" and "opening and closing eyes". The development of gaze and eye contact of the baby with Down syndrome was statistically very similar to that of typically developing babies, compared by means (Chi-square test) and individual variability (analysis of significant clusters). CONCLUSIONS: Early interaction between mother and baby seems to interfere more with non-verbal communication than some genetically determined limitations. This may have resulted in the similarity observed in the visual behavior of the baby with Down syndrome and the other, normally developing babies.

  18. Are Eyes Windows to a Deceiver's Soul? Children's Use of Another's Eye Gaze Cues in a Deceptive Situation

    Science.gov (United States)

    Freire, Alejo; Eskritt, Michelle; Lee, Kang

    2004-01-01

    Three experiments examined 3- to 5-year-olds' use of eye gaze cues to infer truth in a deceptive situation. Children watched a video of an actor who hid a toy in 1 of 3 cups. In Experiments 1 and 2, the actor claimed ignorance about the toy's location but looked toward 1 of the cups, without (Experiment 1) and with (Experiment 2) head movement. In…

  19. Effects of galvanic skin response feedback on user experience in gaze-controlled gaming: A pilot study.

    Science.gov (United States)

    Larradet, Fanny; Barresi, Giacinto; Mattos, Leonardo S

    2017-07-01

    Eye-tracking (ET) is one of the most intuitive solutions for enabling people with severe motor impairments to control devices. Nevertheless, even such an effective assistive solution can detrimentally affect user experience during demanding tasks because of, for instance, the user's mental workload: using gaze-based controls for extended periods can generate fatigue and cause frustration. Thus, it is necessary to design novel solutions for ET contexts able to improve the user experience, with particular attention to its aspects related to workload. In this paper, a pilot study evaluates the effects of a relaxation biofeedback system on the user experience in the context of a gaze-controlled task that is mentally and temporally demanding: ET-based gaming. Different aspects of the subjects' experience were investigated under two conditions of a gaze-controlled game. In the Biofeedback group (BF), the user triggered a command by means of voluntary relaxation, monitored through Galvanic Skin Response (GSR) and represented by visual feedback. In the No Biofeedback group (NBF), the same feedback was timed according to the average frequency of commands in BF. After the experiment, each subject filled out a user experience questionnaire. The results showed a general appreciation for BF, with a significant between-group difference in the perceived session time duration, which was shorter for subjects in BF than for those in NBF. This result implies a lower mental workload for BF than for NBF subjects. Other results point toward a potential role of the user's engagement in the improvement of user experience in BF. Such an effect highlights the value of relaxation biofeedback for improving the user experience in a demanding gaze-controlled task.
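
    A relaxation-triggered command of the kind described in the BF condition can be sketched as a threshold rule on the GSR signal. The trigger rule, threshold fraction, and hold duration below are all assumptions for illustration; the pilot study does not specify its detection logic:

```python
def relaxation_trigger(gsr, baseline, drop=0.9, hold=3):
    """Return the sample index at which a command would fire: the first
    point where the GSR has stayed below `drop * baseline` for `hold`
    consecutive samples (sustained relaxation). Returns None if the
    user never relaxes long enough. All parameter values are
    illustrative, not taken from the study."""
    below = 0
    for i, value in enumerate(gsr):
        below = below + 1 if value < drop * baseline else 0
        if below >= hold:
            return i
    return None


# Hypothetical GSR trace (microsiemens) against a baseline of 10.0:
fired_at = relaxation_trigger([10.0, 9.5, 8.8, 8.7, 8.6, 9.9], baseline=10.0)
```

    In a real system the trace would first be smoothed and the baseline re-estimated per session; the visual feedback shown to the BF group would track `below / hold` as progress toward the trigger.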

  20. From the eyes and the heart: a novel eye-gaze metric that predicts video preferences of a large audience

    OpenAIRE

    Christoforou, Christoforos; Christou-Champi, Spyros; Constantinidou, Fofi; Theodorou, Maria

    2015-01-01

    Eye-tracking has been extensively used to quantify audience preferences in the context of marketing and advertising research, primarily in methodologies involving static images or stimuli (i.e., advertising, shelf testing, and website usability). However, these methodologies do not generalize to narrative-based video stimuli where a specific storyline is meant to be communicated to the audience. In this paper, a novel metric based on eye-gaze dispersion (both within and across viewings) that ...