WorldWideScience

Sample records for eye gaze interface

  1. Eye gaze in intelligent user interfaces: gaze-based analyses, models and applications

    CERN Document Server

    Nakano, Yukiko I; Bader, Thomas

    2013-01-01

    Remarkable progress in eye-tracking technologies has opened the way to novel attention-based intelligent user interfaces and highlighted the importance of a better understanding of eye gaze in human-computer interaction and human-human communication. For instance, a user's focus of attention is useful in interpreting the user's intentions, their understanding of the conversation, and their attitude towards the conversation. In human face-to-face communication, eye gaze plays an important role in floor management, grounding, and engagement in conversation.

  2. Assessing the Usability of Gaze-Adapted Interface against Conventional Eye-based Input Emulation

    OpenAIRE

    Kumar, Chandan; Menges, Raphael; Staab, Steffen

    2017-01-01

    In recent years, eye tracking systems have greatly improved, beginning to play a promising role as an input medium. Eye trackers can be used for application control either by simply emulating the mouse and keyboard in the traditional graphical user interface, or through interfaces customized for eye gaze events. In this work, we evaluate these two approaches to assess their impact on usability. We present a gaze-adapted Twitter application interface with direct interaction of eye gaze inpu...

  3. Eye-gaze control of the computer interface: Discrimination of zoom intent

    International Nuclear Information System (INIS)

    Goldberg, J.H.

    1993-01-01

    An analysis methodology and associated experiment were developed to assess whether definable and repeatable signatures of eye-gaze characteristics are evident preceding a decision to zoom in, zoom out, or not to zoom at a computer interface. This user intent discrimination procedure can have broad application in disability aids and telerobotic control. Eye gaze was collected from 10 subjects in a controlled experiment requiring zoom decisions. The eye-gaze data were clustered, then fed into a multiple discriminant analysis (MDA) for optimal definition of heuristics separating the zoom-in, zoom-out, and no-zoom conditions. Confusion matrix analyses showed that a number of variable combinations classified at a statistically significant level, but practical significance was more difficult to establish. Composite contour plots demonstrated the regions in parameter space consistently assigned by the MDA to unique zoom conditions. Peak classification occurred at about 1200–1600 msec. Improvements in the methodology to achieve practical real-time zoom control are considered.
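
    A minimal sketch of the classification stage described above, using scikit-learn's LinearDiscriminantAnalysis as a stand-in for the study's multiple discriminant analysis; the feature set and data are illustrative placeholders, not the paper's variables.

      # Toy discriminant analysis of zoom intent from gaze-cluster features.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      # Hypothetical per-window features: fixation duration (ms),
      # saccade length (deg), fixation-cluster dispersion (deg).
      X = rng.normal(size=(300, 3))
      y = rng.integers(0, 3, size=300)  # 0 = zoom-in, 1 = zoom-out, 2 = no-zoom

      mda = LinearDiscriminantAnalysis(n_components=2)  # 3 classes -> at most 2 discriminants
      mda.fit(X, y)
      print(mda.predict(X[:5]))  # per-window zoom-intent labels
      print(mda.score(X, y))     # accuracy, cf. the paper's confusion matrix analyses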

  4. Eye Movements in Gaze Interaction

    DEFF Research Database (Denmark)

    Møllenbach, Emilie; Hansen, John Paulin; Lillholm, Martin

    2013-01-01

    Gaze as a sole input modality must support complex navigation and selection tasks. Gaze interaction combines specific eye movements and graphic display objects (GDOs). This paper suggests a unifying taxonomy of gaze interaction principles. The taxonomy deals with three types of eye movements...

  5. Eye Gaze in Creative Sign Language

    Science.gov (United States)

    Kaneko, Michiko; Mesch, Johanna

    2013-01-01

    This article discusses the role of eye gaze in creative sign language. Because eye gaze conveys various types of linguistic and poetic information, it is an intrinsic part of sign language linguistics in general and of creative signing in particular. We discuss various functions of eye gaze in poetic signing and propose a classification of gaze…

  6. EYE GAZE TRACKING

    DEFF Research Database (Denmark)

    2017-01-01

    This invention relates to a method of performing eye gaze tracking of at least one eye of a user by determining the position of the center of the eye, said method comprising the steps of: detecting the position of at least three reflections on said eye; transforming said positions to a normalized coordinate system spanning a frame of reference, wherein said transformation is performed based on a bilinear transformation or a non-linear transformation, e.g. a Möbius transformation or a homographic transformation; detecting the position of said center of the eye relative to the position of said reflections and transforming this position to said normalized coordinate system; and tracking the eye gaze by tracking the movement of said eye in said normalized coordinate system. Thereby calibration of a camera, such as knowledge of the exact position and zoom level of the camera, is avoided...
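
    The claimed normalization can be pictured with a small Python/OpenCV sketch: four corneal reflections are mapped onto a unit square by a homographic transformation, and the detected eye centre is then expressed in that camera-independent frame. All coordinates below are invented.

      import numpy as np
      import cv2

      glints = np.float32([[312, 240], [368, 238], [370, 282], [310, 284]])  # reflections, image px
      unit = np.float32([[0, 0], [1, 0], [1, 1], [0, 1]])                    # normalized frame

      H = cv2.getPerspectiveTransform(glints, unit)   # homographic transformation
      eye_centre = np.float32([[[337, 262]]])         # detected centre of the eye, image px
      print(cv2.perspectiveTransform(eye_centre, H))  # position in the normalized system;
                                                      # gaze is then tracked in this space,
                                                      # so no camera calibration is needed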

  7. Towards emotion modeling based on gaze dynamics in generic interfaces

    DEFF Research Database (Denmark)

    Vester-Christensen, Martin; Leimberg, Denis; Ersbøll, Bjarne Kjær

    2005-01-01

    Gaze detection can be a useful ingredient in generic human computer interfaces if current technical barriers are overcome. We discuss the feasibility of concurrent posture and eye-tracking in the context of single (low cost) camera imagery. The ingredients in the approach are posture and eye region...

  8. Eye Gaze Controlled Projected Display in Automotive and Military Aviation Environments

    Directory of Open Access Journals (Sweden)

    Gowdham Prabhakar

    2018-01-01

    This paper presents an eye gaze controlled projected display that can be used in aviation and automotive environments as a head-up display. We present details of the hardware and software used in developing the display, along with an algorithm to improve the performance of pointing and selection tasks in an eye gaze controlled graphical user interface. The algorithm does not require changing the layout of an interface; rather, it puts a set of hotspots on clickable targets using a Simulated Annealing algorithm. Four user studies involving driving and flight simulators have found that the proposed projected display can improve driving and flying performance and significantly reduce pointing and selection times for secondary mission control tasks compared to existing interaction systems.
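
    As a rough illustration of the hotspot idea (the paper's actual objective function is not reproduced here), a simulated annealing loop can place one hotspot per clickable target so that hotspots are maximally spread out, which eases gaze-based selection:

      import math, random
      random.seed(1)

      targets = [(50, 40, 120, 80), (130, 40, 200, 80), (50, 120, 200, 160)]  # x1, y1, x2, y2

      def sample(box):
          x1, y1, x2, y2 = box
          return (random.uniform(x1, x2), random.uniform(y1, y2))

      def cost(pts):  # negated minimum pairwise distance (lower is better)
          return -min(math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:])

      state, temp = [sample(b) for b in targets], 50.0
      while temp > 0.1:
          cand = list(state)
          i = random.randrange(len(cand))
          cand[i] = sample(targets[i])      # perturb one hotspot within its target
          delta = cost(cand) - cost(state)
          if delta < 0 or random.random() < math.exp(-delta / temp):
              state = cand                  # accept improvements, sometimes accept worse
          temp *= 0.99                      # geometric cooling schedule
      print(state)                          # final hotspot positions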

  9. Intermediate view synthesis for eye-gazing

    Science.gov (United States)

    Baek, Eu-Ttuem; Ho, Yo-Sung

    2015-01-01

    Nonverbal communication, also known as body language, is an important form of communication. Nonverbal behaviors such as posture, eye contact, and gestures send strong messages. Among these, eye contact is one of the most important cues an individual can use. However, eye contact is lost when we use a video conferencing system: the disparity between the locations of the eyes and the camera gets in the way, and the lack of eye gaze can give an unapproachable and unpleasant impression. In this paper, we propose an eye-gaze correction method for video conferencing. We use two cameras installed at the top and the bottom of the television. The two captured images are rendered with 2D warping at a virtual position. We apply view morphing to the detected face, and synthesize the face with the warped image. Experimental results verify that the proposed system is effective in generating natural gaze-corrected images.

  10. A neural-based remote eye gaze tracker under natural head motion.

    Science.gov (United States)

    Torricelli, Diego; Conforto, Silvia; Schmid, Maurizio; D'Alessio, Tommaso

    2008-10-01

    A novel approach to view-based eye gaze tracking for human computer interface (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low-cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen the robustness to lighting conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed; rather, a simple commercial webcam working in the visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees, comparable with the vast majority of existing remote gaze trackers.
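
    A compressed sketch of the neural mapping stage, assuming (as the abstract suggests) image-derived features go in and one of 15 interface zones comes out; the features and labels below are synthetic stand-ins.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1500, 6))      # e.g. pupil offsets and head-pose terms per frame
      y = rng.integers(0, 15, size=1500)  # 15-zone graphical interface

      clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=400, random_state=0)
      clf.fit(X, y)                       # non-linear gaze mapping under free-head conditions
      print(clf.predict(X[:3]))           # gaze zone per frame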

  11. Does the 'P300' speller depend on eye gaze?

    Science.gov (United States)

    Brunner, P.; Joshi, S.; Briskin, S.; Wolpaw, J. R.; Bischof, H.; Schalk, G.

    2010-10-01

    Many people affected by debilitating neuromuscular disorders such as amyotrophic lateral sclerosis, brainstem stroke or spinal cord injury are impaired in their ability to, or are even unable to, communicate. A brain-computer interface (BCI) uses brain signals, rather than muscles, to re-establish communication with the outside world. One particular BCI approach is the so-called 'P300 matrix speller' that was first described by Farwell and Donchin (1988 Electroencephalogr. Clin. Neurophysiol. 70 510-23). It has been widely assumed that this method does not depend on the ability to focus on the desired character, because it was thought that it relies primarily on the P300-evoked potential and minimally, if at all, on other EEG features such as the visual-evoked potential (VEP). This issue is highly relevant for the clinical application of this BCI method, because eye movements may be impaired or lost in the relevant user population. This study investigated the extent to which the performance in a 'P300' speller BCI depends on eye gaze. We evaluated the performance of 17 healthy subjects using a 'P300' matrix speller under two conditions. Under one condition ('letter'), the subjects focused their eye gaze on the intended letter, while under the second condition ('center'), the subjects focused their eye gaze on a fixation cross that was located in the center of the matrix. The results show that the performance of the 'P300' matrix speller in normal subjects depends in considerable measure on gaze direction. They thereby disprove a widespread assumption in BCI research, and suggest that this BCI might function more effectively for people who retain some eye-movement control. The applicability of these findings to people with severe neuromuscular disabilities (particularly in eye-movements) remains to be determined.

  12. The Eyes Are the Windows to the Mind: Direct Eye Gaze Triggers the Ascription of Others' Minds.

    Science.gov (United States)

    Khalid, Saara; Deska, Jason C; Hugenberg, Kurt

    2016-12-01

    Eye gaze is a potent source of social information with direct eye gaze signaling the desire to approach and averted eye gaze signaling avoidance. In the current work, we proposed that eye gaze signals whether or not to impute minds into others. Across four studies, we manipulated targets' eye gaze (i.e., direct vs. averted eye gaze) and measured explicit mind ascriptions (Study 1a, Study 1b, and Study 2) and beliefs about the likelihood of targets having mind (Study 3). In all four studies, we find novel evidence that the ascription of sophisticated humanlike minds to others is signaled by the display of direct eye gaze relative to averted eye gaze. Moreover, we provide evidence suggesting that this differential mentalization is due, at least in part, to beliefs that direct gaze targets are more likely to instigate social interaction. In short, eye contact triggers mind perception. © 2016 by the Society for Personality and Social Psychology, Inc.

  13. Anxiety symptoms and children's eye gaze during fear learning.

    Science.gov (United States)

    Michalska, Kalina J; Machlin, Laura; Moroney, Elizabeth; Lowet, Daniel S; Hettema, John M; Roberson-Nay, Roxann; Averbeck, Bruno B; Brotman, Melissa A; Nelson, Eric E; Leibenluft, Ellen; Pine, Daniel S

    2017-11-01

    The eye region of the face is particularly relevant for decoding threat-related signals, such as fear. However, it is unclear if gaze patterns to the eyes can be influenced by fear learning. Previous studies examining gaze patterns in adults find an association between anxiety and eye gaze avoidance, although no studies to date examine how associations between anxiety symptoms and eye-viewing patterns manifest in children. The current study examined the effects of learning and trait anxiety on eye gaze using a face-based fear conditioning task developed for use in children. Participants were 82 youth from a general population sample of twins (aged 9-13 years), exhibiting a range of anxiety symptoms. Participants underwent a fear conditioning paradigm where the conditioned stimuli (CS+) were two neutral faces, one of which was randomly selected to be paired with an aversive scream. Eye tracking, physiological, and subjective data were acquired. Children and parents reported their child's anxiety using the Screen for Child Anxiety Related Emotional Disorders. Conditioning influenced eye gaze patterns in that children looked longer and more frequently to the eye region of the CS+ than CS- face; this effect was present only during fear acquisition, not at baseline or extinction. Furthermore, consistent with past work in adults, anxiety symptoms were associated with eye gaze avoidance. Finally, gaze duration to the eye region mediated the effect of anxious traits on self-reported fear during acquisition. Anxiety symptoms in children relate to face-viewing strategies deployed in the context of a fear learning experiment. This relationship may inform attempts to understand the relationship between pediatric anxiety symptoms and learning. © 2017 Association for Child and Adolescent Mental Health.

  14. Eye gazing direction inspection based on image processing technique

    Science.gov (United States)

    Hao, Qun; Song, Yong

    2005-02-01

    According to research results in neural biology, human eyes obtain high resolution only at the center of the field of view. In our research on a Virtual Reality helmet, we aim to detect the gazing direction of the human eyes in real time and feed it back to the control system, so as to improve the resolution of the rendered image at the center of the field of view. With current display instruments, this method can balance the field of view of the virtual scene against resolution, and thereby greatly improve the immersion of the virtual system. Therefore, detecting the gazing direction of human eyes rapidly and accurately is the basis for realizing this novel VR helmet design. In this paper, the conventional method of gazing-direction detection based on the Purkinje spot is introduced first. To overcome the disadvantages of the Purkinje-spot method, we propose a method based on image processing to detect and determine the gazing direction. The locations of the pupils and the shapes of the eye sockets change with the gazing direction; by analyzing images of the eyes captured by cameras, the gazing direction can be determined. Experiments have been done to validate the efficiency of this method by analyzing such images. The algorithm detects the gazing direction directly from normal eye images, eliminating the need for special hardware. Experimental results show that the method is easy to implement and has high precision.
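
    The geometric core of such pupil-based methods is that a circular pupil viewed off-axis projects to an ellipse whose minor/major axis ratio is the cosine of the viewing angle. A Python/OpenCV sketch of that step (the pupil blob here is synthetic):

      import numpy as np
      import cv2

      mask = np.zeros((240, 320), np.uint8)
      cv2.ellipse(mask, (160, 120), (40, 28), 15, 0, 360, 255, -1)  # stand-in pupil region

      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
      (cx, cy), (d1, d2), angle = cv2.fitEllipse(max(contours, key=cv2.contourArea))
      ratio = min(d1, d2) / max(d1, d2)        # minor/major axis ratio
      theta = np.degrees(np.arccos(ratio))     # off-axis gaze angle, degrees
      print(round(theta, 1), round(angle, 1))  # magnitude and image-plane orientation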

  15. Single gaze gestures

    DEFF Research Database (Denmark)

    Møllenbach, Emilie; Lillholm, Martin; Gail, Alastair

    2010-01-01

    This paper examines gaze gestures and their applicability as a generic selection method for gaze-only controlled interfaces. The method explored here is the Single Gaze Gesture (SGG), i.e. gestures consisting of a single point-to-point eye movement. Horizontal and vertical, long and short SGGs were...

  16. Orienting of attention via observed eye gaze is head-centred.

    Science.gov (United States)

    Bayliss, Andrew P; di Pellegrino, Giuseppe; Tipper, Steven P

    2004-11-01

    Observing averted eye gaze results in the automatic allocation of attention to the gazed-at location. The role of the orientation of the face that produces the gaze cue was investigated. The eyes in the face could look left or right in a head-centred frame, but the face itself could be oriented 90 degrees clockwise or anticlockwise such that the eyes were gazing up or down. Significant cueing effects to targets presented to the left or right of the screen were found in these head orientation conditions. This suggests that attention was directed to the side to which the eyes would have been looking towards, had the face been presented upright. This finding provides evidence that head orientation can affect gaze following, even when the head orientation alone is not a social cue. It also shows that the mechanism responsible for the allocation of attention following a gaze cue can be influenced by intrinsic object-based (i.e. head-centred) properties of the task-irrelevant cue.

  17. Reading the mind from eye gaze.

    NARCIS (Netherlands)

    Christoffels, I.; Young, A.W.; Owen, A.M.; Scott, S.K.; Keane, J.; Lawrence, A.D.

    2002-01-01

    S. Baron-Cohen (1997) has suggested that the interpretation of gaze plays an important role in a normally functioning theory of mind (ToM) system. Consistent with this suggestion, functional imaging research has shown that both ToM tasks and eye gaze processing engage a similar region of the posterior...

  18. EEG Negativity in Fixations Used for Gaze-Based Control: Toward Converting Intentions into Actions with an Eye-Brain-Computer Interface

    Science.gov (United States)

    Shishkin, Sergei L.; Nuzhdin, Yuri O.; Svirin, Evgeny P.; Trofimov, Alexander G.; Fedorova, Anastasia A.; Kozyrskiy, Bogdan L.; Velichkovsky, Boris M.

    2016-01-01

    We usually look at an object when we are going to manipulate it. Thus, eye tracking can be used to communicate intended actions. An effective human-machine interface, however, should be able to differentiate intentional and spontaneous eye movements. We report an electroencephalogram (EEG) marker that differentiates gaze fixations used for control from spontaneous fixations involved in visual exploration. Eight healthy participants played a game with their eye movements only. Their gaze-synchronized EEG data (fixation-related potentials, FRPs) were collected during the game's control-on and control-off conditions. A slow negative wave with a maximum in the parietooccipital region was present in each participant's averaged FRPs in the control-on condition and was absent or had much lower amplitude in the control-off condition. This wave was similar but not identical to stimulus-preceding negativity, a slow negative wave that can be observed during feedback expectation. Classification of intentional vs. spontaneous fixations was based on amplitude features from 13 EEG channels using 300 ms long segments free from electrooculogram contamination (200–500 ms relative to the fixation onset). For the first fixations in the fixation triplets required to make moves in the game, classified against control-off data, a committee of greedy classifiers provided 0.90 ± 0.07 specificity and 0.38 ± 0.14 sensitivity. Similar (slightly lower) results were obtained for the shrinkage Linear Discriminant Analysis (LDA) classifier. The second and third fixations in the triplets were classified at a lower rate. We expect that, with improved feature sets and classifiers, a hybrid dwell-based Eye-Brain-Computer Interface (EBCI) can be built using the FRP difference between the intended and spontaneous fixations. If this direction of BCI development is successful, such a multimodal interface may improve the fluency of interaction and can possibly become the basis for a new input device.
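
    A bare-bones sketch of the reported classification step: mean amplitudes from 13 EEG channels in the artifact-free 200-500 ms post-fixation window, fed to a shrinkage LDA. The data below are random placeholders, not FRPs.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      X = rng.normal(size=(400, 13))    # amplitude features, 13 channels, 200-500 ms segment
      y = rng.integers(0, 2, size=400)  # 1 = intentional (control-on), 0 = spontaneous fixation

      lda = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')  # shrinkage LDA
      lda.fit(X, y)
      print(lda.predict_proba(X[:3])[:, 1])  # probability a fixation is meant as a command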

  20. Adaptive eye-gaze tracking using neural-network-based user profiles to assist people with motor disability.

    Science.gov (United States)

    Sesin, Anaelis; Adjouadi, Malek; Cabrerizo, Mercedes; Ayala, Melvin; Barreto, Armando

    2008-01-01

    This study developed an adaptive real-time human-computer interface (HCI) that serves as an assistive technology tool for people with severe motor disability. The proposed HCI design uses eye gaze as the primary computer input device. Controlling the mouse cursor with raw eye coordinates results in sporadic motion of the pointer because of the saccadic nature of the eye. Even though eye movements are subtle and completely imperceptible under normal circumstances, they considerably affect the accuracy of an eye-gaze-based HCI. The proposed HCI system is novel because it adapts to each specific user's different and potentially changing jitter characteristics through the configuration and training of an artificial neural network (ANN) that is structured to minimize the mouse jitter. This task is based on feeding the ANN a user's initially recorded eye-gaze behavior through a short training session. The ANN finds the relationship between the gaze coordinates and the mouse cursor position based on the multilayer perceptron model. An embedded graphical interface is used during the training session to generate user profiles that make up these unique ANN configurations. The results with 12 subjects in test 1, which involved following a moving target, showed an average jitter reduction of 35%; the results with 9 subjects in test 2, which involved following the contour of a square object, showed an average jitter reduction of 53%. In both cases, the outcomes were trajectories that were significantly smoother and reached fixed or moving targets with relative ease, within a 5% error margin or deviation from the desired trajectories. The positive effects of such jitter reduction are presented graphically for visual appreciation.
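
    The adaptation idea can be sketched as a small regression problem: during the training session, raw gaze samples are paired with the smooth target path the user is following, and a multilayer perceptron learns the user-specific gaze-to-cursor mapping. All data below are synthetic.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 600)
      target = np.c_[np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)]  # moving-target trajectory
      gaze = target + rng.normal(scale=0.08, size=target.shape)     # saccadic jitter

      net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
      net.fit(gaze, target)                     # per-user profile from the training session
      cursor = net.predict(gaze)                # de-jittered cursor positions
      print(np.mean(np.linalg.norm(cursor - target, axis=1)))  # residual deviation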

  1. Training for eye contact modulates gaze following in dogs.

    Science.gov (United States)

    Wallis, Lisa J; Range, Friederike; Müller, Corsin A; Serisier, Samuel; Huber, Ludwig; Virányi, Zsófia

    2015-08-01

    Following human gaze in dogs and human infants can be considered a socially facilitated orientation response, which in object choice tasks is modulated by human-given ostensive cues. Despite their similarities to human infants, and extensive skills in reading human cues in foraging contexts, no evidence that dogs follow gaze into distant space has been found. We re-examined this question, and additionally whether dogs' propensity to follow gaze was affected by age and/or training to pay attention to humans. We tested a cross-sectional sample of 145 border collies aged 6 months to 14 years with different amounts of training over their lives. The dogs' gaze-following response in test and control conditions before and after training for initiating eye contact with the experimenter was compared with that of a second group of 13 border collies trained to touch a ball with their paw. Our results provide the first evidence that dogs can follow human gaze into distant space. Although we found no age effect on gaze following, the youngest and oldest age groups were more distractible, which resulted in a higher number of looks in the test and control conditions. Extensive lifelong formal training as well as short-term training for eye contact decreased dogs' tendency to follow gaze and increased their duration of gaze to the face. The reduction in gaze following after training for eye contact cannot be explained by fatigue or short-term habituation, as in the second group gaze following increased after a different training of the same length. Training for eye contact created a competing tendency to fixate the face, which prevented the dogs from following the directional cues. We conclude that following human gaze into distant space in dogs is modulated by training, which may explain why dogs perform poorly in comparison to other species in this task.

  2. Eye gaze performance for children with severe physical impairments using gaze-based assistive technology-A longitudinal study.

    Science.gov (United States)

    Borgestig, Maria; Sandqvist, Jan; Parsons, Richard; Falkmer, Torbjörn; Hemmingsson, Helena

    2016-01-01

    Gaze-based assistive technology (gaze-based AT) has the potential to provide children affected by severe physical impairments with opportunities for communication and activities. This study aimed to examine changes in eye gaze performance over time (time on task and accuracy) in children with severe physical impairments, without speaking ability, using gaze-based AT. A longitudinal study with a before and after design was conducted on 10 children (aged 1-15 years) with severe physical impairments, who were beginners to gaze-based AT at baseline. Thereafter, all children used the gaze-based AT in daily activities over the course of the study. Compass computer software was used to measure time on task and accuracy with eye selection of targets on screen, and tests were performed with the children at baseline, after 5 months, 9-11 months, and after 15-20 months. Findings showed that the children improved in time on task after 5 months and became more accurate in selecting targets after 15-20 months. This study indicates that these children with severe physical impairments, who were unable to speak, could improve in eye gaze performance. However, the children needed time to practice on a long-term basis to acquire skills needed to develop fast and accurate eye gaze performance.

  3. Experimental test of spatial updating models for monkey eye-head gaze shifts.

    Directory of Open Access Journals (Sweden)

    Tom J Van Grootel

    How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static) or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye and head positions rather than relative eye and head displacements.
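
    The dynamic feedback scheme the authors favor reduces, in its simplest form, to vector arithmetic: the second gaze shift is aimed at the flash's retinal location plus the gaze direction at flash time, minus the gaze direction actually reached after the first shift. A toy numerical example (angles in degrees, values invented):

      import numpy as np

      gaze_at_flash = np.array([5.0, 0.0])      # instantaneous eye-head gaze when T2 flashed
      t2_retinal = np.array([10.0, -4.0])       # retinal location of the flash
      gaze_after_shift = np.array([18.0, 2.0])  # gaze at the end of the first eye-head shift

      t2_in_space = gaze_at_flash + t2_retinal       # flash location in space
      second_shift = t2_in_space - gaze_after_shift  # feedback-updated gaze command
      print(second_shift)  # a purely retinal plan (t2_retinal alone) would miss the target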

  4. Cultural display rules drive eye gaze during thinking.

    Science.gov (United States)

    McCarthy, Anjanie; Lee, Kang; Itakura, Shoji; Muir, Darwin W

    2006-11-01

    The authors measured the eye gaze displays of Canadian, Trinidadian, and Japanese participants as they answered questions for which they either knew, or had to derive, the answers. When they knew the answers, Trinidadians maintained the most eye contact, whereas Japanese maintained the least. When thinking about the answers to questions, Canadians and Trinidadians looked up, whereas Japanese looked down. Thus, for humans, gaze displays while thinking are at least in part culturally determined.

  5. Eye and head movements shape gaze shifts in Indian peafowl.

    Science.gov (United States)

    Yorzinski, Jessica L; Patricelli, Gail L; Platt, Michael L; Land, Michael F

    2015-12-01

    Animals selectively direct their visual attention toward relevant aspects of their environments. They can shift their attention using a combination of eye, head and body movements. While we have a growing understanding of eye and head movements in mammals, we know little about these processes in birds. We therefore measured the eye and head movements of freely behaving Indian peafowl (Pavo cristatus) using a telemetric eye-tracker. Both eye and head movements contributed to gaze changes in peafowl. When gaze shifts were smaller, eye movements played a larger role than when gaze shifts were larger. The duration and velocity of eye and head movements were positively related to the size of the eye and head movements, respectively. In addition, the coordination of eye and head movements in peafowl differed from that in mammals; peafowl exhibited a near-absence of the vestibulo-ocular reflex, which may partly result from the peafowl's ability to move their heads as quickly as their eyes. © 2015. Published by The Company of Biologists Ltd.

  6. Speaking and Listening with the Eyes: Gaze Signaling during Dyadic Interactions.

    Science.gov (United States)

    Ho, Simon; Foulsham, Tom; Kingstone, Alan

    2015-01-01

    Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest.
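
    The cross-correlational analysis can be sketched in a few lines: two time series (speaking and gazing at the partner) are correlated at a range of lags, and the peak lag indicates which signal leads. The toy data below build in a known 30-sample lag.

      import numpy as np

      rng = np.random.default_rng(0)
      speech = (rng.random(600) < 0.5).astype(float)        # 1 = participant speaking
      gaze = np.roll(speech, 30) + rng.normal(0, 0.1, 600)  # gaze trails speech by 30 samples

      def xcorr(a, b, max_lag):
          # Correlation of a[t] with b[t + k] for each lag k.
          a, b = a - a.mean(), b - b.mean()
          out = {}
          for k in range(-max_lag, max_lag + 1):
              if k >= 0:
                  out[k] = float(np.dot(a[:len(a) - k], b[k:]))
              else:
                  out[k] = float(np.dot(a[-k:], b[:len(b) + k]))
          return out

      scores = xcorr(speech, gaze, 60)
      print(max(scores, key=scores.get))  # peak near +30: gaze follows speech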

  8. Eye Pull, Eye Push: Moving Objects between Large Screens and Personal Devices with Gaze and Touch

    OpenAIRE

    Turner, Jayson; Alexander, Jason; Bulling, Andreas; Schmidt, Dominik; Gellersen, Hans

    2013-01-01

    Previous work has validated the eyes and mobile input as a viable approach for pointing at, and selecting, out-of-reach objects. This work presents Eye Pull, Eye Push, a novel interaction concept for content transfer between public and personal devices using gaze and touch. We present three techniques that enable this interaction: Eye Cut & Paste, Eye Drag & Drop, and Eye Summon & Cast. We outline and discuss several scenarios in...

  9. Fusing Eye-gaze and Speech Recognition for Tracking in an Automatic Reading Tutor

    DEFF Research Database (Denmark)

    Rasmussen, Morten Højfeldt; Tan, Zheng-Hua

    2013-01-01

    In this paper we present a novel approach for automatically tracking the reading progress using a combination of eye-gaze tracking and speech recognition. The two are fused by first generating word probabilities based on eye-gaze information and then using these probabilities to augment the langu...
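
    One plausible reading of the fusion step (hedged, since the abstract is truncated): the gaze position induces a probability over the words on the page, which is then interpolated with the recognizer's scores. A toy version with made-up word positions and weights:

      import math

      words = {"the": 10.0, "cat": 62.0, "sat": 118.0}   # word -> x-position on the line
      asr_log = {"the": -2.3, "cat": -1.1, "sat": -1.6}  # ASR log-probabilities
      gaze_x = 60.0                                      # current gaze estimate

      def gaze_logprob(x, sigma=25.0):  # Gaussian window around the gaze point
          raw = {w: math.exp(-((gx - x) ** 2) / (2 * sigma ** 2)) for w, gx in words.items()}
          z = sum(raw.values())
          return {w: math.log(v / z) for w, v in raw.items()}

      lam = 0.5  # interpolation weight (assumed)
      g = gaze_logprob(gaze_x)
      fused = {w: lam * g[w] + (1 - lam) * asr_log[w] for w in words}
      print(max(fused, key=fused.get))  # best guess at the word currently being read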

  10. Eye gaze tracking based on the shape of pupil image

    Science.gov (United States)

    Wang, Rui; Qiu, Jian; Luo, Kaiqing; Peng, Li; Han, Peng

    2018-01-01

    The eye tracker is an important instrument for research in psychology, widely used in studies of attention, visual perception, reading, and other fields. Because of its potential in human-computer interaction, eye gaze tracking has been a research topic in many fields over the last decades. With the development of technology, non-intrusive methods are increasingly preferred. In this paper, we present a method based on the shape of the pupil image to estimate the gaze point of the human eyes without any intrusive devices such as a hat or a pair of glasses. After processing the captured pupil image with an ellipse fitting algorithm, we can determine the direction of fixation from the shape of the pupil. The innovative aspect of this method is that using the shape of the pupil avoids much more complicated algorithms. The proposed approach is helpful for the study of eye gaze tracking: it needs only one camera, without infrared light, and determines the direction of eye gaze from changes in the shape of the pupil; no additional equipment is required.

  11. A multimodal interface to resolve the Midas-Touch problem in gaze controlled wheelchair.

    Science.gov (United States)

    Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, KongFatt; Prasad, Girijesh

    2017-07-01

    Human-computer interaction (HCI) research has been playing an essential role in the field of rehabilitation. The usability of gaze controlled powered wheelchairs is limited due to the Midas-Touch problem. In this work, we propose a multimodal graphical user interface (GUI) to control a powered wheelchair that aims to help upper-limb mobility impaired people in daily living activities. The GUI was designed to include a portable and low-cost eye-tracker and a soft-switch, wherein the wheelchair can be controlled in three different ways: 1) with a touchpad, 2) with an eye-tracker only, and 3) with an eye-tracker and soft-switch. The interface includes nine different commands (eight directions and stop) and is integrated within a powered wheelchair system. We evaluated the performance of the multimodal interface in terms of lap-completion time, the number of commands, and the information transfer rate (ITR) with eight healthy participants. The analysis of the results showed that the eye-tracker with soft-switch provides superior performance, with an ITR of 37.77 bits/min, among the three different conditions...
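
    For reference, ITR figures of this kind are normally computed with Wolpaw's formula from the number of commands N, the selection accuracy P, and the selection rate. A sketch follows; the rate and accuracy are illustrative values chosen to land near the reported 37.77 bits/min, not the study's measurements.

      import math

      def itr_bits_per_min(n, p, selections_per_min):
          # Wolpaw information transfer rate.
          if p >= 1.0:
              return math.log2(n) * selections_per_min
          bits = (math.log2(n) + p * math.log2(p)
                  + (1 - p) * math.log2((1 - p) / (n - 1)))
          return bits * selections_per_min

      print(itr_bits_per_min(n=9, p=0.95, selections_per_min=14))  # ~38 bits/min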

  12. Gaze-and-brain-controlled interfaces for human-computer and human-robot interaction

    Directory of Open Access Journals (Sweden)

    Shishkin S. L.

    2017-09-01

    Background. Human-machine interaction technology has greatly evolved during the last decades, but manual and speech modalities remain single output channels with their typical constraints imposed by the motor system's information transfer limits. Will brain-computer interfaces (BCIs) and gaze-based control be able to convey human commands or even intentions to machines in the near future? We provide an overview of basic approaches in this new area of applied cognitive research. Objective. We test the hypothesis that the use of communication paradigms and a combination of eye tracking with unobtrusive forms of registering brain activity can improve human-machine interaction. Methods and Results. Three groups of ongoing experiments at the Kurchatov Institute are reported. First, we discuss the communicative nature of human-robot interaction, and approaches to building a more efficient technology. Specifically, "communicative" patterns of interaction can be based on joint attention paradigms from developmental psychology, including a mutual "eye-to-eye" exchange of looks between human and robot. Further, we provide an example of "eye mouse" superiority over the computer mouse, here in emulating the task of selecting a moving robot from a swarm. Finally, we demonstrate a passive, noninvasive BCI that uses EEG correlates of expectation. This may become an important filter to separate intentional gaze dwells from non-intentional ones. Conclusion. The current noninvasive BCIs are not well suited for human-robot interaction, and their performance, when they are employed by healthy users, is critically dependent on the impact of the gaze on selection of spatial locations. The new approaches discussed show a high potential for creating alternative output pathways for the human brain. When support from passive BCIs becomes mature, the hybrid technology of the eye-brain-computer (EBCI) interface will have a chance to enable natural, fluent...

  13. Relationship between abstract thinking and eye gaze pattern in patients with schizophrenia

    Science.gov (United States)

    2014-01-01

    Background Effective integration of visual information is necessary to utilize abstract thinking, but patients with schizophrenia have slow eye movement and usually explore limited visual information. This study examines the relationship between abstract thinking ability and the pattern of eye gaze in patients with schizophrenia using a novel theme identification task. Methods Twenty patients with schizophrenia and 22 healthy controls completed the theme identification task, in which subjects selected which word, out of a set of provided words, best described the theme of a picture. Eye gaze while performing the task was recorded by the eye tracker. Results Patients exhibited a significantly lower correct rate for theme identification and lesser fixation and saccade counts than controls. The correct rate was significantly correlated with the fixation count in patients, but not in controls. Conclusions Patients with schizophrenia showed impaired abstract thinking and decreased quality of gaze, which were positively associated with each other. Theme identification and eye gaze appear to be useful as tools for the objective measurement of abstract thinking in patients with schizophrenia. PMID:24739356

  14. Aversive eye gaze during a speech in virtual environment in patients with social anxiety disorder.

    Science.gov (United States)

    Kim, Haena; Shin, Jung Eun; Hong, Yeon-Ju; Shin, Yu-Bin; Shin, Young Seok; Han, Kiwan; Kim, Jae-Jin; Choi, Soo-Hee

    2018-03-01

    One of the main characteristics of social anxiety disorder is excessive fear of social evaluation. In such situations, anxiety can influence gaze behaviour. Thus, the current study adopted virtual reality to examine eye gaze pattern of social anxiety disorder patients while presenting different types of speeches. A total of 79 social anxiety disorder patients and 51 healthy controls presented prepared speeches on general topics and impromptu speeches on self-related topics to a virtual audience while their eye gaze was recorded. Their presentation performance was also evaluated. Overall, social anxiety disorder patients showed less eye gaze towards the audience than healthy controls. Types of speech did not influence social anxiety disorder patients' gaze allocation towards the audience. However, patients with social anxiety disorder showed significant correlations between the amount of eye gaze towards the audience while presenting self-related speeches and social anxiety cognitions. The current study confirms that eye gaze behaviour of social anxiety disorder patients is aversive and that their anxiety symptoms are more dependent on the nature of topic.

  15. Mutual Disambiguation of Eye Gaze and Speech for Sight Translation and Reading

    DEFF Research Database (Denmark)

    Kulkarni, Rucha; Jain, Kritika; Bansal, Himanshu

    2013-01-01

    ...and composition of the two modalities was used for integration. F-measure for Eye-Gaze and Word Accuracy for ASR were used as metrics to evaluate our results. In the reading task, we demonstrated a significant improvement in both Eye-Gaze F-measure and speech Word Accuracy. In the sight translation task, significant...

  16. Objective eye-gaze behaviour during face-to-face communication with proficient alaryngeal speakers: a preliminary study.

    Science.gov (United States)

    Evitts, Paul; Gallop, Robert

    2011-01-01

    There is a large body of research demonstrating the impact of visual information on speaker intelligibility in both normal and disordered speaker populations. However, there is minimal information on which specific visual features listeners find salient during conversational discourse. To investigate listeners' eye-gaze behaviour during face-to-face conversation with normal, laryngeal and proficient alaryngeal speakers. Sixty participants individually participated in a 10-min conversation with one of four speakers (typical laryngeal, tracheoesophageal, oesophageal, electrolaryngeal; 15 participants randomly assigned to one mode of speech). All speakers were > 85% intelligible and were judged to be 'proficient' by two certified speech-language pathologists. Participants were fitted with a head-mounted eye-gaze tracking device (Mobile Eye, ASL) that calculated the region of interest and mean duration of eye-gaze. Self-reported gaze behaviour was also obtained following the conversation using a 10 cm visual analogue scale. While listening, participants viewed the lower facial region of the oesophageal speaker more than the normal or tracheoesophageal speaker. Results of non-hierarchical cluster analyses showed that while listening, the pattern of eye-gaze was predominantly directed at the lower face of the oesophageal and electrolaryngeal speaker and more evenly dispersed among the background, lower face, and eyes of the normal and tracheoesophageal speakers. Finally, results show a low correlation between self-reported eye-gaze behaviour and objective regions of interest data. Overall, results suggest similar eye-gaze behaviour when healthy controls converse with normal and tracheoesophageal speakers and that participants had significantly different eye-gaze patterns when conversing with an oesophageal speaker. Results are discussed in terms of existing eye-gaze data and its potential implications on auditory-visual speech perception. © 2011 Royal College of Speech

  17. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain-machine interfaces

    Science.gov (United States)

    Abbott, W. W.; Faisal, A. A.

    2012-08-01

    Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as a control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with a 0.5–1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits s⁻¹, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs our system yields effective real-time closed loop control of devices (10 ms latency), after just ten minutes of training, which we demonstrate through a novel BMI benchmark—the control of the video arcade game ‘Pong’.
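
    A sketch of the binocular 3D estimate: each eye defines a gaze ray, and the 3D gaze point can be taken as the midpoint of the shortest segment between the two (generally skew) rays. The geometry below is invented; units are metres.

      import numpy as np

      def vergence_point(o1, d1, o2, d2):
          # Midpoint of the closest approach of two rays o + t*d.
          d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
          a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
          w = o1 - o2
          denom = a * c - b * b  # ~0 if the rays are parallel
          s = (b * (d2 @ w) - c * (d1 @ w)) / denom
          t = (a * (d2 @ w) - b * (d1 @ w)) / denom
          return ((o1 + s * d1) + (o2 + t * d2)) / 2

      left = (np.array([-0.03, 0.0, 0.0]), np.array([0.05, 0.0, 1.0]))   # origin, direction
      right = (np.array([0.03, 0.0, 0.0]), np.array([-0.05, 0.0, 1.0]))
      print(vergence_point(*left, *right))  # converges ~0.6 m in front of the eyes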

  18. Biasing moral decisions by exploiting the dynamics of eye gaze.

    Science.gov (United States)

    Pärnamets, Philip; Johansson, Petter; Hall, Lars; Balkenius, Christian; Spivey, Michael J; Richardson, Daniel C

    2015-03-31

    Eye gaze is a window onto cognitive processing in tasks such as spatial memory, linguistic processing, and decision making. We present evidence that information derived from eye gaze can be used to change the course of individuals' decisions, even when they are reasoning about high-level, moral issues. Previous studies have shown that when an experimenter actively controls what an individual sees the experimenter can affect simple decisions with alternatives of almost equal valence. Here we show that if an experimenter passively knows when individuals move their eyes the experimenter can change complex moral decisions. This causal effect is achieved by simply adjusting the timing of the decisions. We monitored participants' eye movements during a two-alternative forced-choice task with moral questions. One option was randomly predetermined as a target. At the moment participants had fixated the target option for a set amount of time we terminated their deliberation and prompted them to choose between the two alternatives. Although participants were unaware of this gaze-contingent manipulation, their choices were systematically biased toward the target option. We conclude that even abstract moral cognition is partly constituted by interactions with the immediate environment and is likely supported by gaze-dependent decision processes. By tracking the interplay between individuals, their sensorimotor systems, and the environment, we can influence the outcome of a decision without directly manipulating the content of the information available to them.
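
    The gaze-contingent logic amounts to a dwell accountant: accumulate fixation time per option and prompt the choice once the predetermined target has been viewed long enough. The 750/250 ms thresholds below follow the paper's description, but treat this as an indicative sketch rather than the study's code.

      import random
      random.seed(0)

      TARGET_MS, OTHER_MS = 750, 250      # dwell thresholds
      dwell = {"A": 0, "B": 0}
      target = random.choice(["A", "B"])  # randomly predetermined target option

      def on_fixation(option, duration_ms):
          # Feed each detected fixation; True means interrupt and prompt now.
          dwell[option] += duration_ms
          other = "B" if option == "A" else "A"
          return dwell[target] >= TARGET_MS and dwell[other] >= OTHER_MS

      for fix in [("A", 300), ("B", 280), ("A", 350), ("B", 300), ("A", 260), ("B", 250)]:
          if on_fixation(*fix):
              print("prompt choice now; target was", target)
              break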

  19. EDITORIAL: Special section on gaze-independent brain-computer interfaces Special section on gaze-independent brain-computer interfaces

    Science.gov (United States)

    Treder, Matthias S.

    2012-08-01

    Restoring the ability to communicate and interact with the environment in patients with severe motor disabilities is a vision that has been the main catalyst of early brain-computer interface (BCI) research. The past decade has brought a diversification of the field. BCIs have been examined as a tool for motor rehabilitation and their benefit in non-medical applications such as mental-state monitoring for improved human-computer interaction and gaming has been confirmed. At the same time, the weaknesses of some approaches have been pointed out. One of these weaknesses is gaze-dependence, that is, the requirement that the user of a BCI system voluntarily directs his or her eye gaze towards a visual target in order to efficiently operate a BCI. This not only contradicts the main doctrine of BCI research, namely that BCIs should be independent of muscle activity, but it can also limit its real-world applicability both in clinical and non-medical settings. It is only in a scenario devoid of any motor activity that a BCI solution is without alternative. Gaze-dependencies have surfaced at two different points in the BCI loop. Firstly, a BCI that relies on visual stimulation may require users to fixate on the target location. Secondly, feedback is often presented visually, which implies that the user may have to move his or her eyes in order to perceive the feedback. This special section was borne out of a BCI workshop on gaze-independent BCIs held at the 2011 Society for Applied Neurosciences (SAN) Conference and has then been extended with additional contributions from other research groups. It compiles experimental and methodological work that aims toward gaze-independent communication and mental-state monitoring. Riccio et al review the current state-of-the-art in research on gaze-independent BCIs [1]. Van der Waal et al present a tactile speller that builds on the stimulation of the fingers of the right and left hand [2]. Höhne et al analyze the ergonomic aspects...

  20. Gaze interaction with textual user interface

    DEFF Research Database (Denmark)

    Paulin Hansen, John; Lund, Haakon; Madsen, Janus Askø

    2015-01-01

    ...option for text navigation. People readily understood how to execute RSVP command prompts and a majority of them preferred gaze input to a pen pointer. We present the concept of a smartwatch that can track eye movements and mediate command options whenever in proximity of intelligent devices...

  1. Gaze-based interaction with public displays using off-the-shelf components

    DEFF Research Database (Denmark)

    San Agustin, Javier; Hansen, John Paulin; Tall, Martin Henrik

    Eye gaze can be used to interact with high-density information presented on large displays. We have built a system employing off-the-shelf hardware components and open-source gaze tracking software that enables users to interact with an interface displayed on a 55” screen using their eye movement...

  2. "Gaze Leading": Initiating Simulated Joint Attention Influences Eye Movements and Choice Behavior

    Science.gov (United States)

    Bayliss, Andrew P.; Murphy, Emily; Naughtin, Claire K.; Kritikos, Ada; Schilbach, Leonhard; Becker, Stefanie I.

    2013-01-01

    Recent research in adults has made great use of the gaze cuing paradigm to understand the behavior of the follower in joint attention episodes. We implemented a gaze leading task to investigate the initiator: the other person in these triadic interactions. In a series of gaze-contingent eye-tracking studies, we show that fixation dwell time upon…

  3. The EyeHarp: A Gaze-Controlled Digital Musical Instrument.

    Science.gov (United States)

    Vamvakousis, Zacharias; Ramirez, Rafael

    2016-01-01

    We present and evaluate the EyeHarp, a new gaze-controlled Digital Musical Instrument, which aims to enable people with severe motor disabilities to learn, perform, and compose music using only their gaze as a control mechanism. It consists of (1) a step-sequencer layer, which serves for constructing chords/arpeggios, and (2) a melody layer, for playing melodies and changing the chords/arpeggios. We have conducted a pilot evaluation of the EyeHarp involving 39 participants with no disabilities, from both a performer and an audience perspective. In the first case, eight people with normal vision and no motor disability participated in a music-playing session in which both quantitative and qualitative data were collected. In the second case, 31 people qualitatively evaluated the EyeHarp in a concert setting consisting of two parts: a solo performance part, and an ensemble (EyeHarp, two guitars, and flute) performance part. The obtained results indicate that, like traditional musical instruments, the proposed digital musical instrument has a steep learning curve and allows for expressive performances from both the performer and audience perspective.

  4. Gaze Estimation for Off-Angle Iris Recognition Based on the Biometric Eye Model

    Energy Technology Data Exchange (ETDEWEB)

    Karakaya, Mahmut [ORNL]; Barstow, Del R [ORNL]; Santos-Villalobos, Hector J [ORNL]; Thompson, Joseph W [ORNL]; Bolme, David S [ORNL]; Boehnen, Chris Bensing [ORNL]

    2013-01-01

    Iris recognition is among the highest-accuracy biometrics. However, its accuracy relies on controlled, high-quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ANONYMIZED biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from elliptical features of the iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up table generated using our biologically inspired biometric eye model and find the closest feature point in the look-up table to estimate the gaze. Based on results from real images, the proposed method shows effective gaze estimation accuracy for our biometric eye model, with an average error of approximately 3.5 degrees over a 50 degree range.
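
    The look-up-table step can be sketched as nearest-neighbour matching between observed ellipse features and model-generated entries. The forward model below is a crude stand-in (a circle viewed off-axis foreshortens as the cosine of the angle), not the ORNL eye model.

      import numpy as np

      angles = np.arange(0.0, 51.0, 1.0)               # candidate gaze angles, degrees
      lut = np.c_[angles, np.cos(np.radians(angles))]  # [angle, minor/major axis ratio]

      observed_ratio = 0.82                            # from segmented pupil/iris boundaries
      row = lut[np.argmin(np.abs(lut[:, 1] - observed_ratio))]
      print("estimated gaze angle:", row[0], "deg")    # nearest look-up-table entry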

  5. Real-time inference of word relevance from electroencephalogram and eye gaze

    Science.gov (United States)

    Wenzel, M. A.; Bogojeski, M.; Blankertz, B.

    2017-10-01

Objective. Brain-computer interfaces can potentially map the subjective relevance of the visual surroundings, based on neural activity and eye movements, in order to infer the interest of a person in real time. Approach. Readers looked for words belonging to one of five semantic categories while a stream of words passed at different locations on the screen. Which words, and thus which semantic category, interested each reader was estimated in real time from the electroencephalogram (EEG) and the eye gaze. Main results. Words that were subjectively relevant could be decoded online from the signals. The estimation resulted in an average rank of 1.62 for the category of interest among the five categories after a hundred words had been read. Significance. It was demonstrated that the interest of a reader can be inferred online from EEG and eye tracking signals, which could potentially be used in novel types of adaptive software that enrich the interaction by adding implicit information about the interest of the user to the explicit interaction. The study is characterised by the following novelties. Interpretation with respect to word meaning was necessary, in contrast to the usual practice in brain-computer interfacing, where stimulus recognition is sufficient. The typical counting task was avoided because it would not be sensible for implicit relevance detection. Several words were displayed at the same time, in contrast to the typical sequences of single stimuli. Neural activity was related to the words via eye tracking, and the words were scanned without restrictions on eye movements.
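
    The reported ranking measure can be illustrated with a toy sketch: per-word relevance scores (arbitrary numbers standing in for combined EEG/gaze classifier output) are accumulated per semantic category, and the rank of the reader's true category of interest is read off. All names are assumptions, not the authors' code.

    ```python
    from collections import defaultdict

    def rank_of_target(scored_words, target_category):
        """scored_words: iterable of (category, relevance_score), one per word read."""
        evidence = defaultdict(float)
        for category, score in scored_words:
            evidence[category] += score                    # accumulate evidence per category
        ranking = sorted(evidence, key=evidence.get, reverse=True)
        return ranking.index(target_category) + 1          # 1 = top-ranked category

    # Synthetic scores, not real classifier output.
    stream = [("animals", 0.9), ("tools", 0.2), ("animals", 0.7), ("plants", 0.4)]
    print(rank_of_target(stream, "animals"))               # -> 1
    ```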

  6. Eye-gaze independent EEG-based brain-computer interfaces for communication

    Science.gov (United States)

    Riccio, A.; Mattia, D.; Simione, L.; Olivetti, M.; Cincotti, F.

    2012-08-01

The present review systematically examines the literature reporting gaze-independent interaction modalities in non-invasive brain-computer interfaces (BCIs) for communication. BCIs measure signals related to specific brain activity and translate them into device control signals. This technology can be used to provide users with severe motor disability (e.g. late-stage amyotrophic lateral sclerosis (ALS); acquired brain injury) with an assistive device that does not rely on muscular contraction. Most studies on BCIs have explored mental tasks and paradigms using the visual modality. Considering that oculomotor control can deteriorate in ALS patients and that other potential users may also have impaired visual function, tactile and auditory modalities have been investigated over the past years to seek alternative BCI systems that are independent of vision. In addition, various attentional mechanisms, such as covert attention and feature-directed attention, have been investigated to develop gaze-independent visual-based BCI paradigms. Three areas of research were considered in the present review: (i) auditory BCIs, (ii) tactile BCIs and (iii) independent visual BCIs. Out of a total of 130 search results, 34 articles were selected on the basis of pre-defined exclusion criteria: 13 articles dealt with independent visual BCIs, 15 with auditory BCIs and six with tactile BCIs. From the review of the available literature, it can be concluded that a crucial point is the trade-off between BCI systems/paradigms that offer high accuracy and speed but are highly demanding in terms of attention and memory load, and systems that require lower cognitive effort but convey a limited amount of communicable information. These issues should be treated as priorities in future studies to meet users' requirements in real-life scenarios.

  7. Testing the dual-route model of perceived gaze direction: Linear combination of eye and head cues.

    Science.gov (United States)

    Otsuka, Yumiko; Mareschal, Isabelle; Clifford, Colin W G

    2016-06-01

    We have recently proposed a dual-route model of the effect of head orientation on perceived gaze direction (Otsuka, Mareschal, Calder, & Clifford, 2014; Otsuka, Mareschal, & Clifford, 2015), which computes perceived gaze direction as a linear combination of eye orientation and head orientation. By parametrically manipulating eye orientation and head orientation, we tested the adequacy of a linear model to account for the effect of horizontal head orientation on perceived direction of gaze. Here, participants adjusted an on-screen pointer toward the perceived gaze direction in two image conditions: Normal condition and Wollaston condition. Images in the Normal condition included a change in the visible part of the eye along with the change in head orientation, while images in the Wollaston condition were manipulated to have identical eye regions across head orientations. Multiple regression analysis with explanatory variables of eye orientation and head orientation revealed that linear models account for most of the variance both in the Normal condition and in the Wollaston condition. Further, we found no evidence that the model with a nonlinear term explains significantly more variance. Thus, the current study supports the dual-route model that computes the perceived gaze direction as a linear combination of eye orientation and head orientation.
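
    As a worked illustration of such a linear model, the sketch below fits perceived gaze direction as a weighted sum of eye and head orientation by least squares. The data and weights are synthetic, not the published estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    eye = np.array([-20., -10., 0., 10., 20., -20., 0., 20.])    # eye orientation (deg)
    head = np.array([-15., -15., 0., 15., 15., 15., -15., 0.])   # head orientation (deg)
    perceived = 0.9 * eye + 0.3 * head + rng.normal(0, 1, eye.size)  # synthetic judgments

    X = np.column_stack([eye, head, np.ones_like(eye)])          # linear model with intercept
    beta, *_ = np.linalg.lstsq(X, perceived, rcond=None)
    print(f"eye weight {beta[0]:.2f}, head weight {beta[1]:.2f}")  # recovers ~0.9 and ~0.3
    ```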

  8. Evaluation of a low-cost open-source gaze tracker

    DEFF Research Database (Denmark)

    San Agustin, Javier; Jensen, Henrik Tomra Skovsgaard Hegner; Møllenbach, Emilie

    2010-01-01

This paper presents a low-cost gaze tracking system that is based on a webcam mounted close to the user's eye. The performance of the gaze tracker was evaluated in an eye-typing task using two different typing applications. Participants could type between 3.56 and 6.78 words per minute, depending on the typing system used. A pilot study to assess the usability of the system was also carried out in the home of a user with severe motor impairments. The user successfully typed on a wall-projected interface using his eye movements.

  9. Gaze-independent brain-computer interfaces based on covert attention and feature attention

    Science.gov (United States)

    Treder, M. S.; Schmidt, N. M.; Blankertz, B.

    2011-10-01

    There is evidence that conventional visual brain-computer interfaces (BCIs) based on event-related potentials cannot be operated efficiently when eye movements are not allowed. To overcome this limitation, the aim of this study was to develop a visual speller that does not require eye movements. Three different variants of a two-stage visual speller based on covert spatial attention and non-spatial feature attention (i.e. attention to colour and form) were tested in an online experiment with 13 healthy participants. All participants achieved highly accurate BCI control. They could select one out of thirty symbols (chance level 3.3%) with mean accuracies of 88%-97% for the different spellers. The best results were obtained for a speller that was operated using non-spatial feature attention only. These results show that, using feature attention, it is possible to realize high-accuracy, fast-paced visual spellers that have a large vocabulary and are independent of eye gaze.

  10. Looking at Eye Gaze Processing and Its Neural Correlates in Infancy--Implications for Social Development and Autism Spectrum Disorder

    Science.gov (United States)

    Hoehl, Stefanie; Reid, Vincent M.; Parise, Eugenio; Handl, Andrea; Palumbo, Letizia; Striano, Tricia

    2009-01-01

    The importance of eye gaze as a means of communication is indisputable. However, there is debate about whether there is a dedicated neural module, which functions as an eye gaze detector and when infants are able to use eye gaze cues in a referential way. The application of neuroscience methodologies to developmental psychology has provided new…

  11. Eye blinking in an avian species is associated with gaze shifts.

    Science.gov (United States)

    Yorzinski, Jessica L

    2016-08-30

    Even when animals are actively monitoring their environment, they lose access to visual information whenever they blink. They can strategically time their blinks to minimize information loss and improve visual functioning but we have little understanding of how this process operates in birds. This study therefore examined blinking in freely-moving peacocks (Pavo cristatus) to determine the relationship between their blinks, gaze shifts, and context. Peacocks wearing a telemetric eye-tracker were exposed to a taxidermy predator (Vulpes vulpes) and their blinks and gaze shifts were recorded. Peacocks blinked during the majority of their gaze shifts, especially when gaze shifts were large, thereby timing their blinks to coincide with periods when visual information is already suppressed. They inhibited their blinks the most when they exhibited high rates of gaze shifts and were thus highly alert. Alternative hypotheses explaining the link between blinks and gaze shifts are discussed.

  12. Face Age and Eye Gaze Influence Older Adults' Emotion Recognition.

    Science.gov (United States)

    Campbell, Anna; Murray, Janice E; Atkinson, Lianne; Ruffman, Ted

    2017-07-01

Eye gaze has been shown to influence emotion recognition. In addition, older adults (over 65 years) are not as influenced by gaze direction cues as young adults (18-30 years). Nevertheless, these differences might stem from the use of young to middle-aged faces in emotion recognition research because older adults have an attention bias toward old-age faces. Therefore, using older face stimuli might allow older adults to process gaze direction cues to influence emotion recognition. To investigate this idea, young and older adults completed an emotion recognition task with young and older face stimuli displaying direct and averted gaze, assessing labeling accuracy for angry, disgusted, fearful, happy, and sad faces. Direct gaze rather than averted gaze improved young adults' recognition of emotions in young and older faces, but for older adults this was true only for older faces. The current study highlights the impact of stimulus face age and gaze direction on emotion recognition in young and older adults. The use of young face stimuli with direct gaze in most research might contribute to age-related emotion recognition differences.

  13. Eye Gaze Correlates of Motor Impairment in VR Observation of Motor Actions.

    Science.gov (United States)

    Alves, J; Vourvopoulos, A; Bernardino, A; Bermúdez I Badia, S

    2016-01-01

This article is part of the Focus Theme of Methods of Information in Medicine on "Methodologies, Models and Algorithms for Patients Rehabilitation". The aim was to identify eye gaze correlates of motor impairment in a virtual reality motor observation task in a study with healthy participants and stroke patients. Participants consisted of a group of healthy subjects (N = 20) and a group of stroke survivors (N = 10). Both groups were required to observe a simple reach-and-grab and place-and-release task in a virtual environment. Additionally, healthy subjects were required to observe the task in a normal condition and in a constrained-movement condition. Eye movements were recorded during the observation task for later analysis. For healthy participants, results showed differences in gaze metrics when comparing the normal and arm-constrained conditions. Differences in gaze metrics were also found when comparing the dominant and non-dominant arm for saccades and smooth pursuit events. For stroke patients, results showed longer smooth pursuit segments in action observation when observing the paretic arm, thus providing evidence that the affected circuitry may be activated for eye gaze control during observation of the simulated motor action. This study suggests that neural motor circuits are involved, at multiple levels, in the observation of motor actions displayed in a virtual reality environment. Thus, eye tracking combined with action observation tasks in a virtual reality display can be used to monitor motor deficits derived from stroke, and consequently can also be used for the rehabilitation of stroke patients.

  14. Use of a Remote Eye-Tracker for the Analysis of Gaze during Treadmill Walking and Visual Stimuli Exposition

    Directory of Open Access Journals (Sweden)

    V. Serchi

    2016-01-01

The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, point-of-gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study assesses the usability limits and measurement reliability of a remote eye-tracker during treadmill walking while visual stimuli are projected. During treadmill walking, the head remained within the remote eye-tracker workspace. Generally, the quality of the point-of-gaze measurements declined as the distance from the remote eye-tracker increased, and data loss occurred for large gaze angles. The stimulus location (a dot-target) did not influence point-of-gaze accuracy, precision, and trackability during either standing or walking. Similar results were obtained when the dot-target was replaced by a static or moving 2D target and a "region of interest" analysis was applied. These findings support the feasibility of using a remote eye-tracker for the analysis of gaze during treadmill walking in virtual reality environments.
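
    The three measurement qualities discussed above are commonly computed as follows; this is a generic sketch under assumed conventions (normalized screen coordinates, NaN rows for lost samples), not the paper's code.

    ```python
    import numpy as np

    def gaze_quality(samples, target):
        """samples: N x 2 gaze points with NaN rows where tracking was lost."""
        valid = samples[~np.isnan(samples).any(axis=1)]
        accuracy = np.linalg.norm(valid.mean(axis=0) - target)   # systematic offset from target
        precision = np.sqrt((np.diff(valid, axis=0) ** 2).sum(axis=1).mean())  # sample-to-sample RMS
        trackability = len(valid) / len(samples)                 # fraction of valid samples
        return accuracy, precision, trackability

    s = np.array([[0.51, 0.49], [0.52, 0.50], [np.nan, np.nan], [0.50, 0.51]])
    print(gaze_quality(s, np.array([0.50, 0.50])))
    ```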

  15. A 2D eye gaze estimation system with low-resolution webcam images

    Directory of Open Access Journals (Sweden)

    Kim Jin

    2011-01-01

In this article, a low-cost system for 2D eye gaze estimation with low-resolution webcam images is presented. Two algorithms are proposed for this purpose: one for eyeball detection with a stable approximate pupil-center, and the other for detecting the direction of eye movements. The eyeball is detected using the deformable angular integral search by minimum intensity (DAISMI) algorithm. The deformable template-based 2D gaze estimation (DTBGE) algorithm is employed as a noise filter for making stable movement decisions. While DTBGE employs binary images, DAISMI employs gray-scale images. Right- and left-eye estimates are evaluated separately. DAISMI finds the stable approximate pupil-center location by calculating the mass-center of the eyeball border vertices, which is then employed for initial deformable template alignment. DTBGE starts running with this initial alignment and updates the template alignment frame by frame using the resulting eye movements and eyeball size. The horizontal and vertical deviation of eye movements, relative to eyeball size, is treated as directly proportional to the deviation of cursor movements for a given screen size and resolution. The core advantage of the system is that it does not employ the real pupil-center as a reference point for gaze estimation, which makes it more robust to corneal reflection. Visual angle accuracy is used for the evaluation and benchmarking of the system. The effectiveness of the proposed system is presented and experimental results are shown.
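
    Two of the ideas above are easy to make concrete: the approximate pupil center as the mass-center of the detected eyeball-border vertices, and eye deviation scaled by eyeball size into a proportional cursor offset. This is a hypothetical sketch of those steps, not the DAISMI/DTBGE implementation.

    ```python
    import numpy as np

    def approx_pupil_center(border_vertices):
        """Mass-center of eyeball border vertices (N x 2 pixel coordinates)."""
        return border_vertices.mean(axis=0)

    def cursor_offset(eye_deviation_px, eyeball_size_px, screen_size_px):
        """Eye deviation relative to eyeball size, scaled to screen resolution."""
        return eye_deviation_px / eyeball_size_px * screen_size_px

    vertices = np.array([[10., 12.], [14., 11.], [12., 16.], [9., 15.]])  # detected border points
    center = approx_pupil_center(vertices)                   # stable reference point
    print(cursor_offset(center - [11., 13.], 30.0, np.array([1920., 1080.])))
    ```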

  16. A novel attention training paradigm based on operant conditioning of eye gaze: Preliminary findings.

    Science.gov (United States)

    Price, Rebecca B; Greven, Inez M; Siegle, Greg J; Koster, Ernst H W; De Raedt, Rudi

    2016-02-01

Inability to engage with positive stimuli is a widespread problem associated with negative mood states across many conditions, from low self-esteem to anhedonic depression. Though attention retraining procedures have shown promise as interventions in some clinical populations, novel procedures may be necessary to reliably attenuate chronic negative mood in refractory clinical populations (e.g., clinical depression) through, for example, more active, adaptive learning processes. In addition, a focus on individual difference variables predicting intervention outcome may improve the ability to provide such targeted interventions efficiently. To provide preliminary proof-of-principle, we tested a novel paradigm using operant conditioning to train eye gaze patterns toward happy faces. Thirty-two healthy undergraduates were randomized to receive operant conditioning of eye gaze toward happy faces (train-happy) or neutral faces (train-neutral). At the group level, the train-happy condition attenuated sad mood increases following a stressful task, in comparison to train-neutral. In individual differences analyses, greater physiological reactivity (pupil dilation) in response to happy faces (during an emotional face-search task at baseline) predicted decreased mood reactivity after stress. These preliminary results suggest that operant conditioning of eye gaze toward happy faces buffers against stress-induced effects on mood, particularly in individuals who show sufficient baseline neural engagement with happy faces. Eye gaze patterns to emotional face arrays may have a causal relationship with mood reactivity. Personalized medicine research in depression may benefit from novel cognitive training paradigms that shape eye gaze patterns through feedback. Baseline neural function (pupil dilation) may be a key mechanism, aiding in iterative refinement of this approach.

  17. Eye gaze during comprehension of American Sign Language by native and beginning signers.

    Science.gov (United States)

    Emmorey, Karen; Thompson, Robin; Colvin, Rachael

    2009-01-01

    An eye-tracking experiment investigated where deaf native signers (N = 9) and hearing beginning signers (N = 10) look while comprehending a short narrative and a spatial description in American Sign Language produced live by a fluent signer. Both groups fixated primarily on the signer's face (more than 80% of the time) but differed with respect to fixation location. Beginning signers fixated on or near the signer's mouth, perhaps to better perceive English mouthing, whereas native signers tended to fixate on or near the eyes. Beginning signers shifted gaze away from the signer's face more frequently than native signers, but the pattern of gaze shifts was similar for both groups. When a shift in gaze occurred, the sign narrator was almost always looking at his or her hands and was most often producing a classifier construction. We conclude that joint visual attention and attention to mouthing (for beginning signers), rather than linguistic complexity or processing load, affect gaze fixation patterns during sign language comprehension.

  18. A free geometry model-independent neural eye-gaze tracking system

    Directory of Open Access Journals (Sweden)

    Gneo Massimo

    2012-11-01

Background. Eye Gaze Tracking Systems (EGTSs) estimate the Point Of Gaze (POG) of a user. In diagnostic applications EGTSs are used to study oculomotor characteristics and abnormalities, whereas in interactive applications EGTSs are proposed as input devices for human-computer interfaces (HCI), e.g. to move a cursor on the screen when mouse control is not possible, such as in the case of assistive devices for people suffering from locked-in syndrome. If the user's head remains still and the cornea rotates around its fixed centre, the pupil follows the eye in the images captured by one or more cameras, whereas the outer corneal reflection generated by an IR light source, i.e. the glint, can be taken as a fixed reference point. According to the so-called pupil centre corneal reflection (PCCR) method, the POG can thus be estimated from the pupil-glint vector. Methods. A new model-independent EGTS based on the PCCR is proposed. The mapping function, based on artificial neural networks, avoids any specific model assumption or approximation either for the user's eye physiology or for the initial system setup, admitting free geometric positioning of the user and the system components. The robustness of the proposed EGTS is proven by assessing its accuracy when tested on real data coming from (i) different healthy users, (ii) different geometric settings of the camera and the light sources, and (iii) different protocols based on the observation of points on a calibration grid and halfway points of a test grid. Results. The achieved accuracy is approximately 0.49°, 0.41°, and 0.62° for the horizontal, vertical, and radial error of the POG, respectively. Conclusions. The results prove the validity of the proposed approach, as the proposed system performs better than EGTSs designed for HCI which, even when equipped with superior hardware, show accuracy values in the range 0.6°-1°.
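
    A minimal sketch of the model-free mapping idea, assuming scikit-learn: a small neural network learns the mapping from pupil-glint vectors to on-screen POG coordinates from calibration samples, with no geometric eye or camera model. The synthetic calibration data below stands in for a real calibration grid.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    pupil_glint = rng.uniform(-1, 1, (200, 2))                # calibration: pupil-glint vectors
    pog = 500 * pupil_glint + 40 * pupil_glint**2 + 960       # synthetic screen targets (px)

    net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    net.fit(pupil_glint, pog)                                 # learn the mapping from data alone
    print(net.predict([[0.2, -0.3]]))                         # estimated point of gaze (px)
    ```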

  19. Eye Contact and Fear of Being Laughed at in a Gaze Discrimination Task

    Directory of Open Access Journals (Sweden)

    Jorge Torres-Marín

    2017-11-01

Current approaches conceptualize gelotophobia as a personality trait characterized by a disproportionate fear of being laughed at by others. Consistent with this perspective, gelotophobes are also described as neurotic and introverted and as having a paranoid tendency to anticipate derision and mockery situations. Although research on gelotophobia has significantly progressed over the past two decades, no evidence exists concerning the potential effects of gelotophobia on reactions to eye contact. Previous research has pointed to difficulties in discriminating gaze direction as the basis of possible misinterpretations of others' intentions or mental states. The aim of the present research was to examine whether gelotophobia predisposition modulates the effects of eye contact (i.e., gaze discrimination) when processing faces portraying several emotional expressions. In two different experiments, participants performed an experimental gaze discrimination task in which they responded, as quickly and accurately as possible, to the eyes' directions on faces displaying a happy, angry, fearful, neutral, or sad emotional expression. In particular, we expected trait gelotophobia to modulate the eye contact effect, showing specific group differences in the happiness condition. The results of Study 1 (N = 40) indicated that gelotophobes made more errors than non-gelotophobes did in the gaze discrimination task. In contrast to our initial hypothesis, the happiness expression did not have any special role in the observed differences between individuals with high vs. low trait gelotophobia. In Study 2 (N = 40), we replicated the pattern of data concerning gaze discrimination ability, even after controlling for individuals' scores on social anxiety. Furthermore, in our second experiment, we found that gelotophobes did not exhibit any problem with identifying others' emotions, or a general incorrect attribution of affective features, such as valence

  20. Effects of Observing Eye Contact on Gaze Following in High-Functioning Autism

    NARCIS (Netherlands)

    Böckler, A.; Timmermans, B.; Sebanz, N.; Vogeley, K.; Schilbach, L.

    2014-01-01

    Observing eye contact between others enhances the tendency to subsequently follow their gaze and has been suggested to function as a social signal that adds meaning to an upcoming action or event. The present study investigated effects of observed eye contact in high-functioning autism (HFA). Two

  1. In the presence of conflicting gaze cues, fearful expression and eye-size guide attention.

    Science.gov (United States)

    Carlson, Joshua M; Aday, Jacob

    2017-10-19

    Humans are social beings that often interact in multi-individual environments. As such, we are frequently confronted with nonverbal social signals, including eye-gaze direction, from multiple individuals. Yet, the factors that allow for the prioritisation of certain gaze cues over others are poorly understood. Using a modified conflicting gaze paradigm, we tested the hypothesis that fearful gaze would be favoured amongst competing gaze cues. We further hypothesised that this effect is related to the increased sclera exposure, which is characteristic of fearful expressions. Across three experiments, we found that fearful, but not happy, gaze guides observers' attention over competing non-emotional gaze. The guidance of attention by fearful gaze appears to be linked to increased sclera exposure. However, differences in sclera exposure do not prioritise competing gazes of other types. Thus, fearful gaze guides attention among competing cues and this effect is facilitated by increased sclera exposure - but increased sclera exposure per se does not guide attention. The prioritisation of fearful gaze over non-emotional gaze likely represents an adaptive means of selectively attending to survival-relevant spatial locations.

  2. Modeling eye gaze patterns in clinician-patient interaction with lag sequential analysis.

    Science.gov (United States)

    Montague, Enid; Xu, Jie; Chen, Ping-Yu; Asan, Onur; Barrett, Bruce P; Chewning, Betty

    2011-10-01

    The aim of this study was to examine whether lag sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multiuser health care settings in which trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Nonverbal communication patterns are important aspects of clinician-patient interactions and may affect patient outcomes. The eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag sequential method to identify significant behavior sequences. Lag sequential analysis included both event-based lag and time-based lag. Results from event-based lag analysis showed that the patient's gaze followed that of the clinician, whereas the clinician's gaze did not follow the patient's. Time-based sequential analysis showed that responses from the patient usually occurred within 2 s after the initial behavior of the clinician. Our data suggest that the clinician's gaze significantly affects the medical encounter but that the converse is not true. Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs.
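
    A toy sketch of the event-based part of such an analysis: lag-1 transitions between coded gaze events are counted and compared with the frequency expected if events were paired at random. Real lag sequential analysis adds significance testing (e.g. z-scores); the event names here are invented.

    ```python
    from collections import Counter
    from itertools import pairwise  # Python 3.10+

    events = ["doc_gazes_patient", "pat_gazes_doctor", "doc_gazes_chart",
              "pat_gazes_doctor", "doc_gazes_patient", "pat_gazes_doctor"]

    observed = Counter(pairwise(events))          # lag-1 transition counts
    freq = Counter(events)
    n_pairs, n = len(events) - 1, len(events)
    for (a, b), count in observed.items():
        expected = n_pairs * (freq[a] / n) * (freq[b] / n)   # chance-pairing baseline
        print(f"{a} -> {b}: observed {count}, expected {expected:.2f}")
    ```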

  3. Simple gaze-contingent cues guide eye movements in a realistic driving simulator

    Science.gov (United States)

    Pomarjanschi, Laura; Dorr, Michael; Bex, Peter J.; Barth, Erhardt

    2013-03-01

Looking at the right place at the right time is a critical component of driving skill. Therefore, gaze guidance has the potential to become a valuable driving assistance system. In previous work, we have already shown that complex gaze-contingent stimuli can guide attention and reduce the number of accidents in a simple driving simulator. We here set out to investigate whether cues that are simple enough to be implemented in a real car can also capture gaze during a more realistic driving task in a high-fidelity driving simulator. We used a state-of-the-art, wide-field-of-view driving simulator with an integrated eye tracker. Gaze-contingent warnings were implemented using two arrays of light-emitting diodes horizontally fitted below and above the simulated windshield. Thirteen volunteering subjects drove along predetermined routes in a simulated environment populated with autonomous traffic. Warnings were triggered during the approach to half of the intersections, cueing either towards the right or to the left. The remaining intersections were not cued, and served as controls. The analysis of the recorded gaze data revealed that the gaze-contingent cues did indeed have a gaze guiding effect, triggering a significant shift in gaze position towards the highlighted direction. This gaze shift was not accompanied by changes in driving behaviour, suggesting that the cues do not interfere with the driving task itself.

  4. Parent Perception of Two Eye-Gaze Control Technology Systems in Young Children with Cerebral Palsy: Pilot Study.

    Science.gov (United States)

    Karlsson, Petra; Wallen, Margaret

    2017-01-01

    Eye-gaze control technology enables people with significant physical disability to access computers for communication, play, learning and environmental control. This pilot study used a multiple case study design with repeated baseline assessment and parents' evaluations to compare two eye-gaze control technology systems to identify any differences in factors such as ease of use and impact of the systems for their young children. Five children, aged 3 to 5 years, with dyskinetic cerebral palsy, and their families participated. Overall, families were satisfied with both the Tobii PCEye Go and myGaze® eye tracker, found them easy to position and use, and children learned to operate them quickly. This technology provides young children with important opportunities for learning, play, leisure, and developing communication.

  5. Method of Menu Selection by Gaze Movement Using AC EOG Signals

    Science.gov (United States)

    Kanoh, Shin'ichiro; Futami, Ryoko; Yoshinobu, Tatsuo; Hoshimiya, Nozomu

A method to detect the direction and the distance of voluntary eye gaze movements from EOG (electrooculogram) signals was proposed and tested. In this method, AC-amplified vertical and horizontal transient EOG signals were classified into 8 classes of direction and 2 classes of distance of voluntary eye gaze movements. The horizontal and vertical EOGs at each sampling time during an eye gaze movement were treated as a two-dimensional vector, and the center of gravity of the sample vectors whose norms were more than 80% of the maximum norm was used as the feature vector to be classified. Classification using the k-nearest neighbor algorithm yielded average correct detection rates of 98.9%, 98.7%, and 94.4% for the three subjects, respectively. This method avoids strict EOG-based eye tracking, which requires DC amplification of very small signals. It would be useful for developing robust human interface systems based on menu selection for severely paralyzed patients.
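
    The feature extraction and classification steps described above can be sketched as follows, with synthetic training data in place of recorded EOGs; the 80%-of-maximum-norm rule is taken from the abstract, everything else is an assumption.

    ```python
    import numpy as np
    from collections import Counter

    def cog_feature(eog_xy):
        """Center of gravity of (horizontal, vertical) EOG samples whose norm
        is at least 80% of the trial maximum."""
        norms = np.linalg.norm(eog_xy, axis=1)
        return eog_xy[norms >= 0.8 * norms.max()].mean(axis=0)

    def knn_predict(feature, train_feats, train_labels, k=3):
        order = np.argsort(np.linalg.norm(train_feats - feature, axis=1))
        return Counter(train_labels[order[:k]]).most_common(1)[0][0]

    rng = np.random.default_rng(1)
    directions = np.repeat([[1, 0], [0, 1], [-1, 0], [0, -1]], 4, axis=0)
    train_feats = directions + rng.normal(0, 0.1, (16, 2))   # synthetic calibration trials
    train_labels = np.repeat(["right", "up", "left", "down"], 4)
    trial = rng.normal([0.9, 0.1], 0.05, (50, 2))            # synthetic rightward movement
    print(knn_predict(cog_feature(trial), train_feats, train_labels))  # -> right
    ```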

  6. Dynamic modeling of patient and physician eye gaze to understand the effects of electronic health records on doctor-patient communication and attention.

    Science.gov (United States)

    Montague, Enid; Asan, Onur

    2014-03-01

The aim of this study was to examine eye gaze patterns between patients and physicians while electronic health records were used to support patient care. Eye gaze provides an indication of physician attention to the patient, patient/physician interaction, and physician behaviors such as searching for and documenting information. A field study was conducted in which 100 patient visits were observed and video recorded in a primary care clinic. Videos were then coded for gaze behaviors; patients' and physicians' gaze at each other and at artifacts such as electronic health records was coded using a pre-established objective coding scheme. Gaze data were then analyzed using lag sequential methods. Results showed that several eye gaze patterns were significantly dependent on each other. All doctor-initiated gaze patterns were followed by patient gaze patterns. Some patient-initiated gaze patterns were also followed by doctor gaze patterns, a result unlike the findings of previous studies. Health information technology appears to contribute to some of the new significant patterns that have emerged. Differences were also found in gaze patterns related to technology that differ from patterns identified in studies with paper charts. Several sequences related to patient-doctor-technology were also significant. Electronic health records affect the patient-physician eye contact dynamic differently than paper charts. This study identified several patterns of patient-physician interaction with electronic health record systems. Consistent with previous studies, physician-initiated gaze is an important driver of the interactions between patient and physician and between patient and technology.

  7. The duality of gaze: Eyes extract and signal social information during sustained cooperative and competitive dyadic gaze

    Directory of Open Access Journals (Sweden)

    Michelle eJarick

    2015-09-01

In contrast to nonhuman primate eyes, which have a dark sclera surrounding a dark iris, human eyes have a white sclera that surrounds a dark iris. This high-contrast morphology allows humans to determine quickly and easily where others are looking and to infer what they are attending to. In recent years an enormous body of work has used photos and schematic images of faces to study these aspects of social attention, e.g., the selection of the eyes of others and the shift of attention to where those eyes are directed. However, evolutionary theory holds that humans did not develop a high-contrast morphology simply to use the eyes of others as attentional cues; rather, they sacrificed camouflage for communication, that is, to signal their thoughts and intentions to others. In the present study we demonstrate the importance of this by taking as our starting point the hypothesis that a cornerstone of nonverbal communication is the eye contact between individuals and the time that it is held. In a single simple study we show experimentally that the effect of eye contact can be quickly and profoundly altered merely by having participants, who had never met before, play a game in a cooperative or competitive manner. After the game, participants were asked to make eye contact for a prolonged period of time (10 minutes). Those who had played the game cooperatively found this terribly difficult to do, repeatedly talking and breaking gaze. In contrast, those who had played the game competitively were able to stare quietly at each other for a sustained period. Collectively these data demonstrate that when looking at the eyes of a real person one both acquires and signals information to the other person. This duality of gaze is critical to nonverbal communication, with the nature of that communication shaped by the relationship between individuals, e.g., cooperative or competitive.

  8. Gaze-related mimic word activates the frontal eye field and related network in the human brain: an fMRI study.

    Science.gov (United States)

    Osaka, Naoyuki; Osaka, Mariko

    2009-09-18

This fMRI study demonstrates new evidence that a mimic word highly suggestive of eye gaze, heard by ear, significantly activates the frontal eye field (FEF), inferior frontal gyrus (IFG), dorsolateral premotor area (PMdr) and superior parietal lobule (SPL), connected within the frontal-parietal network. However, hearing nonsense words that did not imply gaze under the same task did not activate these areas in humans. We conclude that the FEF is a critical area for generating/processing active gaze, evoked here by an onomatopoeic word implying gaze, which is closely associated with social skill. We suggest that the implied active gaze may depend on prefrontal-parietal interactions that modify cognitive gaze led by spatial visual attention associated with the SPL.

  9. Photographic but not line-drawn faces show early perceptual neural sensitivity to eye gaze direction

    Directory of Open Access Journals (Sweden)

    Alejandra eRossi

    2015-04-01

Our brains readily decode facial movements and changes in social attention, reflected in earlier and larger N170 event-related potentials (ERPs) to viewing gaze aversions vs. direct gaze in real faces (Puce et al., 2000). In contrast, gaze aversions in line-drawn faces do not produce these N170 differences (Rossi et al., 2014), suggesting that physical stimulus properties or experimental context may drive these effects. Here we investigated the role of stimulus-induced context on neurophysiological responses to dynamic gaze. Sixteen healthy adults viewed line-drawn and real faces, with dynamic eye aversion and direct gaze transitions, and control stimuli (scrambled arrays and checkerboards) while continuous electroencephalographic (EEG) activity was recorded. EEG data from two temporo-occipital clusters of nine electrodes in each hemisphere, where N170 activity is known to be maximal, were selected for analysis. N170 peak amplitude and latency, as well as temporal dynamics from event-related spectral perturbations (ERSPs), were measured. Real faces generated larger N170s for averted vs. direct gaze motion; however, N170s to real and direct gaze were as large as those to their respective controls. N170 amplitude did not differ across line-drawn gaze changes. Overall, bilateral mean gamma power changes for faces relative to control stimuli occurred between 150-350 ms, potentially reflecting signal detection of facial motion. Our data indicate that experimental context does not drive N170 differences to viewed gaze changes. Low-level stimulus properties, such as the high sclera/iris contrast change in real eyes, likely drive the N170 changes to viewed aversion movements.

  10. Eye-gaze patterns as students study worked-out examples in mechanics

    Directory of Open Access Journals (Sweden)

    Brian H. Ross

    2010-10-01

This study explores what introductory physics students actually look at when studying worked-out examples. Our classroom experiences indicate that introductory physics students neither discuss nor refer to the conceptual information contained in the text of worked-out examples. This study is an effort to determine to what extent students incorporate the textual information into the way they study. Students' eye-gaze patterns were recorded as they studied the examples to aid them in solving a target problem. Contrary to our expectations from classroom interactions, students spent 40±3% of their gaze time reading the textual information. Their gaze patterns were also characterized by numerous jumps between corresponding mathematical and textual information, implying that they were combining information from both sources. Despite this large fraction of time spent reading the text, student recall of the conceptual information contained therein remained very poor. We also found that having a particular problem in mind had no significant effect on gaze patterns or on retention of the conceptual information.

  11. Social eye gaze modulates processing of speech and co-speech gesture.

    Science.gov (United States)

    Holler, Judith; Schubotz, Louise; Kelly, Spencer; Hagoort, Peter; Schuetze, Manuela; Özyürek, Aslı

    2014-12-01

In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech+gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker's preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients' speech processing suffers, gestures can enhance the comprehension of a speaker's message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension.

  12. Eyes that bind us: Gaze leading induces an implicit sense of agency.

    Science.gov (United States)

    Stephenson, Lisa J; Edwards, S Gareth; Howard, Emma E; Bayliss, Andrew P

    2018-03-01

Humans feel a sense of agency over the effects their motor system causes. This is the case for manual actions such as pushing buttons, kicking footballs, and all acts that affect the physical environment. We ask whether initiating joint attention - causing another person to follow our eye movement - can elicit an implicit sense of agency over this congruent gaze response. Eye movements themselves cannot directly affect the physical environment, but joint attention is an example of how eye movements can indirectly cause social outcomes. Here we show that leading the gaze of an on-screen face induces an underestimation of the temporal gap between action and consequence (Experiments 1 and 2). This underestimation effect, named 'temporal binding,' is thought to be a measure of an implicit sense of agency. Experiment 3 asked whether merely making an eye movement in a non-agentic, non-social context might also affect temporal estimation, and no reliable effects were detected, implying that inconsequential oculomotor acts do not reliably affect temporal estimations under these conditions. Together, these findings suggest that an implicit sense of agency is generated when initiating joint attention interactions. This is important for understanding how humans can efficiently detect and understand the social consequences of their actions.

  13. Novel Eye Movement Disorders in Whipple’s Disease—Staircase Horizontal Saccades, Gaze-Evoked Nystagmus, and Esotropia

    Directory of Open Access Journals (Sweden)

    Aasef G. Shaikh

    2017-07-01

Whipple's disease, a rare systemic infectious disorder, is complicated by involvement of the central nervous system in about 5% of cases. Oscillations of the eyes and the jaw, called oculo-masticatory myorhythmia, are pathognomonic of central nervous system involvement but are often absent. Typical manifestations of central nervous system Whipple's disease are cognitive impairment, parkinsonism mimicking progressive supranuclear palsy with vertical saccade slowing, and up-gaze range limitation. We describe a unique patient with central nervous system Whipple's disease who had typical features, including parkinsonism, cognitive impairment, and up-gaze limitation, but who also had diplopia, esotropia with mild horizontal (abduction more than adduction) limitation, and vertigo. The patient also had gaze-evoked nystagmus and staircase horizontal saccades. The latter were thought to be due to mal-programmed small saccades followed by a series of corrective saccades. The saccades were disconjugate due to the concurrent strabismus. We also noted disconjugacy in the slow phase of the gaze-evoked nystagmus, and this disconjugacy was larger during the monocular viewing condition. We propose that interaction of the strabismic drifts of the covered eye and the nystagmus drift, putatively at the final common pathway, might lead to such disconjugacy.

  14. Love is in the gaze: an eye-tracking study of love and sexual desire.

    Science.gov (United States)

    Bolmont, Mylene; Cacioppo, John T; Cacioppo, Stephanie

    2014-09-01

Reading other people's eyes is a valuable skill during interpersonal interaction. Although a number of studies have investigated visual patterns in relation to the perceiver's interest, intentions, and goals, little is known about eye gaze when it comes to differentiating intentions to love from intentions to lust (sexual desire). To address this question, we conducted two experiments: one testing whether the visual pattern related to the perception of love differs from that related to lust and one testing whether the visual pattern related to the expression of love differs from that related to lust. Our results show that a person's eye gaze shifts as a function of his or her goal (love vs. lust) when looking at a visual stimulus. Such identification of distinct visual patterns for love and lust could have theoretical and clinical importance in couples therapy when these two phenomena are difficult to disentangle from one another on the basis of patients' self-reports.

  15. The neurophysiology of human touch and eye gaze and its effects on therapeutic relationships and healing: a scoping review protocol.

    Science.gov (United States)

    Kerr, Fiona; Wiechula, Rick; Feo, Rebecca; Schultz, Tim; Kitson, Alison

    2016-04-01

    The objective of this scoping review is to examine and map the range of neurophysiological impacts of human touch and eye gaze, and better understand their possible links to the therapeutic relationship and the process of healing. The specific question is "what neurophysiological impacts of human touch and eye gaze have been reported in relation to therapeutic relationships and healing?"

  16. EyeDroid: An Open Source Mobile Gaze Tracker on Android for Eyewear Computers

    DEFF Research Database (Denmark)

    Jalaliniya, Shahram; Mardanbeigi, Diako; Sintos, Ioannis

    2015-01-01

In this paper we report on the development and evaluation of a video-based mobile gaze tracker for eyewear computers. Unlike most previous work, our system performs all of its processing workload on an Android device and sends the coordinates of the gaze point to an eyewear device through a wireless connection. We propose a lightweight software architecture for Android to increase the efficiency of the image processing needed for eye tracking. The evaluation of the system indicated an accuracy of 1.06° and a battery lifetime of approximately 4.5 hours.
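
    EyeDroid itself is implemented on Android; as a language-neutral illustration of the same client-server idea, the sketch below streams gaze coordinates to an eyewear device over a wireless socket. The host, port, and message format are assumptions, not EyeDroid's protocol.

    ```python
    import json
    import socket

    def stream_gaze(samples, host="192.168.1.50", port=5555):
        """Send each (x, y) gaze point as a JSON datagram to the eyewear unit."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # connectionless UDP
        for x, y in samples:
            sock.sendto(json.dumps({"x": x, "y": y}).encode("utf-8"), (host, port))
        sock.close()

    stream_gaze([(0.42, 0.37), (0.43, 0.36)])   # normalized gaze coordinates
    ```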

  17. Eye-Gaze Analysis of Facial Emotion Recognition and Expression in Adolescents with ASD.

    Science.gov (United States)

    Wieckowski, Andrea Trubanova; White, Susan W

    2017-01-01

Impaired emotion recognition and expression in individuals with autism spectrum disorder (ASD) may contribute to observed social impairment. The aim of this study was to examine the role of visual attention directed toward nonsocial aspects of a scene as a possible mechanism underlying recognition and expression deficits in ASD. One recognition task and two expression tasks were administered. Recognition was assessed in a forced-choice paradigm, and expression was assessed during scripted and free-choice response tasks (the latter in response to emotional stimuli) in youth with ASD (n = 20) and an age-matched sample of typically developing youth (n = 20). During stimulus presentation prior to the response in each task, participants' eye gaze was tracked. Youth with ASD were less accurate at identifying disgust and sadness in the recognition task. They fixated less on the eye region of stimuli showing surprise. A group difference was found during the free-choice response task, such that those with ASD expressed emotion less clearly, but no difference was found during the scripted task. Results suggest altered eye gaze to the mouth region, but not the eye region, as a candidate mechanism for the decreased ability to recognize or express emotion. Findings inform our understanding of the association between social attention and deficits in emotion recognition and expression.

  18. From the eyes and the heart: a novel eye-gaze metric that predicts video preferences of a large audience

    OpenAIRE

    Christoforou, Christoforos; Christou-Champi, Spyros; Constantinidou, Fofi; Theodorou, Maria

    2015-01-01

    Eye-tracking has been extensively used to quantify audience preferences in the context of marketing and advertising research, primarily in methodologies involving static images or stimuli (i.e., advertising, shelf testing, and website usability). However, these methodologies do not generalize to narrative-based video stimuli where a specific storyline is meant to be communicated to the audience. In this paper, a novel metric based on eye-gaze dispersion (both within and across viewings) that ...

  19. The effect of gaze angle on the evaluations of SAR and temperature rise in human eye under plane-wave exposures from 0.9 to 10 GHz

    International Nuclear Information System (INIS)

    Diao, Yinliang; Leung, Sai-Wing; Sun, Weinong; Siu, Yun-Ming; Kong, Richard; Hung Chan, Kwok

    2016-01-01

This article investigates the effect of gaze angle on the specific absorption rate (SAR) and temperature rise in the human eye under electromagnetic exposures from 0.9 to 10 GHz. Eye models at different gaze angles are developed based on biometric data. The spatial-average SARs in the eyes are investigated using the finite-difference time-domain method, and the corresponding maximum temperature rises in the lens are calculated by the finite-difference method. It is found that changes in the gaze angle produce a maximum variation of 35%, 12% and 20% in the eye-averaged SAR, peak 10 g average SAR and temperature rise, respectively. Results also reveal that the eye-averaged SAR is more sensitive to changes in the gaze angle than the peak 10 g average SAR, especially at higher frequencies. (authors)

  20. An Open Conversation on Using Eye-Gaze Methods in Studies of Neurodevelopmental Disorders

    Science.gov (United States)

    Venker, Courtney E.; Kover, Sara T.

    2015-01-01

    Purpose: Eye-gaze methods have the potential to advance the study of neurodevelopmental disorders. Despite their increasing use, challenges arise in using these methods with individuals with neurodevelopmental disorders and in reporting sufficient methodological detail such that the resulting research is replicable and interpretable. Method: This…

  1. A Gaze Interactive Textual Smartwatch Interface

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Biermann, Florian; Askø Madsen, Janus

    2015-01-01

Mobile gaze interaction is challenged by inherent motor noise. We examined the gaze tracking accuracy and precision of twelve subjects wearing a gaze tracker on their wrist while standing and walking. Results suggest that it will be possible to detect whether people are glancing at the watch, but no…

  2. Keeping Your Eye on the Rail: Gaze Behaviour of Horse Riders Approaching a Jump

    Science.gov (United States)

    Hall, Carol; Varley, Ian; Kay, Rachel; Crundall, David

    2014-01-01

The gaze behaviour of riders during their approach to a jump was investigated using a mobile eye tracking device (ASL Mobile Eye). The timing, frequency and duration of fixations on the jump and the percentage of time when their point of gaze (POG) was located elsewhere were assessed. Fixations were identified when the POG remained on the jump for 100 ms or longer. The jumping skill of experienced but non-elite riders (n = 10) was assessed by means of a questionnaire. Their gaze behaviour was recorded as they completed a course of three identical jumps five times. The speed and timing of the approach were calculated. Gaze behaviour throughout the overall approach and during the last five strides before take-off was assessed following frame-by-frame analyses. Differences in relation to both round and jump number were found. Significantly longer was spent fixated on the jump during round 2, both during the overall approach and during the last five strides (p < 0.05). Jump 1 was fixated on significantly earlier and more frequently than jump 2 or 3 (p < 0.05). More errors were made with jump 3 than with jump 1 (p = 0.01), but there was no difference in errors made between rounds. Although no significant correlations between gaze behaviour and skill scores were found, the riders who scored higher for jumping skill tended to fixate on the jump earlier (p = 0.07), when the horse was further from the jump (p = 0.09), and their first fixation on the jump was of a longer duration (p = 0.06). Trials with elite riders are now needed to further identify sport-specific visual skills and their relationship with performance. Visual training should be included in preparation for equestrian sports participation, the positive impact of which has been clearly demonstrated in other sports. PMID:24846055

  3. Design gaze simulation for people with visual disability

    NARCIS (Netherlands)

    Qiu, S.

    2017-01-01

In face-to-face communication, eye gaze is integral to a conversation, supplementing verbal language. Sighted people often use eye gaze to convey nonverbal information in social interactions, which a blind conversation partner cannot access or react to. My doctoral research is to design gaze simulation

  4. A Statistical Approach to Continuous Self-Calibrating Eye Gaze Tracking for Head-Mounted Virtual Reality Systems

    OpenAIRE

    Tripathi, Subarna; Guenter, Brian

    2016-01-01

    We present a novel, automatic eye gaze tracking scheme inspired by smooth pursuit eye motion while playing mobile games or watching virtual reality contents. Our algorithm continuously calibrates an eye tracking system for a head mounted display. This eliminates the need for an explicit calibration step and automatically compensates for small movements of the headset with respect to the head. The algorithm finds correspondences between corneal motion and screen space motion, and uses these to...
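
    The correspondence idea can be illustrated with a least-squares sketch: matched eye-feature motions and known on-screen target motions during smooth pursuit determine an affine map from eye space to screen space, with no explicit calibration step. This is a simplified stand-in for the paper's algorithm, with synthetic data.

    ```python
    import numpy as np

    def fit_affine(eye_pts, screen_pts):
        """Least-squares affine map: screen ~ [eye, 1] @ A."""
        X = np.column_stack([eye_pts, np.ones(len(eye_pts))])
        A, *_ = np.linalg.lstsq(X, screen_pts, rcond=None)
        return A                                             # 3 x 2 matrix

    rng = np.random.default_rng(0)
    eye = rng.uniform(-1, 1, (100, 2))                       # tracked eye-feature positions
    true_screen = eye @ [[800., 0.], [0., 450.]] + [960., 540.]  # assumed ground-truth mapping
    A = fit_affine(eye, true_screen + rng.normal(0, 2, (100, 2)))
    print(np.array([0.1, -0.2, 1.0]) @ A)                    # predicted POG, ~[1040, 450] px
    ```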

  5. Gaze3DFix: Detecting 3D fixations with an ellipsoidal bounding volume.

    Science.gov (United States)

    Weber, Sascha; Schubert, Rebekka S; Vogt, Stefan; Velichkovsky, Boris M; Pannasch, Sebastian

    2017-10-26

Nowadays, the use of eyetracking to determine 2D gaze positions is common practice, and several approaches to the detection of 2D fixations exist, but ready-to-use algorithms to determine eye movements in three dimensions are still missing. Here we present a dispersion-based algorithm with an ellipsoidal bounding volume that estimates 3D fixations. For this purpose, 3D gaze points are obtained using a vector-based approach and are further processed with our algorithm. To evaluate the accuracy of our method, we performed experimental studies with real and virtual stimuli. We obtained good congruence between stimulus position and both the 3D gaze points and the 3D fixation locations within the tested range of 200-600 mm. The mean deviation of the 3D fixations from the stimulus positions was 17 mm for the real as well as for the virtual stimuli, with larger variances at increasing stimulus distances. The described algorithms are implemented in two dynamic link libraries (Gaze3D.dll and Fixation3D.dll), and we provide a graphical user interface (Gaze3DFixGUI.exe) designed for importing 2D binocular eyetracking data and calculating both 3D gaze points and 3D fixations using the libraries. The Gaze3DFix toolkit, including both libraries and the graphical user interface, is available as open-source software at https://github.com/applied-cognition-research/Gaze3DFix .
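
    A minimal sketch of the dispersion criterion with an ellipsoidal bounding volume: consecutive 3D gaze points are grouped into a fixation for as long as they fit inside an ellipsoid around their running mean. The radii, windowing, and data are illustrative; the toolkit's actual parameters live in the linked libraries.

    ```python
    import numpy as np

    def in_ellipsoid(points, center, radii):
        """True if every point satisfies sum(((p - c) / r)^2) <= 1."""
        return bool((((points - center) / radii) ** 2).sum(axis=1).max() <= 1.0)

    def fixations_3d(gaze_xyz, radii=(15.0, 15.0, 45.0), min_len=5):
        """Greedy scan: grow a window while it fits the ellipsoid around its mean."""
        radii, start = np.asarray(radii), 0
        for end in range(2, len(gaze_xyz) + 1):
            window = gaze_xyz[start:end]
            if not in_ellipsoid(window, window.mean(axis=0), radii):
                if end - 1 - start >= min_len:
                    yield start, end - 1               # fixation spans [start, end-1)
                start = end - 1
        if len(gaze_xyz) - start >= min_len:
            yield start, len(gaze_xyz)

    rng = np.random.default_rng(0)
    pts = np.vstack([rng.normal([0, 0, 400], [3, 3, 9], (30, 3)),    # fixation near 400 mm
                     rng.normal([80, 0, 300], [3, 3, 9], (30, 3))])  # fixation near 300 mm
    print(list(fixations_3d(pts)))                     # roughly two fixations
    ```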

  6. Computing eye gaze metrics for the automatic assessment of radiographer performance during X-ray image interpretation.

    Science.gov (United States)

    McLaughlin, Laura; Bond, Raymond; Hughes, Ciara; McConnell, Jonathan; McFadden, Sonyia

    2017-09-01

    To investigate image interpretation performance by diagnostic radiography students, diagnostic radiographers and reporting radiographers by computing eye gaze metrics using eye tracking technology. Three groups of participants were studied during their interpretation of 8 digital radiographic images including the axial and appendicular skeleton, and chest (prevalence of normal images was 12.5%). A total of 464 image interpretations were collected. Participants consisted of 21 radiography students, 19 qualified radiographers and 18 qualified reporting radiographers who were further qualified to report on the musculoskeletal (MSK) system. Eye tracking data was collected using the Tobii X60 eye tracker and subsequently eye gaze metrics were computed. Voice recordings, confidence levels and diagnoses provided a clear demonstration of the image interpretation and the cognitive processes undertaken by each participant. A questionnaire afforded the participants an opportunity to offer information on their experience in image interpretation and their opinion on the eye tracking technology. Reporting radiographers demonstrated a 15% greater accuracy rate (p≤0.001), were more confident (p≤0.001) and took a mean of 2.4s longer to clinically decide on all features compared to students. Reporting radiographers also had a 15% greater accuracy rate (p≤0.001), were more confident (p≤0.001) and took longer to clinically decide on an image diagnosis (p=0.02) than radiographers. Reporting radiographers had a greater mean fixation duration (p=0.01), mean fixation count (p=0.04) and mean visit count (p=0.04) within the areas of pathology compared to students. Eye tracking patterns, presented within heat maps, were a good reflection of group expertise and search strategies. Eye gaze metrics such as time to first fixate, fixation count, fixation duration and visit count within the areas of pathology were indicative of the radiographer's competency. The accuracy and confidence of
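
    A generic sketch of the kind of metrics named above, computed for one rectangular area of interest (AOI) from a sorted fixation list. The tuple layout and AOI coordinates are assumptions, not the Tobii X60 output format.

    ```python
    def aoi_metrics(fixations, aoi):
        """fixations: (onset_ms, duration_ms, x, y), sorted by onset;
        aoi: (x0, y0, x1, y1) rectangle in the same pixel coordinates."""
        x0, y0, x1, y1 = aoi
        hits = [f for f in fixations if x0 <= f[2] <= x1 and y0 <= f[3] <= y1]
        if not hits:
            return None
        return {
            "time_to_first_fixate_ms": hits[0][0],           # relative to stimulus onset
            "fixation_count": len(hits),
            "mean_fixation_duration_ms": sum(f[1] for f in hits) / len(hits),
        }

    fixes = [(120, 180, 400, 300), (350, 240, 812, 415), (640, 300, 820, 430)]
    print(aoi_metrics(fixes, (780, 390, 900, 470)))          # AOI over the pathology region
    ```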

  7. Eye-Tracking Technology and the Dynamics of Natural Gaze Behavior in Sports: A Systematic Review of 40 Years of Research.

    Science.gov (United States)

    Kredel, Ralf; Vater, Christian; Klostermann, André; Hossner, Ernst-Joachim

    2017-01-01

    Reviewing 60 studies on natural gaze behavior in sports, it becomes clear that, over the last 40 years, the use of eye-tracking devices has considerably increased. Specifically, this review reveals the large variance of methods applied, analyses performed, and measures derived within the field. The results of sub-sample analyses suggest that sports-related eye-tracking research strives, on the one hand, for ecologically valid test settings (i.e., viewing conditions and response modes) and, on the other, for experimental control along with high measurement accuracy (i.e., controlled test conditions with high-frequency eye-trackers linked to algorithmic analyses). To meet both demands, some promising compromises of methodological solutions have been proposed, in particular the integration of robust mobile eye-trackers in motion-capture systems. However, as the fundamental trade-off between laboratory and field research cannot be solved by technological means, researchers need to carefully weigh the arguments for one or the other approach by accounting for the respective consequences. Nevertheless, for future research on dynamic gaze behavior in sports, further development of the current mobile eye-tracking methodology seems highly advisable to allow for the acquisition and algorithmic analysis of larger amounts of gaze data and, further, to increase the explanatory power of the derived results.

  8. Eye-Tracking Technology and the Dynamics of Natural Gaze Behavior in Sports: A Systematic Review of 40 Years of Research

    Directory of Open Access Journals (Sweden)

    Ralf Kredel

    2017-10-01

    Full Text Available Reviewing 60 studies on natural gaze behavior in sports, it becomes clear that, over the last 40 years, the use of eye-tracking devices has considerably increased. Specifically, this review reveals the large variance of methods applied, analyses performed, and measures derived within the field. The results of sub-sample analyses suggest that sports-related eye-tracking research strives, on the one hand, for ecologically valid test settings (i.e., viewing conditions and response modes) and, on the other, for experimental control along with high measurement accuracy (i.e., controlled test conditions with high-frequency eye-trackers linked to algorithmic analyses). To meet both demands, some promising compromises of methodological solutions have been proposed, in particular the integration of robust mobile eye-trackers in motion-capture systems. However, as the fundamental trade-off between laboratory and field research cannot be solved by technological means, researchers need to carefully weigh the arguments for one or the other approach by accounting for the respective consequences. Nevertheless, for future research on dynamic gaze behavior in sports, further development of the current mobile eye-tracking methodology seems highly advisable to allow for the acquisition and algorithmic analysis of larger amounts of gaze data and, further, to increase the explanatory power of the derived results.

  9. Follow My Eyes: The Gaze of Politicians Reflexively Captures the Gaze of Ingroup Voters

    Science.gov (United States)

    Liuzza, Marco Tullio; Cazzato, Valentina; Vecchione, Michele; Crostella, Filippo; Caprara, Gian Vittorio; Aglioti, Salvatore Maria

    2011-01-01

    Studies in human and non-human primates indicate that basic socio-cognitive operations are inherently linked to the power of gaze in capturing reflexively the attention of an observer. Although monkey studies indicate that the automatic tendency to follow the gaze of a conspecific is modulated by the leader-follower social status, evidence for such effects in humans is meager. Here, we used a gaze following paradigm where the directional gaze of right- or left-wing Italian political characters could influence the oculomotor behavior of ingroup or outgroup voters. We show that the gaze of Berlusconi, the right-wing leader currently dominating the Italian political landscape, potentiates and inhibits gaze following behavior in ingroup and outgroup voters, respectively. Importantly, the higher the perceived similarity in personality traits between voters and Berlusconi, the stronger the gaze interference effect. Thus, higher-order social variables such as political leadership and affiliation prepotently affect reflexive shifts of attention. PMID:21957479

  10. Are Eyes Windows to a Deceiver's Soul? Children's Use of Another's Eye Gaze Cues in a Deceptive Situation

    Science.gov (United States)

    Freire, Alejo; Eskritt, Michelle; Lee, Kang

    2004-01-01

    Three experiments examined 3- to 5-year-olds' use of eye gaze cues to infer truth in a deceptive situation. Children watched a video of an actor who hid a toy in 1 of 3 cups. In Experiments 1 and 2, the actor claimed ignorance about the toy's location but looked toward 1 of the cups, without (Experiment 1) and with (Experiment 2) head movement. In…

  11. The effect of arousal and eye gaze direction on trust evaluations of stranger's faces: A potential pathway to paranoid thinking.

    Science.gov (United States)

    Abbott, Jennie; Middlemiss, Megan; Bruce, Vicki; Smailes, David; Dudley, Robert

    2018-09-01

    When asked to evaluate faces of strangers, people with paranoia show a tendency to rate others as less trustworthy. The present study investigated the impact of arousal on this interpersonal bias, and whether the bias was specific to evaluations of trust or additionally affected other trait judgements. The study also examined the impact of eye gaze direction, as direct eye gaze has been shown to heighten arousal. In two experiments, non-clinical participants completed face rating tasks before and after either an arousal manipulation or a control manipulation. Experiment one examined the effects of heightened arousal on judgements of trustworthiness. Experiment two examined the specificity of the bias and the impact of gaze direction. Experiment one indicated that the arousal manipulation led to lower trustworthiness ratings. Experiment two showed that heightened arousal reduced trust evaluations of trustworthy faces, particularly trustworthy faces with averted gaze. The control group rated trustworthy faces with direct gaze as more trustworthy post-manipulation. There was some evidence that attractiveness ratings were affected similarly to the trust judgements, whereas judgements of intelligence were not affected by higher arousal. In both studies, participants reported low levels of arousal even after the manipulation, and the use of a non-clinical sample limits the generalisability of the findings to clinical samples. There is a complex interplay between arousal, evaluations of trustworthiness and gaze direction. Heightened arousal influences judgements of trustworthiness, but within the context of face type and gaze direction.

  12. Recognition of Emotion from Facial Expressions with Direct or Averted Eye Gaze and Varying Expression Intensities in Children with Autism Disorder and Typically Developing Children

    Directory of Open Access Journals (Sweden)

    Dina Tell

    2014-01-01

    Full Text Available Eye gaze direction and expression intensity effects on emotion recognition in children with autism disorder and typically developing children were investigated. Children with autism disorder and typically developing children identified happy and angry expressions equally well. Children with autism disorder, however, were less accurate in identifying fear expressions across intensities and eye gaze directions. Children with autism disorder rated expressions with direct eyes, and expressions at 50% intensity, as more intense than typically developing children did. A trend was also found for sad expressions, as children with autism disorder were less accurate in recognizing sadness at 100% intensity with direct eyes than typically developing children. Although the present research showed that children with autism disorder are sensitive to eye gaze direction, impairments in the recognition of fear, and possibly sadness, exist. Furthermore, children with autism disorder and typically developing children perceive the intensity of emotional expressions differently.

  13. A comparison of facial color pattern and gazing behavior in canid species suggests gaze communication in gray wolves (Canis lupus).

    Directory of Open Access Journals (Sweden)

    Sayoko Ueda

    Full Text Available As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication.

  14. Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.

    Science.gov (United States)

    Barbosa, Sara; Pires, Gabriel; Nunes, Urbano

    2016-03-01

    Brain computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent because of their eye impairment. However, unimodal gaze-independent approaches typically perform substantially worse than gaze-dependent approaches. The combination of multimodal stimuli has been pointed out as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneous visual and auditory stimulation is proposed. Auditory stimuli are based on natural, meaningful spoken words, increasing stimulus discrimination and decreasing the user's mental effort in associating stimuli with symbols. The visual part of the interface is covertly controlled, ensuring gaze-independence. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU) and covert HVA. Average online accuracy for the hybrid approach was 85.3%, which is more than 32% above the VC and AU approaches. Questionnaire results indicate that the HVA approach was the least demanding gaze-independent interface. Interestingly, the P300 grand average for the HVA approach coincides with an almost perfect sum of the P300s evoked separately by the VC and AU tasks. The proposed HVA-BCI is the first solution simultaneously embedding natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with the state of the art. The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs.

  15. Closed-Loop Hybrid Gaze Brain-Machine Interface Based Robotic Arm Control with Augmented Reality Feedback

    Directory of Open Access Journals (Sweden)

    Hong Zeng

    2017-10-01

    Full Text Available A brain-machine interface (BMI) can be used to control a robotic arm to assist people with paralysis in performing activities of daily living. However, controlling the process of grasping and lifting objects with the robotic arm is still a complex task for BMI users. It is hard to achieve high efficiency and accuracy even after extensive training. One important reason is the lack of sufficient feedback information for the user to perform closed-loop control. In this study, we proposed a method of augmented reality (AR) guiding assistance to provide enhanced visual feedback to the user for closed-loop control with a hybrid Gaze-BMI, which combines an electroencephalography (EEG)-based BMI and eye tracking for intuitive and effective control of the robotic arm. Experiments involving object manipulation tasks while avoiding an obstacle in the workspace were designed to evaluate the performance of our method for controlling the robotic arm. The experimental results obtained from eight subjects verified the advantages of the proposed closed-loop system (with AR feedback) over the open-loop system (with visual inspection only). The number of trigger commands used for controlling the robotic arm to grasp and lift the objects was reduced significantly with AR feedback, and the height gaps of the gripper in the lifting process decreased by more than 50% compared to trials with normal visual inspection only. The results reveal that the hybrid Gaze-BMI user can benefit from the information provided by the AR interface, improving efficiency and reducing cognitive load during the grasping and lifting processes.

  16. Closed-Loop Hybrid Gaze Brain-Machine Interface Based Robotic Arm Control with Augmented Reality Feedback

    Science.gov (United States)

    Zeng, Hong; Wang, Yanxin; Wu, Changcheng; Song, Aiguo; Liu, Jia; Ji, Peng; Xu, Baoguo; Zhu, Lifeng; Li, Huijun; Wen, Pengcheng

    2017-01-01

    A brain-machine interface (BMI) can be used to control a robotic arm to assist people with paralysis in performing activities of daily living. However, controlling the process of grasping and lifting objects with the robotic arm is still a complex task for BMI users. It is hard to achieve high efficiency and accuracy even after extensive training. One important reason is the lack of sufficient feedback information for the user to perform closed-loop control. In this study, we proposed a method of augmented reality (AR) guiding assistance to provide enhanced visual feedback to the user for closed-loop control with a hybrid Gaze-BMI, which combines an electroencephalography (EEG)-based BMI and eye tracking for intuitive and effective control of the robotic arm. Experiments involving object manipulation tasks while avoiding an obstacle in the workspace were designed to evaluate the performance of our method for controlling the robotic arm. The experimental results obtained from eight subjects verified the advantages of the proposed closed-loop system (with AR feedback) over the open-loop system (with visual inspection only). The number of trigger commands used for controlling the robotic arm to grasp and lift the objects was reduced significantly with AR feedback, and the height gaps of the gripper in the lifting process decreased by more than 50% compared to trials with normal visual inspection only. The results reveal that the hybrid Gaze-BMI user can benefit from the information provided by the AR interface, improving efficiency and reducing cognitive load during the grasping and lifting processes. PMID:29163123

  17. Limitations of gaze transfer: without visual context, eye movements do not help to coordinate joint action, whereas mouse movements do.

    Science.gov (United States)

    Müller, Romy; Helmert, Jens R; Pannasch, Sebastian

    2014-10-01

    Remote cooperation can be improved by transferring the gaze of one participant to the other. However, based on a partner's gaze, an interpretation of his communicative intention can be difficult. Thus, gaze transfer has been inferior to mouse transfer in remote spatial referencing tasks where locations had to be pointed out explicitly. Given that eye movements serve as an indicator of visual attention, it remains to be investigated whether gaze and mouse transfer differentially affect the coordination of joint action when the situation demands an understanding of the partner's search strategies. In the present study, a gaze or mouse cursor was transferred from a searcher to an assistant in a hierarchical decision task. The assistant could use this cursor to guide his movement of a window which continuously opened up the display parts the searcher needed to find the right solution. In this context, we investigated how the ease of using gaze transfer depended on whether a link could be established between the partner's eye movements and the objects he was looking at. Therefore, in addition to the searcher's cursor, the assistant either saw the positions of these objects or only a grey background. When the objects were visible, performance and the number of spoken words were similar for gaze and mouse transfer. However, without them, gaze transfer resulted in longer solution times and more verbal effort as participants relied more strongly on speech to coordinate the window movement. Moreover, an analysis of the spatio-temporal coupling of the transmitted cursor and the window indicated that when no visual object information was available, assistants confidently followed the searcher's mouse but not his gaze cursor. Once again, the results highlight the importance of carefully considering task characteristics when applying gaze transfer in remote cooperation.

  18. A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment.

    Science.gov (United States)

    Singh, Tarkeshwar; Perry, Christopher M; Herter, Troy M

    2016-01-26

    Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal
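
    Although the paper's geometric derivation is specific to its monocular setup, the final step it describes, velocity thresholds applied to ocular kinematics, can be sketched generically. The Python fragment below labels inter-sample intervals as saccades, smooth pursuits or fixations from gaze direction vectors; the two thresholds are placeholder values, not those derived in the paper.

    ```python
    import numpy as np

    def classify_gaze_events(gaze_vectors, timestamps,
                             saccade_thresh=100.0, pursuit_thresh=20.0):
        """Velocity-threshold classification of gaze samples (illustrative
        sketch; the thresholds in deg/s are assumptions).

        gaze_vectors : (N, 3) unit gaze direction vectors
        timestamps   : (N,) sample times in seconds
        Returns one label per inter-sample interval:
        'saccade', 'smooth_pursuit', or 'fixation'.
        """
        v1, v2 = gaze_vectors[:-1], gaze_vectors[1:]
        # Angular distance between consecutive gaze directions, in degrees.
        cosang = np.clip(np.sum(v1 * v2, axis=1), -1.0, 1.0)
        ang = np.degrees(np.arccos(cosang))
        # Angular velocity per interval, then two-level thresholding.
        vel = ang / np.diff(timestamps)
        return np.where(vel > saccade_thresh, 'saccade',
                        np.where(vel > pursuit_thresh,
                                 'smooth_pursuit', 'fixation'))
    ```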

  19. Evaluation of the Tobii EyeX Eye tracking controller and Matlab toolkit for research.

    Science.gov (United States)

    Gibaldi, Agostino; Vanegas, Mauricio; Bex, Peter J; Maiello, Guido

    2017-06-01

    The Tobii EyeX Controller is a new low-cost binocular eye tracker marketed for integration in gaming and consumer applications. The manufacturers claim that the system was conceived for natural eye gaze interaction, does not require continuous recalibration, and allows moderate head movements. The Controller is provided with an SDK to foster the development of new eye tracking applications. We review the characteristics of the device for its possible use in scientific research. We develop and evaluate an open source Matlab Toolkit that can be employed to interface with the EyeX device for gaze recording in behavioral experiments. The Toolkit provides calibration procedures tailored to both binocular and monocular experiments, as well as procedures to evaluate other eye tracking devices. The observed performance of the EyeX (i.e. accuracy < 0.6°, precision < 0.25°, latency < 50 ms and sampling frequency ≈55 Hz) is sufficient for some classes of research application. The device can be successfully employed to measure fixation parameters, saccadic, smooth pursuit and vergence eye movements. However, the relatively low sampling rate and moderate precision limit the suitability of the EyeX for monitoring micro-saccadic eye movements or for real-time gaze-contingent stimulus control. For these applications, research grade, high-cost eye tracking technology may still be necessary. Therefore, despite its limitations with respect to high-end devices, the EyeX has the potential to further the dissemination of eye tracking technology to a broad audience, and could be a valuable asset in consumer and gaming applications as well as a subset of basic and clinical research settings.
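
    The accuracy and precision figures quoted above follow standard definitions: accuracy is the mean angular offset between recorded gaze and a fixated target, and precision is the sample-to-sample dispersion while fixating. A minimal Python sketch of both computations is given below; it assumes gaze already expressed in degrees of visual angle and is not code from the authors' Matlab toolkit.

    ```python
    import numpy as np

    def accuracy_precision(gaze_deg, target_deg):
        """Estimate eye-tracker accuracy and precision from samples recorded
        while a known target is fixated (sketch using standard definitions).

        gaze_deg   : (N, 2) gaze positions in degrees of visual angle
        target_deg : (2,) true target position in degrees
        """
        offsets = gaze_deg - np.asarray(target_deg)
        # Accuracy: magnitude of the mean offset between gaze and target.
        accuracy = np.linalg.norm(offsets.mean(axis=0))
        # Precision: RMS of successive sample-to-sample distances.
        diffs = np.diff(gaze_deg, axis=0)
        precision_rms = np.sqrt(np.sum(diffs ** 2, axis=1).mean())
        return accuracy, precision_rms
    ```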

  20. Attention and Social Cognition in Virtual Reality : The effect of engagement mode and character eye-gaze

    NARCIS (Netherlands)

    Rooney, Brendan; Bálint, Katalin; Parsons, Thomas; Burke, Colin; O'Leary, T; Lee, C.T.; Mantei, C.

    2017-01-01

    Technical developments in virtual humans are manifest in modern character design. Specifically, eye gaze offers a significant aspect of such design. There is a need to consider the contribution of participant control of engagement. In the current study, we manipulated participants' engagement with an

  1. Attention to gaze and emotion in schizophrenia.

    Science.gov (United States)

    Schwartz, Barbara L; Vaidya, Chandan J; Howard, James H; Deutsch, Stephen I

    2010-11-01

    Individuals with schizophrenia have difficulty interpreting social and emotional cues such as facial expression, gaze direction, body position, and voice intonation. Nonverbal cues are powerful social signals but are often processed implicitly, outside the focus of attention. The aim of this research was to assess implicit processing of social cues in individuals with schizophrenia. Patients with schizophrenia or schizoaffective disorder and matched controls performed a primary task of word classification with social cues in the background. Participants were asked to classify target words (LEFT/RIGHT) by pressing a key that corresponded to the word, in the context of facial expressions with eye gaze averted to the left or right. Although facial expression and gaze direction were irrelevant to the task, these facial cues influenced word classification performance. Participants were slower to classify target words (e.g., LEFT) that were incongruent with gaze direction (e.g., eyes averted to the right) compared to target words (e.g., LEFT) that were congruent with gaze direction (e.g., eyes averted to the left), but this only occurred for expressions of fear. This pattern did not differ between patients and controls. The results showed that threat-related signals capture the attention of individuals with schizophrenia. These data suggest that implicit processing of eye gaze and fearful expressions is intact in schizophrenia.

  2. Visual Foraging With Fingers and Eye Gaze

    Directory of Open Access Journals (Sweden)

    Ómar I. Jóhannesson

    2016-03-01

    Full Text Available A popular model of the function of selective visual attention involves search where a single target is to be found among distractors. For many scenarios, a more realistic model involves search for multiple targets of various types, since natural tasks typically do not involve a single target. Here we present results from a novel multiple-target foraging paradigm. We compare finger foraging, where observers cancel a set of predesignated targets by tapping them, to gaze foraging, where observers cancel items by fixating them for 100 ms. During finger foraging, for most observers, there was a large difference between foraging based on a single feature, where observers switch easily between target types, and foraging based on a conjunction of features, where observers tended to stick to one target type. The pattern was notably different during gaze foraging, where these condition differences were smaller. Two conclusions follow: (a) the fact that a sizeable number of observers (in particular during gaze foraging) had little trouble switching between different target types raises challenges for many prominent theoretical accounts of visual attention and working memory; (b) while caveats must be noted for the comparison of gaze and finger foraging, the results suggest that selection mechanisms for gaze and pointing have different operational constraints.

  3. Eye-catching odors: olfaction elicits sustained gazing to faces and eyes in 4-month-old infants.

    Science.gov (United States)

    Durand, Karine; Baudouin, Jean-Yves; Lewkowicz, David J; Goubet, Nathalie; Schaal, Benoist

    2013-01-01

    This study investigated whether an odor can affect infants' attention to visually presented objects and whether it can selectively direct visual gaze at visual targets as a function of their meaning. Four-month-old infants (n = 48) were exposed to their mother's body odors while their visual exploration was recorded with an eye-movement tracking system. Two groups of infants, who were assigned to either an odor condition or a control condition, looked at a scene composed of still pictures of faces and cars. As expected, infants looked longer at the faces than at the cars but this spontaneous preference for faces was significantly enhanced in the presence of the odor. As expected also, when looking at the face, the infants looked longer at the eyes than at any other facial regions, but, again, they looked at the eyes significantly longer in the presence of the odor. Thus, 4-month-old infants are sensitive to the contextual effects of odors while looking at faces. This suggests that early social attention to faces is mediated by visual as well as non-visual cues.

  4. A GazeWatch Prototype

    DEFF Research Database (Denmark)

    Paulin Hansen, John; Biermann, Florian; Møllenbach, Emile

    2015-01-01

    We demonstrate the potential of adding a gaze tracking unit to a smartwatch, allowing hands-free interaction with the watch itself and control of the environment. Users give commands via gaze gestures, i.e. looking away and back to the GazeWatch. Rapid presentation of single words on the watch display provides a rich and effective textual interface. Finally, we exemplify how the GazeWatch can be used as a ubiquitous pointer on large displays.

  5. Gaze as a biometric

    Science.gov (United States)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2014-03-01

    Two people may analyze a visual scene in two completely different ways. Our study sought to determine whether human gaze may be used to establish the identity of an individual. To accomplish this objective we investigated the gaze pattern of twelve individuals viewing still images with different spatial relationships. Specifically, we created 5 visual "dot-pattern" tests to be shown on a standard computer monitor. These tests challenged the viewer's capacity to distinguish proximity, alignment, and perceptual organization. Each test included 50 images of varying difficulty (total of 250 images). Eye-tracking data were collected from each individual while taking the tests. The eye-tracking data were converted into gaze velocities and analyzed with Hidden Markov Models to develop personalized gaze profiles. Using leave-one-out cross-validation, we observed that these personalized profiles could differentiate among the 12 users with classification accuracy ranging between 53% and 76%, depending on the test. This was statistically significantly better than random guessing (i.e., 8.3% or 1 out of 12). Classification accuracy was higher for the tests where the users' average gaze velocity per case was lower. The study findings support the feasibility of using gaze as a biometric or personalized biomarker. These findings could have implications in Radiology training and the development of personalized e-learning environments.
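
    The classification scheme described, one Hidden Markov Model per user over gaze-velocity sequences with identity assigned by maximum log-likelihood, can be sketched in a few lines of Python. The version below uses the hmmlearn package as an assumed stand-in for the authors' unspecified tooling; the number of hidden states and the data layout are illustrative.

    ```python
    import numpy as np
    from hmmlearn import hmm  # assumed dependency, not the authors' tooling

    def train_user_models(velocity_seqs_per_user, n_states=3):
        """Fit one Gaussian HMM per user on their gaze-velocity sequences
        (illustrative sketch of the personalized-profile idea)."""
        models = {}
        for user, seqs in velocity_seqs_per_user.items():
            X = np.concatenate(seqs)[:, None]      # stack 1-D velocity series
            lengths = [len(s) for s in seqs]       # sequence boundaries
            m = hmm.GaussianHMM(n_components=n_states, n_iter=50)
            m.fit(X, lengths)
            models[user] = m
        return models

    def identify(models, velocity_seq):
        """Attribute a new sequence to the user whose model scores it highest."""
        X = np.asarray(velocity_seq)[:, None]
        return max(models, key=lambda u: models[u].score(X))
    ```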

  6. Gaze as a biometric

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Hong-Jun [ORNL]; Carmichael, Tandy [Tennessee Technological University]; Tourassi, Georgia [ORNL]

    2014-01-01

    Two people may analyze a visual scene in two completely different ways. Our study sought to determine whether human gaze may be used to establish the identity of an individual. To accomplish this objective we investigated the gaze pattern of twelve individuals viewing different still images with different spatial relationships. Specifically, we created 5 visual dot-pattern tests to be shown on a standard computer monitor. These tests challenged the viewer's capacity to distinguish proximity, alignment, and perceptual organization. Each test included 50 images of varying difficulty (total of 250 images). Eye-tracking data were collected from each individual while taking the tests. The eye-tracking data were converted into gaze velocities and analyzed with Hidden Markov Models to develop personalized gaze profiles. Using leave-one-out cross-validation, we observed that these personalized profiles could differentiate among the 12 users with classification accuracy ranging between 53% and 76%, depending on the test. This was statistically significantly better than random guessing (i.e., 8.3% or 1 out of 12). Classification accuracy was higher for the tests where the users' average gaze velocity per case was lower. The study findings support the feasibility of using gaze as a biometric or personalized biomarker. These findings could have implications in Radiology training and the development of personalized e-learning environments.

  7. AmbiGaze : direct control of ambient devices by gaze

    OpenAIRE

    Velloso, Eduardo; Wirth, Markus; Weichel, Christian; Abreu Esteves, Augusto Emanuel; Gellersen, Hans-Werner Georg

    2016-01-01

    Eye tracking offers many opportunities for direct device control in smart environments, but issues such as the need for calibration and the Midas touch problem make it impractical. In this paper, we propose AmbiGaze, a smart environment that employs the animation of targets to provide users with direct control of devices by gaze only through smooth pursuit tracking. We propose a design space of means of exposing functionality through movement and illustrate the concept through four prototypes...

  8. Mobile gaze input system for pervasive interaction

    DEFF Research Database (Denmark)

    2017-01-01

    feedback to the user in response to the received command input. The unit provides feedback to the user on how to position the mobile unit in front of his eyes. The gaze tracking unit interacts with one or more controlled devices via wireless or wired communications. Example devices include a lock, a thermostat, a light or a TV. The connection between the gaze tracking unit and a controlled device may be temporary or longer-lasting. The gaze tracking unit may detect features of the eye that provide information about the identity of the user.

  9. Gaze-controlled Driving

    DEFF Research Database (Denmark)

    Tall, Martin; Alapetite, Alexandre; San Agustin, Javier

    2009-01-01

    We investigate if the gaze (point of regard) can control a remote vehicle driving on a racing track. Five different input devices (on-screen buttons, mouse pointing, a low-cost webcam eye tracker, and two commercial eye tracking systems) provide heading and speed control on the scene view transmitted

  10. Gaze shifts and fixations dominate gaze behavior of walking cats

    Science.gov (United States)

    Rivers, Trevor J.; Sirota, Mikhail G.; Guttentag, Andrew I.; Ogorodnikov, Dmitri A.; Shah, Neet A.; Beloozerova, Irina N.

    2014-01-01

    Vision is important for locomotion in complex environments. How it is used to guide stepping is not well understood. We used an eye search coil technique combined with an active marker-based head recording system to characterize the gaze patterns of cats walking over terrains of different complexity: (1) on a flat surface in the dark when no visual information was available, (2) on the flat surface in light when visual information was available but not required, (3) along the highly structured but regular and familiar surface of a horizontal ladder, a task for which visual guidance of stepping was required, and (4) along a pathway cluttered with many small stones, an irregularly structured surface that was new each day. Three cats walked in a 2.5 m corridor, and 958 passages were analyzed. Gaze activity during the time when the gaze was directed at the walking surface was subdivided into four behaviors based on speed of gaze movement along the surface: gaze shift (fast movement), gaze fixation (no movement), constant gaze (movement at the body’s speed), and slow gaze (the remainder). We found that gaze shifts and fixations dominated the cats’ gaze behavior during all locomotor tasks, jointly occupying 62–84% of the time when the gaze was directed at the surface. As visual complexity of the surface and demand on visual guidance of stepping increased, cats spent more time looking at the surface, looked closer to them, and switched between gaze behaviors more often. During both visually guided locomotor tasks, gaze behaviors predominantly followed a repeated cycle of forward gaze shift followed by fixation. We call this behavior “gaze stepping”. Each gaze shift took gaze to a site approximately 75–80 cm in front of the cat, which the cat reached in 0.7–1.2 s and 1.1–1.6 strides. Constant gaze occupied only 5–21% of the time cats spent looking at the walking surface. PMID:24973656

  11. Remote gaze tracking system for 3D environments.

    Science.gov (United States)

    Liu, Congcong; Herrup, Karl; Shi, Bertram E

    2017-07-01

    Eye tracking systems are typically divided into two categories: remote and mobile. Remote systems, where the eye tracker is located near the object being viewed by the subject, have the advantage of being less intrusive, but are typically used for tracking gaze points on fixed two dimensional (2D) computer screens. Mobile systems such as eye tracking glasses, where the eye tracker is attached to the subject, are more intrusive, but are better suited for cases where subjects are viewing objects in the three dimensional (3D) environment. In this paper, we describe how remote gaze tracking systems developed for 2D computer screens can be used to track gaze points in a 3D environment. The system is non-intrusive. It compensates for small head movements by the user, so that the head need not be stabilized by a chin rest or bite bar. The system maps the 3D gaze points of the user onto 2D images from a scene camera and is also located remotely from the subject. Measurement results from this system indicate that it is able to estimate gaze points in the scene camera to within one degree over a wide range of head positions.

  12. Eye-catching odors: olfaction elicits sustained gazing to faces and eyes in 4-month-old infants.

    Directory of Open Access Journals (Sweden)

    Karine Durand

    Full Text Available This study investigated whether an odor can affect infants' attention to visually presented objects and whether it can selectively direct visual gaze at visual targets as a function of their meaning. Four-month-old infants (n = 48) were exposed to their mother's body odors while their visual exploration was recorded with an eye-movement tracking system. Two groups of infants, who were assigned to either an odor condition or a control condition, looked at a scene composed of still pictures of faces and cars. As expected, infants looked longer at the faces than at the cars but this spontaneous preference for faces was significantly enhanced in the presence of the odor. As expected also, when looking at the face, the infants looked longer at the eyes than at any other facial regions, but, again, they looked at the eyes significantly longer in the presence of the odor. Thus, 4-month-old infants are sensitive to the contextual effects of odors while looking at faces. This suggests that early social attention to faces is mediated by visual as well as non-visual cues.

  13. The Effectiveness of Gaze-Contingent Control in Computer Games.

    Science.gov (United States)

    Orlov, Paul A; Apraksin, Nikolay

    2015-01-01

    Eye-tracking technology and gaze-contingent control in human-computer interaction have become an objective reality. This article reports on a series of eye-tracking experiments, in which we concentrated on one aspect of gaze-contingent interaction: its effectiveness compared with mouse-based control in a computer strategy game. We propose a measure for evaluating the effectiveness of interaction based on "the time of recognition" of the game unit. In this article, we use this measure to compare gaze- and mouse-contingent systems, and we present an analysis of the differences as a function of the number of game units. Our results indicate that the performance of gaze-contingent interaction is typically higher than that of mouse manipulation in a visual searching task. When tested on 60 subjects, the results showed that the effectiveness of gaze-contingent systems was over 1.5 times higher. In addition, we found that eye behavior stays quite stable with or without mouse interaction.

  14. Eye-based head gestures

    DEFF Research Database (Denmark)

    Mardanbegi, Diako; Witzner Hansen, Dan; Pederson, Thomas

    2012-01-01

    A novel method for video-based head gesture recognition using eye information by an eye tracker has been proposed. The method uses a combination of gaze and eye movement to infer head gestures. Compared to other gesture-based methods a major advantage of the method is that the user keeps the gaze ... mobile phone screens. The user study shows that the method detects a set of defined gestures reliably.

  15. Eye Gaze During Face Processing in Children and Adolescents with 22q11.2 Deletion Syndrome

    Science.gov (United States)

    Glaser, Bronwyn; Debbane, Martin; Ottet, Marie-Christine; Vuilleumier, Patrik; Zesiger, Pascal; Antonarakis, Stylianos E.; Eliez, Stephan

    2010-01-01

    Objective: The 22q11.2 deletion syndrome (22q11DS) is a neurogenetic syndrome with high risk for the development of psychiatric disorder. There is interest in identifying reliable markers for measuring and monitoring socio-emotional impairments in 22q11DS during development. The current study investigated eye gaze as a potential marker during a…

  16. An Exploration of the Use of Eye-Gaze Tracking to Study Problem-Solving on Standardized Science Assessments

    Science.gov (United States)

    Tai, Robert H.; Loehr, John F.; Brigham, Frederick J.

    2006-01-01

    This pilot study investigated the capacity of eye-gaze tracking to identify differences in problem-solving behaviours within a group of individuals who possessed varying degrees of knowledge and expertise in three disciplines of science (biology, chemistry and physics). The six participants, all pre-service science teachers, completed an 18-item…

  17. Real-time sharing of gaze data between multiple eye trackers-evaluation, tools, and advice.

    Science.gov (United States)

    Nyström, Marcus; Niehorster, Diederick C; Cornelissen, Tim; Garde, Henrik

    2017-08-01

    Technological advancements in combination with significant reductions in price have made it practically feasible to run experiments with multiple eye trackers. This enables new types of experiments with simultaneous recordings of eye movement data from several participants, which is of interest for researchers in, e.g., social and educational psychology. The Lund University Humanities Laboratory recently acquired 25 remote eye trackers, which are connected over a local wireless network. As a first step toward running experiments with this setup, demanding situations with real-time sharing of gaze data were investigated in terms of network performance as well as clock and screen synchronization. Results show that data can be shared with a sufficiently low packet loss (0.1%) and latency (M = 3 ms, MAD = 2 ms) across 8 eye trackers at a rate of 60 Hz. For a similar performance using 24 computers, the send rate needs to be reduced to 20 Hz. To help researchers conduct similar measurements on their own multi-eye-tracker setup, open source software written in Python and PsychoPy is provided. Part of the software contains a minimal working example to help researchers kick-start experiments with two or more eye trackers.
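
    As a rough illustration of the kind of real-time sharing evaluated here, gaze samples can be pushed over a local network as small UDP datagrams, one per sample at the tracker's frame rate. The Python sketch below shows such a sender; the JSON message format, port number and broadcast transport are assumptions, not the protocol of the Lund toolkit.

    ```python
    import json
    import socket
    import time

    # Minimal sketch of sharing gaze samples over a local network via UDP
    # broadcast (illustrative only; the published toolkit's actual message
    # format and transport are not reproduced here).
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

    def send_gaze(x, y, tracker_id, port=50022):
        """Broadcast one gaze sample as a small JSON datagram."""
        msg = json.dumps({"id": tracker_id, "t": time.time(),
                          "x": x, "y": y}).encode()
        sock.sendto(msg, ("<broadcast>", port))

    # At 60 Hz, each tracker would call send_gaze once per sample:
    send_gaze(0.42, 0.58, tracker_id=7)
    ```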

  18. Intranasal Oxytocin Treatment Increases Eye-Gaze Behavior toward the Owner in Ancient Japanese Dog Breeds

    Directory of Open Access Journals (Sweden)

    Miho Nagasawa

    2017-09-01

    Full Text Available Dogs acquired unique cognitive abilities during domestication, which is thought to have contributed to the formation of the human-dog bond. In European breeds, but not in wolves, a dog's gazing behavior plays an important role in affiliative interactions with humans and stimulates oxytocin secretion in both humans and dogs, which suggests that this interspecies oxytocin- and gaze-mediated bonding was also acquired during domestication. In this study, we investigated whether Japanese breeds, which are classified as ancient breeds and are relatively close to wolves genetically, establish a bond with their owners through gazing behavior. The subject dogs were treated with either oxytocin or saline before the start of the behavioral testing. We also evaluated physiological changes in the owners during mutual gazing by analyzing their heart rate variability (HRV) and subsequent urinary oxytocin levels in both dogs and their owners. We found that oxytocin treatment enhanced the gazing behavior of Japanese dogs and increased their owners' urinary oxytocin levels, as was seen with European breeds; however, the measured durations of skin contact and proximity to their owners were relatively low. In the owners' HRV readings, inter-beat (R-R) intervals (RRI), the standard deviation of normal-to-normal inter-beat (R-R) intervals (SDNN), and the root mean square of successive heartbeat interval differences (RMSSD) were lower when the dogs were treated with oxytocin compared with saline. Furthermore, the owners of female dogs showed lower SDNN than the owners of male dogs. These results suggest that the owners of female Japanese dogs exhibit more tension during interactions, and apart from gazing behavior, the dogs may show sex differences in their interactions with humans as well. They also suggest that Japanese dogs use eye-gazing as an attachment behavior toward humans similar to European breeds; however, there is a disparity between the dog sexes when

  19. iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker.

    Science.gov (United States)

    Mayberry, Addison; Hu, Pan; Marlin, Benjamin; Salthouse, Christopher; Ganesan, Deepak

    2014-06-01

    Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees.

  20. iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker

    Science.gov (United States)

    Mayberry, Addison; Hu, Pan; Marlin, Benjamin; Salthouse, Christopher; Ganesan, Deepak

    2015-01-01

    Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees. PMID:26539565
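
    The sparsity-inducing idea at the heart of this design can be illustrated compactly: penalize the norm of each input pixel's weight group in the first network layer so that training drives most pixels to exactly zero, leaving only a small subset that the sensor needs to sample. The PyTorch sketch below shows this group-lasso construction; the layer sizes, penalty weight and activation are illustrative assumptions rather than the published architecture.

    ```python
    import torch
    import torch.nn as nn

    class SparseGazeNet(nn.Module):
        """Small MLP mapping eye-image pixels to (x, y) gaze, with a
        group-sparse first layer (sketch, not the iShadow network)."""
        def __init__(self, n_pixels, n_hidden=32):
            super().__init__()
            self.l1 = nn.Linear(n_pixels, n_hidden)
            self.l2 = nn.Linear(n_hidden, 2)   # (x, y) gaze output

        def forward(self, x):
            return self.l2(torch.tanh(self.l1(x)))

    def loss_fn(model, x, y, lam=1e-3):
        """MSE on gaze plus a group-lasso penalty over input pixels."""
        mse = ((model(x) - y) ** 2).mean()
        # L2 norm of each pixel's outgoing weights, summed across pixels,
        # pushes whole input columns (i.e., whole pixels) to zero.
        group = model.l1.weight.norm(dim=0).sum()
        return mse + lam * group

    def active_pixels(model, tol=1e-4):
        """Indices of pixels the trained network actually needs to sample."""
        return (model.l1.weight.norm(dim=0) > tol).nonzero().flatten()
    ```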

  1. Model-driven gaze simulation for the blind person in face-to-face communication

    NARCIS (Netherlands)

    Qiu, S.; Anas, S.A.B.; Osawa, H.; Rauterberg, G.W.M.; Hu, J.

    2016-01-01

    In face-to-face communication, eye gaze is integral to a conversation, supplementing verbal language. The sighted often use eye gaze to convey nonverbal information in social interactions, which a blind conversation partner cannot access or react to. In this paper, we present E-Gaze glasses

  2. Speaker gaze increases information coupling between infant and adult brains.

    Science.gov (United States)

    Leong, Victoria; Byrne, Elizabeth; Clackson, Kaili; Georgieva, Stanimira; Lam, Sarah; Wass, Sam

    2017-12-12

    When infants and adults communicate, they exchange social signals of availability and communicative intention such as eye gaze. Previous research indicates that when communication is successful, close temporal dependencies arise between adult speakers' and listeners' neural activity. However, it is not known whether similar neural contingencies exist within adult-infant dyads. Here, we used dual-electroencephalography to assess whether direct gaze increases neural coupling between adults and infants during screen-based and live interactions. In experiment 1 (n = 17), infants viewed videos of an adult who was singing nursery rhymes with (i) direct gaze (looking forward), (ii) indirect gaze (head and eyes averted by 20°), or (iii) direct-oblique gaze (head averted but eyes orientated forward). In experiment 2 (n = 19), infants viewed the same adult in a live context, singing with direct or indirect gaze. Gaze-related changes in adult-infant neural network connectivity were measured using partial directed coherence. Across both experiments, the adult had a significant (Granger) causal influence on infants' neural activity, which was stronger during direct and direct-oblique gaze relative to indirect gaze. During live interactions, infants also influenced the adult more during direct than indirect gaze. Further, infants vocalized more frequently during live direct gaze, and individual infants who vocalized longer also elicited stronger synchronization from the adult. These results demonstrate that direct gaze strengthens bidirectional adult-infant neural connectivity during communication. Thus, ostensive social signals could act to bring brains into mutual temporal alignment, creating a joint-networked state that is structured to facilitate information transfer during early communication and learning.

  3. Predictive Gaze Cues and Personality Judgments: Should Eye Trust You?

    OpenAIRE

    Bayliss, Andrew P.; Tipper, Steven P.

    2006-01-01

    Although following another person's gaze is essential in fluent social interactions, the reflexive nature of this gaze-cuing effect means that gaze can be used to deceive. In a gaze-cuing procedure, participants were presented with several faces that looked to the left or right. Some faces always looked to the target (predictive-valid), some never looked to the target (predictive-invalid), and others looked toward and away from the target in equal proportions (nonpredictive). The standard gaz...

  4. Investigating gaze-controlled input in a cognitive selection test

    OpenAIRE

    Gayraud, Katja; Hasse, Catrin; Eißfeldt, Hinnerk; Pannasch, Sebastian

    2017-01-01

    In the field of aviation, there is a growing interest in developing more natural forms of interaction between operators and systems to enhance safety and efficiency. These efforts also include eye gaze as an input channel for human-machine interaction. The present study investigates the application of gaze-controlled input in a cognitive selection test called Eye Movement Conflict Detection Test. The test enables eye movements to be studied as an indicator for psychological test performance a...

  5. Complicating Eroticism and the Male Gaze: Feminism and Georges Bataille’s Story of the Eye

    Directory of Open Access Journals (Sweden)

    Chris Vanderwees

    2014-01-01

    Full Text Available This article explores the relationship between feminist criticism and Georges Bataille’s Story of the Eye . Much of the critical work on Bataille assimilates his psychosocial theories in Erotism with the manifestation of those theories in his fiction without acknowledging potential contradictions between the two bodies of work. The conflation of important distinctions between representations of sex and death in Story of the Eye and the writings of Erotism forecloses the possibility of reading Bataille’s novel as a critique of gender relations. This article unravels some of the distinctions between Erotism and Story of the Eye in order to complicate the assumption that the novel simply reproduces phallogocentric sexual fantasies of transgression. Drawing from the work of Angela Carter and Laura Mulvey, the author proposes the possibility of reading Story of the Eye as a pornographic critique of gender relations through an analysis of the novel’s displacement and destruction of the male gaze.

  6. An eye-tracking method to reveal the link between gazing patterns and pragmatic abilities in high functioning autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Ouriel eGrynszpan

    2015-01-01

    Full Text Available The present study illustrates the potential advantages of an eye-tracking method for exploring the association between visual scanning of faces and inferences of mental states. Participants watched short videos involving social interactions and had to explain what they had seen. The number of cognition verbs (e.g., think, believe, know) in their answers was counted. Given the possible use of peripheral vision that could confound eye-tracking measures, we added a condition using a gaze-contingent viewing window: the entire visual display is blurred, except for an area that moves with the participant's gaze. Eleven typical adults and eleven high-functioning adults with ASD were recruited. The condition employing the viewing window yielded strong correlations between the average duration of fixations, the ratio of cognition verbs and standard measures of social disabilities.

  7. Intact unconscious processing of eye contact in schizophrenia

    Directory of Open Access Journals (Sweden)

    Kiley Seymour

    2016-03-01

    Full Text Available The perception of eye gaze is crucial for social interaction, providing essential information about another person’s goals, intentions, and focus of attention. People with schizophrenia suffer a wide range of social cognitive deficits, including abnormalities in eye gaze perception. For instance, patients have shown an increased bias to misjudge averted gaze as being directed toward them. In this study we probed early unconscious mechanisms of gaze processing in schizophrenia using a technique known as continuous flash suppression. Previous research using this technique to render faces with direct and averted gaze initially invisible reveals that direct eye contact gains privileged access to conscious awareness in healthy adults. We found that patients, as with healthy control subjects, showed the same effect: faces with direct eye gaze became visible significantly faster than faces with averted gaze. This suggests that early unconscious processing of eye gaze is intact in schizophrenia and implies that any misjudgments of gaze direction must manifest at a later conscious stage of gaze processing where deficits and/or biases in attributing mental states to gaze and/or beliefs about being watched may play a role.

  8. Dynamic eye tracking based metrics for infant gaze patterns in the face-distractor competition paradigm.

    Directory of Open Access Journals (Sweden)

    Eero Ahtola

    Full Text Available To develop new standardized eye tracking based measures and metrics for infants' gaze dynamics in the face-distractor competition paradigm. Eye tracking data were collected from two samples of healthy 7-month-old infants (total n = 45), as well as one sample of 5-month-old infants (n = 22), in a paradigm with a picture of a face or a non-face pattern as a central stimulus and a geometric shape as a lateral stimulus. The data were analyzed by using conventional measures of infants' initial disengagement from the central to the lateral stimulus (i.e., saccadic reaction time and probability) and, additionally, novel measures reflecting infants' gaze dynamics after the initial disengagement (i.e., cumulative allocation of attention to the central vs. peripheral stimulus). The results showed that the initial saccade away from the centrally presented stimulus is followed by a rapid re-engagement of attention with the central stimulus, leading to cumulative preference for the central stimulus over the lateral stimulus over time. This pattern tended to be stronger for salient facial expressions as compared to non-face patterns, was replicable across two independent samples of 7-month-old infants, and differentiated between 7- and 5-month-old infants. The results suggest that eye tracking based assessments of infants' cumulative preference for faces over time can be readily parameterized and standardized, and may provide valuable techniques for future studies examining normative developmental changes in preference for social signals. Standardized measures of early developing face preferences may have the potential to become surrogate biomarkers of neurocognitive and social development.
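
    Both families of measures named above reduce to simple computations once each gaze sample is labeled as falling on the central or the lateral stimulus. The Python sketch below derives a saccadic reaction time and a cumulative central-stimulus preference from such labels; the sample-based definitions are illustrative assumptions, not the study's exact parameterization.

    ```python
    import numpy as np

    def gaze_dynamics(t, on_central, on_lateral, lateral_onset):
        """Initial disengagement latency and cumulative central preference
        (sketch of the measures named above; the first-lateral-sample rule
        for the saccadic reaction time is an assumption).

        t             : (N,) sample times in seconds
        on_central    : (N,) bool, gaze inside the central stimulus AOI
        on_lateral    : (N,) bool, gaze inside the lateral stimulus AOI
        lateral_onset : time at which the lateral stimulus appeared
        """
        after = t >= lateral_onset
        lateral_idx = np.flatnonzero(after & on_lateral)
        # Saccadic reaction time: first gaze arrival at the lateral stimulus.
        srt = t[lateral_idx[0]] - lateral_onset if len(lateral_idx) else None
        # Cumulative preference: share of post-onset looking time spent on
        # the central stimulus, out of all time on either stimulus.
        c = np.sum(after & on_central)
        l = np.sum(after & on_lateral)
        pref = c / (c + l) if (c + l) else None
        return srt, pref
    ```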

  9. Discussion and Future Directions for Eye Tracker Development

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Mulvey, Fiona; Mardanbegi, Diako

    2011-01-01

    Eye and gaze tracking have a long history but there is still plenty of room for further development. In this concluding chapter for Section 6, we consider future perspectives for the development of eye and gaze tracking.

  10. Culture and Listeners' Gaze Responses to Stuttering

    Science.gov (United States)

    Zhang, Jianliang; Kalinowski, Joseph

    2012-01-01

    Background: It is frequently observed that listeners demonstrate gaze aversion to stuttering. This response may have profound social/communicative implications for both fluent and stuttering individuals. However, there is a lack of empirical examination of listeners' eye gaze responses to stuttering, and it is unclear whether cultural background…

  11. Real-time estimation of horizontal gaze angle by saccade integration using in-ear electrooculography.

    Science.gov (United States)

    Hládek, Ľuboš; Porr, Bernd; Brimijoin, W Owen

    2018-01-01

    The manuscript proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, we used an algorithm that calculates absolute eye gaze angle via statistical analysis of detected saccades. The estimated eye positions of the new algorithm were still noisy. However, the performance in terms of Pearson product-moment correlation coefficients was significantly better than the conventional approach in some instances. The results suggest that in-ear EOG signals captured with conductive ear moulds could serve as a basis for light-weight and portable horizontal eye gaze angle estimation suitable for a broad range of applications, for instance, hearing aids that steer the directivity of their microphones in the direction of the user's eye gaze.
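
    The saccade-integration idea lends itself to a short sketch. All parameter values and names below are illustrative assumptions, not those of the study:

```python
import numpy as np

def eog_saccade_gaze(eog_uv, fs, gain_deg_per_uv=0.05, vel_thresh_uv_s=300.0):
    """Rough sketch of saccade integration for a single-channel EOG trace:
    detect saccades as supra-threshold velocity events, convert each EOG
    step to degrees, accumulate them into an absolute gaze angle, and
    re-centre statistically to counter drift."""
    eog = np.asarray(eog_uv, float)
    vel = np.gradient(eog) * fs                 # uV per second
    gaze = np.zeros_like(eog)
    angle, in_saccade, start = 0.0, False, 0
    for i, v in enumerate(vel):
        if not in_saccade and abs(v) > vel_thresh_uv_s:
            in_saccade, start = True, i         # saccade onset
        elif in_saccade and abs(v) < vel_thresh_uv_s:
            # saccade amplitude from the EOG step, converted to degrees
            angle += (eog[i] - eog[start]) * gain_deg_per_uv
            in_saccade = False
        gaze[i] = angle
    return gaze - np.median(gaze)               # crude statistical re-centring
```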

  12. Human-like object tracking and gaze estimation with PKD android.

    Science.gov (United States)

    Wijayasinghe, Indika B; Miller, Haylie L; Das, Sumit K; Bugnariu, Nicoleta L; Popa, Dan O

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.

  14. Evidence for impairments in using static line drawings of eye gaze cues to orient visual-spatial attention in children with high functioning autism.

    Science.gov (United States)

    Goldberg, Melissa C; Mostow, Allison J; Vecera, Shaun P; Larson, Jennifer C Gidley; Mostofsky, Stewart H; Mahone, E Mark; Denckla, Martha B

    2008-09-01

    We examined the ability to use static line drawings of eye gaze cues to orient visual-spatial attention in children with high functioning autism (HFA) compared to typically developing children (TD). The task was organized such that on valid trials, gaze cues were directed toward the same spatial location as the appearance of an upcoming target, while on invalid trials gaze cues were directed to an opposite location. Unlike TD children, children with HFA showed no advantage in reaction time (RT) on valid trials compared to invalid trials (i.e., no significant validity effect). The two stimulus onset asynchronies (200 ms, 700 ms) did not differentially affect these findings. The results suggest that children with HFA show impairments in utilizing static line drawings of gaze cues to orient visual-spatial attention.
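
    The validity effect reported above is simply the invalid-minus-valid difference in mean reaction time; a minimal sketch (inputs illustrative):

```python
import numpy as np

def validity_effect_ms(rts_ms, cue_valid):
    """Minimal sketch of the Posner-style validity effect: mean RT on
    invalid trials minus mean RT on valid trials. A positive difference
    indicates that attention followed the gaze cue; the null result
    reported for the HFA group would appear here as a value near 0 ms."""
    rts = np.asarray(rts_ms, float)
    valid = np.asarray(cue_valid, bool)
    return rts[~valid].mean() - rts[valid].mean()

# e.g. validity_effect_ms([310, 295, 355, 350], [True, True, False, False]) -> 50.0
```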

  15. Transition from Target to Gaze Coding in Primate Frontal Eye Field during Memory Delay and Memory–Motor Transformation

    Science.gov (United States)

    Sajad, Amirsaman; Sadeh, Morteza; Yan, Xiaogang; Wang, Hongying

    2016-01-01

    Abstract The frontal eye fields (FEFs) participate in both working memory and sensorimotor transformations for saccades, but their role in integrating these functions through time remains unclear. Here, we tracked FEF spatial codes through time using a novel analytic method applied to the classic memory-delay saccade task. Three-dimensional recordings of head-unrestrained gaze shifts were made in two monkeys trained to make gaze shifts toward briefly flashed targets after a variable delay (450-1500 ms). A preliminary analysis of visual and motor response fields in 74 FEF neurons eliminated most potential models for spatial coding at the neuron population level, as in our previous study (Sajad et al., 2015). We then focused on the spatiotemporal transition from an eye-centered target code (T; preferred in the visual response) to an eye-centered intended gaze position code (G; preferred in the movement response) during the memory delay interval. We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T–G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T–G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T–G delay codes to a “pure” G code in movement cells without delay activity. These results demonstrate that FEF activity undergoes a series of sensory–memory–motor transformations, including a dynamically evolving spatial memory signal and an imperfect memory-to-motor transformation. PMID:27092335

  16. Dynamic Eye Tracking Based Metrics for Infant Gaze Patterns in the Face-Distractor Competition Paradigm

    Science.gov (United States)

    Ahtola, Eero; Stjerna, Susanna; Yrttiaho, Santeri; Nelson, Charles A.; Leppänen, Jukka M.; Vanhatalo, Sampsa

    2014-01-01

    Objective: To develop new standardized eye tracking based measures and metrics for infants' gaze dynamics in the face-distractor competition paradigm. Method: Eye tracking data were collected from two samples of healthy 7-month-old infants (total n = 45), as well as one sample of 5-month-old infants (n = 22), in a paradigm with a picture of a face or a non-face pattern as a central stimulus, and a geometric shape as a lateral stimulus. The data were analyzed by using conventional measures of infants' initial disengagement from the central to the lateral stimulus (i.e., saccadic reaction time and probability) and, additionally, novel measures reflecting infants' gaze dynamics after the initial disengagement (i.e., cumulative allocation of attention to the central vs. peripheral stimulus). Results: The results showed that the initial saccade away from the centrally presented stimulus is followed by a rapid re-engagement of attention with the central stimulus, leading to cumulative preference for the central stimulus over the lateral stimulus over time. This pattern tended to be stronger for salient facial expressions as compared to non-face patterns, was replicable across two independent samples of 7-month-old infants, and differentiated between 7- and 5-month-old infants. Conclusion: The results suggest that eye tracking based assessments of infants' cumulative preference for faces over time can be readily parameterized and standardized, and may provide valuable techniques for future studies examining normative developmental changes in preference for social signals. Significance: Standardized measures of early developing face preferences may have potential to become surrogate biomarkers of neurocognitive and social development. PMID:24845102

  17. Real-time estimation of horizontal gaze angle by saccade integration using in-ear electrooculography.

    Directory of Open Access Journals (Sweden)

    Ľuboš Hládek

    Full Text Available The manuscript proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, we used an algorithm that calculates absolute eye gaze angle via statistical analysis of detected saccades. The estimated eye positions of the new algorithm were still noisy. However, the performance in terms of Pearson product-moment correlation coefficients was significantly better than the conventional approach in some instances. The results suggest that in-ear EOG signals captured with conductive ear moulds could serve as a basis for light-weight and portable horizontal eye gaze angle estimation suitable for a broad range of applications, for instance, hearing aids that steer the directivity of their microphones in the direction of the user's eye gaze.

  18. Is gaze following purely reflexive or goal-directed instead? Revisiting the automaticity of orienting attention by gaze cues.

    Science.gov (United States)

    Ricciardelli, Paola; Carcagno, Samuele; Vallar, Giuseppe; Bricolo, Emanuela

    2013-01-01

    Distracting gaze has been shown to elicit automatic gaze following. However, it is still debated whether the effects of perceived gaze are a simple automatic spatial orienting response or are instead sensitive to the context (i.e. goals and task demands). In three experiments, we investigated the conditions under which gaze following occurs. Participants were instructed to saccade towards one of two lateral targets. A face distracter, always present in the background, could gaze towards: (a) a task-relevant target--("matching" goal-directed gaze shift)--congruent or incongruent with the instructed direction, (b) a task-irrelevant target, orthogonal to the one instructed ("non-matching" goal-directed gaze shift), or (c) an empty spatial location (no-goal-directed gaze shift). Eye movement recordings showed faster saccadic latencies in correct trials in congruent conditions especially when the distracting gaze shift occurred before the instruction to make a saccade. Interestingly, while participants made a higher proportion of gaze-following errors (i.e. errors in the direction of the distracting gaze) in the incongruent conditions when the distracter's gaze shift preceded the instruction onset indicating an automatic gaze following, they never followed the distracting gaze when it was directed towards an empty location or a stimulus that was never the target. Taken together, these findings suggest that gaze following is likely to be a product of both automatic and goal-driven orienting mechanisms.

  19. Influences of dwell time and cursor control on the performance in gaze driven typing

    OpenAIRE

    Helmert, Jens R.; Pannasch, Sebastian; Velichkovsky, Boris M.

    2008-01-01

    In gaze-controlled computer interfaces, dwell time is often used as the selection criterion. But this solution comes with several problems, especially in the temporal domain: eye movement studies on scene perception have demonstrated that fixations of different durations serve different purposes and should therefore be differentiated. The use of dwell time for selection implies the need to distinguish intentional selections from merely perceptual processes, described as the Midas touch ...
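
    A minimal sketch of dwell-time selection, including the usual guard against the "Midas touch" problem the record describes (all thresholds are illustrative assumptions):

```python
def dwell_selections(gaze_stream, dwell_ms=600, radius_px=40):
    """Emit a selection once gaze has stayed within `radius_px` of an
    anchor point for `dwell_ms`. Fixations shorter than the threshold are
    treated as perceptual and never trigger a selection."""
    anchor, t0 = None, None
    for t_ms, x, y in gaze_stream:                    # (timestamp ms, x, y)
        moved = anchor is None or \
            (x - anchor[0]) ** 2 + (y - anchor[1]) ** 2 > radius_px ** 2
        if moved:
            anchor, t0 = (x, y), t_ms                 # gaze moved: restart timer
        elif t_ms - t0 >= dwell_ms:
            yield t_ms, anchor                        # selection event
            anchor, t0 = None, None                   # re-arm after selection
```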

  20. Coding gaze tracking data with chromatic gradients for VR Exposure Therapy

    DEFF Research Database (Denmark)

    Herbelin, Bruno; Grillon, Helena; De Heras Ciechomski, Pablo

    2007-01-01

    This article presents a simple and intuitive way to represent the eye-tracking data gathered during immersive virtual reality exposure therapy sessions. Eye-tracking technology is used to observe gaze movements during virtual reality sessions and the gaze-map chromatic gradient coding allows to ... is fully compatible with different VR exposure systems and provides clinically meaningful data.

  1. In the eye of the beholder: eye contact increases resistance to persuasion.

    Science.gov (United States)

    Chen, Frances S; Minson, Julia A; Schöne, Maren; Heinrichs, Markus

    2013-11-01

    Popular belief holds that eye contact increases the success of persuasive communication, and prior research suggests that speakers who direct their gaze more toward their listeners are perceived as more persuasive. In contrast, we demonstrate that more eye contact between the listener and speaker during persuasive communication predicts less attitude change in the direction advocated. In Study 1, participants freely watched videos of speakers expressing various views on controversial sociopolitical issues. Greater direct gaze at the speaker's eyes was associated with less attitude change in the direction advocated by the speaker. In Study 2, we instructed participants to look at either the eyes or the mouths of speakers presenting arguments counter to participants' own attitudes. Intentionally maintaining direct eye contact led to less persuasion than did gazing at the mouth. These findings suggest that efforts at increasing eye contact may be counterproductive across a variety of persuasion contexts.

  2. Spatial updating depends on gaze direction even after loss of vision.

    Science.gov (United States)

    Reuschel, Johanna; Rösler, Frank; Henriques, Denise Y P; Fiehler, Katja

    2012-02-15

    Direction of gaze (eye angle + head angle) has been shown to be important for representing space for action, implying a crucial role of vision for spatial updating. However, blind people have no access to vision yet are able to perform goal-directed actions successfully. Here, we investigated the role of visual experience for localizing and updating targets as a function of intervening gaze shifts in humans. People who differed in visual experience (late blind, congenitally blind, or sighted) were briefly presented with a proprioceptive reach target while facing it. Before they reached to the target's remembered location, they turned their head toward an eccentric direction that also induced corresponding eye movements in sighted and late blind individuals. We found that reaching errors varied systematically as a function of shift in gaze direction only in participants with early visual experience (sighted and late blind). In the late blind, this effect was solely present in people with moveable eyes but not in people with at least one glass eye. Our results suggest that the effect of gaze shifts on spatial updating develops on the basis of visual experience early in life and remains even after loss of vision as long as feedback from the eyes and head is available.

  3. Oxytocin Reduces Face Processing Time but Leaves Recognition Accuracy and Eye-Gaze Unaffected.

    Science.gov (United States)

    Hubble, Kelly; Daughters, Katie; Manstead, Antony S R; Rees, Aled; Thapar, Anita; van Goozen, Stephanie H M

    2017-01-01

    Previous studies have found that oxytocin (OXT) can improve the recognition of emotional facial expressions; it has been proposed that this effect is mediated by an increase in attention to the eye-region of faces. Nevertheless, evidence in support of this claim is inconsistent, and few studies have directly tested the effect of oxytocin on emotion recognition via altered eye-gaze. Methods: In a double-blind, within-subjects, randomized control experiment, 40 healthy male participants received 24 IU intranasal OXT and placebo in two identical experimental sessions separated by a 2-week interval. Visual attention to the eye-region was assessed on both occasions while participants completed a static facial emotion recognition task using medium intensity facial expressions. Although OXT had no effect on emotion recognition accuracy, recognition performance was improved because face processing was faster across emotions under the influence of OXT. This effect was marginally significant. OXT did not increase attention to the eye-region of faces, and this was not related to recognition accuracy or face processing time. These findings suggest that OXT-induced enhanced facial emotion recognition is not necessarily mediated by an increase in attention to the eye-region of faces, as previously assumed. We discuss several methodological issues which may explain discrepant findings and suggest the effect of OXT on visual attention may differ depending on task requirements. (JINS, 2017, 23, 23-33).

  4. Race perception and gaze direction differently impair visual working memory for faces: An event-related potential study.

    Science.gov (United States)

    Sessa, Paola; Dalmaso, Mario

    2016-01-01

    Humans are remarkably expert at processing and recognizing faces; however, there are factors that moderate this ability. In the present study, we used the event-related potential technique to investigate the influence of both race and gaze direction on visual working memory (i.e., VWM) face representations. In a change detection task, we orthogonally manipulated race (own-race vs. other-race faces) and eye-gaze direction (direct gaze vs. averted gaze). Participants were required to encode identities of these faces. We quantified the amount of information encoded in VWM by monitoring the amplitude of the sustained posterior contralateral negativity (SPCN) time-locked to the faces. Notably, race and eye-gaze direction differently modulated SPCN amplitude such that other-race faces elicited reduced SPCN amplitudes compared with own-race faces only when displaying a direct gaze. On the other hand, faces displaying averted gaze, independently of their race, elicited increased SPCN amplitudes compared with faces displaying direct gaze. We interpret these findings as denoting that race and eye-gaze direction affect different face processing stages.

  5. Gaze perception in social anxiety and social anxiety disorder

    Directory of Open Access Journals (Sweden)

    Lars eSchulze

    2013-12-01

    Full Text Available Clinical observations suggest abnormal gaze perception to be an important indicator of social anxiety disorder (SAD). Experimental research has so far paid relatively little attention to the study of gaze perception in SAD. In this article we first discuss gaze perception in healthy human beings before reviewing self-referential and threat-related biases of gaze perception in clinical and non-clinical socially anxious samples. Relative to controls, socially anxious individuals exhibit an enhanced self-directed perception of gaze directions and demonstrate a pronounced fear of direct eye contact, though findings are less consistent regarding the avoidance of mutual gaze in SAD. Prospects for future research and clinical implications are discussed.

  6. Dysfunctional gaze processing in bipolar disorder

    Directory of Open Access Journals (Sweden)

    Cristina Berchio

    2017-01-01

    The present study provides neurophysiological evidence for abnormal gaze processing in bipolar disorder (BP) and suggests dysfunctional processing of direct eye contact as a prominent characteristic of the disorder.

  7. Can gaze-contingent mirror-feedback from unfamiliar faces alter self-recognition?

    Science.gov (United States)

    Estudillo, Alejandro J; Bindemann, Markus

    2017-05-01

    This study focuses on learning of the self, by examining how human observers update internal representations of their own face. For this purpose, we present a novel gaze-contingent paradigm, in which an onscreen face mimics observers' own eye-gaze behaviour (in the congruent condition), moves its eyes in different directions to that of the observers (incongruent condition), or remains static and unresponsive (neutral condition). Across three experiments, the mimicry of the onscreen face did not affect observers' perceptual self-representations. However, this paradigm influenced observers' reports of their own face. This effect was such that observers felt the onscreen face to be their own and that, if the onscreen gaze had moved on its own accord, observers expected their own eyes to move too. The theoretical implications of these findings are discussed.

  8. Judgments at Gaze Value: Gaze Cuing in Banner Advertisements, Its Effect on Attention Allocation and Product Judgments

    Directory of Open Access Journals (Sweden)

    Johanna Palcu

    2017-06-01

    Full Text Available Banner advertising is a popular means of promoting products and brands online. Although banner advertisements are often designed to be particularly attention grabbing, they frequently go unnoticed. Applying an eye-tracking procedure, the present research aimed to (a) determine whether presenting human faces (static or animated) in banner advertisements is an adequate tool for capturing consumers' attention and thus overcoming the frequently observed phenomenon of banner blindness, (b) to examine whether the gaze of a featured face possesses the ability to direct consumers' attention toward specific elements (i.e., the product) in an advertisement, and (c) to establish whether the gaze direction of an advertised face influences consumers' subsequent evaluation of the advertised product. We recorded participants' eye gaze while they viewed a fictional online shopping page displaying banner advertisements that featured either no human face or a human face that was either static or animated and involved different gaze directions (toward or away from the advertised product). Moreover, we asked participants to subsequently evaluate a set of products, one of which was the product previously featured in the banner advertisement. Results showed that, when advertisements included a human face, participants' attention was more attracted by and they looked longer at animated compared with static banner advertisements. Moreover, when a face gazed toward the product region, participants' likelihood of looking at the advertised product increased regardless of whether the face was animated or not. Most important, gaze direction influenced subsequent product evaluations; that is, consumers indicated a higher intention to buy a product when it was previously presented in a banner advertisement that featured a face that gazed toward the product. The results suggest that while animation in banner advertising constitutes a salient feature that captures consumers' visual attention, gaze ...

  9. Judgments at Gaze Value: Gaze Cuing in Banner Advertisements, Its Effect on Attention Allocation and Product Judgments.

    Science.gov (United States)

    Palcu, Johanna; Sudkamp, Jennifer; Florack, Arnd

    2017-01-01

    Banner advertising is a popular means of promoting products and brands online. Although banner advertisements are often designed to be particularly attention grabbing, they frequently go unnoticed. Applying an eye-tracking procedure, the present research aimed to (a) determine whether presenting human faces (static or animated) in banner advertisements is an adequate tool for capturing consumers' attention and thus overcoming the frequently observed phenomenon of banner blindness, (b) to examine whether the gaze of a featured face possesses the ability to direct consumers' attention toward specific elements (i.e., the product) in an advertisement, and (c) to establish whether the gaze direction of an advertised face influences consumers' subsequent evaluation of the advertised product. We recorded participants' eye gaze while they viewed a fictional online shopping page displaying banner advertisements that featured either no human face or a human face that was either static or animated and involved different gaze directions (toward or away from the advertised product). Moreover, we asked participants to subsequently evaluate a set of products, one of which was the product previously featured in the banner advertisement. Results showed that, when advertisements included a human face, participants' attention was more attracted by and they looked longer at animated compared with static banner advertisements. Moreover, when a face gazed toward the product region, participants' likelihood of looking at the advertised product increased regardless of whether the face was animated or not. Most important, gaze direction influenced subsequent product evaluations; that is, consumers indicated a higher intention to buy a product when it was previously presented in a banner advertisement that featured a face that gazed toward the product. The results suggest that while animation in banner advertising constitutes a salient feature that captures consumers' visual attention, gaze ...

  10. Interaction between gaze and visual and proprioceptive position judgements.

    Science.gov (United States)

    Fiehler, Katja; Rösler, Frank; Henriques, Denise Y P

    2010-06-01

    There is considerable evidence that targets for action are represented in a dynamic gaze-centered frame of reference, such that each gaze shift requires an internal updating of the target. Here, we investigated the effect of eye movements on the spatial representation of targets used for position judgements. Participants had their hand passively placed to a location, and then judged whether this location was left or right of a remembered visual or remembered proprioceptive target, while gaze direction was varied. Estimates of position of the remembered targets relative to the unseen position of the hand were assessed with an adaptive psychophysical procedure. These positional judgements significantly varied relative to gaze for both remembered visual and remembered proprioceptive targets. Our results suggest that relative target positions may also be represented in eye-centered coordinates. This implies similar spatial reference frames for action control and space perception when positions are coded relative to the hand.

  11. The Effect of Eye Contact Is Contingent on Visual Awareness

    Directory of Open Access Journals (Sweden)

    Shan Xu

    2018-02-01

    Full Text Available The present study explored how eye contact at different levels of visual awareness influences gaze-induced joint attention. We adopted a spatial-cueing paradigm, in which an averted gaze was used as an uninformative central cue for a joint-attention task. Prior to the onset of the averted-gaze cue, either supraliminal (Experiment 1) or subliminal (Experiment 2) eye contact was presented. The results revealed a larger subsequent gaze-cueing effect following supraliminal eye contact compared to a no-contact condition. In contrast, the gaze-cueing effect was smaller in the subliminal eye-contact condition than in the no-contact condition. These findings suggest that the facilitation effect of eye contact on coordinating social attention depends on visual awareness. Furthermore, subliminal eye contact might have an impact on subsequent social attention processes that differ from supraliminal eye contact. This study highlights the need to further investigate the role of eye contact in implicit social cognition.

  12. Investigating social gaze as an action-perception online performance.

    Science.gov (United States)

    Grynszpan, Ouriel; Simonin, Jérôme; Martin, Jean-Claude; Nadel, Jacqueline

    2012-01-01

    Gaze represents a major non-verbal communication channel in social interactions. In this respect, when facing another person, one's gaze should not be examined as a purely perceptive process but also as an action-perception online performance. However, little is known about processes involved in the real-time self-regulation of social gaze. The present study investigates the impact of a gaze-contingent viewing window on fixation patterns and the awareness of being the agent moving the window. In face-to-face scenarios played by a virtual human character, the task for the 18 adult participants was to interpret an equivocal sentence which could be disambiguated by examining the emotional expressions of the character speaking. The virtual character was embedded in naturalistic backgrounds to enhance realism. Eye-tracking data showed that the viewing window induced changes in gaze behavior, notably longer visual fixations. Notwithstanding, only half of the participants ascribed the window displacements to their eye movements. These participants also spent more time looking at the eyes and mouth regions of the virtual human character. The outcomes of the study highlight the dissociation between non-volitional gaze adaptation and the self-ascription of agency. Such dissociation provides support for a two-step account of the sense of agency composed of pre-noetic monitoring mechanisms and reflexive processes, linked by bottom-up and top-down processes. We comment upon these results, which illustrate the relevance of our method for studying online social cognition, in particular concerning autism spectrum disorders (ASD) where the poor pragmatic understanding of oral speech is considered linked to visual peculiarities that impede facial exploration.

  13. The Disturbance of Gaze in Progressive Supranuclear Palsy (PSP): Implications for Pathogenesis

    Directory of Open Access Journals (Sweden)

    Athena L Chen

    2010-12-01

    Full Text Available Progressive supranuclear palsy (PSP) is a disease of later life that is currently regarded as a form of neurodegenerative tauopathy. Disturbance of gaze is a cardinal clinical feature of PSP that often helps clinicians to establish the diagnosis. Since the neurobiology of gaze control is now well understood, it is possible to use eye movements as investigational tools to understand aspects of the pathogenesis of PSP. In this review, we summarize each disorder of gaze control that occurs in PSP, drawing on our studies of fifty patients, and on reports from other laboratories that have measured the disturbances of eye movements. When these gaze disorders are approached by considering each functional class of eye movements and its neurobiological basis, a distinct pattern of eye movement deficits emerges that provides insight into the pathogenesis of PSP. Although some aspects of all forms of eye movements are affected in PSP, the predominant defects concern vertical saccades (slow and hypometric, both up and down), impaired vergence, and inability to modulate the linear vestibulo-ocular reflex appropriately for viewing distance. These vertical and vergence eye movements habitually work in concert to enable visuomotor skills that are important during locomotion with the hands free. Taken with the prominent early feature of falls, these findings suggest that PSP tauopathy impairs a recently-evolved neural system concerned with bipedal locomotion in an erect posture and frequent gaze shifts between the distant environment and proximate hands. This approach provides a conceptual framework that can be used to address the nosological challenge posed by overlapping clinical and neuropathological features of neurodegenerative tauopathies.

  14. Face age modulates gaze following in young adults

    OpenAIRE

    Francesca Ciardo; Barbara F. M. Marino; Rossana Actis-Grosso; Angela Rossetti; Paola Ricciardelli

    2014-01-01

    Gaze-following behaviour is considered crucial for social interactions which are influenced by social similarity. We investigated whether the degree of similarity, as indicated by the perceived age of another person, can modulate gaze following. Participants of three different age-groups (18–25; 35–45; over 65) performed an eye movement (a saccade) towards an instructed target while ignoring the gaze-shift of distracters of different age-ranges (6–10; 18–25; 35–45; over 70). The results show ...

  15. Improved remote gaze estimation using corneal reflection-adaptive geometric transforms

    Science.gov (United States)

    Ma, Chunfei; Baek, Seung-Jin; Choi, Kang-A.; Ko, Sung-Jea

    2014-05-01

    Recently, the remote gaze estimation (RGE) technique has been widely applied to consumer devices as a more natural interface. In general, the conventional RGE method estimates a user's point of gaze using a geometric transform, which represents the relationship between several infrared (IR) light sources and their corresponding corneal reflections (CRs) in the eye image. Among various methods, the homography normalization (HN) method achieves state-of-the-art performance. However, the geometric transform of the HN method requiring four CRs is infeasible for the case when fewer than four CRs are available. To solve this problem, this paper proposes a new RGE method based on three alternative geometric transforms, which are adaptive to the number of CRs. Unlike the HN method, the proposed method not only can operate with two or three CRs, but can also provide superior accuracy. To further enhance the performance, an effective error correction method is also proposed. By combining the introduced transforms with the error-correction method, the proposed method not only provides high accuracy and robustness for gaze estimation, but also allows for a more flexible system setup with a different number of IR light sources. Experimental results demonstrate the effectiveness of the proposed method.
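
    For orientation, the homography-normalization step referenced above can be sketched as follows, under the assumption of four detected corneal reflections; the paper's contribution is precisely the two- and three-CR case, which this sketch does not cover:

```python
import numpy as np
import cv2

def pupil_in_normalized_space(cr_points, pupil_xy):
    """Sketch of the homography-normalization (HN) idea: map the four
    corneal reflections (image coordinates) onto a unit square, then
    express the pupil centre in that normalized space. A second,
    calibrated mapping (omitted here) then yields the on-screen point
    of gaze."""
    unit_square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], np.float32)
    H, _ = cv2.findHomography(np.asarray(cr_points, np.float32), unit_square)
    p = np.asarray(pupil_xy, np.float32).reshape(1, 1, 2)
    return cv2.perspectiveTransform(p, H)[0, 0]   # (u, v) in the unit square
```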

  16. Risk and Ambiguity in Information Seeking: Eye Gaze Patterns Reveal Contextual Behavior in Dealing with Uncertainty.

    Science.gov (United States)

    Wittek, Peter; Liu, Ying-Hsang; Darányi, Sándor; Gedeon, Tom; Lim, Ik Soo

    2016-01-01

    Information foraging connects optimal foraging theory in ecology with how humans search for information. The theory suggests that, following an information scent, the information seeker must optimize the tradeoff between exploration by repeated steps in the search space vs. exploitation, using the resources encountered. We conjecture that this tradeoff characterizes how a user deals with uncertainty and its two aspects, risk and ambiguity in economic theory. Risk is related to the perceived quality of the actually visited patch of information, and can be reduced by exploiting and understanding the patch to a better extent. Ambiguity, on the other hand, is the opportunity cost of having higher quality patches elsewhere in the search space. The aforementioned tradeoff depends on many attributes, including traits of the user: at the two extreme ends of the spectrum, analytic and wholistic searchers employ entirely different strategies. The former type focuses on exploitation first, interspersed with bouts of exploration, whereas the latter type prefers to explore the search space first and consume later. Our findings from an eye-tracking study of experts' interactions with novel search interfaces in the biomedical domain suggest that user traits of cognitive styles and perceived search task difficulty are significantly correlated with eye gaze and search behavior. We also demonstrate that perceived risk shifts the balance between exploration and exploitation in either type of users, tilting it against vs. in favor of ambiguity minimization. Since the pattern of behavior in information foraging is quintessentially sequential, risk and ambiguity minimization cannot happen simultaneously, leading to a fundamental limit on how good such a tradeoff can be. This in turn connects information seeking with the emergent field of quantum decision theory.

  17. Gaze Interactive Building Instructions

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Ahmed, Zaheer; Mardanbeigi, Diako

    We combine eye tracking technology and mobile tablets to support hands-free interaction with digital building instructions. As a proof-of-concept we have developed a small interactive 3D environment where one can interact with digital blocks by gaze, keystroke and head gestures. Blocks may be moved...

  18. In the twinkling of an eye: synchronization of EEG and eye tracking based on blink signatures

    DEFF Research Database (Denmark)

    Bækgaard, Per; Petersen, Michael Kai; Larsen, Jakob Eg

    2014-01-01

    ACHIEVING ROBUST ADAPTIVE SYNCHRONIZATION OF MULTIMODAL BIOMETRIC INPUTS: The recent arrival of wireless EEG headsets that enable mobile real-time 3D brain imaging on smartphones, and low cost eye trackers that provide gaze control of tablets, will radically change how biometric sensors might be integrated into next generation user interfaces. In experimental lab settings EEG neuroimaging and eye tracking data are traditionally combined using external triggers to synchronize the signals. However, with biometric sensors increasingly being applied in everyday usage scenarios, there will be a need ... function based algorithm to correlate the signals. Comparing the accuracy of the method against a state of the art EYE-EEG plug-in for offline analysis of EEG and eye tracking data, we propose our approach could be applied for robust synchronization of biometric sensor data collected in a mobile context.
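
    The blink-signature synchronization named in the title can be sketched as a cross-correlation of the two blink-event streams; the binning and search range below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def blink_sync_offset_s(eeg_blinks_s, et_blinks_s, max_lag_s=5.0, bin_s=0.01):
    """Binarize the two blink-event streams on a common time grid and
    return the lag that maximizes their cross-correlation, i.e. an
    estimate of the clock offset between the EEG and eye-tracking
    recordings (sign convention illustrative)."""
    t_max = max(max(eeg_blinks_s), max(et_blinks_s)) + max_lag_s
    n = int(t_max / bin_s) + 1
    a = np.zeros(n)
    b = np.zeros(n)
    a[(np.asarray(eeg_blinks_s) / bin_s).astype(int)] = 1.0
    b[(np.asarray(et_blinks_s) / bin_s).astype(int)] = 1.0
    k_max = int(max_lag_s / bin_s)
    lags = np.arange(-k_max, k_max + 1)
    corr = [float(np.dot(a, np.roll(b, k))) for k in lags]
    return lags[int(np.argmax(corr))] * bin_s   # shift to align the eye-tracker clock
```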

  19. Facilitated orienting underlies fearful face-enhanced gaze cueing of spatial location

    Directory of Open Access Journals (Sweden)

    Joshua M. Carlson

    2016-12-01

    Full Text Available Faces provide a platform for non-verbal communication through emotional expression and eye gaze. Fearful facial expressions are salient indicators of potential threat within the environment, which automatically capture observers’ attention. However, the degree to which fearful facial expressions facilitate attention to others’ gaze is unresolved. Given that fearful gaze indicates the location of potential threat, it was hypothesized that fearful gaze facilitates location processing. To test this hypothesis, a gaze cueing study with fearful and neutral faces assessing target localization was conducted. The task consisted of leftward, rightward, and forward/straight gaze trials. The inclusion of forward gaze trials allowed for the isolation of orienting and disengagement components of gaze-directed attention. The results suggest that both neutral and fearful gaze modulate attention through orienting and disengagement components. Fearful gaze, however, resulted in quicker orienting than neutral gaze. Thus, fearful faces enhance gaze cueing of spatial location through facilitated orienting.

  20. Stay tuned: Inter-individual neural synchronization during mutual gaze and joint attention

    Directory of Open Access Journals (Sweden)

    Daisuke N Saito

    2010-11-01

    Full Text Available Eye contact provides a communicative link between humans, prompting joint attention. As spontaneous brain activity may have an important role in coordination of neuronal processing within the brain, their inter-subject synchronization may occur during eye contact. To test this, we conducted simultaneous functional MRI in pairs of adults. Eye contact was maintained at baseline while the subjects engaged in real-time gaze exchange in a joint attention task. Averted gaze activated the bilateral occipital pole extending to the right posterior superior temporal sulcus, the dorso-medial prefrontal cortex, and bilateral inferior frontal gyrus. Following a partner’s gaze towards an object activated the left intraparietal sulcus. After all task-related effects were modeled out, inter-individual correlation analysis of residual time-courses was performed. Paired subjects showed more prominent correlations than non-paired subjects in the right inferior frontal gyrus, suggesting that this region is involved in sharing intention during eye contact that provides the context for joint attention.

  1. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor.

    Science.gov (United States)

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-02-03

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.
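
    As a rough illustration of the approach (not the authors' architecture, which the abstract does not specify), a gaze-zone classifier of this kind can be sketched as a small CNN over NIR image crops; the zone count here is an assumption:

```python
import torch
import torch.nn as nn

class GazeZoneNet(nn.Module):
    """Minimal sketch of a CNN gaze-zone classifier: a single-channel NIR
    eye/face crop goes in, logits over discrete in-car gaze regions
    (dashboard, mirrors, road zones, ...) come out."""
    def __init__(self, n_zones: int = 17):    # zone count is illustrative
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),          # fixed 4x4 spatial output
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_zones)

    def forward(self, x):                     # x: (batch, 1, H, W) NIR crop
        return self.classifier(self.features(x).flatten(1))
```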

  2. Differences in gaze anticipation for locomotion with and without vision

    Science.gov (United States)

    Authié, Colas N.; Hilt, Pauline M.; N'Guyen, Steve; Berthoz, Alain; Bennequin, Daniel

    2015-01-01

    Previous experimental studies have shown a spontaneous anticipation of locomotor trajectory by the head and gaze direction during human locomotion. This anticipatory behavior could serve several functions: an optimal selection of visual information, for instance through landmarks and optic flow, as well as trajectory planning and motor control. This would imply that anticipation remains in darkness but with different characteristics. We asked 10 participants to walk along two predefined complex trajectories (limaçon and figure eight) without any cue on the trajectory to follow. Two visual conditions were used: (i) in light and (ii) in complete darkness with eyes open. The whole body kinematics were recorded by motion capture, along with the participant's right eye movements. We showed that in darkness and in light, horizontal gaze anticipates the orientation of the head which itself anticipates the trajectory direction. However, the horizontal angular anticipation decreases by a half in darkness for both gaze and head. In both visual conditions we observed an eye nystagmus with similar properties (frequency and amplitude). The main difference comes from the fact that in light, there is a shift of the orientations of the eye nystagmus and the head in the direction of the trajectory. These results suggest that a fundamental function of gaze is to represent self motion, stabilize the perception of space during locomotion, and to simulate the future trajectory, regardless of the vision condition. PMID:26106313

  3. A closer look at the size of the gaze-liking effect: a preregistered replication.

    Science.gov (United States)

    Tipples, Jason; Pecchinenda, Anna

    2018-04-30

    This study is a direct replication of the gaze-liking effect using the same design, stimuli and procedure. The gaze-liking effect describes the tendency for people to rate objects as more likeable when they have recently seen a person repeatedly gaze toward rather than away from the object. However, as subsequent studies show considerable variability in the size of this effect, we sampled a larger number of participants (N = 98) than the original study (N = 24) to gain a more precise estimate of the gaze-liking effect size. Our results indicate a much smaller standardised effect size (dz = 0.02) than that of the original study (dz = 0.94). Our smaller effect size was not due to general insensitivity to eye-gaze effects, because the same sample showed a clear (dz = 1.09) gaze-cuing effect: faster reaction times when eyes looked toward vs. away from target objects. We discuss the implications of our findings for future studies wishing to study the gaze-liking effect.
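
    For reference, the standardized effect size reported here is Cohen's dz for paired designs: the mean of the N per-participant difference scores divided by their standard deviation:

```latex
% Cohen's d_z for within-subject (paired) designs,
% with D_i the i-th participant's difference score:
d_z = \frac{\bar{D}}{s_D}, \qquad
\bar{D} = \frac{1}{N}\sum_{i=1}^{N} D_i, \qquad
s_D = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(D_i - \bar{D}\right)^2}
```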

  4. Toward Optimization of Gaze-Controlled Human-Computer Interaction: Application to Hindi Virtual Keyboard for Stroke Patients.

    Science.gov (United States)

    Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, Kongfatt; Dutta, Ashish; Prasad, Girijesh

    2018-04-01

    Virtual keyboard applications and alternative communication devices provide new means of communication to assist disabled people. To date, virtual keyboard optimization schemes based on script-specific information, along with multimodal input access facility, are limited. In this paper, we propose a novel method for optimizing the position of the displayed items for gaze-controlled tree-based menu selection systems by considering a combination of letter frequency and command selection time. The optimized graphical user interface layout has been designed for a Hindi language virtual keyboard based on a menu wherein 10 commands provide access to type 88 different characters, along with additional text editing commands. The system can be controlled in two different modes: eye-tracking alone and eye-tracking with an access soft-switch. Five different keyboard layouts have been presented and evaluated with ten healthy participants. Furthermore, the two best performing keyboard layouts have been evaluated with eye-tracking alone on ten stroke patients. The overall performance analysis demonstrated significantly superior typing performance, high usability (87% SUS score), and low workload (NASA TLX score of 17) for the letter-frequency and time-based organization with script-specific arrangement design. This paper presents the first optimized gaze-controlled Hindi virtual keyboard, which can be extended to other languages.
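
    The frequency/selection-time trade-off described above can be sketched as a simple cost function with a greedy assignment (all names are illustrative, not the authors' notation or algorithm):

```python
def expected_selection_time(assignment, letter_freq, slot_time):
    """Expected time to type one character: the frequency-weighted
    selection time of the slot each character is assigned to."""
    return sum(letter_freq[ch] * slot_time[slot]
               for ch, slot in assignment.items())

def frequency_ordered_layout(letter_freq, slot_time):
    """Greedy layout: assign the most frequent characters to the slots
    that are fastest to select, the intuition behind a letter-frequency
    and selection-time based organization."""
    letters = sorted(letter_freq, key=letter_freq.get, reverse=True)
    slots = sorted(slot_time, key=slot_time.get)
    return dict(zip(letters, slots))
```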

  5. Tracking Eyes using Shape and Appearance

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Nielsen, Mads; Hansen, John Paulin

    2002-01-01

    ... to infer the state of the eye, such as eye corners and the pupil location, under scale and rotational changes. We use a Gaussian Process interpolation method for gaze determination, which facilitates stability feedback from the system. The use of a learning method for gaze estimation gives more flexibility ...

  6. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor

    Directory of Open Access Journals (Sweden)

    Rizwan Ali Naqvi

    2018-02-01

    Full Text Available A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver’s point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.

  7. Estimating the gaze of a virtuality human.

    Science.gov (United States)

    Roberts, David J; Rae, John; Duckworth, Tobias W; Moore, Carl M; Aspin, Rob

    2013-04-01

    The aim of our experiment is to determine if eye-gaze can be estimated from a virtuality human: to within the accuracies that underpin social interaction; and reliably across gaze poses and camera arrangements likely in everyday settings. The scene is set by explaining why Immersive Virtuality Telepresence has the potential to meet the grand challenge of faithfully communicating both the appearance and the focus of attention of a remote human participant within a shared 3D computer-supported context. Within the experiment, n = 22 participants rotated static 3D virtuality humans, reconstructed from surround images, until they felt most looked at. The dependent variable was absolute angular error, which was compared to that underpinning social gaze behaviour in the natural world. Independent variables were 1) relative orientations of eye, head and body of the captured subject; and 2) the subset of cameras used to texture the form. Analysis looked for statistical and practical significance and qualitative corroborating evidence. The analysed results tell us much about the importance and detail of the relationship between gaze pose, method of video based reconstruction, and camera arrangement. They tell us that virtuality can reproduce gaze to an accuracy useful in social interaction, but with the adopted method of Video Based Reconstruction, this is highly dependent on the combination of gaze pose and camera arrangement. This suggests changes in the VBR approach in order to allow more flexible camera arrangements. The work is of interest to those wanting to support expressive meetings that are both socially and spatially situated, and particularly those using or building Immersive Virtuality Telepresence to accomplish this. It is also of relevance to the use of virtuality humans in applications ranging from the study of human interactions to gaming and the crossing of the stage line in films and TV.

  8. Gaze training enhances laparoscopic technical skill acquisition and multi-tasking performance: a randomized, controlled study.

    Science.gov (United States)

    Wilson, Mark R; Vine, Samuel J; Bright, Elizabeth; Masters, Rich S W; Defriend, David; McGrath, John S

    2011-12-01

    The operating room environment is replete with stressors and distractions that increase the attention demands of what are already complex psychomotor procedures. Contemporary research in other fields (e.g., sport) has revealed that gaze training interventions may support the development of robust movement skills. This current study was designed to examine the utility of gaze training for technical laparoscopic skills and to test performance under multitasking conditions. Thirty medical trainees with no laparoscopic experience were divided randomly into one of three treatment groups: gaze trained (GAZE), movement trained (MOVE), and discovery learning/control (DISCOVERY). Participants were fitted with a Mobile Eye gaze registration system, which measures eye-line of gaze at 25 Hz. Training consisted of ten repetitions of the "eye-hand coordination" task from the LAP Mentor VR laparoscopic surgical simulator while receiving instruction and video feedback (specific to each treatment condition). After training, all participants completed a control test (designed to assess learning) and a multitasking transfer test, in which they completed the procedure while performing a concurrent tone counting task. Not only did the GAZE group learn more quickly than the MOVE and DISCOVERY groups (faster completion times in the control test), but the performance difference was even more pronounced when multitasking. Differences in gaze control (target locking fixations), rather than tool movement measures (tool path length), underpinned this performance advantage for GAZE training. These results suggest that although the GAZE intervention focused on training gaze behavior only, there were indirect benefits for movement behaviors and performance efficiency. Additionally, focusing on a single external target when learning, rather than on complex movement patterns, may have freed-up attentional resources that could be applied to concurrent cognitive tasks.

  9. Gaze Behavior, Believability, Likability and the iCat

    NARCIS (Netherlands)

    Poel, Mannes; Heylen, Dirk K.J.; Meulemans, M.; Nijholt, Antinus; Stock, O.; Nishida, T.

    2007-01-01

    The iCat is a user-interface robot with the ability to express a range of emotions through its facial features. This paper summarizes our research on whether we can increase the believability and likability of the iCat for its human partners through the application of gaze behaviour. Gaze behaviour ...

  10. Gaze Behavior, Believability, Likability and the iCat

    NARCIS (Netherlands)

    Nijholt, Antinus; Poel, Mannes; Heylen, Dirk K.J.; Stock, O.; Nishida, T.; Meulemans, M.; van Bremen, A.

    2009-01-01

    The iCat is a user-interface robot with the ability to express a range of emotions through its facial features. This paper summarizes our research on whether we can increase the believability and likability of the iCat for its human partners through the application of gaze behaviour. Gaze behaviour ...

  11. Depth Compensation Model for Gaze Estimation in Sport Analysis

    DEFF Research Database (Denmark)

    Batista Narcizo, Fabricio; Hansen, Dan Witzner

    2015-01-01

    ... is tested in a totally controlled environment with the aim of checking the influences of eye tracker parameters and ocular biometric parameters on its behavior. We also present a gaze estimation method based on epipolar geometry for binocular eye tracking setups. The depth compensation model has shown very ...

  12. Intact unconscious processing of eye contact in schizophrenia

    NARCIS (Netherlands)

    Seymour, K.; Rhodes, G.; Stein, T.; Langdon, R.

    The perception of eye gaze is crucial for social interaction, providing essential information about another person's goals, intentions, and focus of attention. People with schizophrenia suffer a wide range of social cognitive deficits, including abnormalities in eye gaze perception. For instance,

  13. DIAGNOSIS OF MYASTHENIA GRAVIS USING FUZZY GAZE TRACKING SOFTWARE

    Directory of Open Access Journals (Sweden)

    Javad Rasti

    2015-04-01

    Myasthenia Gravis (MG) is an autoimmune disorder, which may lead to paralysis and even death if not treated on time. One of its primary symptoms is severe muscular weakness, initially arising in the eye muscles. Testing the mobility of the eyeball can help in early detection of MG. In this study, software was designed to analyze the ability of the eye muscles to focus in various directions, thus estimating the MG risk. Progressive weakness in gazing at the directions prompted by the software can reveal abnormal fatigue of the eye muscles, which is an alert sign for MG. To assess the user's ability to keep gazing in a specified direction, a fuzzy algorithm was applied to images of the user's eyes to determine the position of the iris in relation to the sclera. The results of the tests performed on 18 healthy volunteers and 18 volunteers in early stages of MG confirmed the validity of the suggested software.
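
    The abstract does not publish the fuzzy rule base, but the following toy sketch illustrates the general idea of grading iris displacement relative to the sclera with a fuzzy membership function; the normalization and breakpoints are invented for illustration.

    ```python
    # Toy fuzzy membership for "gazing at the prompted direction".
    # Breakpoints (0.4, 1.0) are invented; the paper's rule base is not given.

    def gaze_membership(offset):
        """offset: iris-center displacement toward the prompted direction,
        normalized so 1.0 means the iris reaches the sclera boundary.
        Returns a membership grade in [0, 1]."""
        x = abs(offset)
        if x >= 1.0:
            return 1.0
        if x <= 0.4:                  # assumed "not yet gazing" region
            return 0.0
        return (x - 0.4) / 0.6        # linear ramp between breakpoints

    # A progressive decline of this grade over repeated prompts would flag the
    # abnormal eye-muscle fatigue the abstract describes as an MG alert sign.
    ```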

  14. The “Social Gaze Space”: A Taxonomy for Gaze-Based Communication in Triadic Interactions

    Directory of Open Access Journals (Sweden)

    Mathis Jording

    2018-02-01

    Humans substantially rely on non-verbal cues in their communication and interaction with others. The eyes represent a "simultaneous input-output device": while we observe others and obtain information about their mental states (including feelings, thoughts, and intentions-to-act), our gaze simultaneously provides information about our own attention and inner experiences. This substantiates its pivotal role in the coordination of communication. The communicative and coordinative capacities – and their phylogenetic and ontogenetic impacts – become fully apparent in triadic interactions, constituted in their simplest form by two persons and an object. Technological advances have sparked renewed interest in social gaze and provide new methodological approaches. Here we introduce the 'Social Gaze Space' as a new conceptual framework for the systematic study of gaze behavior during social information processing. It covers all possible categorical states, namely 'partner-oriented,' 'object-oriented,' 'introspective,' 'initiating joint attention,' and 'responding joint attention.' Different combinations of these states explain several interpersonal phenomena. We argue that this taxonomy distinguishes the most relevant interactional states along their distinctive features, and will showcase the implications for prominent social gaze phenomena. The taxonomy allows researchers to identify research desiderata that have been neglected so far. We argue for a systematic investigation of these phenomena and discuss some related methodological issues.

  15. Hovering by Gazing: A Novel Strategy for Implementing Saccadic Flight-Based Navigation in GPS-Denied Environments

    Directory of Open Access Journals (Sweden)

    Augustin Manecy

    2014-04-01

    Hovering flies are able to stay still in place when hovering above flowers and burst into movement towards a new object of interest (a target). This suggests that the sensorimotor control loops implemented onboard could be usefully mimicked for controlling Unmanned Aerial Vehicles (UAVs). In this study, the fundamental head-body movements occurring in free-flying insects were simulated in a sighted twin-engine robot with a mechanical decoupling inserted between its eye (or gaze) and its body. The robot based on this gaze control system achieved robust and accurate hovering performance, without an accelerometer, over a ground target despite a narrow eye field of view (±5°). The gaze stabilization strategy, validated under Processor-In-the-Loop (PIL) conditions and inspired by three biological Oculomotor Reflexes (ORs), enables the aerial robot to lock its gaze onto a fixed target regardless of its roll angle. In addition, the gaze control mechanism allows the robot to perform short-range target-to-target navigation by triggering an automatic fast "target jump" behaviour based on a saccadic eye movement.

  16. Live Speech Driven Head-and-Eye Motion Generators.

    Science.gov (United States)

    Le, Binh H; Ma, Xiaohan; Deng, Zhigang

    2012-11-01

    This paper describes a fully automated framework to generate realistic head motion, eye gaze, and eyelid motion simultaneously based on live (or recorded) speech input. Its central idea is to learn separate yet interrelated statistical models for each component (head motion, gaze, or eyelid motion) from a prerecorded facial motion data set: 1) Gaussian Mixture Models and gradient descent optimization algorithm are employed to generate head motion from speech features; 2) Nonlinear Dynamic Canonical Correlation Analysis model is used to synthesize eye gaze from head motion and speech features, and 3) nonnegative linear regression is used to model voluntary eye lid motion and log-normal distribution is used to describe involuntary eye blinks. Several user studies are conducted to evaluate the effectiveness of the proposed speech-driven head and eye motion generator using the well-established paired comparison methodology. Our evaluation results clearly show that this approach can significantly outperform the state-of-the-art head and eye motion generation algorithms. In addition, a novel mocap+video hybrid data acquisition technique is introduced to record high-fidelity head movement, eye gaze, and eyelid motion simultaneously.
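
    Of the three statistical components, the involuntary-blink model is simple enough to sketch: the abstract states that involuntary blinks follow a log-normal distribution. The snippet below draws blink onsets with log-normally distributed inter-blink intervals; the mu and sigma values are placeholders, not the authors' fitted parameters.

    ```python
    # Sketch of the involuntary-blink component: log-normally distributed
    # inter-blink intervals. mu/sigma are placeholders, not fitted values.
    import numpy as np

    rng = np.random.default_rng(0)

    def involuntary_blink_times(duration_s, mu=1.0, sigma=0.5):
        """Return involuntary blink onset times (s) within [0, duration_s]."""
        times, t = [], 0.0
        while True:
            t += rng.lognormal(mean=mu, sigma=sigma)   # next inter-blink interval
            if t > duration_s:
                return times
            times.append(t)
    ```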

  17. Self-Monitoring of Gaze in High Functioning Autism

    Science.gov (United States)

    Grynszpan, Ouriel; Nadel, Jacqueline; Martin, Jean-Claude; Simonin, Jerome; Bailleul, Pauline; Wang, Yun; Gepner, Daniel; Le Barillier, Florence; Constant, Jacques

    2012-01-01

    Atypical visual behaviour has been recently proposed to account for much of social misunderstanding in autism. Using an eye-tracking system and a gaze-contingent lens display, the present study explores self-monitoring of eye motion in two conditions: free visual exploration and guided exploration via blurring the visual field except for the focal…

  18. Emotion Unchained: Facial Expression Modulates Gaze Cueing under Cognitive Load.

    Science.gov (United States)

    Pecchinenda, Anna; Petrucci, Manuel

    2016-01-01

    Direction of eye gaze cues spatial attention, and typically this cueing effect is not modulated by the expression of a face unless top-down processes are explicitly or implicitly involved. To investigate the role of cognitive control on gaze cueing by emotional faces, participants performed a gaze cueing task with happy, angry, or neutral faces under high (i.e., counting backward by 7) or low cognitive load (i.e., counting forward by 2). Results show that high cognitive load enhances gaze cueing effects for angry facial expressions. In addition, cognitive load reduces gaze cueing for neutral faces, whereas happy facial expressions and gaze affected object preferences regardless of load. This evidence clearly indicates a differential role of cognitive control in processing gaze direction and facial expression, suggesting that under typical conditions, when we shift attention based on social cues from another person, cognitive control processes are used to reduce interference from emotional information.

  19. Cheating experience: Guiding novices to adopt the gaze strategies of experts expedites the learning of technical laparoscopic skills.

    Science.gov (United States)

    Vine, Samuel J; Masters, Rich S W; McGrath, John S; Bright, Elizabeth; Wilson, Mark R

    2012-07-01

    Previous research has demonstrated that trainees can be taught (via explicit verbal instruction) to adopt the gaze strategies of expert laparoscopic surgeons. The current study examined a software template designed to guide trainees to adopt expert gaze control strategies passively, without being provided with explicit instructions. We examined 27 novices (who had no laparoscopic training) performing 50 learning trials of a laparoscopic training task in either a discovery-learning (DL) group or a gaze-training (GT) group while wearing an eye tracker to assess gaze control. The GT group performed trials using a surgery-training template (STT); software that is designed to guide expert-like gaze strategies by highlighting the key locations on the monitor screen. The DL group had a normal, unrestricted view of the scene on the monitor screen. Both groups then took part in a nondelayed retention test (to assess learning) and a stress test (under social evaluative threat) with a normal view of the scene. The STT was successful in guiding the GT group to adopt an expert-like gaze strategy (displaying more target-locking fixations). Adopting expert gaze strategies led to an improvement in performance for the GT group, which outperformed the DL group in both retention and stress tests (faster completion time and fewer errors). The STT is a practical and cost-effective training interface that automatically promotes an optimal gaze strategy. Trainees who are trained to adopt the efficient target-locking gaze strategy of experts gain a performance advantage over trainees left to discover their own strategies for task completion. Copyright © 2012 Mosby, Inc. All rights reserved.

  20. Face age modulates gaze following in young adults.

    Science.gov (United States)

    Ciardo, Francesca; Marino, Barbara F M; Actis-Grosso, Rossana; Rossetti, Angela; Ricciardelli, Paola

    2014-04-22

    Gaze-following behaviour is considered crucial for social interactions which are influenced by social similarity. We investigated whether the degree of similarity, as indicated by the perceived age of another person, can modulate gaze following. Participants of three different age-groups (18-25; 35-45; over 65) performed an eye movement (a saccade) towards an instructed target while ignoring the gaze-shift of distracters of different age-ranges (6-10; 18-25; 35-45; over 70). The results show that gaze following was modulated by the distracter face age only for young adults. Particularly, the over 70 year-old distracters exerted the least interference effect. The distracters of a similar age-range as the young adults (18-25; 35-45) had the most effect, indicating a blurred own-age bias (OAB) only for the young age group. These findings suggest that face age can modulate gaze following, but this modulation could be due to factors other than just OAB (e.g., familiarity).

  1. Off-the-Shelf Gaze Interaction

    DEFF Research Database (Denmark)

    San Agustin, Javier

    People with severe motor-skill disabilities are often unable to use standard input devices such as a mouse or a keyboard to control a computer and they are, therefore, in strong need for alternative input devices. Gaze tracking offers them the possibility to use the movements of their eyes to int...

  2. Robustifying eye interaction

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Hansen, John Paulin

    2006-01-01

    This paper presents a gaze typing system based on consumer hardware. Eye tracking based on consumer hardware is subject to several unknown factors. We propose methods using robust statistical principles to accommodate uncertainties in image data as well as in gaze estimates to improve accuracy. We...

  3. Clinician's gaze behaviour in simulated paediatric emergencies.

    Science.gov (United States)

    McNaughten, Ben; Hart, Caroline; Gallagher, Stephen; Junk, Carol; Coulter, Patricia; Thompson, Andrew; Bourke, Thomas

    2018-03-07

    Differences in the gaze behaviour of experts and novices are described in aviation and surgery. This study sought to describe the gaze behaviour of clinicians from different training backgrounds during a simulated paediatric emergency. Clinicians from four clinical areas undertook a simulated emergency. Participants wore SMI (SensoMotoric Instruments) eye tracking glasses. We measured the fixation count and dwell time on predefined areas of interest and the time taken to key clinical interventions. Paediatric intensive care unit (PICU) consultants performed best and focused longer on the chest and airway. Paediatric consultants and trainees spent longer looking at the defibrillator and algorithm (51 180 ms and 50 551 ms, respectively) than the PICU and paediatric emergency medicine consultants. This study is the first to describe differences in the gaze behaviour between experts and novices in a resuscitation. They mirror those described in aviation and surgery. Further research is needed to evaluate the potential use of eye tracking as an educational tool. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
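
    The two gaze measures named here, fixation count and dwell time per area of interest, reduce to a simple aggregation over the fixation stream. A minimal sketch, assuming fixations are exported as (AOI label, duration) pairs:

    ```python
    # Minimal AOI aggregation; the (aoi, duration_ms) input format is an
    # assumption about the eye-tracker export, not the SMI file format.
    from collections import defaultdict

    def aoi_stats(fixations):
        """fixations: iterable of (aoi_label, duration_ms) tuples.
        Returns {aoi: {"count": n_fixations, "dwell_ms": total_duration}}."""
        stats = defaultdict(lambda: {"count": 0, "dwell_ms": 0})
        for aoi, duration_ms in fixations:
            stats[aoi]["count"] += 1
            stats[aoi]["dwell_ms"] += duration_ms
        return dict(stats)

    # e.g. aoi_stats([("chest", 310), ("defibrillator", 920), ("chest", 450)])
    ```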

  4. Investigating gaze of children with ASD in naturalistic settings.

    Directory of Open Access Journals (Sweden)

    Basilio Noris

    BACKGROUND: Visual behavior is known to be atypical in Autism Spectrum Disorders (ASD). Monitor-based eye-tracking studies have measured several of these atypicalities in individuals with autism. While atypical behaviors are known to be accentuated during natural interactions, few studies have been made on gaze behavior in natural interactions. In this study we focused on (i) whether the findings obtained in laboratory settings are also visible in a naturalistic interaction; (ii) whether new atypical elements appear when studying visual behavior across the whole field of view. METHODOLOGY/PRINCIPAL FINDINGS: Ten children with ASD and ten typically developing children participated in a dyadic interaction with an experimenter administering items from the Early Social Communication Scale (ESCS). The children wore a novel head-mounted eye-tracker, measuring gaze direction and presence of faces across the child's field of view. The analysis of gaze episodes to faces revealed that children with ASD looked significantly less, and for shorter lapses of time, at the experimenter. The analysis of gaze patterns across the child's field of view revealed that children with ASD looked downwards and made more extensive use of their lateral field of view when exploring the environment. CONCLUSIONS/SIGNIFICANCE: The data gathered in naturalistic settings confirm findings previously obtained only in monitor-based studies. Moreover, the study allowed us to observe a generalized strategy of lateral gaze in children with ASD when they were looking at the objects in their environment.

  5. PyGaze: an open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments.

    Science.gov (United States)

    Dalmaijer, Edwin S; Mathôt, Sebastiaan; Van der Stigchel, Stefan

    2014-12-01

    The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility. PyGaze can be used for visual and auditory stimulus presentation; for response collection via keyboard, mouse, joystick, and other external hardware; and for the online detection of eye movements using a custom algorithm. A wide range of eyetrackers of different brands (EyeLink, SMI, and Tobii systems) are supported. The novelty of PyGaze lies in providing an easy-to-use layer on top of the many different software libraries that are required for implementing eyetracking experiments. Essentially, PyGaze is a software bridge for eyetracking research.
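
    A minimal experiment skeleton using PyGaze's documented high-level classes might look as follows; exact constructor arguments and the active tracker brand depend on the local PyGaze configuration and version, so treat this as a sketch rather than a verified script.

    ```python
    # Sketch of a PyGaze session: display setup, calibration, one recording.
    # Assumes a tracker type has been configured in PyGaze's settings.
    from pygaze.display import Display
    from pygaze.screen import Screen
    from pygaze.eyetracker import EyeTracker

    disp = Display()
    scr = Screen()
    tracker = EyeTracker(disp)

    tracker.calibrate()                    # run the tracker's calibration routine

    scr.draw_fixation(fixtype='cross')     # draw a central fixation cross
    disp.fill(scr)
    disp.show()

    tracker.start_recording()
    gaze_x, gaze_y = tracker.sample()      # poll the newest gaze sample
    tracker.stop_recording()

    disp.close()
    ```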

  6. Investigating social gaze as an action-perception online performance

    Directory of Open Access Journals (Sweden)

    Ouriel eGrynszpan

    2012-04-01

    In interpersonal interactions, linguistic information is complemented by non-linguistic information originating largely from facial expressions. The study of online face-to-face social interaction thus entails investigating the multimodal simultaneous processing of oral and visual percepts. Moreover, gaze in and of itself functions as a powerful communicative channel. In this respect, gaze should not be examined as a purely perceptive process but also as an active social performance. We designed a task involving multimodal deciphering of social information based on virtual characters, embedded in naturalistic backgrounds, who directly address the participant with non-literal speech and meaningful facial expressions. Eighteen adult participants were to interpret an equivocal sentence which could be disambiguated by examining the emotional expressions of the character speaking to them face-to-face. To examine self-control and self-awareness of gaze in this context, visual feedback was provided to the participant by a real-time gaze-contingent viewing window centered on the focal point, while the rest of the display was blurred. Eye-tracking data showed that the viewing window induced changes in gaze behaviour, notably longer visual fixations. Notwithstanding, only half the participants ascribed the window displacements to their eye movements. These results highlight the dissociation between non-volitional gaze adaptation and self-ascription of agency. Such dissociation provides support for a two-step account of the sense of agency composed of pre-noetic monitoring mechanisms and reflexive processes. We comment upon these results, which illustrate the relevance of our method for studying online social cognition, especially concerning Autism Spectrum Disorders (ASD), where poor pragmatic understanding of oral speech is considered linked to visual peculiarities that impede face exploration.

  7. Learning rational temporal eye movement strategies.

    Science.gov (United States)

    Hoppe, David; Rothkopf, Constantin A

    2016-07-19

    During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is highly efficient, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies in sportsmen suggest that timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events humans adopt strategies that can be understood through a computational model that includes perceptual and acting uncertainties, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate with behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded actor model the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient in learning temporal regularities in the environment and that it can use these regularities to control the timing of eye movements to detect behaviorally relevant events.
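
    To make the reported trade-off concrete, here is a toy Monte-Carlo model (not the authors' model) in which an observer chooses an inter-saccade checking interval: each check costs a fixed amount, timing is perturbed by scalar noise (standard deviation proportional to the interval), and reward is the fraction of randomly occurring events detected. All numbers are invented.

    ```python
    # Toy bounded-actor model: detection reward minus per-saccade cost,
    # with scalar timing noise (std proportional to the planned interval).
    import numpy as np

    rng = np.random.default_rng(1)

    def expected_utility(interval_s, window_s=30.0, events_per_s=0.2,
                         detect_window_s=0.5, cost_per_check=0.02,
                         weber_fraction=0.15, n_sims=2000):
        n_checks = int(window_s / interval_s)
        utilities = []
        for _ in range(n_sims):
            checks = np.cumsum(rng.normal(interval_s, weber_fraction * interval_s,
                                          n_checks))
            events = rng.uniform(0, window_s, rng.poisson(events_per_s * window_s))
            hits = sum(np.any((checks >= e) & (checks <= e + detect_window_s))
                       for e in events)
            rate = hits / len(events) if len(events) else 1.0
            utilities.append(rate - cost_per_check * n_checks)
        return float(np.mean(utilities))

    # The best interval balances detection rate against eye-movement cost.
    best_interval = max(np.arange(0.5, 3.1, 0.5), key=expected_utility)
    ```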

  8. [Eye contact effects: A therapeutic issue?].

    Science.gov (United States)

    Baltazar, M; Conty, L

    2016-12-01

    The perception of a direct gaze - that is, of another individual's gaze directed at the observer that leads to eye contact - is known to influence a wide range of cognitive processes and behaviors. We stress that these effects mainly reflect positive impacts on human cognition and may thus be used as relevant tools for therapeutic purposes. In this review, we aim (1) to provide an exhaustive review of eye contact effects while discussing the limits of the dominant models used to explain these effects, (2) to illustrate the therapeutic potential of eye contact by targeting those pathologies that show both preserved gaze processing and deficits in one or several functions that are targeted by the eye contact effects, and (3) to propose concrete ways in which eye contact could be employed as a therapeutic tool. (1) We regroup the variety of eye contact effects into four categories: memory effects, activation of prosocial behavior, positive appraisals of self and others, and the enhancement of self-awareness. We emphasize that the models proposed to account for these effects have poor predictive value and that further description of these effects is needed. (2) We then emphasize that people with pathologies that affect memory, social behavior, self and/or other appraisal, and self-awareness could benefit from eye contact effects. We focus on depression, autism and Alzheimer's disease to illustrate our proposal. To our knowledge, no anomaly of eye contact has been reported in depression. Patients suffering from Alzheimer's disease, at the early and moderate stages, have been shown to maintain a normal amount of eye contact with their interlocutor. The case of autism remains controversial with regard to whether gaze processing is preserved or altered: in the first view, individuals are thought to elude or omit gazing at another's eyes, while in the second, individuals are considered unable to process the gaze of others. We adopt the first stance

  9. Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements.

    Science.gov (United States)

    Kreysa, Helene; Kessler, Luise; Schweinberger, Stefan R

    2016-01-01

    A speaker's gaze behaviour can provide perceivers with a multitude of cues which are relevant for communication, thus constituting an important non-verbal interaction channel. The present study investigated whether direct eye gaze of a speaker affects the likelihood of listeners believing truth-ambiguous statements. Participants were presented with videos in which a speaker produced such statements with either direct or averted gaze. The statements were selected through a rating study to ensure that participants were unlikely to know a-priori whether they were true or not (e.g., "sniffer dogs cannot smell the difference between identical twins"). Participants indicated in a forced-choice task whether or not they believed each statement. We found that participants were more likely to believe statements by a speaker looking at them directly, compared to a speaker with averted gaze. Moreover, when participants disagreed with a statement, they were slower to do so when the statement was uttered with direct (compared to averted) gaze, suggesting that the process of rejecting a statement as untrue may be inhibited when that statement is accompanied by direct gaze.

  10. Gaze stability of observers watching Op Art pictures.

    Science.gov (United States)

    Zanker, Johannes M; Doyle, Melanie; Walker, Robin

    2003-01-01

    It has been a matter of some debate why we can experience vivid dynamic illusions when looking at static pictures composed from simple black and white patterns. The impression of illusory motion is particularly strong when viewing some of the works of Op Artists, such as Bridget Riley's painting Fall. Explanations of the illusory motion have ranged from retinal to cortical mechanisms, and an important role has been attributed to eye movements. To assess the possible contribution of eye movements to the illusory-motion percept, we studied the strength of the illusion under different viewing conditions and analysed the gaze stability of observers viewing the Riley painting and control patterns that do not produce the illusion. Whereas the illusion was reduced, but not abolished, when watching the painting through a pinhole, which reduces the effects of accommodation, it was not perceived in flash afterimages, suggesting an important role for eye movements in generating the illusion for this image. Recordings of eye movements revealed an abundance of small involuntary saccades when looking at the Riley pattern, despite the fact that gaze was kept within the dedicated fixation region. The frequency and particular characteristics of these rapid eye movements can vary considerably between different observers, but, although there was a tendency for gaze stability to deteriorate while viewing a Riley painting, there was no significant difference in saccade frequency between the stimulus and control patterns. Theoretical considerations indicate that such small image displacements can generate patterns of motion signals in a motion-detector network, which may serve as a simple and sufficient, but not necessarily exclusive, explanation for the illusion. Why such image displacements lead to perceptual results with a group of Op Art and similar patterns, but remain invisible for other stimuli, is discussed.

  11. Extended Fitts' model of pointing time in eye-gaze input system - Incorporating effects of target shape and movement direction into modeling.

    Science.gov (United States)

    Murata, Atsuo; Fukunaga, Daichi

    2018-04-01

    This study attempted to investigate the effects of target shape and movement direction on pointing time using an eye-gaze input system, and to extend Fitts' model so that these factors are incorporated into the model and its predictive power is enhanced. The target shape, the target size, the movement distance, and the direction of target presentation were set as within-subject experimental variables. The target shapes included a circle and rectangles with aspect ratios of 1:1, 1:2, 1:3, and 1:4. The movement directions included eight directions: upper, lower, left, right, upper left, upper right, lower left, and lower right. On the basis of the data identifying the effects of target shape and movement direction on pointing time, an attempt was made to develop a generalized and extended Fitts' model that took the movement direction and the target shape into account. As a result, the generalized and extended model was found to fit the experimental data better and to be more effective for predicting pointing time for a variety of human-computer interaction (HCI) tasks using an eye-gaze input system. Copyright © 2017. Published by Elsevier Ltd.
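
    The abstract does not reproduce the extended equation, but the modeling idea can be sketched as the classic Shannon formulation of Fitts' law plus additive terms for movement direction and target shape; the extra terms and all coefficients below are hypothetical placeholders to be fitted to data.

    ```python
    # Shannon form of Fitts' law with hypothetical direction and shape terms.
    import math

    def predicted_pointing_time(distance, width, direction_rad, aspect_ratio,
                                a=0.2, b=0.15, c=0.05, d=0.03):
        """a, b: classic Fitts coefficients; c, d: hypothetical extension terms."""
        index_of_difficulty = math.log2(distance / width + 1)    # Shannon form
        return (a + b * index_of_difficulty
                + c * abs(math.sin(direction_rad))   # assumed direction effect
                + d * (aspect_ratio - 1.0))          # assumed shape effect
    ```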

  12. Gaze direction effects on perceptions of upper limb kinesthetic coordinate system axes.

    Science.gov (United States)

    Darling, W G; Hondzinski, J M; Harper, J G

    2000-12-01

    The effects of varying gaze direction on perceptions of the upper limb kinesthetic coordinate system axes and of the median plane location were studied in nine subjects with no history of neuromuscular disorders. In two experiments, six subjects aligned the unseen forearm to the trunk-fixed anterior-posterior (a/p) axis and earth-fixed vertical while gazing at different visual targets using either head or eye motion to vary gaze direction in different conditions. Effects of support of the upper limb on perceptual errors were also tested in different conditions. Absolute constant errors and variable errors associated with forearm alignment to the trunk-fixed a/p axis and earth-fixed vertical were similar for different gaze directions whether the head or eyes were moved to control gaze direction. Such errors were decreased by support of the upper limb when aligning to the vertical but not when aligning to the a/p axis. Regression analysis showed that single trial errors in individual subjects were poorly correlated with gaze direction, but showed a dependence on shoulder angles for alignment to both axes. Thus, changes in position of the head and eyes do not influence perceptions of upper limb kinesthetic coordinate system axes. However, dependence of the errors on arm configuration suggests that such perceptions are generated from sensations of shoulder and elbow joint angle information. In a third experiment, perceptions of median plane location were tested by instructing four subjects to place the unseen right index fingertip directly in front of the sternum either by motion of the straight arm at the shoulder or by elbow flexion/extension with shoulder angle varied. Gaze angles were varied to the right and left by 0.5 radians to determine effects of gaze direction on such perceptions. These tasks were also carried out with subjects blind-folded and head orientation varied to test for effects of head orientation on perceptions of median plane location. Constant

  13. Real-time gaze estimation via pupil center tracking

    Directory of Open Access Journals (Sweden)

    Cazzato Dario

    2018-02-01

    Automatic gaze estimation not based on commercial and expensive eye tracking hardware can enable several applications in the fields of human-computer interaction (HCI) and human behavior analysis. It is therefore not surprising that several related techniques and methods have been investigated in recent years. However, very few camera-based systems proposed in the literature are both real-time and robust. In this work, we propose a real-time gaze estimation system that does not need person-dependent calibration, can deal with illumination changes and head pose variations, and can work with a wide range of distances from the camera. Our solution is based on a 3-D appearance-based method that processes images from a built-in laptop camera. Real-time performance is obtained by combining head pose information with geometrical eye features to train a machine learning algorithm. Our method has been validated on a data set of images of users in natural environments and shows promising results. The possibility of a real-time implementation, combined with the good quality of gaze tracking, makes this system suitable for various HCI applications.
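
    Schematically, the learning step described here maps head-pose angles plus geometric eye features to on-screen gaze coordinates via a trained regressor. The sketch below uses a random forest and invented feature names purely as placeholders; the paper's actual learner and features are not specified in the abstract.

    ```python
    # Placeholder training loop: features -> screen gaze point regression.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical calibration data: one row per frame,
    # [yaw, pitch, roll, pupil_dx, pupil_dy, eye_opening] -> [gaze_x, gaze_y]
    X_train = np.load("calibration_features.npy")   # assumed file names
    y_train = np.load("calibration_gaze.npy")

    model = RandomForestRegressor(n_estimators=100).fit(X_train, y_train)

    def estimate_gaze(frame_features):
        """frame_features: 1-D array of the six features above."""
        return model.predict(np.asarray(frame_features).reshape(1, -1))[0]
    ```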

  14. Research of Digital Interface Layout Design based on Eye-tracking

    Directory of Open Access Journals (Sweden)

    Shao Jiang

    2015-01-01

    The aim of this paper is to improve the low service efficiency and unsmooth human-computer interaction caused by irrational layouts of digital interfaces for complex systems. Three common layout structures for digital interfaces are presented, and five layout types appropriate for multilevel digital interfaces are summarized. Based on eye tracking technology, the advantages and disadvantages of the different layout types were assessed through subjects' search efficiency. From these data and results, the study constructed a matching model appropriate for multilevel digital interface layouts and verified that the task element is a significant and important aspect of layout design. A scientific experimental model for research on digital interfaces for complex systems is provided. Both the data and the conclusions of the eye movement experiment provide a reference for the layout design of interfaces for complex systems with different task characteristics.

  15. Eye Contact Facilitates Awareness of Faces during Interocular Suppression

    Science.gov (United States)

    Stein, Timo; Senju, Atsushi; Peelen, Marius V.; Sterzer, Philipp

    2011-01-01

    Eye contact captures attention and receives prioritized visual processing. Here we asked whether eye contact might be processed outside conscious awareness. Faces with direct and averted gaze were rendered invisible using interocular suppression. In two experiments we found that faces with direct gaze overcame such suppression more rapidly than…

  16. Design of a Binocular Pupil and Gaze Point Detection System Utilizing High Definition Images

    Directory of Open Access Journals (Sweden)

    Yilmaz Durna

    2017-05-01

    This study proposes a novel binocular pupil and gaze detection system utilizing a remote full high definition (full HD) camera and employing LabVIEW. LabVIEW is inherently parallel and has fewer time-consuming algorithms. Many eye tracker applications are monocular and use low resolution cameras due to real-time image processing difficulties. We utilized the computer's direct memory access channel for rapid data transmission and processed full HD images with LabVIEW. Full HD images make it easier to determine the center coordinates and sizes of the pupil and corneal reflection. We modified the camera so that the camera sensor passed only infrared (IR) images. Glints were taken as reference points for region of interest (ROI) selection of the eye region in the face image. A morphologic filter was applied for erosion of noise, and a weighted average technique was used for center detection. To test system accuracy, we produced a visual stimulus set-up and analyzed each eye's movement in 11 participants. A nonlinear mapping function was utilized for gaze estimation. Pupil size, pupil position, glint position and gaze point coordinates were obtained with free natural head movements in our system. The system works at 2046 × 1086 resolution at 40 frames per second, equivalent in pixel throughput to roughly 280 frames per second at 640 × 480 resolution. Experimental results show that the average gaze detection error for the 11 participants was 0.76° for the left eye, 0.89° for the right eye and 0.83° for the mean of the two eyes.
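
    The "nonlinear mapping function" step is classically implemented as a low-order polynomial from pupil-minus-glint vectors to screen coordinates, fitted on calibration targets by least squares. The second-order polynomial below is the textbook choice, shown as an assumption; the paper's exact function is not given in the abstract.

    ```python
    # Second-order polynomial gaze mapping fitted by least squares (numpy only).
    import numpy as np

    def design_matrix(vx, vy):
        """Polynomial terms of the pupil-glint vector components (vx, vy)."""
        return np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])

    def fit_mapping(pg_vectors, screen_points):
        """pg_vectors: (N, 2) pupil-glint vectors at calibration;
        screen_points: (N, 2) known target coordinates. Returns (6, 2) coeffs."""
        A = design_matrix(pg_vectors[:, 0], pg_vectors[:, 1])
        coeffs, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
        return coeffs

    def gaze_point(coeffs, vx, vy):
        """Map one pupil-glint vector to an estimated on-screen gaze point."""
        return (design_matrix(np.atleast_1d(vx), np.atleast_1d(vy)) @ coeffs)[0]
    ```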

  17. Look Together: Analyzing Gaze Coordination with Epistemic Network Analysis

    Directory of Open Access Journals (Sweden)

    Sean eAndrist

    2015-07-01

    When conversing and collaborating in everyday situations, people naturally and interactively align their behaviors with each other across various communication channels, including speech, gesture, posture, and gaze. Having access to a partner's referential gaze behavior has been shown to be particularly important in achieving collaborative outcomes, but the process by which people's gaze behaviors unfold over the course of an interaction and become tightly coordinated is not well understood. In this paper, we present work to develop a deeper and more nuanced understanding of coordinated referential gaze in collaborating dyads. We recruited 13 dyads to participate in a collaborative sandwich-making task and used dual mobile eye tracking to synchronously record each participant's gaze behavior. We used a relatively new analysis technique, epistemic network analysis, to jointly model the gaze behaviors of both conversational participants. In this analysis, network nodes represent gaze targets for each participant, and edge strengths convey the likelihood of simultaneous gaze to the connected target nodes during a given time-slice. We divided collaborative task sequences into discrete phases to examine how the networks of shared gaze evolved over longer time windows. We conducted three separate analyses of the data to reveal (1) properties and patterns of how gaze coordination unfolds throughout an interaction sequence, (2) optimal time lags of gaze alignment within a dyad at different phases of the interaction, and (3) differences in gaze coordination patterns for interaction sequences that lead to breakdowns and repairs. In addition to contributing to the growing body of knowledge on the coordination of gaze behaviors in joint activities, this work has implications for the design of future technologies that engage in situated interactions with human users.
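
    The edge-strength idea translates directly into code: given synchronized per-frame gaze-target labels for the two partners, each edge weight is the proportion of time slices in which that pair of targets is gazed at simultaneously. The one-frame time slice below is a simplification of the paper's analysis.

    ```python
    # Co-gaze edge weights from two synchronized gaze-target streams.
    from collections import Counter

    def co_gaze_weights(gaze_a, gaze_b):
        """gaze_a, gaze_b: equal-length sequences of gaze-target labels,
        one per time slice. Returns {(target_a, target_b): proportion}."""
        assert len(gaze_a) == len(gaze_b)
        pair_counts = Counter(zip(gaze_a, gaze_b))
        total = len(gaze_a)
        return {pair: n / total for pair, n in pair_counts.items()}

    # e.g. co_gaze_weights(["bread", "partner"], ["bread", "bread"])
    #      -> {("bread", "bread"): 0.5, ("partner", "bread"): 0.5}
    ```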

  18. The importance of the eyes: communication skills in infants of blind parents.

    Science.gov (United States)

    Senju, Atsushi; Tucker, Leslie; Pasco, Greg; Hudry, Kristelle; Elsabbagh, Mayada; Charman, Tony; Johnson, Mark H

    2013-06-07

    The effects of selectively different experience of eye contact and gaze behaviour on the early development of five sighted infants of blind parents were investigated. Infants were assessed longitudinally at 6-10, 12-15 and 24-47 months. Face scanning and gaze following were assessed using eye tracking. In addition, established measures of autistic-like behaviours and standardized tests of cognitive, motor and linguistic development, as well as observations of naturalistic parent-child interaction were collected. These data were compared with those obtained from a larger group of sighted infants of sighted parents. Infants with blind parents did not show an overall decrease in eye contact or gaze following when they observed sighted adults on video or in live interactions, nor did they show any autistic-like behaviours. However, they directed their own eye gaze somewhat less frequently towards their blind mothers and also showed improved performance in visual memory and attention at younger ages. Being reared with significantly reduced experience of eye contact and gaze behaviour does not preclude sighted infants from developing typical gaze processing and other social-communication skills. Indeed, the need to switch between different types of communication strategy may actually enhance other skills during development.

  19. Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements.

    Directory of Open Access Journals (Sweden)

    Helene Kreysa

    A speaker's gaze behaviour can provide perceivers with a multitude of cues which are relevant for communication, thus constituting an important non-verbal interaction channel. The present study investigated whether direct eye gaze of a speaker affects the likelihood of listeners believing truth-ambiguous statements. Participants were presented with videos in which a speaker produced such statements with either direct or averted gaze. The statements were selected through a rating study to ensure that participants were unlikely to know a-priori whether they were true or not (e.g., "sniffer dogs cannot smell the difference between identical twins"). Participants indicated in a forced-choice task whether or not they believed each statement. We found that participants were more likely to believe statements by a speaker looking at them directly, compared to a speaker with averted gaze. Moreover, when participants disagreed with a statement, they were slower to do so when the statement was uttered with direct (compared to averted) gaze, suggesting that the process of rejecting a statement as untrue may be inhibited when that statement is accompanied by direct gaze.

  20. The effects of social pressure and emotional expression on the cone of gaze in patients with social anxiety disorder.

    Science.gov (United States)

    Harbort, Johannes; Spiegel, Julia; Witthöft, Michael; Hecht, Heiko

    2017-06-01

    Patients with social anxiety disorder suffer from pronounced fears in social situations. As gaze perception is crucial in these situations, we examined which factors influence the range of gaze directions where mutual gaze is experienced (the cone of gaze). The social stimulus was modified by changing the number of people (heads) present and the emotional expression of their faces. Participants completed a psychophysical task, in which they had to adjust the eyes of a virtual head to gaze at the edge of the range where mutual eye-contact was experienced. The number of heads affected the width of the gaze cone: the more heads, the wider the gaze cone. The emotional expression of the virtual head had no consistent effect on the width of the gaze cone, it did however affect the emotional state of the participants. Angry expressions produced the highest arousal values. Highest valence emerged from happy faces, lowest valence from angry faces. These results suggest that the widening of the gaze cone in social anxiety disorder is not primarily mediated by their altered emotional reactivity. Implications for gaze assessment and gaze training in therapeutic contexts are discussed. Due to interindividual variability, enlarged gaze cones are not necessarily indicative of social anxiety disorder, they merely constitute a correlate at the group level. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Just one look: Direct gaze briefly disrupts visual working memory.

    Science.gov (United States)

    Wang, J Jessica; Apperly, Ian A

    2017-04-01

    Direct gaze is a salient social cue that affords rapid detection. A body of research suggests that direct gaze enhances performance on memory tasks (e.g., Hood, Macrae, Cole-Davies, & Dias, Developmental Science, 1, 67-71, 2003). Nonetheless, other studies highlight the disruptive effect direct gaze has on concurrent cognitive processes (e.g., Conty, Gimmig, Belletier, George, & Huguet, Cognition, 115(1), 133-139, 2010). This discrepancy raises questions about the effects direct gaze may have on concurrent memory tasks. We addressed this topic by employing a change detection paradigm, where participants retained information about the color of small sets of agents. Experiment 1 revealed that, despite the irrelevance of the agents' eye gaze to the memory task at hand, participants were worse at detecting changes when the agents looked directly at them compared to when the agents looked away. Experiment 2 showed that the disruptive effect was relatively short-lived. Prolonged presentation of direct gaze led to recovery from the initial disruption, rather than a sustained disruption on change detection performance. The present study provides the first evidence that direct gaze impairs visual working memory with a rapidly-developing yet short-lived effect even when there is no need to attend to agents' gaze.

  2. The head tracks and gaze predicts: how the world's best batters hit a ball.

    Directory of Open Access Journals (Sweden)

    David L Mann

    Hitters in fast ball-sports do not align their gaze with the ball throughout ball-flight; rather, they use predictive eye movement strategies that contribute towards their level of interceptive skill. Existing studies claim that (i) baseball and cricket batters cannot track the ball because it moves too quickly to be tracked by the eyes, and that consequently (ii) batters do not - and possibly cannot - watch the ball at the moment they hit it. However, to date no studies have examined the gaze of truly elite batters. We examined the eye and head movements of two of the world's best cricket batters and found both claims do not apply to these batters. Remarkably, the batters coupled the rotation of their head to the movement of the ball, ensuring the ball remained in a consistent direction relative to their head. To this end, the ball could be followed if the batters simply moved their head and kept their eyes still. Instead of doing so, we show the elite batters used distinctive eye movement strategies, usually relying on two predictive saccades to anticipate (i) the location of ball-bounce, and (ii) the location of bat-ball contact, ensuring they could direct their gaze towards the ball as they hit it. These specific head and eye movement strategies play important functional roles in contributing towards interceptive expertise.

  3. Holistic integration of gaze cues in visual face and body perception: Evidence from the composite design.

    Science.gov (United States)

    Vrancken, Leia; Germeys, Filip; Verfaillie, Karl

    2017-01-01

    A considerable amount of research on identity recognition and emotion identification with the composite design points to the holistic processing of these aspects in faces and bodies. In this paradigm, the interference from a nonattended face half on the perception of the attended half is taken as evidence for holistic processing (i.e., a composite effect). Far less research, however, has been dedicated to the concept of gaze. Nonetheless, gaze perception is a substantial component of face and body perception, and holds critical information for everyday communicative interactions. Furthermore, the ability of human observers to detect direct versus averted eye gaze is effortless, perhaps similar to identity perception and emotion recognition. However, the hypothesis of holistic perception of eye gaze has never been tested directly. Research on gaze perception with the composite design could facilitate further systematic comparison with other aspects of face and body perception that have been investigated using the composite design (i.e., identity and emotion). In the present research, a composite design was administered to assess holistic processing of gaze cues in faces (Experiment 1) and bodies (Experiment 2). Results confirmed that eye and head orientation (Experiment 1A) and head and body orientation (Experiment 2A) are integrated in a holistic manner. However, the composite effect was not completely disrupted by inversion (Experiments 1B and 2B), a finding that will be discussed together with implications for future research.

  4. Identification of Emotional Facial Expressions: Effects of Expression, Intensity, and Sex on Eye Gaze.

    Directory of Open Access Journals (Sweden)

    Laura Jean Wells

    The identification of emotional expressions is vital for social interaction, and can be affected by various factors, including the expressed emotion, the intensity of the expression, the sex of the face, and the gender of the observer. This study investigates how these factors affect the speed and accuracy of expression recognition, as well as dwell time on the two most significant areas of the face: the eyes and the mouth. Participants were asked to identify expressions from female and male faces displaying six expressions (anger, disgust, fear, happiness, sadness, and surprise), each with three levels of intensity (low, moderate, and normal). Overall, responses were fastest and most accurate for happy expressions, but slowest and least accurate for fearful expressions. More intense expressions were also classified most accurately. Reaction time showed a different pattern, with slowest response times recorded for expressions of moderate intensity. Overall, responses were slowest, but also most accurate, for female faces. Relative to male observers, women showed greater accuracy and speed when recognizing female expressions. Dwell time analyses revealed that attention to the eyes was about three times greater than on the mouth, with fearful eyes in particular attracting longer dwell times. The mouth region was attended to the most for fearful, angry, and disgusted expressions and least for surprise. These results extend upon previous findings to show important effects of expression, emotion intensity, and sex on expression recognition and gaze behaviour, and may have implications for understanding the ways in which emotion recognition abilities break down.

  5. Identification of Emotional Facial Expressions: Effects of Expression, Intensity, and Sex on Eye Gaze.

    Science.gov (United States)

    Wells, Laura Jean; Gillespie, Steven Mark; Rotshtein, Pia

    2016-01-01

    The identification of emotional expressions is vital for social interaction, and can be affected by various factors, including the expressed emotion, the intensity of the expression, the sex of the face, and the gender of the observer. This study investigates how these factors affect the speed and accuracy of expression recognition, as well as dwell time on the two most significant areas of the face: the eyes and the mouth. Participants were asked to identify expressions from female and male faces displaying six expressions (anger, disgust, fear, happiness, sadness, and surprise), each with three levels of intensity (low, moderate, and normal). Overall, responses were fastest and most accurate for happy expressions, but slowest and least accurate for fearful expressions. More intense expressions were also classified most accurately. Reaction time showed a different pattern, with slowest response times recorded for expressions of moderate intensity. Overall, responses were slowest, but also most accurate, for female faces. Relative to male observers, women showed greater accuracy and speed when recognizing female expressions. Dwell time analyses revealed that attention to the eyes was about three times greater than on the mouth, with fearful eyes in particular attracting longer dwell times. The mouth region was attended to the most for fearful, angry, and disgusted expressions and least for surprise. These results extend upon previous findings to show important effects of expression, emotion intensity, and sex on expression recognition and gaze behaviour, and may have implications for understanding the ways in which emotion recognition abilities break down.

  6. Gaze Bias in Preference Judgments by Younger and Older Adults

    Directory of Open Access Journals (Sweden)

    Toshiki Saito

    2017-08-01

    Individuals' gaze behavior reflects the choice they will ultimately make. For example, people confronting a choice among multiple stimuli tend to look longer at stimuli that are subsequently chosen than at other stimuli. This tendency, called the gaze bias effect, is a key aspect of visual decision-making. Nevertheless, no study has examined the generality of the gaze bias effect in older adults. Here, we used a two-alternative forced-choice task (2AFC) to compare the gaze behavior reflective of different stages of decision processes demonstrated by younger and older adults. Participants who had viewed two faces were instructed to choose the one that they liked/disliked or the one that they judged to be more/less similar to their own face. Their eye movements were tracked while they chose. The results show that the gaze bias effect occurred during the remaining time in both age groups irrespective of the decision type. However, no gaze bias effect was observed for the preference judgment during the first dwell time. Our study demonstrated that the gaze bias during the remaining time occurred regardless of decision-making task and age. Further study using diverse participants, such as clinical patients or infants, may help to generalize the gaze bias effect and to elucidate the mechanisms underlying it.
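
    The two analysis windows distinguished here can be made concrete with a small helper: gaze bias is the share of looking time on the eventually chosen face, computed separately for the first dwell and for the remaining viewing time. The (item, duration) input format is an assumption about the eye-tracking export.

    ```python
    # Gaze bias split into first dwell vs. remaining viewing time (per trial).

    def gaze_bias(dwells, chosen):
        """dwells: ordered list of (item_label, duration_ms) for one trial;
        chosen: label of the chosen face. Returns (first_bias, remaining_bias)."""
        first_item, _ = dwells[0]
        first_bias = 1.0 if first_item == chosen else 0.0   # average over trials
        remaining = dwells[1:]
        total = sum(d for _, d in remaining)
        remaining_bias = (sum(d for item, d in remaining if item == chosen) / total
                          if total else 0.0)
        return first_bias, remaining_bias
    ```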

  7. Social communication with virtual agents: The effects of body and gaze direction on attention and emotional responding in human observers.

    Science.gov (United States)

    Marschner, Linda; Pannasch, Sebastian; Schulz, Johannes; Graupner, Sven-Thomas

    2015-08-01

    In social communication, the gaze direction of other persons provides important information for perceiving and interpreting their emotional response. Previous research investigated the influence of gaze by manipulating mutual eye contact, so gaze and body direction were changed as a whole, resulting in only congruent gaze and body directions (averted or directed) of another person. Here, we aimed to disentangle these effects by using short animated sequences of virtual agents posing with either direct or averted body or gaze. Attention allocation by means of eye movements, facial muscle response, and emotional experience to agents of different gender and facial expressions were investigated. Eye movement data revealed longer fixation durations, i.e., a stronger allocation of attention, when gaze and body direction were not congruent with each other or when both were directed towards the observer. This suggests that direct interaction as well as incongruous signals increase the demands on attentional resources in the observer. For the facial muscle response, only the reaction of the zygomaticus major muscle revealed an effect of body direction, expressed by stronger activity in response to happy expressions for direct compared to averted gaze when the virtual character's body was directed towards the observer. Finally, body direction also influenced the emotional experience ratings towards happy expressions. While earlier findings suggested that mutual eye contact is the main source of increased emotional responding and attentional allocation, the present results indicate that the direction of the virtual agent's body and head also plays a minor but significant role. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction.

    Science.gov (United States)

    Xu, Tian Linger; Zhang, Hui; Yu, Chen

    2016-05-01

    We focus on a fundamental looking behavior in human-robot interactions - gazing at each other's face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user's face as a response to the human's gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot's gaze toward the human partner's face in real time and then analyzed the human's gaze behavior as a response to the robot's gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot's face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained.
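
    A gaze-contingent loop of the kind described can be sketched in a few lines: the robot reciprocates with a face look once the human's gaze has dwelt on the robot's face for a short threshold. All names here (read_gaze_target, robot.look_at) are hypothetical placeholders, not the authors' system API.

    ```python
    # Schematic gaze-contingent control loop; runs until interrupted.
    import time

    DWELL_S = 0.3        # assumed dwell before the robot reciprocates

    def run_loop(read_gaze_target, robot, period_s=0.033):
        """read_gaze_target() -> "face", "object", or None (hypothetical)."""
        onset = None
        while True:
            if read_gaze_target() == "face":
                onset = onset or time.monotonic()
                if time.monotonic() - onset >= DWELL_S:
                    robot.look_at("human_face")     # create mutual gaze
            else:
                onset = None
                robot.look_at("shared_object")      # default joint-attention target
            time.sleep(period_s)
    ```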

  9. Gender and facial dominance in gaze cuing: emotional context matters in the eyes that we follow.

    Directory of Open Access Journals (Sweden)

    Garian Ohlsen

    Gaze following is a socio-cognitive process that provides adaptive information about potential threats and opportunities in the individual's environment. The aim of the present study was to investigate the potential interaction between emotional context and facial dominance in gaze following. We used the gaze cue task to induce attention to or away from the location of a target stimulus. In the experiment, the gaze cue belonged either to a (dominant-looking) male face or to a (non-dominant-looking) female face. Critically, prior to the task, individuals were primed with pictures of threat or no threat to induce either a dangerous or a safe environment. Findings revealed that the primed emotional context critically influenced the gaze cuing effect. While a gaze cue from the dominant male face influenced performance in both the threat and no-threat conditions, the gaze cue from the non-dominant female face only influenced performance in the no-threat condition. This research suggests an implicit, context-dependent follower bias, which carries implications for research on visual attention, social cognition, and leadership.

  10. Neural Temporal Dynamics of Social Exclusion Elicited by Averted Gaze: An Event-Related Potentials Study

    Directory of Open Access Journals (Sweden)

    Yue Leng

    2018-02-01

    Eye gaze plays a fundamental role in social communication. Averted eye gaze during social interaction, as the most common form of silent treatment, conveys a signal of social exclusion. In the present study, we examined the time course of the brain response to social exclusion by using a modified version of the Eye-gaze paradigm. The event-related potential (ERP) data and the subjective rating data showed that the frontocentral P200 was positively correlated with the negative mood induced by excluded events, whereas the centroparietal late positive potential (LPP) was positively correlated with the perceived ostracism intensity. Both the P200 and LPP were more positive-going for excluded events than for included events. These findings suggest that brain responses sensitive to social exclusion can be divided into an early affective processing stage, linked to an early pre-cognitive warning system, and a late higher-order processing stage, which demands attentional resources for elaborate stimulus evaluation and categorization.

  11. Implicit prosody mining based on the human eye image capture technology

    Science.gov (United States)

    Gao, Pei-pei; Liu, Feng

    2013-08-01

    The technology of eye tracking has become one of the main methods for analyzing recognition issues in human-computer interaction, and human eye image capture is the key problem in eye tracking. Building on this, a new human-computer interaction method is introduced to enrich the forms of speech synthesis. We propose a method of implicit prosody mining based on human eye image capture technology: parameters are extracted from images of the eyes during reading to control and drive prosody generation in speech synthesis, and to establish a prosodic model with high simulation accuracy. The duration model is a key issue for prosody generation. For this model, the paper puts forward a new idea: obtaining the gaze duration of the eyes during reading from captured eye images, and synchronously controlling this duration and the pronunciation duration in speech synthesis. Eye movement during reading is a comprehensive, multi-factor interactive process involving fixations, saccades, and regressions. Therefore, how to extract the appropriate information from eye images needs to be considered, and the regularities of eye gaze need to be obtained as references for modeling. Based on an analysis of three current eye movement control models and the characteristics of implicit prosody in reading, the relative independence between the text speech-processing system and the eye movement control system is discussed. It is shown that, under the same level of text familiarity, the gaze duration during reading and the internal voice's pronunciation duration are synchronous. An eye gaze duration model based on the prosodic structure of the Chinese language is presented, replacing previous methods of machine learning and probability forecasting, to obtain readers' real internal reading rhythm and to synthesize speech with personalized rhythm. This research will enrich human-computer interaction and has practical significance and application prospects in terms of

  12. Eyes Wide Open

    Directory of Open Access Journals (Sweden)

    Zoi Manesi

    2016-04-01

    Full Text Available Research from evolutionary psychology suggests that the mere presence of eye images can promote prosocial behavior. However, the "eye images effect" is a source of considerable debate, and findings across studies have yielded somewhat inconsistent support. We suggest that one critical factor may be whether the eyes really need to be watching to effectively enhance prosocial behavior. In three experiments, we investigated the impact of eye images on prosocial behavior, assessed in a laboratory setting. Participants were randomly assigned to view an image of watching eyes (eyes with direct gaze), an image of nonwatching eyes (i.e., eyes closed for Study 1 and averted eyes for Studies 2 and 3), or an image of flowers (control condition). Upon exposure to the stimuli, participants decided whether or not to help another participant by completing a dull cognitive task. The three independent studies produced somewhat mixed results. However, a combined analysis of all three studies, with a total of 612 participants, showed that the watching component of the eyes is important for decision-making in this context. Images of watching eyes led to a significantly greater inclination to offer help than images of nonwatching eyes (i.e., eyes closed and averted eyes) or images of flowers. These findings suggest that eyes gazing at an individual, rather than any proxy of social presence (e.g., just the eyes), serve as a reminder of reputation. Taken together, we conclude that it is "eyes that pay attention" that can lift the veil of anonymity and potentially facilitate prosocial behavior.

  13. Evaluation of head-free eye tracking as an input device for air traffic control.

    Science.gov (United States)

    Alonso, Roland; Causse, Mickaël; Vachon, François; Parise, Robert; Dehais, Frédéric; Terrier, Patrice

    2013-01-01

    The purpose of this study was to investigate the possibility of integrating a free-head-motion eye-tracking system as an input device in air traffic control (ATC) activity. Sixteen participants used an eye tracker to select targets displayed on a screen as quickly and accurately as possible. We assessed the impact of the presence of visual feedback about gaze position and of the method of target selection on selection performance under different difficulty levels, induced by variations in target size and target-to-target separation. We consider the combined use of gaze dwell-time selection and continuous eye-gaze feedback to be the best condition, as it fits naturally with gaze displacement over the ATC display and frees the hands of the controller, despite a small cost in selection speed. In addition, target size had a greater impact on accuracy and selection time than target distance. These findings provide guidelines for possible further implementation of eye tracking in everyday ATC activity. We investigated the possibility of integrating a free-head-motion eye-tracking system as an input device in air traffic control (ATC). We found that the combined use of gaze dwell-time selection and continuous eye-gaze feedback allowed the best performance and that target size had a greater impact on performance than target distance.
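
Dwell-time selection of the kind evaluated here is simple to state in code. The sketch below assumes a hypothetical gaze-sample stream and hit-test functions for the ATC targets; the 0.8 s dwell threshold and the `draw_gaze_cursor` renderer are assumed placeholders, not the study's actual values or implementation.

```python
DWELL_TIME = 0.8   # seconds of continuous fixation required to select (assumed value)

def draw_gaze_cursor(x, y):
    """Placeholder for rendering the continuous gaze-position feedback."""
    pass

def dwell_select(gaze_stream, targets, dwell_time=DWELL_TIME):
    """Select the first target fixated for `dwell_time` seconds.

    `gaze_stream` yields (timestamp, x, y) samples; `targets` maps a name to
    a hit-test function. Both are stand-ins for a real tracker and display.
    """
    current, since = None, None
    for t, x, y in gaze_stream:
        hit = next((name for name, inside in targets.items() if inside(x, y)), None)
        draw_gaze_cursor(x, y)            # continuous eye-gaze feedback
        if hit != current:                # gaze moved to a new target (or off-target)
            current, since = hit, t
        elif hit is not None and t - since >= dwell_time:
            return hit                    # dwell threshold reached: select
```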

  14. The Relationship between Children's Gaze Reporting and Theory of Mind

    Science.gov (United States)

    D'Entremont, Barbara; Seamans, Elizabeth; Boudreau, Elyse

    2012-01-01

    Seventy-nine 3- and 4-year-old children were tested on gaze-reporting ability and Wellman and Liu's (2004) continuous measure of theory of mind (ToM). Children were better able to report where someone was looking when eye and head direction were provided as a cue compared with when only eye direction cues were provided. With the exception of…

  15. Text Entry by Gazing and Smiling

    Directory of Open Access Journals (Sweden)

    Outi Tuisku

    2013-01-01

    Full Text Available Face Interface is a wearable prototype that combines voluntary gaze direction and facial activations for pointing at and selecting objects on a computer screen, respectively. The aim was to investigate the functionality of the prototype for entering text. First, three on-screen keyboard layout designs were developed and tested (n=10) to find a layout that would be more suitable for text entry with the prototype than the traditional QWERTY layout. The task was to enter one word ten times with each of the layouts by pointing at letters with gaze and selecting them by smiling. Subjective ratings showed that a layout with large keys on the edge and small keys near the center of the keyboard was rated as the most enjoyable, clearest, and most functional. Second, using this layout, the aim of the second experiment (n=12) was to compare entering text with Face Interface to entering text with a mouse. The results showed that the text entry rate was 20 characters per minute (cpm) for Face Interface and 27 cpm for the mouse. For Face Interface, the keystrokes per character (KSPC) value was 1.1 and the minimum string distance (MSD) error rate was 0.12. These values compare especially well with other similar techniques.
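
KSPC and the MSD error rate are standard text-entry metrics: keystrokes divided by characters produced, and Levenshtein distance between presented and transcribed text normalized by the longer string. A minimal sketch of how they can be computed; the normalization by the longer string follows one common convention and is an assumption here, not necessarily the paper's exact formula.

```python
def msd(a: str, b: str) -> int:
    """Minimum string distance (Levenshtein) between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def kspc(keystrokes: int, transcribed: str) -> float:
    """Keystrokes per character: input effort per character produced."""
    return keystrokes / len(transcribed)

def msd_error_rate(presented: str, transcribed: str) -> float:
    """MSD error rate, normalized by the longer string."""
    return msd(presented, transcribed) / max(len(presented), len(transcribed))

# Example: 11 keystrokes to produce a 10-character phrase with one missing letter.
print(kspc(11, "hello word"))                       # 1.1
print(msd_error_rate("hello world", "hello word"))  # ~0.09
```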

  16. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction

    Science.gov (United States)

    XU, TIAN (LINGER); ZHANG, HUI; YU, CHEN

    2016-01-01

    We focus on a fundamental looking behavior in human-robot interactions – gazing at each other’s face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user’s face as a response to the human’s gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot’s gaze toward the human partner’s face in real time and then analyzed the human’s gaze behavior as a response to the robot’s gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot’s face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained. PMID:28966875

  17. Gaze and eye contact: typical development and comparison in Down syndrome

    Directory of Open Access Journals (Sweden)

    Aline Elise Gerbelli Belini

    2008-03-01

    Full Text Available PURPOSE: To investigate the development of gaze and eye contact in a baby girl with Down syndrome, comparing the frequency of her gaze toward different targets with the visual behavior of typically developing babies. METHODS: One female baby with Down syndrome, with no visual disorders diagnosed up to the end of data collection, and 17 typically developing babies were filmed monthly at home, in free interaction with their mothers, from the first to the fifth month of life. The frequency of gaze directed at 11 targets, among them "looking at the mother's eyes", was counted. RESULTS: Over the period, the typically developing babies showed statistically significant changes in the frequencies of "eyes closed" and of looking at "objects", "the researcher", "the environment", "their own body", "the mother's face", and "the mother's eyes". The sample was statistically stable for "looking at another person", "looking at the mother's body", and "opening and closing the eyes". The development of gaze and eye contact in the baby with Down syndrome was statistically very similar to the averages of the other babies (chi-square test) and fell within their individual variability (analysis by significant clusters). CONCLUSIONS: Early interaction between the baby and her mother seems to influence the dyad's nonverbal communication more than genetically influenced limitations do. This may be reflected in the similarities found between the development of visual behavior and eye contact in the baby with Down syndrome and in the children without developmental disorders.

  18. Integrating eye tracking in virtual reality for stroke rehabilitation

    OpenAIRE

    Alves, Júlio Miguel Gomes Rebelo

    2014-01-01

    This thesis reports on research done for the integration of eye-tracking technology into virtual reality environments, with the goal of using it in the rehabilitation of patients who have suffered a stroke. Over the last few years, eye tracking has been a focus of medical research, used as an assistive tool to help people with disabilities interact with new technologies and as an assessment tool to track eye gaze during computer interactions. However, tracking more complex gaze behavio...

  19. Constraining eye movement in individuals with Parkinson's disease during walking turns.

    Science.gov (United States)

    Ambati, V N Pradeep; Saucedo, Fabricio; Murray, Nicholas G; Powell, Douglas W; Reed-Jones, Rebecca J

    2016-10-01

    Walking and turning is a movement that places individuals with Parkinson's disease (PD) at increased risk for fall-related injury. However, turning is an essential movement in activities of daily living, making up to 45% of the total steps taken in a given day. Hypotheses regarding how turning is controlled suggest an essential role for anticipatory eye movements in providing feedforward information for body coordination. However, little research has investigated the control of turning in individuals with PD with specific consideration of eye movements. The purpose of this study was to examine eye movement behavior and body segment coordination in individuals with PD during walking turns. Three experimental groups, a group of individuals with PD, a group of healthy young adults (YAC), and a group of healthy older adults (OAC), performed walking and turning tasks under two visual conditions: free gaze and fixed gaze. Whole-body motion capture and eye tracking characterized body segment coordination and eye movement behavior during walking trials. Statistical analysis revealed significant main effects of group (PD, YAC, and OAC) and visual condition (free and fixed gaze) on the timing of segment rotation and horizontal eye movement. Within-group comparisons revealed that the timing of eye and head movement was significantly different between the free and fixed gaze conditions for YAC (p < 0.05). In addition, while intersegment timings (reflecting segment coordination) were significantly different for YAC and OAC during free gaze (p training programs for those with PD, possibly promoting better coordination during turning and potentially reducing the risk of falls.

  20. Classification of binary intentions for individuals with impaired oculomotor function: ‘eyes-closed’ SSVEP-based brain-computer interface (BCI)

    Science.gov (United States)

    Lim, Jeong-Hwan; Hwang, Han-Jeong; Han, Chang-Hee; Jung, Ki-Young; Im, Chang-Hwan

    2013-04-01

    Objective. Some patients suffering from severe neuromuscular diseases have difficulty controlling not only their bodies but also their eyes. Since these patients have difficulty gazing at specific visual stimuli or keeping their eyes open for a long time, they are unable to use typical steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) systems. In this study, we introduce a new paradigm for SSVEP-based BCI, which can be potentially suitable for disabled individuals with impaired oculomotor function. Approach. The proposed electroencephalography (EEG)-based BCI system allows users to express their binary intentions without needing to open their eyes. A pair of glasses with two light-emitting diodes flickering at different frequencies was used to present visual stimuli to participants with their eyes closed, and we classified the recorded EEG patterns in online experiments conducted with five healthy participants and one patient with severe amyotrophic lateral sclerosis (ALS). Main results. Through offline experiments performed with 11 participants, we confirmed that human SSVEP could be modulated by visual selective attention to a specific light stimulus penetrating through the eyelids. Furthermore, the recorded EEG patterns could be classified with accuracy high enough for use in a practical BCI system. After customizing the parameters of the proposed SSVEP-based BCI paradigm based on the offline analysis results, the binary intentions of five healthy participants were classified in real time. The average information transfer rate of our online experiments reached 10.83 bits/min. A preliminary online experiment conducted with an ALS patient showed a classification accuracy of 80%. Significance. The results of our offline and online experiments demonstrated the feasibility of our proposed SSVEP-based BCI paradigm. It is expected that our 'eyes-closed' SSVEP-based BCI system can be potentially used for communication of
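
A common way to classify binary SSVEP intentions is to compare EEG spectral power at the two flicker frequencies. The sketch below is a generic power-at-frequency rule under assumed parameters (epoch length, harmonic count, band width), not the classifier used in this study.

```python
import numpy as np

def classify_ssvep(eeg, fs, f1, f2, harmonics=2):
    """Pick which of two flicker frequencies the user attended.

    `eeg` is a 1-D occipital-channel epoch (band-pass filtered in a real
    pipeline); `fs` is the sampling rate in Hz; f1/f2 are the two LED
    flicker frequencies. A simple illustrative rule, not the paper's method.
    """
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)

    def band_power(f0):
        # sum power at the fundamental and its harmonics (+/- 0.25 Hz)
        return sum(spectrum[(freqs > h * f0 - 0.25) & (freqs < h * f0 + 0.25)].sum()
                   for h in range(1, harmonics + 1))

    return f1 if band_power(f1) > band_power(f2) else f2

# Synthetic check: a 13 Hz oscillation buried in noise is classified as 13 Hz.
fs = 256
t = np.arange(0, 4, 1 / fs)
epoch = np.sin(2 * np.pi * 13 * t) + 0.5 * np.random.randn(t.size)
print(classify_ssvep(epoch, fs, f1=13.0, f2=17.0))  # -> 13.0
```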

  1. Real-time recording and classification of eye movements in an immersive virtual environment.

    Science.gov (United States)

    Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary

    2013-10-10

    Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the Quicktime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements.
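
Fixation and saccade identification of the kind covered by the primer's algorithms is often done with angular velocity thresholds (I-VT). Below is a minimal sketch using textbook threshold values, which are assumptions rather than the primer's settings; a pursuit band is included since the primer also classifies pursuit.

```python
import numpy as np

def classify_ivt(gaze_angles, timestamps, saccade_thresh=100.0, pursuit_thresh=10.0):
    """Label each inter-sample interval as fixation, pursuit, or saccade.

    `gaze_angles` is an (N, 2) array of horizontal/vertical gaze direction in
    degrees and `timestamps` the matching times in seconds. The velocity
    thresholds (deg/s) are common illustrative values.
    """
    deltas = np.diff(gaze_angles, axis=0)
    dt = np.diff(timestamps)
    speed = np.linalg.norm(deltas, axis=1) / dt   # angular speed, deg/s

    labels = np.full(speed.shape, "fixation", dtype=object)
    labels[speed >= pursuit_thresh] = "pursuit"   # smooth pursuit band
    labels[speed >= saccade_thresh] = "saccade"   # fast ballistic movements
    return labels
```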

  2. Coordination of eye and head components of movements evoked by stimulation of the paramedian pontine reticular formation

    Science.gov (United States)

    Barton, Ellen J.; Sparks, David L.

    2013-01-01

    Constant-frequency microstimulation of the paramedian pontine reticular formation (PPRF) in head-restrained monkeys evokes a constant-velocity eye movement. Since the PPRF receives significant projections from structures that control coordinated eye-head movements, we asked whether stimulation of the pontine reticular formation in the head-unrestrained animal generates a combined eye-head movement or only an eye movement. Microstimulation of most sites yielded a constant-velocity gaze shift executed as a coordinated eye-head movement, although eye-only movements were evoked from some sites. The eye and head contributions to the stimulation-evoked movements varied across stimulation sites and were drastically different from the lawful relationship observed for visually guided gaze shifts. These results indicate that the microstimulation activated elements that issued movement commands to the extraocular and, for most sites, neck motoneurons. In addition, the stimulation-evoked changes in gaze were similar in the head-restrained and head-unrestrained conditions despite the assortment of eye and head contributions, suggesting that the vestibulo-ocular reflex (VOR) gain must be near unity during the coordinated eye-head movements evoked by stimulation of the PPRF. These findings contrast with the attenuation of VOR gain associated with visually guided gaze shifts and suggest that the vestibulo-ocular pathway processes volitional and PPRF-stimulation-evoked gaze shifts differently. PMID:18458891

  3. Eye Tracker Accuracy: Quantitative Evaluation of the Invisible Eye Center Location

    OpenAIRE

    Wyder, Stephan; Cattin, Philippe C.

    2017-01-01

    Purpose. We present a new method to evaluate the accuracy of an eye tracker based eye localization system. Measuring the accuracy of an eye tracker's primary intention, the estimated point of gaze, is usually done with volunteers and a set of fixation points used as ground truth. However, verifying the accuracy of the location estimate of a volunteer's eye center in 3D space is not easily possible. This is because the eye center is an intangible point hidden by the iris. Methods. We evaluate ...

  4. Systems and methods of eye tracking calibration

    DEFF Research Database (Denmark)

    2014-01-01

    Methods and systems to facilitate eye tracking control calibration are provided. One or more objects are displayed on a display of a device, where the one or more objects are associated with a function unrelated to a calculation of one or more calibration parameters. The one or more calibration parameters relate to a calibration of a calculation of gaze information of a user of the device, where the gaze information indicates where the user is looking. While the one or more objects are displayed, eye movement information associated with the user is determined, which indicates eye movement of one or more eye features associated with at least one eye of the user. The eye movement information is associated with a first object location of the one or more objects. The one or more calibration parameters are calculated based on the first object location being associated with the eye movement information.
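
The core computation described here - relating recorded eye-feature positions to the known on-screen locations of the displayed objects - can be posed as a least-squares fit. The affine model below is an illustrative stand-in for whatever parameterization the claimed method actually computes.

```python
import numpy as np

def fit_calibration(eye_features, object_locations):
    """Fit an affine map from eye features to screen coordinates.

    `eye_features` is an (N, 2) array of, e.g., pupil-center positions
    recorded while the user looked at the displayed objects;
    `object_locations` is the (N, 2) array of those objects' on-screen
    positions. A plain least-squares fit, used here for illustration.
    """
    X = np.hstack([eye_features, np.ones((len(eye_features), 1))])  # add bias term
    params, *_ = np.linalg.lstsq(X, object_locations, rcond=None)
    return params  # shape (3, 2)

def estimate_gaze(params, eye_feature):
    """Map a new eye-feature sample to screen coordinates."""
    x, y = eye_feature
    return np.array([x, y, 1.0]) @ params
```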

  5. Spotting expertise in the eyes: billiards knowledge as revealed by gaze shifts in a dynamic visual prediction task.

    Science.gov (United States)

    Crespi, Sofia; Robino, Carlo; Silva, Ottavia; de'Sperati, Claudio

    2012-10-31

    In sports, as in other activities and knowledge domains, expertise is a highly valuable asset. We assessed whether expertise in billiards is associated with specific patterns of eye movements in a visual prediction task. Professional players and novices were presented a number of simplified billiard shots on a computer screen, previously filmed in a real set, with the last part of the ball trajectory occluded. They had to predict whether or not the ball would hit the central skittle. Experts performed better than novices, in terms of both accuracy and response time. By analyzing eye movements, we found that during occlusion, experts rarely extrapolated the occluded part of the ball trajectory with their gaze - a behavior that was widespread in novices - even when the unseen path was long and had two bounces interposed. Rather, they looked selectively at specific diagnostic points on the cushions along the ball's visible trajectory, in accordance with a formal metrical system used by professional players to calculate the shot coordinates. Thus, the eye movements of expert observers contained a clear signature of billiard expertise and documented empirically a strategy upgrade in visual problem solving: from dynamic, analog simulation in imagery to more efficient rule-based, conceptual knowledge.

  6. Human sensitivity to eye contact in 2D and 3D videoconferencing

    NARCIS (Netherlands)

    Eijk, van R.L.J.; Kuijsters, A.; Dijkstra, K.I.; IJsselsteijn, W.A.

    2010-01-01

    Gaze awareness and eye contact serve important functions in social interaction. In order to maintain those functions in 2D and 3D videoconferencing systems, human sensitivity to eye contact and gaze direction needs to be taken into account in the design of such systems. Here we experimentally

  7. Investigating the Link Between Radiologists Gaze, Diagnostic Decision, and Image Content

    Energy Technology Data Exchange (ETDEWEB)

    Tourassi, Georgia [ORNL]; Voisin, Sophie [ORNL]; Paquit, Vincent C [ORNL]; Krupinski, Elizabeth [University of Arizona]

    2013-01-01

    Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. The radiologists' individual errors could be adequately predicted by modeling the behavior of their peers. However, personalized tuning appears to be beneficial in many cases to capture individual behavior more accurately. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.
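
As a rough illustration of the modeling pipeline described here, the sketch below trains a classifier on a merged feature table (image texture + gaze metrics + cognitive opinion) to flag diagnostic errors. The data are synthetic placeholders, and the choice of a random forest is an assumption, not necessarily the algorithm the study used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

# Hypothetical feature table: one row per mammographic region a radiologist
# fixated, combining computer-extracted texture features with gaze metrics
# (dwell time, number of fixations) and the reader's opinion. `error` marks
# whether the final decision on that region was wrong. Shapes and values are
# random placeholders, so the printed number is meaningless.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))          # texture + gaze + cognition features
error = rng.integers(0, 2, size=200)    # 1 = diagnostic error on this region

# Group-based model: pool all readers, predict error from the merged features.
model = RandomForestClassifier(n_estimators=200, random_state=0)
pred = cross_val_predict(model, X, error, cv=5)
caught = ((pred == 1) & (error == 1)).sum() / max((error == 1).sum(), 1)
print("fraction of true errors flagged:", caught)
```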

  8. Learning to interact with a computer by gaze

    DEFF Research Database (Denmark)

    Aoki, Hirotaka; Hansen, John Paulin; Itoh, Kenji

    2008-01-01

    that inefficient eye movements were dramatically reduced after only 15 to 25 sentences of typing, equal to approximately 3-4 hours of practice. The performance data fit a general learning model based on the power law of practice. The learning model can be used to estimate further improvements in gaze typing...

  9. Premotor neurons encode torsional eye velocity during smooth-pursuit eye movements

    Science.gov (United States)

    Angelaki, Dora E.; Dickman, J. David

    2003-01-01

    Responses to horizontal and vertical ocular pursuit and head and body rotation in multiple planes were recorded in eye movement-sensitive neurons in the rostral vestibular nuclei (VN) of two rhesus monkeys. When tested during pursuit through primary eye position, the majority of the cells preferred either horizontal or vertical target motion. During pursuit of targets that moved horizontally at different vertical eccentricities or vertically at different horizontal eccentricities, eye angular velocity has been shown to include a torsional component the amplitude of which is proportional to half the gaze angle ("half-angle rule" of Listing's law). Approximately half of the neurons, the majority of which were characterized as "vertical" during pursuit through primary position, exhibited significant changes in their response gain and/or phase as a function of gaze eccentricity during pursuit, as if they were also sensitive to torsional eye velocity. Multiple linear regression analysis revealed a significant contribution of torsional eye movement sensitivity to the responsiveness of the cells. These findings suggest that many VN neurons encode three-dimensional angular velocity, rather than the two-dimensional derivative of eye position, during smooth-pursuit eye movements. Although no clear clustering of pursuit preferred-direction vectors along the semicircular canal axes was observed, the sensitivity of VN neurons to torsional eye movements might reflect a preservation of similar premotor coding of visual and vestibular-driven slow eye movements for both lateral-eyed and foveate species.

  10. Eye Contact Is Crucial for Referential Communication in Pet Dogs.

    Science.gov (United States)

    Savalli, Carine; Resende, Briseida; Gaunet, Florence

    2016-01-01

    Dogs discriminate human direction-of-attention cues, such as body, gaze, head and eye orientation, in several circumstances. Eye contact particularly seems to provide information on human readiness to communicate; when there is such an ostensive cue, dogs tend to follow human communicative gestures more often. However, little is known about how such cues influence the production of communicative signals (e.g. gaze alternation and sustained gaze) in dogs. In the current study, in order to get unreachable food, dogs needed to communicate with their owners in several conditions that differed according to the direction of the owners' visual cues, namely gaze, head, eyes, and availability to make eye contact. The results provided evidence that pet dogs did not rely on details of the owners' direction of visual attention. Instead, they relied on the whole combination of visual cues and especially on the owners' availability to make eye contact. Dogs increased visual communicative behaviors when they established eye contact with their owners, a different strategy from that of apes and baboons, which intensify vocalizations and gestures when the human is not visually attending. The difference in strategy is possibly due to their distinct status: domesticated vs wild. The results are discussed taking into account the ecological relevance of the task, since pet dogs live in a human environment and face similar situations on a daily basis during their lives.

  11. Improvement of design of a surgical interface using an eye tracking device.

    Science.gov (United States)

    Erol Barkana, Duygun; Açık, Alper; Duru, Dilek Goksel; Duru, Adil Deniz

    2014-05-07

    Surgical interfaces are used to help surgeons interpret and quantify patient information and to present an integrated workflow where all available data are combined to enable optimal treatment. Human factors research provides a systematic approach to designing user interfaces with safety, accuracy, satisfaction, and comfort in mind. One human factors method, the user-centered design approach, was used to develop a surgical interface for kidney tumor cryoablation, and an eye-tracking device was used to obtain the best configuration of the developed interface. The surgical interface was developed following the four phases of the user-centered design approach: analysis, design, implementation, and deployment. Possible configurations of the surgical interface, comprising various combinations of menu-based command controls, visual displays of multi-modal medical images, 2D and 3D models of the surgical environment, graphical or tabulated information, visual alerts, etc., were developed. Experiments simulating the cryoablation of a tumor were performed with surgeons to evaluate the proposed interface. Fixation durations and the number of fixations at informative regions of the interface were analyzed, and these data were used to modify the interface. Eye movement data showed that participants concentrated their attention on informative regions more when the number of displayed computed tomography (CT) images was reduced. Additionally, the time participants required to complete the kidney tumor cryoablation task decreased with the reduced number of CT images. Furthermore, the fixation durations obtained after the revision of the surgical interface are very close to those observed in visual search and natural scene perception studies, suggesting more efficient and comfortable interaction with the surgical interface. The

  12. Head movements evoked in alert rhesus monkey by vestibular prosthesis stimulation: implications for postural and gaze stabilization.

    Directory of Open Access Journals (Sweden)

    Diana E Mitchell

    Full Text Available The vestibular system detects motion of the head in space and in turn generates reflexes that are vital for our daily activities. The eye movements produced by the vestibulo-ocular reflex (VOR) play an essential role in stabilizing the visual axis (gaze), while vestibulo-spinal reflexes ensure the maintenance of head and body posture. The neuronal pathways from the vestibular periphery to the cervical spinal cord potentially serve a dual role, since they function to stabilize the head relative to inertial space and could thus contribute to gaze (eye-in-head + head-in-space) and posture stabilization. To date, however, the functional significance of vestibular-neck pathways in alert primates remains a matter of debate. Here we used a vestibular prosthesis to (1) quantify vestibularly driven head movements in primates, and (2) assess whether these evoked head movements make a significant contribution to gaze as well as postural stabilization. We stimulated electrodes implanted in the horizontal semicircular canal of alert rhesus monkeys, and measured the head and eye movements evoked during a 100 ms time period for which the contribution of longer-latency voluntary inputs to the neck would be minimal. Our results show that prosthetic stimulation evoked significant head movements with latencies consistent with known vestibulo-spinal pathways. Furthermore, while the evoked head movements were substantially smaller than the coincidently evoked eye movements, they made a significant contribution to gaze stabilization, complementing the VOR to ensure that the appropriate gaze response is achieved. We speculate that analogous compensatory head movements will be evoked when implanted prosthetic devices are transitioned to human patients.

  13. Research of Digital Interface Layout Design based on Eye-tracking

    OpenAIRE

    Shao Jiang; Xue Chengqi; Wang Fang; Wang Haiyan; Tang Wencheng; Chen Mo; Kang Mingwu

    2015-01-01

    The aim of this paper is to address the low service efficiency and unsmooth human-computer interaction caused by the irrational layouts of current digital interfaces for complex systems. Three common layout structures for digital interfaces are presented and five layout types appropriate for multilevel digital interfaces are summarized. Based on eye-tracking technology, an assessment of the advantages and disadvantages of the different layout types was conducted through subjects’ ...

  14. Towards gaze-controlled platform games

    DEFF Research Database (Denmark)

    Muñoz, Jorge; Yannakakis, Georgios N.; Mulvey, Fiona

    2011-01-01

    This paper introduces the concept of using gaze as a sole modality for fully controlling player characters of fast-paced action computer games. A user experiment is devised to collect gaze and gameplay data from subjects playing a version of the popular Super Mario Bros platform game. The initial analysis shows that there is a rather limited grid around Mario where the efficient player focuses her attention the most while playing the game. The useful grid, as we name it, projects the amount of meaningful visual information a designer should use towards creating successful player character controllers with the use of artificial intelligence for a platform game like Super Mario. Information about the eyes' position on the screen and the state of the game are utilized as inputs of an artificial neural network, which is trained to approximate which keyboard action is to be performed at each game

  15. Differences in gaze behaviour of expert and junior surgeons performing open inguinal hernia repair.

    Science.gov (United States)

    Tien, Tony; Pucher, Philip H; Sodergren, Mikael H; Sriskandarajah, Kumuthan; Yang, Guang-Zhong; Darzi, Ara

    2015-02-01

    Various fields have used gaze behaviour to evaluate task proficiency. This may also apply to surgery for the assessment of technical skill, but it has not previously been explored in live surgery. The aim was to assess differences in gaze behaviour between expert and junior surgeons during open inguinal hernia repair. Gaze behaviour of expert and junior surgeons (defined by operative experience) performing the operation was recorded using eye-tracking glasses (SMI Eye Tracking Glasses 2.0, SensoMotoric Instruments, Germany). Primary endpoints were fixation frequency (steady eye gaze rate) and dwell time (fixation and saccade duration), analysed for designated areas of interest in the subject's visual field. Secondary endpoints were maximum pupil size, pupil rate of change (change frequency in pupil size) and pupil entropy (predictability of pupil change). The NASA TLX scale measured perceived workload. Recorded metrics were compared between groups for the entire procedure and for comparable procedural segments. Twenty-five cases were recorded, with 13 operations analysed, from 9 surgeons, giving 630 min of data recorded at 30 Hz. Experts demonstrated higher fixation frequency (median [IQR] 1.86 [0.3] vs 0.96 [0.3]; P = 0.006) and dwell time on the operative site during application of mesh (792 [159] vs 469 [109] s; P = 0.028) and during closure of the external oblique (1.79 [0.2] vs 1.20 [0.6]; P = 0.003; 625 [154] vs 448 [147] s; P = 0.032), and dwelled more on the sterile field during cutting of mesh (716 [173] vs 268 [297] s; P = 0.019). NASA TLX scores indicated that experts found the procedure less mentally demanding than juniors (3 [2] vs 12 [5.2]; P = 0.038). No subjects reported problems with wearing the device or obstruction of view. Use of portable eye-tracking technology in open surgery is feasible without impinging on surgical performance. Differences in gaze behaviour during open inguinal hernia repair can be seen between expert and junior surgeons and may have

  16. Eye size and visual acuity influence vestibular anatomy in mammals.

    Science.gov (United States)

    Kemp, Addison D; Christopher Kirk, E

    2014-04-01

    The semicircular canals of the inner ear detect head rotations and trigger compensatory movements that stabilize gaze and help maintain visual fixation. Mammals with large eyes and high visual acuity require precise gaze stabilization mechanisms because they experience diminished visual functionality at low thresholds of uncompensated motion. Because semicircular canal radius of curvature is a primary determinant of canal sensitivity, species with large canal radii are expected to be capable of more precise gaze stabilization than species with small canal radii. Here, we examine the relationship between mean semicircular canal radius of curvature, eye size, and visual acuity in a large sample of mammals. Our results demonstrate that eye size and visual acuity both explain a significant proportion of the variance in mean canal radius of curvature after statistically controlling for the effects of body mass and phylogeny. These findings suggest that variation in mean semicircular canal radius of curvature among mammals is partly the result of selection for improved gaze stabilization in species with large eyes and acute vision. Our results also provide a possible functional explanation for the small semicircular canal radii of fossorial mammals and plesiadapiforms. Copyright © 2014 Wiley Periodicals, Inc.

  17. Increased Eye Contact during Conversation Compared to Play in Children with Autism

    Science.gov (United States)

    Jones, Rebecca M.; Southerland, Audrey; Hamo, Amarelle; Carberry, Caroline; Bridges, Chanel; Nay, Sarah; Stubbs, Elizabeth; Komarow, Emily; Washington, Clay; Rehg, James M.; Lord, Catherine; Rozga, Agata

    2017-01-01

    Children with autism have atypical gaze behavior but it is unknown whether gaze differs during distinct types of reciprocal interactions. Typically developing children (N = 20) and children with autism (N = 20) (4-13 years) made similar amounts of eye contact with an examiner during a conversation. Surprisingly, there was minimal eye contact…

  18. The Benslimane's Artistic Model for Females' Gaze Beauty: An Original Assessment Tool.

    Science.gov (United States)

    Benslimane, Fahd; van Harpen, Laura; Myers, Simon R; Ingallina, Fabio; Ghanem, Ali M

    2017-02-01

    The aim of this paper is to analyze the aesthetic characteristics of the human females' gaze using anthropometry and to present an artistic model to represent it: "The Frame Concept." In this model, the eye fissure represents a painting, and the most peripheral shadows around it represent the frame of this painting. The narrower the frame, the more aesthetically pleasing and youthful the gaze appears. This study included a literature review of the features that make the gaze appear attractive. Photographs of models with attractive gazes were examined, and old photographs of patients were compared to recent photographs. The frame ratio was defined by anthropometric measurements of modern portraits of twenty consecutive Miss World winners. The concept was then validated for age and attractiveness across centuries by analysis of modern female photographs and works of art acknowledged for portraying beautiful young and older women in classical paintings. The frame height inversely correlated with attractiveness in modern female portrait photographs. The eye fissure frame ratio of modern idealized female portraits was similar to that of beautiful female portraits idealized by classical artists. In contrast, the eye fissure frames of classical artists' mothers' portraits were significantly wider than those of beautiful younger women. The Frame Concept is a valid artistic tool that provides an understanding of both the aesthetic and aging characteristics of the female periorbital region, enabling the practitioner to plan appropriate aesthetic interventions. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the A3 online Instructions to Authors. www.springer.com/00266.

  19. A new mapping function in table-mounted eye tracker

    Science.gov (United States)

    Tong, Qinqin; Hua, Xiao; Qiu, Jian; Luo, Kaiqing; Peng, Li; Han, Peng

    2018-01-01

    The eye tracker is a new apparatus for human-computer interaction that has attracted much attention in recent years. Eye-tracking technology obtains the direction of a subject's visual attention (gaze) by mechanical, electronic, optical, image processing and other means of detection. The mapping function is a key part of the image processing and determines the accuracy of the whole eye tracker system. In this paper, we present a new mapping model based on the relationship among the eyes, the camera and the screen being gazed at. First, according to the geometrical relationship among the eyes, the camera and the screen, the framework of the mapping function between the pupil center and the screen coordinates is constructed. Second, in order to simplify the vector inversion in the mapping function, the coordinates of the eyes, the camera and the screen are modeled with coaxial coordinate systems. To verify the mapping function, a corresponding experiment was conducted, and the method was compared with the traditional quadratic polynomial function. The results show that our approach improves the accuracy of gaze-point determination. Compared with other methods, this mapping function is simple and valid.
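
The "traditional quadratic polynomial function" used here as a baseline maps pupil-center coordinates to screen coordinates with a second-order polynomial fitted per axis during calibration. A minimal sketch of that baseline (the paper's new coaxial model is not reproduced here):

```python
import numpy as np

def quadratic_features(pupil_xy):
    """Second-order polynomial basis used by the traditional mapping."""
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_quadratic_mapping(pupil_xy, screen_xy):
    """Fit screen = f(pupil) with one quadratic polynomial per screen axis.

    `pupil_xy` holds pupil-center coordinates from a calibration run and
    `screen_xy` the known gaze targets; with 6 coefficients per axis, at
    least 6 calibration points are needed (9 is a typical grid).
    """
    A = quadratic_features(pupil_xy)
    coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)
    return coeffs  # shape (6, 2)

def map_gaze(coeffs, pupil_xy):
    """Map new pupil-center samples to screen coordinates."""
    return quadratic_features(np.atleast_2d(pupil_xy)) @ coeffs
```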

  20. Eye Movements in Darkness Modulate Self-Motion Perception.

    Science.gov (United States)

    Clemens, Ivar Adrianus H; Selen, Luc P J; Pomante, Antonella; MacNeilage, Paul R; Medendorp, W Pieter

    2017-01-01

    During self-motion, humans typically move the eyes to maintain fixation on the stationary environment around them. These eye movements could in principle be used to estimate self-motion, but their impact on perception is unknown. We had participants judge self-motion during different eye-movement conditions in the absence of full-field optic flow. In a two-alternative forced-choice task, participants indicated whether the second of two successive passive lateral whole-body translations was longer or shorter than the first. This task was used in two experiments. In the first (n = 8), eye movements were constrained differently in the two translation intervals by presenting either a world-fixed or body-fixed fixation point, or no fixation point at all (allowing free gaze). Results show that perceived translations were shorter with a body-fixed than with a world-fixed fixation point. A linear model indicated that eye-movement signals received a weight of ~25% in the self-motion percept. This model was independently validated in the trials without a fixation point (free gaze). In the second experiment (n = 10), gaze was free during both translation intervals. Results show that the translation with the larger eye-movement excursion was judged to be larger more often than chance, based on an oculomotor choice probability analysis. We conclude that eye-movement signals influence self-motion perception, even in the absence of visual stimulation.

  1. Genuine eye contact elicits self-referential processing.

    Science.gov (United States)

    Hietanen, Jonne O; Hietanen, Jari K

    2017-05-01

    The effect of eye contact on self-awareness was investigated with implicit measures based on the use of first-person singular pronouns in sentences. The measures were proposed to tap into self-referential processing, that is, information processing associated with self-awareness. In addition, participants filled in a questionnaire measuring explicit self-awareness. In Experiment 1, the stimulus was a video clip showing another person and, in Experiment 2, the stimulus was a live person. In both experiments, participants were divided into two groups and presented with the stimulus person either making eye contact or gazing downward, depending on the group assignment. During the task, the gaze stimulus was presented before each trial of the pronoun-selection task. Eye contact was found to increase the use of first-person pronouns, but only when participants were facing a real person, not when they were looking at a video of a person. No difference in self-reported self-awareness was found between the two gaze direction groups in either experiment. The results indicate that eye contact elicits self-referential processing, but the effect may be stronger, or possibly limited to, live interaction. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Using Variable Dwell Time to Accelerate Gaze-based Web Browsing with Two-step Selection

    OpenAIRE

    Chen, Zhaokang; Shi, Bertram E.

    2017-01-01

    In order to avoid the "Midas Touch" problem, gaze-based interfaces for selection often introduce a dwell time: a fixed amount of time the user must fixate upon an object before it is selected. Past interfaces have used a uniform dwell time across all objects. Here, we propose an algorithm for adjusting the dwell times of different objects based on the inferred probability that the user intends to select them. In particular, we introduce a probabilistic model of natural gaze behavior while sur...
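
The core idea - shrinking an object's dwell threshold as the inferred selection probability grows - can be expressed in a few lines. The formula and bounds below are illustrative choices, not the authors' algorithm.

```python
def adjusted_dwell(base_dwell, p_select, floor=0.15, ceil=1.5):
    """Shorten dwell for likely targets, lengthen it for unlikely ones.

    `p_select` is the inferred probability that the user intends to select
    this object (e.g., from a model of natural gaze while browsing). The
    specific formula is an assumed stand-in for the paper's method.
    """
    dwell = base_dwell * (1.0 - p_select)
    return min(max(dwell, floor), ceil)

# A link the model believes the user wants needs far less dwell than an
# unlikely one, easing both Midas-touch errors and selection delay.
print(adjusted_dwell(1.0, p_select=0.9))  # 0.15 s (clipped to the floor)
print(adjusted_dwell(1.0, p_select=0.1))  # 0.9 s
```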

  3. Investigating the link between radiologists’ gaze, diagnostic decision, and image content

    Science.gov (United States)

    Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent; Krupinski, Elizabeth

    2013-01-01

    Objective To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods Gaze data and diagnostic decisions were collected from three breast imaging radiologists and three radiology residents who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Image analysis was performed in mammographic regions that attracted radiologists’ attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results By pooling the data from all readers, machine learning produced highly accurate predictive models linking image content, gaze, and cognition. Potential linking of those with diagnostic error was also supported to some extent. Merging readers’ gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the readers’ diagnostic errors while confirming 97.3% of their correct diagnoses. The readers’ individual perceptual and cognitive behaviors could be adequately predicted by modeling the behavior of others. However, personalized tuning was in many cases beneficial for capturing more accurately individual behavior. Conclusions There is clearly an interaction between radiologists’ gaze, diagnostic decision, and image content which can be modeled with machine learning algorithms. PMID:23788627

  4. Direct gaze elicits atypical activation of the theory-of-mind network in autism spectrum conditions.

    Science.gov (United States)

    von dem Hagen, Elisabeth A H; Stoyanova, Raliza S; Rowe, James B; Baron-Cohen, Simon; Calder, Andrew J

    2014-06-01

    Eye contact plays a key role in social interaction and is frequently reported to be atypical in individuals with autism spectrum conditions (ASCs). Despite the importance of direct gaze, previous functional magnetic resonance imaging in ASC has generally focused on paradigms using averted gaze. The current study sought to determine the neural processing of faces displaying direct and averted gaze in 18 males with ASC and 23 matched controls. Controls showed an increased response to direct gaze in brain areas implicated in theory-of-mind and gaze perception, including medial prefrontal cortex, temporoparietal junction, posterior superior temporal sulcus region, and amygdala. In contrast, the same regions showed an increased response to averted gaze in individuals with an ASC. This difference was confirmed by a significant gaze direction × group interaction. Relative to controls, participants with ASC also showed reduced functional connectivity between these regions. We suggest that, in the typical brain, perceiving another person gazing directly at you triggers spontaneous attributions of mental states (e.g. he is "interested" in me), and that such mental state attributions to direct gaze may be reduced or absent in the autistic brain.

  5. From the eyes and the heart: a novel eye-gaze metric that predicts video preferences of a large audience.

    Science.gov (United States)

    Christoforou, Christoforos; Christou-Champi, Spyros; Constantinidou, Fofi; Theodorou, Maria

    2015-01-01

    Eye-tracking has been extensively used to quantify audience preferences in marketing and advertising research, primarily in methodologies involving static images or stimuli (i.e., advertising, shelf testing, and website usability). However, these methodologies do not generalize to narrative-based video stimuli, where a specific storyline is meant to be communicated to the audience. In this paper, a novel metric based on eye-gaze dispersion (both within and across viewings) that quantifies the impact of narrative-based video stimuli on the preferences of large audiences is presented. The metric is validated by predicting the performance of video advertisements aired during the 2014 Super Bowl final. In particular, the metric is shown to explain 70% of the variance in likeability scores of the 2014 Super Bowl ads as measured by the USA TODAY Ad-Meter. In addition, by comparing the proposed metric with heart rate variability (HRV) indices, we associate the metric with biological processes relating to attention allocation. The idea underlying the proposed metric suggests a shift in perspective when it comes to evaluating narrative-based video stimuli: audience preferences for a video are modulated by the degree to which viewers fail to allocate attention to it. The proposed metric can be calculated on any narrative-based video stimulus (i.e., movie, narrative content, emotional content, etc.), and thus has the potential to facilitate the use of such stimuli in several contexts: prediction of audience preferences for movies, quantitative assessment of entertainment pieces, prediction of the impact of movie trailers, identification of group and individual differences in the study of attention-deficit disorders, and the study of desensitization to media violence.
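
A simple way to operationalize eye-gaze dispersion is the RMS distance of all viewers' gaze points from their per-frame centroid, averaged over the clip. This is a simplified stand-in for the paper's within/across-viewing metric, not its exact definition.

```python
import numpy as np

def frame_dispersion(gaze_xy):
    """RMS distance of viewers' gaze points from their centroid in one frame.

    `gaze_xy` is an (n_viewers, 2) array of screen coordinates. Low dispersion
    means the audience's attention converged on the same spot.
    """
    centroid = gaze_xy.mean(axis=0)
    return np.sqrt(((gaze_xy - centroid) ** 2).sum(axis=1).mean())

def video_dispersion(gaze_per_frame):
    """Average within-frame dispersion over a whole clip.

    `gaze_per_frame` is a list of (n_viewers, 2) arrays, one per video frame.
    """
    return float(np.mean([frame_dispersion(f) for f in gaze_per_frame]))
```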

  6. Gaze Estimation Method Using Analysis of Electrooculogram Signals and Kinect Sensor

    Directory of Open Access Journals (Sweden)

    Keiko Sakurai

    2017-01-01

    Full Text Available A gaze estimation system is one of the communication methods for severely disabled people who cannot perform gestures or speech. We previously developed an eye-tracking method using a compact and light electrooculogram (EOG) signal, but its accuracy is not very high. In the present study, we conducted experiments to investigate the EOG components most strongly correlated with changes in eye movements. The experiments were of two types: experiments in which objects were viewed using eye movements only, and experiments in which objects were viewed using both face and eye movements. The experimental results show the possibility of an eye-tracking method using EOG signals and a Kinect sensor.

  7. A comprehensive gaze stabilization controller based on cerebellar internal models

    DEFF Research Database (Denmark)

    Vannucci, Lorenzo; Falotico, Egidio; Tolu, Silvia

    2017-01-01

    The VOR works in conjunction with the opto-kinetic reflex (OKR), a visual feedback mechanism that moves the eye at the same speed as the observed scene. Together they keep the image stationary on the retina. In this work we implement on a humanoid robot a model of gaze stabilization based on the coordination of the vestibulocollic reflex (VCR), the VOR, and the OKR. The model, inspired by neuroscientific cerebellar theories, is provided with learning and adaptation capabilities based on internal models. We present the results for the gaze stabilization model on three sets of experiments conducted on the SABIAN robot...
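
The VOR/OKR coordination has a compact control-loop reading: feedforward cancellation of head velocity plus visual feedback on the residual retinal slip. Below is a deliberately simplified, fixed-gain sketch; the model described above learns and adapts such gains online via cerebellar internal models.

```python
def stabilize_gaze(head_velocity, retinal_slip, vor_gain=1.0, okr_gain=0.5):
    """One control step of a simplified VOR + OKR eye-velocity command.

    VOR: counter-rotate the eye against measured head velocity (vestibular
    feedforward). OKR: add visual feedback proportional to retinal slip,
    the residual image motion the VOR failed to cancel. Gains are
    illustrative values, not the paper's learned parameters.
    """
    return -vor_gain * head_velocity - okr_gain * retinal_slip

# If the head turns at +20 deg/s and 2 deg/s of slip remains, the commanded
# eye velocity is -21 deg/s, keeping the image nearly still on the retina.
print(stabilize_gaze(20.0, 2.0))  # -21.0
```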

  8. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays.

    Science.gov (United States)

    Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A; Wetzstein, Gordon

    2017-02-28

    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
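
To first order, the gaze-contingent focus adjustment reduces to simple diopter arithmetic: place the virtual image at the fixated depth and fold in the user's spherical prescription. A sketch under that first-order assumption, not the prototypes' actual optical model.

```python
def lens_power_diopters(fixation_depth_m, spherical_correction_d=0.0):
    """Focus-tunable lens power for a gaze-contingent near-eye display.

    Drive the tunable lens so the virtual image sits at the depth the gaze
    tracker reports, then add the user's spherical prescription so
    uncorrected refractive errors are compensated in-display.
    """
    vergence_demand = 1.0 / fixation_depth_m       # fixated depth in diopters
    return vergence_demand + spherical_correction_d

# A -2 D myope fixating a virtual object 0.5 m away:
print(lens_power_diopters(0.5, spherical_correction_d=-2.0))  # 0.0 D
```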

  9. Gaze Duration Biases for Colours in Combination with Dissonant and Consonant Sounds: A Comparative Eye-Tracking Study with Orangutans.

    Science.gov (United States)

    Mühlenbeck, Cordelia; Liebal, Katja; Pritsch, Carla; Jacobsen, Thomas

    2015-01-01

    Research on colour preferences in humans and non-human primates suggests similar patterns of biases for and avoidance of specific colours, indicating that these colours are connected to a psychological reaction. Similarly, in the acoustic domain, approach reactions to consonant sounds (considered as positive) and avoidance reactions to dissonant sounds (considered as negative) have been found in human adults and children, and it has been demonstrated that non-human primates are able to discriminate between consonant and dissonant sounds. Yet it remains unclear whether the visual and acoustic approach-avoidance patterns remain consistent when both types of stimuli are combined, how they relate to and influence each other, and whether these are similar for humans and other primates. Therefore, to investigate whether gaze duration biases for colours are similar across primates and whether reactions to consonant and dissonant sounds cumulate with reactions to specific colours, we conducted an eye-tracking study in which we compared humans with one species of great apes, the orangutans. We presented four different colours either in isolation or in combination with consonant and dissonant sounds. We hypothesised that the viewing time for specific colours should be influenced by dissonant sounds and that previously existing avoidance behaviours with regard to colours should be intensified, reflecting their association with negative acoustic information. The results showed that the humans had constant gaze durations which were independent of the auditory stimulus, with a clear avoidance of yellow. In contrast, the orangutans did not show any clear gaze duration bias or avoidance of colours, and they were also not influenced by the auditory stimuli. In conclusion, our findings only partially support the previously identified pattern of biases for and avoidance of specific colours in humans and do not confirm such a pattern for orangutans.

  11. Studying the influence of race on the gaze cueing effect using eye tracking method

    Directory of Open Access Journals (Sweden)

    Galina Ya. Menshikova

    2017-06-01

    Full Text Available The gaze direction of another person is an important social cue that allows us to orient quickly in social interactions. The short-term redirection of visual attention to the same object that another person is looking at is known as the gaze cueing effect. There is evidence that the strength of this effect depends on many social factors, such as trust in the partner, her/his gender, social attitudes, etc. In our study we investigated the influence of the race of face stimuli on the strength of the gaze cueing effect. Using a modified Posner cueing task, attentional shifts were assessed in a scene where avatar faces of different races were used as distractors. Participants were instructed to fixate a black dot in the centre of the screen until it changed colour, and then to make a rightward or leftward saccade as quickly as possible, depending on the colour of the fixation point. A male distractor face was shown in the centre of the screen simultaneously with the fixation point. The gaze direction of the distractor face changed from straight ahead to rightward or leftward at the moment the colour of the fixation point changed; it could be either congruent or incongruent with the required saccade direction. We used face distractors of three race categories: Caucasian (own-race faces), Asian and African (other-race faces). Twenty-five Caucasian participants took part in our study. The results showed that the race of the face distractors influenced the strength of the gaze cueing effect, which manifested in changes in the latency and velocity of the ensuing saccades.

  12. TabletGaze: Unconstrained Appearance-based Gaze Estimation in Mobile Tablets

    OpenAIRE

    Huang, Qiong; Veeraraghavan, Ashok; Sabharwal, Ashutosh

    2015-01-01

    We study gaze estimation on tablets; our key design goal is uncalibrated gaze estimation using the front-facing camera during natural use of tablets, where the posture and method of holding the tablet are not constrained. We collected the first large unconstrained gaze dataset of tablet users, labeled the Rice TabletGaze dataset. The dataset consists of 51 subjects, each with 4 different postures and 35 gaze locations. Subjects vary in race, gender and in their need for prescription glasses, all o...
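
    The truncated record does not spell out the estimation algorithm, so the following sketch shows only the generic shape of appearance-based gaze estimation: regress on-screen gaze coordinates directly from eye-image appearance. Random arrays stand in for real eye crops and labels, and ridge regression stands in for whatever model is actually evaluated on the dataset.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge

    # Training data: flattened grayscale eye patches and on-screen gaze targets.
    # X: (n_samples, patch_h * patch_w); y: (n_samples, 2) gaze (x, y) in pixels.
    rng = np.random.default_rng(0)
    X_train = rng.random((500, 36 * 60))          # stand-in for real eye crops
    y_train = rng.random((500, 2)) * [1024, 768]  # stand-in gaze locations

    model = Ridge(alpha=1.0).fit(X_train, y_train)
    gaze_xy = model.predict(X_train[:1])          # estimated on-screen gaze point
    ```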

  13. Illusory shadow person causing paradoxical gaze deviations during temporal lobe seizures

    NARCIS (Netherlands)

    Zijlmans, M.; van Eijsden, P.; Ferrier, C. H.; Kho, K. H.; van Rijen, P. C.; Leijten, F. S. S.

    Generally, activation of the frontal eye field during seizures can cause versive (forced) gaze deviation, while non-versive head deviation is hypothesised to result from ictal neglect after inactivation of the ipsilateral temporoparietal area. Almost all non-versive head deviations occurring during

  14. Exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Xingwei An

    Full Text Available For Brain-Computer Interface (BCI) systems that are designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in the gaze-independent event-related potential (ERP) speller and to enhance BCI performance, we designed a visual-auditory speller. We investigate the possibility to enhance stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities are proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than unimodal paradigms, without sacrificing spelling performance. Besides, shorter latencies, lower amplitudes, as well as a shift of the temporal and spatial distribution of discriminative information were observed for the Combined-Speller. These results are important and should inspire future studies to investigate the reasons for these differences. For the more innovative and demanding Parallel-Speller, where the auditory and visual domains are independent from each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level <3%), showing a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel. It brings new insight into true multisensory stimulus paradigms. The novel approaches for combining two sensory modalities designed here are valuable for the development of ERP-based BCI paradigms.
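
    Spellers of this kind ultimately rest on classifying attended versus ignored stimulus epochs from the EEG. The record does not name the classifier, so the sketch below uses shrinkage LDA, a common choice in ERP-based BCI work, on synthetic epochs; the array shapes and labels are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Epochs: (n_trials, n_channels, n_times) EEG segments cut around each
    # stimulus; labels: 1 = attended (target) stimulus, 0 = ignored.
    rng = np.random.default_rng(1)
    epochs = rng.standard_normal((300, 32, 100))  # stand-in for real EEG
    labels = rng.integers(0, 2, size=300)

    X = epochs.reshape(len(epochs), -1)  # flatten channel x time features
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    clf.fit(X, labels)
    scores = clf.decision_function(X)    # per-stimulus evidence, summed over
                                         # repetitions to select a symbol
    ```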

  15. Attention, Exposure Duration, and Gaze Shifting in Naming Performance

    Science.gov (United States)

    Roelofs, Ardi

    2011-01-01

    Two experiments are reported in which the role of attribute exposure duration in naming performance was examined by tracking eye movements. Participants were presented with color-word Stroop stimuli and left- or right-pointing arrows on different sides of a computer screen. They named the color attribute and shifted their gaze to the arrow to…

  16. Eye Typing using Markov and Active Appearance Models

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Hansen, John Paulin; Nielsen, Mads

    2002-01-01

    We propose a non-intrusive eye tracking system intended for everyday gaze typing using web cameras. We argue that high precision in gaze tracking is not needed for on-screen typing due to natural language redundancy. This facilitates the use of low-cost video components for advanced...

  17. Gender and facial dominance in gaze cuing: Emotional context matters in the eyes that we follow

    NARCIS (Netherlands)

    Ohlsen, G.; van Zoest, W.; van Vugt, M.

    2013-01-01

    Gaze following is a socio-cognitive process that provides adaptive information about potential threats and opportunities in the individual's environment. The aim of the present study was to investigate the potential interaction between emotional context and facial dominance in gaze following. We

  18. Usability Analysis of Online Bank Login Interface Based on Eye Tracking Experiment

    Directory of Open Access Journals (Sweden)

    Xiaofang YUAN

    2014-02-01

    Full Text Available With the rapid development of information technology and the rapid popularization of online banking, online banking is used by more and more consumers. Studying the usability of the online banking interface, improving the user-friendliness of the web interface, and enhancing the attractiveness of the bank website have gradually become basic network marketing strategies for banks. This study therefore took three banks as examples and used Tobii T60XL eye tracking equipment to record subjects' eye tracking data, including time to first fixation, fixation duration and blink count, while they logged in to the online banking web interface, and analyzed how webpage layout, colours and the amount of information presented impact the usability of the online banking login interface. The results show that the login entry, account login information and other key control buttons should be placed in the upper left corner so that users can quickly lock onto the target, and that the interface should present a moderate amount of information, with appropriate proportions, reasonable font sizes, and a harmonious, simple and warm design style.
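
    The metrics named in this record (time to first fixation, fixation duration) can be derived from a chronologically ordered fixation list once an area of interest, such as the login button, is defined; blink counts come from the raw sample stream and are omitted here. The sketch below assumes hypothetical field names, not the actual Tobii export schema.

    ```python
    def usability_metrics(fixations, aoi):
        """Compute simple AOI metrics from a chronological fixation list.

        fixations: list of dicts with 'x', 'y', 'start_ms', 'duration_ms'
                   (illustrative field names, not the Tobii export schema)
        aoi:       (left, top, right, bottom) of the area of interest
        """
        left, top, right, bottom = aoi
        hits = [f for f in fixations
                if left <= f["x"] <= right and top <= f["y"] <= bottom]
        return {
            "time_to_first_fixation_ms": hits[0]["start_ms"] if hits else None,
            "total_fixation_duration_ms": sum(f["duration_ms"] for f in hits),
            "fixation_count": len(hits),
        }
    ```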

  19. From the eyes and the heart: a novel eye-gaze metric that predicts video preferences of a large audience.

    Directory of Open Access Journals (Sweden)

    Christoforos Christoforou

    2015-05-01

    Full Text Available Eye-tracking has been extensively used to quantify audience preferences in the context of marketing and advertising research, primarily in methodologies involving static images or stimuli (i.e., advertising, shelf testing, and website usability). However, these methodologies do not generalize to narrative-based video stimuli where a specific storyline is meant to be communicated to the audience. In this paper, a novel metric based on eye-gaze dispersion (both within and across viewings) that quantifies the impact of narrative-based video stimuli on the preferences of large audiences is presented. The metric is validated in predicting the performance of video advertisements aired during the 2014 Super Bowl final. In particular, the metric is shown to explain 70% of the variance in likeability scores of the 2014 Super Bowl ads as measured by the USA TODAY Ad Meter. In addition, by comparing the proposed metric with Heart Rate Variability (HRV) indices, we have associated the metric with biological processes relating to attention allocation. The underlying idea behind the proposed metric suggests a shift in perspective when it comes to evaluating narrative-based video stimuli. In particular, it suggests that audience preferences for video are modulated by the level of the viewers' lack of attention allocation. The proposed metric can be calculated on any narrative-based video stimuli (i.e., movie, narrative content, emotional content, etc.), and thus has the potential to facilitate the use of such stimuli in several contexts: prediction of audience preferences of movies, quantitative assessment of entertainment pieces, prediction of the impact of movie trailers, identification of group and individual differences in the study of attention-deficit disorders, and the study of desensitization to media violence.
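
    The record describes the metric only as eye-gaze dispersion within and across viewings; one plausible reading, sketched below, is the average distance of each viewer's gaze from the across-viewer centroid of every frame. The exact formula of the published metric may differ.

    ```python
    import numpy as np

    def gaze_dispersion(gaze):
        """Mean per-frame gaze dispersion across viewers.

        gaze: array (n_viewers, n_frames, 2) of gaze coordinates while
              watching the same video. Returns the average distance of each
              viewer's gaze to the across-viewer centroid of each frame.
        """
        centroid = gaze.mean(axis=0, keepdims=True)       # (1, n_frames, 2)
        dists = np.linalg.norm(gaze - centroid, axis=-1)  # (n_viewers, n_frames)
        return dists.mean()
    ```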

  20. Eye Movements Affect Postural Control in Young and Older Females.

    Science.gov (United States)

    Thomas, Neil M; Bampouras, Theodoros M; Donovan, Tim; Dewhurst, Susan

    2016-01-01

    Visual information is used for postural stabilization in humans. However, little is known about how eye movements prevalent in everyday life interact with the postural control system in older individuals. Therefore, the present study assessed the effects of stationary gaze fixations, smooth pursuits, and saccadic eye movements, with combinations of absent, fixed and oscillating large-field visual backgrounds to generate different forms of retinal flow, on postural control in healthy young and older females. Participants were presented with computer generated visual stimuli, whilst postural sway and gaze fixations were simultaneously assessed with a force platform and eye tracking equipment, respectively. The results showed that fixed backgrounds and stationary gaze fixations attenuated postural sway. In contrast, oscillating backgrounds and smooth pursuits increased postural sway. There were no differences regarding saccades. There were also no differences in postural sway or gaze errors between age groups in any visual condition. The stabilizing effect of the fixed visual stimuli show how retinal flow and extraocular factors guide postural adjustments. The destabilizing effect of oscillating visual backgrounds and smooth pursuits may be related to more challenging conditions for determining body shifts from retinal flow, and more complex extraocular signals, respectively. Because the older participants matched the young group's performance in all conditions, decreases of posture and gaze control during stance may not be a direct consequence of healthy aging. Further research examining extraocular and retinal mechanisms of balance control and the effects of eye movements, during locomotion, is needed to better inform fall prevention interventions.

  1. Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.

    Science.gov (United States)

    Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo

    2017-07-01

    Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level and proactive of body movements, and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis (and in amputees), from stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy among others. Despite this benefit, eye tracking is not widely used as a control interface for robotic systems in movement-impaired patients due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking using our GT3D binocular eye tracker with a custom-designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. Users can move their own hand to any location of the workspace by simply looking at the target and winking once. This purely eye-tracking-based system enables the end-user to retain free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE error in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes. This results in a fully automated calibration procedure that yields several thousand calibration points, versus standard approaches using a dozen points, resulting in beyond state-of-the-art 3D accuracy and precision.
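
    With several thousand calibration pairs collected along the space-filling trajectory, mapping binocular eye features to 3D end-points becomes a dense regression problem. The sketch below fits a second-order polynomial regression as an illustration; the feature layout and model choice are assumptions, not the GT3D implementation.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    # eye_feats: (n, 4) binocular pupil coordinates (lx, ly, rx, ry) recorded
    # while the user visually tracks the robot along the calibration curve;
    # targets: (n, 3) known 3D robot positions at the same instants.
    rng = np.random.default_rng(2)
    eye_feats = rng.random((5000, 4))  # stand-in for recorded eye features
    targets = rng.random((5000, 3))    # stand-in for robot positions

    gaze_to_3d = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    gaze_to_3d.fit(eye_feats, targets)
    endpoint_xyz = gaze_to_3d.predict(eye_feats[:1])  # 3D point for the arm
    ```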

  2. E-gaze : create gaze communication for peoplewith visual disability

    NARCIS (Netherlands)

    Qiu, S.; Osawa, H.; Hu, J.; Rauterberg, G.W.M.

    2015-01-01

    Gaze signals are frequently used by the sighted in social interactions as visual cues. However, these signals and cues are hardly accessible for people with visual disability. A conceptual design of E-Gaze glasses is proposed, an assistive device to create gaze communication between blind and sighted people

  3. Context-sensitivity in Conversation. Eye gaze and the German Repair Initiator ‘bitte?’ (´pardon?´)

    DEFF Research Database (Denmark)

    Egbert, Maria

    1996-01-01

    . In addition, repair is sensitive to certain characteristics of social situations. The selection of a particular repair initiator, German bitte? ‘pardon?’, indexes that there is no mutual gaze between interlocutors; i.e., there is no common course of action. The selection of bitte? not only initiates repair......; it also spurs establishment of mutual gaze, and thus displays that there is attention to a common focus. (Conversation analysis, context, cross-linguistic analysis, repair, gaze, telephone conversation, co-present interaction, grammar and interaction)...

  4. Eye contact perception in the West and East: a cross-cultural study.

    Directory of Open Access Journals (Sweden)

    Shota Uono

    Full Text Available This study investigated whether eye contact perception differs in people with different cultural backgrounds. Finnish (European) and Japanese (East Asian) participants were asked to determine whether Finnish and Japanese neutral faces with various gaze directions were looking at them. Further, participants rated the face stimuli for emotion and other affect-related dimensions. The results indicated that Finnish viewers had a smaller bias toward judging slightly averted gazes as directed at them when judging Finnish rather than Japanese faces, while the bias of Japanese viewers did not differ between faces from their own and other cultural backgrounds. This may be explained by Westerners experiencing more eye contact in their daily life leading to larger visual experience of gaze perception generally, and to more accurate perception of eye contact with people from their own cultural background particularly. The results also revealed cultural differences in the perception of emotion from neutral faces that could also contribute to the bias in eye contact perception.

  5. Eye contact perception in the West and East: a cross-cultural study.

    Science.gov (United States)

    Uono, Shota; Hietanen, Jari K

    2015-01-01

    This study investigated whether eye contact perception differs in people with different cultural backgrounds. Finnish (European) and Japanese (East Asian) participants were asked to determine whether Finnish and Japanese neutral faces with various gaze directions were looking at them. Further, participants rated the face stimuli for emotion and other affect-related dimensions. The results indicated that Finnish viewers had a smaller bias toward judging slightly averted gazes as directed at them when judging Finnish rather than Japanese faces, while the bias of Japanese viewers did not differ between faces from their own and other cultural backgrounds. This may be explained by Westerners experiencing more eye contact in their daily life leading to larger visual experience of gaze perception generally, and to more accurate perception of eye contact with people from their own cultural background particularly. The results also revealed cultural differences in the perception of emotion from neutral faces that could also contribute to the bias in eye contact perception.

  6. A novel approach to training attention and gaze in ASD: A feasibility and efficacy pilot study.

    Science.gov (United States)

    Chukoskie, Leanne; Westerfield, Marissa; Townsend, Jeanne

    2018-05-01

    In addition to the social, communicative and behavioral symptoms that define the disorder, individuals with ASD have difficulty re-orienting attention quickly and accurately. Similarly, fast re-orienting saccadic eye movements are also inaccurate and more variable in both endpoint and timing. Atypical gaze and attention are among the earliest symptoms observed in ASD. Disruption of these foundation skills critically affects the development of higher level cognitive and social behavior. We propose that interventions aimed at these early deficits that support social and cognitive skills will be broadly effective. We conducted a pilot clinical trial designed to demonstrate the feasibility and preliminary efficacy of using gaze-contingent video games for low-cost in-home training of attention and eye movement. Eight adolescents with ASD participated in an 8-week training, with pre-, mid- and post-testing of eye movement and attention control. Six of the eight adolescents completed the 8 weeks of training and all six showed improvement in attention (orienting, disengagement), eye movement control, or both. All game systems remained intact for the duration of training and all participants could use the system independently. We delivered a robust, low-cost, gaze-contingent game system for home use that, in our pilot training sample, improved the attention orienting and eye movement performance of adolescent participants in 8 weeks of training. We are currently conducting a clinical trial to replicate these results and to examine what, if any, aspects of training transfer to more real-world tasks. © 2017 Wiley Periodicals, Inc. Develop Neurobiol 78: 546-554, 2018.

  7. Can Gaze Avoidance Explain Why Individuals with Asperger's Syndrome Can't Recognise Emotions from Facial Expressions?

    Science.gov (United States)

    Sawyer, Alyssa C. P.; Williamson, Paul; Young, Robyn L.

    2012-01-01

    Research has shown that individuals with Autism Spectrum Disorders (ASD) have difficulties recognising emotions from facial expressions. Since eye contact is important for accurate emotion recognition, and individuals with ASD tend to avoid eye contact, this tendency for gaze aversion has been proposed as an explanation for the emotion recognition…

  8. The Metamorphosis of Polyphemus's Gaze in Marij Pregelj's Painting (1913-1967

    Directory of Open Access Journals (Sweden)

    Jure Mikuž

    2015-04-01

    Full Text Available In 1949-1951 Marij Pregelj, one of the most interesting Slovenian modernist painters, illustrated his version of Homer's Iliad and Odyssey. His illustrations, presented in the time of socialist realist aesthetics, announce a reintegration of Slovenian art into the global (Western) context. Among the illustrations is the figure of the Cyclops devouring Odysseus' comrades. The image of the one-eyed giant Polyphemus is one which concerned Pregelj all his life: the painter, whose vocation is most dependent on the gaze, can show only one eye in profile. And the profiles of others' faces and of his own face interested Pregelj his whole life through. Not only people but also objects were one-eyed: the rosette of a cathedral, which changes into a human figure, a washing machine door, a meat grinder's orifice, a blind “windeye” or window, and so on. The themes of his final two paintings, which he executed but did not complete more than a year before his senseless death at the age of 54, are Polyphemus and the Portrait of His Son Vasko. In the first, blood flows from the pricked-out eye towards a stylized camera; in the second, the gaze of the son, an enthusiastic filmmaker, extends to the camera that will displace the father's brush.

  9. Gaze stabilization reflexes in the mouse: New tools to study vision and sensorimotor

    NARCIS (Netherlands)

    B. van Alphen (Bart)

    2010-01-01

    Gaze stabilization reflexes are a popular model system in neuroscience for connecting neurophysiology and behavior as well as studying the neural correlates of behavioral plasticity. These compensatory eye movements are one of the simplest motor behaviors,

  10. Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting.

    Science.gov (United States)

    Atabaki, Artin; Marciniak, Karolina; Dicke, Peter W; Thier, Peter

    2015-07-01

    Despite the ecological importance of gaze following, little is known about the underlying neuronal processes which allow us to extract gaze direction from the geometric features of the eye and head of a conspecific. In order to understand the neuronal mechanisms underlying this ability, a careful description of the capacity and the limitations of gaze following at the behavioral level is needed. Previous studies of gaze following that relied on naturalistic settings have the disadvantage of allowing only very limited control of potentially relevant visual features guiding gaze following, such as the contrast of iris and sclera or the shape of the eyelids, and, in the case of photographs, they lack depth. Hence, in order to gain full control of potentially relevant features, we decided to study gaze following of human observers guided by the gaze of a human avatar seen stereoscopically. To this end we established a stereoscopic 3D virtual reality setup in which we tested human subjects' ability to detect which target a human avatar was looking at. Following the gaze of the avatar showed all the features of the gaze following of a natural person, namely a substantial degree of precision associated with a consistent pattern of systematic deviations from the target. Poor stereo vision affected performance surprisingly little (only in certain experimental conditions). Only gaze following guided by targets at larger downward eccentricities exhibited a differential effect of the presence or absence of accompanying movements of the avatar's eyelids and eyebrows. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Trait Anxiety Impacts the Perceived Gaze Direction of Fearful But Not Angry Faces

    Directory of Open Access Journals (Sweden)

    Zhonghua Hu

    2017-07-01

    Full Text Available Facial expression and gaze direction play an important role in social communication. Previous research has demonstrated that the perception of anger is enhanced by direct gaze, whereas it is unclear whether the perception of fear is enhanced by averted gaze. In addition, previous research has shown that anxiety affects the processing of facial expression and gaze direction, but has not measured or controlled for depression. As a result, firm conclusions cannot be made regarding the impact of individual differences in anxiety and depression on perceptions of facial expressions and gaze direction. The current study attempted to reexamine the effect of anxiety level on the processing of facial expressions and gaze direction by matching participants on depression scores. A reliable psychophysical index of the range of eye gaze angles judged as being directed at oneself [the cone of direct gaze (CoDG)] was used as the dependent variable in this study. Participants were stratified into high/low trait anxiety groups and asked to judge the gaze of angry, fearful, and neutral faces across a range of gaze directions. The results showed: (1) the perception of gaze direction was influenced by facial expression, and this was modulated by trait anxiety. For the high trait anxiety group, the CoDG for angry expressions was wider than for fearful and neutral expressions, and no significant difference emerged between fearful and neutral expressions; for the low trait anxiety group, the CoDG for both angry and fearful expressions was wider than for neutral, and no significant difference emerged between angry and fearful expressions. (2) Trait anxiety modulated the perception of gaze direction only in the fearful condition, such that the fearful CoDG for the high trait anxiety group was narrower than for the low trait anxiety group. This demonstrated that anxiety distinctly affected gaze perception in expressions that convey threat (angry, fearful), such that a high trait anxiety
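
    The CoDG index used here is, in essence, the span of gaze angles for which observers report being looked at. A coarse way to compute it from response proportions is sketched below with toy numbers; published studies typically fit psychometric functions to each flank of the curve rather than applying a raw threshold.

    ```python
    import numpy as np

    def codg_width(angles_deg, p_directed, threshold=0.5):
        """Width of the cone of direct gaze: the span of tested gaze angles
        whose proportion of 'looking at me' responses reaches the threshold.
        A coarse estimate for illustration only."""
        angles = np.asarray(angles_deg, dtype=float)
        p = np.asarray(p_directed, dtype=float)
        above = angles[p >= threshold]
        return above.max() - above.min() if above.size else 0.0

    # Toy data: gaze rotated from 8 deg left to 8 deg right in 2-deg steps.
    angles = np.arange(-8, 9, 2)
    p = [0.0, 0.1, 0.3, 0.7, 0.95, 0.8, 0.4, 0.1, 0.0]
    width = codg_width(angles, p)  # -> 4.0 deg with these toy numbers
    ```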

  12. Influence of Gaze Direction on Face Recognition: A Sensitive Effect

    Directory of Open Access Journals (Sweden)

    Noémy Daury

    2011-08-01

    Full Text Available This study was aimed at determining the conditions in which eye-contact may improve recognition memory for faces. Different stimuli and procedures were tested in four experiments. The effect of gaze direction on memory was found when a simple “yes-no” recognition task was used, but not when the recognition task was more complex (e.g., including “Remember-Know” judgements, cf. Experiment 2, or confidence ratings, cf. Experiment 4). Moreover, even when a “yes-no” recognition paradigm was used, the effect occurred with one series of stimuli (cf. Experiment 1) but not with another one (cf. Experiment 3). The difficulty of producing the positive effect of gaze direction on memory is discussed.

  13. Adaptive Gaze Strategies for Locomotion with Constricted Visual Field

    Directory of Open Access Journals (Sweden)

    Colas N. Authié

    2017-07-01

    Full Text Available In retinitis pigmentosa (RP), loss of peripheral visual field accounts for most difficulties encountered in visuo-motor coordination during locomotion. The purpose of this study was to accurately assess the impact of peripheral visual field loss on gaze strategies during locomotion, and identify compensatory mechanisms. Nine RP subjects presenting a central visual field limited to 10–25° in diameter, and nine healthy subjects were asked to walk in one of three directions—straight ahead to a visual target, leftward and rightward through a door frame, with or without obstacle on the way. Whole body kinematics were recorded by motion capture, and gaze direction in space was reconstructed using an eye-tracker. Changes in gaze strategies were identified in RP subjects, including extensive exploration prior to walking, frequent fixations of the ground (even knowing no obstacle was present), of door edges, essentially of the proximal one, of obstacle edge/corner, and alternating door edges fixations when approaching the door. This was associated with more frequent, sometimes larger rapid-eye-movements, larger movements, and forward tilting of the head. Despite the visual handicap, the trajectory geometry was identical between groups, with a small decrease in walking speed in RPs. These findings identify the adaptive changes in sensory-motor coordination, in order to ensure visual awareness of the surrounding, detect changes in spatial configuration, collect information for self-motion, update the postural reference frame, and update egocentric distances to environmental objects. They are of crucial importance for the design of optimized rehabilitation procedures.

  14. Adaptive Gaze Strategies for Locomotion with Constricted Visual Field

    Science.gov (United States)

    Authié, Colas N.; Berthoz, Alain; Sahel, José-Alain; Safran, Avinoam B.

    2017-01-01

    In retinitis pigmentosa (RP), loss of peripheral visual field accounts for most difficulties encountered in visuo-motor coordination during locomotion. The purpose of this study was to accurately assess the impact of peripheral visual field loss on gaze strategies during locomotion, and identify compensatory mechanisms. Nine RP subjects presenting a central visual field limited to 10–25° in diameter, and nine healthy subjects were asked to walk in one of three directions—straight ahead to a visual target, leftward and rightward through a door frame, with or without obstacle on the way. Whole body kinematics were recorded by motion capture, and gaze direction in space was reconstructed using an eye-tracker. Changes in gaze strategies were identified in RP subjects, including extensive exploration prior to walking, frequent fixations of the ground (even knowing no obstacle was present), of door edges, essentially of the proximal one, of obstacle edge/corner, and alternating door edges fixations when approaching the door. This was associated with more frequent, sometimes larger rapid-eye-movements, larger movements, and forward tilting of the head. Despite the visual handicap, the trajectory geometry was identical between groups, with a small decrease in walking speed in RPs. These findings identify the adaptive changes in sensory-motor coordination, in order to ensure visual awareness of the surrounding, detect changes in spatial configuration, collect information for self-motion, update the postural reference frame, and update egocentric distances to environmental objects. They are of crucial importance for the design of optimized rehabilitation procedures. PMID:28798674

  15. There is more to gaze than meets the eye: How animals perceive the visual behaviour of others

    NARCIS (Netherlands)

    Goossens, B.M.A.

    2008-01-01

    Gaze following and the ability to understand that another individual sees something different from oneself are considered important components of human and animal social cognition. In animals, gaze following has been documented in various species; however, the underlying cognitive mechanisms and the

  16. Gaze-Stabilizing Central Vestibular Neurons Project Asymmetrically to Extraocular Motoneuron Pools.

    Science.gov (United States)

    Schoppik, David; Bianco, Isaac H; Prober, David A; Douglass, Adam D; Robson, Drew N; Li, Jennifer M B; Greenwood, Joel S F; Soucy, Edward; Engert, Florian; Schier, Alexander F

    2017-11-22

    Within reflex circuits, specific anatomical projections allow central neurons to relay sensations to effectors that generate movements. A major challenge is to relate anatomical features of central neural populations, such as asymmetric connectivity, to the computations the populations perform. To address this problem, we mapped the anatomy, modeled the function, and discovered a new behavioral role for a genetically defined population of central vestibular neurons in rhombomeres 5-7 of larval zebrafish. First, we found that neurons within this central population project preferentially to motoneurons that move the eyes downward. Concordantly, when the entire population of asymmetrically projecting neurons was stimulated collectively, only downward eye rotations were observed, demonstrating a functional correlate of the anatomical bias. When these neurons were ablated, fish failed to rotate their eyes following either nose-up or nose-down body tilts. This asymmetrically projecting central population thus participates in both upward and downward gaze stabilization. In addition to projecting to motoneurons, central vestibular neurons also receive direct sensory input from peripheral afferents. To infer whether asymmetric projections can facilitate sensory encoding or motor output, we modeled differentially projecting sets of central vestibular neurons. Whereas motor command strength was independent of projection allocation, asymmetric projections enabled more accurate representation of nose-up stimuli. The model shows how asymmetric connectivity could enhance the representation of imbalance during nose-up postures while preserving gaze stabilization performance. Finally, we found that central vestibular neurons were necessary for a vital behavior requiring maintenance of a nose-up posture: swim bladder inflation. These observations suggest that asymmetric connectivity in the vestibular system facilitates representation of ethologically relevant stimuli without

  17. Intentional gaze shift to neglected space: a compensatory strategy during recovery after unilateral spatial neglect.

    Science.gov (United States)

    Takamura, Yusaku; Imanishi, Maho; Osaka, Madoka; Ohmatsu, Satoko; Tominaga, Takanori; Yamanaka, Kentaro; Morioka, Shu; Kawashima, Noritaka

    2016-11-01

    Unilateral spatial neglect is a common neurological syndrome following predominantly right hemispheric stroke. While most patients lack insight into their neglect behaviour and do not initiate compensatory behaviours in the early recovery phase, some patients recognize it and start to pay attention towards the neglected space. We aimed to characterize visual attention capacity in patients with unilateral spatial neglect with specific focus on cortical processes underlying compensatory gaze shift towards the neglected space during the recovery process. Based on the Behavioural Inattention Test score and presence or absence of experience of neglect in their daily life from stroke onset to the enrolment date, participants were divided into USN++ (do not compensate, n = 15), USN+ (compensate, n = 10), and right hemisphere damage groups (no neglect, n = 24). The patients participated in eye pursuit-based choice reaction tasks and were asked to pursue one of five horizontally located circular objects flashed on a computer display. The task consisted of 25 trials with 4-s intervals, and the order of highlighted objects was randomly determined. From the recorded eye tracking data, eye movement onset and gaze shift were calculated. To elucidate the cortical mechanism underlying behavioural results, electroencephalogram activities were recorded in three USN++, 13 USN+ and eight patients with right hemisphere damage. We found that while lower Behavioural Inattention Test scoring patients (USN++) showed gaze shift to non-neglected space, some higher scoring patients (USN+) showed clear leftward gaze shift at visual stimuli onset. Moreover, we found a significant correlation between Behavioural Inattention Test score and gaze shift extent in the unilateral spatial neglect group (r = -0.62, P attention to the neglected space) and its neural correlates in patients with unilateral spatial neglect. In conclusion, patients with unilateral spatial neglect who recognized

  18. I Reach Faster When I See You Look: Gaze Effects in Human–Human and Human–Robot Face-to-Face Cooperation

    Science.gov (United States)

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human–human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human–human cooperation experiment demonstrating that an agent’s vision of her/his partner’s gaze can significantly improve that agent’s performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human–robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human–robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times. PMID:22563315

  19. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.

    Science.gov (United States)

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.

  20. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. An important issue is therefore how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as prior work on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how to best relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.
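
    The gaze-based triggering discussed in this record amounts to (re)initializing a tracker on the image region around the operator's point of regard, both at first start and after track loss. The sketch below uses OpenCV's CSRT tracker as a stand-in for the system's own algorithm; depending on the OpenCV build, the constructor may be cv2.TrackerCSRT.create() instead.

    ```python
    import cv2

    def launch_tracker_at_gaze(frame, gaze_xy, box_size=48):
        """Initialise a tracker on the region around the operator's gaze.

        Used both for the first start and for relaunching after track loss.
        The fixed box size is an illustrative simplification; the real system
        would derive the region from motion segmentation where available.
        """
        half = box_size // 2
        x0 = max(0, int(gaze_xy[0]) - half)
        y0 = max(0, int(gaze_xy[1]) - half)
        tracker = cv2.TrackerCSRT_create()
        tracker.init(frame, (x0, y0, box_size, box_size))
        return tracker

    # Per frame:  ok, box = tracker.update(frame)
    # On loss:    tracker = launch_tracker_at_gaze(frame, current_gaze)
    ```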

  1. Clinical Approach to Supranuclear Brainstem Saccadic Gaze Palsies

    Directory of Open Access Journals (Sweden)

    Alexandra Lloyd-Smith Sequeira

    2017-08-01

    Full Text Available Failure of brainstem supranuclear centers for saccadic eye movements results in the clinical presence of a brainstem-mediated supranuclear saccadic gaze palsy (SGP), which is manifested as slowing of saccades with or without range of motion limitation of eye movements and as loss of quick phases of optokinetic nystagmus. Limitation in the range of motion of eye movements is typically worse with saccades than with smooth pursuit and is overcome with vestibulo-ocular reflexive eye movements. The differential diagnosis of SGPs is broad, although acute-onset SGP is most often from brainstem infarction and chronic vertical SGP is most commonly caused by the neurodegenerative condition progressive supranuclear palsy. In this review, we discuss the anatomy and physiology of the brainstem saccade-generating network; we discuss the clinical features of SGPs, with an emphasis on insights from quantitative ocular motor recordings; and we consider the broad differential diagnosis of SGPs.

  2. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity--Evidence from Gazing Patterns.

    Directory of Open Access Journals (Sweden)

    Sanni Somppi

    Full Text Available Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore if domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly and this evaluation led to attentional bias, which was dependent on the depicted species: threatening conspecifics' faces evoked heightened attention but threatening human faces instead an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel

  3. State-dependent sensorimotor processing: gaze and posture stability during simulated flight in birds.

    Science.gov (United States)

    McArthur, Kimberly L; Dickman, J David

    2011-04-01

    Vestibular responses play an important role in maintaining gaze and posture stability during rotational motion. Previous studies suggest that these responses are state dependent, their expression varying with the environmental and locomotor conditions of the animal. In this study, we simulated an ethologically relevant state in the laboratory to study state-dependent vestibular responses in birds. We used frontal airflow to simulate gliding flight and measured pigeons' eye, head, and tail responses to rotational motion in darkness, under both head-fixed and head-free conditions. We show that both eye and head response gains are significantly higher during flight, thus enhancing gaze and head-in-space stability. We also characterize state-specific tail responses to pitch and roll rotation that would help to maintain body-in-space orientation during flight. These results demonstrate that vestibular sensorimotor processing is not fixed but depends instead on the animal's behavioral state.

  4. Examining the durability of incidentally learned trust from gaze cues.

    Science.gov (United States)

    Strachan, James W A; Tipper, Steven P

    2017-10-01

    In everyday interactions we find our attention follows the eye gaze of faces around us. As this cueing is so powerful and difficult to inhibit, gaze can therefore be used to facilitate or disrupt visual processing of the environment, and when we experience this we infer information about the trustworthiness of the cueing face. However, to date no studies have investigated how long these impressions last. To explore this we used a gaze-cueing paradigm where faces consistently demonstrated either valid or invalid cueing behaviours. Previous experiments show that valid faces are subsequently rated as more trustworthy than invalid faces. We replicate this effect (Experiment 1) and then include a brief interference task in Experiment 2 between gaze cueing and trustworthiness rating, which weakens but does not completely eliminate the effect. In Experiment 3, we explore whether greater familiarity with the faces improves the durability of trust learning and find that the effect is more resilient with familiar faces. Finally, in Experiment 4, we push this further and show that evidence of trust learning can be seen up to an hour after cueing has ended. Taken together, our results suggest that incidentally learned trust can be durable, especially for faces that deceive.

  5. Gaze Toward Naturalistic Social Scenes by Individuals With Intellectual and Developmental Disabilities: Implications for Augmentative and Alternative Communication Designs.

    Science.gov (United States)

    Liang, Jiali; Wilkinson, Krista

    2018-04-18

    A striking characteristic of the social communication deficits in individuals with autism is atypical patterns of eye contact during social interactions. We used eye-tracking technology to evaluate how the number of human figures depicted and the presence of sharing activity between the human figures in still photographs influenced visual attention by individuals with autism, typical development, or Down syndrome. We sought to examine visual attention to the contents of visual scene displays, a growing form of augmentative and alternative communication support. Eye-tracking technology recorded point-of-gaze while participants viewed 32 photographs in which either 2 or 3 human figures were depicted. Sharing activities between these human figures are either present or absent. The sampling rate was 60 Hz; that is, the technology gathered 60 samples of gaze behavior per second, per participant. Gaze behaviors, including latency to fixate and time spent fixating, were quantified. The overall gaze behaviors were quite similar across groups, regardless of the social content depicted. However, individuals with autism were significantly slower than the other groups in latency to first view the human figures, especially when there were 3 people depicted in the photographs (as compared with 2 people). When participants' own viewing pace was considered, individuals with autism resembled those with Down syndrome. The current study supports the inclusion of social content with various numbers of human figures and sharing activities between human figures into visual scene displays, regardless of the population served. Study design and reporting practices in eye-tracking literature as it relates to autism and Down syndrome are discussed. https://doi.org/10.23641/asha.6066545.

  6. The influence of social and symbolic cues on observers' gaze behaviour.

    Science.gov (United States)

    Hermens, Frouke; Walker, Robin

    2016-08-01

    Research has shown that social and symbolic cues presented in isolation and at fixation have strong effects on observers, but it is unclear how cues compare when they are presented away from fixation and embedded in natural scenes. We here compare the effects of two types of social cue (gaze and pointing gestures) and one type of symbolic cue (arrow signs) on eye movements of observers under two viewing conditions (free viewing vs. a memory task). The results suggest that social cues are looked at more quickly, for longer and more frequently than the symbolic arrow cues. An analysis of saccades initiated from the cue suggests that the pointing cue leads to stronger cueing than the gaze and the arrow cue. While the task had only a weak influence on gaze orienting to the cues, stronger cue following was found for free viewing compared to the memory task. © 2015 The British Psychological Society.

  7. Eye contact facilitates awareness of faces during interocular suppression

    NARCIS (Netherlands)

    Stein, T.; Senju, A.; Peelen, M.V.; Sterzer, P.

    Eye contact captures attention and receives prioritized visual processing. Here we asked whether eye contact might be processed outside conscious awareness. Faces with direct and averted gaze were rendered invisible using interocular suppression. In two experiments we found that faces with direct

  8. Look at my poster! Active gaze, preference and memory during a poster session.

    Science.gov (United States)

    Foulsham, Tom; Kingstone, Alan

    2011-01-01

    In science, as in advertising, people often present information on a poster, yet little is known about attention during a poster session. A mobile eye-tracker was used to record participants' gaze during a mock poster session featuring a range of academic psychology posters. Participants spent the most time looking at introductions and conclusions. Larger posters were looked at for longer, as were posters rated more interesting (but not necessarily more aesthetically pleasing). Interestingly, gaze did not correlate with memory for poster details or liking, suggesting that attracting someone towards your poster may not be enough.

  9. Optics of the human cornea influence the accuracy of stereo eye-tracking methods: a simulation study.

    Science.gov (United States)

    Barsingerhorn, A D; Boonstra, F N; Goossens, H H L M

    2017-02-01

    Current stereo eye-tracking methods model the cornea as a sphere with one refractive surface. However, the human cornea is slightly aspheric and has two refractive surfaces. Here we used ray-tracing and the Navarro eye-model to study how these optical properties affect the accuracy of different stereo eye-tracking methods. We found that pupil size, gaze direction and head position all influence the reconstruction of gaze. Resulting errors range between ± 1.0 degrees at best. This shows that stereo eye-tracking may be an option if reliable calibration is not possible, but the applied eye-model should account for the actual optics of the cornea.
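
    Ray-tracing a model eye such as Navarro's repeatedly applies vector-form Snell refraction at each optical surface. The sketch below shows that single step; the indices and geometry in the example call are illustrative (roughly 1.376 for the cornea), not the full two-surface aspheric model of the study.

    ```python
    import numpy as np

    def refract(d, n, n1, n2):
        """Refract unit ray direction d at a surface with unit normal n
        (oriented against the incoming ray), passing from index n1 to n2.
        Vector form of Snell's law; returns None on total internal reflection.
        """
        d = np.asarray(d, dtype=float)
        n = np.asarray(n, dtype=float)
        eta = n1 / n2
        cos_i = -np.dot(n, d)
        sin2_t = eta**2 * (1.0 - cos_i**2)
        if sin2_t > 1.0:
            return None  # total internal reflection
        return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

    # Air-to-cornea at 30 degrees incidence (corneal index ~1.376):
    d_in = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
    d_out = refract(d_in, np.array([0.0, 0.0, 1.0]), 1.0, 1.376)
    ```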

  10. Gaze beats mouse

    DEFF Research Database (Denmark)

    Mateo, Julio C.; San Agustin, Javier; Hansen, John Paulin

    2008-01-01

    Facial EMG for selection is fast, easy and, combined with gaze pointing, it can provide completely hands-free interaction. In this pilot study, 5 participants performed a simple point-and-select task using mouse or gaze for pointing and a mouse button or a facial-EMG switch for selection. Gaze...

  11. Quantitative Measurement of Eyestrain on 3D Stereoscopic Display Considering the Eye Foveation Model and Edge Information

    Directory of Open Access Journals (Sweden)

    Hwan Heo

    2014-05-01

    Full Text Available We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye tracking device. Our study is novel in the following four ways: first, the circular area where a user’s gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we determine this position as the gaze position that has a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with or without the compensation of saccadic eye movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than other factors.
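
    The first step of the proposed method, relocating the gaze estimate to the strongest edge inside the circle defined by the estimated gaze position and its error, can be written compactly with a Sobel edge map. The sketch below is an illustration under that reading of the record; the parameter names are invented for the example.

    ```python
    import numpy as np
    from scipy import ndimage

    def refine_gaze(gray, gaze_xy, radius_px):
        """Move a gaze estimate to the pixel of maximal edge strength inside
        the circle given by the estimate and its error radius (in pixels)."""
        gx = ndimage.sobel(gray.astype(float), axis=1)
        gy = ndimage.sobel(gray.astype(float), axis=0)
        edge = np.hypot(gx, gy)                      # edge-strength map
        ys, xs = np.ogrid[:gray.shape[0], :gray.shape[1]]
        inside = (xs - gaze_xy[0])**2 + (ys - gaze_xy[1])**2 <= radius_px**2
        edge[~inside] = -1.0                         # exclude pixels outside
        y, x = np.unravel_index(np.argmax(edge), edge.shape)
        return x, y                                  # refined gaze position
    ```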

  12. Simulating interaction: Using gaze-contingent eye-tracking to measure the reward value of social signals in toddlers with and without autism

    Directory of Open Access Journals (Sweden)

    Angelina Vernetti

    2018-01-01

    Full Text Available Several accounts have been proposed to explain difficulties with social interaction in autism spectrum disorder (ASD), amongst which atypical social orienting, decreased social motivation, or difficulties with understanding the regularities driving social interaction. This study uses gaze-contingent eye-tracking to tease apart these accounts by measuring reward-related behaviours in response to different social videos. Toddlers at high or low familial risk for ASD took part in this study at age 2 and were categorised at age 3 as low-risk controls (LR), high-risk with no ASD diagnosis (HR-no ASD), or with a diagnosis of ASD (HR-ASD). When the on-demand social interaction was predictable, all groups, including the HR-ASD group, looked longer and smiled more towards a person greeting them compared to a mechanical toy (Condition 1) and also smiled more towards a communicative over a non-communicative person (Condition 2). However, all groups except the HR-ASD group selectively oriented towards a person addressing the child in different ways over an invariant social interaction (Condition 3). These findings suggest that social interaction is intrinsically rewarding for individuals with ASD, but the extent to which it is sought may be modulated by the specific variability of naturalistic social interaction. Keywords: Social orienting, Social motivation, Unpredictability, Autism spectrum disorder, High-risk siblings, Gaze-contingency

  13. Effect of narrowing the base of support on the gait, gaze and quiet eye of elite ballet dancers and controls.

    Science.gov (United States)

    Panchuk, Derek; Vickers, Joan N

    2011-08-01

    We determined the gaze and stepping behaviours of elite ballet dancers and controls as they walked normally and along progressively narrower 3-m lines (10.0, 2.5 cm). The ballet dancers delayed the first step and then stepped more quickly through the approach area and onto the lines, which they exited more slowly than the controls, who stepped immediately, slowed their gait to navigate the line, and exited it faster. Contrary to predictions, the ballet group did not step more precisely, perhaps due to the unique anatomical requirements of ballet dance and/or due to releasing the degrees of freedom under their feet as they fixated ahead more than the controls did. The ballet group used significantly fewer fixations of longer duration, and their final quiet eye (QE) duration prior to stepping on the line was significantly longer (2,353.39 ms) than that of the controls (1,327.64 ms). The control group favoured a proximal gaze strategy, allocating 73.33% of their QE fixations to the line/off the line and 26.66% to the exit/visual straight ahead (VSA), while the ballet group favoured a 'look-ahead' strategy, allocating 55.49% of their QE fixations to the exit/VSA and 44.51% to the line/off the line. The results are discussed in light of the development of expertise and the enhanced role of fixations and visual attention as tasks become more constrained.

  14. Exploration of Computer Game Interventions in Improving Gaze Following Behavior in Children with Autism Spectrum Disorders

    OpenAIRE

    Kane, Jessi Lynn

    2011-01-01

    Statistics show the prevalence of autism spectrum disorder (ASD), a developmental delay disorder, is now 1 in 110 children in the United States (Rice, 2009), nearing 1% of the population. Therefore, this study looked at ways modern technology could assist these children and their families. One deficit in ASD is the inability to respond to gaze referencing (i.e. follow the eye gaze of another adult/child/etc), a correlate of the responding to joint attention (RJA) process. This not only aff...

  15. Evaluation of a remote webcam-based eye tracker

    DEFF Research Database (Denmark)

    Skovsgaard, Henrik; Agustin, Javier San; Johansen, Sune Alstrup

    2011-01-01

    In this paper we assess the performance of an open-source gaze tracker in a remote (i.e. table-mounted) setup, and compare it with two other commercial eye trackers. An experiment with 5 subjects showed the open-source eye tracker to have a significantly higher level of accuracy than one...

  16. Spatial Coding of Eye Movements Relative to Perceived Orientations During Roll Tilt with Different Gravitoinertial Loads

    Science.gov (United States)

    Wood, Scott; Clement, Gilles

    2013-01-01

    The purpose of this study was to examine the spatial coding of eye movements during roll tilt relative to perceived orientations while free-floating during the microgravity phase of parabolic flight or during head tilt in normal gravity. Binocular videographic recordings obtained in darkness from six subjects allowed us to quantify the mean deviations in gaze trajectories along both horizontal and vertical coordinates relative to the aircraft and head orientations. Both variability and curvature of gaze trajectories increased during roll tilt compared to the upright position. The saccades were less accurate during parabolic flight compared to measurements obtained in normal gravity. The trajectories of saccades along perceived horizontal orientations tended to deviate in the same direction as the head tilt, while the deviations in gaze trajectories along the perceived vertical orientations deviated in the opposite direction relative to the head tilt. Although subjects were instructed to look off in the distance while performing the eye movements, fixation distance varied with vertical gaze direction independent of whether the saccades were made along perceived aircraft or head orientations. This coupling of horizontal vergence with vertical gaze is consistent in direction with the vertical slant of the horopter. The increased errors in gaze trajectories along both perceived orientations during microgravity can be attributed to the otolith's role in spatial coding of eye movements.

  17. Attention to the Mouth and Gaze Following in Infancy Predict Language Development

    Science.gov (United States)

    Tenenbaum, Elena J.; Sobel, David M.; Sheinkopf, Stephen J.; Malle, Bertram F.; Morgan, James L.

    2015-01-01

    We investigated longitudinal relations among gaze following and face scanning in infancy and later language development. At 12 months, infants watched videos of a woman describing an object while their passive viewing was measured with an eye-tracker. We examined the relation between infants' face scanning behavior and their tendency to follow the…

  18. Neural activity in the posterior superior temporal region during eye contact perception correlates with autistic traits.

    Science.gov (United States)

    Hasegawa, Naoya; Kitamura, Hideaki; Murakami, Hiroatsu; Kameyama, Shigeki; Sasagawa, Mutsuo; Egawa, Jun; Endo, Taro; Someya, Toshiyuki

    2013-08-09

    The present study investigated the relationship between neural activity associated with gaze processing and autistic traits in typically developed subjects using magnetoencephalography. Autistic traits in 24 typically developed college students with normal intelligence were assessed using the Autism Spectrum Quotient (AQ). The Minimum Current Estimates method was applied to estimate the cortical sources of magnetic responses to gaze stimuli. These stimuli consisted of apparent motion of the eyes, displaying direct or averted gaze motion. Results revealed gaze-related brain activations in the 150-250 ms time window in the right posterior superior temporal sulcus (pSTS), and in the 150-450 ms time window in medial prefrontal regions. In addition, the mean amplitude in the 150-250 ms time window in the right pSTS region was modulated by gaze direction, and its activity in response to direct gaze stimuli correlated with AQ score. pSTS activation in response to direct gaze is thought to be related to higher-order social processes. Thus, these results suggest that brain activity linking eye contact and social signals is associated with autistic traits in a typical population. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  19. Cultural differences in gaze and emotion recognition: Americans contrast more than Chinese.

    Science.gov (United States)

    Stanley, Jennifer Tehan; Zhang, Xin; Fung, Helene H; Isaacowitz, Derek M

    2013-02-01

    We investigated the influence of contextual expressions on emotion recognition accuracy and gaze patterns among American and Chinese participants. We expected Chinese participants would be more influenced by, and attend more to, contextual information than Americans. Consistent with our hypothesis, Americans were more accurate than Chinese participants at recognizing emotions embedded in the context of other emotional expressions. Eye-tracking data suggest that, for some emotions, Americans attended more to the target faces, and they made more gaze transitions to the target face than Chinese. For all emotions except anger and disgust, Americans appeared to use more of a contrasting strategy where each face was individually contrasted with the target face, compared with Chinese who used less of a contrasting strategy. Both cultures were influenced by contextual information, although the benefit of contextual information depended upon the perceptual dissimilarity of the contextual emotions to the target emotion and the gaze pattern employed during the recognition task. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  20. In the Eye of the Beholder-  A Survey of Models for Eyes and Gaze

    DEFF Research Database (Denmark)

    Witzner Hansen, Dan; Ji, Qiang

    2010-01-01

    Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications and...

  1. Looking for Action: Talk and Gaze Home Position in the Airline Cockpit

    Science.gov (United States)

    Nevile, Maurice

    2010-01-01

    This paper considers the embodied nature of discourse for a professional work setting. It examines language in interaction in the airline cockpit, and specifically how shifts in pilots' eye gaze direction can indicate the action of talk, that is, what talk is doing and its relative contribution to work-in-progress. Looking towards the other…

  2. New perspectives in gaze sensitivity research.

    Science.gov (United States)

    Davidson, Gabrielle L; Clayton, Nicola S

    2016-03-01

    Attending to where others are looking is thought to be of great adaptive benefit for animals when avoiding predators and interacting with group members. Many animals have been reported to respond to the gaze of others, by co-orienting their gaze with group members (gaze following) and/or responding fearfully to the gaze of predators or competitors (i.e., gaze aversion). Much of the literature has focused on the cognitive underpinnings of gaze sensitivity, namely whether animals have an understanding of the attention and visual perspectives in others. Yet there remain several unanswered questions regarding how animals learn to follow or avoid gaze and how experience may influence their behavioral responses. Many studies on the ontogeny of gaze sensitivity have shed light on how and when gaze abilities emerge and change across development, indicating the necessity to explore gaze sensitivity when animals are exposed to additional information from their environment as adults. Gaze aversion may be dependent upon experience and proximity to different predator types, other cues of predation risk, and the salience of gaze cues. Gaze following in the context of information transfer within social groups may also be dependent upon experience with group-members; therefore we propose novel means to explore the degree to which animals respond to gaze in a flexible manner, namely by inhibiting or enhancing gaze following responses. We hope this review will stimulate gaze sensitivity research to expand beyond the narrow scope of investigating underlying cognitive mechanisms, and to explore how gaze cues may function to communicate information other than attention.

  3. EyeFrame: Real-time memory aid improves human multitasking via domain-general eye tracking procedures

    Directory of Open Access Journals (Sweden)

    P. Taylor

    2015-09-01

    Full Text Available OBJECTIVE: We developed an extensively general closed-loop system to improve human interaction in various multitasking scenarios with semi-autonomous agents, processes, and robots. BACKGROUND: Much technology is converging toward semi-independent processes with intermittent human supervision distributed over multiple computerized agents. Human operators multitask notoriously poorly, in part due to cognitive load and limited working memory. To multitask optimally, users must keep track of task order, e.g., which task is most neglected, since longer times not monitoring an element indicate a greater probability of need for user input. The secondary task of monitoring attention history over multiple spatial tasks requires similar cognitive resources as the primary tasks themselves. Humans cannot reliably make more than ~2 decisions/s. METHODS: Participants managed a range of 4-10 semi-autonomous agents performing rescue tasks. To optimize monitoring and controlling multiple agents, we created an automated short-term memory aid, providing visual cues from users' gaze history. Cues indicated when and where to look next, and were derived from an inverse of eye fixation recency. RESULTS: Contingent eye tracking algorithms drastically improved operator performance, increasing multitasking capacity. The gaze aid reduced biases and reduced cognitive load, as measured by smaller pupil dilation. CONCLUSIONS: Our eye aid likely helped by delegating short-term memory to the computer and by reducing decision-making load. Past studies used eye position for gaze-aware control and interactive updating of displays in application-specific scenarios, but ours is the first to successfully implement domain-general algorithms. Procedures should generalize well to: process control, factory operations, robot control, surveillance, aviation, air traffic control, driving, military, mobile search and rescue, and many tasks where probability of utility is predicted by duration since last
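
    The "inverse of eye fixation recency" cue lends itself to a compact sketch. The record does not give the authors' algorithm, so the names and the simple most-neglected-first rule below are assumptions:

      import time

      def update_last_fixation(last_fixated, agent_id):
          """Record when the operator's gaze last landed on an agent's region."""
          last_fixated[agent_id] = time.monotonic()

      def next_cue(last_fixated):
          """Cue the agent fixated least recently (the most neglected one)."""
          now = time.monotonic()
          return max(last_fixated, key=lambda a: now - last_fixated[a])

      # Usage: seed all agents at session start, update on each fixation event.
      last_fixated = {agent: time.monotonic() for agent in range(4)}
      update_last_fixation(last_fixated, 2)   # operator looks at agent 2
      print(next_cue(last_fixated))           # points at a neglected agent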

  4. Gaze-Based Controlling a Vehicle

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Witzner Hansen, Dan

    Gaze interaction may become an important input modality if gaze trackers are embedded into head-mounted devices. The domain of gaze-based interactive applications increases dramatically as interaction is no longer constrained to 2D displays. This paper proposes a general framework for gaze-based controlling of a non-stationary robot (vehicle) as an example of a complex gaze-based task in the environment. The paper discusses the possibilities and limitations of how gaze interaction can be performed for controlling vehicles, not only using a remote gaze tracker but also in generally challenging situations where the user and robot are mobile and the movements may be governed by several degrees of freedom (e.g. flying). A case study is also introduced in which a mobile gaze tracker is used for controlling a Roomba vacuum cleaner.

  5. The head tracks and gaze predicts: how the world's best batters hit the ball

    NARCIS (Netherlands)

    Mann, D.L.; Spratford, W.; Abernethy, B.

    2013-01-01

    Hitters in fast ball-sports do not align their gaze with the ball throughout ball-flight; rather, they use predictive eye movement strategies that contribute towards their level of interceptive skill. Existing studies claim that (i) baseball and cricket batters cannot track the ball because it moves

  6. A novel device for head gesture measurement system in combination with eye-controlled human machine interface

    Science.gov (United States)

    Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun

    2006-06-01

    This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. The system is a highly effective human-machine interface that detects head movement from the changing positions and number of light sources on the head. When the user wears the head-mounted display to browse a computer screen, the system captures images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, a computer program locates the center point of each pupil in the images and records information on movement traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, and the system captures images of the user's head with a CCD camera in front of the user. The computer program locates the center point of the head and transfers it to screen coordinates, so the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface systems for virtual reality applications.

  7. Quantifying the cognitive cost of laparo-endoscopic single-site surgeries: Gaze-based indices.

    Science.gov (United States)

    Di Stasi, Leandro L; Díaz-Piedra, Carolina; Ruiz-Rabelo, Juan Francisco; Rieiro, Héctor; Sanchez Carrion, Jose M; Catena, Andrés

    2017-11-01

    Despite the growing interest in the laparo-endoscopic single-site surgery (LESS) procedure, LESS presents multiple difficulties and challenges that are likely to increase the surgeon's cognitive cost, in terms of both cognitive load and performance. Nevertheless, there is currently no objective index capable of assessing the surgeon's cognitive cost while performing LESS. We assessed whether gaze-based indices might offer unique and unbiased measures to quantify LESS complexity and its cognitive cost. We expect the assessment of surgeons' cognitive cost to improve patient safety by measuring fitness-for-duty and reducing surgeon overload. Using a wearable eye tracker device, we measured the gaze entropy and velocity of surgical trainees and attending surgeons during two surgical procedures (LESS vs. multiport laparoscopy surgery [MPS]). None of the participants had previous experience with LESS. They performed two exercises with different complexity levels (low: Pattern Cut vs. high: Peg Transfer). We also collected performance and subjective data. LESS caused higher cognitive demand than MPS, as indicated by increased gaze entropy in both surgical trainees and attending surgeons (the exploration pattern became more random). Furthermore, gaze velocity was higher (the exploration pattern became more rapid) for the LESS procedure independently of the surgeon's expertise. Perceived task complexity and laparoscopic accuracy confirmed the gaze-based results. Gaze-based indices have great potential as objective and non-intrusive measures to assess surgeons' cognitive cost and fitness-for-duty. Furthermore, gaze-based indices might play a relevant role in defining future guidelines on surgeons' examinations to mark their achievements during the entire training (e.g. analyzing surgical learning curves). Copyright © 2017 Elsevier Ltd. All rights reserved.
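
    The abstract does not spell out how gaze entropy was computed, so the following sketch uses one common operationalization, Shannon entropy of gaze samples binned on a spatial grid, together with a mean gaze velocity; treat both as assumed illustrations rather than the authors' definitions:

      import numpy as np

      def stationary_gaze_entropy(x, y, bins=8):
          """Shannon entropy (bits) of the spatial gaze distribution;
          higher values mean a more random exploration pattern."""
          hist, _, _ = np.histogram2d(x, y, bins=bins)
          p = hist.ravel() / hist.sum()
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())

      def mean_gaze_velocity(x, y, sample_rate_hz):
          """Mean gaze speed in screen units per second."""
          return float((np.hypot(np.diff(x), np.diff(y)) * sample_rate_hz).mean())

      rng = np.random.default_rng(0)
      x, y = rng.uniform(0, 1920, 500), rng.uniform(0, 1080, 500)
      print(stationary_gaze_entropy(x, y), mean_gaze_velocity(x, y, 60))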

  8. [Left lateral gaze paresis due to subcortical hematoma in the right precentral gyrus].

    Science.gov (United States)

    Sato, K; Takamori, M

    1998-03-01

    We report a case of transient left lateral gaze paresis due to a hemorrhagic lesion restricted in the right precentral gyrus. A 74-year-old female experienced a sudden clumsiness of the left upper extremity. A neurological examination revealed a left central facial paresis, distal dominant muscle weakness in the left upper limb and left lateral gaze paresis. There were no other focal neurological signs. Laboratory data were all normal. Brain CTs and MRIs demonstrated a subcortical hematoma in the right precentral gyrus. The neurological symptoms and signs disappeared over seven days. A recent physiological study suggested that the human frontal eye field (FEF) is located in the posterior part of the middle frontal gyrus (Brodmann's area 8) and the precentral gyrus around the precentral sulcus. More recent studies stressed the role of the precentral sulcus and the precentral gyrus. Our case supports those physiological findings. The hematoma affected both the FEF and its underlying white matter in our case. We assume the lateral gaze paresis is attributable to the disruption of the fibers from the FEF. It is likely that fibers for motor control of the face, upper extremity, and lateral gaze lie adjacently in the subcortical area.

  9. SD OCT Features of Macula and Silicon Oil–Retinal Interface in Eyes Status Post Vitrectomy for RRD

    Directory of Open Access Journals (Sweden)

    Manish Nagpal

    2015-03-01

    Full Text Available Aim: To objectively document findings at the silicone oil-retinal interface, macular status, and tamponade effect in silicone oil (SO) filled eyes using SD OCT. Methods: 104 eyes of 104 patients that had undergone vitrectomy with SO injection for rhegmatogenous retinal detachment underwent SD OCT examination with horizontal and vertical macular scans. Findings were divided into 3 groups; Group A: findings at the silicone oil-retinal interface, Group B: macular pathology, and Group C: tamponade effect. Group C was further divided into two groups; Group 1: complete tamponade, and Group 2: incomplete tamponade. Results: Group A: sub-silicone epiretinal membranes N = 17 (16.3%), emulsified silicone oil N = 16 (15.4%). Group B: foveal thickening N = 22 (21.2%), foveal thinning N = 6 (5.7%), subfoveal fluid N = 8 (7.6%), macular hole N = 2 (1.9%). Group C: incomplete tamponade was noted in N = 12 (11.5%), complete tamponade in N = 92 (88.5%). 10 out of 104 eyes (9.6%) had recurrent retinal detachment after SO removal; 8 of these eyes had complete tamponade and 2 had incomplete tamponade. Conclusion: SD OCT is a useful tool to assess the SO-retina interface, tamponade effect and macular pathology in SO filled eyes. Redetachment occurred less often in eyes with incomplete tamponade on OCT.

  10. Gaze Tracking Through Smartphones

    DEFF Research Database (Denmark)

    Skovsgaard, Henrik; Hansen, John Paulin; Møllenbach, Emilie

    Mobile gaze trackers embedded in smartphones or tablets provide a powerful personal link to game devices, head-mounted micro-displays, PCs, and TVs. This link may offer a main road to the mass market for gaze interaction, we suggest.

  11. Evaluation of Binocular Eye Trackers and Algorithms for 3D Gaze Interaction in Virtual Reality Environments

    OpenAIRE

    Thies Pfeiffer; Ipke Wachsmuth; Marc E. Latoschik

    2009-01-01

    Tracking user's visual attention is a fundamental aspect in novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces or dialogue-based communications with virtual and real agents greatly benefit from the analysis of the user's visual attention as a vital source for deictic references or turn-taking signals. Current approaches to determine visual attention rely primarily on monocular eye trackers. Hence they are restricted to the interpretation of...

  12. Reading beyond the glance: eye tracking in neurosciences.

    Science.gov (United States)

    Popa, Livia; Selejan, Ovidiu; Scott, Allan; Mureşanu, Dafin F; Balea, Maria; Rafila, Alexandru

    2015-05-01

    From an interdisciplinary approach, the neurosciences (NSs) represent the junction of many fields (biology, chemistry, medicine, computer science, and psychology) and aim to explore the structural and functional aspects of the nervous system. Among modern neurophysiological methods that "measure" different processes of the human brain in response to salient stimuli, a special place belongs to eye tracking (ET). By detecting eye position, gaze direction, sequence of eye movements and visual adaptation during cognitive activities, ET is an effective tool for experimental psychology and neurological research. It provides a quantitative and qualitative analysis of gaze, which is very useful in understanding choice behavior and perceptual decision making. In the high-tech era, ET has several applications related to the interaction between humans and computers. Herein, ET is used to evaluate the spatial orienting of attention, performance in visual tasks, reactions to information on websites, customer responses to advertising, and the emotional and cognitive impact of various stimuli on the brain.

  13. Seeing to hear? Patterns of gaze to speaking faces in children with autism spectrum disorders.

    Directory of Open Access Journals (Sweden)

    Julia Irwin

    2014-05-01

    Full Text Available Using eye-tracking methodology, gaze to a speaking face was compared in a group of children with autism spectrum disorders (ASD) and those with typical development (TD). Patterns of gaze were observed under three conditions: audiovisual (AV) speech in auditory noise, visual-only speech, and an AV non-face, non-speech control. Children with ASD looked less at the face of the speaker and fixated less on the speaker's mouth than TD controls. No differences in gaze were reported for the non-face, non-speech control task. Since the mouth holds much of the articulatory information available on the face, these findings suggest that children with ASD may have reduced access to critical linguistic information. This reduced access to visible articulatory information could be a contributor to the communication and language problems exhibited by children with ASD.

  14. Adaptive gaze stabilization through cerebellar internal models in a humanoid robot

    DEFF Research Database (Denmark)

    Vannucci, Lorenzo; Tolu, Silvia; Falotico, Egidio

    2016-01-01

    Two main classes of reflexes relying on the vestibular system are involved in the stabilization of the human gaze: the vestibulocollic reflex (VCR), which stabilizes the head in space, and the vestibulo-ocular reflex (VOR), which stabilizes the visual axis to minimize retinal image motion. The VOR works in conjunction with the opto-kinetic reflex (OKR), a visual feedback mechanism for moving the eye at the same speed as the observed scene. Together they keep the image stationary on the retina. In this work we present the first complete model of gaze stabilization based on the coordination of the VCR, VOR and OKR. The model, inspired by neuroscientific cerebellar theories, is provided with learning and adaptation capabilities based on internal models. Tests on a simulated humanoid platform confirm the effectiveness of our approach.

  15. Strabismus Recognition Using Eye-Tracking Data and Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Zenghai Chen

    2018-01-01

    Full Text Available Strabismus is one of the most common vision diseases and can cause amblyopia and even permanent vision loss. Timely diagnosis is crucial for treating strabismus well. In contrast to manual diagnosis, automatic recognition can significantly reduce labor cost and increase diagnosis efficiency. In this paper, we propose to recognize strabismus using eye-tracking data and convolutional neural networks. In particular, an eye tracker is first used to record a subject's eye movements. A gaze deviation (GaDe) image is then proposed to characterize the subject's eye-tracking data according to the accuracies of gaze points. The GaDe image is fed to a convolutional neural network (CNN) that has been trained on a large image database called ImageNet. The outputs of the fully connected layers of the CNN are used as the GaDe image's features for strabismus recognition. A dataset containing eye-tracking data from both strabismic and normal subjects was established for the experiments. Experimental results demonstrate that natural-image features can be transferred well to represent eye-tracking data, and that strabismus can be effectively recognized by our proposed method.
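
    The feature-extraction step described above can be sketched directly. The record does not name the network architecture, so an ImageNet-pretrained ResNet-18 from torchvision stands in for the CNN, and the random tensor is a placeholder for a real GaDe image:

      import torch
      import torchvision.models as models

      backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
      backbone.fc = torch.nn.Identity()        # expose penultimate-layer features
      backbone.eval()

      gade_image = torch.rand(1, 3, 224, 224)  # placeholder for a rendered GaDe image
      with torch.no_grad():
          features = backbone(gade_image)      # 512-d descriptor for a classifier
      print(features.shape)                    # torch.Size([1, 512])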

  16. I spy with my little eye: Analysis of airline pilots' gaze patterns in a manual instrument flight scenario.

    Science.gov (United States)

    Haslbeck, Andreas; Zhang, Bo

    2017-09-01

    The aim of this study was to analyze pilots' visual scanning in a manual approach and landing scenario. Manual flying skills suffer from increasing use of automation. In addition, predominantly long-haul pilots with only a few opportunities to practice these skills experience this decline. Airline pilots representing different levels of practice (short-haul vs. long-haul) had to perform a manual raw data precision approach while their visual scanning was recorded by an eye-tracking device. The analysis of gaze patterns, which are based on predominant saccades, revealed one main group of saccades among long-haul pilots. In contrast, short-haul pilots showed more balanced scanning using two different groups of saccades. Short-haul pilots generally demonstrated better manual flight performance and within this group, one type of scan pattern was found to facilitate the manual landing task more. Long-haul pilots tend to utilize visual scanning behaviors that are inappropriate for the manual ILS landing task. This lack of skills needs to be addressed by providing specific training and more practice. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Conjugate Gaze Palsies

    Science.gov (United States)

    Consumer health reference entry on conjugate gaze palsies, covering horizontal and vertical gaze palsy, filed under cranial nerve disorders.

  18. A software module for implementing auditory and visual feedback on a video-based eye tracking system

    Science.gov (United States)

    Rosanlall, Bharat; Gertner, Izidor; Geri, George A.; Arrington, Karl F.

    2016-05-01

    We describe here the design and implementation of a software module that provides both auditory and visual feedback of the eye position measured by a commercially available eye tracking system. The present audio-visual feedback module (AVFM) serves as an extension to the Arrington Research ViewPoint EyeTracker, but it can be easily modified for use with other similar systems. Two modes of audio feedback and one mode of visual feedback are provided in reference to a circular area-of-interest (AOI). Auditory feedback can be either a click tone emitted when the user's gaze point enters or leaves the AOI, or a sinusoidal waveform with frequency inversely proportional to the distance from the gaze point to the center of the AOI. Visual feedback is in the form of a small circular light patch that is presented whenever the gaze-point is within the AOI. The AVFM processes data that are sent to a dynamic-link library by the EyeTracker. The AVFM's multithreaded implementation also allows real-time data collection (1 kHz sampling rate) and graphics processing that allow display of the current/past gaze-points as well as the AOI. The feedback provided by the AVFM described here has applications in military target acquisition and personnel training, as well as in visual experimentation, clinical research, marketing research, and sports training.
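
    The two auditory feedback rules are simple enough to sketch. The helper names and constants below are illustrative only, not the AVFM's actual API:

      import math

      def in_aoi(gaze, center, radius):
          """True when the gaze point lies inside the circular area-of-interest."""
          return math.dist(gaze, center) <= radius

      def tone_frequency(gaze, center, scale_hz=8000.0, eps=1.0):
          """Sinusoid frequency inversely proportional to the distance
          between the gaze point and the AOI center."""
          return scale_hz / max(math.dist(gaze, center), eps)

      print(in_aoi((400, 300), (380, 310), 50))             # True: show light patch
      print(round(tone_frequency((400, 300), (380, 310))))  # ~358 Hz at ~22 px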

  19. Fear of eyes: triadic relation among social anxiety, trypophobia, and discomfort for eye cluster.

    Science.gov (United States)

    Chaya, Kengo; Xue, Yuting; Uto, Yusuke; Yao, Qirui; Yamada, Yuki

    2016-01-01

    Imagine you are being gazed at by multiple individuals simultaneously. Is the provoked anxiety a learned social-specific response, or is it related to a pathological disorder known as trypophobia? A previous study revealed that spectral properties of images induced aversive reactions in observers with trypophobia. However, it is not clear whether individual differences such as social anxiety traits are related to the discomfort associated with trypophobic images. To investigate this issue, we conducted two experiments on social anxiety and trypophobia using images of eyes and faces. In Experiment 1, participants completed a social anxiety scale and a trypophobia questionnaire before evaluating the discomfort experienced upon exposure to pictures of eyes. The results showed that social anxiety had a significant indirect effect on the discomfort associated with the eye clusters, and that the effect was mediated by trypophobia. Experiment 2 replicated Experiment 1 using images of human faces. The results showed that, as in Experiment 1, a significant mediation effect of trypophobia was obtained, although the relationship between social anxiety and the discomfort rating was stronger than in Experiment 1. Our findings suggest that both social anxiety and trypophobia contribute to the induction of discomfort when one is gazed at by many people.

  20. Fear of eyes: triadic relation among social anxiety, trypophobia, and discomfort for eye cluster

    Directory of Open Access Journals (Sweden)

    Kengo Chaya

    2016-05-01

    Full Text Available Imagine you are being gazed at by multiple individuals simultaneously. Is the provoked anxiety a learned social-specific response, or is it related to a pathological disorder known as trypophobia? A previous study revealed that spectral properties of images induced aversive reactions in observers with trypophobia. However, it is not clear whether individual differences such as social anxiety traits are related to the discomfort associated with trypophobic images. To investigate this issue, we conducted two experiments on social anxiety and trypophobia using images of eyes and faces. In Experiment 1, participants completed a social anxiety scale and a trypophobia questionnaire before evaluating the discomfort experienced upon exposure to pictures of eyes. The results showed that social anxiety had a significant indirect effect on the discomfort associated with the eye clusters, and that the effect was mediated by trypophobia. Experiment 2 replicated Experiment 1 using images of human faces. The results showed that, as in Experiment 1, a significant mediation effect of trypophobia was obtained, although the relationship between social anxiety and the discomfort rating was stronger than in Experiment 1. Our findings suggest that both social anxiety and trypophobia contribute to the induction of discomfort when one is gazed at by many people.

  1. Gazes

    DEFF Research Database (Denmark)

    Khawaja, Iram

    The different strategies of positioning they utilize are studied and identified. The first strategy is to confront stereotyping prejudices and gazes, thereby attempting to position oneself in a counteracting way. The second is to transform and try to normalise external characteristics, such as clothing and other symbols that indicate Muslimness. A third strategy is to play along and allow the prejudice in question to remain unchallenged. A fourth is to join and participate in religious communities and develop an alternate sense of belonging to a wider community of Muslims. The concept of panoptical gazes...

  2. Gaze stability, dynamic balance and participation deficits in people with multiple sclerosis at fall-risk.

    Science.gov (United States)

    Garg, Hina; Dibble, Leland E; Schubert, Michael C; Sibthorp, Jim; Foreman, K Bo; Gappmaier, Eduard

    2018-05-05

    Despite common complaints of dizziness, and demyelination of afferent or efferent pathways to and from the vestibular nuclei that may adversely affect the angular vestibulo-ocular reflex (aVOR) and vestibulo-spinal function in persons with multiple sclerosis (PwMS), few studies have examined gaze and dynamic balance function in PwMS. Aims: 1) determine the differences in gaze stability, dynamic balance and participation measures between PwMS and controls; 2) examine the relationships between gaze stability, dynamic balance and participation. Nineteen ambulatory PwMS at fall-risk and 14 age-matched controls were recruited. Outcomes included (a) gaze stability [aVOR gain (ratio of eye to head velocity); number of compensatory saccades (CS) per head rotation; CS latency; gaze position error; coefficient of variation (CV) of aVOR gain], (b) dynamic balance [Functional Gait Assessment, FGA; four square step test], and (c) participation [dizziness handicap inventory; activities-specific balance confidence scale]. Separate independent t-tests and Pearson's correlations were calculated. PwMS were aged 53 ± 11.7 yrs and had 4.2 ± 3.3 falls/yr. PwMS demonstrated significant (p < 0.05) deficits on gaze stability, dynamic balance and participation measures compared to controls. CV of aVOR gain and CS latency were significantly correlated with FGA. Deficits and correlations across a spectrum of disability measures highlight the relevance of gaze and dynamic balance assessment in PwMS. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.
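
    Since the record defines aVOR gain as the ratio of eye velocity to head velocity and uses its coefficient of variation (CV) as an outcome, both quantities reduce to a few lines; the velocity values below are made up for illustration:

      import numpy as np

      def avor_gain(eye_velocity, head_velocity):
          """aVOR gain per head rotation: |eye velocity| / |head velocity|."""
          return np.abs(eye_velocity) / np.abs(head_velocity)

      def coefficient_of_variation(gains):
          """CV of aVOR gain across rotations (variability of the reflex)."""
          gains = np.asarray(gains, dtype=float)
          return gains.std(ddof=1) / gains.mean()

      gains = avor_gain(np.array([98.0, 85.0, 102.0]),   # eye velocities, deg/s
                        np.array([100.0, 100.0, 100.0])) # head velocities, deg/s
      print(gains, coefficient_of_variation(gains))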

  3. Vigilance and Avoidance of Threat in the Eye Movements of Children with Separation Anxiety Disorder

    Science.gov (United States)

    In-Albon, Tina; Kossowsky, Joe; Schneider, Silvia

    2010-01-01

    The "vigilance-avoidance" attention pattern is found in anxious adults, who initially gaze more at threatening pictures than nonanxious adults (vigilance), but subsequently gaze less at them than nonanxious adults (avoidance). The present research, using eye tracking methodology, tested whether anxious children show the same pattern. Children with…

  4. Synergistic convergence and split pons in horizontal gaze palsy and progressive scoliosis in two sisters

    Directory of Open Access Journals (Sweden)

    Jain Nitin

    2011-01-01

    Full Text Available Synergistic convergence is an ocular motor anomaly in which, on attempted abduction or attempted horizontal gaze, both eyes converge. It has been related to peripheral causes such as congenital fibrosis of the extraocular muscles (CFEOM), congenital cranial dysinnervation syndrome, and ocular misinnervation, or, rarely, to central causes like horizontal gaze palsy with progressive scoliosis and brain stem dysplasia. We report the occurrence of synergistic convergence in two sisters. Both also had kyphoscoliosis. Magnetic resonance imaging (MRI) of the brain and spine in both patients showed signs of brain stem dysplasia (the split pons sign), differing in degree (the younger sister had more marked changes).

  5. Evidence for a functional subdivision of Premotor Ear-Eye Field (Area 8B).

    Directory of Open Access Journals (Sweden)

    Marco Lanzilotto

    2015-01-01

    Full Text Available The Supplementary Eye Field (SEF) and the Frontal Eye Field (FEF) have been described as participating in gaze shift control. Recent evidence suggests, however, that other areas of the dorsomedial prefrontal cortex also influence gaze shifts. Herein, we have investigated electrically evoked ear and eye movements from the Premotor Ear-Eye Field, or PEEF (area 8B), of macaque monkeys. We stimulated the PEEF during a spontaneous condition (outside task performance) and during the execution of a visual fixation task (VFT). In the first case, we functionally identified two regions within the PEEF: a core and a belt. In the core region, stimulation elicited forward ear movements; regarding the evoked eye movements, in some penetrations stimulation elicited contraversive fixed-vector eye movements with a mean amplitude of 5.14°, while in other penetrations we observed prevalently contralateral goal-directed eye movements with end-points that fell within 15° of the primary eye position. In contrast, in the belt region, stimulation elicited backward ear movements; regarding the eye movements, in some penetrations stimulation elicited prevalently contralateral goal-directed eye movements with end-points that fell within 15° of the primary eye position, while in the lateral edge of the investigated region stimulation elicited contralateral goal-directed eye movements with end-points that fell beyond 15° of the primary eye position. Stimulation during the VFT either did not elicit eye movements or evoked saccades of only a few degrees. Finally, even though no head rotation movements were observed during the stimulation period, we observed a relationship between the duration of stimulation and the neck forces exerted by the monkey's head. We propose an updated view of the PEEF composed of two functional regions, core and belt, which may be involved in integrating auditory and visual information important to the programming of gaze

  6. Perceiving where another person is looking: The integration of head and body information in estimating another person’s gaze.

    Directory of Open Access Journals (Sweden)

    Pieter Moors

    2015-06-01

    Full Text Available The process through which an observer allocates his/her attention based on the attention of another person is known as joint attention. To be able to do this, the observer effectively has to compute where the other person is looking. It has been shown that observers integrate information from the head and the eyes to determine the gaze of another person. Most studies have documented that observers show a bias called the overshoot effect when eyes and head are misaligned. The present study addresses whether body information is also used as a cue to compute perceived gaze direction. In Experiment 1, we observed a similar overshoot effect in both behavioral and saccadic responses when manipulating body orientation. In Experiment 2, we explored whether the overshoot effect was due to observers assuming that the eyes are oriented further than the head when head and body orientation are misaligned. We removed horizontal eye information by presenting the stimulus from a side view. Head orientation was now manipulated in a vertical direction and the overshoot effect was replicated. In summary, this study shows that body orientation is indeed used as a cue to determine where another person is looking.

  7. Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception.

    Science.gov (United States)

    Hisanaga, Satoko; Sekiyama, Kaoru; Igasaki, Tomohiko; Murayama, Nobuki

    2016-10-13

    Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs' response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs' early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception.

  8. Does social presence or the potential for interaction reduce social gaze in online social scenarios? Introducing the "live lab" paradigm.

    Science.gov (United States)

    Gregory, Nicola J; Antolin, Jastine V

    2018-05-01

    Research has shown that people's gaze is biased away from faces in the real world but towards them when they are viewed onscreen. Non-equivalent stimulus conditions may have represented a confound in this research, however, as onscreen stimuli were viewed as pre-recordings where interaction was not possible, whereas real-world stimuli were viewed in real time where interaction was possible. We assessed the independent contributions of online social presence and the ability to interact on social gaze by developing the "live lab" paradigm. Participants in three groups (N = 132) viewed a confederate as (1) a live webcam stream where interaction was not possible (one-way), (2) a live webcam stream where an interaction was possible (two-way), or (3) a pre-recording. Potential for interaction, rather than online social presence, was the primary influence on gaze behaviour: participants in the pre-recorded and one-way conditions looked more at the face than those in the two-way condition, particularly when the confederate made "eye contact." Fixation durations on the face were shorter when the scene was viewed live, particularly during a bid for eye contact. Our findings support the dual function of gaze but suggest that online social presence alone is not sufficient to activate social norms of civil inattention. Implications for the reinterpretation of previous research are discussed.

  9. Does beauty catch the eye?: Sex differences in gazing at attractive opposite-sex targets

    NARCIS (Netherlands)

    van Straaten, I.; Holland, R.; Finkenauer, C.; Hollenstein, T.; Engels, R.C.M.E.

    2010-01-01

    We investigated to what extent the length of people's gazes during conversations with opposite-sex persons is affected by the physical attractiveness of the partner. Single participants (N = 115) conversed for 5 min with confederates who were rated either as low or high on physical attractiveness.

  10. Fearful gaze cueing: gaze direction and facial expression independently influence overt orienting responses in 12-month-olds.

    Directory of Open Access Journals (Sweden)

    Reiko Matsunaka

    Full Text Available Gaze direction cues and facial expressions have been shown to influence object processing in infants. For example, infants around 12 months of age utilize others' gaze directions and facial expressions to regulate their own behaviour toward an ambiguous target (i.e., social referencing). However, the mechanism by which social signals influence overt orienting in infants is unclear. The present study examined the effects of static gaze direction cues and facial expressions (neutral vs. fearful) on overt orienting using a gaze-cueing paradigm in 6- and 12-month-old infants. Two experiments were conducted: in Experiment 1, a face with a leftward or rightward gaze direction was used as a cue, and a face with a forward gaze direction was added in Experiment 2. In both experiments, an effect of facial expression was found in 12-month-olds; no effect was found in 6-month-olds. Twelve-month-old infants exhibited more rapid overt orienting in response to fearful expressions than neutral expressions, irrespective of gaze direction. These findings suggest that gaze direction information and facial expressions independently influence overt orienting in infants, and that the effect of facial expression emerges earlier than that of static gaze direction. Implications for the development of gaze direction and facial expression processing systems are discussed.

  11. Neurons in the monkey amygdala detect eye-contact during naturalistic social interactions

    Science.gov (United States)

    Mosher, Clayton P.; Zimmerman, Prisca E.; Gothard, Katalin M.

    2014-01-01

    Primates explore the visual world through eye-movement sequences. Saccades bring details of interest into the fovea while fixations stabilize the image [1]. During natural vision, social primates direct their gaze at the eyes of others to communicate their own emotions and intentions and to gather information about the mental states of others [2]. Direct gaze is an integral part of facial expressions that signals cooperation or conflict over resources and social status [3-6]. Despite the great importance of making and breaking eye contact in the behavioral repertoire of primates, little is known about the neural substrates that support these behaviors. Here we show that the monkey amygdala contains neurons that respond selectively to fixations at the eyes of others and to eye contact. These “eye cells” share several features with the canonical, visually responsive neurons in the monkey amygdala, however, they respond to the eyes only when they fall within the fovea of the viewer, either as a result of a deliberate saccade, or as eyes move into the fovea of the viewer during a fixation intended to explore a different feature. The presence of eyes in peripheral vision fails to activate the eye cells. These findings link the primate amygdala to eye-movements involved in the exploration and selection of details in visual scenes that contain socially and emotionally salient features. PMID:25283782

  12. Neurons in the monkey amygdala detect eye contact during naturalistic social interactions.

    Science.gov (United States)

    Mosher, Clayton P; Zimmerman, Prisca E; Gothard, Katalin M

    2014-10-20

    Primates explore the visual world through eye-movement sequences. Saccades bring details of interest into the fovea, while fixations stabilize the image. During natural vision, social primates direct their gaze at the eyes of others to communicate their own emotions and intentions and to gather information about the mental states of others. Direct gaze is an integral part of facial expressions that signals cooperation or conflict over resources and social status. Despite the great importance of making and breaking eye contact in the behavioral repertoire of primates, little is known about the neural substrates that support these behaviors. Here we show that the monkey amygdala contains neurons that respond selectively to fixations on the eyes of others and to eye contact. These "eye cells" share several features with the canonical, visually responsive neurons in the monkey amygdala; however, they respond to the eyes only when they fall within the fovea of the viewer, either as a result of a deliberate saccade or as eyes move into the fovea of the viewer during a fixation intended to explore a different feature. The presence of eyes in peripheral vision fails to activate the eye cells. These findings link the primate amygdala to eye movements involved in the exploration and selection of details in visual scenes that contain socially and emotionally salient features. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Impact of cognitive and linguistic ability on gaze behavior in children with hearing impairment

    Directory of Open Access Journals (Sweden)

    Olof Sandgren

    2013-11-01

    Full Text Available In order to explore verbal-nonverbal integration, we investigated the influence of cognitive and linguistic ability on gaze behavior during spoken language conversation between children with mild-to-moderate hearing impairment (HI) and normal-hearing (NH) peers. Ten HI-NH and ten NH-NH dyads performed a referential communication task requiring description of faces. During task performance, eye movements and speech were tracked. Cox proportional hazards regression was used to model associations between performance on cognitive and linguistic tasks and the probability of gaze to the conversational partner's face. Analyses compare the listeners in each dyad (HI: n = 10, mean age = 12;6 years, SD = 2;0, mean better-ear pure-tone average 33.0 dB HL, SD = 7.8; NH: n = 10, mean age = 13;7 years, SD = 1;11). Group differences in gaze behavior – with HI gazing more to the conversational partner than NH – remained significant despite adjustment for ability on receptive grammar, expressive vocabulary, and complex working memory. Adjustment for phonological short-term memory, as measured by nonword repetition, removed group differences, revealing an interaction between group membership and nonword repetition ability. Stratified analysis showed a twofold increase in the probability of gaze-to-partner for HI with low phonological short-term memory capacity, and a decreased probability for HI with high capacity, as compared to NH peers. The results revealed differences in gaze behavior attributable to performance on a phonological short-term memory task. Participants with hearing impairment and low phonological short-term memory capacity showed a doubled probability of gaze to the conversational partner, indicative of a visual bias. The results stress the need to look beyond the hearing impairment in diagnostics and intervention. Acknowledgment of the finding requires clinical assessment of children with hearing impairment to be supported by tasks tapping

  14. A covert attention P300-based brain-computer interface: Geospell.

    Science.gov (United States)

    Aloise, Fabio; Aricò, Pietro; Schettini, Francesca; Riccio, Angela; Salinari, Serenella; Mattia, Donatella; Babiloni, Fabio; Cincotti, Febo

    2012-01-01

    The Farwell and Donchin P300 speller interface is one of the most widely used brain-computer interface (BCI) paradigms for writing text. Recent studies have shown that the recognition accuracy of the P300 speller decreases significantly when eye movement is impaired. This report introduces the GeoSpell interface (Geometric Speller), which implements a stimulation framework for a P300-based BCI that has been optimised for operation under covert visual attention. We compared GeoSpell with the P300 speller interface under overt attention conditions with regard to effectiveness, efficiency and user satisfaction. Ten healthy subjects participated in the study. The performance of the GeoSpell interface in covert attention was comparable with that of the P300 speller in overt attention. As expected, the effectiveness of the spelling decreased with the new interface in covert attention. The NASA task load index (TLX) for workload assessment did not differ significantly between the two modalities. This study introduces and evaluates a gaze-independent, P300-based brain-computer interface, the efficacy and user satisfaction of which were comparable with those of the classical P300 speller. Despite a decrease in effectiveness due to the use of covert attention, the performance of GeoSpell far exceeded the accuracy threshold for effective spelling.

  15. Horizontal gaze palsy with progressive scoliosis: CT and MR findings

    Energy Technology Data Exchange (ETDEWEB)

    Bomfim, Rodrigo C.; Tavora, Daniel G.F.; Nakayama, Mauro; Gama, Romulo L. [Sarah Network of Rehabilitation Hospitals, Department of Radiology, Ceara (Brazil)

    2009-02-15

    Horizontal gaze palsy with progressive scoliosis (HGPPS) is a rare congenital disorder characterized by absence of conjugate horizontal eye movements and progressive scoliosis developing in childhood and adolescence. We present a child with clinical and neuroimaging findings typical of HGPPS. CT and MRI of the brain demonstrated pons hypoplasia, absence of the facial colliculi, butterfly configuration of the medulla and a deep midline pontine cleft. We briefly discuss the imaging aspects of this rare entity in light of the current literature. (orig.)

  16. On uniform resampling and gaze analysis of bidirectional texture functions

    Czech Academy of Sciences Publication Activity Database

    Filip, Jiří; Chantler, M.J.; Haindl, Michal

    2009-01-01

    Roč. 6, č. 3 (2009), s. 1-15 ISSN 1544-3558 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593 Grant - others:EC Marie Curie(BE) 41358 Institutional research plan: CEZ:AV0Z10750506 Keywords : BTF * texture * eye tracking Subject RIV: BD - Theory of Information Impact factor: 1.447, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/haindl-on uniform resampling and gaze analysis of bidirectional texture functions.pdf

  17. Affine Transform to Reform Pixel Coordinates of EOG Signals for Controlling Robot Manipulators Using Gaze Motions

    Directory of Open Access Journals (Sweden)

    Muhammad Ilhamdi Rusydi

    2014-06-01

    Full Text Available Biosignals will play an important role in building communication between machines and humans. One type of biosignal that is widely used in neuroscience is the electrooculography (EOG) signal. An EOG has a linear relationship with eye movement displacement. Experiments were performed to construct a gaze motion tracking method indicated by robot manipulator movements. Three operators looked at 24 target points displayed on a monitor that was 40 cm in front of them. Two channels (Ch1 and Ch2) produced EOG signals for every single eye movement. These signals were converted to pixel units by using the linear relationship between EOG signals and gaze motion distances. The conversion outcomes were actual pixel locations. An affine transform method is proposed to determine the shift of actual pixels to target pixels. This method consisted of sequences of five geometry processes, which are translation-1, rotation, translation-2, shear and dilatation. The accuracy was approximately 0.86° ± 0.67° in the horizontal direction and 0.54° ± 0.34° in the vertical direction. This system successfully tracked the gaze motions not only in direction, but also in distance. Using this system, three operators could operate a robot manipulator to point at some targets. This result shows that the method is reliable in building communication between humans and machines using EOGs.

  18. Affine transform to reform pixel coordinates of EOG signals for controlling robot manipulators using gaze motions.

    Science.gov (United States)

    Rusydi, Muhammad Ilhamdi; Sasaki, Minoru; Ito, Satoshi

    2014-06-10

    Biosignals will play an important role in building communication between machines and humans. One type of biosignal that is widely used in neuroscience is the electrooculography (EOG) signal. An EOG has a linear relationship with eye movement displacement. Experiments were performed to construct a gaze motion tracking method indicated by robot manipulator movements. Three operators looked at 24 target points displayed on a monitor that was 40 cm in front of them. Two channels (Ch1 and Ch2) produced EOG signals for every single eye movement. These signals were converted to pixel units by using the linear relationship between EOG signals and gaze motion distances. The conversion outcomes were actual pixel locations. An affine transform method is proposed to determine the shift of actual pixels to target pixels. This method consisted of sequences of five geometry processes, which are translation-1, rotation, translation-2, shear and dilatation. The accuracy was approximately 0.86° ± 0.67° in the horizontal direction and 0.54° ± 0.34° in the vertical direction. This system successfully tracked the gaze motions not only in direction, but also in distance. Using this system, three operators could operate a robot manipulator to point at some targets. This result shows that the method is reliable in building communication between humans and machines using EOGs.
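
    The five-step correction described in the two records above lends itself to composing homogeneous 3x3 transform matrices. A minimal sketch in Python/NumPy; the parameter values are placeholders, not the coefficients estimated in the study:

        import numpy as np

        def translation(tx, ty):
            return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], float)

        def rotation(theta):
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], float)

        def shear(kx, ky):
            return np.array([[1, kx, 0], [ky, 1, 0], [0, 0, 1]], float)

        def dilatation(sx, sy):
            return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], float)

        # Compose right-to-left: translation-1 acts on the raw EOG-derived pixel first.
        A = (dilatation(1.05, 0.98) @ shear(0.02, 0.0) @ translation(320, 240)
             @ rotation(np.deg2rad(1.5)) @ translation(-300, -250))

        actual_px = np.array([412.0, 275.0, 1.0])  # gaze position estimated from EOG
        target_px = A @ actual_px                  # corrected (target) pixel location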

  19. Gazing and Performing

    DEFF Research Database (Denmark)

    Larsen, Jonas; Urry, John

    2011-01-01

    The Tourist Gaze [Urry J, 1990 (Sage, London)] is one of the most discussed and cited tourism books (with about 4000 citations on Google scholar). Whilst wide ranging in scope, the book is known for the Foucault-inspired concept of the tourist gaze that brings out the fundamentally visual and image...

  20. Loneliness and hypervigilance to social cues in females: an eye-tracking study.

    Directory of Open Access Journals (Sweden)

    Gerine M A Lodder

    Full Text Available The goal of the present study was to examine whether lonely individuals differ from nonlonely individuals in their overt visual attention to social cues. Previous studies showed that loneliness was related to biased post-attentive processing of social cues (e.g., negative interpretation bias), but research on whether lonely and nonlonely individuals also show differences in an earlier information processing stage (gazing behavior) is very limited. A sample of 25 lonely and 25 nonlonely students took part in an eye-tracking study consisting of four tasks. We measured gazing (duration, number of fixations, and first fixation) at the eyes, nose and mouth region of faces expressing emotions (Task 1), at emotion quadrants (anger, fear, happiness and neutral expression) (Task 2), at quadrants with positive and negative social and nonsocial images (Task 3), and at the facial area of actors in video clips with positive and negative content (Task 4). In general, participants tended to gaze most often and longest at areas that conveyed most social information, such as the eye region of the face (T1) and social images (T3). Participants gazed most often and longest at happy faces (T2) in still images, and more often and longer at the facial area in negative than in positive video clips (T4). No differences occurred between lonely and nonlonely participants in their gazing times and frequencies, nor at first fixations at social cues in the four different tasks. Based on this study, we found no evidence that overt visual attention to social cues differs between lonely and nonlonely individuals. This implies that biases in social information processing of lonely individuals may be limited to other phases of social information processing. Alternatively, biased overt attention to social cues may only occur under specific conditions, for specific stimuli or for specific lonely individuals.

  1. How does image noise affect actual and predicted human gaze allocation in assessing image quality?

    Science.gov (United States)

    Röhrbein, Florian; Goddard, Peter; Schneider, Michael; James, Georgina; Guo, Kun

    2015-07-01

    A central research question in natural vision is how to allocate fixation to extract informative cues for scene perception. With high quality images, psychological and computational studies have made significant progress in understanding and predicting human gaze allocation in scene exploration. However, it is unclear whether these findings can be generalised to degraded naturalistic visual inputs. In this eye-tracking and computational study, we methodically distorted both man-made and natural scenes with a Gaussian low-pass filter, a circular averaging filter, and additive Gaussian white noise, and monitored participants' gaze behaviour in assessing perceived image qualities. Compared with original high quality images, distorted images attracted fewer fixations but longer fixation durations, shorter saccade distances and a stronger central fixation bias. This impact of image noise manipulation on gaze distribution was mainly determined by noise intensity rather than noise type, and was more pronounced for natural scenes than for man-made scenes. We further compared four high performing visual attention models in predicting human gaze allocation in degraded scenes, and found that model performance lacked human-like sensitivity to noise type and intensity, and was considerably worse than human performance measured as inter-observer variance. Furthermore, the central fixation bias is a major predictor for human gaze allocation, which becomes more prominent with increased noise intensity. Our results indicate a crucial role of external noise intensity in determining scene-viewing gaze behaviour, which should be considered in the development of realistic human-vision-inspired attention models. Copyright © 2015 Elsevier Ltd. All rights reserved.
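
    The three distortion types named above are straightforward to reproduce. A minimal sketch in Python, assuming SciPy and a grayscale image scaled to [0, 1]; filter sizes and noise level are illustrative only:

        import numpy as np
        from scipy import ndimage

        def gaussian_lowpass(img, sigma=2.0):
            # Gaussian low-pass filter: blur by convolving with a Gaussian kernel.
            return ndimage.gaussian_filter(img, sigma)

        def circular_average(img, radius=3):
            # Circular averaging filter: mean over a disk-shaped neighbourhood.
            y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
            disk = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
            return ndimage.convolve(img, disk / disk.sum())

        def additive_white_noise(img, sigma=0.1, seed=0):
            # Additive Gaussian white noise, clipped back to the valid range.
            rng = np.random.default_rng(seed)
            return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)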

  2. The Epistemology of the Gaze

    DEFF Research Database (Denmark)

    Kramer, Mette

    2007-01-01

    In psycho-semiotic film theory the gaze is often considered to be a straitjacket for the female spectator. If we approach the gaze through an empiric, so-called ‘naturalised’ lens, it is possible to regard the gaze as a functional device through which the spectator can obtain knowledge essential for her self-preservation....

  3. Exploiting Three-Dimensional Gaze Tracking for Action Recognition During Bimanual Manipulation to Enhance Human–Robot Collaboration

    Directory of Open Access Journals (Sweden)

    Alireza Haji Fathaliyan

    2018-04-01

    Full Text Available Human–robot collaboration could be advanced by facilitating the intuitive, gaze-based control of robots, and enabling robots to recognize human actions, infer human intent, and plan actions that support human goals. Traditionally, gaze tracking approaches to action recognition have relied upon computer vision-based analyses of two-dimensional egocentric camera videos. The objective of this study was to identify useful features that can be extracted from three-dimensional (3D) gaze behavior and used as inputs to machine learning algorithms for human action recognition. We investigated human gaze behavior and gaze–object interactions in 3D during the performance of a bimanual, instrumental activity of daily living: the preparation of a powdered drink. A marker-based motion capture system and binocular eye tracker were used to reconstruct 3D gaze vectors and their intersection with 3D point clouds of objects being manipulated. Statistical analyses of gaze fixation duration and saccade size suggested that some actions (pouring and stirring) may require more visual attention than other actions (reach, pick up, set down, and move). 3D gaze saliency maps, generated with high spatial resolution for six subtasks, appeared to encode action-relevant information. The “gaze object sequence” was used to capture information about the identity of objects in concert with the temporal sequence in which the objects were visually regarded. Dynamic time warping barycentric averaging was used to create a population-based set of characteristic gaze object sequences that accounted for intra- and inter-subject variability. The gaze object sequence was used to demonstrate the feasibility of a simple action recognition algorithm that utilized a dynamic time warping Euclidean distance metric. Averaged over the six subtasks, the action recognition algorithm yielded an accuracy of 96.4%, precision of 89.5%, and recall of 89.2%. This level of performance suggests that
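
    A minimal sketch of the nearest-template idea behind the recognition step: gaze object sequences compared by dynamic time warping with a Euclidean local cost. The toy object vocabulary, one-hot encoding, and template sequences are hypothetical stand-ins for those derived by barycentric averaging:

        import numpy as np

        def dtw_distance(a, b):
            """Dynamic time warping with a Euclidean local cost."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # One-hot encode gaze object sequences over a toy object vocabulary.
        objects = {"pitcher": 0, "glass": 1, "spoon": 2}
        def encode(seq):
            out = np.zeros((len(seq), len(objects)))
            out[np.arange(len(seq)), [objects[o] for o in seq]] = 1.0
            return out

        templates = {"pour": encode(["pitcher", "glass", "glass"]),
                     "stir": encode(["spoon", "glass", "spoon"])}
        query = encode(["pitcher", "pitcher", "glass"])
        label = min(templates, key=lambda k: dtw_distance(query, templates[k]))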

  4. Quantitative linking hypotheses for infant eye movements.

    Directory of Open Access Journals (Sweden)

    Daniel Yurovsky

    Full Text Available The study of cognitive development hinges, largely, on the analysis of infant looking. But analyses of eye gaze data require the adoption of linking hypotheses: assumptions about the relationship between observed eye movements and underlying cognitive processes. We develop a general framework for constructing, testing, and comparing these hypotheses, and thus for producing new insights into early cognitive development. We first introduce the general framework--applicable to any infant gaze experiment--and then demonstrate its utility by analyzing data from a set of experiments investigating the role of attentional cues in infant learning. The new analysis uncovers significantly more structure in these data, finding evidence of learning that was not found in standard analyses and showing an unexpected relationship between cue use and learning rate. Finally, we discuss general implications for the construction and testing of quantitative linking hypotheses. MATLAB code for sample linking hypotheses can be found on the first author's website.

  5. The effects of varying contextual demands on age-related positive gaze preferences.

    Science.gov (United States)

    Noh, Soo Rim; Isaacowitz, Derek M

    2015-06-01

    Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy-neutral and fearful-neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise but was present when there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults' positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. (c) 2015 APA, all rights reserved.

  6. Effect of direct eye contact in PTSD related to interpersonal trauma: an fMRI study of activation of an innate alarm system.

    Science.gov (United States)

    Steuwe, Carolin; Daniels, Judith K; Frewen, Paul A; Densmore, Maria; Pannasch, Sebastian; Beblo, Thomas; Reiss, Jeffrey; Lanius, Ruth A

    2014-01-01

    In healthy individuals, direct eye contact initially leads to activation of a fast subcortical pathway, which then modulates a cortical route eliciting social cognitive processes. The aim of this study was to gain insight into the neurobiological effects of direct eye-to-eye contact using a virtual reality paradigm in individuals with posttraumatic stress disorder (PTSD) related to prolonged childhood abuse. We examined 16 healthy comparison subjects and 16 patients with a primary diagnosis of PTSD using a virtual reality functional magnetic resonance imaging paradigm involving direct vs averted gaze (happy, sad, neutral) as developed by Schrammel et al. in 2009. Irrespective of the displayed emotion, controls exhibited an increased blood oxygenation level-dependent response during direct vs averted gaze within the dorsomedial prefrontal cortex, left temporoparietal junction and right temporal pole. Under the same conditions, individuals with PTSD showed increased activation within the superior colliculus (SC)/periaqueductal gray (PAG) and locus coeruleus. Our findings suggest that healthy controls react to the exposure of direct gaze with an activation of a cortical route that enhances evaluative 'top-down' processes underlying social interactions. In individuals with PTSD, however, direct gaze leads to sustained activation of a subcortical route of eye-contact processing, an innate alarm system involving the SC and the underlying circuits of the PAG.

  7. Demo of Gaze Controlled Flying

    DEFF Research Database (Denmark)

    Alapetite, Alexandre; Hansen, John Paulin; Scott MacKenzie, I.

    2012-01-01

    Development of a control paradigm for unmanned aerial vehicles (UAV) is a new challenge to HCI. The demo explores how to use gaze as input for locomotion in 3D. A low-cost drone will be controlled by tracking user’s point of regard (gaze) on a live video stream from the UAV.

  8. A Proactive Approach of Robotic Framework for Making Eye Contact with Humans

    Directory of Open Access Journals (Sweden)

    Mohammed Moshiul Hoque

    2014-01-01

    Full Text Available Making eye contact is a most important prerequisite function of humans to initiate a conversation with others. However, it is not an easy task for a robot to make eye contact with a human if they are not facing each other initially, or if the human is intensely engaged in his/her task. If the robot would like to start communication with a particular person, it should turn its gaze to that person and make eye contact with him/her. However, such a turning action alone is not enough to set up an eye contact phenomenon in all cases. Therefore, the robot should perform some stronger actions in some situations so that it can attract the target person before meeting his/her gaze. In this paper, we propose a conceptual model of eye contact for social robots consisting of two phases: capturing attention and ensuring the attention capture. Evaluation experiments with human participants reveal the effectiveness of the proposed model in four viewing situations, namely, central field of view, near peripheral field of view, far peripheral field of view, and out of field of view.

  9. Oxytocin increases attention to the eyes and selectively enhances self-reported affective empathy for fear.

    Science.gov (United States)

    Hubble, Kelly; Daughters, Katie; Manstead, Antony S R; Rees, Aled; Thapar, Anita; van Goozen, Stephanie H M

    2017-11-01

    Oxytocin (OXT) has previously been implicated in a range of prosocial behaviors such as trust and emotion recognition. Nevertheless, recent studies have questioned the evidence for this link. In addition, there has been relatively little conclusive research on the effect of OXT on empathic ability and such studies as there are have not examined the mechanisms through which OXT might affect empathy, or whether OXT selectively facilitates empathy for specific emotions. In the current study, we used eye-tracking to assess attention to socially relevant information while participants viewed dynamic, empathy-inducing video clips, in which protagonists expressed sadness, happiness, pain or fear. In a double-blind, within-subjects, randomized control trial, 40 healthy male participants received 24 IU intranasal OXT or placebo in two identical experimental sessions, separated by a 2-week interval. OXT led to an increase in time spent fixating upon the eye-region of the protagonist's face across emotions. OXT also selectively enhanced self-reported affective empathy for fear, but did not affect cognitive or affective empathy for other emotions. Nevertheless, there was no positive relationship between eye-gaze patterns and affective empathy, suggesting that although OXT influences eye-gaze and may enhance affective empathy for fear, these two systems are independent. Future studies need to further examine the effect of OXT on eye-gaze to fully ascertain whether this can explain the improvements in emotional behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Predicting diagnostic error in Radiology via eye-tracking and image analytics: Application in mammography

    Energy Technology Data Exchange (ETDEWEB)

    Voisin, Sophie [ORNL; Pinto, Frank M [ORNL; Morin-Ducote, Garnetta [University of Tennessee, Knoxville (UTK); Hudson, Kathy [University of Tennessee, Knoxville (UTK); Tourassi, Georgia [ORNL

    2013-01-01

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists' gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from 4 Radiology residents and 2 breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BIRADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Diagnostic error can be predicted reliably by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model (AUC = 0.79). Personalized user modeling was far more accurate for the more experienced readers (average AUC of 0.837 ± 0.029) than for the less experienced ones (average AUC of 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted reliably by leveraging the radiologists' gaze behavior and image content.
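
    A minimal sketch of the fusion step, assuming scikit-learn; the feature counts, random stand-in data, and classifier choice are illustrative, not the study's actual pipeline:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        gaze_features = rng.normal(size=(240, 6))   # e.g., dwell time, fixation count
        image_features = rng.normal(size=(240, 8))  # e.g., texture descriptors, BIRADS codes
        X = np.hstack([gaze_features, image_features])  # merged feature vector per case
        y = rng.integers(0, 2, size=240)            # 1 = diagnostic error, 0 = correct

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
        print(f"group-model AUC ~ {auc:.2f}")       # real data replaces the random arrays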

  11. Gazes and Performances

    DEFF Research Database (Denmark)

    Larsen, Jonas

    Abstract: Recent literature has critiqued the notion of the 'tourist gaze' for reducing tourism to visual experiences ('sightseeing') and neglecting other senses and bodily experiences of doing tourism. A so-called 'performance turn' within tourist studies highlights how tourists experience places... to conceptualise the corporeality of tourist bodies and the embodied actions of and interactions between tourist workers, tourists and 'locals' on various stages. It has been suggested that it is necessary to choose between gazing and performing as the tourism paradigm (Perkin and Thorns 2001). Rather than... Through ethnographic studies I spell out the embodied, hybridised, mobile and performative nature of tourist gazing, especially with regard to tourist photography. The talk draws on my recent book Tourism, Performance and the Everyday: Consuming the Orient (Routledge, 2009, with M. Haldrup) and the substantially...

  12. Using eye tracking technology to compare the effectiveness of malignant hyperthermia cognitive aid design.

    Science.gov (United States)

    King, Roderick; Hanhan, Jaber; Harrison, T Kyle; Kou, Alex; Howard, Steven K; Borg, Lindsay K; Shum, Cynthia; Udani, Ankeet D; Mariano, Edward R

    2018-05-15

    Malignant hyperthermia is a rare but potentially fatal complication of anesthesia, and several different cognitive aids designed to facilitate a timely and accurate response to this crisis currently exist. Eye tracking technology can measure voluntary and involuntary eye movements, gaze fixation within an area of interest, and speed of visual response, and has been used to a limited extent in anesthesiology. With eye tracking technology, we compared the accessibility of five malignant hyperthermia cognitive aids by collecting gaze data from twelve volunteer participants. Recordings were reviewed and annotated to measure the time required for participants to locate objects on the cognitive aid to provide an answer; cumulative time to answer was the primary outcome. For the primary outcome, differences were detected between the cumulative time-to-answer survival curves; the best-performing aid was set in typescript with minimal use of single-color blocking.

  13. Eye contact with neutral and smiling faces: effects on frontal EEG asymmetry and autonomic responses

    Directory of Open Access Journals (Sweden)

    Laura Maria Pönkänen

    2012-05-01

    Full Text Available In our previous studies we have shown that seeing another person live with a direct vs. averted gaze results in greater relative left-sided frontal asymmetry in the electroencephalography (EEG), associated with approach motivation, and in enhanced skin conductance responses indicating autonomic arousal. In our studies, however, the stimulus persons had a neutral expression. In real-life social interaction, eye contact is often associated with a smile, which is another signal of the sender’s approach-related motivation. A smile could therefore enhance the affective-motivational responses to eye contact. In the present study, we investigated whether the facial expression (neutral vs. smile) would modulate the frontal EEG asymmetry and autonomic arousal to seeing a direct vs. an averted gaze in faces presented live through a liquid crystal shutter. The results showed that the skin conductance responses were greater for the direct than the averted gaze and that the effect of gaze direction was more pronounced for a smiling than a neutral face. However, the frontal EEG asymmetry results revealed a more complex pattern. Participants whose responses to seeing the other person were overall indicative of leftward frontal activity (indicative of approach) showed greater relative left-sided asymmetry for the direct vs. averted gaze, whereas participants whose responses were overall indicative of rightward frontal activity (indicative of avoidance) showed greater relative right-sided asymmetry to direct vs. averted gaze. The other person’s facial expression did not have an effect on the frontal EEG asymmetry. These findings may reflect that another’s direct gaze, as compared to their smile, has a more dominant role in regulating perceivers’ approach motivation.

  14. Exploring combinations of different color and facial expression stimuli for gaze-independent BCIs

    Directory of Open Access Journals (Sweden)

    Long eChen

    2016-01-01

    Full Text Available Abstract Background: Some studies have proven that a conventional visual brain computer interface (BCI) based on overt attention cannot be used effectively when eye movement control is not possible. To solve this problem, a novel visual-based BCI system based on covert attention and feature attention has been proposed and was called the gaze-independent BCI. Color and shape differences between stimuli and backgrounds have generally been used in examples of gaze-independent BCIs. Recently, a new paradigm based on facial expression change had been presented, and obtained high performance. However, some facial expressions were so similar that users couldn’t tell them apart, especially when they were presented at the same position in a rapid serial visual presentation (RSVP) paradigm. Consequently, the performance of BCIs is reduced. New Method: In this paper, we combined facial expressions and colors to optimize the stimuli presentation in the gaze-independent BCI. This optimized paradigm was called the colored dummy face pattern. It is suggested that different colors and facial expressions could help subjects to locate the target and evoke larger event-related potentials (ERPs). In order to evaluate the performance of this new paradigm, two other paradigms were presented, called the grey dummy face pattern and the colored ball pattern. Comparison with Existing Method(s): The key point that determined the value of the colored dummy face stimuli in BCI systems was whether dummy face stimuli could obtain higher performance than grey face or colored ball stimuli. Ten healthy subjects (7 male, aged 21–26 years, mean 24.5 ± 1.25) participated in our experiment. Online and offline results of four different paradigms were obtained and comparatively analyzed. Results: The results showed that the colored dummy face pattern could evoke higher P300 and N400 ERP amplitudes, compared with the grey dummy face pattern and the colored ball pattern. Online results showed

  15. Gaze patterns reveal how texts are remembered: A mental model of what was described is favored over the text itself

    DEFF Research Database (Denmark)

    Traub, Franziska; Johansson, Roger; Holmqvist, Kenneth

    or incongruent with the spatial layout of the text itself. 28 participants read and recalled three texts: (1) a scene description congruent with the spatial layout of the text; (2) a scene description incongruent with the spatial layout of the text; and (3) a control text without any spatial scene content... Recollection was performed orally while gazing at a blank screen. Results demonstrate that participants' gaze patterns during recall more closely reflect the spatial layout of the scene than the physical locations of the text. We conclude that participants formed a mental model that represents the content of what was described, i.e., visuospatial information of the scene, which then guided the retrieval process. During their retellings, they moved their eyes across the blank screen as if they saw the scene in front of them. Whereas previous studies on the involvement of eye movements in mental imagery tasks...

  16. Attentional bias to betel quid cues: An eye tracking study.

    Science.gov (United States)

    Shen, Bin; Chiu, Meng-Chun; Li, Shuo-Heng; Huang, Guo-Joe; Liu, Ling-Jun; Ho, Ming-Chou

    2016-09-01

    The World Health Organization regards betel quid as a human carcinogen, and DSM-IV and ICD-10 dependence symptoms may develop with heavy use. This study, conducted in central Taiwan, investigated whether betel quid chewers exhibit overt orienting and selectively respond to betel quid cues. Twenty-four male chewers' and 23 male nonchewers' eye movements to betel-quid-related pictures and matched pictures were assessed during a visual probe task. The eye movement index showed that betel quid chewers were more likely to initially direct their gaze to the betel quid cues, t(23) = 3.70, indicating an attentional bias in betel quid chewers. The results demonstrated that the betel quid chewers (but not the nonchewers) were more likely to initially direct their gaze to the betel quid cues, and spent more time and were more fixated on them. These findings suggested that when attention is directly measured through the eye tracking technique, this methodology may be more sensitive to detecting attentional biases in betel quid chewers. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  17. Strange-face Illusions During Interpersonal-Gazing and Personality Differences of Spirituality.

    Science.gov (United States)

    Caputo, Giovanni B

    Strange-face illusions are produced when two individuals gaze at each other in the eyes in low illumination for more than a few minutes. Usually, the members of the dyad perceive numinous apparitions, like the other's face deformations and perception of a stranger or a monster in place of the other, and feel a short-lasting dissociation. In the present experiment, the influence of the spirituality personality trait on the strength and number of strange-face illusions was investigated. Thirty participants were preliminarily tested for superstition (Paranormal Belief Scale, PBS) and spirituality (Spiritual Transcendence Scale, STS); then, they were randomly assigned to 15 dyads. Dyads performed the intersubjective gazing task for 10 minutes and, finally, strange-face illusions (measured through the Strange-Face Questionnaire, SFQ) were evaluated. The first finding was that SFQ was independent of PBS; hence, strange-face illusions during intersubjective gazing are authentically perceptual, hallucination-like phenomena, and not due to superstition. The second finding was that SFQ depended on the spiritual-universality scale of STS (a belief in the unitive nature of life; e.g., "there is a higher plane of consciousness or spirituality that binds all people") and the two variables were negatively correlated. Thus, strange-face illusions, in particular monstrous apparitions, could potentially disrupt binding among human beings. Strange-face illusions can be considered as 'projections' of the subject's unconscious into the other's face. In conclusion, intersubjective gazing at low illumination can be a tool for conscious integration of unconscious 'shadows of the Self' in order to reach completeness of the Self. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Gaze interaction from bed

    DEFF Research Database (Denmark)

    Hansen, John Paulin; San Agustin, Javier; Jensen, Henrik Tomra Skovsgaard Hegner

    2011-01-01

    This paper presents a low-cost gaze tracking solution for bedbound people composed of freeware tracking software and commodity hardware. Gaze interaction is done on a large wall-projected image, visible to all people present in the room. The hardware equipment leaves physical space free to assist...

  19. Fusion of P300 and eye-tracker data for spelling using BCI2000

    Science.gov (United States)

    Kalika, Dmitry; Collins, Leslie; Caves, Kevin; Throckmorton, Chandra

    2017-10-01

    Objective. Various augmentative and alternative communication (AAC) devices have been developed in order to aid communication for individuals with communication disorders. Recently, there has been interest in combining EEG data and eye-gaze data with the goal of developing a hybrid (or ‘fused’) BCI (hBCI) AAC system. This work explores the effectiveness of a speller that fuses data from an eye-tracker and the P300 speller in order to create a hybrid P300 speller. Approach. This hybrid speller collects both eye-tracking and EEG data in parallel, and the user spells characters on the screen in the same way that they would if they were only using the P300 speller. Online and offline experiments were performed. The online experiments measured the performance of the speller for sixteen non-disabled participants, while the offline simulations were used to assess the robustness of the hybrid system. Main results. Online results showed that for fifteen non-disabled participants, using eye-gaze in a Bayesian framework with EEG data from the P300 speller improved accuracy (0.0163 ± 2.72, 0.085 ± 0.111, 0.080 ± 0.106 for the estimated, medium, and high variance configurations) and reduced the average number of flashes required to spell a character compared to the standard P300 speller that relies solely on EEG data (-53.27 ± 25.87, -36.15 ± 19.3, -18.85 ± 12.43 for the estimated, medium, and high variance configurations). Offline simulations indicate that the system provides more robust performance than a standalone eye gaze system. Significance. The results of this work on non-disabled participants show the potential efficacy of the hybrid P300 and eye-tracker speller. Further validation on the amyotrophic lateral sclerosis population is needed to assess the benefit of this hybrid system.
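
    A minimal sketch of the Bayesian fusion idea, treating EEG and gaze evidence as independent likelihoods over candidate characters; the character set and likelihood values are made up for illustration:

        import numpy as np

        def update_posterior(prior, p300_like, gaze_like):
            """Fold in EEG and gaze evidence as independent likelihoods."""
            posterior = prior * p300_like * gaze_like
            return posterior / posterior.sum()

        chars = np.array(list("ABCDEFGH"))
        posterior = np.full(len(chars), 1.0 / len(chars))  # uniform prior
        # Illustrative per-character likelihoods after one flash sequence:
        p300 = np.array([0.9, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
        gaze = np.array([0.7, 0.2, 0.05, 0.05, 0.0, 0.0, 0.0, 0.0]) + 1e-6
        posterior = update_posterior(posterior, p300, gaze)
        print(chars[posterior.argmax()])  # select once the posterior clears a threshold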

  20. The effect of varying talker identity and listening conditions on gaze behavior during audiovisual speech perception.

    Science.gov (United States)

    Buchan, Julie N; Paré, Martin; Munhall, Kevin G

    2008-11-25

    During face-to-face conversation the face provides auditory and visual linguistic information, and also conveys information about the identity of the speaker. This study investigated behavioral strategies involved in gathering visual information while watching talking faces. The effects of varying talker identity and varying the intelligibility of speech (by adding acoustic noise) on gaze behavior were measured with an eyetracker. Varying the intelligibility of the speech by adding noise had a noticeable effect on the location and duration of fixations. When noise was present subjects adopted a vantage point that was more centralized on the face by reducing the frequency of the fixations on the eyes and mouth and lengthening the duration of their gaze fixations on the nose and mouth. Varying talker identity resulted in a more modest change in gaze behavior that was modulated by the intelligibility of the speech. Although subjects generally used similar strategies to extract visual information in both talker variability conditions, when noise was absent there were more fixations on the mouth when viewing a different talker every trial as opposed to the same talker every trial. These findings provide a useful baseline for studies examining gaze behavior during audiovisual speech perception and perception of dynamic faces.

  1. Combining user logging with eye tracking for interactive and dynamic applications.

    Science.gov (United States)

    Ooms, Kristien; Coltekin, Arzu; De Maeyer, Philippe; Dupont, Lien; Fabrikant, Sara; Incoul, Annelies; Kuhn, Matthias; Slabbinck, Hendrik; Vansteenkiste, Pieter; Van der Haegen, Lise

    2015-12-01

    User evaluations of interactive and dynamic applications face various challenges related to the active nature of these displays. For example, users can often zoom and pan on digital products, and these interactions cause changes in the extent and/or level of detail of the stimulus. Therefore, in eye tracking studies, when a user's gaze is at a particular screen position (gaze position) over a period of time, the information contained in this particular position may have changed. Such digital activities are commonplace in modern life, yet it has been difficult to automatically compare the changing information at the viewed position, especially across many participants. Existing solutions typically involve tedious and time-consuming manual work. In this article, we propose a methodology that can overcome this problem. By combining eye tracking with user logging (mouse and keyboard actions) on cartographic products, we are able to accurately reference screen coordinates to geographic coordinates. This referencing approach allows researchers to know which geographic object (location or attribute) corresponds to the gaze coordinates at all times. We tested the proposed approach through two case studies, and discuss the advantages and disadvantages of the applied methodology. Furthermore, the applicability of the proposed approach is discussed with respect to other fields of research that use eye tracking, namely marketing, sports and movement sciences, and experimental psychology. From these case studies and discussions, we conclude that combining eye tracking and user-logging data is an essential step forward in efficiently studying user behavior with interactive and static stimuli in multiple research fields.
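
    A minimal sketch of the referencing idea: replay the logged zoom/pan state to map a gaze sample's screen coordinates to geographic coordinates. The viewport representation is an assumption for illustration, not the authors' log format:

        from dataclasses import dataclass

        @dataclass
        class Viewport:
            lon_min: float
            lon_max: float
            lat_min: float
            lat_max: float
            width_px: int
            height_px: int

        def screen_to_geo(x_px, y_px, vp):
            # Linear interpolation within the current geographic extent; screen y
            # grows downward while latitude grows upward, hence the flip.
            lon = vp.lon_min + (x_px / vp.width_px) * (vp.lon_max - vp.lon_min)
            lat = vp.lat_max - (y_px / vp.height_px) * (vp.lat_max - vp.lat_min)
            return lon, lat

        vp = Viewport(4.0, 6.0, 50.5, 51.5, 1920, 1080)  # updated on each logged pan/zoom
        print(screen_to_geo(960, 540, vp))               # gaze at screen centre -> (5.0, 51.0)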

  2. Spatiotemporal characteristics of gaze of children with autism spectrum disorders while looking at classroom scenes.

    Directory of Open Access Journals (Sweden)

    Takahiro Higuchi

    Full Text Available Children with autism spectrum disorders (ASD) who have neurodevelopmental impairments in social communication often refuse to go to school because of difficulties in learning in class. The exact cause of maladaptation to school in such children is unknown. We hypothesized that these children have difficulty in paying attention to objects at which teachers are pointing. We performed gaze behavior analysis of children with ASD to understand their difficulties in the classroom. The subjects were 26 children with ASD (19 boys and 7 girls; mean age, 8.6 years) and 27 age-matched children with typical development (TD) (14 boys and 13 girls; mean age, 8.2 years). We measured eye movements of the children while they performed free viewing of two movies depicting actual classes: a Japanese class in which a teacher pointed at cartoon characters and an arithmetic class in which the teacher pointed at geometric figures. In the analysis, we defined the regions of interest (ROIs) as the teacher's face and finger, the cartoon characters and geometric figures at which the teacher pointed, and the classroom wall that contained no objects. We then compared total gaze time for each ROI between the children with ASD and TD by two-way ANOVA. Children with ASD spent less gaze time on the cartoon characters pointed at by the teacher; they spent more gaze time on the wall in both classroom scenes. We could differentiate children with ASD from those with TD almost perfectly by the proportion of total gaze time that children with ASD spent looking at the wall. These results suggest that children with ASD do not follow the teacher's instructions in class and persist in gazing at inappropriate visual areas such as walls. Thus, they may have difficulties in understanding content in class, leading to maladaptation to school.

  3. Spatiotemporal characteristics of gaze of children with autism spectrum disorders while looking at classroom scenes.

    Science.gov (United States)

    Higuchi, Takahiro; Ishizaki, Yuko; Noritake, Atsushi; Yanagimoto, Yoshitoki; Kobayashi, Hodaka; Nakamura, Kae; Kaneko, Kazunari

    2017-01-01

    Children with autism spectrum disorders (ASD) who have neurodevelopmental impairments in social communication often refuse to go to school because of difficulties in learning in class. The exact cause of maladaptation to school in such children is unknown. We hypothesized that these children have difficulty in paying attention to objects at which teachers are pointing. We performed gaze behavior analysis of children with ASD to understand their difficulties in the classroom. The subjects were 26 children with ASD (19 boys and 7 girls; mean age, 8.6 years) and 27 age-matched children with typical development (TD) (14 boys and 13 girls; mean age, 8.2 years). We measured eye movements of the children while they performed free viewing of two movies depicting actual classes: a Japanese class in which a teacher pointed at cartoon characters and an arithmetic class in which the teacher pointed at geometric figures. In the analysis, we defined the regions of interest (ROIs) as the teacher's face and finger, the cartoon characters and geometric figures at which the teacher pointed, and the classroom wall that contained no objects. We then compared total gaze time for each ROI between the children with ASD and TD by two-way ANOVA. Children with ASD spent less gaze time on the cartoon characters pointed at by the teacher; they spent more gaze time on the wall in both classroom scenes. We could differentiate children with ASD from those with TD almost perfectly by the proportion of total gaze time that children with ASD spent looking at the wall. These results suggest that children with ASD do not follow the teacher's instructions in class and persist in gazing at inappropriate visual areas such as walls. Thus, they may have difficulties in understanding content in class, leading to maladaptation to school.
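
    A minimal sketch of the aggregation underlying such an analysis: summing fixation durations per ROI per child before a two-way (group x ROI) ANOVA. Column names and values are hypothetical:

        import pandas as pd

        # Hypothetical fixation log: one row per fixation, labelled with its ROI.
        fix = pd.DataFrame({
            "child":       ["a", "a", "b", "b", "c"],
            "group":       ["ASD", "ASD", "TD", "TD", "TD"],
            "roi":         ["wall", "pointed_target", "pointed_target", "face", "wall"],
            "duration_ms": [420, 180, 510, 260, 90],
        })
        totals = (fix.groupby(["group", "child", "roi"])["duration_ms"]
                     .sum()
                     .unstack(fill_value=0))
        print(totals)  # per-child totals per ROI, ready for a two-way ANOVA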

  4. The influence of banner advertisements on attention and memory: human faces with averted gaze can enhance advertising effectiveness.

    Science.gov (United States)

    Sajjacholapunt, Pitch; Ball, Linden J

    2014-01-01

    Research suggests that banner advertisements used in online marketing are often overlooked, especially when positioned horizontally on webpages. Such inattention invariably gives rise to an inability to remember advertising brands and messages, undermining the effectiveness of this marketing method. Recent interest has focused on whether human faces within banner advertisements can increase attention to the information they contain, since the gaze cues conveyed by faces can influence where observers look. We report an experiment that investigated the efficacy of faces located in banner advertisements to enhance the attentional processing and memorability of banner contents. We tracked participants' eye movements when they examined webpages containing either bottom-right vertical banners or bottom-center horizontal banners. We also manipulated facial information such that banners either contained no face, a face with mutual gaze or a face with averted gaze. We additionally assessed people's memories for brands and advertising messages. Results indicated that relative to other conditions, the condition involving faces with averted gaze increased attention to the banner overall, as well as to the advertising text and product. Memorability of the brand and advertising message was also enhanced. Conversely, in the condition involving faces with mutual gaze, the focus of attention was localized more on the face region rather than on the text or product, weakening any memory benefits for the brand and advertising message. This detrimental impact of mutual gaze on attention to advertised products was especially marked for vertical banners. These results demonstrate that the inclusion of human faces with averted gaze in banner advertisements provides a promising means for marketers to increase the attention paid to such adverts, thereby enhancing memory for advertising information.

  5. The influence of banner advertisements on attention and memory: Human faces with averted gaze can enhance advertising effectiveness

    Directory of Open Access Journals (Sweden)

    Pitch eSajjacholapunt

    2014-03-01

    Full Text Available Research suggests that banner advertisements used in online marketing are often overlooked, especially when positioned horizontally on webpages. Such inattention invariably gives rise to an inability to remember advertising brands and messages, undermining the effectiveness of this marketing method. Recent interest has focused on whether human faces within banner advertisements can increase attention to the information they contain, since the gaze cues conveyed by faces can influence where observers look. We report an experiment that investigated the efficacy of faces located in banner advertisements to enhance the attentional processing and memorability of banner contents. We tracked participants’ eye movements when they examined webpages containing either bottom-right vertical banners or bottom-centre horizontal banners. We also manipulated facial information such that banners either contained no face, a face with mutual gaze or a face with averted gaze. We additionally assessed people’s memories for brands and advertising messages. Results indicated that relative to other conditions, the condition involving faces with averted gaze increased attention to the banner overall, as well as to the advertising text and product. Memorability of the brand and advertising message was also enhanced. Conversely, in the condition involving faces with mutual gaze, the focus of attention was localised more on the face region rather than on the text or product, weakening any memory benefits for the brand and advertising message. This detrimental impact of mutual gaze on attention to advertised products was especially marked for vertical banners. These results demonstrate that the inclusion of human faces with averted gaze in banner advertisements provides a promising means for marketers to increase the attention paid to such adverts, thereby enhancing memory for advertising information.

  6. Brief Report: Broad Autism Phenotype in Adults Is Associated with Performance on an Eye-Tracking Measure of Joint Attention

    Science.gov (United States)

    Swanson, Meghan R.; Siller, Michael

    2014-01-01

    The current study takes advantage of modern eye-tracking technology and evaluates how individuals allocate their attention when viewing social videos that display an adult model who is gazing at a series of targets that appear and disappear in the four corners of the screen (congruent condition), or gazing elsewhere (incongruent condition). Data…

  7. Gaze movements and spatial working memory in collision avoidance: a traffic intersection task

    Directory of Open Access Journals (Sweden)

    Gregor eHardiess

    2013-06-01

    Full Text Available Street crossing under traffic is an everyday activity including collision detection as well as avoidance of objects in the path of motion. Such tasks demand extraction and representation of spatio-temporal information about relevant obstacles in an optimized format. Relevant task information is extracted visually by the use of gaze movements and represented in spatial working memory. In a virtual reality traffic intersection task, subjects are confronted with a two-lane intersection where cars are appearing with different frequencies, corresponding to high and low traffic densities. Under free observation and exploration of the scenery (using unrestricted eye and head movements) the overall task for the subjects was to predict the potential-of-collision (POC) of the cars or to adjust an adequate driving speed in order to cross the intersection without collision (i.e., to find the free space for crossing). In a series of experiments, gaze movement parameters, task performance, and the representation of car positions within working memory at distinct time points were assessed in normal subjects as well as in neurological patients suffering from homonymous hemianopia. In the following, we review the findings of these experiments together with other studies and provide a new perspective of the role of gaze behavior and spatial memory in collision detection and avoidance, focusing on the following questions: (i) which sensory variables can be identified supporting adequate collision detection? (ii) How do gaze movements and working memory contribute to collision avoidance when multiple moving objects are present, and (iii) how do they correlate with task performance? (iv) How do patients with homonymous visual field defects use gaze movements and working memory to compensate for visual field loss? In conclusion, we extend the theory of collision detection and avoidance in the case of multiple moving objects and provide a new perspective on the combined

  8. Visual Data Mining: An Exploratory Approach to Analyzing Temporal Patterns of Eye Movements

    Science.gov (United States)

    Yu, Chen; Yurovsky, Daniel; Xu, Tian

    2012-01-01

    Infant eye movements are an important behavioral resource to understand early human development and learning. But the complexity and amount of gaze data recorded from state-of-the-art eye-tracking systems also pose a challenge: how does one make sense of such dense data? Toward this goal, this article describes an interactive approach based on…

  9. Perceptual Training in Beach Volleyball Defence: Different Effects of Gaze-Path Cueing on Gaze and Decision-Making

    Directory of Open Access Journals (Sweden)

    André eKlostermann

    2015-12-01

    Full Text Available For perceptual-cognitive skill training, a variety of intervention methods has been proposed, including the so-called colour-cueing method, which aims at superior gaze-path learning by applying visual markers. However, recent findings challenge this method, especially with regard to its actual effects on gaze behaviour. Consequently, after a preparatory study on the identification of appropriate visual cues for life-size displays, a perceptual-training experiment on decision-making in beach volleyball was conducted, contrasting two cueing interventions (functional vs. dysfunctional gaze path) with a conservative control condition (anticipation-related instructions). Gaze analyses revealed learning effects for the dysfunctional group only. Regarding decision-making, all groups showed enhanced performance with largest improvements for the control group followed by the functional and the dysfunctional group. Hence, the results confirm cueing effects on gaze behaviour, but they also question its benefit for enhancing decision-making. However, before completely denying the method’s value, optimisations should be checked regarding, for instance, cueing-pattern characteristics and gaze-related feedback.

  10. When Art Moves the Eyes: A Behavioral and Eye-Tracking Study

    Science.gov (United States)

    Massaro, Davide; Savazzi, Federica; Di Dio, Cinzia; Freedberg, David; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella

    2012-01-01

    The aim of this study was to investigate, using eye-tracking technique, the influence of bottom-up and top-down processes on visual behavior while subjects, naïve to art criticism, were presented with representational paintings. Forty-two subjects viewed color and black and white paintings (Color) categorized as dynamic or static (Dynamism) (bottom-up processes). Half of the images represented natural environments and half human subjects (Content); all stimuli were displayed under aesthetic and movement judgment conditions (Task) (top-down processes). Results on gazing behavior showed that content-related top-down processes prevailed over low-level visually-driven bottom-up processes when a human subject is represented in the painting. On the contrary, bottom-up processes, mediated by low-level visual features, particularly affected gazing behavior when looking at nature-content images. We discuss our results proposing a reconsideration of the definition of content-related top-down processes in accordance with the concept of embodied simulation in art perception. PMID:22624007

  11. Eye movements and manual interception of ballistic trajectories: effects of law of motion perturbations and occlusions.

    Science.gov (United States)

    Delle Monache, Sergio; Lacquaniti, Francesco; Bosco, Gianfranco

    2015-02-01

    Manual interceptions are known to depend critically on integration of visual feedback information and experience-based predictions of the interceptive event. Within this framework, coupling between gaze and limb movements might also contribute to the interceptive outcome, since eye movements afford acquisition of high-resolution visual information. We investigated this issue by analyzing subjects' head-fixed oculomotor behavior during manual interceptions. Subjects moved a mouse cursor to intercept computer-generated ballistic trajectories either congruent with Earth's gravity or perturbed with weightlessness (0 g) or hypergravity (2 g) effects. In separate sessions, trajectories were either fully visible or occluded before interception to enforce visual prediction. Subjects' oculomotor behavior was classified in terms of amounts of time they gazed at different visual targets and of overall number of saccades. Then, by way of multivariate analyses, we assessed the following: (1) whether eye movement patterns depended on targets' laws of motion and occlusions; and (2) whether interceptive performance was related to the oculomotor behavior. First, we found that eye movement patterns depended significantly on targets' laws of motion and occlusion, suggesting predictive mechanisms. Second, subjects coupled differently oculomotor and interceptive behavior depending on whether targets were visible or occluded. With visible targets, subjects made smaller interceptive errors if they gazed longer at the mouse cursor. Instead, with occluded targets, they achieved better performance by increasing the target's tracking accuracy and by avoiding gaze shifts near interception, suggesting that precise ocular tracking provided better trajectory predictions for the interceptive response.

  12. PyGaze: an open-source, cross-platform toolbox for minimal-effort programming of eye-tracking experiments

    NARCIS (Netherlands)

    Dalmaijer, E.S.; Mathôt, S.; van der Stigchel, S.

    2014-01-01

    The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and
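
    A minimal sketch of a PyGaze script, based on the toolbox's documented Display/Screen/EyeTracker classes; exact constructor arguments and backend selection may differ across versions:

        from pygaze.display import Display
        from pygaze.screen import Screen
        from pygaze.eyetracker import EyeTracker

        disp = Display()                    # opens the experiment window
        scr = Screen()
        tracker = EyeTracker(disp)          # backend is chosen via PyGaze settings
        tracker.calibrate()

        scr.draw_fixation(fixtype="cross")  # draw a fixation cross
        disp.fill(scr)
        disp.show()

        tracker.start_recording()
        x, y = tracker.sample()             # most recent gaze position (pixels)
        tracker.stop_recording()
        tracker.close()
        disp.close()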

  13. Multifunctional ZnO interfaces with hierarchical micro- and nanostructures: bio-inspiration from the compound eyes of butterflies

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Sha; Yang, Yefeng; Jin, Yizheng; Huang, Jingyun; Zhao, Binghui; Ye, Zhizhen [Zhejiang University, State Key Laboratory of Silicon Materials, Department of Materials Science and Engineering, Hangzhou (China)

    2010-07-15

    Multifunctional zinc oxide (ZnO) interfaces were fabricated by utilizing the technique of low-temperature metal-organic chemical vapor deposition (MOCVD). The ZnO interfacial material exhibits antiwetting, antireflectance, and photonic properties derived from the unique hierarchical micro- and nanostructures of butterfly compound eyes. We demonstrate that the fabrication of the multifunctional interfaces by using biotemplates can be applied to other materials, such as Pt. Our study provides an excellent example to obtain multifunctional interfaces by learning from nature. (orig.)

  14. ANALYSIS OF THE GAZE BEHAVIOUR OF THE WORKER ON THE CARBURETOR ASSEMBLY TASK

    Directory of Open Access Journals (Sweden)

    Novie Susanto

    2015-06-01

    Full Text Available This study presents an analysis of the area of interest (AOI) and the gaze behavior of humans during an assembly task. This study aims at investigating human behavior in detail using an eye-tracking system during an assembly task with LEGO bricks and an actual manufactured product, a carburetor. An analysis using heat map data derived from the videos recorded by the eye-tracking system is used to examine and investigate the gaze behavior. The results of this study show that the carburetor assembly requires more attention than the product made from LEGO bricks. About 50% of the participants experienced the need to visually inspect the interim state of the work object during the simulation of the assembly sequence on the screen. They also tended to want to be more certain about part fitting in the actual work object.
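
    A minimal sketch of how a duration-weighted gaze heat map of the kind used in the study can be accumulated; screen size, kernel width, and fixation data are placeholders:

        import numpy as np

        def heat_map(fixations, shape=(768, 1024), sigma=30.0):
            """Accumulate a duration-weighted Gaussian at each fixation position."""
            yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
            hm = np.zeros(shape)
            for x, y, dur in fixations:   # (x_px, y_px, duration_s)
                hm += dur * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
            return hm / hm.max()          # normalise to [0, 1] for display

        hm = heat_map([(500, 300, 0.45), (520, 310, 0.30), (200, 600, 0.15)])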

  15. Developing an eye-tracking algorithm as a potential tool for early diagnosis of autism spectrum disorder in children.

    Directory of Open Access Journals (Sweden)

    Natalia I Vargas-Cuentas

    Full Text Available Autism spectrum disorder (ASD) currently affects nearly 1 in 160 children worldwide. In over two-thirds of evaluations, no validated diagnostics are used, and gold-standard diagnostic tools are used in less than 5% of evaluations. Currently, the diagnosis of ASD requires lengthy and expensive tests, in addition to clinical confirmation. Therefore, fast, cheap, portable, and easy-to-administer screening instruments for ASD are required. Several studies have shown that children with ASD have a lower preference for social scenes compared with children without ASD. Based on this, eye-tracking and measurement of gaze preference for social scenes have been used as a screening tool for ASD. Currently available eye-tracking software requires intensive calibration, training, or holding of the head to prevent interference with gaze recognition, limiting its use in children with ASD. In this study, we designed a simple eye-tracking algorithm that does not require calibration or head holding, as a platform for future validation of a cost-effective ASD potential screening instrument. This system operates on a portable and inexpensive tablet to measure children's gaze preference for social compared to abstract scenes. A child watches a one-minute stimulus video composed of a social scene projected on the left side and an abstract scene projected on the right side of the tablet's screen. We designed five stimulus videos by changing the social/abstract scenes. Every child observed all five videos in random order. We developed an eye-tracking algorithm that calculates the child's gaze preference for the social and abstract scenes, estimated as the percentage of the accumulated time that the child observes the left or right side of the screen, respectively. Twenty-three children without a prior history of ASD and 8 children with a clinical diagnosis of ASD were evaluated. The recorded video of the child's eye movement was analyzed both manually by an observer
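
    The core measure described above, the percentage of accumulated viewing time per screen side, is simple to compute. The sketch below is a hypothetical illustration (the function name and per-frame label format are assumptions, not the study's code):

      # Hypothetical sketch of the gaze-preference measure: given a per-frame
      # classification of gaze as 'left' (social scene), 'right' (abstract
      # scene), or None (off-screen/blink), report percent viewing time per side.
      def gaze_preference(frame_labels):
          """frame_labels: list of 'left', 'right', or None, one per video frame."""
          valid = [s for s in frame_labels if s is not None]
          if not valid:
              return {'social': 0.0, 'abstract': 0.0}
          left = sum(1 for s in valid if s == 'left')
          return {
              'social': 100.0 * left / len(valid),   # % time on the social scene
              'abstract': 100.0 * (len(valid) - left) / len(valid),
          }

      # Example: a 60 s video at 30 fps would give 1800 labels; toy sequence here.
      print(gaze_preference(['left', 'left', 'right', None, 'left']))
      # -> {'social': 75.0, 'abstract': 25.0}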

  16. Preliminary Experience Using Eye-Tracking Technology to Differentiate Novice and Expert Image Interpretation for Ultrasound-Guided Regional Anesthesia.

    Science.gov (United States)

    Borg, Lindsay K; Harrison, T Kyle; Kou, Alex; Mariano, Edward R; Udani, Ankeet D; Kim, T Edward; Shum, Cynthia; Howard, Steven K

    2018-02-01

    Objective measures are needed to guide the novice's pathway to expertise. Within and outside medicine, eye tracking has been used for both training and assessment. We designed this study to test the hypothesis that eye tracking may differentiate novices from experts in static image interpretation for ultrasound (US)-guided regional anesthesia. We recruited novice anesthesiology residents and regional anesthesiology experts. Participants wore eye-tracking glasses, were shown 5 sonograms of US-guided regional anesthesia, and were asked a series of anatomy-based questions related to each image while their eye movements were recorded. The answer to each question was a location on the sonogram, defined as the area of interest (AOI). The primary outcome was the total gaze time in the AOI (seconds). Secondary outcomes were the total gaze time outside the AOI (seconds), total time to answer (seconds), and time to first fixation on the AOI (seconds). Five novices and 5 experts completed the study. Although the gaze time (mean ± SD) in the AOI was not different between groups (7 ± 4 seconds for novices and 7 ± 3 seconds for experts; P = .150), the gaze time outside the AOI was greater for novices (75 ± 18 versus 44 ± 4 seconds for experts; P = .005). The total time to answer and total time to first fixation in the AOI were both shorter for experts. Experts in US-guided regional anesthesia take less time to identify sonoanatomy and spend less unfocused time away from a target compared to novices. Eye tracking is a potentially useful tool to differentiate novices from experts in the domain of US image interpretation. © 2017 by the American Institute of Ultrasound in Medicine.
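
    The primary outcome above, total gaze time inside an AOI, can be accumulated from fixation records. The following hedged sketch assumes rectangular AOIs and a simple (x, y, duration) fixation format; the study's actual analysis pipeline is not described at this level of detail:

      # Hedged sketch: total gaze (dwell) time inside a rectangular AOI,
      # accumulated from fixation records. Record format is an assumption.
      from typing import List, Tuple

      def dwell_time_in_aoi(fixations: List[Tuple[float, float, float]],
                            aoi: Tuple[float, float, float, float]) -> float:
          """fixations: (x, y, duration in s); aoi: (x_min, y_min, x_max, y_max)."""
          x0, y0, x1, y1 = aoi
          return sum(d for x, y, d in fixations if x0 <= x <= x1 and y0 <= y <= y1)

      fixations = [(120, 80, 0.35), (420, 300, 1.20), (430, 310, 0.80)]
      print(dwell_time_in_aoi(fixations, aoi=(400, 280, 520, 360)))  # 2.0 s inside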

  17. Sideways displacement and curved path of recti eye muscles

    NARCIS (Netherlands)

    H.J. Simonsz (Huib); F. Harting (Friedrich); B.J. de Waal (Bob); B.W.J.M. Verbeeten (Ben)

    1985-01-01

    textabstractWe investigated the sideways displacement of recti muscles with the eye in various gaze-positions by making computed tomographic (CT) scans in a plane perpendicular to the muscle cone, posterior to the globe. We found no consistent sideways displacement of the horizontal recti in the up

  18. Gaze inspired subtitle position evaluation for MOOCs videos

    Science.gov (United States)

    Chen, Hongli; Yan, Mengzhen; Liu, Sijiang; Jiang, Bo

    2017-06-01

    Online educational resources, such as MOOCs, are becoming increasingly popular, especially in higher education. One of the most important media types for MOOCs is the course video. Besides the traditional bottom-positioned subtitles that accompany videos, researchers have in recent years tried to develop more advanced algorithms to generate speaker-following subtitles. However, the effectiveness of such subtitles is still unclear. In this paper, we investigate the relationship between subtitle position and the learning effect after watching videos on tablet devices. Inspired by image-based eye-tracking techniques, this work combines objective gaze-estimation statistics with a subjective user study to reach a convincing conclusion: speaker-following subtitles are more suitable for online educational videos.

  19. Who is the Usual Suspect? Evidence of a Selection Bias Toward Faces That Make Direct Eye Contact in a Lineup Task

    Science.gov (United States)

    van Golde, Celine; Verstraten, Frans A. J.

    2017-01-01

    The speed and ease with which we recognize the faces of our friends and family members belies the difficulty we have recognizing less familiar individuals. Nonetheless, overconfidence in our ability to recognize faces has carried over into various aspects of our legal system; for instance, eyewitness identification serves a critical role in criminal proceedings. For this reason, understanding the perceptual and psychological processes that underlie false identification is of the utmost importance. Gaze direction is a salient social signal and direct eye contact, in particular, is thought to capture attention. Here, we tested the hypothesis that differences in gaze direction may influence difficult decisions in a lineup context. In a series of experiments, we show that when a group of faces differed in their gaze direction, the faces that were making eye contact with the participants were more likely to be misidentified. Interestingly, this bias disappeared when the faces were presented with their eyes closed. These findings open a critical conversation between social neuroscience and forensic psychology, and imply that direct eye contact may (wrongly) increase the perceived familiarity of a face. PMID:28203355

  20. Who is the Usual Suspect? Evidence of a Selection Bias Toward Faces That Make Direct Eye Contact in a Lineup Task

    Directory of Open Access Journals (Sweden)

    Jessica Taubert

    2017-02-01

    Full Text Available The speed and ease with which we recognize the faces of our friends and family members belies the difficulty we have recognizing less familiar individuals. Nonetheless, overconfidence in our ability to recognize faces has carried over into various aspects of our legal system; for instance, eyewitness identification serves a critical role in criminal proceedings. For this reason, understanding the perceptual and psychological processes that underlie false identification is of the utmost importance. Gaze direction is a salient social signal and direct eye contact, in particular, is thought to capture attention. Here, we tested the hypothesis that differences in gaze direction may influence difficult decisions in a lineup context. In a series of experiments, we show that when a group of faces differed in their gaze direction, the faces that were making eye contact with the participants were more likely to be misidentified. Interestingly, this bias disappeared when the faces were presented with their eyes closed. These findings open a critical conversation between social neuroscience and forensic psychology, and imply that direct eye contact may (wrongly) increase the perceived familiarity of a face.

  1. Wrist-worn pervasive gaze interaction

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Lund, Haakon; Biermann, Florian

    2016-01-01

    This paper addresses gaze interaction for smart home control, conducted from a wrist-worn unit. First, we asked ten people to enact the gaze movements they would propose for, e.g., opening a door or adjusting the room temperature. On the basis of their suggestions we built and tested different versions...... selection. Their subjective evaluations were positive with regard to the speed of the interaction. We conclude that gaze gesture input seems feasible for fast and brief remote control of smart home technology provided that robustness of tracking is improved....

  2. The Interplay between Gaze Following, Emotion Recognition, and Empathy across Adolescence; a Pubertal Dip in Performance?

    Directory of Open Access Journals (Sweden)

    Rianne van Rooijen

    2018-02-01

    Full Text Available During puberty a dip in face recognition is often observed, possibly caused by heightened levels of gonadal hormones, which in turn affect the re-organization of relevant cortical circuitry. In the current study we investigated whether a pubertal dip could be observed in three other abilities related to social information processing: gaze following, emotion recognition from the eyes, and empathizing abilities. Across these abilities we further explored whether these measurements revealed sex differences, as another way to understand how gonadal hormones affect processing of social information. Results show that across adolescence there are improvements in emotion recognition from the eyes and in empathizing abilities. These improvements did not show a dip but were instead plateau-like. The gaze cueing effect did not change over adolescence. We only observed sex differences in empathizing abilities, with girls showing higher scores than boys. Based on these results, it appears that gonadal hormones do not exert a unified influence on higher levels of social information processing. Further research should also explore changes in (visual) information processing around puberty onset to find a better-fitting explanation for changes in social behavior across adolescence.

  3. Gaze-independent BCI-spelling using rapid serial visual presentation (RSVP).

    Science.gov (United States)

    Acqualagna, Laura; Blankertz, Benjamin

    2013-05-01

    A Brain Computer Interface (BCI) speller is a communication device that can be used by patients suffering from neurodegenerative diseases to select symbols in a computer application. For patients unable to overtly fixate the target symbol, it is crucial to develop a speller independent of gaze shifts. In the present online study, we investigated rapid serial visual presentation (RSVP) as a paradigm for mental typewriting. We investigated the RSVP speller in three conditions, varying the Stimulus Onset Asynchrony (SOA) and the use of color features. A vocabulary of 30 symbols was presented one-by-one in a pseudo-random sequence at the same location of the display. All twelve participants were able to successfully operate the RSVP speller. The results show a mean online spelling rate of 1.43 symb/min and a mean symbol selection accuracy of 94.8% in the best condition. We conclude that RSVP is a promising paradigm for BCI spelling and its performance is competitive with the fastest gaze-independent spellers in the literature. The RSVP speller does not require gaze shifts towards different target locations and can be operated by non-spatial visual attention; therefore it can be considered a valid paradigm in applications for patients with impaired oculomotor control. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
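
    As an illustration of the presentation scheme, the sketch below generates a pseudo-random RSVP stream over a 30-symbol vocabulary at a fixed SOA. The vocabulary, SOA value, and loop structure are assumptions for the sketch, not the study's implementation:

      # Illustrative RSVP stream: 30 symbols shown one-by-one at a single
      # location, each block a fresh pseudo-random permutation of the vocabulary.
      import random
      import time

      VOCAB = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ_.,!")   # 30 symbols (assumed set)
      SOA = 0.25                                        # seconds between onsets

      def rsvp_sequence(vocab, n_repetitions=10, seed=None):
          """Yield pseudo-random permutations; each symbol appears once per block."""
          rng = random.Random(seed)
          for _ in range(n_repetitions):
              block = vocab[:]
              rng.shuffle(block)
              yield from block

      for symbol in rsvp_sequence(VOCAB, n_repetitions=1, seed=42):
          print(symbol)        # in a real speller: draw the symbol at screen centre
          time.sleep(SOA)      # wait one SOA before the next onset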

  4. Deficient gaze pattern during virtual multiparty conversation in patients with schizophrenia.

    Science.gov (United States)

    Han, Kiwan; Shin, Jungeun; Yoon, Sang Young; Jang, Dong-Pyo; Kim, Jae-Jin

    2014-06-01

    Virtual reality has been used to measure abnormal social characteristics, particularly in one-to-one situations. In real life, however, conversations with multiple companions are common and more complicated than two-party conversations. In this study, we explored the features of social behaviors in patients with schizophrenia during virtual multiparty conversations. Twenty-three patients with schizophrenia and 22 healthy controls performed the virtual three-party conversation task, which included leading and aiding avatars, positive- and negative-emotion-laden situations, and listening and speaking phases. Patients showed a significant negative correlation in the listening phase between the amount of gaze on the between-avatar space and reasoning ability, and demonstrated increased gaze on the between-avatar space in the speaking phase that was uncorrelated with attentional ability. These results suggest that patients with schizophrenia have active avoidance of eye contact during three-party conversations. Virtual reality may provide a useful way to measure abnormal social characteristics during multiparty conversations in schizophrenia. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. How do we update faces? Effects of gaze direction and facial expressions on working memory updating

    Directory of Open Access Journals (Sweden)

    Caterina Artuso

    2012-09-01

    Full Text Available The aim of the study was to investigate how the biological binding between different facial dimensions, and their social and communicative relevance, may impact updating processes in working memory (WM). We focused on WM updating because it plays a key role in ongoing processing. Gaze direction and facial expression are crucial and changeable components of face processing. Direct gaze enhances the processing of approach-oriented facial emotional expressions (e.g., joy), while averted gaze enhances the processing of avoidance-oriented facial emotional expressions (e.g., fear). Thus, the way in which these two facial dimensions are combined communicates to the observer important behavioral and social information. Updating of these two facial dimensions and their bindings has not been investigated before, despite the fact that they provide a piece of social information essential for building and maintaining an internal ongoing representation of our social environment. In Experiment 1 we created a task in which the binding between gaze direction and facial expression was manipulated: high binding conditions (e.g., joy-direct gaze) were compared to low binding conditions (e.g., joy-averted gaze). Participants had to study and update continuously a number of faces, displaying different bindings between the two dimensions. In Experiment 2 we tested whether updating was affected by the social and communicative value of the facial dimension binding; to this end, we manipulated bindings between eye and hair color, two less communicative facial dimensions. Two new results emerged. First, faster response times were found in updating combinations of facial dimensions highly bound together. Second, our data showed that the ease of the ongoing updating processing varied depending on the communicative meaning of the binding that had to be updated. The results are discussed with reference to the role of WM updating in social cognition and appraisal processes.

  6. How do we update faces? Effects of gaze direction and facial expressions on working memory updating.

    Science.gov (United States)

    Artuso, Caterina; Palladino, Paola; Ricciardelli, Paola

    2012-01-01

    The aim of the study was to investigate how the biological binding between different facial dimensions, and their social and communicative relevance, may impact updating processes in working memory (WM). We focused on WM updating because it plays a key role in ongoing processing. Gaze direction and facial expression are crucial and changeable components of face processing. Direct gaze enhances the processing of approach-oriented facial emotional expressions (e.g., joy), while averted gaze enhances the processing of avoidance-oriented facial emotional expressions (e.g., fear). Thus, the way in which these two facial dimensions are combined communicates to the observer important behavioral and social information. Updating of these two facial dimensions and their bindings has not been investigated before, despite the fact that they provide a piece of social information essential for building and maintaining an internal ongoing representation of our social environment. In Experiment 1 we created a task in which the binding between gaze direction and facial expression was manipulated: high binding conditions (e.g., joy-direct gaze) were compared to low binding conditions (e.g., joy-averted gaze). Participants had to study and update continuously a number of faces, displaying different bindings between the two dimensions. In Experiment 2 we tested whether updating was affected by the social and communicative value of the facial dimension binding; to this end, we manipulated bindings between eye and hair color, two less communicative facial dimensions. Two new results emerged. First, faster response times were found in updating combinations of facial dimensions highly bound together. Second, our data showed that the ease of the ongoing updating processing varied depending on the communicative meaning of the binding that had to be updated. The results are discussed with reference to the role of WM updating in social cognition and appraisal processes.

  7. Creating Gaze Annotations in Head Mounted Displays

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Qvarfordt, Pernilla

    2015-01-01

    To facilitate distributed communication in mobile settings, we developed GazeNote for creating and sharing gaze annotations in head mounted displays (HMDs). With gaze annotations it is possible to point out objects of interest within an image and add a verbal description. To create an annotation...

  8. Neurocognitive mechanisms of gaze-expression interactions in face processing and social attention.

    Science.gov (United States)

    Graham, Reiko; Labar, Kevin S

    2012-04-01

    The face conveys a rich source of non-verbal information used during social communication. While research has revealed how specific facial channels such as emotional expression are processed, little is known about the prioritization and integration of multiple cues in the face during dyadic exchanges. Classic models of face perception have emphasized the segregation of dynamic vs. static facial features along independent information processing pathways. Here we review recent behavioral and neuroscientific evidence suggesting that within the dynamic stream, concurrent changes in eye gaze and emotional expression can yield early independent effects on face judgments and covert shifts of visuospatial attention. These effects are partially segregated within initial visual afferent processing volleys, but are subsequently integrated in limbic regions such as the amygdala or via reentrant visual processing volleys. This spatiotemporal pattern may help to resolve otherwise perplexing discrepancies across behavioral studies of emotional influences on gaze-directed attentional cueing. Theoretical explanations of gaze-expression interactions are discussed, with special consideration of speed-of-processing (discriminability) and contextual (ambiguity) accounts. Future research in this area promises to reveal the mental chronometry of face processing and interpersonal attention, with implications for understanding how social referencing develops in infancy and is impaired in autism and other disorders of social cognition. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Controlling Attention to Gaze and Arrows in Childhood: An fMRI Study of Typical Development and Autism Spectrum Disorders

    Science.gov (United States)

    Vaidya, Chandan J.; Foss-Feig, Jennifer; Shook, Devon; Kaplan, Lauren; Kenworthy, Lauren; Gaillard, William D.

    2011-01-01

    Functional magnetic resonance imaging was used to examine functional anatomy of attention to social (eye gaze) and nonsocial (arrow) communicative stimuli in late childhood and in a disorder defined by atypical processing of social stimuli, Autism Spectrum Disorders (ASD). Children responded to a target word ("LEFT"/"RIGHT") in the context of a…

  10. Nonarteritic ischemic optic neuropathy secondary to severe ocular hypertension masked by interface fluid in a post-LASIK eye.

    Science.gov (United States)

    Pham, Mai T; Peck, Rachel E; Dobbins, Kendall R B

    2013-06-01

    We report a case of ischemic optic neuropathy arising from elevated intraocular pressure (IOP) masked by interface fluid in a post-laser in situ keratomileusis (LASIK) eye. A 51-year-old man, who had had LASIK 6 years prior to presentation, sustained blunt trauma to the left eye that resulted in a hyphema and ocular hypertension. Elevated IOP resulted in accumulation of fluid in the stromal bed-LASIK flap interface, leading to underestimation of IOP when measured centrally over the flap. After days of unrecognized ocular hypertension, ischemic optic neuropathy developed. To our knowledge, this is the first reported case of ischemic optic neuropathy resulting from underestimated IOP measurements in a post-LASIK patient. It highlights the inaccuracy of IOP measurements in post-LASIK eyes and a vision-threatening potential complication. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2013 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  11. Predicting diagnostic error in radiology via eye-tracking and image analytics: Preliminary investigation in mammography

    International Nuclear Information System (INIS)

    Voisin, Sophie; Tourassi, Georgia D.; Pinto, Frank; Morin-Ducote, Garnetta; Hudson, Kathleen B.

    2013-01-01

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists' gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from four radiology residents and two breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BIRADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Machine learning can be used to predict diagnostic error by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model [area under the ROC curve (AUC) = 0.792 ± 0.030]. Personalized user modeling was far more accurate for the more experienced readers (AUC = 0.837 ± 0.029) than for the less experienced ones (AUC = 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted to a good extent by leveraging the radiologists' gaze behavior and image content.

  12. Predicting diagnostic error in radiology via eye-tracking and image analytics: Preliminary investigation in mammography

    Energy Technology Data Exchange (ETDEWEB)

    Voisin, Sophie; Tourassi, Georgia D. [Biomedical Science and Engineering Center, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Pinto, Frank [School of Engineering, Science, and Technology, Virginia State University, Petersburg, Virginia 23806 (United States); Morin-Ducote, Garnetta; Hudson, Kathleen B. [Department of Radiology, University of Tennessee Medical Center at Knoxville, Knoxville, Tennessee 37920 (United States)

    2013-10-15

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists' gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from four radiology residents and two breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BIRADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Machine learning can be used to predict diagnostic error by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model [area under the ROC curve (AUC) = 0.792 ± 0.030]. Personalized user modeling was far more accurate for the more experienced readers (AUC = 0.837 ± 0.029) than for the less experienced ones (AUC = 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted to a good extent by leveraging the radiologists' gaze behavior and image content.
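
    The modeling approach described in both records, merging gaze-behavior and image features and scoring a classifier by ROC AUC, can be sketched as follows. This is not the study's code; the feature counts, classifier choice, and random placeholder data are assumptions:

      # Hedged sketch: merge gaze and image features, evaluate by cross-validated
      # ROC AUC. Placeholder random data stands in for the study's features.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_cases = 240                                  # e.g. 40 cases x 6 readers
      gaze_feats = rng.normal(size=(n_cases, 8))     # dwell times, saccade counts...
      image_feats = rng.normal(size=(n_cases, 12))   # texture / BIRADS descriptors
      X = np.hstack([gaze_feats, image_feats])       # merged feature vector
      y = rng.integers(0, 2, size=n_cases)           # 1 = diagnostic error

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      auc = cross_val_score(clf, X, y, cv=5, scoring='roc_auc').mean()
      print(f"cross-validated AUC: {auc:.3f}")       # ~0.5 on random placeholders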

  13. Eye movements in interception with delayed visual feedback.

    Science.gov (United States)

    Cámara, Clara; de la Malla, Cristina; López-Moliner, Joan; Brenner, Eli

    2018-04-19

    The increased reliance on electronic devices such as smartphones in our everyday life exposes us to various delays between our actions and their consequences. Whereas it is known that people can adapt to such delays, the mechanisms underlying such adaptation remain unclear. To better understand these mechanisms, the current study explored the role of eye movements in interception with delayed visual feedback. In two experiments, eye movements were recorded as participants tried to intercept a moving target with their unseen finger while receiving delayed visual feedback about their own movement. In Experiment 1, the target randomly moved in one of two different directions at one of two different velocities. The delay between the participant's finger movement and movement of the cursor that provided feedback about the finger movements was gradually increased. Despite the delay, participants followed the target with their gaze. They were quite successful at hitting the target with the cursor. Thus, they moved their finger to a position that was ahead of where they were looking. Removing the feedback showed that participants had adapted to the delay. In Experiment 2, the target always moved in the same direction and at the same velocity, while the cursor's delay varied across trials. Participants still always directed their gaze at the target. They adjusted their movement to the delay on each trial, often succeeding to intercept the target with the cursor. Since their gaze was always directed at the target, and they could not know the delay until the cursor started moving, participants must have been using peripheral vision of the delayed cursor to guide it to the target. Thus, people deal with delays by directing their gaze at the target and using both experience from previous trials (Experiment 1) and peripheral visual information (Experiment 2) to guide their finger in a way that will make the cursor hit the target.

  14. Lens oscillations in the human eye. Implications for post-saccadic suppression of vision.

    Directory of Open Access Journals (Sweden)

    Juan Tabernero

    Full Text Available The eye changes gaze continuously from one visual stimulus to another. Using a high-speed camera to record eye and lens movements, we demonstrate how the crystalline lens sustains an inertial oscillatory decay movement immediately after every change of gaze. This behavior fits precisely with the movement of a classical damped harmonic oscillator. The time course of the oscillations ranges from 50 to 60 msec, with an oscillation frequency of around 20 Hz. That has dramatic implications for the image quality at the retina in the very short times (∼50 msec) that follow the movement. However, it is well known that our vision is nearly suppressed in those periods (post-saccadic suppression). Both phenomena follow similar time courses and therefore might be synchronized to avoid the visual impairment.
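
    The damped-harmonic-oscillator description lends itself to a short worked example: fitting x(t) = A * exp(-t/tau) * cos(2*pi*f*t + phi) to a displacement trace. The sketch below synthesizes data using the ~20 Hz and ~50-60 msec figures from the abstract; it is illustrative, not the authors' analysis code:

      # Fit a damped harmonic oscillator to a synthetic post-saccadic lens trace.
      import numpy as np
      from scipy.optimize import curve_fit

      def damped_oscillator(t, A, tau, f, phi):
          return A * np.exp(-t / tau) * np.cos(2 * np.pi * f * t + phi)

      t = np.linspace(0, 0.06, 600)                  # 60 ms window after a saccade
      true = damped_oscillator(t, A=1.0, tau=0.015, f=20.0, phi=0.0)
      noisy = true + 0.05 * np.random.default_rng(1).normal(size=t.size)

      popt, _ = curve_fit(damped_oscillator, t, noisy, p0=[1.0, 0.02, 15.0, 0.0])
      A, tau, f, phi = popt
      print(f"fitted frequency: {f:.1f} Hz, decay constant: {1e3 * tau:.1f} ms")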

  15. Cognitive assessment of executive functions using brain computer interface and eye-tracking

    Directory of Open Access Journals (Sweden)

    P. Cipresso

    2013-03-01

    Full Text Available New technologies to enable augmentative and alternative communication in Amyotrophic Lateral Sclerosis (ALS) have recently been used in several studies. However, a comprehensive battery for cognitive assessment has not been implemented yet. Brain computer interfaces are innovative systems able to generate a control signal from brain responses, conveying messages directly to a computer. Another available technology for communication purposes is the eye-tracker system, which conveys messages from eye movements to a computer. In this study we explored the use of these two technologies for the cognitive assessment of executive functions in a healthy population and in an ALS patient, also verifying usability, pleasantness, fatigue, and emotional aspects related to the setting. Our preliminary results may have interesting implications for both clinical practice (the availability of an effective tool for neuropsychological evaluation of ALS patients) and ethical issues.

  16. Owners' direct gazes increase dogs' attention-getting behaviors.

    Science.gov (United States)

    Ohkita, Midori; Nagasawa, Miho; Mogi, Kazutaka; Kikusui, Takefumi

    2016-04-01

    This study examined whether dogs gain information about humans' attention via their gaze and whether they change their attention-getting behaviors (i.e., whining and whimpering, looking at their owners' faces, pawing, and approaching their owners) in response to their owners' direct gazes. The results showed that when the owners gazed at their dogs, the durations of whining and whimpering and looking at the owners' faces were longer than when the owners averted their gazes. In contrast, there were no differences in duration of pawing and likelihood of approaching the owners between the direct and averted gaze conditions. Therefore, owners' direct gazes increased the behaviors that acted as distant signals and did not necessarily involve touching the owners. We suggest that dogs are sensitive to human gazes, and this sensitivity may act as an attachment signal to humans and may contribute to close relationships between humans and dogs. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Allocentrically implied target locations are updated in an eye-centred reference frame.

    Science.gov (United States)

    Thompson, Aidan A; Glover, Christopher V; Henriques, Denise Y P

    2012-04-18

    When reaching to remembered target locations following an intervening eye movement a systematic pattern of error is found indicating eye-centred updating of visuospatial memory. Here we investigated if implicit targets, defined only by allocentric visual cues, are also updated in an eye-centred reference frame as explicit targets are. Participants viewed vertical bars separated by varying distances, and horizontal lines of equivalently varying lengths, implying a "target" location at the midpoint of the stimulus. After determining the implied "target" location from only the allocentric stimuli provided, participants saccaded to an eccentric location, and reached to the remembered "target" location. Irrespective of the type of stimulus reaching errors to these implicit targets are gaze-dependent, and do not differ from those found when reaching to remembered explicit targets. Implicit target locations are coded and updated as a function of relative gaze direction with respect to those implied locations just as explicit targets are, even though no target is specifically represented. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  18. Eye-movements During Translation

    DEFF Research Database (Denmark)

    Balling, Laura Winther

    2013-01-01

    texts as well as both eye-tracking and keylogging data. Based on this database, I present a large-scale analysis of gaze on the source text based on 91 translators' translations of six different texts from English into four different target languages. I use mixed-effects modelling to compare from......, and variables indexing the alignment between the source and target texts. The results are related to current models of translation processes and reading and compared to a parallel analysis of production time....

  19. Driver fatigue alarm based on eye detection and gaze estimation

    Science.gov (United States)

    Sun, Xinghua; Xu, Lu; Yang, Jingyu

    2007-11-01

    The driver assistant system has attracted much attention as an essential component of intelligent transportation systems. One task of a driver assistant system is to prevent drivers from driving while fatigued. For fatigue detection, it is natural to utilize information about the eyes. Driver fatigue can be divided into two types: sleep with eyes closed and sleep with eyes open. Considering that fatigue detection draws on prior knowledge and probabilistic statistics, a dynamic Bayesian network is used as the analysis tool to perform the reasoning about fatigue. Two kinds of experiments were performed to verify the system's effectiveness: one based on video recorded in the laboratory and another based on video from a real driving situation. Ten persons participated in the test; in the laboratory all fatigue events were detected, while in the actual vehicle the detection ratio was about 85%. Experiments show that the proposed system works in most situations and the corresponding performance is satisfactory.
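
    The paper itself uses a dynamic Bayesian network for fatigue reasoning; as a much simpler, hypothetical illustration of eye-state-based drowsiness measures, the sketch below computes PERCLOS (the proportion of eye closure over a sliding window), a common proxy in this literature. The window length and alarm threshold are assumptions:

      # Hypothetical PERCLOS monitor: fraction of recent frames with eyes closed.
      from collections import deque

      class PerclosMonitor:
          def __init__(self, window_frames=900):         # e.g. 30 s at 30 fps
              self.states = deque(maxlen=window_frames)  # 1 = closed, 0 = open

          def update(self, eyes_closed: bool) -> float:
              self.states.append(1 if eyes_closed else 0)
              return sum(self.states) / len(self.states)

      monitor = PerclosMonitor(window_frames=5)
      for closed in [False, False, True, True, True]:
          perclos = monitor.update(closed)
      print(f"PERCLOS = {perclos:.2f}")   # 0.60; an alarm threshold would be tuned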

  20. The Neuroergonomics of Aircraft Cockpits: The Four Stages of Eye-Tracking Integration to Enhance Flight Safety

    Directory of Open Access Journals (Sweden)

    Vsevolod Peysakhovich

    2018-02-01

    Full Text Available Commercial aviation is currently one of the safest modes of transportation; however, human error is still one major contributing cause of aeronautical accidents and incidents. One promising avenue to further enhance flight safety is Neuroergonomics, an approach at the intersection of neuroscience, cognitive engineering and human factors, which aims to create better human–system interaction. Eye-tracking technology allows users to “monitor the monitoring” by providing insights into both pilots’ attentional distribution and underlying decisional processes. In this position paper, we identify and define a framework of four stages of step-by-step integration of eye-tracking systems in modern cockpits. Stage I concerns Pilot Training and Flight Performance Analysis on-ground; stage II proposes On-board Gaze Recordings as extra data for the “black box” recorders; stage III describes Gaze-Based Flight Deck Adaptation including warning and alerting systems; and, eventually, stage IV prophesies Gaze-Based Aircraft Adaptation including authority taking by the aircraft. We illustrate the potential of these four stages with descriptions of incidents or accidents that could certainly have been avoided thanks to eye-tracking. Estimated milestones for the integration of each stage are also proposed, together with a list of some implementation limitations. We believe that the research institutions and industrial actors of the domain will all benefit from the integration of the framework of the eye-tracking systems into cockpits.

  1. Fear of Negative Evaluation Influences Eye Gaze in Adolescents with Autism Spectrum Disorder: A Pilot Study

    Science.gov (United States)

    White, Susan W.; Maddox, Brenna B.; Panneton, Robin K.

    2015-01-01

    Social anxiety is common among adolescents with Autism Spectrum Disorder (ASD). In this modest-sized pilot study, we examined the relationship between social worries and gaze patterns to static social stimuli in adolescents with ASD (n = 15) and gender-matched adolescents without ASD (control; n = 18). Among cognitively unimpaired adolescents with…

  2. Eye Movements Reveal How Task Difficulty Moulds Visual Search

    Science.gov (United States)

    Young, Angela H.; Hulleman, Johan

    2013-01-01

    In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size…

  3. Effects of Facial Symmetry and Gaze Direction on Perception of Social Attributes: A Study in Experimental Art History

    Directory of Open Access Journals (Sweden)

    Per Olav Folgerø

    2016-09-01

    larger contrast between the gaze directions for profiles. Our findings indicate that many factors affect the impression of a face, and that eye contact in combination with face direction reinforces the general impression of portraits rather than determining it.

  4. Attention to eye contact in the West and East: autonomic responses and evaluative ratings.

    Science.gov (United States)

    Akechi, Hironori; Senju, Atsushi; Uibo, Helen; Kikuchi, Yukiko; Hasegawa, Toshikazu; Hietanen, Jari K

    2013-01-01

    Eye contact has a fundamental role in human social interaction. The special appearance of the human eye (i.e., white sclera contrasted with a coloured iris) implies the importance of detecting another person's face through eye contact. Empirical studies have demonstrated that faces making eye contact are detected quickly and processed preferentially (i.e., the eye contact effect). Such sensitivity to eye contact seems to be innate and universal among humans; however, several studies suggest that cultural norms affect eye contact behaviours. For example, Japanese individuals exhibit less eye contact than do individuals from Western European or North American cultures. However, how culture modulates eye contact behaviour is unclear. The present study investigated cultural differences in autonomic correlates of attentional orienting (i.e., heart rate) and looking time. Additionally, we examined evaluative ratings of eye contact with another real person, displaying an emotionally neutral expression, between participants from Western European (Finnish) and East Asian (Japanese) cultures. Our results showed that eye contact elicited stronger heart rate deceleration responses (i.e., attentional orienting), shorter looking times, and higher ratings of subjective feelings of arousal as compared to averted gaze in both cultures. Instead, cultural differences in the eye contact effect were observed in various evaluative responses regarding the stimulus faces (e.g., facial emotion, approachability etc.). The rating results suggest that individuals from an East Asian culture perceive another's face as being angrier, unapproachable, and unpleasant when making eye contact as compared to individuals from a Western European culture. The rating results also revealed that gaze direction (direct vs. averted) could influence perceptions about another person's facial affect and disposition. These results suggest that cultural differences in eye contact behaviour emerge from differential

  5. Attention to eye contact in the West and East: autonomic responses and evaluative ratings.

    Directory of Open Access Journals (Sweden)

    Hironori Akechi

    Full Text Available Eye contact has a fundamental role in human social interaction. The special appearance of the human eye (i.e., white sclera contrasted with a coloured iris) implies the importance of detecting another person's face through eye contact. Empirical studies have demonstrated that faces making eye contact are detected quickly and processed preferentially (i.e., the eye contact effect). Such sensitivity to eye contact seems to be innate and universal among humans; however, several studies suggest that cultural norms affect eye contact behaviours. For example, Japanese individuals exhibit less eye contact than do individuals from Western European or North American cultures. However, how culture modulates eye contact behaviour is unclear. The present study investigated cultural differences in autonomic correlates of attentional orienting (i.e., heart rate) and looking time. Additionally, we examined evaluative ratings of eye contact with another real person, displaying an emotionally neutral expression, between participants from Western European (Finnish) and East Asian (Japanese) cultures. Our results showed that eye contact elicited stronger heart rate deceleration responses (i.e., attentional orienting), shorter looking times, and higher ratings of subjective feelings of arousal as compared to averted gaze in both cultures. Instead, cultural differences in the eye contact effect were observed in various evaluative responses regarding the stimulus faces (e.g., facial emotion, approachability, etc.). The rating results suggest that individuals from an East Asian culture perceive another's face as being angrier, unapproachable, and unpleasant when making eye contact as compared to individuals from a Western European culture. The rating results also revealed that gaze direction (direct vs. averted) could influence perceptions about another person's facial affect and disposition. These results suggest that cultural differences in eye contact behaviour emerge from

  6. The Influences of Face Inversion and Facial Expression on Sensitivity to Eye Contact in High-Functioning Adults with Autism Spectrum Disorders

    Science.gov (United States)

    Vida, Mark D.; Maurer, Daphne; Calder, Andrew J.; Rhodes, Gillian; Walsh, Jennifer A.; Pachai, Matthew V.; Rutherford, M. D.

    2013-01-01

    We examined the influences of face inversion and facial expression on sensitivity to eye contact in high-functioning adults with and without an autism spectrum disorder (ASD). Participants judged the direction of gaze of angry, fearful, and neutral faces. In the typical group only, the range of directions of gaze leading to the perception of eye…

  7. Electronic medical records in diabetes consultations: participants' gaze as an interactional resource.

    Science.gov (United States)

    Rhodes, Penny; Small, Neil; Rowley, Emma; Langdon, Mark; Ariss, Steven; Wright, John

    2008-09-01

    Two routine consultations in primary care diabetes clinics are compared using extracts from video recordings of interactions between nurses and patients. The consultations were chosen to present different styles of interaction, in which the nurse's gaze was either primarily toward the computer screen or directed more toward the patient. Using conversation analysis, the ways in which nurses shift both gaze and body orientation between the computer screen and patient to influence the style, pace, content, and structure of the consultation were investigated. By examining the effects of different levels of engagement between the electronic medical record and the embodied patient in the consultation room, we argue for the need to consider the contingent nature of the interface of technology and the person in the consultation. Policy initiatives designed to deliver what is considered best-evidenced practice are modified in the micro context of the interactions of the consultation.

  8. Effect that Smell Presentation Has on an Individual in Regards to Eye Catching and Memory

    Science.gov (United States)

    Tomono, Akira; Kanda, Koyori; Otake, Syunya

    If a person's eyes are more strongly attracted to target objects when a smell is matched to an important scene of a movie or commercial image, the value of the image contents will rise. In this paper, we describe an image system that can also present smells and, based on gaze-point analysis, the reasons why the presence of a smell improves viewing when it is matched to the image. The relationship between eye-catching properties and the position of the viewed object was examined using an image of a scene in which someone eats three kinds of fruit. These objects were gazed at longer once their smells were released. When the smell was not released, the gaze moved actively to gather information from the entire screen. On the other hand, when the smell was presented, the subject was interested in the corresponding object and their gaze tended to stay within the narrow area surrounding it in the image. Moreover, we investigated the effect on memory by attaching a smell to the flowers in a virtual flower shop using an immersive virtual reality system (HoloStageTM). The flowers were memorized more easily than in the scentless case. It seems that the viewer actively obtains information by reacting to smell.

  9. Wolves (Canis lupus) and Dogs (Canis familiaris) Differ in Following Human Gaze Into Distant Space But Respond Similar to Their Packmates’ Gaze

    Science.gov (United States)

    Werhahn, Geraldine; Virányi, Zsófia; Barrera, Gabriela; Sommese, Andrea; Range, Friederike

    2017-01-01

    Gaze following into distant space is defined as visual co-orientation with another individual’s head direction allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills might have been altered through domestication that may have influenced their performance in Study 1. Because following human gaze in dogs might be influenced by special evolutionary as well as developmental adaptations to interactions with humans, we suggest that comparing dogs to other animal species might be more informative when done in intraspecific social contexts. PMID:27244538

  10. Wolves (Canis lupus) and dogs (Canis familiaris) differ in following human gaze into distant space but respond similar to their packmates' gaze.

    Science.gov (United States)

    Werhahn, Geraldine; Virányi, Zsófia; Barrera, Gabriela; Sommese, Andrea; Range, Friederike

    2016-08-01

    Gaze following into distant space is defined as visual co-orientation with another individual's head direction allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills might have been altered through domestication that may have influenced their performance in Study 1. Because following human gaze in dogs might be influenced by special evolutionary as well as developmental adaptations to interactions with humans, we suggest that comparing dogs to other animal species might be more informative when done in intraspecific social contexts. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Examining Vision and Attention in Sports Performance Using a Gaze-Contingent Paradigm

    Directory of Open Access Journals (Sweden)

    Donghyun Ryu

    2012-10-01

    Full Text Available In time-constrained activities, such as competitive sports, the rapid acquisition and comprehension of visual information are vital for successful performance. Currently our understanding of how and what visual information is acquired, and how this changes with skill development, is quite rudimentary. Interpretation of eye movement behaviour is limited by uncertainties surrounding the relationship between attention, line-of-gaze data, and the mechanism of information pick-up from different sectors of the visual field. We used gaze-contingent display methodology to selectively present information to the central and peripheral parts of the visual field while participants performed a decision-making task. Eleven skilled and 11 less-skilled players watched videos of basketball scenarios under three different vision conditions (tunnel, masked, and full vision) and in a forced-choice paradigm responded whether it was more appropriate for the ball carrier to pass or drive. In the tunnel and mask conditions vision was selectively restricted to, or occluded from, 5° around the line of gaze, respectively. The skilled players showed significantly higher response accuracy and faster response times compared to their lesser-skilled counterparts irrespective of the vision condition, demonstrating the skilled players' superiority in information extraction irrespective of the sector of the visual field they rely on. Findings suggest that the capability to interpret visual information, rather than the sector of the visual field in which the information is detected, appears to be the key limiting factor for expert performance.
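
    The tunnel/mask manipulation can be sketched as a gaze-contingent window applied to each video frame. The pixel radius corresponding to 5° depends on viewing distance and display geometry, so the value below is an assumption, as is the whole implementation:

      # Illustrative gaze-contingent window: 'tunnel' keeps only a circular
      # region around the gaze point, 'mask' occludes that region instead.
      import numpy as np

      def gaze_contingent(frame, gaze_xy, radius_px=100, mode="tunnel"):
          """frame: HxW(xC) array; gaze_xy: (x, y) in pixels."""
          h, w = frame.shape[:2]
          ys, xs = np.ogrid[:h, :w]
          inside = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= radius_px ** 2
          out = frame.copy()
          if mode == "tunnel":
              out[~inside] = 0     # black out everything outside the window
          else:                    # "mask"
              out[inside] = 0      # occlude the window around fixation
          return out

      frame = np.full((480, 640), 255, dtype=np.uint8)
      masked = gaze_contingent(frame, gaze_xy=(320, 240), radius_px=100, mode="mask")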

  12. Eye Carduino: A Car Control System using Eye Movements

    Science.gov (United States)

    Kumar, Arjun; Nagaraj, Disha; Louzardo, Joel; Hegde, Rajeshwari

    2011-12-01

    Modern automotive systems are rapidly becoming highly defined and characterized by embedded electronics and software. With new technologies, the vehicle industry is facing new opportunities and also new challenges. Electronics have improved the performance of vehicles and, at the same time, new more complex applications are introduced. Examples of high-level applications include adaptive cruise control and electronic stability programs (ESP). Further, a modern vehicle does not have to be merely a means of transportation, but can be a web-integrated media centre. This paper explains the implementation of a vehicle control system using only eye movements. The EyeWriter's native hardware and software work to return the co-ordinates of where the user is looking. These co-ordinates are then used to control the car. A centre-point is defined on the screen. The higher on the screen the user's gaze is, the faster the car will accelerate. Braking is done by looking below centre. Steering is done by looking left and right on the screen.
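
    The control mapping described above (higher gaze accelerates, below-centre gaze brakes, horizontal offset steers) can be sketched as a small function. The normalisation and dead-zone are assumptions added for the illustration:

      # Hedged sketch of a gaze-to-vehicle-control mapping around a screen centre.
      def gaze_to_controls(gaze_x, gaze_y, screen_w, screen_h, dead_zone=0.05):
          """Return (throttle, brake, steering): throttle/brake in [0, 1],
          steering in [-1, 1]."""
          dx = (gaze_x - screen_w / 2) / (screen_w / 2)   # -1 (left) .. +1 (right)
          dy = (screen_h / 2 - gaze_y) / (screen_h / 2)   # +1 (top) .. -1 (bottom)
          throttle = max(0.0, dy) if abs(dy) > dead_zone else 0.0
          brake = max(0.0, -dy) if abs(dy) > dead_zone else 0.0
          steering = dx if abs(dx) > dead_zone else 0.0
          return throttle, brake, steering

      # Gaze near the top-right of a 640x480 screen: accelerate and steer right.
      print(gaze_to_controls(560, 60, 640, 480))   # -> (0.75, 0.0, 0.75)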

  13. Postural control and head stability during natural gaze behaviour in 6- to 12-year-old children.

    Science.gov (United States)

    Schärli, A M; van de Langenberg, R; Murer, K; Müller, R M

    2013-06-01

    We investigated how the influence of natural exploratory gaze behaviour on postural control develops from childhood into adulthood. In a cross-sectional design, we compared four age groups: 6-, 9-, and 12-year-olds and young adults. Two experimental trials were performed: quiet stance with a fixed gaze (fixed) and quiet stance with natural exploratory gaze behaviour (exploratory). The latter was elicited by having participants watch an animated short film on a large screen in front of them. 3D head rotations in space and centre of pressure (COP) excursions on the ground plane were measured. Across conditions, both head rotation and COP displacement decreased with increasing age. Head movement was greater in the exploratory condition in all age groups. In all children, but not in adults, COP displacement was markedly greater in the exploratory condition. Bivariate correlations across groups showed highly significant positive correlations between COP displacement in the ML direction and head rotation in yaw, roll, and pitch in both conditions. The regularity of COP displacements did not show a clear developmental trend, which indicates that COP dynamics were qualitatively similar across age groups. Together, the results suggest that the contribution of head movement to eye-head saccades decreases with age and that head instability, in part resulting from such gaze-related head movements, is an important limiting factor in children's postural control. The lack of head stabilisation might particularly affect children in everyday activities in which both postural control and visual exploration are required.

  14. LAND WHERE YOU LOOK? – FUNCTIONAL RELATIONSHIPS BETWEEN GAZE AND MOVEMENT BEHAVIOUR IN A BACKWARD SALTO

    Directory of Open Access Journals (Sweden)

    Thomas Heinen

    2012-07-01

    Full Text Available In most everyday actions the eyes look towards objects and locations they are engaged with in a specific task and this information is used to guide the corresponding action. The question is, however, whether this strategy also holds for skills incorporating a whole-body rotation in sport. Therefore, the goal of this study was to investigate relationships between gaze behaviour and movement behaviour in a complex gymnastics skill, namely the backward salto performed as a dismount on the uneven bars. Thirteen expert gymnasts were instructed to fixate a light spot on the landing mat during the downswing phase when performing a backward salto as dismount. The location of the light spot was varied systematically with regard to each gymnast’s individual landing distance. Time-discrete kinematic parameters of the swing motion and the dismount were measured. It was expected that fixating the gaze towards different locations of the light spot on the landing mat would directly affect the landing location. We had, however, no specific predictions on the effects of manipulating gaze direction on the remaining kinematic parameters. The hip angle at the top of the backswing, the duration of the downswing phase, the hip angle prior to kick-through, and the landing distance varied clearly as a function of the location of the light spot. It is concluded that fixating the gaze towards the landing mat serves the function to execute the skill in a way to land on a particular location.

  15. Mental state attribution and the gaze cueing effect.

    Science.gov (United States)

    Cole, Geoff G; Smith, Daniel T; Atkinson, Mark A

    2015-05-01

    Theory of mind is said to be possessed by an individual if he or she is able to impute mental states to others. Recently, some authors have demonstrated that such mental state attributions can mediate the "gaze cueing" effect, in which observation of another individual shifts an observer's attention. One question that follows from this work is whether such mental state attributions produce mandatory modulations of gaze cueing. Employing the basic gaze cueing paradigm, together with a technique commonly used to assess mental-state attribution in nonhuman animals, we manipulated whether the gazing agent could see the same thing as the participant (i.e., the target) or had this view obstructed by a physical barrier. We found robust gaze cueing effects, even when the observed agent in the display could not see the same thing as the participant. These results suggest that the attribution of "seeing" does not necessarily modulate the gaze cueing effect.

  16. Gaze-Aware Streaming Solutions for the Next Generation of Mobile VR Experiences.

    Science.gov (United States)

    Lungaro, Pietro; Sjoberg, Rickard; Valero, Alfredo Jose Fanghella; Mittal, Ashutosh; Tollmar, Konrad

    2018-04-01

    This paper presents a novel approach to content delivery for video streaming services. It exploits information from connected eye-trackers embedded in the next generation of VR Head Mounted Displays (HMDs). The proposed solution aims to deliver high visual quality, in real time, around the users' fixation points while lowering the quality everywhere else. The goal of the proposed approach is to substantially reduce the overall bandwidth requirements for supporting VR video experiences while delivering high levels of user perceived quality. The prerequisites to achieve these results are: (1) mechanisms that can cope with different degrees of latency in the system and (2) solutions that support fast adaptation of video quality in different parts of a frame, without requiring a large increase in bitrate. A novel codec configuration, capable of supporting near-instantaneous video quality adaptation in specific portions of a video frame, is presented. The proposed method exploits in-built properties of HEVC encoders and, while it introduces a moderate amount of error, these errors are undetectable by users. Fast adaptation is the key to enabling gaze-aware streaming and its reduction in bandwidth. A testbed implementing gaze-aware streaming, together with a prototype HMD with an in-built eye tracker, is presented and was used for testing with real users. The studies quantified the bandwidth savings achievable by the proposed approach and characterized the relationships between Quality of Experience (QoE) and network latency. The results showed that up to 83% less bandwidth is required to deliver high QoE levels to the users, as compared to conventional solutions.
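
    A minimal sketch of the core idea, gaze-dependent quality allocation: each HEVC-style tile gets a quantisation parameter (QP) that grows with its distance from the current fixation, so quality is high around the gaze and lower elsewhere. The tile grid, QP range and fovea radius below are illustrative assumptions, not the paper's configuration (Python):

        import math

        def tile_quality_map(fix_x, fix_y, frame_w, frame_h, tiles_x=8, tiles_y=4,
                             qp_best=22, qp_worst=42, radius=0.15):
            """Return a per-tile QP: low QP (high quality) near fixation."""
            tw, th = frame_w / tiles_x, frame_h / tiles_y
            fovea = radius * math.hypot(frame_w, frame_h)  # assumed foveal extent
            qps = []
            for ty in range(tiles_y):
                row = []
                for tx in range(tiles_x):
                    cx, cy = (tx + 0.5) * tw, (ty + 0.5) * th
                    d = math.hypot(cx - fix_x, cy - fix_y)
                    w = min(1.0, d / (3 * fovea))  # 0 at fixation, 1 far away
                    row.append(round(qp_best + w * (qp_worst - qp_best)))
                qps.append(row)
            return qps

        for row in tile_quality_map(960, 540, 1920, 1080):
            print(row)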

  17. Eye movements reveal epistemic curiosity in human observers.

    Science.gov (United States)

    Baranes, Adrien; Oudeyer, Pierre-Yves; Gottlieb, Jacqueline

    2015-12-01

    Saccadic (rapid) eye movements are a primary means by which humans and non-human primates sample visual information. However, while saccadic decisions are intensively investigated in instrumental contexts where saccades guide subsequent actions, it is largely unknown how they may be influenced by curiosity - the intrinsic desire to learn. While saccades are sensitive to visual novelty and visual surprise, no study has examined their relation to epistemic curiosity - interest in symbolic, semantic information. To investigate this question, we tracked the eye movements of human observers while they read trivia questions and, after a brief delay, were visually given the answer. We show that higher curiosity was associated with earlier anticipatory orienting of gaze toward the answer location without changes in other metrics of saccades or fixations, and that these influences were distinct from those produced by variations in confidence and surprise. Across subjects, the enhancement of anticipatory gaze was correlated with measures of trait curiosity from personality questionnaires. Finally, a machine learning algorithm could predict curiosity in a cross-subject manner, relying primarily on statistical features of the gaze position before the answer onset and independently of covariations in confidence or surprise, suggesting potential practical applications for educational technologies, recommender systems and research in cognitive sciences. With this article, we provide full access to the annotated database, allowing readers to reproduce the results. Epistemic curiosity produces specific effects on oculomotor anticipation that can be used to read out curiosity states. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
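
    The cross-subject prediction reported above can be pictured as a leave-one-subject-out evaluation over pre-answer gaze statistics. The sketch below shows only that evaluation scheme, with random stand-in data and hypothetical features; it is not the authors' algorithm (Python, scikit-learn):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

        rng = np.random.default_rng(0)
        n_trials, n_subjects = 400, 20
        # Hypothetical per-trial features, e.g. mean gaze distance to the answer
        # location, time of first look toward it, and fixation count.
        X = rng.normal(size=(n_trials, 3))
        y = rng.integers(0, 2, size=n_trials)                # high vs. low curiosity label
        groups = rng.integers(0, n_subjects, size=n_trials)  # subject identity

        scores = cross_val_score(LogisticRegression(), X, y, groups=groups,
                                 cv=LeaveOneGroupOut())
        print(f"cross-subject accuracy: {scores.mean():.2f}")  # ~chance on random data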

  18. The Pattern of Sexual Interest of Female-to-Male Transsexual Persons With Gender Identity Disorder Does Not Resemble That of Biological Men: An Eye-Tracking Study.

    Science.gov (United States)

    Tsujimura, Akira; Kiuchi, Hiroshi; Soda, Tetsuji; Takezawa, Kentaro; Fukuhara, Shinichiro; Takao, Tetsuya; Sekiguchi, Yuki; Iwasa, Atsushi; Nonomura, Norio; Miyagawa, Yasushi

    2017-09-01

    Very little has been elucidated about sexual interest in female-to-male (FtM) transsexual persons. To investigate the sexual interest of FtM transsexual persons vs that of men, we used an eye-tracking system. The study included 15 men and 13 FtM transsexual subjects who viewed three sexual videos (clip 1: sexy clothed young woman kissing the region of the male genitals covered by underwear; clip 2: naked actor and actress kissing and touching each other; and clip 3: heterosexual intercourse between a naked actor and actress) in which several regions were designated for eye-gaze analysis in each frame. The designation of each region was not visible to the participants. Visual attention was measured across each designated region according to gaze duration. For clip 1, there was a statistically significant sex difference in the viewing pattern between men and FtM transsexual subjects. The longest gaze time was for the eyes of the actress in men, whereas it was for non-human regions in FtM transsexual subjects. For clip 2, there also was a statistically significant sex difference. The longest gaze time was for the face of the actress in men, whereas it was for non-human regions in FtM transsexual subjects, and there was a significant difference between regions with the longest gaze time. The most apparent difference was in the gaze time for the body of the actor: the percentage of time spent gazing at the body of the actor was 8.35% in FtM transsexual subjects, whereas it was only 0.03% in men. For clip 3, there were no statistically significant differences in viewing patterns between men and FtM transsexual subjects, although the longest gaze time was for the face of the actress in men, whereas it was for non-human regions in FtM transsexual subjects. We suggest that the characteristics of sexual interest of FtM transsexual persons are not the same as those of biological men.

  19. Is the Theory of Mind deficit observed in visual paradigms in schizophrenia explained by an impaired attention toward gaze orientation?

    Science.gov (United States)

    Roux, Paul; Forgeot d'Arc, Baudoin; Passerieux, Christine; Ramus, Franck

    2014-08-01

    Schizophrenia is associated with poor Theory of Mind (ToM), particularly in goal and belief attribution to others. It is also associated with abnormal gaze behaviors toward others: individuals with schizophrenia usually look less to others' face and gaze, which are crucial epistemic cues that contribute to correct mental state inferences. This study tests the hypothesis that impaired ToM in schizophrenia might be related to a deficit in visual attention toward gaze orientation. We adapted a previous non-verbal ToM paradigm consisting of animated cartoons allowing the assessment of goal and belief attribution. In the true and false belief conditions, an object was displaced while an agent was either looking at it or away, respectively. Eye movements were recorded to quantify visual attention to gaze orientation (the proportion of time participants spent looking at the head of the agent while the target object changed locations). 29 patients with schizophrenia and 29 matched controls were tested. Compared to controls, patients looked significantly less at the agent's head and had lower performance in belief and goal attribution. Performance in belief and goal attribution significantly increased with the head-looking percentage. When the head-looking percentage was entered as a covariate, the group effect on belief and goal attribution performance was no longer significant. Patients' deficit on this visual ToM paradigm is thus entirely explained by decreased visual attention toward gaze. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Quadcopter flight control using a low-cost hybrid interface with EEG-based classification and eye tracking.

    Science.gov (United States)

    Kim, Byung Hyung; Kim, Minho; Jo, Sungho

    2014-08-01

    We propose a wearable hybrid interface where eye movements and mental concentration directly influence the control of a quadcopter in three-dimensional space. This noninvasive and low-cost interface addresses limitations of previous work by supporting users to complete their complicated tasks in a constrained environment in which only visual feedback is provided. The combination of the two inputs augments the number of control commands to enable the flying robot to travel in eight different directions within the physical environment. Five human subjects participated in the experiments to test the feasibility of the hybrid interface. A front view camera on the hull of the quadcopter provided the only visual feedback to each remote subject on a laptop display. Based on the visual feedback, the subjects used the interface to navigate along pre-set target locations in the air. The flight performance was evaluated by comparing with a keyboard-based interface. We demonstrate the applicability of the hybrid interface to explore and interact with a three-dimensional physical space through a flying robot. Copyright © 2014 Elsevier Ltd. All rights reserved.
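
    The augmented command set can be shown schematically: a four-way eye-movement direction combined with a binary EEG concentration state yields eight commands. The mapping below is an assumed example for illustration, not necessarily the one used in the paper (Python):

        EYE_DIRECTIONS = ("left", "right", "up", "down")

        def hybrid_command(eye_direction, concentrating):
            """Fuse an eye-movement direction with a binary concentration
            state to select one of eight quadcopter commands."""
            if eye_direction not in EYE_DIRECTIONS:
                return "hover"
            base = {"left": "move left", "right": "move right",
                    "up": "move forward", "down": "move backward"}
            alt = {"left": "turn left", "right": "turn right",
                   "up": "ascend", "down": "descend"}
            return alt[eye_direction] if concentrating else base[eye_direction]

        print(hybrid_command("up", concentrating=True))   # -> ascend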

  1. Eye position effects on the remapped memory trace of visual motion in cortical area MST.

    Science.gov (United States)

    Inaba, Naoko; Kawano, Kenji

    2016-02-23

    After a saccade, most MST neurons respond to moving visual stimuli that had existed in their post-saccadic receptive fields and turned off before the saccade ("trans-saccadic memory remapping"). Neuronal responses in higher visual processing areas are known to be modulated in relation to gaze angle to represent image location in spatiotopic coordinates. In the present study, we investigated the eye position effects after saccades and found that the gaze angle modulated the visual sensitivity of MST neurons after saccades both to the actually existing visual stimuli and to the visual memory traces remapped by the saccades. We suggest that two mechanisms, trans-saccadic memory remapping and gaze modulation, work cooperatively in individual MST neurons to represent a continuous visual world.

  2. Noise Challenges in Monomodal Gaze Interaction

    DEFF Research Database (Denmark)

    Skovsgaard, Henrik

    of life via dedicated interface tools especially tailored to the users’ needs (e.g., interaction, communication, e-mailing, web browsing and entertainment). Much effort has been put towards robustness, accuracy and precision of modern eye-tracking systems and there are many available on the market. Even...

  3. Embodied social robots trigger gaze following in real-time

    OpenAIRE

    Wiese, Eva; Weis, Patrick; Lofaro, Daniel

    2018-01-01

    In human-human interaction, we use information from gestures, facial expressions and gaze direction to make inferences about what interaction partners think, feel or intend to do next. Observing changes in gaze direction triggers shifts of attention to gazed-at locations and helps establish shared attention between gazer and observer - a prerequisite for more complex social skills like mentalizing, action understanding and joint action. The ability to follow others’ gaze develops early in lif...

  4. Adaptive gaze control for object detection

    NARCIS (Netherlands)

    De Croon, G.C.H.E.; Postma, E.O.; Van den Herik, H.J.

    2011-01-01

    We propose a novel gaze-control model for detecting objects in images. The model, named act-detect, uses the information from local image samples in order to shift its gaze towards object locations. The model constitutes two main contributions. The first contribution is that the model’s setup makes

  5. Infants in control: rapid anticipation of action outcomes in a gaze-contingent paradigm.

    Directory of Open Access Journals (Sweden)

    Quan Wang

    Full Text Available Infants' poor motor abilities limit their interaction with their environment and render studying infant cognition notoriously difficult. Exceptions are eye movements, which reach high accuracy early, but generally do not allow manipulation of the physical environment. In this study, real-time eye tracking is used to put 6- and 8-month-old infants in direct control of their visual surroundings to study the fundamental problem of discovery of agency, i.e. the ability to infer that certain sensory events are caused by one's own actions. We demonstrate that infants quickly learn to perform eye movements to trigger the appearance of new stimuli and that they anticipate the consequences of their actions in as few as 3 trials. Our findings show that infants can rapidly discover new ways of controlling their environment. We suggest that gaze-contingent paradigms offer effective new ways for studying many aspects of infant learning and cognition in an interactive fashion and provide new opportunities for behavioral training and treatment in infants.
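
    At its core, a gaze-contingent trial reduces to a loop that waits for gaze to enter a trigger region and then presents the stimulus, recording the latency. The sketch below is a generic illustration; get_gaze, show_stimulus and the region are hypothetical stand-ins for the experiment's tracker and display code (Python):

        import time

        def run_trial(get_gaze, trigger_region, show_stimulus, timeout=10.0):
            """Show the contingent stimulus once gaze enters the region;
            return the trigger latency in seconds, or None on timeout."""
            start = time.monotonic()
            x0, y0, x1, y1 = trigger_region
            while time.monotonic() - start < timeout:
                x, y = get_gaze()            # current gaze sample from the tracker
                if x0 <= x <= x1 and y0 <= y <= y1:
                    show_stimulus()
                    return time.monotonic() - start
            return None

        # Demo with a fake gaze stream drifting into the trigger region.
        samples = iter([(100, 100), (300, 250), (520, 400)])
        latency = run_trial(lambda: next(samples), (500, 350, 700, 500),
                            lambda: print("stimulus!"))
        print(latency)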

  6. The dependencies of fronto-parietal BOLD responses evoked by covert visual search suggest eye-centred coding.

    Science.gov (United States)

    Atabaki, A; Dicke, P W; Karnath, H-O; Thier, P

    2013-04-01

    Visual scenes explored covertly are initially represented in a retinal frame of reference (FOR). On the other hand, 'later' stages of the cortical network allocating spatial attention most probably use non-retinal or non-eye-centred representations as they may ease the integration of different sensory modalities for the formation of supramodal representations of space. We tested if the cortical areas involved in shifting covert attention are based on eye-centred or non-eye-centred coding by using functional magnetic resonance imaging. Subjects were scanned while detecting a target item (a regularly oriented 'L') amidst a set of distractors (rotated 'L's). The array was centred either 5° right or left of the fixation point, independent of eye-gaze orientation, the latter varied in three steps: straight relative to the head, 10° left or 10° right. A quantitative comparison of the blood-oxygen-level-dependent (BOLD) responses for the three eye-gaze orientations revealed stronger BOLD responses in the right intraparietal sulcus (IPS) and the right frontal eye field (FEF) for search in the contralateral (i.e. left) eye-centred space, independent of whether the array was located in the right or left head-centred hemispace. The left IPS showed the reverse pattern, i.e. an activation by search in the right eye-centred hemispace. In other words, the IPS and the right FEF, members of the cortical network underlying covert search, operate in an eye-centred FOR. © 2013 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  7. Evidence for a link between changes to gaze behaviour and risk of falling in older adults during adaptive locomotion.

    Science.gov (United States)

    Chapman, G J; Hollands, M A

    2006-11-01

    There is increasing evidence that gaze stabilization with respect to footfall targets plays a crucial role in the control of visually guided stepping and that there are significant changes to gaze behaviour as we age. However, past research has not measured if age-related changes in gaze behaviour are associated with changes to stepping performance. This paper aims to identify differences in gaze behaviour between young (n=8) adults, older adults determined to be at a low-risk of falling (low-risk, n=4) and older adults prone to falling (high-risk, n=4) performing an adaptive locomotor task and attempts to relate observed differences in gaze behaviour to decline in stepping performance. Participants walked at a self-selected pace along a 9m pathway stepping into two footfall target locations en route. Gaze behaviour and lower limb kinematics were recorded using an ASL 500 gaze tracker interfaced with a Vicon motion analysis system. Results showed that older adults looked significantly sooner to targets, and fixated the targets for longer, than younger adults. There were also significant differences in these measures between high and low-risk older adults. On average, high-risk older adults looked away from targets significantly sooner and demonstrated less accurate and more variable foot placements than younger adults and low-risk older adults. These findings suggest that, as we age, we need more time to plan precise stepping movements and clearly demonstrate that there are differences between low-risk and high-risk older adults in both where and when they look at future stepping targets and the precision with which they subsequently step. We propose that high-risk older adults may prioritize the planning of future actions over the accurate execution of ongoing movements and that adoption of this strategy may contribute to an increased likelihood of falls. Copyright 2005 Elsevier B.V.

  8. Latvijas Gaze buyback likely to flop

    Index Scriptorium Estoniae

    2003-01-01

    Itera, which owns a quarter of the Latvian gas company Latvijas Gaze, intends to complete the sale of nine percent of the Latvian firm's shares to Gazprom in the near future. Gazprom currently controls 25 percent of Latvijas Gaze's shares, Ruhrgas 28.66 percent, and E.ON Energie AG 18.06 percent.

  9. Gliding and Saccadic Gaze Gesture Recognition in Real Time

    DEFF Research Database (Denmark)

    Rozado, David; San Agustin, Javier; Rodriguez, Francisco

    2012-01-01

    , and their corresponding real-time recognition algorithms: Hierarchical Temporal Memory networks and the Needleman-Wunsch algorithm for sequence alignment. Our results show how a specific combination of gaze gesture modality, namely saccadic gaze gestures, and recognition algorithm, Needleman-Wunsch, allows for reliable... usage of intentional gaze gestures to interact with a computer with accuracy rates of up to 98% and acceptable completion speed. Furthermore, the gesture recognition engine does not interfere with otherwise standard human-machine gaze interaction, generating therefore very low false positive rates...

  10. Gazing behavior reactions of Vietnamese and Austrian consumers to Austrian wafers and their relations to wanting, expected and tasted liking.

    Science.gov (United States)

    Vu, Thi Minh Hang; Tu, Viet Phu; Duerrschmid, Klaus

    2018-05-01

    Predictability of consumers' food choice based on their gazing behavior using eye-tracking has been shown and discussed in recent research. By applying this observational technique and conventional methods to a specific food product, this study aims at investigating consumers' reactions associated with gazing behavior, wanting, building up expectations, and the experience of tasting. The tested food products were wafers from Austria with hazelnut, whole wheat, lemon and vanilla flavors, which are very well known in Austria and not known in Vietnam. 114 Vietnamese and 128 Austrian participants took part in three sections. The results indicate that: i) the gazing behavior parameters are highly correlated in a positive way with the wanting-to-try choice; ii) wanting to try is in compliance with the expected liking for the Austrian consumer panel only, which is very familiar with the products; iii) the expected and tasted liking of the products are highly country and product dependent. The expected liking is strongly correlated with the tasted liking for the Austrian panel only. Differences between the reactions of the Vietnamese and Austrian consumers are discussed in detail. The results, which reflect the complex process from gazing for "wanting to try" to the expected and tasted liking, are discussed in the context of cognitive theory and the food choice habits of the consumers. Copyright © 2018. Published by Elsevier Ltd.

  11. Baby schema in human and animal faces induces cuteness perception and gaze allocation in children

    Directory of Open Access Journals (Sweden)

    Marta eBorgi

    2014-05-01

    Full Text Available The baby schema concept was originally proposed as a set of infantile traits with high appeal for humans, subsequently shown to elicit caretaking behavior and to affect cuteness perception and attentional processes. However, it is unclear whether the response to the baby schema may be extended to the human-animal bond context. Moreover, questions remain as to whether the cute response is constant and persistent or whether it changes with development. In the present study we parametrically manipulated the baby schema in images of humans, dogs and cats. We analyzed the responses of 3-6-year-old children, using both explicit (i.e. cuteness ratings) and implicit (i.e. eye gaze patterns) measures. By means of eye-tracking, we assessed children’s preferential attention to images varying only in the degree of baby schema and explored participants’ fixation patterns during a cuteness task. For comparative purposes, cuteness ratings were also obtained in a sample of adults. Overall our results show that the response to an infantile facial configuration emerges early during development. In children, the baby schema affects both cuteness perception and gaze allocation to infantile stimuli and to specific facial features, an effect not simply limited to human faces. In line with previous research, the results confirm humans' positive appraisal of animals and inform both educational and therapeutic interventions involving pets, helping to minimize risk factors (e.g. dog bites).

  12. Natural user interface as a supplement of the holographic Raman tweezers

    Science.gov (United States)

    Tomori, Zoltan; Kanka, Jan; Kesa, Peter; Jakl, Petr; Sery, Mojmir; Bernatova, Silvie; Antalik, Marian; Zemánek, Pavel

    2014-09-01

    Holographic Raman tweezers (HRT) manipulate microobjects by controlling the positions of multiple optical traps via the mouse or joystick. Several attempts have appeared recently to exploit touch tablets, 2D cameras or the Kinect game console instead. We proposed a multimodal "Natural User Interface" (NUI) approach integrating hand tracking, gesture recognition, eye tracking and speech recognition. For this purpose we exploited the "Leap Motion" and "MyGaze" low-cost sensors and a simple speech recognition program, "Tazti". We developed our own NUI software which processes the signals from the sensors and sends control commands to the HRT, which subsequently controls the positions of the trapping beams, the micropositioning stage and the acquisition system for Raman spectra. The system allows various modes of operation suited to specific tasks. Virtual tools (called "pin" and "tweezers") serving for the manipulation of particles are displayed on a transparent "overlay" window above the live camera image. The eye tracker identifies the position of the observed particle and uses it for autofocus. Laser trap manipulation navigated by the dominant hand can be combined with gesture recognition of the secondary hand. Speech command recognition is useful if both hands are busy. The proposed methods make manual control of HRT more efficient and they are also a good platform for its future semi-automated and fully automated work.

  13. Virtual button interface

    Science.gov (United States)

    Jones, J.S.

    1999-01-12

    An apparatus and method of issuing commands to a computer by a user interfacing with a virtual reality environment are disclosed. To issue a command, the user directs gaze at a virtual button within the virtual reality environment, causing a perceptible change in the virtual button, which then sends a command corresponding to the virtual button to the computer, optionally after a confirming action is performed by the user, such as depressing a thumb switch. 4 figs.
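
    The dwell-style selection described in this patent abstract can be sketched as follows: dwell time accumulates while gaze stays on the virtual button, and a command is issued once a threshold is reached (optionally after a confirming action such as the thumb switch). The data structure, threshold and frame rate are illustrative assumptions (Python):

        def update_button(button, gaze, dwell_needed=0.8, dt=1 / 60):
            """Accumulate dwell while gaze is on the button; fire its command
            at the threshold. `button` is an assumed dict with 'rect',
            'dwell' and 'command' keys."""
            x0, y0, x1, y1 = button["rect"]
            x, y = gaze
            if x0 <= x <= x1 and y0 <= y <= y1:
                button["dwell"] += dt
                if button["dwell"] >= dwell_needed:
                    button["dwell"] = 0.0
                    return button["command"]   # command sent to the computer
            else:
                button["dwell"] = 0.0          # gaze left the button: reset
            return None

        btn = {"rect": (0, 0, 100, 50), "dwell": 0.0, "command": "OPEN_MENU"}
        for _ in range(60):                    # one second of gaze on the button
            cmd = update_button(btn, (40, 25))
            if cmd:
                print(cmd)
                break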

  14. Visual straight-ahead preference in saccadic eye movements.

    Science.gov (United States)

    Camors, Damien; Trotter, Yves; Pouget, Pierre; Gilardeau, Sophie; Durand, Jean-Baptiste

    2016-03-15

    Ocular saccades bringing the gaze toward the straight-ahead direction (centripetal) exhibit higher dynamics than those steering the gaze away (centrifugal). This is generally explained by oculomotor determinants: centripetal saccades are more efficient because they pull the eyes back toward their primary orbital position. However, visual determinants might also be invoked: elements located straight-ahead trigger saccades more efficiently because they receive a privileged visual processing. Here, we addressed this issue by using both pro- and anti-saccade tasks in order to dissociate the centripetal/centrifugal directions of the saccades, from the straight-ahead/eccentric locations of the visual elements triggering those saccades. Twenty participants underwent alternating blocks of pro- and anti-saccades during which eye movements were recorded binocularly at 1 kHz. The results confirm that centripetal saccades are always executed faster than centrifugal ones, irrespective of whether the visual elements have straight-ahead or eccentric locations. However, by contrast, saccades triggered by elements located straight-ahead are consistently initiated more rapidly than those evoked by eccentric elements, irrespective of their centripetal or centrifugal direction. Importantly, this double dissociation reveals that the higher dynamics of centripetal pro-saccades stem from both oculomotor and visual determinants, which act respectively on the execution and initiation of ocular saccades.

  15. Animating Flames: Recovering Fire-Gazing as a Moving-Image Technology

    Directory of Open Access Journals (Sweden)

    Anne Sullivan

    2017-12-01

    Full Text Available In nineteenth-century England, the industrialization of heat and light rendered fire-gazing increasingly obsolete. Fire-gazing is a form of flame-based reverie that typically involves a solitary viewer who perceives animated, moving images dissolving into and out of view in a wood or coal fire. When fire-gazing, the viewer may perceive arbitrary pictures, fantastic landscapes, or more familiar forms, such as the faces of friends and family. This article recovers fire-gazing as an early and more intimate animation technology by examining remediations of fire-gazing in print. After reviewing why an analysis of fire-gazing requires a joint literary and media history approach, I build from Michael Faraday’s mid-nineteenth-century theorization of flame as a moving image to argue that fire-gazing must be included in the history of animation technologies. I then demonstrate the uneasy connections that form between automatism, mechanical reproduction, and creativity in Leigh Hunt’s description of fire-gazing in his 1811 essay ‘A Day by the Fire’. The tension between conscious and unconscious modes of production culminates in a discussion of fireside scenes of (re)animation in Charles Dickens’s 'Our Mutual Friend' (1864–65), including those featuring one of his more famous fire-gazers, Lizzie Hexam. The article concludes with a brief discussion of the 1908 silent film 'Fireside Reminiscences' as an example of the continued remediations of fire-gazing beyond the nineteenth century.

  16. The Way Dogs (Canis familiaris Look at Human Emotional Faces Is Modulated by Oxytocin. An Eye-Tracking Study

    Directory of Open Access Journals (Sweden)

    Anna Kis

    2017-10-01

    Full Text Available Dogs have been shown to excel in reading human social cues, including facial cues. In the present study we used eye-tracking technology to further study dogs’ face processing abilities. It was found that dogs discriminated between human facial regions in their spontaneous viewing pattern and looked most to the eye region independently of facial expression. Furthermore, dogs paid the most attention to the first two images presented, after which their attention dramatically decreased; a finding that has methodological implications. Increasing evidence indicates that the oxytocin system is involved in dogs’ human-directed social competence, thus as a next step we investigated the effects of oxytocin on the processing of human facial emotions. It was found that oxytocin decreases dogs’ looking at human faces expressing an angry emotional expression. More interestingly, however, after oxytocin pre-treatment dogs’ preferential gaze toward the eye region when processing happy human facial expressions disappears. These results provide the first evidence that oxytocin is involved in the regulation of human face processing in dogs. The present study is one of the few empirical investigations that explore eye gaze patterns in naïve and untrained pet dogs using a non-invasive eye-tracking technique and thus offers a unique but largely untapped method for studying social cognition in dogs.

  17. Vicarious Social Touch Biases Gazing at Faces and Facial Emotions.

    Science.gov (United States)

    Schirmer, Annett; Ng, Tabitha; Ebstein, Richard P

    2018-02-01

    Research has suggested that interpersonal touch promotes social processing and other-concern, and that women may respond to it more sensitively than men. In this study, we asked whether this phenomenon would extend to third-party observers who experience touch vicariously. In an eye-tracking experiment, participants (N = 64, 32 men and 32 women) viewed prime and target images with the intention of remembering them. Primes comprised line drawings of dyadic interactions with and without touch. Targets comprised two faces shown side-by-side, with one being neutral and the other being happy or sad. Analysis of prime fixations revealed that faces in touch interactions attracted longer gazing than faces in no-touch interactions. In addition, touch enhanced gazing at the area of touch in women but not men. Analysis of target fixations revealed that touch priming increased looking at both faces immediately after target onset, and subsequently, at the emotional face in the pair. Sex differences in target processing were nonsignificant. Together, the present results imply that vicarious touch biases visual attention to faces and promotes emotion sensitivity. In addition, they suggest that, compared with men, women are more aware of tactile exchanges in their environment. As such, vicarious touch appears to share important qualities with actual physical touch. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  18. Effects of galvanic skin response feedback on user experience in gaze-controlled gaming: A pilot study.

    Science.gov (United States)

    Larradet, Fanny; Barresi, Giacinto; Mattos, Leonardo S

    2017-07-01

    Eye-tracking (ET) is one of the most intuitive solutions for enabling people with severe motor impairments to control devices. Nevertheless, even such an effective assistive solution can detrimentally affect user experience during demanding tasks because of, for instance, the user's mental workload - using gaze-based controls for an extensive period of time can generate fatigue and cause frustration. Thus, it is necessary to design novel solutions for ET contexts able to improve the user experience, with particular attention to its aspects related to workload. In this paper, a pilot study evaluates the effects of a relaxation biofeedback system on the user experience in the context of a gaze-controlled task that is mentally and temporally demanding: ET-based gaming. Different aspects of the subjects' experience were investigated under two conditions of a gaze-controlled game. In the Biofeedback group (BF), the user triggered a command by means of voluntary relaxation, monitored through Galvanic Skin Response (GSR) and represented by visual feedback. In the No Biofeedback group (NBF), the same feedback was timed according to the average frequency of commands in BF. After the experiment, each subject filled out a user experience questionnaire. The results showed a general appreciation for BF, with a significant between-group difference in the perceived session time duration, with the latter being shorter for subjects in BF than for the ones in NBF. This result implies a lower mental workload for BF than for NBF subjects. Other results point toward a potential role of user's engagement in the improvement of user experience in BF. Such an effect highlights the value of relaxation biofeedback for improving the user experience in a demanding gaze-controlled task.
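
    The relaxation trigger can be pictured as a smoothed galvanic skin response compared against a fraction of the resting baseline. The filter constant, threshold ratio and sample values below are illustrative, not those of the pilot study (Python):

        def relaxation_trigger(gsr_samples, baseline, ratio=0.9, alpha=0.3):
            """Yield True whenever the exponentially smoothed GSR drops below
            a fraction of the resting baseline, i.e. the user is judged
            to be voluntarily relaxing."""
            smoothed = baseline
            for s in gsr_samples:
                smoothed = alpha * s + (1 - alpha) * smoothed   # EWMA filter
                yield smoothed < ratio * baseline

        stream = [5.0, 5.1, 4.9, 4.2, 4.0, 3.9, 3.8, 3.7]       # microsiemens
        for i, fired in enumerate(relaxation_trigger(stream, baseline=5.0)):
            if fired:
                print(f"command triggered at sample {i}")
                break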

  19. Does Visual Attention Span Relate to Eye Movements during Reading and Copying?

    Science.gov (United States)

    Bosse, Marie-Line; Kandel, Sonia; Prado, Chloé; Valdois, Sylviane

    2014-01-01

    This research investigated whether text reading and copying involve visual attention-processing skills. Children in grades 3 and 5 read and copied the same text. We measured eye movements while reading and the number of gaze lifts (GL) during copying. The children were also administered letter report tasks that constitute an estimation of the…

  20. Proximity and Gaze Influences Facial Temperature: A Thermal Infrared Imaging Study.

    Directory of Open Access Journals (Sweden)

    Stephanos eIoannou

    2014-08-01

    Full Text Available Direct gaze and interpersonal proximity are known to lead to changes in psycho-physiology, behaviour and brain function. We know little, however, about subtler facial reactions such as rise and fall in temperature, which may be sensitive to contextual effects and functional in social interactions. Using thermal infrared imaging cameras, 18 female adult participants were filmed at two interpersonal distances (intimate and social) and two gaze conditions (averted and direct). The order of variation in distance was counterbalanced: half the participants experienced a female experimenter’s gaze at the social distance first before the intimate distance (a socially ‘normal’ order) and half experienced the intimate distance first and then the social distance (an odd social order). At both distances averted gaze always preceded direct gaze. We found strong correlations in thermal changes between six areas of the face (forehead, chin, cheeks, nose, maxillary and periorbital regions) for all experimental conditions and developed a composite measure of thermal shifts for all analyses. Interpersonal proximity led to a thermal rise, but only in the ‘normal’ social order. Direct gaze, compared to averted gaze, led to a thermal increase at both distances with a stronger effect at intimate distance, in both orders of distance variation. Participants reported direct gaze as more intrusive than averted gaze, especially at the intimate distance. These results demonstrate the powerful effects of another person’s gaze on psycho-physiological responses, even at a distance and independent of context.

  1. Gaze-independent ERP-BCIs: augmenting performance through location-congruent bimodal stimuli

    Science.gov (United States)

    Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Werkhoven, Peter

    2014-01-01

    Gaze-independent event-related potential (ERP) based brain-computer interfaces (BCIs) yield relatively low BCI performance and traditionally employ unimodal stimuli. Bimodal ERP-BCIs may increase BCI performance due to multisensory integration or summation in the brain. An additional advantage of bimodal BCIs may be that the user can choose which modality or modalities to attend to. We studied bimodal, visual-tactile, gaze-independent BCIs and investigated whether or not ERP components’ tAUCs and subsequent classification accuracies are increased for (1) bimodal vs. unimodal stimuli; (2) location-congruent vs. location-incongruent bimodal stimuli; and (3) attending to both modalities vs. to either one modality. We observed an enhanced bimodal (compared to unimodal) P300 tAUC, which appeared to be positively affected by location-congruency (p = 0.056) and resulted in higher classification accuracies. Attending either to one or to both modalities of the bimodal location-congruent stimuli resulted in differences between ERP components, but not in classification performance. We conclude that location-congruent bimodal stimuli improve ERP-BCIs, and offer the user the possibility to switch the attended modality without losing performance. PMID:25249947

  2. Gaze anchoring guides real but not pantomime reach-to-grasp: support for the action-perception theory.

    Science.gov (United States)

    Kuntz, Jessica R; Karl, Jenni M; Doan, Jon B; Whishaw, Ian Q

    2018-04-01

    Reach-to-grasp movements feature the integration of a reach directed by the extrinsic (location) features of a target and a grasp directed by the intrinsic (size, shape) features of a target. The action-perception theory suggests that integration and scaling of a reach-to-grasp movement, including its trajectory and the concurrent digit shaping, are features that depend upon online action pathways of the dorsal visuomotor stream. Scaling is much less accurate for a pantomime reach-to-grasp movement, a pretend reach with the target object absent. Thus, the action-perception theory proposes that pantomime movement is mediated by perceptual pathways of the ventral visuomotor stream. A distinguishing visual feature of a real reach-to-grasp movement is gaze anchoring, in which a participant visually fixates the target throughout the reach and disengages, often by blinking or looking away/averting the head, at about the time that the target is grasped. The present study examined whether gaze anchoring is associated with pantomime reaching. The eye and hand movements of participants were recorded as they reached for a ball of one of three sizes, located on a pedestal at arms' length, or pantomimed the same reach with the ball and pedestal absent. The kinematic measures for real reach-to-grasp movements were coupled to the location and size of the target, whereas the kinematic measures for pantomime reach-to-grasp, although grossly reflecting target features, were significantly altered. Gaze anchoring was also tightly coupled to the target for real reach-to-grasp movements, but there was no systematic focus for gaze, either in relation with the virtual target, the previous location of the target, or the participant's reaching hand, for pantomime reach-to-grasp. The presence of gaze anchoring during real vs. its absence in pantomime reach-to-grasp supports the action-perception theory that real, but not pantomime, reaches are online visuomotor actions and is discussed in

  3. Art Across The Curriculum: A (Not-So) Private Eye

    Science.gov (United States)

    Sartorius, Tara Cady

    2010-01-01

    Back in the 18th century, it was popular to give one's lover a locket containing a painted image of one's eye. Possibly this was a way of keeping privacy between two secret lovers. Or it may have been a way of keeping close to the gaze of a loved one while spending time apart. The pendants may have included a lock of hair or other tiny element…

  4. Simulating interaction: Using gaze-contingent eye-tracking to measure the reward value of social signals in toddlers with and without autism.

    Science.gov (United States)

    Vernetti, Angelina; Senju, Atsushi; Charman, Tony; Johnson, Mark H; Gliga, Teodora

    2018-01-01

    Several accounts have been proposed to explain difficulties with social interaction in autism spectrum disorder (ASD), amongst which atypical social orienting, decreased social motivation or difficulties with understanding the regularities driving social interaction. This study uses gaze-contingent eye-tracking to tease apart these accounts by measuring reward related behaviours in response to different social videos. Toddlers at high or low familial risk for ASD took part in this study at age 2 and were categorised at age 3 as low risk controls (LR), high-risk with no ASD diagnosis (HR-no ASD), or with a diagnosis of ASD (HR-ASD). When the on-demand social interaction was predictable, all groups, including the HR-ASD group, looked longer and smiled more towards a person greeting them compared to a mechanical Toy (Condition 1) and also smiled more towards a communicative over a non-communicative person (Condition 2). However, all groups, except the HR-ASD group, selectively oriented towards a person addressing the child in different ways over an invariant social interaction (Condition 3). These findings suggest that social interaction is intrinsically rewarding for individuals with ASD, but the extent to which it is sought may be modulated by the specific variability of naturalistic social interaction. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  5. Why Do We Move Our Eyes while Trying to Remember? The Relationship between Non-Visual Gaze Patterns and Memory

    Science.gov (United States)

    Micic, Dragana; Ehrlichman, Howard; Chen, Rebecca

    2010-01-01

    Non-visual gaze patterns (NVGPs) involve saccades and fixations that spontaneously occur in cognitive activities that are not ostensibly visual. While reasons for their appearance remain obscure, convergent empirical evidence suggests that NVGPs change according to processing requirements of tasks. We examined NVGPs in tasks with long-term memory…

  6. Eye movements during the recollection of text information reflect content rather than the text itself

    DEFF Research Database (Denmark)

    Traub, Franziska; Johansson, Roger; Holmqvist, Kenneth

    Several studies have reported that spontaneous eye movements occur when visuospatial information is recalled from memory. Such gazes closely reflect the content and spatial relations from the original scene layout (e.g., Johansson et al., 2012). However, when someone has originally read a scene description, the memory of the physical layout of the text itself might compete with the memory of the spatial arrangement of the described scene. The present study was designed to address this fundamental issue by having participants read scene descriptions that were manipulated to be either congruent... Recollection was performed orally while gazing at a blank screen. Results demonstrate that participants' gaze patterns during recall more closely reflect the spatial layout of the scene than the physical locations of the text. Memory data provide evidence that mental models representing either the situation...

  7. Using Genetic Algorithm for Eye Detection and Tracking in Video Sequence

    Directory of Open Access Journals (Sweden)

    Takuya Akashi

    2007-04-01

    Full Text Available We propose a high-speed, size- and orientation-invariant eye tracking method, which can acquire numerical parameters to represent the size and orientation of the eye. In this paper, we discuss the high tolerance to human head movement and the real-time processing that are needed for many applications, such as eye gaze tracking. The generality of the method is also important. We use template matching with a genetic algorithm in order to overcome these problems. A high-speed, high-accuracy tracking scheme using Evolutionary Video Processing for eye detection and tracking is proposed. Usually, a genetic algorithm is unsuitable for real-time processing; however, we achieved real-time processing. The generality of this proposed method is provided by the artificial iris template used. In our simulations, eye tracking accuracy is 97.9%, with an average processing time of 28 milliseconds per frame.
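
    The approach can be caricatured as a genetic algorithm searching over template pose parameters (position, scale, orientation), with fitness given by template-match quality. The sketch below substitutes a toy fitness function for real image matching; the population size, operators and bounds are assumptions, not the authors' settings (Python):

        import random

        def ga_track(fitness, bounds, pop=30, gens=40, mut=0.1):
            """Evolve (x, y, scale, angle) pose parameters that maximise
            `fitness`, using truncation selection, averaging crossover and
            Gaussian mutation."""
            def rand_ind():
                return [random.uniform(lo, hi) for lo, hi in bounds]
            population = [rand_ind() for _ in range(pop)]
            for _ in range(gens):
                parents = sorted(population, key=fitness, reverse=True)[:pop // 2]
                children = []
                while len(parents) + len(children) < pop:
                    a, b = random.sample(parents, 2)
                    child = [(g1 + g2) / 2 for g1, g2 in zip(a, b)]   # crossover
                    child = [min(hi, max(lo, g + random.gauss(0, mut * (hi - lo))))
                             for g, (lo, hi) in zip(child, bounds)]   # mutation
                    children.append(child)
                population = parents + children
            return max(population, key=fitness)

        # Demo: pretend the eye is at (320, 240) with scale 1.0 and angle 0.
        target = (320, 240, 1.0, 0.0)
        score = lambda ind: -sum((g - t) ** 2 for g, t in zip(ind, target))
        best = ga_track(score, [(0, 640), (0, 480), (0.5, 2.0), (-30, 30)])
        print([round(v, 1) for v in best])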

  8. Differences in Sequential Eye Movement Behavior between Taiwanese and American Viewers

    Directory of Open Access Journals (Sweden)

    Yen Ju eLee

    2016-05-01

    Full Text Available Knowledge of how information is sought in the visual world is useful for predicting and simulating human behavior. Taiwanese participants and American participants were instructed to judge the facial expression of a focal face that was flanked horizontally by other faces while their eye movements were monitored. The Taiwanese participants distributed their eye fixations more widely than American participants, started to look away from the focal face earlier than American participants, and spent a higher percentage of time looking at the flanking faces. Eye movement transition matrices also provided evidence that Taiwanese participants continually and systematically shifted gaze between focal and flanking faces. Eye movement patterns were less systematic and less prevalent in American participants. This suggests that the two cultures utilized different attention allocation strategies. The results highlight the importance of determining sequential eye movement statistics in cross-cultural research on the utilization of visual context.

  9. Look together : Using gaze for assisting co-located collaborative search

    NARCIS (Netherlands)

    Zhang, Y.; Pfeuffer, Ken; Chong, Ming Ki; Alexander, Jason; Bulling, Andreas; Gellersen, Hans

    2017-01-01

    Gaze information provides an indication of users' focus, which complements remote collaboration tasks, as distant users can see their partner’s focus. In this paper, we apply gaze to co-located collaboration, where users’ gaze locations are presented on the same display, to help collaboration between

  10. The specificity of attentional biases by type of gambling: An eye-tracking study.

    Directory of Open Access Journals (Sweden)

    Daniel S McGrath

    Full Text Available A growing body of research indicates that gamblers develop an attentional bias for gambling-related stimuli. Compared to research on substance use, however, few studies have examined attentional biases in gamblers using eye-gaze tracking, which has many advantages over other measures of attention. In addition, previous studies of attentional biases in gamblers have not directly matched type of gambler with personally-relevant gambling cues. The present study investigated the specificity of attentional biases for individual types of gambling using an eye-gaze tracking paradigm. Three groups of participants (poker players, video lottery terminal/slot machine players, and non-gambling controls) took part in one test session in which they viewed 25 sets of four images (poker, VLTs/slot machines, bingo, and board games). Participants' eye fixations were recorded throughout each 8-second presentation of the four images. The results indicated that, as predicted, the two gambling groups preferentially attended to their primary form of gambling, whereas control participants attended to board games more than gambling images. The findings have clinical implications for the treatment of individuals with gambling disorder. Understanding the importance of personally-salient gambling cues will inform the development of effective attentional bias modification treatments for problem gamblers.

  11. The specificity of attentional biases by type of gambling: An eye-tracking study.

    Science.gov (United States)

    McGrath, Daniel S; Meitner, Amadeus; Sears, Christopher R

    2018-01-01

    A growing body of research indicates that gamblers develop an attentional bias for gambling-related stimuli. Compared to research on substance use, however, few studies have examined attentional biases in gamblers using eye-gaze tracking, which has many advantages over other measures of attention. In addition, previous studies of attentional biases in gamblers have not directly matched type of gambler with personally-relevant gambling cues. The present study investigated the specificity of attentional biases for individual types of gambling using an eye-gaze tracking paradigm. Three groups of participants (poker players, video lottery terminal/slot machine players, and non-gambling controls) took part in one test session in which they viewed 25 sets of four images (poker, VLTs/slot machines, bingo, and board games). Participants' eye fixations were recorded throughout each 8-second presentation of the four images. The results indicated that, as predicted, the two gambling groups preferentially attended to their primary form of gambling, whereas control participants attended to board games more than gambling images. The findings have clinical implications for the treatment of individuals with gambling disorder. Understanding the importance of personally-salient gambling cues will inform the development of effective attentional bias modification treatments for problem gamblers.
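
    The core measure in such studies, the proportion of gaze time per area of interest (AOI), can be computed as below. The AOI layout and fixation data are made-up illustrative values; AOIs are assumed not to overlap (Python):

        def dwell_proportions(fixations, aois):
            """Proportion of total fixation time in each AOI. `fixations`
            is a list of (x, y, duration) tuples; `aois` maps a label to
            an (x0, y0, x1, y1) rectangle."""
            totals = {label: 0.0 for label in aois}
            grand = 0.0
            for x, y, dur in fixations:
                grand += dur
                for label, (x0, y0, x1, y1) in aois.items():
                    if x0 <= x <= x1 and y0 <= y <= y1:
                        totals[label] += dur
                        break
            return {label: t / grand if grand else 0.0
                    for label, t in totals.items()}

        aois = {"poker": (0, 0, 400, 300), "slots": (400, 0, 800, 300),
                "bingo": (0, 300, 400, 600), "board": (400, 300, 800, 600)}
        fix = [(120, 80, 0.45), (520, 120, 0.30), (150, 90, 0.25)]
        print(dwell_proportions(fix, aois))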

  12. Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task

    Science.gov (United States)

    Revechkis, Boris; Aflalo, Tyson NS; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A.

    2014-12-01

    Objective. To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. Approach. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like ‘Face in a Crowd’ task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the ‘Crowd’) using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a ‘Crowd Off’ condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Main results. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Significance. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.

  13. Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task.

    Science.gov (United States)

    Revechkis, Boris; Aflalo, Tyson N S; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A

    2014-12-01

    To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like 'Face in a Crowd' task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the 'Crowd') using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a 'Crowd Off' condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.

  14. A Gaze-Driven Evolutionary Algorithm to Study Aesthetic Evaluation of Visual Symmetry

    Directory of Open Access Journals (Sweden)

    Alexis D. J. Makin

    2016-03-01

    Full Text Available Empirical work has shown that people like visual symmetry. We used a gaze-driven evolutionary algorithm technique to answer three questions about symmetry preference. First, do people automatically evaluate symmetry without explicit instruction? Second, is perfect symmetry the best stimulus, or do people prefer a degree of imperfection? Third, does initial preference for symmetry diminish after familiarity sets in? Stimuli were generated as phenotypes from an algorithmic genotype, with genes for symmetry (coded as deviation from a symmetrical template; deviation–symmetry, DS gene) and orientation (0° to 90°; orientation, ORI gene). An eye tracker identified phenotypes that were good at attracting and retaining the gaze of the observer. Resulting fitness scores determined the genotypes that passed to the next generation. We recorded changes to the distribution of DS and ORI genes over 20 generations. When participants looked for symmetry, there was an increase in high-symmetry genes. When participants looked for the patterns they preferred, there was a smaller increase in symmetry, indicating that people tolerated some imperfection. Conversely, there was no increase in symmetry during free viewing, and no effect of familiarity or orientation. This work demonstrates the viability of the evolutionary algorithm approach as a quantitative measure of aesthetic preference.
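
    One generation of such a gaze-driven evolutionary algorithm can be sketched as follows: gaze duration serves as the fitness score, the fittest genotypes survive, and offspring mutate the DS and ORI genes. The selection scheme and mutation rates below are assumptions, not the study's exact parameters (Python):

        import random

        def next_generation(genotypes, gaze_times, keep=0.5, sigma=0.05):
            """Each genotype is (DS, ORI); fitness is how long the observer
            looked at its phenotype. The fittest half survives; the rest
            are mutated copies of survivors."""
            ranked = [g for _, g in sorted(zip(gaze_times, genotypes),
                                           key=lambda p: p[0], reverse=True)]
            survivors = ranked[:max(2, int(len(ranked) * keep))]
            offspring = []
            while len(survivors) + len(offspring) < len(genotypes):
                ds, ori = random.choice(survivors)
                ds = min(1.0, max(0.0, ds + random.gauss(0, sigma)))   # symmetry gene
                ori = (ori + random.gauss(0, sigma * 90)) % 90         # orientation gene
                offspring.append((ds, ori))
            return survivors + offspring

        pop = [(random.random(), random.uniform(0, 90)) for _ in range(16)]
        looks = [random.random() for _ in pop]   # stand-in for eye-tracker fitness
        print(next_generation(pop, looks)[:3])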

  15. A gaze-contingent display to study contrast sensitivity under natural viewing conditions

    Science.gov (United States)

    Dorr, Michael; Bex, Peter J.

    2011-03-01

    Contrast sensitivity has been extensively studied over the last decades and there are well-established models of early vision that were derived by presenting the visual system with synthetic stimuli such as sine-wave gratings near threshold contrasts. Natural scenes, however, contain a much wider distribution of orientations, spatial frequencies, and both luminance and contrast values. Furthermore, humans typically move their eyes two to three times per second under natural viewing conditions, but most laboratory experiments require subjects to maintain central fixation. We here describe a gaze-contingent display capable of performing real-time contrast modulations of video in retinal coordinates, thus allowing us to study contrast sensitivity when dynamically viewing dynamic scenes. Our system is based on a Laplacian pyramid for each frame that efficiently represents individual frequency bands. Each output pixel is then computed as a locally weighted sum of pyramid levels to introduce local contrast changes as a function of gaze. Our GPU implementation achieves real-time performance with more than 100 fps on high-resolution video (1920 by 1080 pixels) and a synthesis latency of only 1.5ms. Psychophysical data show that contrast sensitivity is greatly decreased in natural videos and under dynamic viewing conditions. Synthetic stimuli therefore only poorly characterize natural vision.
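
    The locally weighted recombination can be illustrated with a two-band simplification: split each frame into low- and high-spatial-frequency components and attenuate the high band as a function of distance from the gaze point. The real system uses a full Laplacian pyramid per frame on the GPU; the block-mean filter and Gaussian gain profile here are crude stand-ins (Python, NumPy):

        import numpy as np

        def gaze_contingent_contrast(frame, gaze, fovea=80, periph_gain=0.3, k=4):
            """Attenuate high spatial frequencies away from the gaze point.
            `frame` is a greyscale array whose sides are divisible by k."""
            f = frame.astype(float)
            h, w = f.shape
            # Crude low-pass: k-by-k block mean, upsampled back to full size.
            low = f.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
            low = low.repeat(k, axis=0).repeat(k, axis=1)
            high = f - low                              # high-frequency band
            yy, xx = np.mgrid[0:h, 0:w]
            d = np.hypot(xx - gaze[0], yy - gaze[1])
            gain = periph_gain + (1 - periph_gain) * np.exp(-(d / fovea) ** 2)
            return low + gain * high                    # locally weighted sum

        frame = np.random.randint(0, 256, (240, 320)).astype(float)
        print(gaze_contingent_contrast(frame, gaze=(160, 120)).shape)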

  16. Toward FRP-Based Brain-Machine Interfaces – Single-Trial Classification of Fixation-Related Potentials.

    Directory of Open Access Journals (Sweden)

    Andrea Finke

    Full Text Available The co-registration of eye tracking and electroencephalography provides a holistic measure of ongoing cognitive processes. Recently, fixation-related potentials have been introduced to quantify the neural activity in such bi-modal recordings. Fixation-related potentials are time-locked to fixation onsets, just like event-related potentials are locked to stimulus onsets. Compared to existing electroencephalography-based brain-machine interfaces that depend on visual stimuli, fixation-related potentials have the advantages that they can be used in free, unconstrained viewing conditions and can also be classified on a single-trial level. Thus, fixation-related potentials have the potential to allow for conceptually different brain-machine interfaces that directly interpret cortical activity related to the visual processing of specific objects. However, existing research has investigated fixation-related potentials only with very restricted and highly unnatural stimuli in simple search tasks while participants' body movements were restricted. We present a study where we relieved many of these restrictions while retaining some control by using a gaze-contingent visual search task. In our study, participants had to find a target object out of 12 complex and everyday objects presented on a screen while the electrical activity of the brain and eye movements were recorded simultaneously. Our results show that our proposed method for the classification of fixation-related potentials can clearly discriminate between fixations on relevant, non-relevant and background areas. Furthermore, we show that our classification approach generalizes not only to different test sets from the same participant, but also across participants. These results promise to open novel avenues for exploiting fixation-related potentials in electroencephalography-based brain-machine interfaces and thus providing a novel means for intuitive human-machine interaction.
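
    A sketch of the core single-trial FRP idea: cut EEG epochs time-locked to fixation onsets (rather than stimulus onsets) and classify the label of the fixated region. Shapes, the epoch window, and the shrinkage-LDA classifier are illustrative assumptions, not the study's exact pipeline.

```python
# Single-trial classification of fixation-related potentials (simplified).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs, n_chan = 250, 32                        # sampling rate (Hz), EEG channels
eeg = np.random.randn(n_chan, 60 * fs)      # stand-in continuous recording
fix_onsets = np.sort(np.random.randint(fs, 59 * fs, size=120))   # in samples
labels = np.random.randint(0, 3, size=120)  # 0=relevant, 1=non-relevant, 2=background

def epoch(sig, onsets, pre=0.1, post=0.5):
    # Epochs locked to fixation onsets, like ERPs are locked to stimuli.
    pre_s, post_s = int(pre * fs), int(post * fs)
    return np.stack([sig[:, o - pre_s : o + post_s] for o in onsets])

epochs = epoch(eeg, fix_onsets)             # (n_fixations, channels, time)
X = epochs.reshape(len(epochs), -1)         # flatten to feature vectors
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```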

  17. The Gaze as constituent and annihilator

    Directory of Open Access Journals (Sweden)

    Mats Carlsson

    2012-11-01

    Full Text Available This article aims to join the contemporary effort to promote a psychoanalytic renaissance within cinema studies, post Post-Theory. In trying to shake off the burden of the 1970s film theory's distortion of the Lacanian Gaze, rejuvenating it with the strength of the Real and fusing it with Freudian thoughts on the uncanny, hopefully this new dawn can be reached. I aspire to conceptualize the Gaze in a straightforward manner. This in order to obtain an instrument for the identification of certain strategies within the filmic realm aimed at depicting the subjective destabilizing of diegetic characters as well as thwarting techniques directed at the spectorial subject. In setting this capricious Gaze against the uncanny phenomena described by Freud, we find that these two ideas easily intertwine into a draft description of a powerful, potentially reconstitutive force worth being highlighted.

  18. Ultrathin flexible piezoelectric sensors for monitoring eye fatigue

    Science.gov (United States)

    Lü, Chaofeng; Wu, Shuang; Lu, Bingwei; Zhang, Yangyang; Du, Yangkun; Feng, Xue

    2018-02-01

    Eye fatigue is a symptom induced by prolonged work of the eyes and brain. Without proper treatment, eye fatigue may lead to serious problems. Current approaches to detecting eye fatigue mainly rely on computer-vision techniques, which can be unreliable under poor visual conditions. As a solution, we propose a wearable, conformal, in vivo eye-fatigue monitoring sensor that contains an array of piezoelectric nanoribbons integrated on an ultrathin flexible substrate. By detecting strains on the skin of the eyelid, the sensor collects information about eye blinking and can therefore reveal a person's fatigue state. We first report the design and fabrication of the piezoelectric sensor and the experimental characterization of its voltage response. Under bending stress, the output voltage curves yield key information about the motion of the human eyelid. We also develop a theoretical model to reveal the underlying mechanism of eyelid-motion detection. Both mechanical load tests and in vivo tests were conducted to validate the working performance of the sensor. With satisfactory durability and high sensitivity, this sensor can efficiently detect abnormal eyelid motions, such as overlong closure, high blinking frequency, low closing speed, and weak gazing strength, and may provide timely feedback for assessing eye fatigue so that adverse situations can be prevented.

  19. Latvijas gaze

    International Nuclear Information System (INIS)

    1994-04-01

    A collection of photocopies of materials (such as overheads, etc.) used at a seminar (organized by the Board of Directors of the company 'Latvijas Gaze' in connection with The National Oil and Gas Company of Denmark, DONG) comprising an analysis of training needs with regard to marketing of gas technology and consultancy to countries in Europe, especially with regard to Latvia. (AB)

  20. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability

    Science.gov (United States)

    Willemse, Cesco; Marchesi, Serena; Wykowska, Agnieszka

    2018-01-01

    Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive–affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot’s characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition) and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human–human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot-preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub that followed participants’ gaze to the one with disjoint attention behavior, rating it as more human-like and more likeable. Taken together, our findings show a preference for robots that follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints about the design of behavioral characteristics of humanoid robots in more naturalistic settings. PMID:29459842

  3. Exogenous orienting of attention depends upon the ability to execute eye movements.

    Science.gov (United States)

    Smith, Daniel T; Rorden, Chris; Jackson, Stephen R

    2004-05-04

    Shifts of attention can be made overtly by moving the eyes or covertly with attention being allocated to a region of space that does not correspond to the current direction of gaze. However, the precise relationship between eye movements and the covert orienting of attention remains controversial. The influential premotor theory proposes that the covert orienting of attention is produced by the programming of (unexecuted) eye movements and thus predicts a strong relationship between the ability to execute eye movements and the operation of spatial attention. Here, we demonstrate for the first time that impaired spatial attention is observed in an individual (AI) who is neurologically healthy but who cannot execute eye movements as a result of a congenital impairment in the elasticity of her eye muscles. This finding provides direct support for the role of the eye-movement system in the covert orienting of attention and suggests that whereas intact cortical structures may be necessary for normal attentional reflexes, they are not sufficient. The ability to move our eyes is essential for the development of normal patterns of spatial attention.

  4. A gaze independent hybrid-BCI based on visual spatial attention

    Science.gov (United States)

    Egan, John M.; Loughnane, Gerard M.; Fletcher, Helen; Meade, Emma; Lalor, Edmund C.

    2017-08-01

    Objective. Brain-computer interfaces (BCI) use measures of brain activity to convey a user’s intent without the need for muscle movement. Hybrid designs, which use multiple measures of brain activity, have been shown to increase the accuracy of BCIs, including those based on EEG signals reflecting covert attention. Our study examined whether incorporating a measure of the P3 response improved the performance of a previously reported attention-based BCI design that incorporates measures of steady-state visual evoked potentials (SSVEP) and alpha band modulations. Approach. Subjects viewed stimuli consisting of two bilaterally located flashing white boxes on a black background. Streams of letters were presented sequentially within the boxes, in random order. Subjects were cued to attend to one of the boxes without moving their eyes, and they were tasked with counting the number of target-letters that appeared within. P3 components evoked by target appearance, SSVEPs evoked by the flashing boxes, and power in the alpha band are modulated by covert attention, and the modulations can be used to classify trials as left-attended or right-attended. Main Results. We showed that classification accuracy was improved by including a P3 feature along with the SSVEP and alpha features (the inclusion of a P3 feature led to a 9% increase in accuracy compared to the use of SSVEP and alpha features alone). We also showed that the design improves the robustness of BCI performance to individual subject differences. Significance. These results demonstrate that incorporating multiple neurophysiological indices of covert attention can improve performance in a gaze-independent BCI.
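
    A sketch of the hybrid fusion step described above: per-trial SSVEP, alpha, and P3 features are concatenated and fed to one classifier, so accuracy with and without the P3 feature can be compared. All feature extraction here is a random-number stand-in; only the fusion-and-classify pattern is the point, and the feature dimensions are assumptions.

```python
# Hybrid BCI feature fusion: compare SSVEP+alpha against SSVEP+alpha+P3.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

n_trials = 200
ssvep = np.random.randn(n_trials, 2)    # e.g. SSVEP power at left/right tag freqs
alpha = np.random.randn(n_trials, 2)    # e.g. parieto-occipital alpha, both sides
p3 = np.random.randn(n_trials, 4)       # e.g. P3 amplitudes at selected channels
y = np.random.randint(0, 2, n_trials)   # 0 = attend left, 1 = attend right

for name, X in [("SSVEP+alpha", np.hstack([ssvep, alpha])),
                ("SSVEP+alpha+P3", np.hstack([ssvep, alpha, p3]))]:
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
    print(f"{name}: CV accuracy {acc:.2f}")
```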

  5. Visual Processing of Faces in Individuals with Fragile X Syndrome: An Eye Tracking Study

    Science.gov (United States)

    Farzin, Faraz; Rivera, Susan M.; Hessl, David

    2009-01-01

    Gaze avoidance is a hallmark behavioral feature of fragile X syndrome (FXS), but little is known about whether abnormalities in the visual processing of faces, including disrupted autonomic reactivity, may underlie this behavior. Eye tracking was used to record fixations and pupil diameter while adolescents and young adults with FXS and sex- and…

  6. "The Gaze Heuristic:" Biography of an Adaptively Rational Decision Process.

    Science.gov (United States)

    Hamlin, Robert P

    2017-04-01

    This article is a case study that describes the natural and human history of the gaze heuristic. The gaze heuristic is an interception heuristic that utilizes a single input (deviation from a constant angle of approach) repeatedly as a task is performed. Its architecture, advantages, and limitations are described in detail. A history of the gaze heuristic is then presented. In natural history, the gaze heuristic is the only known technique used by predators to intercept prey. In human history the gaze heuristic was discovered accidentally by Royal Air Force (RAF) fighter command just prior to World War II. As it was never discovered by the Luftwaffe, the technique conferred a decisive advantage upon the RAF throughout the war. After the end of the war, German technology was combined in America with the British heuristic to create the Sidewinder AIM9 missile, the most successful autonomous weapon ever built. There are no plans to withdraw it or replace its guiding gaze heuristic. The case study demonstrates that the gaze heuristic is a specific heuristic type that takes a single best input at the best time (take-the-best). Its use is an adaptively rational response to specific, rapidly evolving decision environments that has allowed those animals/humans/machines who use it to survive, prosper, and multiply relative to those who do not. Copyright © 2017 Cognitive Science Society, Inc.
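
    A toy simulation of the single-input control idea behind the gaze heuristic: the pursuer steers only to cancel drift in the bearing (the "gaze angle") to the target, so the angle of approach stays constant and the paths converge. All parameters are invented, and angle wrap-around is ignored.

```python
# Constant-bearing interception driven by one input: bearing-angle drift.
import math

px, py, speed = 0.0, 0.0, 1.2                 # pursuer position and speed
tx, ty, tvx, tvy = 10.0, 5.0, -0.5, 0.0       # target position and velocity
heading = math.atan2(ty - py, tx - px)
prev_bearing = heading
N = 3.0                                       # steering gain

for step in range(300):
    bearing = math.atan2(ty - py, tx - px)
    heading += N * (bearing - prev_bearing)   # cancel drift of the gaze angle
    prev_bearing = bearing
    px += speed * math.cos(heading)
    py += speed * math.sin(heading)
    tx, ty = tx + tvx, ty + tvy
    if math.hypot(tx - px, ty - py) < speed:
        print(f"intercepted after {step + 1} steps")
        break
else:
    print(f"no intercept; final distance {math.hypot(tx - px, ty - py):.2f}")
```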

  7. Eye tracking detects disconjugate eye movements associated with structural traumatic brain injury and concussion.

    Science.gov (United States)

    Samadani, Uzma; Ritlop, Robert; Reyes, Marleen; Nehrbass, Elena; Li, Meng; Lamm, Elizabeth; Schneider, Julia; Shimunov, David; Sava, Maria; Kolecki, Radek; Burris, Paige; Altomare, Lindsey; Mehmood, Talha; Smith, Theodore; Huang, Jason H; McStay, Christopher; Todd, S Rob; Qian, Meng; Kondziolka, Douglas; Wall, Stephen; Huang, Paul

    2015-04-15

    Disconjugate eye movements have been associated with traumatic brain injury since ancient times. Ocular motility dysfunction may be present in up to 90% of patients with concussion or blast injury. We developed an algorithm for eye tracking in which the Cartesian coordinates of the right and left pupils are tracked over 200 sec and compared to each other as a subject watches a short film clip moving inside an aperture on a computer screen. We prospectively eye tracked 64 normal healthy noninjured control subjects and compared findings to 75 trauma subjects with either a positive head computed tomography (CT) scan (n=13), negative head CT (n=39), or nonhead injury (n=23) to determine whether eye tracking would reveal the disconjugate gaze associated with both structural brain injury and concussion. Tracking metrics were then correlated to the clinical concussion measure Sport Concussion Assessment Tool 3 (SCAT3) in trauma patients. Five out of five measures of horizontal disconjugacy were increased in positive and negative head CT patients relative to noninjured control subjects. Only one of five vertical disconjugacy measures was significantly increased in brain-injured patients relative to controls. Linear regression analysis of all 75 trauma patients demonstrated that three metrics for horizontal disconjugacy negatively correlated with SCAT3 symptom severity score and positively correlated with total Standardized Assessment of Concussion score. Abnormal eye-tracking metrics improved over time toward baseline in brain-injured subjects observed in follow-up. Eye tracking may help quantify the severity of ocular motility disruption associated with concussion and structural brain injury.
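
    A sketch of one horizontal-disconjugacy metric in the spirit of the study: track both pupils while the two eyes watch the same moving stimulus, then summarize how differently they move. The variance-of-difference summary, the simulated trajectories, and the noise levels are illustrative assumptions; the paper defines several related metrics.

```python
# Horizontal disconjugacy from binocular pupil traces (simplified).
import numpy as np

fs, dur = 60, 200                        # 60 Hz tracker, 200 s of viewing
t = np.arange(fs * dur) / fs
stimulus_x = np.sin(2 * np.pi * t / 10)  # stand-in stimulus trajectory
left_x = stimulus_x + np.random.normal(0, 0.02, t.size)
right_x = stimulus_x + np.random.normal(0, 0.02, t.size)

# Conjugate gaze keeps left-right difference nearly constant, so its
# variance is near zero; injury-related disconjugacy inflates it.
disconjugacy = np.var(left_x - right_x)
print(f"horizontal disconjugacy: {disconjugacy:.4f}")
```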

  8. Effects of Lighting Direction on the Impression of Faces and Objects and the Role of Gaze Direction in the Impression-Forming

    Directory of Open Access Journals (Sweden)

    Keiichi Horibata

    2011-10-01

    Full Text Available We examined whether lighting direction, left or right, has an influence on the impression of faces and objects, and the role of gaze direction in the impression-forming. In the first experiment, we examined how lighting directions influenced the impression of faces and objects. On each trial, a pair of faces or objects was presented on top of each other. Left side was brighter in one and right side was brighter in the other. The participants were asked to answer which face or object was more preferable. The results showed that the participants preferred left-brighter faces and objects significantly more frequently than right-brighter stimuli (p < .05, chi-square test). The effect was especially strong in the upright faces. The results suggested that faces and objects give better impressions when they are lit from the left. In the second experiment, we examined whether eye movements would play a role in our preference for left-brighter faces and objects. We recorded eye movements of the participants while doing the same task as the first experiment. The results showed that the participants' preference for left-brighter faces was stronger when the participant started viewing from the left. The gaze direction may modulate our impression-formation (Shimojo et al., 2003).

  9. Gaze Behavior of Children with ASD toward Pictures of Facial Expressions.

    Science.gov (United States)

    Matsuda, Soichiro; Minagawa, Yasuyo; Yamamoto, Junichi

    2015-01-01

    Atypical gaze behavior in response to a face has been well documented in individuals with autism spectrum disorders (ASDs). Children with ASD appear to differ from typically developing (TD) children in gaze behavior for spoken and dynamic face stimuli but not for nonspeaking, static face stimuli. Furthermore, children with ASD and TD children show a difference in their gaze behavior for certain expressions. However, few studies have examined the relationship between autism severity and gaze behavior toward certain facial expressions. The present study replicated and extended previous studies by examining gaze behavior towards pictures of facial expressions. We presented ASD and TD children with pictures of surprised, happy, neutral, angry, and sad facial expressions. Autism severity was assessed using the Childhood Autism Rating Scale (CARS). The results showed that there was no group difference in gaze behavior when looking at pictures of facial expressions. Conversely, the children with ASD who had more severe autistic symptomatology had a tendency to gaze at angry facial expressions for a shorter duration in comparison to other facial expressions. These findings suggest that autism severity should be considered when examining atypical responses to certain facial expressions.

  10. The impact of visual gaze direction on auditory object tracking.

    Science.gov (United States)

    Pomper, Ulrich; Chait, Maria

    2017-07-05

    Subjective experience suggests that we are able to direct our auditory attention independent of our visual gaze, e.g. when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated both auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention while participants detected targets presented from one of three loudspeakers. We observed increased response times when gaze was directed away from the locus of auditory attention. Further, we found an increase in occipital alpha-band power contralateral to the direction of gaze, indicative of a suppression of distracting input. Finally, this condition also led to stronger central theta-band power, which correlated with the observed effect in response times, indicative of differences in top-down processing. Our data suggest that a misalignment between gaze and auditory attention both reduces behavioural performance and modulates underlying neural processes. The involvement of central theta-band and occipital alpha-band effects is in line with compensatory neural mechanisms such as increased cognitive control and the suppression of task-irrelevant inputs.
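
    A sketch of the band-power measures behind such EEG results: a Welch power spectral density per trial, averaged within the alpha (8–12 Hz) and theta (4–7 Hz) bands. The simulated signal, channel choice, and band edges are illustrative assumptions.

```python
# Per-trial alpha and theta band power via Welch's method.
import numpy as np
from scipy.signal import welch

fs = 250
eeg = np.random.randn(40, 2 * fs)      # 40 trials x 2 s, one stand-in channel

def band_power(x, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=fs)
    band = (f >= lo) & (f <= hi)
    return pxx[..., band].mean(axis=-1)

alpha = band_power(eeg, 8, 12)         # e.g. occipital alpha, per trial
theta = band_power(eeg, 4, 7)          # e.g. central theta, per trial
print(alpha.mean(), theta.mean())
```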

  11. Receptive fields for smooth pursuit eye movements and motion perception.

    Science.gov (United States)

    Debono, Kurt; Schütz, Alexander C; Spering, Miriam; Gegenfurtner, Karl R

    2010-12-01

    Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a different direction than the main motion including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT). Copyright © 2010 Elsevier Ltd. All rights reserved.

  12. Salient eyes deter conspecific nest intruders in wild jackdaws (Corvus monedula).

    Science.gov (United States)

    Davidson, Gabrielle L; Clayton, Nicola S; Thornton, Alex

    2014-02-01

    Animals often respond fearfully when encountering eyes or eye-like shapes. Although gaze aversion has been documented in mammals when avoiding group-member conflict, the importance of eye coloration during interactions between conspecifics has yet to be examined in non-primate species. Jackdaws (Corvus monedula) have near-white irides, which are conspicuous against their dark feathers and visible when seen from outside the cavities where they nest. Because jackdaws compete for nest sites, their conspicuous eyes may act as a warning signal to indicate that a nest is occupied and deter intrusions by conspecifics. We tested whether jackdaws' pale irides serve as a deterrent to prospecting conspecifics by comparing prospectors' behaviour towards nest-boxes displaying images with bright eyes (BEs) only, a jackdaw face with natural BEs, or a jackdaw face with dark eyes. The jackdaw face with BEs was most effective in deterring birds from making contact with nest-boxes, whereas both BE conditions reduced the amount of time jackdaws spent in proximity to the image. We suggest BEs in jackdaws may function to prevent conspecific competitors from approaching occupied nest sites.

  13. Avoidance of a moving threat in the common chameleon (Chamaeleo chamaeleon): rapid tracking by body motion and eye use.

    Science.gov (United States)

    Lev-Ari, Tidhar; Lustig, Avichai; Ketter-Katz, Hadas; Baydach, Yossi; Katzir, Gadi

    2016-08-01

    A chameleon (Chamaeleo chamaeleon) on a perch responds to a nearby threat by moving to the side of the perch opposite the threat, while bilaterally compressing its abdomen, thus minimizing its exposure to the threat. If the threat moves, the chameleon pivots around the perch to maintain its hidden position. How precise is the body rotation and what are the patterns of eye movement during avoidance? Just-hatched chameleons, placed on a vertical perch, on the side roughly opposite to a visual threat, adjusted their position to precisely opposite the threat. If the threat were moved on a horizontal arc at angular velocities of up to 85°/s, the chameleons co-rotated smoothly so that (1) the angle of the sagittal plane of the head relative to the threat and (2) the direction of monocular gaze, were positively and significantly correlated with threat angular position. Eye movements were role-dependent: the eye toward which the threat moved maintained a stable gaze on it, while the contralateral eye scanned the surroundings. This is the first description, to our knowledge, of such a response in a non-flying terrestrial vertebrate, and it is discussed in terms of possible underlying control systems.

  14. "Wolves (Canis lupus) and dogs (Canis familiaris) differ in following human gaze into distant space but respond similar to their packmates' gaze": Correction to Werhahn et al. (2016).

    Science.gov (United States)

    2017-02-01

    Reports an error in "Wolves (Canis lupus) and dogs (Canis familiaris) differ in following human gaze into distant space but respond similar to their packmates' gaze" by Geraldine Werhahn, Zsófia Virányi, Gabriela Barrera, Andrea Sommese and Friederike Range (Journal of Comparative Psychology, 2016[Aug], Vol 130[3], 288-298). In the article, the affiliations for the second and fifth authors should be Wolf Science Center, Ernstbrunn, Austria, and Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna/Medical University of Vienna/University of Vienna. The online version of this article has been corrected. (The following abstract of the original article appeared in record 2016-26311-001.) Gaze following into distant space is defined as visual co-orientation with another individual's head direction, allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills

  15. Early Left Parietal Activity Elicited by Direct Gaze: A High-Density EEG Study

    Science.gov (United States)

    Burra, Nicolas; Kerzel, Dirk; George, Nathalie

    2016-01-01

    Gaze is one of the most important cues for human communication and social interaction. In particular, gaze contact is the most primary form of social contact and it is thought to capture attention. A very early-differentiated brain response to direct versus averted gaze has been hypothesized. Here, we used high-density electroencephalography to test this hypothesis. Topographical analysis allowed us to uncover a very early topographic modulation (40–80 ms) of event-related responses to faces with direct as compared to averted gaze. This modulation was obtained only in the condition where intact broadband faces – as opposed to high-pass or low-pass filtered faces – were presented. Source estimation indicated that this early modulation involved the posterior parietal region, encompassing the left precuneus and inferior parietal lobule. This supports the idea that it reflected an early orienting response to direct versus averted gaze. Accordingly, in a follow-up behavioural experiment, we found faster response times to the direct gaze than to the averted gaze broadband faces. In addition, classical evoked potential analysis showed that the N170 peak amplitude was larger for averted gaze than for direct gaze. Taken together, these results suggest that direct gaze may be detected at a very early processing stage, involving a parallel route to the ventral occipito-temporal route of face perceptual analysis. PMID:27880776

  16. Using eye movements to investigate selective attention in chronic daily headache.

    Science.gov (United States)

    Liossi, Christina; Schoth, Daniel E; Godwin, Hayward J; Liversedge, Simon P

    2014-03-01

    Previous research has demonstrated that chronic pain is associated with biased processing of pain-related information. Most studies have examined this bias by measuring response latencies. The present study extended previous work by recording eye movement behaviour in individuals with chronic headache and in healthy controls while participants viewed a set of images (i.e., facial expressions) from 4 emotion categories (pain, angry, happy, neutral). Biases in initial orienting were assessed from the location of the initial shift in gaze, and biases in the maintenance of attention were assessed from the duration of gaze on the initially fixated picture, the mean number of visits, and the mean fixation duration per image category. The eye movement behaviour of the participants in the chronic headache group was characterised by a bias in the initial shift of orienting to pain. There was no evidence of individuals with chronic headache visiting pain images more often, or spending significantly more time viewing them, compared to other images. Both participant groups showed a significantly greater bias to maintain gaze longer on happy images, relative to pain, angry, and neutral images. Results are consistent with a pain-related bias that operates in the orienting of attention to pain-related stimuli, and suggest that chronic pain participants' attentional biases for pain-related information are evident even when other emotional stimuli are present. Pain-related information-processing biases appear to be a robust feature of chronic pain and may have an important role in the maintenance of the disorder. Copyright © 2013 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  17. Gaze behaviour during space perception and spatial decision making.

    Science.gov (United States)

    Wiener, Jan M; Hölscher, Christoph; Büchner, Simon; Konieczny, Lars

    2012-11-01

    A series of four experiments investigating gaze behavior and decision making in the context of wayfinding is reported. Participants were presented with screenshots of choice points taken in large virtual environments. Each screenshot depicted alternative path options. In Experiment 1, participants had to decide between them to find an object hidden in the environment. In Experiment 2, participants were first informed about which path option to take as if following a guided route. Subsequently, they were presented with the same images in random order and had to indicate which path option they chose during initial exposure. In Experiment 1, we demonstrate (1) that participants have a tendency to choose the path option that featured the longer line of sight, and (2) a robust gaze bias towards the eventually chosen path option. In Experiment 2, systematic differences in gaze behavior towards the alternative path options between encoding and decoding were observed. Based on data from Experiments 1 and 2 and two control experiments ensuring that fixation patterns were specific to the spatial tasks, we develop a tentative model of gaze behavior during wayfinding decision making suggesting that particular attention was paid to image areas depicting changes in the local geometry of the environments such as corners, openings, and occlusions. Together, the results suggest that gaze during a wayfinding tasks is directed toward, and can be predicted by, a subset of environmental features and that gaze bias effects are a general phenomenon of visual decision making.

  18. Differential impact of partial cortical blindness on gaze strategies when sitting and walking - an immersive virtual reality study.

    Science.gov (United States)

    Iorizzo, Dana B; Riley, Meghan E; Hayhoe, Mary; Huxlin, Krystel R

    2011-05-25

    The present experiments aimed to characterize the visual performance of subjects with long-standing, unilateral cortical blindness when walking in a naturalistic, virtual environment. Under static, seated testing conditions, cortically blind subjects are known to exhibit compensatory eye movement strategies. However, they still complain of significant impairment in visual detection during navigation. To assess whether this is due to a change in compensatory eye movement strategy between sitting and walking, we measured eye and head movements in subjects asked to detect peripherally-presented, moving basketballs. When seated, cortically blind subjects detected ∼80% of balls, while controls detected almost all balls. Seated blind subjects did not make larger head movements than controls, but they consistently biased their fixation distribution towards their blind hemifield. When walking, head movements were similar in the two groups, but the fixation bias decreased to the point that fixation distribution in cortically blind subjects became similar to that in controls - with one major exception: at the time of basketball appearance, walking controls looked primarily at the far ground, in upper quadrants of the virtual field of view; cortically blind subjects looked significantly more at the near ground, in lower quadrants of the virtual field. Cortically blind subjects detected only 58% of the balls when walking while controls detected ∼90%. Thus, the adaptive gaze strategies adopted by cortically blind individuals as a compensation for their visual loss are strongest and most effective when seated and stationary. Walking significantly alters these gaze strategies in a way that seems to favor walking performance, but impairs peripheral target detection. It is possible that this impairment underlies the experienced difficulty of those with cortical blindness when navigating in real life. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. The Politics of the Gaze Foucault, Lacan and Zizek

    Directory of Open Access Journals (Sweden)

    Henry Krips

    2010-03-01

    Full Text Available Joan Copjec accuses orthodox film theory of misrepresenting the Lacanian gaze by assimilating it to the Foucauldian panopticon (Copjec 1994: 18-19). Although Copjec is correct that orthodox film theory misrepresents the Lacanian gaze, she, in turn, misrepresents Foucault by choosing to focus exclusively upon those aspects of his work on the panopticon that have been taken up by orthodox film theory (Copjec 1994: 4). In so doing, I argue, Copjec misses key parallels between the Lacanian and Foucauldian concepts of the gaze. More than a narrow academic dispute about how to read Foucault and Lacan, this debate has wider political significance. In particular, using Slavoj Zizek's work, I show that a correct account of the panoptic gaze leads us to rethink the question of how to oppose modern techniques of surveillance.

  20. A testimony to Muzil: Hervé Guibert, Foucault, and the medical gaze.

    Science.gov (United States)

    Rendell, Joanne

    2004-01-01

    Testimony to Muzil: Hervé Guibert, Michel Foucault, and the "Medical Gaze" examines the fictional/autobiographical AIDS writings of the French writer Hervé Guibert. Locating Guibert's writings alongside the work of his friend Michel Foucault, the article explores how they echo Foucault's evolving notions of the "medical gaze." The article also explores how Guibert's narrators and Guibert himself (as writer) resist and challenge the medical gaze; a gaze which, particularly in the era of AIDS, has subjected, objectified, and even sometimes punished the body of the gay man. It is argued that these resistances to the gaze offer a literary extension to Foucault's later work on power and resistance strategies.

  1. Utilizing Gaze Behavior for Inferring Task Transitions Using Abstract Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Daniel Fernando Tello Gamarra

    2016-12-01

    Full Text Available We demonstrate an improved method for utilizing observed gaze behavior and show that it is useful in inferring hand movement intent during goal-directed tasks. The task dynamics and the relationship between hand and gaze behavior are learned using an Abstract Hidden Markov Model (AHMM). We show that the predicted hand movement transitions occur consistently earlier in AHMM models with gaze than those models that do not include gaze observations.
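
    A greatly simplified stand-in for the idea above: infer the hidden task phase from discretized gaze regions with a plain HMM and Viterbi decoding. The paper's AHMM is hierarchical and learned from data; the phase names, gaze regions, and probability tables here are all invented for illustration.

```python
# Inferring task phase from gaze observations with a flat HMM (Viterbi).
import numpy as np

states = ["reach", "transport", "place"]
regions = ["object", "path", "goal"]           # discretized gaze targets
A = np.array([[0.8, 0.2, 0.0],                 # task-phase transitions
              [0.0, 0.8, 0.2],
              [0.1, 0.0, 0.9]])
B = np.array([[0.7, 0.2, 0.1],                 # P(gaze region | phase): gaze
              [0.2, 0.5, 0.3],                 # tends to run ahead of the
              [0.1, 0.2, 0.7]])                # hand toward the goal
pi = np.array([0.9, 0.05, 0.05])

obs = [0, 0, 1, 2, 2, 2]                       # gaze: object, object, path, goal...

def viterbi(obs, A, B, pi):
    n, T = len(pi), len(obs)
    logv = np.log(pi + 1e-12) + np.log(B[:, obs[0]] + 1e-12)
    back = np.zeros((T, n), int)
    for t in range(1, T):
        scores = logv[:, None] + np.log(A + 1e-12)
        back[t] = scores.argmax(axis=0)
        logv = scores.max(axis=0) + np.log(B[:, obs[t]] + 1e-12)
    path = [int(logv.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi(obs, A, B, pi))   # gaze reveals the upcoming phase early
```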

  2. Towards Wearable Gaze Supported Augmented Cognition

    DEFF Research Database (Denmark)

    Toshiaki Kurauchi, Andrew; Hitoshi Morimoto, Carlos; Mardanbeigi, Diako

    Augmented cognition applications must deal with the problem of how to exhibit information in an orderly, understandable, and timely fashion. Though context has been suggested to control the kind, amount, and timing of the information delivered, we argue that gaze can be a fundamental tool...... by the wearable computing community to develop a gaze supported augmented cognition application with three interaction modes. The application provides information about the person being looked at. The continuous mode updates information every time the user looks at a different face. The key activated discrete mode...

  3. Gaze Patterns in Auditory-Visual Perception of Emotion by Children with Hearing Aids and Hearing Children

    Directory of Open Access Journals (Sweden)

    Yifang Wang

    2017-12-01

    Full Text Available This study investigated eye-movement patterns during emotion perception for children with hearing aids and hearing children. Seventy-eight participants aged from 3 to 7 were asked to watch videos with a facial expression followed by an oral statement, and these two cues were either congruent or incongruent in emotional valence. Results showed that while hearing children paid more attention to the upper part of the face, children with hearing aids paid more attention to the lower part of the face after the oral statement was presented, especially for the neutral facial expression/neutral oral statement condition. These results suggest that children with hearing aids have an altered eye-contact pattern with others and difficulty matching visual and voice cues in emotion perception. The negative effects of these gaze patterns should be addressed early in rehabilitation for hearing-impaired children with assistive devices.

  5. An exploratory study on the driving method of speech synthesis based on the human eye reading imaging data

    Science.gov (United States)

    Gao, Pei-pei; Liu, Feng

    2016-10-01

    With the development of information technology and artificial intelligence, speech synthesis plays a significant role in human-computer interaction. However, current speech synthesis techniques lack naturalness and expressiveness, falling well short of natural speech, and speech-synthesis-based interaction remains too monotonous to reflect the user's own intent. This paper reviews the historical development of speech synthesis and summarizes the general processing pipeline, noting that prosody generation is a key module in the synthesis process. Building on this, we introduce a new human-computer interaction method that uses a reader's eye-movement patterns to control and drive prosody generation, enriching the forms synthesis can take. Based on real-time extraction of eye-gaze data, we propose a speech synthesis method that can express the speaker's natural speech rhythm: while the reader silently reads a corpus, the system captures reading information such as gaze duration per prosodic unit and builds a hierarchical prosodic duration model that determines the duration parameters of the synthesized speech. Finally, analysis verifies the feasibility of the proposed method.
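
    A loose sketch of the proposed mapping: per-prosodic-unit gaze durations, recorded during silent reading, scale the duration targets handed to a synthesizer. The sample units, numbers, and the simple linear scaling rule are assumptions; the paper builds a hierarchical duration model rather than this one-step rescaling.

```python
# Gaze-dwell-driven duration targets for synthesized prosodic units (toy).
gaze_ms = {"unit_1": 420, "unit_2": 380, "unit_3": 510}   # dwell per unit (ms)

base_ms = 350                                  # synthesizer default duration
mean_dwell = sum(gaze_ms.values()) / len(gaze_ms)
duration_targets = {unit: base_ms * dwell / mean_dwell    # longer gaze -> longer unit
                    for unit, dwell in gaze_ms.items()}
for unit, d in duration_targets.items():
    print(f"{unit}: {d:.0f} ms")
```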

  6. Watch out! Magnetoencephalographic evidence for early modulation of attention orienting by fearful gaze cueing.

    Directory of Open Access Journals (Sweden)

    Fanny Lachat

    Full Text Available Others' gaze and emotional facial expression are important cues for the process of attention orienting. Here, we investigated with magnetoencephalography (MEG) whether the combination of averted gaze and fearful expression may elicit a selectively early effect of attention orienting on the brain responses to targets. We used the direction of gaze of centrally presented fearful and happy faces as the spatial attention orienting cue in a Posner-like paradigm where the subjects had to detect a target checkerboard presented at gazed-at (valid trials) or non-gazed-at (invalid trials) locations of the screen. We showed that the combination of averted gaze and fearful expression resulted in a very early attention orienting effect in the form of additional parietal activity between 55 and 70 ms for the valid versus invalid targets following fearful gaze cues. No such effect was obtained for the targets following happy gaze cues. This early cue-target validity effect, specific to fearful gaze cues, involved the left superior parietal region and the left lateral middle occipital region. These findings provide the first evidence for an effect of attention orienting induced by fearful gaze in the time range of the C1 component. In doing so, they demonstrate the selective impact of combined gaze and fearful expression cues in the process of attention orienting.

  7. Preliminary results of BRAVO project: brain computer interfaces for Robotic enhanced Action in Visuo-motOr tasks.

    Science.gov (United States)

    Bergamasco, Massimo; Frisoli, Antonio; Fontana, Marco; Loconsole, Claudio; Leonardis, Daniele; Troncossi, Marco; Foumashi, Mohammad Mozaffari; Parenti-Castelli, Vincenzo

    2011-01-01

    This paper presents the preliminary results of the project BRAVO (Brain computer interfaces for Robotic enhanced Action in Visuo-motOr tasks). The objective of this project is to define a new approach to the development of assistive and rehabilitative robots for motor impaired users to perform complex visuomotor tasks that require a sequence of reaches, grasps and manipulations of objects. BRAVO aims at developing new robotic interfaces and HW/SW architectures for rehabilitation and regain/restoration of motor function in patients with upper limb sensorimotor impairment through extensive rehabilitation therapy and active assistance in the execution of Activities of Daily Living. The final system developed within this project will include a robotic arm exoskeleton and a hand orthosis that will be integrated together for providing force assistance. The main novelty that BRAVO introduces is the control of the robotic assistive device through the active prediction of intention/action. The system will actually integrate the information about the movement carried out by the user with a prediction of the performed action through an interpretation of current gaze of the user (measured through eye-tracking), brain activation (measured through BCI) and force sensor measurements. © 2011 IEEE

  8. A novel integrative method for analyzing eye and hand behaviour during reaching and grasping in an MRI environment.

    Science.gov (United States)

    Lawrence, Jane M; Abhari, Kamyar; Prime, Steven L; Meek, Benjamin P; Desanghere, Loni; Baugh, Lee A; Marotta, Jonathan J

    2011-06-01

    The development of noninvasive neuroimaging techniques, such as fMRI, has rapidly advanced our understanding of the neural systems underlying the integration of visual and motor information. However, the fMRI experimental design is restricted by several environmental elements, such as the presence of the magnetic field and the restricted view of the participant, making it difficult to monitor and measure behaviour. The present article describes a novel, specialized software package developed in our laboratory called Biometric Integration Recording and Analysis (BIRA). BIRA integrates video with kinematic data derived from the hand and eye, acquired using MRI-compatible equipment. The present article demonstrates the acquisition and analysis of eye and hand data using BIRA in a mock (0 Tesla) scanner. A method for collecting and integrating gaze and kinematic data in fMRI studies on visuomotor behaviour has several advantages: Specifically, it will allow for more sophisticated, behaviourally driven analyses and eliminate potential confounds of gaze or kinematic data.

  9. Eye guidance during real-world scene search: The role color plays in central and peripheral vision.

    Science.gov (United States)

    Nuthmann, Antje; Malcolm, George L

    2016-01-01

    The visual system utilizes environmental features to direct gaze efficiently when locating objects. While previous research has isolated various features' contributions to gaze guidance, these studies generally used sparse displays and did not investigate how features facilitated search as a function of their location in the visual field. The current study investigated how features across the visual field – particularly color – facilitate gaze guidance during real-world search. A gaze-contingent window followed participants' eye movements, restricting color information to specified regions. Scene images were presented in full color; with color in the periphery and gray in central vision; with gray in the periphery and color in central vision; or entirely in grayscale. Color conditions were crossed with a search cue manipulation, with the target cued either with a word label or an exact picture. Search times increased as color information in the scene decreased. A gaze-data-based decomposition of search time revealed color-mediated effects on specific subprocesses of search. Color in peripheral vision facilitated target localization, whereas color in central vision facilitated target verification. Picture cues facilitated search, with the effects of cue specificity and scene color combining additively. When available, the visual system utilizes the environment's color information to facilitate different real-world visual search behaviors based on the location within the visual field.
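
    A sketch of the gaze-contingent color window: blend a color scene with its grayscale version using a mask centered on the current gaze position, so color is available only in central (or only in peripheral) vision. The Gaussian falloff, radius, and toy image are assumptions standing in for the study's display software.

```python
# Gaze-contingent restriction of color to central or peripheral vision.
import numpy as np

def color_window(rgb, gaze_xy, radius=120.0, color_central=True):
    h, w = rgb.shape[:2]
    gray = rgb.mean(axis=2, keepdims=True).repeat(3, axis=2)
    yy, xx = np.mgrid[0:h, 0:w]
    d = np.hypot(xx - gaze_xy[0], yy - gaze_xy[1])
    central = np.exp(-(d / radius) ** 2)[..., None]   # 1 at gaze, -> 0 outside
    mask = central if color_central else 1.0 - central
    return (mask * rgb + (1.0 - mask) * gray).astype(np.uint8)

scene = np.random.randint(0, 256, (600, 800, 3), np.uint8)        # stand-in scene
central_color = color_window(scene, gaze_xy=(400, 300))           # color fovea
peripheral_color = color_window(scene, (400, 300), color_central=False)
print(central_color.shape, peripheral_color.shape)
```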

  10. Face and gaze perception in borderline personality disorder: An electrical neuroimaging study.

    Science.gov (United States)

    Berchio, Cristina; Piguet, Camille; Gentsch, Kornelia; Küng, Anne-Lise; Rihs, Tonia A; Hasler, Roland; Aubry, Jean-Michel; Dayer, Alexandre; Michel, Christoph M; Perroud, Nader

    2017-11-30

    Humans are sensitive to gaze direction from early life, and gaze has social and affective value. Borderline personality disorder (BPD) is a clinical condition characterized by emotional dysregulation and enhanced sensitivity to affective and social cues. In this study we investigated the temporal-spatial dynamics of spontaneous gaze processing in BPD. We used a 2-back working-memory task in which neutral faces with direct and averted gaze were presented. Gaze was used as an emotional modulator of event-related potentials to faces. High-density EEG data were acquired in 19 females with BPD and 19 healthy women, and analyzed with a spatio-temporal microstate analysis approach. Independently of gaze direction, BPD patients showed altered N170 and P200 topographies for neutral faces. Source localization revealed that the anterior cingulate and other prefrontal regions were abnormally activated during the N170 component related to face encoding, while middle temporal deactivations were observed during the P200 component. Post-task affective ratings showed that BPD patients had difficulty disambiguating neutral gaze. This study provides first evidence for an early neural bias toward neutral faces in BPD independent of gaze direction and also suggests the importance of considering basic aspects of social cognition in identifying biological risk factors of BPD. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  11. For your eyes only: Effect of confederate's eye level on reach-to-grasp action

    Directory of Open Access Journals (Sweden)

    Francois Quesque

    2014-12-01

    Full Text Available Previous studies have shown that the spatio-temporal parameters of reach-to-grasp movements are influenced by the social context in which the motor action is performed. In particular, when interacting with a confederate, movements are slower, with longer initiation times and more ample trajectories, which has been interpreted as implicit communicative information emerging through voluntary movement to catch the partner's attention and optimize cooperation (Quesque et al., 2013). Because gaze is a crucial component of social interactions, the present study evaluated the role of a confederate's eye level on the social modulation of trajectory curvature. An actor and a partner facing each other took part in a cooperative task consisting, for one of them, of grasping and moving a wooden dowel under time constraints. Before this Main action, the actor performed a Preparatory action, which consisted of placing the wooden dowel on a central marking. The partner's eye level was unnoticeably varied using an adjustable seat that matched or was higher than the actor's seat. Our data confirmed the previous effects of social intention on motor responses. Furthermore, we observed an effect of the partner's eye level on the Preparatory action, leading the actors to unconsciously exaggerate the trajectory curvature in relation to their partner's eye level. No interaction was found between the actor's social intention and their partner's eye level. These results suggest that other bodies are implicitly taken into account when a reach-to-grasp movement is produced in a social context.

  13. Mouse cursor movement and eye tracking data as an indicator of pathologists’ attention when viewing digital whole slide images

    Science.gov (United States)

    Raghunath, Vignesh; Braxton, Melissa O.; Gagnon, Stephanie A.; Brunyé, Tad T.; Allison, Kimberly H.; Reisch, Lisa M.; Weaver, Donald L.; Elmore, Joann G.; Shapiro, Linda G.

    2012-01-01

    Context: Digital pathology has the potential to dramatically alter the way pathologists work, yet little is known about pathologists’ viewing behavior while interpreting digital whole slide images. While tracking pathologist eye movements when viewing digital slides may be the most direct method of capturing pathologists’ viewing strategies, this technique is cumbersome and technically challenging to use in remote settings. Tracking pathologist mouse cursor movements may serve as a practical method of studying digital slide interpretation, and mouse cursor data may illuminate pathologists’ viewing strategies and time expenditures in their interpretive workflow. Aims: To evaluate the utility of mouse cursor movement data, in addition to eye-tracking data, in studying pathologists’ attention and viewing behavior. Settings and Design: Pathologists (N = 7) viewed 10 digital whole slide images of breast tissue that were selected using a random stratified sampling technique to include a range of breast pathology diagnoses (benign/atypia, carcinoma in situ, and invasive breast cancer). A panel of three expert breast pathologists established a consensus diagnosis for each case using a modified Delphi approach. Materials and Methods: Participants’ foveal vision was tracked using a SensoMotoric Instruments RED 60 Hz eye-tracking system, and mouse cursor movement was tracked using a custom MATLAB script. Statistical Analysis Used: Data on eye-gaze and mouse cursor position were gathered at fixed intervals and analyzed using distance comparisons and regression analyses by slide diagnosis and pathologist expertise. Pathologists’ accuracy (defined as percent agreement with the expert consensus diagnoses) and efficiency (accuracy and speed) were also analyzed. Results: Mean viewing time per slide was 75.2 seconds (SD = 38.42). Accuracy (percent agreement with expert consensus) by diagnosis type was 83% (benign/atypia), 48% (carcinoma in situ), and 93% (invasive).
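
    The analysis code is not part of the record; the distance comparisons it mentions reduce to sampling gaze and cursor positions at the same fixed intervals and summarizing their separation. A minimal Python sketch under that reading, with synthetic data standing in for tracker output:

        import numpy as np

        def gaze_cursor_distance(gaze_xy, cursor_xy):
            """Per-sample Euclidean distance between gaze and mouse-cursor
            positions, both (N, 2) arrays sampled at the same fixed intervals."""
            gaze_xy = np.asarray(gaze_xy, dtype=float)
            cursor_xy = np.asarray(cursor_xy, dtype=float)
            return np.linalg.norm(gaze_xy - cursor_xy, axis=1)

        # one second of simulated 60 Hz samples, cursor loosely trailing the gaze
        rng = np.random.default_rng(0)
        gaze = rng.uniform(0.0, 1000.0, size=(60, 2))
        cursor = gaze + rng.normal(0.0, 50.0, size=(60, 2))
        d = gaze_cursor_distance(gaze, cursor)
        print(f"mean separation {d.mean():.0f} px, max {d.max():.0f} px")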

  14. Conflict Tasks of Different Types Divergently Affect the Attentional Processing of Gaze and Arrow.

    Science.gov (United States)

    Fan, Lingxia; Yu, Huan; Zhang, Xuemin; Feng, Qing; Sun, Mengdan; Xu, Mengsi

    2018-01-01

    The present study explored the attentional processing mechanisms of gaze and arrow cues in two different types of conflict tasks. In Experiment 1, participants performed a flanker task in which gaze and arrow cues were presented as central targets or bilateral distractors, and the congruency between the direction of the target and that of the distractors was manipulated. Results showed that arrow distractors strongly interfered with the attentional processing of gaze, whereas the processing of arrow direction was immune to conflict from gaze distractors. Using a spatial compatibility task, Experiment 2 explored the conflict effects exerted on gaze and arrow processing by their spatial locations on screen. When the direction of the arrow conflicted with its spatial location, response times were slowed; the encoding of gaze, however, was unaffected by spatial location. In general, the processing of an arrow cue is little influenced by bilateral gaze cues but is affected by irrelevant spatial information, whereas the processing of a gaze cue is strongly disturbed by bilateral arrows but is unaffected by irrelevant spatial information. These divergent effects of different conflict types on gaze and arrow cues may reflect two relatively distinct modes of attentional processing.

  15. Strange-face illusions during inter-subjective gazing.

    Science.gov (United States)

    Caputo, Giovanni B

    2013-03-01

    In normal observers, gazing at one's own face in a mirror for a few minutes at a low illumination level triggers the perception of strange faces, a visual illusion that has been named 'strange-face in the mirror'. Individuals see huge distortions of their own faces, but they often also see monstrous beings, archetypal faces, faces of relatives and of the deceased, and animals. In the experiment described here, strange-face illusions were perceived when two individuals in a dimly lit room gazed at each other's face. Inter-subjective gazing produced a higher number of different strange faces than mirror-gazing. Inter-subjective strange-face illusions were always dissociative of the subject's self and were accompanied by a moderate feeling of their reality, indicating a temporary loss of self-agency. Unconscious synchronization of event-related responses to the illusions was found between members of some pairs. Synchrony of illusions may indicate that unconscious response-coordination is caused by the conjunction of crossed dissociative strange-face illusions, which are perceived as projections, into each other's visual face, of reciprocal embodied representations within the pair. Inter-subjective strange-face illusions may thus be explained by the binding of the subject's embodied representations (somaesthetic, kinaesthetic, and motor facial patterns) to the other's visual face. Unconscious facial mimicry may promote inter-subjective illusion-conjunction and, in turn, unconscious joint action and response-coordination. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Three-Dimensional Eye Tracking in a Surgical Scenario.

    Science.gov (United States)

    Bogdanova, Rositsa; Boulanger, Pierre; Zheng, Bin

    2015-10-01

    Eye tracking has been widely used to study the eye behavior of surgeons over the past decade. Most eye-tracking data are reported in a 2-dimensional (2D) fashion, and data describing surgeons' stereoperception are often missing. With the introduction of stereoscopes in laparoscopic procedures, there is an increasing need to study surgeons' depth perception during 3D image-guided surgery. We developed a new algorithm for computing convergence points in stereovision by measuring surgeons' interpupillary distance, the distance to the view target, and the difference between the gaze locations of the 2 eyes. To test the feasibility of the new algorithm, we recruited 10 individuals to watch stereograms based on binocular disparity and asked them to develop stereoperception using a cross-eyed viewing technique. Participants' eye motions were recorded with a Tobii eye tracker while they performed the trials, and convergence points under the normal and stereo-viewing conditions were computed using the developed algorithm. All 10 participants were able to develop stereovision after a short period of training. During stereovision, participants' eye convergence points were 14 ± 1 cm in front of their eyes, significantly closer than the convergence points under the normal viewing condition (77 ± 20 cm). By applying our method of calculating convergence points from eye-tracking data, we were able to distinguish the eye movement patterns of human operators between the normal and stereovision conditions. Knowledge from this study can be applied to the design of surgical visual systems, with the goal of improving surgical performance and patient safety. © The Author(s) 2015.
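
    The record gives the inputs to the convergence computation but not the algorithm itself. Treating the eyes and the display as a planar similar-triangles problem, a minimal Python sketch might read as follows (the function name, units, and simplified geometry are assumptions, not the authors' implementation):

        def convergence_depth(ipd_cm, screen_dist_cm, x_left_cm, x_right_cm):
            """Depth of the binocular convergence point in front of the eyes.

            The eyes sit at depth 0, separated horizontally by `ipd_cm`; the
            display plane sits at depth `screen_dist_cm`. `x_left_cm` and
            `x_right_cm` are the horizontal on-screen gaze coordinates of the
            left and right eye. Intersecting the two gaze rays gives the depth
            of the convergence point."""
            # crossed disparity > 0 means the rays meet in front of the screen
            crossed_disparity = x_left_cm - x_right_cm
            return screen_dist_cm * ipd_cm / (ipd_cm + crossed_disparity)

        # coincident gaze points: convergence lies on the screen plane
        print(convergence_depth(6.3, 77.0, 0.0, 0.0))     # 77.0
        # strongly crossed gaze, as in cross-eyed stereogram viewing
        print(convergence_depth(6.3, 77.0, 14.0, -14.0))  # ~14.1

    The two printed values mirror the contrast the study reports between the normal viewing condition (convergence near the display) and stereovision (convergence much closer to the eyes).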

  17. Wearable Gaze Trackers: Mapping Visual Attention in 3D

    DEFF Research Database (Denmark)

    Jensen, Rasmus Ramsbøl; Stets, Jonathan Dyssel; Suurmets, Seidi

    2017-01-01

    The introduction of mobile wearable gaze trackers allows respondents to move freely in any real-world 3D environment, removing previous restrictions. In this paper, we propose a novel approach for processing the visual attention of respondents using mobile wearable gaze trackers in a 3D environment. The pipeline consists of 3 steps...

  18. Cross-coupling between accommodation and convergence is optimized for a broad range of directions and distances of gaze.

    Science.gov (United States)

    Nguyen, Dorothy; Vedamurthy, Indu; Schor, Clifton

    2008-03-01

    Accommodation and convergence systems are cross-coupled so that stimulation of one system produces responses by both systems. Ideally, the cross-coupled responses of accommodation and convergence match their respective stimuli. When expressed in diopters and meter angles, respectively, the stimuli for accommodation and convergence are equal in the mid-sagittal plane when viewed with symmetrical convergence, where historically the gains of the cross-coupling (AC/A and CA/C ratios) have been quantified. However, targets at non-zero azimuth angles, viewed with asymmetric convergence, present unequal stimuli for accommodation and convergence. Are the cross-links between the two systems calibrated to compensate for stimulus mismatches that increase with gaze azimuth? We measured the response AC/A and stimulus CA/C ratios at zero azimuth and at 17.5 and 30 deg of rightward gaze eccentricity with a Badal optometer and a Wheatstone-mirror haploscope. AC/A ratios were measured under open-loop convergence conditions along the iso-accommodation circle (the locus of points that stimulate approximately equal amounts of accommodation in the two eyes at all azimuth angles). CA/C ratios were measured under open-loop accommodation conditions along the iso-vergence circle (the locus of points that stimulate constant convergence at all azimuth angles). Our results show that the gain of accommodative convergence (AC/A ratio) decreased and the bias of convergence accommodation increased at the 30 deg gaze eccentricity. These changes are in directions that compensate for the stimulus mismatches caused by spatial-viewing geometry during asymmetric convergence.
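
    The stimulus mismatch described here follows directly from the viewing geometry: in diopters and meter angles the two demands coincide for a target at zero azimuth but drift apart under asymmetric convergence. A simplified Python illustration, assuming a target in the horizontal plane of the eyes and a small-angle conversion to meter angles (parameter names and values are illustrative):

        import numpy as np

        def demands(azimuth_deg, dist_m, ipd_m=0.063):
            """Accommodative demand (diopters) and convergence demand (meter
            angles) for a target at `dist_m` from the cyclopean point and
            `azimuth_deg` to the right, with eye rotation centres at +/- ipd/2."""
            theta = np.radians(azimuth_deg)
            target = np.array([dist_m * np.sin(theta), dist_m * np.cos(theta)])
            left = np.array([-ipd_m / 2.0, 0.0])
            right = np.array([ipd_m / 2.0, 0.0])
            d_l = np.linalg.norm(target - left)
            d_r = np.linalg.norm(target - right)
            accommodation_D = 0.5 * (1.0 / d_l + 1.0 / d_r)  # mean of the two eyes
            v_l = (target - left) / d_l
            v_r = (target - right) / d_r
            angle = np.arccos(np.clip(v_l @ v_r, -1.0, 1.0))  # convergence angle, rad
            convergence_MA = angle / ipd_m  # small-angle meter-angle conversion
            return accommodation_D, convergence_MA

        print(demands(0.0, 0.4))   # ~(2.49 D, 2.49 MA): demands match at zero azimuth
        print(demands(30.0, 0.4))  # ~(2.50 D, 2.14 MA): a mismatch opens up at 30 deg

    In this toy geometry the convergence demand falls below the accommodative demand in eccentric gaze, the kind of mismatch that a lowered AC/A gain at 30 deg would compensate.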

  19. Gaze and power. A post-structuralist interpretation on Perseus’ myth

    Directory of Open Access Journals (Sweden)

    Olaya Fernández Guerrero

    2015-12-01

    Full Text Available Gaze hierarchizes, manages, and labels reality; according to Foucault, gaze can therefore be understood as a practice of power. This paper draws on his theories and applies them to one of the most powerful symbolic spheres of Western culture: Greek myth. Notions such as visibility, invisibility, and panopticism shed new light on the story of Perseus and Medusa, enabling a re-reading of the myth focused on the different forms of power that emerge from the gaze.

  20. Route planning with transportation network maps: an eye-tracking study.

    Science.gov (United States)

    Grison, Elise; Gyselinck, Valérie; Burkhardt, Jean-Marie; Wiener, Jan Malte

    2017-09-01

    Planning routes using transportation network maps is a common task that has received little attention in the literature. Here, we present a novel eye-tracking paradigm to investigate the psychological processes and mechanisms involved in such route planning. In the experiment, participants were first presented with an origin-destination pair and then with fictitious public transportation maps; their task was to find the connecting route that required the minimum number of transfers. Based on participants' gaze behaviour, each trial was split into two phases: (1) the search phase, i.e., the initial phase of the trial until participants had gazed at both origin and destination at least once, and (2) the route planning and selection phase. Comparisons of eye-tracking measures between these phases, and of the time needed to complete each of them, which depended on the complexity of the planning task, suggest that the two phases are indeed distinct and supported by different cognitive processes. For example, participants spent more time attending to the centre of the map during the initial search phase, before directing their attention to connecting stations, where transitions between lines were possible. Our results provide novel insights into the psychological processes involved in route planning from maps. The findings are discussed in relation to current theories of route planning.
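
    The paper's two-phase definition lends itself to a simple operationalization over a trial's fixation sequence once fixations have been mapped to areas of interest. A Python sketch under that assumption (the AOI labels and data structure are illustrative, not the authors' code):

        def split_phases(fixation_aois):
            """Split a trial's fixation sequence into (1) the search phase, which
            ends with the first fixation after which both 'origin' and
            'destination' have been gazed at least once, and (2) the route
            planning and selection phase."""
            seen = set()
            for i, aoi in enumerate(fixation_aois):
                if aoi in ("origin", "destination"):
                    seen.add(aoi)
                if {"origin", "destination"} <= seen:
                    return fixation_aois[: i + 1], fixation_aois[i + 1 :]
            return fixation_aois, []  # endpoints never both fixated: all search

        search, planning = split_phases(
            ["centre", "origin", "centre", "line_A", "destination",
             "transfer_station", "origin"]
        )
        print(len(search), len(planning))  # 5 2

    Phase-wise eye-tracking measures (e.g., dwell time on the map centre versus on transfer stations) can then be compared between the two returned segments, as the study describes.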