WorldWideScience

Sample records for eye gaze interface

  1. Eye gaze in intelligent user interfaces gaze-based analyses, models and applications

    CERN Document Server

    Nakano, Yukiko I; Bader, Thomas

    2013-01-01

    Remarkable progress in eye-tracking technologies has opened the way to the design of novel attention-based intelligent user interfaces, and has highlighted the importance of a better understanding of eye gaze in human-computer interaction and human-human communication. For instance, a user's focus of attention is useful in interpreting the user's intentions, their understanding of the conversation, and their attitude towards the conversation. In human face-to-face communication, eye gaze plays an important role in floor management, grounding, and engagement in conversation…

  2. Remote control of mobile robots through human eye gaze: the design and evaluation of an interface

    Science.gov (United States)

    Latif, Hemin Omer; Sherkat, Nasser; Lotfi, Ahmad

    2008-10-01

    Controlling mobile robots remotely requires the operator to monitor the status of the robot through some sort of feedback. Assuming a vision-based feedback system is used, the operator is required to closely monitor the images while navigating the robot in real time, which engages both the eyes and the hands of the operator. Since the eyes are engaged in the monitoring task anyway, their gaze can be used to navigate the robot in order to free the hands of the operator. However, the challenge lies in developing an interaction interface that enables an intuitive distinction to be made between monitoring and commanding. This paper presents a novel means of constructing a user interface to meet this challenge. A range of solutions is constructed by augmenting the visual feedback with command regions to investigate the extent to which a user can intuitively control the robot. An experimental platform comprising a mobile robot together with cameras and an eye-gaze system is constructed. The design of the system allows control of the robot, of the onboard cameras, and of the interface itself through eye gaze. A number of tasks are designed to evaluate the proposed solutions. This paper presents the design considerations and the results of the evaluation. Overall, the proposed solutions are found to provide an effective means of successfully navigating the robot across a range of tasks.
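
The monitoring-versus-commanding distinction described in this abstract can be illustrated with a simple hit-test: gaze samples falling on the video feed count as monitoring, while samples in the surrounding command regions issue navigation commands. The layout and command names below are invented for illustration, not taken from the paper.

```python
# Illustrative sketch: the video feedback is augmented with command
# regions, and a gaze sample is interpreted either as monitoring
# (inside the video) or as a command (inside a surrounding region).
# Region names and coordinates are hypothetical.

VIDEO = (100, 100, 540, 380)           # x0, y0, x1, y1 of the video feed
COMMANDS = {
    "forward": (100, 0, 540, 100),     # band above the video
    "backward": (100, 380, 540, 480),  # band below
    "left": (0, 100, 100, 380),        # band to the left
    "right": (540, 100, 640, 380),     # band to the right
}

def _inside(point, rect):
    x, y = point
    x0, y0, x1, y1 = rect
    return x0 <= x < x1 and y0 <= y < y1

def interpret_gaze(point):
    """Map a gaze sample to 'monitor', a command name, or None."""
    if _inside(point, VIDEO):
        return "monitor"
    for name, rect in COMMANDS.items():
        if _inside(point, rect):
            return name
    return None
```

A real interface would additionally require a dwell time or confirmation step before acting, so that a stray glance at a command region does not move the robot.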

  3. Eye-gaze control of the computer interface: Discrimination of zoom intent

    Energy Technology Data Exchange (ETDEWEB)

    Goldberg, J.H. [Pennsylvania State Univ., University Park, PA (United States). Dept. of Industrial Engineering]; Schryver, J.C. [Oak Ridge National Lab., TN (United States)]

    1993-10-01

    An analysis methodology and associated experiment were developed to assess whether definable and repeatable signatures of eye-gaze characteristics are evident, preceding a decision to zoom-in, zoom-out, or not to zoom at a computer interface. This user intent discrimination procedure can have broad application in disability aids and telerobotic control. Eye-gaze was collected from 10 subjects in a controlled experiment, requiring zoom decisions. The eye-gaze data were clustered, then fed into a multiple discriminant analysis (MDA) for optimal definition of heuristics separating the zoom-in, zoom-out, and no-zoom conditions. Confusion matrix analyses showed that a number of variable combinations classified at a statistically significant level, but practical significance was more difficult to establish. Composite contour plots demonstrated the regions in parameter space consistently assigned by the MDA to unique zoom conditions. Peak classification occurred at about 1200-1600 msec. Improvements in the methodology to achieve practical real-time zoom control are considered.
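
As an illustration of the discrimination step, here is a toy nearest-centroid classifier standing in for the paper's multiple discriminant analysis; the feature names and training values are invented, not the study's data.

```python
# Toy stand-in for discriminant analysis of zoom intent: gaze clusters
# are summarized as feature vectors (here: mean fixation duration in ms,
# gaze spread in px) and each vector is assigned to the nearest class
# centroid. The real study used MDA; everything below is illustrative.

def centroid(vectors):
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def train(samples):
    """samples: {class_label: [feature_vector, ...]} -> class centroids."""
    return {label: centroid(vecs) for label, vecs in samples.items()}

def classify(model, vec):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], vec))

# Invented training data.
model = train({
    "zoom_in": [(900, 15), (1100, 20)],
    "zoom_out": [(400, 80), (500, 90)],
    "no_zoom": [(250, 40), (300, 45)],
})
```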

  4. Eye gaze tracking for endoscopic camera positioning: an application of a hardware/software interface developed to automate Aesop.

    Science.gov (United States)

    Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K

    2008-01-01

    A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
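
The gaze-centering behavior described (keeping the user's gaze point region at the center of the video monitor) can be sketched as a proportional controller; the gains, dead zone, and screen size below are illustrative assumptions, not Aesop's actual parameters.

```python
# Hedged sketch of gaze-driven camera centering: if the gaze point
# drifts from the center of the monitor, command a proportional camera
# pan/tilt that brings the gazed region back to center. Units, gain,
# and dead-zone radius are invented for illustration.

def camera_step(gaze, screen=(640, 480), gain=0.05, dead_zone=40):
    """Return a (pan, tilt) command that re-centers the gaze point.

    A dead zone avoids chasing normal fixation jitter near the center.
    """
    cx, cy = screen[0] / 2, screen[1] / 2
    dx, dy = gaze[0] - cx, gaze[1] - cy
    if dx * dx + dy * dy < dead_zone ** 2:
        return (0.0, 0.0)              # gaze already near center
    return (gain * dx, gain * dy)
```

Run in a loop against the eye tracker's samples, this yields the "camera works around the surgeon" behavior: no motion while the surgeon works near the center, smooth re-centering when attention shifts.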

  5. EYE GAZE TRACKING

    DEFF Research Database (Denmark)

    2017-01-01

    This invention relates to a method of performing eye gaze tracking of at least one eye of a user by determining the position of the center of the eye. The method comprises the steps of: detecting the position of at least three reflections on said eye; transforming said positions to a normalized coordinate system spanning a frame of reference, wherein said transformation is performed based on a bilinear transformation or a non-linear transformation, e.g. a Möbius transformation or a homographic transformation; detecting the position of said center of the eye relative to the position of said reflections and transforming this position to said normalized coordinate system; and tracking the eye gaze by tracking the movement of said eye in said normalized coordinate system. Thereby calibration of a camera, such as knowledge of the exact position and zoom level of the camera, is avoided.
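
The claimed normalization can be sketched for the three-reflection case, where the mapping into the normalized frame is affine (the bilinear and Möbius variants in the claim are not shown here). The canonical target triangle is an arbitrary choice for illustration.

```python
# Sketch of the normalization idea: three corneal reflections (glints)
# span a frame of reference, and the pupil center is re-expressed in
# that frame, making the signal insensitive to camera position and
# zoom. With exactly three reflections the mapping is affine.

def _solve3(m, v):
    """Solve a 3x3 linear system by Gauss-Jordan elimination."""
    a = [row[:] + [v[i]] for i, row in enumerate(m)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(3):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[i][3] / a[i][i] for i in range(3)]

def normalize(pupil, glints, canon=((0, 0), (1, 0), (0, 1))):
    """Map the pupil center into the frame spanned by three glints."""
    m = [[gx, gy, 1] for gx, gy in glints]
    px = _solve3(m, [c[0] for c in canon])   # affine row for x
    py = _solve3(m, [c[1] for c in canon])   # affine row for y
    x, y = pupil
    return (px[0] * x + px[1] * y + px[2],
            py[0] * x + py[1] * y + py[2])
```

Because the glints and the pupil move together under camera translation and zoom, the normalized coordinates stay the same, which is exactly why explicit camera calibration can be avoided.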

  6. Eye Movements in Gaze Interaction

    DEFF Research Database (Denmark)

    Møllenbach, Emilie; Hansen, John Paulin; Lillholm, Martin

    2013-01-01

    Gaze as a sole input modality must support complex navigation and selection tasks. Gaze interaction combines specific eye movements and graphic display objects (GDOs). This paper suggests a unifying taxonomy of gaze interaction principles. The taxonomy deals with three types of eye movements: fixations…

  8. Eye-gaze independent EEG-based brain-computer interfaces for communication

    Science.gov (United States)

    Riccio, A.; Mattia, D.; Simione, L.; Olivetti, M.; Cincotti, F.

    2012-08-01

    The present review systematically examines the literature reporting gaze-independent interaction modalities in non-invasive brain-computer interfaces (BCIs) for communication. BCIs measure signals related to specific brain activity and translate them into device control signals. This technology can be used to provide users with severe motor disability (e.g. late stage amyotrophic lateral sclerosis (ALS); acquired brain injury) with an assistive device that does not rely on muscular contraction. Most of the studies on BCIs explored mental tasks and paradigms using the visual modality. Considering that in ALS patients the oculomotor control can deteriorate, and that other potential users could have impaired visual function, tactile and auditory modalities have been investigated over the past years to seek alternative BCI systems which are independent of vision. In addition, various attentional mechanisms, such as covert attention and feature-directed attention, have been investigated to develop gaze-independent visual-based BCI paradigms. Three areas of research were considered in the present review: (i) auditory BCIs, (ii) tactile BCIs and (iii) independent visual BCIs. Out of a total of 130 search results, 34 articles were selected on the basis of pre-defined exclusion criteria. Thirteen articles dealt with independent visual BCIs, 15 with auditory BCIs, and six with tactile BCIs. From the review of the available literature, it can be concluded that a crucial point is the trade-off between BCI systems/paradigms with high accuracy and speed, but highly demanding in terms of attention and memory load, and systems requiring lower cognitive effort but with a limited amount of communicable information. These issues should be considered as priorities to be explored in future studies to meet users' requirements in a real-life scenario.

  9. Eye Gaze in Creative Sign Language

    Science.gov (United States)

    Kaneko, Michiko; Mesch, Johanna

    2013-01-01

    This article discusses the role of eye gaze in creative sign language. Because eye gaze conveys various types of linguistic and poetic information, it is an intrinsic part of sign language linguistics in general and of creative signing in particular. We discuss various functions of eye gaze in poetic signing and propose a classification of gaze…

  11. Eye gaze as relational evaluation: averted eye gaze leads to feelings of ostracism and relational devaluation.

    Science.gov (United States)

    Wirth, James H; Sacco, Donald F; Hugenberg, Kurt; Williams, Kipling D

    2010-07-01

    Eye gaze is often a signal of interest and, when noticed by others, leads to mutual and directional gaze. However, averting one's eye gaze toward an individual has the potential to convey a strong interpersonal evaluation. The averting of eye gaze is the most frequently used nonverbal cue to indicate the silent treatment, a form of ostracism. The authors argue that eye gaze can signal the relational value felt toward another person. In three studies, participants visualized interacting with an individual displaying averted or direct eye gaze. Compared to receiving direct eye contact, participants receiving averted eye gaze felt ostracized, signaled by thwarted basic need satisfaction, reduced explicit and implicit self-esteem, lowered relational value, and increased temptations to act aggressively toward the interaction partner.

  12. Ubiquitous gaze: using gaze at the interface

    NARCIS (Netherlands)

    Heylen, Dirk; Aghajan, Hamid; López-Cózar Delgado, Ramón; Augusto, Juan Carlos

    2010-01-01

    In the quest for more natural forms of interaction between humans and machines, information on where someone is looking and how (for how long, with longer or shorter gaze periods) plays a prominent part. The importance of gaze in social interaction, its manifold functions and expressive force, and the…

  13. Towards emotion modeling based on gaze dynamics in generic interfaces

    DEFF Research Database (Denmark)

    Vester-Christensen, Martin; Leimberg, Denis; Ersbøll, Bjarne Kjær

    2005-01-01

    Gaze detection can be a useful ingredient in generic human computer interfaces if current technical barriers are overcome. We discuss the feasibility of concurrent posture and eye-tracking in the context of single (low cost) camera imagery. The ingredients in the approach are posture and eye region extraction based on active appearance modeling and eye tracking using a new fast and robust heuristic. The eye tracker is shown to perform well for low resolution image segments, hence making it feasible to estimate gaze using a single generic camera.

  15. A Gaze Interactive Textual Smartwatch Interface

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Biermann, Florian; Askø Madsen, Janus

    2015-01-01

    …but not where on the screen they are looking. To counter the motor noise we present a word-by-word textual UI that shows temporary command options to be executed by gaze-strokes. Twenty-seven participants conducted a simulated smartwatch task and were able to reliably perform commands that would adjust the speed of word presentation or make regressions. We discuss future design and usage options for a textual smartwatch gaze interface.

  16. Does the 'P300' speller depend on eye gaze?

    Science.gov (United States)

    Brunner, P.; Joshi, S.; Briskin, S.; Wolpaw, J. R.; Bischof, H.; Schalk, G.

    2010-10-01

    Many people affected by debilitating neuromuscular disorders such as amyotrophic lateral sclerosis, brainstem stroke or spinal cord injury are impaired in their ability to, or are even unable to, communicate. A brain-computer interface (BCI) uses brain signals, rather than muscles, to re-establish communication with the outside world. One particular BCI approach is the so-called 'P300 matrix speller' that was first described by Farwell and Donchin (1988 Electroencephalogr. Clin. Neurophysiol. 70 510-23). It has been widely assumed that this method does not depend on the ability to focus on the desired character, because it was thought that it relies primarily on the P300-evoked potential and minimally, if at all, on other EEG features such as the visual-evoked potential (VEP). This issue is highly relevant for the clinical application of this BCI method, because eye movements may be impaired or lost in the relevant user population. This study investigated the extent to which the performance in a 'P300' speller BCI depends on eye gaze. We evaluated the performance of 17 healthy subjects using a 'P300' matrix speller under two conditions. Under one condition ('letter'), the subjects focused their eye gaze on the intended letter, while under the second condition ('center'), the subjects focused their eye gaze on a fixation cross that was located in the center of the matrix. The results show that the performance of the 'P300' matrix speller in normal subjects depends in considerable measure on gaze direction. They thereby disprove a widespread assumption in BCI research, and suggest that this BCI might function more effectively for people who retain some eye-movement control. The applicability of these findings to people with severe neuromuscular disabilities (particularly in eye-movements) remains to be determined.

  17. Gaze interaction with textual user interface

    DEFF Research Database (Denmark)

    Paulin Hansen, John; Lund, Haakon; Madsen, Janus Askø

    2015-01-01

    …option for text navigation. People readily understood how to execute RSVP command prompts, and a majority of them preferred gaze input to a pen pointer. We present the concept of a smartwatch that can track eye movements and mediate command options whenever in proximity of intelligent devices…

  18. Eye Gaze Assistance for a Game-Like Interactive Task

    Directory of Open Access Journals (Sweden)

    Tamás (Tom) D. Gedeon

    2008-01-01

    Human beings communicate in abbreviated ways dependent on prior interactions and shared knowledge. Furthermore, humans share information about intentions and future actions using eye gaze. Among primates, humans are unique in the whiteness of the sclera and amount of sclera shown, essential for communication via interpretation of eye gaze. This paper extends our previous work on a game-like interactive task by the use of computerised recognition of eye gaze and fuzzy signature-based interpretation of possible intentions. This extends our notion of robot instinctive behaviour to intentional behaviour. We show a good improvement in speed of response from a simple use of eye gaze information. We also show a significant and more sophisticated use of the eye gaze information, which eliminates the need for control actions on the user's part. We also suggest how visibility of control can be returned to the user in these cases.

  19. Frequency analysis of gaze points with CT colonography interpretation using eye gaze tracking system

    Science.gov (United States)

    Tsutsumi, Shoko; Tamashiro, Wataru; Sato, Mitsuru; Okajima, Mika; Ogura, Toshihiro; Doi, Kunio

    2017-03-01

    It is important to investigate the eye-tracking gaze points of experts in order to assist trainees in understanding the image interpretation process. We investigated gaze points during CT colonography (CTC) interpretation and analyzed the difference in gaze points between experts and trainees. In this study, we attempted to understand how trainees can be brought to the level achieved by experts in viewing of CTC. We used an eye gaze point sensing system, Gazefineder (JVCKENWOOD Corporation, Tokyo, Japan), which can detect the pupil point and corneal reflection point by dark pupil eye tracking. This system provides gaze point images and Excel file data. The subjects were radiological technologists both experienced and inexperienced in reading CTC. We performed observer studies in reading virtual pathology images and examined the observers' image interpretation process using gaze point data. Furthermore, we performed an eye-tracking frequency analysis using the Fast Fourier Transform (FFT). The frequency analysis allowed us to understand the difference in gaze points between experts and trainees. The result for the trainee showed a large amount of both high-frequency and low-frequency components. In contrast, both components for the expert were relatively low. Regarding the amount of eye movement in every 0.02 seconds, we found that the expert tended to interpret images slowly and calmly, whereas the trainee moved their eyes quickly and searched over wide areas. We can assess the difference in gaze points on CTC between experts and trainees by using the eye gaze point sensing system together with the frequency analysis. The potential improvements in CTC interpretation for trainees can be evaluated by using gaze point data.

  20. Non-intrusive eye gaze tracking under natural head movements.

    Science.gov (United States)

    Kim, S; Sked, M; Ji, Q

    2004-01-01

    We propose an eye gaze tracking system that operates under natural head movements. The system consists of one CCD camera and two mirrors. Based on geometric and linear algebra calculations, the mirrors rotate to follow head movements in order to keep the eyes within the view of the camera. Our system allows the subject's head to move 30 cm horizontally and 20 cm vertically, with spatial gaze resolutions of about 6 and 7 degrees, respectively, and a frame rate of about 10 Hz. We also introduce a hierarchical generalized regression neural network (H-GRNN) scheme to map eye and mirror parameters to gaze, achieving a gaze estimation accuracy of 92% under head movements. The use of H-GRNN also eliminates the need for personal calibration for new subjects, since H-GRNN can generalize. Preliminary experiments show our system is accurate and robust in gaze tracking under large head movements.
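
A GRNN is essentially Nadaraya-Watson kernel regression, so the eye/mirror-parameter-to-gaze mapping could be sketched as below. The training data and smoothing width are invented, and the paper's hierarchical variant is not reproduced.

```python
# Sketch of a GRNN prediction: the output is a distance-weighted average
# of training targets, with Gaussian weights. Inputs would be eye and
# mirror parameters; the target a gaze coordinate. All values invented.
import math

def grnn_predict(train_x, train_y, x, sigma=1.0):
    """Nadaraya-Watson regression, the core of a GRNN."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(xi, x))
                        / (2 * sigma ** 2)) for xi in train_x]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, train_y)) / total
```

Because prediction is a smooth interpolation over stored examples, a GRNN generalizes to nearby unseen inputs without per-subject retraining, which is the property the abstract leans on to avoid personal calibration.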

  1. The Development of Emotional Face and Eye Gaze Processing

    Science.gov (United States)

    Hoehl, Stefanie; Striano, Tricia

    2010-01-01

    Recent research has demonstrated that infants' attention towards novel objects is affected by an adult's emotional expression and eye gaze toward the object. The current event-related potential (ERP) study investigated how infants at 3, 6, and 9 months of age process fearful compared to neutral faces looking toward objects or averting gaze away…

  2. Cognitive context detection in UAS operators using eye-gaze patterns on computer screens

    Science.gov (United States)

    Mannaru, Pujitha; Balasingam, Balakumar; Pattipati, Krishna; Sibley, Ciara; Coyne, Joseph

    2016-05-01

    In this paper, we demonstrate the use of eye-gaze metrics of unmanned aerial systems (UAS) operators as effective indices of their cognitive workload. Our analyses are based on an experiment where twenty participants performed pre-scripted UAS missions of three different difficulty levels by interacting with two custom designed graphical user interfaces (GUIs) that are displayed side by side. First, we compute several eye-gaze metrics, both traditional eye movement metrics and newly proposed ones, and analyze their effectiveness as cognitive classifiers. Most of the eye-gaze metrics are computed by dividing the computer screen into "cells". Then, we perform several analyses in order to select metrics for effective cognitive context classification related to our specific application; the objectives of these analyses are to (i) identify appropriate ways to divide the screen into cells; (ii) select appropriate metrics for training and classification of cognitive features; and (iii) identify a suitable classification method.
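
The "cells" binning described might be implemented as follows; the grid and screen dimensions are arbitrary stand-ins, and per-cell counts are just one of the metrics such a pipeline could feed to a classifier.

```python
# Sketch of cell-based gaze metrics: the screen is divided into a grid,
# each gaze sample is binned into a cell, and per-cell counts become
# features for workload classification. Dimensions are illustrative.

def cell_of(point, screen=(1920, 1080), grid=(4, 3)):
    """Return the (col, row) grid cell containing a gaze point."""
    col = min(int(point[0] * grid[0] / screen[0]), grid[0] - 1)
    row = min(int(point[1] * grid[1] / screen[1]), grid[1] - 1)
    return col, row

def cell_counts(samples, **kw):
    """Histogram of gaze samples over grid cells."""
    counts = {}
    for p in samples:
        c = cell_of(p, **kw)
        counts[c] = counts.get(c, 0) + 1
    return counts
```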

  3. Combining head pose and eye location information for gaze estimation

    NARCIS (Netherlands)

    Valenti, R.; Sebe, N.; Gevers, T.

    2012-01-01

    Head pose and eye location for gaze estimation have been separately studied in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of nonfrontal faces, eye…

  4. Application of head flexion detection for enhancing eye gaze direction classification.

    Science.gov (United States)

    Al-Rahayfeh, Amer; Faezipour, Miad

    2014-01-01

    Extensive research has been conducted on the tracking and detection of eye gaze and on head movement detection, as these technologies can serve as alternative approaches for various interfacing devices. This paper proposes enhancements to the classification of eye gaze direction. The Viola-Jones face detector is first applied to locate the eye region. A Circular Hough Transform is then used to detect the iris location, and a Support Vector Machine (SVM) is applied to classify the eye gaze direction. The accuracy of the system is enhanced by calculating the flexion angle of the head using a microcontroller and flex sensors. For rotated face images, the face can be rotated back to zero degrees using the calculated flexion angle, whereas the Viola-Jones face detector alone is limited to face images with little or no rotation. The head direction is therefore a main determinant in enhancing the control method, and combining eye gaze direction classification with head direction detection yields more reliable control signals.
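
The rotation-correction step can be sketched as a plain coordinate rotation that undoes the measured flexion angle before detection; the detector itself and the SVM stage are omitted, and the angle convention is an assumption.

```python
# Sketch of undoing head flexion before face detection: rotate image
# coordinates about a center point by the negative of the flex-sensor
# angle, so the Viola-Jones detector sees an approximately upright face.
import math

def rotate_back(point, center, angle_deg):
    """Rotate a point about `center` by -angle_deg (undo head flexion)."""
    a = math.radians(-angle_deg)
    x, y = point[0] - center[0], point[1] - center[1]
    return (center[0] + x * math.cos(a) - y * math.sin(a),
            center[1] + x * math.sin(a) + y * math.cos(a))
```

In practice the same rotation would be applied to the whole image (or the detector's search window), not to a single point; the single-point form just shows the geometry.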

  5. Influence of Eye Gaze on Spoken Word Processing: An ERP Study with Infants

    Science.gov (United States)

    Parise, Eugenio; Handl, Andrea; Palumbo, Letizia; Friederici, Angela D.

    2011-01-01

    Eye gaze is an important communicative signal, both as mutual eye contact and as referential gaze to objects. To examine whether attention to speech versus nonspeech stimuli in 4- to 5-month-olds (n = 15) varies as a function of eye gaze, event-related brain potentials were used. Faces with mutual or averted gaze were presented in combination with…

  6. The Eyes Are the Windows to the Mind: Direct Eye Gaze Triggers the Ascription of Others' Minds.

    Science.gov (United States)

    Khalid, Saara; Deska, Jason C; Hugenberg, Kurt

    2016-12-01

    Eye gaze is a potent source of social information with direct eye gaze signaling the desire to approach and averted eye gaze signaling avoidance. In the current work, we proposed that eye gaze signals whether or not to impute minds into others. Across four studies, we manipulated targets' eye gaze (i.e., direct vs. averted eye gaze) and measured explicit mind ascriptions (Study 1a, Study 1b, and Study 2) and beliefs about the likelihood of targets having mind (Study 3). In all four studies, we find novel evidence that the ascription of sophisticated humanlike minds to others is signaled by the display of direct eye gaze relative to averted eye gaze. Moreover, we provide evidence suggesting that this differential mentalization is due, at least in part, to beliefs that direct gaze targets are more likely to instigate social interaction. In short, eye contact triggers mind perception.

  7. An eye model for uncalibrated eye gaze estimation under variable head pose

    Science.gov (United States)

    Hnatow, Justin; Savakis, Andreas

    2007-04-01

    Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated by image processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
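
A hedged sketch of the geometric idea: taking the midpoint of the eye corners as the projected eyeball center and an anthropometric radius proportional to the corner-to-corner width, the horizontal gaze angle follows from the midpupil offset. The 0.5 radius ratio is an assumption for illustration, not the paper's value.

```python
# Illustrative geometric gaze-angle computation from eye corners and
# midpupil. The radius_ratio is a hypothetical anthropometric constant.
import math

def gaze_angle_deg(corner_l, corner_r, midpupil, radius_ratio=0.5):
    """Horizontal gaze angle from corner positions and midpupil."""
    width = corner_r[0] - corner_l[0]
    center_x = (corner_l[0] + corner_r[0]) / 2
    radius = radius_ratio * width               # anthropometric estimate
    offset = midpupil[0] - center_x
    offset = max(-radius, min(radius, offset))  # clamp numeric noise
    return math.degrees(math.asin(offset / radius))
```

No per-user calibration appears anywhere: everything is derived from detected landmarks plus a population-level constant, which is the point of the model.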

  8. Visual Foraging With Fingers and Eye Gaze.

    Science.gov (United States)

    Jóhannesson, Ómar I; Thornton, Ian M; Smith, Irene J; Chetverikov, Andrey; Kristjánsson, Árni

    2016-03-01

    A popular model of the function of selective visual attention involves search where a single target is to be found among distractors. For many scenarios, a more realistic model involves search for multiple targets of various types, since natural tasks typically do not involve a single target. Here we present results from a novel multiple-target foraging paradigm. We compare finger foraging where observers cancel a set of predesignated targets by tapping them, to gaze foraging where observers cancel items by fixating them for 100 ms. During finger foraging, for most observers, there was a large difference between foraging based on a single feature, where observers switch easily between target types, and foraging based on a conjunction of features where observers tended to stick to one target type. The pattern was notably different during gaze foraging where these condition differences were smaller. Two conclusions follow: (a) The fact that a sizeable number of observers (in particular during gaze foraging) had little trouble switching between different target types raises challenges for many prominent theoretical accounts of visual attention and working memory. (b) While caveats must be noted for the comparison of gaze and finger foraging, the results suggest that selection mechanisms for gaze and pointing have different operational constraints.
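
The 100 ms dwell cancellation used in the gaze-foraging condition could be sketched like this; the sample period and item layout are illustrative assumptions.

```python
# Sketch of dwell-based target cancellation: an item counts as selected
# once gaze has accumulated DWELL_MS inside its bounds. Gaze samples are
# assumed to arrive every `sample_ms` milliseconds.

DWELL_MS = 100

def dwell_select(samples, items, sample_ms=20):
    """samples: gaze points in time order; items: {name: (x0, y0, x1, y1)}.

    Returns item names in the order their accumulated dwell reaches
    DWELL_MS.
    """
    dwell = {name: 0 for name in items}
    selected = []
    for x, y in samples:
        for name, (x0, y0, x1, y1) in items.items():
            if name not in selected and x0 <= x < x1 and y0 <= y < y1:
                dwell[name] += sample_ms
                if dwell[name] >= DWELL_MS:
                    selected.append(name)
    return selected
```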

  9. Integration Model of Eye-Gaze, Voice and Manual Response in Multimodal User Interface

    Institute of Scientific and Technical Information of China (English)

    王坚

    1996-01-01

    This paper reports the utility of eye-gaze, voice and manual response in the design of a multimodal user interface. A device- and application-independent user interface model (VisualMan) for 3D object selection and manipulation was developed and validated in a prototype interface based on a 3D cube manipulation task. The multimodal inputs are integrated in the prototype interface based on the priority of modalities and interaction context. The implications of the model for virtual reality interfaces are discussed, and a virtual environment using the multimodal user interface model is proposed.

  10. Gaze Stripes: Image-Based Visualization of Eye Tracking Data.

    Science.gov (United States)

    Kurzhals, Kuno; Hlawatsch, Marcel; Heimerl, Florian; Burch, Michael; Ertl, Thomas; Weiskopf, Daniel

    2016-01-01

    We present a new visualization approach for displaying eye tracking data from multiple participants. We aim to show the spatio-temporal data of the gaze points in the context of the underlying image or video stimulus without occlusion. Our technique, denoted as gaze stripes, does not require the explicit definition of areas of interest but directly uses the image data around the gaze points, similar to thumbnails for images. A gaze stripe consists of a sequence of such gaze point images, oriented along a horizontal timeline. By displaying multiple aligned gaze stripes, it is possible to analyze and compare the viewing behavior of the participants over time. Since the analysis is carried out directly on the image data, neither expensive post-processing nor manual annotation is required. Therefore, not only can patterns and outliers in the participants' scanpaths be detected, but the context of the stimulus is available as well. Furthermore, our approach is especially well suited for dynamic stimuli due to the non-aggregated temporal mapping. Complementary views, i.e., markers, notes, screenshots, histograms, and results from automatic clustering, can be added to the visualization to display analysis results. We illustrate the usefulness of our technique on static and dynamic stimuli. Furthermore, we discuss the limitations and scalability of our approach in comparison to established visualization techniques.
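
The stripe construction can be sketched as cropping a thumbnail around each gaze point and ordering the crops along the timeline, one stripe per participant. Nested lists stand in for pixel arrays; patch size and clamping behavior are illustrative choices.

```python
# Sketch of gaze-stripe construction: for each (frame, gaze point) pair,
# cut a small patch around the gaze point; the ordered patches form one
# participant's stripe along a horizontal timeline.

def crop(image, center, size):
    """Cut a size x size patch around `center`, clamped to the image."""
    h, w = len(image), len(image[0])
    x0 = max(0, min(center[0] - size // 2, w - size))
    y0 = max(0, min(center[1] - size // 2, h - size))
    return [row[x0:x0 + size] for row in image[y0:y0 + size]]

def gaze_stripe(frames, gaze_points, size=2):
    """One patch per (frame, gaze point) pair, ordered along the timeline."""
    return [crop(f, g, size) for f, g in zip(frames, gaze_points)]
```

Stacking several participants' stripes row by row gives the aligned comparison view the abstract describes, with no area-of-interest annotation needed.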

  11. Gaze and eye-tracking solutions for psychological research.

    Science.gov (United States)

    Mele, Maria Laura; Federici, Stefano

    2012-08-01

    Eye-tracking technology is a growing field used to detect eye movements and analyze human processing of visual information for interactive and diagnostic applications. Different domains in scientific research such as neuroscience, experimental psychology, computer science and human factors can benefit from eye-tracking methods and techniques to unobtrusively investigate the quantitative evidence underlying visual processes. In order to meet the experimental requirements concerning the variety of application fields, different gaze- and eye-tracking solutions using high-speed cameras are being developed (e.g., eye-tracking glasses, head-mounted or desk-mounted systems), which are also compatible with other analysis devices such as magnetic resonance imaging. This work presents an overview of the main application fields of eye-tracking methodology in psychological research. In particular, two innovative solutions will be shown: (1) the SMI RED-M eye-tracker, a high performance portable remote eye-tracker suitable for different settings, that requires maximum mobility and flexibility; (2) a wearable mobile gaze-tracking device--the SMI eye-tracking glasses--which is suitable for real-world and virtual environment research. For each kind of technology, the functions and different possibilities of application in experimental psychology will be described by focusing on some examples of experimental tasks (i.e., visual search, reading, natural tasks, scene viewing and other information processing) and theoretical approaches (e.g., embodied cognition).

  12. Gazing-detection of human eyes based on SVM

    Institute of Scientific and Technical Information of China (English)

    LI Su-mei; ZHANG Yan-xin; CHANG Sheng-jiang; SHEN Jin-yuan

    2005-01-01

    A method for gazing-detection of human eyes using a Support Vector Machine (SVM) based on statistical learning theory (SLT) is proposed. According to the structural risk minimization criterion of SVM, the errors between sample data and model data are minimized and the upper bound of the model's prediction error is also reduced. As a result, the generalization ability of the model is much improved. The simulation results show that, when limited training samples are used, the correct recognition rate on the tested samples can be as high as 100%, which is much better than previous results obtained by other methods. The high processing speed enables the system to distinguish gazing from not-gazing in real time.
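A minimal sketch of the gazing/not-gazing decision as a binary SVM, using scikit-learn in place of whatever implementation the authors used; the synthetic four-dimensional "eye features" are stand-ins for the image features the original system would extract:

```python
# Hedged sketch of gazing/not-gazing classification with an SVM.
# The synthetic "eye features" below are assumptions standing in for
# the image features extracted by the system described above.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two well-separated synthetic clusters: gazing (1) vs. not gazing (0).
gazing = rng.normal(loc=1.0, scale=0.2, size=(50, 4))
averted = rng.normal(loc=-1.0, scale=0.2, size=(50, 4))
X = np.vstack([gazing, averted])
y = np.array([1] * 50 + [0] * 50)

# The soft margin (C) is what implements structural risk minimisation:
# it trades training error against model complexity.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y)
print(clf.predict([[1.1, 0.9, 1.0, 1.2]]))  # → [1]
```

On such cleanly separable data even a small training set generalises well, which mirrors the abstract's point about good performance with limited samples.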

  13. Eye gaze estimation from the elliptical features of one iris

    Science.gov (United States)

    Zhang, Wen; Zhang, Tai-Ning; Chang, Sheng-Jiang

    2011-04-01

    The accuracy of eye gaze estimation from image information is affected by several factors, including image resolution, the anatomical structure of the eye, and posture changes. The irregular movements of the head and eye create issues that are currently being researched to enable better use of this key technology. In this paper, we describe an effective way of estimating eye gaze from the elliptical features of one iris without using an auxiliary light source, head-fixing equipment, or multiple cameras. First, we provide a preliminary estimate of the gaze direction, and then we obtain the vectors describing the translation and rotation of the eyeball by applying a central projection method on the plane passing through the line of sight. This helps us avoid the complex computations involved in previous methods. We also disambiguate the solution based on experimental findings. Second, error correction is conducted with a back-propagation neural network trained on a sample collection of translation and rotation vectors. Extensive experimental studies are conducted to assess the efficiency and robustness of our method. Results reveal that our method performs better than a typical previous method.
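The projection geometry behind ellipse-based gaze estimation is that a circular iris viewed off-axis projects to an ellipse whose minor/major axis ratio equals the cosine of the rotation angle. The sketch below covers only that first, preliminary estimate; the paper's central-projection refinement and neural-network error correction are not reproduced:

```python
# Preliminary gaze-angle estimate from a fitted iris ellipse: a circle
# tilted by angle theta projects to an ellipse with b/a = cos(theta).
# This is only the geometric first step, not the paper's full method.
import math

def gaze_angle_from_ellipse(major_axis, minor_axis):
    """Rotation of the iris plane (radians) from fitted ellipse axes."""
    ratio = min(minor_axis / major_axis, 1.0)  # guard against fit noise
    return math.acos(ratio)

# A frontal iris projects to a circle (angle 0); rotating the iris by
# 60 degrees halves the apparent minor axis.
print(round(math.degrees(gaze_angle_from_ellipse(30.0, 30.0)), 1))  # → 0.0
print(round(math.degrees(gaze_angle_from_ellipse(30.0, 15.0)), 1))  # → 60.0
```

Note the sign ambiguity: cos is even, so the ellipse alone cannot tell left from right rotation, which is exactly the disambiguation problem the abstract mentions.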

  14. Compact near-to-eye display with integrated gaze tracker

    Science.gov (United States)

    Järvenpää, Toni; Aaltonen, Viljakaisa

    2008-04-01

    A Near-to-Eye Display (NED) offers a big-screen experience to the user anywhere, anytime, providing a way to perceive a larger image than the physical device itself. Commercially available NEDs tend to be quite bulky and uncomfortable to wear. However, by using very thin plastic light guides with diffractive structures on their surfaces, many of the known deficiencies can be notably reduced. These Exit Pupil Expander (EPE) light guides enable the thin, light, user-friendly and high-performing see-through NED that we have demonstrated. To allow efficient interaction with the displayed UI, we have also integrated a video-based gaze tracker into the NED. The narrow beam of an infrared light source is divided and expanded inside the same EPEs to produce wide collimated beams out of the EPE towards the eyes. A miniature video camera images the cornea, and the eye gaze direction is accurately calculated by locating the pupil and the glints of the infrared beams. After a simple and robust per-user calibration, the data from the highly integrated gaze tracker reflects the user's focus point in the displayed image, which can be used as an input for the NED system. Realizable applications range from eye typing to playing games, and far beyond.

  15. What Do Eye Gaze Metrics Tell Us about Motor Imagery?

    Directory of Open Access Journals (Sweden)

    Elodie Poiroux

    Full Text Available Many of the brain structures involved in performing real movements also show increased activity during imagined movements or during motor observation, and this could be the neural substrate underlying the effects of motor imagery in motor learning or motor rehabilitation. In the absence of any objective physiological method of measurement, it is currently impossible to be sure that the patient is indeed performing the task as instructed. Eye gaze recording during a motor imagery task could be a possible way to "spy" on the activity an individual is really engaged in. The aim of the present study was to compare the pattern of eye movement metrics during motor observation, visual and kinesthetic motor imagery (VI, KI), target fixation, and mental calculation. Twenty-two healthy subjects (16 females and 6 males) were required to perform tests in five conditions using imagery in the Box and Block Test tasks following the procedure described by Liepert et al. Eye movements were analysed by a non-invasive oculometric measure (SMI RED250 system). Two parameters describing gaze pattern were calculated: the index of ocular mobility (saccade duration over saccade + fixation duration) and the number of midline crossings (i.e., the number of times the subject's gaze crossed the midline of the screen when performing the different tasks). Both parameters were significantly different between visual imagery and kinesthetic imagery, visual imagery and mental calculation, and visual imagery and target fixation. For the first time we were able to show that eye movement patterns differ during VI and KI tasks. Our results suggest gaze metric parameters could be used as an objective, unobtrusive approach to assess engagement in a motor imagery task. Further studies should define how oculomotor parameters could be used as an indicator of the rehabilitation task a patient is engaged in.

  16. Eye gaze estimation from a video

    OpenAIRE

    Merad, Djamel; Mailles-Viard Metz, Stéphanie; Miguet, Serge

    2006-01-01

    International audience; Our work focuses on the interdisciplinary field of detailed analysis of behaviors exhibited by individuals during sessions of distributed collaboration. With a particular focus on ergonomics, we propose new mechanisms to be integrated into existing tools to enable increased productivity in distributed learning and working. Our technique is to record ocular movements (eye tracking) to analyze various scenarios of distributed collaboration in the context of computer-based train...

  17. Reduced eye gaze explains "fear blindness" in childhood psychopathic traits.

    Science.gov (United States)

    Dadds, Mark R; El Masry, Yasmeen; Wimalaweera, Subodha; Guastella, Adam J

    2008-04-01

    Damage to the amygdala produces deficits in the ability to recognize fear due to attentional neglect of other people's eyes. Interestingly, children with high psychopathic traits also show problems recognizing fear; however, the reasons for this are not known. This study tested whether psychopathic traits are associated with reduced attention to the eye region of other people's faces. Adolescent males (N = 100; age mean 12.4 years, SD 2.2) were stratified by psychopathic traits and assessed using a Tobii eye tracker to measure primacy, number, and duration of fixations to the eye and mouth regions of emotional faces presented via the UNSW Facial Emotion Task. High psychopathic traits predicted poor fear recognition (1.21 versus 1.35), fewer eye fixations, and fewer first foci to the eye region (1.01 versus 2.01). All indices of gaze to the eye region correlated positively with accurate recognition of fear for the high psychopathy group, especially the number of times that subjects looked at the eyes first (r = .50). Attention to the eyes is thus reduced in young people with high psychopathic traits, accounting for their problems with fear recognition, and this is consistent with amygdala dysfunction failing to promote attention to emotional salience in the environment.

  18. Fusing Eye-gaze and Speech Recognition for Tracking in an Automatic Reading Tutor

    DEFF Research Database (Denmark)

    Rasmussen, Morten Højfeldt; Tan, Zheng-Hua

    2013-01-01

    In this paper we present a novel approach for automatically tracking the reading progress using a combination of eye-gaze tracking and speech recognition. The two are fused by first generating word probabilities based on eye-gaze information and then using these probabilities to augment the language...

  19. "Gaze Leading": Initiating Simulated Joint Attention Influences Eye Movements and Choice Behavior

    Science.gov (United States)

    Bayliss, Andrew P.; Murphy, Emily; Naughtin, Claire K.; Kritikos, Ada; Schilbach, Leonhard; Becker, Stefanie I.

    2013-01-01

    Recent research in adults has made great use of the gaze cuing paradigm to understand the behavior of the follower in joint attention episodes. We implemented a gaze leading task to investigate the initiator--the other person in these triadic interactions. In a series of gaze-contingent eye-tracking studies, we show that fixation dwell time upon…

  20. Noncontact binocular eye-gaze tracking for point-of-gaze estimation in three dimensions.

    Science.gov (United States)

    Hennessey, Craig; Lawrence, Peter

    2009-03-01

    Binocular eye-gaze tracking can be used to estimate the point-of-gaze (POG) of a subject in real-world 3-D space using the vergence of the eyes. In this paper, a novel noncontact model-based technique for 3-D POG estimation is presented. The noncontact system allows people to select real-world objects in 3-D physical space using their eyes, without the need for head-mounted equipment. Remote 3-D POG estimation may be especially useful for persons with quadriplegia or Amyotrophic Lateral Sclerosis. It would also enable a user to select 3-D points in space generated by 3-D volumetric displays, with potential applications to medical imaging and telesurgery. Using a model-based POG estimation algorithm allows for free head motion and a single stage of calibration. It is shown that an average accuracy of 3.93 cm was achieved over a workspace volume of 30 x 23 x 25 cm (W x H x D) with a maximum latency of 1.5 s due to the digital filtering employed. The users were free to naturally move and reorient their heads while operating the system, within an allowable headspace of 3 cm x 9 cm x 14 cm.
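The vergence idea in this record — each eye defines a 3-D gaze ray, and the point-of-gaze lies where the two rays (nearly) intersect — can be sketched as the midpoint of the shortest segment between the rays. The eye positions and directions below are illustrative, not the paper's calibrated model:

```python
# Sketch of binocular point-of-gaze (POG) estimation from vergence:
# estimate the POG as the midpoint of the closest approach between the
# two eyes' gaze rays. Geometry here is illustrative only.
import numpy as np

def point_of_gaze(o1, d1, o2, d2):
    """Midpoint of closest approach of rays o1 + t*d1 and o2 + s*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b                 # ~0 when rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = o1 + t * d1
    p2 = o2 + s * d2
    return (p1 + p2) / 2

# Eyes 6 cm apart, both converging on a target 40 cm straight ahead.
left = np.array([-3.0, 0.0, 0.0])
right = np.array([3.0, 0.0, 0.0])
target = np.array([0.0, 0.0, 40.0])
pog = point_of_gaze(left, target - left, right, target - right)
print(np.round(pog, 2))  # ≈ (0, 0, 40)
```

In practice the per-eye ray directions come from the model-based tracker after calibration, and noise in them is what limits the depth accuracy reported above.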

  1. All eyes on me?! Social anxiety and self-directed perception of eye gaze.

    Science.gov (United States)

    Schulze, Lars; Lobmaier, Janek S; Arnold, Manuel; Renneberg, Babette

    2013-01-01

    To date, only little is known about the self-directed perception and processing of subtle gaze cues in social anxiety that might however contribute to excessive feelings of being looked at by others. Using a web-based approach, participants (n=174) were asked whether or not briefly (300 ms) presented facial expressions modulated in gaze direction (0°, 2°, 4°, 6°, 8°) and valence (angry, fearful, happy, neutral) were directed at them. The results demonstrate a positive, linear relationship between self-reported social anxiety and stronger self-directed perception of others' gaze directions, particularly for negative (angry, fearful) and neutral expressions. Furthermore, faster responding was found for gaze more clearly directed at socially anxious individuals (0°, 2°, and 4°) suggesting a tendency to avoid direct gaze. In sum, the results illustrate an altered self-directed perception of subtle gaze cues. The possibly amplifying effects of social stress on biased self-directed perception of eye gaze are discussed.

  2. Eye Gaze Patterns in Conversations: There is More to Conversational Agents Than Meets the Eyes

    NARCIS (Netherlands)

    Vertegaal, R.P.H.; Slagter, R.; Veer, van der G.C.; Nijholt, A.; Jacko, J.; Sears, A.; Beaudouin-Lafon, M.; Jacob, R.J.K.

    2001-01-01

    In multi-agent, multi-user environments, users as well as agents should have a means of establishing who is talking to whom. In this paper, we present an experiment aimed at evaluating whether gaze directional cues of users could be used for this purpose. Using an eye tracker, we measured subject gaze...

  3. Looking at Eye Gaze Processing and Its Neural Correlates in Infancy--Implications for Social Development and Autism Spectrum Disorder

    Science.gov (United States)

    Hoehl, Stefanie; Reid, Vincent M.; Parise, Eugenio; Handl, Andrea; Palumbo, Letizia; Striano, Tricia

    2009-01-01

    The importance of eye gaze as a means of communication is indisputable. However, there is debate about whether there is a dedicated neural module, which functions as an eye gaze detector and when infants are able to use eye gaze cues in a referential way. The application of neuroscience methodologies to developmental psychology has provided new…

  4. Experimental test of spatial updating models for monkey eye-head gaze shifts.

    Directory of Open Access Journals (Sweden)

    Tom J Van Grootel

    Full Text Available How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static) or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye and head positions rather than relative eye and head displacements.

  5. Cheap and Easy PIN Entering Using Eye Gaze

    Directory of Open Access Journals (Sweden)

    Kasprowski Pawel

    2014-03-01

    Full Text Available PINs are one of the most popular methods to perform simple and fast user authentication. PIN stands for Personal Identification Number, which may consist of any number of digits or even letters. Nevertheless, the 4-digit PIN is the most common and is used for instance in ATMs or cellular phones. The main advantage of the PIN is that it is easy to remember and fast to enter. There are, however, some drawbacks. One of them - addressed in this paper - is the possibility of stealing a PIN by a technique called 'shoulder surfing'. To avoid such problems, a novel method of PIN entering was proposed: instead of using a numerical keyboard, the PIN may be entered by eye gaze, which is a hands-free, easy and robust technique.
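A common way to realise gaze-based PIN entry is dwell-time selection: a digit is "typed" when the gaze rests inside its on-screen region for a minimum time. The region layout, sample rate, and 500 ms threshold below are illustrative assumptions, not the paper's parameters:

```python
# Sketch of dwell-time PIN entry: a digit is registered when the gaze
# stays inside its region for dwell_ms. All parameters are assumptions.
def enter_pin(gaze_samples, regions, dwell_ms=500, sample_ms=50):
    """gaze_samples: iterable of (x, y); regions: digit -> (x0, y0, x1, y1)."""
    needed = dwell_ms // sample_ms       # consecutive samples required
    entered, current, count = [], None, 0
    for x, y in gaze_samples:
        hit = next((d for d, (x0, y0, x1, y1) in regions.items()
                    if x0 <= x < x1 and y0 <= y < y1), None)
        count = count + 1 if hit is not None and hit == current else 1
        current = hit
        if hit is not None and count == needed:
            entered.append(hit)          # register digit once per dwell
    return "".join(entered)

regions = {"1": (0, 0, 100, 100), "2": (100, 0, 200, 100)}
# 10 samples (500 ms) on "1", then 10 samples on "2".
samples = [(50, 50)] * 10 + [(150, 50)] * 10
print(enter_pin(samples, regions))  # → "12"
```

Because the hands never touch a keypad and the eyes are hard to observe precisely, this style of entry resists the shoulder-surfing attack described above.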

  6. Coordinated Flexibility: How Initial Gaze Position Modulates Eye-Hand Coordination and Reaching

    Science.gov (United States)

    Adam, Jos J.; Buetti, Simona; Kerzel, Dirk

    2012-01-01

    Reaching to targets in space requires the coordination of eye and hand movements. In two experiments, we recorded eye and hand kinematics to examine the role of gaze position at target onset on eye-hand coordination and reaching performance. Experiment 1 showed that with eyes and hand aligned on the same peripheral start location, time lags…

  7. Follow my eyes: the gaze of politicians reflexively captures the gaze of ingroup voters.

    Directory of Open Access Journals (Sweden)

    Marco Tullio Liuzza

    Full Text Available Studies in human and non-human primates indicate that basic socio-cognitive operations are inherently linked to the power of gaze in capturing reflexively the attention of an observer. Although monkey studies indicate that the automatic tendency to follow the gaze of a conspecific is modulated by the leader-follower social status, evidence for such effects in humans is meager. Here, we used a gaze following paradigm where the directional gaze of right- or left-wing Italian political characters could influence the oculomotor behavior of ingroup or outgroup voters. We show that the gaze of Berlusconi, the right-wing leader currently dominating the Italian political landscape, potentiates and inhibits gaze following behavior in ingroup and outgroup voters, respectively. Importantly, the higher the perceived similarity in personality traits between voters and Berlusconi, the stronger the gaze interference effect. Thus, higher-order social variables such as political leadership and affiliation prepotently affect reflexive shifts of attention.

  8. Eye can see what you want: Posterior Intraparietal Sulcus encodes the object of an actor's gaze

    NARCIS (Netherlands)

    Ramsey, R.; Cross, E.S.; Hamilton, A.F.D.C.

    2011-01-01

    In a social setting, seeing Sally look at a clock means something different to seeing her gaze longingly at a slice of chocolate cake. In both cases, her eyes and face might be turned rightward, but the information conveyed is markedly different, depending on the object of her gaze. Numerous studies

  9. Do as eye say: gaze cueing and language in a real-world social interaction.

    Science.gov (United States)

    Macdonald, Ross G; Tatler, Benjamin W

    2013-03-11

    Gaze cues are important in communication. In social interactions gaze cues usually occur with spoken language, yet most previous research has used artificial paradigms without dialogue. The present study investigates the interaction between gaze and language using a real-world paradigm. Each participant followed instructions to build a series of abstract structures out of building blocks, while their eye movements were recorded. The instructor varied the specificity of the instructions (unambiguous or ambiguous) and the presence of gaze cues (present or absent) between participants. Fixations to the blocks were recorded and task performance was measured. The presence of gaze cues led to more accurate performance, more accurate visual selection of the target block and more fixations towards the instructor when ambiguous instructions were given, but not when unambiguous instructions were given. We conclude that people only utilize the gaze cues of others when the cues provide useful information.

  10. Specificity of Age-Related Differences in Eye-Gaze Following: Evidence From Social and Nonsocial Stimuli.

    Science.gov (United States)

    Slessor, Gillian; Venturini, Cristina; Bonny, Emily J; Insch, Pauline M; Rokaszewicz, Anna; Finnerty, Ailbhe N

    2016-01-01

    Eye-gaze following is a fundamental social skill, facilitating communication. The present series of studies explored adult age-related differences in this key social-cognitive ability. In Study 1 younger and older adult participants completed a cueing task in which eye-gaze cues were predictive or non-predictive of target location. Another eye-gaze cueing task, assessing the influence of congruent and incongruent eye-gaze cues relative to trials which provided no cue to target location, was administered in Study 2. Finally, in Study 3 the eye-gaze cue was replaced by an arrow. In Study 1 older adults showed less evidence of gaze following than younger participants when required to strategically follow predictive eye-gaze cues and when making automatic shifts of attention to non-predictive eye-gaze cues. Findings from Study 2 suggested that, unlike younger adults, older participants showed no facilitation effect and thus did not follow congruent eye-gaze cues. They also had significantly weaker attentional costs than their younger counterparts. These age-related differences were not found in the non-social arrow cueing task. Taken together these findings suggest older adults do not use eye-gaze cues to engage in joint attention, and have specific social difficulties decoding critical information from the eye region.

  11. Predictive gaze cues and personality judgments: Should eye trust you?

    Science.gov (United States)

    Bayliss, Andrew P; Tipper, Steven P

    2006-06-01

    Although following another person's gaze is essential in fluent social interactions, the reflexive nature of this gaze-cuing effect means that gaze can be used to deceive. In a gaze-cuing procedure, participants were presented with several faces that looked to the left or right. Some faces always looked to the target (predictive-valid), some never looked to the target (predictive-invalid), and others looked toward and away from the target in equal proportions (nonpredictive). The standard gaze-cuing effects appeared to be unaffected by these contingencies. Nevertheless, participants tended to choose the predictive-valid faces as appearing more trustworthy than the predictive-invalid faces. This effect was negatively related to scores on a scale assessing autistic-like traits. Further, we present tentative evidence that the "deceptive" faces were encoded more strongly in memory than the "cooperative" faces. These data demonstrate the important interactions among attention, gaze perception, facial identity recognition, and personality judgments.

  12. Speaking and Listening with the Eyes: Gaze Signaling during Dyadic Interactions.

    Science.gov (United States)

    Ho, Simon; Foulsham, Tom; Kingstone, Alan

    2015-01-01

    Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest.
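The cross-correlational analysis described in this record can be sketched with two binary time series (speaker talking, speaker gazing at the listener) whose correlation peak reveals the lag at which gaze and speech align. The toy signals below are illustrative, not the study's data:

```python
# Sketch of cross-correlating gaze and speech signals to find the lag
# at which they align. The toy signals are illustrative assumptions.
import numpy as np

def cross_correlation(a, b):
    """Normalised cross-correlation of two equal-length signals."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.correlate(a, b, mode="full")   # lags -(n-1) .. (n-1)

n = 100
speech = np.zeros(n)
speech[20:60] = 1          # the speaker talks during samples 20-59
gaze = np.zeros(n)
gaze[25:65] = 1            # gaze at listener follows speech by 5 samples

corr = cross_correlation(speech, gaze)
lags = np.arange(-(n - 1), n)
print(lags[np.argmax(corr)])  # → -5: gaze trails speech by 5 samples
```

Running this per dyad and inspecting the peak lag is, in spirit, how a temporally sensitive analysis recovers turn-taking structure that trial-aggregated measures mask.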

  13. Speaking and Listening with the Eyes: Gaze Signaling during Dyadic Interactions.

    Directory of Open Access Journals (Sweden)

    Simon Ho

    Full Text Available Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest.

  14. "Avoiding or approaching eyes"? Introversion/extraversion affects the gaze-cueing effect.

    Science.gov (United States)

    Ponari, Marta; Trojano, Luigi; Grossi, Dario; Conson, Massimiliano

    2013-08-01

    We investigated whether the extra-/introversion personality dimension can influence processing of others' eye gaze direction and emotional facial expression during a target detection task. On the basis of previous evidence showing that self-reported trait anxiety can affect gaze-cueing with emotional faces, we also verified whether trait anxiety can modulate the influence of intro-/extraversion on behavioral performance. Fearful, happy, angry or neutral faces, with either direct or averted gaze, were presented before the target appeared in spatial locations congruent or incongruent with the stimuli's eye gaze direction. Results showed a significant influence of the intro-/extraversion dimension on the gaze-cueing effect for angry, happy, and neutral faces with averted gaze. Introverts did not show the gaze congruency effect when viewing angry expressions, but did so with happy and neutral faces; extraverts showed the opposite pattern. Importantly, the influence of intro-/extraversion on gaze-cueing was not mediated by trait anxiety. These findings demonstrate that personality differences can shape the processing of interactions between relevant social signals.

  15. Visual Gaze Estimation by Joint Head and Eye Information

    NARCIS (Netherlands)

    Valenti, R.; Lablack, A.; Sebe, N.; Djeraba, C.; Gevers, T.

    2010-01-01

    In this paper, we present an unconstrained visual gaze estimation system. The proposed method extracts the visual field of view of a person looking at a target scene in order to estimate the approximate location of interest (visual gaze). The novelty of the system is the joint use of head pose and eye information...

  16. Simple gaze-contingent cues guide eye movements in a realistic driving simulator

    Science.gov (United States)

    Pomarjanschi, Laura; Dorr, Michael; Bex, Peter J.; Barth, Erhardt

    2013-03-01

    Looking at the right place at the right time is a critical component of driving skill. Therefore, gaze guidance has the potential to become a valuable driving assistance system. In previous work, we have already shown that complex gaze-contingent stimuli can guide attention and reduce the number of accidents in a simple driving simulator. We here set out to investigate whether cues that are simple enough to be implemented in a real car can also capture gaze during a more realistic driving task in a high-fidelity driving simulator. We used a state-of-the-art, wide-field-of-view driving simulator with an integrated eye tracker. Gaze-contingent warnings were implemented using two arrays of light-emitting diodes horizontally fitted below and above the simulated windshield. Thirteen volunteering subjects drove along predetermined routes in a simulated environment populated with autonomous traffic. Warnings were triggered during the approach to half of the intersections, cueing either towards the right or to the left. The remaining intersections were not cued, and served as controls. The analysis of the recorded gaze data revealed that the gaze-contingent cues did indeed have a gaze guiding effect, triggering a significant shift in gaze position towards the highlighted direction. This gaze shift was not accompanied by changes in driving behaviour, suggesting that the cues do not interfere with the driving task itself.

  17. Quantifying naturalistic social gaze in fragile X syndrome using a novel eye tracking paradigm.

    Science.gov (United States)

    Hall, Scott S; Frank, Michael C; Pusiol, Guido T; Farzin, Faraz; Lightbody, Amy A; Reiss, Allan L

    2015-10-01

    A hallmark behavioral feature of fragile X syndrome (FXS) is the propensity for individuals with the syndrome to exhibit significant impairments in social gaze during interactions with others. However, previous studies employing eye tracking methodology to investigate this phenomenon have been limited to presenting static photographs or videos of social interactions rather than employing a real-life social partner. To improve upon previous studies, we used a customized eye tracking configuration to quantify the social gaze of 51 individuals with FXS and 19 controls, aged 14-28 years, while they engaged in a naturalistic face-to-face social interaction with a female experimenter. Importantly, our control group was matched to the FXS group on age, developmental functioning, and degree of autistic symptomatology. Results showed that participants with FXS spent significantly less time looking at the face and had shorter episodes (and longer inter-episodes) of social gaze than controls. Regression analyses indicated that communication ability predicted higher levels of social gaze in individuals with FXS, but not in controls. Conversely, degree of autistic symptoms predicted lower levels of social gaze in controls, but not in individuals with FXS. Taken together, these data indicate that naturalistic social gaze in FXS can be measured objectively using existing eye tracking technology during face-to-face social interactions. Given that impairments in social gaze were specific to FXS, this paradigm could be employed as an objective and ecologically valid outcome measure in ongoing Phase II/Phase III clinical trials of FXS-specific interventions.

  18. Evaluating gaze-based interface tools to facilitate point-and-select tasks with small targets

    DEFF Research Database (Denmark)

    Skovsgaard, Henrik; Mateo, Julio C.; Hansen, John Paulin

    2011-01-01

    Gaze interaction affords hands-free control of computers. Pointing to and selecting small targets using gaze alone is difficult because of the limited accuracy of gaze pointing. This is the first experimental comparison of gaze-based interface tools for small-target (e.g. ... when using zoom, but total pointing times were shorter using zoom. Furthermore, participants perceived magnification as more fatiguing than zoom. The higher accuracy of magnification makes it preferable when interacting with small targets. Our findings may guide the development of interface tools...

  19. Real-time inference of word relevance from electroencephalogram and eye gaze

    Science.gov (United States)

    Wenzel, M. A.; Bogojeski, M.; Blankertz, B.

    2017-10-01

    Objective. Brain-computer interfaces can potentially map the subjective relevance of the visual surroundings, based on neural activity and eye movements, in order to infer the interest of a person in real time. Approach. Readers looked for words belonging to one out of five semantic categories, while a stream of words passed at different locations on the screen. It was estimated in real time which words, and thus which semantic category, interested each reader based on the electroencephalogram (EEG) and the eye gaze. Main results. Words that were subjectively relevant could be decoded online from the signals. The estimation resulted in an average rank of 1.62 for the category of interest among the five categories after a hundred words had been read. Significance. It was demonstrated that the interest of a reader can be inferred online from EEG and eye tracking signals, which can potentially be used in novel types of adaptive software that enrich the interaction by adding implicit information about the interest of the user to the explicit interaction. The study is characterised by the following novelties. Interpretation with respect to the word meaning was necessary, in contrast to the usual practice in brain-computer interfacing where stimulus recognition is sufficient. The typical counting task was avoided because it would not be sensible for implicit relevance detection. Several words were displayed at the same time, in contrast to the typical sequences of single stimuli. Neural activity was related to the words via eye tracking, and the words were scanned without restrictions on the eye movements.
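
    The average-rank figure reported above can be computed as follows. This is a generic sketch of the metric (rank of the true category among all candidate categories, averaged over trials), not the authors' code; all names and the toy scores are illustrative.

```python
import numpy as np

def average_rank(scores, true_idx):
    """Mean rank (1 = best) of the true category across trials.

    scores: (n_trials, n_categories) array of relevance scores,
    higher = more relevant. true_idx: true category per trial.
    """
    ranks = []
    for row, t in zip(scores, true_idx):
        order = np.argsort(-row)                      # indices by descending score
        ranks.append(int(np.where(order == t)[0][0]) + 1)
    return float(np.mean(ranks))

# Toy example: 3 trials, 5 categories, true category index = 2
scores = np.array([[0.1, 0.2, 0.9, 0.0, 0.3],   # true category ranked 1st
                   [0.4, 0.5, 0.3, 0.2, 0.1],   # ranked 3rd
                   [0.2, 0.1, 0.6, 0.7, 0.0]])  # ranked 2nd
print(average_rank(scores, [2, 2, 2]))  # → 2.0
```

    With five categories, chance level for this metric is 3.0, so the reported 1.62 indicates decoding well above chance.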

  20. A real-time gaze position estimation method based on a 3-D eye model.

    Science.gov (United States)

    Park, Kang Ryoung

    2007-02-01

    This paper proposes a new gaze-detection method based on a 3-D eye position and the gaze vector of the human eyeball. Seven new developments compared to previous works are presented. First, a method of using three camera systems, i.e., one wide-view camera and two narrow-view cameras, is proposed. The narrow-view cameras use autozooming, focusing, panning, and tilting procedures (based on the detected 3-D eye feature position) for gaze detection. This allows for natural head and eye movement by users. Second, previous conventional gaze-detection research used one or multiple illuminators without considering the specular reflection (SR) problems they cause for users who wear glasses. To solve this problem, a method based on dual illuminators is proposed in this paper. Third, the proposed method does not require user-dependent calibration, so all procedures for detecting gaze position operate automatically without human intervention. Fourth, the intrinsic characteristics of the human eye, such as the disparity between the pupillary and the visual axes, are considered in order to obtain accurate gaze positions. Fifth, all the coordinates obtained by the left and right narrow-view cameras, as well as the wide-view camera coordinates and the monitor coordinates, are unified. This simplifies the complex 3-D conversion calculation and allows for calculation of the 3-D feature position and gaze position on the monitor. Sixth, to upgrade eye-detection performance when using a wide-view camera, an adaptive-selection method is used. This involves an IR-LED on/off scheme, an AdaBoost classifier, and a principal component analysis method based on the number of SR elements. Finally, the proposed method uses an eigenvector matrix (instead of simply averaging six gaze vectors) in order to obtain a more accurate final gaze vector that can compensate for noise. Experimental results show that the root mean square error of
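
    The final step above, combining several gaze vectors through an eigenvector computation rather than a simple average, can be sketched as taking the dominant eigenvector of the vectors' scatter matrix. This is an illustrative reconstruction of that idea, not the paper's published algorithm.

```python
import numpy as np

def eigen_mean_direction(vectors):
    """Combine unit gaze vectors via the dominant eigenvector of the
    scatter matrix sum(v v^T), rather than a simple arithmetic mean.
    Illustrative sketch only; names are not from the paper."""
    V = np.asarray(vectors, dtype=float)
    V /= np.linalg.norm(V, axis=1, keepdims=True)   # normalise each vector
    S = V.T @ V                                      # 3x3 scatter matrix
    w, U = np.linalg.eigh(S)                         # eigenvalues ascending
    d = U[:, -1]                                     # dominant eigenvector
    if np.sum(V @ d) < 0:                            # orient with the inputs
        d = -d
    return d

# Four noisy gaze vectors pointing roughly along +z
d = eigen_mean_direction([[0.02, 0.00, 1.0],
                          [0.00, 0.01, 1.0],
                          [-0.01, 0.02, 1.0],
                          [0.01, -0.02, 1.0]])
# d is a unit vector close to (0, 0, 1)
```

    Unlike plain averaging, this estimate is insensitive to the sign of individual vectors and down-weights isolated noisy directions.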

  1. The duality of gaze: Eyes extract and signal social information during sustained cooperative and competitive dyadic gaze

    Directory of Open Access Journals (Sweden)

    Michelle Jarick

    2015-09-01

    In contrast to nonhuman primate eyes, which have a dark sclera surrounding a dark iris, human eyes have a white sclera that surrounds a dark iris. This high-contrast morphology allows humans to determine quickly and easily where others are looking and infer what they are attending to. In recent years an enormous body of work has used photos and schematic images of faces to study these aspects of social attention, e.g., the selection of the eyes of others and the shift of attention to where those eyes are directed. However, evolutionary theory holds that humans did not develop a high-contrast morphology simply to use the eyes of others as attentional cues; rather, they sacrificed camouflage for communication, that is, to signal their thoughts and intentions to others. In the present study we demonstrate the importance of this by taking as our starting point the hypothesis that a cornerstone of nonverbal communication is the eye contact between individuals and the time that it is held. In a single simple study we show experimentally that the effect of eye contact can be quickly and profoundly altered merely by having participants, who had never met before, play a game in a cooperative or competitive manner. After the game, participants were asked to make eye contact for a prolonged period of time (10 minutes). Those who had played the game cooperatively found this terribly difficult to do, repeatedly talking and breaking gaze. In contrast, those who had played the game competitively were able to stare quietly at each other for a sustained period. Collectively these data demonstrate that when looking at the eyes of a real person one both acquires and signals information to the other person. This duality of gaze is critical to nonverbal communication, with the nature of that communication shaped by the relationship between individuals, e.g., cooperative or competitive.

  2. Inspection time as mental speed in mildly mentally retarded adults: analysis of eye gaze, eye movement, and orientation.

    Science.gov (United States)

    Nettelbeck, T; Robson, L; Walwyn, T; Downing, A; Jones, N

    1986-07-01

    The effect of eye movements away from a target on accuracy of visual discrimination was examined. In Experiment 1 inspection time was measured for 10 mildly mentally retarded and 10 nonretarded adults under two conditions, with each trial initiated by the subject or under experimental control. Retarded subjects did not gain any advantage from controlling trial onset. Video records of eye movements revealed that retarded subjects glanced off-target more than did nonretarded controls, but this was not sufficient to explain appreciably slower inspection time of the retarded group. Experiment 2 supported this conclusion; the same subjects completed a letter-discrimination task with direction of gaze monitored automatically. Although retarded subjects' eye gaze was more scattered early during a trial, gaze was appropriately directed by the time that the target appeared. Results from both experiments supported the hypothesis that speed of central, perceptual processing is slower among retarded persons, over and above the influence of distractibility. Results from three experiments in Part II were consistent with this interpretation. Experiment 3 was designed to eradicate trials among retarded subjects in which gaze was not properly directed, but results showed that too few such events occurred to influence accuracy. Experiment 4 demonstrated that the preparatory procedure in the previous studies resulted in efficient eye gaze among retarded subjects. Experiment 5 confirmed that lower discriminative accuracy among 10 retarded adults (compared with 10 nonretarded controls) was not due to less-efficient orientation prior to discrimination.

  3. Single gaze gestures

    DEFF Research Database (Denmark)

    Møllenbach, Emilie; Lilholm, Martin; Gail, Alastair

    2010-01-01

    This paper examines gaze gestures and their applicability as a generic selection method for gaze-only controlled interfaces. The method explored here is the Single Gaze Gesture (SGG), i.e. gestures consisting of a single point-to-point eye movement. Horizontal and vertical, long and short SGGs were evaluated on two eye tracking devices (Tobii/QuickGlance (QG)). The main findings show that there is a significant difference in selection times between long and short SGGs, between vertical and horizontal selections, as well as between the different tracking systems.

  4. Photographic but not line-drawn faces show early perceptual neural sensitivity to eye gaze direction

    Directory of Open Access Journals (Sweden)

    Alejandra Rossi

    2015-04-01

    Our brains readily decode facial movements and changes in social attention, reflected in earlier and larger N170 event-related potentials (ERPs) to viewing gaze aversions vs. direct gaze in real faces (Puce et al., 2000). In contrast, gaze aversions in line-drawn faces do not produce these N170 differences (Rossi et al., 2014), suggesting that physical stimulus properties or experimental context may drive these effects. Here we investigated the role of stimulus-induced context on neurophysiological responses to dynamic gaze. Sixteen healthy adults viewed line-drawn and real faces, with dynamic eye aversion and direct gaze transitions, and control stimuli (scrambled arrays and checkerboards) while continuous electroencephalographic (EEG) activity was recorded. EEG data from 2 temporo-occipital clusters of 9 electrodes in each hemisphere where N170 activity is known to be maximal were selected for analysis. N170 peak amplitude and latency, and temporal dynamics from event-related spectral perturbations (ERSPs) were measured in 16 healthy subjects. Real faces generated larger N170s for averted vs. direct gaze motion; however, N170s to real and direct gaze were as large as those to respective controls. N170 amplitude did not differ across line-drawn gaze changes. Overall, bilateral mean gamma power changes for faces relative to control stimuli occurred between 150-350 ms, potentially reflecting signal detection of facial motion. Our data indicate that experimental context does not drive N170 differences to viewed gaze changes. Low-level stimulus properties, such as the high sclera/iris contrast change in real eyes, likely drive the N170 changes to viewed aversive movements.

  5. Photographic but not line-drawn faces show early perceptual neural sensitivity to eye gaze direction.

    Science.gov (United States)

    Rossi, Alejandra; Parada, Francisco J; Latinus, Marianne; Puce, Aina

    2015-01-01

    Our brains readily decode facial movements and changes in social attention, reflected in earlier and larger N170 event-related potentials (ERPs) to viewing gaze aversions vs. direct gaze in real faces (Puce et al., 2000). In contrast, gaze aversions in line-drawn faces do not produce these N170 differences (Rossi et al., 2014), suggesting that physical stimulus properties or experimental context may drive these effects. Here we investigated the role of stimulus-induced context on neurophysiological responses to dynamic gaze. Sixteen healthy adults viewed line-drawn and real faces, with dynamic eye aversion and direct gaze transitions, and control stimuli (scrambled arrays and checkerboards) while continuous electroencephalographic (EEG) activity was recorded. EEG data from 2 temporo-occipital clusters of 9 electrodes in each hemisphere where N170 activity is known to be maximal were selected for analysis. N170 peak amplitude and latency, and temporal dynamics from Event-Related Spectral Perturbations (ERSPs) were measured in 16 healthy subjects. Real faces generated larger N170s for averted vs. direct gaze motion, however, N170s to real and direct gaze were as large as those to respective controls. N170 amplitude did not differ across line-drawn gaze changes. Overall, bilateral mean gamma power changes for faces relative to control stimuli occurred between 150-350 ms, potentially reflecting signal detection of facial motion. Our data indicate that experimental context does not drive N170 differences to viewed gaze changes. Low-level stimulus properties, such as the high sclera/iris contrast change in real eyes likely drive the N170 changes to viewed aversive movements.

  6. Gazes

    DEFF Research Database (Denmark)

    Khawaja, Iram

    This article is based on fieldwork with young Muslims in Copenhagen and focuses on how they navigate the visibility and embodiment of Muslim otherness in their everyday life. The concept of panoptical gazes is developed in regard to the young Muslims' narratives of being looked upon, and the different strategies of positioning they utilize are studied and identified. The first strategy is to confront stereotyping prejudices and gazes, thereby attempting to position oneself in a counteracting way. The second is to transform and try to normalise external characteristics, such as clothing and other symbols that indicate Muslimness. A third strategy is to play along and allow the prejudice in question to remain unchallenged. A fourth is to join and participate in religious communities and develop an alternate sense of belonging to a wider community of Muslims.

  7. POINTER CONTROL WITH GAZE TRACKING USING THE HAAR CLASSIFIER METHOD AS A PRESENTATION AID (EYE POINTER)

    Directory of Open Access Journals (Sweden)

    Edi Satriyanto

    2013-03-01

    The application built in this research is a pointer controller driven by eye movement (eye pointer). It is an image processing application in which users control the computer pointer simply by moving their eyes, and it is expected to assist with the use of a manual pointer during presentations. Because this research uses gaze tracking to follow eye movement, it is important to detect the center of the pupil in the eye images coming from the input camera. Gaze is tracked using a three-step hierarchical system: first motion detection, then object (eye) detection, and finally pupil detection. Motion is detected by dynamically comparing the previous pixel with the current pixel at time t. The eye region is detected using a Haar-like feature classifier; the system must first be trained to obtain the cascade classifier that allows it to detect the object in each frame captured by the camera. The center of the pupil is detected using integral projection. The final step maps the position of the pupil center to the monitor screen using a comparison scale between the eye resolution and the screen resolution. When detecting the eye gaze on the screen, information about the distance and angle between the eyes and the screen is necessary to compute the pointing coordinates. In this research, the accuracy of the application is 80% for eye movements lasting 1-2 seconds, the optimum mean value is between 5 and 10, and the optimum distance between the user and the webcam is 40 cm.
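
    The final mapping step described above, scaling the pupil-center position inside the detected eye region to screen coordinates by a comparison scale, might look like this minimal sketch. Function and variable names are assumptions for illustration, not from the paper.

```python
def pupil_to_screen(pupil_xy, eye_box, screen_wh):
    """Map a pupil-center position inside the detected eye region to
    screen coordinates by a simple proportional (comparison) scale.

    pupil_xy:  (px, py) pupil center in image coordinates
    eye_box:   (ex, ey, ew, eh) detected eye region in image coordinates
    screen_wh: (sw, sh) screen resolution in pixels
    """
    (px, py), (ex, ey, ew, eh), (sw, sh) = pupil_xy, eye_box, screen_wh
    fx = (px - ex) / ew          # fraction of the eye box, horizontally
    fy = (py - ey) / eh          # fraction of the eye box, vertically
    return int(fx * sw), int(fy * sh)

# Pupil at the center of a 40x20 eye box maps to the screen center
print(pupil_to_screen((30, 20), (10, 10, 40, 20), (1920, 1080)))  # → (960, 540)
```

    In practice such a linear mapping is usually refined with a per-user calibration, since eye-box size and camera distance vary between sessions.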

  8. Gaze estimation for off-angle iris recognition based on the biometric eye model

    Science.gov (United States)

    Karakaya, Mahmut; Barstow, Del; Santos-Villalobos, Hector; Thompson, Joseph; Bolme, David; Boehnen, Christopher

    2013-05-01

    Iris recognition is among the highest accuracy biometrics. However, its accuracy relies on controlled high quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ORNL biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve the accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from elliptical features of an iris image. Typically, additional information such as well-controlled light sources, head mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up-table generated by using our biologically inspired biometric eye model and find the closest feature point in the look-up-table to estimate the gaze. Based on the results from real images, the proposed method shows effectiveness in gaze estimation accuracy for our biometric eye model with an average error of approximately 3.5 degrees over a 50 degree range.
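
    The look-up-table matching step can be sketched as a nearest-neighbour search over model-generated entries. The feature layout below (ellipse minor/major axis ratio plus orientation) is an assumption for illustration only, not the authors' exact feature set.

```python
import numpy as np

def estimate_gaze_from_lut(features, lut_features, lut_angles):
    """Return the gaze angle of the look-up-table entry whose feature
    vector is closest (Euclidean distance) to the observed features."""
    d = np.linalg.norm(lut_features - np.asarray(features, dtype=float), axis=1)
    return lut_angles[int(np.argmin(d))]

# Toy LUT: feature = (ellipse axis ratio, ellipse orientation in radians).
# The axis ratio of a circle viewed off-axis falls roughly as cos(angle).
lut_features = np.array([[1.00, 0.0],   # frontal gaze
                         [0.87, 0.0],   # ~30 deg off-angle
                         [0.64, 0.0]])  # ~50 deg off-angle
lut_angles = np.array([0.0, 30.0, 50.0])
print(estimate_gaze_from_lut([0.85, 0.0], lut_features, lut_angles))  # → 30.0
```

    A dense LUT generated from the biometric eye model would replace this three-entry toy table; the search itself stays the same.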

  9. Gaze Estimation for Off-Angle Iris Recognition Based on the Biometric Eye Model

    Energy Technology Data Exchange (ETDEWEB)

    Karakaya, Mahmut [ORNL; Barstow, Del R [ORNL; Santos-Villalobos, Hector J [ORNL; Thompson, Joseph W [ORNL; Bolme, David S [ORNL; Boehnen, Chris Bensing [ORNL

    2013-01-01

    Iris recognition is among the highest accuracy biometrics. However, its accuracy relies on controlled high quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ANONYMIZED biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve the accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from elliptical features of an iris image. Typically, additional information such as well-controlled light sources, head mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up-table generated by using our biologically inspired biometric eye model and find the closest feature point in the look-up-table to estimate the gaze. Based on the results from real images, the proposed method shows effectiveness in gaze estimation accuracy for our biometric eye model with an average error of approximately 3.5 degrees over a 50 degree range.

  10. Teachers' Experiences of Using Eye Gaze-Controlled Computers for Pupils with Severe Motor Impairments and without Speech

    Science.gov (United States)

    Rytterström, Patrik; Borgestig, Maria; Hemmingsson, Helena

    2016-01-01

    The purpose of this study is to explore teachers' experiences of using eye gaze-controlled computers with pupils with severe disabilities. Technology to control a computer with eye gaze is a fast growing field and has promising implications for people with severe disabilities. This is a new assistive technology and a new learning situation for…

  11. The EyeHarp: A Gaze-Controlled Digital Musical Instrument

    OpenAIRE

    Vamvakousis, Zacharias; Ramirez, Rafael

    2016-01-01

    We present and evaluate the EyeHarp, a new gaze-controlled Digital Musical Instrument, which aims to enable people with severe motor disabilities to learn, perform, and compose music using only their gaze as control mechanism. It consists of (1) a step-sequencer layer, which serves for constructing chords/arpeggios, and (2) a melody layer, for playing melodies and changing the chords/arpeggios. We have conducted a pilot evaluation of the EyeHarp involving 39 participants with no disabilities ...

  12. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain-machine interfaces.

    Science.gov (United States)

    Abbott, W W; Faisal, A A

    2012-08-01

    Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with a 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits s(-1), more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs our system yields effective real-time closed loop control of devices (10 ms latency), after just ten minutes of training, which we demonstrate through a novel BMI benchmark--the control of the video arcade game 'Pong'.
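
    Information-throughput figures of this kind are typically derived from the bits conveyed per selection (log2 of the number of distinguishable targets, scaled by accuracy) divided by the time each selection takes. The following is a generic sketch of that calculation, not the authors' exact protocol or parameters.

```python
import math

def throughput_bits_per_s(n_targets, selection_time_s, accuracy=1.0):
    """Rough information throughput of a selection interface.

    n_targets:        number of distinguishable targets
    selection_time_s: time per selection, in seconds
    accuracy:         fraction of correct selections (1.0 = perfect)
    """
    return accuracy * math.log2(n_targets) / selection_time_s

# 16 targets selected perfectly, one selection every 2 s
print(throughput_bits_per_s(16, 2.0))  # → 2.0
```

    By this kind of measure, fast gaze pointing over many targets yields tens of bits per second, which is why the abstract's 43 bits s-1 far exceeds typical BMI rates of a few bits per second.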

  13. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain-machine interfaces

    Science.gov (United States)

    Abbott, W. W.; Faisal, A. A.

    2012-08-01

    Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with a 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits s-1, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs our system yields effective real-time closed loop control of devices (10 ms latency), after just ten minutes of training, which we demonstrate through a novel BMI benchmark—the control of the video arcade game ‘Pong’.

  14. Gazes

    DEFF Research Database (Denmark)

    Khawaja, Iram

    The concept of panoptical gazes is developed in regard to the young Muslims' narratives of being looked upon, and the different strategies of positioning they utilize are studied and identified. The first strategy is to confront stereotyping prejudices and gazes, thereby attempting to position oneself in a counteracting way. The second is to transform and try to normalise external characteristics, such as clothing and other symbols that indicate Muslimness. A third strategy is to play along and allow the prejudice in question to remain unchallenged. A fourth is to join and participate in religious communities and develop an alternate sense of belonging to a wider community of Muslims. The concept of panoptical gazes is related to narratives on belonging and the embodied experience of home, which points towards new avenues of understanding the process of othering and the possibilities of negotiating the position as Other.

  15. EDITORIAL: Special section on gaze-independent brain-computer interfaces Special section on gaze-independent brain-computer interfaces

    Science.gov (United States)

    Treder, Matthias S.

    2012-08-01

    Restoring the ability to communicate and interact with the environment in patients with severe motor disabilities is a vision that has been the main catalyst of early brain-computer interface (BCI) research. The past decade has brought a diversification of the field. BCIs have been examined as a tool for motor rehabilitation and their benefit in non-medical applications such as mental-state monitoring for improved human-computer interaction and gaming has been confirmed. At the same time, the weaknesses of some approaches have been pointed out. One of these weaknesses is gaze-dependence, that is, the requirement that the user of a BCI system voluntarily directs his or her eye gaze towards a visual target in order to efficiently operate a BCI. This not only contradicts the main doctrine of BCI research, namely that BCIs should be independent of muscle activity, but it can also limit its real-world applicability both in clinical and non-medical settings. It is only in a scenario devoid of any motor activity that a BCI solution is without alternative. Gaze-dependencies have surfaced at two different points in the BCI loop. Firstly, a BCI that relies on visual stimulation may require users to fixate on the target location. Secondly, feedback is often presented visually, which implies that the user may have to move his or her eyes in order to perceive the feedback. This special section was borne out of a BCI workshop on gaze-independent BCIs held at the 2011 Society for Applied Neurosciences (SAN) Conference and has then been extended with additional contributions from other research groups. It compiles experimental and methodological work that aims toward gaze-independent communication and mental-state monitoring. Riccio et al review the current state-of-the-art in research on gaze-independent BCIs [1]. Van der Waal et al present a tactile speller that builds on the stimulation of the fingers of the right and left hand [2]. Höhne et al analyze the ergonomic aspects

  16. The More You Look the More You Get: Intention-Based Interface Using Gaze-Tracking.

    Science.gov (United States)

    Milekic, Slavko

    Only a decade ago eye- and gaze-tracking technologies using cumbersome and expensive equipment were confined to university research labs. However, rapid technological advancements (increased processor speed, advanced digital video processing) and mass production have both lowered the cost and dramatically increased the efficacy of eye- and…

  17. Mutual Disambiguation of Eye Gaze and Speech for Sight Translation and Reading

    DEFF Research Database (Denmark)

    Kulkarni, Rucha; Jain, Kritika; Bansal, Himanshu

    Researchers are proposing interactive machine translation as a potential method to make the language translation process more efficient and usable. The introduction of different modalities like eye gaze and speech is being explored to add to the interactivity of the language translation system. Unfortunately...

  18. Tell-Tale Eyes: Children's Attribution of Gaze Aversion as a Lying Cue

    Science.gov (United States)

    Einav, Shiri; Hood, Bruce M.

    2008-01-01

    This study examined whether the well-documented adult tendency to perceive gaze aversion as a lying cue is also evident in children. In Experiment 1, 6-year-olds, 9-year-olds, and adults were shown video vignettes of speakers who either maintained or avoided eye contact while answering an interviewer's questions. Participants evaluated whether the…

  19. Mutual Disambiguation of Eye Gaze and Speech for Sight Translation and Reading

    DEFF Research Database (Denmark)

    Kulkarni, Rucha; Jain, Kritika; Bansal, Himanshu

    2013-01-01

    Researchers are proposing interactive machine translation as a potential method to make the language translation process more efficient and usable. The introduction of different modalities like eye gaze and speech is being explored to add to the interactivity of the language translation system. Unfortunately...

  20. Towards a human eye behavior model by applying Data Mining Techniques on Gaze Information from IEC

    CERN Document Server

    Pallez, Denis; Baccino, Thierry

    2008-01-01

    In this paper, we first present what Interactive Evolutionary Computation (IEC) is and briefly describe how we have combined this artificial intelligence technique with an eye-tracker for visual optimization. Next, in order to correctly parameterize our application, we present results from applying data mining techniques to gaze information coming from experiments conducted on about 80 human individuals.

  1. A free geometry model-independent neural eye-gaze tracking system

    Directory of Open Access Journals (Sweden)

    Gneo Massimo

    2012-11-01

    Background: Eye Gaze Tracking Systems (EGTSs) estimate the Point Of Gaze (POG) of a user. In diagnostic applications EGTSs are used to study oculomotor characteristics and abnormalities, whereas in interactive applications EGTSs are proposed as input devices for human computer interfaces (HCI), e.g. to move a cursor on the screen when mouse control is not possible, such as in the case of assistive devices for people suffering from locked-in syndrome. If the user's head remains still and the cornea rotates around its fixed centre, the pupil follows the eye in the images captured from one or more cameras, whereas the outer corneal reflection generated by an IR light source, i.e. the glint, can be assumed to be a fixed reference point. According to the so-called pupil centre corneal reflection (PCCR) method, the POG can thus be estimated from the pupil-glint vector. Methods: A new model-independent EGTS based on the PCCR is proposed. The mapping function, based on artificial neural networks, avoids any specific model assumption or approximation for either the user's eye physiology or the initial system setup, admitting free-geometry positioning of the user and the system components. The robustness of the proposed EGTS is proven by assessing its accuracy on real data coming from: (i) different healthy users; (ii) different geometric settings of the camera and the light sources; (iii) different protocols based on the observation of points on a calibration grid and halfway points of a test grid. Results: The achieved accuracy is approximately 0.49°, 0.41°, and 0.62° for the horizontal, vertical and radial error of the POG, respectively. Conclusions: The results prove the validity of the proposed approach, as the proposed system performs better than EGTSs designed for HCI which, even if equipped with superior hardware, show accuracy values in the range 0.6°-1°.
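
    The classic model-based PCCR calibration that this record's neural-network mapping generalises fits a polynomial from pupil-glint vectors to screen points. Below is a minimal least-squares sketch of that baseline under a second-order polynomial assumption; the paper itself replaces this fixed form with an artificial neural network, and all names here are illustrative.

```python
import numpy as np

def _design(pg):
    """Second-order polynomial features of pupil-glint vectors (n x 2)."""
    x, y = pg[:, 0], pg[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_pccr_mapping(pg, screen):
    """Least-squares fit from pupil-glint vectors to screen points (n x 2)."""
    coef, *_ = np.linalg.lstsq(_design(pg), screen, rcond=None)
    return coef

def apply_pccr_mapping(coef, pg):
    """Predict POG screen coordinates for new pupil-glint vectors."""
    return _design(pg) @ coef

# Toy calibration: a 3x3 grid of pupil-glint vectors mapped to a screen
pg = np.column_stack([np.repeat([0.0, 0.5, 1.0], 3), np.tile([0.0, 0.5, 1.0], 3)])
screen = np.column_stack([100 + 800 * pg[:, 0], 50 + 600 * pg[:, 1]])
coef = fit_pccr_mapping(pg, screen)
print(apply_pccr_mapping(coef, np.array([[0.25, 0.25]])))  # ≈ [[300. 200.]]
```

    The polynomial form hard-codes the eye/camera geometry assumptions; swapping the design matrix for a trained network is what makes the abstract's approach model-independent.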

  2. A 2D eye gaze estimation system with low-resolution webcam images

    Directory of Open Access Journals (Sweden)

    Kim Jin

    2011-01-01

    Full Text Available Abstract In this article, a low-cost system for 2D eye gaze estimation with low-resolution webcam images is presented. Two algorithms are proposed for this purpose: one for eyeball detection with a stable approximate pupil centre, and one for detecting the direction of eye movements. The eyeball is detected using the deformable angular integral search by minimum intensity (DAISMI) algorithm. The deformable template-based 2D gaze estimation (DTBGE) algorithm is employed as a noise filter for deciding stable movement decisions. DTBGE employs binary images, whereas DAISMI employs gray-scale images. Right- and left-eye estimates are evaluated separately. DAISMI finds the stable approximate pupil-centre location by calculating the mass centre of the eyeball border vertices, which is used for the initial deformable template alignment. DTBGE starts from this initial alignment and updates the template alignment frame by frame with the resulting eye movements and eyeball size. The horizontal and vertical deviation of eye movements relative to the eyeball size is treated as directly proportional to the deviation of cursor movements for a given screen size and resolution. The core advantage of the system is that it does not employ the real pupil centre as a reference point for gaze estimation, making it more robust against corneal reflection. Visual angle accuracy is used for the evaluation and benchmarking of the system. The effectiveness of the proposed system is presented and experimental results are shown.
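    The abstract's mapping assumption (eye deviation relative to eyeball size is directly proportional to cursor deviation for a given screen size and resolution) reduces to a few lines of arithmetic. The function and parameter names below are ours, not the paper's:

```python
def deviation_to_cursor(dx_px, dy_px, eyeball_w, eyeball_h,
                        screen_w=1920, screen_h=1080):
    """Map pupil deviation (pixels, relative to the eyeball centre) to a
    cursor position, assuming a full eyeball-width deviation spans the
    full screen (a direct-proportionality assumption, as in the abstract)."""
    # Normalise deviation to [-0.5, 0.5] of the eyeball extent, clamped.
    nx = max(-0.5, min(0.5, dx_px / eyeball_w))
    ny = max(-0.5, min(0.5, dy_px / eyeball_h))
    # Scale proportionally into screen coordinates (origin at top-left).
    return (round((0.5 + nx) * (screen_w - 1)),
            round((0.5 + ny) * (screen_h - 1)))

print(deviation_to_cursor(0, 0, 40, 24))     # centred gaze -> screen centre
print(deviation_to_cursor(20, -12, 40, 24))  # extreme right/up -> corner
```

    In the actual system the deviation would come from DTBGE's template alignment per frame; here the inputs are supplied by hand.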

  3. EyeDroid: An Open Source Mobile Gaze Tracker on Android for Eyewear Computers

    DEFF Research Database (Denmark)

    Jalaliniya, Shahram; Mardanbeigi, Diako; Sintos, Ioannis

    2015-01-01

    In this paper we report on the development and evaluation of a video-based mobile gaze tracker for eyewear computers. Unlike most previous work, our system performs all of its processing workload on an Android device and sends the coordinates of the gaze point to an eyewear device over a wireless connection. We propose a lightweight software architecture for Android to increase the efficiency of the image processing needed for eye tracking. The evaluation of the system indicated an accuracy of 1.06° and a battery lifetime of approximately 4.5 hours.
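    EyeDroid's split between on-phone processing and an eyewear display reduces, on the wire, to streaming gaze coordinates to the remote device. The loopback sketch below assumes a UDP-plus-JSON wire format purely for illustration; the project's actual protocol and field names may differ.

```python
import json
import socket

def send_gaze(sock, addr, x, y, frame_id):
    """Send one gaze sample as a JSON datagram (assumed wire format)."""
    sock.sendto(json.dumps({"frame": frame_id, "x": x, "y": y}).encode(), addr)

# Loopback demo: one socket plays the eyewear receiver, another the tracker.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))                 # OS-assigned free port
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_gaze(send, addr, 412.5, 233.0, frame_id=1)

msg = json.loads(recv.recvfrom(1024)[0])    # eyewear side decodes the sample
print(msg)
recv.close()
send.close()
```

    UDP suits this use case because a late gaze sample is worthless; dropping it is cheaper than retransmitting it.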

  4. Use of a Remote Eye-Tracker for the Analysis of Gaze during Treadmill Walking and Visual Stimuli Exposition

    OpenAIRE

    Serchi, V.; Peruzzi, A; A. Cereatti; Della Croce, U.

    2016-01-01

    The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, the point of gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study a...

  5. Love is in the gaze: an eye-tracking study of love and sexual desire.

    Science.gov (United States)

    Bolmont, Mylene; Cacioppo, John T; Cacioppo, Stephanie

    2014-09-01

    Reading other people's eyes is a valuable skill during interpersonal interaction. Although a number of studies have investigated visual patterns in relation to the perceiver's interest, intentions, and goals, little is known about eye gaze when it comes to differentiating intentions to love from intentions to lust (sexual desire). To address this question, we conducted two experiments: one testing whether the visual pattern related to the perception of love differs from that related to lust and one testing whether the visual pattern related to the expression of love differs from that related to lust. Our results show that a person's eye gaze shifts as a function of his or her goal (love vs. lust) when looking at a visual stimulus. Such identification of distinct visual patterns for love and lust could have theoretical and clinical importance in couples therapy when these two phenomena are difficult to disentangle from one another on the basis of patients' self-reports.

  6. Brief Report: Using a Point-of-View Camera to Measure Eye Gaze in Young Children with Autism Spectrum Disorder during Naturalistic Social Interactions--A Pilot Study

    Science.gov (United States)

    Edmunds, Sarah R.; Rozga, Agata; Li, Yin; Karp, Elizabeth A.; Ibanez, Lisa V.; Rehg, James M.; Stone, Wendy L.

    2017-01-01

    Children with autism spectrum disorder (ASD) show reduced gaze to social partners. Eye contact during live interactions is often measured using stationary cameras that capture various views of the child, but determining a child's precise gaze target within another's face is nearly impossible. This study compared eye gaze coding derived from…

  7. Risk and Ambiguity in Information Seeking: Eye Gaze Patterns Reveal Contextual Behavior in Dealing with Uncertainty.

    Science.gov (United States)

    Wittek, Peter; Liu, Ying-Hsang; Darányi, Sándor; Gedeon, Tom; Lim, Ik Soo

    2016-01-01

    Information foraging connects optimal foraging theory in ecology with how humans search for information. The theory suggests that, following an information scent, the information seeker must optimize the tradeoff between exploration by repeated steps in the search space vs. exploitation, using the resources encountered. We conjecture that this tradeoff characterizes how a user deals with uncertainty and its two aspects, risk and ambiguity in economic theory. Risk is related to the perceived quality of the actually visited patch of information, and can be reduced by exploiting and understanding the patch to a better extent. Ambiguity, on the other hand, is the opportunity cost of having higher quality patches elsewhere in the search space. The aforementioned tradeoff depends on many attributes, including traits of the user: at the two extreme ends of the spectrum, analytic and wholistic searchers employ entirely different strategies. The former type focuses on exploitation first, interspersed with bouts of exploration, whereas the latter type prefers to explore the search space first and consume later. Our findings from an eye-tracking study of experts' interactions with novel search interfaces in the biomedical domain suggest that user traits of cognitive styles and perceived search task difficulty are significantly correlated with eye gaze and search behavior. We also demonstrate that perceived risk shifts the balance between exploration and exploitation in either type of users, tilting it against vs. in favor of ambiguity minimization. Since the pattern of behavior in information foraging is quintessentially sequential, risk and ambiguity minimization cannot happen simultaneously, leading to a fundamental limit on how good such a tradeoff can be. This in turn connects information seeking with the emergent field of quantum decision theory.
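    The exploration/exploitation tradeoff at the heart of information foraging can be made concrete with a toy epsilon-greedy forager choosing among information "patches" of unknown quality: exploitation consumes the best-known patch (reducing risk), while occasional exploration samples the others (reducing ambiguity) at an opportunity cost. The patch qualities and epsilon value below are invented for illustration and are unrelated to the study's data.

```python
import random

random.seed(0)
TRUE_QUALITY = [0.2, 0.5, 0.8]   # hidden payoff of each information patch

def forage(steps=500, epsilon=0.1):
    """Epsilon-greedy foraging: exploit the best-known patch, explore at
    rate epsilon; returns mean reward and visit counts per patch."""
    counts = [0, 0, 0]
    means = [0.0, 0.0, 0.0]
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:               # explore: random patch
            patch = random.randrange(3)
        else:                                       # exploit: best estimate
            patch = max(range(3), key=lambda i: means[i])
        reward = random.gauss(TRUE_QUALITY[patch], 0.1)
        counts[patch] += 1
        means[patch] += (reward - means[patch]) / counts[patch]  # running mean
        total += reward
    return total / steps, counts

avg, counts = forage()
print(f"mean reward {avg:.2f}; visits per patch {counts}")
```

    Because the choice sequence is strictly sequential, the forager can never reduce risk and ambiguity at the same step — the limit the abstract points to.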

  8. Risk and Ambiguity in Information Seeking: Eye Gaze Patterns Reveal Contextual Behaviour in Dealing with Uncertainty

    Directory of Open Access Journals (Sweden)

    Peter Wittek

    2016-11-01

    Full Text Available Information foraging connects optimal foraging theory in ecology with how humans search for information. The theory suggests that, following an information scent, the information seeker must optimize the tradeoff between exploration by repeated steps in the search space vs. exploitation, using the resources encountered. We conjecture that this tradeoff characterizes how a user deals with uncertainty and its two aspects, risk and ambiguity in economic theory. Risk is related to the perceived quality of the actually visited patch of information, and can be reduced by exploiting and understanding the patch to a better extent. Ambiguity, on the other hand, is the opportunity cost of having higher quality patches elsewhere in the search space. The aforementioned tradeoff depends on many attributes, including traits of the user: at the two extreme ends of the spectrum, analytic and wholistic searchers employ entirely different strategies. The former type focuses on exploitation first, interspersed with bouts of exploration, whereas the latter type prefers to explore the search space first and consume later. Our findings from an eye-tracking study of experts' interactions with novel search interfaces in the biomedical domain suggest that user traits of cognitive styles and perceived search task difficulty are significantly correlated with eye gaze and search behaviour. We also demonstrate that perceived risk shifts the balance between exploration and exploitation in either type of users, tilting it against vs. in favour of ambiguity minimization. Since the pattern of behaviour in information foraging is quintessentially sequential, risk and ambiguity minimization cannot happen simultaneously, leading to a fundamental limit on how good such a tradeoff can be. This in turn connects information seeking with the emergent field of quantum decision theory.

  9. Differential gaze patterns on eyes and mouth during audiovisual speech segmentation

    Directory of Open Access Journals (Sweden)

    Laina G. Lusk

    2016-02-01

    Full Text Available Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel & Weiss, 2014) established that adults can utilize facial cues (i.e. visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation.

  10. Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation.

    Science.gov (United States)

    Lusk, Laina G; Mitchel, Aaron D

    2016-01-01

    Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation.

  11. Use of a Remote Eye-Tracker for the Analysis of Gaze during Treadmill Walking and Visual Stimuli Exposition

    Directory of Open Access Journals (Sweden)

    V. Serchi

    2016-01-01

    Full Text Available The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, the point of gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study assesses the usability limits and measurement reliability of a remote eye-tracker during treadmill walking while visual stimuli are projected. During treadmill walking, the head remained within the remote eye-tracker workspace. Generally, the quality of the point of gaze measurements declined as the distance from the remote eye-tracker increased and data loss occurred for large gaze angles. The stimulus location (a dot-target) did not influence the point of gaze accuracy, precision, and trackability during both standing and walking. Similar results were obtained when the dot-target was replaced by a static or moving 2D target and “region of interest” analysis was applied. These findings foster the feasibility of the use of a remote eye-tracker for the analysis of gaze during treadmill walking in virtual reality environments.

  12. Use of a Remote Eye-Tracker for the Analysis of Gaze during Treadmill Walking and Visual Stimuli Exposition.

    Science.gov (United States)

    Serchi, V; Peruzzi, A; Cereatti, A; Della Croce, U

    2016-01-01

    The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, the point of gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study assesses the usability limits and measurement reliability of a remote eye-tracker during treadmill walking while visual stimuli are projected. During treadmill walking, the head remained within the remote eye-tracker workspace. Generally, the quality of the point of gaze measurements declined as the distance from the remote eye-tracker increased and data loss occurred for large gaze angles. The stimulus location (a dot-target) did not influence the point of gaze accuracy, precision, and trackability during both standing and walking. Similar results were obtained when the dot-target was replaced by a static or moving 2D target and "region of interest" analysis was applied. These findings foster the feasibility of the use of a remote eye-tracker for the analysis of gaze during treadmill walking in virtual reality environments.
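    The accuracy and precision figures discussed for the point of gaze can be computed with standard eye-tracking definitions: accuracy as the mean angular offset from the target, precision as the RMS of sample-to-sample angular distances within a fixation. The sketch below uses these common definitions (treating small gaze angles as planar, in degrees) on synthetic data; it is not the paper's exact procedure.

```python
import math

def accuracy(samples, target):
    """Mean angular offset (deg) between gaze samples and the target."""
    return sum(math.dist(s, target) for s in samples) / len(samples)

def precision_rms(samples):
    """RMS of angular distances (deg) between successive fixation samples."""
    d = [math.dist(a, b) for a, b in zip(samples, samples[1:])]
    return math.sqrt(sum(x * x for x in d) / len(d))

# Synthetic fixation: gaze samples (deg) scattered around a target at (0, 0),
# i.e. a systematic offset of ~0.5 deg with small sample-to-sample jitter.
samples = [(0.4, 0.1), (0.5, 0.0), (0.45, 0.15), (0.5, 0.1)]
print(f"accuracy {accuracy(samples, (0.0, 0.0)):.3f} deg, "
      f"precision {precision_rms(samples):.3f} deg")
```

    The distinction matters for the findings above: distance from the tracker degrades both metrics, while trackability (fraction of valid samples) drops at large gaze angles.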

  13. Eye gaze during observation of static faces in deaf people.

    Directory of Open Access Journals (Sweden)

    Katsumi Watanabe

    Full Text Available Knowing where people look when viewing faces provides an objective measure of the information entering the visual system, as well as of the cognitive strategy involved in facial perception. In the present study, we recorded the eye movements of 20 congenitally deaf (10 male and 10 female) and 23 normal-hearing (11 male and 12 female) Japanese participants while they evaluated the emotional valence of static face stimuli. While no difference was found in the evaluation scores, the eye movements during facial observations differed between the participant groups. The deaf group looked at the eyes more frequently and for longer durations than the nose, whereas the hearing group focused on the nose (or the central region of the face) more than the eyes. These results suggest that the strategy employed to extract visual information when viewing static faces may differ between deaf and hearing people.

  14. Visual-Motor Transformations Within Frontal Eye Fields During Head-Unrestrained Gaze Shifts in the Monkey.

    Science.gov (United States)

    Sajad, Amirsaman; Sadeh, Morteza; Keith, Gerald P; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas

    2015-10-01

    A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual-motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas.

  15. Keeping your eye on the rail: gaze behaviour of horse riders approaching a jump.

    Directory of Open Access Journals (Sweden)

    Carol Hall

    Full Text Available The gaze behaviour of riders during their approach to a jump was investigated using a mobile eye tracking device (ASL Mobile Eye). The timing, frequency and duration of fixations on the jump and the percentage of time when their point of gaze (POG) was located elsewhere were assessed. Fixations were identified when the POG remained on the jump for 100 ms or longer. The jumping skill of experienced but non-elite riders (n = 10) was assessed by means of a questionnaire. Their gaze behaviour was recorded as they completed a course of three identical jumps five times. The speed and timing of the approach was calculated. Gaze behaviour throughout the overall approach and during the last five strides before take-off was assessed following frame-by-frame analyses. Differences in relation to both round and jump number were found. Significantly longer was spent fixated on the jump during round 2, both during the overall approach and during the last five strides (p<0.05). Jump 1 was fixated on significantly earlier and more frequently than jump 2 or 3 (p<0.05). Significantly more errors were made with jump 3 than with jump 1 (p = 0.01), but there was no difference in errors made between rounds. Although no significant correlations between gaze behaviour and skill scores were found, the riders who scored higher for jumping skill tended to fixate on the jump earlier (p = 0.07), when the horse was further from the jump (p = 0.09), and their first fixation on the jump was of a longer duration (p = 0.06). Trials with elite riders are now needed to further identify sport-specific visual skills and their relationship with performance. Visual training should be included in preparation for equestrian sports participation, the positive impact of which has been clearly demonstrated in other sports.
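    The fixation criterion used here (POG on the jump for 100 ms or longer) amounts to a dwell-time detector over per-frame "on-jump" flags. A minimal sketch, with an assumed frame interval and a helper name of our choosing:

```python
def count_fixations(on_jump, frame_ms=20, min_ms=100):
    """Count runs of consecutive on-jump frames lasting at least min_ms."""
    fixations, run = 0, 0
    for flag in on_jump + [False]:      # trailing sentinel flushes a final run
        if flag:
            run += 1
        else:
            if run * frame_ms >= min_ms:
                fixations += 1
            run = 0
    return fixations

# 6 frames on (120 ms, counts), 2 off, 3 on (60 ms, too short),
# 1 off, 5 on (100 ms, counts exactly at threshold).
flags = [True] * 6 + [False] * 2 + [True] * 3 + [False] + [True] * 5
print(count_fixations(flags))
```

    With timing, duration, and onset recorded per run, the same loop yields the timing and duration measures the study reports.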

  16. Eye Tracking: A Perceptual Interface for Content Based Image Retrieval

    OpenAIRE

    Oyekoya, Oyekoya

    2007-01-01

    In this thesis visual search experiments are devised to explore the feasibility of an eye gaze driven search mechanism. The thesis first explores gaze behaviour on images possessing different levels of saliency. Eye behaviour was predominantly attracted by salient locations, but appears to also require frequent reference to non-salient background regions which indicated that information from scan paths might prove useful for image search. The thesis then specifically investigates the benefits...

  17. Eye Gaze during Observation of Static Faces in Deaf People

    OpenAIRE

    Katsumi Watanabe; Tetsuya Matsuda; Tomoyuki Nishioka; Miki Namatame

    2011-01-01

    Knowing where people look when viewing faces provides an objective measure into the part of information entering the visual system as well as into the cognitive strategy involved in facial perception. In the present study, we recorded the eye movements of 20 congenitally deaf (10 male and 10 female) and 23 (11 male and 12 female) normal-hearing Japanese participants while they evaluated the emotional valence of static face stimuli. While no difference was found in the evaluation scores, the e...

  18. Social eye gaze modulates processing of speech and co-speech gesture.

    Science.gov (United States)

    Holler, Judith; Schubotz, Louise; Kelly, Spencer; Hagoort, Peter; Schuetze, Manuela; Özyürek, Aslı

    2014-12-01

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech+gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker's preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients' speech processing suffers, gestures can enhance the comprehension of a speaker's message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension.

  19. Eye-gaze patterns as students study worked-out examples in mechanics

    Directory of Open Access Journals (Sweden)

    Brian H. Ross

    2010-10-01

    Full Text Available This study explores what introductory physics students actually look at when studying worked-out examples. Our classroom experiences indicate that introductory physics students neither discuss nor refer to the conceptual information contained in the text of worked-out examples. This study is an effort to determine to what extent students incorporate the textual information into the way they study. Student eye-gaze patterns were recorded as they studied the examples to aid them in solving a target problem. Contrary to our expectations from classroom interactions, students spent 40±3% of their gaze time reading the textual information. Their gaze patterns were also characterized by numerous jumps between corresponding mathematical and textual information, implying that they were combining information from both sources. Despite this large fraction of time spent reading the text, student recall of the conceptual information contained therein remained very poor. We also found that having a particular problem in mind had no significant effects on the gaze-patterns or conceptual information retention.

  20. Driver fatigue alarm based on eye detection and gaze estimation

    Science.gov (United States)

    Sun, Xinghua; Xu, Lu; Yang, Jingyu

    2007-11-01

    The driver assistant system has attracted much attention as an essential component of intelligent transportation systems. One task of a driver assistant system is to prevent drivers from fatigue, and for fatigue detection it is natural to use information about the eyes. Driver fatigue can be divided into two types: sleep with the eyes closed and sleep with the eyes open. Since fatigue detection involves prior knowledge and probabilistic statistics, a dynamic Bayesian network is used as the analysis tool to perform the reasoning about fatigue. Two kinds of experiments were performed to verify the system's effectiveness: one based on video recorded in the laboratory and another based on video recorded in a real driving situation. Ten persons participated in the test. In the laboratory all fatigue events were detected, while in the practical vehicle the detection ratio was about 85%. The experiments show that the proposed system works in most situations and its performance is satisfying.
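    The abstract reasons about fatigue with a dynamic Bayesian network over eye-state observations. A minimal stand-in is a two-state hidden Markov model (alert/fatigued) forward-filtered over per-frame "open"/"closed" observations; all probabilities below are invented for illustration and are not the paper's parameters.

```python
# Transition model: fatigue is sticky; emission model: fatigued drivers show
# closed eyes more often. (Invented numbers, for illustration only.)
TRANS = {"alert":    {"alert": 0.95, "fatigued": 0.05},
         "fatigued": {"alert": 0.10, "fatigued": 0.90}}
EMIT = {"alert":    {"open": 0.9, "closed": 0.1},
        "fatigued": {"open": 0.4, "closed": 0.6}}

def filter_fatigue(observations, prior=(0.9, 0.1)):
    """Forward-filter P(state | observations so far); returns P(fatigued)."""
    belief = {"alert": prior[0], "fatigued": prior[1]}
    for obs in observations:
        # Predict: propagate the belief through the transition model.
        pred = {s: sum(belief[p] * TRANS[p][s] for p in belief) for s in belief}
        # Update: weight by the likelihood of the observed eye state.
        post = {s: pred[s] * EMIT[s][obs] for s in pred}
        z = sum(post.values())
        belief = {s: post[s] / z for s in post}
    return belief["fatigued"]

p = filter_fatigue(["open", "closed", "closed", "closed", "closed"])
print(f"P(fatigued) = {p:.2f}")
```

    Sustained eye closure drives the fatigue belief up frame by frame, which is the kind of temporal reasoning a dynamic Bayesian network performs over richer eye features (note this simple eye-closure cue cannot catch the "sleep with eyes open" case).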

  1. Human eye-head gaze shifts in a distractor task. II. Reduced threshold for initiation of early head movements.

    Science.gov (United States)

    Corneil, B D; Munoz, D P

    1999-09-01

    This study was motivated by the observation of early head movements (EHMs) occasionally generated before gaze shifts. Human subjects were presented with a visual or auditory target, along with an accompanying stimulus of the other modality that either appeared at the same location as the target (enhancer condition) or at the diametrically opposite location (distractor condition). Gaze shifts generated to the target in the distractor condition were sometimes preceded by EHMs directed either to the side of the target (correct EHMs) or the side of the distractor (incorrect EHMs). During EHMs, the eyes performed compensatory movements to keep gaze stable. Incorrect EHMs were usually between 1 and 5 degrees in amplitude, reached generally modest peak velocities, and initially followed a trajectory typical of much larger head movements. These results suggest that incorrect EHMs are head movements that were initially planned to orient to the peripheral distractor. Furthermore, gaze shifts preceded by incorrect EHMs had longer reaction latencies than gaze shifts not preceded by incorrect EHMs, suggesting that the processes leading to incorrect EHMs also serve to delay gaze-shift initiation. These results demonstrate a form of distraction analogous to the incorrect gaze shifts (IGSs) described in the previous paper and suggest that a motor program encoding a gaze shift to a distractor is capable of initiating either an IGS or an incorrect EHM. A neural program not strong enough to initiate an IGS can nevertheless initiate an incorrect EHM.

  2. Complicating Eroticism and the Male Gaze: Feminism and Georges Bataille’s Story of the Eye

    Directory of Open Access Journals (Sweden)

    Chris Vanderwees

    2014-01-01

    Full Text Available This article explores the relationship between feminist criticism and Georges Bataille’s Story of the Eye . Much of the critical work on Bataille assimilates his psychosocial theories in Erotism with the manifestation of those theories in his fiction without acknowledging potential contradictions between the two bodies of work. The conflation of important distinctions between representations of sex and death in Story of the Eye and the writings of Erotism forecloses the possibility of reading Bataille’s novel as a critique of gender relations. This article unravels some of the distinctions between Erotism and Story of the Eye in order to complicate the assumption that the novel simply reproduces phallogocentric sexual fantasies of transgression. Drawing from the work of Angela Carter and Laura Mulvey, the author proposes the possibility of reading Story of the Eye as a pornographic critique of gender relations through an analysis of the novel’s displacement and destruction of the male gaze.

  3. Observing Third-Party Attentional Relationships Affects Infants' Gaze Following: An Eye-Tracking Study

    Science.gov (United States)

    Meng, Xianwei; Uto, Yusuke; Hashiya, Kazuhide

    2017-01-01

    Infants not only respond to social actions directed toward themselves but also pay attention to relevant information from third-party interactions. However, it is unclear whether and how infants recognize the structure of these interactions. The current study aimed to investigate how infants' observation of third-party attentional relationships influences their subsequent gaze following. Nine-month-old, 1-year-old, and 1.5-year-old infants (N = 72, 37 girls) observed video clips in which a female actor gazed at one of two toys after she and her partner either silently faced each other (face-to-face condition) or looked in opposite directions (back-to-back condition). An eye tracker was used to record the infants' looking behavior (e.g., looking time, looking frequency). The analyses revealed that younger infants followed the actor's gaze toward the target object in both conditions, but this was not the case for the 1.5-year-old infants in the back-to-back condition. Furthermore, we found that infants' gaze following could be negatively predicted by their expectation of the partner's response to the actor's head turn (i.e., they shifted their gaze toward the partner immediately after realizing that the actor's head would turn). These findings suggest that the sensitivity to differences in knowledge and attentional states that emerges in the second year of human life extends to third-party interactions, even without any direct involvement in the situation. Additionally, a spontaneous concern with the epistemic gap between self and other, as well as between others, develops by this age. These processes might be considered part of the fundamental basis for human communication.

  4. Testing the dual-route model of perceived gaze direction: Linear combination of eye and head cues.

    Science.gov (United States)

    Otsuka, Yumiko; Mareschal, Isabelle; Clifford, Colin W G

    2016-06-01

    We have recently proposed a dual-route model of the effect of head orientation on perceived gaze direction (Otsuka, Mareschal, Calder, & Clifford, 2014; Otsuka, Mareschal, & Clifford, 2015), which computes perceived gaze direction as a linear combination of eye orientation and head orientation. By parametrically manipulating eye orientation and head orientation, we tested the adequacy of a linear model to account for the effect of horizontal head orientation on perceived direction of gaze. Here, participants adjusted an on-screen pointer toward the perceived gaze direction in two image conditions: Normal condition and Wollaston condition. Images in the Normal condition included a change in the visible part of the eye along with the change in head orientation, while images in the Wollaston condition were manipulated to have identical eye regions across head orientations. Multiple regression analysis with explanatory variables of eye orientation and head orientation revealed that linear models account for most of the variance both in the Normal condition and in the Wollaston condition. Further, we found no evidence that the model with a nonlinear term explains significantly more variance. Thus, the current study supports the dual-route model that computes the perceived gaze direction as a linear combination of eye orientation and head orientation.
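    The dual-route model's linear combination, and the multiple regression used to test it, can be sketched directly: fit perceived gaze direction as w_eye * eye + w_head * head + intercept by least squares. The angles and "true" weights below are synthetic stand-ins for the study's pointer-adjustment data.

```python
import numpy as np

rng = np.random.default_rng(1)
eye = rng.uniform(-20, 20, 60)      # eye-in-head orientation (deg)
head = rng.uniform(-30, 30, 60)     # head orientation (deg)
# Hypothetical "true" weights: eye orientation dominates, head orientation
# contributes a smaller share (its sign would capture attractive vs.
# repulsive head effects on perceived gaze).
perceived = 0.9 * eye + 0.2 * head + rng.normal(0, 0.5, 60)

# Multiple regression via least squares, as in the paper's analysis.
X = np.column_stack([eye, head, np.ones_like(eye)])
coef, *_ = np.linalg.lstsq(X, perceived, rcond=None)
w_eye, w_head, intercept = coef

resid = X @ coef - perceived
r2 = 1 - np.sum(resid**2) / np.sum((perceived - perceived.mean())**2)
print(f"eye weight {w_eye:.2f}, head weight {w_head:.2f}, R^2 {r2:.3f}")
```

    A high R^2 for this linear form, with no gain from adding a nonlinear term, is the pattern the study reports in support of the dual-route model.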

  5. Eye-Gaze Analysis of Facial Emotion Recognition and Expression in Adolescents with ASD.

    Science.gov (United States)

    Wieckowski, Andrea Trubanova; White, Susan W

    2017-01-01

    Impaired emotion recognition and expression in individuals with autism spectrum disorder (ASD) may contribute to observed social impairment. The aim of this study was to examine the role of visual attention directed toward nonsocial aspects of a scene as a possible mechanism underlying recognition and expressive ability deficiency in ASD. One recognition and two expression tasks were administered. Recognition was assessed in a forced-choice paradigm, and expression was assessed during scripted and free-choice response (in response to emotional stimuli) tasks in youth with ASD (n = 20) and an age-matched sample of typically developing youth (n = 20). During stimulus presentation prior to response in each task, participants' eye gaze was tracked. Youth with ASD were less accurate at identifying disgust and sadness in the recognition task. They fixated less on the eye region of stimuli showing surprise. A group difference was found during the free-choice response task, such that those with ASD expressed emotion less clearly, but no difference was found during the scripted task. Results suggest altered eye gaze to the mouth region but not the eye region as a candidate mechanism for decreased ability to recognize or express emotion. Findings inform our understanding of the association between social attention and emotion recognition and expression deficits.

  6. Keeping your eye on the rail: gaze behaviour of horse riders approaching a jump.

    Science.gov (United States)

    Hall, Carol; Varley, Ian; Kay, Rachel; Crundall, David

    2014-01-01

    The gaze behaviour of riders during their approach to a jump was investigated using a mobile eye tracking device (ASL Mobile Eye). The timing, frequency and duration of fixations on the jump and the percentage of time when their point of gaze (POG) was located elsewhere were assessed. Fixations were identified when the POG remained on the jump for 100 ms or longer. The jumping skill of experienced but non-elite riders (n = 10) was assessed by means of a questionnaire. Their gaze behaviour was recorded as they completed a course of three identical jumps five times. The speed and timing of the approach was calculated. Gaze behaviour throughout the overall approach and during the last five strides before take-off was assessed following frame-by-frame analyses. Differences in relation to both round and jump number were found. Significantly longer was spent fixated on the jump during round 2, both during the overall approach and during the last five strides (p < 0.05). Riders who scored higher for jumping skill tended to fixate on the jump earlier (p = 0.07), when the horse was further from the jump (p = 0.09), and their first fixation on the jump was of a longer duration (p = 0.06). Trials with elite riders are now needed to further identify sport-specific visual skills and their relationship with performance. Visual training should be included in preparation for equestrian sports participation, the positive impact of which has been clearly demonstrated in other sports.

  7. In the Eye of the Beholder: A Survey of Models for Eyes and Gaze

    DEFF Research Database (Denmark)

    Witzner Hansen, Dan; Ji, Qiang

    2010-01-01

    Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications...

  8. Kinematic property of target motion conditions gaze behavior and eye-hand synergy during manual tracking.

    Science.gov (United States)

    Huang, Chien-Ting; Hwang, Ing-Shiou

    2013-12-01

    This study investigated how frequency demand and motion feedback influenced composite ocular movements and eye-hand synergy during manual tracking. Fourteen volunteers conducted slow and fast force-tracking in which targets were displayed in either line-mode or wave-mode to guide manual tracking with target movement of direct position or velocity nature. The results showed that eye-hand synergy was a selective response of spatiotemporal coupling conditional on target rate and feedback mode. Slow and line-mode tracking exhibited stronger eye-hand coupling than fast and wave-mode tracking. Both eye movement and manual action led the target signal during fast-tracking, while the latency of ocular navigation during slow-tracking depended on the feedback mode. Slow-tracking resulted in more saccadic responses and larger pursuit gains than fast-tracking. Line-mode tracking led to larger pursuit gains but fewer and shorter gaze fixations than wave-mode tracking. During slow-tracking, incidences of saccade and gaze fixation fluctuated across a target cycle, peaking at velocity maximum and the maximal curvature of target displacement, respectively. For line-mode tracking, the incidence of smooth pursuit was phase-dependent, peaking at velocity maximum as well. Manual behavior of slow or line-mode tracking was better predicted by composite eye movements than that of fast or wave-mode tracking. In conclusion, manual tracking relied on versatile visual strategies to perceive target movements of different kinematic properties, which suggested a flexible coordinative control for the ocular and manual sensorimotor systems.

  9. Parent Perception of Two Eye-Gaze Control Technology Systems in Young Children with Cerebral Palsy: Pilot Study.

    Science.gov (United States)

    Karlsson, Petra; Wallen, Margaret

    2017-01-01

    Eye-gaze control technology enables people with significant physical disability to access computers for communication, play, learning and environmental control. This pilot study used a multiple case study design with repeated baseline assessment and parents' evaluations to compare two eye-gaze control technology systems to identify any differences in factors such as ease of use and impact of the systems for their young children. Five children, aged 3 to 5 years, with dyskinetic cerebral palsy, and their families participated. Overall, families were satisfied with both the Tobii PCEye Go and myGaze® eye tracker, found them easy to position and use, and children learned to operate them quickly. This technology provides young children with important opportunities for learning, play, leisure, and developing communication.

  10. Estimating 3D gaze in physical environment: a geometric approach on consumer-level remote eye tracker

    Science.gov (United States)

    Wibirama, Sunu; Mahesa, Rizki R.; Nugroho, Hanung A.; Hamamoto, Kazuhiko

    2017-02-01

    Remote eye trackers at consumer prices have been used for various applications on flat computer screens. On the other hand, 3D gaze tracking in physical environments has been useful for visualizing gaze behavior, controlling robots, and assistive technology. Instead of using affordable remote eye trackers, 3D gaze tracking in physical environments has been performed using corporate-level head-mounted eye trackers, limiting its practical usage to niche users. In this research, we propose a novel method to estimate 3D gaze using a consumer-level remote eye tracker. We implement a geometric approach to obtain the 3D point of gaze from binocular lines-of-sight. Experimental results show that the proposed method yielded low errors of 3.47 ± 3.02 cm, 3.02 ± 1.34 cm, and 2.57 ± 1.85 cm in the X, Y, and Z dimensions, respectively. The proposed approach may be used as a starting point for designing interaction methods in 3D physical environments.
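
    The geometric approach described here, triangulating a 3D point of gaze from two lines-of-sight, is commonly implemented as the midpoint of the shortest segment between the two (generally skew) lines; whether this matches the authors' exact formulation is an assumption. A sketch:

```python
import numpy as np

def gaze_point_3d(p_left, d_left, p_right, d_right):
    """Midpoint of the shortest segment between two lines-of-sight.

    Each line is given by an eye position p and a (not necessarily
    unit-length) gaze direction d.  This is one standard way to
    triangulate a 3D point of gaze from binocular data.
    """
    p1, d1 = np.asarray(p_left, float), np.asarray(d_left, float)
    p2, d2 = np.asarray(p_right, float), np.asarray(d_right, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:            # near-parallel lines of sight
        t, s = 0.0, e / c
    else:                             # minimize |p1 + t*d1 - (p2 + s*d2)|^2
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))
```

    For example, with eyes at (±3, 0, 0) cm both converging on a target at (0, 0, 60) cm, the two lines intersect and the recovered point is the target itself.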

  11. Eye'm talking to you: speakers' gaze direction modulates co-speech gesture processing in the right MTG.

    Science.gov (United States)

    Holler, Judith; Kokal, Idil; Toni, Ivan; Hagoort, Peter; Kelly, Spencer D; Özyürek, Aslı

    2015-02-01

    Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such as eye gaze, or with the processing of semantic information, provided by speech and gesture. Participants were scanned (fMRI) while taking part in triadic communication involving two recipients and a speaker. The speaker uttered sentences that were and were not accompanied by complementary iconic gestures. Crucially, the speaker alternated her gaze direction, thus creating two recipient roles: addressed (direct gaze) vs unaddressed (averted gaze) recipient. The comprehension of Speech&Gesture relative to SpeechOnly utterances recruited middle occipital, middle temporal and inferior frontal gyri, bilaterally. The calcarine sulcus and posterior cingulate cortex were sensitive to differences between direct and averted gaze. Most importantly, Speech&Gesture utterances, but not SpeechOnly utterances, produced additional activity in the right middle temporal gyrus when participants were addressed. Marking communicative intent with gaze direction modulates the processing of speech-gesture utterances in cerebral areas typically associated with the semantic processing of multi-modal communicative acts.

  12. Time course of superior temporal sulcus activity in response to eye gaze: a combined fMRI and MEG study

    Science.gov (United States)

    Kochiyama, Takanori; Uono, Shota; Yoshikawa, Sakiko

    2008-01-01

    The human superior temporal sulcus (STS) has been suggested to be involved in gaze processing, but temporal data regarding this issue are lacking. We investigated this topic by combining fMRI and MEG in four normal subjects. Photographs of faces with either averted or straight eye gazes were presented and subjects passively viewed the stimuli. First, we analyzed the brain areas involved using fMRI. A group analysis revealed activation of the STS for averted compared to straight gazes, which was confirmed in all subjects. We then measured brain activity using MEG, and conducted a 3D spatial filter analysis. The STS showed higher activity in response to averted versus straight gazes during the 150–200 ms period, peaking at around 170 ms, after stimulus onset. In contrast, the fusiform gyrus, which was detected by the main effect of stimulus presentations in fMRI analysis, exhibited comparable activity across straight and averted gazes at about 170 ms. These results indicate involvement of the human STS in rapid processing of the eye gaze of another individual. PMID:19015114

  13. In the eye of the beholder: reduced threat-bias and increased gaze-imitation towards reward in relation to trait anger.

    Directory of Open Access Journals (Sweden)

    David Terburg

    Full Text Available The gaze of a fearful face silently signals a potential threat's location, while the happy-gaze communicates the location of impending reward. Imitating such gaze-shifts is an automatic form of social interaction that promotes survival of individual and group. Evidence from gaze-cueing studies suggests that covert allocation of attention to another individual's gaze-direction is facilitated when threat is communicated and further enhanced by trait anxiety. We used novel eye-tracking techniques to assess whether dynamic fearful and happy facial expressions actually facilitate automatic gaze-imitation. We show that this actual gaze-imitation effect is stronger when threat is signaled, but not further enhanced by trait anxiety. Instead, trait anger predicts facilitated gaze-imitation to reward, and to reward compared to threat. These results agree with an increasing body of evidence on trait anger sensitivity to reward.

  14. In the Eye of the Beholder: Reduced Threat-Bias and Increased Gaze-Imitation towards Reward in Relation to Trait Anger

    Science.gov (United States)

    Terburg, David; Aarts, Henk; Putman, Peter; van Honk, Jack

    2012-01-01

    The gaze of a fearful face silently signals a potential threat's location, while the happy-gaze communicates the location of impending reward. Imitating such gaze-shifts is an automatic form of social interaction that promotes survival of individual and group. Evidence from gaze-cueing studies suggests that covert allocation of attention to another individual's gaze-direction is facilitated when threat is communicated and further enhanced by trait anxiety. We used novel eye-tracking techniques to assess whether dynamic fearful and happy facial expressions actually facilitate automatic gaze-imitation. We show that this actual gaze-imitation effect is stronger when threat is signaled, but not further enhanced by trait anxiety. Instead, trait anger predicts facilitated gaze-imitation to reward, and to reward compared to threat. These results agree with an increasing body of evidence on trait anger sensitivity to reward. PMID:22363632

  15. A comparison study of visually stimulated brain-computer and eye-tracking interfaces

    Science.gov (United States)

    Suefusa, Kaori; Tanaka, Toshihisa

    2017-06-01

    Objective. Brain-computer interfacing (BCI) based on visual stimuli detects the target on a screen on which a user is focusing. The detection of the gazing target can be achieved by tracking gaze positions with a video camera, which is called eye-tracking or eye-tracking interfaces (ETIs). The two types of interface have been developed in different communities. Thus, little work on a comprehensive comparison between these two types of interface has been reported. This paper quantitatively compares the performance of these two interfaces on the same experimental platform. Specifically, our study is focused on two major paradigms of BCI and ETI: steady-state visual evoked potential-based BCIs and dwelling-based ETIs. Approach. Recognition accuracy and the information transfer rate were measured by giving subjects the task of selecting one of four targets by gazing at it. The targets were displayed in three different sizes (with sides 20, 40 and 60 mm long) to evaluate performance with respect to the target size. Main results. The experimental results showed that the BCI was comparable to the ETI in terms of accuracy and the information transfer rate. In particular, when the size of a target was relatively small, the BCI had significantly better performance than the ETI. Significance. The results on which of the two interfaces works better in different situations would not only enable us to improve the design of the interfaces but would also allow for the appropriate choice of interface based on the situation. Specifically, one can choose an interface based on the size of the screen that displays the targets.
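
    The information transfer rate used to compare BCIs and ETIs is conventionally the Wolpaw ITR; that this paper uses exactly that definition is an assumption. A minimal sketch:

```python
import math

def bits_per_selection(n_targets, accuracy):
    """Wolpaw information transfer rate, in bits per selection."""
    n, p = n_targets, accuracy
    if p <= 0.0 or p >= 1.0:
        return math.log2(n) if p == 1.0 else 0.0
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_min(n_targets, accuracy, seconds_per_selection):
    """Scale bits/selection by the selection rate."""
    return bits_per_selection(n_targets, accuracy) * 60.0 / seconds_per_selection

# Four targets at 90% accuracy, as in the four-target task here:
# about 1.37 bits per selection.
```

    This makes explicit why accuracy losses on small targets hurt the ETI's reported ITR: the bits-per-selection term falls off steeply as accuracy drops below 100%.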

  16. Dynamic Eye Tracking Based Metrics for Infant Gaze Patterns in the Face-Distractor Competition Paradigm

    Science.gov (United States)

    Ahtola, Eero; Stjerna, Susanna; Yrttiaho, Santeri; Nelson, Charles A.; Leppänen, Jukka M.; Vanhatalo, Sampsa

    2014-01-01

    Objective To develop new standardized eye tracking based measures and metrics for infants’ gaze dynamics in the face-distractor competition paradigm. Method Eye tracking data were collected from two samples of healthy 7-month-old (total n = 45), as well as one sample of 5-month-old infants (n = 22) in a paradigm with a picture of a face or a non-face pattern as a central stimulus, and a geometric shape as a lateral stimulus. The data were analyzed by using conventional measures of infants’ initial disengagement from the central to the lateral stimulus (i.e., saccadic reaction time and probability) and, additionally, novel measures reflecting infants gaze dynamics after the initial disengagement (i.e., cumulative allocation of attention to the central vs. peripheral stimulus). Results The results showed that the initial saccade away from the centrally presented stimulus is followed by a rapid re-engagement of attention with the central stimulus, leading to cumulative preference for the central stimulus over the lateral stimulus over time. This pattern tended to be stronger for salient facial expressions as compared to non-face patterns, was replicable across two independent samples of 7-month-old infants, and differentiated between 7 and 5 month-old infants. Conclusion The results suggest that eye tracking based assessments of infants’ cumulative preference for faces over time can be readily parameterized and standardized, and may provide valuable techniques for future studies examining normative developmental changes in preference for social signals. Significance Standardized measures of early developing face preferences may have potential to become surrogate biomarkers of neurocognitive and social development. PMID:24845102

  17. Dynamic eye tracking based metrics for infant gaze patterns in the face-distractor competition paradigm.

    Directory of Open Access Journals (Sweden)

    Eero Ahtola

    Full Text Available OBJECTIVE: To develop new standardized eye tracking based measures and metrics for infants' gaze dynamics in the face-distractor competition paradigm. METHOD: Eye tracking data were collected from two samples of healthy 7-month-old (total n = 45), as well as one sample of 5-month-old infants (n = 22), in a paradigm with a picture of a face or a non-face pattern as a central stimulus, and a geometric shape as a lateral stimulus. The data were analyzed by using conventional measures of infants' initial disengagement from the central to the lateral stimulus (i.e., saccadic reaction time and probability) and, additionally, novel measures reflecting infants' gaze dynamics after the initial disengagement (i.e., cumulative allocation of attention to the central vs. peripheral stimulus). RESULTS: The results showed that the initial saccade away from the centrally presented stimulus is followed by a rapid re-engagement of attention with the central stimulus, leading to cumulative preference for the central stimulus over the lateral stimulus over time. This pattern tended to be stronger for salient facial expressions as compared to non-face patterns, was replicable across two independent samples of 7-month-old infants, and differentiated between 7- and 5-month-old infants. CONCLUSION: The results suggest that eye tracking based assessments of infants' cumulative preference for faces over time can be readily parameterized and standardized, and may provide valuable techniques for future studies examining normative developmental changes in preference for social signals. SIGNIFICANCE: Standardized measures of early developing face preferences may have potential to become surrogate biomarkers of neurocognitive and social development.

  18. Discriminating between intentional and unintentional gaze fixation using multimodal-based fuzzy logic algorithm for gaze tracking system with NIR camera sensor

    Science.gov (United States)

    Naqvi, Rizwan Ali; Park, Kang Ryoung

    2016-06-01

    Gaze tracking systems are widely used in human-computer interfaces, interfaces for the disabled, game interfaces, and for controlling home appliances. Most studies on gaze detection have focused on enhancing its accuracy, whereas few have considered the discrimination of intentional gaze fixation (looking at a target to activate or select it) from unintentional fixation while using gaze detection systems. Previous research methods based on the use of a keyboard or mouse button, eye blinking, and the dwell time of gaze position have various limitations. Therefore, we propose a method for discriminating between intentional and unintentional gaze fixation using a multimodal fuzzy logic algorithm applied to a gaze tracking system with a near-infrared camera sensor. Experimental results show that the proposed method outperforms the conventional method for determining gaze fixation.
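
    The abstract does not give the paper's fuzzy inputs or rule base; the following toy sketch only illustrates the general Mamdani-style idea of combining fuzzy memberships into an intent score, with hypothetical dwell-time and dispersion inputs and made-up membership shapes:

```python
def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def intentional_fixation_score(dwell_ms, dispersion_px):
    """Toy fuzzy rule: a fixation looks 'intentional' when dwell time
    is long AND gaze dispersion is low.  Both membership functions are
    invented for illustration only."""
    long_dwell = min(1.0, max(0.0, (dwell_ms - 300.0) / 400.0))  # ramp
    low_dispersion = tri(dispersion_px, -1.0, 0.0, 40.0)
    return min(long_dwell, low_dispersion)  # fuzzy AND = minimum
```

    A crisp decision would then threshold the score, replacing the hard dwell-time cutoff that the abstract identifies as a limitation of earlier methods.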

  19. Inhibition of Return in Response to Eye Gaze and Peripheral Cues in Young People with Asperger's Syndrome

    Science.gov (United States)

    Marotta, Andrea; Pasini, Augusto; Ruggiero, Sabrina; Maccari, Lisa; Rosa, Caterina; Lupianez, Juan; Casagrande, Maria

    2013-01-01

    Inhibition of return (IOR) reflects slower reaction times to stimuli presented in previously attended locations. In this study, we examined this inhibitory after-effect using two different cue types, eye-gaze and standard peripheral cues, in individuals with Asperger's syndrome and typically developing individuals. Typically developing…

  1. Remote eye-gaze tracking method robust to the device rotation

    Science.gov (United States)

    Kim, Sung-Tae; Choi, Kang-A.; Shin, Yong-Goo; Kang, Mun-Cheon; Ko, Sung-Jea

    2016-08-01

    A remote eye-gaze tracking (REGT) method is presented, which compensates for the error caused by device rotation. The proposed method is based on the state-of-the-art homography normalization (HN) method. Conventional REGT methods, including the HN method, suffer from a large estimation error in the presence of device rotation. However, little effort has been made to clarify the relation between the device rotation and its subsequent error. This paper introduces two factors inducing device rotation error, the discrepancy between the optical and visual axis, called angle kappa, and the change in camera location. On the basis of these factors, an efficient method for compensating for the REGT error is proposed. While the device undergoes a 360-deg rotation, a series of erroneous points of gaze (POGs) are obtained on the screen and modeled as an ellipse, and then the center of the ellipse is exploited to estimate the rotation-invariant POG. Experimental results demonstrate that the proposed REGT method can estimate the POG accurately in spite of the rotational movement of the device.
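
    The ellipse-fitting step can be read as a least-squares conic fit to the erroneous POGs, with the center recovered from the conic coefficients; the authors' exact fitting procedure is not specified in the abstract. A sketch under that assumption:

```python
import numpy as np

def ellipse_center(points):
    """Least-squares conic fit a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    through the observed points of gaze; the center is where the conic's
    gradient vanishes: [2a b; b 2c] @ [xc, yc] = [-d, -e]."""
    x, y = np.asarray(points, float).T
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(A)
    a, b, c, d, e, _f = vt[-1]        # null-space vector = conic coefficients
    return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
```

    The center is invariant to the overall sign and scale of the fitted conic, which is what makes it usable as a rotation-invariant POG estimate.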

  2. Eye-catching odors: olfaction elicits sustained gazing to faces and eyes in 4-month-old infants.

    Science.gov (United States)

    Durand, Karine; Baudouin, Jean-Yves; Lewkowicz, David J; Goubet, Nathalie; Schaal, Benoist

    2013-01-01

    This study investigated whether an odor can affect infants' attention to visually presented objects and whether it can selectively direct visual gaze at visual targets as a function of their meaning. Four-month-old infants (n = 48) were exposed to their mother's body odors while their visual exploration was recorded with an eye-movement tracking system. Two groups of infants, who were assigned to either an odor condition or a control condition, looked at a scene composed of still pictures of faces and cars. As expected, infants looked longer at the faces than at the cars, but this spontaneous preference for faces was significantly enhanced in the presence of the odor. As expected also, when looking at the face, the infants looked longer at the eyes than at any other facial region, but, again, they looked at the eyes significantly longer in the presence of the odor. Thus, 4-month-old infants are sensitive to the contextual effects of odors while looking at faces. This suggests that early social attention to faces is mediated by visual as well as non-visual cues.

  3. Dynamic modeling of patient and physician eye gaze to understand the effects of electronic health records on doctor-patient communication and attention.

    Science.gov (United States)

    Montague, Enid; Asan, Onur

    2014-03-01

    The aim of this study was to examine eye gaze patterns between patients and physicians while electronic health records were used to support patient care. Eye gaze provides an indication of physician attention to patient, patient/physician interaction, and physician behaviors such as searching for information and documenting information. A field study was conducted where 100 patient visits were observed and video recorded in a primary care clinic. Videos were then coded for gaze behaviors where patients' and physicians' gaze at each other and artifacts such as electronic health records were coded using a pre-established objective coding scheme. Gaze data were then analyzed using lag sequential methods. Results showed that several eye gaze patterns were significantly dependent on each other. All doctor-initiated gaze patterns were followed by patient gaze patterns. Some patient-initiated gaze patterns were also significantly followed by doctor gaze patterns, unlike the findings in previous studies. Health information technology appears to contribute to some of the new significant patterns that have emerged. Differences were also found in gaze patterns related to technology that differ from patterns identified in studies with paper charts. Several sequences related to patient-doctor-technology were also significant. Electronic health records affect the patient-physician eye contact dynamic differently than paper charts. This study identified several patterns of patient-physician interaction with electronic health record systems. Consistent with previous studies, physician-initiated gaze is an important driver of the interactions between patient and physician and patient and technology.
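
    Lag sequential analysis of the kind used here starts from first-order transition counts between coded gaze states; a minimal sketch of that building block (the state codes below are hypothetical, not the study's coding scheme):

```python
from collections import Counter

def transition_probs(states):
    """First-order (lag-1) transition probabilities between coded
    gaze states: P(next state | current state)."""
    pairs = Counter(zip(states, states[1:]))   # count adjacent pairs
    totals = Counter()
    for (a, _b), n in pairs.items():
        totals[a] += n                         # outgoing transitions per state
    return {(a, b): n / totals[a] for (a, b), n in pairs.items()}

# Hypothetical coded sequence: D = doctor gazes at patient,
# T = doctor gazes at technology, P = patient gazes at doctor.
seq = ["D", "P", "D", "T", "T", "D", "P", "P", "D", "T"]
probs = transition_probs(seq)
```

    A full lag sequential analysis would then compare each observed transition probability against its expected value under independence (e.g. with adjusted residuals or a z-test) to decide which sequences are significant.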

  4. Dynamic modeling of patient and physician eye gaze to understand the effects of electronic health records on doctor-patient communication and attention

    Science.gov (United States)

    Montague, Enid; Asan, Onur

    2014-01-01

    Objective The aim of this study was to examine eye gaze patterns between patients and physicians while electronic health records were used to support patient care. Background Eye gaze provides an indication of physician attention to patient, patient/physician interaction, and physician behaviors such as searching for information and documenting information. Methods A field study was conducted where 100 patient visits were observed and video recorded in a primary care clinic. Videos were then coded for gaze behaviors where patients' and physicians' gaze at each other and artifacts such as electronic health records were coded using a pre-established objective coding scheme. Gaze data were then analyzed using lag sequential methods. Results Results showed that several eye gaze patterns were significantly dependent on each other. All doctor-initiated gaze patterns were followed by patient gaze patterns. Some patient-initiated gaze patterns were also significantly followed by doctor gaze patterns, unlike the findings in previous studies. Health information technology appears to contribute to some of the new significant patterns that have emerged. Differences were also found in gaze patterns related to technology that differ from patterns identified in studies with paper charts. Several sequences related to patient-doctor-technology were also significant. Electronic health records affect the patient-physician eye contact dynamic differently than paper charts. Conclusion This study identified several patterns of patient-physician interaction with electronic health record systems. Consistent with previous studies, physician-initiated gaze is an important driver of the interactions between patient and physician and patient and technology. PMID:24380671

  5. THE EFFECT OF GAZE ANGLE ON THE EVALUATIONS OF SAR AND TEMPERATURE RISE IN HUMAN EYE UNDER PLANE-WAVE EXPOSURES FROM 0.9 TO 10 GHZ.

    Science.gov (United States)

    Diao, Yinliang; Leung, Sai-Wing; Chan, Kwok Hung; Sun, Weinong; Siu, Yun-Ming; Kong, Richard

    2016-12-01

    This article investigates the effect of gaze angle on the specific absorption rate (SAR) and temperature rise in human eye under electromagnetic exposures from 0.9 to 10 GHz. Eye models in different gaze angles are developed based on biometric data. The spatial-average SARs in eyes are investigated using the finite-difference time-domain method, and the corresponding maximum temperature rises in lens are calculated by the finite-difference method. It is found that the changes in the gaze angle produce a maximum variation of 35, 12 and 20 % in the eye-averaged SAR, peak 10 g average SAR and temperature rise, respectively. Results also reveal that the eye-averaged SAR is more sensitive to the changes in the gaze angle than peak 10 g average SAR, especially at higher frequencies.
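
    The eye-averaged SAR compared here is, by definition, a mass-weighted average of the local SAR (σ|E|²/ρ) over the eye region. A sketch of that computation on per-voxel FDTD outputs (the flat-array layout and uniform voxel size are assumptions for illustration):

```python
import numpy as np

def eye_averaged_sar(sigma, e_rms, rho, voxel_volume):
    """Mass-averaged SAR over the eye region.

    sigma : conductivity per voxel (S/m)
    e_rms : RMS electric-field magnitude per voxel (V/m)
    rho   : tissue density per voxel (kg/m^3)
    voxel_volume : volume of one voxel (m^3)

    Local SAR in each voxel is sigma * |E|^2 / rho (W/kg); the regional
    average weights each voxel by its mass rho * voxel_volume, i.e. it
    equals total absorbed power divided by total mass.
    """
    sigma, e_rms, rho = map(np.asarray, (sigma, e_rms, rho))
    local_sar = sigma * e_rms**2 / rho      # W/kg per voxel
    mass = rho * voxel_volume               # kg per voxel
    return float(np.sum(local_sar * mass) / np.sum(mass))
```

    The peak 10 g average SAR differs only in the averaging region: instead of the whole eye, it is the maximum over cubical subregions containing 10 g of tissue.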

  6. Different but complementary roles of action and gaze in action observation priming: Insights from eye- and motion-tracking measures

    Directory of Open Access Journals (Sweden)

    Clément Letesson

    2015-05-01

    Full Text Available Action priming following action observation is thought to be caused by the observed action kinematics being represented in the same brain areas as those used for action execution. But, action priming can also be explained by shared goal representations, with compatibility between observation of the agent’s gaze and the intended action of the observer. To assess the contribution of action kinematics and eye gaze cues in the prediction of an agent’s action goal and action priming, participants observed actions where the availability of both cues was manipulated. Action observation was followed by action execution, and the congruency between the target of the agent’s and observer’s actions, and the congruency between the observed and executed action spatial location were manipulated. Eye movements were recorded during the observation phase, and the action priming was assessed using motion analysis. The results showed that the observation of gaze information influenced the observer’s prediction speed to attend to the target, and that observation of action kinematic information influenced the accuracy of these predictions. Motion analysis results showed that observed action cues alone primed both spatial incongruent and object congruent actions, consistent with the idea that the prime effect was driven by similarity between goals and kinematics. The observation of action and eye gaze cues together induced a prime effect complementarily sensitive to object and spatial congruency. While observation of the agent’s action kinematics triggered an object-centered and kinematic-centered action representation, independently, the complementary observation of eye gaze triggered a more fine-grained representation illustrating a specification of action kinematics towards the selected goal. Even though both cues differentially contributed to action priming, their complementary integration led to a more refined pattern of action priming.

  7. Gaze-based interaction with public displays using off-the-shelf components

    DEFF Research Database (Denmark)

    San Agustin, Javier; Hansen, John Paulin; Tall, Martin Henrik

    Eye gaze can be used to interact with high-density information presented on large displays. We have built a system employing off-the-shelf hardware components and open-source gaze tracking software that enables users to interact with an interface displayed on a 55” screen using their eye movement...

  8. Toddlers' gaze following through attention modulation : Intention is in the eye of the beholder

    NARCIS (Netherlands)

    de Bordes, Pieter F.; Cox, Ralf F. A.; Hasselman, Fred; Cillessen, Antonius H. N.

    2013-01-01

    We investigated 20-month-olds' (N = 56) gaze following by presenting toddlers with a female model that displayed either ostensive or no ostensive cues before shifting her gaze laterally toward an object. The results indicated that toddlers reliably followed the model's gaze redirection after mutual

  9. From the eyes and the heart: a novel eye-gaze metric that predicts video preferences of a large audience.

    Directory of Open Access Journals (Sweden)

    Christoforos Christoforou

    2015-05-01

    Full Text Available Eye-tracking has been extensively used to quantify audience preferences in the context of marketing and advertising research, primarily in methodologies involving static images or stimuli (i.e. advertising, shelf testing, and website usability). However, these methodologies do not generalize to narrative-based video stimuli where a specific storyline is meant to be communicated to the audience. In this paper, a novel metric based on eye-gaze dispersion (both within and across viewings) that quantifies the impact of narrative-based video stimuli to the preferences of large audiences is presented. The metric is validated in predicting the performance of video advertisements aired during the 2014 Super Bowl final. In particular, the metric is shown to explain 70% of the variance in likeability scores of the 2014 Super Bowl ads as measured by the USA TODAY Ad Meter. In addition, by comparing the proposed metric with Heart Rate Variability (HRV) indices, we have associated the metric with biological processes relating to attention allocation. The underlying idea behind the proposed metric suggests a shift in perspective when it comes to evaluating narrative-based video stimuli. In particular, it suggests that audience preferences on video are modulated by the level of viewer's lack of attention allocation. The proposed metric can be calculated on any narrative-based video stimuli (i.e. movie, narrative content, emotional content, etc.), and thus has the potential to facilitate the use of such stimuli in several contexts: prediction of audience preferences of movies, quantitative assessment of entertainment pieces, prediction of the impact of movie trailers, identification of group and individual differences in the study of attention-deficit disorders, and the study of desensitization to media violence.

  10. From the eyes and the heart: a novel eye-gaze metric that predicts video preferences of a large audience.

    Science.gov (United States)

    Christoforou, Christoforos; Christou-Champi, Spyros; Constantinidou, Fofi; Theodorou, Maria

    2015-01-01

Eye-tracking has been extensively used to quantify audience preferences in the context of marketing and advertising research, primarily in methodologies involving static images or stimuli (i.e., advertising, shelf testing, and website usability). However, these methodologies do not generalize to narrative-based video stimuli where a specific storyline is meant to be communicated to the audience. In this paper, a novel metric based on eye-gaze dispersion (both within and across viewings) that quantifies the impact of narrative-based video stimuli on the preferences of large audiences is presented. The metric is validated in predicting the performance of video advertisements aired during the 2014 Super Bowl final. In particular, the metric is shown to explain 70% of the variance in likeability scores of the 2014 Super Bowl ads as measured by the USA TODAY Ad-Meter. In addition, by comparing the proposed metric with Heart Rate Variability (HRV) indices, we have associated the metric with biological processes relating to attention allocation. The underlying idea behind the proposed metric suggests a shift in perspective when it comes to evaluating narrative-based video stimuli. In particular, it suggests that audience preferences on video are modulated by the level of viewers' lack of attention allocation. The proposed metric can be calculated on any narrative-based video stimuli (i.e., movie, narrative content, emotional content, etc.), and thus has the potential to facilitate the use of such stimuli in several contexts: prediction of audience preferences of movies, quantitative assessment of entertainment pieces, prediction of the impact of movie trailers, identification of group and individual differences in the study of attention-deficit disorders, and the study of desensitization to media violence.
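
A minimal sketch of how such a dispersion metric could be computed, assuming dispersion is taken as the mean distance of viewers' gaze points from the per-frame centroid; the paper's exact within- and across-viewing formulation may differ:

```python
import numpy as np

def gaze_dispersion(gaze):
    """Across-viewer gaze dispersion, averaged over frames.

    gaze: array of shape (n_frames, n_viewers, 2) holding the (x, y)
    gaze position of every viewer on every frame. Dispersion per frame
    is the mean distance of viewers from that frame's gaze centroid;
    low values mean the narrative is holding everyone's attention at
    the same place.
    """
    gaze = np.asarray(gaze, dtype=float)
    centroids = gaze.mean(axis=1, keepdims=True)      # (n_frames, 1, 2)
    dists = np.linalg.norm(gaze - centroids, axis=2)  # (n_frames, n_viewers)
    return float(dists.mean())

rng = np.random.default_rng(0)
tight = 0.5 + 0.01 * rng.normal(size=(30, 12, 2))   # everyone looks near center
spread = rng.uniform(0, 1, size=(30, 12, 2))        # gaze scattered
assert gaze_dispersion(tight) < gaze_dispersion(spread)
```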

  11. The role of emotion in learning trustworthiness from eye-gaze: Evidence from facial electromyography.

    Science.gov (United States)

    Manssuer, Luis R; Pawling, Ralph; Hayes, Amy E; Tipper, Steven P

    2016-01-01

    Gaze direction can be used to rapidly and reflexively lead or mislead others' attention as to the location of important stimuli. When perception of gaze direction is congruent with the location of a target, responses are faster compared to when incongruent. Faces that consistently gaze congruently are also judged more trustworthy than faces that consistently gaze incongruently. However, it's unclear how gaze-cues elicit changes in trust. We measured facial electromyography (EMG) during an identity-contingent gaze-cueing task to examine whether embodied emotional reactions to gaze-cues mediate trust learning. Gaze-cueing effects were found to be equivalent regardless of whether participants showed learning of trust in the expected direction or did not. In contrast, we found distinctly different patterns of EMG activity in these two populations. In a further experiment we showed the learning effects were specific to viewing faces, as no changes in liking were detected when viewing arrows that evoked similar attentional orienting responses. These findings implicate embodied emotion in learning trust from identity-contingent gaze-cueing, possibly due to the social value of shared attention or deception rather than domain-general attentional orienting.

  12. Computing eye gaze metrics for the automatic assessment of radiographer performance during X-ray image interpretation.

    Science.gov (United States)

    McLaughlin, Laura; Bond, Raymond; Hughes, Ciara; McConnell, Jonathan; McFadden, Sonyia

    2017-09-01

    To investigate image interpretation performance by diagnostic radiography students, diagnostic radiographers and reporting radiographers by computing eye gaze metrics using eye tracking technology. Three groups of participants were studied during their interpretation of 8 digital radiographic images including the axial and appendicular skeleton, and chest (prevalence of normal images was 12.5%). A total of 464 image interpretations were collected. Participants consisted of 21 radiography students, 19 qualified radiographers and 18 qualified reporting radiographers who were further qualified to report on the musculoskeletal (MSK) system. Eye tracking data was collected using the Tobii X60 eye tracker and subsequently eye gaze metrics were computed. Voice recordings, confidence levels and diagnoses provided a clear demonstration of the image interpretation and the cognitive processes undertaken by each participant. A questionnaire afforded the participants an opportunity to offer information on their experience in image interpretation and their opinion on the eye tracking technology. Reporting radiographers demonstrated a 15% greater accuracy rate (p≤0.001), were more confident (p≤0.001) and took a mean of 2.4s longer to clinically decide on all features compared to students. Reporting radiographers also had a 15% greater accuracy rate (p≤0.001), were more confident (p≤0.001) and took longer to clinically decide on an image diagnosis (p=0.02) than radiographers. Reporting radiographers had a greater mean fixation duration (p=0.01), mean fixation count (p=0.04) and mean visit count (p=0.04) within the areas of pathology compared to students. Eye tracking patterns, presented within heat maps, were a good reflection of group expertise and search strategies. Eye gaze metrics such as time to first fixate, fixation count, fixation duration and visit count within the areas of pathology were indicative of the radiographer's competency. The accuracy and confidence of
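
The area-of-interest metrics named above (time to first fixate, fixation count, fixation duration, visit count) can be computed from a fixation list along these lines; the rectangular AOI and the data layout are illustrative assumptions, not the Tobii export format:

```python
def aoi_metrics(fixations, aoi):
    """Basic gaze metrics for one area of interest (AOI).

    fixations: chronological list of dicts with keys 'x', 'y'
               (fixation position) and 'start', 'end' (time in ms).
    aoi: rectangle (x_min, y_min, x_max, y_max), e.g. drawn around
         an area of pathology on the radiograph.
    """
    x0, y0, x1, y1 = aoi
    in_aoi = lambda f: x0 <= f['x'] <= x1 and y0 <= f['y'] <= y1
    inside = [f for f in fixations if in_aoi(f)]
    visits, prev = 0, False
    for f in fixations:                 # a visit = one entry into the AOI
        cur = in_aoi(f)
        if cur and not prev:
            visits += 1
        prev = cur
    return {
        'time_to_first_fixate_ms': inside[0]['start'] if inside else None,
        'fixation_count': len(inside),
        'fixation_duration_ms': sum(f['end'] - f['start'] for f in inside),
        'visit_count': visits,
    }

fixes = [
    {'x': 10, 'y': 10, 'start': 0,   'end': 180},   # outside the AOI
    {'x': 55, 'y': 60, 'start': 200, 'end': 520},   # inside
    {'x': 58, 'y': 62, 'start': 540, 'end': 700},   # inside (same visit)
]
m = aoi_metrics(fixes, aoi=(50, 50, 70, 70))
# m == {'time_to_first_fixate_ms': 200, 'fixation_count': 2,
#       'fixation_duration_ms': 480, 'visit_count': 1}
```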

  13. Recognition of Emotion from Facial Expressions with Direct or Averted Eye Gaze and Varying Expression Intensities in Children with Autism Disorder and Typically Developing Children

    Directory of Open Access Journals (Sweden)

    Dina Tell

    2014-01-01

    Full Text Available Eye gaze direction and expression intensity effects on emotion recognition in children with autism disorder and typically developing children were investigated. Children with autism disorder and typically developing children identified happy and angry expressions equally well. Children with autism disorder, however, were less accurate in identifying fear expressions across intensities and eye gaze directions. Children with autism disorder rated expressions with direct eyes, and 50% expressions, as more intense than typically developing children. A trend was also found for sad expressions, as children with autism disorder were less accurate in recognizing sadness at 100% intensity with direct eyes than typically developing children. Although the present research showed that children with autism disorder are sensitive to eye gaze direction, impairments in the recognition of fear, and possibly sadness, exist. Furthermore, children with autism disorder and typically developing children perceive the intensity of emotional expressions differently.

  14. A new high-speed visual stimulation method for gaze-contingent eye movement and brain activity studies

    Directory of Open Access Journals (Sweden)

Fabio Richlan

    2013-07-01

Full Text Available Approaches using eye movements as markers of ongoing brain activity to investigate perceptual and cognitive processes were able to implement highly sophisticated paradigms driven by eye movement recordings. Crucially, these paradigms involve display changes that have to occur during the time of saccadic blindness, when the subject is unaware of the change. Therefore, a combination of high-speed eye tracking and high-speed visual stimulation is required in these paradigms. For combined eye movement and brain activity studies (e.g., fMRI, EEG, MEG), fast and exact timing of display changes is especially important, because of the high susceptibility of these methods to visual stimulation. Eye tracking systems already achieve sampling rates up to 2000 Hz, but recent LCD technologies for computer screens reduced the temporal resolution to mostly 60 Hz, which is too slow for gaze-contingent display changes. We developed a high-speed video projection system, which is capable of reliably delivering display changes within the time frame of < 5 ms. This could not be achieved even with the fastest CRT monitors available (< 16 ms). The present video projection system facilitates the realization of cutting-edge eye movement research requiring reliable high-speed visual stimulation (e.g., gaze-contingent display changes, short-time presentation, masked priming). Moreover, this system can be used for fast visual presentation in order to assess brain activity using various methods, such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). The latter technique was previously excluded from high-speed visual stimulation, because it is not possible to operate conventional CRT monitors in the strong magnetic field of an MRI scanner. Therefore, the present video projection system offers new possibilities for studying eye movement-related brain activity using a combination of eye tracking and fMRI.

  15. Limitations of gaze transfer: without visual context, eye movements do not help to coordinate joint action, whereas mouse movements do.

    Science.gov (United States)

    Müller, Romy; Helmert, Jens R; Pannasch, Sebastian

    2014-10-01

    Remote cooperation can be improved by transferring the gaze of one participant to the other. However, based on a partner's gaze, an interpretation of his communicative intention can be difficult. Thus, gaze transfer has been inferior to mouse transfer in remote spatial referencing tasks where locations had to be pointed out explicitly. Given that eye movements serve as an indicator of visual attention, it remains to be investigated whether gaze and mouse transfer differentially affect the coordination of joint action when the situation demands an understanding of the partner's search strategies. In the present study, a gaze or mouse cursor was transferred from a searcher to an assistant in a hierarchical decision task. The assistant could use this cursor to guide his movement of a window which continuously opened up the display parts the searcher needed to find the right solution. In this context, we investigated how the ease of using gaze transfer depended on whether a link could be established between the partner's eye movements and the objects he was looking at. Therefore, in addition to the searcher's cursor, the assistant either saw the positions of these objects or only a grey background. When the objects were visible, performance and the number of spoken words were similar for gaze and mouse transfer. However, without them, gaze transfer resulted in longer solution times and more verbal effort as participants relied more strongly on speech to coordinate the window movement. Moreover, an analysis of the spatio-temporal coupling of the transmitted cursor and the window indicated that when no visual object information was available, assistants confidently followed the searcher's mouse but not his gaze cursor. Once again, the results highlight the importance of carefully considering task characteristics when applying gaze transfer in remote cooperation. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Xingwei An

Full Text Available For Brain-Computer Interface (BCI) systems that are designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in the gaze-independent event-related potential (ERP) speller and to enhance BCI performance, we designed a visual-auditory speller. We investigate the possibility to enhance stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities are proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than unimodal paradigms, without sacrificing spelling performance. Besides, shorter latencies, lower amplitudes, as well as a shift of the temporal and spatial distribution of discriminative information were observed for the Combined-Speller. These results are important and motivate future studies to investigate the reasons for these differences. For the more innovative and demanding Parallel-Speller, where the auditory and visual domains are independent from each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level < 3%), showing a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel. It brings new insight into truly multisensory stimulus paradigms. Novel approaches for combining two sensory modalities were designed here, which are valuable for the development of ERP-based BCI paradigms.
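
For context, speller performance figures like the 87.7% accuracy and 1.65 symbols/min reported here are often summarized as an information transfer rate. A sketch using the standard Wolpaw formula, which the abstract does not itself mention, so this is illustrative only:

```python
import math

def wolpaw_itr_bits_per_selection(n_classes, accuracy):
    """Wolpaw information transfer rate per selection, in bits.

    A common BCI figure of merit: log2(N) + P*log2(P)
    + (1-P)*log2((1-P)/(N-1)), clipped to 0 at chance level.
    """
    p, n = accuracy, n_classes
    if p <= 1.0 / n:
        return 0.0
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits

# e.g. a hypothetical 36-symbol speller at the reported online accuracy
bits = wolpaw_itr_bits_per_selection(36, 0.877)
itr_bits_per_min = bits * 1.65   # 1.65 selections (symbols) per minute
```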

  17. Loneliness and the social monitoring system: Emotion recognition and eye gaze in a real-life conversation.

    Science.gov (United States)

    Lodder, Gerine M A; Scholte, Ron H J; Goossens, Luc; Engels, Rutger C M E; Verhagen, Maaike

    2016-02-01

    Based on the belongingness regulation theory (Gardner et al., 2005, Pers. Soc. Psychol. Bull., 31, 1549), this study focuses on the relationship between loneliness and social monitoring. Specifically, we examined whether loneliness relates to performance on three emotion recognition tasks and whether lonely individuals show increased gazing towards their conversation partner's faces in a real-life conversation. Study 1 examined 170 college students (Mage = 19.26; SD = 1.21) who completed an emotion recognition task with dynamic stimuli (morph task) and a micro(-emotion) expression recognition task. Study 2 examined 130 college students (Mage = 19.33; SD = 2.00) who completed the Reading the Mind in the Eyes Test and who had a conversation with an unfamiliar peer while their gaze direction was videotaped. In both studies, loneliness was measured using the UCLA Loneliness Scale version 3 (Russell, 1996, J. Pers. Assess., 66, 20). The results showed that loneliness was unrelated to emotion recognition on all emotion recognition tasks, but that it was related to increased gaze towards their conversation partner's faces. Implications for the belongingness regulation system of lonely individuals are discussed.

  18. Noise Challenges in Monomodal Gaze Interaction

    DEFF Research Database (Denmark)

    Skovsgaard, Henrik

    Modern graphical user interfaces (GUIs) are designed with able-bodied users in mind. Operating these interfaces can be impossible for some users who are unable to control the conventional mouse and keyboard. An eye tracking system offers possibilities for independent use and improved quality...... of life via dedicated interface tools especially tailored to the users’ needs (e.g., interaction, communication, e-mailing, web browsing and entertainment). Much effort has been put towards robustness, accuracy and precision of modern eye-tracking systems and there are many available on the market. Even...... stream are most wanted. The work in this thesis presents three contributions that may advance the use of low-cost monomodal gaze tracking and research in the field: - An assessment of a low-cost open-source gaze tracker and two eye tracking systems through an accuracy and precision test and a performance...

  19. A Wide-View Parallax-Free Eye-Mark Recorder with a Hyperboloidal Half-Silvered Mirror and Appearance-Based Gaze Estimation.

    Science.gov (United States)

    Mori, Hiroki; Sumiya, Erika; Mashita, Tomohiro; Kiyokawa, Kiyoshi; Takemura, Haruo

    2011-07-01

In this paper, we propose a wide-view parallax-free eye-mark recorder with a hyperboloidal half-silvered mirror and a gaze estimation method suitable for the device. Our eye-mark recorder provides a wide field-of-view video recording of the user's exact view by positioning the focal point of the mirror at the user's viewpoint. The vertical angle of view of the prototype is 122 degrees (elevation and depression angles are 38 and 84 degrees, respectively) and its horizontal view angle is 116 degrees (nasal and temporal view angles are 38 and 78 degrees, respectively). We implemented and evaluated a gaze estimation method for our eye-mark recorder. We use an appearance-based approach for our eye-mark recorder to support a wide field-of-view. We apply principal component analysis (PCA) and multiple regression analysis (MRA) to determine the relationship between the captured images and their corresponding gaze points. Experimental results verify that our eye-mark recorder successfully captures a wide field-of-view of a user and estimates gaze direction with an angular accuracy of around 2 to 4 degrees.
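
The PCA-plus-MRA pipeline can be sketched as follows; the component count, preprocessing and synthetic data are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def fit_gaze_estimator(train_imgs, train_gaze, n_components=20):
    """Appearance-based gaze estimation: PCA, then multiple regression.

    train_imgs: (n_samples, n_pixels) flattened eye images.
    train_gaze: (n_samples, 2) gaze targets (e.g. azimuth, elevation).
    Returns a predict(images) function mapping images to gaze.
    """
    mean = train_imgs.mean(axis=0)
    X = train_imgs - mean
    # PCA via SVD: rows of Vt are the principal components
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_components].T                    # (n_pixels, k)
    scores = X @ basis                             # (n_samples, k)
    # Multiple regression with intercept: gaze ~ [1, scores] @ W
    A = np.hstack([np.ones((len(scores), 1)), scores])
    W, *_ = np.linalg.lstsq(A, train_gaze, rcond=None)

    def predict(imgs):
        s = (imgs - mean) @ basis
        return np.hstack([np.ones((len(s), 1)), s]) @ W
    return predict

# Synthetic check: images generated from a 2-D latent gaze state
rng = np.random.default_rng(1)
P = rng.normal(size=(2, 50))
latent = rng.normal(size=(200, 2))
imgs = latent @ P + 1e-3 * rng.normal(size=(200, 50))
gaze = latent @ np.array([[2.0, 0.5], [-1.0, 1.5]])
predict = fit_gaze_estimator(imgs, gaze, n_components=5)
```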

  20. Stabilization of gaze during circular locomotion in darkness. II. Contribution of velocity storage to compensatory eye and head nystagmus in the running monkey

    Science.gov (United States)

    Solomon, D.; Cohen, B.

    1992-01-01

    1. Yaw eye in head (Eh) and head on body velocities (Hb) were measured in two monkeys that ran around the perimeter of a circular platform in darkness. The platform was stationary or could be counterrotated to reduce body velocity in space (Bs) while increasing gait velocity on the platform (Bp). The animals were also rotated while seated in a primate chair at eccentric locations to provide linear and angular accelerations similar to those experienced while running. 2. Both animals had head and eye nystagmus while running in darkness during which slow phase gaze velocity on the body (Gb) partially compensated for body velocity in space (Bs). The eyes, driven by the vestibuloocular reflex (VOR), supplied high-frequency characteristics, bringing Gb up to compensatory levels at the beginning and end of the slow phases. The head provided substantial gaze compensation during the slow phases, probably through the vestibulocollic reflex (VCR). Synchronous eye and head quick phases moved gaze in the direction of running. Head movements occurred consistently only when animals were running. This indicates that active body and limb motion may be essential for inducing the head-eye gaze synergy. 3. Gaze compensation was good when running in both directions in one animal and in one direction in the other animal. The animals had long VOR time constants in these directions. The VOR time constant was short to one side in one animal, and it had poor gaze compensation in this direction. Postlocomotory nystagmus was weaker after running in directions with a long VOR time constant than when the animals were passively rotated in darkness. We infer that velocity storage in the vestibular system had been activated to produce continuous Eh and Hb during running and to counteract postrotatory afterresponses. 4. Continuous compensatory gaze nystagmus was not produced by passive eccentric rotation with the head stabilized or free. This indicates that an aspect of active locomotion, most

  1. Gaze Duration Biases for Colours in Combination with Dissonant and Consonant Sounds: A Comparative Eye-Tracking Study with Orangutans

    Science.gov (United States)

    Mühlenbeck, Cordelia; Liebal, Katja; Pritsch, Carla; Jacobsen, Thomas

    2015-01-01

    Research on colour preferences in humans and non-human primates suggests similar patterns of biases for and avoidance of specific colours, indicating that these colours are connected to a psychological reaction. Similarly, in the acoustic domain, approach reactions to consonant sounds (considered as positive) and avoidance reactions to dissonant sounds (considered as negative) have been found in human adults and children, and it has been demonstrated that non-human primates are able to discriminate between consonant and dissonant sounds. Yet it remains unclear whether the visual and acoustic approach–avoidance patterns remain consistent when both types of stimuli are combined, how they relate to and influence each other, and whether these are similar for humans and other primates. Therefore, to investigate whether gaze duration biases for colours are similar across primates and whether reactions to consonant and dissonant sounds cumulate with reactions to specific colours, we conducted an eye-tracking study in which we compared humans with one species of great apes, the orangutans. We presented four different colours either in isolation or in combination with consonant and dissonant sounds. We hypothesised that the viewing time for specific colours should be influenced by dissonant sounds and that previously existing avoidance behaviours with regard to colours should be intensified, reflecting their association with negative acoustic information. The results showed that the humans had constant gaze durations which were independent of the auditory stimulus, with a clear avoidance of yellow. In contrast, the orangutans did not show any clear gaze duration bias or avoidance of colours, and they were also not influenced by the auditory stimuli. In conclusion, our findings only partially support the previously identified pattern of biases for and avoidance of specific colours in humans and do not confirm such a pattern for orangutans. PMID:26466351

  2. A novel EOG/EEG hybrid human-machine interface adopting eye movements and ERPs: application to robot control.

    Science.gov (United States)

    Ma, Jiaxin; Zhang, Yu; Cichocki, Andrzej; Matsuno, Fumitoshi

    2015-03-01

This study presents a novel human-machine interface (HMI) based on both electrooculography (EOG) and electroencephalography (EEG). This hybrid interface works in two modes: an EOG mode recognizes eye movements such as blinks, and an EEG mode detects event-related potentials (ERPs) like P300. While both eye movements and ERPs have been separately used for implementing assistive interfaces, which help patients with motor disabilities in performing daily tasks, the proposed hybrid interface integrates them together. In this way, the eye movements and ERPs complement each other, so the interface can provide better efficiency and a wider scope of application. In this study, we design a threshold algorithm that can recognize four kinds of eye movements including blink, wink, gaze, and frown. In addition, an oddball paradigm with stimuli of inverted faces is used to evoke multiple ERP components including P300, N170, and VPP. To verify the effectiveness of the proposed system, two different online experiments are carried out. One is to control a multifunctional humanoid robot, and the other is to control four mobile robots. In both experiments, the subjects can complete tasks effectively by using the proposed interface, and the best completion times are relatively short and very close to those achieved by manual operation.
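
A toy version of such a threshold rule over EOG amplitudes might look like this; the channel layout and threshold values are invented for illustration, and frown detection (which likely needs different features) is omitted:

```python
def classify_eog(left_v, right_v, horiz, v_th=150.0, h_th=150.0):
    """Toy threshold rule over EOG amplitudes (microvolts).

    Hypothetical rule: a large deflection on both vertical channels
    is a blink, on only one channel a wink, and a large horizontal
    deflection a lateral gaze shift; otherwise no event.
    """
    if abs(left_v) > v_th and abs(right_v) > v_th:
        return 'blink'
    if abs(left_v) > v_th or abs(right_v) > v_th:
        return 'wink'
    if abs(horiz) > h_th:
        return 'gaze-left' if horiz < 0 else 'gaze-right'
    return 'none'

assert classify_eog(300, 280, 10) == 'blink'
assert classify_eog(300, 20, 10) == 'wink'
assert classify_eog(20, 30, -200) == 'gaze-left'
```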

  3. Effect of narrowing the base of support on the gait, gaze and quiet eye of elite ballet dancers and controls.

    Science.gov (United States)

    Panchuk, Derek; Vickers, Joan N

    2011-08-01

We determined the gaze and stepping behaviours of elite ballet dancers and controls as they walked normally and along progressively narrower 3-m lines (10.0, 2.5 cm). The ballet dancers delayed the first step and then stepped more quickly through the approach area and onto the lines, which they exited more slowly than the controls, who stepped immediately but then slowed their gait to navigate the line, which they exited faster. Contrary to predictions, the ballet group did not step more precisely, perhaps due to the unique anatomical requirements of ballet dance and/or due to releasing the degrees of freedom under their feet as they fixated ahead more than the controls. The ballet group used significantly fewer fixations of longer duration, and their final quiet eye (QE) duration prior to stepping on the line was significantly longer (2,353.39 ms) than the controls' (1,327.64 ms). The control group favoured a proximal gaze strategy allocating 73.33% of their QE fixations to the line/off the line and 26.66% to the exit/visual straight ahead (VSA), while the ballet group favoured a 'look-ahead' strategy allocating 55.49% of their QE fixations to the exit/VSA and 44.51% on the line/off the line. The results are discussed in the light of the development of expertise and the enhanced role of fixations and visual attention when tasks become more constrained.

  4. Mother-infant mutual eye gaze supports emotion regulation in infancy during the Still-Face paradigm.

    Science.gov (United States)

    MacLean, Peggy C; Rynes, Kristina N; Aragón, Crystal; Caprihan, Arvind; Phillips, John P; Lowe, Jean R

    2014-11-01

This study was designed to examine the sequential relationship between mother-infant synchrony and infant affect using multilevel modeling during the Still-Face paradigm. We also examined self-regulatory behaviors that infants use during the Still-Face paradigm to modulate their affect, particularly during stressors where their mothers are not available to help them co-regulate. There were 84 mother-infant dyads with healthy, full-term, 4-month-old infants. Second-by-second coding of infant self-regulation and infant affect was done, in addition to mother-infant mutual eye gaze. Using multilevel modeling, we found that infant affect became more positive when mutual gaze had occurred the previous second, suggesting that the experience of synchronicity was associated with observable shifts in affect. We also found a positive association between self-regulatory behaviors and increases in positive affect only during the Still-Face episode (episode 2). Our study provides support for the role of mother-infant synchronicity in emotion regulation as well as support for the role of self-regulatory behaviors in emotion regulation, which can have important implications for intervention.

  5. A Novel Eye Gaze Tracking Method Based on Saliency Maps

    Institute of Scientific and Technical Information of China (English)

    黄生辉; 宋鸿陟; 吴广发; 司国东; 彭红星

    2015-01-01

To address the shortcomings of existing eye gaze tracking systems, namely complex hardware and tedious calibration procedures, a novel eye gaze tracking method based on saliency maps is proposed. A pupil-corneal reflection vector is constructed from the pupil center and the center of the glint produced on the cornea by an infrared (IR) light source; this vector is then used as a visual feature in a reconstructed saliency-map-based gaze tracking algorithm. Experimental results demonstrate that the proposed method not only alleviates the tedious calibration of eye gaze tracking systems but also yields some improvement in accuracy and robustness, providing a feasible low-cost solution for gaze tracking research in human-computer interaction.
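
The pupil-corneal reflection feature described here reduces to a simple image-space difference; a minimal sketch, with illustrative coordinates:

```python
import numpy as np

def pcr_vector(pupil_center, glint_center):
    """Pupil-corneal reflection (PCR) vector in image coordinates.

    The gaze feature is the offset of the pupil center from the
    corneal glint produced by the IR source; being a difference of
    two image points, it is largely insensitive to small head
    translations, which makes it a convenient input feature for
    saliency-map-based implicit calibration.
    """
    return np.asarray(pupil_center, float) - np.asarray(glint_center, float)

v = pcr_vector((312.4, 221.0), (305.1, 218.5))   # pixel coordinates
# v ≈ [7.3, 2.5]
```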

  6. The late positive potential indexes a role for emotion during learning of trust from eye-gaze cues.

    OpenAIRE

    Manssuer, Luis; Roberts, Mark; Tipper, Steven Paul

    2015-01-01

Gaze direction perception triggers rapid visuospatial orienting to the location observed by others. When this is congruent with the location of a target, reaction times are faster than when incongruent. Functional magnetic resonance imaging studies suggest that the non-joint attention induced by incongruent cues is experienced as more emotionally negative, and this could relate to less favorable trust judgments of the faces when gaze cues are contingent on identity. Here, we provide further...

  7. Beyond eye gaze: What else can eyetracking reveal about cognition and cognitive development?

    Science.gov (United States)

    Eckstein, Maria K; Guerra-Carrillo, Belén; Miller Singley, Alison T; Bunge, Silvia A

    2016-11-11

    This review provides an introduction to two eyetracking measures that can be used to study cognitive development and plasticity: pupil dilation and spontaneous blink rate. We begin by outlining the rich history of gaze analysis, which can reveal the current focus of attention as well as cognitive strategies. We then turn to the two lesser-utilized ocular measures. Pupil dilation is modulated by the brain's locus coeruleus-norepinephrine system, which controls physiological arousal and attention, and has been used as a measure of subjective task difficulty, mental effort, and neural gain. Spontaneous eyeblink rate correlates with levels of dopamine in the central nervous system, and can reveal processes underlying learning and goal-directed behavior. Taken together, gaze, pupil dilation, and blink rate are three non-invasive and complementary measures of cognition with high temporal resolution and well-understood neural foundations. Here we review the neural foundations of pupil dilation and blink rate, provide examples of their usage, describe analytic methods and methodological considerations, and discuss their potential for research on learning, cognitive development, and plasticity.

  8. Alternative Indices of Performance: An Exploration of Eye Gaze Metrics in a Visual Puzzle Task

    Science.gov (United States)

    2014-07-01

Subject terms: workload, eye tracking, eye movements, nonlinear dynamics. Only fragments of the report abstract are recoverable: eye gaze metrics may carry complementary information about operator state, and qualitative shifts in human performance (e.g., from walking to running) can be measured by the variability patterns in movement.

  9. An eye-tracking method to reveal the link between gazing patterns and pragmatic abilities in high functioning autism spectrum disorders.

    Science.gov (United States)

    Grynszpan, Ouriel; Nadel, Jacqueline

    2014-01-01

The present study illustrates the potential advantages of an eye-tracking method for exploring the association between visual scanning of faces and inferences of mental states. Participants watched short videos involving social interactions and had to explain what they had seen. The number of cognition verbs (e.g., think, believe, know) in their answers was counted. Given the possible use of peripheral vision that could confound eye-tracking measures, we added a condition using a gaze-contingent viewing window: the entire visual display is blurred, except for an area that moves with the participant's gaze. Eleven typical adults and eleven high functioning adults with Autism Spectrum Disorders (ASD) were recruited. The condition employing the viewing window yielded strong correlations between the average duration of fixations, the ratio of cognition verbs and standard measures of social disabilities.

  10. An eye-tracking method to reveal the link between gazing patterns and pragmatic abilities in high functioning autism spectrum disorders

    Directory of Open Access Journals (Sweden)

Ouriel Grynszpan

    2015-01-01

Full Text Available The present study illustrates the potential advantages of an eye-tracking method for exploring the association between visual scanning of faces and inferences of mental states. Participants watched short videos involving social interactions and had to explain what they had seen. The number of cognition verbs (e.g., think, believe, know) in their answers was counted. Given the possible use of peripheral vision that could confound eye-tracking measures, we added a condition using a gaze-contingent viewing window: the entire visual display is blurred, except for an area that moves with the participant's gaze. Eleven typical adults and eleven high functioning adults with ASD were recruited. The condition employing the viewing window yielded strong correlations between the average duration of fixations, the ratio of cognition verbs and standard measures of social disabilities.

  11. Sonification of in-vehicle interface reduces gaze movements under dual-task condition.

    Science.gov (United States)

    Tardieu, Julien; Misdariis, Nicolas; Langlois, Sabine; Gaillard, Pascal; Lemercier, Céline

    2015-09-01

    In-car infotainment systems (ICIS) often degrade driving performances since they divert the driver's gaze from the driving scene. Sonification of hierarchical menus (such as those found in most ICIS) is examined in this paper as one possible solution to reduce gaze movements towards the visual display. In a dual-task experiment in the laboratory, 46 participants were requested to prioritize a primary task (a continuous target detection task) and to simultaneously navigate in a realistic mock-up of an ICIS, either sonified or not. Results indicated that sonification significantly increased the time spent looking at the primary task, and significantly decreased the number and the duration of gaze saccades towards the ICIS. In other words, the sonified ICIS could be used nearly exclusively by ear. On the other hand, the reaction times in the primary task were increased in both silent and sonified conditions. This study suggests that sonification of secondary tasks while driving could improve the driver's visual attention of the driving scene.

  12. How does the topic of conversation affect verbal exchange and eye gaze? A comparison between typical development and high-functioning autism.

    Science.gov (United States)

    Nadig, Aparna; Lee, Iris; Singh, Leher; Bosshart, Kyle; Ozonoff, Sally

    2010-07-01

    Conversation is a primary area of difficulty for individuals with high-functioning autism (HFA) although they have unimpaired formal language abilities. This likely stems from the unstructured nature of face-to-face conversation as well as the need to coordinate other modes of communication (e.g. eye gaze) with speech. We conducted a quantitative analysis of both verbal exchange and gaze data obtained from conversations between children with HFA and an adult, compared with those of typically developing children matched on language level. We examined a new question: how does speaking about a topic of interest affect reciprocity of verbal exchange and eye gaze? Conversations on generic topics were compared with those on individuals' circumscribed interests, particularly intense interests characteristic of HFA. Two opposing hypotheses were evaluated. Speaking about a topic of interest may improve reciprocity in conversation by increasing participants' motivation and engagement. Alternatively, it could engender more one-sided interaction, given the engrossing nature of circumscribed interests. In their verbal exchanges HFA participants demonstrated decreased reciprocity during the interest topic, evidenced by fewer contingent utterances and more monologue-style speech. Moreover, a measure of stereotyped behaviour and restricted interest symptoms was inversely related to reciprocal verbal exchange. However, both the HFA and comparison groups looked significantly more to their partner's face during the interest than generic topic. Our interpretation of results across modalities is that circumscribed interests led HFA participants to be less adaptive to their partner verbally, but speaking about a highly practiced topic allowed for increased gaze to the partner. The function of this increased gaze to partner may differ for the HFA and comparison groups.

  13. Trends and Techniques in Visual Gaze Analysis

    CERN Document Server

    Stellmach, Sophie; Dachselt, Raimund; Lindley, Craig A

    2010-01-01

    Visualizing gaze data is an effective way to quickly interpret eye tracking results. This paper presents a study investigating the benefits and limitations of visual gaze analysis among eye tracking professionals and researchers. The results were used to create a tool for visual gaze analysis within a Master's project.

  14. Eye Gaze Reveals a Fast, Parallel Extraction of the Syntax of Arithmetic Formulas

    Science.gov (United States)

    Schneider, Elisa; Maruyama, Masaki; Dehaene, Stanislas; Sigman, Mariano

    2012-01-01

    Mathematics shares with language an essential reliance on the human capacity for recursion, permitting the generation of an infinite range of embedded expressions from a finite set of symbols. We studied the role of syntax in arithmetic thinking, a neglected component of numerical cognition, by examining eye movement sequences during the…

  15. Eye Movement Training and Suggested Gaze Strategies in Tunnel Vision - A Randomized and Controlled Pilot Study.

    Science.gov (United States)

    Ivanov, Iliya V; Mackeben, Manfred; Vollmer, Annika; Martus, Peter; Nguyen, Nhung X; Trauzettel-Klosinski, Susanne

    2016-01-01

    Degenerative retinal diseases, especially retinitis pigmentosa (RP), lead to severe peripheral visual field loss (tunnel vision), which impairs mobility. The lack of peripheral information leads to fewer horizontal eye movements and, thus, diminished scanning in RP patients in a natural environment walking task. This randomized controlled study aimed to improve mobility and the dynamic visual field by applying a compensatory Exploratory Saccadic Training (EST). Oculomotor responses during walking and avoiding obstacles in a controlled environment were studied before and after saccade or reading training in 25 RP patients. Eye movements were recorded using a mobile infrared eye tracker (Tobii glasses) that measured a range of spatial and temporal variables. Patients were randomly assigned to two training conditions: Saccade (experimental) and reading (control) training. All subjects who first performed reading training underwent experimental training later (waiting list control group). To assess the effect of training on subjects, we measured performance in the training task and the following outcome variables related to daily life: Response Time (RT) during exploratory saccade training, Percent Preferred Walking Speed (PPWS), the number of collisions with obstacles, eye position variability, fixation duration, and the total number of fixations including the ones in the subjects' blind area of the visual field. In the saccade training group, RTs on average decreased, while the PPWS significantly increased. The improvement persisted, as tested 6 weeks after the end of the training. On average, the eye movement range of RP patients before and after training was similar to that of healthy observers. In both the experimental and reading training groups, we found many fixations outside the subjects' seeing visual field before and after training. The average fixation duration was significantly shorter after the training, but only in the experimental training condition.

  16. I spy with my little eye: Analysis of airline pilots' gaze patterns in a manual instrument flight scenario.

    Science.gov (United States)

    Haslbeck, Andreas; Zhang, Bo

    2017-09-01

    The aim of this study was to analyze pilots' visual scanning in a manual approach and landing scenario. Manual flying skills decline with the increasing use of automation, and this decline particularly affects long-haul pilots, who have only a few opportunities to practice these skills. Airline pilots representing different levels of practice (short-haul vs. long-haul) had to perform a manual raw-data precision approach while their visual scanning was recorded by an eye-tracking device. The analysis of gaze patterns, which are based on predominant saccades, revealed one main group of saccades among long-haul pilots. In contrast, short-haul pilots showed more balanced scanning using two different groups of saccades. Short-haul pilots generally demonstrated better manual flight performance, and within this group one type of scan pattern was found to facilitate the manual landing task more. Long-haul pilots tend to utilize visual scanning behaviors that are inappropriate for the manual ILS landing task. This lack of skills needs to be addressed by providing specific training and more practice. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Co-development of manner and path concepts in language, action, and eye-gaze behavior.

    Science.gov (United States)

    Lohan, Katrin S; Griffiths, Sascha S; Sciutti, Alessandra; Partmann, Tim C; Rohlfing, Katharina J

    2014-07-01

    In order for artificial intelligent systems to interact naturally with human users, they need to be able to learn from human instructions when actions should be imitated. Human tutoring will typically consist of action demonstrations accompanied by speech. In the following, the characteristics of human tutoring during action demonstration will be examined. A special focus will be put on the distinction between two kinds of motion events: path-oriented actions and manner-oriented actions. Such a distinction is inspired by the literature pertaining to cognitive linguistics, which indicates that the human conceptual system can distinguish these two distinct types of motion. These two kinds of actions are described in language by more path-oriented or more manner-oriented utterances. In path-oriented utterances, the source, trajectory, or goal is emphasized, whereas in manner-oriented utterances the medium, velocity, or means of motion are highlighted. We examined a video corpus of adult-child interactions comprising three age groups of children (pre-lexical, early lexical, and lexical) and two different tasks, one emphasizing manner more strongly and one emphasizing path more strongly. We analyzed the language and motion of the caregiver and the gazing behavior of the child to highlight the differences between the tutoring and the acquisition of the manner and path concepts. The results suggest that age is an important factor in the development of these action categories. The analysis of this corpus has also been exploited to develop an intelligent robotic behavior, the tutoring spotter system, able to emulate children's behaviors in a tutoring situation, with the aim of evoking in human subjects a natural and effective behavior in teaching to a robot. The findings related to the development of manner and path concepts have been used to implement new effective feedback strategies in the tutoring spotter system, which should provide improvements in human

  18. Mechanisms of Empathic Behavior in Children with Callous-Unemotional Traits: Eye Gaze and Emotion Recognition

    OpenAIRE

    Delk, Lauren Annabel

    2016-01-01

    The presence of callous-unemotional (CU) traits (e.g., shallow affect, lack of empathy) in children predicts reduced prosocial behavior. Similarly, CU traits relate to emotion recognition deficits, which may be related to deficits in visual attention to the eye region of others. Notably, recognition of others' distress necessarily precedes sympathy, and sympathy is a key predictor in prosocial outcomes. Thus, visual attention and emotion recognition may mediate the relationship between CU tra...

  19. Eye Movement Training and Suggested Gaze Strategies in Tunnel Vision - A Randomized and Controlled Pilot Study.

    Directory of Open Access Journals (Sweden)

    Iliya V Ivanov

    Full Text Available Degenerative retinal diseases, especially retinitis pigmentosa (RP), lead to severe peripheral visual field loss (tunnel vision), which impairs mobility. The lack of peripheral information leads to fewer horizontal eye movements and, thus, diminished scanning in RP patients in a natural environment walking task. This randomized controlled study aimed to improve mobility and the dynamic visual field by applying a compensatory Exploratory Saccadic Training (EST). Oculomotor responses during walking and avoiding obstacles in a controlled environment were studied before and after saccade or reading training in 25 RP patients. Eye movements were recorded using a mobile infrared eye tracker (Tobii glasses) that measured a range of spatial and temporal variables. Patients were randomly assigned to two training conditions: Saccade (experimental) and reading (control) training. All subjects who first performed reading training underwent experimental training later (waiting list control group). To assess the effect of training on subjects, we measured performance in the training task and the following outcome variables related to daily life: Response Time (RT) during exploratory saccade training, Percent Preferred Walking Speed (PPWS), the number of collisions with obstacles, eye position variability, fixation duration, and the total number of fixations including the ones in the subjects' blind area of the visual field. In the saccade training group, RTs on average decreased, while the PPWS significantly increased. The improvement persisted, as tested 6 weeks after the end of the training. On average, the eye movement range of RP patients before and after training was similar to that of healthy observers. In both the experimental and reading training groups, we found many fixations outside the subjects' seeing visual field before and after training. The average fixation duration was significantly shorter after the training, but only in the experimental training

  20. Objects capture perceived gaze direction.

    Science.gov (United States)

    Lobmaier, Janek S; Fischer, Martin H; Schwaninger, Adrian

    2006-01-01

    The interpretation of another person's eye gaze is a key element of social cognition. Previous research has established that this ability develops early in life and is influenced by the person's head orientation, as well as local features of the person's eyes. Here we show that the presence of objects in the attended space also has an impact on gaze interpretation. Eleven normal adults identified the fixation points of photographed faces with a mouse cursor. Their responses were systematically biased toward the locations of nearby objects. This capture of perceived gaze direction probably reflects the attribution of intentionality and has methodological implications for research on gaze perception.

  1. Gaze-controlled Driving

    DEFF Research Database (Denmark)

    Tall, Martin; Alapetite, Alexandre; San Agustin, Javier

    2009-01-01

    We investigate if the gaze (point of regard) can control a remote vehicle driving on a racing track. Five different input devices (on-screen buttons, mouse pointing, a low-cost webcam eye tracker, and two commercial eye tracking systems) provide heading and speed control on the scene view transmitted

  2. Eye Tracking Gaze Estimation Based on Marker Detection

    Institute of Scientific and Technical Information of China (English)

    龚秀锋; 李斌; 邓宏平; 张文聪

    2011-01-01

    Traditional head-mounted eye gaze tracking systems need additional head-position sensors to determine the gaze direction. To address this problem, a new marker-detection-based method for estimating the point of gaze with a head-mounted system is proposed. The markers are detected with computer vision methods, and the spatial relationship between the scene image and the computer screen in the real scene is established from point correspondences between the two views; the point of gaze in the scene image is then mapped to computer screen coordinates. Experimental results show that this method is simple and practical and can estimate the point of gaze in the real scene well, irrespective of the user's head position.
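
    The mapping this record describes, from the scene-camera image to screen coordinates via detected markers, is in essence a planar homography estimated from point correspondences. A minimal sketch under that assumption (the marker coordinates and function names are hypothetical):

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Direct Linear Transform: fit H so that dst ~ H @ src for four
    (or more) point correspondences between two planes."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)      # null-space vector of A
    return H / H[2, 2]

def map_gaze(H, xy):
    """Map a gaze point from scene-image pixels to screen pixels."""
    p = H @ np.array([xy[0], xy[1], 1.0])
    return p[0] / p[2], p[1] / p[2]

# hypothetical marker corners: scene-image pixels -> screen pixels
scene = [(100, 80), (500, 90), (510, 400), (95, 390)]
screen = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]
H = estimate_homography(scene, screen)
```

    With the four screen-corner markers detected in each scene frame, any fixation measured in scene-image coordinates can be converted to screen coordinates regardless of head pose, which is the point of the method.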

  3. Rapid target foraging with reach or gaze: The hand looks further ahead than the eye.

    Science.gov (United States)

    Diamond, Jonathan S; Wolpert, Daniel M; Flanagan, J Randall

    2017-07-01

    Real-world tasks typically consist of a series of target-directed actions and often require choices about which targets to act on and in what order. Such choice behavior can be assessed from an optimal foraging perspective whereby target selection is shaped by a balance between rewards and costs. Here we evaluated such decision-making in a rapid movement foraging task. On a given trial, participants were presented with 15 targets of varying size and value and were instructed to harvest as much reward as possible by either moving a handle to the targets (hand task) or by briefly fixating them (eye task). The short trial duration enabled participants to harvest about half the targets, ensuring that total reward was due to choice behavior. We developed a probabilistic model to predict target-by-target harvesting choices that considered the rewards and movement-related costs (i.e., target distance and size) associated with the current target as well as future targets. In the hand task, in comparison to the eye task, target choice was more strongly influenced by movement-related costs and took into account a greater number of future targets, consistent with the greater costs associated with arm movement. In both tasks, participants exhibited near-optimal behaviour and in a constrained version of the hand task in which choices could only be based on target positions, participants consistently chose among the shortest movement paths. Our results demonstrate that people can rapidly and effectively integrate values and movement-related costs associated with current and future targets when sequentially harvesting targets.
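
    The probabilistic choice behavior described, trading off target value against movement-related cost, can be illustrated with a softmax over utilities; the cost term and parameters below are illustrative stand-ins, not the authors' fitted model:

```python
import numpy as np

def choice_probabilities(values, distances, cost_weight=1.0, beta=1.0):
    """Softmax choice over candidate targets, where each target's
    utility is its reward minus a movement cost proportional to its
    distance; `beta` controls how deterministic the choice is."""
    utility = np.asarray(values, float) - cost_weight * np.asarray(distances, float)
    z = beta * (utility - utility.max())   # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# three remaining targets: two high-value, one low-value; one is far away
vals = [5.0, 5.0, 1.0]
dists = [1.0, 4.0, 1.0]
p = choice_probabilities(vals, dists, cost_weight=1.0, beta=2.0)
```

    A stronger `cost_weight` makes distance dominate the choice, which is how the hand task (with its greater movement costs) differs from the eye task in the study's account.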

  4. Research on Gaze Estimation for Eye Tracking Systems

    Institute of Scientific and Technical Information of China (English)

    金纯; 李娅萍; 高奇; 曾伟

    2016-01-01

    To address the poor robustness of gaze point estimation algorithms in non-contact eye tracking systems, a novel gaze point estimation algorithm based on angle mapping is proposed. First, according to the characteristics of the human eye and the principles of visual imaging, there is an angular mapping between the position of the pupil center relative to the corneal reflection points and the position of the gaze point relative to the infrared light sources, from which an approximate gaze point is estimated. Then, the gaze point error introduced by the curved corneal surface is analyzed on the basis of an eyeball model and compensated using arc length. Finally, a nonlinear polynomial model is used to fit the deviation between the visual axis and the optical axis of the eye, yielding the final gaze point. Experiments show that the system has high precision and a high degree of freedom, with a maximum error of less than 1 cm in both the horizontal and vertical directions.
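
    The final step mentioned in the abstract, a nonlinear polynomial model fitted to residual deviations, is commonly a second-order polynomial regression from pupil-glint vectors to screen coordinates, calibrated by least squares. A sketch under that assumption (the synthetic calibration data are illustrative):

```python
import numpy as np

def poly_features(vx, vy):
    """Second-order polynomial terms of the pupil-glint vector."""
    return np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])

def fit_gaze_map(vectors, screen_xy):
    """Least-squares fit of screen coordinates from calibration data:
    `vectors` is an Nx2 array of pupil-glint vectors, `screen_xy` the
    Nx2 array of known calibration-target positions."""
    X = poly_features(vectors[:, 0], vectors[:, 1])
    coef, *_ = np.linalg.lstsq(X, screen_xy, rcond=None)
    return coef   # 6x2 coefficient matrix

def predict_gaze(coef, vectors):
    X = poly_features(vectors[:, 0], vectors[:, 1])
    return X @ coef

# synthetic calibration: a known quadratic mapping recovered from samples
rng = np.random.default_rng(0)
v = rng.uniform(-1, 1, size=(30, 2))
true = np.column_stack([100 + 50 * v[:, 0] + 5 * v[:, 0]**2,
                        80 + 40 * v[:, 1]])
coef = fit_gaze_map(v, true)
pred = predict_gaze(coef, v)
```
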

  5. The late positive potential indexes a role for emotion during learning of trust from eye-gaze cues

    OpenAIRE

    Manssuer, Luis R.; Roberts, Mark V.; Tipper, Steven P.

    2015-01-01

    Gaze direction perception triggers rapid visuospatial orienting to the location observed by others. When this is congruent with the location of a target, reaction times are faster than when incongruent. Functional magnetic resonance imaging studies suggest that the non-joint attention induced by incongruent cues are experienced as more emotionally negative and this could relate to less favorable trust judgments of the faces when gaze-cues are contingent with identity. Here, we provide further...

  6. The Detection Superiority of Perceived Direct Gaze in Visual Search Tasks: Evidence from Eye Movements

    Institute of Scientific and Technical Information of China (English)

    胡中华; 赵光; 刘强; 李红

    2012-01-01

    Previous studies have reported that a straight-gaze target embedded among averted-gaze distracters is detected faster and more accurately than an averted-gaze target among straight-gaze distracters. This detection superiority of perceived direct gaze has been termed "the stare-in-the-crowd effect" and is commonly explained by a direct gaze capturing visuospatial attention more effectively than an averted gaze. However, it is also possible that matching of stimulus items is faster and easier under the direct-gaze condition than under the averted-gaze condition, an explanation that has not been tested in previous studies. In addition, head orientation has been found to affect the detection of gaze direction, but it is not clear how. In view of this, we used an eye-tracking approach and divided the detection of gaze direction into three behavioral epochs: the preparation, search, and response epochs. We investigated (1) in which epoch the detection advantage of direct gaze occurs, and whether more efficient matching of stimulus items under the direct-gaze condition contributes to the stare-in-the-crowd effect alongside the capture of visuospatial attention by direct gaze; and (2) how head orientation affects the detection of gaze direction, and in which visual search epoch this effect is mainly manifested. We used a visual search task with two factors: gaze direction (direct; averted) and head orientation (frontal; deviated). Subjects were instructed to detect as accurately and quickly as possible whether the target gaze direction was present. Sixteen volunteers (6 males and 10 females) participated in the experiment. Behavioral results showed that direct-gaze targets were detected more rapidly and accurately than averted-gaze targets; eye movement analysis found: the detection

  7. Gaze as a biometric

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Hong-Jun [ORNL]; Carmichael, Tandy [Tennessee Technological University]; Tourassi, Georgia [ORNL]

    2014-01-01

    Two people may analyze a visual scene in two completely different ways. Our study sought to determine whether human gaze may be used to establish the identity of an individual. To accomplish this objective we investigated the gaze pattern of twelve individuals viewing different still images with different spatial relationships. Specifically, we created 5 visual dot-pattern tests to be shown on a standard computer monitor. These tests challenged the viewer's capacity to distinguish proximity, alignment, and perceptual organization. Each test included 50 images of varying difficulty (total of 250 images). Eye-tracking data were collected from each individual while taking the tests. The eye-tracking data were converted into gaze velocities and analyzed with Hidden Markov Models to develop personalized gaze profiles. Using leave-one-out cross-validation, we observed that these personalized profiles could differentiate among the 12 users with classification accuracy ranging between 53% and 76%, depending on the test. This was statistically significantly better than random guessing (i.e., 8.3% or 1 out of 12). Classification accuracy was higher for the tests where the users' average gaze velocity per case was lower. The study findings support the feasibility of using gaze as a biometric or personalized biomarker. These findings could have implications in Radiology training and the development of personalized e-learning environments.
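
    The pipeline of converting eye-tracking samples to gaze velocities and matching them against per-user profiles can be illustrated as below. Note that the study modeled velocities with Hidden Markov Models; this sketch substitutes a much simpler velocity-histogram signature with nearest-profile matching, and all names and values are hypothetical:

```python
import numpy as np

def gaze_velocity(x, y, dt):
    """Point-to-point gaze speed from sampled coordinates."""
    return np.hypot(np.diff(x), np.diff(y)) / dt

def velocity_profile(speeds, bins):
    """Normalized histogram of gaze speeds: a crude per-user signature
    (the study itself fits Hidden Markov Models instead)."""
    h, _ = np.histogram(speeds, bins=bins)
    return h / max(h.sum(), 1)

def identify(profile, known_profiles):
    """Return the index of the closest stored profile (L1 distance)."""
    dists = [np.abs(profile - k).sum() for k in known_profiles]
    return int(np.argmin(dists))

bins = np.linspace(0, 10, 11)
slow_viewer = velocity_profile(np.array([0.5, 1.0, 1.5, 0.8]), bins)
fast_viewer = velocity_profile(np.array([7.0, 8.5, 9.0, 6.5]), bins)
probe = velocity_profile(np.array([0.6, 1.2, 1.4]), bins)
who = identify(probe, [slow_viewer, fast_viewer])
```
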

  8. The late positive potential indexes a role for emotion during learning of trust from eye-gaze cues.

    Science.gov (United States)

    Manssuer, Luis R; Roberts, Mark V; Tipper, Steven P

    2015-01-01

    Gaze direction perception triggers rapid visuospatial orienting to the location observed by others. When this is congruent with the location of a target, reaction times are faster than when incongruent. Functional magnetic resonance imaging studies suggest that the non-joint attention induced by incongruent cues are experienced as more emotionally negative and this could relate to less favorable trust judgments of the faces when gaze-cues are contingent with identity. Here, we provide further support for these findings using time-resolved event-related potentials. In addition to replicating the effects of identity-contingent gaze-cues on reaction times and trust judgments, we discovered that the emotion-related late positive potential increased across blocks to incongruent compared to congruent faces before, during and after the gaze-cue, suggesting both learning and retrieval of emotion states associated with the face. We also discovered that the face-recognition-related N250 component appeared to localize to sources in anterior temporal areas. Our findings provide unique electrophysiological evidence for the role of emotion in learning trust from gaze-cues, suggesting that the retrieval of face evaluations during interaction may take around 1000 ms and that the N250 originates from anterior temporal face patches.

  9. Usability and Workload of Access Technology for People With Severe Motor Impairment: A Comparison of Brain-Computer Interfacing and Eye Tracking.

    Science.gov (United States)

    Pasqualotto, Emanuele; Matuz, Tamara; Federici, Stefano; Ruf, Carolin A; Bartl, Mathias; Olivetti Belardinelli, Marta; Birbaumer, Niels; Halder, Sebastian

    2015-01-01

    Eye trackers are widely used among people with amyotrophic lateral sclerosis, and their benefits to quality of life have been previously shown. In contrast, brain-computer interfaces (BCIs) are still a rather novel technology, which also serves as an access technology for people with severe motor impairment. Our aim was to compare a visual P300-based BCI and an eye tracker in terms of information transfer rate (ITR), usability, and cognitive workload in users with motor impairments. Each participant performed 3 spelling tasks, over 4 total sessions, using an Internet browser controlled by a spelling interface suitable for use with either the BCI or the eye tracker. At the end of each session, participants evaluated the usability and cognitive workload of the system. ITR and System Usability Scale (SUS) score were higher for the eye tracker (Wilcoxon signed-rank test: ITR T = 9, P = .016; SUS T = 12.50, P = .035). Cognitive workload was higher for the BCI (T = 4; P = .003). Although BCIs could be potentially useful for people with severe physical disabilities, we showed that the usability of BCIs based on the visual P300 remains inferior to eye tracking. We suggest that future research on visual BCIs should use eye tracking-based control as a comparison to evaluate performance, or focus on nonvisual paradigms for persons who have lost gaze control. © The Author(s) 2015.
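
    The information transfer rate compared in this study is conventionally computed with Wolpaw's formula from the number of selectable symbols N, the selection accuracy P, and the time per selection (the example numbers below are illustrative):

```python
import math

def itr_bits_per_min(n_classes, accuracy, seconds_per_selection):
    """Wolpaw information transfer rate in bits/minute."""
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        return 0.0              # at or below chance carries no information
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / seconds_per_selection)

# e.g. a 36-symbol speller at 90% accuracy, 10 s per selection
rate = itr_bits_per_min(36, 0.9, 10.0)
```
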

  10. Gaze categorization under uncertainty: psychophysics and modeling.

    Science.gov (United States)

    Mareschal, Isabelle; Calder, Andrew J; Dadds, Mark R; Clifford, Colin W G

    2013-04-22

    The accurate perception of another person's gaze direction underlies most social interactions and provides important information about his or her future intentions. As a first step to measuring gaze perception, most experiments determine the range of gaze directions that observers judge as being direct: the cone of direct gaze. This measurement has revealed the flexibility of observers' perception of gaze and provides a useful benchmark against which to test clinical populations with abnormal gaze behavior. Here, we manipulated effective signal strength by adding noise to the eyes of synthetic face stimuli or removing face information. We sought to move beyond a descriptive account of gaze categorization by fitting a model to the data that relies on changing the uncertainty associated with an estimate of gaze direction as a function of the signal strength. This model accounts for all the data and provides useful insight into the visual processes underlying normal gaze perception.
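
    The kind of model described, in which the probability of judging gaze as direct depends on the uncertainty of an internal gaze-direction estimate, can be sketched as a Gaussian-noise categorization rule; the criterion and noise values below are illustrative, not the paper's fits:

```python
import math

def p_judged_direct(true_angle, criterion, sigma):
    """Probability an observer reports 'direct' gaze: the internal
    estimate is the true gaze angle plus Gaussian noise (std `sigma`,
    which grows as signal strength drops), and 'direct' is reported
    when the estimate falls within +/- `criterion` degrees."""
    def phi(z):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (phi((criterion - true_angle) / sigma)
            - phi((-criterion - true_angle) / sigma))

# adding noise to the eyes (weaker signal) makes an 8-degree averted
# gaze more likely to be judged direct, widening the cone of direct gaze
clear = p_judged_direct(true_angle=8.0, criterion=5.0, sigma=1.0)
noisy = p_judged_direct(true_angle=8.0, criterion=5.0, sigma=8.0)
```
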

  11. Eye Gaze Tracking Method Based on Pupil Center Cornea Reflection Technique

    Institute of Scientific and Technical Information of China (English)

    吴广发; 宋鸿陟; 黄生辉

    2014-01-01

    Eye gaze tracking is an important research topic in multimodal human-computer interaction, and gaze estimation based on the pupil center corneal reflection (PCCR) technique is one of the most widely applied eye gaze tracking technologies. The primary purpose of the PCCR technique is to extract the pupil-center-to-corneal-reflection vector from the eye image as the visual input required by the gaze estimation model. This work constructs an infrared light device to extract the pupil-corneal reflection vector and builds an eye gaze tracking system based on the PCCR technique, providing a feasible, low-cost solution for eye gaze tracking research in human-computer interaction.
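
    The core measurement of the PCCR technique, the vector from the corneal glint center to the pupil center, can be sketched with simple threshold segmentation on a grayscale eye image. The thresholds and the toy image below are illustrative; real systems use adaptive segmentation and ellipse fitting:

```python
import numpy as np

def centroid(mask):
    """Center of mass of a boolean region, as (row, col)."""
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

def pccr_vector(eye_img, pupil_thresh, glint_thresh):
    """Pupil-center-minus-glint-center vector (dx, dy) from a grayscale
    eye image: the pupil is the dark region, the corneal glint the
    brightest spot under infrared illumination."""
    pupil = eye_img < pupil_thresh
    glint = eye_img > glint_thresh
    py, px = centroid(pupil)
    gy, gx = centroid(glint)
    return px - gx, py - gy

# toy image: dark pupil block around (30, 40), bright glint near (28, 36)
img = np.full((60, 80), 128, dtype=np.uint8)
img[25:36, 35:46] = 10      # pupil
img[27:30, 35:38] = 250     # glint overlapping the pupil edge
vec = pccr_vector(img, pupil_thresh=50, glint_thresh=200)
```

    Because both features move together with the head while their difference tracks eye rotation, this vector is relatively insensitive to small head movements, which is why it is the standard input to the gaze mapping model.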

  12. Attentional bias modification in depression through gaze contingencies and regulatory control using a new eye-tracking intervention paradigm: study protocol for a placebo-controlled trial.

    Science.gov (United States)

    Vazquez, Carmelo; Blanco, Ivan; Sanchez, Alvaro; McNally, Richard J

    2016-12-08

    Attentional biases, namely difficulties both in disengaging attention from negative information and in maintaining it on positive information, play an important role in the onset and maintenance of depression. Recently, researchers have developed specific attentional bias modification (ABM) techniques aimed to modify these maladaptive attentional patterns. However, the application of current ABM procedures has yielded, so far, scarce results in depression due, in part, to some methodological shortcomings. The aim of our protocol is the application of a new ABM technique, based on eye-tracker technology, designed to objectively train the specific attentional components involved in depression and, eventually, to reduce depressive symptoms. Based on sample size calculations, 32 dysphoric (BDI ≥13) participants will be allocated to either an active attentional bias training group or a yoked-control group. Attentional training will be individually administered in two sessions on two consecutive days at the lab. In the training task, series of pairs of faces (i.e., neutral vs. sad, neutral vs. happy, happy vs. sad) will be displayed. Participants in the training group will be asked to localize as quickly as possible the most positive face of the pair (e.g., the neutral face in neutral vs. sad trials) and maintain their gaze on it for 750 ms or 1500 ms, in two different blocks, to advance to the next trial. Participants' maintenance of gaze will be measured by an eye-tracking apparatus. Participants in the yoked-control group will be exposed to the same stimuli for the same average amount of time as the experimental participants, but without any instruction to maintain their gaze or any feedback on their performance. Pre- and post-training measures will be obtained to assess cognitive and emotional changes after the training. The findings from this research will provide a proof-of-principle of the efficacy of eye-tracking paradigms to modify attentional biases and
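
    The gaze-contingent advance rule in this protocol (maintain gaze on the chosen face for 750 or 1500 ms to proceed) amounts to a dwell-time check over eye-tracker samples. A sketch with a hypothetical sample rate and region format:

```python
def dwell_complete(samples, target_box, required_ms, sample_ms):
    """Check whether gaze stayed continuously inside the target region
    long enough to advance the trial. `samples` is a sequence of (x, y)
    gaze points taken every `sample_ms` milliseconds; `target_box` is
    (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = target_box
    needed = required_ms // sample_ms
    run = 0
    for x, y in samples:
        if x0 <= x <= x1 and y0 <= y <= y1:
            run += 1
            if run >= needed:
                return True
        else:
            run = 0   # leaving the region resets the dwell timer
    return False

# 15 consecutive 50 ms samples inside the region = a 750 ms dwell
advanced = dwell_complete([(10, 10)] * 15, (0, 0, 20, 20), 750, 50)
```
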

  13. Gaze Interactive Building Instructions

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Ahmed, Zaheer; Mardanbeigi, Diako

    We combine eye tracking technology and mobile tablets to support hands-free interaction with digital building instructions. As a proof-of-concept we have developed a small interactive 3D environment where one can interact with digital blocks by gaze, keystroke and head gestures. Blocks may be moved...

  14. Gaze shifts and fixations dominate gaze behavior of walking cats

    Science.gov (United States)

    Rivers, Trevor J.; Sirota, Mikhail G.; Guttentag, Andrew I.; Ogorodnikov, Dmitri A.; Shah, Neet A.; Beloozerova, Irina N.

    2014-01-01

    Vision is important for locomotion in complex environments. How it is used to guide stepping is not well understood. We used an eye search coil technique combined with an active marker-based head recording system to characterize the gaze patterns of cats walking over terrains of different complexity: (1) on a flat surface in the dark when no visual information was available, (2) on the flat surface in light when visual information was available but not required, (3) along the highly structured but regular and familiar surface of a horizontal ladder, a task for which visual guidance of stepping was required, and (4) along a pathway cluttered with many small stones, an irregularly structured surface that was new each day. Three cats walked in a 2.5 m corridor, and 958 passages were analyzed. Gaze activity during the time when the gaze was directed at the walking surface was subdivided into four behaviors based on speed of gaze movement along the surface: gaze shift (fast movement), gaze fixation (no movement), constant gaze (movement at the body’s speed), and slow gaze (the remainder). We found that gaze shifts and fixations dominated the cats’ gaze behavior during all locomotor tasks, jointly occupying 62–84% of the time when the gaze was directed at the surface. As visual complexity of the surface and demand on visual guidance of stepping increased, cats spent more time looking at the surface, looked closer to them, and switched between gaze behaviors more often. During both visually guided locomotor tasks, gaze behaviors predominantly followed a repeated cycle of forward gaze shift followed by fixation. We call this behavior “gaze stepping”. Each gaze shift took gaze to a site approximately 75–80 cm in front of the cat, which the cat reached in 0.7–1.2 s and 1.1–1.6 strides. Constant gaze occupied only 5–21% of the time cats spent looking at the walking surface. PMID:24973656

  15. Perception of stereoscopic direct gaze: The effects of interaxial distance and emotional facial expressions.

    Science.gov (United States)

    Hakala, Jussi; Kätsyri, Jari; Takala, Tapio; Häkkinen, Jukka

    2016-07-01

    Gaze perception has received considerable research attention due to its importance in social interaction. The majority of recent studies have utilized monoscopic pictorial gaze stimuli. However, a monoscopic direct gaze differs from a live or stereoscopic gaze. In the monoscopic condition, both eyes of the observer receive a direct gaze, whereas in live and stereoscopic conditions, only one eye receives a direct gaze. In the present study, we examined the implications of the difference between monoscopic and stereoscopic direct gaze. Moreover, because research has shown that stereoscopy affects the emotions elicited by facial expressions, and facial expressions affect the range of directions where an observer perceives mutual gaze (the cone of gaze), we studied the interaction effect of stereoscopy and facial expressions on gaze perception. Forty observers viewed stereoscopic images wherein one eye of the observer received a direct gaze while the other eye received a horizontally averted gaze at five different angles corresponding to five interaxial distances between the cameras in stimulus acquisition. In addition to monoscopic and stereoscopic conditions, the stimuli included neutral, angry, and happy facial expressions. The observers judged the gaze direction and mutual gaze of four lookers. Our results show that the mean of the directions received by the left and right eyes approximated the perceived gaze direction in the stereoscopic semidirect gaze condition. The probability of perceiving mutual gaze in the stereoscopic condition was substantially lower compared with monoscopic direct gaze. Furthermore, stereoscopic semidirect gaze significantly widened the cone of gaze for happy facial expressions.

  16. Fix your eyes in the space you could reach: neurons in the macaque medial parietal cortex prefer gaze positions in peripersonal space.

    Directory of Open Access Journals (Sweden)

    Kostas Hadjidimitrakis

    Full Text Available Interacting in the peripersonal space requires coordinated arm and eye movements to visual targets in depth. In primates, the medial posterior parietal cortex (PPC represents a crucial node in the process of visual-to-motor signal transformations. The medial PPC area V6A is a key region engaged in the control of these processes because it jointly processes visual information, eye position and arm movement related signals. However, to date, there is no evidence in the medial PPC of spatial encoding in three dimensions. Here, using single neuron recordings in behaving macaques, we studied the neural signals related to binocular eye position in a task that required the monkeys to perform saccades and fixate targets at different locations in peripersonal and extrapersonal space. A significant proportion of neurons were modulated by both gaze direction and depth, i.e., by the location of the foveated target in 3D space. The population activity of these neurons displayed a strong preference for peripersonal space in a time interval around the saccade that preceded fixation and during fixation as well. This preference for targets within reaching distance during both target capturing and fixation suggests that binocular eye position signals are implemented functionally in V6A to support its role in reaching and grasping.

  17. Children with ASD Can Use Gaze to Map New Words

    Science.gov (United States)

    Bean Ellawadi, Allison; McGregor, Karla K.

    2016-01-01

    Background: The conclusion that children with autism spectrum disorders (ASD) do not use eye gaze in the service of word learning is based on one-trial studies. Aims: To determine whether children with ASD come to use gaze in the service of word learning when given multiple trials with highly reliable eye-gaze cues. Methods & Procedures:…

  18. No Evidence of Emotional Dysregulation or Aversion to Mutual Gaze in Preschoolers with Autism Spectrum Disorder: An Eye-Tracking Pupillometry Study

    Science.gov (United States)

    Nuske, Heather J.; Vivanti, Giacomo; Dissanayake, Cheryl

    2015-01-01

    The "gaze aversion hypothesis" suggests that people with Autism Spectrum Disorder (ASD) avoid mutual gaze because they experience it as hyper-arousing. To test this hypothesis we showed mutual and averted gaze stimuli to 23 mixed-ability preschoolers with ASD ("M" Mullen DQ = 68) and 21 typically-developing preschoolers, aged…

  19. Context-sensitivity in Conversation. Eye gaze and the German Repair Initiator 'bitte?' ('pardon?')

    DEFF Research Database (Denmark)

    Egbert, Maria

    1996-01-01

    Just as turn-taking has been found to be both context-free and context-sensitive (Sacks, Schegloff & Jefferson 1974), the organization of repair is also shown here to be both context-free and context-sensitive. In a comparison of American and German conversation, repair can be shown to be context......-free in that, basically, the same mechanism can be found across these two languages. However, repair is also sensitive to the linguistic inventory of a given language; in German, morphological marking, syntactic constraints, and grammatical congruity across turns are used as interactional resources....... In addition, repair is sensitive to certain characteristics of social situations. The selection of a particular repair initiator, German bitte? ‘pardon?’, indexes that there is no mutual gaze between interlocutors; i.e., there is no common course of action. The selection of bitte? not only initiates repair...

  20. Off-the-Shelf Gaze Interaction

    DEFF Research Database (Denmark)

    San Agustin, Javier

    People with severe motor-skill disabilities are often unable to use standard input devices such as a mouse or a keyboard to control a computer and they are, therefore, in strong need for alternative input devices. Gaze tracking offers them the possibility to use the movements of their eyes...... of the challenges introduced by the use of low-cost and off-the-shelf components for gaze interaction. The main contributions are: - Development and performance evaluation of the ITU Gaze Tracker, an off-the-shelf gaze tracker that uses an inexpensive webcam or video camera to track the user’s eye. The software...... is readily available as open source, offering the possibility to try out gaze interaction for a low price and to analyze, improve and extend the software by modifying the source code. - A novel gaze estimation method based on homographic mappings between planes. No knowledge about the hardware configuration...
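
The homography-based gaze estimation mentioned in the abstract maps points on one plane (e.g., normalized pupil-glint features) to points on another (the screen) without requiring knowledge of the hardware configuration. A minimal direct-linear-transform sketch of such a plane-to-plane mapping, for illustration only and not the ITU Gaze Tracker's actual implementation:

```python
import numpy as np

def fit_homography(src_pts, dst_pts):
    """Fit a 3x3 homography from >= 4 point correspondences via the
    direct linear transform (two equations per correspondence)."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null-space vector of A (last row of V^T).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 3)

def map_point(H, p):
    """Apply the homography to a 2D point in homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[0] / q[2], q[1] / q[2]
```

With four calibration targets at known screen positions, `fit_homography` can be fitted once and then used to map each incoming feature point to a screen coordinate.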

  1. GIBS block speller: toward a gaze-independent P300-based BCI.

    Science.gov (United States)

    Pires, Gabriel; Nunes, Urbano; Castelo-Branco, Miguel

    2011-01-01

    Brain-computer interface (BCI) opens a new communication channel for individuals with severe motor disorders. In P300-based BCIs, gazing at the target event plays an important role in BCI performance. Individuals whose eye movements are affected may lose the ability to gaze at targets that are in the visual periphery. This paper presents a novel P300-based paradigm called gaze independent block speller (GIBS), and compares its performance with that of the standard row-column (RC) speller. The GIBS paradigm requires extra selections of blocks of letters. The online experiments conducted with able-bodied participants show that users can effectively control GIBS without moving the eyes (covert attention), while this task is not possible with the RC speller. Furthermore, with overt attention, the results show that the improved classification accuracy of GIBS over the RC speller compensates for the extra selections, thereby achieving similar practical bit rates.

  2. Enhanced Perception of User Intention by Combining EEG and Gaze-Tracking for Brain-Computer Interfaces (BCIs

    Directory of Open Access Journals (Sweden)

    Mincheol Whang

    2013-03-01

    Full Text Available Speller UI systems tend to be less accurate because of individual variation and the noise of EEG signals. Therefore, we propose a new method that combines EEG signals and gaze-tracking. This research is novel in the following four aspects. First, two wearable devices are combined to simultaneously measure both the EEG signal and the gaze position. Second, the speller UI system usually has a 6 × 6 matrix of alphanumeric characters, which has the disadvantage that the number of characters is limited to 36. Thus, a 12 × 12 matrix that includes 144 characters is used. Third, in order to reduce the highlighting time of each of the 12 × 12 rows and columns, only the three rows and three columns (which are determined on the basis of the 3 × 3 area centered on the user’s gaze position) are highlighted. Fourth, by analyzing the P300 EEG signal that is obtained only when each of the 3 × 3 rows and columns is highlighted, the accuracy of selecting the correct character is enhanced. The experimental results showed that the accuracy of the proposed method was higher than that of the other methods.

  3. Enhanced perception of user intention by combining EEG and gaze-tracking for brain-computer interfaces (BCIs).

    Science.gov (United States)

    Choi, Jong-Suk; Bang, Jae Won; Park, Kang Ryoung; Whang, Mincheol

    2013-03-13

    Speller UI systems tend to be less accurate because of individual variation and the noise of EEG signals. Therefore, we propose a new method that combines EEG signals and gaze-tracking. This research is novel in the following four aspects. First, two wearable devices are combined to simultaneously measure both the EEG signal and the gaze position. Second, the speller UI system usually has a 6 × 6 matrix of alphanumeric characters, which has the disadvantage that the number of characters is limited to 36. Thus, a 12 × 12 matrix that includes 144 characters is used. Third, in order to reduce the highlighting time of each of the 12 × 12 rows and columns, only the three rows and three columns (which are determined on the basis of the 3 × 3 area centered on the user's gaze position) are highlighted. Fourth, by analyzing the P300 EEG signal that is obtained only when each of the 3 × 3 rows and columns is highlighted, the accuracy of selecting the correct character is enhanced. The experimental results showed that the accuracy of the proposed method was higher than that of the other methods.
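
The third novelty above, restricting highlighting to the three rows and three columns around the user's gaze, amounts to clamping a 3 × 3 window inside the 12 × 12 matrix. A small illustrative sketch (function and parameter names are assumptions, not taken from the paper):

```python
def rows_cols_to_flash(gaze_rc, n=12, k=3):
    """Return the k rows and k columns of an n x n speller matrix
    centered on the (row, col) cell nearest the user's gaze,
    clamped so the window stays inside the matrix."""
    r, c = gaze_rc
    half = k // 2
    r0 = min(max(r - half, 0), n - k)
    c0 = min(max(c - half, 0), n - k)
    return list(range(r0, r0 + k)), list(range(c0, c0 + k))
```

Only these 3 + 3 stimuli are then flashed, so each selection cycle needs 6 highlights instead of 24, while the P300 response still discriminates the attended row and column.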

  4. The Mona Lisa effect: neural correlates of centered and off-centered gaze.

    Science.gov (United States)

    Boyarskaya, Evgenia; Sebastian, Alexandra; Bauermann, Thomas; Hecht, Heiko; Tüscher, Oliver

    2015-02-01

    The Mona Lisa effect describes the phenomenon when the eyes of a portrait appear to look at the observer regardless of the observer's position. Recently, the metaphor of a cone of gaze has been proposed to describe the range of gaze directions within which a person feels looked at. The width of the gaze cone is about five degrees of visual angle to either side of a given gaze direction. We used functional magnetic resonance imaging to investigate how the brain regions involved in gaze direction discrimination would differ between centered and decentered presentation positions of a portrait exhibiting eye contact. Subjects observed a given portrait's eyes. By presenting portraits with varying gaze directions (eye contact, 0°; gaze at the edge of the gaze cone, 5°; clearly averted gaze, 10°), we revealed that brain response to gaze at the edge of the gaze cone was similar to that produced by eye contact and different from that produced by averted gaze. Right fusiform gyrus and right superior temporal sulcus showed stronger activation when the gaze was averted as compared to eye contact. Gaze sensitive areas, however, were not affected by the portrait's presentation location. In sum, although the brain clearly distinguishes averted from centered gaze, a substantial change of vantage point does not alter neural activity, thus providing a possible explanation why the feeling of eye contact is upheld even in decentered stimulus positions. © 2014 Wiley Periodicals, Inc.

  5. Culture and Listeners' Gaze Responses to Stuttering

    Science.gov (United States)

    Zhang, Jianliang; Kalinowski, Joseph

    2012-01-01

    Background: It is frequently observed that listeners demonstrate gaze aversion to stuttering. This response may have profound social/communicative implications for both fluent and stuttering individuals. However, there is a lack of empirical examination of listeners' eye gaze responses to stuttering, and it is unclear whether cultural background…

  6. Culture and Listeners' Gaze Responses to Stuttering

    Science.gov (United States)

    Zhang, Jianliang; Kalinowski, Joseph

    2012-01-01

    Background: It is frequently observed that listeners demonstrate gaze aversion to stuttering. This response may have profound social/communicative implications for both fluent and stuttering individuals. However, there is a lack of empirical examination of listeners' eye gaze responses to stuttering, and it is unclear whether cultural background…

  7. From the "Eye of History" to "A Second Gaze": The Visual Archive and the Marginalized in the History of Education

    Science.gov (United States)

    Grosvenor, Ian

    2007-01-01

    This paper has several concerns. It is about both the stories we tell and the images we place with those stories; it is also about historical practice and the power of the image to generate new research approaches. The paper is organized into three sections: the "eye of history" and historians and the visual archive; histories of black…

  8. Remote Gaze Tracking System on a Large Display

    Directory of Open Access Journals (Sweden)

    Jihun Cha

    2013-10-01

    Full Text Available We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC for detecting eye position and an auto-focusing narrow view camera (NVC for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user’s facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s.
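
The NVC auto-focusing described above is driven in part by a focus score computed on the eye image. The abstract does not give the formula; a common sharpness measure used for this purpose is the variance of the image Laplacian, sketched here purely for illustration:

```python
import numpy as np

def focus_score(img):
    """Variance-of-Laplacian sharpness measure: sharp (in-focus) images
    have strong local intensity changes, hence a high-variance Laplacian.
    This is a common focus metric, not the paper's exact formula."""
    img = np.asarray(img, dtype=float)
    # 4-neighbour Laplacian on the interior pixels, via array slicing.
    lap = (img[1:-1, :-2] + img[1:-1, 2:] + img[:-2, 1:-1]
           + img[2:, 1:-1] - 4.0 * img[1:-1, 1:-1])
    return lap.var()
```

An auto-focus loop would step the lens and keep the position that maximizes this score on the cropped eye region.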

  9. Gaze perception in social anxiety and social anxiety disorder.

    Science.gov (United States)

    Schulze, Lars; Renneberg, Babette; Lobmaier, Janek S

    2013-12-16

    Clinical observations suggest abnormal gaze perception to be an important indicator of social anxiety disorder (SAD). Experimental research has so far paid relatively little attention to the study of gaze perception in SAD. In this article we first discuss gaze perception in healthy human beings before reviewing self-referential and threat-related biases of gaze perception in clinical and non-clinical socially anxious samples. Relative to controls, socially anxious individuals exhibit an enhanced self-directed perception of gaze directions and demonstrate a pronounced fear of direct eye contact, though findings are less consistent regarding the avoidance of mutual gaze in SAD. Prospects for future research and clinical implications are discussed.

  10. Gameplay experience in a gaze interaction game

    CERN Document Server

    Nacke, Lennart E; Sasse, Dennis; Lindley, Craig A

    2010-01-01

    Assessing gameplay experience for gaze interaction games is a challenging task. For this study, a gaze interaction Half-Life 2 game modification was created that allowed eye tracking control. The mod was deployed during an experiment at Dreamhack 2007, where participants had to play with gaze navigation and afterwards rate their gameplay experience. The results show low tension and negative affect scores on the gameplay experience questionnaire as well as high positive challenge, immersion and flow ratings. The correlation between spatial presence and immersion for gaze interaction was high and warrants further investigation. It is concluded that gameplay experience can be correctly assessed with the methodology presented in this paper.

  11. Eye Gaze Tracking Based on Analysis of Electrooculography and Steady-State Visual Evoked Potentials

    Institute of Scientific and Technical Information of China (English)

    郭琛; 高小榕

    2012-01-01

    Eye-tracking technology, as a means of human-computer interaction (HCI) and eye-behavior measurement, has been widely used in psychology and cognitive science research. Meanwhile, the brain-computer interface (BCI) based on the steady-state visual evoked potential (SSVEP) is also a method of interest for disabled patients. A novel eye-tracking method using combined analysis of the two signals is proposed in this paper. Two kinds of electrophysiological signals, EOG and EEG, were simultaneously analyzed with this method. The EOG analysis comprised modules for detrending, denoising, angular transformation and calibration. The SSVEP analysis was based on the canonical correlation analysis (CCA) algorithm; with this algorithm, the frequency whose canonical correlation was maximal and above threshold was selected as the target. The screen coordinates of the target could then be used as benchmark parameters for calibrating the gaze-point tracking. The results showed that gaze points could be identified by EOG every 0.5 s and targets spotted by EEG every 2 s. Both could run independently or work together, and the time and accuracy of detection improved when they were combined.
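
The CCA-based SSVEP detection described above correlates the EEG segment with sine/cosine reference signals at each candidate stimulus frequency and selects the frequency whose canonical correlation is maximal and above threshold. A minimal sketch of this standard scheme (the harmonic count and threshold value are assumptions):

```python
import numpy as np

def canonical_corr(X, Y):
    """First canonical correlation between the column spaces of X and Y
    (samples in rows), via the QR/SVD formulation."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def detect_ssvep(eeg, fs, freqs, n_harmonics=2, threshold=0.3):
    """Return the stimulus frequency whose sine/cosine reference set is
    maximally correlated with the EEG segment, or None if no frequency
    exceeds the threshold.  eeg: samples x channels array."""
    t = np.arange(eeg.shape[0]) / fs
    best_f, best_r = None, threshold
    for f in freqs:
        # Reference matrix: sin/cos at the fundamental and its harmonics.
        Y = np.column_stack([func(2 * np.pi * h * f * t)
                             for h in range(1, n_harmonics + 1)
                             for func in (np.sin, np.cos)])
        r = canonical_corr(eeg, Y)
        if r > best_r:
            best_f, best_r = f, r
    return best_f
```

In the paper's scheme, the screen location of the detected SSVEP target then serves as the calibration benchmark for the EOG gaze estimate.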

  12. Gaze disorders: A clinical approach

    Directory of Open Access Journals (Sweden)

    Pulikottil Wilson Vinny

    2016-01-01

    Full Text Available Single clear binocular vision is made possible by nature through the oculomotor system, along with inputs from the cortical areas as well as their descending pathways to the brainstem. Six systems of supranuclear control mechanisms play a crucial role in this regard. These are the saccadic system, the smooth pursuit system, the vestibular system, the optokinetic system, the fixation system, and the vergence system. In gaze disorders, lesions at different levels of the brain spare some of the eye movement systems while affecting others. The resulting pattern of eye movements helps clinicians to localize lesions accurately in the central nervous system. Common lesions causing gaze palsies include cerebral infarcts, demyelinating lesions, multiple sclerosis, tumors, Wernicke's encephalopathy, metabolic disorders, and neurodegenerative disorders such as progressive supranuclear palsy. Evaluation of the different gaze disorders is a bane of most budding neurologists and neurosurgeons. However, a simple and systematic clinical approach to this problem can make their early diagnosis rather easy.

  13. Sphero-cylindrical error for oblique gaze as a function of the position of the centre of rotation of the eye.

    Science.gov (United States)

    Perches, Sara; Ares, Jorge; Collados, Victoria; Palos, Fernando

    2013-07-01

    New designs of ophthalmic lenses customised for particular wearing conditions (e.g., vertex distance or wrap tilt angle) have emerged during the last few years. However, there is limited information about the extent of any improvement in visual quality of these products. The aim of this work was to determine whether customisation according to the centre of rotation of the eye (CRE) improves visual quality for oblique gaze in monofocal spherical lenses. Conventional spherical lenses were designed by numerical ray tracing with back vertex powers (BVP) ranging from +8 to -8 dioptres (D) and base curves from 0 to 8 D. The wavefront error at oblique gaze (40°) was computed for each design with CRE positions from 20 to 35 mm. Sphero-cylindrical (SC) error was calculated using wavefront Zernike coefficients, considering only monochromatic aberrations. Visual acuity in logMAR was estimated following the Raasch empirical regression model. SC error and visual acuity maps were calculated for each BVP as a function of base curves and CRE in a graded colour scale. From SC error maps maximum spherical and cylindrical errors (MSE and MCE) of 1.49 D and -1.24 D respectively were found for BVP from 0 to -2 D, 2.27 D and -1.90 D for BVP from -2 D to -4 D, 2.59 D and -2.20 D for lenses from -4 D to -6 D and 2.63 D and -2.28 D for lenses from -6 D to -8 D. Concerning positive lenses, we obtained MSE and MCE of 0.37 D and -1.35 D respectively for lenses from 0 D to +2 D, 0.39 D and -2.23 D for lenses from +2 D to +4 D and 0.36 D and -2.73 D for lenses from +4 D to +6 D. Regarding visual acuity maps for 40° oblique gaze, significant loss of visual acuity (>0.30 logMAR, Snellen 6/12, 20/40, decimal 0.50) was found for BVP as low as -2 D. Clinically negligible high order aberration levels (equivalent spherical power high SC error when they were designed with low bases. However, high BVP negative lenses with low SC error were found for medium bases and low

  14. Under Moroccan Gaze: Dis/(Re)Orienting Orientalism American Style in Abdellatif Akbib’s Tangier’s Eyes on America

    Directory of Open Access Journals (Sweden)

    Lhoussain Simour

    2010-06-01

    Full Text Available This article engages with travel literature and is mostly concerned with the image of America in Abdellatif Akbib’s travel-inspired narrative, Tangier’s Eyes on America (2001. It is devoted to examining a number of patterns of representation especially as they pertain to the notion of counter discourse and counter-hegemonic modalities of resistance and subversion. It also inspects the discursive mechanisms Akbib employs to represent America and highlights how Western cultural prejudices and stereotypes are destabilised, and how the discursively-inflected distortions of the Orientalist mindset are disturbed in his work. The choice of this text is determined by a strong desire to discover how the Other of the Orientalist ideology examines and understands the Western Self and modernity and how he/she dismantles “the Centre/Margin binarism of imperial discourse.”

  15. Eye gaze tracking based on dark pupil image

    Institute of Scientific and Technical Information of China (English)

    2013-01-01

    The accurate localization of the iris center is difficult since the outer boundary of the iris is often significantly occluded by the eyelids. To solve this problem, an infrared light source that is not coaxial with the camera is used to produce a dark pupil image for pupil center estimation. First, the 3D position of the center of corneal curvature, which is used as the translational movement information of the eyeball, is computed using two cameras and the coordinates of the two corneal reflections on the cameras' imaging planes. Then, the relative displacement of the pupil center from the projection of the corneal curvature center on the 2D image is extracted, describing the rotational movement of the eyeball. Finally, the feature vector is mapped onto the coordinates of the gazing point on the screen using an artificial neural network. For the eye region detection problem, two wide-view webcams are used, and an adaptive boosting + active appearance model algorithm is adopted to limit the region of interest to a small area. The experimental results show an average root-mean-square error of 0.62° in the horizontal direction and 1.05° in the vertical direction, which demonstrates the effectiveness of our solution for eye gaze tracking.
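
The final step above regresses the eye-feature vector onto screen coordinates. The paper uses an artificial neural network for this mapping; the sketch below substitutes a second-order polynomial least-squares fit, a common lightweight alternative in gaze calibration, shown purely for illustration:

```python
import numpy as np

def fit_gaze_mapping(features, screen_pts):
    """Least-squares fit of a second-order polynomial from 2D eye
    features (e.g., pupil-center offsets (dx, dy)) to calibration-target
    screen coordinates.  Returns the 6 x 2 coefficient matrix."""
    F = np.asarray(features, dtype=float)
    dx, dy = F[:, 0], F[:, 1]
    X = np.column_stack([np.ones_like(dx), dx, dy, dx * dy, dx**2, dy**2])
    coef, *_ = np.linalg.lstsq(X, np.asarray(screen_pts, float), rcond=None)
    return coef

def predict_gaze(coef, dx, dy):
    """Map one feature vector to an (x, y) screen coordinate."""
    x = np.array([1.0, dx, dy, dx * dy, dx**2, dy**2])
    return x @ coef
```

A 3 × 3 grid of calibration targets gives nine correspondences, enough to fit the six polynomial terms per screen axis with some redundancy.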

  16. Gaze-based rehearsal in children under 7: a developmental investigation of eye movements during a serial spatial memory task.

    Science.gov (United States)

    Morey, Candice C; Mareva, Silvana; Lelonkiewicz, Jaroslaw R; Chevalier, Nicolas

    2017-03-12

    The emergence of strategic verbal rehearsal at around 7 years of age is widely considered a major milestone in descriptions of the development of short-term memory across childhood. Likewise, rehearsal is believed by many to be a crucial factor in explaining why memory improves with age. This apparent qualitative shift in mnemonic processes has also been characterized as a shift from passive visual to more active verbal mnemonic strategy use, but no investigation of the development of overt spatial rehearsal has informed this explanation. We measured serial spatial order reconstruction in adults and groups of children 5-7 years old and 8-11 years old, while recording their eye movements. Children, particularly the youngest children, overtly fixated late-list spatial positions longer than adults, suggesting that younger children are less likely to engage in covert rehearsal during stimulus presentation than older children and adults. However, during retention the youngest children overtly fixated more of the to-be-remembered sequences than any other group, which is inconsistent with the idea that children do nothing to try to remember. Altogether, these data are inconsistent with the notion that children under 7 do not engage in any attempts to remember. They are most consistent with proposals that children's style of remembering shifts around age 7 from reactive cue-driven methods to proactive, covert methods, which may include cumulative rehearsal.

  17. Cultural and Species Differences in Gazing Patterns for Marked and Decorated Objects: A Comparative Eye-Tracking Study

    Science.gov (United States)

    Mühlenbeck, Cordelia; Jacobsen, Thomas; Pritsch, Carla; Liebal, Katja

    2017-01-01

    Objects from the Middle Paleolithic period colored with ochre and marked with incisions represent the beginning of non-utilitarian object manipulation in different species of the Homo genus. To investigate the visual effects caused by these markings, we compared humans who have different cultural backgrounds (Namibian hunter–gatherers and German city dwellers) to one species of non-human great apes (orangutans) with respect to their perceptions of markings on objects. We used eye-tracking to analyze their fixation patterns and the durations of their fixations on marked and unmarked stones and sticks. In an additional test, humans evaluated the objects regarding their aesthetic preferences. Our hypotheses were that colorful markings help an individual to structure the surrounding world by making certain features of the environment salient, and that aesthetic appreciation should be associated with this structuring. Our results showed that humans fixated on the marked objects longer and used them in the structural processing of the objects and their background, but did not consistently report finding them more beautiful. Orangutans, in contrast, did not distinguish between object and background in their visual processing and did not clearly fixate longer on the markings. Our results suggest that marking behavior is characteristic for humans and evolved as an attention-directing rather than aesthetic benefit. PMID:28167923

  18. Cultural and Species Differences in Gazing Patterns for Marked and Decorated Objects: A Comparative Eye-Tracking Study.

    Science.gov (United States)

    Mühlenbeck, Cordelia; Jacobsen, Thomas; Pritsch, Carla; Liebal, Katja

    2017-01-01

    Objects from the Middle Paleolithic period colored with ochre and marked with incisions represent the beginning of non-utilitarian object manipulation in different species of the Homo genus. To investigate the visual effects caused by these markings, we compared humans who have different cultural backgrounds (Namibian hunter-gatherers and German city dwellers) to one species of non-human great apes (orangutans) with respect to their perceptions of markings on objects. We used eye-tracking to analyze their fixation patterns and the durations of their fixations on marked and unmarked stones and sticks. In an additional test, humans evaluated the objects regarding their aesthetic preferences. Our hypotheses were that colorful markings help an individual to structure the surrounding world by making certain features of the environment salient, and that aesthetic appreciation should be associated with this structuring. Our results showed that humans fixated on the marked objects longer and used them in the structural processing of the objects and their background, but did not consistently report finding them more beautiful. Orangutans, in contrast, did not distinguish between object and background in their visual processing and did not clearly fixate longer on the markings. Our results suggest that marking behavior is characteristic for humans and evolved as an attention-directing rather than aesthetic benefit.

  19. Classification of binary intentions for individuals with impaired oculomotor function: ‘eyes-closed’ SSVEP-based brain-computer interface (BCI)

    Science.gov (United States)

    Lim, Jeong-Hwan; Hwang, Han-Jeong; Han, Chang-Hee; Jung, Ki-Young; Im, Chang-Hwan

    2013-04-01

    Objective. Some patients suffering from severe neuromuscular diseases have difficulty controlling not only their bodies but also their eyes. Since these patients have difficulty gazing at specific visual stimuli or keeping their eyes open for a long time, they are unable to use typical steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) systems. In this study, we introduce a new paradigm for SSVEP-based BCI, which is potentially suitable for disabled individuals with impaired oculomotor function. Approach. The proposed electroencephalography (EEG)-based BCI system allows users to express their binary intentions without needing to open their eyes. A pair of glasses with two light emitting diodes flickering at different frequencies was used to present visual stimuli to participants with their eyes closed, and we classified the recorded EEG patterns in online experiments conducted with five healthy participants and one patient with severe amyotrophic lateral sclerosis (ALS). Main results. Through offline experiments performed with 11 participants, we confirmed that the human SSVEP can be modulated by visual selective attention to a specific light stimulus penetrating through the eyelids. Furthermore, the recorded EEG patterns could be classified with accuracy high enough for use in a practical BCI system. After customizing the parameters of the proposed SSVEP-based BCI paradigm based on the offline analysis results, the binary intentions of five healthy participants were classified in real time. The average information transfer rate of our online experiments reached 10.83 bits/min. A preliminary online experiment conducted with an ALS patient showed a classification accuracy of 80%. Significance. The results of our offline and online experiments demonstrate the feasibility of the proposed SSVEP-based BCI paradigm. It is expected that our 'eyes-closed' SSVEP-based BCI system can potentially be used for communication by patients with impaired oculomotor function.
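    The frequency-tagging idea behind this paradigm is easy to illustrate: attending to one of two LEDs flickering at different rates boosts EEG power at that flicker frequency, so a binary intention can be read out by comparing spectral power at the two frequencies. The sketch below is a hypothetical single-channel toy with invented signal parameters, not the authors' pipeline (real systems use multi-channel EEG and more robust detectors such as canonical correlation analysis):

    ```python
    # Toy SSVEP binary classifier: compare FFT power at the two flicker
    # frequencies (fundamental plus harmonics). All parameters are invented.
    import numpy as np

    def classify_ssvep(eeg, fs, f_left, f_right, harmonics=2):
        """Return 'left' or 'right' depending on which flicker frequency
        dominates the EEG power spectrum."""
        spectrum = np.abs(np.fft.rfft(eeg)) ** 2
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)

        def band_power(f0):
            # Sum power at the bin nearest each harmonic of f0.
            return sum(spectrum[np.argmin(np.abs(freqs - h * f0))]
                       for h in range(1, harmonics + 1))

        return 'left' if band_power(f_left) > band_power(f_right) else 'right'

    # Simulated 4-second recording: a 6 Hz SSVEP response buried in noise.
    fs = 256
    t = np.arange(0, 4, 1.0 / fs)
    rng = np.random.default_rng(0)
    eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(len(t))
    print(classify_ssvep(eeg, fs, f_left=6.0, f_right=10.0))  # → left
    ```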

  20. Improvement of design of a surgical interface using an eye tracking device.

    Science.gov (United States)

    Erol Barkana, Duygun; Açık, Alper; Duru, Dilek Goksel; Duru, Adil Deniz

    2014-05-07

    Surgical interfaces are used to help surgeons interpret and quantify patient information, and to present an integrated workflow in which all available data are combined to enable optimal treatment. Human factors research provides a systematic approach to designing user interfaces with safety, accuracy, satisfaction and comfort in mind. One human factors method, the user-centered design approach, was used to develop a surgical interface for kidney tumor cryoablation. An eye tracking device was used to obtain the best configuration of the developed surgical interface. The surgical interface was developed following the four phases of the user-centered design approach: analysis, design, implementation and deployment. Possible configurations of the surgical interface, comprising various combinations of menu-based command controls, visual displays of multi-modal medical images, 2D and 3D models of the surgical environment, graphical or tabulated information, visual alerts, etc., were developed. Experiments with a simulated tumor cryoablation task were performed with surgeons to evaluate the proposed surgical interface. Fixation durations and the number of fixations at informative regions of the surgical interface were analyzed, and these data were used to modify the surgical interface. Eye movement data showed that participants concentrated their attention on informative regions more when the number of displayed Computed Tomography (CT) images was reduced. Additionally, the time required for participants to complete the kidney tumor cryoablation task decreased with the reduced number of CT images. Furthermore, the fixation durations obtained after the revision of the surgical interface are very close to those observed in visual search and natural scene perception studies, suggesting more efficient and comfortable interaction with the surgical interface.

  1. Assessing Self-Awareness through Gaze Agency.

    Science.gov (United States)

    Gregori Grgič, Regina; Crespi, Sofia Allegra; de'Sperati, Claudio

    2016-01-01

    We define gaze agency as the awareness of the causal effect of one's own eye movements in gaze-contingent environments, which might soon become a widespread reality with the diffusion of gaze-operated devices. Here we propose a method for measuring gaze agency based on self-monitoring propensity and sensitivity. In one task, naïve observers watched bouncing balls on a computer monitor with the goal of discovering the cause of concurrently presented beeps, which were generated in real time by their saccades or by other events (Discovery Task). We manipulated observers' self-awareness by pre-exposing them to a condition in which beeps depended on gaze direction or by focusing their attention on their own eyes. These manipulations increased the propensity for agency discovery. In a second task, which served to monitor agency sensitivity at the sensorimotor level, observers were explicitly asked to detect gaze agency (Detection Task). Both tasks turned out to be well suited to measuring both increases and decreases in gaze agency. We did not find evident oculomotor correlates of agency discovery or detection. A strength of our approach is that it probes self-monitoring propensity, which is difficult to evaluate with traditional tasks based on bodily agency. In addition to putting a lens on this novel cognitive function, measuring gaze agency could reveal subtle self-awareness deficits in pathological conditions and during development.

  2. Interacting with Objects in the Environment by Gaze and Hand Gestures

    DEFF Research Database (Denmark)

    Hales, Jeremy; Mardanbeigi, Diako; Rozado, David

    2013-01-01

    A head-mounted wireless gaze tracker in the form of gaze tracking glasses is used here for continuous and mobile monitoring of a subject's point of regard on the surrounding environment. We combine gaze tracking and hand gesture recognition to allow a subject to interact with objects in the environment by gazing at them, and to control the objects using hand gesture commands. The gaze tracking glasses were made from low-cost hardware consisting of a safety glasses' frame and wireless eye tracking and scene cameras. An open-source gaze estimation algorithm is used for eye tracking and estimation of the user's gaze, supporting gaze-based interaction with objects in smart environments.

  3. Gaze Following Is Modulated by Expectations Regarding Others' Action Goals.

    Directory of Open Access Journals (Sweden)

    Jairo Perez-Osorio

    Humans attend to social cues in order to understand and predict others' behavior. Facial expressions and gaze direction provide valuable information to infer others' mental states and intentions. The present study examined the mechanism of gaze following in the context of participants' expectations about successive action steps of an observed actor. We embedded a gaze-cueing manipulation within an action scenario consisting of a sequence of naturalistic photographs. Gaze-induced orienting of attention (gaze following) was analyzed with respect to whether the gaze behavior of the observed actor was in line with the action-related expectations of participants (i.e., whether the actor gazed at an object that was congruent or incongruent with an overarching action goal). In Experiment 1, participants followed the gaze of the observed agent, though the gaze-cueing effect was larger when the actor looked at an action-congruent object relative to an incongruent object. Experiment 2 examined whether the pattern of effects observed in Experiment 1 was due to covert, rather than overt, attentional orienting, by requiring participants to maintain eye fixation throughout the sequence of critical photographs (corroborated by monitoring eye movements). The essential pattern of results of Experiment 1 was replicated, with the gaze-cueing effect being completely eliminated when the observed agent gazed at an action-incongruent object. Thus, our findings show that covert gaze following can be modulated by expectations that humans hold regarding successive steps of the action performed by an observed agent.

  4. Concepts of Interface Usability and the Enhancement of Design through Eye Tracking and Psychophysiology

    Science.gov (United States)

    2008-09-01

    interface evaluation using eye movements: methods and constructs. International Journal of Industrial Ergonomics, 24, 631-645. Goldberg, J., Kotval... combined measure based on physiological indices during a dual task of tracking and mental arithmetic. International Journal of Industrial Ergonomics, 35

  5. Olhar e contato ocular: desenvolvimento típico e comparação na Síndrome de Down [Gaze and eye contact: typical development and comparison in Down syndrome]

    Directory of Open Access Journals (Sweden)

    Aline Elise Gerbelli Belini

    2008-03-01

    OBJECTIVE: To investigate the development of gaze and eye contact in a baby with Down syndrome, comparing the frequency of her gaze toward different targets with the visual behavior of typically developing babies. METHODS: A female baby with Down syndrome, with no visual disorders diagnosed up to the end of data collection, and 17 typically developing babies were filmed monthly at home, in free interaction with their mothers, from the first to the fifth month of life. The frequency of gaze directed at 11 targets, among them "looking at the mother's eyes", was counted. RESULTS: Over the period, the typically developing babies showed statistically significant changes in the frequencies of "eyes closed" and of looking at "objects", "the researcher", "the environment", "their own body", "the mother's face" and "the mother's eyes". The sample was statistically stable for "looking at another person", "looking at the mother's body" and "opening and closing the eyes". The development of gaze and eye contact in the baby with Down syndrome was statistically very similar to that of the other babies, both in comparison with their averages (chi-square test) and with their individual variability (significant cluster analysis). CONCLUSIONS: Early interaction between the baby and her mother seems to influence the dyad's non-verbal communication more than genetically influenced limitations do. This may explain the similarities found between the development of visual behavior and eye contact in the baby with Down syndrome and in the children without developmental disorders.

  6. Integrating Service Design and Eye Tracking Insight for Designing Smart TV User Interfaces

    Directory of Open Access Journals (Sweden)

    Sheng-Ming Wang

    2015-07-01

    This research proposes a process that integrates a service design method and eye tracking insight for designing a Smart TV user interface. The service design method, which guides the combination of quality function deployment (QFD) and the analytic hierarchy process (AHP), is used to analyze the features of three Smart TV user interface design mockups. Scientific evidence, including effectiveness and efficiency data obtained from eye tracking experiments with six participants, provides information for analyzing the affordance of these design mockups. The results demonstrate a comprehensive methodology that can be used iteratively for redesigning, redefining and evaluating Smart TV user interfaces, and that helps relate the design of Smart TV user interfaces to users' behaviors and needs, thereby improving the affordance of the design. Future studies may analyze the data derived from eye tracking experiments to improve our understanding of the spatial relationship between designed elements in a Smart TV user interface.
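    The AHP step mentioned in this record reduces to deriving priority weights from a pairwise comparison matrix. A minimal sketch under invented judgments (the criteria names and Saaty-scale values below are illustrative, not taken from the study), using the geometric-mean approximation of the principal eigenvector:

    ```python
    # AHP priority weights from a pairwise comparison matrix (toy example).
    import numpy as np

    criteria = ['effectiveness', 'efficiency', 'satisfaction']
    # A[i, j] = how much more important criterion i is than j (Saaty 1-9 scale).
    A = np.array([
        [1.0,   3.0,   5.0],
        [1/3.0, 1.0,   3.0],
        [1/5.0, 1/3.0, 1.0],
    ])

    # Geometric-mean approximation of the principal eigenvector.
    gm = A.prod(axis=1) ** (1.0 / A.shape[0])
    weights = gm / gm.sum()

    # Consistency index: lambda_max equals n for perfectly consistent judgments.
    lam_max = (A @ weights / weights).mean()
    ci = (lam_max - A.shape[0]) / (A.shape[0] - 1)

    for name, w in zip(criteria, weights):
        print(f'{name}: {w:.3f}')
    print(f'consistency index: {ci:.3f}')
    ```

    Real AHP studies also compare the consistency index against a random index to obtain a consistency ratio before accepting the weights.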

  7. SD OCT Features of Macula and Silicon Oil–Retinal Interface in Eyes Status Post Vitrectomy for RRD

    OpenAIRE

    Manish Nagpal; Navneet Mehrotra; Rituraj Videkar; Rajen Mehta

    2015-01-01

    Aim: To objectively document findings at the silicone oil-retinal interface, macular status and tamponade effect in silicone oil (SO) filled eyes using SD OCT. Methods: 104 eyes of 104 patients that had undergone silicone oil injection following vitrectomy for rhegmatogenous retinal detachment underwent SD OCT examination with horizontal and vertical macular scans. Findings were divided into 3 groups; Group A: findings at the silicone oil-retinal interface, Group B: macular patholog...

  8. The Development of Joint Visual Attention: A Longitudinal Study of Gaze following during Interactions with Mothers and Strangers

    Science.gov (United States)

    Gredeback, Gustaf; Fikke, Linn; Melinder, Annika

    2010-01-01

    Two- to 8-month-old infants interacted with their mother or a stranger in a prospective longitudinal gaze following study. Gaze following, as assessed by eye tracking, emerged between 2 and 4 months and stabilized between 6 and 8 months of age. Overall, infants followed the gaze of a stranger more than they followed the gaze of their mothers,…

  9. The Expressive Gaze Model: Using Gaze to Express Emotion

    Science.gov (United States)

    2010-07-01

    Thomas and O. Johnston, Disney Animation: The Illusion of Life, Abbeville Press, 1981. 2. B. Lance and S.C. Marsella, “Emotionally Expressive Head and... Thomas and Ollie Johnston. Early animators realized that the eyes are an important aspect of the human face regarding the communication and expression of emotion. They also found that properly animating believable, emotionally expressive gaze is extremely difficult. If it's done improperly

  10. PyGaze: an open-source, cross-platform toolbox for minimal-effort programming of eye-tracking experiments

    NARCIS (Netherlands)

    Dalmaijer, E.S.; Mathôt, S.; van der Stigchel, S.

    2014-01-01

    The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eye-tracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility.

  12. Multi-initialized States Referred Work Parameter Calibration for Gaze Tracking Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Qijie Zhao

    2012-09-01

    In order to adaptively calibrate the work parameters of an infrared-TV-based eye gaze tracking Human-Robot Interaction (HRI) system, a gaze direction sensing model is provided for detecting the eye gaze identified parameters. We paid particular attention to situations in which the user's head was in a different position relative to the interaction interface. An algorithm for automatically correcting the work parameters of the system is also presented, based on defining certain initial reference system states and analyzing the historical information of the interaction between a user and the system. Moreover, considering typical application cases and factors, and relying on minimum-error-rate Bayesian decision theory, a mechanism for identifying the system state and adaptively calibrating parameters is proposed. Finally, experiments with the established system suggest that the proposed mechanism and algorithm can identify the system work state in multiple situations, and can automatically correct the work parameters to meet the demands of a gaze tracking HRI system.
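    The minimum-error-rate Bayesian decision rule referenced above amounts to choosing the state with the largest posterior probability given an observed feature. The sketch below is a generic illustration with invented states, priors, and Gaussian likelihoods, not the paper's actual model:

    ```python
    # Minimum-error-rate (maximum a posteriori) decision over system states.
    # States, priors and likelihood parameters are invented placeholders.
    import math

    def gauss(mu, sigma):
        """Return a 1-D Gaussian density function."""
        return lambda x: math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

    def bayes_decide(x, priors, likelihoods):
        """priors: {state: P(state)}; likelihoods: {state: p(x|state)} as callables.
        Returns (MAP state, normalized posteriors)."""
        posteriors = {s: priors[s] * likelihoods[s](x) for s in priors}
        z = sum(posteriors.values())
        posteriors = {s: p / z for s, p in posteriors.items()}
        return max(posteriors, key=posteriors.get), posteriors

    # Toy example: decide whether the calibration is 'valid' or 'drifted'
    # from the observed gaze-estimation error (in degrees).
    priors = {'valid': 0.8, 'drifted': 0.2}
    likelihoods = {'valid': gauss(0.5, 0.3), 'drifted': gauss(2.0, 0.8)}

    state, post = bayes_decide(1.8, priors, likelihoods)
    print(state)  # → drifted
    ```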

  13. Eye-hand Hybrid Gesture Recognition System for Human Machine Interface

    Directory of Open Access Journals (Sweden)

    N. R. Raajan

    2013-04-01

    Gesture recognition has become a way for computers to recognise and understand human body language, bridging the gap between machines and human beings and making primitive interfaces like keyboards and mice redundant. This paper suggests a hybrid gesture recognition system for computer interfaces and wireless robot control. The real-time eye-hand gesture recognition system can be used for computer drawing, navigating cursors and simulating mouse clicks, playing games, controlling a wireless robot with commands, and more. The robot illustrated in this paper is controlled by an RF module. Playing a PING-PONG game has also been demonstrated using the gestures. Haar cascade classifiers and template matching are used to detect eye gestures, and a convex hull is used for finding the defects and counting the number of fingers in the given region.

  14. Eye-based head gestures

    DEFF Research Database (Denmark)

    Mardanbegi, Diako; Witzner Hansen, Dan; Pederson, Thomas

    2012-01-01

    A novel method for video-based head gesture recognition using eye information from an eye tracker is proposed. The method uses a combination of gaze and eye movement to infer head gestures. Compared to other gesture-based methods, a major advantage of the method is that the user keeps the gaze…

  15. Evaluation of the Tobii EyeX Eye tracking controller and Matlab toolkit for research.

    Science.gov (United States)

    Gibaldi, Agostino; Vanegas, Mauricio; Bex, Peter J; Maiello, Guido

    2016-07-11

    The Tobii EyeX Controller is a new low-cost binocular eye tracker marketed for integration in gaming and consumer applications. The manufacturers claim that the system was conceived for natural eye gaze interaction, does not require continuous recalibration, and allows moderate head movements. The Controller is provided with an SDK to foster the development of new eye tracking applications. We review the characteristics of the device for its possible use in scientific research. We develop and evaluate an open-source Matlab Toolkit that can be employed to interface with the EyeX device for gaze recording in behavioral experiments. The Toolkit provides calibration procedures tailored to both binocular and monocular experiments, as well as procedures to evaluate other eye tracking devices. The observed performance of the EyeX (accuracy < 0.6°, precision < 0.25°, latency < 50 ms and sampling frequency ≈ 55 Hz) is sufficient for some classes of research application. The device can be successfully employed to measure fixation parameters and saccadic, smooth pursuit and vergence eye movements. However, the relatively low sampling rate and moderate precision limit the suitability of the EyeX for monitoring microsaccadic eye movements or for real-time gaze-contingent stimulus control. For these applications, research-grade, high-cost eye tracking technology may still be necessary. Therefore, despite its limitations with respect to high-end devices, the EyeX has the potential to further the dissemination of eye tracking technology to a broad audience, and could be a valuable asset in consumer and gaming applications as well as a subset of basic and clinical research settings.
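    Accuracy and precision figures like those quoted above are conventionally computed from fixation samples: accuracy as the mean angular offset from a known target, precision as the RMS of successive sample-to-sample distances. A sketch on synthetic gaze data (the offset and jitter values below are invented, not the EyeX's):

    ```python
    # Conventional eye-tracker accuracy/precision metrics on synthetic samples.
    import numpy as np

    def accuracy_deg(gaze, target):
        """Mean Euclidean offset (deg) between gaze samples and the target,
        both expressed in degrees of visual angle."""
        return float(np.mean(np.linalg.norm(gaze - target, axis=1)))

    def precision_rms_deg(gaze):
        """RMS of distances between successive samples (deg)."""
        d = np.linalg.norm(np.diff(gaze, axis=0), axis=1)
        return float(np.sqrt(np.mean(d ** 2)))

    rng = np.random.default_rng(1)
    target = np.array([0.0, 0.0])
    # 55 samples (~1 s at roughly 55 Hz) with a 0.4° systematic offset
    # and 0.1° per-axis jitter — invented values for illustration.
    gaze = target + np.array([0.4, 0.0]) + 0.1 * rng.standard_normal((55, 2))

    print(f'accuracy:  {accuracy_deg(gaze, target):.2f} deg')
    print(f'precision: {precision_rms_deg(gaze):.2f} deg')
    ```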

  16. Direct Gaze Modulates Face Recognition in Young Infants

    Science.gov (United States)

    Farroni, Teresa; Massaccesi, Stefano; Menon, Enrica; Johnson, Mark H.

    2007-01-01

    From birth, infants prefer to look at faces that engage them in direct eye contact. In adults, direct gaze is known to modulate the processing of faces, including the recognition of individuals. In the present study, we investigate whether direction of gaze has any effect on face recognition in four-month-old infants. Four-month infants were shown…

  17. Discussion and Future Directions for Eye Tracker Development

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Mulvey, Fiona; Mardanbegi, Diako

    2011-01-01

    Eye and gaze tracking have a long history but there is still plenty of room for further development. In this concluding chapter for Section 6, we consider future perspectives for the development of eye and gaze tracking.

  18. The Use of Gaze to Control Drones

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Alapetite, Alexandre; MacKenzie, I. Scott

    2014-01-01

    This paper presents an experimental investigation of gaze-based control modes for unmanned aerial vehicles (UAVs or “drones”). Ten participants performed a simple flying task. We gathered empirical measures, including task completion time, and examined the user experience for difficulty, reliability, and fun. One of the control modes was considered significantly more reliable than the others. We discuss design and performance issues for the gaze-plus-manual split of controls when drones are operated using gaze in conjunction with tablets, near-eye displays (glasses), or monitors.

  19. Cognitive assessment of executive functions using brain computer interface and eye-tracking

    Directory of Open Access Journals (Sweden)

    P. Cipresso

    2013-03-01

    New technologies to enable augmentative and alternative communication in Amyotrophic Lateral Sclerosis (ALS) have been used in several recent studies. However, a comprehensive battery for cognitive assessment has not yet been implemented. Brain-computer interfaces are innovative systems able to generate a control signal from brain responses, conveying messages directly to a computer. Another available technology for communication purposes is the eye-tracker system, which conveys messages from eye movements to a computer. In this study we explored the use of these two technologies for the cognitive assessment of executive functions in a healthy population and in an ALS patient, also verifying usability, pleasantness, fatigue, and emotional aspects related to the setting. Our preliminary results may have interesting implications for both clinical practice (the availability of an effective tool for the neuropsychological evaluation of ALS patients) and ethical issues.

  20. A non-verbal Turing test: Differentiating mind from machine in gaze-based social interaction

    OpenAIRE

    Ulrich J Pfeiffer; Bert Timmermans; Gary Bente; Kai Vogeley; Leonhard Schilbach

    2011-01-01

    In social interaction, gaze behavior provides important signals that have a significant impact on our perception of others. Previous investigations, however, have relied on paradigms in which participants are passive observers of other persons' gazes and do not adjust their gaze behavior as is the case in real-life social encounters. We used an interactive eye-tracking paradigm that allows participants to interact with an anthropomorphic virtual character whose gaze behavior is responsive to ...

  1. Gaze perception in social anxiety and social anxiety disorder

    Directory of Open Access Journals (Sweden)

    Lars eSchulze

    2013-12-01

    Clinical observations suggest abnormal gaze perception to be an important indicator of social anxiety disorder (SAD). Experimental research has as yet paid relatively little attention to the study of gaze perception in SAD. In this article we first discuss gaze perception in healthy human beings before reviewing self-referential and threat-related biases of gaze perception in clinical and non-clinical socially anxious samples. Relative to controls, socially anxious individuals exhibit an enhanced self-directed perception of gaze direction and demonstrate a pronounced fear of direct eye contact, though findings are less consistent regarding the avoidance of mutual gaze in SAD. Prospects for future research and clinical implications are discussed.

  2. Look Together: Analyzing Gaze Coordination with Epistemic Network Analysis

    Directory of Open Access Journals (Sweden)

    Sean eAndrist

    2015-07-01

    When conversing and collaborating in everyday situations, people naturally and interactively align their behaviors with each other across various communication channels, including speech, gesture, posture, and gaze. Having access to a partner's referential gaze behavior has been shown to be particularly important in achieving collaborative outcomes, but the process by which people's gaze behaviors unfold over the course of an interaction and become tightly coordinated is not well understood. In this paper, we present work to develop a deeper and more nuanced understanding of coordinated referential gaze in collaborating dyads. We recruited 13 dyads to participate in a collaborative sandwich-making task and used dual mobile eye tracking to synchronously record each participant's gaze behavior. We used a relatively new analysis technique, epistemic network analysis, to jointly model the gaze behaviors of both conversational participants. In this analysis, network nodes represent gaze targets for each participant, and edge strengths convey the likelihood of simultaneous gaze to the connected target nodes during a given time slice. We divided collaborative task sequences into discrete phases to examine how the networks of shared gaze evolved over longer time windows. We conducted three separate analyses of the data to reveal (1) properties and patterns of how gaze coordination unfolds throughout an interaction sequence, (2) optimal time lags of gaze alignment within a dyad at different phases of the interaction, and (3) differences in gaze coordination patterns for interaction sequences that lead to breakdowns and repairs. In addition to contributing to the growing body of knowledge on the coordination of gaze behaviors in joint activities, this work has implications for the design of future technologies that engage in situated interactions with human users.
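    The edge-strength notion described above (likelihood of simultaneous gaze to a pair of targets) can be estimated directly from two time-aligned gaze-target streams, one per participant. A minimal sketch with invented target labels and streams:

    ```python
    # Estimate co-gaze edge weights from two synchronized gaze-target streams.
    # Streams and labels are invented for illustration.
    from collections import Counter

    def cogaze_edges(gaze_a, gaze_b):
        """Return {(target_a, target_b): relative frequency of simultaneous gaze}."""
        assert len(gaze_a) == len(gaze_b), 'streams must be time-aligned'
        pairs = Counter(zip(gaze_a, gaze_b))
        n = len(gaze_a)
        return {pair: count / n for pair, count in pairs.items()}

    # Two participants looking at objects during 8 synchronized time slices.
    gaze_a = ['bread', 'bread', 'knife', 'knife', 'partner', 'bread', 'bread', 'plate']
    gaze_b = ['bread', 'knife', 'knife', 'knife', 'bread',   'bread', 'bread', 'plate']

    edges = cogaze_edges(gaze_a, gaze_b)
    for (a, b), w in sorted(edges.items(), key=lambda kv: -kv[1]):
        print(f'{a} - {b}: {w:.3f}')
    ```

    Computing these weights per task phase, as in the study's phase-wise analysis, would show how the co-gaze network evolves over an interaction.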

  3. Evaluation of a low-cost open-source gaze tracker

    DEFF Research Database (Denmark)

    San Agustin, Javier; Jensen, Henrik Tomra Skovsgaard Hegner; Møllenbach, Emilie;

    2010-01-01

    This paper presents a low-cost gaze tracking system that is based on a webcam mounted close to the user's eye. The performance of the gaze tracker was evaluated in an eye-typing task using two different typing applications. Participants could type between 3.56 and 6.78 words per minute, depending on the typing application used.

  4. Coding gaze tracking data with chromatic gradients for VR Exposure Therapy

    DEFF Research Database (Denmark)

    Herbelin, Bruno; Grillon, Helena; De Heras Ciechomski, Pablo

    2007-01-01

    This article presents a simple and intuitive way to represent the eye-tracking data gathered during immersive virtual reality exposure therapy sessions. Eye-tracking technology is used to observe gaze movements during virtual reality sessions, and the gaze-map chromatic gradient coding allows…

  5. Children's Knowledge of Deceptive Gaze Cues and Its Relation to Their Actual Lying Behavior

    Science.gov (United States)

    McCarthy, Anjanie; Lee, Kang

    2009-01-01

    Eye gaze plays a pivotal role during communication. When interacting deceptively, it is commonly believed that the deceiver will break eye contact and look downward. We examined whether children's gaze behavior when lying is consistent with this belief. In our study, 7- to 15-year-olds and adults answered questions truthfully ("Truth" questions)…

  6. The effect of gaze direction on three-dimensional face recognition in infants.

    Science.gov (United States)

    Yamashita, Wakayo; Kanazawa, So; Yamaguchi, Masami K

    2012-09-01

    Eye gaze is an important tool for social contact. In this study, we investigated whether direct gaze facilitates the recognition of three-dimensional face images in infants. We presented artificially produced face images in rotation to 6- to 8-month-old infants. The eye gaze of the face images was either direct or averted. Sixty-one sequential images of each face were created by rotating the face about its vertical axis from frontal view to ±30°. The recognition performance of the infants was then compared between faces with direct gaze and faces with averted gaze. Infants showed evidence of discriminating the novel face from the familiarized face by 8 months of age, and only when gaze was direct. These results suggest that gaze direction may affect three-dimensional face recognition in infants.

  7. Perceptual and not physical eye contact elicits pupillary dilation.

    Science.gov (United States)

    Honma, Motoyasu; Tanaka, Yasuto; Osada, Yoshihisa; Kuriyama, Kenichi

    2012-01-01

    Eye contact is important for sharing communication during social interactions. However, how accurately humans can perceive the gaze direction of others toward themselves, and whether pupils dilate when humans consciously or unconsciously perceive that their own eyes are being looked at by others, remain unclear. In this study, we examined the relationship between the explicit perception of looking into each other's eyes and the implicit physiological response of pupillary dilation by using an original face-to-face method. We found that humans do not correctly detect the gaze direction of others. Furthermore, one's pupils dilated when one gazed at others' eyes. Awareness of others' gaze on one's eyes, rather than the actual focusing of others' gaze on one's eyes, enhanced pupillary dilation. Therefore, physiological responses are caused not when people actually look into each other's gaze, but when the consciousness of another's gaze is activated, which suggests that eye contact often involves one-way communication.

  8. Gaze beats mouse

    DEFF Research Database (Denmark)

    Mateo, Julio C.; San Agustin, Javier; Hansen, John Paulin

    2008-01-01

    Facial EMG for selection is fast, easy and, combined with gaze pointing, it can provide completely hands-free interaction. In this pilot study, 5 participants performed a simple point-and-select task using mouse or gaze for pointing and a mouse button or a facial-EMG switch for selection. Gaze pointing was faster than mouse pointing, while maintaining a similar error rate. EMG and mouse-button selection had comparable performance. From analyses of completion time, throughput, and error rates, we concluded that the combination of gaze and facial EMG holds potential for outperforming the mouse.

  9. Gaze Tracking Through Smartphones

    DEFF Research Database (Denmark)

    Skovsgaard, Henrik; Hansen, John Paulin; Møllenbach, Emilie

    Mobile gaze trackers embedded in smartphones or tablets provide a powerful personal link to game devices, head-mounted micro-displays, PCs and TVs. This link may offer a main road to the mass market for gaze interaction, we suggest.

  10. Fractal fluctuations in gaze speed visual search.

    Science.gov (United States)

    Stephen, Damian G; Anastas, Jason

    2011-04-01

    Visual search involves a subtle coordination of visual memory and lower-order perceptual mechanisms. Specifically, the fluctuations in gaze may provide support for visual search above and beyond what may be attributed to memory. Prior research indicates that gaze during search exhibits fractal fluctuations, which allow for a wide sampling of the field of view. Fractal fluctuations constitute a case of fast diffusion that may provide an advantage in exploration. We present reanalyses of eye-tracking data collected by Stephen and Mirman (Cognition, 115, 154-165, 2010) for single-feature and conjunction search tasks. Fluctuations in gaze during these search tasks were indeed fractal. Furthermore, the degree of fractality predicted decreases in reaction time on a trial-by-trial basis. We propose that fractality may play a key role in explaining the efficacy of perceptual exploration.
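    The record above reports that gaze-speed fluctuations during search are fractal. A standard way to quantify such scaling is detrended fluctuation analysis (DFA), which fits the log of root-mean-square detrended fluctuation against the log of window size; the slope is the scaling exponent (about 0.5 for white noise, near 1 for 1/f-like fluctuations). This is a generic sketch, not the authors' exact analysis pipeline.

```python
import math
import random

def linfit(xs, ys):
    # Ordinary least-squares slope and intercept
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def dfa_alpha(series, window_sizes):
    # Detrended fluctuation analysis: estimate the scaling exponent alpha
    mean = sum(series) / len(series)
    profile, total = [], 0.0
    for v in series:
        total += v - mean
        profile.append(total)  # cumulative sum of the centered series
    log_n, log_f = [], []
    for n in window_sizes:
        rms_sq = []
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            xs = list(range(n))
            slope, icept = linfit(xs, seg)
            resid = [seg[i] - (slope * i + icept) for i in range(n)]
            rms_sq.append(sum(r * r for r in resid) / n)
        log_n.append(math.log(n))
        log_f.append(0.5 * math.log(sum(rms_sq) / len(rms_sq)))
    alpha, _ = linfit(log_n, log_f)
    return alpha

# Sanity check on synthetic data: white noise should give alpha near 0.5
random.seed(1)
noise = [random.gauss(0, 1) for _ in range(4096)]
alpha = dfa_alpha(noise, [16, 32, 64, 128, 256])
```

    Applied to a gaze-speed time series, a per-trial alpha estimated this way is the kind of "degree of fractality" that the study relates to reaction times.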

  11. Multifunctional ZnO interfaces with hierarchical micro- and nanostructures: bio-inspiration from the compound eyes of butterflies

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Sha; Yang, Yefeng; Jin, Yizheng; Huang, Jingyun; Zhao, Binghui; Ye, Zhizhen [Zhejiang University, State Key Laboratory of Silicon Materials, Department of Materials Science and Engineering, Hangzhou (China)

    2010-07-15

    Multifunctional zinc oxide (ZnO) interfaces were fabricated by utilizing the technique of low-temperature metal-organic chemical vapor deposition (MOCVD). The ZnO interfacial material exhibits antiwetting, antireflectance, and photonic properties derived from the unique hierarchical micro- and nanostructures of butterfly compound eyes. We demonstrate that the fabrication of the multifunctional interfaces by using biotemplates can be applied to other materials, such as Pt. Our study provides an excellent example of obtaining multifunctional interfaces by learning from nature. (orig.)

  12. Gaze interaction from bed

    DEFF Research Database (Denmark)

    Hansen, John Paulin; San Agustin, Javier; Jensen, Henrik Tomra Skovsgaard Hegner

    2011-01-01

    This paper presents a low-cost gaze tracking solution for bedbound people composed of free-ware tracking software and commodity hardware. Gaze interaction is done on a large wall-projected image, visible to all people present in the room. The hardware equipment leaves physical space free to assist...

  13. Gazes and Performances

    DEFF Research Database (Denmark)

    Larsen, Jonas

    Recent literature has critiqued the notion of the 'tourist gaze' for reducing tourism to visual experiences ('sightseeing') and neglecting other senses and bodily experiences of doing tourism. A so-called 'performance turn' within tourist studies highlights how tourists experience places… revised and expanded Tourist Gaze 3.0 that I am writing with John Urry at the moment…

  14. The Relationship between Children's Gaze Reporting and Theory of Mind

    Science.gov (United States)

    D'Entremont, Barbara; Seamans, Elizabeth; Boudreau, Elyse

    2012-01-01

    Seventy-nine 3- and 4-year-old children were tested on gaze-reporting ability and Wellman and Liu's (2004) continuous measure of theory of mind (ToM). Children were better able to report where someone was looking when eye and head direction were provided as a cue compared with when only eye direction cues were provided. With the exception of…

  15. Learning to Interact with a Computer by Gaze

    Science.gov (United States)

    Aoki, Hirotaka; Hansen, John Paulin; Itoh, Kenji

    2008-01-01

    The aim of this paper is to examine the learning processes that subjects undertake when they start using gaze as computer input. A 7-day experiment with eight Japanese students was carried out to record novice users' eye movement data during typing of 110 sentences. The experiment revealed that inefficient eye movements were dramatically reduced…

  16. Gaze interaction in UAS video exploitation

    Science.gov (United States)

    Hild, Jutta; Brüstle, Stefan; Heinze, Norbert; Peinsipp-Byma, Elisabeth

    2013-05-01

    A frequently occurring interaction task in UAS video exploitation is the marking or selection of objects of interest in the video. If an object of interest is visually detected by the image analyst, its selection/marking for further exploitation, documentation and communication with the team is a necessary task. Today object selection is usually performed by mouse interaction. As all objects in the video move due to sensor motion, object selection can be rather challenging, especially if strong and fast ego-motions are present, e.g., with small airborne sensor platforms. In addition, objects of interest are sometimes visible too briefly to be selected by the analyst using mouse interaction. To address this issue we propose an eye tracker as input device for object selection. As the eye tracker continuously provides the gaze position of the analyst on the monitor, it is intuitive to use the gaze position for pointing at an object. The selection is then actuated by pressing a button. We integrated this gaze-based "gaze + key press" object selection into Fraunhofer IOSB's exploitation station ABUL using a Tobii X60 eye tracker and a standard keyboard for the button press. Representing the object selections in a spatial relational database, ABUL enables the image analyst to efficiently query the video data in a post-processing step for selected objects of interest with respect to their geographical and other properties. An experimental evaluation is presented, comparing gaze-based interaction with mouse interaction in the context of object selection in UAS videos.
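    The core of "gaze + key press" selection as described above is: at the moment of the button press, resolve the tracked object nearest the current gaze point, within some tolerance for gaze jitter. The sketch below illustrates that resolution step; the class and function names are hypothetical, not taken from ABUL.

```python
import math
from dataclasses import dataclass

@dataclass
class TrackedObject:
    # Hypothetical stand-in for an object track, in monitor pixel coordinates
    object_id: int
    x: float  # track centre, x
    y: float  # track centre, y

def select_on_keypress(gaze_xy, objects, max_radius=80.0):
    """On button press, return the tracked object closest to the current
    gaze point, provided it lies within max_radius pixels (jitter margin);
    return None if no object is close enough."""
    gx, gy = gaze_xy
    best, best_d = None, max_radius
    for obj in objects:
        d = math.hypot(obj.x - gx, obj.y - gy)
        if d <= best_d:
            best, best_d = obj, d
    return best

objects = [TrackedObject(1, 100, 100), TrackedObject(2, 400, 300)]
hit = select_on_keypress((410, 296), objects)   # gaze lands near object 2
miss = select_on_keypress((700, 50), objects)   # nothing within the radius
```

    The tolerance radius trades off selection robustness against ambiguity when tracked objects are densely packed; in a moving video, the object positions would be updated each frame before the lookup.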

  17. Culture, gaze and the neural processing of fear expressions

    OpenAIRE

    2009-01-01

    The direction of others’ eye gaze has important influences on how we perceive their emotional expressions. Here, we examined differences in neural activation to direct- versus averted-gaze fear faces as a function of culture of the participant (Japanese versus US Caucasian), culture of the stimulus face (Japanese versus US Caucasian), and the relation between the two. We employed a previously validated paradigm to examine differences in neural activation in response to rapidly presented direc...

  18. Just one look: Direct gaze briefly disrupts visual working memory.

    Science.gov (United States)

    Wang, J Jessica; Apperly, Ian A

    2017-04-01

    Direct gaze is a salient social cue that affords rapid detection. A body of research suggests that direct gaze enhances performance on memory tasks (e.g., Hood, Macrae, Cole-Davies, & Dias, Developmental Science, 1, 67-71, 2003). Nonetheless, other studies highlight the disruptive effect direct gaze has on concurrent cognitive processes (e.g., Conty, Gimmig, Belletier, George, & Huguet, Cognition, 115(1), 133-139, 2010). This discrepancy raises questions about the effects direct gaze may have on concurrent memory tasks. We addressed this topic by employing a change detection paradigm, where participants retained information about the color of small sets of agents. Experiment 1 revealed that, despite the irrelevance of the agents' eye gaze to the memory task at hand, participants were worse at detecting changes when the agents looked directly at them compared to when the agents looked away. Experiment 2 showed that the disruptive effect was relatively short-lived. Prolonged presentation of direct gaze led to recovery from the initial disruption, rather than a sustained disruption on change detection performance. The present study provides the first evidence that direct gaze impairs visual working memory with a rapidly-developing yet short-lived effect even when there is no need to attend to agents' gaze.

  19. Novel automatic eye detection and tracking algorithm

    Science.gov (United States)

    Ghazali, Kamarul Hawari; Jadin, Mohd Shawal; Jie, Ma; Xiao, Rui

    2015-04-01

    The eye is not only one of the most complex but also one of the most important sensory organs of the human body. Eye detection and eye tracking are fundamental and much-studied issues in image processing. Non-invasive eye location and eye tracking are promising for hands-off gaze-based human-computer interfaces, fatigue detection, instrument control by paraplegic patients, and so on. For this purpose, an innovative framework for detecting and tracking eyes in video sequences is proposed in this paper. The contributions of this work can be divided into two parts. The first contribution is that eye filters were trained which can detect eye location efficiently and accurately without constraints on the background and skin colour. The second contribution is that a tracking framework based on sparse representation and the Lucas-Kanade (LK) optical-flow tracker was built which can track the eye without constraints on eye status. The experimental results demonstrate the accuracy and the real-time applicability of the proposed approach.

  20. A novel device for head gesture measurement system in combination with eye-controlled human machine interface

    Science.gov (United States)

    Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun

    2006-06-01

    This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. This system is a highly effective human-machine interface, detecting head movement by changing positions and numbers of light sources on the head. When the users utilize the head-mounted display to browse a computer screen, the system will catch the images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, the program in the computer will locate each center point of the pupils in the images, and record the information on moving traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, so the system catches images of the user's head by using a CCD camera in front of the user. The computer program will locate the center point of the head, transferring it to the screen coordinates, and then the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface system for the virtual reality applications.
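    The eye-tracking component above "locates each center point of the pupils in the images". A minimal baseline for that step is to threshold the dark pixels of a grayscale eye patch and take their centroid; this is a generic sketch under that assumption, not the paper's actual algorithm.

```python
def pupil_center(gray, threshold=50):
    """Estimate the pupil centre as the centroid of pixels darker than
    `threshold` in a grayscale eye image (a list of rows of 0-255 values).
    Returns (x, y) in pixel coordinates, or None if no dark pixel exists."""
    sx = sy = count = 0
    for y, row in enumerate(gray):
        for x, v in enumerate(row):
            if v < threshold:
                sx += x
                sy += y
                count += 1
    if count == 0:
        return None
    return (sx / count, sy / count)

# Synthetic 5x5 eye patch: bright sclera (200) with a dark 2x2 pupil
img = [[200] * 5 for _ in range(5)]
for y in (2, 3):
    for x in (1, 2):
        img[y][x] = 10
center = pupil_center(img)  # → (1.5, 2.5)
```

    Real systems refine such an estimate (e.g. against corneal reflections or with ellipse fitting), but the centroid already gives the per-frame pupil coordinate that a cursor-control loop like the one described can consume.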

  1. How Beauty Determines Gaze! Facial Attractiveness and Gaze Duration in Images of Real World Scenes

    Directory of Open Access Journals (Sweden)

    Helmut Leder

    2016-08-01

    Full Text Available We showed that the looking time spent on faces is a valid covariate of beauty by testing the relation between facial attractiveness and gaze behavior. We presented natural scenes which always pictured two people, encompassing a wide range of facial attractiveness. Employing measurements of eye movements in a free viewing paradigm, we found a linear relation between facial attractiveness and gaze behavior: The more attractive the face, the longer and the more often it was looked at. In line with evolutionary approaches, the positive relation was particularly pronounced when participants viewed other sex faces.

  2. How Beauty Determines Gaze! Facial Attractiveness and Gaze Duration in Images of Real World Scenes

    Science.gov (United States)

    Mitrovic, Aleksandra; Goller, Jürgen

    2016-01-01

    We showed that the looking time spent on faces is a valid covariate of beauty by testing the relation between facial attractiveness and gaze behavior. We presented natural scenes which always pictured two people, encompassing a wide range of facial attractiveness. Employing measurements of eye movements in a free viewing paradigm, we found a linear relation between facial attractiveness and gaze behavior: The more attractive the face, the longer and the more often it was looked at. In line with evolutionary approaches, the positive relation was particularly pronounced when participants viewed other sex faces. PMID:27698984

  3. Gazing and Performing

    DEFF Research Database (Denmark)

    Larsen, Jonas; Urry, John

    2011-01-01

    The Tourist Gaze [Urry J, 1990 (Sage, London)] is one of the most discussed and cited tourism books (with about 4000 citations on Google Scholar). Whilst wide ranging in scope, the book is known for the Foucault-inspired concept of the tourist gaze that brings out the fundamentally visual and image… that the doings of tourism are physical or corporeal and not merely visual, and it is necessary to regard 'performing' rather than 'gazing' as the dominant tourist research paradigm. Yet we argue here that there are, in fact, many similarities between the paradigms of gaze and of performance. They should 'dance…

  4. Camera Mouse Including “Ctrl-Alt-Del” Key Operation Using Gaze, Blink, and Mouth Shape

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-04-01

    Full Text Available This paper presents a camera mouse system with an additional feature: a "CTRL-ALT-DEL" key. Previous gaze-based camera mouse systems consider only how to obtain gaze and make selections. We propose a gaze-based camera mouse with a "CTRL-ALT-DEL" key. An infrared camera is placed on top of the display while the user looks ahead. The user's gaze is estimated from eye gaze and head pose. Blink and mouth detection are used to create the "CTRL-ALT-DEL" key. Pupil knowledge is used to improve the robustness of eye-gaze estimation across different users. Also, a Gabor filter is used to extract face features. Skin-color information and face features are used to estimate head pose. Experiments on each method have been carried out, and the results show that all methods work perfectly. With this system, troubleshooting of the camera mouse can be done by the user, making the camera mouse more sophisticated.

  5. Tracking Eyes using Shape and Appearance

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Nielsen, Mads; Hansen, John Paulin

    2002-01-01

    We propose a non-intrusive eye tracking system intended for the use of everyday gaze typing using web cameras. We argue that high precision in gaze tracking is not needed for on-screen typing due to natural language redundancy. This facilitates the use of low-cost video components for advanced… to infer the state of the eye such as eye corners and the pupil location under scale and rotational changes. We use a Gaussian Process interpolation method for gaze determination, which facilitates stability feedback from the system. The use of a learning method for gaze estimation gives more flexibility…

  6. A comparison of facial color pattern and gazing behavior in canid species suggests gaze communication in gray wolves (Canis lupus).

    Directory of Open Access Journals (Sweden)

    Sayoko Ueda

    Full Text Available As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication.

  7. A comparison of facial color pattern and gazing behavior in canid species suggests gaze communication in gray wolves (Canis lupus).

    Science.gov (United States)

    Ueda, Sayoko; Kumagai, Gaku; Otaki, Yusuke; Yamaguchi, Shinya; Kohshima, Shiro

    2014-01-01

    As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication.

  8. Vestibular and cerebellar contribution to gaze optimality.

    Science.gov (United States)

    Sağlam, Murat; Glasauer, Stefan; Lehnen, Nadine

    2014-04-01

    Patients with chronic bilateral vestibular loss have large gaze variability and experience disturbing oscillopsia, which impacts physical and social functioning, and quality of life. Gaze variability and oscillopsia in these patients are attributed to a deficient vestibulo-ocular reflex, i.e. impaired online feedback motor control. Here, we assessed whether the lack of vestibular input also affects feed-forward motor learning, i.e. the ability to choose optimal movement parameters that minimize variability during active movements such as combined eye-head gaze shifts. A failure to learn from practice and reshape feed-forward motor commands in response to sensory error signals to achieve appropriate movements has been proposed to explain dysmetric gaze shifts in patients with cerebellar ataxia. We, therefore, assessed the differential roles of both sensory vestibular information and the cerebellum in choosing optimal movement kinematics. We have previously shown that, in the course of several gaze shifts, healthy subjects adjust the motor command to minimize endpoint variability also when movements are experimentally altered by an increase in the head moment of inertia. Here, we increased the head inertia in five patients with chronic complete bilateral vestibular loss (aged 45.4±7.1 years, mean±standard deviation), nine patients with cerebellar ataxia (aged 56.7±12.6 years), and 10 healthy control subjects (aged 39.7±6.3 years) while they performed large (75° and 80°) horizontal gaze shifts towards briefly flashed targets in darkness and, using our previous optimal control model, compared their gaze shift parameters to the expected optimal movements with increased head inertia. Patients with chronic bilateral vestibular loss failed to update any of the gaze shift parameters to the new optimum with increased head inertia. Consequently, they displayed highly variable, suboptimal gaze shifts. Patients with cerebellar ataxia updated some movement parameters to

  9. Vestibulo-Ocular Reflex Suppression during Head-Fixed Saccades Reveals Gaze Feedback Control

    OpenAIRE

    Daye, Pierre M.; Dale C Roberts; David S Zee; Optican, Lance M.

    2015-01-01

    Previous experiments have shown that the vestibulo-ocular reflex (VOR) is partially suppressed during large head-free gaze (gaze = eye-in-head + head-in-space) shifts when both the eyes and head are moving actively, on a fixed body, or when the eyes are moving actively and the head passively on a fixed body. We tested, in human subjects, the hypothesis that the VOR is also suppressed during gaze saccades made with en bloc, head and body together, rotations. Subjects made saccades by following...

  10. Vestibulo-ocular reflex suppression during head-fixed saccades reveals gaze feedback control.

    Science.gov (United States)

    Daye, Pierre M; Roberts, Dale C; Zee, David S; Optican, Lance M

    2015-01-21

    Previous experiments have shown that the vestibulo-ocular reflex (VOR) is partially suppressed during large head-free gaze (gaze = eye-in-head + head-in-space) shifts when both the eyes and head are moving actively, on a fixed body, or when the eyes are moving actively and the head passively on a fixed body. We tested, in human subjects, the hypothesis that the VOR is also suppressed during gaze saccades made with en bloc, head and body together, rotations. Subjects made saccades by following a target light. During some trials, the chair rotated so as to move the entire body passively before, during, or after a saccade. The modulation of the VOR was a function of both saccade amplitude and the time of the head perturbation relative to saccade onset. Despite the perturbation, gaze remained accurate. Thus, VOR modulation is similar when gaze changes are programmed for the eyes alone or for the eyes and head moving together. We propose that the brain always programs a change in gaze using feedback based on gaze and head signals, rather than on separate eye and head trajectories.

  11. Learning to interact with a computer by gaze

    DEFF Research Database (Denmark)

    Aoki, Hirotaka; Hansen, John Paulin; Itoh, Kenji

    2008-01-01

    The aim of this paper is to examine the learning processes that subjects undertake when they start using gaze as computer input. A 7-day experiment with eight Japanese students was carried out to record novice users' eye movement data during typing of 110 sentences. The experiment revealed that inefficient eye movements were dramatically reduced after only 15 to 25 sentences of typing, equal to approximately 3-4 hours of practice. The performance data fit a general learning model based on the power law of practice. The learning model can be used to estimate further improvements in gaze typing…

  12. Social decisions affect neural activity to perceived dynamic gaze.

    Science.gov (United States)

    Latinus, Marianne; Love, Scott A; Rossi, Alejandra; Parada, Francisco J; Huang, Lisa; Conty, Laurence; George, Nathalie; James, Karin; Puce, Aina

    2015-11-01

    Gaze direction, a cue of both social and spatial attention, is known to modulate early neural responses to faces e.g. N170. However, findings in the literature have been inconsistent, likely reflecting differences in stimulus characteristics and task requirements. Here, we investigated the effect of task on neural responses to dynamic gaze changes: away and toward transitions (resulting or not in eye contact). Subjects performed, in random order, social (away/toward them) and non-social (left/right) judgment tasks on these stimuli. Overall, in the non-social task, results showed a larger N170 to gaze aversion than gaze motion toward the observer. In the social task, however, this difference was no longer present in the right hemisphere, likely reflecting an enhanced N170 to gaze motion toward the observer. Our behavioral and event-related potential data indicate that performing social judgments enhances saliency of gaze motion toward the observer, even those that did not result in gaze contact. These data and that of previous studies suggest two modes of processing visual information: a 'default mode' that may focus on spatial information; a 'socially aware mode' that might be activated when subjects are required to make social judgments. The exact mechanism that allows switching from one mode to the other remains to be clarified.

  13. Audience gaze while appreciating a multipart musical performance.

    Science.gov (United States)

    Kawase, Satoshi; Obata, Satoshi

    2016-11-01

    Visual information has been observed to be crucial for audience members during musical performances. The present study used an eye tracker to investigate audience members' gazes while appreciating an audiovisual musical ensemble performance, based on evidence of the dominance of musical part in auditory attention when listening to multipart music that contains different melody lines and on the joint-attention theory of gaze. We presented singing performances by a female duo. The main findings were as follows: (1) the melody part (soprano) attracted more visual attention than the accompaniment part (alto) throughout the piece, (2) joint attention emerged when the singers shifted their gazes toward their co-performer, suggesting that inter-performer gazing interactions that play a spotlight role mediated performer-audience visual interaction, and (3) musical part (melody or accompaniment) strongly influenced the total duration of gazes among audiences, while the spotlight effect of gaze was limited to just after the singers' gaze shifts. Copyright © 2016. Published by Elsevier Inc.

  14. Culture, gaze and the neural processing of fear expressions.

    Science.gov (United States)

    Adams, Reginald B; Franklin, Robert G; Rule, Nicholas O; Freeman, Jonathan B; Kveraga, Kestutis; Hadjikhani, Nouchine; Yoshikawa, Sakiko; Ambady, Nalini

    2010-06-01

    The direction of others' eye gaze has important influences on how we perceive their emotional expressions. Here, we examined differences in neural activation to direct- versus averted-gaze fear faces as a function of culture of the participant (Japanese versus US Caucasian), culture of the stimulus face (Japanese versus US Caucasian), and the relation between the two. We employed a previously validated paradigm to examine differences in neural activation in response to rapidly presented direct- versus averted-fear expressions, finding clear evidence for a culturally determined role of gaze in the processing of fear. Greater neural responsivity was apparent to averted- versus direct-gaze fear in several regions related to face and emotion processing, including bilateral amygdalae, when posed on same-culture faces, whereas greater response to direct- versus averted-gaze fear was apparent in these same regions when posed on other-culture faces. We also found preliminary evidence for intercultural variation including differential responses across participants to Japanese versus US Caucasian stimuli, and to a lesser degree differences in how Japanese and US Caucasian participants responded to these stimuli. These findings reveal a meaningful role of culture in the processing of eye gaze and emotion, and highlight their interactive influences in neural processing.

  15. Gaze-cueing requires intact face processing - Insights from acquired prosopagnosia.

    Science.gov (United States)

    Burra, Nicolas; Kerzel, Dirk; Ramon, Meike

    2017-04-01

    Gaze-cueing is the automatic spatial orienting of attention in the direction of perceived gaze. Participants respond faster to targets located at positions congruent with the direction of gaze, compared to incongruent ones (gaze cueing effect, GCE). However, it still remains unclear whether its occurrence depends on intact integration of information from the entire eye region or face, rather than simply the presence of the eyes per se. To address this question, we investigated the GCE in PS, an extensively studied case of pure acquired prosopagnosia. In our gaze-cueing paradigm, we manipulated the duration at which cues were presented (70ms vs. 400ms) and the availability of facial information (full-face vs. eyes-only). For 70ms cue duration, we found a context-dependent dissociation between PS and controls: PS showed a GCE for eyes-only stimuli, whereas controls showed a GCE only for full-face stimuli. For 400ms cue duration, PS showed gaze-cueing independently of stimulus context, whereas in healthy controls a GCE again emerged only for full-face stimuli. Our findings suggest that attentional deployment based on the gaze direction of briefly presented faces requires intact processing of facial information, which affords salience to the eye region. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. The Epistemology of the Gaze

    DEFF Research Database (Denmark)

    Kramer, Mette

    2007-01-01

    In psycho-semiotic film theory the gaze is often considered to be a straitjacket for the female spectator. If we approach the gaze through an empirical, so-called 'naturalised' lens, it is possible to regard the gaze as a functional device through which the spectator can obtain knowledge essential for …

  17. A GazeWatch Prototype

    DEFF Research Database (Denmark)

    Paulin Hansen, John; Biermann, Florian; Møllenbach, Emile

    2015-01-01

    We demonstrate potentials of adding a gaze tracking unit to a smartwatch, allowing hands-free interaction with the watch itself and control of the environment. Users give commands via gaze gestures, i.e. looking away and back to the GazeWatch. Rapid presentation of single words on the watch displ...

  18. Gaze stabilization reflexes in the mouse: New tools to study vision and sensorimotor

    NARCIS (Netherlands)

    B. van Alphen (Bart)

    2010-01-01

    Gaze stabilization reflexes are a popular model system in neuroscience for connecting neurophysiology and behavior as well as studying the neural correlates of behavioral plasticity. These compensatory eye movements are one of the simplest motor behaviors,

  19. Judgments at Gaze Value: Gaze Cuing in Banner Advertisements, Its Effect on Attention Allocation and Product Judgments.

    Science.gov (United States)

    Palcu, Johanna; Sudkamp, Jennifer; Florack, Arnd

    2017-01-01

    Banner advertising is a popular means of promoting products and brands online. Although banner advertisements are often designed to be particularly attention grabbing, they frequently go unnoticed. Applying an eye-tracking procedure, the present research aimed to (a) determine whether presenting human faces (static or animated) in banner advertisements is an adequate tool for capturing consumers' attention and thus overcoming the frequently observed phenomenon of banner blindness, (b) to examine whether the gaze of a featured face possesses the ability to direct consumers' attention toward specific elements (i.e., the product) in an advertisement, and (c) to establish whether the gaze direction of an advertised face influences consumers' subsequent evaluation of the advertised product. We recorded participants' eye gaze while they viewed a fictional online shopping page displaying banner advertisements that featured either no human face or a human face that was either static or animated and involved different gaze directions (toward or away from the advertised product). Moreover, we asked participants to subsequently evaluate a set of products, one of which was the product previously featured in the banner advertisement. Results showed that, when advertisements included a human face, participants' attention was more attracted by and they looked longer at animated compared with static banner advertisements. Moreover, when a face gazed toward the product region, participants' likelihood of looking at the advertised product increased regardless of whether the face was animated or not. Most important, gaze direction influenced subsequent product evaluations; that is, consumers indicated a higher intention to buy a product when it was previously presented in a banner advertisement that featured a face that gazed toward the product.
The results suggest that while animation in banner advertising constitutes a salient feature that captures consumers' visual attention, gaze

  20. Judgments at Gaze Value: Gaze Cuing in Banner Advertisements, Its Effect on Attention Allocation and Product Judgments

    Directory of Open Access Journals (Sweden)

    Johanna Palcu

    2017-06-01

    Full Text Available Banner advertising is a popular means of promoting products and brands online. Although banner advertisements are often designed to be particularly attention grabbing, they frequently go unnoticed. Applying an eye-tracking procedure, the present research aimed to (a) determine whether presenting human faces (static or animated) in banner advertisements is an adequate tool for capturing consumers’ attention and thus overcoming the frequently observed phenomenon of banner blindness, (b) to examine whether the gaze of a featured face possesses the ability to direct consumers’ attention toward specific elements (i.e., the product) in an advertisement, and (c) to establish whether the gaze direction of an advertised face influences consumers’ subsequent evaluation of the advertised product. We recorded participants’ eye gaze while they viewed a fictional online shopping page displaying banner advertisements that featured either no human face or a human face that was either static or animated and involved different gaze directions (toward or away from the advertised product). Moreover, we asked participants to subsequently evaluate a set of products, one of which was the product previously featured in the banner advertisement. Results showed that, when advertisements included a human face, participants’ attention was more attracted by and they looked longer at animated compared with static banner advertisements. Moreover, when a face gazed toward the product region, participants’ likelihood of looking at the advertised product increased regardless of whether the face was animated or not. Most important, gaze direction influenced subsequent product evaluations; that is, consumers indicated a higher intention to buy a product when it was previously presented in a banner advertisement that featured a face that gazed toward the product. The results suggest that while animation in banner advertising constitutes a salient feature that captures consumers

  2. Towards gaze-controlled platform games

    DEFF Research Database (Denmark)

    Muñoz, Jorge; Yannakakis, Georgios N.; Mulvey, Fiona

    2011-01-01

    This paper introduces the concept of using gaze as a sole modality for fully controlling player characters of fast-paced action computer games. A user experiment is devised to collect gaze and gameplay data from subjects playing a version of the popular Super Mario Bros platform game. The initial...... analysis shows that there is a rather limited grid around Mario where the efficient player focuses her attention the most while playing the game. The useful grid as we name it, projects the amount of meaningful visual information a designer should use towards creating successful player character...... controllers with the use of artificial intelligence for a platform game like Super Mario. Information about the eyes' position on the screen and the state of the game are utilized as inputs of an artificial neural network, which is trained to approximate which keyboard action is to be performed at each game...
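
    The "useful grid" described above can be sketched as a simple computation: given fixation positions relative to the player character, find the smallest square grid of cells that captures most fixations. The fixation data, the 90% coverage threshold, and the cell units below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical fixation offsets (in grid cells) relative to the player's
# position; in the study these would come from recorded gaze and game-state logs.
rng = np.random.default_rng(0)
fixations = rng.normal(loc=0.0, scale=2.0, size=(1000, 2))

def useful_grid(fixations, coverage=0.90):
    """Side length (in cells) of the smallest square grid centred on the
    player that contains at least `coverage` of all fixations."""
    # Chebyshev distance: a fixation is inside an r-cell radius if
    # both |dx| and |dy| are <= r.
    d = np.abs(fixations).max(axis=1)
    r = int(np.ceil(np.quantile(d, coverage)))
    return 2 * r + 1

side = useful_grid(fixations)
print(f"{side}x{side} grid covers 90% of fixations")
```

    A grid computed this way bounds the screen region whose contents are worth feeding to a gaze-driven controller, which is how the abstract motivates it.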

  3. Ritual relieved axial dystonia triggered by gaze-evoked amaurosis.

    Science.gov (United States)

    Jacome, D E

    1997-11-01

    A woman with chronic posttraumatic axial lateropulsion cervical dystonia ("belly dancer's head") found relief of her spontaneous dystonic spasms by the sequential performance of an elaborate motor ritual. During an episode of left optic papillitis caused by central retinal vein occlusion, gaze-evoked amaurosis of the left eye developed, preceded by achromatopsia, during left lateral gaze. Gaze-evoked amaurosis triggered axial dystonia, which was followed by her unique, stereotyped, dystonia-relieving ritual that simulated a slow dance. Visual symptoms improved progressively in 1 year. Eventually, she was unable to trigger her dystonia by eye movements. Spontaneous dystonia remained otherwise unchanged from before the episode of papillitis and was still relieved by her unique ritual.

  4. A Regression-based User Calibration Framework for Real-time Gaze Estimation

    OpenAIRE

    Arar, Nuri Murat; Gao, Hua; Thiran, Jean-Philippe

    2016-01-01

    Eye movements play a very significant role in human computer interaction (HCI) as they are natural and fast, and contain important cues for human cognitive state and visual attention. Over the last two decades, many techniques have been proposed to accurately estimate the gaze. Among these, video-based remote eye trackers have attracted much interest since they enable non-intrusive gaze estimation. To achieve high estimation accuracies for remote systems, user calibration is inevitable in ord...
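
    A minimal sketch of the kind of regression-based user calibration the abstract refers to: a second-order polynomial fit, by least squares, from eye-feature coordinates (e.g. pupil-glint vectors) to screen coordinates over a nine-point calibration grid. The feature set, polynomial order, and simulated data are illustrative assumptions, not the authors' method.

```python
import numpy as np

def design_matrix(eye_xy):
    """Second-order polynomial terms of the eye features: [1, x, y, xy, x^2, y^2]."""
    x, y = eye_xy[:, 0], eye_xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_calibration(eye_xy, screen_xy):
    """Least-squares fit of the polynomial mapping, one column per screen axis."""
    coeffs, *_ = np.linalg.lstsq(design_matrix(eye_xy), screen_xy, rcond=None)
    return coeffs  # shape (6, 2)

def apply_calibration(coeffs, eye_xy):
    return design_matrix(eye_xy) @ coeffs

# Nine-point calibration grid; eye features simulated with a known mapping
eye = np.array([[i, j] for i in (-1, 0, 1) for j in (-1, 0, 1)], dtype=float)
screen = 960 + 400 * eye + 30 * eye**2   # hypothetical ground-truth mapping
c = fit_calibration(eye, screen)
pred = apply_calibration(c, eye)
print(np.abs(pred - screen).max())       # near-zero residual on the grid
```

    Real systems differ in feature choice and polynomial order, but the calibration step is structurally this: solve a small least-squares problem per user, then apply the fitted mapping at runtime.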

  5. Effect of interface reflection in pseudophakic eyes with an additional refractive intraocular lens.

    Science.gov (United States)

    Schrecker, Jens; Zoric, Katja; Meßner, Arthur; Eppig, Timo

    2012-09-01

    To compare the surface reflections in a pseudophakic model eye with and without a monofocal additional refractive intraocular lens (add-on IOL). Department of Ophthalmology, Rudolf-Virchow-Klinikum Glauchau, Glauchau, and Experimental Ophthalmology, Saarland University, Homburg, Germany. Experimental study. The Liou and Brennan model eye was used to determine the retinal surface reflections in a pseudophakic model eye with and without an add-on IOL. The crystalline lens of the model eye was replaced by (1) a standard posterior chamber IOL (PC IOL) with a refractive power of 22.0 diopters (D) and (2) a PC IOL and an add-on IOL with refractive powers of 19.0 D and 2.5 D, respectively. To theoretically estimate the impact of the reflected images on visual impression, the signal-to-noise ratio (SNR) was calculated under 2 conditions: without and with straylight and double reflection effects. Compared with the pseudophakic model eye without an add-on IOL, the pseudophakic model eye with an add-on IOL showed no relevant differences in the SNR under both conditions. Findings indicate that implantation of monofocal add-on IOLs will not induce relevant additional disturbing glare compared with conventional pseudophakia. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  6. Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting.

    Science.gov (United States)

    Atabaki, Artin; Marciniak, Karolina; Dicke, Peter W; Thier, Peter

    2015-07-01

    Despite the ecological importance of gaze following, little is known about the underlying neuronal processes which allow us to extract gaze direction from the geometric features of the eye and head of a conspecific. In order to understand the neuronal mechanisms underlying this ability, a careful description of the capacity and the limitations of gaze following at the behavioral level is needed. Previous studies of gaze following, which relied on naturalistic settings, have the disadvantage of allowing only very limited control of potentially relevant visual features guiding gaze following, such as the contrast of iris and sclera or the shape of the eyelids; photographs, in addition, lack depth. Hence, in order to get full control of potentially relevant features, we decided to study gaze following of human observers guided by the gaze of a human avatar seen stereoscopically. To this end we established a stereoscopic 3D virtual reality setup in which we tested human subjects' ability to detect which target a human avatar was looking at. Following the gaze of the avatar showed all the features of the gaze following of a natural person, namely a substantial degree of precision associated with a consistent pattern of systematic deviations from the target. Poor stereo vision affected performance surprisingly little (only in certain experimental conditions). Only gaze following guided by targets at larger downward eccentricities exhibited a differential effect of the presence or absence of accompanying movements of the avatar's eyelids and eyebrows.

  7. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    Science.gov (United States)

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-08-31

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Based on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it publicly available. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest.
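
    The link between head-movement range and required camera optics can be illustrated with a back-of-envelope calculation: the camera's horizontal viewing angle must cover the lateral extent of head movement at the working distance. The numbers below are hypothetical, not the paper's measurements.

```python
import math

def required_viewing_angle(head_range_mm, working_distance_mm, margin_mm=0.0):
    """Horizontal viewing angle (degrees) a gaze-tracking camera needs so the
    eye stays in frame across the measured range of lateral head movement."""
    half_extent = head_range_mm / 2 + margin_mm
    return 2 * math.degrees(math.atan2(half_extent, working_distance_mm))

# e.g. +/-100 mm of lateral head movement at a 600 mm working distance,
# with a 20 mm safety margin on each side
angle = required_viewing_angle(head_range_mm=200, working_distance_mm=600, margin_mm=20)
print(f"required horizontal viewing angle: {angle:.1f} deg")
```

    The same geometry, applied to the range of head movement along the camera axis, bounds the depth-of-field the lens must provide.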

  8. Atypical face gaze in autism.

    Science.gov (United States)

    Trepagnier, Cheryl; Sebrechts, Marc M; Peterson, Rebecca

    2002-06-01

    An eye-tracking study of face and object recognition was conducted to clarify the character of face gaze in autistic spectrum disorders. Experimental participants were a group of individuals diagnosed with Asperger's disorder or high-functioning autistic disorder according to their medical records and confirmed by the Autism Diagnostic Interview-Revised (ADI-R). Controls were selected on the basis of age, gender, and educational level to be comparable to the experimental group. In order to maintain attentional focus, stereoscopic images were presented in a virtual reality (VR) headset in which the eye-tracking system was installed. Preliminary analyses show impairment in face recognition, in contrast with equivalent and even superior performance in object recognition among participants with autism-related diagnoses, relative to controls. Experimental participants displayed less fixation on the central face than did control-group participants. The findings, within the limitations of the small number of subjects and technical difficulties encountered in utilizing the helmet-mounted display, suggest an impairment in face processing on the part of the individuals in the experimental group. This is consistent with the hypothesis of disruption in the first months of life, a period that may be critical to typical social and cognitive development, and has important implications for selection of appropriate targets of intervention.

  9. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation.

    Science.gov (United States)

    Choe, Kyoung Whan; Blake, Randolph; Lee, Sang-Hun

    2016-01-01

    Video-based eye tracking relies on locating the pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze position shifts of up to several degrees in visual angle, because pupil centration can change without eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil size artifact is prevalent and thus weakens the quality of video-based eye tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small and large time scales, and that these pupil contractions were accompanied by systematic errors in gaze position estimation, in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze position measurements were substantially improved, enabling differentiation of a 0.1° difference in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and their tight relationship with gaze position estimates when observers were engaged in a demanding visual discrimination task.
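
    The correction the abstract describes, regressing pupil size out of the gaze trace, can be sketched per axis as an ordinary least-squares fit whose fitted component is subtracted from the measured gaze. The simulated fixation data below are illustrative; real pipelines fit per subject and per eye.

```python
import numpy as np

def remove_pupil_artifact(gaze, pupil):
    """Regress pupil size out of a 1-D gaze-position trace and return the
    residual gaze signal with its mean preserved (OLS slope, centered pupil)."""
    pupil_c = pupil - pupil.mean()
    slope = (pupil_c @ (gaze - gaze.mean())) / (pupil_c @ pupil_c)
    return gaze - slope * pupil_c

rng = np.random.default_rng(1)
pupil = 3.0 + 0.5 * np.sin(np.linspace(0, 4, 500))        # slow constriction/dilation
true_gaze = np.zeros(500)                                  # steady fixation at 0 deg
measured = true_gaze + 0.8 * (pupil - 3.0) + rng.normal(0, 0.02, 500)
corrected = remove_pupil_artifact(measured, pupil)
print(measured.std(), corrected.std())  # spread shrinks once the artifact is removed
```

    The 0.8 deg-per-mm coupling is an invented artifact strength; the point is that a linear regressor absorbs whatever component of the gaze trace covaries with pupil size.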

  10. What Captures Gaze in Visual Design - Insights from Cognitive Psychology

    DEFF Research Database (Denmark)

    Andersen, Emil; Maier, Anja

    2016-01-01

    Visual information is vital for user behaviour and thus of utmost importance to design. Consequently, tracking and interpreting gaze data has been the target of increasing amounts of research in design science. This research is in part facilitated by new methods, such as eye-tracking, becoming more...

  11. Differences in gaze anticipation for locomotion with and without vision

    Directory of Open Access Journals (Sweden)

    Colas Nils Authié

    2015-06-01

    Full Text Available Previous experimental studies have shown a spontaneous anticipation of locomotor trajectory by the head and gaze direction during human locomotion. This anticipatory behavior could serve several functions: an optimal selection of visual information, for instance through landmarks and optic flow, as well as trajectory planning and motor control. This would imply that anticipation remains in darkness but with different characteristics. We asked ten participants to walk along two predefined complex trajectories (limaçon and figure eight) without any cue on the trajectory to follow. Two visual conditions were used: (i) in light and (ii) in complete darkness with eyes open. The whole body kinematics were recorded by motion capture, along with the participant's right eye movements. We showed that in darkness and in light, horizontal gaze anticipates the orientation of the head, which itself anticipates the trajectory direction. However, the horizontal angular anticipation decreases by half in darkness for both gaze and head. In both visual conditions we observed an eye nystagmus with similar properties (frequency and amplitude). The main difference comes from the fact that in light, there is a shift of the orientations of the eye nystagmus and the head in the direction of the trajectory. These results suggest that a fundamental function of gaze is to represent self motion, stabilize the perception of space during locomotion, and to simulate the future trajectory, regardless of the vision condition.

  12. Effects of Gaze Direction Perception on Gaze Following Behavior

    Institute of Scientific and Technical Information of China (English)

    张智君; 赵亚军; 占琪涛

    2011-01-01

    Another person's gaze cue can induce an observer to shift attention automatically in the direction the cue indicates (gaze following), but it remains unclear what role the perception of gaze direction plays in gaze following. Combining a gaze adaptation procedure with a gaze-cueing paradigm, this study found that the larger the perceived angle of a gaze cue, the stronger its cueing effect, and that after perceptual adaptation participants' accuracy in judging gaze direction declined and the attention shift elicited by gaze cues decreased significantly. Thus, the perception of gaze direction directly affects gaze-following behavior, while the extraction of gaze direction is modulated by stimulus salience (gaze angle) and perceptual adaptation. This suggests that, under conscious conditions, gaze perception and gaze following are directly linked, i.e., there may be a cortical processing pathway from the gaze perception system to the attention-shift system, and that gaze following is not purely reflexive processing but is modulated top-down by perceptual experience. Observing another person's averted eye gaze leads to an automatic shift of attention to the same object and facilitates subsequent early visual processing, a phenomenon termed 'joint attention'. Joint attention processing proceeds through two main stages: gaze perception and gaze following. Gaze perception refers to analysis of the perceptual features of a gaze cue. By contrast, gaze following refers to the tendency of observers to shift attention to locations looked at by others, indicated by the gaze cueing effect (GCE). Most researchers have maintained that the latter process relies on the former (serial model), holding that mechanisms involved in gaze perception precede those involved in attentional cueing from gaze. However, other results suggest that gaze perception and gaze following might involve dissociable mechanisms (parallel model). As Doherty et al. (2009) reported in young children, it is possible for gaze following to occur in the absence of precise gaze perception. Thus, it remains unclear what role gaze perception plays in gaze following behavior. In the current study, a gaze adaptation technique and a gaze cueing paradigm

  13. Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements.

    Science.gov (United States)

    Kreysa, Helene; Kessler, Luise; Schweinberger, Stefan R

    2016-01-01

    A speaker's gaze behaviour can provide perceivers with a multitude of cues which are relevant for communication, thus constituting an important non-verbal interaction channel. The present study investigated whether direct eye gaze of a speaker affects the likelihood of listeners believing truth-ambiguous statements. Participants were presented with videos in which a speaker produced such statements with either direct or averted gaze. The statements were selected through a rating study to ensure that participants were unlikely to know a-priori whether they were true or not (e.g., "sniffer dogs cannot smell the difference between identical twins"). Participants indicated in a forced-choice task whether or not they believed each statement. We found that participants were more likely to believe statements by a speaker looking at them directly, compared to a speaker with averted gaze. Moreover, when participants disagreed with a statement, they were slower to do so when the statement was uttered with direct (compared to averted) gaze, suggesting that the process of rejecting a statement as untrue may be inhibited when that statement is accompanied by direct gaze.

  15. Gaze as a Supplementary Modality for Interacting with Ambient Intelligence Environments

    CERN Document Server

    Gepner, Daniel; Carbonell, Noëlle

    2007-01-01

    We present our current research on the implementation of gaze as an efficient and usable pointing modality supplementary to speech, for interacting with augmented objects in our daily environment or large displays, especially immersive virtual reality environments, such as reality centres and caves. We are also addressing issues relating to the use of gaze as the main interaction input modality. We have designed and developed two operational user interfaces: one for providing motor-disabled users with easy gaze-based access to map applications and graphical software; the other for iteratively testing and improving the usability of gaze-contingent displays.

  16. Gaze, goals and growing up: Effects on imitative grasping.

    Science.gov (United States)

    Brubacher, Sonja P; Roberts, Kim P; Obhi, Sukhvinder S

    2013-09-01

    Developmental differences in the use of social-attention cues to imitation were examined among children aged 3 and 6 years old (n = 58) and adults (n = 29). In each of 20 trials, participants watched a model grasp two objects simultaneously and move them together. On every trial, the model directed her gaze towards only one of the objects. Some object pairs were related and had a clear functional relationship (e.g., flower, vase), while others were functionally unrelated (e.g., cardboard square, ladybug). Owing to attentional effects of eye gaze, it was expected that all participants would more faithfully imitate the grasp on the gazed-at object than the object not gazed-at. Children were expected to imitate less faithfully on trials with functionally related objects than those without, due to goal-hierarchy effects. Results support effects of eye gaze on imitation of grasping. Children's grasping accuracy on functionally related and functionally unrelated trials was similar, but they were more likely to only use one hand on trials where the object pairs were functionally related than unrelated. Implications for theories of imitation are discussed. © 2013 The British Psychological Society.

  17. Investigating gaze of children with ASD in naturalistic settings.

    Directory of Open Access Journals (Sweden)

    Basilio Noris

    Full Text Available BACKGROUND: Visual behavior is known to be atypical in Autism Spectrum Disorders (ASD). Monitor-based eye-tracking studies have measured several of these atypicalities in individuals with Autism. While atypical behaviors are known to be accentuated during natural interactions, few studies have examined gaze behavior in natural interactions. In this study we focused on (i) whether the findings obtained in laboratory settings are also visible in a naturalistic interaction; (ii) whether new atypical elements appear when studying visual behavior across the whole field of view. METHODOLOGY/PRINCIPAL FINDINGS: Ten children with ASD and ten typically developing children participated in a dyadic interaction with an experimenter administering items from the Early Social Communication Scale (ESCS). The children wore a novel head-mounted eye-tracker, measuring gaze direction and the presence of faces across the child's field of view. The analysis of gaze episodes to faces revealed that children with ASD looked at the experimenter significantly less often and for shorter lapses of time. The analysis of gaze patterns across the child's field of view revealed that children with ASD looked downwards and made more extensive use of their lateral field of view when exploring the environment. CONCLUSIONS/SIGNIFICANCE: The data gathered in naturalistic settings confirm findings previously obtained only in monitor-based studies. Moreover, the study allowed us to observe a generalized strategy of lateral gaze in children with ASD when they were looking at the objects in their environment.

  18. Using Gaze Patterns to Predict Task Intent in Collaboration

    Directory of Open Access Journals (Sweden)

    Chien-Ming eHuang

    2015-07-01

    Full Text Available In everyday interactions, humans naturally exhibit behavioral cues, such as gaze and head movements, that signal their intentions while interpreting the behavioral cues of others to predict their intentions. Such intention prediction enables each partner to adapt their behaviors to the intent of others, serving a critical role in joint action where parties work together to achieve a common goal. Among behavioral cues, eye gaze is particularly important in understanding a person's attention and intention. In this work, we seek to quantify how gaze patterns may indicate a person's intention. Our investigation was contextualized in a dyadic sandwich-making scenario in which a "worker" prepared a sandwich by adding ingredients requested by a "customer." In this context, we investigated the extent to which the customers' gaze cues serve as predictors of which ingredients they intend to request. Predictive features were derived to represent characteristics of the customers' gaze patterns. We developed a support vector machine-based (SVM-based) model that achieved 76% accuracy in predicting the customers' intended requests based solely on gaze features. Moreover, the predictor made correct predictions approximately 1.8 seconds before the spoken request from the customer. We further analyzed several episodes of interactions from our data to develop a deeper understanding of the scenarios where our predictor succeeded and failed in making correct predictions. These analyses revealed additional gaze patterns that may be leveraged to improve intention prediction. This work highlights gaze cues as a significant resource for understanding human intentions and informs the design of real-time recognizers of user intention for intelligent systems, such as assistive robots and ubiquitous devices, that may enable more complex capabilities and improved user experience.
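
    A hedged sketch of the pipeline described above: derive features from a gaze trace (here, dwell proportions over candidate ingredient regions) and classify the intended request. A nearest-centroid rule stands in for the paper's SVM to keep the example short; the traces, regions, and labels are invented toy data, not the study's features.

```python
import numpy as np

def dwell_features(trace, n_regions=3):
    """Fraction of fixation samples spent on each candidate region."""
    counts = np.bincount(trace, minlength=n_regions)
    return counts / counts.sum()

def fit_centroids(traces, labels, n_regions=3):
    """One feature centroid per intended-request class."""
    X = np.array([dwell_features(t, n_regions) for t in traces])
    y = np.array(labels)
    return np.array([X[y == k].mean(axis=0) for k in range(n_regions)])

def predict(centroids, trace):
    f = dwell_features(trace, len(centroids))
    return int(np.argmin(((centroids - f) ** 2).sum(axis=1)))

# Customers look mostly (not exclusively) at the ingredient they intend to request
train = [np.array([0, 0, 1, 0, 0, 2, 0]),
         np.array([1, 1, 0, 1, 2, 1, 1]),
         np.array([2, 0, 2, 2, 1, 2, 2])]
labels = [0, 1, 2]
cent = fit_centroids(train, labels)
print(predict(cent, np.array([0, 2, 0, 0, 1, 0])))  # mostly region 0 -> predicts 0
```

    Swapping the nearest-centroid rule for a trained SVM (as in the paper) changes only the classifier; the feature-extraction step is the part that encodes the gaze-pattern hypothesis.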

  19. Can we resist another person’s gaze?

    Science.gov (United States)

    Marino, Barbara F. M.; Mirabella, Giovanni; Actis-Grosso, Rossana; Bricolo, Emanuela; Ricciardelli, Paola

    2015-01-01

    Adaptive adjustments of strategies are needed to optimize behavior in a dynamic and uncertain world. A key function in implementing flexible behavior and exerting self-control is represented by the ability to stop the execution of an action when it is no longer appropriate for the environmental requests. Importantly, stimuli in our environment are not equally relevant and some are more valuable than others. One example is the gaze of other people, which is known to convey important social information about their direction of attention and their emotional and mental states. Indeed, gaze direction has a significant impact on the execution of voluntary saccades of an observer, since it is capable of inducing in the observer an automatic gaze-following behavior: a phenomenon named social or joint attention. Nevertheless, people can exert volitional inhibitory control on saccadic eye movements during their planning. Little is known about the interaction between gaze direction signals and volitional inhibition of saccades. To fill this gap, we administered a countermanding task to 15 healthy participants in which they were asked to observe the eye region of a face with the eyes shut appearing at central fixation. In one condition, participants were required to suppress a saccade that had previously been instructed by a gaze shift toward one of two peripheral targets, when the eyes were suddenly shut down (social condition, SC). In a second condition, participants were asked to inhibit a saccade that had previously been instructed by a change in color of one of the two same targets, when a change of color of a central picture occurred (non-social condition, N-SC). We found that inhibitory control was more impaired in the SC, suggesting that actions initiated and stopped by social cues conveyed by the eyes are more difficult to withhold. This is probably due to the social value intrinsically linked to these cues and the many uses we make of them. PMID:26550008

  20. [Unilateral Solar Maculopathy after Gazing at Solar Eclipse].

    Science.gov (United States)

    Mehlan, J; Linke, S J; Wagenfeld, L; Steinberg, J

    2016-06-01

    A 43-year-old male patient with unilateral metamorphopsia presented after gazing at an eclipse with only one eye. Damage to the macula was demonstrated funduscopically, with OCT and angiography. Six weeks after initial presentation and oral methylprednisolone therapy (40 mg/d for 10 days), the symptoms and the morphological changes had decreased. Solar retinopathy is a photochemical alteration of the retina, usually seen after sun gazing. In younger patients, it mostly presents as bilateral solar maculopathy. Some patients exhibit partial or total recovery.

  2. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays

    Science.gov (United States)

    Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A.; Wetzstein, Gordon

    2017-02-01

    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
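
As a rough illustration of the adaptive-focus idea above, the optical power a focus-tunable lens must supply can be approximated as the accommodation demand of the virtual object (in diopters, the reciprocal of its distance in meters) plus the user's spectacle correction. The function below is a hypothetical sketch of that arithmetic, not the authors' actual control law:

```python
def required_lens_power(scene_depth_m, refractive_error_D=0.0):
    """Diopters a focus-tunable lens would add so a virtual object at
    scene_depth_m appears sharp. Illustrative only: real systems must also
    account for the display optics and the eye's accommodation state."""
    demand_D = 1.0 / scene_depth_m          # accommodation demand (D = 1/m)
    return demand_D + refractive_error_D    # fold in spectacle correction (myopia < 0)
```

For example, a virtual object at 0.5 m demands 2 D of focus, and a user with 1 D of myopia tracking an object at 2 m would need the lens set to about -0.5 D.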

  3. Text Entry by Gazing and Smiling

    Directory of Open Access Journals (Sweden)

    Outi Tuisku

    2013-01-01

    Full Text Available Face Interface is a wearable prototype that combines the use of voluntary gaze direction and facial activations, for pointing and selecting objects on a computer screen, respectively. The aim was to investigate the functionality of the prototype for entering text. First, three on-screen keyboard layout designs were developed and tested (n=10) to find a layout that would be more suitable for text entry with the prototype than the traditional QWERTY layout. The task was to enter one word ten times with each of the layouts by pointing at letters with gaze and selecting them by smiling. Subjective ratings showed that a layout with large keys on the edge and small keys near the center of the keyboard was rated as the most enjoyable, clearest, and most functional. Second, using this layout, the aim of the second experiment (n=12) was to compare entering text with Face Interface to entering text with a mouse. The results showed that the text entry rate for Face Interface was 20 characters per minute (cpm) and 27 cpm for the mouse. For Face Interface, the keystrokes per character (KSPC) value was 1.1 and the minimum string distance (MSD) error rate was 0.12. These values compare especially well with other similar techniques.
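
The text-entry metrics quoted above are standard and easy to compute: KSPC is the total keystrokes divided by the characters produced, and the MSD error rate is the Levenshtein edit distance between the presented and transcribed strings, normalized by the longer length. A minimal sketch:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance (insert/delete/substitute).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def msd_error_rate(presented, transcribed):
    # Minimum string distance error rate, normalized by the longer string.
    return levenshtein(presented, transcribed) / max(len(presented), len(transcribed))

def kspc(total_keystrokes, transcribed):
    # Keystrokes per character: input effort per character of produced text.
    return total_keystrokes / len(transcribed)
```

With these definitions, a perfect transcription gives an MSD error rate of 0, and a KSPC of 1.1 means roughly one extra keystroke for every ten characters.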

  4. Design of a computer game using an eye-tracking device for eye's activity rehabilitation

    Science.gov (United States)

    Lin, Chern-Sheng; Huan, Chia-Chin; Chan, Chao-Ning; Yeh, Mau-Shiun; Chiu, Chuang-Chien

    2004-07-01

    An eye mouse interface that can be used to operate a computer using the movement of the eyes is described. We developed this eye-tracking system for eye motion disability rehabilitation. When the user watches the screen of a computer, a charge-coupled device catches images of the user's eye and transmits them to the computer. A program, based on a new cross-line tracking and stabilizing algorithm, locates the center point of the pupil in the images. Calibration factors and energy factors are designed for coordinate mapping and blink functions. After the system transfers the coordinates of the pupil center in the images to display coordinates, it determines the point at which the user gazed on the display, then passes that location to the game subroutine. We used this eye-tracking system as a joystick to play a game with an application program in a multimedia environment. The experimental results verify the feasibility and validity of this eye-game system and its rehabilitation effects on the user's visual movement.
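
The coordinate-mapping step described above, turning a pupil center in image coordinates into a display coordinate via calibration factors, is often approximated per axis by a least-squares linear fit over calibration fixations. A minimal sketch under that assumption (the paper's exact mapping may differ):

```python
def fit_axis(pupil, screen):
    # Least-squares fit of screen = a * pupil + b for one axis, from
    # calibration fixations; (a, b) play the role of "calibration factors".
    n = len(pupil)
    mp, ms = sum(pupil) / n, sum(screen) / n
    a = (sum((p - mp) * (s - ms) for p, s in zip(pupil, screen))
         / sum((p - mp) ** 2 for p in pupil))
    return a, ms - a * mp

def to_display(pupil_xy, cal_x, cal_y):
    # Map a pupil-center image coordinate to a display coordinate.
    (ax, bx), (ay, by) = cal_x, cal_y
    return (ax * pupil_xy[0] + bx, ay * pupil_xy[1] + by)
```

In practice the user fixates known screen targets during calibration, and the fitted factors are then applied to every subsequent pupil-center estimate.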

  5. A kinematic model for 3-D head-free gaze-shifts.

    Science.gov (United States)

    Daemi, Mehdi; Crawford, J Douglas

    2015-01-01

    Rotations of the line of sight are mainly implemented by coordinated motion of the eyes and head. Here, we propose a model for the kinematics of three-dimensional (3-D) head-unrestrained gaze-shifts. The model was designed to account for major principles in the known behavior, such as gaze accuracy, spatiotemporal coordination of saccades with vestibulo-ocular reflex (VOR), relative eye and head contributions, the non-commutativity of rotations, and Listing's and Fick constraints for the eyes and head, respectively. The internal design of the model was inspired by known and hypothesized elements of gaze control physiology. Inputs included retinocentric location of the visual target and internal representations of initial 3-D eye and head orientation, whereas outputs were 3-D displacements of eye relative to the head and head relative to shoulder. Internal transformations decomposed the 2-D gaze command into 3-D eye and head commands with the use of three coordinated circuits: (1) a saccade generator, (2) a head rotation generator, (3) a VOR predictor. Simulations illustrate that the model can implement: (1) the correct 3-D reference frame transformations to generate accurate gaze shifts (despite variability in other parameters), (2) the experimentally verified constraints on static eye and head orientations during fixation, and (3) the experimentally observed 3-D trajectories of eye and head motion during gaze-shifts. We then use this model to simulate how 2-D eye-head coordination strategies interact with 3-D constraints to influence 3-D orientations of the eye-in-space, and the implications of this for spatial vision.

  6. A Kinematic Model for 3-D Head-Free Gaze-Shifts

    Directory of Open Access Journals (Sweden)

    Mehdi eDaemi

    2015-06-01

    Full Text Available Rotations of the line of sight are mainly implemented by coordinated motion of the eyes and head. Here, we propose a model for the kinematics of three-dimensional (3-D) head-unrestrained gaze-shifts. The model was designed to account for major principles in the known behavior, such as gaze accuracy, spatiotemporal coordination of saccades with vestibulo-ocular reflex (VOR), relative eye and head contributions, the non-commutativity of rotations, and Listing’s and Fick constraints for the eyes and head, respectively. The internal design of the model was inspired by known and hypothesized elements of gaze control physiology. Inputs included retinocentric location of the visual target and internal representations of initial 3-D eye and head orientation, whereas outputs were 3-D displacements of eye relative to the head and head relative to shoulder. Internal transformations decomposed the 2-D gaze command into 3-D eye and head commands with the use of three coordinated circuits: (1) a saccade generator, (2) a head rotation generator, (3) a VOR predictor. Simulations illustrate that the model can implement: (1) the correct 3-D reference frame transformations to generate accurate gaze shifts (despite variability in other parameters), (2) the experimentally verified constraints on static eye and head orientations during fixation, and (3) the experimentally observed 3-D trajectories of eye and head motion during gaze-shifts. We then use this model to simulate how 2-D eye-head coordination strategies interact with 3-D constraints to influence 3-D orientations of the eye-in-space, and the implications of this for spatial vision.
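
The non-commutativity of 3-D rotations that this model must respect is easy to demonstrate with unit quaternions: a 90° rotation about the x-axis followed by a 90° rotation about the y-axis yields a different orientation than the reverse order. A self-contained sketch:

```python
import math

def quat_from_axis_angle(axis, angle_rad):
    # Unit quaternion (w, x, y, z) for a rotation about a unit axis.
    s = math.sin(angle_rad / 2)
    return (math.cos(angle_rad / 2), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(q, r):
    # Hamilton product: the quaternion for applying rotation r, then q.
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

qx = quat_from_axis_angle((1.0, 0.0, 0.0), math.pi / 2)
qy = quat_from_axis_angle((0.0, 1.0, 0.0), math.pi / 2)
# quat_mul(qy, qx) and quat_mul(qx, qy) encode different final orientations.
```

This is why a gaze controller cannot simply add 3-D eye and head rotation vectors; the order of composition matters.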

  7. Comprehension of the Communicative Intent behind Pointing and Gazing Gestures by Young Children with Williams Syndrome or Down Syndrome

    Science.gov (United States)

    John, Angela E.; Mervis, Carolyn B.

    2010-01-01

    Purpose: In this study, the authors examined the ability of preschoolers with Williams syndrome (WS) or Down syndrome (DS) to infer communicative intent as expressed through gestures (pointing and eye-gaze shift). Method: Participants were given a communicative or noncommunicative cue involving pointing or gaze shifting in the context of a hiding…

  8. Eye Contact Facilitates Awareness of Faces during Interocular Suppression

    Science.gov (United States)

    Stein, Timo; Senju, Atsushi; Peelen, Marius V.; Sterzer, Philipp

    2011-01-01

    Eye contact captures attention and receives prioritized visual processing. Here we asked whether eye contact might be processed outside conscious awareness. Faces with direct and averted gaze were rendered invisible using interocular suppression. In two experiments we found that faces with direct gaze overcame such suppression more rapidly than…

  9. Models for Gaze Tracking Systems

    Directory of Open Access Journals (Sweden)

    Villanueva Arantxa

    2007-01-01

    Full Text Available One of the most confusing aspects that one meets when introducing oneself into gaze tracking technology is the wide variety, in terms of hardware equipment, of available systems that provide solutions to the same matter, that is, determining the point the subject is looking at. The calibration process permits generally adjusting nonintrusive trackers based on quite different hardware and image features to the subject. The negative aspect of this simple procedure is that it permits the system to work properly but at the expense of a lack of control over the intrinsic behavior of the tracker. The objective of the presented article is to overcome this obstacle to explore more deeply the elements of a video-oculographic system, that is, eye, camera, lighting, and so forth, from a purely mathematical and geometrical point of view. The main contribution is to find out the minimum number of hardware elements and image features that are needed to determine the point the subject is looking at. A model has been constructed based on pupil contour and multiple lighting, and successfully tested with real subjects. On the other hand, theoretical aspects of video-oculographic systems have been thoroughly reviewed in order to build a theoretical basis for further studies.

  10. Models for Gaze Tracking Systems

    Directory of Open Access Journals (Sweden)

    Arantxa Villanueva

    2007-10-01

    Full Text Available One of the most confusing aspects that one meets when introducing oneself into gaze tracking technology is the wide variety, in terms of hardware equipment, of available systems that provide solutions to the same matter, that is, determining the point the subject is looking at. The calibration process permits generally adjusting nonintrusive trackers based on quite different hardware and image features to the subject. The negative aspect of this simple procedure is that it permits the system to work properly but at the expense of a lack of control over the intrinsic behavior of the tracker. The objective of the presented article is to overcome this obstacle to explore more deeply the elements of a video-oculographic system, that is, eye, camera, lighting, and so forth, from a purely mathematical and geometrical point of view. The main contribution is to find out the minimum number of hardware elements and image features that are needed to determine the point the subject is looking at. A model has been constructed based on pupil contour and multiple lighting, and successfully tested with real subjects. On the other hand, theoretical aspects of video-oculographic systems have been thoroughly reviewed in order to build a theoretical basis for further studies.

  11. Eye Tracking the Use of a Collapsible Facets Panel in a Search Interface

    NARCIS (Netherlands)

    M.J. Kemman (Max); M. Kleppe (Martijn); J. Maarseveen (Jim)

    2013-01-01

    Abstract. Facets can provide an interesting functionality in digital libraries. However, while some research shows facets are important, other research found facets are only moderately used. Therefore, in this exploratory study we compare two search interfaces: one where the facets panel

  12. ScreenMasker: An Open-source Gaze-contingent Screen Masking Environment.

    Science.gov (United States)

    Orlov, Pavel A; Bednarik, Roman

    2016-09-01

    The moving-window paradigm, based on the gaze-contingent technique, is traditionally used in studies of the visual perceptual span. There is a strong demand for new environments that can be employed by non-technical researchers. We have developed an easy-to-use tool with a graphical user interface (GUI) allowing both execution and control of visual gaze-contingency studies. This work describes ScreenMasker, an environment for creating gaze-contingent textured displays used together with stimulus-presentation software. ScreenMasker has an architecture that meets the requirements of low-latency real-time eye-movement experiments. It also provides a variety of settings and functions. Effective rendering times and performance are ensured by means of GPU processing under CUDA technology. Performance tests show ScreenMasker's latency to be 67-74 ms on a typical office computer, and about 25-28 ms on a high-end 144-Hz screen. ScreenMasker is an open-source system distributed under the GNU Lesser General Public License and is available at https://github.com/PaulOrlov/ScreenMasker .
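
The moving-window idea that ScreenMasker implements can be sketched in a few lines: every pixel is masked (e.g., blurred or textured) unless it falls inside a window centered on the current gaze point. The function below is a hypothetical illustration, not ScreenMasker's API:

```python
def mask_alpha(px, py, gaze_x, gaze_y, radius):
    """Mask opacity for the pixel at (px, py) given the gaze point.
    0.0 = fully visible inside the gaze-centered window,
    1.0 = fully masked outside it (a hard-edged moving window)."""
    dx, dy = px - gaze_x, py - gaze_y
    return 0.0 if dx * dx + dy * dy <= radius * radius else 1.0
```

A real gaze-contingent system evaluates this (typically on the GPU, with a soft falloff rather than a hard edge) for every pixel on every frame, which is why the end-to-end latency figures above matter.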

  13. [Bionic model for coordinated head-eye motion control].

    Science.gov (United States)

    Mao, Xiaobo; Chen, Tiejun

    2011-10-01

    The relationships between eye movements and head movements of primates during gaze shifts are analyzed in detail in the present paper. Applying mechanisms from neurophysiology to the engineering domain, we have improved robot eye-head coordination. A bionic control strategy for coordinated head-eye motion is proposed. A gaze shift is composed of an initial fast phase followed by a slow phase. In the fast phase, saccadic eye movements and slow head movements are combined, cooperating to bring gaze from an initial resting position toward the new target rapidly, while in the slow phase gaze stability and target fixation are ensured by the action of the vestibulo-ocular reflex (VOR), in which the eyes and head rotate by equal amplitudes in opposite directions. A bionic gaze control model is given. The simulation results confirmed the effectiveness of the model by comparison with the results of neurophysiology experiments.
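
The slow-phase behavior described above reduces to simple arithmetic, because gaze velocity is the sum of eye-in-head and head-in-space velocity: an ideal VOR counter-rotates the eyes against the head so gaze stays on target. A minimal sketch (function names are illustrative):

```python
def vor_eye_velocity(head_velocity, gain=1.0):
    # An ideal VOR (gain = 1) rotates the eye equal-and-opposite to the head.
    return -gain * head_velocity

def gaze_velocity(eye_velocity, head_velocity):
    # Gaze = eye-in-head + head-in-space.
    return eye_velocity + head_velocity
```

With unity gain the gaze velocity is exactly zero while the head moves; a reduced gain (as happens when the VOR is suppressed during the fast phase) lets the head movement carry gaze toward the target.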

  14. Investigating social gaze as an action-perception online performance

    Directory of Open Access Journals (Sweden)

    Ouriel eGrynszpan

    2012-04-01

    Full Text Available In interpersonal interactions, linguistic information is complemented by non-linguistic information originating largely from facial expressions. The study of online face-to-face social interaction thus entails investigating the multimodal simultaneous processing of oral and visual percepts. Moreover, gaze in and of itself functions as a powerful communicative channel. In this respect, gaze should not be examined as a purely perceptive process but also as an active social performance. We designed a task involving multimodal deciphering of social information based on virtual characters, embedded in naturalistic backgrounds, who directly address the participant with non-literal speech and meaningful facial expressions. Eighteen adult participants were asked to interpret an equivocal sentence which could be disambiguated by examining the emotional expressions of the character speaking to them face-to-face. To examine self-control and self-awareness of gaze in this context, visual feedback was provided to the participant by a real-time gaze-contingent viewing window centered on the focal point, while the rest of the display was blurred. Eye-tracking data showed that the viewing window induced changes in gaze behaviour, notably longer visual fixations. Notwithstanding, only half the participants ascribed the window displacements to their eye movements. These results highlight the dissociation between non-volitional gaze adaptation and self-ascription of agency. Such dissociation provides support for a two-step account of the sense of agency composed of pre-noetic monitoring mechanisms and reflexive processes. We comment upon these results, which illustrate the relevance of our method for studying online social cognition, especially concerning Autism Spectrum Disorders (ASD), where poor pragmatic understanding of oral speech is considered linked to visual peculiarities that impede face exploration.

  15. Demo of Gaze Controlled Flying

    DEFF Research Database (Denmark)

    Alapetite, Alexandre; Hansen, John Paulin; Scott MacKenzie, I.

    2012-01-01

    Development of a control paradigm for unmanned aerial vehicles (UAV) is a new challenge to HCI. The demo explores how to use gaze as input for locomotion in 3D. A low-cost drone will be controlled by tracking the user’s point of regard (gaze) on a live video stream from the UAV.

  16. Integrating eye tracking and motion sensor on mobile phone for interactive 3D display

    Science.gov (United States)

    Sun, Yu-Wei; Chiang, Chen-Kuo; Lai, Shang-Hong

    2013-09-01

    In this paper, we propose an eye tracking and gaze estimation system for mobile phones. We integrate an eye detector, corner-eye center, and iso-center to improve pupil detection. The optical flow information is used for eye tracking. We develop a robust eye tracking system that integrates eye detection and optical-flow-based image tracking. In addition, we further incorporate the orientation sensor information from the mobile phone to improve the eye tracking for accurate gaze estimation. We demonstrate the accuracy of the proposed eye tracking and gaze estimation system through experiments on some public video sequences as well as videos acquired directly from a mobile phone.

  17. Decryptable to Your Eyes: Visualization of Security Protocols at the User Interface

    CERN Document Server

    Nyang, DaeHun; Kwon, Taekyoung; Kang, Brent; Stavrou, Angelos

    2011-01-01

    The design of authentication protocols, for online banking services in particular and any service that is of sensitive nature in general, is quite challenging. Indeed, enforcing security guarantees has overhead thus imposing additional computation and design considerations that do not always meet usability and user requirements. On the other hand, relaxing assumptions and rigorous security design to improve the user experience can lead to security breaches that can harm the users' trust in the system. In this paper, we demonstrate how careful visualization design can enhance not only the security but also the usability of the authentication process. To that end, we propose a family of visualized authentication protocols, a visualized transaction verification, and a "decryptable to your eyes only" protocol. Through rigorous analysis, we verify that our protocols are immune to many of the challenging authentication attacks applicable in the literature. Furthermore, using an extensive case study on a prototype o...

  18. Real-Time Mutual Gaze Perception Enhances Collaborative Learning and Collaboration Quality

    Science.gov (United States)

    Schneider, Bertrand; Pea, Roy

    2013-01-01

    In this paper we present the results of an eye-tracking study on collaborative problem-solving dyads. Dyads remotely collaborated to learn from contrasting cases involving basic concepts about how the human brain processes visual information. In one condition, dyads saw the eye gazes of their partner on the screen; in a control group, they did not…

  19. Does gaze direction modulate facial expression processing in children with autism spectrum disorder?

    Science.gov (United States)

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent motivational tendency (i.e., an avoidant facial expression with averted eye gaze) than those with an incongruent motivational tendency. Children with ASD (9-14 years old; n = 14) were not affected by the gaze direction of facial stimuli. This finding was replicated in Experiment 2, which presented only the eye region of the face to typically developing children (n = 10) and children with ASD (n = 10). These results demonstrated that children with ASD do not encode and/or integrate multiple communicative signals based on their affective or motivational tendency.

  20. Gaze direction affects the processing of emotional faces among socially anxious individuals: an eye-tracking study

    Institute of Scientific and Technical Information of China (English)

    李丹; 朱春燕; 汪凯; 余凤琼; 叶榕; 谢新晖; 刘云峰; 李丹丹

    2013-01-01

    Objective: To explore the effects of gaze direction on the processing of facial expressions among socially anxious individuals. Methods: 56 students were selected from Anhui Medical University. Based on Liebowitz Social Anxiety Scale (LSAS) scores, the subjects were grouped into high socially anxious individuals (HSA) and low socially anxious individuals (LSA). Eye movements were recorded while pairs of disgust and neutral faces were presented as experimental stimuli. Results: Under the direct-gaze condition, there was a significant difference (P < 0.05) between the total dwell time on disgust faces ((2311.09 ± 521.41) ms) and on neutral faces ((1910.69 ± 607.59) ms) in the HSA group, while there was no significant difference in the LSA group (P > 0.05). Under the averted-gaze condition, the total dwell times did not differ significantly between disgust and neutral faces in either the HSA or the LSA group (P > 0.05). Conclusion: Gaze direction affects the processing of facial expressions among socially anxious individuals. A disgust face with direct gaze may be perceived as social-threat information by socially anxious individuals, whereas a disgust face with averted gaze is not clear social-threat information.

  1. Target position relative to the head is essential for predicting head movement during head-free gaze pursuit.

    Science.gov (United States)

    Pallus, Adam C.; Freedman, Edward G.

    2016-08-01

    Gaze pursuit is the coordinated movement of the eyes and head that allows humans and other foveate animals to track moving objects. The control of smooth pursuit eye movements when the head is restrained is relatively well understood, but how the eyes coordinate with concurrent head movements when the head is free remains unresolved. In this study, we describe behavioral tasks that dissociate head and gaze velocity during head-free pursuit in monkeys. Existing models of gaze pursuit propose that both eye and head movements are driven only by the perceived velocity of the visual target and are therefore unable to account for these data. We show that in addition to target velocity, the positions of the eyes in the orbits and the retinal position of the target are important factors for predicting head movement during pursuit. When the eyes are already near their limits, further pursuit in that direction will be accompanied by more head movement than when the eyes are centered in the orbits, even when target velocity is the same. The step-ramp paradigm, often used in pursuit tasks, produces larger or smaller head movements, depending on the direction of the position step, while gaze pursuit velocity is insensitive to this manipulation. Using these tasks, we can reliably evoke head movements with peak velocities much faster than the target's velocity. Under these circumstances, the compensatory eye movements, which are often called counterproductive since they rotate the eyes in the opposite direction, are essential to maintaining accurate gaze velocity.
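
The finding that head movement depends on eye-in-orbit position as well as target velocity can be caricatured with a toy rule (entirely hypothetical, not the authors' model): the head's share of pursuit velocity grows as the eyes approach their orbital limit in the pursuit direction.

```python
def head_drive(target_velocity, eye_in_orbit_deg, orbit_limit_deg=25.0):
    """Hypothetical head contribution to gaze pursuit velocity.
    eye_in_orbit_deg: current eye eccentricity toward the pursuit direction."""
    # Normalized eccentricity: 0 when centered, 1 at the orbital limit.
    ecc = min(1.0, max(0.0, eye_in_orbit_deg / orbit_limit_deg))
    share = 0.2 + 0.8 * ecc  # head share rises from 20% to 100% (illustrative)
    return share * target_velocity
```

Under such a rule, identical target velocities evoke very different head movements depending on where the eyes start in the orbits, which is qualitatively what the step-ramp manipulation described above produces.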

  2. iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker.

    Science.gov (United States)

    Mayberry, Addison; Hu, Pan; Marlin, Benjamin; Salthouse, Christopher; Ganesan, Deepak

    2014-06-01

    Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a Bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70 mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees.
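
The sparse-pixel idea can be caricatured as follows: after sparsity-inducing training, most pixel weights are near zero, so gaze can be read out from only the top-weighted pixels, and only those pixels need be sensed. This toy linear read-out stands in for the paper's multi-layer network (all names are hypothetical):

```python
def select_pixels(weights, k):
    # Keep the k pixels with the largest absolute learned weight;
    # sparsity-inducing training drives most weights toward zero.
    idx = sorted(range(len(weights)), key=lambda i: -abs(weights[i]))[:k]
    return sorted(idx)

def estimate_gaze(pixels, weights, bias, subset):
    # Linear read-out over the sparse pixel subset: only these pixels
    # ever need to be read from the random-access image sensor.
    return bias + sum(pixels[i] * weights[i] for i in subset)
```

The power savings come from the sensor side: with random pixel access, reading a few dozen pixels per frame costs far less than digitizing the full image.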

  3. The Microstructure of Infants' Gaze as They View Adult Shifts in Overt Attention

    Science.gov (United States)

    Gredeback, Gustaf; Theuring, Carolin; Hauf, Petra; Kenward, Ben

    2008-01-01

    We presented infants (5, 6, 9, and 12 months old) with movies in which a female model turned toward and fixated 1 of 2 toys placed on a table. Infants' gaze was measured using a Tobii 1750 eye tracker. Six-, 9-, and 12-month-olds' first gaze shift from the model's face (after the model started turning) was directed to the attended toy. The…

  4. Age-related changes in the integration of gaze direction and facial expressions of emotion.

    Science.gov (United States)

    Slessor, Gillian; Phillips, Louise H; Bull, Rebecca

    2010-08-01

    Gaze direction influences younger adults' perception of emotional expressions, with direct gaze enhancing the perception of anger and joy, while averted gaze enhances the perception of fear. Age-related declines in emotion recognition and eye-gaze processing have been reported, indicating that there may be age-related changes in the ability to integrate these facial cues. As there is evidence of a positivity bias with age, age-related difficulties integrating these cues may be greatest for negative emotions. The present research investigated age differences in the extent to which gaze direction influenced explicit perception (e.g., anger, fear and joy; Study 1) and social judgments (e.g., of approachability; Study 2) of emotion faces. Gaze direction did not influence the perception of fear in either age group. In both studies, age differences were found in the extent to which gaze direction influenced judgments of angry and joyful faces, with older adults showing less integration of gaze and emotion cues than younger adults. Age differences were greatest when interpreting angry expressions. Implications of these findings for older adults' social functioning are discussed.

  5. How does gaze direction affect facial processing in social anxiety? -An ERP study.

    Science.gov (United States)

    Li, Dan; Yu, Fengqiong; Ye, Rong; Chen, Xingui; Xie, Xinhui; Zhu, Chunyan; Wang, Kai

    2017-02-09

    Previous behavioral studies have demonstrated an effect of eye gaze direction on the processing of emotional expressions in adults with social anxiety. However, specific brain responses to the interaction between gaze direction and facial expressions in social anxiety remain unclear. The present study aimed to explore the time course of such interaction using event-related potentials (ERPs) in participants with social anxiety. High socially anxious individuals and low socially anxious individuals were asked to identify the gender of angry or neutral faces with direct or averted gaze while their behavioral performance and electrophysiological data were monitored. We found that identification of angry faces with direct but not averted gaze elicited larger N2 amplitude in high socially anxious individuals compared to low socially anxious individuals, while identification of neutral faces did not produce any gaze modulation effect. Moreover, the N2 was correlated with increased anxiety severity upon exposure to angry faces with direct gaze. Therefore, our results suggest that gaze direction modulates the processing of threatening faces in social anxiety. The N2 component elicited by angry faces with direct gaze could be a state-dependent biomarker of social anxiety and may be an important reference biomarker for social anxiety diagnosis and intervention.

  6. Infant Eyes: A Window on Cognitive Development

    Science.gov (United States)

    Aslin, Richard N.

    2012-01-01

    Eye-trackers suitable for use with infants are now marketed by several commercial vendors. As eye-trackers become more prevalent in infancy research, there is the potential for users to be unaware of dangers lurking "under the hood" if they assume the eye-tracker introduces no errors in measuring infants' gaze. Moreover, the influx of voluminous…

  7. Human-like object tracking and gaze estimation with PKD android

    Science.gov (United States)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.

  8. Eye-head coordination in cats.

    Science.gov (United States)

    Guitton, D; Douglas, R M; Volle, M

    1984-12-01

    Gaze is the position of the visual axis in space and is the sum of the eye movement relative to the head plus head movement relative to space. In monkeys, a gaze shift is programmed with a single saccade that will, by itself, take the eye to a target, irrespective of whether the head moves. If the head turns simultaneously, the saccade is correctly reduced in size (to prevent gaze overshoot) by the vestibuloocular reflex (VOR). Cats have an oculomotor range (OMR) of only about +/- 25 degrees, but their field of view extends to about +/- 70 degrees. The use of the monkey's motor strategy to acquire targets lying beyond +/- 25 degrees requires the programming of saccades that cannot be physically made. We have studied, in cats, rapid horizontal gaze shifts to visual targets within and beyond the OMR. Heads were either totally unrestrained or attached to an apparatus that permitted short, unexpected perturbations of the head trajectory. Qualitatively similar rapid gaze shifts of all sizes up to at least 70 degrees could be accomplished with the classic single-eye saccade and a saccade-like head movement. For gaze shifts greater than 30 degrees, this classic pattern frequently was not observed, and gaze shifts were accomplished with a series of rapid eye movements whose time separation decreased, frequently until they blended into each other, as head velocity increased. Between discrete rapid eye movements, gaze continued in constant-velocity ramps, controlled by signals added to the VOR-induced compensatory phase that followed a saccade. When the head was braked just prior to its onset in a 10-degree gaze shift, the eye attained the target. This motor strategy is the same as that reported for monkeys. However, for larger target eccentricities (e.g., 50 degrees), the gaze shift was interrupted by the brake and the average saccade amplitude was 12-15 degrees, well short of the target and the OMR. Gaze shifts were completed by vestibularly driven eye movements when the
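    The eye-head decomposition described in this abstract (gaze displacement = eye-in-head displacement + head-in-space displacement, with the saccade bounded by the oculomotor range) can be sketched numerically. The function below is an illustrative reading of the abstract, not code from the study:

```python
OMR = 25.0  # approximate cat oculomotor range in degrees, per the abstract

def gaze_shift(target, head_contribution):
    """Gaze displacement = eye-in-head displacement + head-in-space displacement.

    The eye saccade is clipped to the oculomotor range, so targets beyond
    the OMR can only be acquired with a head contribution (as in the
    braked-head condition, where large gaze shifts fell short)."""
    eye_saccade = max(-OMR, min(OMR, target - head_contribution))
    return eye_saccade + head_contribution

# A 50-degree target is reached when the head contributes 30 degrees...
print(gaze_shift(50.0, 30.0))  # 50.0
# ...but gaze stops at the OMR limit if the head is braked.
print(gaze_shift(50.0, 0.0))   # 25.0
```

Within the OMR (e.g., a 10-degree shift), the eye alone attains the target even with the head braked, matching the monkey-like strategy reported for small gaze shifts.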

  9. Horizontal gaze palsy with progressive scoliosis: CT and MR findings

    Energy Technology Data Exchange (ETDEWEB)

    Bomfim, Rodrigo C.; Tavora, Daniel G.F.; Nakayama, Mauro; Gama, Romulo L. [Sarah Network of Rehabilitation Hospitals, Department of Radiology, Ceara (Brazil)

    2009-02-15

    Horizontal gaze palsy with progressive scoliosis (HGPPS) is a rare congenital disorder characterized by absence of conjugate horizontal eye movements and progressive scoliosis developing in childhood and adolescence. We present a child with clinical and neuroimaging findings typical of HGPPS. CT and MRI of the brain demonstrated pons hypoplasia, absence of the facial colliculi, butterfly configuration of the medulla and a deep midline pontine cleft. We briefly discuss the imaging aspects of this rare entity in light of the current literature. (orig.)

  10. Interface

    DEFF Research Database (Denmark)

    The computer interface has spread everywhere. Mobile phones, game consoles, PCs, and large screens contain computers, but computers are also embedded in clothing and other everyday objects, giving us constant access to digital data. Interface focuses on how the digital…

  11. Speech through ears and eyes: interfacing the senses with the supramodal brain

    Directory of Open Access Journals (Sweden)

    Virginie van Wassenhove

    2013-07-01

    Full Text Available The comprehension of auditory-visual (AV) speech integration has greatly benefited from recent advances in neuroscience and multisensory research. AV speech integration raises numerous questions relevant to the computational rules needed for binding information (within and across sensory modalities), the representational format in which speech information is encoded in the brain (e.g., auditory vs. articulatory), and how AV speech ultimately interfaces with the linguistic system. The following non-exhaustive review provides a set of empirical findings and theoretical questions that have fed the original proposal for predictive coding in AV speech processing. More recently, predictive coding has pervaded many fields of inquiry and positively reinforced the need to refine the notion of internal models in the brain, together with their implications for the interpretation of neural activity recorded with various neuroimaging techniques. However, it is argued here that the strength of predictive coding frameworks resides in the specificity of the generative internal models, not in their generality; specifically, internal models come with a set of rules applied to particular representational formats, themselves depending on the levels and the network structure at which predictive operations occur. As such, predictive coding in AV speech needs to specify the level(s) and the kinds of internal predictions that are necessary to account for the perceptual benefits or illusions observed in the field. Among those specifications, the actual content of a prediction comes first and foremost, followed by the representational granularity of that prediction in time. This review presents a focused discussion of these issues.

  12. Interface

    DEFF Research Database (Denmark)

    The computer interface has spread everywhere. Mobile phones, game consoles, PCs, and large screens contain computers, but computers are also embedded in clothing and other everyday objects, giving us constant access to digital data. Interface focuses on how digital art and culture are created, disseminated, and experienced through interfaces. The authors examine and discuss the aesthetics, ideology, and culture of the interface, and analyze current interface art across music, visual art, literature, and film. The book traces the interface's origins in Cold War laboratories and its…

  13. In the eye of the beholder: eye contact increases resistance to persuasion.

    Science.gov (United States)

    Chen, Frances S; Minson, Julia A; Schöne, Maren; Heinrichs, Markus

    2013-11-01

    Popular belief holds that eye contact increases the success of persuasive communication, and prior research suggests that speakers who direct their gaze more toward their listeners are perceived as more persuasive. In contrast, we demonstrate that more eye contact between the listener and speaker during persuasive communication predicts less attitude change in the direction advocated. In Study 1, participants freely watched videos of speakers expressing various views on controversial sociopolitical issues. Greater direct gaze at the speaker's eyes was associated with less attitude change in the direction advocated by the speaker. In Study 2, we instructed participants to look at either the eyes or the mouths of speakers presenting arguments counter to participants' own attitudes. Intentionally maintaining direct eye contact led to less persuasion than did gazing at the mouth. These findings suggest that efforts at increasing eye contact may be counterproductive across a variety of persuasion contexts.

  14. Examining the durability of incidentally learned trust from gaze cues.

    Science.gov (United States)

    Strachan, James W A; Tipper, Steven P

    2017-10-01

    In everyday interactions our attention follows the eye gaze of faces around us. Because this cueing is so powerful and difficult to inhibit, gaze can be used to facilitate or disrupt visual processing of the environment, and when we experience this we infer information about the trustworthiness of the cueing face. However, to date no studies have investigated how long these impressions last. To explore this we used a gaze-cueing paradigm in which faces consistently demonstrated either valid or invalid cueing behaviours. Previous experiments show that valid faces are subsequently rated as more trustworthy than invalid faces. We replicate this effect (Experiment 1) and then include a brief interference task in Experiment 2 between gaze cueing and trustworthiness rating, which weakens but does not completely eliminate the effect. In Experiment 3, we explore whether greater familiarity with the faces improves the durability of trust learning and find that the effect is more resilient with familiar faces. Finally, in Experiment 4, we push this further and show that evidence of trust learning can be seen up to an hour after cueing has ended. Taken together, our results suggest that incidentally learned trust can be durable, especially for faces that deceive.

  15. Spontaneous social orienting and gaze following in ringtailed lemurs (Lemur catta).

    Science.gov (United States)

    Shepherd, Stephen V; Platt, Michael L

    2008-01-01

    Both human and nonhuman primates preferentially orient toward other individuals and follow gaze in controlled environments. Precisely where any animal looks during natural behavior, however, remains unknown. We used a novel telemetric gaze-tracking system to record orienting behavior of ringtailed lemurs (Lemur catta) interacting with a naturalistic environment. We here provide the first evidence that ringtailed lemurs, group-living prosimian primates, preferentially gaze towards other individuals and, moreover, follow other lemurs' gaze while freely moving and interacting in naturalistic social and ecological environments. Our results support the hypothesis that stem primates were capable of orienting toward and following the attention of other individuals. Such abilities may have enabled the evolution of more complex social behavior and cognition, including theory of mind and language, which require spontaneous attention sharing. This is the first study to use telemetric eye-tracking to quantitatively monitor gaze in any nonhuman animal during locomotion, feeding, and social interaction. Moreover, this is the first demonstration of gaze following by a prosimian primate and the first to report gaze following during spontaneous interaction in naturalistic social environments.

  16. Single dose testosterone administration alleviates gaze avoidance in women with Social Anxiety Disorder.

    Science.gov (United States)

    Enter, Dorien; Terburg, David; Harrewijn, Anita; Spinhoven, Philip; Roelofs, Karin

    2016-01-01

    Gaze avoidance is one of the most characteristic and persistent social features in people with Social Anxiety Disorder (SAD). It signals social submissiveness and hampers adequate social interactions. Patients with SAD typically show reduced levels of testosterone, a hormone that facilitates socially dominant gaze behavior. We therefore tested, as a proof of principle, whether single-dose testosterone administration can reduce gaze avoidance in SAD. In a double-blind, within-subject design, 18 medication-free female participants with SAD and 19 female healthy control participants received a single dose of 0.5 mg testosterone and a matched placebo on two separate days. On each day, their spontaneous gaze behavior was recorded using eye tracking while they looked at angry, happy, and neutral facial expressions. Testosterone enhanced the percentage of first fixations to the eye region in participants with SAD compared to healthy controls. In addition, SAD patients' initial gaze avoidance in the placebo condition was associated with more severe social anxiety symptoms, and this relation was no longer present after testosterone administration. These findings indicate that single-dose testosterone administration can alleviate gaze avoidance in SAD. They support theories on the dominance-enhancing effects of testosterone and extend them by showing that effects are particularly strong in individuals characterized by socially submissive behavior. The finding that this core characteristic of SAD can be directly influenced by a single dose of testosterone calls for future inquiry into the clinical utility of testosterone in the treatment of SAD.

  17. Gaze angle: a possible mechanism of visual stress in virtual reality headsets.

    Science.gov (United States)

    Mon-Williams, M; Plooy, A; Burgess-Limerick, R; Wann, J

    1998-03-01

    It is known that some Virtual Reality (VR) head-mounted displays (HMDs) can cause temporary deficits in binocular vision. However, the precise mechanism by which visual stress occurs is unclear. This paper is concerned with a potential source of visual stress that has not previously been considered with regard to VR systems: inappropriate vertical gaze angle. As vertical gaze angle is raised or lowered, the 'effort' required of the binocular system also changes. The extent to which changes in vertical gaze angle alter the demands placed upon the vergence eye movement system was explored. The results suggested that visual stress may depend, in part, on vertical gaze angle. The proximity of the display screens within an HMD means that a VR headset should be in the correct vertical location for any individual user. This factor may explain some previous empirical results and has important implications for headset design. Fortuitously, a reasonably simple solution exists.
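    One way to appreciate how screen proximity loads the binocular system is the standard vergence-demand geometry. This sketch is an illustration added here (the paper's focus is vertical gaze angle), and the interpupillary distance is an assumed typical value:

```python
import math

def vergence_demand_deg(view_dist_m, ipd_m=0.063):
    """Vergence angle (degrees) needed to fixate a target at view_dist_m
    for eyes separated by ipd_m: 2 * atan(IPD / (2 * d)).
    Nearer displays demand more vergence effort."""
    return 2.0 * math.degrees(math.atan(ipd_m / (2.0 * view_dist_m)))

print(round(vergence_demand_deg(0.5), 1))  # ~7.2 degrees at 50 cm
print(round(vergence_demand_deg(2.0), 1))  # ~1.8 degrees at 2 m
```

The steep rise in demand at close viewing distances is one reason HMD optics and screen placement matter for binocular comfort.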

  18. Gaze Estimation Method Using Analysis of Electrooculogram Signals and Kinect Sensor

    Directory of Open Access Journals (Sweden)

    Keiko Sakurai

    2017-01-01

    Full Text Available A gaze estimation system is one of the communication methods for severely disabled people who cannot perform gestures or speech. We previously developed an eye tracking method using a compact and lightweight electrooculogram (EOG) device, but its accuracy is not very high. In the present study, we conducted experiments to investigate the EOG component most strongly correlated with changes in eye movement. Two types of experiments were conducted: viewing objects with eye movements alone, and viewing objects with combined face and eye movements. The experimental results show the feasibility of an eye tracking method using EOG signals and a Kinect sensor.
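    EOG-based horizontal gaze estimation commonly exploits the roughly linear relation between the recorded potential and gaze angle over a moderate range. A minimal least-squares calibration sketch (not the authors' implementation; the microvolt-per-degree gain below is a hypothetical value) looks like:

```python
import numpy as np

def calibrate(voltages_uv, angles_deg):
    """Fit angle = a * voltage + b by least squares, from calibration
    fixations recorded at known horizontal gaze angles."""
    A = np.column_stack([voltages_uv, np.ones(len(voltages_uv))])
    (a, b), *_ = np.linalg.lstsq(A, angles_deg, rcond=None)
    return a, b

def estimate_angle(voltage_uv, a, b):
    """Map a new EOG sample to an estimated gaze angle."""
    return a * voltage_uv + b

# Hypothetical calibration data: ~20 uV per degree of horizontal gaze.
volts = np.array([-400.0, -200.0, 0.0, 200.0, 400.0])
angles = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])
a, b = calibrate(volts, angles)
print(round(estimate_angle(100.0, a, b), 1))  # 5.0
```

In practice the linearity holds only over a limited range, drifts over time, and must be re-calibrated, which is one reason the authors combine EOG with a Kinect sensor.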

  19. Increased Eye Contact during Conversation Compared to Play in Children with Autism

    Science.gov (United States)

    Jones, Rebecca M.; Southerland, Audrey; Hamo, Amarelle; Carberry, Caroline; Bridges, Chanel; Nay, Sarah; Stubbs, Elizabeth; Komarow, Emily; Washington, Clay; Rehg, James M.; Lord, Catherine; Rozga, Agata

    2017-01-01

    Children with autism have atypical gaze behavior but it is unknown whether gaze differs during distinct types of reciprocal interactions. Typically developing children (N = 20) and children with autism (N = 20) (4-13 years) made similar amounts of eye contact with an examiner during a conversation. Surprisingly, there was minimal eye contact…

  20. Trait Anxiety Impacts the Perceived Gaze Direction of Fearful But Not Angry Faces

    Directory of Open Access Journals (Sweden)

    Zhonghua Hu

    2017-07-01

    Full Text Available Facial expression and gaze direction play an important role in social communication. Previous research has demonstrated that the perception of anger is enhanced by direct gaze, whereas it is unclear whether the perception of fear is enhanced by averted gaze. In addition, previous research has shown that anxiety affects the processing of facial expression and gaze direction, but it has not measured or controlled for depression. As a result, firm conclusions cannot be made regarding the impact of individual differences in anxiety and depression on perceptions of facial expressions and gaze direction. The current study reexamined the effect of anxiety level on the processing of facial expressions and gaze direction by matching participants on depression scores. A reliable psychophysical index of the range of eye gaze angles judged as being directed at oneself [the cone of direct gaze (CoDG)] was used as the dependent variable. Participants were stratified into high/low trait anxiety groups and asked to judge the gaze of angry, fearful, and neutral faces across a range of gaze directions. The results showed: (1) the perception of gaze direction was influenced by facial expression, and this was modulated by trait anxiety. For the high trait anxiety group, the CoDG for angry expressions was wider than for fearful and neutral expressions, with no significant difference between fearful and neutral expressions; for the low trait anxiety group, the CoDG for both angry and fearful expressions was wider than for neutral, with no significant difference between angry and fearful expressions. (2) Trait anxiety modulated the perception of gaze direction only in the fearful condition, such that the fearful CoDG for the high trait anxiety group was narrower than for the low trait anxiety group. This demonstrates that anxiety distinctly affected gaze perception in expressions that convey threat (angry, fearful), such that a high trait anxiety

  1. Gaze behavior of pre-adolescent children afflicted with Asperger syndrome.

    Science.gov (United States)

    Wiklund, Mari

    2012-01-01

    Asperger syndrome (AS) is a form of high-functioning autism characterized by qualitative impairment in social interaction. People afflicted with AS typically have abnormal nonverbal behaviors, often manifested by avoiding eye contact. Gaze constitutes an important interactional resource, and an AS person's tendency to avoid eye contact may affect the fluidity of conversations and cause misunderstandings. For this reason, it is important to know precisely how this avoidance is done and how it affects the interaction. The objective of this article is to describe the gaze behavior of preadolescent AS children in institutional multiparty conversations. Methodologically, the study is based on conversation analysis and a multimodal study of interaction. The findings show that three main patterns are used for avoiding eye contact: (1) fixing one's gaze straight ahead; (2) letting one's gaze wander around; and (3) looking at one's own hands when speaking. The informants of this study do not look at the interlocutors at all in the beginning or the middle of their turn. However, sometimes they turn to look at the interlocutors at the end of their turn. This shows that these children are able to use gaze as a source of feedback. When listening, looking at the speaker also seems to be easier for them than looking at the listeners when speaking.

  2. Holistic gaze strategy to categorize facial expression of varying intensities.

    Directory of Open Access Journals (Sweden)

    Kun Guo

    Full Text Available Using faces representing exaggerated emotional expressions, recent behavioural and eye-tracking studies have suggested a dominant role of individual facial features in transmitting diagnostic cues for decoding facial expressions. Considering that in everyday life we frequently view low-intensity expressive faces in which local facial cues are more ambiguous, we probably need to combine expressive cues from more than one facial feature to reliably decode naturalistic facial affects. In this study we applied a morphing technique to systematically vary the intensities of six basic facial expressions of emotion, and employed a self-paced expression categorization task to measure participants' categorization performance and associated gaze patterns. Analysis of pooled data from all expressions showed that increasing expression intensity improved categorization accuracy, shortened reaction time, and reduced the number of fixations directed at faces. The proportion of fixations and viewing time directed at internal facial features (eyes, nose, and mouth region), however, was not affected by varying levels of intensity. Further comparison between individual facial expressions revealed that although proportional gaze allocation at individual facial features was quantitatively modulated by the viewed expressions, the overall gaze distribution in face viewing was qualitatively similar across different facial expressions and intensities. It seems that we adopt a holistic viewing strategy to extract expressive cues from all internal facial features in the processing of naturalistic facial expressions.

  3. Eye Typing using Markov and Active Appearance Models

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Hansen, John Paulin; Nielsen, Mads

    2002-01-01

    We propose a non-intrusive eye tracking system intended for the use of everyday gaze typing using web cameras. We argue that high precision in gaze tracking is not needed for on-screen typing due to natural language redundancy. This facilitates the use of low-cost video components for advanced...

  4. Gaze Direction Detection in Autism Spectrum Disorder

    Science.gov (United States)

    Forgeot d'Arc, Baudouin; Delorme, Richard; Zalla, Tiziana; Lefebvre, Aline; Amsellem, Frédérique; Moukawane, Sanaa; Letellier, Laurence; Leboyer, Marion; Mouren, Marie-Christine; Ramus, Franck

    2017-01-01

    Detecting where our partners direct their gaze is an important aspect of social interaction. An atypical gaze processing has been reported in autism. However, it remains controversial whether children and adults with autism spectrum disorder interpret indirect gaze direction with typical accuracy. This study investigated whether the detection of…

  5. Can Speaker Gaze Modulate Syntactic Structuring and Thematic Role Assignment during Spoken Sentence Comprehension?

    Science.gov (United States)

    Knoeferle, Pia; Kreysa, Helene

    2012-01-01

    During comprehension, a listener can rapidly follow a frontally seated speaker's gaze to an object before its mention, a behavior which can shorten latencies in speeded sentence verification. However, the robustness of gaze-following, its interaction with core comprehension processes such as syntactic structuring, and the persistence of its effects are unclear. In two "visual-world" eye-tracking experiments participants watched a video of a speaker, seated at an angle, describing transitive (non-depicted) actions between two of three Second Life characters on a computer screen. Sentences were in German and had either subject(NP1)-verb-object(NP2) or object(NP1)-verb-subject(NP2) structure; the speaker either shifted gaze to the NP2 character or was obscured. Several seconds later, participants verified either the sentence referents or their role relations. When participants had seen the speaker's gaze shift, they anticipated the NP2 character before its mention and earlier than when the speaker was obscured. This effect was more pronounced for SVO than OVS sentences in both tasks. Interactions of speaker gaze and sentence structure were more pervasive in role-relations verification: participants verified the role relations faster for SVO than OVS sentences, and faster when they had seen the speaker shift gaze than when the speaker was obscured. When sentence and template role-relations matched, gaze-following even eliminated the SVO-OVS response-time differences. Thus, gaze-following is robust even when the speaker is seated at an angle to the listener; it varies depending on the syntactic structure and thematic role relations conveyed by a sentence; and its effects can extend to delayed post-sentence comprehension processes. These results suggest that speaker gaze effects contribute pervasively to visual attention and comprehension processes and should thus be accommodated by accounts of situated language comprehension.

  6. Can speaker gaze modulate syntactic structuring and thematic role assignment during spoken sentence comprehension?

    Directory of Open Access Journals (Sweden)

    Pia eKnoeferle

    2012-12-01

    Full Text Available During comprehension, a listener can rapidly follow a frontally seated speaker's gaze to an object before its mention, a behavior which can shorten latencies in speeded sentence verification. However, the robustness of gaze-following, its interaction with core comprehension processes such as syntactic structuring, and the persistence of its effects are unclear. In two "visual-world" eye-tracking experiments participants watched a video of a speaker, seated at an angle, describing transitive (non-depicted) actions between two of three Second Life characters on a computer screen. Sentences were in German and had either subject(NP1)-verb-object(NP2) or object(NP1)-verb-subject(NP2) structure; the speaker either shifted gaze to the NP2 character or was obscured. Several seconds later, participants verified either the sentence referents or their role relations. When participants had seen the speaker's gaze shift, they anticipated the NP2 character before its mention and earlier than when the speaker was obscured. This effect was more pronounced for SVO than OVS sentences in both tasks. Interactions of speaker gaze and sentence structure were more pervasive in role-relations verification: participants verified the role relations faster for SVO than OVS sentences, and faster when they had seen the speaker shift gaze than when the speaker was obscured. When sentence and template role relations matched, gaze-following even eliminated the SVO-OVS response-time differences. Thus, gaze-following is robust even when the speaker is seated at an angle to the listener; it varies depending on the syntactic structure and thematic role relations conveyed by a sentence; and its effects can extend to delayed post-sentence comprehension processes. These results suggest that speaker gaze effects contribute pervasively to visual attention and comprehension processes and should thus be accommodated by accounts of situated language comprehension.

  7. Brain-computer interface combining eye saccade two-electrode EEG signals and voice cues to improve the maneuverability of wheelchair.

    Science.gov (United States)

    Wang, Ker-Jiun; Zhang, Lan; Luan, Bo; Tung, Hsiao-Wei; Liu, Quanfeng; Wei, Jiacheng; Sun, Mingui; Mao, Zhi-Hong

    2017-07-01

    Brain-computer interfaces (BCIs) greatly augment human capabilities by translating brain-wave signals into feasible commands for operating external devices. However, the development of BCIs faces many issues, such as the low classification accuracy of brain signals and tedious human-learning procedures. To address these problems, we propose to use signals associated with eye saccades and blinks to control a BCI interface. By extracting existing physiological eye signals, the user does not need to adapt his/her brain waves to the device. Furthermore, using saccade signals to control an external device frees the limbs to perform other tasks. In this research, we used two electrodes placed on top of the left and right ears of thirteen participants. We then used Independent Component Analysis (ICA) to extract meaningful EEG signals associated with eye movements. A sliding-window technique was implemented to collect relevant features. Finally, we classified the features as horizontal or blink eye movements using KNN and SVM, achieving a mean classification accuracy of about 97%. The two electrodes were then integrated with off-the-shelf earbuds to control a wheelchair. The earbuds generate voice cues indicating when to rotate the eyes to certain locations (i.e., left or right) or blink, so that the user can select directional commands to drive the wheelchair. In addition, by properly designing the contents of the voice menus, many commands can be generated even though only a limited number of eye-saccade states can be identified.
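    The pipeline this abstract describes (features extracted from signal windows, then nearest-neighbour classification of blink vs. horizontal-saccade events) can be sketched as follows. The synthetic waveforms, feature choices, and parameter values are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_event(kind, n=100):
    """Synthetic single-channel event: a blink is a transient pulse,
    a horizontal saccade is a sustained step (both plus noise)."""
    t = np.arange(n)
    if kind == "blink":
        sig = 5.0 * np.exp(-((t - n / 2) ** 2) / 50.0)
    else:  # saccade
        sig = np.where(t > n / 2, 3.0, 0.0)
    return sig + rng.normal(0.0, 0.3, n)

def features(sig):
    """Simple per-window features: mean, spread, peak-to-peak amplitude."""
    return np.array([sig.mean(), sig.std(), sig.max() - sig.min()])

def knn_predict(train_X, train_y, x, k=3):
    """Minimal k-nearest-neighbour vote using Euclidean distance."""
    d = np.linalg.norm(train_X - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

# Labels: 0 = blink, 1 = saccade.
train_X = np.array([features(make_event(kind))
                    for kind in ["blink", "saccade"] * 20])
train_y = np.array([0, 1] * 20)

test_events = [("blink", 0), ("saccade", 1)]
correct = sum(knn_predict(train_X, train_y, features(make_event(kind))) == y
              for kind, y in test_events)
print(correct, "/", len(test_events))
```

A real system would slide overlapping windows over a continuous two-channel recording and apply ICA beforehand, but the feature-then-classify structure is the same.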

  8. Gaze shifts during dual-tasking stair descent.

    Science.gov (United States)

    Miyasike-daSilva, Veronica; McIlroy, William E

    2016-11-01

    To investigate the role of vision in stair locomotion, young adults descended a seven-step staircase during unrestricted walking (CONTROL) and while performing a concurrent visual reaction time (RT) task displayed on a monitor. The monitor was located either 3.5 m (HIGH) or 0.5 m (LOW) above ground level at the end of the stairway, which either restricted (HIGH) or facilitated (LOW) the view of the stairs in the lower field of view as participants walked downstairs. Downward gaze shifts (recorded with an eye tracker) and gait speed were significantly reduced in HIGH and LOW compared with CONTROL. Gaze and locomotor behaviour were not different between HIGH and LOW. However, inter-individual variability increased in HIGH, in which participants combined different response characteristics including slower walking, handrail use, downward gaze, and/or increased RTs. The fastest RTs occurred on the mid-steps (non-transition steps). While gait and visual task performance were not statistically different prior to the top and bottom transition steps, gaze behaviour and RT were more variable prior to transition steps in HIGH. This study demonstrated that, in the presence of a visual task, people do not look down as often when walking downstairs and require only minimal adjustments provided that the view of the stairs is available in the lower field of view. The middle of the stairs seems to place fewer demands on executive function, whereas visual attention appears to be required to detect the last transition, via gaze shifts or peripheral vision.

  9. interfaces

    Directory of Open Access Journals (Sweden)

    Dipayan Sanyal

    2005-01-01

    macroscopic conservation equations with an order parameter which can account for the solid, liquid, and mushy zones with the help of a phase function defined on the basis of the liquid fraction, the Gibbs relation, and the phase diagram with local approximations. Using the above formalism for alloy solidification, the width of the diffuse interface (mushy zone) was computed rather accurately for iron-carbon and ammonium chloride-water binary alloys and validated against experimental data from the literature.

  10. DIAGNOSIS OF MYASTHENIA GRAVIS USING FUZZY GAZE TRACKING SOFTWARE

    Directory of Open Access Journals (Sweden)

    Javad Rasti

    2015-04-01

    Full Text Available Myasthenia Gravis (MG) is an autoimmune disorder which may lead to paralysis and even death if not treated in time. One of its primary symptoms is severe muscular weakness, initially arising in the eye muscles. Testing the mobility of the eyeball can help in early detection of MG. In this study, software was designed to analyze the ability of the eye muscles to focus in various directions, thus estimating the MG risk. Progressive weakness in gazing at the directions prompted by the software can reveal abnormal fatigue of the eye muscles, which is an alert sign for MG. To assess the user’s ability to keep gazing at a specified direction, a fuzzy algorithm was applied to images of the user’s eyes to determine the position of the iris relative to the sclera. The results of tests performed on 18 healthy volunteers and 18 volunteers in the early stages of MG confirmed the validity of the software.

  11. Enhancing sensorimotor activity by controlling virtual objects with gaze.

    Directory of Open Access Journals (Sweden)

    Cristián Modroño

    Full Text Available This fMRI study examines the brain activity of healthy volunteers who manipulated a virtual object in the context of a digital game using two different control methods: their right hand or their gaze. The results show extended activations in sensorimotor areas not only when participants played in the traditional way (using their hand) but also when they used their gaze to control the virtual object. Furthermore, with the exception of the primary motor cortex, regional motor activity was similar regardless of whether the effector was the arm or the eye. These results have a potential application in the field of neurorehabilitation as a new approach to generating activation of the sensorimotor system to support the recovery of motor functions.

  12. Influence of Gaze Direction on Face Recognition: A Sensitive Effect

    Directory of Open Access Journals (Sweden)

    Noémy Daury

    2011-08-01

    Full Text Available This study aimed to determine the conditions in which eye contact may improve recognition memory for faces. Different stimuli and procedures were tested in four experiments. The effect of gaze direction on memory was found when a simple “yes-no” recognition task was used, but not when the recognition task was more complex (e.g., including “Remember-Know” judgements, cf. Experiment 2, or confidence ratings, cf. Experiment 4). Moreover, even when a “yes-no” recognition paradigm was used, the effect occurred with one series of stimuli (cf. Experiment 1) but not with another (cf. Experiment 3). The difficulty of producing the positive effect of gaze direction on memory is discussed.

  13. Can Gaze Avoidance Explain Why Individuals with Asperger's Syndrome Can't Recognise Emotions from Facial Expressions?

    Science.gov (United States)

    Sawyer, Alyssa C. P.; Williamson, Paul; Young, Robyn L.

    2012-01-01

    Research has shown that individuals with Autism Spectrum Disorders (ASD) have difficulties recognising emotions from facial expressions. Since eye contact is important for accurate emotion recognition, and individuals with ASD tend to avoid eye contact, this tendency for gaze aversion has been proposed as an explanation for the emotion recognition…

  15. PyGaze: an open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments.

    Science.gov (United States)

    Dalmaijer, Edwin S; Mathôt, Sebastiaan; Van der Stigchel, Stefan

    2014-12-01

    The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility. PyGaze can be used for visual and auditory stimulus presentation; for response collection via keyboard, mouse, joystick, and other external hardware; and for the online detection of eye movements using a custom algorithm. A wide range of eyetrackers of different brands (EyeLink, SMI, and Tobii systems) are supported. The novelty of PyGaze lies in providing an easy-to-use layer on top of the many different software libraries that are required for implementing eyetracking experiments. Essentially, PyGaze is a software bridge for eyetracking research.
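As a rough illustration of what online eye-movement detection involves, here is a minimal dispersion-threshold (I-DT style) fixation detector in plain Python. This is not PyGaze's actual algorithm or API; the thresholds and window length are illustrative assumptions.

```python
# Minimal dispersion-threshold (I-DT style) fixation detector. A run of
# consecutive gaze samples whose bounding box stays small counts as a
# fixation; large jumps between runs are saccades. Thresholds are made up.

def _dispersion(window):
    """Bounding-box dispersion (width + height) of a list of (x, y) samples."""
    xs, ys = zip(*window)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=25, min_samples=5):
    """Group (x, y) gaze samples into fixations.

    Returns a list of (start_index, end_index_exclusive, centroid) tuples.
    """
    fixations = []
    start = 0
    while start + min_samples <= len(samples):
        end = start + min_samples
        if _dispersion(samples[start:end]) <= max_dispersion:
            # Grow the window while dispersion stays under the threshold.
            while end < len(samples) and _dispersion(samples[start:end + 1]) <= max_dispersion:
                end += 1
            xs, ys = zip(*samples[start:end])
            fixations.append((start, end, (sum(xs) / len(xs), sum(ys) / len(ys))))
            start = end
        else:
            start += 1
    return fixations

# Two steady gaze clusters separated by a saccade-like jump.
gaze = [(100, 100)] * 10 + [(300, 250)] * 10
for f in detect_fixations(gaze):
    print(f)
```

An online variant would feed samples in as they arrive rather than over a finished list, but the dispersion logic is the same.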

  16. The inversion effect on gaze perception reflects processing of component information.

    Science.gov (United States)

    Schwaninger, Adrian; Lobmaier, Janek S; Fischer, Martin H

    2005-11-01

    When faces are turned upside-down they are much more difficult to recognize than other objects. This "face inversion effect" has often been explained in terms of configural processing, which is impaired when faces are rotated away from the upright. Here we report a "gaze inversion effect" and discuss whether it is related to configural face processing of the whole face. Observers reported the gaze locations of photographed upright or inverted faces. When whole faces were presented, we found an inversion effect both for constant errors and observer sensitivity. These results were closely replicated when only the eyes were visible. Together, our findings suggest that gaze processing is largely based on component-based information from the eye region. Processing this information is orientation-sensitive and does not seem to rely on configural processing of the whole face.

  17. Vision research: losing sight of eye dominance.

    Science.gov (United States)

    Carey, D P

    2001-10-16

    Most people prefer to use their right eye for viewing. New evidence reveals that this dominance is much more plastic than that for one hand or foot: it changes from one eye to the other depending on angle of gaze. Remarkably, sighting dominance depends on the hand being directed towards the visual target.

  18. Creating Gaze Annotations in Head Mounted Displays

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Qvarfordt, Pernilla

    2015-01-01

    To facilitate distributed communication in mobile settings, we developed GazeNote for creating and sharing gaze annotations in head mounted displays (HMDs). With gaze annotations it is possible to point out objects of interest within an image and add a verbal description. To create an annotation, the user simply captures an image using the HMD’s camera, looks at an object of interest in the image, and speaks out the information to be associated with the object. The gaze location is recorded and visualized with a marker. The voice is transcribed using speech recognition. Gaze annotations can…

  19. Gaze maintenance and autism spectrum disorder.

    Science.gov (United States)

    Kaye, Leah; Kurtz, Marie; Tierney, Cheryl; Soni, Ajay; Augustyn, Marilyn

    2014-01-01

    were equal and reactive without afferent pupillary defect, and normal visual tracking as assessed through pursuit and saccades. There were some head jerking motions observed which were not thought to be part of Chase's attempts to view objects. Gaze impersistence was noted, although it was not clear if this was due to a lack of attention or a true inability to maintain a gaze in the direction instructed. On review of the school's speech and language report, they state that he is >90% intelligible. He has occasional lip trills. Testing with the Clinical Evaluation of Language Fundamentals shows mild delays in receptive language, especially those that require visual attention. Verbal Motor Production Assessment for Children reveals focal oromotor control and sequencing skills that are below average, with groping when asked to imitate single oromotor nonspeech movements and sequenced double oromotor nonspeech movements. At 5½ years, he returns for follow-up, and he is outgoing and imaginative, eager to play and socialize. He makes eye contact but does not always maintain it. He asks and responds to questions appropriately, and he is able to follow verbal directions and verbal redirection. He is very interested in Toy Story characters but willing to share them and plays with other toys. Chase's speech has predictable, easy to decode sound substitutions. On interview with him, you feel that he has borderline cognitive abilities. He also demonstrates good eye contact but lack of visual gaze maintenance; this is the opposite of the pattern you are accustomed to in patients with autism spectrum disorder. What do you do next?

  20. Spatial transformations between superior colliculus visual and motor response fields during head-unrestrained gaze shifts.

    Science.gov (United States)

    Sadeh, Morteza; Sajad, Amirsaman; Wang, Hongying; Yan, Xiaogang; Crawford, John Douglas

    2015-12-01

    We previously reported that visuomotor activity in the superior colliculus (SC)--a key midbrain structure for the generation of rapid eye movements--preferentially encodes target position relative to the eye (Te) during low-latency head-unrestrained gaze shifts (DeSouza et al., 2011). Here, we trained two monkeys to perform head-unrestrained gaze shifts after a variable post-stimulus delay (400-700 ms), to test whether temporally separated SC visual and motor responses show different spatial codes. Target positions, final gaze positions and various frames of reference (eye, head, and space) were dissociated through natural (untrained) trial-to-trial variations in behaviour. 3D eye and head orientations were recorded, and 2D response field data were fitted against multiple models by use of a statistical method reported previously (Keith et al., 2009). Of 60 neurons, 17 showed a visual response, 12 showed a motor response, and 31 showed both visual and motor responses. The combined visual response field population (n = 48) showed a significant preference for Te, which was also preferred in each visual subpopulation. In contrast, the motor response field population (n = 43) showed a preference for final (relative to initial) gaze position models, and the Te model was statistically eliminated in the motor-only population. There was also a significant shift of coding from the visual to motor response within visuomotor neurons. These data confirm that SC response fields are gaze-centred, and show a target-to-gaze transformation between visual and motor responses. Thus, visuomotor transformations can occur between, and even within, neurons within a single frame of reference and brain structure.

  1. The Disturbance of Gaze in Progressive Supranuclear Palsy (PSP): Implications for Pathogenesis

    Directory of Open Access Journals (Sweden)

    Athena L Chen

    2010-12-01

    Full Text Available Progressive supranuclear palsy (PSP) is a disease of later life that is currently regarded as a form of neurodegenerative tauopathy. Disturbance of gaze is a cardinal clinical feature of PSP that often helps clinicians to establish the diagnosis. Since the neurobiology of gaze control is now well understood, it is possible to use eye movements as investigational tools to understand aspects of the pathogenesis of PSP. In this review, we summarize each disorder of gaze control that occurs in PSP, drawing on our studies of fifty patients, and on reports from other laboratories that have measured the disturbances of eye movements. When these gaze disorders are approached by considering each functional class of eye movements and its neurobiological basis, a distinct pattern of eye movement deficits emerges that provides insight into the pathogenesis of PSP. Although some aspects of all forms of eye movements are affected in PSP, the predominant defects concern vertical saccades (slow and hypometric, both up and down), impaired vergence, and inability to modulate the linear vestibulo-ocular reflex appropriately for viewing distance. These vertical and vergence eye movements habitually work in concert to enable visuomotor skills that are important during locomotion with the hands free. Taken with the prominent early feature of falls, these findings suggest that PSP tauopathy impairs a recently-evolved neural system concerned with bipedal locomotion in an erect posture and frequent gaze shifts between the distant environment and proximate hands. This approach provides a conceptual framework that can be used to address the nosological challenge posed by overlapping clinical and neuropathological features of neurodegenerative tauopathies.

  2. Affective-Motivational Brain Responses to Direct Gaze in Children with Autism Spectrum Disorder

    Science.gov (United States)

    Kylliainen, Anneli; Wallace, Simon; Coutanche, Marc N.; Leppanen, Jukka M.; Cusack, James; Bailey, Anthony J.; Hietanen, Jari K.

    2012-01-01

    Background: It is unclear why children with autism spectrum disorders (ASD) tend to be inattentive to, or even avoid eye contact. The goal of this study was to investigate affective-motivational brain responses to direct gaze in children with ASD. To this end, we combined two measurements: skin conductance responses (SCR), a robust arousal…

  3. Gaze directed displays as an enabling technology for attention aware systems

    NARCIS (Netherlands)

    Toet, A.

    2006-01-01

    Visual information can in principle be dynamically optimised by monitoring the user’s state of attention, e.g. by tracking eye movements. Gaze directed displays are therefore an important enabling technology for attention aware systems. We present a state-of-the-art review of both (1) techniques to

  4. Gaze-centered updating of remembered visual space during active whole-body translation

    NARCIS (Netherlands)

    Pelt, S. van; Medendorp, W.P.

    2007-01-01

    Various cortical and sub-cortical brain structures update the gaze-centered coordinates of remembered stimuli to maintain an accurate representation of visual space across eye rotations and to produce suitable motor plans. A major challenge for the computations by these structures is updating across

  5. Looking for Action: Talk and Gaze Home Position in the Airline Cockpit

    Science.gov (United States)

    Nevile, Maurice

    2010-01-01

    This paper considers the embodied nature of discourse for a professional work setting. It examines language in interaction in the airline cockpit, and specifically how shifts in pilots' eye gaze direction can indicate the action of talk, that is, what talk is doing and its relative contribution to work-in-progress. Looking towards the other…

  7. Attention to the Mouth and Gaze Following in Infancy Predict Language Development

    Science.gov (United States)

    Tenenbaum, Elena J.; Sobel, David M.; Sheinkopf, Stephen J.; Malle, Bertram F.; Morgan, James L.

    2015-01-01

    We investigated longitudinal relations among gaze following and face scanning in infancy and later language development. At 12 months, infants watched videos of a woman describing an object while their passive viewing was measured with an eye-tracker. We examined the relation between infants' face scanning behavior and their tendency to follow the…

  8. Horizontal gaze palsy with progressive scoliosis – A case report

    Directory of Open Access Journals (Sweden)

    P Shalini

    2017-01-01

    Full Text Available Horizontal gaze palsy with progressive scoliosis (HGPPS) is a rare congenital disorder characterized by absence of conjugate horizontal eye movements, accompanied by progressive scoliosis developing in childhood and adolescence. It occurs due to mutation in the ROBO3 gene on chromosome 11q23-q25. We report the case of a 60-year-old woman who presented with complaints of defective vision in both eyes. On examination, she had scoliosis with restricted abduction and adduction in both eyes, with intact elevation and depression. Magnetic resonance imaging of the brain and orbit showed brainstem hypoplasia with absence of the facial colliculi, presence of a deep midline pontine cleft (split pons sign), and a butterfly configuration of the medulla, which are the radiological findings seen in this disorder.

  9. Mobile gaze-based screen interaction in 3D environments

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Witzner Hansen, Dan

    2011-01-01

    Head-mounted eye trackers can be used for mobile interaction as well as gaze estimation purposes. This paper presents a method that enables the user to interact with any planar digital display in a 3D environment using a head-mounted eye tracker. An effective method for identifying the screens in the field of view of the user is also presented, which can be applied in a general scenario in which multiple users interact with multiple screens. A particular application of this technique is implemented in a home environment with two big screens and a mobile phone. In this application a user was able to interact with these screens using a wireless head-mounted eye tracker.

  10. Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.

    Science.gov (United States)

    Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo

    2017-07-01

    Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level; they precede body movements and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis, and in amputees, arising from stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy, among others. Despite this benefit, eye tracking is not widely used as a control interface for robotic interfaces in movement-impaired patients due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking using our GT3D binocular eye tracker with a custom-designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. Users can move their own hand to any location of the workspace by simply looking at the target and winking once. This purely eye-tracking-based system enables the end-user to retain free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes. This results in a fully automated calibration procedure that yields several thousand calibration points, versus standard approaches using a dozen points, resulting in beyond state-of-the-art 3D accuracy and precision.
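The calibration step described above (fitting a mapping from thousands of tracked samples to known target positions) can be sketched as a least-squares fit. The code below fits a hypothetical affine map from raw tracker readings to 3D workspace coordinates; the real GT3D pipeline is more elaborate, and all names and values here are illustrative.

```python
# Hedged sketch of dense calibration: given many (tracker_reading, true_3d_point)
# pairs collected while the user follows a known target trajectory, fit an
# affine map world ~ A*[u, v, w, 1] per output axis via the normal equations.

def solve(A, b):
    """Solve a small linear system A x = b by Gaussian elimination with pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_affine(readings, targets):
    """Least-squares fit of targets ~ A*[u, v, w, 1], one row of A per axis."""
    X = [list(r) + [1.0] for r in readings]
    XtX = [[sum(xi[a] * xi[b] for xi in X) for b in range(4)] for a in range(4)]
    coeffs = []
    for axis in range(3):
        Xty = [sum(xi[a] * t[axis] for xi, t in zip(X, targets)) for a in range(4)]
        coeffs.append(solve(XtX, Xty))
    return coeffs  # per axis: world = a*u + b*v + c*w + d

def apply_map(coeffs, reading):
    u, v, w = reading
    return tuple(a * u + b * v + c * w + d for a, b, c, d in coeffs)

# Demo: synthesize readings from a known affine map, then recover it.
readings = [(u, v, w) for u in (0, 1, 2) for v in (0, 1, 2) for w in (0, 1, 2)]
targets = [(2 * u + 0.1 * v + 3, -v + 0.5 * w + 1, u + v + w) for (u, v, w) in readings]
coeffs = fit_affine(readings, targets)
x, y, z = apply_map(coeffs, (1.0, 1.0, 1.0))
print(round(x, 3), round(y, 3), round(z, 3))  # ≈ 5.1 0.5 3.0
```

With thousands of samples from a space-filling trajectory, the same normal-equations fit simply averages over far more data than a dozen-point grid, which is where the accuracy gain comes from.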

  11. Accurate eye center location through invariant isocentric patterns

    NARCIS (Netherlands)

    Valenti, R.; Gevers, T.

    2012-01-01

    Locating the center of the eyes allows for valuable information to be captured and used in a wide range of applications. Accurate eye center location can be determined using commercial eye-gaze trackers, but additional constraints and expensive hardware make these existing solutions unattractive and

  12. The effect of face eccentricity on the perception of gaze direction.

    Science.gov (United States)

    Todorović, Dejan

    2009-01-01

    The perception of a looker's gaze direction depends not only on iris eccentricity (the position of the looker's irises within the sclera) but also on the orientation of the lookers' head. One among several potential cues of head orientation is face eccentricity, the position of the inner features of the face (eyes, nose, mouth) within the head contour, as viewed by the observer. For natural faces this cue is confounded with many other head-orientation cues, but in schematic faces it can be studied in isolation. Salient novel illustrations of the effectiveness of face eccentricity are 'Necker faces', which involve equal iris eccentricities but multiple perceived gaze directions. In four experiments, iris and face eccentricity in schematic faces were manipulated, revealing strong and consistent effects of face eccentricity on perceived gaze direction, with different types of tasks. An additional experiment confirmed the 'Mona Lisa' effect with this type of stimuli. Face eccentricity most likely acted as a simple but robust cue of head turn. A simple computational account of combined effects of cues of eye and head turn on perceived gaze direction is presented, including a formal condition for the perception of direct gaze. An account of the 'Mona Lisa' effect is presented.

  13. Toward the design of a low cost vision-based gaze tracker for interaction skill acquisition

    Directory of Open Access Journals (Sweden)

    Martinez Francis

    2011-12-01

    Full Text Available The human gaze is a basic means of non-verbal interaction between humans; however, in several situations, especially in the context of upper limb motor impairments, gaze also constitutes an alternative means for a human's interaction with the environment (real or virtual). Mastering these interactions through specific tools frequently requires the acquisition of new skills and an understanding of the mechanisms that underlie their acquisition. The technological tool is therefore key to acquiring new interaction skills. This paper presents a tool for interaction skill acquisition via gaze. The proposed gaze tracker is a low cost head mounted system based on vision technology “only”. The system hardware specifications and the status of the gaze tracker design are presented; the dedicated algorithm for eye detection and tracking, and an improvement of G. Zelinsky's model for eye movement prediction during the search for a predefined object in an image, are outlined. Results of the preliminary software evaluation are presented.

  14. Exploring combinations of different color and facial expression stimuli for gaze-independent BCIs

    Directory of Open Access Journals (Sweden)

    Long Chen

    2016-01-01

    Full Text Available Abstract. Background: Some studies have shown that a conventional visual brain-computer interface (BCI) based on overt attention cannot be used effectively when eye movement control is not possible. To solve this problem, a novel visual BCI system based on covert attention and feature attention has been proposed, called the gaze-independent BCI. Color and shape differences between stimuli and backgrounds have generally been used in gaze-independent BCIs. Recently, a new paradigm based on facial expression changes was presented and obtained high performance. However, some facial expressions were so similar that users could not tell them apart, especially when they were presented at the same position in a rapid serial visual presentation (RSVP) paradigm. Consequently, the performance of such BCIs is reduced. New Method: In this paper, we combined facial expressions and colors to optimize stimulus presentation in the gaze-independent BCI. This optimized paradigm was called the colored dummy face pattern. It is suggested that different colors and facial expressions could help subjects to locate the target and evoke larger event-related potentials (ERPs). In order to evaluate the performance of this new paradigm, two other paradigms were presented, called the grey dummy face pattern and the colored ball pattern. Comparison with Existing Method(s): The key point that determined the value of the colored dummy face stimuli in BCI systems was whether they could obtain higher performance than grey face or colored ball stimuli. Ten healthy subjects (7 male, aged 21-26 years, mean 24.5±1.25) participated in our experiment. Online and offline results of four different paradigms were obtained and comparatively analyzed. Results: The results showed that the colored dummy face pattern evoked higher P300 and N400 ERP amplitudes compared with the grey dummy face pattern and the colored ball pattern. Online results showed

  15. Gaze motor asymmetries in the perception of faces during a memory task.

    Science.gov (United States)

    Mertens, I; Siegmund, H; Grüsser, O J

    1993-09-01

    In 33 male and female adult volunteers, eye position recordings were performed by means of an infrared reflection technique. Slides of randomly shuffled black-and-white photographs (7.5 x 10 degrees) of faces and vases were projected for 6 or 20 sec respectively in a visual memory task. In each series, 10 slides of art nouveau vases and of the "inner part" of masked Caucasian faces were used. During recording the head was fixed by a bite-board. (a) For faces the preferred targets of the centre of gaze were the eyes, the mouth and nose region, for vases the contours and some prominent ornaments. (b) Left-right asymmetries in the gaze-movement sampling strategy appeared with faces, but not with vases. In faces, the overall time that the centre of gaze remained in the left half of the field of gaze was significantly longer than in the right half. (c) When, however, the amplitude of the gaze excursions into the left and right halves of the inspected items was taken as a measure and normalized, a preference for the right gaze field was observed. (d) The relative left-right bias during face inspection was stronger with the 6 sec than with the 20 sec inspection period and significantly stronger in female than in male subjects for the 6 sec tasks. (e) Left/right inversion of the face stimuli did not abolish the side bias. Thus the asymmetric sampling strategy when faces were inspected as compared to vases was due to "internal" factors on the part of the subjects. It is hypothesized that a left-right asymmetry in hemispheric visual data processing for face stimuli was the cause of a left-right asymmetry in gaze motor strategies when faces were inspected.

  16. Variation in the human cannabinoid receptor CNR1 gene modulates gaze duration for happy faces

    Directory of Open Access Journals (Sweden)

    Chakrabarti Bhismadev

    2011-06-01

    Full Text Available Abstract. Background: From an early age, humans look longer at preferred stimuli and also typically look longer at facial expressions of emotion, particularly happy faces. Atypical gaze patterns towards social stimuli are common in autism spectrum conditions (ASC). However, it is unknown whether gaze fixation patterns have any genetic basis. In this study, we tested whether variations in the cannabinoid receptor 1 (CNR1) gene are associated with gaze duration towards happy faces. This gene was selected because CNR1 is a key component of the endocannabinoid system, which is involved in processing reward, and in our previous functional magnetic resonance imaging (fMRI) study, we found that variations in CNR1 modulate the striatal response to happy (but not disgust) faces. The striatum is involved in guiding gaze to rewarding aspects of a visual scene. We aimed to validate and extend this result in another sample using a different technique (gaze tracking). Methods: A total of 30 volunteers (13 males and 17 females) from the general population observed dynamic emotional expressions on a screen while their eye movements were recorded. They were genotyped for the identical four single-nucleotide polymorphisms (SNPs) in the CNR1 gene tested in our earlier fMRI study. Results: Two SNPs (rs806377 and rs806380) were associated with differential gaze duration for happy (but not disgust) faces. Importantly, the allelic groups associated with a greater striatal response to happy faces in the fMRI study were associated with longer gaze duration at happy faces. Conclusions: These results suggest that CNR1 variations modulate the striatal function that underlies the perception of signals of social reward, such as happy faces. This suggests that CNR1 is a key element in the molecular architecture of the perception of certain basic emotions. This may have implications for understanding neurodevelopmental conditions marked by atypical eye contact and facial emotion processing.

  17. Viewpoint Consistency: An Eye Movement Study

    Directory of Open Access Journals (Sweden)

    Filipe Cristino

    2012-05-01

    Full Text Available Eye movements have been widely studied, using images and videos in laboratories or portable eye trackers in the real world. Although a good understanding of the saccadic system and extensive models of gaze have been developed over the years, only a few studies have focused on the consistency of eye movements across viewpoints. We have developed a new technique to compute and map the depth of collected eye movements on stimuli rendered from 3D mesh objects using a traditional corneal reflection eye tracker (SR EyeLink 1000). Having eye movements mapped into 3D space (and not onto an image space) allowed us to compare fixations across viewpoints. Fixation sequences (scanpaths) were also studied across viewpoints using the ScanMatch method (Cristino et al., 2010, Behavior Research Methods 42, 692–700), extended to work with 3D eye movements. In a set of experiments where participants were asked to perform a recognition task on either a set of objects or faces, we recorded their gaze while performing the task. Participants viewed the stimuli either in 2D or using anaglyph glasses. The stimuli were shown from different viewpoints during the learning and testing phases. A high degree of gaze consistency was found across the different viewpoints, particularly between learning and testing phases. Scanpaths were also similar across viewpoints, suggesting not only that the gazed spatial locations are alike, but also their temporal order.
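The ScanMatch method compares scanpaths by sequence alignment. As a hedged sketch of the underlying idea (not the ScanMatch toolbox itself, which adds region-of-interest binning, temporal weighting, and a tuned substitution matrix), here is a basic Needleman-Wunsch alignment over scanpaths encoded as region labels; the scoring values are illustrative.

```python
# Simplified scanpath comparison: global sequence alignment of two fixation
# sequences encoded as region labels (one character per fixated region).
# Match/mismatch/gap scores are illustrative, not ScanMatch's actual matrix.

def align_score(seq_a, seq_b, match=2, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score between two label sequences."""
    rows, cols = len(seq_a) + 1, len(seq_b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[-1][-1]

def similarity(seq_a, seq_b, match=2):
    """Normalise by the best possible score; 1.0 means identical scanpaths."""
    return align_score(seq_a, seq_b) / (match * max(len(seq_a), len(seq_b)))

print(similarity("ABCD", "ABCD"))  # 1.0
print(similarity("ABCD", "ABDD"))  # 0.625
```

Extending this to 3D, as the record describes, amounts to labelling fixations by mesh region rather than by image region before the same alignment is applied.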

  19. A non-verbal Turing test: differentiating mind from machine in gaze-based social interaction.

    Directory of Open Access Journals (Sweden)

    Ulrich J Pfeiffer

    Full Text Available In social interaction, gaze behavior provides important signals that have a significant impact on our perception of others. Previous investigations, however, have relied on paradigms in which participants are passive observers of other persons' gazes and do not adjust their gaze behavior as is the case in real-life social encounters. We used an interactive eye-tracking paradigm that allows participants to interact with an anthropomorphic virtual character whose gaze behavior is responsive to where the participant looks on the stimulus screen in real time. The character's gaze reactions were systematically varied along a continuum from a maximal probability of gaze aversion to a maximal probability of gaze-following during brief interactions, thereby varying contingency and congruency of the reactions. We investigated how these variations influenced whether participants believed that the character was controlled by another person (i.e., a confederate) or a computer program. In a series of experiments, the human confederate was either introduced as naïve to the task, cooperative, or competitive. Results demonstrate that the ascription of humanness increases with higher congruency of gaze reactions when participants are interacting with a naïve partner. In contrast, humanness ascription is driven by the degree of contingency irrespective of congruency when the confederate was introduced as cooperative. Conversely, during interaction with a competitive confederate, judgments were neither based on congruency nor on contingency. These results offer important insights into what renders the experience of an interaction truly social: Humans appear to have a default expectation of reciprocation that can be influenced drastically by the presumed disposition of the interactor to either cooperate or compete.

  20. A Non-Verbal Turing Test: Differentiating Mind from Machine in Gaze-Based Social Interaction

    Science.gov (United States)

    Pfeiffer, Ulrich J.; Timmermans, Bert; Bente, Gary; Vogeley, Kai; Schilbach, Leonhard

    2011-01-01

    In social interaction, gaze behavior provides important signals that have a significant impact on our perception of others. Previous investigations, however, have relied on paradigms in which participants are passive observers of other persons’ gazes and do not adjust their gaze behavior as is the case in real-life social encounters. We used an interactive eye-tracking paradigm that allows participants to interact with an anthropomorphic virtual character whose gaze behavior is responsive to where the participant looks on the stimulus screen in real time. The character’s gaze reactions were systematically varied along a continuum from a maximal probability of gaze aversion to a maximal probability of gaze-following during brief interactions, thereby varying contingency and congruency of the reactions. We investigated how these variations influenced whether participants believed that the character was controlled by another person (i.e., a confederate) or a computer program. In a series of experiments, the human confederate was either introduced as naïve to the task, cooperative, or competitive. Results demonstrate that the ascription of humanness increases with higher congruency of gaze reactions when participants are interacting with a naïve partner. In contrast, humanness ascription is driven by the degree of contingency irrespective of congruency when the confederate was introduced as cooperative. Conversely, during interaction with a competitive confederate, judgments were neither based on congruency nor on contingency. These results offer important insights into what renders the experience of an interaction truly social: Humans appear to have a default expectation of reciprocation that can be influenced drastically by the presumed disposition of the interactor to either cooperate or compete. PMID:22096599

  1. Constraining eye movement when redirecting walking trajectories alters turning control in healthy young adults.

    Science.gov (United States)

    Pradeep Ambati, V N; Murray, Nicholas G; Saucedo, Fabricio; Powell, Douglas W; Reed-Jones, Rebecca J

    2013-05-01

    Humans use a specific steering synergy, where the eyes and head lead rotation to the new direction, when executing a turn or change in direction. Increasing evidence suggests that eye movement is critical for turning control and that when the eyes are constrained, or participants have difficulties making eye movements, steering control is disrupted. The purpose of the current study was to extend previous research regarding eye movements and steering control to a functional walking and turning task. This study investigated eye, head, trunk, and pelvis kinematics of healthy young adults during a 90° redirection of walking trajectory under two visual conditions: Free Gaze (the eyes were allowed to move naturally in the environment), and Fixed Gaze (participants were required to fixate the eyes on a target in front). Results revealed significant differences in eye, head, and trunk coordination between Free Gaze and Fixed Gaze conditions. During Fixed Gaze, the segments moved together with no significant differences between segment onset times. In addition, the sequence of segment rotation during Fixed Gaze suggested a bottom-up postural perturbation control strategy in place of the top-down steering control seen in Free Gaze. The results of this study support the hypothesis that eye movement is critical for the release of the steering synergy for turning control.

  2. Look at my poster! Active gaze, preference and memory during a poster session.

    Science.gov (United States)

    Foulsham, Tom; Kingstone, Alan

    2011-01-01

    In science, as in advertising, people often present information on a poster, yet little is known about attention during a poster session. A mobile eye-tracker was used to record participants' gaze during a mock poster session featuring a range of academic psychology posters. Participants spent the most time looking at introductions and conclusions. Larger posters were looked at for longer, as were posters rated more interesting (but not necessarily more aesthetically pleasing). Interestingly, gaze did not correlate with memory for poster details or liking, suggesting that attracting someone towards your poster may not be enough.

  3. Synergistic convergence and split pons in horizontal gaze palsy and progressive scoliosis in two sisters

    Directory of Open Access Journals (Sweden)

    Jain Nitin

    2011-01-01

    Full Text Available Synergistic convergence is an ocular motor anomaly in which both eyes converge on attempted abduction or on attempted horizontal gaze. It has been related to peripheral causes such as congenital fibrosis of extraocular muscles (CFEOM), congenital cranial dysinnervation syndrome, and ocular misinnervation, or rarely to central causes such as horizontal gaze palsy with progressive scoliosis and brain stem dysplasia. We hereby report the occurrence of synergistic convergence in two sisters. Both of them also had kyphoscoliosis. Magnetic resonance imaging (MRI) of the brain and spine in both patients showed signs of brain stem dysplasia (split pons sign) differing in degree (the younger sister had more marked changes).

  4. Fear of heights freezes gaze to the horizon.

    Science.gov (United States)

    Kugler, Günter; Huppert, Doreen; Schneider, Erich; Brandt, Thomas

    2014-01-01

    Fear of heights is elicited by a glance into an abyss. However, the visual exploration behavior of fearful subjects at height has not yet been analyzed. We investigated eye and head movements, i.e. visual exploration behavior, of subjects susceptible to fear of heights during exposure to a visual cliff. The movements of the eyes and head were recorded in 19 subjects susceptible to fear of heights and 18 controls while standing still on an emergency balcony 20 meters above ground level for periods of 30 seconds. Participants wore mobile, infrared eye-tracking goggles with inertial sensors for recording head movements. Susceptibles exhibited fewer and smaller-amplitude eye-in-head saccades with fixations of longer duration. Spontaneous head movements were reduced by 49% in susceptibles, with a significantly lower mean absolute angular velocity (5.3°/s vs. 10.4°/s), and all three dimensions (yaw, pitch and roll) were equally affected. Gaze-in-space, which indicates exploration by coordinated eye-head movements, covered a smaller total area of the visual scene (explored horizontal angle: 19° vs. 32°; vertical: 9° vs. 17°). We hypothesize that the susceptibles suppress eye and head movements to alleviate fear of heights. However, this behavior has the potential disadvantage of impairing the visual stabilization of postural balance.

  5. Infrared Eye: Prototype 2

    Science.gov (United States)

    2016-06-07

    within the wide field and slaved to the operator’s line of sight by means of an eye-tracking system. The images from both cameras are fused and shown ... simultaneously on a high resolution CRT display unit, interfaced with the eye-tracking unit in order to optimize the human-machine interface. The IR Eye ... system was flight tested using the Advanced Systems Research Aircraft (Bell 412 helicopter) from the Flight Research Laboratory of the National Research

  6. Eye contact elicits bodily self-awareness in human adults.

    Science.gov (United States)

    Baltazar, Matias; Hazem, Nesrine; Vilarem, Emma; Beaucousin, Virginie; Picq, Jean-Luc; Conty, Laurence

    2014-10-01

    Eye contact is a typical human behaviour known to impact concurrent or subsequent cognitive processing. In particular, it has been suggested that eye contact induces self-awareness, though this has never been formally proven. Here, we show that the perception of a face with a direct gaze (that establishes eye contact), as compared to either a face with averted gaze or a mere fixation cross, led adult participants to rate more accurately the intensity of their physiological reactions induced by emotional pictures. Our data support the view that bodily self-awareness becomes more acute when one is subjected to another's gaze. Importantly, this effect was not related to a particular arousal state induced by eye contact perception. Rejecting the arousal hypothesis, we suggest that eye contact elicits a self-awareness process by enhancing self-focused attention in humans. We further discuss the implications of this proposal. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Seeing to hear? Patterns of gaze to speaking faces in children with autism spectrum disorders.

    Directory of Open Access Journals (Sweden)

    Julia eIrwin

    2014-05-01

    Full Text Available Using eye-tracking methodology, gaze to a speaking face was compared in a group of children with autism spectrum disorders (ASD) and those with typical development (TD). Patterns of gaze were observed under three conditions: audiovisual (AV) speech in auditory noise, visual-only speech, and an AV non-face, non-speech control. Children with ASD looked less to the face of the speaker and fixated less on the speaker's mouth than TD controls. No differences in gaze were reported for the non-face, non-speech control task. Since the mouth holds much of the articulatory information available on the face, these findings suggest that children with ASD may have reduced access to critical linguistic information. This reduced access to visible articulatory information could be a contributor to the communication and language problems exhibited by children with ASD.

  8. The influence of social and symbolic cues on observers' gaze behaviour.

    Science.gov (United States)

    Hermens, Frouke; Walker, Robin

    2016-08-01

    Research has shown that social and symbolic cues presented in isolation and at fixation have strong effects on observers, but it is unclear how cues compare when they are presented away from fixation and embedded in natural scenes. We here compare the effects of two types of social cue (gaze and pointing gestures) and one type of symbolic cue (arrow signs) on eye movements of observers under two viewing conditions (free viewing vs. a memory task). The results suggest that social cues are looked at more quickly, for longer and more frequently than the symbolic arrow cues. An analysis of saccades initiated from the cue suggests that the pointing cue leads to stronger cueing than the gaze and the arrow cue. While the task had only a weak influence on gaze orienting to the cues, stronger cue following was found for free viewing compared to the memory task.

  9. Head mounted device for point-of-gaze estimation in three dimensions

    DEFF Research Database (Denmark)

    Lidegaard, Morten; Witzner Hansen, Dan; Krüger, Norbert

    2014-01-01

    This paper presents a fully calibrated extended geometric approach for gaze estimation in three dimensions (3D). The methodology is based on a geometric approach utilising a fully calibrated binocular setup constructed as a head-mounted system. The approach is based on utilisation of two ordinary...... web-cameras for each eye and 6D magnetic sensors allowing free head movements in 3D. Evaluation of initial experiments indicate comparable results to current state-of-the-art on estimating gaze in 3D. Initial results show an RMS error of 39-50 mm in the depth dimension and even smaller...... in the horizontal and vertical dimensions regarding fixations. However, even though the workspace is limited, the fact that the system is designed as a head-mounted device, the workspace volume is relatively positioned to the pose of the device. Hence gaze can be estimated in 3D with relatively free head...
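
    The binocular geometric approach described above can be illustrated with basic vector algebra: each eye contributes a ray in world coordinates (origin at the eye centre, direction from the eye model), and a 3D point of gaze can be taken as the midpoint of the shortest segment between the two rays. The sketch below is an assumed reconstruction of that idea, not the authors' implementation; `triangulate_gaze` and its inputs are hypothetical names.

```python
import numpy as np

def triangulate_gaze(o_left, d_left, o_right, d_right):
    """Estimate a 3D point of gaze as the midpoint of the shortest
    segment between the left- and right-eye gaze rays.

    o_*: ray origins (eye centres) in world coordinates.
    d_*: gaze direction vectors (normalized internally).
    """
    d_left = d_left / np.linalg.norm(d_left)
    d_right = d_right / np.linalg.norm(d_right)
    w0 = o_left - o_right
    # Standard closest-points-between-lines coefficients.
    a = d_left @ d_left          # = 1 for unit vectors
    b = d_left @ d_right
    c = d_right @ d_right        # = 1 for unit vectors
    d = d_left @ w0
    e = d_right @ w0
    denom = a * c - b * b
    if np.isclose(denom, 0.0):
        return None              # near-parallel rays: depth is unreliable
    s = (b * e - c * d) / denom  # parameter along the left ray
    t = (a * e - b * d) / denom  # parameter along the right ray
    p_left = o_left + s * d_left
    p_right = o_right + t * d_right
    return (p_left + p_right) / 2.0
```

    The distance between `p_left` and `p_right` also gives a per-fixation quality measure, which is one way depth errors of the magnitude reported above (39-50 mm RMS) could be monitored.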

  10. Adaptive Gaze Strategies for Locomotion with Constricted Visual Field

    Directory of Open Access Journals (Sweden)

    Colas N. Authié

    2017-07-01

    Full Text Available In retinitis pigmentosa (RP), loss of peripheral visual field accounts for most difficulties encountered in visuo-motor coordination during locomotion. The purpose of this study was to accurately assess the impact of peripheral visual field loss on gaze strategies during locomotion, and to identify compensatory mechanisms. Nine RP subjects presenting a central visual field limited to 10–25° in diameter, and nine healthy subjects, were asked to walk in one of three directions: straight ahead to a visual target, or leftward or rightward through a door frame, with or without an obstacle on the way. Whole-body kinematics were recorded by motion capture, and gaze direction in space was reconstructed using an eye-tracker. Changes in gaze strategies were identified in RP subjects, including extensive exploration prior to walking, frequent fixations of the ground (even when knowing no obstacle was present), of door edges (essentially the proximal one), of obstacle edges and corners, and alternating fixations of the door edges when approaching the door. This was associated with more frequent, sometimes larger rapid eye movements, larger movements, and forward tilting of the head. Despite the visual handicap, the trajectory geometry was identical between groups, with a small decrease in walking speed in RP subjects. These findings identify adaptive changes in sensory-motor coordination that ensure visual awareness of the surroundings, detect changes in spatial configuration, collect information for self-motion, update the postural reference frame, and update egocentric distances to environmental objects. They are of crucial importance for the design of optimized rehabilitation procedures.

  11. Gaze-independent ERP-BCIs : Augmenting performance through location-congruent bimodal stimuli

    NARCIS (Netherlands)

    Thurlings, M.E.; Brouwer, A.M.; Erp, J.B.F. van; Werkhoven, P.J.

    2014-01-01

    Gaze-independent event-related potential (ERP) based brain-computer interfaces (BCIs) yield relatively low BCI performance and traditionally employ unimodal stimuli. Bimodal ERP-BCIs may increase BCI performance due to multisensory integration or summation in the brain. An additional advantage of bi

  12. Gaze behaviors of goaltenders under spatial-temporal constraints.

    Science.gov (United States)

    Panchuk, D; Vickers, J N

    2006-12-01

    It is still not known what underlies successful performance in goaltending. Some studies have reported that advanced cues from the shooter's body (hip, kicking leg or support leg) are most important (Savelsbergh, G. J. P., Williams, A. M., Van der Kamp, J., & Ward, P. (2002). Visual search, anticipation and expertise in soccer goalkeepers. Journal of Sports Sciences, 20, 279-287; Savelsbergh, G. J. P., Williams, A. M., Van der Kamp, J., & Ward, P. (2005). Anticipation and visual search behaviour in expert soccer goalkeepers. Ergonomics, 48, 1686-1697; Williams, A. M., & Burwitz, L. (1993). Advanced cue utilization in soccer. In T. Reilly, J. Clarys, & A. Stibbe (Eds.), Science and football II (pp. 239-243). London, England: E&FN Spon), while others have found that the early tracking of the object prior to and during flight is most critical (Bard, C., & Fleury, M. (1981). Considering eye movement as a predictor of attainment. In: I. M. Cockerill, & W. M. MacGillvary (Eds.), Vision and Sport (pp. 28-41). Cheltenham, England: Stanley Thornes (Publishers) Ltd.). These results are similar to those found in a number of interceptive timing studies (Land, M. F., & McLeod, P. (2000). From eye movements to actions: How batsmen hit the ball. Nature Neuroscience, 3, 1340-1345; Ripoll and Fleurance, 1988; Vickers, J. N., & Adolphe, R. M. (1997). Gaze behaviour during a ball tracking and aiming skill. International Journal of Sports Vision, 4, 18-27). The coupled gaze and motor behavior of elite goaltenders were determined while responding to wrist shots taken from 5 m and 10 m on ice. The results showed that the goalies faced shots that were significantly different in phase durations due to distance (5 versus 10 m), but this was not a factor in making saves. Instead, the ability to stop the puck was dependent on the location, onset and duration of the final fixation/tracking gaze (or quiet eye) prior to initiating the saving action. The relative onset of quiet eye was

  13. Virtual social interactions in social anxiety: the impact of sex, gaze, and interpersonal distance.

    Science.gov (United States)

    Wieser, Matthias J; Pauli, Paul; Grosseibl, Miriam; Molzow, Ina; Mühlberger, Andreas

    2010-10-01

    In social interactions, interpersonal distance between interaction partners plays an important role in determining the status of the relationship. Interpersonal distance is an important nonverbal behavior, and is used to regulate personal space in a complex interplay with other nonverbal behaviors such as eye gaze. In social anxiety, studies regarding the impact of interpersonal distance on within-situation avoidance behavior are so far rare. Thus the present study aimed to scrutinize the relationship between gaze direction, sex, interpersonal distance, and social anxiety in social interactions. Social interactions were modeled in a virtual-reality (VR) environment, where 20 low and 19 high socially anxious women were confronted with approaching male and female characters, who stopped in front of the participant, either some distance away or close to them, and displayed either a direct or an averted gaze. Gaze and head movements, as well as heart rate, were measured as indices of avoidance behavior and fear reactions. High socially anxious participants showed a complex pattern of avoidance behavior: when the avatar was standing farther away, high socially anxious women avoided gaze contact with male avatars showing a direct gaze. Furthermore, they showed avoidance behavior (backward head movements) in response to male avatars showing a direct gaze, regardless of the interpersonal distance. Overall, the current study proved that VR social interactions might be a very useful tool for investigating avoidance behavior of socially anxious individuals in highly controlled situations. This might also be the first step in using VR social interactions in clinical protocols for the therapy of social anxiety disorder.

  14. Impact of cognitive and linguistic ability on gaze behavior in children with hearing impairment

    Directory of Open Access Journals (Sweden)

    Olof eSandgren

    2013-11-01

    Full Text Available In order to explore verbal-nonverbal integration, we investigated the influence of cognitive and linguistic ability on gaze behavior during spoken language conversation between children with mild-to-moderate hearing impairment (HI) and normal-hearing (NH) peers. Ten HI-NH and ten NH-NH dyads performed a referential communication task requiring description of faces. During task performance, eye movements and speech were tracked. Cox proportional hazards regression was used to model associations between performance on cognitive and linguistic tasks and the probability of gaze to the conversational partner's face. Analyses compare the listeners in each dyad (HI: n = 10, mean age = 12;6 years, SD = 2;0, mean better-ear pure-tone average 33.0 dB HL, SD = 7.8; NH: n = 10, mean age = 13;7 years, SD = 1;11). Group differences in gaze behavior, with HI gazing more to the conversational partner than NH, remained significant despite adjustment for ability on receptive grammar, expressive vocabulary, and complex working memory. Adjustment for phonological short-term memory, as measured by nonword repetition, removed group differences, revealing an interaction between group membership and nonword repetition ability. Stratified analysis showed a twofold increase in the probability of gaze-to-partner for HI with low phonological short-term memory capacity, and a decreased probability for HI with high capacity, as compared to NH peers. The results revealed differences in gaze behavior attributable to performance on a phonological short-term memory task. Participants with hearing impairment and low phonological short-term memory capacity showed a doubled probability of gaze to the conversational partner, indicative of a visual bias. The results stress the need to look beyond the hearing impairment in diagnostics and intervention. Acknowledgment of the finding requires clinical assessment of children with hearing impairment to be supported by tasks tapping

  15. Hovering by Gazing: A Novel Strategy for Implementing Saccadic Flight-based Navigation in GPS-denied Environments

    Directory of Open Access Journals (Sweden)

    Augustin Manecy

    2014-04-01

    Full Text Available Hovering flies are able to stay still in place when hovering above flowers and burst into movement towards a new object of interest (a target). This suggests that the sensorimotor control loops implemented onboard could be usefully mimicked for controlling Unmanned Aerial Vehicles (UAVs). In this study, the fundamental head-body movements occurring in free-flying insects were simulated in a sighted twin-engine robot with a mechanical decoupling inserted between its eye (or gaze) and its body. The robot based on this gaze control system achieved robust and accurate hovering performance, without an accelerometer, over a ground target despite a narrow eye field of view (±5°). The gaze stabilization strategy, validated under Processor-In-the-Loop (PIL) conditions and inspired by three biological Oculomotor Reflexes (ORs), enables the aerial robot to lock its gaze onto a fixed target regardless of its roll angle. In addition, the gaze control mechanism allows the robot to perform short-range target-to-target navigation by triggering an automatic fast "target jump" behaviour based on a saccadic eye movement.

  16. Enhancing User Experience in Next Generation Mobile Devices Using Eye Tracking as a Biometric Sensor

    DEFF Research Database (Denmark)

    Bækgaard, Per

    place means we need ways of measuring concepts like attention. The basis for this should preferably be rooted in our understanding of the anatomically based attention networks of the brain. This thesis looks at biometric markers of cognitive and affective processes; at the overview level....... It is demonstrated that it is possible to identify components of attention and cognitive load using low cost eye tracking in conventional office settings. It is also shown that aspects of surprise, similar to negativity feedback error coding, is measurable. Behavioural patterns possibly related to time on target......, cognitive load, performance or stimuli are inferred. The existence of possibly unique individual gaze patterns related to visual stimuli or to the brain’s Default Mode Network are shown. A way of synchronizing EEG and Eye Tracking is also suggested, and in addition, a few software assets (a Python interface...

  17. Adaptive gaze control for object detection

    NARCIS (Netherlands)

    De Croon, G.C.H.E.; Postma, E.O.; Van den Herik, H.J.

    2011-01-01

    We propose a novel gaze-control model for detecting objects in images. The model, named act-detect, uses the information from local image samples in order to shift its gaze towards object locations. The model constitutes two main contributions. The first contribution is that the model’s setup makes

  18. Wrist-worn pervasive gaze interaction

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Lund, Haakon; Biermann, Florian

    2016-01-01

    This paper addresses gaze interaction for smart home control, conducted from a wrist-worn unit. First we asked ten people to enact the gaze movements they would propose for e.g. opening a door or adjusting the room temperature. On basis of their suggestions we built and tested different versions ...

  19. Gaze-Based Controlling a Vehicle

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Witzner Hansen, Dan

    Research and applications of gaze interaction have mainly been conducted on a 2-dimensional surface (usually screens) for controlling a computer or controlling the movements of a robot. Emerging wearable and mobile technologies, such as Google Glass, may shift how gaze is used as an interactive...

  20. Dry Eye

    Science.gov (United States)


  1. Eye Allergies

    Science.gov (United States)


  2. A comparison of geometric- and regression-based mobile gaze-tracking

    Science.gov (United States)

    Browatzki, Björn; Bülthoff, Heinrich H.; Chuang, Lewis L.

    2014-01-01

    Video-based gaze-tracking systems are typically restricted in terms of their effective tracking space. This constraint limits the use of eyetrackers in studying mobile human behavior. Here, we compare two possible approaches for estimating the gaze of participants who are free to walk in a large space whilst looking at different regions of a large display. Geometrically, we linearly combined eye-in-head rotations and head-in-world coordinates to derive a gaze vector and its intersection with a planar display, by relying on the use of a head-mounted eyetracker and body-motion tracker. Alternatively, we employed Gaussian process regression to estimate the gaze intersection directly from the input data itself. Our evaluation of both methods indicates that a regression approach can deliver comparable results to a geometric approach. The regression approach is favored, given that it has the potential for further optimization, provides confidence bounds for its gaze estimates and offers greater flexibility in its implementation. Open-source software for the methods reported here is also provided for user implementation. PMID:24782737
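
    The geometric approach described above can be illustrated as follows: rotate the eye-in-head gaze direction into world coordinates using the tracked head pose, then intersect the resulting ray with the display plane. This is a sketch under assumed conventions (a head-to-world rotation matrix, a plane given by a point and a normal); `gaze_on_display` is a hypothetical name, not the authors' published code.

```python
import numpy as np

def gaze_on_display(head_pos, head_rot, eye_dir_head,
                    plane_point, plane_normal):
    """Intersect a world-space gaze ray with a planar display.

    head_pos:      3-vector, head position in world coordinates
    head_rot:      3x3 head-to-world rotation matrix
    eye_dir_head:  unit gaze direction in head coordinates
    plane_point:   any point on the display plane
    plane_normal:  display plane normal
    Returns the 3D intersection point, or None if there is none.
    """
    d = head_rot @ eye_dir_head               # gaze direction in world frame
    denom = d @ plane_normal
    if np.isclose(denom, 0.0):
        return None                           # gaze parallel to the display
    t = ((plane_point - head_pos) @ plane_normal) / denom
    if t < 0:
        return None                           # display is behind the viewer
    return head_pos + t * d
```

    The regression alternative replaces this whole pipeline with a learned mapping from (eye-in-head, head-in-world) features to screen coordinates, which is what allows it to absorb calibration errors that a fixed geometric model cannot.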

  3. A comparison of geometric- and regression-based mobile gaze-tracking

    Directory of Open Access Journals (Sweden)

    Björn eBrowatzki

    2014-04-01

    Full Text Available Video-based gaze-tracking systems are typically restricted in terms of their effective tracking space. This constraint limits the use of eyetrackers in studying mobile human behavior. Here, we compare two possible approaches for estimating the gaze of participants who are free to walk in a large space whilst looking at different regions of a large display. Geometrically, we linearly combined eye-in-head rotations and head-in-world coordinates to derive a gaze vector and its intersection with a planar display, by relying on the use of a head-mounted eyetracker and body-motion tracker. Alternatively, we employed Gaussian process regression to estimate the gaze intersection directly from the input data itself. Our evaluation of both methods indicates that a regression approach can deliver comparable results to a geometric approach. The regression approach is favored, given that it has the potential for further optimization, provides confidence bounds for its gaze estimates and offers greater flexibility in its implementation. Open-source software for the methods reported here is also provided for user implementation.

  4. Gaze anchoring to a pointing target is present during the entire pointing movement and is driven by a non-visual signal

    NARCIS (Netherlands)

    Neggers, SFW; Bekkering, H

    2001-01-01

    A well-coordinated pattern of eye and hand movements can be observed during goal-directed arm movements. Typically, a saccadic eye movement precedes the arm movement, and its occurrence is temporally correlated with the start of the arm movement. Furthermore, the coupling of gaze and aiming movement

  5. Design of a Binocular Pupil and Gaze Point Detection System Utilizing High Definition Images

    Directory of Open Access Journals (Sweden)

    Yilmaz Durna

    2017-05-01

    Full Text Available This study proposes a novel binocular pupil and gaze detection system utilizing a remote full high definition (full HD) camera and employing LabVIEW. LabVIEW is inherently parallel and has fewer time-consuming algorithms. Many eye-tracker applications are monocular and use low-resolution cameras due to real-time image processing difficulties. We utilized the computer's direct memory access channel for rapid data transmission and processed full HD images with LabVIEW. Full HD images make it easier to determine the center coordinates and sizes of the pupil and corneal reflection. We modified the camera so that the camera sensor passed only infrared (IR) images. Glints were taken as reference points for region of interest (ROI) selection of the eye region in the face image. A morphologic filter was applied for erosion of noise, and a weighted-average technique was used for center detection. To test system accuracy with 11 participants, we produced a visual stimulus setup to analyze each eye's movement. A nonlinear mapping function was utilized for gaze estimation. Pupil size, pupil position, glint position and gaze point coordinates were obtained with free natural head movements in our system. The system works at 2046 × 1086 resolution at 40 frames per second; the equivalent rate for 640 × 480 pixel images is estimated at 280 frames per second. Experimental results show that the average gaze detection error for the 11 participants was 0.76° for the left eye, 0.89° for the right eye and 0.83° for the mean of the two eyes.
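
    The weighted-average centre detection mentioned in the abstract can be sketched as an intensity-weighted centroid over dark (pupil) pixels in the IR image: darker pixels pull the estimate more strongly. The threshold value and the function name below are illustrative assumptions, not the authors' implementation, and the morphological noise-erosion step is omitted for brevity.

```python
import numpy as np

def pupil_center(ir_image, threshold=60):
    """Estimate the pupil centre as the intensity-weighted centroid of
    dark pixels in an IR eye image (the pupil appears dark under
    off-axis IR illumination). Returns (x, y) or None.
    """
    mask = ir_image < threshold
    if not mask.any():
        return None
    # Weight darker pixels more strongly: invert intensities inside the mask.
    weights = np.where(mask, threshold - ir_image.astype(float), 0.0)
    ys, xs = np.indices(ir_image.shape)
    total = weights.sum()
    cx = (xs * weights).sum() / total   # column coordinate
    cy = (ys * weights).sum() / total   # row coordinate
    return cx, cy
```

    The same centroid computation, applied to bright pixels instead of dark ones, would locate the corneal glint used as the ROI reference point.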

  6. An auditory brain-computer interface evoked by natural speech

    Science.gov (United States)

    Lopez-Gordo, M. A.; Fernandez, E.; Romero, S.; Pelayo, F.; Prieto, Alberto

    2012-06-01

    Brain-computer interfaces (BCIs) are mainly intended for people unable to perform any muscular movement, such as patients in a complete locked-in state. The majority of BCIs interact visually with the user, either in the form of stimulation or biofeedback. However, visual BCIs challenge their ultimate use because they require the subjects to gaze, explore and shift eye-gaze using their muscles, thus excluding patients in a complete locked-in state or under the condition of the unresponsive wakefulness syndrome. In this study, we present a novel fully auditory EEG-BCI based on a dichotic listening paradigm using human voice for stimulation. This interface has been evaluated with healthy volunteers, achieving an average information transmission rate of 1.5 bits min-1 in full-length trials and 2.7 bits min-1 using the optimal length of trials, recorded with only one channel and without formal training. This novel technique opens the door to a more natural communication with users unable to use visual BCIs, with promising results in terms of performance, usability, training and cognitive effort.
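
    Information transfer rates such as the 1.5 and 2.7 bits min-1 reported above are conventionally computed with the Wolpaw formula, which combines the number of selectable classes, the selection accuracy, and the trial duration. The sketch below shows that convention; the function name is illustrative, and the abstract does not state which ITR definition the authors used.

```python
import math

def wolpaw_itr(n_classes, accuracy, trial_seconds):
    """Wolpaw information transfer rate in bits per minute.

    n_classes:     number of possible selections per trial
    accuracy:      probability of a correct selection
    trial_seconds: duration of one selection/trial
    """
    if accuracy >= 1.0:
        bits = math.log2(n_classes)          # perfect selection
    elif accuracy <= 0.0:
        bits = 0.0                           # degenerate case, clamp to zero
    else:
        bits = (math.log2(n_classes)
                + accuracy * math.log2(accuracy)
                + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1)))
    return bits * 60.0 / trial_seconds
```

    For a two-class dichotic-listening selection, chance-level accuracy (0.5) yields 0 bits per trial, so shortening trials only helps once accuracy is meaningfully above chance.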

  7. Gaze position reveals impaired attentional shift during visual word recognition in dysfluent readers.

    Directory of Open Access Journals (Sweden)

    Jarkko Hautala

    Full Text Available Effects reflecting serial within-word processing are frequently found in pseudo- and non-word recognition tasks not only among fluent, but especially among dyslexic readers. However, the time course and locus of these serial within-word processing effects in the cognitive hierarchy (i.e., orthographic, phonological, lexical) have remained elusive. We studied whether a subject's eye movements during a lexical decision task would provide information about the temporal dynamics of serial within-word processing. We assumed that if there is serial within-word processing proceeding from left to right, items with informative beginnings would attract the gaze position and (micro-)saccadic eye movements earlier in time relative to those with informative endings. In addition, we compared responses to word, non-word, and pseudo-word items to study whether serial within-word processing stems mainly from a lexical, orthographic, or phonological processing level, respectively. Gaze positions showed earlier responses to anomalies located at pseudo- and non-word beginnings rather than endings, whereas informative word beginnings or endings did not affect gaze positions. The overall pattern of results suggests parallel letter processing of real words and rapid serial within-word processing when reading novel words. Dysfluent readers' gaze position responses toward anomalies located at pseudo- and non-word endings were delayed substantially, suggesting impairment in serial processing at an orthographic processing level.

  8. Gaze position reveals impaired attentional shift during visual word recognition in dysfluent readers.

    Science.gov (United States)

    Hautala, Jarkko; Parviainen, Tiina

    2014-01-01

    Effects reflecting serial within-word processing are frequently found in pseudo- and non-word recognition tasks not only among fluent, but especially among dyslexic readers. However, the time course and locus of these serial within-word processing effects in the cognitive hierarchy (i.e., orthographic, phonological, lexical) have remained elusive. We studied whether a subject's eye movements during a lexical decision task would provide information about the temporal dynamics of serial within-word processing. We assumed that if there is serial within-word processing proceeding from left to right, items with informative beginnings would attract the gaze position and (micro-)saccadic eye movements earlier in time relative to those with informative endings. In addition, we compared responses to word, non-word, and pseudo-word items to study whether serial within-word processing stems mainly from a lexical, orthographic, or phonological processing level, respectively. Gaze positions showed earlier responses to anomalies located at pseudo- and non-word beginnings rather than endings, whereas informative word beginnings or endings did not affect gaze positions. The overall pattern of results suggests parallel letter processing of real words and rapid serial within-word processing when reading novel words. Dysfluent readers' gaze position responses toward anomalies located at pseudo- and non-word endings were delayed substantially, suggesting impairment in serial processing at an orthographic processing level.

  9. Eyes only? Perceiving eye contact is neither sufficient nor necessary for attentional capture by face direction.

    Science.gov (United States)

    Böckler, Anne; van der Wel, Robrecht P R D; Welsh, Timothy N

    2015-09-01

    Direct eye contact and motion onset both constitute powerful cues that capture attention. Recent research suggests that (social) gaze and (non-social) motion onset influence information processing in parallel, even when combined as sudden onset direct gaze cues (i.e., faces suddenly establishing eye contact). The present study investigated the role of eye visibility for attention capture by these sudden onset face cues. To this end, face direction was manipulated (away or towards onlooker) while faces had closed eyes (eliminating visibility of eyes, Experiment 1), wore sunglasses (eliminating visible eyes, but allowing for the expectation of eyes to be open, Experiment 2), and were inverted with visible eyes (disrupting the integration of eyes and faces, Experiment 3). Participants classified targets appearing on one of four faces. Initially, two faces were oriented towards participants and two faces were oriented away from participants. Simultaneous to target presentation, one averted face became directed and one directed face became averted. Attention capture by face direction (i.e., facilitation for faces directed towards participants) was absent when eyes were closed, but present when faces wore sunglasses. Sudden onset direct faces can, hence, induce attentional capture, even when lacking eye cues. Inverted faces, by contrast, did not elicit attentional capture. Thus, when eyes cannot be integrated into a holistic face representation they are not sufficient to capture attention. Overall, the results suggest that visibility of eyes is neither necessary nor sufficient for the sudden direct face effect.

  10. Gliding and Saccadic Gaze Gesture Recognition in Real Time

    DEFF Research Database (Denmark)

    Rozado, David; San Agustin, Javier; Rodriguez, Francisco

    2012-01-01

    paradigm in the context of human-machine interaction as low-cost gaze trackers become more ubiquitous. The viability of gaze gestures as an innovative way to control a computer rests on how easily they can be assimilated by potential users and also on the ability of machine learning algorithms...... to discriminate intentional gaze gestures from typical gaze activity performed during standard interaction with electronic devices. In this work, through a set of experiments and user studies, we evaluate the performance of two different gaze gestures modalities, gliding gaze gestures and saccadic gaze gestures...
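    Discriminating intentional gaze gestures from ordinary viewing typically begins by segmenting the raw gaze stream into fixations and saccades; a common baseline for this is velocity-threshold (I-VT) classification. The sketch below is a generic illustration of that baseline, not the classifier used in the study; the units (degrees of visual angle, seconds) and the threshold value are assumptions.

```python
def classify_ivt(xs, ys, ts, velocity_threshold=100.0):
    """Velocity-threshold (I-VT) labelling of a gaze sample stream.

    xs, ys: gaze coordinates in degrees of visual angle; ts: timestamps in
    seconds; velocity_threshold: deg/s (all units and values hypothetical).
    Returns one 'saccade'/'fixation' label per inter-sample interval.
    """
    labels = []
    for i in range(1, len(ts)):
        dt = ts[i] - ts[i - 1]
        dist = ((xs[i] - xs[i - 1]) ** 2 + (ys[i] - ys[i - 1]) ** 2) ** 0.5
        speed = dist / dt if dt > 0 else 0.0
        labels.append('saccade' if speed > velocity_threshold else 'fixation')
    return labels
```

    Runs of 'saccade' intervals in a prescribed direction sequence could then be matched against gesture templates, which is where the machine-learning discrimination mentioned above would take over.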

  11. Gaze Aversion to Stuttered Speech: A Pilot Study Investigating Differential Visual Attention to Stuttered and Fluent Speech

    Science.gov (United States)

    Bowers, Andrew L.; Crawcour, Stephen C.; Saltuklaroglu, Tim; Kalinowski, Joseph

    2010-01-01

    Background: People who stutter are often acutely aware that their speech disruptions, halted communication, and aberrant struggle behaviours evoke reactions in communication partners. Considering that eye gaze behaviours have emotional, cognitive, and pragmatic overtones for communicative interactions and that previous studies have indicated…

  12. Coordination of Gaze and Speech in Communication between Children with Hearing Impairment and Normal-Hearing Peers

    Science.gov (United States)

    Sandgren, Olof; Andersson, Richard; van de Weijer, Joost; Hansson, Kristina; Sahlén, Birgitta

    2014-01-01

    Purpose: To investigate gaze behavior during communication between children with hearing impairment (HI) and normal-hearing (NH) peers. Method: Ten HI-NH and 10 NH-NH dyads performed a referential communication task requiring description of faces. During task performance, eye movements and speech were tracked. Using verbal event (questions,…

  13. Gazes

    DEFF Research Database (Denmark)

    Khawaja, Iram

    2015-01-01

    of passing. The analysis of the young Muslim men and women’s narratives points towards the particular embodied, intersectional and local possibilities for becoming and being visible as a legitimate Muslim subject in a society fraught with stereotypical and often negative images and discourses on Islam...

  14. Evaluation of a remote webcam-based eye tracker

    DEFF Research Database (Denmark)

    Skovsgaard, Henrik; Agustin, Javier San; Johansen, Sune Alstrup;

    2011-01-01

    In this paper we assess the performance of an open-source gaze tracker in a remote (i.e. table-mounted) setup, and compare it with two other commercial eye trackers. An experiment with 5 subjects showed the open-source eye tracker to have a significantly higher level of accuracy than one of the c...
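    Tracker accuracy of the kind compared above is conventionally summarized as the mean angular error between gaze estimates and known target positions. A minimal sketch, assuming both sets of points are already expressed in degrees of visual angle (the conversion from pixels depends on screen geometry and viewing distance, omitted here):

```python
import math

def mean_angular_error(gaze_points, targets):
    """Mean offset between gaze estimates and true target positions.

    Both inputs are lists of (x, y) pairs assumed to be expressed in degrees
    of visual angle, so the result is a mean angular error in degrees.
    """
    errors = [math.hypot(gx - tx, gy - ty)
              for (gx, gy), (tx, ty) in zip(gaze_points, targets)]
    return sum(errors) / len(errors)
```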

  15. Eye-based head gestures for interaction in the car

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Witzner Hansen, Dan

    2013-01-01

    In this paper we suggest using a new method for head gesture recognition in the automotive context. This method involves using only the eye tracker for measuring the head movements through the eye movements when the gaze point is fixed. It allows for identifying a wide range of head gestures...

  16. Wrist-worn pervasive gaze interaction

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Lund, Haakon; Biermann, Florian;

    2016-01-01

    This paper addresses gaze interaction for smart home control, conducted from a wrist-worn unit. First we asked ten people to enact the gaze movements they would propose for e.g. opening a door or adjusting the room temperature. On basis of their suggestions we built and tested different versions...... selection. Their subjective evaluations were positive with regard to the speed of the interaction. We conclude that gaze gesture input seems feasible for fast and brief remote control of smart home technology provided that robustness of tracking is improved....

  17. The role of observers' gaze behaviour when watching object manipulation tasks: predicting and evaluating the consequences of action.

    Science.gov (United States)

    Flanagan, J Randall; Rotman, Gerben; Reichelt, Andreas F; Johansson, Roland S

    2013-10-19

    When watching an actor manipulate objects, observers, like the actor, naturally direct their gaze to each object as the hand approaches and typically maintain gaze on the object until the hand departs. Here, we probed the function of observers' eye movements, focusing on two possibilities: (i) that observers' gaze behaviour arises from processes involved in the prediction of the target object of the actor's reaching movement and (ii) that this gaze behaviour supports the evaluation of mechanical events that arise from interactions between the actor's hand and objects. Observers watched an actor reach for and lift one of two presented objects. The observers' task was either to predict the target object or judge its weight. Proactive gaze behaviour, similar to that seen in self-guided action-observation, was seen in the weight judgement task, which requires evaluating mechanical events associated with lifting, but not in the target prediction task. We submit that an important function of gaze behaviour in self-guided action observation is the evaluation of mechanical events associated with interactions between the hand and object. By comparing predicted and actual mechanical events, observers, like actors, can gain knowledge about the world, including information about objects they may subsequently act upon.

  18. Gaze aversion to stuttered speech: a pilot study investigating differential visual attention to stuttered and fluent speech.

    Science.gov (United States)

    Bowers, Andrew L; Crawcour, Stephen C; Saltuklaroglu, Tim; Kalinowski, Joseph

    2010-01-01

    People who stutter are often acutely aware that their speech disruptions, halted communication, and aberrant struggle behaviours evoke reactions in communication partners. Considering that eye gaze behaviours have emotional, cognitive, and pragmatic overtones for communicative interactions and that previous studies have indicated increased physiological arousal in listeners in response to stuttering, it was hypothesized that stuttered speech incurs increased gaze aversion relative to fluent speech. The possible importance in uncovering these visible reactions to stuttering is that they may contribute to the social penalty associated with stuttering. To compare the eye gaze responses of college students while observing and listening to fluent and severely stuttered speech samples produced by the same adult male who stutters. Twelve normally fluent adult college students watched and listened to three 20-second audio-video clips of the face of an adult male stuttering and three 20-second clips of the same male producing fluent speech. Their pupillary movements were recorded with an eye-tracking device and mapped to specific regions of interest (that is, the eyes, the nose and the mouth of the speaker). Participants spent 39% more time fixating on the speaker's eyes while witnessing fluent speech compared with stuttered speech. In contrast, participants averted their direct eye gaze more often and spent 45% more time fixating on the speaker's nose when witnessing stuttered speech compared with fluent speech. These relative time differences occurred as a function of the number of fixations in each area of interest. Thus, participants averted their gaze from the eyes of the speaker more frequently during the stuttered stimuli than the fluent stimuli. This laboratory study provides pilot data suggesting that gaze aversion is a salient response to the breakdown in communication that occurs during stuttering. 
This response may occur as a result of emotional, cognitive, and…
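    Relative fixation-time comparisons of the kind reported above (more time on the eyes during fluent speech, more on the nose during stuttered speech) reduce to computing each region of interest's share of total fixation time. A minimal sketch; the AOI names, box geometry, and fixation data below are hypothetical:

```python
def aoi_dwell_shares(fixations, aois):
    """Share of total fixation time spent in each area of interest (AOI).

    fixations: list of (x, y, duration) tuples; aois: dict mapping an AOI
    name to an axis-aligned box (x0, y0, x1, y1). Fixations outside every
    AOI still count toward the total.
    """
    totals = {name: 0.0 for name in aois}
    grand_total = 0.0
    for x, y, duration in fixations:
        grand_total += duration
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += duration
                break  # assume non-overlapping AOIs
    return {name: (t / grand_total if grand_total else 0.0)
            for name, t in totals.items()}
```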

  19. Eye Contact Judgment Is Influenced by Perceivers’ Social Anxiety But Not by Their Affective State

    Science.gov (United States)

    Chen, Tingji; Nummenmaa, Lauri; Hietanen, Jari K.

    2017-01-01

    Fast and accurate judgment of whether another person is making eye contact or not is crucial for our social interaction. As affective states have been shown to influence social perceptions and judgments, we investigated the influence of observers’ own affective states and trait anxiety on their eye contact judgments. In two experiments, participants were required to judge whether animated faces (Experiment 1) and real faces (Experiment 2) with varying gaze angles were looking at them or not. Participants performed the task in pleasant, neutral, and unpleasant odor conditions. The results from two experiments showed that eye contact judgments were not modulated by observers’ affective state, yet participants with higher levels of social anxiety accepted a wider range of gaze deviations from the direct gaze as eye contact. We conclude that gaze direction judgments depend on individual differences in affective predispositions, yet they are not amenable to situational affective influences.

  20. Decreased visual attention further from the perceived direction of gaze for equidistant retinal targets

    DEFF Research Database (Denmark)

    Balslev, Daniela; Gowen, Emma; Miall, R Chris

    2011-01-01

    The oculomotor and spatial attention systems are interconnected. Whereas a link between motor commands and spatial shifts in visual attention is demonstrated, it is still unknown whether the recently discovered proprioceptive signal in somatosensory cortex impacts on visual attention, too...... stimulation (rTMS), which decreases cortical processing of eye muscle proprioceptive inflow and produces an underestimation of the rotation of the right eye. Participants detected near-threshold visual targets presented in the left or right visual hemifield at equal distance from fixation. We have previously...... it in the right. This effect depended on the direction of rotation of the right eye. When the right eye was rotated rightward and TMS, we assume, shifted perceived gaze direction in opposite direction, leftward, visual accuracy decreased now in the right hemifield. We suggest that the proprioceptive eye position...

  1. Predicting others' actions via grasp and gaze: evidence for distinct brain networks.

    Science.gov (United States)

    Ramsey, Richard; Cross, Emily S; Hamilton, Antonia F de C

    2012-07-01

    During social interactions, how do we predict what other people are going to do next? One view is that we use our own motor experience to simulate and predict other people's actions. For example, when we see Sally look at a coffee cup or grasp a hammer, our own motor system provides a signal that anticipates her next action. Previous research has typically examined such gaze and grasp-based simulation processes separately, and it is not known whether similar cognitive and brain systems underpin the perception of object-directed gaze and grasp. Here we use functional magnetic resonance imaging to examine to what extent gaze- and grasp-perception rely on common or distinct brain networks. Using a 'peeping window' protocol, we controlled what an observed actor could see and grasp. The actor could peep through one window to see if an object was present and reach through a different window to grasp the object. However, the actor could not peep and grasp at the same time. We compared gaze and grasp conditions where an object was present with matched conditions where the object was absent. When participants observed another person gaze at an object, left anterior inferior parietal lobule (aIPL) and parietal operculum showed a greater response than when the object was absent. In contrast, when participants observed the actor grasp an object, premotor, posterior parietal, fusiform and middle occipital brain regions showed a greater response than when the object was absent. These results point towards a division in the neural substrates for different types of motor simulation. We suggest that left aIPL and parietal operculum are involved in a predictive process that signals a future hand interaction with an object based on another person's eye gaze, whereas a broader set of brain areas, including parts of the action observation network, are engaged during observation of an ongoing object-directed hand action.

  2. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.

    Science.gov (United States)

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.

  3. Latvijas Gaze buyback likely to flop

    Index Scriptorium Estoniae

    2003-01-01

    Itera, which owns a quarter of the Latvian gas company Latvijas Gaze, intends shortly to complete the sale of nine percent of the Latvian company's shares to Gazprom. Gazprom currently controls 25 percent of Latvijas Gaze's shares, Ruhrgas 28.66 percent, and E.ON Energie AG 18.06 percent.

  5. Looking ahead: anticipatory gaze and motor ability in infancy.

    Directory of Open Access Journals (Sweden)

    Ettore Ambrosini

    Full Text Available The present study asks when infants are able to selectively anticipate the goals of observed actions, and how this ability relates to infants' own abilities to produce those specific actions. Using eye-tracking technology to measure on-line anticipation, 6-, 8- and 10-month-old infants and a control group of adults were tested while observing an adult reach with a whole hand grasp, a precision grasp or a closed fist towards one of two different sized objects. The same infants were also given a comparable action production task. All infants showed proactive gaze to the whole hand grasps, with increased degrees of proactivity in the older groups. Gaze proactivity to the precision grasps, however, was present from 8 months of age. Moreover, the infants' ability in performing precision grasping strongly predicted their ability in using the actor's hand shape cues to differentially anticipate the goal of the observed action, even when age was partialled out. The results are discussed in terms of the specificity of action anticipation, and the fine-grained relationship between action production and action perception.

  6. Following student gaze patterns in physical science lectures

    Science.gov (United States)

    Rosengrant, David; Hearrington, Doug; Alvarado, Kerriann; Keeble, Danielle

    2012-02-01

    This study investigates the gaze patterns of undergraduate college students attending a lecture-based physical science class to better understand the relationships between gaze and focus patterns and student attention during class. The investigators used a new eye-tracking product, Tobii Glasses, which eliminates the need for subjects to focus on a computer screen or carry a backpack-sized recording device, allowing a broader range of research questions to be studied. The investigation covers what students focus on in the classroom (i.e. demonstrations, instructor, notes, board work, and presentations) during a normal lecture, as well as what diverts attention away from the task and what keeps a subject on task. We report findings from 8 subjects during physical science lectures designed for future elementary school teachers. We found that students tended not to focus on the instructor for most of the lecture but rather on the information, particularly new information presented on PowerPoint slides. Finally, we found that location in the classroom also affected students' attention because of additional distractors.

  7. Infant Eye-Tracking in the Context of Goal-Directed Actions

    Science.gov (United States)

    Corbetta, Daniela; Guan, Yu; Williams, Joshua L.

    2012-01-01

    This paper presents two methods that we applied to our research to record infant gaze in the context of goal-oriented actions using different eye-tracking devices: head-mounted and remote eye-tracking. For each type of eye-tracking system, we discuss their advantages and disadvantages, describe the particular experimental setups we used to study…

  8. Eye-Tracking in the Study of Visual Expertise: Methodology and Approaches in Medicine

    Science.gov (United States)

    Fox, Sharon E.; Faulkner-Jones, Beverly E.

    2017-01-01

    Eye-tracking is the measurement of eye motions and point of gaze of a viewer. Advances in this technology have been essential to our understanding of many forms of visual learning, including the development of visual expertise. In recent years, these studies have been extended to the medical professions, where eye-tracking technology has helped us…

  9. The Gaze and Being Gazed: From Madama Butterfly to M. Butterfly

    Institute of Scientific and Technical Information of China (English)

    李琼华

    2012-01-01

      [Abstract] Through the analysis of both Madama Butterfly and M. Butterfly, this paper explores the gaze of the Occidental upon the Oriental, especially Oriental women. It analyzes the change in the Occidental gaze and concludes that the misconception of the Oriental and its culture may turn Orientalism into a great mockery.

  10. Gaze inspired subtitle position evaluation for MOOCs videos

    Science.gov (United States)

    Chen, Hongli; Yan, Mengzhen; Liu, Sijiang; Jiang, Bo

    2017-06-01

    Online educational resources, such as MOOCs, are becoming increasingly popular, especially in higher education. The most important media type for MOOCs is the course video. Besides the traditional bottom-position subtitles accompanying videos, in recent years researchers have tried to develop more advanced algorithms that generate speaker-following subtitles. However, the effectiveness of such subtitles is still unclear. In this paper, we investigate the relationship between subtitle position and the learning effect after watching the video on tablet devices. Drawing on image-based eye tracking, this work combines objective gaze-estimation statistics with a subjective user study to reach a convincing conclusion: speaker-following subtitles are more suitable for online educational videos.

  11. Demonstration of a semi-autonomous hybrid brain-machine interface using human intracranial EEG, eye tracking, and computer vision to control a robotic upper limb prosthetic.

    Science.gov (United States)

    McMullen, David P; Hotson, Guy; Katyal, Kapil D; Wester, Brock A; Fifer, Matthew S; McGee, Timothy G; Harris, Andrew; Johannes, Matthew S; Vogelstein, R Jacob; Ravitz, Alan D; Anderson, William S; Thakor, Nitish V; Crone, Nathan E

    2014-07-01

    To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < …), reflecting in part system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs.
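    Balanced accuracy, as reported above, averages sensitivity and specificity so that a detector is not rewarded merely because one class (e.g. long stretches without movement attempts) dominates the recording. A minimal sketch with hypothetical confusion-matrix counts:

```python
def balanced_accuracy(tp, fn, tn, fp):
    """Mean of sensitivity and specificity from confusion-matrix counts.

    Unlike plain accuracy, this is not inflated when the negative class
    vastly outnumbers the positive class.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2.0
```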

  12. Keeping our eyes on the eyes: the case of Arcimboldo.

    Science.gov (United States)

    Bubic, Andreja; Susac, Ana; Palmovic, Marijan

    2014-01-01

    While contemporaries often viewed his reversible composite heads as scherzi (jokes) and modern art connoisseurs view them as creative masterpieces, Giuseppe Arcimboldo's ingenious paintings served as inspiring stimuli for the present eye-tracking experiment. One group of participants viewed three chosen paintings in an upright orientation, and another in an upside-down orientation. We compared how participants viewed three selected areas of interest (AOIs) within each painting when these could, and could not, be identified as a face or a distinct facial element (eyes and mouth). The obtained results indicate that participants fixated the parts of the painting representing faces more in the upright than in the inverted orientation. Furthermore, in the upright orientation participants focused more on the upper AOIs (eyes) than on the lower AOIs (mouth). This was not the case for the inverted orientation of two of the paintings. In conclusion, the face inversion effect occurs even in this artistic context, and the gaze often goes where the eyes are.

  13. [Eye contact effects: A therapeutic issue?]

    Science.gov (United States)

    Baltazar, M; Conty, L

    2016-12-01

    The perception of a direct gaze - that is, of another individual's gaze directed at the observer that leads to eye contact - is known to influence a wide range of cognitive processes and behaviors. We stress that these effects mainly reflect positive impacts on human cognition and may thus be used as relevant tools for therapeutic purposes. In this review, we aim (1) to provide an exhaustive review of eye contact effects while discussing the limits of the dominant models used to explain these effects, (2) to illustrate the therapeutic potential of eye contact by targeting those pathologies that show both preserved gaze processing and deficits in one or several functions that are targeted by the eye contact effects, and (3) to propose concrete ways in which eye contact could be employed as a therapeutic tool. (1) We regroup the variety of eye contact effects into four categories, including memory effects, activation of prosocial behavior, positive appraisals of self and others, and the enhancement of self-awareness. We emphasize that the models proposed to account for these effects have poor predictive value and that further description of these effects is needed. (2) We then emphasize that people with pathologies that affect memory, social behavior, self and/or other appraisal, and self-awareness could benefit from eye contact effects. We focus on depression, autism and Alzheimer's disease to illustrate our proposal. To our knowledge, no anomaly of eye contact has been reported in depression. Patients suffering from Alzheimer's disease, at the early and moderate stages, have been shown to maintain a normal amount of eye contact with their interlocutor. We acknowledge that it is controversial whether gaze processing is preserved or altered in autism. On the first view, individuals are thought to avoid gazing at another's eyes, while on the second, they are considered unable to process the gaze of others. We adopt the first stance…

  14. Brief Report: Broad Autism Phenotype in Adults Is Associated with Performance on an Eye-Tracking Measure of Joint Attention

    Science.gov (United States)

    Swanson, Meghan R.; Siller, Michael

    2014-01-01

    The current study takes advantage of modern eye-tracking technology and evaluates how individuals allocate their attention when viewing social videos that display an adult model who is gazing at a series of targets that appear and disappear in the four corners of the screen (congruent condition), or gazing elsewhere (incongruent condition). Data…

  15. Eye redness

    Science.gov (United States)

    Bloodshot eyes; Red eyes; Scleral injection; Conjunctival injection ... There are many causes of a red eye or eyes. Some are medical emergencies. Others are a cause for concern, but not an emergency. Many are nothing to worry about. Eye ...

  16. Gaze-Contingent Motor Channelling, haptic constraints and associated cognitive demand for robotic MIS.

    Science.gov (United States)

    Mylonas, George P; Kwok, Ka-Wai; James, David R C; Leff, Daniel; Orihuela-Espina, Felipe; Darzi, Ara; Yang, Guang-Zhong

    2012-04-01

    The success of MIS is coupled with an increasing demand on surgeons' manual dexterity and visuomotor coordination due to the complexity of instrument manipulations. The use of master-slave surgical robots has avoided many of the drawbacks of MIS, but at the same time, has increased the physical separation between the surgeon and the patient. Tissue deformation combined with restricted workspace and visibility of an already cluttered environment can raise critical issues related to surgical precision and safety. Reconnecting the essential visuomotor sensory feedback is important for the safe practice of robot-assisted MIS procedures. This paper introduces a novel gaze-contingent framework for real-time haptic feedback and virtual fixtures by transforming visual sensory information into physical constraints that can interact with the motor sensory channel. We demonstrate how motor tracking of deforming tissue can be made more effective and accurate through the concept of Gaze-Contingent Motor Channelling. The method is also extended to 3D by introducing the concept of Gaze-Contingent Haptic Constraints where eye gaze is used to dynamically prescribe and update safety boundaries during robot-assisted MIS without prior knowledge of the soft-tissue morphology. Initial validation results on both simulated and robot assisted phantom procedures demonstrate the potential clinical value of the technique. In order to assess the associated cognitive demand of the proposed concepts, functional Near-Infrared Spectroscopy is used and preliminary results are discussed. Copyright © 2010 Elsevier B.V. All rights reserved.
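    The core idea of Gaze-Contingent Motor Channelling, using the surgeon's fixation point as an attractor for the instrument tip, can be caricatured as a proportional control law. The sketch below is a hypothetical illustration of that idea only; the gain and the linear spring form are assumptions, not the haptic controller described in the paper.

```python
def channelling_force(tool_pos, gaze_pos, stiffness=0.5):
    """Spring-like force pulling the instrument tip toward the fixation point.

    tool_pos and gaze_pos are coordinate tuples (2-D or 3-D); stiffness is a
    hypothetical gain. Returns a force vector proportional to the offset, so
    the force vanishes when the tool is on the fixation point.
    """
    return tuple(stiffness * (g - t) for t, g in zip(tool_pos, gaze_pos))
```

    In a real system this force would be rendered through the haptic master and the gaze-defined boundary updated continuously as the eye tracker reports new fixations.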

  17. Stare in the crowd: frontal face guides overt attention independently of its gaze direction.

    Science.gov (United States)

    Aya, Shirama

    2012-01-01

    Whether or not a stare in the midst of many faces can guide visual attention is a controversial issue. Two experiments are reported that investigate the hypothesis that visual attention is guided toward a frontal face in the search for a stare among faces with varied head angles. The participants were required to search for a face with a direct gaze in a context where the target could be at any of various head angles, and the target's head angle was unpredictable on any given trial. The search performance was better for a frontal-face target than for deviated-face targets. Furthermore, eye-movement analyses revealed that a frontal-face stimulus tended to be initially fixated prior to deviated-face stimuli, and many of the initially fixated frontal-face stimuli had an averted gaze. The findings suggest that a frontal face guides overt attention independently of its gaze direction in the search for a stare in a crowd. The validity of prioritising a frontal face in order to find a direct gaze among faces and the characteristics of a human-face detection system are discussed.

  18. Wild robins (Petroica longipes) respond to human gaze.

    Science.gov (United States)

    Garland, Alexis; Low, Jason; Armstrong, Nicola; Burns, Kevin C

    2014-09-01

    Gaze following and awareness of attentional cues are hallmarks of human and non-human social intelligence. Here, we show that the North Island robin (Petroica longipes), a food-hoarding songbird endemic to New Zealand, responds to human eyes. Robins were presented with six different conditions, in which two human experimenters altered the orientation or visibility of their body, head or eyes in relation to mealworm prey. One experimenter had visual access to the prey, and the second experimenter did not. Robins were then given the opportunity to 'steal' one of two mealworms presented by each experimenter. Robins responded by preferentially choosing the mealworm in front of the experimenter who could not see, in all conditions but one. Robins failed to discriminate between experimenters who were facing the mealworm and those who had their head turned 90° to the side. This may suggest that robins do not make decisions using the same eye visibility cues that primates and corvids evince, whether for ecological, experiential or evolutionary reasons.

  19. Smaller is better: drift in gaze measurements due to pupil dynamics.

    Science.gov (United States)

    Drewes, Jan; Zhu, Weina; Hu, Yingzhou; Hu, Xintian

    2014-01-01

    Camera-based eye trackers are the mainstay of eye movement research and countless practical applications of eye tracking. Recently, a significant impact of changes in pupil size on gaze position as measured by camera-based eye trackers has been reported. In an attempt to improve the understanding of the magnitude and population-wise distribution of the pupil-size dependent shift in reported gaze position, we present the first collection of binocular pupil drift measurements recorded from 39 subjects. The pupil-size dependent shift varied greatly between subjects (from 0.3 to 5.2 deg of deviation, mean 2.6 deg), but also between the eyes of individual subjects (0.1 to 3.0 deg difference, mean difference 1.0 deg). We observed a wide range of drift direction, mostly downward and nasal. We demonstrate two methods to partially compensate the pupil-based shift using separate calibrations in pupil-constricted and pupil-dilated conditions, and evaluate an improved method of compensation based on individual look-up-tables, achieving up to 74% of compensation.
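    The compensation methods described above, based on separate calibrations in pupil-constricted and pupil-dilated conditions, can be sketched as a linear interpolation of gaze offsets by pupil size. This is an illustrative simplification, not the paper's method; the individual look-up-table approach reported there is finer-grained, and all names here are assumptions:

```python
def pupil_drift_correction(pupil_mm, cal_constricted, cal_dilated):
    """Interpolate a gaze-offset correction from the current pupil diameter.

    cal_constricted / cal_dilated: (pupil_mm, (dx_deg, dy_deg)) pairs
    measured during the two separate calibrations. Returns the (dx, dy)
    offset to subtract from the tracker's reported gaze position.
    """
    p0, (x0, y0) = cal_constricted
    p1, (x1, y1) = cal_dilated
    # Linear interpolation in pupil size, clamped to the calibrated range.
    t = (pupil_mm - p0) / (p1 - p0)
    t = max(0.0, min(1.0, t))
    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

def corrected_gaze(raw_xy, pupil_mm, cal_constricted, cal_dilated):
    """Apply the pupil-size-dependent correction to a raw gaze sample."""
    dx, dy = pupil_drift_correction(pupil_mm, cal_constricted, cal_dilated)
    return (raw_xy[0] - dx, raw_xy[1] - dy)
```

A per-subject (indeed per-eye) fit is essential here, given the large between-subject and between-eye differences the study reports.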

  20. Virtual friend or threat? The effects of facial expression and gaze interaction on psychophysiological responses and emotional experience.

    Science.gov (United States)

    Schrammel, Franziska; Pannasch, Sebastian; Graupner, Sven-Thomas; Mojzisch, Andreas; Velichkovsky, Boris M

    2009-09-01

    The present study aimed to investigate the impact of facial expression, gaze interaction, and gender on attention allocation, physiological arousal, facial muscle responses, and emotional experience in simulated social interactions. Participants viewed animated virtual characters varying in terms of gender, gaze interaction, and facial expression. We recorded facial EMG, fixation duration, pupil size, and subjective experience. Subject's rapid facial reactions (RFRs) differentiated more clearly between the character's happy and angry expression in the condition of mutual eye-to-eye contact. This finding provides evidence for the idea that RFRs are not simply motor responses, but part of an emotional reaction. Eye movement data showed that fixations were longer in response to both angry and neutral faces than to happy faces, thereby suggesting that attention is preferentially allocated to cues indicating potential threat during social interaction.

  1. Patterns of Visual Attention and Gaze to Human and Animal Faces in Children with Autism Spectrum Disorders

    Directory of Open Access Journals (Sweden)

    Servet Bayram

    2012-12-01

    Full Text Available The aim of the study is to investigate the patterns of visual attention and gaze to familiar female/male faces and animal faces in high-functioning children with Autism Spectrum Disorders (ASD). Seven children with ASD and ten (10) typically developing (TD) children participated in this study. To collect data, an eye-tracking system was used while participants looked at visual stimuli. According to the results of the study, high-functioning children with ASD have a deficiency in extracting relevant social information from the eyes, even though the faces are familiar to them, but they use information from the eye region in face exploration more than from the other parts of the faces. In addition, children with ASD seem to present gaze patterns similar to those of TD children during face exploration.

  2. High-Speed Noninvasive Eye-Tracking System

    Science.gov (United States)

    Talukder, Ashit; LaBaw, Clayton; Michael-Morookian, John; Monacos, Steve; Serviss, Orin

    2007-01-01

    The figure schematically depicts a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing of the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Relative to the prior commercial systems, the present system operates at much higher speed and thereby offers enhanced capability for applications that involve human-computer interactions, including typing and computer command and control by handicapped individuals, and eye-based diagnosis of physiological disorders that affect gaze responses.
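    Step (4), determining gaze from the centroids of the pupil and cornea, is commonly implemented as a pupil-center/corneal-reflection (PCR) vector mapped to screen coordinates by a calibration polynomial. A minimal sketch of that general technique, not this system's actual algorithm; the first-order mapping and all coefficient names are illustrative:

```python
def pcr_vector(pupil_centroid, glint_centroid):
    """Pupil-center minus corneal-glint vector; this difference is
    largely invariant to small head translations."""
    return (pupil_centroid[0] - glint_centroid[0],
            pupil_centroid[1] - glint_centroid[1])

def gaze_point(vec, coeffs_x, coeffs_y):
    """Map a PCR vector to screen coordinates using a first-order
    polynomial fitted during calibration: x = a0 + a1*vx + a2*vy."""
    vx, vy = vec
    a0, a1, a2 = coeffs_x
    b0, b1, b2 = coeffs_y
    return (a0 + a1 * vx + a2 * vy, b0 + b1 * vx + b2 * vy)
```

In practice the calibration fit is often second-order, with coefficients estimated by least squares from fixations on a grid of known targets.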

  3. Head movements evoked in alert rhesus monkey by vestibular prosthesis stimulation: implications for postural and gaze stabilization.

    Directory of Open Access Journals (Sweden)

    Diana E Mitchell

    Full Text Available The vestibular system detects motion of the head in space and in turn generates reflexes that are vital for our daily activities. The eye movements produced by the vestibulo-ocular reflex (VOR) play an essential role in stabilizing the visual axis (gaze), while vestibulo-spinal reflexes ensure the maintenance of head and body posture. The neuronal pathways from the vestibular periphery to the cervical spinal cord potentially serve a dual role, since they function to stabilize the head relative to inertial space and could thus contribute to gaze (eye-in-head + head-in-space) and posture stabilization. To date, however, the functional significance of vestibular-neck pathways in alert primates remains a matter of debate. Here we used a vestibular prosthesis to (1) quantify vestibularly-driven head movements in primates, and (2) assess whether these evoked head movements make a significant contribution to gaze as well as postural stabilization. We stimulated electrodes implanted in the horizontal semicircular canal of alert rhesus monkeys, and measured the head and eye movements evoked during a 100 ms time period for which the contribution of longer-latency voluntary inputs to the neck would be minimal. Our results show that prosthetic stimulation evoked significant head movements with latencies consistent with known vestibulo-spinal pathways. Furthermore, while the evoked head movements were substantially smaller than the coincidently evoked eye movements, they made a significant contribution to gaze stabilization, complementing the VOR to ensure that the appropriate gaze response is achieved. We speculate that analogous compensatory head movements will be evoked when implanted prosthetic devices are transitioned to human patients.

  4. Eye Movements Affect Postural Control in Young and Older Females.

    Science.gov (United States)

    Thomas, Neil M; Bampouras, Theodoros M; Donovan, Tim; Dewhurst, Susan

    2016-01-01

    Visual information is used for postural stabilization in humans. However, little is known about how eye movements prevalent in everyday life interact with the postural control system in older individuals. Therefore, the present study assessed the effects of stationary gaze fixations, smooth pursuits, and saccadic eye movements, with combinations of absent, fixed and oscillating large-field visual backgrounds to generate different forms of retinal flow, on postural control in healthy young and older females. Participants were presented with computer generated visual stimuli, whilst postural sway and gaze fixations were simultaneously assessed with a force platform and eye tracking equipment, respectively. The results showed that fixed backgrounds and stationary gaze fixations attenuated postural sway. In contrast, oscillating backgrounds and smooth pursuits increased postural sway. There were no differences regarding saccades. There were also no differences in postural sway or gaze errors between age groups in any visual condition. The stabilizing effect of the fixed visual stimuli show how retinal flow and extraocular factors guide postural adjustments. The destabilizing effect of oscillating visual backgrounds and smooth pursuits may be related to more challenging conditions for determining body shifts from retinal flow, and more complex extraocular signals, respectively. Because the older participants matched the young group's performance in all conditions, decreases of posture and gaze control during stance may not be a direct consequence of healthy aging. Further research examining extraocular and retinal mechanisms of balance control and the effects of eye movements, during locomotion, is needed to better inform fall prevention interventions.

  5. Virtual button interface

    Science.gov (United States)

    Jones, Jake S.

    1999-01-01

    An apparatus and method of issuing commands to a computer by a user interfacing with a virtual reality environment. To issue a command, the user directs gaze at a virtual button within the virtual reality environment, causing a perceptible change in the virtual button, which then sends a command corresponding to the virtual button to the computer, optionally after a confirming action is performed by the user, such as depressing a thumb switch.
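    The select-by-gaze interaction described in this patent abstract is commonly realized as dwell-time selection: the button gives perceptible feedback once gaze enters it and fires only after gaze has rested on it for a threshold duration. A minimal sketch of that pattern; the class name, the 0.8 s threshold, and the rectangle hit test are illustrative assumptions, and the optional confirming action (e.g. a thumb switch) is omitted:

```python
class VirtualButton:
    def __init__(self, rect, command, dwell_s=0.8):
        self.rect = rect          # (x, y, width, height)
        self.command = command
        self.dwell_s = dwell_s
        self.enter_t = None       # time at which gaze entered the button
        self.highlighted = False  # the "perceptible change" shown to the user

    def contains(self, gx, gy):
        x, y, w, h = self.rect
        return x <= gx < x + w and y <= gy < y + h

    def update(self, gx, gy, t):
        """Feed one gaze sample; return the command once dwell completes."""
        if not self.contains(gx, gy):
            self.enter_t = None
            self.highlighted = False
            return None
        if self.enter_t is None:
            self.enter_t = t
        self.highlighted = True
        if t - self.enter_t >= self.dwell_s:
            self.enter_t = None   # re-arm so the button can fire again
            return self.command
        return None
```

The dwell threshold trades speed against the "Midas touch" problem: too short and every glance issues a command, too long and the interface feels sluggish.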

  6. Eye Contact Is Crucial for Referential Communication in Pet Dogs

    Science.gov (United States)

    Savalli, Carine; Resende, Briseida; Gaunet, Florence

    2016-01-01

    Dogs discriminate human direction of attention cues, such as body, gaze, head and eye orientation, in several circumstances. Eye contact particularly seems to provide information on human readiness to communicate; when there is such an ostensive cue, dogs tend to follow human communicative gestures more often. However, little is known about how such cues influence the production of communicative signals (e.g. gaze alternation and sustained gaze) in dogs. In the current study, in order to get unreachable food, dogs needed to communicate with their owners in several conditions that differed according to the direction of the owners' visual cues, namely gaze, head, eyes, and availability to make eye contact. Results provided evidence that pet dogs did not rely on details of their owners' direction of visual attention. Instead, they relied on the whole combination of visual cues and especially on the owners' availability to make eye contact. Dogs increased visual communicative behaviors when they established eye contact with their owners, a different strategy compared to apes and baboons, which intensify vocalizations and gestures when the human is not visually attending. The difference in strategy is possibly due to their distinct status: domesticated vs. wild. Results are discussed taking into account the ecological relevance of the task, since pet dogs live in the human environment and face similar situations on a daily basis during their lives. PMID:27626933

  7. Gaze patterns reveal how texts are remembered: A mental model of what was described is favored over the text itself

    DEFF Research Database (Denmark)

    Traub, Franziska; Johansson, Roger; Holmqvist, Kenneth

    Several studies have reported that spontaneous eye movements occur when visuospatial information is recalled from memory. Such gazes closely reflect the content and spatial relations from the original scene layout (e.g., Johansson et al., 2012). However, when someone has originally read a scene description, the memory of the physical layout of the text itself might compete with the memory of the spatial arrangement of the described scene.
    The present study was designed to address this fundamental issue by having participants read scene descriptions that were manipulated to be either congruent … Recollection was performed orally while gazing at a blank screen.
    Results demonstrate that participants' gaze patterns during recall more closely reflect the spatial layout of the scene than the physical locations of the text. We conclude that participants formed a mental model that represents the content …

  8. A gaze independent hybrid-BCI based on visual spatial attention

    Science.gov (United States)

    Egan, John M.; Loughnane, Gerard M.; Fletcher, Helen; Meade, Emma; Lalor, Edmund C.

    2017-08-01

    Objective. Brain-computer interfaces (BCI) use measures of brain activity to convey a user’s intent without the need for muscle movement. Hybrid designs, which use multiple measures of brain activity, have been shown to increase the accuracy of BCIs, including those based on EEG signals reflecting covert attention. Our study examined whether incorporating a measure of the P3 response improved the performance of a previously reported attention-based BCI design that incorporates measures of steady-state visual evoked potentials (SSVEP) and alpha band modulations. Approach. Subjects viewed stimuli consisting of two bilaterally located flashing white boxes on a black background. Streams of letters were presented sequentially within the boxes, in random order. Subjects were cued to attend to one of the boxes without moving their eyes, and they were tasked with counting the number of target-letters that appeared within. P3 components evoked by target appearance, SSVEPs evoked by the flashing boxes, and power in the alpha band are modulated by covert attention, and the modulations can be used to classify trials as left-attended or right-attended. Main Results. We showed that classification accuracy was improved by including a P3 feature along with the SSVEP and alpha features (the inclusion of a P3 feature led to a 9% increase in accuracy compared to the use of SSVEP and alpha features alone). We also showed that the design improves the robustness of BCI performance to individual subject differences. Significance. These results demonstrate that incorporating multiple neurophysiological indices of covert attention can improve performance in a gaze-independent BCI.
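    The abstract does not specify the classifier used; as a hedged illustration of combining the three covert-attention measures, the per-trial features (SSVEP power, alpha power, P3 amplitude) can be concatenated into one vector and classified with a simple nearest-class-mean rule, which is a stand-in for whatever discriminant the study actually trained:

```python
def class_means(samples, labels):
    """samples: list of per-trial feature vectors [ssvep, alpha, p3];
    labels: e.g. 'L' (left-attended) or 'R' (right-attended)."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def classify(x, means):
    """Assign the trial to the class whose mean feature vector is nearest."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(means, key=lambda y: dist2(x, means[y]))
```

In a real hybrid BCI the features would first be normalized per channel so that no single measure (e.g. SSVEP power) dominates the distance.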

  9. Genetics Home Reference: horizontal gaze palsy with progressive scoliosis

    Science.gov (United States)

    Horizontal gaze palsy with progressive scoliosis (HGPPS) is a disorder that affects vision and ...

  10. The influence of banner advertisements on attention and memory: human faces with averted gaze can enhance advertising effectiveness.

    Science.gov (United States)

    Sajjacholapunt, Pitch; Ball, Linden J

    2014-01-01

    Research suggests that banner advertisements used in online marketing are often overlooked, especially when positioned horizontally on webpages. Such inattention invariably gives rise to an inability to remember advertising brands and messages, undermining the effectiveness of this marketing method. Recent interest has focused on whether human faces within banner advertisements can increase attention to the information they contain, since the gaze cues conveyed by faces can influence where observers look. We report an experiment that investigated the efficacy of faces located in banner advertisements to enhance the attentional processing and memorability of banner contents. We tracked participants' eye movements when they examined webpages containing either bottom-right vertical banners or bottom-center horizontal banners. We also manipulated facial information such that banners either contained no face, a face with mutual gaze or a face with averted gaze. We additionally assessed people's memories for brands and advertising messages. Results indicated that relative to other conditions, the condition involving faces with averted gaze increased attention to the banner overall, as well as to the advertising text and product. Memorability of the brand and advertising message was also enhanced. Conversely, in the condition involving faces with mutual gaze, the focus of attention was localized more on the face region rather than on the text or product, weakening any memory benefits for the brand and advertising message. This detrimental impact of mutual gaze on attention to advertised products was especially marked for vertical banners. These results demonstrate that the inclusion of human faces with averted gaze in banner advertisements provides a promising means for marketers to increase the attention paid to such adverts, thereby enhancing memory for advertising information.

  11. The influence of banner advertisements on attention and memory: Human faces with averted gaze can enhance advertising effectiveness

    Directory of Open Access Journals (Sweden)

    Pitch eSajjacholapunt

    2014-03-01

    Full Text Available Research suggests that banner advertisements used in online marketing are often overlooked, especially when positioned horizontally on webpages. Such inattention invariably gives rise to an inability to remember advertising brands and messages, undermining the effectiveness of this marketing method. Recent interest has focused on whether human faces within banner advertisements can increase attention to the information they contain, since the gaze cues conveyed by faces can influence where observers look. We report an experiment that investigated the efficacy of faces located in banner advertisements to enhance the attentional processing and memorability of banner contents. We tracked participants’ eye movements when they examined webpages containing either bottom-right vertical banners or bottom-centre horizontal banners. We also manipulated facial information such that banners either contained no face, a face with mutual gaze or a face with averted gaze. We additionally assessed people’s memories for brands and advertising messages. Results indicated that relative to other conditions, the condition involving faces with averted gaze increased attention to the banner overall, as well as to the advertising text and product. Memorability of the brand and advertising message was also enhanced. Conversely, in the condition involving faces with mutual gaze, the focus of attention was localised more on the face region rather than on the text or product, weakening any memory benefits for the brand and advertising message. This detrimental impact of mutual gaze on attention to advertised products was especially marked for vertical banners. These results demonstrate that the inclusion of human faces with averted gaze in banner advertisements provides a promising means for marketers to increase the attention paid to such adverts, thereby enhancing memory for advertising information.

  12. The importance of the eyes: communication skills in infants of blind parents.

    Science.gov (United States)

    Senju, Atsushi; Tucker, Leslie; Pasco, Greg; Hudry, Kristelle; Elsabbagh, Mayada; Charman, Tony; Johnson, Mark H

    2013-06-07

    The effects of selectively different experience of eye contact and gaze behaviour on the early development of five sighted infants of blind parents were investigated. Infants were assessed longitudinally at 6-10, 12-15 and 24-47 months. Face scanning and gaze following were assessed using eye tracking. In addition, established measures of autistic-like behaviours and standardized tests of cognitive, motor and linguistic development, as well as observations of naturalistic parent-child interaction were collected. These data were compared with those obtained from a larger group of sighted infants of sighted parents. Infants with blind parents did not show an overall decrease in eye contact or gaze following when they observed sighted adults on video or in live interactions, nor did they show any autistic-like behaviours. However, they directed their own eye gaze somewhat less frequently towards their blind mothers and also showed improved performance in visual memory and attention at younger ages. Being reared with significantly reduced experience of eye contact and gaze behaviour does not preclude sighted infants from developing typical gaze processing and other social-communication skills. Indeed, the need to switch between different types of communication strategy may actually enhance other skills during development.

  13. Horizontal eye position affects measured vertical VOR gain on the video Head Impulse Test

    Directory of Open Access Journals (Sweden)

    Leigh A. McGarvie

    2015-03-01

    Full Text Available Background/Hypothesis. With the video Head Impulse Test (vHIT), the vertical VOR gain is defined as (vertical eye velocity)/(vertical head velocity), but compensatory eye movements to vertical canal stimulation usually have a torsional component. To minimize the contribution of torsion to the eye movement measurement, the horizontal gaze direction should be directed 40° from straight ahead so it is in the plane of the stimulated canal pair. Hypothesis: as gaze is systematically moved horizontally away from canal plane alignment, the measured vertical VOR gain should decrease. Study Design. 10 healthy subjects, with vHIT measuring vertical eye movement to head impulses in the plane of the left anterior-right posterior (LARP) canal pair, with gaze at one of 5 horizontal gaze positions: 40° (aligned with the LARP plane), 20°, 0°, -20°, -40°. Methods. Every head impulse was in the LARP plane. The compensatory eye movement was measured by the vHIT prototype system. The same operator delivered every impulse. Results. The canal stimulus remained identical across trials, but the measured vertical VOR gain decreased as the horizontal gaze angle was shifted away from alignment with the LARP canal plane. Conclusion. In measuring vertical VOR gain with vHIT, the horizontal gaze angle should be aligned with the canal plane under test.
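    The gain definition above, (vertical eye velocity)/(vertical head velocity), can be estimated from sampled velocity traces. A minimal sketch using a through-origin least-squares slope over the impulse window, which is one common vHIT convention; the sign handling and function name are illustrative, not taken from the paper:

```python
def vor_gain(eye_vel, head_vel):
    """Least-squares slope of eye velocity regressed on head velocity
    (regression through the origin). The sign is flipped because the
    compensatory eye movement opposes the head movement."""
    num = sum(e * h for e, h in zip(eye_vel, head_vel))
    den = sum(h * h for h in head_vel)
    return -num / den
```

A gain near 1.0 indicates full compensation; the study's point is that misaligning horizontal gaze with the canal plane lowers this measured value even though the canal stimulus is unchanged.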

  14. Functional changes of the reward system underlie blunted response to social gaze in cocaine users.

    Science.gov (United States)

    Preller, Katrin H; Herdener, Marcus; Schilbach, Leonhard; Stämpfli, Philipp; Hulka, Lea M; Vonmoos, Matthias; Ingold, Nina; Vogeley, Kai; Tobler, Philippe N; Seifritz, Erich; Quednow, Boris B

    2014-02-18

    Social interaction deficits in drug users likely impede treatment, increase the burden of the affected families, and consequently contribute to the high costs for society associated with addiction. Despite its significance, the neural basis of altered social interaction in drug users is currently unknown. Therefore, we investigated basal social gaze behavior in cocaine users by applying behavioral, psychophysiological, and functional brain-imaging methods. In study I, 80 regular cocaine users and 63 healthy controls completed an interactive paradigm in which the participants' gaze was recorded by an eye-tracking device that controlled the gaze of an anthropomorphic virtual character. Valence ratings of different eye-contact conditions revealed that cocaine users show diminished emotional engagement in social interaction, which was also supported by reduced pupil responses. Study II investigated the neural underpinnings of changes in social reward processing observed in study I. Sixteen cocaine users and 16 controls completed a similar interaction paradigm as used in study I while undergoing functional magnetic resonance imaging. In response to social interaction, cocaine users displayed decreased activation of the medial orbitofrontal cortex, a key region of reward processing. Moreover, blunted activation of the medial orbitofrontal cortex was significantly correlated with a decreased social network size, reflecting problems in real-life social behavior because of reduced social reward. In conclusion, basic social interaction deficits in cocaine users as observed here may arise from altered social reward processing. Consequently, these results point to the importance of reinstatement of social reward in the treatment of stimulant addiction.

  15. Affine transform to reform pixel coordinates of EOG signals for controlling robot manipulators using gaze motions.

    Science.gov (United States)

    Rusydi, Muhammad Ilhamdi; Sasaki, Minoru; Ito, Satoshi

    2014-06-10

    Biosignals will play an important role in building communication between machines and humans. One type of biosignal that is widely used in neuroscience is the electrooculography (EOG) signal. An EOG has a linear relationship with eye movement displacement. Experiments were performed to construct a gaze motion tracking method indicated by robot manipulator movements. Three operators looked at 24 target points displayed on a monitor that was 40 cm in front of them. Two channels (Ch1 and Ch2) produced EOG signals for every single eye movement. These signals were converted to pixel units by using the linear relationship between EOG signals and gaze motion distances. The conversion outcomes were actual pixel locations. An affine transform method is proposed to determine the shift from actual pixels to target pixels. This method consisted of a sequence of five geometric processes: translation-1, rotation, translation-2, shear and dilatation. The accuracy was approximately 0.86° ± 0.67° in the horizontal direction and 0.54° ± 0.34° in the vertical. This system successfully tracked gaze motions not only in direction, but also in distance. Using this system, three operators could operate a robot manipulator to point at targets. This result shows that the method is reliable in building communication between humans and machines using EOGs.
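    The five-step correction (translation-1, rotation, translation-2, shear, dilatation) can be sketched as a composition of 3×3 homogeneous matrices applied to the EOG-derived pixel coordinates. The matrices below follow standard homogeneous-coordinate conventions rather than the paper's exact parameterization, and the parameter values would have to be fitted per operator:

```python
import math

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def shear(kx, ky):
    return [[1, kx, 0], [ky, 1, 0], [0, 0, 1]]

def dilatation(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def correct_point(p, t1, theta, t2, k, d):
    """Apply translation-1, rotation, translation-2, shear, dilatation
    in that order to an (x, y) pixel location."""
    m = translation(*t1)
    for step in (rotation(theta), translation(*t2), shear(*k), dilatation(*d)):
        m = mat_mul(step, m)   # left-multiply so later steps are applied last
    x, y = p
    xh = [m[i][0] * x + m[i][1] * y + m[i][2] for i in range(3)]
    return (xh[0] / xh[2], xh[1] / xh[2])
```

Because all five steps are affine, the whole correction collapses into a single matrix that can be precomputed once per calibration session.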

  16. Affine Transform to Reform Pixel Coordinates of EOG Signals for Controlling Robot Manipulators Using Gaze Motions

    Directory of Open Access Journals (Sweden)

    Muhammad Ilhamdi Rusydi

    2014-06-01

    Full Text Available Biosignals will play an important role in building communication between machines and humans. One type of biosignal that is widely used in neuroscience is the electrooculography (EOG) signal. An EOG has a linear relationship with eye movement displacement. Experiments were performed to construct a gaze motion tracking method indicated by robot manipulator movements. Three operators looked at 24 target points displayed on a monitor that was 40 cm in front of them. Two channels (Ch1 and Ch2) produced EOG signals for every single eye movement. These signals were converted to pixel units by using the linear relationship between EOG signals and gaze motion distances. The conversion outcomes were actual pixel locations. An affine transform method is proposed to determine the shift from actual pixels to target pixels. This method consisted of a sequence of five geometric processes: translation-1, rotation, translation-2, shear and dilatation. The accuracy was approximately 0.86° ± 0.67° in the horizontal direction and 0.54° ± 0.34° in the vertical. This system successfully tracked gaze motions not only in direction, but also in distance. Using this system, three operators could operate a robot manipulator to point at targets. This result shows that the method is reliable in building communication between humans and machines using EOGs.

  17. Reading from a Head-Fixed Display during Walking: Adverse Effects of Gaze Stabilization Mechanisms.

    Directory of Open Access Journals (Sweden)

    Olivier Borg

    Full Text Available Reading performance during standing and walking was assessed for information presented on earth-fixed and head-fixed displays by determining the minimal duration during which a numerical time stimulus needed to be presented for 50% correct naming answers. Reading from the earth-fixed display was comparable during standing and walking, with optimal performance being attained for visual character sizes in the range of 0.2° to 1°. Reading from the head-fixed display was impaired for small (0.2–0.3°) and large (5°) visual character sizes, especially during walking. Analysis of head and eye movements demonstrated that retinal slip was larger during walking than during standing, but remained within the functional acuity range when reading from the earth-fixed display. The detrimental effects on performance of reading from the head-fixed display during walking could be attributed to loss of acuity resulting from large retinal slip. Because walking activated the angular vestibulo-ocular reflex, the resulting compensatory eye movements acted to stabilize gaze on the information presented on the earth-fixed display but destabilized gaze from the information presented on the head-fixed display. We conclude that the gaze stabilization mechanisms that normally allow visual performance to be maintained during physical activity adversely affect reading performance when the information is presented on a display attached to the head.
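    The 50%-correct minimal presentation duration used as the outcome measure above is the kind of threshold a one-up/one-down adaptive staircase converges to. A hedged sketch of that generic psychophysical procedure; the paper's actual method is not specified here, and the starting value, step size, and stopping rule are illustrative:

```python
def staircase_threshold(respond, start_ms=200.0, step_ms=10.0, reversals_needed=8):
    """Run a 1-up/1-down staircase, which converges on the 50% point
    of the psychometric function.

    respond(duration_ms) -> True if the observer named the stimulus
    correctly at that presentation duration. Returns the mean of the
    reversal durations as the threshold estimate.
    """
    duration = start_ms
    last_correct = None
    reversal_points = []
    while len(reversal_points) < reversals_needed:
        correct = respond(duration)
        if last_correct is not None and correct != last_correct:
            reversal_points.append(duration)
        last_correct = correct
        # Correct answers make the task harder (shorter presentation),
        # errors make it easier (longer presentation).
        duration += -step_ms if correct else step_ms
        duration = max(step_ms, duration)
    return sum(reversal_points) / len(reversal_points)
```

With a deterministic observer the staircase oscillates around the true threshold, so averaging the reversal durations recovers it.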

  18. Involvement of the head and trunk during gaze reorientation during standing and treadmill walking.

    Science.gov (United States)

    Cinelli, Michael; Patla, Aftab; Stuart, Bethany

    2007-07-01

    As individuals stand or walk in an environment, their gaze may be reoriented from one location to another in response to auditory or visual stimuli. In order to reorient gaze, the eyes and/or the head and trunk must rotate. However, what determines the exact degree of rotation of each segment while standing or walking is not fully understood. In the current study we show that when participants were asked to reorient their gaze towards light cues positioned at eccentric locations of up to 90 degrees while standing or walking on a treadmill, their eyes and head mainly facilitated the action. Rotations of the head-in-space were similar for both tasks, but rotations of the shoulders- and hips-in-space were lower in the treadmill walking condition. It is argued that this difference in the level of head-on-trunk rotation during the two tasks is controlled by the vestibular feedback loop. The regulation of this feedback loop is performed by the cerebellum in response to the level of threat to postural stability.

  19. The Gaze as constituent and annihilator

    Directory of Open Access Journals (Sweden)

    Mats Carlsson

    2012-11-01

    Full Text Available This article aims to join the contemporary effort to promote a psychoanalytic renaissance within cinema studies, post Post-Theory. In trying to shake off the burden of 1970s film theory's distortion of the Lacanian Gaze, rejuvenating it with the strength of the Real and fusing it with Freudian thoughts on the uncanny, hopefully this new dawn can be reached. I aspire to conceptualize the Gaze in a straightforward manner, in order to obtain an instrument for identifying certain strategies within the filmic realm aimed at depicting the subjective destabilizing of diegetic characters, as well as thwarting techniques directed at the spectatorial subject. In setting this capricious Gaze against the uncanny phenomena described by Freud, we find that these two ideas easily intertwine into a draft description of a powerful, potentially reconstitutive force worth highlighting.

  20. Emotional expression modulates perceived gaze direction.

    Science.gov (United States)

    Lobmaier, Janek S; Tiddeman, Bernard P; Perrett, David I

    2008-08-01

    Gaze perception is an important social skill, as it conveys information about what another person is attending to. Gaze direction has been shown to affect interpretation of emotional expression. Here the authors investigate whether the emotional facial expression has a reciprocal influence on interpretation of gaze direction. In a forced-choice yes-no task, participants were asked to judge whether faces expressing different emotions (anger, fear, happiness, and neutral) shown at different viewing angles were looking at them or not. Happy faces were more likely to be judged as looking at the observer than were angry, fearful, or neutral faces. Angry faces were more often judged as looking at the observer than were fearful and neutral expressions. These findings are discussed against the background of approach and avoidance orientation of emotions and of the self-referential positivity bias.

  1. A Direct Link between Gaze Perception and Social Attention

    Science.gov (United States)

    Bayliss, Andrew P.; Bartlett, Jessica; Naughtin, Claire K.; Kritikos, Ada

    2011-01-01

    How information is exchanged between the cognitive mechanisms responsible for gaze perception and social attention is unclear. These systems could be independent; the "gaze cueing" effect could emerge from the activation of a general-purpose attentional mechanism that is ignorant of the social nature of the gaze cue. Alternatively, orienting to…

  2. Is the Theory of Mind deficit observed in visual paradigms in schizophrenia explained by an impaired attention toward gaze orientation?

    Science.gov (United States)

    Roux, Paul; Forgeot d'Arc, Baudoin; Passerieux, Christine; Ramus, Franck

    2014-08-01

    Schizophrenia is associated with poor Theory of Mind (ToM), particularly in goal and belief attribution to others. It is also associated with abnormal gaze behaviors toward others: individuals with schizophrenia usually look less at others' faces and gaze, which are crucial epistemic cues that contribute to correct mental state inferences. This study tests the hypothesis that impaired ToM in schizophrenia might be related to a deficit in visual attention toward gaze orientation. We adapted a previous non-verbal ToM paradigm consisting of animated cartoons allowing the assessment of goal and belief attribution. In the true and false belief conditions, an object was displaced while an agent was either looking at it or away, respectively. Eye movements were recorded to quantify visual attention to gaze orientation (proportion of time participants spent looking at the head of the agent while the target object changed locations). 29 patients with schizophrenia and 29 matched controls were tested. Compared to controls, patients looked significantly less at the agent's head and had lower performance in belief and goal attribution. Performance in belief and goal attribution significantly increased with the head looking percentage. When the head looking percentage was entered as a covariate, the group effect on belief and goal attribution performance was no longer significant. Patients' deficit on this visual ToM paradigm is thus entirely explained by a decreased visual attention toward gaze. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Influence of gaze and directness of approach on the escape responses of the Indian rock lizard, Psammophilus dorsalis (Gray, 1831)

    Indian Academy of Sciences (India)

    Rachakonda Sreekar; Suhel Quader

    2013-12-01

    Animals often evaluate the degree of risk posed by a predator and respond accordingly. Since many predators orient their eyes towards prey while attacking, predator gaze and directness of approach could serve as conspicuous indicators of risk to prey. The ability to perceive these cues and discriminate between high and low predation risk should benefit prey species through both higher survival and decreased energy expenditure. We experimentally examined whether Indian rock lizards (Psammophilus dorsalis) can perceive these two indicators of predation risk by measuring the variation in their fleeing behaviour in response to type of gaze and approach by a human predator. Overall, we found that the gaze and approach of the predator influenced flight initiation distance, which also varied with attributes of the prey (i.e. size/sex and tail-raise behaviour). Flight initiation distance (FID) was 43% longer during direct approaches with direct gaze compared with tangential approaches with averted gaze. In further, exploratory, analyses, we found that FID was 23% shorter for adult male lizards than for female or young male (FYM) lizards. In addition, FYM lizards that showed a tail-raise display during approach had a 71% longer FID than those that did not. Our results suggest that multiple factors influence the decision to flee in animals. Further studies are needed to test the generality of these factors and to investigate the proximate mechanisms underlying flight decisions.

  4. Mona Lisa Effect of Eyes and Face

    Directory of Open Access Journals (Sweden)

    Takao Sato

    2012-10-01

    Full Text Available A person depicted in portrait paintings does not appear slanted even when observers move around. The gaze also stays fixed on the observer. This constancy in angle of face/body orientation or gaze direction is called the Mona Lisa effect. Do observers realize the portrait was physically slanted when the effect occurs? What is the relationship between the effect for face/body and gaze? To answer these questions, we separately measured the perceived angle of face, gaze, and background while varying the physical slant of the portrait itself. The stimulus was a computer-generated face (19 × 12 deg) presented on a 3D LCD display. It was surrounded by a 24 × 24 deg black-contour frame filled with a noise texture. There were also no-frame and/or no-texture conditions. The slant was varied between ±30 deg. The observer was asked to judge the direction of gaze and the orientation of face or background in separate sessions. It was found that the perceived gaze was almost always directed toward the observer regardless of slant angle or existence of frame or background. In contrast, the face orientation was judged as facing the observer in only 40–50% of trials, and as facing at the correct angle in 50–60% of trials. The background was perceived correctly in most trials. These results demonstrate special characteristics of eyes. The gaze is always directed to you even when the portrait is slanted and the background is perceived slanted. The face has intermediate characteristics: it is sometimes directed to you, but sometimes it appears slanted.

  5. Towards Wearable Gaze Supported Augmented Cognition

    DEFF Research Database (Denmark)

    Toshiaki Kurauchi, Andrew; Hitoshi Morimoto, Carlos; Mardanbeigi, Diako;

    to reduce the amount of information and provide an appropriate mechanism for low and divided attention interaction. We claim that most current gaze interaction paradigms are not appropriate for wearable computing because they are not designed for divided attention. We have used principles suggested...... by the wearable computing community to develop a gaze supported augmented cognition application with three interaction modes. The application provides information of the person being looked at. The continuous mode updates information every time the user looks at a different face. The key activated discrete mode...

  6. Amygdala activation for eye contact despite complete cortical blindness.

    Science.gov (United States)

    Burra, Nicolas; Hervais-Adelman, Alexis; Kerzel, Dirk; Tamietto, Marco; de Gelder, Beatrice; Pegna, Alan J

    2013-06-19

    Cortical blindness refers to the loss of vision that occurs after destruction of the primary visual cortex. Although there is no sensory cortex and hence no conscious vision, some cortically blind patients show amygdala activation in response to facial or bodily expressions of emotion. Here we investigated whether direction of gaze could also be processed in the absence of any functional visual cortex. A well-known patient with bilateral destruction of his visual cortex and subsequent cortical blindness was investigated in an fMRI paradigm during which blocks of faces were presented either with their gaze directed toward or away from the viewer. Increased right amygdala activation was found in response to directed compared with averted gaze. Activity in this region was further found to be functionally connected to a larger network associated with face and gaze processing. The present study demonstrates that, in human subjects, the amygdala response to eye contact does not require an intact primary visual cortex.

  7. Eye Typing using Markov and Active Appearance Models

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Hansen, John Paulin; Nielsen, Mads

    2002-01-01

    multi-modal interactions based on video tracking systems. Robust methods are needed to track the eyes using web cameras due to the poor image quality. A real-time tracking scheme using a mean-shift color tracker and an Active Appearance Model of the eye is proposed. It is possible from this model......We propose a non-intrusive eye tracking system intended for the use of everyday gaze typing using web cameras. We argue that high precision in gaze tracking is not needed for on-screen typing due to natural language redundancy. This facilitates the use of low-cost video components for advanced...... to infer the state of the eye such as eye corners and the pupil location under scale and rotational changes....

  8. An eye for the I: Preferential attention to the eyes of ingroup members.

    Science.gov (United States)

    Kawakami, Kerry; Williams, Amanda; Sidhu, David; Choma, Becky L; Rodriguez-Bailón, Rosa; Cañadas, Elena; Chung, Derek; Hugenberg, Kurt

    2014-07-01

    Human faces, and more specifically the eyes, play a crucial role in social and nonverbal communication because they signal valuable information about others. It is therefore surprising that few studies have investigated the impact of intergroup contexts and motivations on attention to the eyes of ingroup and outgroup members. Four experiments investigated differences in eye gaze to racial and novel ingroups using eye tracker technology. Whereas Studies 1 and 3 demonstrated that White participants attended more to the eyes of White compared to Black targets, Study 2 showed a similar pattern of attention to the eyes of novel ingroup and outgroup faces. Studies 3 and 4 also provided new evidence that eye gaze is flexible and can be meaningfully influenced by current motivations. Specifically, instructions to individuate specific social categories increased attention to the eyes of target group members. Furthermore, the latter experiments demonstrated that preferential attention to the eyes of ingroup members predicted important intergroup biases such as recognition of ingroup over outgroup faces (i.e., the own-race bias; Study 3) and willingness to interact with outgroup members (Study 4). The implication of these findings for general theorizing on face perception, individuation processes, and intergroup relations are discussed.

  9. Cursive writing with smooth pursuit eye movements.

    Science.gov (United States)

    Lorenceau, Jean

    2012-08-21

    The eyes never cease to move: ballistic saccades quickly turn the gaze toward peripheral targets, whereas smooth pursuit maintains moving targets on the fovea where visual acuity is best. Despite the oculomotor system being endowed with exquisite motor abilities, any attempt to generate smooth eye movements against a static background results in saccadic eye movements. Although exceptions to this rule have been reported, volitional control over smooth eye movements is at best rudimentary. Here, I introduce a novel, temporally modulated visual display, which, although static, sustains smooth eye movements in arbitrary directions. After brief training, participants gain volitional control over smooth pursuit eye movements and can generate digits, letters, words, or drawings at will. For persons deprived of limb movement, this offers a fast, creative, and personal means of linguistic and emotional expression. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity--Evidence from Gazing Patterns.

    Directory of Open Access Journals (Sweden)

    Sanni Somppi

    Full Text Available Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore if domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly and this evaluation led to attentional bias, which was dependent on the depicted species: threatening conspecifics' faces evoked heightened attention but threatening human faces instead an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs.
The findings provide a novel

  11. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity--Evidence from Gazing Patterns.

    Science.gov (United States)

    Somppi, Sanni; Törnqvist, Heini; Kujala, Miiamaaria V; Hänninen, Laura; Krause, Christina M; Vainio, Outi

    2016-01-01

    Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore if domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly and this evaluation led to attentional bias, which was dependent on the depicted species: threatening conspecifics' faces evoked heightened attention but threatening human faces instead an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel perspective on

  12. Body mass index moderates gaze orienting biases and pupil diameter to high and low calorie food images.

    Science.gov (United States)

    Graham, Reiko; Hoover, Alison; Ceballos, Natalie A; Komogortsev, Oleg

    2011-06-01

    The primary goal of this study was to examine eye gaze behavior to different kinds of food images in individuals differing in BMI status. Eye-tracking methods were used to examine gaze and pupil responses while normal weight and overweight women freely viewed pairs of different food images: high calorie sweet foods, high calorie savory foods, and low calorie foods. Self-report measures of hunger, state and trait cravings, and restrained eating were also obtained. Results revealed orienting biases to low calorie foods and decreases in pupil diameter to high calorie sweet foods relative to low calorie foods in the overweight group. Groups did not differ in the average amount of time spent gazing at the different image types. Furthermore, increased state cravings were associated with larger pupil diameters to high calorie savory foods, especially in individuals with lower BMIs. In contrast, restrained eating scores were associated with a decreased orienting bias to high calorie sweet foods in the high BMI group. In conclusion, BMI status appears to influence gaze parameters that are less susceptible to cognitive control. Results suggest that overweight individuals, especially those who diet, have negative implicit attitudes toward high calorie foods, especially sweets. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. The Effect of Head Orientation on Perceived Gaze Direction: Revisiting Gibson and Pick (1963) and Cline (1967)

    Science.gov (United States)

    Moors, Pieter; Verfaillie, Karl; Daems, Thalia; Pomianowska, Iwona; Germeys, Filip

    2016-01-01

    Two biases in perceived gaze direction have been observed when eye and head orientation are not aligned. An overshoot effect indicates that perceived gaze direction is shifted away from head orientation (i.e., a repulsive effect), whereas a towing effect indicates that perceived gaze direction falls in between head and eye orientation (i.e., an attraction effect). In the 1960s, three influential papers were published with respect to the effect of head orientation on perceived gaze direction (Gibson and Pick, 1963; Cline, 1967; Anstis et al., 1969). Throughout the years, the results of two of these (Gibson and Pick, 1963; Cline, 1967) have been interpreted differently by a number of authors. In this paper, we critically discuss potential sources of confusion that have led to differential interpretations of both studies. At first sight, the results of Cline (1967), despite having been a major topic of discussion, unambiguously seem to indicate a towing effect, whereas Gibson and Pick's (1963) results seem to be the most ambiguous, although they have never been questioned in the literature. To shed further light on this apparent inconsistency, we repeated the critical experiments reported in both studies. Our results indicate an overshoot effect in both studies. PMID:27559325

  14. Healthy Eyes

    Science.gov (United States)

    Having a comprehensive dilated eye exam is one ... or contact lenses. What is a comprehensive dilated eye exam? A comprehensive dilated eye exam is a ...

  15. EFFECTIVENESS OF GAZE STABILITY EXERCISES ON BALANCE IN HEALTHY ELDERLY POPULATION

    Directory of Open Access Journals (Sweden)

    Vaishali Bhardwaj

    2014-08-01

    Full Text Available Purpose: To determine whether gaze stability exercises are effective in improving balance disturbance and related confidence in a healthy elderly population. Design: Randomized pre-test and post-test experimental design. Setting: Superspeciality hospital setup. Participants: 30 subjects without any definite balance disorder but with a history of subjective apprehension, attending the outpatient physiotherapy department of BLK Superspeciality hospital, New Delhi, were selected for the study. Intervention: The experimental group (n=15) performed gaze stability exercises (GSE), and the control group (n=15) performed placebo eye movements. Weekly progression of exercise was done as per the designed exercise protocol. Total time for placebo eye exercises was matched to the time the GSE group spent performing gaze stability exercises. The participants performed the exercises 3 times daily over a 6-week period. Outcome measures: Participants were evaluated at baseline and after 6 weeks of intervention using the Berg Balance Scale (BBS) and the activities-specific balance confidence scale (ABC), assessing balance and subjective confidence in balance respectively. Result: There were no baseline differences (P ≤ .05) between the GSE and CON groups in age and sex, or any outcome measures. The GSE group improved significantly on both outcome measures compared with the control group. Conclusion: The results of this study suggest that vestibular-specific gaze stability exercises lead to improved balance and subjective confidence to carry out activities of daily life, through adaptation of the vestibulo-ocular reflex (VOR) in the age-related VOR degeneration found in the healthy elderly population.

  16. Eye movement prediction by oculomotor plant Kalman filter with brainstem control

    Institute of Scientific and Technical Information of China (English)

    Oleg V.KOMOGORTSEV; Javed I.KHAN

    2009-01-01

    Our work addresses one of the core issues related to Human Computer Interaction (HCI) systems that use eye gaze as an input. This issue is the sensor, transmission and other delays that exist in any eye tracker-based system, reducing its performance. A delay effect can be compensated for by an accurate prediction of the eye movement trajectories. This paper introduces a mathematical model of the human eye that uses anatomical properties of the Human Visual System to predict eye movement trajectories. The eye mathematical model is transformed into a Kalman filter form to provide continuous eye position signal prediction during all eye movement types. The model presented in this paper uses brainstem control properties employed during transitions between fast (saccade) and slow (fixations, pursuit) eye movements. Results presented in this paper indicate that the proposed eye model in a Kalman filter form improves the accuracy of eye movement prediction and is capable of real-time performance. In addition to HCI systems with direct eye gaze input, the proposed eye model can be immediately applied for bit-rate/computational reduction in real-time gaze-contingent systems.
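    The delay-compensation idea in this abstract can be sketched with a generic Kalman filter: filter the noisy gaze samples, then extrapolate the state open-loop over the system latency. Note this is only an illustrative constant-velocity filter, not the paper's anatomical oculomotor-plant model with brainstem control; the sampling rate, noise covariances, and toy data below are all invented for the example.

    ```python
    import numpy as np

    def make_filter(dt, q=500.0, r=0.25):
        """Constant-velocity model: state = [position (deg), velocity (deg/s)]."""
        F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
        H = np.array([[1.0, 0.0]])                 # only position is observed
        Q = q * np.array([[dt**3 / 3, dt**2 / 2],  # white-acceleration process noise
                          [dt**2 / 2, dt]])
        R = np.array([[r]])                        # measurement noise
        return F, H, Q, R

    def predict_ahead(measurements, lead_steps, dt=0.001):
        """Filter gaze samples, then extrapolate lead_steps ahead (delay bridging)."""
        F, H, Q, R = make_filter(dt)
        # initialize position from the first sample, velocity by finite difference
        x = np.array([[measurements[0]],
                      [(measurements[1] - measurements[0]) / dt]])
        P = np.eye(2)
        for z in measurements[1:]:
            x = F @ x                              # predict
            P = F @ P @ F.T + Q
            y = np.array([[z]]) - H @ x            # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y                          # update
            P = (np.eye(2) - K @ H) @ P
        for _ in range(lead_steps):                # open-loop extrapolation
            x = F @ x
        return float(x[0, 0])

    # Toy smooth pursuit: 100 deg/s sampled at 1 kHz for 50 ms, noiseless:
    samples = [0.1 * k for k in range(50)]         # degrees
    ahead = predict_ahead(samples, lead_steps=20)  # position ~20 ms in the future
    ```

    At a 1 kHz sampling rate, 20 prediction steps correspond to roughly 20 ms, the order of magnitude of typical tracker-plus-transmission latency; the paper's contribution is replacing this generic dynamics model with an anatomically grounded one so the extrapolation also holds across saccade/fixation transitions.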

  17. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how to best relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  18. Owners' direct gazes increase dogs' attention-getting behaviors.

    Science.gov (United States)

    Ohkita, Midori; Nagasawa, Miho; Mogi, Kazutaka; Kikusui, Takefumi

    2016-04-01

    This study examined whether dogs gain information about humans' attention via their gazes and whether they change their attention-getting behaviors (i.e., whining and whimpering, looking at their owners' faces, pawing, and approaching their owners) in response to their owners' direct gazes. The results showed that when the owners gazed at their dogs, the durations of whining and whimpering and looking at the owners' faces were longer than when the owners averted their gazes. In contrast, there were no differences in duration of pawing and likelihood of approaching the owners between the direct and averted gaze conditions. Therefore, owners' direct gazes increased the behaviors that acted as distant signals and did not necessarily involve touching the owners. We suggest that dogs are sensitive to human gazes, that this sensitivity may act as an attachment signal to humans, and that it may contribute to close relationships between humans and dogs.

  19. Gaze Patterns and Audiovisual Speech Enhancement

    Science.gov (United States)

    Yi, Astrid; Wong, Willy; Eizenman, Moshe

    2013-01-01

    Purpose: In this study, the authors sought to quantify the relationships between speech intelligibility (perception) and gaze patterns under different auditory-visual conditions. Method: Eleven subjects listened to low-context sentences spoken by a single talker while viewing the face of one or more talkers on a computer display. Subjects either…

  20. The Use of Gaze to Control Drones

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Alapetite, Alexandre; MacKenzie, I. Scott

    2014-01-01

    This paper presents an experimental investigation of gaze-based control modes for unmanned aerial vehicles (UAVs or “drones”). Ten participants performed a simple flying task. We gathered empirical measures, including task completion time, and examined the user experience for difficulty, reliabil...

  1. Between Gazes: Feminist, Queer, and 'Other' Films

    DEFF Research Database (Denmark)

    Elias, Camelia

    In this book Camelia Elias introduces key terms in feminist, queer, and postcolonial/diaspora film. Taking her point of departure in the question, "what do you want from me?" she detours through Lacanian theory of the gaze and reframes questions of subjectivity and representation in an entertaini...

  3. The impact of coping style on gaze duration.

    Directory of Open Access Journals (Sweden)

    Tim Klucken

    Full Text Available The understanding of individual differences in response to threat (e.g., attentional bias) is important to better understand the development of anxiety disorders. Previous studies revealed only a small attentional bias in high-anxious (HA) subjects. One explanation for this finding may be the assumption that all HA-subjects show a constant attentional bias. Current models distinguish HA-subjects depending on their level of tolerance for uncertainty and for arousal. These models assume that only HA-subjects with intolerance for uncertainty but tolerance for arousal ("sensitizers") show an attentional bias, compared to HA-subjects with intolerance for uncertainty and intolerance for arousal ("fluctuating subjects"). Further, it is assumed that repressors (defined as intolerance for arousal but tolerance for uncertainty) would react with avoidance behavior when confronted with threatening stimuli. The present study investigated the influence of coping styles on attentional bias. After an extensive recruiting phase, 36 subjects were classified into three groups (sensitizers, fluctuating, and repressors). All subjects were exposed to presentations of happy and threatening faces, while recording gaze durations with an eye-tracker. The results showed that only sensitizers showed an attentional bias: they gazed longer at the threatening face than at the happy face during the first 500 ms. The results support the findings on the relationship between anxiety and attention and extend them by showing variations according to coping styles. The differentiation of subjects according to a multifaceted coping style allows a better prediction of the attentional bias and contributes to an insight into the complex interplay of personality, coping, and behavior.

  4. The impact of coping style on gaze duration.

    Science.gov (United States)

    Klucken, Tim; Brouwer, Anne-Marie; Chatziastros, Astros; Kagerer, Sabine; Netter, Petra; Hennig, Juergen

    2010-11-15

    The understanding of individual differences in response to threat (e.g., attentional bias) is important to better understand the development of anxiety disorders. Previous studies revealed only a small attentional bias in high-anxious (HA) subjects. One explanation for this finding may be the assumption that all HA-subjects show a constant attentional bias. Current models distinguish HA-subjects depending on their level of tolerance for uncertainty and for arousal. These models assume that only HA-subjects with intolerance for uncertainty but tolerance for arousal ("sensitizers") show an attentional bias, compared to HA-subjects with intolerance for uncertainty and intolerance for arousal ("fluctuating subjects"). Further, it is assumed that repressors (defined as intolerance for arousal but tolerance for uncertainty) would react with avoidance behavior when confronted with threatening stimuli. The present study investigated the influence of coping styles on attentional bias. After an extensive recruiting phase, 36 subjects were classified into three groups (sensitizers, fluctuating, and repressors). All subjects were exposed to presentations of happy and threatening faces, while recording gaze durations with an eye-tracker. The results showed that only sensitizers showed an attentional bias: they gazed longer at the threatening face than at the happy face during the first 500 ms. The results support the findings on the relationship between anxiety and attention and extend them by showing variations according to coping styles. The differentiation of subjects according to a multifaceted coping style allows a better prediction of the attentional bias and contributes to an insight into the complex interplay of personality, coping, and behavior.

  5. The Influences of Face Inversion and Facial Expression on Sensitivity to Eye Contact in High-Functioning Adults with Autism Spectrum Disorders

    Science.gov (United States)

    Vida, Mark D.; Maurer, Daphne; Calder, Andrew J.; Rhodes, Gillian; Walsh, Jennifer A.; Pachai, Matthew V.; Rutherford, M. D.

    2013-01-01

    We examined the influences of face inversion and facial expression on sensitivity to eye contact in high-functioning adults with and without an autism spectrum disorder (ASD). Participants judged the direction of gaze of angry, fearful, and neutral faces. In the typical group only, the range of directions of gaze leading to the perception of eye…

  6. Effects of Facial Symmetry and Gaze Direction on Perception of Social Attributes: A Study in Experimental Art History

    Science.gov (United States)

    Folgerø, Per O.; Hodne, Lasse; Johansson, Christer; Andresen, Alf E.; Sætren, Lill C.; Specht, Karsten; Skaar, Øystein O.; Reber, Rolf

    2016-01-01

    between the gaze directions for profiles. Our findings indicate that many factors affect the impression of a face, and that eye contact in combination with face direction reinforce the general impression of portraits, rather than determine it. PMID:27679567

  7. Effects of Facial Symmetry and Gaze Direction on Perception of Social Attributes: A Study in Experimental Art History

    Directory of Open Access Journals (Sweden)

    Per Olav Folgerø

    2016-09-01

    larger contrast between the gaze directions for profiles. Our findings indicate that many factors affect the impression of a face, and that eye contact in combination with face direction reinforce the general impression of portraits, rather than determine it.

  8. Effects of Facial Symmetry and Gaze Direction on Perception of Social Attributes: A Study in Experimental Art History.

    Science.gov (United States)

    Folgerø, Per O; Hodne, Lasse; Johansson, Christer; Andresen, Alf E; Sætren, Lill C; Specht, Karsten; Skaar, Øystein O; Reber, Rolf

    2016-01-01

    between the gaze directions for profiles. Our findings indicate that many factors affect the impression of a face, and that eye contact in combination with face direction reinforce the general impression of portraits, rather than determine it.

  9. Coordinating one hand with two eyes: optimizing for field of view in a pointing task.

    Science.gov (United States)

    Khan, Aarlenne Z; Crawford, J Douglas

    2003-02-01

    We previously found that subjects switched 'ocular dominance' as a function of horizontal gaze direction in a reaching task [Vision Res. 41 (14) (2001) 1743]. Here we extend these findings to show that when subjects pointed to targets across the horizontal binocular field, they aligned the fingertip with a vertical plane located between the eyes and the target. This eye-target plane gradually shifted from aligning with the left eye (leftward targets) to between the two eyes (intermediate targets) to the right eye (rightward targets). We suggest that this occurs to optimize eye-hand alignment towards the eye with the best overall field of view.

  10. Physiology and pathology of eye-head coordination.

    Science.gov (United States)

    Proudlock, Frank Antony; Gottlob, Irene

    2007-09-01

    Human head movement control can be considered part of the oculomotor system, since the control of gaze involves coordination of the eyes and head. Humans show a remarkable degree of flexibility in eye-head coordination strategies; nonetheless, an individual will often demonstrate stereotypical patterns of eye-head behaviour for a given visual task. This review examines eye-head coordination in laboratory-based visual tasks, such as saccadic gaze shifts and combined eye-head pursuit, and in common tasks in daily life, such as reading. The effect of the aging process on eye-head coordination is then reviewed from infancy through to senescence. Consideration is also given to how pathology can affect eye-head coordination from the lowest through to the highest levels of oculomotor control, comparing conditions as diverse as eye movement restrictions and schizophrenia. Given the adaptability of the eye-head system, we postulate that this flexible system is under the control of the frontal cortical regions, which assist in planning, coordinating and executing behaviour. We provide evidence for this based on changes in eye-head coordination dependent on the context and expectation of presented visual stimuli, as well as from changes in eye-head coordination caused by frontal lobe dysfunction.

  11. Quantitative linking hypotheses for infant eye movements.

    Directory of Open Access Journals (Sweden)

    Daniel Yurovsky

    Full Text Available The study of cognitive development hinges, largely, on the analysis of infant looking. But analyses of eye gaze data require the adoption of linking hypotheses: assumptions about the relationship between observed eye movements and underlying cognitive processes. We develop a general framework for constructing, testing, and comparing these hypotheses, and thus for producing new insights into early cognitive development. We first introduce the general framework--applicable to any infant gaze experiment--and then demonstrate its utility by analyzing data from a set of experiments investigating the role of attentional cues in infant learning. The new analysis uncovers significantly more structure in these data, finding evidence of learning that was not found in standard analyses and showing an unexpected relationship between cue use and learning rate. Finally, we discuss general implications for the construction and testing of quantitative linking hypotheses. MATLAB code for sample linking hypotheses can be found on the first author's website.
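    A toy sketch can make the notion of a quantitative linking hypothesis concrete (an illustration only, not the authors' framework or the MATLAB code mentioned above; the counts and probabilities are invented): each hypothesis maps an assumed cognitive state to a predicted probability of fixating a target, and hypotheses are compared by how well they explain observed looking counts.

```python
import math

def loglik_looking(looks_to_target, trials, p_model):
    """Binomial log-likelihood of the observed number of target fixations
    under a linking hypothesis that predicts fixation probability p_model."""
    p = min(max(p_model, 1e-9), 1 - 1e-9)  # guard against log(0)
    k, n = looks_to_target, trials
    return k * math.log(p) + (n - k) * math.log(1 - p)

# Two toy linking hypotheses for the same (hypothetical) gaze data:
#   H1: looking is at chance (no learning), p = 0.50
#   H2: looking tracks a learned cue-object association, p = 0.75
k, n = 70, 100  # hypothetical: target fixated on 70 of 100 trials
print(loglik_looking(k, n, 0.75) > loglik_looking(k, n, 0.50))  # True
```

    On these invented counts the learning hypothesis assigns the data a higher likelihood; the framework described in the record generalizes this kind of comparison to richer models of infant gaze.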

  12. Visual Data Mining: An Exploratory Approach to Analyzing Temporal Patterns of Eye Movements

    Science.gov (United States)

    Yu, Chen; Yurovsky, Daniel; Xu, Tian

    2012-01-01

    Infant eye movements are an important behavioral resource to understand early human development and learning. But the complexity and amount of gaze data recorded from state-of-the-art eye-tracking systems also pose a challenge: how does one make sense of such dense data? Toward this goal, this article describes an interactive approach based on…

  14. Eye size and visual acuity influence vestibular anatomy in mammals.

    Science.gov (United States)

    Kemp, Addison D; Christopher Kirk, E

    2014-04-01

    The semicircular canals of the inner ear detect head rotations and trigger compensatory movements that stabilize gaze and help maintain visual fixation. Mammals with large eyes and high visual acuity require precise gaze stabilization mechanisms because they experience diminished visual functionality at low thresholds of uncompensated motion. Because semicircular canal radius of curvature is a primary determinant of canal sensitivity, species with large canal radii are expected to be capable of more precise gaze stabilization than species with small canal radii. Here, we examine the relationship between mean semicircular canal radius of curvature, eye size, and visual acuity in a large sample of mammals. Our results demonstrate that eye size and visual acuity both explain a significant proportion of the variance in mean canal radius of curvature after statistically controlling for the effects of body mass and phylogeny. These findings suggest that variation in mean semicircular canal radius of curvature among mammals is partly the result of selection for improved gaze stabilization in species with large eyes and acute vision. Our results also provide a possible functional explanation for the small semicircular canal radii of fossorial mammals and plesiadapiforms. Copyright © 2014 Wiley Periodicals, Inc.
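    The "statistically controlling for body mass" step in such a comparative analysis can be illustrated with a partial-correlation sketch on synthetic data (an assumption-laden toy: the scaling coefficients are invented, and the phylogenetic control used in the paper is omitted here):

```python
import numpy as np

def residualize(y, x):
    """Residuals of y after ordinary least-squares regression on x
    (with an intercept): the part of y not explained by x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(0)
log_mass = rng.uniform(0.0, 6.0, 50)               # log body mass
log_eye = 0.3 * log_mass + rng.normal(0, 0.2, 50)  # eye size scales with mass
log_canal = 0.2 * log_mass + 0.5 * log_eye + rng.normal(0, 0.1, 50)

# Partial correlation: canal radius vs. eye size, both residualized on mass
r = np.corrcoef(residualize(log_canal, log_mass),
                residualize(log_eye, log_mass))[0, 1]
print(r > 0)  # eye size still explains canal radius after controlling for mass
```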

  15. Abnormal Fixational Eye Movements in Amblyopia.

    Science.gov (United States)

    Shaikh, Aasef G; Otero-Millan, Jorge; Kumar, Priyanka; Ghasia, Fatema F

    2016-01-01

    Fixational saccades shift the foveal image to counteract visual fading related to neural adaptation. Drifts are slow eye movements between two adjacent fixational saccades. We quantified fixational saccades and asked whether their changes could be attributed to the pathologic drifts seen in amblyopia, one of the most common causes of blindness in childhood. Thirty-six pediatric subjects with varying severity of amblyopia and eleven healthy age-matched controls held their gaze on a visual target. Eye movements were measured with high-resolution video-oculography during fellow eye-viewing and amblyopic eye-viewing conditions. Fixational saccades and drifts were analyzed in the amblyopic and fellow eye and compared with controls. We found an increase in the amplitude and a decrease in the frequency of fixational saccades in children with amblyopia. These alterations in fixational eye movements correlated with the severity of the amblyopia. There was also an increase in eye position variance during drifts in amblyopes. There was no correlation between the eye position variance or the eye velocity during ocular drifts and the amplitude of the subsequent fixational saccade. Our findings suggest that abnormalities in fixational saccades in amblyopia are independent of the ocular drift. This investigation of amblyopia in the pediatric age group quantitatively characterizes fixation instability. Impaired properties of fixational saccades could be the consequence of abnormal processing and reorganization of the visual system in amblyopia. A paucity of visual feedback during the amblyopic eye-viewing condition may account for the increased eye position variance and drift velocity.
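    The kind of quantification described can be sketched with a generic velocity-threshold detector for fixational saccades (a minimal illustration, not the authors' method; the 30 deg/s threshold, the function name, and the synthetic trace are assumptions):

```python
import numpy as np

def detect_fixational_saccades(x, y, fs, vel_thresh=30.0):
    """Detect fixational saccades as runs of samples whose gaze speed
    (deg/s) exceeds vel_thresh; return (count, mean amplitude in deg).
    x, y: gaze position in degrees; fs: sampling rate in Hz."""
    speed = np.hypot(np.gradient(x), np.gradient(y)) * fs
    fast = (speed > vel_thresh).astype(int)
    # group consecutive suprathreshold samples into events
    edges = np.flatnonzero(np.diff(np.r_[0, fast, 0]))
    onsets, offsets = edges[::2], edges[1::2]
    amps = [np.hypot(x[e - 1] - x[s], y[e - 1] - y[s])
            for s, e in zip(onsets, offsets)]
    return len(onsets), (float(np.mean(amps)) if amps else 0.0)

# Synthetic 1-s fixation at 500 Hz with one 0.5-deg rightward microsaccade
fs = 500
x = np.zeros(fs)
x[250:255] = np.linspace(0.0, 0.5, 5)
x[255:] = 0.5
count, amp = detect_fixational_saccades(x, np.zeros(fs), fs)
print(count, amp)  # 1 0.5
```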

  16. Examining Vision and Attention in Sports Performance Using a Gaze-Contingent Paradigm

    Directory of Open Access Journals (Sweden)

    Donghyun Ryu

    2012-10-01

    Full Text Available In time-constrained activities, such as competitive sports, the rapid acquisition and comprehension of visual information is vital for successful performance. Currently our understanding of how and what visual information is acquired, and how this changes with skill development, is quite rudimentary. Interpretation of eye movement behaviour is limited by uncertainties surrounding the relationship between attention, line-of-gaze data, and the mechanism of information pick-up from different sectors of the visual field. We used gaze-contingent display methodology to selectively present information to the central and peripheral parts of the visual field during a decision-making task. Eleven skilled and 11 less-skilled players watched videos of basketball scenarios under three different vision conditions (tunnel, masked, and full vision) and in a forced-choice paradigm indicated whether it was more appropriate for the ball carrier to pass or drive. In the tunnel and masked conditions, vision was selectively restricted to, or occluded from, a 5° window around the line of gaze, respectively. The skilled players showed significantly higher response accuracy and faster response times than their lesser skilled counterparts irrespective of the vision condition, demonstrating the skilled players' superiority in information extraction irrespective of the segment of the visual field they rely on. Findings suggest that the capability to interpret visual information, rather than the sector of the visual field in which the information is detected, is the key limiting factor to expert performance.
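    A gaze-contingent tunnel/mask manipulation of the sort described can be sketched as follows (illustrative only; the pixels-per-degree conversion and the function name are assumptions, and a real display would update this window at the eye-tracker's refresh rate):

```python
import numpy as np

def gaze_contingent(frame, gaze_xy, radius_px, mode="tunnel"):
    """Restrict vision to ("tunnel") or occlude it from ("masked") a
    circular window of radius_px pixels around the gaze point."""
    h, w = frame.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    inside = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= radius_px ** 2
    out = frame.copy()
    if mode == "tunnel":
        out[~inside] = 0  # blank everything outside the gaze window
    else:
        out[inside] = 0   # blank the gaze window itself
    return out

# Assuming ~40 px per degree of visual angle, a 5-degree window is ~200 px
frame = np.full((480, 640), 255, dtype=np.uint8)
tunnel = gaze_contingent(frame, (320, 240), 200, "tunnel")
masked = gaze_contingent(frame, (320, 240), 200, "masked")
print(tunnel[240, 320], masked[240, 320])  # 255 0
```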

  17. Age-related differences during a gaze reorientation task while standing or walking on a treadmill.

    Science.gov (United States)

    Cinelli, Michael; Patla, Aftab; Stuart, Bethany

    2008-02-01

    Falls among adults over the age of 65 years have become a growing concern. Two factors related to the high incidence of falls in this group of adults are decreased head stability and impaired balance. Older adults' level of control of head stability or balance when they must reorient their gaze is unknown. In the current study, ten older adults (69 +/- 3.27 years) performed a gaze reorienting task while standing or walking on a treadmill. The task was the same as that used with young adults by Cinelli et al. (2007). The results show that older adults use a different strategy than young adults when reorienting gaze. Shoulder and hip rotations occurred synchronously when standing and were more variable when walking on a treadmill. In addition, there was a larger difference between the onset of eye movements and body segment movements in the older adults. These differences can be accounted for by age-related declines in physiological subsystems. The presence of a visible target helped the older adults stabilize their heads in space by incorporating information from more than one sensory system.

  18. The independence of eye movements in a stomatopod crustacean is task dependent.

    Science.gov (United States)

    Daly, Ilse M; How, Martin J; Partridge, Julian C; Roberts, Nicholas W

    2017-04-01

    Stomatopods have an extraordinary visual system, incorporating independent movement of their eyes in all three degrees of rotational freedom. In this work, we demonstrate that in the peacock mantis shrimp, Odontodactylus scyllarus, the level of ocular independence is task dependent. During gaze stabilization in the context of optokinesis, there is weak but significant correlation between the left and right eyes in the yaw degree of rotational freedom, but not in pitch and torsion. When one eye is completely occluded, the uncovered eye does not drive the covered eye during gaze stabilization. However, occluding one eye does significantly affect the uncovered eye, lowering its gaze stabilization performance. There is a lateral asymmetry, with the magnitude of the effect depending on the eye (left or right) combined with the direction of motion of the visual field. In contrast, during a startle saccade, the uncovered eye does drive a covered eye. Such disparate levels of independence between the two eyes suggest that responses to individual visual tasks are likely to follow different neural pathways. © 2017. Published by The Company of Biologists Ltd.

  19. Joint perception: Gaze and social context

    Directory of Open Access Journals (Sweden)

    Daniel C. Richardson

    2012-07-01

    Full Text Available We found that the way people looked at images was influenced by their belief that others were looking too. If participants believed that an unseen other person was also looking at what they could see, it shifted the balance of their gaze between negative and positive images. The direction of this shift depended upon whether participants thought that later they would be compared against the other person or would be collaborating with them. Changes in the social context influenced both gaze and memory processes, and were not due just to participants’ belief that they are looking at the same images, but also to the belief that they are doing the same task. We believe that this new phenomenon of joint perception reveals the pervasive and subtle effect of social context upon cognitive and perceptual processes.

  20. Eye Tracking and Head Movement Detection: A State-of-Art Survey

    Science.gov (United States)

    2013-01-01

    Eye-gaze detection and tracking have been an active research field in recent years, as they add convenience to a variety of applications. Eye tracking is considered a significant nontraditional method of human-computer interaction. Head movement detection has also received researchers' attention and interest, as it has been found to be a simple and effective interaction method. Both technologies are considered among the easiest alternative interface methods. They serve a wide range of severely disabled people who are left with minimal motor abilities. For both eye tracking and head movement detection, several different approaches have been proposed and used to implement different algorithms for these technologies. Despite the amount of research done on both technologies, researchers are still trying to find robust methods to use effectively in various applications. This paper presents a state-of-the-art survey of eye tracking and head movement detection methods proposed in the literature. Examples of different fields of application for both technologies, such as human-computer interaction, driving assistance systems, and assistive technologies, are also investigated. PMID:27170851

  1. Paul Gauguin and the complexity of the primitivist gaze

    Directory of Open Access Journals (Sweden)

    Ruud Welten

    2015-06-01

    Full Text Available The article describes the complexity of Paul Gauguin’s primitivism in a philosophical, more precisely, phenomenological way. It focuses on the phenomenological gaze that, in all its complexity, is active in Gauguin. In his ‘Tahitian’ paintings, we encounter a double gaze: the individual gaze, as painted by Gauguin, makes the spectator aware of the distance between Western and Polynesian cultures. Firstly, colonialism is a gaze that reduces the other to a primitive state in order to esteem this state as more pure and authentic. Secondly, it is precisely the other that resists representation and, as such, gazes back. Gauguin’s anti-conquest, therefore, is not a mere personal, political or cultural revolt, but something that is phenomenological, hidden within the power of his art. Instead, it is the spectator who becomes the intruder, the voyeur, caught red-handed in his own primitivist gaze.

  2. Eye Injuries

    Science.gov (United States)

    The structure of your face helps protect your eyes from injury. Still, injuries can damage your eye, sometimes severely enough that you could lose your vision. Most eye injuries are preventable. If you play sports or ...

  3. Eye Wear

    Science.gov (United States)

    Eye wear protects or corrects your vision. Examples are Sunglasses Safety goggles Glasses (also called eyeglasses) Contact ... jobs and some sports carry a risk of eye injury. Thousands of children and adults get eye ...

  4. Eye Cancer

    Science.gov (United States)

    Cancer of the eye is uncommon. It can affect the outer parts of the eye, such as the eyelid, which are made up ... adults are melanoma and lymphoma. The most common eye cancer in children is retinoblastoma, which starts in ...

  5. Eye Diseases

    Science.gov (United States)

    ... the back of the eye Macular degeneration - a disease that destroys sharp, central vision Diabetic eye problems ... defense is to have regular checkups, because eye diseases do not always have symptoms. Early detection and ...

  6. 3D Gaze Estimation from Remote RGB-D Sensors

    OpenAIRE

    Funes Mora, Kenneth Alberto

    2015-01-01

    The development of systems able to retrieve and characterise the state of humans is important for many applications and fields of study. In particular, as a display of attention and interest, gaze is a fundamental cue in understanding people activities, behaviors, intentions, state of mind and personality. Moreover, gaze plays a major role in the communication process, like for showing attention to the speaker, indicating who is addressed or averting gaze to keep the floor. Therefore, many...

  7. [Case of unilateral thalamo-mesencephalic infarction with enlargement to bilateral vertical gaze palsy due to vertical one-and-a-half syndrome].

    Science.gov (United States)

    Suzuki, Keisuke; Odaka, Masaaki; Tatsumoto, Muneto; Miyamoto, Tomoyuki; Takamatsu, Kazuhiro; Hirata, Koichi

    2008-01-01

    An 88-year-old female with atrial fibrillation and hypertension was admitted to our hospital with sudden-onset diplopia and somnolence. She had right hemiparesis with bilateral positive Babinski's sign. Additionally, there was bilateral blepharoptosis with right esotropia. With regard to extraocular movement, the patient demonstrated conjugate upgaze palsy and left monocular downgaze palsy (vertical one-and-a-half syndrome: VOHS). Horizontal gaze in the left eye was completely impaired and there was limited abduction of the right eye. Magnetic resonance imaging of the brain showed left thalamo-mesencephalic infarction. On day 4, the vertical eye movement disorder developed into conjugate upgaze and downgaze palsies. Magnetic resonance imaging of the brain indicated a high signal lesion extending into the dorsal portion of the midbrain. It was suggested that the pathway to contralateral downgaze neurons could have been damaged by the unilateral (left) dorsal midbrain lesion before its decussation with the unilateral interstitial nucleus of Cajal, the oculomotor nucleus and the rostral interstitial nucleus of the medial longitudinal fasciculus. This case is considered important because there has been no previous report of bilateral vertical gaze palsy due to VOHS in the same patient. Since there are various patterns of ocular movement disorder in the thalamo-mesencephalic region, careful observation is required to localize the lesions.

  8. Gaze-following behind barriers in domestic dogs.

    Science.gov (United States)

    Met, Amandine; Miklósi, Ádám; Lakatos, Gabriella

    2014-11-01

    Although gaze-following abilities have been demonstrated in a wide range of species, so far no clear evidence has been available for dogs. In the current study, we examined whether dogs follow human gaze behind an opaque barrier in two different contexts, in a foraging situation and in a non-foraging situation (food involved vs. food not involved in the situation). We assumed that dogs will spontaneously follow the human gaze and that the foraging context will have a positive effect on dogs' gaze-following behaviour by causing an expectation in the dogs that food might be hidden somewhere in the room and might be communicated by the experimenter. This expectation presumably positively affects their motivational and attentional state. Here, we report that dogs show evidence of spontaneous gaze-following behind barriers in both situations. According to our findings, the dogs gazed earlier at the barrier in the indicated direction in both contexts. However, as we expected, the context also has some effect on dogs' gaze-following behaviour, as more dogs gazed behind the barrier in the indicated direction in the foraging situation. The present results also support the idea that gaze-following is a characteristic skill in mammals which may more easily emerge in certain functional contexts.

  9. Watching (through the Watchmen: Representation and Deconstruction of the Controlling Gaze in Neil Gaiman’s The Sandman

    Directory of Open Access Journals (Sweden)

    Daniele Croci

    2014-05-01

    Full Text Available Two of the most widely represented and recognisable cultural examples of the gaze that spies are the supernatural and/or divine eye, which in religious, mythical and tragic narratives seeks and punishes those who misbehave, and, on the other hand, the controlling, pervasive gaze of technological surveillance, which is typical of modern institutional apparatuses. This essay aims to analyse the ways in which a highly sophisticated, postmodern graphic novel, Neil Gaiman’s The Sandman (1989), elaborates precise representational strategies in order to blend, metaphorize and then deconstruct the aforementioned narratives of control; in the 2000-page fantasy bildungsroman, mythological, pagan and literary elements are combined to shape the Eumenides, a tripartite deity of vengeance which is represented as pure, abstract gaze, and whose inescapable duty is to punish those who act against nature; however, a complex articulation of the concepts of agency and responsibility manages to undermine the ideological foundation of such essentialism, in order to negotiate forms of resistance against vigilance which revolve around the very necessity of the gaze. Michel Foucault’s works on Panopticism, and especially the relationship between power and control, provide the theoretical background for these examinations.

  10. Enhanced orienting of attention in response to emotional gaze cues after oxytocin administration in healthy young men.

    Science.gov (United States)

    Tollenaar, Marieke S; Chatzimanoli, Michaela; van der Wee, Nic J A; Putman, Peter

    2013-09-01

    Oxytocin is known to enhance recognition of emotional expressions and may increase attention to the eye region. Therefore, we investigated whether oxytocin administration would lead to increased orienting of attention in response to gaze cues of emotional faces. In a randomized placebo-controlled double-blind crossover study 20 healthy males received 24 IU of oxytocin or placebo. Thirty-five minutes after administration they performed a gaze cueing task with happy, fearful and neutral faces. Stress levels were measured throughout the study. Oxytocin did not affect stress levels during the study, but significantly increased gaze cueing scores for happy and fearful expressions compared to placebo. No effects were found for neutral expressions. Trait anxiety or depression did not moderate the effect. Oxytocin increases orienting of attention in response to emotional gaze cues, both for happy and fearful expressions. Replication is needed in female and clinical populations. Effects of oxytocin on early, automatic processing levels should be studied in relation to previously found pro-social and behavioral effects of oxytocin. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Following gaze: gaze-following behavior as a window into social cognition

    Directory of Open Access Journals (Sweden)

    Stephen V Shepherd

    2010-03-01

    Full Text Available In general, individuals look where they attend and next intend to act. Many animals, including our own species, use observed gaze as a deictic (pointing) cue to guide behavior. Among humans, these responses are reflexive and pervasive: they arise within a fraction of a second, act independently of task relevance, and appear to undergird our initial development of language and theory of mind. Human and nonhuman animals appear to share basic gaze-following behaviors, suggesting the foundations of human social cognition may also be present in nonhuman brains.

  12. Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception

    Science.gov (United States)

    Hisanaga, Satoko; Sekiyama, Kaoru; Igasaki, Tomohiko; Murayama, Nobuki

    2016-01-01

    Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs’ response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs’ early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception. PMID:27734953

  13. Baby schema in human and animal faces induces cuteness perception and gaze allocation in children

    Directory of Open Access Journals (Sweden)

    Marta Borgi

    2014-05-01

    Full Text Available The baby schema concept was originally proposed as a set of infantile traits with high appeal for humans, subsequently shown to elicit caretaking behavior and to affect cuteness perception and attentional processes. However, it is unclear whether the response to the baby schema may be extended to the human-animal bond context. Moreover, questions remain as to whether the cute response is constant and persistent or whether it changes with development. In the present study we parametrically manipulated the baby schema in images of humans, dogs and cats. We analyzed responses of 3-6-year-old children, using both explicit (i.e., cuteness ratings) and implicit (i.e., eye gaze patterns) measures. By means of eye-tracking, we assessed children’s preferential attention to images varying only in the degree of baby schema and explored participants’ fixation patterns during a cuteness task. For comparative purposes, cuteness ratings were also obtained in a sample of adults. Overall our results show that the response to an infantile facial configuration emerges early during development. In children, the baby schema affects both cuteness perception and gaze allocation to infantile stimuli and to specific facial features, an effect not simply limited to human faces. In line with previous research, results confirm humans' positive appraisal of animals and inform both educational and therapeutic interventions involving pets, helping to minimize risk factors (e.g., dog bites).

  14. Control and Functions of Fixational Eye Movements

    Science.gov (United States)

    Rucci, Michele; Poletti, Martina

    2016-01-01

    Humans and other species explore a visual scene by rapidly shifting their gaze 2-3 times every second. Although the eyes may appear immobile in the brief intervals in between saccades, microscopic (fixational) eye movements are always present, even when attending to a single point. These movements occur during the very periods in which visual information is acquired and processed and their functions have long been debated. Recent technical advances in controlling retinal stimulation during normal oculomotor activity have shed new light on the visual contributions of fixational eye movements and their degree of control. The emerging body of evidence, reviewed in this article, indicates that fixational eye movements are important components of the strategy by which the visual system processes fine spatial details, enabling both precise positioning of the stimulus on the retina and encoding of spatial information into the joint space-time domain.

  15. Eye Tracking Technique for Product Information Provision

    Science.gov (United States)

    Kim, Seoksoo

    This paper describes the design of a product information provision system that uses eye tracking to help users decide whether to purchase a product, providing the information a user needs by tracking the user's eye gaze via smartphone. The system provides the user with information about the product that attracts the user's eye, by means of eye tracking, user identity confirmation using face recognition, and the user's product preferences. Once it is determined that the user requires a product, the server sends the product information stored in the product information database to the user's smartphone. Provided with product information in real time, the customer can efficiently purchase the product that he/she wants and, with accurate information in hand, avoid excessive consumption.

  16. Real-time recording and classification of eye movements in an immersive virtual environment.

    Science.gov (United States)

    Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary

    2013-10-10

    Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the QuickTime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data export, visual inspection, and validation of calculated gaze movements.
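
    The primer's own algorithms are available at the URL above; the core ideas behind two of them, angular distance between gaze directions and velocity-threshold classification of fixations versus saccades, can be sketched as follows. This is a generic illustration (the function names, the 100 deg/s threshold, and the omission of pursuit detection are assumptions), not the authors' implementation.

    ```python
    import math

    def angular_distance(v1, v2):
        """Angle in degrees between two 3-D gaze direction vectors."""
        dot = sum(a * b for a, b in zip(v1, v2))
        n1 = math.sqrt(sum(a * a for a in v1))
        n2 = math.sqrt(sum(b * b for b in v2))
        # Clamp to guard against floating-point drift outside [-1, 1].
        cos_theta = max(-1.0, min(1.0, dot / (n1 * n2)))
        return math.degrees(math.acos(cos_theta))

    def classify_ivt(gaze_dirs, timestamps, velocity_threshold_deg_s=100.0):
        """Velocity-threshold (I-VT) labelling: each inter-sample interval is
        a 'saccade' if angular velocity exceeds the threshold, else a
        'fixation'. Pursuit detection would need an extra velocity band."""
        labels = []
        for i in range(1, len(gaze_dirs)):
            dt = timestamps[i] - timestamps[i - 1]
            vel = angular_distance(gaze_dirs[i - 1], gaze_dirs[i]) / dt
            labels.append("saccade" if vel > velocity_threshold_deg_s
                          else "fixation")
        return labels
    ```

    The same `angular_distance` helper applies directly to the primer's object-distance computation: compare the gaze direction with the direction from the eye to the virtual object.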

  17. Referential Gaze and Word Learning in Adults with Autism

    Science.gov (United States)

    Aldaqre, Iyad; Paulus, Markus; Sodian, Beate

    2015-01-01

    While typically developing children can use referential gaze to guide their word learning, individuals with autism spectrum disorder are often described as having difficulty doing so. However, some researchers assume that the ability to follow gaze to select the correct referent develops later in autism than in typically developing individuals. To…

  18. Segmentation of object-based video of gaze communication

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Stegmann, Mikkel Bille; Forchhammer, Søren;

    2005-01-01

    Aspects of video communication based on gaze interaction are considered. The overall idea is to use gaze interaction to control video, e.g. for video conferencing. Towards this goal, animation of a facial mask is demonstrated. The animation is based on images using Active Appearance Models (AAM)....

  19. Reading beyond the glance: eye tracking in neurosciences.

    Science.gov (United States)

    Popa, Livia; Selejan, Ovidiu; Scott, Allan; Mureşanu, Dafin F; Balea, Maria; Rafila, Alexandru

    2015-05-01

    From an interdisciplinary perspective, the neurosciences (NSs) represent the junction of many fields (biology, chemistry, medicine, computer science, and psychology) and aim to explore the structural and functional aspects of the nervous system. Among modern neurophysiological methods that "measure" how different processes of the human brain respond to salient stimuli, a special place belongs to eye tracking (ET). By detecting eye position, gaze direction, sequences of eye movements, and visual adaptation during cognitive activities, ET is an effective tool for experimental psychology and neurological research. It provides quantitative and qualitative analyses of gaze, which are very useful in understanding choice behavior and perceptual decision making. In the high-tech era, ET has several applications related to the interaction between humans and computers. Herein, ET is used to evaluate the spatial orienting of attention, performance in visual tasks, reactions to information on websites, customer response to advertising, and the emotional and cognitive impact of various stimuli on the brain.

  20. Improvement of Reading Speed and Eye Movements

    Directory of Open Access Journals (Sweden)

    Kenji Yokoi

    2011-05-01

    Although many studies have examined eye movements in reading, little is known about which factors differentiate slow and fast readers. Recently, Rayner et al. (2010) reported, using the gaze-contingent window method, that fast readers had a larger effective visual field than slow readers. The fast readers they selected, however, may have acquired better attentional skills inherently or through long experience, and this visual superiority would improve reading performance. To clarify this issue, we investigated eye movements in reading while participants practiced speed reading. Participants (approx. 600 letters per minute (lpm) in Japanese) exercised speed-reading programs for half an hour per day for about 30 days. Reading performance on Japanese editorial articles was recorded every five days of training using the gaze-contingent window method. Our results showed that the size of the effective visual field did not increase in the same manner as reading speed (which reached up to 1000 lpm). Instead, we found that saccade length became longer and less variable. Fixation duration and the number of regressions were also reduced. These findings suggest that efficiency of comprehension at a single gaze may be the important factor for reading speed.
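
    The gaze-contingent window method used in this study can be illustrated with a minimal sketch: text outside a window centered on the current fixation is replaced by a mask, so only the window's contents are legible. The function name, the character-indexed window, and the mask character are illustrative assumptions; real experiments update the display in real time from eye-tracker samples.

    ```python
    def gaze_contingent_window(text, fixation_index, window_radius,
                               mask_char="x"):
        """Mask every character outside the window around the current
        fixation, preserving whitespace so word boundaries stay visible."""
        return "".join(
            ch if abs(i - fixation_index) <= window_radius or ch.isspace()
            else mask_char
            for i, ch in enumerate(text)
        )
    ```

    Varying `window_radius` across trials is what lets such studies estimate the size of a reader's effective visual field: performance stops improving once the window exceeds it.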