WorldWideScience

Sample records for human eye gaze

  1. EYE GAZE TRACKING

    DEFF Research Database (Denmark)

    2017-01-01

    This invention relates to a method of performing eye gaze tracking of at least one eye of a user, by determining the position of the center of the eye, said method comprising the steps of: detecting the position of at least three reflections on said eye, transforming said positions to...... a normalized coordinate system spanning a frame of reference, wherein said transformation is performed based on a bilinear transformation or a non-linear transformation, e.g. a Möbius transformation or a homographic transformation, detecting the position of said center of the eye relative to the position...... of said reflections and transforming this position to said normalized coordinate system, tracking the eye gaze by tracking the movement of said eye in said normalized coordinate system. Thereby calibration of a camera, such as knowledge of the exact position and zoom level of the camera, is avoided...
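
    The claim text above describes expressing the eye/pupil centre in a coordinate frame spanned by corneal reflections so that camera position and zoom drop out. The excerpt gives no concrete algorithm, so the following is only a minimal sketch of that idea under assumed choices (an affine frame built from exactly three glints, illustrative coordinates):

```python
# Hedged sketch: express the pupil centre in a coordinate frame spanned by
# three corneal reflections (glints), so the result is invariant to camera
# translation and zoom. The affine construction and all values here are
# illustrative assumptions, not the patented method.
import numpy as np

def normalize_pupil(glints, pupil):
    """Map the pupil centre into the frame spanned by three glints.

    glints : (3, 2) array of glint positions in image coordinates.
    pupil  : (2,) array, pupil-centre position in image coordinates.
    Returns the pupil centre in a frame where the glints map to
    (0, 0), (1, 0) and (0, 1).
    """
    g0, g1, g2 = np.asarray(glints, dtype=float)
    basis = np.column_stack((g1 - g0, g2 - g0))   # two spanning vectors
    return np.linalg.solve(basis, np.asarray(pupil, dtype=float) - g0)

# The normalized coordinates are unchanged if the whole image is shifted or
# uniformly scaled (camera pan / zoom), which is the property the claim targets.
glints = np.array([[120.0, 80.0], [180.0, 82.0], [150.0, 130.0]])
pupil = np.array([152.0, 100.0])
print(normalize_pupil(glints, pupil))
print(normalize_pupil(glints * 2 + 10, pupil * 2 + 10))   # identical output
```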

  2. The neurophysiology of human touch and eye gaze and its effects on therapeutic relationships and healing: a scoping review protocol.

    Science.gov (United States)

    Kerr, Fiona; Wiechula, Rick; Feo, Rebecca; Schultz, Tim; Kitson, Alison

    2016-04-01

    The objective of this scoping review is to examine and map the range of neurophysiological impacts of human touch and eye gaze, and better understand their possible links to the therapeutic relationship and the process of healing. The specific question is "what neurophysiological impacts of human touch and eye gaze have been reported in relation to therapeutic relationships and healing?"

  3. Eye Movements in Gaze Interaction

    DEFF Research Database (Denmark)

    Møllenbach, Emilie; Hansen, John Paulin; Lillholm, Martin

    2013-01-01

    Gaze as a sole input modality must support complex navigation and selection tasks. Gaze interaction combines specific eye movements and graphic display objects (GDOs). This paper suggests a unifying taxonomy of gaze interaction principles. The taxonomy deals with three types of eye movements...

  4. Eye Gaze in Creative Sign Language

    Science.gov (United States)

    Kaneko, Michiko; Mesch, Johanna

    2013-01-01

    This article discusses the role of eye gaze in creative sign language. Because eye gaze conveys various types of linguistic and poetic information, it is an intrinsic part of sign language linguistics in general and of creative signing in particular. We discuss various functions of eye gaze in poetic signing and propose a classification of gaze…

  5. Eye gaze in intelligent user interfaces: gaze-based analyses, models and applications

    CERN Document Server

    Nakano, Yukiko I; Bader, Thomas

    2013-01-01

    Remarkable progress in eye-tracking technologies opened the way to design novel attention-based intelligent user interfaces, and highlighted the importance of a better understanding of eye gaze in human-computer interaction and human-human communication. For instance, a user's focus of attention is useful in interpreting the user's intentions, their understanding of the conversation, and their attitude towards the conversation. In human face-to-face communication, eye gaze plays an important role in floor management, grounding, and engagement in conversation. Eye Gaze in Intelligent User Interfaces...

  6. Gaze-related mimic word activates the frontal eye field and related network in the human brain: an fMRI study.

    Science.gov (United States)

    Osaka, Naoyuki; Osaka, Mariko

    2009-09-18

    This fMRI study demonstrates new evidence that a mimic word highly suggestive of an eye gaze, heard by the ear, significantly activates the frontal eye field (FEF), inferior frontal gyrus (IFG), dorsolateral premotor area (PMdr) and superior parietal lobule (SPL), connected within the frontal-parietal network. However, hearing nonsense words that did not imply gaze under the same task did not activate these areas in humans. We concluded that the FEF would be a critical area for generating/processing an active gaze evoked by an onomatopoeic word implying gaze, closely associated with social skill. We suggest that the implied active gaze may depend on prefrontal-parietal interactions that modify cognitive gaze led by spatial visual attention associated with the SPL.

  7. The effect of gaze angle on the evaluations of SAR and temperature rise in human eye under plane-wave exposures from 0.9 to 10 GHz

    International Nuclear Information System (INIS)

    Diao, Yinliang; Leung, Sai-Wing; Sun, Weinong; Siu, Yun-Ming; Kong, Richard; Hung Chan, Kwok

    2016-01-01

    This article investigates the effect of gaze angle on the specific absorption rate (SAR) and temperature rise in the human eye under electromagnetic exposures from 0.9 to 10 GHz. Eye models with different gaze angles are developed based on biometric data. The spatial-average SARs in the eyes are investigated using the finite-difference time-domain method, and the corresponding maximum temperature rises in the lens are calculated by the finite-difference method. It is found that changes in the gaze angle produce a maximum variation of 35, 12 and 20 % in the eye-averaged SAR, peak 10 g average SAR and temperature rise, respectively. Results also reveal that the eye-averaged SAR is more sensitive to changes in the gaze angle than the peak 10 g average SAR, especially at higher frequencies. (authors)

  8. Training for eye contact modulates gaze following in dogs.

    Science.gov (United States)

    Wallis, Lisa J; Range, Friederike; Müller, Corsin A; Serisier, Samuel; Huber, Ludwig; Virányi, Zsófia

    2015-08-01

    Following human gaze in dogs and human infants can be considered a socially facilitated orientation response, which in object choice tasks is modulated by human-given ostensive cues. Despite their similarities to human infants, and extensive skills in reading human cues in foraging contexts, no evidence that dogs follow gaze into distant space has been found. We re-examined this question, and additionally whether dogs' propensity to follow gaze was affected by age and/or training to pay attention to humans. We tested a cross-sectional sample of 145 border collies aged 6 months to 14 years with different amounts of training over their lives. The dogs' gaze-following response in test and control conditions before and after training for initiating eye contact with the experimenter was compared with that of a second group of 13 border collies trained to touch a ball with their paw. Our results provide the first evidence that dogs can follow human gaze into distant space. Although we found no age effect on gaze following, the youngest and oldest age groups were more distractible, which resulted in a higher number of looks in the test and control conditions. Extensive lifelong formal training as well as short-term training for eye contact decreased dogs' tendency to follow gaze and increased their duration of gaze to the face. The reduction in gaze following after training for eye contact cannot be explained by fatigue or short-term habituation, as in the second group gaze following increased after a different training of the same length. Training for eye contact created a competing tendency to fixate the face, which prevented the dogs from following the directional cues. We conclude that following human gaze into distant space in dogs is modulated by training, which may explain why dogs perform poorly in comparison to other species in this task.

  9. CULTURAL DISPLAY RULES DRIVE EYE GAZE DURING THINKING.

    Science.gov (United States)

    McCarthy, Anjanie; Lee, Kang; Itakura, Shoji; Muir, Darwin W

    2006-11-01

    The authors measured the eye gaze displays of Canadian, Trinidadian, and Japanese participants as they answered questions for which they either knew, or had to derive, the answers. When they knew the answers, Trinidadians maintained the most eye contact, whereas Japanese maintained the least. When thinking about the answers to questions, Canadians and Trinidadians looked up, whereas Japanese looked down. Thus, for humans, gaze displays while thinking are at least in part culturally determined.

  10. Reading the mind from eye gaze.

    NARCIS (Netherlands)

    Christoffels, I.; Young, A.W.; Owen, A.M.; Scott, S.K.; Keane, J.; Lawrence, A.D.

    2002-01-01

    S. Baron-Cohen (1997) has suggested that the interpretation of gaze plays an important role in a normal functioning theory of mind (ToM) system. Consistent with this suggestion, functional imaging research has shown that both ToM tasks and eye gaze processing engage a similar region of the posterior

  11. Intermediate view synthesis for eye-gazing

    Science.gov (United States)

    Baek, Eu-Ttuem; Ho, Yo-Sung

    2015-01-01

    Nonverbal communication, also known as body language, is an important form of communication. Nonverbal behaviors such as posture, eye contact, and gestures send strong messages, and eye contact is one of the most important forms an individual can use. However, eye contact is often lost when we use a video conferencing system: the disparity between the locations of the eyes and the camera gets in the way of eye contact, and the lack of eye gaze can give an unapproachable and unpleasant impression. In this paper, we propose an eye-gaze correction method for video conferencing. We use two cameras installed at the top and the bottom of the television. The two captured images are rendered with 2D warping at a virtual position. We apply view morphing to the detected face, and synthesize the face and the warped image. Experimental results verify that the proposed system is effective in generating natural gaze-corrected images.

  12. Eye gazing direction inspection based on image processing technique

    Science.gov (United States)

    Hao, Qun; Song, Yong

    2005-02-01

    According to research results in neural biology, human eyes obtain high resolution only at the center of the field of view. In our research on a Virtual Reality helmet, we aim to detect the gazing direction of human eyes in real time and feed it back to the control system to improve the resolution of the image at the center of the field of view. Given current display instruments, this method can balance the field of view of the virtual scene against resolution, and greatly improve the immersion of the virtual system. Therefore, detecting the gazing direction of human eyes rapidly and accurately is the basis for realizing the design scheme of this novel VR helmet. In this paper, the conventional method of gazing-direction detection based on the Purkinje spot is introduced first. To overcome the disadvantages of the Purkinje-spot method, we propose a method based on image processing to detect and determine the gazing direction. The locations of the pupils and the shapes of the eye sockets change with the gazing direction; by analyzing these changes in eye images captured by the cameras, the gazing direction can be determined. Experiments have been carried out to validate the efficiency of this method by analyzing the images. The algorithm detects the gazing direction directly from normal eye images, eliminating the need for special hardware. Experimental results show that the method is easy to implement and has high precision.
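
    As a rough illustration of the pupil-position idea described above (not the authors' algorithm; the threshold, the eye-corner inputs and the left/centre/right rule are assumptions), a minimal OpenCV sketch:

```python
# Hedged sketch: locate the pupil as the darkest blob in a cropped grayscale
# eye image and read a coarse gaze direction from its position relative to
# the eye corners. Threshold and decision boundaries are illustrative.
import cv2

def gaze_direction(gray_eye, left_corner_x, right_corner_x, threshold=45):
    """Classify gaze as 'left', 'centre' or 'right' within one eye image."""
    _, mask = cv2.threshold(gray_eye, threshold, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:                      # no dark blob found
        return None
    pupil_x = m["m10"] / m["m00"]          # x of the pupil centroid
    # Pupil position as a fraction of the corner-to-corner span.
    ratio = (pupil_x - left_corner_x) / (right_corner_x - left_corner_x)
    if ratio < 0.40:
        return "left"
    if ratio > 0.60:
        return "right"
    return "centre"
```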

  13. Eye gaze tracking based on the shape of pupil image

    Science.gov (United States)

    Wang, Rui; Qiu, Jian; Luo, Kaiqing; Peng, Li; Han, Peng

    2018-01-01

    An eye tracker is an important instrument for research in psychology, widely used in attention, visual perception, reading and other fields. Because of its potential in human-computer interaction, eye gaze tracking has been a topic of research in many fields over the last decades. Nowadays, with the development of technology, non-intrusive methods are increasingly welcomed. In this paper, we present a method based on the shape of the pupil image to estimate the gaze point of human eyes without any intrusive devices such as a head-mounted rig or special glasses. After applying an ellipse-fitting algorithm to the captured pupil image, we can determine the direction of fixation from the shape of the pupil. The innovative aspect of this method is to exploit the shape of the pupil, which avoids much more complicated algorithms. The proposed approach is helpful for the study of eye gaze tracking: it needs only one camera, without infrared light, and determines the direction of eye gaze from changes in the shape of the pupil; no additional hardware is required.
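
    A minimal sketch of the ellipse-fitting step described above (the segmentation threshold, the OpenCV >= 4 return signature, and the single-angle projection model are assumptions, not the authors' exact procedure):

```python
# Hedged sketch: a circular pupil viewed off-axis projects to an ellipse, so
# the minor/major axis ratio indicates how far the eye is rotated away from
# the camera axis (deviation ~ arccos(b / a)), and the ellipse angle gives
# the image-plane orientation of that rotation.
import cv2
import numpy as np

def gaze_from_pupil_shape(gray_eye, threshold=40):
    """Estimate gaze deviation (degrees) from the fitted pupil ellipse."""
    _, mask = cv2.threshold(gray_eye, threshold, 255, cv2.THRESH_BINARY_INV)
    # OpenCV >= 4: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)      # largest dark blob
    if len(pupil) < 5:                              # fitEllipse needs >= 5 points
        return None
    (cx, cy), (axis1, axis2), angle = cv2.fitEllipse(pupil)
    a, b = max(axis1, axis2), min(axis1, axis2)
    deviation = float(np.degrees(np.arccos(b / a)))  # 0 deg = looking at the camera
    return {"centre": (cx, cy), "deviation_deg": deviation, "orientation_deg": angle}
```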

  14. Estimating the gaze of a virtuality human.

    Science.gov (United States)

    Roberts, David J; Rae, John; Duckworth, Tobias W; Moore, Carl M; Aspin, Rob

    2013-04-01

    The aim of our experiment is to determine if eye-gaze can be estimated from a virtuality human: to within the accuracies that underpin social interaction; and reliably across gaze poses and camera arrangements likely in everyday settings. The scene is set by explaining why Immersive Virtuality Telepresence has the potential to meet the grand challenge of faithfully communicating both the appearance and the focus of attention of a remote human participant within a shared 3D computer-supported context. Within the experiment n=22 participants rotated static 3D virtuality humans, reconstructed from surround images, until they felt most looked at. The dependent variable was absolute angular error, which was compared to that underpinning social gaze behaviour in the natural world. Independent variables were 1) relative orientations of eye, head and body of the captured subject; and 2) the subset of cameras used to texture the form. Analysis looked for statistical and practical significance and qualitative corroborating evidence. The analysed results tell us much about the importance and detail of the relationship between gaze pose, method of video-based reconstruction, and camera arrangement. They tell us that virtuality can reproduce gaze to an accuracy useful in social interaction, but with the adopted method of Video Based Reconstruction this is highly dependent on the combination of gaze pose and camera arrangement. This suggests changes in the VBR approach in order to allow more flexible camera arrangements. The work is of interest to those wanting to support expressive meetings that are both socially and spatially situated, and particularly those using or building Immersive Virtuality Telepresence to accomplish this. It is also of relevance to the use of virtuality humans in applications ranging from the study of human interactions to gaming and the crossing of the stage line in films and TV.

  15. In the presence of conflicting gaze cues, fearful expression and eye-size guide attention.

    Science.gov (United States)

    Carlson, Joshua M; Aday, Jacob

    2017-10-19

    Humans are social beings that often interact in multi-individual environments. As such, we are frequently confronted with nonverbal social signals, including eye-gaze direction, from multiple individuals. Yet, the factors that allow for the prioritisation of certain gaze cues over others are poorly understood. Using a modified conflicting gaze paradigm, we tested the hypothesis that fearful gaze would be favoured amongst competing gaze cues. We further hypothesised that this effect is related to the increased sclera exposure, which is characteristic of fearful expressions. Across three experiments, we found that fearful, but not happy, gaze guides observers' attention over competing non-emotional gaze. The guidance of attention by fearful gaze appears to be linked to increased sclera exposure. However, differences in sclera exposure do not prioritise competing gazes of other types. Thus, fearful gaze guides attention among competing cues and this effect is facilitated by increased sclera exposure - but increased sclera exposure per se does not guide attention. The prioritisation of fearful gaze over non-emotional gaze likely represents an adaptive means of selectively attending to survival-relevant spatial locations.

  16. Follow My Eyes: The Gaze of Politicians Reflexively Captures the Gaze of Ingroup Voters

    Science.gov (United States)

    Liuzza, Marco Tullio; Cazzato, Valentina; Vecchione, Michele; Crostella, Filippo; Caprara, Gian Vittorio; Aglioti, Salvatore Maria

    2011-01-01

    Studies in human and non-human primates indicate that basic socio-cognitive operations are inherently linked to the power of gaze in capturing reflexively the attention of an observer. Although monkey studies indicate that the automatic tendency to follow the gaze of a conspecific is modulated by the leader-follower social status, evidence for such effects in humans is meager. Here, we used a gaze following paradigm where the directional gaze of right- or left-wing Italian political characters could influence the oculomotor behavior of ingroup or outgroup voters. We show that the gaze of Berlusconi, the right-wing leader currently dominating the Italian political landscape, potentiates and inhibits gaze following behavior in ingroup and outgroup voters, respectively. Importantly, the higher the perceived similarity in personality traits between voters and Berlusconi, the stronger the gaze interference effect. Thus, higher-order social variables such as political leadership and affiliation prepotently affect reflexive shifts of attention. PMID:21957479

  17. Eye Pull, Eye Push: Moving Objects between Large Screens and Personal Devices with Gaze and Touch

    OpenAIRE

    Turner , Jayson; Alexander , Jason; Bulling , Andreas; Schmidt , Dominik; Gellersen , Hans

    2013-01-01

    Previous work has validated the eyes and mobile input as a viable approach for pointing at, and selecting out of reach objects. This work presents Eye Pull, Eye Push, a novel interaction concept for content transfer between public and personal devices using gaze and touch. We present three techniques that enable this interaction: Eye Cut & Paste, Eye Drag & Drop, and Eye Summon & Cast. We outline and discuss several scenarios in...

  18. Anxiety symptoms and children's eye gaze during fear learning.

    Science.gov (United States)

    Michalska, Kalina J; Machlin, Laura; Moroney, Elizabeth; Lowet, Daniel S; Hettema, John M; Roberson-Nay, Roxann; Averbeck, Bruno B; Brotman, Melissa A; Nelson, Eric E; Leibenluft, Ellen; Pine, Daniel S

    2017-11-01

    The eye region of the face is particularly relevant for decoding threat-related signals, such as fear. However, it is unclear if gaze patterns to the eyes can be influenced by fear learning. Previous studies examining gaze patterns in adults find an association between anxiety and eye gaze avoidance, although no studies to date examine how associations between anxiety symptoms and eye-viewing patterns manifest in children. The current study examined the effects of learning and trait anxiety on eye gaze using a face-based fear conditioning task developed for use in children. Participants were 82 youth from a general population sample of twins (aged 9-13 years), exhibiting a range of anxiety symptoms. Participants underwent a fear conditioning paradigm where the conditioned stimuli (CS+) were two neutral faces, one of which was randomly selected to be paired with an aversive scream. Eye tracking, physiological, and subjective data were acquired. Children and parents reported their child's anxiety using the Screen for Child Anxiety Related Emotional Disorders. Conditioning influenced eye gaze patterns in that children looked longer and more frequently to the eye region of the CS+ than CS- face; this effect was present only during fear acquisition, not at baseline or extinction. Furthermore, consistent with past work in adults, anxiety symptoms were associated with eye gaze avoidance. Finally, gaze duration to the eye region mediated the effect of anxious traits on self-reported fear during acquisition. Anxiety symptoms in children relate to face-viewing strategies deployed in the context of a fear learning experiment. This relationship may inform attempts to understand the relationship between pediatric anxiety symptoms and learning. © 2017 Association for Child and Adolescent Mental Health.

  19. The Eyes Are the Windows to the Mind: Direct Eye Gaze Triggers the Ascription of Others' Minds.

    Science.gov (United States)

    Khalid, Saara; Deska, Jason C; Hugenberg, Kurt

    2016-12-01

    Eye gaze is a potent source of social information with direct eye gaze signaling the desire to approach and averted eye gaze signaling avoidance. In the current work, we proposed that eye gaze signals whether or not to impute minds into others. Across four studies, we manipulated targets' eye gaze (i.e., direct vs. averted eye gaze) and measured explicit mind ascriptions (Study 1a, Study 1b, and Study 2) and beliefs about the likelihood of targets having mind (Study 3). In all four studies, we find novel evidence that the ascription of sophisticated humanlike minds to others is signaled by the display of direct eye gaze relative to averted eye gaze. Moreover, we provide evidence suggesting that this differential mentalization is due, at least in part, to beliefs that direct gaze targets are more likely to instigate social interaction. In short, eye contact triggers mind perception. © 2016 by the Society for Personality and Social Psychology, Inc.

  20. Face Age and Eye Gaze Influence Older Adults' Emotion Recognition.

    Science.gov (United States)

    Campbell, Anna; Murray, Janice E; Atkinson, Lianne; Ruffman, Ted

    2017-07-01

    Eye gaze has been shown to influence emotion recognition. In addition, older adults (over 65 years) are not as influenced by gaze direction cues as young adults (18-30 years). Nevertheless, these differences might stem from the use of young to middle-aged faces in emotion recognition research because older adults have an attention bias toward old-age faces. Therefore, using older face stimuli might allow older adults to process gaze direction cues to influence emotion recognition. To investigate this idea, young and older adults completed an emotion recognition task with young and older face stimuli displaying direct and averted gaze, assessing labeling accuracy for angry, disgusted, fearful, happy, and sad faces. Direct gaze rather than averted gaze improved young adults' recognition of emotions in young and older faces, but for older adults this was true only for older faces. The current study highlights the impact of stimulus face age and gaze direction on emotion recognition in young and older adults. The use of young face stimuli with direct gaze in most research might contribute to age-related emotion recognition differences. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  1. Eye and head movements shape gaze shifts in Indian peafowl.

    Science.gov (United States)

    Yorzinski, Jessica L; Patricelli, Gail L; Platt, Michael L; Land, Michael F

    2015-12-01

    Animals selectively direct their visual attention toward relevant aspects of their environments. They can shift their attention using a combination of eye, head and body movements. While we have a growing understanding of eye and head movements in mammals, we know little about these processes in birds. We therefore measured the eye and head movements of freely behaving Indian peafowl (Pavo cristatus) using a telemetric eye-tracker. Both eye and head movements contributed to gaze changes in peafowl. When gaze shifts were smaller, eye movements played a larger role than when gaze shifts were larger. The duration and velocity of eye and head movements were positively related to the size of the eye and head movements, respectively. In addition, the coordination of eye and head movements in peafowl differed from that in mammals; peafowl exhibited a near-absence of the vestibulo-ocular reflex, which may partly result from the peafowl's ability to move their heads as quickly as their eyes. © 2015. Published by The Company of Biologists Ltd.

  2. Visual Foraging With Fingers and Eye Gaze

    Directory of Open Access Journals (Sweden)

    Ómar I. Jóhannesson

    2016-03-01

    A popular model of the function of selective visual attention involves search where a single target is to be found among distractors. For many scenarios, a more realistic model involves search for multiple targets of various types, since natural tasks typically do not involve a single target. Here we present results from a novel multiple-target foraging paradigm. We compare finger foraging, where observers cancel a set of predesignated targets by tapping them, to gaze foraging, where observers cancel items by fixating them for 100 ms. During finger foraging, for most observers, there was a large difference between foraging based on a single feature, where observers switch easily between target types, and foraging based on a conjunction of features, where observers tended to stick to one target type. The pattern was notably different during gaze foraging, where these condition differences were smaller. Two conclusions follow: (a) The fact that a sizeable number of observers (in particular during gaze foraging) had little trouble switching between different target types raises challenges for many prominent theoretical accounts of visual attention and working memory. (b) While caveats must be noted for the comparison of gaze and finger foraging, the results suggest that selection mechanisms for gaze and pointing have different operational constraints.

  3. Biasing moral decisions by exploiting the dynamics of eye gaze.

    Science.gov (United States)

    Pärnamets, Philip; Johansson, Petter; Hall, Lars; Balkenius, Christian; Spivey, Michael J; Richardson, Daniel C

    2015-03-31

    Eye gaze is a window onto cognitive processing in tasks such as spatial memory, linguistic processing, and decision making. We present evidence that information derived from eye gaze can be used to change the course of individuals' decisions, even when they are reasoning about high-level, moral issues. Previous studies have shown that when an experimenter actively controls what an individual sees the experimenter can affect simple decisions with alternatives of almost equal valence. Here we show that if an experimenter passively knows when individuals move their eyes the experimenter can change complex moral decisions. This causal effect is achieved by simply adjusting the timing of the decisions. We monitored participants' eye movements during a two-alternative forced-choice task with moral questions. One option was randomly predetermined as a target. At the moment participants had fixated the target option for a set amount of time we terminated their deliberation and prompted them to choose between the two alternatives. Although participants were unaware of this gaze-contingent manipulation, their choices were systematically biased toward the target option. We conclude that even abstract moral cognition is partly constituted by interactions with the immediate environment and is likely supported by gaze-dependent decision processes. By tracking the interplay between individuals, their sensorimotor systems, and the environment, we can influence the outcome of a decision without directly manipulating the content of the information available to them.

  4. The duality of gaze: Eyes extract and signal social information during sustained cooperative and competitive dyadic gaze

    Directory of Open Access Journals (Sweden)

    Michelle eJarick

    2015-09-01

    In contrast to nonhuman primate eyes, which have a dark sclera surrounding a dark iris, human eyes have a white sclera that surrounds a dark iris. This high contrast morphology allows humans to determine quickly and easily where others are looking and infer what they are attending to. In recent years an enormous body of work has used photos and schematic images of faces to study these aspects of social attention, e.g., the selection of the eyes of others and the shift of attention to where those eyes are directed. However, evolutionary theory holds that humans did not develop a high contrast morphology simply to use the eyes of others as attentional cues; rather they sacrificed camouflage for communication, that is, to signal their thoughts and intentions to others. In the present study we demonstrate the importance of this by taking as our starting point the hypothesis that a cornerstone of nonverbal communication is the eye contact between individuals and the time that it is held. In a single simple study we show experimentally that the effect of eye contact can be quickly and profoundly altered merely by having participants, who had never met before, play a game in a cooperative or competitive manner. After the game participants were asked to make eye contact for a prolonged period of time (10 minutes). Those who had played the game cooperatively found this terribly difficult to do, repeatedly talking and breaking gaze. In contrast, those who had played the game competitively were able to stare quietly at each other for a sustained period. Collectively these data demonstrate that when looking at the eyes of a real person one both acquires and signals information to the other person. This duality of gaze is critical to nonverbal communication, with the nature of that communication shaped by the relationship between individuals, e.g., cooperative or competitive.

  5. A neural-based remote eye gaze tracker under natural head motion.

    Science.gov (United States)

    Torricelli, Diego; Conforto, Silvia; Schmid, Maurizio; D'Alessio, Tommaso

    2008-10-01

    A novel approach to view-based eye gaze tracking for human computer interface (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen the robustness to light conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed; rather, a simple commercial webcam working in the visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees, comparable with the vast majority of existing remote gaze trackers.
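
    The abstract does not give the network details, so the following is only a hedged sketch of the general idea (a small multilayer perceptron classifying feature vectors into 15 screen zones); the features, the network size and the synthetic data are assumptions:

```python
# Hedged sketch: a small neural mapping from webcam-derived eye/head features
# to one of 15 screen zones, in the spirit of the view-based tracker above.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in features: e.g. pupil offset within the eye region (x, y) plus two
# head-pose terms; a real system would extract these from the webcam image.
n_samples, n_features = 3000, 4
X = rng.uniform(-1, 1, size=(n_samples, n_features))
# Synthetic labelling rule, only to make the example runnable end to end:
# the gaze zone is determined by the two "pupil offset" features (5 x 3 grid).
zone_x = np.digitize(X[:, 0], np.linspace(-1, 1, 6)[1:-1])   # 5 columns
zone_y = np.digitize(X[:, 1], np.linspace(-1, 1, 4)[1:-1])   # 3 rows
y = zone_y * 5 + zone_x                                      # 15 zones

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
model.fit(X[:2500], y[:2500])
print("held-out zone accuracy:", model.score(X[2500:], y[2500:]))
```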

  6. Does the 'P300' speller depend on eye gaze?

    Science.gov (United States)

    Brunner, P.; Joshi, S.; Briskin, S.; Wolpaw, J. R.; Bischof, H.; Schalk, G.

    2010-10-01

    Many people affected by debilitating neuromuscular disorders such as amyotrophic lateral sclerosis, brainstem stroke or spinal cord injury are impaired in their ability to, or are even unable to, communicate. A brain-computer interface (BCI) uses brain signals, rather than muscles, to re-establish communication with the outside world. One particular BCI approach is the so-called 'P300 matrix speller' that was first described by Farwell and Donchin (1988 Electroencephalogr. Clin. Neurophysiol. 70 510-23). It has been widely assumed that this method does not depend on the ability to focus on the desired character, because it was thought that it relies primarily on the P300-evoked potential and minimally, if at all, on other EEG features such as the visual-evoked potential (VEP). This issue is highly relevant for the clinical application of this BCI method, because eye movements may be impaired or lost in the relevant user population. This study investigated the extent to which the performance in a 'P300' speller BCI depends on eye gaze. We evaluated the performance of 17 healthy subjects using a 'P300' matrix speller under two conditions. Under one condition ('letter'), the subjects focused their eye gaze on the intended letter, while under the second condition ('center'), the subjects focused their eye gaze on a fixation cross that was located in the center of the matrix. The results show that the performance of the 'P300' matrix speller in normal subjects depends in considerable measure on gaze direction. They thereby disprove a widespread assumption in BCI research, and suggest that this BCI might function more effectively for people who retain some eye-movement control. The applicability of these findings to people with severe neuromuscular disabilities (particularly in eye-movements) remains to be determined.

  7. Assessing the Usability of Gaze-Adapted Interface against Conventional Eye-based Input Emulation

    OpenAIRE

    Kumar, Chandan; Menges, Raphael; Staab, Steffen

    2017-01-01

    In recent years, eye tracking systems have greatly improved, beginning to play a promising role as an input medium. Eye trackers can be used for application control either by simply emulating the mouse and keyboard devices in the traditional graphical user interface, or by customized interfaces for eye gaze events. In this work, we evaluate these two approaches to assess their impact on usability. We present a gaze-adapted Twitter application interface with direct interaction of eye gaze input...

  8. Eye gaze performance for children with severe physical impairments using gaze-based assistive technology-A longitudinal study.

    Science.gov (United States)

    Borgestig, Maria; Sandqvist, Jan; Parsons, Richard; Falkmer, Torbjörn; Hemmingsson, Helena

    2016-01-01

    Gaze-based assistive technology (gaze-based AT) has the potential to provide children affected by severe physical impairments with opportunities for communication and activities. This study aimed to examine changes in eye gaze performance over time (time on task and accuracy) in children with severe physical impairments, without speaking ability, using gaze-based AT. A longitudinal study with a before and after design was conducted on 10 children (aged 1-15 years) with severe physical impairments, who were beginners to gaze-based AT at baseline. Thereafter, all children used the gaze-based AT in daily activities over the course of the study. Compass computer software was used to measure time on task and accuracy with eye selection of targets on screen, and tests were performed with the children at baseline, after 5 months, 9-11 months, and after 15-20 months. Findings showed that the children improved in time on task after 5 months and became more accurate in selecting targets after 15-20 months. This study indicates that these children with severe physical impairments, who were unable to speak, could improve in eye gaze performance. However, the children needed time to practice on a long-term basis to acquire skills needed to develop fast and accurate eye gaze performance.

  9. Fusing Eye-gaze and Speech Recognition for Tracking in an Automatic Reading Tutor

    DEFF Research Database (Denmark)

    Rasmussen, Morten Højfeldt; Tan, Zheng-Hua

    2013-01-01

    In this paper we present a novel approach for automatically tracking the reading progress using a combination of eye-gaze tracking and speech recognition. The two are fused by first generating word probabilities based on eye-gaze information and then using these probabilities to augment the language model...
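
    A hedged sketch of the fusion idea only (the Gaussian gaze prior, the log-linear combination and all values are assumptions, not the authors' method):

```python
# Hedged sketch: words near the reader's fixation get a higher prior, and
# that prior re-weights the recognizer's word scores.
import numpy as np

def fuse(asr_probs, word_positions, gaze_x, sigma=1.5, weight=1.0):
    """Combine ASR word probabilities with a gaze-based positional prior.

    asr_probs      : dict word -> probability from the speech recognizer.
    word_positions : dict word -> horizontal position of the word on the line.
    gaze_x         : current horizontal fixation position (same units).
    """
    words = list(asr_probs)
    asr = np.array([asr_probs[w] for w in words])
    dist = np.array([word_positions[w] - gaze_x for w in words])
    gaze_prior = np.exp(-0.5 * (dist / sigma) ** 2)   # Gaussian around fixation
    fused = asr * gaze_prior ** weight                # log-linear combination
    fused /= fused.sum()
    return dict(zip(words, fused))

# Example: the recognizer is unsure between two similar-sounding words, but
# the reader is fixating near "their", so the fused estimate prefers it.
print(fuse({"there": 0.5, "their": 0.5, "the": 0.0001},
           {"there": 0.0, "their": 3.0, "the": 6.0}, gaze_x=2.5))
```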

  10. "Gaze Leading": Initiating Simulated Joint Attention Influences Eye Movements and Choice Behavior

    Science.gov (United States)

    Bayliss, Andrew P.; Murphy, Emily; Naughtin, Claire K.; Kritikos, Ada; Schilbach, Leonhard; Becker, Stefanie I.

    2013-01-01

    Recent research in adults has made great use of the gaze cuing paradigm to understand the behavior of the follower in joint attention episodes. We implemented a gaze leading task to investigate the initiator--the other person in these triadic interactions. In a series of gaze-contingent eye-tracking studies, we show that fixation dwell time upon…

  11. Intranasal Oxytocin Treatment Increases Eye-Gaze Behavior toward the Owner in Ancient Japanese Dog Breeds

    Directory of Open Access Journals (Sweden)

    Miho Nagasawa

    2017-09-01

    Dogs acquired unique cognitive abilities during domestication, which is thought to have contributed to the formation of the human-dog bond. In European breeds, but not in wolves, a dog's gazing behavior plays an important role in affiliative interactions with humans and stimulates oxytocin secretion in both humans and dogs, which suggests that this interspecies oxytocin- and gaze-mediated bonding was also acquired during domestication. In this study, we investigated whether Japanese breeds, which are classified as ancient breeds and are relatively close to wolves genetically, establish a bond with their owners through gazing behavior. The subject dogs were treated with either oxytocin or saline before the start of the behavioral testing. We also evaluated physiological changes in the owners during mutual gazing by analyzing their heart rate variability (HRV) and subsequent urinary oxytocin levels in both dogs and their owners. We found that oxytocin treatment enhanced the gazing behavior of Japanese dogs and increased their owners' urinary oxytocin levels, as was seen with European breeds; however, the measured durations of skin contact and proximity to their owners were relatively low. In the owners' HRV readings, inter-beat (R-R) intervals (RRI), the standard deviation of normal-to-normal inter-beat (R-R) intervals (SDNN), and the root mean square of successive heartbeat interval differences (RMSSD) were lower when the dogs were treated with oxytocin compared with saline. Furthermore, the owners of female dogs showed lower SDNN than the owners of male dogs. These results suggest that the owners of female Japanese dogs exhibit more tension during interactions, and apart from gazing behavior, the dogs may show sex differences in their interactions with humans as well. They also suggest that Japanese dogs use eye-gazing as an attachment behavior toward humans similar to European breeds; however, there is a disparity between the dog sexes when...

  12. Mutual Disambiguation of Eye Gaze and Speech for Sight Translation and Reading

    DEFF Research Database (Denmark)

    Kulkarni, Rucha; Jain, Kritika; Bansal, Himanshu

    2013-01-01

    ...and composition of the two modalities was used for integration. F-measure for Eye-Gaze and Word Accuracy for ASR were used as metrics to evaluate our results. In the reading task, we demonstrated a significant improvement in both Eye-Gaze F-measure and speech Word Accuracy. In the sight translation task, significant...

  13. Human-like object tracking and gaze estimation with PKD android.

    Science.gov (United States)

    Wijayasinghe, Indika B; Miller, Haylie L; Das, Sumit K; Bugnariu, Nicoleta L; Popa, Dan O

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.
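
    The conversion from eye-in-head data plus head orientation to a gaze direction can be illustrated with a simple rotation chain; the angle conventions below (yaw about Z, pitch about Y, in degrees) are assumptions for illustration, not the authors' implementation:

```python
# Hedged sketch: a gaze direction in world coordinates is the head orientation
# (from motion capture) applied to the eye-in-head direction (from the eye
# tracker).
import numpy as np

def rotation(yaw_deg, pitch_deg):
    """Rotation matrix for a yaw (left/right) then a pitch (up/down), degrees."""
    yaw, pitch = np.radians([yaw_deg, pitch_deg])
    rz = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                   [np.sin(yaw),  np.cos(yaw), 0],
                   [0,            0,           1]])
    ry = np.array([[ np.cos(pitch), 0, np.sin(pitch)],
                   [0,              1, 0            ],
                   [-np.sin(pitch), 0, np.cos(pitch)]])
    return rz @ ry

def gaze_in_world(head_yaw, head_pitch, eye_yaw, eye_pitch):
    forward = np.array([1.0, 0.0, 0.0])                   # "straight ahead"
    eye_in_head = rotation(eye_yaw, eye_pitch) @ forward  # from the eye tracker
    return rotation(head_yaw, head_pitch) @ eye_in_head   # rotate into world frame

# Head turned 30 deg left plus eyes 10 deg further left: gaze points ~40 deg left.
g = gaze_in_world(30, 0, 10, 0)
print(np.degrees(np.arctan2(g[1], g[0])))   # ~40.0
```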

  14. Human-like object tracking and gaze estimation with PKD android

    Science.gov (United States)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.

  15. Looking at Eye Gaze Processing and Its Neural Correlates in Infancy--Implications for Social Development and Autism Spectrum Disorder

    Science.gov (United States)

    Hoehl, Stefanie; Reid, Vincent M.; Parise, Eugenio; Handl, Andrea; Palumbo, Letizia; Striano, Tricia

    2009-01-01

    The importance of eye gaze as a means of communication is indisputable. However, there is debate about whether there is a dedicated neural module, which functions as an eye gaze detector and when infants are able to use eye gaze cues in a referential way. The application of neuroscience methodologies to developmental psychology has provided new…

  16. Experimental test of spatial updating models for monkey eye-head gaze shifts.

    Directory of Open Access Journals (Sweden)

    Tom J Van Grootel

    How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static), or during (dynamic), the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye- and head positions rather than relative eye- and head displacements.

  17. Social eye gaze modulates processing of speech and co-speech gesture.

    Science.gov (United States)

    Holler, Judith; Schubotz, Louise; Kelly, Spencer; Hagoort, Peter; Schuetze, Manuela; Özyürek, Aslı

    2014-12-01

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech+gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker's preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients' speech processing suffers, gestures can enhance the comprehension of a speaker's message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Attention and Social Cognition in Virtual Reality : The effect of engagement mode and character eye-gaze

    NARCIS (Netherlands)

    Rooney, Brendan; Bálint, Katalin; Parsons, Thomas; Burke, Colin; O'Leary, T; Lee, C.T.; Mantei, C.

    2017-01-01

    Technical developments in virtual humans are manifest in modern character design. Specifically, eye gaze offers a significant aspect of such design. There is need to consider the contribution of participant control of engagement. In the current study, we manipulated participants’ engagement with an

  19. Eyes that bind us: Gaze leading induces an implicit sense of agency.

    Science.gov (United States)

    Stephenson, Lisa J; Edwards, S Gareth; Howard, Emma E; Bayliss, Andrew P

    2018-03-01

    Humans feel a sense of agency over the effects their motor system causes. This is the case for manual actions such as pushing buttons, kicking footballs, and all acts that affect the physical environment. We ask whether initiating joint attention - causing another person to follow our eye movement - can elicit an implicit sense of agency over this congruent gaze response. Eye movements themselves cannot directly affect the physical environment, but joint attention is an example of how eye movements can indirectly cause social outcomes. Here we show that leading the gaze of an on-screen face induces an underestimation of the temporal gap between action and consequence (Experiments 1 and 2). This underestimation effect, named 'temporal binding,' is thought to be a measure of an implicit sense of agency. Experiment 3 asked whether merely making an eye movement in a non-agentic, non-social context might also affect temporal estimation, and no reliable effects were detected, implying that inconsequential oculomotor acts do not reliably affect temporal estimations under these conditions. Together, these findings suggest that an implicit sense of agency is generated when initiating joint attention interactions. This is important for understanding how humans can efficiently detect and understand the social consequences of their actions. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Predictive Gaze Cues and Personality Judgments: Should Eye Trust You?

    OpenAIRE

    Bayliss, Andrew P.; Tipper, Steven P.

    2006-01-01

    Although following another person's gaze is essential in fluent social interactions, the reflexive nature of this gaze-cuing effect means that gaze can be used to deceive. In a gaze-cuing procedure, participants were presented with several faces that looked to the left or right. Some faces always looked to the target (predictive-valid), some never looked to the target (predictive-invalid), and others looked toward and away from the target in equal proportions (nonpredictive). The standard gaz...

  1. Aversive eye gaze during a speech in virtual environment in patients with social anxiety disorder.

    Science.gov (United States)

    Kim, Haena; Shin, Jung Eun; Hong, Yeon-Ju; Shin, Yu-Bin; Shin, Young Seok; Han, Kiwan; Kim, Jae-Jin; Choi, Soo-Hee

    2018-03-01

    One of the main characteristics of social anxiety disorder is excessive fear of social evaluation. In such situations, anxiety can influence gaze behaviour. Thus, the current study adopted virtual reality to examine the eye gaze patterns of social anxiety disorder patients while they presented different types of speeches. A total of 79 social anxiety disorder patients and 51 healthy controls presented prepared speeches on general topics and impromptu speeches on self-related topics to a virtual audience while their eye gaze was recorded. Their presentation performance was also evaluated. Overall, social anxiety disorder patients showed less eye gaze towards the audience than healthy controls. Types of speech did not influence social anxiety disorder patients' gaze allocation towards the audience. However, patients with social anxiety disorder showed significant correlations between the amount of eye gaze towards the audience while presenting self-related speeches and social anxiety cognitions. The current study confirms that eye gaze behaviour of social anxiety disorder patients is aversive and that their anxiety symptoms are more dependent on the nature of the topic.

  2. Eye blinking in an avian species is associated with gaze shifts.

    Science.gov (United States)

    Yorzinski, Jessica L

    2016-08-30

    Even when animals are actively monitoring their environment, they lose access to visual information whenever they blink. They can strategically time their blinks to minimize information loss and improve visual functioning but we have little understanding of how this process operates in birds. This study therefore examined blinking in freely-moving peacocks (Pavo cristatus) to determine the relationship between their blinks, gaze shifts, and context. Peacocks wearing a telemetric eye-tracker were exposed to a taxidermy predator (Vulpes vulpes) and their blinks and gaze shifts were recorded. Peacocks blinked during the majority of their gaze shifts, especially when gaze shifts were large, thereby timing their blinks to coincide with periods when visual information is already suppressed. They inhibited their blinks the most when they exhibited high rates of gaze shifts and were thus highly alert. Alternative hypotheses explaining the link between blinks and gaze shifts are discussed.

  3. Speaking and Listening with the Eyes: Gaze Signaling during Dyadic Interactions.

    Science.gov (United States)

    Ho, Simon; Foulsham, Tom; Kingstone, Alan

    2015-01-01

    Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest.
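
    A hedged sketch of a cross-correlational analysis of this kind (the sampling rate, binary signal coding and synthetic data are assumptions, not the authors' pipeline):

```python
# Hedged sketch: given two time series sampled at the same rate (1 = the
# participant is speaking, 1 = the participant is gazing at the partner),
# find the lag at which the two signals are most strongly related.
import numpy as np

def cross_correlation_lag(gaze, speech, max_lag, rate_hz):
    """Return (lag in seconds, correlation) maximizing |corr(gaze(t+lag), speech(t))|."""
    gaze = np.asarray(gaze, float) - np.mean(gaze)
    speech = np.asarray(speech, float) - np.mean(speech)
    lags = np.arange(-max_lag, max_lag + 1)
    n = len(speech)
    corrs = []
    for k in lags:
        g = gaze[max_lag + k: n - max_lag + k]      # shifted gaze window
        s = speech[max_lag: n - max_lag]            # fixed speech window
        corrs.append(np.corrcoef(g, s)[0, 1])
    best = int(np.argmax(np.abs(corrs)))
    return lags[best] / rate_hz, corrs[best]

# Synthetic example at 30 Hz: gaze-at-partner lags the speaking signal by
# 15 samples (0.5 s), and the recovered best lag reflects that.
rate = 30
speech = np.zeros(600)
speech[100:300] = 1                                 # one speaking turn
gaze = np.zeros(600)
gaze[115:315] = 1                                   # gaze shifted by 15 samples
print(cross_correlation_lag(gaze, speech, max_lag=60, rate_hz=rate))
```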

  4. Speaking and Listening with the Eyes: Gaze Signaling during Dyadic Interactions.

    Directory of Open Access Journals (Sweden)

    Simon Ho

    Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest.

  5. Real-time sharing of gaze data between multiple eye trackers-evaluation, tools, and advice.

    Science.gov (United States)

    Nyström, Marcus; Niehorster, Diederick C; Cornelissen, Tim; Garde, Henrik

    2017-08-01

    Technological advancements in combination with significant reductions in price have made it practically feasible to run experiments with multiple eye trackers. This enables new types of experiments with simultaneous recordings of eye movement data from several participants, which is of interest for researchers in, e.g., social and educational psychology. The Lund University Humanities Laboratory recently acquired 25 remote eye trackers, which are connected over a local wireless network. As a first step toward running experiments with this setup, demanding situations with real time sharing of gaze data were investigated in terms of network performance as well as clock and screen synchronization. Results show that data can be shared with a sufficiently low packet loss (0.1 %) and latency (M = 3 ms, MAD = 2 ms) across 8 eye trackers at a rate of 60 Hz. For a similar performance using 24 computers, the send rate needs to be reduced to 20 Hz. To help researchers conduct similar measurements on their own multi-eye-tracker setup, open source software written in Python and PsychoPy is provided. Part of the software contains a minimal working example to help researchers kick-start experiments with two or more eye trackers.
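
    A minimal sketch of the kind of real-time gaze sharing evaluated above, assuming a simple JSON-over-UDP scheme; this is a stand-in illustration, not the released software:

```python
# Hedged sketch: each station streams its latest gaze sample over UDP at a
# fixed rate and listens for the samples sent by its peers.
import json
import socket
import time

PORT = 9090                      # assumed port
SEND_RATE_HZ = 60                # the rate evaluated in the study

def send_gaze_samples(get_gaze, station_id, peer_ip):
    """Stream (x, y, timestamp) samples from `get_gaze()` to one peer."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    period = 1.0 / SEND_RATE_HZ
    while True:
        x, y = get_gaze()        # caller supplies the eye tracker's latest sample
        msg = json.dumps({"id": station_id, "x": x, "y": y, "t": time.time()})
        sock.sendto(msg.encode(), (peer_ip, PORT))
        time.sleep(period)

def receive_gaze_samples(handle):
    """Call `handle(sample_dict)` for every incoming sample."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    while True:
        data, _ = sock.recvfrom(4096)
        handle(json.loads(data.decode()))
```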

  6. Simple gaze-contingent cues guide eye movements in a realistic driving simulator

    Science.gov (United States)

    Pomarjanschi, Laura; Dorr, Michael; Bex, Peter J.; Barth, Erhardt

    2013-03-01

Looking at the right place at the right time is a critical component of driving skill. Therefore, gaze guidance has the potential to become a valuable driving assistance system. In previous work, we have already shown that complex gaze-contingent stimuli can guide attention and reduce the number of accidents in a simple driving simulator. We here set out to investigate whether cues that are simple enough to be implemented in a real car can also capture gaze during a more realistic driving task in a high-fidelity driving simulator. We used a state-of-the-art, wide-field-of-view driving simulator with an integrated eye tracker. Gaze-contingent warnings were implemented using two arrays of light-emitting diodes horizontally fitted below and above the simulated windshield. Thirteen volunteering subjects drove along predetermined routes in a simulated environment populated with autonomous traffic. Warnings were triggered during the approach to half of the intersections, cueing either towards the right or to the left. The remaining intersections were not cued, and served as controls. The analysis of the recorded gaze data revealed that the gaze-contingent cues did indeed have a gaze guiding effect, triggering a significant shift in gaze position towards the highlighted direction. This gaze shift was not accompanied by changes in driving behaviour, suggesting that the cues do not interfere with the driving task itself.

  7. Orienting of attention via observed eye gaze is head-centred.

    Science.gov (United States)

    Bayliss, Andrew P; di Pellegrino, Giuseppe; Tipper, Steven P

    2004-11-01

Observing averted eye gaze results in the automatic allocation of attention to the gazed-at location. The role of the orientation of the face that produces the gaze cue was investigated. The eyes in the face could look left or right in a head-centred frame, but the face itself could be oriented 90 degrees clockwise or anticlockwise such that the eyes were gazing up or down. Significant cueing effects to targets presented to the left or right of the screen were found in these head orientation conditions. This suggests that attention was directed to the side towards which the eyes would have been looking, had the face been presented upright. This finding provides evidence that head orientation can affect gaze following, even when the head orientation alone is not a social cue. It also shows that the mechanism responsible for the allocation of attention following a gaze cue can be influenced by intrinsic object-based (i.e., head-centred) properties of the task-irrelevant cue.

  8. Adaptive eye-gaze tracking using neural-network-based user profiles to assist people with motor disability.

    Science.gov (United States)

    Sesin, Anaelis; Adjouadi, Malek; Cabrerizo, Mercedes; Ayala, Melvin; Barreto, Armando

    2008-01-01

    This study developed an adaptive real-time human-computer interface (HCI) that serves as an assistive technology tool for people with severe motor disability. The proposed HCI design uses eye gaze as the primary computer input device. Controlling the mouse cursor with raw eye coordinates results in sporadic motion of the pointer because of the saccadic nature of the eye. Even though eye movements are subtle and completely imperceptible under normal circumstances, they considerably affect the accuracy of an eye-gaze-based HCI. The proposed HCI system is novel because it adapts to each specific user's different and potentially changing jitter characteristics through the configuration and training of an artificial neural network (ANN) that is structured to minimize the mouse jitter. This task is based on feeding the ANN a user's initially recorded eye-gaze behavior through a short training session. The ANN finds the relationship between the gaze coordinates and the mouse cursor position based on the multilayer perceptron model. An embedded graphical interface is used during the training session to generate user profiles that make up these unique ANN configurations. The results with 12 subjects in test 1, which involved following a moving target, showed an average jitter reduction of 35%; the results with 9 subjects in test 2, which involved following the contour of a square object, showed an average jitter reduction of 53%. For both results, the outcomes led to trajectories that were significantly smoother and apt at reaching fixed or moving targets with relative ease and within a 5% error margin or deviation from desired trajectories. The positive effects of such jitter reduction are presented graphically for visual appreciation.
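As a rough illustration of the jitter-reduction idea — training a multilayer perceptron to map recent raw gaze samples onto a smoother cursor position — here is a small sketch on synthetic data using scikit-learn. The window size, network size, and noise level are illustrative assumptions, not the parameters of the published system.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic "training session": the true cursor target moves smoothly,
# while measured gaze is the target plus saccadic jitter.
t = np.linspace(0, 2 * np.pi, 2000)
target = np.column_stack([0.5 + 0.4 * np.cos(t), 0.5 + 0.4 * np.sin(t)])
gaze = target + rng.normal(scale=0.03, size=target.shape)

# Input: a short window of recent gaze samples; output: desired cursor position.
WINDOW = 5
X = np.column_stack([np.roll(gaze, k, axis=0) for k in range(WINDOW)])[WINDOW:]
y = target[WINDOW:]

mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
mlp.fit(X, y)

pred = mlp.predict(X)
raw_jitter = np.mean(np.abs(gaze[WINDOW:] - target[WINDOW:]))
mlp_jitter = np.mean(np.abs(pred - y))
print(f"jitter reduction: {100 * (1 - mlp_jitter / raw_jitter):.0f}%")
```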

  9. Effects of Observing Eye Contact on Gaze Following in High-Functioning Autism

    NARCIS (Netherlands)

    Böckler, A.; Timmermans, B.; Sebanz, N.; Vogeley, K.; Schilbach, L.

    2014-01-01

    Observing eye contact between others enhances the tendency to subsequently follow their gaze and has been suggested to function as a social signal that adds meaning to an upcoming action or event. The present study investigated effects of observed eye contact in high-functioning autism (HFA). Two

  10. Gazes

    DEFF Research Database (Denmark)

    Khawaja, Iram

    , and the different strategies of positioning they utilize are studied and identified. The first strategy is to confront stereotyping prejudices and gazes, thereby attempting to position oneself in a counteracting way. The second is to transform and try to normalise external characteristics, such as clothing...... and other symbols that indicate Muslimness. A third strategy is to play along and allow the prejudice in question to remain unchallenged. A fourth is to join and participate in religious communities and develop an alternate sense of belonging to a wider community of Muslims. The concept of panoptical gazes...

  11. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.

    Science.gov (United States)

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.

  12. Photographic but not line-drawn faces show early perceptual neural sensitivity to eye gaze direction

    Directory of Open Access Journals (Sweden)

    Alejandra eRossi

    2015-04-01

Full Text Available Our brains readily decode facial movements and changes in social attention, reflected in earlier and larger N170 event-related potentials (ERPs) to viewing gaze aversions vs. direct gaze in real faces (Puce et al. 2000). In contrast, gaze aversions in line-drawn faces do not produce these N170 differences (Rossi et al., 2014), suggesting that physical stimulus properties or experimental context may drive these effects. Here we investigated the role of stimulus-induced context on neurophysiological responses to dynamic gaze. Sixteen healthy adults viewed line-drawn and real faces, with dynamic eye aversion and direct gaze transitions, and control stimuli (scrambled arrays and checkerboards) while continuous electroencephalographic (EEG) activity was recorded. EEG data from 2 temporo-occipital clusters of 9 electrodes in each hemisphere where N170 activity is known to be maximal were selected for analysis. N170 peak amplitude and latency, and temporal dynamics from event-related spectral perturbations (ERSPs) were measured in 16 healthy subjects. Real faces generated larger N170s for averted vs. direct gaze motion, however, N170s to real and direct gaze were as large as those to respective controls. N170 amplitude did not differ across line-drawn gaze changes. Overall, bilateral mean gamma power changes for faces relative to control stimuli occurred between 150-350 ms, potentially reflecting signal detection of facial motion. Our data indicate that experimental context does not drive N170 differences to viewed gaze changes. Low-level stimulus properties, such as the high sclera/iris contrast change in real eyes, likely drive the N170 changes to viewed aversive movements.

  13. Gaze Estimation for Off-Angle Iris Recognition Based on the Biometric Eye Model

    Energy Technology Data Exchange (ETDEWEB)

    Karakaya, Mahmut [ORNL; Barstow, Del R [ORNL; Santos-Villalobos, Hector J [ORNL; Thompson, Joseph W [ORNL; Bolme, David S [ORNL; Boehnen, Chris Bensing [ORNL

    2013-01-01

Iris recognition is among the highest accuracy biometrics. However, its accuracy relies on controlled high quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ANONYMIZED biometric eye model. Gaze estimation is an important prerequisite step to correct an off-angle iris image. To achieve the accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from elliptical features of an iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, or multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up-table generated by using our biologically inspired biometric eye model and find the closest feature point in the look-up-table to estimate the gaze. Based on the results from real images, the proposed method shows effectiveness in gaze estimation accuracy for our biometric eye model with an average error of approximately 3.5 degrees over a 50 degree range.
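The look-up-table idea — precompute boundary-ellipse features for candidate gaze angles and return the closest match — can be sketched as follows. The simple cosine-foreshortening model below is a stand-in for the paper's biometric eye model, so the features and numbers are purely illustrative.

```python
import numpy as np

def ellipse_features(gaze_deg, iris_radius=6.0):
    """Toy stand-in for the biometric eye model: an iris viewed off-axis appears
    as an ellipse whose minor/major axis ratio is roughly cos(gaze angle)."""
    theta = np.radians(gaze_deg)
    return np.array([iris_radius, iris_radius * np.cos(theta)])

# Build the look-up table over a ~50 degree gaze range, as in the abstract.
table_angles = np.arange(0.0, 50.0, 0.5)
table = np.stack([ellipse_features(a) for a in table_angles])

def estimate_gaze(observed_major, observed_minor):
    """Return the table angle whose ellipse features are closest to the observation."""
    obs = np.array([observed_major, observed_minor])
    idx = np.argmin(np.linalg.norm(table - obs, axis=1))
    return table_angles[idx]

# Example: an iris segmented with major/minor axes of 6.0 and 5.2 pixels.
print(estimate_gaze(6.0, 5.2), "degrees off-axis (approx.)")
```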

  14. Gaze-and-brain-controlled interfaces for human-computer and human-robot interaction

    Directory of Open Access Journals (Sweden)

    Shishkin S. L.

    2017-09-01

Full Text Available Background. Human-machine interaction technology has greatly evolved during the last decades, but manual and speech modalities remain single output channels with their typical constraints imposed by the motor system’s information transfer limits. Will brain-computer interfaces (BCIs) and gaze-based control be able to convey human commands or even intentions to machines in the near future? We provide an overview of basic approaches in this new area of applied cognitive research. Objective. We test the hypothesis that the use of communication paradigms and a combination of eye tracking with unobtrusive forms of registering brain activity can improve human-machine interaction. Methods and Results. Three groups of ongoing experiments at the Kurchatov Institute are reported. First, we discuss the communicative nature of human-robot interaction, and approaches to building a more efficient technology. Specifically, “communicative” patterns of interaction can be based on joint attention paradigms from developmental psychology, including a mutual “eye-to-eye” exchange of looks between human and robot. Further, we provide an example of “eye mouse” superiority over the computer mouse, here in emulating the task of selecting a moving robot from a swarm. Finally, we demonstrate a passive, noninvasive BCI that uses EEG correlates of expectation. This may become an important filter to separate intentional gaze dwells from non-intentional ones. Conclusion. The current noninvasive BCIs are not well suited for human-robot interaction, and their performance, when they are employed by healthy users, is critically dependent on the impact of the gaze on selection of spatial locations. The new approaches discussed show a high potential for creating alternative output pathways for the human brain. When support from passive BCIs becomes mature, the hybrid technology of the eye-brain-computer (EBCI) interface will have a chance to enable natural, fluent, and the

  15. Relationship between abstract thinking and eye gaze pattern in patients with schizophrenia

    Science.gov (United States)

    2014-01-01

Background Effective integration of visual information is necessary to utilize abstract thinking, but patients with schizophrenia have slow eye movement and usually explore limited visual information. This study examines the relationship between abstract thinking ability and the pattern of eye gaze in patients with schizophrenia using a novel theme identification task. Methods Twenty patients with schizophrenia and 22 healthy controls completed the theme identification task, in which subjects selected which word, out of a set of provided words, best described the theme of a picture. Eye gaze while performing the task was recorded by the eye tracker. Results Patients exhibited a significantly lower correct rate for theme identification and fewer fixation and saccade counts than controls. The correct rate was significantly correlated with the fixation count in patients, but not in controls. Conclusions Patients with schizophrenia showed impaired abstract thinking and decreased quality of gaze, which were positively associated with each other. Theme identification and eye gaze appear to be useful as tools for the objective measurement of abstract thinking in patients with schizophrenia. PMID:24739356

  16. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction.

    Science.gov (United States)

    Xu, Tian Linger; Zhang, Hui; Yu, Chen

    2016-05-01

    We focus on a fundamental looking behavior in human-robot interactions - gazing at each other's face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user's face as a response to the human's gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot's gaze toward the human partner's face in real time and then analyzed the human's gaze behavior as a response to the robot's gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot's face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained.

  17. 3D recovery of human gaze in natural environments

    Science.gov (United States)

    Paletta, Lucas; Santner, Katrin; Fritz, Gerald; Mayer, Heinz

    2013-01-01

The estimation of human attention has recently been addressed in the context of human robot interaction. Today, joint work spaces already exist and challenge cooperating systems to jointly focus on common objects, scenes and work niches. With the advent of Google glasses and increasingly affordable wearable eye-tracking, monitoring of human attention will soon become ubiquitous. The presented work describes for the first time a method for the estimation of human fixations in 3D environments that does not require any artificial landmarks in the field of view and enables attention mapping in 3D models. It enables full 3D recovery of the human view frustum and the gaze pointer in a previously acquired 3D model of the environment in real time. The study on the precision of this method reports a mean projection error of ≈1.1 cm and a mean angle error of ≈0.6° within the chosen 3D model; the precision is ultimately limited by that of the technical instrument itself (≈1°). This innovative methodology will open new opportunities for joint attention studies as well as for bringing new potential into automated processing for human factors technologies.

  18. From the eyes and the heart: a novel eye-gaze metric that predicts video preferences of a large audience

    OpenAIRE

    Christoforou, Christoforos; Christou-Champi, Spyros; Constantinidou, Fofi; Theodorou, Maria

    2015-01-01

    Eye-tracking has been extensively used to quantify audience preferences in the context of marketing and advertising research, primarily in methodologies involving static images or stimuli (i.e., advertising, shelf testing, and website usability). However, these methodologies do not generalize to narrative-based video stimuli where a specific storyline is meant to be communicated to the audience. In this paper, a novel metric based on eye-gaze dispersion (both within and across viewings) that ...

  19. Eye Contact and Fear of Being Laughed at in a Gaze Discrimination Task

    Directory of Open Access Journals (Sweden)

    Jorge Torres-Marín

    2017-11-01

Full Text Available Current approaches conceptualize gelotophobia as a personality trait characterized by a disproportionate fear of being laughed at by others. Consistently with this perspective, gelotophobes are also described as neurotic and introverted and as having a paranoid tendency to anticipate derision and mockery situations. Although research on gelotophobia has significantly progressed over the past two decades, no evidence exists concerning the potential effects of gelotophobia in reaction to eye contact. Previous research has pointed to difficulties in discriminating gaze direction as the basis of possible misinterpretations of others’ intentions or mental states. The aim of the present research was to examine whether gelotophobia predisposition modulates the effects of eye contact (i.e., gaze discrimination) when processing faces portraying several emotional expressions. In two different experiments, participants performed an experimental gaze discrimination task in which they responded, as quickly and accurately as possible, to the eyes’ directions on faces displaying either a happy, angry, fearful, neutral, or sad emotional expression. In particular, we expected trait-gelotophobia to modulate the eye contact effect, showing specific group differences in the happiness condition. The results of Study 1 (N = 40) indicated that gelotophobes made more errors than non-gelotophobes did in the gaze discrimination task. In contrast to our initial hypothesis, the happiness expression did not have any special role in the observed differences between individuals with high vs. low trait-gelotophobia. In Study 2 (N = 40), we replicated the pattern of data concerning gaze discrimination ability, even after controlling for individuals’ scores on social anxiety. Furthermore, in our second experiment, we found that gelotophobes did not exhibit any problem with identifying others’ emotions, or a general incorrect attribution of affective features, such as valence

  20. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction

    Science.gov (United States)

    XU, TIAN (LINGER); ZHANG, HUI; YU, CHEN

    2016-01-01

    We focus on a fundamental looking behavior in human-robot interactions – gazing at each other’s face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user’s face as a response to the human’s gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot’s gaze toward the human partner’s face in real time and then analyzed the human’s gaze behavior as a response to the robot’s gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot’s face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained. PMID:28966875

  1. Human sensitivity to eye contact in 2D and 3D videoconferencing

    NARCIS (Netherlands)

    Eijk, van R.L.J.; Kuijsters, A.; Dijkstra, K.I.; IJsselsteijn, W.A.

    2010-01-01

    Gaze awareness and eye contact serve important functions in social interaction. In order to maintain those functions in 2D and 3D videoconferencing systems, human sensitivity to eye contact and gaze direction needs to be taken into account in the design of such systems. Here we experimentally

  2. An Open Conversation on Using Eye-Gaze Methods in Studies of Neurodevelopmental Disorders

    Science.gov (United States)

    Venker, Courtney E.; Kover, Sara T.

    2015-01-01

    Purpose: Eye-gaze methods have the potential to advance the study of neurodevelopmental disorders. Despite their increasing use, challenges arise in using these methods with individuals with neurodevelopmental disorders and in reporting sufficient methodological detail such that the resulting research is replicable and interpretable. Method: This…

  3. Eye Gaze Correlates of Motor Impairment in VR Observation of Motor Actions.

    Science.gov (United States)

    Alves, J; Vourvopoulos, A; Bernardino, A; Bermúdez I Badia, S

    2016-01-01

This article is part of the Focus Theme of Methods of Information in Medicine on "Methodologies, Models and Algorithms for Patients Rehabilitation". The objective was to identify eye gaze correlates of motor impairment in a virtual reality motor observation task in a study with healthy participants and stroke patients. Participants consisted of a group of healthy subjects (N = 20) and a group of stroke survivors (N = 10). Both groups were required to observe a simple reach-and-grab and place-and-release task in a virtual environment. Additionally, healthy subjects were required to observe the task in a normal condition and a constrained movement condition. Eye movements were recorded during the observation task for later analysis. For healthy participants, results showed differences in gaze metrics when comparing the normal and arm-constrained conditions. Differences in gaze metrics were also found when comparing dominant and non-dominant arm for saccades and smooth pursuit events. For stroke patients, results showed longer smooth pursuit segments in action observation when observing the paretic arm, thus providing evidence that the affected circuitry may be activated for eye gaze control during observation of the simulated motor action. This study suggests that neural motor circuits are involved, at multiple levels, in observation of motor actions displayed in a virtual reality environment. Thus, eye tracking combined with action observation tasks in a virtual reality display can be used to monitor motor deficits derived from stroke, and consequently can also be used for rehabilitation of stroke patients.

  4. Modeling eye gaze patterns in clinician-patient interaction with lag sequential analysis.

    Science.gov (United States)

    Montague, Enid; Xu, Jie; Chen, Ping-Yu; Asan, Onur; Barrett, Bruce P; Chewning, Betty

    2011-10-01

    The aim of this study was to examine whether lag sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multiuser health care settings in which trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Nonverbal communication patterns are important aspects of clinician-patient interactions and may affect patient outcomes. The eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag sequential method to identify significant behavior sequences. Lag sequential analysis included both event-based lag and time-based lag. Results from event-based lag analysis showed that the patient's gaze followed that of the clinician, whereas the clinician's gaze did not follow the patient's. Time-based sequential analysis showed that responses from the patient usually occurred within 2 s after the initial behavior of the clinician. Our data suggest that the clinician's gaze significantly affects the medical encounter but that the converse is not true. Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs.
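As a toy illustration of event-based lag sequential analysis — counting how often one coded behaviour immediately follows another and comparing that conditional rate to the base rate — consider the sketch below. The behaviour codes and the sequence are invented for illustration and do not come from the study's data.

```python
from collections import Counter

def lag1_transition_counts(events):
    """Count how often each behaviour code is immediately followed by another."""
    return Counter(zip(events[:-1], events[1:]))

# Toy coded streams: C = clinician gazes at patient, P = patient gazes at clinician,
# O = gaze elsewhere. Event-based lag 1 asks, e.g., how often P follows C.
sequence = list("CPOCPCPOOCPCPOCP")
counts = lag1_transition_counts(sequence)

n_given = sum(v for (a, _), v in counts.items() if a == "C")
observed = counts[("C", "P")] / n_given            # conditional probability P follows C
base_rate = sequence.count("P") / len(sequence)    # unconditional probability of P
print(f"P(P | C at lag 1) = {observed:.2f} vs base rate {base_rate:.2f}")
```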

  5. Objective eye-gaze behaviour during face-to-face communication with proficient alaryngeal speakers: a preliminary study.

    Science.gov (United States)

    Evitts, Paul; Gallop, Robert

    2011-01-01

    There is a large body of research demonstrating the impact of visual information on speaker intelligibility in both normal and disordered speaker populations. However, there is minimal information on which specific visual features listeners find salient during conversational discourse. To investigate listeners' eye-gaze behaviour during face-to-face conversation with normal, laryngeal and proficient alaryngeal speakers. Sixty participants individually participated in a 10-min conversation with one of four speakers (typical laryngeal, tracheoesophageal, oesophageal, electrolaryngeal; 15 participants randomly assigned to one mode of speech). All speakers were > 85% intelligible and were judged to be 'proficient' by two certified speech-language pathologists. Participants were fitted with a head-mounted eye-gaze tracking device (Mobile Eye, ASL) that calculated the region of interest and mean duration of eye-gaze. Self-reported gaze behaviour was also obtained following the conversation using a 10 cm visual analogue scale. While listening, participants viewed the lower facial region of the oesophageal speaker more than the normal or tracheoesophageal speaker. Results of non-hierarchical cluster analyses showed that while listening, the pattern of eye-gaze was predominantly directed at the lower face of the oesophageal and electrolaryngeal speaker and more evenly dispersed among the background, lower face, and eyes of the normal and tracheoesophageal speakers. Finally, results show a low correlation between self-reported eye-gaze behaviour and objective regions of interest data. Overall, results suggest similar eye-gaze behaviour when healthy controls converse with normal and tracheoesophageal speakers and that participants had significantly different eye-gaze patterns when conversing with an oesophageal speaker. Results are discussed in terms of existing eye-gaze data and its potential implications on auditory-visual speech perception. © 2011 Royal College of Speech

  6. A 2D eye gaze estimation system with low-resolution webcam images

    Directory of Open Access Journals (Sweden)

    Kim Jin

    2011-01-01

Full Text Available Abstract In this article, a low-cost system for 2D eye gaze estimation with low-resolution webcam images is presented. Two algorithms are proposed for this purpose, one for the eye-ball detection with stable approximate pupil-center and the other one for the eye movements' direction detection. Eyeball is detected using deformable angular integral search by minimum intensity (DAISMI) algorithm. Deformable template-based 2D gaze estimation (DTBGE) algorithm is employed as a noise filter for deciding the stable movement decisions. While DTBGE employs binary images, DAISMI employs gray-scale images. Right and left eye estimates are evaluated separately. DAISMI finds the stable approximate pupil-center location by calculating the mass-center of eyeball border vertices to be employed for initial deformable template alignment. DTBGE starts running with initial alignment and updates the template alignment with resulting eye movements and eyeball size frame by frame. The horizontal and vertical deviation of eye movements through eyeball size is considered as if it is directly proportional with the deviation of cursor movements in a certain screen size and resolution. The core advantage of the system is that it does not employ the real pupil-center as a reference point for gaze estimation which is more reliable against corneal reflection. Visual angle accuracy is used for the evaluation and benchmarking of the system. Effectiveness of the proposed system is presented and experimental results are shown.

  7. A novel attention training paradigm based on operant conditioning of eye gaze: Preliminary findings.

    Science.gov (United States)

    Price, Rebecca B; Greven, Inez M; Siegle, Greg J; Koster, Ernst H W; De Raedt, Rudi

    2016-02-01

Inability to engage with positive stimuli is a widespread problem associated with negative mood states across many conditions, from low self-esteem to anhedonic depression. Though attention retraining procedures have shown promise as interventions in some clinical populations, novel procedures may be necessary to reliably attenuate chronic negative mood in refractory clinical populations (e.g., clinical depression) through, for example, more active, adaptive learning processes. In addition, a focus on individual difference variables predicting intervention outcome may improve the ability to provide such targeted interventions efficiently. To provide preliminary proof-of-principle, we tested a novel paradigm using operant conditioning to train eye gaze patterns toward happy faces. Thirty-two healthy undergraduates were randomized to receive operant conditioning of eye gaze toward happy faces (train-happy) or neutral faces (train-neutral). At the group level, the train-happy condition attenuated sad mood increases following a stressful task, in comparison to train-neutral. In individual differences analysis, greater physiological reactivity (pupil dilation) in response to happy faces (during an emotional face-search task at baseline) predicted decreased mood reactivity after stress. These preliminary results suggest that operant conditioning of eye gaze toward happy faces buffers against stress-induced effects on mood, particularly in individuals who show sufficient baseline neural engagement with happy faces. Eye gaze patterns to emotional face arrays may have a causal relationship with mood reactivity. Personalized medicine research in depression may benefit from novel cognitive training paradigms that shape eye gaze patterns through feedback. Baseline neural function (pupil dilation) may be a key mechanism, aiding in iterative refinement of this approach. (c) 2016 APA, all rights reserved.

  8. How does image noise affect actual and predicted human gaze allocation in assessing image quality?

    Science.gov (United States)

    Röhrbein, Florian; Goddard, Peter; Schneider, Michael; James, Georgina; Guo, Kun

    2015-07-01

A central research question in natural vision is how to allocate fixation to extract informative cues for scene perception. With high quality images, psychological and computational studies have made significant progress to understand and predict human gaze allocation in scene exploration. However, it is unclear whether these findings can be generalised to degraded naturalistic visual inputs. In this eye-tracking and computational study, we methodically distorted both man-made and natural scenes with Gaussian low-pass filter, circular averaging filter and Additive Gaussian white noise, and monitored participants' gaze behaviour in assessing perceived image qualities. Compared with original high quality images, distorted images attracted fewer numbers of fixations but longer fixation durations, shorter saccade distance and stronger central fixation bias. This impact of image noise manipulation on gaze distribution was mainly determined by noise intensity rather than noise type, and was more pronounced for natural scenes than for man-made scenes. We further compared four high performing visual attention models in predicting human gaze allocation in degraded scenes, and found that model performance lacked human-like sensitivity to noise type and intensity, and was considerably worse than human performance measured as inter-observer variance. Furthermore, the central fixation bias is a major predictor for human gaze allocation, which becomes more prominent with increased noise intensity. Our results indicate a crucial role of external noise intensity in determining scene-viewing gaze behaviour, which should be considered in the development of realistic human-vision-inspired attention models. Copyright © 2015 Elsevier Ltd. All rights reserved.
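The three distortions named in the abstract can be sketched straightforwardly with numpy/scipy; the kernel sizes and noise level below are illustrative rather than the study's actual parameters.

```python
import numpy as np
from scipy import ndimage

def gaussian_lowpass(img, sigma=2.0):
    """Blur with a Gaussian kernel (low-pass filtering)."""
    return ndimage.gaussian_filter(img, sigma=sigma)

def circular_average(img, radius=3):
    """Blur with a flat circular (disk) kernel."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x**2 + y**2 <= radius**2).astype(float)
    disk /= disk.sum()
    return ndimage.convolve(img, disk, mode="reflect")

def additive_white_noise(img, sigma=10.0, rng=None):
    """Add zero-mean Gaussian noise and clip to the valid intensity range."""
    rng = rng or np.random.default_rng(0)
    return np.clip(img + rng.normal(scale=sigma, size=img.shape), 0, 255)

# Example on a synthetic grayscale "scene".
scene = np.tile(np.linspace(0, 255, 256), (256, 1))
for f in (gaussian_lowpass, circular_average, additive_white_noise):
    print(f.__name__, f(scene).std().round(1))
```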

  9. EyeDroid: An Open Source Mobile Gaze Tracker on Android for Eyewear Computers

    DEFF Research Database (Denmark)

    Jalaliniya, Shahram; Mardanbeigi, Diako; Sintos, Ioannis

    2015-01-01

In this paper we report on development and evaluation of a video-based mobile gaze tracker for eyewear computers. Unlike most of the previous work, our system performs all its processing workload on an Android device and sends the coordinates of the gaze point to an eyewear device through wireless...... connection. We propose a lightweight software architecture for Android to increase the efficiency of image processing needed for eye tracking. The evaluation of the system indicated an accuracy of 1.06° and a battery lifetime of approximately 4.5 hours....

  10. A free geometry model-independent neural eye-gaze tracking system

    Directory of Open Access Journals (Sweden)

    Gneo Massimo

    2012-11-01

Full Text Available Abstract Background Eye Gaze Tracking Systems (EGTSs) estimate the Point Of Gaze (POG) of a user. In diagnostic applications EGTSs are used to study oculomotor characteristics and abnormalities, whereas in interactive applications EGTSs are proposed as input devices for human computer interfaces (HCI), e.g. to move a cursor on the screen when mouse control is not possible, such as in the case of assistive devices for people suffering from locked-in syndrome. If the user’s head remains still and the cornea rotates around its fixed centre, the pupil follows the eye in the images captured from one or more cameras, whereas the outer corneal reflection generated by an IR light source, i.e. glint, can be assumed as a fixed reference point. According to the so-called pupil centre corneal reflection method (PCCR), the POG can be thus estimated from the pupil-glint vector. Methods A new model-independent EGTS based on the PCCR is proposed. The mapping function based on artificial neural networks allows to avoid any specific model assumption and approximation either for the user’s eye physiology or for the system initial setup, admitting a free geometry positioning for the user and the system components. The robustness of the proposed EGTS is proven by assessing its accuracy when tested on real data coming from: (i) different healthy users; (ii) different geometric settings of the camera and the light sources; (iii) different protocols based on the observation of points on a calibration grid and halfway points of a test grid. Results The achieved accuracy is approximately 0.49°, 0.41°, and 0.62° for respectively the horizontal, vertical and radial error of the POG. Conclusions The results prove the validity of the proposed approach as the proposed system performs better than EGTSs designed for HCI which, even if equipped with superior hardware, show accuracy values in the range 0.6°-1°.
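The PCCR mapping idea — learning a function from the pupil-glint vector to screen coordinates from a short calibration — can be sketched with a small neural network. The "eye geometry" below is a toy warp standing in for a real eye and camera, so the network size, calibration grid, and numbers are all illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Calibration: the user fixates known screen points while we record the
# pupil-centre minus glint vector (simulated here with an arbitrary warp + noise).
screen_pts = np.array([[x, y] for x in np.linspace(0.1, 0.9, 5)
                              for y in np.linspace(0.1, 0.9, 5)])

def pupil_glint_vector(p):            # toy, mildly non-linear "eye geometry"
    return np.array([0.6 * p[0] + 0.05 * p[1]**2, 0.55 * p[1] - 0.04 * p[0]**2])

vectors = np.array([pupil_glint_vector(p) for p in screen_pts])
vectors += rng.normal(scale=0.002, size=vectors.shape)

# Model-independent mapping: a small MLP learns vector -> point of gaze.
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
net.fit(vectors, screen_pts)

test = pupil_glint_vector(np.array([0.3, 0.7]))
print("estimated POG:", net.predict(test.reshape(1, -1)).round(3))
```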

  11. Eye gaze during comprehension of American Sign Language by native and beginning signers.

    Science.gov (United States)

    Emmorey, Karen; Thompson, Robin; Colvin, Rachael

    2009-01-01

    An eye-tracking experiment investigated where deaf native signers (N = 9) and hearing beginning signers (N = 10) look while comprehending a short narrative and a spatial description in American Sign Language produced live by a fluent signer. Both groups fixated primarily on the signer's face (more than 80% of the time) but differed with respect to fixation location. Beginning signers fixated on or near the signer's mouth, perhaps to better perceive English mouthing, whereas native signers tended to fixate on or near the eyes. Beginning signers shifted gaze away from the signer's face more frequently than native signers, but the pattern of gaze shifts was similar for both groups. When a shift in gaze occurred, the sign narrator was almost always looking at his or her hands and was most often producing a classifier construction. We conclude that joint visual attention and attention to mouthing (for beginning signers), rather than linguistic complexity or processing load, affect gaze fixation patterns during sign language comprehension.

  12. Use of a Remote Eye-Tracker for the Analysis of Gaze during Treadmill Walking and Visual Stimuli Exposition

    Directory of Open Access Journals (Sweden)

    V. Serchi

    2016-01-01

Full Text Available The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, the point of gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study assesses the usability limits and measurement reliability of a remote eye-tracker during treadmill walking while visual stimuli are projected. During treadmill walking, the head remained within the remote eye-tracker workspace. Generally, the quality of the point of gaze measurements declined as the distance from the remote eye-tracker increased and data loss occurred for large gaze angles. The stimulus location (a dot-target) did not influence the point of gaze accuracy, precision, and trackability during both standing and walking. Similar results were obtained when the dot-target was replaced by a static or moving 2D target and “region of interest” analysis was applied. These findings foster the feasibility of the use of a remote eye-tracker for the analysis of gaze during treadmill walking in virtual reality environments.
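Accuracy and precision of point-of-gaze data of the kind evaluated here are commonly computed as, respectively, the mean angular offset from a known target and the RMS of sample-to-sample angular deviations. The sketch below shows one way to compute them; the viewing distance, units, and noise level are illustrative, and this is not the study's own analysis code.

```python
import numpy as np

def visual_angle_deg(dx_cm, dy_cm, viewing_distance_cm):
    """Angular size of an on-screen offset, in degrees of visual angle."""
    return np.degrees(np.arctan2(np.hypot(dx_cm, dy_cm), viewing_distance_cm))

def accuracy_precision(samples_cm, target_cm, distance_cm):
    """Accuracy = mean angular offset from the target.
    Precision = RMS of angular distances between successive samples."""
    offsets = samples_cm - target_cm
    acc = np.mean([visual_angle_deg(dx, dy, distance_cm) for dx, dy in offsets])
    steps = np.diff(samples_cm, axis=0)
    prec = np.sqrt(np.mean([visual_angle_deg(dx, dy, distance_cm) ** 2
                            for dx, dy in steps]))
    return acc, prec

# Example: 100 samples jittering around a dot-target viewed from 65 cm.
rng = np.random.default_rng(3)
samples = np.array([0.0, 0.0]) + rng.normal(scale=0.4, size=(100, 2))
acc, prec = accuracy_precision(samples, np.array([0.0, 0.0]), distance_cm=65.0)
print(f"accuracy {acc:.2f} deg, precision {prec:.2f} deg RMS")
```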

  13. Are Eyes Windows to a Deceiver's Soul? Children's Use of Another's Eye Gaze Cues in a Deceptive Situation

    Science.gov (United States)

    Freire, Alejo; Eskritt, Michelle; Lee, Kang

    2004-01-01

    Three experiments examined 3- to 5-year-olds' use of eye gaze cues to infer truth in a deceptive situation. Children watched a video of an actor who hid a toy in 1 of 3 cups. In Experiments 1 and 2, the actor claimed ignorance about the toy's location but looked toward 1 of the cups, without (Experiment 1) and with (Experiment 2) head movement. In…

  14. Eye Gaze Controlled Projected Display in Automotive and Military Aviation Environments

    Directory of Open Access Journals (Sweden)

    Gowdham Prabhakar

    2018-01-01

Full Text Available This paper presents an eye gaze controlled projected display that can be used in aviation and automotive environments as a head up display. We have presented details of the hardware and software used in developing the display and an algorithm to improve performance of point and selection tasks in an eye gaze controlled graphical user interface. The algorithm does not require changing the layout of an interface; it rather puts a set of hotspots on clickable targets using a Simulated Annealing algorithm. Four user studies involving driving and flight simulators have found that the proposed projected display can improve driving and flying performance and significantly reduce pointing and selection times for secondary mission control tasks compared to existing interaction systems.
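The abstract does not spell out the cost function used by the Simulated Annealing step, so the sketch below shows only the general pattern — annealing hotspot positions so they stay near their targets while avoiding crowding that would make gaze selection unreliable. The layout, cost terms, and cooling schedule are invented for illustration and are not the published algorithm.

```python
import math
import random

random.seed(0)

# Clickable target centres on a normalised screen (illustrative layout).
targets = [(0.2, 0.3), (0.25, 0.35), (0.8, 0.3), (0.5, 0.8)]

def cost(hotspots):
    """Penalise hotspots that drift from their target and pairs that sit too
    close together (close hotspots are hard to select reliably with gaze)."""
    drift = sum(math.dist(h, t) for h, t in zip(hotspots, targets))
    crowd = sum(max(0.0, 0.15 - math.dist(a, b))
                for i, a in enumerate(hotspots) for b in hotspots[i + 1:])
    return drift + 10.0 * crowd

def neighbour(hotspots, step=0.02):
    """Randomly perturb one hotspot, keeping it on the normalised screen."""
    i = random.randrange(len(hotspots))
    x, y = hotspots[i]
    new = list(hotspots)
    new[i] = (min(1, max(0, x + random.uniform(-step, step))),
              min(1, max(0, y + random.uniform(-step, step))))
    return new

state, temp = list(targets), 1.0
best, best_cost = state, cost(state)
for _ in range(5000):
    cand = neighbour(state)
    delta = cost(cand) - cost(state)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        state = cand
        if cost(state) < best_cost:
            best, best_cost = state, cost(state)
    temp *= 0.999

print([(round(x, 2), round(y, 2)) for x, y in best])
```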

  15. Eye-gaze control of the computer interface: Discrimination of zoom intent

    International Nuclear Information System (INIS)

    Goldberg, J.H.

    1993-01-01

An analysis methodology and associated experiment were developed to assess whether definable and repeatable signatures of eye-gaze characteristics are evident, preceding a decision to zoom-in, zoom-out, or not to zoom at a computer interface. This user intent discrimination procedure can have broad application in disability aids and telerobotic control. Eye-gaze was collected from 10 subjects in a controlled experiment, requiring zoom decisions. The eye-gaze data were clustered, then fed into a multiple discriminant analysis (MDA) for optimal definition of heuristics separating the zoom-in, zoom-out, and no-zoom conditions. Confusion matrix analyses showed that a number of variable combinations classified at a statistically significant level, but practical significance was more difficult to establish. Composite contour plots demonstrated the regions in parameter space consistently assigned by the MDA to unique zoom conditions. Peak classification occurred at about 1200-1600 ms. Improvements in the methodology to achieve practical real-time zoom control are considered
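A modern equivalent of the discriminant-analysis step — separating zoom-in, zoom-out, and no-zoom trials from a handful of gaze features — can be sketched with scikit-learn's linear discriminant analysis on synthetic features. The feature set and class separations below are invented for illustration and do not reflect the study's variables.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Synthetic per-trial gaze features: [mean fixation duration (ms),
# fixation cluster spread (deg), saccade rate (per s)], one class per zoom intent.
def make_class(n, centre):
    return centre + rng.normal(scale=[40, 0.3, 0.5], size=(n, 3))

X = np.vstack([make_class(60, [450, 0.8, 2.0]),    # zoom-in: long, tight fixations
               make_class(60, [300, 2.5, 3.5]),    # zoom-out: dispersed scanning
               make_class(60, [350, 1.5, 2.8])])   # no-zoom
y = np.repeat(["zoom_in", "zoom_out", "no_zoom"], 60)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print(f"cross-validated classification accuracy: {scores.mean():.2f}")
```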

  16. Gaze Duration Biases for Colours in Combination with Dissonant and Consonant Sounds: A Comparative Eye-Tracking Study with Orangutans.

    Science.gov (United States)

    Mühlenbeck, Cordelia; Liebal, Katja; Pritsch, Carla; Jacobsen, Thomas

    2015-01-01

    Research on colour preferences in humans and non-human primates suggests similar patterns of biases for and avoidance of specific colours, indicating that these colours are connected to a psychological reaction. Similarly, in the acoustic domain, approach reactions to consonant sounds (considered as positive) and avoidance reactions to dissonant sounds (considered as negative) have been found in human adults and children, and it has been demonstrated that non-human primates are able to discriminate between consonant and dissonant sounds. Yet it remains unclear whether the visual and acoustic approach-avoidance patterns remain consistent when both types of stimuli are combined, how they relate to and influence each other, and whether these are similar for humans and other primates. Therefore, to investigate whether gaze duration biases for colours are similar across primates and whether reactions to consonant and dissonant sounds cumulate with reactions to specific colours, we conducted an eye-tracking study in which we compared humans with one species of great apes, the orangutans. We presented four different colours either in isolation or in combination with consonant and dissonant sounds. We hypothesised that the viewing time for specific colours should be influenced by dissonant sounds and that previously existing avoidance behaviours with regard to colours should be intensified, reflecting their association with negative acoustic information. The results showed that the humans had constant gaze durations which were independent of the auditory stimulus, with a clear avoidance of yellow. In contrast, the orangutans did not show any clear gaze duration bias or avoidance of colours, and they were also not influenced by the auditory stimuli. In conclusion, our findings only partially support the previously identified pattern of biases for and avoidance of specific colours in humans and do not confirm such a pattern for orangutans.

  17. Keeping Your Eye on the Rail: Gaze Behaviour of Horse Riders Approaching a Jump

    Science.gov (United States)

    Hall, Carol; Varley, Ian; Kay, Rachel; Crundall, David

    2014-01-01

The gaze behaviour of riders during their approach to a jump was investigated using a mobile eye tracking device (ASL Mobile Eye). The timing, frequency and duration of fixations on the jump and the percentage of time when their point of gaze (POG) was located elsewhere were assessed. Fixations were identified when the POG remained on the jump for 100 ms or longer. The jumping skill of experienced but non-elite riders (n = 10) was assessed by means of a questionnaire. Their gaze behaviour was recorded as they completed a course of three identical jumps five times. The speed and timing of the approach was calculated. Gaze behaviour throughout the overall approach and during the last five strides before take-off was assessed following frame-by-frame analyses. Differences in relation to both round and jump number were found. Significantly longer was spent fixated on the jump during round 2, both during the overall approach and during the last five strides (p < 0.05). Jump 1 was fixated on significantly earlier and more frequently than jump 2 or 3 (p < 0.05). The number of errors differed significantly between jump 3 and jump 1 (p = 0.01), but there was no difference in errors made between rounds. Although no significant correlations between gaze behaviour and skill scores were found, the riders who scored higher for jumping skill tended to fixate on the jump earlier (p = 0.07), when the horse was further from the jump (p = 0.09) and their first fixation on the jump was of a longer duration (p = 0.06). Trials with elite riders are now needed to further identify sport-specific visual skills and their relationship with performance. Visual training should be included in preparation for equestrian sports participation, the positive impact of which has been clearly demonstrated in other sports. PMID:24846055
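The fixation rule described above (point of gaze continuously on the jump for at least 100 ms) is a simple dwell criterion. A sketch of how such a rule can be applied to a sampled gaze stream is shown below; the sampling rate, area of interest, and data are illustrative and not taken from the study.

```python
def fixations_on_aoi(samples, in_aoi, min_duration_ms=100, sample_rate_hz=30):
    """Return (start_index, duration_ms) for each run of consecutive samples
    whose point of gaze falls inside the area of interest for >= min_duration_ms."""
    min_samples = int(round(min_duration_ms * sample_rate_hz / 1000))
    fixations, run_start = [], None
    for i, s in enumerate(samples + [None]):          # sentinel closes a final run
        if s is not None and in_aoi(s):
            run_start = i if run_start is None else run_start
        else:
            if run_start is not None and i - run_start >= min_samples:
                fixations.append((run_start, (i - run_start) * 1000 / sample_rate_hz))
            run_start = None
    return fixations

# Example: gaze points, AOI = jump occupying x in [0.4, 0.6], y in [0.3, 0.7].
gaze = [(0.1, 0.1)] * 5 + [(0.5, 0.5)] * 8 + [(0.9, 0.9)] * 4 + [(0.45, 0.6)] * 2
print(fixations_on_aoi(gaze, lambda p: 0.4 <= p[0] <= 0.6 and 0.3 <= p[1] <= 0.7))
```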

  18. Driver fatigue alarm based on eye detection and gaze estimation

    Science.gov (United States)

    Sun, Xinghua; Xu, Lu; Yang, Jingyu

    2007-11-01

Driver assistant systems have attracted much attention as an essential component of intelligent transportation systems. One task of a driver assistant system is to prevent driver fatigue. For fatigue detection, it is natural to use information about the eyes. Driver fatigue can be divided into two types: sleep with the eyes closed and sleep with the eyes open. Because fatigue detection depends on prior knowledge and probabilistic statistics, a dynamic Bayesian network is used as the analysis tool to reason about fatigue. Two kinds of experiments were performed to verify the system's effectiveness, one based on video recorded in the laboratory and the other on video recorded in a real driving situation. Ten persons participated in the tests; in the laboratory all fatigue events were detected, and in the real vehicle the detection rate was about 85%. The experiments show that the proposed system works in most situations and that its performance is satisfactory.

  19. Eye-gaze patterns as students study worked-out examples in mechanics

    Directory of Open Access Journals (Sweden)

    Brian H. Ross

    2010-10-01

    Full Text Available This study explores what introductory physics students actually look at when studying worked-out examples. Our classroom experiences indicate that introductory physics students neither discuss nor refer to the conceptual information contained in the text of worked-out examples. This study is an effort to determine to what extent students incorporate the textual information into the way they study. Student eye-gaze patterns were recorded as they studied the examples to aid them in solving a target problem. Contrary to our expectations from classroom interactions, students spent 40±3% of their gaze time reading the textual information. Their gaze patterns were also characterized by numerous jumps between corresponding mathematical and textual information, implying that they were combining information from both sources. Despite this large fraction of time spent reading the text, student recall of the conceptual information contained therein remained very poor. We also found that having a particular problem in mind had no significant effects on the gaze-patterns or conceptual information retention.

  20. Risk and Ambiguity in Information Seeking: Eye Gaze Patterns Reveal Contextual Behavior in Dealing with Uncertainty.

    Science.gov (United States)

    Wittek, Peter; Liu, Ying-Hsang; Darányi, Sándor; Gedeon, Tom; Lim, Ik Soo

    2016-01-01

    Information foraging connects optimal foraging theory in ecology with how humans search for information. The theory suggests that, following an information scent, the information seeker must optimize the tradeoff between exploration by repeated steps in the search space vs. exploitation, using the resources encountered. We conjecture that this tradeoff characterizes how a user deals with uncertainty and its two aspects, risk and ambiguity in economic theory. Risk is related to the perceived quality of the actually visited patch of information, and can be reduced by exploiting and understanding the patch to a better extent. Ambiguity, on the other hand, is the opportunity cost of having higher quality patches elsewhere in the search space. The aforementioned tradeoff depends on many attributes, including traits of the user: at the two extreme ends of the spectrum, analytic and wholistic searchers employ entirely different strategies. The former type focuses on exploitation first, interspersed with bouts of exploration, whereas the latter type prefers to explore the search space first and consume later. Our findings from an eye-tracking study of experts' interactions with novel search interfaces in the biomedical domain suggest that user traits of cognitive styles and perceived search task difficulty are significantly correlated with eye gaze and search behavior. We also demonstrate that perceived risk shifts the balance between exploration and exploitation in either type of users, tilting it against vs. in favor of ambiguity minimization. Since the pattern of behavior in information foraging is quintessentially sequential, risk and ambiguity minimization cannot happen simultaneously, leading to a fundamental limit on how good such a tradeoff can be. This in turn connects information seeking with the emergent field of quantum decision theory.

  1. Complicating Eroticism and the Male Gaze: Feminism and Georges Bataille’s Story of the Eye

    Directory of Open Access Journals (Sweden)

    Chris Vanderwees

    2014-01-01

Full Text Available This article explores the relationship between feminist criticism and Georges Bataille’s Story of the Eye. Much of the critical work on Bataille assimilates his psychosocial theories in Erotism with the manifestation of those theories in his fiction without acknowledging potential contradictions between the two bodies of work. The conflation of important distinctions between representations of sex and death in Story of the Eye and the writings of Erotism forecloses the possibility of reading Bataille’s novel as a critique of gender relations. This article unravels some of the distinctions between Erotism and Story of the Eye in order to complicate the assumption that the novel simply reproduces phallogocentric sexual fantasies of transgression. Drawing from the work of Angela Carter and Laura Mulvey, the author proposes the possibility of reading Story of the Eye as a pornographic critique of gender relations through an analysis of the novel’s displacement and destruction of the male gaze.

  2. Eye-Gaze Analysis of Facial Emotion Recognition and Expression in Adolescents with ASD.

    Science.gov (United States)

    Wieckowski, Andrea Trubanova; White, Susan W

    2017-01-01

Impaired emotion recognition and expression in individuals with autism spectrum disorder (ASD) may contribute to observed social impairment. The aim of this study was to examine the role of visual attention directed toward nonsocial aspects of a scene as a possible mechanism underlying recognition and expressive ability deficiency in ASD. One recognition and two expression tasks were administered. Recognition was assessed in a forced-choice paradigm, and expression was assessed during scripted and free-choice response (in response to emotional stimuli) tasks in youth with ASD (n = 20) and an age-matched sample of typically developing youth (n = 20). During stimulus presentation prior to response in each task, participants' eye gaze was tracked. Youth with ASD were less accurate at identifying disgust and sadness in the recognition task. They fixated less to the eye region of stimuli showing surprise. A group difference was found during the free-choice response task, such that those with ASD expressed emotion less clearly, but not during the scripted task. Results suggest altered eye gaze to the mouth region but not the eye region as a candidate mechanism for decreased ability to recognize or express emotion. Findings inform our understanding of the association between social attention and emotion recognition and expression deficits.

  3. The EyeHarp: A Gaze-Controlled Digital Musical Instrument.

    Science.gov (United States)

    Vamvakousis, Zacharias; Ramirez, Rafael

    2016-01-01

We present and evaluate the EyeHarp, a new gaze-controlled Digital Musical Instrument, which aims to enable people with severe motor disabilities to learn, perform, and compose music using only their gaze as a control mechanism. It consists of (1) a step-sequencer layer, which serves for constructing chords/arpeggios, and (2) a melody layer, for playing melodies and changing the chords/arpeggios. We have conducted a pilot evaluation of the EyeHarp involving 39 participants with no disabilities from both a performer and an audience perspective. In the first case, eight people with normal vision and no motor disability participated in a music-playing session in which both quantitative and qualitative data were collected. In the second case 31 people qualitatively evaluated the EyeHarp in a concert setting consisting of two parts: a solo performance part, and an ensemble (EyeHarp, two guitars, and flute) performance part. The obtained results indicate that, similarly to traditional music instruments, the proposed digital musical instrument has a steep learning curve, and allows expressive performances to be produced from both the performer and audience perspective.

  4. I Reach Faster When I See You Look: Gaze Effects in Human–Human and Human–Robot Face-to-Face Cooperation

    Science.gov (United States)

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human–human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human–human cooperation experiment demonstrating that an agent’s vision of her/his partner’s gaze can significantly improve that agent’s performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human–robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human–robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times. PMID:22563315

  5. Testing the dual-route model of perceived gaze direction: Linear combination of eye and head cues.

    Science.gov (United States)

    Otsuka, Yumiko; Mareschal, Isabelle; Clifford, Colin W G

    2016-06-01

    We have recently proposed a dual-route model of the effect of head orientation on perceived gaze direction (Otsuka, Mareschal, Calder, & Clifford, 2014; Otsuka, Mareschal, & Clifford, 2015), which computes perceived gaze direction as a linear combination of eye orientation and head orientation. By parametrically manipulating eye orientation and head orientation, we tested the adequacy of a linear model to account for the effect of horizontal head orientation on perceived direction of gaze. Here, participants adjusted an on-screen pointer toward the perceived gaze direction in two image conditions: Normal condition and Wollaston condition. Images in the Normal condition included a change in the visible part of the eye along with the change in head orientation, while images in the Wollaston condition were manipulated to have identical eye regions across head orientations. Multiple regression analysis with explanatory variables of eye orientation and head orientation revealed that linear models account for most of the variance both in the Normal condition and in the Wollaston condition. Further, we found no evidence that the model with a nonlinear term explains significantly more variance. Thus, the current study supports the dual-route model that computes the perceived gaze direction as a linear combination of eye orientation and head orientation.
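
    As a concrete illustration of the linear-combination idea described above, the sketch below fits perceived gaze direction as a weighted sum of eye orientation and head orientation by ordinary least squares, in the spirit of the multiple regression the abstract reports. It uses NumPy on synthetic placeholder data; the weights, noise level, and ranges are arbitrary and are not the study's estimates.

```python
import numpy as np

# Minimal sketch of the dual-route linear model: perceived gaze direction modeled
# as a weighted sum of eye and head orientation. Data are synthetic placeholders.
rng = np.random.default_rng(0)
eye_deg = rng.uniform(-20, 20, 100)    # eye-in-head orientation (degrees)
head_deg = rng.uniform(-30, 30, 100)   # head orientation (degrees)
perceived = 0.9 * eye_deg + 0.3 * head_deg + rng.normal(0, 1.5, 100)

# Ordinary least squares: perceived ~ b0 + b1*eye + b2*head
X = np.column_stack([np.ones_like(eye_deg), eye_deg, head_deg])
coef, *_ = np.linalg.lstsq(X, perceived, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((perceived - pred) ** 2) / np.sum((perceived - perceived.mean()) ** 2)
print(f"intercept={coef[0]:.2f}, eye weight={coef[1]:.2f}, head weight={coef[2]:.2f}, R^2={r2:.2f}")
```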

  6. The effect of arousal and eye gaze direction on trust evaluations of stranger's faces: A potential pathway to paranoid thinking.

    Science.gov (United States)

    Abbott, Jennie; Middlemiss, Megan; Bruce, Vicki; Smailes, David; Dudley, Robert

    2018-09-01

    When asked to evaluate faces of strangers, people with paranoia show a tendency to rate others as less trustworthy. The present study investigated the impact of arousal on this interpersonal bias, and whether this bias was specific to evaluations of trust or additionally affected other trait judgements. The study also examined the impact of eye gaze direction, as direct eye gaze has been shown to heighten arousal. In two experiments, non-clinical participants completed face rating tasks before and after either an arousal manipulation or control manipulation. Experiment one examined the effects of heightened arousal on judgements of trustworthiness. Experiment two examined the specificity of the bias, and the impact of gaze direction. Experiment one indicated that the arousal manipulation led to lower trustworthiness ratings. Experiment two showed that heightened arousal reduced trust evaluations of trustworthy faces, particularly trustworthy faces with averted gaze. The control group rated trustworthy faces with direct gaze as more trustworthy post-manipulation. There was some evidence that attractiveness ratings were affected similarly to the trust judgements, whereas judgements of intelligence were not affected by higher arousal. In both studies, participants reported low levels of arousal even after the manipulation and the use of a non-clinical sample limits the generalisability to clinical samples. There is a complex interplay between arousal, evaluations of trustworthiness and gaze direction. Heightened arousal influences judgements of trustworthiness, but within the context of face type and gaze direction. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Love is in the gaze: an eye-tracking study of love and sexual desire.

    Science.gov (United States)

    Bolmont, Mylene; Cacioppo, John T; Cacioppo, Stephanie

    2014-09-01

    Reading other people's eyes is a valuable skill during interpersonal interaction. Although a number of studies have investigated visual patterns in relation to the perceiver's interest, intentions, and goals, little is known about eye gaze when it comes to differentiating intentions to love from intentions to lust (sexual desire). To address this question, we conducted two experiments: one testing whether the visual pattern related to the perception of love differs from that related to lust and one testing whether the visual pattern related to the expression of love differs from that related to lust. Our results show that a person's eye gaze shifts as a function of his or her goal (love vs. lust) when looking at a visual stimulus. Such identification of distinct visual patterns for love and lust could have theoretical and clinical importance in couples therapy when these two phenomena are difficult to disentangle from one another on the basis of patients' self-reports. © The Author(s) 2014.

  8. In the Eye of the Beholder-  A Survey of Models for Eyes and Gaze

    DEFF Research Database (Denmark)

    Witzner Hansen, Dan; Ji, Qiang

    2010-01-01

    Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications and...

  9. There is more to gaze than meets the eye: How animals perceive the visual behaviour of others

    NARCIS (Netherlands)

    Goossens, B.M.A.

    2008-01-01

    Gaze following and the ability to understand that another individual sees something different from oneself are considered important components of human and animal social cognition. In animals, gaze following has been documented in various species, however, the underlying cognitive mechanisms and the

  10. Baby schema in human and animal faces induces cuteness perception and gaze allocation in children

    Directory of Open Access Journals (Sweden)

    Marta eBorgi

    2014-05-01

    Full Text Available The baby schema concept was originally proposed as a set of infantile traits with high appeal for humans, subsequently shown to elicit caretaking behavior and to affect cuteness perception and attentional processes. However, it is unclear whether the response to the baby schema may be extended to the human-animal bond context. Moreover, questions remain as to whether the cute response is constant and persistent or whether it changes with development. In the present study we parametrically manipulated the baby schema in images of humans, dogs and cats. We analyzed responses of 3-6-year-old children, using both explicit (i.e., cuteness ratings) and implicit (i.e., eye gaze patterns) measures. By means of eye-tracking, we assessed children’s preferential attention to images varying only for the degree of baby schema and explored participants’ fixation patterns during a cuteness task. For comparative purposes, cuteness ratings were also obtained in a sample of adults. Overall our results show that the response to an infantile facial configuration emerges early during development. In children, the baby schema affects both cuteness perception and gaze allocation to infantile stimuli and to specific facial features, an effect not simply limited to human faces. In line with previous research, results confirm human positive appraisal towards animals and inform both educational and therapeutic interventions involving pets, helping to minimize risk factors (e.g., dog bites).

  11. Parent Perception of Two Eye-Gaze Control Technology Systems in Young Children with Cerebral Palsy: Pilot Study.

    Science.gov (United States)

    Karlsson, Petra; Wallen, Margaret

    2017-01-01

    Eye-gaze control technology enables people with significant physical disability to access computers for communication, play, learning and environmental control. This pilot study used a multiple case study design with repeated baseline assessment and parents' evaluations to compare two eye-gaze control technology systems to identify any differences in factors such as ease of use and impact of the systems for their young children. Five children, aged 3 to 5 years, with dyskinetic cerebral palsy, and their families participated. Overall, families were satisfied with both the Tobii PCEye Go and myGaze® eye tracker, found them easy to position and use, and children learned to operate them quickly. This technology provides young children with important opportunities for learning, play, leisure, and developing communication.

  12. A Statistical Approach to Continuous Self-Calibrating Eye Gaze Tracking for Head-Mounted Virtual Reality Systems

    OpenAIRE

    Tripathi, Subarna; Guenter, Brian

    2016-01-01

    We present a novel, automatic eye gaze tracking scheme inspired by smooth pursuit eye motion while playing mobile games or watching virtual reality contents. Our algorithm continuously calibrates an eye tracking system for a head mounted display. This eliminates the need for an explicit calibration step and automatically compensates for small movements of the headset with respect to the head. The algorithm finds correspondences between corneal motion and screen space motion, and uses these to...
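
    The abstract describes finding correspondences between corneal (eye-feature) motion and screen-space motion during smooth pursuit and using them to keep the gaze mapping calibrated. The sketch below illustrates that general idea only: it fits an affine eye-to-screen mapping by least squares over synthetic correspondences. The affine form, the data, and the function names are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

# If gaze smoothly pursues a moving on-screen target, displacements of a tracked eye
# feature can be matched to displacements of the target, and an eye-to-screen mapping
# can be refit continuously. Illustrative affine least-squares fit on synthetic data.

def fit_affine(eye_pts, screen_pts):
    """Fit screen ~ [ex, ey, 1] @ A from corresponding point pairs."""
    E = np.hstack([eye_pts, np.ones((len(eye_pts), 1))])        # N x 3
    A, *_ = np.linalg.lstsq(E, screen_pts, rcond=None)          # 3 x 2
    return A

def eye_to_screen(A, eye_pts):
    return np.hstack([eye_pts, np.ones((len(eye_pts), 1))]) @ A

# Synthetic correspondences gathered during a pursuit episode (placeholder values).
rng = np.random.default_rng(1)
eye_pts = rng.uniform(0, 1, (200, 2))
true_A = np.array([[1800.0, 0.0], [0.0, 1000.0], [60.0, 40.0]])
screen_pts = np.hstack([eye_pts, np.ones((200, 1))]) @ true_A + rng.normal(0, 2, (200, 2))

A = fit_affine(eye_pts, screen_pts)
err = np.linalg.norm(eye_to_screen(A, eye_pts) - screen_pts, axis=1).mean()
print(f"mean reprojection error: {err:.1f} px")
```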

  13. The influence of banner advertisements on attention and memory: human faces with averted gaze can enhance advertising effectiveness.

    Science.gov (United States)

    Sajjacholapunt, Pitch; Ball, Linden J

    2014-01-01

    Research suggests that banner advertisements used in online marketing are often overlooked, especially when positioned horizontally on webpages. Such inattention invariably gives rise to an inability to remember advertising brands and messages, undermining the effectiveness of this marketing method. Recent interest has focused on whether human faces within banner advertisements can increase attention to the information they contain, since the gaze cues conveyed by faces can influence where observers look. We report an experiment that investigated the efficacy of faces located in banner advertisements to enhance the attentional processing and memorability of banner contents. We tracked participants' eye movements when they examined webpages containing either bottom-right vertical banners or bottom-center horizontal banners. We also manipulated facial information such that banners either contained no face, a face with mutual gaze or a face with averted gaze. We additionally assessed people's memories for brands and advertising messages. Results indicated that relative to other conditions, the condition involving faces with averted gaze increased attention to the banner overall, as well as to the advertising text and product. Memorability of the brand and advertising message was also enhanced. Conversely, in the condition involving faces with mutual gaze, the focus of attention was localized more on the face region rather than on the text or product, weakening any memory benefits for the brand and advertising message. This detrimental impact of mutual gaze on attention to advertised products was especially marked for vertical banners. These results demonstrate that the inclusion of human faces with averted gaze in banner advertisements provides a promising means for marketers to increase the attention paid to such adverts, thereby enhancing memory for advertising information.

  14. The influence of banner advertisements on attention and memory: Human faces with averted gaze can enhance advertising effectiveness

    Directory of Open Access Journals (Sweden)

    Pitch eSajjacholapunt

    2014-03-01

    Full Text Available Research suggests that banner advertisements used in online marketing are often overlooked, especially when positioned horizontally on webpages. Such inattention invariably gives rise to an inability to remember advertising brands and messages, undermining the effectiveness of this marketing method. Recent interest has focused on whether human faces within banner advertisements can increase attention to the information they contain, since the gaze cues conveyed by faces can influence where observers look. We report an experiment that investigated the efficacy of faces located in banner advertisements to enhance the attentional processing and memorability of banner contents. We tracked participants’ eye movements when they examined webpages containing either bottom-right vertical banners or bottom-centre horizontal banners. We also manipulated facial information such that banners either contained no face, a face with mutual gaze or a face with averted gaze. We additionally assessed people’s memories for brands and advertising messages. Results indicated that relative to other conditions, the condition involving faces with averted gaze increased attention to the banner overall, as well as to the advertising text and product. Memorability of the brand and advertising message was also enhanced. Conversely, in the condition involving faces with mutual gaze, the focus of attention was localised more on the face region rather than on the text or product, weakening any memory benefits for the brand and advertising message. This detrimental impact of mutual gaze on attention to advertised products was especially marked for vertical banners. These results demonstrate that the inclusion of human faces with averted gaze in banner advertisements provides a promising means for marketers to increase the attention paid to such adverts, thereby enhancing memory for advertising information.

  15. Dynamic Eye Tracking Based Metrics for Infant Gaze Patterns in the Face-Distractor Competition Paradigm

    Science.gov (United States)

    Ahtola, Eero; Stjerna, Susanna; Yrttiaho, Santeri; Nelson, Charles A.; Leppänen, Jukka M.; Vanhatalo, Sampsa

    2014-01-01

    Objective To develop new standardized eye tracking based measures and metrics for infants’ gaze dynamics in the face-distractor competition paradigm. Method Eye tracking data were collected from two samples of healthy 7-month-old (total n = 45), as well as one sample of 5-month-old infants (n = 22) in a paradigm with a picture of a face or a non-face pattern as a central stimulus, and a geometric shape as a lateral stimulus. The data were analyzed by using conventional measures of infants’ initial disengagement from the central to the lateral stimulus (i.e., saccadic reaction time and probability) and, additionally, novel measures reflecting infants gaze dynamics after the initial disengagement (i.e., cumulative allocation of attention to the central vs. peripheral stimulus). Results The results showed that the initial saccade away from the centrally presented stimulus is followed by a rapid re-engagement of attention with the central stimulus, leading to cumulative preference for the central stimulus over the lateral stimulus over time. This pattern tended to be stronger for salient facial expressions as compared to non-face patterns, was replicable across two independent samples of 7-month-old infants, and differentiated between 7 and 5 month-old infants. Conclusion The results suggest that eye tracking based assessments of infants’ cumulative preference for faces over time can be readily parameterized and standardized, and may provide valuable techniques for future studies examining normative developmental changes in preference for social signals. Significance Standardized measures of early developing face preferences may have potential to become surrogate biomarkers of neurocognitive and social development. PMID:24845102

  16. Dynamic eye tracking based metrics for infant gaze patterns in the face-distractor competition paradigm.

    Directory of Open Access Journals (Sweden)

    Eero Ahtola

    Full Text Available To develop new standardized eye tracking based measures and metrics for infants' gaze dynamics in the face-distractor competition paradigm. Eye tracking data were collected from two samples of healthy 7-month-old (total n = 45), as well as one sample of 5-month-old infants (n = 22) in a paradigm with a picture of a face or a non-face pattern as a central stimulus, and a geometric shape as a lateral stimulus. The data were analyzed by using conventional measures of infants' initial disengagement from the central to the lateral stimulus (i.e., saccadic reaction time and probability) and, additionally, novel measures reflecting infants' gaze dynamics after the initial disengagement (i.e., cumulative allocation of attention to the central vs. peripheral stimulus). The results showed that the initial saccade away from the centrally presented stimulus is followed by a rapid re-engagement of attention with the central stimulus, leading to cumulative preference for the central stimulus over the lateral stimulus over time. This pattern tended to be stronger for salient facial expressions as compared to non-face patterns, was replicable across two independent samples of 7-month-old infants, and differentiated between 7 and 5 month-old infants. The results suggest that eye tracking based assessments of infants' cumulative preference for faces over time can be readily parameterized and standardized, and may provide valuable techniques for future studies examining normative developmental changes in preference for social signals. Standardized measures of early developing face preferences may have potential to become surrogate biomarkers of neurocognitive and social development.
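
    The two families of measures described in these records (initial disengagement latency and cumulative allocation of attention to the central vs. lateral stimulus) can be sketched roughly as below. The area-of-interest coding, sampling rate, and trial data are invented placeholders; the paper's exact parameterization is not reproduced.

```python
import numpy as np

# Illustrative sketch of the two kinds of measures, on a synthetic trial:
# (1) initial disengagement latency from the central stimulus, and
# (2) cumulative proportion of looking time allocated to the central stimulus.
# AOI label per 20 ms sample: 1 = central stimulus, 0 = lateral stimulus, -1 = neither.

sample_ms = 20
aoi = np.array([1]*15 + [0]*8 + [1]*40 + [0]*10 + [1]*27)  # placeholder trial

# Saccadic reaction time: first sample at which gaze leaves the central AOI.
leave = np.flatnonzero(aoi != 1)
srt_ms = leave[0] * sample_ms if leave.size else None
print("saccadic RT (ms):", srt_ms)

# Cumulative preference for the central stimulus over the course of the trial.
on_central = (aoi == 1).astype(float)
on_either = (aoi != -1).astype(float)
cum_pref = np.cumsum(on_central) / np.maximum(np.cumsum(on_either), 1)
print("cumulative central preference at trial end:", round(cum_pref[-1], 2))
```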

  17. Oxytocin Reduces Face Processing Time but Leaves Recognition Accuracy and Eye-Gaze Unaffected.

    Science.gov (United States)

    Hubble, Kelly; Daughters, Katie; Manstead, Antony S R; Rees, Aled; Thapar, Anita; van Goozen, Stephanie H M

    2017-01-01

    Previous studies have found that oxytocin (OXT) can improve the recognition of emotional facial expressions; it has been proposed that this effect is mediated by an increase in attention to the eye-region of faces. Nevertheless, evidence in support of this claim is inconsistent, and few studies have directly tested the effect of oxytocin on emotion recognition via altered eye-gaze. Methods: In a double-blind, within-subjects, randomized control experiment, 40 healthy male participants received 24 IU intranasal OXT and placebo in two identical experimental sessions separated by a 2-week interval. Visual attention to the eye-region was assessed on both occasions while participants completed a static facial emotion recognition task using medium intensity facial expressions. Although OXT had no effect on emotion recognition accuracy, recognition performance was improved because face processing was faster across emotions under the influence of OXT. This effect was marginally significant. OXT did not increase gaze to the eye-region of faces, and this was not related to recognition accuracy or face processing time. These findings suggest that OXT-induced enhanced facial emotion recognition is not necessarily mediated by an increase in attention to the eye-region of faces, as previously assumed. We discuss several methodological issues which may explain discrepant findings and suggest the effect of OXT on visual attention may differ depending on task requirements. (JINS, 2017, 23, 23-33).

  18. Gaze as a biometric

    Science.gov (United States)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2014-03-01

    Two people may analyze a visual scene in two completely different ways. Our study sought to determine whether human gaze may be used to establish the identity of an individual. To accomplish this objective we investigated the gaze pattern of twelve individuals viewing still images with different spatial relationships. Specifically, we created 5 visual "dot-pattern" tests to be shown on a standard computer monitor. These tests challenged the viewer's capacity to distinguish proximity, alignment, and perceptual organization. Each test included 50 images of varying difficulty (total of 250 images). Eye-tracking data were collected from each individual while taking the tests. The eye-tracking data were converted into gaze velocities and analyzed with Hidden Markov Models to develop personalized gaze profiles. Using leave-one-out cross-validation, we observed that these personalized profiles could differentiate among the 12 users with classification accuracy ranging between 53% and 76%, depending on the test. This was statistically significantly better than random guessing (i.e., 8.3% or 1 out of 12). Classification accuracy was higher for the tests where the users' average gaze velocity per case was lower. The study findings support the feasibility of using gaze as a biometric or personalized biomarker. These findings could have implications in Radiology training and the development of personalized e-learning environments.

  19. Gaze as a biometric

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Hong-Jun [ORNL; Carmichael, Tandy [Tennessee Technological University; Tourassi, Georgia [ORNL

    2014-01-01

    Two people may analyze a visual scene in two completely different ways. Our study sought to determine whether human gaze may be used to establish the identity of an individual. To accomplish this objective we investigated the gaze pattern of twelve individuals viewing different still images with different spatial relationships. Specifically, we created 5 visual dot-pattern tests to be shown on a standard computer monitor. These tests challenged the viewer's capacity to distinguish proximity, alignment, and perceptual organization. Each test included 50 images of varying difficulty (total of 250 images). Eye-tracking data were collected from each individual while taking the tests. The eye-tracking data were converted into gaze velocities and analyzed with Hidden Markov Models to develop personalized gaze profiles. Using leave-one-out cross-validation, we observed that these personalized profiles could differentiate among the 12 users with classification accuracy ranging between 53% and 76%, depending on the test. This was statistically significantly better than random guessing (i.e., 8.3% or 1 out of 12). Classification accuracy was higher for the tests where the users' average gaze velocity per case was lower. The study findings support the feasibility of using gaze as a biometric or personalized biomarker. These findings could have implications in Radiology training and the development of personalized e-learning environments.
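
    A rough sketch of the personalized gaze-profile pipeline these two records describe: gaze positions are converted to velocities, one hidden Markov model is trained per user, and a new recording is attributed to the user whose model scores it highest. This assumes the third-party hmmlearn package and uses synthetic random-walk traces; the number of states, features, and data are illustrative, not the study's settings.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # third-party library; assumed here for illustration

def gaze_velocity(xy, dt=1/60):
    """Convert a gaze position trace (N x 2) to per-sample speed."""
    return np.linalg.norm(np.diff(xy, axis=0), axis=1, keepdims=True) / dt

def train_profile(velocity_seqs, n_states=3):
    """Fit one HMM per user on that user's gaze-velocity sequences."""
    X = np.vstack(velocity_seqs)
    lengths = [len(s) for s in velocity_seqs]
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def identify(models, velocity_seq):
    """Attribute a probe sequence to the user whose model gives the highest log-likelihood."""
    scores = {user: m.score(velocity_seq) for user, m in models.items()}
    return max(scores, key=scores.get)

# Synthetic example: two "users" with different characteristic gaze speeds.
rng = np.random.default_rng(2)
traces = {"user_a": [rng.normal(0, 0.02, (300, 2)).cumsum(0) for _ in range(5)],
          "user_b": [rng.normal(0, 0.06, (300, 2)).cumsum(0) for _ in range(5)]}
models = {u: train_profile([gaze_velocity(t) for t in ts]) for u, ts in traces.items()}
probe = gaze_velocity(rng.normal(0, 0.06, (300, 2)).cumsum(0))
print("identified as:", identify(models, probe))
```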

  20. Eye Gaze During Face Processing in Children and Adolescents with 22q11.2 Deletion Syndrome

    Science.gov (United States)

    Glaser, Bronwyn; Debbane, Martin; Ottet, Marie-Christine; Vuilleumier, Patrik; Zesiger, Pascal; Antonarakis, Stylianos E.; Eliez, Stephan

    2010-01-01

    Objective: The 22q11.2 deletion syndrome (22q11DS) is a neurogenetic syndrome with high risk for the development of psychiatric disorder. There is interest in identifying reliable markers for measuring and monitoring socio-emotional impairments in 22q11DS during development. The current study investigated eye gaze as a potential marker during a…

  1. An Exploration of the Use of Eye-Gaze Tracking to Study Problem-Solving on Standardized Science Assessments

    Science.gov (United States)

    Tai, Robert H.; Loehr, John F.; Brigham, Frederick J.

    2006-01-01

    This pilot study investigated the capacity of eye-gaze tracking to identify differences in problem-solving behaviours within a group of individuals who possessed varying degrees of knowledge and expertise in three disciplines of science (biology, chemistry and physics). The six participants, all pre-service science teachers, completed an 18-item…

  2. Social evolution. Oxytocin-gaze positive loop and the coevolution of human-dog bonds.

    Science.gov (United States)

    Nagasawa, Miho; Mitsui, Shouhei; En, Shiori; Ohtani, Nobuyo; Ohta, Mitsuaki; Sakuma, Yasuo; Onaka, Tatsushi; Mogi, Kazutaka; Kikusui, Takefumi

    2015-04-17

    Human-like modes of communication, including mutual gaze, in dogs may have been acquired during domestication with humans. We show that gazing behavior from dogs, but not wolves, increased urinary oxytocin concentrations in owners, which consequently facilitated owners' affiliation and increased oxytocin concentration in dogs. Further, nasally administered oxytocin increased gazing behavior in dogs, which in turn increased urinary oxytocin concentrations in owners. These findings support the existence of an interspecies oxytocin-mediated positive loop facilitated and modulated by gazing, which may have supported the coevolution of human-dog bonding by engaging common modes of communicating social attachment. Copyright © 2015, American Association for the Advancement of Science.

  3. Eye-catching odors: olfaction elicits sustained gazing to faces and eyes in 4-month-old infants.

    Science.gov (United States)

    Durand, Karine; Baudouin, Jean-Yves; Lewkowicz, David J; Goubet, Nathalie; Schaal, Benoist

    2013-01-01

    This study investigated whether an odor can affect infants' attention to visually presented objects and whether it can selectively direct visual gaze at visual targets as a function of their meaning. Four-month-old infants (n = 48) were exposed to their mother's body odors while their visual exploration was recorded with an eye-movement tracking system. Two groups of infants, who were assigned to either an odor condition or a control condition, looked at a scene composed of still pictures of faces and cars. As expected, infants looked longer at the faces than at the cars but this spontaneous preference for faces was significantly enhanced in presence of the odor. As expected also, when looking at the face, the infants looked longer at the eyes than at any other facial regions, but, again, they looked at the eyes significantly longer in the presence of the odor. Thus, 4-month-old infants are sensitive to the contextual effects of odors while looking at faces. This suggests that early social attention to faces is mediated by visual as well as non-visual cues.

  4. Eye-catching odors: olfaction elicits sustained gazing to faces and eyes in 4-month-old infants.

    Directory of Open Access Journals (Sweden)

    Karine Durand

    Full Text Available This study investigated whether an odor can affect infants' attention to visually presented objects and whether it can selectively direct visual gaze at visual targets as a function of their meaning. Four-month-old infants (n = 48) were exposed to their mother's body odors while their visual exploration was recorded with an eye-movement tracking system. Two groups of infants, who were assigned to either an odor condition or a control condition, looked at a scene composed of still pictures of faces and cars. As expected, infants looked longer at the faces than at the cars but this spontaneous preference for faces was significantly enhanced in presence of the odor. As expected also, when looking at the face, the infants looked longer at the eyes than at any other facial regions, but, again, they looked at the eyes significantly longer in the presence of the odor. Thus, 4-month-old infants are sensitive to the contextual effects of odors while looking at faces. This suggests that early social attention to faces is mediated by visual as well as non-visual cues.

  5. Novel Eye Movement Disorders in Whipple’s Disease—Staircase Horizontal Saccades, Gaze-Evoked Nystagmus, and Esotropia

    Directory of Open Access Journals (Sweden)

    Aasef G. Shaikh

    2017-07-01

    Full Text Available Whipple’s disease, a rare systemic infectious disorder, is complicated by the involvement of the central nervous system in about 5% of cases. Oscillations of the eyes and the jaw, called oculo-masticatory myorhythmia, are pathognomonic of the central nervous system involvement but are often absent. Typical manifestations of the central nervous system Whipple’s disease are cognitive impairment, parkinsonism mimicking progressive supranuclear palsy with vertical saccade slowing, and up-gaze range limitation. We describe a unique patient with the central nervous system Whipple’s disease who had typical features, including parkinsonism, cognitive impairment, and up-gaze limitation; but also had diplopia, esotropia with mild horizontal (abduction more than adduction) limitation, and vertigo. The patient also had gaze-evoked nystagmus and staircase horizontal saccades. The latter were thought to be due to mal-programmed small saccades followed by a series of corrective saccades. The saccades were disconjugate due to the concurrent strabismus. Also, we noted disconjugacy in the slow phase of gaze-evoked nystagmus. The disconjugacy of the slow phase of gaze-evoked nystagmus was larger during the monocular viewing condition. We propose that interaction of the strabismic drifts of the covered eyes and the nystagmus drift, putatively at the final common pathway, might lead to such disconjugacy.

  6. Identification of Emotional Facial Expressions: Effects of Expression, Intensity, and Sex on Eye Gaze.

    Directory of Open Access Journals (Sweden)

    Laura Jean Wells

    Full Text Available The identification of emotional expressions is vital for social interaction, and can be affected by various factors, including the expressed emotion, the intensity of the expression, the sex of the face, and the gender of the observer. This study investigates how these factors affect the speed and accuracy of expression recognition, as well as dwell time on the two most significant areas of the face: the eyes and the mouth. Participants were asked to identify expressions from female and male faces displaying six expressions (anger, disgust, fear, happiness, sadness, and surprise), each with three levels of intensity (low, moderate, and normal). Overall, responses were fastest and most accurate for happy expressions, but slowest and least accurate for fearful expressions. More intense expressions were also classified most accurately. Reaction time showed a different pattern, with slowest response times recorded for expressions of moderate intensity. Overall, responses were slowest, but also most accurate, for female faces. Relative to male observers, women showed greater accuracy and speed when recognizing female expressions. Dwell time analyses revealed that attention to the eyes was about three times greater than on the mouth, with fearful eyes in particular attracting longer dwell times. The mouth region was attended to the most for fearful, angry, and disgusted expressions and least for surprise. These results extend upon previous findings to show important effects of expression, emotion intensity, and sex on expression recognition and gaze behaviour, and may have implications for understanding the ways in which emotion recognition abilities break down.

  7. Identification of Emotional Facial Expressions: Effects of Expression, Intensity, and Sex on Eye Gaze.

    Science.gov (United States)

    Wells, Laura Jean; Gillespie, Steven Mark; Rotshtein, Pia

    2016-01-01

    The identification of emotional expressions is vital for social interaction, and can be affected by various factors, including the expressed emotion, the intensity of the expression, the sex of the face, and the gender of the observer. This study investigates how these factors affect the speed and accuracy of expression recognition, as well as dwell time on the two most significant areas of the face: the eyes and the mouth. Participants were asked to identify expressions from female and male faces displaying six expressions (anger, disgust, fear, happiness, sadness, and surprise), each with three levels of intensity (low, moderate, and normal). Overall, responses were fastest and most accurate for happy expressions, but slowest and least accurate for fearful expressions. More intense expressions were also classified most accurately. Reaction time showed a different pattern, with slowest response times recorded for expressions of moderate intensity. Overall, responses were slowest, but also most accurate, for female faces. Relative to male observers, women showed greater accuracy and speed when recognizing female expressions. Dwell time analyses revealed that attention to the eyes was about three times greater than on the mouth, with fearful eyes in particular attracting longer dwell times. The mouth region was attended to the most for fearful, angry, and disgusted expressions and least for surprise. These results extend upon previous findings to show important effects of expression, emotion intensity, and sex on expression recognition and gaze behaviour, and may have implications for understanding the ways in which emotion recognition abilities break down.

  8. Real-time inference of word relevance from electroencephalogram and eye gaze

    Science.gov (United States)

    Wenzel, M. A.; Bogojeski, M.; Blankertz, B.

    2017-10-01

    Objective. Brain-computer interfaces can potentially map the subjective relevance of the visual surroundings, based on neural activity and eye movements, in order to infer the interest of a person in real-time. Approach. Readers looked for words belonging to one out of five semantic categories, while a stream of words passed at different locations on the screen. It was estimated in real-time which words and thus which semantic category interested each reader based on the electroencephalogram (EEG) and the eye gaze. Main results. Words that were subjectively relevant could be decoded online from the signals. The estimation resulted in an average rank of 1.62 for the category of interest among the five categories after a hundred words had been read. Significance. It was demonstrated that the interest of a reader can be inferred online from EEG and eye tracking signals, which can potentially be used in novel types of adaptive software, which enrich the interaction by adding implicit information about the interest of the user to the explicit interaction. The study is characterised by the following novelties. Interpretation with respect to the word meaning was necessary in contrast to the usual practice in brain-computer interfacing where stimulus recognition is sufficient. The typical counting task was avoided because it would not be sensible for implicit relevance detection. Several words were displayed at the same time, in contrast to the typical sequences of single stimuli. Neural activity was related with eye tracking to the words, which were scanned without restrictions on the eye movements.

  9. Dynamic modeling of patient and physician eye gaze to understand the effects of electronic health records on doctor-patient communication and attention.

    Science.gov (United States)

    Montague, Enid; Asan, Onur

    2014-03-01

    The aim of this study was to examine eye gaze patterns between patients and physicians while electronic health records were used to support patient care. Eye gaze provides an indication of physician attention to patient, patient/physician interaction, and physician behaviors such as searching for information and documenting information. A field study was conducted where 100 patient visits were observed and video recorded in a primary care clinic. Videos were then coded for gaze behaviors where patients' and physicians' gaze at each other and artifacts such as electronic health records were coded using a pre-established objective coding scheme. Gaze data were then analyzed using lag sequential methods. Results showed that several eye gaze patterns are significantly dependent on one another. All doctor-initiated gaze patterns were followed by patient gaze patterns. Some patient-initiated gaze patterns were also significantly followed by doctor gaze patterns, unlike the findings in previous studies. Health information technology appears to contribute to some of the new significant patterns that have emerged. Differences were also found in gaze patterns related to technology that differ from patterns identified in studies with paper charts. Several sequences related to patient-doctor-technology were also significant. Electronic health records affect the patient-physician eye contact dynamic differently than paper charts. This study identified several patterns of patient-physician interaction with electronic health record systems. Consistent with previous studies, physician initiated gaze is an important driver of the interactions between patient and physician and patient and technology. Published by Elsevier Ireland Ltd.
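
    A minimal sketch of a lag-1 sequential analysis of the kind applied to the coded gaze behaviors above: observed transition counts between coded states are compared with the counts expected if successive events were independent. The state labels and the event sequence are made up for illustration and do not come from the study.

```python
from collections import Counter
from itertools import product

# Hypothetical coded gaze events (MD = physician, PT = patient, EHR = the record system).
events = ["MD_gaze_patient", "PT_gaze_MD", "MD_gaze_EHR", "PT_gaze_EHR",
          "MD_gaze_patient", "PT_gaze_MD", "MD_gaze_EHR", "PT_gaze_MD"]

states = sorted(set(events))
pairs = Counter(zip(events, events[1:]))   # observed lag-1 transitions
totals = Counter(events[:-1])              # how often each state starts a pair
base = Counter(events[1:])                 # marginal frequency of following states
n_pairs = len(events) - 1

print(f"{'given -> next':37s} observed  expected")
for a, b in product(states, repeat=2):
    observed = pairs[(a, b)]
    expected = totals[a] * base[b] / n_pairs   # expectation under independence
    print(f"{a:>16s} -> {b:16s} {observed:8d}  {expected:8.2f}")
```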

  10. A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment.

    Science.gov (United States)

    Singh, Tarkeshwar; Perry, Christopher M; Herter, Troy M

    2016-01-26

    Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal
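
    A minimal velocity-threshold sketch in the spirit of the classification described above: angular gaze speed is computed per sample and thresholded to label fixations, smooth pursuits, and saccades. The thresholds, sampling rate, and synthetic trace are assumptions for illustration, not the paper's geometric method or its validated parameters.

```python
import numpy as np
from collections import Counter

def classify_gaze_events(gaze_deg, fs=500.0, sacc_thresh=80.0, pursuit_thresh=5.0):
    """Label each inter-sample interval as fixation, pursuit, or saccade from speed (deg/s)."""
    speed = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) * fs
    labels = np.where(speed >= sacc_thresh, "saccade",
                      np.where(speed >= pursuit_thresh, "pursuit", "fixation"))
    return speed, labels

# Synthetic gaze trace (azimuth, elevation in degrees): a fixation, a fast jump, then slow drift.
t_fix = np.tile([10.0, 5.0], (100, 1))
t_sacc = np.linspace([10.0, 5.0], [20.0, 5.0], 10)
t_purs = np.linspace([20.0, 5.0], [25.0, 5.0], 200)
trace = np.vstack([t_fix, t_sacc, t_purs])

speed, labels = classify_gaze_events(trace)
print(Counter(labels.tolist()))
```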

  11. Wolves (Canis lupus) and Dogs (Canis familiaris) Differ in Following Human Gaze Into Distant Space But Respond Similar to Their Packmates’ Gaze

    Science.gov (United States)

    Werhahn, Geraldine; Virányi, Zsófia; Barrera, Gabriela; Sommese, Andrea; Range, Friederike

    2017-01-01

    Gaze following into distant space is defined as visual co-orientation with another individual’s head direction allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills might have been altered through domestication that may have influenced their performance in Study 1. Because following human gaze in dogs might be influenced by special evolutionary as well as developmental adaptations to interactions with humans, we suggest that comparing dogs to other animal species might be more informative when done in intraspecific social contexts. PMID:27244538

  12. Wolves (Canis lupus) and dogs (Canis familiaris) differ in following human gaze into distant space but respond similar to their packmates' gaze.

    Science.gov (United States)

    Werhahn, Geraldine; Virányi, Zsófia; Barrera, Gabriela; Sommese, Andrea; Range, Friederike

    2016-08-01

    Gaze following into distant space is defined as visual co-orientation with another individual's head direction allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills might have been altered through domestication that may have influenced their performance in Study 1. Because following human gaze in dogs might be influenced by special evolutionary as well as developmental adaptations to interactions with humans, we suggest that comparing dogs to other animal species might be more informative when done in intraspecific social contexts. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  13. From the eyes and the heart: a novel eye-gaze metric that predicts video preferences of a large audience.

    Directory of Open Access Journals (Sweden)

    Christoforos eChristoforou

    2015-05-01

    Full Text Available Eye-tracking has been extensively used to quantify audience preferences in the context of marketing and advertising research, primarily in methodologies involving static images or stimuli (i.e., advertising, shelf testing, and website usability). However, these methodologies do not generalize to narrative-based video stimuli where a specific storyline is meant to be communicated to the audience. In this paper, a novel metric based on eye-gaze dispersion (both within and across viewings) that quantifies the impact of narrative-based video stimuli on the preferences of large audiences is presented. The metric is validated in predicting the performance of video advertisements aired during the 2014 Super Bowl final. In particular, the metric is shown to explain 70% of the variance in likeability scores of the 2014 Super Bowl ads as measured by the USA TODAY Ad Meter. In addition, by comparing the proposed metric with Heart Rate Variability (HRV) indices, we have associated the metric with biological processes relating to attention allocation. The underlying idea behind the proposed metric suggests a shift in perspective when it comes to evaluating narrative-based video stimuli. In particular, it suggests that audience preferences on video are modulated by the level of the viewer’s lack of attention allocation. The proposed metric can be calculated on any narrative-based video stimuli (i.e., movie, narrative content, emotional content, etc.), and thus has the potential to facilitate the use of such stimuli in several contexts: prediction of audience preferences of movies, quantitative assessment of entertainment pieces, prediction of the impact of movie trailers, identification of group and individual differences in the study of attention-deficit disorders, and the study of desensitization to media violence.

  14. From the eyes and the heart: a novel eye-gaze metric that predicts video preferences of a large audience.

    Science.gov (United States)

    Christoforou, Christoforos; Christou-Champi, Spyros; Constantinidou, Fofi; Theodorou, Maria

    2015-01-01

    Eye-tracking has been extensively used to quantify audience preferences in the context of marketing and advertising research, primarily in methodologies involving static images or stimuli (i.e., advertising, shelf testing, and website usability). However, these methodologies do not generalize to narrative-based video stimuli where a specific storyline is meant to be communicated to the audience. In this paper, a novel metric based on eye-gaze dispersion (both within and across viewings) that quantifies the impact of narrative-based video stimuli on the preferences of large audiences is presented. The metric is validated in predicting the performance of video advertisements aired during the 2014 Super Bowl final. In particular, the metric is shown to explain 70% of the variance in likeability scores of the 2014 Super Bowl ads as measured by the USA TODAY Ad-Meter. In addition, by comparing the proposed metric with Heart Rate Variability (HRV) indices, we have associated the metric with biological processes relating to attention allocation. The underlying idea behind the proposed metric suggests a shift in perspective when it comes to evaluating narrative-based video stimuli. In particular, it suggests that audience preferences on video are modulated by the level of viewers' lack of attention allocation. The proposed metric can be calculated on any narrative-based video stimuli (i.e., movie, narrative content, emotional content, etc.), and thus has the potential to facilitate the use of such stimuli in several contexts: prediction of audience preferences of movies, quantitative assessment of entertainment pieces, prediction of the impact of movie trailers, identification of group and individual differences in the study of attention-deficit disorders, and the study of desensitization to media violence.
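
    The metric in these two records is built from the dispersion of gaze points across viewers. The sketch below computes one simple variant (mean distance from the group centroid per frame, averaged over a clip) on synthetic data; the within- vs. across-viewing decomposition and the mapping to likeability scores used in the paper are not reproduced here.

```python
import numpy as np

def frame_dispersion(gaze_xy):
    """Mean Euclidean distance of each viewer's gaze point from the group centroid."""
    centroid = gaze_xy.mean(axis=0)
    return np.linalg.norm(gaze_xy - centroid, axis=1).mean()

def video_dispersion(gaze_by_frame):
    """Average per-frame dispersion over a whole clip (n_frames x n_viewers x 2 array)."""
    return float(np.mean([frame_dispersion(frame) for frame in gaze_by_frame]))

# Synthetic placeholder data in normalized screen coordinates.
rng = np.random.default_rng(3)
tight = rng.normal([0.5, 0.5], 0.03, (250, 20, 2))   # viewers clustered on one spot
scattered = rng.uniform(0, 1, (250, 20, 2))          # viewers looking all over the frame
print("engaging clip dispersion:", round(video_dispersion(tight), 3))
print("less engaging clip dispersion:", round(video_dispersion(scattered), 3))
```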

  15. Computing eye gaze metrics for the automatic assessment of radiographer performance during X-ray image interpretation.

    Science.gov (United States)

    McLaughlin, Laura; Bond, Raymond; Hughes, Ciara; McConnell, Jonathan; McFadden, Sonyia

    2017-09-01

    To investigate image interpretation performance by diagnostic radiography students, diagnostic radiographers and reporting radiographers by computing eye gaze metrics using eye tracking technology. Three groups of participants were studied during their interpretation of 8 digital radiographic images including the axial and appendicular skeleton, and chest (prevalence of normal images was 12.5%). A total of 464 image interpretations were collected. Participants consisted of 21 radiography students, 19 qualified radiographers and 18 qualified reporting radiographers who were further qualified to report on the musculoskeletal (MSK) system. Eye tracking data was collected using the Tobii X60 eye tracker and subsequently eye gaze metrics were computed. Voice recordings, confidence levels and diagnoses provided a clear demonstration of the image interpretation and the cognitive processes undertaken by each participant. A questionnaire afforded the participants an opportunity to offer information on their experience in image interpretation and their opinion on the eye tracking technology. Reporting radiographers demonstrated a 15% greater accuracy rate (p≤0.001), were more confident (p≤0.001) and took a mean of 2.4s longer to clinically decide on all features compared to students. Reporting radiographers also had a 15% greater accuracy rate (p≤0.001), were more confident (p≤0.001) and took longer to clinically decide on an image diagnosis (p=0.02) than radiographers. Reporting radiographers had a greater mean fixation duration (p=0.01), mean fixation count (p=0.04) and mean visit count (p=0.04) within the areas of pathology compared to students. Eye tracking patterns, presented within heat maps, were a good reflection of group expertise and search strategies. Eye gaze metrics such as time to first fixate, fixation count, fixation duration and visit count within the areas of pathology were indicative of the radiographer's competency. The accuracy and confidence of
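
    The gaze metrics named above (time to first fixate, fixation count, and fixation duration within the area of pathology) are area-of-interest summaries of a fixation list. The sketch below shows one plausible way to compute them; the data structure, field names, AOI rectangle, and example fixations are hypothetical, not the study's export format.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Fixation:
    x: float          # screen coordinates
    y: float
    onset_ms: float
    duration_ms: float

def aoi_metrics(fixations: List[Fixation], aoi: Tuple[float, float, float, float]):
    """Summarize fixations falling inside a rectangular area of interest (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = aoi
    inside = [f for f in fixations if x0 <= f.x <= x1 and y0 <= f.y <= y1]
    return {
        "time_to_first_fixate_ms": inside[0].onset_ms if inside else None,
        "fixation_count": len(inside),
        "total_fixation_duration_ms": sum(f.duration_ms for f in inside),
    }

fixations = [Fixation(300, 200, 0, 180), Fixation(512, 410, 220, 340),
             Fixation(530, 425, 600, 260), Fixation(900, 700, 900, 150)]
pathology_aoi = (480, 380, 560, 460)   # hypothetical bounding box around the pathology
print(aoi_metrics(fixations, pathology_aoi))
```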

  16. Role of Gaze Cues in Interpersonal Motor Coordination: Towards Higher Affiliation in Human-Robot Interaction.

    Directory of Open Access Journals (Sweden)

    Mahdi Khoramshahi

    Full Text Available The ability to follow one another's gaze plays an important role in our social cognition, especially when we synchronously perform tasks together. We investigate how gaze cues can improve performance in a simple coordination task (i.e., the mirror game), whereby two players mirror each other's hand motions. In this game, each player is either a leader or follower. To study the effect of gaze in a systematic manner, the leader's role is played by a robotic avatar. We contrast two conditions, in which the avatar does or does not provide explicit gaze cues that indicate the next location of its hand. Specifically, we investigated (a) whether participants are able to exploit these gaze cues to improve their coordination, (b) how gaze cues affect action prediction and temporal coordination, and (c) whether introducing active gaze behavior for avatars makes them more realistic and human-like (from the user's point of view). Forty-three subjects participated in 8 trials of the mirror game. Each subject performed the game in the two conditions (with and without gaze cues). In this within-subject study, the order of the conditions was randomized across participants, and the avatar's realism was assessed by administering a post-hoc questionnaire. When gaze cues were provided, a quantitative assessment of synchrony between participants and the avatar revealed a significant improvement in subject reaction time (RT). This confirms our hypothesis that gaze cues improve the follower's ability to predict the avatar's action. An analysis of the pattern of frequency across the two players' hand movements reveals that the gaze cues improve the overall temporal coordination across the two players. Finally, analysis of the subjective evaluations from the questionnaires reveals that, in the presence of gaze cues, participants found the avatar not only more human-like/realistic but also easier to interact with. This work confirms that people can exploit gaze cues to

  17. Gender and facial dominance in gaze cuing: Emotional context matters in the eyes that we follow

    NARCIS (Netherlands)

    Ohlsen, G.; van Zoest, W.; van Vugt, M.

    2013-01-01

    Gaze following is a socio-cognitive process that provides adaptive information about potential threats and opportunities in the individual's environment. The aim of the present study was to investigate the potential interaction between emotional context and facial dominance in gaze following. We

  18. Recognition of Emotion from Facial Expressions with Direct or Averted Eye Gaze and Varying Expression Intensities in Children with Autism Disorder and Typically Developing Children

    Directory of Open Access Journals (Sweden)

    Dina Tell

    2014-01-01

    Full Text Available Eye gaze direction and expression intensity effects on emotion recognition in children with autism disorder and typically developing children were investigated. Children with autism disorder and typically developing children identified happy and angry expressions equally well. Children with autism disorder, however, were less accurate in identifying fear expressions across intensities and eye gaze directions. Children with autism disorder rated expressions with direct eyes, and 50% expressions, as more intense than typically developing children. A trend was also found for sad expressions, as children with autism disorder were less accurate in recognizing sadness at 100% intensity with direct eyes than typically developing children. Although the present research showed that children with autism disorder are sensitive to eye gaze direction, impairments in the recognition of fear, and possibly sadness, exist. Furthermore, children with autism disorder and typically developing children perceive the intensity of emotional expressions differently.

  19. Single gaze gestures

    DEFF Research Database (Denmark)

    Møllenbach, Emilie; Lilholm, Martin; Gail, Alastair

    2010-01-01

    This paper examines gaze gestures and their applicability as a generic selection method for gaze-only controlled interfaces. The method explored here is the Single Gaze Gesture (SGG), i.e. gestures consisting of a single point-to-point eye movement. Horizontal and vertical, long and short SGGs were...

  20. Implicit prosody mining based on the human eye image capture technology

    Science.gov (United States)

    Gao, Pei-pei; Liu, Feng

    2013-08-01

    Eye tracking has become a principal method for analyzing recognition issues in human-computer interaction, and human eye image capture is the key problem in eye tracking. Building on this research, a new human-computer interaction method is introduced to enrich speech synthesis. We propose a method of Implicit Prosody mining based on human eye image capture technology: parameters are extracted from images of the eyes during reading and used to control and drive prosody generation in speech synthesis, establishing a prosodic model with high simulation accuracy. The duration model is a key issue for prosody generation. For the duration model, this paper puts forward a new idea for obtaining the gaze duration of the eyes during reading from eye image capture, and for synchronously controlling this duration and the pronunciation duration in speech synthesis. Eye movement during reading is a complex, multi-factor interactive process involving gaze, twitching, and backsight. Therefore, how to extract the appropriate information from eye images needs to be considered, and the gaze regularities of the eyes need to be obtained as references for modeling. Based on an analysis of three current eye movement control models and the characteristics of Implicit Prosody reading, the relative independence of the text speech-processing system and the eye movement control system is discussed. It is shown that, under the same text familiarity condition, the gaze duration during reading and the internal voice pronunciation duration are synchronous. An eye gaze duration model based on Chinese prosodic structure at the linguistic level is presented to replace previous machine learning and probability forecasting methods, to obtain readers' real internal reading rhythm, and to synthesize speech with personalized rhythm. This research will enrich human-computer interaction and will have practical significance and application prospects in terms of

  1. Eye movements reveal epistemic curiosity in human observers.

    Science.gov (United States)

    Baranes, Adrien; Oudeyer, Pierre-Yves; Gottlieb, Jacqueline

    2015-12-01

    Saccadic (rapid) eye movements are primary means by which humans and non-human primates sample visual information. However, while saccadic decisions are intensively investigated in instrumental contexts where saccades guide subsequent actions, it is largely unknown how they may be influenced by curiosity - the intrinsic desire to learn. While saccades are sensitive to visual novelty and visual surprise, no study has examined their relation to epistemic curiosity - interest in symbolic, semantic information. To investigate this question, we tracked the eye movements of human observers while they read trivia questions and, after a brief delay, were visually given the answer. We show that higher curiosity was associated with earlier anticipatory orienting of gaze toward the answer location without changes in other metrics of saccades or fixations, and that these influences were distinct from those produced by variations in confidence and surprise. Across subjects, the enhancement of anticipatory gaze was correlated with measures of trait curiosity from personality questionnaires. Finally, a machine learning algorithm could predict curiosity in a cross-subject manner, relying primarily on statistical features of the gaze position before the answer onset and independently of covariations in confidence or surprise, suggesting potential practical applications for educational technologies, recommender systems and research in cognitive sciences. With this article, we provide full access to the annotated database allowing readers to reproduce the results. Epistemic curiosity produces specific effects on oculomotor anticipation that can be used to read out curiosity states. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
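
    The cross-subject prediction mentioned above amounts to a standard feature-extraction-plus-classifier pipeline over pre-answer gaze data. The sketch below uses synthetic data, toy features and a logistic-regression classifier as stand-ins; the study's actual feature set, classifier and annotated database are not reproduced here.

        # Illustrative pipeline: predict high vs. low curiosity from pre-answer
        # gaze statistics. Features, labels and classifier are assumptions.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        def gaze_features(trace, answer_x):
            """Summarize gaze position before answer onset (toy features)."""
            xs = np.asarray(trace)
            return [xs.mean(), xs.std(), np.abs(xs - answer_x).min()]

        # Synthetic example data: 40 trials, gaze drifts toward the answer
        # location (x = 800) more strongly on high-curiosity trials.
        X, y = [], []
        for _ in range(40):
            curious = int(rng.integers(0, 2))
            drift = 300 * curious + rng.normal(0, 50)
            trace = 400 + drift * np.linspace(0, 1, 60) + rng.normal(0, 20, 60)
            X.append(gaze_features(trace, answer_x=800))
            y.append(curious)

        scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
        print("cross-validated accuracy:", scores.mean())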

  2. "Wolves (Canis lupus) and dogs (Canis familiaris) differ in following human gaze into distant space but respond similar to their packmates' gaze": Correction to Werhahn et al. (2016).

    Science.gov (United States)

    2017-02-01

    Reports an error in "Wolves ( Canis lupus ) and dogs ( Canis familiaris ) differ in following human gaze into distant space but respond similar to their packmates' gaze" by Geraldine Werhahn, Zsófia Virányi, Gabriela Barrera, Andrea Sommese and Friederike Range ( Journal of Comparative Psychology , 2016[Aug], Vol 130[3], 288-298). In the article, the affiliations for the second and fifth authors should be Wolf Science Center, Ernstbrunn, Austria, and Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna/ Medical University of Vienna/University of Vienna. The online version of this article has been corrected. (The following abstract of the original article appeared in record 2016-26311-001.) Gaze following into distant space is defined as visual co-orientation with another individual's head direction allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills

  3. Investigating gaze-controlled input in a cognitive selection test

    OpenAIRE

    Gayraud, Katja; Hasse, Catrin; Eißfeldt, Hinnerk; Pannasch, Sebastian

    2017-01-01

    In the field of aviation, there is a growing interest in developing more natural forms of interaction between operators and systems to enhance safety and efficiency. These efforts also include eye gaze as an input channel for human-machine interaction. The present study investigates the application of gaze-controlled input in a cognitive selection test called Eye Movement Conflict Detection Test. The test enables eye movements to be studied as an indicator for psychological test performance a...

  4. Extended Fitts' model of pointing time in eye-gaze input system - Incorporating effects of target shape and movement direction into modeling.

    Science.gov (United States)

    Murata, Atsuo; Fukunaga, Daichi

    2018-04-01

    This study attempted to investigate the effects of the target shape and the movement direction on the pointing time using an eye-gaze input system and extend Fitts' model so that these factors are incorporated into the model and the predictive power of Fitts' model is enhanced. The target shape, the target size, the movement distance, and the direction of target presentation were set as within-subject experimental variables. The target shape included: a circle, and rectangles with an aspect ratio of 1:1, 1:2, 1:3, and 1:4. The movement direction included eight directions: upper, lower, left, right, upper left, upper right, lower left, and lower right. On the basis of the data for identifying the effects of the target shape and the movement direction on the pointing time, an attempt was made to develop a generalized and extended Fitts' model that took into account the movement direction and the target shape. As a result, the generalized and extended model was found to fit better to the experimental data, and be more effective for predicting the pointing time for a variety of human-computer interaction (HCI) task using an eye-gaze input system. Copyright © 2017. Published by Elsevier Ltd.
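
    For reference, the classic Fitts' law predicts movement time from an index of difficulty, MT = a + b * log2(D/W + 1). The abstract does not give the functional form of the extended model, so the extra direction and shape terms in the sketch below are only one plausible way such factors could enter a regression model; the coefficients are placeholders.

        # Basic Fitts' law plus a hypothetical "extended" variant with extra
        # regressors for movement direction and target aspect ratio. The
        # coefficients are placeholders; the paper's fitted model is not shown.

        import math

        def fitts_time(distance, width, a=0.3, b=0.15):
            """Classic Fitts' law: MT = a + b * log2(D / W + 1), in seconds."""
            index_of_difficulty = math.log2(distance / width + 1)
            return a + b * index_of_difficulty

        def extended_fitts_time(distance, width, direction_deg, aspect_ratio,
                                a=0.3, b=0.15, c=0.02, d=0.03):
            """Hypothetical extension: extra cost terms for oblique movement
            directions and elongated (non-square) targets."""
            mt = fitts_time(distance, width, a, b)
            obliqueness = abs(math.sin(math.radians(2 * direction_deg)))  # 0 at 0/90 deg
            elongation = abs(math.log2(aspect_ratio))                     # 0 for 1:1
            return mt + c * obliqueness + d * elongation

        print(fitts_time(distance=400, width=40))
        print(extended_fitts_time(400, 40, direction_deg=45, aspect_ratio=3.0))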

  5. Gender and facial dominance in gaze cuing: emotional context matters in the eyes that we follow.

    Directory of Open Access Journals (Sweden)

    Garian Ohlsen

    Full Text Available Gaze following is a socio-cognitive process that provides adaptive information about potential threats and opportunities in the individual's environment. The aim of the present study was to investigate the potential interaction between emotional context and facial dominance in gaze following. We used the gaze cue task to induce attention to or away from the location of a target stimulus. In the experiment, the gaze cue either belonged to a (dominant looking) male face or a (non-dominant looking) female face. Critically, prior to the task, individuals were primed with pictures of threat or no threat to induce either a dangerous or safe environment. Findings revealed that the primed emotional context critically influenced the gaze cuing effect. While a gaze cue of the dominant male face influenced performance in both the threat and no-threat conditions, the gaze cue of the non-dominant female face only influenced performance in the no-threat condition. This research suggests an implicit, context-dependent follower bias, which carries implications for research on visual attention, social cognition, and leadership.

  6. Limitations of gaze transfer: without visual context, eye movements do not help to coordinate joint action, whereas mouse movements do.

    Science.gov (United States)

    Müller, Romy; Helmert, Jens R; Pannasch, Sebastian

    2014-10-01

    Remote cooperation can be improved by transferring the gaze of one participant to the other. However, based on a partner's gaze, an interpretation of his communicative intention can be difficult. Thus, gaze transfer has been inferior to mouse transfer in remote spatial referencing tasks where locations had to be pointed out explicitly. Given that eye movements serve as an indicator of visual attention, it remains to be investigated whether gaze and mouse transfer differentially affect the coordination of joint action when the situation demands an understanding of the partner's search strategies. In the present study, a gaze or mouse cursor was transferred from a searcher to an assistant in a hierarchical decision task. The assistant could use this cursor to guide his movement of a window which continuously opened up the display parts the searcher needed to find the right solution. In this context, we investigated how the ease of using gaze transfer depended on whether a link could be established between the partner's eye movements and the objects he was looking at. Therefore, in addition to the searcher's cursor, the assistant either saw the positions of these objects or only a grey background. When the objects were visible, performance and the number of spoken words were similar for gaze and mouse transfer. However, without them, gaze transfer resulted in longer solution times and more verbal effort as participants relied more strongly on speech to coordinate the window movement. Moreover, an analysis of the spatio-temporal coupling of the transmitted cursor and the window indicated that when no visual object information was available, assistants confidently followed the searcher's mouse but not his gaze cursor. Once again, the results highlight the importance of carefully considering task characteristics when applying gaze transfer in remote cooperation. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Transition from Target to Gaze Coding in Primate Frontal Eye Field during Memory Delay and Memory–Motor Transformation

    Science.gov (United States)

    Sajad, Amirsaman; Sadeh, Morteza; Yan, Xiaogang; Wang, Hongying

    2016-01-01

    Abstract The frontal eye fields (FEFs) participate in both working memory and sensorimotor transformations for saccades, but their role in integrating these functions through time remains unclear. Here, we tracked FEF spatial codes through time using a novel analytic method applied to the classic memory-delay saccade task. Three-dimensional recordings of head-unrestrained gaze shifts were made in two monkeys trained to make gaze shifts toward briefly flashed targets after a variable delay (450-1500 ms). A preliminary analysis of visual and motor response fields in 74 FEF neurons eliminated most potential models for spatial coding at the neuron population level, as in our previous study (Sajad et al., 2015). We then focused on the spatiotemporal transition from an eye-centered target code (T; preferred in the visual response) to an eye-centered intended gaze position code (G; preferred in the movement response) during the memory delay interval. We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T–G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T–G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T–G delay codes to a “pure” G code in movement cells without delay activity. These results demonstrate that FEF activity undergoes a series of sensory–memory–motor transformations, including a dynamically evolving spatial memory signal and an imperfect memory-to-motor transformation. PMID:27092335

  8. Evaluation of Binocular Eye Trackers and Algorithms for 3D Gaze Interaction in Virtual Reality Environments

    OpenAIRE

    Thies Pfeiffer; Ipke Wachsmuth; Marc E. Latoschik

    2009-01-01

    Tracking user's visual attention is a fundamental aspect in novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces or dialogue-based communications with virtual and real agents greatly benefit from the analysis of the user's visual attention as a vital source for deictic references or turn-taking signals. Current approaches to determine visual attention rely primarily on monocular eye trackers. Hence they are restricted to the interpretation of...
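
    A common baseline for estimating a 3D point of regard from a binocular tracker is to intersect the two gaze rays, taking the midpoint of the shortest segment between them because the rays rarely meet exactly. The sketch below shows only this generic geometric step; it is not one of the specific algorithms evaluated in the cited work.

        # Midpoint-of-closest-approach between two gaze rays: a common baseline
        # for 3D point-of-regard estimation with binocular eye trackers.

        import numpy as np

        def point_of_regard(origin_l, dir_l, origin_r, dir_r):
            """Return the midpoint of the shortest segment between two rays."""
            o1, d1 = np.asarray(origin_l, float), np.asarray(dir_l, float)
            o2, d2 = np.asarray(origin_r, float), np.asarray(dir_r, float)
            d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
            # Solve for ray parameters t1, t2 minimizing |(o1 + t1*d1) - (o2 + t2*d2)|.
            b = np.array([np.dot(d1, o2 - o1), np.dot(d2, o2 - o1)])
            A = np.array([[1.0, -np.dot(d1, d2)],
                          [np.dot(d1, d2), -1.0]])
            t1, t2 = np.linalg.solve(A, b)
            p1, p2 = o1 + t1 * d1, o2 + t2 * d2
            return (p1 + p2) / 2.0

        # Eyes ~6.4 cm apart, both verging on a point ~50 cm straight ahead.
        left_eye, right_eye = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
        target = np.array([0.0, 0.0, 0.5])
        print(point_of_regard(left_eye, target - left_eye,
                              right_eye, target - right_eye))  # approx. [0, 0, 0.5]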

  9. Evidence for impairments in using static line drawings of eye gaze cues to orient visual-spatial attention in children with high functioning autism.

    Science.gov (United States)

    Goldberg, Melissa C; Mostow, Allison J; Vecera, Shaun P; Larson, Jennifer C Gidley; Mostofsky, Stewart H; Mahone, E Mark; Denckla, Martha B

    2008-09-01

    We examined the ability to use static line drawings of eye gaze cues to orient visual-spatial attention in children with high functioning autism (HFA) compared to typically developing children (TD). The task was organized such that on valid trials, gaze cues were directed toward the same spatial location as the appearance of an upcoming target, while on invalid trials gaze cues were directed to an opposite location. Unlike TD children, children with HFA showed no advantage in reaction time (RT) on valid trials compared to invalid trials (i.e., no significant validity effect). The two stimulus onset asynchronies (200 ms, 700 ms) did not differentially affect these findings. The results suggest that children with HFA show impairments in utilizing static line drawings of gaze cues to orient visual-spatial attention.
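
    The validity effect reported in this kind of cueing study is simply the mean reaction time on invalid trials minus the mean on valid trials, usually computed per stimulus onset asynchrony (and per participant). The helper below illustrates that calculation with made-up trial data; it is not the study's analysis code.

        # Generic computation of a gaze-cueing validity effect (invalid RT minus
        # valid RT) per stimulus onset asynchrony (SOA). Trial data are made up.

        from statistics import mean
        from collections import defaultdict

        def validity_effects(trials):
            """trials: iterable of (soa_ms, condition, rt_ms) tuples,
            condition in {"valid", "invalid"}. Returns {soa: effect_ms}."""
            rts = defaultdict(lambda: defaultdict(list))
            for soa, condition, rt in trials:
                rts[soa][condition].append(rt)
            return {soa: mean(conds["invalid"]) - mean(conds["valid"])
                    for soa, conds in rts.items()}

        example = [
            (200, "valid", 312), (200, "valid", 298), (200, "invalid", 335),
            (200, "invalid", 341), (700, "valid", 305), (700, "valid", 290),
            (700, "invalid", 308), (700, "invalid", 312),
        ]
        print(validity_effects(example))   # positive values = a validity advantage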

  10. Fear of Negative Evaluation Influences Eye Gaze in Adolescents with Autism Spectrum Disorder: A Pilot Study

    Science.gov (United States)

    White, Susan W.; Maddox, Brenna B.; Panneton, Robin K.

    2015-01-01

    Social anxiety is common among adolescents with Autism Spectrum Disorder (ASD). In this modest-sized pilot study, we examined the relationship between social worries and gaze patterns to static social stimuli in adolescents with ASD (n = 15) and gender-matched adolescents without ASD (control; n = 18). Among cognitively unimpaired adolescents with…

  11. Does beauty catch the eye?: Sex differences in gazing at attractive opposite-sex targets

    NARCIS (Netherlands)

    van Straaten, I.; Holland, R.; Finkenauer, C.; Hollenstein, T.; Engels, R.C.M.E.

    2010-01-01

    We investigated to what extent the length of people's gazes during conversations with opposite-sex persons is affected by the physical attractiveness of the partner. Single participants (N = 115) conversed for 5 min with confederates who were rated either as low or high on physical attractiveness.

  12. Studying the influence of race on the gaze cueing effect using eye tracking method

    Directory of Open Access Journals (Sweden)

    Galina Ya. Menshikova

    2017-06-01

    Full Text Available The gaze direction of another person is an important social cue, allowing us to orient quickly in social interactions. The effect of short-term redirection of visual attention to the same object that other people are looking at is known as the gaze cueing effect. There is evidence that the strength of this effect depends on many social factors, such as trust in a partner, her/his gender, social attitudes, etc. In our study we investigated the influence of the race of face stimuli on the strength of the gaze cueing effect. Using a modified Posner cueing task, an attentional shift was assessed in a scene where avatar faces of different races were used as distractors. Participants were instructed to fixate the black dot in the centre of the screen until it changed colour, and then to make a rightward or leftward saccade as soon as possible, depending on the colour of the fixation point. A male distractor face was shown in the centre of the screen simultaneously with the fixation point. The gaze direction of the distractor face changed from straight ahead to rightward or leftward at the moment the fixation point changed colour, and could be either congruent or incongruent with the saccade direction. We used face distractors of three race categories: Caucasian (own race) faces, and Asian and African (other race) faces. Twenty-five Caucasian participants took part in our study. The results showed that the race of the face distractors influences the strength of the gaze cueing effect, which was manifested in changes in the latency and velocity of the ongoing saccades.

  13. Gazing toward humans: a study on water rescue dogs using the impossible task paradigm.

    Science.gov (United States)

    D'Aniello, Biagio; Scandurra, Anna; Prato-Previde, Emanuela; Valsecchi, Paola

    2015-01-01

    Various studies have assessed the role of life experiences, including learning opportunities, living conditions and the quality of dog-human relationships, in the use of human cues and problem-solving ability. The current study investigates how and to what extent training affects the behaviour of dogs and the communication of dogs with humans by comparing dogs trained for a water rescue service and untrained pet dogs in the impossible task paradigm. Twenty-three certified water rescue dogs (the water rescue group) and 17 dogs with no training experience (the untrained group) were tested using a modified version of the impossible task described by Marshall-Pescini et al. in 2009. The results demonstrated that the water rescue dogs directed their first gaze significantly more often towards the owner and spent more time gazing toward two people compared to the untrained pet dogs. There was no difference between the dogs of the two groups as far as in the amount of time spent gazing at the owner or the stranger; neither in the interaction with the apparatus attempting to obtain food. The specific training regime, aimed at promoting cooperation during the performance of water rescue, could account for the longer gazing behaviour shown toward people by the water rescue dogs and the priority of gazing toward the owner. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Effect of narrowing the base of support on the gait, gaze and quiet eye of elite ballet dancers and controls.

    Science.gov (United States)

    Panchuk, Derek; Vickers, Joan N

    2011-08-01

    We determined the gaze and stepping behaviours of elite ballet dancers and controls as they walked normally and along progressively narrower 3-m lines (10.0, 2.5 cm). The ballet dancers delayed the first step and then stepped more quickly through the approach area and onto the lines, which they exited more slowly than the controls, which stepped immediately but then slowed their gait to navigate the line, which they exited faster. Contrary to predictions, the ballet group did not step more precisely, perhaps due to the unique anatomical requirements of ballet dance and/or due to releasing the degrees of freedom under their feet as they fixated ahead more than the controls. The ballet group used significantly fewer fixations of longer duration, and their final quiet eye (QE) duration prior to stepping on the line was significantly longer (2,353.39 ms) than the controls (1,327.64 ms). The control group favoured a proximal gaze strategy allocating 73.33% of their QE fixations to the line/off the line and 26.66% to the exit/visual straight ahead (VSA), while the ballet group favoured a 'look-ahead' strategy allocating 55.49% of their QE fixations to the exit/VSA and 44.51% on the line/off the line. The results are discussed in the light of the development of expertise and the enhanced role of fixations and visual attention when more tasks become more constrained.

  15. Gaze perception in social anxiety and social anxiety disorder

    Directory of Open Access Journals (Sweden)

    Lars Schulze

    2013-12-01

    Full Text Available Clinical observations suggest abnormal gaze perception to be an important indicator of social anxiety disorder (SAD). Experimental research has yet paid relatively little attention to the study of gaze perception in SAD. In this article we first discuss gaze perception in healthy human beings before reviewing self-referential and threat-related biases of gaze perception in clinical and non-clinical socially anxious samples. Relative to controls, socially anxious individuals exhibit an enhanced self-directed perception of gaze directions and demonstrate a pronounced fear of direct eye contact, though findings are less consistent regarding the avoidance of mutual gaze in SAD. Prospects for future research and clinical implications are discussed.

  16. Arthropods affecting the human eye.

    Science.gov (United States)

    Panadero-Fontán, Rosario; Otranto, Domenico

    2015-02-28

    Ocular infestations by arthropods consist in the parasitization of the human eye, either directly (e.g., some insect larvae causing ophthalmomyiasis) or via arthropods feeding on lachrymal/conjunctival secretions (e.g., some eye-seeking insects, which also act as vectors of eye pathogens). In addition, demodicosis and phthiriasis may also cause eye discomfort in humans. Ophthalmomyiasis by larvae of the families Oestridae, Calliphoridae and Sarcophagidae, are frequent causative agents of human ocular infestations. Over the last decades, the extensive use of macrocyclic lactones in cattle has reduced the frequency of infestations by Hypoderma bovis and Hypoderma lineatum (family Oestridae), and consequently, human infestations by these species. A prompt diagnosis of ocular myiasis (e.g., by serological tests) is pivotal for positive prognoses, particularly when the larvae are not detectable during the ophthalmologic examination. Molecular diagnoses may also assist physicians and parasitologists in achieving time-efficient diagnoses of infestations by Oestridae causing myiasis. Finally, due to widespread international travel to exotic destinations, cases of myiasis are increasing in non-endemic areas, therefore requiring physicians to acquire a profound knowledge of the clinical symptoms linked to these infestations to prevent costly, inappropriate treatments or severe complications. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Sociability and gazing toward humans in dogs and wolves: Simple behaviors with broad implications.

    Science.gov (United States)

    Bentosela, Mariana; Wynne, C D L; D'Orazio, M; Elgier, A; Udell, M A R

    2016-01-01

    Sociability, defined as the tendency to approach and interact with unfamiliar people, has been found to modulate some communicative responses in domestic dogs, including gaze behavior toward the human face. The objective of this study was to compare sociability and gaze behavior in pet domestic dogs and in human-socialized captive wolves in order to identify the relative influence of domestication and learning in the development of the dog-human bond. In Experiment 1, we assessed the approach behavior and social tendencies of dogs and wolves to a familiar and an unfamiliar person. In Experiment 2, we compared the animal's duration of gaze toward a person's face in the presence of food, which the animals could see but not access. Dogs showed higher levels of interspecific sociability than wolves in all conditions, including those where attention was unavailable. In addition, dogs gazed longer at the person's face than wolves in the presence of out-of-reach food. The potential contributions of domestication, associative learning, and experiences during ontogeny to prosocial behavior toward humans are discussed. © 2016 Society for the Experimental Analysis of Behavior.

  18. EEG Negativity in Fixations Used for Gaze-Based Control: Toward Converting Intentions into Actions with an Eye-Brain-Computer Interface

    Science.gov (United States)

    Shishkin, Sergei L.; Nuzhdin, Yuri O.; Svirin, Evgeny P.; Trofimov, Alexander G.; Fedorova, Anastasia A.; Kozyrskiy, Bogdan L.; Velichkovsky, Boris M.

    2016-01-01

    We usually look at an object when we are going to manipulate it. Thus, eye tracking can be used to communicate intended actions. An effective human-machine interface, however, should be able to differentiate intentional and spontaneous eye movements. We report an electroencephalogram (EEG) marker that differentiates gaze fixations used for control from spontaneous fixations involved in visual exploration. Eight healthy participants played a game with their eye movements only. Their gaze-synchronized EEG data (fixation-related potentials, FRPs) were collected during game's control-on and control-off conditions. A slow negative wave with a maximum in the parietooccipital region was present in each participant's averaged FRPs in the control-on conditions and was absent or had much lower amplitude in the control-off condition. This wave was similar but not identical to stimulus-preceding negativity, a slow negative wave that can be observed during feedback expectation. Classification of intentional vs. spontaneous fixations was based on amplitude features from 13 EEG channels using 300 ms length segments free from electrooculogram contamination (200–500 ms relative to the fixation onset). For the first fixations in the fixation triplets required to make moves in the game, classified against control-off data, a committee of greedy classifiers provided 0.90 ± 0.07 specificity and 0.38 ± 0.14 sensitivity. Similar (slightly lower) results were obtained for the shrinkage Linear Discriminate Analysis (LDA) classifier. The second and third fixations in the triplets were classified at lower rate. We expect that, with improved feature sets and classifiers, a hybrid dwell-based Eye-Brain-Computer Interface (EBCI) can be built using the FRP difference between the intended and spontaneous fixations. If this direction of BCI development will be successful, such a multimodal interface may improve the fluency of interaction and can possibly become the basis for a new input device
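
    The classification step described above (amplitude features from 13 channels in a 300 ms post-fixation window, shrinkage LDA, intentional versus spontaneous fixations) can be sketched with standard tools. The data below are synthetic and the epoching details are assumptions; only the general shrinkage-LDA approach follows what the abstract reports.

        # Sketch of intentional-vs-spontaneous fixation classification with
        # shrinkage LDA, loosely following the abstract (13 channels, amplitude
        # features from a post-fixation window). Data are synthetic.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n_trials, n_channels, n_times = 120, 13, 75   # 75 samples = 300 ms at 250 Hz

        # Synthetic fixation-related potentials: intentional fixations carry a
        # slow negative wave over parieto-occipital channels, spontaneous do not.
        y = rng.integers(0, 2, n_trials)              # 1 = intentional (control-on)
        X_epochs = rng.normal(0, 1, (n_trials, n_channels, n_times))
        X_epochs[y == 1, 8:, :] -= np.linspace(0, 1.5, n_times)

        # Amplitude features: mean amplitude in consecutive 100 ms sub-windows.
        X = X_epochs.reshape(n_trials, n_channels, 3, n_times // 3).mean(-1)
        X = X.reshape(n_trials, -1)

        lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
        print("CV accuracy:", cross_val_score(lda, X, y, cv=5).mean())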

  19. EEG Negativity in Fixations Used for Gaze-Based Control: Toward Converting Intentions into Actions with an Eye-Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Sergei L. Shishkin

    2016-11-01

    Full Text Available We usually look at an object when we are going to manipulate it. Thus, eye tracking can be used to communicate intended actions. An effective human-machine interface, however, should be able to differentiate intentional and spontaneous eye movements. We report an electroencephalogram (EEG) marker that differentiates gaze fixations used for control from spontaneous fixations involved in visual exploration. Eight healthy participants played a game with their eye movements only. Their gaze-synchronized EEG data (fixation-related potentials, FRPs) were collected during game’s control-on and control-off conditions. A slow negative wave with a maximum in the parietooccipital region was present in each participant’s averaged FRPs in the control-on conditions and was absent or had much lower amplitude in the control-off condition. This wave was similar but not identical to stimulus-preceding negativity, a slow negative wave that can be observed during feedback expectation. Classification of intentional vs. spontaneous fixations was based on amplitude features from 13 EEG channels using 300 ms length segments free from electrooculogram contamination (200–500 ms relative to the fixation onset). For the first fixations in the fixation triplets required to make moves in the game, classified against control-off data, a committee of greedy classifiers provided 0.90 ± 0.07 specificity and 0.38 ± 0.14 sensitivity. Similar (slightly lower) results were obtained for the shrinkage LDA classifier. The second and third fixations in the triplets were classified at lower rate. We expect that, with improved feature sets and classifiers, a hybrid dwell-based Eye-Brain-Computer Interface (EBCI) can be built using the FRP difference between the intended and spontaneous fixations. If this direction of BCI development will be successful, such a multimodal interface may improve the fluency of interaction and can possibly become the basis for a new input device for

  20. Spotting expertise in the eyes: billiards knowledge as revealed by gaze shifts in a dynamic visual prediction task.

    Science.gov (United States)

    Crespi, Sofia; Robino, Carlo; Silva, Ottavia; de'Sperati, Claudio

    2012-10-31

    In sports, as in other activities and knowledge domains, expertise is a highly valuable asset. We assessed whether expertise in billiards is associated with specific patterns of eye movements in a visual prediction task. Professional players and novices were presented a number of simplified billiard shots on a computer screen, previously filmed in a real set, with the last part of the ball trajectory occluded. They had to predict whether or not the ball would have hit the central skittle. Experts performed better than novices, in terms of both accuracy and response time. By analyzing eye movements, we found that during occlusion, experts rarely extrapolated with the gaze the occluded part of the ball trajectory-a behavior that was widely diffused in novices-even when the unseen path was long and with two bounces interposed. Rather, they looked selectively at specific diagnostic points on the cushions along the ball's visible trajectory, in accordance with a formal metrical system used by professional players to calculate the shot coordinates. Thus, the eye movements of expert observers contained a clear signature of billiard expertise and documented empirically a strategy upgrade in visual problem solving from dynamic, analog simulation in imagery to more efficient rule-based, conceptual knowledge.

  1. Social communication with virtual agents: The effects of body and gaze direction on attention and emotional responding in human observers.

    Science.gov (United States)

    Marschner, Linda; Pannasch, Sebastian; Schulz, Johannes; Graupner, Sven-Thomas

    2015-08-01

    In social communication, the gaze direction of other persons provides important information to perceive and interpret their emotional response. Previous research investigated the influence of gaze by manipulating mutual eye contact. Therefore, gaze and body direction have been changed as a whole, resulting in only congruent gaze and body directions (averted or directed) of another person. Here, we aimed to disentangle these effects by using short animated sequences of virtual agents posing with either direct or averted body or gaze. Attention allocation by means of eye movements, facial muscle response, and emotional experience to agents of different gender and facial expressions were investigated. Eye movement data revealed longer fixation durations, i.e., a stronger allocation of attention, when gaze and body direction were not congruent with each other or when both were directed towards the observer. This suggests that direct interaction as well as incongruous signals increase the demands of attentional resources in the observer. For the facial muscle response, only the reaction of muscle zygomaticus major revealed an effect of body direction, expressed by stronger activity in response to happy expressions for direct compared to averted gaze when the virtual character's body was directed towards the observer. Finally, body direction also influenced the emotional experience ratings towards happy expressions. While earlier findings suggested that mutual eye contact is the main source for increased emotional responding and attentional allocation, the present results indicate that direction of the virtual agent's body and head also plays a minor but significant role. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Eye-Tracking Technology and the Dynamics of Natural Gaze Behavior in Sports: A Systematic Review of 40 Years of Research.

    Science.gov (United States)

    Kredel, Ralf; Vater, Christian; Klostermann, André; Hossner, Ernst-Joachim

    2017-01-01

    Reviewing 60 studies on natural gaze behavior in sports, it becomes clear that, over the last 40 years, the use of eye-tracking devices has considerably increased. Specifically, this review reveals the large variance of methods applied, analyses performed, and measures derived within the field. The results of sub-sample analyses suggest that sports-related eye-tracking research strives, on the one hand, for ecologically valid test settings (i.e., viewing conditions and response modes), while on the other, for experimental control along with high measurement accuracy (i.e., controlled test conditions with high-frequency eye-trackers linked to algorithmic analyses). To meet both demands, some promising compromises of methodological solutions have been proposed-in particular, the integration of robust mobile eye-trackers in motion-capture systems. However, as the fundamental trade-off between laboratory and field research cannot be solved by technological means, researchers need to carefully weigh the arguments for one or the other approach by accounting for the respective consequences. Nevertheless, for future research on dynamic gaze behavior in sports, further development of the current mobile eye-tracking methodology seems highly advisable to allow for the acquisition and algorithmic analyses of larger amounts of gaze-data and further, to increase the explanatory power of the derived results.

  3. Eye-Tracking Technology and the Dynamics of Natural Gaze Behavior in Sports: A Systematic Review of 40 Years of Research

    Directory of Open Access Journals (Sweden)

    Ralf Kredel

    2017-10-01

    Full Text Available Reviewing 60 studies on natural gaze behavior in sports, it becomes clear that, over the last 40 years, the use of eye-tracking devices has considerably increased. Specifically, this review reveals the large variance of methods applied, analyses performed, and measures derived within the field. The results of sub-sample analyses suggest that sports-related eye-tracking research strives, on the one hand, for ecologically valid test settings (i.e., viewing conditions and response modes), while on the other, for experimental control along with high measurement accuracy (i.e., controlled test conditions with high-frequency eye-trackers linked to algorithmic analyses). To meet both demands, some promising compromises of methodological solutions have been proposed—in particular, the integration of robust mobile eye-trackers in motion-capture systems. However, as the fundamental trade-off between laboratory and field research cannot be solved by technological means, researchers need to carefully weigh the arguments for one or the other approach by accounting for the respective consequences. Nevertheless, for future research on dynamic gaze behavior in sports, further development of the current mobile eye-tracking methodology seems highly advisable to allow for the acquisition and algorithmic analyses of larger amounts of gaze-data and further, to increase the explanatory power of the derived results.

  4. EyeFrame: Real-time memory aid improves human multitasking via domain-general eye tracking procedures

    Directory of Open Access Journals (Sweden)

    P. Taylor

    2015-09-01

    Full Text Available OBJECTIVE: We developed an extensively general closed-loop system to improve human interaction in various multitasking scenarios, with semi-autonomous agents, processes, and robots. BACKGROUND: Much technology is converging toward semi-independent processes with intermittent human supervision distributed over multiple computerized agents. Human operators multitask notoriously poorly, in part due to cognitive load and limited working memory. To multitask optimally, users must remember task order, e.g., the most neglected task, since longer times not monitoring an element indicates greater probability of need for user input. The secondary task of monitoring attention history over multiple spatial tasks requires similar cognitive resources as primary tasks themselves. Humans can not reliably make more than ~2 decisions/s. METHODS: Participants managed a range of 4-10 semi-autonomous agents performing rescue tasks. To optimize monitoring and controlling multiple agents, we created an automated short term memory aid, providing visual cues from users' gaze history. Cues indicated when and where to look next, and were derived from an inverse of eye fixation recency. RESULTS: Contingent eye tracking algorithms drastically improved operator performance, increasing multitasking capacity. The gaze aid reduced biases, and reduced cognitive load, measured by smaller pupil dilation. CONCLUSIONS: Our eye aid likely helped by delegating short-term memory to the computer, and by reducing decision making load. Past studies used eye position for gaze-aware control and interactive updating of displays in application-specific scenarios, but ours is the first to successfully implement domain-general algorithms. Procedures should generalize well to: process control, factory operations, robot control, surveillance, aviation, air traffic control, driving, military, mobile search and rescue, and many tasks where probability of utility is predicted by duration since last
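
    The memory aid described above cues the operator toward whichever task element has gone longest without a fixation, i.e. cue priority grows with time since the last fixation on each element. The snippet below is a minimal sketch of that bookkeeping; the region names and the update interface are illustrative assumptions, not the authors' system.

        # Minimal sketch of a fixation-recency memory aid: the longer a region
        # has gone unfixated, the higher its cue priority. Names are examples.

        class GazeRecencyAid:
            def __init__(self, regions):
                self.last_fixation = {region: 0.0 for region in regions}

            def on_fixation(self, region, t):
                """Call whenever the eye tracker reports a fixation in `region`."""
                self.last_fixation[region] = t

            def cue_priorities(self, t):
                """Time since last fixation per region; largest = most neglected."""
                return {r: t - t_last for r, t_last in self.last_fixation.items()}

            def next_cue(self, t):
                priorities = self.cue_priorities(t)
                return max(priorities, key=priorities.get)

        aid = GazeRecencyAid(["agent_1", "agent_2", "agent_3", "agent_4"])
        aid.on_fixation("agent_1", t=2.0)
        aid.on_fixation("agent_3", t=5.0)
        print(aid.next_cue(t=8.0))   # "agent_2" (never fixated; ties broken by order)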

  5. Eye-gaze independent EEG-based brain-computer interfaces for communication

    Science.gov (United States)

    Riccio, A.; Mattia, D.; Simione, L.; Olivetti, M.; Cincotti, F.

    2012-08-01

    The present review systematically examines the literature reporting gaze independent interaction modalities in non-invasive brain-computer interfaces (BCIs) for communication. BCIs measure signals related to specific brain activity and translate them into device control signals. This technology can be used to provide users with severe motor disability (e.g. late stage amyotrophic lateral sclerosis (ALS); acquired brain injury) with an assistive device that does not rely on muscular contraction. Most of the studies on BCIs explored mental tasks and paradigms using visual modality. Considering that in ALS patients the oculomotor control can deteriorate and also other potential users could have impaired visual function, tactile and auditory modalities have been investigated over the past years to seek alternative BCI systems which are independent from vision. In addition, various attentional mechanisms, such as covert attention and feature-directed attention, have been investigated to develop gaze independent visual-based BCI paradigms. Three areas of research were considered in the present review: (i) auditory BCIs, (ii) tactile BCIs and (iii) independent visual BCIs. Out of a total of 130 search results, 34 articles were selected on the basis of pre-defined exclusion criteria. Thirteen articles dealt with independent visual BCIs, 15 reported on auditory BCIs and the last six on tactile BCIs, respectively. From the review of the available literature, it can be concluded that a crucial point is represented by the trade-off between BCI systems/paradigms with high accuracy and speed, but highly demanding in terms of attention and memory load, and systems requiring lower cognitive effort but with a limited amount of communicable information. These issues should be considered as priorities to be explored in future studies to meet users’ requirements in a real-life scenario.

  6. Exploring associations between gaze patterns and putative human mirror neuron system activity.

    Science.gov (United States)

    Donaldson, Peter H; Gurvich, Caroline; Fielding, Joanne; Enticott, Peter G

    2015-01-01

    The human mirror neuron system (MNS) is hypothesized to be crucial to social cognition. Given that key MNS-input regions such as the superior temporal sulcus are involved in biological motion processing, and mirror neuron activity in monkeys has been shown to vary with visual attention, aberrant MNS function may be partly attributable to atypical visual input. To examine the relationship between gaze pattern and interpersonal motor resonance (IMR; an index of putative MNS activity), healthy right-handed participants aged 18-40 (n = 26) viewed videos of transitive grasping actions or static hands, whilst the left primary motor cortex received transcranial magnetic stimulation. Motor-evoked potentials recorded in contralateral hand muscles were used to determine IMR. Participants also underwent eyetracking analysis to assess gaze patterns whilst viewing the same videos. No relationship was observed between predictive gaze and IMR. However, IMR was positively associated with fixation counts in areas of biological motion in the videos, and negatively associated with object areas. These findings are discussed with reference to visual influences on the MNS, and the possibility that MNS atypicalities might be influenced by visual processes such as aberrant gaze pattern.

  7. Exploring associations between gaze patterns and putative human mirror neuron system activity

    Directory of Open Access Journals (Sweden)

    Peter Hugh Donaldson

    2015-07-01

    Full Text Available The human mirror neuron system (MNS) is hypothesised to be crucial to social cognition. Given that key MNS-input regions such as the superior temporal sulcus are involved in biological motion processing, and mirror neuron activity in monkeys has been shown to vary with visual attention, aberrant MNS function may be partly attributable to atypical visual input. To examine the relationship between gaze pattern and interpersonal motor resonance (IMR; an index of putative MNS activity), healthy right-handed participants aged 18-40 (n = 26) viewed videos of transitive grasping actions or static hands, whilst the left primary motor cortex received transcranial magnetic stimulation (TMS). Motor-evoked potentials (MEPs) recorded in contralateral hand muscles were used to determine IMR. Participants also underwent eyetracking analysis to assess gaze patterns whilst viewing the same videos. No relationship was observed between predictive gaze (PG) and IMR. However, IMR was positively associated with fixation counts in areas of biological motion in the videos, and negatively associated with object areas. These findings are discussed with reference to visual influences on the MNS, and the possibility that MNS atypicalities might be influenced by visual processes such as aberrant gaze pattern.

  8. Look Together: Analyzing Gaze Coordination with Epistemic Network Analysis

    Directory of Open Access Journals (Sweden)

    Sean Andrist

    2015-07-01

    Full Text Available When conversing and collaborating in everyday situations, people naturally and interactively align their behaviors with each other across various communication channels, including speech, gesture, posture, and gaze. Having access to a partner's referential gaze behavior has been shown to be particularly important in achieving collaborative outcomes, but the process in which people's gaze behaviors unfold over the course of an interaction and become tightly coordinated is not well understood. In this paper, we present work to develop a deeper and more nuanced understanding of coordinated referential gaze in collaborating dyads. We recruited 13 dyads to participate in a collaborative sandwich-making task and used dual mobile eye tracking to synchronously record each participant's gaze behavior. We used a relatively new analysis technique—epistemic network analysis—to jointly model the gaze behaviors of both conversational participants. In this analysis, network nodes represent gaze targets for each participant, and edge strengths convey the likelihood of simultaneous gaze to the connected target nodes during a given time-slice. We divided collaborative task sequences into discrete phases to examine how the networks of shared gaze evolved over longer time windows. We conducted three separate analyses of the data to reveal (1) properties and patterns of how gaze coordination unfolds throughout an interaction sequence, (2) optimal time lags of gaze alignment within a dyad at different phases of the interaction, and (3) differences in gaze coordination patterns for interaction sequences that lead to breakdowns and repairs. In addition to contributing to the growing body of knowledge on the coordination of gaze behaviors in joint activities, this work has implications for the design of future technologies that engage in situated interactions with human users.
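
    The central data structure in such an analysis is a weighted network whose edges connect the two partners' gaze targets and whose weights reflect how often both targets are fixated in the same time slice. The sketch below builds a simple co-gaze matrix from two time-aligned target streams; it is a simplified stand-in, not the epistemic network analysis toolkit used in the paper.

        # Build a simple co-gaze (joint attention) matrix from two time-aligned
        # gaze-target streams, one per partner.

        from collections import Counter
        from itertools import product

        def co_gaze_counts(gaze_a, gaze_b):
            """gaze_a, gaze_b: equal-length lists of gaze targets per time slice.
            Returns a Counter mapping (target_of_A, target_of_B) -> co-occurrences."""
            return Counter(zip(gaze_a, gaze_b))

        def co_gaze_matrix(gaze_a, gaze_b, targets):
            counts = co_gaze_counts(gaze_a, gaze_b)
            total = max(len(gaze_a), 1)
            return {(a, b): counts[(a, b)] / total for a, b in product(targets, targets)}

        targets = ["bread", "knife", "partner_face"]
        gaze_a = ["bread", "bread", "knife", "partner_face", "knife"]
        gaze_b = ["bread", "knife", "knife", "partner_face", "bread"]
        for pair, weight in co_gaze_matrix(gaze_a, gaze_b, targets).items():
            if weight:
                print(pair, round(weight, 2))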

  9. Optics of the human cornea influence the accuracy of stereo eye-tracking methods: a simulation study.

    Science.gov (United States)

    Barsingerhorn, A D; Boonstra, F N; Goossens, H H L M

    2017-02-01

    Current stereo eye-tracking methods model the cornea as a sphere with one refractive surface. However, the human cornea is slightly aspheric and has two refractive surfaces. Here we used ray-tracing and the Navarro eye-model to study how these optical properties affect the accuracy of different stereo eye-tracking methods. We found that pupil size, gaze direction and head position all influence the reconstruction of gaze. Resulting errors range between ± 1.0 degrees at best. This shows that stereo eye-tracking may be an option if reliable calibration is not possible, but the applied eye-model should account for the actual optics of the cornea.
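
    The optical point above is that the cornea refracts light at two surfaces rather than one. The elementary building block of such a ray-tracing simulation is Snell's law in vector form, applied once at the anterior and once at the posterior corneal surface. The sketch below shows that step with refractive indices roughly in line with published schematic eye models; it is not the authors' simulation code, and the flat-surface example is purely illustrative.

        # Vector form of Snell's law: the elementary step of ray tracing through
        # the cornea's two refractive surfaces. Indices are approximate schematic
        # eye values (air 1.0, cornea ~1.376, aqueous humour ~1.336).

        import numpy as np

        def refract(direction, normal, n1, n2):
            """Refract a unit ray `direction` at a surface with unit `normal`
            (pointing against the incoming ray), from index n1 into n2."""
            d = direction / np.linalg.norm(direction)
            n = normal / np.linalg.norm(normal)
            cos_i = -np.dot(n, d)
            ratio = n1 / n2
            sin2_t = ratio**2 * (1.0 - cos_i**2)
            if sin2_t > 1.0:
                return None                      # total internal reflection
            return ratio * d + (ratio * cos_i - np.sqrt(1.0 - sin2_t)) * n

        ray = np.array([0.2, 0.0, 1.0])          # slightly oblique incoming ray
        normal = np.array([0.0, 0.0, -1.0])      # flat stand-in for a corneal surface
        inside_cornea = refract(ray, normal, 1.000, 1.376)
        in_aqueous = refract(inside_cornea, normal, 1.376, 1.336)
        print(inside_cornea, in_aqueous)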

  10. Towards emotion modeling based on gaze dynamics in generic interfaces

    DEFF Research Database (Denmark)

    Vester-Christensen, Martin; Leimberg, Denis; Ersbøll, Bjarne Kjær

    2005-01-01

    Gaze detection can be a useful ingredient in generic human computer interfaces if current technical barriers are overcome. We discuss the feasibility of concurrent posture and eye-tracking in the context of single (low cost) camera imagery. The ingredients in the approach are posture and eye region...

  11. An eye-tracking method to reveal the link between gazing patterns and pragmatic abilities in high functioning autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Ouriel Grynszpan

    2015-01-01

    Full Text Available The present study illustrates the potential advantages of an eye-tracking method for exploring the association between visual scanning of faces and inferences of mental states. Participants watched short videos involving social interactions and had to explain what they had seen. The number of cognition verbs (e.g. think, believe, know) in their answers was counted. Given the possible use of peripheral vision that could confound eye-tracking measures, we added a condition using a gaze-contingent viewing window: the entire visual display is blurred, except for an area that moves with the participant's gaze. Eleven typical adults and eleven high functioning adults with ASD were recruited. The condition employing the viewing window yielded strong correlations between the average duration of fixations, the ratio of cognition verbs and standard measures of social disabilities.
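
    A gaze-contingent viewing window of the kind described (the whole display blurred except a region that follows the participant's gaze) can be expressed as a per-frame image operation. The sketch below composites a sharp circular window onto a blurred frame at the current gaze position using OpenCV; the window radius and blur strength are illustrative parameters, not values from the study.

        # Gaze-contingent viewing window: blur the whole frame except a circular
        # region centred on the current gaze position. Parameters are illustrative.

        import numpy as np
        import cv2

        def gaze_window(frame, gaze_xy, radius=80, blur_ksize=51):
            """Return the frame blurred everywhere except around gaze_xy."""
            blurred = cv2.GaussianBlur(frame, (blur_ksize, blur_ksize), 0)
            mask = np.zeros(frame.shape[:2], dtype=np.uint8)
            cv2.circle(mask, gaze_xy, radius, 255, thickness=-1)
            mask3 = cv2.merge([mask, mask, mask]) / 255.0
            return (frame * mask3 + blurred * (1.0 - mask3)).astype(frame.dtype)

        # Example with a synthetic frame; in practice `gaze_xy` comes from the
        # eye tracker on every display refresh.
        frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
        out = gaze_window(frame, gaze_xy=(320, 240))
        print(out.shape)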

  12. Goats display audience-dependent human-directed gazing behaviour in a problem-solving task.

    Science.gov (United States)

    Nawroth, Christian; Brett, Jemma M; McElligott, Alan G

    2016-07-01

    Domestication is an important factor driving changes in animal cognition and behaviour. In particular, the capacity of dogs to communicate in a referential and intentional way with humans is considered a key outcome of how domestication as a companion animal shaped the canid brain. However, the lack of comparison with other domestic animals makes general conclusions about how domestication has affected these important cognitive features difficult. We investigated human-directed behaviour in an 'unsolvable problem' task in a domestic, but non-companion species: goats. During the test, goats experienced a forward-facing or an away-facing person. They gazed towards the forward-facing person earlier and for longer and showed more gaze alternations and a lower latency until the first gaze alternation when the person was forward-facing. Our results provide strong evidence for audience-dependent human-directed visual orienting behaviour in a species that was domesticated primarily for production, and show similarities with the referential and intentional communicative behaviour exhibited by domestic companion animals such as dogs and horses. This indicates that domestication has a much broader impact on heterospecific communication than previously believed. © 2016 The Author(s).

  13. Eye Gaze and Aging: Selective and Combined Effects of Working Memory and Inhibitory Control

    Directory of Open Access Journals (Sweden)

    Trevor J. Crawford

    2017-11-01

    Full Text Available Eye-tracking is increasingly studied as a cognitive and biological marker for the early signs of neuropsychological and psychiatric disorders. However, in order to make further progress, a more comprehensive understanding of the age-related effects on eye-tracking is essential. The antisaccade task requires participants to make saccadic eye movements away from a prepotent stimulus. Speculation on the cause of the observed age-related differences in the antisaccade task largely centers around two sources of cognitive dysfunction: inhibitory control (IC) and working memory (WM). The IC account views cognitive slowing and task errors as a direct result of the decline of inhibitory cognitive mechanisms. An alternative theory considers that a deterioration of WM is the cause of these age-related effects on behavior. The current study assessed IC and WM processes underpinning saccadic eye movements in young and older participants. This was achieved with three experimental conditions that systematically varied the extent to which WM and IC were taxed in the antisaccade task: a memory-guided task was used to explore the effect of increasing the WM load; a Go/No-Go task was used to explore the effect of increasing the inhibitory load; a ‘standard’ antisaccade task retained the standard WM and inhibitory loads. Saccadic eye movements were also examined in a control condition: the standard prosaccade task where the load of WM and IC were minimal or absent. Saccade latencies, error rates and the spatial accuracy of saccades of older participants were compared to the same measures in healthy young controls across the conditions. The results revealed that aging is associated with changes in both IC and WM. Increasing the inhibitory load was associated with increased reaction times in the older group, while the increased WM load and the inhibitory load contributed to an increase in the antisaccade errors. These results reveal that aging is associated with

  14. Eye Gaze and Aging: Selective and Combined Effects of Working Memory and Inhibitory Control.

    Science.gov (United States)

    Crawford, Trevor J; Smith, Eleanor S; Berry, Donna M

    2017-01-01

    Eye-tracking is increasingly studied as a cognitive and biological marker for the early signs of neuropsychological and psychiatric disorders. However, in order to make further progress, a more comprehensive understanding of the age-related effects on eye-tracking is essential. The antisaccade task requires participants to make saccadic eye movements away from a prepotent stimulus. Speculation on the cause of the observed age-related differences in the antisaccade task largely centers around two sources of cognitive dysfunction: inhibitory control (IC) and working memory (WM). The IC account views cognitive slowing and task errors as a direct result of the decline of inhibitory cognitive mechanisms. An alternative theory considers that a deterioration of WM is the cause of these age-related effects on behavior. The current study assessed IC and WM processes underpinning saccadic eye movements in young and older participants. This was achieved with three experimental conditions that systematically varied the extent to which WM and IC were taxed in the antisaccade task: a memory-guided task was used to explore the effect of increasing the WM load; a Go/No-Go task was used to explore the effect of increasing the inhibitory load; a 'standard' antisaccade task retained the standard WM and inhibitory loads. Saccadic eye movements were also examined in a control condition: the standard prosaccade task where the load of WM and IC were minimal or absent. Saccade latencies, error rates and the spatial accuracy of saccades of older participants were compared to the same measures in healthy young controls across the conditions. The results revealed that aging is associated with changes in both IC and WM. Increasing the inhibitory load was associated with increased reaction times in the older group, while the increased WM load and the inhibitory load contributed to an increase in the antisaccade errors. These results reveal that aging is associated with changes in both IC and

  15. Eye Movement Training and Suggested Gaze Strategies in Tunnel Vision - A Randomized and Controlled Pilot Study.

    Science.gov (United States)

    Ivanov, Iliya V; Mackeben, Manfred; Vollmer, Annika; Martus, Peter; Nguyen, Nhung X; Trauzettel-Klosinski, Susanne

    2016-01-01

    Degenerative retinal diseases, especially retinitis pigmentosa (RP), lead to severe peripheral visual field loss (tunnel vision), which impairs mobility. The lack of peripheral information leads to fewer horizontal eye movements and, thus, diminished scanning in RP patients in a natural environment walking task. This randomized controlled study aimed to improve mobility and the dynamic visual field by applying a compensatory Exploratory Saccadic Training (EST). Oculomotor responses during walking and avoiding obstacles in a controlled environment were studied before and after saccade or reading training in 25 RP patients. Eye movements were recorded using a mobile infrared eye tracker (Tobii glasses) that measured a range of spatial and temporal variables. Patients were randomly assigned to two training conditions: Saccade (experimental) and reading (control) training. All subjects who first performed reading training underwent experimental training later (waiting list control group). To assess the effect of training on subjects, we measured performance in the training task and the following outcome variables related to daily life: Response Time (RT) during exploratory saccade training, Percent Preferred Walking Speed (PPWS), the number of collisions with obstacles, eye position variability, fixation duration, and the total number of fixations including the ones in the subjects' blind area of the visual field. In the saccade training group, RTs on average decreased, while the PPWS significantly increased. The improvement persisted, as tested 6 weeks after the end of the training. On average, the eye movement range of RP patients before and after training was similar to that of healthy observers. In both, the experimental and reading training groups, we found many fixations outside the subjects' seeing visual field before and after training. The average fixation duration was significantly shorter after the training, but only in the experimental training condition

  16. I spy with my little eye: Analysis of airline pilots' gaze patterns in a manual instrument flight scenario.

    Science.gov (United States)

    Haslbeck, Andreas; Zhang, Bo

    2017-09-01

    The aim of this study was to analyze pilots' visual scanning in a manual approach and landing scenario. Manual flying skills suffer from increasing use of automation. In addition, predominantly long-haul pilots with only a few opportunities to practice these skills experience this decline. Airline pilots representing different levels of practice (short-haul vs. long-haul) had to perform a manual raw data precision approach while their visual scanning was recorded by an eye-tracking device. The analysis of gaze patterns, which are based on predominant saccades, revealed one main group of saccades among long-haul pilots. In contrast, short-haul pilots showed more balanced scanning using two different groups of saccades. Short-haul pilots generally demonstrated better manual flight performance and within this group, one type of scan pattern was found to facilitate the manual landing task more. Long-haul pilots tend to utilize visual scanning behaviors that are inappropriate for the manual ILS landing task. This lack of skills needs to be addressed by providing specific training and more practice. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. A Proactive Approach of Robotic Framework for Making Eye Contact with Humans

    Directory of Open Access Journals (Sweden)

    Mohammed Moshiul Hoque

    2014-01-01

    Full Text Available Making eye contact is one of the most important prerequisites for humans to initiate a conversation with others. However, it is not an easy task for a robot to make eye contact with a human if they are not facing each other initially or if the human is intensely engaged in his/her task. If the robot would like to start communication with a particular person, it should turn its gaze to that person and make eye contact with him/her. However, such a turning action alone is not enough to set up an eye contact phenomenon in all cases. Therefore, the robot should perform some stronger actions in some situations so that it can attract the target person before meeting his/her gaze. In this paper, we proposed a conceptual model of eye contact for social robots consisting of two phases: capturing attention and ensuring the attention capture. Evaluation experiments with human participants reveal the effectiveness of the proposed model in four viewing situations, namely, central field of view, near peripheral field of view, far peripheral field of view, and out of field of view.
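
    The two-phase model described above (first capture the target person's attention, then ensure that attention has been captured before meeting his/her gaze) can be sketched as a simple controller that escalates its attention-getting action according to the robot's position in the person's field of view. The four viewing situations come from the abstract; the concrete actions chosen for each are illustrative assumptions.

        # Sketch of the two-phase eye-contact model: pick an attention-capture
        # action whose strength depends on the robot's location in the target
        # person's field of view, then check that attention was captured before
        # meeting the person's gaze. Concrete actions are illustrative.

        ACTIONS_BY_VIEW = {
            "central":         "turn head toward person",
            "near_peripheral": "turn head and blink display",
            "far_peripheral":  "wave arm while turning head",
            "out_of_view":     "move into view, then wave and turn head",
        }

        def capture_attention(viewing_situation):
            """Phase 1: choose an action strong enough for the viewing situation."""
            return ACTIONS_BY_VIEW[viewing_situation]

        def ensure_and_make_eye_contact(person_is_looking_at_robot):
            """Phase 2: only meet the person's gaze once attention is captured."""
            if person_is_looking_at_robot:
                return "establish eye contact"
            return "repeat or strengthen attention-capture action"

        print(capture_attention("far_peripheral"))
        print(ensure_and_make_eye_contact(person_is_looking_at_robot=True))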

  18. Eye Movement Training and Suggested Gaze Strategies in Tunnel Vision - A Randomized and Controlled Pilot Study.

    Directory of Open Access Journals (Sweden)

    Iliya V Ivanov

Full Text Available Degenerative retinal diseases, especially retinitis pigmentosa (RP), lead to severe peripheral visual field loss (tunnel vision), which impairs mobility. The lack of peripheral information leads to fewer horizontal eye movements and, thus, diminished scanning in RP patients in a natural environment walking task. This randomized controlled study aimed to improve mobility and the dynamic visual field by applying a compensatory Exploratory Saccadic Training (EST). Oculomotor responses during walking and avoiding obstacles in a controlled environment were studied before and after saccade or reading training in 25 RP patients. Eye movements were recorded using a mobile infrared eye tracker (Tobii glasses) that measured a range of spatial and temporal variables. Patients were randomly assigned to two training conditions: Saccade (experimental) and reading (control) training. All subjects who first performed reading training underwent experimental training later (waiting list control group). To assess the effect of training on subjects, we measured performance in the training task and the following outcome variables related to daily life: Response Time (RT) during exploratory saccade training, Percent Preferred Walking Speed (PPWS), the number of collisions with obstacles, eye position variability, fixation duration, and the total number of fixations including the ones in the subjects' blind area of the visual field. In the saccade training group, RTs on average decreased, while the PPWS significantly increased. The improvement persisted, as tested 6 weeks after the end of the training. On average, the eye movement range of RP patients before and after training was similar to that of healthy observers. In both the experimental and reading training groups, we found many fixations outside the subjects' seeing visual field before and after training. The average fixation duration was significantly shorter after the training, but only in the experimental training condition.

  19. A Model of the Human Eye

    Science.gov (United States)

    Colicchia, G.; Wiesner, H.; Waltner, C.; Zollman, D.

    2008-01-01

    We describe a model of the human eye that incorporates a variable converging lens. The model can be easily constructed by students with low-cost materials. It shows in a comprehensible way the functionality of the eye's optical system. Images of near and far objects can be focused. Also, the defects of near and farsighted eyes can be demonstrated.
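
    The variable converging lens in such a model follows the thin-lens relation 1/f = 1/d_o + 1/d_i: accommodation changes the focal length f so that the image distance d_i stays fixed at the retina for both near and far objects. The short sketch below illustrates this; the 22 mm axial eye length and the example object distances are textbook approximations, not values from the article.

    ```python
    def focal_length_needed(object_distance_m, eye_length_m=0.022):
        """Thin-lens focal length that images an object at the given distance
        exactly onto the retina (image distance = axial eye length)."""
        return 1.0 / (1.0 / object_distance_m + 1.0 / eye_length_m)

    for d in (10.0, 0.25):  # a far object (10 m) and a near one (25 cm)
        f = focal_length_needed(d)
        print(f"object at {d} m -> required focal length {f * 1000:.1f} mm ({1 / f:.1f} D)")

    # A nearsighted eye is effectively too long (or too powerful), so far objects focus
    # in front of the retina; a farsighted eye shows the opposite defect.
    ```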

  20. Monte-Carlo simulation of proton radiotherapy for human eye

    International Nuclear Information System (INIS)

    Liu Yunpeng; Tang Xiaobin; Xie Qin; Chen Feida; Geng Changran; Chen Da

    2010-01-01

A 62 MeV proton beam was selected to develop an MCNPX model of the human eye and approximate the dose delivered by proton therapy. In the course of proton therapy, two treatment simulations were considered. The first simulation was an ideal treatment scenario. In this case, the tumor dose was 50.03 Gy, which was at the level of effective treatment, while other tissues remained within the range of acceptable dose. The second case was a worst-case scenario simulating a patient gazing directly into the treatment beam during therapy. The bulk of the dose was deposited in the cornea, lens, and anterior chamber region, whereas the dose to the tumor area was zero. The calculated results agree with the reference data, which confirms that the MCNPX code can simulate proton radiotherapy well and is a capable platform for patient planning. The data from the worst case can be used for dose reconstruction after a clinical accident. (authors)

  1. Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting.

    Science.gov (United States)

    Atabaki, Artin; Marciniak, Karolina; Dicke, Peter W; Thier, Peter

    2015-07-01

Despite the ecological importance of gaze following, little is known about the underlying neuronal processes, which allow us to extract gaze direction from the geometric features of the eye and head of a conspecific. In order to understand the neuronal mechanisms underlying this ability, a careful description of the capacity and the limitations of gaze following at the behavioral level is needed. Previous studies of gaze following that relied on naturalistic settings have the disadvantage of allowing only very limited control of potentially relevant visual features guiding gaze following, such as the contrast of iris and sclera or the shape of the eyelids, and--in the case of photographs--they lack depth. Hence, in order to get full control of potentially relevant features, we decided to study gaze following of human observers guided by the gaze of a human avatar seen stereoscopically. To this end we established a stereoscopic 3D virtual reality setup, in which we tested human subjects' ability to detect which target a human avatar was looking at. Following the gaze of the avatar showed all the features of the gaze following of a natural person, namely a substantial degree of precision associated with a consistent pattern of systematic deviations from the target. Poor stereo vision affected performance surprisingly little (only in certain experimental conditions). Only gaze following guided by targets at larger downward eccentricities exhibited a differential effect of the presence or absence of accompanying movements of the avatar's eyelids and eyebrows. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. The visual development of hand-centered receptive fields in a neural network model of the primate visual system trained with experimentally recorded human gaze changes.

    Science.gov (United States)

    Galeazzi, Juan M; Navajas, Joaquín; Mender, Bedeho M W; Quian Quiroga, Rodrigo; Minini, Loredana; Stringer, Simon M

    2016-01-01

    Neurons have been found in the primate brain that respond to objects in specific locations in hand-centered coordinates. A key theoretical challenge is to explain how such hand-centered neuronal responses may develop through visual experience. In this paper we show how hand-centered visual receptive fields can develop using an artificial neural network model, VisNet, of the primate visual system when driven by gaze changes recorded from human test subjects as they completed a jigsaw. A camera mounted on the head captured images of the hand and jigsaw, while eye movements were recorded using an eye-tracking device. This combination of data allowed us to reconstruct the retinal images seen as humans undertook the jigsaw task. These retinal images were then fed into the neural network model during self-organization of its synaptic connectivity using a biologically plausible trace learning rule. A trace learning mechanism encourages neurons in the model to learn to respond to input images that tend to occur in close temporal proximity. In the data recorded from human subjects, we found that the participant's gaze often shifted through a sequence of locations around a fixed spatial configuration of the hand and one of the jigsaw pieces. In this case, trace learning should bind these retinal images together onto the same subset of output neurons. The simulation results consequently confirmed that some cells learned to respond selectively to the hand and a jigsaw piece in a fixed spatial configuration across different retinal views.
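
    The trace learning rule mentioned above strengthens connections onto output neurons whose recent, temporally smoothed activity coincides with the current input, so retinal views that follow each other within a gaze sequence become bound to the same cells. The sketch below shows one common formulation of such a rule (a generic exponential trace with a Hebbian update); it is only illustrative and not necessarily the exact variant used in VisNet.

    ```python
    import numpy as np

    def trace_learning(images, n_outputs=50, eta=0.8, alpha=0.01, seed=0):
        """Toy trace rule: each output neuron keeps an exponentially decaying trace
        of its activity; weights move toward inputs that co-occur with a high trace."""
        rng = np.random.default_rng(seed)
        w = rng.normal(scale=0.01, size=(n_outputs, images.shape[1]))
        trace = np.zeros(n_outputs)
        for x in images:                              # images in temporal (gaze) order
            y = w @ x                                 # feed-forward activation
            trace = (1 - eta) * y + eta * trace       # temporal trace of activity
            w += alpha * np.outer(trace, x)           # Hebbian update gated by the trace
            w /= np.linalg.norm(w, axis=1, keepdims=True)  # keep weights bounded
        return w

    # Example: a sequence of 200 random "retinal images" of 1024 pixels each
    weights = trace_learning(np.random.rand(200, 1024))
    print(weights.shape)  # (50, 1024)
    ```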

  3. Gaze-controlled Driving

    DEFF Research Database (Denmark)

    Tall, Martin; Alapetite, Alexandre; San Agustin, Javier

    2009-01-01

We investigate if the gaze (point of regard) can control a remote vehicle driving on a racing track. Five different input devices (on-screen buttons, mouse pointing, a low-cost webcam eye tracker and two commercial eye tracking systems) provide heading and speed control on the scene view transmitted

  4. Context-sensitivity in Conversation. Eye gaze and the German Repair Initiator ‘bitte?’ (´pardon?´)

    DEFF Research Database (Denmark)

    Egbert, Maria

    1996-01-01

    . In addition, repair is sensitive to certain characteristics of social situations. The selection of a particular repair initiator, German bitte? ‘pardon?’, indexes that there is no mutual gaze between interlocutors; i.e., there is no common course of action. The selection of bitte? not only initiates repair......; it also spurs establishment of mutual gaze, and thus displays that there is attention to a common focus. (Conversation analysis, context, cross-linguistic analysis, repair, gaze, telephone conversation, co-present interaction, grammar and interaction)...

  5. Social eye gaze modulates processing of speech and co-speech gesture

    NARCIS (Netherlands)

    Holler, J.; Schubotz, L.M.R.; Kelly, S.D.; Hagoort, P.; Schütze, M.; Özyürek, A.

    2014-01-01

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this

  6. Right Hemispheric Dominance in Gaze-Triggered Reflexive Shift of Attention in Humans

    Science.gov (United States)

    Okada, Takashi; Sato, Wataru; Toichi, Motomi

    2006-01-01

    Recent findings suggest a right hemispheric dominance in gaze-triggered shifts of attention. The aim of this study was to clarify the dominant hemisphere in the gaze processing that mediates attentional shift. A target localization task, with preceding non-predicative gaze cues presented to each visual field, was undertaken by 44 healthy subjects,…

  7. The response of guide dogs and pet dogs (Canis familiaris) to cues of human referential communication (pointing and gaze).

    Science.gov (United States)

    Ittyerah, Miriam; Gaunet, Florence

    2009-03-01

    The study raises the question of whether guide dogs and pet dogs are expected to differ in response to cues of referential communication given by their owners; especially since guide dogs grow up among sighted humans, and while living with their blind owners, they still have interactions with several sighted people. Guide dogs and pet dogs were required to respond to point, point and gaze, gaze and control cues of referential communication given by their owners. Results indicate that the two groups of dogs do not differ from each other, revealing that the visual status of the owner is not a factor in the use of cues of referential communication. Both groups of dogs have higher frequencies of performance and faster latencies for the point and the point and gaze cues as compared to gaze cue only. However, responses to control cues are below chance performance for the guide dogs, whereas the pet dogs perform at chance. The below chance performance of the guide dogs may be explained by a tendency among them to go and stand by the owner. The study indicates that both groups of dogs respond similarly in normal daily dyadic interaction with their owners and the lower comprehension of the human gaze may be a less salient cue among dogs in comparison to the pointing gesture.

  8. Dysfunctional gaze processing in bipolar disorder

    Directory of Open Access Journals (Sweden)

    Cristina Berchio

    2017-01-01

    The present study provides neurophysiological evidence for abnormal gaze processing in BP and suggests dysfunctional processing of direct eye contact as a prominent characteristic of bipolar disorder.

  9. Judgments at Gaze Value: Gaze Cuing in Banner Advertisements, Its Effect on Attention Allocation and Product Judgments

    Directory of Open Access Journals (Sweden)

    Johanna Palcu

    2017-06-01

Full Text Available Banner advertising is a popular means of promoting products and brands online. Although banner advertisements are often designed to be particularly attention grabbing, they frequently go unnoticed. Applying an eye-tracking procedure, the present research aimed to (a) determine whether presenting human faces (static or animated) in banner advertisements is an adequate tool for capturing consumers' attention and thus overcoming the frequently observed phenomenon of banner blindness, (b) to examine whether the gaze of a featured face possesses the ability to direct consumers' attention toward specific elements (i.e., the product) in an advertisement, and (c) to establish whether the gaze direction of an advertised face influences consumers' subsequent evaluation of the advertised product. We recorded participants' eye gaze while they viewed a fictional online shopping page displaying banner advertisements that featured either no human face or a human face that was either static or animated and involved different gaze directions (toward or away from the advertised product). Moreover, we asked participants to subsequently evaluate a set of products, one of which was the product previously featured in the banner advertisement. Results showed that, when advertisements included a human face, participants' attention was more attracted by and they looked longer at animated compared with static banner advertisements. Moreover, when a face gazed toward the product region, participants' likelihood of looking at the advertised product increased regardless of whether the face was animated or not. Most important, gaze direction influenced subsequent product evaluations; that is, consumers indicated a higher intention to buy a product when it was previously presented in a banner advertisement that featured a face that gazed toward the product. The results suggest that while animation in banner advertising constitutes a salient feature that captures consumers

  10. Judgments at Gaze Value: Gaze Cuing in Banner Advertisements, Its Effect on Attention Allocation and Product Judgments.

    Science.gov (United States)

    Palcu, Johanna; Sudkamp, Jennifer; Florack, Arnd

    2017-01-01

    Banner advertising is a popular means of promoting products and brands online. Although banner advertisements are often designed to be particularly attention grabbing, they frequently go unnoticed. Applying an eye-tracking procedure, the present research aimed to (a) determine whether presenting human faces (static or animated) in banner advertisements is an adequate tool for capturing consumers' attention and thus overcoming the frequently observed phenomenon of banner blindness, (b) to examine whether the gaze of a featured face possesses the ability to direct consumers' attention toward specific elements (i.e., the product) in an advertisement, and (c) to establish whether the gaze direction of an advertised face influences consumers subsequent evaluation of the advertised product. We recorded participants' eye gaze while they viewed a fictional online shopping page displaying banner advertisements that featured either no human face or a human face that was either static or animated and involved different gaze directions (toward or away from the advertised product). Moreover, we asked participants to subsequently evaluate a set of products, one of which was the product previously featured in the banner advertisement. Results showed that, when advertisements included a human face, participants' attention was more attracted by and they looked longer at animated compared with static banner advertisements. Moreover, when a face gazed toward the product region, participants' likelihood of looking at the advertised product increased regardless of whether the face was animated or not. Most important, gaze direction influenced subsequent product evaluations; that is, consumers indicated a higher intention to buy a product when it was previously presented in a banner advertisement that featured a face that gazed toward the product. The results suggest that while animation in banner advertising constitutes a salient feature that captures consumers' visual attention, gaze

  11. The Way Dogs (Canis familiaris Look at Human Emotional Faces Is Modulated by Oxytocin. An Eye-Tracking Study

    Directory of Open Access Journals (Sweden)

    Anna Kis

    2017-10-01

Full Text Available Dogs have been shown to excel in reading human social cues, including facial cues. In the present study we used eye-tracking technology to further study dogs' face processing abilities. It was found that dogs discriminated between human facial regions in their spontaneous viewing pattern and looked most at the eye region independently of facial expression. Furthermore, dogs paid most attention to the first two images presented; afterwards their attention dramatically decreased, a finding that has methodological implications. Increasing evidence indicates that the oxytocin system is involved in dogs' human-directed social competence, so as a next step we investigated the effects of oxytocin on the processing of human facial emotions. It was found that oxytocin decreased dogs' looking at human faces expressing an angry emotional expression. More interestingly, however, after oxytocin pre-treatment dogs' preferential gaze toward the eye region when processing happy human facial expressions disappeared. These results provide the first evidence that oxytocin is involved in the regulation of human face processing in dogs. The present study is one of the few empirical investigations that explore eye gaze patterns in naïve and untrained pet dogs using a non-invasive eye-tracking technique and thus offers a unique but largely untapped method for studying social cognition in dogs.

  12. Race perception and gaze direction differently impair visual working memory for faces: An event-related potential study.

    Science.gov (United States)

    Sessa, Paola; Dalmaso, Mario

    2016-01-01

Humans are remarkably expert at processing and recognizing faces; however, there are factors that moderate this ability. In the present study, we used the event-related potential technique to investigate the influence of both race and gaze direction on visual working memory (i.e., VWM) face representations. In a change detection task, we orthogonally manipulated race (own-race vs. other-race faces) and eye-gaze direction (direct gaze vs. averted gaze). Participants were required to encode the identities of these faces. We quantified the amount of information encoded in VWM by monitoring the amplitude of the sustained posterior contralateral negativity (SPCN) time-locked to the faces. Notably, race and eye-gaze direction differently modulated SPCN amplitude such that other-race faces elicited reduced SPCN amplitudes compared with own-race faces only when displaying a direct gaze. On the other hand, faces displaying averted gaze, independently of their race, elicited increased SPCN amplitudes compared with faces displaying direct gaze. We interpret these findings as denoting that race and eye-gaze direction affect different face processing stages.
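
    The SPCN used above is a lateralized ERP component: it is typically computed as the difference between posterior electrodes contralateral and ipsilateral to the memorized item, averaged over a sustained post-stimulus window. The sketch below outlines that contralateral-minus-ipsilateral computation; the electrode indices, time window and simulated data are illustrative assumptions, not the parameters of the study.

    ```python
    import numpy as np

    def spcn_amplitude(epochs, times, side_of_face, left_chan=0, right_chan=1,
                       window=(0.4, 1.0)):
        """Mean contralateral-minus-ipsilateral amplitude (e.g. at PO7/PO8) in a
        sustained window; `epochs` has shape (n_trials, n_channels, n_samples)."""
        mask = (times >= window[0]) & (times <= window[1])
        diffs = []
        for epoch, side in zip(epochs, side_of_face):
            if side == "left":   # face in the left hemifield -> right hemisphere is contralateral
                contra, ipsi = epoch[right_chan], epoch[left_chan]
            else:
                contra, ipsi = epoch[left_chan], epoch[right_chan]
            diffs.append((contra[mask] - ipsi[mask]).mean())
        return float(np.mean(diffs))

    # Example with simulated data: 40 trials, 2 channels, 1 s of data at 500 Hz
    times = np.linspace(0.0, 1.0, 500)
    epochs = np.random.randn(40, 2, 500)
    sides = np.random.choice(["left", "right"], size=40)
    print(spcn_amplitude(epochs, times, sides))
    ```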

  13. Why Do We Move Our Eyes while Trying to Remember? The Relationship between Non-Visual Gaze Patterns and Memory

    Science.gov (United States)

    Micic, Dragana; Ehrlichman, Howard; Chen, Rebecca

    2010-01-01

    Non-visual gaze patterns (NVGPs) involve saccades and fixations that spontaneously occur in cognitive activities that are not ostensibly visual. While reasons for their appearance remain obscure, convergent empirical evidence suggests that NVGPs change according to processing requirements of tasks. We examined NVGPs in tasks with long-term memory…

  14. Gaze Interactive Building Instructions

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Ahmed, Zaheer; Mardanbeigi, Diako

    We combine eye tracking technology and mobile tablets to support hands-free interaction with digital building instructions. As a proof-of-concept we have developed a small interactive 3D environment where one can interact with digital blocks by gaze, keystroke and head gestures. Blocks may be moved...

  15. Early visual evoked potentials are modulated by eye position in humans induced by whole body rotations

    Directory of Open Access Journals (Sweden)

    Petit Laurent

    2004-09-01

    Full Text Available Abstract Background To reach and grasp an object in space on the basis of its image cast on the retina requires different coordinate transformations that take into account gaze and limb positioning. Eye position in the orbit influences the image's conversion from retinotopic (eye-centered coordinates to an egocentric frame necessary for guiding action. Neuroimaging studies have revealed eye position-dependent activity in extrastriate visual, parietal and frontal areas that is along the visuo-motor pathway. At the earliest vision stage, the role of the primary visual area (V1 in this process remains unclear. We used an experimental design based on pattern-onset visual evoked potentials (VEP recordings to study the effect of eye position on V1 activity in humans. Results We showed that the amplitude of the initial C1 component of VEP, acknowledged to originate in V1, was modulated by the eye position. We also established that putative spontaneous small saccades related to eccentric fixation, as well as retinal disparity cannot explain the effects of changing C1 amplitude of VEP in the present study. Conclusions The present modulation of the early component of VEP suggests an eye position-dependent activity of the human primary visual area. Our findings also evidence that cortical processes combine information about the position of the stimulus on the retinae with information about the location of the eyes in their orbit as early as the stage of primary visual area.

  16. Gaze shifts and fixations dominate gaze behavior of walking cats

    Science.gov (United States)

    Rivers, Trevor J.; Sirota, Mikhail G.; Guttentag, Andrew I.; Ogorodnikov, Dmitri A.; Shah, Neet A.; Beloozerova, Irina N.

    2014-01-01

    Vision is important for locomotion in complex environments. How it is used to guide stepping is not well understood. We used an eye search coil technique combined with an active marker-based head recording system to characterize the gaze patterns of cats walking over terrains of different complexity: (1) on a flat surface in the dark when no visual information was available, (2) on the flat surface in light when visual information was available but not required, (3) along the highly structured but regular and familiar surface of a horizontal ladder, a task for which visual guidance of stepping was required, and (4) along a pathway cluttered with many small stones, an irregularly structured surface that was new each day. Three cats walked in a 2.5 m corridor, and 958 passages were analyzed. Gaze activity during the time when the gaze was directed at the walking surface was subdivided into four behaviors based on speed of gaze movement along the surface: gaze shift (fast movement), gaze fixation (no movement), constant gaze (movement at the body’s speed), and slow gaze (the remainder). We found that gaze shifts and fixations dominated the cats’ gaze behavior during all locomotor tasks, jointly occupying 62–84% of the time when the gaze was directed at the surface. As visual complexity of the surface and demand on visual guidance of stepping increased, cats spent more time looking at the surface, looked closer to them, and switched between gaze behaviors more often. During both visually guided locomotor tasks, gaze behaviors predominantly followed a repeated cycle of forward gaze shift followed by fixation. We call this behavior “gaze stepping”. Each gaze shift took gaze to a site approximately 75–80 cm in front of the cat, which the cat reached in 0.7–1.2 s and 1.1–1.6 strides. Constant gaze occupied only 5–21% of the time cats spent looking at the walking surface. PMID:24973656
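
    The four gaze behaviors above are defined by how fast the gaze point travels along the walking surface relative to the body: essentially no movement (fixation), very fast movement (gaze shift), movement at roughly the body's own speed (constant gaze), and everything in between (slow gaze). A minimal classification sketch follows; the speed thresholds are illustrative assumptions, not the criteria used in the study.

    ```python
    def classify_gaze_sample(gaze_speed, body_speed,
                             fixation_max=0.05, shift_min=2.0, constant_tol=0.2):
        """Label one gaze sample by the speed (m/s) of the gaze point along the
        surface, relative to the walking speed of the body."""
        if gaze_speed <= fixation_max:
            return "gaze fixation"
        if gaze_speed >= shift_min:
            return "gaze shift"
        if abs(gaze_speed - body_speed) <= constant_tol * body_speed:
            return "constant gaze"
        return "slow gaze"

    body = 0.9  # m/s walking speed
    for v in (0.0, 0.03, 0.85, 0.5, 3.5):
        print(v, "->", classify_gaze_sample(v, body))
    ```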

  17. Modelling human eye under blast loading.

    Science.gov (United States)

    Esposito, L; Clemente, C; Bonora, N; Rossi, T

    2015-01-01

    Primary blast injury (PBI) is the general term that refers to injuries resulting from the mere interaction of a blast wave with the body. Although few instances of primary ocular blast injury, without a concomitant secondary blast injury from debris, are documented, some experimental studies demonstrate its occurrence. In order to investigate PBI to the eye, a finite element model of the human eye using simple constitutive models was developed. The material parameters were calibrated by a multi-objective optimisation performed on available eye impact test data. The behaviour of the human eye and the dynamics of mechanisms occurring under PBI loading conditions were modelled. For the generation of the blast waves, different combinations of explosive (trinitrotoluene) mass charge and distance from the eye were analysed. An interpretation of the resulting pressure, based on the propagation and reflection of the waves inside the eye bulb and orbit, is proposed. The peculiar geometry of the bony orbit (similar to a frustum cone) can induce a resonance cavity effect and generate a pressure standing wave potentially hurtful for eye tissues.

  18. The Effectiveness of Gaze-Contingent Control in Computer Games.

    Science.gov (United States)

    Orlov, Paul A; Apraksin, Nikolay

    2015-01-01

Eye-tracking technology and gaze-contingent control in human-computer interaction have become an objective reality. This article reports on a series of eye-tracking experiments, in which we concentrated on one aspect of gaze-contingent interaction: its effectiveness compared with mouse-based control in a computer strategy game. We propose a measure for evaluating the effectiveness of interaction based on "the time of recognition" of the game unit. In this article, we use this measure to compare gaze- and mouse-contingent systems, and we present an analysis of the differences as a function of the number of game units. Our results indicate that the performance of gaze-contingent interaction is typically higher than mouse manipulation in a visual searching task. When tested on 60 subjects, the results showed that the effectiveness of gaze-contingent systems was over 1.5 times higher. In addition, we found that eye behavior stays quite stable with or without mouse interaction. © The Author(s) 2015.

  19. Optical models of the human eye.

    Science.gov (United States)

    Atchison, David A; Thibos, Larry N

    2016-03-01

    Optical models of the human eye have been used in visual science for purposes such as providing a framework for explaining optical phenomena in vision, for predicting how refraction and aberrations are affected by change in ocular biometry and as computational tools for exploring the limitations imposed on vision by the optical system of the eye. We address the issue of what is understood by optical model eyes, discussing the 'encyclopaedia' and 'toy train' approaches to modelling. An extensive list of purposes of models is provided. We discuss many of the theoretical types of optical models (also schematic eyes) of varying anatomical accuracy, including single, three and four refracting surface variants. We cover the models with lens structure in the form of nested shells and gradient index. Many optical eye models give accurate predictions only for small angles and small fields of view. If aberrations and image quality are important to consider, such 'paraxial' model eyes must be replaced by 'finite model' eyes incorporating features such as aspheric surfaces, tilts and decentrations, wavelength-dependent media and curved retinas. Many optical model eyes are population averages and must become adaptable to account for age, gender, ethnicity, refractive error and accommodation. They can also be customised for the individual when extensive ocular biometry and optical performance data are available. We consider which optical model should be used for a particular purpose, adhering to the principle that the best model is the simplest fit for the task. We provide a glimpse into the future of optical models of the human eye. This review is interwoven with historical developments, highlighting the important people who have contributed so richly to our understanding of visual optics. © 2016 The Authors. Clinical and Experimental Optometry © 2016 Optometry Australia.

  20. Simulating interaction: Using gaze-contingent eye-tracking to measure the reward value of social signals in toddlers with and without autism

    Directory of Open Access Journals (Sweden)

    Angelina Vernetti

    2018-01-01

    Full Text Available Several accounts have been proposed to explain difficulties with social interaction in autism spectrum disorder (ASD, amongst which atypical social orienting, decreased social motivation or difficulties with understanding the regularities driving social interaction. This study uses gaze-contingent eye-tracking to tease apart these accounts by measuring reward related behaviours in response to different social videos. Toddlers at high or low familial risk for ASD took part in this study at age 2 and were categorised at age 3 as low risk controls (LR, high-risk with no ASD diagnosis (HR-no ASD, or with a diagnosis of ASD (HR-ASD. When the on-demand social interaction was predictable, all groups, including the HR-ASD group, looked longer and smiled more towards a person greeting them compared to a mechanical Toy (Condition 1 and also smiled more towards a communicative over a non-communicative person (Condition 2. However, all groups, except the HR-ASD group, selectively oriented towards a person addressing the child in different ways over an invariant social interaction (Condition 3. These findings suggest that social interaction is intrinsically rewarding for individuals with ASD, but the extent to which it is sought may be modulated by the specific variability of naturalistic social interaction. Keywords: Social orienting, Social motivation, Unpredictability, Autism spectrum disorder, High-risk siblings, Gaze-contingency

  1. Toward Optimization of Gaze-Controlled Human-Computer Interaction: Application to Hindi Virtual Keyboard for Stroke Patients.

    Science.gov (United States)

    Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, Kongfatt; Dutta, Ashish; Prasad, Girijesh

    2018-04-01

Virtual keyboard applications and alternative communication devices provide new means of communication to assist disabled people. To date, virtual keyboard optimization schemes based on script-specific information, along with multimodal input access facilities, are limited. In this paper, we propose a novel method for optimizing the position of the displayed items for gaze-controlled tree-based menu selection systems by considering a combination of letter frequency and command selection time. The optimized graphical user interface layout has been designed for a Hindi language virtual keyboard based on a menu wherein 10 commands provide access to type 88 different characters, along with additional text editing commands. The system can be controlled in two different modes: eye-tracking alone and eye-tracking with an access soft-switch. Five different keyboard layouts have been presented and evaluated with ten healthy participants. Furthermore, the two best performing keyboard layouts have been evaluated with eye-tracking alone on ten stroke patients. The overall performance analysis demonstrated significantly superior typing performance, high usability (87% SUS score), and low workload (NASA TLX score of 17) for the letter-frequency- and selection-time-based organization with script-specific arrangement design. This paper presents the first optimized gaze-controlled Hindi virtual keyboard, which can be extended to other languages.
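
    The optimization described above weighs how often each character is needed (letter frequency in the script) against how long it takes to reach it through the gaze-controlled menu tree (command selection time). The sketch below shows a simple expected-time cost function of that kind for comparing candidate layouts; the dwell time, toy frequencies and depth values are illustrative assumptions, not the Hindi corpus statistics or the authors' exact objective.

    ```python
    def layout_cost(layout_depth, letter_freq, dwell_time_s=1.0):
        """Expected selection time per typed character for a tree-based menu layout.
        `layout_depth` maps each character to the number of gaze selections needed
        to type it; `letter_freq` maps characters to relative frequencies."""
        total = sum(letter_freq.values())
        return sum(freq / total * layout_depth[ch] * dwell_time_s
                   for ch, freq in letter_freq.items())

    freq = {"a": 30, "b": 5, "c": 15}        # toy relative letter frequencies
    layout_1 = {"a": 2, "b": 2, "c": 3}      # the frequent character sits at a shallow depth
    layout_2 = {"a": 3, "b": 2, "c": 2}      # the frequent character sits deeper in the tree
    print(layout_cost(layout_1, freq))       # 2.3 s per character -> preferred layout
    print(layout_cost(layout_2, freq))       # 2.6 s per character
    ```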

  2. Design gaze simulation for people with visual disability

    NARCIS (Netherlands)

    Qiu, S.

    2017-01-01

    In face-to-face communication, eye gaze is integral to a conversation to supplement verbal language. The sighted often uses eye gaze to convey nonverbal information in social interactions, which a blind conversation partner cannot access and react. My doctoral research is to design gaze simulation

  3. An exploratory study on the driving method of speech synthesis based on the human eye reading imaging data

    Science.gov (United States)

    Gao, Pei-pei; Liu, Feng

    2016-10-01

With the development of information technology and artificial intelligence, speech synthesis plays a significant role in the field of human-computer interaction. However, the main problem of current speech synthesis techniques is a lack of naturalness and expressiveness, so that the output is not yet close to the standard of natural language. Another problem is that human-computer interaction based on speech synthesis is too monotonous to support interaction driven by the user's own behavior. This article introduces the historical development of speech synthesis and summarizes the general process of the technique, pointing out that the prosody generation module is an important part of that process. On this basis, using eye-activity patterns during reading to control and drive prosody generation is introduced as a new human-computer interaction method that enriches the synthesized output. The present state of speech synthesis technology is reviewed in detail. Starting from the extraction of eye gaze data and using the eye movement signal as a real-time driving input, a speech synthesis method that can express the real speech rhythm of the speaker is proposed. That is, while the reader silently reads the corpus, the system captures reading information such as the gaze duration per prosodic unit and establishes a hierarchical prosodic duration model to determine the duration parameters of the synthesized speech. Finally, the feasibility of the above method is verified by analysis.
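
    The core idea is thus to read off the silent reader's gaze duration per prosodic unit and use it to set the duration parameters of the corresponding synthesized units. The sketch below shows one very simple way such a mapping could look; the baseline duration and the proportional scaling are illustrative assumptions, not the hierarchical duration model described in the paper.

    ```python
    def prosody_durations(units, gaze_durations_s, baseline_s=0.35):
        """Scale each prosodic unit's synthesis duration by how long the reader's
        gaze dwelt on it, relative to the average dwell time over the utterance."""
        mean_dwell = sum(gaze_durations_s) / len(gaze_durations_s)
        return {unit: baseline_s * dwell / mean_dwell
                for unit, dwell in zip(units, gaze_durations_s)}

    units = ["unit-1", "unit-2", "unit-3"]
    dwell = [0.30, 0.60, 0.30]   # seconds of gaze per prosodic unit during silent reading
    print(prosody_durations(units, dwell))
    # unit-2 was read twice as slowly, so its synthesized duration is stretched accordingly.
    ```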

  4. The “Social Gaze Space”: A Taxonomy for Gaze-Based Communication in Triadic Interactions

    Directory of Open Access Journals (Sweden)

    Mathis Jording

    2018-02-01

Full Text Available Humans substantially rely on non-verbal cues in their communication and interaction with others. The eyes represent a “simultaneous input-output device”: While we observe others and obtain information about their mental states (including feelings, thoughts, and intentions-to-act), our gaze simultaneously provides information about our own attention and inner experiences. This substantiates its pivotal role for the coordination of communication. The communicative and coordinative capacities – and their phylogenetic and ontogenetic impacts – become fully apparent in triadic interactions constituted in their simplest form by two persons and an object. Technological advances have sparked renewed interest in social gaze and provide new methodological approaches. Here we introduce the ‘Social Gaze Space’ as a new conceptual framework for the systematic study of gaze behavior during social information processing. It covers all possible categorical states, namely ‘partner-oriented,’ ‘object-oriented,’ ‘introspective,’ ‘initiating joint attention,’ and ‘responding joint attention.’ Different combinations of these states explain several interpersonal phenomena. We argue that this taxonomy distinguishes the most relevant interactional states along their distinctive features, and we showcase the implications for prominent social gaze phenomena. The taxonomy allows us to identify research desiderata that have been neglected so far. We argue for a systematic investigation of these phenomena and discuss some related methodological issues.
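
    The five categorical states of the Social Gaze Space can be thought of as labels assigned to what a person is currently doing with their gaze in a triadic (two persons plus object) setting. The sketch below encodes the taxonomy and assigns a state from who is looking where; the decision rules are a deliberately simplified reading of the framework, not the authors' operational definitions.

    ```python
    from enum import Enum

    class SocialGazeState(Enum):
        PARTNER_ORIENTED = "partner-oriented"
        OBJECT_ORIENTED = "object-oriented"
        INTROSPECTIVE = "introspective"
        INITIATING_JOINT_ATTENTION = "initiating joint attention"
        RESPONDING_JOINT_ATTENTION = "responding joint attention"

    def classify_state(my_target, partner_target, i_looked_at_object_first):
        """`my_target` / `partner_target` are 'partner', 'object', or None
        (gaze withdrawn or unfocused). Highly simplified assignment."""
        if my_target is None:
            return SocialGazeState.INTROSPECTIVE
        if my_target == "partner":
            return SocialGazeState.PARTNER_ORIENTED
        # I am looking at the object
        if partner_target == "object":
            return (SocialGazeState.INITIATING_JOINT_ATTENTION
                    if i_looked_at_object_first
                    else SocialGazeState.RESPONDING_JOINT_ATTENTION)
        return SocialGazeState.OBJECT_ORIENTED

    print(classify_state("object", "object", i_looked_at_object_first=True).value)
    # -> 'initiating joint attention'
    ```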

  5. Learning rational temporal eye movement strategies.

    Science.gov (United States)

    Hoppe, David; Rothkopf, Constantin A

    2016-07-19

    During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is highly efficient, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies in sportsmen suggest that timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events humans adopt strategies that can be understood through a computational model that includes perceptual and acting uncertainties, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate with behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded actor model the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient in learning temporal regularities in the environment and that it can use these regularities to control the timing of eye movements to detect behaviorally relevant events.

  6. Differences in gaze anticipation for locomotion with and without vision

    Science.gov (United States)

    Authié, Colas N.; Hilt, Pauline M.; N'Guyen, Steve; Berthoz, Alain; Bennequin, Daniel

    2015-01-01

    Previous experimental studies have shown a spontaneous anticipation of locomotor trajectory by the head and gaze direction during human locomotion. This anticipatory behavior could serve several functions: an optimal selection of visual information, for instance through landmarks and optic flow, as well as trajectory planning and motor control. This would imply that anticipation remains in darkness but with different characteristics. We asked 10 participants to walk along two predefined complex trajectories (limaçon and figure eight) without any cue on the trajectory to follow. Two visual conditions were used: (i) in light and (ii) in complete darkness with eyes open. The whole body kinematics were recorded by motion capture, along with the participant's right eye movements. We showed that in darkness and in light, horizontal gaze anticipates the orientation of the head which itself anticipates the trajectory direction. However, the horizontal angular anticipation decreases by a half in darkness for both gaze and head. In both visual conditions we observed an eye nystagmus with similar properties (frequency and amplitude). The main difference comes from the fact that in light, there is a shift of the orientations of the eye nystagmus and the head in the direction of the trajectory. These results suggest that a fundamental function of gaze is to represent self motion, stabilize the perception of space during locomotion, and to simulate the future trajectory, regardless of the vision condition. PMID:26106313

  7. Prediction of Human Eye Fixations using Symmetry

    OpenAIRE

    Kootstra, Gert; Schomaker, Lambert R. B.

    2009-01-01

Humans are very sensitive to symmetry in visual patterns. Reaction time experiments show that symmetry is detected and recognized very rapidly. This suggests that symmetry is a highly salient feature. Existing computational models of saliency, however, have mainly focused on contrast as a measure of saliency. In this paper, we discuss local symmetry as a measure of saliency. We propose a number of symmetry models and perform an eye-tracking study with human participants viewing photographic images...
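
    One simple way to turn local symmetry into a saliency measure is to compare each image patch with its mirror image: the more similar the two are, the more symmetric, and hence the more salient, the location. The sketch below computes such a horizontal mirror-symmetry map; it is only a toy stand-in for the symmetry operators evaluated in the paper.

    ```python
    import numpy as np

    def mirror_symmetry_saliency(image, patch=9):
        """Crude saliency map: for each pixel, correlate the surrounding patch with
        its left-right mirrored version (1 = perfectly symmetric)."""
        h, w = image.shape
        r = patch // 2
        sal = np.zeros_like(image, dtype=float)
        for y in range(r, h - r):
            for x in range(r, w - r):
                p = image[y - r:y + r + 1, x - r:x + r + 1]
                m = p[:, ::-1]                         # horizontal mirror of the patch
                num = ((p - p.mean()) * (m - m.mean())).sum()
                den = np.sqrt(((p - p.mean()) ** 2).sum() * ((m - m.mean()) ** 2).sum())
                sal[y, x] = num / den if den > 0 else 0.0
        return sal

    # Example on a small random image; real use would compare the map with recorded fixations.
    print(mirror_symmetry_saliency(np.random.rand(32, 32)).shape)
    ```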

  8. AmbiGaze : direct control of ambient devices by gaze

    OpenAIRE

    Velloso, Eduardo; Wirth, Markus; Weichel, Christian; Abreu Esteves, Augusto Emanuel; Gellersen, Hans-Werner Georg

    2016-01-01

    Eye tracking offers many opportunities for direct device control in smart environments, but issues such as the need for calibration and the Midas touch problem make it impractical. In this paper, we propose AmbiGaze, a smart environment that employs the animation of targets to provide users with direct control of devices by gaze only through smooth pursuit tracking. We propose a design space of means of exposing functionality through movement and illustrate the concept through four prototypes...
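
    Smooth pursuit selection of this kind can work without calibration because it does not need absolute gaze coordinates: each device animates its target along a distinct trajectory, and the system activates the device whose trajectory correlates best with the raw gaze trace over a short window. The sketch below shows that matching step; the correlation threshold, window length and device names are illustrative assumptions.

    ```python
    import numpy as np

    def pursuit_match(gaze_xy, target_trajectories, threshold=0.8):
        """Return the name of the animated target whose trajectory best correlates
        with the gaze samples, or None if no target exceeds the threshold.
        `gaze_xy` and each trajectory have shape (n_samples, 2)."""
        best_name, best_score = None, threshold
        for name, traj in target_trajectories.items():
            score = np.mean([np.corrcoef(gaze_xy[:, d], traj[:, d])[0, 1]
                             for d in (0, 1)])        # average x- and y-correlation
            if score > best_score:
                best_name, best_score = name, score
        return best_name

    t = np.linspace(0, 2 * np.pi, 120)
    targets = {"lamp": np.column_stack([np.cos(t), np.sin(t)]),          # slow circular motion
               "fan": np.column_stack([np.cos(2 * t), np.sin(2 * t)])}   # faster circular motion
    gaze = np.column_stack([np.cos(t), np.sin(t)]) + 0.05 * np.random.randn(120, 2)
    print(pursuit_match(gaze, targets))   # -> 'lamp'
    ```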

  9. Social interactions through the eyes of macaques and humans.

    Directory of Open Access Journals (Sweden)

    Richard McFarland

Full Text Available Group-living primates frequently interact with each other to maintain social bonds as well as to compete for valuable resources. Observing such social interactions between group members provides individuals with essential information (e.g. on the fighting ability or altruistic attitude of group companions) to guide their social tactics and choice of social partners. This process requires individuals to selectively attend to the most informative content within a social scene. It is unclear how non-human primates allocate attention to social interactions in different contexts, and whether they share similar patterns of social attention to humans. Here we compared the gaze behaviour of rhesus macaques and humans when free-viewing the same set of naturalistic images. The images contained positive or negative social interactions between two conspecifics of different phylogenetic distance from the observer; i.e. affiliation or aggression exchanged by two humans, rhesus macaques, Barbary macaques, baboons or lions. Monkeys directed a variable amount of gaze at the two conspecific individuals in the images according to their roles in the interaction (i.e. giver or receiver of affiliation/aggression). Their gaze distribution to non-conspecific individuals was systematically varied according to the viewed species and the nature of interactions, suggesting a contribution of both prior experience and innate bias in guiding social attention. Furthermore, the monkeys' gaze behavior was qualitatively similar to that of humans, especially when viewing negative interactions. Detailed analysis revealed that both species directed more gaze at the face than the body region when inspecting individuals, and attended more to the body region in negative than in positive social interactions. Our study suggests that monkeys and humans share a similar pattern of role-sensitive, species- and context-dependent social attention, implying a homologous cognitive mechanism of

  10. Investigating social gaze as an action-perception online performance.

    Science.gov (United States)

    Grynszpan, Ouriel; Simonin, Jérôme; Martin, Jean-Claude; Nadel, Jacqueline

    2012-01-01

    Gaze represents a major non-verbal communication channel in social interactions. In this respect, when facing another person, one's gaze should not be examined as a purely perceptive process but also as an action-perception online performance. However, little is known about processes involved in the real-time self-regulation of social gaze. The present study investigates the impact of a gaze-contingent viewing window on fixation patterns and the awareness of being the agent moving the window. In face-to-face scenarios played by a virtual human character, the task for the 18 adult participants was to interpret an equivocal sentence which could be disambiguated by examining the emotional expressions of the character speaking. The virtual character was embedded in naturalistic backgrounds to enhance realism. Eye-tracking data showed that the viewing window induced changes in gaze behavior, notably longer visual fixations. Notwithstanding, only half of the participants ascribed the window displacements to their eye movements. These participants also spent more time looking at the eyes and mouth regions of the virtual human character. The outcomes of the study highlight the dissociation between non-volitional gaze adaptation and the self-ascription of agency. Such dissociation provides support for a two-step account of the sense of agency composed of pre-noetic monitoring mechanisms and reflexive processes, linked by bottom-up and top-down processes. We comment upon these results, which illustrate the relevance of our method for studying online social cognition, in particular concerning autism spectrum disorders (ASD) where the poor pragmatic understanding of oral speech is considered linked to visual peculiarities that impede facial exploration.

  11. Communication Aid with Human Eyes Only

    Science.gov (United States)

    Arai, Kohei; Yajima, Kenro

A communication aid operated with human eyes only is proposed. A set of candidate characters is displayed on the screen of a relatively small and light head-mounted display (HMD) attached to the glasses that the user wears. The user looks at a candidate character with the left eye, while a picture of the right eye is taken with a small, light web camera that is also mounted on the glasses. The proposed system can select 81 characters using two layers of a 9-by-9 candidate-character image. In addition, there is another selection image containing control keys and frequently used sentences. By matching a previously acquired template image for each candidate character against the currently acquired image, the system determines which candidate character is selected. By combining blinking with sustained fixation, the system recognizes that the user has confirmed the selected key from the candidates. The blink detection method employs a morphological filter to avoid mistaking eyebrows and shadows for the dark pupil. In this way the user can input sentences, edit them, and have them read aloud with a text-to-speech (TTS) software tool. The system thus supports conversation between people who cannot speak and others, because the only function required for conversation is the eyes. The proposed system can also be used as an input system for wearable computing. Test results from six able-bodied persons show that the system works at an acceptable speed of around 1.5 seconds per character.
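
    The character selection step described above is essentially template matching: the current eye image is compared against previously stored eye images, one per candidate character, and the best match indicates where the user is looking. The sketch below illustrates this with OpenCV's normalized cross-correlation; the image sizes, the confidence cut-off and the synthetic templates are illustrative assumptions, not the authors' implementation.

    ```python
    import cv2
    import numpy as np

    def select_character(current_eye, templates, min_score=0.7):
        """Compare the current eye image against one stored template per candidate
        character and return the best-matching character, or None if too uncertain."""
        best_char, best_score = None, min_score
        for char, template in templates.items():
            result = cv2.matchTemplate(current_eye, template, cv2.TM_CCOEFF_NORMED)
            score = float(result.max())
            if score > best_score:
                best_char, best_score = char, score
        return best_char

    # Synthetic example; in the real system each template would be the eye image
    # captured while the user looked at that particular candidate character.
    templates = {c: np.random.randint(0, 255, (40, 60), dtype=np.uint8) for c in "abc"}
    current = templates["b"].copy()              # pretend the user is looking at 'b'
    print(select_character(current, templates))  # -> 'b'
    ```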

  12. Actively learning human gaze shifting paths for semantics-aware photo cropping.

    Science.gov (United States)

    Zhang, Luming; Gao, Yue; Ji, Rongrong; Xia, Yingjie; Dai, Qionghai; Li, Xuelong

    2014-05-01

    Photo cropping is a widely used tool in printing industry, photography, and cinematography. Conventional cropping models suffer from the following three challenges. First, the deemphasized role of semantic contents that are many times more important than low-level features in photo aesthetics. Second, the absence of a sequential ordering in the existing models. In contrast, humans look at semantically important regions sequentially when viewing a photo. Third, the difficulty of leveraging inputs from multiple users. Experience from multiple users is particularly critical in cropping as photo assessment is quite a subjective task. To address these challenges, this paper proposes semantics-aware photo cropping, which crops a photo by simulating the process of humans sequentially perceiving semantically important regions of a photo. We first project the local features (graphlets in this paper) onto the semantic space, which is constructed based on the category information of the training photos. An efficient learning algorithm is then derived to sequentially select semantically representative graphlets of a photo, and the selecting process can be interpreted by a path, which simulates humans actively perceiving semantics in a photo. Furthermore, we learn a prior distribution of such active graphlet paths from training photos that are marked as aesthetically pleasing by multiple users. The learned priors enforce the corresponding active graphlet path of a test photo to be maximally similar to those from the training photos. Experimental results show that: 1) the active graphlet path accurately predicts human gaze shifting, and thus is more indicative for photo aesthetics than conventional saliency maps and 2) the cropped photos produced by our approach outperform its competitors in both qualitative and quantitative comparisons.

  13. Relations between 18-month-olds' gaze pattern and target action performance: a deferred imitation study with eye tracking.

    Science.gov (United States)

    Óturai, Gabriella; Kolling, Thorsten; Knopf, Monika

    2013-12-01

    Deferred imitation studies are used to assess infants' declarative memory performance. These studies have found that deferred imitation performance improves with age, which is usually attributed to advancing memory capabilities. Imitation studies, however, are also used to assess infants' action understanding. In this second research program it has been observed that infants around the age of one year imitate selectively, i.e., they imitate certain kinds of target actions and omit others. In contrast to this, two-year-olds usually imitate the model's exact actions. 18-month-olds imitate more exactly than one-year-olds, but more selectively than two-year-olds, a fact which makes this age group especially interesting, since the processes underlying selective vs. exact imitation are largely debated. The question, for example, if selective attention to certain kinds of target actions accounts for preferential imitation of these actions in young infants is still open. Additionally, relations between memory capabilities and selective imitation processes, as well as their role in shaping 18-month-olds' neither completely selective, nor completely exact imitation have not been thoroughly investigated yet. The present study, therefore, assessed 18-month-olds' gaze toward two types of actions (functional vs. arbitrary target actions) and the model's face during target action demonstration, as well as infants' deferred imitation performance. Although infants' fixation times to functional target actions were not longer than to arbitrary target actions, they imitated the functional target actions more frequently than the arbitrary ones. This suggests that selective imitation does not rely on selective gaze toward functional target actions during the demonstration phase. In addition, a post hoc analysis of interindividual differences suggested that infants' attention to the model's social-communicative cues might play an important role in exact imitation, meaning the imitation

  14. Zoonotic helminths affecting the human eye

    Science.gov (United States)

    2011-01-01

Nowadays, zoonoses are an important cause of human parasitic diseases worldwide and a major threat to socio-economic development, mainly in developing countries. Importantly, zoonotic helminths that affect human eyes (HIE) may cause blindness with severe socio-economic consequences to human communities. These infections include nematodes, cestodes and trematodes, which may be transmitted by vectors (dirofilariasis, onchocerciasis, thelaziasis), food consumption (sparganosis, trichinellosis) and those acquired indirectly from the environment (ascariasis, echinococcosis, fascioliasis). Adult and/or larval stages of HIE may localize into human ocular tissues externally (i.e., lachrymal glands, eyelids, conjunctival sacs) or into the ocular globe (i.e., intravitreous retina, anterior and/or posterior chamber) causing symptoms due to the parasitic localization in the eyes or to the immune reaction they elicit in the host. Unfortunately, data on HIE are scant and mostly limited to case reports from different countries. The biology and epidemiology of the most frequently reported HIE are discussed as well as clinical description of the diseases, diagnostic considerations and video clips on their presentation and surgical treatment. Homines amplius oculis, quam auribus credunt Seneca Ep 6,5 Men believe their eyes more than their ears PMID:21429191

  15. Zoonotic helminths affecting the human eye

    Directory of Open Access Journals (Sweden)

    Eberhard Mark L

    2011-03-01

Full Text Available Abstract Nowadays, zoonoses are an important cause of human parasitic diseases worldwide and a major threat to socio-economic development, mainly in developing countries. Importantly, zoonotic helminths that affect human eyes (HIE) may cause blindness with severe socio-economic consequences to human communities. These infections include nematodes, cestodes and trematodes, which may be transmitted by vectors (dirofilariasis, onchocerciasis, thelaziasis), food consumption (sparganosis, trichinellosis) and those acquired indirectly from the environment (ascariasis, echinococcosis, fascioliasis). Adult and/or larval stages of HIE may localize into human ocular tissues externally (i.e., lachrymal glands, eyelids, conjunctival sacs) or into the ocular globe (i.e., intravitreous retina, anterior and/or posterior chamber) causing symptoms due to the parasitic localization in the eyes or to the immune reaction they elicit in the host. Unfortunately, data on HIE are scant and mostly limited to case reports from different countries. The biology and epidemiology of the most frequently reported HIE are discussed as well as clinical description of the diseases, diagnostic considerations and video clips on their presentation and surgical treatment. Homines amplius oculis, quam auribus credunt Seneca Ep 6,5 Men believe their eyes more than their ears

  16. Theoretical investigation of aberrations upon ametropic human eyes

    Science.gov (United States)

    Tan, Bo; Chen, Ying-Ling; Lewis, J. W. L.; Baker, Kevin

    2003-11-01

    The human eye aberrations are important for visual acuity and ophthalmic diagnostics and surgical procedures. Reported monochromatic aberration data of the normal 20/20 human eyes are scarce. There exist even fewer reports of the relation between ametropic conditions and aberrations. We theoretically investigate the monochromatic and chromatic aberrations of human eyes for refractive errors of -10 to +10 diopters. Schematic human eye models are employed using optical design software for axial, index, and refractive types of ametropia.

  17. Holistic integration of gaze cues in visual face and body perception: Evidence from the composite design.

    Science.gov (United States)

    Vrancken, Leia; Germeys, Filip; Verfaillie, Karl

    2017-01-01

    A considerable amount of research on identity recognition and emotion identification with the composite design points to the holistic processing of these aspects in faces and bodies. In this paradigm, the interference from a nonattended face half on the perception of the attended half is taken as evidence for holistic processing (i.e., a composite effect). Far less research, however, has been dedicated to the concept of gaze. Nonetheless, gaze perception is a substantial component of face and body perception, and holds critical information for everyday communicative interactions. Furthermore, the ability of human observers to detect direct versus averted eye gaze is effortless, perhaps similar to identity perception and emotion recognition. However, the hypothesis of holistic perception of eye gaze has never been tested directly. Research on gaze perception with the composite design could facilitate further systematic comparison with other aspects of face and body perception that have been investigated using the composite design (i.e., identity and emotion). In the present research, a composite design was administered to assess holistic processing of gaze cues in faces (Experiment 1) and bodies (Experiment 2). Results confirmed that eye and head orientation (Experiment 1A) and head and body orientation (Experiment 2A) are integrated in a holistic manner. However, the composite effect was not completely disrupted by inversion (Experiments 1B and 2B), a finding that will be discussed together with implications for future research.

  18. Reading faces: differential lateral gaze bias in processing canine and human facial expressions in dogs and 4-year-old children.

    Directory of Open Access Journals (Sweden)

    Anaïs Racca

    Full Text Available Sensitivity to the emotions of others provides clear biological advantages. However, in the case of heterospecific relationships, such as that existing between dogs and humans, there are additional challenges since some elements of the expression of emotions are species-specific. Given that faces provide important visual cues for communicating emotional state in both humans and dogs, and that processing of emotions is subject to brain lateralisation, we investigated lateral gaze bias in adult dogs when presented with pictures of expressive human and dog faces. Our analysis revealed clear differences in laterality of eye movements in dogs towards conspecific faces according to the emotional valence of the expressions. Differences were also found towards human faces, but to a lesser extent. For comparative purpose, a similar experiment was also run with 4-year-old children and it was observed that they showed differential processing of facial expressions compared to dogs, suggesting a species-dependent engagement of the right or left hemisphere in processing emotions.

  19. Reading faces: differential lateral gaze bias in processing canine and human facial expressions in dogs and 4-year-old children.

    Science.gov (United States)

    Racca, Anaïs; Guo, Kun; Meints, Kerstin; Mills, Daniel S

    2012-01-01

    Sensitivity to the emotions of others provides clear biological advantages. However, in the case of heterospecific relationships, such as that existing between dogs and humans, there are additional challenges since some elements of the expression of emotions are species-specific. Given that faces provide important visual cues for communicating emotional state in both humans and dogs, and that processing of emotions is subject to brain lateralisation, we investigated lateral gaze bias in adult dogs when presented with pictures of expressive human and dog faces. Our analysis revealed clear differences in laterality of eye movements in dogs towards conspecific faces according to the emotional valence of the expressions. Differences were also found towards human faces, but to a lesser extent. For comparative purpose, a similar experiment was also run with 4-year-old children and it was observed that they showed differential processing of facial expressions compared to dogs, suggesting a species-dependent engagement of the right or left hemisphere in processing emotions.

  20. Can gaze-contingent mirror-feedback from unfamiliar faces alter self-recognition?

    Science.gov (United States)

    Estudillo, Alejandro J; Bindemann, Markus

    2017-05-01

    This study focuses on learning of the self, by examining how human observers update internal representations of their own face. For this purpose, we present a novel gaze-contingent paradigm, in which an onscreen face mimics observers' own eye-gaze behaviour (in the congruent condition), moves its eyes in different directions to that of the observers (incongruent condition), or remains static and unresponsive (neutral condition). Across three experiments, the mimicry of the onscreen face did not affect observers' perceptual self-representations. However, this paradigm influenced observers' reports of their own face. This effect was such that observers felt the onscreen face to be their own and that, if the onscreen gaze had moved on its own accord, observers expected their own eyes to move too. The theoretical implications of these findings are discussed.

  1. Higher order monochromatic aberrations of the human infant eye

    OpenAIRE

    Wang, Jingyun; Candy, T. Rowan

    2005-01-01

    The monochromatic optical aberrations of the eye degrade retinal image quality. Any significant aberrations during postnatal development could contribute to infants’ immature visual performance and provide signals for the control of eye growth. Aberrations of human infant eyes from 5 to 7 weeks old were compared with those of adult subjects using a model of an adultlike infant eye that accounted for differences in both eye and pupil size. Data were collected using the COAS Shack-Hartmann wave...

  2. Culture and Listeners' Gaze Responses to Stuttering

    Science.gov (United States)

    Zhang, Jianliang; Kalinowski, Joseph

    2012-01-01

    Background: It is frequently observed that listeners demonstrate gaze aversion to stuttering. This response may have profound social/communicative implications for both fluent and stuttering individuals. However, there is a lack of empirical examination of listeners' eye gaze responses to stuttering, and it is unclear whether cultural background…

  3. Simulating interaction: Using gaze-contingent eye-tracking to measure the reward value of social signals in toddlers with and without autism.

    Science.gov (United States)

    Vernetti, Angelina; Senju, Atsushi; Charman, Tony; Johnson, Mark H; Gliga, Teodora

    2018-01-01

    Several accounts have been proposed to explain difficulties with social interaction in autism spectrum disorder (ASD), amongst which atypical social orienting, decreased social motivation or difficulties with understanding the regularities driving social interaction. This study uses gaze-contingent eye-tracking to tease apart these accounts by measuring reward related behaviours in response to different social videos. Toddlers at high or low familial risk for ASD took part in this study at age 2 and were categorised at age 3 as low risk controls (LR), high-risk with no ASD diagnosis (HR-no ASD), or with a diagnosis of ASD (HR-ASD). When the on-demand social interaction was predictable, all groups, including the HR-ASD group, looked longer and smiled more towards a person greeting them compared to a mechanical Toy (Condition 1) and also smiled more towards a communicative over a non-communicative person (Condition 2). However, all groups, except the HR-ASD group, selectively oriented towards a person addressing the child in different ways over an invariant social interaction (Condition 3). These findings suggest that social interaction is intrinsically rewarding for individuals with ASD, but the extent to which it is sought may be modulated by the specific variability of naturalistic social interaction. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  4. Cyclooxygenase-2 expression in the normal human eye and its expression pattern in selected eye tumours

    DEFF Research Database (Denmark)

    Wang, Jinmei; Wu, Yazhen; Heegaard, Steffen

    2011-01-01

    Purpose: Cyclooxygenase-2 (COX-2) is an enzyme involved in neoplastic processes. The purpose of the present study is to investigate COX-2 expression in the normal human eye and the expression pattern in selected eye tumours involving COX-2 expressing cells. Methods: Immunohistochemical staining...... using antibodies against COX-2 was performed on paraffin sections of normal human eyes and selected eye tumours arising from cells expressing COX-2. Results: Cyclooxygenase-2 expression was found in various structures of the normal eye. Abundant expression was seen in the cornea, iris, ciliary body...... and retina. The COX-2 expression was less in tumours deriving from the ciliary epithelium and also in retinoblastoma. Conclusion: Cyclooxygenase-2 is constitutively expressed in normal human eyes. The expression of COX-2 is much lower in selected eye tumours involving COX-2 expressing cells....

  5. Is gaze following purely reflexive or goal-directed instead? Revisiting the automaticity of orienting attention by gaze cues.

    Science.gov (United States)

    Ricciardelli, Paola; Carcagno, Samuele; Vallar, Giuseppe; Bricolo, Emanuela

    2013-01-01

    Distracting gaze has been shown to elicit automatic gaze following. However, it is still debated whether the effects of perceived gaze are a simple automatic spatial orienting response or are instead sensitive to the context (i.e. goals and task demands). In three experiments, we investigated the conditions under which gaze following occurs. Participants were instructed to saccade towards one of two lateral targets. A face distracter, always present in the background, could gaze towards: (a) a task-relevant target--("matching" goal-directed gaze shift)--congruent or incongruent with the instructed direction, (b) a task-irrelevant target, orthogonal to the one instructed ("non-matching" goal-directed gaze shift), or (c) an empty spatial location (no-goal-directed gaze shift). Eye movement recordings showed faster saccadic latencies in correct trials in congruent conditions especially when the distracting gaze shift occurred before the instruction to make a saccade. Interestingly, while participants made a higher proportion of gaze-following errors (i.e. errors in the direction of the distracting gaze) in the incongruent conditions when the distracter's gaze shift preceded the instruction onset indicating an automatic gaze following, they never followed the distracting gaze when it was directed towards an empty location or a stimulus that was never the target. Taken together, these findings suggest that gaze following is likely to be a product of both automatic and goal-driven orienting mechanisms.

  6. Latvijas gaze

    International Nuclear Information System (INIS)

    1994-04-01

    A collection of photocopies of materials (such as overheads etc.) used at a seminar (organized by the Board of Directors of the company designated ''Latvijas Gaze'' in connection with The National Oil and Gas Company of Denmark, DONG) comprising an analysis of training needs with regard to marketing of gas technology and consultancy to countries in Europe, especially with regard to Latvia. (AB)

  7. Investigating the Link Between Radiologists' Gaze, Diagnostic Decision, and Image Content

    Energy Technology Data Exchange (ETDEWEB)

    Tourassi, Georgia [ORNL; Voisin, Sophie [ORNL; Paquit, Vincent C [ORNL; Krupinski, Elizabeth [University of Arizona

    2013-01-01

    Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. The radiologists' individual errors could be adequately predicted by modeling the behavior of their peers. However, personalized tuning appears in many cases to be beneficial for capturing individual behavior more accurately. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.
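
    The record above describes predictive models that combine computer-extracted texture features with readers' gaze metrics and cognitive opinions to flag likely diagnostic errors, evaluated both per group and per reader. The feature set and model family are not specified, so the following is a minimal sketch under assumed synthetic features, using a random-forest classifier with leave-one-reader-out validation to mimic predicting one reader's errors from the behavior of the peers.

        # Minimal sketch of a group-based model linking image features, gaze metrics and
        # reader opinion to diagnostic error. Feature names and data are synthetic
        # placeholders; the original feature set, learner and protocol are not given above.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

        rng = np.random.default_rng(0)
        n_regions = 240                              # fixated mammographic regions, pooled
        readers = rng.integers(0, 6, n_regions)      # six readers, as in the study

        X = np.column_stack([
            rng.normal(size=(n_regions, 5)),              # texture features of the region
            rng.gamma(2.0, 1.0, size=(n_regions, 2)),     # gaze metrics, e.g. dwell, revisits
            rng.integers(0, 2, n_regions),                # reader's opinion on the region
        ])
        y = rng.integers(0, 2, n_regions)                 # 1 = diagnostic error on this region

        # "Modeling the behavior of their peers": train on five readers, test on the sixth.
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, X, y, groups=readers, cv=LeaveOneGroupOut())
        print("leave-one-reader-out accuracy:", scores.round(2))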

  8. Investigating the link between radiologists’ gaze, diagnostic decision, and image content

    Science.gov (United States)

    Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent; Krupinski, Elizabeth

    2013-01-01

    Objective To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods Gaze data and diagnostic decisions were collected from three breast imaging radiologists and three radiology residents who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Image analysis was performed in mammographic regions that attracted radiologists’ attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results By pooling the data from all readers, machine learning produced highly accurate predictive models linking image content, gaze, and cognition. Potential linking of those with diagnostic error was also supported to some extent. Merging readers’ gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the readers’ diagnostic errors while confirming 97.3% of their correct diagnoses. The readers’ individual perceptual and cognitive behaviors could be adequately predicted by modeling the behavior of others. However, personalized tuning was in many cases beneficial for capturing more accurately individual behavior. Conclusions There is clearly an interaction between radiologists’ gaze, diagnostic decision, and image content which can be modeled with machine learning algorithms. PMID:23788627

  9. Learning robotic eye-arm-hand coordination from human demonstration: a coupled dynamical systems approach.

    Science.gov (United States)

    Lukic, Luka; Santos-Victor, José; Billard, Aude

    2014-04-01

    We investigate the role of obstacle avoidance in visually guided reaching and grasping movements. We report on a human study in which subjects performed prehensile motion with obstacle avoidance where the position of the obstacle was systematically varied across trials. These experiments suggest that reaching with obstacle avoidance is organized in a sequential manner, where the obstacle acts as an intermediary target. Furthermore, we demonstrate that the notion of workspace travelled by the hand is embedded explicitly in a forward planning scheme, which is actively involved in detecting obstacles on the way when performing reaching. We find that the gaze proactively coordinates the pattern of eye-arm motion during obstacle avoidance. This study provides also a quantitative assessment of the coupling between the eye-arm-hand motion. We show that the coupling follows regular phase dependencies and is unaltered during obstacle avoidance. These observations provide a basis for the design of a computational model. Our controller extends the coupled dynamical systems framework and provides fast and synchronous control of the eyes, the arm and the hand within a single and compact framework, mimicking similar control system found in humans. We validate our model for visuomotor control of a humanoid robot.
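
    The controller described above extends the coupled dynamical systems framework so that the eyes, the arm and the hand are driven synchronously, with gaze leading the movement. The coupling equations themselves are not given in the record; the sketch below is a generic coupled dynamical system with illustrative gains, intended only to show the leader-follower structure, not the authors' actual controller.

        # Toy coupled dynamical system: the gaze converges on the target first, the arm is
        # attracted towards the gazed-at point, and the hand aperture closes as the arm
        # approaches. Gains and coupling terms are illustrative assumptions.
        import numpy as np

        def simulate(target, steps=400, dt=0.01):
            eye = np.zeros(3)                     # gaze point in workspace coordinates
            arm = np.array([0.0, -0.3, 0.0])      # hand position
            aperture = 1.0                        # 1 = fully open hand
            k_eye, k_arm, k_hand = 8.0, 4.0, 6.0
            for _ in range(steps):
                eye += dt * k_eye * (target - eye)       # eyes converge fastest
                arm += dt * k_arm * (eye - arm)          # arm is coupled to the gaze point
                aperture += dt * k_hand * (np.linalg.norm(eye - arm) - aperture)
                aperture = np.clip(aperture, 0.0, 1.0)   # hand preshape closes near target
            return eye, arm, aperture

        eye, arm, aperture = simulate(np.array([0.4, 0.2, 0.1]))
        print("gaze:", eye.round(3), "hand:", arm.round(3), "aperture:", round(aperture, 3))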

  10. Mobile gaze input system for pervasive interaction

    DEFF Research Database (Denmark)

    2017-01-01

    feedback to the user in response to the received command input. The unit provides feedback to the user on how to position the mobile unit in front of his eyes. The gaze tracking unit interacts with one or more controlled devices via wireless or wired communications. Example devices include a lock......, a thermostat, a light or a TV. The connection between the gaze tracking unit may be temporary or longer-lasting. The gaze tracking unit may detect features of the eye that provide information about the identity of the user....

  11. Facilitated orienting underlies fearful face-enhanced gaze cueing of spatial location

    Directory of Open Access Journals (Sweden)

    Joshua M. Carlson

    2016-12-01

    Full Text Available Faces provide a platform for non-verbal communication through emotional expression and eye gaze. Fearful facial expressions are salient indicators of potential threat within the environment, which automatically capture observers’ attention. However, the degree to which fearful facial expressions facilitate attention to others’ gaze is unresolved. Given that fearful gaze indicates the location of potential threat, it was hypothesized that fearful gaze facilitates location processing. To test this hypothesis, a gaze cueing study with fearful and neutral faces assessing target localization was conducted. The task consisted of leftward, rightward, and forward/straight gaze trials. The inclusion of forward gaze trials allowed for the isolation of orienting and disengagement components of gaze-directed attention. The results suggest that both neutral and fearful gaze modulates attention through orienting and disengagement components. Fearful gaze, however, resulted in quicker orienting than neutral gaze. Thus, fearful faces enhance gaze cueing of spatial location through facilitated orienting.
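
    The record explains that including forward/straight-gaze trials lets the orienting and disengagement components of gaze cueing be separated from the overall congruency effect. A short sketch of that decomposition, using hypothetical mean reaction times rather than data from the study:

        # Orienting benefit  = RT(forward baseline) - RT(congruent gaze)
        # Disengagement cost = RT(incongruent gaze) - RT(forward baseline)
        # The millisecond values below are illustrative, not the published means.
        mean_rt = {
            ("neutral", "congruent"): 352, ("neutral", "forward"): 368, ("neutral", "incongruent"): 381,
            ("fearful", "congruent"): 341, ("fearful", "forward"): 366, ("fearful", "incongruent"): 383,
        }

        for expression in ("neutral", "fearful"):
            orienting = mean_rt[(expression, "forward")] - mean_rt[(expression, "congruent")]
            disengagement = mean_rt[(expression, "incongruent")] - mean_rt[(expression, "forward")]
            print(f"{expression}: orienting benefit = {orienting} ms, "
                  f"disengagement cost = {disengagement} ms")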

  12. Virtual pharmacokinetic model of human eye.

    Science.gov (United States)

    Kotha, Sreevani; Murtomäki, Lasse

    2014-07-01

    A virtual pharmacokinetic 3D model of the human eye is built using Comsol Multiphysics® software, which is based on the Finite Element Method (FEM). The model considers drug release from a polymer patch placed on the sclera. The model concentrates on the posterior part of the eye, retina being the target tissue, and comprises the choroidal blood flow, partitioning of the drug between different tissues and active transport at the retina pigment epithelium (RPE)-choroid boundary. To keep the mass balance straightforward to check, no protein binding or metabolism is yet included. It appeared that the most important issue in obtaining reliable simulation results is the finite element mesh, while time stepping has hardly any significance. Simulations were extended to 100,000 s. The concentration of a drug is shown as a function of time at various points of the retina, as well as its average value, varying several parameters in the model. This work demonstrates how anybody with basic knowledge of calculus is able to build physically meaningful models of quite complex biological systems. Copyright © 2014 Elsevier Inc. All rights reserved.
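
    The full 3D finite-element model runs in Comsol Multiphysics and cannot be reproduced here, but the transport problem it solves can be illustrated by a much cruder sketch: one-dimensional diffusion from a scleral patch, with a clearance sink standing in for choroidal blood flow. All geometry and rate constants below are assumed round numbers, not the published parameters.

        # 1D explicit finite-difference sketch of drug transport from a scleral patch
        # towards the retina, with a clearance term in a "choroid" band of nodes.
        import numpy as np

        L, n = 1.0e-3, 101                 # 1 mm sclera-to-retina path, 101 nodes (assumed)
        dx = L / (n - 1)
        D = 1.0e-10                        # diffusivity, m^2/s (assumed)
        k_clear = 5.0e-3                   # choroidal clearance rate, 1/s (assumed)
        choroid = slice(40, 60)            # nodes treated as perfused choroid
        dt = 0.4 * dx ** 2 / D             # explicit stability limit
        c = np.zeros(n)
        c[0] = 1.0                         # patch on the sclera held at unit concentration

        for step in range(200_000):        # roughly the 100,000 s horizon of the model
            lap = (c[:-2] - 2.0 * c[1:-1] + c[2:]) / dx ** 2
            c[1:-1] += dt * D * lap
            c[choroid] -= dt * k_clear * c[choroid]   # blood flow washes drug out
            c[0], c[-1] = 1.0, c[-2]                  # fixed source, zero-flux retina side

        print("relative concentration at the retina-side boundary:", round(float(c[-1]), 4))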

  13. Gaze-based rehearsal in children under 7: a developmental investigation of eye movements during a serial spatial memory task.

    Science.gov (United States)

    Morey, Candice C; Mareva, Silvana; Lelonkiewicz, Jaroslaw R; Chevalier, Nicolas

    2018-05-01

    The emergence of strategic verbal rehearsal at around 7 years of age is widely considered a major milestone in descriptions of the development of short-term memory across childhood. Likewise, rehearsal is believed by many to be a crucial factor in explaining why memory improves with age. This apparent qualitative shift in mnemonic processes has also been characterized as a shift from passive visual to more active verbal mnemonic strategy use, but no investigation of the development of overt spatial rehearsal has informed this explanation. We measured serial spatial order reconstruction in adults and groups of children 5-7 years old and 8-11 years old, while recording their eye movements. Children, particularly the youngest children, overtly fixated late-list spatial positions longer than adults, suggesting that younger children are less likely to engage in covert rehearsal during stimulus presentation than older children and adults. However, during retention the youngest children overtly fixated more of the to-be-remembered sequences than any other group, which is inconsistent with the idea that children do nothing to try to remember. Altogether, these data are inconsistent with the notion that children under 7 do not engage in any attempts to remember. They are most consistent with proposals that children's style of remembering shifts around age 7 from reactive cue-driven methods to proactive, covert methods, which may include cumulative rehearsal. © 2017 John Wiley & Sons Ltd.

  14. Spatial updating depends on gaze direction even after loss of vision.

    Science.gov (United States)

    Reuschel, Johanna; Rösler, Frank; Henriques, Denise Y P; Fiehler, Katja

    2012-02-15

    Direction of gaze (eye angle + head angle) has been shown to be important for representing space for action, implying a crucial role of vision for spatial updating. However, blind people have no access to vision yet are able to perform goal-directed actions successfully. Here, we investigated the role of visual experience for localizing and updating targets as a function of intervening gaze shifts in humans. People who differed in visual experience (late blind, congenitally blind, or sighted) were briefly presented with a proprioceptive reach target while facing it. Before they reached to the target's remembered location, they turned their head toward an eccentric direction that also induced corresponding eye movements in sighted and late blind individuals. We found that reaching errors varied systematically as a function of shift in gaze direction only in participants with early visual experience (sighted and late blind). In the late blind, this effect was solely present in people with moveable eyes but not in people with at least one glass eye. Our results suggest that the effect of gaze shifts on spatial updating develops on the basis of visual experience early in life and remains even after loss of vision as long as feedback from the eyes and head is available.

  15. Speech monitoring and phonologically-mediated eye gaze in language perception and production: a comparison using printed word eye-tracking

    Science.gov (United States)

    Gauvin, Hanna S.; Hartsuiker, Robert J.; Huettig, Falk

    2013-01-01

    The Perceptual Loop Theory of speech monitoring assumes that speakers routinely inspect their inner speech. In contrast, Huettig and Hartsuiker (2010) observed that listening to one's own speech during language production drives eye-movements to phonologically related printed words with a similar time-course as listening to someone else's speech does in speech perception experiments. This suggests that speakers use their speech perception system to listen to their own overt speech, but not to their inner speech. However, a direct comparison between production and perception with the same stimuli and participants is lacking so far. The current printed word eye-tracking experiment therefore used a within-subjects design, combining production and perception. Displays showed four words, of which one, the target, either had to be named or was presented auditorily. Accompanying words were phonologically related, semantically related, or unrelated to the target. There were small increases in looks to phonological competitors with a similar time-course in both production and perception. Phonological effects in perception however lasted longer and had a much larger magnitude. We conjecture that this difference is related to a difference in predictability of one's own and someone else's speech, which in turn has consequences for lexical competition in other-perception and possibly suppression of activation in self-perception. PMID:24339809

  16. Aquaporins 6-12 in the human eye

    DEFF Research Database (Denmark)

    Tran, Thuy Linh; Bek, Toke; Holm, Lars

    2012-01-01

    Purpose: Aquaporins (AQPs) are widely expressed and have diverse distribution patterns in the eye. AQPs 0-5 have been localized at the cellular level in human eyes. We investigated the presence of the more recently discovered AQPs 6-12 in the human eye. Methods: RT-PCR was performed on fresh tissue...... from two human eyes divided into the cornea, corneal limbus, ciliary body and iris, lens, choroid, optic nerve, retina and sclera. Each structure was examined to detect the mRNA of AQPs 6-12. Twenty-one human eyes were examined using immunohistochemical and immunofluorescence techniques to determine...... was detected in the corneal epithelium, corneal endothelium, trabecular meshwork endothelium, ciliary epithelia, lens epithelium, the inner and outer limiting membrane of the retina, the retinal pigment epithelium and the capillary endothelium of all parts of the eye. AQP9 immunolabelling was detected...

  17. Comparison of dogs and humans in visual scanning of social interaction

    OpenAIRE

    Törnqvist, Heini; Somppi, Sanni; Koskela, Aija; Krause, Christina M.; Vainio, Outi; Kujala, Miiamaaria V.

    2015-01-01

    Previous studies have demonstrated similarities in gazing behaviour of dogs and humans, but comparisons under similar conditions are rare, and little is known about dogs' visual attention to social scenes. Here, we recorded the eye gaze of dogs while they viewed images containing two humans or dogs either interacting socially or facing away: the results were compared with equivalent data measured from humans. Furthermore, we compared the gazing behaviour of two dog and two human populations w...

  18. Model-driven gaze simulation for the blind person in face-to-face communication

    NARCIS (Netherlands)

    Qiu, S.; Anas, S.A.B.; Osawa, H.; Rauterberg, G.W.M.; Hu, J.

    2016-01-01

    In face-to-face communication, eye gaze is integral to a conversation to supplement verbal language. The sighted often uses eye gaze to convey nonverbal information in social interactions, which a blind conversation partner cannot access and react to them. In this paper, we present E-Gaze glasses

  19. Determination of positions of optical elements of the human eye

    International Nuclear Information System (INIS)

    Galetskii, S O; Cherezova, T Yu

    2009-01-01

    An original method for noninvasive determining the positions of elements of intraocular optics is proposed. The analytic dependence of the measurement error on the optical-scheme parameters and the restriction in distance from the element being measured are determined within the framework of the method proposed. It is shown that the method can be efficiently used for determining the position of elements in the classical Gullstrand eye model and personalised eye models. The positions of six optical surfaces of the Gullstrand eye model and four optical surfaces of the personalised eye model can be determined with an error of less than 0.25 mm. (human eye optics)

  20. Gaze-Following and Reaction to an Aversive Social Interaction Have Corresponding Associations with Variation in the OXTR Gene in Dogs but Not in Human Infants.

    Science.gov (United States)

    Oláh, Katalin; Topál, József; Kovács, Krisztina; Kis, Anna; Koller, Dóra; Young Park, Soon; Virányi, Zsófia

    2017-01-01

    It has been suggested that dogs' remarkable capacity to use human communicative signals lies in their comparable social cognitive skills; however, this view has been questioned recently. The present study investigated associations between oxytocin receptor gene (OXTR) polymorphisms and social behavior in human infants and dogs with the aim to unravel potentially differential mechanisms behind their responsiveness to human gaze. Sixteen-month-old human infants ( N = 99) and adult Border Collie dogs ( N = 71) participated in two tasks designed to test (1) their use of gaze-direction as a cue to locate a hidden object, and (2) their reactions to an aversive social interaction (using the still face task for children and a threatening approach task for dogs). Moreover, we obtained DNA samples to analyze associations between single nucleotide polymorphisms (SNP) in the OXTR (dogs: -213AG, -94TC, -74CG, rs8679682, children: rs53576, rs1042778, rs2254298) and behavior. We found that OXTR genotype was significantly associated with reactions to an aversive social interaction both in dogs and children, confirming the anxiolytic effect of oxytocin in both species. In dogs, the genotypes linked to less fearful behavior were associated also with a higher willingness to follow gaze whereas in children, OXTR gene polymorphisms did not affect gaze following success. This pattern of gene-behavior associations suggests that for dogs the two situations are more alike (potentially fear-inducing or competitive) than for human children. This raises the possibility that, in contrast to former studies proposing human-like cooperativeness in dogs, dogs may perceive human gaze in an object-choice task in a more antagonistic manner than children.

  1. Gaze-Following and Reaction to an Aversive Social Interaction Have Corresponding Associations with Variation in the OXTR Gene in Dogs but Not in Human Infants

    Directory of Open Access Journals (Sweden)

    Katalin Oláh

    2017-12-01

    Full Text Available It has been suggested that dogs' remarkable capacity to use human communicative signals lies in their comparable social cognitive skills; however, this view has been questioned recently. The present study investigated associations between oxytocin receptor gene (OXTR) polymorphisms and social behavior in human infants and dogs with the aim to unravel potentially differential mechanisms behind their responsiveness to human gaze. Sixteen-month-old human infants (N = 99) and adult Border Collie dogs (N = 71) participated in two tasks designed to test (1) their use of gaze-direction as a cue to locate a hidden object, and (2) their reactions to an aversive social interaction (using the still face task for children and a threatening approach task for dogs). Moreover, we obtained DNA samples to analyze associations between single nucleotide polymorphisms (SNP) in the OXTR (dogs: −213AG, −94TC, −74CG, rs8679682, children: rs53576, rs1042778, rs2254298) and behavior. We found that OXTR genotype was significantly associated with reactions to an aversive social interaction both in dogs and children, confirming the anxiolytic effect of oxytocin in both species. In dogs, the genotypes linked to less fearful behavior were associated also with a higher willingness to follow gaze whereas in children, OXTR gene polymorphisms did not affect gaze following success. This pattern of gene-behavior associations suggests that for dogs the two situations are more alike (potentially fear-inducing or competitive) than for human children. This raises the possibility that, in contrast to former studies proposing human-like cooperativeness in dogs, dogs may perceive human gaze in an object-choice task in a more antagonistic manner than children.

  2. Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.

    Science.gov (United States)

    Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo

    2017-07-01

    Eye-movements are the only directly observable behavioural signals that are highly correlated with actions at the task level, and proactive of body movements and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis (or amputees) from stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy among others. Despite this benefit, eye tracking is not widely used as control interface for robotic interfaces in movement impaired patients due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking using our GT3D binocular eye tracker with custom designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. The users can move their own hand to any location of the workspace by simple looking at the target and winking once. This purely eye tracking based system enables the end-user to retain free head movement and yet achieves high spatial end point accuracy in the order of 6 cm RMSE error in each dimension and standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a 3 dimensional space filling Peano curve while the user is tracking it with their eyes. This results in a fully automated calibration procedure that yields several thousand calibration points versus standard approaches using a dozen points, resulting in beyond state-of-the-art 3D accuracy and precision.
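
    The calibration procedure described above sweeps the robot end-effector along a space-filling 3D trajectory while the user visually tracks it, which yields thousands of gaze/target pairs to which a regression can then be fitted. The sketch below illustrates only that fitting step, with a synthetic stand-in for the binocular gaze features; the GT3D tracker's actual feature set and calibration model are not given in the record.

        # Fit a regression from (synthetic) binocular gaze features to 3D end-point
        # coordinates, mimicking calibration on a densely tracked trajectory.
        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures

        rng = np.random.default_rng(1)
        targets = rng.uniform(-0.3, 0.3, size=(5000, 3))   # dense sweep of the workspace (m)

        def gaze_features(points):
            # Hypothetical mapping from a 3D point to left/right pupil coordinates + noise.
            x, y, z = points.T
            depth = 0.6 + z
            left_x, left_y = x / depth, y / depth
            right_x = (x - 0.06) / depth                   # assumed 6 cm interocular baseline
            feats = np.column_stack([left_x, left_y, right_x, left_y])
            return feats + rng.normal(0, 0.002, feats.shape)

        model = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1e-3))
        model.fit(gaze_features(targets), targets)

        test = rng.uniform(-0.3, 0.3, size=(500, 3))
        err = np.linalg.norm(model.predict(gaze_features(test)) - test, axis=1)
        print("mean 3D end-point error: %.3f m" % err.mean())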

  3. Gamma crystallins of the human eye lens.

    Science.gov (United States)

    Vendra, Venkata Pulla Rao; Khan, Ismail; Chandani, Sushil; Muniyandi, Anbukkarasi; Balasubramanian, Dorairajan

    2016-01-01

    Protein crystallins come in three types (α, β and γ) and are found predominantly in the eye, and particularly in the lens, where they are packed into a compact, plastic, elastic, and transparent globule of proper refractive power range that aids in focusing incoming light on to the retina. Of these, the γ-crystallins are found largely in the nuclear region of the lens at very high concentrations (>400 mg/ml). The connection between their structure and inter-molecular interactions and lens transparency is an issue of particular interest. We review the origin and phylogeny of the gamma crystallins, their special structure involving the use of Greek key supersecondary structural motif, and how they aid in offering the appropriate refractive index gradient, intermolecular short range attractive interactions (aiding in packing them into a transparent ball), the role that several of the constituent amino acid residues play in this process, the thermodynamic and kinetic stability and how even single point mutations can upset this delicate balance and lead to intermolecular aggregation, forming light-scattering particles which compromise transparency. We cite several examples of this, and illustrate this by cloning, expressing, isolating and comparing the properties of the mutant protein S39C of human γS-crystallin (associated with congenital cataract-microcornea), with those of the wild type molecule. In addition, we note that human γ-crystallins are also present in other parts of the eye (e.g., retina), where their functions are yet to be understood. There are several 'crucial' residues in and around the Greek key motifs which are essential to maintain the compact architecture of the crystallin molecules. We find that a mutation that replaces even one of these residues can lead to reduction in solubility, formation of light-scattering particles and loss of transparency in the molecular assembly. Such a molecular understanding of the process helps us construct the

  4. Olhar e contato ocular: desenvolvimento típico e comparação na Síndrome de Down [Gaze and eye contact: typical development and comparison in Down syndrome]

    Directory of Open Access Journals (Sweden)

    Aline Elise Gerbelli Belini

    2008-03-01

    Full Text Available PURPOSE: To investigate the development of gaze and eye contact in a baby girl with Down syndrome, comparing the frequency of her gaze towards different targets with the visual behaviour of typically developing babies. METHODS: A female baby with Down syndrome, with no visual disorders diagnosed up to the end of data collection, and 17 typically developing babies were filmed at home every month, in free interaction with their mothers, from the first to the fifth month of life. The frequency of gaze directed at 11 targets, among them "looking at the mother's eyes", was counted. RESULTS: Over the period, the typically developing babies showed statistically significant changes in the frequencies of "eyes closed" and of gaze towards "objects", "the researcher", "the environment", "own body", "the mother's face" and "the mother's eyes". The sample was statistically stable for "looking at another person", "looking at the mother's body" and "opening and closing the eyes". Gaze and eye contact developed in a statistically very similar way in the baby with Down syndrome compared with the means of the other babies (chi-square test) and with their individual variability (analysis by significant clusters). CONCLUSIONS: Early interaction between the baby and her mother seems to influence the dyad's non-verbal communication more than genetically influenced limitations do. This may be reflected in the similarities found between the development of gaze behaviour and eye contact in the baby with Down syndrome and in the children without developmental alterations.

  5. A novel algorithm for automatic localization of human eyes

    Institute of Scientific and Technical Information of China (English)

    Liang Tao (陶亮); Juanjuan Gu (顾涓涓); Zhenquan Zhuang (庄镇泉)

    2003-01-01

    Based on geometrical facial features and image segmentation, we present a novel algorithm for automatic localization of human eyes in grayscale or color still images with complex background. Firstly, a determination criterion of eye location is established by the prior knowledge of geometrical facial features. Secondly, a range of threshold values that would separate eye blocks from others in a segmented face image (i.e., a binary image) are estimated. Thirdly, with the progressive increase of the threshold by an appropriate step in that range, once two eye blocks appear from the segmented image, they will be detected by the determination criterion of eye location. Finally, the 2D correlation coefficient is used as a symmetry similarity measure to check the factuality of the two detected eyes. To avoid the background interference, skin color segmentation can be applied in order to enhance the accuracy of eye detection. The experimental results demonstrate the high efficiency of the algorithm and correct localization rate.
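
    The algorithm above proceeds in three steps: estimate a threshold range, raise the threshold progressively until two candidate eye blocks satisfying a geometric criterion appear, and verify the pair with a 2D correlation coefficient as a symmetry measure. The sketch below follows that outline; the concrete criterion values, threshold schedule and the synthetic test image are assumptions, not the paper's settings.

        # Threshold-sweep eye localization sketch: label dark blocks at increasing
        # thresholds, keep the first pair that is roughly level, plausibly separated and
        # mirror-symmetric under a 2D correlation coefficient.
        import numpy as np
        from scipy.ndimage import label, find_objects

        def center(box):
            return ((box[0].start + box[0].stop) / 2.0, (box[1].start + box[1].stop) / 2.0)

        def plausible_pair(a, b, width):
            (ay, ax), (by, bx) = center(a), center(b)
            level = abs(ay - by) < 0.05 * width                 # roughly the same height
            apart = 0.2 * width < abs(ax - bx) < 0.6 * width    # plausible inter-eye distance
            return level and apart

        def symmetric(gray, a, b, min_corr=0.5):
            pa, pb = gray[a], gray[b]
            rows, cols = min(pa.shape[0], pb.shape[0]), min(pa.shape[1], pb.shape[1])
            pa, pb = pa[:rows, :cols], np.fliplr(pb[:rows, :cols])
            return np.corrcoef(pa.ravel(), pb.ravel())[0, 1] > min_corr

        def locate_eyes(gray):
            width = gray.shape[1]
            for t in np.linspace(gray.min(), gray.mean(), 20):  # progressive threshold sweep
                blocks, _ = label(gray < t)                     # eyes segment as dark blocks
                boxes = [b for b in find_objects(blocks) if b is not None]
                for i in range(len(boxes)):
                    for j in range(i + 1, len(boxes)):
                        if plausible_pair(boxes[i], boxes[j], width) and \
                           symmetric(gray, boxes[i], boxes[j]):
                            return boxes[i], boxes[j]
            return None

        # Demo on a synthetic "face": bright background, two mirror-symmetric dark blobs.
        face = np.full((120, 160), 200.0)
        face[40:50, 40:55] = np.linspace(20, 60, 15)      # left eye block
        face[40:50, 105:120] = np.linspace(60, 20, 15)    # right eye block (mirror image)
        print(locate_eyes(face))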

  6. Attention to gaze and emotion in schizophrenia.

    Science.gov (United States)

    Schwartz, Barbara L; Vaidya, Chandan J; Howard, James H; Deutsch, Stephen I

    2010-11-01

    Individuals with schizophrenia have difficulty interpreting social and emotional cues such as facial expression, gaze direction, body position, and voice intonation. Nonverbal cues are powerful social signals but are often processed implicitly, outside the focus of attention. The aim of this research was to assess implicit processing of social cues in individuals with schizophrenia. Patients with schizophrenia or schizoaffective disorder and matched controls performed a primary task of word classification with social cues in the background. Participants were asked to classify target words (LEFT/RIGHT) by pressing a key that corresponded to the word, in the context of facial expressions with eye gaze averted to the left or right. Although facial expression and gaze direction were irrelevant to the task, these facial cues influenced word classification performance. Participants were slower to classify target words (e.g., LEFT) that were incongruent to gaze direction (e.g., eyes averted to the right) compared to target words (e.g., LEFT) that were congruent to gaze direction (e.g., eyes averted to the left), but this only occurred for expressions of fear. This pattern did not differ for patients and controls. The results showed that threat-related signals capture the attention of individuals with schizophrenia. These data suggest that implicit processing of eye gaze and fearful expressions is intact in schizophrenia. (c) 2010 APA, all rights reserved

  7. Between Gazes

    DEFF Research Database (Denmark)

    Elias, Camelia

    2009-01-01

    In the film documentary Zizek! (2006) Astra Taylor, the film’s director, introduces Slavoj Zizek and his central notions of Lacanian psychoanalysis as they tie in with Marxism, ideology, and culture. Apart from following Zizek from New York to his home in Ljubljana, the documentary presents...... delivers his thoughts on philosophy while in bed or in the bathroom. It is clear that one of the devices that the documentary uses in its portrayal of Zizek is the palimpsest, and what is being layered is the gaze. My essay introduces the idea of layering as a case of intermediality between different art...

  8. Indentation and needle insertion properties of the human eye.

    Science.gov (United States)

    Matthews, A; Hutnik, C; Hill, K; Newson, T; Chan, T; Campbell, G

    2014-07-01

    Characterization of the biomechanical properties of the human eye has a number of potential utilities. One novel purpose is to provide the basis for development of suitable tissue-mimicking material. The purpose of this study was to determine the indentation and needle insertion characteristics of human eye globes and tissue strips. An indenter assessed the elastic response of human eye globes and tissue strips under increasing compressive loads. Needle insertion determined the force (N) needed to penetrate various areas of the eye wall. The results demonstrated that globes underwent slightly greater indentation at the midline than at the central cornea, and corneal strips indented twofold more than scleral strips, although neither difference was significant (P=0.400 and P=0.100, respectively). Significant differences were observed among various areas of needle insertion (P < …). These measurements provide a basis for developing a human eye construct with potential utility as a model for use in ophthalmology research and surgical teaching.

  9. Real-time gaze estimation via pupil center tracking

    Directory of Open Access Journals (Sweden)

    Cazzato Dario

    2018-02-01

    Full Text Available Automatic gaze estimation not based on commercial and expensive eye tracking hardware solutions can enable several applications in the fields of human computer interaction (HCI and human behavior analysis. It is therefore not surprising that several related techniques and methods have been investigated in recent years. However, very few camera-based systems proposed in the literature are both real-time and robust. In this work, we propose a real-time user-calibration-free gaze estimation system that does not need person-dependent calibration, can deal with illumination changes and head pose variations, and can work with a wide range of distances from the camera. Our solution is based on a 3-D appearance-based method that processes the images from a built-in laptop camera. Real-time performance is obtained by combining head pose information with geometrical eye features to train a machine learning algorithm. Our method has been validated on a data set of images of users in natural environments, and shows promising results. The possibility of a real-time implementation, combined with the good quality of gaze tracking, make this system suitable for various HCI applications.
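
    The record states that head-pose information is combined with geometrical eye features to train a machine-learning model that outputs the gaze point, but it does not detail the features or the learner. The following sketch uses synthetic stand-ins for those features and a random-forest regressor to illustrate that training and evaluation step.

        # Synthetic head-pose + eye-geometry features regressed onto an on-screen gaze
        # point. Both the feature generator and the learner are illustrative assumptions.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(2)
        n = 4000
        head_pose = rng.uniform(-20, 20, size=(n, 3))       # yaw, pitch, roll in degrees
        pupil_offset = rng.uniform(-1, 1, size=(n, 2))      # pupil centre vs. eye corners
        eye_opening = rng.uniform(0.5, 1.0, size=(n, 1))    # eyelid aperture ratio

        # Hypothetical ground truth: the gaze point depends on head pose and pupil offset.
        gaze_xy = 0.4 * head_pose[:, :2] + 12.0 * pupil_offset + rng.normal(0, 0.5, (n, 2))

        X = np.hstack([head_pose, pupil_offset, eye_opening])
        X_train, X_test, y_train, y_test = train_test_split(X, gaze_xy, random_state=0)

        reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
        err = np.linalg.norm(reg.predict(X_test) - y_test, axis=1)
        print("mean gaze-point error (arbitrary screen units): %.2f" % err.mean())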

  10. Human eye and the sun hot and cold light

    CERN Document Server

    Vavilov, S I

    1965-01-01

    The Human Eye and the Sun, "Hot" and "Cold" Light is a translation from the Russian language and is a reproduction of texts from Volume IV of S.I. Vavilov, president of the U.S.S.R. Academy of Sciences. The book deals with theoretical and practical developments in lighting techniques. The text gives a brief introduction on the relationship of the human eye and the sun, describing the properties of light, of the sun, and of the human eye. The book describes hot (incandescence) and cold light (luminescence) as coming from different sources. These two types of light are compared. The

  11. Human eye localization using the modified Hough transform

    Czech Academy of Sciences Publication Activity Database

    Dobeš, M.; Martínek, J.; Skoupil, D.; Dobešová, Z.; Pospíšil, Jaroslav

    2006-01-01

    Roč. 117, - (2006), s. 468-473 ISSN 0030-4026 Institutional research plan: CEZ:AV0Z10100522 Keywords : human eye localization * modified Hough transform * eye iris and eyelid shape determination Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.585, year: 2006

  12. Relating Eye Activity Measures to Human Controller Remnant Characteristics

    NARCIS (Netherlands)

    Popovici, A; Zaal, P.M.T.; Pool, D.M.; Mulder, M.; Sawaragi, T

    2016-01-01

    This study attempts to partially explain the characteristics of the human perceptual remnant, following Levison’s representation of the remnant as an equivalent observation noise. Eye activity parameters are recorded using an eye tracker in two compensatory tracking tasks in which the visual

  13. Human eye colour and HERC2, OCA2 and MATP

    DEFF Research Database (Denmark)

    Mengel-From, Jonas; Børsting, Claus; Sanchez, Juan J

    2010-01-01

    Prediction of human eye colour by forensic genetic methods is of great value in certain crime investigations. Strong associations between blue/brown eye colour and the SNP loci rs1129038 and rs12913832 in the HERC2 gene were recently described. Weaker associations between eye colour and other...... genetic markers also exist. In 395 randomly selected Danes, we investigated the predictive values of various combinations of SNP alleles in the HERC2, OCA2 and MATP (SLC45A2) genes and compared the results to the eye colours as they were described by the individuals themselves. The highest predictive...
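
    The study compares the predictive values of SNP allele combinations against self-reported eye colour divided into a light and a dark group. The sketch below shows the underlying calculation on a hypothetical genotype-by-colour table; the genotype labels and counts are illustrative only and are not the Danish data.

        # Predictive value of a genotype = proportion of its carriers whose reported eye
        # colour matches the colour the genotype predicts (majority class among carriers).
        from collections import Counter

        observations = (                       # (genotype, reported colour), hypothetical
            [("AA", "dark")] * 9 + [("AA", "light")] * 1
            + [("AB", "dark")] * 6 + [("AB", "light")] * 4
            + [("BB", "dark")] * 1 + [("BB", "light")] * 19
        )

        counts = Counter(observations)
        for genotype in ("AA", "AB", "BB"):
            dark, light = counts[(genotype, "dark")], counts[(genotype, "light")]
            predicted = "dark" if dark >= light else "light"
            predictive_value = max(dark, light) / (dark + light)
            print(f"{genotype}: predicts {predicted}, predictive value = {predictive_value:.2f}")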

  14. Stress free configuration of the human eye.

    Science.gov (United States)

    Elsheikh, Ahmed; Whitford, Charles; Hamarashid, Rosti; Kassem, Wael; Joda, Akram; Büchler, Philippe

    2013-02-01

    Numerical simulations of eye globes often rely on topographies that have been measured in vivo using devices such as the Pentacam or OCT. The topographies, which represent the form of the already stressed eye under the existing intraocular pressure, introduce approximations in the analysis. The accuracy of the simulations could be improved if either the stress state of the eye under the effect of intraocular pressure is determined, or the stress-free form of the eye estimated prior to conducting the analysis. This study reviews earlier attempts to address this problem and assesses the performance of an iterative technique proposed by Pandolfi and Holzapfel [1], which is both simple to implement and promises high accuracy in estimating the eye's stress-free form. A parametric study has been conducted and demonstrated reliance of the error level on the level of flexibility of the eye model, especially in the cornea region. However, in all cases considered, 3-4 analysis iterations were sufficient to produce a stress-free form with average errors in node location <10^-6 mm and a maximal error <10^-4 mm. This error level, which is similar to what has been achieved with other methods and orders of magnitude lower than the accuracy of current clinical topography systems, justifies the use of the technique as a pre-processing step in ocular numerical simulations. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
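
    The iterative technique assessed above repeatedly inflates a candidate stress-free geometry to the measured intraocular pressure and corrects the candidate by its mismatch with the in-vivo geometry, converging in a few iterations. The sketch below shows that fixed-point loop with a toy scalar "inflation" model standing in for the finite-element solve; in the real procedure every nodal coordinate of the eye mesh is updated in the same way, and the stiffness value here is purely illustrative.

        # Fixed-point recovery of a stress-free radius from a radius measured under IOP.
        def inflate(stress_free_radius, iop=15.0, stiffness=500.0):
            # Toy forward model: the pressurized radius grows with IOP over stiffness.
            return stress_free_radius * (1.0 + iop / stiffness)

        measured_radius = 11.8            # radius measured in vivo, under pressure (mm)
        guess = measured_radius           # start the iteration from the measured geometry
        for iteration in range(10):
            error = inflate(guess) - measured_radius
            guess -= error                # shift the candidate by the geometric mismatch
            print(f"iteration {iteration + 1}: stress-free radius = {guess:.6f} mm, "
                  f"|error| = {abs(error):.2e} mm")
            if abs(error) < 1e-6:
                break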

  15. Embodied social robots trigger gaze following in real-time

    OpenAIRE

    Wiese, Eva; Weis, Patrick; Lofaro, Daniel

    2018-01-01

    In human-human interaction, we use information from gestures, facial expressions and gaze direction to make inferences about what interaction partners think, feel or intend to do next. Observing changes in gaze direction triggers shifts of attention to gazed-at locations and helps establish shared attention between gazer and observer - a prerequisite for more complex social skills like mentalizing, action understanding and joint action. The ability to follow others’ gaze develops early in lif...

  16. Toward understanding social cues and signals in human-robot interaction: effects of robot gaze and proxemic behavior.

    Science.gov (United States)

    Fiore, Stephen M; Wiltshire, Travis J; Lobato, Emilio J C; Jentsch, Florian G; Huang, Wesley H; Axelrod, Benjamin

    2013-01-01

    As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava(TM) mobile robotics platform in a hallway navigation scenario. Cues associated with the robot's proxemic behavior were found to significantly affect participant perceptions of the robot's social presence and emotional state while cues associated with the robot's gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot's mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.

  17. Towards understanding social cues and signals in human-robot interaction: Effects of robot gaze and proxemic behavior

    Directory of Open Access Journals (Sweden)

    Stephen M. Fiore

    2013-11-01

    Full Text Available As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI. We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava™ Mobile Robotics Platform in a hallway navigation scenario. Cues associated with the robot’s proxemic behavior were found to significantly affect participant perceptions of the robot’s social presence and emotional state while cues associated with the robot’s gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot’s mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.

  18. Eye Contact Is Crucial for Referential Communication in Pet Dogs.

    Science.gov (United States)

    Savalli, Carine; Resende, Briseida; Gaunet, Florence

    2016-01-01

    Dogs discriminate human direction of attention cues, such as body, gaze, head and eye orientation, in several circumstances. Eye contact particularly seems to provide information on human readiness to communicate; when there is such an ostensive cue, dogs tend to follow human communicative gestures more often. However, little is known about how such cues influence the production of communicative signals (e.g. gaze alternation and sustained gaze) in dogs. In the current study, in order to get an unreachable food, dogs needed to communicate with their owners in several conditions that differ according to the direction of owners' visual cues, namely gaze, head, eyes, and availability to make eye contact. Results provided evidence that pet dogs did not rely on details of owners' direction of visual attention. Instead, they relied on the whole combination of visual cues and especially on the owners' availability to make eye contact. Dogs increased visual communicative behaviors when they established eye contact with their owners, a different strategy compared to apes and baboons, that intensify vocalizations and gestures when human is not visually attending. The difference in strategy is possibly due to distinct status: domesticated vs wild. Results are discussed taking into account the ecological relevance of the task since pet dogs live in human environment and face similar situations on a daily basis during their lives.

  19. Use of Eye Tracking as an Innovative Instructional Method in Surgical Human Anatomy.

    Science.gov (United States)

    Sánchez-Ferrer, María Luísa; Grima-Murcia, María Dolores; Sánchez-Ferrer, Francisco; Hernández-Peñalver, Ana Isabel; Fernández-Jover, Eduardo; Sánchez Del Campo, Francisco

    Tobii glasses can record corneal infrared light reflection to track pupil position and to map gaze focusing in the video recording. Eye tracking has been proposed for use in training and coaching as a visually guided control interface. The aim of our study was to test the potential use of these glasses in various situations: explanations of anatomical structures on tablet-type electronic devices, explanations of anatomical models and dissected cadavers, and during the prosection thereof. An additional aim of the study was to test the use of the glasses during laparoscopies performed on Thiel-embalmed cadavers (that allows pneumoinsufflation and exact reproduction of the laparoscopic surgical technique). The device was also tried out in actual surgery (both laparoscopy and open surgery). We performed a pilot study using the Tobii glasses. The study took place in the dissection room at our School of Medicine and in the operating room at our Hospital. To evaluate usefulness, a survey was designed for use among students, instructors, and practicing physicians. The results were satisfactory, with the usefulness of this tool supported by more than 80% positive responses to most questions. There was no inconvenience for surgeons, and patient safety was ensured in the real laparoscopy. To our knowledge, this is the first publication to demonstrate the usefulness of eye tracking in practical instruction of human anatomy, as well as in teaching clinical anatomy and surgical techniques in the dissection and operating rooms. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  20. The Eye Gaze Direction of an Observed Person Can Bias Perception, Memory, and Attention in Adolescents with and without Autism Spectrum Disorder

    Science.gov (United States)

    Freeth, M.; Ropar, D.; Chapman, P.; Mitchell, P.

    2010-01-01

    The reported experiments aimed to investigate whether a person and his or her gaze direction presented in the context of a naturalistic scene cause perception, memory, and attention to be biased in typically developing adolescents and high-functioning adolescents with autism spectrum disorder (ASD). A novel computerized image manipulation program…

  1. PyGaze: an open-source, cross-platform toolbox for minimal-effort programming of eye-tracking experiments

    NARCIS (Netherlands)

    Dalmaijer, E.S.; Mathôt, S.; van der Stigchel, S.

    2014-01-01

    The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eye-tracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and

  2. Simulation of wave propagation inside a human eye: acoustic eye model (AEM)

    Science.gov (United States)

    Požar, T.; Halilovič, M.; Horvat, D.; Petkovšek, R.

    2018-02-01

    The design and development of the acoustic eye model (AEM) is reported. The model consists of a computer-based simulation that describes the propagation of mechanical disturbance inside a simplified model of a human eye. The capabilities of the model are illustrated with examples, using different laser-induced initial loading conditions in different geometrical configurations typically occurring in ophthalmic medical procedures. The potential of the AEM is to predict the mechanical response of the treated eye tissue in advance, thus complementing other preliminary procedures preceding medical treatments.
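
    The acoustic eye model itself is a 3D simulation of a simplified eye, but the kind of computation it performs can be illustrated with a one-dimensional finite-difference wave solver: a laser-induced pressure pulse launched near the cornea propagates through layers with different sound speeds. The layer extents and speeds below are assumed textbook-like values, not parameters of the published model.

        # 1D second-order finite-difference solution of the wave equation through
        # cornea-, vitreous- and sclera-like layers (all values assumed).
        import numpy as np

        n, L = 600, 0.024                     # 24 mm axial length of a schematic eye
        dx = L / n
        c = np.full(n, 1530.0)                # vitreous-like sound speed (m/s)
        c[:40] = 1580.0                       # cornea-like front layer
        c[-40:] = 1600.0                      # sclera-like back layer
        dt = 0.9 * dx / c.max()               # CFL-limited time step

        x = np.linspace(0.0, L, n)
        p_prev = np.exp(-((x - 0.002) / 0.0005) ** 2)   # pulse deposited near the cornea
        p = p_prev.copy()

        for _ in range(800):
            lap = np.zeros(n)
            lap[1:-1] = (p[:-2] - 2.0 * p[1:-1] + p[2:]) / dx ** 2
            p_next = 2.0 * p - p_prev + (c * dt) ** 2 * lap
            p_next[0] = p_next[-1] = 0.0      # rigid boundaries (a crude simplification)
            p_prev, p = p, p_next

        print("peak |pressure| after the run at %.1f mm" % (x[np.argmax(np.abs(p))] * 1e3))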

  3. Emotion Unchained: Facial Expression Modulates Gaze Cueing under Cognitive Load.

    Science.gov (United States)

    Pecchinenda, Anna; Petrucci, Manuel

    2016-01-01

    Direction of eye gaze cues spatial attention, and typically this cueing effect is not modulated by the expression of a face unless top-down processes are explicitly or implicitly involved. To investigate the role of cognitive control on gaze cueing by emotional faces, participants performed a gaze cueing task with happy, angry, or neutral faces under high (i.e., counting backward by 7) or low cognitive load (i.e., counting forward by 2). Results show that high cognitive load enhances gaze cueing effects for angry facial expressions. In addition, cognitive load reduces gaze cueing for neutral faces, whereas happy facial expressions and gaze affected object preferences regardless of load. This evidence clearly indicates a differential role of cognitive control in processing gaze direction and facial expression, suggesting that under typical conditions, when we shift attention based on social cues from another person, cognitive control processes are used to reduce interference from emotional information.

  4. Human eye colour and HERC2, OCA2 and MATP

    DEFF Research Database (Denmark)

    Mengel-From, Jonas; Børsting, Claus; Sanchez, Juan J.

    2010-01-01

    Prediction of human eye colour by forensic genetic methods is of great value in certain crime investigations. Strong associations between blue/brown eye colour and the SNP loci rs1129038 and rs12913832 in the HERC2 gene were recently described. Weaker associations between eye colour and other genetic markers also exist. In 395 randomly selected Danes, we investigated the predictive values of various combinations of SNP alleles in the HERC2, OCA2 and MATP (SLC45A2) genes and compared the results to the eye colours as they were described by the individuals themselves. The highest predictive value of typing either the HERC2 SNPs rs1129038 and/or rs12913832, which are in strong linkage disequilibrium, was observed when eye colour was divided into two groups: (1) blue, grey and green (light) and (2) brown and hazel (dark). Sequence variations in rs11636232 and rs7170852 in HERC2, rs1800407…

  5. Exploiting Three-Dimensional Gaze Tracking for Action Recognition During Bimanual Manipulation to Enhance Human–Robot Collaboration

    Directory of Open Access Journals (Sweden)

    Alireza Haji Fathaliyan

    2018-04-01

    Human–robot collaboration could be advanced by facilitating the intuitive, gaze-based control of robots, and enabling robots to recognize human actions, infer human intent, and plan actions that support human goals. Traditionally, gaze tracking approaches to action recognition have relied upon computer vision-based analyses of two-dimensional egocentric camera videos. The objective of this study was to identify useful features that can be extracted from three-dimensional (3D) gaze behavior and used as inputs to machine learning algorithms for human action recognition. We investigated human gaze behavior and gaze–object interactions in 3D during the performance of a bimanual, instrumental activity of daily living: the preparation of a powdered drink. A marker-based motion capture system and binocular eye tracker were used to reconstruct 3D gaze vectors and their intersection with 3D point clouds of objects being manipulated. Statistical analyses of gaze fixation duration and saccade size suggested that some actions (pouring and stirring) may require more visual attention than other actions (reach, pick up, set down, and move). 3D gaze saliency maps, generated with high spatial resolution for six subtasks, appeared to encode action-relevant information. The “gaze object sequence” was used to capture information about the identity of objects in concert with the temporal sequence in which the objects were visually regarded. Dynamic time warping barycentric averaging was used to create a population-based set of characteristic gaze object sequences that accounted for intra- and inter-subject variability. The gaze object sequence was used to demonstrate the feasibility of a simple action recognition algorithm that utilized a dynamic time warping Euclidean distance metric. Averaged over the six subtasks, the action recognition algorithm yielded an accuracy of 96.4%, precision of 89.5%, and recall of 89.2%. This level of performance suggests that…
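    The classification step described above can be illustrated with a small sketch: a query gaze object sequence is assigned to the template with the smallest dynamic time warping (DTW) distance. The object labels, the one-hot encoding, and the templates below are toy stand-ins; the paper's barycentric averaging step for building real templates is omitted.

```python
import numpy as np

def dtw_distance(a, b):
    """DTW alignment cost between two sequences of vectors (Euclidean local cost)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def one_hot(seq, n_objects):
    # Encode object identities so a Euclidean local cost can be used
    return np.eye(n_objects)[np.asarray(seq)]

# Toy templates for two subtasks, as sequences of gazed-at object indices
objects = {"pitcher": 0, "cup": 1, "spoon": 2}
templates = {
    "pour": one_hot([0, 0, 1, 1, 1], len(objects)),
    "stir": one_hot([2, 2, 1, 2, 1], len(objects)),
}
query = one_hot([0, 1, 1, 1], len(objects))
prediction = min(templates, key=lambda k: dtw_distance(query, templates[k]))
print(prediction)   # expected: "pour"
```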

  6. Owners' direct gazes increase dogs' attention-getting behaviors.

    Science.gov (United States)

    Ohkita, Midori; Nagasawa, Miho; Kazutaka, Mogi; Kikusui, Takefumi

    2016-04-01

    This study examined whether dogs gain information about a human's attention via their gazes and whether they change their attention-getting behaviors (i.e., whining and whimpering, looking at their owners' faces, pawing, and approaching their owners) in response to their owners' direct gazes. The results showed that when the owners gazed at their dogs, the durations of whining and whimpering and looking at the owners' faces were longer than when the owners averted their gazes. In contrast, there were no differences in the duration of pawing or the likelihood of approaching the owners between the direct and averted gaze conditions. Therefore, owners' direct gazes increased the behaviors that acted as distant signals and did not necessarily involve touching the owners. We suggest that dogs are sensitive to human gazes, and this sensitivity may act as an attachment signal to humans and may contribute to close relationships between humans and dogs. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Visual Data Mining: An Exploratory Approach to Analyzing Temporal Patterns of Eye Movements

    Science.gov (United States)

    Yu, Chen; Yurovsky, Daniel; Xu, Tian

    2012-01-01

    Infant eye movements are an important behavioral resource to understand early human development and learning. But the complexity and amount of gaze data recorded from state-of-the-art eye-tracking systems also pose a challenge: how does one make sense of such dense data? Toward this goal, this article describes an interactive approach based on…

  8. Lens oscillations in the human eye. Implications for post-saccadic suppression of vision.

    Directory of Open Access Journals (Sweden)

    Juan Tabernero

    The eye changes gaze continuously from one visual stimulus to another. Using a high-speed camera to record eye and lens movements, we demonstrate how the crystalline lens sustains an inertial oscillatory decay movement immediately after every change of gaze. This behavior fits precisely the movement of a classical damped harmonic oscillator. The time course of the oscillations ranges from 50 to 60 ms, with an oscillation frequency of around 20 Hz. That has dramatic implications for the image quality at the retina in the very short times (∼50 ms) that follow the movement. However, it is well known that our vision is nearly suppressed during those periods (post-saccadic suppression). Both phenomena follow similar time courses and therefore might be synchronized to avoid the visual impairment.
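    For reference, a damped harmonic oscillator model of the kind the authors fit can be written as follows; the symbols and the decay constant are illustrative values consistent with the quoted time course (about 20 Hz, dying out within 50-60 ms), not parameters taken from the paper.

```latex
% Post-saccadic lens displacement modeled as a damped harmonic oscillator:
\[
  x(t) = A\, e^{-t/\tau} \cos\!\left(2\pi f t + \phi\right),
  \qquad f \approx 20\ \mathrm{Hz}, \quad \tau \sim 15\text{--}20\ \mathrm{ms},
\]
% so that the oscillation has essentially decayed after 50--60 ms (roughly 3 tau).
```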

  9. Conjugate Gaze Palsies

    Science.gov (United States)


  10. Speaker gaze increases information coupling between infant and adult brains.

    Science.gov (United States)

    Leong, Victoria; Byrne, Elizabeth; Clackson, Kaili; Georgieva, Stanimira; Lam, Sarah; Wass, Sam

    2017-12-12

    When infants and adults communicate, they exchange social signals of availability and communicative intention such as eye gaze. Previous research indicates that when communication is successful, close temporal dependencies arise between adult speakers' and listeners' neural activity. However, it is not known whether similar neural contingencies exist within adult-infant dyads. Here, we used dual-electroencephalography to assess whether direct gaze increases neural coupling between adults and infants during screen-based and live interactions. In experiment 1 (n = 17), infants viewed videos of an adult who was singing nursery rhymes with (i) direct gaze (looking forward), (ii) indirect gaze (head and eyes averted by 20°), or (iii) direct-oblique gaze (head averted but eyes orientated forward). In experiment 2 (n = 19), infants viewed the same adult in a live context, singing with direct or indirect gaze. Gaze-related changes in adult-infant neural network connectivity were measured using partial directed coherence. Across both experiments, the adult had a significant (Granger) causal influence on infants' neural activity, which was stronger during direct and direct-oblique gaze relative to indirect gaze. During live interactions, infants also influenced the adult more during direct than indirect gaze. Further, infants vocalized more frequently during live direct gaze, and individual infants who vocalized longer also elicited stronger synchronization from the adult. These results demonstrate that direct gaze strengthens bidirectional adult-infant neural connectivity during communication. Thus, ostensive social signals could act to bring brains into mutual temporal alignment, creating a joint-networked state that is structured to facilitate information transfer during early communication and learning. Copyright © 2017 the Author(s). Published by PNAS.
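    The notion of a directed (Granger-style) influence between two ongoing signals can be sketched with a simple time-domain variance-ratio measure. Note that the study itself uses partial directed coherence, a frequency-domain multivariate measure, so the snippet below is only a conceptual stand-in and all signals are synthetic.

```python
import numpy as np

def granger_influence(x, y, order=5):
    """Log variance ratio of restricted vs. full AR models of y (> 0: past x helps predict y)."""
    n = len(y)
    Y = y[order:]
    past_y = np.column_stack([y[order - k:n - k] for k in range(1, order + 1)])
    past_xy = np.column_stack([past_y] + [x[order - k:n - k] for k in range(1, order + 1)])
    res_r = Y - past_y @ np.linalg.lstsq(past_y, Y, rcond=None)[0]    # y-history only
    res_f = Y - past_xy @ np.linalg.lstsq(past_xy, Y, rcond=None)[0]  # plus x-history
    return np.log(np.var(res_r) / np.var(res_f))

# Synthetic example: one signal is partly driven by a delayed copy of the other
rng = np.random.default_rng(0)
adult = rng.normal(size=5000)
infant = 0.6 * np.roll(adult, 2) + rng.normal(size=5000)
print(granger_influence(adult, infant))   # clearly positive
print(granger_influence(infant, adult))   # near zero
```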

  11. Discussion and Future Directions for Eye Tracker Development

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Mulvey, Fiona; Mardanbegi, Diako

    2011-01-01

    Eye and gaze tracking have a long history but there is still plenty of room for further development. In this concluding chapter for Section 6, we consider future perspectives for the development of eye and gaze tracking.

  12. Cortical Activation during Landmark-Centered vs. Gaze-Centered Memory of Saccade Targets in the Human: An FMRI Study

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2017-06-01

    A remembered saccade target could be encoded in egocentric coordinates such as gaze-centered, or relative to some external allocentric landmark that is independent of the target or gaze (landmark-centered). In comparison to egocentric mechanisms, very little is known about such a landmark-centered representation. Here, we used an event-related fMRI design to identify brain areas supporting these two types of spatial coding (i.e., landmark-centered vs. gaze-centered) for target memory during the Delay phase, where only target location, not saccade direction, was specified. The paradigm included three tasks with identical display of visual stimuli but different auditory instructions: Landmark Saccade (remember target location relative to a visual landmark, independent of gaze), Control Saccade (remember original target location relative to gaze fixation, independent of the landmark), and a non-spatial control, Color Report (report target color). During the Delay phase, the Control and Landmark Saccade tasks activated overlapping areas in posterior parietal cortex (PPC) and frontal cortex as compared to the color control, but with higher activation in PPC for target coding in the Control Saccade task and higher activation in temporal and occipital cortex for target coding in the Landmark Saccade task. Gaze-centered directional selectivity was observed in superior occipital gyrus and inferior occipital gyrus, whereas landmark-centered directional selectivity was observed in precuneus and midposterior intraparietal sulcus. During the Response phase, after saccade direction was specified, the parietofrontal network in the left hemisphere showed higher activation for rightward than leftward saccades. Our results suggest that cortical activation for coding saccade target direction relative to a visual landmark differs from gaze-centered directional selectivity for target memory, from the mechanisms for other types of allocentric tasks, and from the directionally…

  13. Human rather than ape-like orbital morphology allows much greater lateral visual field expansion with eye abduction

    Science.gov (United States)

    Denion, Eric; Hitier, Martin; Levieil, Eric; Mouriaux, Frédéric

    2015-01-01

    While convergent, the human orbit differs from that of non-human apes in that its lateral orbital margin is significantly more rearward. This rearward position does not obstruct the additional visual field gained through eye motion. This additional visual field is therefore considered to be wider in humans than in non-human apes. A mathematical model was designed to quantify this difference. The mathematical model is based on published computed tomography data in the human neuro-ocular plane (NOP) and on additional anatomical data from 100 human skulls and 120 non-human ape skulls (30 gibbons; 30 chimpanzees / bonobos; 30 orangutans; 30 gorillas). It is used to calculate temporal visual field eccentricity values in the NOP first in the primary position of gaze then for any eyeball rotation value in abduction up to 45° and any lateral orbital margin position between 85° and 115° relative to the sagittal plane. By varying the lateral orbital margin position, the human orbit can be made “non-human ape-like”. In the Pan-like orbit, the orbital margin position (98.7°) was closest to the human orbit (107.1°). This modest 8.4° difference resulted in a large 21.1° difference in maximum lateral visual field eccentricity with eyeball abduction (Pan-like: 115°; human: 136.1°). PMID:26190625

  14. Gaze beats mouse

    DEFF Research Database (Denmark)

    Mateo, Julio C.; San Agustin, Javier; Hansen, John Paulin

    2008-01-01

    Facial EMG for selection is fast, easy and, combined with gaze pointing, it can provide completely hands-free interaction. In this pilot study, 5 participants performed a simple point-and-select task using mouse or gaze for pointing and a mouse button or a facial-EMG switch for selection. Gaze...

  15. Basement membrane abnormalities in human eyes with diabetic retinopathy

    DEFF Research Database (Denmark)

    Ljubimov, A V; Burgeson, R E; Butkowski, R J

    1996-01-01

    Vascular and parenchymal basement membranes (BMs) are thickened in diabetes, but alterations in individual BM components in diabetic eyes, especially in diabetic retinopathy (DR), are obscure. To identify abnormalities in the distribution of specific constituents, we analyzed cryostat sections...... of human eyes obtained at autopsy (seven normal, five diabetic without DR, and 13 diabetic with DR) by immunofluorescence with antibodies to 30 BM and extracellular matrix components. In non-DR eyes, no qualitative changes of ocular BM components were seen. In some DR corneas, epithelial BM was stained...... discontinuously for laminin-1, entactin/nidogen, and alpha3-alpha4 Type IV collagen, in contrast to non-DR corneas. Major BM alterations were found in DR retinas compared to normals and non-DR diabetics. The inner limiting membrane (retinal BM) of DR eyes had accumulations of fibronectin (including cellular...

  16. Effect of aberrations in human eye on contrast sensitivity function

    Science.gov (United States)

    Quan, Wei; Wang, Feng-lin; Wang, Zhao-qi

    2011-06-01

    The quantitative analysis of the effect of aberrations in the human eye on vision has important clinical value for the correction of aberrations. The wave-front aberrations of human eyes were measured with a Hartmann-Shack wave-front sensor, and the modulation transfer function (MTF) was computed from the wave-front aberrations. The contrast sensitivity function (CSF) was obtained from the MTF and the retinal aerial image modulation (AIM). It is shown that the 2nd, 3rd, 4th, 5th and 6th Zernike aberrations deteriorate the contrast sensitivity function. When these aberrations are corrected, a high contrast sensitivity function can be obtained.
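    The wavefront-to-MTF step follows standard Fourier optics (pupil function → point-spread function → MTF). The sketch below uses toy Zernike-like terms, an assumed wavelength and grid, and omits the final division of retinal AIM thresholds by the MTF that yields the CSF.

```python
import numpy as np

# Minimal sketch (assumed parameters): compute an MTF from a wavefront error map
# of the kind a Hartmann-Shack sensor provides. W is the wavefront error in
# micrometres over a unit pupil; the values here are toy data, not measurements.
wavelength_um = 0.55          # assumed imaging wavelength (green light)
n = 256                       # grid size

x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
pupil = (R2 <= 1.0).astype(float)

# Toy wavefront: a little defocus (Z(2,0)-like) plus a coma-like term, in um
W = (0.15 * (2 * R2 - 1) + 0.05 * (3 * R2 - 2) * X) * pupil

# Generalized pupil function and point-spread function
P = pupil * np.exp(1j * 2 * np.pi * W / wavelength_um)
psf = np.abs(np.fft.fftshift(np.fft.fft2(P)))**2
psf /= psf.sum()

# MTF is the normalized magnitude of the Fourier transform of the PSF
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
mtf = np.abs(otf)
mtf /= mtf.max()
print(mtf.shape, mtf.max())
```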

  17. Molecular Diagnosis of Human Taenia martis Eye Infection.

    Science.gov (United States)

    Koch, Till; Schoen, Christoph; Muntau, Birgit; Addo, Marylyn; Ostertag, Helmut; Wiechens, Burkhard; Tappe, Dennis

    2016-05-04

    Taenia martis, a tapeworm harbored in the intestine of mustelids, is a rarely encountered zoonotic cysticercosis pathogen. The larval stage closely resembles the Taenia solium cysticercus, but the natural host and thus the epidemiology of the disease is different. We here report a human eye infection diagnosed molecularly in a previously healthy female German patient. The case represents the third human infection described worldwide; the two previous cases were also European, involving eye and brain. © The American Society of Tropical Medicine and Hygiene.

  18. Eye Movements Affect Postural Control in Young and Older Females.

    Science.gov (United States)

    Thomas, Neil M; Bampouras, Theodoros M; Donovan, Tim; Dewhurst, Susan

    2016-01-01

    Visual information is used for postural stabilization in humans. However, little is known about how eye movements prevalent in everyday life interact with the postural control system in older individuals. Therefore, the present study assessed the effects of stationary gaze fixations, smooth pursuits, and saccadic eye movements, with combinations of absent, fixed and oscillating large-field visual backgrounds to generate different forms of retinal flow, on postural control in healthy young and older females. Participants were presented with computer generated visual stimuli, whilst postural sway and gaze fixations were simultaneously assessed with a force platform and eye tracking equipment, respectively. The results showed that fixed backgrounds and stationary gaze fixations attenuated postural sway. In contrast, oscillating backgrounds and smooth pursuits increased postural sway. There were no differences regarding saccades. There were also no differences in postural sway or gaze errors between age groups in any visual condition. The stabilizing effect of the fixed visual stimuli show how retinal flow and extraocular factors guide postural adjustments. The destabilizing effect of oscillating visual backgrounds and smooth pursuits may be related to more challenging conditions for determining body shifts from retinal flow, and more complex extraocular signals, respectively. Because the older participants matched the young group's performance in all conditions, decreases of posture and gaze control during stance may not be a direct consequence of healthy aging. Further research examining extraocular and retinal mechanisms of balance control and the effects of eye movements, during locomotion, is needed to better inform fall prevention interventions.

  19. Method of Menu Selection by Gaze Movement Using AC EOG Signals

    Science.gov (United States)

    Kanoh, Shin'ichiro; Futami, Ryoko; Yoshinobu, Tatsuo; Hoshimiya, Nozomu

    A method to detect the direction and distance of voluntary eye gaze movements from EOG (electrooculogram) signals was proposed and tested. In this method, AC-amplified vertical and horizontal transient EOG signals were classified into eight direction classes and two distance classes of voluntary eye gaze movements. The horizontal and vertical EOG during an eye gaze movement at each sampling time was treated as a two-dimensional vector, and the center of gravity of the sample vectors whose norms were more than 80% of the maximum norm was used as the feature vector to be classified. Using the k-nearest neighbor algorithm for classification, the averaged correct detection rates for the individual subjects were 98.9%, 98.7%, and 94.4%, respectively. This method avoids strict EOG-based eye tracking, which requires DC amplification of very small signals. It would be useful for developing robust human interface systems based on menu selection for severely paralyzed patients.
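    A compact sketch of the described pipeline (center-of-gravity feature extraction over the strongest EOG samples, followed by k-nearest neighbor classification) is given below; the training data, class labels and parameter values are hypothetical.

```python
import numpy as np

def gaze_feature(trial):
    """Center of gravity of the samples whose norm exceeds 80% of the maximum norm.

    'trial' is an array of AC-amplified (horizontal, vertical) EOG samples."""
    norms = np.linalg.norm(trial, axis=1)
    strong = trial[norms >= 0.8 * norms.max()]
    return strong.mean(axis=0)

def knn_classify(feature, train_features, train_labels, k=3):
    """Classify a 2-D feature vector by majority vote among its k nearest neighbors."""
    dists = np.linalg.norm(train_features - feature, axis=1)
    nearest = train_labels[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

# Toy usage: one training trial per (hypothetical) direction class
rng = np.random.default_rng(0)
train_features = np.array([gaze_feature(rng.normal(c, 0.1, size=(50, 2)))
                           for c in [(1, 0), (0, 1), (-1, 0), (0, -1)]])
train_labels = np.array(["right", "up", "left", "down"])
query = gaze_feature(rng.normal((0.9, 0.1), 0.1, size=(50, 2)))
print(knn_classify(query, train_features, train_labels, k=1))   # expected: "right"
```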

  20. Stay tuned: Inter-individual neural synchronization during mutual gaze and joint attention

    Directory of Open Access Journals (Sweden)

    Daisuke N Saito

    2010-11-01

    Eye contact provides a communicative link between humans, prompting joint attention. As spontaneous brain activity may have an important role in the coordination of neuronal processing within the brain, its inter-subject synchronization may occur during eye contact. To test this, we conducted simultaneous functional MRI in pairs of adults. Eye contact was maintained at baseline while the subjects engaged in real-time gaze exchange in a joint attention task. Averted gaze activated the bilateral occipital pole extending to the right posterior superior temporal sulcus, the dorso-medial prefrontal cortex, and the bilateral inferior frontal gyrus. Following a partner's gaze towards an object activated the left intraparietal sulcus. After all task-related effects were modeled out, an inter-individual correlation analysis of the residual time-courses was performed. Paired subjects showed more prominent correlations than non-paired subjects in the right inferior frontal gyrus, suggesting that this region is involved in sharing intention during eye contact, which provides the context for joint attention.

  1. Off-the-Shelf Gaze Interaction

    DEFF Research Database (Denmark)

    San Agustin, Javier

    People with severe motor-skill disabilities are often unable to use standard input devices such as a mouse or a keyboard to control a computer and they are, therefore, in strong need for alternative input devices. Gaze tracking offers them the possibility to use the movements of their eyes to int...

  2. Gaze interaction with textual user interface

    DEFF Research Database (Denmark)

    Paulin Hansen, John; Lund, Haakon; Madsen, Janus Askø

    2015-01-01

    … option for text navigation. People readily understood how to execute RSVP command prompts and a majority of them preferred gaze input to a pen pointer. We present the concept of a smartwatch that can track eye movements and mediate command options whenever in proximity of intelligent devices…

  3. Human volunteer study with PGME: Eye irritation during vapour exposure

    NARCIS (Netherlands)

    Emmen, H.H.; Muijser, H.; Arts, J.H.E.; Prinsen, M.K.

    2003-01-01

    The objective of this study was to establish the possible occurrence of eye irritation and subjective symptoms in human volunteers exposed to propylene glycol monomethyl ether (PGME) vapour at concentrations of 0, 100 and 150 ppm. Testing was conducted in 12 healthy male volunteers using a repeated

  4. Human rights: eye for cultural diversity

    NARCIS (Netherlands)

    Donders, Y.M.

    2012-01-01

    The relationship and interaction between international human rights law and cultural diversity is a current topic, as is shown by the recent debates in The Netherlands on, for instance, the proposed ban on wearing facial coverage, or burqas, and the proposed ban on ritual slaughter without

  5. Prediction of human eye fixations using symmetry

    NARCIS (Netherlands)

    Kootstra, Gert; Schomaker, Lambert

    2009-01-01

    Humans are very sensitive to symmetry in visual patterns. Reaction time experiments show that symmetry is detected and recognized very rapidly. This suggests that symmetry is a highly salient feature. Existing computational models of saliency, however, have mainly focused on contrast as a measure of

  6. Multiphoton tomography of the human eye

    Science.gov (United States)

    König, Karsten; Batista, Ana; Hager, Tobias; Seitz, Berthold

    2017-02-01

    Multiphoton tomography (MPT) is a novel label-free clinical imaging method for non-invasive tissue imaging with high spatial (300 nm) and temporal (100 ps) resolutions. In vivo optical histology can be realized due to the nonlinear excitation of endogenous fluorophores and second-harmonic generation (SHG) of collagen. Furthermore, optical metabolic imaging (OMI) is performed by two-photon autofluorescence lifetime imaging (FLIM). So far, applications of the multiphoton tomographs DermaInspect and MPTflex were limited to dermatology. Novel applications include intraoperative brain tumor imaging as well as cornea imaging. In this work we describe two-photon imaging of ex vivo human corneas unsuitable for transplantation. Furthermore, the cross-linking (CXL) process of corneal collagen based on UVA exposure and 0.1 % riboflavin was studied. The pharmacokinetics of the photosensitizer could be detected with high spatial resolution. Interestingly, an increase in the stromal autofluorescence intensity and modifications of the autofluorescence lifetimes were observed in the human corneal samples within a few days following CXL.

  7. Gaze Toward Naturalistic Social Scenes by Individuals With Intellectual and Developmental Disabilities: Implications for Augmentative and Alternative Communication Designs.

    Science.gov (United States)

    Liang, Jiali; Wilkinson, Krista

    2018-04-18

    A striking characteristic of the social communication deficits in individuals with autism is an atypical pattern of eye contact during social interactions. We used eye-tracking technology to evaluate how the number of human figures depicted and the presence of sharing activity between the human figures in still photographs influenced visual attention by individuals with autism, typical development, or Down syndrome. We sought to examine visual attention to the contents of visual scene displays, a growing form of augmentative and alternative communication support. Eye-tracking technology recorded point-of-gaze while participants viewed 32 photographs in which either 2 or 3 human figures were depicted. Sharing activities between these human figures were either present or absent. The sampling rate was 60 Hz; that is, the technology gathered 60 samples of gaze behavior per second, per participant. Gaze behaviors, including latency to fixate and time spent fixating, were quantified. The overall gaze behaviors were quite similar across groups, regardless of the social content depicted. However, individuals with autism were significantly slower than the other groups in latency to first view the human figures, especially when there were 3 people depicted in the photographs (as compared with 2 people). When participants' own viewing pace was considered, individuals with autism resembled those with Down syndrome. The current study supports the inclusion of social content with various numbers of human figures and sharing activities between human figures into visual scene displays, regardless of the population served. Study design and reporting practices in the eye-tracking literature as it relates to autism and Down syndrome are discussed. https://doi.org/10.23641/asha.6066545.

  8. Molecular restrictions for human eye irritation by chemical vapors

    International Nuclear Information System (INIS)

    Cometto-Muniz, J. Enrique; Cain, William S.; Abraham, Michael H.

    2005-01-01

    Previous research showed a cut-off along homologous volatile organic compounds (VOCs) in their ability to produce acute human mucosal irritation. The present study sought to specify the particular cut-off homolog for sensory eye irritation in an acetate and n-alcohol series. A 1900-ml glass vessel system and a three-alternative forced-choice procedure served to test nonyl, decyl, and dodecyl acetate, and 1-nonanol, 1-decanol, and 1-undecanol. Flowrate to the eye ranged from 2 to 8 L/min and time of exposure from 3 to 24 s. Decyl acetate and 1-undecanol were the shortest homologs that failed to produce eye irritation under all conditions, producing a cut-off effect. Increasing the vapor concentration of decyl acetate and 1-undecanol by 3 and 8 times, respectively, via heating them to 37 °C made either or both VOCs detectable to only half of the 12 subjects tested, even though the higher vapor concentration was well above a predicted eye irritation threshold. When eye irritation thresholds for homologous acetates and n-alcohols were plotted as a function of the longest unfolded length of the molecule, the values for decyl acetate and 1-undecanol fell within a restricted range of 18 to 19 Å. The outcome suggests that the basis for the cut-off is biological, that is, the molecule lacks a key size or structure to trigger transduction, rather than physical, that is, the vapor concentration is too low to precipitate detection.

  9. Gaze-based interaction with public displays using off-the-shelf components

    DEFF Research Database (Denmark)

    San Agustin, Javier; Hansen, John Paulin; Tall, Martin Henrik

    Eye gaze can be used to interact with high-density information presented on large displays. We have built a system employing off-the-shelf hardware components and open-source gaze tracking software that enables users to interact with an interface displayed on a 55” screen using their eye movement...

  10. Coding gaze tracking data with chromatic gradients for VR Exposure Therapy

    DEFF Research Database (Denmark)

    Herbelin, Bruno; Grillon, Helena; De Heras Ciechomski, Pablo

    2007-01-01

    This article presents a simple and intuitive way to represent the eye-tracking data gathered during immersive virtual reality exposure therapy sessions. Eye-tracking technology is used to observe gaze movements during vir- tual reality sessions and the gaze-map chromatic gradient coding allows to...... is fully compatible with different VR exposure systems and provides clinically meaningful data....

  11. Robustifying eye interaction

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Hansen, John Paulin

    2006-01-01

    This paper presents a gaze typing system based on consumer hardware. Eye tracking based on consumer hardware is subject to several unknown factors. We propose methods using robust statistical principles to accommodate uncertainties in image data as well as in gaze estimates to improve accuracy. We...

  12. Gaze Tracking Through Smartphones

    DEFF Research Database (Denmark)

    Skovsgaard, Henrik; Hansen, John Paulin; Møllenbach, Emilie

    Mobile gaze trackers embedded in smartphones or tablets provide a powerful personal link to game devices, head-mounted micro-displays, PCs, and TVs. This link may offer a main road to the mass market for gaze interaction, we suggest.

  13. ANALYSIS OF THE GAZE BEHAVIOUR OF THE WORKER ON THE CARBURETOR ASSEMBLY TASK

    Directory of Open Access Journals (Sweden)

    Novie Susanto

    2015-06-01

    This study presents an analysis of the area of interest (AOI) and the gaze behavior of humans during an assembly task. The study aims at investigating human behavior in detail, using an eye-tracking system, during an assembly task with LEGO bricks and an actual manufactured product, a carburetor. An analysis using heat map data based on the recorded videos from the eye-tracking system is used to examine and investigate the gaze behavior. The results of this study show that the carburetor assembly requires more attention than the product made from LEGO bricks. About 50% of the participants experience the necessity to visually inspect the interim state of the work object during the simulation of the assembly sequence on the screen. They also show a tendency to want to be more certain about part fitting in the actual work object.
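    A gaze heat map of the kind used in this analysis can be approximated by accumulating fixations on a pixel grid and smoothing with a Gaussian kernel. The sketch below is generic (screen size, kernel width and the fixation list are assumed values), not the specific processing used with the eye-tracking system in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def heat_map(fixations, width=1920, height=1080, sigma=40):
    """Build a duration-weighted gaze heat map from (x, y, duration) fixations."""
    grid = np.zeros((height, width))
    for x, y, duration in fixations:
        if 0 <= int(y) < height and 0 <= int(x) < width:
            grid[int(y), int(x)] += duration   # weight each fixation by its duration
    return gaussian_filter(grid, sigma=sigma)  # smooth point hits into a heat map

# Toy usage with hypothetical fixations (pixels, seconds)
fixations = [(400, 300, 0.25), (420, 310, 0.40), (1200, 700, 0.15)]
hm = heat_map(fixations)
print(hm.shape, hm.max())
```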

  14. Gaze Bias in Preference Judgments by Younger and Older Adults

    Directory of Open Access Journals (Sweden)

    Toshiki Saito

    2017-08-01

    Individuals' gaze behavior reflects the choice they will ultimately make. For example, people confronting a choice among multiple stimuli tend to look longer at stimuli that are subsequently chosen than at other stimuli. This tendency, called the gaze bias effect, is a key aspect of visual decision-making. Nevertheless, no study has examined the generality of the gaze bias effect in older adults. Here, we used a two-alternative forced-choice task (2AFC) to compare the gaze behavior reflective of different stages of decision processes demonstrated by younger and older adults. Participants who had viewed two faces were instructed to choose the one that they liked/disliked or the one that they judged to be more/less similar to their own face. Their eye movements were tracked while they chose. The results show that the gaze bias effect occurred during the remaining time in both age groups, irrespective of the decision type. However, no gaze bias effect was observed for the preference judgment during the first dwell time. Our study demonstrated that the gaze bias during the remaining time occurred regardless of decision-making task and age. Further study using diverse participants, such as clinic patients or infants, may help to generalize the gaze bias effect and to elucidate the mechanisms underlying the gaze bias.

  15. Ray tracing for inhomogeneous media applied to the human eye

    Science.gov (United States)

    Diaz-Gonzalez, G.; Iturbe-Castillo, M. D.; Juarez-Salazar, R.

    2017-08-01

    Inhomogeneous or gradient-index media exhibit a refractive index that varies with position. Such media are very interesting because they can be found both in synthetic and in natural optical devices, such as the human lens. In this work we present the development of a computational tool for ray tracing in refractive optical systems. In particular, the human eye is used as the optical system under study. An inhomogeneous medium with characteristics similar to the human lens is introduced and modeled by the so-called slices method. The usefulness of our proposal is illustrated by several graphical results.
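    The slices method can be illustrated by approximating the gradient-index region as a stack of thin homogeneous layers and applying Snell's law at each boundary. The refractive-index profile and geometry below are toy values loosely inspired by typical lens indices, not the model used in the paper.

```python
import numpy as np

def trace_through_slices(angle_deg, n_outside, slice_indices):
    """Return the ray angle (relative to the interface normal) after each slice boundary."""
    theta = np.radians(angle_deg)
    n_prev = n_outside
    angles = []
    for n_next in slice_indices:
        # Snell's law at a flat interface between consecutive homogeneous slices
        s = n_prev / n_next * np.sin(theta)
        if abs(s) > 1.0:            # total internal reflection: stop the trace
            break
        theta = np.arcsin(s)
        angles.append(np.degrees(theta))
        n_prev = n_next
    return angles

# Toy GRIN profile: index rises toward the core and falls again (assumed values)
profile = np.concatenate([np.linspace(1.386, 1.406, 10),
                          np.linspace(1.406, 1.386, 10)])
print(trace_through_slices(10.0, n_outside=1.336, slice_indices=profile))
```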

  16. Region of eye contact of humanoid Nao robot is similar to that of a human

    NARCIS (Netherlands)

    Cuijpers, R.H.; Pol, van der D.; Herrmann, G.; Pearson, M.J.; Lenz, A.; Bremner, P.; Spiers, A.; Leonards, U.

    2013-01-01

    Eye contact is an important social cue in human-human interaction, but it is unclear how easily it carries over to humanoid robots. In this study we investigated whether the tolerance of making eye contact is similar for the Nao robot as compared to human lookers. We measured the region of eye

  17. Wavefront sensorless adaptive optics ophthalmoscopy in the human eye

    Science.gov (United States)

    Hofer, Heidi; Sredar, Nripun; Queener, Hope; Li, Chaohong; Porter, Jason

    2011-01-01

    Wavefront sensor noise and fidelity place a fundamental limit on achievable image quality in current adaptive optics ophthalmoscopes. Additionally, the wavefront sensor ‘beacon’ can interfere with visual experiments. We demonstrate real-time (25 Hz), wavefront sensorless adaptive optics imaging in the living human eye with image quality rivaling that of wavefront sensor based control in the same system. A stochastic parallel gradient descent algorithm directly optimized the mean intensity in retinal image frames acquired with a confocal adaptive optics scanning laser ophthalmoscope (AOSLO). When imaging through natural, undilated pupils, both control methods resulted in comparable mean image intensities. However, when imaging through dilated pupils, image intensity was generally higher following wavefront sensor-based control. Despite the typically reduced intensity, image contrast was higher, on average, with sensorless control. Wavefront sensorless control is a viable option for imaging the living human eye and future refinements of this technique may result in even greater optical gains. PMID:21934779
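    The control strategy named above, stochastic parallel gradient descent (SPGD), perturbs all mirror actuators at once with a random sign pattern, measures the image-quality metric for the positive and negative perturbations, and updates the command along the measured difference. The sketch below replaces the retinal-image metric with a toy quadratic function; the gains, dimensions and metric are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
optimum = rng.normal(size=12)                 # hypothetical best mirror command

def image_metric(u):
    # Toy stand-in for image quality; in the ophthalmoscope this would be the
    # mean intensity of an acquired retinal frame (higher is better).
    return -np.sum((u - optimum) ** 2)

u = np.zeros(12)                              # mirror command (e.g., actuator voltages)
gain, amplitude = 0.5, 0.1
for _ in range(500):
    delta = amplitude * rng.choice([-1.0, 1.0], size=u.shape)  # random perturbation
    j_plus = image_metric(u + delta)
    j_minus = image_metric(u - delta)
    u += gain * (j_plus - j_minus) * delta    # parallel stochastic gradient update
print(np.max(np.abs(u - optimum)))            # residual error shrinks toward 0
```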

  18. Head movements evoked in alert rhesus monkey by vestibular prosthesis stimulation: implications for postural and gaze stabilization.

    Directory of Open Access Journals (Sweden)

    Diana E Mitchell

    The vestibular system detects motion of the head in space and in turn generates reflexes that are vital for our daily activities. The eye movements produced by the vestibulo-ocular reflex (VOR) play an essential role in stabilizing the visual axis (gaze), while vestibulo-spinal reflexes ensure the maintenance of head and body posture. The neuronal pathways from the vestibular periphery to the cervical spinal cord potentially serve a dual role, since they function to stabilize the head relative to inertial space and could thus contribute to gaze (eye-in-head + head-in-space) and posture stabilization. To date, however, the functional significance of vestibular-neck pathways in alert primates remains a matter of debate. Here we used a vestibular prosthesis to (1) quantify vestibularly-driven head movements in primates, and (2) assess whether these evoked head movements make a significant contribution to gaze as well as postural stabilization. We stimulated electrodes implanted in the horizontal semicircular canal of alert rhesus monkeys, and measured the head and eye movements evoked during a 100 ms time period for which the contribution of longer-latency voluntary inputs to the neck would be minimal. Our results show that prosthetic stimulation evoked significant head movements with latencies consistent with known vestibulo-spinal pathways. Furthermore, while the evoked head movements were substantially smaller than the coincidently evoked eye movements, they made a significant contribution to gaze stabilization, complementing the VOR to ensure that the appropriate gaze response is achieved. We speculate that analogous compensatory head movements will be evoked when implanted prosthetic devices are transitioned to human patients.

  19. Gaze interaction from bed

    DEFF Research Database (Denmark)

    Hansen, John Paulin; San Agustin, Javier; Jensen, Henrik Tomra Skovsgaard Hegner

    2011-01-01

    This paper presents a low-cost gaze tracking solution for bedbound people composed of free-ware tracking software and commodity hardware. Gaze interaction is done on a large wall-projected image, visible to all people present in the room. The hardware equipment leaves physical space free to assis...

  20. Gazing and Performing

    DEFF Research Database (Denmark)

    Larsen, Jonas; Urry, John

    2011-01-01

    The Tourist Gaze [Urry J, 1990 (Sage, London)] is one of the most discussed and cited tourism books (with about 4000 citations on Google scholar). Whilst wide ranging in scope, the book is known for the Foucault-inspired concept of the tourist gaze that brings out the fundamentally visual and image...

  1. [Eye contact effects: A therapeutic issue?

    Science.gov (United States)

    Baltazar, M; Conty, L

    2016-12-01

    The perception of a direct gaze - that is, of another individual's gaze directed at the observer so as to make eye contact - is known to influence a wide range of cognitive processes and behaviors. We stress that these effects mainly reflect positive impacts on human cognition and may thus be used as relevant tools for therapeutic purposes. In this review, we aim (1) to provide an exhaustive review of eye contact effects while discussing the limits of the dominant models used to explain them, (2) to illustrate the therapeutic potential of eye contact by targeting those pathologies that show both preserved gaze processing and deficits in one or several functions that are targeted by eye contact effects, and (3) to propose concrete ways in which eye contact could be employed as a therapeutic tool. (1) We group the variety of eye contact effects into four categories: memory effects, activation of prosocial behavior, positive appraisals of self and others, and the enhancement of self-awareness. We emphasize that the models proposed to account for these effects have poor predictive value and that further description of these effects is needed. (2) We then argue that people with pathologies that affect memory, social behavior, self and/or other appraisal, or self-awareness could benefit from eye contact effects. We focus on depression, autism and Alzheimer's disease to illustrate our proposal. To our knowledge, no anomaly of eye contact has been reported in depression. Patients suffering from Alzheimer's disease, at the early and moderate stages, have been shown to maintain a normal amount of eye contact with their interlocutor. Whether gaze processing is preserved or altered in autism remains controversial: in the first view, individuals are thought to elude or omit gazing at another's eyes, while in the second, they are considered unable to process the gaze of others. We adopt the first stance…

  2. ESTIMATION OF MELANIN CONTENT IN IRIS OF HUMAN EYE

    Directory of Open Access Journals (Sweden)

    E. A. Genina

    2008-12-01

    Based on experimental data obtained in vivo from the digital analysis of color images of human irises, the mean melanin content in human eye irises has been estimated. For the registration of color images, the digital camera Olympus C-5060 was used. The images were obtained from the irises of healthy volunteers as well as from the irises of patients with open-angle glaucoma. A computer program was developed for the digital analysis of the images. The results are useful for the development of novel methods, and the optimization of existing ones, for non-invasive glaucoma diagnostics.

  3. Constructing a Computer Model of the Human Eye Based on Tissue Slice Images

    OpenAIRE

    Dai, Peishan; Wang, Boliang; Bao, Chunbo; Ju, Ying

    2010-01-01

    Computer simulation of the biomechanical behavior and biological heat transfer in ophthalmology relies greatly on having a reliable computer model of the human eye. This paper proposes a novel method for the construction of a geometric model of the human eye based on tissue slice images. Slice images were obtained from an in vitro Chinese human eye through embryo specimen processing methods. A level set algorithm was used to extract contour points of eye tissues, while a principal component analysis…

  4. Directional asymmetries in human smooth pursuit eye movements.

    Science.gov (United States)

    Ke, Sally R; Lam, Jessica; Pai, Dinesh K; Spering, Miriam

    2013-06-27

    Humans make smooth pursuit eye movements to bring the image of a moving object onto the fovea. Although pursuit accuracy is critical to prevent motion blur, the eye often falls behind the target. Previous studies suggest that pursuit accuracy differs between motion directions. Here, we systematically assess asymmetries in smooth pursuit. In experiment 1, binocular eye movements were recorded while observers (n = 20) tracked a small spot of light moving along one of four cardinal or diagonal axes across a featureless background. We analyzed pursuit latency, acceleration, peak velocity, gain, and catch-up saccade latency, number, and amplitude. In experiment 2 (n = 22), we examined the effects of spatial location and constrained stimulus motion within the upper or lower visual field. Pursuit was significantly faster (higher acceleration, peak velocity, and gain) and smoother (fewer and later catch-up saccades) in response to downward versus upward motion in both the upper and the lower visual fields. Pursuit was also more accurate and smoother in response to horizontal versus vertical motion. CONCLUSIONS. Our study is the first to report a consistent up-down asymmetry in human adults, regardless of visual field. Our findings suggest that pursuit asymmetries are adaptive responses to the requirements of the visual context: preferred motion directions (horizontal and downward) are more critical to our survival than nonpreferred ones.

  5. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity--Evidence from Gazing Patterns.

    Directory of Open Access Journals (Sweden)

    Sanni Somppi

    Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore whether domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than the mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but on the interpretation of the composition formed by the eyes, midface and mouth. Dogs evaluated social threat rapidly, and this evaluation led to an attentional bias which was dependent on the depicted species: threatening conspecifics' faces evoked heightened attention, but threatening human faces instead evoked an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel…

  6. Isoplanatic patch of the human eye for arbitrary wavelengths

    Science.gov (United States)

    Han, Guoqing; Cao, Zhaoliang; Mu, Quanquan; Wang, Yukun; Li, Dayu; Wang, Shaoxin; Xu, Zihao; Wu, Daosheng; Hu, Lifa; Xuan, Li

    2018-03-01

    The isoplanatic patch of the human eye is a key parameter for adaptive optics systems (AOS) designed for retinal imaging. The field of view (FOV) is usually set to the same size as the isoplanatic patch to obtain high-resolution images. However, it has only been measured at a specific wavelength. Here we investigate the wavelength dependence of this important parameter. An optical setup was designed and established in the laboratory to measure the isoplanatic patch at various wavelengths (655 nm, 730 nm and 808 nm). We also established the Navarro wide-angle eye model in Zemax software to further validate our results, which showed high consistency between the two. The isoplanatic patch as a function of wavelength was obtained within the range of visible to near-infrared light, and can be expressed as θ = 0.0028λ − 0.74. This work is beneficial for AOS design for retinal imaging.

  7. A new mapping function in table-mounted eye tracker

    Science.gov (United States)

    Tong, Qinqin; Hua, Xiao; Qiu, Jian; Luo, Kaiqing; Peng, Li; Han, Peng

    2018-01-01

    The eye tracker is a new apparatus for human-computer interaction that has attracted much attention in recent years. Eye tracking technology obtains the subject's current direction of visual attention (gaze) by mechanical, electronic, optical, image-processing and other means of detection. The mapping function is one of the key technologies in the image-processing stage and determines the accuracy of the whole eye tracker system. In this paper, we present a new mapping model based on the relationship among the eyes, the camera and the screen being gazed at. Firstly, according to the geometrical relationship among the eyes, the camera and the screen, the framework of a mapping function between the pupil center and screen coordinates is constructed. Secondly, in order to simplify the vector inversion in the mapping function, the coordinates of the eyes, the camera and the screen are modeled with coaxial model systems. A corresponding experiment was implemented to verify the mapping function, and it was also compared with the traditional quadratic polynomial function. The results show that our approach can improve the accuracy of gaze-point determination. Compared with other methods, this mapping function is simple and valid.
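    For context, the baseline that the proposed model is compared against, the traditional quadratic (second-order) polynomial mapping from pupil-center coordinates to screen coordinates, can be fitted by least squares on a small calibration grid. The sketch below shows only this reference method with synthetic calibration data; the paper's coaxial geometric model is not reproduced here.

```python
import numpy as np

def design_matrix(px, py):
    # Second-order polynomial terms of the pupil-center position
    return np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])

def fit_mapping(pupil_xy, screen_xy):
    A = design_matrix(pupil_xy[:, 0], pupil_xy[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)   # (6, 2) coefficients
    return coeffs

def apply_mapping(coeffs, pupil_xy):
    A = design_matrix(pupil_xy[:, 0], pupil_xy[:, 1])
    return A @ coeffs

# Toy calibration: 9-point grid with a synthetic, slightly nonlinear relation
pupil = np.array([(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)], dtype=float)
screen = 500 * pupil + 30 * pupil**2 + np.array([960, 540])
coeffs = fit_mapping(pupil, screen)
print(np.round(apply_mapping(coeffs, np.array([[0.5, -0.5]])), 1))
```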

  8. Eye Movements in Darkness Modulate Self-Motion Perception.

    Science.gov (United States)

    Clemens, Ivar Adrianus H; Selen, Luc P J; Pomante, Antonella; MacNeilage, Paul R; Medendorp, W Pieter

    2017-01-01

    During self-motion, humans typically move the eyes to maintain fixation on the stationary environment around them. These eye movements could in principle be used to estimate self-motion, but their impact on perception is unknown. We had participants judge self-motion during different eye-movement conditions in the absence of full-field optic flow. In a two-alternative forced choice task, participants indicated whether the second of two successive passive lateral whole-body translations was longer or shorter than the first. This task was used in two experiments. In the first (n = 8), eye movements were constrained differently in the two translation intervals by presenting either a world-fixed or body-fixed fixation point or no fixation point at all (allowing free gaze). Results show that perceived translations were shorter with a body-fixed than with a world-fixed fixation point. A linear model indicated that eye-movement signals received a weight of ∼25% for the self-motion percept. This model was independently validated in the trials without a fixation point (free gaze). In the second experiment (n = 10), gaze was free during both translation intervals. Results show that the translation with the larger eye-movement excursion was judged to be larger more often than predicted by chance, based on an oculomotor choice probability analysis. We conclude that eye-movement signals influence self-motion perception, even in the absence of visual stimulation.

  9. Self-Monitoring of Gaze in High Functioning Autism

    Science.gov (United States)

    Grynszpan, Ouriel; Nadel, Jacqueline; Martin, Jean-Claude; Simonin, Jerome; Bailleul, Pauline; Wang, Yun; Gepner, Daniel; Le Barillier, Florence; Constant, Jacques

    2012-01-01

    Atypical visual behaviour has been recently proposed to account for much of social misunderstanding in autism. Using an eye-tracking system and a gaze-contingent lens display, the present study explores self-monitoring of eye motion in two conditions: free visual exploration and guided exploration via blurring the visual field except for the focal…

  10. The Relationship between Children's Gaze Reporting and Theory of Mind

    Science.gov (United States)

    D'Entremont, Barbara; Seamans, Elizabeth; Boudreau, Elyse

    2012-01-01

    Seventy-nine 3- and 4-year-old children were tested on gaze-reporting ability and Wellman and Liu's (2004) continuous measure of theory of mind (ToM). Children were better able to report where someone was looking when eye and head direction were provided as a cue compared with when only eye direction cues were provided. With the exception of…

  11. Depth Compensation Model for Gaze Estimation in Sport Analysis

    DEFF Research Database (Denmark)

    Batista Narcizo, Fabricio; Hansen, Dan Witzner

    2015-01-01

    … is tested in a totally controlled environment with the aim of checking the influences of eye tracker parameters and ocular biometric parameters on its behavior. We also present a gaze estimation method based on epipolar geometry for binocular eye tracking setups. The depth compensation model has shown very…

  12. Interaction between gaze and visual and proprioceptive position judgements.

    Science.gov (United States)

    Fiehler, Katja; Rösler, Frank; Henriques, Denise Y P

    2010-06-01

    There is considerable evidence that targets for action are represented in a dynamic gaze-centered frame of reference, such that each gaze shift requires an internal updating of the target. Here, we investigated the effect of eye movements on the spatial representation of targets used for position judgements. Participants had their hand passively placed to a location, and then judged whether this location was left or right of a remembered visual or remembered proprioceptive target, while gaze direction was varied. Estimates of position of the remembered targets relative to the unseen position of the hand were assessed with an adaptive psychophysical procedure. These positional judgements significantly varied relative to gaze for both remembered visual and remembered proprioceptive targets. Our results suggest that relative target positions may also be represented in eye-centered coordinates. This implies similar spatial reference frames for action control and space perception when positions are coded relative to the hand.

  13. Affine Transform to Reform Pixel Coordinates of EOG Signals for Controlling Robot Manipulators Using Gaze Motions

    Directory of Open Access Journals (Sweden)

    Muhammad Ilhamdi Rusydi

    2014-06-01

    Biosignals will play an important role in building communication between machines and humans. One of the types of biosignals that is widely used in neuroscience are electrooculography (EOG) signals. An EOG has a linear relationship with eye movement displacement. Experiments were performed to construct a gaze motion tracking method indicated by robot manipulator movements. Three operators looked at 24 target points displayed on a monitor that was 40 cm in front of them. Two channels (Ch1 and Ch2) produced EOG signals for every single eye movement. These signals were converted to pixel units by using the linear relationship between EOG signals and gaze motion distances. The conversion outcomes were actual pixel locations. An affine transform method is proposed to determine the shift of actual pixels to target pixels. This method consisted of sequences of five geometry processes, which are translation-1, rotation, translation-2, shear and dilatation. The accuracy was approximately 0.86° ± 0.67° in the horizontal direction and 0.54° ± 0.34° in the vertical. This system successfully tracked the gaze motions not only in direction, but also in distance. Using this system, three operators could operate a robot manipulator to point at some targets. This result shows that the method is reliable in building communication between humans and machines using EOGs.

  14. Affine transform to reform pixel coordinates of EOG signals for controlling robot manipulators using gaze motions.

    Science.gov (United States)

    Rusydi, Muhammad Ilhamdi; Sasaki, Minoru; Ito, Satoshi

    2014-06-10

    Biosignals will play an important role in building communication between machines and humans. One of the types of biosignals that is widely used in neuroscience are electrooculography (EOG) signals. An EOG has a linear relationship with eye movement displacement. Experiments were performed to construct a gaze motion tracking method indicated by robot manipulator movements. Three operators looked at 24 target points displayed on a monitor that was 40 cm in front of them. Two channels (Ch1 and Ch2) produced EOG signals for every single eye movement. These signals were converted to pixel units by using the linear relationship between EOG signals and gaze motion distances. The conversion outcomes were actual pixel locations. An affine transform method is proposed to determine the shift of actual pixels to target pixels. This method consisted of sequences of five geometry processes, which are translation-1, rotation, translation-2, shear and dilatation. The accuracy was approximately 0.86° ± 0.67° in the horizontal direction and 0.54° ± 0.34° in the vertical. This system successfully tracked the gaze motions not only in direction, but also in distance. Using this system, three operators could operate a robot manipulator to point at some targets. This result shows that the method is reliable in building communication between humans and machines using EOGs.
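    The five geometric operations listed above (translation-1, rotation, translation-2, shear and dilatation) can be composed as homogeneous 3×3 matrices acting on pixel coordinates, as in the sketch below; all parameter values are hypothetical, since in the method they are estimated from calibration data.

```python
import numpy as np

def translation(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def rotation(theta_rad):
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

def shear(kx, ky):
    return np.array([[1, kx, 0], [ky, 1, 0], [0, 0, 1]], dtype=float)

def dilatation(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

# Composition applied right-to-left: translation-1 -> rotation -> translation-2
# -> shear -> dilatation (parameter values are illustrative only)
A = dilatation(1.05, 0.95) @ shear(0.02, 0.0) @ translation(12, -8) \
    @ rotation(np.radians(1.5)) @ translation(-5, 3)

actual_pixel = np.array([240.0, 180.0, 1.0])   # gaze point from the EOG conversion
target_pixel = A @ actual_pixel                # corrected (target) pixel location
print(np.round(target_pixel[:2], 1))
```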

  15. [Left lateral gaze paresis due to subcortical hematoma in the right precentral gyrus].

    Science.gov (United States)

    Sato, K; Takamori, M

    1998-03-01

    We report a case of transient left lateral gaze paresis due to a hemorrhagic lesion restricted in the right precentral gyrus. A 74-year-old female experienced a sudden clumsiness of the left upper extremity. A neurological examination revealed a left central facial paresis, distal dominant muscle weakness in the left upper limb and left lateral gaze paresis. There were no other focal neurological signs. Laboratory data were all normal. Brain CTs and MRIs demonstrated a subcortical hematoma in the right precentral gyrus. The neurological symptoms and signs disappeared over seven days. A recent physiological study suggested that the human frontal eye field (FEF) is located in the posterior part of the middle frontal gyrus (Brodmann's area 8) and the precentral gyrus around the precentral sulcus. More recent studies stressed the role of the precentral sulcus and the precentral gyrus. Our case supports those physiological findings. The hematoma affected both the FEF and its underlying white matter in our case. We assume the lateral gaze paresis is attributable to the disruption of the fibers from the FEF. It is likely that fibers for motor control of the face, upper extremity, and lateral gaze lie adjacently in the subcortical area.

  16. Face age modulates gaze following in young adults

    OpenAIRE

    Francesca Ciardo; Barbara F. M. Marino; Rossana Actis-Grosso; Angela Rossetti; Paola Ricciardelli

    2014-01-01

    Gaze-following behaviour is considered crucial for social interactions which are influenced by social similarity. We investigated whether the degree of similarity, as indicated by the perceived age of another person, can modulate gaze following. Participants of three different age-groups (18–25; 35–45; over 65) performed an eye movement (a saccade) towards an instructed target while ignoring the gaze-shift of distracters of different age-ranges (6–10; 18–25; 35–45; over 70). The results show ...

  17. Just one look: Direct gaze briefly disrupts visual working memory.

    Science.gov (United States)

    Wang, J Jessica; Apperly, Ian A

    2017-04-01

    Direct gaze is a salient social cue that affords rapid detection. A body of research suggests that direct gaze enhances performance on memory tasks (e.g., Hood, Macrae, Cole-Davies, & Dias, Developmental Science, 1, 67-71, 2003). Nonetheless, other studies highlight the disruptive effect direct gaze has on concurrent cognitive processes (e.g., Conty, Gimmig, Belletier, George, & Huguet, Cognition, 115(1), 133-139, 2010). This discrepancy raises questions about the effects direct gaze may have on concurrent memory tasks. We addressed this topic by employing a change detection paradigm, where participants retained information about the color of small sets of agents. Experiment 1 revealed that, despite the irrelevance of the agents' eye gaze to the memory task at hand, participants were worse at detecting changes when the agents looked directly at them compared to when the agents looked away. Experiment 2 showed that the disruptive effect was relatively short-lived. Prolonged presentation of direct gaze led to recovery from the initial disruption, rather than a sustained disruption on change detection performance. The present study provides the first evidence that direct gaze impairs visual working memory with a rapidly-developing yet short-lived effect even when there is no need to attend to agents' gaze.

  18. Eye-based head gestures

    DEFF Research Database (Denmark)

    Mardanbegi, Diako; Witzner Hansen, Dan; Pederson, Thomas

    2012-01-01

    A novel method for video-based head gesture recognition using eye information by an eye tracker has been proposed. The method uses a combination of gaze and eye movement to infer head gestures. Compared to other gesture-based methods a major advantage of the method is that the user keeps the gaze … mobile phone screens. The user study shows that the method detects a set of defined gestures reliably.

  19. The Benslimane's Artistic Model for Females' Gaze Beauty: An Original Assessment Tool.

    Science.gov (United States)

    Benslimane, Fahd; van Harpen, Laura; Myers, Simon R; Ingallina, Fabio; Ghanem, Ali M

    2017-02-01

    The aim of this paper is to analyze the aesthetic characteristics of the human females' gaze using anthropometry and to present an artistic model to represent it: "The Frame Concept." In this model, the eye fissure represents a painting, and the most peripheral shadows around it represent the frame of this painting. The narrower the frame, the more aesthetically pleasing and youthful the gaze appears. This study included a literature review of the features that make the gaze appear attractive. Photographs of models with attractive gazes were examined, and old photographs of patients were compared to recent photographs. The frame ratio was defined by anthropometric measurements of modern portraits of twenty consecutive Miss World winners. The concept was then validated for age and attractiveness across centuries by analysis of modern female photographs and works of art acknowledged for portraying beautiful young and older women in classical paintings. The frame height inversely correlated with attractiveness in modern female portrait photographs. The eye fissure frame ratio of modern idealized female portraits was similar to that of beautiful female portraits idealized by classical artists. In contrast, the eye fissure frames of classical artists' mothers' portraits were significantly wider than those of beautiful younger women. The Frame Concept is a valid artistic tool that provides an understanding of both the aesthetic and aging characteristics of the female periorbital region, enabling the practitioner to plan appropriate aesthetic interventions. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the A3 online Instructions to Authors. www.springer.com/00266 .

  20. Gliding and Saccadic Gaze Gesture Recognition in Real Time

    DEFF Research Database (Denmark)

    Rozado, David; San Agustin, Javier; Rodriguez, Francisco

    2012-01-01

    … and their corresponding real-time recognition algorithms, Hierarchical Temporal Memory networks and the Needleman-Wunsch algorithm for sequence alignment. Our results show how a specific combination of gaze gesture modality, namely saccadic gaze gestures, and recognition algorithm, Needleman-Wunsch, allows for reliable … usage of intentional gaze gestures to interact with a computer with accuracy rates of up to 98% and acceptable completion speed. Furthermore, the gesture recognition engine does not interfere with otherwise standard human-machine gaze interaction, generating therefore very low false positive rates …
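
    As a rough illustration of the sequence-alignment idea mentioned above, the sketch below scores an observed sequence of saccade directions against a stored gesture template using a plain Needleman-Wunsch global alignment. The direction tokens, scoring values and example gesture are hypothetical; the published recognition engine is not reproduced here.

```python
def needleman_wunsch(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between two token sequences (e.g. saccade directions)."""
    n, m = len(seq_a), len(seq_b)
    # Score matrix with gap-initialised first row and column.
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

# Hypothetical example: observed saccade directions vs. a stored gesture template.
observed = ["E", "E", "SW", "E"]   # noisy observation with one extra token
template = ["E", "SW", "E"]
print(needleman_wunsch(observed, template))
```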

  1. Reading beyond the glance: eye tracking in neurosciences.

    Science.gov (United States)

    Popa, Livia; Selejan, Ovidiu; Scott, Allan; Mureşanu, Dafin F; Balea, Maria; Rafila, Alexandru

    2015-05-01

    From an interdisciplinary approach, the neurosciences (NSs) represent the junction of many fields (biology, chemistry, medicine, computer science, and psychology) and aim to explore the structural and functional aspects of the nervous system. Among modern neurophysiological methods that "measure" different processes of the human brain in response to salient stimuli, a special place belongs to eye tracking (ET). By detecting eye position, gaze direction, sequence of eye movement and visual adaptation during cognitive activities, ET is an effective tool for experimental psychology and neurological research. It provides a quantitative and qualitative analysis of the gaze, which is very useful in understanding choice behavior and perceptual decision making. In the high-tech era, ET has several applications related to the interaction between humans and computers. Herein, ET is used to evaluate the spatial orienting of attention, the performance in visual tasks, the reactions to information on websites, the customer response to advertising, and the emotional and cognitive impact of various spurs to the brain.

  2. Adaptive gaze stabilization through cerebellar internal models in a humanoid robot

    DEFF Research Database (Denmark)

    Vannucci, Lorenzo; Tolu, Silvia; Falotico, Egidio

    2016-01-01

    Two main classes of reflexes relying on the vestibular system are involved in the stabilization of the human gaze: the vestibulocollic reflex (VCR), which stabilizes the head in space, and the vestibulo-ocular reflex (VOR), which stabilizes the visual axis to minimize retinal image motion. The VOR works in conjunction with the opto-kinetic reflex (OKR), which is a visual feedback mechanism for moving the eye at the same speed as the observed scene. Together they keep the image stationary on the retina. In this work we present the first complete model of gaze stabilization based … on the coordination of VCR, VOR and OKR. The model, inspired by neuroscientific cerebellar theories, is provided with learning and adaptation capabilities based on internal models. Tests on a simulated humanoid platform confirm the effectiveness of our approach.
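
    A back-of-the-envelope way to see how the two ocular reflexes combine is an eye-velocity command that sums a vestibular (VOR) term and a visual slip (OKR) term. This is a generic textbook-style sketch with made-up gains, not the adaptive cerebellar internal-model controller the paper actually proposes:

```python
def gaze_stabilization_step(head_velocity, retinal_slip, vor_gain=0.95, okr_gain=0.6):
    """One control step of a simplified VOR + OKR eye-velocity command.

    head_velocity : head angular velocity from the vestibular sensor (deg/s)
    retinal_slip  : residual image velocity on the retina from vision (deg/s)
    Returns the commanded eye angular velocity (deg/s).
    """
    vor_command = -vor_gain * head_velocity   # counter-rotate the eye against the head
    okr_command = okr_gain * retinal_slip     # follow the scene to null residual image motion
    return vor_command + okr_command

# Example: the head turns at 30 deg/s while 1.5 deg/s of slip remains on the retina.
print(gaze_stabilization_step(30.0, 1.5))
```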

  3. [Autoshaping of a button-push response and eye movement in human subjects].

    Science.gov (United States)

    Kimura, H; Fukui, I; Inaki, K

    1990-12-01

    Two experiments were conducted with human subjects to investigate the similarities and differences between animal and human behaviors under autoshaping procedures. In these experiments, a light served as the CS, and a display on a TV served as the US. We examined whether a button-push response or a gazing response to the CS could be obtained in human subjects under a Pavlovian conditioning procedure. In Experiment 1, uninstructed naive subjects were placed in a room containing a push-button and a TV display. Within the experimental sessions, the push-button was lit for 8 s as the CS, and then paired with the display of a soft pornographic program on TV for 10 s. The result indicated that modeling of the button-push response increased the response probability among the subjects. The trials conducted after the rest period showed an increase in response probability. In Experiment 2, a 4 cm square translucent panel was lit for 20 s as the CS, and then paired with the display of a computer graphic picture on TV for 8 s as the US. Some subjects started gazing at the CS for several seconds. These results indicated that some subjects could acquire the gazing response under the autoshaping procedure.

  4. Effects of Facial Symmetry and Gaze Direction on Perception of Social Attributes: A Study in Experimental Art History

    Directory of Open Access Journals (Sweden)

    Per Olav Folgerø

    2016-09-01

    Full Text Available This article explores the possibility of testing hypotheses about art production in the past by collecting data in the present. We call this enterprise experimental art history. Why did medieval artists prefer to paint Christ with his face directed towards the beholder, while profane faces were noticeably more often painted in different degrees of profile? Is a preference for frontal faces motivated by deeper evolutionary and biological considerations? Head and gaze direction is a significant factor for detecting the intentions of others, and accurate detection of gaze direction depends on strong contrast between a dark iris and a bright sclera, a combination that is only found in humans among the primates. One uniquely human capacity is language acquisition, where the detection of shared or joint attention, for example through detection of gaze direction, contributes significantly to the ease of acquisition. The perceived face and gaze direction is also related to fundamental emotional reactions such as fear, aggression, empathy and sympathy. The fast-track modulator model presents a related fast and unconscious subcortical route that involves many central brain areas. Activity in this pathway mediates the affective valence of the stimulus. In particular different sub-regions of the amygdala show specific activation as response to gaze direction, head orientation, and the valence of facial expression. We present three experiments on the effects of face orientation and gaze direction on the judgments of social attributes. We observed that frontal faces with direct gaze were more highly associated with positive adjectives. Does this help to associate positive values to the Holy Face in a Western context? The formal result indicates that the Holy Face is perceived more positively than profiles with both direct and averted gaze. Two control studies, using a Brazilian and a Dutch database of photographs, showed a similar but weaker effect with a

  5. A comparison of facial color pattern and gazing behavior in canid species suggests gaze communication in gray wolves (Canis lupus).

    Directory of Open Access Journals (Sweden)

    Sayoko Ueda

    Full Text Available As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication.

  6. A boundary element model for investigating the effects of eye tumor on the temperature distribution inside the human eye.

    Science.gov (United States)

    Ooi, E H; Ang, W T; Ng, E Y K

    2009-08-01

    A three-dimensional boundary element model of the human eye is developed to investigate the thermal effects of eye tumor on the ocular temperature distribution. The human eye is modeled as comprising several regions which have different thermal properties. The tumor is one of these regions. The thermal effects of the tumor are simulated by taking it to have a very high metabolic heat generation and blood perfusion rate. Inside the tumor, the steady state temperature is governed by the Pennes bioheat equation. Elsewhere, in normal tissues of the eye, the temperature satisfies Laplace's equation. To compute the temperature on the corneal surface, the surface boundary of each region is divided into triangular elements.
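
    For reference, a standard steady-state form of the two governing equations named above is (in generic notation, which may differ from the authors' symbols):

    $$ k\,\nabla^{2}T \;+\; \rho_{b}c_{b}\omega_{b}\,(T_{a}-T) \;+\; Q_{m} \;=\; 0 \quad \text{(Pennes, tumour region)}, \qquad \nabla^{2}T = 0 \quad \text{(Laplace, normal tissue)}, $$

    where $k$ is the tissue thermal conductivity, $\rho_{b}$, $c_{b}$ and $\omega_{b}$ are the blood density, specific heat and perfusion rate, $T_{a}$ is the arterial blood temperature and $Q_{m}$ the metabolic heat generation.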

  7. Predicting gaze direction from head pose yaw and pitch

    NARCIS (Netherlands)

    Johnson, D.O.; Cuijpers, R.H.; Arabnia, H.R.; Deligiannidis, L.; Lu, J.; Tinetti, F.G.; You, J.

    2013-01-01

    Abstract - Socially assistive robots (SARs) must be able to interpret non-verbal communication from a human. A person’s gaze direction informs the observer where the visual attention is directed to. Therefore it is useful if a robot can interpret the gaze direction, so that it can assess whether a

  8. Gaze Behavior, Believability, Likability and the iCat

    NARCIS (Netherlands)

    Poel, Mannes; Heylen, Dirk K.J.; Meulemans, M.; Nijholt, Antinus; Stock, O.; Nishida, T.

    2007-01-01

    The iCat is a user-interface robot with the ability to express a range of emotions through its facial features. This paper summarizes our research whether we can increase the believability and likability of the iCat for its human partners through the application of gaze behaviour. Gaze behaviour

  9. Gaze Behavior, Believability, Likability and the iCat

    NARCIS (Netherlands)

    Nijholt, Antinus; Poel, Mannes; Heylen, Dirk K.J.; Stock, O.; Nishida, T.; Meulemans, M.; van Bremen, A.

    2009-01-01

    The iCat is a user-interface robot with the ability to express a range of emotions through its facial features. This paper summarizes our research whether we can increase the believability and likability of the iCat for its human partners through the application of gaze behaviour. Gaze behaviour

  10. Preliminary study of gaze toward humans in photographs by individuals with autism, Down syndrome, or other intellectual disabilities: implications for design of visual scene displays.

    Science.gov (United States)

    Wilkinson, Krista M; Light, Janice

    2014-06-01

    Visual scene displays (VSDs) are a form of augmentative and alternative communication display in which language concepts are embedded into an image of a naturalistic event. VSDs are based on the theory that language learning occurs through interactions with other people, and recommendations for VSD design have emphasized using images of these events that include humans. However, many VSDs also include other items that could potentially be distracting. We examined gaze fixation in 18 school-aged participants with and without severe intellectual/developmental disabilities (i.e., individuals with typical development, autism, Down syndrome and other intellectual disabilities) while they viewed photographs with human figures of various sizes and locations in the image, appearing alongside other interesting, and potentially distracting items. In all groups, the human figures attracted attention rapidly (within 1.5 seconds). The proportions of each participant's own fixation time spent on the human figures were similar across all groups, as were the proportions of total fixations made to the human figures. Although the findings are preliminary, this initial evidence supports the inclusion of humans in VSD images.

  11. Remote gaze tracking system for 3D environments.

    Science.gov (United States)

    Congcong Liu; Herrup, Karl; Shi, Bertram E

    2017-07-01

    Eye tracking systems are typically divided into two categories: remote and mobile. Remote systems, where the eye tracker is located near the object being viewed by the subject, have the advantage of being less intrusive, but are typically used for tracking gaze points on fixed two dimensional (2D) computer screens. Mobile systems such as eye tracking glasses, where the eye tracker are attached to the subject, are more intrusive, but are better suited for cases where subjects are viewing objects in the three dimensional (3D) environment. In this paper, we describe how remote gaze tracking systems developed for 2D computer screens can be used to track gaze points in a 3D environment. The system is non-intrusive. It compensates for small head movements by the user, so that the head need not be stabilized by a chin rest or bite bar. The system maps the 3D gaze points of the user onto 2D images from a scene camera and is also located remotely from the subject. Measurement results from this system indicate that it is able to estimate gaze points in the scene camera to within one degree over a wide range of head positions.

  12. Comparative Analysis of Gene Expression for Convergent Evolution of Camera Eye Between Octopus and Human

    Science.gov (United States)

    Ogura, Atsushi; Ikeo, Kazuho; Gojobori, Takashi

    2004-01-01

    Although the camera eye of the octopus is very similar to that of humans, phylogenetic and embryological analyses have suggested that their camera eyes have been acquired independently. It has been known as a typical example of convergent evolution. To study the molecular basis of convergent evolution of camera eyes, we conducted a comparative analysis of gene expression in octopus and human camera eyes. We sequenced 16,432 ESTs of the octopus eye, leading to 1052 nonredundant genes that have matches in the protein database. Comparing these 1052 genes with 13,303 already-known ESTs of the human eye, 729 (69.3%) genes were commonly expressed between the human and octopus eyes. On the contrary, when we compared octopus eye ESTs with human connective tissue ESTs, the expression similarity was quite low. To trace the evolutionary changes that are potentially responsible for camera eye formation, we also compared octopus-eye ESTs with the completed genome sequences of other organisms. We found that 1019 out of the 1052 genes had already existed at the common ancestor of bilateria, and 875 genes were conserved between humans and octopuses. It suggests that a larger number of conserved genes and their similar gene expression may be responsible for the convergent evolution of the camera eye. PMID:15289475

  13. Real-time estimation of horizontal gaze angle by saccade integration using in-ear electrooculography.

    Science.gov (United States)

    Hládek, Ľuboš; Porr, Bernd; Brimijoin, W Owen

    2018-01-01

    The manuscript proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, we used an algorithm that calculates absolute eye gaze angle via statistical analysis of detected saccades. The estimated eye positions of the new algorithm were still noisy. However, the performance in terms of Pearson product-moment correlation coefficients was significantly better than the conventional approach in some instances. The results suggest that in-ear EOG signals captured with conductive ear moulds could serve as a basis for light-weight and portable horizontal eye gaze angle estimation suitable for a broad range of applications. For instance, for hearing aids to steer the directivity of microphones in the direction of the user's eye gaze.
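
    The saccade-integration idea can be sketched as follows: threshold the EOG derivative to find saccades, convert each saccadic amplitude to degrees with a per-user scale factor, and accumulate the result into an absolute horizontal gaze angle while ignoring slow drift. The threshold, scale factor and synthetic signal below are illustrative assumptions, not the statistical estimator evaluated in the paper.

```python
import numpy as np

def estimate_gaze_angle(eog, fs=250.0, scale_deg_per_uv=0.05, vel_threshold_uv_per_s=2000.0):
    """Accumulate detected saccade amplitudes into an absolute horizontal gaze angle trace."""
    velocity = np.gradient(eog) * fs                     # EOG derivative (uV/s)
    in_saccade = np.abs(velocity) > vel_threshold_uv_per_s
    gaze = np.zeros_like(eog)
    angle = 0.0
    for i in range(1, len(eog)):
        if in_saccade[i]:                                # integrate only during saccades,
            angle += (eog[i] - eog[i - 1]) * scale_deg_per_uv   # which suppresses slow drift
        gaze[i] = angle
    return gaze

# Synthetic example: a single rightward saccade (200 uV step) embedded in slow drift.
t = np.arange(0, 2, 1 / 250.0)
signal = 20.0 * t + 200.0 * (t > 1.0)
print(estimate_gaze_angle(signal)[-1])   # final gaze angle in degrees
```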

  14. Real-time estimation of horizontal gaze angle by saccade integration using in-ear electrooculography.

    Directory of Open Access Journals (Sweden)

    Ľuboš Hládek

    Full Text Available The manuscript proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, we used an algorithm that calculates absolute eye gaze angle via statistical analysis of detected saccades. The estimated eye positions of the new algorithm were still noisy. However, the performance in terms of Pearson product-moment correlation coefficients was significantly better than the conventional approach in some instances. The results suggest that in-ear EOG signals captured with conductive ear moulds could serve as a basis for light-weight and portable horizontal eye gaze angle estimation suitable for a broad range of applications. For instance, for hearing aids to steer the directivity of microphones in the direction of the user's eye gaze.

  15. The Metamorphosis of Polyphemus's Gaze in Marij Pregelj's Painting (1913-1967

    Directory of Open Access Journals (Sweden)

    Jure Mikuž

    2015-04-01

    Full Text Available In 1949-1951 Marij Pregelj, one of the most interesting Slovenian modernist painters, illustrated his version of Homer's Iliad and Odyssey. His illustrations, presented in the time of socialist realist aesthetics, announce a reintegration of Slovenian art into the global (Western) context. Among the illustrations is the figure of the Cyclops devouring Odysseus' comrades. The image of the one-eyed giant Polyphemus is one which concerned Pregelj all his life: the painter, whose vocation is most dependent on the gaze, can show one eye in profile. And the profiles of others' faces and of his own face interested Pregelj his whole life through. Not only people but also objects were one-eyed: the rosette of a cathedral, which changes into a human figure, a washing machine door, a meat grinder's orifice, a blind "windeye" or window, and so on. The themes of his final two paintings, which he executed but did not complete more than a year before his foreboded, senseless death at the age of 54, are Polyphemus and the Portrait of His Son Vasko. In the first, blood flows from the pricked-out eye towards a stylized camera; in the second, the gaze of the son, an enthusiastic filmmaker, extends to the camera that will displace the father's brush.

  16. Comparison of dogs and humans in visual scanning of social interaction.

    Science.gov (United States)

    Törnqvist, Heini; Somppi, Sanni; Koskela, Aija; Krause, Christina M; Vainio, Outi; Kujala, Miiamaaria V

    2015-09-01

    Previous studies have demonstrated similarities in gazing behaviour of dogs and humans, but comparisons under similar conditions are rare, and little is known about dogs' visual attention to social scenes. Here, we recorded the eye gaze of dogs while they viewed images containing two humans or dogs either interacting socially or facing away: the results were compared with equivalent data measured from humans. Furthermore, we compared the gazing behaviour of two dog and two human populations with different social experiences: family and kennel dogs; dog experts and non-experts. Dogs' gazing behaviour was similar to humans: both species gazed longer at the actors in social interaction than in non-social images. However, humans gazed longer at the actors in dog than human social interaction images, whereas dogs gazed longer at the actors in human than dog social interaction images. Both species also made more saccades between actors in images representing non-conspecifics, which could indicate that processing social interaction of non-conspecifics may be more demanding. Dog experts and non-experts viewed the images very similarly. Kennel dogs viewed images less than family dogs, but otherwise their gazing behaviour did not differ, indicating that the basic processing of social stimuli remains similar regardless of social experiences.

  17. The effects of social pressure and emotional expression on the cone of gaze in patients with social anxiety disorder.

    Science.gov (United States)

    Harbort, Johannes; Spiegel, Julia; Witthöft, Michael; Hecht, Heiko

    2017-06-01

    Patients with social anxiety disorder suffer from pronounced fears in social situations. As gaze perception is crucial in these situations, we examined which factors influence the range of gaze directions where mutual gaze is experienced (the cone of gaze). The social stimulus was modified by changing the number of people (heads) present and the emotional expression of their faces. Participants completed a psychophysical task, in which they had to adjust the eyes of a virtual head to gaze at the edge of the range where mutual eye-contact was experienced. The number of heads affected the width of the gaze cone: the more heads, the wider the gaze cone. The emotional expression of the virtual head had no consistent effect on the width of the gaze cone, it did however affect the emotional state of the participants. Angry expressions produced the highest arousal values. Highest valence emerged from happy faces, lowest valence from angry faces. These results suggest that the widening of the gaze cone in social anxiety disorder is not primarily mediated by their altered emotional reactivity. Implications for gaze assessment and gaze training in therapeutic contexts are discussed. Due to interindividual variability, enlarged gaze cones are not necessarily indicative of social anxiety disorder, they merely constitute a correlate at the group level. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Genetic analyses of the human eye colours using a novel objective method for eye colour classification

    DEFF Research Database (Denmark)

    Andersen, Jeppe D.; Johansen, Peter; Harder, Stine

    2013-01-01

    In this study, we present a new objective method for measuring the eye colour on a continuous scale that allows researchers to associate genetic markers with different shades of eye colour. With the use of the custom designed software Digital Iris Analysis Tool (DIAT), the iris was automatically...... and TYR rs1393350) on the eye colour. We evaluated the two published prediction models for eye colour (IrisPlex [1] and Snipper[2]) and compared the predictions with the PIE-scores. We found good concordance with the prediction from individuals typed as HERC2 rs12913832 G. However, both methods had......-score ranged from −1 to 1 (brown to blue). The software eliminated the need for user based interpretation and qualitative eye colour categories. In 94% (570) of 605 analyzed eye images, the iris region was successfully extracted and a PIE-score was calculated. A very high correlation between the PIE...

  19. Tracking Eyes using Shape and Appearance

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Nielsen, Mads; Hansen, John Paulin

    2002-01-01

    to infer the state of the eye such as eye corners and the pupil location under scale and rotational changes. We use a Gaussian Process interpolation method for gaze determination, which facilitates stability feedback from the system. The use of a learning method for gaze estimation gives more flexibility...
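
    The Gaussian Process interpolation step mentioned above can be illustrated with an off-the-shelf GP regressor that maps tracked eye features to screen coordinates and, usefully for the stability feedback the authors mention, also returns a predictive uncertainty. The features, calibration points and kernel below are hypothetical placeholders rather than the paper's actual setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical calibration data: pupil-centre image coordinates -> known screen targets (pixels).
pupil_xy = np.array([[310, 240], [420, 238], [312, 330], [425, 332], [368, 285]], dtype=float)
screen_xy = np.array([[0, 0], [1920, 0], [0, 1080], [1920, 1080], [960, 540]], dtype=float)

kernel = RBF(length_scale=50.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(pupil_xy, screen_xy)

# Predicted gaze point and its uncertainty (usable as stability feedback) for a new pupil position.
mean, std = gp.predict(np.array([[380.0, 290.0]]), return_std=True)
print(mean, std)
```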

  20. Can human eyes prevent perceptual narrowing for monkey faces in human infants?

    Science.gov (United States)

    Damon, Fabrice; Bayet, Laurie; Quinn, Paul C; Hillairet de Boisferon, Anne; Méary, David; Dupierrix, Eve; Lee, Kang; Pascalis, Olivier

    2015-07-01

    Perceptual narrowing has been observed in human infants for monkey faces: 6-month-olds can discriminate between them, whereas older infants from 9 months of age display difficulty discriminating between them. The difficulty infants from 9 months have processing monkey faces has not been clearly identified. It could be due to the structural characteristics of monkey faces, particularly the key facial features that differ from human faces. The current study aimed to investigate whether the information conveyed by the eyes is of importance. We examined whether the presence of Caucasian human eyes in monkey faces allows recognition to be maintained in 6-month-olds and facilitates recognition in 9- and 12-month-olds. Our results revealed that the presence of human eyes in monkey faces maintains recognition for those faces at 6 months of age and partially facilitates recognition of those faces at 9 months of age, but not at 12 months of age. The findings are interpreted in the context of perceptual narrowing and suggest that the attenuation of processing of other-species faces is not reversed by the presence of human eyes. © 2015 Wiley Periodicals, Inc.

  1. Gaze inspired subtitle position evaluation for MOOCs videos

    Science.gov (United States)

    Chen, Hongli; Yan, Mengzhen; Liu, Sijiang; Jiang, Bo

    2017-06-01

    Online educational resources, such as MOOCs, is becoming increasingly popular, especially in higher education field. One most important media type for MOOCs is course video. Besides traditional bottom-position subtitle accompany to the videos, in recent years, researchers try to develop more advanced algorithms to generate speaker-following style subtitles. However, the effectiveness of such subtitle is still unclear. In this paper, we investigate the relationship between subtitle position and the learning effect after watching the video on tablet devices. Inspired with image based human eye tracking technique, this work combines the objective gaze estimation statistics with subjective user study to achieve a convincing conclusion - speaker-following subtitles are more suitable for online educational videos.

  2. E-gaze : create gaze communication for peoplewith visual disability

    NARCIS (Netherlands)

    Qiu, S.; Osawa, H.; Hu, J.; Rauterberg, G.W.M.

    2015-01-01

    Gaze signals are frequently used by the sighted in social interactions as visual cues. However, these signals and cues are hardly accessible for people with visual disability. A conceptual design of E-Gaze glasses is proposed, assistive to create gaze communication between blind and sighted people

  3. Human vertical eye movement responses to earth horizontal pitch

    Science.gov (United States)

    Wall, C. 3rd; Petropoulos, A. E.

    1993-01-01

    The vertical eye movements in humans produced in response to head-over-heels constant velocity pitch rotation about a horizontal axis resemble those from other species. At 60 degrees/s these are persistent and tend to have non-reversing slow components that are compensatory to the direction of rotation. In most, but not all subjects, the slow component velocity was well characterized by a rapid build-up followed by an exponential decay to a non-zero baseline. Super-imposed was a cyclic or modulation component whose frequency corresponded to the time for one revolution and whose maximum amplitude occurred during a specific head orientation. All response components (exponential decay, baseline and modulation) were larger during pitch backward compared to pitch forward runs. Decay time constants were shorter during the backward runs, thus, unlike left to right yaw axis rotation, pitch responses display significant asymmetries between paired forward and backward runs.
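
    The slow-component velocity profile described above (rapid build-up, exponential decay to a non-zero baseline, plus a once-per-revolution modulation) can be summarised, in generic notation that is not necessarily the authors', as

    $$ v(t) \approx V_{0}\,e^{-t/\tau} + V_{b} + V_{m}\cos\!\left(\frac{2\pi t}{T_{\mathrm{rev}}} + \phi\right), $$

    where $\tau$ is the decay time constant, $V_{b}$ the non-zero baseline, $V_{m}$ the modulation amplitude and $T_{\mathrm{rev}}$ the time for one revolution; the reported asymmetries correspond to larger $V_{0}$, $V_{b}$ and $V_{m}$, and a shorter $\tau$, for pitch-backward runs.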

  4. Noninvasive detection of macular pigments in the human eye.

    Science.gov (United States)

    Gellermann, Werner; Bernstein, Paul S

    2004-01-01

    There is currently strong interest in developing noninvasive technologies for the detection of macular carotenoid pigments in the human eye. These pigments, consisting of lutein and zeaxanthin, are taken up from the diet and are thought to play an important role in the prevention of age-related macular degeneration, the leading cause of blindness in the elderly in the Western world. It may be possible to prevent or delay the onset of this debilitating disease with suitable dietary intervention strategies. We review the most commonly used detection techniques based on heterochromatic flicker photometry, fundus reflectometry, and autofluorescence techniques and put them in perspective with recently developed more molecule-specific Raman detection methods. (c) 2004 Society of Photo-Optical Instrumentation Engineers.

  5. Composite modified Luneburg model of human eye lens.

    Science.gov (United States)

    Gómez-Correa, J E; Balderas-Mata, S E; Pierscionek, B K; Chávez-Cerda, S

    2015-09-01

    A new lens model based on the gradient-index Luneburg lens and composed of two oblate half spheroids of different curvatures is presented. The spherically symmetric Luneburg lens is modified to create continuous isoindicial contours and to incorporate curvatures that are similar to those found in a human lens. The imaging capabilities of the model and the changes in the gradient index profile are tested for five object distances, for a fixed geometry and for a fixed image distance. The central refractive index decreases with decreasing object distance. This indicates that in order to focus at the same image distance as is required in the eye, a decrease in refractive power is needed for rays from closer objects that meet the lens surface at steeper angles compared to rays from more distant objects. This ensures a highly focused image with no spherical aberration.
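
    For context, the classical spherically symmetric Luneburg lens that the composite model modifies has the gradient-index profile (here scaled by the surrounding index $n_{0}$)

    $$ n(r) = n_{0}\sqrt{2 - \left(\frac{r}{R}\right)^{2}}, \qquad 0 \le r \le R, $$

    so the index falls from $\sqrt{2}\,n_{0}$ at the centre to $n_{0}$ at the surface radius $R$; the model above reshapes these isoindicial contours into two oblate half-spheroids with human-lens-like curvatures (the exact modified profile used in the paper is not reproduced here).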

  6. A GazeWatch Prototype

    DEFF Research Database (Denmark)

    Paulin Hansen, John; Biermann, Florian; Møllenbach, Emile

    2015-01-01

    We demonstrate potentials of adding a gaze tracking unit to a smartwatch, allowing hands-free interaction with the watch itself and control of the environment. Users give commands via gaze gestures, i.e. looking away and back to the GazeWatch. Rapid presentation of single words on the watch displ...... provides a rich and effective textual interface. Finally, we exemplify how the GazeWatch can be used as a ubiquitous pointer on large displays....

  7. Simultaneous measurement of eye stiffness and contact area for living human eyes.

    Science.gov (United States)

    Kurita, Yuichi; Iida, Yoshichika; Kaneko, Makoto; Mishima, Hiromu K; Katakura, Seiki; Kiuchi, Yoshiaki

    2007-01-01

    Goldmann applanation tonometry is commonly used for measuring IOP (IntraOcular Pressure) to diagnose glaucoma. However, the IOP measured by applanation tonometry is valid only under the assumption that all subjects have the same structural eye stiffness. Abnormal eye stiffness produces abnormal corneal deformation, and thus the current applanation tonometer misestimates the IOP. This study attempts to measure eye stiffness in vivo with a non-invasive approach for detecting such abnormal deformation. The deformation of the cornea and the contact area between the probe and the cornea are simultaneously captured by cameras during the experiment. Experimental results show that some subjects exhibit a different relationship among force, displacement and contact area even at the same IOP. The proposed eye stiffness measurement can help detect abnormal deformation and eyes with misestimated IOP.

  8. Human behavioral biology: commentary on Lerner and von Eye's sociobiology and human development

    NARCIS (Netherlands)

    Molenaar, P.C.M.; Burgess, R.L.

    1993-01-01

    Contends that in their examination of arguments forwarded by sociobiologists to account for key features of human development, R. M. Lerner and A. von Eye (see record 1992-23071-001) misunderstand the role of general theory in science. They also fail to characterize the work of sociobiologists

  9. Visual Positioning Indoors: Human Eyes vs. Smartphone Cameras.

    Science.gov (United States)

    Wu, Dewen; Chen, Ruizhi; Chen, Liang

    2017-11-16

    Artificial Intelligence (AI) technologies and their related applications are now developing at a rapid pace. Indoor positioning will be one of the core technologies that enable AI applications because people spend 80% of their time indoors. Humans can locate themselves relative to a visually well-defined object, e.g., a door, based on their visual observations. Can a smartphone camera do a similar job when it points to an object? In this paper, a visual positioning solution was developed based on a single image captured from a smartphone camera pointing to a well-defined object. The smartphone camera simulates the process of human eyes for the purpose of locating itself relative to a well-defined object. Extensive experiments were conducted with five types of smartphones in three different indoor settings, including a meeting room, a library, and a reading room. Experimental results showed that the average positioning accuracy of the solution based on five smartphone cameras is 30.6 cm, while that for the human-observed solution with 300 samples from 10 different people is 73.1 cm.

  10. Gazes and Performances

    DEFF Research Database (Denmark)

    Larsen, Jonas

    Abstract: Recent literature has critiqued this notion of the 'tourist gaze' for reducing tourism to visual experiences ('sightseeing') and neglecting other senses and bodily experiences of doing tourism. A so-called 'performance turn' within tourist studies highlights how tourists experience places … to conceptualise the corporeality of tourist bodies and the embodied actions of, and interactions between, tourist workers, tourists and 'locals' on various stages. It has been suggested that it is necessary to choose between gazing and performing as the tourism paradigm (Perkin and Thorns 2001). Rather than … ethnographic studies I spell out the embodied, hybridised, mobile and performative nature of tourist gazing, especially with regard to tourist photography. The talk draws on my recent book Tourism, Performance and the Everyday: Consuming the Orient (Routledge, 2009, with M. Haldrup) and the substantially …

  11. Clinician's gaze behaviour in simulated paediatric emergencies.

    Science.gov (United States)

    McNaughten, Ben; Hart, Caroline; Gallagher, Stephen; Junk, Carol; Coulter, Patricia; Thompson, Andrew; Bourke, Thomas

    2018-03-07

    Differences in the gaze behaviour of experts and novices are described in aviation and surgery. This study sought to describe the gaze behaviour of clinicians from different training backgrounds during a simulated paediatric emergency. Clinicians from four clinical areas undertook a simulated emergency. Participants wore SMI (SensoMotoric Instruments) eye tracking glasses. We measured the fixation count and dwell time on predefined areas of interest and the time taken to key clinical interventions. Paediatric intensive care unit (PICU) consultants performed best and focused longer on the chest and airway. Paediatric consultants and trainees spent longer looking at the defibrillator and algorithm (51 180 ms and 50 551 ms, respectively) than the PICU and paediatric emergency medicine consultants. This study is the first to describe differences in the gaze behaviour between experts and novices in a resuscitation. They mirror those described in aviation and surgery. Further research is needed to evaluate the potential use of eye tracking as an educational tool. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  12. The Epistemology of the Gaze

    DEFF Research Database (Denmark)

    Kramer, Mette

    2007-01-01

    In psycho-semiotic film theory the gaze is often considered to be a straitjacket for the female spectator. If we approach the gaze from an empiric so-called 'naturalised' lens, it is possible to regard the gaze as a functional device through which the spectator can obtain knowledge essential for … for her self-preservation.

  13. Blue eyes in lemurs and humans: same phenotype, different genetic mechanism

    DEFF Research Database (Denmark)

    Bradley, Brenda J; Pedersen, Anja; Mundy, Nicholas I

    2009-01-01

    Almost all mammals have brown or darkly-pigmented eyes (irises), but among primates, there are some prominent blue-eyed exceptions. The blue eyes of some humans and lemurs are a striking example of convergent evolution of a rare phenotype on distant branches of the primate tree. Recent work...... on humans indicates that blue eye color is associated with, and likely caused by, a single nucleotide polymorphism (rs12913832) in an intron of the gene HERC2, which likely regulates expression of the neighboring pigmentation gene OCA2. This raises the immediate question of whether blue eyes in lemurs might...... have a similar genetic basis. We addressed this by sequencing the homologous genetic region in the blue-eyed black lemur (Eulemur macaco flavifrons; N = 4) and the closely-related black lemur (Eulemur macaco macaco; N = 4), which has brown eyes. We then compared a 166-bp segment corresponding...

  14. Gaze stabilization reflexes in the mouse: New tools to study vision and sensorimotor

    NARCIS (Netherlands)

    B. van Alphen (Bart)

    2010-01-01

    Gaze stabilization reflexes are a popular model system in neuroscience for connecting neurophysiology and behavior as well as studying the neural correlates of behavioral plasticity. These compensatory eye movements are one of the simplest motor behaviors,

  15. Towards gaze-controlled platform games

    DEFF Research Database (Denmark)

    Muñoz, Jorge; Yannakakis, Georgios N.; Mulvey, Fiona

    2011-01-01

    This paper introduces the concept of using gaze as a sole modality for fully controlling player characters of fast-paced action computer games. A user experiment is devised to collect gaze and gameplay data from subjects playing a version of the popular Super Mario Bros platform game. The initial...... analysis shows that there is a rather limited grid around Mario where the efficient player focuses her attention the most while playing the game. The useful grid as we name it, projects the amount of meaningful visual information a designer should use towards creating successful player character...... controllers with the use of artificial intelligence for a platform game like Super Mario. Information about the eyes' position on the screen and the state of the game are utilized as inputs of an artificial neural network, which is trained to approximate which keyboard action is to be performed at each game...
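
    The controller idea sketched above — eye position on the screen plus game-state features in, keyboard action out — can be illustrated with a small feed-forward network. Everything below (the feature set, the labels and the training rows) is hypothetical and merely shows the input-output structure, not the network or data actually used for Super Mario.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training rows: [gaze_x, gaze_y, mario_x, mario_y, nearest_enemy_dx] -> action label.
X = np.array([[0.62, 0.48, 0.50, 0.50,  0.30],
              [0.40, 0.45, 0.50, 0.50, -0.20],
              [0.55, 0.20, 0.50, 0.50,  0.10],
              [0.70, 0.50, 0.50, 0.50,  0.60]])
y = np.array(["right", "left", "jump", "right"])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

# Predicted keyboard action for a new gaze position and game state.
print(clf.predict([[0.65, 0.45, 0.50, 0.50, 0.40]]))
```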

  16. Controlled delivery of antiangiogenic drug to human eye tissue using a MEMS device

    KAUST Repository

    Pirmoradi, Fatemeh Nazly; Ou, Kevin; Jackson, John K.; Letchford, Kevin; Cui, Jing; Wolf, Ki Tae; Graber, Florian; Zhao, Tom; Matsubara, Joanne A.; Burt, Helen; Chiao, Mu; Lin, Liwei

    2013-01-01

    We demonstrate an implantable MEMS drug delivery device to conduct controlled and on-demand, ex vivo drug transport to human eye tissue. Remotely operated drug delivery to human post-mortem eyes was performed via a MEMS device. The developed curved

  17. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor

    Directory of Open Access Journals (Sweden)

    Rizwan Ali Naqvi

    2018-02-01

    Full Text Available A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver’s point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.

  18. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor.

    Science.gov (United States)

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-02-03

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.
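
    The underlying PCCR idea referred to above estimates gaze from the vector between the pupil centre and the corneal glint in the NIR eye image. The linear mapping and its coefficients below are placeholders that only show the principle; a real mapping of this kind needs per-user calibration, which is exactly what the paper's deep-learning approach is designed to avoid.

```python
import numpy as np

def pccr_gaze_angle(pupil_center, glint_center,
                    gain_deg_per_px=(0.35, 0.35), offset_deg=(0.0, 0.0)):
    """Estimate (horizontal, vertical) gaze angles in degrees from the pupil-glint vector."""
    vec = np.asarray(pupil_center, float) - np.asarray(glint_center, float)
    gaze = vec * np.asarray(gain_deg_per_px) + np.asarray(offset_deg)
    return tuple(gaze)

# Example: pupil centre and corneal reflection located in a NIR eye image (pixel coordinates).
print(pccr_gaze_angle(pupil_center=(312.0, 215.0), glint_center=(301.0, 220.0)))
```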

  19. "The Gaze Heuristic:" Biography of an Adaptively Rational Decision Process.

    Science.gov (United States)

    Hamlin, Robert P

    2017-04-01

    This article is a case study that describes the natural and human history of the gaze heuristic. The gaze heuristic is an interception heuristic that utilizes a single input (deviation from a constant angle of approach) repeatedly as a task is performed. Its architecture, advantages, and limitations are described in detail. A history of the gaze heuristic is then presented. In natural history, the gaze heuristic is the only known technique used by predators to intercept prey. In human history the gaze heuristic was discovered accidentally by Royal Air Force (RAF) fighter command just prior to World War II. As it was never discovered by the Luftwaffe, the technique conferred a decisive advantage upon the RAF throughout the war. After the end of the war in America, German technology was combined with the British heuristic to create the Sidewinder AIM9 missile, the most successful autonomous weapon ever built. There are no plans to withdraw it or replace its guiding gaze heuristic. The case study demonstrates that the gaze heuristic is a specific heuristic type that takes a single best input at the best time (take the best). Its use is an adaptively rational response to specific, rapidly evolving decision environments that has allowed those animals/humans/machines who use it to survive, prosper, and multiply relative to those who do not. Copyright © 2017 Cognitive Science Society, Inc.
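
    A toy simulation of the strategy the gaze heuristic implements — hold the angle of approach to the target constant and steer only on deviations from it — is sketched below. The geometry, speeds and feedback gain are arbitrary illustrative choices; the code simply prints the pursuer's closest approach to a crossing target.

```python
import math

def constant_bearing_pursuit(pursuer, target, target_velocity,
                             pursuer_speed=1.2, steps=300, dt=0.1):
    """Steer the pursuer so the bearing (gaze angle) to the target stays constant."""
    px, py = pursuer
    tx, ty = target
    locked = math.atan2(ty - py, tx - px)            # the angle of approach to be held constant
    closest = math.hypot(tx - px, ty - py)
    for _ in range(steps):
        tx += target_velocity[0] * dt
        ty += target_velocity[1] * dt
        bearing = math.atan2(ty - py, tx - px)
        heading = bearing + 3.0 * (bearing - locked)  # steer to cancel deviation from the locked angle
        px += pursuer_speed * math.cos(heading) * dt
        py += pursuer_speed * math.sin(heading) * dt
        closest = min(closest, math.hypot(tx - px, ty - py))
    return closest

# Pursuer starts at the origin; the target crosses from (5, 5) moving left at 0.8 units/s.
print(constant_bearing_pursuit((0.0, 0.0), (5.0, 5.0), (-0.8, 0.0)))
```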

  20. Intact unconscious processing of eye contact in schizophrenia

    Directory of Open Access Journals (Sweden)

    Kiley Seymour

    2016-03-01

    Full Text Available The perception of eye gaze is crucial for social interaction, providing essential information about another person’s goals, intentions, and focus of attention. People with schizophrenia suffer a wide range of social cognitive deficits, including abnormalities in eye gaze perception. For instance, patients have shown an increased bias to misjudge averted gaze as being directed toward them. In this study we probed early unconscious mechanisms of gaze processing in schizophrenia using a technique known as continuous flash suppression. Previous research using this technique to render faces with direct and averted gaze initially invisible reveals that direct eye contact gains privileged access to conscious awareness in healthy adults. We found that patients, as with healthy control subjects, showed the same effect: faces with direct eye gaze became visible significantly faster than faces with averted gaze. This suggests that early unconscious processing of eye gaze is intact in schizophrenia and implies that any misjudgments of gaze direction must manifest at a later conscious stage of gaze processing where deficits and/or biases in attributing mental states to gaze and/or beliefs about being watched may play a role.

  1. Gaze training enhances laparoscopic technical skill acquisition and multi-tasking performance: a randomized, controlled study.

    Science.gov (United States)

    Wilson, Mark R; Vine, Samuel J; Bright, Elizabeth; Masters, Rich S W; Defriend, David; McGrath, John S

    2011-12-01

    The operating room environment is replete with stressors and distractions that increase the attention demands of what are already complex psychomotor procedures. Contemporary research in other fields (e.g., sport) has revealed that gaze training interventions may support the development of robust movement skills. This current study was designed to examine the utility of gaze training for technical laparoscopic skills and to test performance under multitasking conditions. Thirty medical trainees with no laparoscopic experience were divided randomly into one of three treatment groups: gaze trained (GAZE), movement trained (MOVE), and discovery learning/control (DISCOVERY). Participants were fitted with a Mobile Eye gaze registration system, which measures eye-line of gaze at 25 Hz. Training consisted of ten repetitions of the "eye-hand coordination" task from the LAP Mentor VR laparoscopic surgical simulator while receiving instruction and video feedback (specific to each treatment condition). After training, all participants completed a control test (designed to assess learning) and a multitasking transfer test, in which they completed the procedure while performing a concurrent tone counting task. Not only did the GAZE group learn more quickly than the MOVE and DISCOVERY groups (faster completion times in the control test), but the performance difference was even more pronounced when multitasking. Differences in gaze control (target locking fixations), rather than tool movement measures (tool path length), underpinned this performance advantage for GAZE training. These results suggest that although the GAZE intervention focused on training gaze behavior only, there were indirect benefits for movement behaviors and performance efficiency. Additionally, focusing on a single external target when learning, rather than on complex movement patterns, may have freed-up attentional resources that could be applied to concurrent cognitive tasks.

  2. Different cellular effects of four anti-inflammatory eye drops on human corneal epithelial cells: independent in active components

    OpenAIRE

    Qu, Mingli; Wang, Yao; Yang, Lingling; Zhou, Qingjun

    2011-01-01

    Purpose: To evaluate and compare the cellular effects of four commercially available anti-inflammatory eye drops and their active components on human corneal epithelial cells (HCECs) in vitro. Methods: The cellular effects of four eye drops (Bromfenac Sodium Hydrate Eye Drops, Pranoprofen Eye Drops, Diclofenac Sodium Eye Drops, and Tobramycin & Dex Eye Drops) and their corresponding active components were evaluated in an HCEC line with five in vitro assays. Cell proliferation and migration were...

  3. Attention, Exposure Duration, and Gaze Shifting in Naming Performance

    Science.gov (United States)

    Roelofs, Ardi

    2011-01-01

    Two experiments are reported in which the role of attribute exposure duration in naming performance was examined by tracking eye movements. Participants were presented with color-word Stroop stimuli and left- or right-pointing arrows on different sides of a computer screen. They named the color attribute and shifted their gaze to the arrow to…

  4. Learning to interact with a computer by gaze

    DEFF Research Database (Denmark)

    Aoki, Hirotaka; Hansen, John Paulin; Itoh, Kenji

    2008-01-01

    that inefficient eye movements was dramatically reduced after only 15 to 25 sentences of typing, equal to approximately 3-4 hours of practice. The performance data fits a general learning model based on the power law of practice. The learning model can be used to estimate further improvements in gaze typing...
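
    The power law of practice referred to above is conventionally written, in generic notation, as

    $$ T_{n} = T_{1}\,n^{-\alpha}, $$

    where $T_{n}$ is the completion time (or error cost) on the $n$-th practice trial, $T_{1}$ the time on the first trial and $\alpha > 0$ the learning-rate exponent; fitting $\alpha$ to the early typing sessions is what allows further improvements in gaze typing speed to be extrapolated.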

  5. Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements.

    Science.gov (United States)

    Kreysa, Helene; Kessler, Luise; Schweinberger, Stefan R

    2016-01-01

    A speaker's gaze behaviour can provide perceivers with a multitude of cues which are relevant for communication, thus constituting an important non-verbal interaction channel. The present study investigated whether direct eye gaze of a speaker affects the likelihood of listeners believing truth-ambiguous statements. Participants were presented with videos in which a speaker produced such statements with either direct or averted gaze. The statements were selected through a rating study to ensure that participants were unlikely to know a-priori whether they were true or not (e.g., "sniffer dogs cannot smell the difference between identical twins"). Participants indicated in a forced-choice task whether or not they believed each statement. We found that participants were more likely to believe statements by a speaker looking at them directly, compared to a speaker with averted gaze. Moreover, when participants disagreed with a statement, they were slower to do so when the statement was uttered with direct (compared to averted) gaze, suggesting that the process of rejecting a statement as untrue may be inhibited when that statement is accompanied by direct gaze.

  6. Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements.

    Directory of Open Access Journals (Sweden)

    Helene Kreysa

    Full Text Available A speaker's gaze behaviour can provide perceivers with a multitude of cues which are relevant for communication, thus constituting an important non-verbal interaction channel. The present study investigated whether direct eye gaze of a speaker affects the likelihood of listeners believing truth-ambiguous statements. Participants were presented with videos in which a speaker produced such statements with either direct or averted gaze. The statements were selected through a rating study to ensure that participants were unlikely to know a-priori whether they were true or not (e.g., "sniffer dogs cannot smell the difference between identical twins"). Participants indicated in a forced-choice task whether or not they believed each statement. We found that participants were more likely to believe statements by a speaker looking at them directly, compared to a speaker with averted gaze. Moreover, when participants disagreed with a statement, they were slower to do so when the statement was uttered with direct (compared to averted) gaze, suggesting that the process of rejecting a statement as untrue may be inhibited when that statement is accompanied by direct gaze.

  7. A scalable and deformable stylized model of the adult human eye for radiation dose assessment.

    Science.gov (United States)

    El Basha, Daniel; Furuta, Takuya; Iyer, Siva S R; Bolch, Wesley E

    2018-03-23

    With recent changes in the recommended annual limit on eye lens exposures to ionizing radiation, there is considerable interest in predictive computational dosimetry models of the human eye and its various ocular structures including the crystalline lens, ciliary body, cornea, retina, optic nerve, and central retinal artery. Computational eye models to date have been constructed as stylized models, high-resolution voxel models, and polygon mesh models. Their common feature, however, is that they are typically constructed of nominal size and of a roughly spherical shape associated with the emmetropic eye. In this study, we present a geometric eye model that is both scalable (allowing for changes in eye size) and deformable (allowing for changes in eye shape), and that is suitable for use in radiation transport studies of ocular exposures and radiation treatments of eye disease. The model allows continuous and variable changes in eye size (axial lengths from 20 to 26 mm) and eye shape (diopters from -12 to +6). As an explanatory example of its use, five models (emmetropic eyes of small, average, and large size, as well as average size eyes of -12D and +6D) were constructed and subjected to normally incident beams of monoenergetic electrons and photons, with resultant energy-dependent dose coefficients presented for both anterior and posterior eye structures. Electron dose coefficients were found to vary with changes to both eye size and shape for the posterior eye structures, while their values for the eye crystalline lens were found to be sensitive to changes in only eye size. No dependence upon eye size or eye shape was found for photon dose coefficients at energies below 2 MeV. Future applications of the model can include more extensive tabulations of dose coefficients to all ocular structures (not only the lens) as a function of eye size and shape, as well as the assessment of x-ray therapies for ocular disease for patients with non-emmetropic eyes. © 2018

  8. Intraocular Telescopic System Design: Optical and Visual Simulation in a Human Eye Model

    OpenAIRE

    Zoulinakis, Georgios; Ferrer-Blasco, Teresa

    2017-01-01

    Purpose. To design an intraocular telescopic system (ITS) for magnifying retinal image and to simulate its optical and visual performance after implantation in a human eye model. Methods. Design and simulation were carried out with a ray-tracing and optical design software. Two different ITS were designed, and their visual performance was simulated using the Liou-Brennan eye model. The difference between the ITS was their lenses’ placement in the eye model and their powers. Ray tracing in bot...

  9. Experimental tests of a superposition hypothesis to explain the relationship between the vestibuloocular reflex and smooth pursuit during horizontal combined eye-head tracking in humans

    Science.gov (United States)

    Huebner, W. P.; Leigh, R. J.; Seidman, S. H.; Thomas, C. W.; Billian, C.; DiScenna, A. O.; Dell'Osso, L. F.

    1992-01-01

    1. We used a modeling approach to test the hypothesis that, in humans, the smooth pursuit (SP) system provides the primary signal for cancelling the vestibuloocular reflex (VOR) during combined eye-head tracking (CEHT) of a target moving smoothly in the horizontal plane. Separate models for SP and the VOR were developed. The optimal values of parameters of the two models were calculated using measured responses of four subjects to trials of SP and the visually enhanced VOR. After optimal parameter values were specified, each model generated waveforms that accurately reflected the subjects' responses to SP and vestibular stimuli. The models were then combined into a CEHT model wherein the final eye movement command signal was generated as the linear summation of the signals from the SP and VOR pathways. 2. The SP-VOR superposition hypothesis was tested using two types of CEHT stimuli, both of which involved passive rotation of subjects in a vestibular chair. The first stimulus consisted of a "chair brake" or sudden stop of the subject's head during CEHT; the visual target continued to move. The second stimulus consisted of a sudden change from the visually enhanced VOR to CEHT ("delayed target onset" paradigm); as the vestibular chair rotated past the angular position of the stationary visual stimulus, the latter started to move in synchrony with the chair. Data collected during experiments that employed these stimuli were compared quantitatively with predictions made by the CEHT model. 3. During CEHT, when the chair was suddenly and unexpectedly stopped, the eye promptly began to move in the orbit to track the moving target. Initially, gaze velocity did not completely match target velocity, however; this finally occurred approximately 100 ms after the brake onset. The model did predict the prompt onset of eye-in-orbit motion after the brake, but it did not predict that gaze velocity would initially be only approximately 70% of target velocity. One possible
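
    The hypothesis tested here is that the eye-velocity command during combined eye-head tracking is the linear sum of the smooth-pursuit and VOR pathway outputs. The Python sketch below illustrates that superposition in a deliberately crude form: the VOR is reduced to a fixed gain on head velocity and smooth pursuit to a first-order lag driven by retinal slip, so the gains, time constant, and chair-brake timing are illustrative assumptions rather than the optimized parameters of the published models.

      DT = 0.001            # integration step (s)
      VOR_GAIN = 0.95       # assumed VOR gain (illustrative)
      SP_GAIN = 4.0         # assumed open-loop pursuit velocity gain (illustrative)
      SP_TAU = 0.1          # assumed pursuit time constant (s), crude stand-in
      BRAKE_T = 0.5         # time at which the vestibular chair is stopped (s)

      target_vel = 20.0     # target velocity (deg/s), constant
      sp_state = 0.0        # internal pursuit command (deg/s)
      gaze_vel = 0.0        # gaze velocity = eye-in-head velocity + head velocity

      print("   t   head    eye   gaze")
      for step in range(int(1.0 / DT)):
          t = step * DT
          head_vel = target_vel if t < BRAKE_T else 0.0   # chair brake stops the head
          # VOR pathway: compensatory eye velocity opposing head motion.
          vor_cmd = -VOR_GAIN * head_vel
          # Pursuit pathway: first-order response to retinal slip (target - gaze).
          slip = target_vel - gaze_vel
          sp_state += DT / SP_TAU * (SP_GAIN * slip - sp_state)
          # Superposition hypothesis: eye command is the sum of both pathways.
          eye_vel = vor_cmd + sp_state
          gaze_vel = eye_vel + head_vel
          if step % 100 == 0:
              print(f"{t:4.2f}  {head_vel:5.1f}  {eye_vel:5.1f}  {gaze_vel:5.1f}")

    Even this toy version reproduces the qualitative pattern described above: during tracking the eye barely moves in the orbit, and after the simulated chair brake the eye promptly starts moving while gaze velocity initially falls short of target velocity.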

  10. Investigating gaze of children with ASD in naturalistic settings.

    Directory of Open Access Journals (Sweden)

    Basilio Noris

    Full Text Available BACKGROUND: Visual behavior is known to be atypical in Autism Spectrum Disorders (ASD). Monitor-based eye-tracking studies have measured several of these atypicalities in individuals with Autism. While atypical behaviors are known to be accentuated during natural interactions, few studies have examined gaze behavior in natural interactions. In this study we focused on (i) whether the findings obtained in laboratory settings are also visible in a naturalistic interaction; (ii) whether new atypical elements appear when studying visual behavior across the whole field of view. METHODOLOGY/PRINCIPAL FINDINGS: Ten children with ASD and ten typically developing children participated in a dyadic interaction with an experimenter administering items from the Early Social Communication Scale (ESCS). The children wore a novel head-mounted eye-tracker, measuring gaze direction and presence of faces across the child's field of view. The analysis of gaze episodes to faces revealed that children with ASD looked significantly less and for shorter lapses of time at the experimenter. The analysis of gaze patterns across the child's field of view revealed that children with ASD looked downwards and made more extensive use of their lateral field of view when exploring the environment. CONCLUSIONS/SIGNIFICANCE: The data gathered in naturalistic settings confirm findings previously obtained only in monitor-based studies. Moreover, the study allowed us to observe a generalized strategy of lateral gaze in children with ASD when they were looking at the objects in their environment.

  11. Attention to eye contact in the West and East: autonomic responses and evaluative ratings.

    Science.gov (United States)

    Akechi, Hironori; Senju, Atsushi; Uibo, Helen; Kikuchi, Yukiko; Hasegawa, Toshikazu; Hietanen, Jari K

    2013-01-01

    Eye contact has a fundamental role in human social interaction. The special appearance of the human eye (i.e., white sclera contrasted with a coloured iris) implies the importance of detecting another person's face through eye contact. Empirical studies have demonstrated that faces making eye contact are detected quickly and processed preferentially (i.e., the eye contact effect). Such sensitivity to eye contact seems to be innate and universal among humans; however, several studies suggest that cultural norms affect eye contact behaviours. For example, Japanese individuals exhibit less eye contact than do individuals from Western European or North American cultures. However, how culture modulates eye contact behaviour is unclear. The present study investigated cultural differences in autonomic correlates of attentional orienting (i.e., heart rate) and looking time. Additionally, we examined evaluative ratings of eye contact with another real person, displaying an emotionally neutral expression, between participants from Western European (Finnish) and East Asian (Japanese) cultures. Our results showed that eye contact elicited stronger heart rate deceleration responses (i.e., attentional orienting), shorter looking times, and higher ratings of subjective feelings of arousal as compared to averted gaze in both cultures. Instead, cultural differences in the eye contact effect were observed in various evaluative responses regarding the stimulus faces (e.g., facial emotion, approachability etc.). The rating results suggest that individuals from an East Asian culture perceive another's face as being angrier, unapproachable, and unpleasant when making eye contact as compared to individuals from a Western European culture. The rating results also revealed that gaze direction (direct vs. averted) could influence perceptions about another person's facial affect and disposition. These results suggest that cultural differences in eye contact behaviour emerge from differential

  12. Attention to eye contact in the West and East: autonomic responses and evaluative ratings.

    Directory of Open Access Journals (Sweden)

    Hironori Akechi

    Full Text Available Eye contact has a fundamental role in human social interaction. The special appearance of the human eye (i.e., white sclera contrasted with a coloured iris) implies the importance of detecting another person's face through eye contact. Empirical studies have demonstrated that faces making eye contact are detected quickly and processed preferentially (i.e., the eye contact effect). Such sensitivity to eye contact seems to be innate and universal among humans; however, several studies suggest that cultural norms affect eye contact behaviours. For example, Japanese individuals exhibit less eye contact than do individuals from Western European or North American cultures. However, how culture modulates eye contact behaviour is unclear. The present study investigated cultural differences in autonomic correlates of attentional orienting (i.e., heart rate) and looking time. Additionally, we examined evaluative ratings of eye contact with another real person, displaying an emotionally neutral expression, between participants from Western European (Finnish) and East Asian (Japanese) cultures. Our results showed that eye contact elicited stronger heart rate deceleration responses (i.e., attentional orienting), shorter looking times, and higher ratings of subjective feelings of arousal as compared to averted gaze in both cultures. Instead, cultural differences in the eye contact effect were observed in various evaluative responses regarding the stimulus faces (e.g., facial emotion, approachability etc.). The rating results suggest that individuals from an East Asian culture perceive another's face as being angrier, unapproachable, and unpleasant when making eye contact as compared to individuals from a Western European culture. The rating results also revealed that gaze direction (direct vs. averted) could influence perceptions about another person's facial affect and disposition. These results suggest that cultural differences in eye contact behaviour emerge from

  13. Human eye modelling for ophthalmic simulators project for clinic applications

    International Nuclear Information System (INIS)

    Sanchez, Andrea; Santos, Adimir dos; Yoriyaz, Helio

    2002-01-01

    Most eye tumors are treated by surgical means, which involves the enucleation of affected eyes. In terms of treatment and control of diseases, there is brachytherapy, which often utilizes small applicators of Co-60, I-125, Ru-106, Ir-192, etc. These methods have proven very efficient but are highly costly. The objective of this work is to propose a detailed simulator model for eye characterization. Additionally, this study can contribute to the design and construction of a new applicator in order to reduce the cost and to allow more patients to be treated

  14. Human secretory phospholipase A(2), group IB in normal eyes and in eye diseases

    DEFF Research Database (Denmark)

    Kolko, Miriam; Prause, Jan U; Bazan, Nicolas G

    2007-01-01

    , retinitis pigmentosa and glaucoma were evaluated. RESULTS: Expression of hGIB was found in various cells of the eye. The most abundant expression was found in retinal pigment epithelium (RPE) cells, the inner photoreceptor segments, ganglion cells and the corneal endothelium. We explored diseases involving...

  15. A scalable and deformable stylized model of the adult human eye for radiation dose assessment

    Science.gov (United States)

    El Basha, Daniel; Furuta, Takuya; Iyer, Siva S. R.; Bolch, Wesley E.

    2018-05-01

    With recent changes in the recommended annual limit on eye lens exposures to ionizing radiation, there is considerable interest in predictive computational dosimetry models of the human eye and its various ocular structures including the crystalline lens, ciliary body, cornea, retina, optic nerve, and central retinal artery. Computational eye models to date have been constructed as stylized models, high-resolution voxel models, and polygon mesh models. Their common feature, however, is that they are typically constructed of nominal size and of a roughly spherical shape associated with the emmetropic eye. In this study, we present a geometric eye model that is both scalable (allowing for changes in eye size) and deformable (allowing for changes in eye shape), and that is suitable for use in radiation transport studies of ocular exposures and radiation treatments of eye disease. The model allows continuous and variable changes in eye size (axial lengths from 20 to 26 mm) and eye shape (diopters from  ‑12 to  +6). As an explanatory example of its use, five models (emmetropic eyes of small, average, and large size, as well as average size eyes of  ‑12D and  +6D) were constructed and subjected to normally incident beams of monoenergetic electrons and photons, with resultant energy-dependent dose coefficients presented for both anterior and posterior eye structures. Electron dose coefficients were found to vary with changes to both eye size and shape for the posterior eye structures, while their values for the crystalline lens were found to be sensitive to changes in only eye size. No dependence upon eye size or eye shape was found for photon dose coefficients at energies below 2 MeV. Future applications of the model can include more extensive tabulations of dose coefficients to all ocular structures (not only the lens) as a function of eye size and shape, as well as the assessment of x-ray therapies for ocular disease for patients with non

  16. Dating the time of birth: A radiocarbon calibration curve for human eye-lens crystallines

    DEFF Research Database (Denmark)

    Kjeldsen, Henrik; Heinemeier, Jan; Heegaard, Steffen

    2010-01-01

    Radiocarbon bomb-pulse dating has been used to measure the formation age of human eye-lens crystallines. Lens crystallines are special proteins in the eye-lens that consist of virtually inert tissue. The experimental data show that the radiocarbon ages to a large extent reflect the time of birth...

  17. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    Science.gov (United States)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.

  18. Effects of eye contact and iconic gestures on message retention in human-robot interaction

    NARCIS (Netherlands)

    Dijk, van E.T.; Torta, E.; Cuijpers, R.H.

    2013-01-01

    The effects of iconic gestures and eye contact on message retention in human-robot interaction were investigated in a series of experiments. A humanoid robot gave short verbal messages to participants, accompanied either by iconic gestures or no gestures while making eye contact with the participant

  19. High-speed adaptive optics line scan confocal retinal imaging for human eye.

    Science.gov (United States)

    Lu, Jing; Gu, Boyu; Wang, Xiaolin; Zhang, Yuhua

    2017-01-01

    Continuous and rapid eye movement causes significant intraframe distortion in adaptive optics high resolution retinal imaging. To minimize this artifact, we developed a high speed adaptive optics line scan confocal retinal imaging system. A high speed line camera was employed to acquire the retinal image, and custom adaptive optics was developed to compensate for the wave aberration of the human eye's optics. The spatial resolution and signal to noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated from the reduction of intra-frame distortion of retinal images acquired in living human eyes at frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. We demonstrated the feasibility of acquiring high resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss.

  20. Toward understanding social cues and signals in human-robot interaction: effects of robot gaze and proxemic behavior

    OpenAIRE

    Fiore, Stephen M.; Wiltshire, Travis J.; Lobato, Emilio J. C.; Jentsch, Florian G.; Huang, Wesley H.; Axelrod, Benjamin

    2013-01-01

    As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human–robot interaction (HRI). We then discuss the need to examine the relatio...

  1. Imaging shear stress distribution and evaluating the stress concentration factor of the human eye

    Science.gov (United States)

    Joseph Antony, S.

    2015-03-01

    Healthy eyes are vital for a better quality of human life. Historically, for man-made materials, scientists and engineers use stress concentration factors to characterise the effects of structural non-homogeneities on their mechanical strength. However, such information is scarce for the human eye. Here we present the shear stress distribution profiles of a healthy human cornea surface in vivo using photo-stress analysis tomography, which is a non-intrusive and non-X-ray based method. The corneal birefringent retardation measured here is comparable to that of previous studies. Using this, we derive eye stress concentration factors and the directional alignment of major principal stress on the surface of the cornea. Similar to thermometers being used for monitoring the general health in humans, this report provides a foundation to characterise the shear stress carrying capacity of the cornea, and a potential bench mark for validating theoretical modelling of stresses in the human eye in future.

  2. Vergence-mediated changes in the axis of eye rotation during the human vestibulo-ocular reflex can occur independent of eye position.

    Science.gov (United States)

    Migliaccio, Americo A; Cremer, Phillip D; Aw, Swee T; Halmagyi, G Michael; Curthoys, Ian S; Minor, Lloyd B; Todd, Michael J

    2003-07-01

    The aim of this study was to determine whether vergence-mediated changes in the axis of eye rotation in the human vestibulo-ocular reflex (VOR) would obey Listing's Law (normally associated with saccadic eye movements) independent of the initial eye position. We devised a paradigm for disassociating the saccadic velocity axis from eye position by presenting near and far targets that were centered with respect to one eye. We measured binocular 3-dimensional eye movements using search coils in ten normal subjects and 3-dimensional linear head acceleration using Optotrak in seven normal subjects. The stimuli consisted of passive, unpredictable, pitch head rotations with peak acceleration of approximately 2000 degrees/s^2 and amplitude of approximately 20 degrees. During the pitch head rotation, each subject fixated straight ahead with one eye, whereas the other eye was adducted 4 degrees during far viewing (94 cm) and 25 degrees during near viewing (15 cm). Our data showed expected compensatory pitch rotations in both eyes, and a vergence-mediated horizontal rotation only in the adducting eye. In addition, during near viewing we observed torsional eye rotations not only in the adducting eye but also in the eye looking straight ahead. In the straight-ahead eye, the change in torsional eye velocity between near and far viewing, which began approximately 40 ms after the start of head rotation, was 10 +/- 6 degrees/s (mean +/- SD). This change in torsional eye velocity resulted in a 2.4 +/- 1.5 degrees axis tilt toward Listing's plane in that eye. In the adducting eye, the change in torsional eye velocity between near and far viewing was 16 +/- 6 degrees/s (mean +/- SD) and resulted in a 4.1 +/- 1.4 degrees axis tilt. The torsional eye velocities were conjugate and both eyes partially obeyed Listing's Law. The axis of eye rotation tilted in the direction of the line of sight by approximately one-third of the angle between the line of sight and a line orthogonal to Listing
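
    The axis tilts reported here follow from simple geometry: treating the torsional and compensatory pitch components of eye velocity as orthogonal, the tilt of the rotation axis satisfies tan(tilt) = torsional velocity / pitch velocity. The Python fragment below inverts that relation using only the figures quoted in the abstract, to show the compensatory pitch eye velocity they imply; the orthogonal-component simplification is an assumption made for illustration.

      import math

      # Mean velocity changes and axis tilts quoted in the abstract.
      cases = {
          "straight-ahead eye": {"torsion_deg_s": 10.0, "axis_tilt_deg": 2.4},
          "adducting eye":      {"torsion_deg_s": 16.0, "axis_tilt_deg": 4.1},
      }

      for eye, v in cases.items():
          # tan(tilt) = torsional component / pitch component, so the pitch eye
          # velocity implied by the reported tilt is torsion / tan(tilt).
          implied_pitch = v["torsion_deg_s"] / math.tan(math.radians(v["axis_tilt_deg"]))
          print(f"{eye}: implied compensatory pitch velocity approx. {implied_pitch:.0f} deg/s")

    Both cases imply a compensatory pitch velocity in the low hundreds of degrees per second, which is plausible for 20-degree head impulses peaking at 2000 degrees/s^2.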

  3. Relationship between eye dominance and pattern electroretinograms in normal human subjects.

    Science.gov (United States)

    Kamis, Umit; Gunduz, Kemal; Okudan, Nilsel; Gokbel, Hakki; Bodur, Sait; Tan, Uner

    2005-02-01

    The authors conducted a study in 100 non-smoker healthy normal human subjects to find a relationship between eye dominance and macular function as tested by using transient stimulus and electroretinography. Eye preference procedure was carried out using two reference points and pattern electroretinograms (PERGs) were recorded using black and white checks, each check subtending 23'. Trace averager was retriggered every 300 milliseconds (ms) with data collection time of 150 ms. The difference in PERG P50 amplitudes between right and left eyes was analyzed using Student's t test. There was no significant difference in PERG P50 amplitudes between the right and left eye dominant subjects as well as no significant differences between the right and left eyes in right eye dominants and left eye dominants, but in the left-eye dominant group the left eye PERG P50 amplitudes were significantly higher in females than males. Although pattern-reversal visual evoked potentials of healthy subjects provide electrophysiological evidence of lateralization in the nervous system, sensory eye dominance seems to have no correlation with macular function.

  4. A comprehensive gaze stabilization controller based on cerebellar internal models

    DEFF Research Database (Denmark)

    Vannucci, Lorenzo; Falotico, Egidio; Tolu, Silvia

    2017-01-01

    . The VOR works in conjunction with the opto-kinetic reflex (OKR), which is a visual feedback mechanism that allows the eye to move at the same speed as the observed scene. Together they keep the image stationary on the retina. In this work we implement on a humanoid robot a model of gaze stabilization...... based on the coordination of VCR and VOR and OKR. The model, inspired by neuroscientific cerebellar theories, is provided with learning and adaptation capabilities based on internal models. We present the results for the gaze stabilization model on three sets of experiments conducted on the SABIAN robot...

  5. Strange-face Illusions During Interpersonal-Gazing and Personality Differences of Spirituality.

    Science.gov (United States)

    Caputo, Giovanni B

    Strange-face illusions are produced when two individuals gaze at each other in the eyes in low illumination for more than a few minutes. Usually, the members of the dyad perceive numinous apparitions, like the other's face deformations and perception of a stranger or a monster in place of the other, and feel a short lasting dissociation. In the present experiment, the influence of the spirituality personality trait on strength and number of strange-face illusions was investigated. Thirty participants were preliminarily tested for superstition (Paranormal Belief Scale, PBS) and spirituality (Spiritual Transcendence Scale, STS); then, they were randomly assigned to 15 dyads. Dyads performed the intersubjective gazing task for 10 minutes and, finally, strange-face illusions (measured through the Strange-Face Questionnaire, SFQ) were evaluated. The first finding was that SFQ was independent of PBS; hence, strange-face illusions during intersubjective gazing are authentically perceptual, hallucination-like phenomena, and not due to superstition. The second finding was that SFQ depended on the spiritual-universality scale of STS (a belief in the unitive nature of life; e.g., "there is a higher plane of consciousness or spirituality that binds all people") and the two variables were negatively correlated. Thus, strange-face illusions, in particular monstrous apparitions, could potentially disrupt binding among human beings. Strange-face illusions can be considered as 'projections' of the subject's unconscious into the other's face. In conclusion, intersubjective gazing at low illumination can be a tool for conscious integration of unconscious 'shadows of the Self' in order to reach completeness of the Self. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Demo of Gaze Controlled Flying

    DEFF Research Database (Denmark)

    Alapetite, Alexandre; Hansen, John Paulin; Scott MacKenzie, I.

    2012-01-01

    Development of a control paradigm for unmanned aerial vehicles (UAV) is a new challenge to HCI. The demo explores how to use gaze as input for locomotion in 3D. A low-cost drone will be controlled by tracking user’s point of regard (gaze) on a live video stream from the UAV.

  7. Direct gaze elicits atypical activation of the theory-of-mind network in autism spectrum conditions.

    Science.gov (United States)

    von dem Hagen, Elisabeth A H; Stoyanova, Raliza S; Rowe, James B; Baron-Cohen, Simon; Calder, Andrew J

    2014-06-01

    Eye contact plays a key role in social interaction and is frequently reported to be atypical in individuals with autism spectrum conditions (ASCs). Despite the importance of direct gaze, previous functional magnetic resonance imaging in ASC has generally focused on paradigms using averted gaze. The current study sought to determine the neural processing of faces displaying direct and averted gaze in 18 males with ASC and 23 matched controls. Controls showed an increased response to direct gaze in brain areas implicated in theory-of-mind and gaze perception, including medial prefrontal cortex, temporoparietal junction, posterior superior temporal sulcus region, and amygdala. In contrast, the same regions showed an increased response to averted gaze in individuals with an ASC. This difference was confirmed by a significant gaze direction × group interaction. Relative to controls, participants with ASC also showed reduced functional connectivity between these regions. We suggest that, in the typical brain, perceiving another person gazing directly at you triggers spontaneous attributions of mental states (e.g. he is "interested" in me), and that such mental state attributions to direct gaze may be reduced or absent in the autistic brain.

  8. Real-time non-invasive eyetracking and gaze-point determination for human-computer interaction and biomedicine

    Science.gov (United States)

    Talukder, Ashit; Morookian, John-Michael; Monacos, S.; Lam, R.; Lebaw, C.; Bond, A.

    2004-01-01

    Eyetracking is one of the latest technologies that has shown potential in several areas including human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological problems in individuals.

  9. Proton absorbed dose distribution in human eye simulated by SRNA-2KG code

    International Nuclear Information System (INIS)

    Ilic, R. D.; Pavlovic, R.

    2004-01-01

    The model of the Monte Carlo SRNA code is described together with some numerical experiments that show the feasibility of using this code in proton therapy, especially for three-dimensional proton absorbed dose calculation in the human eye. (author)

  10. Retinal images in the human eye with implanted intraocular lens

    Science.gov (United States)

    Zając, Marek; Siedlecki, Damian; Nowak, Jerzy

    2007-04-01

    The typical treatment of cataract is based on removal of the opaque crystalline lens and insertion of an artificial intraocular lens (IOL) in its place. The quality of the retinal image after such a procedure depends, among other factors, on the parameters of the IOL, so the design of the implanted lens is of great importance. An appropriate choice of the IOL material, especially in relation to its biocompatibility, is often considered. However, one parameter that is often omitted during IOL design is its chromatic aberration. In particular, a mismatch with the chromatic aberration of the natural crystalline lens may cause problems. In order to better fit the chromatic aberration of the eye with an implanted IOL to that of the healthy eye, we propose a hybrid refractive-diffractive IOL. It can be designed in such a way that the total longitudinal chromatic aberration of an eye with the implanted IOL equals the total longitudinal chromatic aberration of a healthy eye. In this study we compare the retinal image quality, calculated numerically on the basis of the well known Liou-Brennan eye model with a typical IOL implanted, with that obtained if the IOL is a hybrid (refractive-diffractive) design.
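
    The hybrid approach works because a diffractive surface behaves like a lens with a large negative effective Abbe number, so a small diffractive power of opposite chromatic sign can offset the dispersion of the refractive element. The Python sketch below shows the thin-lens power split this implies; the 20 D total power, the Abbe number of 50 for the refractive part, and the target of zero residual chromatic power are assumptions chosen for illustration, not values taken from the paper.

      # Thin-lens (contact doublet) model of a hybrid refractive-diffractive IOL.
      # Total power:      P  = P_ref + P_diff
      # Chromatic change: dP = P_ref / V_ref + P_diff / V_diff
      # Solving for a chosen total power and target dP gives the power split.

      P_TOTAL = 20.0      # assumed total IOL power (diopters)
      V_REF = 50.0        # assumed Abbe number of the refractive material
      V_DIFF = -3.45      # effective Abbe number of a diffractive surface,
                          #   lambda_d / (lambda_F - lambda_C) for a visible-light design
      TARGET_DP = 0.0     # assumed target residual chromatic power (diopters)

      # From dP = P_ref / V_REF + (P_TOTAL - P_ref) / V_DIFF = TARGET_DP:
      p_ref = (TARGET_DP - P_TOTAL / V_DIFF) / (1.0 / V_REF - 1.0 / V_DIFF)
      p_diff = P_TOTAL - p_ref

      print(f"refractive component : {p_ref:.2f} D")
      print(f"diffractive component: {p_diff:.2f} D")
      print(f"residual chromatic power: {p_ref / V_REF + p_diff / V_DIFF:.4f} D")

    To follow the design goal stated above, TARGET_DP would instead be set to the longitudinal chromatic contribution of the healthy crystalline lens rather than to zero.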

  11. Pharmacokinetics of bevacizumab after topical and intravitreal administration in human eyes

    OpenAIRE

    Moisseiev, Elad; Waisbourd, Michael; Ben-Artsi, Elad; Levinger, Eliya; Barak, Adiel; Daniels, Tad; Csaky, Karl; Loewenstein, Anat; Barequet, Irina S.

    2013-01-01

    Background Topical bevacizumab is a potential treatment modality for corneal neovascularization, and several recent studies have demonstrated its efficacy. No previous study of the pharmacokinetics of topical bevacizumab has been performed in human eyes. The purpose of this study is to investigate the pharmacokinetics of topical administration of bevacizumab in human eyes, and also to compare the pharmacokinetics of intravitreal bevacizumab injections with previously reported data. Methods Tw...

  12. The Optical Design of the Human Eye: a Critical Review

    Directory of Open Access Journals (Sweden)

    Rafael Navarro

    2009-01-01

    Full Text Available Cornea, lens and eye models are analyzed and compared to experimental findings to assess properties and eventually unveil optical design principles involved in the structure and function of the optical system of the eye. Models and data often show a good match but also some paradoxes. The optical design seems to correspond to a wide-angle lens. Compared to conventional optical systems, the eye presents a poor optical quality on axis, but a relatively good quality off-axis, thus yielding higher homogeneity for a wide visual field. This seems to be the result of an intriguing combination of the symmetry design principle with a total lack of rotational symmetry, decentrations and misalignments of the optical surfaces.

  13. A computational model of blast loading on the human eye.

    Science.gov (United States)

    Bhardwaj, Rajneesh; Ziegler, Kimberly; Seo, Jung Hee; Ramesh, K T; Nguyen, Thao D

    2014-01-01

    Ocular injuries from blast have increased in recent wars, but the injury mechanism associated with the primary blast wave is unknown. We employ a three-dimensional fluid-structure interaction computational model to understand the stresses and deformations incurred by the globe due to blast overpressure. Our numerical results demonstrate that the blast wave reflections off the facial features around the eye increase the pressure loading on and around the eye. The blast wave produces asymmetric loading on the eye, which causes globe distortion. The deformation response of the globe under blast loading was evaluated, and regions of high stresses and strains inside the globe were identified. Our numerical results show that the blast loading results in globe distortion and large deviatoric stresses in the sclera. These large deviatoric stresses may be an indicator of the risk of interfacial failure between the tissues of the sclera and the orbit.

  14. A new human eye model for ophthalmic brachytherapy dosimetry

    International Nuclear Information System (INIS)

    Yoriyaz, H.; Sanchez, A.; Dos Santos, A.

    2005-01-01

    The present work proposes a new mathematical eye model for ophthalmic brachytherapy dosimetry. This new model includes a detailed description of internal structures that were not treated in previous works, allowing dose determination in different regions of the eye for a more adequate clinical analysis. Dose calculations were performed with the MCNP-4C Monte Carlo particle transport code running in a parallel environment using PVM. The Amersham CKA4 ophthalmic applicator has been chosen and the depth dose distribution has been determined and compared to that provided by the manufacturer. The results have shown excellent agreement. In addition, absorbed dose values due to both 125I seeds and 60Co plaques were obtained for each of the different structures which compose the eye model and can give relevant information in future clinical analyses. (authors)

  15. Comprehension and utilisation of pointing gestures and gazing in dog-human communication in relatively complex situations.

    Science.gov (United States)

    Lakatos, Gabriella; Gácsi, Márta; Topál, József; Miklósi, Adám

    2012-03-01

    The aim of the present investigation was to study the visual communication between humans and dogs in relatively complex situations. In the present research, we have modelled more lifelike situations in contrast to previous studies which often relied on using only two potential hiding locations and direct association between the communicative signal and the signalled object. In Study 1, we have provided the dogs with four potential hiding locations, two on each side of the experimenter to see whether dogs are able to choose the correct location based on the pointing gesture. In Study 2, dogs had to rely on a sequence of pointing gestures displayed by two different experimenters. We have investigated whether dogs are able to recognise an 'indirect signal', that is, a pointing toward a pointer. In Study 3, we have examined whether dogs can understand indirect information about a hidden object and direct the owner to the particular location. Study 1 has revealed that dogs are unlikely to rely on extrapolating precise linear vectors along the pointing arm when relying on human pointing gestures. Instead, they rely on a simple rule of following the side of the human gesturing. If there were more targets on the same side of the human, they showed a preference for the targets closer to the human. Study 2 has shown that dogs are able to rely on indirect pointing gestures but the individual performances suggest that this skill may be restricted to a certain level of complexity. In Study 3, we have found that dogs are able to localise the hidden object by utilising indirect human signals, and they are able to convey this information to their owner.

  16. Dating the time of birth: A radiocarbon calibration curve for human eye-lens crystallines

    Energy Technology Data Exchange (ETDEWEB)

    Kjeldsen, Henrik, E-mail: kjeldsen@phys.au.d [AMS 14C Dating Centre, Department of Physics and Astronomy, University of Aarhus, Aarhus (Denmark); Heinemeier, Jan [AMS 14C Dating Centre, Department of Physics and Astronomy, University of Aarhus, Aarhus (Denmark); Heegaard, Steffen [Eye Pathology Section, Department of Neuroscience and Pharmacology, University of Copenhagen, Copenhagen (Denmark); Jacobsen, Christina; Lynnerup, Niels [Department of Forensic Medicine, University of Copenhagen, Copenhagen (Denmark)

    2010-04-15

    Radiocarbon bomb-pulse dating has been used to measure the formation age of human eye-lens crystallines. Lens crystallines are special proteins in the eye-lens that consist of virtually inert tissue. The experimental data show that the radiocarbon ages to a large extent reflect the time of birth, in accordance with expectations. Moreover, it has been possible to develop an age model for the formation of the eye-lens crystallines. From this model a radiocarbon calibration curve for lens crystallines has been calculated. As a consequence, the time of birth of humans can be determined with an accuracy of a few years by radiocarbon dating.

  17. Dating the time of birth: A radiocarbon calibration curve for human eye-lens crystallines

    International Nuclear Information System (INIS)

    Kjeldsen, Henrik; Heinemeier, Jan; Heegaard, Steffen; Jacobsen, Christina; Lynnerup, Niels

    2010-01-01

    Radiocarbon bomb-pulse dating has been used to measure the formation age of human eye-lens crystallines. Lens crystallines are special proteins in the eye-lens that consist of virtually inert tissue. The experimental data show that the radiocarbon ages to a large extent reflect the time of birth, in accordance with expectations. Moreover, it has been possible to develop an age model for the formation of the eye-lens crystallines. From this model a radiocarbon calibration curve for lens crystallines has been calculated. As a consequence, the time of birth of humans can be determined with an accuracy of a few years by radiocarbon dating.

  18. Measuring Human Performance in Simulated Nuclear Power Plant Control Rooms Using Eye Tracking

    Energy Technology Data Exchange (ETDEWEB)

    Kovesdi, Casey Robert [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rice, Brandon Charles [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bower, Gordon Ross [Idaho National Lab. (INL), Idaho Falls, ID (United States); Spielman, Zachary Alexander [Idaho National Lab. (INL), Idaho Falls, ID (United States); Hill, Rachael Ann [Idaho National Lab. (INL), Idaho Falls, ID (United States); LeBlanc, Katya Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

    Control room modernization will be an important part of life extension for the existing light water reactor fleet. As part of modernization efforts, personnel will need to gain a full understanding of how control room technologies affect performance of human operators. Recent advances in technology enable the use of eye tracking technology to continuously measure an operator’s eye movement, which correlates with a variety of human performance constructs such as situation awareness and workload. This report describes eye tracking metrics in the context of how they will be used in nuclear power plant control room simulator studies.

  19. Measuring Human Performance in Simulated Nuclear Power Plant Control Rooms Using Eye Tracking

    International Nuclear Information System (INIS)

    Kovesdi, Casey Robert; Rice, Brandon Charles; Bower, Gordon Ross; Spielman, Zachary Alexander; Hill, Rachael Ann; LeBlanc, Katya Lee

    2015-01-01

    Control room modernization will be an important part of life extension for the existing light water reactor fleet. As part of modernization efforts, personnel will need to gain a full understanding of how control room technologies affect performance of human operators. Recent advances in technology enable the use of eye tracking technology to continuously measure an operator's eye movement, which correlates with a variety of human performance constructs such as situation awareness and workload. This report describes eye tracking metrics in the context of how they will be used in nuclear power plant control room simulator studies.

  20. Dynamical models of the human eye and strabismus

    International Nuclear Information System (INIS)

    Pascolo, P.; Carniel, R.; Grimaz, S.

    2009-01-01

    In this work, the applicability of a recently published dynamical model of the eye to the case of strabismus is investigated. Although the basic scheme of the original model remains valid, the simulation of the pathological dynamics requires a more suitable coverage of the space of the physiological rotations of the eye. This requisite is reached by developing the original model and by taking into account the contributions of connective tissues that were originally neglected. Possible wider fields of application of the model are then discussed.

  1. Human Commercial Models' Eye Colour Shows Negative Frequency-Dependent Selection.

    Directory of Open Access Journals (Sweden)

    Isabela Rodrigues Nogueira Forti

    Full Text Available In this study we investigated the eye colour of human commercial models registered in the UK (400 female and 400 male) and Brazil (400 female and 400 male) to test the hypothesis that model eye colour frequency was the result of negative frequency-dependent selection. The eye colours of the models were classified as: blue, brown or intermediate. Chi-square analyses of data for countries separated by sex showed that in the United Kingdom brown eyes and intermediate colours were significantly more frequent than expected in comparison to the general United Kingdom population (P<0.001). In Brazil, the most frequent eye colour, brown, was significantly less frequent than expected in comparison to the general Brazilian population. These results support the hypothesis that model eye colour is the result of negative frequency-dependent selection. This could be the result of people using eye colour as a marker of genetic diversity and finding rarer eye colours more attractive because of the potential advantage of the more genetically diverse offspring that could result from such a choice. Eye colour may be important because in comparison to many other physical traits (e.g., hair colour) it is hard to modify, hide or disguise, and it is highly polymorphic.
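
    The statistical core of the study is a chi-square comparison of eye-colour counts among models against the frequencies expected from the general population. The Python sketch below performs that comparison on invented placeholder numbers, which are loud assumptions and not the study's data; only the form of the test mirrors the analysis described.

      # Hypothetical counts for 400 models and assumed population proportions;
      # these numbers are placeholders, NOT data from the study.
      observed = {"blue": 150, "brown": 170, "intermediate": 80}
      population_props = {"blue": 0.30, "brown": 0.55, "intermediate": 0.15}

      n = sum(observed.values())
      chi_sq = 0.0
      for colour, count in observed.items():
          expected = population_props[colour] * n
          chi_sq += (count - expected) ** 2 / expected
          print(f"{colour:12s} observed {count:3d}   expected {expected:6.1f}")

      # Critical value of chi-square with 2 degrees of freedom at alpha = 0.05 is 5.99.
      verdict = "differs from" if chi_sq > 5.99 else "is consistent with"
      print(f"chi-square = {chi_sq:.2f}; the model distribution {verdict} the assumed population")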

  2. Intact unconscious processing of eye contact in schizophrenia

    NARCIS (Netherlands)

    Seymour, K.; Rhodes, G.; Stein, T.; Langdon, R.

    The perception of eye gaze is crucial for social interaction, providing essential information about another person's goals, intentions, and focus of attention. People with schizophrenia suffer a wide range of social cognitive deficits, including abnormalities in eye gaze perception. For instance,

  3. Eye Contact Facilitates Awareness of Faces during Interocular Suppression

    Science.gov (United States)

    Stein, Timo; Senju, Atsushi; Peelen, Marius V.; Sterzer, Philipp

    2011-01-01

    Eye contact captures attention and receives prioritized visual processing. Here we asked whether eye contact might be processed outside conscious awareness. Faces with direct and averted gaze were rendered invisible using interocular suppression. In two experiments we found that faces with direct gaze overcame such suppression more rapidly than…

  4. Eye Detection and Tracking for Intelligent Human Computer Interaction

    National Research Council Canada - National Science Library

    Yin, Lijun

    2006-01-01

    .... In this project, Dr. Lijun Yin has developed a new algorithm for detecting and tracking eyes in an unconstrained environment using a single ordinary camera or webcam. The new algorithm is advantageous in that it works in a non-intrusive way based on a so-called Topographic Context approach.

  5. Objective measurement of postocclusion surge during phacoemulsification in human eye-bank eyes.

    Science.gov (United States)

    Georgescu, Dan; Payne, Marielle; Olson, Randall J

    2007-03-01

    To objectively compare the postocclusion vacuum surge among different phacoemulsification machines and devices. Experimental study. Infiniti, Legacy, Millennium, and Sovereign were tested in an eye-bank eye. All the machines were tested with 20-gauge non-ABS tips, 430 mm Hg vacuum pressure, 24 ml/minute aspiration rate, peristaltic pump, and 75 cm bottle height. In addition, Infiniti and Legacy were also tested with 20-gauge bypass tips (ABS), 125 cm bottle height, and 40 ml/minute flow rate. We also tested 19-gauge tips with Infiniti and Sovereign and the venturi pump for Millennium. Significant differences were found between all the machines tested with Millennium peristaltic generating the least and Millennium Venturi the most surge. ABS tips significantly decreased the surge for Legacy but not for Infiniti. Cruise Control (CC) had a significant effect on Sovereign but not on Millennium. Increasing the bottle height decreased surge while increasing the flow increased surge for both Infiniti and Legacy. The 19-gauge tips increased surge for both Infiniti and Sovereign. Surge varied over a range of 40 microm to more than 2 mm. ABS and CC decrease surge, especially when the machine is not functioning near the limits of surge prevention. Certain parameters, such as a 19-gauge tip and high flow, dramatically increased surge, whereas elevating the bottle ameliorates it. Understanding the impact of all these features will help in minimizing the problem.

  6. Image system analysis of human eye wave-front aberration on the basis of HSS

    Science.gov (United States)

    Xu, Ancheng

    2017-07-01

    The Hartmann-Shack sensor (HSS) has been used for objective measurement of human eye wave-front aberration, but research on the effect of sampling point size on the accuracy of the result has not been reported. In this paper, the point spread function (PSF) of a mathematical model of the whole system was obtained by modelling the structure of the optical imaging system used for human eye wave-front aberration measurement. The impact of Airy spot size on the accuracy of the system was analyzed. Statistical analysis shows that the Airy spot formed on the HSS surface by an ideal light source on the retina is far smaller than the HSS sample-point image used in the experiment. Therefore, the effect of the Airy spot on the precision of the system can be ignored. This study theoretically and experimentally justifies the reliability and accuracy of human eye wave-front aberration measurement based on the HSS.
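
    The argument rests on the diffraction-limited (Airy) spot produced by a lenslet being much smaller than the sampled spot image on the sensor. The Python sketch below makes that comparison with the standard Airy-radius formula r = 1.22 * lambda * f / D; the wavelength, lenslet pitch, focal length, and spot-window size are generic Hartmann-Shack values assumed for illustration and are not taken from the paper.

      WAVELENGTH_M = 0.83e-6       # assumed near-infrared probe wavelength (830 nm)
      LENSLET_DIAM_M = 150e-6      # assumed lenslet aperture (pitch)
      LENSLET_FOCAL_M = 4.0e-3     # assumed lenslet focal length
      SPOT_WINDOW_M = 150e-6       # assumed size of the sampled spot-image window

      # Radius of the first Airy minimum in the lenslet focal plane.
      airy_radius_m = 1.22 * WAVELENGTH_M * LENSLET_FOCAL_M / LENSLET_DIAM_M

      print(f"Airy spot diameter : {2e6 * airy_radius_m:.1f} um")
      print(f"Spot-image window  : {1e6 * SPOT_WINDOW_M:.1f} um")
      print(f"window / Airy diameter ratio: {SPOT_WINDOW_M / (2 * airy_radius_m):.1f}")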

  7. Investigating social gaze as an action-perception online performance

    Directory of Open Access Journals (Sweden)

    Ouriel eGrynszpan

    2012-04-01

    Full Text Available In interpersonal interactions, linguistic information is complemented by non-linguistic information originating largely from facial expressions. The study of online face-to-face social interaction thus entails investigating the multimodal simultaneous processing of oral and visual percepts. Moreover, gaze in and of itself functions as a powerful communicative channel. In this respect, gaze should not be examined only as a purely perceptive process but also as an active social performance. We designed a task involving multimodal deciphering of social information based on virtual characters, embedded in naturalistic backgrounds, who directly address the participant with non-literal speech and meaningful facial expressions. Eighteen adult participants were to interpret an equivocal sentence which could be disambiguated by examining the emotional expressions of the character speaking to them face-to-face. To examine self-control and self-awareness of gaze in this context, visual feedback was provided to the participant by a real-time gaze-contingent viewing window centered on the focal point, while the rest of the display was blurred. Eye-tracking data showed that the viewing window induced changes in gaze behaviour, notably longer visual fixations. Nevertheless, only half the participants ascribed the window displacements to their eye movements. These results highlight the dissociation between non-volitional gaze adaptation and self-ascription of agency. Such dissociation provides support for a two-step account of the sense of agency composed of pre-noetic monitoring mechanisms and reflexive processes. We comment upon these results, which illustrate the relevance of our method for studying online social cognition, especially concerning Autism Spectrum Disorders (ASD), where poor pragmatic understanding of oral speech is considered linked to visual peculiarities that impede face exploration.

  8. Face age modulates gaze following in young adults.

    Science.gov (United States)

    Ciardo, Francesca; Marino, Barbara F M; Actis-Grosso, Rossana; Rossetti, Angela; Ricciardelli, Paola

    2014-04-22

    Gaze-following behaviour is considered crucial for social interactions which are influenced by social similarity. We investigated whether the degree of similarity, as indicated by the perceived age of another person, can modulate gaze following. Participants of three different age-groups (18-25; 35-45; over 65) performed an eye movement (a saccade) towards an instructed target while ignoring the gaze-shift of distracters of different age-ranges (6-10; 18-25; 35-45; over 70). The results show that gaze following was modulated by the distracter face age only for young adults. Particularly, the over 70 year-old distracters exerted the least interference effect. The distracters of a similar age-range as the young adults (18-25; 35-45) had the most effect, indicating a blurred own-age bias (OAB) only for the young age group. These findings suggest that face age can modulate gaze following, but this modulation could be due to factors other than just OAB (e.g., familiarity).

  9. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability.

    Science.gov (United States)

    Willemse, Cesco; Marchesi, Serena; Wykowska, Agnieszka

    2018-01-01

    Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive-affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot's characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition) and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human-human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot-preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing tasks, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub following participants' gaze to the one with a disjoint attention behavior, rated it as more human-like and as more likeable. Taken together, our findings show a preference for robots who follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints about the design of behavioral characteristics of humanoid robots in more naturalistic settings.

  10. Facial Expressions Modulate the Ontogenetic Trajectory of Gaze-Following among Monkeys

    Science.gov (United States)

    Teufel, Christoph; Gutmann, Anke; Pirow, Ralph; Fischer, Julia

    2010-01-01

    Gaze-following, the tendency to direct one's attention to locations looked at by others, is a crucial aspect of social cognition in human and nonhuman primates. Whereas the development of gaze-following has been intensely studied in human infants, its early ontogeny in nonhuman primates has received little attention. Combining longitudinal and…

  11. Evaluation of a low-cost open-source gaze tracker

    DEFF Research Database (Denmark)

    San Agustin, Javier; Jensen, Henrik Tomra Skovsgaard Hegner; Møllenbach, Emilie

    2010-01-01

    This paper presents a low-cost gaze tracking system that is based on a webcam mounted close to the user's eye. The performance of the gaze tracker was evaluated in an eye-typing task using two different typing applications. Participants could type between 3.56 and 6.78 words per minute, depending...... on the typing system used. A pilot study to assess the usability of the system was also carried out in the home of a user with severe motor impairments. The user successfully typed on a wall-projected interface using his eye movements....

  12. Early Left Parietal Activity Elicited by Direct Gaze: A High-Density EEG Study

    Science.gov (United States)

    Burra, Nicolas; Kerzel, Dirk; George, Nathalie

    2016-01-01

    Gaze is one of the most important cues for human communication and social interaction. In particular, gaze contact is the most primary form of social contact and it is thought to capture attention. A very early-differentiated brain response to direct versus averted gaze has been hypothesized. Here, we used high-density electroencephalography to test this hypothesis. Topographical analysis allowed us to uncover a very early topographic modulation (40–80 ms) of event-related responses to faces with direct as compared to averted gaze. This modulation was obtained only in the condition where intact broadband faces–as opposed to high-pass or low-pass filtered faces–were presented. Source estimation indicated that this early modulation involved the posterior parietal region, encompassing the left precuneus and inferior parietal lobule. This supports the idea that it reflected an early orienting response to direct versus averted gaze. Accordingly, in a follow-up behavioural experiment, we found faster response times to the direct gaze than to the averted gaze broadband faces. In addition, classical evoked potential analysis showed that the N170 peak amplitude was larger for averted gaze than for direct gaze. Taken together, these results suggest that direct gaze may be detected at a very early processing stage, involving a parallel route to the ventral occipito-temporal route of face perceptual analysis. PMID:27880776

  13. Multiconjugate adaptive optics applied to an anatomically accurate human eye model.

    Science.gov (United States)

    Bedggood, P A; Ashman, R; Smith, G; Metha, A B

    2006-09-04

    Aberrations of both astronomical telescopes and the human eye can be successfully corrected with conventional adaptive optics. This produces diffraction-limited imagery over a limited field of view called the isoplanatic patch. A new technique, known as multiconjugate adaptive optics, has been developed recently in astronomy to increase the size of this patch. The key is to model atmospheric turbulence as several flat, discrete layers. A human eye, however, has several curved, aspheric surfaces and a gradient index lens, complicating the task of correcting aberrations over a wide field of view. Here we utilize a computer model to determine the degree to which this technology may be applied to generate high resolution, wide-field retinal images, and discuss the considerations necessary for optimal use with the eye. The Liou and Brennan schematic eye simulates the aspheric surfaces and gradient index lens of real human eyes. We show that the size of the isoplanatic patch of the human eye is significantly increased through multiconjugate adaptive optics.

  14. Multiconjugate adaptive optics applied to an anatomically accurate human eye model

    Science.gov (United States)

    Bedggood, P. A.; Ashman, R.; Smith, G.; Metha, A. B.

    2006-09-01

    Aberrations of both astronomical telescopes and the human eye can be successfully corrected with conventional adaptive optics. This produces diffraction-limited imagery over a limited field of view called the isoplanatic patch. A new technique, known as multiconjugate adaptive optics, has been developed recently in astronomy to increase the size of this patch. The key is to model atmospheric turbulence as several flat, discrete layers. A human eye, however, has several curved, aspheric surfaces and a gradient index lens, complicating the task of correcting aberrations over a wide field of view. Here we utilize a computer model to determine the degree to which this technology may be applied to generate high resolution, wide-field retinal images, and discuss the considerations necessary for optimal use with the eye. The Liou and Brennan schematic eye simulates the aspheric surfaces and gradient index lens of real human eyes. We show that the size of the isoplanatic patch of the human eye is significantly increased through multiconjugate adaptive optics.

  15. The Human Eye Position Control System in a Rehabilitation Setting

    Directory of Open Access Journals (Sweden)

    Yvonne Nolan

    2005-01-01

    Full Text Available Our work at Ireland’s National Rehabilitation Hospital involves designing communication systems for people suffering from profound physical disabilities. One such system uses the electro-oculogram, which is an (x, y) system of voltages picked up by pairs of electrodes placed, respectively, above and below and on either side of the eyes. The eyeball has a dc polarisation between cornea and back, arising from the photoreceptor rods and cones in the retina. As the eye rotates, the varying voltages projected onto the electrodes drive a cursor over a mimic keyboard on a computer screen. Symbols are selected with a switching action derived, for example, from a blink. Experience in using this mode of communication has given us limited facilities to study the eye position control system. We present here a resulting new feedback model for rotation in either the vertical or the horizontal plane, which involves the eyeball controlled by an agonist-antagonist muscle pair, modelled by a single equivalent bidirectional muscle with torque falling off linearly with angular velocity. We have incorporated muscle spindles and have tuned them by pole assignment associated with an optimum stability criterion.
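
    To make the kind of model described above concrete, the sketch below simulates single-plane eyeball rotation driven by one equivalent bidirectional muscle whose torque falls off linearly with angular velocity, closed with a simple proportional feedback term. This is only an illustrative Python sketch, not the authors' model: the controller structure and every parameter value are assumptions chosen so the example runs.

    ```python
    import numpy as np

    # Illustrative parameters only; none of these values come from the paper.
    J = 2e-3   # eyeball moment of inertia
    B = 2e-2   # passive viscous damping
    K = 0.5    # proportional feedback gain (stand-in for the spindle-tuned controller)
    C = 2e-2   # linear fall-off of muscle torque with angular velocity

    def simulate(theta_target_rad, t_end=0.5, dt=1e-4):
        """Euler-integrate the eye angle toward a target angle (radians)."""
        theta, omega = 0.0, 0.0
        trace = []
        for _ in range(int(t_end / dt)):
            torque_cmd = K * (theta_target_rad - theta)   # position-error drive
            torque = torque_cmd - C * omega               # torque drops with velocity
            alpha = (torque - B * omega) / J              # angular acceleration
            omega += alpha * dt
            theta += omega * dt
            trace.append(theta)
        return np.array(trace)

    if __name__ == "__main__":
        trace = simulate(np.deg2rad(15.0))
        print(f"final eye angle: {np.rad2deg(trace[-1]):.2f} degrees")
    ```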

  16. A generalised porous medium approach to study thermo-fluid dynamics in human eyes.

    Science.gov (United States)

    Mauro, Alessandro; Massarotti, Nicola; Salahudeen, Mohamed; Romano, Mario R; Romano, Vito; Nithiarasu, Perumal

    2018-03-22

    The present work describes the application of the generalised porous medium model to study heat and fluid flow in healthy and glaucomatous eyes of different subject specimens, considering the presence of ocular cavities and porous tissues. The 2D computational model, implemented into the open-source software OpenFOAM, has been verified against benchmark data for mixed convection in domains partially filled with a porous medium. The verified model has been employed to simulate the thermo-fluid dynamic phenomena occurring in the anterior section of four patient-specific human eyes, considering the presence of anterior chamber (AC), trabecular meshwork (TM), Schlemm's canal (SC), and collector channels (CC). The computational domains of the eye are extracted from tomographic images. The dependence of TM porosity and permeability on intraocular pressure (IOP) has been analysed in detail, and the differences between healthy and glaucomatous eye conditions have been highlighted, proving that the different physiological conditions of patients have a significant influence on the thermo-fluid dynamic phenomena. The influence of different eye positions (supine and standing) on thermo-fluid dynamic variables has been also investigated: results are presented in terms of velocity, pressure, temperature, friction coefficient and local Nusselt number. The results clearly indicate that porosity and permeability of TM are two important parameters that affect eye pressure distribution. Graphical abstract Velocity contours and vectors for healthy eyes (top) and glaucomatous eyes (bottom) for standing position.
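
    As a back-of-the-envelope companion to the porous medium model above, the sketch below applies Darcy's law to a slab standing in for the trabecular meshwork, showing how permeability and pressure drop set the aqueous outflow rate. It is not the paper's OpenFOAM model; the geometry, permeability and pressure values are hypothetical placeholders.

    ```python
    # Minimal Darcy's-law sketch: Q = k * A * dP / (mu * L).
    # All numerical values below are hypothetical, for illustration only.

    def darcy_outflow(permeability_m2, area_m2, thickness_m, delta_p_pa, viscosity_pa_s=7.0e-4):
        """Volumetric flow rate (m^3/s) through a porous slab."""
        return permeability_m2 * area_m2 * delta_p_pa / (viscosity_pa_s * thickness_m)

    # Example: a 10 mmHg (~1333 Pa) pressure drop across a hypothetical TM slab.
    q = darcy_outflow(permeability_m2=1e-15, area_m2=1e-6, thickness_m=100e-6, delta_p_pa=1333.0)
    print(f"outflow: {q * 1e9 * 60:.2f} microlitres per minute")
    ```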

  17. Eyes Wide Open

    Directory of Open Access Journals (Sweden)

    Zoi Manesi

    2016-04-01

    Full Text Available Research from evolutionary psychology suggests that the mere presence of eye images can promote prosocial behavior. However, the “eye images effect” is a source of considerable debate, and findings across studies have yielded somewhat inconsistent support. We suggest that one critical factor may be whether the eyes really need to be watching to effectively enhance prosocial behavior. In three experiments, we investigated the impact of eye images on prosocial behavior, assessed in a laboratory setting. Participants were randomly assigned to view an image of watching eyes (eyes with direct gaze), an image of nonwatching eyes (i.e., eyes closed for Study 1 and averted eyes for Studies 2 and 3), or an image of flowers (control condition). Upon exposure to the stimuli, participants decided whether or not to help another participant by completing a dull cognitive task. Three independent studies produced somewhat mixed results. However, combined analysis of all three studies, with a total of 612 participants, showed that the watching component of the eyes is important for decision-making in this context. Images of watching eyes led to significantly greater inclination to offer help as compared to images of nonwatching eyes (i.e., eyes closed and averted eyes) or images of flowers. These findings suggest that eyes gazing at an individual, rather than any proxy to social presence (e.g., just the eyes), serve as a reminder of reputation. Taken together, we conclude that it is “eyes that pay attention” that can lift the veil of anonymity and potentially facilitate prosocial behavior.

  18. A closer look at the size of the gaze-liking effect: a preregistered replication.

    Science.gov (United States)

    Tipples, Jason; Pecchinenda, Anna

    2018-04-30

    This study is a direct replication of the gaze-liking effect using the same design, stimuli and procedure. The gaze-liking effect describes the tendency for people to rate objects as more likeable when they have recently seen a person repeatedly gaze toward rather than away from the object. However, as subsequent studies show considerable variability in the size of this effect, we sampled a larger number of participants (N = 98) than the original study (N = 24) to gain a more precise estimate of the gaze-liking effect size. Our results indicate a much smaller standardised effect size (d_z = 0.02) than that of the original study (d_z = 0.94). Our smaller effect size was not due to general insensitivity to eye-gaze effects because the same sample showed a clear (d_z = 1.09) gaze-cuing effect - faster reaction times when eyes looked toward vs away from target objects. We discuss the implications of our findings for future studies wishing to study the gaze-liking effect.
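
    For readers unfamiliar with the d_z statistic quoted above, it is the standard paired-samples effect size: the mean of the per-participant condition differences divided by the standard deviation of those differences. A minimal sketch with made-up ratings (not the study's data) follows.

    ```python
    import numpy as np

    def cohens_dz(scores_a, scores_b):
        """Paired-samples effect size: mean difference / SD of differences."""
        diffs = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
        return diffs.mean() / diffs.std(ddof=1)

    # Toy data: liking ratings for gazed-at vs. gazed-away objects (hypothetical).
    rng = np.random.default_rng(0)
    gazed_at = rng.normal(5.1, 1.0, size=98)
    gazed_away = rng.normal(5.0, 1.0, size=98)
    print(f"d_z = {cohens_dz(gazed_at, gazed_away):.2f}")
    ```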

  19. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability

    Science.gov (United States)

    Willemse, Cesco; Marchesi, Serena; Wykowska, Agnieszka

    2018-01-01

    Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive–affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot’s characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition) and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human–human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot-preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub following participants’ gaze to the one with a disjoint attention behavior, rated it as more human-like and as more likeable. Taken together, our findings show a preference for robots who follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints about the design of behavioral characteristics of humanoid robots in more naturalistic settings. PMID:29459842

  20. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability

    Directory of Open Access Journals (Sweden)

    Cesco Willemse

    2018-02-01

    Full Text Available Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive–affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot’s characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition) and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human–human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot-preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub following participants’ gaze to the one with a disjoint attention behavior, rated it as more human-like and as more likeable. Taken together, our findings show a preference for robots who follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints about the design of behavioral characteristics of humanoid robots in more naturalistic settings.

  1. iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker

    Science.gov (United States)

    Mayberry, Addison; Hu, Pan; Marlin, Benjamin; Salthouse, Christopher; Ganesan, Deepak

    2015-01-01

    Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees. PMID:26539565
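
    The core idea above, predicting gaze from a small, automatically selected subset of eye-image pixels, can be illustrated with a much simpler stand-in than the paper's sparsity-regularized neural network: a multi-task lasso that forces both gaze coordinates to share the same sparse set of pixels. The data below is synthetic and the model choice is an assumption made for illustration only.

    ```python
    import numpy as np
    from sklearn.linear_model import MultiTaskLasso

    rng = np.random.default_rng(42)
    n_frames, n_pixels = 1500, 32 * 32          # flattened low-resolution eye images
    images = rng.random((n_frames, n_pixels))

    # Hypothetical ground truth: gaze depends linearly on only a handful of pixels.
    true_pixels = rng.choice(n_pixels, size=20, replace=False)
    weights = rng.normal(size=(20, 2))
    gaze = images[:, true_pixels] @ weights + rng.normal(scale=0.05, size=(n_frames, 2))

    # Group sparsity across the (x, y) targets picks one shared subset of pixels.
    model = MultiTaskLasso(alpha=0.01, max_iter=5000).fit(images, gaze)
    kept = np.flatnonzero(np.any(model.coef_ != 0, axis=0))
    print(f"pixels kept: {kept.size} of {n_pixels}")   # only these need to be read out
    ```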

  2. iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker.

    Science.gov (United States)

    Mayberry, Addison; Hu, Pan; Marlin, Benjamin; Salthouse, Christopher; Ganesan, Deepak

    2014-06-01

    Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees.

  3. Effects of intraocular lenses with different diopters on chromatic aberrations in human eye models

    OpenAIRE

    Song, Hui; Yuan, Xiaoyong; Tang, Xin

    2016-01-01

    Background In this study, the effects of intraocular lenses (IOLs) with different diopters (D) on chromatic aberration were investigated in human eye models, and the influences of the central thickness of IOLs on chromatic aberration were compared. Methods A Liou-Brennan-based IOL eye model was constructed using ZEMAX optical design software. Spherical IOLs with different diopters (AR40e, AMO Company, USA) were implanted; modulation transfer function (MTF) values at 3 mm of pupil diameter and...

  4. Gaze3DFix: Detecting 3D fixations with an ellipsoidal bounding volume.

    Science.gov (United States)

    Weber, Sascha; Schubert, Rebekka S; Vogt, Stefan; Velichkovsky, Boris M; Pannasch, Sebastian

    2017-10-26

    Nowadays, the use of eyetracking to determine 2-D gaze positions is common practice, and several approaches to the detection of 2-D fixations exist, but ready-to-use algorithms to determine eye movements in three dimensions are still missing. Here we present a dispersion-based algorithm with an ellipsoidal bounding volume that estimates 3D fixations. Therefore, 3D gaze points are obtained using a vector-based approach and are further processed with our algorithm. To evaluate the accuracy of our method, we performed experimental studies with real and virtual stimuli. We obtained good congruence between stimulus position and both the 3D gaze points and the 3D fixation locations within the tested range of 200-600 mm. The mean deviation of the 3D fixations from the stimulus positions was 17 mm for the real as well as for the virtual stimuli, with larger variances at increasing stimulus distances. The described algorithms are implemented in two dynamic linked libraries (Gaze3D.dll and Fixation3D.dll), and we provide a graphical user interface (Gaze3DFixGUI.exe) that is designed for importing 2-D binocular eyetracking data and calculating both 3D gaze points and 3D fixations using the libraries. The Gaze3DFix toolkit, including both libraries and the graphical user interface, is available as open-source software at https://github.com/applied-cognition-research/Gaze3DFix .
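
    The sketch below illustrates the general shape of a dispersion-based 3D fixation detector with an ellipsoidal bounding volume that is looser along the depth axis. It is not the published algorithm or the toolkit's code; the radii, the grouping rule and the minimum-duration threshold are simplified assumptions.

    ```python
    import numpy as np

    def detect_fixations(points, radii=(15.0, 15.0, 45.0), min_samples=6):
        """Group consecutive 3D gaze points (n x 3, in mm) that stay inside an ellipsoid."""
        points = np.asarray(points, dtype=float)
        radii = np.asarray(radii, dtype=float)
        fixations, start, i, n = [], 0, 1, len(points)
        while i <= n:
            window = points[start:i]
            centre = window.mean(axis=0)
            # A sample fits when sum(((p - c) / r)^2) <= 1; the whole window must fit.
            inside = np.all(np.sum(((window - centre) / radii) ** 2, axis=1) <= 1.0)
            if inside and i < n:
                i += 1
                continue
            end = i - 1 if inside else i - 2   # drop the sample that broke the ellipsoid
            if end - start + 1 >= min_samples:
                fixations.append((start, end))
            start = end + 1
            i = start + 1
        return fixations

    # Toy trace: a fixation near (0, 0, 400) mm followed by one near (80, 0, 600) mm.
    rng = np.random.default_rng(1)
    trace = np.vstack([rng.normal([0, 0, 400], [2, 2, 8], size=(30, 3)),
                       rng.normal([80, 0, 600], [2, 2, 8], size=(30, 3))])
    print(detect_fixations(trace))   # expected: roughly [(0, 29), (30, 59)]
    ```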

  5. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain-machine interfaces

    Science.gov (United States)

    Abbott, W. W.; Faisal, A. A.

    2012-08-01

    Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with a 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits s^-1, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs our system yields effective real-time closed loop control of devices (10 ms latency), after just ten minutes of training, which we demonstrate through a novel BMI benchmark—the control of the video arcade game ‘Pong’.
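
    One common way to obtain a 3D gaze point from a binocular tracker of this kind is to intersect the two eyes' gaze rays, taking the midpoint of their closest approach. The sketch below shows that geometry only; the eye positions and directions are hypothetical and this is not the authors' implementation.

    ```python
    import numpy as np

    def gaze_point_3d(origin_l, dir_l, origin_r, dir_r):
        """Midpoint of the shortest segment between two (possibly skew) gaze rays."""
        d1 = dir_l / np.linalg.norm(dir_l)
        d2 = dir_r / np.linalg.norm(dir_r)
        w0 = origin_l - origin_r
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b
        if abs(denom) < 1e-9:          # parallel rays: no usable vergence signal
            return None
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
        return (origin_l + s * d1 + origin_r + t * d2) / 2.0

    # Hypothetical geometry in millimetres: eyes 64 mm apart, target 400 mm away.
    left_eye, right_eye = np.array([-32.0, 0.0, 0.0]), np.array([32.0, 0.0, 0.0])
    target = np.array([50.0, 20.0, 400.0])
    print(gaze_point_3d(left_eye, target - left_eye, right_eye, target - right_eye))
    ```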

  6. Human eye analytical and mesh-geometry models for ophthalmic dosimetry using MCNP6

    International Nuclear Information System (INIS)

    Angelocci, Lucas V.; Fonseca, Gabriel P.; Yoriyaz, Helio

    2015-01-01

    Eye tumors can be treated with brachytherapy using Co-60 plaques, I-125 seeds, among other materials. The human eye has regions particularly vulnerable to ionizing radiation (e.g. the crystalline lens) and dosimetry for this region must be performed carefully. A mathematical model was proposed in the past [1] for the eye anatomy to be used in Monte Carlo simulations to account for dose distribution in ophthalmic brachytherapy. The model includes the description of internal structures of the eye that were not treated in previous works. The aim of this present work was to develop a new eye model based on the Mesh geometries of the MCNP6 code. The methodology utilized the ABAQUS/CAE (Simulia 3DS) software to build the Mesh geometry. For this work, an ophthalmic applicator containing up to 24 model Amersham 6711 I-125 seeds (Oncoseed) was used, positioned in contact with a generic tumor defined analytically inside the eye. The absorbed dose in eye structures like cornea, sclera, choroid, retina, vitreous body, lens, optical nerve and optical nerve wall was calculated using both models: analytical and MESH. (author)

  7. Human eye analytical and mesh-geometry models for ophthalmic dosimetry using MCNP6

    Energy Technology Data Exchange (ETDEWEB)

    Angelocci, Lucas V.; Fonseca, Gabriel P.; Yoriyaz, Helio, E-mail: hyoriyaz@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Eye tumors can be treated with brachytherapy using Co-60 plaques, I-125 seeds, among other materials. The human eye has regions particularly vulnerable to ionizing radiation (e.g. the crystalline lens) and dosimetry for this region must be performed carefully. A mathematical model was proposed in the past [1] for the eye anatomy to be used in Monte Carlo simulations to account for dose distribution in ophthalmic brachytherapy. The model includes the description of internal structures of the eye that were not treated in previous works. The aim of this present work was to develop a new eye model based on the Mesh geometries of the MCNP6 code. The methodology utilized the ABAQUS/CAE (Simulia 3DS) software to build the Mesh geometry. For this work, an ophthalmic applicator containing up to 24 model Amersham 6711 I-125 seeds (Oncoseed) was used, positioned in contact with a generic tumor defined analytically inside the eye. The absorbed dose in eye structures like cornea, sclera, choroid, retina, vitreous body, lens, optical nerve and optical nerve wall was calculated using both models: analytical and MESH. (author)

  8. Goat's eye integrated with a human cataractous lens: A training model for phacoemulsification

    Directory of Open Access Journals (Sweden)

    Sabyasachi Sengupta

    2015-01-01

    Full Text Available A relatively simple and inexpensive technique to train surgeons in phacoemulsification using a goat's eye integrated with a human cataractous nucleus is described. The goat's eye is placed on a bed of cotton within the lumen of a cylindrical container. This is then mounted on a rectangular thermocol so that the limbus is presented at the surgical field. After making a clear corneal entry with a keratome, the trainer makes a 5-5.5 mm continuous curvilinear capsulorhexis in the anterior lens capsule, creates a crater of adequate depth in the cortex and inserts the human nucleus within this crater in the goat's capsular bag. The surgical wound is sutured, and the goat's eye is ready for training. Creating the capsulorhexis with precision and making the crater of adequate depth to snugly accommodate the human nucleus are the most important steps to prevent excessive wobbling of the nucleus while training.

  9. Goat's eye integrated with a human cataractous lens: A training model for phacoemulsification.

    Science.gov (United States)

    Sengupta, Sabyasachi; Dhanapal, Praveen; Nath, Manas; Haripriya, Aravind; Venkatesh, Rengaraj

    2015-03-01

    A relatively simple and inexpensive technique to train surgeons in phacoemulsification using a goat's eye integrated with a human cataractous nucleus is described. The goat's eye is placed on a bed of cotton within the lumen of a cylindrical container. This is then mounted on a rectangular thermocol so that the limbus is presented at the surgical field. After making a clear corneal entry with a keratome, the trainer makes a 5-5.5 mm continuous curvilinear capsulorhexis in the anterior lens capsule, creates a crater of adequate depth in the cortex and inserts the human nucleus within this crater in the goat's capsular bag. The surgical wound is sutured, and the goat's eye is ready for training. Creating the capsulorhexis with precision and making the crater of adequate depth to snugly accommodate the human nucleus are the most important steps to prevent excessive wobbling of the nucleus while training.

  10. Dosimetric Comparison of Simulated Human Eye And Water Phantom in Investigation of Iodine Source Effects on Tumour And Healthy Tissues

    International Nuclear Information System (INIS)

    Sadi, A.S.; Masoudi, F.S. K.N.Toosi University of Technology

    2011-01-01

    For better clinical analysis in ophthalmic brachytherapy dosimetry, the dose needs to be determined in different parts of the eye, so simulating the eye and defining the material of each of its parts is helpful for investigating dosimetry in the human eye. In brachytherapy dosimetry, however, it is common to treat a water phantom as the human eye globe. In this work, a full human eye is simulated with the MCNP-4C code, considering all parts of the eye: lens, cornea, retina, choroid, sclera, anterior chamber, optic nerve, the bulk of the eye comprising the vitreous body, and the tumour. The average dose in different parts of this full model of the human eye is determined, and the results are compared with the dose calculated in a water phantom. The central-axis depth dose and the dose in the whole tumour are also calculated for the two simulated eye models and compared. Finally, since the aim of this work is to compare dosimetry results between the water phantom and the simulated eye globe, the ratios of the dose absorbed by healthy tissues to the dose absorbed by the tumour are calculated in both simulations and compared.

  11. Horizontal gaze palsy with progressive scoliosis: CT and MR findings

    Energy Technology Data Exchange (ETDEWEB)

    Bomfim, Rodrigo C.; Tavora, Daniel G.F.; Nakayama, Mauro; Gama, Romulo L. [Sarah Network of Rehabilitation Hospitals, Department of Radiology, Ceara (Brazil)

    2009-02-15

    Horizontal gaze palsy with progressive scoliosis (HGPPS) is a rare congenital disorder characterized by absence of conjugate horizontal eye movements and progressive scoliosis developing in childhood and adolescence. We present a child with clinical and neuroimaging findings typical of HGPPS. CT and MRI of the brain demonstrated pons hypoplasia, absence of the facial colliculi, butterfly configuration of the medulla and a deep midline pontine cleft. We briefly discuss the imaging aspects of this rare entity in light of the current literature. (orig.)

  12. Geometrical theory to predict eccentric photorefraction intensity profiles in the human eye

    Science.gov (United States)

    Roorda, Austin; Campbell, Melanie C. W.; Bobier, W. R.

    1995-08-01

    In eccentric photorefraction, light returning from the retina of the eye is photographed by a camera focused on the eye's pupil. We use a geometrical model of eccentric photorefraction to generate intensity profiles across the pupil image. The intensity profiles for three different monochromatic aberration functions induced in a single eye are predicted and show good agreement with the measured eccentric photorefraction intensity profiles. A directional reflection from the retina is incorporated into the calculation. Intensity profiles for symmetric and asymmetric aberrations are generated and measured. The latter profile shows a dependency on the source position and the meridian. The magnitude of the effect of thresholding on measured pattern extents is predicted. Monochromatic aberrations in human eyes will cause deviations in the eccentric photorefraction measurements from traditional crescents caused by defocus and may cause misdiagnoses of ametropia or anisometropia. Our results suggest that measuring refraction along the vertical meridian is preferred for screening studies with the eccentric photorefractor.

  13. Importance of non-synonymous OCA2 variants in human eye colour prediction

    DEFF Research Database (Denmark)

    Andersen, Jeppe Dyrberg; Pietroni, Carlotta; Johansen, Peter

    2016-01-01

    in the promotor region of OCA2 (OMIM #611409). Nevertheless, many eye colors cannot be explained by only considering rs12913832:A>G. Methods: In this study, we searched for additional variants in OCA2 to explain human eye color by sequencing a 500 kbp region, encompassing OCA2 and its promotor region. Results: We...... identified three nonsynonymous OCA2 variants as important for eye color, including rs1800407:G>A (p.Arg419Gln) and two variants, rs74653330:A>T (p.Ala481Thr) and rs121918166:G>A (p.Val443Ile), not previously described as important for eye color variation. It was shown that estimated haplotypes consisting...

  14. Towards Quantum Experiments with Human Eye Detectors Based on Cloning via Stimulated Emission ?

    Science.gov (United States)

    De Martini, Francesco

    2010-05-01

    In a recent theoretical paper published in Physical Review Letters, Sekatsky, Brunner, Branciard, Gisin, Simon report an extended investigation on some properties of the human eye that affect its behavior as a quantum detector. We believe that the content of this work, albeit appealing at first sight, is highly questionable simply because the human eye cannot be adopted as a sensing device within any quantum measurement apparatus. Furthermore, the criticism raised by these Authors against a real experiment on Micro-Macro entanglement recently published in Physical Review Letters (100, 253601, 2008) is found misleading and misses its target.

  15. Predicting diagnostic error in Radiology via eye-tracking and image analytics: Application in mammography

    Energy Technology Data Exchange (ETDEWEB)

    Voisin, Sophie [ORNL; Pinto, Frank M [ORNL; Morin-Ducote, Garnetta [University of Tennessee, Knoxville (UTK); Hudson, Kathy [University of Tennessee, Knoxville (UTK); Tourassi, Georgia [ORNL

    2013-01-01

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists' gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from 4 Radiology residents and 2 breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BIRADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Diagnostic error can be predicted reliably by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model (AUC=0.79). Personalized user modeling was far more accurate for the more experienced readers (average AUC of 0.837 ± 0.029) than for the less experienced ones (average AUC of 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted reliably by leveraging the radiologists' gaze behavior and image content.
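
    The general recipe described above (concatenate per-case gaze features with image features, train a classifier to predict error, and score it with ROC AUC) can be sketched as follows. The features, labels and model choice are synthetic placeholders, not the study's data or code.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(7)
    n_cases = 240
    gaze_features = rng.random((n_cases, 6))    # e.g. dwell time, fixation count, saccade length
    image_features = rng.random((n_cases, 8))   # e.g. texture descriptors, BIRADS-style ratings
    X = np.hstack([gaze_features, image_features])

    # Hypothetical labels: errors are more likely when dwell time (feature 0) is short.
    p_error = 1.0 / (1.0 + np.exp(4.0 * (gaze_features[:, 0] - 0.3)))
    y = (rng.random(n_cases) < p_error).astype(int)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    print(f"cross-validated AUC: {roc_auc_score(y, scores):.2f}")
    ```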

  16. Predicting diagnostic error in radiology via eye-tracking and image analytics: Preliminary investigation in mammography

    Energy Technology Data Exchange (ETDEWEB)

    Voisin, Sophie; Tourassi, Georgia D. [Biomedical Science and Engineering Center, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Pinto, Frank [School of Engineering, Science, and Technology, Virginia State University, Petersburg, Virginia 23806 (United States); Morin-Ducote, Garnetta; Hudson, Kathleen B. [Department of Radiology, University of Tennessee Medical Center at Knoxville, Knoxville, Tennessee 37920 (United States)

    2013-10-15

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists’ gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels.Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from four Radiology residents and two breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BIRADS images features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated.Results: Machine learning can be used to predict diagnostic error by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model [area under the ROC curve (AUC) = 0.792 ± 0.030]. Personalized user modeling was far more accurate for the more experienced readers (AUC = 0.837 ± 0.029) than for the less experienced ones (AUC = 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features.Conclusions: Diagnostic errors in mammography can be predicted to a good extent by leveraging the radiologists’ gaze behavior and image content.

  17. Predicting diagnostic error in radiology via eye-tracking and image analytics: Preliminary investigation in mammography

    International Nuclear Information System (INIS)

    Voisin, Sophie; Tourassi, Georgia D.; Pinto, Frank; Morin-Ducote, Garnetta; Hudson, Kathleen B.

    2013-01-01

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists’ gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels.Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from four Radiology residents and two breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BIRADS images features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated.Results: Machine learning can be used to predict diagnostic error by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model [area under the ROC curve (AUC) = 0.792 ± 0.030]. Personalized user modeling was far more accurate for the more experienced readers (AUC = 0.837 ± 0.029) than for the less experienced ones (AUC = 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features.Conclusions: Diagnostic errors in mammography can be predicted to a good extent by leveraging the radiologists’ gaze behavior and image content

  18. A multimodal interface to resolve the Midas-Touch problem in gaze controlled wheelchair.

    Science.gov (United States)

    Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, KongFatt; Prasad, Girijesh

    2017-07-01

    Human-computer interaction (HCI) research has been playing an essential role in the field of rehabilitation. The usability of the gaze controlled powered wheelchair is limited due to the Midas-Touch problem. In this work, we propose a multimodal graphical user interface (GUI) to control a powered wheelchair that aims to help upper-limb mobility impaired people in daily living activities. The GUI was designed to include a portable and low-cost eye-tracker and a soft-switch wherein the wheelchair can be controlled in three different ways: 1) with a touchpad, 2) with an eye-tracker only, and 3) eye-tracker with soft-switch. The interface includes nine different commands (eight directions and stop) and is integrated within a powered wheelchair system. We evaluated the performance of the multimodal interface in terms of lap-completion time, the number of commands, and the information transfer rate (ITR) with eight healthy participants. The analysis of the results showed that the eye-tracker with soft-switch provides superior performance among the three conditions, with an ITR of 37.77 bits/min.
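
    The information transfer rate quoted above is typically computed with the Wolpaw formula from the number of selectable commands, the selection accuracy, and the selection rate. The sketch below shows that calculation; the accuracy and selection-rate numbers are hypothetical, not the study's data.

    ```python
    import math

    def itr_bits_per_min(n_commands, accuracy, selections_per_min):
        """Wolpaw ITR: bits per selection times selections per minute."""
        if accuracy >= 1.0:
            bits = math.log2(n_commands)
        else:
            bits = (math.log2(n_commands)
                    + accuracy * math.log2(accuracy)
                    + (1 - accuracy) * math.log2((1 - accuracy) / (n_commands - 1)))
        return bits * selections_per_min

    # Nine commands (eight directions plus stop), 95% accuracy, 14 selections per minute.
    print(f"{itr_bits_per_min(9, 0.95, 14):.1f} bits/min")
    ```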

  19. Statistical characteristics of aberrations of human eyes after small incision lenticule extraction surgery and analysis of visual performance with individual eye model.

    Science.gov (United States)

    Lou, Qiqi; Wang, Yan; Wang, Zhaoqi; Liu, Yongji; Zhang, Lin; Fang, Hui

    2015-09-01

    Preoperative and postoperative wavefront aberrations of 73 myopic eyes with small incision lenticule extraction surgery are analyzed in this paper. Twenty-eight postoperative individual eye models are constructed to investigate the visual acuity (VA) of human eyes. Results show that in the photopic condition, residual defocus, residual astigmatism, and higher-order aberrations are relatively small. 100% of eyes reach a VA of 0.8 or better, and 89.3% of eyes reach a VA of 1.0 or better. In the scotopic condition, the residual defocus and the higher-order aberrations are, respectively, 1.9 and 8.5 times those in the photopic condition, and the defocus becomes the main factor attenuating visual performance.

  20. APPLICATION OF EYE TRACKING FOR MEASUREMENT AND EVALUATION IN HUMAN FACTORS STUDIES IN CONTROL ROOM MODERNIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Kovesdi, C.; Spielman, Z.; LeBlanc, K.; Rice, B.

    2017-05-01

    An important element of human factors engineering (HFE) pertains to measurement and evaluation (M&E). The role of HFE-M&E should be integrated throughout the entire control room modernization (CRM) process and be used for human-system performance evaluation and for diagnosing and resolving potential human engineering deficiencies (HEDs) and other human machine interface (HMI) design issues. NUREG-0711 describes how HFE in CRM should employ a hierarchical set of measures, particularly during integrated system validation (ISV), including plant performance, personnel task performance, situation awareness, cognitive workload, and anthropometric/physiological factors. Historically, subjective measures have been primarily used since they are easier to collect and do not require specialized equipment. However, there are pitfalls in relying solely on subjective measures in M&E that negatively impact reliability, sensitivity, and objectivity. As part of comprehensively capturing a diverse set of measures that strengthen findings and inferences made about the benefits of emerging technologies like advanced displays, this paper discusses the value of using eye tracking as an objective method that can be used in M&E. A brief description of eye tracking technology and relevant eye tracking measures is provided. Additionally, technical considerations and the unique challenges with using eye tracking in full-scale simulations are addressed. Finally, this paper shares preliminary findings regarding the use of a wearable eye tracking system in a full-scale simulator study. These findings should help guide future full-scale simulator studies using eye tracking as a methodology to evaluate human-system performance.

  1. Mathematical models of the dynamics of the human eye

    CERN Document Server

    Collins, Richard

    1980-01-01

    A rich and abundant literature has developed during the last half century dealing with mechanical aspects of the eye, mainly from clinical and experimental points of view. For the most part, workers have attempted to shed light on the complex set of conditions known by the general term glaucoma. These conditions are characterised by an increase in intraocular pressure sufficient to cause degeneration of the optic disc and concomitant defects in the visual field, which, if not controlled, lead to inevitable permanent blindness. In the United States alone, an estimated 50,000 persons are blind as a result of glaucoma, which strikes about 2% of the population over 40 years of age (Vaughan and Asbury, 1974). An understanding of the underlying mechanisms of glaucoma is hindered by the fact that elevated intraocular pressure, like a runny nose, is but a symptom which may have a variety of causes. Only by turning to the initial pathology can one hope to understand this important class of medical problems.

  2. Dose conversion coefficients for neutron exposure to the lens of the human eye

    International Nuclear Information System (INIS)

    Manger, Ryan P.; Bellamy, Michael B.; Eckerman, Keith F.

    2011-01-01

    Dose conversion coefficients for the lens of the human eye have been calculated for neutron exposure at energies from 1 x 10^-9 to 20 MeV and several standard orientations: anterior-to-posterior, rotational and right lateral. MCNPX version 2.6.0, a Monte Carlo-based particle transport package, was used to determine the energy deposited in the lens of the eye. The human eyeball model was updated by partitioning the lens into sensitive and insensitive volumes, as the anterior portion (sensitive volume) of the lens is more radiosensitive and prone to cataract formation. The updated eye model was used with the adult UF-ORNL mathematical phantom in the MCNPX transport calculations.

  3. Dose conversion coefficients for neutron exposure to the lens of the human eye

    International Nuclear Information System (INIS)

    Manger, R. P.; Bellamy, M. B.; Eckerman, K. F.

    2012-01-01

    Dose conversion coefficients for the lens of the human eye have been calculated for neutron exposure at energies from 1 x 10^-9 to 20 MeV and several standard orientations: anterior-to-posterior, rotational and right lateral. MCNPX version 2.6.0, a Monte Carlo-based particle transport package, was used to determine the energy deposited in the lens of the eye. The human eyeball model was updated by partitioning the lens into sensitive and insensitive volumes, as the anterior portion (sensitive volume) of the lens is more radiosensitive and prone to cataract formation. The updated eye model was used with the adult UF-ORNL mathematical phantom in the MCNPX transport calculations. (authors)

  4. The effect of human image in B2C website design: an eye-tracking study

    Science.gov (United States)

    Wang, Qiuzhen; Yang, Yi; Wang, Qi; Ma, Qingguo

    2014-09-01

    On B2C shopping websites, effective visual designs can bring about consumers' positive emotional experience. From this perspective, this article developed a research model to explore the impact of human image as a visual element on consumers' online shopping emotions and subsequent attitudes towards websites. This study conducted an eye-tracking experiment to collect both eye movement data and questionnaire data to test the research model. Questionnaire data analysis showed that product pictures combined with human image induced positive emotions among participants, thus promoting their attitudes towards online shopping websites. Specifically, product pictures with human image first produced higher levels of image appeal and perceived social presence, thus stimulating higher levels of enjoyment and subsequent positive attitudes towards the websites. Moreover, a moderating effect of product type was demonstrated on the relationship between the presence of human image and the level of image appeal. Specifically, human image significantly increased the level of image appeal when integrated in entertainment product pictures while this relationship was not significant in terms of utilitarian products. Eye-tracking data analysis further supported these results and provided plausible explanations. The presence of human image significantly increased the pupil size of participants regardless of product types. For entertainment products, participants paid more attention to product pictures integrated with human image whereas for utilitarian products more attention was paid to functional information of products than to product pictures no matter whether or not integrated with human image.

  5. Mechanical model of human eye compliance for volumetric occlusion break surge measurements.

    Science.gov (United States)

    Dyk, David W; Miller, Kevin M

    2018-02-01

    To develop a mechanical model of human eye compliance for volumetric studies. Alcon Research, Ltd., Lake Forest, California, USA. Experimental study. Enucleated human eyes underwent pressurization and depressurization cycles with peak intraocular pressures (IOPs) of 60 to 100 mm Hg; anterior chamber pressure and volume changes were measured. Average net volume change curves were calculated as a function of IOP for each eye. Overall mean volumes were computed from each eye's average results at pressure points extrapolated over the range of 5 to 90 mm Hg. A 2-term exponential function was fit to these results. A fluid chamber with a displaceable piston was created as a mechanical model of this equation. A laser confocal displacement meter was used to measure piston displacement. A test bed incorporated the mechanical model with a mounted phacoemulsification probe and allowed for simulated occlusion breaks. Surge volume was calculated from piston displacement. An exponential function, V = C_1 × exp(C_2 × IOP) + C_3 × exp(C_4 × IOP) - V_0, where V is the volume, was fit to the final depressurization curve obtained from 15 enucleated human eyes. The C_1 through C_4 values were -0.07141, -0.23055, -0.14972, and -0.02006, respectively. The equation was modeled using a piston system with 3 parallel springs that engaged serially. The mechanical model mimicked depressurization curves observed in human cadaver eyes. The resulting mechanical compliance model measured ocular volumetric changes and thus would be helpful in characterizing the postocclusion break surge response. Copyright © 2018 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
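
    The reported two-term exponential can be evaluated directly from the coefficients given above. The abstract does not state V_0 or the volume units, so the sketch below simply zeroes the curve at a reference pressure of 5 mm Hg; that offset, and the interpretation as a relative volume change, are assumptions.

    ```python
    import math

    # Coefficients reported in the abstract; V_0 is an assumed offset (see note above).
    C1, C2, C3, C4 = -0.07141, -0.23055, -0.14972, -0.02006

    def volume_change(iop_mmhg, v0):
        """V(IOP) = C1*exp(C2*IOP) + C3*exp(C4*IOP) - V0."""
        return C1 * math.exp(C2 * iop_mmhg) + C3 * math.exp(C4 * iop_mmhg) - v0

    V0 = C1 * math.exp(C2 * 5) + C3 * math.exp(C4 * 5)   # zero the curve at 5 mm Hg
    for iop in (5, 15, 30, 60, 90):
        print(f"IOP {iop:2d} mm Hg -> relative volume change {volume_change(iop, V0):+.4f}")
    ```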

  6. Integrating eye tracking in virtual reality for stroke rehabilitation

    OpenAIRE

    Alves, Júlio Miguel Gomes Rebelo

    2014-01-01

    This thesis reports on research done for the integration of eye tracking technology into virtual reality environments, with the goal of using it in rehabilitation of patients who suffered from stroke. For the last few years, eye tracking has been a focus on medical research, used as an assistive tool  to help people with disabilities interact with new technologies  and as an assessment tool  to track the eye gaze during computer interactions. However, tracking more complex gaze behavio...

  7. Gaze Dynamics in the Recognition of Facial Expressions of Emotion.

    Science.gov (United States)

    Barabanschikov, Vladimir A

    2015-01-01

    We studied the preferentially fixated parts and features of the human face during the recognition of facial expressions of emotion. Photographs of facial expressions were used. Participants were to categorize these as basic emotions; during this process, eye movements were registered. It was found that variation in the intensity of an expression is mirrored in the accuracy of emotion recognition; it was also reflected in several indices of oculomotor function: the duration of inspection of particular areas of the face (its upper and lower parts, right and left sides), and the location, number and duration of fixations and the viewing trajectory. In particular, for low-intensity expressions, the right side of the face was found to be attended predominantly (right-side dominance); the right-side dominance effect was, however, absent for expressions of high intensity. For both low- and high-intensity expressions, the upper part of the face was predominantly fixated, though with greater fixation for high-intensity expressions. The majority of trials (70%), in line with findings in previous studies, revealed a V-shaped pattern of inspection trajectory. No relationship was found, however, between the accuracy of recognition of emotional expressions and either the location and duration of fixations or the pattern of gaze direction across the face. © The Author(s) 2015.

  8. Eye Tracking Based Control System for Natural Human-Computer Interaction

    Directory of Open Access Journals (Sweden)

    Xuebai Zhang

    2017-01-01

    Full Text Available Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disability. In order to improve the reliability, mobility, and usability of eye tracking technique in user-computer dialogue, a novel eye control system with integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode by only using user’s eye. The usage flow of the proposed system is designed to perfectly follow human natural habits. Additionally, a magnifier module is proposed to allow the accurate operation. In the experiment, two interactive tasks with different difficulty (searching article and browsing multimedia web) were done to compare the proposed eye control tool with an existing system. The Technology Acceptance Model (TAM) measures are used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design.

  9. Eye Tracking Based Control System for Natural Human-Computer Interaction.

    Science.gov (United States)

    Zhang, Xuebai; Liu, Xiaolong; Yuan, Shyan-Ming; Lin, Shu-Fan

    2017-01-01

    Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disability. In order to improve the reliability, mobility, and usability of eye tracking technique in user-computer dialogue, a novel eye control system with integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode by only using user's eye. The usage flow of the proposed system is designed to perfectly follow human natural habits. Additionally, a magnifier module is proposed to allow the accurate operation. In the experiment, two interactive tasks with different difficulty (searching article and browsing multimedia web) were done to compare the proposed eye control tool with an existing system. The Technology Acceptance Model (TAM) measures are used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design.
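
    Eye-mouse systems of this kind usually turn fixations into "clicks" with a dwell-time rule: a selection fires when the gaze stays within a small radius of its running centre for a set time. The sketch below shows one such rule; it is not the paper's implementation, and the sampling rate, radius, dwell time and sample data are hypothetical.

    ```python
    import numpy as np

    def dwell_clicks(samples, rate_hz=60, radius_px=40, dwell_s=0.8):
        """Return gaze centres where the gaze dwelt long enough to count as a click."""
        need = int(dwell_s * rate_hz)
        clicks, start = [], 0
        for i in range(1, len(samples) + 1):
            window = np.asarray(samples[start:i], dtype=float)
            centre = window.mean(axis=0)
            if np.max(np.linalg.norm(window - centre, axis=1)) > radius_px:
                start = i - 1                              # gaze moved on: restart the dwell
            elif i - start == need:
                clicks.append(tuple(np.round(centre, 1).tolist()))
                start = i                                  # avoid repeated clicks at one spot
        return clicks

    # Toy trace: ~1 s fixation on a button at (200, 300), then a saccade away.
    rng = np.random.default_rng(3)
    fixation = rng.normal([200, 300], 5, size=(60, 2)).tolist()
    away = rng.normal([600, 120], 5, size=(30, 2)).tolist()
    print(dwell_clicks(fixation + away))
    ```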

  10. A role of the human thalamus in predicting the perceptual consequences of eye movements.

    Science.gov (United States)

    Ostendorf, Florian; Liebermann, Daniela; Ploner, Christoph J

    2013-01-01

    Internal monitoring of oculomotor commands may help to anticipate and keep track of changes in perceptual input imposed by our eye movements. Neurophysiological studies in non-human primates identified corollary discharge (CD) signals of oculomotor commands that are conveyed via thalamus to frontal cortices. We tested whether disruption of these monitoring pathways on the thalamic level impairs the perceptual matching of visual input before and after an eye movement in human subjects. Fourteen patients with focal thalamic stroke and 20 healthy control subjects performed a task requiring a perceptual judgment across eye movements. Subjects reported the apparent displacement of a target cue that jumped unpredictably in sync with a saccadic eye movement. In a critical condition of this task, six patients exhibited clearly asymmetric perceptual performance for rightward vs. leftward saccade direction. Furthermore, perceptual judgments in seven patients systematically depended on oculomotor targeting errors, with self-generated targeting errors erroneously attributed to external stimulus jumps. Voxel-based lesion-symptom mapping identified an area in right central thalamus as critical for the perceptual matching of visual space across eye movements. Our findings suggest that trans-thalamic CD transmission decisively contributes to a correct prediction of the perceptual consequences of oculomotor actions.

  11. A role of the human thalamus in predicting the perceptual consequences of eye movements

    Directory of Open Access Journals (Sweden)

    Florian eOstendorf

    2013-04-01

    Full Text Available Internal monitoring of oculomotor commands may help to anticipate and keep track of changes in perceptual input imposed by our eye movements. Neurophysiological studies in non-human primates identified corollary discharge signals of oculomotor commands that are conveyed via thalamus to frontal cortices. We tested whether disruption of these monitoring pathways on the thalamic level impairs the perceptual matching of visual input before and after an eye movement in human subjects. Fourteen patients with focal thalamic stroke and twenty healthy control subjects performed a task requiring a perceptual judgment across eye movements. Subjects reported the apparent displacement of a target cue that jumped unpredictably in sync with a saccadic eye movement. In a critical condition of this task, six patients exhibited clearly asymmetric perceptual performance for rightward versus leftward saccade direction. Furthermore, perceptual judgments in seven patients systematically depended on oculomotor targeting errors, with self-generated targeting errors erroneously attributed to external stimulus jumps. Voxel-based lesion-symptom mapping identified an area in right central thalamus as critical for the perceptual matching of visual space across eye movements. Our findings suggest that trans-thalamic corollary discharge transmission decisively contributes to a correct prediction of the perceptual consequences of oculomotor actions.

  12. Controlled delivery of antiangiogenic drug to human eye tissue using a MEMS device

    KAUST Repository

    Pirmoradi, Fatemeh Nazly

    2013-01-01

    We demonstrate an implantable MEMS drug delivery device to conduct controlled, on-demand, ex vivo drug transport to human eye tissue. Remotely operated drug delivery to human post-mortem eyes was performed via a MEMS device. The developed curved packaging cover conforms to the eyeball, thereby preventing the eye tissue from contacting the actuating membrane. By pulsed operation of the device, using an externally applied magnetic field, the drug released from the device accumulates in a cavity adjacent to the tissue. As such, docetaxel (DTX), an antiangiogenic drug, diffuses through the eye tissue, from sclera and choroid to retina. DTX uptake by sclera and choroid was measured to be 1.93±0.66 and 7.24±0.37 μg/g tissue, respectively, after two hours in pulsed operation mode (10 s on/off cycles) at 23°C. During this period, a total amount of 192 ng DTX diffused into the exposed tissue. This MEMS device shows great potential for the treatment of ocular posterior segment diseases such as diabetic retinopathy by introducing a novel way of drug administration to the eye. © 2013 IEEE.

  13. Two eyes, one vision: binocular motion perception in human visual cortex

    NARCIS (Netherlands)

    Barendregt, M.

    2016-01-01

    An important aspect of human vision is the fact that it is binocular, i.e. that we have two eyes. As a result, the brain nearly always receives two slightly different images of the same visual scene. Yet, we only perceive a single image and thus our brain has to actively combine the binocular visual

  14. A Simple Model of the Accommodating Lens of the Human Eye

    Science.gov (United States)

    Oommen, Vinay; Kanthakumar, Praghalathan

    2014-01-01

    The human eye is often discussed as optically equivalent to a photographic camera. The iris is compared with the shutter, the pupil to the aperture, and the retina to the film, and both have lens systems to focus rays of light. Although many similarities exist, a major difference between the two systems is the mechanism involved in focusing an…

  15. Examining the durability of incidentally learned trust from gaze cues.

    Science.gov (United States)

    Strachan, James W A; Tipper, Steven P

    2017-10-01

    In everyday interactions we find our attention follows the eye gaze of faces around us. As this cueing is so powerful and difficult to inhibit, gaze can therefore be used to facilitate or disrupt visual processing of the environment, and when we experience this we infer information about the trustworthiness of the cueing face. However, to date no studies have investigated how long these impressions last. To explore this we used a gaze-cueing paradigm where faces consistently demonstrated either valid or invalid cueing behaviours. Previous experiments show that valid faces are subsequently rated as more trustworthy than invalid faces. We replicate this effect (Experiment 1) and then include a brief interference task in Experiment 2 between gaze cueing and trustworthiness rating, which weakens but does not completely eliminate the effect. In Experiment 3, we explore whether greater familiarity with the faces improves the durability of trust learning and find that the effect is more resilient with familiar faces. Finally, in Experiment 4, we push this further and show that evidence of trust learning can be seen up to an hour after cueing has ended. Taken together, our results suggest that incidentally learned trust can be durable, especially for faces that deceive.

  16. Different cellular effects of four anti-inflammatory eye drops on human corneal epithelial cells: independent in active components.

    Science.gov (United States)

    Qu, Mingli; Wang, Yao; Yang, Lingling; Zhou, Qingjun

    2011-01-01

    To evaluate and compare the cellular effects of four commercially available anti-inflammatory eye drops and their active components on human corneal epithelial cells (HCECs) in vitro. The cellular effects of four eye drops (Bromfenac Sodium Hydrate Eye Drops, Pranoprofen Eye Drops, Diclofenac Sodium Eye Drops, and Tobramycin & Dex Eye Drops) and their corresponding active components were evaluated in an HCEC line with five in vitro assays. Cell proliferation and migration were measured using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay and a transwell migration assay. Cell damage was determined with the lactate dehydrogenase (LDH) assay. Cell viability and median lethal time (LT₅₀) were measured by 7-amino-actinomycin D (7-AAD) staining and flow cytometry analysis. Cellular effects after exposure of HCECs to the four anti-inflammatory eye drops were concentration dependent, and differences in cellular toxicity on cell proliferation became significant at lower concentrations. One of the eye drops showed significantly increased effects on cell damage and viability when compared with the other three solutions. Tobramycin & Dex Eye Drops significantly inhibited the migration of HCECs, and also showed the quickest effect on cell viability: the LT₅₀ was 3.28, 9.23, 10.38, and 23.80 min for Tobramycin & Dex Eye Drops, Diclofenac Sodium Eye Drops, Pranoprofen Eye Drops, and Bromfenac Sodium Hydrate Eye Drops, respectively. However, comparisons of cellular toxicity revealed significant differences between the eye drops and their active components at the same concentration. The corneal epithelial toxicity differences among the active components of the four eye drops became significant at higher concentrations (>0.020%). The four anti-inflammatory eye drops showed different cellular effects on HCECs, and the toxicity was not related to their active components, which provides new reference for the clinical application and drug

  17. New perspectives in gaze sensitivity research.

    Science.gov (United States)

    Davidson, Gabrielle L; Clayton, Nicola S

    2016-03-01

    Attending to where others are looking is thought to be of great adaptive benefit for animals when avoiding predators and interacting with group members. Many animals have been reported to respond to the gaze of others, by co-orienting their gaze with group members (gaze following) and/or responding fearfully to the gaze of predators or competitors (i.e., gaze aversion). Much of the literature has focused on the cognitive underpinnings of gaze sensitivity, namely whether animals have an understanding of the attention and visual perspectives in others. Yet there remain several unanswered questions regarding how animals learn to follow or avoid gaze and how experience may influence their behavioral responses. Many studies on the ontogeny of gaze sensitivity have shed light on how and when gaze abilities emerge and change across development, indicating the necessity to explore gaze sensitivity when animals are exposed to additional information from their environment as adults. Gaze aversion may be dependent upon experience and proximity to different predator types, other cues of predation risk, and the salience of gaze cues. Gaze following in the context of information transfer within social groups may also be dependent upon experience with group-members; therefore we propose novel means to explore the degree to which animals respond to gaze in a flexible manner, namely by inhibiting or enhancing gaze following responses. We hope this review will stimulate gaze sensitivity research to expand beyond the narrow scope of investigating underlying cognitive mechanisms, and to explore how gaze cues may function to communicate information other than attention.

  18. Fear of eyes: triadic relation among social anxiety, trypophobia, and discomfort for eye cluster.

    Science.gov (United States)

    Chaya, Kengo; Xue, Yuting; Uto, Yusuke; Yao, Qirui; Yamada, Yuki

    2016-01-01

    Imagine you are being gazed at by multiple individuals simultaneously. Is the provoked anxiety a learned social-specific response or related to a pathological disorder known as trypophobia? A previous study revealed that spectral properties of images induced aversive reactions in observers with trypophobia. However, it is not clear whether individual differences such as social anxiety traits are related to the discomfort associated with trypophobic images. To investigate this issue, we conducted two experiments on social anxiety and trypophobia using images of eyes and faces. In Experiment 1, participants completed a social anxiety scale and a trypophobia questionnaire before evaluating the discomfort experienced upon exposure to pictures of eyes. The results showed that social anxiety had a significant indirect effect on the discomfort associated with the eye clusters, and that the effect was mediated by trypophobia. Experiment 2 replicated Experiment 1 using images of human faces. The results showed that, as in Experiment 1, a significant mediation effect of trypophobia was obtained, although the relationship between social anxiety and the discomfort rating was stronger than in Experiment 1. Our findings suggest that both social anxiety and trypophobia contribute to the induction of discomfort when one is gazed at by many people.
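
    For readers unfamiliar with the mediation analysis referred to in this record, the sketch below illustrates a generic product-of-coefficients estimate of an indirect effect. It is not the authors' analysis code; the variable names (x for social anxiety, m for trypophobia, y for discomfort) and the function name are assumptions made for this example.

```python
import numpy as np

def simple_mediation(x, m, y):
    """Product-of-coefficients estimate of the indirect effect x -> m -> y.

    x: social anxiety scores, m: trypophobia scores, y: discomfort ratings
    (all 1-D arrays of equal length). Returns the estimated indirect effect a*b.
    """
    def slope(pred, outcome, covariate=None):
        cols = [np.ones_like(pred), pred] if covariate is None else [np.ones_like(pred), pred, covariate]
        coef, *_ = np.linalg.lstsq(np.column_stack(cols), outcome, rcond=None)
        return coef[1]

    a = slope(x, m)               # path a: anxiety -> trypophobia
    b = slope(m, y, covariate=x)  # path b: trypophobia -> discomfort, controlling for anxiety
    return a * b                  # indirect (mediated) effect
```

    In practice the significance of the indirect effect would be assessed with bootstrapping, which is beyond this sketch.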

  19. Fear of eyes: triadic relation among social anxiety, trypophobia, and discomfort for eye cluster

    Directory of Open Access Journals (Sweden)

    Kengo Chaya

    2016-05-01

    Full Text Available Imagine you are being gazed at by multiple individuals simultaneously. Is the provoked anxiety a learned social-specific response or related to a pathological disorder known as trypophobia? A previous study revealed that spectral properties of images induced aversive reactions in observers with trypophobia. However, it is not clear whether individual differences such as social anxiety traits are related to the discomfort associated with trypophobic images. To investigate this issue, we conducted two experiments on social anxiety and trypophobia using images of eyes and faces. In Experiment 1, participants completed a social anxiety scale and a trypophobia questionnaire before evaluating the discomfort experienced upon exposure to pictures of eyes. The results showed that social anxiety had a significant indirect effect on the discomfort associated with the eye clusters, and that the effect was mediated by trypophobia. Experiment 2 replicated Experiment 1 using images of human faces. The results showed that, as in Experiment 1, a significant mediation effect of trypophobia was obtained, although the relationship between social anxiety and the discomfort rating was stronger than in Experiment 1. Our findings suggest that both social anxiety and trypophobia contribute to the induction of discomfort when one is gazed at by many people.

  20. A gaze-contingent display to study contrast sensitivity under natural viewing conditions

    Science.gov (United States)

    Dorr, Michael; Bex, Peter J.

    2011-03-01

    Contrast sensitivity has been extensively studied over the last decades and there are well-established models of early vision that were derived by presenting the visual system with synthetic stimuli such as sine-wave gratings near threshold contrasts. Natural scenes, however, contain a much wider distribution of orientations, spatial frequencies, and both luminance and contrast values. Furthermore, humans typically move their eyes two to three times per second under natural viewing conditions, but most laboratory experiments require subjects to maintain central fixation. We here describe a gaze-contingent display capable of performing real-time contrast modulations of video in retinal coordinates, thus allowing us to study contrast sensitivity when dynamically viewing dynamic scenes. Our system is based on a Laplacian pyramid for each frame that efficiently represents individual frequency bands. Each output pixel is then computed as a locally weighted sum of pyramid levels to introduce local contrast changes as a function of gaze. Our GPU implementation achieves real-time performance with more than 100 fps on high-resolution video (1920 by 1080 pixels) and a synthesis latency of only 1.5ms. Psychophysical data show that contrast sensitivity is greatly decreased in natural videos and under dynamic viewing conditions. Synthetic stimuli therefore only poorly characterize natural vision.
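
    The synthesis step described above (each output pixel computed as a locally weighted sum of pyramid levels around the gaze point) can be sketched as follows. This is a rough, unoptimized CPU illustration using OpenCV, not the authors' GPU implementation; the Gaussian weighting profile, the radius parameter, the function names, and the single grayscale-frame input are assumptions made for this example.

```python
import cv2
import numpy as np

def laplacian_pyramid(frame, levels=5):
    """Decompose a grayscale frame into band-pass levels plus a low-pass residual."""
    gaussian = [frame.astype(np.float32)]
    for _ in range(levels):
        gaussian.append(cv2.pyrDown(gaussian[-1]))
    bands = []
    for i in range(levels):
        h, w = gaussian[i].shape
        up = cv2.pyrUp(gaussian[i + 1], dstsize=(w, h))
        bands.append(gaussian[i] - up)
    bands.append(gaussian[-1])  # low-pass residual
    return bands

def gaze_contingent_synthesis(bands, gaze_xy, radius=150.0):
    """Reassemble the frame, keeping full contrast near the gaze point and
    attenuating band-pass (contrast) energy with distance from it."""
    full_h, full_w = bands[0].shape
    out = bands[-1]
    for band in reversed(bands[:-1]):
        h, w = band.shape
        out = cv2.pyrUp(out, dstsize=(w, h))
        # distance map in this level's coordinates (gaze given in full-resolution pixels)
        yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
        gx, gy = gaze_xy[0] * w / full_w, gaze_xy[1] * h / full_h
        dist = np.hypot(xx - gx, yy - gy)
        weight = np.exp(-(dist / (radius * w / full_w)) ** 2)
        out = out + band * weight
    return np.clip(out, 0, 255).astype(np.uint8)
```

    With the weight fixed at 1 this reduces to standard Laplacian-pyramid reconstruction; the spatially varying weight is what implements the gaze-contingent local contrast modulation.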

  1. In the eye of the beholder: eye contact increases resistance to persuasion.

    Science.gov (United States)

    Chen, Frances S; Minson, Julia A; Schöne, Maren; Heinrichs, Markus

    2013-11-01

    Popular belief holds that eye contact increases the success of persuasive communication, and prior research suggests that speakers who direct their gaze more toward their listeners are perceived as more persuasive. In contrast, we demonstrate that more eye contact between the listener and speaker during persuasive communication predicts less attitude change in the direction advocated. In Study 1, participants freely watched videos of speakers expressing various views on controversial sociopolitical issues. Greater direct gaze at the speaker's eyes was associated with less attitude change in the direction advocated by the speaker. In Study 2, we instructed participants to look at either the eyes or the mouths of speakers presenting arguments counter to participants' own attitudes. Intentionally maintaining direct eye contact led to less persuasion than did gazing at the mouth. These findings suggest that efforts at increasing eye contact may be counterproductive across a variety of persuasion contexts.

  2. Gaze stability of observers watching Op Art pictures.

    Science.gov (United States)

    Zanker, Johannes M; Doyle, Melanie; Walker, Robin

    2003-01-01

    It has been a matter of some debate why we can experience vivid dynamic illusions when looking at static pictures composed from simple black and white patterns. The impression of illusory motion is particularly strong when viewing some of the works of Op Art artists, such as Bridget Riley's painting Fall. Explanations of the illusory motion have ranged from retinal to cortical mechanisms, and an important role has been attributed to eye movements. To assess the possible contribution of eye movements to the illusory-motion percept we studied the strength of the illusion under different viewing conditions, and analysed the gaze stability of observers viewing the Riley painting and control patterns that do not produce the illusion. Whereas the illusion was reduced, but not abolished, when watching the painting through a pinhole, which reduces the effects of accommodation, it was not perceived in flash afterimages, suggesting an important role for eye movements in generating the illusion for this image. Recordings of eye movements revealed an abundance of small involuntary saccades when looking at the Riley pattern, despite the fact that gaze was kept within the dedicated fixation region. The frequency and particular characteristics of these rapid eye movements can vary considerably between different observers, but, although there was a tendency for gaze stability to deteriorate while viewing a Riley painting, there was no significant difference in saccade frequency between the stimulus and control patterns. Theoretical considerations indicate that such small image displacements can generate patterns of motion signals in a motion-detector network, which may serve as a simple and sufficient, but not necessarily exclusive, explanation for the illusion. Why such image displacements lead to perceptual results with a group of Op Art and similar patterns, but remain invisible for other stimuli, is discussed.

  3. The Pattern of Sexual Interest of Female-to-Male Transsexual Persons With Gender Identity Disorder Does Not Resemble That of Biological Men: An Eye-Tracking Study.

    Science.gov (United States)

    Tsujimura, Akira; Kiuchi, Hiroshi; Soda, Tetsuji; Takezawa, Kentaro; Fukuhara, Shinichiro; Takao, Tetsuya; Sekiguchi, Yuki; Iwasa, Atsushi; Nonomura, Norio; Miyagawa, Yasushi

    2017-09-01

    Very little has been elucidated about sexual interest in female-to-male (FtM) transsexual persons. To investigate the sexual interest of FtM transsexual persons vs that of men using an eye-tracking system. The study included 15 men and 13 FtM transsexual subjects who viewed three sexual videos (clip 1: sexy clothed young woman kissing the region of the male genitals covered by underwear; clip 2: naked actor and actress kissing and touching each other; and clip 3: heterosexual intercourse between a naked actor and actress) in which several regions were designated for eye-gaze analysis in each frame. The designation of each region was not visible to the participants. Visual attention was measured across each designated region according to gaze duration. For clip 1, there was a statistically significant sex difference in the viewing pattern between men and FtM transsexual subjects. Longest gaze time was for the eyes of the actress in men, whereas it was for non-human regions in FtM transsexual subjects. For clip 2, there also was a statistically significant sex difference. Longest gaze time was for the face of the actress in men, whereas it was for non-human regions in FtM transsexual subjects, and there was a significant difference between regions with longest gaze time. The most apparent difference was in the gaze time for the body of the actor: the percentage of time spent gazing at the body of the actor was 8.35% in FtM transsexual subjects, whereas it was only 0.03% in men. For clip 3, there were no statistically significant differences in viewing patterns between men and FtM transsexual subjects, although longest gaze time was for the face of the actress in men, whereas it was for non-human regions in FtM transsexual subjects. We suggest that the characteristics of sexual interest of FtM transsexual persons are not the same as those of biological men. Tsujimura A, Kiuchi H, Soda T, et al. The Pattern of Sexual Interest of Female-to-Male Transsexual Persons

  4. Clinical Approach to Supranuclear Brainstem Saccadic Gaze Palsies

    Directory of Open Access Journals (Sweden)

    Alexandra Lloyd-Smith Sequeira

    2017-08-01

    Full Text Available Failure of brainstem supranuclear centers for saccadic eye movements results in the clinical presence of a brainstem-mediated supranuclear saccadic gaze palsy (SGP), which is manifested as slowing of saccades with or without range-of-motion limitation of eye movements and as loss of quick phases of optokinetic nystagmus. Limitation in the range of motion of eye movements is typically worse with saccades than with smooth pursuit and is overcome with vestibulo-ocular reflexive eye movements. The differential diagnosis of SGPs is broad, although acute-onset SGP is most often from brainstem infarction and chronic vertical SGP is most commonly caused by the neurodegenerative condition progressive supranuclear palsy. In this review, we discuss the anatomy and physiology of the brainstem saccade-generating network; we discuss the clinical features of SGPs, with an emphasis on insights from quantitative ocular motor recordings; and we consider the broad differential diagnosis of SGPs.

  5. The Neuroergonomics of Aircraft Cockpits: The Four Stages of Eye-Tracking Integration to Enhance Flight Safety

    Directory of Open Access Journals (Sweden)

    Vsevolod Peysakhovich

    2018-02-01

    Full Text Available Commercial aviation is currently one of the safest modes of transportation; however, human error is still one major contributing cause of aeronautical accidents and incidents. One promising avenue to further enhance flight safety is Neuroergonomics, an approach at the intersection of neuroscience, cognitive engineering and human factors, which aims to create better human–system interaction. Eye-tracking technology allows users to “monitor the monitoring” by providing insights into both pilots’ attentional distribution and underlying decisional processes. In this position paper, we identify and define a framework of four stages of step-by-step integration of eye-tracking systems in modern cockpits. Stage I concerns Pilot Training and Flight Performance Analysis on-ground; stage II proposes On-board Gaze Recordings as extra data for the “black box” recorders; stage III describes Gaze-Based Flight Deck Adaptation including warning and alerting systems, and, eventually, stage IV prophesies Gaze-Based Aircraft Adaptation including authority taking by the aircraft. We illustrate the potential of these four steps with a description of incidents or accidents that we could certainly have avoided thanks to eye-tracking. Estimated milestones for the integration of each stage are also proposed together with a list of some implementation limitations. We believe that the research institutions and industrial actors of the domain will all benefit from the integration of the framework of the eye-tracking systems into cockpits.

  6. Gaze Estimation Method Using Analysis of Electrooculogram Signals and Kinect Sensor

    Directory of Open Access Journals (Sweden)

    Keiko Sakurai

    2017-01-01

    Full Text Available A gaze estimation system is one of the communication methods for severely disabled people who cannot perform gestures and speech. We previously developed an eye tracking method using a compact and light electrooculogram (EOG signal, but its accuracy is not very high. In the present study, we conducted experiments to investigate the EOG component strongly correlated with the change of eye movements. The experiments in this study are of two types: experiments to see objects only by eye movements and experiments to see objects by face and eye movements. The experimental results show the possibility of an eye tracking method using EOG signals and a Kinect sensor.

  7. Analysis of the speckle properties in a laser projection system based on a human eye model.

    Science.gov (United States)

    Cui, Zhe; Wang, Anting; Ma, Qianli; Ming, Hai

    2014-03-01

    In this paper, the properties of the speckle that is observed by humans in laser projection systems are theoretically analyzed. The speckle pattern on the fovea of the human retina is numerically simulated by introducing a chromatic human eye model. The results show that the speckle contrast experienced by humans is affected by the light intensity of the projected images and the wavelength of the laser source when considering the paracentral vision. Furthermore, the image quality is also affected by these two parameters. We believe that these results are useful for evaluating the speckle noise in laser projection systems.

  8. The effects of ultraviolet-A radiation on visual evoked potentials in the young human eye

    International Nuclear Information System (INIS)

    Sanford, B.E.; Beacham, S.; Hanifin, J.P.; Hannon, P.; Streletz, L.; Sliney, D.; Brainard, G.C.

    1996-01-01

    A recent study from this laboratory using visual evoked potentials (VEPs) demonstrated that children's eyes are capable of detecting ultraviolet radiation. The aim of this study was to compare dose-response relationships in two age groups, 6-10 years (n=10) and 20-25 years (n=10). Under photopic viewing conditions (550 lux), exposures of monochromatic UV-A (339 nm) and visible radiation (502 nm) were correlated to VEPs. The results demonstrate that monochromatic UV-A can elicit age and dose dependent responses in the human visual system, suggesting that the eyes of children are more responsive to UV stimuli than the eyes of young adults. (au) 17 refs

  9. Temperature elevation in the eye of anatomically based human head models for plane-wave exposures

    International Nuclear Information System (INIS)

    Hirata, A; Watanabe, S; Fujiwara, O; Kojima, M; Sasaki, K; Shiozawa, T

    2007-01-01

    This study investigated the temperature elevation in the eye of anatomically based human head models for plane-wave exposures. The finite-difference time-domain method is used for analyzing electromagnetic absorption and temperature elevation. The eyes in the anatomic models have average dimensions and weight. Computational results show that the ratio of maximum temperature elevation in the lens to the eye-average SAR (named the 'heating factor for the lens') is almost uniform (0.112-0.147 °C kg W⁻¹) in the frequency region below 3 GHz. Above 3 GHz, this ratio increases gradually with an increase of frequency, which is attributed to the penetration depth of an electromagnetic wave. Particular attention is paid to the difference in the heating factor for the lens between this study and earlier works. Considering causes clarified in this study, compensated heating factors in all these studies are found to be in good agreement.
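
    The heating factor quoted above is simply a ratio; written out (with symbols chosen here for illustration, not taken from the paper):

```latex
\[
  \mathrm{HF}_{\mathrm{lens}}
  \;=\;
  \frac{\Delta T_{\mathrm{lens}}^{\max}}{\mathrm{SAR}_{\mathrm{eye}}}
  \qquad
  \left[\,^{\circ}\mathrm{C\;kg\;W^{-1}}\,\right],
\]
```

    where $\Delta T_{\mathrm{lens}}^{\max}$ is the maximum temperature elevation in the lens and $\mathrm{SAR}_{\mathrm{eye}}$ is the eye-averaged specific absorption rate (W/kg).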

  10. Differences in Sequential Eye Movement Behavior between Taiwanese and American Viewers

    Directory of Open Access Journals (Sweden)

    Yen Ju eLee

    2016-05-01

    Full Text Available Knowledge of how information is sought in the visual world is useful for predicting and simulating human behavior. Taiwanese participants and American participants were instructed to judge the facial expression of a focal face that was flanked horizontally by other faces while their eye movements were monitored. The Taiwanese participants distributed their eye fixations more widely than American participants, started to look away from the focal face earlier than American participants, and spent a higher percentage of time looking at the flanking faces. Eye movement transition matrices also provided evidence that Taiwanese participants continually and systematically shifted gaze between focal and flanking faces. Eye movement patterns were less systematic and less prevalent in American participants. This suggests that the two cultures utilized different attention allocation strategies. The results highlight the importance of determining sequential eye movement statistics in cross-cultural research on the utilization of visual context.
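
    The eye movement transition matrices mentioned above can be computed from a labeled fixation sequence as sketched below; the region coding (0 = focal face, 1 and 2 = flanking faces) and the function name are illustrative, not taken from the study.

```python
import numpy as np

def transition_matrix(fixated_regions, n_regions):
    """Count transitions between successively fixated regions and row-normalize,
    giving the estimated probability of moving from region i to region j."""
    counts = np.zeros((n_regions, n_regions), dtype=float)
    for src, dst in zip(fixated_regions[:-1], fixated_regions[1:]):
        counts[src, dst] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Example: a viewer who repeatedly shifts gaze between the focal face (0)
# and the flanking faces (1 and 2)
print(transition_matrix([0, 1, 0, 2, 0, 1, 0], n_regions=3))
```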

  11. Using Genetic Algorithm for Eye Detection and Tracking in Video Sequence

    Directory of Open Access Journals (Sweden)

    Takuya Akashi

    2007-04-01

    Full Text Available We propose a high-speed, size- and orientation-invariant eye tracking method that can acquire numerical parameters representing the size and orientation of the eye. In this paper, we address the high tolerance to human head movement and the real-time processing that are needed for many applications, such as eye gaze tracking. The generality of the method is also important. We use template matching with a genetic algorithm in order to overcome these problems, and propose a high-speed, accurate tracking scheme using Evolutionary Video Processing for eye detection and tracking. Although a genetic algorithm is usually unsuitable for real-time processing, we achieved real-time performance. The generality of the proposed method is provided by the artificial iris template used. In our simulations, eye tracking accuracy is 97.9%, with an average processing time of 28 milliseconds per frame.
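
    A minimal sketch of genetic-algorithm-driven template matching in the spirit of this record is shown below. It is not the authors' Evolutionary Video Processing code: the chromosome here encodes only position and scale (no orientation), fitness is a negative sum-of-squared-differences rather than a purpose-built measure, and all parameter values and names are illustrative.

```python
import numpy as np
import cv2

def fitness(image, template, genes):
    """Similarity (negative mean squared difference) between the scaled template
    and the image patch whose top-left corner is (x, y)."""
    x, y, scale = int(genes[0]), int(genes[1]), genes[2]
    h, w = int(template.shape[0] * scale), int(template.shape[1] * scale)
    if x < 0 or y < 0 or h < 4 or w < 4 or y + h > image.shape[0] or x + w > image.shape[1]:
        return -np.inf
    patch = image[y:y + h, x:x + w].astype(np.float32)
    tmpl = cv2.resize(template, (w, h)).astype(np.float32)
    return -float(np.mean((patch - tmpl) ** 2))

def ga_eye_search(image, template, pop_size=60, generations=40, p_mutate=0.2, seed=0):
    """Evolve (x, y, scale) candidates towards the best-matching eye location
    in a grayscale frame."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    pop = np.column_stack([rng.uniform(0, w - 1, pop_size),
                           rng.uniform(0, h - 1, pop_size),
                           rng.uniform(0.5, 2.0, pop_size)])
    for _ in range(generations):
        scores = np.array([fitness(image, template, g) for g in pop])
        parents = pop[np.argsort(scores)[::-1][:pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(3) < 0.5, a, b)           # uniform crossover
            if rng.random() < p_mutate:                           # Gaussian mutation
                child = child + rng.normal(0.0, [0.05 * w, 0.05 * h, 0.05])
            children.append(child)
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(image, template, g) for g in pop])]
```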

  12. Objective measurement of intraocular forward light scatter using Hartmann-Shack spot patterns from clinical aberrometers. Model-eye and human-eye study.

    Science.gov (United States)

    Cerviño, Alejandro; Bansal, Dheeraj; Hosking, Sarah L; Montés-Micó, Robert

    2008-07-01

    To apply software-based image-analysis tools to objectively determine intraocular scatter from clinically derived Hartmann-Shack patterns. Aston Academy of Life Sciences, Aston University, Birmingham, United Kingdom, and Department of Optics, University of Valencia, Valencia, Spain. Purpose-designed image-analysis software was used to quantify scatter from centroid patterns obtained using a clinical Hartmann-Shack analyzer (WASCA, Zeiss/Meditec). Three scatter values, defined as the maximum standard deviation within a lenslet across all lenslets in the pattern, were obtained in 6 model eyes and 10 human eyes. In the model-eye sample, patterns were obtained in 4 sessions: 2 without realigning between measurements, 1 with realignment, and 1 with an angular shift of 6 degrees from the instrument axis. Three measurements were made in the human eyes with the C-Quant straylight meter (Oculus) to obtain psychometric and objective measures of retinal straylight. Analysis of variance, intraclass correlation coefficients, the coefficient of repeatability (CoR), and correlations were used to determine intrasession and intersession repeatability and the relationship between measures. No significant differences were found between the sessions in the model eye (P=.234). The mean CoR was less than 10% in all model- and human-eye sessions. After incomplete patterns were removed, good correlation was achieved between psychometric and objective scatter measurements despite the small sample size (n=6; r=-0.831; P=.040). The methodology was repeatable in model and human eyes, robust to realignment and misalignment, and sensitive. Clinical application would benefit from effective use of the sensor's dynamic range.
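
    One plausible reading of the scatter index described above (the maximum, over all lenslets, of the intensity standard deviation inside a lenslet sub-image) is sketched below. The square sub-aperture tiling and the function name are assumptions made for this example, since the paper's purpose-designed software is not available.

```python
import numpy as np

def scatter_value(spot_image, lenslet_px):
    """Maximum per-lenslet standard deviation of pixel intensities in a
    Hartmann-Shack spot pattern (2-D array), tiling the image into square
    sub-apertures of lenslet_px x lenslet_px pixels."""
    h, w = spot_image.shape
    stds = []
    for top in range(0, h - lenslet_px + 1, lenslet_px):
        for left in range(0, w - lenslet_px + 1, lenslet_px):
            cell = spot_image[top:top + lenslet_px, left:left + lenslet_px]
            stds.append(float(cell.astype(np.float64).std()))
    return max(stds)
```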

  13. Modeling the influence of LASIK surgery on optical properties of the human eye

    Science.gov (United States)

    Szul-Pietrzak, Elżbieta; Hachoł, Andrzej; Cieślak, Krzysztof; Drożdż, Ryszard; Podbielska, Halina

    2011-11-01

    The aim was to model the influence of LASIK surgery on the optical parameters of the human eye and to ascertain which factors besides the central corneal radius of curvature and central thickness play the major role in postsurgical refractive change. Ten patients were included in the study. Pre- and postsurgical measurements included standard refraction, anterior corneal curvature and pachymetry. The optical model used in the analysis was based on the Le Grand and El Hage schematic eye, modified by the measured individual parameters of corneal geometry. A substantial difference between eye refractive error measured after LASIK and estimated from the eye model was observed. In three patients, full correction of the refractive error was achieved. However, analysis of the visual quality in terms of spot diagrams and optical transfer functions of the eye optical system revealed some differences in these measurements. This suggests that other factors besides corneal geometry may play a major role in postsurgical refraction. In this paper we investigated whether the biomechanical properties of the eyeball and changes in intraocular pressure could account for the observed discrepancies.

  14. Design of an in-vivo microscope to characterize cataracts in human eyes

    Science.gov (United States)

    Feng, Chen; Ahmad, Anees

    1996-11-01

    The design of a compact contact microscope that can be used in-vivo to study cataracts in human eyes is presented. This microscope can evaluate changes in the optical density within the eye lens itself, enabling an examiner to ascertain the progression of a cataractous change, or at least the optical changes associated with cataractous development. The microscope has a variable focal length so it can be focused at any depth through the entire thickness of the eye lens. A separate small objective lens is spring loaded against the cornea (like a tonometer tip) so that the natural eye movements can occur safely during the examination. The distance between the objective lens and the rest of the optics is variable to accommodate the movements of the eye due to pulse or breathing without affecting the image quality of the instrument. The layer-by-layer images can be captured on a CCD camera and stored in the computer. The reconstruction software can quantitatively display the characteristics of the cataracts, such as the location, size, and density. The optical design and performance results for the microscope are presented. The optomechanical design features of the microscope are also discussed.

  15. Analysis of toxic and heavy metals in cataract extraction from human eyes

    International Nuclear Information System (INIS)

    Tanvir, R.; Qureshi, S.A.; Ahmed, R.

    1999-01-01

    Surma and many other substances frequently used for eye treatment and cosmetic purposes may contain large quantities of toxic and heavy metals, particularly lead. Toxic metals may also enter the body through the food chain and through heavy traffic and contaminated dust in the air of overcrowded cities. As an exposed part of the human body, the eyes have the greatest chance of coming into contact with a polluted atmosphere. This study was undertaken to find the role of toxic elements in the formation of cataract in eyes. Samples of eye lenses were collected and carefully digested in 3 ml of conc. HClO₄ and 1 ml of conc. HNO₃. Analysis of Zn, Cd, Pb, and Cu was then carried out in 0.02 M HClO₄ using differential pulse anodic stripping voltammetry. Levels of Zn, Cd, Pb, and Cu in eye lenses range from 324-5746 μg/g, 3-240 μg/g, 25-120 μg/g, and 23-485 μg/g, respectively. The chemical composition of ocular fluid indicates that Pb, Cd, Cu, and Zn are not normally present in it. In addition to other factors, the role of heavy and toxic metals in the formation of cataract cannot be overlooked. Therefore, the use of surma and other cosmetics should be discouraged. (author)

  16. Trait Anxiety Impacts the Perceived Gaze Direction of Fearful But Not Angry Faces

    Directory of Open Access Journals (Sweden)

    Zhonghua Hu

    2017-07-01

    Full Text Available Facial expression and gaze direction play an important role in social communication. Previous research has demonstrated that the perception of anger is enhanced by direct gaze, whereas it is unclear whether the perception of fear is enhanced by averted gaze. In addition, previous research has shown that anxiety affects the processing of facial expression and gaze direction, but has not measured or controlled for depression. As a result, firm conclusions cannot be made regarding the impact of individual differences in anxiety and depression on perceptions of face expressions and gaze direction. The current study attempted to reexamine the effect of anxiety level on the processing of facial expressions and gaze direction by matching participants on depression scores. A reliable psychophysical index of the range of eye gaze angles judged as being directed at oneself [the cone of direct gaze (CoDG)] was used as the dependent variable in this study. Participants were stratified into high/low trait anxiety groups and asked to judge the gaze of angry, fearful, and neutral faces across a range of gaze directions. The results showed that: (1) the perception of gaze direction was influenced by facial expression and this was modulated by trait anxiety. For the high trait anxiety group, the CoDG for angry expressions was wider than for fearful and neutral expressions, and no significant difference emerged between fearful and neutral expressions; for the low trait anxiety group, the CoDG for both angry and fearful expressions was wider than for neutral, and no significant difference emerged between angry and fearful expressions. (2) Trait anxiety modulated the perception of gaze direction only in the fearful condition, such that the fearful CoDG for the high trait anxiety group was narrower than for the low trait anxiety group. This demonstrated that anxiety distinctly affected gaze perception in expressions that convey threat (angry, fearful), such that a high trait anxiety

  17. The cortical eye proprioceptive signal modulates neural activity in higher-order visual cortex as predicted by the variation in visual sensitivity

    DEFF Research Database (Denmark)

    Balslev, Daniela; Siebner, Hartwig R; Paulson, Olaf B

    2012-01-01

    Whereas the links between eye movements and the shifts in visual attention are well established, less is known about how eye position affects the prioritization of visual space. It was recently observed that visual sensitivity varies with the direction of gaze and the level of excitability in the eye proprioceptive representation in human left somatosensory cortex (S1(EYE)), so that after 1 Hz repetitive transcranial magnetic stimulation (rTMS) over S1(EYE), targets presented nearer the center of the orbit are detected more accurately. Here we used whole-brain functional magnetic resonance ... target when the right eye was rotated leftwards as compared with when it was rotated rightwards. This effect was larger after S1(EYE)-rTMS than after rTMS of a control area in the motor cortex. The neural response to retinally identical stimuli in this area could be predicted from the changes in visual ...

  18. Edge detection of iris of the eye for human biometric identification system

    Directory of Open Access Journals (Sweden)

    Kateryna O. Tryfonova

    2015-03-01

    Full Text Available The method of human biometric identification by the iris of the eye is considered one of the most accurate and reliable methods of identification. The aim of this research is to solve the problem of edge detection in digital images of the human iris, so that a human biometric identification system can be implemented on a mobile device. To achieve this aim, the Canny edge detection algorithm is considered. It consists of the following steps: smoothing, finding gradients, non-maximum suppression, and double thresholding with hysteresis. The software implementation of the Canny algorithm is carried out for the Android mobile platform using the high-level programming language Java.
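
    The Canny pipeline named above (smoothing, gradient computation, non-maximum suppression, double thresholding with hysteresis) is available off the shelf; the snippet below shows it in Python with OpenCV rather than the paper's Java/Android implementation, and the blur kernel and thresholds are illustrative values.

```python
import cv2

def iris_edges(eye_image_path, low_threshold=50, high_threshold=150):
    """Detect edges in an eye image: Gaussian smoothing, then OpenCV's Canny,
    which internally performs gradient computation, non-maximum suppression,
    and double thresholding with hysteresis."""
    gray = cv2.imread(eye_image_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)
    return cv2.Canny(blurred, low_threshold, high_threshold)
```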

  19. Creating Gaze Annotations in Head Mounted Displays

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Qvarfordt, Pernilla

    2015-01-01

    To facilitate distributed communication in mobile settings, we developed GazeNote for creating and sharing gaze annotations in head mounted displays (HMDs). With gaze annotations it is possible to point out objects of interest within an image and add a verbal description. To create an annotation

  20. A Gaze Interactive Textual Smartwatch Interface

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Biermann, Florian; Askø Madsen, Janus

    2015-01-01

    Mobile gaze interaction is challenged by inherent motor noise. We examined the gaze tracking accuracy and precision of twelve subjects wearing a gaze tracker on their wrist while standing and walking. Results suggest that it will be possible to detect whether people are glancing the watch, but no...

  1. TabletGaze: Unconstrained Appearance-based Gaze Estimation in Mobile Tablets

    OpenAIRE

    Huang, Qiong; Veeraraghavan, Ashok; Sabharwal, Ashutosh

    2015-01-01

    We study gaze estimation on tablets; our key design goal is uncalibrated gaze estimation using the front-facing camera during natural use of tablets, where the posture and method of holding the tablet are not constrained. We collected the first large unconstrained gaze dataset of tablet users, labeled the Rice TabletGaze dataset. The dataset consists of 51 subjects, each with 4 different postures and 35 gaze locations. Subjects vary in race, gender and in their need for prescription glasses, all o...

  2. Thermal behavior of human eye in relation with change in blood perfusion, porosity, evaporation and ambient temperature.

    Science.gov (United States)

    Rafiq, Aasma; Khanday, M A

    2016-12-01

    Extreme environmental and physiological conditions present challenges for thermal processes in body tissues, including the multi-layered human eye. A mathematical model has been formulated in this direction to study the thermal behavior of the human eye in relation to changes in blood perfusion, porosity, evaporation and environmental temperature. In this study, a comprehensive thermal analysis has been performed on the multi-layered eye using Pennes' bio-heat equation with appropriate boundary and interface conditions. The variational finite element method and MATLAB software were used for the solution and for simulation of the results. The thermoregulatory effect of blood perfusion rate, porosity, ambient temperature and evaporation at various regions of the human eye is illustrated mathematically and graphically. The main applications of this model lie in the medical sciences, for example when performing laser therapy and other thermoregulatory investigations on the human eye. Copyright © 2016 Elsevier Ltd. All rights reserved.
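
    The Pennes bio-heat equation underlying the model can be written as follows (standard symbols, not taken verbatim from the paper):

```latex
\[
  \rho c \,\frac{\partial T}{\partial t}
  \;=\;
  \nabla \cdot \left( k \, \nabla T \right)
  \;+\; \omega_b \rho_b c_b \left( T_a - T \right)
  \;+\; Q_{\mathrm{met}} \;+\; Q_{\mathrm{ext}},
\]
```

    where $\rho$, $c$ and $k$ are the tissue density, specific heat and thermal conductivity, $\omega_b$, $\rho_b$ and $c_b$ are the blood perfusion rate, blood density and blood specific heat, $T_a$ is the arterial blood temperature, and $Q_{\mathrm{met}}$ and $Q_{\mathrm{ext}}$ are the metabolic and external heat sources; the perfusion effects studied in this record enter through $\omega_b$.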

  3. Increased Eye Contact during Conversation Compared to Play in Children with Autism

    Science.gov (United States)

    Jones, Rebecca M.; Southerland, Audrey; Hamo, Amarelle; Carberry, Caroline; Bridges, Chanel; Nay, Sarah; Stubbs, Elizabeth; Komarow, Emily; Washington, Clay; Rehg, James M.; Lord, Catherine; Rozga, Agata

    2017-01-01

    Children with autism have atypical gaze behavior but it is unknown whether gaze differs during distinct types of reciprocal interactions. Typically developing children (N = 20) and children with autism (N = 20) (4-13 years) made similar amounts of eye contact with an examiner during a conversation. Surprisingly, there was minimal eye contact…

  4. Gaze Embeddings for Zero-Shot Image Classification

    NARCIS (Netherlands)

    Karessli, N.; Akata, Z.; Schiele, B.; Bulling, A.

    2017-01-01

    Zero-shot image classification using auxiliary information, such as attributes describing discriminative object properties, requires time-consuming annotation by domain experts. We instead propose a method that relies on human gaze as auxiliary information, exploiting that even non-expert users have

  5. Effects of aqueous humor hydrodynamics on human eye heat transfer under external heat sources.

    Science.gov (United States)

    Tiang, Kor L; Ooi, Ean H

    2016-08-01

    The majority of the eye models developed in the late 90s and early 00s consider only heat conduction inside the eye. This assumption is not entirely correct, since the anterior and posterior chambers are filled with aqueous humor (AH) that is constantly in motion due to thermally-induced buoyancy. In this paper, a three-dimensional model of the human eye is developed to investigate the effects AH hydrodynamics have on the human eye temperature under exposure to external heat sources. If the effects of AH flow are negligible, then future models can be developed without taking them into account, thus simplifying the modeling process. Two types of external thermal loads are considered: volumetric and surface irradiation. Results showed that heat convection due to AH flow contributes nearly 95% of the total heat flow inside the anterior chamber. Moreover, the circulation inside the anterior chamber can cause an upward shift of the location of the hotspot. This can have significant consequences for our understanding of heat-induced cataractogenesis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  6. From time series analysis to a biomechanical multibody model of the human eye

    International Nuclear Information System (INIS)

    Pascolo, P.; Carniel, R.

    2009-01-01

    A mechanical model of the human eye aimed at estimating the level of muscular activation is presented, and the applicability of the model in the biomedical field is discussed. Human eye movements studied in the laboratory are compared with those produced by a virtual eye described in kinematic terms and subject to the dynamics of six actuators, corresponding to the muscle systems devoted to the control of eye motion. The definition of an error function between the experimental and the numerical response, and the application of a suitable law that links activation and muscular force, are at the base of the proposed methodology. The aim is the definition of a simple conceptual tool that could help the specialist in the diagnosis of potential physiological disturbances of saccadic and nystagmic movements, and that can also be extended in a second phase when more sophisticated data become available. The work is part of a collaboration between the Functional Mechanics Laboratory of the University and the Neurophysiopathology Laboratory of the 'S. Maria della Misericordia' Hospital in Udine, Italy.

  7. Simulated human eye retina adaptive optics imaging system based on a liquid crystal on silicon device

    International Nuclear Information System (INIS)

    Jiang Baoguang; Cao Zhaoliang; Mu Quanquan; Hu Lifa; Li Chao; Xuan Li

    2008-01-01

    In order to obtain a clear image of the retina of a model eye, an adaptive optics system used to correct the wave-front error is introduced in this paper. The spatial light modulator used here is a liquid crystal on silicon device instead of a conventional deformable mirror. A paper with carbon granules is used to simulate the retina of the human eye. The pupil size of the model eye is adjustable (3-7 mm). A Shack-Hartmann wave-front sensor is used to detect the wave-front aberration. With this construction, a peak-to-valley value of 0.086 λ is achieved, where λ is the wavelength. The modulation transfer functions before and after correction are compared, and the resolution of this system after correction (691p/m) is very close to the diffraction-limited resolution. The carbon granules on the white paper, which have a size of 4.7 μm, are seen clearly. The size of a retinal cell is between 4 and 10 μm, so this system is capable of imaging the retina of the human eye. (classical areas of phenomenology)

  8. Intraoperative length and tension curves of human eye muscles. Including stiffness in passive horizontal eye movement in awake volunteers

    NARCIS (Netherlands)

    H.J. Simonsz (Huib); G.H. Kolling (Gerold); H. Kaufmann (Herbert); B. van Dijk (Bob)

    1986-01-01

    textabstractIntraoperative continuous-registration length and tension curves of attached and detached eye muscles were made in 18 strabismic patients under general anesthesia. For relaxed eye muscles, we found an exponential relation between length and tension. An increased stiffness was quantified

  9. Eye Typing using Markov and Active Appearance Models

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Hansen, John Paulin; Nielsen, Mads

    2002-01-01

    We propose a non-intrusive eye tracking system intended for everyday gaze typing using web cameras. We argue that high precision in gaze tracking is not needed for on-screen typing due to natural language redundancy. This facilitates the use of low-cost video components for advanced

  10. DIAGNOSIS OF MYASTHENIA GRAVIS USING FUZZY GAZE TRACKING SOFTWARE

    Directory of Open Access Journals (Sweden)

    Javad Rasti

    2015-04-01

    Full Text Available Myasthenia Gravis (MG is an autoimmune disorder, which may lead to paralysis and even death if not treated on time. One of its primary symptoms is severe muscular weakness, initially arising in the eye muscles. Testing the mobility of the eyeball can help in early detection of MG. In this study, software was designed to analyze the ability of the eye muscles to focus in various directions, thus estimating the MG risk. Progressive weakness in gazing at the directions prompted by the software can reveal abnormal fatigue of the eye muscles, which is an alert sign for MG. To assess the user’s ability to keep gazing at a specified direction, a fuzzy algorithm was applied to images of the user’s eyes to determine the position of the iris in relation to the sclera. The results of the tests performed on 18 healthy volunteers and 18 volunteers in early stages of MG confirmed the validity of the suggested software.

  11. Influence of Gaze Direction on Face Recognition: A Sensitive Effect

    Directory of Open Access Journals (Sweden)

    Noémy Daury

    2011-08-01

    Full Text Available This study was aimed at determining the conditions in which eye contact may improve recognition memory for faces. Different stimuli and procedures were tested in four experiments. The effect of gaze direction on memory was found when a simple "yes-no" recognition task was used, but not when the recognition task was more complex (e.g., including "Remember-Know" judgements, cf. Experiment 2, or confidence ratings, cf. Experiment 4). Moreover, even when a "yes-no" recognition paradigm was used, the effect occurred with one series of stimuli (cf. Experiment 1) but not with another one (cf. Experiment 3). The difficulty of producing the positive effect of gaze direction on memory is discussed.

  12. Face and gaze perception in borderline personality disorder: An electrical neuroimaging study.

    Science.gov (United States)

    Berchio, Cristina; Piguet, Camille; Gentsch, Kornelia; Küng, Anne-Lise; Rihs, Tonia A; Hasler, Roland; Aubry, Jean-Michel; Dayer, Alexandre; Michel, Christoph M; Perroud, Nader

    2017-11-30

    Humans are sensitive to gaze direction from early life, and gaze has social and affective value. Borderline personality disorder (BPD) is a clinical condition characterized by emotional dysregulation and enhanced sensitivity to affective and social cues. In this study we investigated the temporal-spatial dynamics of spontaneous gaze processing in BPD. We used a 2-back working-memory task in which neutral faces with direct and averted gaze were presented; gaze was used as an emotional modulator of event-related potentials to faces. High-density EEG data were acquired in 19 females with BPD and 19 healthy women, and analyzed with a spatio-temporal microstates analysis approach. Independently of gaze direction, BPD patients showed altered N170 and P200 topographies for neutral faces. Source localization revealed that the anterior cingulate and other prefrontal regions were abnormally activated during the N170 component related to face encoding, while middle temporal deactivations were observed during the P200 component. Post-task affective ratings showed that BPD patients had difficulty disambiguating neutral gaze. This study provides first evidence for an early neural bias toward neutral faces in BPD independent of gaze direction, and also suggests the importance of considering basic aspects of social cognition in identifying biological risk factors for BPD. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  13. Can Gaze Avoidance Explain Why Individuals with Asperger's Syndrome Can't Recognise Emotions from Facial Expressions?

    Science.gov (United States)

    Sawyer, Alyssa C. P.; Williamson, Paul; Young, Robyn L.

    2012-01-01

    Research has shown that individuals with Autism Spectrum Disorders (ASD) have difficulties recognising emotions from facial expressions. Since eye contact is important for accurate emotion recognition, and individuals with ASD tend to avoid eye contact, this tendency for gaze aversion has been proposed as an explanation for the emotion recognition…

  14. PyGaze: an open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments.

    Science.gov (United States)

    Dalmaijer, Edwin S; Mathôt, Sebastiaan; Van der Stigchel, Stefan

    2014-12-01

    The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility. PyGaze can be used for visual and auditory stimulus presentation; for response collection via keyboard, mouse, joystick, and other external hardware; and for the online detection of eye movements using a custom algorithm. A wide range of eyetrackers of different brands (EyeLink, SMI, and Tobii systems) are supported. The novelty of PyGaze lies in providing an easy-to-use layer on top of the many different software libraries that are required for implementing eyetracking experiments. Essentially, PyGaze is a software bridge for eyetracking research.
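
    A minimal usage sketch of the toolbox is shown below, based on its documented Display, Screen and EyeTracker classes; the tracker brand configured in constants.py and the two-second recording window are assumptions made for this example.

```python
from pygaze.display import Display
from pygaze.screen import Screen
from pygaze.eyetracker import EyeTracker
from pygaze import libtime

disp = Display()                 # opens the experiment window (backend set in constants.py)
scr = Screen()
tracker = EyeTracker(disp)       # tracker brand chosen via TRACKERTYPE in constants.py

tracker.calibrate()              # run the tracker's calibration routine
tracker.start_recording()

scr.draw_fixation(fixtype='cross')
disp.fill(scr)
disp.show()
libtime.pause(2000)              # show the fixation cross for two seconds

x, y = tracker.sample()          # most recent gaze position in display coordinates
print('gaze sample:', x, y)

tracker.stop_recording()
tracker.close()
disp.close()
```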

  15. What interests them in the pictures?--differences in eye-tracking between rhesus monkeys and humans.

    Science.gov (United States)

    Hu, Ying-Zhou; Jiang, Hui-Hui; Liu, Ci-Rong; Wang, Jian-Hong; Yu, Cheng-Yang; Carlson, Synnöve; Yang, Shang-Chuan; Saarinen, Veli-Matti; Rizak, Joshua D; Tian, Xiao-Guang; Tan, Hen; Chen, Zhu-Yue; Ma, Yuan-Ye; Hu, Xin-Tian

    2013-10-01

    Studies estimating eye movements have demonstrated that non-human primates have fixation patterns similar to humans at the first sight of a picture. In the current study, three sets of pictures containing monkeys, humans or both were presented to rhesus monkeys and humans. The eye movements on these pictures by the two species were recorded using a Tobii eye-tracking system. We found that monkeys paid more attention to the head and body in pictures containing monkeys, whereas both monkeys and humans paid more attention to the head in pictures containing humans. The humans always concentrated on the eyes and head in all the pictures, indicating the social role of facial cues in society. Although humans paid more attention to the hands than monkeys, both monkeys and humans were interested in the hands and what was being done with them in the pictures. This may suggest the importance and necessity of hands for survival. Finally, monkeys scored lower in eye-tracking when fixating on the pictures, as if they were less interested in looking at the screen than humans. The locations of fixation in monkeys may provide insight into the role of eye movements in an evolutionary context.

  16. Effect of human milk as a treatment for dry eye syndrome in a mouse model.

    Science.gov (United States)

    Diego, Jose L; Bidikov, Luke; Pedler, Michelle G; Kennedy, Jeffrey B; Quiroz-Mercado, Hugo; Gregory, Darren G; Petrash, J Mark; McCourt, Emily A

    Dry eye syndrome (DES) affects millions of people worldwide. Homeopathic remedies to treat a wide variety of ocular diseases have previously been documented in the literature, but little systematic work has been performed to validate the remedies' efficacy using accepted laboratory models of disease. The purpose of this study was to evaluate the efficacy of human milk and nopal cactus (prickly pear), two widely used homeopathic remedies, as agents to reduce pathological markers of DES. The previously described benzalkonium chloride (BAK) dry eye mouse model was used to study the efficacy of human milk and nopal cactus (prickly pear). BAK (0.2%) was applied to the mouse ocular surface twice daily to induce dry eye pathology. Fluorescein staining was used to verify that the animals had characteristic signs of DES. After induction of DES, the animals were treated with human milk (whole and fat-reduced), nopal, nopal extract derivatives, or cyclosporine four times daily for 7 days. Punctate staining and preservation of corneal epithelial thickness, measured histologically at the end of treatment, were used as indices of therapeutic efficacy. Treatment with BAK reduced the mean corneal epithelial thickness from 36.77±0.64 μm in the control mice to 21.29±3.2 μm. Reduction in corneal epithelial thickness was largely prevented by administration of whole milk (33.2±2.5 μm) or fat-reduced milk (36.1±1.58 μm), outcomes that were similar to treatment with cyclosporine (38.52±2.47 μm), a standard in current dry eye therapy. In contrast, crude or filtered nopal extracts were ineffective at preventing BAK-induced loss of corneal epithelial thickness (24.76±1.78 μm and 27.99±2.75 μm, respectively), as were solvents used in the extraction of nopal materials (26.53±1.46 μm for ethyl acetate, 21.59±5.87 μm for methanol). Epithelial damage, as reflected in the punctate scores, decreased over 4 days of treatment with whole and fat-reduced milk but continued to

  17. Dose conversion coefficients for photon exposure of the human eye lens

    Science.gov (United States)

    Behrens, R.; Dietze, G.

    2011-01-01

    In recent years, several papers dealing with the eye lens dose have been published, because epidemiological studies implied that the induction of cataracts occurs even at eye lens doses of less than 500 mGy. Different questions were addressed: Which personal dose equivalent quantity is appropriate for monitoring the dose to the eye lens? Is a new definition of the dose quantity Hp(3) based on a cylinder phantom to represent the human head necessary? Are current conversion coefficients from fluence to equivalent dose to the lens sufficiently accurate? To investigate the latter question, a realistic model of the eye including the inner structure of the lens was developed. Using this eye model, conversion coefficients for electrons have already been presented. In this paper, the same eye model—with the addition of the whole body—was used to calculate conversion coefficients from fluence (and air kerma) to equivalent dose to the lens for photon radiation from 5 keV to 10 MeV. Compared to the values adopted in 1996 by the International Commission on Radiological Protection (ICRP), the new values are similar between 40 keV and 1 MeV and lower by up to a factor of 5 and 7 for photon energies at about 10 keV and 10 MeV, respectively. Above 1 MeV, the new values (calculated without kerma approximation) should be applied in pure photon radiation fields, while the values adopted by the ICRP in 1996 (calculated with kerma approximation) should be applied in case a significant contribution from secondary electrons originating outside the body is present.

  18. Dose conversion coefficients for photon exposure of the human eye lens

    International Nuclear Information System (INIS)

    Behrens, R; Dietze, G

    2011-01-01

    In recent years, several papers dealing with the eye lens dose have been published, because epidemiological studies implied that the induction of cataracts occurs even at eye lens doses of less than 500 mGy. Different questions were addressed: Which personal dose equivalent quantity is appropriate for monitoring the dose to the eye lens? Is a new definition of the dose quantity Hp(3) based on a cylinder phantom to represent the human head necessary? Are current conversion coefficients from fluence to equivalent dose to the lens sufficiently accurate? To investigate the latter question, a realistic model of the eye including the inner structure of the lens was developed. Using this eye model, conversion coefficients for electrons have already been presented. In this paper, the same eye model, with the addition of the whole body, was used to calculate conversion coefficients from fluence (and air kerma) to equivalent dose to the lens for photon radiation from 5 keV to 10 MeV. Compared to the values adopted in 1996 by the International Commission on Radiological Protection (ICRP), the new values are similar between 40 keV and 1 MeV and lower by up to a factor of 5 and 7 for photon energies at about 10 keV and 10 MeV, respectively. Above 1 MeV, the new values (calculated without kerma approximation) should be applied in pure photon radiation fields, while the values adopted by the ICRP in 1996 (calculated with kerma approximation) should be applied in case a significant contribution from secondary electrons originating outside the body is present.

  19. Gaze direction effects on perceptions of upper limb kinesthetic coordinate system axes.

    Science.gov (United States)

    Darling, W G; Hondzinski, J M; Harper, J G

    2000-12-01

    The effects of varying gaze direction on perceptions of the upper limb kinesthetic coordinate system axes and of the median plane location were studied in nine subjects with no history of neuromuscular disorders. In two experiments, six subjects aligned the unseen forearm to the trunk-fixed anterior-posterior (a/p) axis and earth-fixed vertical while gazing at different visual targets using either head or eye motion to vary gaze direction in different conditions. Effects of support of the upper limb on perceptual errors were also tested in different conditions. Absolute constant errors and variable errors associated with forearm alignment to the trunk-fixed a/p axis and earth-fixed vertical were similar for different gaze directions whether the head or eyes were moved to control gaze direction. Such errors were decreased by support of the upper limb when aligning to the vertical but not when aligning to the a/p axis. Regression analysis showed that single trial errors in individual subjects were poorly correlated with gaze direction, but showed a dependence on shoulder angles for alignment to both axes. Thus, changes in position of the head and eyes do not influence perceptions of upper limb kinesthetic coordinate system axes. However, dependence of the errors on arm configuration suggests that such perceptions are generated from sensations of shoulder and elbow joint angle information. In a third experiment, perceptions of median plane location were tested by instructing four subjects to place the unseen right index fingertip directly in front of the sternum either by motion of the straight arm at the shoulder or by elbow flexion/extension with shoulder angle varied. Gaze angles were varied to the right and left by 0.5 radians to determine effects of gaze direction on such perceptions. These tasks were also carried out with subjects blind-folded and head orientation varied to test for effects of head orientation on perceptions of median plane location. Constant

  20. The effects of metamaterial on electromagnetic fields absorption characteristics of human eye tissues.

    Science.gov (United States)

    Gasmelseed, Akram; Yunus, Jasmy

    2014-01-01

    The interaction of a dipole antenna with a human eye model in the presence of a metamaterial is investigated in this paper. The finite difference time domain (FDTD) method with convolutional perfectly matched layer (CPML) formulation has been used. A three-dimensional anatomical model of the human eye with a resolution of 1.25 mm × 1.25 mm × 1.25 mm was used in this study. The dipole antenna was driven by a modulated Gaussian pulse, and the numerical study was performed with the dipole operating at 900 MHz. The analysis was done by varying the size and the electric permittivity of the metamaterial. After normalizing the peak SAR (1 g and 10 g) to 1 W for all examined cases, we observed that the SAR values are not affected by the different permittivity values when the size of the metamaterial is kept fixed. Copyright © 2013 Elsevier Ltd. All rights reserved.
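
    The normalization step described above can be illustrated with a short post-processing sketch: local SAR is computed from the field magnitude, conductivity and density of each voxel, and then rescaled to a 1 W input. The field array, tissue values and assumed input power below are placeholders standing in for the FDTD/CPML output, and no 1 g or 10 g mass averaging is performed.

```python
import numpy as np

def local_sar(e_peak, sigma, rho):
    """Point SAR (W/kg) from the peak electric-field magnitude (V/m),
    conductivity sigma (S/m) and mass density rho (kg/m^3)."""
    return sigma * np.abs(e_peak) ** 2 / (2.0 * rho)

# Placeholder 1.25 mm voxel grid of field magnitudes and tissue properties
rng = np.random.default_rng(0)
e_field = 50.0 * rng.random((8, 8, 8))      # V/m, stand-in for FDTD output
sigma = np.full(e_field.shape, 1.9)         # S/m, vitreous-like conductivity
rho = np.full(e_field.shape, 1000.0)        # kg/m^3

sar = local_sar(e_field, sigma, rho)

# Normalize so the reported SAR corresponds to 1 W of antenna power;
# the simulated input power here is an assumed placeholder value.
simulated_input_power = 0.25                # W
sar_per_watt = sar / simulated_input_power

print(f"peak voxel SAR per 1 W input: {sar_per_watt.max():.3f} W/kg")
```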

  1. Gaze-Based Controlling a Vehicle

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Witzner Hansen, Dan

    ... modality if gaze trackers are embedded into the head-mounted devices. The domain of gaze-based interactive applications increases dramatically as interaction is no longer constrained to 2D displays. This paper proposes a general framework for gaze-based controlling a non-stationary robot (vehicle) as an example of a complex gaze-based task in the environment. The paper discusses the possibilities and limitations of how gaze interaction can be performed for controlling vehicles, not only using a remote gaze tracker but also in general challenging situations where the user and robot are mobile and the movements may be governed by several degrees of freedom (e.g. flying). A case study is also introduced where the mobile gaze tracker is used for controlling a Roomba vacuum cleaner.
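
    The record above does not spell out the control law, but one simple way a gaze point could be turned into vehicle commands is sketched below; the mapping, gains, dead-zone and the differential-drive interface are hypothetical choices for illustration, not the framework proposed in the paper.

```python
def gaze_to_drive(gaze_x, gaze_y, frame_w, frame_h,
                  max_speed=0.3, max_turn=1.0):
    """Map a gaze point in camera-image coordinates to differential-drive
    commands: looking up drives forward, looking left/right steers.
    Gains and dead-zone are illustrative placeholders."""
    # Normalise to [-1, 1] with the image centre as origin
    x = 2.0 * gaze_x / frame_w - 1.0
    y = 1.0 - 2.0 * gaze_y / frame_h

    # Small dead-zone so fixations near the centre do not move the robot
    if abs(x) < 0.1 and abs(y) < 0.1:
        return 0.0, 0.0

    linear = max_speed * max(y, 0.0)   # m/s, forward motion only
    angular = -max_turn * x            # rad/s, steer toward the gaze side
    return linear, angular


# Example: gaze in the upper-right quadrant of a 640x480 camera frame
print(gaze_to_drive(500, 100, 640, 480))
```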

  2. Facilitation of tear fluid secretion by 3% diquafosol ophthalmic solution in normal human eyes.

    Science.gov (United States)

    Yokoi, Norihiko; Kato, Hiroaki; Kinoshita, Shigeru

    2014-01-01

    To evaluate the increase in tear fluid volume induced by 3% diquafosol ophthalmic solution in normal human eyes. Prospective, randomized, double-masked, comparative study. Twenty healthy adults (17 males and 3 females; mean age, 38.8 years) underwent topical instillation of 2 ophthalmic solutions, artificial tears in 1 eye and 3% diquafosol ophthalmic solution in the fellow eye, in a masked manner. The radius of curvature of the central lower tear meniscus was measured at 5, 10, 15, 30, and 60 minutes after instillation by use of reflective meniscometry, and subjects self-evaluated their symptoms of wetness and stinging using a visual analog scale. Changes after instillation in the radius of curvature from baseline (artificial tear group vs diquafosol group; mean ± standard error of the mean) were as follows: at 5 minutes, -0.008 ± 0.012 vs 0.045 ± 0.013; at 10 minutes, 0.001 ± 0.014 vs 0.057 ± 0.016; at 15 minutes, -0.012 ± 0.014 vs 0.037 ± 0.019; at 30 minutes, -0.010 ± 0.016 vs 0.030 ± 0.025; and at 60 minutes, -0.029 ± 0.012 vs -0.020 ± 0.012. The diquafosol group showed significantly greater values from 5 to 30 minutes after instillation. Of the 40 eyes, 13 showed abnormal tear film breakup time (≤5 seconds). The diquafosol group had significantly more wetness at 15 minutes after instillation than did the artificial tear group. Topical instillation of 3% diquafosol ophthalmic solution increases tear fluid on the ocular surface for up to 30 minutes in normal human eyes. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Distribution of absorbed dose in human eye simulated by SRNA-2KG computer code

    International Nuclear Information System (INIS)

    Ilic, R.; Pesic, M.; Pavlovic, R.; Mostacci, D.

    2003-01-01

    Rapidly increasing performance of personal computers and the development of proton transport codes based on Monte Carlo methods will soon allow the introduction of computer-planned proton therapy as a normal activity in regular hospital procedures. A description of the SRNA code used for such applications and the calculated distributions of proton-absorbed dose in the human eye are given in this paper. (author)

  4. The Disturbance of Gaze in Progressive Supranuclear Palsy (PSP): Implications for Pathogenesis

    Directory of Open Access Journals (Sweden)

    Athena L Chen

    2010-12-01

    Full Text Available Progressive supranuclear palsy (PSP) is a disease of later life that is currently regarded as a form of neurodegenerative tauopathy. Disturbance of gaze is a cardinal clinical feature of PSP that often helps clinicians to establish the diagnosis. Since the neurobiology of gaze control is now well understood, it is possible to use eye movements as investigational tools to understand aspects of the pathogenesis of PSP. In this review, we summarize each disorder of gaze control that occurs in PSP, drawing on our studies of fifty patients, and on reports from other laboratories that have measured the disturbances of eye movements. When these gaze disorders are approached by considering each functional class of eye movements and its neurobiological basis, a distinct pattern of eye movement deficits emerges that provides insight into the pathogenesis of PSP. Although some aspects of all forms of eye movements are affected in PSP, the predominant defects concern vertical saccades (slow and hypometric, both up and down), impaired vergence, and inability to modulate the linear vestibulo-ocular reflex appropriately for viewing distance. These vertical and vergence eye movements habitually work in concert to enable visuomotor skills that are important during locomotion with the hands free. Taken with the prominent early feature of falls, these findings suggest that PSP tauopathy impairs a recently-evolved neural system concerned with bipedal locomotion in an erect posture and frequent gaze shifts between the distant environment and proximate hands. This approach provides a conceptual framework that can be used to address the nosological challenge posed by overlapping clinical and neuropathological features of neurodegenerative tauopathies.

  5. Optics of the human cornea influence the accuracy of stereo eye-tracking methods: a simulation study

    NARCIS (Netherlands)

    Barsingerhorn, A.D.; Boonstra, F.N.; Goossens, H.H.L.M.

    2017-01-01

    Current stereo eye-tracking methods model the cornea as a sphere with one refractive surface. However, the human cornea is slightly aspheric and has two refractive surfaces. Here we used ray-tracing and the Navarro eye-model to study how these optical properties affect the accuracy of different

  6. Use of eye tracking equipment for human reliability analysis applied to complex system operations

    International Nuclear Information System (INIS)

    Pinheiro, Andre Ricardo Mendonça; Prado, Eugenio Anselmo Pessoa do; Martins, Marcelo Ramos

    2017-01-01

    This article discusses the preliminary results of an evaluation methodology for the analysis and quantification of manual (human) errors, based on monitoring cognitive parameters and skill levels during the operation of a complex control system, using parameters provided by eye monitoring equipment (eye tracker). The research was conducted using a simulator (game) that reproduces concepts of nuclear reactor operation, with the sample split to evaluate aspects of learning, knowledge and standard operation within the context addressed. Bridge operators were monitored with the eye tracker, eliminating the presence of the analyst during the evaluation of the operation and allowing the results to be analyzed by means of multivariate statistical techniques within the scope of system reliability. The experiments aim to observe state-change situations such as scheduled shutdowns and startups, postulated incidents and common operating characteristics. Preliminary results of this research indicate that technical and cognitive aspects can contribute to improving the reliability of the available human reliability techniques, making them more realistic both in the context of quantitative approaches for regulatory and training purposes and in reducing the incidence of human error. (author)

  7. Use of eye tracking equipment for human reliability analysis applied to complex system operations

    Energy Technology Data Exchange (ETDEWEB)

    Pinheiro, Andre Ricardo Mendonça; Prado, Eugenio Anselmo Pessoa do; Martins, Marcelo Ramos, E-mail: andrericardopinheiro@usp.br, E-mail: eugenio.prado@labrisco.usp.br, E-mail: mrmatins@usp.br [Universidade de Sao Paulo (LABRISCO/USP), Sao Paulo, SP (Brazil). Lab. de Análise, Avaliação e Gerenciamento de Risco

    2017-07-01

    This article discusses the preliminary results of an evaluation methodology for the analysis and quantification of manual (human) errors, based on monitoring cognitive parameters and skill levels during the operation of a complex control system, using parameters provided by eye monitoring equipment (eye tracker). The research was conducted using a simulator (game) that reproduces concepts of nuclear reactor operation, with the sample split to evaluate aspects of learning, knowledge and standard operation within the context addressed. Bridge operators were monitored with the eye tracker, eliminating the presence of the analyst during the evaluation of the operation and allowing the results to be analyzed by means of multivariate statistical techniques within the scope of system reliability. The experiments aim to observe state-change situations such as scheduled shutdowns and startups, postulated incidents and common operating characteristics. Preliminary results of this research indicate that technical and cognitive aspects can contribute to improving the reliability of the available human reliability techniques, making them more realistic both in the context of quantitative approaches for regulatory and training purposes and in reducing the incidence of human error. (author)

  8. Mathematical modeling of laser linear thermal effects on the anterior layer of the human eye

    Science.gov (United States)

    Rahbar, Sahar; Shokooh-Saremi, Mehrdad

    2018-02-01

    In this paper, a mathematical analysis of the thermal effects of excimer lasers on the anterior side of the human eye is presented, where the linear effect of absorption by the human eye is considered. To this end, Argon Fluoride (ArF) and Holmium:Yttrium-Aluminum-Garnet (Ho:YAG) lasers are utilized in this investigation. A three-dimensional model of the human eye with actual dimensions is employed, and the finite element method (FEM) is utilized to numerically solve the governing (Pennes) heat transfer equation. The simulation results suggest corneal temperatures of 263 °C and 83.4 °C for ArF and Ho:YAG laser radiation, respectively, and show a smaller heat penetration depth in comparison to previous reports. Moreover, the heat transfer equation is solved semi-analytically in one dimension. It is shown that the simulation results are also consistent with those derived from the semi-analytical solution of the Pennes heat transfer equation for both types of laser radiation.
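
    The one-dimensional treatment mentioned above can be mimicked numerically with an explicit finite-difference sketch of a simplified Pennes bioheat equation driven by a Beer-Lambert absorption source. The tissue, perfusion and laser parameters below are placeholders, not the corneal or ArF/Ho:YAG values used in the paper.

```python
import numpy as np

# Placeholder tissue and laser parameters (not the paper's values)
k, rho, c = 0.58, 1050.0, 3600.0      # W/m/K, kg/m^3, J/kg/K
w_b, c_b, T_a = 0.005, 3600.0, 37.0   # simplified perfusion (1/s), blood heat capacity, arterial T
mu_a, I0 = 2.0e4, 1.0e6               # absorption coefficient (1/m), irradiance (W/m^2)

nx, dx, dt, steps = 200, 5e-6, 1e-5, 2000
x = np.arange(nx) * dx
T = np.full(nx, 37.0)

alpha = k / (rho * c)
source = mu_a * I0 * np.exp(-mu_a * x) / (rho * c)   # Beer-Lambert heating term

for _ in range(steps):
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T += dt * (alpha * lap
               + source
               - w_b * c_b * (T - T_a) / (rho * c))
    T[0] = T[1]        # insulated front surface (simplification)
    T[-1] = 37.0       # body-temperature boundary deep in the tissue

print(f"surface temperature after {steps * dt:.3f} s: {T[0]:.1f} °C")
```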

  9. Systems and methods of eye tracking calibration

    DEFF Research Database (Denmark)

    2014-01-01

    Methods and systems to facilitate eye tracking control calibration are provided. One or more objects are displayed on a display of a device, where the one or more objects are associated with a function unrelated to a calculation of one or more calibration parameters. The one or more calibration parameters relate to a calibration of a calculation of gaze information of a user of the device, where the gaze information indicates where the user is looking. While the one or more objects are displayed, eye movement information associated with the user is determined, which indicates eye movement of one or more eye features associated with at least one eye of the user. The eye movement information is associated with a first object location of the one or more objects. The one or more calibration parameters are calculated based on the first object location being associated with the eye movement information.
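
    The abstract above describes relating eye-movement information to the known locations of displayed objects in order to obtain calibration parameters. A minimal least-squares sketch of that idea is given below; the affine model and the pupil-glint-style feature representation are assumptions for illustration, not the claimed method.

```python
import numpy as np

def fit_calibration(eye_features, object_locations):
    """Fit an affine map from 2-D eye features (e.g. pupil-glint vectors)
    to the 2-D screen coordinates of the objects the user looked at."""
    X = np.hstack([eye_features, np.ones((len(eye_features), 1))])  # add bias column
    params, *_ = np.linalg.lstsq(X, object_locations, rcond=None)
    return params            # 3x2 parameter matrix

def apply_calibration(params, eye_feature):
    """Estimate the on-screen gaze point for a new eye feature."""
    x = np.append(eye_feature, 1.0)
    return x @ params

# Hypothetical samples collected while the user interacted with objects
features = np.array([[0.10, 0.20], [0.45, 0.18], [0.12, 0.60], [0.50, 0.62]])
targets = np.array([[100, 120], [900, 110], [120, 700], [930, 690]])

P = fit_calibration(features, targets)
print(apply_calibration(P, np.array([0.30, 0.40])))
```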

  10. Caffeine increases the velocity of rapid eye movements in unfatigued humans.

    Science.gov (United States)

    Connell, Charlotte J W; Thompson, Benjamin; Turuwhenua, Jason; Hess, Robert F; Gant, Nicholas

    2017-08-01

    Caffeine is a widely used dietary stimulant that can reverse the effects of fatigue on cognitive, motor and oculomotor function. However, few studies have examined the effect of caffeine on the oculomotor system when homeostasis has not been disrupted by physical fatigue. This study examined the influence of a moderate dose of caffeine on oculomotor control and visual perception in participants who were not fatigued. Within a placebo-controlled crossover design, 13 healthy adults ingested caffeine (5 mg·kg⁻¹ body mass) and were tested over 3 h. Eye movements, including saccades, smooth pursuit and optokinetic nystagmus, were measured using infrared oculography. Caffeine was associated with higher peak saccade velocities (472 ± 60°·s⁻¹) compared to placebo (455 ± 62°·s⁻¹). Quick phases of optokinetic nystagmus were also significantly faster with caffeine, whereas pursuit eye movements were unchanged. Non-oculomotor perceptual tasks (global motion and global orientation processing) were unaffected by caffeine. These results show that oculomotor control is modulated by a moderate dose of caffeine in unfatigued humans. These effects are detectable in the kinematics of rapid eye movements, whereas pursuit eye movements and visual perception are unaffected. Oculomotor functions may be sensitive to changes in central catecholamines mediated via caffeine's action as an adenosine antagonist, even when participants are not fatigued.
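
    Peak saccade velocities such as those reported above are normally obtained by differentiating the sampled eye-position trace and taking the maximum within each detected saccade. The sketch below shows one such computation on a synthetic trace; the sampling rate, velocity threshold and absence of filtering are arbitrary simplifications rather than the study's analysis pipeline.

```python
import numpy as np

def peak_saccade_velocities(position_deg, fs, threshold=30.0):
    """Return peak velocities (deg/s) of saccades in a 1-D eye-position
    trace sampled at fs Hz. A saccade is any run of samples whose absolute
    velocity exceeds the threshold (deg/s)."""
    velocity = np.gradient(position_deg) * fs
    fast = np.abs(velocity) > threshold
    peaks, current = [], []
    for v, is_fast in zip(velocity, fast):
        if is_fast:
            current.append(abs(v))
        elif current:
            peaks.append(max(current))
            current = []
    if current:
        peaks.append(max(current))
    return peaks

# Synthetic 10-degree saccade sampled at 1 kHz (sigmoid position profile)
t = np.arange(0, 0.2, 0.001)
pos = 10.0 / (1.0 + np.exp(-(t - 0.1) / 0.005))
print(peak_saccade_velocities(pos, fs=1000))
```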

  11. Intraocular Telescopic System Design: Optical and Visual Simulation in a Human Eye Model

    Directory of Open Access Journals (Sweden)

    Georgios Zoulinakis

    2017-01-01

    Full Text Available Purpose. To design an intraocular telescopic system (ITS) for magnifying retinal image and to simulate its optical and visual performance after implantation in a human eye model. Methods. Design and simulation were carried out with a ray-tracing and optical design software. Two different ITS were designed, and their visual performance was simulated using the Liou-Brennan eye model. The difference between the ITS was their lenses’ placement in the eye model and their powers. Ray tracing in both centered and decentered situations was carried out for both ITS while visual Strehl ratio (VSOTF) was computed using custom-made MATLAB code. Results. The results show that between 0.4 and 0.8 mm of decentration, the VSOTF does not change much either for far or near target distances. The image projection for these decentrations is in the parafoveal zone, and the quality of the image projected is quite similar. Conclusion. Both systems display similar quality while they differ in size; therefore, the choice between them would need to take into account specific parameters from the patient’s eye. Quality does not change too much between 0.4 and 0.8 mm of decentration for either system which gives flexibility to the clinician to adjust decentration to avoid areas of retinal damage.

  12. Simplifying numerical ray tracing for two-dimensional non circularly symmetric models of the human eye.

    Science.gov (United States)

    Jesus, Danilo A; Iskander, D Robert

    2015-12-01

    Ray tracing is a powerful technique to understand the light behavior through an intricate optical system such as that of a human eye. The prediction of visual acuity can be achieved through characteristics of an optical system such as the geometrical point spread function. In general, its precision depends on the number of discrete rays and the accurate surface representation of each eye's components. Recently, a method that simplifies calculation of the geometrical point spread function has been proposed for circularly symmetric systems [Appl. Opt. 53, 4784 (2014)]. An extension of this method to 2D noncircularly symmetric systems is proposed. In this method, a two-dimensional ray tracing procedure for an arbitrary number of surfaces and arbitrary surface shapes has been developed where surfaces, rays, and refractive indices are all represented in functional forms being approximated by Chebyshev polynomials. The Liou and Brennan anatomically accurate eye model has been adapted and used for evaluating the method. Further, real measurements of the anterior corneal surface of normal, astigmatic, and keratoconic eyes were substituted for the first surface in the model. The results have shown that performing ray tracing, utilizing the two-dimensional Chebyshev function approximation, is possible for noncircularly symmetric models, and that such calculation can be performed with a newly created Chebfun toolbox.
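
    As a toy illustration of representing an ocular surface in functional form, the sketch below fits a one-dimensional conic corneal profile with NumPy's Chebyshev class and evaluates the local slope needed to construct a surface normal for refraction. The conic parameters and fit degree are arbitrary choices, and the snippet does not reproduce the Chebfun-based procedure of the paper.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Conic anterior corneal profile, sag z(x), with placeholder parameters
R, Q = 7.8, -0.26                      # radius of curvature (mm), asphericity
x = np.linspace(-4.0, 4.0, 200)        # mm, corneal half-chord
z = x**2 / (R + np.sqrt(R**2 - (1 + Q) * x**2))

# Fit the profile with a Chebyshev series and take its derivative
cheb = C.Chebyshev.fit(x, z, deg=10)
dz = cheb.deriv()

# Surface sag and slope at an arbitrary ray intersection point
x0 = 1.5
print(f"sag {cheb(x0):.4f} mm, slope {dz(x0):.4f} at x = {x0} mm")
```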

  13. Intraocular Telescopic System Design: Optical and Visual Simulation in a Human Eye Model.

    Science.gov (United States)

    Zoulinakis, Georgios; Ferrer-Blasco, Teresa

    2017-01-01

    Purpose. To design an intraocular telescopic system (ITS) for magnifying retinal image and to simulate its optical and visual performance after implantation in a human eye model. Methods. Design and simulation were carried out with a ray-tracing and optical design software. Two different ITS were designed, and their visual performance was simulated using the Liou-Brennan eye model. The difference between the ITS was their lenses' placement in the eye model and their powers. Ray tracing in both centered and decentered situations was carried out for both ITS while visual Strehl ratio (VSOTF) was computed using custom-made MATLAB code. Results. The results show that between 0.4 and 0.8 mm of decentration, the VSOTF does not change much either for far or near target distances. The image projection for these decentrations is in the parafoveal zone, and the quality of the image projected is quite similar. Conclusion. Both systems display similar quality while they differ in size; therefore, the choice between them would need to take into account specific parameters from the patient's eye. Quality does not change too much between 0.4 and 0.8 mm of decentration for either system which gives flexibility to the clinician to adjust decentration to avoid areas of retinal damage.
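
    The visual Strehl ratio used above weights the optical transfer function by a neural contrast sensitivity function and compares the result with the diffraction-limited case. A rough sketch of such a metric on toy point spread functions is shown below; the Gaussian PSFs and the Gaussian stand-in for the contrast sensitivity function are illustrative assumptions, not the custom MATLAB implementation used in the study.

```python
import numpy as np

def vsotf(psf, psf_dl, csf):
    """Visual-Strehl-type ratio: CSF-weighted real part of the OTF of an
    aberrated PSF over that of the diffraction-limited PSF."""
    otf = np.real(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
    otf_dl = np.real(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf_dl))))
    return np.sum(csf * otf) / np.sum(csf * otf_dl)

# Toy Gaussian PSFs and a Gaussian stand-in for the contrast sensitivity function
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf_dl = np.exp(-(x**2 + y**2) / (2 * 1.5**2)); psf_dl /= psf_dl.sum()
psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2));    psf /= psf.sum()
csf = np.exp(-(x**2 + y**2) / (2 * 10.0**2))

print(vsotf(psf, psf_dl, csf))   # below 1 for the broader (blurrier) PSF
```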

  14. Illusory shadow person causing paradoxical gaze deviations during temporal lobe seizures

    NARCIS (Netherlands)

    Zijlmans, M.; van Eijsden, P.; Ferrier, C. H.; Kho, K. H.; van Rijen, P. C.; Leijten, F. S. S.

    Generally, activation of the frontal eye field during seizures can cause versive (forced) gaze deviation, while non-versive head deviation is hypothesised to result from ictal neglect after inactivation of the ipsilateral temporoparietal area. Almost all non-versive head deviations occurring during

  15. The head tracks and gaze predicts: how the world's best batters hit the ball

    NARCIS (Netherlands)

    Mann, D.L.; Spratford, W.; Abernethy, B.

    2013-01-01

    Hitters in fast ball-sports do not align their gaze with the ball throughout ball-flight; rather, they use predictive eye movement strategies that contribute towards their level of interceptive skill. Existing studies claim that (i) baseball and cricket batters cannot track the ball because it moves

  16. Attention to the Mouth and Gaze Following in Infancy Predict Language Development

    Science.gov (United States)

    Tenenbaum, Elena J.; Sobel, David M.; Sheinkopf, Stephen J.; Malle, Bertram F.; Morgan, James L.

    2015-01-01

    We investigated longitudinal relations among gaze following and face scanning in infancy and later language development. At 12 months, infants watched videos of a woman describing an object while their passive viewing was measured with an eye-tracker. We examined the relation between infants' face scanning behavior and their tendency to follow the…

  17. Looking for Action: Talk and Gaze Home Position in the Airline Cockpit

    Science.gov (United States)

    Nevile, Maurice

    2010-01-01

    This paper considers the embodied nature of discourse for a professional work setting. It examines language in interaction in the airline cockpit, and specifically how shifts in pilots' eye gaze direction can indicate the action of talk, that is, what talk is doing and its relative contribution to work-in-progress. Looking towards the other…

  18. EDITORIAL: Special section on gaze-independent brain-computer interfaces Special section on gaze-independent brain-computer interfaces

    Science.gov (United States)

    Treder, Matthias S.

    2012-08-01

    Restoring the ability to communicate and interact with the environment in patients with severe motor disabilities is a vision that has been the main catalyst of early brain-computer interface (BCI) research. The past decade has brought a diversification of the field. BCIs have been examined as a tool for motor rehabilitation and their benefit in non-medical applications such as mental-state monitoring for improved human-computer interaction and gaming has been confirmed. At the same time, the weaknesses of some approaches have been pointed out. One of these weaknesses is gaze-dependence, that is, the requirement that the user of a BCI system voluntarily directs his or her eye gaze towards a visual target in order to efficiently operate a BCI. This not only contradicts the main doctrine of BCI research, namely that BCIs should be independent of muscle activity, but it can also limit its real-world applicability both in clinical and non-medical settings. It is only in a scenario devoid of any motor activity that a BCI solution is without alternative. Gaze-dependencies have surfaced at two different points in the BCI loop. Firstly, a BCI that relies on visual stimulation may require users to fixate on the target location. Secondly, feedback is often presented visually, which implies that the user may have to move his or her eyes in order to perceive the feedback. This special section was borne out of a BCI workshop on gaze-independent BCIs held at the 2011 Society for Applied Neurosciences (SAN) Conference and was then extended with additional contributions from other research groups. It compiles experimental and methodological work that aims toward gaze-independent communication and mental-state monitoring. Riccio et al review the current state-of-the-art in research on gaze-independent BCIs [1]. Van der Waal et al present a tactile speller that builds on the stimulation of the fingers of the right and left hand [2]. Höhne et al analyze the ergonomic aspects

  19. Numerical modeling of heat and mass transfer in the human eye under millimeter wave exposure.

    Science.gov (United States)

    Karampatzakis, Andreas; Samaras, Theodoros

    2013-05-01

    Human exposure to millimeter wave (MMW) radiation is expected to increase in the next several years. In this work, we present a thermal model of the human eye under MMW illumination. The model takes into account the fluid dynamics of the aqueous humor and predicts a frequency-dependent reversal of its flow that also depends on the incident power density. The calculated maximum fluid velocity in the anterior chamber and the temperature rise at the corneal apex are reported for frequencies from 40 to 100 GHz and different values of incident power density. Copyright © 2013 Wiley Periodicals, Inc.

  20. Ultrathin flexible piezoelectric sensors for monitoring eye fatigue

    Science.gov (United States)

    Lü, Chaofeng; Wu, Shuang; Lu, Bingwei; Zhang, Yangyang; Du, Yangkun; Feng, Xue

    2018-02-01

    Eye fatigue is a symptom induced by prolonged work of both the eyes and the brain. Without proper treatment, eye fatigue may lead to serious problems. Current studies on detecting eye fatigue mainly focus on computer vision detection technology, which can be unreliable under poor visual conditions. As a solution, we propose a wearable conformal in vivo eye fatigue monitoring sensor that contains an array of piezoelectric nanoribbons integrated on an ultrathin flexible substrate. By detecting strains on the skin of the eyelid, the sensors may collect information about eye blinking and therefore reveal a person's fatigue state. We first report the design and fabrication of the piezoelectric sensor and the experimental characterization of the voltage responses of the piezoelectric sensors. Under bending stress, the output voltage curves yield key information about the motion of the human eyelid. We also develop a theoretical model to reveal the underlying mechanism of detecting eyelid motion. Both mechanical load tests and in vivo tests are conducted to confirm the working performance of the sensors. With satisfactory durability and high sensitivity, this sensor may efficiently detect abnormal eyelid motions, such as overlong closure, high blinking frequency, low closing speed and weak gazing strength, and may hopefully provide feedback for assessing eye fatigue in time so that unexpected situations can be prevented.
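
    Turning an eyelid-strain voltage into fatigue indicators such as overlong closures or blink rate can be sketched as simple threshold crossing with a duration check. In the snippet below, the signal model, closure threshold and the 0.5 s "overlong closure" criterion are illustrative assumptions, not the processing published for this sensor.

```python
import numpy as np

def detect_closures(voltage, fs, threshold=0.5, long_closure_s=0.5):
    """Return (start_time_s, duration_s, is_overlong) tuples for intervals
    where the eyelid-strain voltage exceeds a closure threshold."""
    closed = voltage > threshold
    events = []
    start = None
    for i, c in enumerate(closed):
        if c and start is None:
            start = i
        elif not c and start is not None:
            events.append((start / fs, (i - start) / fs))
            start = None
    if start is not None:
        events.append((start / fs, (len(closed) - start) / fs))
    return [(t, d, d >= long_closure_s) for t, d in events]

# Synthetic trace sampled at 100 Hz: two short blinks and one overlong closure
fs = 100
v = np.zeros(10 * fs)
v[100:115] = 1.0      # ~150 ms blink
v[400:415] = 1.0      # ~150 ms blink
v[700:780] = 1.0      # 800 ms closure, flagged as overlong
print(detect_closures(v, fs))
```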