WorldWideScience

Sample records for human eye gaze

  1. Gazing-detection of human eyes based on SVM

    Institute of Scientific and Technical Information of China (English)

    LI Su-mei; ZHANG Yan-xin; CHANG Sheng-jiang; SHEN Jin-yuan

    2005-01-01

    A method for gazing detection of human eyes using a Support Vector Machine (SVM) based on statistical learning theory (SLT) is proposed. According to the structural risk minimization criterion of SVM, the errors between sample data and model data are minimized and the upper bound on the model's prediction error is also reduced. As a result, the generalization ability of the model is much improved. The simulation results show that, when limited training samples are used, the correct recognition rate on the tested samples can be as high as 100%, which is much better than some previous results obtained by other methods. The high processing speed enables the system to distinguish gazing from not gazing in real time.
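
    The abstract names the SVM classifier but gives no implementation details. As an illustrative sketch only (the features, labels, and hyperparameters below are invented, not from the paper), a linear SVM for a gazing/not-gazing decision can be trained with Pegasos-style stochastic subgradient descent on the hinge loss:

    ```python
    import random

    def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
        """Pegasos-style training of a linear SVM: stochastic subgradient
        descent on the regularized hinge loss. X holds feature vectors
        (lists of floats); y holds labels in {-1, +1}. Returns the weight
        vector of a decision function through the origin."""
        rng = random.Random(seed)
        n, d = len(X), len(X[0])
        w = [0.0] * d
        t = 0
        for _ in range(epochs):
            for i in rng.sample(range(n), n):      # one pass in random order
                t += 1
                eta = 1.0 / (lam * t)              # decaying step size
                margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
                w = [wj * (1.0 - eta * lam) for wj in w]   # shrink (regularizer)
                if margin < 1.0:                   # inside the margin: push out
                    w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
        return w

    def predict(w, x):
        """Sign of the linear decision function."""
        return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1
    ```

    With linearly separable toy data the trained classifier labels all training points correctly; the paper's actual eye features and kernel choice are not specified here.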

  2. EYE GAZE TRACKING

    DEFF Research Database (Denmark)

    2017-01-01

    This invention relates to a method of performing eye gaze tracking of at least one eye of a user by determining the position of the center of the eye, said method comprising the steps of: detecting the position of at least three reflections on said eye; transforming said positions to a normalized coordinate system spanning a frame of reference, wherein said transformation is performed based on a bilinear transformation or a nonlinear transformation, e.g. a Möbius transformation or a homographic transformation; detecting the position of said center of the eye relative to the position of said reflections and transforming this position to said normalized coordinate system; and tracking the eye gaze by tracking the movement of said eye in said normalized coordinate system. Thereby calibration of a camera, such as knowledge of the exact position and zoom level of the camera, is avoided...
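
    The claim rests on mapping glint positions into a normalized coordinate system. As a simplified illustration (the patent names bilinear, Möbius, and homographic transformations; an affine map is used here only because three reflections determine it exactly), one can send the three glints to a fixed normalized triangle and express the eye center in that frame:

    ```python
    def affine_from_3pts(src, dst):
        """Affine map (x,y) -> (a*x+b*y+c, d*x+e*y+f) sending the three
        src points onto the three dst points, via Cramer's rule."""
        (x1, y1), (x2, y2), (x3, y3) = src
        det = x1*(y2 - y3) - y1*(x2 - x3) + (x2*y3 - x3*y2)
        if abs(det) < 1e-12:
            raise ValueError("glints are collinear")
        def solve(v1, v2, v3):
            # Solve p*xi + q*yi + r = vi at the three points.
            p = (v1*(y2 - y3) - y1*(v2 - v3) + (v2*y3 - v3*y2)) / det
            q = (x1*(v2 - v3) - v1*(x2 - x3) + (x2*v3 - x3*v2)) / det
            r = (x1*(y2*v3 - y3*v2) - y1*(x2*v3 - x3*v2) + v1*(x2*y3 - x3*y2)) / det
            return p, q, r
        a, b, c = solve(dst[0][0], dst[1][0], dst[2][0])
        d, e, f = solve(dst[0][1], dst[1][1], dst[2][1])
        return a, b, c, d, e, f

    def normalize_point(pt, glints):
        """Express pt in the normalized frame whose corners are the glints."""
        a, b, c, d, e, f = affine_from_3pts(
            glints, [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
        x, y = pt
        return (a*x + b*y + c, d*x + e*y + f)
    ```

    In this frame the detected eye center can be tracked independently of camera position and zoom, which is the effect the patent describes.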

  3. Towards a human eye behavior model by applying Data Mining Techniques on Gaze Information from IEC

    CERN Document Server

    Pallez, Denis; Baccino, Thierry

    2008-01-01

    In this paper, we first present what Interactive Evolutionary Computation (IEC) is and briefly describe how we have combined this artificial intelligence technique with an eye tracker for visual optimization. Next, in order to parameterize our application correctly, we present results from applying data mining techniques to gaze information collected in experiments conducted on about 80 human participants.

  4. Eye Movements in Gaze Interaction

    DEFF Research Database (Denmark)

    Møllenbach, Emilie; Hansen, John Paulin; Lillholm, Martin

    2013-01-01

    Gaze as a sole input modality must support complex navigation and selection tasks. Gaze interaction combines specific eye movements and graphic display objects (GDOs). This paper suggests a unifying taxonomy of gaze interaction principles. The taxonomy deals with three types of eye movements: fix...

  6. Human eye-head gaze shifts in a distractor task. II. Reduced threshold for initiation of early head movements.

    Science.gov (United States)

    Corneil, B D; Munoz, D P

    1999-09-01

    This study was motivated by the observation of early head movements (EHMs) occasionally generated before gaze shifts. Human subjects were presented with a visual or auditory target, along with an accompanying stimulus of the other modality that appeared either at the same location as the target (enhancer condition) or at the diametrically opposite location (distractor condition). Gaze shifts generated to the target in the distractor condition were sometimes preceded by EHMs directed either to the side of the target (correct EHMs) or to the side of the distractor (incorrect EHMs). During EHMs, the eyes performed compensatory movements to keep gaze stable. Incorrect EHMs were usually between 1 and 5 degrees in amplitude and reached generally low peak velocities, yet they initially followed a trajectory typical of much larger head movements. These results suggest that incorrect EHMs are head movements that were initially planned to orient to the peripheral distractor. Furthermore, gaze shifts preceded by incorrect EHMs had longer reaction latencies than gaze shifts not preceded by incorrect EHMs, suggesting that the processes leading to incorrect EHMs also serve to delay gaze-shift initiation. These results demonstrate a form of distraction analogous to the incorrect gaze shifts (IGSs) described in the previous paper and suggest that a motor program encoding a gaze shift to a distractor is capable of initiating either an IGS or an incorrect EHM; a neural program not strong enough to initiate an IGS can nevertheless initiate an incorrect EHM.

  7. Eye Gaze in Creative Sign Language

    Science.gov (United States)

    Kaneko, Michiko; Mesch, Johanna

    2013-01-01

    This article discusses the role of eye gaze in creative sign language. Because eye gaze conveys various types of linguistic and poetic information, it is an intrinsic part of sign language linguistics in general and of creative signing in particular. We discuss various functions of eye gaze in poetic signing and propose a classification of gaze…

  9. Eye gaze in intelligent user interfaces gaze-based analyses, models and applications

    CERN Document Server

    Nakano, Yukiko I; Bader, Thomas

    2013-01-01

    Remarkable progress in eye-tracking technologies has opened the way to designing novel attention-based intelligent user interfaces, and has highlighted the importance of better understanding eye gaze in human-computer interaction and human-human communication. For instance, a user's focus of attention is useful in interpreting the user's intentions, their understanding of the conversation, and their attitude towards the conversation. In human face-to-face communication, eye gaze plays an important role in floor management, grounding, and engagement in conversation...

  10. Remote control of mobile robots through human eye gaze: the design and evaluation of an interface

    Science.gov (United States)

    Latif, Hemin Omer; Sherkat, Nasser; Lotfi, Ahmad

    2008-10-01

    Controlling mobile robots remotely requires the operator to monitor the status of the robot through some sort of feedback. Assuming a vision-based feedback system is used, the operator is required to closely monitor the images while navigating the robot in real time. This engages both the eyes and the hands of the operator. Since the eyes are engaged in the monitoring task anyway, their gaze can be used to navigate the robot in order to free the operator's hands. However, the challenge lies in developing an interaction interface that enables an intuitive distinction to be made between monitoring and commanding. This paper presents a novel means of constructing a user interface to meet this challenge. A range of solutions is constructed by augmenting the visual feedback with command regions, to investigate the extent to which a user can intuitively control the robot. An experimental platform comprising a mobile robot together with cameras and an eye-gaze system is constructed. The design of the system allows control of the robot, control of onboard cameras, and control of the interface through eye gaze. A number of tasks are designed to evaluate the proposed solutions. This paper presents the design considerations and the results of the evaluation. Overall it is found that the proposed solutions provide effective means of successfully navigating the robot across a range of tasks.
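
    The command-region idea can be made concrete with a toy dispatcher: gaze inside the central region is treated as monitoring, while dwelling in a border region issues a motion command. The region layout and command names below are illustrative assumptions, not the paper's actual interface:

    ```python
    def command_for_gaze(x, y, width, height, margin=0.15):
        """Map a gaze point on the video feed (pixel coordinates) to a
        robot command. Border strips of relative width `margin` act as
        command regions; the central area is monitoring only."""
        u, v = x / width, y / height        # normalize to [0, 1]
        if v < margin:
            return "forward"
        if v > 1 - margin:
            return "reverse"
        if u < margin:
            return "turn_left"
        if u > 1 - margin:
            return "turn_right"
        return None                         # monitoring, no command issued
    ```

    A real implementation would also require a dwell time before issuing the command, so that a stray glance at the image border is not misread as an instruction.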

  11. Eye Gaze Assistance for a Game-Like Interactive Task

    Directory of Open Access Journals (Sweden)

    Tamás (Tom) D. Gedeon

    2008-01-01

    Human beings communicate in abbreviated ways that depend on prior interactions and shared knowledge. Furthermore, humans share information about intentions and future actions using eye gaze. Among primates, humans are unique in the whiteness of the sclera and the amount of sclera shown, which is essential for communication via interpretation of eye gaze. This paper extends our previous work on a game-like interactive task through computerised recognition of eye gaze and fuzzy-signature-based interpretation of possible intentions. This extends our notion of robot instinctive behaviour to intentional behaviour. We show a good improvement in speed of response from a simple use of eye gaze information. We also show a more significant and sophisticated use of the eye gaze information, which eliminates the need for control actions on the user's part, and we suggest how visibility of control can be returned to the user in these cases.

  12. THE EFFECT OF GAZE ANGLE ON THE EVALUATIONS OF SAR AND TEMPERATURE RISE IN HUMAN EYE UNDER PLANE-WAVE EXPOSURES FROM 0.9 TO 10 GHZ.

    Science.gov (United States)

    Diao, Yinliang; Leung, Sai-Wing; Chan, Kwok Hung; Sun, Weinong; Siu, Yun-Ming; Kong, Richard

    2016-12-01

    This article investigates the effect of gaze angle on the specific absorption rate (SAR) and temperature rise in the human eye under electromagnetic exposures from 0.9 to 10 GHz. Eye models at different gaze angles are developed based on biometric data. The spatially averaged SARs in the eyes are investigated using the finite-difference time-domain method, and the corresponding maximum temperature rises in the lens are calculated by the finite-difference method. It is found that changes in the gaze angle produce maximum variations of 35, 12 and 20% in the eye-averaged SAR, the peak 10 g average SAR and the temperature rise, respectively. Results also reveal that the eye-averaged SAR is more sensitive to changes in the gaze angle than the peak 10 g average SAR, especially at higher frequencies. © The Author 2015. Published by Oxford University Press.

  13. Eye gaze as relational evaluation: averted eye gaze leads to feelings of ostracism and relational devaluation.

    Science.gov (United States)

    Wirth, James H; Sacco, Donald F; Hugenberg, Kurt; Williams, Kipling D

    2010-07-01

    Eye gaze is often a signal of interest and, when noticed by others, leads to mutual and directional gaze. However, averting one's eye gaze toward an individual has the potential to convey a strong interpersonal evaluation. The averting of eye gaze is the most frequently used nonverbal cue to indicate the silent treatment, a form of ostracism. The authors argue that eye gaze can signal the relational value felt toward another person. In three studies, participants visualized interacting with an individual displaying averted or direct eye gaze. Compared to receiving direct eye contact, participants receiving averted eye gaze felt ostracized, signaled by thwarted basic need satisfaction, reduced explicit and implicit self-esteem, lowered relational value, and increased temptations to act aggressively toward the interaction partner.

  14. An eye model for uncalibrated eye gaze estimation under variable head pose

    Science.gov (United States)

    Hnatow, Justin; Savakis, Andreas

    2007-04-01

    Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes, in order to circumvent the requirement of a calibration procedure for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated by image processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound on the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
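
    Under the spherical-eye simplification described, the horizontal gaze angle follows from theta = asin(offset / radius), where the offset is the midpupil's displacement from the eye-corner midpoint and the radius comes from anthropometric data. The anthropometric ratio below is an illustrative assumption, not a value taken from the paper:

    ```python
    import math

    # Illustrative anthropometric assumption (not from the paper):
    # palpebral fissure width ~30 mm and eyeball radius ~12 mm, so the
    # eye-sphere radius in pixels is ~0.4x the corner-to-corner distance.
    RADIUS_RATIO = 12.0 / 30.0

    def gaze_angle_deg(corner_left, corner_right, midpupil):
        """Horizontal gaze angle from the two eye corners and the midpupil,
        using the spherical-eye simplification theta = asin(offset/radius)."""
        cx = (corner_left[0] + corner_right[0]) / 2.0
        width = math.hypot(corner_right[0] - corner_left[0],
                           corner_right[1] - corner_left[1])
        radius = width * RADIUS_RATIO
        offset = midpupil[0] - cx
        ratio = max(-1.0, min(1.0, offset / radius))  # clamp for asin domain
        return math.degrees(math.asin(ratio))
    ```

    With corners 30 px apart, a pupil displaced 6 px from the midpoint gives asin(6/12) = 30°; the paper's full model additionally handles variable head pose, which this sketch does not.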

  15. Gaze and eye-tracking solutions for psychological research.

    Science.gov (United States)

    Mele, Maria Laura; Federici, Stefano

    2012-08-01

    Eye-tracking technology is a growing field used to detect eye movements and analyze human processing of visual information for interactive and diagnostic applications. Different domains in scientific research such as neuroscience, experimental psychology, computer science and human factors can benefit from eye-tracking methods and techniques to unobtrusively investigate the quantitative evidence underlying visual processes. In order to meet the experimental requirements of this variety of application fields, different gaze- and eye-tracking solutions using high-speed cameras are being developed (e.g., eye-tracking glasses, head-mounted or desk-mounted systems), which are also compatible with other analysis devices such as magnetic resonance imaging. This work presents an overview of the main application fields of eye-tracking methodology in psychological research. In particular, two innovative solutions will be shown: (1) the SMI RED-M eye-tracker, a high-performance portable remote eye-tracker suitable for settings that require maximum mobility and flexibility; (2) a wearable mobile gaze-tracking device--the SMI eye-tracking glasses--which is suitable for real-world and virtual-environment research. For each kind of technology, the functions and possibilities of application in experimental psychology will be described by focusing on examples of experimental tasks (i.e., visual search, reading, natural tasks, scene viewing and other information processing) and theoretical approaches (e.g., embodied cognition).

  16. Frequency analysis of gaze points with CT colonography interpretation using eye gaze tracking system

    Science.gov (United States)

    Tsutsumi, Shoko; Tamashiro, Wataru; Sato, Mitsuru; Okajima, Mika; Ogura, Toshihiro; Doi, Kunio

    2017-03-01

    It is important to investigate the eye-tracking gaze points of experts in order to help trainees understand the image interpretation process. We investigated gaze points during CT colonography (CTC) interpretation and analyzed the differences in gaze points between experts and trainees. In this study, we attempted to understand how trainees can reach the level achieved by experts in viewing CTC. We used an eye gaze point sensing system, Gazefineder (JVCKENWOOD Corporation, Tokyo, Japan), which detects the pupil point and corneal reflection point by dark-pupil eye tracking. This system provides gaze-point images and spreadsheet data. The subjects were radiological technologists, either experienced or inexperienced in reading CTC. We performed observer studies of reading virtual pathology images and examined each observer's interpretation process using the gaze-point data. Furthermore, we performed a frequency analysis of the eye-tracking data using the fast Fourier transform (FFT). The frequency analysis let us characterize the difference in gaze points between experts and trainees: the trainee's data contained large amounts of both high-frequency and low-frequency components, whereas both components were relatively low for the expert. From the amount of eye movement in every 0.02 s, we found that the expert tended to interpret images slowly and calmly, while the trainee moved the eyes quickly and scanned wide areas. We can thus assess the difference in gaze points on CTC between experts and trainees by using the eye gaze point sensing system together with frequency analysis, and the gaze-point data can be used to evaluate potential improvements in CTC interpretation by trainees.
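
    The FFT-based frequency analysis of gaze data sampled every 0.02 s (50 Hz) might be sketched as follows; the recursive radix-2 FFT and the displacement signal are generic illustrations, not the authors' code:

    ```python
    import cmath

    def fft(x):
        """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
        n = len(x)
        if n == 1:
            return list(x)
        even = fft(x[0::2])
        odd = fft(x[1::2])
        out = [0j] * n
        for k in range(n // 2):
            tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
            out[k] = even[k] + tw
            out[k + n // 2] = even[k] - tw
        return out

    def amplitude_spectrum(displacements, fs=50.0):
        """(frequency, amplitude) pairs for the positive-frequency bins of a
        gaze-displacement signal sampled at fs Hz (0.02 s per sample)."""
        n = len(displacements)
        spec = fft([complex(d, 0.0) for d in displacements])
        return [(k * fs / n, abs(spec[k]) / n) for k in range(n // 2)]
    ```

    A slowly scanning expert would concentrate amplitude in the low-frequency bins, while rapid wide-area scanning adds high-frequency content, matching the comparison reported in the abstract.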

  17. Non-intrusive eye gaze tracking under natural head movements.

    Science.gov (United States)

    Kim, S; Sked, M; Ji, Q

    2004-01-01

    We propose an eye gaze tracking system that operates under natural head movements. The system consists of one CCD camera and two mirrors. Based on geometric and linear-algebra calculations, the mirrors rotate to follow head movements in order to keep the eyes within the view of the camera. Our system allows the subject's head to move 30 cm horizontally and 20 cm vertically, with spatial gaze resolutions of about 6° and 7°, respectively, and a frame rate of about 10 Hz. We also introduce a hierarchical generalized regression neural network (H-GRNN) scheme to map eye and mirror parameters to gaze, achieving a gaze estimation accuracy of 92% under head movements. Because H-GRNN generalizes, it also eliminates the need for per-user calibration for new subjects. Preliminary experiments show our system is accurate and robust in gaze tracking under large head movements.
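
    The paper does not spell out the H-GRNN internals, but a plain (non-hierarchical) GRNN is essentially Nadaraya-Watson kernel regression, which needs no iterative training. The sketch below maps a feature vector (e.g. eye and mirror parameters) to a gaze value under that reading; the smoothing parameter sigma is chosen arbitrarily:

    ```python
    import math

    class GRNN:
        """Generalized regression neural network (Nadaraya-Watson kernel
        regression): the prediction is a Gaussian-weighted average of the
        stored training targets, so fitting is just memorizing the data."""

        def __init__(self, sigma=0.5):
            self.sigma = sigma
            self.samples = []          # (feature vector, target) pairs

        def fit(self, X, y):
            self.samples = list(zip(X, y))
            return self

        def predict(self, x):
            num = den = 0.0
            for xi, yi in self.samples:
                d2 = sum((a - b) ** 2 for a, b in zip(x, xi))
                w = math.exp(-d2 / (2.0 * self.sigma ** 2))
                num += w * yi
                den += w
            return num / den
    ```

    The hierarchical variant in the paper presumably cascades such networks; this flat version only illustrates why a GRNN can generalize to new subjects without per-user calibration.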

  18. Follow my eyes: the gaze of politicians reflexively captures the gaze of ingroup voters.

    Directory of Open Access Journals (Sweden)

    Marco Tullio Liuzza

    Studies in human and non-human primates indicate that basic socio-cognitive operations are inherently linked to the power of gaze in capturing reflexively the attention of an observer. Although monkey studies indicate that the automatic tendency to follow the gaze of a conspecific is modulated by the leader-follower social status, evidence for such effects in humans is meager. Here, we used a gaze following paradigm where the directional gaze of right- or left-wing Italian political characters could influence the oculomotor behavior of ingroup or outgroup voters. We show that the gaze of Berlusconi, the right-wing leader currently dominating the Italian political landscape, potentiates and inhibits gaze following behavior in ingroup and outgroup voters, respectively. Importantly, the higher the perceived similarity in personality traits between voters and Berlusconi, the stronger the gaze interference effect. Thus, higher-order social variables such as political leadership and affiliation prepotently affect reflexive shifts of attention.

  19. The Development of Emotional Face and Eye Gaze Processing

    Science.gov (United States)

    Hoehl, Stefanie; Striano, Tricia

    2010-01-01

    Recent research has demonstrated that infants' attention towards novel objects is affected by an adult's emotional expression and eye gaze toward the object. The current event-related potential (ERP) study investigated how infants at 3, 6, and 9 months of age process fearful compared to neutral faces looking toward objects or averting gaze away…

  20. Combining head pose and eye location information for gaze estimation

    NARCIS (Netherlands)

    Valenti, R.; Sebe, N.; Gevers, T.

    2012-01-01

    Head pose and eye location for gaze estimation have been separately studied in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of nonfrontal faces, eye…

  1. Influence of Eye Gaze on Spoken Word Processing: An ERP Study with Infants

    Science.gov (United States)

    Parise, Eugenio; Handl, Andrea; Palumbo, Letizia; Friederici, Angela D.

    2011-01-01

    Eye gaze is an important communicative signal, both as mutual eye contact and as referential gaze to objects. To examine whether attention to speech versus nonspeech stimuli in 4- to 5-month-olds (n = 15) varies as a function of eye gaze, event-related brain potentials were used. Faces with mutual or averted gaze were presented in combination with…

  2. The Eyes Are the Windows to the Mind: Direct Eye Gaze Triggers the Ascription of Others' Minds.

    Science.gov (United States)

    Khalid, Saara; Deska, Jason C; Hugenberg, Kurt

    2016-12-01

    Eye gaze is a potent source of social information, with direct eye gaze signaling the desire to approach and averted eye gaze signaling avoidance. In the current work, we proposed that eye gaze signals whether or not to impute minds into others. Across four studies, we manipulated targets' eye gaze (i.e., direct vs. averted eye gaze) and measured explicit mind ascriptions (Studies 1a, 1b, and 2) and beliefs about the likelihood of targets having a mind (Study 3). In all four studies, we find novel evidence that the ascription of sophisticated humanlike minds to others is signaled by the display of direct eye gaze relative to averted eye gaze. Moreover, we provide evidence suggesting that this differential mentalization is due, at least in part, to beliefs that direct-gaze targets are more likely to instigate social interaction. In short, eye contact triggers mind perception.

  3. A real-time gaze position estimation method based on a 3-D eye model.

    Science.gov (United States)

    Park, Kang Ryoung

    2007-02-01

    This paper proposes a new gaze-detection method based on a 3-D eye position and the gaze vector of the human eyeball. Seven new developments compared to previous works are presented. First, a method using three camera systems, i.e., one wide-view camera and two narrow-view cameras, is proposed. The narrow-view cameras use auto-zooming, focusing, panning, and tilting procedures (based on the detected 3-D eye feature position) for gaze detection. This allows for natural head and eye movement by users. Second, previous conventional gaze-detection research used one or multiple illuminators without considering the specular reflection (SR) problems the illuminators cause for users who wear glasses. To solve this problem, a method based on dual illuminators is proposed in this paper. Third, the proposed method does not require user-dependent calibration, so all procedures for detecting gaze position operate automatically without human intervention. Fourth, intrinsic characteristics of the human eye, such as the disparity between the pupillary and visual axes, are considered in order to obtain accurate gaze positions. Fifth, all the coordinates obtained by the left and right narrow-view cameras, as well as the wide-view camera coordinates and the monitor coordinates, are unified. This simplifies the complex 3-D conversion calculation and allows for calculation of the 3-D feature position and gaze position on the monitor. Sixth, to improve eye-detection performance when using a wide-view camera, an adaptive-selection method is used, involving an IR-LED on/off scheme, an AdaBoost classifier, and a principal component analysis method based on the number of SR elements. Finally, the proposed method uses an eigenvector matrix (instead of simply averaging six gaze vectors) in order to obtain a more accurate final gaze vector that can compensate for noise. Experimental results show that the root mean square error of…
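
    One common reading of combining gaze vectors through an eigenvector matrix rather than simple averaging is to take the principal eigenvector of the sum of outer products of the six sampled vectors, which is insensitive to per-sample sign flips and down-weights noisy outliers. The sketch below, using power iteration, is an interpretation under that assumption, not the paper's exact procedure:

    ```python
    import math

    def principal_gaze(vectors, iters=100):
        """Dominant eigenvector of M = sum of v v^T over the samples,
        found by power iteration. Unlike a plain mean, the result does
        not depend on the sign of any individual sample."""
        n = len(vectors[0])
        M = [[sum(v[i] * v[j] for v in vectors) for j in range(n)]
             for i in range(n)]
        u = [1.0] * n
        for _ in range(iters):
            u = [sum(M[i][j] * u[j] for j in range(n)) for i in range(n)]
            norm = math.sqrt(sum(c * c for c in u))
            u = [c / norm for c in u]
        return u
    ```

    For a bundle of near-parallel gaze samples the result is close to their common direction even if one sample points the opposite way, which is where this construction beats averaging.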

  4. Visual Foraging With Fingers and Eye Gaze.

    Science.gov (United States)

    Jóhannesson, Ómar I; Thornton, Ian M; Smith, Irene J; Chetverikov, Andrey; Kristjánsson, Árni

    2016-03-01

    A popular model of the function of selective visual attention involves search where a single target is to be found among distractors. For many scenarios, a more realistic model involves search for multiple targets of various types, since natural tasks typically do not involve a single target. Here we present results from a novel multiple-target foraging paradigm. We compare finger foraging where observers cancel a set of predesignated targets by tapping them, to gaze foraging where observers cancel items by fixating them for 100 ms. During finger foraging, for most observers, there was a large difference between foraging based on a single feature, where observers switch easily between target types, and foraging based on a conjunction of features where observers tended to stick to one target type. The pattern was notably different during gaze foraging where these condition differences were smaller. Two conclusions follow: (a) The fact that a sizeable number of observers (in particular during gaze foraging) had little trouble switching between different target types raises challenges for many prominent theoretical accounts of visual attention and working memory. (b) While caveats must be noted for the comparison of gaze and finger foraging, the results suggest that selection mechanisms for gaze and pointing have different operational constraints.

  5. The duality of gaze: Eyes extract and signal social information during sustained cooperative and competitive dyadic gaze

    Directory of Open Access Journals (Sweden)

    Michelle Jarick

    2015-09-01

    In contrast to nonhuman primate eyes, which have a dark sclera surrounding a dark iris, human eyes have a white sclera that surrounds a dark iris. This high-contrast morphology allows humans to determine quickly and easily where others are looking and to infer what they are attending to. In recent years an enormous body of work has used photos and schematic images of faces to study these aspects of social attention, e.g., the selection of the eyes of others and the shift of attention to where those eyes are directed. However, evolutionary theory holds that humans did not develop a high-contrast morphology simply to use the eyes of others as attentional cues; rather, they sacrificed camouflage for communication, that is, to signal their thoughts and intentions to others. In the present study we demonstrate the importance of this by taking as our starting point the hypothesis that a cornerstone of nonverbal communication is the eye contact between individuals and the time for which it is held. In a single simple study we show experimentally that the effect of eye contact can be quickly and profoundly altered merely by having participants, who had never met before, play a game in a cooperative or competitive manner. After the game, participants were asked to make eye contact for a prolonged period of time (10 minutes). Those who had played the game cooperatively found this terribly difficult to do, repeatedly talking and breaking gaze. In contrast, those who had played the game competitively were able to stare quietly at each other for a sustained period. Collectively these data demonstrate that when looking at the eyes of a real person one both acquires and signals information to the other person. This duality of gaze is critical to nonverbal communication, with the nature of that communication shaped by the relationship between individuals, e.g., cooperative or competitive.

  6. Gaze Stripes: Image-Based Visualization of Eye Tracking Data.

    Science.gov (United States)

    Kurzhals, Kuno; Hlawatsch, Marcel; Heimerl, Florian; Burch, Michael; Ertl, Thomas; Weiskopf, Daniel

    2016-01-01

    We present a new visualization approach for displaying eye tracking data from multiple participants. We aim to show the spatio-temporal data of the gaze points in the context of the underlying image or video stimulus without occlusion. Our technique, denoted as gaze stripes, does not require the explicit definition of areas of interest but directly uses the image data around the gaze points, similar to thumbnails for images. A gaze stripe consists of a sequence of such gaze point images, oriented along a horizontal timeline. By displaying multiple aligned gaze stripes, it is possible to analyze and compare the viewing behavior of the participants over time. Since the analysis is carried out directly on the image data, expensive post-processing or manual annotation are not required. Therefore, not only patterns and outliers in the participants' scanpaths can be detected, but the context of the stimulus is available as well. Furthermore, our approach is especially well suited for dynamic stimuli due to the non-aggregated temporal mapping. Complementary views, i.e., markers, notes, screenshots, histograms, and results from automatic clustering, can be added to the visualization to display analysis results. We illustrate the usefulness of our technique on static and dynamic stimuli. Furthermore, we discuss the limitations and scalability of our approach in comparison to established visualization techniques.
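
    The core of a gaze stripe, cropping a thumbnail around each gaze point and laying the thumbnails along a horizontal timeline, can be sketched on a plain 2-D array standing in for a video frame. The helper names and patch size are illustrative, not from the paper:

    ```python
    def crop_patch(image, cx, cy, half):
        """Square patch of side 2*half+1 around (cx, cy), clamped to the
        image bounds. `image` is a row-major list of rows."""
        h, w = len(image), len(image[0])
        rows = range(max(0, cy - half), min(h, cy + half + 1))
        cols = range(max(0, cx - half), min(w, cx + half + 1))
        return [[image[r][c] for c in cols] for r in rows]

    def gaze_stripe(image, gaze_points, half=1):
        """Patches around successive (x, y) gaze points, concatenated left
        to right: one participant's stripe along the timeline."""
        patches = [crop_patch(image, x, y, half) for (x, y) in gaze_points]
        height = min(len(p) for p in patches)
        return [sum((p[r] for p in patches), []) for r in range(height)]
    ```

    Stacking one such stripe per participant gives the aligned comparison view the abstract describes; a real implementation would crop from each video frame at the gaze timestamp rather than from a single image.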

  7. Does the 'P300' speller depend on eye gaze?

    Science.gov (United States)

    Brunner, P.; Joshi, S.; Briskin, S.; Wolpaw, J. R.; Bischof, H.; Schalk, G.

    2010-10-01

    Many people affected by debilitating neuromuscular disorders such as amyotrophic lateral sclerosis, brainstem stroke or spinal cord injury are impaired in their ability to, or are even unable to, communicate. A brain-computer interface (BCI) uses brain signals, rather than muscles, to re-establish communication with the outside world. One particular BCI approach is the so-called 'P300 matrix speller' that was first described by Farwell and Donchin (1988 Electroencephalogr. Clin. Neurophysiol. 70 510-23). It has been widely assumed that this method does not depend on the ability to focus on the desired character, because it was thought that it relies primarily on the P300-evoked potential and minimally, if at all, on other EEG features such as the visual-evoked potential (VEP). This issue is highly relevant for the clinical application of this BCI method, because eye movements may be impaired or lost in the relevant user population. This study investigated the extent to which the performance in a 'P300' speller BCI depends on eye gaze. We evaluated the performance of 17 healthy subjects using a 'P300' matrix speller under two conditions. Under one condition ('letter'), the subjects focused their eye gaze on the intended letter, while under the second condition ('center'), the subjects focused their eye gaze on a fixation cross that was located in the center of the matrix. The results show that the performance of the 'P300' matrix speller in normal subjects depends in considerable measure on gaze direction. They thereby disprove a widespread assumption in BCI research, and suggest that this BCI might function more effectively for people who retain some eye-movement control. The applicability of these findings to people with severe neuromuscular disabilities (particularly in eye-movements) remains to be determined.
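
    In the Farwell-Donchin matrix speller referenced here, rows and columns of a character matrix flash, and the attended character is taken as the intersection of the row and column with the largest averaged post-stimulus response. A minimal sketch with made-up response scores (the matrix layout is the conventional 6x6 grid, not necessarily the one used in this study):

    ```python
    MATRIX = [
        "ABCDEF",
        "GHIJKL",
        "MNOPQR",
        "STUVWX",
        "YZ1234",
        "56789_",
    ]

    def classify(row_scores, col_scores):
        """Pick the character at the intersection of the row and column
        whose averaged response (e.g. P300 amplitude) is largest."""
        r = max(range(len(row_scores)), key=lambda i: row_scores[i])
        c = max(range(len(col_scores)), key=lambda j: col_scores[j])
        return MATRIX[r][c]
    ```

    The study's finding that performance depends on gaze direction implies these scores contain gaze-dependent VEP contributions, not P300 amplitude alone.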

  8. Eye gaze estimation from the elliptical features of one iris

    Science.gov (United States)

    Zhang, Wen; Zhang, Tai-Ning; Chang, Sheng-Jiang

    2011-04-01

The accuracy of eye gaze estimation using image information is affected by several factors, including image resolution, the anatomical structure of the eye, and posture changes. The irregular movements of the head and eye create issues that are currently being researched to enable better use of this key technology. In this paper, we describe an effective way of estimating eye gaze from the elliptical features of one iris without using an auxiliary light source, head-fixing equipment, or multiple cameras. First, we provide a preliminary estimate of the gaze direction, and then we obtain the vectors which describe the translation and rotation of the eyeball by applying a central projection method on the plane which passes through the line of sight. This helps us avoid the complex computations involved in previous methods. We also disambiguate the solution based on experimental findings. Second, error correction is conducted on a back-propagation neural network trained by a sample collection of translation and rotation vectors. Extensive experimental studies are conducted to assess the efficiency and robustness of our method. Results reveal that our method has a better performance compared to a typical previous method.
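The preliminary estimate in this record rests on simple projective geometry: a circular iris viewed off-axis projects to an ellipse whose minor-to-major axis ratio encodes the viewing angle. A minimal Python sketch of that relation (an illustration of the geometry, not the authors' implementation; the function name and the linearized axis decomposition are assumptions):

```python
import math

def gaze_from_iris_ellipse(major_axis, minor_axis, orientation_deg):
    """Estimate gaze angles from the ellipse an iris projects to.

    A circular iris seen at angle theta projects to an ellipse whose
    minor/major axis ratio equals cos(theta); the ellipse orientation tells
    us around which image axis the eyeball is rotated.  This is only a
    preliminary estimate, before any error correction.
    """
    ratio = max(0.0, min(1.0, minor_axis / major_axis))
    tilt = math.degrees(math.acos(ratio))   # angle between optical axis and camera
    # Decompose the tilt along the image axes using the ellipse orientation.
    phi = math.radians(orientation_deg)
    horizontal = tilt * math.cos(phi)
    vertical = tilt * math.sin(phi)
    return horizontal, vertical

# A frontal iris (a circle) implies zero tilt:
print(gaze_from_iris_ellipse(10.0, 10.0, 0.0))  # -> (0.0, 0.0)
```

The sign ambiguity of the projection (left/right tilt give the same ellipse) is ignored here; the paper resolves it experimentally and then applies the neural-network error correction described above.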

  9. Compact near-to-eye display with integrated gaze tracker

    Science.gov (United States)

    Järvenpää, Toni; Aaltonen, Viljakaisa

    2008-04-01

A Near-to-Eye Display (NED) offers a big-screen experience to the user anywhere, anytime. It provides a way to perceive an image larger than the physical device itself. Commercially available NEDs tend to be quite bulky and uncomfortable to wear. However, by using very thin plastic light guides with diffractive structures on their surfaces, many of the known deficiencies can be notably reduced. These Exit Pupil Expander (EPE) light guides enable the thin, light, user-friendly and high-performing see-through NED that we have demonstrated. To be able to interact with the displayed UI efficiently, we have also integrated a video-based gaze tracker into the NED. The narrow light beam of an infrared light source is divided and expanded inside the same EPEs to produce wide collimated beams out of the EPE towards the eyes. A miniature video camera images the cornea, and the eye gaze direction is accurately calculated by locating the pupil and the glints of the infrared beams. After a simple and robust per-user calibration, the data from the highly integrated gaze tracker reflects the user's focus point in the displayed image, which can be used as an input device for the NED system. Realizable applications range from eye typing to playing games, and far beyond.
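The glint-based calculation in this record follows the classic pupil-centre/corneal-reflection idea: the vector from a glint to the pupil centre is mapped to display coordinates through a per-user calibration. A hedged sketch, assuming a simple linear per-axis model (the paper reports only "a simple and robust per-user calibration" and does not publish its exact form; all names here are illustrative):

```python
def fit_axis(samples, targets):
    """Least-squares fit of target = a * sample + b for one axis."""
    n = len(samples)
    mean_s = sum(samples) / n
    mean_t = sum(targets) / n
    var = sum((s - mean_s) ** 2 for s in samples)
    a = sum((s - mean_s) * (t - mean_t) for s, t in zip(samples, targets)) / var
    b = mean_t - a * mean_s
    return a, b

def calibrate(pupil_glint_vectors, screen_points):
    """Map pupil-glint vectors to display coordinates via a per-axis linear fit.

    Illustrative baseline only -- a textbook pupil-centre/corneal-reflection
    calibration, not the device's actual model.
    """
    ax = fit_axis([v[0] for v in pupil_glint_vectors], [p[0] for p in screen_points])
    ay = fit_axis([v[1] for v in pupil_glint_vectors], [p[1] for p in screen_points])
    def to_screen(vector):
        return (ax[0] * vector[0] + ax[1], ay[0] * vector[1] + ay[1])
    return to_screen

# Hypothetical calibration: the user fixates four known display corners.
to_screen = calibrate([(0, 0), (1, 0), (0, 1), (1, 1)],
                      [(0, 0), (100, 0), (0, 50), (100, 50)])
print(to_screen((0.5, 0.5)))  # -> (50.0, 25.0)
```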

  10. What Do Eye Gaze Metrics Tell Us about Motor Imagery?

    Directory of Open Access Journals (Sweden)

    Elodie Poiroux

Full Text Available Many of the brain structures involved in performing real movements also have increased activity during imagined movements or during motor observation, and this could be the neural substrate underlying the effects of motor imagery in motor learning or motor rehabilitation. In the absence of any objective physiological method of measurement, it is currently impossible to be sure that the patient is indeed performing the task as instructed. Eye gaze recording during a motor imagery task could be a possible way to "spy" on the activity an individual is really engaged in. The aim of the present study was to compare the pattern of eye movement metrics during motor observation, visual and kinesthetic motor imagery (VI, KI), target fixation, and mental calculation. Twenty-two healthy subjects (16 females and 6 males) were required to perform tests in five conditions using imagery in the Box and Block Test tasks following the procedure described by Liepert et al. Eye movements were analysed by a non-invasive oculometric measure (SMI RED250 system). Two parameters describing gaze pattern were calculated: the index of ocular mobility (saccade duration over saccade + fixation duration) and the number of midline crossings (i.e., the number of times the subject's gaze crossed the midline of the screen when performing the different tasks). Both parameters were significantly different between visual imagery and kinesthetic imagery, visual imagery and mental calculation, and visual imagery and target fixation. For the first time we were able to show that eye movement patterns are different during VI and KI tasks. Our results suggest gaze metric parameters could be used as an objective, unobtrusive approach to assess engagement in a motor imagery task. Further studies should define how oculomotor parameters could be used as an indicator of the rehabilitation task a patient is engaged in.
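Both gaze parameters reported in this record are straightforward to compute from labelled eye events. A sketch, assuming a hypothetical event list of (type, duration in ms, horizontal position) tuples rather than the SMI system's actual export format:

```python
def ocular_mobility_index(events):
    """Index of ocular mobility: saccade time / (saccade + fixation time)."""
    sacc = sum(d for kind, d, _ in events if kind == "saccade")
    fix = sum(d for kind, d, _ in events if kind == "fixation")
    return sacc / (sacc + fix)

def midline_crossings(events, midline_x):
    """Count how often consecutive events fall on opposite sides of the midline."""
    xs = [x for _, _, x in events]
    return sum(1 for a, b in zip(xs, xs[1:]) if (a - midline_x) * (b - midline_x) < 0)

events = [("fixation", 300, 100), ("saccade", 40, 400),
          ("fixation", 250, 700), ("saccade", 35, 300)]
print(round(ocular_mobility_index(events), 3))  # -> 0.12
print(midline_crossings(events, midline_x=500))  # -> 2
```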

  11. Eye gaze estimation from a video

    OpenAIRE

    Merad, Djamel; Mailles-Viard Metz, Stéphanie; Miguet, Serge

    2006-01-01

International audience; Our work focuses on the interdisciplinary field of detailed analysis of behaviors exhibited by individuals during sessions of distributed collaboration. With a particular focus on ergonomics, we propose new mechanisms to be integrated into existing tools to enable increased productivity in distributed learning and working. Our technique is to record ocular movements (eye tracking) to analyze various scenarios of distributed collaboration in the context of computer-based training…

  12. Reduced eye gaze explains "fear blindness" in childhood psychopathic traits.

    Science.gov (United States)

    Dadds, Mark R; El Masry, Yasmeen; Wimalaweera, Subodha; Guastella, Adam J

    2008-04-01

Damage to the amygdala produces deficits in the ability to recognize fear due to attentional neglect of other people's eyes. Interestingly, children with high psychopathic traits also show problems recognizing fear; however, the reasons for this are not known. This study tested whether psychopathic traits are associated with reduced attention to the eye region of other people's faces. Adolescent males (N = 100; age mean 12.4 years, SD 2.2) were stratified by psychopathic traits and assessed using a Tobii eye tracker to measure primacy, number, and duration of fixations to the eye and mouth regions of emotional faces presented via the UNSW Facial Emotion Task. High psychopathic traits predicted poor fear recognition (1.21 versus 1.35), fewer eye fixations, and fewer first foci to the eye region (1.01 versus 2.01). All indices of gaze to the eye region correlated positively with accurate recognition of fear for the high psychopathy group, especially the number of times that subjects looked at the eyes first (r = .50). Attention to other people's eyes is reduced in young people with high psychopathic traits, thus accounting for their problems with fear recognition, and is consistent with amygdala dysfunction failing to promote attention to emotional salience in the environment.

  13. Fusing Eye-gaze and Speech Recognition for Tracking in an Automatic Reading Tutor

    DEFF Research Database (Denmark)

    Rasmussen, Morten Højfeldt; Tan, Zheng-Hua

    2013-01-01

In this paper we present a novel approach for automatically tracking the reading progress using a combination of eye-gaze tracking and speech recognition. The two are fused by first generating word probabilities based on eye-gaze information and then using these probabilities to augment the language model.
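The fusion described in this record can be illustrated with a log-linear interpolation of the two probability sources. The weight, function names, and normalization below are assumptions for illustration, not the authors' published parameters:

```python
import math

def fuse_scores(lm_probs, gaze_probs, weight=0.5):
    """Log-linear interpolation of language-model and gaze-based word probabilities.

    A sketch of the fusion idea: words near the reader's gaze are boosted
    before the recognizer picks its best hypothesis.  Words the gaze model
    never saw get a small floor probability.
    """
    fused = {w: weight * math.log(lm_probs[w])
                + (1 - weight) * math.log(gaze_probs.get(w, 1e-6))
             for w in lm_probs}
    norm = math.log(sum(math.exp(s) for s in fused.values()))
    return {w: math.exp(s - norm) for w, s in fused.items()}

lm = {"cat": 0.5, "cap": 0.5}
gaze = {"cat": 0.9, "cap": 0.1}   # the reader's eyes are on "cat"
fused = fuse_scores(lm, gaze)
print(max(fused, key=fused.get))  # -> cat
```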

  14. "Gaze Leading": Initiating Simulated Joint Attention Influences Eye Movements and Choice Behavior

    Science.gov (United States)

    Bayliss, Andrew P.; Murphy, Emily; Naughtin, Claire K.; Kritikos, Ada; Schilbach, Leonhard; Becker, Stefanie I.

    2013-01-01

    Recent research in adults has made great use of the gaze cuing paradigm to understand the behavior of the follower in joint attention episodes. We implemented a gaze leading task to investigate the initiator--the other person in these triadic interactions. In a series of gaze-contingent eye-tracking studies, we show that fixation dwell time upon…

  15. Noncontact binocular eye-gaze tracking for point-of-gaze estimation in three dimensions.

    Science.gov (United States)

    Hennessey, Craig; Lawrence, Peter

    2009-03-01

    Binocular eye-gaze tracking can be used to estimate the point-of-gaze (POG) of a subject in real-world 3-D space using the vergence of the eyes. In this paper, a novel noncontact model-based technique for 3-D POG estimation is presented. The noncontact system allows people to select real-world objects in 3-D physical space using their eyes, without the need for head-mounted equipment. Remote 3-D POG estimation may be especially useful for persons with quadriplegia or Amyotrophic Lateral Sclerosis. It would also enable a user to select 3-D points in space generated by 3-D volumetric displays, with potential applications to medical imaging and telesurgery. Using a model-based POG estimation algorithm allows for free head motion and a single stage of calibration. It is shown that an average accuracy of 3.93 cm was achieved over a workspace volume of 30 x 23 x 25 cm (W x H x D) with a maximum latency of 1.5 s due to the digital filtering employed. The users were free to naturally move and reorient their heads while operating the system, within an allowable headspace of 3 cm x 9 cm x 14 cm.
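The vergence-based POG estimate amounts to triangulating two gaze rays that rarely intersect exactly. A common approach, sketched below, returns the midpoint of the shortest segment between the two rays; this is a standard triangulation for illustration, not the paper's full model-based algorithm:

```python
def point_of_gaze(o_left, d_left, o_right, d_right):
    """Triangulate the 3-D point of gaze from two gaze rays (one per eye).

    Each ray is an eye-centre origin `o` and a direction `d`.  Since the two
    lines rarely intersect exactly, we return the midpoint of the shortest
    segment between them.
    """
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def scale(a, s): return [x * s for x in a]

    w = sub(o_left, o_right)
    a, b, c = dot(d_left, d_left), dot(d_left, d_right), dot(d_right, d_right)
    d, e = dot(d_left, w), dot(d_right, w)
    denom = a * c - b * b              # approaches 0 when the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p_left = add(o_left, scale(d_left, s))
    p_right = add(o_right, scale(d_right, t))
    return scale(add(p_left, p_right), 0.5)

# Eyes 6 cm apart converging on a point 40 cm ahead:
print(point_of_gaze([-3, 0, 0], [3, 0, 40], [3, 0, 0], [-3, 0, 40]))  # -> [0.0, 0.0, 40.0]
```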

  16. All eyes on me?! Social anxiety and self-directed perception of eye gaze.

    Science.gov (United States)

    Schulze, Lars; Lobmaier, Janek S; Arnold, Manuel; Renneberg, Babette

    2013-01-01

To date, little is known about the self-directed perception and processing of subtle gaze cues in social anxiety, which might however contribute to excessive feelings of being looked at by others. Using a web-based approach, participants (n=174) were asked whether or not briefly (300 ms) presented facial expressions modulated in gaze direction (0°, 2°, 4°, 6°, 8°) and valence (angry, fearful, happy, neutral) were directed at them. The results demonstrate a positive, linear relationship between self-reported social anxiety and stronger self-directed perception of others' gaze directions, particularly for negative (angry, fearful) and neutral expressions. Furthermore, faster responding was found for gaze more clearly directed at socially anxious individuals (0°, 2°, and 4°), suggesting a tendency to avoid direct gaze. In sum, the results illustrate an altered self-directed perception of subtle gaze cues. The possibly amplifying effects of social stress on this biased self-directed perception of eye gaze are discussed.

  17. Eye Gaze Patterns in Conversations: There is More to Conversational Agents Than Meets the Eyes

    NARCIS (Netherlands)

    Vertegaal, R.P.H.; Slagter, R.; Veer, van der G.C.; Nijholt, A.; Jacko, J.; Sears, A.; Beaudouin-Lafon, M.; Jacob, R.J.K.

    2001-01-01

In multi-agent, multi-user environments, users as well as agents should have a means of establishing who is talking to whom. In this paper, we present an experiment aimed at evaluating whether gaze directional cues of users could be used for this purpose. Using an eye tracker, we measured subject gaze…

  18. Looking at Eye Gaze Processing and Its Neural Correlates in Infancy--Implications for Social Development and Autism Spectrum Disorder

    Science.gov (United States)

    Hoehl, Stefanie; Reid, Vincent M.; Parise, Eugenio; Handl, Andrea; Palumbo, Letizia; Striano, Tricia

    2009-01-01

    The importance of eye gaze as a means of communication is indisputable. However, there is debate about whether there is a dedicated neural module, which functions as an eye gaze detector and when infants are able to use eye gaze cues in a referential way. The application of neuroscience methodologies to developmental psychology has provided new…

  19. Time course of superior temporal sulcus activity in response to eye gaze: a combined fMRI and MEG study

    Science.gov (United States)

    Kochiyama, Takanori; Uono, Shota; Yoshikawa, Sakiko

    2008-01-01

    The human superior temporal sulcus (STS) has been suggested to be involved in gaze processing, but temporal data regarding this issue are lacking. We investigated this topic by combining fMRI and MEG in four normal subjects. Photographs of faces with either averted or straight eye gazes were presented and subjects passively viewed the stimuli. First, we analyzed the brain areas involved using fMRI. A group analysis revealed activation of the STS for averted compared to straight gazes, which was confirmed in all subjects. We then measured brain activity using MEG, and conducted a 3D spatial filter analysis. The STS showed higher activity in response to averted versus straight gazes during the 150–200 ms period, peaking at around 170 ms, after stimulus onset. In contrast, the fusiform gyrus, which was detected by the main effect of stimulus presentations in fMRI analysis, exhibited comparable activity across straight and averted gazes at about 170 ms. These results indicate involvement of the human STS in rapid processing of the eye gaze of another individual. PMID:19015114

  20. Human-like object tracking and gaze estimation with PKD android

    Science.gov (United States)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.

  1. Experimental test of spatial updating models for monkey eye-head gaze shifts.

    Directory of Open Access Journals (Sweden)

    Tom J Van Grootel

Full Text Available How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static) or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye and head positions rather than relative eye and head displacements.

  2. Social eye gaze modulates processing of speech and co-speech gesture.

    Science.gov (United States)

    Holler, Judith; Schubotz, Louise; Kelly, Spencer; Hagoort, Peter; Schuetze, Manuela; Özyürek, Aslı

    2014-12-01

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech+gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker's preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients' speech processing suffers, gestures can enhance the comprehension of a speaker's message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension.

  3. Cheap and Easy PIN Entering Using Eye Gaze

    Directory of Open Access Journals (Sweden)

    Kasprowski Pawel

    2014-03-01

Full Text Available PINs are one of the most popular methods of performing simple and fast user authentication. PIN stands for Personal Identification Number, which may have any number of digits or even letters. Nevertheless, the 4-digit PIN is the most common and is used, for instance, in ATMs and cellular phones. The main advantage of the PIN is that it is easy to remember and fast to enter. There are, however, some drawbacks. One of them, addressed in this paper, is the possibility of stealing a PIN by a technique called 'shoulder surfing'. To avoid such problems, a novel method of PIN entering is proposed. Instead of using a numerical keyboard, the PIN may be entered by eye gaze, which is a hands-free, easy and robust technique.
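The dwell-based selection such a gaze keypad relies on can be sketched in a few lines. The keypad layout, dwell threshold, and sampling interval below are illustrative assumptions, not the parameters used in the paper:

```python
def enter_pin(gaze_samples, keypad_regions, dwell_ms=500, sample_ms=50):
    """Dwell-based PIN entry: a digit is accepted once gaze rests on its key
    long enough.  `gaze_samples` are (x, y) points at a fixed sampling rate;
    `keypad_regions` maps digits to (x0, y0, x1, y1) screen rectangles.
    """
    def key_at(point):
        for digit, (x0, y0, x1, y1) in keypad_regions.items():
            if x0 <= point[0] < x1 and y0 <= point[1] < y1:
                return digit
        return None

    pin, current, held = [], None, 0
    for sample in gaze_samples:
        digit = key_at(sample)
        if digit is not None and digit == current:
            held += sample_ms
            if held >= dwell_ms:
                pin.append(digit)
                current, held = None, 0   # require re-entry before repeating a key
        else:
            current, held = digit, (sample_ms if digit is not None else 0)
    return "".join(pin)

keypad = {"1": (0, 0, 100, 100), "2": (100, 0, 200, 100)}
samples = [(50, 50)] * 10 + [(150, 50)] * 10   # half a second on each key
print(enter_pin(samples, keypad))  # -> 12
```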

  4. Observing Third-Party Attentional Relationships Affects Infants' Gaze Following: An Eye-Tracking Study

    Science.gov (United States)

    Meng, Xianwei; Uto, Yusuke; Hashiya, Kazuhide

    2017-01-01

Infants not only respond to direct social actions toward themselves; they also pay attention to relevant information from third-party interactions. However, it is unclear whether and how infants recognize the structure of these interactions. The current study aimed to investigate how infants' observation of third-party attentional relationships influences their subsequent gaze following. Nine-month-old, 1-year-old, and 1.5-year-old infants (N = 72, 37 girls) observed video clips in which a female actor gazed at one of two toys after she and her partner either silently faced each other (face-to-face condition) or looked in opposite directions (back-to-back condition). An eye tracker was used to record the infants' looking behavior (e.g., looking time, looking frequency). The analyses revealed that younger infants followed the actor's gaze toward the target object in both conditions, but this was not the case for the 1.5-year-old infants in the back-to-back condition. Furthermore, we found that infants' gaze following could be negatively predicted by their expectation of the partner's response to the actor's head turn (i.e., they shift their gaze toward the partner immediately after they realize that the actor's head will turn). These findings suggested that the sensitivity to the difference in knowledge and attentional states in the second year of human life could be extended to third-party interactions, even without any direct involvement in the situation. Additionally, a spontaneous concern with the epistemic gap between self and other, as well as between others, develops by this age. These processes might be considered part of the fundamental basis for human communication.

  5. Coordinated Flexibility: How Initial Gaze Position Modulates Eye-Hand Coordination and Reaching

    Science.gov (United States)

    Adam, Jos J.; Buetti, Simona; Kerzel, Dirk

    2012-01-01

    Reaching to targets in space requires the coordination of eye and hand movements. In two experiments, we recorded eye and hand kinematics to examine the role of gaze position at target onset on eye-hand coordination and reaching performance. Experiment 1 showed that with eyes and hand aligned on the same peripheral start location, time lags…

  6. Eye can see what you want: Posterior Intraparietal Sulcus encodes the object of an actor's gaze

    NARCIS (Netherlands)

    Ramsey, R.; Cross, E.S.; Hamilton, A.F.D.C.

    2011-01-01

In a social setting, seeing Sally look at a clock means something different to seeing her gaze longingly at a slice of chocolate cake. In both cases, her eyes and face might be turned rightward, but the information conveyed is markedly different, depending on the object of her gaze. Numerous studies…

  7. Do as eye say: gaze cueing and language in a real-world social interaction.

    Science.gov (United States)

    Macdonald, Ross G; Tatler, Benjamin W

    2013-03-11

    Gaze cues are important in communication. In social interactions gaze cues usually occur with spoken language, yet most previous research has used artificial paradigms without dialogue. The present study investigates the interaction between gaze and language using a real-world paradigm. Each participant followed instructions to build a series of abstract structures out of building blocks, while their eye movements were recorded. The instructor varied the specificity of the instructions (unambiguous or ambiguous) and the presence of gaze cues (present or absent) between participants. Fixations to the blocks were recorded and task performance was measured. The presence of gaze cues led to more accurate performance, more accurate visual selection of the target block and more fixations towards the instructor when ambiguous instructions were given, but not when unambiguous instructions were given. We conclude that people only utilize the gaze cues of others when the cues provide useful information.

  8. Specificity of Age-Related Differences in Eye-Gaze Following: Evidence From Social and Nonsocial Stimuli.

    Science.gov (United States)

    Slessor, Gillian; Venturini, Cristina; Bonny, Emily J; Insch, Pauline M; Rokaszewicz, Anna; Finnerty, Ailbhe N

    2016-01-01

Eye-gaze following is a fundamental social skill, facilitating communication. The present series of studies explored adult age-related differences in this key social-cognitive ability. In Study 1 younger and older adult participants completed a cueing task in which eye-gaze cues were predictive or non-predictive of target location. Another eye-gaze cueing task, assessing the influence of congruent and incongruent eye-gaze cues relative to trials which provided no cue to target location, was administered in Study 2. Finally, in Study 3 the eye-gaze cue was replaced by an arrow. In Study 1 older adults showed less evidence of gaze following than younger participants when required to strategically follow predictive eye-gaze cues and when making automatic shifts of attention to non-predictive eye-gaze cues. Findings from Study 2 suggested that, unlike younger adults, older participants showed no facilitation effect and thus did not follow congruent eye-gaze cues. They also had significantly weaker attentional costs than their younger counterparts. These age-related differences were not found in the non-social arrow cueing task. Taken together these findings suggest older adults do not use eye-gaze cues to engage in joint attention, and have specific social difficulties decoding critical information from the eye region.

  9. Predictive gaze cues and personality judgments: Should eye trust you?

    Science.gov (United States)

    Bayliss, Andrew P; Tipper, Steven P

    2006-06-01

    Although following another person's gaze is essential in fluent social interactions, the reflexive nature of this gaze-cuing effect means that gaze can be used to deceive. In a gaze-cuing procedure, participants were presented with several faces that looked to the left or right. Some faces always looked to the target (predictive-valid), some never looked to the target (predictive-invalid), and others looked toward and away from the target in equal proportions (nonpredictive). The standard gaze-cuing effects appeared to be unaffected by these contingencies. Nevertheless, participants tended to choose the predictive-valid faces as appearing more trustworthy than the predictive-invalid faces. This effect was negatively related to scores on a scale assessing autistic-like traits. Further, we present tentative evidence that the "deceptive" faces were encoded more strongly in memory than the "cooperative" faces. These data demonstrate the important interactions among attention, gaze perception, facial identity recognition, and personality judgments.

  10. Speaking and Listening with the Eyes: Gaze Signaling during Dyadic Interactions.

    Science.gov (United States)

    Ho, Simon; Foulsham, Tom; Kingstone, Alan

    2015-01-01

    Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest.
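The cross-correlational analysis described here, lagging one signal against the other to find which one leads, can be sketched as follows (a simplified estimator on binary gaze/speech time series, not the authors' exact pipeline):

```python
def cross_correlation(x, y, max_lag):
    """Normalised cross-correlation of two equal-length signals at each lag.

    Positive lags mean `x` leads `y`.  Normalisation uses the full-signal
    standard deviations, a common simplification for short lags.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    def corr(lag):
        num = sum((x[i] - mx) * (y[i + lag] - my)
                  for i in range(n) if 0 <= i + lag < n)
        return num / (sx * sy) if sx and sy else 0.0
    return {lag: corr(lag) for lag in range(-max_lag, max_lag + 1)}

# Speaker gaze (1 = looking at the listener) precedes the speech hand-over:
gaze   = [0, 0, 1, 1, 1, 0, 0, 0]
speech = [0, 0, 0, 0, 1, 1, 1, 0]
corrs = cross_correlation(gaze, speech, max_lag=3)
print(max(corrs, key=corrs.get))  # -> 2 (gaze leads speech by two samples)
```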

  12. "Avoiding or approaching eyes"? Introversion/extraversion affects the gaze-cueing effect.

    Science.gov (United States)

    Ponari, Marta; Trojano, Luigi; Grossi, Dario; Conson, Massimiliano

    2013-08-01

We investigated whether the extra-/introversion personality dimension can influence the processing of others' eye gaze direction and emotional facial expression during a target detection task. On the basis of previous evidence showing that self-reported trait anxiety can affect gaze-cueing with emotional faces, we also verified whether trait anxiety can modulate the influence of intro-/extraversion on behavioral performance. Fearful, happy, angry or neutral faces, with either direct or averted gaze, were presented before the target appeared in spatial locations congruent or incongruent with the stimuli's eye gaze direction. Results showed a significant influence of the intro-/extraversion dimension on the gaze-cueing effect for angry, happy, and neutral faces with averted gaze. Introverts did not show the gaze congruency effect when viewing angry expressions, but did so with happy and neutral faces; extraverts showed the opposite pattern. Importantly, the influence of intro-/extraversion on gaze-cueing was not mediated by trait anxiety. These findings demonstrate that personality differences can shape the processing of interactions between relevant social signals.

  13. Visual Gaze Estimation by Joint Head and Eye Information

    NARCIS (Netherlands)

    Valenti, R.; Lablack, A.; Sebe, N.; Djeraba, C.; Gevers, T.

    2010-01-01

In this paper, we present an unconstrained visual gaze estimation system. The proposed method extracts the visual field of view of a person looking at a target scene in order to estimate the approximate location of interest (visual gaze). The novelty of the system is the joint use of head pose and eye information…

  14. Simple gaze-contingent cues guide eye movements in a realistic driving simulator

    Science.gov (United States)

    Pomarjanschi, Laura; Dorr, Michael; Bex, Peter J.; Barth, Erhardt

    2013-03-01

    Looking at the right place at the right time is a critical component of driving skill. Therefore, gaze guidance has the potential to become a valuable driving assistance system. In previous work, we have already shown that complex gaze-contingent stimuli can guide attention and reduce the number of accidents in a simple driving simulator. We here set out to investigate whether cues that are simple enough to be implemented in a real car can also capture gaze during a more realistic driving task in a high-fidelity driving simulator. We used a state-of-the-art, wide-field-of-view driving simulator with an integrated eye tracker. Gaze-contingent warnings were implemented using two arrays of light-emitting diodes horizontally fitted below and above the simulated windshield. Thirteen volunteering subjects drove along predetermined routes in a simulated environment populated with autonomous traffic. Warnings were triggered during the approach to half of the intersections, cueing either towards the right or to the left. The remaining intersections were not cued, and served as controls. The analysis of the recorded gaze data revealed that the gaze-contingent cues did indeed have a gaze guiding effect, triggering a significant shift in gaze position towards the highlighted direction. This gaze shift was not accompanied by changes in driving behaviour, suggesting that the cues do not interfere with the driving task itself.

  15. Quantifying naturalistic social gaze in fragile X syndrome using a novel eye tracking paradigm.

    Science.gov (United States)

    Hall, Scott S; Frank, Michael C; Pusiol, Guido T; Farzin, Faraz; Lightbody, Amy A; Reiss, Allan L

    2015-10-01

    A hallmark behavioral feature of fragile X syndrome (FXS) is the propensity for individuals with the syndrome to exhibit significant impairments in social gaze during interactions with others. However, previous studies employing eye tracking methodology to investigate this phenomenon have been limited to presenting static photographs or videos of social interactions rather than employing a real-life social partner. To improve upon previous studies, we used a customized eye tracking configuration to quantify the social gaze of 51 individuals with FXS and 19 controls, aged 14-28 years, while they engaged in a naturalistic face-to-face social interaction with a female experimenter. Importantly, our control group was matched to the FXS group on age, developmental functioning, and degree of autistic symptomatology. Results showed that participants with FXS spent significantly less time looking at the face and had shorter episodes (and longer inter-episodes) of social gaze than controls. Regression analyses indicated that communication ability predicted higher levels of social gaze in individuals with FXS, but not in controls. Conversely, degree of autistic symptoms predicted lower levels of social gaze in controls, but not in individuals with FXS. Taken together, these data indicate that naturalistic social gaze in FXS can be measured objectively using existing eye tracking technology during face-to-face social interactions. Given that impairments in social gaze were specific to FXS, this paradigm could be employed as an objective and ecologically valid outcome measure in ongoing Phase II/Phase III clinical trials of FXS-specific interventions.
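
    Episode-level measures of the kind reported above (total looking time, gaze episode durations, inter-episode intervals) can be derived from a per-frame boolean "gaze on face" sequence produced by an eye tracker. This is a minimal sketch under our own naming, not the authors' code.

```python
# Sketch: turn a per-frame on-face sequence into episode statistics.

def gaze_episodes(on_face, frame_ms):
    """Return (total_looking_ms, episode_durations_ms, gap_durations_ms)
    from a sequence of 0/1 samples, one per frame of frame_ms duration."""
    episodes, gaps = [], []
    looking, run = None, 0
    for sample in on_face:
        if looking is None:
            looking, run = sample, 1
        elif sample == looking:
            run += 1
        else:
            (episodes if looking else gaps).append(run * frame_ms)
            looking, run = sample, 1
    if looking is not None:
        (episodes if looking else gaps).append(run * frame_ms)
    return sum(episodes), episodes, gaps

# 30 ms frames: two gaze episodes (90 ms, 60 ms) separated by a 60 ms gap.
total, eps, gaps = gaze_episodes([1, 1, 1, 0, 0, 1, 1], frame_ms=30)
```

    Group differences like those in the study would then be tested on the resulting per-participant episode and gap distributions.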

  16. Application of head flexion detection for enhancing eye gaze direction classification.

    Science.gov (United States)

    Al-Rahayfeh, Amer; Faezipour, Miad

    2014-01-01

    Extensive research has been conducted on the tracking and detection of eye gaze and head movement, as these technologies can serve as alternative approaches for various interfacing devices. This paper proposes enhancements to the classification of eye gaze direction. The Viola-Jones face detector is first applied to locate the eye region. The Circular Hough Transform is then used to detect the iris location, and a Support Vector Machine (SVM) classifies the eye gaze direction. The accuracy of the system is enhanced by calculating the flexion angle of the head using a microcontroller and flex sensors. For rotated face images, the face can be rotated back to zero degrees using the calculated flexion angle; this matters because the Viola-Jones face detector is limited to face images with little or no rotation. Accuracy is improved by enhancing the effectiveness of the system in the overall procedure of classifying the eye gaze direction, so the head direction is a main determinant in enhancing the control method. Different control signals are enhanced by the eye gaze direction classification and the head direction detection.
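
    The rotation-correction step described here, undoing the measured head flexion angle before running an upright-face detector, is geometrically just an inverse 2D rotation. The sketch below (our own names and toy coordinates, not the paper's implementation) applies it to feature coordinates rather than image pixels.

```python
# Sketch: rotate 2D points by the negative of the measured head flexion
# angle, so a detector that assumes an upright face can be applied.
import math

def derotate(points, angle_deg, center):
    """Rotate points by -angle_deg around center (undoes head flexion)."""
    t = math.radians(-angle_deg)
    cx, cy = center
    out = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        out.append((cx + dx * math.cos(t) - dy * math.sin(t),
                    cy + dx * math.sin(t) + dy * math.cos(t)))
    return out

# A feature at (10, 0) relative to the rotation center, seen under a 90
# degree head flexion, maps back to approximately (0, -10).
corrected = derotate([(10.0, 0.0)], 90, (0.0, 0.0))
```

    On real images the same transform would be applied as an affine warp of the frame before face detection.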

  17. Cognitive context detection in UAS operators using eye-gaze patterns on computer screens

    Science.gov (United States)

    Mannaru, Pujitha; Balasingam, Balakumar; Pattipati, Krishna; Sibley, Ciara; Coyne, Joseph

    2016-05-01

    In this paper, we demonstrate the use of eye-gaze metrics of unmanned aerial systems (UAS) operators as effective indices of their cognitive workload. Our analyses are based on an experiment where twenty participants performed pre-scripted UAS missions of three different difficulty levels by interacting with two custom designed graphical user interfaces (GUIs) that are displayed side by side. First, we compute several eye-gaze metrics, traditional eye movement metrics as well as newly proposed ones, and analyze their effectiveness as cognitive classifiers. Most of the eye-gaze metrics are computed by dividing the computer screen into "cells". Then, we perform several analyses in order to select metrics for effective cognitive context classification related to our specific application; the objectives of these analyses are to (i) identify appropriate ways to divide the screen into cells; (ii) select appropriate metrics for training and classification of cognitive features; and (iii) identify a suitable classification method.
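
    The cell-based metrics mentioned above start from mapping each gaze sample to a grid cell and accumulating dwell statistics per cell. The grid size, sample format, and function name below are illustrative assumptions, not the paper's definitions.

```python
# Sketch: divide the screen into a rows x cols grid and compute the
# fraction of gaze samples falling in each cell.

def cell_dwell(samples, screen_w, screen_h, rows, cols):
    """Map (x, y) gaze samples to grid cells; return dwell fraction per cell."""
    counts = {}
    for x, y in samples:
        r = min(int(y * rows / screen_h), rows - 1)
        c = min(int(x * cols / screen_w), cols - 1)
        counts[(r, c)] = counts.get((r, c), 0) + 1
    n = len(samples)
    return {cell: k / n for cell, k in counts.items()}

# Three samples on a 1920x1080 screen with a 2x2 grid: two fall in the
# top-left cell, one in the bottom-right cell.
dwell = cell_dwell([(100, 100), (100, 120), (1800, 900)],
                   screen_w=1920, screen_h=1080, rows=2, cols=2)
```

    Per-cell dwell fractions (and transitions between cells) can then serve as feature vectors for a workload classifier.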

  18. Wild robins (Petroica longipes) respond to human gaze.

    Science.gov (United States)

    Garland, Alexis; Low, Jason; Armstrong, Nicola; Burns, Kevin C

    2014-09-01

    Gaze following and awareness of attentional cues are hallmarks of human and non-human social intelligence. Here, we show that the North Island robin (Petroica longipes), a food-hoarding songbird endemic to New Zealand, responds to human eyes. Robins were presented with six different conditions, in which two human experimenters altered the orientation or visibility of their body, head or eyes in relation to mealworm prey. One experimenter had visual access to the prey, and the second experimenter did not. Robins were then given the opportunity to 'steal' one of two mealworms presented by each experimenter. Robins responded by preferentially choosing the mealworm in front of the experimenter who could not see, in all conditions but one. Robins failed to discriminate between experimenters who were facing the mealworm and those who had their head turned 90° to the side. This may suggest that robins do not make decisions using the same eye visibility cues that primates and corvids evince, whether for ecological, experiential or evolutionary reasons.

  19. Inspection time as mental speed in mildly mentally retarded adults: analysis of eye gaze, eye movement, and orientation.

    Science.gov (United States)

    Nettelbeck, T; Robson, L; Walwyn, T; Downing, A; Jones, N

    1986-07-01

    The effect of eye movements away from a target on accuracy of visual discrimination was examined. In Experiment 1, inspection time was measured for 10 mildly mentally retarded and 10 nonretarded adults under two conditions, with each trial initiated by the subject or under experimental control. Retarded subjects did not gain any advantage from controlling trial onset. Video records of eye movements revealed that retarded subjects glanced off-target more than did nonretarded controls, but this was not sufficient to explain appreciably slower inspection time of the retarded group. Experiment 2 supported this conclusion; the same subjects completed a letter-discrimination task with direction of gaze monitored automatically. Although retarded subjects' eye gaze was more scattered early during a trial, gaze was appropriately directed by the time that the target appeared. Results from both experiments supported the hypothesis that speed of central, perceptual processing is slower among retarded persons, over and above the influence of distractibility. Results from three experiments in Part II were consistent with this interpretation. Experiment 3 was designed to eradicate trials among retarded subjects in which gaze was not properly directed, but results showed that too few such events occurred to influence accuracy. Experiment 4 demonstrated that the preparatory procedure in the previous studies resulted in efficient eye gaze among retarded subjects. Experiment 5 confirmed that lower discriminative accuracy among 10 retarded adults (compared with 10 nonretarded controls) was not due to less-efficient orientation prior to discrimination.

  20. Photographic but not line-drawn faces show early perceptual neural sensitivity to eye gaze direction

    Directory of Open Access Journals (Sweden)

    Alejandra eRossi

    2015-04-01

    Full Text Available Our brains readily decode facial movements and changes in social attention, reflected in earlier and larger N170 event-related potentials (ERPs) to viewing gaze aversions vs. direct gaze in real faces (Puce et al., 2000). In contrast, gaze aversions in line-drawn faces do not produce these N170 differences (Rossi et al., 2014), suggesting that physical stimulus properties or experimental context may drive these effects. Here we investigated the role of stimulus-induced context on neurophysiological responses to dynamic gaze. Sixteen healthy adults viewed line-drawn and real faces, with dynamic eye aversion and direct gaze transitions, and control stimuli (scrambled arrays and checkerboards) while continuous electroencephalographic (EEG) activity was recorded. EEG data from 2 temporo-occipital clusters of 9 electrodes in each hemisphere where N170 activity is known to be maximal were selected for analysis. N170 peak amplitude and latency, and temporal dynamics from event-related spectral perturbations (ERSPs) were measured in 16 healthy subjects. Real faces generated larger N170s for averted vs. direct gaze motion; however, N170s to real and direct gaze were as large as those to respective controls. N170 amplitude did not differ across line-drawn gaze changes. Overall, bilateral mean gamma power changes for faces relative to control stimuli occurred between 150-350 ms, potentially reflecting signal detection of facial motion. Our data indicate that experimental context does not drive N170 differences to viewed gaze changes. Low-level stimulus properties, such as the high sclera/iris contrast change in real eyes, likely drive the N170 changes to viewed aversive movements.

  1. Photographic but not line-drawn faces show early perceptual neural sensitivity to eye gaze direction.

    Science.gov (United States)

    Rossi, Alejandra; Parada, Francisco J; Latinus, Marianne; Puce, Aina

    2015-01-01

    Our brains readily decode facial movements and changes in social attention, reflected in earlier and larger N170 event-related potentials (ERPs) to viewing gaze aversions vs. direct gaze in real faces (Puce et al., 2000). In contrast, gaze aversions in line-drawn faces do not produce these N170 differences (Rossi et al., 2014), suggesting that physical stimulus properties or experimental context may drive these effects. Here we investigated the role of stimulus-induced context on neurophysiological responses to dynamic gaze. Sixteen healthy adults viewed line-drawn and real faces, with dynamic eye aversion and direct gaze transitions, and control stimuli (scrambled arrays and checkerboards) while continuous electroencephalographic (EEG) activity was recorded. EEG data from 2 temporo-occipital clusters of 9 electrodes in each hemisphere where N170 activity is known to be maximal were selected for analysis. N170 peak amplitude and latency, and temporal dynamics from Event-Related Spectral Perturbations (ERSPs) were measured in 16 healthy subjects. Real faces generated larger N170s for averted vs. direct gaze motion, however, N170s to real and direct gaze were as large as those to respective controls. N170 amplitude did not differ across line-drawn gaze changes. Overall, bilateral mean gamma power changes for faces relative to control stimuli occurred between 150-350 ms, potentially reflecting signal detection of facial motion. Our data indicate that experimental context does not drive N170 differences to viewed gaze changes. Low-level stimulus properties, such as the high sclera/iris contrast change in real eyes likely drive the N170 changes to viewed aversive movements.
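
    The N170 peak amplitude and latency measures used in this study are typically extracted by finding the most negative deflection of the averaged ERP in a window around 170 ms post-stimulus. The window bounds, sampling rate, and toy waveform below are illustrative assumptions, not the authors' parameters.

```python
# Sketch: peak amplitude/latency of a negative-going ERP component.

def n170_peak(erp_uv, srate_hz, win_ms=(130, 200)):
    """Return (peak_amplitude_uV, peak_latency_ms) within the search window.
    The N170 is negative-going, so the peak is the window minimum."""
    lo = int(win_ms[0] * srate_hz / 1000)
    hi = int(win_ms[1] * srate_hz / 1000)
    window = erp_uv[lo:hi + 1]
    amp = min(window)                              # most negative value
    lat_ms = (lo + window.index(amp)) * 1000 / srate_hz
    return amp, lat_ms

# Toy averaged ERP at 1000 Hz (one sample per ms) with a -8 uV trough
# placed at 170 ms post-stimulus.
erp = [0.0] * 300
erp[170] = -8.0
amp, lat = n170_peak(erp, srate_hz=1000)
```

    In practice the same extraction would be run per condition and per electrode cluster before statistical comparison.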

  2. Gazes

    DEFF Research Database (Denmark)

    Khawaja, Iram

    This article is based on fieldwork with young Muslims in Copenhagen and focuses on how they navigate the visibility and embodiment of Muslim otherness in their everyday life. The concept of panoptical gazes is developed in regard to the young Muslims' narratives of being looked upon, and the different strategies of positioning they utilize are studied and identified. The first strategy is to confront stereotyping prejudices and gazes, thereby attempting to position oneself in a counteracting way. The second is to transform and try to normalise external characteristics, such as clothing and other symbols that indicate Muslimness. A third strategy is to play along and allow the prejudice in question to remain unchallenged. A fourth is to join and participate in religious communities and develop an alternate sense of belonging to a wider community of Muslims. The concept of panoptical gazes…

  3. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.

    Science.gov (United States)

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.

  4. Pointer Control with Gaze Tracking Using the Haar Classifier Method as a Presentation Aid (Eye Pointer)

    Directory of Open Access Journals (Sweden)

    Edi Satriyanto

    2013-03-01

    Full Text Available The application built in this research is a pointer controller driven by eye movement (an eye pointer). It is an image processing application in which users simply move their eyes to control the computer pointer, and it is expected to assist with the use of a manual pointer during presentations. Since this research uses gaze tracking that follows eye movement, it is important to detect the center of the pupil in the eye image captured by the input camera. Gaze is tracked using a three-step hierarchical system: motion detection, object (eye) detection, and then pupil detection. Motion is detected by dynamically comparing the previous pixel with the current pixel at time t. The eye region is detected using the Haar-like feature classifier, where the system must first be trained to obtain the cascade classifier that allows it to detect the object in each frame captured by the camera. The center of the pupil is detected using integral projection. The final step maps the position of the pupil center to the monitor screen using a comparison scale between the eye-region resolution and the screen resolution. When detecting the eye gaze on the screen, information about the distance and angle between the eyes and the screen is needed to compute the pointing coordinates. In this research, the accuracy of the application is 80% for eye movements lasting 1-2 seconds, the optimum mean value is between 5 and 10, and the optimum distance between the user and the webcam is 40 cm.
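
    The "comparison scale" mapping described in this abstract, scaling the pupil-center position inside the detected eye region to screen coordinates, can be sketched as a linear interpolation. The function name, eye-box format, and example numbers are our own illustrative assumptions.

```python
# Sketch: linearly map a pupil center within the detected eye box to a
# cursor position on the screen.

def pupil_to_screen(pupil, eye_box, screen):
    """pupil: (px, py) pixel position of the pupil center.
    eye_box: (ex, ey, ew, eh) detected eye region (origin, width, height).
    screen:  (sw, sh) screen resolution. Returns integer cursor coords."""
    (px, py), (ex, ey, ew, eh), (sw, sh) = pupil, eye_box, screen
    sx = (px - ex) / ew * sw
    sy = (py - ey) / eh * sh
    return int(sx), int(sy)

# A pupil at the center of a 40x20 eye box maps to the screen center.
cursor = pupil_to_screen((120, 60), (100, 50, 40, 20), (1280, 720))
```

    A real system would additionally calibrate for the user's distance and viewing angle, as the abstract notes.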

  5. Gaze estimation for off-angle iris recognition based on the biometric eye model

    Science.gov (United States)

    Karakaya, Mahmut; Barstow, Del; Santos-Villalobos, Hector; Thompson, Joseph; Bolme, David; Boehnen, Christopher

    2013-05-01

    Iris recognition is among the highest accuracy biometrics. However, its accuracy relies on controlled high quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ORNL biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from elliptical features of the iris image. Typically, additional information such as well-controlled light sources, head mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up-table generated by using our biologically inspired biometric eye model and find the closest feature point in the look-up-table to estimate the gaze. Based on the results from real images, the proposed method shows effectiveness in gaze estimation accuracy for our biometric eye model with an average error of approximately 3.5 degrees over a 50 degree range.
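
    The look-up-table step described above amounts to a nearest-neighbour search: precompute (feature, gaze angle) pairs from the eye model, then return the angle whose features best match the observed iris-ellipse features. The feature choice and table values below are toy assumptions for illustration, not the ORNL model's actual entries.

```python
# Sketch: nearest-neighbour gaze lookup against a model-generated table.

def estimate_gaze(features, lut):
    """Return the gaze angle of the LUT entry whose feature vector is
    closest (Euclidean distance) to the observed features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(lut, key=lambda angle: dist(features, lut[angle]))

# Toy LUT: gaze angle (degrees) -> (iris ellipse axis ratio, tilt).
lut = {0: (1.00, 0.0), 20: (0.94, 0.1), 40: (0.77, 0.2)}
angle = estimate_gaze((0.95, 0.12), lut)
```

    A denser table (and interpolation between neighbouring entries) would narrow the quantization error toward the roughly 3.5 degree average error the paper reports.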

  6. Gaze Estimation for Off-Angle Iris Recognition Based on the Biometric Eye Model

    Energy Technology Data Exchange (ETDEWEB)

    Karakaya, Mahmut [ORNL; Barstow, Del R [ORNL; Santos-Villalobos, Hector J [ORNL; Thompson, Joseph W [ORNL; Bolme, David S [ORNL; Boehnen, Chris Bensing [ORNL

    2013-01-01

    Iris recognition is among the highest accuracy biometrics. However, its accuracy relies on controlled high quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ANONYMIZED biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from elliptical features of the iris image. Typically, additional information such as well-controlled light sources, head mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up-table generated by using our biologically inspired biometric eye model and find the closest feature point in the look-up-table to estimate the gaze. Based on the results from real images, the proposed method shows effectiveness in gaze estimation accuracy for our biometric eye model with an average error of approximately 3.5 degrees over a 50 degree range.

  7. Teachers' Experiences of Using Eye Gaze-Controlled Computers for Pupils with Severe Motor Impairments and without Speech

    Science.gov (United States)

    Rytterström, Patrik; Borgestig, Maria; Hemmingsson, Helena

    2016-01-01

    The purpose of this study is to explore teachers' experiences of using eye gaze-controlled computers with pupils with severe disabilities. Technology to control a computer with eye gaze is a fast growing field and has promising implications for people with severe disabilities. This is a new assistive technology and a new learning situation for…

  8. The EyeHarp: A Gaze-Controlled Digital Musical Instrument

    OpenAIRE

    Vamvakousis, Zacharias; Ramirez, Rafael

    2016-01-01

    We present and evaluate the EyeHarp, a new gaze-controlled Digital Musical Instrument, which aims to enable people with severe motor disabilities to learn, perform, and compose music using only their gaze as control mechanism. It consists of (1) a step-sequencer layer, which serves for constructing chords/arpeggios, and (2) a melody layer, for playing melodies and changing the chords/arpeggios. We have conducted a pilot evaluation of the EyeHarp involving 39 participants with no disabilities ...

  9. Gazes

    DEFF Research Database (Denmark)

    Khawaja, Iram

    …and the different strategies of positioning they utilize are studied and identified. The first strategy is to confront stereotyping prejudices and gazes, thereby attempting to position oneself in a counteracting way. The second is to transform and try to normalise external characteristics, such as clothing and other symbols that indicate Muslimness. A third strategy is to play along and allow the prejudice in question to remain unchallenged. A fourth is to join and participate in religious communities and develop an alternate sense of belonging to a wider community of Muslims. The concept of panoptical gazes is related to narratives on belonging and the embodied experience of home, which points towards new avenues of understanding the process of othering and the possibilities of negotiating the position as Other.

  10. Eye contact elicits bodily self-awareness in human adults.

    Science.gov (United States)

    Baltazar, Matias; Hazem, Nesrine; Vilarem, Emma; Beaucousin, Virginie; Picq, Jean-Luc; Conty, Laurence

    2014-10-01

    Eye contact is a typical human behaviour known to impact concurrent or subsequent cognitive processing. In particular, it has been suggested that eye contact induces self-awareness, though this has never been formally proven. Here, we show that the perception of a face with a direct gaze (that establishes eye contact), as compared to either a face with averted gaze or a mere fixation cross, led adult participants to rate more accurately the intensity of their physiological reactions induced by emotional pictures. Our data support the view that bodily self-awareness becomes more acute when one is subjected to another's gaze. Importantly, this effect was not related to a particular arousal state induced by eye contact perception. Rejecting the arousal hypothesis, we suggest that eye contact elicits a self-awareness process by enhancing self-focused attention in humans. We further discuss the implications of this proposal. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Mutual Disambiguation of Eye Gaze and Speech for Sight Translation and Reading

    DEFF Research Database (Denmark)

    Kulkarni, Rucha; Jain, Kritika; Bansal, Himanshu;

    Researchers are proposing interactive machine translation as a potential method to make the language translation process more efficient and usable. The introduction of different modalities like eye gaze and speech is being explored to add to the interactivity of the language translation system. Unfortunately…

  12. Tell-Tale Eyes: Children's Attribution of Gaze Aversion as a Lying Cue

    Science.gov (United States)

    Einav, Shiri; Hood, Bruce M.

    2008-01-01

    This study examined whether the well-documented adult tendency to perceive gaze aversion as a lying cue is also evident in children. In Experiment 1, 6-year-olds, 9-year-olds, and adults were shown video vignettes of speakers who either maintained or avoided eye contact while answering an interviewer's questions. Participants evaluated whether the…

  13. Mutual Disambiguation of Eye Gaze and Speech for Sight Translation and Reading

    DEFF Research Database (Denmark)

    Kulkarni, Rucha; Jain, Kritika; Bansal, Himanshu

    2013-01-01

    Researchers are proposing interactive machine translation as a potential method to make the language translation process more efficient and usable. The introduction of different modalities like eye gaze and speech is being explored to add to the interactivity of the language translation system. Unfortunately…

  14. Variation in the human cannabinoid receptor CNR1 gene modulates gaze duration for happy faces

    Directory of Open Access Journals (Sweden)

    Chakrabarti Bhismadev

    2011-06-01

    Full Text Available Abstract Background From an early age, humans look longer at preferred stimuli and also typically look longer at facial expressions of emotion, particularly happy faces. Atypical gaze patterns towards social stimuli are common in autism spectrum conditions (ASC). However, it is unknown whether gaze fixation patterns have any genetic basis. In this study, we tested whether variations in the cannabinoid receptor 1 (CNR1) gene are associated with gaze duration towards happy faces. This gene was selected because CNR1 is a key component of the endocannabinoid system, which is involved in processing reward, and in our previous functional magnetic resonance imaging (fMRI) study, we found that variations in CNR1 modulate the striatal response to happy (but not disgust) faces. The striatum is involved in guiding gaze to rewarding aspects of a visual scene. We aimed to validate and extend this result in another sample using a different technique (gaze tracking). Methods A total of 30 volunteers (13 males and 17 females) from the general population observed dynamic emotional expressions on a screen while their eye movements were recorded. They were genotyped for the identical four single-nucleotide polymorphisms (SNPs) in the CNR1 gene tested in our earlier fMRI study. Results Two SNPs (rs806377 and rs806380) were associated with differential gaze duration for happy (but not disgust) faces. Importantly, the allelic groups associated with a greater striatal response to happy faces in the fMRI study were associated with longer gaze duration at happy faces. Conclusions These results suggest that CNR1 variations modulate the striatal function that underlies the perception of signals of social reward, such as happy faces. This suggests that CNR1 is a key element in the molecular architecture of perception of certain basic emotions. This may have implications for understanding neurodevelopmental conditions marked by atypical eye contact and facial emotion processing.

  15. A 2D eye gaze estimation system with low-resolution webcam images

    Directory of Open Access Journals (Sweden)

    Kim Jin

    2011-01-01

    Full Text Available Abstract In this article, a low-cost system for 2D eye gaze estimation with low-resolution webcam images is presented. Two algorithms are proposed for this purpose: one for eyeball detection with a stable approximate pupil center, and one for detecting the direction of eye movements. The eyeball is detected using the deformable angular integral search by minimum intensity (DAISMI) algorithm. The deformable template-based 2D gaze estimation (DTBGE) algorithm is employed as a noise filter for deciding on stable movement decisions. While DTBGE employs binary images, DAISMI employs gray-scale images. Right and left eye estimates are evaluated separately. DAISMI finds the stable approximate pupil-center location by calculating the mass center of the eyeball border vertices, which is used for initial deformable template alignment. DTBGE starts with this initial alignment and updates the template alignment frame by frame using the resulting eye movements and eyeball size. The horizontal and vertical deviation of eye movements relative to eyeball size is treated as directly proportional to the deviation of cursor movements for a given screen size and resolution. The core advantage of the system is that it does not employ the real pupil center as a reference point for gaze estimation, which makes it more robust to corneal reflection. Visual angle accuracy is used for the evaluation and benchmarking of the system. The effectiveness of the proposed system is presented and experimental results are shown.

  16. EyeDroid: An Open Source Mobile Gaze Tracker on Android for Eyewear Computers

    DEFF Research Database (Denmark)

    Jalaliniya, Shahram; Mardanbeigi, Diako; Sintos, Ioannis

    2015-01-01

    In this paper we report on the development and evaluation of a video-based mobile gaze tracker for eyewear computers. Unlike most of the previous work, our system performs all of its processing workload on an Android device and sends the coordinates of the gaze point to an eyewear device through a wireless connection. We propose a lightweight software architecture for Android to increase the efficiency of the image processing needed for eye tracking. The evaluation of the system indicated an accuracy of 1.06° and a battery lifetime of approximately 4.5 hours.

  17. Use of a Remote Eye-Tracker for the Analysis of Gaze during Treadmill Walking and Visual Stimuli Exposition

    OpenAIRE

    Serchi, V.; Peruzzi, A; A. Cereatti; Della Croce, U.

    2016-01-01

    The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, point-of-gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study a…

  18. Love is in the gaze: an eye-tracking study of love and sexual desire.

    Science.gov (United States)

    Bolmont, Mylene; Cacioppo, John T; Cacioppo, Stephanie

    2014-09-01

    Reading other people's eyes is a valuable skill during interpersonal interaction. Although a number of studies have investigated visual patterns in relation to the perceiver's interest, intentions, and goals, little is known about eye gaze when it comes to differentiating intentions to love from intentions to lust (sexual desire). To address this question, we conducted two experiments: one testing whether the visual pattern related to the perception of love differs from that related to lust and one testing whether the visual pattern related to the expression of love differs from that related to lust. Our results show that a person's eye gaze shifts as a function of his or her goal (love vs. lust) when looking at a visual stimulus. Such identification of distinct visual patterns for love and lust could have theoretical and clinical importance in couples therapy when these two phenomena are difficult to disentangle from one another on the basis of patients' self-reports.

  19. Brief Report: Using a Point-of-View Camera to Measure Eye Gaze in Young Children with Autism Spectrum Disorder during Naturalistic Social Interactions--A Pilot Study

    Science.gov (United States)

    Edmunds, Sarah R.; Rozga, Agata; Li, Yin; Karp, Elizabeth A.; Ibanez, Lisa V.; Rehg, James M.; Stone, Wendy L.

    2017-01-01

    Children with autism spectrum disorder (ASD) show reduced gaze to social partners. Eye contact during live interactions is often measured using stationary cameras that capture various views of the child, but determining a child's precise gaze target within another's face is nearly impossible. This study compared eye gaze coding derived from…

  20. A free geometry model-independent neural eye-gaze tracking system

    Directory of Open Access Journals (Sweden)

    Gneo Massimo

    2012-11-01

    Full Text Available Abstract Background Eye Gaze Tracking Systems (EGTSs) estimate the Point Of Gaze (POG) of a user. In diagnostic applications, EGTSs are used to study oculomotor characteristics and abnormalities, whereas in interactive applications EGTSs are proposed as input devices for human computer interfaces (HCIs), e.g. to move a cursor on the screen when mouse control is not possible, such as in assistive devices for people suffering from locked-in syndrome. If the user’s head remains still and the cornea rotates around its fixed centre, the pupil follows the eye in the images captured from one or more cameras, whereas the outer corneal reflection generated by an IR light source, i.e. the glint, can be assumed to be a fixed reference point. According to the so-called pupil centre corneal reflection (PCCR) method, the POG can thus be estimated from the pupil-glint vector. Methods A new model-independent EGTS based on the PCCR is proposed. The mapping function, based on artificial neural networks, avoids any specific model assumption or approximation for either the user’s eye physiology or the initial system setup, allowing free geometric positioning of the user and the system components. The robustness of the proposed EGTS is proven by assessing its accuracy when tested on real data from: (i) different healthy users; (ii) different geometric settings of the camera and the light sources; (iii) different protocols based on the observation of points on a calibration grid and halfway points of a test grid. Results The achieved accuracy is approximately 0.49°, 0.41°, and 0.62° for the horizontal, vertical, and radial error of the POG, respectively. Conclusions The results prove the validity of the proposed approach, as the proposed system performs better than EGTSs designed for HCI which, even if equipped with superior hardware, show accuracy values in the range 0.6°-1°.
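The PCCR idea of calibrating a map from pupil-glint vectors to on-screen gaze points can be illustrated with a small sketch. The paper's mapping function is an artificial neural network; the least-squares affine map below is a deliberately simplified stand-in for it, and all calibration coordinates are hypothetical:

```python
# Hedged sketch, not the paper's implementation: the paper maps
# pupil-glint vectors to the POG with an artificial neural network.
# Here that mapping is replaced by an affine least-squares fit purely
# to illustrate the calibration idea; all coordinates are invented.

def solve3(a, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_affine(vectors, screen_pts):
    """Least-squares affine map from pupil-glint vectors to screen points."""
    rows = [[vx, vy, 1.0] for vx, vy in vectors]
    params = []
    for k in (0, 1):  # fit the x- and y-screen coordinates separately
        ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
               for i in range(3)]
        atb = [sum(r[i] * p[k] for r, p in zip(rows, screen_pts))
               for i in range(3)]
        params.append(solve3(ata, atb))   # normal equations
    return params

def predict(params, v):
    vx, vy = v
    return tuple(p[0] * vx + p[1] * vy + p[2] for p in params)

# Hypothetical calibration: pupil-glint vectors observed while the user
# fixates known on-screen targets.
calib_vecs = [(-0.2, -0.1), (0.2, -0.1), (-0.2, 0.1), (0.2, 0.1), (0.0, 0.0)]
calib_pts = [(100, 100), (900, 100), (100, 700), (900, 700), (500, 400)]
model = fit_affine(calib_vecs, calib_pts)
pog = predict(model, (0.1, 0.05))
```

With these five fixation targets the underlying map is exactly affine, so the fit recovers it and the predicted POG for vector (0.1, 0.05) lands at (700, 550). A neural network, as used in the paper, generalizes this to nonlinear mappings and free camera/light geometry.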

  1. Differential gaze patterns on eyes and mouth during audiovisual speech segmentation

    Directory of Open Access Journals (Sweden)

    Laina G. Lusk

    2016-02-01

    Full Text Available Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel & Weiss, 2014 established that adults can utilize facial cues (i.e. visual prosody to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014. Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation.

  2. Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation.

    Science.gov (United States)

    Lusk, Laina G; Mitchel, Aaron D

    2016-01-01

    Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation.

  3. Use of a Remote Eye-Tracker for the Analysis of Gaze during Treadmill Walking and Visual Stimuli Exposition

    Directory of Open Access Journals (Sweden)

    V. Serchi

    2016-01-01

    Full Text Available The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, the point of gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study assesses the usability limits and measurement reliability of a remote eye-tracker during treadmill walking while visual stimuli are projected. During treadmill walking, the head remained within the remote eye-tracker workspace. Generally, the quality of the point of gaze measurements declined as the distance from the remote eye-tracker increased and data loss occurred for large gaze angles. The stimulus location (a dot-target did not influence the point of gaze accuracy, precision, and trackability during both standing and walking. Similar results were obtained when the dot-target was replaced by a static or moving 2D target and “region of interest” analysis was applied. These findings foster the feasibility of the use of a remote eye-tracker for the analysis of gaze during treadmill walking in virtual reality environments.

  4. Use of a Remote Eye-Tracker for the Analysis of Gaze during Treadmill Walking and Visual Stimuli Exposition.

    Science.gov (United States)

    Serchi, V; Peruzzi, A; Cereatti, A; Della Croce, U

    2016-01-01

    The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, the point of gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study assesses the usability limits and measurement reliability of a remote eye-tracker during treadmill walking while visual stimuli are projected. During treadmill walking, the head remained within the remote eye-tracker workspace. Generally, the quality of the point of gaze measurements declined as the distance from the remote eye-tracker increased and data loss occurred for large gaze angles. The stimulus location (a dot-target) did not influence the point of gaze accuracy, precision, and trackability during both standing and walking. Similar results were obtained when the dot-target was replaced by a static or moving 2D target and "region of interest" analysis was applied. These findings foster the feasibility of the use of a remote eye-tracker for the analysis of gaze during treadmill walking in virtual reality environments.

  5. Eye gaze during observation of static faces in deaf people.

    Directory of Open Access Journals (Sweden)

    Katsumi Watanabe

    Full Text Available Knowing where people look when viewing faces provides an objective measure of the information entering the visual system, as well as of the cognitive strategy involved in facial perception. In the present study, we recorded the eye movements of 20 congenitally deaf (10 male and 10 female) and 23 normal-hearing (11 male and 12 female) Japanese participants while they evaluated the emotional valence of static face stimuli. While no difference was found in the evaluation scores, the eye movements during facial observation differed between the participant groups. The deaf group looked at the eyes more frequently and for longer duration than the nose, whereas the hearing group focused on the nose (or the central region of the face) more than the eyes. These results suggest that the strategy employed to extract visual information when viewing static faces may differ between deaf and hearing people.

  6. Visual-Motor Transformations Within Frontal Eye Fields During Head-Unrestrained Gaze Shifts in the Monkey.

    Science.gov (United States)

    Sajad, Amirsaman; Sadeh, Morteza; Keith, Gerald P; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas

    2015-10-01

    A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual-motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas.

  7. Eye-gaze control of the computer interface: Discrimination of zoom intent

    Energy Technology Data Exchange (ETDEWEB)

    Goldberg, J.H. [Pennsylvania State Univ., University Park, PA (United States). Dept. of Industrial Engineering; Schryver, J.C. [Oak Ridge National Lab., TN (United States)

    1993-10-01

    An analysis methodology and associated experiment were developed to assess whether definable and repeatable signatures of eye-gaze characteristics are evident, preceding a decision to zoom-in, zoom-out, or not to zoom at a computer interface. This user intent discrimination procedure can have broad application in disability aids and telerobotic control. Eye-gaze was collected from 10 subjects in a controlled experiment, requiring zoom decisions. The eye-gaze data were clustered, then fed into a multiple discriminant analysis (MDA) for optimal definition of heuristics separating the zoom-in, zoom-out, and no-zoom conditions. Confusion matrix analyses showed that a number of variable combinations classified at a statistically significant level, but practical significance was more difficult to establish. Composite contour plots demonstrated the regions in parameter space consistently assigned by the MDA to unique zoom conditions. Peak classification occurred at about 1200--1600 msec. Improvements in the methodology to achieve practical real-time zoom control are considered.
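The pipeline of clustering gaze features and then classifying zoom intent can be suggested with a toy example. The study's multiple discriminant analysis is replaced here by a nearest-centroid rule as a simplified stand-in, and the gaze feature values are invented for illustration:

```python
# Hedged sketch: the study fed clustered eye-gaze features into a
# multiple discriminant analysis (MDA) to separate zoom-in, zoom-out,
# and no-zoom intent. As a simplified stand-in, this classifies a
# feature vector by its nearest class centroid; all values are invented.

def centroid(samples):
    """Component-wise mean of a list of feature vectors."""
    return [sum(col) / len(samples) for col in zip(*samples)]

def classify(x, centroids):
    """Return the label whose centroid is closest to x (squared distance)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Toy gaze features: (mean fixation duration in ms, dwell spread in px)
training = {
    "zoom-in":  [(1400, 20), (1500, 25), (1600, 18)],
    "zoom-out": [(900, 80), (950, 90), (1000, 85)],
    "no-zoom":  [(400, 200), (450, 220), (500, 210)],
}
centroids = {label: centroid(s) for label, s in training.items()}
decision = classify((1450, 22), centroids)   # "zoom-in"
```

The study's point that statistical significance did not guarantee practical significance applies directly here: a rule like this must classify quickly and reliably enough (the paper's peak was around 1200-1600 ms) to be usable for real-time zoom control.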

  8. Keeping your eye on the rail: gaze behaviour of horse riders approaching a jump.

    Directory of Open Access Journals (Sweden)

    Carol Hall

    Full Text Available The gaze behaviour of riders during their approach to a jump was investigated using a mobile eye tracking device (ASL Mobile Eye). The timing, frequency and duration of fixations on the jump and the percentage of time when their point of gaze (POG) was located elsewhere were assessed. Fixations were identified when the POG remained on the jump for 100 ms or longer. The jumping skill of experienced but non-elite riders (n = 10) was assessed by means of a questionnaire. Their gaze behaviour was recorded as they completed a course of three identical jumps five times. The speed and timing of the approach was calculated. Gaze behaviour throughout the overall approach and during the last five strides before take-off was assessed following frame-by-frame analyses. Differences in relation to both round and jump number were found. Significantly longer was spent fixated on the jump during round 2, both during the overall approach and during the last five strides (p < 0.05). Jump 1 was fixated on significantly earlier and more frequently than jump 2 or 3 (p < 0.05). Significantly more errors were made with jump 3 than with jump 1 (p = 0.01), but there was no difference in errors made between rounds. Although no significant correlations between gaze behaviour and skill scores were found, the riders who scored higher for jumping skill tended to fixate on the jump earlier (p = 0.07), when the horse was further from the jump (p = 0.09), and their first fixation on the jump was of a longer duration (p = 0.06). Trials with elite riders are now needed to further identify sport-specific visual skills and their relationship with performance. Visual training should be included in preparation for equestrian sports participation, the positive impact of which has been clearly demonstrated in other sports.

  9. Gaze Duration Biases for Colours in Combination with Dissonant and Consonant Sounds: A Comparative Eye-Tracking Study with Orangutans

    Science.gov (United States)

    Mühlenbeck, Cordelia; Liebal, Katja; Pritsch, Carla; Jacobsen, Thomas

    2015-01-01

    Research on colour preferences in humans and non-human primates suggests similar patterns of biases for and avoidance of specific colours, indicating that these colours are connected to a psychological reaction. Similarly, in the acoustic domain, approach reactions to consonant sounds (considered as positive) and avoidance reactions to dissonant sounds (considered as negative) have been found in human adults and children, and it has been demonstrated that non-human primates are able to discriminate between consonant and dissonant sounds. Yet it remains unclear whether the visual and acoustic approach–avoidance patterns remain consistent when both types of stimuli are combined, how they relate to and influence each other, and whether these are similar for humans and other primates. Therefore, to investigate whether gaze duration biases for colours are similar across primates and whether reactions to consonant and dissonant sounds cumulate with reactions to specific colours, we conducted an eye-tracking study in which we compared humans with one species of great apes, the orangutans. We presented four different colours either in isolation or in combination with consonant and dissonant sounds. We hypothesised that the viewing time for specific colours should be influenced by dissonant sounds and that previously existing avoidance behaviours with regard to colours should be intensified, reflecting their association with negative acoustic information. The results showed that the humans had constant gaze durations which were independent of the auditory stimulus, with a clear avoidance of yellow. In contrast, the orangutans did not show any clear gaze duration bias or avoidance of colours, and they were also not influenced by the auditory stimuli. In conclusion, our findings only partially support the previously identified pattern of biases for and avoidance of specific colours in humans and do not confirm such a pattern for orangutans. PMID:26466351

  10. Gaze following in baboons (Papio anubis): juveniles adjust their gaze and body position to human's head redirections.

    Science.gov (United States)

    Parron, Carole; Meguerditchian, Adrien

    2016-12-01

    Gaze following, the ability to follow the gaze of other individuals, has been widely studied in non-human primate species, mostly in adult individuals. Yet the literature on gaze following reveals considerable variability across findings; some of it might reflect true inter-species differences, while some might be related to methodological differences or to an underestimation of the factors involved in the expression of gaze following. In the current study, we tested 54 captive olive baboons (Papio anubis), housed in social groups, to assess how juvenile and adult baboons would spontaneously react to a sudden change in the direction of a human experimenter's head. First, our results showed that juveniles, more than adult baboons, co-oriented their gaze with the experimenter's gaze. We also observed a strong habituation effect in adult baboons but not in juveniles, as the adults' response vanished at the second exposure to a change of direction of the experimenter's head. Second, our results showed that juveniles subsequently adopted an original strategy when the experimenter's head indicated new directions: they reliably adjusted their spatial body position to keep gaze contact with the experimenter's line of sight. We discuss how the age class and individual expertise of the baboons could modulate attentiveness, motivation, or cognitive abilities, and thus likely influence gaze following.

  11. A Novel Eye Gaze Tracking Method Based on Saliency Maps

    Institute of Scientific and Technical Information of China (English)

    黄生辉; 宋鸿陟; 吴广发; 司国东; 彭红星

    2015-01-01

    To address the deficiencies of existing eye gaze tracking systems, namely complex equipment and tedious calibration procedures, a novel eye gaze tracking method based on saliency maps is proposed. A pupil-corneal reflection vector is constructed from the pupil center and the center of the glint generated on the cornea by an IR light source; this vector is then used as a visual feature in a reconstructed saliency-map-based gaze tracking algorithm. Experimental results demonstrate that the proposed method not only alleviates the tedious calibration of eye gaze tracking but also somewhat improves system accuracy and robustness, providing a feasible low-cost eye gaze tracking solution for human-computer interaction.

  12. Eye Gaze during Observation of Static Faces in Deaf People

    OpenAIRE

    Katsumi Watanabe; Tetsuya Matsuda; Tomoyuki Nishioka; Miki Namatame

    2011-01-01

    Knowing where people look when viewing faces provides an objective measure into the part of information entering the visual system as well as into the cognitive strategy involved in facial perception. In the present study, we recorded the eye movements of 20 congenitally deaf (10 male and 10 female) and 23 (11 male and 12 female) normal-hearing Japanese participants while they evaluated the emotional valence of static face stimuli. While no difference was found in the evaluation scores, the e...

  13. Eye-gaze patterns as students study worked-out examples in mechanics

    Directory of Open Access Journals (Sweden)

    Brian H. Ross

    2010-10-01

    Full Text Available This study explores what introductory physics students actually look at when studying worked-out examples. Our classroom experiences indicate that introductory physics students neither discuss nor refer to the conceptual information contained in the text of worked-out examples. This study is an effort to determine to what extent students incorporate the textual information into the way they study. Student eye-gaze patterns were recorded as they studied the examples to aid them in solving a target problem. Contrary to our expectations from classroom interactions, students spent 40±3% of their gaze time reading the textual information. Their gaze patterns were also characterized by numerous jumps between corresponding mathematical and textual information, implying that they were combining information from both sources. Despite this large fraction of time spent reading the text, student recall of the conceptual information contained therein remained very poor. We also found that having a particular problem in mind had no significant effects on the gaze-patterns or conceptual information retention.

  14. Driver fatigue alarm based on eye detection and gaze estimation

    Science.gov (United States)

    Sun, Xinghua; Xu, Lu; Yang, Jingyu

    2007-11-01

    The driver assistant system has attracted much attention as an essential component of intelligent transportation systems. One task of a driver assistant system is to prevent driver fatigue, and for fatigue detection it is natural to utilize information about the eyes. Driver fatigue can be divided into two types: sleep with eyes closed and sleep with eyes open. Since fatigue detection involves both prior knowledge and probabilistic statistics, a dynamic Bayesian network is used as the analysis tool to perform the reasoning about fatigue. Two kinds of experiments were performed to verify the system's effectiveness, one based on video recorded in the laboratory and the other on video recorded in a real driving situation. Ten persons participated in the tests. In the laboratory all fatigue events were detected, and in the real vehicle the detection ratio was about 85%. The experiments show that the proposed system works in most situations and its performance is satisfying.
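The paper reasons about fatigue with a dynamic Bayesian network; as a much simpler illustration of accumulating eye-state evidence over time, the sketch below computes a PERCLOS-like closed-eye ratio over a sliding window of frames. The window length and alarm threshold are invented:

```python
# Illustrative sketch (not the paper's dynamic-Bayesian-network
# reasoning): a common simpler fatigue cue is the fraction of recent
# video frames in which the eyes are detected as closed. Window size
# and threshold below are hypothetical.
from collections import deque

class FatigueMonitor:
    def __init__(self, window=30, threshold=0.6):
        self.frames = deque(maxlen=window)   # 1 = eyes closed, 0 = open
        self.threshold = threshold

    def update(self, eyes_closed):
        """Record one frame; return True when an alarm should fire."""
        self.frames.append(1 if eyes_closed else 0)
        ratio = sum(self.frames) / len(self.frames)
        # Only alarm once a full window of evidence has accumulated.
        return len(self.frames) == self.frames.maxlen and ratio >= self.threshold

monitor = FatigueMonitor(window=10, threshold=0.6)
alerts = [monitor.update(closed) for closed in
          [0, 0, 1, 1, 1, 1, 1, 1, 0, 1]]   # 7 of the last 10 closed
```

A Bayesian-network formulation, as in the paper, additionally lets eyes-open sleep and other contextual cues feed the same fatigue inference, which a bare closed-eye ratio cannot capture.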

  15. Complicating Eroticism and the Male Gaze: Feminism and Georges Bataille’s Story of the Eye

    Directory of Open Access Journals (Sweden)

    Chris Vanderwees

    2014-01-01

    Full Text Available This article explores the relationship between feminist criticism and Georges Bataille’s Story of the Eye . Much of the critical work on Bataille assimilates his psychosocial theories in Erotism with the manifestation of those theories in his fiction without acknowledging potential contradictions between the two bodies of work. The conflation of important distinctions between representations of sex and death in Story of the Eye and the writings of Erotism forecloses the possibility of reading Bataille’s novel as a critique of gender relations. This article unravels some of the distinctions between Erotism and Story of the Eye in order to complicate the assumption that the novel simply reproduces phallogocentric sexual fantasies of transgression. Drawing from the work of Angela Carter and Laura Mulvey, the author proposes the possibility of reading Story of the Eye as a pornographic critique of gender relations through an analysis of the novel’s displacement and destruction of the male gaze.

  16. Risk and Ambiguity in Information Seeking: Eye Gaze Patterns Reveal Contextual Behavior in Dealing with Uncertainty.

    Science.gov (United States)

    Wittek, Peter; Liu, Ying-Hsang; Darányi, Sándor; Gedeon, Tom; Lim, Ik Soo

    2016-01-01

    Information foraging connects optimal foraging theory in ecology with how humans search for information. The theory suggests that, following an information scent, the information seeker must optimize the tradeoff between exploration by repeated steps in the search space vs. exploitation, using the resources encountered. We conjecture that this tradeoff characterizes how a user deals with uncertainty and its two aspects, risk and ambiguity in economic theory. Risk is related to the perceived quality of the actually visited patch of information, and can be reduced by exploiting and understanding the patch to a better extent. Ambiguity, on the other hand, is the opportunity cost of having higher quality patches elsewhere in the search space. The aforementioned tradeoff depends on many attributes, including traits of the user: at the two extreme ends of the spectrum, analytic and wholistic searchers employ entirely different strategies. The former type focuses on exploitation first, interspersed with bouts of exploration, whereas the latter type prefers to explore the search space first and consume later. Our findings from an eye-tracking study of experts' interactions with novel search interfaces in the biomedical domain suggest that user traits of cognitive styles and perceived search task difficulty are significantly correlated with eye gaze and search behavior. We also demonstrate that perceived risk shifts the balance between exploration and exploitation in either type of users, tilting it against vs. in favor of ambiguity minimization. Since the pattern of behavior in information foraging is quintessentially sequential, risk and ambiguity minimization cannot happen simultaneously, leading to a fundamental limit on how good such a tradeoff can be. This in turn connects information seeking with the emergent field of quantum decision theory.

  17. Risk and Ambiguity in Information Seeking: Eye Gaze Patterns Reveal Contextual Behaviour in Dealing with Uncertainty

    Directory of Open Access Journals (Sweden)

    Peter Wittek

    2016-11-01

    Full Text Available Information foraging connects optimal foraging theory in ecology with how humans search for information. The theory suggests that, following an information scent, the information seeker must optimize the tradeoff between exploration by repeated steps in the search space vs. exploitation, using the resources encountered. We conjecture that this tradeoff characterizes how a user deals with uncertainty and its two aspects, risk and ambiguity in economic theory. Risk is related to the perceived quality of the actually visited patch of information, and can be reduced by exploiting and understanding the patch to a better extent. Ambiguity, on the other hand, is the opportunity cost of having higher quality patches elsewhere in the search space. The aforementioned tradeoff depends on many attributes, including traits of the user: at the two extreme ends of the spectrum, analytic and wholistic searchers employ entirely different strategies. The former type focuses on exploitation first, interspersed with bouts of exploration, whereas the latter type prefers to explore the search space first and consume later. Our findings from an eye-tracking study of experts' interactions with novel search interfaces in the biomedical domain suggest that user traits of cognitive styles and perceived search task difficulty are significantly correlated with eye gaze and search behaviour. We also demonstrate that perceived risk shifts the balance between exploration and exploitation in either type of users, tilting it against vs. in favour of ambiguity minimization. Since the pattern of behaviour in information foraging is quintessentially sequential, risk and ambiguity minimization cannot happen simultaneously, leading to a fundamental limit on how good such a tradeoff can be. This in turn connects information seeking with the emergent field of quantum decision theory.

  18. Testing the dual-route model of perceived gaze direction: Linear combination of eye and head cues.

    Science.gov (United States)

    Otsuka, Yumiko; Mareschal, Isabelle; Clifford, Colin W G

    2016-06-01

    We have recently proposed a dual-route model of the effect of head orientation on perceived gaze direction (Otsuka, Mareschal, Calder, & Clifford, 2014; Otsuka, Mareschal, & Clifford, 2015), which computes perceived gaze direction as a linear combination of eye orientation and head orientation. By parametrically manipulating eye orientation and head orientation, we tested the adequacy of a linear model to account for the effect of horizontal head orientation on perceived direction of gaze. Here, participants adjusted an on-screen pointer toward the perceived gaze direction in two image conditions: Normal condition and Wollaston condition. Images in the Normal condition included a change in the visible part of the eye along with the change in head orientation, while images in the Wollaston condition were manipulated to have identical eye regions across head orientations. Multiple regression analysis with explanatory variables of eye orientation and head orientation revealed that linear models account for most of the variance both in the Normal condition and in the Wollaston condition. Further, we found no evidence that the model with a nonlinear term explains significantly more variance. Thus, the current study supports the dual-route model that computes the perceived gaze direction as a linear combination of eye orientation and head orientation.
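The dual-route model's linear combination of eye and head cues can be written directly as a one-line function. The weights below are hypothetical placeholders, not the regression coefficients fitted in the study:

```python
# Minimal sketch of the dual-route linear model: perceived gaze
# direction is modelled as a weighted sum of eye orientation and head
# orientation (both in degrees, positive = rightward). The weights are
# hypothetical, not the paper's fitted values.

def perceived_gaze(eye_deg, head_deg, w_eye=1.0, w_head=0.3, bias=0.0):
    """Linear combination of eye and head orientation cues."""
    return w_eye * eye_deg + w_head * head_deg + bias

# A head turned 20 deg right with the eyes rotated 10 deg left in the head:
p = perceived_gaze(-10.0, 20.0)   # -10 + 0.3 * 20 = -4.0 deg
```

Fitting `w_eye` and `w_head` to pointer-adjustment data is exactly the multiple regression the study ran; the finding that a nonlinear term adds no explanatory power is what justifies stopping at this linear form.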

  19. Eye-Gaze Analysis of Facial Emotion Recognition and Expression in Adolescents with ASD.

    Science.gov (United States)

    Wieckowski, Andrea Trubanova; White, Susan W

    2017-01-01

    Impaired emotion recognition and expression in individuals with autism spectrum disorder (ASD) may contribute to observed social impairment. The aim of this study was to examine the role of visual attention directed toward nonsocial aspects of a scene as a possible mechanism underlying recognition and expressive ability deficiency in ASD. One recognition and two expression tasks were administered. Recognition was assessed in a forced-choice paradigm, and expression was assessed during scripted and free-choice response (in response to emotional stimuli) tasks in youth with ASD (n = 20) and an age-matched sample of typically developing youth (n = 20). During stimulus presentation prior to response in each task, participants' eye gaze was tracked. Youth with ASD were less accurate at identifying disgust and sadness in the recognition task. They fixated less on the eye region of stimuli showing surprise. A group difference was found during the free-choice response task, such that those with ASD expressed emotion less clearly, but no difference was found during the scripted task. Results suggest altered eye gaze to the mouth region, but not the eye region, as a candidate mechanism for decreased ability to recognize or express emotion. Findings inform our understanding of the association between social attention and emotion recognition and expression deficits.

  20. Keeping your eye on the rail: gaze behaviour of horse riders approaching a jump.

    Science.gov (United States)

    Hall, Carol; Varley, Ian; Kay, Rachel; Crundall, David

    2014-01-01

    The gaze behaviour of riders during their approach to a jump was investigated using a mobile eye tracking device (ASL Mobile Eye). The timing, frequency and duration of fixations on the jump and the percentage of time when their point of gaze (POG) was located elsewhere were assessed. Fixations were identified when the POG remained on the jump for 100 ms or longer. The jumping skill of experienced but non-elite riders (n = 10) was assessed by means of a questionnaire. Their gaze behaviour was recorded as they completed a course of three identical jumps five times. The speed and timing of the approach was calculated. Gaze behaviour throughout the overall approach and during the last five strides before take-off was assessed following frame-by-frame analyses. Differences in relation to both round and jump number were found. Significantly longer was spent fixated on the jump during round 2, both during the overall approach and during the last five strides (p < 0.05). Riders who scored higher for jumping skill tended to fixate on the jump earlier (p = 0.07), when the horse was further from the jump (p = 0.09), and their first fixation on the jump was of a longer duration (p = 0.06). Trials with elite riders are now needed to further identify sport-specific visual skills and their relationship with performance. Visual training should be included in preparation for equestrian sports participation, the positive impact of which has been clearly demonstrated in other sports.

  1. Eye gaze tracking for endoscopic camera positioning: an application of a hardware/software interface developed to automate Aesop.

    Science.gov (United States)

    Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K

    2008-01-01

    A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.

  2. In the Eye of the Beholder: A Survey of Models for Eyes and Gaze

    DEFF Research Database (Denmark)

    Witzner Hansen, Dan; Ji, Qiang

    2010-01-01

    Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications...

  3. Kinematic property of target motion conditions gaze behavior and eye-hand synergy during manual tracking.

    Science.gov (United States)

    Huang, Chien-Ting; Hwang, Ing-Shiou

    2013-12-01

    This study investigated how frequency demand and motion feedback influenced composite ocular movements and eye-hand synergy during manual tracking. Fourteen volunteers conducted slow and fast force-tracking in which targets were displayed in either line-mode or wave-mode to guide manual tracking with target movement of direct position or velocity nature. The results showed that eye-hand synergy was a selective response of spatiotemporal coupling conditional on target rate and feedback mode. Slow and line-mode tracking exhibited stronger eye-hand coupling than fast and wave-mode tracking. Both eye movement and manual action led the target signal during fast-tracking, while the latency of ocular navigation during slow-tracking depended on the feedback mode. Slow-tracking resulted in more saccadic responses and larger pursuit gains than fast-tracking. Line-mode tracking led to larger pursuit gains but fewer and shorter gaze fixations than wave-mode tracking. During slow-tracking, incidences of saccade and gaze fixation fluctuated across a target cycle, peaking at velocity maximum and the maximal curvature of target displacement, respectively. For line-mode tracking, the incidence of smooth pursuit was phase-dependent, peaking at velocity maximum as well. Manual behavior of slow or line-mode tracking was better predicted by composite eye movements than that of fast or wave-mode tracking. In conclusion, manual tracking relied on versatile visual strategies to perceive target movements of different kinematic properties, which suggested a flexible coordinative control for the ocular and manual sensorimotor systems. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Parent Perception of Two Eye-Gaze Control Technology Systems in Young Children with Cerebral Palsy: Pilot Study.

    Science.gov (United States)

    Karlsson, Petra; Wallen, Margaret

    2017-01-01

    Eye-gaze control technology enables people with significant physical disability to access computers for communication, play, learning and environmental control. This pilot study used a multiple case study design with repeated baseline assessment and parents' evaluations to compare two eye-gaze control technology systems to identify any differences in factors such as ease of use and impact of the systems for their young children. Five children, aged 3 to 5 years, with dyskinetic cerebral palsy, and their families participated. Overall, families were satisfied with both the Tobii PCEye Go and myGaze® eye tracker, found them easy to position and use, and children learned to operate them quickly. This technology provides young children with important opportunities for learning, play, leisure, and developing communication.

  5. Estimating 3D gaze in physical environment: a geometric approach on consumer-level remote eye tracker

    Science.gov (United States)

    Wibirama, Sunu; Mahesa, Rizki R.; Nugroho, Hanung A.; Hamamoto, Kazuhiko

    2017-02-01

    Remote eye trackers at consumer prices have been used for various applications on flat computer screens. Meanwhile, 3D gaze tracking in physical environments has been useful for visualizing gaze behavior, controlling robots, and assistive technology. Rather than using affordable remote eye trackers, however, 3D gaze tracking in physical environments has so far been performed with corporate-level head-mounted eye trackers, limiting its practical usage to niche users. In this research, we propose a novel method to estimate 3D gaze using a consumer-level remote eye tracker. We implement a geometric approach to obtain the 3D point of gaze from the binocular lines of sight. Experimental results show that the proposed method yielded low errors of 3.47 ± 3.02 cm, 3.02 ± 1.34 cm, and 2.57 ± 1.85 cm in the X, Y, and Z dimensions, respectively. The proposed approach may be used as a starting point for designing interaction methods in 3D physical environments.
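The geometric approach the abstract describes (a 3D point of gaze from binocular lines of sight) is commonly implemented as the midpoint of the shortest segment between the two gaze rays. The sketch below illustrates that geometry only; it is not the authors' implementation, and the eye positions and gaze directions are invented.

```python
# Hedged sketch: estimate the 3D point of gaze (POG) as the midpoint of the
# common perpendicular between the left and right eyes' lines of sight.

def sub(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def gaze_point_3d(p_left, d_left, p_right, d_right):
    """Midpoint of the shortest segment between two gaze rays p + t*d."""
    w = sub(p_left, p_right)
    a, b, c = dot(d_left, d_left), dot(d_left, d_right), dot(d_right, d_right)
    d, e = dot(d_left, w), dot(d_right, w)
    denom = a * c - b * b                # approaches 0 for parallel rays
    t_l = (b * e - c * d) / denom
    t_r = (a * e - b * d) / denom
    close_l = [p + t_l * v for p, v in zip(p_left, d_left)]
    close_r = [p + t_r * v for p, v in zip(p_right, d_right)]
    return [(x + y) / 2 for x, y in zip(close_l, close_r)]

# Both rays aimed at (0, 0, 50) from eye centers 6 cm apart (units: cm):
pog = gaze_point_3d([-3, 0, 0], [3, 0, 50], [3, 0, 0], [-3, 0, 50])
print([round(v, 3) for v in pog])  # → [0.0, 0.0, 50.0]
```

With noisy per-eye gaze vectors the two rays rarely intersect, which is why the midpoint of the common perpendicular is used rather than an exact intersection.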

  6. Eye'm talking to you: speakers' gaze direction modulates co-speech gesture processing in the right MTG.

    Science.gov (United States)

    Holler, Judith; Kokal, Idil; Toni, Ivan; Hagoort, Peter; Kelly, Spencer D; Özyürek, Aslı

    2015-02-01

    Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such as eye gaze, or with the processing of semantic information, provided by speech and gesture. Participants were scanned (fMRI) while taking part in triadic communication involving two recipients and a speaker. The speaker uttered sentences that were and were not accompanied by complementary iconic gestures. Crucially, the speaker alternated her gaze direction, thus creating two recipient roles: addressed (direct gaze) vs unaddressed (averted gaze) recipient. The comprehension of Speech&Gesture relative to SpeechOnly utterances recruited middle occipital, middle temporal and inferior frontal gyri, bilaterally. The calcarine sulcus and posterior cingulate cortex were sensitive to differences between direct and averted gaze. Most importantly, Speech&Gesture utterances, but not SpeechOnly utterances, produced additional activity in the right middle temporal gyrus when participants were addressed. Marking communicative intent with gaze direction modulates the processing of speech-gesture utterances in cerebral areas typically associated with the semantic processing of multi-modal communicative acts. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  7. Baby schema in human and animal faces induces cuteness perception and gaze allocation in children

    Directory of Open Access Journals (Sweden)

    Marta Borgi

    2014-05-01

    Full Text Available The baby schema concept was originally proposed as a set of infantile traits with high appeal for humans, subsequently shown to elicit caretaking behavior and to affect cuteness perception and attentional processes. However, it is unclear whether the response to the baby schema may be extended to the human-animal bond context. Moreover, questions remain as to whether the cute response is constant and persistent or whether it changes with development. In the present study we parametrically manipulated the baby schema in images of humans, dogs and cats. We analyzed responses of 3-6-year-old children, using both explicit (i.e., cuteness ratings) and implicit (i.e., eye gaze patterns) measures. By means of eye-tracking, we assessed children's preferential attention to images varying only in the degree of baby schema and explored participants' fixation patterns during a cuteness task. For comparative purposes, cuteness ratings were also obtained in a sample of adults. Overall our results show that the response to an infantile facial configuration emerges early during development. In children, the baby schema affects both cuteness perception and gaze allocation to infantile stimuli and to specific facial features, an effect not simply limited to human faces. In line with previous research, the results confirm humans' positive appraisal of animals and inform both educational and therapeutic interventions involving pets, helping to minimize risk factors (e.g., dog bites).

  8. In the eye of the beholder: reduced threat-bias and increased gaze-imitation towards reward in relation to trait anger.

    Directory of Open Access Journals (Sweden)

    David Terburg

    The gaze of a fearful face silently signals a potential threat's location, while the happy-gaze communicates the location of impending reward. Imitating such gaze-shifts is an automatic form of social interaction that promotes survival of individual and group. Evidence from gaze-cueing studies suggests that covert allocation of attention to another individual's gaze-direction is facilitated when threat is communicated and further enhanced by trait anxiety. We used novel eye-tracking techniques to assess whether dynamic fearful and happy facial expressions actually facilitate automatic gaze-imitation. We show that this actual gaze-imitation effect is stronger when threat is signaled, but not further enhanced by trait anxiety. Instead, trait anger predicts facilitated gaze-imitation to reward, and to reward compared to threat. These results agree with an increasing body of evidence on trait anger sensitivity to reward.

  9. In the Eye of the Beholder: Reduced Threat-Bias and Increased Gaze-Imitation towards Reward in Relation to Trait Anger

    Science.gov (United States)

    Terburg, David; Aarts, Henk; Putman, Peter; van Honk, Jack

    2012-01-01

    The gaze of a fearful face silently signals a potential threat's location, while the happy-gaze communicates the location of impending reward. Imitating such gaze-shifts is an automatic form of social interaction that promotes survival of individual and group. Evidence from gaze-cueing studies suggests that covert allocation of attention to another individual's gaze-direction is facilitated when threat is communicated and further enhanced by trait anxiety. We used novel eye-tracking techniques to assess whether dynamic fearful and happy facial expressions actually facilitate automatic gaze-imitation. We show that this actual gaze-imitation effect is stronger when threat is signaled, but not further enhanced by trait anxiety. Instead, trait anger predicts facilitated gaze-imitation to reward, and to reward compared to threat. These results agree with an increasing body of evidence on trait anger sensitivity to reward. PMID:22363632

  10. Gaze perception in social anxiety and social anxiety disorder.

    Science.gov (United States)

    Schulze, Lars; Renneberg, Babette; Lobmaier, Janek S

    2013-12-16

    Clinical observations suggest abnormal gaze perception to be an important indicator of social anxiety disorder (SAD). Experimental research has so far paid relatively little attention to gaze perception in SAD. In this article we first discuss gaze perception in healthy human beings before reviewing self-referential and threat-related biases of gaze perception in clinical and non-clinical socially anxious samples. Relative to controls, socially anxious individuals exhibit an enhanced self-directed perception of gaze directions and demonstrate a pronounced fear of direct eye contact, though findings are less consistent regarding the avoidance of mutual gaze in SAD. Prospects for future research and clinical implications are discussed.

  11. The influence of banner advertisements on attention and memory: human faces with averted gaze can enhance advertising effectiveness.

    Science.gov (United States)

    Sajjacholapunt, Pitch; Ball, Linden J

    2014-01-01

    Research suggests that banner advertisements used in online marketing are often overlooked, especially when positioned horizontally on webpages. Such inattention invariably gives rise to an inability to remember advertising brands and messages, undermining the effectiveness of this marketing method. Recent interest has focused on whether human faces within banner advertisements can increase attention to the information they contain, since the gaze cues conveyed by faces can influence where observers look. We report an experiment that investigated the efficacy of faces located in banner advertisements to enhance the attentional processing and memorability of banner contents. We tracked participants' eye movements when they examined webpages containing either bottom-right vertical banners or bottom-center horizontal banners. We also manipulated facial information such that banners either contained no face, a face with mutual gaze or a face with averted gaze. We additionally assessed people's memories for brands and advertising messages. Results indicated that relative to other conditions, the condition involving faces with averted gaze increased attention to the banner overall, as well as to the advertising text and product. Memorability of the brand and advertising message was also enhanced. Conversely, in the condition involving faces with mutual gaze, the focus of attention was localized more on the face region rather than on the text or product, weakening any memory benefits for the brand and advertising message. This detrimental impact of mutual gaze on attention to advertised products was especially marked for vertical banners. These results demonstrate that the inclusion of human faces with averted gaze in banner advertisements provides a promising means for marketers to increase the attention paid to such adverts, thereby enhancing memory for advertising information.

  12. The influence of banner advertisements on attention and memory: Human faces with averted gaze can enhance advertising effectiveness

    Directory of Open Access Journals (Sweden)

    Pitch Sajjacholapunt

    2014-03-01

    Research suggests that banner advertisements used in online marketing are often overlooked, especially when positioned horizontally on webpages. Such inattention invariably gives rise to an inability to remember advertising brands and messages, undermining the effectiveness of this marketing method. Recent interest has focused on whether human faces within banner advertisements can increase attention to the information they contain, since the gaze cues conveyed by faces can influence where observers look. We report an experiment that investigated the efficacy of faces located in banner advertisements to enhance the attentional processing and memorability of banner contents. We tracked participants' eye movements when they examined webpages containing either bottom-right vertical banners or bottom-centre horizontal banners. We also manipulated facial information such that banners either contained no face, a face with mutual gaze or a face with averted gaze. We additionally assessed people's memories for brands and advertising messages. Results indicated that relative to other conditions, the condition involving faces with averted gaze increased attention to the banner overall, as well as to the advertising text and product. Memorability of the brand and advertising message was also enhanced. Conversely, in the condition involving faces with mutual gaze, the focus of attention was localised more on the face region rather than on the text or product, weakening any memory benefits for the brand and advertising message. This detrimental impact of mutual gaze on attention to advertised products was especially marked for vertical banners. These results demonstrate that the inclusion of human faces with averted gaze in banner advertisements provides a promising means for marketers to increase the attention paid to such adverts, thereby enhancing memory for advertising information.

  13. Dynamic Eye Tracking Based Metrics for Infant Gaze Patterns in the Face-Distractor Competition Paradigm

    Science.gov (United States)

    Ahtola, Eero; Stjerna, Susanna; Yrttiaho, Santeri; Nelson, Charles A.; Leppänen, Jukka M.; Vanhatalo, Sampsa

    2014-01-01

    Objective To develop new standardized eye tracking based measures and metrics for infants' gaze dynamics in the face-distractor competition paradigm. Method Eye tracking data were collected from two samples of healthy 7-month-old infants (total n = 45), as well as one sample of 5-month-old infants (n = 22), in a paradigm with a picture of a face or a non-face pattern as a central stimulus, and a geometric shape as a lateral stimulus. The data were analyzed by using conventional measures of infants' initial disengagement from the central to the lateral stimulus (i.e., saccadic reaction time and probability) and, additionally, novel measures reflecting infants' gaze dynamics after the initial disengagement (i.e., cumulative allocation of attention to the central vs. peripheral stimulus). Results The results showed that the initial saccade away from the centrally presented stimulus is followed by a rapid re-engagement of attention with the central stimulus, leading to cumulative preference for the central stimulus over the lateral stimulus over time. This pattern tended to be stronger for salient facial expressions as compared to non-face patterns, was replicable across two independent samples of 7-month-old infants, and differentiated between 7- and 5-month-old infants. Conclusion The results suggest that eye tracking based assessments of infants' cumulative preference for faces over time can be readily parameterized and standardized, and may provide valuable techniques for future studies examining normative developmental changes in preference for social signals. Significance Standardized measures of early developing face preferences may have potential to become surrogate biomarkers of neurocognitive and social development. PMID:24845102

  14. Dynamic eye tracking based metrics for infant gaze patterns in the face-distractor competition paradigm.

    Directory of Open Access Journals (Sweden)

    Eero Ahtola

    OBJECTIVE: To develop new standardized eye tracking based measures and metrics for infants' gaze dynamics in the face-distractor competition paradigm. METHOD: Eye tracking data were collected from two samples of healthy 7-month-old infants (total n = 45), as well as one sample of 5-month-old infants (n = 22), in a paradigm with a picture of a face or a non-face pattern as a central stimulus, and a geometric shape as a lateral stimulus. The data were analyzed by using conventional measures of infants' initial disengagement from the central to the lateral stimulus (i.e., saccadic reaction time and probability) and, additionally, novel measures reflecting infants' gaze dynamics after the initial disengagement (i.e., cumulative allocation of attention to the central vs. peripheral stimulus). RESULTS: The results showed that the initial saccade away from the centrally presented stimulus is followed by a rapid re-engagement of attention with the central stimulus, leading to cumulative preference for the central stimulus over the lateral stimulus over time. This pattern tended to be stronger for salient facial expressions as compared to non-face patterns, was replicable across two independent samples of 7-month-old infants, and differentiated between 7- and 5-month-old infants. CONCLUSION: The results suggest that eye tracking based assessments of infants' cumulative preference for faces over time can be readily parameterized and standardized, and may provide valuable techniques for future studies examining normative developmental changes in preference for social signals. SIGNIFICANCE: Standardized measures of early developing face preferences may have potential to become surrogate biomarkers of neurocognitive and social development.

  15. Gaze as a biometric

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Hong-Jun [ORNL]; Carmichael, Tandy [Tennessee Technological University]; Tourassi, Georgia [ORNL]

    2014-01-01

    Two people may analyze a visual scene in two completely different ways. Our study sought to determine whether human gaze may be used to establish the identity of an individual. To accomplish this objective we investigated the gaze patterns of twelve individuals viewing different still images with different spatial relationships. Specifically, we created 5 visual dot-pattern tests to be shown on a standard computer monitor. These tests challenged the viewer's capacity to distinguish proximity, alignment, and perceptual organization. Each test included 50 images of varying difficulty (total of 250 images). Eye-tracking data were collected from each individual while taking the tests. The eye-tracking data were converted into gaze velocities and analyzed with Hidden Markov Models to develop personalized gaze profiles. Using leave-one-out cross-validation, we observed that these personalized profiles could differentiate among the 12 users with classification accuracy ranging between 53% and 76%, depending on the test. This was statistically significantly better than random guessing (i.e., 8.3%, or 1 out of 12). Classification accuracy was higher for the tests where the user's average gaze velocity per case was lower. The study findings support the feasibility of using gaze as a biometric or personalized biomarker. These findings could have implications in Radiology training and the development of personalized e-learning environments.
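The feature step this abstract mentions, converting raw gaze samples into gaze velocities before HMM modeling, can be sketched in a few lines. This is an illustration only, not the study's code: the HMM stage is omitted, and the 60 Hz sampling rate and pixel coordinates are assumptions.

```python
# Sketch of the velocity-feature step: point-to-point gaze speed from
# successive (x, y) gaze samples recorded at a fixed rate (assumed 60 Hz).

def gaze_velocities(xs, ys, hz=60.0):
    """Gaze speed (pixels/s) between consecutive samples."""
    dt = 1.0 / hz
    return [
        ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:]))
    ]

# Invented samples: near-still fixation, then a large jump (a saccade):
xs = [100.0, 103.0, 103.0, 203.0]
ys = [100.0, 104.0, 104.0, 104.0]
v = gaze_velocities(xs, ys)
print([round(s, 1) for s in v])  # → [300.0, 0.0, 6000.0]
```

A velocity sequence like this (slow during fixations, fast during saccades) is the kind of observation stream an HMM can model per user to build a personalized gaze profile.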

  16. Eye Gaze Tracking Method Based on the Pupil Center Corneal Reflection Technique

    Institute of Scientific and Technical Information of China (English)

    吴广发; 宋鸿陟; 黄生辉

    2014-01-01

    Eye gaze tracking is an important research topic in multimodal human-computer interaction, and gaze estimation based on the pupil center corneal reflection (PCCR) technique is among the most widely used eye gaze tracking technologies. The primary purpose of the PCCR technique is to extract the pupil-center-to-corneal-reflection vector from images of the human eye as the visual information required by the gaze estimation model. This work constructs an infrared light source device to extract the PCCR vector and builds an eye gaze tracking system based on the PCCR technique, providing a feasible, low-cost solution for eye gaze tracking research in human-computer interaction.
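The core PCCR quantity can be made concrete with a minimal sketch: the gaze feature is the vector from the corneal glint (the IR reflection) to the pupil center, which a calibrated mapping then converts to screen coordinates. The affine mapping coefficients and pixel positions below are invented placeholders for a real per-user calibration.

```python
# Minimal PCCR sketch (illustrative; coefficients would come from calibration).

def pccr_vector(pupil_center, glint_center):
    """Pupil center minus corneal-reflection (glint) position, in image pixels."""
    return (pupil_center[0] - glint_center[0],
            pupil_center[1] - glint_center[1])

def map_to_screen(v, coeffs):
    """Per-axis affine gaze mapping: screen = a + b*vx + c*vy (calibrated)."""
    (ax, bx, cx), (ay, by, cy) = coeffs
    return (ax + bx * v[0] + cx * v[1], ay + by * v[0] + cy * v[1])

v = pccr_vector((322.0, 241.0), (310.0, 235.0))            # → (12.0, 6.0)
screen = map_to_screen(v, ((960.0, 40.0, 0.0), (540.0, 0.0, 60.0)))
print(screen)  # → (1440.0, 900.0)
```

Real systems typically use a second-order polynomial mapping and multiple calibration targets; the affine form here is the simplest case of the same idea.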

  17. Inhibition of Return in Response to Eye Gaze and Peripheral Cues in Young People with Asperger's Syndrome

    Science.gov (United States)

    Marotta, Andrea; Pasini, Augusto; Ruggiero, Sabrina; Maccari, Lisa; Rosa, Caterina; Lupianez, Juan; Casagrande, Maria

    2013-01-01

    Inhibition of return (IOR) reflects slower reaction times to stimuli presented in previously attended locations. In this study, we examined this inhibitory after-effect using two different cue types, eye-gaze and standard peripheral cues, in individuals with Asperger's syndrome and typically developing individuals. Typically developing…

  18. Inhibition of Return in Response to Eye Gaze and Peripheral Cues in Young People with Asperger's Syndrome

    Science.gov (United States)

    Marotta, Andrea; Pasini, Augusto; Ruggiero, Sabrina; Maccari, Lisa; Rosa, Caterina; Lupianez, Juan; Casagrande, Maria

    2013-01-01

    Inhibition of return (IOR) reflects slower reaction times to stimuli presented in previously attended locations. In this study, we examined this inhibitory after-effect using two different cue types, eye-gaze and standard peripheral cues, in individuals with Asperger's syndrome and typically developing individuals. Typically developing…

  19. Rhesus monkeys show human-like changes in gaze following across the lifespan.

    Science.gov (United States)

    Rosati, Alexandra G; Arre, Alyssa M; Platt, Michael L; Santos, Laurie R

    2016-05-11

    Gaze following, or co-orienting with others, is a foundational skill for human social behaviour. The emergence of this capacity scaffolds critical human-specific abilities such as theory of mind and language. Non-human primates also follow others' gaze, but less is known about how the cognitive mechanisms supporting this behaviour develop over the lifespan. Here we experimentally tested gaze following in 481 semi-free-ranging rhesus macaques (Macaca mulatta) ranging from infancy to old age. We found that monkeys began to follow gaze in infancy and this response peaked in the juvenile period, suggesting that younger monkeys were especially attuned to gaze information, like humans. After sexual maturity, monkeys exhibited human-like sex differences in gaze following, with adult females showing more gaze following than males. Finally, older monkeys showed a reduced propensity to follow gaze, just as older humans do. In a second study (n = 80), we confirmed that macaques exhibit similar baseline rates of looking upwards in a control condition, regardless of age. Our findings indicate that, despite important differences in human and non-human primate life-history characteristics and typical social experiences, monkeys undergo robust ontogenetic shifts in gaze following across early development, adulthood and ageing that are strikingly similar to those of humans.

  20. Remote eye-gaze tracking method robust to the device rotation

    Science.gov (United States)

    Kim, Sung-Tae; Choi, Kang-A.; Shin, Yong-Goo; Kang, Mun-Cheon; Ko, Sung-Jea

    2016-08-01

    A remote eye-gaze tracking (REGT) method is presented, which compensates for the error caused by device rotation. The proposed method is based on the state-of-the-art homography normalization (HN) method. Conventional REGT methods, including the HN method, suffer from a large estimation error in the presence of device rotation. However, little effort has been made to clarify the relation between the device rotation and its subsequent error. This paper introduces two factors inducing device rotation error, the discrepancy between the optical and visual axis, called angle kappa, and the change in camera location. On the basis of these factors, an efficient method for compensating for the REGT error is proposed. While the device undergoes a 360-deg rotation, a series of erroneous points of gaze (POGs) are obtained on the screen and modeled as an ellipse, and then the center of the ellipse is exploited to estimate the rotation-invariant POG. Experimental results demonstrate that the proposed REGT method can estimate the POG accurately in spite of the rotational movement of the device.
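The compensation idea in this abstract, that erroneous POGs traced during a full device rotation form an ellipse whose center is the rotation-invariant POG, can be illustrated with a simplified sketch. As an assumption, the ellipse fit is replaced here by a centroid over POGs sampled uniformly in rotation angle (for uniform angular sampling the centroid coincides with the ellipse center); all coordinates are invented.

```python
import math

# Hedged sketch: recover a rotation-invariant POG as the center of the
# ellipse traced by erroneous POGs during a 360-deg device rotation.
# Centroid used in place of a full ellipse fit (exact for uniform sampling).

def rotation_invariant_pog(pogs):
    """Centroid of POGs collected over one full device rotation."""
    n = len(pogs)
    return (sum(x for x, _ in pogs) / n, sum(y for _, y in pogs) / n)

# Simulated erroneous POGs on an ellipse centered at the true POG (400, 300):
true_pog = (400.0, 300.0)
pogs = [
    (true_pog[0] + 80.0 * math.cos(2 * math.pi * k / 36),
     true_pog[1] + 50.0 * math.sin(2 * math.pi * k / 36))
    for k in range(36)
]
cx, cy = rotation_invariant_pog(pogs)
print(round(cx, 1), round(cy, 1))  # → 400.0 300.0
```

With unevenly sampled rotation angles, an algebraic ellipse fit (as in the paper) would be needed instead of the centroid shortcut.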

  1. Eye-catching odors: olfaction elicits sustained gazing to faces and eyes in 4-month-old infants.

    Science.gov (United States)

    Durand, Karine; Baudouin, Jean-Yves; Lewkowicz, David J; Goubet, Nathalie; Schaal, Benoist

    2013-01-01

    This study investigated whether an odor can affect infants' attention to visually presented objects and whether it can selectively direct visual gaze at visual targets as a function of their meaning. Four-month-old infants (n = 48) were exposed to their mother's body odors while their visual exploration was recorded with an eye-movement tracking system. Two groups of infants, who were assigned to either an odor condition or a control condition, looked at a scene composed of still pictures of faces and cars. As expected, infants looked longer at the faces than at the cars, but this spontaneous preference for faces was significantly enhanced in the presence of the odor. As expected also, when looking at the face, the infants looked longer at the eyes than at any other facial region, but, again, they looked at the eyes significantly longer in the presence of the odor. Thus, 4-month-old infants are sensitive to the contextual effects of odors while looking at faces. This suggests that early social attention to faces is mediated by visual as well as non-visual cues.

  2. Real-time inference of word relevance from electroencephalogram and eye gaze

    Science.gov (United States)

    Wenzel, M. A.; Bogojeski, M.; Blankertz, B.

    2017-10-01

    Objective. Brain-computer interfaces can potentially map the subjective relevance of the visual surroundings, based on neural activity and eye movements, in order to infer the interest of a person in real time. Approach. Readers looked for words belonging to one out of five semantic categories, while a stream of words passed at different locations on the screen. It was estimated in real time which words, and thus which semantic category, interested each reader based on the electroencephalogram (EEG) and the eye gaze. Main results. Words that were subjectively relevant could be decoded online from the signals. The estimation resulted in an average rank of 1.62 for the category of interest among the five categories after a hundred words had been read. Significance. It was demonstrated that the interest of a reader can be inferred online from EEG and eye tracking signals, which can potentially be used in novel types of adaptive software that enrich the interaction by adding implicit information about the interest of the user to the explicit interaction. The study is characterised by the following novelties. Interpretation with respect to the word meaning was necessary, in contrast to the usual practice in brain-computer interfacing where stimulus recognition is sufficient. The typical counting task was avoided because it would not be sensible for implicit relevance detection. Several words were displayed at the same time, in contrast to the typical sequences of single stimuli. Neural activity was related to the words via eye tracking, with no restrictions on the eye movements.

  3. Dynamic modeling of patient and physician eye gaze to understand the effects of electronic health records on doctor-patient communication and attention.

    Science.gov (United States)

    Montague, Enid; Asan, Onur

    2014-03-01

    The aim of this study was to examine eye gaze patterns between patients and physicians while electronic health records were used to support patient care. Eye gaze provides an indication of physician attention to the patient, patient/physician interaction, and physician behaviors such as searching for information and documenting information. A field study was conducted in which 100 patient visits were observed and video recorded in a primary care clinic. Videos were then coded for gaze behaviors: patients' and physicians' gaze at each other and at artifacts such as electronic health records was coded using a pre-established objective coding scheme. Gaze data were then analyzed using lag sequential methods. Results showed several eye gaze patterns that were significantly dependent on one another. All doctor-initiated gaze patterns were followed by patient gaze patterns. Some patient-initiated gaze patterns were also significantly followed by doctor gaze patterns, unlike the findings of previous studies. Health information technology appears to contribute to some of the new significant patterns that have emerged. Differences were also found in gaze patterns related to technology that differ from patterns identified in studies with paper charts. Several sequences related to patient-doctor-technology were also significant. Electronic health records affect the patient-physician eye contact dynamic differently than paper charts. This study identified several patterns of patient-physician interaction with electronic health record systems. Consistent with previous studies, physician-initiated gaze is an important driver of the interactions between patient and physician and between patient and technology. Published by Elsevier Ireland Ltd.

  4. Dynamic modeling of patient and physician eye gaze to understand the effects of electronic health records on doctor-patient communication and attention

    Science.gov (United States)

    Montague, Enid; Asan, Onur

    2014-01-01

    Objective The aim of this study was to examine eye gaze patterns between patients and physicians while electronic health records were used to support patient care. Background Eye gaze provides an indication of physician attention to the patient, patient/physician interaction, and physician behaviors such as searching for information and documenting information. Methods A field study was conducted in which 100 patient visits were observed and video recorded in a primary care clinic. Videos were then coded for gaze behaviors: patients' and physicians' gaze at each other and at artifacts such as electronic health records was coded using a pre-established objective coding scheme. Gaze data were then analyzed using lag sequential methods. Results Results showed several eye gaze patterns that were significantly dependent on one another. All doctor-initiated gaze patterns were followed by patient gaze patterns. Some patient-initiated gaze patterns were also significantly followed by doctor gaze patterns, unlike the findings of previous studies. Health information technology appears to contribute to some of the new significant patterns that have emerged. Differences were also found in gaze patterns related to technology that differ from patterns identified in studies with paper charts. Several sequences related to patient-doctor-technology were also significant. Electronic health records affect the patient-physician eye contact dynamic differently than paper charts. Conclusion This study identified several patterns of patient-physician interaction with electronic health record systems. Consistent with previous studies, physician-initiated gaze is an important driver of the interactions between patient and physician and between patient and technology. PMID:24380671

  5. Different but complementary roles of action and gaze in action observation priming: Insights from eye- and motion-tracking measures

    Directory of Open Access Journals (Sweden)

    Clement Letesson

    2015-05-01

    Full Text Available Action priming following action observation is thought to be caused by the observed action kinematics being represented in the same brain areas as those used for action execution. But, action priming can also be explained by shared goal representations, with compatibility between observation of the agent’s gaze and the intended action of the observer. To assess the contribution of action kinematics and eye gaze cues in the prediction of an agent’s action goal and action priming, participants observed actions where the availability of both cues was manipulated. Action observation was followed by action execution, and the congruency between the target of the agent’s and observer’s actions, and the congruency between the observed and executed action spatial location were manipulated. Eye movements were recorded during the observation phase, and the action priming was assessed using motion analysis. The results showed that the observation of gaze information influenced the observer’s prediction speed to attend to the target, and that observation of action kinematic information influenced the accuracy of these predictions. Motion analysis results showed that observed action cues alone primed both spatial incongruent and object congruent actions, consistent with the idea that the prime effect was driven by similarity between goals and kinematics. The observation of action and eye gaze cues together induced a prime effect complementarily sensitive to object and spatial congruency. While observation of the agent’s action kinematics triggered an object-centered and kinematic-centered action representation, independently, the complementary observation of eye gaze triggered a more fine-grained representation illustrating a specification of action kinematics towards the selected goal. Even though both cues differentially contributed to action priming, their complementary integration led to a more refined pattern of action priming.

  6. Toddlers' gaze following through attention modulation : Intention is in the eye of the beholder

    NARCIS (Netherlands)

    de Bordes, Pieter F.; Cox, Ralf F. A.; Hasselman, Fred; Cillessen, Antonius H. N.

    2013-01-01

    We investigated 20-month-olds' (N = 56) gaze following by presenting toddlers with a female model that displayed either ostensive or no ostensive cues before shifting her gaze laterally toward an object. The results indicated that toddlers reliably followed the model's gaze redirection after mutual

  7. Alternative Indices of Performance: An Exploration of Eye Gaze Metrics in a Visual Puzzle Task

    Science.gov (United States)

    2014-07-01

    Subject terms: workload, eye tracking, eye movements, nonlinear dynamics. Abstract fragments: ...complementary information about operator state. ...human performance. For example, qualitative shifts in movement (e.g., from walking to running) can be measured by the variability patterns in the

  8. Gaze Following Is Modulated by Expectations Regarding Others' Action Goals.

    Directory of Open Access Journals (Sweden)

    Jairo Perez-Osorio

    Full Text Available Humans attend to social cues in order to understand and predict others' behavior. Facial expressions and gaze direction provide valuable information to infer others' mental states and intentions. The present study examined the mechanism of gaze following in the context of participants' expectations about successive action steps of an observed actor. We embedded a gaze-cueing manipulation within an action scenario consisting of a sequence of naturalistic photographs. Gaze-induced orienting of attention (gaze following) was analyzed with respect to whether the gaze behavior of the observed actor was in line or not with the action-related expectations of participants (i.e., whether the actor gazed at an object that was congruent or incongruent with an overarching action goal). In Experiment 1, participants followed the gaze of the observed agent, though the gaze-cueing effect was larger when the actor looked at an action-congruent object relative to an incongruent object. Experiment 2 examined whether the pattern of effects observed in Experiment 1 was due to covert, rather than overt, attentional orienting, by requiring participants to maintain eye fixation throughout the sequence of critical photographs (corroborated by monitoring eye movements). The essential pattern of results of Experiment 1 was replicated, with the gaze-cueing effect being completely eliminated when the observed agent gazed at an action-incongruent object. Thus, our findings show that covert gaze following can be modulated by expectations that humans hold regarding successive steps of the action performed by an observed agent.

  9. Research on Gaze Estimation for an Eye Tracking System

    Institute of Scientific and Technical Information of China (English)

    金纯; 李娅萍; 高奇; 曾伟

    2016-01-01

    Aiming at the poor robustness of gaze-point estimation algorithms in non-contact eye-tracking systems, a gaze-point estimation algorithm based on angle mapping is proposed. First, according to the characteristics of the human eye and the principles of visual imaging, there is an angular mapping relationship between the position of the pupil center relative to the corneal reflection (glint) and the position of the gaze point relative to the infrared light sources, from which an approximate gaze point is estimated. Then, on the basis of an eyeball model, the deviation of the gaze point caused by the curved corneal surface is analyzed, and the error is compensated for using arc length. Finally, a nonlinear polynomial model is used to fit the deviation between the visual axis and the optical axis of the eye, yielding the final gaze point. Experiments show that the system achieves high precision and a high degree of freedom, with a maximum error of less than 1 cm in both the horizontal and vertical directions.
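    The polynomial-regression step of this kind of gaze mapping can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the second-order feature set, the nine-point calibration layout, and all function names are assumptions.

    ```python
    import numpy as np

    def poly_features(v):
        """Second-order polynomial features of a pupil-glint vector (x, y)."""
        x, y = v
        return np.array([1.0, x, y, x * y, x ** 2, y ** 2])

    def fit_gaze_mapping(vectors, points):
        """Least-squares fit of the mapping from pupil-glint vectors to screen points.

        vectors: (N, 2) pupil-center-minus-glint offsets from calibration
        points:  (N, 2) known on-screen calibration targets
        Returns a (6, 2) coefficient matrix.
        """
        A = np.array([poly_features(v) for v in vectors])
        coeffs, *_ = np.linalg.lstsq(A, np.asarray(points), rcond=None)
        return coeffs

    def estimate_gaze(coeffs, v):
        """Map a new pupil-glint vector to an estimated on-screen gaze point."""
        return poly_features(v) @ coeffs

    # Synthetic noiseless calibration: nine targets generated by a known mapping
    rng = np.random.default_rng(0)
    true_coeffs = rng.normal(size=(6, 2))
    cal_vecs = rng.uniform(-1.0, 1.0, size=(9, 2))
    cal_pts = np.array([poly_features(v) @ true_coeffs for v in cal_vecs])
    C = fit_gaze_mapping(cal_vecs, cal_pts)
    est = estimate_gaze(C, cal_vecs[0])  # reproduces cal_pts[0] on noiseless data
    ```

    In practice the calibration targets would be displayed at known screen positions while the eye tracker records the pupil-glint vectors, and the corneal-curvature and visual-axis corrections described above would be applied before or after this fit.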

  10. Perceptual and not physical eye contact elicits pupillary dilation.

    Science.gov (United States)

    Honma, Motoyasu; Tanaka, Yasuto; Osada, Yoshihisa; Kuriyama, Kenichi

    2012-01-01

    Eye contact is important for sharing communication during social interactions. However, how accurately humans can perceive the gaze direction of others toward themselves, and whether pupils dilate when humans consciously or unconsciously perceive that their own eyes are being looked at by others, remain unclear. In this study, we examined the relationship between the explicit perception of looking into each other's eyes and the implicit physiological response of pupillary dilation by using an original face-to-face method. We found that humans do not correctly detect the gaze direction of others. Furthermore, one's pupils dilated when one gazed at others' eyes. Awareness of others' gaze on one's eyes, rather than the actual focusing of others' gaze on one's eyes, enhanced pupillary dilation. Therefore, physiological responses are caused not when people actually look into each other's gaze, but when the consciousness of others' gaze is activated, which suggests that eye contact often involves one-way communication.

  11. From the eyes and the heart: a novel eye-gaze metric that predicts video preferences of a large audience.

    Directory of Open Access Journals (Sweden)

    Christoforos Christoforou

    2015-05-01

    Full Text Available Eye-tracking has been extensively used to quantify audience preferences in the context of marketing and advertising research, primarily in methodologies involving static images or stimuli (i.e. advertising, shelf testing, and website usability). However, these methodologies do not generalize to narrative-based video stimuli where a specific storyline is meant to be communicated to the audience. In this paper, a novel metric based on eye-gaze dispersion (both within and across viewings) that quantifies the impact of narrative-based video stimuli on the preferences of large audiences is presented. The metric is validated in predicting the performance of video advertisements aired during the 2014 Super Bowl final. In particular, the metric is shown to explain 70% of the variance in likeability scores of the 2014 Super Bowl ads as measured by the USA TODAY Ad Meter. In addition, by comparing the proposed metric with Heart Rate Variability (HRV) indices, we have associated the metric with biological processes relating to attention allocation. The underlying idea behind the proposed metric suggests a shift in perspective when it comes to evaluating narrative-based video stimuli. In particular, it suggests that audience preferences on video are modulated by the level of viewers’ lack of attention allocation. The proposed metric can be calculated on any narrative-based video stimuli (i.e. movie, narrative content, emotional content, etc.), and thus has the potential to facilitate the use of such stimuli in several contexts: prediction of audience preferences of movies, quantitative assessment of entertainment pieces, prediction of the impact of movie trailers, identification of group and individual differences in the study of attention-deficit disorders, and the study of desensitization to media violence.

  12. From the eyes and the heart: a novel eye-gaze metric that predicts video preferences of a large audience.

    Science.gov (United States)

    Christoforou, Christoforos; Christou-Champi, Spyros; Constantinidou, Fofi; Theodorou, Maria

    2015-01-01

    Eye-tracking has been extensively used to quantify audience preferences in the context of marketing and advertising research, primarily in methodologies involving static images or stimuli (i.e., advertising, shelf testing, and website usability). However, these methodologies do not generalize to narrative-based video stimuli where a specific storyline is meant to be communicated to the audience. In this paper, a novel metric based on eye-gaze dispersion (both within and across viewings) that quantifies the impact of narrative-based video stimuli on the preferences of large audiences is presented. The metric is validated in predicting the performance of video advertisements aired during the 2014 Super Bowl final. In particular, the metric is shown to explain 70% of the variance in likeability scores of the 2014 Super Bowl ads as measured by the USA TODAY Ad-Meter. In addition, by comparing the proposed metric with Heart Rate Variability (HRV) indices, we have associated the metric with biological processes relating to attention allocation. The underlying idea behind the proposed metric suggests a shift in perspective when it comes to evaluating narrative-based video stimuli. In particular, it suggests that audience preferences on video are modulated by the level of viewers' lack of attention allocation. The proposed metric can be calculated on any narrative-based video stimuli (i.e., movie, narrative content, emotional content, etc.), and thus has the potential to facilitate the use of such stimuli in several contexts: prediction of audience preferences of movies, quantitative assessment of entertainment pieces, prediction of the impact of movie trailers, identification of group and individual differences in the study of attention-deficit disorders, and the study of desensitization to media violence.
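    The dispersion idea can be illustrated with a minimal sketch. The paper's actual metric combines within- and across-viewing dispersion; the per-frame centroid-distance computation below is a simplified assumption, not the published formula.

    ```python
    import numpy as np

    def gaze_dispersion(gaze):
        """Mean across-viewer gaze dispersion over a video.

        gaze: array of shape (viewers, frames, 2) with gaze coordinates.
        For each frame, dispersion is the average Euclidean distance of each
        viewer's gaze point from the across-viewer centroid; the video-level
        score is the mean over frames (lower = more synchronized attention).
        """
        centroid = gaze.mean(axis=0, keepdims=True)        # (1, frames, 2)
        dists = np.linalg.norm(gaze - centroid, axis=-1)   # (viewers, frames)
        return dists.mean()

    # Perfectly synchronized viewers yield zero dispersion
    sync = np.tile(np.arange(10.0).reshape(1, 5, 2), (4, 1, 1))
    d_sync = gaze_dispersion(sync)  # 0.0
    ```

    Under the paper's interpretation, a video that holds an audience's attention should produce low dispersion (everyone looks at the same place at the same time), while lapses of attention scatter the gaze points and raise the score.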

  13. The role of emotion in learning trustworthiness from eye-gaze: Evidence from facial electromyography.

    Science.gov (United States)

    Manssuer, Luis R; Pawling, Ralph; Hayes, Amy E; Tipper, Steven P

    2016-01-01

    Gaze direction can be used to rapidly and reflexively lead or mislead others' attention as to the location of important stimuli. When perception of gaze direction is congruent with the location of a target, responses are faster compared to when incongruent. Faces that consistently gaze congruently are also judged more trustworthy than faces that consistently gaze incongruently. However, it's unclear how gaze-cues elicit changes in trust. We measured facial electromyography (EMG) during an identity-contingent gaze-cueing task to examine whether embodied emotional reactions to gaze-cues mediate trust learning. Gaze-cueing effects were found to be equivalent regardless of whether participants showed learning of trust in the expected direction or did not. In contrast, we found distinctly different patterns of EMG activity in these two populations. In a further experiment we showed the learning effects were specific to viewing faces, as no changes in liking were detected when viewing arrows that evoked similar attentional orienting responses. These findings implicate embodied emotion in learning trust from identity-contingent gaze-cueing, possibly due to the social value of shared attention or deception rather than domain-general attentional orienting.

  14. Fuzzy Integral-Based Gaze Control of a Robotic Head for Human Robot Interaction.

    Science.gov (United States)

    Yoo, Bum-Soo; Kim, Jong-Hwan

    2015-09-01

    During the last few decades, as part of the effort to enhance natural human robot interaction (HRI), considerable research has been carried out to develop human-like gaze control. However, most studies did not consider hardware implementation, real-time processing, and the real environment, factors that should be taken into account to achieve natural HRI. This paper proposes a fuzzy integral-based gaze control algorithm, operating in real time in a real environment, for a robotic head. We formulate the gaze control as a multicriteria decision making problem and devise seven human gaze-inspired criteria. Partial evaluations of all candidate gaze directions are carried out with respect to the seven criteria defined from perceived visual, auditory, and internal inputs, and fuzzy measures are assigned to a power set of the criteria to reflect the user defined preference. A fuzzy integral of the partial evaluations with respect to the fuzzy measures is employed to make global evaluations of all candidate gaze directions. The global evaluation values are adjusted by applying inhibition of return and are compared with the global evaluation values of the previous gaze directions to decide the final gaze direction. The effectiveness of the proposed algorithm is demonstrated with a robotic head, developed in the Robot Intelligence Technology Laboratory at Korea Advanced Institute of Science and Technology, through three interaction scenarios and three comparison scenarios with another algorithm.
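    The aggregation step, a discrete Choquet (fuzzy) integral of per-criterion evaluations with respect to a fuzzy measure, can be sketched as follows. The two criteria, their scores, and the measure values are illustrative assumptions, not the paper's actual seven criteria or learned measures.

    ```python
    def choquet_integral(scores, mu):
        """Discrete Choquet integral of partial evaluations w.r.t. a fuzzy measure.

        scores: dict criterion -> partial evaluation in [0, 1]
        mu: dict frozenset(subset of criteria) -> measure in [0, 1]
            (only the "tail" subsets of the score ordering are looked up)
        """
        crits = sorted(scores, key=scores.get)  # ascending by evaluation
        total, prev = 0.0, 0.0
        for i, c in enumerate(crits):
            # Weight each increment by the measure of all criteria scoring >= it
            total += (scores[c] - prev) * mu[frozenset(crits[i:])]
            prev = scores[c]
        return total

    # Two stand-in criteria: visual and auditory salience of a gaze direction
    mu = {frozenset({"visual"}): 0.6,
          frozenset({"auditory"}): 0.5,
          frozenset({"visual", "auditory"}): 1.0}

    candidates = {  # candidate gaze direction -> partial evaluations
        "toward speaker": {"visual": 0.8, "auditory": 0.4},
        "toward motion":  {"visual": 0.5, "auditory": 0.9},
    }
    best = max(candidates, key=lambda d: choquet_integral(candidates[d], mu))
    ```

    Because the measure is defined on subsets rather than single criteria, the integral can reward directions that satisfy several criteria jointly, which a plain weighted sum cannot express; the inhibition-of-return adjustment would then be applied on top of these global evaluations.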

  15. Computing eye gaze metrics for the automatic assessment of radiographer performance during X-ray image interpretation.

    Science.gov (United States)

    McLaughlin, Laura; Bond, Raymond; Hughes, Ciara; McConnell, Jonathan; McFadden, Sonyia

    2017-09-01

    To investigate image interpretation performance by diagnostic radiography students, diagnostic radiographers and reporting radiographers by computing eye gaze metrics using eye tracking technology. Three groups of participants were studied during their interpretation of 8 digital radiographic images including the axial and appendicular skeleton, and chest (prevalence of normal images was 12.5%). A total of 464 image interpretations were collected. Participants consisted of 21 radiography students, 19 qualified radiographers and 18 qualified reporting radiographers who were further qualified to report on the musculoskeletal (MSK) system. Eye tracking data was collected using the Tobii X60 eye tracker and subsequently eye gaze metrics were computed. Voice recordings, confidence levels and diagnoses provided a clear demonstration of the image interpretation and the cognitive processes undertaken by each participant. A questionnaire afforded the participants an opportunity to offer information on their experience in image interpretation and their opinion on the eye tracking technology. Reporting radiographers demonstrated a 15% greater accuracy rate (p≤0.001), were more confident (p≤0.001) and took a mean of 2.4s longer to clinically decide on all features compared to students. Reporting radiographers also had a 15% greater accuracy rate (p≤0.001), were more confident (p≤0.001) and took longer to clinically decide on an image diagnosis (p=0.02) than radiographers. Reporting radiographers had a greater mean fixation duration (p=0.01), mean fixation count (p=0.04) and mean visit count (p=0.04) within the areas of pathology compared to students. Eye tracking patterns, presented within heat maps, were a good reflection of group expertise and search strategies. Eye gaze metrics such as time to first fixate, fixation count, fixation duration and visit count within the areas of pathology were indicative of the radiographer's competency. The accuracy and confidence of
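    Metrics of the kind described above (fixation count, dwell time, and visit count within an area of interest such as a region of pathology) can be computed from a temporally ordered fixation list. The rectangular AOI, the helper name, and the sample values below are illustrative assumptions, not the Tobii X60 pipeline used in the study.

    ```python
    def aoi_metrics(fixations, aoi):
        """Fixation count, total dwell time, and visit count inside an AOI.

        fixations: sequence of (x, y, duration_ms) tuples in temporal order
        aoi: (x_min, y_min, x_max, y_max) bounding box of the area of interest
        """
        x0, y0, x1, y1 = aoi
        inside = [(x0 <= x <= x1 and y0 <= y <= y1, d) for x, y, d in fixations]
        count = sum(1 for hit, _ in inside if hit)
        dwell = sum(d for hit, d in inside if hit)
        # A "visit" is a maximal run of consecutive in-AOI fixations
        visits = sum(1 for i, (hit, _) in enumerate(inside)
                     if hit and (i == 0 or not inside[i - 1][0]))
        return count, dwell, visits

    fx = [(10, 10, 200), (12, 11, 180), (90, 90, 250), (11, 9, 300)]
    metrics = aoi_metrics(fx, (0, 0, 20, 20))  # (3, 680, 2)
    ```

    Time to first fixation on the AOI could be derived from the same list by accumulating durations until the first in-AOI hit; comparing such metrics across the three participant groups is what distinguishes expert from novice search behaviour.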

  16. Wolves (Canis lupus) and Dogs (Canis familiaris) Differ in Following Human Gaze Into Distant Space But Respond Similar to Their Packmates’ Gaze

    Science.gov (United States)

    Werhahn, Geraldine; Virányi, Zsófia; Barrera, Gabriela; Sommese, Andrea; Range, Friederike

    2017-01-01

    Gaze following into distant space is defined as visual co-orientation with another individual’s head direction allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills might have been altered through domestication that may have influenced their performance in Study 1. Because following human gaze in dogs might be influenced by special evolutionary as well as developmental adaptations to interactions with humans, we suggest that comparing dogs to other animal species might be more informative when done in intraspecific social contexts. PMID:27244538

  17. Training experience in gestures affects the display of social gaze in baboons' communication with a human.

    Science.gov (United States)

    Bourjade, Marie; Canteloup, Charlotte; Meguerditchian, Adrien; Vauclair, Jacques; Gaunet, Florence

    2015-01-01

    Gaze behaviour, notably the alternation of gaze between distal objects and social partners that accompanies primates' gestural communication, is considered a standard indicator of intentionality. However, the developmental precursors of gaze behaviour in primates' communication are not well understood. Here, we capitalized on the training in gestures dispensed to olive baboons (Papio anubis) as a way of manipulating individual communicative experience with humans. We aimed to delineate the effects of such training experience on the gaze behaviour displayed by the monkeys in relation with gestural requests. Using a food-requesting paradigm, we compared subjects trained in requesting gestures (i.e. trained subjects) to naïve subjects (i.e. control subjects) for their occurrences of (1) gaze behaviour, (2) requesting gestures and (3) temporal combination of gaze alternation with gestures. We found that training did not affect the frequencies of looking at the human's face, looking at food or alternating gaze. Hence, social gaze behaviour occurs independently of the amount of communicative experience with humans. However, trained baboons, which gestured more than control subjects, exhibited most gaze alternation combined with gestures, whereas control baboons did not. By reinforcing the display of gaze alternation along with gestures, we suggest that training may have served to enhance the communicative function of hand gestures. Finally, this study brings the first quantitative report of monkeys producing requesting gestures without explicit training by humans (controls). These results may open a window on the developmental mechanisms (i.e. incidental learning vs. training) underpinning gestural intentional communication in primates.

  18. Recognition of Emotion from Facial Expressions with Direct or Averted Eye Gaze and Varying Expression Intensities in Children with Autism Disorder and Typically Developing Children

    Directory of Open Access Journals (Sweden)

    Dina Tell

    2014-01-01

    Full Text Available Eye gaze direction and expression intensity effects on emotion recognition in children with autism disorder and typically developing children were investigated. Children with autism disorder and typically developing children identified happy and angry expressions equally well. Children with autism disorder, however, were less accurate in identifying fear expressions across intensities and eye gaze directions. Children with autism disorder rated expressions with direct eyes, and 50% expressions, as more intense than typically developing children. A trend was also found for sad expressions, as children with autism disorder were less accurate in recognizing sadness at 100% intensity with direct eyes than typically developing children. Although the present research showed that children with autism disorder are sensitive to eye gaze direction, impairments in the recognition of fear, and possibly sadness, exist. Furthermore, children with autism disorder and typically developing children perceive the intensity of emotional expressions differently.

  19. A new high-speed visual stimulation method for gaze-contingent eye movement and brain activity studies

    Directory of Open Access Journals (Sweden)

    Fabio Richlan

    2013-07-01

    Full Text Available Approaches using eye movements as markers of ongoing brain activity to investigate perceptual and cognitive processes were able to implement highly sophisticated paradigms driven by eye movement recordings. Crucially, these paradigms involve display changes that have to occur during the time of saccadic blindness, when the subject is unaware of the change. Therefore, a combination of high-speed eye tracking and high-speed visual stimulation is required in these paradigms. For combined eye movement and brain activity studies (e.g., fMRI, EEG, MEG), fast and exact timing of display changes is especially important, because of the high susceptibility of these methods to visual stimulation. Eye tracking systems already achieve sampling rates up to 2000 Hz, but recent LCD technologies for computer screens reduced the temporal resolution to mostly 60 Hz, which is too slow for gaze-contingent display changes. We developed a high-speed video projection system, which is capable of reliably delivering display changes within the time frame of < 5 ms. This could not be achieved even with the fastest CRT monitors available (< 16 ms). The present video projection system facilitates the realization of cutting-edge eye movement research requiring reliable high-speed visual stimulation (e.g., gaze-contingent display changes, short-time presentation, masked priming). Moreover, this system can be used for fast visual presentation in order to assess brain activity using various methods, such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). The latter technique was previously excluded from high-speed visual stimulation, because it is not possible to operate conventional CRT monitors in the strong magnetic field of an MRI scanner. Therefore, the present video projection system offers new possibilities for studying eye movement-related brain activity using a combination of eye tracking and fMRI.

  20. Implicit prosody mining based on the human eye image capture technology

    Science.gov (United States)

    Gao, Pei-pei; Liu, Feng

    2013-08-01

    Eye-tracker technology has become one of the main methods for analyzing recognition issues in human-computer interaction, and human eye image capture is the key problem in eye tracking. Based on further research, a new human-computer interaction method is introduced to enrich the forms of speech synthesis. We propose a method of implicit prosody mining based on human eye image capture technology: parameters are extracted from images of the eyes during reading to control and drive prosody generation in speech synthesis, and a prosodic model with high simulation accuracy is established. The duration model is a key issue for prosody generation. For the duration model, this paper puts forward a new idea for obtaining the gaze duration of the eyes during reading based on eye image capture technology, and for synchronously controlling this duration and the pronunciation duration in speech synthesis. The movement of the eyes during reading is a comprehensive, multi-factor interactive process involving gaze, twitching and backsight. Therefore, how to extract the appropriate information from eye images must be considered, and the gaze regularity of the eyes must be obtained as a reference for modeling. Based on an analysis of three current eye movement control models and the characteristics of implicit prosody reading, the relative independence between the text speech processing system and the eye movement control system is discussed. It is shown that, under the same text familiarity condition, the gaze duration of the eyes during reading and the internal voice pronunciation duration are synchronous. An eye gaze duration model based on the prosodic structure of the Chinese language is presented to replace previous methods of machine learning and probability forecasting, to obtain readers' real internal reading rhythm, and to synthesize voice with personalized rhythm. This research will enrich human-computer interactive forms, and will have practical significance and application prospects in terms of

  1. Eye movements reveal epistemic curiosity in human observers.

    Science.gov (United States)

    Baranes, Adrien; Oudeyer, Pierre-Yves; Gottlieb, Jacqueline

    2015-12-01

    Saccadic (rapid) eye movements are primary means by which humans and non-human primates sample visual information. However, while saccadic decisions are intensively investigated in instrumental contexts where saccades guide subsequent actions, it is largely unknown how they may be influenced by curiosity - the intrinsic desire to learn. While saccades are sensitive to visual novelty and visual surprise, no study has examined their relation to epistemic curiosity - interest in symbolic, semantic information. To investigate this question, we tracked the eye movements of human observers while they read trivia questions and, after a brief delay, were visually given the answer. We show that higher curiosity was associated with earlier anticipatory orienting of gaze toward the answer location without changes in other metrics of saccades or fixations, and that these influences were distinct from those produced by variations in confidence and surprise. Across subjects, the enhancement of anticipatory gaze was correlated with measures of trait curiosity from personality questionnaires. Finally, a machine learning algorithm could predict curiosity in a cross-subject manner, relying primarily on statistical features of the gaze position before the answer onset and independently of covariations in confidence or surprise, suggesting potential practical applications for educational technologies, recommender systems and research in cognitive sciences. With this article, we provide full access to the annotated database allowing readers to reproduce the results. Epistemic curiosity produces specific effects on oculomotor anticipation that can be used to read out curiosity states.

  2. Eye Gaze Reveals a Fast, Parallel Extraction of the Syntax of Arithmetic Formulas

    Science.gov (United States)

    Schneider, Elisa; Maruyama, Masaki; Dehaene, Stanislas; Sigman, Mariano

    2012-01-01

    Mathematics shares with language an essential reliance on the human capacity for recursion, permitting the generation of an infinite range of embedded expressions from a finite set of symbols. We studied the role of syntax in arithmetic thinking, a neglected component of numerical cognition, by examining eye movement sequences during the…

  3. Preference for human eyes in human infants.

    Science.gov (United States)

    Dupierrix, Eve; de Boisferon, Anne Hillairet; Méary, David; Lee, Kang; Quinn, Paul C; Di Giorgio, Elisa; Simion, Francesca; Tomonaga, Masaki; Pascalis, Olivier

    2014-07-01

    Despite evidence supporting an early attraction to human faces, the nature of the face representation in neonates and its development during the first year after birth remain poorly understood. One suggestion is that an early preference for human faces reflects an attraction toward human eyes because human eyes are distinctive compared with those of other animals. In accord with this proposal, prior empirical studies have demonstrated the importance of the eye region in face processing in adults and infants. However, an attraction for the human eye has never been shown directly in infants. The current study aimed to investigate whether an attraction for human eyes would be present in newborns and older infants. With the use of a preferential looking time paradigm, newborns and 3-, 6-, 9-, and 12-month-olds were simultaneously presented with a pair of nonhuman primate faces (chimpanzees and Barbary macaques) that differed only by the eyes, thereby pairing a face with original nonhuman primate eyes with the same face in which the eyes were replaced by human eyes. Our results revealed that no preference was observed in newborns, but a preference for nonhuman primate faces with human eyes emerged from 3 months of age and remained stable thereafter. The findings are discussed in terms of how a preference for human eyes may emerge during the first few months after birth.

  4. Co-development of manner and path concepts in language, action, and eye-gaze behavior.

    Science.gov (United States)

    Lohan, Katrin S; Griffiths, Sascha S; Sciutti, Alessandra; Partmann, Tim C; Rohlfing, Katharina J

    2014-07-01

    In order for artificial intelligent systems to interact naturally with human users, they need to be able to learn from human instructions when actions should be imitated. Human tutoring will typically consist of action demonstrations accompanied by speech. In the following, the characteristics of human tutoring during action demonstration will be examined. A special focus will be put on the distinction between two kinds of motion events: path-oriented actions and manner-oriented actions. Such a distinction is inspired by the literature pertaining to cognitive linguistics, which indicates that the human conceptual system can distinguish these two distinct types of motion. These two kinds of actions are described in language by more path-oriented or more manner-oriented utterances. In path-oriented utterances, the source, trajectory, or goal is emphasized, whereas in manner-oriented utterances the medium, velocity, or means of motion are highlighted. We examined a video corpus of adult-child interactions comprised of three age groups of children-pre-lexical, early lexical, and lexical-and two different tasks, one emphasizing manner more strongly and one emphasizing path more strongly. We analyzed the language and motion of the caregiver and the gazing behavior of the child to highlight the differences between the tutoring and the acquisition of the manner and path concepts. The results suggest that age is an important factor in the development of these action categories. The analysis of this corpus has also been exploited to develop an intelligent robotic behavior-the tutoring spotter system-able to emulate children's behaviors in a tutoring situation, with the aim of evoking in human subjects a natural and effective behavior in teaching to a robot. The findings related to the development of manner and path concepts have been used to implement new effective feedback strategies in the tutoring spotter system, which should provide improvements in human

  5. Limitations of gaze transfer: without visual context, eye movements do not help to coordinate joint action, whereas mouse movements do.

    Science.gov (United States)

    Müller, Romy; Helmert, Jens R; Pannasch, Sebastian

    2014-10-01

    Remote cooperation can be improved by transferring the gaze of one participant to the other. However, based on a partner's gaze, an interpretation of his communicative intention can be difficult. Thus, gaze transfer has been inferior to mouse transfer in remote spatial referencing tasks where locations had to be pointed out explicitly. Given that eye movements serve as an indicator of visual attention, it remains to be investigated whether gaze and mouse transfer differentially affect the coordination of joint action when the situation demands an understanding of the partner's search strategies. In the present study, a gaze or mouse cursor was transferred from a searcher to an assistant in a hierarchical decision task. The assistant could use this cursor to guide his movement of a window which continuously opened up the display parts the searcher needed to find the right solution. In this context, we investigated how the ease of using gaze transfer depended on whether a link could be established between the partner's eye movements and the objects he was looking at. Therefore, in addition to the searcher's cursor, the assistant either saw the positions of these objects or only a grey background. When the objects were visible, performance and the number of spoken words were similar for gaze and mouse transfer. However, without them, gaze transfer resulted in longer solution times and more verbal effort as participants relied more strongly on speech to coordinate the window movement. Moreover, an analysis of the spatio-temporal coupling of the transmitted cursor and the window indicated that when no visual object information was available, assistants confidently followed the searcher's mouse but not his gaze cursor. Once again, the results highlight the importance of carefully considering task characteristics when applying gaze transfer in remote cooperation. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Loneliness and the social monitoring system: Emotion recognition and eye gaze in a real-life conversation.

    Science.gov (United States)

    Lodder, Gerine M A; Scholte, Ron H J; Goossens, Luc; Engels, Rutger C M E; Verhagen, Maaike

    2016-02-01

    Based on the belongingness regulation theory (Gardner et al., 2005, Pers. Soc. Psychol. Bull., 31, 1549), this study focuses on the relationship between loneliness and social monitoring. Specifically, we examined whether loneliness relates to performance on three emotion recognition tasks and whether lonely individuals show increased gazing towards their conversation partner's faces in a real-life conversation. Study 1 examined 170 college students (Mage = 19.26; SD = 1.21) who completed an emotion recognition task with dynamic stimuli (morph task) and a micro(-emotion) expression recognition task. Study 2 examined 130 college students (Mage = 19.33; SD = 2.00) who completed the Reading the Mind in the Eyes Test and who had a conversation with an unfamiliar peer while their gaze direction was videotaped. In both studies, loneliness was measured using the UCLA Loneliness Scale version 3 (Russell, 1996, J. Pers. Assess., 66, 20). The results showed that loneliness was unrelated to emotion recognition on all emotion recognition tasks, but that it was related to increased gaze towards their conversation partner's faces. Implications for the belongingness regulation system of lonely individuals are discussed.

  7. Discriminating between intentional and unintentional gaze fixation using multimodal-based fuzzy logic algorithm for gaze tracking system with NIR camera sensor

    Science.gov (United States)

    Naqvi, Rizwan Ali; Park, Kang Ryoung

    2016-06-01

    Gaze tracking systems are widely used in human-computer interfaces, interfaces for the disabled, game interfaces, and for controlling home appliances. Most studies on gaze detection have focused on enhancing its accuracy, whereas few have considered the discrimination of intentional gaze fixation (looking at a target to activate or select it) from unintentional fixation while using gaze detection systems. Previous research methods based on the use of a keyboard or mouse button, eye blinking, and the dwell time of gaze position have various limitations. Therefore, we propose a method for discriminating between intentional and unintentional gaze fixation using a multimodal fuzzy logic algorithm applied to a gaze tracking system with a near-infrared camera sensor. Experimental results show that the proposed method outperforms the conventional method for determining gaze fixation.
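    The abstract does not specify the membership functions or the exact inputs the authors fed to their fuzzy system; as an illustration of the general idea only, a Mamdani-style sketch with hypothetical dwell-time and pupil-change inputs might look like:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def intentional_score(dwell_ms, pupil_change):
    """Combine two fuzzy inputs with AND (= min in Mamdani-style logic):
    a long dwell AND a dilated pupil suggest an intentional fixation."""
    long_dwell = tri(dwell_ms, 300.0, 800.0, 1500.0)  # hypothetical dwell thresholds (ms)
    dilated = tri(pupil_change, 0.0, 0.15, 0.40)      # hypothetical relative pupil change
    return min(long_dwell, dilated)

# A long, pupil-dilated fixation scores higher than a brief glance.
print(intentional_score(900, 0.20) > intentional_score(250, 0.02))  # True
```

    A real system would defuzzify over several such rules; the min-AND shown here is just the simplest multimodal combination consistent with the abstract's description.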

  8. Multi-initialized States Referred Work Parameter Calibration for Gaze Tracking Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Qijie Zhao

    2012-09-01

    Full Text Available In order to adaptively calibrate the work parameters in the infrared‐TV based eye gaze tracking Human‐Robot Interaction (HRI) system, a gaze direction sensing model has been provided for detecting the eye gaze identified parameters. We paid particular attention to situations where the user's head was in a different position relative to the interaction interface. Furthermore, an algorithm for automatically correcting the work parameters of the system has been put forward by defining certain initial reference system states and analysing the historical information of the interaction between a user and the system. Moreover, considering some application cases and factors, and relying on minimum error rate Bayesian decision‐making theory, a mechanism for identifying the system state and adaptively calibrating parameters has been proposed. Finally, experiments have been done with the established system and the results suggest that the proposed mechanism and algorithm can identify the system work state in multiple situations, and can automatically correct the work parameters to meet the demands of a gaze tracking HRI system.

  9. A Wide-View Parallax-Free Eye-Mark Recorder with a Hyperboloidal Half-Silvered Mirror and Appearance-Based Gaze Estimation.

    Science.gov (United States)

    Mori, Hiroki; Sumiya, Erika; Mashita, Tomohiro; Kiyokawa, Kiyoshi; Takemura, Haruo

    2011-07-01

    In this paper, we propose a wide-view parallax-free eye-mark recorder with a hyperboloidal half-silvered mirror and a gaze estimation method suitable for the device. Our eye-mark recorder provides a wide field-of-view video recording of the user's exact view by positioning the focal point of the mirror at the user's viewpoint. The vertical angle of view of the prototype is 122 degrees (elevation and depression angles are 38 and 84 degrees, respectively) and its horizontal view angle is 116 degrees (nasal and temporal view angles are 38 and 78 degrees, respectively). We implemented and evaluated a gaze estimation method for our eye-mark recorder. We use an appearance-based approach for our eye-mark recorder to support a wide field-of-view. We apply principal component analysis (PCA) and multiple regression analysis (MRA) to determine the relationship between the captured images and their corresponding gaze points. Experimental results verify that our eye-mark recorder successfully captures a wide field-of-view of a user and estimates gaze direction with an angular accuracy of around 2 to 4 degrees.
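    The appearance-based pipeline described here (PCA on captured eye images, then multiple regression from component scores to gaze points) can be sketched as follows; the synthetic data, dimensions, and linear appearance model are stand-ins, not the authors' setup:

```python
import numpy as np

# Synthetic stand-in data: each row is a flattened eye-image patch whose
# appearance varies with a low-dimensional latent state; gaze depends
# linearly on that state.
rng = np.random.default_rng(1)
n_samples, n_pixels, n_components = 200, 64, 5
latent = rng.standard_normal((n_samples, n_components))
images = latent @ rng.standard_normal((n_components, n_pixels))
images += 0.01 * rng.standard_normal((n_samples, n_pixels))   # sensor noise
gaze = latent[:, :2] @ np.array([[30.0, 0.0], [0.0, 20.0]])   # (x, y) gaze points

# PCA: project mean-centred images onto the leading right-singular vectors.
mean = images.mean(axis=0)
_, _, vt = np.linalg.svd(images - mean, full_matrices=False)
scores = (images - mean) @ vt[:n_components].T

# MRA: least-squares regression from PCA scores (plus intercept) to gaze.
design = np.hstack([scores, np.ones((n_samples, 1))])
coef, *_ = np.linalg.lstsq(design, gaze, rcond=None)

rmse = np.sqrt(np.mean((design @ coef - gaze) ** 2))
print(rmse < 1.0)  # True: the linear appearance-to-gaze map is recovered
```

    At test time, a new eye image would be mean-centred, projected onto the stored components, and pushed through the regression coefficients to yield a gaze estimate.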

  10. Stabilization of gaze during circular locomotion in darkness. II. Contribution of velocity storage to compensatory eye and head nystagmus in the running monkey

    Science.gov (United States)

    Solomon, D.; Cohen, B.

    1992-01-01

    1. Yaw eye in head (Eh) and head on body velocities (Hb) were measured in two monkeys that ran around the perimeter of a circular platform in darkness. The platform was stationary or could be counterrotated to reduce body velocity in space (Bs) while increasing gait velocity on the platform (Bp). The animals were also rotated while seated in a primate chair at eccentric locations to provide linear and angular accelerations similar to those experienced while running. 2. Both animals had head and eye nystagmus while running in darkness during which slow phase gaze velocity on the body (Gb) partially compensated for body velocity in space (Bs). The eyes, driven by the vestibuloocular reflex (VOR), supplied high-frequency characteristics, bringing Gb up to compensatory levels at the beginning and end of the slow phases. The head provided substantial gaze compensation during the slow phases, probably through the vestibulocollic reflex (VCR). Synchronous eye and head quick phases moved gaze in the direction of running. Head movements occurred consistently only when animals were running. This indicates that active body and limb motion may be essential for inducing the head-eye gaze synergy. 3. Gaze compensation was good when running in both directions in one animal and in one direction in the other animal. The animals had long VOR time constants in these directions. The VOR time constant was short to one side in one animal, and it had poor gaze compensation in this direction. Postlocomotory nystagmus was weaker after running in directions with a long VOR time constant than when the animals were passively rotated in darkness. We infer that velocity storage in the vestibular system had been activated to produce continuous Eh and Hb during running and to counteract postrotatory afterresponses. 4. Continuous compensatory gaze nystagmus was not produced by passive eccentric rotation with the head stabilized or free. This indicates that an aspect of active locomotion, most

  11. Towards emotion modeling based on gaze dynamics in generic interfaces

    DEFF Research Database (Denmark)

    Vester-Christensen, Martin; Leimberg, Denis; Ersbøll, Bjarne Kjær

    2005-01-01

    Gaze detection can be a useful ingredient in generic human computer interfaces if current technical barriers are overcome. We discuss the feasibility of concurrent posture and eye-tracking in the context of single (low cost) camera imagery. The ingredients in the approach are posture and eye region extraction based on active appearance modeling and eye tracking using a new fast and robust heuristic. The eye tracker is shown to perform well for low resolution image segments, hence, making it feasible to estimate gaze using a single generic camera.

  13. Temporal Structure of Human Gaze Dynamics Is Invariant During Free Viewing.

    Directory of Open Access Journals (Sweden)

    Colleen A Marlow

    Full Text Available We investigate the dynamic structure of human gaze and present an experimental study of the frequency components of the change in gaze position over time during free viewing of computer-generated fractal images. We show that changes in gaze position are scale-invariant in time with statistical properties that are characteristic of a random walk process. We quantify and track changes in the temporal structure using a well-defined scaling parameter called the Hurst exponent, H. We find H is robust regardless of the spatial complexity generated by the fractal images. In addition, we find the Hurst exponent is invariant across all participants, including those with distinct changes to higher order visual processes due to neural degeneration. The value we find for H of 0.57 shows that the gaze dynamics during free viewing of fractal images are consistent with a random walk process with persistent movements. Our research suggests the human visual system may have a common strategy that drives the dynamics of human gaze during exploration.

  14. Temporal Structure of Human Gaze Dynamics Is Invariant During Free Viewing.

    Science.gov (United States)

    Marlow, Colleen A; Viskontas, Indre V; Matlin, Alisa; Boydston, Cooper; Boxer, Adam; Taylor, Richard P

    2015-01-01

    We investigate the dynamic structure of human gaze and present an experimental study of the frequency components of the change in gaze position over time during free viewing of computer-generated fractal images. We show that changes in gaze position are scale-invariant in time with statistical properties that are characteristic of a random walk process. We quantify and track changes in the temporal structure using a well-defined scaling parameter called the Hurst exponent, H. We find H is robust regardless of the spatial complexity generated by the fractal images. In addition, we find the Hurst exponent is invariant across all participants, including those with distinct changes to higher order visual processes due to neural degeneration. The value we find for H of 0.57 shows that the gaze dynamics during free viewing of fractal images are consistent with a random walk process with persistent movements. Our research suggests the human visual system may have a common strategy that drives the dynamics of human gaze during exploration.
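    A minimal rescaled-range (R/S) estimate of the Hurst exponent, the scaling parameter H this study uses, can be sketched as follows; the dyadic windowing scheme is a textbook choice, not necessarily the authors' exact procedure:

```python
import numpy as np

def hurst_rs(series, min_window=8):
    """Estimate the Hurst exponent H by rescaled-range (R/S) analysis:
    the slope of log(mean R/S) against log(window size)."""
    x = np.asarray(series, dtype=float)
    sizes, rs_means = [], []
    size = min_window
    while size <= len(x) // 2:
        rs = []
        for i in range(len(x) // size):           # non-overlapping windows
            seg = x[i * size:(i + 1) * size]
            dev = np.cumsum(seg - seg.mean())     # cumulative deviation profile
            if seg.std() > 0:
                rs.append((dev.max() - dev.min()) / seg.std())
        sizes.append(size)
        rs_means.append(np.mean(rs))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope

# Uncorrelated gaze-position changes give H near 0.5 (ordinary random walk);
# the persistent dynamics reported in the study gave H around 0.57.
rng = np.random.default_rng(0)
changes = rng.standard_normal(4096)
print(round(hurst_rs(changes), 2))
```

    Applied to a real recording, `series` would be the second-by-second change in gaze position rather than simulated noise.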

  15. Gazing toward humans: a study on water rescue dogs using the impossible task paradigm.

    Science.gov (United States)

    D'Aniello, Biagio; Scandurra, Anna; Prato-Previde, Emanuela; Valsecchi, Paola

    2015-01-01

    Various studies have assessed the role of life experiences, including learning opportunities, living conditions and the quality of dog-human relationships, in the use of human cues and problem-solving ability. The current study investigates how and to what extent training affects the behaviour of dogs and the communication of dogs with humans by comparing dogs trained for a water rescue service and untrained pet dogs in the impossible task paradigm. Twenty-three certified water rescue dogs (the water rescue group) and 17 dogs with no training experience (the untrained group) were tested using a modified version of the impossible task described by Marshall-Pescini et al. in 2009. The results demonstrated that the water rescue dogs directed their first gaze significantly more often towards the owner and spent more time gazing toward the two people compared to the untrained pet dogs. There was no difference between the two groups in the amount of time spent gazing at the owner or the stranger, nor in interaction with the apparatus while attempting to obtain the food. The specific training regime, aimed at promoting cooperation during the performance of water rescue, could account for the longer gazing behaviour shown toward people by the water rescue dogs and the priority of gazing toward the owner.

  16. Effect of narrowing the base of support on the gait, gaze and quiet eye of elite ballet dancers and controls.

    Science.gov (United States)

    Panchuk, Derek; Vickers, Joan N

    2011-08-01

    We determined the gaze and stepping behaviours of elite ballet dancers and controls as they walked normally and along progressively narrower 3-m lines (10.0, 2.5 cm). The ballet dancers delayed the first step and then stepped more quickly through the approach area and onto the lines, which they exited more slowly than the controls, who stepped immediately but then slowed their gait to navigate the line, which they exited faster. Contrary to predictions, the ballet group did not step more precisely, perhaps due to the unique anatomical requirements of ballet dance and/or due to releasing the degrees of freedom under their feet as they fixated ahead more than the controls. The ballet group used significantly fewer fixations of longer duration, and their final quiet eye (QE) duration prior to stepping on the line was significantly longer (2,353.39 ms) than the controls' (1,327.64 ms). The control group favoured a proximal gaze strategy, allocating 73.33% of their QE fixations to the line/off the line and 26.66% to the exit/visual straight ahead (VSA), while the ballet group favoured a 'look-ahead' strategy, allocating 55.49% of their QE fixations to the exit/VSA and 44.51% to the line/off the line. The results are discussed in the light of the development of expertise and the enhanced role of fixations and visual attention when tasks become more constrained.

  17. Mother-infant mutual eye gaze supports emotion regulation in infancy during the Still-Face paradigm.

    Science.gov (United States)

    MacLean, Peggy C; Rynes, Kristina N; Aragón, Crystal; Caprihan, Arvind; Phillips, John P; Lowe, Jean R

    2014-11-01

    This study was designed to examine the sequential relationship between mother-infant synchrony and infant affect using multilevel modeling during the Still-Face paradigm. We also examined the self-regulatory behaviors that infants use during the Still-Face paradigm to modulate their affect, particularly during stressors where their mothers are not available to help them co-regulate. There were 84 mother-infant dyads of healthy full-term 4-month-old infants. Second-by-second coding of infant self-regulation and infant affect was done, in addition to mother-infant mutual eye gaze. Using multilevel modeling, we found that infant affect became more positive when mutual gaze had occurred the previous second, suggesting that the experience of synchronicity was associated with observable shifts in affect. We also found a positive association between self-regulatory behaviors and increases in positive affect only during the Still-Face episode (episode 2). Our study provides support for the role of mother-infant synchronicity in emotion regulation, as well as support for the role of self-regulatory behaviors in emotion regulation, which can have important implications for intervention.

  18. Gaze perception in social anxiety and social anxiety disorder

    Directory of Open Access Journals (Sweden)

    Lars eSchulze

    2013-12-01

    Full Text Available Clinical observations suggest abnormal gaze perception to be an important indicator of social anxiety disorder (SAD). Experimental research has so far paid relatively little attention to the study of gaze perception in SAD. In this article we first discuss gaze perception in healthy human beings before reviewing self-referential and threat-related biases of gaze perception in clinical and non-clinical socially anxious samples. Relative to controls, socially anxious individuals exhibit an enhanced self-directed perception of gaze direction and demonstrate a pronounced fear of direct eye contact, though findings are less consistent regarding the avoidance of mutual gaze in SAD. Prospects for future research and clinical implications are discussed.

  19. Gaze and hand position effects on finger-movement-related human brain activation.

    Science.gov (United States)

    Bédard, Patrick; Sanes, Jerome N

    2009-02-01

    Humans commonly use their hands to move and to interact with their environment by processing visual and proprioceptive information to determine the location of a goal-object and the initial hand position. It remains elusive, however, how the human brain fully uses this sensory information to generate accurate movements. In monkeys, it appears that frontal and parietal areas use and combine gaze and hand signals to generate movements, whereas in humans, prior work has separately assessed how the brain uses these two signals. Here we investigated whether and how the human brain integrates gaze orientation and hand position during simple visually triggered finger tapping. We hypothesized that parietal, frontal, and subcortical regions involved in movement production would also exhibit modulation of movement-related activation as a function of gaze and hand positions. We used functional MRI to measure brain activation while healthy young adults performed a visually cued finger movement and fixed gaze at each of three locations and held the arm in two different configurations. We found several areas that exhibited activation related to a mixture of these hand and gaze positions; these included the sensory-motor cortex, supramarginal gyrus, superior parietal lobule, superior frontal gyrus, anterior cingulate, and left cerebellum. We also found regions within the left insula, left cuneus, left midcingulate gyrus, left putamen, and right tempo-occipital junction with activation driven only by gaze orientation. Finally, clusters with hand position effects were found in the cerebellum bilaterally. Our results indicate that these areas integrate at least two signals to perform visual-motor actions and that these could be used to subserve sensory-motor transformations.

  20. The late positive potential indexes a role for emotion during learning of trust from eye-gaze cues.

    OpenAIRE

    Manssuer, Luis; Roberts, Mark; Tipper, Steven Paul

    2015-01-01

    Gaze direction perception triggers rapid visuospatial orienting to the location observed by others. When this is congruent with the location of a target, reaction times are faster than when incongruent. Functional magnetic resonance imaging studies suggest that the non-joint attention induced by incongruent cues is experienced as more emotionally negative, and this could relate to less favorable trust judgments of the faces when gaze-cues are contingent with identity. Here, we provide further...

  1. Do pet dogs (Canis familiaris) follow ostensive and non-ostensive human gaze to distant space and to objects?

    Science.gov (United States)

    Range, Friederike; Virányi, Zsófia

    2017-01-01

    Dogs are renowned for being skilful at using human-given communicative cues such as pointing. Results are contradictory, however, when it comes to dogs' following human gaze, probably due to methodological discrepancies. Here we investigated whether dogs follow human gaze to one of two food locations better than into distant space even after comparable pre-training. In Experiments 1 and 2, the gazing direction of dogs was recorded in a gaze-following task into distant space and in an object-choice task where no choice was allowed, in order to allow a direct comparison between tasks, varying the ostensive nature of the gazes. We found that dogs only followed repeated ostensive human gaze into distant space, whereas they followed all gaze cues in the object-choice task. Dogs followed human gaze better in the object-choice task than when there was no obvious target to look at. In Experiment 3, dogs were tested in another object-choice task and were allowed to approach a container. Ostensive cues facilitated the dogs' following gaze with gaze as well as their choices: we found that dogs in the ostensive group chose the indicated container at chance level, whereas they avoided this container in the non-ostensive group. We propose that dogs may perceive the object-choice task as a competition over food and may interpret non-ostensive gaze as an intentional cue that indicates the experimenter's interest in the food location she has looked at. Whether ostensive cues simply mitigate the competitive perception of this situation or they alter how dogs interpret communicative gaze needs further investigation. Our findings also show that following gaze with one's gaze and actually choosing one of the two containers in an object-choice task need to be considered as different variables. The present study clarifies a number of questions related to gaze-following in dogs and adds to a growing body of evidence showing that human ostensive cues can strongly modify dog behaviour.

  2. The Expressive Gaze Model: Using Gaze to Express Emotion

    Science.gov (United States)

    2010-07-01

    Early animators realized that the eyes are an important aspect of the human face regarding the communication and expression of emotion (F. Thomas and O. Johnston, Disney Animation: The Illusion of Life, Abbeville Press, 1981). They also found that properly animating believable, emotionally expressive gaze is extremely difficult. If it's done improperly

  3. Sociability and gazing toward humans in dogs and wolves: Simple behaviors with broad implications.

    Science.gov (United States)

    Bentosela, Mariana; Wynne, C D L; D'Orazio, M; Elgier, A; Udell, M A R

    2016-01-01

    Sociability, defined as the tendency to approach and interact with unfamiliar people, has been found to modulate some communicative responses in domestic dogs, including gaze behavior toward the human face. The objective of this study was to compare sociability and gaze behavior in pet domestic dogs and in human-socialized captive wolves in order to identify the relative influence of domestication and learning in the development of the dog-human bond. In Experiment 1, we assessed the approach behavior and social tendencies of dogs and wolves to a familiar and an unfamiliar person. In Experiment 2, we compared the animal's duration of gaze toward a person's face in the presence of food, which the animals could see but not access. Dogs showed higher levels of interspecific sociability than wolves in all conditions, including those where attention was unavailable. In addition, dogs gazed longer at the person's face than wolves in the presence of out-of-reach food. The potential contributions of domestication, associative learning, and experiences during ontogeny to prosocial behavior toward humans are discussed. © 2016 Society for the Experimental Analysis of Behavior.

  4. Beyond eye gaze: What else can eyetracking reveal about cognition and cognitive development?

    Science.gov (United States)

    Eckstein, Maria K; Guerra-Carrillo, Belén; Miller Singley, Alison T; Bunge, Silvia A

    2016-11-11

    This review provides an introduction to two eyetracking measures that can be used to study cognitive development and plasticity: pupil dilation and spontaneous blink rate. We begin by outlining the rich history of gaze analysis, which can reveal the current focus of attention as well as cognitive strategies. We then turn to the two lesser-utilized ocular measures. Pupil dilation is modulated by the brain's locus coeruleus-norepinephrine system, which controls physiological arousal and attention, and has been used as a measure of subjective task difficulty, mental effort, and neural gain. Spontaneous eyeblink rate correlates with levels of dopamine in the central nervous system, and can reveal processes underlying learning and goal-directed behavior. Taken together, gaze, pupil dilation, and blink rate are three non-invasive and complementary measures of cognition with high temporal resolution and well-understood neural foundations. Here we review the neural foundations of pupil dilation and blink rate, provide examples of their usage, describe analytic methods and methodological considerations, and discuss their potential for research on learning, cognitive development, and plasticity.

  5. EyeFrame: Real-time memory aid improves human multitasking via domain-general eye tracking procedures

    Directory of Open Access Journals (Sweden)

    P. eTaylor

    2015-09-01

    Full Text Available OBJECTIVE: We developed an extensively general closed-loop system to improve human interaction in various multitasking scenarios with semi-autonomous agents, processes, and robots. BACKGROUND: Much technology is converging toward semi-independent processes with intermittent human supervision distributed over multiple computerized agents. Human operators multitask notoriously poorly, in part due to cognitive load and limited working memory. To multitask optimally, users must remember task order, e.g., the most neglected task, since longer times not monitoring an element indicate a greater probability of need for user input. The secondary task of monitoring attention history over multiple spatial tasks requires cognitive resources similar to the primary tasks themselves. Humans cannot reliably make more than ~2 decisions/s. METHODS: Participants managed a range of 4-10 semi-autonomous agents performing rescue tasks. To optimize monitoring and controlling multiple agents, we created an automated short-term memory aid, providing visual cues from users' gaze history. Cues indicated when and where to look next, and were derived from an inverse of eye fixation recency. RESULTS: Contingent eye tracking algorithms drastically improved operator performance, increasing multitasking capacity. The gaze aid reduced biases and reduced cognitive load, as measured by smaller pupil dilation. CONCLUSIONS: Our eye aid likely helped by delegating short-term memory to the computer and by reducing decision-making load. Past studies used eye position for gaze-aware control and interactive updating of displays in application-specific scenarios, but ours is the first to successfully implement domain-general algorithms.
Procedures should generalize well to: process control, factory operations, robot control, surveillance, aviation, air traffic control, driving, military, mobile search and rescue, and many tasks where probability of utility is predicted by duration since last
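The abstract's cueing heuristic (visual cues derived from an inverse of eye fixation recency) can be sketched as follows; the data layout and function name are assumptions for illustration, not the authors' implementation:

```python
def next_cue(last_fixation_time, now):
    """Pick the agent to cue next: the one least recently fixated.
    last_fixation_time maps agent id -> timestamp (s) of the user's most
    recent fixation on that agent; the 'inverse of fixation recency' is
    taken here to be simply the elapsed time since that fixation."""
    neglect = {agent: now - t for agent, t in last_fixation_time.items()}
    return max(neglect, key=neglect.get)

fixations = {"agent_1": 12.0, "agent_2": 3.5, "agent_3": 9.8}
print(next_cue(fixations, now=15.0))  # agent_2 has been neglected longest
```

In a closed-loop system this would run continuously, with `last_fixation_time` updated from the eye tracker's fixation events and the cue rendered at the chosen agent's location.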

  6. Eye-gaze independent EEG-based brain-computer interfaces for communication

    Science.gov (United States)

    Riccio, A.; Mattia, D.; Simione, L.; Olivetti, M.; Cincotti, F.

    2012-08-01

    The present review systematically examines the literature reporting gaze independent interaction modalities in non-invasive brain-computer interfaces (BCIs) for communication. BCIs measure signals related to specific brain activity and translate them into device control signals. This technology can be used to provide users with severe motor disability (e.g. late stage amyotrophic lateral sclerosis (ALS); acquired brain injury) with an assistive device that does not rely on muscular contraction. Most of the studies on BCIs explored mental tasks and paradigms using visual modality. Considering that in ALS patients the oculomotor control can deteriorate and also other potential users could have impaired visual function, tactile and auditory modalities have been investigated over the past years to seek alternative BCI systems which are independent from vision. In addition, various attentional mechanisms, such as covert attention and feature-directed attention, have been investigated to develop gaze independent visual-based BCI paradigms. Three areas of research were considered in the present review: (i) auditory BCIs, (ii) tactile BCIs and (iii) independent visual BCIs. Out of a total of 130 search results, 34 articles were selected on the basis of pre-defined exclusion criteria. Thirteen articles dealt with independent visual BCIs, 15 with auditory BCIs, and six with tactile BCIs. From the review of the available literature, it can be concluded that a crucial point is represented by the trade-off between BCI systems/paradigms with high accuracy and speed, but highly demanding in terms of attention and memory load, and systems requiring lower cognitive effort but with a limited amount of communicable information. These issues should be considered as priorities to be explored in future studies to meet users’ requirements in a real-life scenario.

  7. Look Together: Analyzing Gaze Coordination with Epistemic Network Analysis

    Directory of Open Access Journals (Sweden)

    Sean eAndrist

    2015-07-01

    Full Text Available When conversing and collaborating in everyday situations, people naturally and interactively align their behaviors with each other across various communication channels, including speech, gesture, posture, and gaze. Having access to a partner's referential gaze behavior has been shown to be particularly important in achieving collaborative outcomes, but the process in which people's gaze behaviors unfold over the course of an interaction and become tightly coordinated is not well understood. In this paper, we present work to develop a deeper and more nuanced understanding of coordinated referential gaze in collaborating dyads. We recruited 13 dyads to participate in a collaborative sandwich-making task and used dual mobile eye tracking to synchronously record each participant's gaze behavior. We used a relatively new analysis technique—epistemic network analysis—to jointly model the gaze behaviors of both conversational participants. In this analysis, network nodes represent gaze targets for each participant, and edge strengths convey the likelihood of simultaneous gaze to the connected target nodes during a given time-slice. We divided collaborative task sequences into discrete phases to examine how the networks of shared gaze evolved over longer time windows. We conducted three separate analyses of the data to reveal (1) properties and patterns of how gaze coordination unfolds throughout an interaction sequence, (2) optimal time lags of gaze alignment within a dyad at different phases of the interaction, and (3) differences in gaze coordination patterns for interaction sequences that lead to breakdowns and repairs. In addition to contributing to the growing body of knowledge on the coordination of gaze behaviors in joint activities, this work has implications for the design of future technologies that engage in situated interactions with human users.
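A toy version of the edge-weight computation described above (edge strength as the likelihood of simultaneous gaze to a pair of targets within a time slice) might look like this; the alignment of both recordings into per-slice target labels is assumed to have been done already:

```python
from collections import Counter

def cogaze_edges(gaze_a, gaze_b):
    """Edge strengths for a simple co-gaze network: given two participants'
    per-time-slice gaze targets (aligned lists of equal length), return the
    fraction of slices in which participant A looked at target x while B
    looked at target y. A toy analogue of the epistemic-network edge weights."""
    pairs = Counter(zip(gaze_a, gaze_b))
    n = len(gaze_a)
    return {pair: count / n for pair, count in pairs.items()}

# four time slices from a hypothetical sandwich-making dyad
a = ["bread", "bread", "knife", "partner"]
b = ["bread", "knife", "knife", "partner"]
edges = cogaze_edges(a, b)
print(edges[("bread", "bread")])  # 0.25
print(edges[("knife", "knife")])  # 0.25
```

Epistemic network analysis additionally normalizes and compares these networks across phases and dyads; this sketch covers only the raw co-occurrence step.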

  8. Single gaze gestures

    DEFF Research Database (Denmark)

    Møllenbach, Emilie; Lilholm, Martin; Gail, Alastair

    2010-01-01

    This paper examines gaze gestures and their applicability as a generic selection method for gaze-only controlled interfaces. The method explored here is the Single Gaze Gesture (SGG), i.e. gestures consisting of a single point-to-point eye movement. Horizontal and vertical, long and short SGGs were evaluated on two eye tracking devices (Tobii/QuickGlance (QG)). The main findings show that there is a significant difference in selection times between long and short SGGs, between vertical and horizontal selections, as well as between the different tracking systems.
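For illustration, a single point-to-point movement can be binned into the four SGG types the paper compares roughly as follows; the pixel threshold is an arbitrary assumption, not a value from the study:

```python
def classify_sgg(start, end, long_threshold_px=300):
    """Label a single gaze gesture (one point-to-point eye movement) as
    horizontal/vertical and long/short. start and end are (x, y) gaze
    positions in screen pixels; the length threshold is illustrative."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    orientation = "horizontal" if abs(dx) >= abs(dy) else "vertical"
    length = "long" if max(abs(dx), abs(dy)) >= long_threshold_px else "short"
    return f"{length} {orientation}"

print(classify_sgg((100, 200), (520, 210)))  # long horizontal
print(classify_sgg((100, 200), (110, 350)))  # short vertical
```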

  9. Optics of the human cornea influence the accuracy of stereo eye-tracking methods: a simulation study.

    Science.gov (United States)

    Barsingerhorn, A D; Boonstra, F N; Goossens, H H L M

    2017-02-01

    Current stereo eye-tracking methods model the cornea as a sphere with one refractive surface. However, the human cornea is slightly aspheric and has two refractive surfaces. Here we used ray-tracing and the Navarro eye-model to study how these optical properties affect the accuracy of different stereo eye-tracking methods. We found that pupil size, gaze direction and head position all influence the reconstruction of gaze. Resulting errors range between ± 1.0 degrees at best. This shows that stereo eye-tracking may be an option if reliable calibration is not possible, but the applied eye-model should account for the actual optics of the cornea.

  10. An eye-tracking method to reveal the link between gazing patterns and pragmatic abilities in high functioning autism spectrum disorders.

    Science.gov (United States)

    Grynszpan, Ouriel; Nadel, Jacqueline

    2014-01-01

    The present study illustrates the potential advantages of an eye-tracking method for exploring the association between visual scanning of faces and inferences of mental states. Participants watched short videos involving social interactions and had to explain what they had seen. The number of cognition verbs (e.g., think, believe, know) in their answers was counted. Given the possible use of peripheral vision that could confound eye-tracking measures, we added a condition using a gaze-contingent viewing window: the entire visual display is blurred, except for an area that moves with the participant's gaze. Eleven typical adults and eleven high functioning adults with Autism Spectrum Disorders (ASD) were recruited. The condition employing the viewing window yielded strong correlations between the average duration of fixations, the ratio of cognition verbs and standard measures of social disabilities.

  11. An eye-tracking method to reveal the link between gazing patterns and pragmatic abilities in high functioning autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Ouriel eGrynszpan

    2015-01-01

    Full Text Available The present study illustrates the potential advantages of an eye-tracking method for exploring the association between visual scanning of faces and inferences of mental states. Participants watched short videos involving social interactions and had to explain what they had seen. The number of cognition verbs (e.g. think, believe, know) in their answers was counted. Given the possible use of peripheral vision that could confound eye-tracking measures, we added a condition using a gaze-contingent viewing window: the entire visual display is blurred, except for an area that moves with the participant’s gaze. Eleven typical adults and eleven high functioning adults with ASD were recruited. The condition employing the viewing window yielded strong correlations between the average duration of fixations, the ratio of cognition verbs and standard measures of social disabilities.
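The gaze-contingent viewing window described in both records (everything blurred except a region that moves with the participant's gaze) can be sketched with array masking; the 3x3 box blur and circular window below are simplifications for illustration, not the study's implementation:

```python
import numpy as np

def gaze_contingent_window(image, gaze_xy, radius):
    """Blur everything except a circular window centred on the current gaze
    point. image is a 2D float array; gaze_xy is (x, y) in pixel coordinates.
    A naive 3x3 box blur is applied to the interior (borders stay sharp)."""
    blurred = image.astype(float).copy()
    blurred[1:-1, 1:-1] = sum(
        image[1 + dy:image.shape[0] - 1 + dy, 1 + dx:image.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    window = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= radius ** 2
    return np.where(window, image.astype(float), blurred)

img = np.zeros((9, 9))
img[4, 4] = 9.0  # a single bright pixel
out_far = gaze_contingent_window(img, gaze_xy=(1, 1), radius=1.5)
out_on = gaze_contingent_window(img, gaze_xy=(4, 4), radius=1.5)
print(out_far[4, 4])  # 1.0: the bright pixel lies outside the window, so it is blurred
print(out_on[4, 4])   # 9.0: inside the window, the pixel is left sharp
```

In the experiment, the window position would be updated every frame from the eye tracker's latest gaze sample, so peripheral vision can never pick up sharp detail.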

  12. Exploring associations between gaze patterns and putative human mirror neuron system activity

    Directory of Open Access Journals (Sweden)

    Peter Hugh Donaldson

    2015-07-01

    Full Text Available The human mirror neuron system (MNS) is hypothesised to be crucial to social cognition. Given that key MNS-input regions such as the superior temporal sulcus are involved in biological motion processing, and mirror neuron activity in monkeys has been shown to vary with visual attention, aberrant MNS function may be partly attributable to atypical visual input. To examine the relationship between gaze pattern and interpersonal motor resonance (IMR; an index of putative MNS activity), healthy right-handed participants aged 18-40 (n = 26) viewed videos of transitive grasping actions or static hands, whilst the left primary motor cortex received transcranial magnetic stimulation (TMS). Motor-evoked potentials (MEPs) recorded in contralateral hand muscles were used to determine IMR. Participants also underwent eyetracking analysis to assess gaze patterns whilst viewing the same videos. No relationship was observed between predictive gaze (PG) and IMR. However, IMR was positively associated with fixation counts in areas of biological motion in the videos, and negatively associated with object areas. These findings are discussed with reference to visual influences on the MNS, and the possibility that MNS atypicalities might be influenced by visual processes such as aberrant gaze pattern.

  13. Vestibulo-Ocular Reflex Suppression during Head-Fixed Saccades Reveals Gaze Feedback Control

    OpenAIRE

    Daye, Pierre M.; Dale C Roberts; David S Zee; Optican, Lance M.

    2015-01-01

    Previous experiments have shown that the vestibulo-ocular reflex (VOR) is partially suppressed during large head-free gaze (gaze = eye-in-head + head-in-space) shifts when both the eyes and head are moving actively, on a fixed body, or when the eyes are moving actively and the head passively on a fixed body. We tested, in human subjects, the hypothesis that the VOR is also suppressed during gaze saccades made with en bloc, head and body together, rotations. Subjects made saccades by following...

  14. A Regression-based User Calibration Framework for Real-time Gaze Estimation

    OpenAIRE

    Arar, Nuri Murat; Gao, Hua; Thiran, Jean-Philippe

    2016-01-01

    Eye movements play a very significant role in human computer interaction (HCI) as they are natural and fast, and contain important cues for human cognitive state and visual attention. Over the last two decades, many techniques have been proposed to accurately estimate the gaze. Among these, video-based remote eye trackers have attracted much interest since they enable non-intrusive gaze estimation. To achieve high estimation accuracies for remote systems, user calibration is inevitable in ord...
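A common form of the regression-based calibration this abstract refers to maps eye features (e.g. pupil-glint vectors) to screen coordinates with a low-order polynomial fitted on calibration targets. A sketch under that assumption (the second-order basis is conventional, not necessarily the authors' choice):

```python
import numpy as np

def design_matrix(eye_xy):
    """Second-order polynomial basis [1, x, y, xy, x^2, y^2] per sample."""
    x, y = eye_xy[:, 0], eye_xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

def fit_calibration(eye_xy, screen_xy):
    """Least-squares fit of the polynomial mapping from eye features to
    screen coordinates; returns a (6, 2) coefficient matrix."""
    coef, *_ = np.linalg.lstsq(design_matrix(eye_xy), screen_xy, rcond=None)
    return coef

def predict(coef, eye_xy):
    return design_matrix(eye_xy) @ coef

# synthetic 9-point calibration grid with a known affine ground truth
eye = np.array([[i, j] for i in (-1.0, 0.0, 1.0) for j in (-1.0, 0.0, 1.0)])
screen = eye @ np.array([[400.0, 0.0], [0.0, 300.0]]) + np.array([960.0, 540.0])
coef = fit_calibration(eye, screen)
print(np.round(predict(coef, np.array([[0.5, -0.5]]))))  # -> [[1160.  390.]]
```

With nine calibration points the six-term basis is overdetermined, which is why real systems use similar grids; more points per target average out fixation noise.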

  15. Ubiquitous gaze: using gaze at the interface

    NARCIS (Netherlands)

    Heylen, Dirk; Aghajan, Hamid; López-Cózar Delgado, Ramón; Augusto, Juan Carlos

    2010-01-01

    In the quest for more natural forms of interaction between humans and machines, information on where someone is looking and how (for how long, with long or shorter gaze periods) plays a prominent part. The importance of gaze in social interaction, its manifold functions and expressive force, and the

  16. How does the topic of conversation affect verbal exchange and eye gaze? A comparison between typical development and high-functioning autism.

    Science.gov (United States)

    Nadig, Aparna; Lee, Iris; Singh, Leher; Bosshart, Kyle; Ozonoff, Sally

    2010-07-01

    Conversation is a primary area of difficulty for individuals with high-functioning autism (HFA) although they have unimpaired formal language abilities. This likely stems from the unstructured nature of face-to-face conversation as well as the need to coordinate other modes of communication (e.g. eye gaze) with speech. We conducted a quantitative analysis of both verbal exchange and gaze data obtained from conversations between children with HFA and an adult, compared with those of typically developing children matched on language level. We examined a new question: how does speaking about a topic of interest affect reciprocity of verbal exchange and eye gaze? Conversations on generic topics were compared with those on individuals' circumscribed interests, particularly intense interests characteristic of HFA. Two opposing hypotheses were evaluated. Speaking about a topic of interest may improve reciprocity in conversation by increasing participants' motivation and engagement. Alternatively, it could engender more one-sided interaction, given the engrossing nature of circumscribed interests. In their verbal exchanges HFA participants demonstrated decreased reciprocity during the interest topic, evidenced by fewer contingent utterances and more monologue-style speech. Moreover, a measure of stereotyped behaviour and restricted interest symptoms was inversely related to reciprocal verbal exchange. However, both the HFA and comparison groups looked significantly more to their partner's face during the interest than generic topic. Our interpretation of results across modalities is that circumscribed interests led HFA participants to be less adaptive to their partner verbally, but speaking about a highly practiced topic allowed for increased gaze to the partner. The function of this increased gaze to partner may differ for the HFA and comparison groups.

  17. Trends and Techniques in Visual Gaze Analysis

    CERN Document Server

    Stellmach, Sophie; Dachselt, Raimund; Lindley, Craig A

    2010-01-01

    Visualizing gaze data is an effective way for the quick interpretation of eye tracking results. This paper presents a study investigating the benefits and limitations of visual gaze analysis among eye tracking professionals and researchers. The results were used to create a tool for visual gaze analysis within a Master's project.

  18. Eye Movement Training and Suggested Gaze Strategies in Tunnel Vision - A Randomized and Controlled Pilot Study.

    Science.gov (United States)

    Ivanov, Iliya V; Mackeben, Manfred; Vollmer, Annika; Martus, Peter; Nguyen, Nhung X; Trauzettel-Klosinski, Susanne

    2016-01-01

    Degenerative retinal diseases, especially retinitis pigmentosa (RP), lead to severe peripheral visual field loss (tunnel vision), which impairs mobility. The lack of peripheral information leads to fewer horizontal eye movements and, thus, diminished scanning in RP patients in a natural environment walking task. This randomized controlled study aimed to improve mobility and the dynamic visual field by applying a compensatory Exploratory Saccadic Training (EST). Oculomotor responses during walking and avoiding obstacles in a controlled environment were studied before and after saccade or reading training in 25 RP patients. Eye movements were recorded using a mobile infrared eye tracker (Tobii glasses) that measured a range of spatial and temporal variables. Patients were randomly assigned to two training conditions: Saccade (experimental) and reading (control) training. All subjects who first performed reading training underwent experimental training later (waiting list control group). To assess the effect of training on subjects, we measured performance in the training task and the following outcome variables related to daily life: Response Time (RT) during exploratory saccade training, Percent Preferred Walking Speed (PPWS), the number of collisions with obstacles, eye position variability, fixation duration, and the total number of fixations including the ones in the subjects' blind area of the visual field. In the saccade training group, RTs on average decreased, while the PPWS significantly increased. The improvement persisted, as tested 6 weeks after the end of the training. On average, the eye movement range of RP patients before and after training was similar to that of healthy observers. In both, the experimental and reading training groups, we found many fixations outside the subjects' seeing visual field before and after training. The average fixation duration was significantly shorter after the training, but only in the experimental training condition

  19. I spy with my little eye: Analysis of airline pilots' gaze patterns in a manual instrument flight scenario.

    Science.gov (United States)

    Haslbeck, Andreas; Zhang, Bo

    2017-09-01

    The aim of this study was to analyze pilots' visual scanning in a manual approach and landing scenario. Manual flying skills suffer from increasing use of automation. In addition, predominantly long-haul pilots with only a few opportunities to practice these skills experience this decline. Airline pilots representing different levels of practice (short-haul vs. long-haul) had to perform a manual raw data precision approach while their visual scanning was recorded by an eye-tracking device. The analysis of gaze patterns, which are based on predominant saccades, revealed one main group of saccades among long-haul pilots. In contrast, short-haul pilots showed more balanced scanning using two different groups of saccades. Short-haul pilots generally demonstrated better manual flight performance and within this group, one type of scan pattern was found to facilitate the manual landing task more. Long-haul pilots tend to utilize visual scanning behaviors that are inappropriate for the manual ILS landing task. This lack of skills needs to be addressed by providing specific training and more practice. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Gazing-Based Non-Wearable and Natural Human-Computer Interaction

    Institute of Scientific and Technical Information of China (English)

    王佳雯; 管业鹏

    2016-01-01

    A novel non-wearable and natural human-computer interaction (HCI) method based on eye gaze is proposed. Drawing on human biological structure characteristics, an active shape model is employed to locate feature points on the eye contour, a histogram of eye features is built in the HSV color space, and a particle filter is used to track and locate the eye. A 2D eye geometric model is constructed from a maximal triangulation of the eye contour features, and inter-frame median filtering determines a stable gaze target, realizing non-wearable HCI in which the user can move with flexibility, comfort, and freedom. Comparative experiments verify that the proposed method is effective and feasible.

  1. Mechanisms of Empathic Behavior in Children with Callous-Unemotional Traits: Eye Gaze and Emotion Recognition

    OpenAIRE

    Delk, Lauren Annabel

    2016-01-01

    The presence of callous-unemotional (CU) traits (e.g., shallow affect, lack of empathy) in children predicts reduced prosocial behavior. Similarly, CU traits relate to emotion recognition deficits, which may be related to deficits in visual attention to the eye region of others. Notably, recognition of others' distress necessarily precedes sympathy, and sympathy is a key predictor in prosocial outcomes. Thus, visual attention and emotion recognition may mediate the relationship between CU tra...

  2. Goats display audience-dependent human-directed gazing behaviour in a problem-solving task.

    Science.gov (United States)

    Nawroth, Christian; Brett, Jemma M; McElligott, Alan G

    2016-07-01

    Domestication is an important factor driving changes in animal cognition and behaviour. In particular, the capacity of dogs to communicate in a referential and intentional way with humans is considered a key outcome of how domestication as a companion animal shaped the canid brain. However, the lack of comparison with other domestic animals makes general conclusions about how domestication has affected these important cognitive features difficult. We investigated human-directed behaviour in an 'unsolvable problem' task in a domestic, but non-companion species: goats. During the test, goats experienced a forward-facing or an away-facing person. They gazed towards the forward-facing person earlier and for longer and showed more gaze alternations and a lower latency until the first gaze alternation when the person was forward-facing. Our results provide strong evidence for audience-dependent human-directed visual orienting behaviour in a species that was domesticated primarily for production, and show similarities with the referential and intentional communicative behaviour exhibited by domestic companion animals such as dogs and horses. This indicates that domestication has a much broader impact on heterospecific communication than previously believed.

  3. A Proactive Approach of Robotic Framework for Making Eye Contact with Humans

    Directory of Open Access Journals (Sweden)

    Mohammed Moshiul Hoque

    2014-01-01

    Full Text Available Making eye contact is an important prerequisite for humans to initiate a conversation with others. However, it is not an easy task for a robot to make eye contact with a human if they are not facing each other initially or the human is intensely engaged in his/her task. If the robot would like to start communication with a particular person, it should turn its gaze to that person and make eye contact with him/her. However, such a turning action alone is not enough to set up an eye contact phenomenon in all cases. Therefore, the robot should perform some stronger actions in some situations so that it can attract the target person before meeting his/her gaze. In this paper, we proposed a conceptual model of eye contact for social robots consisting of two phases: capturing attention and ensuring the attention capture. Evaluation experiments with human participants reveal the effectiveness of the proposed model in four viewing situations, namely, central field of view, near peripheral field of view, far peripheral field of view, and out of field of view.

  4. Eye Movement Training and Suggested Gaze Strategies in Tunnel Vision - A Randomized and Controlled Pilot Study.

    Directory of Open Access Journals (Sweden)

    Iliya V Ivanov

    Full Text Available Degenerative retinal diseases, especially retinitis pigmentosa (RP), lead to severe peripheral visual field loss (tunnel vision), which impairs mobility. The lack of peripheral information leads to fewer horizontal eye movements and, thus, diminished scanning in RP patients in a natural environment walking task. This randomized controlled study aimed to improve mobility and the dynamic visual field by applying a compensatory Exploratory Saccadic Training (EST). Oculomotor responses during walking and avoiding obstacles in a controlled environment were studied before and after saccade or reading training in 25 RP patients. Eye movements were recorded using a mobile infrared eye tracker (Tobii glasses) that measured a range of spatial and temporal variables. Patients were randomly assigned to two training conditions: Saccade (experimental) and reading (control) training. All subjects who first performed reading training underwent experimental training later (waiting list control group). To assess the effect of training on subjects, we measured performance in the training task and the following outcome variables related to daily life: Response Time (RT) during exploratory saccade training, Percent Preferred Walking Speed (PPWS), the number of collisions with obstacles, eye position variability, fixation duration, and the total number of fixations including the ones in the subjects' blind area of the visual field. In the saccade training group, RTs on average decreased, while the PPWS significantly increased. The improvement persisted, as tested 6 weeks after the end of the training. On average, the eye movement range of RP patients before and after training was similar to that of healthy observers. In both, the experimental and reading training groups, we found many fixations outside the subjects' seeing visual field before and after training. The average fixation duration was significantly shorter after the training, but only in the experimental training

  5. Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting.

    Science.gov (United States)

    Atabaki, Artin; Marciniak, Karolina; Dicke, Peter W; Thier, Peter

    2015-07-01

    Despite the ecological importance of gaze following, little is known about the underlying neuronal processes, which allow us to extract gaze direction from the geometric features of the eye and head of a conspecific. In order to understand the neuronal mechanisms underlying this ability, a careful description of the capacity and the limitations of gaze following at the behavioral level is needed. Previous studies of gaze following that relied on naturalistic settings have the disadvantage of allowing only very limited control of potentially relevant visual features guiding gaze following, such as the contrast of iris and sclera and the shape of the eyelids; moreover, in the case of photographs, the stimuli lack depth. Hence, in order to get full control of potentially relevant features we decided to study gaze following of human observers guided by the gaze of a human avatar seen stereoscopically. To this end we established a stereoscopic 3D virtual reality setup, in which we tested human subjects' abilities to detect which target a human avatar was looking at. Following the gaze of the avatar showed all the features of the gaze following of a natural person, namely a substantial degree of precision associated with a consistent pattern of systematic deviations from the target. Poor stereo vision affected performance surprisingly little (only in certain experimental conditions). Only gaze following guided by targets at larger downward eccentricities exhibited a differential effect of the presence or absence of accompanying movements of the avatar's eyelids and eyebrows.

  6. Same-sex gaze attraction influences mate-choice copying in humans.

    Science.gov (United States)

    Yorzinski, Jessica L; Platt, Michael L

    2010-02-09

    Mate-choice copying occurs when animals rely on the mating choices of others to inform their own mating decisions. The proximate mechanisms underlying mate-choice copying remain unknown. To address this question, we tracked the gaze of men and women as they viewed a series of photographs in which a potential mate was pictured beside an opposite-sex partner; the participants then indicated their willingness to engage in a long-term relationship with each potential mate. We found that both men and women expressed more interest in engaging in a relationship with a potential mate if that mate was paired with an attractive partner. Men and women's attention to partners varied with partner attractiveness and this gaze attraction influenced their subsequent mate choices. These results highlight the prevalence of non-independent mate choice in humans and implicate social attention and reward circuitry in these decisions.

  7. Vestibulo-ocular reflex suppression during head-fixed saccades reveals gaze feedback control.

    Science.gov (United States)

    Daye, Pierre M; Roberts, Dale C; Zee, David S; Optican, Lance M

    2015-01-21

    Previous experiments have shown that the vestibulo-ocular reflex (VOR) is partially suppressed during large head-free gaze (gaze = eye-in-head + head-in-space) shifts when both the eyes and head are moving actively, on a fixed body, or when the eyes are moving actively and the head passively on a fixed body. We tested, in human subjects, the hypothesis that the VOR is also suppressed during gaze saccades made with en bloc, head and body together, rotations. Subjects made saccades by following a target light. During some trials, the chair rotated so as to move the entire body passively before, during, or after a saccade. The modulation of the VOR was a function of both saccade amplitude and the time of the head perturbation relative to saccade onset. Despite the perturbation, gaze remained accurate. Thus, VOR modulation is similar when gaze changes are programmed for the eyes alone or for the eyes and head moving together. We propose that the brain always programs a change in gaze using feedback based on gaze and head signals, rather than on separate eye and head trajectories.

  8. A Model of the Human Eye

    Science.gov (United States)

    Colicchia, G.; Wiesner, H.; Waltner, C.; Zollman, D.

    2008-01-01

    We describe a model of the human eye that incorporates a variable converging lens. The model can be easily constructed by students with low-cost materials. It shows in a comprehensible way the functionality of the eye's optical system. Images of near and far objects can be focused. Also, the defects of near and farsighted eyes can be demonstrated.
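The optics such a model demonstrates reduce to the thin-lens relation. A worked example, using an illustrative 2 cm lens-to-retina distance for a reduced eye, shows why focusing near and far objects requires the lens to change power (accommodation):

```python
def lens_power_diopters(object_distance_m, image_distance_m=0.02):
    """Thin-lens relation 1/f = 1/d_o + 1/d_i for a reduced eye whose
    retina sits ~2 cm behind the lens. Returns the required lens power
    in diopters (1/f, with f in metres). Distances are illustrative."""
    return 1.0 / object_distance_m + 1.0 / image_distance_m

far = lens_power_diopters(10.0)    # a distant object
near = lens_power_diopters(0.25)   # typical reading distance
print(round(far, 1), round(near, 1))  # 50.1 54.0
print(round(near - far, 1))           # 3.9: roughly 4 D of accommodation
```

A nearsighted eye corresponds to a lens that cannot relax below the power needed for distant objects; a farsighted eye to one that cannot reach the near-object power, which is what the model's variable converging lens lets students demonstrate.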

  9. Objects capture perceived gaze direction.

    Science.gov (United States)

    Lobmaier, Janek S; Fischer, Martin H; Schwaninger, Adrian

    2006-01-01

    The interpretation of another person's eye gaze is a key element of social cognition. Previous research has established that this ability develops early in life and is influenced by the person's head orientation, as well as local features of the person's eyes. Here we show that the presence of objects in the attended space also has an impact on gaze interpretation. Eleven normal adults identified the fixation points of photographed faces with a mouse cursor. Their responses were systematically biased toward the locations of nearby objects. This capture of perceived gaze direction probably reflects the attribution of intentionality and has methodological implications for research on gaze perception.

  10. Gaze-controlled Driving

    DEFF Research Database (Denmark)

    Tall, Martin; Alapetite, Alexandre; San Agustin, Javier

    2009-01-01

    We investigate if the gaze (point of regard) can control a remote vehicle driving on a racing track. Five different input devices (on-screen buttons, mouse pointing, a low-cost webcam eye tracker and two commercial eye tracking systems) provide heading and speed control on the scene view transmitted...

  11. Patterns of Visual Attention and Gaze to Human and Animal Faces in Children with Autism Spectrum Disorders

    Directory of Open Access Journals (Sweden)

    Servet Bayram

    2012-12-01

    Full Text Available The aim of the study is to investigate the patterns of visual attention and gaze to familiar female/male faces and animal faces in high-functioning children with Autism Spectrum Disorders (ASD). Seven children with ASD and ten (10) typically developing (TD) children participated in this study. To collect data, an eye-tracking system was used while participants looked at visual stimuli. According to the results of the study, high-functioning children with ASD have a deficiency in extracting relevant social information from the eyes, even when the faces are familiar to them, but they use information from the eye region in face exploration more than from the other parts of the faces. In addition, children with ASD seem to present gaze patterns similar to those of TD children during face exploration.

  12. 基于标记点检测的视线跟踪注视点估计%Eye Tracking Gaze Estimation Based on Marker Detection

    Institute of Scientific and Technical Information of China (English)

    龚秀锋; 李斌; 邓宏平; 张文聪

    2011-01-01

    For head-mounted eye gaze tracking, an additional head-position sensor is usually needed to determine the gaze direction. In this paper, a new method based on marker detection is proposed to estimate the point of gaze for a head-mounted system. The markers are detected by computer vision methods, and the relationship between the scene image and the computer screen is constructed from point correspondences in the two views. The point of gaze in the scene image is then mapped to computer-screen coordinates. Experimental results show that this method can easily estimate the point of gaze in the real scene, irrespective of the user's head position.%传统的头戴式视线跟踪系统需要借助额外的头部位置跟踪器或其他辅助设备才能定位视线方向.针对该问题,提出一种基于标记点检测的注视点估计方法.该方法通过计算机视觉的方法检测标记点,建立场景图像与真实场景中计算机屏幕之间的空间关系,将场景图像中的注视点坐标映射到计算机屏幕中.实验结果表明,该方法简单易行,可以较好地估计出用户在真实场景中的注视点坐标.
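The mapping step this abstract describes (scene-image gaze point to screen coordinates via point correspondences between two views) amounts to applying a planar homography. The sketch below is an illustrative assumption, not the authors' code: the 3x3 matrix `H` would normally be estimated from the detected marker correspondences; here it is a made-up translation-only homography.

```python
# Hedged sketch of the homography mapping step: once marker correspondences
# yield a 3x3 homography H between the scene image and the screen, a gaze
# point (x, y) in the scene image maps to screen coordinates (u, v).

def apply_homography(H, point):
    """Map an (x, y) point through a 3x3 homography given as nested lists."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]          # projective scale
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w    # screen x
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w    # screen y
    return (u, v)

# Assumed example matrix: identity plus a (100, 50) pixel translation.
H = [[1.0, 0.0, 100.0],
     [0.0, 1.0, 50.0],
     [0.0, 0.0, 1.0]]

print(apply_homography(H, (320.0, 240.0)))  # -> (420.0, 290.0)
```

In practice `H` would be fitted from at least four marker correspondences (e.g., with OpenCV's `cv2.findHomography`) and re-applied to every detected gaze point.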

  13. The visual development of hand-centered receptive fields in a neural network model of the primate visual system trained with experimentally recorded human gaze changes.

    Science.gov (United States)

    Galeazzi, Juan M; Navajas, Joaquín; Mender, Bedeho M W; Quian Quiroga, Rodrigo; Minini, Loredana; Stringer, Simon M

    2016-01-01

    Neurons have been found in the primate brain that respond to objects in specific locations in hand-centered coordinates. A key theoretical challenge is to explain how such hand-centered neuronal responses may develop through visual experience. In this paper we show how hand-centered visual receptive fields can develop using an artificial neural network model, VisNet, of the primate visual system when driven by gaze changes recorded from human test subjects as they completed a jigsaw. A camera mounted on the head captured images of the hand and jigsaw, while eye movements were recorded using an eye-tracking device. This combination of data allowed us to reconstruct the retinal images seen as humans undertook the jigsaw task. These retinal images were then fed into the neural network model during self-organization of its synaptic connectivity using a biologically plausible trace learning rule. A trace learning mechanism encourages neurons in the model to learn to respond to input images that tend to occur in close temporal proximity. In the data recorded from human subjects, we found that the participant's gaze often shifted through a sequence of locations around a fixed spatial configuration of the hand and one of the jigsaw pieces. In this case, trace learning should bind these retinal images together onto the same subset of output neurons. The simulation results consequently confirmed that some cells learned to respond selectively to the hand and a jigsaw piece in a fixed spatial configuration across different retinal views.

  14. Rapid target foraging with reach or gaze: The hand looks further ahead than the eye.

    Science.gov (United States)

    Diamond, Jonathan S; Wolpert, Daniel M; Flanagan, J Randall

    2017-07-01

    Real-world tasks typically consist of a series of target-directed actions and often require choices about which targets to act on and in what order. Such choice behavior can be assessed from an optimal foraging perspective whereby target selection is shaped by a balance between rewards and costs. Here we evaluated such decision-making in a rapid movement foraging task. On a given trial, participants were presented with 15 targets of varying size and value and were instructed to harvest as much reward as possible by either moving a handle to the targets (hand task) or by briefly fixating them (eye task). The short trial duration enabled participants to harvest about half the targets, ensuring that total reward was due to choice behavior. We developed a probabilistic model to predict target-by-target harvesting choices that considered the rewards and movement-related costs (i.e., target distance and size) associated with the current target as well as future targets. In the hand task, in comparison to the eye task, target choice was more strongly influenced by movement-related costs and took into account a greater number of future targets, consistent with the greater costs associated with arm movement. In both tasks, participants exhibited near-optimal behavior, and in a constrained version of the hand task in which choices could only be based on target positions, participants consistently chose among the shortest movement paths. Our results demonstrate that people can rapidly and effectively integrate values and movement-related costs associated with current and future targets when sequentially harvesting targets.
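A probabilistic choice model of the kind this abstract describes (reward traded off against movement cost) is often written as a softmax over utilities. The sketch below is an assumed minimal form, not the authors' fitted model; the `cost_weight` and `temperature` parameters and the reward/distance values are illustrative.

```python
# Hedged sketch of a foraging choice rule: the probability of selecting each
# remaining target rises with its reward and falls with its movement cost
# (here, distance), combined through a softmax over utilities.
import math

def choice_probs(rewards, distances, cost_weight=1.0, temperature=1.0):
    """Softmax choice probabilities over utility = reward - cost_weight*distance."""
    utils = [r - cost_weight * d for r, d in zip(rewards, distances)]
    exps = [math.exp(u / temperature) for u in utils]
    z = sum(exps)
    return [e / z for e in exps]

# Three hypothetical targets: high-value/far, low-value/near, medium/middle.
p = choice_probs(rewards=[5.0, 1.0, 3.0], distances=[4.0, 0.5, 2.0])
print(p)
```

Raising `cost_weight` makes the model behave more like the hand task in the abstract, where distance weighed more heavily on choices than in the eye task.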

  15. The late positive potential indexes a role for emotion during learning of trust from eye-gaze cues

    OpenAIRE

    Manssuer, Luis R.; Roberts, Mark V.; Tipper, Steven P.

    2015-01-01

    Gaze direction perception triggers rapid visuospatial orienting to the location observed by others. When this is congruent with the location of a target, reaction times are faster than when incongruent. Functional magnetic resonance imaging studies suggest that the non-joint attention induced by incongruent cues is experienced as more emotionally negative, and this could relate to less favorable trust judgments of the faces when gaze-cues are contingent with identity. Here, we provide further...

  16. Learning to predict where human gaze is using quaternion DCT based regional saliency detection

    Science.gov (United States)

    Li, Ting; Xu, Yi; Zhang, Chongyang

    2014-09-01

    Many current visual attention approaches use semantic features to accurately capture human gaze. However, these approaches demand high computational cost and can hardly be applied to daily use. Recently, some quaternion-based saliency detection models, such as PQFT (phase spectrum of Quaternion Fourier Transform) and QDCT (Quaternion Discrete Cosine Transform), have been proposed to meet the real-time requirement of human gaze tracking tasks. However, these methods apply PQFT and QDCT globally to locate jump edges in the input, and so can hardly detect object boundaries accurately. To address this problem, we improve the QDCT-based saliency detection model by introducing a superpixel-wise regional saliency detection mechanism. The local smoothness of the saliency value distribution is emphasized to distinguish background noise from salient regions. Our measure, called saliency confidence, distinguishes the patches belonging to the salient object from those of the background and decides whether image patches belong to the same region: when an image patch belongs to a region consisting of other salient patches, this patch should be salient as well. We therefore use the saliency confidence map to derive background and foreground weights and optimize the saliency map obtained by QDCT; the optimization is accomplished by a least-squares method. The proposed optimization unifies local and global saliency by combining QDCT with a similarity measure between image superpixels. We evaluate our model on four commonly used datasets (Toronto, MIT, OSIE and ASD) using standard precision-recall curves (PR curves), the mean absolute error (MAE) and area-under-curve (AUC) measures. In comparison with most state-of-the-art models, our approach achieves higher consistency with human perception without training, predicts human gaze accurately even in cluttered backgrounds, and achieves a better compromise between speed and accuracy.

  17. Making Gazes Explicit: Facilitating Epistemic Access in the Humanities

    Science.gov (United States)

    Luckett, Kathy; Hunma, Aditi

    2014-01-01

    This paper addresses the problem of curriculum design in the Humanities and Social Sciences, and more specifically the challenge of designing foundation courses for first-generation or "disadvantaged" learners. Located in the social realist school of the sociology of education studies that builds on the legacy of Basil Bernstein, we…

  18. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation.

    Science.gov (United States)

    Choe, Kyoung Whan; Blake, Randolph; Lee, Sang-Hun

    2016-01-01

    Video-based eye tracking relies on locating the pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze-position shifts of up to several degrees in visual angle, because pupil centration can change without eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil-size artifact is prevalent and weakens the quality of video-based eye-tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small and large scales, and that these pupil contractions were accompanied by systematic errors in gaze-position estimation in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze-position measurements were substantially improved, enabling differentiation of a 0.1° difference in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and their tight relationship with gaze-position estimates when observers were engaged in a demanding visual discrimination task.
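"Regressing out" pupil size from a gaze trace, as described above, can be sketched as an ordinary least-squares fit of gaze position on pupil size, keeping the residuals as the corrected signal. This is a minimal assumed implementation, not the authors' code, and the sample values below are synthetic.

```python
# Hedged sketch: fit gaze = a*pupil + b by least squares, then subtract the
# fitted component so that pupil-linked drift is removed from the gaze trace.

def regress_out(pupil, gaze):
    """Return gaze residuals after removing the linear pupil-size component."""
    n = len(pupil)
    mp = sum(pupil) / n
    mg = sum(gaze) / n
    cov = sum((p - mp) * (g - mg) for p, g in zip(pupil, gaze))
    var = sum((p - mp) ** 2 for p in pupil)
    a = cov / var            # slope: degrees of apparent gaze shift per mm
    b = mg - a * mp          # intercept
    return [g - (a * p + b) for p, g in zip(pupil, gaze)]

pupil = [3.0, 3.2, 3.4, 3.6]      # synthetic pupil diameters (mm)
gaze = [0.10, 0.21, 0.29, 0.40]   # synthetic gaze x-positions (deg)
residuals = regress_out(pupil, gaze)
print(max(abs(r) for r in residuals))  # small: drift is mostly pupil-linked
```

On real data the regression would typically be fitted per eye and per axis, and only within fixation periods.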

  19. 视觉搜索任务中直视探测优势的眼动研究%The Detection Superiority of Perceived Direct Gaze in Visual Search Task:Evidence from Eye Movements

    Institute of Scientific and Technical Information of China (English)

    胡中华; 赵光; 刘强; 李红

    2012-01-01

    Previous studies have reported that a straight-gaze target embedded among averted-gaze distracters is detected faster and more accurately than an averted-gaze target among straight-gaze distracters. This detection superiority of perceived direct gaze has been termed "the stare-in-the-crowd effect". The effect could be explained by a direct gaze capturing visual-spatial attention more effectively than an averted gaze. However, it is also possible that the stimulus-matching process is faster and easier under the direct-gaze condition than under the averted-gaze condition; this explanation has not been tested in previous studies. In addition, head orientation has been found to affect the detection of gaze direction, but it is not clear how. In view of this, we used an eye-tracking approach and divided the detection of gaze direction into three behavioral epochs: the preparation, search and response epochs. We investigated: (1) in which epoch the detection advantage of direct gaze occurred, and whether a more effective stimulus-matching process under the direct-gaze condition contributed to the stare-in-the-crowd effect, alongside the visual-spatial attention capture by direct gaze; (2) how head orientation affected the detection of gaze direction, and in which visual search epoch this effect was mainly manifested. We used a visual search task with two factors: gaze direction (direct gaze; averted gaze) and head orientation (frontal head; deviated head). Subjects were instructed to detect as accurately and quickly as possible whether the target gaze direction was present or not. Sixteen volunteers participated in the experiment (6 males and 10 females). 
Behavioral results showed that direct-gaze targets were detected more rapidly and accurately than averted-gaze targets. Eye movement analysis found: the detection

  20. The late positive potential indexes a role for emotion during learning of trust from eye-gaze cues.

    Science.gov (United States)

    Manssuer, Luis R; Roberts, Mark V; Tipper, Steven P

    2015-01-01

    Gaze direction perception triggers rapid visuospatial orienting to the location observed by others. When this is congruent with the location of a target, reaction times are faster than when incongruent. Functional magnetic resonance imaging studies suggest that the non-joint attention induced by incongruent cues is experienced as more emotionally negative, and this could relate to less favorable trust judgments of the faces when gaze-cues are contingent with identity. Here, we provide further support for these findings using time-resolved event-related potentials. In addition to replicating the effects of identity-contingent gaze-cues on reaction times and trust judgments, we discovered that the emotion-related late positive potential increased across blocks to incongruent compared to congruent faces before, during and after the gaze-cue, suggesting both learning and retrieval of emotion states associated with the face. We also discovered that the face-recognition-related N250 component appeared to localize to sources in anterior temporal areas. Our findings provide unique electrophysiological evidence for the role of emotion in learning trust from gaze-cues, suggesting that the retrieval of face evaluations during interaction may take around 1000 ms and that the N250 originates from anterior temporal face patches.

  1. Judgments at Gaze Value: Gaze Cuing in Banner Advertisements, Its Effect on Attention Allocation and Product Judgments.

    Science.gov (United States)

    Palcu, Johanna; Sudkamp, Jennifer; Florack, Arnd

    2017-01-01

    Banner advertising is a popular means of promoting products and brands online. Although banner advertisements are often designed to be particularly attention grabbing, they frequently go unnoticed. Applying an eye-tracking procedure, the present research aimed to (a) determine whether presenting human faces (static or animated) in banner advertisements is an adequate tool for capturing consumers' attention and thus overcoming the frequently observed phenomenon of banner blindness, (b) examine whether the gaze of a featured face possesses the ability to direct consumers' attention toward specific elements (i.e., the product) in an advertisement, and (c) establish whether the gaze direction of an advertised face influences consumers' subsequent evaluation of the advertised product. We recorded participants' eye gaze while they viewed a fictional online shopping page displaying banner advertisements that featured either no human face or a human face that was either static or animated and involved different gaze directions (toward or away from the advertised product). Moreover, we asked participants to subsequently evaluate a set of products, one of which was the product previously featured in the banner advertisement. Results showed that, when advertisements included a human face, participants' attention was more attracted by and they looked longer at animated compared with static banner advertisements. Moreover, when a face gazed toward the product region, participants' likelihood of looking at the advertised product increased regardless of whether the face was animated or not. Most important, gaze direction influenced subsequent product evaluations; that is, consumers indicated a higher intention to buy a product when it was previously presented in a banner advertisement that featured a face that gazed toward the product. The results suggest that while animation in banner advertising constitutes a salient feature that captures consumers' visual attention, gaze

  2. Judgments at Gaze Value: Gaze Cuing in Banner Advertisements, Its Effect on Attention Allocation and Product Judgments

    Directory of Open Access Journals (Sweden)

    Johanna Palcu

    2017-06-01

    Full Text Available Banner advertising is a popular means of promoting products and brands online. Although banner advertisements are often designed to be particularly attention grabbing, they frequently go unnoticed. Applying an eye-tracking procedure, the present research aimed to (a) determine whether presenting human faces (static or animated) in banner advertisements is an adequate tool for capturing consumers’ attention and thus overcoming the frequently observed phenomenon of banner blindness, (b) examine whether the gaze of a featured face possesses the ability to direct consumers’ attention toward specific elements (i.e., the product) in an advertisement, and (c) establish whether the gaze direction of an advertised face influences consumers’ subsequent evaluation of the advertised product. We recorded participants’ eye gaze while they viewed a fictional online shopping page displaying banner advertisements that featured either no human face or a human face that was either static or animated and involved different gaze directions (toward or away from the advertised product). Moreover, we asked participants to subsequently evaluate a set of products, one of which was the product previously featured in the banner advertisement. Results showed that, when advertisements included a human face, participants’ attention was more attracted by and they looked longer at animated compared with static banner advertisements. Moreover, when a face gazed toward the product region, participants’ likelihood of looking at the advertised product increased regardless of whether the face was animated or not. Most important, gaze direction influenced subsequent product evaluations; that is, consumers indicated a higher intention to buy a product when it was previously presented in a banner advertisement that featured a face that gazed toward the product. The results suggest that while animation in banner advertising constitutes a salient feature that captures consumers

  3. Judgments at Gaze Value: Gaze Cuing in Banner Advertisements, Its Effect on Attention Allocation and Product Judgments

    Science.gov (United States)

    Palcu, Johanna; Sudkamp, Jennifer; Florack, Arnd

    2017-01-01

    Banner advertising is a popular means of promoting products and brands online. Although banner advertisements are often designed to be particularly attention grabbing, they frequently go unnoticed. Applying an eye-tracking procedure, the present research aimed to (a) determine whether presenting human faces (static or animated) in banner advertisements is an adequate tool for capturing consumers’ attention and thus overcoming the frequently observed phenomenon of banner blindness, (b) examine whether the gaze of a featured face possesses the ability to direct consumers’ attention toward specific elements (i.e., the product) in an advertisement, and (c) establish whether the gaze direction of an advertised face influences consumers’ subsequent evaluation of the advertised product. We recorded participants’ eye gaze while they viewed a fictional online shopping page displaying banner advertisements that featured either no human face or a human face that was either static or animated and involved different gaze directions (toward or away from the advertised product). Moreover, we asked participants to subsequently evaluate a set of products, one of which was the product previously featured in the banner advertisement. Results showed that, when advertisements included a human face, participants’ attention was more attracted by and they looked longer at animated compared with static banner advertisements. Moreover, when a face gazed toward the product region, participants’ likelihood of looking at the advertised product increased regardless of whether the face was animated or not. Most important, gaze direction influenced subsequent product evaluations; that is, consumers indicated a higher intention to buy a product when it was previously presented in a banner advertisement that featured a face that gazed toward the product. The results suggest that while animation in banner advertising constitutes a salient feature that captures consumers’ visual

  4. Gaze categorization under uncertainty: psychophysics and modeling.

    Science.gov (United States)

    Mareschal, Isabelle; Calder, Andrew J; Dadds, Mark R; Clifford, Colin W G

    2013-04-22

    The accurate perception of another person's gaze direction underlies most social interactions and provides important information about his or her future intentions. As a first step to measuring gaze perception, most experiments determine the range of gaze directions that observers judge as being direct: the cone of direct gaze. This measurement has revealed the flexibility of observers' perception of gaze and provides a useful benchmark against which to test clinical populations with abnormal gaze behavior. Here, we manipulated effective signal strength by adding noise to the eyes of synthetic face stimuli or removing face information. We sought to move beyond a descriptive account of gaze categorization by fitting a model to the data that relies on changing the uncertainty associated with an estimate of gaze direction as a function of the signal strength. This model accounts for all the data and provides useful insight into the visual processes underlying normal gaze perception.
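The model this abstract describes (a gaze-direction estimate whose uncertainty grows as signal strength drops) can be sketched with a simple assumed form: a Gaussian-noise estimate judged "direct" when it falls inside a fixed cone. The cone half-width and noise levels below are illustrative, not the authors' fitted parameters.

```python
# Hedged sketch of gaze categorization under uncertainty: a gaze angle is
# judged "direct" when a noisy internal estimate lands inside a fixed cone;
# larger sigma (weaker signal) effectively widens the cone of direct gaze.
import math

def p_direct(true_angle_deg, sigma_deg, cone_halfwidth_deg=5.0):
    """P(noisy estimate falls inside the cone), Gaussian estimation noise."""
    def cdf(x):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    upper = (cone_halfwidth_deg - true_angle_deg) / sigma_deg
    lower = (-cone_halfwidth_deg - true_angle_deg) / sigma_deg
    return cdf(upper) - cdf(lower)

# More noise (e.g., from noise added to the eyes) makes an averted gaze
# more likely to be judged as direct.
print(p_direct(8.0, sigma_deg=2.0))   # low noise
print(p_direct(8.0, sigma_deg=10.0))  # high noise: larger probability
```

This reproduces the qualitative pattern the abstract reports: degrading the signal does not just add response noise, it systematically shifts categorization toward "direct".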

  5. Attentional bias modification in depression through gaze contingencies and regulatory control using a new eye-tracking intervention paradigm: study protocol for a placebo-controlled trial.

    Science.gov (United States)

    Vazquez, Carmelo; Blanco, Ivan; Sanchez, Alvaro; McNally, Richard J

    2016-12-08

    Attentional biases, namely difficulties both in disengaging attention from negative information and in maintaining it on positive information, play an important role in the onset and maintenance of depression. Recently, researchers have developed specific attentional bias modification (ABM) techniques aimed at modifying these maladaptive attentional patterns. However, the application of current ABM procedures has so far yielded scarce results in depression due, in part, to some methodological shortcomings. The aim of our protocol is the application of a new ABM technique, based on eye-tracker technology, designed to objectively train the specific attentional components involved in depression and, eventually, to reduce depressive symptoms. Based on sample size calculations, 32 dysphoric (BDI ≥13) participants will be allocated to either an active attentional bias training group or a yoked-control group. Attentional training will be individually administered in two sessions on two consecutive days at the lab. In the training task, series of pairs of faces (i.e. neutral vs. sad; neutral vs. happy; happy vs. sad) will be displayed. Participants in the training group will be asked to localize as quickly as possible the most positive face of the pair (e.g., the neutral face in neutral vs. sad trials) and maintain their gaze on it for 750 ms or 1500 ms, in two different blocks, to advance to the next trial. Participants' maintenance of gaze will be measured by an eye-tracking apparatus. Participants in the yoked-control group will be exposed to the same stimuli for the same average amount of time as the experimental participants, but without any instruction to maintain their gaze or any feedback on their performance. Pre- and post-training measures will be obtained to assess cognitive and emotional changes after the training. The findings from this research will provide a proof-of-principle of the efficacy of eye-tracking paradigms to modify attentional biases and

  6. Gaze Interactive Building Instructions

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Ahmed, Zaheer; Mardanbeigi, Diako

    We combine eye tracking technology and mobile tablets to support hands-free interaction with digital building instructions. As a proof-of-concept we have developed a small interactive 3D environment where one can interact with digital blocks by gaze, keystroke and head gestures. Blocks may be moved...

  7. Real-Time Mutual Gaze Perception Enhances Collaborative Learning and Collaboration Quality

    Science.gov (United States)

    Schneider, Bertrand; Pea, Roy

    2013-01-01

    In this paper we present the results of an eye-tracking study on collaborative problem-solving dyads. Dyads remotely collaborated to learn from contrasting cases involving basic concepts about how the human brain processes visual information. In one condition, dyads saw the eye gazes of their partner on the screen; in a control group, they did not…

  8. Gaze shifts and fixations dominate gaze behavior of walking cats

    Science.gov (United States)

    Rivers, Trevor J.; Sirota, Mikhail G.; Guttentag, Andrew I.; Ogorodnikov, Dmitri A.; Shah, Neet A.; Beloozerova, Irina N.

    2014-01-01

    Vision is important for locomotion in complex environments. How it is used to guide stepping is not well understood. We used an eye search coil technique combined with an active marker-based head recording system to characterize the gaze patterns of cats walking over terrains of different complexity: (1) on a flat surface in the dark when no visual information was available, (2) on the flat surface in light when visual information was available but not required, (3) along the highly structured but regular and familiar surface of a horizontal ladder, a task for which visual guidance of stepping was required, and (4) along a pathway cluttered with many small stones, an irregularly structured surface that was new each day. Three cats walked in a 2.5 m corridor, and 958 passages were analyzed. Gaze activity during the time when the gaze was directed at the walking surface was subdivided into four behaviors based on speed of gaze movement along the surface: gaze shift (fast movement), gaze fixation (no movement), constant gaze (movement at the body’s speed), and slow gaze (the remainder). We found that gaze shifts and fixations dominated the cats’ gaze behavior during all locomotor tasks, jointly occupying 62–84% of the time when the gaze was directed at the surface. As visual complexity of the surface and demand on visual guidance of stepping increased, cats spent more time looking at the surface, looked closer to them, and switched between gaze behaviors more often. During both visually guided locomotor tasks, gaze behaviors predominantly followed a repeated cycle of forward gaze shift followed by fixation. We call this behavior “gaze stepping”. Each gaze shift took gaze to a site approximately 75–80 cm in front of the cat, which the cat reached in 0.7–1.2 s and 1.1–1.6 strides. Constant gaze occupied only 5–21% of the time cats spent looking at the walking surface. PMID:24973656

  9. Early visual evoked potentials are modulated by eye position in humans induced by whole body rotations

    Directory of Open Access Journals (Sweden)

    Petit Laurent

    2004-09-01

    Full Text Available Abstract Background To reach and grasp an object in space on the basis of its image cast on the retina requires different coordinate transformations that take into account gaze and limb positioning. Eye position in the orbit influences the image's conversion from retinotopic (eye-centered) coordinates to an egocentric frame necessary for guiding action. Neuroimaging studies have revealed eye position-dependent activity in extrastriate visual, parietal and frontal areas, that is, along the visuo-motor pathway. At the earliest stage of vision, the role of the primary visual area (V1) in this process remains unclear. We used an experimental design based on pattern-onset visual evoked potential (VEP) recordings to study the effect of eye position on V1 activity in humans. Results We showed that the amplitude of the initial C1 component of the VEP, acknowledged to originate in V1, was modulated by eye position. We also established that putative spontaneous small saccades related to eccentric fixation, as well as retinal disparity, cannot explain the changes in C1 amplitude of the VEP in the present study. Conclusions The present modulation of the early component of the VEP suggests eye position-dependent activity of the human primary visual area. Our findings also evidence that cortical processes combine information about the position of the stimulus on the retinae with information about the location of the eyes in their orbit as early as the stage of the primary visual area.

  10. Perception of stereoscopic direct gaze: The effects of interaxial distance and emotional facial expressions.

    Science.gov (United States)

    Hakala, Jussi; Kätsyri, Jari; Takala, Tapio; Häkkinen, Jukka

    2016-07-01

    Gaze perception has received considerable research attention due to its importance in social interaction. The majority of recent studies have utilized monoscopic pictorial gaze stimuli. However, a monoscopic direct gaze differs from a live or stereoscopic gaze. In the monoscopic condition, both eyes of the observer receive a direct gaze, whereas in live and stereoscopic conditions, only one eye receives a direct gaze. In the present study, we examined the implications of the difference between monoscopic and stereoscopic direct gaze. Moreover, because research has shown that stereoscopy affects the emotions elicited by facial expressions, and facial expressions affect the range of directions where an observer perceives mutual gaze (the cone of gaze), we studied the interaction effect of stereoscopy and facial expressions on gaze perception. Forty observers viewed stereoscopic images wherein one eye of the observer received a direct gaze while the other eye received a horizontally averted gaze at five different angles corresponding to five interaxial distances between the cameras in stimulus acquisition. In addition to monoscopic and stereoscopic conditions, the stimuli included neutral, angry, and happy facial expressions. The observers judged the gaze direction and mutual gaze of four lookers. Our results show that the mean of the directions received by the left and right eyes approximated the perceived gaze direction in the stereoscopic semidirect gaze condition. The probability of perceiving mutual gaze in the stereoscopic condition was substantially lower compared with monoscopic direct gaze. Furthermore, stereoscopic semidirect gaze significantly widened the cone of gaze for happy facial expressions.

  11. Fix your eyes in the space you could reach: neurons in the macaque medial parietal cortex prefer gaze positions in peripersonal space.

    Directory of Open Access Journals (Sweden)

    Kostas Hadjidimitrakis

    Full Text Available Interacting in the peripersonal space requires coordinated arm and eye movements to visual targets in depth. In primates, the medial posterior parietal cortex (PPC) represents a crucial node in the process of visual-to-motor signal transformations. The medial PPC area V6A is a key region engaged in the control of these processes because it jointly processes visual information, eye position and arm movement related signals. However, to date, there is no evidence in the medial PPC of spatial encoding in three dimensions. Here, using single neuron recordings in behaving macaques, we studied the neural signals related to binocular eye position in a task that required the monkeys to perform saccades and fixate targets at different locations in peripersonal and extrapersonal space. A significant proportion of neurons were modulated by both gaze direction and depth, i.e., by the location of the foveated target in 3D space. The population activity of these neurons displayed a strong preference for peripersonal space in a time interval around the saccade that preceded fixation and during fixation as well. This preference for targets within reaching distance during both target capturing and fixation suggests that binocular eye position signals are implemented functionally in V6A to support its role in reaching and grasping.

  12. Cultural and Species Differences in Gazing Patterns for Marked and Decorated Objects: A Comparative Eye-Tracking Study

    Science.gov (United States)

    Mühlenbeck, Cordelia; Jacobsen, Thomas; Pritsch, Carla; Liebal, Katja

    2017-01-01

    Objects from the Middle Paleolithic period colored with ochre and marked with incisions represent the beginning of non-utilitarian object manipulation in different species of the Homo genus. To investigate the visual effects caused by these markings, we compared humans who have different cultural backgrounds (Namibian hunter–gatherers and German city dwellers) to one species of non-human great apes (orangutans) with respect to their perceptions of markings on objects. We used eye-tracking to analyze their fixation patterns and the durations of their fixations on marked and unmarked stones and sticks. In an additional test, humans evaluated the objects regarding their aesthetic preferences. Our hypotheses were that colorful markings help an individual to structure the surrounding world by making certain features of the environment salient, and that aesthetic appreciation should be associated with this structuring. Our results showed that humans fixated on the marked objects longer and used them in the structural processing of the objects and their background, but did not consistently report finding them more beautiful. Orangutans, in contrast, did not distinguish between object and background in their visual processing and did not clearly fixate longer on the markings. Our results suggest that marking behavior is characteristic for humans and evolved as an attention-directing rather than aesthetic benefit. PMID:28167923

  14. Children with ASD Can Use Gaze to Map New Words

    Science.gov (United States)

    Bean Ellawadi, Allison; McGregor, Karla K.

    2016-01-01

    Background: The conclusion that children with autism spectrum disorders (ASD) do not use eye gaze in the service of word learning is based on one-trial studies. Aims: To determine whether children with ASD come to use gaze in the service of word learning when given multiple trials with highly reliable eye-gaze cues. Methods & Procedures:…

  15. Effects of Gaze Direction Perception on Gaze Following Behavior

    Institute of Scientific and Technical Information of China (English)

    Zhang, Zhijun; Zhao, Yajun; Zhan, Qitao

    2011-01-01

    Observing another person's averted eye gaze leads to an automatic shift of attention to the same object and facilitates subsequent early visual processing. This phenomenon is termed "joint attention". Joint attention proceeds through two main stages: gaze perception and gaze following. Gaze perception refers to the analysis of the perceptual features of a gaze cue. By contrast, gaze following refers to the tendency of observers to shift attention to locations looked at by others, which is indicated by the gaze cueing effect (GCE). Most researchers have maintained that the latter process relies on the former (serial model), holding that the mechanisms involved in gaze perception precede those involved in attentional cueing from gaze. However, other results suggest that gaze perception and gaze following may involve dissociable mechanisms (parallel model). As Doherty et al. (2009) reported in young children, it is possible for gaze following to occur in the absence of precise gaze perception. Thus, it remains unclear what role gaze perception plays in gaze following behavior. The current study combined a gaze adaptation technique with a gaze cueing paradigm and found that larger perceived gaze-cue angles produced stronger cueing effects, and that after perceptual adaptation participants judged gaze direction less accurately and the attentional shift elicited by gaze cues was significantly reduced. These results indicate that the perception of gaze direction directly influences gaze following behavior, and that the extraction of gaze direction is modulated by stimulus salience (gaze angle) and perceptual adaptation. This suggests that, under conscious conditions, gaze perception and gaze following are directly linked, i.e., there may be a cortical processing pathway from the gaze perception system to the attention shifting system, and that gaze following is not purely reflexive processing but is modulated top-down by perceptual experience.

  16. Target position relative to the head is essential for predicting head movement during head-free gaze pursuit.

    Science.gov (United States)

    Pallus, Adam C; Freedman, Edward G

    2016-08-01

    Gaze pursuit is the coordinated movement of the eyes and head that allows humans and other foveate animals to track moving objects. The control of smooth pursuit eye movements when the head is restrained is relatively well understood, but how the eyes coordinate with concurrent head movements when the head is free remains unresolved. In this study, we describe behavioral tasks that dissociate head and gaze velocity during head-free pursuit in monkeys. Existing models of gaze pursuit propose that both eye and head movements are driven only by the perceived velocity of the visual target and are therefore unable to account for these data. We show that in addition to target velocity, the positions of the eyes in the orbits and the retinal position of the target are important factors for predicting head movement during pursuit. When the eyes are already near their limits, further pursuit in that direction will be accompanied by more head movement than when the eyes are centered in the orbits, even when target velocity is the same. The step-ramp paradigm, often used in pursuit tasks, produces larger or smaller head movements, depending on the direction of the position step, while gaze pursuit velocity is insensitive to this manipulation. Using these tasks, we can reliably evoke head movements with peak velocities much faster than the target's velocity. Under these circumstances, the compensatory eye movements, which are often called counterproductive since they rotate the eyes in the opposite direction, are essential to maintaining accurate gaze velocity.

  17. An exploratory study on the driving method of speech synthesis based on the human eye reading imaging data

    Science.gov (United States)

    Gao, Pei-pei; Liu, Feng

    2016-10-01

    With the development of information technology and artificial intelligence, speech synthesis plays a significant role in the field of human-computer interaction. However, the main problem of current speech synthesis techniques is a lack of naturalness and expressiveness, so that output is not yet close to the standard of natural speech. Another problem is that human-computer interaction based on speech synthesis is too monotonous to support a mechanism of subjective user drive. This thesis introduces the historical development of speech synthesis and summarizes the general process of the technique, pointing out that the prosody generation module is an important part of speech synthesis. On this basis, using the rules of eye activity during reading to control and drive prosody generation is introduced as a new human-computer interaction method that enriches the synthesized output. After reviewing the present state of speech synthesis technology in detail, and building on eye gaze data extraction with real-time eye movement signals as the driver, a speech synthesis method is proposed that can express the real speech rhythm of the speaker. That is, while the reader silently reads a corpus, reading information such as the gaze duration per prosodic unit is captured, and a hierarchical prosodic duration model is established to determine the duration parameters of the synthesized speech. Finally, the feasibility of the above method is verified by analysis.
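
    The gaze-driven duration idea above can be sketched very roughly: measured gaze durations per prosodic unit set the duration parameters of the synthesized speech through a scaling factor. The function name and the scaling factor are assumptions for illustration, not from the paper.

    ```python
    # Hypothetical sketch: map per-prosodic-unit gaze durations (measured
    # during silent reading) to synthesis duration parameters by a simple
    # speaking-rate scaling. Real hierarchical duration models are richer.

    def duration_parameters(gaze_durations_ms, rate_factor=0.8):
        """Synthesized duration (ms) per prosodic unit."""
        return [d * rate_factor for d in gaze_durations_ms]

    print(duration_parameters([400, 250, 600]))  # [320.0, 200.0, 480.0]
    ```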

  18. Differences in gaze anticipation for locomotion with and without vision

    Directory of Open Access Journals (Sweden)

    Colas Nils Authié

    2015-06-01

    Full Text Available Previous experimental studies have shown a spontaneous anticipation of locomotor trajectory by the head and gaze direction during human locomotion. This anticipatory behavior could serve several functions: an optimal selection of visual information, for instance through landmarks and optic flow, as well as trajectory planning and motor control. This would imply that anticipation persists in darkness, but with different characteristics. We asked ten participants to walk along two predefined complex trajectories (limaçon and figure eight) without any cue on the trajectory to follow. Two visual conditions were used: (i) in light and (ii) in complete darkness with eyes open. The whole body kinematics were recorded by motion capture, along with the participant's right eye movements. We showed that in darkness and in light, horizontal gaze anticipates the orientation of the head, which itself anticipates the trajectory direction. However, the horizontal angular anticipation decreases by half in darkness for both gaze and head. In both visual conditions we observed an eye nystagmus with similar properties (frequency and amplitude). The main difference comes from the fact that in light, there is a shift of the orientations of the eye nystagmus and the head in the direction of the trajectory. These results suggest that a fundamental function of gaze is to represent self motion, stabilize the perception of space during locomotion, and to simulate the future trajectory, regardless of the vision condition.

  19. The feasibility of an automated eye-tracking-modified Fagan test of memory for human faces in younger Ugandan HIV-exposed children.

    Science.gov (United States)

    Chhaya, Ronak; Weiss, Jonathan; Seffren, Victoria; Sikorskii, Alla; Winke, Paula M; Ojuka, Julius C; Boivin, Michael J

    2017-05-22

    The Fagan Test of Infant Intelligence (FTII) uses longer gaze length for unfamiliar versus familiar human faces to gauge visual-spatial encoding, attention, and working memory in infants. Our objective was to establish the feasibility of automated eye tracking with the FTII in HIV-exposed Ugandan infants. The FTII was administered to 31 perinatally HIV-exposed noninfected (HEU) Ugandan children 6-12 months of age (11 boys; M = 0.69 years, SD = 0.14; 19 girls; M = 0.79, SD = 0.15). A series of 10 different faces were presented (familiar face exposure for 25 s followed by a gaze preference trial of 15 s with both the familiar and unfamiliar faces). Tobii X2-30 infrared camera for pupil detection provided automated eye-tracking measures of gaze location and length during presentation of Ugandan faces selected to correspond to the gender, age (adult, child), face expression, and orientation of the original FTII. Eye-tracking gaze length for unfamiliar faces was correlated with performance on the Mullen Scales of Early Learning (MSEL). Infants gazed longer at the novel picture compared to familiar across 10 novelty preference trials. Better MSEL cognitive development was correlated with proportionately longer time spent looking at the novel faces (r(30) = 0.52, p = .004); especially for the Fine Motor Cognitive Sub-scale (r(30) = 0.54, p = .002). Automated eye tracking in a human face recognition test proved feasible and corresponded to the MSEL composite cognitive development in HEU infants in a resource-constrained clinical setting. Eye tracking may be a viable means of enhancing the validity and accuracy of other neurodevelopmental measures in at-risk children in sub-Saharan Africa.
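
    The novelty-preference measure behind the FTII result above is conventionally the fraction of total looking time spent on the unfamiliar face, with 0.5 as chance. A minimal sketch of that standard convention (the specific times are hypothetical):

    ```python
    # Novelty-preference score, as conventionally defined for paired
    # familiar/novel looking-time trials (assumed here; the paper does
    # not spell out its formula).

    def novelty_preference(novel_ms, familiar_ms):
        """Fraction of looking time on the novel face; chance is 0.5."""
        return novel_ms / (novel_ms + familiar_ms)

    # An infant looking 9 s at the novel face and 6 s at the familiar
    # one shows a preference of 0.6, above the 0.5 chance level.
    print(novelty_preference(9000, 6000))  # 0.6
    ```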

  20. Using Gaze Patterns to Predict Task Intent in Collaboration

    Directory of Open Access Journals (Sweden)

    Chien-Ming eHuang

    2015-07-01

    Full Text Available In everyday interactions, humans naturally exhibit behavioral cues, such as gaze and head movements, that signal their intentions while interpreting the behavioral cues of others to predict their intentions. Such intention prediction enables each partner to adapt their behaviors to the intent of others, serving a critical role in joint action where parties work together to achieve a common goal. Among behavioral cues, eye gaze is particularly important in understanding a person's attention and intention. In this work, we seek to quantify how gaze patterns may indicate a person's intention. Our investigation was contextualized in a dyadic sandwich-making scenario in which a "worker" prepared a sandwich by adding ingredients requested by a "customer." In this context, we investigated the extent to which the customers' gaze cues serve as predictors of which ingredients they intend to request. Predictive features were derived to represent characteristics of the customers' gaze patterns. We developed a support vector machine (SVM) based model that achieved 76% accuracy in predicting the customers' intended requests based solely on gaze features. Moreover, the predictor made correct predictions approximately 1.8 seconds before the spoken request from the customer. We further analyzed several episodes of interactions from our data to develop a deeper understanding of the scenarios where our predictor succeeded and failed in making correct predictions. These analyses revealed additional gaze patterns that may be leveraged to improve intention prediction. This work highlights gaze cues as a significant resource for understanding human intentions and informs the design of real-time recognizers of user intention for intelligent systems, such as assistive robots and ubiquitous devices, that may enable more complex capabilities and improved user experience.
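
    The setup above, predicting intended requests from gaze-pattern features with an SVM, can be illustrated with a rough sketch (not the authors' code): a binary linear SVM trained by sub-gradient descent on the hinge loss stands in for their classifier, and the two features (fixation time on an ingredient and fixation count) plus all numbers are hypothetical.

    ```python
    # Sketch: gaze features -> intended request (+1) or not (-1), via a
    # linear SVM trained with hinge-loss sub-gradient descent.

    def train_linear_svm(X, y, epochs=500, lr=0.01, lam=0.001):
        """Binary linear SVM; labels y are in {-1, +1}."""
        w = [0.0] * len(X[0])
        b = 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
                if margin < 1:  # inside margin: hinge-loss sub-gradient step
                    w = [wj - lr * (lam * wj - yi * xj) for wj, xj in zip(w, xi)]
                    b += lr * yi
                else:           # only the regularizer shrinks the weights
                    w = [wj - lr * lam * wj for wj in w]
        return w, b

    def predict(w, b, x):
        return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

    # [fixation_time_on_ingredient_s, n_fixations] per observation
    X = [[2.5, 6], [1.9, 5], [0.3, 1], [0.4, 2]]
    y = [1, 1, -1, -1]
    w, b = train_linear_svm(X, y)
    print(predict(w, b, [2.2, 5]), predict(w, b, [0.2, 1]))
    ```

    A long, frequent dwell on an ingredient is classified as an upcoming request; brief glances are not.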

  1. No Evidence of Emotional Dysregulation or Aversion to Mutual Gaze in Preschoolers with Autism Spectrum Disorder: An Eye-Tracking Pupillometry Study

    Science.gov (United States)

    Nuske, Heather J.; Vivanti, Giacomo; Dissanayake, Cheryl

    2015-01-01

    The "gaze aversion hypothesis", suggests that people with Autism Spectrum Disorder (ASD) avoid mutual gaze because they experience it as hyper-arousing. To test this hypothesis we showed mutual and averted gaze stimuli to 23 mixed-ability preschoolers with ASD ("M" Mullen DQ = 68) and 21 typically-developing preschoolers, aged…

  2. Human-eye versus computerized color matching.

    Science.gov (United States)

    Yap, A U; Sim, C P; Loh, W L; Teo, J H

    1999-01-01

    This project compared the difference in color matching between human-eye assessment and computerized colorimetry. Fifty dental personnel were asked to color match Vita Lumin shade tabs to seven different randomly arranged test tabs from the Z100 shade guide. All evaluators were blinded to the shades of the test tabs and were asked to match only body shade of the Vita Lumin tab to the middle third or body of each test tab. The results obtained were subsequently computed into L*a*b* values and compared with results obtained by computerized colorimetry. Results indicate that the difference in color matching between human-eye assessment and computerized colorimetry is shade dependent. Discrepancy was significant for b* coordinates for shades A1 and B2 and L* and b* coordinates for shade C4. For all shades evaluated, color difference between human-eye and computerized color matching is perceivable under clinical settings, as delta E values are greater than 3. There is a need for correction factors in the formal specification of the color-matching software due to the discrepancy between human-eye and computerized colorimetric color matching.
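
    The perceivability threshold cited above (delta E greater than 3) refers to a CIELAB color difference. A minimal sketch, assuming the simple CIE76 Euclidean formula and hypothetical L*a*b* readings (the study does not publish its raw values):

    ```python
    import math

    def delta_e_cielab(lab1, lab2):
        """CIE76 color difference between two L*a*b* triples."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

    # Delta E > 3 is treated in the study as perceivable in clinical settings.
    shade_eye = (65.0, 2.0, 18.0)    # hypothetical human-eye match
    shade_meter = (63.5, 2.5, 21.0)  # hypothetical colorimeter reading
    de = delta_e_cielab(shade_eye, shade_meter)
    print(round(de, 2), de > 3)  # 3.39 True
    ```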

  3. Social interactions through the eyes of macaques and humans.

    Directory of Open Access Journals (Sweden)

    Richard McFarland

    Full Text Available Group-living primates frequently interact with each other to maintain social bonds as well as to compete for valuable resources. Observing such social interactions between group members provides individuals with essential information (e.g. on the fighting ability or altruistic attitude of group companions) to guide their social tactics and choice of social partners. This process requires individuals to selectively attend to the most informative content within a social scene. It is unclear how non-human primates allocate attention to social interactions in different contexts, and whether they share similar patterns of social attention with humans. Here we compared the gaze behaviour of rhesus macaques and humans when free-viewing the same set of naturalistic images. The images contained positive or negative social interactions between two conspecifics of different phylogenetic distance from the observer; i.e. affiliation or aggression exchanged by two humans, rhesus macaques, Barbary macaques, baboons or lions. Monkeys directed a variable amount of gaze at the two conspecific individuals in the images according to their roles in the interaction (i.e. giver or receiver of affiliation/aggression). Their gaze distribution to non-conspecific individuals varied systematically according to the viewed species and the nature of interactions, suggesting a contribution of both prior experience and innate bias in guiding social attention. Furthermore, the monkeys' gaze behavior was qualitatively similar to that of humans, especially when viewing negative interactions. Detailed analysis revealed that both species directed more gaze at the face than the body region when inspecting individuals, and attended more to the body region in negative than in positive social interactions. Our study suggests that monkeys and humans share a similar pattern of role-sensitive, species- and context-dependent social attention, implying a homologous cognitive mechanism of …

  4. Context-sensitivity in Conversation. Eye gaze and the German Repair Initiator ‘bitte?’ (´pardon?´)

    DEFF Research Database (Denmark)

    Egbert, Maria

    1996-01-01

    Just as turn-taking has been found to be both context-free and context-sensitive (Sacks, Schegloff & Jefferson 1974), the organization of repair is also shown here to be both context-free and context-sensitive. In a comparison of American and German conversation, repair can be shown to be context-free in that, basically, the same mechanism can be found across these two languages. However, repair is also sensitive to the linguistic inventory of a given language; in German, morphological marking, syntactic constraints, and grammatical congruity across turns are used as interactional resources. In addition, repair is sensitive to certain characteristics of social situations. The selection of a particular repair initiator, German bitte? 'pardon?', indexes that there is no mutual gaze between interlocutors; i.e., there is no common course of action. The selection of bitte? not only initiates repair …

  5. Human-computer Interaction Based on Gaze Tracking and Gesture Recognition

    Institute of Scientific and Technical Information of China (English)

    Xiao, Zhiyong; Qin, Huabiao

    2009-01-01

    This paper presents a novel human-computer interaction method for long-distance operation of a computer, based on gaze tracking and gesture recognition. The system captures images of the user with a camera and detects the positions of the eyes and fingers using image recognition algorithms; the position the user points to on the screen is determined by the line from the eye to the fingertip, and various operations are executed by recognizing changes in the user's gestures. Experimental results demonstrate that this interaction method can locate the indicated position on the screen and recognize the user's operations well, achieving natural and friendly long-distance human-computer interaction.
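
    The eye-to-fingertip pointing rule described in this abstract can be sketched as a ray-plane intersection (a hypothetical coordinate frame, not the paper's implementation): place the screen in the plane z = 0, with the user at positive z, and intersect the ray from the eye through the fingertip with that plane.

    ```python
    # Sketch: the indicated screen point is where the eye->finger ray
    # meets the screen plane z = 0. Units are arbitrary (e.g. cm).

    def pointed_screen_position(eye, finger):
        """Intersect the ray from eye through finger with the plane z = 0."""
        ex, ey, ez = eye
        fx, fy, fz = finger
        t = ez / (ez - fz)  # ray parameter where z reaches 0
        return (ex + t * (fx - ex), ey + t * (fy - ey))

    # Eye 60 cm from the screen, fingertip 30 cm from it, offset right/up:
    print(pointed_screen_position((0.0, 0.0, 60.0), (10.0, 5.0, 30.0)))
    # (20.0, 10.0)
    ```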

  6. Off-the-Shelf Gaze Interaction

    DEFF Research Database (Denmark)

    San Agustin, Javier

    People with severe motor-skill disabilities are often unable to use standard input devices such as a mouse or a keyboard to control a computer and are, therefore, in strong need of alternative input devices. Gaze tracking offers them the possibility to use the movements of their eyes … of the challenges introduced by the use of low-cost and off-the-shelf components for gaze interaction. The main contributions are: - Development and performance evaluation of the ITU Gaze Tracker, an off-the-shelf gaze tracker that uses an inexpensive webcam or video camera to track the user's eye. The software … is readily available as open source, offering the possibility to try out gaze interaction for a low price and to analyze, improve and extend the software by modifying the source code. - A novel gaze estimation method based on homographic mappings between planes. No knowledge about the hardware configuration …
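
    The "gaze estimation method based on homographic mappings between planes" can be illustrated with a minimal calibration sketch (an assumed simplification, not the thesis' implementation): four correspondences between normalized eye-feature coordinates and known screen points determine a 3x3 homography, which then maps new eye positions to screen coordinates.

    ```python
    # Sketch: estimate a homography H from four eye-feature/screen point
    # pairs (h33 fixed to 1), then map a new eye position to the screen.

    def solve(A, b):
        """Gaussian elimination with partial pivoting for A x = b."""
        n = len(A)
        M = [row[:] + [bi] for row, bi in zip(A, b)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            for r in range(c + 1, n):
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
        return x

    def homography(src, dst):
        """3x3 H from four point correspondences (direct linear transform)."""
        A, b = [], []
        for (x, y), (u, v) in zip(src, dst):
            A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
            A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
        h = solve(A, b) + [1.0]
        return [h[0:3], h[3:6], h[6:9]]

    def apply_h(H, pt):
        x, y = pt
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
                (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

    # Hypothetical calibration: normalized eye coords -> screen pixels.
    eye_pts = [(0.2, 0.2), (0.8, 0.2), (0.8, 0.7), (0.2, 0.7)]
    scr_pts = [(0.0, 0.0), (1920.0, 0.0), (1920.0, 1080.0), (0.0, 1080.0)]
    H = homography(eye_pts, scr_pts)
    gx, gy = apply_h(H, (0.5, 0.45))  # eye feature at the calibration centre
    print(round(gx), round(gy))  # 960 540
    ```

    One appeal of the homography formulation is that it needs no model of the camera or its placement, which fits the off-the-shelf, uncalibrated-hardware goal of the thesis.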

  7. Actively learning human gaze shifting paths for semantics-aware photo cropping.

    Science.gov (United States)

    Zhang, Luming; Gao, Yue; Ji, Rongrong; Xia, Yingjie; Dai, Qionghai; Li, Xuelong

    2014-05-01

    Photo cropping is a widely used tool in printing industry, photography, and cinematography. Conventional cropping models suffer from the following three challenges. First, the deemphasized role of semantic contents that are many times more important than low-level features in photo aesthetics. Second, the absence of a sequential ordering in the existing models. In contrast, humans look at semantically important regions sequentially when viewing a photo. Third, the difficulty of leveraging inputs from multiple users. Experience from multiple users is particularly critical in cropping as photo assessment is quite a subjective task. To address these challenges, this paper proposes semantics-aware photo cropping, which crops a photo by simulating the process of humans sequentially perceiving semantically important regions of a photo. We first project the local features (graphlets in this paper) onto the semantic space, which is constructed based on the category information of the training photos. An efficient learning algorithm is then derived to sequentially select semantically representative graphlets of a photo, and the selecting process can be interpreted by a path, which simulates humans actively perceiving semantics in a photo. Furthermore, we learn a prior distribution of such active graphlet paths from training photos that are marked as aesthetically pleasing by multiple users. The learned priors enforce the corresponding active graphlet path of a test photo to be maximally similar to those from the training photos. Experimental results show that: 1) the active graphlet path accurately predicts human gaze shifting, and thus is more indicative for photo aesthetics than conventional saliency maps and 2) the cropped photos produced by our approach outperform its competitors in both qualitative and quantitative comparisons.

  8. Zoonotic helminths affecting the human eye

    Directory of Open Access Journals (Sweden)

    Eberhard Mark L

    2011-03-01

    Full Text Available Abstract Nowadays, zoonoses are an important cause of human parasitic diseases worldwide and a major threat to socio-economic development, mainly in developing countries. Importantly, zoonotic helminths that affect the human eye (HIE) may cause blindness with severe socio-economic consequences to human communities. These infections include nematodes, cestodes and trematodes, which may be transmitted by vectors (dirofilariasis, onchocerciasis, thelaziasis), by food consumption (sparganosis, trichinellosis) or acquired indirectly from the environment (ascariasis, echinococcosis, fascioliasis). Adult and/or larval stages of HIE may localize in human ocular tissues externally (i.e., lachrymal glands, eyelids, conjunctival sacs) or within the ocular globe (i.e., intravitreous retina, anterior and/or posterior chamber), causing symptoms due to the parasitic localization in the eyes or to the immune reaction they elicit in the host. Unfortunately, data on HIE are scant and mostly limited to case reports from different countries. The biology and epidemiology of the most frequently reported HIE are discussed, as well as clinical descriptions of the diseases, diagnostic considerations and video clips on their presentation and surgical treatment. "Homines amplius oculis, quam auribus credunt" (Men believe their eyes more than their ears), Seneca, Ep. 6,5.

  9. The Mona Lisa effect: neural correlates of centered and off-centered gaze.

    Science.gov (United States)

    Boyarskaya, Evgenia; Sebastian, Alexandra; Bauermann, Thomas; Hecht, Heiko; Tüscher, Oliver

    2015-02-01

    The Mona Lisa effect describes the phenomenon in which the eyes of a portrait appear to look at the observer regardless of the observer's position. Recently, the metaphor of a cone of gaze has been proposed to describe the range of gaze directions within which a person feels looked at. The width of the gaze cone is about five degrees of visual angle to either side of a given gaze direction. We used functional magnetic resonance imaging to investigate how the brain regions involved in gaze direction discrimination differ between centered and decentered presentation positions of a portrait exhibiting eye contact. Subjects observed a given portrait's eyes. By presenting portraits with varying gaze directions (eye contact, 0°; gaze at the edge of the gaze cone, 5°; clearly averted gaze, 10°), we revealed that the brain response to gaze at the edge of the gaze cone was similar to that produced by eye contact and different from that produced by averted gaze. The right fusiform gyrus and right superior temporal sulcus showed stronger activation for averted gaze as compared to eye contact. Gaze-sensitive areas, however, were not affected by the portrait's presentation location. In sum, although the brain clearly distinguishes averted from centered gaze, a substantial change of vantage point does not alter neural activity, thus providing a possible explanation of why the feeling of eye contact is upheld even in decentered stimulus positions. © 2014 Wiley Periodicals, Inc.
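
    A toy formalization of the cone-of-gaze idea may help: the five-degree half-width comes from the abstract, while the threshold predicate itself is an assumed simplification of how observers report mutual gaze.

    ```python
    # Toy cone-of-gaze model: an observer feels looked at when the
    # portrait's gaze deviates from direct by no more than the cone's
    # half-width (about 5 degrees, per the abstract).

    CONE_HALF_WIDTH_DEG = 5.0

    def feels_looked_at(gaze_deviation_deg, half_width=CONE_HALF_WIDTH_DEG):
        return abs(gaze_deviation_deg) <= half_width

    # The study's three stimulus angles: direct, edge-of-cone, averted.
    for angle in (0, 5, 10):
        print(angle, feels_looked_at(angle))
    ```

    This matches the fMRI finding qualitatively: the 5° edge-of-cone stimulus patterns with eye contact, while 10° does not.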

  10. Reading faces: differential lateral gaze bias in processing canine and human facial expressions in dogs and 4-year-old children.

    Directory of Open Access Journals (Sweden)

    Anaïs Racca

    Full Text Available Sensitivity to the emotions of others provides clear biological advantages. However, in the case of heterospecific relationships, such as that existing between dogs and humans, there are additional challenges since some elements of the expression of emotions are species-specific. Given that faces provide important visual cues for communicating emotional state in both humans and dogs, and that processing of emotions is subject to brain lateralisation, we investigated lateral gaze bias in adult dogs when presented with pictures of expressive human and dog faces. Our analysis revealed clear differences in laterality of eye movements in dogs towards conspecific faces according to the emotional valence of the expressions. Differences were also found towards human faces, but to a lesser extent. For comparative purposes, a similar experiment was also run with 4-year-old children, who showed differential processing of facial expressions compared to dogs, suggesting a species-dependent engagement of the right or left hemisphere in processing emotions.

  11. Constraining eye movement when redirecting walking trajectories alters turning control in healthy young adults.

    Science.gov (United States)

    Pradeep Ambati, V N; Murray, Nicholas G; Saucedo, Fabricio; Powell, Douglas W; Reed-Jones, Rebecca J

    2013-05-01

    Humans use a specific steering synergy, in which the eyes and head lead rotation toward the new direction, when executing a turn or change in direction. Increasing evidence suggests that eye movement is critical for turning control and that when the eyes are constrained, or participants have difficulty making eye movements, steering control is disrupted. The purpose of the current study was to extend previous research regarding eye movements and steering control to a functional walking and turning task. This study investigated eye, head, trunk, and pelvis kinematics of healthy young adults during a 90° redirection of walking trajectory under two visual conditions: Free Gaze (the eyes were allowed to move naturally in the environment) and Fixed Gaze (participants were required to fixate the eyes on a target in front). Results revealed significant differences in eye, head, and trunk coordination between the Free Gaze and Fixed Gaze conditions. Under Fixed Gaze, segments moved together with no significant differences between segment onset times. In addition, the sequence of segment rotation during Fixed Gaze suggested a bottom-up postural perturbation control strategy in place of the top-down steering control seen in Free Gaze. The results of this study support the hypothesis that eye movement is critical for the release of the steering synergy for turning control.

  12. Culture and Listeners' Gaze Responses to Stuttering

    Science.gov (United States)

    Zhang, Jianliang; Kalinowski, Joseph

    2012-01-01

    Background: It is frequently observed that listeners demonstrate gaze aversion to stuttering. This response may have profound social/communicative implications for both fluent and stuttering individuals. However, there is a lack of empirical examination of listeners' eye gaze responses to stuttering, and it is unclear whether cultural background…


  14. From the "Eye of History" to "A Second Gaze": The Visual Archive and the Marginalized in the History of Education

    Science.gov (United States)

    Grosvenor, Ian

    2007-01-01

    This paper has several concerns. It is about both the stories we tell and the images we place with those stories; it is also about historical practice and the power of the image to generate new research approaches. The paper is organized into three sections: the "eye of history" and historians and the visual archive; histories of black…

  15. Gameplay experience in a gaze interaction game

    CERN Document Server

    Nacke, Lennart E; Sasse, Dennis; Lindley, Craig A

    2010-01-01

    Assessing gameplay experience for gaze interaction games is a challenging task. For this study, a gaze interaction Half-Life 2 game modification was created that allowed eye tracking control. The mod was deployed during an experiment at Dreamhack 2007, where participants had to play with gaze navigation and afterwards rate their gameplay experience. The results show low tension and negative affect scores on the gameplay experience questionnaire, as well as high positive challenge, immersion and flow ratings. The correlation between spatial presence and immersion for gaze interaction was high and warrants further investigation. It is concluded that gameplay experience can be correctly assessed with the methodology presented in this paper.

  16. Cyclooxygenase-2 expression in the normal human eye and its expression pattern in selected eye tumours

    DEFF Research Database (Denmark)

    Wang, Jinmei; Wu, Yazhen; Heegaard, Steffen;

    2011-01-01

    Purpose: Cyclooxygenase-2 (COX-2) is an enzyme involved in neoplastic processes. The purpose of the present study is to investigate COX-2 expression in the normal human eye and the expression pattern in selected eye tumours involving COX-2 expressing cells. Methods: Immunohistochemical staining using antibodies against COX-2 was performed on paraffin sections of normal human eyes and selected eye tumours arising from cells expressing COX-2. Results: Cyclooxygenase-2 expression was found in various structures of the normal eye. Abundant expression was seen in the cornea, iris, ciliary body and retina. The COX-2 expression was less in tumours deriving from the ciliary epithelium and also in retinoblastoma. Conclusion: Cyclooxygenase-2 is constitutively expressed in normal human eyes. The expression of COX-2 is much lower in selected eye tumours involving COX-2 expressing cells.

  17. Gaze disorders: A clinical approach

    Directory of Open Access Journals (Sweden)

    Pulikottil Wilson Vinny

    2016-01-01

    Full Text Available A single clear binocular vision is made possible by the nature through the oculomotor system along with inputs from the cortical areas as well their descending pathways to the brainstem. Six systems of supranuclear control mechanisms play a crucial role in this regard. These are the saccadic system, the smooth pursuit system, the vestibular system, the optokinetic system, the fixation system, and the vergence system. In gaze disorders, lesions at different levels of the brain spare some of the eye movement systems while affecting others. The resulting pattern of eye movements helps clinicians to localize lesions accurately in the central nervous system. Common lesions causing gaze palsies include cerebral infarcts, demyelinating lesions, multiple sclerosis, tumors, Wernicke's encephalopathy, metabolic disorders, and neurodegenerative disorders such as progressive supranuclear palsy. Evaluation of the different gaze disorders is a bane of most budding neurologists and neurosurgeons. However, a simple and systematic clinical approach to this problem can make their early diagnosis rather easy.

  18. Spontaneous social orienting and gaze following in ringtailed lemurs (Lemur catta).

    Science.gov (United States)

    Shepherd, Stephen V; Platt, Michael L

    2008-01-01

    Both human and nonhuman primates preferentially orient toward other individuals and follow gaze in controlled environments. Precisely where any animal looks during natural behavior, however, remains unknown. We used a novel telemetric gaze-tracking system to record orienting behavior of ringtailed lemurs (Lemur catta) interacting with a naturalistic environment. We here provide the first evidence that ringtailed lemurs, group-living prosimian primates, preferentially gaze towards other individuals and, moreover, follow other lemurs' gaze while freely moving and interacting in naturalistic social and ecological environments. Our results support the hypothesis that stem primates were capable of orienting toward and following the attention of other individuals. Such abilities may have enabled the evolution of more complex social behavior and cognition, including theory of mind and language, which require spontaneous attention sharing. This is the first study to use telemetric eye-tracking to quantitatively monitor gaze in any nonhuman animal during locomotion, feeding, and social interaction. Moreover, this is the first demonstration of gaze following by a prosimian primate and the first to report gaze following during spontaneous interaction in naturalistic social environments.

  19. Sphero-cylindrical error for oblique gaze as a function of the position of the centre of rotation of the eye.

    Science.gov (United States)

    Perches, Sara; Ares, Jorge; Collados, Victoria; Palos, Fernando

    2013-07-01

    New designs of ophthalmic lenses customised for particular wearing conditions (e.g., vertex distance or wrap tilt angle) have emerged during the last few years. However, there is limited information about the extent of any improvement in visual quality of these products. The aim of this work was to determine whether customisation according to the centre of rotation of the eye (CRE) improves visual quality for oblique gaze in monofocal spherical lenses. Conventional spherical lenses were designed by numerical ray tracing with back vertex powers (BVP) ranging from +8 to -8 dioptres (D) and base curves from 0 to 8 D. The wavefront error at oblique gaze (40°) was computed for each design with CRE positions from 20 to 35 mm. Sphero-cylindrical (SC) error was calculated using wavefront Zernike coefficients, considering only monochromatic aberrations. Visual acuity in logMAR was estimated following the Raasch empirical regression model. SC error and visual acuity maps were calculated for each BVP as a function of base curves and CRE in a graded colour scale. From SC error maps, maximum spherical and cylindrical errors (MSE and MCE) of 1.49 D and -1.24 D respectively were found for BVP from 0 to -2 D, 2.27 D and -1.90 D for BVP from -2 D to -4 D, 2.59 D and -2.20 D for lenses from -4 D to -6 D, and 2.63 D and -2.28 D for lenses from -6 D to -8 D. Concerning positive lenses, we obtained MSE and MCE of 0.37 D and -1.35 D respectively for lenses from 0 D to +2 D, 0.39 D and -2.23 D for lenses from +2 D to +4 D, and 0.36 D and -2.73 D for lenses from +4 D to +6 D. Regarding visual acuity maps for 40° oblique gaze, significant loss of visual acuity (>0.30 logMAR, Snellen 6/12, 20/40, decimal 0.50) was found for BVP as low as -2 D. Clinically negligible high-order aberration levels (equivalent spherical power …) were found. High-BVP negative lenses showed high SC error when they were designed with low bases. However, high-BVP negative lenses with low SC error were found for medium bases and low
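The SC errors above are derived from second-order Zernike wavefront coefficients. The conventional power-vector conversion (Thibos notation) from those coefficients to a sphere/cylinder/axis prescription can be sketched as below; this is the standard textbook formula, not necessarily the authors' exact pipeline, and the units (coefficients in micrometres over a pupil radius in millimetres, powers in dioptres) follow the usual convention:

```python
import math

def zernike_to_sphcyl(c20, c22, c2m2, r_mm):
    """Convert 2nd-order Zernike coefficients (micrometres) over a pupil of
    radius r_mm (millimetres) to sphere, cylinder and axis in dioptres,
    via the power vector (M, J0, J45); negative-cylinder convention."""
    M = -4.0 * math.sqrt(3) * c20 / r_mm**2     # spherical equivalent
    J0 = -2.0 * math.sqrt(6) * c22 / r_mm**2    # 0/90 deg astigmatism
    J45 = -2.0 * math.sqrt(6) * c2m2 / r_mm**2  # 45/135 deg astigmatism
    C = -2.0 * math.hypot(J0, J45)              # cylinder (negative form)
    S = M - C / 2.0                             # sphere
    axis = math.degrees(math.atan2(J45, J0) / 2.0) % 180.0
    return S, C, axis
```

For a purely defocused wavefront (only c20 non-zero) the cylinder is zero and the sphere equals the spherical equivalent M, which is a quick sanity check on the sign conventions.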

  20. Gliding and Saccadic Gaze Gesture Recognition in Real Time

    DEFF Research Database (Denmark)

    Rozado, David; San Agustin, Javier; Rodriguez, Francisco

    2012-01-01

    ... paradigm in the context of human-machine interaction as low-cost gaze trackers become more ubiquitous. The viability of gaze gestures as an innovative way to control a computer rests on how easily they can be assimilated by potential users and also on the ability of machine learning algorithms to discriminate intentional gaze gestures from typical gaze activity performed during standard interaction with electronic devices. In this work, through a set of experiments and user studies, we evaluate the performance of two different gaze gesture modalities, gliding gaze gestures and saccadic gaze gestures ...

  1. Under Moroccan Gaze: Dis/(Re)Orienting Orientalism American Style in Abdellatif Akbib’s Tangier’s Eyes on America

    Directory of Open Access Journals (Sweden)

    Lhoussain Simour

    2010-06-01

    Full Text Available This article engages with travel literature and is mostly concerned with the image of America in Abdellatif Akbib’s travel-inspired narrative, Tangier’s Eyes on America (2001). It is devoted to examining a number of patterns of representation, especially as they pertain to the notion of counter discourse and counter-hegemonic modalities of resistance and subversion. It also inspects the discursive mechanisms Akbib employs to represent America and highlights how Western cultural prejudices and stereotypes are destabilised, and how the discursively-inflected distortions of the Orientalist mindset are disturbed in his work. The choice of this text is determined by a strong desire to discover how the Other of the Orientalist ideology examines and understands the Western Self and modernity and how he/she dismantles “the Centre/Margin binarism of imperial discourse.”

  2. Modeling and simulation of the human eye

    Science.gov (United States)

    Duran, R.; Ventura, L.; Nonato, L.; Bruno, O.

    2007-02-01

    The computational modeling of the human eye has been widely studied by different sectors of the scientific and technological community. One of the main reasons for this increasing interest is the possibility of reproducing the optical properties of the eye by means of computational simulation, making possible the development of efficient devices to treat and correct vision problems. This work explores a still little-investigated aspect of modeling the visual system, presenting a computational framework that makes possible the use of real data in the modeling and simulation of the human visual system. This new approach enables investigation of an individual's optical system, assisting in the construction of new techniques used to infer vital data in medical investigations. Using corneal topography to collect real data from patients, a computational model of the cornea is constructed, and a set of simulations (Plácido discs, point spread function, wavefront, and the projection of a real image and its visualization on the retina) was built to ensure the correctness of the system and to investigate the effect of corneal abnormalities on retinal image formation.

  3. Learning robotic eye-arm-hand coordination from human demonstration: a coupled dynamical systems approach.

    Science.gov (United States)

    Lukic, Luka; Santos-Victor, José; Billard, Aude

    2014-04-01

    We investigate the role of obstacle avoidance in visually guided reaching and grasping movements. We report on a human study in which subjects performed prehensile motion with obstacle avoidance where the position of the obstacle was systematically varied across trials. These experiments suggest that reaching with obstacle avoidance is organized in a sequential manner, where the obstacle acts as an intermediary target. Furthermore, we demonstrate that the notion of workspace travelled by the hand is embedded explicitly in a forward planning scheme, which is actively involved in detecting obstacles on the way when performing reaching. We find that the gaze proactively coordinates the pattern of eye-arm motion during obstacle avoidance. This study also provides a quantitative assessment of the coupling between eye, arm and hand motion. We show that the coupling follows regular phase dependencies and is unaltered during obstacle avoidance. These observations provide a basis for the design of a computational model. Our controller extends the coupled dynamical systems framework and provides fast and synchronous control of the eyes, the arm and the hand within a single and compact framework, mimicking the similar control system found in humans. We validate our model for visuomotor control of a humanoid robot.

  4. Eye gaze tracking based on dark pupil image

    Institute of Scientific and Technical Information of China (English)

    2013-01-01

    The accurate localization of the iris center is difficult since the outer boundary of the iris is often occluded significantly by the eyelids. In order to solve this problem, an infrared light source un-coaxial with the camera is used to produce a dark pupil image for pupil center estimation. Firstly, the 3D position of the center of cornea curvature, which is used as translational movement information of the eyeball, is computed using two cameras and the coordinates of two cornea reflections on the cameras' imaging planes. Then, the relative displacement of the pupil center from the projection of the cornea curvature center on the 2D image is extracted, describing the rotational movement of the eyeball. Finally, the feature vector is mapped into coordinates of the gazing point on the screen using an artificial neural network. As for the eye region detection problem, two wide-view webcams are used, and an adaptive boosting + active appearance model algorithm is adopted to limit the region of interest to a small area. The result of our experiment shows that the average root-mean-square error is 0.62° in the horizontal direction and 1.05° in the vertical direction, which demonstrates the effectiveness of our solution in eye gaze tracking.
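The final stage described in this record maps a gaze feature vector (the pupil-centre offset relative to the projected cornea-curvature centre) to on-screen coordinates with an artificial neural network. As a self-contained sketch of that regression stage, the snippet below substitutes a second-order polynomial fit by least squares for the paper's network; the 2-D feature layout and the calibration data are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def poly_features(xy):
    """Quadratic expansion of a 2-D pupil-offset feature: [1, x, y, xy, x^2, y^2]."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_gaze_map(offsets, screen_xy):
    """Least-squares fit from pupil-offset features (N, 2) to screen points (N, 2)."""
    A = poly_features(offsets)
    coef, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)
    return coef  # shape (6, 2)

def predict_gaze(coef, offsets):
    """Apply a fitted mapping to new pupil-offset features."""
    return poly_features(offsets) @ coef
```

In a real calibration, the offsets would come from the tracker while the user fixates known screen targets; the polynomial terms absorb mild nonlinearity in the eyeball-to-screen geometry.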

  5. Gaze-based rehearsal in children under 7: a developmental investigation of eye movements during a serial spatial memory task.

    Science.gov (United States)

    Morey, Candice C; Mareva, Silvana; Lelonkiewicz, Jaroslaw R; Chevalier, Nicolas

    2017-03-12

    The emergence of strategic verbal rehearsal at around 7 years of age is widely considered a major milestone in descriptions of the development of short-term memory across childhood. Likewise, rehearsal is believed by many to be a crucial factor in explaining why memory improves with age. This apparent qualitative shift in mnemonic processes has also been characterized as a shift from passive visual to more active verbal mnemonic strategy use, but no investigation of the development of overt spatial rehearsal has informed this explanation. We measured serial spatial order reconstruction in adults and groups of children 5-7 years old and 8-11 years old, while recording their eye movements. Children, particularly the youngest children, overtly fixated late-list spatial positions longer than adults, suggesting that younger children are less likely to engage in covert rehearsal during stimulus presentation than older children and adults. However, during retention the youngest children overtly fixated more of the to-be-remembered sequences than any other group, which is inconsistent with the idea that children do nothing to try to remember. Altogether, these data are inconsistent with the notion that children under 7 do not engage in any attempts to remember. They are most consistent with proposals that children's style of remembering shifts around age 7 from reactive cue-driven methods to proactive, covert methods, which may include cumulative rehearsal.

  6. Comparison of dogs and humans in visual scanning of social interaction

    OpenAIRE

    Törnqvist, Heini; Somppi, Sanni; Koskela, Aija; Krause, Christina M.; Vainio, Outi; Kujala, Miiamaaria V.

    2015-01-01

    Previous studies have demonstrated similarities in gazing behaviour of dogs and humans, but comparisons under similar conditions are rare, and little is known about dogs' visual attention to social scenes. Here, we recorded the eye gaze of dogs while they viewed images containing two humans or dogs either interacting socially or facing away: the results were compared with equivalent data measured from humans. Furthermore, we compared the gazing behaviour of two dog and two human populations w...

  7. A non-verbal Turing test: differentiating mind from machine in gaze-based social interaction.

    Science.gov (United States)

    Pfeiffer, Ulrich J; Timmermans, Bert; Bente, Gary; Vogeley, Kai; Schilbach, Leonhard

    2011-01-01

    In social interaction, gaze behavior provides important signals that have a significant impact on our perception of others. Previous investigations, however, have relied on paradigms in which participants are passive observers of other persons' gazes and do not adjust their gaze behavior as is the case in real-life social encounters. We used an interactive eye-tracking paradigm that allows participants to interact with an anthropomorphic virtual character whose gaze behavior is responsive to where the participant looks on the stimulus screen in real time. The character's gaze reactions were systematically varied along a continuum from a maximal probability of gaze aversion to a maximal probability of gaze-following during brief interactions, thereby varying contingency and congruency of the reactions. We investigated how these variations influenced whether participants believed that the character was controlled by another person (i.e., a confederate) or a computer program. In a series of experiments, the human confederate was either introduced as naïve to the task, cooperative, or competitive. Results demonstrate that the ascription of humanness increases with higher congruency of gaze reactions when participants are interacting with a naïve partner. In contrast, humanness ascription is driven by the degree of contingency irrespective of congruency when the confederate was introduced as cooperative. Conversely, during interaction with a competitive confederate, judgments were neither based on congruency nor on contingency. These results offer important insights into what renders the experience of an interaction truly social: Humans appear to have a default expectation of reciprocation that can be influenced drastically by the presumed disposition of the interactor to either cooperate or compete.

  10. Assessing Self-Awareness through Gaze Agency.

    Science.gov (United States)

    Gregori Grgič, Regina; Crespi, Sofia Allegra; de'Sperati, Claudio

    2016-01-01

    We define gaze agency as the awareness of the causal effect of one's own eye movements in gaze-contingent environments, which might soon become a widespread reality with the diffusion of gaze-operated devices. Here we propose a method for measuring gaze agency based on self-monitoring propensity and sensitivity. In one task, naïve observers watched bouncing balls on a computer monitor with the goal of discovering the cause of concurrently presented beeps, which were generated in real-time by their saccades or by other events (Discovery Task). We manipulated observers' self-awareness by pre-exposing them to a condition in which beeps depended on gaze direction or by focusing their attention on their own eyes. These manipulations increased propensity to agency discovery. In a second task, which served to monitor agency sensitivity at the sensorimotor level, observers were explicitly asked to detect gaze agency (Detection Task). Both tasks turned out to be well suited to measure both increases and decreases of gaze agency. We did not find evident oculomotor correlates of agency discovery or detection. A strength of our approach is that it probes self-monitoring propensity, which is difficult to evaluate with traditional tasks based on bodily agency. In addition to putting a lens on this novel cognitive function, measuring gaze agency could reveal subtle self-awareness deficits in pathological conditions and during development.

  11. Eye Gaze Tracking Based on Analysis of Electrooculography and Steady-State Visual Evoked Potentials

    Institute of Scientific and Technical Information of China (English)

    郭琛; 高小榕

    2012-01-01

    Eye-tracking technology, as a means of human-computer interaction (HCI) and eye behaviour measurement, has been widely used in psychology and cognitive science research. Meanwhile, the brain-computer interface (BCI) based on steady-state visual evoked potentials (SSVEP) is also an interaction method of considerable interest for disabled patients. A novel method of eye tracking using combined analysis of the two signals is proposed in this paper. Two kinds of electrophysiological signals, EOG and EEG, were simultaneously analyzed with this method. The procedure of EOG analysis contained modules for detrending, denoising, angular transformation and calibration. The analysis of SSVEP was based on the algorithm of canonical correlation analysis (CCA): the frequency whose canonical correlation with the EEG is maximal and over threshold is selected as the target. The screen coordinates of the target can then be used as benchmark parameters for calibrating gaze point tracking. The results showed that gaze points could be identified by EOG every 0.5 s and targets spotted by EEG every 2 s. Both could run independently or work together, and detection time and accuracy improved when they were combined.

  12. Human secretory phospholipase A(2), group IB in normal eyes and in eye diseases

    DEFF Research Database (Denmark)

    Prause, Jan U; Bazan, Nicolas G; Heegaard, Steffen

    2007-01-01

    The aim of the present study was to identify human GIB (hGIB) in the normal human eye and investigate the pattern of expression in patients with eye diseases involving hGIB-rich cells. METHODS: Human GIB mRNA was identified in the human retina by means of in situ hybridization and polymerase chain reaction. Antibodies against hGIB were obtained and immunohistochemical staining was performed on paraffin-embedded sections of normal and pathological eyes. Donor eyes from patients with descemetization of the cornea, Fuchs' corneal endothelial dystrophy, age-related macular degeneration, malignant choroidal melanoma, retinitis pigmentosa and glaucoma were evaluated. RESULTS: Expression of hGIB was found in various cells of the eye. The most abundant expression was found in retinal pigment epithelium (RPE) cells, the inner photoreceptor segments, ganglion cells and the corneal endothelium. We explored diseases involving ...

  13. Interacting with Objects in the Environment by Gaze and Hand Gestures

    DEFF Research Database (Denmark)

    Hales, Jeremy; Mardanbeigi, Diako; Rozado, David

    2013-01-01

    A head-mounted wireless gaze tracker in the form of gaze tracking glasses is used here for continuous and mobile monitoring of a subject's point of regard on the surrounding environment. We combine gaze tracking and hand gesture recognition to allow a subject to interact with objects in the environment by gazing at them, and controlling the object using hand gesture commands. The gaze tracking glasses were made from low-cost hardware consisting of a safety glasses' frame and wireless eye tracking and scene cameras. An open source gaze estimation algorithm is used for eye tracking and estimation of the user's gaze ... in smart environments.

  14. Toward the design of a low cost vision-based gaze tracker for interaction skill acquisition

    Directory of Open Access Journals (Sweden)

    Martinez Francis

    2011-12-01

    Full Text Available The human gaze is a basic means of non-verbal interaction between humans; however, in several situations, especially in the context of upper limb motor impairments, the gaze also constitutes an alternative means for a human’s interaction with the environment (real or virtual). Mastering these interactions through specific tools frequently requires the acquisition of new skills and an understanding of the mechanisms that allow the necessary skills to be acquired. The technological tool is therefore a key to the acquisition of new interaction skills. This paper presents a tool for interaction skill acquisition via gaze. The proposed gaze tracker is a low-cost head-mounted system based on vision technology only. The system hardware specifications and the status of the gaze tracker design are presented; the dedicated algorithm for eye detection and tracking, and an improvement of G. Zelinsky’s model for eye movement prediction during the search for a predefined object in an image, are outlined. Results of the preliminary software evaluation are presented.

  15. Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.

    Science.gov (United States)

    Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo

    2017-07-01

    Eye-movements are the only directly observable behavioural signals that are highly correlated with actions at the task level and proactive of body movements, and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis (and in amputees), from stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy among others. Despite this benefit, eye tracking is not widely used as a control interface for robotic interfaces in movement-impaired patients due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking using our GT3D binocular eye tracker with a custom designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. The users can move their own hand to any location of the workspace by simply looking at the target and winking once. This purely eye-tracking-based system enables the end-user to retain free head movement and yet achieves high spatial end-point accuracy, in the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a 3-dimensional space-filling Peano curve while the user tracks it with their eyes. This results in a fully automated calibration procedure that yields several thousand calibration points, versus standard approaches using a dozen points, resulting in beyond state-of-the-art 3D accuracy and precision.
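The calibration described above collects thousands of gaze samples while the user visually tracks the robot along a space-filling curve, then solves for a map from eye features to 3D end points. A minimal sketch, with a smooth Lissajous sweep standing in for the Peano curve and a plain affine least-squares map standing in for the full calibration model (both illustrative assumptions, not the GT3D implementation):

```python
import numpy as np

def calibration_trajectory(n=2000):
    """Dense 3-D Lissajous sweep of the workspace; a smooth stand-in for the
    space-filling Peano curve the robot traces during calibration."""
    s = np.linspace(0.0, 2.0 * np.pi, n)
    return np.column_stack([np.sin(s), np.sin(2 * s + 0.4), np.sin(3 * s + 0.8)])

def fit_calibration(eye_feats, targets):
    """Least-squares affine map from eye features (N, d) to 3-D points (N, 3)."""
    A = np.column_stack([eye_feats, np.ones(len(eye_feats))])
    W, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return W  # shape (d + 1, 3)

def apply_calibration(W, eye_feats):
    """Map new eye features through the fitted affine calibration."""
    A = np.column_stack([eye_feats, np.ones(len(eye_feats))])
    return A @ W
```

The appeal of trajectory-based calibration is exactly what the record notes: every sample along the curve is a calibration point, so the regression is constrained by thousands of observations rather than a dozen fixations.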

  16. Cyclooxygenase-2 expression in the normal human eye and its expression pattern in selected eye tumours

    DEFF Research Database (Denmark)

    Wang, Jinmei; Wu, Yazhen; Heegaard, Steffen;

    2011-01-01

    and retina. The COX-2 expression was less in tumours deriving from the ciliary epithelium and also in retinoblastoma. Conclusion: Cyclooxygenase-2 is constitutively expressed in normal human eyes. The expression of COX-2 is much lower in selected eye tumours involving COX-2 expressing cells....

  17. Olhar e contato ocular: desenvolvimento típico e comparação na Síndrome de Down Gaze and eye contact: typical development and comparison in Down syndrome

    Directory of Open Access Journals (Sweden)

    Aline Elise Gerbelli Belini

    2008-03-01

    Full Text Available PURPOSE: To investigate the development of gaze and eye contact in a baby girl with Down syndrome, comparing the frequency of her gaze toward different targets with the visual behavior of typically developing babies. METHODS: A female baby with Down syndrome, with no visual disorders diagnosed up to the end of data collection, and 17 typically developing babies were filmed monthly at home, in free interaction with their mothers, from the first to the fifth month of life. The frequency of gaze directed at 11 targets, among them "looking at the mother's eyes", was counted. RESULTS: The typically developing babies showed statistically significant changes over the period in the frequencies of "eyes closed" and of gaze directed at "objects", "the researcher", "the environment", "their own body", "the mother's face" and "the mother's eyes". The sample was statistically stable for "looking at another person", "looking at the mother's body" and "opening and closing the eyes". The development of gaze and eye contact in the baby with Down syndrome was statistically very similar to the means of the other babies (chi-square test) and to their individual variability (significant-cluster analysis). CONCLUSIONS: Early interaction between the baby and her mother seems to influence the dyad's non-verbal communication more than genetically influenced limitations do. This may be reflected in the similarities found between the development of visual behavior and eye contact in the baby with Down syndrome and in the children without developmental alterations.

  18. Aquaporins 6-12 in the human eye

    DEFF Research Database (Denmark)

    Tran, Thuy Linh; Bek, Toke; Holm, Lars;

    2013-01-01

    Purpose: Aquaporins (AQPs) are widely expressed and have diverse distribution patterns in the eye. AQPs 0-5 have been localized at the cellular level in human eyes. We investigated the presence of the more recently discovered AQPs 6-12 in the human eye. Methods: RT-PCR was performed on fresh tissue...... from two human eyes divided into the cornea, corneal limbus, ciliary body and iris, lens, choroid, optic nerve, retina and sclera. Each structure was examined to detect the mRNA of AQPs 6-12. Twenty-one human eyes were examined using immunohistochemical and immunofluorescence techniques to determine...... in the nonpigmented ciliary epithelium and retinal ganglion cells. AQP11 immunolabelling was detected in the corneo-limbal epithelium, nonpigmented ciliary epithelium and inner limiting membrane of the retina. Conclusion: Selective expression of AQP7, AQP9 and AQP11 was found within various structures of the human...

  19. Multi-View Algorithm for Face, Eyes and Eye State Detection in Human Image- Study Paper

    Directory of Open Access Journals (Sweden)

    Latesh Kumari

    2014-07-01

    Full Text Available For fatigue detection, such as in driver fatigue monitoring systems, eye state analysis is one of the important and deciding steps in determining the fatigue of a driver's eyes. In this study, algorithms for face detection, eye detection and eye state analysis are reviewed, and an efficient algorithm for detecting the face and eyes is proposed. First, the efficient face detection method is presented, which finds the face area in human images. Then, novel algorithms for detecting the eye region and the eye state are introduced. In this paper we propose multi-view-based eye state detection to determine the state of the eye. Using a skin color model, the algorithm detects face regions in the YCbCr color space. By applying skin segmentation, which separates the skin and non-skin pixels of the image, it detects face regions under various lighting and noise conditions. The eye regions are then extracted within those detected face regions. The proposed algorithms are fast and robust, as no pattern matching is required.
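
    The skin-segmentation step can be sketched as an RGB-to-YCbCr conversion followed by a box test on the chrominance channels. The threshold ranges below are commonly cited values, not the ones used in this paper:

    ```python
    def rgb_to_ycbcr(r, g, b):
        """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
        y  = 0.299 * r + 0.587 * g + 0.114 * b
        cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
        cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
        return y, cb, cr

    def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
        """Classify a pixel as skin if its chrominance falls in the box.
        The Cb/Cr ranges are illustrative assumptions."""
        _, cb, cr = rgb_to_ycbcr(r, g, b)
        return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
    ```

    Because the test ignores the luma channel Y, it is relatively robust to the lighting variations the abstract mentions.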

  20. Eye movement-invariant representations in the human visual system.

    Science.gov (United States)

    Nishimoto, Shinji; Huth, Alexander G; Bilenko, Natalia Y; Gallant, Jack L

    2017-01-01

    During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.
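
    The similarity-ratio analysis can be sketched with a plain Pearson correlation over voxel time courses; the direction of the ratio and the toy data are illustrative assumptions:

    ```python
    def pearson(x, y):
        """Pearson correlation between two equal-length response vectors."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    def invariance_index(fix_rep1, fix_rep2, free_view):
        """Fixation-vs-free similarity normalized by within-fixation similarity.
        Values near 1 suggest a voxel's response is invariant to eye movements."""
        return pearson(fix_rep1, free_view) / pearson(fix_rep1, fix_rep2)
    ```

    A voxel whose time course barely changes under free viewing scores near 1; an eye-movement-sensitive voxel scores lower.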

  1. A novel algorithm for automatic localization of human eyes

    Institute of Scientific and Technical Information of China (English)

    Liang Tao (陶亮); Juanjuan Gu (顾涓涓); Zhenquan Zhuang (庄镇泉)

    2003-01-01

    Based on geometrical facial features and image segmentation, we present a novel algorithm for automatic localization of human eyes in grayscale or color still images with complex backgrounds. First, a criterion for determining eye location is established from prior knowledge of geometrical facial features. Second, a range of threshold values that would separate eye blocks from other blocks in a segmented face image (i.e., a binary image) is estimated. Third, as the threshold is progressively increased by an appropriate step within that range, once two eye blocks appear in the segmented image they are detected by the eye-location criterion. Finally, the 2D correlation coefficient is used as a symmetry similarity measure to check the validity of the two detected eyes. To avoid background interference, skin color segmentation can be applied to enhance the accuracy of eye detection. The experimental results demonstrate the high efficiency and correct localization rate of the algorithm.
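
    The final symmetry check can be sketched as mirroring one candidate patch and computing the 2D correlation coefficient between the patches; the acceptance threshold is an illustrative assumption:

    ```python
    def corr2d(p, q):
        """2D correlation coefficient between two equal-size intensity patches."""
        vp = [v for row in p for v in row]
        vq = [v for row in q for v in row]
        mp, mq = sum(vp) / len(vp), sum(vq) / len(vq)
        num = sum((a - mp) * (b - mq) for a, b in zip(vp, vq))
        den = (sum((a - mp) ** 2 for a in vp) * sum((b - mq) ** 2 for b in vq)) ** 0.5
        return num / den

    def symmetric_eye_pair(left, right, threshold=0.5):
        """Mirror the right patch horizontally, then require high correlation
        with the left patch (threshold is a hypothetical value)."""
        mirrored = [row[::-1] for row in right]
        return corr2d(left, mirrored) >= threshold
    ```

    A true left/right eye pair is roughly a mirror image, so the mirrored correlation is high; two unrelated dark blocks generally fail the test.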

  2. People deliver eye care: managing human resources

    Directory of Open Access Journals (Sweden)

    Kayode Odusote

    2005-12-01

    Full Text Available People deliver health. Effective health care needs an efficient and motivated health workforce: the totality of individuals who directly or indirectly contribute to the promotion, protection and improvement of the health of the population. Community eye health is about providing eye health care to people as close as possible to where they live and, as much as possible, at a price they can afford. It promotes people-centred care rather than traditional disease-centred eye care services. In order to provide effective and efficient eye care services, we need an adequate number of well-qualified, well-motivated and equitably distributed eye health workers (EHWs).

  3. Wide-angle chromatic aberration corrector for the human eye.

    Science.gov (United States)

    Benny, Yael; Manzanera, Silvestre; Prieto, Pedro M; Ribak, Erez N; Artal, Pablo

    2007-06-01

    The human eye is affected by large chromatic aberration, which may limit vision and make it difficult to see fine retinal details in ophthalmoscopy. We designed and built a two-triplet system for correcting the average longitudinal chromatic aberration of the eye while keeping a reasonably wide field of view. Measurements in real eyes were conducted to examine the level and optical quality of the correction. We also performed tests to evaluate the effect of the corrector on visual performance.

  4. Eye Contact Is Crucial for Referential Communication in Pet Dogs

    Science.gov (United States)

    Savalli, Carine; Resende, Briseida; Gaunet, Florence

    2016-01-01

    Dogs discriminate human direction-of-attention cues, such as body, gaze, head and eye orientation, in several circumstances. Eye contact in particular seems to provide information on human readiness to communicate; when there is such an ostensive cue, dogs tend to follow human communicative gestures more often. However, little is known about how such cues influence the production of communicative signals (e.g. gaze alternation and sustained gaze) in dogs. In the current study, in order to get unreachable food, dogs needed to communicate with their owners under several conditions that differed in the direction of the owners' visual cues, namely gaze, head, eyes, and availability to make eye contact. Results provided evidence that pet dogs did not rely on the details of their owners' direction of visual attention. Instead, they relied on the whole combination of visual cues and especially on the owners' availability to make eye contact. Dogs increased visual communicative behaviors when they established eye contact with their owners, a different strategy from apes and baboons, which intensify vocalizations and gestures when the human is not visually attending. The difference in strategy is possibly due to their distinct status: domesticated vs. wild. Results are discussed taking into account the ecological relevance of the task, since pet dogs live in the human environment and face similar situations on a daily basis throughout their lives. PMID:27626933

  5. The Development of Joint Visual Attention: A Longitudinal Study of Gaze following during Interactions with Mothers and Strangers

    Science.gov (United States)

    Gredeback, Gustaf; Fikke, Linn; Melinder, Annika

    2010-01-01

    Two- to 8-month-old infants interacted with their mother or a stranger in a prospective longitudinal gaze following study. Gaze following, as assessed by eye tracking, emerged between 2 and 4 months and stabilized between 6 and 8 months of age. Overall, infants followed the gaze of a stranger more than they followed the gaze of their mothers,…

  6. Human eye and the sun hot and cold light

    CERN Document Server

    Vavilov, S I

    1965-01-01

    The Human Eye and the Sun, "Hot" and "Cold" Light is a translation from the Russian and a reproduction of texts from Volume IV of the collected works of S.I. Vavilov, president of the U.S.S.R. Academy of Sciences. The book deals with theoretical and practical developments in lighting techniques. The text gives a brief introduction to the relationship of the human eye and the sun, describing the properties of light, of the sun, and of the human eye. The book describes hot (incandescence) and cold (luminescence) light as coming from different sources, and the two types of light are compared.

  7. PyGaze: an open-source, cross-platform toolbox for minimal-effort programming of eye-tracking experiments

    NARCIS (Netherlands)

    Dalmaijer, E.S.; Mathôt, S.; van der Stigchel, S.

    2014-01-01

    The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eye-tracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility.

  9. Use of Eye Tracking as an Innovative Instructional Method in Surgical Human Anatomy.

    Science.gov (United States)

    Sánchez-Ferrer, María Luísa; Grima-Murcia, María Dolores; Sánchez-Ferrer, Francisco; Hernández-Peñalver, Ana Isabel; Fernández-Jover, Eduardo; Sánchez Del Campo, Francisco

    Tobii glasses can record corneal infrared light reflection to track pupil position and map gaze focus in the video recording. Eye tracking has been proposed for use in training and coaching as a visually guided control interface. The aim of our study was to test the potential use of these glasses in various situations: explanations of anatomical structures on tablet-type electronic devices, explanations of anatomical models and dissected cadavers, and during the prosection thereof. An additional aim was to test the glasses during laparoscopies performed on Thiel-embalmed cadavers (which allow pneumoinsufflation and exact reproduction of the laparoscopic surgical technique). The device was also tried out in actual surgery (both laparoscopy and open surgery). We performed a pilot study using the Tobii glasses in the dissection room at our School of Medicine and in the operating room at our hospital. To evaluate usefulness, a survey was designed for use among students, instructors, and practicing physicians. The results were satisfactory, with the usefulness of this tool supported by more than 80% positive responses to most questions. There was no inconvenience for surgeons, and patient safety was ensured in the real laparoscopy. To our knowledge, this is the first publication to demonstrate the usefulness of eye tracking in practical instruction of human anatomy, as well as in teaching clinical anatomy and surgical techniques in the dissection and operating rooms. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  10. Longitudinal chromatic aberration of the human infant eye.

    Science.gov (United States)

    Wang, Jingyun; Candy, T Rowan; Teel, Danielle F W; Jacobs, Robert J

    2008-09-01

    Although the longitudinal chromatic aberration (LCA) of the adult eye has been studied, there are no data collected from the human infant eye. A chromatic retinoscope was used to measure cyclopleged infant and adult refractions with four pseudomonochromatic sources (centered at 472, 538, 589, and 652 nm) and with polychromatic light. The LCA of the infant eyes between 472 and 652 nm was a factor of 1.7 greater than the LCA found in the adult group: infant mean = 1.62 D, SD ± 0.14 D; adult mean = 0.96 D, SD ± 0.17 D. The elevated level of LCA in infant eyes is consistent with the greater optical power of the immature eye and indicates similar chromatic dispersion in infant and adult eyes. The implications for visual performance, defocus detection, and measurement of refraction are discussed.
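
    The reported numbers can be reproduced directly: LCA here is the refraction difference between the shortest and longest wavelengths, and the infant/adult ratio gives the "factor of 1.7". The per-wavelength refractions in the example are made-up placeholders; only the group means come from the abstract:

    ```python
    def lca(refraction_by_wavelength_nm, short_nm=472, long_nm=652):
        """Longitudinal chromatic aberration (diopters) as the refraction
        difference between a short and a long wavelength."""
        return refraction_by_wavelength_nm[short_nm] - refraction_by_wavelength_nm[long_nm]

    # Group means reported in the abstract:
    infant_lca, adult_lca = 1.62, 0.96
    ratio = infant_lca / adult_lca  # about 1.69, i.e. the reported factor of 1.7
    ```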

  11. Human eye colour and HERC2, OCA2 and MATP

    DEFF Research Database (Denmark)

    Mengel-From, Jonas; Børsting, Claus; Sanchez, Juan J.

    2010-01-01

    Prediction of human eye colour by forensic genetic methods is of great value in certain crime investigations. Strong associations between blue/brown eye colour and the SNP loci rs1129038 and rs12913832 in the HERC2 gene were recently described. Weaker associations between eye colour and other...... value of typing either the HERC2 SNPs rs1129038 and/or rs12913832 that are in strong linkage disequilibrium was observed when eye colour was divided into two groups, (1) blue, grey and green (light) and (2) brown and hazel (dark). Sequence variations in rs11636232 and rs7170852 in HERC2, rs1800407...... in OCA2 and rs16891982 in MATP showed additional association with eye colours in addition to the effect of HERC2 rs1129038. Diplotype analysis of three sequence variations in HERC2 and one sequence variation in OCA2 showed the best discrimination between light and dark eye colours with a likelihood ratio...
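
    The likelihood-ratio idea can be sketched as a naive product over markers. The genotype frequency table below is purely hypothetical, for illustration only, and does not reproduce the published estimates:

    ```python
    def eye_colour_lr(genotypes, freqs):
        """Likelihood ratio P(genotypes | light eyes) / P(genotypes | dark eyes),
        assuming independence between markers (a simplifying assumption)."""
        lr = 1.0
        for marker, g in genotypes.items():
            p_light, p_dark = freqs[marker][g]
            lr *= p_light / p_dark
        return lr

    # Hypothetical (P(genotype | light), P(genotype | dark)) tables:
    FREQS = {
        "rs12913832": {"GG": (0.90, 0.10), "AG": (0.09, 0.45), "AA": (0.01, 0.45)},
        "rs1800407":  {"CC": (0.80, 0.95), "CT": (0.18, 0.04), "TT": (0.02, 0.01)},
    }
    ```

    A ratio well above 1 supports light eye colour; a ratio well below 1 supports dark. In practice, markers in strong linkage disequilibrium (like the two HERC2 SNPs) cannot be treated as independent, which is why the authors use diplotype analysis.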

  12. Toward understanding social cues and signals in human-robot interaction: effects of robot gaze and proxemic behavior.

    Science.gov (United States)

    Fiore, Stephen M; Wiltshire, Travis J; Lobato, Emilio J C; Jentsch, Florian G; Huang, Wesley H; Axelrod, Benjamin

    2013-01-01

    As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava(TM) mobile robotics platform in a hallway navigation scenario. Cues associated with the robot's proxemic behavior were found to significantly affect participant perceptions of the robot's social presence and emotional state while cues associated with the robot's gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot's mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.

  13. Towards understanding social cues and signals in human-robot interaction: Effects of robot gaze and proxemic behavior

    Directory of Open Access Journals (Sweden)

    Stephen M. Fiore

    2013-11-01

    Full Text Available As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava™ Mobile Robotics Platform in a hallway navigation scenario. Cues associated with the robot's proxemic behavior were found to significantly affect participant perceptions of the robot's social presence and emotional state, while cues associated with the robot's gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot's mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.

  14. Eye movement prediction by oculomotor plant Kalman filter with brainstem control

    Institute of Scientific and Technical Information of China (English)

    Oleg V.KOMOGORTSEV; Javed I.KHAN

    2009-01-01

    Our work addresses one of the core issues related to Human Computer Interaction (HCI) systems that use eye gaze as an input. This issue is the sensor, transmission and other delays that exist in any eye-tracker-based system, reducing its performance. The delay effect can be compensated by accurate prediction of eye movement trajectories. This paper introduces a mathematical model of the human eye that uses anatomical properties of the Human Visual System to predict eye movement trajectories. The eye mathematical model is transformed into Kalman filter form to provide continuous eye position signal prediction during all eye movement types. The model presented in this paper uses brainstem control properties employed during transitions between fast (saccade) and slow (fixation, pursuit) eye movements. Results presented in this paper indicate that the proposed eye model in Kalman filter form improves the accuracy of eye movement prediction and is capable of real-time performance. In addition to HCI systems with direct eye gaze input, the proposed eye model can be immediately applied for bit-rate/computational reduction in real-time gaze-contingent systems.
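
    The paper's model embeds oculomotor-plant anatomy in the filter. As a minimal stand-in, the sketch below runs a plain constant-velocity Kalman filter (state = position and velocity, a deliberate simplification of the brainstem-controlled model) and emits a short-horizon prediction after each gaze sample to compensate delay:

    ```python
    def kalman_predict_eye(positions, dt=0.01, q=1.0, r=1.0, lookahead=1):
        """1D constant-velocity Kalman filter over gaze positions.
        Returns the filter's prediction `lookahead` steps ahead after each
        measurement. q and r are simplified scalar noise terms (assumptions)."""
        x = [positions[0], 0.0]                 # state: [position, velocity]
        P = [[1.0, 0.0], [0.0, 1.0]]            # state covariance
        preds = []
        for z in positions:
            # predict: x' = F x,  P' = F P F^T + Q  with F = [[1, dt], [0, 1]]
            x = [x[0] + dt * x[1], x[1]]
            P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
                  P[0][1] + dt * P[1][1]],
                 [P[1][0] + dt * P[1][1],
                  P[1][1] + q]]
            # update with measurement z (H = [1, 0])
            k0 = P[0][0] / (P[0][0] + r)
            k1 = P[1][0] / (P[0][0] + r)
            innov = z - x[0]
            x = [x[0] + k0 * innov, x[1] + k1 * innov]
            P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                 [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
            preds.append(x[0] + lookahead * dt * x[1])   # extrapolate ahead
        return preds
    ```

    During smooth pursuit the constant-velocity assumption tracks well; the anatomical model in the paper exists precisely because saccades violate it.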

  15. Study of Optical Models Regarding the Human Eye

    Directory of Open Access Journals (Sweden)

    Maryam Abolmasoomi

    2011-03-01

    Full Text Available Introduction: Many models have been presented for the optical study of the human eye. In recent years, surgery on the anterior segment of the eye (such as cataract and photorefractive surgery) has increased, so study of the optics of the eye and evaluation of vision quality have become more important. Material and Methods: In this article, some of these models are considered. They include models with spherical and conic-section surfaces (for the cornea and lens), simple models, and newer models with complex surfaces. Results: Evaluation of optical models of the eye makes it possible to better represent human vision and to increase the accuracy of surgery on the anterior segment of the eye, enabling higher-quality vision.

  16. Eye-based head gestures

    DEFF Research Database (Denmark)

    Mardanbegi, Diako; Witzner Hansen, Dan; Pederson, Thomas

    2012-01-01

    A novel method for video-based head gesture recognition using eye information from an eye tracker is proposed. The method uses a combination of gaze and eye movement to infer head gestures. Compared to other gesture-based methods, a major advantage of the method is that the user keeps the gaze...

  17. Human eye haptics-based multimedia.

    Science.gov (United States)

    Velandia, David; Uribe-Quevedo, Alvaro; Perez-Gutierrez, Byron

    2014-01-01

    Immersive and interactive multimedia applications offer complementary study tools in anatomy, as users can explore 3D models while obtaining information about the organ, tissue or part being explored. Haptics increases the sense of interaction with virtual objects, improving user experience in a more realistic manner. Common tools for studying the eye are books, illustrations and assembly models; more recently, these are being complemented by mobile apps whose 3D capabilities, computing power and user bases are increasing. The goal of this project is to develop a complementary eye anatomy and pathology study tool using deformable models within a multimedia application, offering students the opportunity to explore the eye from up close and from within, with relevant information. Validation of the tool provided feedback on the potential of the development, along with suggestions on improving haptic feedback and navigation.

  18. Visual attention to plain and ornamented human bodies: an eye-tracking study.

    Science.gov (United States)

    Wohlrab, Silke; Fink, Bernhard; Pyritz, Lennart W; Rahlfs, Moritz; Kappeler, Peter M

    2007-06-01

    Signaling mate quality through visual adornments is a common phenomenon in animals and humans. However, humans are probably the only species who applies artificial ornaments. Such deliberate alterations of the skin, e.g., tattoos and scarring patterns, have been discussed by researchers as potential handicap signals, but there is still very little information about a potential biological signaling value of body modification. In this study eye-tracking was employed to investigate the signaling value of tattoos and other body modification. Measurement of gaze duration of 50 individuals while watching plain, scarred, accessorized, and tattooed bodies of artificial human images indicated that participants looked significantly longer at tattooed than at scarred, accessorized, and plain bodies. Generally, male participants paid more attention to tattooed stimuli of both sexes. More detailed analyses showed that particularly female tattooed stimuli were looked at longer. These findings are discussed within an evolutionary framework by suggesting that tattoos might have some signaling value which influences the perception of both male and female conspecifics and may hence also affect mating decisions.

  19. A real-life illusion of assimilation in the human face: eye size illusion caused by eyebrows and eye shadow.

    Science.gov (United States)

    Morikawa, Kazunori; Matsushita, Soyogu; Tomita, Akitoshi; Yamanami, Haruna

    2015-01-01

    Does an assimilative illusion like the Delboeuf illusion occur in the human face? We investigated factors that might influence the perceived size of the eyes in a realistic face. Experiment 1 manipulated the position of the eyebrows (high or low), the presence/absence of eye shadow, and the viewing distance (0.6 m or 5 m), then measured the perceived eye size using a psychophysical method. The results showed that low eyebrows (i.e., closer to the eyes) make the eyes appear larger, suggesting that the assimilation of eyes into the eyebrows is stronger when the eye-eyebrow distance is shorter. The results also demonstrated that the application of eye shadow also makes the eyes look larger. Moreover, the effect of eye shadow is more pronounced when viewed from a distance. In order to investigate the mechanism of the eye size illusion demonstrated in Experiment 1, Experiment 2 measured the magnitude of the Delboeuf illusion at a viewing distance of 0.6 m or 5 m, with or without gray gradation simulating the eye shadow that was used in Experiment 1. The experiment demonstrated that the Delboeuf illusion is modulated by viewing distance and gradation in the same way as the eye size illusion. These results suggest that the eye size illusion induced by the eyebrows and the Delboeuf illusion involve the same mechanism, and that eye shadow causes the assimilation of the eyes into itself and enhances assimilation between the eyes and the eyebrows.

  20. Visual Data Mining: An Exploratory Approach to Analyzing Temporal Patterns of Eye Movements

    Science.gov (United States)

    Yu, Chen; Yurovsky, Daniel; Xu, Tian

    2012-01-01

    Infant eye movements are an important behavioral resource to understand early human development and learning. But the complexity and amount of gaze data recorded from state-of-the-art eye-tracking systems also pose a challenge: how does one make sense of such dense data? Toward this goal, this article describes an interactive approach based on…

  2. Direct Gaze Modulates Face Recognition in Young Infants

    Science.gov (United States)

    Farroni, Teresa; Massaccesi, Stefano; Menon, Enrica; Johnson, Mark H.

    2007-01-01

    From birth, infants prefer to look at faces that engage them in direct eye contact. In adults, direct gaze is known to modulate the processing of faces, including the recognition of individuals. In the present study, we investigate whether direction of gaze has any effect on face recognition in four-month-old infants. Four-month infants were shown…

  3. Discussion and Future Directions for Eye Tracker Development

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Mulvey, Fiona; Mardanbegi, Diako

    2011-01-01

    Eye and gaze tracking have a long history but there is still plenty of room for further development. In this concluding chapter for Section 6, we consider future perspectives for the development of eye and gaze tracking.

  4. Owners' direct gazes increase dogs' attention-getting behaviors.

    Science.gov (United States)

    Ohkita, Midori; Nagasawa, Miho; Kazutaka, Mogi; Kikusui, Takefumi

    2016-04-01

    This study examined whether dogs gain information about a human's attention from their gaze, and whether they change their attention-getting behaviors (i.e., whining and whimpering, looking at their owners' faces, pawing, and approaching their owners) in response to their owners' direct gazes. The results showed that when the owners gazed at their dogs, the durations of whining and whimpering and of looking at the owners' faces were longer than when the owners averted their gazes. In contrast, there were no differences in the duration of pawing or the likelihood of approaching the owners between the direct and averted gaze conditions. Therefore, owners' direct gazes increased the behaviors that act as distance signals and do not necessarily involve touching the owners. We suggest that dogs are sensitive to human gazes, that this sensitivity may act as an attachment signal to humans, and that it may contribute to close relationships between humans and dogs.

  5. A novel EOG/EEG hybrid human-machine interface adopting eye movements and ERPs: application to robot control.

    Science.gov (United States)

    Ma, Jiaxin; Zhang, Yu; Cichocki, Andrzej; Matsuno, Fumitoshi

    2015-03-01

    This study presents a novel human-machine interface (HMI) based on both electrooculography (EOG) and electroencephalography (EEG). This hybrid interface works in two modes: an EOG mode recognizes eye movements such as blinks, and an EEG mode detects event related potentials (ERPs) like P300. While both eye movements and ERPs have been separately used for implementing assistive interfaces, which help patients with motor disabilities in performing daily tasks, the proposed hybrid interface integrates them together. In this way, both the eye movements and ERPs complement each other. Therefore, it can provide a better efficiency and a wider scope of application. In this study, we design a threshold algorithm that can recognize four kinds of eye movements including blink, wink, gaze, and frown. In addition, an oddball paradigm with stimuli of inverted faces is used to evoke multiple ERP components including P300, N170, and VPP. To verify the effectiveness of the proposed system, two different online experiments are carried out. One is to control a multifunctional humanoid robot, and the other is to control four mobile robots. In both experiments, the subjects can complete tasks effectively by using the proposed interface, whereas the best completion time is relatively short and very close to the one operated by hand.
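The four-class threshold idea described in this abstract can be sketched as follows; the channel layout (one vertical EOG amplitude per eye) and the threshold value are illustrative assumptions, not the authors' actual algorithm.

```python
# Sketch of a threshold-based classifier for the four EOG events named
# above (blink, wink, gaze, frown). Channel layout (one vertical EOG
# peak amplitude per eye, in microvolts) and the threshold are
# illustrative assumptions, not the algorithm from the paper.

def classify_eog(v_left: float, v_right: float, threshold: float = 200.0) -> str:
    """Classify one EOG epoch from its peak vertical amplitudes."""
    left_hit = abs(v_left) >= threshold
    right_hit = abs(v_right) >= threshold
    if left_hit and right_hit:
        # Large symmetric deflection on both eyes: upward taken as blink,
        # downward as frown (sign convention assumed here).
        return "blink" if v_left > 0 and v_right > 0 else "frown"
    if left_hit or right_hit:
        return "wink"  # large deflection on one eye only
    return "gaze"      # no large deflection: steady gaze
```

In practice such thresholds would be calibrated per user before the interface is handed over to the ERP (P300) mode.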

  6. The Use of Gaze to Control Drones

    DEFF Research Database (Denmark)

    Hansen, John Paulin; Alapetite, Alexandre; MacKenzie, I. Scott

    2014-01-01

    This paper presents an experimental investigation of gaze-based control modes for unmanned aerial vehicles (UAVs or “drones”). Ten participants performed a simple flying task. We gathered empirical measures, including task completion time, and examined the user experience for difficulty, reliabil… was considered significantly more reliable than the others. We discuss design and performance issues for the gaze-plus-manual split of controls when drones are operated using gaze in conjunction with tablets, near-eye displays (glasses), or monitors.

  7. High-Speed Noninvasive Eye-Tracking System

    Science.gov (United States)

    Talukder, Ashit; LaBaw, Clayton; Michael-Morookian, John; Monacos, Steve; Serviss, Orin

    2007-01-01

    The figure schematically depicts a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Relative to the prior commercial systems, the present system operates at much higher speed and thereby offers enhanced capability for applications that involve human-computer interactions, including typing and computer command and control by handicapped individuals, and eye-based diagnosis of physiological disorders that affect gaze responses.
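Step (4) above, deriving gaze from the pupil and corneal-glint centroids, is commonly implemented as a pupil-minus-glint vector mapped through a per-user calibration. A minimal sketch; the affine coefficients below are placeholders, not values from this system.

```python
# Sketch of the pupil-minus-glint principle from step (4): gaze is inferred
# from the vector between the pupil centroid and the corneal-reflection
# centroid, mapped to screen coordinates through a calibration. The affine
# coefficients are placeholders a real system would fit per user.

def pupil_glint_vector(pupil, glint):
    """Difference vector (pixels) between pupil and glint centroids."""
    return (pupil[0] - glint[0], pupil[1] - glint[1])

def to_screen(vec, ax=(40.0, 0.0, 960.0), ay=(0.0, 40.0, 540.0)):
    """Affine calibration: x = ax[0]*dx + ax[1]*dy + ax[2], same for y."""
    dx, dy = vec
    return (ax[0] * dx + ax[1] * dy + ax[2],
            ay[0] * dx + ay[1] * dy + ay[2])
```

With the placeholder coefficients, a zero pupil-glint offset maps to the screen center (960, 540) and each pixel of offset moves the estimate 40 screen pixels.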

  8. Anatomy of the lamina cribrosa in human eyes.

    Science.gov (United States)

    Radius, R L; Gonzales, M

    1981-12-01

    Light microscopy of specimens of human eyes cut in cross section at the level of the lamina cribrosa showed variation in structural anatomy, as demonstrated previously in certain primate eyes. Connective tissue and glial cell structural elements were greater in nasal-temporal as compared with inferior and superior quadrants of the disc. This regional variation suggests a hypothesis for the specificity of early patterns of optic nerve dysfunction characteristic of glaucomatous optic neuropathy. In glaucomatous eyes, nerve head regions with relatively less structural tissue elements may yield early to detrimental effects of persistent pressure elevation.

  9. A comparison of geometric- and regression-based mobile gaze-tracking

    Science.gov (United States)

    Browatzki, Björn; Bülthoff, Heinrich H.; Chuang, Lewis L.

    2014-01-01

    Video-based gaze-tracking systems are typically restricted in terms of their effective tracking space. This constraint limits the use of eyetrackers in studying mobile human behavior. Here, we compare two possible approaches for estimating the gaze of participants who are free to walk in a large space whilst looking at different regions of a large display. Geometrically, we linearly combined eye-in-head rotations and head-in-world coordinates to derive a gaze vector and its intersection with a planar display, by relying on the use of a head-mounted eyetracker and body-motion tracker. Alternatively, we employed Gaussian process regression to estimate the gaze intersection directly from the input data itself. Our evaluation of both methods indicates that a regression approach can deliver comparable results to a geometric approach. The regression approach is favored, given that it has the potential for further optimization, provides confidence bounds for its gaze estimates and offers greater flexibility in its implementation. Open-source software for the methods reported here is also provided for user implementation. PMID:24782737
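The geometric approach described here, intersecting a world-space gaze ray (head position plus eye-in-head direction) with a planar display, reduces to a standard ray-plane intersection. A minimal sketch; any pose or plane values used with it are illustrative, not from the study.

```python
import numpy as np

# The geometric method above reduces to intersecting a world-space gaze
# ray (head-in-world position plus eye-in-head direction rotated into
# world coordinates) with the display plane.

def intersect_plane(origin, direction, plane_point, plane_normal):
    """Point where the ray origin + t*direction (t >= 0) meets the plane,
    or None if the ray is parallel to, or points away from, the plane."""
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    n = np.asarray(plane_normal, float)
    denom = direction.dot(n)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the display plane
    t = (np.asarray(plane_point, float) - origin).dot(n) / denom
    if t < 0:
        return None  # display is behind the viewer
    return origin + t * direction
```

For example, an eye at (0, 1.7, 2) m looking slightly downward toward a display in the plane z = 0 yields a gaze point at (0, 1.5, 0).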

  11. A non-verbal turing test: Differentiating mind from machine in gaze-based social interaction

    OpenAIRE

    Ulrich J Pfeiffer; Bert Timmermans; Gary Bente; Kai Vogeley; Leonhard Schilbach

    2011-01-01

    In social interaction, gaze behavior provides important signals that have a significant impact on our perception of others. Previous investigations, however, have relied on paradigms in which participants are passive observers of other persons' gazes and do not adjust their gaze behavior as is the case in real-life social encounters. We used an interactive eye-tracking paradigm that allows participants to interact with an anthropomorphic virtual character whose gaze behavior is responsive to ...

  12. Eye Movements Affect Postural Control in Young and Older Females.

    Science.gov (United States)

    Thomas, Neil M; Bampouras, Theodoros M; Donovan, Tim; Dewhurst, Susan

    2016-01-01

    Visual information is used for postural stabilization in humans. However, little is known about how eye movements prevalent in everyday life interact with the postural control system in older individuals. Therefore, the present study assessed the effects of stationary gaze fixations, smooth pursuits, and saccadic eye movements, with combinations of absent, fixed and oscillating large-field visual backgrounds to generate different forms of retinal flow, on postural control in healthy young and older females. Participants were presented with computer generated visual stimuli, whilst postural sway and gaze fixations were simultaneously assessed with a force platform and eye tracking equipment, respectively. The results showed that fixed backgrounds and stationary gaze fixations attenuated postural sway. In contrast, oscillating backgrounds and smooth pursuits increased postural sway. There were no differences regarding saccades. There were also no differences in postural sway or gaze errors between age groups in any visual condition. The stabilizing effect of the fixed visual stimuli show how retinal flow and extraocular factors guide postural adjustments. The destabilizing effect of oscillating visual backgrounds and smooth pursuits may be related to more challenging conditions for determining body shifts from retinal flow, and more complex extraocular signals, respectively. Because the older participants matched the young group's performance in all conditions, decreases of posture and gaze control during stance may not be a direct consequence of healthy aging. Further research examining extraocular and retinal mechanisms of balance control and the effects of eye movements, during locomotion, is needed to better inform fall prevention interventions.

  13. Gaze interaction with textual user interface

    DEFF Research Database (Denmark)

    Paulin Hansen, John; Lund, Haakon; Madsen, Janus Askø

    2015-01-01

    … option for text navigation. People readily understood how to execute RSVP command prompts and a majority of them preferred gaze input to a pen pointer. We present the concept of a smartwatch that can track eye movements and mediate command options whenever in proximity of intelligent devices...

  14. Amygdala activation for eye contact despite complete cortical blindness.

    Science.gov (United States)

    Burra, Nicolas; Hervais-Adelman, Alexis; Kerzel, Dirk; Tamietto, Marco; de Gelder, Beatrice; Pegna, Alan J

    2013-06-19

    Cortical blindness refers to the loss of vision that occurs after destruction of the primary visual cortex. Although there is no sensory cortex and hence no conscious vision, some cortically blind patients show amygdala activation in response to facial or bodily expressions of emotion. Here we investigated whether direction of gaze could also be processed in the absence of any functional visual cortex. A well-known patient with bilateral destruction of his visual cortex and subsequent cortical blindness was investigated in an fMRI paradigm during which blocks of faces were presented either with their gaze directed toward or away from the viewer. Increased right amygdala activation was found in response to directed compared with averted gaze. Activity in this region was further found to be functionally connected to a larger network associated with face and gaze processing. The present study demonstrates that, in human subjects, the amygdala response to eye contact does not require an intact primary visual cortex.

  15. Face and eye scanning in gorillas (Gorilla gorilla), orangutans (Pongo abelii), and humans (Homo sapiens): unique eye-viewing patterns in humans among hominids.

    Science.gov (United States)

    Kano, Fumihiro; Call, Josep; Tomonaga, Masaki

    2012-11-01

    Because the faces and eyes of primates convey a rich array of social information, the way in which primates view faces and eyes reflects species-specific strategies for facial communication. How are humans and closely related species such as great apes similar and different in their viewing patterns for faces and eyes? Following previous studies comparing chimpanzees (Pan troglodytes) with humans (Homo sapiens), this study used the eye-tracking method to directly compare the patterns of face and eye scanning by humans, gorillas (Gorilla gorilla), and orangutans (Pongo abelii). Human and ape participants freely viewed pictures of whole bodies and full faces of conspecifics and allospecifics under the same experimental conditions. All species were strikingly similar in that they viewed predominantly faces and eyes. No particular difference was identified between gorillas and orangutans, and they also did not differ from the chimpanzees tested in previous studies. However, humans were somewhat different from apes, especially with respect to prolonged eye viewing. We also examined how species-specific facial morphologies, such as the male flange of orangutans and the black-white contrast of human eyes, affected viewing patterns. Whereas the male flange of orangutans affected viewing patterns, the color contrast of human eyes did not. Humans showed prolonged eye viewing independently of the eye color of presented faces, indicating that this pattern is internally driven rather than stimulus dependent. Overall, the results show general similarities among the species and also identify unique eye-viewing patterns in humans.

  16. Investigation of ultrasound axially traversing the human eye.

    Science.gov (United States)

    Chivers, R C; Round, W H; Zieniuk, J K

    1984-01-01

    A ray tracing model for ultrasonic propagation through the human eye, including the lens, has been developed on the assumptions of lossless media and non-reflecting interfaces. Measurement of the distribution of an ultrasonic beam before and after traversing specimens of human eyes in vitro, and of the velocity of ultrasound in the various dissected media, has permitted some comparison of the predictions of the model with experiment. The agreement is good although there are significant limitations involved and these are discussed. For imaging systems the effect of the eye arises largely from the lens which acts as a defocussing lens of focal length approx. 13.5 cm. Although the experiments were performed at approx. 4 MHz, the validity of the ray tracing model is largely frequency independent and will be appropriate at the higher frequencies commonly used in ophthalmology.

  17. Using the human eye to characterize displays

    Science.gov (United States)

    Gille, Jennifer; Larimer, James O.

    2001-06-01

    Monitor characterization has taken on new importance for non-professional users, who are not usually equipped to make photometric measurements. Our purpose was to examine some of the visual judgements used in characterization schemes that have been proposed for web users. We studied adjusting brightness to set the black level, banding effects due to digitization, gamma estimation in the light and in the dark, and a color-matching task in the light, on a desktop CRT and a laptop LCD. Observers demonstrated the sensitivity of the visual system for comparative judgements in black-level adjustment, banding visibility, and gamma estimation. The results of the color-matching task were ambiguous. In the brightness adjustment task, the action of the adjustment was not as presumed; however, perceptual judgements were as expected under the actual conditions. When the gamma estimates of observers were compared to photometric measurements, problems with the definition of gamma were identified. Information about absolute light levels that would be important for characterizing a display, given the shortcomings of gamma in measuring apparent contrast, is not measurable by eye alone. The LCD was not studied as extensively as the CRT because of viewing-angle problems, and its transfer function did not follow a power law, rendering gamma estimation meaningless.

  18. Human volunteer study with PGME: Eye irritation during vapour exposure

    NARCIS (Netherlands)

    Emmen, H.H.; Muijser, H.; Arts, J.H.E.; Prinsen, M.K.

    2003-01-01

    The objective of this study was to establish the possible occurrence of eye irritation and subjective symptoms in human volunteers exposed to propylene glycol monomethyl ether (PGME) vapour at concentrations of 0, 100 and 150 ppm. Testing was conducted in 12 healthy male volunteers using a repeated

  20. Evaluation of a low-cost open-source gaze tracker

    DEFF Research Database (Denmark)

    San Agustin, Javier; Jensen, Henrik Tomra Skovsgaard Hegner; Møllenbach, Emilie;

    2010-01-01

    This paper presents a low-cost gaze tracking system that is based on a webcam mounted close to the user's eye. The performance of the gaze tracker was evaluated in an eye-typing task using two different typing applications. Participants could type between 3.56 and 6.78 words per minute, depending...

  1. Gaze-based interaction with public displays using off-the-shelf components

    DEFF Research Database (Denmark)

    San Agustin, Javier; Hansen, John Paulin; Tall, Martin Henrik

    Eye gaze can be used to interact with high-density information presented on large displays. We have built a system employing off-the-shelf hardware components and open-source gaze tracking software that enables users to interact with an interface displayed on a 55” screen using their eye movement...

  2. Coding gaze tracking data with chromatic gradients for VR Exposure Therapy

    DEFF Research Database (Denmark)

    Herbelin, Bruno; Grillon, Helena; De Heras Ciechomski, Pablo

    2007-01-01

    This article presents a simple and intuitive way to represent the eye-tracking data gathered during immersive virtual reality exposure therapy sessions. Eye-tracking technology is used to observe gaze movements during virtual reality sessions and the gaze-map chromatic gradient coding allows...

  3. Children's Knowledge of Deceptive Gaze Cues and Its Relation to Their Actual Lying Behavior

    Science.gov (United States)

    McCarthy, Anjanie; Lee, Kang

    2009-01-01

    Eye gaze plays a pivotal role during communication. When interacting deceptively, it is commonly believed that the deceiver will break eye contact and look downward. We examined whether children's gaze behavior when lying is consistent with this belief. In our study, 7- to 15-year-olds and adults answered questions truthfully ("Truth" questions)…

  4. Human rights: eye for cultural diversity

    NARCIS (Netherlands)

    Y.M. Donders

    2012-01-01

    The relationship and interaction between international human rights law and cultural diversity is a current topic, as is shown by the recent debates in The Netherlands on, for instance, the proposed ban on wearing facial coverage, or burqas, and the proposed ban on ritual slaughter without anaesthes

  5. Prediction of human eye fixations using symmetry

    NARCIS (Netherlands)

    Kootstra, Gert; Schomaker, Lambert

    2009-01-01

    Humans are very sensitive to symmetry in visual patterns. Reaction time experiments show that symmetry is detected and recognized very rapidly. This suggests that symmetry is a highly salient feature. Existing computational models of saliency, however, have mainly focused on contrast as a measure of

  6. The effect of gaze direction on three-dimensional face recognition in infants.

    Science.gov (United States)

    Yamashita, Wakayo; Kanazawa, So; Yamaguchi, Masami K

    2012-09-01

    Eye gaze is an important tool for social contact. In this study, we investigated whether direct gaze facilitates the recognition of three-dimensional face images in infants. We presented artificially produced face images in rotation to 6-8 month-old infants. The eye gaze of the face images was either direct or averted. Sixty-one sequential images of each face were created by rotating the vertical axis of the face from frontal view to ± 30°. The recognition performances of the infants were then compared between faces with direct gaze and faces with averted gaze. Infants showed evidence that they were able to discriminate the novel from familiarized face by 8 months of age and only when gaze is direct. These results suggest that gaze direction may affect three-dimensional face recognition in infants.

  7. Physiology and pathology of eye-head coordination.

    Science.gov (United States)

    Proudlock, Frank Antony; Gottlob, Irene

    2007-09-01

    Human head movement control can be considered as part of the oculomotor system since the control of gaze involves coordination of the eyes and head. Humans show a remarkable degree of flexibility in eye-head coordination strategies; nonetheless, an individual will often demonstrate stereotypical patterns of eye-head behaviour for a given visual task. This review examines eye-head coordination in laboratory-based visual tasks, such as saccadic gaze shifts and combined eye-head pursuit, and in common tasks in daily life, such as reading. The effect of the aging process on eye-head coordination is then reviewed from infancy through to senescence. Consideration is also given to how pathology can affect eye-head coordination from the lowest through to the highest levels of oculomotor control, comparing conditions as diverse as eye movement restrictions and schizophrenia. Given the adaptability of the eye-head system we postulate that this flexible system is under the control of the frontal cortical regions, which assist in planning, coordinating and executing behaviour. We provide evidence for this based on changes in eye-head coordination dependent on the context and expectation of presented visual stimuli, as well as from changes in eye-head coordination caused by frontal lobe dysfunction.

  8. Gaze beats mouse

    DEFF Research Database (Denmark)

    Mateo, Julio C.; San Agustin, Javier; Hansen, John Paulin

    2008-01-01

    Facial EMG for selection is fast, easy and, combined with gaze pointing, it can provide completely hands-free interaction. In this pilot study, 5 participants performed a simple point-and-select task using mouse or gaze for pointing and a mouse button or a facial-EMG switch for selection. Gaze pointing was faster than mouse pointing, while maintaining a similar error rate. EMG and mouse-button selection had a comparable performance. From analyses of completion time, throughput and error rates, we concluded that the combination of gaze and facial EMG holds potential for outperforming the mouse.

  9. Gaze Tracking Through Smartphones

    DEFF Research Database (Denmark)

    Skovsgaard, Henrik; Hansen, John Paulin; Møllenbach, Emilie

    Mobile gaze trackers embedded in smartphones or tablets provide a powerful personal link to game devices, head-mounted micro-displays, PCs, and TVs. This link may offer a main road to the mass market for gaze interaction, we suggest.

  10. Human eye visual hyperacuity: Controlled diffraction for image resolution improvement

    Science.gov (United States)

    Lagunas, A.; Domínguez, O.; Martinez-Conde, S.; Macknik, S. L.; Del-Río, C.

    2017-09-01

    The Human Visual System appears to be using a low number of sensors for image capturing, and furthermore, regarding the physical dimensions of cones—photoreceptors responsible for the sharp central vision—we may realize that these sensors are of a relatively small size and area. Nonetheless, the human eye is capable of resolving fine details thanks to visual hyperacuity and presents an impressive sensitivity and dynamic range when set against conventional digital cameras of similar characteristics. This article is based on the hypothesis that the human eye may be benefiting from diffraction to improve both image resolution and acquisition process. The developed method involves the introduction of a controlled diffraction pattern at an initial stage that enables the use of a limited number of sensors for capturing the image and makes possible a subsequent post-processing to improve the final image resolution.
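The hypothesis of a known, controlled diffraction pattern followed by post-processing can be illustrated in one dimension with classic Wiener deconvolution: blur a signal with a known kernel, then invert the blur in the Fourier domain. This is a generic signal-processing sketch, not the authors' method; the kernel and noise level are assumptions.

```python
import numpy as np

# 1-D illustration of "controlled diffraction + post-processing": the
# optics apply a known blur (circular convolution with a kernel), and a
# Wiener filter inverts it in the Fourier domain to restore resolution.

def wiener_deconvolve(blurred, kernel, noise_power=1e-3):
    """Recover a signal circularly convolved with a known kernel."""
    n = len(blurred)
    H = np.fft.fft(kernel, n)            # transfer function of the blur
    G = np.fft.fft(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)  # Wiener inverse filter
    return np.real(np.fft.ifft(W * G))
```

The `noise_power` term regularizes the inversion at frequencies where the blur nearly cancels the signal, which is what makes the post-processing stable with a small number of noisy sensors.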

  11. Noise Challenges in Monomodal Gaze Interaction

    DEFF Research Database (Denmark)

    Skovsgaard, Henrik

    Modern graphical user interfaces (GUIs) are designed with able-bodied users in mind. Operating these interfaces can be impossible for some users who are unable to control the conventional mouse and keyboard. An eye tracking system offers possibilities for independent use and improved quality of life via dedicated interface tools especially tailored to the users’ needs (e.g., interaction, communication, e-mailing, web browsing and entertainment). Much effort has been put towards robustness, accuracy and precision of modern eye-tracking systems and there are many available on the market. Even… stream are most wanted. The work in this thesis presents three contributions that may advance the use of low-cost monomodal gaze tracking and research in the field: - An assessment of a low-cost open-source gaze tracker and two eye tracking systems through an accuracy and precision test and a performance...

  12. Multiphoton tomography of the human eye

    Science.gov (United States)

    König, Karsten; Batista, Ana; Hager, Tobias; Seitz, Berthold

    2017-02-01

    Multiphoton tomography (MPT) is a novel label-free clinical imaging method for non-invasive tissue imaging with high spatial (300 nm) and temporal (100 ps) resolutions. In vivo optical histology can be realized due to the nonlinear excitation of endogenous fluorophores and second-harmonic generation (SHG) of collagen. Furthermore, optical metabolic imaging (OMI) is performed by two-photon autofluorescence lifetime imaging (FLIM). So far, applications of the multiphoton tomographs DermaInspect and MPTflex were limited to dermatology. Novel applications include intraoperative brain tumor imaging as well as cornea imaging. In this work we describe two-photon imaging of ex vivo human corneas unsuitable for transplantation. Furthermore, the cross-linking (CXL) process of corneal collagen based on UVA exposure and 0.1 % riboflavin was studied. The pharmacokinetics of the photosensitizer could be detected with high spatial resolution. Interestingly, an increase in the stromal autofluorescence intensity and modifications of the autofluorescence lifetimes were observed in the human corneal samples within a few days following CXL.

  13. Head movements evoked in alert rhesus monkey by vestibular prosthesis stimulation: implications for postural and gaze stabilization.

    Directory of Open Access Journals (Sweden)

    Diana E Mitchell

    The vestibular system detects motion of the head in space and in turn generates reflexes that are vital for our daily activities. The eye movements produced by the vestibulo-ocular reflex (VOR) play an essential role in stabilizing the visual axis (gaze), while vestibulo-spinal reflexes ensure the maintenance of head and body posture. The neuronal pathways from the vestibular periphery to the cervical spinal cord potentially serve a dual role, since they function to stabilize the head relative to inertial space and could thus contribute to gaze (eye-in-head + head-in-space) and posture stabilization. To date, however, the functional significance of vestibular-neck pathways in alert primates remains a matter of debate. Here we used a vestibular prosthesis to (1) quantify vestibularly-driven head movements in primates, and (2) assess whether these evoked head movements make a significant contribution to gaze as well as postural stabilization. We stimulated electrodes implanted in the horizontal semicircular canal of alert rhesus monkeys, and measured the head and eye movements evoked during a 100 ms time period for which the contribution of longer latency voluntary inputs to the neck would be minimal. Our results show that prosthetic stimulation evoked significant head movements with latencies consistent with known vestibulo-spinal pathways. Furthermore, while the evoked head movements were substantially smaller than the coincidently evoked eye movements, they made a significant contribution to gaze stabilization, complementing the VOR to ensure that the appropriate gaze response is achieved. We speculate that analogous compensatory head movements will be evoked when implanted prosthetic devices are transitioned to human patients.

  14. Age- and fatigue-related markers of human faces: an eye-tracking study.

    Science.gov (United States)

    Nguyen, Huy Tu; Isaacowitz, Derek M; Rubin, Peter A D

    2009-02-01

    To investigate the facial cues that are used when making judgments about how old or tired a face appears. Experimental study. Forty-seven subjects: 15 male and 32 female participants, ranging from age 18 to 30 years. Forty-eight full-face digital images of "normal-appearing" patients were collected and uploaded to an eye-tracking system. We used an Applied Science Laboratories (Bedford, MA) Eye Tracker device associated with gaze-tracking software to record and calculate the gaze and fixation of the participants' left eye as they viewed images on a computer screen. After seeing each picture, participants were asked to assess the age of the face in the picture by making a selection on a rating scale divided into 5-year intervals; for fatigue judgments we used a rating scale from 1 (not tired) to 7 (most tired). The main outcome measure was gaze fixation, as assessed by tracking the eye movements of participants as they viewed full-face digital pictures. For fatigue judgments, participants spent the most time looking at the eye region (31.81%), then the forehead and the nose regions (14.99% and 14.12%, respectively); in the eye region, participants looked most at the brows (13.1%) and lower lids (9.4%). Participants spent more time looking at the cheeks on faces they rated as least tired than they did on those they rated as most tired (t = 2.079, P < …). For age judgments, the eye region (27.22%) and then the forehead (15.71%) and the nose (14.30%) had the highest frequencies of interest; in the eye region, the brows and lower lids also had the highest frequencies of interest (11.40% and 8.90%, respectively). Participants looked more at the brows (t = -2.63, P < …) in the eye region. Consequently, these results suggest that aesthetic or functional surgery to the eye region may be one of the most effective interventions in enhancing the appearance of an individual. The author(s) have no proprietary or commercial interest in any materials discussed in this article.

  15. [Eye contact effects: A therapeutic issue?].

    Science.gov (United States)

    Baltazar, M; Conty, L

    2016-12-01

    The perception of a direct gaze - that is, of another individual's gaze directed at the observer that leads to eye contact - is known to influence a wide range of cognitive processes and behaviors. We stress that these effects mainly reflect positive impacts on human cognition and may thus be used as relevant tools for therapeutic purposes. In this review, we aim (1) to provide an exhaustive review of eye contact effects while discussing the limits of the dominant models used to explain these effects, (2) to illustrate the therapeutic potential of eye contact by targeting those pathologies that show both preserved gaze processing and deficits in one or several functions that are targeted by the eye contact effects, and (3) to propose concrete ways in which eye contact could be employed as a therapeutic tool. (1) We regroup the variety of eye contact effects into four categories, including memory effects, activation of prosocial behavior, positive appraisals of self and others and the enhancement of self-awareness. We emphasize that the models proposed to account for these effects have a poor predictive value and that further descriptions of these effects is needed. (2) We then emphasize that people with pathologies that affect memory, social behavior, and self and/or other appraisal, and self-awareness could benefit from eye contact effects. We focus on depression, autism and Alzheimer's disease to illustrate our proposal. To our knowledge, no anomaly of eye contact has been reported in depression. Patients suffering from Alzheimer disease, at the early and moderate stage, have been shown to maintain a normal amount of eye contact with their interlocutor. We take into account that autism is controversial regarding whether gaze processing is preserved or altered. In the first view, individuals are thought to elude or omit gazing at another's eyes while in the second, individuals are considered to not be able to process the gaze of others. 
We adopt the first stance

  16. Fractal fluctuations in gaze speed visual search.

    Science.gov (United States)

    Stephen, Damian G; Anastas, Jason

    2011-04-01

    Visual search involves a subtle coordination of visual memory and lower-order perceptual mechanisms. Specifically, the fluctuations in gaze may provide support for visual search above and beyond what may be attributed to memory. Prior research indicates that gaze during search exhibits fractal fluctuations, which allow for a wide sampling of the field of view. Fractal fluctuations constitute a case of fast diffusion that may provide an advantage in exploration. We present reanalyses of eye-tracking data collected by Stephen and Mirman (Cognition, 115, 154-165, 2010) for single-feature and conjunction search tasks. Fluctuations in gaze during these search tasks were indeed fractal. Furthermore, the degree of fractality predicted decreases in reaction time on a trial-by-trial basis. We propose that fractality may play a key role in explaining the efficacy of perceptual exploration.
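
    The abstract does not spell out how the degree of fractality was quantified on each trial. A common estimator for fluctuation series of this kind is detrended fluctuation analysis (DFA), whose scaling exponent indexes fractality (alpha near 0.5 for white noise, near 1.0 for fractal 1/f fluctuations). The sketch below is an illustrative DFA implementation, not the authors' analysis pipeline; window sizes and the synthetic test signal are assumptions.

```python
import numpy as np

def dfa_exponent(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis: slope of log F(n) versus log n.

    alpha ~ 0.5 for white noise, ~ 1.0 for fractal (1/f) fluctuations,
    ~ 1.5 for Brownian motion.
    """
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    fluctuations = []
    for n in scales:
        rms = []
        for i in range(len(y) // n):              # non-overlapping windows
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return slope

# Illustrative check on synthetic data: white noise should give alpha near 0.5.
rng = np.random.default_rng(0)
alpha = dfa_exponent(rng.standard_normal(4096))
```

    Applied to a per-trial gaze-speed series, a larger exponent would indicate stronger fractal (persistent) fluctuations, the quantity the reanalysis relates to reaction time.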

  17. An eye for the I: Preferential attention to the eyes of ingroup members.

    Science.gov (United States)

    Kawakami, Kerry; Williams, Amanda; Sidhu, David; Choma, Becky L; Rodriguez-Bailón, Rosa; Cañadas, Elena; Chung, Derek; Hugenberg, Kurt

    2014-07-01

    Human faces, and more specifically the eyes, play a crucial role in social and nonverbal communication because they signal valuable information about others. It is therefore surprising that few studies have investigated the impact of intergroup contexts and motivations on attention to the eyes of ingroup and outgroup members. Four experiments investigated differences in eye gaze to racial and novel ingroups using eye tracker technology. Whereas Studies 1 and 3 demonstrated that White participants attended more to the eyes of White compared to Black targets, Study 2 showed a similar pattern of attention to the eyes of novel ingroup and outgroup faces. Studies 3 and 4 also provided new evidence that eye gaze is flexible and can be meaningfully influenced by current motivations. Specifically, instructions to individuate specific social categories increased attention to the eyes of target group members. Furthermore, the latter experiments demonstrated that preferential attention to the eyes of ingroup members predicted important intergroup biases such as recognition of ingroup over outgroup faces (i.e., the own-race bias; Study 3) and willingness to interact with outgroup members (Study 4). The implications of these findings for general theorizing on face perception, individuation processes, and intergroup relations are discussed.

  18. Eye-Hand-Mouth Coordination in the Human Newborn.

    Science.gov (United States)

    Futagi, Yasuyuki

    2017-10-01

    There have been several studies concerning rudimentary coordination of the eyes, hands, and mouth in the human newborn. The author attempted to clarify the ontogenetic significance of the coordination during the earliest period of human life through a systematic review. The neural mechanism underlying the coordination was also discussed based on the current knowledge of cognitive neuroscience. Searches were conducted on PubMed and Google Scholar from their inception through March 2017. Studies have demonstrated that the coordination is a visually guided goal-directed motor behavior with intention and emotion. Current cognitive research has proved that feeding requires a large-scale neural network extending over several cortices. The eye-hand-mouth coordination in the newborn can be regarded as a precursor of subsequent self-feeding, and the coordination is very likely mediated through the underdeveloped but essentially the same network interconnecting cortices as in the adult. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Gaze interaction from bed

    DEFF Research Database (Denmark)

    Hansen, John Paulin; San Agustin, Javier; Jensen, Henrik Tomra Skovsgaard Hegner

    2011-01-01

    This paper presents a low-cost gaze tracking solution for bedbound people, composed of freeware tracking software and commodity hardware. Gaze interaction is done on a large wall-projected image, visible to all people present in the room. The hardware equipment leaves physical space free to assist...

  20. Gazes and Performances

    DEFF Research Database (Denmark)

    Larsen, Jonas

    Abstract: Recent literature has critiqued the notion of the 'tourist gaze' for reducing tourism to visual experiences ('sightseeing') and neglecting other senses and bodily experiences of doing tourism. A so-called 'performance turn' within tourist studies highlights how tourists experience places...... revised and expanded Tourist Gaze 3.0 that I am writing with John Urry at the moment....

  1. Elastic hysteresis in human eyes is an age-dependent value.

    Science.gov (United States)

    Ishii, Kotaro; Saito, Kei; Kameda, Toshihiro; Oshika, Tetsuro

    2012-06-19

    Background: The elastic hysteresis phenomenon is observed when cyclic loading is applied to a viscoelastic system. The purpose of this study was to quantitatively evaluate elastic hysteresis in living human eyes against an external force. Design: Prospective case series. Participants: Twenty-four eyes of 24 normal human subjects (mean age: 41.5 ± 10.6 years) were recruited. Methods: A non-contact tonometry process was recorded with a high-speed camera. Central corneal thickness (CCT), corneal thickness at 4 mm from the center, corneal curvature, and anterior chamber depth (ACD) were measured. Intraocular pressure (IOP) was also measured using Goldmann applanation tonometry (GAT) and dynamic contour tonometer (DCT). Main Outcome Measures: Energy loss due to elastic hysteresis was calculated and graphed. Results: The mean CCT was 552.5 ± 36.1 µm, corneal curvature was 7.84 ± 0.26 mm, and ACD was 2.83 ± 0.29 mm. The mean GAT-IOP was 14.2 ± 2.7 mmHg and DCT-IOP was 16.3 ± 3.5 mmHg. The mean energy loss due to elastic hysteresis was 3.90 × 10⁻⁶ ± 2.49 × 10⁻⁶ N·m. Energy loss due to elastic hysteresis correlated significantly with age (Pearson correlation coefficient = 0.596, p = 0.0016). There were no significant correlations between energy loss due to elastic hysteresis and other measurements. Conclusion: Energy loss due to elastic hysteresis in the eyes of subjects was found to positively correlate with age, independent of anterior eye structure or IOP. Therefore, it is believed that the viscosity of the eye increases with age. © 2010 The Authors. Clinical and Experimental Ophthalmology © 2010 Royal Australian and New Zealand College of Ophthalmologists.
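
    For readers unfamiliar with the outcome measure: the energy lost to elastic hysteresis over one loading-unloading cycle equals the area enclosed by the force-displacement loop. A minimal numeric sketch of that computation follows; the rectangular test loop is illustrative, not corneal data.

```python
import numpy as np

def hysteresis_loss(force, disp):
    """Energy dissipated per cycle: the enclosed area of the closed
    force-displacement loop (shoelace formula). With force in newtons
    and displacement in metres, the result is in N*m (joules)."""
    f = np.asarray(force, dtype=float)
    d = np.asarray(disp, dtype=float)
    # Signed polygon area of the cyclic sequence of (disp, force) points.
    return 0.5 * abs(np.sum(d * np.roll(f, -1) - f * np.roll(d, -1)))
```

    For a purely elastic (lossless) material the loading and unloading paths coincide, so the enclosed area, and hence the energy loss, is zero.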

  2. Influence of gaze and directness of approach on the escape responses of the Indian rock lizard, Psammophilus dorsalis (Gray, 1831)

    Indian Academy of Sciences (India)

    Rachakonda Sreekar; Suhel Quader

    2013-12-01

    Animals often evaluate the degree of risk posed by a predator and respond accordingly. Since many predators orient their eyes towards prey while attacking, predator gaze and directness of approach could serve as conspicuous indicators of risk to prey. The ability to perceive these cues and discriminate between high and low predation risk should benefit prey species through both higher survival and decreased energy expenditure. We experimentally examined whether Indian rock lizards (Psammophilus dorsalis) can perceive these two indicators of predation risk by measuring the variation in their fleeing behaviour in response to type of gaze and approach by a human predator. Overall, we found that the gaze and approach of the predator influenced flight initiation distance, which also varied with attributes of the prey (i.e. size/sex and tail-raise behaviour). Flight initiation distance (FID) was 43% longer during direct approaches with direct gaze compared with tangential approaches with averted gaze. In further exploratory analyses, we found that FID was 23% shorter for adult male lizards than for female or young male (FYM) lizards. In addition, FYM lizards that showed a tail-raise display during approach had a 71% longer FID than those that did not. Our results suggest that multiple factors influence the decision to flee in animals. Further studies are needed to test the generality of these factors and to investigate the proximate mechanisms underlying flight decisions.

  3. 3D Gaze Estimation from Remote RGB-D Sensors

    OpenAIRE

    Funes Mora, Kenneth Alberto

    2015-01-01

    The development of systems able to retrieve and characterise the state of humans is important for many applications and fields of study. In particular, as a display of attention and interest, gaze is a fundamental cue in understanding people's activities, behaviors, intentions, state of mind and personality. Moreover, gaze plays a major role in the communication process, such as showing attention to the speaker, indicating who is addressed, or averting gaze to keep the floor. Therefore, many...

  4. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity--Evidence from Gazing Patterns.

    Directory of Open Access Journals (Sweden)

    Sanni Somppi

    Full Text Available Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore if domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly and this evaluation led to attentional bias, which was dependent on the depicted species: threatening conspecifics' faces evoked heightened attention but threatening human faces instead an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. 
The findings provide a novel

  5. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity--Evidence from Gazing Patterns.

    Science.gov (United States)

    Somppi, Sanni; Törnqvist, Heini; Kujala, Miiamaaria V; Hänninen, Laura; Krause, Christina M; Vainio, Outi

    2016-01-01

    Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore if domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly and this evaluation led to attentional bias, which was dependent on the depicted species: threatening conspecifics' faces evoked heightened attention but threatening human faces instead an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel perspective on

  6. The Relationship between Children's Gaze Reporting and Theory of Mind

    Science.gov (United States)

    D'Entremont, Barbara; Seamans, Elizabeth; Boudreau, Elyse

    2012-01-01

    Seventy-nine 3- and 4-year-old children were tested on gaze-reporting ability and Wellman and Liu's (2004) continuous measure of theory of mind (ToM). Children were better able to report where someone was looking when eye and head direction were provided as a cue compared with when only eye direction cues were provided. With the exception of…

  7. Learning to Interact with a Computer by Gaze

    Science.gov (United States)

    Aoki, Hirotaka; Hansen, John Paulin; Itoh, Kenji

    2008-01-01

    The aim of this paper is to examine the learning processes that subjects undertake when they start using gaze as computer input. A 7-day experiment with eight Japanese students was carried out to record novice users' eye movement data during typing of 110 sentences. The experiment revealed that inefficient eye movements were dramatically reduced…

  8. Affine transform to reform pixel coordinates of EOG signals for controlling robot manipulators using gaze motions.

    Science.gov (United States)

    Rusydi, Muhammad Ilhamdi; Sasaki, Minoru; Ito, Satoshi

    2014-06-10

    Biosignals will play an important role in building communication between machines and humans. One type of biosignal that is widely used in neuroscience is the electrooculography (EOG) signal. An EOG has a linear relationship with eye movement displacement. Experiments were performed to construct a gaze motion tracking method indicated by robot manipulator movements. Three operators looked at 24 target points displayed on a monitor that was 40 cm in front of them. Two channels (Ch1 and Ch2) produced EOG signals for every single eye movement. These signals were converted to pixel units by using the linear relationship between EOG signals and gaze motion distances. The conversion outcomes were actual pixel locations. An affine transform method is proposed to determine the shift of actual pixels to target pixels. This method consisted of a sequence of five geometry processes, which are translation-1, rotation, translation-2, shear and dilatation. The accuracy was approximately 0.86° ± 0.67° in the horizontal direction and 0.54° ± 0.34° in the vertical. This system successfully tracked the gaze motions not only in direction, but also in distance. Using this system, three operators could operate a robot manipulator to point at some targets. This result shows that the method is reliable in building communication between humans and machines using EOGs.
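
    The five-step geometry sequence described above can be sketched as a composition of 3×3 homogeneous transforms. The function names and parameters below are illustrative; the paper's calibrated values are not reproduced here.

```python
import numpy as np

def affine_chain(tx1, ty1, theta, tx2, ty2, shear_x, sx, sy):
    """Compose translation-1, rotation, translation-2, shear and
    dilatation into a single 3x3 homogeneous transform (illustrative
    parameter names, not the paper's calibrated values)."""
    T1 = np.array([[1.0, 0.0, tx1], [0.0, 1.0, ty1], [0.0, 0.0, 1.0]])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    T2 = np.array([[1.0, 0.0, tx2], [0.0, 1.0, ty2], [0.0, 0.0, 1.0]])
    Sh = np.array([[1.0, shear_x, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    D = np.array([[sx, 0.0, 0.0], [0.0, sy, 0.0], [0.0, 0.0, 1.0]])
    # Matrices apply right-to-left: translation-1 first, dilatation last.
    return D @ Sh @ T2 @ R @ T1

def to_target(pixel_xy, M):
    """Map an actual pixel location (from the EOG-to-pixel conversion)
    to its corrected target-pixel location."""
    x, y, w = M @ np.array([pixel_xy[0], pixel_xy[1], 1.0])
    return x / w, y / w
```

    With all parameters at their identity values the chain leaves a point unchanged; fitting the eight parameters to calibration data then absorbs the systematic shift between actual and target pixels.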

  9. Affine Transform to Reform Pixel Coordinates of EOG Signals for Controlling Robot Manipulators Using Gaze Motions

    Directory of Open Access Journals (Sweden)

    Muhammad Ilhamdi Rusydi

    2014-06-01

    Full Text Available Biosignals will play an important role in building communication between machines and humans. One type of biosignal that is widely used in neuroscience is the electrooculography (EOG) signal. An EOG has a linear relationship with eye movement displacement. Experiments were performed to construct a gaze motion tracking method indicated by robot manipulator movements. Three operators looked at 24 target points displayed on a monitor that was 40 cm in front of them. Two channels (Ch1 and Ch2) produced EOG signals for every single eye movement. These signals were converted to pixel units by using the linear relationship between EOG signals and gaze motion distances. The conversion outcomes were actual pixel locations. An affine transform method is proposed to determine the shift of actual pixels to target pixels. This method consisted of a sequence of five geometry processes, which are translation-1, rotation, translation-2, shear and dilatation. The accuracy was approximately 0.86° ± 0.67° in the horizontal direction and 0.54° ± 0.34° in the vertical. This system successfully tracked the gaze motions not only in direction, but also in distance. Using this system, three operators could operate a robot manipulator to point at some targets. This result shows that the method is reliable in building communication between humans and machines using EOGs.

  10. Stare in the crowd: frontal face guides overt attention independently of its gaze direction.

    Science.gov (United States)

    Shirama, Aya

    2012-01-01

    Whether or not a stare in the midst of many faces can guide visual attention is a controversial issue. Two experiments are reported that investigate the hypothesis that visual attention is guided toward a frontal face in the search for a stare among faces with varied head angles. The participants were required to search for a face with a direct gaze in a context where the target could be at any of various head angles and the target's head angle was unpredictable on any given trial. The search performance was better for a frontal-face target than for deviated-face targets. Furthermore, eye-movement analyses revealed that a frontal-face stimulus tended to be initially fixated prior to deviated-face stimuli, and many of the initially fixated frontal-face stimuli had an averted gaze. The findings suggest that a frontal face guides overt attention independently of its gaze direction in the search for a stare in a crowd. The validity of prioritising a frontal face in order to find a direct gaze among faces and the characteristics of a human-face detection system are discussed.

  11. Gaze interaction in UAS video exploitation

    Science.gov (United States)

    Hild, Jutta; Brüstle, Stefan; Heinze, Norbert; Peinsipp-Byma, Elisabeth

    2013-05-01

    A frequently occurring interaction task in UAS video exploitation is the marking or selection of objects of interest in the video. If an object of interest is visually detected by the image analyst, its selection/marking for further exploitation, documentation and communication with the team is a necessary task. Today object selection is usually performed by mouse interaction. Because all objects in the video move due to sensor motion, object selection can be rather challenging, especially if strong and fast ego-motions are present, e.g., with small airborne sensor platforms. In addition, objects of interest are sometimes visible too briefly to be selected by the analyst using mouse interaction. To address this issue we propose an eye tracker as an input device for object selection. As the eye tracker continuously provides the gaze position of the analyst on the monitor, it is intuitive to use the gaze position for pointing at an object. The selection is then actuated by pressing a button. We integrated this gaze-based "gaze + key press" object selection into Fraunhofer IOSB's exploitation station ABUL using a Tobii X60 eye tracker and a standard keyboard for the button press. Representing the object selections in a spatial relational database, ABUL enables the image analyst to efficiently query the video data in a post-processing step for selected objects of interest with respect to their geographical and other properties. An experimental evaluation is presented, comparing gaze-based interaction with mouse interaction in the context of object selection in UAS videos.
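
    At its simplest, the "gaze + key press" scheme reduces to selecting the tracked object nearest the gaze position at the moment the button is pressed. ABUL's actual implementation is not described in the record, so the object representation and pixel tolerance below are assumptions for illustration.

```python
import math

def select_object(gaze_xy, objects, max_dist=60.0):
    """Return the id of the object whose centre is nearest the gaze
    point at the moment of the key press, or None if every object is
    farther than max_dist (a hypothetical pixel tolerance that absorbs
    eye-tracker inaccuracy)."""
    best_id, best_d = None, max_dist
    for obj_id, (cx, cy) in objects.items():
        d = math.hypot(cx - gaze_xy[0], cy - gaze_xy[1])
        if d < best_d:
            best_id, best_d = obj_id, d
    return best_id
```

    Decoupling pointing (continuous gaze) from actuation (a discrete key press) avoids the "Midas touch" problem of selecting everything the analyst merely looks at.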

  12. Culture, gaze and the neural processing of fear expressions

    OpenAIRE

    2009-01-01

    The direction of others’ eye gaze has important influences on how we perceive their emotional expressions. Here, we examined differences in neural activation to direct- versus averted-gaze fear faces as a function of culture of the participant (Japanese versus US Caucasian), culture of the stimulus face (Japanese versus US Caucasian), and the relation between the two. We employed a previously validated paradigm to examine differences in neural activation in response to rapidly presented direc...

  13. Just one look: Direct gaze briefly disrupts visual working memory.

    Science.gov (United States)

    Wang, J Jessica; Apperly, Ian A

    2017-04-01

    Direct gaze is a salient social cue that affords rapid detection. A body of research suggests that direct gaze enhances performance on memory tasks (e.g., Hood, Macrae, Cole-Davies, & Dias, Developmental Science, 1, 67-71, 2003). Nonetheless, other studies highlight the disruptive effect direct gaze has on concurrent cognitive processes (e.g., Conty, Gimmig, Belletier, George, & Huguet, Cognition, 115(1), 133-139, 2010). This discrepancy raises questions about the effects direct gaze may have on concurrent memory tasks. We addressed this topic by employing a change detection paradigm, where participants retained information about the color of small sets of agents. Experiment 1 revealed that, despite the irrelevance of the agents' eye gaze to the memory task at hand, participants were worse at detecting changes when the agents looked directly at them compared to when the agents looked away. Experiment 2 showed that the disruptive effect was relatively short-lived. Prolonged presentation of direct gaze led to recovery from the initial disruption, rather than a sustained disruption on change detection performance. The present study provides the first evidence that direct gaze impairs visual working memory with a rapidly-developing yet short-lived effect even when there is no need to attend to agents' gaze.

  14. Reading beyond the glance: eye tracking in neurosciences.

    Science.gov (United States)

    Popa, Livia; Selejan, Ovidiu; Scott, Allan; Mureşanu, Dafin F; Balea, Maria; Rafila, Alexandru

    2015-05-01

    From an interdisciplinary approach, the neurosciences (NSs) represent the junction of many fields (biology, chemistry, medicine, computer science, and psychology) and aim to explore the structural and functional aspects of the nervous system. Among modern neurophysiological methods that "measure" different processes of the human brain in response to salient stimuli, a special place belongs to eye tracking (ET). By detecting eye position, gaze direction, sequence of eye movement and visual adaptation during cognitive activities, ET is an effective tool for experimental psychology and neurological research. It provides a quantitative and qualitative analysis of the gaze, which is very useful in understanding choice behavior and perceptual decision making. In the high tech era, ET has several applications related to the interaction between humans and computers. Herein, ET is used to evaluate the spatial orienting of attention, the performance in visual tasks, the reactions to information on websites, the customer response to advertising, and the emotional and cognitive impact of various stimuli on the brain.

  15. How Beauty Determines Gaze! Facial Attractiveness and Gaze Duration in Images of Real World Scenes

    Directory of Open Access Journals (Sweden)

    Helmut Leder

    2016-08-01

    Full Text Available We showed that the looking time spent on faces is a valid covariate of beauty by testing the relation between facial attractiveness and gaze behavior. We presented natural scenes which always pictured two people, encompassing a wide range of facial attractiveness. Employing measurements of eye movements in a free viewing paradigm, we found a linear relation between facial attractiveness and gaze behavior: The more attractive the face, the longer and the more often it was looked at. In line with evolutionary approaches, the positive relation was particularly pronounced when participants viewed other sex faces.

  16. How Beauty Determines Gaze! Facial Attractiveness and Gaze Duration in Images of Real World Scenes

    Science.gov (United States)

    Mitrovic, Aleksandra; Goller, Jürgen

    2016-01-01

    We showed that the looking time spent on faces is a valid covariate of beauty by testing the relation between facial attractiveness and gaze behavior. We presented natural scenes which always pictured two people, encompassing a wide range of facial attractiveness. Employing measurements of eye movements in a free viewing paradigm, we found a linear relation between facial attractiveness and gaze behavior: The more attractive the face, the longer and the more often it was looked at. In line with evolutionary approaches, the positive relation was particularly pronounced when participants viewed other sex faces. PMID:27698984

  17. Gazing and Performing

    DEFF Research Database (Denmark)

    Larsen, Jonas; Urry, John

    2011-01-01

    The Tourist Gaze [Urry J, 1990 (Sage, London)] is one of the most discussed and cited tourism books (with about 4000 citations on Google scholar). Whilst wide ranging in scope, the book is known for the Foucault-inspired concept of the tourist gaze that brings out the fundamentally visual and image...... that the doings of tourism are physical or corporeal and not merely visual, and it is necessary to regard ‘performing’ rather than ‘gazing’ as the dominant tourist research paradigm. Yet we argue here that there are, in fact, many similarities between the paradigms of gaze and of performance. They should ‘dance...

  18. Gaze-following behind barriers in domestic dogs.

    Science.gov (United States)

    Met, Amandine; Miklósi, Ádám; Lakatos, Gabriella

    2014-11-01

    Although gaze-following abilities have been demonstrated in a wide range of species, so far no clear evidence has been available for dogs. In the current study, we examined whether dogs follow human gaze behind an opaque barrier in two different contexts, in a foraging situation and in a non-foraging situation (food involved vs. food not involved in the situation). We assumed that dogs will spontaneously follow the human gaze and that the foraging context will have a positive effect on dogs' gaze-following behaviour by causing an expectation in the dogs that food might be hidden somewhere in the room and might be communicated by the experimenter. This expectation presumably positively affects their motivational and attentional state. Here, we report that dogs show evidence of spontaneous gaze-following behind barriers in both situations. According to our findings, the dogs gazed earlier at the barrier in the indicated direction in both contexts. However, as we expected, the context also has some effect on dogs' gaze-following behaviour, as more dogs gazed behind the barrier in the indicated direction in the foraging situation. The present results also support the idea that gaze-following is a characteristic skill in mammals which may more easily emerge in certain functional contexts.

  19. Camera Mouse Including “Ctrl-Alt-Del” Key Operation Using Gaze, Blink, and Mouth Shape

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-04-01

    Full Text Available This paper presents a camera mouse system with an additional feature: a "CTRL - ALT - DEL" key. Previous gaze-based camera mouse systems consider only how to obtain gaze and make selections. We propose a gaze-based camera mouse with a "CTRL - ALT - DEL" key. An infrared camera is placed on top of the display while the user looks ahead. The user's gaze is estimated based on eye gaze and head pose. Blinking and mouth detections are used to create the "CTRL - ALT - DEL" key. Pupil knowledge is used to improve the robustness of eye gaze estimation across different users. Also, a Gabor filter is used to extract face features. Skin color information and face features are used to estimate head pose. Experiments on each method were conducted, and the results show that all methods work perfectly. With this system, users can troubleshoot the camera mouse themselves, which makes the camera mouse more sophisticated.

  20. Tracking Eyes using Shape and Appearance

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Nielsen, Mads; Hansen, John Paulin

    2002-01-01

    We propose a non-intrusive eye tracking system intended for the use of everyday gaze typing using web cameras. We argue that high precision in gaze tracking is not needed for on-screen typing due to natural language redundancy. This facilitates the use of low-cost video components for advanced...... to infer the state of the eye such as eye corners and the pupil location under scale and rotational changes. We use a Gaussian Process interpolation method for gaze determination, which facilitates stability feedback from the system. The use of a learning method for gaze estimation gives more flexibility...
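
    The record mentions a Gaussian Process interpolation method for gaze determination that provides stability feedback. A minimal sketch of that idea follows: calibration maps eye-feature coordinates to screen coordinates, and the predictive variance can serve as the stability signal. The RBF kernel and hyperparameters are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def gp_fit_predict(X, Y, Xq, length=25.0, noise=1e-3):
    """Gaussian-process (RBF kernel) interpolation from eye-feature
    coordinates X (n x 2) to screen coordinates Y (n x 2), evaluated
    at query points Xq. Returns predicted screen points and a
    per-query variance usable as a stability signal."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)

    K = k(X, X) + noise * np.eye(len(X))   # regularised Gram matrix
    Ks = k(Xq, X)
    mean = Ks @ np.linalg.solve(K, Y)       # posterior mean
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var
```

    Near the calibration points the variance is small (stable estimate); far from them it grows toward the prior, which a typing interface could expose as feedback about tracking confidence.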

  1. Irreversible electroporation of human primary uveal melanoma in enucleated eyes.

    Directory of Open Access Journals (Sweden)

    Yossi Mandel

    Full Text Available Uveal melanoma (UM) is the most common primary intraocular tumor in adults and is characterized by high rates of metastatic disease. Although brachytherapy is the most common globe-sparing treatment option for small- and medium-sized tumors, the treatment is associated with severe adverse reactions and does not lead to increased survival rates as compared to enucleation. The use of irreversible electroporation (IRE) for tumor ablation has potential advantages in the treatment of tumors in complex organs such as the eye. Following previous theoretical work, herein we evaluate the use of IRE for uveal tumor ablation in a human ex vivo eye model. Enucleated eyes of patients with uveal melanoma were treated with short electric pulses (50-100 µs, 1000-2000 V/cm) using a customized electrode design. Tumor bioimpedance was measured before and after treatment and was followed by histopathological evaluation. We found that IRE caused tumor ablation characterized by cell membrane disruption while sparing the non-cellular sclera. Membrane disruption and loss of cellular capacitance were also associated with significant reduction in total tumor impedance and loss of impedance frequency dependence. The effect was more pronounced near the pulsing electrodes and was dependent on time from treatment to fixation. Future studies should further evaluate the potential of IRE as an alternative method of uveal melanoma treatment.

  2. Effects of Facial Symmetry and Gaze Direction on Perception of Social Attributes: A Study in Experimental Art History

    Science.gov (United States)

    Folgerø, Per O.; Hodne, Lasse; Johansson, Christer; Andresen, Alf E.; Sætren, Lill C.; Specht, Karsten; Skaar, Øystein O.; Reber, Rolf

    2016-01-01

    This article explores the possibility of testing hypotheses about art production in the past by collecting data in the present. We call this enterprise “experimental art history”. Why did medieval artists prefer to paint Christ with his face directed towards the beholder, while profane faces were noticeably more often painted in different degrees of profile? Is a preference for frontal faces motivated by deeper evolutionary and biological considerations? Head and gaze direction is a significant factor for detecting the intentions of others, and accurate detection of gaze direction depends on strong contrast between a dark iris and a bright sclera, a combination that is only found in humans among the primates. One uniquely human capacity is language acquisition, where the detection of shared or joint attention, for example through detection of gaze direction, contributes significantly to the ease of acquisition. The perceived face and gaze direction is also related to fundamental emotional reactions such as fear, aggression, empathy and sympathy. The fast-track modulator model presents a related fast and unconscious subcortical route that involves many central brain areas. Activity in this pathway mediates the affective valence of the stimulus. In particular, different sub-regions of the amygdala show specific activation in response to gaze direction, head orientation and the valence of facial expression. We present three experiments on the effects of face orientation and gaze direction on the judgments of social attributes. We observed that frontal faces with direct gaze were more highly associated with positive adjectives. Does this help to associate positive values to the Holy Face in a Western context? The formal result indicates that the Holy Face is perceived more positively than profiles with both direct and averted gaze. Two control studies, using a Brazilian and a Dutch database of photographs, showed a similar but weaker effect with a larger contrast

  3. Effects of Facial Symmetry and Gaze Direction on Perception of Social Attributes: A Study in Experimental Art History

    Directory of Open Access Journals (Sweden)

    Per Olav Folgerø

    2016-09-01

    Full Text Available This article explores the possibility of testing hypotheses about art production in the past by collecting data in the present. We call this enterprise "experimental art history". Why did medieval artists prefer to paint Christ with his face directed towards the beholder, while profane faces were noticeably more often painted in different degrees of profile? Is a preference for frontal faces motivated by deeper evolutionary and biological considerations? Head and gaze direction is a significant factor for detecting the intentions of others, and accurate detection of gaze direction depends on strong contrast between a dark iris and a bright sclera, a combination that is only found in humans among the primates. One uniquely human capacity is language acquisition, where the detection of shared or joint attention, for example through detection of gaze direction, contributes significantly to the ease of acquisition. The perceived face and gaze direction is also related to fundamental emotional reactions such as fear, aggression, empathy and sympathy. The fast-track modulator model presents a related fast and unconscious subcortical route that involves many central brain areas. Activity in this pathway mediates the affective valence of the stimulus. In particular, different sub-regions of the amygdala show specific activation as response to gaze direction, head orientation, and the valence of facial expression. We present three experiments on the effects of face orientation and gaze direction on the judgments of social attributes. We observed that frontal faces with direct gaze were more highly associated with positive adjectives. Does this help to associate positive values to the Holy Face in a Western context? The formal result indicates that the Holy Face is perceived more positively than profiles with both direct and averted gaze. Two control studies, using a Brazilian and a Dutch database of photographs, showed a similar but weaker effect with a

  4. Effects of Facial Symmetry and Gaze Direction on Perception of Social Attributes: A Study in Experimental Art History.

    Science.gov (United States)

    Folgerø, Per O; Hodne, Lasse; Johansson, Christer; Andresen, Alf E; Sætren, Lill C; Specht, Karsten; Skaar, Øystein O; Reber, Rolf

    2016-01-01

    This article explores the possibility of testing hypotheses about art production in the past by collecting data in the present. We call this enterprise "experimental art history". Why did medieval artists prefer to paint Christ with his face directed towards the beholder, while profane faces were noticeably more often painted in different degrees of profile? Is a preference for frontal faces motivated by deeper evolutionary and biological considerations? Head and gaze direction is a significant factor for detecting the intentions of others, and accurate detection of gaze direction depends on strong contrast between a dark iris and a bright sclera, a combination that is only found in humans among the primates. One uniquely human capacity is language acquisition, where the detection of shared or joint attention, for example through detection of gaze direction, contributes significantly to the ease of acquisition. The perceived face and gaze direction is also related to fundamental emotional reactions such as fear, aggression, empathy and sympathy. The fast-track modulator model presents a related fast and unconscious subcortical route that involves many central brain areas. Activity in this pathway mediates the affective valence of the stimulus. In particular, different sub-regions of the amygdala show specific activation as response to gaze direction, head orientation and the valence of facial expression. We present three experiments on the effects of face orientation and gaze direction on the judgments of social attributes. We observed that frontal faces with direct gaze were more highly associated with positive adjectives. Does this help to associate positive values to the Holy Face in a Western context? The formal result indicates that the Holy Face is perceived more positively than profiles with both direct and averted gaze. Two control studies, using a Brazilian and a Dutch database of photographs, showed a similar but weaker effect with a larger contrast

  5. A comparison of facial color pattern and gazing behavior in canid species suggests gaze communication in gray wolves (Canis lupus).

    Directory of Open Access Journals (Sweden)

    Sayoko Ueda

    Full Text Available As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication.

  6. A comparison of facial color pattern and gazing behavior in canid species suggests gaze communication in gray wolves (Canis lupus).

    Science.gov (United States)

    Ueda, Sayoko; Kumagai, Gaku; Otaki, Yusuke; Yamaguchi, Shinya; Kohshima, Shiro

    2014-01-01

    As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication.

  7. Vestibular and cerebellar contribution to gaze optimality.

    Science.gov (United States)

    Sağlam, Murat; Glasauer, Stefan; Lehnen, Nadine

    2014-04-01

    Patients with chronic bilateral vestibular loss have large gaze variability and experience disturbing oscillopsia, which impacts physical and social functioning, and quality of life. Gaze variability and oscillopsia in these patients are attributed to a deficient vestibulo-ocular reflex, i.e. impaired online feedback motor control. Here, we assessed whether the lack of vestibular input also affects feed-forward motor learning, i.e. the ability to choose optimal movement parameters that minimize variability during active movements such as combined eye-head gaze shifts. A failure to learn from practice and reshape feed-forward motor commands in response to sensory error signals to achieve appropriate movements has been proposed to explain dysmetric gaze shifts in patients with cerebellar ataxia. We, therefore, assessed the differential roles of both sensory vestibular information and the cerebellum in choosing optimal movement kinematics. We have previously shown that, in the course of several gaze shifts, healthy subjects adjust the motor command to minimize endpoint variability also when movements are experimentally altered by an increase in the head moment of inertia. Here, we increased the head inertia in five patients with chronic complete bilateral vestibular loss (aged 45.4±7.1 years, mean±standard deviation), nine patients with cerebellar ataxia (aged 56.7±12.6 years), and 10 healthy control subjects (aged 39.7±6.3 years) while they performed large (75° and 80°) horizontal gaze shifts towards briefly flashed targets in darkness and, using our previous optimal control model, compared their gaze shift parameters to the expected optimal movements with increased head inertia. Patients with chronic bilateral vestibular loss failed to update any of the gaze shift parameters to the new optimum with increased head inertia. Consequently, they displayed highly variable, suboptimal gaze shifts. Patients with cerebellar ataxia updated some movement parameters to

  8. Novel automatic eye detection and tracking algorithm

    Science.gov (United States)

    Ghazali, Kamarul Hawari; Jadin, Mohd Shawal; Jie, Ma; Xiao, Rui

    2015-04-01

    The eye is not only one of the most complex but also one of the most important sensory organs of the human body. Eye detection and eye tracking are fundamental and actively researched problems in image processing. Non-invasive eye location and tracking are promising for hands-off gaze-based human-computer interfaces, fatigue detection, instrument control by paraplegic patients, and so on. For this purpose, this paper proposes an innovative framework to detect and track eyes in video sequences. The contributions of this work are twofold. First, eye filters were trained that can detect eye locations efficiently and accurately without constraints on the background or skin colour. Second, a tracking framework based on sparse representation and the Lucas-Kanade (LK) optical tracker was built that can track the eye without constraints on eye status. The experimental results demonstrate the accuracy and real-time applicability of the proposed approach.
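
    The record above combines detection with a Lucas-Kanade (LK) optical tracker. As a rough illustration of the LK idea only (not the authors' trained eye filters or sparse-representation tracker), the sketch below performs a single-window LK step: it estimates the displacement of a patch between two frames by solving the brightness-constancy equations in a least-squares sense. The synthetic Gaussian "pupil" and all window sizes are invented for the demo.

```python
import numpy as np

def lk_step(I0, I1, pt, win=7):
    """One Lucas-Kanade step: estimate the displacement of the patch
    around integer point `pt` (x, y) between frames I0 and I1 by
    solving  [Ix Iy] d = -It  in a least-squares sense."""
    x, y = pt
    h = win // 2
    patch0 = I0[y-h:y+h+1, x-h:x+h+1].astype(float)
    patch1 = I1[y-h:y+h+1, x-h:x+h+1].astype(float)
    Iy, Ix = np.gradient(patch0)          # spatial gradients (rows, cols)
    It = patch1 - patch0                  # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d                              # (dx, dy)

# Synthetic test: a Gaussian "pupil" shifted by 1 px to the right.
yy, xx = np.mgrid[0:64, 0:64]
blob = lambda cx, cy: np.exp(-((xx - cx)**2 + (yy - cy)**2) / 18.0)
f0, f1 = blob(32, 32), blob(33, 32)
dx, dy = lk_step(f0, f1, (32, 32), win=15)
print(f"estimated shift ≈ ({dx:.2f}, {dy:.2f})")
```

    A production tracker would iterate this step, use image pyramids for large motions, and re-detect on failure, which is roughly what OpenCV's pyramidal LK implementation does.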

  9. Learning to interact with a computer by gaze

    DEFF Research Database (Denmark)

    Aoki, Hirotaka; Hansen, John Paulin; Itoh, Kenji

    2008-01-01

    The aim of this paper is to examine the learning processes that subjects undertake when they start using gaze as computer input. A 7-day experiment with eight Japanese students was carried out to record novice users’ eye movement data during typing of 110 sentences. The experiment revealed...... that inefficient eye movements were dramatically reduced after only 15 to 25 sentences of typing, equal to approximately 3-4 hours of practice. The performance data fits a general learning model based on the power law of practice. The learning model can be used to estimate further improvements in gaze typing...
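
    The power law of practice mentioned above can be fitted in a few lines. This sketch assumes the common form T(N) = a·N^(−b) (time per sentence T after N practiced sentences) and fits it by linear regression in log-log space; the synthetic data and constants are illustrative, not the paper's measurements.

```python
import numpy as np

def fit_power_law(trial_idx, times):
    """Fit the power law of practice T(N) = a * N**(-b) via
    linear regression in log-log space: log T = log a - b log N."""
    slope, intercept = np.polyfit(np.log(trial_idx), np.log(times), 1)
    return np.exp(intercept), -slope      # a, b

# Synthetic practice data: time per sentence shrinks with practice.
rng = np.random.default_rng(0)
N = np.arange(1, 111)                     # 110 typed sentences
T = 60.0 * N**-0.4 * rng.lognormal(0, 0.05, N.size)
a, b = fit_power_law(N, T)
print(f"a ≈ {a:.1f} s, learning exponent b ≈ {b:.2f}")
```

    Extrapolating the fitted curve to larger N is how such a model estimates further improvement with continued practice.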

  10. Social decisions affect neural activity to perceived dynamic gaze.

    Science.gov (United States)

    Latinus, Marianne; Love, Scott A; Rossi, Alejandra; Parada, Francisco J; Huang, Lisa; Conty, Laurence; George, Nathalie; James, Karin; Puce, Aina

    2015-11-01

    Gaze direction, a cue of both social and spatial attention, is known to modulate early neural responses to faces (e.g., the N170). However, findings in the literature have been inconsistent, likely reflecting differences in stimulus characteristics and task requirements. Here, we investigated the effect of task on neural responses to dynamic gaze changes: away and toward transitions (resulting or not in eye contact). Subjects performed, in random order, social (away/toward them) and non-social (left/right) judgment tasks on these stimuli. Overall, in the non-social task, results showed a larger N170 to gaze aversion than gaze motion toward the observer. In the social task, however, this difference was no longer present in the right hemisphere, likely reflecting an enhanced N170 to gaze motion toward the observer. Our behavioral and event-related potential data indicate that performing social judgments enhances saliency of gaze motion toward the observer, even those that did not result in gaze contact. These data and that of previous studies suggest two modes of processing visual information: a 'default mode' that may focus on spatial information; a 'socially aware mode' that might be activated when subjects are required to make social judgments. The exact mechanism that allows switching from one mode to the other remains to be clarified.

  11. Audience gaze while appreciating a multipart musical performance.

    Science.gov (United States)

    Kawase, Satoshi; Obata, Satoshi

    2016-11-01

    Visual information has been observed to be crucial for audience members during musical performances. The present study used an eye tracker to investigate audience members' gazes while appreciating an audiovisual musical ensemble performance, based on evidence of the dominance of musical part in auditory attention when listening to multipart music that contains different melody lines and the joint-attention theory of gaze. We presented singing performances, by a female duo. The main findings were as follows: (1) the melody part (soprano) attracted more visual attention than the accompaniment part (alto) throughout the piece, (2) joint attention emerged when the singers shifted their gazes toward their co-performer, suggesting that inter-performer gazing interactions that play a spotlight role mediated performer-audience visual interaction, and (3) musical part (melody or accompaniment) strongly influenced the total duration of gazes among audiences, while the spotlight effect of gaze was limited to just after the singers' gaze shifts. Copyright © 2016. Published by Elsevier Inc.

  12. Culture, gaze and the neural processing of fear expressions.

    Science.gov (United States)

    Adams, Reginald B; Franklin, Robert G; Rule, Nicholas O; Freeman, Jonathan B; Kveraga, Kestutis; Hadjikhani, Nouchine; Yoshikawa, Sakiko; Ambady, Nalini

    2010-06-01

    The direction of others' eye gaze has important influences on how we perceive their emotional expressions. Here, we examined differences in neural activation to direct- versus averted-gaze fear faces as a function of culture of the participant (Japanese versus US Caucasian), culture of the stimulus face (Japanese versus US Caucasian), and the relation between the two. We employed a previously validated paradigm to examine differences in neural activation in response to rapidly presented direct- versus averted-fear expressions, finding clear evidence for a culturally determined role of gaze in the processing of fear. Greater neural responsivity was apparent to averted- versus direct-gaze fear in several regions related to face and emotion processing, including bilateral amygdalae, when posed on same-culture faces, whereas greater response to direct- versus averted-gaze fear was apparent in these same regions when posed on other-culture faces. We also found preliminary evidence for intercultural variation including differential responses across participants to Japanese versus US Caucasian stimuli, and to a lesser degree differences in how Japanese and US Caucasian participants responded to these stimuli. These findings reveal a meaningful role of culture in the processing of eye gaze and emotion, and highlight their interactive influences in neural processing.

  13. Gaze inspired subtitle position evaluation for MOOCs videos

    Science.gov (United States)

    Chen, Hongli; Yan, Mengzhen; Liu, Sijiang; Jiang, Bo

    2017-06-01

    Online educational resources, such as MOOCs, are becoming increasingly popular, especially in higher education. One of the most important media types for MOOCs is the course video. Besides the traditional bottom-position subtitles that accompany videos, in recent years researchers have tried to develop more advanced algorithms to generate speaker-following subtitles. However, the effectiveness of such subtitles is still unclear. In this paper, we investigate the relationship between subtitle position and the learning effect after watching the video on tablet devices. Drawing on image-based human eye tracking, this work combines objective gaze estimation statistics with a subjective user study to reach a convincing conclusion: speaker-following subtitles are more suitable for online educational videos.

  14. Genetic analyses of the human eye colours using a novel objective method for eye colour classification

    DEFF Research Database (Denmark)

    Andersen, Jeppe D.; Johansen, Peter; Harder, Stine

    2013-01-01

    In this study, we present a new objective method for measuring the eye colour on a continuous scale that allows researchers to associate genetic markers with different shades of eye colour. With the use of the custom designed software Digital Iris Analysis Tool (DIAT), the iris was automatically identified and extracted from high resolution digital images. DIAT was made user friendly with a graphical user interface. The software counted the number of blue and brown pixels in the iris image and calculated a Pixel Index of the Eye (PIE-score) that described the eye colour quantitatively. The PIE-score ranged from −1 to 1 (brown to blue). The software eliminated the need for user based interpretation and qualitative eye colour categories. In 94% (570) of 605 analyzed eye images, the iris region was successfully extracted and a PIE-score was calculated. A very high correlation between the PIE...
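
    A PIE-like score, as described above, reduces to a simple pixel count over a segmented iris region. The sketch below is only an illustration of that idea: the blue/brown colour rules are invented assumptions, not DIAT's actual pixel classifier, and a real pipeline would first segment the iris.

```python
import numpy as np

def pie_score(iris_rgb):
    """Pixel Index of the Eye: (n_blue - n_brown) / (n_blue + n_brown),
    ranging from -1 (all brown) to 1 (all blue). The colour rules here
    are illustrative, not DIAT's actual classifier."""
    r, g, b = (iris_rgb[..., i].astype(int) for i in range(3))
    blue = (b > r) & (b > g)              # blue channel dominates
    brown = (r > b) & (r >= g)            # red channel dominates
    n_blue, n_brown = blue.sum(), brown.sum()
    if n_blue + n_brown == 0:
        return 0.0
    return (n_blue - n_brown) / (n_blue + n_brown)

# Toy example: a 10x10 patch with 70 blue-ish and 30 brown-ish pixels.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[..., :] = (60, 90, 160)               # blue-ish pixels
img[:3, :, :] = (120, 80, 50)             # brown-ish top rows
print(round(pie_score(img), 2))           # → 0.4
```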

  15. Temperature distribution simulation of the human eye exposed to laser radiation.

    Science.gov (United States)

    Mirnezami, Seyyed Abbas; Rajaei Jafarabadi, Mahdi; Abrishami, Maryam

    2013-01-01

    The human eye is a sensitive part of the human body with no direct protection against external heat sources, so studying the temperature distribution produced by heat waves in the human eye is of utmost importance. Various lasers are widely used in medical applications such as eye surgeries. A central issue in laser eye surgery is estimating the temperature distribution and the temperature rise in eye tissues caused by the laser radiation. Experimental and invasive methods of measuring eye temperature usually carry high risks. In this paper, the human eye has been modeled by studying the temperature distribution under three different laser radiations, using the finite element method. We simulated the human eye under 1064 nm Neodymium-Doped Yttrium Aluminium Garnet (Nd:YAG) laser, 193 nm argon fluoride (ArF) excimer laser, and 1340 nm Neodymium-Doped Yttrium Aluminum Perovskite (Nd:YAP) laser radiation. The results show that these radiations cause a temperature rise in the retina, lens and cornea regions, which in turn causes serious damage to the eye tissues. This simulation can be a useful tool to study and predict the temperature distribution during laser radiation of the human eye and to evaluate the risk involved in using lasers to perform surgery.
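
    The paper solves the heat problem with a 3-D finite element model. As a rough illustration of the underlying physics only, the sketch below integrates a 1-D explicit finite-difference heat equation through the eye's depth with a Beer-Lambert laser source term. All tissue and laser parameters are assumed round numbers, not the paper's values.

```python
import numpy as np

def simulate_1d_heating(n=100, L=0.025, t_end=1.0, dt=1e-3,
                        k=0.58, rho=1000.0, c=4178.0,
                        q0=2e4, mu=1e3):
    """Explicit 1-D finite-difference heat equation over depth x (m),
    heated by a Beer-Lambert laser source q0*mu*exp(-mu*x).
    Parameters are illustrative round numbers for water-like tissue."""
    dx = L / (n - 1)
    alpha = k / (rho * c)                          # thermal diffusivity
    assert alpha * dt / dx**2 < 0.5, "explicit scheme stability"
    x = np.linspace(0, L, n)
    T = np.full(n, 37.0)                           # body temperature, °C
    src = q0 * mu * np.exp(-mu * x) / (rho * c)    # heating rate, K/s
    for _ in range(int(t_end / dt)):
        lap = np.zeros(n)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        T[1:-1] += dt * (alpha * lap[1:-1] + src[1:-1])
        T[0], T[-1] = T[1], 37.0                   # insulated front, fixed back
    return x, T

x, T = simulate_1d_heating()
print(f"peak temperature ≈ {T.max():.1f} °C at x = {x[T.argmax()]*1e3:.2f} mm")
```

    A realistic model would add the Pennes bioheat perfusion term, tissue-specific properties per eye layer, and 3-D geometry, which is what the finite element approach in the paper provides.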

  16. Gaze-cueing requires intact face processing - Insights from acquired prosopagnosia.

    Science.gov (United States)

    Burra, Nicolas; Kerzel, Dirk; Ramon, Meike

    2017-04-01

    Gaze-cueing is the automatic spatial orienting of attention in the direction of perceived gaze. Participants respond faster to targets located at positions congruent with the direction of gaze, compared to incongruent ones (gaze cueing effect, GCE). However, it still remains unclear whether its occurrence depends on intact integration of information from the entire eye region or face, rather than simply the presence of the eyes per se. To address this question, we investigated the GCE in PS, an extensively studied case of pure acquired prosopagnosia. In our gaze-cueing paradigm, we manipulated the duration at which cues were presented (70ms vs. 400ms) and the availability of facial information (full-face vs. eyes-only). For 70ms cue duration, we found a context-dependent dissociation between PS and controls: PS showed a GCE for eyes-only stimuli, whereas controls showed a GCE only for full-face stimuli. For 400ms cue duration, PS showed gaze-cueing independently of stimulus context, whereas in healthy controls a GCE again emerged only for full-face stimuli. Our findings suggest that attentional deployment based on the gaze direction of briefly presented faces requires intact processing of facial information, which affords salience to the eye region. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Can human eyes prevent perceptual narrowing for monkey faces in human infants?

    Science.gov (United States)

    Damon, Fabrice; Bayet, Laurie; Quinn, Paul C; Hillairet de Boisferon, Anne; Méary, David; Dupierrix, Eve; Lee, Kang; Pascalis, Olivier

    2015-07-01

    Perceptual narrowing has been observed in human infants for monkey faces: 6-month-olds can discriminate between them, whereas older infants from 9 months of age display difficulty discriminating between them. The difficulty infants from 9 months have processing monkey faces has not been clearly identified. It could be due to the structural characteristics of monkey faces, particularly the key facial features that differ from human faces. The current study aimed to investigate whether the information conveyed by the eyes is of importance. We examined whether the presence of Caucasian human eyes in monkey faces allows recognition to be maintained in 6-month-olds and facilitates recognition in 9- and 12-month-olds. Our results revealed that the presence of human eyes in monkey faces maintains recognition for those faces at 6 months of age and partially facilitates recognition of those faces at 9 months of age, but not at 12 months of age. The findings are interpreted in the context of perceptual narrowing and suggest that the attenuation of processing of other-species faces is not reversed by the presence of human eyes.

  18. The Epistemology of the Gaze

    DEFF Research Database (Denmark)

    Kramer, Mette

    2007-01-01

    In psycho-semiotic film theory, the gaze is often considered a straitjacket for the female spectator. If we approach the gaze through an empirical, so-called ‘naturalised’ lens, it is possible to regard the gaze as a functional device through which the spectator can obtain knowledge essential for ...

  19. A GazeWatch Prototype

    DEFF Research Database (Denmark)

    Paulin Hansen, John; Biermann, Florian; Møllenbach, Emile

    2015-01-01

    We demonstrate potentials of adding a gaze tracking unit to a smartwatch, allowing hands-free interaction with the watch itself and control of the environment. Users give commands via gaze gestures, i.e. looking away and back to the GazeWatch. Rapid presentation of single words on the watch displ...

  20. Genetic analyses of the human eye colours using a novel objective method for eye colour classification

    DEFF Research Database (Denmark)

    Andersen, Jeppe D.; Johansen, Peter; Harder, Stine

    2013-01-01

    In this study, we present a new objective method for measuring the eye colour on a continuous scale that allows researchers to associate genetic markers with different shades of eye colour. With the use of the custom designed software Digital Iris Analysis Tool (DIAT), the iris was automatically ...

  1. Genetic analyses of the human eye colours using a novel objective method for eye colour classification

    DEFF Research Database (Denmark)

    Andersen, Jeppe D.; Johansen, Peter; Harder, Stine;

    2013-01-01

    In this study, we present a new objective method for measuring the eye colour on a continuous scale that allows researchers to associate genetic markers with different shades of eye colour. With the use of the custom designed software Digital Iris Analysis Tool (DIAT), the iris was automatically...... and TYR rs1393350) on the eye colour. We evaluated the two published prediction models for eye colour (IrisPlex [1] and Snipper [2]) and compared the predictions with the PIE-scores. We found good concordance with the prediction from individuals typed as HERC2 rs12913832 G. However, both methods had...... by different studies and to perform large meta-studies that may reveal loci with small effects on the eye colour....

  2. Gaze stabilization reflexes in the mouse: New tools to study vision and sensorimotor

    NARCIS (Netherlands)

    B. van Alphen (Bart)

    2010-01-01

    Gaze stabilization reflexes are a popular model system in neuroscience for connecting neurophysiology and behavior as well as studying the neural correlates of behavioral plasticity. These compensatory eye movements are one of the simplest motor behaviors,

  3. Following gaze: gaze-following behavior as a window into social cognition

    Directory of Open Access Journals (Sweden)

    Stephen V Shepherd

    2010-03-01

    Full Text Available In general, individuals look where they attend and next intend to act. Many animals, including our own species, use observed gaze as a deictic (pointing) cue to guide behavior. Among humans, these responses are reflexive and pervasive: they arise within a fraction of a second, act independently of task relevance, and appear to undergird our initial development of language and theory of mind. Human and nonhuman animals appear to share basic gaze-following behaviors, suggesting the foundations of human social cognition may also be present in nonhuman brains.

  4. An Automatic Eye Detection Method for Gray Intensity Facial Images

    Directory of Open Access Journals (Sweden)

    M Hassaballah

    2011-07-01

    Full Text Available Eyes are the most salient and stable features in the human face, and hence automatic extraction or detection of eyes is often considered the most important step in many applications, such as face identification and recognition. This paper presents a method for eye detection in still grayscale images. The method is based on two facts: eye regions exhibit unpredictable local intensity, so entropy in eye regions is high, and the center of the eye (iris) is a dark circle (low intensity) compared to the neighboring regions. A score based on the entropy of the eye region and the darkness of the iris is used to detect eye center coordinates. Experimental results on two databases, namely FERET with variations in views and BioID with variations in gaze directions and uncontrolled conditions, show that the proposed method is robust against gaze direction, variations in views and variety of illumination. It achieves correct detection rates of 97.8% and 94.3% on a set containing 2500 images from the FERET and BioID databases, respectively. Moreover, in cases with glasses and severe conditions, the performance is still acceptable.
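
    The entropy-plus-darkness score described above might be sketched as follows. The histogram binning and the weighting of the two terms are assumptions for illustration, not the paper's exact formulation; a full detector would evaluate this score over candidate locations and keep the maxima.

```python
import numpy as np

def eye_score(gray, cx, cy, win=9):
    """Illustrative eye-center score: eye regions have high local
    intensity entropy and a dark centre (iris). The 5.0 weighting
    of darkness against entropy is an assumed choice."""
    h = win // 2
    patch = gray[cy-h:cy+h+1, cx-h:cx+h+1]
    hist, _ = np.histogram(patch, bins=32, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()         # local intensity entropy
    darkness = 1.0 - gray[cy, cx] / 255.0     # dark iris centre
    return entropy + 5.0 * darkness

# A textured patch with a dark centre scores higher than flat bright "skin".
rng = np.random.default_rng(1)
img = np.full((40, 40), 200, dtype=np.uint8)         # flat "skin"
img[10:20, 10:20] = rng.integers(0, 256, (10, 10))   # textured "eye" region
img[15, 15] = 10                                     # dark iris pixel
print(round(eye_score(img, 15, 15), 2), round(eye_score(img, 30, 30), 2))
```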

  5. Head and Eye Movements Affect Object Processing in 4-Month-Old Infants More than an Artificial Orientation Cue

    Science.gov (United States)

    Wahl, Sebastian; Michel, Christine; Pauen, Sabina; Hoehl, Stefanie

    2013-01-01

    This study investigates the effects of attention-guiding stimuli on 4-month-old infants' object processing. In the human head condition, infants saw a person turning her head and eye gaze towards or away from objects. When presented with the objects again, infants showed increased attention in terms of longer looking time measured by eye…

  6. Towards gaze-controlled platform games

    DEFF Research Database (Denmark)

    Muñoz, Jorge; Yannakakis, Georgios N.; Mulvey, Fiona

    2011-01-01

    This paper introduces the concept of using gaze as a sole modality for fully controlling player characters of fast-paced action computer games. A user experiment is devised to collect gaze and gameplay data from subjects playing a version of the popular Super Mario Bros platform game. The initial...... analysis shows that there is a rather limited grid around Mario where the efficient player focuses her attention the most while playing the game. The useful grid, as we name it, projects the amount of meaningful visual information a designer should use towards creating successful player character...... controllers with the use of artificial intelligence for a platform game like Super Mario. Information about the eyes' position on the screen and the state of the game are utilized as inputs of an artificial neural network, which is trained to approximate which keyboard action is to be performed at each game...
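
    The record describes training a neural network on eye position and game state to predict keyboard actions. As a minimal stand-in for that pipeline (not the paper's network or data), the sketch below trains a one-layer network (logistic regression) on an invented rule: press "right" when the gaze is ahead of the character, otherwise "left". Features and labels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: features = (gaze_x - char_x, gaze_y - char_y, char_speed);
# target action is "right" (1) when the player looks ahead of the character.
X = rng.uniform(-1, 1, (500, 3))
y = (X[:, 0] > 0).astype(float)

# One-layer network (logistic regression) trained by batch gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ w + b)))    # predicted P(action = "right")
    g = p - y                             # gradient of cross-entropy loss
    w -= 0.5 * (X.T @ g) / len(X)
    b -= 0.5 * g.mean()

acc = (((X @ w + b) > 0) == (y > 0.5)).mean()
print(f"training accuracy: {acc:.2f}")
```

    A real controller would use a multi-layer network, more game-state features, and labels recorded from human play rather than a hand-written rule.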

  7. Blue eyes in lemurs and humans: same phenotype, different genetic mechanism

    DEFF Research Database (Denmark)

    Bradley, Brenda J; Pedersen, Anja; Mundy, Nicholas I

    2009-01-01

    Almost all mammals have brown or darkly-pigmented eyes (irises), but among primates, there are some prominent blue-eyed exceptions. The blue eyes of some humans and lemurs are a striking example of convergent evolution of a rare phenotype on distant branches of the primate tree. Recent work...... on humans indicates that blue eye color is associated with, and likely caused by, a single nucleotide polymorphism (rs12913832) in an intron of the gene HERC2, which likely regulates expression of the neighboring pigmentation gene OCA2. This raises the immediate question of whether blue eyes in lemurs might...... have a similar genetic basis. We addressed this by sequencing the homologous genetic region in the blue-eyed black lemur (Eulemur macaco flavifrons; N = 4) and the closely-related black lemur (Eulemur macaco macaco; N = 4), which has brown eyes. We then compared a 166-bp segment corresponding...

  8. Ritual relieved axial dystonia triggered by gaze-evoked amaurosis.

    Science.gov (United States)

    Jacome, D E

    1997-11-01

    A woman with chronic posttraumatic axial lateropulsion cervical dystonia ("belly dancer's head") found relief of her spontaneous dystonic spasms by the sequential performance of an elaborate motor ritual. During an episode of left optic papillitis caused by central retinal vein occlusion, gaze-evoked amaurosis of the left eye developed, preceded by achromatopsia, during left lateral gaze. Gaze-evoked amaurosis triggered axial dystonia, which was followed by her unique, stereotyped, dystonia-relieving ritual that simulated a slow dance. Visual symptoms improved progressively in 1 year. Eventually, she was unable to trigger her dystonia by eye movements. Spontaneous dystonia remained otherwise unchanged from before the episode of papillitis and was still relieved by her unique ritual.

  9. Early averted gaze processing in the right Fusiform Gyrus: An EEG source imaging study.

    Science.gov (United States)

    Berchio, Cristina; Rihs, Tonia A; Piguet, Camille; Dayer, Alexandre G; Aubry, Jean-Michel; Michel, Christoph M

    2016-09-01

    Humans are able to categorize face properties with impressively short latencies. Nevertheless, the latency at which gaze recognition occurs is still a matter of debate. Through spatio-temporal analysis of high-density event-related potentials (ERP), we investigated the brain activity underlying the ability to spontaneously and quickly process gaze. We presented neutral faces with direct and averted gaze in a matching picture paradigm, where subjects had to detect repetition of identical faces and gaze was implicitly manipulated. The results indicate that faces with averted gaze were better discriminated than faces with direct gaze, and evoked stronger P100 amplitudes localized to the right fusiform gyrus. In contrast, direct gaze induced stronger activation in the orbital frontal gyrus at this latency. Later in time, at the beginning of the N170 component, direct gaze induced changes in scalp topography with a stronger activation in the right medial temporal gyrus. The location of these differential activations of direct vs. averted gaze further support the view that faces with averted gaze are perceived as less rewarding than faces with direct gaze. We additionally found differential ERP responses between repeated and novel faces as early as 50ms, thereby replicating earlier studies of very fast detection of mnestic aspects of stimuli. Together, these results suggest an early dissociation between implicit gaze detection and explicit identity processing.

  10. Control and Functions of Fixational Eye Movements

    Science.gov (United States)

    Rucci, Michele; Poletti, Martina

    2016-01-01

    Humans and other species explore a visual scene by rapidly shifting their gaze 2-3 times every second. Although the eyes may appear immobile in the brief intervals in between saccades, microscopic (fixational) eye movements are always present, even when attending to a single point. These movements occur during the very periods in which visual information is acquired and processed and their functions have long been debated. Recent technical advances in controlling retinal stimulation during normal oculomotor activity have shed new light on the visual contributions of fixational eye movements and their degree of control. The emerging body of evidence, reviewed in this article, indicates that fixational eye movements are important components of the strategy by which the visual system processes fine spatial details, enabling both precise positioning of the stimulus on the retina and encoding of spatial information into the joint space-time domain.

  11. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    Science.gov (United States)

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-08-31

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground-truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground-truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers can more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience, and interest.
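
The viewing-angle/DOF trade-off the authors measure can be illustrated with the standard thin-lens depth-of-field formulas. The focal length, f-number, circle of confusion, and working distance below are arbitrary illustrative choices, not the parameters used in the paper.

```python
def depth_of_field(f_mm, n, c_mm, s_mm):
    """Near/far limits of acceptable focus for a thin lens.

    f_mm: focal length, n: f-number, c_mm: circle of confusion,
    s_mm: focus distance (all in millimetres, measured from the lens).
    """
    h = f_mm ** 2 / (n * c_mm) + f_mm          # hyperfocal distance
    near = s_mm * (h - f_mm) / (h + s_mm - 2 * f_mm)
    far = s_mm * (h - f_mm) / (h - s_mm) if s_mm < h else float("inf")
    return near, far

# Illustrative values: 50 mm lens at f/2, 0.03 mm circle of confusion,
# focused on a user's eye region at 600 mm.
near, far = depth_of_field(50.0, 2.0, 0.03, 600.0)
print(near, far)  # roughly 592 mm and 608 mm: only ~16 mm of usable depth
```

With a fast lens, the usable depth around the eye is only millimetres wide, which is why the amount and velocity of head movement matter when choosing the lens.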

  12. Atypical face gaze in autism.

    Science.gov (United States)

    Trepagnier, Cheryl; Sebrechts, Marc M; Peterson, Rebecca

    2002-06-01

    An eye-tracking study of face and object recognition was conducted to clarify the character of face gaze in autistic spectrum disorders. Experimental participants were a group of individuals diagnosed with Asperger's disorder or high-functioning autistic disorder according to their medical records and confirmed by the Autism Diagnostic Interview-Revised (ADI-R). Controls were selected on the basis of age, gender, and educational level to be comparable to the experimental group. In order to maintain attentional focus, stereoscopic images were presented in a virtual reality (VR) headset in which the eye-tracking system was installed. Preliminary analyses show impairment in face recognition, in contrast with equivalent and even superior performance in object recognition among participants with autism-related diagnoses, relative to controls. Experimental participants displayed less fixation on the central face than did control-group participants. The findings, within the limitations of the small number of subjects and technical difficulties encountered in utilizing the helmet-mounted display, suggest an impairment in face processing on the part of the individuals in the experimental group. This is consistent with the hypothesis of disruption in the first months of life, a period that may be critical to typical social and cognitive development, and has important implications for selection of appropriate targets of intervention.

  13. "The Gaze Heuristic:" Biography of an Adaptively Rational Decision Process.

    Science.gov (United States)

    Hamlin, Robert P

    2017-02-21

    This article is a case study describing the natural and human history of the gaze heuristic. The gaze heuristic is an interception heuristic that utilizes a single input (deviation from a constant angle of approach) repeatedly as a task is performed. Its architecture, advantages, and limitations are described in detail. A history of the gaze heuristic is then presented. In natural history, the gaze heuristic is the only known technique used by predators to intercept prey. In human history, the gaze heuristic was discovered accidentally by Royal Air Force (RAF) Fighter Command just prior to World War II. As it was never discovered by the Luftwaffe, the technique conferred a decisive advantage upon the RAF throughout the war. After the end of the war, in America, German technology was combined with the British heuristic to create the Sidewinder AIM-9 missile, the most successful autonomous weapon ever built. There are no plans to withdraw it or replace its guiding gaze heuristic. The case study demonstrates that the gaze heuristic is a specific heuristic type that takes a single best input at the best time (take-the-best). Its use is an adaptively rational response to specific, rapidly evolving decision environments that has allowed the animals, humans, and machines that use it to survive, prosper, and multiply relative to those that do not.
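
A minimal sketch of the heuristic's single-input architecture: the pursuer repeatedly nulls any change in the bearing ("gaze") angle to the target, which for a constant-velocity target yields a collision course. All numbers (speeds, gain, starting positions) are illustrative assumptions, not taken from the article.

```python
import math

def pursue(p, t, t_vel, p_speed, steps=300, gain=3.0):
    """Chase a constant-velocity target with the gaze heuristic:
    steer so the bearing angle to the target stays constant."""
    prev_bearing = math.atan2(t[1] - p[1], t[0] - p[0])
    min_dist = math.hypot(t[0] - p[0], t[1] - p[1])
    for _ in range(steps):
        t = (t[0] + t_vel[0], t[1] + t_vel[1])       # target moves
        bearing = math.atan2(t[1] - p[1], t[0] - p[0])
        # single input: deviation from a constant angle of approach
        heading = bearing + gain * (bearing - prev_bearing)
        p = (p[0] + p_speed * math.cos(heading),
             p[1] + p_speed * math.sin(heading))
        prev_bearing = bearing
        min_dist = min(min_dist, math.hypot(t[0] - p[0], t[1] - p[1]))
    return min_dist

# faster pursuer starting 100 units below the target's track
closest = pursue(p=(0.0, 0.0), t=(0.0, 100.0), t_vel=(1.0, 0.0), p_speed=1.6)
print(closest)  # the pursuer closes to within a few step lengths
```

Note that the rule needs no estimate of the target's speed, distance, or trajectory, which is what makes it usable by fielders, predators, and missiles alike.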

  14. Repeatability and reliability of human eye in visual shade selection.

    Science.gov (United States)

    Özat, P B; Tuncel, İ; Eroğlu, E

    2013-12-01

    Deficiencies in the human visual perception system have challenged the efficiency of the visual shade-matching protocol. The aim of this study was to evaluate the repeatability and reliability of the human eye in visual shade selection. Fifty-four volunteering dentists were asked to match the shade of an upper right central incisor tooth of a single subject. The Vita 3D-Master shade guide was used for the protocol. Before each shade-matching procedure, the definitive codes of the shade tabs were hidden by an opaque strip and the shade tabs were placed into the guide randomly. The procedure was repeated 1 month later to ensure that visual memory did not affect the results. The L*, a* and b* values of the shade tabs were measured with a dental spectrophotometer (Vita Easyshade) to produce quantitative values to evaluate the protocol. The paired samples t-test and Pearson correlation test were used to compare the 1st and 2nd selections. The Yates-corrected chi-square test was used to compare qualitative values. Statistical significance was accepted at P < 0.05. The results indicate that observers cannot exactly repeat their shade matching, but they are able to select clinically acceptable shades.

  15. What Captures Gaze in Visual Design - Insights from Cognitive Psychology

    DEFF Research Database (Denmark)

    Andersen, Emil; Maier, Anja

    2016-01-01

    Visual information is vital for user behaviour and thus of utmost importance to design. Consequently, tracking and interpreting gaze data has been the target of increasing amounts of research in design science. This research is in part facilitated by new methods, such as eye-tracking, becoming more...

  16. An eye tracking system for monitoring face scanning patterns reveals the enhancing effect of oxytocin on eye contact in common marmosets.

    Science.gov (United States)

    Kotani, Manato; Shimono, Kohei; Yoneyama, Toshihiro; Nakako, Tomokazu; Matsumoto, Kenji; Ogi, Yuji; Konoike, Naho; Nakamura, Katsuki; Ikeda, Kazuhito

    2017-09-01

    Eye tracking systems are used to investigate eye position and gaze patterns, presumed to reflect eye contact in humans. Eye contact is a useful biomarker of social communication and is known to be deficient in patients with autism spectrum disorders (ASDs). Interestingly, the same eye tracking systems have been used to directly compare face scanning patterns in some non-human primates to those in humans. Thus, eye tracking is expected to be a useful translational technique for investigating not only social attention and visual interest, but also the effects of psychiatric drugs, such as oxytocin, a neuropeptide that regulates social behavior. In this study, we report a newly established method for eye tracking in common marmosets, New World primates that, like humans, use eye contact as a means of communication. Our investigation was aimed at characterizing these primates' face scanning patterns and evaluating the effects of oxytocin on their eye contact behavior. We found that normal common marmosets spend more time viewing the eye region of a common marmoset picture than the mouth region or a scrambled picture. In the oxytocin experiment, the change in eyes/face ratio was significantly greater in the oxytocin group than in the vehicle group. Moreover, the oxytocin-induced increase in the change in eyes/face ratio was completely blocked by the oxytocin receptor antagonist L-368,899. These results indicate that eye tracking in common marmosets may be useful for evaluating drug candidates targeting psychiatric conditions, especially ASDs. Copyright © 2017 Elsevier Ltd. All rights reserved.
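
The eyes/face ratio reported here can be computed from raw gaze samples by counting how many fall inside rectangular areas of interest (AOIs). The AOI coordinates and sample points below are made up for illustration; they are not the study's stimuli or data.

```python
def dwell_ratio(samples, eyes_aoi, face_aoi):
    """Fraction of face-directed gaze samples that land on the eye region.

    samples: iterable of (x, y) gaze points;
    AOIs are rectangles given as (xmin, ymin, xmax, ymax).
    """
    inside = lambda p, r: r[0] <= p[0] <= r[2] and r[1] <= p[1] <= r[3]
    face_hits = [p for p in samples if inside(p, face_aoi)]
    if not face_hits:
        return 0.0
    eye_hits = sum(1 for p in face_hits if inside(p, eyes_aoi))
    return eye_hits / len(face_hits)

# hypothetical AOIs on a 640x480 stimulus: face box and an eye strip within it
face = (200, 100, 440, 400)
eyes = (230, 150, 410, 210)
gaze = [(300, 180), (320, 190), (350, 300), (500, 50)]  # last point off-face
print(dwell_ratio(gaze, eyes, face))  # → 2 of 3 on-face samples hit the eyes
```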

  17. Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements.

    Science.gov (United States)

    Kreysa, Helene; Kessler, Luise; Schweinberger, Stefan R

    2016-01-01

    A speaker's gaze behaviour can provide perceivers with a multitude of cues which are relevant for communication, thus constituting an important non-verbal interaction channel. The present study investigated whether direct eye gaze of a speaker affects the likelihood of listeners believing truth-ambiguous statements. Participants were presented with videos in which a speaker produced such statements with either direct or averted gaze. The statements were selected through a rating study to ensure that participants were unlikely to know a-priori whether they were true or not (e.g., "sniffer dogs cannot smell the difference between identical twins"). Participants indicated in a forced-choice task whether or not they believed each statement. We found that participants were more likely to believe statements by a speaker looking at them directly, compared to a speaker with averted gaze. Moreover, when participants disagreed with a statement, they were slower to do so when the statement was uttered with direct (compared to averted) gaze, suggesting that the process of rejecting a statement as untrue may be inhibited when that statement is accompanied by direct gaze.

  19. Gaze, goals and growing up: Effects on imitative grasping.

    Science.gov (United States)

    Brubacher, Sonja P; Roberts, Kim P; Obhi, Sukhvinder S

    2013-09-01

    Developmental differences in the use of social-attention cues to imitation were examined among children aged 3 and 6 years old (n = 58) and adults (n = 29). In each of 20 trials, participants watched a model grasp two objects simultaneously and move them together. On every trial, the model directed her gaze towards only one of the objects. Some object pairs were related and had a clear functional relationship (e.g., flower, vase), while others were functionally unrelated (e.g., cardboard square, ladybug). Owing to attentional effects of eye gaze, it was expected that all participants would more faithfully imitate the grasp on the gazed-at object than the object not gazed-at. Children were expected to imitate less faithfully on trials with functionally related objects than those without, due to goal-hierarchy effects. Results support effects of eye gaze on imitation of grasping. Children's grasping accuracy on functionally related and functionally unrelated trials was similar, but they were more likely to only use one hand on trials where the object pairs were functionally related than unrelated. Implications for theories of imitation are discussed. © 2013 The British Psychological Society.

  20. Investigating gaze of children with ASD in naturalistic settings.

    Directory of Open Access Journals (Sweden)

    Basilio Noris

    Full Text Available BACKGROUND: Visual behavior is known to be atypical in Autism Spectrum Disorders (ASD). Monitor-based eye-tracking studies have measured several of these atypicalities in individuals with autism. While atypical behaviors are known to be accentuated during natural interactions, few studies have examined gaze behavior in natural interactions. In this study we focused on (i) whether findings obtained in laboratory settings are also visible in a naturalistic interaction; and (ii) whether new atypical elements appear when studying visual behavior across the whole field of view. METHODOLOGY/PRINCIPAL FINDINGS: Ten children with ASD and ten typically developing children participated in a dyadic interaction with an experimenter administering items from the Early Social Communication Scale (ESCS). The children wore a novel head-mounted eye-tracker, measuring gaze direction and presence of faces across the child's field of view. The analysis of gaze episodes to faces revealed that children with ASD looked significantly less and for shorter lapses of time at the experimenter. The analysis of gaze patterns across the child's field of view revealed that children with ASD looked downwards and made more extensive use of their lateral field of view when exploring the environment. CONCLUSIONS/SIGNIFICANCE: The data gathered in naturalistic settings confirm findings previously obtained only in monitor-based studies. Moreover, the study allowed us to observe a generalized strategy of lateral gaze in children with ASD when they were looking at objects in their environment.

  1. Comparison of in vitro eye irritation potential by bovine corneal opacity and permeability (BCOP) assay to erythema scores in human eye sting test of surfactant-based formulations.

    Science.gov (United States)

    Cater, Kathleen C; Harbell, John W

    2008-01-01

    The bovine corneal opacity and permeability (BCOP) assay can be used to rank the eye irritation potential of surfactant-based personal care formulations relative to a corporate benchmark. The human eye sting test is typically used to evaluate product claims of no tears/no stinging for children's bath products. A preliminary investigation was conducted to test the hypothesis that the BCOP assay could be used as a prediction model for relative ranking of human eye irritation responses under conditions of a standard human eye sting test to surfactant-based formulations. BCOP assays and human eye sting tests were conducted on 4 commercial and 1 prototype body wash (BW) developed specifically for children or as mild bath products. In the human eye sting test, 10 μL of a 10% dosing solution is instilled into one eye of each panelist (n = 20), and the contralateral eye is dosed with sterile water as a control. Bulbar conjunctival erythema responses of each eye are graded at 30 seconds by an ophthalmologist. The BCOP assay permeability values (optical density at 490 nm [OD(490)]) for the 5 BWs ranged from 0.438 to 1.252 (i.e., least to most irritating). By comparison, the number of panelists exhibiting erythema responses (mild to moderately pink) ranged from 3 of 20 panelists for the least irritating BW to 10 of 20 panelists for the most irritating BW tested. The relative ranking of eye irritation potential of the 5 BWs in the BCOP assay compares favorably with the relative ranking of the BWs in the human eye sting test. Based on these findings, the permeability endpoint of the BCOP assay, as described for surfactant-based formulations, showed promise as a prediction model for relative ranking of conjunctival erythema responses in the human eye. Consequently, screening of prototype formulations in the BCOP assay would allow for formula optimization of mild bath products prior to investment in a human eye sting test.
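
The claim that the two rankings "compare favorably" amounts to a rank correlation between BCOP permeability and erythema counts. The five value pairs below are hypothetical stand-ins chosen within the reported ranges (OD(490) 0.438-1.252, responders 3-10 of 20), not the study's actual data.

```python
def ranks(xs):
    """1-based ranks of a sequence (assumes no ties, as in this illustration)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Spearman rank correlation via the classic sum-of-squared-rank-differences formula."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))

od490 = [0.44, 0.61, 0.85, 1.03, 1.25]   # hypothetical BCOP permeability values
responders = [3, 5, 6, 8, 10]            # hypothetical erythema counts (of 20)
print(spearman_rho(od490, responders))   # → 1.0: the two rankings agree exactly
```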

  2. Can we resist another person’s gaze?

    Science.gov (United States)

    Marino, Barbara F. M.; Mirabella, Giovanni; Actis-Grosso, Rossana; Bricolo, Emanuela; Ricciardelli, Paola

    2015-01-01

    Adaptive adjustments of strategies are needed to optimize behavior in a dynamic and uncertain world. A key function in implementing flexible behavior and exerting self-control is the ability to stop the execution of an action when it is no longer appropriate for the environmental requests. Importantly, stimuli in our environment are not equally relevant and some are more valuable than others. One example is the gaze of other people, which is known to convey important social information about their direction of attention and their emotional and mental states. Indeed, gaze direction has a significant impact on the execution of voluntary saccades of an observer, since it is capable of inducing in the observer an automatic gaze-following behavior: a phenomenon named social or joint attention. Nevertheless, people can exert volitional inhibitory control on saccadic eye movements during their planning. Little is known about the interaction between gaze direction signals and volitional inhibition of saccades. To fill this gap, we administered a countermanding task to 15 healthy participants in which they were asked to observe the eye region of a face with the eyes shut appearing at central fixation. In one condition, participants were required to suppress a saccade that had previously been instructed by a gaze shift toward one of two peripheral targets, when the eyes were suddenly shut (social condition, SC). In a second condition, participants were asked to inhibit a saccade that had previously been instructed by a change in color of one of the two same targets, when a change of color of a central picture occurred (non-social condition, N-SC). We found that inhibitory control was more impaired in the SC, suggesting that actions initiated and stopped by social cues conveyed by the eyes are more difficult to withhold. This is probably due to the social value intrinsically linked to these cues and the many uses we make of them. PMID:26550008

  3. Behavior of human horizontal vestibulo-ocular reflex in response to high-acceleration stimuli

    Science.gov (United States)

    Maas, E. F.; Huebner, W. P.; Seidman, S. H.; Leigh, R. J.

    1989-01-01

    The horizontal vestibulo-ocular reflex (VOR) during transient, high-acceleration (1900-7100 deg/sec-squared) head rotations was studied in four human subjects. Such stimuli perturbed the angle of gaze and caused illusory movement of a viewed target (oscillopsia). The disturbance of gaze could be attributed to the latency of the VOR (which ranged from 6-15 ms) and inadequate compensatory eye rotations (median VOR gain ranged from 0.61-0.83).

  4. [Unilateral Solar Maculopathy after Gazing at Solar Eclipse].

    Science.gov (United States)

    Mehlan, J; Linke, S J; Wagenfeld, L; Steinberg, J

    2016-06-01

    A 43-year-old male patient with unilateral metamorphopsia presented after gazing at an eclipse with only one eye. Damage of the macula was demonstrated funduscopically, with OCT and angiography. Six weeks after initial presentation and oral methylprednisolone therapy (40 mg/d for 10 days), the symptoms and the morphological changes decreased. Solar retinopathy is a photochemical alteration of the retina, usually seen after sun gazing. In younger patients, it mostly presents as bilateral solar maculopathy. Some patients exhibit partial or total recovery.

  5. Digital quantification of human eye color highlights genetic association of three new loci.

    NARCIS (Netherlands)

    F. Liu (Fan); A. Wollstein (Andreas); P.G. Hysi (Pirro); G.A. Ankra-Badu (Georgina); T.D. Spector (Timothy); D.J. Park (Daniel); G. Zhu; M. Larsson (Mats); D.L. Duffy (David); G.W. Montgomery (Grant); D.A. Mackey (David); S. Walsh (Susan); O. Lao Grueso (Oscar); A. Hofman (Albert); F. Rivadeneira Ramirez (Fernando); J.R. Vingerling (Hans); A.G. Uitterlinden (André); N.G. Martin (Nicholas); C.J. Hammond (Christopher); M.H. Kayser (Manfred)

    2010-01-01

    textabstractPrevious studies have successfully identified genetic variants in several genes associated with human iris (eye) color; however, they all used simplified categorical trait information. Here, we quantified continuous eye color variation into hue and saturation values using high-resolution

  6. Measurement of the mechanical stiffness in cyclotorsion of the human eye

    NARCIS (Netherlands)

    H.J. Simonsz (Huib)

    1984-01-01

    textabstractWe have measured the stiffness in cyclotorsion of the human eye using a scleral suction contact ring mounted on a shaft fitted with an eddy current motor to provide the torque to turn the eye and a shaft-position-encoder to register the torsion. The relation proved to be almost linear wi

  7. High-speed adaptive optics line scan confocal retinal imaging for human eye.

    Science.gov (United States)

    Lu, Jing; Gu, Boyu; Wang, Xiaolin; Zhang, Yuhua

    2017-01-01

    Continuous and rapid eye movement causes significant intra-frame distortion in adaptive optics high-resolution retinal imaging. To minimize this artifact, we developed a high-speed adaptive optics line scan confocal retinal imaging system. A high-speed line camera was employed to acquire retinal images, and custom adaptive optics was developed to compensate for the wave aberration of the human eye's optics. The spatial resolution and signal-to-noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated by the reduction of intra-frame distortion of retinal images acquired in living human eyes at frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. We demonstrated the feasibility of acquiring high-resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss.

  8. Imaging shear stress distribution and evaluating the stress concentration factor of the human eye

    Science.gov (United States)

    Joseph Antony, S.

    2015-03-01

    Healthy eyes are vital for a better quality of human life. Historically, for man-made materials, scientists and engineers use stress concentration factors to characterise the effects of structural non-homogeneities on their mechanical strength. However, such information is scarce for the human eye. Here we present the shear stress distribution profiles of a healthy human cornea surface in vivo using photo-stress analysis tomography, which is a non-intrusive and non-X-ray based method. The corneal birefringent retardation measured here is comparable to that of previous studies. Using this, we derive eye stress concentration factors and the directional alignment of major principal stress on the surface of the cornea. Similar to thermometers being used for monitoring general health in humans, this report provides a foundation to characterise the shear stress carrying capacity of the cornea, and a potential benchmark for validating theoretical modelling of stresses in the human eye in future.

  9. Efficient Avoidance of the Penalty Zone in Human Eye Movements

    Science.gov (United States)

    Theeuwes, Jan

    2016-01-01

    People use eye movements extremely effectively to find objects of interest in a cluttered visual scene. Distracting, task-irrelevant attention capturing regions in the visual field should be avoided as they jeopardize the efficiency of search. In the current study, we used eye tracking to determine whether people are able to avoid making saccades to a predetermined visual area associated with a financial penalty, while making fast and accurate saccades towards stimuli placed near the penalty area. We found that in comparison to the same task without a penalty area, the introduction of a penalty area immediately affected eye movement behaviour: the proportion of saccades to the penalty area was immediately reduced. Also, saccadic latencies increased, but quite modestly, and mainly for saccades towards stimuli near the penalty area. We conclude that eye movement behaviour is under efficient cognitive control and thus quite flexible: it can immediately be adapted to changing environmental conditions to improve reward outcome. PMID:27930724

  10. A kinematic model for 3-D head-free gaze-shifts.

    Science.gov (United States)

    Daemi, Mehdi; Crawford, J Douglas

    2015-01-01

    Rotations of the line of sight are mainly implemented by coordinated motion of the eyes and head. Here, we propose a model for the kinematics of three-dimensional (3-D) head-unrestrained gaze-shifts. The model was designed to account for major principles in the known behavior, such as gaze accuracy, spatiotemporal coordination of saccades with vestibulo-ocular reflex (VOR), relative eye and head contributions, the non-commutativity of rotations, and Listing's and Fick constraints for the eyes and head, respectively. The internal design of the model was inspired by known and hypothesized elements of gaze control physiology. Inputs included retinocentric location of the visual target and internal representations of initial 3-D eye and head orientation, whereas outputs were 3-D displacements of eye relative to the head and head relative to shoulder. Internal transformations decomposed the 2-D gaze command into 3-D eye and head commands with the use of three coordinated circuits: (1) a saccade generator, (2) a head rotation generator, (3) a VOR predictor. Simulations illustrate that the model can implement: (1) the correct 3-D reference frame transformations to generate accurate gaze shifts (despite variability in other parameters), (2) the experimentally verified constraints on static eye and head orientations during fixation, and (3) the experimentally observed 3-D trajectories of eye and head motion during gaze-shifts. We then use this model to simulate how 2-D eye-head coordination strategies interact with 3-D constraints to influence 3-D orientations of the eye-in-space, and the implications of this for spatial vision.
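
The non-commutativity the model must respect can be seen with a quick quaternion check: a 90° rotation about the x-axis followed by 90° about y is not the same rotation as the reverse order. This is a generic illustration of the mathematical point, not code from the model itself.

```python
import math

def quat(axis, deg):
    """Unit quaternion (w, x, y, z) for a rotation of `deg` degrees about `axis`."""
    h = math.radians(deg) / 2.0
    s = math.sin(h)
    return (math.cos(h), axis[0] * s, axis[1] * s, axis[2] * s)

def qmul(a, b):
    """Hamilton product: the composite rotation 'b first, then a'."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

qx = quat((1, 0, 0), 90)
qy = quat((0, 1, 0), 90)
xy = qmul(qy, qx)   # rotate about x, then about y
yx = qmul(qx, qy)   # rotate about y, then about x
print(xy, yx)       # the composites differ (z components have opposite sign)
```

Because the two composites are genuinely different rotations (not merely sign-flipped copies of one another), any gaze controller that adds rotation vectors commutatively will accumulate orientation errors, which is why the model handles 3-D orientation non-commutatively.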

  12. Comprehension of the Communicative Intent behind Pointing and Gazing Gestures by Young Children with Williams Syndrome or Down Syndrome

    Science.gov (United States)

    John, Angela E.; Mervis, Carolyn B.

    2010-01-01

    Purpose: In this study, the authors examined the ability of preschoolers with Williams syndrome (WS) or Down syndrome (DS) to infer communicative intent as expressed through gestures (pointing and eye-gaze shift). Method: Participants were given a communicative or noncommunicative cue involving pointing or gaze shifting in the context of a hiding…

  13. Eye Contact Facilitates Awareness of Faces during Interocular Suppression

    Science.gov (United States)

    Stein, Timo; Senju, Atsushi; Peelen, Marius V.; Sterzer, Philipp

    2011-01-01

    Eye contact captures attention and receives prioritized visual processing. Here we asked whether eye contact might be processed outside conscious awareness. Faces with direct and averted gaze were rendered invisible using interocular suppression. In two experiments we found that faces with direct gaze overcame such suppression more rapidly than…

  14. Models for Gaze Tracking Systems

    Directory of Open Access Journals (Sweden)

    Villanueva Arantxa

    2007-01-01

    Full Text Available One of the most confusing aspects encountered when first approaching gaze tracking technology is the wide variety, in terms of hardware equipment, of available systems that provide solutions to the same problem, that is, determining the point the subject is looking at. The calibration process generally permits adjusting nonintrusive trackers, based on quite different hardware and image features, to the subject. The negative aspect of this simple procedure is that it allows the system to work properly but at the expense of a lack of control over the intrinsic behavior of the tracker. The objective of the present article is to overcome this obstacle and explore more deeply the elements of a video-oculographic system, that is, the eye, camera, lighting, and so forth, from a purely mathematical and geometrical point of view. The main contribution is to determine the minimum number of hardware elements and image features needed to compute the point the subject is looking at. A model has been constructed based on pupil contour and multiple lighting, and successfully tested with real subjects. In addition, theoretical aspects of video-oculographic systems have been thoroughly reviewed in order to build a theoretical basis for further studies.
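
A common minimal instance of the model family discussed here is pupil-centre/corneal-reflection tracking: the vector from a glint to the pupil centre is mapped to screen coordinates through a calibration fit. The sketch below fits an assumed affine per-axis mapping by ordinary least squares on made-up calibration points; real systems use richer geometric models of the kind the article derives.

```python
def fit_line(u, t):
    """Ordinary least-squares fit of t ≈ a + b*u for one screen axis."""
    n = len(u)
    mu, mt = sum(u) / n, sum(t) / n
    b = sum((ui - mu) * (ti - mt) for ui, ti in zip(u, t)) / \
        sum((ui - mu) ** 2 for ui in u)
    return mt - b * mu, b

# hypothetical calibration data: pupil-glint vector (pixels) vs. screen target (pixels)
vectors = [(-10, -6), (0, -6), (10, -6), (-10, 6), (0, 6), (10, 6)]
targets = [(160, 120), (320, 120), (480, 120), (160, 360), (320, 360), (480, 360)]

ax_, bx = fit_line([v[0] for v in vectors], [t[0] for t in targets])
ay_, by = fit_line([v[1] for v in vectors], [t[1] for t in targets])

def gaze_point(v):
    """Map a pupil-glint vector to an estimated on-screen gaze point."""
    return (ax_ + bx * v[0], ay_ + by * v[1])

print(gaze_point((0, 0)))  # → (320.0, 240.0), the screen centre
```

The fit hides the camera/eye geometry inside two coefficients per axis, which is exactly the "works, but no control over intrinsic behavior" trade-off the abstract criticises.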

  16. NMR Spectroscopy of Human Eye Tissues: A New Insight into Ocular Biochemistry

    Directory of Open Access Journals (Sweden)

    Tomasz Kryczka

    2014-01-01

    Background. The human eye is a complex organ whose anatomy and functions have been described very well to date. Unfortunately, knowledge of the biochemistry and metabolic properties of eye tissues is uneven. Our objective was to reveal the biochemical differences between the main tissue components of human eyes. Methods. Corneas, irises, ciliary bodies, lenses, and retinas were obtained from cadaver globes 0-1/2 hours postmortem of 6 male donors (age: 44–61 years). The metabolic profile of the tissues was investigated with HR MAS 1H NMR spectroscopy. Results. A total of 29 metabolites were assigned in the NMR spectra of the eye tissues. Significant differences were revealed in the metabolite contents of the most distant eye tissues, while irises and ciliary bodies showed minimal biochemical differences. ATP, acetate, choline, glutamate, lactate, myoinositol, and taurine were identified as the primary biochemical compounds responsible for differentiation of the eye tissues. Conclusions. In this study we present, for the first time, an analysis of the main human eye tissues with NMR spectroscopy. The biochemical contents of the selected tissues appear to correspond to their primary anatomical and functional attributes, the way nutrients are delivered, and the location of the tissues in the eye.

  17. Specific eye-head coordination enhances vision in progressive lens wearers.

    Science.gov (United States)

    Rifai, Katharina; Wahl, Siegfried

    2016-09-01

    In uncorrected vision, all combinations of eye and head position are visually equivalent; thus, there is no need for a specific modification of eye-head coordination in young healthy observers. In contrast, the quality of visual input strongly depends on eye position in the majority of healthy elderly drivers, namely in progressive addition lens (PAL) wearers. For a given distance, only specific combinations of eye and head position provide clear vision to a progressive lens wearer. However, although head movements are an integral part of gaze behavior, it is not known whether eye-head coordination contributes to the enhancement of visual input in healthy individuals. In the current study we determined changes in eye-head coordination in progressive lens wearers in a challenging, high-cognitive-load task: driving. During a real-world drive on an urban round track in Stuttgart, gaze and head movements were measured in 17 PAL wearers, and eye-head coordination was compared to that of 27 controls with unrestricted vision. Head movement behavior specific to progressive lens wearers was found in head gain and in the temporal properties of head movements. Furthermore, vertical eye-head coordination was consistent only among PAL wearers. The observed differences in eye-head coordination clearly demonstrate a contribution of head movements to the enhancement of visual input in the healthy human visual system.

  18. Simple, inexpensive technique for high-quality smartphone fundus photography in human and animal eyes

    National Research Council Canada - National Science Library

    Haddock, Luis J; Kim, David Y; Mukai, Shizuo

    2013-01-01

    Purpose. We describe in detail a relatively simple technique of fundus photography in human and rabbit eyes using a smartphone, an inexpensive app for the smartphone, and instruments that are readily...

  20. Pharmacokinetics of bevacizumab after topical and intravitreal administration in human eyes

    OpenAIRE

    Moisseiev, Elad; Waisbourd, Michael; Ben-Artsi, Elad; Levinger, Eliya; Barak, Adiel; Daniels, Tad; Csaky, Karl; Loewenstein, Anat; Barequet, Irina S.

    2013-01-01

    Background Topical bevacizumab is a potential treatment modality for corneal neovascularization, and several recent studies have demonstrated its efficacy. No previous study of the pharmacokinetics of topical bevacizumab has been performed in human eyes. The purpose of this study is to investigate the pharmacokinetics of topical administration of bevacizumab in human eyes, and also to compare the pharmacokinetics of intravitreal bevacizumab injections with previously reported data. Methods Tw...

  1. [Bionic model for coordinated head-eye motion control].

    Science.gov (United States)

    Mao, Xiaobo; Chen, Tiejun

    2011-10-01

    The relationships between eye movements and head movements of primates during gaze shifts are analyzed in detail in the present paper. Applying mechanisms from neurophysiology to the engineering domain, we have improved robot eye-head coordination, and a bionic control strategy for coordinated head-eye motion is proposed. The gaze shift consists of an initial fast phase followed by a slow phase. In the fast phase, saccadic eye movements and slow head movements combine to bring gaze rapidly from its initial resting position toward the new target, while in the slow phase gaze stability and target fixation are ensured by the vestibulo-ocular reflex (VOR), in which the eyes and head rotate by equal amplitudes in opposite directions. A bionic gaze control model is given. The simulation results confirm the effectiveness of the model by comparison with the results of neurophysiology experiments.
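
    The fast-phase/slow-phase decomposition described above can be illustrated with a toy kinematic simulation (a sketch with made-up speeds and durations, not the authors' model): during the fast phase a saccade drives gaze (eye + head angle) onto the target while the head moves slowly; afterwards the VOR counter-rotates the eye against the continuing head movement so gaze stays fixed.

```python
import numpy as np

def simulate_gaze_shift(target=40.0, dt=0.001, t_fast=0.08, t_end=0.4,
                        head_speed=100.0, eye_speed=500.0):
    """Toy head-eye gaze shift; all angles in degrees, speeds in deg/s."""
    ts = np.arange(0.0, t_end, dt)
    head = np.zeros_like(ts)
    eye = np.zeros_like(ts)
    for i in range(1, len(ts)):
        # The head moves slowly toward the target throughout the shift.
        head[i] = min(head[i - 1] + head_speed * dt, target)
        if ts[i] < t_fast:
            # Fast phase: a saccade drives gaze (eye + head) onto the target.
            gaze_err = target - (eye[i - 1] + head[i])
            eye[i] = eye[i - 1] + np.clip(gaze_err, -eye_speed * dt, eye_speed * dt)
        else:
            # Slow phase: VOR rotates the eye opposite the head, equal amplitude.
            eye[i] = eye[i - 1] - (head[i] - head[i - 1])
    return ts, eye, head

ts, eye, head = simulate_gaze_shift()
gaze = eye + head
```

    Because the VOR step cancels the head increment exactly, gaze remains on target throughout the slow phase even though the head is still moving.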

  2. Dating the time of birth: a radiocarbon calibration curve for human eye lens crystallines

    DEFF Research Database (Denmark)

    Kjeldsen, Henrik; Heinemeier, Jan; Heegaard, Steffen

    2010-01-01

    Radiocarbon bomb-pulse dating has been used to measure the formation age of human eye-lens crystallines. Lens crystallines are special proteins in the eye lens that consist of virtually inert tissue. The experimental data show that the radiocarbon ages to a large extent reflect the time of birth, in accordance with expectations. Moreover, it has been possible to develop an age model for the formation of the eye-lens crystallines. From this model a radiocarbon calibration curve for lens crystallines has been calculated. As a consequence, the time of birth of humans can be determined with an accuracy…

  3. Measuring Human Performance in Simulated Nuclear Power Plant Control Rooms Using Eye Tracking

    Energy Technology Data Exchange (ETDEWEB)

    Kovesdi, Casey Robert [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rice, Brandon Charles [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bower, Gordon Ross [Idaho National Lab. (INL), Idaho Falls, ID (United States); Spielman, Zachary Alexander [Idaho National Lab. (INL), Idaho Falls, ID (United States); Hill, Rachael Ann [Idaho National Lab. (INL), Idaho Falls, ID (United States); LeBlanc, Katya Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

    Control room modernization will be an important part of life extension for the existing light water reactor fleet. As part of modernization efforts, personnel will need to gain a full understanding of how control room technologies affect the performance of human operators. Recent advances in technology enable the use of eye tracking to continuously measure an operator's eye movements, which correlate with a variety of human performance constructs such as situation awareness and workload. This report describes eye tracking metrics in the context of how they will be used in nuclear power plant control room simulator studies.
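
    Workload and situation-awareness metrics of the kind described here are typically computed from fixations and saccades extracted from the raw gaze stream. One common way to do that extraction is a dispersion-threshold (I-DT) fixation detector; a minimal sketch with illustrative thresholds (not specific to the report):

```python
import numpy as np

def detect_fixations(x, y, ts, max_dispersion=1.0, min_duration=0.1):
    """I-DT fixation detection: a window of gaze samples is a fixation when
    its spatial dispersion (x range plus y range) stays under a threshold for
    at least min_duration seconds. Returns (start, end) sample-index pairs."""
    fixations = []
    i, n = 0, len(ts)
    while i < n:
        j = i
        # Grow the window while dispersion stays below the threshold.
        while j + 1 < n:
            wx, wy = x[i:j + 2], y[i:j + 2]
            if (wx.max() - wx.min()) + (wy.max() - wy.min()) > max_dispersion:
                break
            j += 1
        if ts[j] - ts[i] >= min_duration:
            fixations.append((i, j))
            i = j + 1
        else:
            i += 1
    return fixations

# Synthetic 60 Hz trace: a fixation, a saccade at t = 0.5 s, a second fixation.
ts = np.arange(0, 1.0, 1 / 60)
x = np.where(ts < 0.5, 0.0, 10.0)
y = np.zeros_like(ts)
fix = detect_fixations(x, y, ts)
```

    Fixation counts, mean fixation duration, and dwell time per area of interest then follow directly from the detected index pairs.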

  4. Real-time non-invasive eyetracking and gaze-point determination for human-computer interaction and biomedicine

    Science.gov (United States)

    Talukder, Ashit; Morookian, John-Michael; Monacos, S.; Lam, R.; Lebaw, C.; Bond, A.

    2004-01-01

    Eyetracking is one of the latest technologies that has shown potential in several areas including human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological problems in individuals.

  5. Investigating social gaze as an action-perception online performance

    Directory of Open Access Journals (Sweden)

Ouriel Grynszpan

    2012-04-01

    In interpersonal interactions, linguistic information is complemented by non-linguistic information originating largely from facial expressions. The study of online face-to-face social interaction thus entails investigating the multimodal simultaneous processing of oral and visual percepts. Moreover, gaze in and of itself functions as a powerful communicative channel. In this respect, gaze should not be examined as a purely perceptive process but also as an active social performance. We designed a task involving multimodal deciphering of social information based on virtual characters, embedded in naturalistic backgrounds, who directly address the participant with non-literal speech and meaningful facial expressions. Eighteen adult participants had to interpret an equivocal sentence which could be disambiguated by examining the emotional expressions of the character speaking to them face-to-face. To examine self-control and self-awareness of gaze in this context, visual feedback was provided to the participant by a real-time gaze-contingent viewing window centered on the focal point, while the rest of the display was blurred. Eye-tracking data showed that the viewing window induced changes in gaze behaviour, notably longer visual fixations. Notwithstanding, only half the participants ascribed the window displacements to their eye movements. These results highlight the dissociation between non-volitional gaze adaptation and self-ascription of agency. Such dissociation provides support for a two-step account of the sense of agency composed of pre-noetic monitoring mechanisms and reflexive processes. We comment upon these results, which illustrate the relevance of our method for studying online social cognition, in particular concerning Autism Spectrum Disorders (ASD), where poor pragmatic understanding of oral speech is considered linked to visual peculiarities that impede face exploration.

  6. High-speed adaptive optics line scan confocal retinal imaging for human eye

    Science.gov (United States)

    Wang, Xiaolin; Zhang, Yuhua

    2017-01-01

    Purpose Continuous and rapid eye movement causes significant intra-frame distortion in adaptive optics high-resolution retinal imaging. To minimize this artifact, we developed a high-speed adaptive optics line scan confocal retinal imaging system. Methods A high-speed line camera was employed to acquire retinal images, and custom adaptive optics was developed to compensate for the wave aberration of the human eye's optics. The spatial resolution and signal-to-noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated by the reduction of intra-frame distortion of retinal images acquired in living human eyes at frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. Results The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. Conclusions We demonstrated the feasibility of acquiring high-resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss. PMID:28257458

  7. Demo of Gaze Controlled Flying

    DEFF Research Database (Denmark)

    Alapetite, Alexandre; Hansen, John Paulin; Scott MacKenzie, I.

    2012-01-01

    Development of a control paradigm for unmanned aerial vehicles (UAV) is a new challenge to HCI. The demo explores how to use gaze as input for locomotion in 3D. A low-cost drone will be controlled by tracking user’s point of regard (gaze) on a live video stream from the UAV....

  8. Integrating eye tracking and motion sensor on mobile phone for interactive 3D display

    Science.gov (United States)

    Sun, Yu-Wei; Chiang, Chen-Kuo; Lai, Shang-Hong

    2013-09-01

    In this paper, we propose an eye tracking and gaze estimation system for mobile phones. We integrate an eye detector with eye-corner and iso-center cues to improve pupil detection, and use optical flow information for eye tracking. The resulting system robustly combines eye detection with optical-flow-based image tracking. In addition, we incorporate orientation sensor information from the mobile phone to improve eye tracking for accurate gaze estimation. We demonstrate the accuracy of the proposed eye tracking and gaze estimation system through experiments on public video sequences as well as videos acquired directly from a mobile phone.
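
    The fusion of optical-flow tracking with the phone's orientation sensor can be sketched, in spirit, as a complementary filter: device rotation predicts how the gaze estimate should shift between frames, and the image-based measurement pulls the estimate back, limiting drift. This is an illustrative stand-in, not the authors' algorithm; the gains `k` and `alpha` are made up.

```python
import numpy as np

def fuse_gaze(flow_gaze, orientation, k=0.1, alpha=0.9):
    """Blend per-frame optical-flow gaze estimates with device orientation.

    The orientation sensor predicts how the gaze estimate should shift when
    the phone itself moves (gain k maps degrees of tilt to gaze units); the
    image-based measurement corrects the prediction, limiting drift.
    """
    fused = np.empty_like(np.asarray(flow_gaze, dtype=float))
    fused[0] = flow_gaze[0]
    for i in range(1, len(fused)):
        predicted = fused[i - 1] + k * (orientation[i] - orientation[i - 1])
        fused[i] = alpha * predicted + (1 - alpha) * flow_gaze[i]
    return fused

# Sanity check: a steady gaze estimate with a motionless phone stays put.
fused = fuse_gaze(np.full(10, 5.0), np.zeros(10))
```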

  9. Comprehension and utilisation of pointing gestures and gazing in dog-human communication in relatively complex situations.

    Science.gov (United States)

    Lakatos, Gabriella; Gácsi, Márta; Topál, József; Miklósi, Adám

    2012-03-01

    The aim of the present investigation was to study the visual communication between humans and dogs in relatively complex situations. In the present research, we have modelled more lifelike situations in contrast to previous studies which often relied on using only two potential hiding locations and direct association between the communicative signal and the signalled object. In Study 1, we have provided the dogs with four potential hiding locations, two on each side of the experimenter to see whether dogs are able to choose the correct location based on the pointing gesture. In Study 2, dogs had to rely on a sequence of pointing gestures displayed by two different experimenters. We have investigated whether dogs are able to recognise an 'indirect signal', that is, a pointing toward a pointer. In Study 3, we have examined whether dogs can understand indirect information about a hidden object and direct the owner to the particular location. Study 1 has revealed that dogs are unlikely to rely on extrapolating precise linear vectors along the pointing arm when relying on human pointing gestures. Instead, they rely on a simple rule of following the side of the human gesturing. If there were more targets on the same side of the human, they showed a preference for the targets closer to the human. Study 2 has shown that dogs are able to rely on indirect pointing gestures but the individual performances suggest that this skill may be restricted to a certain level of complexity. In Study 3, we have found that dogs are able to localise the hidden object by utilising indirect human signals, and they are able to convey this information to their owner.

  10. Nonlinear solution for radiation boundary condition of heat transfer process in human eye.

    Science.gov (United States)

    Dehghani, A; Moradi, A; Dehghani, M; Ahani, A

    2011-01-01

    In this paper we propose a new method, based on the finite element method, for handling the radiation boundary condition of the heat equation inside the human eye, among other applications. Using this method, we can solve the heat equation inside the human eye without approximating the radiation boundary condition by a Robin boundary condition. The finite element discretization yields a nonlinear system of equations, which we solve with a nonlinear algorithm. The human eye is modeled as a composition of several homogeneous regions, and the Ritz formulation of the finite element method is used to solve the heat differential equation. For the boundary conditions, the heat radiation condition is applied on the corneal surface and the Robin condition on the outer part of the sclera. Simulation results for this nonlinear boundary condition show the accuracy of the proposed method.
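
    The core difficulty addressed here, a boundary flux that is nonlinear in temperature via the Stefan-Boltzmann law, can be seen in miniature in a single-unknown version: balancing conduction through a slab against radiation from its surface and solving for the surface temperature with Newton's method. The material values below (water-like conductivity, corneal-range emissivity) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Steady 1-D slab: conduction from a fixed core temperature T0 through
# thickness L must balance radiation from the surface:
#   k*(T0 - Ts)/L = eps*sigma*(Ts**4 - Tamb**4)
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temperature(T0=310.0, Tamb=293.0, k=0.58, L=0.005, eps=0.975,
                        tol=1e-10, max_iter=50):
    Ts = T0  # initial guess: surface at core temperature
    for _ in range(max_iter):
        f = k * (T0 - Ts) / L - eps * SIGMA * (Ts**4 - Tamb**4)
        df = -k / L - 4.0 * eps * SIGMA * Ts**3  # derivative df/dTs
        step = f / df
        Ts -= step  # Newton update
        if abs(step) < tol:
            break
    return Ts

Ts = surface_temperature()
# Residual of the flux balance at the converged surface temperature.
residual = 0.58 * (310.0 - Ts) / 0.005 - 0.975 * SIGMA * (Ts**4 - 293.0**4)
```

    In a finite element setting the same Newton linearization is applied to the boundary rows of the assembled nonlinear system rather than to a single scalar equation.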

  11. Early Left Parietal Activity Elicited by Direct Gaze: A High-Density EEG Study

    Science.gov (United States)

    Burra, Nicolas; Kerzel, Dirk; George, Nathalie

    2016-01-01

    Gaze is one of the most important cues for human communication and social interaction. In particular, gaze contact is the most primary form of social contact and is thought to capture attention. A very early differentiated brain response to direct versus averted gaze has been hypothesized. Here, we used high-density electroencephalography to test this hypothesis. Topographical analysis allowed us to uncover a very early topographic modulation (40–80 ms) of event-related responses to faces with direct as compared to averted gaze. This modulation was obtained only in the condition where intact broadband faces, as opposed to high-pass or low-pass filtered faces, were presented. Source estimation indicated that this early modulation involved the posterior parietal region, encompassing the left precuneus and inferior parietal lobule. This supports the idea that it reflects an early orienting response to direct versus averted gaze. Accordingly, in a follow-up behavioural experiment, we found faster response times to direct-gaze than to averted-gaze broadband faces. In addition, classical evoked potential analysis showed that the N170 peak amplitude was larger for averted than for direct gaze. Taken together, these results suggest that direct gaze may be detected at a very early processing stage, involving a route parallel to the ventral occipito-temporal route of face perceptual analysis. PMID:27880776

  12. Facial Expressions Modulate the Ontogenetic Trajectory of Gaze-Following among Monkeys

    Science.gov (United States)

    Teufel, Christoph; Gutmann, Anke; Pirow, Ralph; Fischer, Julia

    2010-01-01

    Gaze-following, the tendency to direct one's attention to locations looked at by others, is a crucial aspect of social cognition in human and nonhuman primates. Whereas the development of gaze-following has been intensely studied in human infants, its early ontogeny in nonhuman primates has received little attention. Combining longitudinal and…

  13. Concept of a human eye camera to assess laser dazzling interaction

    Science.gov (United States)

    Koerber, Michael; Eberle, Bernd

    2016-10-01

    The increasing availability and application of various laser sources pose a growing threat to the human eye. Not only actual damage to the retina or other parts of the eye, but also dazzling during critical tasks, has to be faced. However, experiments to verify the actual threat due to dazzling are problematic, as it is almost never possible, or even reasonable, to dazzle human observers. Faced with this dilemma, we propose to construct a camera that mimics the human eye's perception of laser dazzle as closely as possible. The human eye camera consists of a hardware and a software component, which together perform the several tasks of the eye. The hardware controls the eye movement (saccadic viewing), the adaptation of the iris (irradiance control), and the projection of the image onto a sensor. The software receives the image taken by the sensor and models the density of receptors, the retinal neural operations, and a feedback loop to the hardware. The images processed by the virtual retina are meant to be used to evaluate the degree of dazzle as it would appear to a human observer.

  14. The More You Look the More You Get: Intention-Based Interface Using Gaze-Tracking.

    Science.gov (United States)

    Milekic, Slavko

    Only a decade ago eye- and gaze-tracking technologies using cumbersome and expensive equipment were confined to university research labs. However, rapid technological advancements (increased processor speed, advanced digital video processing) and mass production have both lowered the cost and dramatically increased the efficacy of eye- and…

  15. Does gaze direction modulate facial expression processing in children with autism spectrum disorder?

    Science.gov (United States)

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent motivational tendency (i.e., an avoidant facial expression with averted eye gaze) than those with an incongruent motivational tendency. Children with ASD (9-14 years old; n = 14) were not affected by the gaze direction of facial stimuli. This finding was replicated in Experiment 2, which presented only the eye region of the face to typically developing children (n = 10) and children with ASD (n = 10). These results demonstrated that children with ASD do not encode and/or integrate multiple communicative signals based on their affective or motivational tendency.

  16. Gaze direction affects the processing of emotional faces among socially anxious individuals: an eye-tracking study

    Institute of Scientific and Technical Information of China (English)

Li Dan; Zhu Chunyan; Wang Kai; Yu Fengqiong; Ye Rong; Xie Xinhui; Liu Yunfeng; Li Dandan

    2013-01-01

    Objective To explore the effects of gaze direction on the processing of facial expressions among socially anxious individuals. Methods 56 students were selected from Anhui Medical University. Based on Liebowitz Social Anxiety Scale (LSAS) scores, the subjects were grouped into high socially anxious (HSA) and low socially anxious (LSA) individuals. Eye movements were recorded while pairs of disgust and neutral faces were presented as experimental stimuli. Results Under the direct-gaze condition, there was a significant difference (P < 0.05) between the total dwell time on disgust faces ((2311.09 ± 521.41) ms) and on neutral faces ((1910.69 ± 607.59) ms) in the HSA group, while there was no significant difference in the LSA group (P > 0.05). Under the averted-gaze condition, the total dwell times did not differ significantly between disgust and neutral faces in either group (P > 0.05). Conclusion Gaze direction affects the processing of facial expressions among socially anxious individuals. A disgust face with direct gaze may be perceived as socially threatening information by socially anxious individuals, whereas a disgust face with averted gaze is not clear social threat information.

  17. iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker.

    Science.gov (United States)

    Mayberry, Addison; Hu, Pan; Marlin, Benjamin; Salthouse, Christopher; Ganesan, Deepak

    2014-06-01

    Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70 mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees.
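
    The pixel-subset idea can be illustrated, in much-simplified form, with a linear gaze model trained by proximal gradient descent (ISTA) under an L1 penalty: the soft-thresholding step drives most pixel weights exactly to zero, so only an informative subset of pixels survives. This is a toy linear stand-in for the paper's sparsity-regularized neural network; all sizes and the penalty weight are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_frames = 100, 1000
X = rng.normal(size=(n_frames, n_pixels))     # pixel intensities per frame
true_w = np.zeros(n_pixels)
true_w[[3, 17, 42]] = [2.0, -1.5, 1.0]        # only 3 pixels carry gaze signal
y = X @ true_w + 0.01 * rng.normal(size=n_frames)

lam = 0.1                                      # L1 penalty weight
lr = n_frames / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant of the loss
w = np.zeros(n_pixels)
for _ in range(500):
    grad = X.T @ (X @ w - y) / n_frames        # gradient of the mean squared error
    w -= lr * grad
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft threshold
selected = np.flatnonzero(w)
```

    A hardware platform with random pixel access can then read out only the selected pixels each frame, which is where the power saving comes from.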

  18. The Microstructure of Infants' Gaze as They View Adult Shifts in Overt Attention

    Science.gov (United States)

    Gredeback, Gustaf; Theuring, Carolin; Hauf, Petra; Kenward, Ben

    2008-01-01

    We presented infants (5, 6, 9, and 12 months old) with movies in which a female model turned toward and fixated 1 of 2 toys placed on a table. Infants' gaze was measured using a Tobii 1750 eye tracker. Six-, 9-, and 12-month-olds' first gaze shift from the model's face (after the model started turning) was directed to the attended toy. The…

  19. Age-related changes in the integration of gaze direction and facial expressions of emotion.

    Science.gov (United States)

    Slessor, Gillian; Phillips, Louise H; Bull, Rebecca

    2010-08-01

    Gaze direction influences younger adults' perception of emotional expressions, with direct gaze enhancing the perception of anger and joy, while averted gaze enhances the perception of fear. Age-related declines in emotion recognition and eye-gaze processing have been reported, indicating that there may be age-related changes in the ability to integrate these facial cues. As there is evidence of a positivity bias with age, age-related difficulties integrating these cues may be greatest for negative emotions. The present research investigated age differences in the extent to which gaze direction influenced explicit perception (e.g., anger, fear and joy; Study 1) and social judgments (e.g., of approachability; Study 2) of emotion faces. Gaze direction did not influence the perception of fear in either age group. In both studies, age differences were found in the extent to which gaze direction influenced judgments of angry and joyful faces, with older adults showing less integration of gaze and emotion cues than younger adults. Age differences were greatest when interpreting angry expressions. Implications of these findings for older adults' social functioning are discussed.

  20. How does gaze direction affect facial processing in social anxiety? -An ERP study.

    Science.gov (United States)

    Li, Dan; Yu, Fengqiong; Ye, Rong; Chen, Xingui; Xie, Xinhui; Zhu, Chunyan; Wang, Kai

    2017-02-09

    Previous behavioral studies have demonstrated an effect of eye gaze direction on the processing of emotional expressions in adults with social anxiety. However, specific brain responses to the interaction between gaze direction and facial expressions in social anxiety remain unclear. The present study aimed to explore the time course of such interaction using event-related potentials (ERPs) in participants with social anxiety. High socially anxious individuals and low socially anxious individuals were asked to identify the gender of angry or neutral faces with direct or averted gaze while their behavioral performance and electrophysiological data were monitored. We found that identification of angry faces with direct but not averted gaze elicited larger N2 amplitude in high socially anxious individuals compared to low socially anxious individuals, while identification of neutral faces did not produce any gaze modulation effect. Moreover, the N2 was correlated with increased anxiety severity upon exposure to angry faces with direct gaze. Therefore, our results suggest that gaze direction modulates the processing of threatening faces in social anxiety. The N2 component elicited by angry faces with direct gaze could be a state-dependent biomarker of social anxiety and may be an important reference biomarker for social anxiety diagnosis and intervention.

  1. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain-machine interfaces.

    Science.gov (United States)

    Abbott, W W; Faisal, A A

    2012-08-01

    Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as a control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with a 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits s⁻¹, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs, our system yields effective real-time closed-loop control of devices (10 ms latency) after just ten minutes of training, which we demonstrate through a novel BMI benchmark: the control of the video arcade game 'Pong'.
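
    The step from 2D to 3D gaze described above amounts to triangulating the two eyes' gaze rays. A minimal sketch using the standard midpoint-of-closest-approach construction for two, generally skew, lines; the eye positions and fixation target below are made-up numbers, not the paper's geometry:

```python
import numpy as np

def gaze_point_3d(o_left, d_left, o_right, d_right):
    """Triangulate the 3D point of regard as the midpoint of the shortest
    segment between the two gaze rays. o_* are eye positions (meters),
    d_* are gaze directions (normalized internally)."""
    d_left = d_left / np.linalg.norm(d_left)
    d_right = d_right / np.linalg.norm(d_right)
    w0 = o_left - o_right
    a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
    d, e = d_left @ w0, d_right @ w0
    denom = a * c - b * b                 # zero only for parallel rays
    s = (b * e - c * d) / denom           # parameter along the left ray
    t = (a * e - b * d) / denom           # parameter along the right ray
    p_left = o_left + s * d_left
    p_right = o_right + t * d_right
    return 0.5 * (p_left + p_right)

# Eyes 6 cm apart fixating a target 40 cm ahead and 5 cm to the right.
target = np.array([0.05, 0.0, 0.40])
o_l, o_r = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])
p = gaze_point_3d(o_l, target - o_l, o_r, target - o_r)
```

    With noisy per-eye gaze directions the two rays no longer intersect, and the midpoint of the closest-approach segment is the natural least-squares estimate of the fixation depth.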

  3. The expression of Mas-receptor of the renin-angiotensin system in the human eye.

    Science.gov (United States)

    Vaajanen, A; Kalesnykas, G; Vapaatalo, H; Uusitalo, H

    2015-07-01

    The local renin-angiotensin system has been held to be expressed in many organs, including the eye. It has an important role in the regulation of local fluid homeostasis, cell proliferation, fibrosis, and vascular tone. Mas-receptor (Mas-R) is a potential receptor acting mainly opposite to the well-known angiotensin II receptor type 1. The aim of this study was to determine if Mas-R is expressed in the human eye. Seven enucleated human eyes were used in immunohistochemical detection of Mas-R and its endogenous ligand angiotensin (1-7) [Ang(1-7)]. Both light microscopy and immunofluorescent detection methods were used. A human kidney preparation sample was used as control. The Mas-R was found to have nuclear localization, and localized in the retinal nuclear layers and in the structures of the anterior segment of the eye. A cytoplasmic immunostaining pattern of Ang(1-7) was found in the inner and outer nuclear and plexiform layers of the retina and in the ciliary body. To the best of our knowledge, this is the first report showing Mas-R expression in the human eye. Its localization suggests that it may have a role in physiological and pathological processes in the anterior part of the eye and in the retina.

  4. Research of the Exploitation of Human Resources in Blind Prevention and Primary Eye Care

    Institute of Scientific and Technical Information of China (English)

    JingxianWei; YonglongZhao

    1995-01-01

    Purpose: This research studied how to establish a relatively advanced blindness prevention and eye care cause in an economically underdeveloped countryside. Methods: Ophthalmic vocational schools and professional lectures were held to train “practical type” primary eye care workers for the countryside. Further study in high-level (above provincial) hospitals was undertaken to train blindness prevention and eye care backbones and leaders. Results: In 1986, the ratio of the number of eye care workers of all levels to the whole population in the prefecture was 1:26000. In 1992, it rose to 1:17000. An eye care network of 222 stations had been established in the countryside. Ten of the 13 county hospitals had a separate ophthalmology department, of which 3 were awarded “National Advanced Blindness Prevention County”. Twenty-one hospitals were appointed as Units of Surgical Vision-Rehabilitation of Cataract. Blindness prevention and eye care covered a population of 1,000,000 (eye care available within 5 kilometers), 23.5% of the whole population. Conclusions: In a demographically large but economically underdeveloped countryside area, the key to wide-range blindness prevention and eye care is to exploit human resources effectively. We should train “practical type” primary eye care workers and have a number of department leaders who are authoritative, influential in this field, and ready to sacrifice for this cause.

  5. Effect of tissue and atmosphere's parameters on human eye temperature distribution.

    Science.gov (United States)

    Firoozan, Mohammad Sadegh; Porkhial, Soheil; Nejad, Ali Salmani

    2015-01-01

    A three dimensional finite element method analysis was employed to investigate the effect of tissue and atmosphere parameters namely, ambient temperature, ambient convection coefficient, local blood temperature, and blood convection coefficient upon temperature distribution of human eyes. As a matter of simplification, only eye ball and skull bone are considered as the system of eye modeling. Decreasing the local blood temperature and keeping it cool is one of the most important ways to control bleeding during surgeries. By lower temperature of body organs such as the eye, the need for oxygenated blood is reduced, allowing for an extension in time for surgery. With this in mind, this study is done to see which one of parameters, such as ambient temperature, ambient convection coefficient, local blood temperature, and blood convection coefficient, has an effective role in decreasing the temperature of the eye. To this end, 3 different paths were employed to find out about the temperature distribution through the eye. The analysis of the three paths demonstrates the interaction of ambient and blood temperature in modeling temperature changes in specific locations of the eye. These data will be important in applications such as eye surgery, relaxation, and sleep therapy.

  6. Infant Eyes: A Window on Cognitive Development

    Science.gov (United States)

    Aslin, Richard N.

    2012-01-01

    Eye-trackers suitable for use with infants are now marketed by several commercial vendors. As eye-trackers become more prevalent in infancy research, there is the potential for users to be unaware of dangers lurking "under the hood" if they assume the eye-tracker introduces no errors in measuring infants' gaze. Moreover, the influx of voluminous…

  7. Activation of neural progenitor cells in human eyes with proliferative vitreoretinopathy.

    Science.gov (United States)

    Johnsen, Erik O; Frøen, Rebecca C; Albert, Réka; Omdal, Bente K; Sarang, Zsolt; Berta, András; Nicolaissen, Bjørn; Petrovski, Goran; Moe, Morten C

    2012-05-01

    In addition to the ability for self-renewal and functional differentiation, neural stem/progenitor cells (NSCs) can respond to CNS injuries by targeted migration. In lower vertebrates, retinal injury is known to activate NSCs in the ciliary marginal zone (CMZ). Cells expressing markers of NSCs are also present in the ciliary body epithelium (CE) and in Müller glia in the peripheral retina (PR) of the adult human eye. However, these cells seem to be quiescent in the adult human eye and recent reports have shown that CE cells have limited properties of NSCs. In order to further clarify whether NSCs exist in the adult human eye, we tested whether NSC-like cells could be activated in eyes with proliferative vitreoretinopathy (PVR). The PR and CE were studied for NSC-associated markers in human enucleated control eyes and eyes with confirmed PVR, as well as in a mouse model of PVR. Furthermore, cells isolated from vitreous samples obtained during vitrectomies for retinal detachment were directly fixed or cultured in a stem cell-promoting medium and compared to cells cultured from the post-mortem retina and CE. In situ characterization of the normal eyes revealed robust expression of markers present in NSCs (Nestin, Sox2, Pax6) only around peripheral cysts of the proximal pars plana region and the PR, the latter population also staining for the glial marker GFAP. Although there were higher numbers of dividing cells in the CE of PVR eyes than in controls, we did not detect NSC-associated markers in the CE except around the proximal pars plana cysts. In the mouse PVR eyes, Nestin activation was also found in the CE. In human PVR eyes, proliferation of both non-glial and glial cells co-staining NSC-associated markers was evident around the ora serrata region. Spheres formed in 7/10 vitreous samples from patients with PVR compared to 2/15 samples from patients with no known PVR, and expressed glial- and NSC-associated markers both after direct fixation and repetitive

  8. The Human Eye Position Control System in a Rehabilitation Setting

    Directory of Open Access Journals (Sweden)

    Yvonne Nolan

    2005-01-01

    Full Text Available Our work at Ireland’s National Rehabilitation Hospital involves designing communication systems for people suffering from profound physical disabilities. One such system uses the electro-oculogram, which is an (x, y) system of voltages picked up by pairs of electrodes placed, respectively, above and below and on either side of the eyes. The eyeball has a dc polarisation between cornea and back, arising from the photoreceptor rods and cones in the retina. As the eye rotates, the varying voltages projected onto the electrodes drive a cursor over a mimic keyboard on a computer screen. Symbols are selected with a switching action derived, for example, from a blink. Experience in using this mode of communication has given us limited facilities to study the eye position control system. We present here a resulting new feedback model for rotation in either the vertical or the horizontal plane, which involves the eyeball controlled by an agonist-antagonist muscle pair, modelled by a single equivalent bidirectional muscle with torque falling off linearly with angular velocity. We have incorporated muscle spindles and have tuned them by pole assignment associated with an optimum stability criterion.

  9. Elastic modulus of orbicularis oculi muscle in normal humans, humans with Graves' eye disease, and cynomolgus monkeys.

    Science.gov (United States)

    Oestreicher, J H; Frueh, B R

    1995-06-01

    We built an experimental apparatus to investigate the passive elastic characteristics of orbicularis oculi muscle and examined specimens from normal humans, humans with stable Graves' eye disease, and cynomolgus monkeys. Stress-strain curves were determined and found to be exponential. The elastic modulus (Young's modulus), analogous to the stiffness of the material, was calculated as a function of strain. Elastic modulus as a function of instantaneous stress was linear. Monkey elastic modulus values were determined, but did not allow meaningful interspecies comparison because of the small sample size. No significant difference was found between normal humans and humans with Graves' eye disease with respect to elastic modulus values.
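    The two observations above are consistent: an exponential stress-strain curve implies a tangent modulus that is linear in stress, since if σ = A(e^{Bε} − 1), then E = dσ/dε = B(σ + A). The following numerical check uses made-up constants A and B, not values from the study.

```python
import numpy as np

# Illustrative exponential stress-strain law sigma = A*(exp(B*eps) - 1);
# A and B are invented constants, not fitted values from the paper.
A, B = 2.0e3, 8.0                      # Pa, dimensionless
eps = np.linspace(0.0, 0.3, 100)       # strain
sigma = A * (np.exp(B * eps) - 1.0)    # stress, Pa

# Tangent elastic modulus: E = d(sigma)/d(eps) = B*(sigma + A),
# i.e. a linear function of the instantaneous stress.
E_analytic = B * (sigma + A)
E_numeric = np.gradient(sigma, eps)    # finite-difference derivative

print(np.allclose(E_numeric, E_analytic, rtol=0.02))  # True
```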

  10. To Estimate the Axial Elastic Modulus of Eye and Posterior Wall Thickness in Healthy Human Eye by Ultrasound Images and their Relation with Age and Gender

    Directory of Open Access Journals (Sweden)

    S. Shahbazi

    2007-06-01

    Full Text Available Introduction: Invasive studies have shown that factors such as age, the progress of eye disorders, compression of lens fibers, and biochemical changes of the ocular matrix alter the physical characteristics and elastic properties of the eye. In this study, a noninvasive method of estimating human eye elasticity is proposed and its relation with age and gender is evaluated using ultrasound images. Materials and Methods: To estimate eye elasticity, a special loading system was designed and an external stress of 2614 ± 146 Pa, less than the intraocular pressure of the eye, was applied to 20 eyes in an in vivo study. The pressure was measured using a digital force gauge. B-mode ultrasound images were acquired prior to and after applying the stress. For offline study throughout the loading process, the ultrasound images were saved as multi-frames to the computer by a video grabber board. Monitoring, saving, and further study of the images allowed extraction of the eye axial length and posterior wall thickness (PWT). The elasticity was estimated by measuring the relative changes of the axial length of the eye, the posterior wall thickness, and the applied stress. The statistical correlation of the elastic modulus with age and gender was analyzed. Results: The elastic moduli of the eye and of the posterior wall were estimated to be 51777 ± 27304 Pa and 14603 ± 4636 Pa, respectively. The results indicated no significant difference in the elastic parameters of the eye and the posterior wall thickness between male and female groups. Correlation analysis showed a significant difference in the elastic parameters of the eye and the posterior wall thickness with age, at a 95% confidence interval. Discussion and Conclusion: Based on the results obtained in this study, ultrasonic instruments might be used to estimate the hardness of eye lesions as well as eye
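    The estimation procedure described (relative change in axial length under a known external stress) reduces to Young's relation E = σ/ε with ε = ΔL/L. A minimal sketch follows; the axial lengths are hypothetical numbers chosen for illustration, not measurements from the study.

```python
def axial_elastic_modulus(stress_pa, length_before_mm, length_after_mm):
    """Estimate Young's modulus E = stress / strain, where strain is the
    relative change in eye axial length measured from ultrasound images."""
    strain = abs(length_after_mm - length_before_mm) / length_before_mm
    return stress_pa / strain

# Hypothetical numbers: 2614 Pa applied stress, axial length
# shortening from 23.50 mm to 22.32 mm under load.
E = axial_elastic_modulus(2614.0, 23.50, 22.32)
print(f"{E:.0f} Pa")  # on the order of 5e4 Pa, comparable to the reported mean
```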

  11. Proton Dose Assessment to the Human Eye Using Monte Carlo N-Particle Transport Code (MCNPX)

    Science.gov (United States)

    2006-08-01

    The objective of this project was to develop a simple MCNPX model of the human eye to approximate dose delivered from proton therapy. The calculated dose...computer code MCNPX that approximates dose delivered during proton therapy. The calculations considered proton interactions and secondary interactions...Volume Calculation: The MCNPX code has limited ability to compute the volumes of defined cells. The dosimetric volumes in the outer wall of the eye are

  12. Directional eye fixation sensor using birefringence-based foveal detection

    Science.gov (United States)

    Gramatikov, Boris I.; Zalloum, Othman H. Y.; Wu, Yi Kai; Hunter, David G.; Guyton, David L.

    2007-04-01

    We recently developed and reported an eye fixation monitor that detects the fovea by its radial orientation of birefringent nerve fibers. The instrument used a four-quadrant photodetector and a normalized difference function to check for a best match between the detector quadrants and the arms of the bow-tie pattern of polarization states surrounding the fovea. This function had a maximum during central fixation but could not tell where the subject was looking relative to the center. We propose a linear transformation to obtain horizontal and vertical eye position coordinates from the four photodetector signals, followed by correction based on a priori calibration information. The method was verified on both a computer model and on human eyes. The major advantage of this new eye-tracking method is that it uses true information coming from the fovea, rather than reflections from other structures, to identify the direction of foveal gaze.
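    The linear transformation from the four photodetector signals to horizontal and vertical eye-position coordinates is not spelled out in the abstract. The standard difference-over-sum quadrant-detector formulas give a plausible sketch; the quadrant layout and normalization here are assumptions, and the paper's calibration-based correction step is omitted.

```python
def quadrant_to_xy(q_a, q_b, q_c, q_d):
    """Map four-quadrant photodetector signals to a normalized (x, y)
    position via difference-over-sum. Assumed layout: A upper-left,
    B upper-right, C lower-left, D lower-right."""
    total = q_a + q_b + q_c + q_d
    x = ((q_b + q_d) - (q_a + q_c)) / total   # right minus left
    y = ((q_a + q_b) - (q_c + q_d)) / total   # top minus bottom
    return x, y

# Equal illumination of all quadrants -> centered estimate (0, 0)
print(quadrant_to_xy(1.0, 1.0, 1.0, 1.0))  # (0.0, 0.0)
```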

  13. Eye-head coordination in cats.

    Science.gov (United States)

    Guitton, D; Douglas, R M; Volle, M

    1984-12-01

    Gaze is the position of the visual axis in space and is the sum of the eye movement relative to the head plus head movement relative to space. In monkeys, a gaze shift is programmed with a single saccade that will, by itself, take the eye to a target, irrespective of whether the head moves. If the head turns simultaneously, the saccade is correctly reduced in size (to prevent gaze overshoot) by the vestibuloocular reflex (VOR). Cats have an oculomotor range (OMR) of only about +/- 25 degrees, but their field of view extends to about +/- 70 degrees. The use of the monkey's motor strategy to acquire targets lying beyond +/- 25 degrees requires the programming of saccades that cannot be physically made. We have studied, in cats, rapid horizontal gaze shifts to visual targets within and beyond the OMR. Heads were either totally unrestrained or attached to an apparatus that permitted short unexpected perturbations of the head trajectory. Qualitatively, similar rapid gaze shifts of all sizes up to at least 70 degrees could be accomplished with the classic single-eye saccade and a saccade-like head movement. For gaze shifts greater than 30 degrees, this classic pattern frequently was not observed, and gaze shifts were accomplished with a series of rapid eye movements whose time separation decreased, frequently until they blended into each other, as head velocity increased. Between discrete rapid eye movements, gaze continued in constant velocity ramps, controlled by signals added to the VOR-induced compensatory phase that followed a saccade. When the head was braked just prior to its onset in a 10 degrees gaze shift, the eye attained the target. This motor strategy is the same as that reported for monkeys. However, for larger target eccentricities (e.g., 50 degrees), the gaze shift was interrupted by the brake and the average saccade amplitude was 12-15 degrees, well short of the target and the OMR. 
Gaze shifts were completed by vestibularly driven eye movements when the
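    The decomposition described above, gaze shift = eye-in-head movement + head-in-space movement with the VOR subtracting head motion from the saccade, can be sketched as follows. A VOR gain of exactly 1.0 is an idealization.

```python
def saccade_amplitude(desired_gaze_shift_deg, head_contribution_deg, vor_gain=1.0):
    """Eye-in-head saccade needed for a gaze shift when the head moves
    simultaneously: the VOR reduces the saccade by (gain * head movement)
    so that gaze = eye + head still lands on target. Idealized, gain = 1."""
    return desired_gaze_shift_deg - vor_gain * head_contribution_deg

# A 20-degree gaze shift with 8 degrees of head movement during the saccade:
eye = saccade_amplitude(20.0, 8.0)
head = 8.0
print(eye + head)  # total gaze shift: 20.0
```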

  14. Horizontal gaze palsy with progressive scoliosis: CT and MR findings

    Energy Technology Data Exchange (ETDEWEB)

    Bomfim, Rodrigo C.; Tavora, Daniel G.F.; Nakayama, Mauro; Gama, Romulo L. [Sarah Network of Rehabilitation Hospitals, Department of Radiology, Ceara (Brazil)

    2009-02-15

    Horizontal gaze palsy with progressive scoliosis (HGPPS) is a rare congenital disorder characterized by absence of conjugate horizontal eye movements and progressive scoliosis developing in childhood and adolescence. We present a child with clinical and neuroimaging findings typical of HGPPS. CT and MRI of the brain demonstrated pons hypoplasia, absence of the facial colliculi, butterfly configuration of the medulla and a deep midline pontine cleft. We briefly discuss the imaging aspects of this rare entity in light of the current literature. (orig.)

  15. Predicting diagnostic error in radiology via eye-tracking and image analytics: Preliminary investigation in mammography

    Energy Technology Data Exchange (ETDEWEB)

    Voisin, Sophie; Tourassi, Georgia D. [Biomedical Science and Engineering Center, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Pinto, Frank [School of Engineering, Science, and Technology, Virginia State University, Petersburg, Virginia 23806 (United States); Morin-Ducote, Garnetta; Hudson, Kathleen B. [Department of Radiology, University of Tennessee Medical Center at Knoxville, Knoxville, Tennessee 37920 (United States)

    2013-10-15

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists’ gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from four Radiology residents and two breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BIRADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Machine learning can be used to predict diagnostic error by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model [area under the ROC curve (AUC) = 0.792 ± 0.030]. Personalized user modeling was far more accurate for the more experienced readers (AUC = 0.837 ± 0.029) than for the less experienced ones (AUC = 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted to a good extent by leveraging the radiologists’ gaze behavior and image content.

  16. Predicting diagnostic error in Radiology via eye-tracking and image analytics: Application in mammography

    Energy Technology Data Exchange (ETDEWEB)

    Voisin, Sophie [ORNL; Pinto, Frank M [ORNL; Morin-Ducote, Garnetta [University of Tennessee, Knoxville (UTK); Hudson, Kathy [University of Tennessee, Knoxville (UTK); Tourassi, Georgia [ORNL

    2013-01-01

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists’ gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from 4 Radiology residents and 2 breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BIRADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Diagnostic error can be predicted reliably by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model (AUC = 0.79). Personalized user modeling was far more accurate for the more experienced readers (average AUC of 0.837 ± 0.029) than for the less experienced ones (average AUC of 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted reliably by leveraging the radiologists’ gaze behavior and image content.
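    A minimal sketch of the gaze/image feature-merging idea on synthetic data. The study's actual features and learners are not specified beyond "machine learning algorithms"; a least-squares linear scorer and random stand-in features are used here, with AUC computed via the rank-sum statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-case gaze features (e.g. dwell time, fixation
# count) and image texture features; labels mark diagnostic errors.
n = 400
gaze = rng.normal(size=(n, 5))
image = rng.normal(size=(n, 8))
y = (gaze[:, 0] + image[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Merge the two feature sets by simple concatenation, then split.
X = np.hstack([gaze, image])
X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

# Least-squares linear scorer as a stand-in for the study's unspecified
# learners (bias term appended as a constant column).
w, *_ = np.linalg.lstsq(np.c_[X_tr, np.ones(300)], y_tr, rcond=None)
scores = np.c_[X_te, np.ones(100)] @ w

def auc(y_true, s):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    ranks = np.empty(len(s))
    ranks[np.argsort(s)] = np.arange(1, len(s) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(auc(y_te, scores) > 0.7)  # merged features separate errors well above chance
```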

  17. Analyzing Eye-Tracking Information in Visualization and Data Space: From Where on the Screen to What on the Screen.

    Science.gov (United States)

    Alam, Sayeed Safayet; Jianu, Radu

    2017-05-01

    Eye-tracking data is currently analyzed in the image space that gaze-coordinates were recorded in, generally with the help of overlays such as heatmaps or scanpaths, or with the help of manually defined areas of interest (AOI). Such analyses, which focus predominantly on where on the screen users are looking, require significant manual input and are not feasible for studies involving many subjects, long sessions, and heavily interactive visual stimuli. Alternatively, we show that it is feasible to collect and analyze eye-tracking information in data space. Specifically, the visual layout of visualizations with open source code that can be instrumented is known at rendering time, and thus can be used to relate gaze-coordinates to visualization and data objects that users view, in real time. We demonstrate the effectiveness of this approach by showing that data collected using this methodology from nine users working with an interactive visualization, was well aligned with the tasks that those users were asked to solve, and similar to annotation data produced by five human coders. Moreover, we introduce an algorithm that, given our instrumented visualization, could translate gaze-coordinates into viewed objects with greater accuracy than simply binning gazes into dynamically defined AOIs. Finally, we discuss the challenges, opportunities, and benefits of analyzing eye-tracking in visualization and data space.
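    The core mechanism, resolving gaze coordinates against a scene whose object layout the instrumented visualization records at rendering time rather than against manually drawn AOIs, can be sketched as follows. The scene representation and object names are hypothetical, not the paper's data structures.

```python
# Hypothetical scene description: the renderer records each data object's
# bounding box at draw time, so gaze samples resolve to objects directly.
scene = {
    "bar:2019": (50, 300, 80, 420),    # (x0, y0, x1, y1) in screen pixels
    "bar:2020": (100, 250, 130, 420),
    "axis:x":   (40, 420, 600, 440),
}

def gaze_to_object(x, y, scene):
    """Return the data object under a gaze sample, or None.
    Overlaps are resolved naively by first match."""
    for name, (x0, y0, x1, y1) in scene.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

print(gaze_to_object(110, 300, scene))  # bar:2020
print(gaze_to_object(10, 10, scene))    # None
```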

  18. Chimpanzees and humans mimic pupil-size of conspecifics

    NARCIS (Netherlands)

    Kret, M.E.; Tomonaga, M.; Matsuzawa, T.

    2014-01-01

    Group-living typically provides benefits to individual group members but also confers costs. To avoid incredulity and betrayal and to allow trust and cooperation, individuals must understand the intentions and emotions of their group members. Humans attend to others' eyes, and from gaze and pupil-size c

  19. Human eye analytical and mesh-geometry models for ophthalmic dosimetry using MCNP6

    Energy Technology Data Exchange (ETDEWEB)

    Angelocci, Lucas V.; Fonseca, Gabriel P.; Yoriyaz, Helio, E-mail: hyoriyaz@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Eye tumors can be treated with brachytherapy using Co-60 plaques, I-125 seeds, among other materials. The human eye has regions particularly vulnerable to ionizing radiation (e.g. the crystalline lens) and dosimetry for this region must be performed carefully. A mathematical model of the eye anatomy was proposed in the past [1] for use in Monte Carlo simulations to account for dose distribution in ophthalmic brachytherapy. The model includes the description of internal structures of the eye that were not treated in previous works. The aim of the present work was to develop a new eye model based on the mesh geometries of the MCNP6 code. The methodology utilized the ABAQUS/CAE (Simulia 3DS) software to build the mesh geometry. For this work, an ophthalmic applicator containing up to 24 model Amersham 6711 I-125 seeds (Oncoseed) was used, positioned in contact with a generic tumor defined analytically inside the eye. The absorbed dose in eye structures such as the cornea, sclera, choroid, retina, vitreous body, lens, optic nerve and optic nerve wall was calculated using both models: analytical and mesh. (author)

  20. Remote Gaze Tracking System on a Large Display

    Directory of Open Access Journals (Sweden)

    Jihun Cha

    2013-10-01

    Full Text Available We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC for detecting eye position and an auto-focusing narrow view camera (NVC for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user’s facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s.
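    The focus score computed on the NVC eye image is not specified in the abstract; the variance-of-Laplacian sharpness measure, a common choice for contrast-based autofocus, serves here as a hedged stand-in.

```python
import numpy as np

def focus_score(gray):
    """Sharpness of an eye image as the variance of a discrete Laplacian;
    a stand-in for the paper's (unspecified) focus score."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return lap.var()

# A sharp checkerboard pattern scores higher than a uniform (defocused) frame.
sharp = np.indices((64, 64)).sum(axis=0) % 2 * 1.0
blurred = np.full((64, 64), sharp.mean())
print(focus_score(sharp) > focus_score(blurred))  # True
```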

  1. Curvature sensor for the measurement of the static corneal topography and the dynamic tear film topography in the human eye

    Science.gov (United States)

    Gruppetta, Steve; Koechlin, Laurent; Lacombe, François; Puget, Pascal

    2005-10-01

    A system to measure the topography of the first optical surface of the human eye noninvasively by using a curvature sensor is described. The static corneal topography and the dynamic topography of the tear film can both be measured, and the topographies obtained are presented. The system makes possible the study of the dynamic aberrations introduced by the tear film to determine their contribution to the overall ocular aberrations in healthy eyes, eyes with corneal pathologies, and eyes wearing contact lenses.

  2. Goat's eye integrated with a human cataractous lens: A training model for phacoemulsification

    Directory of Open Access Journals (Sweden)

    Sabyasachi Sengupta

    2015-01-01

    Full Text Available A relatively simple and inexpensive technique to train surgeons in phacoemulsification using a goat's eye integrated with a human cataractous nucleus is described. The goat's eye is placed on a bed of cotton within the lumen of a cylindrical container. This is then mounted on a rectangular thermocol so that the limbus is presented at the surgical field. After making a clear corneal entry with a keratome, the trainer makes a 5-5.5 mm continuous curvilinear capsulorhexis in the anterior lens capsule, creates a crater of adequate depth in the cortex, and inserts the human nucleus within this crater in the goat's capsular bag. The surgical wound is sutured, and the goat's eye is ready for training. Creating the capsulorhexis with precision and making the crater of adequate depth to snugly accommodate the human nucleus are the most important steps to prevent excessive wobbling of the nucleus while training.

  3. Identification of lymphatics in the ciliary body of the human eye: a novel "uveolymphatic" outflow pathway.

    Science.gov (United States)

    Yücel, Yeni H; Johnston, Miles G; Ly, Tina; Patel, Manoj; Drake, Brian; Gümüş, Ersin; Fraenkl, Stephan A; Moore, Sara; Tobbia, Dalia; Armstrong, Dianna; Horvath, Eva; Gupta, Neeru

    2009-11-01

    Impaired aqueous humor flow from the eye may lead to elevated intraocular pressure and glaucoma. Drainage of aqueous fluid from the eye occurs through established routes that include conventional outflow via the trabecular meshwork, and an unconventional or uveoscleral outflow pathway involving the ciliary body. Based on the assumption that the eye lacks a lymphatic circulation, the possible role of lymphatics in the less well defined uveoscleral pathway has been largely ignored. Advances in lymphatic research have identified specific lymphatic markers such as podoplanin, a transmembrane mucin-type glycoprotein, and lymphatic vessel endothelial hyaluronan receptor-1 (LYVE-1). Lymphatic channels were identified in the human ciliary body using immunofluorescence with D2-40 antibody for podoplanin, and LYVE-1 antibody. In keeping with the criteria for lymphatic vessels in conjunctiva used as positive control, D2-40 and LYVE-1-positive lymphatic channels in the ciliary body had a distinct lumen, were negative for the blood vessel endothelial cell marker CD34, and were surrounded by either discontinuous or no collagen IV-positive basement membrane. Cryo-immunogold electron microscopy confirmed the presence of D2-40-immunoreactivity in lymphatic endothelium in the human ciliary body. Fluorescent nanospheres injected into the anterior chamber of the sheep eye were detected in LYVE-1-positive channels of the ciliary body 15, 30, and 45 min following injection. Four hours following intracameral injection, Iodine-125 radio-labeled human serum albumin injected into the sheep eye (n = 5) drained preferentially into cervical, retropharyngeal, submandibular and preauricular lymph nodes in the head and neck region compared to reference popliteal lymph nodes. These findings indicate that lymphatic channels exist in the human ciliary body, and that fluid and solutes flow at least partially through this system. The discovery of a uveolymphatic pathway in the eye is novel and highly relevant to studies of glaucoma and other eye diseases.

  4. Robot Arm Control and Having Meal Aid System with Eye Based Human-Computer Interaction (HCI)

    Science.gov (United States)

    Arai, Kohei; Mardiyanto, Ronny

    Robot arm control and a meal-assistance system with eye-based HCI are proposed. The proposed system allows a disabled person to select desirable food from the meal tray with their eyes only. The robot arm used for retrieving the desired food is controlled by the human eye. A tiny camera is equipped at the tip of the robot arm. The disabled person wears glasses on which a single Head Mounted Display (HMD) and a tiny camera are mounted, so that the person can look at the desired food and retrieve it by gazing at the food displayed on the HMD. Experimental results show that disabled persons can retrieve the desired food successfully. It is also confirmed that robot arm control by eye-based HCI is much faster than control by hand.

  5. Basement membrane abnormalities in human eyes with diabetic retinopathy

    DEFF Research Database (Denmark)

    Ljubimov, A V; Burgeson, R E; Butkowski, R J

    1996-01-01

    discontinuously for laminin-1, entactin/nidogen, and alpha3-alpha4 Type IV collagen, in contrast to non-DR corneas. Major BM alterations were found in DR retinas compared to normals and non-DR diabetics. The inner limiting membrane (retinal BM) of DR eyes had accumulations of fibronectin (including cellular......) and Types I, III, IV (alpha1-alpha2), and V collagen. The BM zone of new retinal blood vessels in neovascularized areas accumulated tenascin and Type XII collagen, whereas normal, diabetic, and adjacent DR retinas showed only weak and irregular staining. In preretinal membranes, perlecan, bamacan, and Types...... VI, VIII, XII, and XIV collagen were newly identified. Diabetic BM thickening appears to involve qualitative alterations of specific BM markers at an advanced disease stage, with the appearance of DR....

  6. DISTANCE MEASURING MODELING AND ERROR ANALYSIS OF DUAL CCD VISION SYSTEM SIMULATING HUMAN EYES AND NECK

    Institute of Scientific and Technical Information of China (English)

    Wang Xuanyin; Xiao Baoping; Pan Feng

    2003-01-01

    A dual-CCD simulating human eyes and neck (DSHEN) vision system is put forward. Its structure and principle are introduced. The DSHEN vision system can perform some movements simulating human eyes and neck by means of four rotating joints, and realize precise object recognizing and distance measuring in all orientations. The mathematic model of the DSHEN vision system is built, and its movement equation is solved. The coordinate error and measure precision affected by the movement parameters are analyzed by means of intersection measuring method. So a theoretic foundation for further research on automatic object recognizing and precise target tracking is provided.
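The intersection-based distance measurement described above can be illustrated with the standard stereo triangulation relation Z = f·B/d for an idealized parallel two-camera rig. This is a generic sketch: the baseline, focal length, and image coordinates below are made-up example values, not parameters of the DSHEN system.

```python
# Illustrative sketch of stereo distance measurement by intersection
# (triangulation). Assumes an idealized parallel two-camera rig;
# baseline, focal length, and pixel coordinates are example values,
# not parameters from the DSHEN vision system.

def stereo_depth(x_left: float, x_right: float,
                 baseline_m: float, focal_px: float) -> float:
    """Depth Z (meters) of a point seen at horizontal image coordinate
    x_left in the left camera and x_right in the right camera:
    Z = f * B / disparity, with disparity = x_left - x_right."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity

# Example: 0.12 m baseline, 800 px focal length, 16 px disparity
z = stereo_depth(x_left=412.0, x_right=396.0, baseline_m=0.12, focal_px=800.0)
print(round(z, 2))  # 6.0 (meters)
```

Measurement error grows with depth because disparity shrinks as 1/Z, which is why the abstract's analysis of movement parameters and measuring precision matters for distant targets.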

  7. APPLICATION OF EYE TRACKING FOR MEASUREMENT AND EVALUATION IN HUMAN FACTORS STUDIES IN CONTROL ROOM MODERNIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Kovesdi, C.; Spielman, Z.; LeBlanc, K.; Rice, B.

    2017-05-01

    An important element of human factors engineering (HFE) pertains to measurement and evaluation (M&E). The role of HFE-M&E should be integrated throughout the entire control room modernization (CRM) process and used for human-system performance evaluation and for diagnostic purposes in resolving potential human engineering deficiencies (HEDs) and other human machine interface (HMI) design issues. NUREG-0711 describes how HFE in CRM should employ a hierarchical set of measures, particularly during integrated system validation (ISV), including plant performance, personnel task performance, situation awareness, cognitive workload, and anthropometric/physiological factors. Historically, subjective measures have been used primarily because they are easier to collect and do not require specialized equipment. However, relying solely on subjective measures in M&E has pitfalls that negatively impact reliability, sensitivity, and objectivity. As part of comprehensively capturing a diverse set of measures that strengthen findings and inferences made about the benefits of emerging technologies like advanced displays, this paper discusses the value of using eye tracking as an objective method in M&E. A brief description of eye tracking technology and relevant eye tracking measures is provided. Additionally, technical considerations and the unique challenges of using eye tracking in full-scale simulations are addressed. Finally, this paper shares preliminary findings regarding the use of a wearable eye tracking system in a full-scale simulator study. These findings should help guide future full-scale simulator studies using eye tracking as a methodology to evaluate human-system performance.

  8. In the eye of the beholder: eye contact increases resistance to persuasion.

    Science.gov (United States)

    Chen, Frances S; Minson, Julia A; Schöne, Maren; Heinrichs, Markus

    2013-11-01

    Popular belief holds that eye contact increases the success of persuasive communication, and prior research suggests that speakers who direct their gaze more toward their listeners are perceived as more persuasive. In contrast, we demonstrate that more eye contact between the listener and speaker during persuasive communication predicts less attitude change in the direction advocated. In Study 1, participants freely watched videos of speakers expressing various views on controversial sociopolitical issues. Greater direct gaze at the speaker's eyes was associated with less attitude change in the direction advocated by the speaker. In Study 2, we instructed participants to look at either the eyes or the mouths of speakers presenting arguments counter to participants' own attitudes. Intentionally maintaining direct eye contact led to less persuasion than did gazing at the mouth. These findings suggest that efforts at increasing eye contact may be counterproductive across a variety of persuasion contexts.

  9. Examining the durability of incidentally learned trust from gaze cues.

    Science.gov (United States)

    Strachan, James W A; Tipper, Steven P

    2017-10-01

    In everyday interactions we find our attention follows the eye gaze of faces around us. As this cueing is so powerful and difficult to inhibit, gaze can therefore be used to facilitate or disrupt visual processing of the environment, and when we experience this we infer information about the trustworthiness of the cueing face. However, to date no studies have investigated how long these impressions last. To explore this we used a gaze-cueing paradigm where faces consistently demonstrated either valid or invalid cueing behaviours. Previous experiments show that valid faces are subsequently rated as more trustworthy than invalid faces. We replicate this effect (Experiment 1) and then include a brief interference task in Experiment 2 between gaze cueing and trustworthiness rating, which weakens but does not completely eliminate the effect. In Experiment 3, we explore whether greater familiarity with the faces improves the durability of trust learning and find that the effect is more resilient with familiar faces. Finally, in Experiment 4, we push this further and show that evidence of trust learning can be seen up to an hour after cueing has ended. Taken together, our results suggest that incidentally learned trust can be durable, especially for faces that deceive.

  10. The Pattern of Sexual Interest of Female-to-Male Transsexual Persons With Gender Identity Disorder Does Not Resemble That of Biological Men: An Eye-Tracking Study.

    Science.gov (United States)

    Tsujimura, Akira; Kiuchi, Hiroshi; Soda, Tetsuji; Takezawa, Kentaro; Fukuhara, Shinichiro; Takao, Tetsuya; Sekiguchi, Yuki; Iwasa, Atsushi; Nonomura, Norio; Miyagawa, Yasushi

    2017-09-01

    Very little has been elucidated about sexual interest in female-to-male (FtM) transsexual persons. To investigate the sexual interest of FtM transsexual persons vs that of men using an eye-tracking system. The study included 15 men and 13 FtM transsexual subjects who viewed three sexual videos (clip 1: sexy clothed young woman kissing the region of the male genitals covered by underwear; clip 2: naked actor and actress kissing and touching each other; and clip 3: heterosexual intercourse between a naked actor and actress) in which several regions were designated for eye-gaze analysis in each frame. The designation of each region was not visible to the participants. Visual attention was measured across each designated region according to gaze duration. For clip 1, there was a statistically significant sex difference in the viewing pattern between men and FtM transsexual subjects. Longest gaze time was for the eyes of the actress in men, whereas it was for non-human regions in FtM transsexual subjects. For clip 2, there also was a statistically significant sex difference. Longest gaze time was for the face of the actress in men, whereas it was for non-human regions in FtM transsexual subjects, and there was a significant difference between regions with longest gaze time. The most apparent difference was in the gaze time for the body of the actor: the percentage of time spent gazing at the body of the actor was 8.35% in FtM transsexual subjects, whereas it was only 0.03% in men. For clip 3, there were no statistically significant differences in viewing patterns between men and FtM transsexual subjects, although longest gaze time was for the face of the actress in men, whereas it was for non-human regions in FtM transsexual subjects. We suggest that the characteristics of sexual interest of FtM transsexual persons are not the same as those of biological men. Tsujimura A, Kiuchi H, Soda T, et al. The Pattern of Sexual Interest of Female-to-Male Transsexual Persons

  11. Dose conversion coefficients for neutron exposure to the lens of the human eye

    Energy Technology Data Exchange (ETDEWEB)

    Manger, Ryan P [ORNL; Bellamy, Michael B [ORNL; Eckerman, Keith F [ORNL

    2011-01-01

    Dose conversion coefficients for the lens of the human eye have been calculated for neutron exposure at energies from 1 × 10^-9 to 20 MeV and several standard orientations: anterior-to-posterior, rotational, and right lateral. MCNPX version 2.6.0, a Monte Carlo-based particle transport package, was used to determine the energy deposited in the lens of the eye. The human eyeball model was updated by partitioning the lens into sensitive and insensitive volumes, since the anterior portion of the lens (the sensitive volume) is more radiosensitive and prone to cataract formation. The updated eye model was used with the adult UF-ORNL mathematical phantom in the MCNPX transport calculations.

  12. Single dose testosterone administration alleviates gaze avoidance in women with Social Anxiety Disorder.

    Science.gov (United States)

    Enter, Dorien; Terburg, David; Harrewijn, Anita; Spinhoven, Philip; Roelofs, Karin

    2016-01-01

    Gaze avoidance is one of the most characteristic and persistent social features in people with Social Anxiety Disorder (SAD). It signals social submissiveness and hampers adequate social interactions. Patients with SAD typically show reduced testosterone levels, a hormone that facilitates socially dominant gaze behavior. Therefore we tested as a proof of principle whether single dose testosterone administration can reduce gaze avoidance in SAD. In a double-blind, within-subject design, 18 medication-free female participants with SAD and 19 female healthy control participants received a single dose of 0.5mg testosterone and a matched placebo, at two separate days. On each day, their spontaneous gaze behavior was recorded using eye-tracking, while they looked at angry, happy, and neutral facial expressions. Testosterone enhanced the percentage of first fixations to the eye-region in participants with SAD compared to healthy controls. In addition, SAD patients' initial gaze avoidance in the placebo condition was associated with more severe social anxiety symptoms and this relation was no longer present after testosterone administration. These findings indicate that single dose testosterone administration can alleviate gaze avoidance in SAD. They support theories on the dominance enhancing effects of testosterone and extend those by showing that effects are particularly strong in individuals featured by socially submissive behavior. The finding that this core characteristic of SAD can be directly influenced by single dose testosterone administration calls for future inquiry into the clinical utility of testosterone in the treatment of SAD.

  13. The effect of human image in B2C website design: an eye-tracking study

    Science.gov (United States)

    Wang, Qiuzhen; Yang, Yi; Wang, Qi; Ma, Qingguo

    2014-09-01

    On B2C shopping websites, effective visual designs can bring about consumers' positive emotional experience. From this perspective, this article developed a research model to explore the impact of human image as a visual element on consumers' online shopping emotions and subsequent attitudes towards websites. This study conducted an eye-tracking experiment to collect both eye movement data and questionnaire data to test the research model. Questionnaire data analysis showed that product pictures combined with human image induced positive emotions among participants, thus promoting their attitudes towards online shopping websites. Specifically, product pictures with human image first produced higher levels of image appeal and perceived social presence, thus stimulating higher levels of enjoyment and subsequent positive attitudes towards the websites. Moreover, a moderating effect of product type was demonstrated on the relationship between the presence of human image and the level of image appeal. Specifically, human image significantly increased the level of image appeal when integrated in entertainment product pictures while this relationship was not significant in terms of utilitarian products. Eye-tracking data analysis further supported these results and provided plausible explanations. The presence of human image significantly increased the pupil size of participants regardless of product types. For entertainment products, participants paid more attention to product pictures integrated with human image whereas for utilitarian products more attention was paid to functional information of products than to product pictures no matter whether or not integrated with human image.

  14. Mathematical models of the dynamics of the human eye

    CERN Document Server

    Collins, Richard

    1980-01-01

    A rich and abundant literature has developed during the last half century dealing with mechanical aspects of the eye, mainly from clinical and experimental points of view. For the most part, workers have attempted to shed light on the complex set of conditions known by the general term glaucoma. These conditions are characterised by an increase in intraocular pressure sufficient to cause degeneration of the optic disc and concomitant defects in the visual field, which, if not controlled, lead to inevitable permanent blindness. In the United States alone, an estimated 50,000 persons are blind as a result of glaucoma, which strikes about 2% of the population over 40 years of age (Vaughan and Asbury, 1974). An understanding of the underlying mechanisms of glaucoma is hindered by the fact that elevated intraocular pressure, like a runny nose, is but a symptom which may have a variety of causes. Only by turning to the initial pathology can one hope to understand this important class of medical problems.

  15. Structural and Biochemical Analyses of Choroidal Thickness in Human Donor Eyes

    Science.gov (United States)

    Sohn, Elliott H.; Khanna, Aditi; Tucker, Budd A.; Abràmoff, Michael D.; Stone, Edwin M.; Mullins, Robert F.

    2014-01-01

    Purpose. The choroid plays a vital role in the health of the outer retina. While measurements of choroid using optical coherence tomography show altered thickness in aging and macular disease, detailed histopathologic and proteomic analyses are lacking. In this study we sought to evaluate biochemical differences in human donor eyes between very thin and thick choroids. Methods. One hundred forty-one eyes from 104 donors (mean age ± standard deviation, 81.5 ± 12.2) were studied. Macular sections were collected, and the distance between Bruch's membrane and the inner surface of the sclera was measured in control, early/dry age-related macular degeneration (AMD), neovascular AMD, and geographic atrophy eyes. Proteins from the RPE-choroid of eyes with thick and thin choroids were analyzed using two-dimensional electrophoresis and/or mass spectrometry. Two proteins with altered abundance were confirmed using Western blot analysis. Results. Donor eyes showed a normal distribution of thicknesses. Eyes with geographic atrophy had significantly thinner choroids than age-matched controls or early AMD eyes. Proteomic analysis showed higher levels of the serine protease SERPINA3 in thick choroids and increased levels of tissue inhibitor of metalloproteinases-3 (TIMP3) in thin choroids. Conclusions. Consistent with clinical imaging observations, geographic atrophy was associated with choroidal thinning. Biochemical data suggest an alteration in the balance between proteases and protease inhibitors in eyes that lie at the extremes of choroidal thickness. An improved understanding of the basic mechanisms associated with choroidal thinning may guide the development of new therapies for AMD. PMID:24519422

  16. SPECFACE - A Dataset of Human Faces Wearing Spectacles

    OpenAIRE

    2015-01-01

    This paper presents a database of human faces for persons wearing spectacles. The database consists of images of faces having significant variations with respect to illumination, head pose, skin color, facial expressions and sizes, and nature of spectacles. The database contains data of 60 subjects. This database is expected to be a precious resource for the development and evaluation of algorithms for face detection, eye detection, head tracking, eye gaze tracking, etc., for subjects wearing...

  17. Gaze angle: a possible mechanism of visual stress in virtual reality headsets.

    Science.gov (United States)

    Mon-Williams, M; Plooy, A; Burgess-Limerick, R; Wann, J

    1998-03-01

    It is known that some Virtual Reality (VR) head-mounted displays (HMDs) can cause temporary deficits in binocular vision. On the other hand, the precise mechanism by which visual stress occurs is unclear. This paper is concerned with a potential source of visual stress that has not been previously considered with regard to VR systems: inappropriate vertical gaze angle. As vertical gaze angle is raised or lowered the 'effort' required of the binocular system also changes. The extent to which changes in vertical gaze angle alter the demands placed upon the vergence eye movement system was explored. The results suggested that visual stress may depend, in part, on vertical gaze angle. The proximity of the display screens within an HMD means that a VR headset should be in the correct vertical location for any individual user. This factor may explain some previous empirical results and has important implications for headset design. Fortuitously, a reasonably simple solution exists.

  18. Importance of non-synonymous OCA2 variants in human eye colour prediction

    DEFF Research Database (Denmark)

    Andersen, Jeppe Dyrberg; Pietroni, Carlotta; Johansen, Peter

    2016-01-01

    in the promoter region of OCA2 (OMIM #611409). Nevertheless, many eye colors cannot be explained by only considering rs12913832:A>G. Methods: In this study, we searched for additional variants in OCA2 to explain human eye color by sequencing a 500 kbp region, encompassing OCA2 and its promoter region. Results: We......Background: The color of the eyes is one of the most prominent phenotypes in humans and it is often used to describe the appearance of an individual. The intensity of pigmentation in the iris is strongly associated with one single-nucleotide polymorphism (SNP), rs12913832:A>G that is located...... identified three nonsynonymous OCA2 variants as important for eye color, including rs1800407:G>A (p.Arg419Gln) and two variants, rs74653330:A>T (p.Ala481Thr) and rs121918166:G>A (p.Val443Ile), not previously described as important for eye color variation. It was shown that estimated haplotypes consisting...

  19. Controlled delivery of antiangiogenic drug to human eye tissue using a MEMS device

    KAUST Repository

    Pirmoradi, Fatemeh Nazly

    2013-01-01

    We demonstrate an implantable MEMS drug delivery device to conduct controlled and on-demand, ex vivo drug transport to human eye tissue. Remotely operated drug delivery to human post-mortem eyes was performed via a MEMS device. The developed curved packaging cover conforms to the eyeball thereby preventing the eye tissue from contacting the actuating membrane. By pulsed operation of the device, using an externally applied magnetic field, the drug released from the device accumulates in a cavity adjacent to the tissue. As such, docetaxel (DTX), an antiangiogenic drug, diffuses through the eye tissue, from sclera and choroid to retina. DTX uptake by sclera and choroid were measured to be 1.93±0.66 and 7.24±0.37 μg/g tissue, respectively, after two hours in pulsed operation mode (10s on/off cycles) at 23°C. During this period, a total amount of 192 ng DTX diffused into the exposed tissue. This MEMS device shows great potential for the treatment of ocular posterior segment diseases such as diabetic retinopathy by introducing a novel way of drug administration to the eye. © 2013 IEEE.

  20. Brain activation related to combinations of gaze position, visual input, and goal-directed hand movements.

    Science.gov (United States)

    Bédard, Patrick; Wu, Min; Sanes, Jerome N

    2011-06-01

    Humans reach to and acquire objects by transforming visual targets into action commands. How the brain transforms goals specified in a visual frame of reference into signals suitable for an action plan requires clarification, in particular whether visual input per se interacts with gaze position to formulate action plans. To further evaluate brain control of visual-motor integration, we assessed brain activation using functional magnetic resonance imaging. Humans performed goal-directed movements toward visible or remembered targets while fixating gaze left or right of center. We dissociated movement planning from performance using a delayed-response task and manipulated target visibility by its availability throughout the delay or blanking it 500 ms after onset. We found strong effects of gaze orientation on brain activation during planning and interactive effects of target visibility and gaze orientation on movement-related activation during performance in parietal and premotor cortices (PM), cerebellum, and basal ganglia, with more activation for rightward gaze at a visible target and no gaze modulation for movements directed toward remembered targets. These results demonstrate effects of gaze position on PM and movement-related processes and provide new information on how visual signals interact with gaze position in transforming visual inputs into motor goals.

  1. Image-size differences worsen stereopsis independent of eye position

    Science.gov (United States)

    Vlaskamp, Björn N. S.; Filippini, Heather R.; Banks, Martin S.

    2010-01-01

    With the eyes in forward gaze, stereo performance worsens when one eye’s image is larger than the other’s. Near, eccentric objects naturally create retinal images of different sizes. Does this mean that stereopsis exhibits deficits for such stimuli? Or does the visual system compensate for the predictable image-size differences? To answer this, we measured discrimination of a disparity-defined shape for different relative image sizes. We did so for different gaze directions, some compatible with the image-size difference and some not. Magnifications of 10–15% caused a clear worsening of stereo performance. The worsening was determined only by relative image size and not by eye position. This shows that no neural compensation for image-size differences accompanies eye-position changes, at least prior to disparity estimation. We also found that a local cross-correlation model for disparity estimation performs like humans in the same task, suggesting that the decrease in stereo performance due to image-size differences is a byproduct of the disparity-estimation method. Finally, we looked for compensation in an observer who has constantly different image sizes due to differing eye lengths. She performed best when the presented images were roughly the same size, indicating that she has compensated for the persistent image-size difference. PMID:19271927
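The local cross-correlation model for disparity estimation mentioned in the abstract can be sketched as a windowed normalized cross-correlation search over candidate horizontal shifts. The window size, search range, and synthetic images below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def disparity_by_xcorr(left: np.ndarray, right: np.ndarray,
                       row: int, col: int,
                       window: int = 5, max_disp: int = 10) -> int:
    """Estimate disparity at (row, col) of the left image by finding the
    horizontal shift whose right-image patch correlates best locally."""
    half = window // 2
    patch_l = left[row-half:row+half+1, col-half:col+half+1].astype(float)
    patch_l = (patch_l - patch_l.mean()) / (patch_l.std() + 1e-9)
    best_d, best_score = 0, -np.inf
    for d in range(max_disp + 1):
        c = col - d                       # candidate match column in right image
        if c - half < 0:
            break
        patch_r = right[row-half:row+half+1, c-half:c+half+1].astype(float)
        patch_r = (patch_r - patch_r.mean()) / (patch_r.std() + 1e-9)
        score = float((patch_l * patch_r).mean())  # normalized cross-correlation
        if score > best_score:
            best_d, best_score = d, score
    return best_d

# Synthetic check: the right image is the left image shifted 3 px leftward,
# so every left-image point has a disparity of 3 pixels.
rng = np.random.default_rng(0)
left = rng.random((40, 40))
right = np.roll(left, -3, axis=1)
print(disparity_by_xcorr(left, right, row=20, col=20))  # 3
```

Because the correlation is computed on fixed-size square patches in both eyes, magnifying one image decorrelates the patches, which is consistent with the abstract's point that the stereo deficit is a byproduct of the disparity-estimation method rather than of eye position.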

  2. Gaze Estimation Method Using Analysis of Electrooculogram Signals and Kinect Sensor

    Directory of Open Access Journals (Sweden)

    Keiko Sakurai

    2017-01-01

    A gaze estimation system is one of the communication methods for severely disabled people who cannot perform gestures or speech. We previously developed an eye tracking method using a compact and light electrooculogram (EOG) signal, but its accuracy is not very high. In the present study, we conducted experiments to investigate the EOG component strongly correlated with the change of eye movements. The experiments in this study are of two types: experiments to view objects by eye movements only, and experiments to view objects by face and eye movements. The experimental results show the possibility of an eye tracking method using EOG signals and a Kinect sensor.

  3. Two eyes, one vision: binocular motion perception in human visual cortex

    NARCIS (Netherlands)

    Barendregt, M.

    2016-01-01

    An important aspect of human vision is the fact that it is binocular, i.e. that we have two eyes. As a result, the brain nearly always receives two slightly different images of the same visual scene. Yet, we only perceive a single image and thus our brain has to actively combine the binocular visual

  5. Relation between Local Acoustic Parameters and Protein Distribution in Human and Porcine Eye Lenses

    NARCIS (Netherlands)

    Korte, de C.L.; Steen, van der A.F.W.; Thijssen, J.M.; Duindam, J.J.; Otto, Cees; Puppels, G.J.

    1994-01-01

    The purpose of this study is to characterize the eye lens (human, porcine) by acoustic measurements and to investigate whether relations exist with the local protein content. The acoustic measurements were performed with a 'scanning acoustic microscope' (SAM), operating at a frequency of 20 MHz. At

  6. The cortical eye proprioceptive signal modulates neural activity in higher-order visual cortex as predicted by the variation in visual sensitivity

    DEFF Research Database (Denmark)

    Balslev, Daniela; Siebner, Hartwig R; Paulson, Olaf B

    2012-01-01

    Whereas the links between eye movements and the shifts in visual attention are well established, less is known about how eye position affects the prioritization of visual space. It was recently observed that visual sensitivity varies with the direction of gaze and the level of excitability...... in the eye proprioceptive representation in human left somatosensory cortex (S1(EYE)), so that after 1Hz repetitive transcranial magnetic stimulation (rTMS) over S1(EYE), targets presented nearer the center of the orbit are detected more accurately. Here we used whole-brain functional magnetic resonance...... target when the right eye was rotated leftwards as compared with when it was rotated rightwards. This effect was larger after S1(EYE)-rTMS than after rTMS of a control area in the motor cortex. The neural response to retinally identical stimuli in this area could be predicted from the changes in visual...

  7. Visuomotor transformations for eye-hand coordination.

    Science.gov (United States)

    Henriques, D Y P; Medendorp, W P; Khan, A Z; Crawford, J D

    2002-01-01

    In recent years the scientific community has come to appreciate that the early cortical representations for visually guided arm movements are probably coded in a visual frame, i.e. relative to retinal landmarks. While this scheme accounts for many behavioral and neurophysiological observations, it also poses certain problems for manual control. For example, how are these oculocentric representations updated across eye movements, and how are they then transformed into useful commands for accurate movements of the arm relative to the body? Also, since we have two eyes, which is used as the reference point in eye-hand alignment tasks like pointing? We show that patterns of errors in human pointing suggest that early oculocentric representations for arm movement are remapped relative to the gaze direction during each saccade. To then transform these oculocentric representations into useful commands for accurate movements of the arm relative to the body, the brain correctly incorporates the three-dimensional, rotary geometry of the eyes when interpreting retinal images. We also explore the possibility that the eye-hand coordination system uses a strategy like ocular dominance, but switches alignment between the left and right eye in order to maximize eye-hand coordination in the best field of view. Finally, we describe the influence of eye position on eye-hand alignment, and then consider how head orientation influences the linkage between oculocentric visual frames and bodycentric motor frames. These findings are framed in terms of our 'conversion-on-demand' model, which suggests a virtual representation of egocentric space, i.e. one in which only those representations selected for action are put through the complex visuomotor transformations required for interaction with actual objects in personal space.

  8. Mirror Neurons of Ventral Premotor Cortex Are Modulated by Social Cues Provided by Others' Gaze.

    Science.gov (United States)

    Coudé, Gino; Festante, Fabrizia; Cilia, Adriana; Loiacono, Veronica; Bimbi, Marco; Fogassi, Leonardo; Ferrari, Pier Francesco

    2016-03-16

    Mirror neurons (MNs) in the inferior parietal lobule and ventral premotor cortex (PMv) can code the intentions of other individuals using contextual cues. Gaze direction is an important social cue that can be used for understanding the meaning of actions made by other individuals. Here we addressed the issue of whether PMv MNs are influenced by the gaze direction of another individual. We recorded single-unit activity in macaque PMv while the monkey was observing an experimenter performing a grasping action and orienting his gaze either toward (congruent gaze condition) or away (incongruent gaze condition) from a target object. The results showed that one-half of the recorded MNs were modulated by the gaze direction of the human agent. These gaze-modulated neurons were evenly distributed between those preferring a gaze direction congruent with the direction where the grasping action was performed and the others that preferred an incongruent gaze. Whereas the presence of congruent responses is in line with the usual coupling of hand and gaze in both executed and observed actions, the incongruent responses can be explained by the long exposure of the monkeys to this condition. Our results reveal that the representation of observed actions in PMv is influenced by contextual information not only extracted from physical cues, but also from cues endowed with biological or social value. In this study, we present the first evidence showing that social cues modulate MNs in the monkey ventral premotor cortex. These data suggest that there is an integrated representation of other's hand actions and gaze direction at the single neuron level in the ventral premotor cortex, and support the hypothesis of a functional role of MNs in decoding actions and understanding motor intentions.

  9. Increased Eye Contact during Conversation Compared to Play in Children with Autism

    Science.gov (United States)

    Jones, Rebecca M.; Southerland, Audrey; Hamo, Amarelle; Carberry, Caroline; Bridges, Chanel; Nay, Sarah; Stubbs, Elizabeth; Komarow, Emily; Washington, Clay; Rehg, James M.; Lord, Catherine; Rozga, Agata

    2017-01-01

    Children with autism have atypical gaze behavior but it is unknown whether gaze differs during distinct types of reciprocal interactions. Typically developing children (N = 20) and children with autism (N = 20) (4-13 years) made similar amounts of eye contact with an examiner during a conversation. Surprisingly, there was minimal eye contact…

  10. Trait Anxiety Impacts the Perceived Gaze Direction of Fearful But Not Angry Faces

    Directory of Open Access Journals (Sweden)

    Zhonghua Hu

    2017-07-01

    Full Text Available Facial expression and gaze direction play an important role in social communication. Previous research has demonstrated that the perception of anger is enhanced by direct gaze, whereas it is unclear whether the perception of fear is enhanced by averted gaze. In addition, previous research has shown that anxiety affects the processing of facial expression and gaze direction, but has not measured or controlled for depression. As a result, firm conclusions cannot be made regarding the impact of individual differences in anxiety and depression on perceptions of facial expressions and gaze direction. The current study attempted to reexamine the effect of anxiety level on the processing of facial expressions and gaze direction by matching participants on depression scores. A reliable psychophysical index of the range of eye gaze angles judged as being directed at oneself [the cone of direct gaze (CoDG)] was used as the dependent variable in this study. Participants were stratified into high/low trait anxiety groups and asked to judge the gaze of angry, fearful, and neutral faces across a range of gaze directions. The results showed: (1) the perception of gaze direction was influenced by facial expression, and this was modulated by trait anxiety. For the high trait anxiety group, the CoDG for angry expressions was wider than for fearful and neutral expressions, and no significant difference emerged between fearful and neutral expressions; for the low trait anxiety group, the CoDG for both angry and fearful expressions was wider than for neutral, and no significant difference emerged between angry and fearful expressions. (2) Trait anxiety modulated the perception of gaze direction only in the fearful condition, such that the fearful CoDG for the high trait anxiety group was narrower than for the low trait anxiety group. This demonstrated that anxiety distinctly affected gaze perception for expressions that convey threat (angry, fearful), such that a high trait anxiety…

  11. Gaze behavior of pre-adolescent children afflicted with Asperger syndrome.

    Science.gov (United States)

    Wiklund, Mari

    2012-01-01

    Asperger syndrome (AS) is a form of high-functioning autism characterized by qualitative impairment in social interaction. People afflicted with AS typically display abnormal nonverbal behaviors, often manifested by avoiding eye contact. Gaze constitutes an important interactional resource, and an AS person's tendency to avoid eye contact may affect the fluidity of conversations and cause misunderstandings. For this reason, it is important to know the precise ways in which this avoidance occurs, and in what ways it affects the interaction. The objective of this article is to describe the gaze behavior of preadolescent AS children in institutional multiparty conversations. Methodologically, the study is based on conversation analysis and a multimodal study of interaction. The findings show that three main patterns are used for avoiding eye contact: (1) fixing one's gaze straight ahead; (2) letting one's gaze wander around; and (3) looking at one's own hands when speaking. The informants of this study do not look at the interlocutors at all at the beginning or in the middle of their turn. However, they sometimes turn to look at the interlocutors at the end of their turn. This shows that these children are able to use gaze as a source of feedback. When listening, looking at the speaker also seems to be easier for them than looking at the listeners when speaking.

  12. Holistic gaze strategy to categorize facial expression of varying intensities.

    Directory of Open Access Journals (Sweden)

    Kun Guo

    Full Text Available Using faces representing exaggerated emotional expressions, recent behavioural and eye-tracking studies have suggested a dominant role of individual facial features in transmitting diagnostic cues for decoding facial expressions. Considering that in everyday life we frequently view low-intensity expressive faces in which local facial cues are more ambiguous, we probably need to combine expressive cues from more than one facial feature to reliably decode naturalistic facial affects. In this study we applied a morphing technique to systematically vary the intensities of six basic facial expressions of emotion, and employed a self-paced expression categorization task to measure participants' categorization performance and associated gaze patterns. The analysis of pooled data from all expressions showed that increasing expression intensity improved categorization accuracy, shortened reaction time and reduced the number of fixations directed at faces. The proportion of fixations and viewing time directed at internal facial features (eyes, nose and mouth region), however, was not affected by varying levels of intensity. Further comparison between individual facial expressions revealed that although proportional gaze allocation at individual facial features was quantitatively modulated by the viewed expressions, the overall gaze distribution in face viewing was qualitatively similar across different facial expressions and different intensities. It seems that we adopt a holistic viewing strategy to extract expressive cues from all internal facial features in the processing of naturalistic facial expressions.

  13. [Detection of carotenoids in the vitreous body of the human eye during prenatal development].

    Science.gov (United States)

    Iakovleva, M A; Panova, I G; Fel'dman, T B; Zak, P P; Tatikolov, A S; Sukhikh, G T; Ostrovskiĭ, M A

    2007-01-01

    Carotenoids were found for the first time in the vitreous body of the human eye during the fetal period from week 15 until week 28. Their content peaked at weeks 16-22. No carotenoids were found in the vitreous body of 31-week fetuses or of adult humans, which corresponds to the published data. It was shown using HPLC that the chromatographic characteristics of these carotenoids correspond to those of lutein and zeaxanthin, the characteristic pigments of the retinal yellow macula.

  14. Coherent fiber optic sensor for early detection of cataractogenesis in a human eye lens

    Science.gov (United States)

    Dhadwal, Harbans S.; Ansari, Rafat R.; Dellavecchia, Michael A.

    1993-01-01

    A lensless backscatter fiber optic probe is used to measure the size distribution of protein molecules inside an excised, but intact, human eye lens. The fiber optic probe, about 5 mm in diameter, can be positioned arbitrarily close to the anterior surface of the eye; it is a trans-receiver, which delivers a Gaussian laser beam into a small region inside the lens and provides a coherent detection of the laser light scattered by the protein molecules in the backward direction. Protein sizes determined from the fast and slow diffusion coefficients show good correlation with the age of the lens and cataractogenesis.

  15. [THE STRUCTURE OF LYMPHATIC CAPILLARIES OF THE CILIARY BODY OF THE HUMAN EYE].

    Science.gov (United States)

    Borodin, Yu I; Bgatova, N P; Chernykh, V V; Trunov, A N; Pozhidayeva, A A; Konenkov, V I

    2015-01-01

    Using light microscopy, immunohistochemistry and electron microscopy, the structural organization of interstitial spaces and vessels of the ciliary body of the human eye (n = 5) were studied. The ciliary body was found to contain wide interstitial spaces--tissue clefts bound by collagen fibers and fibroblasts. Organ-specific lymphatic capillaries were also demonstrated in the ciliary body. According to the present findings and the lymphatic region concept, the first 2 elements of the lymphatic region of the eye were described: tissue clefts--prelymphatics and lymphatic capillaries of the ciliary body. The third element of the lymphatic region are the lymph nodes of the head and neck.

  16. Eye Typing using Markov and Active Appearance Models

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Hansen, John Paulin; Nielsen, Mads

    2002-01-01

    We propose a non-intrusive eye tracking system intended for the use of everyday gaze typing using web cameras. We argue that high precision in gaze tracking is not needed for on-screen typing due to natural language redundancy. This facilitates the use of low-cost video components for advanced...

  17. Gaze Direction Detection in Autism Spectrum Disorder

    Science.gov (United States)

    Forgeot d'Arc, Baudouin; Delorme, Richard; Zalla, Tiziana; Lefebvre, Aline; Amsellem, Frédérique; Moukawane, Sanaa; Letellier, Laurence; Leboyer, Marion; Mouren, Marie-Christine; Ramus, Franck

    2017-01-01

    Detecting where our partners direct their gaze is an important aspect of social interaction. An atypical gaze processing has been reported in autism. However, it remains controversial whether children and adults with autism spectrum disorder interpret indirect gaze direction with typical accuracy. This study investigated whether the detection of…

  18. Can Speaker Gaze Modulate Syntactic Structuring and Thematic Role Assignment during Spoken Sentence Comprehension?

    Science.gov (United States)

    Knoeferle, Pia; Kreysa, Helene

    2012-01-01

    During comprehension, a listener can rapidly follow a frontally seated speaker's gaze to an object before its mention, a behavior which can shorten latencies in speeded sentence verification. However, the robustness of gaze-following, its interaction with core comprehension processes such as syntactic structuring, and the persistence of its effects are unclear. In two "visual-world" eye-tracking experiments participants watched a video of a speaker, seated at an angle, describing transitive (non-depicted) actions between two of three Second Life characters on a computer screen. Sentences were in German and had either subject(NP1)-verb-object(NP2) or object(NP1)-verb-subject(NP2) structure; the speaker either shifted gaze to the NP2 character or was obscured. Several seconds later, participants verified either the sentence referents or their role relations. When participants had seen the speaker's gaze shift, they anticipated the NP2 character before its mention and earlier than when the speaker was obscured. This effect was more pronounced for SVO than OVS sentences in both tasks. Interactions of speaker gaze and sentence structure were more pervasive in role-relations verification: participants verified the role relations faster for SVO than OVS sentences, and faster when they had seen the speaker shift gaze than when the speaker was obscured. When sentence and template role-relations matched, gaze-following even eliminated the SVO-OVS response-time differences. Thus, gaze-following is robust even when the speaker is seated at an angle to the listener; it varies depending on the syntactic structure and thematic role relations conveyed by a sentence; and its effects can extend to delayed post-sentence comprehension processes. These results suggest that speaker gaze effects contribute pervasively to visual attention and comprehension processes and should thus be accommodated by accounts of situated language comprehension.

  20. Human Response to Ductless Personalised Ventilation: Impact of Air Movement, Temperature and Cleanness on Eye Symptoms

    DEFF Research Database (Denmark)

    Dalewski, Mariusz; Fillon, Maelys; Bivolarova, Maria;

    2013-01-01

    The performance of ductless personalized ventilation (DPV) in conjunction with displacement ventilation (DV) was studied in relation to people's health, comfort and performance. This paper presents results on the impact of room air temperature, use of DPV and local air filtration on eye blink rate and tear film quality. In a test room with DV and six workstations, 30 human subjects were exposed for four hours to each of the following 5 experimental conditions: 23 °C and DV only, 23 °C and DPV with air filter, 29 °C and DV only, 29 °C and DPV, and 29 °C and DPV with air filter. In the warm environment, facially applied, individually controlled movement of room air, with or without local filtering, did not have a significant impact on eye blink frequency and tear film quality. The local air movement and air cleaning resulted in increased eye blinking frequency and improvement of tear film…

  1. Gaze shifts during dual-tasking stair descent.

    Science.gov (United States)

    Miyasike-daSilva, Veronica; McIlroy, William E

    2016-11-01

    To investigate the role of vision in stair locomotion, young adults descended a seven-step staircase during unrestricted walking (CONTROL), and while performing a concurrent visual reaction time (RT) task displayed on a monitor. The monitor was located at either 3.5 m (HIGH) or 0.5 m (LOW) above ground level at the end of the stairway, which either restricted (HIGH) or facilitated (LOW) the view of the stairs in the lower field of view as participants walked downstairs. Downward gaze shifts (recorded with an eye tracker) and gait speed were significantly reduced in HIGH and LOW compared with CONTROL. Gaze and locomotor behaviour were not different between HIGH and LOW. However, inter-individual variability increased in HIGH, in which participants combined different response characteristics including slower walking, handrail use, downward gaze, and/or increasing RTs. The fastest RTs occurred in the midsteps (non-transition steps). While gait and visual task performance were not statistically different prior to the top and bottom transition steps, gaze behaviour and RT were more variable prior to transition steps in HIGH. This study demonstrated that, in the presence of a visual task, people do not look down as often when walking downstairs and require minimum adjustments provided that the view of the stairs is available in the lower field of view. The middle of the stairs seems to require less from executive function, whereas visual attention appears a requirement to detect the last transition via gaze shifts or peripheral vision.

  2. DIAGNOSIS OF MYASTHENIA GRAVIS USING FUZZY GAZE TRACKING SOFTWARE

    Directory of Open Access Journals (Sweden)

    Javad Rasti

    2015-04-01

    Full Text Available Myasthenia gravis (MG) is an autoimmune disorder which may lead to paralysis and even death if not treated in time. One of its primary symptoms is severe muscular weakness, initially arising in the eye muscles. Testing the mobility of the eyeball can help in the early detection of MG. In this study, software was designed to analyze the ability of the eye muscles to focus in various directions, thus estimating the MG risk. Progressive weakness in gazing in the directions prompted by the software can reveal abnormal fatigue of the eye muscles, which is an alert sign for MG. To assess the user's ability to keep gazing in a specified direction, a fuzzy algorithm was applied to images of the user's eyes to determine the position of the iris in relation to the sclera. The results of tests performed on 18 healthy volunteers and 18 volunteers in early stages of MG confirmed the validity of the suggested software.

  3. Enhancing sensorimotor activity by controlling virtual objects with gaze.

    Directory of Open Access Journals (Sweden)

    Cristián Modroño

    Full Text Available This fMRI work studies the brain activity of healthy volunteers who manipulated a virtual object in the context of a digital game using two different control methods: their right hand or their gaze. The results show extended activations in sensorimotor areas, not only when participants played in the traditional way (using their hand) but also when they used their gaze to control the virtual object. Furthermore, with the exception of the primary motor cortex, regional motor activity was similar regardless of whether the effector was the arm or the eye. These results have a potential application in the field of neurorehabilitation as a new approach to generating activation of the sensorimotor system to support the recovery of motor functions.

  4. Influence of Gaze Direction on Face Recognition: A Sensitive Effect

    Directory of Open Access Journals (Sweden)

    Noémy Daury

    2011-08-01

    Full Text Available This study was aimed at determining the conditions in which eye contact may improve recognition memory for faces. Different stimuli and procedures were tested in four experiments. The effect of gaze direction on memory was found when a simple "yes-no" recognition task was used, but not when the recognition task was more complex (e.g., including "Remember-Know" judgements, cf. Experiment 2, or confidence ratings, cf. Experiment 4). Moreover, even when a "yes-no" recognition paradigm was used, the effect occurred with one series of stimuli (cf. Experiment 1) but not with another (cf. Experiment 3). The difficulty of reproducing the positive effect of gaze direction on memory is discussed.

  5. Can Gaze Avoidance Explain Why Individuals with Asperger's Syndrome Can't Recognise Emotions from Facial Expressions?

    Science.gov (United States)

    Sawyer, Alyssa C. P.; Williamson, Paul; Young, Robyn L.

    2012-01-01

    Research has shown that individuals with Autism Spectrum Disorders (ASD) have difficulties recognising emotions from facial expressions. Since eye contact is important for accurate emotion recognition, and individuals with ASD tend to avoid eye contact, this tendency for gaze aversion has been proposed as an explanation for the emotion recognition…

  7. PyGaze: an open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments.

    Science.gov (United States)

    Dalmaijer, Edwin S; Mathôt, Sebastiaan; Van der Stigchel, Stefan

    2014-12-01

    The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility. PyGaze can be used for visual and auditory stimulus presentation; for response collection via keyboard, mouse, joystick, and other external hardware; and for the online detection of eye movements using a custom algorithm. A wide range of eyetrackers of different brands (EyeLink, SMI, and Tobii systems) are supported. The novelty of PyGaze lies in providing an easy-to-use layer on top of the many different software libraries that are required for implementing eyetracking experiments. Essentially, PyGaze is a software bridge for eyetracking research.
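
The record mentions online detection of eye movements via a custom algorithm but does not describe it. As an illustration of the kind of analysis such a toolbox enables, here is a minimal dispersion-threshold (I-DT) fixation detector in plain Python; the function name and threshold values are hypothetical and this is not PyGaze's actual API or algorithm:

```python
# Illustrative dispersion-threshold (I-DT) fixation detection -- a generic
# sketch, not the PyGaze implementation.
def detect_fixations(samples, max_dispersion=25.0, min_samples=5):
    """samples: list of (x, y) gaze points at a fixed sampling rate.
    Returns a list of (start_index, end_index) fixation windows."""
    fixations = []
    start = 0
    while start + min_samples <= len(samples):
        window = samples[start:start + min_samples]
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion <= max_dispersion:
            # grow the window while dispersion stays under the threshold
            end = start + min_samples
            while end < len(samples):
                xs.append(samples[end][0])
                ys.append(samples[end][1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    break
                end += 1
            fixations.append((start, end - 1))
            start = end
        else:
            start += 1
    return fixations

# 8 stable samples followed by a saccade away
points = [(100, 100), (102, 101), (101, 99), (103, 100),
          (100, 102), (101, 101), (102, 100), (100, 100),
          (300, 300), (305, 298)]
print(detect_fixations(points))  # → [(0, 7)]
```

A real-time variant would apply the same dispersion test to a sliding buffer of the most recent samples instead of a full recording.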

  8. Edge detection of iris of the eye for human biometric identification system

    Directory of Open Access Journals (Sweden)

    Kateryna O. Tryfonova

    2015-03-01

    Full Text Available Biometric identification by the iris of the eye is considered one of the most accurate and reliable methods of identification. The aim of this research is to solve the problem of edge detection in digital images of the human iris, so that a biometric identification system can be implemented on a mobile device. To achieve this aim, the Canny edge detection algorithm is considered. It consists of the following steps: smoothing, finding gradients, non-maximum suppression, and double thresholding with hysteresis. A software implementation of the Canny algorithm was carried out for the Android mobile platform using the high-level programming language Java.
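
The first Canny stages listed above (smoothing and gradient computation) can be sketched in a few dozen lines. This pure-Python toy example, on a 6×6 grayscale image held as nested lists, implements only Gaussian smoothing, Sobel gradients and the gradient magnitude; non-maximum suppression and hysteresis thresholding are omitted for brevity, and borders are left unpadded:

```python
import math

def convolve3x3(img, kernel, scale=1.0):
    # naive 3x3 convolution; border pixels are left at zero for brevity
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0.0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
            out[y][x] = acc * scale
    return out

GAUSS = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]            # 3x3 Gaussian, scaled by 1/16
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def edge_magnitude(img):
    smooth = convolve3x3(img, GAUSS, 1 / 16)          # step 1: smoothing
    gx = convolve3x3(smooth, SOBEL_X)                 # step 2: gradients
    gy = convolve3x3(smooth, SOBEL_Y)
    return [[math.hypot(gx[y][x], gy[y][x]) for x in range(len(img[0]))]
            for y in range(len(img))]

# vertical dark/bright boundary between columns 2 and 3
img = [[0, 0, 0, 255, 255, 255] for _ in range(6)]
mag = edge_magnitude(img)
print(mag[3])  # the strongest responses sit around the smoothed transition
```

A production implementation would of course use an optimized library routine rather than nested Python loops.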

  9. Thermal behavior of human eye in relation with change in blood perfusion, porosity, evaporation and ambient temperature.

    Science.gov (United States)

    Rafiq, Aasma; Khanday, M A

    2016-12-01

    Extreme environmental and physiological conditions present challenges for thermal processes in body tissues, including the multi-layered human eye. A mathematical model has been formulated to study the thermal behavior of the human eye in relation to changes in blood perfusion, porosity, evaporation and environmental temperature. In this study, a comprehensive thermal analysis was performed on the multi-layered eye using Pennes' bio-heat equation with appropriate boundary and interface conditions. The variational finite element method and MATLAB were used for the solution and simulation of the results. The thermoregulatory effects of blood perfusion rate, porosity, ambient temperature and evaporation at various regions of the human eye are illustrated mathematically and graphically. The main applications of this model lie in medical science, for instance during laser therapy and other thermoregulatory investigations of the human eye.
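
For reference, the standard form of Pennes' bio-heat equation on which such models are built balances conduction, blood perfusion and metabolic heating (the symbols are the conventional ones; the paper's exact source terms, e.g. for porosity and evaporation, may differ):

```latex
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot \left( k \nabla T \right)
  + \omega_b \rho_b c_b \left( T_a - T \right)
  + Q_m
```

where \(\rho\), \(c\) and \(k\) are the tissue density, specific heat and thermal conductivity, \(\omega_b\), \(\rho_b\) and \(c_b\) the blood perfusion rate, density and specific heat, \(T_a\) the arterial blood temperature, and \(Q_m\) the metabolic heat generation.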

  10. The inversion effect on gaze perception reflects processing of component information.

    Science.gov (United States)

    Schwaninger, Adrian; Lobmaier, Janek S; Fischer, Martin H

    2005-11-01

    When faces are turned upside-down they are much more difficult to recognize than other objects. This "face inversion effect" has often been explained in terms of configural processing, which is impaired when faces are rotated away from the upright. Here we report a "gaze inversion effect" and discuss whether it is related to configural face processing of the whole face. Observers reported the gaze locations of photographed upright or inverted faces. When whole faces were presented, we found an inversion effect both for constant errors and observer sensitivity. These results were closely replicated when only the eyes were visible. Together, our findings suggest that gaze processing is largely based on component-based information from the eye region. Processing this information is orientation-sensitive and does not seem to rely on configural processing of the whole face.

  11. GIBS block speller: toward a gaze-independent P300-based BCI.

    Science.gov (United States)

    Pires, Gabriel; Nunes, Urbano; Castelo-Branco, Miguel

    2011-01-01

    Brain-computer interfaces (BCIs) open a new communication channel for individuals with severe motor disorders. In P300-based BCIs, gazing at the target event plays an important role in BCI performance. Individuals whose eye movements are affected may lose the ability to gaze at targets in the visual periphery. This paper presents a novel P300-based paradigm called the gaze independent block speller (GIBS), and compares its performance with that of the standard row-column (RC) speller. The GIBS paradigm requires extra selections of blocks of letters. Online experiments with able-bodied participants show that users can effectively control GIBS without moving the eyes (covert attention), while this task is not possible with the RC speller. Furthermore, with overt attention, the results show that the improved classification accuracy of GIBS over the RC speller compensates for the extra selections, thereby achieving similar practical bit rates.
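
The record does not define "practical bit rate", but BCI spellers are commonly compared using Wolpaw's information transfer rate, which combines the number of selectable targets with the classification accuracy. A minimal sketch (the example numbers below are illustrative, not taken from the paper):

```python
import math

def bits_per_selection(n, p):
    """Wolpaw ITR per selection: n targets, accuracy p (0 < p <= 1)."""
    if p >= 1:
        return math.log2(n)
    if p <= 0:
        return 0.0
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def bits_per_minute(n, p, selections_per_minute):
    return bits_per_selection(n, p) * selections_per_minute

# e.g. a 36-character speller at 90% accuracy and 4 selections per minute
print(round(bits_per_minute(36, 0.90, 4), 2))  # → 16.75
```

Extra block selections, as in GIBS, reduce the effective selections per minute per letter, which is why higher accuracy can still yield a similar practical bit rate.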

  12. Near-to-eye displays with embedded eye-tracking by bi-directional OLED microdisplay

    Science.gov (United States)

    Vogel, Uwe; Wartenberg, Philipp; Richter, Bernd; Brenner, Stephan; Baumgarten, Judith; Thomschke, Michael; Fehse, Karsten; Hild, Olaf

    2015-09-01

    Near-to-eye (NTE) projection is the major approach to "smart glasses", which have gained a lot of traction during the last few years. Microdisplays based on organic light-emitting diodes (OLEDs) achieve high optical performance with excellent contrast ratio and large dynamic range at low power consumption, making them suitable for such applications. In state-of-the-art applications the microdisplay typically acts as a purely unidirectional output device. With the integration of an additional image sensor, the functionality of the microdisplay can be extended to a bidirectional optical input/output device, aiming at the implementation of eye-tracking capabilities in see-through (ST-)NTE applications to achieve gaze-based human-display interaction. This paper describes a new bi-directional OLED microdisplay featuring SVGA resolution for both image display and acquisition, and its implementation with see-through NTE optics.

  13. Intraoperative length and tension curves of human eye muscles. Including stiffness in passive horizontal eye movement in awake volunteers

    NARCIS (Netherlands)

    H.J. Simonsz (Huib); G.H. Kolling (Gerold); H. Kaufmann (Herbert); B. van Dijk (Bob)

    1986-01-01

    textabstractIntraoperative continuous-registration length and tension curves of attached and detached eye muscles were made in 18 strabismic patients under general anesthesia. For relaxed eye muscles, we found an exponential relation between length and tension. An increased stiffness was quantified

  14. Vision research: losing sight of eye dominance.

    Science.gov (United States)

    Carey, D P

    2001-10-16

    Most people prefer to use their right eye for viewing. New evidence reveals that this dominance is much more plastic than that for one hand or foot: it changes from one eye to the other depending on angle of gaze. Remarkably, sighting dominance depends on the hand being directed towards the visual target.

  15. Creating Gaze Annotations in Head Mounted Displays

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Qvarfordt, Pernilla

    2015-01-01

    To facilitate distributed communication in mobile settings, we developed GazeNote for creating and sharing gaze annotations in head mounted displays (HMDs). With gaze annotations it is possible to point out objects of interest within an image and add a verbal description. To create an annotation, the user simply captures an image using the HMD's camera, looks at an object of interest in the image, and speaks out the information to be associated with the object. The gaze location is recorded and visualized with a marker. The voice is transcribed using speech recognition. Gaze annotations can…

  16. A bird’s eye view of human language evolution

    Directory of Open Access Journals (Sweden)

    Robert eBerwick

    2012-04-01

    Full Text Available Comparative studies of linguistic faculties in animals pose an evolutionary paradox: language involves certain perceptual and motor abilities, but it is not clear that this serves as more than an input-output channel for the externalization of language proper. Strikingly, the capability for auditory-vocal learning is not shared with our closest relatives, the apes, but is present in such remotely related groups as songbirds and marine mammals. There is increasing evidence for behavioural, neural and genetic similarities between speech acquisition and birdsong learning. At the same time, researchers have applied formal linguistic analysis to the vocalizations of both primates and songbirds. What have all these studies taught us about the evolution of language? Is the comparative study of an apparently species-specific trait like language feasible? We argue that comparative analysis remains an important method for the evolutionary reconstruction and causal analysis of the mechanisms underlying language. On the one hand, common descent has been important in the evolution of the brain, such that avian and mammalian brains may be largely homologous, particularly in the case of brain regions involved in auditory perception, vocalization and auditory memory. On the other hand, there has been convergent evolution of the capacity for auditory-vocal learning, and possibly for structuring of external vocalizations, such that apes lack the abilities that are shared between songbirds and humans. However, significant limitations to this comparative analysis remain. While all birdsong may be classified in terms of a particularly simple kind of concatenation system, the regular languages, there is no compelling evidence to date that birdsong matches the characteristic syntactic complexity of human language, arising from the composition of smaller forms like words and phrases into larger ones.

  17. Gaze maintenance and autism spectrum disorder.

    Science.gov (United States)

    Kaye, Leah; Kurtz, Marie; Tierney, Cheryl; Soni, Ajay; Augustyn, Marilyn

    2014-01-01

    were equal and reactive without afferent pupillary defect, and normal visual tracking as assessed through pursuit and saccades. There were some head jerking motions observed which were not thought to be part of Chase's attempts to view objects. Gaze impersistence was noted, although it was not clear if this was due to a lack of attention or a true inability to maintain a gaze in the direction instructed. On review of the school's speech and language report, they state that he is >90% intelligible. He has occasional lip trills. Testing with the Clinical Evaluation of Language Fundamentals shows mild delays in receptive language, especially those that require visual attention. Verbal Motor Production Assessment for Children reveals focal oromotor control and sequencing skills that are below average, with groping when asked to imitate single oromotor nonspeech movements and sequenced double oromotor nonspeech movements. At 5½ years, he returns for follow-up, and he is outgoing and imaginative, eager to play and socialize. He makes eye contact but does not always maintain it. He asks and responds to questions appropriately, and he is able to follow verbal directions and verbal redirection. He is very interested in Toy Story characters but willing to share them and plays with other toys. Chase's speech has predictable, easy to decode sound substitutions. On interview with him, you feel that he has borderline cognitive abilities. He also demonstrates good eye contact but lack of visual gaze maintenance; this is the opposite of the pattern you are accustomed to in patients with autism spectrum disorder. What do you do next?

  18. Transverse chromatic aberration across the visual field of the human eye.

    Science.gov (United States)

    Winter, Simon; Sabesan, Ramkumar; Tiruveedhula, Pavan; Privitera, Claudio; Unsbo, Peter; Lundström, Linda; Roorda, Austin

    2016-11-01

    The purpose of this study was to measure the transverse chromatic aberration (TCA) across the visual field of the human eye objectively. TCA was measured at horizontal and vertical field angles out to ±15° from foveal fixation in the right eye of four subjects. Interleaved retinal images were taken at wavelengths 543 nm and 842 nm in an adaptive optics scanning laser ophthalmoscope (AOSLO). To obtain true measures of the human eye's TCA, the contributions of the AOSLO system's TCA were measured using an on-axis aligned model eye and subtracted from the ocular data. The increase in TCA was found to be linear with eccentricity, with an average slope of 0.21 arcmin/degree of visual field angle (corresponding to 0.41 arcmin/degree for 430 nm to 770 nm). The absolute magnitude of ocular TCA varied between subjects, but was similar to the resolution acuity at 10° in the nasal visual field, encompassing three to four cones. Therefore, TCA can be visually significant. Furthermore, for high-resolution imaging applications, whether visualizing or stimulating cellular features in the retina, it is important to consider the lateral displacements between wavelengths and the variation in blur over the visual field.
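The linear dependence reported above can be turned into a short back-of-envelope calculation. The slopes (0.21 arcmin/degree for the 543–842 nm pair, 0.41 arcmin/degree scaled to 430–770 nm) are taken from the abstract; the helper function itself, including the zero on-axis offset, is an illustrative simplification, since the study found the absolute TCA magnitude varies between subjects.

```python
def tca_arcmin(eccentricity_deg, slope_arcmin_per_deg=0.21):
    """Estimate transverse chromatic aberration (TCA) magnitude in arcmin.

    Assumes the linear increase with eccentricity reported in the study:
    ~0.21 arcmin/degree for the 543-842 nm wavelength pair, or
    ~0.41 arcmin/degree when scaled to the 430-770 nm visible range.
    The on-axis offset is set to zero for simplicity, although the study
    found the absolute magnitude of ocular TCA varies between subjects.
    """
    return slope_arcmin_per_deg * abs(eccentricity_deg)

# At 10 degrees eccentricity (where the study compared TCA to resolution
# acuity in the nasal visual field):
print(tca_arcmin(10))        # ~2.1 arcmin for the measured wavelength pair
print(tca_arcmin(10, 0.41))  # ~4.1 arcmin scaled to the visible spectrum
```

At roughly 2 arcmin, the chromatic offset at 10° spans several cone diameters, which is why the authors conclude TCA can be visually significant.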

  19. From time series analysis to a biomechanical multibody model of the human eye

    Energy Technology Data Exchange (ETDEWEB)

    Pascolo, P. [Dipartimento di Ingegneria Civile, Universita di Udine, Udine (Italy); Dipartimento di Bioingegneria, CISM, Udine (Italy)], E-mail: paolo.pascolo@uniud.it; Carniel, R. [Dipartimento di Georisorse e Territorio, Universita di Udine, Udine (Italy)], E-mail: roberto.carniel@uniud.it

    2009-04-30

    A mechanical model of the human eye, aimed at estimating the level of muscular activation, is presented, and its applicability in the biomedical field is discussed. Human eye movements recorded in the laboratory are compared with those produced by a virtual eye, described in kinematic terms and driven by six actuators, one for each of the muscular systems that control eye motion. The proposed methodology rests on the definition of an error function between the experimental and numerical responses and on a suitable law linking activation to muscular force. The aim is a simple conceptual tool that can help the specialist diagnose potential physiological disturbances of saccadic and nystagmic movements, and that can be extended in a second phase as more sophisticated data become available. The work is part of a collaboration between the Functional Mechanics Laboratory of the University and the Neurophysiopathology Laboratory of the 'S. Maria della Misericordia' Hospital in Udine, Italy.
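The abstract does not specify the form of the error function between the experimental and numerical responses; a common choice for fitting a simulated trajectory to a measured one is the sum of squared differences, sketched below. The function name and sample data are illustrative, not taken from the paper.

```python
def trajectory_error(measured, simulated):
    """Sum-of-squared-differences error between an experimentally measured
    eye trajectory and the one produced by the virtual eye model.

    Both inputs are sequences of gaze angles (e.g. in degrees) sampled at
    the same time instants. In the methodology described, the six muscle
    activations of the virtual eye would be tuned to minimize this error;
    the quadratic form is an assumption, since the abstract does not give
    the error function explicitly.
    """
    if len(measured) != len(simulated):
        raise ValueError("trajectories must be sampled at the same instants")
    return sum((m - s) ** 2 for m, s in zip(measured, simulated))

# Illustrative saccade toward a 10-degree target: the simulated eye
# slightly undershoots during the movement but lands on target.
measured  = [0.0, 5.0, 10.0, 10.0]
simulated = [0.0, 4.5, 9.5, 10.0]
print(trajectory_error(measured, simulated))  # 0.5
```

In practice this scalar error would be handed to a numerical optimizer that adjusts the activation parameters of the six actuators until the virtual and recorded movements agree.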

  20. Effects of aqueous humor hydrodynamics on human eye heat transfer under external heat sources.

    Science.gov (United States)

    Tiang, Kor L; Ooi, Ean H

    2016-08-01

    The majority of eye models developed in the late 90s and early 00s consider only heat conduction inside the eye. This assumption is not entirely correct, since the anterior and posterior chambers are filled with aqueous humor (AH) that is constantly in motion due to thermally induced buoyancy. In this paper, a three-dimensional model of the human eye is developed to investigate the effects that AH hydrodynamics have on the human eye temperature under exposure to external heat sources. If the effects of AH flow are negligible, then future models can be developed without taking them into account, thus simplifying the modeling process. Two types of external thermal load are considered: volumetric and surface irradiation. Results showed that heat convection due to AH flow contributes nearly 95% of the total heat flow inside the anterior chamber. Moreover, the circulation inside the anterior chamber can shift the location of the hotspot upward. This can have significant consequences for our understanding of heat-induced cataractogenesis.
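Why buoyancy-driven AH flow cannot simply be neglected can be motivated with a back-of-envelope Rayleigh number estimate. Every property value below is an assumption (water-like aqueous humor near body temperature, a few-millimeter anterior chamber depth, a ~1 K temperature difference); none of these numbers come from the paper itself.

```python
def rayleigh_number(delta_t_K, depth_m, g=9.81,
                    beta=3e-4, nu=7e-7, alpha=1.5e-7):
    """Back-of-envelope Rayleigh number Ra = g*beta*dT*L^3 / (nu*alpha).

    All property values are ASSUMED (water-like aqueous humor near body
    temperature), not taken from the paper:
      beta  - thermal expansion coefficient [1/K]
      nu    - kinematic viscosity [m^2/s]
      alpha - thermal diffusivity [m^2/s]
    A large Ra indicates that buoyancy-driven convection is significant
    relative to pure conduction.
    """
    return g * beta * delta_t_K * depth_m ** 3 / (nu * alpha)

# ~1 K across a ~3 mm anterior chamber gives Ra on the order of 10^2-10^3,
# large enough that natural convection plausibly matters, consistent with
# the paper's finding that convection dominates heat flow in the chamber.
print(f"Ra ~ {rayleigh_number(1.0, 3e-3):.0f}")
```

This is only an order-of-magnitude argument; the paper's 3D simulation is what actually quantifies the ~95% convective contribution and the upward shift of the hotspot.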