WorldWideScience

Sample records for face identification eye-tracking

  1. Multi-User Identification-Based Eye-Tracking Algorithm Using Position Estimation

    Directory of Open Access Journals (Sweden)

    Suk-Ju Kang

    2016-12-01

    This paper proposes a new multi-user eye-tracking algorithm using position estimation. Conventional eye-tracking algorithms are typically suitable only for a single user and therefore cannot be used in a multi-user system. Even when they can track the eyes of multiple users, their detection accuracy is low and they cannot identify the users individually. The proposed algorithm solves these problems and enhances detection accuracy. Specifically, it adopts a classifier to detect faces in the red, green, and blue (RGB) and depth images. It then calculates histogram of oriented gradients (HOG) features for each detected facial region to identify the users, selecting the best-matching template from a pre-determined face database. Finally, the algorithm extracts the final eye positions based on anatomical proportions. Simulation results show that the proposed algorithm improved the average F1 score by up to 0.490 compared with benchmark algorithms.
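    The pipeline this abstract describes (face detection on RGB and depth images, HOG features matched against an enrolled template database, eye localization by anatomical proportions) can be sketched roughly as follows. This is an illustrative sketch, not the authors' implementation; the cascade detector, the HOG parameters, and the proportion constants are assumptions.

```python
# Hedged sketch of a multi-user face-identification + eye-localization step.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def identify_and_locate_eyes(bgr_frame, template_db):
    """template_db: dict mapping user id -> HOG vector of an enrolled face (assumed)."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        patch = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
        desc = hog.compute(patch).ravel()
        # Nearest-neighbour match against the enrolled templates.
        user = min(template_db,
                   key=lambda u: np.linalg.norm(template_db[u] - desc))
        # Rough anatomical proportions: eyes ~40% down the face,
        # ~30% and ~70% across its width (illustrative values only).
        eyes = ((x + int(0.30 * w), y + int(0.40 * h)),
                (x + int(0.70 * w), y + int(0.40 * h)))
        results.append((user, eyes))
    return results
```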

  2. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.

    Science.gov (United States)

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-06-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. (c) 2015 APA, all rights reserved.

  3. Statistical Analysis of Online Eye and Face-tracking Applications in Marketing

    Science.gov (United States)

    Liu, Xuan

    Eye-tracking and face-tracking technology have been widely adopted to study viewers' attention and emotional responses. In this dissertation, we apply these two technologies to investigate effective online content designed to attract and direct attention and to engage viewers' emotional responses. In the first part of the dissertation, we conduct a series of experiments that use eye-tracking technology to explore how online models' facial cues affect users' attention on static e-commerce websites. The joint effects of two facial cues, gaze direction and facial expression, on attention are estimated by Bayesian ANOVA, allowing various distributional assumptions. We also consider the similarities and differences in the effects of facial cues among American and Chinese consumers. This study offers insights on how to attract and retain customers' attention for advertisers that use static advertisements on various websites or ad networks. In the second part of the dissertation, we conduct a face-tracking study in which we investigate the relation between participants' emotional responses while watching comedy movie trailers and their intentions to watch the actual movies. Viewers' facial expressions are collected in real time and converted to emotional responses with algorithms based on a facial coding system. To analyze the data, we propose a joint modeling method that links viewers' longitudinal emotion measurements and their watching intentions. This research provides recommendations to filmmakers on how to improve the effectiveness of movie trailers and how to boost audiences' desire to watch the movies.

  4. Dynamic eye tracking based metrics for infant gaze patterns in the face-distractor competition paradigm.

    Directory of Open Access Journals (Sweden)

    Eero Ahtola

    To develop new standardized eye tracking based measures and metrics for infants' gaze dynamics in the face-distractor competition paradigm. Eye tracking data were collected from two samples of healthy 7-month-old infants (total n = 45), as well as one sample of 5-month-old infants (n = 22), in a paradigm with a picture of a face or a non-face pattern as a central stimulus, and a geometric shape as a lateral stimulus. The data were analyzed by using conventional measures of infants' initial disengagement from the central to the lateral stimulus (i.e., saccadic reaction time and probability) and, additionally, novel measures reflecting infants' gaze dynamics after the initial disengagement (i.e., cumulative allocation of attention to the central vs. peripheral stimulus). The results showed that the initial saccade away from the centrally presented stimulus is followed by a rapid re-engagement of attention with the central stimulus, leading to a cumulative preference for the central stimulus over the lateral stimulus over time. This pattern tended to be stronger for salient facial expressions as compared to non-face patterns, was replicable across two independent samples of 7-month-old infants, and differentiated between 7- and 5-month-old infants. The results suggest that eye tracking based assessments of infants' cumulative preference for faces over time can be readily parameterized and standardized, and may provide valuable techniques for future studies examining normative developmental changes in preference for social signals. Standardized measures of early developing face preferences may have the potential to become surrogate biomarkers of neurocognitive and social development.

  5. Dynamic Eye Tracking Based Metrics for Infant Gaze Patterns in the Face-Distractor Competition Paradigm

    Science.gov (United States)

    Ahtola, Eero; Stjerna, Susanna; Yrttiaho, Santeri; Nelson, Charles A.; Leppänen, Jukka M.; Vanhatalo, Sampsa

    2014-01-01

    Objective To develop new standardized eye tracking based measures and metrics for infants’ gaze dynamics in the face-distractor competition paradigm. Method Eye tracking data were collected from two samples of healthy 7-month-old infants (total n = 45), as well as one sample of 5-month-old infants (n = 22) in a paradigm with a picture of a face or a non-face pattern as a central stimulus, and a geometric shape as a lateral stimulus. The data were analyzed by using conventional measures of infants’ initial disengagement from the central to the lateral stimulus (i.e., saccadic reaction time and probability) and, additionally, novel measures reflecting infants’ gaze dynamics after the initial disengagement (i.e., cumulative allocation of attention to the central vs. peripheral stimulus). Results The results showed that the initial saccade away from the centrally presented stimulus is followed by a rapid re-engagement of attention with the central stimulus, leading to cumulative preference for the central stimulus over the lateral stimulus over time. This pattern tended to be stronger for salient facial expressions as compared to non-face patterns, was replicable across two independent samples of 7-month-old infants, and differentiated between 7- and 5-month-old infants. Conclusion The results suggest that eye tracking based assessments of infants’ cumulative preference for faces over time can be readily parameterized and standardized, and may provide valuable techniques for future studies examining normative developmental changes in preference for social signals. Significance Standardized measures of early developing face preferences may have the potential to become surrogate biomarkers of neurocognitive and social development. PMID:24845102
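    As a rough illustration of the two metric families named in this record (saccadic reaction time for the initial disengagement, and cumulative allocation of attention to the central versus lateral stimulus), a minimal sketch follows. The input format, a list of (time, AOI) gaze samples, is an assumption, not the authors' data structure.

```python
# Hedged sketch of disengagement and cumulative-preference metrics.
# samples: list of (time_in_s, aoi) with aoi in {"central", "lateral", None}.
def saccadic_reaction_time(samples, lateral_onset):
    for t, aoi in samples:
        if t >= lateral_onset and aoi == "lateral":
            return t - lateral_onset     # first look at the lateral stimulus
    return None                          # no disengagement on this trial

def cumulative_preference(samples, sample_dt):
    central = sum(sample_dt for _, aoi in samples if aoi == "central")
    lateral = sum(sample_dt for _, aoi in samples if aoi == "lateral")
    total = central + lateral
    return central / total if total else None   # > 0.5 = central preference
```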

  6. Do Faces Capture the Attention of Individuals with Williams Syndrome or Autism? Evidence from Tracking Eye Movements

    Science.gov (United States)

    Riby, Deborah M.; Hancock, Peter J. B.

    2009-01-01

    The neuro-developmental disorders of Williams syndrome (WS) and autism can reveal key components of social cognition. Eye-tracking techniques were applied in two tasks exploring attention to pictures containing faces. Images were (i) scrambled pictures containing faces or (ii) pictures of scenes with embedded faces. Compared to individuals who…

  7. Visual Processing of Faces in Individuals with Fragile X Syndrome: An Eye Tracking Study

    Science.gov (United States)

    Farzin, Faraz; Rivera, Susan M.; Hessl, David

    2009-01-01

    Gaze avoidance is a hallmark behavioral feature of fragile X syndrome (FXS), but little is known about whether abnormalities in the visual processing of faces, including disrupted autonomic reactivity, may underlie this behavior. Eye tracking was used to record fixations and pupil diameter while adolescents and young adults with FXS and sex- and…

  8. Technology survey on video face tracking

    Science.gov (United States)

    Zhang, Tong; Gomes, Herman Martins

    2014-03-01

    With the pervasiveness of monitoring cameras installed in public areas, schools, hospitals, workplaces and homes, video analytics technologies for interpreting these video contents are becoming increasingly relevant to people's lives. Among such technologies, human face detection and tracking (and face identification in many cases) are particularly useful in various application scenarios. While plenty of research has been conducted on face tracking and many promising approaches have been proposed, there are still significant challenges in recognizing and tracking people in videos with uncontrolled capturing conditions, largely due to pose and illumination variations, as well as occlusions and cluttered background. It is especially complex to track and identify multiple people simultaneously in real time due to the large amount of computation involved. In this paper, we present a survey of the literature and software published or developed in recent years on the face tracking topic. The survey covers the following topics: 1) mainstream and state-of-the-art face tracking methods, including features used to model the targets and metrics used for tracking; 2) face identification and face clustering from face sequences; and 3) software packages or demonstrations that are available for algorithm development or trial. A number of publicly available databases for face tracking are also introduced.

  9. The Way Dogs (Canis familiaris) Look at Human Emotional Faces Is Modulated by Oxytocin. An Eye-Tracking Study

    Directory of Open Access Journals (Sweden)

    Anna Kis

    2017-10-01

    Dogs have been shown to excel in reading human social cues, including facial cues. In the present study we used eye-tracking technology to further study dogs' face processing abilities. It was found that dogs discriminated between human facial regions in their spontaneous viewing pattern and looked most at the eye region independently of facial expression. Furthermore, dogs paid most attention to the first two images presented, after which their attention dramatically decreased; a finding that has methodological implications. Increasing evidence indicates that the oxytocin system is involved in dogs' human-directed social competence, thus as a next step we investigated the effects of oxytocin on the processing of human facial emotions. It was found that oxytocin decreases dogs' looking at human faces expressing anger. More interestingly, however, after oxytocin pre-treatment dogs' preferential gaze toward the eye region when processing happy human facial expressions disappears. These results provide the first evidence that oxytocin is involved in the regulation of human face processing in dogs. The present study is one of the few empirical investigations to explore eye gaze patterns in naïve and untrained pet dogs using a non-invasive eye-tracking technique and thus offers a unique but largely untapped method for studying social cognition in dogs.

  10. Face landmark point tracking using LK pyramid optical flow

    Science.gov (United States)

    Zhang, Gang; Tang, Sikan; Li, Jiaquan

    2018-04-01

    LK pyramid optical flow is an effective method for object tracking in video. In this paper it is used for face landmark point tracking in a video. Seven landmark points are considered: the outer corner of the left eye, the inner corner of the left eye, the inner corner of the right eye, the outer corner of the right eye, the tip of the nose, the left corner of the mouth, and the right corner of the mouth. The landmark points are marked by hand in the first frame; for subsequent frames, tracking performance is analyzed. Two kinds of conditions are considered: single factors, such as the normalized case, pose variation with slow movement, expression variation, illumination variation, occlusion, frontal face with rapid movement, and posed face with rapid movement; and combinations of factors, such as pose and illumination variation, pose and expression variation, pose variation and occlusion, illumination and expression variation, and expression variation and occlusion. Global and local measures are introduced to evaluate tracking performance under the different factors or combinations of factors. The global measures comprise the number of images aligned successfully, the average alignment error, and the number of images aligned before failure; the local measures comprise the number of images aligned successfully for each facial component and the average alignment error for each component. To test the performance of face landmark point tracking under the different cases, experiments are carried out on image sequences that we gathered. Results show that the LK pyramid optical flow method can track face landmark points under the normalized case, expression variation, illumination variation that does not affect facial details, and pose variation, and that different factors or combinations of factors affect alignment performance differently for different landmark points.
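    A minimal sketch of landmark tracking with pyramidal Lucas-Kanade optical flow, in the spirit of the method described above, might look as follows using OpenCV; the window size and pyramid depth are illustrative choices, not values from the paper.

```python
# Hedged sketch: propagate hand-marked landmark points frame to frame
# with OpenCV's pyramidal Lucas-Kanade optical flow.
import cv2
import numpy as np

def track_landmarks(video_path, initial_points):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    pts = np.float32(initial_points).reshape(-1, 1, 2)   # marked in frame 1
    tracks = [pts.reshape(-1, 2).copy()]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # status flags (lost points) are ignored here for brevity.
        pts, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, pts, None, winSize=(21, 21), maxLevel=3)
        tracks.append(pts.reshape(-1, 2).copy())
        prev_gray = gray
    cap.release()
    return tracks   # per-frame positions of the seven landmark points
```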

  11. Emerging applications of eye-tracking technology in dermatology.

    Science.gov (United States)

    John, Kevin K; Jensen, Jakob D; King, Andy J; Pokharel, Manusheela; Grossman, Douglas

    2018-04-06

    Eye-tracking technology has been used within a multitude of disciplines to provide data linking eye movements to visual processing of various stimuli (i.e., x-rays, situational positioning, printed information, and warnings). Despite the benefits provided by eye-tracking in allowing for the identification and quantification of visual attention, the discipline of dermatology has yet to see broad application of the technology. Notwithstanding dermatologists' heavy reliance upon visual patterns and cues to discriminate between benign and atypical nevi, literature that applies eye-tracking to the study of dermatology is sparse; and literature specific to patient-initiated behaviors, such as skin self-examination (SSE), is largely non-existent. The current article provides a review of eye-tracking research in various medical fields, culminating in a discussion of current applications and advantages of eye-tracking for dermatology research. Copyright © 2018 Japanese Society for Investigative Dermatology. Published by Elsevier B.V. All rights reserved.

  12. Using eye-tracking methodology in consumer science

    DEFF Research Database (Denmark)

    Bialkova, S.; Mueller Loose, Simone; Scholderer, Joachim

    experimental designs will be discussed that can be recommended for eye-tracking studies in consumer science. The application potential will then be demonstrated in four case presentations, focusing on different product categories (from dairy products to alcoholic beverages), measurement contexts (laboratory...... versus point of purchase) and study goals (appearance studies, package design, identification of food choice motives). Furthermore, the presentations will discuss how different components of attention can be distinguished based on eye-tracking data (stimulus-driven versus task-driven processes) and how...

  13. Deformable Models for Eye Tracking

    DEFF Research Database (Denmark)

    Vester-Christensen, Martin; Leimberg, Denis; Ersbøll, Bjarne Kjær

    2005-01-01

    A deformable template method for eye tracking on full face images is presented. The strengths of the method are that it is fast and retains accuracy independently of the resolution. We compare the method with a state-of-the-art active contour approach, showing that the heuristic...

  14. Visual attention to emotional face in schizophrenia: an eye tracking study.

    Directory of Open Access Journals (Sweden)

    Mania Asgharpour

    2015-03-01

    Deficits in the processing of facial emotions have been reported extensively in patients with schizophrenia. To explore whether restricted attention is the cause of impaired emotion processing in these patients, we examined visual attention by tracking eye movements in response to emotional and neutral face stimuli in a group of patients with schizophrenia and healthy individuals. We also examined the correlation between visual attention allocation and symptom severity in our patient group. Thirty adult patients with schizophrenia and 30 matched healthy controls participated in this study. Visual attention data were recorded while participants passively viewed emotional-neutral face pairs for 500 ms. The relationship between visual attention and symptom severity was assessed with the Positive and Negative Syndrome Scale (PANSS) in the schizophrenia group. Repeated-measures ANOVAs were used to compare the groups. Comparing the number of fixations made during face-pair presentation, we found that patients with schizophrenia made fewer fixations on faces, regardless of the expression of the face. Analysis of the number of fixations on negative-neutral pairs also revealed that the patients made fewer fixations on both neutral and negative faces. Analysis of the number of fixations on positive-neutral pairs only showed more fixations on positive relative to neutral expressions in both groups. We found no correlations between the visual attention pattern to faces and symptom severity in patients with schizophrenia. The results of this study suggest that the facial recognition deficit in schizophrenia is related to decreased attention to face stimuli. The finding of no difference in visual attention for positive-neutral face pairs between the groups is in line with studies that have shown an increased ability to perceive positive emotions in these patients.

  15. Task-irrelevant own-race faces capture attention: eye-tracking evidence.

    Science.gov (United States)

    Cao, Rong; Wang, Shuzhen; Rao, Congquan; Fu, Jia

    2013-04-01

    To investigate attentional capture by a face's race, the current study recorded saccade latencies from eye movement measurements in an inhibition of return (IOR) task. Compared to Caucasian (other-race) faces, Chinese (own-race) faces elicited longer saccade latencies. This phenomenon disappeared when faces were inverted. The results indicated that own-race faces capture attention automatically with high-level configural processing. © 2013 The Authors. Scandinavian Journal of Psychology © 2013 The Scandinavian Psychological Associations.

  16. Looking at vision : Eye/face/head tracking of consumers for improved marketing decisions

    NARCIS (Netherlands)

    Wedel, M.; Pieters, R.; Moutinho, L.; Bigné, E.; Manrai (eds.), A.K.

    Against the backdrop of the rapid growth in the use of eye tracking and facial recognition methodology, this chapter discusses the measurement of eye movements, facial expression of emotions, pupil dilation, eye blinks and head movements. After discussing some of the main research findings in the

  17. EYE GAZE TRACKING

    DEFF Research Database (Denmark)

    2017-01-01

    This invention relates to a method of performing eye gaze tracking of at least one eye of a user by determining the position of the center of the eye, said method comprising the steps of: detecting the position of at least three reflections on said eye, transforming said positions to a normalized coordinate system spanning a frame of reference, wherein said transformation is performed based on a bilinear transformation or a non-linear transformation, e.g., a Möbius transformation or a homographic transformation, detecting the position of said center of the eye relative to the position of said reflections and transforming this position to said normalized coordinate system, and tracking the eye gaze by tracking the movement of said eye in said normalized coordinate system. Thereby calibration of a camera, such as knowledge of the exact position and zoom level of the camera, is avoided...
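    The normalization idea can be illustrated with a rough sketch: map the detected corneal reflections (glints) to a fixed reference frame and express the pupil centre in that normalized system, so the camera's exact position and zoom need not be known. An affine map from three glints is used here as a simple stand-in for the bilinear, homographic, or Möbius transforms named in the record; the reference triangle is an arbitrary choice.

```python
# Hedged sketch: express the pupil centre in a glint-defined reference frame.
import cv2
import numpy as np

# Canonical positions the three glints are mapped to (arbitrary choice).
REFERENCE_TRIANGLE = np.float32([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])

def normalized_pupil_position(glints_px, pupil_px):
    """glints_px: three (x, y) glint positions; pupil_px: (x, y) pupil centre."""
    M = cv2.getAffineTransform(np.float32(glints_px), REFERENCE_TRIANGLE)
    p = np.float32([[pupil_px]])              # shape (1, 1, 2) for cv2.transform
    return cv2.transform(p, M).reshape(2)     # pupil in the normalized frame
```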

  18. Using eye tracking to test for individual differences in attention to attractive faces.

    Science.gov (United States)

    Valuch, Christian; Pflüger, Lena S; Wallner, Bernard; Laeng, Bruno; Ansorge, Ulrich

    2015-01-01

    We assessed individual differences in visual attention toward faces in relation to their attractiveness via saccadic reaction times. Motivated by the aim to understand individual differences in attention to faces, we tested three hypotheses: (a) Attractive faces hold or capture attention more effectively than less attractive faces; (b) men show a stronger bias toward attractive opposite-sex faces than women; and (c) blue-eyed men show a stronger bias toward blue-eyed than brown-eyed feminine faces. The latter test was included because prior research suggested a high effect size. Our data supported hypotheses (a) and (b) but not (c). By conducting separate tests for disengagement of attention and attention capture, we found that individual differences exist at distinct stages of attentional processing but these differences are of varying robustness and importance. In our conclusion, we also advocate the use of linear mixed effects models as the most appropriate statistical approach for studying inter-individual differences in visual attention with naturalistic stimuli.
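    A hedged sketch of the linear mixed-effects approach advocated in this record, using statsmodels: saccadic reaction time modelled with a fixed effect of face attractiveness plus a by-participant random intercept and slope. The file name and column names ("srt_ms", "attractiveness", "participant") are hypothetical.

```python
# Hedged sketch of a linear mixed-effects analysis of saccadic reaction times.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("saccade_trials.csv")   # hypothetical trial-level file
model = smf.mixedlm("srt_ms ~ attractiveness", trials,
                    groups=trials["participant"],
                    re_formula="~attractiveness")   # random slope per participant
fit = model.fit()
print(fit.summary())
```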

  19. Using eye tracking to test for individual differences in attention to attractive faces

    Directory of Open Access Journals (Sweden)

    Christian Valuch

    2015-02-01

    We assessed individual differences in visual attention toward faces in relation to their attractiveness via saccadic reaction times (SRTs). Motivated by the aim to understand individual differences in attention to faces, we tested three hypotheses: (a) attractive faces hold or capture attention more effectively than less attractive faces; (b) men show a stronger bias toward attractive opposite-sex faces than women; and (c) blue-eyed men show a stronger bias toward blue-eyed than brown-eyed feminine faces. The latter test was included because prior research suggested a high effect size. Our data supported hypotheses (a) and (b) but not (c). By conducting separate tests for disengagement of attention and attention capture, we found that individual differences exist at distinct stages of attentional processing but these differences are of varying robustness and importance. In our conclusion, we also advocate the use of linear mixed effects models as the most appropriate statistical approach toward studying inter-individual differences in visual attention with naturalistic stimuli.

  20. Similarity and Difference in the Processing of Same- and Other-Race Faces as Revealed by Eye Tracking in 4- to 9-Month-Olds

    Science.gov (United States)

    Liu, Shaoying; Quinn, Paul C.; Wheeler, Andrea; Xiao, Naiqi; Ge, Liezhong; Lee, Kang

    2011-01-01

    Fixation duration for same-race (i.e., Asian) and other-race (i.e., Caucasian) female faces by Asian infant participants between 4 and 9 months of age was investigated with an eye-tracking procedure. The age range tested corresponded with prior reports of processing differences between same- and other-race faces observed in behavioral looking time…

  1. Brief Report: Patterns of Eye Movements in Face to Face Conversation Are Associated with Autistic Traits--Evidence from a Student Sample

    Science.gov (United States)

    Vabalas, Andrius; Freeth, Megan

    2016-01-01

    The current study investigated whether the amount of autistic traits shown by an individual is associated with viewing behaviour during a face-to-face interaction. The eye movements of 36 neurotypical university students were recorded using a mobile eye-tracking device. High amounts of autistic traits were neither associated with reduced looking…

  2. Eye-catching odors: olfaction elicits sustained gazing to faces and eyes in 4-month-old infants.

    Science.gov (United States)

    Durand, Karine; Baudouin, Jean-Yves; Lewkowicz, David J; Goubet, Nathalie; Schaal, Benoist

    2013-01-01

    This study investigated whether an odor can affect infants' attention to visually presented objects and whether it can selectively direct visual gaze at visual targets as a function of their meaning. Four-month-old infants (n = 48) were exposed to their mother's body odors while their visual exploration was recorded with an eye-movement tracking system. Two groups of infants, who were assigned to either an odor condition or a control condition, looked at a scene composed of still pictures of faces and cars. As expected, infants looked longer at the faces than at the cars, but this spontaneous preference for faces was significantly enhanced in the presence of the odor. As expected also, when looking at the face, the infants looked longer at the eyes than at any other facial regions, but, again, they looked at the eyes significantly longer in the presence of the odor. Thus, 4-month-old infants are sensitive to the contextual effects of odors while looking at faces. This suggests that early social attention to faces is mediated by visual as well as non-visual cues.

  3. Trustworthy-looking face meets brown eyes.

    Directory of Open Access Journals (Sweden)

    Karel Kleisner

    We tested whether eye color influences perception of trustworthiness. Facial photographs of 40 female and 40 male students were rated for perceived trustworthiness. Eye color had a significant effect, the brown-eyed faces being perceived as more trustworthy than the blue-eyed ones. Geometric morphometrics, however, revealed significant correlations between eye color and face shape. Thus, face shape likewise had a significant effect on perceived trustworthiness but only for male faces, the effect for female faces not being significant. To determine whether perception of trustworthiness was being influenced primarily by eye color or by face shape, we recolored the eyes on the same male facial photos and repeated the test procedure. Eye color now had no effect on perceived trustworthiness. We concluded that although the brown-eyed faces were perceived as more trustworthy than the blue-eyed ones, it was not brown eye color per se that caused the stronger perception of trustworthiness but rather the facial features associated with brown eyes.

  4. Eye movement identification based on accumulated time feature

    Science.gov (United States)

    Guo, Baobao; Wu, Qiang; Sun, Jiande; Yan, Hua

    2017-06-01

    Eye movement is a new kind of feature for biometric recognition, with many advantages compared with other features such as fingerprint, face, and iris. It is not only a static characteristic but also a combination of brain activity and muscle behavior, which makes it effective in preventing spoofing attacks. In addition, eye movements can be incorporated with faces, irises and other features recorded from the face region into multimodal systems. In this paper, we conduct an exploratory study on eye movement identification based on the eye movement datasets provided by Komogortsev et al. in 2011, using different classification methods. Saccade and fixation times are extracted from the eye movement data as the eye movement features. Furthermore, a performance analysis is conducted on different classification methods, namely BP, RBF, ELMAN and SVM, in order to provide a reference for future research in this field.
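    A small illustrative sketch of this feature-plus-classifier idea (not the authors' code): a simple velocity-threshold split of gaze samples into fixations and saccades, accumulated durations as features, and an SVM as one of the compared classifiers. The velocity threshold, sampling rate, and feature choice are assumptions.

```python
# Hedged sketch: accumulated fixation/saccade time features + SVM identification.
import numpy as np
from sklearn.svm import SVC

def accumulated_time_features(xy, dt, vel_threshold=50.0):
    """xy: (N, 2) gaze positions in degrees; dt: sample interval in seconds."""
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) / dt   # deg/s
    saccade = speed > vel_threshold                             # crude I-VT split
    return np.array([saccade.sum() * dt,        # total saccade time
                     (~saccade).sum() * dt])    # total fixation time

def train_identifier(recordings, dt=1.0 / 250):
    """recordings: list of (xy_array, user_id) enrolment recordings (hypothetical)."""
    X = [accumulated_time_features(xy, dt) for xy, _ in recordings]
    y = [user for _, user in recordings]
    return SVC(kernel="rbf").fit(X, y)
```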

  5. Eye tracking and the translation process

    DEFF Research Database (Denmark)

    Hvelplund, Kristian Tangsgaard

    2014-01-01

    Eye tracking has become increasingly popular as a quantitative research method in translation research. This paper discusses some of the major methodological issues involved in the use of eye tracking in translation research. It focuses specifically on challenges in the analysis and interpretation...... of eye-tracking data as reflections of cognitive processes during translation. Four types of methodological issues are discussed in the paper. The first part discusses the preparatory steps that precede the actual recording of eye-tracking data. The second part examines critically the general assumptions...... linking eye movements to cognitive processing in the context of translation research. The third part of the paper discusses two popular eye-tracking measures often used in translation research, fixations and pupil size, while the fourth part proposes a method to evaluate the quality of eye-tracking data....

  6. Rotational symmetric HMD with eye-tracking capability

    Science.gov (United States)

    Liu, Fangfang; Cheng, Dewen; Wang, Qiwei; Wang, Yongtian

    2016-10-01

    As an important auxiliary function of head-mounted displays (HMDs), eye tracking has an important role in the field of intelligent human-machine interaction. In this paper, an eye-tracking HMD system (ET-HMD) is designed based on a rotationally symmetric system. The tracking principle is based on pupil-corneal reflection. The ET-HMD system comprises three optical paths for virtual display, infrared illumination, and eye tracking. The display optics is shared by the three optical paths and consists of four spherical lenses. For the eye-tracking path, an extra imaging lens is added to match the image sensor and achieve eye tracking. The display optics provides users a 40° diagonal FOV with a 0.61″ OLED, a 19 mm eye clearance, and a 10 mm exit pupil diameter. The eye-tracking path can capture a 15 mm × 15 mm area of the user's eye. The average MTF is above 0.1 at 26 lp/mm for the display path, and exceeds 0.2 at 46 lp/mm for the eye-tracking path. Eye illumination is simulated using LightTools with an eye model and an 850 nm near-infrared LED (NIR-LED). The results of the simulation show that the illumination of the NIR-LED can cover the area of the eye model with the display optics, which is sufficient for eye tracking. Integrating an eye-tracking feature into the optical system of an HMD can help improve the user experience.

  7. Exploring the time course of face matching: temporal constraints impair unfamiliar face identification under temporally unconstrained viewing.

    Science.gov (United States)

    Ozbek, Müge; Bindemann, Markus

    2011-10-01

    The identification of unfamiliar faces has been studied extensively with matching tasks, in which observers decide if pairs of photographs depict the same person (identity matches) or different people (mismatches). In experimental studies in this field, performance is usually self-paced under the assumption that this will encourage best-possible accuracy. Here, we examined the temporal characteristics of this task by limiting display times and tracking observers' eye movements. Observers were required to make match/mismatch decisions to pairs of faces shown for 200, 500, 1000, or 2000 ms, or for an unlimited duration. Peak accuracy was reached within 2000 ms and two fixations to each face. However, intermixing exposure conditions produced a context effect that generally reduced accuracy on identity mismatch trials, even when unlimited viewing of faces was possible. These findings indicate that less than 2 s are required for face matching when exposure times are variable, but temporal constraints should be avoided altogether if accuracy is truly paramount. The implications of these findings are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Eye tracking in user experience design

    CERN Document Server

    Romano Bergstrom, Jennifer

    2014-01-01

    Eye Tracking for User Experience Design explores the many applications of eye tracking to better understand how users view and interact with technology. Ten leading experts in eye tracking discuss how they have taken advantage of this new technology to understand, design, and evaluate user experience. Real-world stories are included from these experts who have used eye tracking during the design and development of products ranging from information websites to immersive games. They also explore recent advances in the technology which tracks how users interact with mobile devices, large-screen displays and video game consoles. Methods for combining eye tracking with other research techniques for a more holistic understanding of the user experience are discussed. This is an invaluable resource to those who want to learn how eye tracking can be used to better understand and design for their users. * Includes highly relevant examples and information for those who perform user research and design interactive experi...

  9. Eye-catching odors: olfaction elicits sustained gazing to faces and eyes in 4-month-old infants.

    Directory of Open Access Journals (Sweden)

    Karine Durand

    This study investigated whether an odor can affect infants' attention to visually presented objects and whether it can selectively direct visual gaze at visual targets as a function of their meaning. Four-month-old infants (n = 48) were exposed to their mother's body odors while their visual exploration was recorded with an eye-movement tracking system. Two groups of infants, who were assigned to either an odor condition or a control condition, looked at a scene composed of still pictures of faces and cars. As expected, infants looked longer at the faces than at the cars, but this spontaneous preference for faces was significantly enhanced in the presence of the odor. As expected also, when looking at the face, the infants looked longer at the eyes than at any other facial regions, but, again, they looked at the eyes significantly longer in the presence of the odor. Thus, 4-month-old infants are sensitive to the contextual effects of odors while looking at faces. This suggests that early social attention to faces is mediated by visual as well as non-visual cues.

  10. An improved likelihood model for eye tracking

    DEFF Research Database (Denmark)

    Hammoud, Riad I.; Hansen, Dan Witzner

    2007-01-01

    While existing eye detection and tracking algorithms can work reasonably well in a controlled environment, they tend to perform poorly under real world imaging conditions where the lighting produces shadows and the person's eyes can be occluded by e.g. glasses or makeup. As a result, pixel clusters...... associated with the eyes tend to be grouped together with background features. This problem occurs both for eye detection and eye tracking. Problems that especially plague eye tracking include head movement, eye blinking and light changes, all of which can cause the eyes to suddenly disappear. The usual...... approach in such cases is to abandon the tracking routine and re-initialize eye detection. Of course this may be a difficult process due to the missing data problem. Accordingly, what is needed is an efficient method of reliably tracking a person's eyes between successively produced video image frames, even...

  11. Face perception in the mind's eye.

    Science.gov (United States)

    Righart, Ruthger; Burra, Nicolas; Vuilleumier, Patrik

    2011-03-01

    Perceptual filling-in occurs when visual stimuli are recognized in impoverished viewing conditions. Whether missing information is filled-in during face perception and which stages might be involved in this process are still unresolved questions. Because an identity can be brought to mind by seeing eyes only, we hypothesized that missing information might be filled-in from a memory trace for the whole face identity. We presented participants with faces in phase 1 and later we presented eyes-only in phase 2. For some of these eyes in phase 2, the whole face had been presented in the previous phase, for others identical eyes had been presented. Event-related potentials (ERPs) revealed an N170 component that was more negative when eyes were preceded by a whole face in the previous phase compared to eyes preceded by identical eyes-only. A more positive-going late positive complex (LPC) was also found, suggesting enhanced retrieval of face memory representations when eyes were preceded by whole faces. Our results show that pre-existing representations of face identity can influence early stages of visual encoding, 170 ms after stimulus onset. These effects may reflect top-down modulation by memory on visual recognition processes by filling-in the missing facial information.

  12. Objective eye-gaze behaviour during face-to-face communication with proficient alaryngeal speakers: a preliminary study.

    Science.gov (United States)

    Evitts, Paul; Gallop, Robert

    2011-01-01

    There is a large body of research demonstrating the impact of visual information on speaker intelligibility in both normal and disordered speaker populations. However, there is minimal information on which specific visual features listeners find salient during conversational discourse. To investigate listeners' eye-gaze behaviour during face-to-face conversation with normal, laryngeal and proficient alaryngeal speakers. Sixty participants individually participated in a 10-min conversation with one of four speakers (typical laryngeal, tracheoesophageal, oesophageal, electrolaryngeal; 15 participants randomly assigned to one mode of speech). All speakers were > 85% intelligible and were judged to be 'proficient' by two certified speech-language pathologists. Participants were fitted with a head-mounted eye-gaze tracking device (Mobile Eye, ASL) that calculated the region of interest and mean duration of eye-gaze. Self-reported gaze behaviour was also obtained following the conversation using a 10 cm visual analogue scale. While listening, participants viewed the lower facial region of the oesophageal speaker more than the normal or tracheoesophageal speaker. Results of non-hierarchical cluster analyses showed that while listening, the pattern of eye-gaze was predominantly directed at the lower face of the oesophageal and electrolaryngeal speaker and more evenly dispersed among the background, lower face, and eyes of the normal and tracheoesophageal speakers. Finally, results show a low correlation between self-reported eye-gaze behaviour and objective regions of interest data. Overall, results suggest similar eye-gaze behaviour when healthy controls converse with normal and tracheoesophageal speakers and that participants had significantly different eye-gaze patterns when conversing with an oesophageal speaker. Results are discussed in terms of existing eye-gaze data and its potential implications on auditory-visual speech perception. © 2011 Royal College of Speech

  13. 1st Workshop on Eye Tracking and Visualization

    CERN Document Server

    Chuang, Lewis; Fisher, Brian; Schmidt, Albrecht; Weiskopf, Daniel

    2017-01-01

    This book discusses research, methods, and recent developments in the interdisciplinary field that spans research in visualization, eye tracking, human-computer interaction, and psychology. It presents extended versions of papers from the First Workshop on Eye Tracking and Visualization (ETVIS), which was organized as a workshop of the IEEE VIS Conference 2015. Topics include visualization and visual analytics of eye-tracking data, metrics and cognitive models, eye-tracking experiments in the context of visualization interfaces, and eye tracking in 3D and immersive environments. The extended ETVIS papers are complemented by a chapter offering an overview of visualization approaches for analyzing eye-tracking data and a chapter that discusses electrooculography (EOG) as an alternative of acquiring information about eye movements. Covering scientific visualization, information visualization, and visual analytics, this book is a valuable resource for eye-tracking researchers within the visualization community.

  14. A novel strong tracking finite-difference extended Kalman filter for nonlinear eye tracking

    Institute of Scientific and Technical Information of China (English)

    ZHANG ZuTao; ZHANG JiaShu

    2009-01-01

    Non-intrusive methods for eye tracking are important for many applications of vision-based human-computer interaction. However, due to the high nonlinearity of eye motion, ensuring robustness to external interference and accuracy of eye tracking poses the primary obstacle to the integration of eye movements into today's interfaces. In this paper, we present a strong tracking finite-difference extended Kalman filter algorithm, aiming to overcome the difficulty of modeling nonlinear eye tracking. In the filtering calculation, a strong tracking factor is introduced to modify the a priori covariance matrix and improve the accuracy of the filter. The filter uses a finite-difference method to calculate partial derivatives of the nonlinear functions for eye tracking. Experimental results show the validity of our method for eye tracking under realistic conditions.
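    A compact sketch of the filtering idea (not the authors' implementation): an extended Kalman filter whose Jacobians come from finite differences and whose predicted covariance is inflated by a strong-tracking fading factor so the filter keeps up with abrupt eye motion. The fixed fading factor and the generic state/measurement models are simplifications; in strong tracking filters the factor is normally computed adaptively from the innovations.

```python
# Hedged sketch of one strong-tracking, finite-difference EKF step.
import numpy as np

def numerical_jacobian(func, x, eps=1e-5):
    fx = func(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (func(x + dx) - fx) / eps
    return J

def strong_tracking_ekf_step(x, P, z, f, h, Q, R, lam=1.2):
    """x: state, P: covariance, z: measured eye position, f/h: process/measurement models."""
    F = numerical_jacobian(f, x)
    x_pred = f(x)
    P_pred = lam * (F @ P @ F.T) + Q        # fading factor inflates the prediction
    H = numerical_jacobian(h, x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new
```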

  15. Head-mounted eye tracking of a chimpanzee under naturalistic conditions.

    Directory of Open Access Journals (Sweden)

    Fumihiro Kano

    Full Text Available This study offers a new method for examining the bodily, manual, and eye movements of a chimpanzee at the micro-level. A female chimpanzee wore a lightweight head-mounted eye tracker (60 Hz on her head while engaging in daily interactions with the human experimenter. The eye tracker recorded her eye movements accurately while the chimpanzee freely moved her head, hands, and body. Three video cameras recorded the bodily and manual movements of the chimpanzee from multiple angles. We examined how the chimpanzee viewed the experimenter in this interactive setting and how the eye movements were related to the ongoing interactive contexts and actions. We prepared two experimentally defined contexts in each session: a face-to-face greeting phase upon the appearance of the experimenter in the experimental room, and a subsequent face-to-face task phase that included manual gestures and fruit rewards. Overall, the general viewing pattern of the chimpanzee, measured in terms of duration of individual fixations, length of individual saccades, and total viewing duration of the experimenter's face/body, was very similar to that observed in previous eye-tracking studies that used non-interactive situations, despite the differences in the experimental settings. However, the chimpanzee viewed the experimenter and the scene objects differently depending on the ongoing context and actions. The chimpanzee viewed the experimenter's face and body during the greeting phase, but viewed the experimenter's face and hands as well as the fruit reward during the task phase. These differences can be explained by the differential bodily/manual actions produced by the chimpanzee and the experimenter during each experimental phase (i.e., greeting gestures, task cueing. Additionally, the chimpanzee's viewing pattern varied depending on the identity of the experimenter (i.e., the chimpanzee's prior experience with the experimenter. These methods and results offer new

  16. Correlations between psychometric schizotypy, scan path length, fixations on the eyes and face recognition.

    Science.gov (United States)

    Hills, Peter J; Eaton, Elizabeth; Pake, J Michael

    2016-01-01

    Psychometric schizotypy in the general population correlates negatively with face recognition accuracy, potentially due to deficits in inhibition, social withdrawal, or eye-movement abnormalities. We report an eye-tracking face recognition study in which participants were required to match one of two faces (target and distractor) to a cue face presented immediately before. All faces could be presented with or without paraphernalia (e.g., hats, glasses, facial hair). Results showed that paraphernalia distracted participants, and that the most distracting condition was when the cue and the distractor face had paraphernalia but the target face did not, while there was no correlation between distractibility and participants' scores on the Schizotypal Personality Questionnaire (SPQ). Schizotypy was negatively correlated with proportion of time fixating on the eyes and positively correlated with not fixating on a feature. It was negatively correlated with scan path length and this variable correlated with face recognition accuracy. These results are interpreted as schizotypal traits being associated with a restricted scan path leading to face recognition deficits.
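    For readers wanting to compute the gaze measures this record relates to schizotypy, here is a small illustrative sketch (not from the paper) of scan path length and the proportion of fixation time falling on an "eyes" area of interest; the fixation record format and the AOI rectangle are assumptions.

```python
# Hedged sketch of scan path length and eyes-AOI dwell proportion.
import numpy as np

def scan_path_length(fixations):
    """fixations: list of (x, y, duration_ms) in screen pixels."""
    xy = np.array([(x, y) for x, y, _ in fixations], dtype=float)
    return np.linalg.norm(np.diff(xy, axis=0), axis=1).sum()

def prop_time_on_eyes(fixations, eyes_rect):
    """eyes_rect: (x0, y0, x1, y1) bounding box of the eye region."""
    x0, y0, x1, y1 = eyes_rect
    on_eyes = sum(d for x, y, d in fixations if x0 <= x <= x1 and y0 <= y <= y1)
    total = sum(d for _, _, d in fixations)
    return on_eyes / total if total else 0.0
```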

  17. Double attention bias for positive and negative emotional faces in clinical depression: evidence from an eye-tracking study.

    Science.gov (United States)

    Duque, Almudena; Vázquez, Carmelo

    2015-03-01

    According to cognitive models, attentional biases in depression play key roles in the onset and subsequent maintenance of the disorder. The present study examines the processing of emotional facial expressions (happy, angry, and sad) in depressed and non-depressed adults. Sixteen unmedicated patients with Major Depressive Disorder (MDD) and 34 never-depressed controls (ND) completed an eye-tracking task to assess different components of visual attention (orienting attention and maintenance of attention) in the processing of emotional faces. Compared to ND, participants with MDD showed a negative attentional bias in attentional maintenance indices (i.e. first fixation duration and total fixation time) for sad faces. This attentional bias was positively associated with the severity of depressive symptoms. Furthermore, the MDD group spent a marginally less amount of time viewing happy faces compared with the ND group. No differences were found between the groups with respect to angry faces and orienting attention indices. The current study is limited by its cross-sectional design. These results support the notion that attentional biases in depression are specific to depression-related information and that they operate in later stages in the deployment of attention. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Eye tracking social preferences

    NARCIS (Netherlands)

    Jiang, Ting; Potters, Jan; Funaki, Yukihiko

    We hypothesize that if people are motivated by a particular social preference, then choosing in accordance with this preference will lead to an identifiable pattern of eye movements. We track eye movements while subjects make choices in simple three-person distribution experiments. We characterize

  19. Using eye movements as an index of implicit face recognition in autism spectrum disorder.

    Science.gov (United States)

    Hedley, Darren; Young, Robyn; Brewer, Neil

    2012-10-01

    Individuals with an autism spectrum disorder (ASD) typically show impairment on face recognition tasks. Performance has usually been assessed using overt, explicit recognition tasks. Here, a complementary method involving eye tracking was used to examine implicit face recognition in participants with ASD and in an intelligence quotient-matched non-ASD control group. Differences in eye movement indices between target and foil faces were used as an indicator of implicit face recognition. Explicit face recognition was assessed using old-new discrimination and reaction time measures. Stimuli were faces of studied (target) or unfamiliar (foil) persons. Target images at test were either identical to the images presented at study or altered by changing the lighting, pose, or by masking with visual noise. Participants with ASD performed worse than controls on the explicit recognition task. Eye movement-based measures, however, indicated that implicit recognition may not be affected to the same degree as explicit recognition. Autism Res 2012, 5: 363-379. © 2012 International Society for Autism Research, Wiley Periodicals, Inc.

  20. Sampling strong tracking nonlinear unscented Kalman filter and its application in eye tracking

    International Nuclear Information System (INIS)

    Zu-Tao, Zhang; Jia-Shu, Zhang

    2010-01-01

    The unscented Kalman filter is a well-developed, well-known method for nonlinear motion estimation and tracking. However, the standard unscented Kalman filter has inherent drawbacks, such as numerical instability and the large amount of calculation time required in practical applications. In this paper, we present a novel sampling strong tracking nonlinear unscented Kalman filter, aiming to overcome the difficulty of nonlinear eye tracking. In the proposed filter, a simplified unscented transform sampling strategy with n + 2 sigma points leads to computational efficiency, and a suboptimal fading factor of strong tracking filtering is introduced to improve the robustness and accuracy of eye tracking. Compared with the related unscented Kalman filter for eye tracking, the proposed filter has potential advantages in robustness, convergence speed, and tracking accuracy. The final experimental results show the validity of our method for eye tracking under realistic conditions.
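    For comparison with the previous sketch, one step of a strong-tracking unscented Kalman filter might look as follows. For brevity this uses the standard 2n+1 sigma-point set rather than the paper's simplified n+2-point sampling, and a fixed fading factor; both are simplifications of the method described in the record.

```python
# Hedged sketch of a strong-tracking UKF step (standard sigma-point set).
import numpy as np

def sigma_points(x, P, kappa):
    n = x.size
    S = np.linalg.cholesky((n + kappa) * P)            # matrix square root
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def strong_tracking_ukf_step(x, P, z, f, h, Q, R, lam=1.2, kappa=1.0):
    X, w = sigma_points(x, P, kappa)
    Xf = np.array([f(s) for s in X])                   # propagate sigma points
    x_pred = w @ Xf
    P_pred = lam * sum(wi * np.outer(s - x_pred, s - x_pred)
                       for wi, s in zip(w, Xf)) + Q    # fading factor lam >= 1
    Zf = np.array([h(s) for s in Xf])                  # predicted measurements
    z_pred = w @ Zf
    Pzz = sum(wi * np.outer(s - z_pred, s - z_pred) for wi, s in zip(w, Zf)) + R
    Pxz = sum(wi * np.outer(a - x_pred, b - z_pred)
              for wi, a, b in zip(w, Xf, Zf))
    K = Pxz @ np.linalg.inv(Pzz)
    return x_pred + K @ (z - z_pred), P_pred - K @ Pzz @ K.T
```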

  1. 29 CFR 1926.102 - Eye and face protection.

    Science.gov (United States)

    2010-07-01

    29 CFR 1926.102, Eye and face protection: ... spectacles, when required by this regulation to wear eye protection, shall be protected by goggles or... (a) General. (1) Employees shall be provided with eye and face...

  2. Eye tracking in Library and Information Science

    DEFF Research Database (Denmark)

    Lund, Haakon

    2016-01-01

    Purpose The purpose of this paper is to present a systematic literature review of the application of eye-tracking technology within the field of library and information science. Eye-tracking technology has now reached a level of maturity, which makes the use of the technology more accessible....... Subsequently, a growing interest in employing eye tracking as a methodology within library and information science research must be anticipated. Design/methodology/approach The review follows the guidelines set in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses recommendations. Two...... reference databases are searched for relevant references: Library and Information Science Abstracts and Library, Information Science and Technology Abstracts. The main selection criteria are peer-reviewed literature that describes the experimental setting, including which eye-tracking equipment was used...

  3. Attachment Avoidance Is Significantly Related to Attentional Preference for Infant Faces: Evidence from Eye Movement Data.

    Science.gov (United States)

    Jia, Yuncheng; Cheng, Gang; Zhang, Dajun; Ta, Na; Xia, Mu; Ding, Fangyuan

    2017-01-01

    Objective: To determine the influence of adult attachment orientations on infant preference. Methods: We adopted eye-tracking technology to monitor childless college women's eye movements when looking at pairs of faces, including one adult face (man or woman) and one infant face, with three different expressions (happy, sad, and neutral). The participants (N = 150; 84% Han ethnicity) were aged 18-29 years (M = 19.22, SD = 1.72). A random intercepts multilevel linear regression analysis was used to assess the unique contribution of attachment avoidance, determined using the Experiences in Close Relationships scale, to preference for infant faces. Results: Women with higher attachment avoidance showed less infant preference, as shown by less sustained overt attentional bias to the infant face than the adult face based on fixation time and count. Conclusion: Adult attachment might be related to infant preference according to eye movement indices. Women with higher attachment avoidance may lack attentional preference for infant faces. The findings may aid the treatment and remediation of the interactions between children and mothers with insecure attachment.

  4. Eye tracking for visual marketing

    NARCIS (Netherlands)

    Wedel, M.; Pieters, R.

    2008-01-01

    We provide the theory of visual attention and eye movements that serves as a basis for evaluating eye-tracking research and for discussing salient and emerging issues in visual marketing. Motivated by its rising importance in marketing practice and its potential for theoretical contribution, we

  5. The effect of texture on face identification and configural information processing

    Directory of Open Access Journals (Sweden)

    Tzschaschel Eva Alica

    2014-01-01

    Shape and texture are integral parts of face identity. In the present study, the importance of face texture for face identification and for the detection of configural manipulations (i.e., the spatial relations among facial features) was examined by comparing grayscale face photographs (i.e., real faces) and line drawings of the same faces. Whereas real faces provide information about both the texture and the shape of faces, line drawings lack texture cues. A change-detection task and a forced-choice identification task were used with both stimulus categories. In the change-detection task, participants had to decide whether the size of the eyes of two sequentially presented faces had changed or not. After having made this decision, three faces were shown to the participants and they had to identify the previously shown face among them. Furthermore, context (full vs. cropped faces) and orientation (upright vs. inverted) were manipulated. The results obtained in the change-detection task suggest that configural information was used in processing real faces, while part-based and featural information was used in processing line drawings. Additionally, real faces were identified more accurately than line drawings, and their identification was less context-sensitive but more orientation-sensitive than the identification of line drawings. Taken together, the results of the present study provide new evidence stressing the importance of face texture for identity encoding and configural face processing.

  6. Can eye tracking boost usability evaluation of computer games?

    DEFF Research Database (Denmark)

    Johansen, Sune Alstrup; Noergaard, Mie; Soerensen, Janus Rau

    2008-01-01

    Good computer games need to be challenging while at the same time being easy to use. Accordingly, besides struggling with well known challenges for usability work, such as persuasiveness, the computer game industry also faces system-specific challenges, such as identifying methods that can provide...... data on players' attention during a game. This position paper discusses how eye tracking may address three core challenges faced by computer game producer IO Interactive in their on-going work to ensure games that are fun, usable, and challenging. These challenges are: (1) Persuading game designers...... about the relevance of usability results, (2) involving game designers in usability work, and (3) identifying methods that provide new data about user behaviour and experience....

  7. Eye-tracking measurements and their link to a normative model of monitoring behaviour.

    Science.gov (United States)

    Hasse, Catrin; Bruder, Carmen

    2015-01-01

    Increasing automation necessitates operators monitoring appropriately (OMA) and raises the question of how to identify them in future selections. A normative model was developed providing criteria for the identification of OMA. According to this model, the monitoring process comprises distinct monitoring phases (orientation, anticipation, detection and recheck) in which attention should be focused on relevant areas. The current study tests the normative model on the basis of eye tracking. The eye-tracking data revealed increased concentration on relevant areas during the orientation and anticipation phase in comparison to the other phases. For the assessment of monitoring behaviour in the context of personnel selection, this implies that the anticipation and orientation phases should be considered separately as they appear to be more important in the context of monitoring than the other phases. A normative model was developed for the assessment of monitoring behaviour. Using the eye-tracking method, this model was tested with applicants for an Air Traffic Controller training programme. The results are relevant for the future selection of human operators, who will have to monitor highly automated systems.

  8. Reasoning strategies with rational numbers revealed by eye tracking.

    Science.gov (United States)

    Plummer, Patrick; DeWolf, Melissa; Bassok, Miriam; Gordon, Peter C; Holyoak, Keith J

    2017-07-01

    Recent research has begun to investigate the impact of different formats for rational numbers on the processes by which people make relational judgments about quantitative relations. DeWolf, Bassok, and Holyoak (Journal of Experimental Psychology: General, 144(1), 127-150, 2015) found that accuracy on a relation identification task was highest when fractions were presented with countable sets, whereas accuracy was relatively low for all conditions where decimals were presented. However, it is unclear what processing strategies underlie these disparities in accuracy. We report an experiment that used eye-tracking methods to externalize the strategies that are evoked by different types of rational numbers for different types of quantities (discrete vs. continuous). Results showed that eye-movement behavior during the task was jointly determined by image and number format. Discrete images elicited a counting strategy for both fractions and decimals, but this strategy led to higher accuracy only for fractions. Continuous images encouraged magnitude estimation and comparison, but to a greater degree for decimals than fractions. This strategy led to decreased accuracy for both number formats. By analyzing participants' eye movements when they viewed a relational context and made decisions, we were able to obtain an externalized representation of the strategic choices evoked by different ontological types of entities and different types of rational numbers. Our findings using eye-tracking measures enable us to go beyond previous studies based on accuracy data alone, demonstrating that quantitative properties of images and the different formats for rational numbers jointly influence strategies that generate eye-movement behavior.

  9. Robust online face tracking-by-detection

    NARCIS (Netherlands)

    Comaschi, F.; Stuijk, S.; Basten, T.; Corporaal, H.

    2016-01-01

    The problem of online face tracking from unconstrained videos is still unresolved. Challenges range from coping with severe online appearance variations to coping with occlusion. We propose RFTD (Robust Face Tracking-by-Detection), a system which combines tracking and detection into a single

  10. Eye contrast polarity is critical for face recognition by infants.

    Science.gov (United States)

    Otsuka, Yumiko; Motoyoshi, Isamu; Hill, Harold C; Kobayashi, Megumi; Kanazawa, So; Yamaguchi, Masami K

    2013-07-01

    Just as faces share the same basic arrangement of features, with two eyes above a nose above a mouth, human eyes all share the same basic contrast polarity relations, with a sclera lighter than an iris and a pupil, and this is unique among primates. The current study examined whether this bright-dark relationship of sclera to iris plays a critical role in face recognition from early in development. Specifically, we tested face discrimination in 7- and 8-month-old infants while independently manipulating the contrast polarity of the eye region and of the rest of the face. This gave four face contrast polarity conditions: fully positive condition, fully negative condition, positive face with negated eyes ("negative eyes") condition, and negated face with positive eyes ("positive eyes") condition. In a familiarization and novelty preference procedure, we found that 7- and 8-month-olds could discriminate between faces only when the contrast polarity of the eyes was preserved (positive) and that this did not depend on the contrast polarity of the rest of the face. This demonstrates the critical role of eye contrast polarity for face recognition in 7- and 8-month-olds and is consistent with previous findings for adults. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Implicit negative affect predicts attention to sad faces beyond self-reported depressive symptoms in healthy individuals: An eye-tracking study.

    Science.gov (United States)

    Bodenschatz, Charlott Maria; Skopinceva, Marija; Kersting, Anette; Quirin, Markus; Suslow, Thomas

    2018-04-04

    Cognitive theories of depression assume biased attention towards mood-congruent information as a central vulnerability and maintaining factor. Among other symptoms, depression is characterized by excessive negative affect (NA). Yet, little is known about the impact of naturally occurring NA on the allocation of attention to emotional information. The study investigates how implicit and explicit NA as well as self-reported depressive symptoms predict attentional biases in a sample of healthy individuals (N = 104). Attentional biases were assessed using eye-tracking during a free viewing task in which images of sad, angry, happy and neutral faces were shown simultaneously. Participants' implicit affectivity was measured indirectly using the Implicit Positive and Negative Affect Test. Questionnaires were administered to assess actual and habitual explicit NA and presence of depressive symptoms. Higher levels of depressive symptoms were associated with sustained attention to sad faces and reduced attention to happy faces. Implicit but not explicit NA significantly predicted gaze behavior towards sad faces independently from depressive symptoms. The present study supports the idea that naturally occurring implicit NA is associated with attention allocation to dysphoric facial expression. The findings demonstrate the utility of implicit affectivity measures in studying individual differences in depression-relevant attentional biases and cognitive vulnerability. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. A laser-based eye-tracking system.

    Science.gov (United States)

    Irie, Kenji; Wilson, Bruce A; Jones, Richard D; Bones, Philip J; Anderson, Tim J

    2002-11-01

    This paper reports on the development of a new eye-tracking system for noninvasive recording of eye movements. The eye tracker uses a flying-spot laser to selectively image landmarks on the eye and, subsequently, measure horizontal, vertical, and torsional eye movements. Considerable work was required to overcome the adverse effects of specular reflection of the flying-spot from the surface of the eye onto the sensing elements of the eye tracker. These effects have been largely overcome, and the eye-tracker has been used to document eye movement abnormalities, such as abnormal torsional pulsion of saccades, in the clinical setting.

  13. Integrating eye tracking in virtual reality for stroke rehabilitation

    OpenAIRE

    Alves, Júlio Miguel Gomes Rebelo

    2014-01-01

    This thesis reports on research done for the integration of eye tracking technology into virtual reality environments, with the goal of using it in the rehabilitation of patients who have suffered a stroke. For the last few years, eye tracking has been a focus of medical research, used as an assistive tool to help people with disabilities interact with new technologies and as an assessment tool to track eye gaze during computer interactions. However, tracking more complex gaze behavio...

  14. Eye-tracking Post-editing Behaviour in an Interactive Translation Prediction Environment

    DEFF Research Database (Denmark)

    Mesa-Lao, Bartolomé

    2013-01-01

    challenges faced by translators. This paper reports on a preliminary pilot test within the CasMaCat project. Based on user activity data (key-logging and eye-tracking), this project aims at defining the functionalities of a new translator's workbench focusing on post-editing and advanced computer......-aided translation methods. The main aim of this preliminary pilot was to assess one of the new features implemented in the second prototype of the workbench: the interactive translation prediction (ITP) feature. This ITP feature is set to provide translators with different suggestions as they post......-edit. For this purpose, 6 translators were asked to post-edit 1,000 words from English to Spanish in five different tasks while their eye movements were being tracked. Each task was designed to test different modalities of ITP. Translators were also asked to fill out a questionnaire expressing their attitudes towards...

  15. Who is the Usual Suspect? Evidence of a Selection Bias Toward Faces That Make Direct Eye Contact in a Lineup Task

    Science.gov (United States)

    van Golde, Celine; Verstraten, Frans A. J.

    2017-01-01

    The speed and ease with which we recognize the faces of our friends and family members belies the difficulty we have recognizing less familiar individuals. Nonetheless, overconfidence in our ability to recognize faces has carried over into various aspects of our legal system; for instance, eyewitness identification serves a critical role in criminal proceedings. For this reason, understanding the perceptual and psychological processes that underlie false identification is of the utmost importance. Gaze direction is a salient social signal and direct eye contact, in particular, is thought to capture attention. Here, we tested the hypothesis that differences in gaze direction may influence difficult decisions in a lineup context. In a series of experiments, we show that when a group of faces differed in their gaze direction, the faces that were making eye contact with the participants were more likely to be misidentified. Interestingly, this bias disappeared when the faces are presented with their eyes closed. These findings open a critical conversation between social neuroscience and forensic psychology, and imply that direct eye contact may (wrongly) increase the perceived familiarity of a face. PMID:28203355

  16. Who is the Usual Suspect? Evidence of a Selection Bias Toward Faces That Make Direct Eye Contact in a Lineup Task

    Directory of Open Access Journals (Sweden)

    Jessica Taubert

    2017-02-01

    Full Text Available The speed and ease with which we recognize the faces of our friends and family members belies the difficulty we have recognizing less familiar individuals. Nonetheless, overconfidence in our ability to recognize faces has carried over into various aspects of our legal system; for instance, eyewitness identification serves a critical role in criminal proceedings. For this reason, understanding the perceptual and psychological processes that underlie false identification is of the utmost importance. Gaze direction is a salient social signal and direct eye contact, in particular, is thought to capture attention. Here, we tested the hypothesis that differences in gaze direction may influence difficult decisions in a lineup context. In a series of experiments, we show that when a group of faces differed in their gaze direction, the faces that were making eye contact with the participants were more likely to be misidentified. Interestingly, this bias disappeared when the faces are presented with their eyes closed. These findings open a critical conversation between social neuroscience and forensic psychology, and imply that direct eye contact may (wrongly) increase the perceived familiarity of a face.

  17. Appearance-Based Multimodal Human Tracking and Identification for Healthcare in the Digital Home

    Directory of Open Access Journals (Sweden)

    Mau-Tsuen Yang

    2014-08-01

    Full Text Available There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home’s entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare.
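
    The identification step can be illustrated with a short sketch of track-based majority voting. This is not the paper's implementation: the per-frame identity labels (as would be produced by matching face, body appearance and silhouette against the appearance database) and the vote threshold are assumptions made for illustration.

```python
from collections import Counter

def track_identity(per_frame_labels, min_votes=5):
    """Assign one identity to a whole track by majority vote.

    per_frame_labels: list of identity labels, one per frame in which the
    tracked person was matched against the appearance database (labels and
    the database itself are hypothetical here).
    """
    if len(per_frame_labels) < min_votes:
        return None  # not enough evidence yet
    counts = Counter(per_frame_labels)
    label, votes = counts.most_common(1)[0]
    # require an absolute majority before committing to an identity
    return label if votes > len(per_frame_labels) / 2 else None

# Example: frame-by-frame guesses from face/body/silhouette matching
print(track_identity(["mother", "mother", "son", "mother", "mother", "mother"]))
```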

  18. Combining user logging with eye tracking for interactive and dynamic applications.

    Science.gov (United States)

    Ooms, Kristien; Coltekin, Arzu; De Maeyer, Philippe; Dupont, Lien; Fabrikant, Sara; Incoul, Annelies; Kuhn, Matthias; Slabbinck, Hendrik; Vansteenkiste, Pieter; Van der Haegen, Lise

    2015-12-01

    User evaluations of interactive and dynamic applications face various challenges related to the active nature of these displays. For example, users can often zoom and pan on digital products, and these interactions cause changes in the extent and/or level of detail of the stimulus. Therefore, in eye tracking studies, when a user's gaze is at a particular screen position (gaze position) over a period of time, the information contained in this particular position may have changed. Such digital activities are commonplace in modern life, yet it has been difficult to automatically compare the changing information at the viewed position, especially across many participants. Existing solutions typically involve tedious and time-consuming manual work. In this article, we propose a methodology that can overcome this problem. By combining eye tracking with user logging (mouse and keyboard actions) on cartographic products, we are able to accurately reference screen coordinates to geographic coordinates. This referencing approach allows researchers to know which geographic object (location or attribute) corresponds to the gaze coordinates at all times. We tested the proposed approach through two case studies, and discuss the advantages and disadvantages of the applied methodology. Furthermore, the applicability of the proposed approach is discussed with respect to other fields of research that use eye tracking, namely marketing, sports and movement sciences, and experimental psychology. From these case studies and discussions, we conclude that combining eye tracking and user-logging data is an essential step forward in efficiently studying user behavior with interactive and static stimuli in multiple research fields.
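
    The core of the referencing approach, as described, is a mapping from gaze positions in screen pixels to the coordinates of the cartographic product visible at that moment. A minimal sketch is given below; it assumes the visible map extent can be reconstructed from the logged pan and zoom actions, and the extent values and screen size used here are purely illustrative.

```python
def screen_to_geo(x_px, y_px, screen_w, screen_h, extent):
    """Convert a gaze point in pixels to map coordinates.

    extent: (min_x, min_y, max_x, max_y) of the map visible at this moment,
    reconstructed from logged pan/zoom actions (values here are hypothetical).
    Screen y grows downward while map y grows upward, hence the flip.
    """
    min_x, min_y, max_x, max_y = extent
    geo_x = min_x + (x_px / screen_w) * (max_x - min_x)
    geo_y = max_y - (y_px / screen_h) * (max_y - min_y)
    return geo_x, geo_y

# Gaze at the screen centre while the viewport covers a 10 x 10 km tile
print(screen_to_geo(960, 540, 1920, 1080, (500000, 6600000, 510000, 6610000)))
```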

  19. Music-Elicited Emotion Identification Using Optical Flow Analysis of Human Face

    Science.gov (United States)

    Kniaz, V. V.; Smirnova, Z. N.

    2015-05-01

    Human emotion identification from image sequences is in high demand nowadays. The range of possible applications can vary from an automatic smile shutter function of consumer grade digital cameras to Biofied Building technologies, which enable communication between building space and residents. The highly perceptual nature of human emotions leads to the complexity of their classification and identification. The main question arises from the subjective quality of emotional classification of events that elicit human emotions. A variety of methods for formal classification of emotions were developed in musical psychology. This work is focused on identification of human emotions evoked by musical pieces using human face tracking and optical flow analysis. A facial feature tracking algorithm used for facial feature speed and position estimation is presented. Facial features were extracted from each image sequence using human face tracking with local binary patterns (LBP) features. Accurate relative speeds of facial features were estimated using optical flow analysis. The obtained relative positions and speeds were used as the output facial emotion vector. The algorithm was tested using original software and recorded image sequences. The proposed technique proves to give a robust identification of human emotions elicited by musical pieces. The estimated models could be used for human emotion identification from image sequences in such fields as emotion-based musical backgrounds or mood-dependent radio.
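
    The optical-flow step can be sketched with OpenCV's pyramidal Lucas-Kanade tracker. This is not the authors' software: the input clip name is hypothetical, and generic corner features stand in for the LBP-tracked facial features described in the record.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("face_clip.mp4")          # hypothetical input clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# In the record the tracked points come from an LBP-based face tracker; here
# generic corner features inside the first frame stand in for facial features.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=30, qualityLevel=0.01, minDistance=7)

frame_speeds = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_new = new_pts[status.flatten() == 1]
    good_old = pts[status.flatten() == 1]
    # per-feature displacement in pixels/frame approximates relative feature speed
    frame_speeds.append(np.linalg.norm(good_new - good_old, axis=-1).mean())
    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)

cap.release()
print("mean feature speed for the first frames:", frame_speeds[:10])
```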

  20. Eye tracking and nutrition label use

    DEFF Research Database (Denmark)

    Graham, Dan J.; Orquin, Jacob Lund; Visschers, Vivianne H.M.

    2012-01-01

    cameras monitoring consumer visual attention (i.e., eye tracking) has begun to identify ways in which label design could be modified to improve consumers’ ability to locate and effectively utilize nutrition information. The present paper reviews all published studies of nutrition label use that have...... utilized eye tracking methodology, identifies directions for further research in this growing field, and makes research-based recommendations for ways in which labels could be modified to improve consumers’ ability to use nutrition labels to select healthful foods....

  1. 49 CFR 214.117 - Eye and face protection.

    Science.gov (United States)

    2010-10-01

    49 CFR 214.117 (Transportation; Department of Transportation, Railroad Workplace Safety, Bridge Worker Safety Standards), Eye and face protection: "... corrective lenses, when required by this section to wear eye protection, shall be protected by goggles or ..."

  2. 29 CFR 1915.153 - Eye and face protection.

    Science.gov (United States)

    2010-07-01

    29 CFR 1915.153 (Labor; Personal Protective Equipment (PPE)), Eye and face protection: "(a) General requirements. (1) The employer shall ensure that ..."; "... protection that incorporates the prescription in its design, unless the employee is protected by eye ..."

  3. Eye tracking a comprehensive guide to methods and measures

    CERN Document Server

    Holmqvist, Kenneth; Andersson, Richard; Dewhurst, Richard; Jarodzka, Halszka; Weijer, Joost van de

    2011-01-01

    We make 3-5 eye movements per second, and these movements are crucial in helping us deal with the vast amounts of information we encounter in our everyday lives. In recent years, thanks to the development of eye tracking technology, there has been a growing interest in monitoring and measuring these movements, with a view to understanding how we attend to and process the visual information we encounter. Eye tracking as a research tool is now more accessible than ever, and is growing in popularity amongst researchers from a whole host of different disciplines. Usability analysts, sports scientists, cognitive psychologists, reading researchers, psycholinguists, neurophysiologists, electrical engineers, and others, all have a vested interest in eye tracking for different reasons. The ability to record eye-movements has helped advance our science and led to technological innovations. However, the growth of eye tracking in recent years has also presented a variety of challenges - in particular the issue of how to d...

  4. Eye tracking detects disconjugate eye movements associated with structural traumatic brain injury and concussion.

    Science.gov (United States)

    Samadani, Uzma; Ritlop, Robert; Reyes, Marleen; Nehrbass, Elena; Li, Meng; Lamm, Elizabeth; Schneider, Julia; Shimunov, David; Sava, Maria; Kolecki, Radek; Burris, Paige; Altomare, Lindsey; Mehmood, Talha; Smith, Theodore; Huang, Jason H; McStay, Christopher; Todd, S Rob; Qian, Meng; Kondziolka, Douglas; Wall, Stephen; Huang, Paul

    2015-04-15

    Disconjugate eye movements have been associated with traumatic brain injury since ancient times. Ocular motility dysfunction may be present in up to 90% of patients with concussion or blast injury. We developed an algorithm for eye tracking in which the Cartesian coordinates of the right and left pupils are tracked over 200 sec and compared to each other as a subject watches a short film clip moving inside an aperture on a computer screen. We prospectively eye tracked 64 normal healthy noninjured control subjects and compared findings to 75 trauma subjects with either a positive head computed tomography (CT) scan (n=13), negative head CT (n=39), or nonhead injury (n=23) to determine whether eye tracking would reveal the disconjugate gaze associated with both structural brain injury and concussion. Tracking metrics were then correlated to the clinical concussion measure Sport Concussion Assessment Tool 3 (SCAT3) in trauma patients. Five out of five measures of horizontal disconjugacy were increased in positive and negative head CT patients relative to noninjured control subjects. Only one of five vertical disconjugacy measures was significantly increased in brain-injured patients relative to controls. Linear regression analysis of all 75 trauma patients demonstrated that three metrics for horizontal disconjugacy negatively correlated with SCAT3 symptom severity score and positively correlated with total Standardized Assessment of Concussion score. Abnormal eye-tracking metrics improved over time toward baseline in brain-injured subjects observed in follow-up. Eye tracking may help quantify the severity of ocular motility disruption associated with concussion and structural brain injury.
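
    The record does not spell out its disconjugacy metrics, but one plausible horizontal measure can be sketched as the variability of the difference between left- and right-pupil horizontal positions over the recording. The sketch below is an illustrative assumption, not the published algorithm, and the simulated signals are invented for the example.

```python
import numpy as np

def horizontal_disconjugacy(left_x, right_x):
    """One illustrative disconjugacy measure: variance of the difference
    between left- and right-pupil horizontal positions over the recording.
    Perfectly conjugate eyes move together, so the difference stays nearly
    constant and the variance is small."""
    left_x = np.asarray(left_x, dtype=float)
    right_x = np.asarray(right_x, dtype=float)
    return np.var(left_x - right_x)

# Simulated 200 s recording at 60 Hz: conjugate vs. mildly disconjugate gaze
t = np.linspace(0, 200, 200 * 60)
target = np.sin(0.5 * t)
conjugate = horizontal_disconjugacy(target, target + 0.01 * np.random.randn(t.size))
disconj = horizontal_disconjugacy(target, 0.8 * target + 0.05 * np.random.randn(t.size))
print(conjugate, disconj)   # the second value should be clearly larger
```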

  5. Eye-tracking research in computer-mediated language learning

    NARCIS (Netherlands)

    Michel, Marije; Smith, Bryan

    2017-01-01

    Though eye-tracking technology has been used in reading research for over 100 years, researchers have only recently begun to use it in studies of computer-assisted language learning (CALL). This chapter provides an overview of eye-tracking research to date, which is relevant to computer-mediated

  6. 29 CFR 1918.101 - Eye and face protection.

    Science.gov (United States)

    2010-07-01

    29 CFR 1918.101 (Labor), Eye and face protection: "... that protective eye and face devices comply with any of the following consensus standards: (A) ANSI Z87 ..."; "... face protection devices that are constructed in accordance with one of the above consensus standards ..."

  7. Real time eye tracking using Kalman extended spatio-temporal context learning

    Science.gov (United States)

    Munir, Farzeen; Minhas, Fayyaz ul Amir Asfar; Jalil, Abdul; Jeon, Moongu

    2017-06-01

    Real time eye tracking has numerous applications in human computer interaction, such as mouse cursor control in a computer system. It is useful for persons with muscular or motion impairments. However, tracking the movement of the eye is complicated by occlusion due to blinking, head movement, screen glare, rapid eye movements, etc. In this work, we present the algorithmic and construction details of a real time eye tracking system. Our proposed system is an extension of Spatio-Temporal context learning through Kalman Filtering. Spatio-Temporal Context Learning offers state-of-the-art accuracy in general object tracking but its performance suffers due to object occlusion. Addition of the Kalman filter allows the proposed method to model the dynamics of the motion of the eye and provide robust eye tracking in cases of occlusion. We demonstrate the effectiveness of this tracking technique by controlling the computer cursor in real time by eye movements.
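
    The Kalman filtering component can be sketched with a standard constant-velocity model for the tracked eye centre. The transition and noise matrices below are illustrative assumptions rather than the paper's parameters; passing no measurement mimics coasting through an occlusion such as a blink.

```python
import numpy as np

dt = 1.0 / 30.0                         # assumed frame interval
F = np.array([[1, 0, dt, 0],            # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0],             # only position is measured
              [0, 1, 0, 0]])
Q = np.eye(4) * 1e-2                    # process noise (illustrative)
R = np.eye(2) * 1.0                     # measurement noise (illustrative)

x = np.zeros(4)
P = np.eye(4) * 10.0

def kalman_step(z):
    """One predict/update cycle; pass z=None when the eye is occluded
    (e.g. during a blink) to coast on the motion model alone."""
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:
        y = np.asarray(z) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
    return x[:2]                        # filtered eye-centre estimate

print(kalman_step([120.0, 80.0]))       # normal frame with a detection
print(kalman_step(None))                # blink: prediction only
```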

  8. Eye-Tracking Study of Complexity in Gas Law Problems

    Science.gov (United States)

    Tang, Hui; Pienta, Norbert

    2012-01-01

    This study, part of a series investigating students' use of online tools to assess problem solving, uses eye-tracking hardware and software to explore the effect of problem difficulty and cognitive processes when students solve gas law word problems. Eye movements are indices of cognition; eye-tracking data typically include the location,…

  9. Evaluation of the Tobii EyeX Eye tracking controller and Matlab toolkit for research.

    Science.gov (United States)

    Gibaldi, Agostino; Vanegas, Mauricio; Bex, Peter J; Maiello, Guido

    2017-06-01

    The Tobii Eyex Controller is a new low-cost binocular eye tracker marketed for integration in gaming and consumer applications. The manufacturers claim that the system was conceived for natural eye gaze interaction, does not require continuous recalibration, and allows moderate head movements. The Controller is provided with an SDK to foster the development of new eye tracking applications. We review the characteristics of the device for its possible use in scientific research. We develop and evaluate an open source Matlab Toolkit that can be employed to interface with the EyeX device for gaze recording in behavioral experiments. The Toolkit provides calibration procedures tailored to both binocular and monocular experiments, as well as procedures to evaluate other eye tracking devices. The observed performance of the EyeX (i.e. accuracy < 0.6°, precision < 0.25°, latency < 50 ms and sampling frequency ≈55 Hz) is sufficient for some classes of research application. The device can be successfully employed to measure fixation parameters, saccadic, smooth pursuit and vergence eye movements. However, the relatively low sampling rate and moderate precision limit the suitability of the EyeX for monitoring micro-saccadic eye movements or for real-time gaze-contingent stimulus control. For these applications, research grade, high-cost eye tracking technology may still be necessary. Therefore, despite its limitations with respect to high-end devices, the EyeX has the potential to further the dissemination of eye tracking technology to a broad audience, and could be a valuable asset in consumer and gaming applications as well as in a subset of basic and clinical research settings.

  10. Sad people are more accurate at expression identification with a smaller own-ethnicity bias than happy people.

    Science.gov (United States)

    Hills, Peter J; Hill, Dominic M

    2017-07-12

    Sad individuals perform more accurately at face identity recognition (Hills, Werno, & Lewis, 2011), possibly because they scan more of the face during encoding. During expression identification tasks, sad individuals do not fixate on the eyes as much as happier individuals (Wu, Pu, Allen, & Pauli, 2012). Fixating on features other than the eyes leads to a reduced own-ethnicity bias (Hills & Lewis, 2006). This background indicates that sad individuals would not view the eyes as much as happy individuals and this would result in improved expression recognition and a reduced own-ethnicity bias. This prediction was tested using an expression identification task, with eye tracking. We demonstrate that sad-induced participants show enhanced expression recognition and a reduced own-ethnicity bias relative to happy-induced participants, due to scanning more facial features. We conclude that mood affects eye movements and face encoding by causing a wider sampling strategy and deeper encoding of facial features diagnostic for expression identification.

  11. Measuring social attention and motivation in autism spectrum disorder using eye-tracking: Stimulus type matters.

    Science.gov (United States)

    Chevallier, Coralie; Parish-Morris, Julia; McVey, Alana; Rump, Keiran M; Sasson, Noah J; Herrington, John D; Schultz, Robert T

    2015-10-01

    Autism Spectrum Disorder (ASD) is characterized by social impairments that have been related to deficits in social attention, including diminished gaze to faces. Eye-tracking studies are commonly used to examine social attention and social motivation in ASD, but they vary in sensitivity. In this study, we hypothesized that the ecological nature of the social stimuli would affect participants' social attention, with gaze behavior during more naturalistic scenes being most predictive of ASD vs. typical development. Eighty-one children with and without ASD participated in three eye-tracking tasks that differed in the ecological relevance of the social stimuli. In the "Static Visual Exploration" task, static images of objects and people were presented; in the "Dynamic Visual Exploration" task, video clips of individual faces and objects were presented side-by-side; in the "Interactive Visual Exploration" task, video clips of children playing with objects in a naturalistic context were presented. Our analyses uncovered a three-way interaction between Task, Social vs. Object Stimuli, and Diagnosis. This interaction was driven by group differences on one task only-the Interactive task. Bayesian analyses confirmed that the other two tasks were insensitive to group membership. In addition, receiver operating characteristic analyses demonstrated that, unlike the other two tasks, the Interactive task had significant classification power. The ecological relevance of social stimuli is an important factor to consider for eye-tracking studies aiming to measure social attention and motivation in ASD. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  12. Information booklet on personal protective equipment: eye and face protection

    International Nuclear Information System (INIS)

    1992-01-01

    In all work places where hazards of various kinds are present and cannot be totally controlled by engineering methods, suitable personal protective equipment (PPE) shall be used. There are several types of eye and face protection devices available in the market, and it is important that employees use the proper type for the particular job. The main classes of eye and face protection devices required for industrial operations are as follows: (a) eye protection devices, which include (i) safety goggles, (ii) safety spectacles, and (iii) safety clip-ons; and (b) face protection devices, which include (i) eye shields, (ii) face shields, and (iii) wire mesh screen guards. Guidelines for selecting appropriate eye and face protection equipment for nuclear installations are given. (M.K.V.). 4 annexures, 1 appendix

  13. Three-Dimensional Eye Tracking in a Surgical Scenario.

    Science.gov (United States)

    Bogdanova, Rositsa; Boulanger, Pierre; Zheng, Bin

    2015-10-01

    Eye tracking has been widely used in studying the eye behavior of surgeons in the past decade. Most eye-tracking data are reported in a 2-dimensional (2D) fashion, and data for describing surgeons' behaviors on stereoperception are often missed. With the introduction of stereoscopes in laparoscopic procedures, there is an increasing need for studying the depth perception of surgeons under 3D image-guided surgery. We developed a new algorithm for the computation of convergence points in stereovision by measuring surgeons' interpupillary distance, the distance to the view target, and the difference between gaze locations of the 2 eyes. To test the feasibility of our new algorithm, we recruited 10 individuals to watch stereograms using binocular disparity and asked them to develop stereoperception using a cross-eyed viewing technique. Participants' eye motions were recorded by the Tobii eye tracker while they performed the trials. Convergence points between normal and stereo-viewing conditions were computed using the developed algorithm. All 10 participants were able to develop stereovision after a short period of training. During stereovision, participants' eye convergence points were 14 ± 1 cm in front of their eyes, which was significantly closer than the convergence points under the normal viewing condition (77 ± 20 cm). By applying our method of calculating convergence points using eye tracking, we were able to elicit the eye movement patterns of human operators between the normal and stereovision conditions. Knowledge from this study can be applied to the design of surgical visual systems, with the goal of improving surgical performance and patient safety. © The Author(s) 2015.
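
    The convergence-point computation can be sketched as a simple ray intersection using the interpupillary distance, the distance to the screen, and the horizontal separation between the two eyes' gaze points. The formula below is one plausible geometric formulation, not necessarily the authors' exact algorithm, and the numbers are illustrative (they roughly reproduce the 77 cm and 14 cm conditions reported).

```python
def convergence_distance(ipd_cm, screen_dist_cm, gaze_left_x_cm, gaze_right_x_cm):
    """Distance from the eyes to the binocular convergence point.

    Geometry sketch (not the paper's exact formulation): the two eyes sit
    ipd_cm apart, the screen is screen_dist_cm away, and each eye's gaze
    lands at a horizontal screen position (in cm, on the same axis for both
    eyes). Intersecting the two gaze rays gives the convergence depth.
    """
    disparity = gaze_left_x_cm - gaze_right_x_cm   # > 0 when the rays cross in front of the screen
    return ipd_cm * screen_dist_cm / (ipd_cm + disparity)

# Normal viewing: both eyes converge roughly on the screen plane
print(convergence_distance(6.0, 77.0, 0.1, 0.0))      # ~76 cm
# Cross-eyed stereogram viewing: rays cross well in front of the screen
print(convergence_distance(6.0, 77.0, 13.5, -13.5))   # ~14 cm
```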

  14. Eye-Tracking Study on Facial Emotion Recognition Tasks in Individuals with High-Functioning Autism Spectrum Disorders

    Science.gov (United States)

    Tsang, Vicky

    2018-01-01

    The eye-tracking experiment was carried out to assess fixation duration and scan paths that individuals with and without high-functioning autism spectrum disorders employed when identifying simple and complex emotions. Participants viewed human photos of facial expressions and decided on the identification of emotion, the negative-positive emotion…

  15. Using Eye-Tracking in Applied Linguistics and Second Language Research

    Science.gov (United States)

    Conklin, Kathy; Pellicer-Sánchez, Ana

    2016-01-01

    With eye-tracking technology the eye is thought to give researchers a window into the mind. Importantly, eye-tracking has significant advantages over traditional online processing measures: chiefly that it allows for more "natural" processing as it does not require a secondary task, and that it provides a very rich moment-to-moment data…

  16. System and Method for Eye Tracking

    DEFF Research Database (Denmark)

    2017-01-01

    A method and system for monitoring the motion of one or both eyes, includes capturing a sequence of overlapping images of a subject's face including an eye and the corresponding non-eye region; identifying a plurality of keypoints in each image; mapping corresponding keypoints in two or more images...... of the sequence; assigning the keypoints to the eye and to the corresponding non-eye region; calculating individual velocities of the corresponding keypoints in the eye and the corresponding non-eye region to obtain a distribution of velocities; extracting at least one velocity measured for the eye and at least...... one velocity measured for the corresponding non-eye region; calculating the eye-in-head velocity for the eye based upon the measured velocity for the eye and the measured velocity for the corresponding non-eye region; and calculating the eye-in-head position based upon the eye- in-head velocity....
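
    The idea of separating eye motion from head motion using keypoint velocities can be sketched as follows. The function and the velocity values are illustrative assumptions, not the patented implementation: eye-region keypoints carry both head and eyeball motion, while non-eye keypoints carry head motion only, so their difference approximates eye-in-head velocity.

```python
import numpy as np

def eye_in_head_velocity(eye_keypoint_velocities, face_keypoint_velocities):
    """Keypoints on the eye move with both the head and the eyeball, while
    keypoints on the surrounding face move with the head only; subtracting
    the two median velocities leaves an estimate of eye-in-head velocity
    (all values in pixels/frame)."""
    eye_v = np.median(np.asarray(eye_keypoint_velocities), axis=0)
    head_v = np.median(np.asarray(face_keypoint_velocities), axis=0)
    return eye_v - head_v

# Hypothetical keypoint velocities: head drifting right, eye also rotating right
eye_pts = [(3.1, 0.2), (2.9, 0.1), (3.0, 0.0)]
face_pts = [(1.0, 0.1), (1.1, 0.0), (0.9, 0.1)]
print(eye_in_head_velocity(eye_pts, face_pts))   # roughly [2.0, 0.0]
```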

  17. Using Genetic Algorithm for Eye Detection and Tracking in Video Sequence

    Directory of Open Access Journals (Sweden)

    Takuya Akashi

    2007-04-01

    Full Text Available We propose a high-speed, size- and orientation-invariant eye tracking method, which can acquire numerical parameters to represent the size and orientation of the eye. In this paper, we discuss the high tolerance to human head movement and the real-time processing that are needed for many applications, such as eye gaze tracking. The generality of the method is also important. We use template matching with a genetic algorithm in order to overcome these problems. A high-speed and accurate tracking scheme using Evolutionary Video Processing for eye detection and tracking is proposed. Usually, a genetic algorithm is unsuitable for real-time processing; however, we achieved real-time processing. The generality of this proposed method is provided by the artificial iris template used. In our simulations, eye tracking accuracy is 97.9%, with an average processing time of 28 milliseconds per frame.

  18. Age differences in accuracy and choosing in eyewitness identification and face recognition.

    Science.gov (United States)

    Searcy, J H; Bartlett, J C; Memon, A

    1999-05-01

    Studies of aging and face recognition show age-related increases in false recognitions of new faces. To explore implications of this false alarm effect, we had young and senior adults perform (1) three eyewitness identification tasks, using both target-present and target-absent lineups, and (2) an old/new recognition task in which a study list of faces was followed by a test including old and new faces, along with conjunctions of old faces. Compared with the young, seniors had lower accuracy and higher choosing rates on the lineups, and they also falsely recognized more new faces on the recognition test. However, after screening for perceptual processing deficits, there was no age difference in false recognition of conjunctions, or in discriminating old faces from conjunctions. We conclude that the false alarm effect generalizes to lineup identification, but does not extend to conjunction faces. The findings are consistent with age-related deficits in recollection of context and relative age invariance in perceptual integrative processes underlying the experience of familiarity.

  19. Eye contact facilitates awareness of faces during interocular suppression

    NARCIS (Netherlands)

    Stein, T.; Senju, A.; Peelen, M.V.; Sterzer, P.

    Eye contact captures attention and receives prioritized visual processing. Here we asked whether eye contact might be processed outside conscious awareness. Faces with direct and averted gaze were rendered invisible using interocular suppression. In two experiments we found that faces with direct

  20. Face Recognition and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Swapnil Vitthal Tathe

    2017-07-01

    Full Text Available Advancement in computer vision technology and the availability of video capturing devices such as surveillance cameras have evoked new video processing applications. Research in video face recognition is mostly biased towards law enforcement applications. Applications include human recognition based on face and iris, human computer interaction, behavior analysis, video surveillance, etc. This paper presents a face tracking framework that is capable of face detection using Haar features, recognition using Gabor feature extraction, matching using a correlation score, and tracking using a Kalman filter. The method has a good recognition rate for real-life videos and robust performance to changes due to illumination, environmental factors, scale, pose and orientation.

  1. Strabismus Recognition Using Eye-Tracking Data and Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Zenghai Chen

    2018-01-01

    Full Text Available Strabismus is one of the most common vision diseases, and it can cause amblyopia and even permanent vision loss. Timely diagnosis is crucial for treating strabismus well. In contrast to manual diagnosis, automatic recognition can significantly reduce labor cost and increase diagnosis efficiency. In this paper, we propose to recognize strabismus using eye-tracking data and convolutional neural networks. In particular, an eye tracker is first exploited to record a subject’s eye movements. A gaze deviation (GaDe) image is then proposed to characterize the subject’s eye-tracking data according to the accuracies of gaze points. The GaDe image is fed to a convolutional neural network (CNN) that has been trained on a large image database called ImageNet. The outputs of the fully connected layers of the CNN are used as the GaDe image’s features for strabismus recognition. A dataset containing eye-tracking data of both strabismic subjects and normal subjects is established for experiments. Experimental results demonstrate that the natural image features can be well transferred to represent eye-tracking data, and strabismus can be effectively recognized by our proposed method.
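
    The transfer-learning step can be sketched with an ImageNet-pretrained network from torchvision, taking a fully connected layer's activations as the GaDe image descriptor. The choice of VGG16, the layer used, the file name and the preprocessing are assumptions for illustration and not the paper's exact architecture; the snippet also assumes a recent torchvision with the weights API.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load an ImageNet-pretrained CNN and keep everything up to the first fully
# connected layer, whose activations serve as the image descriptor.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.eval()
feature_extractor = torch.nn.Sequential(
    model.features, model.avgpool, torch.nn.Flatten(),
    *list(model.classifier.children())[:2]      # first FC layer + ReLU
)

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

gade = Image.open("gade_subject01.png").convert("RGB")   # hypothetical GaDe image
with torch.no_grad():
    features = feature_extractor(preprocess(gade).unsqueeze(0))
print(features.shape)   # e.g. torch.Size([1, 4096]); feed this to a downstream classifier
```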

  2. Scanning mid-IR laser apparatus with eye tracking for refractive surgery

    Science.gov (United States)

    Telfair, William B.; Yoder, Paul R., Jr.; Bekker, Carsten; Hoffman, Hanna J.; Jensen, Eric F.

    1999-06-01

    A robust, real-time, dynamic eye tracker has been integrated with the short pulse mid-infrared laser scanning delivery system previously described. This system employs a Q- switched Nd:YAG laser pumped optical parametric oscillator operating at 2.94 micrometers. Previous ablation studies on human cadaver eyes and in-vivo cat eyes demonstrated very smooth ablations with extremely low damage levels similar to results with an excimer. A 4-month healing study with cats indicated no adverse healing effects. In order to treat human eyes, the tracker is required because the eyes move during the procedure due to both voluntary and involuntary motions such as breathing, heartbeat, drift, loss of fixation, saccades and microsaccades. Eye tracking techniques from the literature were compared. A limbus tracking system was best for this application. Temporal and spectral filtering techniques were implemented to reduce tracking errors, reject stray light, and increase signal to noise ratio. The expanded-capability system (IRVision AccuScan 2000 Laser System) has been tested in the lab on simulated eye targets, glass eyes, cadaver eyes, and live human subjects. Circular targets ranging from 10-mm to 14-mm diameter were successfully tracked. The tracker performed beyond expectations while the system performed myopic photorefractive keratectomy procedures on several legally blind human subjects.

  3. USING EYE TRACKING TO MEASURE ONLINE INTERACTIVITY: A THEORETICAL FRAMEWORK

    Directory of Open Access Journals (Sweden)

    Adriana-Emilia ROBU

    2013-06-01

    Full Text Available Notwithstanding that each and every company, even the sweetshop around the corner, has found its way to the Internet, some customers still hesitate to shop online, or shop from one site and ignore another. In order to build effective online communication between the participants, one of the most important factors is interactivity. In the last decade it has received extensive attention in the marketing literature, but few studies have examined new methods to measure it. Eye tracking technology has been broadly used in the cognitive sciences. The purpose of this study is to investigate the existing literature in order to give insights into eye tracking methodology for measuring online interactivity. It also describes eye tracking technology in general, extracts various examples with different applications from the eye tracking research field, highlights its importance for analyzing online consumer behavior with examples from various studies, and identifies the key methodological difficulties. Finally, this work is of value for future studies that consider eye tracking technology in online interactivity research, and it is also relevant for marketers seeking to enhance online interactive interfaces and web or mobile applications.

  4. Eye Contact Facilitates Awareness of Faces during Interocular Suppression

    Science.gov (United States)

    Stein, Timo; Senju, Atsushi; Peelen, Marius V.; Sterzer, Philipp

    2011-01-01

    Eye contact captures attention and receives prioritized visual processing. Here we asked whether eye contact might be processed outside conscious awareness. Faces with direct and averted gaze were rendered invisible using interocular suppression. In two experiments we found that faces with direct gaze overcame such suppression more rapidly than…

  5. Unaware person recognition from the body when face identification fails.

    Science.gov (United States)

    Rice, Allyson; Phillips, P Jonathon; Natu, Vaidehi; An, Xiaobo; O'Toole, Alice J

    2013-11-01

    How does one recognize a person when face identification fails? Here, we show that people rely on the body but are unaware of doing so. State-of-the-art face-recognition algorithms were used to select images of people with almost no useful identity information in the face. Recognition of the face alone in these cases was near chance level, but recognition of the person was accurate. Accuracy in identifying the person without the face was identical to that in identifying the whole person. Paradoxically, people reported relying heavily on facial features over noninternal face and body features in making their identity decisions. Eye movements indicated otherwise, with gaze duration and fixations shifting adaptively toward the body and away from the face when the body was a better indicator of identity than the face. This shift occurred with no cost to accuracy or response time. Human identity processing may be partially inaccessible to conscious awareness.

  6. Markerless 3D Face Tracking

    DEFF Research Database (Denmark)

    Walder, Christian; Breidt, Martin; Bulthoff, Heinrich

    2009-01-01

    We present a novel algorithm for the markerless tracking of deforming surfaces such as faces. We acquire a sequence of 3D scans along with color images at 40Hz. The data is then represented by implicit surface and color functions, using a novel partition-of-unity type method of efficiently...... the scanned surface, using the variation of both shape and color as features in a dynamic energy minimization problem. Our prototype system yields high-quality animated 3D models in correspondence, at a rate of approximately twenty seconds per timestep. Tracking results for faces and other objects...

  7. The Effect of Fearful Expressions on Multiple Face Tracking

    Directory of Open Access Journals (Sweden)

    Hongjun Jin

    2015-07-01

    Full Text Available How does the visual system realize dynamic tracking? This topic has become popular within cognitive science in recent years. The classical theory argues that multiple object tracking is accomplished via pre-attention visual indexes as part of a cognitively impenetrable low-level visual system. The present research aimed to investigate whether and how tracking processes are influenced by facial expressions that convey abundant social information about one’s mental state and situated environment. The results showed that participants tracked fearful faces more effectively than neutral faces. However, this advantage was only present under the low-attentional load condition, and distractor face emotion did not impact tracking performance. These findings imply that visual tracking is not driven entirely by low-level vision and encapsulated by high-level representations; rather, that facial expressions, a kind of social information, are able to influence dynamic tracking. Furthermore, the effect of fearful expressions on multiple face tracking is mediated by the availability of attentional resources.

  8. Research on Eye Movement Tracking in ESL Reading

    Directory of Open Access Journals (Sweden)

    Wenlian ZHAN

    2014-06-01

    Full Text Available Eye movement behavior in reading can reflect on-line cognitive processes. Through on-line measurement of eye movements under relatively natural reading conditions, data on the reader's eye movements through the text can be obtained as information is processed, which helps to reveal the internal cognitive mechanisms of reading. With eye trackers becoming increasingly intelligent, mass-produced and portable, a great number of studies on eye movement tracking now exist, but studies on eye movement features in ESL reading are rare. In such circumstances, this paper mainly illustrates eye movement patterns, the relationship between eye movement and perceptual processing, and eye movement control in ESL reading.

  9. Fusing Eye-gaze and Speech Recognition for Tracking in an Automatic Reading Tutor

    DEFF Research Database (Denmark)

    Rasmussen, Morten Højfeldt; Tan, Zheng-Hua

    2013-01-01

    In this paper we present a novel approach for automatically tracking the reading progress using a combination of eye-gaze tracking and speech recognition. The two are fused by first generating word probabilities based on eye-gaze information and then using these probabilities to augment the langu......In this paper we present a novel approach for automatically tracking the reading progress using a combination of eye-gaze tracking and speech recognition. The two are fused by first generating word probabilities based on eye-gaze information and then using these probabilities to augment...

  10. The medical eye: Conclusions from eye tracking research on expertise development in medicine

    NARCIS (Netherlands)

    Jarodzka, Halszka; Jaarsma, Thomas; Dewhurst, Richard; Boshuizen, Els

    2013-01-01

    Jarodzka, H., Jaarsma, T., Dewhurst, R., & Boshuizen, H. P. A. (2012, October). The medical eye: Conclusions from eye tracking research on expertise development in medicine. Paper presented at the New tools and practices for seeing and learning in medicine ’12, University of Turku, Turku, Finland.

  11. Face scanning in autism spectrum disorder (ASD) and attention deficit/hyperactivity disorder (ADHD): human versus dog face scanning

    Directory of Open Access Journals (Sweden)

    Mauro eMuszkat

    2015-10-01

    Full Text Available This study used eye-tracking to explore attention allocation to human and dog faces in children and adolescents with autism spectrum disorder (ASD), attention deficit/hyperactivity disorder (ADHD), and typical development (TD). Significant differences were found among the three groups. TD participants looked longer at the eyes than ASD and ADHD ones, irrespective of the faces presented. In spite of this difference, groups were similar in that they looked more to the eyes than to the mouth areas of interest. The ADHD group gazed longer at the mouth region than the other groups. Furthermore, groups were also similar in that they looked more to the dog than to the human faces. The eye tracking technology proved to be useful for behavioral investigation in different neurodevelopmental disorders.

  12. Eye/head tracking technology to improve HCI with iPad applications.

    Science.gov (United States)

    Lopez-Basterretxea, Asier; Mendez-Zorrilla, Amaia; Garcia-Zapirain, Begoña

    2015-01-22

    In order to improve human computer interaction (HCI) for people with special needs, this paper presents an alternative form of interaction, which uses the iPad's front camera and eye/head tracking technology. With this functional capability operating in the background, the user can control already developed or new applications for the iPad by moving their eyes and/or head. There are many techniques currently used to detect facial features, such as the eyes or even the face itself. Open source libraries exist for this purpose, such as OpenCV, which enable very reliable and accurate detection algorithms, such as Haar Cascade, to be applied using very high-level programming. All processing is undertaken in real time, and it is therefore important to pay close attention to the use of the limited resources (processing capacity) of devices such as the iPad. The system was validated in tests involving 22 users of different ages and characteristics (people with dark and light-colored eyes and with/without glasses). These tests were performed to assess user/device interaction and to ascertain whether it works properly. The system obtained an accuracy of between 60% and 100% in the three test exercises taken into consideration. The results showed that the Haar Cascade had a significant effect by detecting faces in 100% of cases, unlike the eyes and the pupil, where interference (light and shade) reduced effectiveness. In addition to ascertaining the effectiveness of the system via these exercises, the demo application has also helped to show that user constraints need not affect the enjoyment and use of a particular type of technology. In short, the results obtained are encouraging and these systems may continue to be developed if extended and updated in the future.
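
    The Haar-cascade detection step mentioned in the record can be sketched with the cascades bundled with OpenCV. The webcam index and detection parameters are illustrative assumptions, and this desktop sketch does not reproduce the iPad pipeline itself.

```python
import cv2

# Cascades shipped with the opencv-python package
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)                       # assumed default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi = gray[y:y + h, x:x + w]            # search for eyes inside the face only
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(frame, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
    cv2.imshow("haar demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```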

  13. Eye/Head Tracking Technology to Improve HCI with iPad Applications

    Directory of Open Access Journals (Sweden)

    Asier Lopez-Basterretxea

    2015-01-01

    Full Text Available In order to improve human computer interaction (HCI) for people with special needs, this paper presents an alternative form of interaction, which uses the iPad’s front camera and eye/head tracking technology. With this functional capability operating in the background, the user can control already developed or new applications for the iPad by moving their eyes and/or head. There are many techniques currently used to detect facial features, such as the eyes or even the face itself. Open source libraries exist for this purpose, such as OpenCV, which enable very reliable and accurate detection algorithms, such as Haar Cascade, to be applied using very high-level programming. All processing is undertaken in real time, and it is therefore important to pay close attention to the use of the limited resources (processing capacity) of devices such as the iPad. The system was validated in tests involving 22 users of different ages and characteristics (people with dark and light-colored eyes and with/without glasses). These tests were performed to assess user/device interaction and to ascertain whether it works properly. The system obtained an accuracy of between 60% and 100% in the three test exercises taken into consideration. The results showed that the Haar Cascade had a significant effect by detecting faces in 100% of cases, unlike the eyes and the pupil, where interference (light and shade) reduced effectiveness. In addition to ascertaining the effectiveness of the system via these exercises, the demo application has also helped to show that user constraints need not affect the enjoyment and use of a particular type of technology. In short, the results obtained are encouraging and these systems may continue to be developed if extended and updated in the future.

  14. Screening for Dyslexia Using Eye Tracking during Reading.

    Science.gov (United States)

    Nilsson Benfatto, Mattias; Öqvist Seimyr, Gustaf; Ygge, Jan; Pansell, Tony; Rydberg, Agneta; Jacobson, Christer

    2016-01-01

    Dyslexia is a neurodevelopmental reading disability estimated to affect 5-10% of the population. While there is yet no full understanding of the cause of dyslexia, or agreement on its precise definition, it is certain that many individuals suffer persistent problems in learning to read for no apparent reason. Although it is generally agreed that early intervention is the best form of support for children with dyslexia, there is still a lack of efficient and objective means to help identify those at risk during the early years of school. Here we show that it is possible to identify 9-10 year old individuals at risk of persistent reading difficulties by using eye tracking during reading to probe the processes that underlie reading ability. In contrast to current screening methods, which rely on oral or written tests, eye tracking does not depend on the subject to produce some overt verbal response and thus provides a natural means to objectively assess the reading process as it unfolds in real-time. Our study is based on a sample of 97 high-risk subjects with early identified word decoding difficulties and a control group of 88 low-risk subjects. These subjects were selected from a larger population of 2165 school children attending second grade. Using predictive modeling and statistical resampling techniques, we develop classification models from eye tracking records less than one minute in duration and show that the models are able to differentiate high-risk subjects from low-risk subjects with high accuracy. Although dyslexia is fundamentally a language-based learning disability, our results suggest that eye movements in reading can be highly predictive of individual reading ability and that eye tracking can be an efficient means to identify children at risk of long-term reading difficulties.
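
    The predictive-modelling step can be sketched with a standard cross-validated classifier over per-subject eye-movement features. The features, the model and the simulated data below are illustrative assumptions; the study's actual feature set and resampling procedure are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Each subject is summarised by a few reading eye-movement features and
# classified as high- vs. low-risk. The feature values are simulated.
rng = np.random.default_rng(0)
n_high, n_low = 97, 88
# columns: mean fixation duration (ms), fixations per word, regression rate
X_high = rng.normal([260, 1.6, 0.30], [40, 0.3, 0.08], size=(n_high, 3))
X_low = rng.normal([210, 1.1, 0.15], [35, 0.2, 0.05], size=(n_low, 3))
X = np.vstack([X_high, X_low])
y = np.array([1] * n_high + [0] * n_low)   # 1 = high risk, 0 = low risk

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```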

  15. Screening for Dyslexia Using Eye Tracking during Reading.

    Directory of Open Access Journals (Sweden)

    Mattias Nilsson Benfatto

    Full Text Available Dyslexia is a neurodevelopmental reading disability estimated to affect 5-10% of the population. While there is yet no full understanding of the cause of dyslexia, or agreement on its precise definition, it is certain that many individuals suffer persistent problems in learning to read for no apparent reason. Although it is generally agreed that early intervention is the best form of support for children with dyslexia, there is still a lack of efficient and objective means to help identify those at risk during the early years of school. Here we show that it is possible to identify 9-10 year old individuals at risk of persistent reading difficulties by using eye tracking during reading to probe the processes that underlie reading ability. In contrast to current screening methods, which rely on oral or written tests, eye tracking does not depend on the subject to produce some overt verbal response and thus provides a natural means to objectively assess the reading process as it unfolds in real-time. Our study is based on a sample of 97 high-risk subjects with early identified word decoding difficulties and a control group of 88 low-risk subjects. These subjects were selected from a larger population of 2165 school children attending second grade. Using predictive modeling and statistical resampling techniques, we develop classification models from eye tracking records less than one minute in duration and show that the models are able to differentiate high-risk subjects from low-risk subjects with high accuracy. Although dyslexia is fundamentally a language-based learning disability, our results suggest that eye movements in reading can be highly predictive of individual reading ability and that eye tracking can be an efficient means to identify children at risk of long-term reading difficulties.

  16. An overview of how eye tracking is used in communication research

    NARCIS (Netherlands)

    Bol, N.; Boerman, S.C.; Romano Bergstrom, J.C.; Kruikemeier, S.; Antona, M.; Stephanidis, C.

    2016-01-01

    Eye tracking gives communication scholars the opportunity to move beyond self-reported measures by examining more precisely how much visual attention is paid to information. However, we lack insight into how eye-tracking data is used in communication research. This literature review provides an

  17. Systems and methods of eye tracking calibration

    DEFF Research Database (Denmark)

    2014-01-01

    Methods and systems to facilitate eye tracking control calibration are provided. One or more objects are displayed on a display of a device, where the one or more objects are associated with a function unrelated to a calculation of one or more calibration parameters. The one or more calibration parameters relate to a calibration of a calculation of gaze information of a user of the device, where the gaze information indicates where the user is looking. While the one or more objects are displayed, eye movement information associated with the user is determined, which indicates eye movement of one or more eye features associated with at least one eye of the user. The eye movement information is associated with a first object location of the one or more objects. The one or more calibration parameters are calculated based on the first object location being associated with the eye movement information.
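    For illustration only: one common way to compute calibration parameters from matched pairs of raw eye-feature coordinates and known on-screen object locations is a least-squares affine fit. The affine model and the sample coordinates below are assumptions for the sketch, not the specifics of the patented method.

```python
# Hedged sketch: estimate calibration parameters as an affine map from raw
# eye-feature coordinates to on-screen object locations via least squares.
import numpy as np

def fit_affine_calibration(eye_xy, screen_xy):
    """eye_xy, screen_xy: (N, 2) arrays of matched calibration samples."""
    ones = np.ones((eye_xy.shape[0], 1))
    A = np.hstack([eye_xy, ones])          # rows are [x, y, 1]
    params, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)
    return params                          # (3, 2) affine parameters

def apply_calibration(params, eye_xy):
    ones = np.ones((eye_xy.shape[0], 1))
    return np.hstack([eye_xy, ones]) @ params

# Example: eye samples collected while the user looked at displayed objects.
eye = np.array([[0.1, 0.2], [0.4, 0.2], [0.1, 0.6], [0.5, 0.7]])
screen = np.array([[100, 150], [500, 160], [120, 600], [620, 700]])
P = fit_affine_calibration(eye, screen)
print(apply_calibration(P, eye).round(1))
```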

  18. Long-range eye tracking: A feasibility study

    Energy Technology Data Exchange (ETDEWEB)

    Jayaweera, S.K.; Lu, Shin-yee

    1994-08-24

    The design considerations for a long-range, Purkinje-effect-based video tracking system using current technology are presented. Past work, current experiments, and future directions are thoroughly discussed, with an emphasis on digital signal processing techniques and obstacles. It has been determined that while a robust, efficient, long-range, and non-invasive eye tracking system will be difficult to develop, such a project is indeed feasible.

  19. Neural correlates of the eye dominance effect in human face perception: the left-visual-field superiority for faces revisited.

    Science.gov (United States)

    Jung, Wookyoung; Kang, Joong-Gu; Jeon, Hyeonjin; Shim, Miseon; Sun Kim, Ji; Leem, Hyun-Sung; Lee, Seung-Hwan

    2017-08-01

    Faces are processed best when they are presented in the left visual field (LVF), a phenomenon known as LVF superiority. Although one eye contributes more when perceiving faces, it is unclear how the dominant eye (DE), the eye we unconsciously use when performing a monocular task, affects face processing. Here, we examined the influence of the DE on the LVF superiority for faces using event-related potentials. Twenty left-eye-dominant (LDE group) and 23 right-eye-dominant (RDE group) participants performed the experiments. Face stimuli were randomly presented in the LVF or right visual field (RVF). The RDE group exhibited significantly larger N170 amplitudes compared with the LDE group. Faces presented in the LVF elicited N170 amplitudes that were significantly more negative in the RDE group than they were in the LDE group, whereas the amplitudes elicited by stimuli presented in the RVF were equivalent between the groups. The LVF superiority was maintained in the RDE group but not in the LDE group. Our results provide the first neural evidence of the DE's effects on the LVF superiority for faces. We propose that the RDE may be more biologically specialized for face processing. © The Author (2017). Published by Oxford University Press.

  20. How a hat may affect 3-month-olds' recognition of a face: an eye-tracking study.

    Science.gov (United States)

    Bulf, Hermann; Valenza, Eloisa; Turati, Chiara

    2013-01-01

    Recent studies have shown that infants' face recognition rests on a robust face representation that is resilient to a variety of facial transformations such as rotations in depth, motion, occlusion or deprivation of inner/outer features. Here, we investigated whether 3-month-old infants' ability to represent the invariant aspects of a face is affected by the presence of an external add-on element, i.e. a hat. Using a visual habituation task, three experiments were carried out in which face recognition was investigated by manipulating the presence/absence of a hat during face encoding (i.e. habituation phase) and face recognition (i.e. test phase). An eye-tracker system was used to record the time infants spent looking at face-relevant information compared to the hat. The results showed that infants' face recognition was not affected by the presence of the external element when the type of the hat did not vary between the habituation and test phases, and when both the novel and the familiar face wore the same hat during the test phase (Experiment 1). Infants' ability to recognize the invariant aspects of a face was preserved also when the hat was absent in the habituation phase and the same hat was shown only during the test phase (Experiment 2). Conversely, when the novel face identity competed with a novel hat, the hat triggered the infants' attention, interfering with the recognition process and preventing the infants' preference for the novel face during the test phase (Experiment 3). Findings from the current study shed light on how faces and objects are processed when they are simultaneously presented in the same visual scene, contributing to an understanding of how infants respond to the multiple and composite information available in their surrounding environment.

  1. An eye tracking study of bloodstain pattern analysts during pattern classification.

    Science.gov (United States)

    Arthur, R M; Hoogenboom, J; Green, R D; Taylor, M C; de Bruin, K G

    2018-05-01

    Bloodstain pattern analysis (BPA) is the forensic discipline concerned with the classification and interpretation of bloodstains and bloodstain patterns at the crime scene. At present, it is unclear exactly which stain or pattern properties and their associated values are most relevant to analysts when classifying a bloodstain pattern. Eye tracking technology has been widely used to investigate human perception and cognition. Its application to forensics, however, is limited. This is the first study to use eye tracking as a tool for gaining access to the mindset of the bloodstain pattern expert. An eye tracking method was used to follow the gaze of 24 bloodstain pattern analysts during an assigned task of classifying a laboratory-generated test bloodstain pattern. With the aid of an automated image-processing methodology, the properties of selected features of the pattern were quantified leading to the delineation of areas of interest (AOIs). Eye tracking data were collected for each AOI and combined with verbal statements made by analysts after the classification task to determine the critical range of values for relevant diagnostic features. Eye-tracking data indicated that there were four main regions of the pattern that analysts were most interested in. Within each region, individual elements or groups of elements that exhibited features associated with directionality, size, colour and shape appeared to capture the most interest of analysts during the classification task. The study showed that the eye movements of trained bloodstain pattern experts and their verbal descriptions of a pattern were well correlated.

  2. Suppression of Face Perception during Saccadic Eye Movements

    Directory of Open Access Journals (Sweden)

    Mehrdad Seirafi

    2014-01-01

    Full Text Available Lack of awareness of a stimulus briefly presented during a saccadic eye movement is known as saccadic omission. Studying the reduced visibility of visual stimuli around the time of a saccade—known as saccadic suppression—is a key step in investigating saccadic omission. To date, almost all studies have focused on the reduced visibility of simple stimuli such as flashes and bars. The extension of the results from simple stimuli to more complex objects has been neglected. In two experimental tasks, we measured the subjective and objective awareness of briefly presented face stimuli during saccadic eye movements. In the first task, we measured the subjective awareness of the visual stimuli and showed that in most of the trials there is no conscious awareness of the faces. In the second task, we measured objective sensitivity in a two-alternative forced choice (2AFC) face detection task, which demonstrated chance-level performance. Here, we provide the first evidence of complete suppression of complex visual stimuli during saccadic eye movements.
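    For illustration only: "chance-level performance" in a 2AFC task is typically checked against the 50% guessing rate, for example with a binomial test. The trial counts below are invented for the sketch and are not the study's data.

```python
# Hedged sketch: test whether 2AFC face-detection accuracy differs from the
# 50% chance level with a binomial test; the trial counts are made up.
from scipy.stats import binomtest

n_trials = 200          # hypothetical number of intra-saccadic 2AFC trials
n_correct = 104         # hypothetical number of correct responses

result = binomtest(n_correct, n_trials, p=0.5, alternative='two-sided')
print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.3f}")
```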

  3. The µ-opioid system promotes visual attention to faces and eyes.

    Science.gov (United States)

    Chelnokova, Olga; Laeng, Bruno; Løseth, Guro; Eikemo, Marie; Willoch, Frode; Leknes, Siri

    2016-12-01

    Paying attention to others' faces and eyes is a cornerstone of human social behavior. The µ-opioid receptor (MOR) system, central to social reward-processing in rodents and primates, has been proposed to mediate the capacity for affiliative reward in humans. We assessed the role of the human MOR system in visual exploration of faces and eyes of conspecifics. Thirty healthy males received a novel, bidirectional battery of psychopharmacological treatment (an MOR agonist, a non-selective opioid antagonist, or placebo, on three separate days). Eye-movements were recorded while participants viewed facial photographs. We predicted that the MOR system would promote visual exploration of faces, and hypothesized that MOR agonism would increase, whereas antagonism would decrease, overt attention to the information-rich eye region. The expected linear effect of MOR manipulation on visual attention to the stimuli was observed, such that MOR agonism increased while antagonism decreased visual exploration of faces and overt attention to the eyes. The observed effects suggest that the human MOR system promotes overt visual attention to socially significant cues, in line with theories linking reward value to gaze control and target selection. Enhanced attention to others' faces and eyes represents a putative behavioral mechanism through which the human MOR system promotes social interest. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  4. Can human eyes prevent perceptual narrowing for monkey faces in human infants?

    Science.gov (United States)

    Damon, Fabrice; Bayet, Laurie; Quinn, Paul C; Hillairet de Boisferon, Anne; Méary, David; Dupierrix, Eve; Lee, Kang; Pascalis, Olivier

    2015-07-01

    Perceptual narrowing has been observed in human infants for monkey faces: 6-month-olds can discriminate between them, whereas older infants from 9 months of age display difficulty discriminating between them. The difficulty infants from 9 months have processing monkey faces has not been clearly identified. It could be due to the structural characteristics of monkey faces, particularly the key facial features that differ from human faces. The current study aimed to investigate whether the information conveyed by the eyes is of importance. We examined whether the presence of Caucasian human eyes in monkey faces allows recognition to be maintained in 6-month-olds and facilitates recognition in 9- and 12-month-olds. Our results revealed that the presence of human eyes in monkey faces maintains recognition for those faces at 6 months of age and partially facilitates recognition of those faces at 9 months of age, but not at 12 months of age. The findings are interpreted in the context of perceptual narrowing and suggest that the attenuation of processing of other-species faces is not reversed by the presence of human eyes. © 2015 Wiley Periodicals, Inc.

  5. On Biometrics With Eye Movements.

    Science.gov (United States)

    Zhang, Youming; Juhola, Martti

    2017-09-01

    Eye movements are a relatively novel data source for biometric identification. When video cameras applied to eye tracking become smaller and more efficient, this data source could offer interesting opportunities for the development of eye movement biometrics. In this paper, we study primarily biometric identification, seen as a multi-class classification task, and secondarily biometric verification, considered as binary classification. Our research is based on saccadic eye movement signal measurements from 109 young subjects. In order to test the measured data, we use a procedure of biometric identification according to the one-versus-one (subject) principle. In a development from our previous research, which also involved biometric verification based on saccadic eye movements, we now apply another eye movement tracker device with a higher sampling frequency of 250 Hz. The results obtained are good, with correct identification rates of 80-90% at best.
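    For illustration only: multi-class identification under a one-versus-one scheme can be sketched with an SVM, which decomposes the problem into pairwise classifiers internally. The saccade features and the synthetic data below are assumptions, not the authors' measurements or exact classifier.

```python
# Hedged sketch: multi-class biometric identification from saccade features
# using an SVM, which follows the one-versus-one scheme internally.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_subjects, samples_per_subject = 109, 20
# Placeholder saccade features: e.g. peak velocity, amplitude, duration, latency.
X = rng.normal(size=(n_subjects * samples_per_subject, 4))
y = np.repeat(np.arange(n_subjects), samples_per_subject)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
clf = SVC(kernel='rbf', decision_function_shape='ovo').fit(X_tr, y_tr)
print(f"identification rate: {clf.score(X_te, y_te):.2f}")
```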

  6. Face pose tracking using the four-point algorithm

    Science.gov (United States)

    Fung, Ho Yin; Wong, Kin Hong; Yu, Ying Kin; Tsui, Kwan Pang; Kam, Ho Chuen

    2017-06-01

    In this paper, we have developed an algorithm to track the pose of a human face robustly and efficiently. Face pose estimation is very useful in many applications such as building virtual reality systems and creating an alternative input method for the disabled. Firstly, we have modified a face detection toolbox called DLib for the detection of a face in front of a camera. The detected face features are passed to a pose estimation method, known as the four-point algorithm, for pose computation. The theory applied and the technical problems encountered during system development are discussed in the paper. It is demonstrated that the system is able to track the pose of a face in real time using a consumer grade laptop computer.
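    For illustration only: pose from four point correspondences can be sketched with a perspective-n-point solver; OpenCV's P3P/AP3P variants take exactly four points. The 3D landmark coordinates, the 2D detections, and the camera intrinsics below are rough assumed values, and this is not necessarily the paper's exact four-point algorithm.

```python
# Hedged sketch: recover head pose from four facial landmarks with a PnP solver.
import numpy as np
import cv2

# Approximate 3D positions (mm) of four landmarks in a generic head model.
model_pts = np.array([[-30.0,  0.0, -30.0],   # left eye outer corner
                      [ 30.0,  0.0, -30.0],   # right eye outer corner
                      [  0.0, 30.0,   0.0],   # nose tip
                      [  0.0, 75.0, -20.0]],  # chin
                     dtype=np.float64)

# Corresponding 2D detections (pixels), e.g. from a DLib landmark detector.
image_pts = np.array([[250., 200.], [350., 205.], [300., 260.], [302., 330.]])

f, cx, cy = 800.0, 320.0, 240.0            # assumed focal length and principal point
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None,
                              flags=cv2.SOLVEPNP_AP3P)
R, _ = cv2.Rodrigues(rvec)                 # rotation matrix of the head pose
print(ok, np.round(R, 3), tvec.ravel())
```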

  7. Impact of eye detection error on face recognition performance

    NARCIS (Netherlands)

    Dutta, A.; Günther, Manuel; El Shafey, Laurent; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2015-01-01

    The locations of the eyes are the most commonly used features to perform face normalisation (i.e. alignment of facial features), which is an essential preprocessing stage of many face recognition systems. In this study, the authors study the sensitivity of open source implementations of five face

  8. The effect of emotionally valenced eye region images on visuocortical processing of surprised faces.

    Science.gov (United States)

    Li, Shuaixia; Li, Ping; Wang, Wei; Zhu, Xiangru; Luo, Wenbo

    2018-05-01

    In this study, we presented pictorial representations of happy, neutral, and fearful expressions projected in the eye regions to determine whether the eye region alone is sufficient to produce a context effect. Participants were asked to judge the valence of surprised faces that had been preceded by a picture of an eye region. Behavioral results showed that affective ratings of surprised faces were context dependent. Prime-related ERPs with presentation of happy eyes elicited a larger P1 than those for neutral and fearful eyes, likely due to the recognition advantage provided by a happy expression. Target-related ERPs showed that surprised faces in the context of fearful and happy eyes elicited dramatically larger C1 than those in the neutral context, which reflected the modulation by predictions during the earliest stages of face processing. There were larger N170 with neutral and fearful eye contexts compared to the happy context, suggesting faces were being integrated with contextual threat information. The P3 component exhibited enhanced brain activity in response to faces preceded by happy and fearful eyes compared with neutral eyes, indicating motivated attention processing may be involved at this stage. Altogether, these results indicate for the first time that the influence of isolated eye regions on the perception of surprised faces involves preferential processing at the early stages and elaborate processing at the late stages. Moreover, higher cognitive processes such as predictions and attention can modulate face processing from the earliest stages in a top-down manner. © 2017 Society for Psychophysiological Research.

  9. Understanding eye movements in face recognition using hidden Markov models.

    Science.gov (United States)

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2014-09-16

    We use a hidden Markov model (HMM) based approach to analyze eye movement data in face recognition. HMMs are statistical models that are specialized in handling time-series data. We conducted a face recognition task with Asian participants, and model each participant's eye movement pattern with an HMM, which summarized the participant's scan paths in face recognition with both regions of interest and the transition probabilities among them. By clustering these HMMs, we showed that participants' eye movements could be categorized into holistic or analytic patterns, demonstrating significant individual differences even within the same culture. Participants with the analytic pattern had longer response times, but did not differ significantly in recognition accuracy from those with the holistic pattern. We also found that correct and wrong recognitions were associated with distinctive eye movement patterns; the difference between the two patterns lies in the transitions rather than locations of the fixations alone. © 2014 ARVO.
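    For illustration only: a Gaussian HMM fitted to fixation positions yields state centres (playing the role of regions of interest) and transition probabilities of the kind described above. This sketch uses the third-party hmmlearn package on synthetic data; it is not the authors' implementation.

```python
# Hedged sketch: model one participant's fixation sequences with a Gaussian HMM.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(2)
# Each row is one fixation's (x, y) position; `lengths` marks trial boundaries.
fixations = rng.normal(loc=[0.5, 0.4], scale=0.1, size=(60, 2))
lengths = [10, 10, 10, 10, 10, 10]          # six trials of ten fixations each

model = hmm.GaussianHMM(n_components=3, covariance_type='diag',
                        n_iter=100, random_state=0)
model.fit(fixations, lengths)
print("ROI-like state centres:\n", model.means_.round(2))
print("Transition probabilities:\n", model.transmat_.round(2))
```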

  10. Eye tracking to explore attendance in health-state descriptions.

    Directory of Open Access Journals (Sweden)

    Anna Selivanova

    Full Text Available A crucial assumption in health valuation methods is that respondents pay equal attention to all information components presented in the response task. So far, there is no solid evidence that respondents are fulfilling this condition. The aim of our study is to explore the attendance to various information cues presented in the discrete choice (DC) response tasks. Eye tracking was used to study the eye movements and fixations on specific information areas. This was done for seven DC response tasks comprising health-state descriptions. A sample of 10 respondents participated in the study. Videos of their eye movements were recorded and are presented graphically. Frequencies were computed for length of fixation and number of fixations, so differences in attendance were demonstrated for particular attributes in the tasks. All respondents completed the survey. Respondents were fixating on the left-sided health-state descriptions slightly longer than on the right-sided. Fatigue was not observed, as the time spent did not decrease in the final response tasks. The time spent on the tasks depended on the difficulty of the task and the amount of information presented. Eye tracking proved to be a feasible method to study the process of paying attention and fixating on health-state descriptions in the DC response tasks. Eye tracking facilitates the investigation of whether respondents fully read the information in health descriptions or whether they ignore particular elements.

  11. Cable-driven elastic parallel humanoid head with face tracking for Autism Spectrum Disorder interventions.

    Science.gov (United States)

    Su, Hao; Dickstein-Fischer, Laurie; Harrington, Kevin; Fu, Qiushi; Lu, Weina; Huang, Haibo; Cole, Gregory; Fischer, Gregory S

    2010-01-01

    This paper presents the development of a new prismatic actuation approach and its application in human-safe humanoid head design. To reduce actuator output impedance and mitigate unexpected external shock, the prismatic actuation method uses cables to drive a piston with a preloaded spring. By leveraging the advantages of parallel manipulators and cable-driven mechanisms, the developed neck has a parallel manipulator embodiment with two cable-driven limbs embedded with preloaded springs and one passive limb. The eye mechanism is adapted for a low-cost webcam with a succinct "ball-in-socket" structure. Based on human head anatomy and biomimetics, the neck has 3 degrees of freedom (DOF) of motion: pan, tilt and one decoupled roll, while each eye has independent pan and synchronous tilt motion (3-DOF eyes). A Kalman filter based face tracking algorithm is implemented to interact with the human. This neck and eye structure is translatable to other human-safe humanoid robots. The robot's appearance reflects a non-threatening image of a penguin, which can be translated into a possible therapeutic intervention for children with Autism Spectrum Disorders.
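    For illustration only: a constant-velocity Kalman filter is one standard way to smooth a detected face centre for tracking, in the spirit of the Kalman-filter-based face tracking mentioned above. The state model, noise levels, and measurements below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: a constant-velocity Kalman filter smoothing a face-centre track.
import numpy as np

dt = 1.0 / 30.0                                   # assumed camera frame period
F = np.array([[1, 0, dt, 0],                      # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)         # we only measure (x, y)
Q = np.eye(4) * 1e-3                              # process noise
R = np.eye(2) * 4.0                               # measurement noise (pixels^2)

x = np.zeros(4)                                   # initial state
P = np.eye(4) * 100.0                             # initial uncertainty

def kalman_step(z):
    """Advance the filter with one face-detection measurement z = (x, y)."""
    global x, P
    x = F @ x                                     # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                                 # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ y                                 # update
    P = (np.eye(4) - K @ H) @ P
    return x[:2]                                  # smoothed face centre

for z in [np.array([320., 240.]), np.array([324., 238.]), np.array([330., 236.])]:
    print(kalman_step(z).round(1))
```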

  12. In vivo imaging of palisades of Vogt in dry eye versus normal subjects using en-face spectral-domain optical coherence tomography.

    Directory of Open Access Journals (Sweden)

    Wajdene Ghouali

    Full Text Available To evaluate a possible clinical application of spectral-domain optical coherence tomography (SD-OCT using en-face module for the imaging of the corneoscleral limbus in normal subjects and dry eye patients.Seventy-six subjects were included in this study. Seventy eyes of 35 consecutive patients with dry eye disease and 82 eyes of 41 healthy control subjects were investigated. All subjects were examined with the Avanti RTVue® anterior segment OCT. En-face OCT images of the corneoscleral limbus were acquired in four quadrants (inferior, superior, nasal and temporal and then were analyzed semi-quantitatively according to whether or not palisades of Vogt (POV were visible. En-face OCT images were then compared to in vivo confocal microscopy (IVCM in eleven eyes of 7 healthy and dry eye patients.En-face SD-OCT showed POV as a radially oriented network, located in superficial corneoscleral limbus, with a good correlation with IVCM features. It provided an easy and reproducible identification of POV without any special preparation or any direct contact, with a grading scale from 0 (no visualization to 3 (high visualization. The POV were found predominantly in superior (P<0.001 and inferior (P<0.001 quadrants when compared to the nasal and temporal quadrants for all subjects examined. The visibility score decreased with age (P<0.001 and was lower in dry eye patients (P<0.01. In addition, the score decreased in accordance with the severity of dry eye disease (P<0.001.En-face SD-OCT is a non-contact imaging technique that can be used to evaluate the POV, thus providing valuable information about differences in the limbal anatomy of dry eye patients as compared to healthy patients.

  13. Elevated intracranial pressure and reversible eye-tracking changes detected while viewing a film clip.

    Science.gov (United States)

    Kolecki, Radek; Dammavalam, Vikalpa; Bin Zahid, Abdullah; Hubbard, Molly; Choudhry, Osamah; Reyes, Marleen; Han, ByoungJun; Wang, Tom; Papas, Paraskevi Vivian; Adem, Aylin; North, Emily; Gilbertson, David T; Kondziolka, Douglas; Huang, Jason H; Huang, Paul P; Samadani, Uzma

    2018-03-01

    OBJECTIVE The precise threshold differentiating normal and elevated intracranial pressure (ICP) is variable among individuals. In the context of several pathophysiological conditions, elevated ICP leads to abnormalities in global cerebral functioning and impacts the function of cranial nerves (CNs), either or both of which may contribute to ocular dysmotility. The purpose of this study was to assess the impact of elevated ICP on eye-tracking performed while patients were watching a short film clip. METHODS Awake patients requiring placement of an ICP monitor for clinical purposes underwent eye tracking while watching a 220-second continuously playing video moving around the perimeter of a viewing monitor. Pupil position was recorded at 500 Hz and metrics associated with each eye individually and both eyes together were calculated. Linear regression with generalized estimating equations was performed to test the association of eye-tracking metrics with changes in ICP. RESULTS Eye tracking was performed at ICP levels ranging from -3 to 30 mm Hg in 23 patients (12 women, 11 men, mean age 46.8 years) on 55 separate occasions. Eye-tracking measures correlating with CN function linearly decreased with increasing ICP (p short film clip. These results suggest that eye tracking may be used as a noninvasive, automatable means to quantitate the physiological impact of elevated ICP, which has clinical application for assessment of shunt malfunction, pseudotumor cerebri, concussion, and prevention of second-impact syndrome.
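    For illustration only: the abstract describes linear regression with generalized estimating equations (GEE) to relate eye-tracking metrics to ICP across repeated sessions per patient. The sketch below shows that kind of model with statsmodels on synthetic placeholder data; the variable names and effect sizes are assumptions.

```python
# Hedged sketch: GEE linear regression of an eye-tracking metric on ICP,
# accounting for repeated recordings per patient. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 55                                            # recording sessions
df = pd.DataFrame({
    "patient": rng.integers(0, 23, size=n),       # 23 patients, repeated sessions
    "icp": rng.uniform(-3, 30, size=n),           # intracranial pressure (mm Hg)
})
df["metric"] = 1.0 - 0.02 * df["icp"] + rng.normal(scale=0.05, size=n)

X = sm.add_constant(df["icp"])
model = sm.GEE(df["metric"], X, groups=df["patient"],
               family=sm.families.Gaussian(),
               cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```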

  14. In the twinkling of an eye: synchronization of EEG and eye tracking based on blink signatures

    DEFF Research Database (Denmark)

    Bækgaard, Per; Petersen, Michael Kai; Larsen, Jakob Eg

    2014-01-01

    ACHIEVING ROBUST ADAPTIVE SYNCHRONIZATION OF MULTIMODAL BIOMETRIC INPUTS: The recent arrival of wireless EEG headsets that enable mobile real-time 3D brain imaging on smartphones, and low cost eye trackers that provide gaze control of tablets, will radically change how biometric sensors might be integrated into next generation user interfaces. In experimental lab settings EEG neuroimaging and eye tracking data are traditionally combined using external triggers to synchronize the signals. However, with biometric sensors increasingly being applied in everyday usage scenarios, there will be a need … function based algorithm to correlate the signals. Comparing the accuracy of the method against a state of the art EYE-EEG plug-in for offline analysis of EEG and eye tracking data, we propose our approach could be applied for robust synchronization of biometric sensor data collected in a mobile context.
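    For illustration only: one way to align two recordings via shared blink events is to cross-correlate a blink trace derived from the EEG with one derived from the eye tracker and take the lag of the correlation peak. The signals, sampling rate, and lag below are synthetic assumptions, not the paper's algorithm or data.

```python
# Hedged sketch: estimate the time offset between an EEG blink-artefact trace
# and an eye tracker's blink signal by cross-correlation.
import numpy as np

fs = 100                                           # assumed common sampling rate (Hz)
rng = np.random.default_rng(4)
blinks = np.zeros(3000)
blinks[rng.choice(3000, size=12, replace=False)] = 1.0
true_lag = 37                                      # samples the eye signal lags the EEG

eeg_blinks = blinks + rng.normal(scale=0.05, size=blinks.size)
eye_blinks = np.roll(blinks, true_lag) + rng.normal(scale=0.05, size=blinks.size)

corr = np.correlate(eye_blinks - eye_blinks.mean(),
                    eeg_blinks - eeg_blinks.mean(), mode='full')
lag = corr.argmax() - (blinks.size - 1)
print(f"estimated offset: {lag / fs:.2f} s (true {true_lag / fs:.2f} s)")
```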

  15. Template aging in eye movement-driven biometrics

    Science.gov (United States)

    Komogortsev, Oleg V.; Holland, Corey D.; Karpov, Alex

    2014-05-01

    This paper presents a template aging study of eye movement biometrics, considering three distinct biometric techniques on multiple stimuli and eye tracking systems. Short-to-midterm aging effects are examined over two weeks, on a high-resolution eye tracking system, and seven months, on a low-resolution eye tracking system. We find that, in all cases, aging effects are evident as early as two weeks after initial template collection, with an average 28% (±19%) increase in equal error rates and 34% (±12%) reduction in rank-1 identification rates. At seven months, we observe an average 18% (±8%) increase in equal error rates and 44% (±20%) reduction in rank-1 identification rates. The comparative results at two weeks and seven months suggest that there is little difference in aging effects between the two intervals; however, whether the rate of decay increases more drastically in the long term remains to be seen.
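    For illustration only: the equal error rate (EER) reported above is the operating point at which the false accept rate equals the false reject rate. The sketch computes an EER from genuine and impostor score distributions; the score distributions are synthetic assumptions.

```python
# Hedged sketch: compute the equal error rate (EER) from genuine and impostor
# similarity scores. Score distributions below are synthetic.
import numpy as np

def equal_error_rate(genuine, impostor):
    """Return the EER and the threshold at which FAR ~= FRR."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0, thresholds[i]

rng = np.random.default_rng(5)
genuine = rng.normal(0.7, 0.1, 500)    # same-person comparison scores
impostor = rng.normal(0.5, 0.1, 5000)  # different-person comparison scores
eer, thr = equal_error_rate(genuine, impostor)
print(f"EER = {eer:.3f} at threshold {thr:.3f}")
```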

  16. Loneliness and hypervigilance to social cues in females: an eye-tracking study.

    Directory of Open Access Journals (Sweden)

    Gerine M A Lodder

    Full Text Available The goal of the present study was to examine whether lonely individuals differ from nonlonely individuals in their overt visual attention to social cues. Previous studies showed that loneliness was related to biased post-attentive processing of social cues (e.g., negative interpretation bias), but research on whether lonely and nonlonely individuals also show differences in an earlier information processing stage (gazing behavior) is very limited. A sample of 25 lonely and 25 nonlonely students took part in an eye-tracking study consisting of four tasks. We measured gazing (duration, number of fixations and first fixation) at the eyes, nose and mouth region of faces expressing emotions (Task 1), at emotion quadrants (anger, fear, happiness and neutral expression) (Task 2), at quadrants with positive and negative social and nonsocial images (Task 3), and at the facial area of actors in video clips with positive and negative content (Task 4). In general, participants tended to gaze most often and longest at areas that conveyed most social information, such as the eye region of the face (T1) and social images (T3). Participants gazed most often and longest at happy faces (T2) in still images, and more often and longer at the facial area in negative than in positive video clips (T4). No differences occurred between lonely and nonlonely participants in their gazing times and frequencies, nor at first fixations at social cues in the four different tasks. Based on this study, we found no evidence that overt visual attention to social cues differs between lonely and nonlonely individuals. This implies that biases in social information processing of lonely individuals may be limited to other phases of social information processing. Alternatively, biased overt attention to social cues may only occur under specific conditions, for specific stimuli or for specific lonely individuals.

  17. Gender Classification Based on Eye Movements: A Processing Effect During Passive Face Viewing.

    Science.gov (United States)

    Sammaknejad, Negar; Pouretemad, Hamidreza; Eslahchi, Changiz; Salahirad, Alireza; Alinejad, Ashkan

    2017-01-01

    Studies have revealed superior face recognition skills in females, partially due to their different eye movement strategies when encoding faces. In the current study, we utilized these slight but important differences and proposed a model that estimates the gender of the viewers and classifies them into two subgroups, males and females. An eye tracker recorded participants' eye movements while they viewed images of faces. Regions of interest (ROIs) were defined for each face. Results showed that the gender dissimilarity in eye movements was not due to differences in frequency of fixations in the ROIs per se. Instead, it was caused by dissimilarity in saccade paths between the ROIs. The difference was enhanced when saccades were towards the eyes. Females showed a significant increase in transitions from other ROIs to the eyes. Consequently, the extraction of temporal transient information of saccade paths through a transition probability matrix, similar to a first-order Markov chain model, significantly improved the accuracy of the gender classification results.
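    For illustration only: a first-order transition probability matrix over facial regions of interest can be built by counting ROI-to-ROI transitions in a fixation sequence and normalizing each row. The ROI labels and the example sequence are illustrative assumptions, not the study's data.

```python
# Hedged sketch: build a first-order ROI transition probability matrix
# from a fixation sequence.
import numpy as np

ROIS = ["left_eye", "right_eye", "nose", "mouth"]

def transition_matrix(roi_sequence):
    idx = {r: i for i, r in enumerate(ROIS)}
    counts = np.zeros((len(ROIS), len(ROIS)))
    for a, b in zip(roi_sequence[:-1], roi_sequence[1:]):
        counts[idx[a], idx[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0              # avoid division by zero
    return counts / row_sums

sequence = ["nose", "left_eye", "right_eye", "left_eye", "mouth", "left_eye"]
print(transition_matrix(sequence).round(2))
```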

  18. Combination of multiple measurement cues for visual face tracking

    DEFF Research Database (Denmark)

    Katsarakis, Nikolaos; Pnevmatikakis, Aristodemos; Tan, Zheng-Hua

    2014-01-01

    Visual face tracking is an important building block for all intelligent living and working spaces, as it is able to locate persons without any human intervention or the need for the users to carry sensors on themselves. In this paper we present a novel face tracking system built on a particle...

  19. Perspective projection for variance pose face recognition from camera calibration

    Science.gov (United States)

    Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.

    2016-04-01

    Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance pose face features is challenging. We provide a solution for this problem using perspective projection for variance pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face box tracking and centre-of-eyes detection can be identified using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. The training of frontal images and the remaining poses on the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement in variance pose for each individual.

  20. Eye Tracking Metrics for Workload Estimation in Flight Deck Operation

    Science.gov (United States)

    Ellis, Kyle; Schnell, Thomas

    2010-01-01

    Flight decks of the future are being enhanced through improved avionics that adapt to both aircraft and operator state. Eye tracking allows for non-invasive analysis of pilot eye movements, from which a set of metrics can be derived to effectively and reliably characterize workload. This research identifies eye tracking metrics that correlate to aircraft automation conditions, and identifies the correlation of pilot workload to the same automation conditions. Saccade length was used as an indirect index of pilot workload: pilots in the fully automated condition were observed to have, on average, larger saccadic movements in contrast to the guidance and manual flight conditions. The data set itself also provides a general model of human eye movement behavior, and so ostensibly of visual attention distribution in the cockpit, for approach-to-land tasks with various levels of automation, by means of the same metrics used for workload algorithm development.

  1. Tracking and recognition face in videos with incremental local sparse representation model

    Science.gov (United States)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

    This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed by employing local sparse appearance and a covariance pooling method. In the following face recognition stage, with the employment of a novel template update strategy that combines incremental subspace learning, our recognition algorithm adapts the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on the YouTube database, which includes 47 celebrities. Our proposed method produces a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. In the case of the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.

  2. Identification system by eye retinal pattern

    International Nuclear Information System (INIS)

    Sunagawa, Takahisa; Shibata, Susumu

    1987-01-01

    An identification system based on the eye retinal pattern is introduced from the viewpoint of the history of R and D, measurement, apparatus, evaluation tests, safety and application. According to our evaluation tests, enrolling time is approximately less than 1 min, verification time is a few seconds and the false accept rate is 0 %. Evaluation tests at Sandia National Laboratories in the USA show comparative false accept rates of 0 % for eye retinal pattern, 10.5 % for fingerprint, 5.8 % for signature dynamics and 17.7 % for speaker voice. The identification system by eye retinal pattern has only three applications in Japan, but there is considerable experience with it in the USA. This fact suggests that the system will become an important means of physical protection not only in the nuclear field but also in other industrial fields in Japan. (author)

  3. Automated night/day standoff detection, tracking, and identification of personnel for installation protection

    Science.gov (United States)

    Lemoff, Brian E.; Martin, Robert B.; Sluch, Mikhail; Kafka, Kristopher M.; McCormick, William; Ice, Robert

    2013-06-01

    The capability to positively and covertly identify people at a safe distance, 24-hours per day, could provide a valuable advantage in protecting installations, both domestically and in an asymmetric warfare environment. This capability would enable installation security officers to identify known bad actors from a safe distance, even if they are approaching under cover of darkness. We will describe an active-SWIR imaging system being developed to automatically detect, track, and identify people at long range using computer face recognition. The system illuminates the target with an eye-safe and invisible SWIR laser beam, to provide consistent high-resolution imagery night and day. SWIR facial imagery produced by the system is matched against a watch-list of mug shots using computer face recognition algorithms. The current system relies on an operator to point the camera and to review and interpret the face recognition results. Automation software is being developed that will allow the system to be cued to a location by an external system, automatically detect a person, track the person as they move, zoom in on the face, select good facial images, and process the face recognition results, producing alarms and sharing data with other systems when people are detected and identified. Progress on the automation of this system will be presented along with experimental night-time face recognition results at distance.

  4. Symmetric Kullback-Leibler Metric Based Tracking Behaviors for Bioinspired Robotic Eyes.

    Science.gov (United States)

    Liu, Hengli; Luo, Jun; Wu, Peng; Xie, Shaorong; Li, Hengyu

    2015-01-01

    A symmetric Kullback-Leibler metric based tracking system, capable of tracking moving targets, is presented for a bionic spherical parallel mechanism to minimize a tracking error function to simulate smooth pursuit of human eyes. More specifically, we propose a real-time moving target tracking algorithm which utilizes spatial histograms taking into account symmetric Kullback-Leibler metric. In the proposed algorithm, the key spatial histograms are extracted and taken into particle filtering framework. Once the target is identified, an image-based control scheme is implemented to drive bionic spherical parallel mechanism such that the identified target is to be tracked at the center of the captured images. Meanwhile, the robot motion information is fed forward to develop an adaptive smooth tracking controller inspired by the Vestibuloocular Reflex mechanism. The proposed tracking system is designed to make the robot track dynamic objects when the robot travels through transmittable terrains, especially bumpy environment. To perform bumpy-resist capability under the condition of violent attitude variation when the robot works in the bumpy environment mentioned, experimental results demonstrate the effectiveness and robustness of our bioinspired tracking system using bionic spherical parallel mechanism inspired by head-eye coordination.

  5. Symmetric Kullback-Leibler Metric Based Tracking Behaviors for Bioinspired Robotic Eyes

    Directory of Open Access Journals (Sweden)

    Hengli Liu

    2015-01-01

    Full Text Available A symmetric Kullback-Leibler metric based tracking system, capable of tracking moving targets, is presented for a bionic spherical parallel mechanism to minimize a tracking error function to simulate smooth pursuit of human eyes. More specifically, we propose a real-time moving target tracking algorithm which utilizes spatial histograms taking into account symmetric Kullback-Leibler metric. In the proposed algorithm, the key spatial histograms are extracted and taken into particle filtering framework. Once the target is identified, an image-based control scheme is implemented to drive bionic spherical parallel mechanism such that the identified target is to be tracked at the center of the captured images. Meanwhile, the robot motion information is fed forward to develop an adaptive smooth tracking controller inspired by the Vestibuloocular Reflex mechanism. The proposed tracking system is designed to make the robot track dynamic objects when the robot travels through transmittable terrains, especially bumpy environment. To perform bumpy-resist capability under the condition of violent attitude variation when the robot works in the bumpy environment mentioned, experimental results demonstrate the effectiveness and robustness of our bioinspired tracking system using bionic spherical parallel mechanism inspired by head-eye coordination.
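    For illustration only: the symmetric Kullback-Leibler measure used as the similarity score between spatial histograms in the tracker described above can be sketched as the sum of the two directed KL divergences. The toy histograms and the smoothing constant are assumptions.

```python
# Hedged sketch: a symmetric Kullback-Leibler distance between two normalized
# spatial histograms, as a stand-in for the tracker's appearance similarity.
import numpy as np

def symmetric_kl(p, q, eps=1e-10):
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    kl_pq = np.sum(p * np.log(p / q))
    kl_qp = np.sum(q * np.log(q / p))
    return kl_pq + kl_qp

template_hist = [0.2, 0.3, 0.1, 0.4]      # target appearance model
candidate_hist = [0.25, 0.25, 0.15, 0.35]  # candidate region from a particle
print(f"symmetric KL distance: {symmetric_kl(template_hist, candidate_hist):.4f}")
```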

  6. Use of Cognitive and Metacognitive Strategies in Online Search: An Eye-Tracking Study

    Science.gov (United States)

    Zhou, Mingming; Ren, Jing

    2016-01-01

    This study used eye-tracking technology to track students' eye movements while searching information on the web. The research question guiding this study was "Do students with different search performance levels have different visual attention distributions while searching information online? If yes, what are the patterns for high and low…

  7. Looking at My Own Face: Visual Processing Strategies in Self–Other Face Recognition

    Directory of Open Access Journals (Sweden)

    Anya Chakraborty

    2018-02-01

    Full Text Available We live in an age of ‘selfies.’ Yet, how we look at our own faces has seldom been systematically investigated. In this study we test if the visual processing of the highly familiar self-face is different from other faces, using psychophysics and eye-tracking. This paradigm also enabled us to test the association between the psychophysical properties of self-face representation and visual processing strategies involved in self-face recognition. Thirty-three adults performed a self-face recognition task from a series of self-other face morphs with simultaneous eye-tracking. Participants were found to look longer at the lower part of the face for self-face compared to other-face. Participants with a more distinct self-face representation, as indexed by a steeper slope of the psychometric response curve for self-face recognition, were found to look longer at upper part of the faces identified as ‘self’ vs. those identified as ‘other’. This result indicates that self-face representation can influence where we look when we process our own vs. others’ faces. We also investigated the association of autism-related traits with self-face processing metrics since autism has previously been associated with atypical self-processing. The study did not find any self-face specific association with autistic traits, suggesting that autism-related features may be related to self-processing in a domain specific manner.

  8. Objective assessment of the contribution of dental esthetics and facial attractiveness in men via eye tracking.

    Science.gov (United States)

    Baker, Robin S; Fields, Henry W; Beck, F Michael; Firestone, Allen R; Rosenstiel, Stephen F

    2018-04-01

    Recently, greater emphasis has been placed on smile esthetics in dentistry. Eye tracking has been used to objectively evaluate attention to the dentition (mouth) in female models with different levels of dental esthetics quantified by the aesthetic component of the Index of Orthodontic Treatment Need (IOTN). This has not been accomplished in men. Our objective was to determine the visual attention to the mouth in men with different levels of dental esthetics (IOTN levels) and background facial attractiveness, for both male and female raters, using eye tracking. Facial images of men rated as unattractive, average, and attractive were digitally manipulated and paired with validated oral images, IOTN levels 1 (no treatment need), 7 (borderline treatment need), and 10 (definite treatment need). Sixty-four raters meeting the inclusion criteria were included in the data analysis. Each rater was calibrated in the eye tracker and randomly viewed the composite images for 3 seconds, twice for reliability. Reliability was good or excellent (intraclass correlation coefficients, 0.6-0.9). Significant interactions were observed with factorial repeated-measures analysis of variance and the Tukey-Kramer method for density and duration of fixations in the interactions of model facial attractiveness by area of the face (P models was eye, mouth, and nose, but for men of average attractiveness, it was mouth, eye, and nose. For dental esthetics by area, at IOTN 7, the mouth had significantly more visual attention than it did at IOTN 1 and significantly more than the nose. At IOTN 10, the mouth received significantly more attention than at IOTN 7 and surpassed the nose and eye. These findings were irrespective of facial attractiveness levels. For rater sex by area in visual density, women showed significantly more attention to the eyes than did men, and only men showed significantly more attention to the mouth over the nose. Visual attention to the mouth was the greatest in men of

  9. An eye-tracking method to reveal the link between gazing patterns and pragmatic abilities in high functioning autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Ouriel eGrynszpan

    2015-01-01

    Full Text Available The present study illustrates the potential advantages of an eye-tracking method for exploring the association between visual scanning of faces and inferences of mental states. Participants watched short videos involving social interactions and had to explain what they had seen. The number of cognition verbs (e.g. think, believe, know in their answers were counted. Given the possible use of peripheral vision that could confound eye-tracking measures, we added a condition using a gaze-contingent viewing window: the entire visual display is blurred, expect for an area that moves with the participant’s gaze. Eleven typical adults and eleven high functioning adults with ASD were recruited. The condition employing the viewing window yielded strong correlations between the average duration of fixations, the ratio of cognition verbs and standard measures of social disabilities.

  10. A Novel Hybrid Mental Spelling Application Based on Eye Tracking and SSVEP-Based BCI

    Directory of Open Access Journals (Sweden)

    Piotr Stawicki

    2017-04-01

    Full Text Available Steady state visual evoked potential (SSVEP)-based Brain-Computer Interfaces (BCIs), as well as eye-tracking devices, provide a pathway for re-establishing communication for people with severe disabilities. We fused these control techniques into a novel eye-tracking/SSVEP hybrid system, which utilizes eye tracking for initial rough selection and the SSVEP technology for fine target activation. Based on our previous studies, only four stimuli were used for the SSVEP aspect, granting sufficient control for most BCI users. As eye-tracking data are not used for the activation of letters, false positives due to inappropriate dwell times are avoided. This novel approach combines the high speed of eye tracking systems and the high classification accuracies of low-target SSVEP-based BCIs, leading to an optimal combination of both methods. We evaluated accuracy and speed of the proposed hybrid system with a 30-target spelling application implementing all three control approaches (pure eye tracking, SSVEP, and the hybrid system) with 32 participants. Although the highest information transfer rates (ITRs) were achieved with pure eye tracking, a considerable number of subjects were not able to gain sufficient control over the stand-alone eye-tracking device or the pure SSVEP system (78.13% and 75% of the participants reached reliable control, respectively). In this respect, the proposed hybrid was most universal (over 90% of users achieved reliable control), and outperformed the pure SSVEP system in terms of speed and user friendliness. The presented hybrid system might offer communication to a wider range of users in comparison to the standard techniques.
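    For illustration only: the information transfer rate (ITR) compared above is commonly computed with the standard Wolpaw formula. The target count, accuracy, and selection time in the sketch are illustrative assumptions, not the study's reported values.

```python
# Hedged sketch: the standard Wolpaw ITR formula for a BCI speller.
from math import log2

def itr_bits_per_min(n_targets, accuracy, seconds_per_selection):
    p, n = accuracy, n_targets
    if p <= 0 or p >= 1:
        bits = log2(n) if p == 1 else 0.0
    else:
        bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * (60.0 / seconds_per_selection)

print(f"{itr_bits_per_min(n_targets=30, accuracy=0.95, seconds_per_selection=4.0):.1f} bits/min")
```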

  11. Low Cost vs. High-End Eye Tracking for Usability Testing

    DEFF Research Database (Denmark)

    Johansen, Sune Alstrup; San Agustin, Javier; Jensen, Henrik Tomra Skovsgaard Hegner

    2011-01-01

    Accuracy of an open source remote eye tracking system and a state-of-the-art commercial eye tracker was measured 4 times during a usability test. Results from 9 participants showed both devices to be fairly stable over time, but the commercial tracker was more accurate with a mean error of 31 pix...

  12. A Coupled Hidden Markov Random Field Model for Simultaneous Face Clustering and Tracking in Videos

    KAUST Repository

    Wu, Baoyuan

    2016-10-25

    Face clustering and face tracking are two areas of active research in automatic facial video processing. They, however, have long been studied separately, despite the inherent link between them. In this paper, we propose to perform simultaneous face clustering and face tracking from real world videos. The motivation for the proposed research is that face clustering and face tracking can provide useful information and constraints to each other, thus can bootstrap and improve the performances of each other. To this end, we introduce a Coupled Hidden Markov Random Field (CHMRF) to simultaneously model face clustering, face tracking, and their interactions. We provide an effective algorithm based on constrained clustering and optimal tracking for the joint optimization of cluster labels and face tracking. We demonstrate significant improvements over state-of-the-art results in face clustering and tracking on several videos.

  13. WEB ANALYTICS COMBINED WITH EYE TRACKING FOR SUCCESSFUL USER EXPERIENCE DESIGN: A CASE STUDY

    Directory of Open Access Journals (Sweden)

    Magdalena BORYS

    2016-12-01

    Full Text Available The authors propose a new approach for the mobile user experience design process by means of web analytics and eye-tracking. The proposed method was applied to design the LUT mobile website. In the method, to create the mobile website design, data of various users and their behaviour were gathered and analysed using the web analytics tool. Next, based on the findings from web analytics, the mobile prototype for the website was created and validated in eye-tracking usability testing. The analysis of participants’ behaviour during eye-tracking sessions allowed improvements of the prototype.

  14. Eye tracking measures of uncertainty during perceptual decision making.

    Science.gov (United States)

    Brunyé, Tad T; Gardony, Aaron L

    2017-10-01

    Perceptual decision making involves gathering and interpreting sensory information to effectively categorize the world and inform behavior. For instance, a radiologist distinguishing the presence versus absence of a tumor, or a luggage screener categorizing objects as threatening or non-threatening. In many cases, sensory information is not sufficient to reliably disambiguate the nature of a stimulus, and resulting decisions are done under conditions of uncertainty. The present study asked whether several oculomotor metrics might prove sensitive to transient states of uncertainty during perceptual decision making. Participants viewed images with varying visual clarity and were asked to categorize them as faces or houses, and rate the certainty of their decisions, while we used eye tracking to monitor fixations, saccades, blinks, and pupil diameter. Results demonstrated that decision certainty influenced several oculomotor variables, including fixation frequency and duration, the frequency, peak velocity, and amplitude of saccades, and phasic pupil diameter. Whereas most measures tended to change linearly along with decision certainty, pupil diameter revealed more nuanced and dynamic information about the time course of perceptual decision making. Together, results demonstrate robust alterations in eye movement behavior as a function of decision certainty and attention demands, and suggest that monitoring oculomotor variables during applied task performance may prove valuable for identifying and remediating transient states of uncertainty. Published by Elsevier B.V.

  15. Hidden Markov model analysis reveals the advantage of analytic eye movement patterns in face recognition across cultures.

    Science.gov (United States)

    Chuk, Tim; Crookes, Kate; Hayward, William G; Chan, Antoni B; Hsiao, Janet H

    2017-12-01

    It remains controversial whether culture modulates eye movement behavior in face recognition. Inconsistent results have been reported regarding whether cultural differences in eye movement patterns exist, whether these differences affect recognition performance, and whether participants use similar eye movement patterns when viewing faces from different ethnicities. These inconsistencies may be due to substantial individual differences in eye movement patterns within a cultural group. Here we addressed this issue by conducting individual-level eye movement data analysis using hidden Markov models (HMMs). Each individual's eye movements were modeled with an HMM. We clustered the individual HMMs according to their similarities and discovered three common patterns in both Asian and Caucasian participants: holistic (looking mostly at the face center), left-eye-biased analytic (looking mostly at the two individual eyes in addition to the face center with a slight bias to the left eye), and right-eye-biased analytic (looking mostly at the right eye in addition to the face center). The frequency of participants adopting the three patterns did not differ significantly between Asians and Caucasians, suggesting little modulation from culture. Significantly more participants (75%) showed similar eye movement patterns when viewing own- and other-race faces than different patterns. Most importantly, participants with left-eye-biased analytic patterns performed significantly better than those using either holistic or right-eye-biased analytic patterns. These results suggest that active retrieval of facial feature information through an analytic eye movement pattern may be optimal for face recognition regardless of culture. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Eye tracking a self-moved target with complex hand-target dynamics

    Science.gov (United States)

    Landelle, Caroline; Montagnini, Anna; Madelain, Laurent

    2016-01-01

    Previous work has shown that the ability to track with the eye a moving target is substantially improved when the target is self-moved by the subject's hand compared with when being externally moved. Here, we explored a situation in which the mapping between hand movement and target motion was perturbed by simulating an elastic relationship between the hand and target. Our objective was to determine whether the predictive mechanisms driving eye-hand coordination could be updated to accommodate this complex hand-target dynamics. To fully appreciate the behavioral effects of this perturbation, we compared eye tracking performance when self-moving a target with a rigid mapping (simple) and a spring mapping as well as when the subject tracked target trajectories that he/she had previously generated when using the rigid or spring mapping. Concerning the rigid mapping, our results confirmed that smooth pursuit was more accurate when the target was self-moved than externally moved. In contrast, with the spring mapping, eye tracking had initially similar low spatial accuracy (though shorter temporal lag) in the self versus externally moved conditions. However, within ∼5 min of practice, smooth pursuit improved in the self-moved spring condition, up to a level similar to the self-moved rigid condition. Subsequently, when the mapping unexpectedly switched from spring to rigid, the eye initially followed the expected target trajectory and not the real one, thereby suggesting that subjects used an internal representation of the new hand-target dynamics. Overall, these results emphasize the stunning adaptability of smooth pursuit when self-maneuvering objects with complex dynamics. PMID:27466129

  17. Online webcam-based eye tracking in cognitive science: A first look.

    Science.gov (United States)

    Semmelmann, Kilian; Weigelt, Sarah

    2018-04-01

    Online experimentation is emerging in many areas of cognitive psychology as a viable alternative or supplement to classical in-lab experimentation. While performance- and reaction-time-based paradigms are covered in recent studies, one instrument of cognitive psychology has not received much attention up to now: eye tracking. In this study, we used JavaScript-based eye tracking algorithms recently made available by Papoutsaki et al. (International Joint Conference on Artificial Intelligence, 2016) together with consumer-grade webcams to investigate the potential of online eye tracking to benefit from the common advantages of online data collection. We compared three tasks conducted in the lab (fixation, pursuit, and free viewing) with online-acquired data to analyze the spatial precision in the first two, and the replicability of well-known gazing patterns in the third task. Our results indicate that in-lab data exhibit an offset of about 172 px (15% of screen size, 3.94° visual angle) in the fixation task, while online data are slightly less accurate (18% of screen size, 207 px) and show higher variance. The same results were found for the pursuit task, with a constant offset during the stimulus movement (211 px in-lab, 216 px online). In the free-viewing task, we were able to replicate the high attention attribution to eyes (28.25%) compared to other key regions like the nose (9.71%) and mouth (4.00%). Overall, we found web technology-based eye tracking to be suitable for all three tasks and are confident that the required hard- and software will be improved continuously for even more sophisticated experimental paradigms in all of cognitive psychology.
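    For illustration only: converting a gaze error in pixels to degrees of visual angle, as in the offsets reported above, only requires the screen geometry and viewing distance. The screen size and distance in the sketch are assumed values, not those of the study, so the result will differ slightly from the paper's 3.94°.

```python
# Hedged sketch: convert an on-screen gaze error in pixels to degrees of
# visual angle under assumed screen geometry and viewing distance.
from math import atan, degrees

def pixels_to_degrees(error_px, screen_width_px=1280, screen_width_cm=34.0,
                      viewing_distance_cm=60.0):
    error_cm = error_px * (screen_width_cm / screen_width_px)
    return degrees(2 * atan(error_cm / (2 * viewing_distance_cm)))

print(f"{pixels_to_degrees(172):.2f} deg")   # 172 px is roughly the in-lab offset reported
```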

  18. Optics of the human cornea influence the accuracy of stereo eye-tracking methods: a simulation study.

    Science.gov (United States)

    Barsingerhorn, A D; Boonstra, F N; Goossens, H H L M

    2017-02-01

    Current stereo eye-tracking methods model the cornea as a sphere with one refractive surface. However, the human cornea is slightly aspheric and has two refractive surfaces. Here we used ray-tracing and the Navarro eye-model to study how these optical properties affect the accuracy of different stereo eye-tracking methods. We found that pupil size, gaze direction and head position all influence the reconstruction of gaze. Resulting errors range between ± 1.0 degrees at best. This shows that stereo eye-tracking may be an option if reliable calibration is not possible, but the applied eye-model should account for the actual optics of the cornea.
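
    The spherical single-surface cornea assumption criticized here is what makes standard stereo gaze reconstruction tractable: each camera ray is refracted once at a sphere. A minimal building block for such ray tracing is vector-form Snell refraction; the sketch below is illustrative only and does not implement the Navarro eye model used in the study.

    ```python
    import numpy as np

    def refract(d, n, n1, n2):
        """Refract direction d at a surface with normal n (oriented against the
        incoming ray), going from refractive index n1 to n2 (vector Snell's law)."""
        d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
        cos_i = -np.dot(n, d)
        r = n1 / n2
        k = 1.0 - r**2 * (1.0 - cos_i**2)
        if k < 0:
            return None  # total internal reflection
        return r * d + (r * cos_i - np.sqrt(k)) * n

    # Illustrative values: air (1.000) into a single-surface "reduced" cornea (~1.376)
    ray = np.array([0.0, 0.17, -0.99])      # incoming camera ray direction
    normal = np.array([0.0, 0.0, 1.0])      # surface normal at the hit point
    print(refract(ray, normal, 1.000, 1.376))
    ```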

  19. Updating OSHA Standards Based on National Consensus Standards; Eye and Face Protection. Final rule.

    Science.gov (United States)

    2016-03-25

    On March 13, 2015, OSHA published in the Federal Register a notice of proposed rulemaking (NPRM) to revise its eye and face protection standards for general industry, shipyard employment, marine terminals, longshoring, and construction by updating the references to national consensus standards approved by the American National Standards Institute (ANSI). OSHA received no significant objections from commenters and therefore is adopting the amendments as proposed. This final rule updates the references in OSHA's eye and face standards to reflect the most recent edition of the ANSI/International Safety Equipment Association (ISEA) eye and face protection standard. It removes the oldest-referenced edition of the same ANSI standard. It also amends other provisions of the construction eye and face protection standard to bring them into alignment with OSHA's general industry and maritime standards.

  20. Caucasian infants scan own- and other-race faces differently.

    Directory of Open Access Journals (Sweden)

    Andrea Wheeler

    2011-04-01

    Full Text Available Young infants are known to prefer own-race faces to other-race faces and recognize own-race faces better than other-race faces. However, it is entirely unclear as to whether infants also attend to different parts of own- and other-race faces differently, which may provide an important clue as to how and why the own-race face recognition advantage emerges so early. The present study used eye tracking methodology to investigate whether 6- to 10-month-old Caucasian infants (N = 37) have differential scanning patterns for dynamically displayed own- and other-race faces. We found that even though infants spent a similar amount of time looking at own- and other-race faces, with increased age, infants increasingly looked longer at the eyes of own-race faces and less at the mouths of own-race faces. These findings suggest experience-based tuning of the infant's face processing system to optimally process own-race faces that are different in physiognomy from other-race faces. In addition, the present results, taken together with recent own- and other-race eye tracking findings with infants and adults, provide strong support for an enculturation hypothesis that East Asians and Westerners may be socialized to scan faces differently due to each culture's conventions regarding mutual gaze during interpersonal communication.
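
    The looking-time measures in this and several of the following records reduce to the proportion of fixation time falling inside predefined areas of interest (AOIs) such as the eyes and mouth. A minimal sketch of that computation, assuming rectangular AOIs and made-up fixation data:

    ```python
    # Each fixation: (x, y, duration_ms); each AOI: (name, x_min, y_min, x_max, y_max)
    fixations = [(210, 140, 320), (305, 150, 410), (260, 330, 180)]   # hypothetical data
    aois = [("eyes", 150, 100, 350, 200), ("mouth", 200, 300, 320, 380)]

    def dwell_proportions(fixations, aois):
        """Proportion of total fixation time spent inside each AOI."""
        total = sum(d for _, _, d in fixations)
        dwell = {name: 0 for name, *_ in aois}
        for x, y, d in fixations:
            for name, x0, y0, x1, y1 in aois:
                if x0 <= x <= x1 and y0 <= y <= y1:
                    dwell[name] += d
        return {name: t / total for name, t in dwell.items()}

    print(dwell_proportions(fixations, aois))  # e.g. {'eyes': 0.80..., 'mouth': 0.19...}
    ```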

  1. Eye gaze tracking based on the shape of pupil image

    Science.gov (United States)

    Wang, Rui; Qiu, Jian; Luo, Kaiqing; Peng, Li; Han, Peng

    2018-01-01

    Eye tracker is an important instrument for research in psychology, widely used in attention, visual perception, reading and other fields of research. Because of its potential in human-computer interaction, eye gaze tracking has been a topic of research in many fields over the last decades. Nowadays, with the development of technology, non-intrusive methods are more and more welcomed. In this paper, we present a method based on the shape of the pupil image to estimate the gaze point of the human eyes without any intrusive devices such as a hat, a pair of glasses and so on. After applying an ellipse fitting algorithm to the captured pupil image, we can determine the direction of fixation from the shape of the pupil. The innovative aspect of this method is to exploit the shape of the pupil, so that more complicated algorithms can be avoided. The proposed method is helpful for the study of eye gaze tracking: it needs just one camera without infrared light, and uses the changes in the shape of the pupil to determine the gaze direction; no additional equipment is required.
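
    The idea rests on the projective geometry of the pupil: a circular pupil imaged off-axis appears as an ellipse whose axis ratio shrinks as the eye rotates away from the camera. A minimal OpenCV sketch of the ellipse-fitting step, assuming a hypothetical grayscale eye image eye.png; it is illustrative and not the authors' full pipeline:

    ```python
    import cv2
    import numpy as np

    # Hypothetical input: a grayscale eye image captured by an ordinary camera.
    img = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY_INV)   # dark pupil -> white blob
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pupil = max(contours, key=cv2.contourArea)                     # largest blob assumed to be the pupil

    (cx, cy), (ax1, ax2), angle = cv2.fitEllipse(pupil)            # centre, axis lengths, orientation
    # A circular pupil viewed off-axis projects to an ellipse; the short/long axis ratio
    # approximates cos(theta), where theta is the eye's rotation away from the camera
    # axis, and the ellipse orientation gives the direction of that rotation.
    theta = np.degrees(np.arccos(np.clip(min(ax1, ax2) / max(ax1, ax2), 0.0, 1.0)))
    print(f"pupil centre=({cx:.1f}, {cy:.1f}), off-axis rotation≈{theta:.1f}°, axis angle={angle:.1f}°")
    ```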

  2. Optical eye tracking system for real-time noninvasive tumor localization in external beam radiotherapy.

    Science.gov (United States)

    Via, Riccardo; Fassi, Aurora; Fattori, Giovanni; Fontana, Giulia; Pella, Andrea; Tagaste, Barbara; Riboldi, Marco; Ciocca, Mario; Orecchia, Roberto; Baroni, Guido

    2015-05-01

    External beam radiotherapy currently represents an important therapeutic strategy for the treatment of intraocular tumors. Accurate target localization and efficient compensation of involuntary eye movements are crucial to avoid deviations in dose distribution with respect to the treatment plan. This paper describes an eye tracking system (ETS) based on noninvasive infrared video imaging. The system was designed for capturing the tridimensional (3D) ocular motion and provides an on-line estimation of intraocular lesions position based on a priori knowledge coming from volumetric imaging. Eye tracking is performed by localizing cornea and pupil centers on stereo images captured by two calibrated video cameras, exploiting eye reflections produced by infrared illumination. Additionally, torsional eye movements are detected by template matching in the iris region of eye images. This information allows estimating the 3D position and orientation of the eye by means of an eye local reference system. By combining ETS measurements with volumetric imaging for treatment planning [computed tomography (CT) and magnetic resonance (MR)], one is able to map the position of the lesion to be treated in local eye coordinates, thus enabling real-time tumor referencing during treatment setup and irradiation. Experimental tests on an eye phantom and seven healthy subjects were performed to assess ETS tracking accuracy. Measurements on phantom showed an overall median accuracy within 0.16 mm and 0.40° for translations and rotations, respectively. Torsional movements were affected by 0.28° median uncertainty. On healthy subjects, the gaze direction error ranged between 0.19° and 0.82° at a median working distance of 29 cm. The median processing time of the eye tracking algorithm was 18.60 ms, thus allowing eye monitoring up to 50 Hz. A noninvasive ETS prototype was designed to perform real-time target localization and eye movement monitoring during ocular radiotherapy treatments. The

  3. Eye-mouth-eye angle as a good indicator of face masculinization, asymmetry, and attractiveness (Homo sapiens).

    Science.gov (United States)

    Danel, Dariusz; Pawlowski, Boguslaw

    2007-05-01

    Past research on male facial attractiveness has been limited by the reliance on facialmetric measures that are less than ideal. In particular, some of these measures are face size dependent and show only weak sexual dimorphism, which limits the ability to identify the relationship between masculinization and attractiveness. Here, the authors show that the eye-mouth-eye (EME) angle is a quantitative and face-size-independent trait that is sexually dimorphic and a good indicator of masculinity and face symmetry. Using frontal photographs of female and male faces, the authors first confirmed that the EME angle (measured with the vertex in the middle of the mouth and the arms crossing the centers of the pupils) was highly sexually dimorphic. Then, using pictures of young male faces whose attractiveness was assessed on a 7-point scale by young women, the authors showed that attractiveness ratings were negatively correlated with the EME angle and with the angle asymmetry. The results are compared with those that could be obtained with interpupillary or upper face height measurements. The authors discuss the relationship between attractiveness and both the EME angle and its symmetry in the light of evolutionary psychology.
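
    The EME angle is a plain planar angle with its vertex at the mouth centre and its arms passing through the two pupil centres, so it can be computed directly from 2D landmarks. A short worked sketch with made-up landmark coordinates:

    ```python
    import numpy as np

    def eme_angle(left_pupil, right_pupil, mouth_center):
        """Angle (degrees) at the mouth centre subtended by the two pupil centres."""
        m = np.asarray(mouth_center, dtype=float)
        a = np.asarray(left_pupil, dtype=float) - m
        b = np.asarray(right_pupil, dtype=float) - m
        cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

    # Hypothetical landmark positions in image pixels
    print(round(eme_angle((310, 220), (410, 220), (360, 330)), 1))  # ~48.9 degrees
    ```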

  4. Identification of Emotional Facial Expressions: Effects of Expression, Intensity, and Sex on Eye Gaze.

    Directory of Open Access Journals (Sweden)

    Laura Jean Wells

    Full Text Available The identification of emotional expressions is vital for social interaction, and can be affected by various factors, including the expressed emotion, the intensity of the expression, the sex of the face, and the gender of the observer. This study investigates how these factors affect the speed and accuracy of expression recognition, as well as dwell time on the two most significant areas of the face: the eyes and the mouth. Participants were asked to identify expressions from female and male faces displaying six expressions (anger, disgust, fear, happiness, sadness, and surprise), each with three levels of intensity (low, moderate, and normal). Overall, responses were fastest and most accurate for happy expressions, but slowest and least accurate for fearful expressions. More intense expressions were also classified most accurately. Reaction time showed a different pattern, with slowest response times recorded for expressions of moderate intensity. Overall, responses were slowest, but also most accurate, for female faces. Relative to male observers, women showed greater accuracy and speed when recognizing female expressions. Dwell time analyses revealed that attention to the eyes was about three times greater than on the mouth, with fearful eyes in particular attracting longer dwell times. The mouth region was attended to the most for fearful, angry, and disgusted expressions and least for surprise. These results extend upon previous findings to show important effects of expression, emotion intensity, and sex on expression recognition and gaze behaviour, and may have implications for understanding the ways in which emotion recognition abilities break down.

  5. Identification of Emotional Facial Expressions: Effects of Expression, Intensity, and Sex on Eye Gaze.

    Science.gov (United States)

    Wells, Laura Jean; Gillespie, Steven Mark; Rotshtein, Pia

    2016-01-01

    The identification of emotional expressions is vital for social interaction, and can be affected by various factors, including the expressed emotion, the intensity of the expression, the sex of the face, and the gender of the observer. This study investigates how these factors affect the speed and accuracy of expression recognition, as well as dwell time on the two most significant areas of the face: the eyes and the mouth. Participants were asked to identify expressions from female and male faces displaying six expressions (anger, disgust, fear, happiness, sadness, and surprise), each with three levels of intensity (low, moderate, and normal). Overall, responses were fastest and most accurate for happy expressions, but slowest and least accurate for fearful expressions. More intense expressions were also classified most accurately. Reaction time showed a different pattern, with slowest response times recorded for expressions of moderate intensity. Overall, responses were slowest, but also most accurate, for female faces. Relative to male observers, women showed greater accuracy and speed when recognizing female expressions. Dwell time analyses revealed that attention to the eyes was about three times greater than on the mouth, with fearful eyes in particular attracting longer dwell times. The mouth region was attended to the most for fearful, angry, and disgusted expressions and least for surprise. These results extend upon previous findings to show important effects of expression, emotion intensity, and sex on expression recognition and gaze behaviour, and may have implications for understanding the ways in which emotion recognition abilities break down.

  6. Designs and Algorithms to Map Eye Tracking Data with Dynamic Multielement Moving Objects

    Directory of Open Access Journals (Sweden)

    Ziho Kang

    2016-01-01

    Full Text Available Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) the visual angle error of the eye trackers is incapable of providing exact eye fixation coordinates. These issues were addressed by (1) developing dynamic areas of interest (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT), which controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations where air traffic controller specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ based on how we define dynamic AOIs to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance.
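
    The AOI gap tolerance (AGT) amounts to padding each moving object's bounding shape so that fixations displaced by the eye tracker's visual angle error still fall inside an AOI, at the risk of neighbouring AOIs overlapping when the padding grows too large. A minimal sketch with rectangular dynamic AOIs, hypothetical aircraft positions, and hypothetical AGT values:

    ```python
    def padded_rect(cx, cy, w, h, agt):
        """Axis-aligned AOI centred on a moving object, grown by the AOI gap tolerance."""
        return (cx - w / 2 - agt, cy - h / 2 - agt, cx + w / 2 + agt, cy + h / 2 + agt)

    def hit_aoi(fx, fy, objects, agt):
        """Return the names of the dynamic AOIs (one per object) containing a fixation."""
        hits = []
        for name, cx, cy, w, h in objects:
            x0, y0, x1, y1 = padded_rect(cx, cy, w, h, agt)
            if x0 <= fx <= x1 and y0 <= fy <= y1:
                hits.append(name)
        return hits

    aircraft = [("AC1", 120, 80, 20, 20), ("AC2", 150, 95, 20, 20)]  # hypothetical frame
    print(hit_aoi(128, 86, aircraft, agt=5))    # small AGT: unambiguous assignment
    print(hit_aoi(128, 86, aircraft, agt=25))   # too large: padded AOIs overlap, both hit
    ```

    The second call illustrates the trade-off behind searching for a near optimal AGT: a padding that is too large makes neighbouring AOIs capture the same fixation.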

  7. Do the Eyes Have It? Using Eye Tracking to Assess Students Cognitive Dimensions

    Science.gov (United States)

    Nisiforou, Efi A.; Laghos, Andrew

    2013-01-01

    Field dependence/independence (FD/FI) is a significant dimension of cognitive styles. The paper presents results of a study that seeks to identify individuals' level of field independence during visual stimulus tasks processing. Specifically, it examined the relationship between the Hidden Figure Test (HFT) scores and the eye tracking metrics.…

  8. Can eye-tracking technology improve situational awareness in paramedic clinical education?

    Science.gov (United States)

    Williams, Brett; Quested, Andrew; Cooper, Simon

    2013-01-01

    Human factors play a significant part in clinical error. Situational awareness (SA) means being aware of one's surroundings, comprehending the present situation, and being able to predict outcomes. It is a key human skill that, when properly applied, is associated with reducing medical error: eye-tracking technology can be used to provide an objective and qualitative measure of the initial perception component of SA. Feedback from eye-tracking technology can be used to improve the understanding and teaching of SA in clinical contexts, and consequently, has potential for reducing clinician error and the concomitant adverse events.

  9. Measuring Human Performance in Simulated Nuclear Power Plant Control Rooms Using Eye Tracking

    Energy Technology Data Exchange (ETDEWEB)

    Kovesdi, Casey Robert [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rice, Brandon Charles [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bower, Gordon Ross [Idaho National Lab. (INL), Idaho Falls, ID (United States); Spielman, Zachary Alexander [Idaho National Lab. (INL), Idaho Falls, ID (United States); Hill, Rachael Ann [Idaho National Lab. (INL), Idaho Falls, ID (United States); LeBlanc, Katya Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

    Control room modernization will be an important part of life extension for the existing light water reactor fleet. As part of modernization efforts, personnel will need to gain a full understanding of how control room technologies affect the performance of human operators. Recent advances in technology enable the use of eye tracking technology to continuously measure an operator’s eye movement, which correlates with a variety of human performance constructs such as situation awareness and workload. This report describes eye tracking metrics in the context of how they will be used in nuclear power plant control room simulator studies.

  10. Measuring Human Performance in Simulated Nuclear Power Plant Control Rooms Using Eye Tracking

    International Nuclear Information System (INIS)

    Kovesdi, Casey Robert; Rice, Brandon Charles; Bower, Gordon Ross; Spielman, Zachary Alexander; Hill, Rachael Ann; LeBlanc, Katya Lee

    2015-01-01

    Control room modernization will be an important part of life extension for the existing light water reactor fleet. As part of modernization efforts, personnel will need to gain a full understanding of how control room technologies affect the performance of human operators. Recent advances in technology enable the use of eye tracking technology to continuously measure an operator's eye movement, which correlates with a variety of human performance constructs such as situation awareness and workload. This report describes eye tracking metrics in the context of how they will be used in nuclear power plant control room simulator studies.

  11. Comparison of Predictable Smooth Ocular and Combined Eye-Head Tracking Behaviour in Patients with Lesions Affecting the Brainstem and Cerebellum

    Science.gov (United States)

    Grant, Michael P.; Leigh, R. John; Seidman, Scott H.; Riley, David E.; Hanna, Joseph P.

    1992-01-01

    We compared the ability of eight normal subjects and 15 patients with brainstem or cerebellar disease to follow a moving visual stimulus smoothly with either the eyes alone or with combined eye-head tracking. The visual stimulus was either a laser spot (horizontal and vertical planes) or a large rotating disc (torsional plane), which moved at one sinusoidal frequency for each subject. The visually enhanced Vestibulo-Ocular Reflex (VOR) was also measured in each plane. In the horizontal and vertical planes, we found that if tracking gain (gaze velocity/target velocity) for smooth pursuit was close to 1, the gain of combined eye-head tracking was similar. If the tracking gain during smooth pursuit was less than about 0.7, combined eye-head tracking was usually superior. Most patients, irrespective of diagnosis, showed combined eye-head tracking that was superior to smooth pursuit; only two patients showed the converse. In the torsional plane, in which optokinetic responses were weak, combined eye-head tracking was much superior, and this was the case in both subjects and patients. We found that a linear model, in which an internal ocular tracking signal cancelled the VOR, could account for our findings in most normal subjects in the horizontal and vertical planes, but not in the torsional plane. The model failed to account for tracking behaviour in most patients in any plane, and suggested that the brain may use additional mechanisms to reduce the internal gain of the VOR during combined eye-head tracking. Our results confirm that certain patients who show impairment of smooth-pursuit eye movements preserve their ability to smoothly track a moving target with combined eye-head tracking.
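
    Tracking gain as defined here is just gaze velocity divided by target velocity over the sinusoidal cycle. A minimal sketch of estimating it from sampled position traces, using synthetic data rather than the authors' recordings:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 250.0                                    # assumed sampling rate, Hz
    t = np.arange(0, 4, 1 / fs)
    target = 10 * np.sin(2 * np.pi * 0.4 * t)     # target position, degrees
    gaze = 0.85 * target + rng.normal(0, 0.05, t.size)   # synthetic pursuit with gain 0.85

    target_vel = np.gradient(target, 1 / fs)
    gaze_vel = np.gradient(gaze, 1 / fs)
    gain = np.polyfit(target_vel, gaze_vel, 1)[0]   # slope of gaze velocity vs target velocity
    print(f"tracking gain ≈ {gain:.2f}")            # ≈ 0.85 for this synthetic trace
    ```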

  12. Detection of differential viewing patterns to erotic and non-erotic stimuli using eye-tracking methodology.

    Science.gov (United States)

    Lykins, Amy D; Meana, Marta; Kambe, Gretchen

    2006-10-01

    As a first step in the investigation of the role of visual attention in the processing of erotic stimuli, eye-tracking methodology was employed to measure eye movements during erotic scene presentation. Because eye-tracking is a novel methodology in sexuality research, we attempted to determine whether the eye-tracker could detect differences (should they exist) in visual attention to erotic and non-erotic scenes. A total of 20 men and 20 women were presented with a series of erotic and non-erotic images while their eye movements were tracked during image presentation. Comparisons between erotic and non-erotic image groups showed significant differences on two of three dependent measures of visual attention (number of fixations and total time) in both men and women. As hypothesized, there was a significant Stimulus x Scene Region interaction, indicating that participants visually attended to the body more in the erotic stimuli than in the non-erotic stimuli, as evidenced by a greater number of fixations and longer total time devoted to that region. These findings provide support for the application of eye-tracking methodology as a measure of visual attentional capture in sexuality research. Future applications of this methodology to expand our knowledge of the role of cognition in sexuality are suggested.

  13. Accurate eye center location and tracking using isophote curvature

    NARCIS (Netherlands)

    Valenti, R.; Gevers, T.

    2008-01-01

    The ubiquitous application of eye tracking is precluded by the requirement of dedicated and expensive hardware, such as infrared high definition cameras. Therefore, systems based solely on appearance (i.e. not involving active infrared illumination) are being proposed in literature. However,

  14. Oxytocin and Pareidolia Face Detection Using the Visual Search Task. A double blind, placebo-controlled within-subject design study using eye tracking and intranasal oxytocin.

    OpenAIRE

    Winge, Jarle Alexander

    2014-01-01

    Face pareidolia is the human tendency to see illusory human-like faces, for example in random patterns exhibiting configural properties of a face. Past research on humans shows that oxytocin has a crucial role in enhancing facial processing: by leading to an increased focus on the face in general, and the eyes especially, it alters the encoding and conceptual recognition of social stimuli, enhances sensitivity to hidden emotions in facial expressions, and enhances the ability to interpret the fac...

  15. Measuring vigilance decrement using computer vision assisted eye tracking in dynamic naturalistic environments.

    Science.gov (United States)

    Bodala, Indu P; Abbasi, Nida I; Yu Sun; Bezerianos, Anastasios; Al-Nashash, Hasan; Thakor, Nitish V

    2017-07-01

    Eye tracking offers a practical solution for monitoring cognitive performance in real world tasks. However, eye tracking in dynamic environments is difficult due to high spatial and temporal variation of stimuli, and needs further thorough investigation. In this paper, we study the possibility of developing a novel computer vision assisted eye tracking analysis by using fixations. Eye movement data were obtained from a long-duration naturalistic driving experiment. The scale-invariant feature transform (SIFT) algorithm was implemented using the VLFeat toolbox to identify multiple areas of interest (AOIs). A new measure called 'fixation score' was defined to capture the dynamics of fixation position between the target AOI and the non-target AOIs. The fixation score is maximal when the subjects focus on the target AOI and diminishes when they gaze at the non-target AOIs. A statistically significant negative correlation was found between the fixation score and reaction time data (r = -0.2253). During vigilance decrement, the fixation score decreases as visual attention shifts away from the target objects, resulting in an increase in reaction time.
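
    The abstract does not spell out how the fixation score is computed, so the sketch below uses an illustrative definition only: a fixation scores 1 inside the target AOI, 0 inside any non-target AOI, and is ignored elsewhere, with the mean tracked over time. All AOIs and fixation coordinates are hypothetical:

    ```python
    import numpy as np

    def in_box(x, y, box):
        x0, y0, x1, y1 = box
        return x0 <= x <= x1 and y0 <= y <= y1

    def fixation_scores(fixations, target_aoi, nontarget_aois):
        """Illustrative per-fixation score: 1 inside the target AOI, 0 inside any
        non-target AOI, NaN elsewhere (fixation on background)."""
        scores = []
        for x, y in fixations:
            if in_box(x, y, target_aoi):
                scores.append(1.0)
            elif any(in_box(x, y, b) for b in nontarget_aois):
                scores.append(0.0)
            else:
                scores.append(np.nan)
        return np.array(scores)

    fix = [(400, 300), (120, 80), (410, 310), (600, 420)]     # hypothetical fixations
    target = (350, 250, 450, 350)                             # hypothetical lead-vehicle AOI
    others = [(80, 40, 200, 140), (550, 380, 700, 460)]       # hypothetical mirror/sign AOIs
    print(np.nanmean(fixation_scores(fix, target, others)))   # drops toward 0 as gaze leaves the target
    ```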

  16. Optical eye tracking system for real-time noninvasive tumor localization in external beam radiotherapy

    International Nuclear Information System (INIS)

    Via, Riccardo; Fassi, Aurora; Fattori, Giovanni; Fontana, Giulia; Pella, Andrea; Tagaste, Barbara; Ciocca, Mario; Riboldi, Marco; Baroni, Guido; Orecchia, Roberto

    2015-01-01

    Purpose: External beam radiotherapy currently represents an important therapeutic strategy for the treatment of intraocular tumors. Accurate target localization and efficient compensation of involuntary eye movements are crucial to avoid deviations in dose distribution with respect to the treatment plan. This paper describes an eye tracking system (ETS) based on noninvasive infrared video imaging. The system was designed for capturing the tridimensional (3D) ocular motion and provides an on-line estimation of intraocular lesions position based on a priori knowledge coming from volumetric imaging. Methods: Eye tracking is performed by localizing cornea and pupil centers on stereo images captured by two calibrated video cameras, exploiting eye reflections produced by infrared illumination. Additionally, torsional eye movements are detected by template matching in the iris region of eye images. This information allows estimating the 3D position and orientation of the eye by means of an eye local reference system. By combining ETS measurements with volumetric imaging for treatment planning [computed tomography (CT) and magnetic resonance (MR)], one is able to map the position of the lesion to be treated in local eye coordinates, thus enabling real-time tumor referencing during treatment setup and irradiation. Experimental tests on an eye phantom and seven healthy subjects were performed to assess ETS tracking accuracy. Results: Measurements on phantom showed an overall median accuracy within 0.16 mm and 0.40° for translations and rotations, respectively. Torsional movements were affected by 0.28° median uncertainty. On healthy subjects, the gaze direction error ranged between 0.19° and 0.82° at a median working distance of 29 cm. The median processing time of the eye tracking algorithm was 18.60 ms, thus allowing eye monitoring up to 50 Hz. Conclusions: A noninvasive ETS prototype was designed to perform real-time target localization and eye movement monitoring during ocular radiotherapy treatments.

  17. Auditory display as feedback for a novel eye-tracking system for sterile operating room interaction.

    Science.gov (United States)

    Black, David; Unger, Michael; Fischer, Nele; Kikinis, Ron; Hahn, Horst; Neumuth, Thomas; Glaser, Bernhard

    2018-01-01

    The growing number of technical systems in the operating room has increased attention on developing touchless interaction methods for sterile conditions. However, touchless interaction paradigms lack the tactile feedback found in common input devices such as mice and keyboards. We propose a novel touchless eye-tracking interaction system with auditory display as a feedback method for completing typical operating room tasks. Auditory display provides feedback concerning the selected input into the eye-tracking system as well as a confirmation of the system response. An eye-tracking system with a novel auditory display using both earcons and parameter-mapping sonification was developed to allow touchless interaction for six typical scrub nurse tasks. An evaluation with novice participants compared auditory display with visual display with respect to reaction time and a series of subjective measures. When using auditory display to substitute for the lost tactile feedback during eye-tracking interaction, participants exhibit reduced reaction time compared to using visual-only display. In addition, the auditory feedback led to lower subjective workload and higher usefulness and system acceptance ratings. Due to the absence of tactile feedback for eye-tracking and other touchless interaction methods, auditory display is shown to be a useful and necessary addition to new interaction concepts for the sterile operating room, reducing reaction times while improving subjective measures, including usefulness, user satisfaction, and cognitive workload.

  18. Fingerprint and Face Identification for Large User Population

    Directory of Open Access Journals (Sweden)

    Teddy Ko

    2003-06-01

    Full Text Available The main objective of this paper is to present the state-of-the-art of the current biometric (fingerprint and face technology, lessons learned during the investigative analysis performed to ascertain the benefits of using combined fingerprint and facial technologies, and recommendations for the use of current available fingerprint and face identification technologies for optimum identification performance for applications using large user population. Prior fingerprint and face identification test study results have shown that their identification accuracies are strongly dependent on the image quality of the biometric inputs. Recommended methodologies for ensuring the capture of acceptable quality fingerprint and facial images of subjects are also presented in this paper.

  19. Face Recognition and Visual Search Strategies in Autism Spectrum Disorders: Amending and Extending a Recent Review by Weigelt et al.

    Directory of Open Access Journals (Sweden)

    Julia Tang

    Full Text Available The purpose of this review was to build upon a recent review by Weigelt et al. which examined visual search strategies and face identification between individuals with autism spectrum disorders (ASD and typically developing peers. Seven databases, CINAHL Plus, EMBASE, ERIC, Medline, Proquest, PsychInfo and PubMed were used to locate published scientific studies matching our inclusion criteria. A total of 28 articles not included in Weigelt et al. met criteria for inclusion into this systematic review. Of these 28 studies, 16 were available and met criteria at the time of the previous review, but were mistakenly excluded; and twelve were recently published. Weigelt et al. found quantitative, but not qualitative, differences in face identification in individuals with ASD. In contrast, the current systematic review found both qualitative and quantitative differences in face identification between individuals with and without ASD. There is a large inconsistency in findings across the eye tracking and neurobiological studies reviewed. Recommendations for future research in face recognition in ASD were discussed.

  20. Head Tracking via Robust Registration in Texture Map Images

    National Research Council Canada - National Science Library

    LaCascia, Marco

    1998-01-01

    .... The resulting dynamic texture map provides a stabilized view of the face that can be used as input to many existing 2D techniques for face recognition, facial expressions analysis, lip reading, and eye tracking...

  1. Can Individuals with Autism Abstract Prototypes of Natural Faces?

    Science.gov (United States)

    Gastgeb, Holly Zajac; Wilkinson, Desiree A.; Minshew, Nancy J.; Strauss, Mark S.

    2011-01-01

    There is a growing amount of evidence suggesting that individuals with autism have difficulty with face processing. One basic cognitive ability that may underlie face processing difficulties is the ability to abstract a prototype. The current study examined prototype formation with natural faces using eye-tracking in high-functioning adults with…

  2. APPLICATION OF EYE TRACKING FOR MEASUREMENT AND EVALUATION IN HUMAN FACTORS STUDIES IN CONTROL ROOM MODERNIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Kovesdi, C.; Spielman, Z.; LeBlanc, K.; Rice, B.

    2017-05-01

    An important element of human factors engineering (HFE) pertains to measurement and evaluation (M&E). The role of HFE-M&E should be integrated throughout the entire control room modernization (CRM) process and be used for human-system performance evaluation and for diagnostic purposes in resolving potential human engineering deficiencies (HEDs) and other human machine interface (HMI) design issues. NUREG-0711 describes how HFE in CRM should employ a hierarchical set of measures, particularly during integrated system validation (ISV), including plant performance, personnel task performance, situation awareness, cognitive workload, and anthropometric/physiological factors. Historically, subjective measures have primarily been used since they are easier to collect and do not require specialized equipment. However, there are pitfalls with relying solely on subjective measures in M&E that negatively impact reliability, sensitivity, and objectivity. As part of comprehensively capturing a diverse set of measures that strengthen findings and inferences about the benefits of emerging technologies like advanced displays, this paper discusses the value of using eye tracking as an objective method for M&E. A brief description of eye tracking technology and relevant eye tracking measures is provided. Additionally, technical considerations and the unique challenges of using eye tracking in full-scale simulations are addressed. Finally, this paper shares preliminary findings regarding the use of a wearable eye tracking system in a full-scale simulator study. These findings should help guide future full-scale simulator studies using eye tracking as a methodology to evaluate human-system performance.

  3. Improved Likelihood Function in Particle-based IR Eye Tracking

    DEFF Research Database (Denmark)

    Satria, R.; Sorensen, J.; Hammoud, R.

    2005-01-01

    In this paper we propose a log likelihood-ratio function of foreground and background models used in a particle filter to track the eye region in dark-bright pupil image sequences. This model fuses information from both dark and bright pupil images and their difference image into one model. Our...... enhanced tracker overcomes the issues of prior selection of static thresholds during the detection of feature observations in the bright-dark difference images. The auto-initialization process is performed using cascaded classifier trained using adaboost and adapted to IR eye images. Experiments show good...

  4. Eye gaze in intelligent user interfaces gaze-based analyses, models and applications

    CERN Document Server

    Nakano, Yukiko I; Bader, Thomas

    2013-01-01

    Remarkable progress in eye-tracking technologies opened the way to design novel attention-based intelligent user interfaces, and highlighted the importance of better understanding of eye-gaze in human-computer interaction and human-human communication. For instance, a user's focus of attention is useful in interpreting the user's intentions, their understanding of the conversation, and their attitude towards the conversation. In human face-to-face communication, eye gaze plays an important role in floor management, grounding, and engagement in conversation.Eye Gaze in Intelligent User Interfac

  5. Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.

    Science.gov (United States)

    Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo

    2017-07-01

    Eye-movements are the only directly observable behavioural signals that are highly correlated with actions at the task level, and proactive of body movements and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis (or amputation) from stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy among others. Despite this benefit, eye tracking is not widely used as a control interface for robotic devices in movement-impaired patients due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking using our GT3D binocular eye tracker with a custom-designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. The users can move their own hand to any location of the workspace by simply looking at the target and winking once. This purely eye-tracking-based system enables the end-user to retain free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE error in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a 3-dimensional space-filling Peano curve while the user tracks it with their eyes. This results in a fully automated calibration procedure that yields several thousand calibration points versus standard approaches using a dozen points, resulting in beyond state-of-the-art 3D accuracy and precision.
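
    The automatic calibration reduces to collecting many (eye-feature, known 3D target) pairs along the robot's space-filling sweep and fitting a regression from features to 3D coordinates. A minimal least-squares sketch of that fitting step, with synthetic features standing in for the GT3D tracker's binocular measurements:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000                                           # many samples along the calibration sweep
    features = rng.uniform(-1, 1, size=(n, 4))         # stand-in for binocular gaze features
    true_map = rng.normal(size=(5, 3))                 # unknown affine map we want to recover
    X = np.hstack([features, np.ones((n, 1))])         # add a bias column
    targets = X @ true_map + rng.normal(0, 0.01, size=(n, 3))   # known 3D calibration points

    W, *_ = np.linalg.lstsq(X, targets, rcond=None)    # least-squares calibration
    rmse = np.sqrt(np.mean((X @ W - targets) ** 2, axis=0))
    print("per-axis RMSE:", rmse)                      # small residual -> mapping recovered
    ```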

  6. 29 CFR 1917.91 - Eye and face protection.

    Science.gov (United States)

    2010-07-01

    29 CFR 1917.91 (Labor): Eye and face protection equipment must be in accordance with any of the following consensus standards: (A) ANSI Z87.1-2003, “American National Standard…”. Protective equipment selected in accordance with one of the above consensus standards will be deemed to be in compliance with the requirements…

  7. A Statistical Approach to Continuous Self-Calibrating Eye Gaze Tracking for Head-Mounted Virtual Reality Systems

    OpenAIRE

    Tripathi, Subarna; Guenter, Brian

    2016-01-01

    We present a novel, automatic eye gaze tracking scheme inspired by smooth pursuit eye motion while playing mobile games or watching virtual reality contents. Our algorithm continuously calibrates an eye tracking system for a head mounted display. This eliminates the need for an explicit calibration step and automatically compensates for small movements of the headset with respect to the head. The algorithm finds correspondences between corneal motion and screen space motion, and uses these to...

  8. Eye Detection and Tracking for Intelligent Human Computer Interaction

    National Research Council Canada - National Science Library

    Yin, Lijun

    2006-01-01

    .... In this project, Dr. Lijun Yin has developed a new algorithm for detecting and tracking eyes under an unconstrained environment using a single ordinary camera or webcam. The new algorithm is advantageous in that it works in a non-intrusive way based on a socalled Topographic Context approach.

  9. Intuitive Face Judgments Rely on Holistic Eye Movement Pattern.

    Science.gov (United States)

    Mega, Laura F; Volz, Kirsten G

    2017-01-01

    Non-verbal signals such as facial expressions are of paramount importance for social encounters. Their perception predominantly occurs without conscious awareness and is effortlessly integrated into social interactions. In other words, face perception is intuitive. Contrary to classical intuition tasks, this work investigates intuitive processes in the realm of every-day type social judgments. Two differently instructed groups of participants judged the authenticity of emotional facial expressions, while their eye movements were recorded: an 'intuitive group,' instructed to rely on their "gut feeling" for the authenticity judgments, and a 'deliberative group,' instructed to make their judgments after careful analysis of the face. Pixel-wise statistical maps of the resulting eye movements revealed a differential viewing pattern, wherein the intuitive judgments relied on fewer, longer and more centrally located fixations. These markers have been associated with a global/holistic viewing strategy. The holistic pattern of intuitive face judgments is in line with evidence showing that intuition is related to processing the "gestalt" of an object, rather than focusing on details. Our work thereby provides further evidence that intuitive processes are characterized by holistic perception, in an understudied and real world domain of intuition research.
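
    The three markers that separated the intuitive group (fewer, longer, and more centrally located fixations) are straightforward to compute from a fixation list. A short sketch with made-up fixation data, taking the image centre as the reference point for centrality:

    ```python
    import numpy as np

    def viewing_markers(fixations, image_center):
        """Fixation count, mean duration (ms), and mean distance to the image centre (px)."""
        xy = np.array([(x, y) for x, y, _ in fixations], dtype=float)
        durations = np.array([d for _, _, d in fixations], dtype=float)
        dist = np.linalg.norm(xy - np.asarray(image_center, dtype=float), axis=1)
        return len(fixations), durations.mean(), dist.mean()

    fixations = [(300, 260, 450), (330, 280, 520), (260, 300, 610)]   # hypothetical trial
    print(viewing_markers(fixations, image_center=(320, 280)))
    ```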

  10. Eye-movement strategies in developmental prosopagnosia and "super" face recognition.

    Science.gov (United States)

    Bobak, Anna K; Parris, Benjamin A; Gregory, Nicola J; Bennetts, Rachel J; Bate, Sarah

    2017-02-01

    Developmental prosopagnosia (DP) is a cognitive condition characterized by a severe deficit in face recognition. Few investigations have examined whether impairments at the early stages of processing may underpin the condition, and it is also unknown whether DP is simply the "bottom end" of the typical face-processing spectrum. To address these issues, we monitored the eye-movements of DPs, typical perceivers, and "super recognizers" (SRs) while they viewed a set of static images displaying people engaged in naturalistic social scenarios. Three key findings emerged: (a) Individuals with more severe prosopagnosia spent less time examining the internal facial region, (b) as observed in acquired prosopagnosia, some DPs spent less time examining the eyes and more time examining the mouth than controls, and (c) SRs spent more time examining the nose-a measure that also correlated with face recognition ability in controls. These findings support previous suggestions that DP is a heterogeneous condition, but suggest that at least the most severe cases represent a group of individuals that qualitatively differ from the typical population. While SRs seem to merely be those at the "top end" of normal, this work identifies the nose as a critical region for successful face recognition.

  11. Tracking the Eye Movement of Four Years Old Children Learning Chinese Words

    Science.gov (United States)

    Lin, Dan; Chen, Guangyao; Liu, Yingyi; Liu, Jiaxin; Pan, Jue; Mo, Lei

    2018-01-01

    Storybook reading is the major source of literacy exposure for beginning readers. The present study tracked 4-year-old Chinese children's eye movements while they were reading simulated storybook pages. Their eye-movement patterns were examined in relation to their word learning gains. The same reading list, consisting of 20 two-character Chinese…

  12. Integrating Click-Through and Eye-Tracking Logs for Decision-Making Process Mining

    Directory of Open Access Journals (Sweden)

    Razvan PETRUSEL

    2014-01-01

    Full Text Available In current software every click of the users is logged, therefore a wealth of click-through information exists. Besides, recent technologies have made eye-tracking affordable and an alternative to other human-computer interaction means (e.g., mouse, touchscreens). A big challenge is to make sense of all this data and convert it into useful information. This paper introduces a possible solution placed in the context of decision-making processes. We show how the decision maker's activity can be traced using two means: mouse tracing (i.e., clicks) and eye-tracking (i.e., eye fixations). Then, we discuss a mining approach, based on the log, which extracts a Decision Data Model (DDM). We use the DDM to determine, post-hoc, which decision strategy was employed. The paper concludes with a validation based on a controlled experiment.

  13. Evaluation of head-free eye tracking as an input device for air traffic control.

    Science.gov (United States)

    Alonso, Roland; Causse, Mickaël; Vachon, François; Parise, Robert; Dehais, Frédéric; Terrier, Patrice

    2013-01-01

    The purpose of this study was to investigate the possibility of integrating a free head motion eye-tracking system as an input device in air traffic control (ATC) activity. Sixteen participants used an eye tracker to select targets displayed on a screen as quickly and accurately as possible. We assessed the impact of the presence of visual feedback about gaze position and the method of target selection on selection performance under different difficulty levels induced by variations in target size and target-to-target separation. We tend to consider the combined use of gaze dwell-time selection and continuous eye-gaze feedback to be the best condition, as it fits naturally with gaze displacement over the ATC display and frees the hands of the controller, despite a small cost in terms of selection speed. In addition, target size had a greater impact on accuracy and selection time than target distance. These findings provide guidelines on possible further implementation of eye tracking in ATC everyday activity. We investigated the possibility of integrating a free head motion eye-tracking system as an input device in air traffic control (ATC). We found that the combined use of gaze dwell-time selection and continuous eye-gaze feedback allowed the best performance and that target size had a greater impact on performance than target distance.
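
    Gaze dwell-time selection triggers a target once gaze has remained inside it continuously for a fixed duration. A minimal sketch of that logic over a stream of gaze samples, with hypothetical target boxes, sampling interval, and dwell threshold:

    ```python
    def dwell_select(gaze_samples, targets, dwell_ms=600, sample_ms=10):
        """Return the first target selected by continuous dwell, or None.
        gaze_samples: iterable of (x, y); targets: dict name -> (x0, y0, x1, y1)."""
        current, held = None, 0
        for x, y in gaze_samples:
            hit = next((n for n, (x0, y0, x1, y1) in targets.items()
                        if x0 <= x <= x1 and y0 <= y <= y1), None)
            if hit is not None and hit == current:
                held += sample_ms           # gaze is still on the same target
            elif hit is not None:
                held = sample_ms            # gaze just entered a (new) target
            else:
                held = 0                    # gaze is on the background
            current = hit
            if current is not None and held >= dwell_ms:
                return current
        return None

    targets = {"AF123": (100, 100, 140, 130), "BA456": (300, 200, 340, 230)}  # hypothetical labels
    samples = [(120, 115)] * 70 + [(320, 215)] * 10                            # 700 ms then 100 ms
    print(dwell_select(samples, targets))  # -> "AF123"
    ```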

  14. Practical Use of the Eye Camera in Pedagogical Research (Processing of Selected Data Using the Eye Tracking Method

    Directory of Open Access Journals (Sweden)

    Škrabánková Jana

    2016-06-01

    Full Text Available The paper deals with the author’s pilot experiments using the eye tracking method to examine primary school children. This method makes it possible to gain a large amount of research data based on monitoring the tested people’s eye movements. The paper processes selected research data from the examination of four gifted students in the context of their mathematical and logical intelligence.

  15. A MATLAB-based eye tracking control system using non-invasive helmet head restraint in the macaque.

    Science.gov (United States)

    De Luna, Paolo; Mohamed Mustafar, Mohamed Faiz Bin; Rainer, Gregor

    2014-09-30

    Tracking eye position is vital for behavioral and neurophysiological investigations in systems and cognitive neuroscience. Infrared camera systems, which are now available, can be used for eye tracking without the need to surgically implant magnetic search coils. These systems are generally employed using rigid head fixation in monkeys, which maintains the eye in a constant position and facilitates eye tracking. We investigate the use of non-rigid head fixation using a helmet that constrains only general head orientation and allows some freedom of movement. We present a MATLAB software solution to gather and process eye position data, present visual stimuli, interact with various devices, provide experimenter feedback and store data for offline analysis. Our software solution achieves excellent timing performance due to the use of data streaming, instead of the traditionally employed data storage mode for processing analog eye position data. We present behavioral data from two monkeys, demonstrating that adequate performance levels can be achieved on a simple fixation paradigm and show how performance depends on parameters such as fixation window size. Our findings suggest that non-rigid head restraint can be employed for behavioral training and testing on a variety of gaze-dependent visual paradigms, reducing the need for rigid head restraint systems for some applications. While developed for the macaque monkey, our system can of course work equally well for applications in human eye tracking where head constraint is undesirable.

  16. Eye-Tracking as a Tool to Evaluate Functional Ability in Everyday Tasks in Glaucoma

    Directory of Open Access Journals (Sweden)

    Enkelejda Kasneci

    2017-01-01

    Full Text Available To date, few studies have investigated the eye movement patterns of individuals with glaucoma while they undertake everyday tasks in real-world settings. While some of these studies have reported possible compensatory gaze patterns in those with glaucoma who demonstrated good task performance despite their visual field loss, little is known about the complex interaction between field loss and visual scanning strategies and the impact on task performance and, consequently, on quality of life. We review existing approaches that have quantified the effect of glaucomatous visual field defects on the ability to undertake everyday activities through the use of eye movement analysis. Furthermore, we discuss current developments in eye-tracking technology and the potential for combining eye-tracking with virtual reality and advanced analytical approaches. Recent technological developments suggest that systems based on eye-tracking have the potential to assist individuals with glaucomatous loss to maintain or even improve their performance on everyday tasks and hence enhance their long-term quality of life. We discuss novel approaches for studying the visual search behavior of individuals with glaucoma that have the potential to assist individuals with glaucoma, through the use of personalized programs that take into consideration the individual characteristics of their remaining visual field and visual search behavior.

  17. Reading beyond the glance: eye tracking in neurosciences.

    Science.gov (United States)

    Popa, Livia; Selejan, Ovidiu; Scott, Allan; Mureşanu, Dafin F; Balea, Maria; Rafila, Alexandru

    2015-05-01

    From an interdisciplinary perspective, the neurosciences (NSs) represent the junction of many fields (biology, chemistry, medicine, computer science, and psychology) and aim to explore the structural and functional aspects of the nervous system. Among modern neurophysiological methods that "measure" different processes of the human brain in response to salient stimuli, a special place belongs to eye tracking (ET). By detecting eye position, gaze direction, the sequence of eye movements and visual adaptation during cognitive activities, ET is an effective tool for experimental psychology and neurological research. It provides a quantitative and qualitative analysis of gaze, which is very useful in understanding choice behavior and perceptual decision making. In the high-tech era, ET has several applications related to the interaction between humans and computers. Herein, ET is used to evaluate the spatial orienting of attention, performance in visual tasks, reactions to information on websites, customer response to advertising, and the emotional and cognitive impact of various stimuli on the brain.

  18. Application of head-mounted devices with eye-tracking in virtual reality therapy

    Directory of Open Access Journals (Sweden)

    Lutz Otto Hans-Martin

    2017-03-01

    Full Text Available Using eye-tracking to assess visual attention in head-mounted devices (HMD opens up many possibilities for virtual reality (VR-based therapy. Existing therapy concepts where attention plays a major role can be transferred to VR. Furthermore, they can be expanded to a precise real-time attention assessment, which can serve as a foundation for new therapy approaches. Utilizing HMDs and eye-tracking in a clinical environment is challenging because of hygiene issues and requirements of patients with heterogeneous cognitive and motor impairments. In this paper, we provide an overview of those challenges, discuss possible solutions and present preliminary results of a study with patients.

  19. Instructional Suggestions Supporting Science Learning in Digital Environments Based on a Review of Eye-Tracking Studies

    Science.gov (United States)

    Yang, Fang-Ying; Tsai, Meng-Jung; Chiou, Guo-Li; Lee, Silvia Wen-Yu; Chang, Cheng-Chieh; Chen, Li-Ling

    2018-01-01

    The main purpose of this study was to provide instructional suggestions for supporting science learning in digital environments based on a review of eye tracking studies in e-learning related areas. Thirty-three eye-tracking studies from 2005 to 2014 were selected from the Social Science Citation Index (SSCI) database for review. Through a…

  20. Eye Tracking Based Control System for Natural Human-Computer Interaction

    Directory of Open Access Journals (Sweden)

    Xuebai Zhang

    2017-01-01

    Full Text Available Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disability. In order to improve the reliability, mobility, and usability of eye tracking technique in user-computer dialogue, a novel eye control system with integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode by only using user’s eye. The usage flow of the proposed system is designed to perfectly follow human natural habits. Additionally, a magnifier module is proposed to allow the accurate operation. In the experiment, two interactive tasks with different difficulty (searching article and browsing multimedia web) were done to compare the proposed eye control tool with an existing system. The Technology Acceptance Model (TAM) measures are used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design.

  1. Eye Tracking Based Control System for Natural Human-Computer Interaction.

    Science.gov (United States)

    Zhang, Xuebai; Liu, Xiaolong; Yuan, Shyan-Ming; Lin, Shu-Fan

    2017-01-01

    Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disability. In order to improve the reliability, mobility, and usability of eye tracking technique in user-computer dialogue, a novel eye control system with integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode by only using user's eye. The usage flow of the proposed system is designed to perfectly follow human natural habits. Additionally, a magnifier module is proposed to allow the accurate operation. In the experiment, two interactive tasks with different difficulty (searching article and browsing multimedia web) were done to compare the proposed eye control tool with an existing system. The Technology Acceptance Model (TAM) measures are used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design.

  2. Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder

    Science.gov (United States)

    McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

    2011-01-01

    This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns.…

  3. Oxytocin Reduces Face Processing Time but Leaves Recognition Accuracy and Eye-Gaze Unaffected.

    Science.gov (United States)

    Hubble, Kelly; Daughters, Katie; Manstead, Antony S R; Rees, Aled; Thapar, Anita; van Goozen, Stephanie H M

    2017-01-01

    Previous studies have found that oxytocin (OXT) can improve the recognition of emotional facial expressions; it has been proposed that this effect is mediated by an increase in attention to the eye-region of faces. Nevertheless, evidence in support of this claim is inconsistent, and few studies have directly tested the effect of oxytocin on emotion recognition via altered eye-gaze. Methods: In a double-blind, within-subjects, randomized control experiment, 40 healthy male participants received 24 IU intranasal OXT and placebo in two identical experimental sessions separated by a 2-week interval. Visual attention to the eye-region was assessed on both occasions while participants completed a static facial emotion recognition task using medium intensity facial expressions. Although OXT had no effect on emotion recognition accuracy, recognition performance was improved because face processing was faster across emotions under the influence of OXT. This effect was marginally significant. OXT did not increase attention to the eye-region of faces, and gaze behaviour was not related to recognition accuracy or face processing time. These findings suggest that OXT-induced enhanced facial emotion recognition is not necessarily mediated by an increase in attention to the eye-region of faces, as previously assumed. We discuss several methodological issues which may explain discrepant findings and suggest the effect of OXT on visual attention may differ depending on task requirements. (JINS, 2017, 23, 23-33).

  4. Face identification with frequency domain matched filtering in mobile environments

    Science.gov (United States)

    Lee, Dong-Su; Woo, Yong-Hyun; Yeom, Seokwon; Kim, Shin-Hwan

    2012-06-01

    Face identification at a distance is very challenging since captured images are often degraded by blur and noise. Furthermore, the computational resources and memory are often limited in the mobile environments. Thus, it is very challenging to develop a real-time face identification system on the mobile device. This paper discusses face identification based on frequency domain matched filtering in the mobile environments. Face identification is performed by the linear or phase-only matched filter and sequential verification stages. The candidate window regions are decided by the major peaks of the linear or phase-only matched filtering outputs. The sequential stages comprise a skin-color test and an edge mask filtering test, which verify color and shape information of the candidate regions in order to remove false alarms. All algorithms are built on the mobile device using Android platform. The preliminary results show that face identification of East Asian people can be performed successfully in the mobile environments.
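
    A phase-only matched filter keeps only the phase of the reference face's spectrum, which sharpens the correlation peak that marks a candidate face location. A minimal NumPy sketch of phase-only correlation, with synthetic arrays standing in for the reference face and the captured frame:

    ```python
    import numpy as np

    def phase_only_correlation(scene, reference):
        """Correlate the scene with a phase-only filter built from the reference patch.
        Both inputs must already have the same shape (pad the reference if needed)."""
        S = np.fft.fft2(scene)
        R = np.fft.fft2(reference)
        pof = np.conj(R) / (np.abs(R) + 1e-12)    # phase-only filter
        return np.real(np.fft.ifft2(S * pof))

    rng = np.random.default_rng(1)
    reference = rng.random((32, 32))              # stand-in "face" template
    scene = rng.random((128, 128)) * 0.1
    scene[40:72, 60:92] += reference              # embed the template at row 40, column 60
    ref_padded = np.zeros_like(scene)
    ref_padded[:32, :32] = reference

    corr = phase_only_correlation(scene, ref_padded)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    print(peak)                                   # ≈ (40, 60): candidate face location
    ```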

  5. WEB ANALYTICS COMBINED WITH EYE TRACKING FOR SUCCESSFUL USER EXPERIENCE DESIGN: A CASE STUDY

    OpenAIRE

    Magdalena BORYS; Monika CZWÓRNÓG; Tomasz RATAJCZYK

    2016-01-01

    The authors propose a new approach for the mobile user experience design process by means of web analytics and eye-tracking. The proposed method was applied to design the LUT mobile website. In the method, to create the mobile website design, data of various users and their behaviour were gathered and analysed using the web analytics tool. Next, based on the findings from web analytics, the mobile prototype for the website was created and validated in eye-tracking usability testing. The analy...

  6. Extracting information of fixational eye movements through pupil tracking

    Science.gov (United States)

    Xiao, JiangWei; Qiu, Jian; Luo, Kaiqin; Peng, Li; Han, Peng

    2018-01-01

    Human eyes are never completely static, even when fixating a stationary point. These irregular, small movements, which consist of micro-tremors, micro-saccades, and drifts, prevent the fading of the images that enter our eyes. The importance of researching fixational eye movements has been experimentally demonstrated recently. However, the characteristics of fixational eye movements and their roles in visual processing have not been explained clearly, because these signals have so far been difficult to extract completely. In this paper, we developed a new eye movement detection device with a high-speed camera. This device includes a beam-splitter mirror, an infrared light source, and a high-speed digital video camera with a frame rate of 200 Hz. To avoid the influence of head shaking, we made the device wearable by fixing the camera on a safety helmet. Using this device, pupil-tracking experiments were conducted. By localizing the pupil center and performing spectrum analysis, the envelope frequency spectra of micro-saccades, micro-tremors, and drifts are clearly revealed. The experimental results show that the device is feasible and effective and can be applied in further characteristic analysis.
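
    A minimal sketch of the two processing steps named above (pupil-center localization and spectrum analysis) follows. The dark-pixel threshold and the synthetic 90 Hz tremor-like component are illustrative assumptions; only the 200 Hz frame rate comes from the abstract.

        # Minimal sketch: locate a dark pupil by thresholding, then inspect the
        # frequency content of the resulting pupil-position trace.
        import numpy as np

        def pupil_center(frame, dark_threshold=50):
            """Centroid of pixels darker than the threshold (the pupil in infrared images)."""
            ys, xs = np.nonzero(frame < dark_threshold)
            if len(xs) == 0:
                return None
            return xs.mean(), ys.mean()

        def position_spectrum(trace, fs=200.0):
            """One-sided amplitude spectrum of a pupil-position trace sampled at fs Hz."""
            trace = np.asarray(trace, dtype=float) - np.mean(trace)
            amplitude = np.abs(np.fft.rfft(trace)) / len(trace)
            freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
            return freqs, amplitude

        if __name__ == "__main__":
            frame = np.full((10, 10), 200, dtype=np.uint8)
            frame[4:7, 3:6] = 20                      # a small dark "pupil"
            print(pupil_center(frame))                # approximately (4.0, 5.0)

            fs = 200.0
            t = np.arange(0, 2, 1 / fs)
            rng = np.random.default_rng(1)
            x = 0.02 * np.sin(2 * np.pi * 90 * t) + 0.002 * rng.standard_normal(len(t))
            freqs, amp = position_spectrum(x, fs)
            print(freqs[np.argmax(amp[1:]) + 1])      # dominant component near 90 Hz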

  7. Intuitive Face Judgments Rely on Holistic Eye Movement Pattern

    Directory of Open Access Journals (Sweden)

    Laura F. Mega

    2017-06-01

    Full Text Available Non-verbal signals such as facial expressions are of paramount importance for social encounters. Their perception predominantly occurs without conscious awareness and is effortlessly integrated into social interactions. In other words, face perception is intuitive. Contrary to classical intuition tasks, this work investigates intuitive processes in the realm of every-day type social judgments. Two differently instructed groups of participants judged the authenticity of emotional facial expressions, while their eye movements were recorded: an ‘intuitive group,’ instructed to rely on their “gut feeling” for the authenticity judgments, and a ‘deliberative group,’ instructed to make their judgments after careful analysis of the face. Pixel-wise statistical maps of the resulting eye movements revealed a differential viewing pattern, wherein the intuitive judgments relied on fewer, longer and more centrally located fixations. These markers have been associated with a global/holistic viewing strategy. The holistic pattern of intuitive face judgments is in line with evidence showing that intuition is related to processing the “gestalt” of an object, rather than focusing on details. Our work thereby provides further evidence that intuitive processes are characterized by holistic perception, in an understudied and real world domain of intuition research.

  8. Applying the decision moving window to risky choice: Comparison of eye-tracking and mousetracing methods

    Directory of Open Access Journals (Sweden)

    Ana M. Franco-Watkins

    2011-12-01

    Full Text Available Currently, a disparity exists between the process-level models decision researchers use to describe and predict decision behavior and the methods implemented and metrics collected to test these models. The current work seeks to remedy this disparity by combining the advantages of work in decision research (mouse-tracing paradigms with contingent information display and cognitive psychology (eye-tracking paradigms from reading and scene perception. In particular, we introduce a new decision moving-window paradigm that presents stimulus information contingent on eye fixations. We provide data from the first application of this method to risky decision making, and show how it compares to basic eye-tracking and mouse-tracing methods. We also enumerate the practical, theoretical, and analytic advantages this method offers above and beyond both mouse-tracing with occlusion and basic eye tracking of information without occlusion. We include the use of new metrics that offer more precision than those typically calculated on mouse-tracing data as well as those not possible or feasible within the mouse-tracing paradigm.
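
    The gaze-contingent idea behind the decision moving window can be sketched as follows: the content of an information cell is readable only while the current fixation lies inside it. The cell names, layout, and gamble values are invented for illustration and are not the authors' stimuli.

        # Minimal sketch of a decision moving window: a cell's contents are visible
        # only while the current fixation falls inside that cell.
        from dataclasses import dataclass

        @dataclass
        class Cell:
            name: str
            x0: float
            y0: float
            x1: float
            y1: float
            content: str

            def contains(self, x, y):
                return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

        def render(cells, fixation):
            """Return what the participant sees: only the fixated cell is uncovered."""
            fx, fy = fixation
            return {c.name: (c.content if c.contains(fx, fy) else "###") for c in cells}

        if __name__ == "__main__":
            grid = [
                Cell("gamble_A_prob", 0, 0, 100, 100, "p = .20"),
                Cell("gamble_A_payoff", 100, 0, 200, 100, "$45"),
                Cell("gamble_B_prob", 0, 100, 100, 200, "p = .80"),
                Cell("gamble_B_payoff", 100, 100, 200, 200, "$10"),
            ]
            print(render(grid, fixation=(150, 50)))   # only gamble_A_payoff is readable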

  9. Binocular eye movement control and motion perception: what is being tracked?

    Science.gov (United States)

    van der Steen, Johannes; Dits, Joyce

    2012-10-19

    We investigated under what conditions humans can make independent slow-phase eye movements. The ability to move the two eyes independently is generally attributed to a few specialized lateral-eyed animal species, such as chameleons. In our study, we showed that humans can also move the eyes in different directions. To maintain binocular retinal correspondence, independent slow-phase movements of each eye are produced. We used the scleral search coil method to measure binocular eye movements in response to dichoptically viewed visual stimuli oscillating in orthogonal directions. Correlated stimuli led to orthogonal slow eye movements, while the binocularly perceived motion was the vector sum of the motion presented to each eye. The importance of binocular fusion for the independence of the movements of the two eyes was investigated with anti-correlated stimuli. The global motion pattern of anti-correlated dichoptic stimuli was perceived as an oblique oscillatory motion and also resulted in a conjugate oblique motion of the eyes. We propose that the ability to make independent slow-phase eye movements in humans is used to maintain binocular retinal correspondence. Eye-of-origin and binocular information are used during the processing of binocular visual information, and it is decided at an early stage whether binocular or monocular motion information is used and whether independent slow-phase eye movements of each eye are produced during binocular tracking.

  10. Back to Basic: Do Children with Autism Spontaneously Look at Screen Displaying a Face or an Object?

    Directory of Open Access Journals (Sweden)

    Marie Guimard-Brunault

    2013-01-01

    Full Text Available Eye-tracking studies on the exploration of faces and objects in autism have provided important knowledge, but only under a constrained condition (chin rest), with total time looking at the screen not reported and without studying potential differences between subjects with autism spectrum disorder (ASD) and controls in spontaneous visual attention toward a screen presenting these stimuli. This study used eye tracking to compare spontaneous visual attention to a screen displaying a face or an object between children with autism and controls in an unconstrained condition and to investigate the relationship with clinical characteristics in the autism group. Time exploring the screen was measured during passive viewing of static images of faces or objects. Autistic behaviors were assessed with the CARS and the BSE-R in the autism group. In the autism group, time exploring the face screen and time exploring the object screen were lower than in controls and were not correlated with the degree of distractibility. There was no interaction between group and type of image on time spent exploring the screen. Only time exploring the face screen was correlated with autism severity and gaze impairment. The results highlight particularities of spontaneous visual attention toward a screen displaying faces or objects in autism, which should be taken into account in future eye-tracking studies on face exploration.

  11. Social and attention-to-detail subclusters of autistic traits differentially predict looking at eyes and face identity recognition ability.

    Science.gov (United States)

    Davis, Joshua; McKone, Elinor; Zirnsak, Marc; Moore, Tirin; O'Kearney, Richard; Apthorp, Deborah; Palermo, Romina

    2017-02-01

    This study distinguished between different subclusters of autistic traits in the general population and examined the relationships between these subclusters, looking at the eyes of faces, and the ability to recognize facial identity. Using the Autism Spectrum Quotient (AQ) measure in a university-recruited sample, we separate the social aspects of autistic traits (i.e., those related to communication and social interaction; AQ-Social) from the non-social aspects, particularly attention-to-detail (AQ-Attention). We provide the first evidence that these social and non-social aspects are associated differentially with looking at eyes: While AQ-Social showed the commonly assumed tendency towards reduced looking at eyes, AQ-Attention was associated with increased looking at eyes. We also report that higher attention-to-detail (AQ-Attention) was then indirectly related to improved face recognition, mediated by increased number of fixations to the eyes during face learning. Higher levels of socially relevant autistic traits (AQ-Social) trended in the opposite direction towards being related to poorer face recognition (significantly so in females on the Cambridge Face Memory Test). There was no evidence of any mediated relationship between AQ-Social and face recognition via reduced looking at the eyes. These different effects of AQ-Attention and AQ-Social suggest face-processing studies in Autism Spectrum Disorder might similarly benefit from considering symptom subclusters. Additionally, concerning mechanisms of face recognition, our results support the view that more looking at eyes predicts better face memory. © 2016 The British Psychological Society.

  12. Expertise differences in air traffic control: An eye-tracking study

    NARCIS (Netherlands)

    Van Meeuwen, Ludo; Jarodzka, Halszka; Brand-Gruwel, Saskia; Kirschner, Paul A.; De Bock, Jeano; Van Merriënboer, Jeroen

    2012-01-01

    Van Meeuwen, L. W., Jarodzka, H., Brand-Gruwel, S., Kirschner, P. A., De Bock, J. J. P. R., & Van Merriënboer, J. J. G. (2012, April). Expertise differences in air traffic control: An eye-tracking study. Paper presented at the American Educational Research Association Annual Meeting 2012, Vancouver, Canada.

  13. The Added Value of Eye-tracking in Diagnosing Dyscalculia: A Case Study

    Directory of Open Access Journals (Sweden)

    Sietske eVan Viersen

    2013-10-01

    Full Text Available The present study compared eye movements and performance of a nine-year-old girl with Developmental Dyscalculia (DD) on a series of number line tasks to those of a group of typically developing (TD) children (n = 10), in order to answer the question whether eye-tracking data from number line estimation tasks can be a useful tool to discriminate between TD children and children with a number processing deficit. Quantitative results indicated that the child with dyscalculia performed worse on all symbolic number line tasks compared to the control group, indicated by a low linear fit (R²) and a low accuracy measured by mean percent absolute error. In contrast to the control group, her magnitude representations seemed to be better represented by a logarithmic than a linear fit. Furthermore, qualitative analyses on the data of the child with dyscalculia revealed more unidentifiable fixation patterns in the processing of multi-digit numbers and more dysfunctional estimation strategy use in one third of the estimation trials as opposed to approximately 10% in the control group. In line with her dyscalculia diagnosis, these results confirm the difficulties with spatially representing and manipulating numerosities on a number line, resulting in inflexible and inadequate estimation or processing strategies. It can be concluded from this case study that eye-tracking data can be used to discern different number processing and estimation strategies in TD children and children with a number processing deficit. Hence, eye-tracking data in combination with number line estimation tasks might be a valuable and promising addition to current diagnostic measures.

  14. The added value of eye-tracking in diagnosing dyscalculia: a case study.

    Science.gov (United States)

    van Viersen, Sietske; Slot, Esther M; Kroesbergen, Evelyn H; Van't Noordende, Jaccoline E; Leseman, Paul P M

    2013-01-01

    The present study compared eye movements and performance of a 9-year-old girl with Developmental Dyscalculia (DD) on a series of number line tasks to those of a group of typically developing (TD) children (n = 10), in order to answer the question whether eye-tracking data from number line estimation tasks can be a useful tool to discriminate between TD children and children with a number processing deficit. Quantitative results indicated that the child with dyscalculia performed worse on all symbolic number line tasks compared to the control group, indicated by a low linear fit (R²) and a low accuracy measured by mean percent absolute error. In contrast to the control group, her magnitude representations seemed to be better represented by a logarithmic than a linear fit. Furthermore, qualitative analyses on the data of the child with dyscalculia revealed more unidentifiable fixation patterns in the processing of multi-digit numbers and more dysfunctional estimation strategy use in one third of the estimation trials as opposed to ~10% in the control group. In line with her dyscalculia diagnosis, these results confirm the difficulties with spatially representing and manipulating numerosities on a number line, resulting in inflexible and inadequate estimation or processing strategies. It can be concluded from this case study that eye-tracking data can be used to discern different number processing and estimation strategies in TD children and children with a number processing deficit. Hence, eye-tracking data in combination with number line estimation tasks might be a valuable and promising addition to current diagnostic measures.
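
    A small sketch of the quantitative measures mentioned in both versions of this record (linear fit R², a logarithmic alternative, and mean percent absolute error) is given below. The target numbers and the child's estimates are invented, and the simple polyfit-based fitting is an illustrative choice rather than the authors' analysis.

        # Minimal sketch of the number-line measures: linear R^2, logarithmic R^2,
        # and mean percent absolute error (PAE). All data are invented.
        import numpy as np

        def r_squared(y, y_hat):
            ss_res = np.sum((y - y_hat) ** 2)
            ss_tot = np.sum((y - np.mean(y)) ** 2)
            return 1 - ss_res / ss_tot

        def fit_linear(targets, estimates):
            slope, intercept = np.polyfit(targets, estimates, 1)
            return r_squared(estimates, slope * targets + intercept)

        def fit_logarithmic(targets, estimates):
            slope, intercept = np.polyfit(np.log(targets), estimates, 1)
            return r_squared(estimates, slope * np.log(targets) + intercept)

        def mean_pae(targets, estimates, line_max=100):
            """Mean percent absolute error on a 0..line_max number line."""
            return np.mean(np.abs(np.asarray(estimates) - np.asarray(targets))) / line_max * 100

        if __name__ == "__main__":
            targets = np.array([3, 7, 18, 25, 42, 67, 71, 86])
            # Hypothetical estimates that compress large numbers (a log-like pattern).
            estimates = np.array([8, 15, 34, 41, 58, 70, 73, 80])
            print("linear R^2:", round(fit_linear(targets, estimates), 2))
            print("log R^2:   ", round(fit_logarithmic(targets, estimates), 2))
            print("PAE (%):   ", round(mean_pae(targets, estimates), 1))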

  15. Designing Visual Decision Making Support with the Help of Eye-tracking

    DEFF Research Database (Denmark)

    Weber, Barbara; Gulden, Jens; Burattin, Andrea

    2017-01-01

    Data visualizations are helpful tools to cognitively access large amounts of data and make complex relationships in data understandable. This paper shows how results from neuro-physiological measurements, more specifically eye-tracking, can support justified design decisions about improving...

  16. Tracking without perceiving: a dissociation between eye movements and motion perception.

    Science.gov (United States)

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-02-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.

  17. A Real-time Face/Hand Tracking Method for Chinese Sign Language Recognition

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper introduces a new Chinese Sign Language recognition (CSLR) system and a method for real-time face and hand tracking applied in the system. In the method, an improved agent algorithm is used to extract the face and hand regions and track them. A Kalman filter is introduced to forecast the position and size of the search rectangle, and a self-adapting target-color scheme is designed to counteract the effect of illumination.
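
    The prediction step can be illustrated with a minimal constant-velocity Kalman filter that forecasts where to centre the next search rectangle. The noise settings and the measurement sequence are illustrative assumptions rather than values from the paper.

        # Minimal constant-velocity Kalman filter predicting the next face/hand position,
        # which can be used to centre the next search rectangle. Noise values are illustrative.
        import numpy as np

        class ConstantVelocityKalman:
            def __init__(self, dt=1.0, process_var=1.0, meas_var=25.0):
                self.F = np.array([[1, 0, dt, 0],
                                   [0, 1, 0, dt],
                                   [0, 0, 1, 0],
                                   [0, 0, 0, 1]], dtype=float)     # state transition
                self.H = np.array([[1, 0, 0, 0],
                                   [0, 1, 0, 0]], dtype=float)     # we only measure (x, y)
                self.Q = process_var * np.eye(4)
                self.R = meas_var * np.eye(2)
                self.x = np.zeros(4)                                # state: [x, y, vx, vy]
                self.P = np.eye(4) * 1e3

            def predict(self):
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q
                return self.x[:2]                                   # predicted (x, y)

            def update(self, z):
                y = np.asarray(z, dtype=float) - self.H @ self.x
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + K @ y
                self.P = (np.eye(4) - K @ self.H) @ self.P

        if __name__ == "__main__":
            kf = ConstantVelocityKalman()
            for frame, meas in enumerate([(10, 10), (14, 12), (18, 14), (22, 16)]):
                centre = kf.predict()          # where to place the next search rectangle
                kf.update(meas)                # correct with the detected hand position
                print(frame, np.round(centre, 1))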

  18. How are learning strategies reflected in the eyes? Combining results from self-reports and eye-tracking.

    Science.gov (United States)

    Catrysse, Leen; Gijbels, David; Donche, Vincent; De Maeyer, Sven; Lesterhuis, Marije; Van den Bossche, Piet

    2018-03-01

    Up until now, empirical studies in the Student Approaches to Learning field have mainly been focused on the use of self-report instruments, such as interviews and questionnaires, to uncover differences in students' general preferences towards learning strategies, but have focused less on the use of task-specific and online measures. This study aimed at extending current research on students' learning strategies by combining general and task-specific measurements of students' learning strategies using both offline and online measures. We want to clarify how students process learning contents and to what extent this is related to their self-report of learning strategies. Twenty students with different generic learning profiles (according to self-report questionnaires) read an expository text, while their eye movements were registered to answer questions on the content afterwards. Eye-tracking data were analysed with generalized linear mixed-effects models. The results indicate that students with an all-high profile, combining both deep and surface learning strategies, spend more time on rereading the text than students with an all-low profile, scoring low on both learning strategies. This study showed that we can use eye-tracking to distinguish very strategic students, characterized using cognitive processing and regulation strategies, from low strategic students, characterized by a lack of cognitive and regulation strategies. These students processed the expository text according to how they self-reported. © 2017 The British Psychological Society.

  19. Eyes and ears: Using eye tracking and pupillometry to understand challenges to speech recognition.

    Science.gov (United States)

    Van Engen, Kristin J; McLaughlin, Drew J

    2018-05-04

    Although human speech recognition is often experienced as relatively effortless, a number of common challenges can render the task more difficult. Such challenges may originate in talkers (e.g., unfamiliar accents, varying speech styles), the environment (e.g. noise), or in listeners themselves (e.g., hearing loss, aging, different native language backgrounds). Each of these challenges can reduce the intelligibility of spoken language, but even when intelligibility remains high, they can place greater processing demands on listeners. Noisy conditions, for example, can lead to poorer recall for speech, even when it has been correctly understood. Speech intelligibility measures, memory tasks, and subjective reports of listener difficulty all provide critical information about the effects of such challenges on speech recognition. Eye tracking and pupillometry complement these methods by providing objective physiological measures of online cognitive processing during listening. Eye tracking records the moment-to-moment direction of listeners' visual attention, which is closely time-locked to unfolding speech signals, and pupillometry measures the moment-to-moment size of listeners' pupils, which dilate in response to increased cognitive load. In this paper, we review the uses of these two methods for studying challenges to speech recognition. Copyright © 2018. Published by Elsevier B.V.

  20. Visual attention to dynamic faces and objects is linked to face processing skills: A combined study of children with autism and controls

    Directory of Open Access Journals (Sweden)

    Julia eParish-Morris

    2013-04-01

    Full Text Available Although the extant literature on face recognition skills in Autism Spectrum Disorder (ASD) shows clear impairments compared to typically developing controls (TDC) at the group level, the distribution of scores within ASD is broad. In the present research, we take a dimensional approach and explore how differences in social attention during an eye tracking experiment correlate with face recognition skills across ASD and TDC. Emotional discrimination and person identity perception face processing skills were assessed using the Let’s Face It! Skills Battery in 110 children with and without ASD. Social attention was assessed using infrared eye gaze tracking during passive viewing of movies of facial expressions and objects displayed together on a computer screen. Face processing skills were significantly correlated with measures of attention to faces and with social skills as measured by the Social Communication Questionnaire. Consistent with prior research, children with ASD scored significantly lower on face processing skills tests but, unexpectedly, group differences in amount of attention to faces (versus objects) were not found. We discuss possible methodological contributions to this null finding. We also highlight the importance of a dimensional approach for understanding the developmental origins of reduced face perception skills, and emphasize the need for longitudinal research to truly understand how social motivation and social attention influence the development of social perceptual skills.

  1. Structural functional associations of the orbit in thyroid eye disease: Kalman filters to track extraocular rectus muscles

    Science.gov (United States)

    Chaganti, Shikha; Nelson, Katrina; Mundy, Kevin; Luo, Yifu; Harrigan, Robert L.; Damon, Steve; Fabbri, Daniel; Mawn, Louise; Landman, Bennett

    2016-03-01

    Pathologies of the optic nerve and orbit impact millions of Americans and quantitative assessment of the orbital structures on 3-D imaging would provide objective markers to enhance diagnostic accuracy, improve timely intervention, and eventually preserve visual function. Recent studies have shown that the multi-atlas methodology is suitable for identifying orbital structures, but challenges arise in the identification of the individual extraocular rectus muscles that control eye movement. This is increasingly problematic in diseased eyes, where these muscles often appear to fuse at the back of the orbit (at the resolution of clinical computed tomography imaging) due to inflammation or crowding. We propose the use of Kalman filters to track the muscles in three dimensions to refine multi-atlas segmentation and resolve ambiguity due to imaging resolution, noise, and artifacts. The purpose of our study is to investigate a method of automatically generating orbital metrics from CT imaging and demonstrate the utility of the approach by correlating structural metrics of the eye orbit with clinical data and visual function measures in subjects with thyroid eye disease. The pilot study demonstrates that automatically calculated orbital metrics are strongly correlated with several clinical characteristics. Moreover, it is shown that the superior, inferior, medial and lateral rectus muscles obtained using Kalman filters are each correlated with different categories of functional deficit. These findings serve as a foundation for further investigation into the use of CT imaging in the study, analysis and diagnosis of ocular diseases, specifically thyroid eye disease.

  2. How Visual Search Relates to Visual Diagnostic Performance: A Narrative Systematic Review of Eye-Tracking Research in Radiology

    Science.gov (United States)

    van der Gijp, A.; Ravesloot, C. J.; Jarodzka, H.; van der Schaaf, M. F.; van der Schaaf, I. C.; van Schaik, J. P.; ten Cate, Th. J.

    2017-01-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology…

  3. Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions

    Science.gov (United States)

    Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.

    2005-03-01

    The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity to focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects moving objects using change detection techniques. The detected objects are tracked over time and their positions are indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of an object detected in the image plane reference system is translated into coordinates in the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The novelty and strength of the work reside in the cooperative multi-sensor approach, in the high-resolution long-distance tracking, and in the automatic collection of biometric data such as a person's face clip for recognition purposes.
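
    The translation from an image-plane detection to the shared area-map reference frame can be sketched with a planar (ground-plane) homography estimated from a handful of calibration correspondences. The calibration points and the DLT-based estimation below are illustrative assumptions, not the calibration procedure used by the authors.

        # Minimal sketch of projecting a detection from a camera's image plane onto a
        # shared ground-plane map via a planar homography. Correspondences are invented.
        import numpy as np

        def fit_homography(image_pts, map_pts):
            """Estimate a 3x3 homography H from >= 4 image-to-map point correspondences (DLT)."""
            A = []
            for (u, v), (x, y) in zip(image_pts, map_pts):
                A.append([-u, -v, -1, 0, 0, 0, u * x, v * x, x])
                A.append([0, 0, 0, -u, -v, -1, u * y, v * y, y])
            _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
            return Vt[-1].reshape(3, 3)

        def to_map(H, u, v):
            """Apply the homography to an image point and dehomogenize."""
            p = H @ np.array([u, v, 1.0])
            return p[0] / p[2], p[1] / p[2]

        if __name__ == "__main__":
            image_pts = [(100, 400), (540, 410), (520, 120), (130, 110)]
            map_pts = [(0, 0), (10, 0), (10, 20), (0, 20)]        # metres on the area map
            H = fit_homography(image_pts, map_pts)
            print(np.round(to_map(H, 320, 260), 2))                # map coordinates of a detection near the image centre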

  4. The geometric preference subtype in ASD: identifying a consistent, early-emerging phenomenon through eye tracking.

    Science.gov (United States)

    Moore, Adrienne; Wozniak, Madeline; Yousef, Andrew; Barnes, Cindy Carter; Cha, Debra; Courchesne, Eric; Pierce, Karen

    2018-01-01

    Using either the original GeoPref test or the new Complex Social GeoPref test, eye tracking of toddlers can accurately identify a specific ASD "GeoPref" subtype with elevated symptom severity. The GeoPref tests are predictive of ASD at the individual subject level and thus potentially useful for various clinical applications (e.g., early identification, prognosis, or development of subtype-specific treatments).

  5. The role of eye fixation in memory enhancement under stress - An eye tracking study.

    Science.gov (United States)

    Herten, Nadja; Otto, Tobias; Wolf, Oliver T

    2017-04-01

    In a stressful situation, attention is shifted to potentially relevant stimuli. Recent studies from our laboratory revealed that stressed participants perform better in a recognition task involving objects from the stressful episode. In order to characterize the role of a stress-induced alteration in visual exploration, the present study investigated whether participants experiencing a laboratory social stress situation differ in their fixations from participants in a control group. Further, we aimed at shedding light on the relation of fixation behaviour to the obtained memory measures. We randomly assigned 32 male and 31 female participants to a control or a stress condition consisting of the Trier Social Stress Test (TSST), a public speaking paradigm causing social-evaluative threat. In an established 'friendly' control condition (f-TSST), participants talk to a friendly committee. During both conditions, the committee members used ten office items (central objects) while another ten objects were present without being used (peripheral objects). Participants wore eye-tracking glasses recording their fixations. On the next day, participants performed free recall and recognition tasks involving the objects present the day before. Stressed participants showed enhanced memory for central objects, accompanied by longer fixation times and larger numbers of fixations on these objects. In contrast, fixation towards the committee members' faces showed the reversed pattern; here, control participants exhibited longer fixations. Fixation indices and memory measures were, however, not correlated with each other. Psychosocial stress is associated with altered fixation behaviour. Longer fixation on objects related to the stressful situation may reflect enhanced encoding, whereas diminished face fixation suggests gaze avoidance of aversive, socially threatening stimuli. Modified visual exploration should be considered in future stress research, in particular when focussing on memory for a stressful episode.

  6. Multimodality with Eye tracking and Haptics: A New Horizon for Serious Games?

    Directory of Open Access Journals (Sweden)

    Shujie Deng

    2014-10-01

    Full Text Available The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games and we provide a brief discussion of why cognitive processes involved in learning and training are enhanced under immersive virtual environments. We initially outline studies that have used eye tracking and haptic feedback independently in serious games, and then review some innovative applications that have already combined eye tracking and haptic devices in order to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding in multimodal serious game production as well as exploring possible areas for new applications.

  7. Face Inversion Disproportionately Disrupts Sensitivity to Vertical over Horizontal Changes in Eye Position

    Science.gov (United States)

    Crookes, Kate; Hayward, William G.

    2012-01-01

    Presenting a face inverted (upside down) disrupts perceptual sensitivity to the spacing between the features. Recently, it has been shown that this disruption is greater for vertical than horizontal changes in eye position. One explanation for this effect proposed that inversion disrupts the processing of long-range (e.g., eye-to-mouth distance)…

  8. Using eye tracking technology to compare the effectiveness of malignant hyperthermia cognitive aid design.

    Science.gov (United States)

    King, Roderick; Hanhan, Jaber; Harrison, T Kyle; Kou, Alex; Howard, Steven K; Borg, Lindsay K; Shum, Cynthia; Udani, Ankeet D; Mariano, Edward R

    2018-05-15

    Malignant hyperthermia is a rare but potentially fatal complication of anesthesia, and several different cognitive aids designed to facilitate a timely and accurate response to this crisis currently exist. Eye tracking technology can measure voluntary and involuntary eye movements, gaze fixation within an area of interest, and speed of visual response and has been used to a limited extent in anesthesiology. With eye tracking technology, we compared the accessibility of five malignant hyperthermia cognitive aids by collecting gaze data from twelve volunteer participants. Recordings were reviewed and annotated to measure the time required for participants to locate objects on the cognitive aid to provide an answer; cumulative time to answer was the primary outcome. For the primary outcome, differences were detected between the cumulative time-to-answer survival curves; the fastest times were associated with the design that used typescript with minimal use of single color blocking.

  9. What interests them in the pictures?--differences in eye-tracking between rhesus monkeys and humans.

    Science.gov (United States)

    Hu, Ying-Zhou; Jiang, Hui-Hui; Liu, Ci-Rong; Wang, Jian-Hong; Yu, Cheng-Yang; Carlson, Synnöve; Yang, Shang-Chuan; Saarinen, Veli-Matti; Rizak, Joshua D; Tian, Xiao-Guang; Tan, Hen; Chen, Zhu-Yue; Ma, Yuan-Ye; Hu, Xin-Tian

    2013-10-01

    Studies estimating eye movements have demonstrated that non-human primates have fixation patterns similar to humans at the first sight of a picture. In the current study, three sets of pictures containing monkeys, humans or both were presented to rhesus monkeys and humans. The eye movements on these pictures by the two species were recorded using a Tobii eye-tracking system. We found that monkeys paid more attention to the head and body in pictures containing monkeys, whereas both monkeys and humans paid more attention to the head in pictures containing humans. The humans always concentrated on the eyes and head in all the pictures, indicating the social role of facial cues in society. Although humans paid more attention to the hands than monkeys, both monkeys and humans were interested in the hands and what was being done with them in the pictures. This may suggest the importance and necessity of hands for survival. Finally, monkeys scored lower in eye-tracking when fixating on the pictures, as if they were less interested in looking at the screen than humans. The locations of fixation in monkeys may provide insight into the role of eye movements in an evolutionary context.

  10. The eye-tracking computer device for communication in amyotrophic lateral sclerosis.

    Science.gov (United States)

    Spataro, R; Ciriacono, M; Manno, C; La Bella, V

    2014-07-01

    To explore the effectiveness of communication and the variables affecting eye-tracking computer system (ETCS) utilization in patients with late-stage amyotrophic lateral sclerosis (ALS), we performed a telephone survey of 30 patients with advanced non-demented ALS who had been provisioned an ETCS device. Median age at interview was 55 years (IQR = 48-62), with a relatively high education level (13 years, IQR = 8-13). A one-off interview was conducted and answers were later provided with the help of the caregiver. The interview included items about demographic and clinical variables affecting daily ETCS utilization. The median time of ETCS device possession was 15 months (IQR = 9-20). The actual daily utilization was 300 min (IQR = 100-720), mainly for communication with relatives/caregivers, internet surfing, e-mailing, and social networking. 23.3% of patients with ALS (n = 7) had a low daily ETCS utilization; the most commonly reported causes were eye-gaze tiredness and oculomotor dysfunction. The eye-tracking computer system is a valuable device for augmentative and alternative communication (AAC) in patients with ALS, and it can be operated with good performance. The development of oculomotor impairment may limit its functional use. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Anxiety symptoms and children's eye gaze during fear learning.

    Science.gov (United States)

    Michalska, Kalina J; Machlin, Laura; Moroney, Elizabeth; Lowet, Daniel S; Hettema, John M; Roberson-Nay, Roxann; Averbeck, Bruno B; Brotman, Melissa A; Nelson, Eric E; Leibenluft, Ellen; Pine, Daniel S

    2017-11-01

    The eye region of the face is particularly relevant for decoding threat-related signals, such as fear. However, it is unclear if gaze patterns to the eyes can be influenced by fear learning. Previous studies examining gaze patterns in adults find an association between anxiety and eye gaze avoidance, although no studies to date examine how associations between anxiety symptoms and eye-viewing patterns manifest in children. The current study examined the effects of learning and trait anxiety on eye gaze using a face-based fear conditioning task developed for use in children. Participants were 82 youth from a general population sample of twins (aged 9-13 years), exhibiting a range of anxiety symptoms. Participants underwent a fear conditioning paradigm where the conditioned stimuli (CS+) were two neutral faces, one of which was randomly selected to be paired with an aversive scream. Eye tracking, physiological, and subjective data were acquired. Children and parents reported their child's anxiety using the Screen for Child Anxiety Related Emotional Disorders. Conditioning influenced eye gaze patterns in that children looked longer and more frequently to the eye region of the CS+ than CS- face; this effect was present only during fear acquisition, not at baseline or extinction. Furthermore, consistent with past work in adults, anxiety symptoms were associated with eye gaze avoidance. Finally, gaze duration to the eye region mediated the effect of anxious traits on self-reported fear during acquisition. Anxiety symptoms in children relate to face-viewing strategies deployed in the context of a fear learning experiment. This relationship may inform attempts to understand the relationship between pediatric anxiety symptoms and learning. © 2017 Association for Child and Adolescent Mental Health.

  12. Measuring advertising effectiveness in Travel 2.0 websites through eye-tracking technology.

    Science.gov (United States)

    Muñoz-Leiva, Francisco; Hernández-Méndez, Janet; Gómez-Carmona, Diego

    2018-03-06

    The advent of Web 2.0 is changing tourists' behaviors, prompting them to take on a more active role in preparing their travel plans. It is also leading tourism companies to have to adapt their marketing strategies to different online social media. The present study analyzes advertising effectiveness in social media in terms of customers' visual attention and self-reported memory (recall). Data were collected through a within-subjects and between-groups design based on eye-tracking technology, followed by a self-administered questionnaire. Participants were instructed to visit three Travel 2.0 websites (T2W), including a hotel's blog, social network profile (Facebook), and virtual community profile (Tripadvisor). Overall, the results revealed greater advertising effectiveness in the case of the hotel social network; and visual attention measures based on eye-tracking data differed from measures of self-reported recall. Visual attention to the ad banner was paid at a low level of awareness, which explains why the associations with the ad did not activate its subsequent recall. The paper offers a pioneering attempt in the application of eye-tracking technology, and examines the possible impact of visual marketing stimuli on user T2W-related behavior. The practical implications identified in this research, along with its limitations and future research opportunities, are of interest both for further theoretical development and practical application. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Mapping the emotional face. How individual face parts contribute to successful emotion recognition.

    Directory of Open Access Journals (Sweden)

    Martin Wegrzyn

    Full Text Available Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have been frequently shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated on a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing to visualize the importance of different face areas for each expression. Overall, observers were mostly relying on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed to group the expressions in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.

  14. Mapping the emotional face. How individual face parts contribute to successful emotion recognition

    Science.gov (United States)

    Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna

    2017-01-01

    Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have been frequently shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated on a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing to visualize the importance of different face areas for each expression. Overall, observers were mostly relying on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed to group the expressions in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation. PMID:28493921
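
    The per-tile scoring described in the two versions of this record can be sketched as follows: each of the 48 tiles is credited according to how often trials in which it was visible ended in a correct label. The credit rule and the simulated trials are illustrative simplifications, not the authors' analysis.

        # Minimal sketch of scoring how much each of 48 face tiles contributes to correct
        # recognition. The credit rule and the trial data are invented for illustration.
        import numpy as np

        N_TILES = 48

        def tile_contributions(trials):
            """trials: list of (revealed_tile_ids, correct) tuples, one per stopped sequence."""
            credit = np.zeros(N_TILES)
            shown = np.zeros(N_TILES)
            for revealed, correct in trials:
                for tile in revealed:
                    shown[tile] += 1
                    if correct:
                        credit[tile] += 1
            with np.errstate(invalid="ignore", divide="ignore"):
                return np.where(shown > 0, credit / shown, 0.0)   # proportion correct when visible

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            trials = []
            for _ in range(200):
                revealed = list(rng.choice(N_TILES, size=rng.integers(5, 20), replace=False))
                # Pretend tiles 20-23 (an "eye region") drive correct answers.
                correct = any(t in revealed for t in (20, 21, 22, 23)) and rng.random() < 0.9
                trials.append((revealed, correct))
            contrib = tile_contributions(trials)
            print("most diagnostic tiles:", np.argsort(contrib)[::-1][:4])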

  15. The development of eye tracking in aviation (ETA) technique to investigate pilot's cognitive processes of attention and decision-making

    OpenAIRE

    Li, Wen-Chin; Lin, John J. H.; Braithwaite, Graham; Greaves, Matt

    2016-01-01

    Eye-tracking devices have provided researchers with a promising way to investigate pilots' cognitive processes when they view information presented on the flight deck. Thirty-five participants, consisting of pilots and avionics engineers, took part in the current research. The research apparatus included an eye tracker and a flight simulator divided into five AOIs for data collection. The research aims to develop a cost-efficient eye-tracking technique in order to facilitate scientific research of...

  16. A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment.

    Science.gov (United States)

    Singh, Tarkeshwar; Perry, Christopher M; Herter, Troy M

    2016-01-26

    Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in peripersonal space.
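
    The velocity-threshold classification step can be illustrated with a minimal I-VT-style sketch that labels each sample as fixation, smooth pursuit, or saccade from its angular velocity. The thresholds and sampling rate are common illustrative defaults, not the values derived in the paper, and the geometric vergence correction described above is not reproduced.

        # Minimal velocity-threshold labelling of fixations, saccades and smooth pursuits
        # from angular gaze velocity. Thresholds and sampling rate are illustrative defaults.
        import numpy as np

        def classify_gaze(angles_deg, fs=500.0, sacc_thresh=30.0, pursuit_thresh=5.0):
            """angles_deg: gaze direction per sample (degrees). Returns one label per sample."""
            velocity = np.abs(np.gradient(angles_deg)) * fs       # deg/s
            labels = np.empty(len(angles_deg), dtype=object)
            labels[velocity >= sacc_thresh] = "saccade"
            labels[(velocity < sacc_thresh) & (velocity >= pursuit_thresh)] = "smooth_pursuit"
            labels[velocity < pursuit_thresh] = "fixation"
            return labels

        if __name__ == "__main__":
            fs = 500.0
            fixation = np.zeros(100)                               # stationary gaze
            saccade = np.linspace(0, 10, 10)                       # 10 deg in 20 ms
            pursuit = 10 + np.cumsum(np.full(200, 15.0 / fs))      # ~15 deg/s drift
            trace = np.concatenate([fixation, saccade, pursuit])
            labels = classify_gaze(trace, fs)
            print({lab: int((labels == lab).sum()) for lab in set(labels)})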

  17. Tracking the reading eye: towards a model of real-world reading

    NARCIS (Netherlands)

    Jarodzka, Halszka; Brand-Gruwel, Saskia

    2018-01-01

    Eye tracking has helped to understand the process of reading a word or a sentence, and this research has been very fruitful over the past decades. However, everyday real-world reading dramatically differs from this scenario: we read a newspaper on the bus, surf the Internet for movie reviews or

  18. Expert and Novice Approaches to Using Graphs: Evidence from Eye-Track Experiments

    Science.gov (United States)

    Wirth, K. R.; Lindgren, J. M.

    2015-12-01

    Professionals and students in geology use an array of graphs to study the earth, but relatively little detail is known about how users interact with these graphs. Comprehension of graphical information in the earth sciences is further complicated by the common use of non-traditional formats (e.g., inverted axes, logarithmic scales, normalized plots, ternary diagrams). Many educators consider graph-reading skills an important outcome of general education science curricula, so it is critical that we understand both the development of graph-reading skills and the instructional practices that are most efficacious. Eye-tracking instruments provide quantitative information about eye movements and offer important insights into the development of expertise in graph use. We measured the graph reading skills and eye movements of novices (students with a variety of majors and educational attainment) and experts (faculty and staff from a variety of disciplines) while observing traditional and non-traditional graph formats. Individuals in the expert group consistently demonstrated significantly greater accuracy in responding to questions (e.g., retrieval, interpretation, prediction) about graphs. Among novices, only the number of college math and science courses correlated with response accuracy. Interestingly, novices and experts exhibited similar eye-tracks when they first encountered a new graph; they typically scanned through the title, x and y-axes, and data regions in the first 5-15 seconds. However, experts are readily distinguished from novices by a greater number of eye movements (20-35%) between the data and other graph elements (e.g., title, x-axis, y-axis) both during and after the initial orientation phase. We attribute the greater eye movements between the different graph elements an outcome of the generally better-developed self-regulation skills (goal-setting, monitoring, self-evaluation) that likely characterize individuals in our expert group.

  19. A Novel Eye-Tracking Method to Assess Attention Allocation in Individuals with and without Aphasia Using a Dual-Task Paradigm

    Science.gov (United States)

    Heuer, Sabine; Hallowell, Brooke

    2015-01-01

    Numerous authors report that people with aphasia have greater difficulty allocating attention than people without neurological disorders. Studying how attention deficits contribute to language deficits is important. However, existing methods for indexing attention allocation in people with aphasia pose serious methodological challenges. Eye-tracking methods have great potential to address such challenges. We developed and assessed the validity of a new dual-task method incorporating eye tracking to assess attention allocation. Twenty-six adults with aphasia and 33 control participants completed auditory sentence comprehension and visual search tasks. To test whether the new method validly indexes well-documented patterns in attention allocation, demands were manipulated by varying task complexity in single- and dual-task conditions. Differences in attention allocation were indexed via eye-tracking measures. For all participants significant increases in attention allocation demands were observed from single- to dual-task conditions and from simple to complex stimuli. Individuals with aphasia had greater difficulty allocating attention with greater task demands. Relationships between eye-tracking indices of comprehension during single and dual tasks and standardized testing were examined. Results support the validity of the novel eye-tracking method for assessing attention allocation in people with and without aphasia. Clinical and research implications are discussed. PMID:25913549

  20. Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task

    Directory of Open Access Journals (Sweden)

    Nicholas T. Bott

    2017-06-01

    Full Text Available Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive “window on the brain,” and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure (obtaining images at 60 frames per second, FPS) and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and the 3 FPS built-in web camera at each of the three visits (r = 0.88–0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81–0.88). There were strong relationships on VPC mean novelty preference score between 10, 5, and 3 FPS training sets (r = 0.88–0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high-frame-rate eye tracker camera. While this method is not suitable for eye-tracking paradigms requiring the collection and analysis of fine-grained metrics, it offers a practical option for scoring VPC decisional tasks.
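
    Two of the quantities reported above can be sketched in a few lines: the per-trial novelty preference score and an inter-rater agreement statistic. Note that the sketch uses the standard Cohen's kappa rather than the Siegel and Castellan variant named in the abstract, and all example numbers are invented.

        # Minimal sketch of a novelty preference score and Cohen's kappa agreement.
        # Example looking times and ratings are invented.
        import numpy as np

        def novelty_preference(novel_ms, familiar_ms):
            """Proportion of total looking time spent on the novel image."""
            total = novel_ms + familiar_ms
            return novel_ms / total if total > 0 else np.nan

        def cohens_kappa(rater_a, rater_b):
            a, b = np.asarray(rater_a), np.asarray(rater_b)
            categories = np.union1d(a, b)
            p_obs = np.mean(a == b)
            p_exp = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
            return (p_obs - p_exp) / (1 - p_exp)

        if __name__ == "__main__":
            print(round(novelty_preference(novel_ms=2400, familiar_ms=1600), 2))   # 0.6
            rater_a = ["novel", "novel", "familiar", "novel", "familiar", "novel"]
            rater_b = ["novel", "novel", "familiar", "familiar", "familiar", "novel"]
            print(round(cohens_kappa(rater_a, rater_b), 2))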

  1. Face Age and Eye Gaze Influence Older Adults' Emotion Recognition.

    Science.gov (United States)

    Campbell, Anna; Murray, Janice E; Atkinson, Lianne; Ruffman, Ted

    2017-07-01

    Eye gaze has been shown to influence emotion recognition. In addition, older adults (over 65 years) are not as influenced by gaze direction cues as young adults (18-30 years). Nevertheless, these differences might stem from the use of young to middle-aged faces in emotion recognition research because older adults have an attention bias toward old-age faces. Therefore, using older face stimuli might allow older adults to process gaze direction cues to influence emotion recognition. To investigate this idea, young and older adults completed an emotion recognition task with young and older face stimuli displaying direct and averted gaze, assessing labeling accuracy for angry, disgusted, fearful, happy, and sad faces. Direct gaze rather than averted gaze improved young adults' recognition of emotions in young and older faces, but for older adults this was true only for older faces. The current study highlights the impact of stimulus face age and gaze direction on emotion recognition in young and older adults. The use of young face stimuli with direct gaze in most research might contribute to age-related emotion recognition differences. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. Adolescents' attention to responsibility messages in magazine alcohol advertisements: an eye-tracking approach.

    Science.gov (United States)

    Thomsen, Steven R; Fulton, Kristi

    2007-07-01

    To investigate whether adolescent readers attend to responsibility or moderation messages (e.g., "drink responsibly") included in magazine advertisements for alcoholic beverages and to assess the association between attention and the ability to accurately recall the content of these messages. An integrated head-eye tracking system (ASL Eye-TRAC 6000) was used to measure the eye movements, including fixations and fixation duration, of a group of 63 adolescents (ages 12-14 years) as they viewed six print advertisements for alcoholic beverages. Immediately after the eye-tracking sessions, participants completed a masked-recall exercise. Overall, the responsibility or moderation messages were the least frequently viewed textual or visual areas of the advertisements. Participants spent an average of only .35 seconds, or 7% of the total viewing time, fixating on each responsibility message. Beverage bottles, product logos, and cartoon illustrations were the most frequently viewed elements of the advertisements. Among those participants who fixated at least once on an advertisement's warning message, only a relatively small percentage were able to recall its general concept or restate it verbatim in the masked recall test. Voluntary responsibility or moderation messages failed to capture the attention of teenagers who participated in this study and need to be typographically modified to be more effective.

  3. Basic Number Processing Deficits in Developmental Dyscalculia: Evidence from Eye Tracking

    Science.gov (United States)

    Moeller, K.; Neuburger, S.; Kaufmann, L.; Landerl, K.; Nuerk, H. C.

    2009-01-01

    Recent research suggests that developmental dyscalculia is associated with a subitizing deficit (i.e., the inability to quickly enumerate small sets of up to 3 objects). However, the nature of this deficit has not previously been investigated. In the present study the eye-tracking methodology was employed to clarify whether (a) the subitizing…

  4. The Neuroergonomics of Aircraft Cockpits: The Four Stages of Eye-Tracking Integration to Enhance Flight Safety

    Directory of Open Access Journals (Sweden)

    Vsevolod Peysakhovich

    2018-02-01

    Full Text Available Commercial aviation is currently one of the safest modes of transportation; however, human error is still one major contributing cause of aeronautical accidents and incidents. One promising avenue to further enhance flight safety is Neuroergonomics, an approach at the intersection of neuroscience, cognitive engineering and human factors, which aims to create better human–system interaction. Eye-tracking technology allows users to “monitor the monitoring” by providing insights into both pilots’ attentional distribution and underlying decisional processes. In this position paper, we identify and define a framework of four stages of step-by-step integration of eye-tracking systems in modern cockpits. Stage I concerns Pilot Training and Flight Performance Analysis on-ground; stage II proposes On-board Gaze Recordings as extra data for the “black box” recorders; stage III describes Gaze-Based Flight Deck Adaptation including warning and alerting systems, and, eventually, stage IV prophesies Gaze-Based Aircraft Adaptation including authority taking by the aircraft. We illustrate the potential of these four steps with a description of incidents or accidents that we could certainly have avoided thanks to eye-tracking. Estimated milestones for the integration of each stage are also proposed together with a list of some implementation limitations. We believe that the research institutions and industrial actors of the domain will all benefit from the integration of the framework of the eye-tracking systems into cockpits.

  5. Eye tracking and gating system for proton therapy of orbital tumors

    International Nuclear Information System (INIS)

    Shin, Dongho; Yoo, Seung Hoon; Moon, Sung Ho; Yoon, Myonggeun; Lee, Se Byeong; Park, Sung Yong

    2012-01-01

    Purpose: A new motion-gated proton therapy approach for the treatment of orbital tumors, using a real-time eye-tracking system, was designed and evaluated. Methods: We developed our system with image-pattern matching, using a normalized cross-correlation technique with LabVIEW 8.6 and Vision Assistant 8.6 (National Instruments, Austin, TX). To measure the pixel spacing of an image consistently, four calibration modes (point detection, edge detection, line measurement, and manual measurement) were proposed and used. After these methods were applied to proton therapy, gating was performed and radiation dose distributions were evaluated. Results: Moving-phantom verification measurements resulted in errors of less than 0.1 mm for the given ranges of translation. Dosimetric evaluation of the beam-gating system versus nongated treatment delivery with a moving phantom showed that lateral penumbra grew by only 0.83 mm for gated radiotherapy, compared with 4.95 mm for nongated exposure. Analysis of the clinical results suggests that average eye movement differs distinctly between patients, measuring 0.44 mm, 0.45 mm, and 0.86 mm for the three patients, respectively. Conclusions: The developed automatic eye-tracking-based beam-gating system enabled high-precision proton radiotherapy of orbital tumors.
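
    To make the gating idea above concrete, the sketch below pairs normalized cross-correlation template matching with a displacement tolerance. It is only an illustration in Python/OpenCV, not the authors' LabVIEW/Vision Assistant implementation; the pixel spacing, tolerance, and function names are assumed values.

      # Illustrative sketch: track an eye template with normalized cross-correlation
      # and emit a beam-gating signal when the eye drifts beyond a tolerance.
      # PIXEL_SPACING_MM and TOLERANCE_MM are assumed example values.
      import cv2
      import numpy as np

      PIXEL_SPACING_MM = 0.05   # assumed mm-per-pixel calibration
      TOLERANCE_MM = 0.5        # assumed gating tolerance

      def track_and_gate(frame, template, reference_xy):
          """Return (current_xy, beam_on) for one grayscale camera frame."""
          scores = cv2.matchTemplate(frame, template, cv2.TM_CCORR_NORMED)
          _, _, _, best_loc = cv2.minMaxLoc(scores)          # top-left corner of best match
          current_xy = np.array(best_loc, dtype=float)
          displacement_mm = np.linalg.norm(current_xy - reference_xy) * PIXEL_SPACING_MM
          beam_on = displacement_mm <= TOLERANCE_MM          # gate the beam off if the eye moved too far
          return current_xy, beam_on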

  6. The more-or-less morphing face illusion: A case of fixation-dependent modulation

    NARCIS (Netherlands)

    Lier, R.J. van; Koning, A.R.

    2014-01-01

    A visual illusion is presented in which the perceived changes in a morphing sequence depend on eye movements. The phenomenon is illustrated using face morphs: when tracking a moving dot superimposed on a face morphing sequence, the changes in the morphing sequence seem rather small, but when the dot

  7. Integrated Eye Tracking and Neural Monitoring for Enhanced Assessment of Mild TBI

    Science.gov (United States)

    2017-06-01


  8. P1-20: The Relation of Eye and Hand Movement during Multimodal Recall Memory

    Directory of Open Access Journals (Sweden)

    Eun-Sol Kim

    2012-10-01

    Full Text Available Eye and hand movement tracking has proven to be a successful tool and is widely used to characterize human cognition in language and visual processing (Just & Carpenter, 1976, Cognitive Psychology, 8, 441–480). Eye movement in particular has proven to be a successful measure of human language and visual processing (Rayner, 1998, Psychological Bulletin, 124(3), 372–422). Recently, mouse tracking has been used for social-cognition tasks such as categorization of sex-atypical faces and for studying spoken-language processes (Magnuson, 2005, PNAS, 102(28), 9995–9996; Spivey et al., 2005, PNAS, 102, 10393–10398). Here, we present a framework that uses both eye gaze and hand movement simultaneously to analyze the relation between them during memory retrieval. We tracked eye and mouse movements while subjects watched a drama and played a multimodal memory game (MMG), a cognitive task designed to investigate recall memory mechanisms while watching video dramas (Zhang, 2009, AAAI 2009 Spring Symposium: Agents that Learn from Human Teachers, 144–149). Experimental results show that eye tracking and mouse tracking provide complementary information about underlying cognitive processes. We also found some interesting patterns in eye-hand movement during multimodal memory recall.

  9. Face recognition for criminal identification: An implementation of principal component analysis for face recognition

    Science.gov (United States)

    Abdullah, Nurul Azma; Saidi, Md. Jamri; Rahman, Nurul Hidayah Ab; Wen, Chuah Chai; Hamid, Isredza Rahmi A.

    2017-10-01

    In practice, identification of criminals in Malaysia is done through thumbprint identification. However, this approach is constrained because many criminals are now careful not to leave their thumbprints at the scene. With the advent of security technology, cameras, especially CCTV, have been installed in many public and private areas for surveillance. CCTV footage can be used to identify suspects at the scene. However, because little software has been developed to automatically match faces in footage against recorded photos of criminals, law enforcement still relies on thumbprint identification. In this paper, an automated facial recognition system for a criminal database is proposed using the well-known Principal Component Analysis (PCA) approach. The system detects and recognizes faces automatically, which will help law enforcement to detect or recognize suspects when no thumbprint is present at the scene. The results show that about 80% of input photos can be matched with the template data.
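
    As an illustration of the eigenfaces idea behind such a system, the sketch below projects gallery faces onto principal components and matches a query face to its nearest neighbour in that space. It is a minimal Python/scikit-learn sketch, not the authors' implementation; the image representation, component count, and function names are assumptions.

      # Illustrative eigenfaces-style sketch: project gallery faces onto principal
      # components and match a query face to its nearest gallery neighbour.
      # The flattened-grayscale representation and component count are assumptions.
      import numpy as np
      from sklearn.decomposition import PCA

      def build_model(gallery_images, n_components=50):
          """gallery_images: array of shape (n_faces, height * width)."""
          pca = PCA(n_components=n_components, whiten=True)
          gallery_projections = pca.fit_transform(gallery_images)
          return pca, gallery_projections

      def identify(query_image, pca, gallery_projections, labels):
          """Return the label of the gallery face closest to the query in PCA space."""
          query_projection = pca.transform(query_image.reshape(1, -1))
          distances = np.linalg.norm(gallery_projections - query_projection, axis=1)
          return labels[int(np.argmin(distances))]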

  10. Social anxiety is related to increased dwell time on socially threatening faces.

    Science.gov (United States)

    Lazarov, Amit; Abend, Rany; Bar-Haim, Yair

    2016-03-15

    Identification of reliable targets for therapeutic interventions is essential for developing evidence-based therapies. Threat-related attention bias has been implicated in the etiology and maintenance of social anxiety disorder. Extant response-time-based threat bias measures have demonstrated limited reliability and internal consistency. Here, we examined gaze patterns of socially anxious and nonanxious participants in relation to socially threatening and neutral stimuli using an eye-tracking task comprising multiple threat and neutral stimuli presented for an extended time period. We tested the psychometric properties of this task in the hope of providing a solid stepping-stone for future treatment development. Eye gaze was tracked while participants freely viewed 60 different matrices composed of eight disgusted and eight neutral facial expressions, presented for 6000 ms each. Gaze patterns on threat and neutral areas of interest (AOIs) of participants with SAD, high socially anxious students, and nonanxious students were compared. Internal consistency and test-retest reliability were evaluated. Participants did not differ on first-fixation variables. However, overall, socially anxious students and participants with SAD dwelled significantly longer on threat faces compared with nonanxious participants, with no difference between the anxious groups. Groups did not differ in overall dwell time on neutral faces. Internal consistency of total dwell time on threat and neutral AOIs was high, and one-week test-retest reliability was acceptable. Only disgusted facial expressions were used. Relatively small sample size. Social anxiety is associated with increased dwell time on socially threatening stimuli, presenting a potential target for therapeutic intervention. Copyright © 2016 Elsevier B.V. All rights reserved.
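
    Dwell time on an area of interest (AOI) is typically computed by summing the durations of fixations that land inside the AOI. The following is a minimal sketch of that computation; the fixation format and the example AOI rectangles are assumptions, not the task layout used in the study.

      # Illustrative sketch: total dwell time per area of interest (AOI) from a
      # list of fixations. Each fixation is assumed to be (x, y, duration_ms);
      # AOIs are axis-aligned rectangles (x_min, y_min, x_max, y_max).

      def dwell_times(fixations, aois):
          """Return {aoi_name: summed fixation duration in ms}."""
          totals = {name: 0.0 for name in aois}
          for x, y, duration_ms in fixations:
              for name, (x0, y0, x1, y1) in aois.items():
                  if x0 <= x <= x1 and y0 <= y <= y1:
                      totals[name] += duration_ms
          return totals

      # Hypothetical example: threat vs. neutral regions in one stimulus matrix
      aois = {"threat": (0, 0, 400, 300), "neutral": (400, 0, 800, 300)}
      print(dwell_times([(120, 80, 250), (500, 100, 400)], aois))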

  11. Comparisons of Online Reading Paradigms: Eye Tracking, Moving-Window, and Maze

    Science.gov (United States)

    Witzel, Naoko; Witzel, Jeffrey; Forster, Kenneth

    2012-01-01

    This study compares four methodologies used to examine online sentence processing during reading. Specifically, self-paced, non-cumulative, moving-window reading (Just et al. in "J Exp Psychol Gen" 111:228-238, 1982), eye tracking (see e.g., Rayner in "Q J Exp Psychol" 62:1457-1506, 2009), and two versions of the maze task (Forster et al. in…

  12. Psychopathic traits are associated with reduced attention to the eyes of emotional faces among adult male non-offenders

    Directory of Open Access Journals (Sweden)

    Steven Mark Gillespie

    2015-10-01

    Full Text Available Psychopathic traits are linked with impairments in emotional facial expression recognition. These impairments may, in part, reflect reduced attention to the eyes of emotional faces. Although reduced attention to the eyes has been noted among children with conduct problems and callous-unemotional traits, similar findings are yet to be found in relation to psychopathic traits among adult male participants. Here we investigated the relationship of primary (selfish, uncaring) and secondary (impulsive, antisocial) psychopathic traits with attention to the eyes among adult male non-offenders during an emotion recognition task. We measured the number of fixations, and overall dwell time, on the eyes and the mouth of male and female faces showing the six basic emotions at varying levels of intensity. We found no relationship of primary or secondary psychopathic traits with recognition accuracy. However, primary psychopathic traits were associated with a reduced number of fixations, and lower overall dwell time, on the eyes relative to the mouth across expressions, intensity, and sex. Furthermore, the relationship of primary psychopathic traits with attention to the eyes of angry and fearful faces was influenced by the sex and intensity of the expression. We also showed that a greater number of fixations on the eyes, relative to the mouth, was associated with increased accuracy for angry and fearful expression recognition. These results are the first to show effects of psychopathic traits on attention to the eyes of emotional faces in an adult male sample, and may support amygdala based accounts of psychopathy. These findings may also have methodological implications for clinical studies of emotion recognition.

  13. Edge detection of iris of the eye for human biometric identification system

    Directory of Open Access Journals (Sweden)

    Kateryna O. Tryfonova

    2015-03-01

    Full Text Available Human biometric identification by the iris of the eye is considered one of the most accurate and reliable methods of identification. The aim of the research is to solve the problem of edge detection in digital images of the human iris so that a biometric identification system can be implemented on a mobile device. To achieve this aim, the Canny edge detection algorithm is considered. It consists of the following steps: smoothing, finding gradients, non-maximum suppression, and double thresholding with hysteresis. The Canny algorithm is implemented in software for the Android mobile platform using the high-level programming language Java.
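
    The pipeline steps listed above (smoothing, gradient computation, non-maximum suppression, and double thresholding with hysteresis) are bundled in common libraries. The record describes a Java implementation for Android; purely for illustration, a Python/OpenCV sketch is shown below, with the file name, blur kernel, and thresholds as assumed values.

      # Illustrative sketch of the Canny pipeline described above, using OpenCV.
      # The file name, blur kernel, and hysteresis thresholds are assumed values.
      import cv2

      image = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)   # assumed input image
      smoothed = cv2.GaussianBlur(image, (5, 5), 1.4)        # 1. smoothing
      edges = cv2.Canny(smoothed, 50, 150)                   # 2-4. gradients, non-maximum
                                                             #      suppression, hysteresis
      cv2.imwrite("eye_edges.png", edges)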

  14. Talent Identification in Track and Field.

    Science.gov (United States)

    Henson, Phillip; And Others

    Talent identification in most sports occurs through mass participation and the process of natural selection; track and field does not enjoy such widespread participation. This paper reports on a project undertaken for the following purposes: improve the means by which youth with the potential for high level performance can be identified; develop…

  15. An Eye-tracking Study of Notational, Informational, and Emotional Aspects of Learning Analytics Representations

    DEFF Research Database (Denmark)

    Vatrapu, Ravi; Reimann, Peter; Bull, Susan

    2013-01-01

    This paper presents an eye-tracking study of notational, informational, and emotional aspects of nine different notational systems (Skill Meters, Smilies, Traffic Lights, Topic Boxes, Collective Histograms, Word Clouds, Textual Descriptors, Table, and Matrix) and three different information states...... (Weak, Average, & Strong) used to represent student's learning. Findings from the eye-tracking study show that higher emotional activation was observed for the metaphorical notations of traffic lights and smilies and collective representations. Mean view time was higher for representations...... of the "average" informational learning state. Qualitative data analysis of the think-aloud comments and post-study interview show that student participants reflected on the meaning-making opportunities and action-taking possibilities afforded by the representations. Implications for the design and evaluation...

  16. Optics of the human cornea influence the accuracy of stereo eye-tracking methods: a simulation study

    NARCIS (Netherlands)

    Barsingerhorn, A.D.; Boonstra, F.N.; Goossens, H.H.L.M.

    2017-01-01

    Current stereo eye-tracking methods model the cornea as a sphere with one refractive surface. However, the human cornea is slightly aspheric and has two refractive surfaces. Here we used ray-tracing and the Navarro eye-model to study how these optical properties affect the accuracy of different

  17. Eye-Tracking as a Measure of Responsiveness to Joint Attention in Infants at Risk for Autism

    Science.gov (United States)

    Navab, Anahita; Gillespie-Lynch, Kristen; Johnson, Scott P.; Sigman, Marian; Hutman, Ted

    2012-01-01

    Reduced responsiveness to joint attention (RJA), as assessed by the Early Social Communication Scales (ESCS), is predictive of both subsequent language difficulties and autism diagnosis. Eye-tracking measurement of RJA is a promising prognostic tool because it is highly precise and standardized. However, the construct validity of eye-tracking…

  18. Visual Attention for Solving Multiple-Choice Science Problem: An Eye-Tracking Analysis

    Science.gov (United States)

    Tsai, Meng-Jung; Hou, Huei-Tse; Lai, Meng-Lung; Liu, Wan-Yi; Yang, Fang-Ying

    2012-01-01

    This study employed an eye-tracking technique to examine students' visual attention when solving a multiple-choice science problem. Six university students participated in a problem-solving task to predict occurrences of landslide hazards from four images representing four combinations of four factors. Participants' responses and visual attention…

  19. Geometry and Gesture-Based Features from Saccadic Eye-Movement as a Biometric in Radiology

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Tracy [Texas A& M University, College Station; Tourassi, Georgia [ORNL; Yoon, Hong-Jun [ORNL; Alamudun, Folami T. [ORNL

    2017-07-01

    In this study, we present a novel application of sketch gesture recognition on eye-movement for biometric identification and estimating task expertise. The study was performed for the task of mammographic screening with simultaneous viewing of four coordinated breast views, as typically done in clinical practice. Eye-tracking data and diagnostic decisions collected for 100 mammographic cases (25 normal, 25 benign, 50 malignant) and 10 readers (three board-certified radiologists and seven radiology residents) formed the corpus for this study. Sketch gesture recognition techniques were employed to extract geometric and gesture-based features from saccadic eye movements. Our results show that saccadic eye movement, characterized using sketch-based features, results in more accurate models for predicting individual identity and level of expertise than more traditional eye-tracking features.
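
    A simple way to derive geometric features from saccadic eye movement is to segment saccades with a velocity threshold and record each saccade's amplitude and direction. The sketch below illustrates that idea only; the sampling rate, threshold, and data layout are assumed values, and the gesture-based features used in the study are more elaborate.

      # Illustrative sketch: segment saccades from gaze samples with a velocity
      # threshold and compute per-saccade amplitude and direction features.
      # The sampling rate, threshold, and input layout are assumed values.
      import numpy as np

      def saccade_features(gaze_xy, sample_rate_hz=1000.0, velocity_threshold=50.0):
          """gaze_xy: array-like of shape (n_samples, 2), in screen units.
          Returns a list of (amplitude, angle_radians) per detected saccade."""
          gaze_xy = np.asarray(gaze_xy, dtype=float)
          speed = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) * sample_rate_hz
          moving = speed > velocity_threshold
          features, start = [], None
          for i, is_moving in enumerate(moving):
              if is_moving and start is None:
                  start = i                                  # saccade onset
              elif not is_moving and start is not None:
                  delta = gaze_xy[i] - gaze_xy[start]        # saccade displacement
                  features.append((np.linalg.norm(delta), np.arctan2(delta[1], delta[0])))
                  start = None
          return features                                    # a trailing unfinished saccade is dropped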

  20. Eye Tracking Meets the Process of Process Modeling: a Visual Analytic Approach

    DEFF Research Database (Denmark)

    Burattin, Andrea; Kaiser, M.; Neurauter, Manuel

    2017-01-01

    Research on the process of process modeling (PPM) studies how process models are created. It typically uses the logs of the interactions with the modeling tool to assess the modeler’s behavior. In this paper we suggest to introduce an additional stream of data (i.e., eye tracking) to improve the ...

  1. De-identification of psychiatric intake records: Overview of 2016 CEGS N-GRID shared tasks Track 1.

    Science.gov (United States)

    Stubbs, Amber; Filannino, Michele; Uzuner, Özlem

    2017-11-01

    The 2016 CEGS N-GRID shared tasks for clinical records contained three tracks. Track 1 focused on de-identification of a new corpus of 1000 psychiatric intake records. This track tackled de-identification in two sub-tracks: Track 1.A was a "sight unseen" task, where nine teams ran existing de-identification systems, without any modifications or training, on 600 new records in order to gauge how well systems generalize to new data. The best-performing system for this track scored an F1 of 0.799. Track 1.B was a traditional Natural Language Processing (NLP) shared task on de-identification, where 15 teams had two months to train their systems on the new data and then tested them on an unannotated test set. The best-performing system from this track scored an F1 of 0.914. The scores for Track 1.A show that unmodified existing systems do not generalize well to new data without the benefit of training data. The scores for Track 1.B are slightly lower than those of the 2014 de-identification shared task (which was almost identical to 2016 Track 1.B), indicating that these new psychiatric records pose a more difficult challenge to NLP systems. Overall, de-identification is still not a solved problem, though it is important to the future of clinical NLP. Copyright © 2017 Elsevier Inc. All rights reserved.
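
    The F1 scores reported above are conventionally computed from precision and recall over predicted versus gold annotation spans. A minimal sketch under that assumption (exact span-and-type matching, micro-averaged) follows; the evaluation scripts used in the shared task may differ in details such as partial-match credit.

      # Illustrative sketch: micro-averaged precision, recall, and F1 over
      # de-identification spans, assuming exact (start, end, type) matching.

      def span_f1(gold_spans, predicted_spans):
          """Each argument is a set of (start, end, phi_type) tuples."""
          true_positives = len(gold_spans & predicted_spans)
          precision = true_positives / len(predicted_spans) if predicted_spans else 0.0
          recall = true_positives / len(gold_spans) if gold_spans else 0.0
          if precision + recall == 0:
              return 0.0
          return 2 * precision * recall / (precision + recall)

      # Hypothetical example
      gold = {(0, 5, "NAME"), (10, 14, "DATE")}
      pred = {(0, 5, "NAME"), (20, 24, "DATE")}
      print(round(span_f1(gold, pred), 3))   # 0.5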

  2. Research of Digital Interface Layout Design based on Eye-tracking

    OpenAIRE

    Shao Jiang; Xue Chengqi; Wang Fang; Wang Haiyan; Tang Wencheng; Chen Mo; Kang Mingwu

    2015-01-01

    The aim of this paper is to address the low service efficiency and unsmooth human-computer interaction caused by the currently irrational layouts of digital interfaces for complex systems. Three common layout structures for digital interfaces are presented, and five layout types appropriate for multilevel digital interfaces are summarized. Based on eye-tracking technology, an assessment of the advantages and disadvantages of the different layout types was conducted through subjects’ ...

  3. How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology.

    Science.gov (United States)

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; Ten Cate, Th J

    2017-08-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. Databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened search results and included eye tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which represents an initial global impression, followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expert levels. One study investigated teaching of visual search strategies, and did not find a significant effect on perceptual performance. Eye

  4. Real-Time Generic Face Tracking in the Wild with CUDA

    NARCIS (Netherlands)

    Cheng, Shiyang; Asthana, Akshay; Asthana, Ashish; Zafeiriou, Stefanos; Shen, Jie; Pantic, Maja

    We present a robust real-time face tracking system based on the Constrained Local Models framework by adopting the novel regression-based Discriminative Response Map Fitting (DRMF) method. By exploiting the algorithm's potential parallelism, we present a hybrid CPU-GPU implementation capable of

  5. Attentional bias to betel quid cues: An eye tracking study.

    Science.gov (United States)

    Shen, Bin; Chiu, Meng-Chun; Li, Shuo-Heng; Huang, Guo-Joe; Liu, Ling-Jun; Ho, Ming-Chou

    2016-09-01

    The World Health Organization regards betel quid as a human carcinogen, and DSM-IV and ICD-10 dependence symptoms may develop with heavy use. This study, conducted in central Taiwan, investigated whether betel quid chewers can exhibit overt orienting to selectively respond to the betel quid cues. Twenty-four male chewers' and 23 male nonchewers' eye movements to betel-quid-related pictures and matched pictures were assessed during a visual probe task. The eye movement index showed that betel quid chewers were more likely to initially direct their gaze to the betel quid cues, t(23) = 3.70, p betel quid chewers' attentional bias. The results demonstrated that the betel quid chewers (but not the nonchewers) were more likely to initially direct their gaze to the betel quid cues, and spent more time and were more fixated on them. These findings suggested that when attention is directly measured through the eye tracking technique, this methodology may be more sensitive to detecting attentional biases in betel quid chewers. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Numerical Analysis on Color Preference and Visual Comfort from Eye Tracking Technique

    Directory of Open Access Journals (Sweden)

    Ming-Chung Ho

    2015-01-01

    Full Text Available Color preferences are very important in engineering, and there is a relationship between color preference and visual comfort. In this study, thirty university students participated in an experiment lasting about an hour, supplemented by pre- and post-test questionnaires. The main purpose of the study is to explore, via eye-tracking technology, the visual effects of different color assignments in relation to subjective color preferences. A nonlinear analysis of the eye-movement data detects slight differences in color preference and visual comfort, suggesting effective physiological indicators for future research. The results show that the average pupil size among the eye-movement indicators can effectively reflect differences in color preference and visual comfort. The study further confirms that subjective impressions can lead people to misjudge.

  7. Face exploration dynamics differentiate men and women.

    Science.gov (United States)

    Coutrot, Antoine; Binetti, Nicola; Harrison, Charlotte; Mareschal, Isabelle; Johnston, Alan

    2016-11-01

    The human face is central to our everyday social interactions. Recent studies have shown that while gazing at faces, each one of us has a particular eye-scanning pattern, highly stable across time. Although variables such as culture or personality have been shown to modulate gaze behavior, we still don't know what shapes these idiosyncrasies. Moreover, most previous observations rely on static analyses of small-sized eye-position data sets averaged across time. Here, we probe the temporal dynamics of gaze to explore what information can be extracted about the observers and what is being observed. Controlling for any stimuli effect, we demonstrate that among many individual characteristics, the gender of both the participant (gazer) and the person being observed (actor) are the factors that most influence gaze patterns during face exploration. We record and exploit the largest set of eye-tracking data (405 participants, 58 nationalities) from participants watching videos of another person. Using novel data-mining techniques, we show that female gazers follow a much more exploratory scanning strategy than males. Moreover, female gazers watching female actresses look more at the eye on the left side. These results have strong implications in every field using gaze-based models from computer vision to clinical psychology.

  8. Enhanced 3D face processing using an active vision system

    DEFF Research Database (Denmark)

    Lidegaard, Morten; Larsen, Rasmus; Kraft, Dirk

    2014-01-01

    We present an active face processing system based on 3D shape information extracted by means of stereo information. We use two sets of stereo cameras with different field of views (FOV): One with a wide FOV is used for face tracking, while the other with a narrow FOV is used for face identification...

  9. EEG Eye State Identification Using Incremental Attribute Learning with Time-Series Classification

    Directory of Open Access Journals (Sweden)

    Ting Wang

    2014-01-01

    Full Text Available Eye state identification is a common time-series classification problem and a hot spot in recent research. Electroencephalography (EEG) is widely used in eye state classification to detect humans' cognitive state. Previous research has validated the feasibility of machine learning and statistical approaches for EEG eye state classification. This paper aims to propose a novel approach for EEG eye state identification using incremental attribute learning (IAL) based on neural networks. IAL is a novel machine learning strategy which gradually imports and trains features one by one. Previous studies have verified that such an approach is applicable for solving a number of pattern recognition problems. However, in these previous works, little research on IAL focused on its application to time-series problems. Therefore, it is still unknown whether IAL can be employed to cope with time-series problems like EEG eye state classification. Experimental results in this study demonstrate that, with proper feature extraction and feature ordering, IAL can not only efficiently cope with time-series classification problems, but also exhibit better classification performance in terms of classification error rates in comparison with conventional and some other approaches.
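
    The core of the incremental attribute learning idea is to introduce features one at a time in a chosen order and refit the classifier after each addition, tracking the error. The sketch below illustrates that loop with a logistic regression for brevity; the study itself used neural networks, and the feature ordering and train/test split are assumed values.

      # Illustrative sketch of incremental attribute learning: import EEG features
      # one by one in a pre-chosen order and track classification error after each
      # step. The classifier (logistic regression) and the ordering are assumptions;
      # the study itself used neural networks.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      def incremental_attribute_errors(X, y, feature_order):
          """X: (n_samples, n_features) array; feature_order: list of column indices."""
          X_train, X_test, y_train, y_test = train_test_split(
              X, y, test_size=0.3, random_state=0)
          errors = []
          for k in range(1, len(feature_order) + 1):
              cols = feature_order[:k]                      # features imported so far
              clf = LogisticRegression(max_iter=1000).fit(X_train[:, cols], y_train)
              errors.append(1.0 - clf.score(X_test[:, cols], y_test))
          return errors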

  10. Eye Tracking Reveals a Crucial Role for Facial Motion in Recognition of Faces by Infants

    Science.gov (United States)

    Xiao, Naiqi G.; Quinn, Paul C.; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-01-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was…

  11. Modulations of eye movement patterns by spatial filtering during the learning and testing phases of an old/new face recognition task.

    Science.gov (United States)

    Lemieux, Chantal L; Collin, Charles A; Nelson, Elizabeth A

    2015-02-01

    In two experiments, we examined the effects of varying the spatial frequency (SF) content of face images on eye movements during the learning and testing phases of an old/new recognition task. At both learning and testing, participants were presented with face stimuli band-pass filtered to 11 different SF bands, as well as an unfiltered baseline condition. We found that eye movements varied significantly as a function of SF. Specifically, the frequency of transitions between facial features showed a band-pass pattern, with more transitions for middle-band faces (≈5-20 cycles/face) than for low-band (below ≈5 cpf) or high-band (above ≈20 cpf) ones. These findings were similar for the learning and testing phases. The distributions of transitions across facial features were similar for the middle-band, high-band, and unfiltered faces, showing a concentration on the eyes and mouth; conversely, low-band faces elicited mostly transitions involving the nose and nasion. The eye movement patterns elicited by low, middle, and high bands are similar to those previous researchers have suggested reflect holistic, configural, and featural processing, respectively. More generally, our results are compatible with the hypotheses that eye movements are functional, and that the visual system makes flexible use of visuospatial information in face processing. Finally, our finding that only middle spatial frequencies yielded the same number and distribution of fixations as unfiltered faces adds more evidence to the idea that these frequencies are especially important for face recognition, and reveals a possible mediator for the superior performance that they elicit.
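
    Band-pass filtering an image to a range of cycles per face is commonly done with a frequency-domain mask. The sketch below shows one way to do this under the assumption of a square image spanning exactly one face width; it is illustrative only and not the filtering procedure used in the study.

      # Illustrative sketch: keep spatial frequencies between low and high
      # cycles-per-face (cpf) in a face image via an FFT annulus mask.
      # Assumes a square image spanning exactly one face width.
      import numpy as np

      def bandpass_face(image, low_cpf, high_cpf):
          height, width = image.shape
          fy = np.fft.fftfreq(height) * height              # cycles per image height
          fx = np.fft.fftfreq(width) * width                # cycles per image width
          radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
          mask = (radius >= low_cpf) & (radius <= high_cpf)
          return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))

      # Hypothetical example: middle band (about 5-20 cpf) of a 256 x 256 face image
      # middle_band = bandpass_face(face_image, 5, 20)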

  12. [Eye lens radiation exposure during ureteroscopy with and without a face protection shield: Investigations on a phantom model].

    Science.gov (United States)

    Zöller, G; Figel, M; Denk, J; Schulz, K; Sabo, A

    2016-03-01

    Eye lens radiation exposure during radiologically guided endoscopic procedures may result in radiation-induced cataracts; therefore, we investigated the ocular radiation exposure during ureteroscopy on a phantom model. Using an Alderson phantom model and eye lens dosimeters, we measured the ocular radiation exposure depending on the number of X-ray images and on the duration of fluoroscopic imaging. The measurements were done with and without using a face protection shield. We could demonstrate that a significant ocular radiation exposure can occur, depending on the number of X-ray images and on the duration of fluoroscopy. Eye lens doses up to 0.025 mSv were recorded even using modern digital X-ray systems. Using face protection shields, this ocular radiation exposure can be reduced to a minimum. The International Commission on Radiological Protection (ICRP) recommendation of a mean eye lens dose of 20 mSv/year may be exceeded during repeated ureteroscopies by a high-volume surgeon. Using a face protection shield, the eye lens dose during ureteroscopy could be reduced to a minimum in a phantom model. Further investigations will show whether these results can be transferred to real life ureteroscopic procedures.

  13. The eyes have it: Using eye tracking to inform information processing strategies in multi-attributes choices.

    Science.gov (United States)

    Ryan, Mandy; Krucien, Nicolas; Hermens, Frouke

    2018-04-01

    Although choice experiments (CEs) are widely applied in economics to study choice behaviour, understanding of how individuals process attribute information remains limited. We show how eye-tracking methods can provide insight into how decisions are made. Participants completed a CE, while their eye movements were recorded. Results show that although the information presented guided participants' decisions, there were also several processing biases at work. Evidence was found of (a) top-to-bottom, (b) left-to-right, and (c) first-to-last order biases. Experimental factors (whether attributes are defined as "best" or "worst," choice task complexity, and attribute ordering) also influence information processing. How individuals visually process attribute information was shown to be related to their choices. Implications for the design and analysis of CEs and future research are discussed. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Evaluation of Color Settings in Aerial Images with the Use of Eye-Tracking User Study

    Science.gov (United States)

    Mirijovsky, J.; Popelka, S.

    2016-06-01

    The main aim of the presented paper is to find the most realistic and preferred color settings for four different types of surfaces in aerial images. This is achieved through a user study with eye-movement recording. Aerial images taken by an unmanned aerial system were used as stimuli. From each image, a square crop containing one of the studied types of surfaces (asphalt, concrete, water, soil, and grass) was selected. For each type of surface, the real reflectance value was obtained with the precise spectroradiometer ASD HandHeld 2. The device was used at the same time as the aerial images were captured, so lighting conditions and the state of vegetation were equal. The spectral resolution of the ASD device is better than 3.0 nm. To define the RGB values of a selected surface type, the spectral reflectance values recorded by the device were merged into wider groups, yielding three groups corresponding to the RGB color system. The captured images were edited with the graphic editor Photoshop CS6. Contrast, clarity, and brightness were edited for all surface types in the images. In this way, we obtained a set of 12 images of the same area with different color settings. These images were arranged in a grid and used as stimuli for the eye-tracking experiment. Eye-tracking is one of the methods of usability studies and is considered relatively objective. The eye-tracker SMI RED 250, with a sampling frequency of 250 Hz, was used in the study. The respondents were a group of 24 students of Geoinformatics and Geography. Their task was to select which image in the grid had the best color settings; the next task was to select which color settings they preferred. Respondents' answers were evaluated and the most realistic and most preferable color settings were found. An advantage of the eye-tracking evaluation was that the process of selecting the answers was also analyzed. Areas of Interest were marked around each image in the

  15. A Coupled Hidden Markov Random Field Model for Simultaneous Face Clustering and Tracking in Videos

    KAUST Repository

    Wu, Baoyuan; Hu, Bao-Gang; Ji, Qiang

    2016-01-01

    Face clustering and face tracking are two areas of active research in automatic facial video processing. They, however, have long been studied separately, despite the inherent link between them. In this paper, we propose to perform simultaneous face

  16. Face-off: A new identification procedure for child eyewitnesses.

    Science.gov (United States)

    Price, Heather L; Fitzgerald, Ryan J

    2016-09-01

    In 2 experiments, we introduce a new "face-off" procedure for child eyewitness identifications. The new procedure, which is premised on reducing the stimulus set size, was compared with the showup and simultaneous procedures in Experiment 1 and with modified versions of the simultaneous and elimination procedures in Experiment 2. Several benefits of the face-off procedure were observed: it was significantly more diagnostic than the showup procedure; it led to significantly more correct rejections of target-absent lineups than the simultaneous procedures in both experiments, and it led to greater information gain than the modified elimination and simultaneous procedures. The face-off procedure led to consistently more conservative responding than the simultaneous procedures in both experiments. Given the commonly cited concern that children are too lenient in their decision criteria for identification tasks, the face-off procedure may offer a concrete technique to reduce children's high choosing rates. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  17. Usability Analysis of Online Bank Login Interface Based on Eye Tracking Experiment

    Directory of Open Access Journals (Sweden)

    Xiaofang YUAN

    2014-02-01

    Full Text Available With the rapid development of information technology and the rapid popularization of online banking, more and more consumers use online banking. Studying the usability of the online banking interface, improving the user-friendliness of the web interface, and enhancing the attractiveness of the bank website have gradually become basic network marketing strategies for banks. Therefore, this study took three banks as examples, using Tobii T60XL eye-tracking equipment to record subjects' eye-tracking data (time to first fixation, fixation duration, blink count, and so on) while they logged in to the online banking web interface, and analyzed how webpage layout, colors, and the amount of information presented affect the usability of the online banking login interface. The results show that the login entry, account login information, and other key control buttons should be placed in the upper left corner so users can quickly locate them, and that the interface should present a moderate amount of information, with appropriate proportions, reasonable font sizes, and a harmonious, simple, and warm design style.

  18. Stochastic anomaly detection in eye-tracking data for quantification of motor symptoms in Parkinson's disease

    Science.gov (United States)

    Jansson, Daniel; Medvedev, Alexander; Axelson, Hans; Nyholm, Dag

    2013-10-01

    Two methods for distinguishing between healthy controls and patients diagnosed with Parkinson's disease by means of recorded smooth pursuit eye movements are presented and evaluated. Both methods are based on the principles of stochastic anomaly detection and make use of orthogonal series approximation for probability distribution estimation. The first method relies on the identification of a Wiener-type model of the smooth pursuit system and attempts to find statistically significant differences between the estimated parameters in healthy controls and patients with Parkinson's disease. The second method applies the same statistical method to distinguish between the gaze trajectories of healthy and Parkinson subjects attempting to track visual stimuli. Both methods show promising results, where healthy controls and patients with Parkinson's disease are effectively separated in terms of the considered metric. The results are preliminary because of the small number of participating test subjects, but they are indicative of the potential of the presented methods as diagnostic or staging tools for Parkinson's disease.
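
    The general anomaly-detection idea, estimating a probability distribution from healthy controls and flagging test values that fall in low-density regions, can be sketched as follows. A Gaussian kernel density estimate stands in here for the orthogonal-series estimator used in the study, and the tracking-error statistic and 5% threshold are assumed values.

      # Illustrative sketch: flag an anomalous smooth-pursuit tracking-error value
      # relative to a distribution estimated from healthy controls. A Gaussian kernel
      # density estimate stands in for the orthogonal-series estimator used in the
      # study; the error statistic and the 5% threshold are assumed values.
      import numpy as np
      from scipy.stats import gaussian_kde

      def is_anomalous(control_errors, test_error, quantile=0.05):
          """Anomalous if the test error's estimated density falls below the
          5th percentile of densities observed in the control sample."""
          density = gaussian_kde(control_errors)
          control_densities = density(control_errors)
          return density([test_error])[0] < np.quantile(control_densities, quantile)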

  19. The specificity of attentional biases by type of gambling: An eye-tracking study.

    Directory of Open Access Journals (Sweden)

    Daniel S McGrath

    Full Text Available A growing body of research indicates that gamblers develop an attentional bias for gambling-related stimuli. Compared to research on substance use, however, few studies have examined attentional biases in gamblers using eye-gaze tracking, which has many advantages over other measures of attention. In addition, previous studies of attentional biases in gamblers have not directly matched type of gambler with personally-relevant gambling cues. The present study investigated the specificity of attentional biases for individual types of gambling using an eye-gaze tracking paradigm. Three groups of participants (poker players, video lottery terminal/slot machine players, and non-gambling controls) took part in one test session in which they viewed 25 sets of four images (poker, VLTs/slot machines, bingo, and board games). Participants' eye fixations were recorded throughout each 8-second presentation of the four images. The results indicated that, as predicted, the two gambling groups preferentially attended to their primary form of gambling, whereas control participants attended to board games more than gambling images. The findings have clinical implications for the treatment of individuals with gambling disorder. Understanding the importance of personally-salient gambling cues will inform the development of effective attentional bias modification treatments for problem gamblers.

  20. The specificity of attentional biases by type of gambling: An eye-tracking study.

    Science.gov (United States)

    McGrath, Daniel S; Meitner, Amadeus; Sears, Christopher R

    2018-01-01

    A growing body of research indicates that gamblers develop an attentional bias for gambling-related stimuli. Compared to research on substance use, however, few studies have examined attentional biases in gamblers using eye-gaze tracking, which has many advantages over other measures of attention. In addition, previous studies of attentional biases in gamblers have not directly matched type of gambler with personally-relevant gambling cues. The present study investigated the specificity of attentional biases for individual types of gambling using an eye-gaze tracking paradigm. Three groups of participants (poker players, video lottery terminal/slot machine players, and non-gambling controls) took part in one test session in which they viewed 25 sets of four images (poker, VLTs/slot machines, bingo, and board games). Participants' eye fixations were recorded throughout each 8-second presentation of the four images. The results indicated that, as predicted, the two gambling groups preferentially attended to their primary form of gambling, whereas control participants attended to board games more than gambling images. The findings have clinical implications for the treatment of individuals with gambling disorder. Understanding the importance of personally-salient gambling cues will inform the development of effective attentional bias modification treatments for problem gamblers.

  1. Williams syndrome and its cognitive profile: the importance of eye movements

    Directory of Open Access Journals (Sweden)

    Van Herwegen J

    2015-06-01

    Full Text Available Jo Van Herwegen, Department of Psychology, Kingston University London, Surrey, UK. Abstract: People with Williams syndrome (WS), a rare neurodevelopmental disorder caused by a deletion on the long arm of chromosome 7, often show an uneven cognitive profile, with better performance on language and face recognition tasks than on visuospatial and number tasks. Recent studies have shown that this specific cognitive profile in WS is a result of atypical developmental processes that interact with and affect brain development from infancy onward. Using examples from language, face processing, number, and visuospatial studies, this review evaluates current evidence from eye-tracking and developmental studies and argues that domain-general processes, such as the ability to plan or execute saccades, influence the development of these domain-specific outcomes. Although more research on eye movements in WS is required, the importance of eye movements for cognitive development suggests a possible intervention pathway to improve cognitive abilities in this population. Keywords: Williams syndrome, eye movements, face processing, language, number, visuospatial abilities

  2. Route planning with transportation network maps: an eye-tracking study.

    Science.gov (United States)

    Grison, Elise; Gyselinck, Valérie; Burkhardt, Jean-Marie; Wiener, Jan Malte

    2017-09-01

    Planning routes using transportation network maps is a common task that has received little attention in the literature. Here, we present a novel eye-tracking paradigm to investigate the psychological processes and mechanisms involved in such route planning. In the experiment, participants were first presented with an origin and destination pair before we presented them with fictitious public transportation maps. Their task was to find the connecting route that required the minimum number of transfers. Based on participants' gaze behaviour, each trial was split into two phases: (1) the search for origin and destination, i.e., the initial phase of the trial until participants gazed at both origin and destination at least once, and (2) the route planning and selection phase. Comparisons of other eye-tracking measures between these phases and the time to complete them, which depended on the complexity of the planning task, suggest that these two phases are indeed distinct and supported by different cognitive processes. For example, participants spent more time attending to the centre of the map during the initial search phase, before directing their attention to connecting stations, where transitions between lines were possible. Our results provide novel insights into the psychological processes involved in route planning from maps. The findings are discussed in relation to current theories of route planning.

  3. Surface ablation with iris recognition and dynamic rotational eye tracking-based tissue saving treatment with the Technolas 217z excimer laser.

    Science.gov (United States)

    Prakash, Gaurav; Agarwal, Amar; Kumar, Dhivya Ashok; Jacob, Soosan; Agarwal, Athiya; Maity, Amrita

    2011-03-01

    To evaluate the visual and refractive outcomes and expected benefits of Tissue Saving Treatment algorithm-guided surface ablation with iris recognition and dynamic rotational eye tracking. This prospective, interventional case series comprised 122 eyes (70 patients). Pre- and postoperative assessment included uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), refraction, and higher order aberrations. All patients underwent Tissue Saving Treatment algorithm-guided surface ablation with iris recognition and dynamic rotational eye tracking using the Technolas 217z 100-Hz excimer platform (Technolas Perfect Vision GmbH). Follow-up was performed up to 6 months postoperatively. Theoretical benefit analysis was performed to evaluate the algorithm's outcomes compared to others. Preoperative spherocylindrical power was sphere -3.62 ± 1.60 diopters (D) (range: 0 to -6.75 D), cylinder -1.15 ± 1.00 D (range: 0 to -3.50 D), and spherical equivalent -4.19 ± 1.60 D (range: -7.75 to -2.00 D). At 6 months, 91% (111/122) of eyes were within ± 0.50 D of attempted correction. Postoperative UDVA was comparable to preoperative CDVA at 1 month (P=.47) and progressively improved at 6 months (P=.004). Two eyes lost one line of CDVA at 6 months. Theoretical benefit analysis revealed that of 101 eyes with astigmatism, 29 would have had cyclotorsion-induced astigmatism of ≥ 10% if iris recognition and dynamic rotational eye tracking were not used. Furthermore, the mean percentage decrease in maximum depth of ablation by using the Tissue Saving Treatment was 11.8 ± 2.9% over Aspheric, 17.8 ± 6.2% over Personalized, and 18.2 ± 2.8% over Planoscan algorithms. Tissue saving surface ablation with iris recognition and dynamic rotational eye tracking was safe and effective in this series of eyes. Copyright 2011, SLACK Incorporated.

  4. Predictive saccades in children and adults: A combined fMRI and eye tracking study.

    Directory of Open Access Journals (Sweden)

    Katerina Lukasova

    Full Text Available Saccades were assessed in 21 adults (age 24 years, SD = 4) and 15 children (age 11 years, SD = 1), using combined functional magnetic resonance imaging (fMRI) and eye tracking. Subjects visually tracked a point on a horizontal line under four conditions: time- and position-predictable (PRED), position-predictable (pPRED), time-predictable (tPRED), and visually guided saccades (SAC). In the PRED condition, but not in pPRED, tPRED, or SAC, both groups produced predictive saccades with latencies below 80 ms. In task-versus-group comparisons, children showed less efficient learning of predictive saccades than adults (adults = 48%, children = 34%, p = 0.05). In adults, brain activation was found in frontal and occipital regions in the PRED task, in the intraparietal sulcus in the pPRED task, and in the frontal eye field, posterior intraparietal sulcus, and medial regions in the tPRED task. A group-task interaction was found in the supplementary eye field and visual cortex in the PRED task, and in the frontal cortex, including the right frontal eye field and left frontal pole, in the pPRED condition. These results indicate that the basic visuomotor circuitry is present in both adults and children, but that fine-tuning of activation according to the task's temporal and spatial demands matures late in development.

  5. Face, neck, and eye protection: adapting body armour to counter the changing patterns of injuries on the battlefield.

    Science.gov (United States)

    Breeze, J; Horsfall, I; Hepper, A; Clasper, J

    2011-12-01

    Recent international papers have suggested an urgent need for new methods of protecting the face, neck, and eyes in battle. We made a systematic analysis to identify all papers that reported the incidence and mortality of combat wounds to the face, eyes, or neck in the 21st century, and any papers that described methods of protecting the face, neck, or eyes. Neck wounds were found in 2-11% of injuries in battle and were associated with high mortality, but no new methods of protecting the neck were identified. Facial wounds were found in 6-30% of injuries in battle, but despite the psychological effects of this type of injury only one paper suggested methods for protection. If soldiers wore existing eye protection, they potentially reduced the mean incidence of eye injuries in combat from the 4.5% found in this analysis to 0.5%. Given the need to balance protection with the functional requirements of the individual soldier, a multidisciplinary approach is required. Military surgeons are well placed to work with material scientists and biomechanical engineers to suggest modifications to the design of both personal and vehicle-mounted protection. Further research is needed to find out how effective current methods of protecting the neck are, and to develop innovative methods of protecting the vulnerable regions of the neck and face. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.

  6. Comparing search patterns in digital breast tomosynthesis and full-field digital mammography: an eye tracking study.

    Science.gov (United States)

    Aizenman, Avi; Drew, Trafton; Ehinger, Krista A; Georgian-Smith, Dianne; Wolfe, Jeremy M

    2017-10-01

    As a promising imaging modality, digital breast tomosynthesis (DBT) leads to better diagnostic performance than traditional full-field digital mammograms (FFDM) alone. DBT allows different planes of the breast to be visualized, reducing occlusion from overlapping tissue. Although DBT is gaining popularity, best practices for search strategies in this medium are unclear. Eye tracking allowed us to describe search patterns adopted by radiologists searching DBT and FFDM images. Eleven radiologists examined eight DBT and FFDM cases. Observers marked suspicious masses with mouse clicks. Eye position was recorded at 1000 Hz and was coregistered with slice/depth plane as the radiologist scrolled through the DBT images, allowing a 3-D representation of eye position. Hit rate for masses was higher for tomography cases than 2-D cases and DBT led to lower false positive rates. However, search duration was much longer for DBT cases than FFDM. DBT was associated with longer fixations but similar saccadic amplitude compared with FFDM. When comparing radiologists' eye movements to a previous study, which tracked eye movements as radiologists read chest CT, we found DBT viewers did not align with previously identified "driller" or "scanner" strategies, although their search strategy most closely aligns with a type of vigorous drilling strategy.

  7. Mouse cursor movement and eye tracking data as an indicator of pathologists' attention when viewing digital whole slide images

    Directory of Open Access Journals (Sweden)

    Vignesh Raghunath

    2012-01-01

    Full Text Available Context: Digital pathology has the potential to dramatically alter the way pathologists work, yet little is known about pathologists' viewing behavior while interpreting digital whole slide images. While tracking pathologist eye movements when viewing digital slides may be the most direct method of capturing pathologists' viewing strategies, this technique is cumbersome and technically challenging to use in remote settings. Tracking pathologist mouse cursor movements may serve as a practical method of studying digital slide interpretation, and mouse cursor data may illuminate pathologists' viewing strategies and time expenditures in their interpretive workflow. Aims: To evaluate the utility of mouse cursor movement data, in addition to eye-tracking data, in studying pathologists' attention and viewing behavior. Settings and Design: Pathologists (N = 7) viewed 10 digital whole slide images of breast tissue that were selected using a random stratified sampling technique to include a range of breast pathology diagnoses (benign/atypia, carcinoma in situ, and invasive breast cancer). A panel of three expert breast pathologists established a consensus diagnosis for each case using a modified Delphi approach. Materials and Methods: Participants' foveal vision was tracked using the SensoMotoric Instruments RED 60 Hz eye-tracking system. Mouse cursor movement was tracked using a custom MATLAB script. Statistical Analysis Used: Data on eye-gaze and mouse cursor position were gathered at fixed intervals and analyzed using distance comparisons and regression analyses by slide diagnosis and pathologist expertise. Pathologists' accuracy (defined as percent agreement with the expert consensus diagnoses) and efficiency (accuracy and speed) were also analyzed. Results: Mean viewing time per slide was 75.2 seconds (SD = 38.42). Accuracy (percent agreement with expert consensus) by diagnosis type was: 83% (benign/atypia); 48% (carcinoma in situ); and 93% (invasive)

  8. Mouse cursor movement and eye tracking data as an indicator of pathologists’ attention when viewing digital whole slide images

    Science.gov (United States)

    Raghunath, Vignesh; Braxton, Melissa O.; Gagnon, Stephanie A.; Brunyé, Tad T.; Allison, Kimberly H.; Reisch, Lisa M.; Weaver, Donald L.; Elmore, Joann G.; Shapiro, Linda G.

    2012-01-01

    Context: Digital pathology has the potential to dramatically alter the way pathologists work, yet little is known about pathologists’ viewing behavior while interpreting digital whole slide images. While tracking pathologist eye movements when viewing digital slides may be the most direct method of capturing pathologists’ viewing strategies, this technique is cumbersome and technically challenging to use in remote settings. Tracking pathologist mouse cursor movements may serve as a practical method of studying digital slide interpretation, and mouse cursor data may illuminate pathologists’ viewing strategies and time expenditures in their interpretive workflow. Aims: To evaluate the utility of mouse cursor movement data, in addition to eye-tracking data, in studying pathologists’ attention and viewing behavior. Settings and Design: Pathologists (N = 7) viewed 10 digital whole slide images of breast tissue that were selected using a random stratified sampling technique to include a range of breast pathology diagnoses (benign/atypia, carcinoma in situ, and invasive breast cancer). A panel of three expert breast pathologists established a consensus diagnosis for each case using a modified Delphi approach. Materials and Methods: Participants’ foveal vision was tracked using SensoMotoric Instruments RED 60 Hz eye-tracking system. Mouse cursor movement was tracked using a custom MATLAB script. Statistical Analysis Used: Data on eye-gaze and mouse cursor position were gathered at fixed intervals and analyzed using distance comparisons and regression analyses by slide diagnosis and pathologist expertise. Pathologists’ accuracy (defined as percent agreement with the expert consensus diagnoses) and efficiency (accuracy and speed) were also analyzed. Results: Mean viewing time per slide was 75.2 seconds (SD = 38.42). Accuracy (percent agreement with expert consensus) by diagnosis type was: 83% (benign/atypia); 48% (carcinoma in situ); and 93% (invasive). Spatial

  9. Feature saliency in judging the sex and familiarity of faces.

    Science.gov (United States)

    Roberts, T; Bruce, V

    1988-01-01

    Two experiments are reported on the effect of feature masking on judgements of the sex and familiarity of faces. In experiment 1 the effect of masking the eyes, nose, or mouth of famous and nonfamous, male and female faces on response times in two tasks was investigated. In the first, recognition, task only masking of the eyes had a significant effect on response times. In the second, sex-judgement, task masking of the nose gave rise to a significant and large increase in response times. In experiment 2 it was found that when facial features were presented in isolation in a sex-judgement task, responses to noses were at chance level, unlike those for eyes or mouths. It appears that visual information available from the nose in isolation from the rest of the face is not sufficient for sex judgement, yet masking of the nose may disrupt the extraction of information about the overall topography of the face, information that may be more useful for sex judgement than for identification of a face.

  10. Eye tracking, strategies, and sex differences in virtual navigation.

    Science.gov (United States)

    Andersen, Nicolas E; Dahmani, Louisa; Konishi, Kyoko; Bohbot, Véronique D

    2012-01-01

    Reports of sex differences in wayfinding have typically used paradigms sensitive to the female advantage (navigation by landmarks) or sensitive to the male advantage (navigation by cardinal directions, Euclidian coordinates, environmental geometry, and absolute distances). The current virtual navigation paradigm allowed both men and women an equal advantage. We studied sex differences by systematically varying the number of landmarks. Eye tracking was used to quantify sex differences in landmark utilisation as participants solved an eight-arm radial maze task within different virtual environments. To solve the task, participants were required to remember the locations of target objects within environments containing 0, 2, 4, 6, or 8 landmarks. We found that, as the number of landmarks available in the environment increases, the proportion of time men and women spend looking at landmarks and the number of landmarks they use to find their way increases. Eye tracking confirmed that women rely more on landmarks to navigate, although landmark fixations were also associated with an increase in task completion time. Sex differences in navigational behaviour occurred only in environments devoid of landmarks and disappeared in environments containing multiple landmarks. Moreover, women showed sustained landmark-oriented gaze, while men's decreased over time. Finally, we found that men and women use spatial and response strategies to the same extent. Together, these results shed new light on the discrepancy in landmark utilisation between men and women and help explain the differences in navigational behaviour previously reported. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Self-directed learning skills in air-traffic control training; An eye-tracking approach

    NARCIS (Netherlands)

    Van Meeuwen, Ludo; Brand-Gruwel, Saskia; Van Merriënboer, Jeroen; Bock, Jeano; Kirschner, Paul A.

    2011-01-01

    Van Meeuwen, L. W., Brand-Gruwel, S., De Bock, J. J. P. R., Kirschner, P. A., & Van Merriënboer, J. J. G. (2010, September). Self-directed Learning Skills in Air-traffic Control Training; An Eye-tracking Approach. Paper presented at the European Association for Aviation Psychology, Budapest.

  12. The socialization effect on decision making in the Prisoner's Dilemma game: An eye-tracking study.

    Directory of Open Access Journals (Sweden)

    Anastasia G Peshkovskaya

    Full Text Available We used a mobile eye-tracking system (in the form of glasses) to study the characteristics of visual perception in decision making in the Prisoner's Dilemma game. In each experiment, one of the 12 participants was equipped with eye-tracking glasses. The experiment was conducted in three stages: an anonymous Individual Game stage against a randomly chosen partner (one of the 12 other participants of the experiment); a Socialization stage, in which the participants were divided into two groups; and a Group Game stage, in which the participants played with partners in the groups. After each round, the respondent received information about his or her personal score in the last round and the overall winner of the game at the moment. The study proves that eye-tracking systems can be used for studying the process of decision making and forecasting. The total viewing time and the time of fixation on areas corresponding to noncooperative decisions is related to the participants' overall level of cooperation. The increase in the total viewing time and the time of fixation on the areas of noncooperative choice is due to a preference for noncooperative decisions and a decrease in the overall level of cooperation. The number of fixations on the group attributes is associated with group identity, but does not necessarily lead to cooperative behavior.

  13. Explaining Sad People's Memory Advantage for Faces.

    Science.gov (United States)

    Hills, Peter J; Marquardt, Zoe; Young, Isabel; Goodenough, Imogen

    2017-01-01

    Sad people recognize faces more accurately than happy people (Hills et al., 2011). We devised four hypotheses for this finding that are tested between in the current study. The four hypotheses are: (1) sad people engage in more expert processing associated with face processing; (2) sad people are motivated to be more accurate than happy people in an attempt to repair their mood; (3) sad people have a defocused attentional strategy that allows more information about a face to be encoded; and (4) sad people scan more of the face than happy people leading to more facial features to be encoded. In Experiment 1, we found that dysphoria (sad mood often associated with depression) was not correlated with the face-inversion effect (a measure of expert processing) nor with response times but was correlated with defocused attention and recognition accuracy. Experiment 2 established that dysphoric participants detected changes made to more facial features than happy participants. In Experiment 3, using eye-tracking we found that sad-induced participants sampled more of the face whilst avoiding the eyes. Experiment 4 showed that sad-induced people demonstrated a smaller own-ethnicity bias. These results indicate that sad people show different attentional allocation to faces than happy and neutral people.

  14. Remote eye care screening for rural veterans with Technology-based Eye Care Services: a quality improvement project.

    Science.gov (United States)

    Maa, April Y; Wojciechowski, Barbara; Hunt, Kelly; Dismuke, Clara; Janjua, Rabeea; Lynch, Mary G

    2017-01-01

    Veterans are at high risk for eye disease because of age and comorbid conditions. Access to eye care is challenging within the entire Veterans Hospital Administration's network of hospitals and clinics in the USA because it is the third busiest outpatient clinical service and growing at a rate of 9% per year. Rural and highly rural veterans face many more barriers to accessing eye care because of distance, cost to travel, and difficulty finding care in the community as many live in medically underserved areas. Also, rural veterans may be diagnosed in later stages of eye disease than their non-rural counterparts due to lack of access to specialty care. In March 2015, Technology-based Eye Care Services (TECS) was launched from the Atlanta Veterans Affairs (VA) as a quality improvement project to provide eye screening services for rural veterans. By tracking multiple measures including demographic and access to care metrics, data shows that TECS significantly improved access to care, with 33% of veterans receiving same-day access and >98% of veterans receiving an appointment within 30 days of request. TECS also provided care to a significant percentage of homeless veterans, 10.6% of the patients screened. Finally, TECS reduced healthcare costs, saving the VA up to US$148 per visit and approximately US$52 per patient in round trip travel reimbursements when compared to completing a face-to-face exam at the medical center. Overall savings to the VA system in this early phase of TECS totaled US$288,400, about US$41,200 per month. Other healthcare facilities may be able to use a similar protocol to extend care to at-risk patients.

  15. Efficient human face detection in infancy.

    Science.gov (United States)

    Jakobsen, Krisztina V; Umstead, Lindsey; Simpson, Elizabeth A

    2016-01-01

    Adults detect conspecific faces more efficiently than heterospecific faces; however, the development of this own-species bias (OSB) remains unexplored. We tested whether 6- and 11-month-olds exhibit OSB in their attention to human and animal faces in complex visual displays with high perceptual load (25 images competing for attention). Infants (n = 48) and adults (n = 43) passively viewed arrays containing a face among 24 non-face distractors while we measured their gaze with remote eye tracking. While OSB is typically not observed until about 9 months, we found that, already by 6 months, human faces were more likely to be detected, were detected more quickly (attention capture), and received longer looks (attention holding) than animal faces. These data suggest that 6-month-olds already exhibit OSB in face detection efficiency, consistent with perceptual attunement. This specialization may reflect the biological importance of detecting conspecific faces, a foundational ability for early social interactions. © 2015 Wiley Periodicals, Inc.

  16. Eye tracking to evaluate evidence recognition in crime scene investigations.

    Science.gov (United States)

    Watalingam, Renuka Devi; Richetelli, Nicole; Pelz, Jeff B; Speir, Jacqueline A

    2017-11-01

    Crime scene analysts are the core of criminal investigations; decisions made at the scene greatly affect the speed of analysis and the quality of conclusions, thereby directly impacting the successful resolution of a case. If an examiner fails to recognize the pertinence of an item on scene, the analyst's theory regarding the crime will be limited. Conversely, unselective evidence collection will most likely include irrelevant material, thus increasing a forensic laboratory's backlog and potentially sending the investigation into an unproductive and costly direction. Therefore, it is critical that analysts recognize and properly evaluate forensic evidence that can assess the relative support of differing hypotheses related to event reconstruction. With this in mind, the aim of this study was to determine if quantitative eye tracking data and qualitative reconstruction accuracy could be used to distinguish investigator expertise. In order to assess this, 32 participants were successfully recruited and categorized as experts or trained novices based on their practical experiences and educational backgrounds. Each volunteer then processed a mock crime scene while wearing a mobile eye tracker, wherein visual fixations, durations, search patterns, and reconstruction accuracy were evaluated. The eye tracking data (dwell time and task percentage on areas of interest or AOIs) were compared using Earth Mover's Distance (EMD) and the Needleman-Wunsch (N-W) algorithm, revealing significant group differences for both search duration (EMD), as well as search sequence (N-W). More specifically, experts exhibited greater dissimilarity in search duration, but greater similarity in search sequences than their novice counterparts. In addition to the quantitative visual assessment of examiner variability, each participant's reconstruction skill was assessed using a 22-point binary scoring system, in which significant group differences were detected as a function of total
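
    The two comparisons named above (Earth Mover's Distance over dwell times and Needleman-Wunsch alignment over AOI visit orders) can be prototyped as below. This is a minimal sketch under the assumption that dwell times and AOI label sequences have already been extracted; the SciPy 1-D Wasserstein distance stands in for the authors' EMD implementation, and the alignment scoring values are placeholders.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def dwell_emd(dwell_a, dwell_b):
    """1-D Earth Mover's (Wasserstein) distance between two dwell-time samples."""
    return wasserstein_distance(dwell_a, dwell_b)

def needleman_wunsch(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between two AOI label sequences."""
    n, m = len(seq_a), len(seq_b)
    score = np.zeros((n + 1, m + 1))
    score[:, 0] = gap * np.arange(n + 1)
    score[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1, j - 1] + (match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
            score[i, j] = max(diag, score[i - 1, j] + gap, score[i, j - 1] + gap)
    return score[n, m]

# Toy AOI sequences and dwell times for two hypothetical examiners.
expert = ["entry", "body", "weapon", "body", "window"]
novice = ["entry", "window", "body", "entry", "weapon"]
print(needleman_wunsch(expert, novice))
print(dwell_emd([2.1, 3.4, 0.8, 1.5], [1.0, 1.2, 4.0, 0.9]))
```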

  17. Incidental L2 Vocabulary Acquisition "from" and "while" Reading: An Eye-Tracking Study

    Science.gov (United States)

    Pellicer-Sánchez, Ana

    2016-01-01

    Previous studies have shown that reading is an important source of incidental second language (L2) vocabulary acquisition. However, we still do not have a clear picture of what happens when readers encounter unknown words. Combining offline (vocabulary tests) and online (eye-tracking) measures, the incidental acquisition of vocabulary knowledge…

  18. Is having similar eye movement patterns during face learning and recognition beneficial for recognition performance? Evidence from hidden Markov modeling.

    Science.gov (United States)

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2017-12-01

    The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
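
    A stripped-down version of the HMM idea can be sketched with hmmlearn: fit one Gaussian HMM per group of fixation sequences and label a new sequence by whichever model gives it the higher log-likelihood. This is only a simplified stand-in for the clustering approach the abstract describes; the state count, the toy data, and the "holistic"/"analytic" grouping here are assumptions for illustration.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

def fit_group_hmm(fixation_sequences, n_states=3, seed=0):
    """Fit one Gaussian HMM to a list of (x, y) fixation sequences."""
    X = np.concatenate(fixation_sequences)
    lengths = [len(s) for s in fixation_sequences]
    model = GaussianHMM(n_components=n_states, covariance_type="full",
                        n_iter=200, random_state=seed)
    model.fit(X, lengths)
    return model

def classify_pattern(sequence, holistic_model, analytic_model):
    """Label a fixation sequence by the group model with the higher likelihood."""
    return ("analytic" if analytic_model.score(sequence) > holistic_model.score(sequence)
            else "holistic")

# Fabricated fixation data: tight center-of-face fixations vs. more spread-out ones.
rng = np.random.default_rng(1)
holistic_seqs = [rng.normal([400, 300], 15, size=(20, 2)) for _ in range(6)]
analytic_seqs = [rng.normal([400, 260], 60, size=(20, 2)) for _ in range(6)]
h_model = fit_group_hmm(holistic_seqs)
a_model = fit_group_hmm(analytic_seqs)
print(classify_pattern(holistic_seqs[0], h_model, a_model))
```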

  19. Eye movements in Multiple Object Tracking systematically lagging behind the scene content

    Czech Academy of Sciences Publication Activity Database

    Lukavský, Jiří

    2013-01-01

    Vol. 42, Suppl (2013), pp. 42-43. ISSN 0301-0066. [36th European Conference on Visual Perception, 25.08.2013-29.08.2013, Bremen] R&D Projects: GA ČR GA13-28709S Institutional support: RVO:68081740 Keywords: eye movements * attention * multiple object tracking Subject RIV: AN - Psychology http://www.perceptionweb.com/abstract.cgi?id=v130146

  20. Number line estimation strategies in children with mathematical learning difficulties measured by eye tracking

    NARCIS (Netherlands)

    van 't Noordende, Jaccoline E|info:eu-repo/dai/nl/369862422; van Hoogmoed, Anne H|info:eu-repo/dai/nl/314839496; Schot, Willemijn D; Kroesbergen, Evelyn H|info:eu-repo/dai/nl/241607949

    INTRODUCTION: Number line estimation is one of the skills related to mathematical performance. Previous research has shown that eye tracking can be used to identify differences in the estimation strategies children with dyscalculia and children with typical mathematical development use on number

  1. Number line estimation strategies in children with mathematical learning difficulties measured by eye tracking

    NARCIS (Netherlands)

    van’t Noordende, Jaccoline E.; van Hoogmoed, Anne H.; Schot, Willemijn D.; Kroesbergen, Evelyn H.

    2016-01-01

    Introduction: Number line estimation is one of the skills related to mathematical performance. Previous research has shown that eye tracking can be used to identify differences in the estimation strategies children with dyscalculia and children with typical mathematical development use on number

  2. EEG and Eye Tracking Signatures of Target Encoding during Structured Visual Search

    Directory of Open Access Journals (Sweden)

    Anne-Marie Brouwer

    2017-05-01

    Full Text Available EEG and eye tracking variables are potential sources of information about the underlying processes of target detection and storage during visual search. Fixation duration, pupil size and event related potentials (ERPs) locked to the onset of fixation or saccade (saccade-related potentials, SRPs) have been reported to differ dependent on whether a target or a non-target is currently fixated. Here we focus on the question of whether these variables also differ between targets that are subsequently reported (hits) and targets that are not (misses). Observers were asked to scan 15 locations that were consecutively highlighted for 1 s in pseudo-random order. Highlighted locations displayed either a target or a non-target stimulus with two, three or four targets per trial. After scanning, participants indicated which locations had displayed a target. To induce memory encoding failures, participants concurrently performed an aurally presented math task (high load condition). In a low load condition, participants ignored the math task. As expected, more targets were missed in the high compared with the low load condition. For both conditions, eye tracking features distinguished better between hits and misses than between targets and non-targets (with larger pupil size and shorter fixations for missed compared with correctly encoded targets). In contrast, SRP features distinguished better between targets and non-targets than between hits and misses (with average SRPs showing larger P300 waveforms for targets than for non-targets). Single trial classification results were consistent with these averages. This work suggests complementary contributions of eye and EEG measures in potential applications to support search and detect tasks. SRPs may be useful to monitor what objects are relevant to an observer, and eye variables may indicate whether the observer should be reminded of them later.
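
    Single-trial classification of the kind mentioned at the end of this abstract can be sketched with scikit-learn. The features below (fixation duration, pupil size, a P300-window amplitude) and the labels are fabricated placeholders; this is not the authors' feature set or classifier, only an illustration of the analysis step.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_trials = 200

# Hypothetical single-trial features for fixated targets.
fixation_duration_s = rng.normal(0.35, 0.08, n_trials)
pupil_size_mm = rng.normal(3.5, 0.4, n_trials)
p300_amplitude_uv = rng.normal(4.0, 2.0, n_trials)
X = np.column_stack([fixation_duration_s, pupil_size_mm, p300_amplitude_uv])

# Hypothetical labels: 1 = target later reported (hit), 0 = target missed.
y = rng.integers(0, 2, n_trials)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("Mean cross-validated AUC:",
      cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```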

  3. Preliminary Experience Using Eye-Tracking Technology to Differentiate Novice and Expert Image Interpretation for Ultrasound-Guided Regional Anesthesia.

    Science.gov (United States)

    Borg, Lindsay K; Harrison, T Kyle; Kou, Alex; Mariano, Edward R; Udani, Ankeet D; Kim, T Edward; Shum, Cynthia; Howard, Steven K

    2018-02-01

    Objective measures are needed to guide the novice's pathway to expertise. Within and outside medicine, eye tracking has been used for both training and assessment. We designed this study to test the hypothesis that eye tracking may differentiate novices from experts in static image interpretation for ultrasound (US)-guided regional anesthesia. We recruited novice anesthesiology residents and regional anesthesiology experts. Participants wore eye-tracking glasses, were shown 5 sonograms of US-guided regional anesthesia, and were asked a series of anatomy-based questions related to each image while their eye movements were recorded. The answer to each question was a location on the sonogram, defined as the area of interest (AOI). The primary outcome was the total gaze time in the AOI (seconds). Secondary outcomes were the total gaze time outside the AOI (seconds), total time to answer (seconds), and time to first fixation on the AOI (seconds). Five novices and 5 experts completed the study. Although the gaze time (mean ± SD) in the AOI was not different between groups (7 ± 4 seconds for novices and 7 ± 3 seconds for experts; P = .150), the gaze time outside the AOI was greater for novices (75 ± 18 versus 44 ± 4 seconds for experts; P = .005). The total time to answer and total time to first fixation in the AOI were both shorter for experts. Experts in US-guided regional anesthesia take less time to identify sonoanatomy and spend less unfocused time away from a target compared to novices. Eye tracking is a potentially useful tool to differentiate novices from experts in the domain of US image interpretation. © 2017 by the American Institute of Ultrasound in Medicine.
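
    The outcome measures used here (gaze time inside and outside the AOI, time to first fixation on the AOI) are straightforward aggregations over a fixation log. A rough sketch, assuming a hypothetical list of fixations that already carry an in/out-of-AOI flag (not the study's actual analysis code):

```python
def gaze_metrics(fixations):
    """Summarize AOI-based gaze measures for one question.

    fixations: time-ordered list of dicts like
      {"start_s": 0.5, "duration_s": 0.3, "in_aoi": True}
    """
    in_aoi = sum(f["duration_s"] for f in fixations if f["in_aoi"])
    out_aoi = sum(f["duration_s"] for f in fixations if not f["in_aoi"])
    first_aoi = next((f["start_s"] for f in fixations if f["in_aoi"]), None)
    return {
        "gaze_time_in_aoi_s": in_aoi,
        "gaze_time_outside_aoi_s": out_aoi,
        "time_to_first_aoi_fixation_s": first_aoi,
    }

# Toy example: three fixations, the first outside the AOI.
print(gaze_metrics([
    {"start_s": 0.0, "duration_s": 0.4, "in_aoi": False},
    {"start_s": 0.5, "duration_s": 0.6, "in_aoi": True},
    {"start_s": 1.2, "duration_s": 0.3, "in_aoi": True},
]))
```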

  4. 78 FR 71621 - Agency Information Collection Activities; Proposed Collection; Comment Request; Eye Tracking...

    Science.gov (United States)

    2013-11-29

    ... advertising can be distracting. However, the effects of this kind of distraction during the major statement of... attention to and processing of information in the ad. We plan to collect descriptive eye tracking data on participants' attention to (1) the superimposed text during the major statement of risk information and (2) the...

  5. Does sound structure affect word learning? An eye-tracking study of Danish learning toddlers

    DEFF Research Database (Denmark)

    Trecca, Fabio; Bleses, Dorthe; Madsen, Thomas O.

    2018-01-01

    closely related languages. In support of this hypothesis, recent work has shown that the phonetic properties of Danish negatively affect online language processing in young Danish children. In this study, we used eye-tracking to investigate whether the challenges associated with processing Danish also...

  6. Information acquisition during online decision-making : A model-based exploration using eye-tracking data

    NARCIS (Netherlands)

    Shi, W.; Wedel, M.; Pieters, R.

    2013-01-01

    We propose a model of eye-tracking data to understand information acquisition patterns on attribute-by-product matrices, which are common in online choice environments such as comparison websites. The objective is to investigate how consumers gather product and attribute information from moment to

  7. When Art Moves the Eyes: A Behavioral and Eye-Tracking Study

    Science.gov (United States)

    Massaro, Davide; Savazzi, Federica; Di Dio, Cinzia; Freedberg, David; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella

    2012-01-01

    The aim of this study was to investigate, using eye-tracking technique, the influence of bottom-up and top-down processes on visual behavior while subjects, naïve to art criticism, were presented with representational paintings. Forty-two subjects viewed color and black and white paintings (Color) categorized as dynamic or static (Dynamism) (bottom-up processes). Half of the images represented natural environments and half human subjects (Content); all stimuli were displayed under aesthetic and movement judgment conditions (Task) (top-down processes). Results on gazing behavior showed that content-related top-down processes prevailed over low-level visually-driven bottom-up processes when a human subject is represented in the painting. On the contrary, bottom-up processes, mediated by low-level visual features, particularly affected gazing behavior when looking at nature-content images. We discuss our results proposing a reconsideration of the definition of content-related top-down processes in accordance with the concept of embodied simulation in art perception. PMID:22624007

  8. Attentional System for Face Detection and Tracking

    Directory of Open Access Journals (Sweden)

    Leonardo Pinto da Silva Panta Leão

    2011-04-01

    Full Text Available The human visual system quickly performs complex decisions due, in part, to the attentional system, which positions the most relevant targets in the center of the visual field, the region with the greatest concentration of photoreceptor cells. The attentional system involves sensory, cognitive, and also mechanical elements, because the eye and head muscles must be activated to produce movement. In this paper we propose a face detector system that, like the biological system, produces a coordinated movement with the purpose of positioning the target image in the center of the camera's visual field. The developed system has two distinct parts, one responsible for video pattern recognition and the other for controlling the mechanical part, implemented as processes that communicate with each other via sockets.

  9. Reading the Mind in the Eyes or Reading between the Lines? Theory of Mind Predicts Collective Intelligence Equally Well Online and Face-To-Face

    Science.gov (United States)

    Engel, David; Woolley, Anita Williams; Jing, Lisa X.; Chabris, Christopher F.; Malone, Thomas W.

    2014-01-01

    Recent research with face-to-face groups found that a measure of general group effectiveness (called “collective intelligence”) predicted a group’s performance on a wide range of different tasks. The same research also found that collective intelligence was correlated with the individual group members’ ability to reason about the mental states of others (an ability called “Theory of Mind” or “ToM”). Since ToM was measured in this work by a test that requires participants to “read” the mental states of others from looking at their eyes (the “Reading the Mind in the Eyes” test), it is uncertain whether the same results would emerge in online groups where these visual cues are not available. Here we find that: (1) a collective intelligence factor characterizes group performance approximately as well for online groups as for face-to-face groups; and (2) surprisingly, the ToM measure is equally predictive of collective intelligence in both face-to-face and online groups, even though the online groups communicate only via text and never see each other at all. This provides strong evidence that ToM abilities are just as important to group performance in online environments with limited nonverbal cues as they are face-to-face. It also suggests that the Reading the Mind in the Eyes test measures a deeper, domain-independent aspect of social reasoning, not merely the ability to recognize facial expressions of mental states. PMID:25514387

  10. Eye-Tracking Technology and the Dynamics of Natural Gaze Behavior in Sports: A Systematic Review of 40 Years of Research.

    Science.gov (United States)

    Kredel, Ralf; Vater, Christian; Klostermann, André; Hossner, Ernst-Joachim

    2017-01-01

    Reviewing 60 studies on natural gaze behavior in sports, it becomes clear that, over the last 40 years, the use of eye-tracking devices has considerably increased. Specifically, this review reveals the large variance of methods applied, analyses performed, and measures derived within the field. The results of sub-sample analyses suggest that sports-related eye-tracking research strives, on the one hand, for ecologically valid test settings (i.e., viewing conditions and response modes), while on the other, for experimental control along with high measurement accuracy (i.e., controlled test conditions with high-frequency eye-trackers linked to algorithmic analyses). To meet both demands, some promising compromises of methodological solutions have been proposed-in particular, the integration of robust mobile eye-trackers in motion-capture systems. However, as the fundamental trade-off between laboratory and field research cannot be solved by technological means, researchers need to carefully weigh the arguments for one or the other approach by accounting for the respective consequences. Nevertheless, for future research on dynamic gaze behavior in sports, further development of the current mobile eye-tracking methodology seems highly advisable to allow for the acquisition and algorithmic analyses of larger amounts of gaze-data and further, to increase the explanatory power of the derived results.

  11. Eye-Tracking Technology and the Dynamics of Natural Gaze Behavior in Sports: A Systematic Review of 40 Years of Research

    Directory of Open Access Journals (Sweden)

    Ralf Kredel

    2017-10-01

    Full Text Available Reviewing 60 studies on natural gaze behavior in sports, it becomes clear that, over the last 40 years, the use of eye-tracking devices has considerably increased. Specifically, this review reveals the large variance of methods applied, analyses performed, and measures derived within the field. The results of sub-sample analyses suggest that sports-related eye-tracking research strives, on the one hand, for ecologically valid test settings (i.e., viewing conditions and response modes), while on the other, for experimental control along with high measurement accuracy (i.e., controlled test conditions with high-frequency eye-trackers linked to algorithmic analyses). To meet both demands, some promising compromises of methodological solutions have been proposed—in particular, the integration of robust mobile eye-trackers in motion-capture systems. However, as the fundamental trade-off between laboratory and field research cannot be solved by technological means, researchers need to carefully weigh the arguments for one or the other approach by accounting for the respective consequences. Nevertheless, for future research on dynamic gaze behavior in sports, further development of the current mobile eye-tracking methodology seems highly advisable to allow for the acquisition and algorithmic analyses of larger amounts of gaze-data and further, to increase the explanatory power of the derived results.

  12. Quantifying Novice and Expert Differences in Visual Diagnostic Reasoning in Veterinary Pathology Using Eye-Tracking Technology.

    Science.gov (United States)

    Warren, Amy L; Donnon, Tyrone L; Wagg, Catherine R; Priest, Heather; Fernandez, Nicole J

    2018-01-18

    Visual diagnostic reasoning is the cognitive process by which pathologists reach a diagnosis based on visual stimuli (cytologic, histopathologic, or gross imagery). Currently, there is little to no literature examining visual reasoning in veterinary pathology. The objective of the study was to use eye tracking to establish baseline quantitative and qualitative differences between the visual reasoning processes of novice and expert veterinary pathologists viewing cytology specimens. Novice and expert participants were each shown 10 cytology images and asked to formulate a diagnosis while wearing eye-tracking equipment (10 slides) and while concurrently verbalizing their thought processes using the think-aloud protocol (5 slides). Compared to novices, experts demonstrated significantly higher diagnostic accuracy (preasoning and script-inductive knowledge structures with system 2 (analytic) reasoning to verify their diagnosis.

  13. Reproducibility of retinal nerve fiber layer thickness measures using eye tracking in children with nonglaucomatous optic neuropathy.

    Science.gov (United States)

    Rajjoub, Raneem D; Trimboli-Heidler, Carmelina; Packer, Roger J; Avery, Robert A

    2015-01-01

    To determine the intra- and intervisit reproducibility of circumpapillary retinal nerve fiber layer (RNFL) thickness measures using eye tracking-assisted spectral-domain optical coherence tomography (SD OCT) in children with nonglaucomatous optic neuropathy. Prospective longitudinal study. Circumpapillary RNFL thickness measures were acquired with SD OCT using the eye-tracking feature at 2 separate study visits. Children with normal and abnormal vision (visual acuity ≥ 0.2 logMAR above normal and/or visual field loss) who demonstrated clinical and radiographic stability were enrolled. Intra- and intervisit reproducibility was calculated for the global average and 9 anatomic sectors by calculating the coefficient of variation and intraclass correlation coefficient. Forty-two subjects (median age 8.6 years, range 3.9-18.2 years) met inclusion criteria and contributed 62 study eyes. Both the abnormal and normal vision cohort demonstrated the lowest intravisit coefficient of variation for the global RNFL thickness. Intervisit reproducibility remained good for those with normal and abnormal vision, although small but statistically significant increases in the coefficient of variation were observed for multiple anatomic sectors in both cohorts. The magnitude of visual acuity loss was significantly associated with the global (β = 0.026, P < .01) and temporal sector coefficient of variation (β = 0.099, P < .01). SD OCT with eye tracking demonstrates highly reproducible RNFL thickness measures. Subjects with vision loss demonstrate greater intra- and intervisit variability than those with normal vision. Copyright © 2015 Elsevier Inc. All rights reserved.
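
    The reproducibility statistics named here can be computed from a subjects-by-repeats matrix of thickness values. The sketch below uses a generic coefficient of variation and a one-way random-effects ICC formula with invented numbers; it is not the study's statistical code, and other ICC forms (e.g., two-way models) may have been used.

```python
import numpy as np

def coefficient_of_variation(measures):
    """CV (%) of repeated measurements for one eye."""
    m = np.asarray(measures, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

def icc_oneway(data):
    """One-way random-effects ICC(1,1) for an (eyes x repeated scans) matrix."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)                 # between-subject mean square
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)

# Invented global RNFL thickness values (micrometers): 4 eyes x 3 repeated scans.
rnfl = np.array([[101, 103, 102],
                 [ 96,  95,  97],
                 [110, 111, 109],
                 [ 88,  90,  89]])
print([round(coefficient_of_variation(row), 2) for row in rnfl])
print(round(icc_oneway(rnfl), 3))
```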

  14. Does attention to health labels predict a healthy food choice? An eye-tracking study

    NARCIS (Netherlands)

    Fenko, Anna; Nicolaas, Iris; Galetzka, Mirjam

    2018-01-01

    Visual attention to health labels can indicate a subsequent healthy food choice. This study looked into the relative effects of Choices logos and traffic light labels on consumers’ visual attention and food choice. A field experiment using mobile eye-tracking was conducted in a Dutch university

  15. Cognitive Food Processing in Binge-Eating Disorder: An Eye-Tracking Study

    OpenAIRE

    Sperling, Ingmar; Baldofski, Sabrina; Lüthold, Patrick; Hilbert, Anja

    2017-01-01

    Studies indicate an attentional bias towards food in binge-eating disorder (BED); however, more evidence on attentional engagement and disengagement and processing of multiple attention-competing stimuli is needed. This study aimed to examine visual attention to food and non-food stimuli in BED. In n = 23 participants with full-syndrome and subsyndromal BED and n = 23 individually matched healthy controls, eye-tracking was used to assess attention to food and non-food stimuli during a free ex...

  16. Differential emotion attribution to neutral faces of own and other races.

    Science.gov (United States)

    Hu, Chao S; Wang, Qiandong; Han, Tong; Weare, Ethan; Fu, Genyue

    2017-02-01

    Past research has demonstrated differential recognition of emotion on faces of different races. This paper reports the first study to explore differential emotion attribution to neutral faces of different races. Chinese and Caucasian adults viewed a series of Chinese and Caucasian neutral faces and judged their outward facial expression: neutral, positive, or negative. The results showed that both Chinese and Caucasian viewers perceived more Chinese faces than Caucasian faces as neutral. Nevertheless, Chinese viewers attributed positive emotion to Caucasian faces more than to Chinese faces, whereas Caucasian viewers attributed negative emotion to Caucasian faces more than to Chinese faces. Moreover, Chinese viewers attributed negative and neutral emotion to the faces of both races without significant difference in frequency, whereas Caucasian viewers mostly attributed neutral emotion to the faces. These differences between Chinese and Caucasian viewers may be due to differential visual experience, culture, racial stereotype, or expectation of the experiment. We also used eye tracking among the Chinese participants to explore the relationship between face-processing strategy and emotion attribution to neutral faces. The results showed that the interaction between emotion attribution and face race was significant on face-processing strategy, such as fixation proportion on eyes and saccade amplitude. Additionally, pupil size during processing Caucasian faces was larger than during processing Chinese faces.

  17. Research of Digital Interface Layout Design based on Eye-tracking

    Directory of Open Access Journals (Sweden)

    Shao Jiang

    2015-01-01

    Full Text Available The aim of this paper is to address the low operating efficiency and poor human-computer interaction caused by irrational layouts of digital interfaces for complex systems. Three common layout structures for digital interfaces are presented, and five layout types appropriate for multilevel digital interfaces are summarized. Using eye-tracking technology, the advantages and disadvantages of the different layout types were assessed through subjects’ search efficiency. Based on these data and results, this study constructed a matching model appropriate for multilevel digital interface layouts and verified that the task element is an important aspect of layout design. A scientific experimental model for research on digital interfaces for complex systems is provided. Both the data and the conclusions of the eye movement experiment provide a reference for the layout design of interfaces for complex systems with different task characteristics.

  18. NIRFaceNet: A Convolutional Neural Network for Near-Infrared Face Identification

    Directory of Open Access Journals (Sweden)

    Min Peng

    2016-10-01

    Full Text Available Near-infrared (NIR) face recognition has attracted increasing attention because of its advantage of illumination invariance. However, traditional face recognition methods based on NIR are designed for and tested in cooperative-user applications. In this paper, we present a convolutional neural network (CNN) for NIR face recognition (specifically face identification) in non-cooperative-user applications. The proposed NIRFaceNet is modified from GoogLeNet, but has a more compact structure designed specifically for the Chinese Academy of Sciences Institute of Automation (CASIA) NIR database and can achieve higher identification rates with less training time and less processing time. The experimental results demonstrate that NIRFaceNet has an overall advantage compared to other methods in the NIR face recognition domain when image blur and noise are present. The performance suggests that the proposed NIRFaceNet method may be more suitable for non-cooperative-user applications.
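
    NIRFaceNet itself is a modified GoogLeNet, which is too large to reproduce here; the block below is only a generic compact CNN classifier in PyTorch to illustrate the kind of single-channel NIR identification network the abstract describes. The layer sizes, input resolution, and identity count are invented placeholders.

```python
import torch
import torch.nn as nn

class CompactNIRNet(nn.Module):
    """Illustrative compact CNN for NIR face identification (not NIRFaceNet)."""

    def __init__(self, n_identities: int = 200):  # placeholder class count
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, n_identities)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)              # (B, 128, 1, 1)
        return self.classifier(x.flatten(1))

# Smoke test on a fake batch of 8 single-channel 128x128 NIR face crops.
logits = CompactNIRNet()(torch.randn(8, 1, 128, 128))
print(logits.shape)  # torch.Size([8, 200])
```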

  19. The Processing of Human Emotional Faces by Pet and Lab Dogs: Evidence for Lateralization and Experience Effects

    Science.gov (United States)

    Barber, Anjuli L. A.; Randi, Dania; Müller, Corsin A.; Huber, Ludwig

    2016-01-01

    Of all non-human animals, dogs are very likely the best decoders of human behavior. In addition to a high sensitivity to human attentive status and to ostensive cues, they are able to distinguish between individual human faces and even between human facial expressions. However, so far little is known about how they process human faces and to what extent this is influenced by experience. Here we present an eye-tracking study with dogs emanating from two different living environments and varying experience with humans: pet and lab dogs. The dogs were shown pictures of familiar and unfamiliar human faces expressing four different emotions. The results, extracted from several different eye-tracking measurements, revealed pronounced differences in the face processing of pet and lab dogs, thus indicating an influence of the amount of exposure to humans. In addition, there was some evidence for the influences of both the familiarity and the emotional expression of the face, and strong evidence for a left gaze bias. These findings, together with recent evidence for the dog's ability to discriminate human facial expressions, indicate that dogs are sensitive to some emotions expressed in human faces. PMID:27074009

  20. Explaining Sad People’s Memory Advantage for Faces

    Science.gov (United States)

    Hills, Peter J.; Marquardt, Zoe; Young, Isabel; Goodenough, Imogen

    2017-01-01

    Sad people recognize faces more accurately than happy people (Hills et al., 2011). We devised four hypotheses for this finding that are tested between in the current study. The four hypotheses are: (1) sad people engage in more expert processing associated with face processing; (2) sad people are motivated to be more accurate than happy people in an attempt to repair their mood; (3) sad people have a defocused attentional strategy that allows more information about a face to be encoded; and (4) sad people scan more of the face than happy people leading to more facial features to be encoded. In Experiment 1, we found that dysphoria (sad mood often associated with depression) was not correlated with the face-inversion effect (a measure of expert processing) nor with response times but was correlated with defocused attention and recognition accuracy. Experiment 2 established that dysphoric participants detected changes made to more facial features than happy participants. In Experiment 3, using eye-tracking we found that sad-induced participants sampled more of the face whilst avoiding the eyes. Experiment 4 showed that sad-induced people demonstrated a smaller own-ethnicity bias. These results indicate that sad people show different attentional allocation to faces than happy and neutral people. PMID:28261138

  1. Peer Assessment of Webpage Design: Behavioral Sequential Analysis Based on Eye-Tracking Evidence

    Science.gov (United States)

    Hsu, Ting-Chia; Chang, Shao-Chen; Liu, Nan-Cen

    2018-01-01

    This study employed an eye-tracking machine to record the process of peer assessment. Each web page was divided into several regions of interest (ROIs) based on the frame design and content. A total of 49 undergraduate students with a visual learning style participated in the experiment. This study investigated the peer assessment attitudes of the…

  2. The Impact of Early Bilingualism on Face Recognition Processes.

    Science.gov (United States)

    Kandel, Sonia; Burfin, Sabine; Méary, David; Ruiz-Tada, Elisa; Costa, Albert; Pascalis, Olivier

    2016-01-01

    Early linguistic experience has an impact on the way we decode audiovisual speech in face-to-face communication. The present study examined whether differences in visual speech decoding could be linked to a broader difference in face processing. To identify a phoneme we have to do an analysis of the speaker's face to focus on the relevant cues for speech decoding (e.g., locating the mouth with respect to the eyes). Face recognition processes were investigated through two classic effects in face recognition studies: the Other-Race Effect (ORE) and the Inversion Effect. Bilingual and monolingual participants did a face recognition task with Caucasian faces (own race), Chinese faces (other race), and cars that were presented in an Upright or Inverted position. The results revealed that monolinguals exhibited the classic ORE. Bilinguals did not. Overall, bilinguals were slower than monolinguals. These results suggest that bilinguals' face processing abilities differ from monolinguals'. Early exposure to more than one language may lead to a perceptual organization that goes beyond language processing and could extend to face analysis. We hypothesize that these differences could be due to the fact that bilinguals focus on different parts of the face than monolinguals, making them more efficient in other race face processing but slower. However, more studies using eye-tracking techniques are necessary to confirm this explanation.

  3. The impact of early bilingualism on face recognition processes

    Directory of Open Access Journals (Sweden)

    Sonia Kandel

    2016-07-01

    Full Text Available Early linguistic experience has an impact on the way we decode audiovisual speech in face-to-face communication. The present study examined whether differences in visual speech decoding could be linked to a broader difference in face processing. To identify a phoneme we have to do an analysis of the speaker’s face to focus on the relevant cues for speech decoding (e.g., locating the mouth with respect to the eyes). Face recognition processes were investigated through two classic effects in face recognition studies: the Other Race Effect (ORE) and the Inversion Effect. Bilingual and monolingual participants did a face recognition task with Caucasian faces (own race), Chinese faces (other race), and cars that were presented in an Upright or Inverted position. The results revealed that monolinguals exhibited the classic ORE. Bilinguals did not. Overall, bilinguals were slower than monolinguals. These results suggest that bilinguals’ face processing abilities differ from monolinguals’. Early exposure to more than one language may lead to a perceptual organization that goes beyond language processing and could extend to face analysis. We hypothesize that these differences could be due to the fact that bilinguals focus on different parts of the face than monolinguals, making them more efficient in other race face processing but slower. However, more studies using eye-tracking techniques are necessary to confirm this explanation.

  4. A Mini-Review of Track And Field’s Talent-Identification Models in Iran and Some Designated Countries

    OpenAIRE

    Ebrahim Ghasemzadeh Mirkolaee; Seyed Mohammad Hossein Razavi; Saeed Amirnejad

    2013-01-01

    Talent identification and training of athletes at the basic levels in track and field requires codifying a proper model, as in any other system, so that duplication is prevented and the right path is known. The federation of track and field started to codify the national talent-identification scheme in track and field in 1385. Hence, the present study examines track-and-field talent-identification patterns in some designated countries and compares them with the codified pattern in Iran. The r...

  5. Prioritized Identification of Attractive and Romantic Partner Faces in Rapid Serial Visual Presentation.

    Science.gov (United States)

    Nakamura, Koyo; Arai, Shihoko; Kawabata, Hideaki

    2017-11-01

    People are sensitive to facial attractiveness because it is an important biological and social signal. As such, our perceptual and attentional system seems biased toward attractive faces. We tested whether attractive faces capture attention and enhance memory access in an involuntary manner using a dual-task rapid serial visual presentation (dtRSVP) paradigm, wherein multiple faces were successively presented for 120 ms. In Experiment 1, participants (N = 26) were required to identify two female faces embedded in a stream of animal faces as distractors. The results revealed that identification of the second female target (T2) was better when it was attractive compared to neutral or unattractive. In Experiment 2, we investigated whether perceived attractiveness affects T2 identification (N = 27). To this end, we performed another dtRSVP task involving participants in a romantic partnership with the opposite sex, wherein T2 was their romantic partner's face. The results demonstrated that a romantic partner's face was correctly identified more often than was the face of a friend or unknown person. Furthermore, the greater the intensity of passionate love participants felt for their partner (as measured by the Passionate Love Scale), the more often they correctly identified their partner's face. Our experiments indicate that attractive and romantic partners' faces facilitate the identification of the faces in an involuntary manner.

  6. Enhancing a CAVE with Eye Tracking System for Human-Computer Interaction Research in 3D Visualization

    National Research Council Canada - National Science Library

    Hix, Deborah

    1999-01-01

    The objective of this award was to purchase and install two ISCAN Inc. Eye Tracking Systems and associated equipment to create a unique set-up for research in fully immersive virtual environments (VEs...

  7. Attentional biases in body dysmorphic disorder (BDD): Eye-tracking using the emotional Stroop task.

    Science.gov (United States)

    Toh, Wei Lin; Castle, David J; Rossell, Susan L

    2017-04-01

    Body dysmorphic disorder (BDD) is characterised by repetitive behaviours and/or mental acts occurring in response to preoccupations with perceived defects or flaws in physical appearance. This study aimed to examine attentional biases in BDD via the emotional Stroop task with two modifications: i) incorporating an eye-tracking paradigm, and ii) employing an obsessive-compulsive disorder (OCD) control group. Twenty-one BDD, 19 OCD and 21 HC participants, who were age-, sex-, and IQ-matched, were included. A card version of the emotional Stroop task was employed based on seven 10-word lists: (i) BDD-positive, (ii) BDD-negative, (iii) OCD-checking, (iv) OCD-washing, (v) general positive, (vi) general threat, and (vii) neutral (as baseline). Participants were asked to read aloud words and word colours consecutively, thereby yielding accuracy and latency scores. Eye-tracking parameters were also measured. Participants with BDD exhibited significant Stroop interference for BDD-negative words relative to HC participants, as shown by extended colour-naming latencies. In contrast, the OCD group did not exhibit Stroop interference for OCD-related nor general threat words. Only mild eye-tracking anomalies were uncovered in clinical groups. Inspection of individual scanning styles and fixation heat maps however revealed that viewing strategies adopted by clinical groups were generally disorganised, with avoidance of certain disorder-relevant words and considerable visual attention devoted to non-salient card regions. The operation of attentional biases to negative disorder-specific words was corroborated in BDD. Future replication studies using other paradigms are vital, given potential ambiguities inherent in emotional Stroop task interpretation. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. EyeMusic: Making Music with the Eyes

    OpenAIRE

    Hornof, Anthony J.; Sato, Linda

    2004-01-01

    Though musical performers routinely use eye movements to communicate with each other during musical performances, very few performers or composers have used eye tracking devices to direct musical compositions and performances. EyeMusic is a system that uses eye movements as an input to electronic music compositions. The eye movements can directly control the music, or the music can respond to the eyes moving around a visual scene. EyeMusic is implemented so that any composer using established...

  9. Factors Influencing the Use of Captions by Foreign Language Learners: An Eye-Tracking Study

    Science.gov (United States)

    Winke, Paula; Gass, Susan; Sydorenko, Tetyana

    2013-01-01

    This study investigates caption-reading behavior by foreign language (L2) learners and, through eye-tracking methodology, explores the extent to which the relationship between the native and target language affects that behavior. Second-year (4th semester) English-speaking learners of Arabic, Chinese, Russian, and Spanish watched 2 videos…

  10. The more-or-less morphing face illusion: a case of fixation-dependent modulation.

    Science.gov (United States)

    Van Lier, Rob; Koning, Arno

    2014-01-01

    A visual illusion is presented in which the perceived changes in a morphing sequence depend on eye movements. The phenomenon is illustrated using face morphs: when tracking a moving dot superimposed on a face morphing sequence, the changes in the morphing sequence seem rather small, but when the dot stops moving, the perceived extent of morphing suddenly becomes much larger. We explore this phenomenon further and discuss the observed effects.

  11. Comparative eye-tracking evaluation of scatterplots and parallel coordinates

    Directory of Open Access Journals (Sweden)

    Rudolf Netzel

    2017-06-01

    Full Text Available We investigate task performance and reading characteristics for scatterplots (Cartesian coordinates) and parallel coordinates. In a controlled eye-tracking study, we asked 24 participants to assess the relative distance of points in multidimensional space, depending on the diagram type (parallel coordinates or a horizontal collection of scatterplots), the number of data dimensions (2, 4, 6, or 8), and the relative distance between points (15%, 20%, or 25%). For a given reference point and two target points, we instructed participants to choose the target point that was closer to the reference point in multidimensional space. We present a visual scanning model that describes different strategies to solve this retrieval task for both diagram types, and propose corresponding hypotheses that we test using task completion time, accuracy, and gaze positions as dependent variables. Our results show that scatterplots outperform parallel coordinates significantly in 2 dimensions, however, the task was solved more quickly and more accurately with parallel coordinates in 8 dimensions. The eye-tracking data further shows significant differences between Cartesian and parallel coordinates, as well as between different numbers of dimensions. For parallel coordinates, there is a clear trend toward shorter fixations and longer saccades with increasing number of dimensions. Using an area-of-interest (AOI) based approach, we identify different reading strategies for each diagram type: For parallel coordinates, the participants’ gaze frequently jumped back and forth between pairs of axes, while axes were rarely focused on when viewing Cartesian coordinates. We further found that participants’ attention is biased: toward the center of the whole plot for parallel coordinates and skewed to the center/left side for Cartesian coordinates. We anticipate that these results may support the design of more effective visualizations for multidimensional data.
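
    The participants' task reduces to comparing Euclidean distances in n-dimensional space, so the correct answer for each trial can be computed directly. A tiny sketch with invented 8-dimensional stimulus values (not the study's stimuli):

```python
import numpy as np

def closer_target(reference, target_1, target_2):
    """Return which target is nearer to the reference in n-D Euclidean distance."""
    d1 = np.linalg.norm(np.asarray(target_1, float) - np.asarray(reference, float))
    d2 = np.linalg.norm(np.asarray(target_2, float) - np.asarray(reference, float))
    return ("target_1" if d1 < d2 else "target_2", round(d1, 3), round(d2, 3))

# Invented 8-dimensional trial.
ref = [0.2, 0.5, 0.9, 0.1, 0.4, 0.7, 0.3, 0.6]
t1  = [0.3, 0.5, 0.8, 0.2, 0.4, 0.6, 0.3, 0.7]
t2  = [0.6, 0.1, 0.5, 0.4, 0.9, 0.2, 0.8, 0.1]
print(closer_target(ref, t1, t2))
```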

  12. EyeFrame: Real-time memory aid improves human multitasking via domain-general eye tracking procedures

    Directory of Open Access Journals (Sweden)

    P. Taylor

    2015-09-01

    Full Text Available OBJECTIVE: We developed an extensively general closed-loop system to improve human interaction in various multitasking scenarios, with semi-autonomous agents, processes, and robots. BACKGROUND: Much technology is converging toward semi-independent processes with intermittent human supervision distributed over multiple computerized agents. Human operators multitask notoriously poorly, in part due to cognitive load and limited working memory. To multitask optimally, users must remember task order, e.g., the most neglected task, since longer times not monitoring an element indicate greater probability of need for user input. The secondary task of monitoring attention history over multiple spatial tasks requires similar cognitive resources as primary tasks themselves. Humans cannot reliably make more than ~2 decisions/s. METHODS: Participants managed a range of 4-10 semi-autonomous agents performing rescue tasks. To optimize monitoring and controlling multiple agents, we created an automated short term memory aid, providing visual cues from users' gaze history. Cues indicated when and where to look next, and were derived from an inverse of eye fixation recency. RESULTS: Contingent eye tracking algorithms drastically improved operator performance, increasing multitasking capacity. The gaze aid reduced biases, and reduced cognitive load, measured by smaller pupil dilation. CONCLUSIONS: Our eye aid likely helped by delegating short-term memory to the computer, and by reducing decision making load. Past studies used eye position for gaze-aware control and interactive updating of displays in application-specific scenarios, but ours is the first to successfully implement domain-general algorithms. Procedures should generalize well to: process control, factory operations, robot control, surveillance, aviation, air traffic control, driving, military, mobile search and rescue, and many tasks where probability of utility is predicted by duration since last
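
    The cue described above is "an inverse of eye fixation recency": the longer an agent has gone unfixated, the stronger its cue. A minimal sketch of that idea, assuming each agent's time of last fixation is known (the agent names and the normalization are invented):

```python
def recency_cues(last_fixation_times, now):
    """Cue weight per agent: longer since the last fixation -> stronger cue.

    last_fixation_times: dict mapping agent_id -> time (s) of most recent fixation.
    Returns weights normalized to sum to 1.
    """
    elapsed = {agent: max(now - t, 0.0) for agent, t in last_fixation_times.items()}
    total = sum(elapsed.values()) or 1.0
    return {agent: e / total for agent, e in elapsed.items()}

# Toy example: four agents, agent "C" neglected the longest and so cued most strongly.
print(recency_cues({"A": 9.5, "B": 8.0, "C": 2.0, "D": 7.2}, now=10.0))
```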

  13. A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras

    Science.gov (United States)

    Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.

    2006-05-01

    A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial features detection (eyes, nasal root, nose and mouth) are first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimension of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.

  14. Using eye tracking to understand the effects of brand placement disclosure types in television programs

    NARCIS (Netherlands)

    Boerman, S.C.; van Reijmersdal, E.A.; Neijens, P.C.

    2015-01-01

    This eye tracking experiment (N = 149) investigates the influence of different ways of disclosing brand placement on viewers’ visual attention, the use of persuasion knowledge, and brand responses. The results showed that (1) a combination of text ("This program contains product placement") and a

  15. Application of eye-tracking in the testing of drivers: A review of research

    Directory of Open Access Journals (Sweden)

    Bronisław Kapitaniak

    2015-12-01

    Full Text Available Recording and analyzing eye movements provide important elements for understanding the nature of the task of driving a vehicle. This article reviews the literature on eye movement strategies employed by drivers of vehicles (vehicle control, evaluation of the situation by analyzing essential visual elements, navigation). Special focus was placed on the phenomenon of conspicuity, the probability of perceiving an object in the visual field and the factors that determine it. The article reports the methods of oculographic examination, with special emphasis on the non-invasive technique using corneal reflections, and the criteria for optimal selection of the test apparatus for drivers in experimental conditions (on a driving simulator) and in real conditions. Particular attention was also paid to the helmet- or glasses-type devices provided with 1 or 2 high-definition (HD) camcorders recording the field of vision and the direction of gaze, and the non-contact devices comprising 2 or 3 cameras and an infrared source to record eye and head movements, pupil diameter, eye convergence distance, duration and frequency of eyelid blinking. A review of the studies conducted using driver eye-tracking procedures was presented. The results, in addition to their cognitive value, can be used with success to optimize the strategy of driver training.

  16. Social attention in ASD: A review and meta-analysis of eye-tracking studies.

    Science.gov (United States)

    Chita-Tegmark, Meia

    2016-01-01

    Determining whether social attention is reduced in Autism Spectrum Disorder (ASD) and what factors influence social attention is important to our theoretical understanding of developmental trajectories of ASD and to designing targeted interventions for ASD. This meta-analysis examines data from 38 articles that used eye-tracking methods to compare individuals with ASD and typically developing (TD) controls. In this paper, the impact of eight factors on the size of the effect for the difference in social attention between these two groups is evaluated: age, non-verbal IQ matching, verbal IQ matching, motion, social content, ecological validity, audio input and attention bids. Results show that individuals with ASD spend less time attending to social stimuli than TD controls, with a mean effect size of 0.55. Social attention in ASD was most impacted when stimuli had a high social content (showed more than one person). This meta-analysis provides an opportunity to survey the eye-tracking research on social attention in ASD and to outline potential future research directions, more specifically research of social attention in the context of stimuli with high social content. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. The feasibility of automated eye tracking with the Early Childhood Vigilance Test of attention in younger HIV-exposed Ugandan children.

    Science.gov (United States)

    Boivin, Michael J; Weiss, Jonathan; Chhaya, Ronak; Seffren, Victoria; Awadu, Jorem; Sikorskii, Alla; Giordani, Bruno

    2017-07-01

    Tobii eye tracking was compared with webcam-based observer scoring on an animation viewing measure of attention (Early Childhood Vigilance Test; ECVT) to evaluate the feasibility of automating measurement and scoring. Outcomes from both scoring approaches were compared with the Mullen Scales of Early Learning (MSEL), Color-Object Association Test (COAT), and Behavior Rating Inventory of Executive Function for preschool children (BRIEF-P). A total of 44 children 44 to 65 months of age were evaluated with the ECVT, COAT, MSEL, and BRIEF-P. Tobii X2-30 portable infrared cameras were programmed to monitor pupil direction during the ECVT 6-min animation and compared with observer-based PROCODER webcam scoring. Children watched 78% of the cartoon (Tobii) compared with 67% (webcam scoring), although the 2 measures were highly correlated (r = .90, p = .001). It is possible for 2 such measures to be highly correlated even if one is consistently higher than the other (Bergemann et al., 2012). Both ECVT Tobii and webcam ECVT measures significantly correlated with COAT immediate recall (r = .37, p = .02 vs. r = .38, p = .01, respectively) and total recall (r = .33, p = .06 vs. r = .42, p = .005) measures. However, neither the Tobii eye tracking nor PROCODER webcam ECVT measures of attention correlated with MSEL composite cognitive performance or BRIEF-P global executive composite. ECVT scoring using Tobii eye tracking is feasible with at-risk very young African children and consistent with webcam-based scoring approaches in their correspondence to one another and other neurocognitive performance-based measures. By automating measurement and scoring, eye tracking technologies can improve the efficiency and help better standardize ECVT testing of attention in younger children. This holds promise for other neurodevelopmental tests where eye movements, tracking, and gaze length can provide important behavioral markers of neuropsychological and neurodevelopmental processes.

  18. An Eye-Tracking Study of Learning from Science Text with Concrete and Abstract Illustrations

    Science.gov (United States)

    Mason, Lucia; Pluchino, Patrik; Tornatora, Maria Caterina; Ariasi, Nicola

    2013-01-01

    This study investigated the online process of reading and the offline learning from an illustrated science text. The authors examined the effects of using a concrete or abstract picture to illustrate a text and adopted eye-tracking methodology to trace text and picture processing. They randomly assigned 59 eleventh-grade students to 3 reading…

  19. Predictive factor analysis for successful performance of iris recognition-assisted dynamic rotational eye tracking during laser in situ keratomileusis.

    Science.gov (United States)

    Prakash, Gaurav; Ashok Kumar, Dhivya; Agarwal, Amar; Jacob, Soosan; Sarvanan, Yoga; Agarwal, Athiya

    2010-02-01

    To analyze the predictive factors associated with success of iris recognition and dynamic rotational eye tracking on a laser in situ keratomileusis (LASIK) platform with active assessment and correction of intraoperative cyclotorsion. Interventional case series. Two hundred seventy-five eyes of 142 consecutive candidates underwent LASIK with attempted iris recognition and dynamic rotational tracking on the Technolas 217z100 platform (Technolas Perfect Vision, St Louis, Missouri, USA) at a tertiary care ophthalmic hospital. The main outcome measures were age, gender, flap creation method (femtosecond, microkeratome, epi-LASIK), success of static rotational tracking, ablation algorithm, pulses, and depth; preablation and intraablation rotational activity were analyzed and evaluated using regression models. Preablation static iris recognition was successful in 247 eyes, without difference among flap creation methods (P = .6). Age (partial correlation, -0.16; P = .014), amount of pulses (partial correlation, 0.39; P = 1.6 × 10^-8), and gender (P = .02) were significant predictive factors for the amount of intraoperative cyclodeviation. Tracking difficulties leading to linking the ablation with a new intraoperatively acquired iris image were more frequent with femtosecond-assisted flaps (P = 2.8 × 10^-7) and with greater intraoperative cyclotorsion (P = .02). However, the number of cases having nonresolvable failure of intraoperative rotational tracking was similar in the 3 flap creation methods (P = .22). Intraoperative cyclotorsional activity depends on age, gender, and duration of ablation (pulses delivered). Femtosecond flaps do not seem to have a disadvantage over microkeratome flaps as far as iris recognition and success of intraoperative dynamic rotational tracking are concerned. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  20. Developing an eye-tracking algorithm as a potential tool for early diagnosis of autism spectrum disorder in children.

    Directory of Open Access Journals (Sweden)

    Natalia I Vargas-Cuentas

    Full Text Available Autism spectrum disorder (ASD) currently affects nearly 1 in 160 children worldwide. In over two-thirds of evaluations, no validated diagnostics are used, and gold standard diagnostic tools are used in less than 5% of evaluations. Currently, the diagnosis of ASD requires lengthy and expensive tests, in addition to clinical confirmation. Therefore, fast, cheap, portable, and easy-to-administer screening instruments for ASD are required. Several studies have shown that children with ASD have a lower preference for social scenes compared with children without ASD. Based on this, eye-tracking and measurement of gaze preference for social scenes have been used as a screening tool for ASD. Currently available eye-tracking software requires intensive calibration, training, or holding of the head to prevent interference with gaze recognition, limiting its use in children with ASD. In this study, we designed a simple eye-tracking algorithm that does not require calibration or head holding, as a platform for future validation of a potential cost-effective ASD screening instrument. This system operates on a portable and inexpensive tablet to measure the gaze preference of children for social compared to abstract scenes. A child watches a one-minute stimulus video composed of a social scene projected on the left side and an abstract scene projected on the right side of the tablet's screen. We designed five stimulus videos by changing the social/abstract scenes. Every child observed all five videos in random order. We developed an eye-tracking algorithm that calculates the child's gaze preference for the social and abstract scenes, estimated as the percentage of the accumulated time that the child observes the left or right side of the screen, respectively. Twenty-three children without a prior history of ASD and 8 children with a clinical diagnosis of ASD were evaluated. The recorded video of the child's eye movement was analyzed both manually by an observer
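
    The gaze-preference measure described above, the percentage of accumulated viewing time on the left (social) versus right (abstract) half of the screen, follows directly from the horizontal gaze coordinate. A hedged sketch; the sample format, screen width, and handling of lost samples are assumptions:

```python
# Hedged sketch: per-video gaze preference as the share of valid samples
# falling on the left (social) vs. right (abstract) half of the screen.
import numpy as np

def gaze_preference(gaze_x, screen_width=1280):
    """gaze_x: horizontal gaze positions in pixels (NaN = tracking lost).
    Returns (% time on social/left scene, % time on abstract/right scene)."""
    x = np.asarray(gaze_x, dtype=float)
    valid = x[~np.isnan(x)]
    left = float(np.mean(valid < screen_width / 2)) * 100.0
    return left, 100.0 - left

samples = [300, 310, np.nan, 900, 320, 305, 950, np.nan, 310]
print(gaze_preference(samples))  # roughly (71.4, 28.6) for this toy trace
```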

  1. Classification of user performance in the Ruff Figural Fluency Test based on eye-tracking features

    Directory of Open Access Journals (Sweden)

    Borys Magdalena

    2017-01-01

    Full Text Available Cognitive assessment in neurological diseases represents a relevant topic due to its diagnostic significance in detecting disease, but also in assessing progress of the treatment. Computer-based tests provide objective and accurate measures of cognitive skills and capacity. The Ruff Figural Fluency Test (RFFT) provides information about non-verbal capacity for initiation, planning, and divergent reasoning. The traditional paper form of the test was transformed into a computer application and examined. The RFFT was applied in an experiment performed among 70 male students to assess their cognitive performance in the laboratory environment. Each student was examined in three sequential series. Besides the students’ performance, measured using in-app keylogging, eye-tracking data obtained by non-invasive video-based oculography were gathered, from which several features were extracted. Eye-tracking features combined with performance measures (total number of designs and/or error ratio) were applied in machine learning classification. Various classification algorithms were applied, and their accuracy, specificity, sensitivity and performance were compared.
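
    A hedged sketch of the final classification step: several off-the-shelf classifiers compared by cross-validation on a matrix of eye-tracking and performance features. The feature matrix and labels below are synthetic placeholders; the specific algorithms and metrics used in the study may differ:

```python
# Hedged sketch: comparing classifiers on eye-tracking + performance features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(70, 8))   # e.g. fixation counts/durations, saccade stats, error ratio
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=70) > 0).astype(int)  # class label

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC()),
                  ("random forest", RandomForestClassifier(n_estimators=100, random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: accuracy {scores.mean():.2f} +/- {scores.std():.2f}")
```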

  2. Adaptive Colour Feature Identification in Image for Object Tracking

    Directory of Open Access Journals (Sweden)

    Feng Su

    2012-01-01

    Full Text Available Identification and tracking of a moving object using computer vision techniques is important in robotic surveillance. In this paper, an adaptive colour filtering method is introduced for identifying and tracking a moving object appearing in image sequences. This filter is capable of automatically identifying the most salient colour feature of the moving object in the image and using this feature for a robot to track the object. The method enables the selected colour feature to adapt to changes in the surrounding conditions. A method of determining the region of interest of the moving target is also developed for the adaptive colour filter to extract colour information. Experimental results show that, using a camera mounted on a robot, the proposed methods can perform robustly in tracking a randomly moving object using adaptively selected colour features in a crowded environment.
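
    One simple way to realize the idea of automatically selecting the most salient colour feature inside a region of interest is to compare hue histograms of the ROI against the whole frame and keep the most over-represented hue band. The sketch below is only a hedged illustration in that spirit, not the paper's filter; the HSV array layout and bin width are assumptions:

```python
# Hedged sketch: pick the hue band that is most over-represented in the ROI
# relative to the whole frame, then use it as a mask for tracking.
import numpy as np

def salient_hue(hsv_frame, roi):
    """roi = (row0, row1, col0, col1); hue assumed in 0..180 (OpenCV-style)."""
    r0, r1, c0, c1 = roi
    edges = np.arange(0, 181, 10)
    roi_hist, _ = np.histogram(hsv_frame[r0:r1, c0:c1, 0], bins=edges, density=True)
    full_hist, _ = np.histogram(hsv_frame[..., 0], bins=edges, density=True)
    k = int(np.argmax(roi_hist - full_hist))      # hue band that stands out in the ROI
    return edges[k], edges[k + 1]

def colour_mask(hsv_frame, hue_range):
    lo, hi = hue_range
    return (hsv_frame[..., 0] >= lo) & (hsv_frame[..., 0] < hi)

# Usage idea: recompute the salient hue whenever tracking confidence drops,
# so the selected colour feature adapts to changing surroundings.
```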

  3. Face recognition system and method using face pattern words and face pattern bytes

    Science.gov (United States)

    Zheng, Yufeng

    2014-12-23

    The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns called face pattern words and face pattern bytes for face identification. The invention also provides for pattern recognition for identification other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer readable medium containing instructions on a computer system for face recognition and identification.

  4. TH-E-17A-10: Markerless Lung Tumor Tracking Based On Beams Eye View EPID Images

    Energy Technology Data Exchange (ETDEWEB)

    Chiu, T; Kearney, V; Liu, H; Jiang, L; Foster, R; Mao, W [UT Southwestern Medical Center, Dallas, Texas (United States); Rozario, T; Bereg, S [University of Texas at Dallas, Richardson, Texas (United States); Klash, S [Premier Cancer Centers, Dallas, TX (United States)

    2014-06-15

    Purpose: Dynamic tumor tracking or motion compensation techniques have been proposed to modify beam delivery to follow lung tumor motion on the fly. Conventional treatment plan QA cannot fully be performed in advance, since every delivery may be different. Markerless lung tumor tracking using beam's eye view EPID images provides an effective treatment evaluation mechanism. The purpose of this study is to improve the accuracy of the online markerless lung tumor motion tracking method. Methods: The lung tumor can be located on every frame of the MV images acquired during radiation therapy treatment by comparison with the corresponding digitally reconstructed radiograph (DRR). A kV-MV CT correspondence curve is applied to the planning kV CT to generate MV CT images for each patient in order to enhance the similarity between DRRs and MV treatment images. This kV-MV CT correspondence curve was obtained by scanning the same CT electron density phantom with a kV CT scanner and an MV scanner (Tomotherapy) or MV CBCT. Two sets of MV DRRs were then generated, for the tumor and for the anatomy without the tumor, as references for tracking the tumor on beam's eye view EPID images. Results: Phantom studies were performed on a Varian TrueBeam linac. MV treatment images were acquired continuously during each treatment beam delivery at 12 gantry angles using iTools. Markerless tumor tracking was applied with DRRs generated from the simulated MV CT. Tumors were tracked on every frame of the images and compared with the expected positions based on the programmed phantom motion. The average tracking error was found to be 2.3 mm. Conclusion: This algorithm is capable of detecting lung tumors in a complicated environment without implanted markers. It should be noted that the CT data has a slice thickness of 3 mm; this shows that the statistical accuracy is better than the spatial accuracy. This project has been supported by a Varian Research Grant.

  5. Perception of wine labels by generation Z: eye-tracking experiment

    Directory of Open Access Journals (Sweden)

    Stanislav Mokrý

    2016-11-01

    Full Text Available Product quality is the result of an involved technological process. For the customer, product quality is not easy to grasp, and the decision to buy the product is influenced more by the customer's perception of quality than by quality itself. It is therefore the result of many factors making an impression on the customer, their personal taste and the mood of the moment. The role of marketing is to understand the factors that have an impact on the customer. We need to identify the factors the customer is aware of and is able to communicate. Yet there are also a number of factors at play that affect the customer without their being aware of it. The aim of the paper is to get to know customer behaviour not just through the factors the customer communicates (answering questions) but to seek new methods that allow an objective examination of the customer's stimulus response, in our research case, using eye-tracking technology. The research study was conducted by way of an experiment with concurrent questioning in June 2016. There were 44 respondents taking part in the experiment, aged from 19 to 25 (Generation Z). The experiment set out to identify the importance of various visual attributes of a bottle of white wine, using a total of 7 stimuli. The experiment was carried out using the method called A/B testing, whereby one half of the respondents (A) was shown the original version of the stimulus and the second half (B) the modified stimulus. The eye-tracking research was carried out using a remote eye-tracker SMI RED 250 at a sampling frequency of 125 Hz. In answering questions, the respondents evaluated the importance of the factors of price, type, awards, the shape and colour of the bottle and information on the label, i.e. information about the producer (maker of the wine), wine variety, wine-growing region, country of origin, year of vintage and the sugar content indication. The paper concludes with a summary of the respective importance of the individual

  6. Discrimination between smiling faces: Human observers vs. automated face analysis.

    Science.gov (United States)

    Del Líbano, Mario; Calvo, Manuel G; Fernández-Martín, Andrés; Recio, Guillermo

    2018-05-11

    This study investigated (a) how prototypical happy faces (with happy eyes and a smile) can be discriminated from blended expressions with a smile but non-happy eyes, depending on type and intensity of the eye expression; and (b) how smile discrimination differs for human perceivers versus automated face analysis, depending on affective valence and morphological facial features. Human observers categorized faces as happy or non-happy, or rated their valence. Automated analysis (FACET software) computed seven expressions (including joy/happiness) and 20 facial action units (AUs). Physical properties (low-level image statistics and visual saliency) of the face stimuli were controlled. Results revealed, first, that some blended expressions (especially, with angry eyes) had lower discrimination thresholds (i.e., they were identified as "non-happy" at lower non-happy eye intensities) than others (especially, with neutral eyes). Second, discrimination sensitivity was better for human perceivers than for automated FACET analysis. As an additional finding, affective valence predicted human discrimination performance, whereas morphological AUs predicted FACET discrimination. FACET can be a valid tool for categorizing prototypical expressions, but is currently more limited than human observers for discrimination of blended expressions. Configural processing facilitates detection of in/congruence(s) across regions, and thus detection of non-genuine smiling faces (due to non-happy eyes). Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Use of Eye Tracking as an Innovative Instructional Method in Surgical Human Anatomy.

    Science.gov (United States)

    Sánchez-Ferrer, María Luísa; Grima-Murcia, María Dolores; Sánchez-Ferrer, Francisco; Hernández-Peñalver, Ana Isabel; Fernández-Jover, Eduardo; Sánchez Del Campo, Francisco

    Tobii glasses can record corneal infrared light reflection to track pupil position and to map gaze focusing in the video recording. Eye tracking has been proposed for use in training and coaching as a visually guided control interface. The aim of our study was to test the potential use of these glasses in various situations: explanations of anatomical structures on tablet-type electronic devices, explanations of anatomical models and dissected cadavers, and during the prosection thereof. An additional aim of the study was to test the use of the glasses during laparoscopies performed on Thiel-embalmed cadavers (which allow pneumoinsufflation and exact reproduction of the laparoscopic surgical technique). The device was also tried out in actual surgery (both laparoscopy and open surgery). We performed a pilot study using the Tobii glasses in the dissection room at our School of Medicine and in the operating room at our Hospital. To evaluate usefulness, a survey was designed for use among students, instructors, and practicing physicians. The results were satisfactory, with the usefulness of this tool supported by more than 80% positive responses to most questions. There was no inconvenience for surgeons, and patient safety was ensured in the real laparoscopy. To our knowledge, this is the first publication to demonstrate the usefulness of eye tracking in practical instruction of human anatomy, as well as in teaching clinical anatomy and surgical techniques in the dissection and operating rooms. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  8. Investigating the Effect of Complexity Factors in Stoichiometry Problems Using Logistic Regression and Eye Tracking

    Science.gov (United States)

    Tang, Hui; Kirk, John; Pienta, Norbert J.

    2014-01-01

    This paper includes two experiments, one investigating complexity factors in stoichiometry word problems, and the other identifying students' problem-solving protocols by using eye-tracking technology. The word problems used in this study had five different complexity factors, which were randomly assigned by a Web-based tool that we developed. The…

  9. Effects of Different Multimedia Presentations on Viewers' Information-Processing Activities Measured by Eye-Tracking Technology

    Science.gov (United States)

    Chuang, Hsueh-Hua; Liu, Han-Chin

    2012-01-01

    This study implemented eye-tracking technology to understand the impact of different multimedia instructional materials, i.e., five successive pages versus a single page with the same amount of information, on information-processing activities in 21 non-science-major college students. The findings showed that students demonstrated the same number…

  10. Eye tracking and climate change: How is climate literacy information processed?

    Science.gov (United States)

    Williams, C. C.; McNeal, K. S.

    2011-12-01

    The population of the Southeastern United States is perceived to be resistant to information regarding global climate change. The Climate Literacy Partnership in the Southeast (CLiPSE) project was formed to provide a resource for climate science information. As part of this project, we are evaluating the way that education materials influence the interpretation of climate change related information. At Mississippi State University, a study is being conducted examining how individuals from the Southeastern United States process climate change information and whether or not the interaction with such information impacts the interpretation of subsequent climate change related information. By observing the patterns both before and after an educational intervention, we are able to evaluate the effectiveness of the climate change information on an individual's interpretation of related information. Participants in this study view figures describing various types of climate change related information (CO2 emissions, sea levels, etc.) while their eye movements are tracked to determine a baseline for the way that they process this type of graphical data. Specifically, we are examining time spent viewing and number of fixations on critical portions of the figures prior to exposure to an educational document on climate change. Following the baseline period, we provide participants with portions of a computerized version of Climate Literacy: The Essential Principles of Climate Sciences that the participants read at their own pace while their eye movements are monitored. Participants are told that they will be given a test on the material after reading the resource. After reading the excerpt, participants are presented with a new set of climate change related figures to interpret (with eye tracking) along with a series of questions regarding information contained in the resource. We plan to evaluate changes that occur in the way that climate change related information is

  11. Discussion and Future Directions for Eye Tracker Development

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Mulvey, Fiona; Mardanbegi, Diako

    2011-01-01

    Eye and gaze tracking have a long history but there is still plenty of room for further development. In this concluding chapter for Section 6, we consider future perspectives for the development of eye and gaze tracking.

  12. De-warping of images and improved eye tracking for the scanning laser ophthalmoscope.

    Directory of Open Access Journals (Sweden)

    Phillip Bedggood

    Full Text Available A limitation of scanning laser ophthalmoscopy (SLO) is that eye movements during the capture of each frame distort the retinal image. Various sophisticated strategies have been devised to ensure that each acquired frame can be mapped quickly and accurately onto a chosen reference frame, but such methods are blind to distortions in the reference frame itself. Here we explore a method to address this limitation in software, and demonstrate its accuracy. We used high-speed (200 fps), high-resolution (~1 μm), flood-based imaging of the human retina with adaptive optics to obtain "ground truth" information on the retinal image and the motion of the eye. This information was used to simulate SLO video sequences at 20 fps, allowing us to compare various methods for eye-motion recovery and subsequent minimization of intra-frame distortion. We show that (a) a single frame can be near-perfectly recovered with perfect knowledge of intra-frame eye motion; (b) eye motion at a given time point within a frame can be accurately recovered by tracking the same strip of tissue across many frames, due to the stochastic symmetry of fixational eye movements (this approach is similar to, and easily adapted from, previously suggested strip-registration approaches); (c) quality of frame recovery decreases with the amplitude of eye movements; however, the proposed method is affected less by this than other state-of-the-art methods and so offers even greater advantages when fixation is poor. The new method could easily be integrated into existing image processing software, and we provide an example implementation written in Matlab.
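
    The strip-based idea referred to above can be illustrated with a brute-force normalized cross-correlation search: a narrow strip from the distorted frame is slid over the reference image, and the best-matching offset gives the eye position at that strip's acquisition time. The authors provide a Matlab implementation; the Python sketch below is a hedged illustration with assumed function names, window size, and search range:

```python
# Hedged sketch: register one strip of an SLO frame against a reference image
# by exhaustive normalized cross-correlation over small shifts.
import numpy as np

def register_strip(reference, strip, top_row, left_col, search=20):
    """Return the (dy, dx) shift, within +/- search pixels, that best aligns
    `strip` (nominally located at top_row, left_col) with `reference`."""
    h, w = strip.shape
    s = (strip - strip.mean()) / (strip.std() + 1e-9)
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r, c = top_row + dy, left_col + dx
            if r < 0 or c < 0 or r + h > reference.shape[0] or c + w > reference.shape[1]:
                continue
            patch = reference[r:r + h, c:c + w]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = float(np.mean(s * p))          # normalized cross-correlation
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift

# Repeating this for successive strips within a frame yields a time-resolved
# estimate of eye motion during that frame's acquisition.
```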

  13. Social experience does not abolish cultural diversity in eye movements

    Directory of Open Access Journals (Sweden)

    David J Kelly

    2011-05-01

    Full Text Available Adults from Eastern (e.g., China) and Western (e.g., USA) cultural groups display pronounced differences in a range of visual processing tasks. For example, the eye movement strategies used for information extraction during a variety of face processing tasks (e.g., identification and categorization of facial expressions of emotion) differ across cultural groups. Many previous studies have asserted that culture itself is responsible for shaping the way we process visual information, yet this has never been directly investigated. In the current study, we assessed the relative contribution of genetic and cultural factors by testing face processing in a population of British Born Chinese (BBC) adults using face recognition and expression classification tasks. Contrary to predictions made by the cultural differences framework, the majority of BBC adults deployed ‘Eastern’ eye movement strategies, while approximately 25% of participants displayed ‘Western’ strategies. Furthermore, the cultural eye movement strategies used by individuals were consistent across recognition and expression tasks. These findings suggest that ‘culture’ alone cannot straightforwardly account for diversity in eye movement patterns. Instead a more complex understanding of how the environment and individual experiences can influence the mechanisms that govern visual processing is required.

  14. When is protection from impact needed for the face as well as the eyes in occupational environments?

    Science.gov (United States)

    Dain, Stephen J; Huang, Rose; Tiao, Aimee; Chou, B Ralph

    2018-05-01

    The most commonly identified reason for requiring or using occupational eye and face protection is for protection against flying objects. Standards vary on what risk may require protection of the eyes alone and what requires protection for the whole face. Information on the minimum energy transfer for face damage to occur is not well-established. The heads of pigs were used as the common model for human skin. A 6 mm steel ball projected at velocities between 45 and 135 m/s was directed at the face area. Examples of impacts were filmed with a high-speed camera and the resulting damage was rated visually on a scale from 1 (no visible damage) to 5 (penetrated the skin and embedded in the flesh). The results for the cheek area indicate that 85 m/s is the velocity above which damage is more likely to occur unless the skin near the lip is included. For damage to the lip area to be avoided, the velocity needs to be 60 m/s or less. The present data support a maximum impact velocity of 85 m/s, provided the thinner and more vulnerable skin of the lids and orbital adnexa is protected. If the coverage area does not extend to the orbital adnexa, then the absolute upper limit for the velocity is 60 m/s. At this stage, eye-only protection, as represented by the lowest level of impact test in the standards in the form of a drop ball test, is not in question. © 2017 Optometry Australia.

  15. The specificity of attentional biases by type of gambling: An eye-tracking study

    OpenAIRE

    McGrath, Daniel S.; Meitner, Amadeus; Sears, Christopher R.

    2018-01-01

    A growing body of research indicates that gamblers develop an attentional bias for gambling-related stimuli. Compared to research on substance use, however, few studies have examined attentional biases in gamblers using eye-gaze tracking, which has many advantages over other measures of attention. In addition, previous studies of attentional biases in gamblers have not directly matched type of gambler with personally-relevant gambling cues. The present study investigated the specificity of at...

  16. Detection of third and sixth cranial nerve palsies with a novel method for eye tracking while watching a short film clip.

    Science.gov (United States)

    Samadani, Uzma; Farooq, Sameer; Ritlop, Robert; Warren, Floyd; Reyes, Marleen; Lamm, Elizabeth; Alex, Anastasia; Nehrbass, Elena; Kolecki, Radek; Jureller, Michael; Schneider, Julia; Chen, Agnes; Shi, Chen; Mendhiratta, Neil; Huang, Jason H; Qian, Meng; Kwak, Roy; Mikheev, Artem; Rusinek, Henry; George, Ajax; Fergus, Robert; Kondziolka, Douglas; Huang, Paul P; Smith, R Theodore

    2015-03-01

    Automated eye movement tracking may provide clues to nervous system function at many levels. Spatial calibration of the eye tracking device requires the subject to have relatively intact ocular motility that implies function of cranial nerves (CNs) III (oculomotor), IV (trochlear), and VI (abducent) and their associated nuclei, along with the multiple regions of the brain imparting cognition and volition. The authors have developed a technique for eye tracking that uses temporal rather than spatial calibration, enabling detection of impaired ability to move the pupil relative to normal (neurologically healthy) control volunteers. This work was performed to demonstrate that this technique may detect CN palsies related to brain compression and to provide insight into how the technique may be of value for evaluating neuropathological conditions associated with CN palsy, such as hydrocephalus or acute mass effect. The authors recorded subjects' eye movements by using an Eyelink 1000 eye tracker sampling at 500 Hz over 200 seconds while the subject viewed a music video playing inside an aperture on a computer monitor. The aperture moved in a rectangular pattern over a fixed time period. This technique was used to assess ocular motility in 157 neurologically healthy control subjects and 12 patients with either clinical CN III or VI palsy confirmed by neuro-ophthalmological examination, or surgically treatable pathological conditions potentially impacting these nerves. The authors compared the ratio of vertical to horizontal eye movement (height/width defined as aspect ratio) in normal and test subjects. In 157 normal controls, the aspect ratio (height/width) for the left eye had a mean value ± SD of 1.0117 ± 0.0706. For the right eye, the aspect ratio had a mean of 1.0077 ± 0.0679 in these 157 subjects. There was no difference between sexes or ages. A patient with known CN VI palsy had a significantly increased aspect ratio (1.39), whereas 2 patients with known CN III
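
    The aspect-ratio measure described above is the vertical extent of the recorded gaze trajectory divided by its horizontal extent while the aperture traces its rectangular path. A hedged sketch (the use of robust percentiles and the function name are illustrative assumptions, not the authors' pipeline):

```python
# Hedged sketch: height/width ("aspect ratio") of a gaze trajectory per eye.
import numpy as np

def gaze_aspect_ratio(gaze_x, gaze_y):
    x = np.asarray(gaze_x, dtype=float)
    y = np.asarray(gaze_y, dtype=float)
    width = np.nanpercentile(x, 97.5) - np.nanpercentile(x, 2.5)    # horizontal extent
    height = np.nanpercentile(y, 97.5) - np.nanpercentile(y, 2.5)   # vertical extent
    return height / width

# A value near 1.0 matches the reported healthy-control mean (~1.01); a clearly
# larger value (e.g. 1.39) reflects restricted horizontal excursion, as in a
# cranial nerve VI palsy.
```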

  17. Delayed Anticipatory Spoken Language Processing in Adults with Dyslexia—Evidence from Eye-tracking.

    Science.gov (United States)

    Huettig, Falk; Brouwer, Susanne

    2015-05-01

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing. Copyright © 2015 John Wiley & Sons, Ltd.

  18. Children’s Empathy and Their Perception and Evaluation of Facial Pain Expression: An Eye Tracking Study

    Directory of Open Access Journals (Sweden)

    Zhiqiang Yan

    2017-12-01

    Full Text Available The function of empathic concern to process pain is a product of evolutionary adaptation. Focusing on 5- to 6-year-old children, the current study employed eye-tracking in an odd-one-out task (searching for the emotional facial expression among neutral expressions; N = 47) and a pain evaluation task (evaluating the pain intensity of a facial expression; N = 42) to investigate the relationship between children’s empathy and their behavioral and perceptual response to facial pain expression. We found that children detected painful expressions faster than other expressions (angry, sad, and happy); children high in empathy performed better at searching for the facial expression of pain and gave higher evaluations of pain intensity; and ratings of pain in painful expressions were best predicted by a self-reported empathy score. As for eye-tracking in pain detection, children fixated on pain more quickly, less frequently, and for shorter times. Of the facial cues, children fixated on the eyes and mouth more quickly, more frequently, and for longer times. These results imply that the painful facial expression is different from others in a cognitive sense, and that children’s empathy might facilitate their search and lead them to perceive the intensity of observed pain as higher.

  19. Concepts of Interface Usability and the Enhancement of Design through Eye Tracking and Psychophysiology

    Science.gov (United States)

    2008-09-01

    factors, specific human deficiencies, and consumed substances can all influence colour perception. 3.2 Colour Blindness Colour blindness, which... to present a knife and fork in representation of a restaurant, or a sign with a petrol pump indicating a service station (Horton, 1994). Some... optimally and efficiently in terms of search and retrieval behaviour. Researchers can use eye tracking technology to record and measure responses such

  20. The Pattern of Sexual Interest of Female-to-Male Transsexual Persons With Gender Identity Disorder Does Not Resemble That of Biological Men: An Eye-Tracking Study.

    Science.gov (United States)

    Tsujimura, Akira; Kiuchi, Hiroshi; Soda, Tetsuji; Takezawa, Kentaro; Fukuhara, Shinichiro; Takao, Tetsuya; Sekiguchi, Yuki; Iwasa, Atsushi; Nonomura, Norio; Miyagawa, Yasushi

    2017-09-01

    Very little has been elucidated about sexual interest in female-to-male (FtM) transsexual persons. To investigate the sexual interest of FtM transsexual persons vs that of men using an eye-tracking system. The study included 15 men and 13 FtM transsexual subjects who viewed three sexual videos (clip 1: sexy clothed young woman kissing the region of the male genitals covered by underwear; clip 2: naked actor and actress kissing and touching each other; and clip 3: heterosexual intercourse between a naked actor and actress) in which several regions were designated for eye-gaze analysis in each frame. The designation of each region was not visible to the participants. Visual attention was measured across each designated region according to gaze duration. For clip 1, there was a statistically significant sex difference in the viewing pattern between men and FtM transsexual subjects. Longest gaze time was for the eyes of the actress in men, whereas it was for non-human regions in FtM transsexual subjects. For clip 2, there also was a statistically significant sex difference. Longest gaze time was for the face of the actress in men, whereas it was for non-human regions in FtM transsexual subjects, and there was a significant difference between regions with longest gaze time. The most apparent difference was in the gaze time for the body of the actor: the percentage of time spent gazing at the body of the actor was 8.35% in FtM transsexual subjects, whereas it was only 0.03% in men. For clip 3, there were no statistically significant differences in viewing patterns between men and FtM transsexual subjects, although longest gaze time was for the face of the actress in men, whereas it was for non-human regions in FtM transsexual subjects. We suggest that the characteristics of sexual interest of FtM transsexual persons are not the same as those of biological men. Tsujimura A, Kiuchi H, Soda T, et al. The Pattern of Sexual Interest of Female-to-Male Transsexual Persons

  1. Photographic but not line-drawn faces show early perceptual neural sensitivity to eye gaze direction

    Directory of Open Access Journals (Sweden)

    Alejandra Rossi

    2015-04-01

    Full Text Available Our brains readily decode facial movements and changes in social attention, reflected in earlier and larger N170 event-related potentials (ERPs) to viewing gaze aversions vs. direct gaze in real faces (Puce et al., 2000). In contrast, gaze aversions in line-drawn faces do not produce these N170 differences (Rossi et al., 2014), suggesting that physical stimulus properties or experimental context may drive these effects. Here we investigated the role of stimulus-induced context on neurophysiological responses to dynamic gaze. Sixteen healthy adults viewed line-drawn and real faces, with dynamic eye aversion and direct gaze transitions, and control stimuli (scrambled arrays and checkerboards) while continuous electroencephalographic (EEG) activity was recorded. EEG data from 2 temporo-occipital clusters of 9 electrodes in each hemisphere, where N170 activity is known to be maximal, were selected for analysis. N170 peak amplitude and latency, and temporal dynamics from event-related spectral perturbations (ERSPs), were measured in the 16 healthy subjects. Real faces generated larger N170s for averted vs. direct gaze motion; however, N170s to real and direct gaze were as large as those to the respective controls. N170 amplitude did not differ across line-drawn gaze changes. Overall, bilateral mean gamma power changes for faces relative to control stimuli occurred between 150-350 ms, potentially reflecting signal detection of facial motion. Our data indicate that experimental context does not drive N170 differences to viewed gaze changes. Low-level stimulus properties, such as the high sclera/iris contrast change in real eyes, likely drive the N170 changes to viewed aversive movements.

  2. Storm Identification, Tracking and Forecasting Using High-Resolution Images of Short-Range X-Band Radar

    Directory of Open Access Journals (Sweden)

    Sajid Shah

    2015-05-01

    Full Text Available Rain nowcasting is an essential part of weather monitoring. It plays a vital role in human life, ranging from advanced warning systems to scheduling open air events and tourism. A nowcasting system can be divided into three fundamental steps, i.e., storm identification, tracking and nowcasting. The main contribution of this work is to propose procedures for each step of the rain nowcasting tool and to objectively evaluate the performance of every step, focusing on two-dimensional data collected from short-range X-band radars installed in different parts of Italy. This work presents the solution of previously unsolved problems in storm identification: first, the selection of suitable thresholds for storm identification; second, the isolation of false mergers (loosely-connected storms); and third, the identification of a high reflectivity sub-storm within a large storm. The storm tracking step of existing tools, such as TITAN and SCIT, uses only up to two storm attributes, i.e., center of mass and area. It is possible to use more attributes for tracking. Furthermore, the contribution of each attribute to storm tracking is yet to be investigated. This paper presents a novel procedure called SALdEdA (structure, amplitude, location, eccentricity difference and areal difference) for storm tracking. This work also presents the contribution of each component of SALdEdA to storm tracking. A second-order exponential smoothing strategy is used for storm nowcasting, where the growth and decay of each variable of interest is considered to be linear. We evaluated the major steps of our method. The adopted techniques for automatic threshold calculation were assessed with a 97% goodness. False mergers and sub-storms within a cluster of storms are successfully handled. Furthermore, the storm tracking procedure produced good results with an accuracy of 99.34% for convective events and 100% for stratiform events.
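
    Second-order (double) exponential smoothing under a linear growth/decay assumption can be sketched as below. This is a hedged illustration of the forecasting idea only; the smoothing constant, the smoothed attribute, and the exact formulation used in the paper are assumptions:

```python
# Hedged sketch: Brown's double exponential smoothing to extrapolate a storm
# attribute (e.g. area or a centroid coordinate) one scan ahead.
def double_exponential_forecast(series, alpha=0.5, horizon=1):
    s1 = s2 = series[0]                       # first- and second-order smoothed values
    for x in series[1:]:
        s1 = alpha * x + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)   # assumes linear growth/decay
    return level + horizon * trend

area = [120, 128, 133, 141, 150]              # storm area (km^2) in successive scans
print(double_exponential_forecast(area))      # ~154.8: linear extrapolation to next scan
```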

  3. The effect of human image in B2C website design: an eye-tracking study

    Science.gov (United States)

    Wang, Qiuzhen; Yang, Yi; Wang, Qi; Ma, Qingguo

    2014-09-01

    On B2C shopping websites, effective visual designs can bring about consumers' positive emotional experience. From this perspective, this article developed a research model to explore the impact of human image as a visual element on consumers' online shopping emotions and subsequent attitudes towards websites. This study conducted an eye-tracking experiment to collect both eye movement data and questionnaire data to test the research model. Questionnaire data analysis showed that product pictures combined with human image induced positive emotions among participants, thus promoting their attitudes towards online shopping websites. Specifically, product pictures with human image first produced higher levels of image appeal and perceived social presence, thus stimulating higher levels of enjoyment and subsequent positive attitudes towards the websites. Moreover, a moderating effect of product type was demonstrated on the relationship between the presence of human image and the level of image appeal. Specifically, human image significantly increased the level of image appeal when integrated in entertainment product pictures while this relationship was not significant in terms of utilitarian products. Eye-tracking data analysis further supported these results and provided plausible explanations. The presence of human image significantly increased the pupil size of participants regardless of product types. For entertainment products, participants paid more attention to product pictures integrated with human image whereas for utilitarian products more attention was paid to functional information of products than to product pictures no matter whether or not integrated with human image.

  4. Developmental improvement and age-related decline in unfamiliar face matching.

    Science.gov (United States)

    Megreya, Ahmed M; Bindemann, Markus

    2015-01-01

    Age-related changes have been documented widely in studies of face recognition and eyewitness identification. However, it is not clear whether these changes arise from general developmental differences in memory or occur specifically during the perceptual processing of faces. We report two experiments to track such perceptual changes using a 1-in-10 (experiment 1) and 1-in-1 (experiment 2) matching task for unfamiliar faces. Both experiments showed improvements in face matching during childhood and adult-like accuracy levels by adolescence. In addition, face-matching performance declined in adults of 65 years of age. These findings indicate that developmental improvements and aging-related differences in face processing arise from changes in the perceptual encoding of faces. A clear face inversion effect was also present in all age groups. This indicates that those age-related changes in face matching reflect a quantitative effect, whereby typical face processes are engaged but do not operate at the best-possible level. These data suggest that part of the problem of eyewitness identification in children and elderly persons might reflect impairments in the perceptual processing of unfamiliar faces.

  5. Tracking the Mind's Eye: A New Technology for Researching Twenty-First-Century Writing and Reading Processes

    Science.gov (United States)

    Anson, Chris M.; Schwegler, Robert A.

    2012-01-01

    This article describes the nature of eye-tracking technology and its use in the study of discourse processes, particularly reading. It then suggests several areas of research in composition studies, especially at the intersection of writing, reading, and digital media, that can benefit from the use of this technology. (Contains 2 figures.)

  6. Recognition of faces and names: multimodal physiological correlates of memory and executive function.

    Science.gov (United States)

    Mitchell, Meghan B; Shirk, Steven D; McLaren, Donald G; Dodd, Jessica S; Ezzati, Ali; Ally, Brandon A; Atri, Alireza

    2016-06-01

    We sought to characterize electrophysiological, eye-tracking and behavioral correlates of face-name recognition memory in healthy younger adults using high-density electroencephalography (EEG), infrared eye-tracking (ET), and neuropsychological measures. Twenty-one participants first studied 40 face-name (FN) pairs; 20 were presented four times (4R) and 20 were shown once (1R). Recognition memory was assessed by asking participants to make old/new judgments for 80 FN pairs, of which half were previously studied items and half were novel FN pairs (N). Simultaneous EEG and ET recordings were collected during recognition trials. Event-related potentials (ERPs) for correctly identified FN pairs were compared across the three item types, revealing classic ERP old/new effects including 1) relative positivity (1R > N) bi-frontally from 300 to 500 ms, reflecting enhanced familiarity, 2) relative positivity (4R > 1R and 4R > N) in parietal areas from 500 to 800 ms, reflecting enhanced recollection, and 3) late frontal effects (1R > N) from 1000 to 1800 ms in right frontal areas, reflecting post-retrieval monitoring. ET analysis also revealed significant differences in eye movements across conditions. Exploration of cross-modality relationships suggested associations between memory and executive function measures and the three ERP effects. Executive function measures were associated with several indicators of saccadic eye movements and fixations, which were also associated with all three ERP effects. This novel characterization of face-name recognition memory performance using simultaneous EEG and ET reproduced classic ERP and ET effects, supports the construct validity of the multimodal FN paradigm, and holds promise as an integrative tool to probe brain networks supporting memory and executive functioning.

  7. Facial emotion recognition, face scan paths, and face perception in children with neurofibromatosis type 1.

    Science.gov (United States)

    Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M

    2017-05-01

    This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. Improvement of design of a surgical interface using an eye tracking device.

    Science.gov (United States)

    Erol Barkana, Duygun; Açık, Alper; Duru, Dilek Goksel; Duru, Adil Deniz

    2014-05-07

    Surgical interfaces are used to help surgeons in the interpretation and quantification of patient information, and for the presentation of an integrated workflow where all available data are combined to enable optimal treatments. Human factors research provides a systematic approach to designing user interfaces with safety, accuracy, satisfaction and comfort. One human factors research method, the user-centered design approach, was used to develop a surgical interface for kidney tumor cryoablation. An eye tracking device was used to obtain the best configuration of the developed surgical interface. The surgical interface for kidney tumor cryoablation was developed considering the four phases of the user-centered design approach: analysis, design, implementation and deployment. Possible configurations of the surgical interface, which comprise various combinations of menu-based command controls, visual display of multi-modal medical images, 2D and 3D models of the surgical environment, graphical or tabulated information, visual alerts, etc., have been developed. Experiments on a simulated kidney tumor cryoablation task were performed with surgeons to evaluate the proposed surgical interface. Fixation durations and numbers of fixations at informative regions of the surgical interface were analyzed, and these data were used to modify the surgical interface. Eye movement data showed that participants concentrated their attention on informative regions more when the number of displayed computed tomography (CT) images was reduced. Additionally, the time required by the participants to complete the kidney tumor cryoablation task decreased with the reduced number of CT images. Furthermore, the fixation durations obtained after the revision of the surgical interface are very close to what is observed in visual search and natural scene perception studies, suggesting more efficient and comfortable interaction with the surgical interface.
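
    The fixation analysis mentioned above reduces to counting fixations and summing dwell time per informative region (area of interest, AOI). A hedged sketch; the AOI names and the fixation record format are illustrative assumptions:

```python
# Hedged sketch: number of fixations and total dwell time per AOI.
from collections import defaultdict

def aoi_metrics(fixations):
    """fixations: list of (aoi_name, duration_ms) tuples for one trial."""
    counts, dwell = defaultdict(int), defaultdict(float)
    for aoi, duration in fixations:
        counts[aoi] += 1
        dwell[aoi] += duration
    return dict(counts), dict(dwell)

fixations = [("ct_images", 310), ("3d_model", 220), ("ct_images", 450),
             ("alerts", 180), ("3d_model", 270)]
print(aoi_metrics(fixations))
# ({'ct_images': 2, '3d_model': 2, 'alerts': 1},
#  {'ct_images': 760.0, '3d_model': 490.0, 'alerts': 180.0})
```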

  9. A Maximum Power Transfer Tracking Method for WPT Systems with Coupling Coefficient Identification Considering Two-Value Problem

    Directory of Open Access Journals (Sweden)

    Xin Dai

    2017-10-01

    Full Text Available Maximum power transfer tracking (MPTT) is meant to track the maximum power point during the operation of wireless power transfer (WPT) systems. Traditionally, MPTT is achieved by impedance matching at the secondary side when the load resistance is varied. However, due to the loose coupling characteristic, variation of the coupling coefficient will affect the performance of impedance matching, and MPTT will therefore fail. This paper presents an identification method for the coupling coefficient for MPTT in WPT systems. In particular, the two-value issue during the identification is considered. The identification approach is easy to implement because it does not require additional circuitry. Furthermore, MPTT is easy to realize because only two easily measured DC parameters are needed. The detailed identification procedure addressing the two-value issue and the maximum power transfer tracking process are presented, and both simulation analysis and experimental results verify the identification method and MPTT.

  10. Recent developments in track reconstruction and hadron identification at MPD

    Science.gov (United States)

    Mudrokh, A.; Zinchenko, A.

    2017-03-01

    To study the detector performance, a Monte Carlo simulation of real detector effects with as many details as possible has been carried out instead of a simplified Geant point-smearing approach. Some results of realistic simulation of the MPD TPC (Time Projection Chamber), including digitization, in central Au+Au collisions have been obtained. Particle identification (PID) has been tuned to account for modifications in the track reconstruction. Some results on hadron identification in the TPC and TOF (Time Of Flight) detectors with realistically simulated response have also been obtained.

  11. Instruction-based clinical eye-tracking study on the visual interpretation of divergence: How do students look at vector field plots?

    Science.gov (United States)

    Klein, P.; Viiri, J.; Mozaffari, S.; Dengel, A.; Kuhn, J.

    2018-06-01

    Relating mathematical concepts to graphical representations is a challenging task for students. In this paper, we introduce two visual strategies to qualitatively interpret the divergence of graphical vector field representations. One strategy is based on the graphical interpretation of partial derivatives, while the other is based on the flux concept. We test the effectiveness of both strategies in an instruction-based eye-tracking study with N = 41 physics majors. We found that students' performance improved when both strategies were introduced (74% correct) instead of only one strategy (64% correct), and students performed best when they were free to choose between the two strategies (88% correct). This finding supports the idea of introducing multiple representations of a physical concept to foster student understanding. Relevant eye-tracking measures demonstrate that both strategies imply different visual processing of the vector field plots, therefore reflecting conceptual differences between the strategies. Advanced analysis methods further reveal significant differences in eye movements between the best and worst performing students. For instance, the best students performed predominantly horizontal and vertical saccades, indicating correct interpretation of partial derivatives. They also focused on smaller regions when they balanced positive and negative flux. This mixed-method research leads to new insights into student visual processing of vector field representations, highlights the advantages and limitations of eye-tracking methodologies in this context, and discusses implications for teaching and for future research. The introduction of saccadic direction analysis expands traditional methods, and shows the potential to discover new insights into student understanding and learning difficulties.
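
    The two strategies named above have direct numerical analogues for a sampled 2D field: summing the partial derivatives, or computing the net flux through a small cell and dividing by its area. A hedged sketch on an example field (the grid, the field F = (x, y) with divergence 2, and the variable names are illustrative):

```python
# Hedged sketch: divergence of a sampled 2D vector field via (1) partial
# derivatives and (2) net flux through a small square cell.
import numpy as np

x = np.linspace(-2, 2, 81)
y = np.linspace(-2, 2, 81)
X, Y = np.meshgrid(x, y, indexing="xy")
Fx, Fy = X, Y                                   # example field F(x, y) = (x, y)

# Strategy 1: partial derivatives (axis 0 is y, axis 1 is x for this layout).
dFy_dy, _ = np.gradient(Fy, y, x)
_, dFx_dx = np.gradient(Fx, y, x)
div = dFx_dx + dFy_dy
print(div[40, 40])                              # ~2.0 at the center

# Strategy 2: net outward flux through a square cell of side 2h, divided by area.
i, j, h = 40, 40, x[1] - x[0]
flux = ((Fx[i, j + 1] - Fx[i, j - 1]) + (Fy[i + 1, j] - Fy[i - 1, j])) * 2 * h
print(flux / (2 * h) ** 2)                      # ~2.0 as well
```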

  12. Development of long-term event memory in preverbal infants: an eye-tracking study

    OpenAIRE

    Nakano, Tamami; Kitazawa, Shigeru

    2017-01-01

    The development of long-term event memory in preverbal infants remains elusive. To address this issue, we applied an eye-tracking method that successfully revealed in great apes that they have long-term memory of single events. Six-, 12-, 18- and 24-month-old infants watched a video story in which an aggressive ape-looking character came out from one of two identical doors. While viewing the same video again 24 hours later, 18- and 24-month-old infants anticipatorily looked at the door where ...

  13. Tracking Eyes using Shape and Appearance

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Nielsen, Mads; Hansen, John Paulin

    2002-01-01

    to infer the state of the eye such as eye corners and the pupil location under scale and rotational changes. We use a Gaussian Process interpolation method for gaze determination, which facilitates stability feedback from the system. The use of a learning method for gaze estimation gives more flexibility...

  14. A feature point identification method for positron emission particle tracking with multiple tracers

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Cody, E-mail: cwiggin2@vols.utk.edu [University of Tennessee-Knoxville, Department of Physics and Astronomy, 1408 Circle Drive, Knoxville, TN 37996 (United States); Santos, Roque [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States); Escuela Politécnica Nacional, Departamento de Ciencias Nucleares (Ecuador); Ruggles, Arthur [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States)

    2017-01-21

    A novel detection algorithm for Positron Emission Particle Tracking (PEPT) with multiple tracers based on optical feature point identification (FPI) methods is presented. This new method, the FPI method, is compared to a previous multiple PEPT method via analyses of experimental and simulated data. The FPI method outperforms the older method in cases of large particle numbers and fine time resolution. Simulated data show the FPI method to be capable of identifying 100 particles at 0.5 mm average spatial error. Detection error is seen to vary with the inverse square root of the number of lines of response (LORs) used for detection and increases as particle separation decreases. - Highlights: • A new approach to positron emission particle tracking is presented. • Using optical feature point identification analogs, multiple particle tracking is achieved. • Method is compared to previous multiple particle method. • Accuracy and applicability of method is explored.
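
    The reported inverse-square-root dependence of detection error on the number of lines of response (LORs) mirrors the usual averaging behaviour of noise. The toy simulation below, which simply averages N noisy position estimates, is only meant to illustrate that scaling; it is not the FPI algorithm itself, and the noise level is an arbitrary placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
true_pos = np.zeros(3)

# Average n_lors noisy position estimates (a crude stand-in for combining LORs)
# and watch the mean localization error fall roughly as 1/sqrt(n_lors).
for n_lors in (100, 400, 1600):
    errors = [np.linalg.norm(rng.normal(true_pos, 2.0, size=(n_lors, 3)).mean(axis=0))
              for _ in range(500)]
    print(n_lors, round(float(np.mean(errors)), 3))
```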

  15. An Exploration of the Use of Eye-Gaze Tracking to Study Problem-Solving on Standardized Science Assessments

    Science.gov (United States)

    Tai, Robert H.; Loehr, John F.; Brigham, Frederick J.

    2006-01-01

    This pilot study investigated the capacity of eye-gaze tracking to identify differences in problem-solving behaviours within a group of individuals who possessed varying degrees of knowledge and expertise in three disciplines of science (biology, chemistry and physics). The six participants, all pre-service science teachers, completed an 18-item…

  16. Procedural Learning and Associative Memory Mechanisms Contribute to Contextual Cueing: Evidence from fMRI and Eye-Tracking

    Science.gov (United States)

    Manelis, Anna; Reder, Lynne M.

    2012-01-01

    Using a combination of eye tracking and fMRI in a contextual cueing task, we explored the mechanisms underlying the facilitation of visual search for repeated spatial configurations. When configurations of distractors were repeated, greater activation in the right hippocampus corresponded to greater reductions in the number of saccades to locate…

  17. An Eye Tracking Comparison of External Pointing Cues and Internal Continuous Cues in Learning with Complex Animations

    Science.gov (United States)

    Boucheix, Jean-Michel; Lowe, Richard K.

    2010-01-01

    Two experiments used eye tracking to investigate a novel cueing approach for directing learner attention to low salience, high relevance aspects of a complex animation. In the first experiment, comprehension of a piano mechanism animation containing spreading-colour cues was compared with comprehension obtained with arrow cues or no cues. Eye…

  18. Building an ACT-R Reader for Eye-Tracking Corpus Data.

    Science.gov (United States)

    Dotlačil, Jakub

    2018-01-01

    Cognitive architectures have often been applied to data from individual experiments. In this paper, I develop an ACT-R reader that can model a much larger set of data, eye-tracking corpus data. It is shown that the resulting model has a good fit to the data for the considered low-level processes. Unlike previous related works (most prominently, Engelmann, Vasishth, Engbert & Kliegl), the model achieves the fit by estimating free parameters of ACT-R using Bayesian estimation and Markov-Chain Monte Carlo (MCMC) techniques, rather than by relying on a mix of manual selection and default values. The method used in the paper is generalizable beyond this particular model and data set and could be used on other ACT-R models. Copyright © 2017 Cognitive Science Society, Inc.
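
    The parameter-fitting strategy described here, Bayesian estimation of free parameters via MCMC, can be illustrated with a toy Metropolis sampler. The sketch below fits a single latency-like parameter to simulated reading times under a Gaussian likelihood; the model, data and proposal width are placeholders, not the ACT-R reader itself.

```python
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(0.25, 0.05, size=200)      # fake reading times (seconds)

def log_likelihood(theta):
    # Toy model: the predicted reading time is just the latency parameter theta.
    return -0.5 * np.sum(((observed - theta) / 0.05) ** 2)

samples, theta = [], 0.5
for _ in range(5000):
    proposal = theta + rng.normal(0, 0.01)       # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_likelihood(proposal) - log_likelihood(theta):
        theta = proposal                          # Metropolis acceptance step
    samples.append(theta)

posterior = np.array(samples[1000:])             # drop burn-in
print(posterior.mean(), posterior.std())
```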

  19. Registration of clinical volumes to beams-eye-view images for real-time tracking

    Energy Technology Data Exchange (ETDEWEB)

    Bryant, Jonathan H.; Rottmann, Joerg; Lewis, John H.; Mishra, Pankaj; Berbeco, Ross I., E-mail: rberbeco@lroc.harvard.edu [Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States); Keall, Paul J. [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, Sydney, New South Wales 2006 (Australia)

    2014-12-15

    Purpose: The authors combine the registration of 2D beam’s eye view (BEV) images and 3D planning computed tomography (CT) images, with relative, markerless tumor tracking to provide automatic absolute tracking of physician defined volumes such as the gross tumor volume (GTV). Methods: During treatment of lung SBRT cases, BEV images were continuously acquired with an electronic portal imaging device (EPID) operating in cine mode. For absolute registration of physician-defined volumes, an intensity based 2D/3D registration to the planning CT was performed using the end-of-exhale (EoE) phase of the four dimensional computed tomography (4DCT). The volume was converted from Hounsfield units into electron density by a calibration curve and digitally reconstructed radiographs (DRRs) were generated for each beam geometry. Using normalized cross correlation between the DRR and an EoE BEV image, the best in-plane rigid transformation was found. The transformation was applied to physician-defined contours in the planning CT, mapping them into the EPID image domain. A robust multiregion method of relative markerless lung tumor tracking quantified deviations from the EoE position. Results: The success of 2D/3D registration was demonstrated at the EoE breathing phase. By registering at this phase and then employing a separate technique for relative tracking, the authors are able to successfully track target volumes in the BEV images throughout the entire treatment delivery. Conclusions: Through the combination of EPID/4DCT registration and relative tracking, a necessary step toward the clinical implementation of BEV tracking has been completed. The knowledge of tumor volumes relative to the treatment field is important for future applications like real-time motion management, adaptive radiotherapy, and delivered dose calculations.
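
    The in-plane registration step, maximizing normalized cross correlation between a DRR and a beam's-eye-view image, can be sketched as a brute-force search over translations. The snippet below is a simplified illustration that omits rotation and sub-pixel refinement, both of which the published method and any clinical implementation would need.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized images."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def best_inplane_shift(drr, bev, max_shift=10):
    """Exhaustive search for the integer-pixel translation maximizing NCC.

    drr, bev: 2D arrays of identical shape (DRR and beam's-eye-view image).
    """
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(drr, dy, axis=0), dx, axis=1)  # wrap-around shift
            score = ncc(shifted, bev)
            if score > best_score:
                best, best_score = (dx, dy), score
    return best, best_score
```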

  20. How visual search relates to visual diagnostic performance : a narrative systematic review of eye-tracking research in radiology

    NARCIS (Netherlands)

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; ten Cate, Olle

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review

  1. Crossing the “Uncanny Valley”: adaptation to cartoon faces can influence perception of human faces

    Science.gov (United States)

    Chen, Haiwen; Russell, Richard; Nakayama, Ken; Livingstone, Margaret

    2013-01-01

    Adaptation can shift what individuals identify to be a prototypical or attractive face. Past work suggests that low-level shape adaptation can affect high-level face processing but is position dependent. Adaptation to distorted images of faces can also affect face processing but only within sub-categories of faces, such as gender, age, and race/ethnicity. This study assesses whether there is a representation of face that is specific to faces (as opposed to all shapes) but general to all kinds of faces (as opposed to subcategories) by testing whether adaptation to one type of face can affect perception of another. Participants were shown cartoon videos containing faces with abnormally large eyes. Using animated videos allowed us to simulate naturalistic exposure and avoid positional shape adaptation. Results suggest that adaptation to cartoon faces with large eyes shifts preferences for human faces toward larger eyes, supporting the existence of general face representations. PMID:20465173

  2. Love is in the gaze: an eye-tracking study of love and sexual desire.

    Science.gov (United States)

    Bolmont, Mylene; Cacioppo, John T; Cacioppo, Stephanie

    2014-09-01

    Reading other people's eyes is a valuable skill during interpersonal interaction. Although a number of studies have investigated visual patterns in relation to the perceiver's interest, intentions, and goals, little is known about eye gaze when it comes to differentiating intentions to love from intentions to lust (sexual desire). To address this question, we conducted two experiments: one testing whether the visual pattern related to the perception of love differs from that related to lust and one testing whether the visual pattern related to the expression of love differs from that related to lust. Our results show that a person's eye gaze shifts as a function of his or her goal (love vs. lust) when looking at a visual stimulus. Such identification of distinct visual patterns for love and lust could have theoretical and clinical importance in couples therapy when these two phenomena are difficult to disentangle from one another on the basis of patients' self-reports. © The Author(s) 2014.

  3. Hybrid EEG—Eye Tracker: Automatic Identification and Removal of Eye Movement and Blink Artifacts from Electroencephalographic Signal

    Directory of Open Access Journals (Sweden)

    Malik M. Naeem Mannan

    2016-02-01

    Full Text Available Contamination of eye movement and blink artifacts in electroencephalogram (EEG) recordings makes the analysis of EEG data more difficult and can lead to misleading findings. Efficient removal of these artifacts from EEG data is an essential step in improving classification accuracy for brain-computer interface (BCI) development. In this paper, we propose an automatic framework based on independent component analysis (ICA) and system identification to identify and remove ocular artifacts from EEG data using a hybrid EEG and eye tracker system. The performance of the proposed algorithm is illustrated using experimental and standard EEG datasets. The proposed algorithm not only removes the ocular artifacts from the artifactual zone but also preserves the neuronal-activity-related EEG signals in the non-artifactual zone. Comparison with two state-of-the-art techniques, namely ADJUST-based ICA and REGICA, reveals the significantly improved performance of the proposed algorithm for removing eye movement and blink artifacts from EEG data. Additionally, results demonstrate that the proposed algorithm achieves lower relative error and higher mutual information values between corrected EEG and artifact-free EEG data.
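
    The abstract describes flagging ICA components that correspond to eye movements and blinks with the help of an eye-tracker reference signal. The sketch below is a minimal illustration of that idea, not the authors' actual pipeline: it assumes a hypothetical gaze-velocity reference channel, uses scikit-learn's FastICA, and applies a simple correlation threshold (0.6) chosen for illustration only.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_ocular_artifacts(eeg, gaze_velocity, corr_threshold=0.6):
    """Zero out ICA components that correlate with an eye-tracker reference.

    eeg:            array (n_samples, n_channels)
    gaze_velocity:  array (n_samples,), e.g. gaze speed from the eye tracker
    """
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)                 # (n_samples, n_components)

    flagged = [k for k in range(sources.shape[1])
               if abs(np.corrcoef(sources[:, k], gaze_velocity)[0, 1]) >= corr_threshold]

    sources[:, flagged] = 0.0                        # suppress ocular components
    return ica.inverse_transform(sources), flagged

# Tiny synthetic demo: three "EEG" channels contaminated by one ocular source.
rng = np.random.default_rng(0)
ocular = np.cumsum(rng.normal(size=1000))            # slow, blink-like drift
neural = rng.normal(size=(1000, 2))
eeg = np.column_stack([neural[:, 0] + 2 * ocular,
                       neural[:, 1] - ocular,
                       0.5 * ocular + rng.normal(size=1000)])
clean, flagged = remove_ocular_artifacts(eeg, ocular)
print("flagged components:", flagged)
```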

  4. Exploring Responses to Art in Adolescence: A Behavioral and Eye-Tracking Study

    Science.gov (United States)

    Savazzi, Federica; Massaro, Davide; Di Dio, Cinzia; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella

    2014-01-01

    Adolescence is a peculiar age mainly characterized by physical and psychological changes that may affect the perception of one's own and others' body. This perceptual peculiarity may influence the way in which bottom-up and top-down processes interact and, consequently, the perception and evaluation of art. This study is aimed at investigating, by means of the eye-tracking technique, the visual explorative behavior of adolescents while looking at paintings. Sixteen color paintings, categorized as dynamic and static, were presented to twenty adolescents; half of the images represented natural environments and half human individuals; all stimuli were displayed under aesthetic and movement judgment tasks. Participants' ratings revealed that, generally, nature images are explicitly evaluated as more appealing than human images. Eye movement data, on the other hand, showed that the human body exerts a strong power in orienting and attracting visual attention and that, in adolescence, it plays a fundamental role during aesthetic experience. In particular, adolescents seem to approach human-content images by giving priority to elements calling forth movement and action, supporting the embodiment theory of aesthetic perception. PMID:25048813

  5. Investigating emotion recognition and empathy deficits in Conduct Disorder using behavioural and eye-tracking methods

    OpenAIRE

    Martin-Key, Nayra, Anna

    2017-01-01

    The aim of this thesis was to characterise the nature of the emotion recognition and empathy deficits observed in male and female adolescents with Conduct Disorder (CD) and varying levels of callous-unemotional (CU) traits. The first two experiments employed behavioural tasks with concurrent eye-tracking methods to explore the mechanisms underlying facial and body expression recognition deficits. Having CD and being male independently predicted poorer facial expression recognition across all ...

  6. Fast face tracking algorithm for multi-angle head positions based on DSP

    Institute of Scientific and Technical Information of China (English)

    姜俊金; 王增才; 朱淑亮

    2012-01-01

    In the field of real-time face tracking for driver fatigue detection, classic algorithms are so complex that a DSP system cannot track the face quickly and accurately across multiple head angles, so a new fast face tracking algorithm is presented. In the YCbCr skin-color model, the image is first preprocessed and the face region is extracted through skin-color detection. By computing statistics of the luminance signal Y, the face boundaries are detected, and a symmetry-based similarity measure is then used to verify the tracked region. In this way, the face region can be tracked. Experimental results indicate that the algorithm is simple, robust, and able to quickly track faces at multiple angles in color images.
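
    As a rough illustration of the skin-color step described above, the following OpenCV sketch thresholds a frame in YCrCb space and returns the largest skin-colored blob as a candidate face region. It is not the paper's DSP implementation; the threshold values are commonly cited defaults rather than the authors' parameters, and OpenCV 4.x is assumed.

```python
import cv2
import numpy as np

# Commonly cited YCrCb skin-color bounds (not necessarily those used in the paper).
LOWER = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds
UPPER = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds

def track_face_region(frame_bgr):
    """Return (x, y, w, h) of the largest skin-colored blob, or None."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, LOWER, UPPER)          # skin-color detection
    mask = cv2.medianBlur(mask, 5)                   # simple noise removal
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))

# Example on a synthetic frame; a real application would use camera frames.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[60:180, 100:220] = (130, 160, 210)             # roughly skin-toned patch (BGR)
print(track_face_region(frame))
```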

  7. [Ablation on the undersurface of a LASIK flap. Instrument and method for continuous eye tracking].

    Science.gov (United States)

    Taneri, S; Azar, D T

    2007-02-01

    The risk of iatrogenic keratectasia after laser in situ keratomileusis (LASIK) increases with thinner posterior stromal beds. Ablations on the undersurface of a LASIK flap could only be performed without the guidance of an eye tracker, which may lead to decentration. A new method for laser ablation with flying spot lasers on the undersurface of a LASIK flap was developed that enables the use of an active eye tracker by utilizing a novel instrument. The first clinical results are reported. Patients wishing an enhancement procedure were eligible for a modified repeat LASIK procedure if the flaps cut in the initial procedure were thick enough to perform the intended additional ablation on the undersurface leaving at least 90 microm of flap thickness behind. (1) The horizontal axis and the center of the entrance pupil were marked on the epithelial side of the flap using gentian violet dye. (2) The flap was reflected on a newly designed flap holder which had a donut-shaped black marking. (3) The eye tracker was centered on the mark visible in transparency on the flap. (4) Ablation with a flying spot Bausch & Lomb Technolas 217z laser was performed on the undersurface of the flap with a superior hinge taking into account that in astigmatic ablations the cylinder axis had to be mirrored according to the formula: axis on the undersurface=180 degrees -axis on the stromal bed. (5) The flap was repositioned. Detection of the marking on the modified flap holder and continuous tracking instead of the real pupil was possible in all of the 12 eyes treated with this technique. It may be necessary to cover the real pupil during ablation in order not to confuse the eye tracker. Ablation could be performed without decentration or loss of best spectacle-corrected visual acuity. Refractive results in minor corrections were good without nomogram adjustment. Using this novel flap holder with a marking that is tracked instead of the real pupil, centered ablations with a flying spot laser

  8. A comparison of problem identification interviews conducted face-to-face and via videoconferencing using the consultation analysis record.

    Science.gov (United States)

    Fischer, Aaron J; Collier-Meek, Melissa A; Bloomfield, Bradley; Erchul, William P; Gresham, Frank M

    2017-08-01

    School psychologists who experience challenges delivering face-to-face consultation may utilize videoconferencing to facilitate their consultation activities. Videoconferencing has been found to be an effective method of service delivery in related fields and emerging research suggests that it may be effective for providing teacher training and support in school settings. In this exploratory investigation, we used the Consultation Analysis Record (Bergan & Tombari, 1975) and its four indices to assess the effectiveness of conducting problem identification interviews via videoconferencing versus face-to-face. Overall, findings indicated significant differences across these two conditions, with videoconference interviews coded as having higher indices of content relevance, process effectiveness, and message control, but lower content focus, compared to face-to-face interviews. As these indices have been positively associated with favorable consultation outcomes, the results provide initial support for the effectiveness of consultation delivered via videoconferencing. Copyright © 2017 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  9. Congruence and placement in sponsorship: An eye-tracking application.

    Science.gov (United States)

    Santos, Manuel Alonso Dos; Moreno, Ferran Calabuig; Franco, Manuel Sánchez

    2018-05-30

    Sporting events can be announced using sports posters and by disseminating advertisements on the internet, on the street and in print media. But until now, no prior research has measured the effectiveness of sponsorship in sporting event posters. This study uses eye tracking to measure the effectiveness of sporting event posters and proposes considering the level of the viewer's attention as an indicator. This research involves a factorial experiment based on the following variables: congruence, the number of sponsors, and placement of the sponsor's advertisement in a sporting event poster. The results indicate that sponsors positioned in the poster's area of action receive more attention. However, we were unable to prove that congruent sponsors receive more attention, as claimed in the literature. This result could be due to a situation of blindness towards the sponsor. The conclusion section of this paper discusses theoretical conclusions and potential managerial actions. Copyright © 2017. Published by Elsevier Inc.

  10. Chronic Pain and Selective Attention to Pain Arousing Daily Activity Pictures: Evidence From an Eye Tracking Study

    Directory of Open Access Journals (Sweden)

    Masoumeh Mahmoodi-Aghdam

    2017-11-01

    Conclusion: Although these results did not provide unequivocal support for the vigilance-avoidance hypothesis, they are generally consistent with the results of studies using eye tracking technology. Furthermore, our findings call into question the characterization of attentional biases in patients with chronic pain as simply a difficulty in disengaging from pain-related stimuli.

  11. In the Eye of the Beholder: A Survey of Models for Eyes and Gaze

    DEFF Research Database (Denmark)

    Witzner Hansen, Dan; Ji, Qiang

    2010-01-01

    Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications and...

  12. Use of eye tracking equipment for human reliability analysis applied to complex system operations

    International Nuclear Information System (INIS)

    Pinheiro, Andre Ricardo Mendonça; Prado, Eugenio Anselmo Pessoa do; Martins, Marcelo Ramos

    2017-01-01

    This article discusses the preliminary results of an evaluation methodology for the analysis and quantification of manual (human) errors, obtained by monitoring cognitive parameters and skill levels in the operation of a complex control system based on parameters provided by eye monitoring equipment (eye tracker). The research was conducted using a simulator (game) that reproduces concepts of nuclear reactor operation, with a split sample for evaluating aspects of learning, knowledge and standard operation within the context addressed. Bridge operators were monitored using the eye tracker, eliminating the presence of the analyst in the evaluation of the operation and allowing the analysis of the results by means of multivariate statistical techniques within the scope of system reliability. The experiments aim to observe state-change situations such as scheduled stops and start-ups, assumed incidents and common operating characteristics. Preliminary results indicate that technical and cognitive aspects can contribute to improving the available human reliability techniques, making them more realistic both for quantitative regulatory approaches and for training purposes, as well as reducing the incidence of human error. (author)

  13. Use of eye tracking equipment for human reliability analysis applied to complex system operations

    Energy Technology Data Exchange (ETDEWEB)

    Pinheiro, Andre Ricardo Mendonça; Prado, Eugenio Anselmo Pessoa do; Martins, Marcelo Ramos, E-mail: andrericardopinheiro@usp.br, E-mail: eugenio.prado@labrisco.usp.br, E-mail: mrmatins@usp.br [Universidade de Sao Paulo (LABRISCO/USP), Sao Paulo, SP (Brazil). Lab. de Análise, Avaliação e Gerenciamento de Risco

    2017-07-01

    This article discusses the preliminary results of an evaluation methodology for the analysis and quantification of manual (human) errors, obtained by monitoring cognitive parameters and skill levels in the operation of a complex control system based on parameters provided by eye monitoring equipment (eye tracker). The research was conducted using a simulator (game) that reproduces concepts of nuclear reactor operation, with a split sample for evaluating aspects of learning, knowledge and standard operation within the context addressed. Bridge operators were monitored using the eye tracker, eliminating the presence of the analyst in the evaluation of the operation and allowing the analysis of the results by means of multivariate statistical techniques within the scope of system reliability. The experiments aim to observe state-change situations such as scheduled stops and start-ups, assumed incidents and common operating characteristics. Preliminary results indicate that technical and cognitive aspects can contribute to improving the available human reliability techniques, making them more realistic both for quantitative regulatory approaches and for training purposes, as well as reducing the incidence of human error. (author)

  14. Development of SPIES (Space Intelligent Eyeing System) for smart vehicle tracing and tracking

    Science.gov (United States)

    Abdullah, Suzanah; Ariffin Osoman, Muhammad; Guan Liyong, Chua; Zulfadhli Mohd Noor, Mohd; Mohamed, Ikhwan

    2016-06-01

    SPIES, or Space-based Intelligent Eyeing System, is an intelligent technology that can be used for various applications such as gathering spatial information on features of the Earth, tracking the movement of an object, tracing historical information, monitoring driving behavior, and serving as a real-time security and alarm observer, among many others. SPIES, which will be developed and supplied modularly, encourages usage based on the needs and affordability of users. SPIES is a complete system with camera, GSM, GPS/GNSS and G-sensor modules with intelligent functions and capabilities. The camera is mainly used to capture pictures and video, sometimes with audio, of an event. Its usage is not limited to ordinary recording for nostalgic purposes; it can also serve as a security reference and as evidentiary material when an undesirable event such as a crime occurs. When integrated with the space-based technology of the Global Navigation Satellite System (GNSS), photos and videos can be recorded together with positioning information. The integration of these technologies with Information and Communication Technology (ICT) and a Geographic Information System (GIS) produces an innovative method of gathering still pictures or video with positioning information that can be conveyed in real time via the web to display locations on a map, hence creating an intelligent eyeing system based on space technology. Providing global positioning information is challenging, but SPIES overcomes this even in areas without GNSS signal reception, enabling continuous tracking and tracing capability.

  15. Visual attention to food cues in obesity: an eye-tracking study.

    Science.gov (United States)

    Doolan, Katy J; Breslin, Gavin; Hanna, Donncha; Murphy, Kate; Gallagher, Alison M

    2014-12-01

    Based on the theory of incentive sensitization, the aim of this study was to investigate differences in attentional processing of food-related visual cues between normal-weight and overweight/obese males and females. Twenty-six normal-weight (14M, 12F) and 26 overweight/obese (14M, 12F) adults completed a visual probe task and an eye-tracking paradigm. Reaction times and eye movements to food and control images were collected during both a fasted and fed condition in a counterbalanced design. Participants had greater visual attention towards high-energy-density food images compared to low-energy-density food images regardless of hunger condition. This was most pronounced in overweight/obese males who had significantly greater maintained attention towards high-energy-density food images when compared with their normal-weight counterparts however no between weight group differences were observed for female participants. High-energy-density food images appear to capture visual attention more readily than low-energy-density food images. Results also suggest the possibility of an altered visual food cue-associated reward system in overweight/obese males. Attentional processing of food cues may play a role in eating behaviors thus should be taken into consideration as part of an integrated approach to curbing obesity. © 2014 The Obesity Society.

  16. Greater Pupil Size in Response to Emotional Faces as an Early Marker of Social-Communicative Difficulties in Infants at High Risk for Autism.

    Science.gov (United States)

    Wagner, Jennifer B; Luyster, Rhiannon J; Tager-Flusberg, Helen; Nelson, Charles A

    2016-01-01

    When scanning faces, individuals with autism spectrum disorder (ASD) have shown reduced visual attention (e.g., less time on eyes) and atypical autonomic responses (e.g., heightened arousal). To understand how these differences might explain sub-clinical variability in social functioning, 9-month-olds, with or without a family history of ASD, viewed emotionally-expressive faces, and gaze and pupil diameter (a measure of autonomic activation) were recorded using eye-tracking. Infants at high-risk for ASD with no subsequent clinical diagnosis (HRA-) and low-risk controls (LRC) showed similar face scanning and attention to eyes and mouth. Attention was overall greater to eyes than mouth, but this varied as a function of the emotion presented. HRA- showed significantly larger pupil size than LRC. Correlations between scanning at 9 months, pupil size at 9 months, and 18-month social-communicative behavior revealed positive associations between pupil size and attention to both face and eyes at 9 months in LRC, and a negative association between 9-month pupil size and 18-month social-communicative behavior in HRA-. The present findings point to heightened autonomic arousal in HRA-. Further, with greater arousal relating to worse social-communicative functioning at 18 months, this work points to a mechanism by which unaffected siblings might develop atypical social behavior.

  17. Attention mediates the effect of nutrition label information on consumers’ choice. Evidence from a choice experiment involving eye-tracking

    NARCIS (Netherlands)

    Bialkova, Svetlana; Bialkova, Svetlana; Grunert, Klaus G.; Juhl, Hans Jørn; Wasowicz-Kirylo, Grazyna; Stysko-Kunkowska, Malgorzata; van Trijp, Hans C.M.

    2014-01-01

    In two eye-tracking studies, we explored whether and how attention to nutrition information mediates consumers’ choice. Consumers had to select either the healthiest option or a product of their preference within an assortment. On each product a particular label (Choices logo, monochrome GDA label,

  18. Attention mediates the effect of nutrition label information on consumers' choice. Evidence from a choice experiment involving eye-tracking

    NARCIS (Netherlands)

    Bialkova, S.; Grunert, K.G.; Juhl, H.J.; Wasowicz-Kirylo, G.; Stysko-Kunkowska, M.; Trijp, van J.C.M.

    2014-01-01

    In two eye-tracking studies, we explored whether and how attention to nutrition information mediates consumers' choice. Consumers had to select either the healthiest option or a product of their preference within an assortment. On each product a particular label (Choices logo, monochrome GDA label,

  19. Longitudinal strain bull's eye plot patterns in patients with cardiomyopathy and concentric left ventricular hypertrophy.

    Science.gov (United States)

    Liu, Dan; Hu, Kai; Nordbeck, Peter; Ertl, Georg; Störk, Stefan; Weidemann, Frank

    2016-05-10

    Despite substantial advances in the imaging techniques and pathophysiological understanding over the last decades, identification of the underlying causes of left ventricular hypertrophy by means of echocardiographic examination remains a challenge in current clinical practice. The longitudinal strain bull's eye plot derived from 2D speckle tracking imaging offers an intuitive visual overview of the global and regional left ventricular myocardial function in a single diagram. The bull's eye mapping is clinically feasible and the plot patterns could provide clues to the etiology of cardiomyopathies. The present review summarizes the longitudinal strain, bull's eye plot features in patients with various cardiomyopathies and concentric left ventricular hypertrophy and the bull's eye plot features might serve as one of the cardiac workup steps on evaluating patients with left ventricular hypertrophy.

  20. Early Experience with Technology-Based Eye Care Services (TECS): A Novel Ophthalmologic Telemedicine Initiative.

    Science.gov (United States)

    Maa, April Y; Wojciechowski, Barbara; Hunt, Kelly J; Dismuke, Clara; Shyu, Jason; Janjua, Rabeea; Lu, Xiaoqin; Medert, Charles M; Lynch, Mary G

    2017-04-01

    The aging population is at risk of common eye diseases, and routine eye examinations are recommended to prevent visual impairment. Unfortunately, patients are less likely to seek care as they age, which may be the result of significant travel and time burdens associated with going to an eye clinic in person. A new method of eye-care delivery that mitigates distance barriers and improves access was developed to improve screening for potentially blinding conditions. We present the quality data from the early experience (first 13 months) of Technology-Based Eye Care Services (TECS), a novel ophthalmologic telemedicine program. With TECS, a trained ophthalmology technician is stationed in a primary care clinic away from the main hospital. The ophthalmology technician follows a detailed protocol that collects information about the patient's eyes. The information then is interpreted remotely. Patients with possible abnormal findings are scheduled for a face-to-face examination in the eye clinic. Any patient with no known ocular disease who desires a routine eye screening examination is eligible. Technology-Based Eye Care Services was established in 5 primary care clinics in Georgia surrounding the Atlanta Veterans Affairs hospital. Four program operation metrics (patient satisfaction, eyeglass remakes, disease detection, and visit length) and 2 access-to-care metrics (appointment wait time and no-show rate) were tracked. Care was rendered to 2690 patients over the first 13 months of TECS. The program has been met with high patient satisfaction (4.95 of 5). Eyeglass remake rate was 0.59%. Abnormal findings were noted in 36.8% of patients and there was >90% agreement between the TECS reading and the face-to-face findings of the physician. TECS saved both patient (25% less) and physician time (50% less), and access to care substantially improved with 99% of patients seen within 14 days of contacting the eye clinic, with a TECS no-show rate of 5.2%. The early experience with

  1. Towards understanding addiction factors of mobile devices: An eye tracking study on effect of screen size.

    Science.gov (United States)

    Wibirama, Sunu; Nugroho, Hanung A

    2017-07-01

    Mobile device addiction has been an important research topic in cognitive science, mental health, and human-machine interaction. Previous works observed mobile device addiction by logging mobile device activity. Although immersion has been linked as a significant predictor of video game addiction, investigation of mobile device addiction factors with behavioral measurement had not been done before. In this research, we demonstrate the use of eye tracking to observe the effect of screen size on the experience of immersion. We compared subjective judgment with eye movement analysis. Non-parametric analysis of the immersion scores shows that screen size affects the experience of immersion, a factor relevant to mobile device addiction. Our experimental results are also useful for developing guidelines as well as intervention strategies to deal with smartphone addiction.
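
    The non-parametric comparison of immersion scores between screen-size groups mentioned above could be run, for example, with a Mann-Whitney U test; the record does not specify which test the authors used, and the data below are placeholders.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
# Placeholder immersion scores for two screen-size groups (e.g. small vs large phone).
small_screen = rng.normal(20, 4, size=15)
large_screen = rng.normal(24, 4, size=15)

# Non-parametric comparison, appropriate for ordinal subjective immersion scores.
stat, p = mannwhitneyu(small_screen, large_screen, alternative="two-sided")
print(stat, p)
```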

  2. Model-driven gaze simulation for the blind person in face-to-face communication

    NARCIS (Netherlands)

    Qiu, S.; Anas, S.A.B.; Osawa, H.; Rauterberg, G.W.M.; Hu, J.

    2016-01-01

    In face-to-face communication, eye gaze is integral to a conversation to supplement verbal language. The sighted often uses eye gaze to convey nonverbal information in social interactions, which a blind conversation partner cannot access and react to them. In this paper, we present E-Gaze glasses

  3. Rett syndrome: basic features of visual processing-a pilot study of eye-tracking.

    Science.gov (United States)

    Djukic, Aleksandra; Valicenti McDermott, Maria; Mavrommatis, Kathleen; Martins, Cristina L

    2012-07-01

    Consistently observed "strong eye gaze" has not been validated as a means of communication in girls with Rett syndrome, ubiquitously affected by apraxia, unable to reply either verbally or manually to questions during formal psychologic assessment. We examined nonverbal cognitive abilities and basic features of visual processing (visual discrimination attention/memory) by analyzing patterns of visual fixation in 44 girls with Rett syndrome, compared with typical control subjects. To determine features of visual fixation patterns, multiple pictures (with the location of the salient and presence/absence of novel stimuli as variables) were presented on the screen of a TS120 eye-tracker. Of the 44, 35 (80%) calibrated and exhibited meaningful patterns of visual fixation. They looked longer at salient stimuli (cartoon, 2.8 ± 2 seconds S.D., vs shape, 0.9 ± 1.2 seconds S.D.; P = 0.02), regardless of their position on the screen. They recognized novel stimuli, decreasing the fixation time on the central image when another image appeared on the periphery of the slide (2.7 ± 1 seconds S.D. vs 1.8 ± 1 seconds S.D., P = 0.002). Eye-tracking provides a feasible method for cognitive assessment and new insights into the "hidden" abilities of individuals with Rett syndrome. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Face recognition increases during saccade preparation.

    Science.gov (United States)

    Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian

    2014-01-01

    Face perception is integral to the human perception system as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic object features, such as orientation, improves at the saccade landing point. Interestingly, there is also evidence indicating that faces are processed in early visual processing stages similarly to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed to map the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be processed similarly to simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition.

  5. Attention to internal face features in unfamiliar face matching.

    Science.gov (United States)

    Fletcher, Kingsley I; Butavicius, Marcus A; Lee, Michael D

    2008-08-01

    Accurate matching of unfamiliar faces is vital in security and forensic applications, yet previous research has suggested that humans often perform poorly when matching unfamiliar faces. Hairstyle and facial hair can strongly influence unfamiliar face matching but are potentially unreliable cues. This study investigated whether increased attention to the more stable internal face features of eyes, nose, and mouth was associated with more accurate face-matching performance. Forty-three first-year psychology students decided whether two simultaneously presented faces were of the same person or not. The faces were displayed for either 2 or 6 seconds, and had either similar or dissimilar hairstyles. The level of attention to internal features was measured by the proportion of fixation time spent on the internal face features and the sensitivity of discrimination to changes in external feature similarity. Increased attention to internal features was associated with increased discrimination in the 2-second display-time condition, but no significant relationship was found in the 6-second condition. Individual differences in eye-movements were highly stable across the experimental conditions.

  6. The Influences of Face Inversion and Facial Expression on Sensitivity to Eye Contact in High-Functioning Adults with Autism Spectrum Disorders

    Science.gov (United States)

    Vida, Mark D.; Maurer, Daphne; Calder, Andrew J.; Rhodes, Gillian; Walsh, Jennifer A.; Pachai, Matthew V.; Rutherford, M. D.

    2013-01-01

    We examined the influences of face inversion and facial expression on sensitivity to eye contact in high-functioning adults with and without an autism spectrum disorder (ASD). Participants judged the direction of gaze of angry, fearful, and neutral faces. In the typical group only, the range of directions of gaze leading to the perception of eye…

  7. No Differences in Emotion Recognition Strategies in Children with Autism Spectrum Disorder: Evidence from Hybrid Faces

    Directory of Open Access Journals (Sweden)

    Kris Evers

    2014-01-01

    Full Text Available Emotion recognition problems are frequently reported in individuals with an autism spectrum disorder (ASD). However, this research area is characterized by inconsistent findings, with atypical emotion processing strategies possibly contributing to existing contradictions. In addition, an attenuated saliency of the eyes region is often demonstrated in ASD during face identity processing. We wanted to compare reliance on mouth versus eyes information in children with and without ASD, using hybrid facial expressions. A group of six-to-eight-year-old boys with ASD and an age- and intelligence-matched typically developing (TD) group without intellectual disability performed an emotion labelling task with hybrid facial expressions. Five static expressions were used: one neutral expression and four emotional expressions, namely, anger, fear, happiness, and sadness. Hybrid faces were created, consisting of an emotional face half (upper or lower face region) with the other face half showing a neutral expression. Results showed no emotion recognition problem in ASD. Moreover, we provided evidence for the existence of top- and bottom-emotions in children: correct identification of expressions mainly depends on information in the eyes (so-called top-emotions: happiness) or in the mouth region (so-called bottom-emotions: sadness, anger, and fear). No stronger reliance on mouth information was found in children with ASD.

  8. No differences in emotion recognition strategies in children with autism spectrum disorder: evidence from hybrid faces.

    Science.gov (United States)

    Evers, Kris; Kerkhof, Inneke; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2014-01-01

    Emotion recognition problems are frequently reported in individuals with an autism spectrum disorder (ASD). However, this research area is characterized by inconsistent findings, with atypical emotion processing strategies possibly contributing to existing contradictions. In addition, an attenuated saliency of the eyes region is often demonstrated in ASD during face identity processing. We wanted to compare reliance on mouth versus eyes information in children with and without ASD, using hybrid facial expressions. A group of six-to-eight-year-old boys with ASD and an age- and intelligence-matched typically developing (TD) group without intellectual disability performed an emotion labelling task with hybrid facial expressions. Five static expressions were used: one neutral expression and four emotional expressions, namely, anger, fear, happiness, and sadness. Hybrid faces were created, consisting of an emotional face half (upper or lower face region) with the other face half showing a neutral expression. Results showed no emotion recognition problem in ASD. Moreover, we provided evidence for the existence of top- and bottom-emotions in children: correct identification of expressions mainly depends on information in the eyes (so-called top-emotions: happiness) or in the mouth region (so-called bottom-emotions: sadness, anger, and fear). No stronger reliance on mouth information was found in children with ASD.

  9. The processing of spatial information in short-term memory: insights from eye tracking the path length effect.

    Science.gov (United States)

    Guérard, Katherine; Tremblay, Sébastien; Saint-Aubin, Jean

    2009-10-01

    Serial memory for spatial locations increases as the distance between successive stimuli locations decreases. This effect, known as the path length effect [Parmentier, F. B. R., Elford, G., & Maybery, M. T. (2005). Transitional information in spatial serial memory: Path characteristics affect recall performance. Journal of Experimental Psychology: Learning, Memory & Cognition, 31, 412-427], was investigated in a systematic manner using eye tracking and interference procedures to explore the mechanisms responsible for the processing of spatial information. In Experiment 1, eye movements were monitored during a spatial serial recall task--in which the participants have to remember the location of spatially and temporally separated dots on the screen. In the experimental conditions, eye movements were suppressed by requiring participants to incessantly move their eyes between irrelevant locations. Ocular suppression abolished the path length effect whether eye movements were prevented during item presentation or during a 7s retention interval. In Experiment 2, articulatory suppression was combined with a spatial serial recall task. Although articulatory suppression impaired performance, it did not alter the path length effect. Our results suggest that rehearsal plays a key role in serial memory for spatial information, though the effect of path length seems to involve other processes located at encoding, such as the time spent fixating each location and perceptual organization.

  10. Using combined eye tracking and word association in order to assess novel packaging solutions: A case study involving jam jars

    NARCIS (Netherlands)

    Piqueras Fiszman, B.; Velasco, C.; Salgado, A.; Spence, C.

    2013-01-01

    The present study utilized the techniques of eye tracking and word association in order to collect attentional information and freely-elicited associations from consumers in response to changing specific attributes of the product packaging (jam jars). We assessed the relationship between the data

  11. Automated Proton Track Identification in MicroBooNE Using Gradient Boosted Decision Trees

    Energy Technology Data Exchange (ETDEWEB)

    Woodruff, Katherine [New Mexico State U.

    2017-10-02

    MicroBooNE is a liquid argon time projection chamber (LArTPC) neutrino experiment that is currently running in the Booster Neutrino Beam at Fermilab. LArTPC technology allows for high-resolution, three-dimensional representations of neutrino interactions. A wide variety of software tools for automated reconstruction and selection of particle tracks in LArTPCs are actively being developed. Short, isolated proton tracks, the signal for low-momentum-transfer neutral current (NC) elastic events, are easily hidden in a large cosmic background. Detecting these low-energy tracks will allow us to probe interesting regions of the proton's spin structure. An effective method for selecting NC elastic events is to combine a highly efficient track reconstruction algorithm to find all candidate tracks with highly accurate particle identification using a machine learning algorithm. We present our work on particle track classification using gradient tree boosting software (XGBoost) and the performance on simulated neutrino data.
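
    A minimal sketch of training a gradient boosted classifier with XGBoost on per-track features is shown below. The feature set, labels and hyperparameters here are placeholders; the real MicroBooNE features and training samples come from the experiment's reconstruction software.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

# Hypothetical per-track features (e.g. length, dE/dx statistics, straightness).
rng = np.random.default_rng(0)
X = rng.random((5000, 6))
y = rng.integers(0, 2, size=5000)        # 1 = proton, 0 = other particle (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```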

  12. The effect of arousal and eye gaze direction on trust evaluations of stranger's faces: A potential pathway to paranoid thinking.

    Science.gov (United States)

    Abbott, Jennie; Middlemiss, Megan; Bruce, Vicki; Smailes, David; Dudley, Robert

    2018-09-01

    When asked to evaluate faces of strangers, people with paranoia show a tendency to rate others as less trustworthy. The present study investigated the impact of arousal on this interpersonal bias, and whether this bias was specific to evaluations of trust or additionally affected other trait judgements. The study also examined the impact of eye gaze direction, as direct eye gaze has been shown to heighten arousal. In two experiments, non-clinical participants completed face rating tasks before and after either an arousal manipulation or control manipulation. Experiment one examined the effects of heightened arousal on judgements of trustworthiness. Experiment two examined the specificity of the bias, and the impact of gaze direction. Experiment one indicated that the arousal manipulation led to lower trustworthiness ratings. Experiment two showed that heightened arousal reduced trust evaluations of trustworthy faces, particularly trustworthy faces with averted gaze. The control group rated trustworthy faces with direct gaze as more trustworthy post-manipulation. There was some evidence that attractiveness ratings were affected similarly to the trust judgements, whereas judgements of intelligence were not affected by higher arousal. In both studies, participants reported low levels of arousal even after the manipulation and the use of a non-clinical sample limits the generalisability to clinical samples. There is a complex interplay between arousal, evaluations of trustworthiness and gaze direction. Heightened arousal influences judgements of trustworthiness, but within the context of face type and gaze direction. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Assessing cognitive functioning in females with Rett syndrome by eye-tracking methodology.

    Science.gov (United States)

    Ahonniska-Assa, Jaana; Polack, Orli; Saraf, Einat; Wine, Judy; Silberg, Tamar; Nissenkorn, Andreea; Ben-Zeev, Bruria

    2018-01-01

    While many individuals with severe developmental impairments learn to communicate with augmentative and alternative communication (AAC) devices, a significant number of individuals show major difficulties in the effective use of AAC. Recent technological innovations, i.e., eye-tracking technology (ETT), aim to improve the transparency of communication and may also enable a more valid cognitive assessment. To investigate whether ETT in forced-choice tasks can enable children with very severe motor and speech impairments to respond consistently, allowing a more reliable evaluation of their language comprehension. Participants were 17 girls with Rett syndrome (M = 6:06 years). Their ability to respond by eye gaze was first practiced with computer games using ETT. Afterwards, their receptive vocabulary was assessed using the Peabody Picture Vocabulary Test-4 (PPVT-4). Target words were orally presented and participants responded by focusing their eyes on the preferred picture. Remarkable differences between the participants in receptive vocabulary were demonstrated using ETT. The verbal comprehension abilities of 32% of the participants ranged from low-average to mild cognitive impairment, and the other 68% of the participants showed moderate to severe impairment. Young age at the time of assessment was positively correlated with higher receptive vocabulary. The use of ETT seems to make the communicational signals of children with severe motor and communication impairments more easily understood. Early practice of ETT may improve the quality of communication and enable more reliable conclusions in learning and assessment sessions. Copyright © 2017 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.

  14. Rapid identification and source-tracking of Listeria monocytogenes using MALDI-TOF mass spectrometry.

    Science.gov (United States)

    Jadhav, Snehal; Gulati, Vandana; Fox, Edward M; Karpe, Avinash; Beale, David J; Sevior, Danielle; Bhave, Mrinal; Palombo, Enzo A

    2015-06-02

    Listeria monocytogenes is an important foodborne pathogen responsible for the sometimes fatal disease listeriosis. Public health concerns and stringent regulations associated with the presence of this pathogen in food and food processing environments underline the need for rapid and reliable detection and subtyping techniques. In the current study, the application of matrix assisted laser desorption/ionisation-time-of-flight mass spectrometry (MALDI-TOF MS) as a single identification and source-tracking tool for a collection of L. monocytogenes isolates, obtained predominantly from dairy sources within Australia, was explored. The isolates were cultured on different growth media and analysed using MALDI-TOF MS at two incubation times (24 and 48 h). Whilst reliable genus-level identification was achieved from most media, identification at the species level was found to be dependent on culture conditions. Successful speciation was highest for isolates cultured on the chromogenic Agar Listeria Ottaviani Agosti agar (ALOA, 91% of isolates) and non-selective horse blood agar (HBA, 89%) for 24 h. Chemometric statistical analysis of the MALDI-TOF MS data enabled source-tracking of L. monocytogenes isolates obtained from four different dairy sources. Strain-level discrimination was also observed to be influenced by culture conditions. In addition, t-test/analysis of variance (ANOVA) was used to identify potential biomarker peaks that differentiated the isolates according to their source of isolation. Source-tracking using MALDI-TOF MS was compared and correlated with the gold standard pulsed-field gel electrophoresis (PFGE) technique. The discriminatory index and the congruence between both techniques were compared using Simpson's Diversity Index and adjusted Rand and Wallace coefficients. Overall, MALDI-TOF MS based source-tracking (using data obtained by culturing the isolates on HBA) and PFGE demonstrated good congruence with a Wallace coefficient of 0.71 and
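
    Congruence between MALDI-TOF-based and PFGE-based groupings can be quantified with partition-agreement indices such as the adjusted Rand coefficient mentioned above. The snippet below shows the computation on made-up cluster assignments; the actual isolate groupings are those reported in the study.

```python
from sklearn.metrics import adjusted_rand_score

# Hypothetical cluster assignments for the same isolates from two typing methods.
pfge_clusters = [0, 0, 1, 1, 2, 2, 2, 3]
maldi_clusters = [0, 0, 1, 1, 2, 2, 3, 3]

# Adjusted Rand index: 1.0 = identical groupings, ~0 = chance-level agreement.
print(adjusted_rand_score(pfge_clusters, maldi_clusters))
```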

  15. A free geometry model-independent neural eye-gaze tracking system

    Directory of Open Access Journals (Sweden)

    Gneo Massimo

    2012-11-01

    Full Text Available Abstract Background Eye Gaze Tracking Systems (EGTSs) estimate the Point Of Gaze (POG) of a user. In diagnostic applications EGTSs are used to study oculomotor characteristics and abnormalities, whereas in interactive applications EGTSs are proposed as input devices for human computer interfaces (HCI), e.g. to move a cursor on the screen when mouse control is not possible, such as in the case of assistive devices for people suffering from locked-in syndrome. If the user’s head remains still and the cornea rotates around its fixed centre, the pupil follows the eye in the images captured from one or more cameras, whereas the outer corneal reflection generated by an IR light source, i.e. the glint, can be assumed to be a fixed reference point. According to the so-called pupil centre corneal reflection (PCCR) method, the POG can thus be estimated from the pupil-glint vector. Methods A new model-independent EGTS based on the PCCR is proposed. The mapping function, based on artificial neural networks, avoids any specific model assumption or approximation for either the user’s eye physiology or the initial system setup, admitting a free-geometry positioning of the user and the system components. The robustness of the proposed EGTS is proven by assessing its accuracy when tested on real data coming from: (i) different healthy users; (ii) different geometric settings of the camera and the light sources; (iii) different protocols based on the observation of points on a calibration grid and halfway points of a test grid. Results The achieved accuracy is approximately 0.49°, 0.41°, and 0.62° for the horizontal, vertical and radial error of the POG, respectively. Conclusions The results prove the validity of the proposed approach, as the proposed system performs better than EGTSs designed for HCI which, even if equipped with superior hardware, show accuracy values in the range 0.6°-1°.
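
    The mapping from pupil-glint vector to point of gaze can be learned by a small regression network, as the abstract describes. The sketch below uses scikit-learn's MLPRegressor as a stand-in for the authors' network; the calibration data, scaling factors and layer sizes are illustrative assumptions, not the published configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic calibration data: pupil-glint vectors and the on-screen targets the
# user fixated during calibration (a real system records these rather than simulating them).
rng = np.random.default_rng(0)
pupil_glint = rng.uniform(-1, 1, size=(60, 2))
screen_xy = pupil_glint @ np.array([[800.0, 0.0], [0.0, 600.0]]) \
            + rng.normal(0, 5, size=(60, 2))

# Neural mapping function: no geometric eye/camera model is assumed.
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=10000, random_state=0)
net.fit(pupil_glint, screen_xy)

print(net.predict(pupil_glint[:1]))   # estimated point of gaze for one sample
```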

  16. The "hypnotic state" and eye movements : Less there than meets the eye?

    NARCIS (Netherlands)

    Cardeña, Etzel; Nordhjem, Barbara; Marcusson-Clavertz, David; Holmqvist, Kenneth

    2017-01-01

    Responsiveness to hypnotic procedures has been related to unusual eye behaviors for centuries. Kallio and collaborators claimed recently that they had found a reliable index for "the hypnotic state" through eye-tracking methods. Whether or not hypnotic responding involves a special state of

  17. Gender classification from face images by using local binary pattern and gray-level co-occurrence matrix

    Science.gov (United States)

    Uzbaş, Betül; Arslan, Ahmet

    2018-04-01

    Gender is an important attribute in human-computer interaction and identification processes, and the human face image is one of the main sources for determining it. In the present study, gender classification is performed automatically from facial images. To classify gender, we propose a combination of features extracted from the face, eye and lip regions using a hybrid method of Local Binary Pattern and Gray-Level Co-Occurrence Matrix. The features are extracted from automatically obtained face, eye and lip regions, then combined and given as input parameters to classification methods (Support Vector Machine, Artificial Neural Networks, Naive Bayes and k-Nearest Neighbor) for gender classification. The Nottingham Scan face database, which consists of frontal face images of 100 people (50 male and 50 female), is used for this purpose. In the experimental studies, the highest success rate, 98%, was achieved using the Support Vector Machine. The experimental results illustrate the efficacy of the proposed method.
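
    As a rough illustration of the feature pipeline described in this record (Local Binary Pattern histograms combined with Gray-Level Co-Occurrence Matrix statistics, classified with a Support Vector Machine), the sketch below extracts both feature types from placeholder image regions. The region crops, labels and parameter values are invented; only the scikit-image and scikit-learn calls (spelled graycomatrix/graycoprops in scikit-image 0.19 and later) are standard.

      # Sketch of an LBP + GLCM feature extractor with an SVM classifier.
      # The face/eye/lip crops and gender labels below are random placeholders.
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      def region_features(gray_region):
          # LBP histogram (uniform patterns) plus a few GLCM statistics for one region.
          lbp = local_binary_pattern(gray_region, P=8, R=1, method="uniform")
          hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)
          glcm = graycomatrix(gray_region, distances=[1], angles=[0],
                              levels=256, symmetric=True, normed=True)
          stats = [graycoprops(glcm, p)[0, 0]
                   for p in ("contrast", "homogeneity", "energy", "correlation")]
          return np.concatenate([hist, stats])

      def image_features(face, eyes, lips):
          # Combine features from the face, eye and lip regions of one image.
          return np.concatenate([region_features(r) for r in (face, eyes, lips)])

      rng = np.random.default_rng(0)
      X = np.array([image_features(*(rng.integers(0, 256, (32, 32), dtype=np.uint8)
                                     for _ in range(3))) for _ in range(40)])
      y = rng.integers(0, 2, 40)  # 0 = female, 1 = male (random stand-in labels)

      print("cross-validated accuracy:", cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())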

  18. Comparison of ergometer- and track-based testing in junior track-sprint cyclists. Implications for talent identification and development.

    Science.gov (United States)

    Tofari, Paul J; Cormack, Stuart J; Ebert, Tammie R; Gardner, A Scott; Kemp, Justin G

    2017-10-01

    Talent identification (TID) and talent development (TDE) programmes in track sprint cycling use ergometer- and track-based tests to select junior athletes and assess their development. The purpose of this study was to assess which tests are best at monitoring TID and TDE. Ten male participants (16.2 ± 1.1 years; 178.5 ± 6.0 cm and 73.6 ± 7.6 kg) were selected into the national TID squad based on initial testing. These tests consisted of two 6-s maximal sprints on a custom-built ergometer and 4 maximal track-based tests (2 rolling and 2 standing starts) using 2 gear ratios. Magnitude-based inferences and correlation coefficients assessed changes following a 3-month TDE programme. Training elicited meaningful improvements (80-100% likely) in all ergometer parameters. Times for the standing- and rolling-start, small-gear track-based efforts were likely and very likely improved by training (3.2 ± 2.4% and 3.3 ± 1.9%, respectively). Stronger correlations between ergometer- and track-based measures were very likely following training. Ergometer-based testing provides a more sensitive tool than track-based testing to monitor changes in neuromuscular function during the early stages of TDE. However, track-based testing can indicate skill-based improvements in performance when interpreted with ergometer testing. In combination, these tests provide information on overall talent development.

  19. Identification of emotions in mixed disgusted-happy faces as a function of depressive symptom severity.

    Science.gov (United States)

    Sanchez, Alvaro; Romero, Nuria; Maurage, Pierre; De Raedt, Rudi

    2017-12-01

    Interpersonal difficulties are common in depression, but their underlying mechanisms are not yet fully understood. The role of depression in the identification of mixed emotional signals with a direct interpersonal value remains unclear. The present study aimed to clarify this question. A sample of 39 individuals reporting a broad range of depression levels completed an emotion identification task where they viewed faces expressing three emotional categories (100% disgusted and 100% happy faces, as well as their morphed 50% disgusted - 50% happy exemplars). Participants were asked to identify the corresponding depicted emotion as "clearly disgusted", "mixed", or "clearly happy". Higher depression levels were associated with lower identification of positive emotions in 50% disgusted - 50% happy faces. The study was conducted with an analogue sample reporting individual differences in subclinical depression levels. Further research must replicate these findings in a clinical sample and clarify whether differential emotional identification patterns emerge in depression for different mixed negative-positive emotions (sad-happy vs. disgusted-happy). Depression may account for a lower bias to perceive positive states when ambiguous states from others include subtle signals of social threat (i.e., disgust), leading to an under-perception of positive social signals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Exploring differences in speech processing among elderly hearing-impaired listeners with or without hearing aid experience: Eye-tracking and fMRI measurements

    DEFF Research Database (Denmark)

    Habicht, Julia; Behler, Oliver; Kollmeier, Birger

    2018-01-01

    on the cognitive processes underlying speech comprehension. Eye-tracking and functional magnetic resonance imaging (fMRI) measurements were carried out with acoustic sentence-in-noise (SIN) stimuli complemented by pairs of pictures that either correctly (target) or incorrectly (competitor) depicted the sentence...... meanings. For the eye-tracking measurements, the time taken by the participants to start fixating the target picture (the ‘processing time’) was measured. For the fMRI measurements, brain activation inferred from blood oxygenation level dependent (BOLD) responses following sentence comprehension...... frontal areas for SIN relative to noise-only stimuli in the eHA group compared to the iHA group. Together, these results imply that HA experience leads to faster speech-in-noise processing, possibly related to less recruitment of brain regions outside the core sentence-comprehension network. Follow...

  1. Predicting diagnostic error in Radiology via eye-tracking and image analytics: Application in mammography

    Energy Technology Data Exchange (ETDEWEB)

    Voisin, Sophie [ORNL; Pinto, Frank M [ORNL; Morin-Ducote, Garnetta [University of Tennessee, Knoxville (UTK); Hudson, Kathy [University of Tennessee, Knoxville (UTK); Tourassi, Georgia [ORNL

    2013-01-01

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists' gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from 4 radiology residents and 2 breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BI-RADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Diagnostic error can be predicted reliably by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model (AUC=0.79). Personalized user modeling was far more accurate for the more experienced readers (average AUC of 0.837 ± 0.029) than for the less experienced ones (average AUC of 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted reliably by leveraging the radiologists' gaze behavior and image content.
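
    The merging step described above can be illustrated by concatenating per-case gaze features with image features and scoring an error-prediction model by AUC, as in the sketch below. All feature values, the random-forest learner and the cross-validation scheme are placeholders; the study's actual features and modelling pipeline are not reproduced here.

      # Sketch: merging gaze-behavior and image features to predict diagnostic error.
      # All feature values and error labels below are synthetic placeholders.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(1)
      n_cases = 240  # e.g., cases x readers

      # Hypothetical gaze features (dwell time, fixation count, time to first fixation)
      # and image features (texture statistics of the mass under review).
      gaze_features = rng.normal(size=(n_cases, 3))
      image_features = rng.normal(size=(n_cases, 5))
      X = np.hstack([gaze_features, image_features])     # merged feature vector
      error = (rng.random(n_cases) < 0.3).astype(int)    # 1 = diagnostic error

      # Group-based model: pool all readers and estimate AUC by cross-validation.
      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      scores = cross_val_predict(clf, X, error, cv=5, method="predict_proba")[:, 1]
      print("group model AUC:", round(roc_auc_score(error, scores), 3))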

  2. Speech monitoring and phonologically-mediated eye gaze in language perception and production: a comparison using printed word eye-tracking

    Science.gov (United States)

    Gauvin, Hanna S.; Hartsuiker, Robert J.; Huettig, Falk

    2013-01-01

    The Perceptual Loop Theory of speech monitoring assumes that speakers routinely inspect their inner speech. In contrast, Huettig and Hartsuiker (2010) observed that listening to one's own speech during language production drives eye-movements to phonologically related printed words with a similar time-course as listening to someone else's speech does in speech perception experiments. This suggests that speakers use their speech perception system to listen to their own overt speech, but not to their inner speech. However, a direct comparison between production and perception with the same stimuli and participants is lacking so far. The current printed word eye-tracking experiment therefore used a within-subjects design, combining production and perception. Displays showed four words, of which one, the target, either had to be named or was presented auditorily. Accompanying words were phonologically related, semantically related, or unrelated to the target. There were small increases in looks to phonological competitors with a similar time-course in both production and perception. Phonological effects in perception however lasted longer and had a much larger magnitude. We conjecture that this difference is related to a difference in predictability of one's own and someone else's speech, which in turn has consequences for lexical competition in other-perception and possibly suppression of activation in self-perception. PMID:24339809

  3. Eye gaze during comprehension of American Sign Language by native and beginning signers.

    Science.gov (United States)

    Emmorey, Karen; Thompson, Robin; Colvin, Rachael

    2009-01-01

    An eye-tracking experiment investigated where deaf native signers (N = 9) and hearing beginning signers (N = 10) look while comprehending a short narrative and a spatial description in American Sign Language produced live by a fluent signer. Both groups fixated primarily on the signer's face (more than 80% of the time) but differed with respect to fixation location. Beginning signers fixated on or near the signer's mouth, perhaps to better perceive English mouthing, whereas native signers tended to fixate on or near the eyes. Beginning signers shifted gaze away from the signer's face more frequently than native signers, but the pattern of gaze shifts was similar for both groups. When a shift in gaze occurred, the sign narrator was almost always looking at his or her hands and was most often producing a classifier construction. We conclude that joint visual attention and attention to mouthing (for beginning signers), rather than linguistic complexity or processing load, affect gaze fixation patterns during sign language comprehension.

  4. Brief Report: Broad Autism Phenotype in Adults Is Associated with Performance on an Eye-Tracking Measure of Joint Attention

    Science.gov (United States)

    Swanson, Meghan R.; Siller, Michael

    2014-01-01

    The current study takes advantage of modern eye-tracking technology and evaluates how individuals allocate their attention when viewing social videos that display an adult model who is gazing at a series of targets that appear and disappear in the four corners of the screen (congruent condition), or gazing elsewhere (incongruent condition). Data…

  5. Impaired Value Learning for Faces in Preschoolers With Autism Spectrum Disorder.

    Science.gov (United States)

    Wang, Quan; DiNicola, Lauren; Heymann, Perrine; Hampson, Michelle; Chawarska, Katarzyna

    2018-01-01

    One of the common findings in autism spectrum disorder (ASD) is limited selective attention toward social objects, such as faces. Evidence from both human and nonhuman primate studies suggests that selection of objects for processing is guided by the appraisal of object values. We hypothesized that impairments in selective attention in ASD may reflect a disruption of a system supporting learning about object values in the social domain. We examined value learning in social (faces) and nonsocial (fractals) domains in preschoolers with ASD (n = 25) and typically developing (TD) controls (n = 28), using a novel value learning task implemented on a gaze-contingent eye-tracking platform consisting of value learning and a selective attention choice test. Children with ASD performed more poorly than TD controls on the social value learning task, but both groups performed similarly on the nonsocial task. Within-group comparisons indicated that value learning in TD children was enhanced on the social compared to the nonsocial task, but no such enhancement was seen in children with ASD. Performance in the social and nonsocial conditions was correlated in the ASD but not in the TD group. The study provides support for a domain-specific impairment in value learning for faces in ASD, and suggests that, in ASD, value learning in social and nonsocial domains may rely on a shared mechanism. These findings have implications both for models of selective social attention deficits in autism and for identification of novel treatment targets. Copyright © 2017 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  6. IDENTIFICATION SYSTEM, TRACKING AND SUPPORT FOR VESSELS ON RIVERS

    Directory of Open Access Journals (Sweden)

    SAMOILESCU Gheorghe

    2015-05-01

    Full Text Available According to the program COMPRIS (Consortium Operational Management Platform River Information Services), AIS (Automatic Identification System) and RIS (River Information Services) have compiled a reference model based on the perspective of navigation on the river with related information services. This paper presents a tracking and monitoring surveillance system necessary for the assistance of each ship sailing in an area of interest. It describes the operating principle, composition and role of each piece of equipment. Transferring data to the traffic monitoring authority is also part of this work.

  7. Post-industrial landscape - its identification and classification as contemporary challenges faced by geographic research

    Czech Academy of Sciences Publication Activity Database

    Kolejka, Jaromír

    2010-01-01

    Roč. 14, č. 2 (2010), s. 67-78 ISSN 1842-5135 Institutional research plan: CEZ:AV0Z30860518 Keywords : classification * geographical research * identification method * landscape structure Subject RIV: DE - Earth Magnetism, Geodesy, Geography http://studiacrescent.com/images/02_2010/09_jaromir_kolejka_post_industrial_landscape_its_identification_and_classification_as_contemporary_challenges_faced_by_geographic_.pdf

  8. Tracking and Particle Identification at LHCb and Strange Hadron Production in Events with Z Boson

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00392146; Serra, N.; Mueller, K; Steinkamp, O

    The LHCb experiment, located at the Large Hadron Collider at CERN, is a high-energy particle physics experiment dedicated to precision measurements of events containing beauty and charm quarks. The detector is built as a single-arm forward spectrometer. It uses tracking stations upstream and downstream of its dipole magnet to measure the trajectories and momenta of charged particles. This thesis describes the improvements to the track reconstruction algorithm, which were implemented for the second run of the LHC that started in spring 2015. Furthermore, the method to confirm the performance numbers on data is presented. In addition to the tracking system, the detector uses two Ring Imaging Cherenkov detectors, upstream and downstream of the dipole magnet, together with the calorimeter and muon system, for particle identification. The detector response for the particle identification is known to be poorly modelled, since the dependence on environmental variables like temperature and pressure inside the gas mo...

  9. The ticking time bomb: Using eye-tracking methodology to capture attentional processing during gradual time constraints.

    Science.gov (United States)

    Franco-Watkins, Ana M; Davis, Matthew E; Johnson, Joseph G

    2016-11-01

    Many decisions are made under suboptimal circumstances, such as time constraints. We examined how different experiences of time constraints affected decision strategies on a probabilistic inference task and whether individual differences in working memory accounted for complex strategy use across different levels of time. To examine information search and attentional processing, we used an interactive eye-tracking paradigm where task information was occluded and only revealed by an eye fixation to a given cell. Our results indicate that although participants change search strategies during the most restricted times, the occurrence of the shift in strategies depends both on how the constraints are applied as well as individual differences in working memory. This suggests that, in situations that require making decisions under time constraints, one can influence performance by being sensitive to working memory and, potentially, by acclimating people to the task time gradually.
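
    The occlusion logic of the paradigm described above (a cell's contents are visible only while the current gaze sample falls inside that cell) can be sketched in a few lines. The grid layout, cell contents and gaze samples below are invented placeholders rather than the study's paradigm code.

      # Sketch of a gaze-contingent information board: a cell's contents are
      # returned only while the gaze point lies inside it. Layout and gaze
      # samples are invented.
      from dataclasses import dataclass

      @dataclass
      class Cell:
          x: float        # left edge (px)
          y: float        # top edge (px)
          w: float
          h: float
          content: str

          def contains(self, gx, gy):
              return self.x <= gx <= self.x + self.w and self.y <= gy <= self.y + self.h

      # Hypothetical 3 x 4 grid of occluded information cells.
      grid = [Cell(100 + c * 150, 100 + r * 100, 140, 90, f"cue r{r}c{c}")
              for r in range(3) for c in range(4)]

      def visible_content(gaze_x, gaze_y):
          # Reveal the fixated cell, if any; everything else stays occluded.
          for cell in grid:
              if cell.contains(gaze_x, gaze_y):
                  return cell.content
          return None

      # Example gaze samples (px); in the real task these come from the eye tracker.
      for gx, gy in [(170, 140), (500, 320), (20, 20)]:
          print((gx, gy), "->", visible_content(gx, gy))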

  10. Differential Attention to Faces in Infant Siblings of Children with Autism Spectrum Disorder and Associations with Later Social and Language Ability

    Science.gov (United States)

    Wagner, Jennifer; Luyster, Rhiannon J.; Moustapha, Hana; Tager-Flusberg, Helen; Nelson, Charles Alexander

    2018-01-01

    A growing body of literature has begun to explore social attention in infant siblings of children with autism spectrum disorder (ASD) with hopes of identifying early differences that are associated with later ASD or other aspects of development. The present study used eye-tracking to examine attention to familiar (mother) and unfamiliar (stranger) faces in two groups…

  11. Effects of anger and sadness on attentional patterns in decision making: an eye-tracking study.

    Science.gov (United States)

    Xing, Cai

    2014-02-01

    Past research examining the effect of anger and sadness on decision making has associated anger with a relatively more heuristic decision-making approach. However, it is unclear whether angry and sad individuals differ while attending to decision-relevant information. An eye-tracking experiment (N=87) was conducted to examine the role of attention in links between emotion and decision making. Angry individuals looked more and earlier toward heuristic cues while making decisions, whereas sad individuals did not show such bias. Implications for designing persuasive messages and studying motivated visual processing were discussed.

  12. Exploring Text and Icon Graph Interpretation in Students with Dyslexia: An Eye-tracking Study.

    Science.gov (United States)

    Kim, Sunjung; Wiseheart, Rebecca

    2017-02-01

    A growing body of research suggests that individuals with dyslexia struggle to use graphs efficiently. Given the persistence of orthographic processing deficits in dyslexia, this study tested whether graph interpretation deficits in dyslexia are directly related to difficulties processing the orthographic components of graphs (i.e. axes and legend labels). Participants were 80 college students with and without dyslexia. Response times and eye movements were recorded as students answered comprehension questions about simple data displayed in bar graphs. Axes and legends were labelled either with words (mixed-modality graphs) or icons (orthography-free graphs). Students also answered informationally equivalent questions presented in sentences (orthography-only condition). Response times were slower in the dyslexic group only for processing sentences. However, eye tracking data revealed group differences for processing mixed-modality graphs, whereas no group differences were found for the orthography-free graphs. When processing bar graphs, students with dyslexia differ from their able reading peers only when graphs contain orthographic features. Implications for processing informational text are discussed. Copyright © 2017 John Wiley & Sons, Ltd.

  13. Adding More Fuel to the Fire: An Eye-Tracking Study of Idiom Processing by Native and Non-Native Speakers

    Science.gov (United States)

    Siyanova-Chanturia, Anna; Conklin, Kathy; Schmitt, Norbert

    2011-01-01

    Using eye-tracking, we investigate on-line processing of idioms in a biasing story context by native and non-native speakers of English. The stimuli are idioms used figuratively ("at the end of the day"--"eventually"), literally ("at the end of the day"--"in the evening"), and novel phrases ("at the end of the war"). Native speaker results…

  14. Seeing to hear? Patterns of gaze to speaking faces in children with autism spectrum disorders.

    Directory of Open Access Journals (Sweden)

    Julia Irwin

    2014-05-01

    Full Text Available Using eye-tracking methodology, gaze to a speaking face was compared in a group of children with autism spectrum disorders (ASD) and those with typical development (TD). Patterns of gaze were observed under three conditions: audiovisual (AV) speech in auditory noise, visual-only speech, and an AV non-face, non-speech control. Children with ASD looked less at the face of the speaker and fixated less on the speaker's mouth than TD controls. No differences in gaze were reported for the non-face, non-speech control task. Since the mouth holds much of the articulatory information available on the face, these findings suggest that children with ASD may have reduced access to critical linguistic information. This reduced access to visible articulatory information could be a contributor to the communication and language problems exhibited by children with ASD.

  15. A software module for implementing auditory and visual feedback on a video-based eye tracking system

    Science.gov (United States)

    Rosanlall, Bharat; Gertner, Izidor; Geri, George A.; Arrington, Karl F.

    2016-05-01

    We describe here the design and implementation of a software module that provides both auditory and visual feedback of the eye position measured by a commercially available eye tracking system. The present audio-visual feedback module (AVFM) serves as an extension to the Arrington Research ViewPoint EyeTracker, but it can be easily modified for use with other similar systems. Two modes of audio feedback and one mode of visual feedback are provided in reference to a circular area-of-interest (AOI). Auditory feedback can be either a click tone emitted when the user's gaze point enters or leaves the AOI, or a sinusoidal waveform with frequency inversely proportional to the distance from the gaze point to the center of the AOI. Visual feedback is in the form of a small circular light patch that is presented whenever the gaze-point is within the AOI. The AVFM processes data that are sent to a dynamic-link library by the EyeTracker. The AVFM's multithreaded implementation also allows real-time data collection (1 kHz sampling rate) and graphics processing that allow display of the current/past gaze-points as well as the AOI. The feedback provided by the AVFM described here has applications in military target acquisition and personnel training, as well as in visual experimentation, clinical research, marketing research, and sports training.
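
    The two auditory feedback modes described above map onto a small amount of logic: detecting AOI entry/exit transitions for the click mode, and scaling a tone frequency inversely with the distance from the gaze point to the AOI centre for the continuous mode. The sketch below is a generic illustration with made-up constants; it does not reproduce the AVFM or its interface to the ViewPoint EyeTracker.

      # Sketch of the two audio feedback modes for a circular area-of-interest (AOI).
      # Constants and the gaze stream are placeholders, not the AVFM implementation.
      import math

      AOI_CENTER = (512.0, 384.0)    # px, hypothetical AOI centre
      AOI_RADIUS = 80.0              # px
      F_MIN, F_MAX = 200.0, 1200.0   # Hz, audible range for the continuous tone
      K = F_MAX * AOI_RADIUS         # tone reaches its ceiling at the AOI boundary

      def in_aoi(gx, gy):
          return math.dist((gx, gy), AOI_CENTER) <= AOI_RADIUS

      def click_events(gaze_samples):
          # Mode 1: emit a click whenever the gaze point enters or leaves the AOI.
          events, inside = [], False
          for t, gx, gy in gaze_samples:
              now_inside = in_aoi(gx, gy)
              if now_inside != inside:
                  events.append((t, "enter" if now_inside else "leave"))
                  inside = now_inside
          return events

      def tone_frequency(gx, gy):
          # Mode 2: frequency inversely proportional to the distance from the gaze
          # point to the AOI centre, clamped to an audible range.
          d = max(math.dist((gx, gy), AOI_CENTER), 1.0)   # avoid division by zero
          return max(F_MIN, min(F_MAX, K / d))

      samples = [(0.000, 300, 300), (0.001, 500, 380), (0.002, 515, 390), (0.003, 700, 500)]
      print(click_events(samples))
      print([round(tone_frequency(gx, gy)) for _, gx, gy in samples])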

  16. "It is the left eye, right?"

    Directory of Open Access Journals (Sweden)

    Pikkel D

    2014-04-01

    Full Text Available Dvora Pikkel,1 Adi Sharabi-Nov,2,3 Joseph Pikkel4,5 1Risk Management and Patient Safety Unit, Assuta Hospital, Ramat Hachayal, Tel-Aviv, Israel; 2Research Wing, Ziv Medical Center, Safed, Israel; 3Tel-Hai Academic College, Upper Galilee, Israel; 4Department of Ophthalmology, Ziv Medical Center, Safed, Israel; 5Faculty of Medicine, Bar-Ilan University, Ramat Gan, Tel Aviv, Israel Objective: Because wrong-site confusion is among the most common mistakes in operations on paired organs, we have examined the frequency of wrong-sided confusions that could theoretically occur in cataract surgeries in the absence of preoperative verification. Methods: Ten cataract surgeons participated in the study. The surgeons were asked to complete a questionnaire that included their demographic data, occupational habits, and their approach to and handling of patients preoperatively. On the day of operation, the surgeons were asked to recognize the side of the operation from the patient's name only. At the second stage of the study, surgeons were asked to recognize the side of the operation while standing at a 2-meter distance from the patient's face. The surgeons' answers were compared to the actual operation side. Patients then underwent a full time-out procedure, which included side marking before the operation. Results: Of the total 67 patients, the surgeons correctly identified the operated side of the eye in 49 (73%) by name and in 56 (83%) by looking at patients' faces. Wrong-side identification correlated with the time elapsed from the last preoperative examination (P=0.034). The number of cataract surgeries performed by the same surgeon (on the same day) also correlated with the number of wrong identifications (P=0.000). Surgeon seniority or age did not correlate with the number of wrong identifications. Conclusion: This study illustrates the high error rate that can result in the absence of side marking prior to cataract surgery, as well as in operations on

  17. The effect of exposure to multiple lineups on face identification accuracy.

    Science.gov (United States)

    Hinz, T; Pezdek, K

    2001-04-01

    This study examines the conditions under which an intervening lineup affects identification accuracy on a subsequent lineup. One hundred and sixty adults observed a photograph of one target individual for 60 s. One week later, they viewed an intervening target-absent lineup and were asked to identify the target individual. Two days later, participants were shown one of three 6-person lineups that included a different photograph of the target face (present or absent), a foil face from the intervening lineup (present or absent), plus additional foil faces. The hit rate was higher when the foil face from the intervening lineup was absent from the test lineup and the false alarm rate was greater when the target face was absent from the test lineup. The results suggest that simply being exposed to an innocent suspect in an intervening lineup, whether that innocent suspect is identified by the witness or not, increases the probability of misidentifying the innocent suspect and decreases the probability of correctly identifying the true perpetrator in a subsequent test lineup. The implications of these findings both for police lineup procedures and for the interpretation of lineup results in the courtroom are discussed.

  18. Activating gender stereotypes during online spoken language processing: evidence from Visual World Eye Tracking.

    Science.gov (United States)

    Pyykkönen, Pirita; Hyönä, Jukka; van Gompel, Roger P G

    2010-01-01

    This study used the visual world eye-tracking method to investigate activation of general world knowledge related to gender-stereotypical role names in online spoken language comprehension in Finnish. The results showed that listeners activated gender stereotypes elaboratively in story contexts where this information was not needed to build coherence. Furthermore, listeners made additional inferences based on gender stereotypes to revise an already established coherence relation. Both results are consistent with mental models theory (e.g., Garnham, 2001). They are harder to explain by the minimalist account (McKoon & Ratcliff, 1992) which suggests that people limit inferences to those needed to establish coherence in discourse.

  19. Individual differences in personality predict how people look at faces.

    Science.gov (United States)

    Perlman, Susan B; Morris, James P; Vander Wyk, Brent C; Green, Steven R; Doyle, Jaime L; Pelphrey, Kevin A

    2009-06-22

    Determining the ways in which personality traits interact with contextual determinants to shape social behavior remains an important area of empirical investigation. The specific personality trait of neuroticism has been related to characteristic negative emotionality and associated with heightened attention to negative, emotionally arousing environmental signals. However, the mechanisms by which this personality trait may shape social behavior remain largely unspecified. We employed eye tracking to investigate the relationship between characteristics of visual scanpaths in response to emotional facial expressions and individual differences in personality. We discovered that the amount of time spent looking at the eyes of fearful faces was positively related to neuroticism. This finding is discussed in relation to previous behavioral research relating personality to selective attention for trait-congruent emotional information, neuroimaging studies relating differences in personality to amygdala reactivity to socially relevant stimuli, and genetic studies suggesting linkages between the serotonin transporter gene and neuroticism. We conclude that personality may be related to interpersonal interaction by shaping aspects of social cognition as basic as eye contact. In this way, eye gaze represents a possible behavioral link in a complex relationship between genes, brain function, and personality.

  20. Individual differences in personality predict how people look at faces.

    Directory of Open Access Journals (Sweden)

    Susan B Perlman

    2009-06-01

    Full Text Available Determining the ways in which personality traits interact with contextual determinants to shape social behavior remains an important area of empirical investigation. The specific personality trait of neuroticism has been related to characteristic negative emotionality and associated with heightened attention to negative, emotionally arousing environmental signals. However, the mechanisms by which this personality trait may shape social behavior remain largely unspecified. We employed eye tracking to investigate the relationship between characteristics of visual scanpaths in response to emotional facial expressions and individual differences in personality. We discovered that the amount of time spent looking at the eyes of fearful faces was positively related to neuroticism. This finding is discussed in relation to previous behavioral research relating personality to selective attention for trait-congruent emotional information, neuroimaging studies relating differences in personality to amygdala reactivity to socially relevant stimuli, and genetic studies suggesting linkages between the serotonin transporter gene and neuroticism. We conclude that personality may be related to interpersonal interaction by shaping aspects of social cognition as basic as eye contact. In this way, eye gaze represents a possible behavioral link in a complex relationship between genes, brain function, and personality.

  1. Validation of mobile eye tracking as novel and efficient means for differentiating progressive supranuclear palsy from Parkinson’s disease

    Directory of Open Access Journals (Sweden)

    Svenja Marx

    2012-12-01

    Full Text Available Background: The decreased ability to carry out vertical saccades is a key symptom of Progressive Supranuclear Palsy (PSP). Objective measurement devices can help to reliably detect subtle eye-movement disturbances to improve sensitivity and specificity of the clinical diagnosis. The present study aims at transferring findings from restricted stationary video-oculography to a wearable head-mounted device, which can be readily applied in clinical practice. Methods: We investigated the eye movements in 10 possible or probable PSP patients, 11 Parkinson’s disease (PD) patients and 10 age-matched healthy controls (HC) using a mobile, gaze-driven video camera setup (EyeSeeCam). Ocular movements were analyzed during a standardized fixation protocol and in an unrestricted real-life scenario while walking along a corridor. Results: The EyeSeeCam detected prominent impairment of both saccade velocity and amplitude in PSP patients, differentiating them from PD and HCs. Differences were particularly evident for saccades in the vertical plane, and stronger for saccades than for other eye movements. Differences were more pronounced during the standardized protocol than in the real-life scenario. Conclusions: Combined analysis of saccade velocity and saccade amplitude during the fixation protocol with the EyeSeeCam provides a simple, rapid (<20 s) and reliable tool to differentiate clinically established PSP patients from PD and HCs. As such, our findings prepare the ground for using wearable eye-tracking in patients with uncertain diagnoses.
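
    Saccade amplitude and velocity, the two measures combined above, can be estimated from a calibrated gaze trace with a simple velocity-threshold detector, as sketched below. The synthetic trace, sampling rate and threshold are arbitrary placeholders intended only to illustrate the kind of computation involved, not the EyeSeeCam analysis pipeline.

      # Sketch: velocity-threshold saccade detection on a one-dimensional
      # (e.g., vertical) gaze trace. Trace, sampling rate and threshold are synthetic.
      import numpy as np

      FS = 220.0               # Hz, hypothetical sampling rate
      VEL_THRESHOLD = 100.0    # deg/s, simple detection threshold

      def detect_saccades(gaze_deg):
          # Return (amplitude in deg, peak velocity in deg/s) for each saccade.
          velocity = np.gradient(gaze_deg) * FS
          fast = np.abs(velocity) > VEL_THRESHOLD
          saccades, start = [], None
          for i, f in enumerate(fast):
              if f and start is None:
                  start = i
              elif not f and start is not None:
                  amplitude = abs(gaze_deg[i - 1] - gaze_deg[start])
                  peak = float(np.max(np.abs(velocity[start:i])))
                  saccades.append((amplitude, peak))
                  start = None
          return saccades

      # Synthetic trace: fixation at 0 deg, a ~10 deg saccade at ~300 deg/s,
      # then fixation at 10 deg.
      t = np.arange(0, 0.6, 1 / FS)
      trace = np.clip((t - 0.3) * 300.0, 0.0, 10.0)
      print(detect_saccades(trace))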

  2. Changed processing of visual sexual stimuli under GnRH-therapy – a single case study in pedophilia using eye tracking and fMRI

    Science.gov (United States)

    2014-01-01

    Background Antiandrogen therapy (ADT) has been used for 30 years to treat pedophilic patients. The aim of the treatment is a reduction in sexual drive and, in consequence, a reduced risk of recidivism. Yet the therapeutic success of antiandrogens is uncertain especially regarding recidivism. Meta-analyses and reviews report only moderate and often mutually inconsistent effects. Case presentation Based on the case of a 47 year old exclusively pedophilic forensic inpatient, we examined the effectiveness of a new eye tracking method and a new functional magnetic resonance imaging (fMRI)-design in regard to the evaluation of ADT in pedophiles. We analyzed the potential of these methods in exploring the impact of ADT on automatic and controlled attentional processes in pedophiles. Eye tracking and fMRI measures were conducted before the initial ADT as well as four months after the onset of ADT. The patient simultaneously viewed an image of a child and an image of an adult while eye movements were measured. During the fMRI-measure the same stimuli were presented subliminally. Eye movements demonstrated that controlled attentional processes change under ADT, whereas automatic processes remained mostly unchanged. We assume that these results reflect either the increased ability of the patient to control his eye movements while viewing prepubertal stimuli or his better ability to manipulate his answer in a socially desirable manner. Unchanged automatic attentional processes could reflect the stable pedophilic preference of the patient. Using fMRI, the subliminal presentation of sexually relevant stimuli led to changed activation patterns under the influence of ADT in occipital and parietal brain regions, the hippocampus, and also in the orbitofrontal cortex. We suggest that even at an unconscious level ADT can lead to changed processing of sexually relevant stimuli, reflecting changes of cognitive and perceptive automatic processes. Conclusion We are convinced that our

  3. Learning and Treatment of Anaphylaxis by Laypeople: A Simulation Study Using Pupilar Technology.

    Science.gov (United States)

    Fernandez-Mendez, Felipe; Saez-Gallego, Nieves Maria; Barcala-Furelos, Roberto; Abelairas-Gomez, Cristian; Padron-Cabo, Alexis; Perez-Ferreiros, Alexandra; Garcia-Magan, Carlos; Moure-Gonzalez, Jose; Contreras-Jordan, Onofre; Rodriguez-Nuñez, Antonio

    2017-01-01

    An anaphylactic shock is a time-critical emergency situation. Decision-making during emergencies is an important responsibility but is difficult to study. Eye-tracking technology allows us to identify the visual patterns involved in decision-making. The aim of this pilot study was to evaluate two training models for the recognition and treatment of anaphylaxis by laypeople, based on expert assessment and eye-tracking technology. A cross-sectional quasi-experimental simulation study was conducted to evaluate the identification and treatment of anaphylaxis. Fifty subjects were randomly assigned to four groups: three groups watched different training videos with content supervised by healthcare personnel, and one control group received face-to-face training during paediatric practice. To evaluate learning, a simulation scenario with an anaphylaxis victim was designed. A device capturing eye movements, together with expert evaluation, was used to assess performance. The subjects who underwent paediatric face-to-face training achieved better and faster recognition of the anaphylaxis. They also used the adrenaline injector with better precision and fewer mistakes, and they needed fewer visual fixations to recognise the anaphylaxis and to make the decision to inject epinephrine. Analysing the different video formats yielded mixed results; the videos should therefore be tested for usability before implementation.

  4. 3D quantitative analysis of early decomposition changes of the human face.

    Science.gov (United States)

    Caplova, Zuzana; Gibelli, Daniele Maria; Poppa, Pasquale; Cummaudo, Marco; Obertova, Zuzana; Sforza, Chiarella; Cattaneo, Cristina

    2018-03-01

    Decomposition of the human body and human face is influenced, among other things, by environmental conditions. The early decomposition changes that modify the appearance of the face may hamper the recognition and identification of the deceased. Quantitative assessment of those changes may provide important information for forensic identification. This report presents a pilot 3D quantitative approach of tracking early decomposition changes of a single cadaver in controlled environmental conditions by summarizing the change with weekly morphological descriptions. The root mean square (RMS) value was used to evaluate the changes of the face after death. The results showed a high correlation (r = 0.863) between the measured RMS and the time since death. RMS values of each scan are presented, as well as the average weekly RMS values. The quantification of decomposition changes could improve the accuracy of antemortem facial approximation and potentially could allow the direct comparisons of antemortem and postmortem 3D scans.
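
    The RMS figure used above is the root mean square of the distances between corresponding surface points of two registered 3D scans. A minimal sketch with synthetic point arrays follows; real input would be registered scans with established point-to-point correspondence.

      # Sketch: RMS distance between two registered 3D facial scans with
      # point-to-point correspondence. The point clouds below are synthetic.
      import numpy as np

      def rms_distance(scan_a, scan_b):
          # Root mean square of the Euclidean distances between corresponding points.
          d = np.linalg.norm(scan_a - scan_b, axis=1)
          return float(np.sqrt(np.mean(d ** 2)))

      rng = np.random.default_rng(0)
      baseline = rng.normal(size=(5000, 3))              # scan at the first time point
      later = baseline + rng.normal(0, 0.5, (5000, 3))   # later scan, slightly deformed

      print("RMS (arbitrary units):", round(rms_distance(baseline, later), 3))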

  5. Impact of Air Movement on Eye Symptoms

    DEFF Research Database (Denmark)

    Melikov, Arsen Krikor; Sakoi, Tomonori; Kolencíková, Sona

    2013-01-01

    The impact of direction, oscillation and temperature of isothermal room air movement on eye discomfort and tear film quality was studied. Twenty-four male subjects participated in the experiment. Horizontal air movement against the face and chest was generated by a large desk fan – LDF and a small...... when the airflow was directed against the face and when against the chest, LDF with and without oscillation and PV. Eye tear film samples were taken and analyzed at the beginning and the end of the exposures. Eye irritation and dryness were reported by the subjects. The air movement under individual...... control did not change significantly the tear film quality though tendency for improvement was observed. Eye dryness increased much when the airflow was blowing constantly against the face compared to oscillating airflow, airflow directed against the chest and upward airflow against the face....

  6. Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms.

    Science.gov (United States)

    Phillips, P Jonathon; Yates, Amy N; Hu, Ying; Hahn, Carina A; Noyes, Eilidh; Jackson, Kelsey; Cavazos, Jacqueline G; Jeckeln, Géraldine; Ranjan, Rajeev; Sankaranarayanan, Swami; Chen, Jun-Cheng; Castillo, Carlos D; Chellappa, Rama; White, David; O'Toole, Alice J

    2018-05-29

    Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely. On the same test, four deep convolutional neural networks (DCNNs), developed between 2015 and 2017, identified faces within the range of human accuracy. Accuracy of the algorithms increased steadily over time, with the most recent DCNN scoring above the median of the forensic facial examiners. Using crowd-sourcing methods, we fused the judgments of multiple forensic facial examiners by averaging their rating-based identity judgments. Accuracy was substantially better for fused judgments than for individuals working alone. Fusion also served to stabilize performance, boosting the scores of lower-performing individuals and decreasing variability. Single forensic facial examiners fused with the best algorithm were more accurate than the combination of two examiners. Therefore, collaboration among humans and between humans and machines offers tangible benefits to face identification accuracy in important applications. These results offer an evidence-based roadmap for achieving the most accurate face identification possible. Copyright © 2018 the Author(s). Published by PNAS.
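
    The fusion step reported above amounts to averaging each examiner's rating for a face pair and scoring the averaged ratings. The sketch below shows that computation on invented ratings, using AUC as the accuracy measure; the rating scale and noise model are assumptions for illustration only.

      # Sketch: fusing identity ratings from several examiners by simple averaging.
      # Ratings and same/different labels are invented for illustration.
      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n_pairs, n_examiners = 60, 4
      same_identity = rng.integers(0, 2, n_pairs)   # 1 = the pair shows the same person

      # Ratings on a -3..+3 scale, higher meaning "more likely the same person";
      # each examiner is modelled as shared signal plus independent noise.
      signal = (same_identity * 2 - 1) * 1.5
      noise = rng.normal(0, 1.5, (n_pairs, n_examiners))
      ratings = np.clip(np.round(signal[:, None] + noise), -3, 3)

      fused = ratings.mean(axis=1)                  # rating-based fusion by averaging
      print("single examiner AUC:", round(roc_auc_score(same_identity, ratings[:, 0]), 3))
      print("fused examiners AUC:", round(roc_auc_score(same_identity, fused), 3))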

  7. 眼球追蹤技術在學習與教育上的應用 Eye Tracking Technology for Learning and Education

    Directory of Open Access Journals (Sweden)

    陳學志 Hsueh-Chih Chen

    2010-12-01

    Full Text Available 本文目的為對眼球追蹤技術在學習與教育上的應用進行論述。人類的認知訊息處理歷程中有80%以上的訊息是由視覺獲得,而眼球運動也是認知過程中最為重要的感官訊息來源,近來發展的眼球追蹤技術提供了自然且即時的測量來探討認知、情緒、動機等議題,因此,眼球追蹤技術已經被廣泛地使用在各個領域中。本文針對眼球追蹤與眼球追蹤儀的基本概念、眼動指標、操作方法、資料分析方法等進行介紹。並針對眼動與閱讀、眼動與教學、眼動與問題解決及眼動與情意特質的運用等議題進行論述。藉由本文的介紹將使讀者對眼動研究在學習與教育上的應用有基本的認識。 The purpose of this article is to canvass the advantages of using eye-tracking technologies for applications in learning and education. More than 80% of the course of human cognitive processing is based on information acquired from visual modals. Eye movement plays an unrivaled role in inferring a given individual internal state. Recent developments in eye tracking technology have shown a powerful natural and real-time measuring for cognition, emotion, and motivation. This is evidenced from a wide range of reported studies and researches. We reviewed the representative research of reading, problem-solving, teaching, affect disposition, and other issues by displaying how eye-tracking technology can support advanced studies on mental processes. In addition, basic mechanical concepts, indicators, operation methods, and data analysis for the use of eye-tracking technology were introduced and discussed.

  8. Eye movements during listening reveal spontaneous grammatical processing.

    Science.gov (United States)

    Huette, Stephanie; Winter, Bodo; Matlock, Teenie; Ardell, David H; Spivey, Michael

    2014-01-01

    Recent research using eye-tracking typically relies on constrained visual contexts, in particular goal-oriented contexts in which participants view a small array of objects on a computer screen and perform some overt decision or identification. Eye-tracking paradigms that use pictures as a measure of word or sentence comprehension are sometimes touted as ecologically invalid because pictures and explicit tasks are not always present during language comprehension. This study compared the comprehension of sentences with two different grammatical forms: the past progressive (e.g., was walking), which emphasizes the ongoing nature of actions, and the simple past (e.g., walked), which emphasizes the end-state of an action. The results showed that the distribution and timing of eye movements mirror the underlying conceptual structure of this linguistic difference in the absence of any visual stimuli or task constraint: fixations were shorter and saccades were more dispersed across the screen, as if participants were thinking about more dynamic events when listening to the past-progressive stories. Thus, eye movement data suggest that visual inputs or an explicit task are unnecessary to elicit analog representations of features such as movement, which could be a key perceptual component of grammatical comprehension.

  9. Eye movements during listening reveal spontaneous grammatical processing

    Directory of Open Access Journals (Sweden)

    Stephanie Huette

    2014-05-01

    Full Text Available Recent research using eye-tracking typically relies on constrained visual contexts, in particular goal-oriented contexts in which participants view a small array of objects on a computer screen and perform some overt decision or identification. Eye-tracking paradigms that use pictures as a measure of word or sentence comprehension are sometimes touted as ecologically invalid because pictures and explicit tasks are not always present during language comprehension. This study compared the comprehension of sentences with two different grammatical forms: the past progressive (e.g., was walking), which emphasizes the ongoing nature of actions, and the simple past (e.g., walked), which emphasizes the end-state of an action. The results showed that the distribution and timing of eye movements mirror the underlying conceptual structure of this linguistic difference in the absence of any visual stimuli or task constraint: fixations were shorter and saccades were more dispersed across the screen, as if participants were thinking about more dynamic events when listening to the past-progressive stories. Thus, eye movement data suggest that visual inputs or an explicit task are unnecessary to elicit analogue representations of features such as movement, which could be a key perceptual component of grammatical comprehension.

  10. A face in the crowd: a non-invasive and cost effective photo-identification methodology to understand the fine scale movement of eastern water dragons.

    Directory of Open Access Journals (Sweden)

    Riana Zanarivero Gardiner

    Full Text Available Ectothermic vertebrates face many challenges of thermoregulation. Many species rely on behavioral thermoregulation and move within their landscape to maintain homeostasis. Understanding the fine-scale nature of this regulation through tracking techniques can provide a better understanding of the relationships between such species and their dynamic environments. The use of animal tracking and telemetry technology has allowed the extensive collection of such data which has enabled us to better understand the ways animals move within their landscape. However, such technologies do not come without certain costs: they are generally invasive, relatively expensive, can be too heavy for small sized animals and unreliable in certain habitats. This study provides a cost-effective and non-invasive method through photo-identification, to determine fine scale movements of individuals. With our methodology, we have been able to find that male eastern water dragons (Intellagama leuseurii) have home ranges one and a half times larger than those of females. Furthermore, we found intraspecific differences in the size of home ranges depending on the time of the day. Lastly, we found that location mostly influenced females' home ranges, but not males and discuss why this may be so. Overall, we provide valuable information regarding the ecology of the eastern water dragon, but most importantly demonstrate that non-invasive photo-identification can be successfully applied to the study of reptiles.

  11. A face in the crowd: a non-invasive and cost effective photo-identification methodology to understand the fine scale movement of eastern water dragons.

    Science.gov (United States)

    Gardiner, Riana Zanarivero; Doran, Erik; Strickland, Kasha; Carpenter-Bundhoo, Luke; Frère, Celine

    2014-01-01

    Ectothermic vertebrates face many challenges of thermoregulation. Many species rely on behavioral thermoregulation and move within their landscape to maintain homeostasis. Understanding the fine-scale nature of this regulation through tracking techniques can provide a better understanding of the relationships between such species and their dynamic environments. The use of animal tracking and telemetry technology has allowed the extensive collection of such data which has enabled us to better understand the ways animals move within their landscape. However, such technologies do not come without certain costs: they are generally invasive, relatively expensive, can be too heavy for small sized animals and unreliable in certain habitats. This study provides a cost-effective and non-invasive method through photo-identification, to determine fine scale movements of individuals. With our methodology, we have been able to find that male eastern water dragons (Intellagama leuseurii) have home ranges one and a half times larger than those of females. Furthermore, we found intraspecific differences in the size of home ranges depending on the time of the day. Lastly, we found that location mostly influenced females' home ranges, but not males and discuss why this may be so. Overall, we provide valuable information regarding the ecology of the eastern water dragon, but most importantly demonstrate that non-invasive photo-identification can be successfully applied to the study of reptiles.

  12. Processing of emotional faces in social phobia

    Directory of Open Access Journals (Sweden)

    Nicole Kristjansen Rosenberg

    2011-02-01

    Full Text Available Previous research has found that individuals with social phobia differ from controls in their processing of emotional faces. For instance, people with social phobia show increased attention to briefly presented threatening faces. However, when exposure times are increased, the direction of this attentional bias is more unclear. Studies investigating eye movements have found both increased as well as decreased attention to threatening faces in socially anxious participants. The current study investigated eye movements to emotional faces in eight patients with social phobia and 34 controls. Three different tasks with different exposure durations were used, which allowed for an investigation of the time course of attention. At the early time interval, patients showed a complex pattern of both vigilance and avoidance of threatening faces. At the longest time interval, patients avoided the eyes of sad, disgust, and neutral faces more than controls, whereas there were no group differences for angry faces.

  13. Holistic integration of gaze cues in visual face and body perception: Evidence from the composite design.

    Science.gov (United States)

    Vrancken, Leia; Germeys, Filip; Verfaillie, Karl

    2017-01-01

    A considerable amount of research on identity recognition and emotion identification with the composite design points to the holistic processing of these aspects in faces and bodies. In this paradigm, the interference from a nonattended face half on the perception of the attended half is taken as evidence for holistic processing (i.e., a composite effect). Far less research, however, has been dedicated to the concept of gaze. Nonetheless, gaze perception is a substantial component of face and body perception, and holds critical information for everyday communicative interactions. Furthermore, the ability of human observers to detect direct versus averted eye gaze is effortless, perhaps similar to identity perception and emotion recognition. However, the hypothesis of holistic perception of eye gaze has never been tested directly. Research on gaze perception with the composite design could facilitate further systematic comparison with other aspects of face and body perception that have been investigated using the composite design (i.e., identity and emotion). In the present research, a composite design was administered to assess holistic processing of gaze cues in faces (Experiment 1) and bodies (Experiment 2). Results confirmed that eye and head orientation (Experiment 1A) and head and body orientation (Experiment 2A) are integrated in a holistic manner. However, the composite effect was not completely disrupted by inversion (Experiments 1B and 2B), a finding that will be discussed together with implications for future research.

  14. Social Attention in the Two Species of Pan: Bonobos Make More Eye Contact than Chimpanzees.

    Directory of Open Access Journals (Sweden)

    Fumihiro Kano

    Full Text Available Humans' two closest primate living relatives, bonobos and chimpanzees, differ behaviorally, cognitively, and emotionally in several ways despite their general similarities. While bonobos show more affiliative behaviors towards conspecifics, chimpanzees display more overt and severe aggression against conspecifics. From a cognitive standpoint, bonobos perform better in social coordination, gaze-following and food-related cooperation, while chimpanzees excel in tasks requiring extractive foraging skills. We hypothesized that attention and motivation play an important role in shaping the species differences in behavior, cognition, and emotion. Thus, we predicted that bonobos would pay more attention to the other individuals' face and eyes, as those are related to social affiliation and social coordination, while chimpanzees would pay more attention to the action target objects, as they are related to foraging. Using eye-tracking we examined the bonobos' and chimpanzees' spontaneous scanning of pictures that included eyes, mouth, face, genitals, and action target objects of conspecifics. Although bonobos and chimpanzees viewed those elements overall similarly, bonobos viewed the face and eyes longer than chimpanzees, whereas chimpanzees viewed the other elements, the mouth, action target objects and genitals, longer than bonobos. In a discriminant analysis, the individual variation in viewing patterns robustly predicted the species of individuals, thus clearly demonstrating species-specific viewing patterns. We suggest that such attentional and motivational differences between bonobos and chimpanzees could have partly contributed to shaping the species-specific behaviors, cognition, and emotion of these species, even in a relatively short period of evolutionary time.
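
    The discriminant analysis mentioned above, in which individual viewing patterns predict species, corresponds to fitting a linear discriminant classifier to per-individual viewing proportions. The sketch below does this with invented proportions for a handful of individuals; the group sizes, feature set and effect sizes are placeholders, not the study's data.

      # Sketch: predicting species (bonobo vs. chimpanzee) from individual viewing
      # proportions with linear discriminant analysis. All numbers are invented.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)

      def individuals(n, eye_bias):
          # Viewing-time proportions over (eyes/face, mouth, action target, genitals);
          # the last proportion is dropped because the four sum to one.
          return rng.dirichlet([4.0 * eye_bias, 2.0, 2.0, 1.0], size=n)[:, :3]

      X = np.vstack([individuals(12, eye_bias=2.0),    # "bonobos": more face/eye looking
                     individuals(12, eye_bias=0.8)])   # "chimpanzees": less
      species = np.array([0] * 12 + [1] * 12)

      lda = LinearDiscriminantAnalysis()
      accuracy = cross_val_score(lda, X, species, cv=4).mean()
      print("cross-validated species classification accuracy:", round(accuracy, 2))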

  15. Start Position Strongly Influences Fixation Patterns during Face Processing: Difficulties with Eye Movements as a Measure of Information Use

    Science.gov (United States)

    Arizpe, Joseph; Kravitz, Dwight J.; Yovel, Galit; Baker, Chris I.

    2012-01-01

    Fixation patterns are thought to reflect cognitive processing and, thus, index the most informative stimulus features for task performance. During face recognition, initial fixations to the center of the nose have been taken to indicate this location is optimal for information extraction. However, the use of fixations as a marker for information use rests on the assumption that fixation patterns are predominantly determined by stimulus and task, despite the fact that fixations are also influenced by visuo-motor factors. Here, we tested the effect of starting position on fixation patterns during a face recognition task with upright and inverted faces. While we observed differences in fixations between upright and inverted faces, likely reflecting differences in cognitive processing, there was also a strong effect of start position. Over the first five saccades, fixation patterns across start positions were only coarsely similar, with most fixations around the eyes. Importantly, however, the precise fixation pattern was highly dependent on start position with a strong tendency toward facial features furthest from the start position. For example, the often-reported tendency toward the left over right eye was reversed for the left starting position. Further, delayed initial saccades for central versus peripheral start positions suggest greater information processing prior to the initial saccade, highlighting the experimental bias introduced by the commonly used center start position. Finally, the precise effect of face inversion on fixation patterns was also dependent on start position. These results demonstrate the importance of a non-stimulus, non-task factor in determining fixation patterns. The patterns observed likely reflect a complex combination of visuo-motor effects and simple sampling strategies as well as cognitive factors. These different factors are very difficult to tease apart and therefore great caution must be applied when interpreting absolute

  16. Evaluation of perception performance in neck dissection planning using eye tracking and attention landscapes

    Science.gov (United States)

    Burgert, Oliver; Örn, Veronika; Velichkovsky, Boris M.; Gessat, Michael; Joos, Markus; Strauß, Gero; Tietjen, Christian; Preim, Bernhard; Hertel, Ilka

    2007-03-01

    Neck dissection is a surgical intervention in which cervical lymph node metastases are removed. Accurate surgical planning is of high importance because a wrong judgment of the situation can cause severe harm to the patient. Diagnostic perception of radiological images by a surgeon is an acquired skill that can be enhanced by training and experience. To improve accuracy in detecting pathological lymph nodes by newcomers and less experienced professionals, it is essential to understand how surgical experts solve the relevant visual and recognition tasks. Using eye tracking and, especially, the newly developed attention landscape visualizations, it could be determined whether visualization options, for example 3D models instead of CT data, help to increase the accuracy and speed of neck dissection planning. Thirteen ORL surgeons with different levels of expertise participated in this study. They inspected different visualizations of 3D models and original CT datasets of patients. Among other methods, we used scanpath analysis and attention landscapes to interpret the inspection strategies. It was possible to distinguish different patterns of visual exploratory activity. The experienced surgeons concentrated their attention on a limited number of areas of interest and made fewer saccadic eye movements, indicating better orientation.

  17. On the other side of the fence: effects of social categorization and spatial grouping on memory and attention for own-race and other-race faces.

    Directory of Open Access Journals (Sweden)

    Nadine Kloth

    Full Text Available The term "own-race bias" refers to the phenomenon that humans are typically better at recognizing faces from their own than a different race. The perceptual expertise account assumes that our face perception system has adapted to the faces we are typically exposed to, equipping it poorly for the processing of other-race faces. Sociocognitive theories assume that other-race faces are initially categorized as out-group, decreasing motivation to individuate them. Supporting sociocognitive accounts, a recent study has reported improved recognition for other-race faces when these were categorized as belonging to the participants' in-group on a second social dimension, i.e., their university affiliation. Faces were studied in groups, containing both own-race and other-race faces, half of each labeled as in-group and out-group, respectively. When study faces were spatially grouped by race, participants showed a clear own-race bias. When faces were grouped by university affiliation, recognition of other-race faces from the social in-group was indistinguishable from own-race face recognition. The present study aimed at extending this singular finding to other races of faces and participants. Forty Asian and 40 European Australian participants studied Asian and European faces for a recognition test. Faces were presented in groups, containing an equal number of own-university and other-university Asian and European faces. Between participants, faces were grouped either according to race or university affiliation. Eye tracking was used to study the distribution of spatial attention to individual faces in the display. The race of the study faces significantly affected participants' memory, with better recognition of own-race than other-race faces. However, memory was unaffected by the university affiliation of the faces and by the criterion for their spatial grouping on the display. Eye tracking revealed strong looking biases towards both own-race and own

  18. Real-time recording and classification of eye movements in an immersive virtual environment.

    Science.gov (United States)

    Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary

    2013-10-10

    Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the Quicktime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements.
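
    Two of the computations this primer describes, gaze-to-object angular distance and velocity-based labelling of fixations versus saccades, can be sketched as follows. The function names, the 100°/s threshold, and the toy data are illustrative assumptions, not the authors' released library.

    ```python
    # Sketch of two computations described above: gaze-to-object angular distance
    # and a simple velocity-threshold fixation/saccade labelling. Illustrative only.
    import numpy as np

    def angular_distance_deg(gaze_dir, eye_pos, object_pos):
        """Angle (degrees) between the gaze direction and the eye-to-object direction."""
        to_object = np.asarray(object_pos, float) - np.asarray(eye_pos, float)
        g = np.asarray(gaze_dir, float)
        cos_angle = np.dot(g, to_object) / (np.linalg.norm(g) * np.linalg.norm(to_object))
        return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

    def label_samples(gaze_angles_deg, timestamps_s, saccade_threshold_deg_per_s=100.0):
        """Label each inter-sample interval as 'fixation' or 'saccade' by angular velocity."""
        angles = np.asarray(gaze_angles_deg, float)
        times = np.asarray(timestamps_s, float)
        velocity = np.abs(np.diff(angles)) / np.diff(times)
        return ["saccade" if v > saccade_threshold_deg_per_s else "fixation" for v in velocity]

    # Example: gaze pointing along +z, object slightly off-axis (~2.9 degrees away).
    print(angular_distance_deg([0, 0, 1], [0, 0, 0], [0.05, 0, 1.0]))
    print(label_samples([0.0, 0.2, 5.0, 5.1], [0.0, 0.01, 0.02, 0.03]))
    ```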

  19. MODAL TRACKING of A Structural Device: A Subspace Identification Approach

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Franco, S. N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ruggiero, E. L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Emmons, M. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lopez, I. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Stoops, L. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-03-20

    Mechanical devices operating in an environment contaminated by noise, uncertainties, and extraneous disturbances lead to low signal-to-noise-ratios creating an extremely challenging processing problem. To detect/classify a device subsystem from noisy data, it is necessary to identify unique signatures or particular features. An obvious feature would be resonant (modal) frequencies emitted during its normal operation. In this report, we discuss a model-based approach to incorporate these physical features into a dynamic structure that can be used for such an identification. The approach we take after pre-processing the raw vibration data and removing any extraneous disturbances is to obtain a representation of the structurally unknown device along with its subsystems that capture these salient features. One approach is to recognize that unique modal frequencies (sinusoidal lines) appear in the estimated power spectrum that are solely characteristic of the device under investigation. Therefore, the objective of this effort is based on constructing a black box model of the device that captures these physical features that can be exploited to “diagnose” whether or not the particular device subsystem (track/detect/classify) is operating normally from noisy vibrational data. Here we discuss the application of a modern system identification approach based on stochastic subspace realization techniques capable of both (1) identifying the underlying black-box structure thereby enabling the extraction of structural modes that can be used for analysis and modal tracking as well as (2) indicators of condition and possible changes from normal operation.
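
    As a simplified stand-in for the approach described above, the sketch below merely picks peaks in a Welch-estimated power spectrum as candidate modal (resonant) frequencies of a synthetic vibration record. It is not the stochastic subspace realization method of the report, and every parameter value is an assumption for illustration.

    ```python
    # Simplified illustration: locating candidate modal frequencies as peaks in an
    # estimated power spectrum. A peak-picking stand-in, not subspace realization.
    import numpy as np
    from scipy.signal import welch, find_peaks

    fs = 1000.0                                  # sampling rate (Hz), assumed
    t = np.arange(0, 10, 1 / fs)
    # Synthetic vibration record: two "modes" at 37 Hz and 120 Hz buried in noise.
    x = np.sin(2 * np.pi * 37 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
    x += 0.8 * np.random.randn(t.size)

    freqs, psd = welch(x, fs=fs, nperseg=2048)   # averaged power spectral density
    peaks, _ = find_peaks(psd, height=10 * np.median(psd))
    print("Candidate modal frequencies (Hz):", freqs[peaks])
    ```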

  20. Homophobia: An Impulsive Attraction to the Same Sex? Evidence From Eye-Tracking Data in a Picture-Viewing Task.

    Science.gov (United States)

    Cheval, Boris; Radel, Remi; Grob, Emmanuelle; Ghisletta, Paolo; Bianchi-Demicheli, Francesco; Chanal, Julien

    2016-05-01

    Some models suggest that homophobia can be explained as a denied attraction toward same-sex individuals. While it has been found that homophobic men have same-sex attraction, these results are not consistent. This study drew on the dual-process models to test the assumption that sexual interest in homosexual cues among men high in homophobia will depend on their specific impulses toward homosexual-related stimuli. Heterosexual men (N = 38) first completed a scale measuring their level of homonegativity. Then, they performed a manikin task to evaluate their impulsive approach tendencies toward homosexual stimuli (IAHS). A picture-viewing task was performed with simultaneous eye-tracking recording to assess participants' viewing time of the visual area of interest (i.e., face and body). IAHS positively predicted the viewing time of homosexual photographs among men with a high score of homonegativity. Men with a high homonegativity score looked significantly longer at homosexual than at heterosexual photographs but only when they had a high IAHS. These findings confirm the importance of considering the variability in impulsive processes to understand why some (but not all) men high in homophobia have homosexual interest. These findings reinforce the theoretical basis for elaborating a dual-process model for behaviors in the sexual context. Copyright © 2016 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.

  1. Biometric identification using knee X-rays.

    Science.gov (United States)

    Shamir, Lior; Ling, Shari; Rahimi, Salim; Ferrucci, Luigi; Goldberg, Ilya G

    2009-01-01

    Identification of people often makes use of unique features of the face, fingerprints and retina. Beyond this, a similar identifying process can be applied to internal parts of the body that are not visible to the unaided eye. Here we show that knee X-rays can be used for the identification of individual persons. The image analysis method is based on the wnd-charm algorithm, which has been found effective for the diagnosis of clinical conditions of knee joints. Experimental results show that the rank-10 identification accuracy using a dataset of 425 individuals is ~56%, and the rank-1 accuracy is ~34%. The dataset contained knee X-rays taken several years apart from each other, showing that the identifiable features correspond to specific persons, rather than the present clinical condition of the joint.
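
    The rank-k identification accuracies reported above can be computed generically from a similarity matrix between query images and gallery identities, as in the sketch below; this illustrates the metric only and is not the wnd-charm pipeline.

    ```python
    # Generic sketch: rank-k identification accuracy from a similarity matrix.
    # Rows = query images, columns = gallery identities; higher = more similar.
    import numpy as np

    def rank_k_accuracy(similarity, true_identity, k):
        """Fraction of queries whose true identity is among the k most similar gallery entries."""
        order = np.argsort(-similarity, axis=1)   # gallery indices, most to least similar
        top_k = order[:, :k]
        hits = [true_identity[i] in top_k[i] for i in range(similarity.shape[0])]
        return float(np.mean(hits))

    # Toy example: 3 queries, 5 gallery identities.
    sim = np.array([
        [0.9, 0.1, 0.3, 0.2, 0.4],
        [0.2, 0.3, 0.8, 0.7, 0.1],
        [0.5, 0.6, 0.4, 0.3, 0.2],
    ])
    truth = np.array([0, 3, 4])                   # correct gallery index for each query
    print(rank_k_accuracy(sim, truth, k=1))       # rank-1 accuracy
    print(rank_k_accuracy(sim, truth, k=3))       # rank-3 accuracy
    ```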

  2. Predicting diagnostic error in radiology via eye-tracking and image analytics: Preliminary investigation in mammography

    International Nuclear Information System (INIS)

    Voisin, Sophie; Tourassi, Georgia D.; Pinto, Frank; Morin-Ducote, Garnetta; Hudson, Kathleen B.

    2013-01-01

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists’ gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from four Radiology residents and two breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BI-RADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Machine learning can be used to predict diagnostic error by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model [area under the ROC curve (AUC) = 0.792 ± 0.030]. Personalized user modeling was far more accurate for the more experienced readers (AUC = 0.837 ± 0.029) than for the less experienced ones (AUC = 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted to a good extent by leveraging the radiologists’ gaze behavior and image content.

  3. Predicting diagnostic error in radiology via eye-tracking and image analytics: Preliminary investigation in mammography

    Energy Technology Data Exchange (ETDEWEB)

    Voisin, Sophie; Tourassi, Georgia D. [Biomedical Science and Engineering Center, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Pinto, Frank [School of Engineering, Science, and Technology, Virginia State University, Petersburg, Virginia 23806 (United States); Morin-Ducote, Garnetta; Hudson, Kathleen B. [Department of Radiology, University of Tennessee Medical Center at Knoxville, Knoxville, Tennessee 37920 (United States)

    2013-10-15

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists’ gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from four Radiology residents and two breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BI-RADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Machine learning can be used to predict diagnostic error by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model [area under the ROC curve (AUC) = 0.792 ± 0.030]. Personalized user modeling was far more accurate for the more experienced readers (AUC = 0.837 ± 0.029) than for the less experienced ones (AUC = 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted to a good extent by leveraging the radiologists’ gaze behavior and image content.
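
    The modeling idea, merging gaze-behavior features with image features and scoring a classifier by ROC AUC, can be sketched as follows. The feature names, synthetic data, and choice of a random forest are assumptions for illustration, not the study's actual pipeline.

    ```python
    # Minimal sketch: merge gaze-behavior and image features, train a classifier
    # to predict a diagnostic error, and report cross-validated ROC AUC.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_cases = 200
    gaze_features = rng.normal(size=(n_cases, 3))    # e.g., dwell time, fixation count, saccade rate
    image_features = rng.normal(size=(n_cases, 4))   # e.g., texture descriptors of the mass
    X = np.hstack([gaze_features, image_features])

    # Synthetic label: error more likely when dwell time is short and texture is complex.
    logit = -0.8 * gaze_features[:, 0] + 0.6 * image_features[:, 0]
    error = (rng.random(n_cases) < 1 / (1 + np.exp(-logit))).astype(int)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    auc = cross_val_score(clf, X, error, cv=5, scoring="roc_auc")
    print(f"Cross-validated AUC: {auc.mean():.3f} ± {auc.std():.3f}")
    ```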

  4. The Using of the 3D Face Modal in Case of the Biometrical Identification in Access Control Systems

    Directory of Open Access Journals (Sweden)

    S. V. Pivtoratskaya

    2011-03-01

    Full Text Available Biometric identification based on face geometry is proposed here. A method for eliminating distortions in face images by reconstructing the face as a segment of an ellipsoid is presented.

  5. Intermediate view synthesis for eye-gazing

    Science.gov (United States)

    Baek, Eu-Ttuem; Ho, Yo-Sung

    2015-01-01

    Nonverbal communication, also known as body language, is an important form of communication. Nonverbal behaviors such as posture, eye contact, and gestures send strong messages. Among nonverbal behaviors, eye contact is one of the most important forms that an individual can use. However, eye contact is lost when we use a video conferencing system: the disparity between the locations of the eyes and the camera gets in the way of eye contact, and the lack of eye gaze can give an unapproachable and unpleasant impression. In this paper, we propose an eye-gaze correction method for video conferencing. We use two cameras installed at the top and the bottom of the television. The two captured images are rendered with 2D warping at a virtual position. We apply view morphing to the detected face and synthesize the face with the warped image. Experimental results verify that the proposed system is effective in generating natural gaze-corrected images.

  6. Eye-Tracking Analysis of the Figures of Anti-Smoking Health Promoting Periodical’s Illustrations

    Directory of Open Access Journals (Sweden)

    Maródi Ágnes

    2015-08-01

    Full Text Available Nowadays, new educational technologies and e-communication devices provide researchers with new measurement and assessment tools. Eye tracking is one of these new methods in education. In our study, we assessed four figures from the anti-smoking health issues of the National Institute for Health Development. Twenty-two students from a 7th grade class of a Kecskemét primary school took part in the study. Our results show that students concentrate on the text part of the figures unless the picture is frightening. However, if the text and the picture are not both frightening enough, the message will not get through to young students.

  7. Automated systems for the de-identification of longitudinal clinical narratives: Overview of 2014 i2b2/UTHealth shared task Track 1.

    Science.gov (United States)

    Stubbs, Amber; Kotfila, Christopher; Uzuner, Özlem

    2015-12-01

    The 2014 i2b2/UTHealth Natural Language Processing (NLP) shared task featured four tracks. The first of these was the de-identification track focused on identifying protected health information (PHI) in longitudinal clinical narratives. The longitudinal nature of clinical narratives calls particular attention to details of information that, while benign on their own in separate records, can lead to identification of patients in combination in longitudinal records. Accordingly, the 2014 de-identification track addressed a broader set of entities and PHI than covered by the Health Insurance Portability and Accountability Act - the focus of the de-identification shared task that was organized in 2006. Ten teams tackled the 2014 de-identification task and submitted 22 system outputs for evaluation. Each team was evaluated on their best performing system output. Three of the 10 systems achieved F1 scores over .90, and seven of the top 10 scored over .75. The most successful systems combined conditional random fields and hand-written rules. Our findings indicate that automated systems can be very effective for this task, but that de-identification is not yet a solved problem. Copyright © 2015 Elsevier Inc. All rights reserved.
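
    Systems in this task are typically scored with entity-level precision, recall, and F1 against gold PHI spans; a minimal sketch of that kind of scoring (exact span matching, not the official i2b2 evaluation script) is given below.

    ```python
    # Sketch of entity-level precision, recall, and F1 for PHI de-identification,
    # comparing predicted spans against gold spans. Not the official i2b2 scorer.
    def prf1(gold_spans, predicted_spans):
        """Each span is a (start, end, phi_type) tuple; exact-match scoring."""
        gold, pred = set(gold_spans), set(predicted_spans)
        tp = len(gold & pred)
        precision = tp / len(pred) if pred else 0.0
        recall = tp / len(gold) if gold else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        return precision, recall, f1

    gold = [(0, 8, "PATIENT"), (25, 35, "DATE"), (60, 72, "HOSPITAL")]
    pred = [(0, 8, "PATIENT"), (25, 35, "DATE"), (80, 85, "CITY")]
    print(prf1(gold, pred))   # (0.67, 0.67, 0.67)
    ```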

  8. A novel method for observation by unaided eyes of nitrogen ion tracks and angular distribution in a plasma focus device using 50 Hz–HV electrochemically-etched polycarbonate detectors

    International Nuclear Information System (INIS)

    Sohrabi, M.; Habibi, M.; Roshani, G.H.; Ramezani, V.

    2012-01-01

    A novel ion detection method has been developed and studied in this paper for the first time to detect and observe tracks of nitrogen ions and their angular distribution by unaided eyes in the Amirkabir 4 kJ plasma focus device (PFD). The method is based on electrochemical etching (ECE) of nitrogen ion tracks in 1 mm thick, large-area polycarbonate (PC) detectors. The ECE method employed a specially designed and constructed large-area ECE chamber with a 50 Hz–high voltage (HV) generator (constructed for this study) under optimized ECE conditions. The nitrogen ion tracks were efficiently amplified to a point observable by the unaided eye. The beam profile and angular distribution of nitrogen ion tracks in the central axes of the beam, as well as two- and three-dimensional iso-ion track density distributions showing micro-beam spots, were determined. The distribution of ion track density along the central axes versus angular position shows double humps around a dip at the 0° angular position. The method introduced in this paper proved to be quite efficient for ion beam profile and characteristic studies in PFDs, with potential for ion detection studies and other relevant dosimetry applications.

  9. Task demands modulate decision and eye movement responses in the chimeric face test: examining the right hemisphere processing account

    Directory of Open Access Journals (Sweden)

    Jason Coronel

    2014-03-01

    Full Text Available A large and growing body of work, conducted in both brain-intact and brain-damaged populations, has used the free viewing chimeric face test as a measure of hemispheric dominance for the extraction of emotional information from faces. These studies generally show that normal right-handed individuals tend to perceive chimeric faces as more emotional if the emotional expression is presented on the half of the face to the viewer’s left (left hemiface. However, the mechanisms underlying this lateralized bias remain unclear. Here, we examine the extent to which this bias is driven by right hemisphere processing advantages versus default scanning biases in a unique way -- by changing task demands. In particular, we compare the original task with one in which right-hemisphere-biased processing cannot provide a decision advantage. Our behavioral and eye-movement data are inconsistent with the predictions of a default scanning bias account and support the idea that the left hemiface bias found in the chimeric face test is largely due to strategic use of right hemisphere processing mechanisms.

  10. Experiencing Light's Properties within Your Own Eye

    Science.gov (United States)

    Mauser, Michael

    2011-01-01

    Seeing the reflection, refraction, dispersion, absorption, polarization, and scattering or diffraction of light within your own eye makes these properties of light truly personal. There are practical aspects of these within the eye phenomena, such as eye tracking for computer interfaces. They also offer some intriguing diversions, for example,…

  11. On the relationship between optical variability, visual saliency, and eye fixations: a computational approach.

    Science.gov (United States)

    Garcia-Diaz, Antón; Leborán, Víctor; Fdez-Vidal, Xosé R; Pardo, Xosé M

    2012-06-12

    A hierarchical definition of optical variability is proposed that links physical magnitudes to visual saliency and yields a more reductionist interpretation than previous approaches. This definition is shown to be grounded on the classical efficient coding hypothesis. Moreover, we propose that a major goal of contextual adaptation mechanisms is to ensure the invariance of the behavior that the contribution of an image point to optical variability elicits in the visual system. This hypothesis and the necessary assumptions are tested through the comparison with human fixations and state-of-the-art approaches to saliency in three open access eye-tracking datasets, including one devoted to images with faces, as well as in a novel experiment using hyperspectral representations of surface reflectance. The results on faces yield a significant reduction of the potential strength of semantic influences compared to previous works. The results on hyperspectral images support the assumptions to estimate optical variability. As well, the proposed approach explains quantitative results related to a visual illusion observed for images of corners, which does not involve eye movements.

  12. Preverbal Infants Anticipate that Food Will Be Brought to the Mouth: An Eye Tracking Study of Manual Feeding and Flying Spoons

    Science.gov (United States)

    Kochukhova, Olga; Gredeback, Gustaf

    2010-01-01

    This study relies on eye tracking technology to investigate how humans perceive others' feeding actions. Results demonstrate that 6-month-olds (n = 54) anticipate that food is brought to the mouth when observing an adult feeding herself with a spoon. Still, they fail to anticipate self-propelled (SP) spoons that move toward the mouth and manual…

  13. The importance of the eyes: communication skills in infants of blind parents.

    Science.gov (United States)

    Senju, Atsushi; Tucker, Leslie; Pasco, Greg; Hudry, Kristelle; Elsabbagh, Mayada; Charman, Tony; Johnson, Mark H

    2013-06-07

    The effects of selectively different experience of eye contact and gaze behaviour on the early development of five sighted infants of blind parents were investigated. Infants were assessed longitudinally at 6-10, 12-15 and 24-47 months. Face scanning and gaze following were assessed using eye tracking. In addition, established measures of autistic-like behaviours and standardized tests of cognitive, motor and linguistic development, as well as observations of naturalistic parent-child interaction were collected. These data were compared with those obtained from a larger group of sighted infants of sighted parents. Infants with blind parents did not show an overall decrease in eye contact or gaze following when they observed sighted adults on video or in live interactions, nor did they show any autistic-like behaviours. However, they directed their own eye gaze somewhat less frequently towards their blind mothers and also showed improved performance in visual memory and attention at younger ages. Being reared with significantly reduced experience of eye contact and gaze behaviour does not preclude sighted infants from developing typical gaze processing and other social-communication skills. Indeed, the need to switch between different types of communication strategy may actually enhance other skills during development.

  14. Exclusion of identification by negative superposition

    Directory of Open Access Journals (Sweden)

    Takač Šandor

    2012-01-01

    Full Text Available This paper presents the first report of negative superposition in our country. A photograph of a randomly selected young living woman was superimposed on a previously discovered female skull. The computer program Adobe Photoshop 7.0 was used in this work. Digitized photographs of the skull and face, after being uploaded to the computer, were superimposed on each other and displayed on the monitor in order to assess their possible similarities or differences. Special attention was paid to matching the same anthropometric points of the skull and face, as well as to following their contours. The process of fitting the skull to the photograph usually starts by setting the eyes in the correct position relative to the orbits. In this case, the gonions of the lower jaw extend beyond the facial contour and the gnathion is placed too high. Positioning the chin, mouth, and nose in their correct anatomical positions cannot be achieved. All the difficulties associated with the superposition were recorded, with special emphasis on critical evaluation of the results of a negative superposition. Negative superposition has greater probative value (exclusion of identification) than positive superposition (possible identification). A 100% negative superposition is easily achieved, but a 100% positive one almost never. 'Each skull is unique and viewed from different perspectives is always a new challenge.' From this point of view, identification can be negative or of high probability.

  15. Prevention of Eye Injuries

    OpenAIRE

    Pashby, Tom

    1981-01-01

    In Canada 30,000 people are registered as blind; in one third of these, blindness might have been avoided. Prevention is the key to reducing the number of eye injuries and blind eyes. The role of the family physician in early identification of treatable conditions and in the education of patients is discussed, but responsibility for prevention belongs to all physicians. The success of prevention is seen in the great reduction in eye injuries in industry and sports since eye protectors have be...

  16. Mapping face recognition information use across cultures

    Directory of Open Access Journals (Sweden)

    Sébastien Miellet

    2013-02-01

    Full Text Available Face recognition is not rooted in a universal eye movement information-gathering strategy. Western observers favor a local facial feature sampling strategy, whereas Eastern observers prefer sampling face information from a global, central fixation strategy. Yet, the precise qualitative (the diagnostic) and quantitative (the amount) information underlying these cultural perceptual biases in face recognition remains undetermined. To this end, we monitored the eye movements of Western and Eastern observers during a face recognition task, with a novel gaze-contingent technique: the Expanding Spotlight. We used 2° Gaussian apertures centered on the observers' fixations expanding dynamically at a rate of 1° every 25 ms at each fixation - the longer the fixation duration, the larger the aperture size. Identity-specific face information was only displayed within the Gaussian aperture; outside the aperture, an average face template was displayed to facilitate saccade planning. Thus, the Expanding Spotlight simultaneously maps out the facial information span at each fixation location. Data obtained with the Expanding Spotlight technique confirmed that Westerners extract more information from the eye region, whereas Easterners extract more information from the nose region. Interestingly, this quantitative difference was paired with a qualitative disparity. Retinal filters based on spatial frequency decomposition built from the fixation maps revealed that Westerners used local high-spatial-frequency information sampling, covering all the features critical for effective face recognition (the eyes and the mouth). In contrast, Easterners achieved a similar result by using global low-spatial-frequency information from those facial features. Our data show that the face system flexibly engages in local or global eye movement strategies across cultures, relying on a distinct facial information span and culturally tuned, spatially filtered information. Overall, our
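
    The Expanding Spotlight schedule described above, a 2° aperture growing by 1° every 25 ms of fixation, reduces to a one-line rule; a minimal sketch follows, with the Gaussian-aperture rendering and average-face surround omitted.

    ```python
    # Minimal sketch of the Expanding Spotlight aperture schedule: a 2-degree
    # aperture that grows by 1 degree every 25 ms of fixation. Rendering omitted.
    def aperture_size_deg(fixation_duration_ms, base_deg=2.0, growth_deg=1.0, step_ms=25.0):
        """Aperture diameter (degrees of visual angle) after a given fixation duration."""
        return base_deg + growth_deg * (fixation_duration_ms // step_ms)

    for duration in (0, 24, 25, 100, 250):
        print(f"{duration:4d} ms fixation -> {aperture_size_deg(duration):.0f} deg aperture")
    ```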

  17. Eye contact with neutral and smiling faces: effects on frontal EEG asymmetry and autonomic responses

    Directory of Open Access Journals (Sweden)

    Laura Maria Pönkänen

    2012-05-01

    Full Text Available In our previous studies we have shown that seeing another person live with a direct vs. averted gaze results in greater relative left-sided frontal asymmetry in the electroencephalography (EEG, associated with approach motivation, and in enhanced skin conductance responses indicating autonomic arousal. In our studies, however, the stimulus persons had a neutral expression. In real-life social interaction, eye contact is often associated with a smile, which is another signal of the sender’s approach-related motivation. A smile could therefore enhance the affective-motivational responses to eye contact. In the present study, we investigated whether the facial expression (neutral vs. smile would modulate the frontal EEG asymmetry and autonomic arousal to seeing a direct vs. an averted gaze in faces presented live through a liquid crystal shutter. The results showed that the skin conductance responses were greater for the direct than the averted gaze and that the effect of gaze direction was more pronounced for a smiling than a neutral face. However, the frontal EEG asymmetry results revealed a more complex pattern. Participants whose responses to seeing the other person were overall indicative of leftward frontal activity (indicative of approach showed greater relative left-sided asymmetry for the direct vs. averted gaze, whereas participants whose responses were overall indicative of rightward frontal activity (indicative of avoidance showed greater relative right-sided asymmetry to direct vs. averted gaze. The other person’s facial expression did not have an effect on the frontal EEG asymmetry. These findings may reflect that another’s direct gaze, as compared to their smile, has a more dominant role in regulating perceivers’ approach motivation.

  18. Differences in Sequential Eye Movement Behavior between Taiwanese and American Viewers

    Directory of Open Access Journals (Sweden)

    Yen Ju Lee

    2016-05-01

    Full Text Available Knowledge of how information is sought in the visual world is useful for predicting and simulating human behavior. Taiwanese participants and American participants were instructed to judge the facial expression of a focal face that was flanked horizontally by other faces while their eye movements were monitored. The Taiwanese participants distributed their eye fixations more widely than American participants, started to look away from the focal face earlier than American participants, and spent a higher percentage of time looking at the flanking faces. Eye movement transition matrices also provided evidence that Taiwanese participants continually and systematically shifted gaze between focal and flanking faces. Eye movement patterns were less systematic and less prevalent in American participants. This suggests that the two cultures utilized different attention allocation strategies. The results highlight the importance of determining sequential eye movement statistics in cross-cultural research on the utilization of visual context.
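
    A first-order transition matrix of the kind used in this study can be built from a sequence of fixated areas of interest as sketched below; the AOI names and the example sequence are hypothetical, not the study's data.

    ```python
    # Sketch: building a first-order eye movement transition matrix over areas of
    # interest (AOIs) from a sequence of fixated AOIs. Illustrative only.
    import numpy as np

    def transition_matrix(fixation_sequence, aois):
        """Row-normalized matrix of transition probabilities between successive fixations."""
        index = {aoi: i for i, aoi in enumerate(aois)}
        counts = np.zeros((len(aois), len(aois)))
        for current, nxt in zip(fixation_sequence, fixation_sequence[1:]):
            counts[index[current], index[nxt]] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

    aois = ["focal", "left_flanker", "right_flanker"]
    sequence = ["focal", "left_flanker", "focal", "right_flanker", "focal", "left_flanker"]
    print(transition_matrix(sequence, aois))
    ```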

  19. Learning and Treatment of Anaphylaxis by Laypeople: A Simulation Study Using Pupilar Technology

    Directory of Open Access Journals (Sweden)

    Felipe Fernandez-Mendez

    2017-01-01

    Full Text Available Anaphylactic shock is a time-critical emergency situation. Decision-making during emergencies is an important responsibility but difficult to study. Eye-tracking technology allows us to identify the visual patterns involved in decision-making. The aim of this pilot study was to evaluate two training models for the recognition and treatment of anaphylaxis by laypeople, based on expert assessment and eye-tracking technology. A cross-sectional quasi-experimental simulation study was conducted to evaluate the identification and treatment of anaphylaxis. Fifty subjects were randomly assigned to four groups: three groups watched different training videos with content supervised by health personnel, and one control group received face-to-face training during paediatric practice. To evaluate learning, a simulation scenario involving an anaphylaxis victim was designed. A device capturing eye movements, together with expert evaluation, was used to assess performance. The subjects who underwent paediatric face-to-face training achieved better and faster recognition of the anaphylaxis. They also used the adrenaline injector with better precision and fewer mistakes, and they needed fewer visual fixations to recognise the anaphylaxis and to decide to inject epinephrine. Analysing the different video formats yielded mixed results; therefore, the videos should be tested for usability before implementation.

  20. Attention and Recall of Point-of-sale Tobacco Marketing: A Mobile Eye-Tracking Pilot Study

    Directory of Open Access Journals (Sweden)

    Maansi Bansal-Travers

    2016-01-01

    Full Text Available Introduction: As tobacco advertising restrictions have increased, the retail ‘power wall’ behind the counter is increasingly invaluable for marketing tobacco products. Objective: The primary objectives of this pilot study were 3-fold: (1) evaluate the attention paid/fixations on the area behind the cash register where tobacco advertising is concentrated and tobacco products are displayed in a real-world setting, (2) evaluate the duration (dwell-time) of these fixations, and (3) evaluate the recall of advertising displayed on the tobacco power wall. Methods: Data from 13 Smokers (S) and 12 Susceptible or non-daily Smokers (SS) aged 18–30 from a mobile eye-tracking study. Mobile-eye tracking technology records the orientation (fixation) and duration (dwell-time) of visual attention. Participants were randomized to one of three purchase tasks at a convenience store: Candy bar Only (CO; N = 10), Candy bar + Specified cigarette Brand (CSB; N = 6), and Candy bar + cigarette Brand of their Choice (CBC; N = 9). A post-session survey evaluated recall of tobacco marketing. Key outcomes were fixations and dwell-time on the cigarette displays at the point-of-sale. Results: Participants spent a median time of 44 seconds during the standardized time evaluated and nearly three-quarters (72%) fixated on the power wall during their purchase, regardless of smoking status (S: 77%, SS: 67%) or purchase task (CO: 44%, CSB: 71%, CBC: 100%). In the post session survey, nearly all participants (96%) indicated they noticed a cigarette brand and 64% were able to describe a specific part of the tobacco wall or recall a promotional offer. Conclusions: Consumers are exposed to point-of-sale tobacco marketing, regardless of smoking status. FDA should consider regulations that limit exposure to point-of-sale tobacco marketing among consumers.

  1. Attention and Recall of Point-of-sale Tobacco Marketing: A Mobile Eye-Tracking Pilot Study.

    Science.gov (United States)

    Bansal-Travers, Maansi; Adkison, Sarah E; O'Connor, Richard J; Thrasher, James F

    2016-01-01

    As tobacco advertising restrictions have increased, the retail 'power wall' behind the counter is increasingly invaluable for marketing tobacco products. The primary objectives of this pilot study were 3-fold: (1) evaluate the attention paid/fixations on the area behind the cash register where tobacco advertising is concentrated and tobacco products are displayed in a real-world setting, (2) evaluate the duration (dwell-time) of these fixations, and (3) evaluate the recall of advertising displayed on the tobacco power wall. Data from 13 Smokers (S) and 12 Susceptible or non-daily Smokers (SS) aged 18-30 from a mobile eye-tracking study. Mobile-eye tracking technology records the orientation (fixation) and duration (dwell-time) of visual attention. Participants were randomized to one of three purchase tasks at a convenience store: Candy bar Only (CO; N = 10), Candy bar + Specified cigarette Brand (CSB; N = 6), and Candy bar + cigarette Brand of their Choice (CBC; N = 9). A post-session survey evaluated recall of tobacco marketing. Key outcomes were fixations and dwell-time on the cigarette displays at the point-of-sale. Participants spent a median time of 44 seconds during the standardized time evaluated and nearly three-quarters (72%) fixated on the power wall during their purchase, regardless of smoking status (S: 77%, SS: 67%) or purchase task (CO: 44%, CSB: 71%, CBC: 100%). In the post session survey, nearly all participants (96%) indicated they noticed a cigarette brand and 64% were able to describe a specific part of the tobacco wall or recall a promotional offer. Consumers are exposed to point-of-sale tobacco marketing, regardless of smoking status. FDA should consider regulations that limit exposure to point-of-sale tobacco marketing among consumers.

  2. Infant's visual preferences for facial traits associated with adult attractiveness judgements: data from eye-tracking.

    Science.gov (United States)

    Griffey, Jack A F; Little, Anthony C

    2014-08-01

    Human preferences for facial attractiveness appear to emerge at an early stage during infant development. A number of studies have demonstrated that infants display a robust preference for facial attractiveness, preferring to look at physically attractive faces versus less attractive faces as judged by adults. However, to date, relatively little is known about which facial traits infants base these preferences on. In contrast, a large number of studies conducted with human adults have identified that preference for attractive faces can be attributed to a number of specific facial traits. The purpose of the experiments here was to measure and assess infants' visual preferences, via eye-tracking technology, for faces manipulated on one of three traits known to affect attractiveness judgments in adult preference tests: symmetry, averageness, and sexually dimorphic traits. Sixty-four infants (28 female and 36 male) aged between 12 and 24 months old each completed a visual paired comparison (VPC) task for one of the three facial dimensions investigated. Data indicated that infants displayed a significant visual preference for facial symmetry analogous to those preferences displayed by adults. Infants also displayed a significant visual preference for feminine versions of faces, in line with some studies of adult preferences. Visual preferences for facial non-averageness, or distinctiveness, were also seen, a pattern opposite to that seen in adults. These findings demonstrate that infants' appreciation of facial attractiveness in adult images between 12 and 24 months of age is based on some, but not all, traits that adults find attractive. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Exposure to the self-face facilitates identification of dynamic facial expressions: influences on individual differences.

    Science.gov (United States)

    Li, Yuan Hang; Tottenham, Nim

    2013-04-01

    A growing literature suggests that the self-face is involved in processing the facial expressions of others. The authors experimentally activated self-face representations to assess its effects on the recognition of dynamically emerging facial expressions of others. They exposed participants to videos of either their own faces (self-face prime) or faces of others (nonself-face prime) prior to a facial expression judgment task. Their results show that experimentally activating self-face representations results in earlier recognition of dynamically emerging facial expression. As a group, participants in the self-face prime condition recognized expressions earlier (when less affective perceptual information was available) compared to participants in the nonself-face prime condition. There were individual differences in performance, such that poorer expression identification was associated with higher autism traits (in this neurocognitively healthy sample). However, when randomized into the self-face prime condition, participants with high autism traits performed as well as those with low autism traits. Taken together, these data suggest that the ability to recognize facial expressions in others is linked with the internal representations of our own faces. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  4. A developmental change of the visual behavior of the face recognition in the early infancy.

    Science.gov (United States)

    Konishi, Yukihiko; Okubo, Kensuke; Kato, Ikuko; Ijichi, Sonoko; Nishida, Tomoko; Kusaka, Takashi; Isobe, Kenichi; Itoh, Susumu; Kato, Masaharu; Konishi, Yukuo

    2012-10-01

    The purpose of this study was to examine developmental changes in visuocognitive function, particularly face recognition, in early infancy. We measured eye movements in healthy infants in a preferential gaze task, particularly eye movements between two face stimuli. We used an eye tracker system (Tobii 1750, Tobii Technologies, Sweden) to measure eye movements in infants. Subjects were 17 3-month-old infants and 16 4-month-old infants. The subjects looked at two types of face stimuli (an upright face and a scrambled face) presented at the same time, and we measured their visual behavior (preference, looking, and eye movements). Our results showed that 4-month-old infants looked at the upright face longer than 3-month-old infants, and exploratory behavior while comparing the two face stimuli significantly increased. In this study, 4-month-old infants showed a preference towards the upright face, and the number of eye movements between the two face stimuli significantly increased in 4-month-old infants. These results suggest that eye movements may be an important index of face-cognitive function during early infancy. Copyright © 2012 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  5. Remote gaze tracking system for 3D environments.

    Science.gov (United States)

    Congcong Liu; Herrup, Karl; Shi, Bertram E

    2017-07-01

    Eye tracking systems are typically divided into two categories: remote and mobile. Remote systems, where the eye tracker is located near the object being viewed by the subject, have the advantage of being less intrusive, but are typically used for tracking gaze points on fixed two dimensional (2D) computer screens. Mobile systems such as eye tracking glasses, where the eye tracker are attached to the subject, are more intrusive, but are better suited for cases where subjects are viewing objects in the three dimensional (3D) environment. In this paper, we describe how remote gaze tracking systems developed for 2D computer screens can be used to track gaze points in a 3D environment. The system is non-intrusive. It compensates for small head movements by the user, so that the head need not be stabilized by a chin rest or bite bar. The system maps the 3D gaze points of the user onto 2D images from a scene camera and is also located remotely from the subject. Measurement results from this system indicate that it is able to estimate gaze points in the scene camera to within one degree over a wide range of head positions.
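
    Mapping a gaze point known in the scene camera's 3D coordinate frame onto the 2D scene image is, at its simplest, a pinhole projection; the sketch below illustrates that step with placeholder intrinsic parameters, not the calibration of the described system.

    ```python
    # Sketch: mapping a 3D gaze point (in the scene camera's coordinate frame)
    # to 2D pixel coordinates with a simple pinhole camera model. The intrinsic
    # parameters below are placeholders, not those of the described system.
    import numpy as np

    def project_gaze_point(point_cam, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
        """Project a 3D point (metres, camera frame, z forward) to pixel coordinates."""
        x, y, z = point_cam
        if z <= 0:
            raise ValueError("Point must lie in front of the camera (z > 0).")
        u = fx * x / z + cx
        v = fy * y / z + cy
        return u, v

    # Example: gaze point 2 m ahead, slightly off the optical axis -> ~(360, 220) px.
    print(project_gaze_point(np.array([0.10, -0.05, 2.0])))
    ```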

  6. Posterior cortical atrophy: An investigation of scan paths generated during Face Matching tasks.

    Directory of Open Access Journals (Sweden)

    Benjamin P Meek

    2013-06-01

    Full Text Available When viewing a face, healthy individuals focus more on the area containing the eyes and upper nose in order to retrieve important featural and configural information. In contrast, individuals with face blindness (prosopagnosia tend to direct fixations towards individual facial features – particularly the mouth. Presented here is an examination of face perception deficits in individuals with Posterior Cortical Atrophy (PCA. PCA is a rare progressive neurodegenerative disorder that is characterized by atrophy in occipito-parietal and occipito-temporal cortices. PCA primarily affects higher visual processing, while memory, reasoning, and insight remain relatively intact. A common symptom of PCA is a decreased effective field of vision caused by the inability to ‘see the whole picture’. Individuals with PCA and healthy control participants completed a same/different discrimination task in which images of faces were presented as cue-target pairs. Eye-tracking equipment and a novel computer-based perceptual task – the Viewing Window paradigm – were used to investigate scan patterns when faces were presented in open view or through a restricted-view, respectively. In contrast to previous prosopagnosia research, individuals with PCA each produced unique scan paths that focused on non-diagnostically useful locations. This focus on non-diagnostically useful locations was also present when using a restricted viewing aperture, suggesting that individuals with PCA have difficulty processing the face at either the featural or configural level. In fact, it appears that the decreased effective field of view in PCA patients is so severe that it results in an extreme dependence on local processing, such that a feature-based approach is not even possible.

  7. Time Course of Visual Attention in Infant Categorization of Cats versus Dogs: Evidence for a Head Bias as Revealed through Eye Tracking

    Science.gov (United States)

    Quinn, Paul C.; Doran, Matthew M.; Reiss, Jason E.; Hoffman, James E.

    2009-01-01

    Previous looking time studies have shown that infants use the heads of cat and dog images to form category representations for these animal classes. The present research used an eye-tracking procedure to determine the time course of attention to the head and whether it reflects a preexisting bias or online learning. Six- to 7-month-olds were…

  8. Aging changes in the face

    Science.gov (United States)

    MedlinePlus medical encyclopedia entry describing aging changes in the face: //medlineplus.gov/ency/article/004004.htm (reference: Brodie SE, Francis JH. Aging and disorders of the eye. In: Fillit HM, ...).

  9. Emulation of Physician Tasks in Eye-Tracked Virtual Reality for Remote Diagnosis of Neurodegenerative Disease.

    Science.gov (United States)

    Orlosky, Jason; Itoh, Yuta; Ranchet, Maud; Kiyokawa, Kiyoshi; Morgan, John; Devos, Hannes

    2017-04-01

    For neurodegenerative conditions like Parkinson's disease, early and accurate diagnosis is still a difficult task. Evaluations can be time consuming, patients must often travel to metropolitan areas or different cities to see experts, and misdiagnosis can result in improper treatment. To date, only a handful of assistive or remote methods exist to help physicians evaluate patients with suspected neurological disease in a convenient and consistent way. In this paper, we present a low-cost VR interface designed to support evaluation and diagnosis of neurodegenerative disease and test its use in a clinical setting. Using a commercially available VR display with an infrared camera integrated into the lens, we have constructed a 3D virtual environment designed to emulate common tasks used to evaluate patients, such as fixating on a point, conducting smooth pursuit of an object, or executing saccades. These virtual tasks are designed to elicit eye movements commonly associated with neurodegenerative disease, such as abnormal saccades, square wave jerks, and ocular tremor. Next, we conducted experiments with 9 patients with a diagnosis of Parkinson's disease and 7 healthy controls to test the system's potential to emulate tasks for clinical diagnosis. We then applied eye tracking algorithms and image enhancement to the eye recordings taken during the experiment and conducted a short follow-up study with two physicians for evaluation. Results showed that our VR interface was able to elicit five common types of movements usable for evaluation, physicians were able to confirm three out of four abnormalities, and visualizations were rated as potentially useful for diagnosis.

  10. Human-like object tracking and gaze estimation with PKD android.

    Science.gov (United States)

    Wijayasinghe, Indika B; Miller, Haylie L; Das, Sumit K; Bugnariu, Nicoleta L; Popa, Dan O

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.

  11. Human-like object tracking and gaze estimation with PKD android

    Science.gov (United States)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.

  12. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    Science.gov (United States)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The change of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Some work has already been done to achieve this technically, but the interaction of the third dimension with humans is not yet clear. Previously, it was found that any increased load on the visual system, such as prolonged TV watching, computer work, or video gaming, can create visual fatigue. Watching S3D, however, can cause visual fatigue of a different nature, since all S3D technologies create the illusion of a third dimension based on the characteristics of binocular vision. In this work, we propose to evaluate and compare the visual fatigue from watching 2D and S3D content, and we show the difference in the accumulation of visual fatigue and its assessment for the two types of content. To perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and gave subjective evaluations. It was found that watching stereo 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained with eye tracking were investigated with regard to their relation to visual fatigue.

  13. 21 CFR 886.4750 - Ophthalmic eye shield.

    Science.gov (United States)

    2010-04-01

    MEDICAL DEVICES, OPHTHALMIC DEVICES, Surgical Devices, § 886.4750 Ophthalmic eye shield. (a) Identification. An ophthalmic eye shield is a device that consists of a plastic or aluminum eye covering intended to... (21 CFR, Food and Drugs, revised as of 2010-04-01).

  14. One visual search, many memory searches: An eye-tracking investigation of hybrid search.

    Science.gov (United States)

    Drew, Trafton; Boettcher, Sage E P; Wolfe, Jeremy M

    2017-09-01

    Suppose you go to the supermarket with a shopping list of 10 items held in memory. Your shopping expedition can be seen as a combination of visual search and memory search. This is known as "hybrid search." There is a growing interest in understanding how hybrid search tasks are accomplished. We used eye tracking to examine how manipulating the number of possible targets (the memory set size [MSS]) changes how observers (Os) search. We found that dwell time on each distractor increased with MSS, suggesting a memory search was being executed each time a new distractor was fixated. Meanwhile, although the rate of refixation increased with MSS, it was not nearly enough to suggest a strategy that involves repeatedly searching visual space for subgroups of the target set. These data provide a clear demonstration that hybrid search tasks are carried out via a "one visual search, many memory searches" heuristic in which Os examine items in the visual array once with a very low rate of refixations. For each item selected, Os activate a memory search that produces logarithmic response time increases with increased MSS. Furthermore, the percentage of distractors fixated was strongly modulated by the MSS: More items in the MSS led to a higher percentage of fixated distractors. Searching for more potential targets appears to significantly alter how Os approach the task, ultimately resulting in more eye movements and longer response times.
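
    The reported logarithmic growth of response time with memory set size amounts to fitting RT = a + b·log2(MSS); a minimal fit with made-up response times is sketched below.

    ```python
    # Sketch: fitting the logarithmic relation RT = a + b*log2(memory set size)
    # described above. The RT values below are made up for illustration.
    import numpy as np

    mss = np.array([1, 2, 4, 8, 16])               # memory set sizes
    rt_ms = np.array([620, 680, 745, 800, 865])    # hypothetical mean response times (ms)

    b, a = np.polyfit(np.log2(mss), rt_ms, deg=1)  # slope per doubling, intercept
    print(f"RT ≈ {a:.0f} + {b:.0f} * log2(MSS) ms")
    ```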

  15. Through the Eyes of the Beholder: Simulated Eye-movement Experience ("SEE") Modulates Valence Bias in Response to Emotional Ambiguity.

    Science.gov (United States)

    Neta, Maital; Dodd, Michael D

    2018-02-01

    Although some facial expressions provide clear information about people's emotions and intentions (happy, angry), others (surprise) are ambiguous because they can signal both positive (e.g., surprise party) and negative outcomes (e.g., witnessing an accident). Without a clarifying context, surprise is interpreted as positive by some and negative by others, and this valence bias is stable across time. When compared to fearful expressions, which are consistently rated as negative, surprise and fear share similar morphological features (e.g., widened eyes) primarily in the upper part of the face. Recently, we demonstrated that the valence bias was associated with a specific pattern of eye movements (positive bias associated with faster fixation to the lower part of the face). In this follow-up, we identified two participants from our previous study who had the most positive and most negative valence bias. We used their eye movements to create a moving window such that new participants viewed faces through the eyes of one of our previous participants (subjects saw only the areas of the face that were directly fixated by the original participants in the exact order they were fixated; i.e., Simulated Eye-movement Experience). The input provided by these windows modulated the valence ratings of surprise, but not fear faces. These findings suggest there are meaningful individual differences in how people process faces, and that these differences impact our emotional perceptions. Furthermore, this study is unique in its approach to examining individual differences in emotion by creating a new methodology adapted from those used primarily in the vision/attention domain. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  16. IMPLEMENTATION OF ARTIFICIAL NEURAL NETWORK FOR FACE RECOGNITION USING GABOR FEATURE EXTRACTION

    Directory of Open Access Journals (Sweden)

    Muthukannan K

    2013-11-01

    Full Text Available Face detection and recognition is the first step in many applications in various fields, such as identification, access control for electronic devices, video surveillance, human-computer interfaces and image database management. This paper focuses on feature extraction from an image using a Gabor filter; the extracted feature vector is then given as input to a neural network, which is trained with the input data. The Gabor wavelet concentrates on the important components of the face, including the eyes, mouth, nose and cheeks. The main requirement of this technique is the threshold, which determines its sensitivity; the threshold values are the feature vectors taken from the faces. These feature vectors are fed into a feed-forward neural network to train it. Using the feed-forward neural network as a classifier, recognized and unrecognized faces are distinguished, and the classifier attains a high face detection rate. By training with more input vectors the system proves to be effective. The effectiveness of the proposed method is demonstrated by the experimental results.
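
    A hedged sketch of the pipeline the abstract describes, reimplemented from scratch in Python/NumPy (kernel size, orientations, pooling and the random network weights are placeholders, not the paper's configuration): Gabor filter-bank responses are pooled into a feature vector, which a small feed-forward network then scores as face/non-face.

        import numpy as np

        def gabor_kernel(size=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5, psi=0.0):
            """Real-valued Gabor kernel: cosine carrier under a Gaussian envelope."""
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)
            yr = -x * np.sin(theta) + y * np.cos(theta)
            return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lambd + psi)

        def gabor_features(image, n_orientations=4):
            """Pool filter-bank responses (mean and std of the magnitude) into a feature vector."""
            feats = []
            for k in range(n_orientations):
                kern = gabor_kernel(theta=k * np.pi / n_orientations)
                # circular 2-D convolution via the FFT, to keep the sketch dependency-free
                resp = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kern, image.shape)))
                feats += [np.abs(resp).mean(), np.abs(resp).std()]
            return np.array(feats)

        def mlp_score(x, w1, b1, w2, b2):
            """One-hidden-layer feed-forward network used as the face / non-face classifier."""
            h = np.tanh(x @ w1 + b1)
            return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))   # sigmoid output in (0, 1)

        rng = np.random.default_rng(0)
        face_crop = rng.random((64, 64))              # stand-in for a normalised face image
        x = gabor_features(face_crop)
        w1, b1 = rng.normal(size=(x.size, 8)), np.zeros(8)
        w2, b2 = rng.normal(size=8), 0.0
        print(mlp_score(x, w1, b1, w2, b2))           # untrained score, for shape-checking only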

  17. The Rational Adolescent: Strategic Information Processing during Decision Making Revealed by Eye Tracking.

    Science.gov (United States)

    Kwak, Youngbin; Payne, John W; Cohen, Andrew L; Huettel, Scott A

    2015-01-01

    Adolescence is often viewed as a time of irrational, risky decision-making - despite adolescents' competence in other cognitive domains. In this study, we examined the strategies used by adolescents (N=30) and young adults (N=47) to resolve complex, multi-outcome economic gambles. Compared to adults, adolescents were more likely to make conservative, loss-minimizing choices consistent with economic models. Eye-tracking data showed that prior to decisions, adolescents acquired more information in a more thorough manner; that is, they engaged in a more analytic processing strategy indicative of trade-offs between decision variables. In contrast, young adults' decisions were more consistent with heuristics that simplified the decision problem, at the expense of analytic precision. Collectively, these results demonstrate a counter-intuitive developmental transition in economic decision making: adolescents' decisions are more consistent with rational-choice models, while young adults more readily engage task-appropriate heuristics.

  18. The Rational Adolescent: Strategic Information Processing during Decision Making Revealed by Eye Tracking

    Science.gov (United States)

    Kwak, Youngbin; Payne, John W.; Cohen, Andrew L.; Huettel, Scott A.

    2015-01-01

    Adolescence is often viewed as a time of irrational, risky decision-making – despite adolescents' competence in other cognitive domains. In this study, we examined the strategies used by adolescents (N=30) and young adults (N=47) to resolve complex, multi-outcome economic gambles. Compared to adults, adolescents were more likely to make conservative, loss-minimizing choices consistent with economic models. Eye-tracking data showed that prior to decisions, adolescents acquired more information in a more thorough manner; that is, they engaged in a more analytic processing strategy indicative of trade-offs between decision variables. In contrast, young adults' decisions were more consistent with heuristics that simplified the decision problem, at the expense of analytic precision. Collectively, these results demonstrate a counter-intuitive developmental transition in economic decision making: adolescents' decisions are more consistent with rational-choice models, while young adults more readily engage task-appropriate heuristics. PMID:26388664

  19. Eye tracker based study: Perception of faces with a cleft lip and nose deformity.

    Science.gov (United States)

    van Schijndel, Olaf; Litschel, Ralph; Maal, Thomas J J; Bergé, Stefaan J; Tasman, Abel-Jan

    2015-10-01

    Quantification of visual attention directed towards cleft stigmata and its impact on the perception of selected personality traits. Forty observers were divided into two groups and their visual scan paths were recorded. Both groups observed a series of photographs displaying full frontal views of the faces of 18 adult patients with clefts, nine with residual cleft stigmata and nine with digitally-corrected stigmata (each patient only appeared once per series). Patients that appeared with residual stigmata in one series appeared digitally corrected in the other series and vice versa. Visual fixation times on the upper lip and nose were compared between the original and corrected photographs. Observers subsequently rated personality traits as perceived using visual analogue scales and the same photographs that they had observed in the series. In faces depicting cleft stigmata observers spent more time looking at the oronasal region of interest, followed by the eyes (39.6%; SD 5.0 and 35.1%; SD 3.6, respectively, p = 0.0198). Observers spent more time looking at the cleft lip compared with the corrected lip (21.2%; SD 4.0 and 16.7%; SD 5.0, respectively, p = 0.006). The differences between questionnaire scores for faces with cleft stigmata compared with faces with corrected stigmata for withdrawn-sociable, discontent-content, lazy-assiduous, unimaginative-creative, unlikeable-likeable, and the sum of individual personality traits were not significant. According to these findings, cleft lip and cleft nose have an attention-drawing potential with the cleft lip being the major attention drawing factor. These data do not provide supportive evidence for the notion reported in literature that patients with clefts are perceived as having negative personality traits. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  20. 21 CFR 878.4440 - Eye pad.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Eye pad. 878.4440 Section 878.4440 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES GENERAL AND PLASTIC SURGERY DEVICES Surgical Devices § 878.4440 Eye pad. (a) Identification. An eye pad is...

  1. Vicarious Social Touch Biases Gazing at Faces and Facial Emotions.

    Science.gov (United States)

    Schirmer, Annett; Ng, Tabitha; Ebstein, Richard P

    2018-02-01

    Research has suggested that interpersonal touch promotes social processing and other-concern, and that women may respond to it more sensitively than men. In this study, we asked whether this phenomenon would extend to third-party observers who experience touch vicariously. In an eye-tracking experiment, participants (N = 64, 32 men and 32 women) viewed prime and target images with the intention of remembering them. Primes comprised line drawings of dyadic interactions with and without touch. Targets comprised two faces shown side-by-side, with one being neutral and the other being happy or sad. Analysis of prime fixations revealed that faces in touch interactions attracted longer gazing than faces in no-touch interactions. In addition, touch enhanced gazing at the area of touch in women but not men. Analysis of target fixations revealed that touch priming increased looking at both faces immediately after target onset, and subsequently, at the emotional face in the pair. Sex differences in target processing were nonsignificant. Together, the present results imply that vicarious touch biases visual attention to faces and promotes emotion sensitivity. In addition, they suggest that, compared with men, women are more aware of tactile exchanges in their environment. As such, vicarious touch appears to share important qualities with actual physical touch. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Online Sentence Reading in People With Aphasia: Evidence From Eye Tracking.

    Science.gov (United States)

    Knilans, Jessica; DeDe, Gayle

    2015-11-01

    There is a lot of evidence that people with aphasia have more difficulty understanding structurally complex sentences (e.g., object clefts) than simpler sentences (subject clefts). However, subject clefts also occur more frequently in English than object clefts. Thus, it is possible that both structural complexity and frequency affect how people with aphasia understand these structures. Nine people with aphasia and 8 age-matched controls participated in the study. The stimuli consisted of 24 object cleft and 24 subject cleft sentences. The task was eye tracking during reading, which permits a more fine-grained analysis of reading performance than measures such as self-paced reading. As expected, controls had longer reading times for critical regions in object cleft sentences compared with subject cleft sentences. People with aphasia showed the predicted effects of structural frequency. Effects of structural complexity in people with aphasia did not emerge on their first pass through the sentence but were observed when they were rereading critical regions of complex sentences. People with aphasia are sensitive to both structural complexity and structural frequency when reading. However, people with aphasia may use different reading strategies than controls when confronted with relatively infrequent and complex sentence structures.

  3. Structure modulates similarity-based interference in sluicing: An eye tracking study.

    Directory of Open Access Journals (Sweden)

    Jesse A. Harris

    2015-12-01

    Full Text Available In cue-based content-addressable approaches to memory, a target and its competitors are retrieved in parallel from memory via a fast, associative cue-matching procedure under a severely limited focus of attention. Such a parallel matching procedure could in principle ignore the serial order or hierarchical structure characteristic of linguistic relations. I present an eye tracking while reading experiment that investigates whether the sentential position of a potential antecedent modulates the strength of similarity-based interference, a well-studied effect in which increased similarity in features between a target and its competitors results in slower and less accurate retrieval overall. The manipulation trades on an independently established Locality bias in sluiced structures to associate a wh-remnant (which ones) in clausal ellipsis with the most local correlate (some wines), as in The tourists enjoyed some wines, but I don’t know which ones. The findings generally support cue-based parsing models of sentence processing that are subject to similarity-based interference in retrieval, and provide additional support to the growing body of evidence that retrieval is sensitive to both the structural position of a target antecedent and its competitors, and the specificity of retrieval cues.

  4. Building metamemorial knowledge over time: insights from eye tracking about the bases of feeling-of-knowing and confidence judgments.

    Science.gov (United States)

    Chua, Elizabeth F; Solinger, Lisa A

    2015-01-01

    Metamemory processes depend on different factors across the learning and memory time-scale. In the laboratory, subjects are often asked to make prospective feeling-of-knowing (FOK) judgments about target retrievability, or are asked to make retrospective confidence judgments (RCJs) about the retrieved target. We examined distinct and shared contributors to metamemory judgments, and how they were built over time. Eye movements were monitored during a face-scene associative memory task. At test, participants viewed a studied scene, then rated their FOK that they would remember the associated face. This was followed by a forced choice recognition test and RCJs. FOK judgments were less accurate than RCJ judgments, showing that the addition of mnemonic experience can increase metacognitive accuracy over time. However, there was also evidence that the given FOK rating influenced RCJs. Turning to eye movements, initial analyses showed that higher cue fluency was related to both higher FOKs and higher RCJs. However, further analyses revealed that the effects of the scene cue on RCJs were mediated by FOKs. Turning to the target, increased viewing time and faster viewing of the correct associate related to higher FOKs, consistent with the idea that target accessibility is a basis of FOKs. In contrast, the amount of viewing directed to the chosen face, regardless of whether it was correct, predicted higher RCJs, suggesting that choice experience is a significant contributor to RCJs. We also examined covariates of the change in RCJ rating from the FOK rating, and showed that increased and faster viewing of the chosen face predicted raising one's confidence above one's FOK. Taken together, these results suggest that metamemory judgments should not be thought of only as distinct subjective experiences, but as complex processes that interact and evolve as new psychological bases for subjective experience become available.

  5. Emotional Face Identification in Youths with Primary Bipolar Disorder or Primary Attention-Deficit/Hyperactivity Disorder

    Science.gov (United States)

    Seymour, Karen E.; Pescosolido, Matthew F.; Reidy, Brooke L.; Galvan, Thania; Kim, Kerri L.; Young, Matthew; Dickstein, Daniel P.

    2013-01-01

    Objective: Bipolar disorder (BD) and attention-deficit/hyperactivity disorder (ADHD) are often comorbid or confounded; therefore, we evaluated emotional face identification to better understand brain/behavior interactions in children and adolescents with either primary BD, primary ADHD, or typically developing controls (TDC). Method: Participants…

  6. EYE TRACKING TO EXPLORE THE IMPACTS OF PHOTOREALISTIC 3D REPRESENTATIONS IN PEDESTRIAN NAVIGATION PERFORMANCE

    Directory of Open Access Journals (Sweden)

    W. Dong

    2016-06-01

    Full Text Available Despite the now-ubiquitous two-dimensional (2D) maps, photorealistic three-dimensional (3D) representations of cities (e.g., Google Earth) have gained much attention from scientists and public users as another option. However, there is no consistent evidence on the influence of 3D photorealism on pedestrian navigation. Whether 3D photorealism can communicate cartographic information for navigation with higher effectiveness and efficiency and lower cognitive workload than traditional symbolic 2D maps remains unknown. This study aims to explore whether photorealistic 3D representation can facilitate the processes of map reading and navigation in digital environments, using a lab-based eye-tracking approach. Here we show the differences between symbolic 2D maps and photorealistic 3D representations based on users’ eye-movement and navigation behaviour data. We found that participants using the 3D representation were less effective, less efficient and required a higher cognitive workload than those using the 2D map for map reading. However, participants using the 3D representation performed more efficiently in self-localization and orientation at complex decision points. The empirical results can help to improve the usability of pedestrian navigation maps in future designs.

  7. A full-parallax 3D display with restricted viewing zone tracking viewer's eye

    Science.gov (United States)

    Beppu, Naoto; Yendo, Tomohiro

    2015-03-01

    Three-dimensional (3D) vision has become a widely known and familiar imaging technique. 3D displays have been put into practical use in various fields, such as entertainment and medicine, and the development of 3D display technology will play an important role in a wide range of fields. There are various methods of displaying 3D images; we focused on the method based on ray reproduction. Because this method displays a different viewpoint image depending on the viewpoint, it needs many viewpoint images to achieve full parallax. We proposed to reduce wasteful rays by limiting the projector's rays to the area around the viewer only, using a spinning mirror, and thereby to increase the effectiveness of the display device in achieving a full-parallax 3D display. The proposed method uses viewer eye tracking, a high-speed projector, a rotating mirror that tracks the viewer (a spinning mirror), a concave mirror array with different vertical slopes arranged circumferentially, and a cylindrical mirror. In simulations of the proposed method, we confirmed the scanning range and the locus of movement of the rays in the horizontal direction, as well as the switching of viewpoints and the convergence performance of the rays in the vertical direction. We therefore confirmed that a full-parallax 3D display can be realized.

  8. Binocular eye movement control and motion perception: What is being tracked?

    NARCIS (Netherlands)

    J. van der Steen (Hans); J. Dits (Joyce)

    2012-01-01

    textabstractPURPOSE. We investigated under what conditions humans can make independent slow phase eye movements. The ability to make independent movements of the two eyes generally is attributed to few specialized lateral eyed animal species, for example chameleons. In our study, we showed that

  9. 21 CFR 1271.290 - Tracking.

    Science.gov (United States)

    2010-04-01

    ... TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.290 Tracking. (a) General. If you perform any... designed to facilitate effective tracking, using the distinct identification code, from the donor to the... for recording the distinct identification code and type of each HCT/P distributed to a consignee to...

  10. Exploring hadronic tau identification with DC1 data samples: a track based approach

    CERN Document Server

    Richter-Was, Elzbieta; Tarrade, F

    2004-01-01

    In this note we discuss the identification of hadronic $\tau$s. We propose an algorithm, tauID, which starts from a reconstructed, relatively high pT track and then collects calorimetric energy deposition in a fixed cone seeded by the track eta and phi at the vertex. With the proposed algorithm we explore exclusive features of the hadronic $\tau$ decays, and we also indicate the possibility of using an energy-flow based approach for defining the energy scale of the reconstructed tau-candidates. The results presented here are limited to the barrel region (|eta| < 1.5) and are based on the DC1 events simulated without pile-up and electronic noise. We compare the performances of the proposed algorithm and of the base-line tauRec algorithm and draw some conclusions for further studies.
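
    A minimal illustration (Python; the cone size, toy cells and variable names are assumptions for this sketch, not the ATLAS implementation) of the core step the note describes: summing calorimetric energy deposits inside a fixed cone seeded by the track's eta and phi.

        import math

        def delta_r(eta1, phi1, eta2, phi2):
            """Angular distance in the (eta, phi) plane, with phi wrapped to [-pi, pi]."""
            dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
            return math.hypot(eta1 - eta2, dphi)

        def cone_energy(seed_track, calo_cells, cone=0.4):
            """Sum calorimeter energy deposits inside a fixed cone around the seed track."""
            seed_eta, seed_phi = seed_track
            return sum(e for eta, phi, e in calo_cells
                       if delta_r(seed_eta, seed_phi, eta, phi) < cone)

        # toy example: a seed track at (eta, phi) = (0.3, 1.0) and a few calo cells (eta, phi, E)
        cells = [(0.32, 1.05, 12.0), (0.28, 0.97, 8.5), (1.50, -2.0, 20.0)]
        print(cone_energy((0.3, 1.0), cells))   # -> 20.5, the energy of the two cells inside the cone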

  11. Weight and see: Loading working memory improves incidental identification of irrelevant faces

    Directory of Open Access Journals (Sweden)

    David Carmel

    2012-08-01

    Full Text Available Are task-irrelevant stimuli processed to a level enabling individual identification? This question is central both for perceptual processing models and for applied settings (e.g., eyewitness testimony). Lavie’s load theory proposes that working memory actively maintains attentional prioritization of relevant over irrelevant information. Loading working memory thus impairs attentional prioritization, leading to increased processing of task-irrelevant stimuli. Previous research has shown that increased working memory load leads to greater interference effects from response competing distractors. Here we test the novel prediction that increased processing of irrelevant stimuli under high working memory load should lead to a greater likelihood of incidental identification of entirely irrelevant stimuli. To test this, we asked participants to perform a word-categorization task while ignoring task-irrelevant images. The categorization task was performed during the retention interval of a working memory task with either low or high load (defined by memory set size). Following the final experimental trial, a surprise question assessed incidental identification of the irrelevant image. Loading working memory was found to improve identification of task-irrelevant faces, but not of building stimuli (shown in a separate experiment to be less distracting). These findings suggest that working memory plays a critical role in determining whether distracting stimuli will be subsequently identified.

  12. Differential Attention to Faces in Infant Siblings of Children with Autism Spectrum Disorder and Associations with Later Social and Language Ability.

    Science.gov (United States)

    Wagner, Jennifer B; Luyster, Rhiannon J; Moustapha, Hana; Tager-Flusberg, Helen; Nelson, Charles A

    2018-01-01

    A growing body of literature has begun to explore social attention in infant siblings of children with autism spectrum disorder (ASD) with hopes of identifying early differences that are associated with later ASD or other aspects of development. The present study used eye-tracking to measure attention to familiar (mother) and unfamiliar (stranger) faces in two groups of 6-month-old infants: infants with no family history of ASD (low-risk controls; LRC), and infants at high risk for ASD (HRA), by virtue of having an older sibling with ASD. HRA infants were further characterized based on autism classification at 24 months or older as HRA- (HRA without an ASD outcome) or HRA+ (HRA with an ASD outcome). For overall time spent scanning faces, HRA+ and LRC showed similar patterns of attention, and this was significantly greater than in HRA-. When examining duration of time spent on eyes and mouth, all infants spent more time on eyes than mouth, but HRA+ showed the greatest amount of time looking at these regions, followed by LRC, then HRA-. LRC showed a positive association between 6-month attention to eyes and 18-month social-communicative behavior, while HRA- showed a negative association between attention to eyes at 6 months and expressive language at 18 months (all correlations controlled for non-verbal IQ; HRA- correlations held with and without the inclusion of the small sample of HRA+). Differences found in face scanning at 6 months, as well as associations with social communication at 18 months, point to potential variation in the developmental significance of early social attention in children at low and high risk for ASD.

  13. Robustifying eye interaction

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Hansen, John Paulin

    2006-01-01

    This paper presents a gaze typing system based on consumer hardware. Eye tracking based on consumer hardware is subject to several unknown factors. We propose methods using robust statistical principles to accommodate uncertainties in image data as well as in gaze estimates to improve accuracy. We...

  14. Implementation of an RBF neural network on embedded systems: real-time face tracking and identity verification.

    Science.gov (United States)

    Yang, Fan; Paindavoine, M

    2003-01-01

    This paper describes a real time vision system that allows us to localize faces in video sequences and verify their identity. These processes rely on image processing techniques based on the radial basis function (RBF) neural network approach. The robustness of this system has been evaluated quantitatively on eight video sequences. We have adapted our model for an application of face recognition using the Olivetti Research Laboratory (ORL), Cambridge, UK, database so as to compare the performance against other systems. We also describe three hardware implementations of our model on embedded systems based on the field programmable gate array (FPGA), zero instruction set computer (ZISC) chips, and digital signal processor (DSP) TMS320C62, respectively. We analyze the algorithm complexity and present results of hardware implementations in terms of the resources used and processing speed. The success rates of face tracking and identity verification are 92% (FPGA), 85% (ZISC), and 98.2% (DSP), respectively. For the three embedded systems, the processing speeds for an image size of 288 × 352 are 14 images/s, 25 images/s, and 4.8 images/s, respectively.
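
    A simplified sketch of the RBF idea applied to identity verification (Python/NumPy; the prototypes, widths and acceptance threshold are invented for illustration and do not reproduce the authors' trained network): the probe feature vector is matched against stored face prototypes via Gaussian activations and accepted only if the best activation is high enough.

        import numpy as np

        def rbf_verify(probe, prototypes, widths, threshold=0.6):
            """Return the best-matching enrolled prototype and whether the match
            is close enough to accept the claimed identity."""
            d2 = ((probe - prototypes) ** 2).sum(axis=1)    # squared distance to each prototype
            phi = np.exp(-d2 / (2.0 * widths ** 2))         # RBF hidden-layer activations
            best = int(phi.argmax())
            return best, bool(phi[best] >= threshold)

        rng = np.random.default_rng(1)
        prototypes = rng.normal(size=(5, 32))               # 5 enrolled face-feature prototypes
        widths = np.full(5, 4.0)
        probe = prototypes[2] + 0.05 * rng.normal(size=32)  # probe near enrolled face 2
        print(rbf_verify(probe, prototypes, widths))        # -> (2, True) for this toy setup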

  15. Computing eye gaze metrics for the automatic assessment of radiographer performance during X-ray image interpretation.

    Science.gov (United States)

    McLaughlin, Laura; Bond, Raymond; Hughes, Ciara; McConnell, Jonathan; McFadden, Sonyia

    2017-09-01

    To investigate image interpretation performance by diagnostic radiography students, diagnostic radiographers and reporting radiographers by computing eye gaze metrics using eye tracking technology. Three groups of participants were studied during their interpretation of 8 digital radiographic images including the axial and appendicular skeleton, and chest (prevalence of normal images was 12.5%). A total of 464 image interpretations were collected. Participants consisted of 21 radiography students, 19 qualified radiographers and 18 qualified reporting radiographers who were further qualified to report on the musculoskeletal (MSK) system. Eye tracking data was collected using the Tobii X60 eye tracker and subsequently eye gaze metrics were computed. Voice recordings, confidence levels and diagnoses provided a clear demonstration of the image interpretation and the cognitive processes undertaken by each participant. A questionnaire afforded the participants an opportunity to offer information on their experience in image interpretation and their opinion on the eye tracking technology. Reporting radiographers demonstrated a 15% greater accuracy rate (p≤0.001), were more confident (p≤0.001) and took a mean of 2.4s longer to clinically decide on all features compared to students. Reporting radiographers also had a 15% greater accuracy rate (p≤0.001), were more confident (p≤0.001) and took longer to clinically decide on an image diagnosis (p=0.02) than radiographers. Reporting radiographers had a greater mean fixation duration (p=0.01), mean fixation count (p=0.04) and mean visit count (p=0.04) within the areas of pathology compared to students. Eye tracking patterns, presented within heat maps, were a good reflection of group expertise and search strategies. Eye gaze metrics such as time to first fixate, fixation count, fixation duration and visit count within the areas of pathology were indicative of the radiographer's competency. The accuracy and confidence of
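
    For concreteness, a small sketch of how such metrics can be computed from a fixation sequence (Python; the fixation tuples, AOI rectangle and field names are illustrative assumptions rather than the Tobii X60 output format): fixation count, total fixation duration, time to first fixation and visit count within an area of pathology treated as an area of interest (AOI).

        def aoi_metrics(fixations, aoi):
            """Compute simple AOI metrics from fixations given as (x, y, duration_ms)
            in chronological order; aoi is a rectangle (x0, y0, x1, y1)."""
            x0, y0, x1, y1 = aoi
            inside = [x0 <= x <= x1 and y0 <= y <= y1 for x, y, _ in fixations]
            fixation_count = sum(inside)
            fixation_duration = sum(d for (_, _, d), hit in zip(fixations, inside) if hit)
            # time to first fixation in the AOI: cumulative duration before the first hit
            time_to_first, elapsed = None, 0
            for (_, _, d), hit in zip(fixations, inside):
                if hit:
                    time_to_first = elapsed
                    break
                elapsed += d
            # a "visit" is an unbroken run of consecutive fixations inside the AOI
            visit_count = sum(1 for prev, cur in zip([False] + inside, inside) if cur and not prev)
            return dict(fixation_count=fixation_count, fixation_duration_ms=fixation_duration,
                        time_to_first_fixation_ms=time_to_first, visit_count=visit_count)

        fixes = [(100, 100, 200), (410, 310, 250), (430, 330, 180), (90, 500, 220), (420, 320, 300)]
        print(aoi_metrics(fixes, aoi=(400, 300, 500, 400)))   # 3 fixations, 730 ms, 2 visits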

  16. The Pattern of Sexual Interest of Female-to-Male Transsexual Persons With Gender Identity Disorder Does Not Resemble That of Biological Men: An Eye-Tracking Study

    Directory of Open Access Journals (Sweden)

    Akira Tsujimura

    2017-09-01

    Tsujimura A, Kiuchi H, Soda T, et al. The Pattern of Sexual Interest of Female-to-Male Transsexual Persons With Gender Identity Disorder Does Not Resemble That of Biological Men: An Eye-Tracking Study. Sex Med 2017;5:e169–e174.

  17. The sensitivity of characteristics of cyclone activity to identification procedures in tracking algorithms

    Directory of Open Access Journals (Sweden)

    Irina Rudeva

    2014-12-01

    Full Text Available The IMILAST project (‘Intercomparison of Mid-Latitude Storm Diagnostics’) was set up to compare low-level cyclone climatologies derived from a number of objective identification algorithms. This paper is a contribution to that effort in which we determine the sensitivity of three key aspects of Northern Hemisphere cyclone behaviour [namely the number of cyclones, their intensity (defined here in terms of the central pressure) and their deepening rates] to specific features of the automatic cyclone identification. The sensitivity is assessed with respect to three such features which may be thought to influence the resulting climatology (namely performance in areas of complicated orography, the time of detection of a cyclone, and the representation of rapidly propagating cyclones). We make use of 13 tracking methods in this analysis. We find that the filtering of cyclones in regions where the topography exceeds 1500 m can significantly change the total number of cyclones detected by a scheme, but has little impact on the cyclone intensity distribution. More dramatically, late identification of cyclones (simulated by truncating the first 12 hours of the cyclone life cycle) leads to a large reduction in cyclone numbers over both continents and oceans (up to 80 and 40%, respectively). Finally, the potential splitting of trajectories at times of fastest propagation has a negligible climatological effect on the geographical distribution of cyclone numbers. Overall, it has been found that the averaged deepening rates and averaged cyclone central pressure are rather insensitive to the specifics of the tracking procedure, being more sensitive to the data set used (as shown in previous studies) and the geographical location of a cyclone.
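
    A toy sketch (Python; the track format, time step and fake orography function are assumptions, not any IMILAST scheme) of two of the identification choices examined here: removing track points over terrain above 1500 m and truncating the first 12 hours of each cyclone's life cycle.

        def filter_tracks(tracks, orography, max_height=1500.0, truncate_hours=0, step_hours=6):
            """Drop track points lying over terrain above `max_height` metres and,
            optionally, discard the first `truncate_hours` of each cyclone's life cycle."""
            skip = truncate_hours // step_hours
            kept = []
            for track in tracks:                       # track: list of (lat, lon, central_pressure)
                points = [p for p in track[skip:] if orography(p[0], p[1]) <= max_height]
                if points:
                    kept.append(points)
            return kept

        def toy_orography(lat, lon):
            """Fake terrain: everything poleward of 60N counts as high ground."""
            return 2000.0 if lat > 60 else 200.0

        tracks = [[(45, 10, 1000), (50, 15, 996), (52, 18, 992), (55, 20, 990)],   # low terrain
                  [(62, 5, 1006), (63, 6, 1004), (64, 7, 1003)]]                   # high terrain
        print(len(filter_tracks(tracks, toy_orography)))                      # -> 1: high-terrain track removed
        print(len(filter_tracks(tracks, toy_orography, truncate_hours=12)))   # -> 1: survivor loses its first 12 h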

  18. 21 CFR 886.1510 - Eye movement monitor.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Eye movement monitor. 886.1510 Section 886.1510...) MEDICAL DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1510 Eye movement monitor. (a) Identification. An eye movement monitor is an AC-powered device with an electrode intended to measure and record...

  19. Dust Storm Feature Identification and Tracking from 4D Simulation Data

    Science.gov (United States)

    Yu, M.; Yang, C. P.

    2016-12-01

    Dust storms cause significant damage to health, property and the environment worldwide every year. To help mitigate the damage, dust forecasting models simulate and predict upcoming dust events, providing valuable information to scientists, decision makers, and the public. Normally, the model simulations are conducted in four dimensions (i.e., latitude, longitude, elevation and time) and represent three-dimensional (3D), spatially heterogeneous features of the storm and its evolution over space and time. This research investigates and proposes an automatic multi-threshold, region-growing-based identification algorithm to identify critical dust storm features and track the evolution of dust storm events through space and time. In addition, a spatiotemporal data model is proposed, which can support the characterization and representation of dust storm events and their dynamic patterns. Quantitative and qualitative evaluations of the algorithm are conducted to test its sensitivity and its capability to identify and track dust storm events. This study has the potential to support better early warning systems for decision-makers and the public, thus making hazard mitigation plans more effective.
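
    A minimal sketch of multi-threshold region growing on a synthetic 2-D concentration field (Python/NumPy; the thresholds, grid and plume are invented for illustration, and the real algorithm operates on 4-D model output): a region is grown around the field maximum and re-grown at successively lower thresholds to capture the feature's weaker outer envelope.

        from collections import deque
        import numpy as np

        def grow_region(field, seed, threshold):
            """4-connected region growing: start from a seed cell and absorb
            neighbours whose value stays at or above the threshold."""
            region, frontier = set(), deque([seed])
            while frontier:
                i, j = frontier.popleft()
                if (i, j) in region or not (0 <= i < field.shape[0] and 0 <= j < field.shape[1]):
                    continue
                if field[i, j] < threshold:
                    continue
                region.add((i, j))
                frontier.extend([(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)])
            return region

        def identify_feature(field, thresholds=(0.8, 0.5, 0.3)):
            """Grow a region around the global maximum at each threshold level."""
            seed = tuple(np.unravel_index(np.argmax(field), field.shape))
            return {t: grow_region(field, seed, t) for t in thresholds}

        rng = np.random.default_rng(0)
        dust = rng.random((50, 50)) * 0.4
        dust[20:30, 20:30] += 0.7                        # a synthetic dust plume
        regions = identify_feature(dust)
        print({t: len(r) for t, r in regions.items()})   # the region expands as the threshold is relaxed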

  20. Orienting of attention via observed eye gaze is head-centred.

    Science.gov (United States)

    Bayliss, Andrew P; di Pellegrino, Giuseppe; Tipper, Steven P

    2004-11-01

    Observing averted eye gaze results in the automatic allocation of attention to the gazed-at location. The role of the orientation of the face that produces the gaze cue was investigated. The eyes in the face could look left or right in a head-centred frame, but the face itself could be oriented 90 degrees clockwise or anticlockwise such that the eyes were gazing up or down. Significant cueing effects to targets presented to the left or right of the screen were found in these head orientation conditions. This suggests that attention was directed to the side to which the eyes would have been looking towards, had the face been presented upright. This finding provides evidence that head orientation can affect gaze following, even when the head orientation alone is not a social cue. It also shows that the mechanism responsible for the allocation of attention following a gaze cue can be influenced by intrinsic object-based (i.e. head-centred) properties of the task-irrelevant cue.