WorldWideScience

Sample records for facial expressions depicting

  1. Focal Length Affects Depicted Shape and Perception of Facial Images.

    Science.gov (United States)

    Třebický, Vít; Fialová, Jitka; Kleisner, Karel; Havlíček, Jan

    2016-01-01

    Static photographs are currently the most often employed stimuli in research on social perception. The method of photograph acquisition might affect the depicted subject's facial appearance and thus also the impression of such stimuli. An important factor influencing the resulting photograph is focal length, as different focal lengths produce various levels of image distortion. Here we tested whether different focal lengths (50, 85, 105 mm) affect the depicted shape and perception of female and male faces. We collected three portrait photographs of 45 participants (22 females, 23 males) under standardized conditions and camera settings, varying only the focal length. Subsequently, the three photographs from each individual were shown on screen in a randomized order using a 3-alternative forced-choice paradigm. The images were judged for attractiveness, dominance, and femininity/masculinity by 369 raters (193 females, 176 males). Facial width-to-height ratio (fWHR) was measured from each photograph, and overall facial shape was analysed employing geometric morphometric methods (GMM). Our results showed that photographs taken with a 50 mm focal length were rated as significantly less feminine/masculine, attractive, and dominant compared to images taken with longer focal lengths. Further, shorter focal lengths produced faces with a smaller fWHR. Subsequent GMM revealed that focal length significantly affected the overall facial shape of the photographed subjects. Thus the methodology of photograph acquisition, focal length in this case, can significantly affect the results of studies using photographic stimuli, perhaps due to different levels of perspective distortion that influence the shapes and proportions of morphological traits.
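
    As an aside on the fWHR measure used above: once the width and height segments are fixed, the ratio is a one-line computation from 2-D landmarks. A minimal Python sketch, with hypothetical landmark names and coordinates standing in for the study's measurement protocol:

    ```python
    import numpy as np

    def fwhr(landmarks: dict) -> float:
        """Facial width-to-height ratio from 2-D landmarks.
        Width = bizygomatic distance; height = brow midpoint to upper lip.
        Landmark names are placeholders for any consistent annotation scheme."""
        width = np.linalg.norm(np.subtract(landmarks["zygion_r"], landmarks["zygion_l"]))
        height = np.linalg.norm(np.subtract(landmarks["lip_top"], landmarks["brow_mid"]))
        return width / height

    # Hypothetical pixel coordinates; a shorter focal length would shift these
    # points and hence the ratio.
    pts = {"zygion_l": (40, 120), "zygion_r": (180, 120),
           "brow_mid": (110, 80), "lip_top": (110, 175)}
    print(round(fwhr(pts), 3))
    ```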

  2. Focal Length Affects Depicted Shape and Perception of Facial Images.

    Directory of Open Access Journals (Sweden)

    Vít Třebický

    Full Text Available Static photographs are currently the most often employed stimuli in research on social perception. The method of photograph acquisition might affect the depicted subject's facial appearance and thus also the impression of such stimuli. An important factor influencing the resulting photograph is focal length, as different focal lengths produce various levels of image distortion. Here we tested whether different focal lengths (50, 85, 105 mm) affect the depicted shape and perception of female and male faces. We collected three portrait photographs of 45 participants (22 females, 23 males) under standardized conditions and camera settings, varying only the focal length. Subsequently, the three photographs from each individual were shown on screen in a randomized order using a 3-alternative forced-choice paradigm. The images were judged for attractiveness, dominance, and femininity/masculinity by 369 raters (193 females, 176 males). Facial width-to-height ratio (fWHR) was measured from each photograph, and overall facial shape was analysed employing geometric morphometric methods (GMM). Our results showed that photographs taken with a 50 mm focal length were rated as significantly less feminine/masculine, attractive, and dominant compared to images taken with longer focal lengths. Further, shorter focal lengths produced faces with a smaller fWHR. Subsequent GMM revealed that focal length significantly affected the overall facial shape of the photographed subjects. Thus the methodology of photograph acquisition, focal length in this case, can significantly affect the results of studies using photographic stimuli, perhaps due to different levels of perspective distortion that influence the shapes and proportions of morphological traits.

  3. Facial Expression Analysis

    NARCIS (Netherlands)

    Pantic, Maja; Li, S.; Jain, A.

    2009-01-01

    Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial components…

  4. Facial Expression Recognition

    NARCIS (Netherlands)

    Pantic, Maja; Li, S.; Jain, A.

    2009-01-01

    Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial components…

  5. Holistic facial expression classification

    Science.gov (United States)

    Ghent, John; McDonald, J.

    2005-06-01

    This paper details a procedure for classifying facial expressions. This is a growing and relatively new type of problem within computer vision. One of the fundamental problems in previous approaches to classifying facial expressions is the lack of a consistent method of measuring expression. This paper solves this problem through the computation of the Facial Expression Shape Model (FESM). This statistical model of facial expression is based on an anatomical analysis of facial expression called the Facial Action Coding System (FACS). We use the term Action Unit (AU) to describe a movement of one or more muscles of the face; all expressions can be described using the AUs defined by FACS. The shape model is calculated by marking the face with 122 landmark points. We use Principal Component Analysis (PCA) to analyse how the landmark points move with respect to each other and to lower the dimensionality of the problem. Using the FESM in conjunction with Support Vector Machines (SVM), we classify facial expressions. SVMs are a powerful machine learning technique based on optimisation theory. This project is largely concerned with the statistical models, machine learning techniques, and psychological tools used in the classification of facial expression. This holistic approach to expression classification provides a means for a level of interaction with a computer that is a significant step forward in human-computer interaction.
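
    A minimal sketch of the pipeline this record describes: PCA over flattened landmark coordinates followed by an SVM. It uses scikit-learn, with random placeholder vectors standing in for the 122-landmark FESM data:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Placeholder data: 200 faces x 122 landmarks x (x, y) = 244-D shape vectors,
    # each labelled with one of six expression classes.
    X = rng.normal(size=(200, 244))
    y = rng.integers(0, 6, size=200)

    # PCA lowers the dimensionality of the landmark space; the SVM then
    # separates expression classes in the reduced space.
    clf = make_pipeline(PCA(n_components=20), SVC(kernel="linear"))
    clf.fit(X, y)
    print(clf.score(X, y))
    ```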

  6. Facial expression and sarcasm.

    Science.gov (United States)

    Rockwell, P

    2001-08-01

    This study examined facial expression in the presentation of sarcasm. 60 responses (sarcastic responses = 30, nonsarcastic responses = 30) from 40 different speakers were coded by two trained coders. Expressions in three facial areas--eyebrow, eyes, and mouth--were evaluated. Only movement in the mouth area significantly differentiated ratings of sarcasm from nonsarcasm.

  7. PCA facial expression recognition

    Science.gov (United States)

    El-Hori, Inas H.; El-Momen, Zahraa K.; Ganoun, Ali

    2013-12-01

    This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. A comparative study of two Facial Expression Recognition (FER) techniques, Principal Component Analysis (PCA) and PCA with Gabor filters (GF), is presented. The objective of this research is to show that PCA with Gabor filters is superior to the plain-PCA technique in terms of recognition rate. To test and evaluate their performance, experiments are performed with both techniques on a real database. The universally accepted five principal emotions to be recognized are Happy, Sad, Disgust, and Angry, along with Neutral. Recognition rates are obtained for all of these facial expressions.
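
    The comparison above amounts to two feature pipelines that differ only in whether a Gabor filter bank precedes PCA. A sketch of the Gabor+PCA variant with scikit-image and scikit-learn; the filter-bank parameters and random stand-in images are assumptions, not the paper's setup:

    ```python
    import numpy as np
    from skimage.filters import gabor
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    faces = rng.random((50, 32, 32))  # placeholder grayscale face crops

    def gabor_features(img):
        # Magnitude responses of a small bank (2 frequencies x 4 orientations),
        # mean-pooled to keep the feature vector short.
        feats = []
        for freq in (0.2, 0.4):
            for theta in np.arange(4) * np.pi / 4:
                real, imag = gabor(img, frequency=freq, theta=theta)
                mag = np.hypot(real, imag)
                feats.append(mag.reshape(8, 4, 8, 4).mean(axis=(1, 3)).ravel())
        return np.concatenate(feats)

    X = np.stack([gabor_features(f) for f in faces])
    X_pca = PCA(n_components=10).fit_transform(X)  # a plain-PCA baseline would skip the Gabor step
    print(X_pca.shape)
    ```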

  8. Assessing Pain by Facial Expression: Facial Expression as Nexus

    OpenAIRE

    Prkachin, Kenneth M.

    2009-01-01

    The experience of pain is often represented by changes in facial expression. Evidence of pain that is available from facial expression has been the subject of considerable scientific investigation. The present paper reviews the history of pain assessment via facial expression in the context of a model of pain expression as a nexus connecting internal experience with social influence. Evidence about the structure of facial expressions of pain across the lifespan is reviewed. Applications of fa...

  9. Retinotopy of facial expression adaptation.

    Science.gov (United States)

    Matsumiya, Kazumichi

    2014-01-01

    The face aftereffect (FAE; the illusion of faces after adaptation to a face) has been reported to occur without retinal overlap between adaptor and test, but recent studies revealed that the FAE is not constant across all test locations, which suggests that the FAE is also retinotopic. However, it remains unclear whether the characteristic of the retinotopy of the FAE for one facial aspect is the same as that of the FAE for another facial aspect. In the research reported here, an examination of the retinotopy of the FAE for facial expression indicated that the facial expression aftereffect occurs without retinal overlap between adaptor and test, and depends on the retinal distance between them. Furthermore, the results indicate that, although dependence of the FAE on adaptation-test distance is similar between facial expression and facial identity, the FAE for facial identity is larger than that for facial expression when a test face is presented in the opposite hemifield. On the basis of these results, I discuss adaptation mechanisms underlying facial expression processing and facial identity processing for the retinotopy of the FAE.

  10. [Prosopagnosia and facial expression recognition].

    Science.gov (United States)

    Koyama, Shinichi

    2014-04-01

    This paper reviews clinical neuropsychological studies that have indicated that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are highly intellectual processes that involve several regions of the brain.

  11. Interaction between facial expression and color

    OpenAIRE

    Kae Nakajima; Tetsuto Minami; Shigeki Nakauchi

    2017-01-01

    Facial color varies depending on emotional state, and emotions are often described in relation to facial color. In this study, we investigated whether the recognition of facial expressions was affected by facial color and vice versa. In the facial expression task, expression morph continua were employed: fear-anger and sadness-happiness. The morphed faces were presented in three different facial colors (bluish, neutral, and reddish color). Participants identified a facial expression between t...

  12. Spontaneous Facial Mimicry in Response to Dynamic Facial Expressions

    Science.gov (United States)

    Sato, Wataru; Yoshikawa, Sakiko

    2007-01-01

    Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing…

  13. Processing faces and facial expressions.

    Science.gov (United States)

    Posamentier, Mette T; Abdi, Hervé

    2003-09-01

    This paper reviews processing of facial identity and expressions. The issue of independence of these two systems for these tasks has been addressed from different approaches over the past 25 years. More recently, neuroimaging techniques have provided researchers with new tools to investigate how facial information is processed in the brain. First, findings from "traditional" approaches to identity and expression processing are summarized. The review then covers findings from neuroimaging studies on face perception, recognition, and encoding. Processing of the basic facial expressions is detailed in light of behavioral and neuroimaging data. Whereas data from experimental and neuropsychological studies support the existence of two systems, the neuroimaging literature yields a less clear picture because it shows considerable overlap in activation patterns in response to the different face-processing tasks. Further, activation patterns in response to facial expressions support the notion of distinct neural substrates for processing different facial expressions.

  14. iFace: Facial Expression Training System

    OpenAIRE

    Ito, Kyoko; Kurose, Hiroyuki; Takami, Ai; Nishida, Shogo

    2008-01-01

    In this study, a target facial expression selection interface for a facial expression training system and a facial expression training system were both proposed and developed. Twelve female dentists used the facial expression training system, and evaluations and opinions about the facial expression training system were obtained from these participants. In the future, we will attempt to improve both the target facial expression selection interface and the comparison of a current and a target f...

  15. Facial Asymmetry and Emotional Expression

    CERN Document Server

    Pickin, Andrew

    2011-01-01

    This report is about facial asymmetry, its connection to emotional expression, and methods of measuring facial asymmetry in videos of faces. The research was motivated by two factors: firstly, there was a real opportunity to develop a novel measure of asymmetry that required minimal human involvement and that improved on earlier measures in the literature; and secondly, the study of the relationship between facial asymmetry and emotional expression is both interesting in its own right, and important because it can inform neuropsychological theory and answer open questions concerning emotional processing in the brain. The two aims of the research were: first, to develop an automatic frame-by-frame measure of facial asymmetry in videos of faces that improved on previous measures; and second, to use the measure to analyse the relationship between facial asymmetry and emotional expression, and connect our findings with previous research of the relationship.

  16. Spontaneous Emotional Facial Expression Detection

    Directory of Open Access Journals (Sweden)

    Zhihong Zeng

    2006-08-01

    Full Text Available Change in a speaker’s emotion is a fundamental component in human communication. Automatic recognition of spontaneous emotion would significantly impact human-computer interaction and emotion-related studies in education, psychology and psychiatry. In this paper, we explore methods for detecting emotional facial expressions occurring in a realistic human conversation setting, the Adult Attachment Interview (AAI). Because non-emotional facial expressions have no distinct description and are expensive to model, we treat emotional facial expression detection as a one-class classification problem, which is to describe target objects (i.e., emotional facial expressions) and distinguish them from outliers (i.e., non-emotional ones). Our preliminary experiments on AAI data suggest that one-class classification methods can reach a good balance between cost (labeling and computing) and recognition performance by avoiding non-emotional expression labeling and modeling.
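
    A minimal sketch of the one-class formulation, using scikit-learn's OneClassSVM: the detector is fit only on (placeholder) emotional-expression features and flags everything else as an outlier. This illustrates the general technique, not the authors' implementation:

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    # Features for frames labelled as emotional expressions only; non-emotional
    # frames are never modelled, which is the point of one-class classification.
    emotional = rng.normal(loc=1.0, size=(300, 20))
    new_frames = np.vstack([rng.normal(loc=1.0, size=(10, 20)),    # emotional-like
                            rng.normal(loc=-2.0, size=(10, 20))])  # outlier-like

    detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(emotional)
    print(detector.predict(new_frames))  # +1 = target (emotional), -1 = outlier
    ```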

  17. Cortical control of facial expression.

    Science.gov (United States)

    Müri, René M

    2016-06-01

    The present Review deals with the motor control of facial expressions in humans. Facial expressions are a central part of human communication. Emotional face expressions have a crucial role in human nonverbal behavior, allowing a rapid transfer of information between individuals. Facial expressions can be either voluntarily or emotionally controlled. Recent studies in nonhuman primates and humans have revealed that the motor control of facial expressions has a distributed neural representation. At least five cortical regions on the medial and lateral aspects of each hemisphere are involved: the primary motor cortex, the ventral lateral premotor cortex, the supplementary motor area on the medial wall, and the rostral and caudal cingulate cortex. The results of studies in humans and nonhuman primates suggest that the innervation of the face is bilaterally controlled for the upper part and mainly contralaterally controlled for the lower part. Furthermore, the primary motor cortex, the ventral lateral premotor cortex, and the supplementary motor area are essential for the voluntary control of facial expressions. In contrast, the cingulate cortical areas are important for emotional expression, because they receive input from different structures of the limbic system.

  18. Compound facial expressions of emotion.

    Science.gov (United States)

    Du, Shichuan; Tao, Yong; Martinez, Aleix M

    2014-04-15

    Understanding the different categories of facial expressions of emotion regularly used by us is essential to gain insights into human cognition and affect as well as for the design of computational models and perceptual interfaces. Past research on facial expressions of emotion has focused on the study of six basic categories--happiness, surprise, anger, sadness, fear, and disgust. However, many more facial expressions of emotion exist and are used regularly by humans. This paper describes an important group of expressions, which we call compound emotion categories. Compound emotions are those that can be constructed by combining basic component categories to create new ones. For instance, happily surprised and angrily surprised are two distinct compound emotion categories. The present work defines 21 distinct emotion categories. Sample images of their facial expressions were collected from 230 human subjects. A Facial Action Coding System analysis shows the production of these 21 categories is different but consistent with the subordinate categories they represent (e.g., a happily surprised expression combines muscle movements observed in happiness and surprise). We show that these differences are sufficient to distinguish between the 21 defined categories. We then use a computational model of face perception to demonstrate that most of these categories are also visually discriminable from one another.
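
    The construction of compound categories can be written compactly as set unions of the component categories' action units. The AU sets below are illustrative placeholders, not the exact codings reported in the paper:

    ```python
    # Illustrative basic-category AU sets (FACS action unit numbers).
    BASIC_AUS = {
        "happiness": {6, 12},        # cheek raiser, lip-corner puller
        "surprise":  {1, 2, 5, 26},  # brow raisers, upper-lid raiser, jaw drop
        "anger":     {4, 7, 24},     # brow lowerer, lid tightener, lip presser
    }

    def compound(a: str, b: str) -> set:
        """A compound category combines the muscle actions of its components."""
        return BASIC_AUS[a] | BASIC_AUS[b]

    print(sorted(compound("happiness", "surprise")))  # 'happily surprised'
    print(sorted(compound("anger", "surprise")))      # 'angrily surprised'
    ```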

  19. Darwin, deception, and facial expression.

    Science.gov (United States)

    Ekman, Paul

    2003-12-01

    Darwin did not focus on deception. Only a few sentences in his book mentioned the issue. One of them raised the very interesting question of whether it is difficult to voluntarily inhibit the emotional expressions that are most difficult to voluntarily fabricate. Another suggestion was that it would be possible to unmask a fabricated expression by the absence of the difficult-to-voluntarily-generate facial actions. Still another was that during emotion body movements could be more easily suppressed than facial expression. Research relevant to each of Darwin's suggestions is reviewed, as is other research on deception that Darwin did not foresee.

  20. Simultaneous facial feature tracking and facial expression recognition.

    Science.gov (United States)

    Li, Yongqiang; Wang, Shangfei; Zhao, Yongping; Ji, Qiang

    2013-07-01

    The tracking and recognition of facial activities from images or videos have attracted great attention in the computer vision field. Facial activities are characterized at three levels. First, at the bottom level, facial feature points around each facial component (e.g., eyebrow, mouth) capture the detailed face shape information. Second, at the middle level, facial action units, defined in the Facial Action Coding System, represent the contraction of a specific set of facial muscles (e.g., lid tightener, eyebrow raiser). Finally, at the top level, six prototypical facial expressions represent the global facial muscle movement and are commonly used to describe human emotion states. In contrast to the mainstream approaches, which usually focus on only one or two levels of facial activities and track (or recognize) them separately, this paper introduces a unified probabilistic framework based on a dynamic Bayesian network to simultaneously and coherently represent the facial evolvement at different levels, their interactions, and their observations. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, all three levels of facial activities are simultaneously recognized through probabilistic inference. Extensive experiments are performed to illustrate the feasibility and effectiveness of the proposed model on all three levels of facial activities.

  1. Facial Expression Synthesis Based on Imitation

    OpenAIRE

    Yihjia Tsai; Hwei Jen Lin; Fu Wen Yang

    2012-01-01

    It is an interesting and challenging problem to synthesise vivid facial expression images. In this paper, we propose a facial expression synthesis system which imitates a reference facial expression image according to the difference between the shape feature vectors of the neutral image and the expression image. To improve the result, two stages of post-processing are involved. We focus on the facial expressions of happiness, sadness, and surprise. Experiments show vivid and flexible synthesis results.
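
    The imitation step, adding the reference's neutral-to-expression shape displacement onto the target's neutral shape, can be sketched in a few lines; the vector size and gain parameter are hypothetical:

    ```python
    import numpy as np

    def imitate(target_neutral, ref_neutral, ref_expression, gain=1.0):
        """Transfer an expression by adding the reference's shape displacement
        (expression minus neutral) to the target's neutral shape vector."""
        return target_neutral + gain * (ref_expression - ref_neutral)

    rng = np.random.default_rng(0)
    tgt = rng.random(136)                            # e.g. 68 landmarks x (x, y)
    ref_n, ref_e = rng.random(136), rng.random(136)  # reference neutral/expression
    print(imitate(tgt, ref_n, ref_e, gain=0.8).shape)
    ```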

  2. Exploiting facial expressions for affective video summarisation

    NARCIS (Netherlands)

    Joho, H.; Jose, J.M.; Valenti, R.; Sebe, N.; Marchand-Maillet, S.; Kompatsiaris, I.

    2009-01-01

    This paper presents an approach to affective video summarisation based on the facial expressions (FX) of viewers. A facial expression recognition system was deployed to capture a viewer's face and his/her expressions. The user's facial expressions were analysed to infer personalised affective scenes…

  3. Facial Expression Recognition Using SVM Classifier

    OpenAIRE

    2015-01-01

    Facial feature tracking and facial actions recognition from image sequence attracted great attention in computer vision field. Computational facial expression analysis is a challenging research topic in computer vision. It is required by many applications such as human-computer interaction, computer graphic animation and automatic facial expression recognition. In recent years, plenty of computer vision techniques have been developed to track or recognize the facial activities in three levels...

  4. Mapping and Manipulating Facial Expression

    Science.gov (United States)

    Theobald, Barry-John; Matthews, Iain; Mangini, Michael; Spies, Jeffrey R.; Brick, Timothy R.; Cohn, Jeffrey F.; Boker, Steven M.

    2009-01-01

    Nonverbal visual cues accompany speech to supplement the meaning of spoken words, signify emotional state, indicate position in discourse, and provide back-channel feedback. This visual information includes head movements, facial expressions and body gestures. In this article we describe techniques for manipulating both verbal and nonverbal facial…

  5. Automatic Facial Expression Analysis A Survey

    Directory of Open Access Journals (Sweden)

    C.P. Sumathi

    2013-01-01

    Full Text Available Automatic facial expression recognition has been one of the most active research topics since the 1990s. There have been recent advances in face detection, facial expression recognition, and classification, and multiple methods have been devised for facial feature extraction to help identify faces and facial expressions. This paper surveys some of the work published from 2003 to date. Various methods of identifying facial expressions are analysed. The paper also discusses facial parameterization using Facial Action Coding System (FACS) action units, and the methods that recognize action-unit parameters from extracted facial expression data. The many kinds of facial expressions present in the human face can be identified based on their geometric features, appearance features, and hybrid features. The two basic approaches to feature extraction are based on facial deformation and facial motion. The article also categorizes techniques by the characteristics of the expressions and identifies the suitable methods for implementation.

  6. Misrecognition of facial expressions in delinquents

    Directory of Open Access Journals (Sweden)

    Matsuura Naomi

    2009-09-01

    Full Text Available Background: Previous reports have suggested impairment in facial expression recognition in delinquents, but controversy remains with respect to how such recognition is impaired. To address this issue, we investigated facial expression recognition in delinquents in detail. Methods: We tested 24 male adolescent/young adult delinquents incarcerated in correctional facilities. We compared their performances with those of 24 age- and gender-matched control participants. Using standard photographs of facial expressions illustrating six basic emotions, participants matched each emotional facial expression with an appropriate verbal label. Results: Delinquents were less accurate in the recognition of facial expressions that conveyed disgust than were control participants. The delinquents misrecognized the facial expressions of disgust as anger more frequently than did controls. Conclusion: These results suggest that one of the underpinnings of delinquency might be impaired recognition of emotional facial expressions, with a specific bias toward interpreting disgusted expressions as hostile angry expressions.

  7. Realistic facial animation generation based on facial expression mapping

    Science.gov (United States)

    Yu, Hui; Garrod, Oliver; Jack, Rachael; Schyns, Philippe

    2014-01-01

    Facial expressions reflect a character's internal emotional states or responses to social communication. Though much effort has been devoted to generating realistic facial expressions, it remains a challenging topic due to human beings' sensitivity to subtle facial movements. In this paper, we present a method for facial animation generation which reflects true facial muscle movements with high fidelity. An intermediate model space is introduced to transfer captured static AU peak frames, based on FACS, to the conformed target face. Dynamic parameters derived using a psychophysics method are then integrated to generate facial animation, which is assumed to represent the natural correlation of multiple AUs. Finally, the animation sequence in the intermediate model space is mapped to the target face to produce the final animation.

  8. Neuroticism delays detection of facial expressions

    OpenAIRE

    Sawada, Reiko; Sato, Wataru; Uono, Shota; Kochiyama, Takanori; Kubota, Yasutaka; Yoshimura, Sayaka; Toichi, Motomi

    2016-01-01

    The rapid detection of emotional signals from facial expressions is fundamental for human social interaction. The personality factor of neuroticism modulates the processing of various types of emotional facial expressions; however, its effect on the detection of emotional facial expressions remains unclear. In this study, participants with high- and low-neuroticism scores performed a visual search task to detect normal expressions of anger and happiness, and their anti-expressions within a cr...

  9. Neuroticism Delays Detection of Facial Expressions.

    Science.gov (United States)

    Sawada, Reiko; Sato, Wataru; Uono, Shota; Kochiyama, Takanori; Kubota, Yasutaka; Yoshimura, Sayaka; Toichi, Motomi

    2016-01-01

    The rapid detection of emotional signals from facial expressions is fundamental for human social interaction. The personality factor of neuroticism modulates the processing of various types of emotional facial expressions; however, its effect on the detection of emotional facial expressions remains unclear. In this study, participants with high- and low-neuroticism scores performed a visual search task to detect normal expressions of anger and happiness, and their anti-expressions within a crowd of neutral expressions. Anti-expressions contained an amount of visual changes equivalent to those found in normal expressions compared to neutral expressions, but they were usually recognized as neutral expressions. Subjective emotional ratings in response to each facial expression stimulus were also obtained. Participants with high-neuroticism showed an overall delay in the detection of target facial expressions compared to participants with low-neuroticism. Additionally, the high-neuroticism group showed higher levels of arousal to facial expressions compared to the low-neuroticism group. These data suggest that neuroticism modulates the detection of emotional facial expressions in healthy participants; high levels of neuroticism delay overall detection of facial expressions and enhance emotional arousal in response to facial expressions.

  10. Man-machine collaboration using facial expressions

    Science.gov (United States)

    Dai, Ying; Katahera, S.; Cai, D.

    2002-09-01

    For flexible man-machine collaboration, understanding of facial expressions and gestures is indispensable. We proposed a hierarchical recognition approach for the understanding of human emotions. According to this method, facial AFs (action features) were first extracted and recognized using histograms of optical flow. Then, based on the facial AFs, facial expressions were classified into two classes, one representing positive emotions and the other negative ones. Expressions belonging to either the positive or the negative class were then further classified into the more complex emotions revealed by the corresponding facial expressions. Finally, a system architecture for coordinating the recognition of facial action features and facial expressions in man-machine collaboration was proposed.
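
    The optical-flow histogram step can be sketched with OpenCV's dense Farnebäck flow; the synthetic frames and histogram binning below are placeholders, not the authors' action-feature definition:

    ```python
    import cv2
    import numpy as np

    def flow_orientation_histogram(prev_gray, next_gray, bins=8):
        """Histogram of optical-flow directions weighted by magnitude,
        a rough stand-in for the action-feature histograms described above."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
        return hist / (hist.sum() + 1e-9)

    rng = np.random.default_rng(0)
    f0 = (rng.random((64, 64)) * 255).astype(np.uint8)
    f1 = np.roll(f0, 2, axis=1)  # simulate a small rightward facial motion
    print(flow_orientation_histogram(f0, f1).round(3))
    ```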

  11. Recognizing Action Units for Facial Expression Analysis.

    Science.gov (United States)

    Tian, Ying-Li; Kanade, Takeo; Cohn, Jeffrey F

    2001-02-01

    Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams.

  12. Facial expression recognition using thermal image.

    Science.gov (United States)

    Jiang, Guotai; Song, Xuemin; Zheng, Fuhui; Wang, Peipei; Omer, Ashgan

    2005-01-01

    This paper studies facial expression recognition using mathematical morphology, by extracting and analyzing the overall geometric characteristics, and those of regions of interest, in Infrared Thermal Imaging (IRTI). The results show that the geometric characteristics of the regions of interest differ markedly between expressions, and that facial temperature changes almost simultaneously with expression. These studies demonstrate the feasibility of facial expression recognition on the basis of IRTI. The method can monitor facial expressions in real time, which is useful for auxiliary diagnosis and medical monitoring of disease.

  13. Social Use of Facial Expressions in Hylobatids.

    Directory of Open Access Journals (Sweden)

    Linda Scheider

    Full Text Available Non-human primates use various communicative means in interactions with others. While primate gestures are commonly considered to be intentionally and flexibly used signals, facial expressions are often referred to as inflexible, automatic expressions of affective internal states. To explore whether and how non-human primates use facial expressions in specific communicative interactions, we studied five species of small apes (gibbons) by employing a newly established Facial Action Coding System for hylobatid species (GibbonFACS). We found that, despite individuals often being in close proximity to each other, in social (as opposed to non-social) contexts the duration of facial expressions was significantly longer when gibbons were facing another individual compared to non-facing situations. Social contexts included grooming, agonistic interactions and play, whereas non-social contexts included resting and self-grooming. Additionally, gibbons used facial expressions while facing another individual more often in social contexts than in non-social contexts, where facial expressions were produced regardless of the attentional state of the partner. Also, facial expressions were more likely to be 'responded to' by the partner's facial expressions when individuals were facing each other than when they were not. Taken together, our results indicate that gibbons use their facial expressions differentially depending on the social context and are able to use them in a directed way in communicative interactions with other conspecifics.

  14. Mutual information-based facial expression recognition

    Science.gov (United States)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative-region representation for the expression analysis task. The proposed approach relies on studies in psychology which show that most of the descriptive regions responsible for facial expression are located around certain face parts. The contribution of this work lies in a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and was performed using the Mutual Information (MI) technique. For facial feature extraction, we applied Local Binary Patterns (LBP) to the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face region, whilst reducing the feature vector dimension.
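
    A rough sketch of the region-selection idea: summarize LBP-on-gradient features per face region, then rank regions by mutual information with the expression labels. Region size, LBP parameters, and the per-region summary (a mean instead of a histogram) are assumptions, and the data are random stand-ins:

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(0)
    faces = rng.random((60, 64, 64))      # placeholder face crops
    labels = rng.integers(0, 6, size=60)  # placeholder expression labels

    def region_lbp_mean(img, r0, r1, c0, c1):
        # LBP computed on the gradient-magnitude image, summarized per region.
        gy, gx = np.gradient(img)
        g = np.hypot(gx, gy)
        g8 = (255 * g / (g.max() + 1e-9)).astype(np.uint8)
        lbp = local_binary_pattern(g8, P=8, R=1, method="uniform")
        return lbp[r0:r1, c0:c1].mean()

    # One scalar per 16x16 region; MI against the labels ranks the regions.
    regions = [(r, r + 16, c, c + 16) for r in range(0, 64, 16) for c in range(0, 64, 16)]
    X = np.array([[region_lbp_mean(f, *reg) for reg in regions] for f in faces])
    mi = mutual_info_classif(X, labels, random_state=0)
    print(np.argsort(mi)[::-1][:4])  # indices of the most informative regions
    ```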

  15. Robust facial expression recognition via compressive sensing.

    Science.gov (United States)

    Zhang, Shiqing; Zhao, Xiaoming; Lei, Bicheng

    2012-01-01

    Recently, compressive sensing (CS) has attracted increasing attention in the areas of signal processing, computer vision and pattern recognition. In this paper, a new method based on the CS theory is presented for robust facial expression recognition. The CS theory is used to construct a sparse representation classifier (SRC). The effectiveness and robustness of the SRC method is investigated on clean and occluded facial expression images. Three typical facial features, i.e., the raw pixels, Gabor wavelets representation and local binary patterns (LBP), are extracted to evaluate the performance of the SRC method. Compared with the nearest neighbor (NN), linear support vector machines (SVM) and the nearest subspace (NS), experimental results on the popular Cohn-Kanade facial expression database demonstrate that the SRC method obtains better performance and stronger robustness to corruption and occlusion on robust facial expression recognition tasks.
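
    The sparse representation classifier (SRC) can be sketched by l1-coding a test sample over a dictionary of training samples and picking the class with the smallest reconstruction residual. Here an off-the-shelf Lasso solver stands in for the paper's sparse recovery step, and the data are placeholders:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n_class, per_class, dim = 3, 20, 50
    # Dictionary: columns are training samples grouped by expression class.
    A = np.hstack([rng.normal(loc=k, size=(dim, per_class)) for k in range(n_class)])
    A /= np.linalg.norm(A, axis=0)
    labels = np.repeat(np.arange(n_class), per_class)

    def src_predict(y):
        """Code y sparsely over the dictionary (the l1/CS ingredient), then
        choose the class whose atoms reconstruct y with minimal residual."""
        x = Lasso(alpha=0.01, fit_intercept=False, max_iter=5000).fit(A, y).coef_
        residuals = [np.linalg.norm(y - A[:, labels == k] @ x[labels == k])
                     for k in range(n_class)]
        return int(np.argmin(residuals))

    test = rng.normal(loc=1.0, size=dim)  # drawn near class 1
    print(src_predict(test / np.linalg.norm(test)))
    ```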

  16. Facial expressions recognition with an emotion expressive robotic head

    Science.gov (United States)

    Doroftei, I.; Adascalitei, F.; Lefeber, D.; Vanderborght, B.; Doroftei, I. A.

    2016-08-01

    The purpose of this study is to present the preliminary steps in facial expressions recognition with a new version of an expressive social robotic head. So, in a first phase, our main goal was to reach a minimum level of emotional expressiveness in order to obtain nonverbal communication between the robot and human by building six basic facial expressions. To evaluate the facial expressions, the robot was used in some preliminary user studies, among children and adults.

  17. Memory for facial expression is influenced by the background music playing during study

    OpenAIRE

    Woloszyn, Michael R.; Ewert, Laura

    2012-01-01

    The effect of the emotional quality of study-phase background music on subsequent recall for happy and sad facial expressions was investigated. Undergraduates (N = 48) viewed a series of line drawings depicting a happy or sad child in a variety of environments that were each accompanied by happy or sad music. Although memory for faces was very accurate, emotionally incongruent background music biased subsequent memory for facial expressions, increasing the likelihood that happy faces were rec...

  18. Expression and communication of doubt/uncertainty through facial expression

    Directory of Open Access Journals (Sweden)

    Pio E. Ricci Bitti

    2014-04-01

    Full Text Available There is a wide debate on the mental state of doubt/uncertainty; one wonders whether it is a predominantly cognitive or emotional state of mind and whether typical facial expressions communicate doubt/uncertainty. To this purpose, through a role-playing procedure, a large sample of expressions was collected and afterwards evaluated through a combination of encoding and decoding procedures, including FACS (Facial Action Coding System) analysis. The results have partially confirmed our hypothesis, identifying two typical facial expressions of doubt/uncertainty, which share the same facial actions in the inferior part of the face and show differential facial actions in the upper face.

  19. Facial Expression at Retrieval Affects Recognition of Facial Identity

    Directory of Open Access Journals (Sweden)

    Wenfeng Chen

    2015-06-01

    Full Text Available It is well known that memory can be modulated by emotional stimuli at the time of encoding and consolidation. For example, happy faces create better identity recognition than faces with certain other expressions. However, the influence of facial expression at the time of retrieval remains unknown in the literature. To separate the potential influence of expression at retrieval from its effects at earlier stages, we had participants learn neutral faces but manipulated facial expression at the time of memory retrieval in a standard old/new recognition task. The results showed a clear effect of facial expression, where happy test faces were identified more successfully than angry test faces. This effect is unlikely due to greater image similarity between the neutral learning face and the happy test face, because image analysis showed that the happy test faces are in fact less similar to the neutral learning faces relative to the angry test faces. In the second experiment, we investigated whether this emotional effect is influenced by the expression at the time of learning. We employed angry or happy faces as learning stimuli, and angry, happy, and neutral faces as test stimuli. The results showed that the emotional effect at retrieval is robust across different encoding conditions with happy or angry expressions. These findings indicate that emotional expressions affect the retrieval process in identity recognition, and identity recognition does not rely on emotional association between learning and test faces.

  20. Facial Expression Spacial Charts for Describing Dynamic Diversity of Facial Expressions

    Directory of Open Access Journals (Sweden)

    H. Madokoro

    2012-08-01

    Full Text Available This paper presents a new framework to describe individual facial expression spaces, particularly addressing the dynamic diversity of facial expressions that appear as an exclamation or emotion, to create a unique space for each person. We name this framework Facial Expression Spatial Charts (FESCs). The FESCs are created using Self-Organizing Maps (SOMs) and Fuzzy Adaptive Resonance Theory (Fuzzy ART), two types of unsupervised neural networks. For facial images with emphasized sparse representations using Gabor wavelet filters, SOMs extract topological information in facial expression images and classify them as categories in the fixed space that is determined by the number of units on the mapping layer. Subsequently, Fuzzy ART integrates the categories classified by SOMs, using adaptive learning functions under fixed granularity controlled by the vigilance parameter. The categories integrated by Fuzzy ART are matched to Expression Levels (ELs) for quantifying facial expression intensity based on the arrangement of facial expressions on Russell's circumplex model. We designate the category that contains the neutral facial expression as the basis category. FESCs can thus visualize and represent the dynamic diversity of facial expressions consisting of ELs extracted from facial expressions. In the experiment, we created an original facial expression dataset consisting of three facial expressions (happiness, anger, and sadness) obtained from 10 subjects over 7-20 weeks at one-week intervals. Results show that the method can adequately display the dynamic diversity of facial expressions between subjects, in addition to temporal changes in each subject. Moreover, we used stress measurement sheets to obtain temporal changes of stress for analyzing the psychological effects of the stress that subjects feel. We estimated stress levels of four grades using Support Vector Machines (SVMs). The mean estimation rates for all 10 subjects and for 5 subjects over more than…

  1. Wavelet based approach for facial expression recognition

    Directory of Open Access Journals (Sweden)

    Zaenal Abidin

    2015-03-01

    Full Text Available Facial expression recognition is one of the most active fields of research. Many facial expression recognition methods have been developed and implemented. Neural networks (NNs) have the capability to undertake such pattern recognition tasks. The key factor in the use of NNs is their characteristics: they are capable of learning and generalizing, non-linear mapping, and parallel computation. Backpropagation neural networks (BPNNs) are the most commonly used approach. In this study, BPNNs were used as a classifier to categorize facial expression images into seven classes of expressions: anger, disgust, fear, happiness, sadness, neutral and surprise. For the feature extraction task, three discrete wavelet transforms were used to decompose the images, namely the Haar wavelet, Daubechies (4) wavelet and Coiflet (1) wavelet. To analyze the proposed method, a facial expression recognition system was built. The proposed method was tested on static images from the JAFFE database.
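
    A minimal sketch of the wavelet-plus-backpropagation pipeline, pairing PyWavelets decomposition with scikit-learn's MLP (a backpropagation network). Image size, decomposition level, and network shape are assumptions, and random data stand in for the JAFFE images:

    ```python
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    faces = rng.random((70, 32, 32))      # placeholder face images
    labels = rng.integers(0, 7, size=70)  # seven expression classes

    def wavelet_features(img, wavelet="haar", level=2):
        # Keep the low-frequency approximation coefficients as features;
        # "haar", "db4" and "coif1" match the three wavelets compared above.
        approx = pywt.wavedec2(img, wavelet, level=level)[0]
        return approx.ravel()

    X = np.stack([wavelet_features(f) for f in faces])
    # Backpropagation network (MLP) as the seven-class expression classifier.
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    net.fit(X, labels)
    print(net.score(X, labels))
    ```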

  2. Changes in the Caucasian male facial profile as depicted in fashion magazines during the twentieth century.

    Science.gov (United States)

    Nguyen, D D; Turley, P K

    1998-08-01

    The purposes of this study were to (1) measure changes in the young adult Caucasian male profile through time and (2) describe the male profile depicted in current fashion magazines. Profile photographs (n = 116) of male models collected from leading fashion magazines of the last 65 years were analyzed. They were reproduced as slides, and the images were scanned and projected onto a computer monitor. Soft tissue landmarks were digitized and the profiles were corrected for size differences. Six linear, nine angular, and three proportional parameters were measured. Anteroposterior lip position, lip curl, and vermilion area showed statistically significant correlations (r ≥ .31, p < .05). The results showed that (1) the male profile depicted in fashion magazines has changed significantly with time and the changes were in the area of the lips; and (2) there was a trend of increasing lip protrusion, lip curl, and vermilion display. We conclude that, similar to the female profile, the esthetic male profile has changed with time.

  3. Facial age affects emotional expression decoding

    OpenAIRE

    2014-01-01

    Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions. Previous studies have often followed up this phenomenon by examining the effect of the observers' age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and fo...

  4. Perception of facial expression and facial identity in subjects with social developmental disorders.

    Science.gov (United States)

    Hefter, Rebecca L; Manoach, Dara S; Barton, Jason J S

    2005-11-22

    It has been hypothesized that the social dysfunction in social developmental disorders (SDDs), such as autism, Asperger disorder, and the socioemotional processing disorder, impairs the acquisition of normal face-processing skills. The authors investigated whether this purported perceptual deficit was generalized to both facial expression and facial identity or whether these different types of facial perception were dissociated in SDDs. They studied 26 adults with a variety of SDD diagnoses, assessing their ability to discriminate famous from anonymous faces, their perception of emotional expression from facial and nonfacial cues, and the relationship between these abilities. They also compared the performance of two defined subgroups of subjects with SDDs on expression analysis: one with normal and one with impaired recognition of facial identity. While perception of facial expression was related to the perception of nonfacial expression, the perception of facial identity was not related to either facial or nonfacial expression. Likewise, subjects with SDDs with impaired facial identity processing perceived facial expression as well as those with normal facial identity processing. The processing of facial identity and that of facial expression are dissociable in social developmental disorders. Deficits in perceiving facial expression may be related to emotional processing more than face processing. Dissociations between the perception of facial identity and facial emotion are consistent with current cognitive models of face processing. The results argue against hypotheses that the social dysfunction in social developmental disorder causes a generalized failure to acquire face-processing skills.

  5. The identification of unfolding facial expressions.

    Science.gov (United States)

    Fiorentini, Chiara; Schmidt, Susanna; Viviani, Paolo

    2012-01-01

    We asked whether the identification of emotional facial expressions (FEs) involves the simultaneous perception of the facial configuration or the detection of emotion-specific diagnostic cues. We recorded at high speed (500 frames s⁻¹) the unfolding of the FE in five actors, each expressing six emotions (anger, surprise, happiness, disgust, fear, sadness). Recordings were coded every 10 frames (20 ms of real time) with the Facial Action Coding System (FACS, Ekman et al 2002, Salt Lake City, UT: Research Nexus eBook) to identify the facial actions contributing to each expression, and their intensity changes over time. Recordings were shown in slow motion (1/20 of recording speed) to one hundred observers in a forced-choice identification task. Participants were asked to identify the emotion during the presentation as soon as they felt confident to do so. Responses were recorded along with the associated response times (RTs). The RT probability density functions for both correct and incorrect responses were correlated with the facial activity during the presentation. There were systematic correlations between facial activities, response probabilities, and RT peaks, and significant differences in RT distributions for correct and incorrect answers. The results show that a reliable response is possible long before the full FE configuration is reached. This suggests that identification is reached by integrating individual diagnostic facial actions over time, and does not require perceiving the full apex configuration.

  6. The MPI facial expression database--a validated database of emotional and conversational facial expressions.

    Directory of Open Access Journals (Sweden)

    Kathrin Kaulard

    Full Text Available The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing and computer vision) to investigate the processing of a wider range of natural facial expressions.

  7. Facial expression recognition in perceptual color space.

    Science.gov (United States)

    Lajevardi, Seyed Mehdi; Wu, Hong Ren

    2012-08-01

    This paper introduces a tensor perceptual color framework (TPCF) for facial expression recognition (FER), which is based on information contained in color facial images. The TPCF enables multi-linear image analysis in different color spaces and demonstrates that color components provide additional information for robust FER. Using this framework, the components (in either RGB, YCbCr, CIELab or CIELuv space) of color images are unfolded to two-dimensional (2-D) tensors based on multi-linear algebra and tensor concepts, from which the features are extracted by Log-Gabor filters. The mutual information quotient (MIQ) method is employed for feature selection. These features are classified using a multi-class linear discriminant analysis (LDA) classifier. The effect of color information on FER using low-resolution facial expression images with illumination variations is assessed for performance evaluation. Experimental results demonstrate that color information has significant potential to improve emotion recognition performance due to the complementary characteristics of image textures. Furthermore, the perceptual color spaces (CIELab and CIELuv) are better overall for FER than other color spaces, providing more efficient and robust performance on facial images with illumination variation.
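
    The color-space step can be sketched as converting faces to the perceptual CIELab space and unfolding the channels before classification. This simplified sketch omits the Log-Gabor filtering and MIQ selection described above; the face crops are random placeholders:

    ```python
    import numpy as np
    from skimage.color import rgb2lab
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    faces = rng.random((60, 16, 16, 3))   # placeholder RGB face crops in [0, 1]
    labels = rng.integers(0, 6, size=60)

    def unfolded_lab_features(img):
        # Convert to CIELab, then unfold each colour component into its own
        # slice and concatenate (the tensor-unfolding step of the framework).
        lab = rgb2lab(img)
        return np.concatenate([lab[..., k].ravel() for k in range(3)])

    X = np.stack([unfolded_lab_features(f) for f in faces])
    clf = LinearDiscriminantAnalysis().fit(X, labels)
    print(clf.score(X, labels))
    ```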

  8. FaceWarehouse: a 3D facial expression database for visual computing.

    Science.gov (United States)

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
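
    The bilinear model can be illustrated as a rank-3 core tensor contracted with identity and expression weight vectors. The core below is random; FaceWarehouse estimates it from the fitted meshes of its 150 subjects:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_verts, n_id, n_exp = 1000, 50, 20
    # Core tensor: flattened vertex coordinates x identity modes x expression modes.
    core = rng.normal(size=(3 * n_verts, n_id, n_exp))

    def synthesize(w_id, w_exp):
        """Mesh = core contracted with identity and expression weights
        (core x_2 w_id x_3 w_exp), giving flattened vertex coordinates."""
        return np.einsum("vie,i,e->v", core, w_id, w_exp)

    w_id = rng.normal(size=n_id)             # who the face is
    w_exp = np.zeros(n_exp); w_exp[0] = 1.0  # e.g. a single expression mode
    print(synthesize(w_id, w_exp).shape)     # (3000,) -> 1000 xyz vertices
    ```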

  9. Slowing down facial movements and vocal sounds enhances facial expression recognition and facial-vocal imitation in children with autism

    OpenAIRE

    Tardif, Carole; Lainé, France; Rodriguez, Mélissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on CD-Rom, under audio or silent conditions, and under dynamic visual conditions (slowly, very slowly, at normal speed) plus a st...

  10. Mothers' pupillary responses to infant facial expressions.

    Science.gov (United States)

    Yrttiaho, Santeri; Niehaus, Dana; Thomas, Eileen; Leppänen, Jukka M

    2017-02-06

    Human parental care relies heavily on the ability to monitor and respond to a child's affective states. The current study examined pupil diameter as a potential physiological index of mothers' affective response to infant facial expressions. Pupillary time-series were measured from 86 mothers of young infants in response to an array of photographic infant faces falling into four emotive categories based on valence (positive vs. negative) and arousal (mild vs. strong). Pupil dilation was highly sensitive to the valence of facial expressions, being larger for negative vs. positive facial expressions. A separate control experiment with luminance-matched non-face stimuli indicated that the valence effect was specific to facial expressions and cannot be explained by luminance confounds. Pupil response was not sensitive to the arousal level of facial expressions. The results show the feasibility of using pupil diameter as a marker of mothers' affective responses to ecologically valid infant stimuli and point to a particularly prompt maternal response to infant distress cues.
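
    Pupil time-series of this kind are commonly summarized as baseline-corrected dilation averaged over a post-stimulus window. The sketch below shows that generic computation on synthetic traces; it is not the authors' analysis pipeline:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 60  # hypothetical eye-tracker sampling rate (Hz)
    t = np.arange(-0.5, 3.0, 1 / fs)
    # Synthetic pupil traces (mm) for 20 trials: slow dilation after onset at t=0.
    trials = (4.0 + 0.1 * rng.normal(size=(20, t.size))
              + 0.3 * np.clip(t, 0, None)[None, :])

    baseline = trials[:, t < 0].mean(axis=1, keepdims=True)
    corrected = trials - baseline        # dilation relative to pre-stimulus baseline
    window = (t >= 0.5) & (t <= 3.0)     # post-stimulus analysis window
    print(corrected[:, window].mean().round(3))
    ```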

  12. Recognition of Facial Expressions in Individuals with Elevated Levels of Depressive Symptoms: An Eye-Movement Study

    Directory of Open Access Journals (Sweden)

    Lingdan Wu

    2012-01-01

Full Text Available Previous studies consistently reported abnormal recognition of facial expressions in depression. However, it is still not clear whether this abnormality is due to an enhanced or impaired ability to recognize facial expressions, and what underlying cognitive systems are involved. The present study aimed to examine how individuals with elevated levels of depressive symptoms differ from controls on facial expression recognition and to assess attention and information processing using eye tracking. Forty participants (18 with elevated depressive symptoms) were instructed to label facial expressions depicting one of seven emotions. Results showed that the high-depression group, in comparison with the low-depression group, recognized facial expressions faster and with comparable accuracy. Furthermore, the high-depression group demonstrated a greater leftwards attention bias, which has been argued to be an indicator of hyperactivation of the right hemisphere during facial expression recognition.

  13. Biased Facial Expression Interpretation in Shy Children

    Science.gov (United States)

    Kokin, Jessica; Younger, Alastair; Gosselin, Pierre; Vaillancourt, Tracy

    2016-01-01

The relationship between shyness and the interpretations of the facial expressions of others was examined in a sample of 123 children aged 12 to 14 years. Participants viewed faces displaying happiness, fear, anger, disgust, sadness, surprise, as well as a neutral expression, presented on a computer screen. The children identified each expression…

  14. A comparison of facial expression properties in five hylobatid species

    NARCIS (Netherlands)

    Scheider, Linda; Liebal, Katja; Oña, Leonardo; Burrows, Anne; Waller, Bridget

    2014-01-01

Little is known about facial communication in lesser apes (family Hylobatidae) and how their facial expressions (and their use) relate to social organization. We investigated facial expressions (defined as combinations of facial movements) in social interactions of mated pairs in five different hylobatid species.

  15. Facial age affects emotional expression decoding

    Directory of Open Access Journals (Sweden)

    Mara eFölster

    2014-02-01

Full Text Available Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions. Previous studies have often followed up this phenomenon by examining the effect of the observers' age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and folds may render facial expressions of older adults harder to decode. In this paper, we review theoretical frameworks and empirical findings on age effects on decoding emotional expressions, with an emphasis on age-of-face effects. We conclude that the age of the face plays an important role for facial expression decoding. Lower expressivity, age-related changes in the face, less elaborated emotion schemas for older faces, negative attitudes toward older adults, and different visual scan patterns and neural processing of older than younger faces may lower decoding accuracy for older faces. Furthermore, age-related stereotypes and age-related changes in the face may bias the attribution of specific emotions such as sadness to older faces.

  16. Facial age affects emotional expression decoding.

    Science.gov (United States)

    Fölster, Mara; Hess, Ursula; Werheid, Katja

    2014-01-01

    Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions. Previous studies have often followed up this phenomenon by examining the effect of the observers' age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and folds may render facial expressions of older adults harder to decode. In this paper, we review theoretical frameworks and empirical findings on age effects on decoding emotional expressions, with an emphasis on age-of-face effects. We conclude that the age of the face plays an important role for facial expression decoding. Lower expressivity, age-related changes in the face, less elaborated emotion schemas for older faces, negative attitudes toward older adults, and different visual scan patterns and neural processing of older than younger faces may lower decoding accuracy for older faces. Furthermore, age-related stereotypes and age-related changes in the face may bias the attribution of specific emotions such as sadness to older faces.

  17. Facial and vocal expressions of emotion.

    Science.gov (United States)

    Russell, James A; Bachorowski, Jo-Anne; Fernandez-Dols, Jose-Miguel

    2003-01-01

    A flurry of theoretical and empirical work concerning the production of and response to facial and vocal expressions has occurred in the past decade. That emotional expressions express emotions is a tautology but may not be a fact. Debates have centered on universality, the nature of emotion, and the link between emotions and expressions. Modern evolutionary theory is informing more models, emphasizing that expressions are directed at a receiver, that the interests of sender and receiver can conflict, that there are many determinants of sending an expression in addition to emotion, that expressions influence the receiver in a variety of ways, and that the receiver's response is more than simply decoding a message.

  18. Judgment of facial expressions and depression persistence

    NARCIS (Netherlands)

    Hale, WW

    1998-01-01

    In research it has been demonstrated that cognitive and interpersonal processes play significant roles in depression development and persistence. The judgment of emotions displayed in facial expressions by depressed patients allows for a better understanding of these processes. In this study, 48 maj

  19. A Robot with Complex Facial Expressions

    Directory of Open Access Journals (Sweden)

    J. Takeno

    2009-08-01

Full Text Available The authors believe that human consciousness basically originates from language and its association-like flow, and that feelings are generated to accompany the respective language. We incorporated artificial consciousness into a robot, achieved an associative flow of language resembling a flow of consciousness, and developed a robot called Kansei that expresses its feelings according to the associations occurring within it. To be able to fully communicate with humans, robots must be able to display complex expressions, such as a sense of being thrilled. We therefore added to the Kansei robot a device to express complex feelings through its facial expressions. The Kansei robot is an artificial skull made of aluminum, with servomotors built into it. The face is made of relatively soft polyethylene, formed to appear like a human face. Facial expressions are generated using 19 servomotors built into the skull, which pull metal wires attached to the facial “skin” to create expressions. At present the robot is capable of making six basic expressions as well as complex expressions, such as happiness and fear combined.

  20. The face of leadership: perceiving leaders from facial expression

    OpenAIRE

    Trichas, Savvas

    2011-01-01

    Facial expressions appear to have a powerful influence on the perception of leadership. The aim of the five studies presented here was to add to our knowledge about the contribution of facial expression to the perception of leadership. In particular, these five studies were used to explore which facial expressions influence perceptions of leadership and how these facial expressions influence leadership perceptions. Participants’ prototypes of leadership were examined by assessing implicit lea...

  2. Holistic gaze strategy to categorize facial expression of varying intensities.

    Directory of Open Access Journals (Sweden)

    Kun Guo

Full Text Available Using faces representing exaggerated emotional expressions, recent behavioural and eye-tracking studies have suggested a dominant role of individual facial features in transmitting diagnostic cues for decoding facial expressions. Considering that in everyday life we frequently view low-intensity expressive faces in which local facial cues are more ambiguous, we probably need to combine expressive cues from more than one facial feature to reliably decode naturalistic facial affects. In this study we applied a morphing technique to systematically vary the intensities of six basic facial expressions of emotion, and employed a self-paced expression categorization task to measure participants' categorization performance and associated gaze patterns. The analysis of pooled data from all expressions showed that increasing expression intensity improved categorization accuracy, shortened reaction time and reduced the number of fixations directed at faces. The proportion of fixations and viewing time directed at internal facial features (eyes, nose and mouth region), however, was not affected by varying levels of intensity. Further comparison between individual facial expressions revealed that although proportional gaze allocation at individual facial features was quantitatively modulated by the viewed expressions, the overall gaze distribution in face viewing was qualitatively similar across different facial expressions and different intensities. It seems that we adopt a holistic viewing strategy to extract expressive cues from all internal facial features in the processing of naturalistic facial expressions.
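
    For pre-aligned images, the intensity manipulation described above can be approximated by a pixel-wise blend between a neutral face and the apex expression. This is a minimal sketch assuming grayscale numpy arrays; a published morph would typically also warp the feature geometry, which this omits.

    ```python
    import numpy as np

    def morph(neutral, apex, intensity):
        """Blend a neutral face toward the apex expression (intensity in [0, 1])."""
        assert neutral.shape == apex.shape
        blend = (1.0 - intensity) * neutral.astype(float) + intensity * apex.astype(float)
        return blend.astype(neutral.dtype)

    # e.g. a 20%-intensity expression from two aligned grayscale images:
    # weak_happy = morph(neutral_img, happy_img, 0.2)
    ```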

  3. Stability of Facial Affective Expressions in Schizophrenia

    Directory of Open Access Journals (Sweden)

    H. Fatouros-Bergman

    2012-01-01

Full Text Available Thirty-two video-recorded interviews were conducted by two interviewers with eight patients diagnosed with schizophrenia. Each patient was interviewed four times: three weekly interviews by the first interviewer and one additional interview by the second interviewer. Sixty-four selected sequences in which the patients were speaking about psychotic experiences were scored for facial affective behaviour with the Emotion Facial Action Coding System (EMFACS). In accordance with previous research, the results show that patients diagnosed with schizophrenia express negative facial affectivity. Facial affective behaviour seems not to depend on temporality, since within-subjects ANOVA revealed no substantial changes in the amount of affect displayed across the weekly interview occasions. Whereas previous studies found contempt to be the most frequent affect in patients, in the present material disgust was as common, but depended on the interviewer. The results suggest that facial affectivity in these patients is primarily dominated by the negative emotions of disgust and, to a lesser extent, contempt, and that this appears to be a fairly stable feature.

  4. Categorical Perception of Affective and Linguistic Facial Expressions

    Science.gov (United States)

    McCullough, Stephen; Emmorey, Karen

    2009-01-01

    Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX…

  5. Violent Media Consumption and the Recognition of Dynamic Facial Expressions

    Science.gov (United States)

    Kirsh, Steven J.; Mounts, Jeffrey R. W.; Olczak, Paul V.

    2006-01-01

    This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent media consumption. Color photos of calm facial expressions morphed to either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph.…

  7. Inferring Emotion from Facial Expressions in Social Contexts : A Role of Self-Construal?

    Directory of Open Access Journals (Sweden)

    Ana Maria Draghici

    2010-01-01

Full Text Available The study attempted to replicate the findings of Masuda and colleagues (2008), testing a modified hypothesis: when judging people's emotions from facial expressions, interdependence-primed participants, in contrast to independence-primed participants, incorporate information from the social context, i.e. the facial expressions of surrounding people. This was done in order to check whether self-construal could be the main variable underlying the cultural differences in emotion perception documented by Masuda and colleagues. Participants viewed cartoon images depicting a character with a happy, sad, or neutral facial expression, surrounded by other characters showing graded congruent or incongruent facial expressions. The hypothesis was only (partially) confirmed for the emotional judgments of a neutral facial expression target. However, a closer look at the individual means indicated both assimilation and contrast effects, without a systematic manner in which the background characters' facial expressions would have been incorporated into the participants' judgments, for either of the priming groups. The results are discussed in terms of priming and priming success, and possible moderators other than self-construal for the effect found by Masuda and colleagues.

  8. Time perception and dynamics of facial expressions of emotions.

    Directory of Open Access Journals (Sweden)

    Sophie L Fayolle

Full Text Available Two experiments were run to examine the effects of dynamic displays of facial expressions of emotions on time judgments. The participants were given a temporal bisection task with emotional facial expressions presented in a dynamic or a static display. Two emotional facial expressions and a neutral expression were tested and compared. Each of the emotional expressions had the same affective valence (unpleasant), but one was high-arousing (expressing anger) and the other low-arousing (expressing sadness). Our results showed that time judgments are highly sensitive to movements in facial expressions and the emotions expressed. Indeed, longer perceived durations were found in response to the dynamic faces and the high-arousing emotional expressions compared to the static faces and low-arousing expressions. In addition, the facial movements amplified the effect of emotions on time perception. Dynamic facial expressions are thus interesting tools for examining variations in temporal judgments in different social contexts.
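
    In a temporal bisection task, the key statistic is the bisection point: the probe duration judged "long" on half the trials. This is a minimal analysis sketch with made-up response proportions; the durations, starting values, and logistic form are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Proportion of "long" responses per probe duration (synthetic data).
    durations = np.array([0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6])  # seconds
    p_long = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

    def psychometric(t, bp, slope):
        """Logistic psychometric function; bp is the bisection point."""
        return 1.0 / (1.0 + np.exp(-slope * (t - bp)))

    (bp, slope), _ = curve_fit(psychometric, durations, p_long, p0=[1.0, 5.0])
    print(f"bisection point ~ {bp:.3f} s")
    ```

    A leftward shift of the fitted bisection point for dynamic or high-arousing faces corresponds to the longer perceived durations the abstract reports.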

  9. The Relationships between Processing Facial Identity, Emotional Expression, Facial Speech, and Gaze Direction during Development

    Science.gov (United States)

    Spangler, Sibylle M.; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding…

  10. Meta-Analysis of the First Facial Expression Recognition Challenge

    NARCIS (Netherlands)

    Valstar, M.F.; Mehu, M.; Jiang, Bihan; Pantic, Maja; Scherer, K.

    2012-01-01

    Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability

  12. Automatic recognition of emotions from facial expressions

    Science.gov (United States)

    Xue, Henry; Gertner, Izidor

    2014-06-01

In the human-computer interaction (HCI) process it is desirable to have an artificial intelligence (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in entertainment industries, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing accuracy. In addition, we have extended SVM to multiple dimensions while retaining its high accuracy rate. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).
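
    The resize-and-filter feature extraction followed by SVM classification could be sketched as follows. The filter bank, image size, and pipeline below are illustrative stand-ins under stated assumptions, not the authors' exact method.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.transform import resize
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def extract_features(img, size=(48, 48), sigmas=(1.0, 2.0, 4.0)):
        """Downscale, apply a small Gaussian filter bank, and flatten."""
        small = resize(img, size, anti_aliasing=True)
        return np.concatenate([gaussian_filter(small, s).ravel() for s in sigmas])

    # Usage sketch (images and labels assumed loaded elsewhere, e.g. JAFFE):
    # X = np.stack([extract_features(im) for im in jaffe_images])
    # clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, jaffe_labels)
    ```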

  13. The relationships between processing facial identity, emotional expression, facial speech, and gaze direction during development.

    Science.gov (United States)

    Spangler, Sibylle M; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding facial speech, emotional expression, and gaze direction or, alternatively, according to facial speech, emotional expression, and gaze direction while disregarding facial identity. Reaction times showed that children and adults were able to direct their attention selectively to facial identity despite variations of other kinds of face information, but when sorting according to facial speech and emotional expression, they were unable to ignore facial identity. In contrast, gaze direction could be processed independently of facial identity in all age groups. Apart from shorter reaction times and fewer classification errors, no substantial change in processing facial information was found to be correlated with age. We conclude that adult-like face processing routes are employed from 5 years of age onward.

  14. Automatic Recognition of Facial Actions in Spontaneous Expressions

    Directory of Open Access Journals (Sweden)

    Marian Stewart Bartlett

    2006-09-01

Full Text Available Spontaneous facial expressions differ from posed expressions both in which muscles are moved and in the dynamics of the movement. Advances in the field of automatic facial expression measurement will require development and assessment on spontaneous behavior. Here we present preliminary results on a task of facial action detection in spontaneous facial expressions. We employ a user-independent, fully automatic system for real-time recognition of facial actions from the Facial Action Coding System (FACS). The system automatically detects frontal faces in the video stream and codes each frame with respect to 20 action units. The approach applies machine learning methods, such as support vector machines and AdaBoost, to texture-based image representations. The output margin of the learned classifiers predicts action unit intensity. Frame-by-frame intensity measurements will enable investigations into facial expression dynamics that were previously intractable by human coding.
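
    The idea that a classifier's output margin can serve as an action unit intensity estimate is easy to illustrate with a linear SVM. The features and labels below are synthetic stand-ins for the paper's texture representations and per-frame AU codes.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(4)
    # Stand-ins for per-frame texture features and binary AU labels.
    X = rng.normal(size=(200, 64))
    y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

    au_detector = LinearSVC(C=1.0).fit(X, y)    # one detector per action unit
    margins = au_detector.decision_function(X)  # signed distance to hyperplane
    # Larger positive margins serve as a frame-by-frame AU intensity estimate.
    print(margins[:5].round(2))
    ```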

  15. Allometry of facial mobility in anthropoid primates: implications for the evolution of facial expression.

    Science.gov (United States)

    Dobson, Seth D

    2009-01-01

    Body size may be an important factor influencing the evolution of facial expression in anthropoid primates due to allometric constraints on the perception of facial movements. Given this hypothesis, I tested the prediction that observed facial mobility is positively correlated with body size in a comparative sample of nonhuman anthropoids. Facial mobility, or the variety of facial movements a species can produce, was estimated using a novel application of the Facial Action Coding System (FACS). I used FACS to estimate facial mobility in 12 nonhuman anthropoid species, based on video recordings of facial activity in zoo animals. Body mass data were taken from the literature. I used phylogenetic generalized least squares (PGLS) to perform a multiple regression analysis with facial mobility as the dependent variable and two independent variables: log body mass and dummy-coded infraorder. Together, body mass and infraorder explain 92% of the variance in facial mobility. However, the partial effect of body mass is much stronger than for infraorder. The results of my study suggest that allometry is an important constraint on the evolution of facial mobility, which may limit the complexity of facial expression in smaller species. More work is needed to clarify the perceptual bases of this allometric pattern.
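
    The regression design above (facial mobility on log body mass plus dummy-coded infraorder) can be sketched with ordinary least squares on synthetic data. A true PGLS fit would additionally weight observations by the phylogenetic covariance among species, which this sketch omits; all numbers are made up.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    # Synthetic stand-ins: facial mobility scores, body masses (kg), infraorder.
    log_mass = np.log(np.array([0.5, 1.2, 3.5, 7.0, 12.0, 35.0, 60.0, 80.0]))
    platyrrhine = np.array([1, 1, 1, 0, 0, 0, 0, 0])     # dummy-coded infraorder
    mobility = 2.0 + 1.5 * log_mass - 0.8 * platyrrhine + rng.normal(0, 0.3, 8)

    X = sm.add_constant(np.column_stack([log_mass, platyrrhine]))
    fit = sm.OLS(mobility, X).fit()
    print(fit.params, fit.rsquared)  # partial effects of mass and infraorder
    ```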

  16. Automated Facial Action Coding System for dynamic analysis of facial expressions in neuropsychiatric disorders.

    Science.gov (United States)

    Hamm, Jihun; Kohler, Christian G; Gur, Ruben C; Verma, Ragini

    2011-09-15

Facial expression is widely used to evaluate emotional impairment in neuropsychiatric disorders. Ekman and Friesen's Facial Action Coding System (FACS) encodes movements of individual facial muscles from distinct momentary changes in facial appearance. Unlike facial expression ratings based on categorization of expressions into prototypical emotions (happiness, sadness, anger, fear, disgust, etc.), FACS can encode ambiguous and subtle expressions, and is therefore potentially more suitable for analyzing small differences in facial affect. However, FACS rating requires extensive training, and is time consuming and subjective, and thus prone to bias. To overcome these limitations, we developed an automated FACS based on advanced computer science technology. The system automatically tracks faces in a video, extracts geometric and texture features, and produces temporal profiles of each facial muscle movement. These profiles are quantified to compute frequencies of single and combined action units (AUs) in videos, and they can facilitate statistical study of large populations in disorders known to impact facial expression. We derived quantitative measures of flat and inappropriate facial affect automatically from temporal AU profiles. The applicability of the automated FACS was illustrated in a pilot study by applying it to video data from eight schizophrenia patients and controls. We created temporal AU profiles that provided rich information on the dynamics of facial muscle movements for each subject. The quantitative measures of flatness and inappropriateness showed clear differences between the patients and the controls, highlighting their potential for automatic and objective quantification of symptom severity. Copyright © 2011 Elsevier B.V. All rights reserved.
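
    Two of the derived measures can be sketched directly from temporal AU profiles: the frequency of a single AU (fraction of frames above an activation threshold) and a crude flat-affect proxy. The threshold and the flatness definition here are illustrative assumptions, not the paper's formulas.

    ```python
    import numpy as np

    def au_frequency(profile, threshold=0.5):
        """Fraction of frames in which one AU's temporal profile is active."""
        return float((np.asarray(profile) > threshold).mean())

    def flatness(profiles):
        """Crude flat-affect proxy over an (n_AUs, n_frames) array.

        Lower variability across all AU profiles yields a higher score.
        """
        return 1.0 / (1.0 + np.asarray(profiles).std())

    demo = np.vstack([np.sin(np.linspace(0, 6, 100)) ** 2,
                      np.zeros(100)])           # two synthetic AU profiles
    print(au_frequency(demo[0]), flatness(demo))
    ```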

  17. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity--Evidence from Gazing Patterns.

    Directory of Open Access Journals (Sweden)

    Sanni Somppi

Full Text Available Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore if domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly and this evaluation led to attentional bias, which was dependent on the depicted species: threatening conspecifics' faces evoked heightened attention but threatening human faces instead an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel

  18. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity--Evidence from Gazing Patterns.

    Science.gov (United States)

    Somppi, Sanni; Törnqvist, Heini; Kujala, Miiamaaria V; Hänninen, Laura; Krause, Christina M; Vainio, Outi

    2016-01-01

    Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore if domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly and this evaluation led to attentional bias, which was dependent on the depicted species: threatening conspecifics' faces evoked heightened attention but threatening human faces instead an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel perspective on

  19. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    Science.gov (United States)

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  1. Adults' responsiveness to children's facial expressions.

    Science.gov (United States)

    Aradhye, Chinmay; Vonk, Jennifer; Arida, Danielle

    2015-07-01

    We investigated the effect of young children's (hereafter children's) facial expressions on adult responsiveness. In Study 1, 131 undergraduate students from a midsized university in the midwestern United States rated children's images and videos with smiling, crying, or neutral expressions on cuteness, likelihood to adopt, and participants' experienced distress. Looking times at images and videos along with perception of cuteness, likelihood to adopt, and experienced distress using 10-point Likert scales were measured. Videos of smiling children were rated as cuter and more likely to be adopted and were viewed for longer times compared with videos of crying children, which evoked more distress. In Study 2, we recorded responses from 101 of the same participants in an online survey measuring gender role identity, empathy, and perspective taking. Higher levels of femininity (as measured by Bem's Sex Role Inventory) predicted higher "likely to adopt" ratings for crying images. These findings indicate that adult perception of children and motivation to nurture are affected by both children's facial expressions and adult characteristics and build on existing literature to demonstrate that children may use expressions to manipulate the motivations of even non-kin adults to direct attention toward and perhaps nurture young children.

  2. Intensity dependence in high-level facial expression adaptation aftereffect.

    Science.gov (United States)

    Hong, Sang Wook; Yoon, K Lira

    2017-06-14

    Perception of a facial expression can be altered or biased by a prolonged viewing of other facial expressions, known as the facial expression adaptation aftereffect (FEAA). Recent studies using antiexpressions have demonstrated a monotonic relation between the magnitude of the FEAA and adaptor extremity, suggesting that facial expressions are opponent coded and represented continuously from one expression to its antiexpression. However, it is unclear whether the opponent-coding scheme can account for the FEAA between two facial expressions. In the current study, we demonstrated that the magnitude of the FEAA between two facial expressions increased monotonically as a function of the intensity of adapting facial expressions, consistent with the predictions based on the opponent-coding model. Further, the monotonic increase in the FEAA occurred even when the intensity of an adapting face was too weak for its expression to be recognized. These results together suggest that multiple facial expressions are encoded and represented by balanced activity of neural populations tuned to different facial expressions.

  3. Mirroring Facial Expressions and Emotions in Dyadic Conversations

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2016-01-01

This paper presents an investigation of mirroring facial expressions and the emotions which they convey in dyadic naturally occurring first encounters. Mirroring facial expressions are a common phenomenon in face-to-face interactions, and they are due to the mirror neuron system which has been...... and overlapping facial expressions are very frequent. In this study, we want to determine whether the overlapping facial expressions are mirrored or are otherwise correlated in the encounters, and to what extent mirroring facial expressions convey the same emotion. The results of our study show that the majority...... of smiles and laughs, and one fifth of the occurrences of raised eyebrows are mirrored in the data. Moreover, some facial traits in co-occurring expressions co-occur more often than would be expected by chance. Finally, amusement, and to a lesser extent friendliness, are often emotions shared by both...

  4. Altering sensorimotor feedback disrupts visual discrimination of facial expressions.

    Science.gov (United States)

    Wood, Adrienne; Lupyan, Gary; Sherrin, Steven; Niedenthal, Paula

    2016-08-01

Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual (and not just conceptual) processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.

  5. Facial Expression Generation from Speaker's Emotional States in Daily Conversation

    Science.gov (United States)

    Mori, Hiroki; Ohshima, Koh

A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former are represented by vectors with psychologically defined abstract dimensions, and the latter are coded by the Facial Action Coding System. In order to obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained on the data. The effectiveness of the proposed method was verified by a subjective evaluation test. As a result, the Mean Opinion Score with respect to the suitability of the generated facial expressions was 3.86 for the speaker, which was close to that of hand-made facial expressions.
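
    The mapping from emotional-state vectors to facial action units via a trained network could be sketched as follows, with synthetic data standing in for the rated speech corpus; the dimensionality and network size are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    # Stand-ins: 2-D emotional-state vectors (e.g. valence/arousal ratings)
    # mapped to 10 AU intensities; real data would come from the rated corpus.
    emotion = rng.uniform(-1, 1, size=(300, 2))
    au_targets = np.tanh(emotion @ rng.normal(size=(2, 10)))  # synthetic AUs

    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
    net.fit(emotion, au_targets)
    predicted_aus = net.predict([[0.8, 0.3]])  # AUs for a pleasant, aroused state
    print(predicted_aus.shape)                 # (1, 10)
    ```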

  6. The dynamic rotation of Langer's lines on facial expression.

    Science.gov (United States)

    Bush, James; Ferguson, Mark W J; Mason, Tracey; McGrouther, Gus

    2007-01-01

Karl Langer investigated directional variations in the mechanical and physical properties of skin [Gibson T. Editorial. Karl Langer (1819-1887) and his lines. Br J Plast Surg 1978;31:1-2]. He produced a series of diagrams depicting lines of cleavage in the skin [Langer K. On the anatomy and physiology of the skin I. The cleavability of the cutis. Br J Plast Surg 1978;31:3-8] and showed that the orientation of these lines coincided with the dominant axis of mechanical tension in the skin [Langer K. On the anatomy and physiology of the skin II. Skin tension. Br J Plast Surg 1978;31:93-106]. Previously these lines have been considered a static feature. We set out to determine whether Langer's lines have a dynamic element and to define any rotation of the orientation of Langer's lines on the face with facial movement. One hundred and seventy-five naevi were excised from the face and neck of 72 volunteers using circular dermal punch biopsies. Prior to surgery a vertical line was marked on the skin through the centre of each naevus. After excision, distortions of the resulting wounds were observed. The orientation of the long axis of each wound, in relation to the previously marked vertical line, was measured with a goniometer with the volunteer at rest and holding their face in five standardised facial expressions: mouth open, smiling, eyes tightly shut, frowning and eyebrows raised. The aim was to measure the orientation of the long axis of the wound with the face at rest and the subsequent rotation of the wound with facial movement. After excision, elliptical distortion was seen in 171 of the 175 wounds at rest. Twenty-nine wounds maintained the same orientation of distortion in all of the facial expressions. In the remaining wounds the long axis of the wound rotated by up to 90 degrees. The amount of rotation varied between sites (p<0.0001). We conclude that Langer's lines are not a static feature but are dynamic, with rotation of up to 90 degrees. It is possible that

  7. Impaired Overt Facial Mimicry in Response to Dynamic Facial Expressions in High-Functioning Autism Spectrum Disorders

    Science.gov (United States)

    Yoshimura, Sayaka; Sato, Wataru; Uono, Shota; Toichi, Motomi

    2015-01-01

    Previous electromyographic studies have reported that individuals with autism spectrum disorders (ASD) exhibited atypical patterns of facial muscle activity in response to facial expression stimuli. However, whether such activity is expressed in visible facial mimicry remains unknown. To investigate this issue, we videotaped facial responses in…

  8. Shadows alter facial expressions of Noh masks.

    Science.gov (United States)

    Kawai, Nobuyuki; Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo

    2013-01-01

    A Noh mask, worn by expert actors during performance on the Japanese traditional Noh drama, conveys various emotional expressions despite its fixed physical properties. How does the mask change its expressions? Shadows change subtly during the actual Noh drama, which plays a key role in creating elusive artistic enchantment. We here describe evidence from two experiments regarding how attached shadows of the Noh masks influence the observers' recognition of the emotional expressions. In Experiment 1, neutral-faced Noh masks having the attached shadows of the happy/sad masks were recognized as bearing happy/sad expressions, respectively. This was true for all four types of masks each of which represented a character differing in sex and age, even though the original characteristics of the masks also greatly influenced the evaluation of emotions. Experiment 2 further revealed that frontal Noh mask images having shadows of upward/downward tilted masks were evaluated as sad/happy, respectively. This was consistent with outcomes from preceding studies using actually tilted Noh mask images. Results from the two experiments concur that purely manipulating attached shadows of the different types of Noh masks significantly alters the emotion recognition. These findings go in line with the mysterious facial expressions observed in Western paintings, such as the elusive qualities of Mona Lisa's smile. They also agree with the aesthetic principle of Japanese traditional art "yugen (profound grace and subtlety)", which highly appreciates subtle emotional expressions in the darkness.

  9. Shadows alter facial expressions of Noh masks.

    Directory of Open Access Journals (Sweden)

    Nobuyuki Kawai

Full Text Available BACKGROUND: A Noh mask, worn by expert actors during performance on the Japanese traditional Noh drama, conveys various emotional expressions despite its fixed physical properties. How does the mask change its expressions? Shadows change subtly during the actual Noh drama, which plays a key role in creating elusive artistic enchantment. We here describe evidence from two experiments regarding how attached shadows of the Noh masks influence the observers' recognition of the emotional expressions. METHODOLOGY/PRINCIPAL FINDINGS: In Experiment 1, neutral-faced Noh masks having the attached shadows of the happy/sad masks were recognized as bearing happy/sad expressions, respectively. This was true for all four types of masks each of which represented a character differing in sex and age, even though the original characteristics of the masks also greatly influenced the evaluation of emotions. Experiment 2 further revealed that frontal Noh mask images having shadows of upward/downward tilted masks were evaluated as sad/happy, respectively. This was consistent with outcomes from preceding studies using actually tilted Noh mask images. CONCLUSIONS/SIGNIFICANCE: Results from the two experiments concur that purely manipulating attached shadows of the different types of Noh masks significantly alters the emotion recognition. These findings go in line with the mysterious facial expressions observed in Western paintings, such as the elusive qualities of Mona Lisa's smile. They also agree with the aesthetic principle of Japanese traditional art "yugen (profound grace and subtlety)", which highly appreciates subtle emotional expressions in the darkness.

  10. Facial Expression Recognition in Nonvisual Imagery

    Science.gov (United States)

    Olague, Gustavo; Hammoud, Riad; Trujillo, Leonardo; Hernández, Benjamín; Romero, Eva

This chapter presents two novel approaches that allow computer vision applications to perform human facial expression recognition (FER). From a problem standpoint, we focus on FER beyond the human visual spectrum, in long-wave infrared imagery, thus allowing us to offer illumination-independent solutions to this important human-computer interaction problem. From a methodological standpoint, we introduce two different feature extraction techniques: a principal component analysis-based approach with automatic feature selection and one based on texture information selected by an evolutionary algorithm. In the former, facial features are selected based on interest point clusters, and classification is carried out using eigenfeature information; in the latter, an evolutionary-based learning algorithm searches for optimal regions of interest and texture features based on classification accuracy. Both of these approaches use a support vector machine committee for classification. Results show effective performance for both techniques, from which we can conclude that thermal imagery contains worthwhile information for the FER problem beyond the human visual spectrum.
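
    The first approach (PCA eigenfeatures feeding a support vector machine committee) could be sketched as below. The bagged SVMs are an illustrative stand-in for the chapter's committee scheme, and the data are synthetic placeholders for flattened infrared face crops.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import BaggingClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    # Synthetic stand-ins for flattened long-wave infrared face crops + labels.
    X = rng.normal(size=(120, 32 * 32))
    y = rng.integers(0, 3, size=120)        # three expression classes

    pca = PCA(n_components=20).fit(X)       # eigenfeature projection
    Z = pca.transform(X)
    committee = BaggingClassifier(SVC(kernel="linear"), n_estimators=5).fit(Z, y)
    print(committee.predict(Z[:5]))
    ```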

  11. Regression-based Multi-View Facial Expression Recognition

    NARCIS (Netherlands)

    Rudovic, Ognjen; Patras, Ioannis; Pantic, Maja

    2010-01-01

    We present a regression-based scheme for multi-view facial expression recognition based on 2-D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from non-frontal to frontal view where further recognition of the expressions can be performed using a state-of-the-
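
    The core pose-normalisation step, regressing frontal-view facial point coordinates from non-frontal ones, could be sketched with ridge regression; the landmark count and data below are illustrative, not the paper's setup.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(2)
    n_landmarks = 20
    # Synthetic stand-ins: flattened (x, y) facial points at a non-frontal
    # pose and the corresponding points at frontal view.
    nonfrontal = rng.normal(size=(500, 2 * n_landmarks))
    true_map = rng.normal(size=(2 * n_landmarks, 2 * n_landmarks))
    frontal = nonfrontal @ true_map + 0.01 * rng.normal(size=(500, 2 * n_landmarks))

    mapper = Ridge(alpha=1.0).fit(nonfrontal, frontal)  # pose normalisation
    frontalized = mapper.predict(nonfrontal[:1])        # then classify as usual
    ```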

  13. Facial identity and facial expression are initially integrated at visual perceptual stages of face processing.

    Science.gov (United States)

    Fisher, Katie; Towler, John; Eimer, Martin

    2016-01-08

    It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently.

  14. Extraction of Eyes for Facial Expression Identification of Students

    Directory of Open Access Journals (Sweden)

G. Sofia

    2010-07-01

Full Text Available Facial expressions play an essential role in communication in social interactions with other human beings, delivering rich information about their emotions. Facial expression analysis has a wide range of applications in areas such as psychology, animation, interactive games, image retrieval and image understanding. Selecting the relevant features and ignoring the unimportant ones is the key step in a facial expression recognition system. Here, we propose an efficient method for identifying the expressions of students in order to recognize their comprehension from facial expressions in static images containing the frontal view of the human face. Our goal is to categorize the facial expressions of the students in a given image into two basic emotional expression states: comprehensible and incomprehensible. One of the key action units in the face for exposing expression is the eye. In this paper, facial expressions are identified from the expressions of the eyes. Our method consists of three steps: edge detection, eye extraction and emotion recognition. Edge detection is performed with the Prewitt operator. Extraction of the eyes is performed using an iterative search algorithm on the edge image. All the extracted information is combined to form the feature vector. Finally, the features are given as input to a BPN classifier, and thus the facial expressions are identified. The proposed method is tested on the Yale Face database.
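
    The first two steps (Prewitt edge detection, then a search over the edge image for the eye regions) can be sketched as below. The dense-window scan is an illustrative stand-in for the paper's iterative search algorithm, and the window size is an assumption.

    ```python
    import numpy as np
    from scipy import ndimage

    def prewitt_edges(gray):
        """Prewitt gradient magnitude: the edge image the method starts from."""
        g = gray.astype(float)
        return np.hypot(ndimage.prewitt(g, axis=1), ndimage.prewitt(g, axis=0))

    def densest_window(edges, win=(15, 31)):
        """Scan the upper half of the face for the window richest in edges."""
        upper = edges[: edges.shape[0] // 2]
        density = ndimage.uniform_filter(upper, size=win)
        return np.unravel_index(np.argmax(density), density.shape)  # (row, col)

    eye_centre = densest_window(prewitt_edges(np.random.rand(100, 100)))
    ```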

  15. Enhanced subliminal emotional responses to dynamic facial expressions

    Directory of Open Access Journals (Sweden)

    Wataru eSato

    2014-09-01

Full Text Available Emotional processing without conscious awareness plays an important role in human social interaction. Several behavioral studies reported that subliminal presentation of photographs of emotional facial expressions induces unconscious emotional processing. However, it was difficult to elicit strong and robust effects using this method. We hypothesized that dynamic presentations of facial expressions would enhance subliminal emotional effects and tested this hypothesis with two experiments. Fearful or happy facial expressions were presented dynamically or statically in either the left or the right visual field for 20 ms (Experiment 1) or 30 ms (Experiment 2). Nonsense target ideographs were then presented, and participants reported their preference for them. The results consistently showed that dynamic presentations of emotional facial expressions induced more evident emotional biases toward subsequent targets than did static ones. These results indicate that dynamic presentations of emotional facial expressions induce more evident unconscious emotional processing.

  16. Meta-Analysis of the First Facial Expression Recognition Challenge.

    Science.gov (United States)

    Valstar, M F; Mehu, M; Bihan Jiang; Pantic, M; Scherer, K

    2012-08-01

    Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability have received some attention; for instance, there exist a number of commonly used facial expression databases. However, lack of a commonly accepted evaluation protocol and, typically, lack of sufficient details needed to reproduce the reported individual results make it difficult to compare systems. This, in turn, hinders the progress of the field. A periodical challenge in facial expression recognition would allow such a comparison on a level playing field. It would provide an insight on how far the field has come and would allow researchers to identify new goals, challenges, and targets. This paper presents a meta-analysis of the first such challenge in automatic recognition of facial expressions, held during the IEEE conference on Face and Gesture Recognition 2011. It details the challenge data, evaluation protocol, and the results attained in two subchallenges: AU detection and classification of facial expression imagery in terms of a number of discrete emotion categories. We also summarize the lessons learned and reflect on the future of the field of facial expression recognition in general and on possible future challenges in particular.

  17. Facial Expression Recognition Using Stationary Wavelet Transform Features

    Directory of Open Access Journals (Sweden)

    Huma Qayyum

    2017-01-01

Full Text Available Humans use facial expressions to convey personal feelings. Facial expressions need to be automatically recognized to design control and interactive applications. Accurate feature extraction is one of the key steps in an automatic facial expression recognition system. Current frequency-domain facial expression recognition systems have not fully utilized facial elements and muscle movements for recognition. In this paper, the stationary wavelet transform is used to extract features for facial expression recognition due to its good localization characteristics in both the spectral and spatial domains. More specifically, a combination of the horizontal and vertical subbands of the stationary wavelet transform is used, as these subbands contain muscle movement information for the majority of facial expressions. Feature dimensionality is further reduced by applying the discrete cosine transform to these subbands. The selected features are then passed into a feed-forward neural network that is trained through the backpropagation algorithm. An average recognition rate of 98.83% and 96.61% is achieved for the JAFFE and CK+ datasets, respectively. An accuracy of 94.28% is achieved for a locally recorded MS-Kinect dataset. The proposed technique appears very promising for facial expression recognition when compared to other state-of-the-art techniques.
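
    A minimal sketch of the feature recipe (horizontal and vertical SWT detail subbands compressed by a 2-D DCT) follows. The wavelet, the single decomposition level, and the number of retained coefficients are illustrative assumptions, not the paper's exact settings.

    ```python
    import numpy as np
    import pywt
    from scipy.fftpack import dct

    def swt_features(gray, wavelet="haar"):
        """Horizontal+vertical SWT subbands, compressed with a 2-D DCT.

        `gray` must have even height and width for a level-1 SWT.
        """
        coeffs = pywt.swt2(gray.astype(float), wavelet, level=1)
        _, (ch, cv, _) = coeffs[0]            # horizontal and vertical details
        feats = []
        for band in (ch, cv):
            d = dct(dct(band, axis=0, norm="ortho"), axis=1, norm="ortho")
            feats.append(d[:8, :8].ravel())   # keep low-frequency coefficients
        return np.concatenate(feats)          # input to the feed-forward network

    print(swt_features(np.random.rand(48, 48)).shape)  # (128,)
    ```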

  18. Dielectric elastomer actuators for facial expression

    Science.gov (United States)

    Wang, Yuzhe; Zhu, Jian

    2016-04-01

    Dielectric elastomer actuators have the advantage of mimicking the salient feature of life: movements in response to stimuli. In this paper we explore application of dielectric elastomer actuators to artificial muscles. These artificial muscles can mimic natural masseter to control jaw movements, which are key components in facial expressions especially during talking and singing activities. This paper investigates optimal design of the dielectric elastomer actuator. It is found that the actuator with embedded plastic fibers can avert electromechanical instability and can greatly improve its actuation. Two actuators are then installed in a robotic skull to drive jaw movements, mimicking the masseters in a human jaw. Experiments show that the maximum vertical displacement of the robotic jaw, driven by artificial muscles, is comparable to that of the natural human jaw during speech activities. Theoretical simulations are conducted to analyze the performance of the actuator, which is quantitatively consistent with the experimental observations.

  19. The Not Face: A grammaticalization of facial expressions of emotion

    Science.gov (United States)

    Benitez-Quiroz, C. Fabian; Wilbur, Ronnie B.; Martinez, Aleix M.

    2016-01-01

    Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3–8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers. PMID:26872248

  20. Shadows Alter Facial Expressions of Noh Masks

    Science.gov (United States)

    Kawai, Nobuyuki; Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo

    2013-01-01

    Background A Noh mask, worn by expert actors during performance on the Japanese traditional Noh drama, conveys various emotional expressions despite its fixed physical properties. How does the mask change its expressions? Shadows change subtly during the actual Noh drama, which plays a key role in creating elusive artistic enchantment. We here describe evidence from two experiments regarding how attached shadows of the Noh masks influence the observers’ recognition of the emotional expressions. Methodology/Principal Findings In Experiment 1, neutral-faced Noh masks having the attached shadows of the happy/sad masks were recognized as bearing happy/sad expressions, respectively. This was true for all four types of masks each of which represented a character differing in sex and age, even though the original characteristics of the masks also greatly influenced the evaluation of emotions. Experiment 2 further revealed that frontal Noh mask images having shadows of upward/downward tilted masks were evaluated as sad/happy, respectively. This was consistent with outcomes from preceding studies using actually tilted Noh mask images. Conclusions/Significance Results from the two experiments concur that purely manipulating attached shadows of the different types of Noh masks significantly alters the emotion recognition. These findings go in line with the mysterious facial expressions observed in Western paintings, such as the elusive qualities of Mona Lisa’s smile. They also agree with the aesthetic principle of Japanese traditional art “yugen (profound grace and subtlety)”, which highly appreciates subtle emotional expressions in the darkness. PMID:23940748

  1. Colour Perception on Facial Expression towards Emotion

    Directory of Open Access Journals (Sweden)

    Rubita Sudirman

    2012-12-01

    Full Text Available This study investigates human perceptions when pairing facial expressions of emotion with colours. A group of 27 subjects, mainly young Malaysians, participated in the study. For each of seven faces expressing the basic emotions of neutrality, happiness, surprise, anger, disgust, fear and sadness, a single colour was chosen from eight basic colours as the best visual match to the face. The different emotions appear to be well characterized by a single colour. The analysis draws on both psychology and colour engineering. The seven emotions were matched by the subjects according to their perceptions and feelings. Then, data from 12 male and 12 female subjects were randomly chosen from the previous data to compare colour perception between genders. The success or failure of this test depends on the subjects' ability to propose a single colour for each expression. The results are reported as counts and percentages as a guide for colour designers and for the field of psychology.

  2. Facial expression recognition based on improved deep belief networks

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to improve the robustness of facial expression recognition, a method based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. The method uses LBP to extract features and then uses the improved deep belief networks as detector and classifier on the LBP features, realizing the combination of LBP and improved DBNs for facial expression recognition. Experiments on the JAFFE (Japanese Female Facial Expression) database show a significantly improved recognition rate.
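
    Since the abstract centers on LBP features feeding a learned classifier, a minimal sketch of the LBP front end may help; it assumes scikit-image and a 7x7 cell grid (our choices, not the paper's), and leaves the improved-DBN stage out:

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram(gray_face, P=8, R=1, grid=(7, 7)):
        """Concatenated per-cell uniform-LBP histograms for a grayscale face."""
        lbp = local_binary_pattern(gray_face, P, R, method="uniform")
        n_bins = P + 2  # P+1 uniform patterns plus one non-uniform bin
        h, w = lbp.shape
        feats = []
        for i in range(grid[0]):
            for j in range(grid[1]):
                cell = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                           j * w // grid[1]:(j + 1) * w // grid[1]]
                hist, _ = np.histogram(cell, bins=n_bins,
                                       range=(0, n_bins), density=True)
                feats.append(hist)
        return np.concatenate(feats)  # feature vector for the classifier stage
    ```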

  3. Positive and negative facial emotional expressions: the effect on infants' and children's facial identity recognition

    OpenAIRE

    Brenna,

    2013-01-01

    The aim of the present study was to investigate the origin and development of the interdependence between identity recognition and facial emotional expression processing, suggested by recent models of face processing (Calder & Young, 2005) and supported by findings in adults (e.g. Baudouin, Gilibert, Sansone, & Tiberghien, 2000; Schweinberger & Soukup, 1998). In particular, the effect of facial emotional expressions on infants' and children's ability to recognize the identity of a face was explored...

  4. Reduced facial expressiveness in Parkinson's disease: A pure motor disorder?

    Science.gov (United States)

    Ricciardi, Lucia; Bologna, Matteo; Morgante, Francesca; Ricciardi, Diego; Morabito, Bruno; Volpe, Daniele; Martino, Davide; Tessitore, Alessandro; Pomponi, Massimiliano; Bentivoglio, Anna Rita; Bernabei, Roberto; Fasano, Alfonso

    2015-11-15

    Impaired emotional facial expressiveness is an important feature of Parkinson's disease (PD). Although there is evidence of a possible relationship between reduced facial expressiveness and altered emotion recognition or imagery in PD, it is unknown whether other aspects of emotional processing, such as subjective emotional experience (alexithymia), might influence hypomimia in this condition. In this study we aimed to investigate the possible relationship between reduced facial expressiveness and altered emotion processing (including facial recognition and alexithymia) in patients with PD. Forty PD patients and seventeen healthy controls were evaluated. Facial expressiveness was rated on video recordings, according to UPDRS-III item 19 and using an ad hoc scale assessing static and dynamic facial expression and posed emotions. Six blind raters evaluated the patients' videos. Facial emotion recognition was tested using the Ekman test; alexithymia was assessed using the Toronto Alexithymia Scale (TAS-20). PD patients had significantly reduced static and dynamic facial expressiveness and a deficit in posing happiness and surprise. They performed significantly worse than healthy controls in recognizing surprise (p=0.03). The Ekman total score correlated positively with global expressiveness (R^2=0.39, p=0.01) and with the expressiveness of disgust (R^2=0.32, p=0.01). The occurrence of alexithymia did not differ between PD patients and healthy controls; however, a significant negative correlation was found between the expressiveness of disgust and a TAS-20 subscore (r=-0.447, p=0.007). Reduced facial expressiveness in PD may be in part related to difficulties with emotion recognition in the context of an unimpaired subjective emotional experience. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Understanding facial expressions of pain in patients with depression

    NARCIS (Netherlands)

    Lautenbacher, Stefan; Bär, Karl-Juergen; Eisold, Patricia; Kunz, Miriam

    2016-01-01

    Although depression is associated with more clinical pain complaints, psychophysical data sometimes point to hypoalgesic alterations. Studying the more reflex-like facial expression of pain in patients with depression may offer a new perspective. Facial and psychophysical responses to non-painful an

  6. Discrimination of gender using facial image with expression change

    Science.gov (United States)

    Kuniyada, Jun; Fukuda, Takahiro; Terada, Kenji

    2005-12-01

    By carrying out marketing research, the managers of large department stores or small convenience stores obtain information such as the male-to-female ratio and age group of visitors, and use it to improve their management plans. However, this work is done manually and becomes a heavy burden for small stores. In this paper, the authors propose a method of discriminating between men and women by extracting differences in facial expression change from colour facial images. Many methods exist in the image processing field for automatic recognition of individuals using moving or still facial images, but it is very difficult to discriminate gender under the influence of hairstyle, clothes, and similar factors. We therefore propose a method that is not affected by personal characteristics such as the size and position of facial parts, by paying attention to changes of expression. The method requires two facial images: one with an expression and one expressionless. First, the facial surface region and the regions of facial parts such as the eyes, nose, and mouth are extracted from the facial image using hue and saturation in the HSV colour system and emphasized edge information. Next, features are extracted by calculating the rate of change of each facial part generated by the expression change. In the last step, the feature values of the input data are compared with the database and the gender is discriminated. Experiments on laughing and smiling expressions gave good results for discriminating gender.
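
    As an illustration of the HSV-based region extraction step described above, the following sketch thresholds hue and saturation with OpenCV; the numeric bounds are illustrative assumptions, not the paper's values:

    ```python
    import cv2
    import numpy as np

    def skin_mask(bgr_image):
        """Rough facial-surface mask from hue/saturation in HSV."""
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        lower = np.array([0, 40, 60], dtype=np.uint8)    # assumed bounds
        upper = np.array([25, 180, 255], dtype=np.uint8)
        mask = cv2.inRange(hsv, lower, upper)
        # Morphological opening/closing to remove speckle and fill holes
        kernel = np.ones((5, 5), np.uint8)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    ```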

  7. Frame-Based Facial Expression Recognition Using Geometrical Features

    Directory of Open Access Journals (Sweden)

    Anwar Saeed

    2014-01-01

    Full Text Available To improve the human-computer interaction (HCI) to be as good as human-human interaction, building an efficient approach for human emotion recognition is required. These emotions could be fused from several modalities such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address the frame-based perception of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness), with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; this knowledge is gained through human intervention and not available in real scenarios. Additionally, we provide a method to investigate the performance of the geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we have found that using eight facial points, we can achieve the state-of-the-art recognition rate. However, this state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by the errors in the facial point localization, especially for the expressions with subtle facial deformations.
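
    A sketch of what frame-based geometric features can look like follows; the eight-point configuration and pairwise-distance feature set are our illustrative assumptions rather than the paper's published design, but the sketch shares its key property of needing no person-specific neutral frame:

    ```python
    import numpy as np

    def geometric_features(pts):
        """pts: (8, 2) landmarks, e.g. two brow points, two eye corners,
        nose tip, two mouth corners, lower-lip midpoint."""
        pts = np.asarray(pts, dtype=float)
        iod = np.linalg.norm(pts[2] - pts[3])  # inter-ocular distance
        feats = []
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                # Pairwise distances normalized by iod for scale invariance
                feats.append(np.linalg.norm(pts[i] - pts[j]) / iod)
        return np.array(feats)  # 28 features from 8 points
    ```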

  8. Facial expression of emotions in borderline personality disorder and depression.

    Science.gov (United States)

    Renneberg, Babette; Heyn, Katrin; Gebhard, Rita; Bachmann, Silke

    2005-09-01

    Borderline personality disorder (BPD) is characterized by marked problems in interpersonal relationships and emotion regulation. The assumption of emotional hyper-reactivity in BPD is tested with regard to the facial expression of emotions, an aspect highly relevant to communication processes and a central feature of emotion regulation. Facial expressions of emotions were examined in a group of 30 female inpatients with BPD, 27 women with major depression, and 30 non-patient female controls. Participants were videotaped while watching two short movie sequences inducing either positive or negative emotions. The frequency of emotional facial expressions and the intensity of happiness expressions were examined using the Emotional Facial Action Coding System (EMFACS-7; Friesen & Ekman, EMFACS-7: Emotional Facial Action Coding System, Version 7. Unpublished manual, 1984). Group differences were analyzed separately for the negative and positive mood-induction procedures. Results indicate that BPD patients reacted similarly to depressed patients, with reduced facial expressiveness in response to both films. The highest emotional facial activity for both films, and the most intense happiness expressions, were displayed by the non-clinical control group. The current findings contradict the assumption of a general hyper-reactivity to emotional stimuli in patients with BPD.

  9. A New Method of 3D Facial Expression Animation

    Directory of Open Access Journals (Sweden)

    Shuo Sun

    2014-01-01

    Full Text Available Creating expressive facial animation is a very challenging topic within the graphics community. In this paper, we introduce a novel ERI (expression ratio image) driven framework based on SVR and MPEG-4 for automatic 3D facial expression animation. Using support vector regression (SVR), the framework can learn and forecast the regression relationship between the facial animation parameters (FAPs) and the parameters of the expression ratio image. First, we build a 3D face animation system driven by FAPs. Second, using principal component analysis (PCA), we generate the parameter sets of the eigen-ERI space, which can rebuild a plausible expression ratio image. We then learn a model with the support vector regression mapping, so that facial animation parameters can be synthesized quickly from the eigen-ERI parameters. Finally, we implement our 3D face animation system driven by the resulting FAPs, and it works effectively.
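
    The FAP-from-ERI regression lends itself to a compact sketch; the scikit-learn pipeline below (one RBF-kernel SVR per FAP over PCA-reduced ERI images) is a hedged approximation of the described mapping, with training pairs assumed to be precomputed:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.multioutput import MultiOutputRegressor
    from sklearn.svm import SVR

    def fit_eri_to_fap(eri_images, faps, n_components=20):
        """eri_images: (n_samples, n_pixels); faps: (n_samples, n_faps)."""
        pca = PCA(n_components=n_components)
        X = pca.fit_transform(eri_images)      # eigen-ERI coefficients
        model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0))
        model.fit(X, faps)                     # learn ERI -> FAP mapping
        return pca, model

    def predict_faps(pca, model, eri_image):
        return model.predict(pca.transform(eri_image.reshape(1, -1)))[0]
    ```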

  10. Memory for facial expression is influenced by the background music playing during study.

    Science.gov (United States)

    Woloszyn, Michael R; Ewert, Laura

    2012-01-01

    The effect of the emotional quality of study-phase background music on subsequent recall for happy and sad facial expressions was investigated. Undergraduates (N = 48) viewed a series of line drawings depicting a happy or sad child in a variety of environments that were each accompanied by happy or sad music. Although memory for faces was very accurate, emotionally incongruent background music biased subsequent memory for facial expressions, increasing the likelihood that happy faces were recalled as sad when sad music was previously heard, and that sad faces were recalled as happy when happy music was previously heard. Overall, the results indicated that when recalling a scene, the emotional tone is set by an integration of stimulus features from several modalities.

  11. Automated and objective action coding of facial expressions in patients with acute facial palsy.

    Science.gov (United States)

    Haase, Daniel; Minnigerode, Laura; Volk, Gerd Fabian; Denzler, Joachim; Guntinas-Lichius, Orlando

    2015-05-01

    The aim of the present observational single-center study was to objectively assess facial function in patients with idiopathic facial palsy with a new computer-based system that automatically recognizes action units (AUs) defined by the Facial Action Coding System (FACS). Still photographs using posed facial expressions of 28 healthy subjects and of 299 patients with acute facial palsy were automatically analyzed for bilateral AU expression profiles. All palsies were graded with the House-Brackmann (HB) grading system and with the Stennert Index (SI). Changes of the AU profiles during follow-up were analyzed for 77 patients. The initial HB grading of all patients was 3.3 ± 1.2. SI at rest was 1.86 ± 1.3 and during motion 3.79 ± 4.3. Healthy subjects showed a significant AU asymmetry score of 21 ± 11% and there was no significant difference to patients (p = 0.128). At the initial examination of patients, the number of activated AUs was significantly lower on the paralyzed side than on the healthy side. Automated facial grading is worthwhile: automated FACS delivers fast and objective global and regional data on facial motor function for use in clinical routine and clinical trials.

  12. Electrophysiological correlates of the efficient detection of emotional facial expressions.

    Science.gov (United States)

    Sawada, Reiko; Sato, Wataru; Uono, Shota; Kochiyama, Takanori; Toichi, Motomi

    2014-04-29

    Behavioral studies have shown that emotional facial expressions are detected more rapidly and accurately than are neutral expressions. However, the neural mechanism underlying this efficient detection has remained unclear. To investigate this mechanism, we measured event-related potentials (ERPs) during a visual search task in which participants detected the normal emotional facial expressions of anger and happiness or their control stimuli, termed "anti-expressions," within crowds of neutral expressions. The anti-expressions, which were created using a morphing technique that produced changes equivalent to those in the normal emotional facial expressions compared with the neutral facial expressions, were most frequently recognized as emotionally neutral. Behaviorally, normal expressions were detected faster and more accurately and were rated as more emotionally arousing than were the anti-expressions. Regarding ERPs, the normal expressions elicited larger early posterior negativity (EPN) at 200–400 ms compared with anti-expressions. Furthermore, larger EPN was related to faster and more accurate detection and higher emotional arousal. These data suggest that the efficient detection of emotional facial expressions is implemented via enhanced activation of the posterior visual cortices at 200–400 ms based on their emotional significance. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Beyond face value: does involuntary emotional anticipation shape the perception of dynamic facial expressions?

    Directory of Open Access Journals (Sweden)

    Letizia Palumbo

    Full Text Available Emotional facial expressions are immediate indicators of the affective dispositions of others. Recently it has been shown that early stages of social perception can already be influenced by (implicit) attributions made by the observer about the agent's mental state and intentions. In the current study possible mechanisms underpinning distortions in the perception of dynamic, ecologically-valid, facial expressions were explored. In four experiments we examined to what extent basic perceptual processes such as contrast/context effects, adaptation and representational momentum underpinned the perceptual distortions, and to what extent 'emotional anticipation', i.e. the involuntary anticipation of the other's emotional state of mind on the basis of the immediate perceptual history, might have played a role. Neutral facial expressions displayed at the end of short video-clips, in which an initial facial expression of joy or anger gradually morphed into a neutral expression, were misjudged as being slightly angry or slightly happy, respectively (Experiment 1). This response bias disappeared when the actor's identity changed in the final neutral expression (Experiment 2). Videos depicting neutral-to-joy-to-neutral and neutral-to-anger-to-neutral sequences again produced biases but in opposite direction (Experiment 3). The bias survived insertion of a 400 ms blank (Experiment 4). These results suggested that the perceptual distortions were not caused by any of the low-level perceptual mechanisms (adaptation, representational momentum and contrast effects). We speculate that especially when presented with dynamic, facial expressions, perceptual distortions occur that reflect 'emotional anticipation' (a low-level mindreading mechanism), which overrules low-level visual mechanisms. Underpinning neural mechanisms are discussed in relation to the current debate on action and emotion understanding.

  14. Beyond face value: does involuntary emotional anticipation shape the perception of dynamic facial expressions?

    Science.gov (United States)

    Palumbo, Letizia; Jellema, Tjeerd

    2013-01-01

    Emotional facial expressions are immediate indicators of the affective dispositions of others. Recently it has been shown that early stages of social perception can already be influenced by (implicit) attributions made by the observer about the agent's mental state and intentions. In the current study possible mechanisms underpinning distortions in the perception of dynamic, ecologically-valid, facial expressions were explored. In four experiments we examined to what extent basic perceptual processes such as contrast/context effects, adaptation and representational momentum underpinned the perceptual distortions, and to what extent 'emotional anticipation', i.e. the involuntary anticipation of the other's emotional state of mind on the basis of the immediate perceptual history, might have played a role. Neutral facial expressions displayed at the end of short video-clips, in which an initial facial expression of joy or anger gradually morphed into a neutral expression, were misjudged as being slightly angry or slightly happy, respectively (Experiment 1). This response bias disappeared when the actor's identity changed in the final neutral expression (Experiment 2). Videos depicting neutral-to-joy-to-neutral and neutral-to-anger-to-neutral sequences again produced biases but in opposite direction (Experiment 3). The bias survived insertion of a 400 ms blank (Experiment 4). These results suggested that the perceptual distortions were not caused by any of the low-level perceptual mechanisms (adaptation, representational momentum and contrast effects). We speculate that especially when presented with dynamic, facial expressions, perceptual distortions occur that reflect 'emotional anticipation' (a low-level mindreading mechanism), which overrules low-level visual mechanisms. Underpinning neural mechanisms are discussed in relation to the current debate on action and emotion understanding.

  15. Face Processing in Children with Autism Spectrum Disorder: Independent or Interactive Processing of Facial Identity and Facial Expression?

    Science.gov (United States)

    Krebs, Julia F.; Biswas, Ajanta; Pascalis, Olivier; Kamp-Becker, Inge; Remschmidt, Helmuth; Schwarzer, Gudrun

    2011-01-01

    The current study investigated if deficits in processing emotional expression affect facial identity processing and vice versa in children with autism spectrum disorder. Children with autism and IQ and age matched typically developing children classified faces either by emotional expression, thereby ignoring facial identity or by facial identity…

  16. Neural Mechanism of Facial Expression Perception in Intellectually Gifted Adolescents

    DEFF Research Database (Denmark)

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan

    2015-01-01

    The current study investigated the relationship between general intelligence and the three stages of facial expression processing. Two groups of adolescents with different levels of general intelligence were required to identify three types of facial expressions (happy, sad, and neutral faces) … average IQ adolescents during the facial expression identification task. The electrophysiological responses showed that no significant IQ-related differences were found for P1 responses during the early visual processing stage. During the middle processing stage, high IQ adolescents had faster structural … attentional modulation, with larger late positive potential (LPP) amplitudes compared to adolescents with average IQ. The current study revealed that adolescents with different intellectual levels used different neural dynamic processes during these three stages in the processing of facial expressions.

  17. Facial Expression Recognition of Various Internal States via Manifold Learning

    Institute of Scientific and Technical Information of China (English)

    Young-Suk Shin

    2009-01-01

    Emotions are becoming increasingly important in human-centered interaction architectures. Recognition of facial expressions, which are central to human-computer interactions, seems natural and desirable. However, facial expressions include mixed emotions that are continuous rather than discrete and vary from moment to moment. This paper presents a novel method of recognizing facial expressions of various internal states via manifold learning, to serve the aims of human-centered interaction studies. A critical review of widely used emotion models is given; then, facial expression features of various internal states are extracted via locally linear embedding (LLE). Recognition of facial expressions is performed along the pleasure-displeasure and arousal-sleep dimensions of a two-dimensional model of emotion. The recognition results for various internal-state expressions mapped to the embedding space via the LLE algorithm effectively represent the structural nature of the two-dimensional model of emotion. Our research thus establishes that the relationship between facial expressions of various internal states can be elaborated in the two-dimensional model of emotion via the locally linear embedding algorithm.
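
    The embedding step is the computational core here; a minimal sketch using scikit-learn's LLE is given below, with the neighborhood size and the assumption of precomputed facial feature vectors being ours:

    ```python
    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding

    def embed_expressions(feature_vectors, n_neighbors=12):
        """Map facial feature vectors onto a 2-D manifold, matching the
        two dimensions of the pleasure-arousal emotion model."""
        lle = LocallyLinearEmbedding(n_neighbors=n_neighbors,
                                     n_components=2)
        return lle.fit_transform(np.asarray(feature_vectors))
    ```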

  18. French-speaking Children's Freely Produced Labels for Facial Expressions

    Directory of Open Access Journals (Sweden)

    Reem Maassarani

    2014-06-01

    Full Text Available In this study, we investigated the labeling of facial expressions in French-speaking children. The participants were 137 French-speaking children, between the ages of 5 and 11 years, recruited from three elementary schools in Ottawa, Ontario, Canada. The facial expressions included expressions of happiness, sadness, fear, surprise, anger, and disgust. Participants were shown one facial expression at a time, and asked to say what the stimulus person was feeling. Participants' responses were coded by two raters who made judgments concerning the specific emotion category in which the responses belonged. Five- and 6-year-olds were quite accurate in labeling facial expressions of happiness, anger, and sadness but far less accurate for facial expressions of fear, surprise, and disgust. An improvement in accuracy as a function of age was found for fear and surprise only. Labeling facial expressions of disgust proved to be very difficult for the children, even for the 11-year-olds. In order to examine the fit between the model proposed by Widen and Russell (2003) and our data, we looked at the number of participants who had the predicted response patterns. Overall, 88.52% of the participants did. Most of the participants used between 3 and 5 labels, with correspondence percentages varying between 80.00% and 100.00%. Our results suggest that the model proposed by Widen and Russell is not limited to English-speaking children, but also accounts for the sequence of emotion labeling in French-Canadian children.

  19. Automatic Facial Expression Recognition and Operator Functional State

    Science.gov (United States)

    Blanson, Nina

    2012-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none have been developed to employ the detection and regulation of Operator Functional State (OFS), the optimal condition of the operator while performing a task, in work environments, due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, such as obtrusiveness and impracticality of integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
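
    Since the abstract names Haar classifiers and OpenCV, a basic landmark-locating loop of the kind described can be sketched as follows (camera index and cascade choices are generic assumptions):

    ```python
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    cap = cv2.VideoCapture(0)  # live video stream from default camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
            roi = gray[y:y + h, x:x + w]  # search for eyes inside the face
            for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
                cv2.rectangle(frame, (x + ex, y + ey),
                              (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
        cv2.imshow("facial landmarks", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
    ```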

  20. How facial expressions of emotion affect distance perception

    Directory of Open Access Journals (Sweden)

    Nam-Gyoon Kim

    2015-11-01

    Full Text Available Facial expressions of emotion are thought to convey expressers' behavioral intentions, thus priming observers' approach and avoidance tendencies appropriately. The present study examined whether detecting expressions of behavioral intent influences perceivers' estimation of the expresser's distance from them. Eighteen undergraduates (9 male and 9 female) participated in the study. Six facial expressions were chosen on the basis of degree of threat: anger and hate (threatening expressions), shame and surprise (neutral expressions), pleasure and joy (safe expressions). Each facial expression was presented on a tablet PC held by an assistant covered by a black drape who stood 1 m, 2 m, or 3 m away from participants. Participants performed a visual matching task to report the perceived distance. Results showed that facial expression influenced distance estimation, with faces exhibiting threatening or safe expressions judged closer than those showing neutral expressions. Females' judgments were more likely to be influenced, but these influences largely disappeared beyond the 2 m distance. These results suggest that facial expressions of emotion (particularly threatening or safe emotions) influence others' (especially females') distance estimations, but only within close proximity.

  1. Children's Representations of Facial Expression and Identity: Identity-Contingent Expression Aftereffects

    Science.gov (United States)

    Vida, Mark D.; Mondloch, Catherine J.

    2009-01-01

    This investigation used adaptation aftereffects to examine developmental changes in the perception of facial expressions. Previous studies have shown that adults' perceptions of ambiguous facial expressions are biased following adaptation to intense expressions. These expression aftereffects are strong when the adapting and probe expressions share…

  2. Concurrent development of facial identity and expression discrimination

    OpenAIRE

    Dalrymple, Kirsten A.; Visconti di Oleggio Castello, Matteo; Elison, Jed T.; Gobbini, M. Ida

    2017-01-01

    Facial identity and facial expression processing both appear to follow a protracted developmental trajectory, yet these trajectories have been studied independently and have not been directly compared. Here we investigated whether these processes develop at the same or different rates using matched identity and expression discrimination tasks. The Identity task begins with a target face that is a morph between two identities (Identity A/Identity B). After a brief delay, the target face is rep...

  3. Factors contributing to individual differences in facial expression categorisation.

    Science.gov (United States)

    Green, Corinne; Guo, Kun

    2016-12-29

    Individuals vary in perceptual accuracy when categorising facial expressions, yet it is unclear how these individual differences in the non-clinical population are related to the cognitive processing stages of facial information acquisition and interpretation. We tested 104 healthy adults in a facial expression categorisation task, and correlated their categorisation accuracy with face-viewing gaze allocation and with personal traits assessed with the Autism Quotient, an anxiety inventory, and the Self-Monitoring Scale. Gaze allocation had a limited but emotion-specific impact on categorising expressions. Specifically, longer gaze at the eyes and nose regions was coupled with more accurate categorisation of disgust and sad expressions, respectively. Regarding trait measurements, a higher autistic score was coupled with better recognition of sad but worse recognition of anger expressions, and contributed to a categorisation bias towards sad expressions, whereas a higher anxiety level was associated with greater categorisation accuracy across all expressions and with an increased tendency to gaze at the nose region. It seems that both anxiety and autistic-like traits are associated with individual variation in expression categorisation, but this association is not necessarily mediated by variation in gaze allocation at expression-specific local facial regions. The results suggest that both facial information acquisition and interpretation capabilities contribute to individual differences in expression categorisation within non-clinical populations.

  4. Importance of the brow in facial expressiveness during human communication.

    Science.gov (United States)

    Neely, John Gail; Lisker, Paul; Drapekin, Jesse

    2014-03-01

    The objective of this study was to evaluate laterality and upper/lower face dominance of expressiveness during prescribed speech, using a unique validated image-subtraction system capable of sensitive and reliable measurement of facial surface deformation. Observations and experiments on the central control of facial expressions during speech and social utterances in humans and animals suggest that the right side of the mouth moves more than the left during nonemotional speech. However, proficient lip readers seem to attend to the whole face to interpret meaning from expressed facial cues, also implicating a horizontal (upper face-lower face) axis. Prospective experimental design. Experimental maneuver: recited speech. Main outcome measure: image-subtraction strength-duration curve amplitude. Thirty normal human adults were evaluated during memorized, nonemotional recitation of two short sentences. Facial movements were assessed using a video image-subtraction system capable of simultaneously measuring specific upper and lower areas of each hemiface. The results demonstrate that both axes influence facial expressiveness in human communication; however, the horizontal axis (upper versus lower face) appears dominant, especially during what would appear to be spontaneous, breakthrough, unplanned expressiveness. These data are congruent with the concept that the left cerebral hemisphere controls nonemotionally stimulated speech; however, the multisynaptic brainstem extrapyramidal pathways may override hemiface laterality and preferentially take control of the upper face. Additionally, these data demonstrate the importance of the often-ignored brow in facial expressiveness. Experimental study. EBM levels not applicable.

  5. Altered prosaposin expression in the rat facial nerve nucleus following facial nerve transection and repair

    Institute of Scientific and Technical Information of China (English)

    Dong Wang; Wenlong Luo; Cuiying Zhou; Jingjing Li

    2009-01-01

    BACKGROUND: Studies have demonstrated that damaged facial nerves synthesize prosaposin to promote the repair of facial neurons. OBJECTIVE: To observe time-course changes of prosaposin expression in the facial nerve nucleus of Sprague-Dawley rats following facial nerve transection and repair. DESIGN, TIME AND SETTING: A randomized, controlled neuropathological animal experiment was performed at Chongqing Medical University between March 2007 and September 2008. MATERIALS: A total of 48 adult, male, Sprague-Dawley rats were selected and randomly divided into transection and transection + end-to-end anastomosis groups (n = 24). Rabbit anti-rat prosaposin antibody, an instant SABC immunohistochemical kit, and antibody dilution solution were purchased from Wuhan Uscn Science Co., Ltd., China. METHODS: In the transection group, the nerve trunk of the distal retroauricular branch of the left facial nerves was ligated in Sprague-Dawley rats, and a 5-mm nerve trunk at the distal end of the ligation site was removed. In the transection + end-to-end anastomosis group, epineurial anastomosis was performed immediately following transection of the left facial nerves. The right facial nerves in the two groups served as the normal control group. MAIN OUTCOME MEASURES: The number of prosaposin-positive neurons, as well as the intensity of immunostaining in the facial nerve nucleus, following transection and end-to-end anastomosis were determined by immunohistochemistry at 1, 3, 7, 14, 21, and 35 days after injury. RESULTS: Transection group: transection of the facial nerves resulted in an increased number of prosaposin-positive neurons and increased immunoreactivity intensity in the facial nucleus on day 1. These values significantly increased by day 3, with expression greater than on the control side, and peaked at 7 days post-surgery. Transection + end-to-end anastomosis group: the number of prosaposin-positive neurons and immunoreactivity intensity was reduced in the facial nerve nucleus following

  6. Do Dynamic Compared to Static Facial Expressions of Happiness and Anger Reveal Enhanced Facial Mimicry?

    Directory of Open Access Journals (Sweden)

    Krystyna Rymarczyk

    Full Text Available Facial mimicry is the spontaneous response to others' facial expressions, mirroring or matching the expression of the interaction partner. Recent evidence suggests that mimicry may not be only an automatic reaction but may depend on many factors, including social context, the type of task in which the participant is engaged, and stimulus properties (dynamic vs. static presentation). In the present study, we investigated the impact of dynamic facial expression and sex differences on facial mimicry and on judgments of emotional intensity. Electromyographic (EMG) activity was recorded from the corrugator supercilii, zygomaticus major, and orbicularis oculi muscles during passive observation of static and dynamic images of happiness and anger. Ratings of the emotional intensity of the facial expressions were also analysed. As predicted, dynamic expressions were rated as more intense than static ones. Compared to static images, dynamic displays of happiness also evoked stronger activity in the zygomaticus major and orbicularis oculi, suggesting that subjects experienced positive emotion. No muscles showed mimicry activity in response to angry faces. Moreover, we found that women exhibited greater zygomaticus major activity in response to dynamic happiness stimuli than to static stimuli. Our data support the hypothesis that people mimic positive emotions and confirm the importance of dynamic stimuli in some forms of emotional processing.

  7. What do facial expressions of emotion express in young children? The relationship between facial display and EMG measures

    Directory of Open Access Journals (Sweden)

    Michela Balconi

    2014-04-01

    Full Text Available The present paper explored the relationship between emotional facial responses and electromyographic modulation in children observing facial expressions of emotion. Facial responsiveness (evaluated by arousal and valence ratings) and psychophysiological correlates (facial electromyography, EMG) were analyzed while children looked at six facial expressions of emotion (happiness, anger, fear, sadness, surprise, and disgust). For the EMG measure, corrugator and zygomatic muscle activity was monitored in response to the different emotion types. ANOVAs showed differences in both EMG and facial responses across subjects as a function of the different emotions. Specifically, some emotions were well expressed by all subjects (such as happiness, anger, and fear) in terms of high arousal, whereas others elicited lower arousal (such as sadness). Zygomatic activity increased mainly for happiness, while corrugator activity increased mainly for anger, fear, and surprise. More generally, EMG and facial behavior were highly correlated with each other, showing a "mirror" effect with respect to the observed faces.

  8. Rapid Facial Reactions to Emotional Facial Expressions in Typically Developing Children and Children with Autism Spectrum Disorder

    Science.gov (United States)

    Beall, Paula M.; Moody, Eric J.; McIntosh, Daniel N.; Hepburn, Susan L.; Reed, Catherine L.

    2008-01-01

    Typical adults mimic facial expressions within 1000ms, but adults with autism spectrum disorder (ASD) do not. These rapid facial reactions (RFRs) are associated with the development of social-emotional abilities. Such interpersonal matching may be caused by motor mirroring or emotional responses. Using facial electromyography (EMG), this study…

  9. Recognition of Face and Emotional Facial Expressions in Autism

    Directory of Open Access Journals (Sweden)

    Muhammed Tayyib Kadak

    2013-03-01

    Full Text Available Autism is a genetically transmitted neurodevelopmental disorder characterized by severe and permanent deficits in many areas of interpersonal relations, such as communication, social interaction, and emotional responsiveness. Patients with autism have deficits in face recognition, eye contact, and recognition of emotional expression. Both face recognition and the expression of facial emotion rely on face processing. Structural and functional impairment of the fusiform gyrus, amygdala, superior temporal sulcus, and other brain regions leads to deficits in the recognition of faces and facial emotion. Studies therefore suggest that face processing deficits result in problems in the areas of social interaction and emotion in autism. Studies have revealed that children with autism have problems in recognizing facial expressions and rely on the mouth region more than the eye region. It has also been shown that autistic patients interpret ambiguous expressions as negative emotions. Deficits at various stages of face processing, such as gaze detection, face identity, and recognition of emotional expression, have been identified in autism so far. Social interaction impairments in autism spectrum disorders originate from face processing deficits during infancy, childhood, and adolescence. The recognition of faces and the expression of facial emotion may be affected either automatically, by orienting towards faces after birth, or by "learning" processes in developmental periods such as identity and emotion processing. This article aims to review the neurobiological basis of face processing and the recognition of emotional facial expressions during normal development and in autism.

  10. GENDER DIFFERENCES IN THE RECOGNITION OF FACIAL EXPRESSIONS OF EMOTION

    Directory of Open Access Journals (Sweden)

    CARLOS FELIPE PARDO-VÉLEZ

    2003-07-01

    Full Text Available Gender differences in the recognition of facial expressions of anger, happiness and sadness were researched in students 18-25 years of age. A reaction time procedure was used, and the percentage of correct answers when recognizing was also measured. Though the working hypothesis expected gender differences in facial expression recognition, the results suggest that these differences are not significant at the 0.05 level. Statistical analysis shows a greater ease (at a non-significant level) for women to recognize happiness expressions, and for men to recognize anger expressions. The implications of these data are discussed, along with possible extensions of this investigation in terms of sample size and college major of the participants.

  11. Automatic Facial Expression Recognition Based on Hybrid Approach

    Directory of Open Access Journals (Sweden)

    Ali K. K. Bermani

    2012-12-01

    Full Text Available The topic of automatic recognition of facial expressions drew many researchers in the late last century, and interest has increased greatly in the past few years. Several techniques have emerged to improve the efficiency of recognition by addressing problems in face detection and in extracting features for recognizing expressions. This paper proposes an automatic system for facial expression recognition that uses a hybrid approach in the feature-extraction phase, combining holistic and analytic approaches by extracting 307 facial expression features (19 geometric features and 288 appearance features). Expression recognition is performed using a radial basis function (RBF) artificial neural network to recognize the six basic emotions (anger, fear, disgust, happiness, surprise, and sadness) in addition to the neutral expression. The system achieved a recognition rate of 97.08% on a person-dependent database and 93.98% on a person-independent one.
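
    For readers unfamiliar with RBF networks, a generic sketch follows (k-means centers, a Gaussian hidden layer, and least-squares output weights); the center count and width heuristic are our assumptions, not the paper's configuration:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    class RBFNet:
        def __init__(self, n_centers=40):
            self.n_centers = n_centers

        def _phi(self, X):
            # Gaussian activation of each sample w.r.t. each center
            d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * self.sigma_ ** 2))

        def fit(self, X, y_onehot):
            km = KMeans(n_clusters=self.n_centers, n_init=10).fit(X)
            self.centers_ = km.cluster_centers_
            # Common width heuristic: d_max / sqrt(2k)
            d = np.linalg.norm(self.centers_[:, None] -
                               self.centers_[None, :], axis=-1)
            self.sigma_ = d.max() / np.sqrt(2.0 * self.n_centers)
            self.W_, *_ = np.linalg.lstsq(self._phi(X), y_onehot, rcond=None)
            return self

        def predict(self, X):
            return self._phi(X).dot(self.W_).argmax(axis=1)
    ```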

  12. Continuous pain intensity estimation from facial expressions

    NARCIS (Netherlands)

    Kaltwang, Sebastian; Rudovic, Ognjen; Pantic, Maja

    2012-01-01

    Automatic pain recognition is an evolving research area with promising applications in health care. In this paper, we propose the first fully automatic approach to continuous pain intensity estimation from facial images. We first learn a set of independent regression functions for continuous pain in

  13. Continuous pain intensity estimation from facial expressions

    NARCIS (Netherlands)

    Kaltwang, Sebastian; Rudovic, Ognjen; Pantic, Maja

    2012-01-01

    Automatic pain recognition is an evolving research area with promising applications in health care. In this paper, we propose the first fully automatic approach to continuous pain intensity estimation from facial images. We first learn a set of independent regression functions for continuous pain

  14. Facial Expression Recognition Based on WAPA and OEPA Fastica

    Directory of Open Access Journals (Sweden)

    Humayra Binte Ali

    2014-06-01

    Full Text Available The face is one of the most important biometric traits because of its uniqueness and robustness. For this reason, researchers from many diverse fields, such as security, psychology, image processing, and computer vision, have begun to research face detection as well as facial expression recognition. Subspace learning methods work very well for recognizing facial features, and among subspace learning techniques PCA, ICA, and NMF are the most prominent. In this work, our main focus is on Independent Component Analysis (ICA). Among several ICA architectures, we use the FastICA and LS-ICA algorithms. We applied FastICA to whole faces and to different facial parts to analyze the influence of different parts on the basic facial expressions. Our extended algorithms, WAPA-FastICA and OEPA-FastICA, are discussed in the proposed-algorithm section. Locally Salient ICA is implemented on the whole face using 8x8 windows to find the facial features most prominent for facial expression. The experiments show that our proposed OEPA-FastICA and WAPA-FastICA outperform the existing prevalent Whole-FastICA and LS-ICA methods.
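
    A minimal FastICA feature-extraction step of the kind the paper builds on can be sketched with scikit-learn; the component count and the whole-face/part-wise splitting are left as assumptions:

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def ica_basis(face_rows, n_components=30):
        """face_rows: (n_samples, n_pixels) flattened face or face-part
        images; returns the fitted model and per-image coefficients."""
        ica = FastICA(n_components=n_components,
                      whiten="unit-variance", max_iter=1000)
        coeffs = ica.fit_transform(face_rows)
        return ica, coeffs

    # Coefficients of a new image in the learned independent-component
    # basis, usable as an expression feature vector:
    # new_coeffs = ica.transform(new_face.reshape(1, -1))
    ```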

  15. Impaired holistic coding of facial expression and facial identity in congenital prosopagnosia.

    Science.gov (United States)

    Palermo, Romina; Willis, Megan L; Rivolta, Davide; McKone, Elinor; Wilson, C Ellie; Calder, Andrew J

    2011-04-01

    We test 12 individuals with congenital prosopagnosia (CP), who replicate a common pattern of showing severe difficulty in recognising facial identity in conjunction with normal recognition of facial expressions (both basic and 'social'). Strength of holistic processing was examined using standard expression composite and identity composite tasks. Compared to age- and sex-matched controls, group analyses demonstrated that CPs showed weaker holistic processing, for both expression and identity information. Implications are (a) normal expression recognition in CP can derive from compensatory strategies (e.g., over-reliance on non-holistic cues to expression); (b) the split between processing of expression and identity information may take place after a common stage of holistic processing; and (c) contrary to a recent claim, holistic processing of identity is functionally involved in face identification ability.

  16. Structure-preserving sparse decomposition for facial expression analysis.

    Science.gov (United States)

    Taheri, Sima; Qiang Qiu; Chellappa, Rama

    2014-08-01

    Although facial expressions can be decomposed in terms of action units (AUs) as suggested by the facial action coding system, there have been only a few attempts that recognize expression using AUs and their composition rules. In this paper, we propose a dictionary-based approach for facial expression analysis by decomposing expressions in terms of AUs. First, we construct an AU-dictionary using domain experts' knowledge of AUs. To incorporate the high-level knowledge regarding expression decomposition and AUs, we then perform structure-preserving sparse coding by imposing two layers of grouping over AU-dictionary atoms as well as over the test image matrix columns. We use the computed sparse code matrix for each expressive face to perform expression decomposition and recognition. Since domain experts' knowledge may not always be available for constructing an AU-dictionary, we also propose a structure-preserving dictionary learning algorithm, which we use to learn a structured dictionary as well as divide expressive faces into several semantic regions. Experimental results on publicly available expression data sets demonstrate the effectiveness of the proposed approach for facial expression analysis.
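
    As a hedged illustration of coding a face over an AU dictionary, the sketch below replaces the paper's structure-preserving grouping with plain l1 (lasso) sparse coding, which captures the decomposition idea without the two-layer structure:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def sparse_au_code(au_dictionary, face_vec, alpha=0.01):
        """au_dictionary: (n_pixels, n_atoms), columns are AU atoms;
        face_vec: (n_pixels,) vectorized expressive face."""
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(au_dictionary, face_vec)
        return lasso.coef_  # sparse weights over AU atoms
    ```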

  17. Recognition, Expression, and Understanding Facial Expressions of Emotion in Adolescents with Nonverbal and General Learning Disabilities

    Science.gov (United States)

    Bloom, Elana; Heath, Nancy

    2010-01-01

    Children with nonverbal learning disabilities (NVLD) have been found to be worse at recognizing facial expressions than children with verbal learning disabilities (LD) and without LD. However, little research has been done with adolescents. In addition, expressing and understanding facial expressions is yet to be studied among adolescents with LD…

  18. Facial Expression Recognition Teaching to Preschoolers with Autism

    DEFF Research Database (Denmark)

    Christinaki, Eirini; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2013-01-01

    The recognition of facial expressions is important for the perception of emotions. Understanding emotions is essential in human communication and social interaction. Children with autism have been reported to exhibit deficits in the recognition of affective expressions. Their difficulties … for teaching emotion recognition from facial expressions should occur as early as possible in order to be successful and to have a positive effect. It is claimed that Serious Games can be very effective in the areas of therapy and education for children with autism. However, those computer interventions … an educational computer game, which provides physical interaction by employing a natural user interface (NUI), we aim to support early intervention and to foster facial expression learning…

  19. Efficient Facial Expression and Face Recognition using Ranking Method

    Directory of Open Access Journals (Sweden)

    Murali Krishna kanala

    2015-06-01

    Full Text Available Expression detection is useful as a non-invasive method of lie detection and behaviour prediction. However, facial expressions may be difficult to detect by the untrained eye. In this paper we implement facial expression recognition techniques using a ranking method. The human face plays an important role in our social interaction, conveying people's identity. Using the human face as a key to security, biometric face recognition technology has received significant attention in the past several years. Experiments are performed using a standard database. The universally accepted three principal emotions to be recognized are surprise, sadness, and happiness, along with the neutral expression.

  20. Extraction of Subject-Specific Facial Expression Categories and Generation of Facial Expression Feature Space using Self-Mapping

    Directory of Open Access Journals (Sweden)

    Masaki Ishii

    2008-06-01

    Full Text Available This paper proposes a method for generating a subject-specific Facial Expression Map (FEMap) using Self-Organizing Maps (SOM) for unsupervised learning together with Counter Propagation Networks (CPN) for supervised learning. The proposed method consists of two steps. In the first step, the topological change of a face pattern during the expressional process is learned hierarchically using a SOM with a narrow mapping space, and the number of subject-specific facial expression categories and the representative images of each category are extracted. Psychological significance based on the neutral expression and six basic emotions (anger, sadness, disgust, happiness, surprise, and fear) is assigned to each extracted category. In the latter step, the categories and representative images described above are learned using a CPN with a large mapping space, and a category map that expresses the topological characteristics of facial expression is generated. This paper defines this category map as an FEMap. Experimental results for six subjects show that the proposed method can generate a subject-specific FEMap based on the topological characteristics of facial expression appearing in face images.

  1. Three-year-olds' rapid facial electromyographic responses to emotional facial expressions and body postures.

    Science.gov (United States)

    Geangu, Elena; Quadrelli, Ermanno; Conte, Stefania; Croci, Emanuela; Turati, Chiara

    2016-04-01

    Rapid facial reactions (RFRs) to observed emotional expressions are proposed to be involved in a wide array of socioemotional skills, from empathy to social communication. Two of the most persuasive theoretical accounts propose RFRs to rely either on motor resonance mechanisms or on more complex mechanisms involving affective processes. Previous studies demonstrated that presentation of facial and bodily expressions can generate rapid changes in adult and school-age children's muscle activity. However, to date there is little to no evidence to suggest the existence of emotional RFRs from infancy to preschool age. To investigate whether RFRs are driven by motor mimicry or could also be a result of emotional appraisal processes, we recorded facial electromyographic (EMG) activation from the zygomaticus major and frontalis medialis muscles to presentation of static facial and bodily expressions of emotions (i.e., happiness, anger, fear, and neutral) in 3-year-old children. Results showed no specific EMG activation in response to bodily emotion expressions. However, observing others' happy faces led to increased activation of the zygomaticus major and decreased activation of the frontalis medialis, whereas observing others' angry faces elicited the opposite pattern of activation. This study suggests that RFRs are the result of complex mechanisms in which both affective processes and motor resonance may play an important role. Copyright © 2015 Elsevier Inc. All rights reserved.
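
    Although the paper's exact pipeline is not given in the abstract, facial-EMG studies of this kind typically band-pass filter, rectify, smooth, and baseline-correct the signal; the sketch below shows that standard chain under assumed parameters:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def emg_envelope(raw, fs=1000.0, band=(20.0, 450.0), smooth_hz=10.0):
        """Band-pass, full-wave rectify, then low-pass to get an envelope."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)],
                      btype="bandpass")
        rectified = np.abs(filtfilt(b, a, raw))
        bs, a_s = butter(4, smooth_hz / (fs / 2), btype="lowpass")
        return filtfilt(bs, a_s, rectified)

    def baseline_corrected(envelope, fs=1000.0, baseline_s=0.5):
        """Subtract the mean of a pre-stimulus baseline window."""
        n0 = int(baseline_s * fs)
        return envelope - envelope[:n0].mean()
    ```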

  2. Training Facial Expression Production in Children on the Autism Spectrum

    Science.gov (United States)

    Gordon, Iris; Pierce, Matthew D.; Bartlett, Marian S.; Tanaka, James W.

    2014-01-01

    Children with autism spectrum disorder (ASD) show deficits in their ability to produce facial expressions. In this study, a group of children with ASD and IQ-matched, typically developing (TD) children were trained to produce "happy" and "angry" expressions with the FaceMaze computer game. FaceMaze uses an automated computer…

  3. A modulatory role for facial expressions in prosopagnosia

    NARCIS (Netherlands)

    de Gelder, B.; Frissen, I.H.E.; Barton, J.; Hadjikhani, N.K.

    2003-01-01

    Brain-damaged patients experience difficulties in recognizing a face (prosopagnosics), but they can still recognize its expression. The dissociation between these two face-related skills has served as a keystone of models of face processing. We now report that the presence of a facial expression can

  4. [The neural networks of facial expression].

    Science.gov (United States)

    Gordillo, F; Mestas, L; Castillo, G; Perez, M A; Lopez, R M; Arana, J M

    2017-02-01

    Introduction. Face perception involves a wide network of connections between cortical and subcortical regions that exchange and synchronize information through white-matter tracts. This precise communication system can be affected both within the structures themselves and along the pathways that connect them. Aims. To delimit the neural substrate underlying the perception of facial expression, and to analyze the different factors that modulate the integrity of this neural network, with the aim of proposing improvements to rehabilitation programs. Development. When the complex network of connections involved in the perception of facial expression is altered by trauma, neurodegenerative pathologies, developmental disorders, or even by social isolation or negative contexts, the capacity to interact adaptively with the environment also deteriorates. Conclusions. Restoring the integrity of the neural network responsible for processing facial expression requires taking into account different variables that have been shown, to a greater or lesser degree, to modify the structure or functionality of neural networks, such as aerobic training, transcranial magnetic stimulation, transcranial electrical stimulation, and learning; these variables are, however, conditioned by age, the type and course of the disorder, and the context in which it arose, which points to the need for rehabilitation protocols tailored and oriented toward delimiting the neural substrate of the deficit.

  5. Comparison of emotion recognition from facial expression and music.

    Science.gov (United States)

    Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves the interpretation of different visual and auditory cues. The ability to recognize emotions is not clearly determined, as their presentation is usually very short (micro-expressions), and recognition itself does not have to be a conscious process. We assumed that recognition from facial expressions would be favored over recognition of emotions communicated through music. In order to compare the success rates in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey that included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face, and 8 pieces of classical music, with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, and girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are recognized far better when presented on human faces than in music, possibly because understanding facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition may have been selected for by the necessity of communicating with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skills probably share some general cognitive skills such as attention, memory, and motivation. Music pieces are likely processed differently in the brain than facial expressions and, consequently, are probably evaluated differently as relevant emotional cues.

  6. Facial Expression Recognition Using 3D Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Young-Hyen Byeon

    2014-12-01

    Full Text Available This paper is concerned with video-based facial expression recognition, frequently used in HRI (Human-Robot Interaction) to enable natural interaction between humans and robots. For this purpose, we design a 3D-CNN (3D Convolutional Neural Network) augmented with dimensionality reduction methods such as PCA (Principal Component Analysis) and TMPCA (Tensor-based Multilinear Principal Component Analysis) to simultaneously recognize successive frames of facial expression images obtained through a video camera. The 3D-CNN can achieve some degree of shift and deformation invariance using local receptive fields and spatial subsampling, through dimensionality reduction of the redundant CNN output. The experimental results on a video-based facial expression database reveal that the presented method performs well in comparison to conventional methods such as PCA and TMPCA.
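
    The record above describes the architecture only at a high level. As an illustration of the general idea (not the authors' exact network), the following minimal PyTorch sketch runs 3D convolutions over a short stack of face frames; the layer sizes are invented, and an adaptive average-pooling layer stands in for the paper's PCA/TMPCA dimensionality-reduction step.

        import torch
        import torch.nn as nn

        class Expr3DCNN(nn.Module):
            # Input: (batch, 1 channel, n_frames, height, width)
            def __init__(self, n_classes=6):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 8, kernel_size=(3, 5, 5)), nn.ReLU(),
                    nn.MaxPool3d((1, 2, 2)),      # spatial subsampling -> some shift invariance
                    nn.Conv3d(8, 16, kernel_size=(3, 5, 5)), nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1),      # crude stand-in for the PCA/TMPCA reduction step
                )
                self.classifier = nn.Linear(16, n_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        model = Expr3DCNN()
        clip = torch.randn(2, 1, 9, 64, 64)       # two 9-frame grayscale face clips
        logits = model(clip)                      # (2, 6) expression scores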

  7. Development of a System for Automatic Facial Expression Analysis

    Science.gov (United States)

    Diago, Luis A.; Kitaoka, Tetsuko; Hagiwara, Ichiro

    Automatic recognition of facial expressions can be an important component of natural human-machine interactions. While many samples are desirable for estimating a person's feelings (e.g., liking) about a machine interface more accurately, in real-world situations only a small number of samples can be obtained, because of the high cost of collecting emotions from the observed person. This paper proposes a system that solves this problem while conforming to individual differences. A new method is developed for facial expression classification based on the combination of Holographic Neural Networks (HNN) and Type-2 Fuzzy Logic. For the recognition of emotions induced by facial expressions, the proposed method achieved the best generalization performance, using less learning time than Support Vector Machine (SVM) classifiers, compared with former HNN and SVM classifiers.
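
    Holographic Neural Networks associate stimulus and response patterns encoded as complex phasors. The toy NumPy sketch below illustrates only that encoding idea under assumed conventions (features mapped to unit phasors, associations superposed in a single complex matrix); it is not the authors' classifier, and the Type-2 fuzzy stage is omitted.

        import numpy as np

        def to_phasors(x):
            # Map real features in [0, 1] to unit phasors e^(i*theta), an HNN-style encoding.
            return np.exp(1j * np.pi * (2.0 * x - 1.0))

        class HolographicMemory:
            # Toy associative memory: stimulus-response correlations superposed in one matrix.
            def __init__(self, n_in, n_out):
                self.H = np.zeros((n_in, n_out), dtype=complex)

            def learn(self, x, y):
                self.H += np.outer(np.conj(to_phasors(x)), to_phasors(y))

            def recall(self, x):
                s = to_phasors(x)
                return s @ self.H / len(s)        # decoded response phasors

        mem = HolographicMemory(n_in=8, n_out=1)
        x, y = np.random.rand(8), np.array([0.7])
        mem.learn(x, y)
        y_hat = (np.angle(mem.recall(x)) / np.pi + 1.0) / 2.0   # phase back to [0, 1]
        print(y_hat)                              # ~0.7 for the trained association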

  8. Neural Responses to Rapid Facial Expressions of Fear and Surprise

    Directory of Open Access Journals (Sweden)

    Ke Zhao

    2017-05-01

    Full Text Available Facial expression recognition is mediated by a distributed neural system in humans that involves multiple, bilateral regions. There are six basic facial expressions that may be recognized in humans (fear, sadness, surprise, happiness, anger, and disgust); however, fearful faces and surprised faces are easily confused in rapid presentation. The functional organization of the facial expression recognition system embodies a distinction between these two emotions, which is investigated in the present study. A core system that includes the right parahippocampal gyrus (BA 30), fusiform gyrus, and amygdala mediates the visual recognition of fear and surprise. We found that fearful faces evoked greater activity in the left precuneus, middle temporal gyrus (MTG), middle frontal gyrus, and right lingual gyrus, whereas surprised faces were associated with greater activity in the right postcentral gyrus and left posterior insula. These findings indicate the importance of common and separate mechanisms of the neural activation that underlies the recognition of fearful and surprised faces.

  9. Dissociating Face Identity and Facial Expression Processing Via Visual Adaptation

    Directory of Open Access Journals (Sweden)

    Hong Xu

    2012-10-01

    Full Text Available Face identity and facial expression are processed in two distinct neural pathways. However, most of the existing face adaptation literature studies them separately, despite the fact that they are two aspects of the same face. The current study conducted a systematic comparison between these two aspects by face adaptation, investigating how top- and bottom-half face parts contribute to the processing of face identity and facial expression. A real face (sad, “Adam”) and its two size-equivalent face parts (top- and bottom-half) were used as the adaptor in separate conditions. For face identity adaptation, the test stimuli were generated by morphing Adam's sad face with another person's sad face (“Sam”). For facial expression adaptation, the test stimuli were created by morphing Adam's sad face with his neutral face and morphing the neutral face with his happy face. In each trial, after exposure to the adaptor, observers indicated the perceived face identity or facial expression of the following test face via a key press. They were also tested in a baseline condition without adaptation. Results show that the top- and bottom-half face each generated a significant face identity aftereffect. However, the aftereffect from top-half face adaptation is much larger than that from the bottom-half face. In contrast, only the bottom-half face generated a significant facial expression aftereffect. This dissociation of top- and bottom-half face adaptation suggests that face parts play different roles in face identity and facial expression. It thus provides further evidence for the distributed systems of face perception.

  10. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception

    DEFF Research Database (Denmark)

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan

    2015-01-01

    The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescents … -attentive change detection on social-emotional information.

  11. Visual decodification of some facial expressions through microimitation.

    Science.gov (United States)

    Ruggieri, V; Fiorenza, M; Sabatini, N

    1986-04-01

    We examined the level of muscular tension of mentalis muscle of 36 students in graphic design at rest and during the presentation of three slides reproducing facial expressions. Analysis showed an increase in the myographic level of mentalis muscle from the third second of measurement onwards after the presentation of the slide in which contraction of the chin was involved. We interpret this result by hypothesizing that the decodification of some facial expressions is realized through a microreproduction of the stimulus from the decodifying subject.

  12. Recognition of facial expressions of emotion in panic disorder.

    Science.gov (United States)

    Cai, Liqiang; Chen, Wanzhen; Shen, Yuedi; Wang, Xinling; Wei, Lili; Zhang, Yingchun; Wang, Wei; Chen, Wei

    2012-01-01

    Whether patients with panic disorder behave differently or not when recognizing facial expressions of emotion remains unsettled. We tested 21 outpatients with panic disorder and 34 healthy subjects, with a photo set from the Matsumoto and Ekman Japanese and Caucasian facial expressions of emotion, which includes anger, contempt, disgust, fear, happiness, sadness, and surprise. Compared to the healthy subjects, patients showed lower accuracies when recognizing disgust and fear, but a higher accuracy when recognizing surprise. These results suggest that the altered specificity to these emotions leads to self-awareness mechanisms that prevent further emotional reactions in panic disorder patients. Copyright © 2012 S. Karger AG, Basel.

  13. Moving to continuous facial expression space using the MPEG-4 facial definition parameter (FDP) set

    Science.gov (United States)

    Karpouzis, Kostas; Tsapatsoulis, Nicolas; Kollias, Stefanos D.

    2000-06-01

    Research in facial expression has concluded that at least six emotions, conveyed by human faces, are universally associated with distinct expressions. Sadness, anger, joy, fear, disgust and surprise are categories of expressions that are recognizable across cultures. In this work we form a relation between the description of the universal expressions and the MPEG-4 Facial Definition Parameter Set (FDP). We also investigate the relation between the movement of basic FDPs and the parameters that describe emotion-related words according to some classical psychological studies. In particular, Whissell suggested that emotions are points in a space that seems to occupy two dimensions: activation and evaluation. We show that some of the MPEG-4 Facial Animation Parameters (FAPs), approximated by the motion of the corresponding FDPs, can be combined by means of a fuzzy rule system to estimate the activation parameter. In this way variations of the six archetypal emotions can be achieved. Moreover, Plutchik concluded that emotion terms are unevenly distributed through the space defined by dimensions like Whissell's; instead they tend to form an approximately circular pattern, called the 'emotion wheel,' modeled using an angular measure. The 'emotion wheel' can be used as a reference for creating intermediate expressions from the universal ones, by interpolating the movement of dominant FDP points between neighboring basic expressions. By exploiting the relation between the movement of the basic FDP points and the activation and angular parameters, we can model more emotions than the primary ones and achieve efficient recognition in video sequences.
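
    A rough sketch of the two operations described above, with all numbers invented: a crisp weighted combination of FAP magnitudes standing in for the fuzzy rule system that estimates activation, and an interpolation of FAP displacements between two neighbouring archetypal emotions parameterised by an angle on the emotion wheel.

        import numpy as np

        # Hypothetical FAP displacement profiles for two neighbouring basic emotions
        # (a few FDP-driven animation parameters, normalised units).
        JOY      = np.array([0.8, 0.6, 0.1, 0.0])
        SURPRISE = np.array([0.2, 0.9, 0.7, 0.5])
        JOY_ANGLE, SURPRISE_ANGLE = np.deg2rad(10), np.deg2rad(80)  # made-up wheel positions

        def activation(faps, weights=None):
            # Crisp stand-in for the paper's fuzzy rule system: weighted FAP magnitude.
            w = np.ones_like(faps) if weights is None else weights
            return float(np.dot(np.abs(faps), w) / w.sum())

        def intermediate_expression(angle):
            # Interpolate FAP displacements between neighbouring archetypal emotions.
            t = np.clip((angle - JOY_ANGLE) / (SURPRISE_ANGLE - JOY_ANGLE), 0.0, 1.0)
            return (1.0 - t) * JOY + t * SURPRISE

        faps = intermediate_expression(np.deg2rad(45))   # an emotion "between" joy and surprise
        print(faps, activation(faps))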

  14. Brucella melitensis and Mycobacterium tuberculosis depict overlapping gene expression patterns induced in infected THP-1 macrophages.

    Science.gov (United States)

    Masoudian, M; Derakhshandeh, A; Ghahramani Seno, M M

    2015-01-01

    Pathogens infecting mammalian cells have developed various strategies to suppress and evade their hosts' defensive mechanisms. In this line, intracellular bacteria that are able to survive and propagate within their host cells must have developed strategies to avert their host's killing attitude. Studying the interface of host-pathogen confrontation can provide valuable information for defining therapeutic approaches. Brucellosis, caused by Brucella strains, is a zoonotic bacterial disease that affects thousands of humans and animals around the world, inflicting discomfort and huge economic losses. Similar to many other intracellularly dwelling bacteria, infections caused by Brucella are difficult to treat, and hence any attempt at identifying new and common therapeutic targets would prove beneficial for the purpose of curing infections caused by intracellular bacteria. In THP-1 macrophages infected with Brucella melitensis, we studied the expression levels of four host genes, i.e. EMP2, ST8SIA4, HCP5 and FRMD5, known to be involved in the pathogenesis of Mycobacterium tuberculosis. Our data showed that, at this molecular level, except for FRMD5, which was downregulated, the other three genes were upregulated by B. melitensis. Brucella melitensis and M. tuberculosis go through similar intracellular processes, and interestingly two of the investigated genes, i.e. EMP2 and ST8SIA4, were upregulated in THP-1 cells infected with B. melitensis similar to that reported for THP-1 cells infected with M. tuberculosis. At the host-pathogen interaction interface, this study depicts overlapping changes for different bacteria with common survival strategies, a fact that implies designing therapeutic approaches based on common targets may be possible.

  15. Emotional facial expression detection in the peripheral visual field.

    Directory of Open Access Journals (Sweden)

    Dimitri J Bayle

    Full Text Available BACKGROUND: In everyday life, signals of danger, such as aversive facial expressions, usually appear in the peripheral visual field. Although facial expression processing in central vision has been extensively studied, this processing in peripheral vision has been poorly studied. METHODOLOGY/PRINCIPAL FINDINGS: Using behavioral measures, we explored the human ability to detect fear and disgust vs. neutral expressions and compared it to the ability to discriminate between genders at eccentricities up to 40°. Responses were faster for the detection of emotion compared to gender. Emotion was detected from fearful faces up to 40° of eccentricity. CONCLUSIONS: Our results demonstrate the human ability to detect facial expressions presented in the far periphery up to 40° of eccentricity. The increasing advantage of emotion compared to gender processing with increasing eccentricity might reflect a major implication of the magnocellular visual pathway in facial expression processing. This advantage may suggest that emotion detection, relative to gender identification, is less impacted by visual acuity and within-face crowding in the periphery. These results are consistent with specific and automatic processing of danger-related information, which may drive attention to those messages and allow for a fast behavioral reaction.

  16. Neonatal pain facial expression: evaluating the primal face of pain.

    Science.gov (United States)

    Schiavenato, Martin; Byers, Jacquie F; Scovanner, Paul; McMahon, James M; Xia, Yinglin; Lu, Naiji; He, Hua

    2008-08-31

    The primal face of pain (PFP) is postulated to be a common and universal facial expression to pain, hardwired and present at birth. We evaluated its presence by applying a computer-based methodology consisting of "point-pair" comparisons captured from video to measure facial movement in the pain expression by way of change across two images: one image before and one image after a painful stimulus (heel-stick). Similarity of facial expression was analyzed in a sample of 57 neonates representing both sexes and 3 ethnic backgrounds (African American, Caucasian and Hispanic/Latino) while controlling for these extraneous and potentially modulating factors: feeding type (bottle, breast, or both), behavioral state (awake or asleep), and use of epidural and/or other perinatal anesthesia. The PFP is consistent with previous reports of expression of pain in neonates and is characterized by opening of the mouth, drawing in of the brows, and closing of the eyes. Although facial expression was not identical across or among groups, our analyses showed no particular clustering or unique display by sex, or ethnicity. The clinical significance of this commonality of pain display, and of the origin of its potential individual variation begs further evaluation.
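
    The point-pair methodology can be pictured as measuring how distances between facial landmarks change from the pre-stimulus image to the post-stimulus image. The sketch below uses invented landmark coordinates and pairs; it is an illustration of the idea, not the authors' software.

        import numpy as np

        # Hypothetical facial landmarks (x, y) digitised from the two video frames:
        # one image before and one image after the painful stimulus.
        before = {"brow_l": (120, 80), "brow_r": (180, 80),
                  "mouth_top": (150, 160), "mouth_bottom": (150, 175)}
        after  = {"brow_l": (123, 85), "brow_r": (177, 85),
                  "mouth_top": (150, 155), "mouth_bottom": (150, 190)}

        POINT_PAIRS = [("brow_l", "brow_r"),            # brow drawing-in
                       ("mouth_top", "mouth_bottom")]   # mouth opening

        def pair_distance(points, a, b):
            return float(np.hypot(*np.subtract(points[a], points[b])))

        for a, b in POINT_PAIRS:
            d0, d1 = pair_distance(before, a, b), pair_distance(after, a, b)
            print(f"{a}-{b}: {d0:.1f} -> {d1:.1f} (change {d1 - d0:+.1f} px)")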

  17. Perception of dynamic facial emotional expressions in adolescents with autism spectrum disorders

    NARCIS (Netherlands)

    Kessels, R.P.C.; Spee, P.S.; Hendriks, A.W.C.J.

    2010-01-01

    Previous studies have shown deficits in the perception of static emotional facial expressions in individuals with autism spectrum disorders (ASD), but results are inconclusive. Possibly, using dynamic facial stimuli expressing emotions at different levels of intensity may produce more robust…

  19. Rapid influence of emotional scenes on encoding of facial expressions: An ERP study

    National Research Council Canada - National Science Library

    Righart, R. G. R; de Gelder, B

    2008-01-01

    ... to discriminate between facial expressions. The aim of the present study was to investigate how the early stages of face processing are affected by emotional scenes when explicit categorizations of fearful and happy facial expressions are made...

  20. Continuous emotion detection using EEG signals and facial expressions

    NARCIS (Netherlands)

    Soleymani, Mohammad; Asghari-Esfeden, Sadjad; Pantic, Maja; Fu, Yun

    Emotions play an important role in how we select and consume multimedia. Recent advances in affect detection are focused on detecting emotions continuously. In this paper, for the first time, we continuously detect valence from electroencephalogram (EEG) signals and facial expressions in response to…

  1. The first facial expression recognition and analysis challenge

    NARCIS (Netherlands)

    Valstar, Michel F.; Jiang, Bihan; Mehu, Marc; Pantic, Maja; Scherer, Klaus

    2011-01-01

    Automatic Facial Expression Recognition and Analysis, in particular FACS Action Unit (AU) detection and discrete emotion detection, has been an active topic in computer science for over two decades. Standardisation and comparability have come some way; for instance, there exist a number of commonly used…

  2. Teachers' Perception Regarding Facial Expressions as an Effective Teaching Tool

    Science.gov (United States)

    Butt, Muhammad Naeem; Iqbal, Mohammad

    2011-01-01

    The major objective of the study was to explore teachers' perceptions about the importance of facial expression in the teaching-learning process. All the teachers of government secondary schools constituted the population of the study. A sample of 40 teachers, both male and female, in rural and urban areas of district Peshawar, was selected…

  3. Categorical Representation of Facial Expressions in the Infant Brain

    Science.gov (United States)

    Leppanen, Jukka M.; Richmond, Jenny; Vogel-Farley, Vanessa K.; Moulson, Margaret C.; Nelson, Charles A.

    2009-01-01

    Categorical perception, demonstrated as reduced discrimination of within-category relative to between-category differences in stimuli, has been found in a variety of perceptual domains in adults. To examine the development of categorical perception in the domain of facial expression processing, we used behavioral and event-related potential (ERP)…

  4. Categorical Perception of Emotional Facial Expressions in Preschoolers

    Science.gov (United States)

    Cheal, Jenna L.; Rutherford, M. D.

    2011-01-01

    Adults perceive emotional facial expressions categorically. In this study, we explored categorical perception in 3.5-year-olds by creating a morphed continuum of emotional faces and tested preschoolers' discrimination and identification of them. In the discrimination task, participants indicated whether two examples from the continuum "felt the…
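
    A morphed continuum of the kind described here can be approximated, for aligned images, by pixel-wise linear blending between the two expression endpoints (real morphing software also warps the landmark geometry). A minimal sketch with toy images:

        import numpy as np

        def morph_continuum(face_a, face_b, steps=7):
            # Pixel-wise linear morphs between two aligned face images.
            # (Intensity blending only; geometric warping is omitted.)
            face_a, face_b = face_a.astype(float), face_b.astype(float)
            return [(1.0 - w) * face_a + w * face_b
                    for w in np.linspace(0.0, 1.0, steps)]

        a = np.random.rand(64, 64)                # toy 64x64 grayscale "faces"
        b = np.random.rand(64, 64)
        continuum = morph_continuum(a, b)         # endpoints plus 5 intermediate morphs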

  5. The accuracy of intensity ratings of emotions from facial expressions

    Directory of Open Access Journals (Sweden)

    Kostić Aleksandra P.

    2003-01-01

    Full Text Available The results of a study on the accuracy of intensity ratings of emotion from facial expressions are reported. Research in the field so far has shown that spontaneous facial expressions of basic emotions are a reliable source of information about the category of emotion. The question is raised of whether this is also true for the intensity of emotion, and whether the accuracy of intensity ratings depends on the observer's sex and vocational orientation. A total of 228 observers of both sexes and various vocational orientations rated the emotional intensity of presented facial expressions on a scale ranging from 0 to 8. The results supported the hypothesis that spontaneous facial expressions of basic emotions provide sufficient information about emotional intensity. The hypothesis of an interdependence between the accuracy of intensity ratings and the observer's sex and vocational orientation was not confirmed. However, the accuracy of intensity ratings was shown to vary with the category of the emotion presented.

  7. Mirroring Facial Expressions and Emotions in Dyadic Conversations

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2016-01-01

    … of smiles and laughs, and one fifth of the occurrences of raised eyebrows are mirrored in the data. Moreover, some facial traits in co-occurring expressions co-occur more often than would be expected by chance. Finally, amusement, and to a lesser extent friendliness, are often emotions shared by both…

  8. Further Evidence on Preschoolers' Interpretation of Facial Expressions.

    Science.gov (United States)

    Bullock, Merry; Russell, James A.

    1985-01-01

    Assessed through two studies the organization and basis for preschool children's (n=240) and adults' (n=60) categorization of emotions. In one, children and adults chose facial expressions that exemplify emotion categories such as fear, anger, and happiness. In another they grouped emotions differing in arousal level or pleasure-displeasure…

  9. Specificity of Facial Expression Labeling Deficits in Childhood Psychopathology

    Science.gov (United States)

    Guyer, Amanda E.; McClure, Erin B.; Adler, Abby D.; Brotman, Melissa A.; Rich, Brendan A.; Kimes, Alane S.; Pine, Daniel S.; Ernst, Monique; Leibenluft, Ellen

    2007-01-01

    Background: We examined whether face-emotion labeling deficits are illness-specific or an epiphenomenon of generalized impairment in pediatric psychiatric disorders involving mood and behavioral dysregulation. Method: Two hundred fifty-two youths (7-18 years old) completed child and adult facial expression recognition subtests from the Diagnostic…

  10. The role of facial expression in resisting enjoyable advertisements

    NARCIS (Netherlands)

    Lewiński, P.

    2015-01-01

    This dissertation on consumer resistance to enjoyable advertisements is positioned in the areas of persuasive communication and social psychology. In this thesis, it is argued that consumers can resist persuasion by controlling their facial expressions of emotion when exposed to an advertisement. …

  11. N170 sensitivity to facial expression: A meta-analysis.

    Science.gov (United States)

    Hinojosa, J A; Mercado, F; Carretié, L

    2015-08-01

    The N170 component is the most important electrophysiological index of face processing. Early studies concluded that it was insensitive to facial expression, thus supporting dual theories postulating separate mechanisms for identity and expression encoding. However, recent evidence contradicts this assumption. We conducted a meta-analysis to resolve inconsistencies and to derive theoretical implications. A systematic review of 128 studies analyzing the N170 in response to neutral and emotional expressions yielded 57 meta-analyzable experiments (involving 1645 healthy adults). First, the N170 was found to be sensitive to facial expressions, supporting proposals arguing for integrated rather than segregated mechanisms in the processing of identity and expression. Second, this sensitivity is heterogeneous, with angry, fearful and happy faces eliciting the largest N170 amplitudes. Third, we explored some modulatory factors, including the focus of attention (N170 amplitude was also sensitive to unattended expressions) and the reference electrode (a common reference reinforced the effects). In sum, the N170 is a valuable tool for studying the neural processing of facial expressions and developing current theories. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. The Enfacement Illusion Is Not Affected by Negative Facial Expressions.

    Science.gov (United States)

    Beck, Brianna; Cardini, Flavia; Làdavas, Elisabetta; Bertini, Caterina

    2015-01-01

    Enfacement is an illusion wherein synchronous visual and tactile inputs update the mental representation of one's own face to assimilate another person's face. Emotional facial expressions, serving as communicative signals, may influence enfacement by increasing the observer's motivation to understand the mental state of the expresser. Fearful expressions, in particular, might increase enfacement because they are valuable for adaptive behavior and more strongly represented in somatosensory cortex than other emotions. In the present study, a face was seen being touched at the same time as the participant's own face. This face was either neutral, fearful, or angry. Anger was chosen as an emotional control condition for fear because it is similarly negative but induces less somatosensory resonance, and requires additional knowledge (i.e., contextual information and social contingencies) to effectively guide behavior. We hypothesized that seeing a fearful face (but not an angry one) would increase enfacement because of greater somatosensory resonance. Surprisingly, neither fearful nor angry expressions modulated the degree of enfacement relative to neutral expressions. Synchronous interpersonal visuo-tactile stimulation led to assimilation of the other's face, but this assimilation was not modulated by facial expression processing. This finding suggests that dynamic, multisensory processes of self-face identification operate independently of facial expression processing.

  13. Interference between conscious and unconscious facial expression information.

    Directory of Open Access Journals (Sweden)

    Xing Ye

    Full Text Available There is ample evidence that many types of visual information, including emotional information, can be processed in the absence of visual awareness. For example, it has been shown that masked subliminal facial expressions can induce priming and adaptation effects. However, a stimulus made invisible in different ways may be processed to different extents and have differential effects. In this study, we adopted a flanker-type behavioral method to investigate whether a flanker rendered invisible through Continuous Flash Suppression (CFS) could induce a congruency effect on the discrimination of a visible target. Specifically, during the experiment, participants judged the expression (either happy or fearful) of a visible face in the presence of a nearby invisible face (with a happy or fearful expression). Results show that participants were slower and less accurate in discriminating the expression of the visible face when the expression of the invisible flanker face was incongruent. Thus, facial expression information rendered invisible with CFS and presented at a different spatial location can enhance or interfere with consciously processed facial expression information.
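
    The congruency effect in such a flanker design is simply the difference in accuracy and (correct-trial) response time between congruent and incongruent trials. A small sketch over an invented trial log:

        import numpy as np

        # Hypothetical trial log: (flanker_congruent, correct, reaction_time_s)
        trials = [(True, 1, 0.52), (False, 1, 0.61), (True, 1, 0.50),
                  (False, 0, 0.66), (True, 0, 0.58), (False, 1, 0.63)]

        def summarise(trials, congruent):
            sel = [(c, rt) for cong, c, rt in trials if cong == congruent]
            acc = np.mean([c for c, _ in sel])
            rt = np.mean([rt for c, rt in sel if c == 1])   # correct trials only
            return acc, rt

        acc_c, rt_c = summarise(trials, True)
        acc_i, rt_i = summarise(trials, False)
        print(f"congruency effect: RT {1000 * (rt_i - rt_c):+.0f} ms, "
              f"accuracy {acc_c - acc_i:+.2f}")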

  14. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    Science.gov (United States)

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain.

  15. Drug effects on responses to emotional facial expressions: recent findings.

    Science.gov (United States)

    Miller, Melissa A; Bershad, Anya K; de Wit, Harriet

    2015-09-01

    Many psychoactive drugs increase social behavior and enhance social interactions, which may, in turn, increase their attractiveness to users. Although the psychological mechanisms by which drugs affect social behavior are not fully understood, there is some evidence that drugs alter the perception of emotions in others. Drugs can affect the ability to detect, attend to, and respond to emotional facial expressions, which in turn may influence their use in social settings. Either increased reactivity to positive expressions or decreased response to negative expressions may facilitate social interaction. This article reviews evidence that psychoactive drugs alter the processing of emotional facial expressions using subjective, behavioral, and physiological measures. The findings lay the groundwork for better understanding how drugs alter social processing and social behavior more generally.

  16. Facial Expression Recognition Deficits and Faulty Learning: Implications for Theoretical Models and Clinical Applications

    Science.gov (United States)

    Sheaffer, Beverly L.; Golden, Jeannie A.; Averett, Paige

    2009-01-01

    The ability to recognize facial expressions of emotion is integral in social interaction. Although the importance of facial expression recognition is reflected in increased research interest as well as in popular culture, clinicians may know little about this topic. The purpose of this article is to discuss facial expression recognition literature…

  17. Using Video Modeling to Teach Children with PDD-NOS to Respond to Facial Expressions

    Science.gov (United States)

    Axe, Judah B.; Evans, Christine J.

    2012-01-01

    Children with autism spectrum disorders often exhibit delays in responding to facial expressions, and few studies have examined teaching responding to subtle facial expressions to this population. We used video modeling to train 3 participants with PDD-NOS (age 5) to respond to eight facial expressions: approval, bored, calming, disapproval,…

  18. Does Parkinson's disease lead to alterations in the facial expression of pain?

    NARCIS (Netherlands)

    Priebe, Janosch A; Kunz, Miriam; Morcinek, Christian; Rieckmann, Peter; Lautenbacher, Stefan

    2015-01-01

    Hypomimia, which refers to a reduced degree of facial expressiveness, is a common sign in Parkinson's disease (PD). The objective of our study was to investigate how hypomimia affects PD patients' facial expression of pain. The facial expressions of 23 idiopathic PD patients in the Off-phase (without…

  19. Reconstructing dynamic mental models of facial expressions in prosopagnosia reveals distinct representations for identity and expression.

    Science.gov (United States)

    Richoz, Anne-Raphaëlle; Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G; Caldara, Roberto

    2015-04-01

    The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus, pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or on dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment in categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits…

  20. Comparison of Emotion Recognition from Facial Expression and Music

    OpenAIRE

    Gašpar, Tina; Labor, Marina; Jurić, Iva; Dumančić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves interpretation of different visual and auditory clues. The ability to recognize emotions is not clearly determined as their presentation is usually very short (micro expressions), whereas the recognition itself does not have to be a conscious process. We assumed that the recognition from facial expressions is selected over the recognition of emotions communicated through music. In order to compare the success rate in recogni...

  2. On the mutual effects of pain and emotion: facial pain expressions enhance pain perception and vice versa are perceived as more arousing when feeling pain.

    Science.gov (United States)

    Reicherts, Philipp; Gerdes, Antje B M; Pauli, Paul; Wieser, Matthias J

    2013-06-01

    Perception of emotional stimuli alters the perception of pain. Although facial expressions are powerful emotional cues (the expression of pain especially plays a crucial role in the experience and communication of pain), research on their influence on pain perception is scarce. In addition, the opposite effect, of pain on the processing of emotion, has been elucidated even less. To further scrutinize the mutual influences of emotion and pain, 22 participants were administered painful and nonpainful thermal stimuli while watching dynamic facial expressions depicting joy, fear, pain, and a neutral expression. As a control condition of low visual complexity, a central fixation cross was presented. Participants rated the intensity of the thermal stimuli and evaluated the valence and arousal of the facial expressions. In addition, facial electromyography was recorded as an index of emotion and pain perception. Results show that faces per se, compared to the low-level control condition, decreased pain, suggesting a general attentional modulation of pain by complex (social) stimuli. The facial response to painful stimulation revealed a significant correlation with pain intensity ratings. Most important, painful thermal stimuli increased the arousal of simultaneously presented pain expressions, and in turn, pain expressions resulted in higher pain ratings compared to all other facial expressions. These findings demonstrate that the modulation of pain and emotion is bidirectional, with pain faces being most prone to mutual influences, and support the view of interconnections between pain and emotion. Furthermore, the special relevance of pain faces for the processing of pain was demonstrated.

  3. Algorithms for Facial Expression Action Tracking and Facial Expression Recognition

    Institute of Scientific and Technical Information of China (English)

    李於俊; 汪增福

    2011-01-01

    For each frame in a facial video sequence, an algorithm for static facial expression recognition is first proposed: facial expression is recognized after facial action parameters are extracted, with expressions classified according to physiological knowledge of facial expression. To cope with a lack of such knowledge, an algorithm combining static and dynamic facial expression recognition is then proposed, in which facial actions and facial expressions are retrieved simultaneously using a statistical framework based on multi-class expressional Markov chains, particle filtering and facial expression knowledge. Experimental results confirm the effectiveness of these algorithms.
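
    The combination of a per-class Markov chain with particle filtering can be sketched as follows: each particle carries a discrete expression class and a continuous facial-action parameter, class labels are propagated through an (invented) transition matrix, and particles are reweighted by an assumed Gaussian observation likelihood. This is a generic illustration, not the authors' algorithm.

        import numpy as np

        rng = np.random.default_rng(0)
        N_PARTICLES, N_CLASSES = 500, 3              # e.g. neutral / happy / sad (assumed)
        TRANS = np.array([[0.90, 0.05, 0.05],        # made-up expression-class Markov chain
                          [0.10, 0.85, 0.05],
                          [0.10, 0.05, 0.85]])

        # Each particle: a discrete expression class plus one facial-action parameter.
        classes = rng.integers(0, N_CLASSES, N_PARTICLES)
        actions = rng.normal(0.0, 1.0, N_PARTICLES)

        def step(observation, obs_noise=0.2):
            global classes, actions
            # 1. Propagate class labels through the Markov chain.
            classes = np.array([rng.choice(N_CLASSES, p=TRANS[c]) for c in classes])
            # 2. Class-conditional drift of the action parameter (assumed dynamics).
            actions = actions + np.array([0.0, 0.05, -0.05])[classes] \
                      + rng.normal(0.0, 0.05, N_PARTICLES)
            # 3. Weight particles by the likelihood of the observed measurement.
            w = np.exp(-0.5 * ((observation - actions) / obs_noise) ** 2)
            w /= w.sum()
            # 4. Resample and report the class posterior.
            idx = rng.choice(N_PARTICLES, N_PARTICLES, p=w)
            classes, actions = classes[idx], actions[idx]
            return np.bincount(classes, minlength=N_CLASSES) / N_PARTICLES

        posterior = step(observation=0.4)            # class probabilities after one frame
        print(posterior)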

  4. Facial expressions of emotion are not culturally universal.

    Science.gov (United States)

    Jack, Rachael E; Garrod, Oliver G B; Yu, Hui; Caldara, Roberto; Schyns, Philippe G

    2012-05-08

    Since Darwin's seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843-850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind's eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature-nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars.

  5. Facial Expression Recognition Teaching to Preschoolers with Autism

    DEFF Research Database (Denmark)

    Christinaki, Eirini; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2013-01-01

    for teaching emotion recognition from facial expressions should occur as early as possible in order to be successful and to have a positive effect. It is claimed that Serious Games can be very effective in the areas of therapy and education for children with autism. However, those computer interventions...... require considerable skills for interaction. Before the age of 6, most children with autism do not have such basic motor skills in order to manipulate a mouse or a keyboard. Our approach takes account of the specific characteristics of preschoolers with autism and their physical inabilities. By creating......The recognition of facial expressions is important for the perception of emotions. Understanding emotions is essential in human communication and social interaction. Children with autism have been reported to exhibit deficits in the recognition of affective expressions. Their difficulties...

  6. Forming impressions: effects of facial expression and gender stereotypes.

    Science.gov (United States)

    Hack, Tay

    2014-04-01

    The present study of 138 participants explored how facial expressions and gender stereotypes influence impressions. It was predicted that images of smiling women would be evaluated more favorably on traits reflecting warmth, and that images of non-smiling men would be evaluated more favorably on traits reflecting competence. As predicted, smiling female faces were rated as warmer; however, contrary to prediction, the perceived competence of male faces was not affected by facial expression. Participants' female stereotype endorsement was a significant predictor of evaluations of female faces; those who endorsed traditional female stereotypes more strongly reported the most positive impressions of female faces displaying a smiling expression. However, a similar effect was not found for images of men; endorsement of traditional male stereotypes did not predict participants' impressions of male faces.

  7. Happy facial expression processing with different social interaction cues: an fMRI study of individuals with schizotypal personality traits.

    Science.gov (United States)

    Huang, Jia; Wang, Yi; Jin, Zhen; Di, Xin; Yang, Tao; Gur, Ruben C; Gur, Raquel E; Shum, David H K; Cheung, Eric F C; Chan, Raymond C K

    2013-07-01

    In daily life facial expressions change rapidly and the direction of change provides important clues about social interaction. The aim of conducting this study was to elucidate the dynamic happy facial expression processing with different social interaction cues in individuals with (n=14) and without (n=14) schizotypal personality disorder (SPD) traits. Using functional magnetic resonance imaging (fMRI), dynamic happy facial expression processing was examined by presenting video clips depicting happiness appearing and disappearing under happiness inducing ('praise') or reducing ('blame') interaction cues. The happiness appearing condition consistently elicited more brain activations than the happiness disappearing condition in the posterior cingulate bilaterally in all participants. Further analyses showed that the SPD group was less deactivated than the non-SPD group in the right anterior cingulate cortex in the happiness appearing-disappearing contrast. The SPD group deactivated more than the non-SPD group in the left posterior cingulate and right superior temporal gyrus in the praise-blame contrast. Moreover, the incongruence of cues and facial expression activated the frontal-thalamus-caudate-parietal network, which is involved in emotion recognition and conflict resolution. These results shed light on the neural basis of social interaction deficits in individuals with schizotypal personality traits.

  8. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia

    OpenAIRE

    Roberta Daini; Chiara Maddalena Comparetti; Paola Ricciardelli

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently fro...

  9. Rapid influence of emotional scenes on encoding of facial expressions: an ERP study

    OpenAIRE

    Righart, R. G. R.; de Gelder, B.

    2008-01-01

    In daily life, we perceive a person's facial reaction as part of the natural environment surrounding it. Because most studies have investigated how facial expressions are recognized by using isolated faces, it is unclear what role the context plays. Although it has been observed that the N170 for facial expressions is modulated by the emotional context, it was not clear whether individuals use context information on this stage of processing to discriminate between facial expressions. The aim ...

  10. Facial expression influences face identity recognition during the attentional blink.

    Science.gov (United States)

    Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J

    2014-12-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second targets (T1, T2). They demonstrate reduced neutral T2 identity recognition after an angry or happy T1 expression, compared to a neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after a neutral T1, T2 identity recognition was enhanced, not suppressed, when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based rather than feature-based. As an unexpected finding, both angry and happy facial expressions suppressed memory access for competing objects, but only angry facial expressions enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another.

  11. Effects of Facial Expressions on Recognizing Emotions in Dance Movements

    Directory of Open Access Journals (Sweden)

    Nao Shikanai

    2011-10-01

    Full Text Available Effects of facial expressions on recognizing emotions expressed in dance movements were investigated. Dancers expressed three emotions: joy, sadness, and anger through dance movements. We used digital video cameras and a 3D motion capture system to record and capture the movements. We then created full-video displays with an expressive face, full-video displays with an unexpressive face, stick figure displays (no face), or point-light displays (no face) from these data using 3D animation software. To make point-light displays, 13 markers were attached to the body of each dancer. We examined how accurately observers were able to identify the expression that the dancers intended to create through their dance movements. Dance-experienced and inexperienced observers participated in the experiment. They watched the movements and rated the compatibility of each emotion with each movement on a 5-point Likert scale. The results indicated that both experienced and inexperienced observers could identify all the emotions that dancers intended to express. Identification scores for dance movements with an expressive face were higher than for other expressions. This finding indicates that facial expressions affect the identification of emotions in dance movements, whereas bodily expressions alone provide sufficient information to recognize emotions.
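
    A point-light display of the kind used here is just the marker positions rendered as dots on a dark background, frame by frame. A minimal matplotlib sketch with random stand-in motion-capture data (13 markers, as in the study):

        import numpy as np
        import matplotlib.pyplot as plt

        # Stand-in motion-capture data: (n_frames, 13 markers, xy coordinates).
        markers = np.random.rand(100, 13, 2)

        fig, ax = plt.subplots()
        ax.set_facecolor("black")
        dots = ax.scatter(markers[0, :, 0], markers[0, :, 1], c="white", s=20)
        for frame in markers[:30]:     # crude playback; real code would use FuncAnimation
            dots.set_offsets(frame)
            plt.pause(0.03)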

  12. The facial expression of schizophrenic patients applied with infrared thermal facial image sequence

    National Research Council Canada - National Science Library

    Bo-Lin Jian; Chieh-Li Chen; Wen-Lin Chu; Min-Wei Huang

    2017-01-01

    .... Thus, this study used non-contact infrared thermal facial images (ITFIs) to analyze facial temperature changes evoked by different emotions in moderately and markedly ill schizophrenia patients...

  13. Americans and Palestinians judge spontaneous facial expressions of emotion.

    Science.gov (United States)

    Kayyal, Mary H; Russell, James A

    2013-10-01

    The claim that certain emotions are universally recognized from facial expressions is based primarily on the study of expressions that were posed. The current study was of spontaneous facial expressions shown by aborigines in Papua New Guinea (Ekman, 1980); 17 faces claimed to convey one (or, in the case of blends, two) basic emotions and five faces claimed to show other universal feelings. For each face, participants rated the degree to which each of the 12 predicted emotions or feelings was conveyed. The modal choice for English-speaking Americans (n = 60), English-speaking Palestinians (n = 60), and Arabic-speaking Palestinians (n = 44) was the predicted label for only 4, 5, and 4, respectively, of the 17 faces for basic emotions, and for only 2, 2, and 2, respectively, of the 5 faces for other feelings. Observers endorsed the predicted emotion or feeling moderately often (65%, 55%, and 44%), but also denied it moderately often (35%, 45%, and 56%). They also endorsed more than one (or, for blends, two) label(s) in each face-on average, 2.3, 2.3, and 1.5 of basic emotions and 2.6, 2.2, and 1.5 of other feelings. There were both similarities and differences across culture and language, but the emotional meaning of a facial expression is not well captured by the predicted label(s) or, indeed, by any single label.

  14. Younger and Older Users’ Recognition of Virtual Agent Facial Expressions

    Science.gov (United States)

    Beer, Jenay M.; Smarr, Cory-Ann; Fisk, Arthur D.; Rogers, Wendy A.

    2015-01-01

    As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent’s social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell, Sullivan, Prevost, & Churchill, 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been supported (Ruffman et al., 2008), with older adults showing a deficit for recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck & Reichenbach, 2005; Courgeon et al. 2009; 2011; Breazeal, 2003); however, little research has compared in-depth younger and older adults’ ability to label a virtual agent’s facial emotions, an import consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand if those age-related differences are influenced by the intensity of the emotion, dynamic formation of emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character differing by human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition expressed by three types of virtual characters differing by human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a

  15. A 3D Facial Expression Tracking Method Using Piecewise Deformations

    Directory of Open Access Journals (Sweden)

    Jing Chi

    2013-02-01

    Full Text Available We present a new fast method for 3D facial expression tracking based on piecewise non-rigid deformations. Our method takes as input a video-rate sequence of face meshes that record the shape and time-varying expressions of a human face, and deforms a source mesh to match each input mesh to output a new mesh sequence with the same connectivity that reflects the facial shape and expressional variations. In mesh matching, we automatically segment the source mesh and estimate a non-rigid transformation for each segment to approximate the input mesh closely. Piecewise non-rigid transformation significantly reduces computational complexity and improves tracking speed because it greatly decreases the unknowns to be estimated. Our method can also achieve desired tracking accuracy because segmentation can be adjusted automatically and flexibly to approximate arbitrary deformations on the input mesh. Experiments demonstrate the efficiency of our method.
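
    The core of the piecewise approach, fitting one transformation per mesh segment by least squares, can be sketched as below. For simplicity the sketch fits affine transforms to invented segment index sets; the actual method estimates more general non-rigid transforms and handles segment boundaries smoothly.

        import numpy as np

        def fit_affine(src, dst):
            # Least-squares 3D affine transform mapping src points onto dst points.
            src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coords, (n, 4)
            A, *_ = np.linalg.lstsq(src_h, dst, rcond=None)    # (4, 3)
            return A

        def apply_affine(A, pts):
            return np.hstack([pts, np.ones((len(pts), 1))]) @ A

        # Hypothetical: source-mesh vertices already split into segments (index arrays).
        src = np.random.rand(300, 3)
        segments = [np.arange(0, 150), np.arange(150, 300)]
        dst = src + 0.01 * np.random.rand(300, 3)              # stand-in for an input frame

        deformed = src.copy()
        for seg in segments:
            A = fit_affine(src[seg], dst[seg])                 # one transform per segment
            deformed[seg] = apply_affine(A, src[seg])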

  16. Real Time Facial Expression Recognition Using a Novel Method

    Directory of Open Access Journals (Sweden)

    Saumil Srivastava

    2012-04-01

    Full Text Available This paper discusses a novel method for a Facial Expression Recognition System which performs facial expression analysis in near real time from a live webcam feed. The primary objectives were to get results in near real time in a light-invariant, person-independent and pose-invariant way. The system is composed of two different entities, a trainer and an evaluator. Each frame of the video feed is passed through a series of steps including Haar classifiers, skin detection, feature extraction, feature-point tracking, and a learned Support Vector Machine model to classify emotions, achieving a tradeoff between accuracy and result rate. A processing time of 100-120 ms per 10 frames was achieved with an accuracy of around 60%. We measure our accuracy in terms of a variety of interaction and classification scenarios. We conclude by discussing the relevance of our work to human-computer interaction and exploring further measures that can be taken.
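
    A skeleton of such a pipeline, using OpenCV's stock Haar cascade and a scikit-learn SVM, is sketched below. Skin detection and feature-point tracking are omitted, the features are just normalised face-crop pixels, and the training data are random placeholders, so this shows only the flow of the system, not the paper's implementation.

        import cv2
        import numpy as np
        from sklearn.svm import SVC

        # Haar cascade face detector shipped with OpenCV.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def face_features(frame, size=(48, 48)):
            # Detect the largest face; return a normalised pixel-intensity feature vector.
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, 1.3, 5)
            if len(faces) == 0:
                return None
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
            crop = cv2.resize(gray[y:y + h, x:x + w], size)
            return (crop.astype(np.float32) / 255.0).ravel()

        # Training would use labelled frames; random stand-ins keep the sketch runnable.
        X_train = np.random.rand(40, 48 * 48)
        y_train = np.random.randint(0, 3, 40)        # e.g. neutral / happy / surprised
        clf = SVC(kernel="rbf").fit(X_train, y_train)

        cap = cv2.VideoCapture(0)                    # live webcam feed
        ok, frame = cap.read()
        if ok:
            feats = face_features(frame)
            if feats is not None:
                print("predicted expression class:", clf.predict([feats])[0])
        cap.release()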

  17. Facial expression recognition in Alzheimer's disease: a longitudinal study.

    Science.gov (United States)

    Torres, Bianca; Santos, Raquel Luiza; Sousa, Maria Fernanda Barroso de; Simões Neto, José Pedro; Nogueira, Marcela Moreira Lima; Belfort, Tatiana T; Dias, Rachel; Dourado, Marcia Cristina Nascimento

    2015-05-01

    Facial recognition is one of the most important aspects of social cognition. In this study, we investigate the patterns of change and the factors involved in the ability to recognize emotion in mild Alzheimer's disease (AD). Through a longitudinal design, we assessed 30 people with AD. We used an experimental task that includes matching expressions with picture stimuli, labelling emotions and emotionally recognizing a stimulus situation. We observed a significant difference in the situational recognition task (p ≤ 0.05) between baseline and the second evaluation. The linear regression showed that cognition is a predictor of emotion recognition impairment (p ≤ 0.05). The ability to perceive emotions from facial expressions was impaired, particularly when the emotions presented were relatively subtle. Cognition is recruited to comprehend emotional situations in cases of mild dementia.

  18. Affective priming using facial expressions modulates liking for abstract art.

    Directory of Open Access Journals (Sweden)

    Albert Flexas

    Full Text Available We examined the influence of affective priming on the appreciation of abstract artworks using an evaluative priming task. Facial primes (showing happiness, disgust or no emotion) were presented under brief (Stimulus Onset Asynchrony, SOA = 20 ms) and extended (SOA = 300 ms) conditions. Differences in aesthetic liking for abstract paintings depending on the emotion expressed in the preceding primes provided a measure of the priming effect. The results showed that, for the extended SOA, artworks were liked more when preceded by happiness primes and less when preceded by disgust primes. Facial expressions of happiness, though not of disgust, exerted similar effects in the brief SOA condition. Subjective measures and a forced-choice task revealed no evidence of prime awareness in the suboptimal condition. Our results are congruent with findings showing that the affective transfer elicited by priming biases evaluative judgments, extending previous research to the domain of aesthetic appreciation.
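
    The priming-effect measure described here reduces to a difference of mean liking ratings by prime condition. A worked toy example with invented ratings:

        import numpy as np

        # Hypothetical liking ratings (1-7) of abstract paintings by preceding prime.
        ratings = {"happy":   [5.1, 5.6, 4.9, 5.3],
                   "neutral": [4.6, 4.8, 4.5, 4.9],
                   "disgust": [4.0, 4.2, 4.4, 3.9]}

        means = {prime: float(np.mean(r)) for prime, r in ratings.items()}
        priming_effect = means["happy"] - means["disgust"]   # positive -> affective transfer
        print(means, f"happy-disgust effect: {priming_effect:+.2f}")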

  19. Affective Priming Using Facial Expressions Modulates Liking for Abstract Art

    Science.gov (United States)

    Flexas, Albert; Rosselló, Jaume; Christensen, Julia F.; Nadal, Marcos; Olivera La Rosa, Antonio; Munar, Enric

    2013-01-01

    We examined the influence of affective priming on the appreciation of abstract artworks using an evaluative priming task. Facial primes (showing happiness, disgust or no emotion) were presented under brief (Stimulus Onset Asynchrony, SOA = 20ms) and extended (SOA = 300ms) conditions. Differences in aesthetic liking for abstract paintings depending on the emotion expressed in the preceding primes provided a measure of the priming effect. The results showed that, for the extended SOA, artworks were liked more when preceded by happiness primes and less when preceded by disgust primes. Facial expressions of happiness, though not of disgust, exerted similar effects in the brief SOA condition. Subjective measures and a forced-choice task revealed no evidence of prime awareness in the suboptimal condition. Our results are congruent with findings showing that the affective transfer elicited by priming biases evaluative judgments, extending previous research to the domain of aesthetic appreciation. PMID:24260350

  20. UX_Mate: from facial expressions to UX evaluation

    DEFF Research Database (Denmark)

    Staiano, Jacopo; Menendez Blanco, Maria; Battocchi, Alberto

    2012-01-01

    In this paper we propose and evaluate UX_Mate, a non-invasive system for the automatic assessment of User eXperience (UX). In addition, we contribute a novel database of annotated and synchronized videos of interactive behavior and facial expressions. UX_Mate is a modular system which tracks facial expressions of users, interprets them based on pre-set rules, and generates predictions about the occurrence of a target emotional state, which can be linked to interaction events. The system simplifies UX evaluation by providing an indication of event occurrence. UX_Mate has several advantages compared to other state-of-the-art systems: easy deployment in the user's natural environment, avoidance of invasive devices, and extreme cost reduction. The paper reports a pilot and a validation study on a total of 46 users, where UX_Mate was used for identifying interaction difficulties. The studies show encouraging...
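
    The rule-based interpretation step can be pictured with a small sketch: a hypothetical table mapping tracked action-unit activations to a target state, evaluated frame by frame. The AU combinations and thresholds below are invented for illustration and are not the authors' actual rules.

```python
# Hypothetical rule layer in the spirit of UX_Mate: map per-frame action-unit
# activations (from any face tracker) to a target state such as frustration.
# The AU thresholds and rules below are illustrative, not the authors' rules.
from typing import Dict, List

RULES = {
    # target state: AUs that must all exceed threshold in the same frame
    "frustration": ["AU4", "AU7"],   # brow lowerer + lid tightener (assumed)
    "delight": ["AU6", "AU12"],      # cheek raiser + lip-corner puller
}

def detect_events(frames: List[Dict[str, float]], state: str,
                  threshold: float = 0.5) -> List[int]:
    """Return frame indices where all AUs required for `state` fire."""
    required = RULES[state]
    return [i for i, aus in enumerate(frames)
            if all(aus.get(au, 0.0) >= threshold for au in required)]

frames = [{"AU4": 0.8, "AU7": 0.7}, {"AU6": 0.9, "AU12": 0.2}]
print(detect_events(frames, "frustration"))  # -> [0]
```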

  1. Affective priming using facial expressions modulates liking for abstract art.

    Science.gov (United States)

    Flexas, Albert; Rosselló, Jaume; Christensen, Julia F; Nadal, Marcos; Olivera La Rosa, Antonio; Munar, Enric

    2013-01-01

    We examined the influence of affective priming on the appreciation of abstract artworks using an evaluative priming task. Facial primes (showing happiness, disgust or no emotion) were presented under brief (Stimulus Onset Asynchrony, SOA = 20 ms) and extended (SOA = 300 ms) conditions. Differences in aesthetic liking for abstract paintings depending on the emotion expressed in the preceding primes provided a measure of the priming effect. The results showed that, for the extended SOA, artworks were liked more when preceded by happiness primes and less when preceded by disgust primes. Facial expressions of happiness, though not of disgust, exerted similar effects in the brief SOA condition. Subjective measures and a forced-choice task revealed no evidence of prime awareness in the suboptimal condition. Our results are congruent with findings showing that the affective transfer elicited by priming biases evaluative judgments, extending previous research to the domain of aesthetic appreciation.

  2. Deficits in the Mimicry of Facial Expressions in Parkinson's Disease.

    Science.gov (United States)

    Livingstone, Steven R; Vezer, Esztella; McGarry, Lucy M; Lang, Anthony E; Russo, Frank A

    2016-01-01

    Humans spontaneously mimic the facial expressions of others, facilitating social interaction. This mimicking behavior may be impaired in individuals with Parkinson's disease, for whom the loss of facial movements is a clinical feature. To assess the presence of facial mimicry in patients with Parkinson's disease, twenty-seven non-depressed patients with idiopathic Parkinson's disease and 28 age-matched controls had their facial muscles recorded with electromyography while they observed presentations of calm, happy, sad, angry, and fearful emotions. Patients exhibited reduced amplitude and delayed onset in the zygomaticus major muscle region (smiling response) following happy presentations (patients M = 0.02, 95% confidence interval [CI] -0.15 to 0.18; controls M = 0.26, CI 0.14 to 0.37; ANOVA, effect size [ES] = 0.18, p < 0.05). Patients showed reduced mimicry overall, mimicking other people's frowns to some extent but presenting with profoundly weakened and delayed smiles. These findings open a new avenue of inquiry into the "masked face" syndrome of PD.

  3. A Developmental Examination of Amygdala Response to Facial Expressions

    OpenAIRE

    Guyer, Amanda E.; Monk, Christopher S.; McClure-Tone, Erin B.; Nelson, Eric E.; Roberson-Nay, Roxann; Adler, Abby D.; Fromm, Stephen J.; Leibenluft, Ellen; Daniel S Pine; Ernst, Monique

    2008-01-01

    Several lines of evidence implicate the amygdala in face– emotion processing, particularly for fearful facial expressions. Related findings suggest that face–emotion processing engages the amygdala within an interconnected circuitry that can be studied using a functional-connectivity approach. Past work also underscores important functional changes in the amygdala during development. Taken together, prior research on amygdala function and development reveals a need for more work examining dev...

  4. Featural processing in recognition of emotional facial expressions.

    Science.gov (United States)

    Beaudry, Olivia; Roy-Charland, Annie; Perron, Melanie; Cormier, Isabelle; Tapp, Roxane

    2014-04-01

    The present study aimed to clarify the role played by the eye/brow and mouth areas in the recognition of the six basic emotions. In Experiment 1, accuracy was examined while participants viewed partial and full facial expressions; in Experiment 2, participants viewed full facial expressions while their eye movements were recorded. Recognition rates were consistent with previous research: happiness was highest and fear was lowest. The mouth and eye/brow areas were not equally important for the recognition of all emotions. More precisely, while the mouth was revealed to be important in the recognition of happiness and the eye/brow area of sadness, results are not as consistent for the other emotions. In Experiment 2, consistent with previous studies, the eyes/brows were fixated for longer periods than the mouth for all emotions. Again, variations occurred as a function of the emotions, the mouth having an important role in happiness and the eyes/brows in sadness. The general pattern of results for the other four emotions was inconsistent between the experiments as well as across different measures. The complexity of the results suggests that the recognition process of emotional facial expressions cannot be reduced to a simple feature processing or holistic processing for all emotions.
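
    Experiment 2's dependent measure, the proportion of fixation time falling on the eye/brow versus mouth areas, reduces to a few lines of arithmetic; the fixation records and AOI rectangles below are fabricated stand-ins for eye-tracker output.

```python
# Sketch of the dependent measure in Experiment 2: proportion of fixation
# time on each area of interest (AOI). Fixations and AOI boxes are made up.
fixations = [  # (x, y, duration_ms) from an eye tracker
    (310, 210, 240), (305, 220, 180), (300, 420, 90),
]
aois = {  # axis-aligned AOI rectangles: (x_min, y_min, x_max, y_max)
    "eyes_brows": (220, 160, 420, 260),
    "mouth": (260, 380, 380, 460),
}

def dwell_proportions(fixations, aois):
    total = sum(d for _, _, d in fixations)
    props = {}
    for name, (x0, y0, x1, y1) in aois.items():
        dwell = sum(d for x, y, d in fixations
                    if x0 <= x <= x1 and y0 <= y <= y1)
        props[name] = dwell / total if total else 0.0
    return props

print(dwell_proportions(fixations, aois))  # eyes dominate here, as in the study
```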

  5. Mukha Mo: A Preliminary Study on Filipino Facial Expressions

    Directory of Open Access Journals (Sweden)

    Richard Jonathan O. Taduran

    2012-12-01

    Full Text Available This study tested the universality hypothesis on facial expression judgment by applying cross-cultural agreement tests to Filipinos. The Facial Action Coding System constructed by Ekman and Friesen (1976) was used as the basis for creating stimulus photos that 101 college student observers were asked to identify. Contextualization for each emotion was also solicited from subjects to provide qualitative bases for their judgments. The results showed that for five of the six emotions studied, excepting fear, the majority of the observers judged the expressions as predicted. The judgment of happiness supplied the strongest evidence for universality, having the highest correctness rate and inter-observer agreement. There was also high agreement among observers, and between Filipinos and other cultures, about the most intense and second most intense emotion signaled by each stimulus for these five emotions. Difficulty with the recognition of fear, as well as its common association with the emotion of sadness, was found. Such findings shall serve as baseline data for the study of facial expressions in the Philippines.

  6. Modulation of incentivized dishonesty by disgust facial expressions

    Directory of Open Access Journals (Sweden)

    Julian eLim

    2015-07-01

    Full Text Available Disgust modulates moral decisions involving harming others. We recently specified that this effect is bidirectionally modulated by individual sensitivity to disgust. Here, we show that this effect generalizes to the moral domain of honesty and extends to outcomes with real-world impact. We employed a dice-rolling task in which participants were incentivized to dishonestly report outcomes to increase their potential final monetary payoff. Disgust or control facial expressions were presented subliminally on each trial. Our results reveal that the disgust facial expressions altered honest reporting as a bidirectional function moderated by individual sensitivity. Combining these data with those from prior experiments revealed that the effect of disgust presentation on both harm judgments and honesty could be accounted for by the same bidirectional function, with no significant effect of domain. This clearly demonstrates that disgust facial expressions produce the same modulation of moral judgments across different moral foundations (harm and honesty). Our results suggest strong overlap in the cognitive/neural processes of moral judgments across moral foundations, and provide a framework for further studies to specify the integration of emotional information in moral decision making.

  7. Active AU Based Patch Weighting for Facial Expression Recognition

    Science.gov (United States)

    Xie, Weicheng; Shen, Linlin; Yang, Meng; Lai, Zhihui

    2017-01-01

    Facial expression has many applications in human-computer interaction. Although feature extraction and selection have been well studied, the specificity of each expression variation is not fully explored in state-of-the-art works. In this work, the problem of multiclass expression recognition is converted into triplet-wise expression recognition. For each expression triplet, a new feature optimization model based on action unit (AU) weighting and patch weight optimization is proposed to represent the specificity of the expression triplet. The sparse representation-based approach is then proposed to detect the active AUs of the testing sample for better generalization. The algorithm achieved competitive accuracies of 89.67% and 94.09% for the JAFFE and Cohn–Kanade (CK+) databases, respectively. Better cross-database performance has also been observed. PMID:28146094

  8. Active AU Based Patch Weighting for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Weicheng Xie

    2017-01-01

    Full Text Available Facial expression has many applications in human-computer interaction. Although feature extraction and selection have been well studied, the specificity of each expression variation is not fully explored in state-of-the-art works. In this work, the problem of multiclass expression recognition is converted into triplet-wise expression recognition. For each expression triplet, a new feature optimization model based on action unit (AU) weighting and patch weight optimization is proposed to represent the specificity of the expression triplet. The sparse representation-based approach is then proposed to detect the active AUs of the testing sample for better generalization. The algorithm achieved competitive accuracies of 89.67% and 94.09% for the JAFFE and Cohn–Kanade (CK+) databases, respectively. Better cross-database performance has also been observed.
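
    A hedged sketch of the sparse-representation idea used for active-AU detection: represent the test feature vector over a dictionary with one column per AU, then treat AUs with large coefficients as active. The dictionary, data, and the use of scikit-learn's Lasso are stand-ins; the paper's actual features and solver differ.

```python
# Sparse-representation sketch for active-AU detection. The AU "templates"
# and the test vector are random placeholders, not real facial features.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_features, n_aus = 128, 16
D = rng.normal(size=(n_features, n_aus))         # one template column per AU
x = D[:, [3, 7]].sum(axis=1) + 0.05 * rng.normal(size=n_features)

lasso = Lasso(alpha=0.1, fit_intercept=False)
lasso.fit(D, x)                                  # x ~= D @ coef, coef sparse
active_aus = np.flatnonzero(np.abs(lasso.coef_) > 0.1)
print(active_aus)                                # ideally recovers AUs 3 and 7
```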

  9. Comparing the Recognition of Emotional Facial Expressions in Patients with Obsessive-Compulsive Disorder and Major Depressive Disorder

    Directory of Open Access Journals (Sweden)

    Abdollah Ghasempour

    2014-05-01

    Full Text Available Background: Recognition of emotional facial expressions is one of the psychological factors involved in obsessive-compulsive disorder (OCD) and major depressive disorder (MDD). The aim of the present study was to compare the ability to recognize emotional facial expressions in patients with obsessive-compulsive disorder and major depressive disorder. Materials and Methods: The present study is a cross-sectional, ex-post-facto investigation (causal-comparative method). Forty participants (20 patients with OCD, 20 patients with MDD) were selected through the available sampling method from the clients referred to the Tabriz Bozorgmehr clinic. Data were collected through a Structured Clinical Interview and the Recognition of Emotional Facial States test. The data were analyzed utilizing MANOVA. Results: The obtained results showed that there is no significant difference between groups in the mean scores for recognition of the emotional states of surprise, sadness, happiness and fear; but the groups differed significantly in the mean scores for recognizing disgust and anger (p<0.05). Conclusion: Patients suffering from OCD and MDD show equal ability to recognize surprise, sadness, happiness and fear. However, the former are less competent in recognizing disgust and anger than the latter.
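
    The reported group comparison could be run in outline with statsmodels' MANOVA on per-emotion recognition scores, as in the sketch below. The data frame is fabricated, with group means chosen so that disgust and anger separate the groups, mirroring the reported pattern.

```python
# Sketch of a MANOVA group comparison with fabricated recognition scores.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "group": ["OCD"] * 20 + ["MDD"] * 20,
    "disgust": np.r_[rng.normal(60, 10, 20), rng.normal(70, 10, 20)],
    "anger":   np.r_[rng.normal(55, 10, 20), rng.normal(66, 10, 20)],
})
fit = MANOVA.from_formula("disgust + anger ~ group", data=df)
print(fit.mv_test())  # multivariate tests of the group effect
```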

  10. Body actions change the appearance of facial expressions.

    Directory of Open Access Journals (Sweden)

    Carlo Fantoni

    Full Text Available Perception, cognition, and emotion do not operate along segregated pathways; rather, their adaptive interaction is supported by various sources of evidence. For instance, the aesthetic appraisal of powerful mood inducers like music can bias the facial expression of emotions towards mood congruency. In four experiments we showed similar mood-congruency effects elicited by the comfort/discomfort of body actions. Using a novel Motor Action Mood Induction Procedure, we let participants perform comfortable/uncomfortable visually-guided reaches and tested them in a facial emotion identification task. Through the alleged mediation of motor-action-induced mood, action comfort enhanced the quality of the participant's global experience (a neutral face appeared happy and a slightly angry face neutral), while action discomfort made a neutral face appear angry and a slightly happy face neutral. Furthermore, uncomfortable (but not comfortable) reaching improved the sensitivity for the identification of emotional faces and reduced the identification time of facial expressions, as a possible effect of hyper-arousal from an unpleasant bodily experience.

  11. Laterality of Facial Expressions of Emotion: Universal and Culture-Specific Influences

    OpenAIRE

    Manas K. Mandal; Nalini Ambady

    2004-01-01

    Recent research indicates that (a) the perception and expression of facial emotion are lateralized to a great extent in the right hemisphere, and, (b) whereas facial expressions of emotion embody universal signals, culture-specific learning moderates the expression and interpretation of these emotions. In the present article, we review the literature on laterality and universality, and propose that, although some components of facial expressions of emotion are governed biologically, others ar...

  12. French-speaking children’s freely produced labels for facial expressions

    OpenAIRE

    Reem eMaassarani; Pierre eGosselin; Patricia eMontembeault; Mathieu eGagnon

    2014-01-01

    In this study, we investigated the labeling of facial expressions in French-speaking children. The participants were 137 French-speaking children, between the ages of 5 and 11 years, recruited from three elementary schools in Ottawa, Ontario, Canada. The facial expressions included expressions of happiness, sadness, fear, surprise, anger, and disgust. Participants were shown one facial expression at a time, and asked to say what the stimulus person was feeling. Participants’ responses were co...

  13. Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems

    Directory of Open Access Journals (Sweden)

    Muhammad Hameed Siddiqi

    2013-12-01

    Full Text Available Over the last decade, human facial expression recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; the need for automatic and accurate face detection before feature extraction; and high similarity among different expressions that makes it difficult to distinguish these expressions with high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expression recognition (HL-FER) system to tackle these problems. Unlike previous systems, the HL-FER uses a pre-processing step to eliminate light effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and utilizes hierarchical classification to overcome the problem of high similarity among different expressions. Unlike most previous works that were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross-validation based on subjects for each dataset separately; n-fold cross-validation across datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. A weighted average recognition accuracy of 98.7% across the three datasets, using three classifiers, indicates the success of employing the HL-FER for human FER.
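
    One ingredient of the evaluation, subject-wise n-fold cross-validation of an LDA classifier, can be sketched as follows; features, labels, and group sizes are random placeholders, and scikit-learn is an assumed tool rather than the authors' implementation.

```python
# Subject-wise n-fold cross-validation of LDA-based expression classification,
# so that no subject appears in both training and test folds.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 64))           # pre-processed facial features
y = rng.integers(0, 6, size=300)         # six expression classes
subjects = np.repeat(np.arange(30), 10)  # 30 subjects, 10 samples each

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, groups=subjects, cv=GroupKFold(n_splits=5))
print(scores.mean())
```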

  14. The Effect of Sad Facial Expressions on Weight Judgment

    Directory of Open Access Journals (Sweden)

    Trent D Weston

    2015-04-01

    Full Text Available Although the body weight evaluation (e.g., normal or overweight) of others relies on perceptual impressions, it can also be influenced by other psychosocial factors. In this study, we explored the effect of task-irrelevant emotional facial expressions on judgments of body weight and the relationship between emotion-induced weight judgment bias and other psychosocial variables, including attitudes towards obese persons. Forty-four participants were asked to quickly make binary body weight decisions for 960 randomized sad and neutral faces of varying weight levels presented on a computer screen. The results showed that sad facial expressions systematically decreased the decision threshold of overweight judgments for male faces. This perceptual decision bias by emotional expressions was positively correlated with the belief that being overweight is not under the control of obese persons. Our results provide experimental evidence that task-irrelevant emotional expressions can systematically change the decision threshold for weight judgments, demonstrating that sad expressions can make faces appear more overweight than they would otherwise be judged.
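
    The decision-threshold analysis can be illustrated by fitting a logistic psychometric function to the proportion of "overweight" judgments at each weight level, separately for sad and neutral faces; the 50% point is the threshold. All numbers below are fabricated to mirror the reported direction of the effect.

```python
# Fit a logistic psychometric function per condition; the 50% point (w50)
# is the overweight-decision threshold. Proportions are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(w, w50, slope):
    return 1.0 / (1.0 + np.exp(-slope * (w - w50)))

levels = np.array([1, 2, 3, 4, 5, 6], dtype=float)   # weight morph levels
p_neutral = np.array([.02, .10, .35, .70, .92, .99])
p_sad     = np.array([.05, .20, .55, .85, .97, .99])  # shifted leftward

(w50_n, _), _ = curve_fit(logistic, levels, p_neutral, p0=[3.5, 2.0])
(w50_s, _), _ = curve_fit(logistic, levels, p_sad, p0=[3.5, 2.0])
print(w50_s < w50_n)  # True: sad faces lower the overweight threshold
```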

  15. Analysis of facial expressions in Parkinson's disease through video-based automatic methods.

    Science.gov (United States)

    Bandini, Andrea; Orlandi, Silvia; Escalante, Hugo Jair; Giovannelli, Fabio; Cincotta, Massimo; Reyes-Garcia, Carlos A; Vanni, Paola; Zaccara, Gaetano; Manfredi, Claudia

    2017-04-01

    The automatic analysis of facial expressions is an evolving field that finds several clinical applications. One of these applications is the study of facial bradykinesia in Parkinson's disease (PD), which is a major motor sign of this neurodegenerative illness. Facial bradykinesia consists in the reduction/loss of facial movements and emotional facial expressions, called hypomimia. In this work we propose an automatic method for studying facial expressions in PD patients relying on video-based methods. METHODS: 17 Parkinsonian patients and 17 healthy control subjects were asked to show basic facial expressions, upon request of the clinician and after the imitation of a visual cue on a screen. Through an existing face tracker, the Euclidean distance of the facial model from a neutral baseline was computed in order to quantify the changes in facial expressivity during the tasks. Moreover, an automatic facial expression recognition algorithm was trained in order to study how PD expressions differed from the standard expressions. Results show that control subjects reported on average higher distances than PD patients across the tasks. This confirms that control subjects show larger movements during both posed and imitated facial expressions. Moreover, our results demonstrate that anger and disgust are the two most impaired expressions in PD patients. Contactless video-based systems can be important techniques for analyzing facial expressions also in rehabilitation, in particular speech therapy, where patients could get a definite advantage from real-time feedback about the proper facial expressions/movements to perform. Copyright © 2017 Elsevier B.V. All rights reserved.
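
    The expressivity measure, the Euclidean distance of the tracked facial model from a neutral baseline, reduces to a few lines of array arithmetic; the landmark arrays below are placeholders for a face tracker's output.

```python
# Mean Euclidean displacement of facial landmarks from a neutral baseline,
# computed per video frame. Landmark data are simulated placeholders.
import numpy as np

def expressivity(frames: np.ndarray, neutral: np.ndarray) -> np.ndarray:
    """frames: (n_frames, n_landmarks, 2); neutral: (n_landmarks, 2).
    Returns one displacement score per frame."""
    return np.linalg.norm(frames - neutral, axis=2).mean(axis=1)

neutral = np.zeros((68, 2))                       # e.g. a 68-point face model
frames = np.random.default_rng(2).normal(scale=1.5, size=(100, 68, 2))
scores = expressivity(frames, neutral)
print(scores.mean())  # PD patients would show lower values than controls
```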

  16. Altered Kinematics of Facial Emotion Expression and Emotion Recognition Deficits Are Unrelated in Parkinson's Disease

    Directory of Open Access Journals (Sweden)

    Matteo Bologna

    2016-12-01

    Full Text Available Background: Altered emotional processing, including reduced emotion facial expression and defective emotion recognition, has been reported in patients with Parkinson’s disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques. It is not known whether altered facial expression and recognition in PD are related. Objective: To investigate possible deficits in facial emotion expression and emotion recognition and their relationship, if any, in patients with PD. Methods: Eighteen patients with PD and 16 healthy controls were enrolled in the study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analysed using the facial action coding system. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores and clinical and demographic data in patients were evaluated using Spearman’s test and multiple regression analysis. Results: The facial expression of all six basic emotions had slower velocity and lower amplitude in patients in comparison to healthy controls (all Ps < 0.05). Finally, no relationship emerged between kinematic variables of facial emotion expression, the Ekman test scores and clinical and demographic data in patients (all Ps > 0.05). Conclusion: The present results provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.

  17. Misinterpretation of Facial Expressions of Emotion in Verbal Adults with Autism Spectrum Disorder

    Science.gov (United States)

    Eack, Shaun M.; Mazefsky, Carla A.; Minshew, Nancy J.

    2015-01-01

    Facial emotion perception is significantly affected in autism spectrum disorder, yet little is known about how individuals with autism spectrum disorder misinterpret facial expressions that result in their difficulty in accurately recognizing emotion in faces. This study examined facial emotion perception in 45 verbal adults with autism spectrum…

  18. Electromyographic responses to emotional facial expressions in 6-7 year olds: A feasibility study

    NARCIS (Netherlands)

    Deschamps, P.K.H.; Schutte, I.; Kenemans, J.L.; Matthys, W.C.H.J.; Schutter, D.J.L.G.

    2012-01-01

    Preliminary studies have demonstrated that school-aged children (average age 9-10 years) show mimicry responses to happy and angry facial expressions. The aim of the present study was to assess the feasibility of using facial electromyography (EMG) as a method to study facial mimicry responses in

  19. Discriminative shared Gaussian processes for multiview and view-invariant facial expression recognition.

    Science.gov (United States)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2015-01-01

    Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multiview and/or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers learned separately for each view or a single classifier learned for all views. However, these approaches ignore the fact that different views of a facial expression are just different manifestations of the same facial expression. By accounting for this redundancy, we can design more effective classifiers for the target task. To this end, we propose a discriminative shared Gaussian process latent variable model (DS-GPLVM) for multiview and view-invariant classification of facial expressions from multiple views. In this model, we first learn a discriminative manifold shared by multiple views of a facial expression. Subsequently, we perform facial expression classification in the expression manifold. Finally, classification of an observed facial expression is carried out either in the view-invariant manner (using only a single view of the expression) or in the multiview manner (using multiple views of the expression). The proposed model can also be used to perform fusion of different facial features in a principled manner. We validate the proposed DS-GPLVM on both posed and spontaneously displayed facial expressions from three publicly available datasets (MultiPIE, labeled face parts in the wild, and static facial expressions in the wild). We show that this model outperforms the state-of-the-art methods for multiview and view-invariant facial expression classification, and several state-of-the-art methods for multiview learning and feature fusion.
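
    As a very rough stand-in for the shared-manifold idea (substituting CCA for the paper's discriminative shared GPLVM), the sketch below projects two synthetic views of the same expressions into a common space, trains a classifier there on one view, and classifies the other view in that space. All data are synthetic and the method is a simplification, not the authors' model.

```python
# Shared-space sketch: CCA aligns two views, then a classifier trained on
# one view's projections labels the other view (view-invariant in spirit).
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
y = rng.integers(0, 6, size=200)                    # six expression classes
latent = rng.normal(size=(6, 5))[y] + 0.3 * rng.normal(size=(200, 5))
view_frontal = latent @ rng.normal(size=(5, 40))    # two "camera views"
view_profile = latent @ rng.normal(size=(5, 40))

cca = CCA(n_components=5).fit(view_frontal, view_profile)
z_frontal, z_profile = cca.transform(view_frontal, view_profile)

clf = KNeighborsClassifier(5).fit(z_frontal, y)     # learn in the shared space
print((clf.predict(z_profile) == y).mean())         # cross-view accuracy
```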

  20. Beyond Personality Traits: Which Facial Expressions Imply Dominance in Two-Person Interaction Scenes?

    Science.gov (United States)

    Ueda, Yoshiyuki; Yoshikawa, Sakiko

    2017-09-04

    The ability to perceive a person's dominance plays an important role in survival and pro-social behavior. Perceived dominance has been investigated via assessments of facial expressions in 1-on-1 interaction situations, with expressions of anger and disgust judged to be more dominant. Given that human social interactions are complex, and multiple individuals interact at the same time, we investigated perceptions of trait dominance (an individual's competence and tendency to engage in dominant behavior) and relative dominance (an individual's social dominance within a social group). Participants were asked to rate the trait dominance of individuals depicted in pictorial stimuli. Results indicated that participants judged individuals expressing anger and disgust higher on trait dominance than individuals expressing happiness. Interestingly, when participants judged which of 2 individuals were more dominant in a confrontation scene, they judged individuals with happy expressions to be more dominant. These perceptions were consistent independent of the overall context. These results suggest that humans perceive social dominance without comparing personality trait dominance, and that criteria for evaluating social and personality trait dominance differ. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Modèles de choix discrets pour la reconnaissance des expressions faciales statiques

    OpenAIRE

    Danalet, Antonin

    2007-01-01

    This semester project presents the use of discrete choice models to build a model of the perception of static facial expressions that could potentially be used for the recognition and classification of these expressions. The description of these expressions draws on Paul Ekman's Facial Action Coding System (FACS), which is based on an anatomical analysis of facial action. The choice set contains the six universal facial expressions plus the neutral expression. Each a...

  2. Fear modulates visual awareness similarly for facial and bodily expressions

    Directory of Open Access Journals (Sweden)

    Bernard M.C. Stienen

    2011-11-01

    Full Text Available Background: Social interaction depends on a multitude of signals carrying information about the emotional state of others. Past research has focused on the perception of facial expressions, while perception of whole-body signals has only been studied recently. The relative importance of facial and bodily signals is still poorly understood. In order to better understand the relative contribution of affective signals from the face only or from the rest of the body, we used a binocular rivalry experiment. This method is well suited to contrasting two classes of stimuli to test our processing sensitivity to either stimulus and to address the question of how emotion modulates this sensitivity. We report in this paper two behavioral experiments addressing these questions. Method: In the first experiment we directly contrasted fearful, angry and neutral bodies and faces. We always presented bodies in one eye and faces in the other simultaneously for 60 seconds and asked participants to report what they perceived. In the second experiment we focused specifically on the role of fearful expressions of faces and bodies. Results: Taken together the two experiments show that there is no clear bias towards either the face or body when the expression of the body and face are neutral or angry. However, the perceptual dominance in favor of either the face or the body is a function of the stimulus class expressing fear.

  3. Recognition of emotion in facial expression by people with Prader-Willi syndrome.

    Science.gov (United States)

    Whittington, J; Holland, T

    2011-01-01

    People with Prader-Willi syndrome (PWS) may have mild intellectual impairments but less is known about their social cognition. Most parents/carers report that people with PWS do not have normal peer relationships, although some have older or younger friends. Two specific aspects of social cognition are being able to recognise other people's emotion and to then respond appropriately. In a previous study, mothers/carers thought that 26% of children and 23% of adults with PWS would not respond to others' feelings. They also thought that 64% could recognise happiness, sadness, anger and fear and a further 30% could recognise happiness and sadness. However, reports of emotion recognition and response to emotion were partially dissociated. It was therefore decided to test facial emotion recognition directly. The participants were 58 people of all ages with PWS. They were shown a total of 20 faces, each depicting one of the six basic emotions and asked to say what they thought that person was feeling. The faces were shown one at a time in random order and each was accompanied by a reminder of the six basic emotions. This cohort of people with PWS correctly identified 55% of the different facial emotions. These included 90% of happy faces, 55% each of sad and surprised faces, 43% of disgusted faces, 40% of angry faces and 37% of fearful faces. Genetic subtype differences were found only in the predictors of recognition scores, not in the scores themselves. Selective impairment was found in fear recognition for those with PWS who had had a depressive illness and in anger recognition for those with PWS who had had a psychotic illness. The inability to read facial expressions of emotion is a deficit in social cognition apparent in people with PWS. This may be a contributing factor in their difficulties with peer relationships. © 2010 The Authors. Journal of Intellectual Disability Research © 2010 Blackwell Publishing Ltd.

  4. Personality Trait and Facial Expression Filter-Based Brain-Computer Interface

    OpenAIRE

    Seongah Chin; Chung-Yeon Lee

    2013-01-01

    In this paper, we present technical approaches that bridge the gap in the research related to the use of brain‐computer interfaces for entertainment and facial expressions. Such facial expressions that reflect an individual’s personal traits can be used to better realize artificial facial expressions in a gaming environment based on a brain‐computer interface. First, an emotion extraction filter is introduced in order to classify emotions on the basis of the users’ brain signals in real time....

  5. Facial expression recognition based on improved local ternary pattern and stacked auto-encoder

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to enhance the robustness of facial expression recognition, we propose a method based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). The method extracts features with the improved LTP operator and uses an improved deep belief network as the detector and classifier of the LTP features, realizing the combination of LTP and the improved deep belief network for facial expression recognition. The recognition rate on the CK+ database improved significantly.
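
    For orientation, the basic (not the paper's "improved") local ternary pattern for a single 3x3 neighbourhood looks like the sketch below: neighbours within ±t of the centre code 0, above it +1, below it -1, and the ternary code is conventionally split into two binary codes.

```python
# Basic LTP for one 3x3 patch; the +1s and -1s are encoded as two
# LBP-style 8-bit codes. This is the textbook operator, not the improved one.
import numpy as np

def ltp_codes(patch: np.ndarray, t: float = 5.0):
    """patch: 3x3 grey-level patch. Returns (upper, lower) 8-bit codes."""
    c = patch[1, 1]
    # 8 neighbours in a fixed clockwise order
    n = patch[[0, 0, 0, 1, 2, 2, 2, 1], [0, 1, 2, 2, 2, 1, 0, 0]].astype(float)
    tern = np.where(n > c + t, 1, np.where(n < c - t, -1, 0))
    weights = 2 ** np.arange(8)
    upper = int(((tern == 1) * weights).sum())   # +1s as a binary pattern
    lower = int(((tern == -1) * weights).sum())  # -1s as a binary pattern
    return upper, lower

patch = np.array([[52, 60, 49], [55, 54, 61], [40, 54, 48]])
print(ltp_codes(patch))
```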

  6. Human Empathy, Personality and Experience Affect the Emotion Ratings of Dog and Human Facial Expressions

    OpenAIRE

    Kujala, Miiamaaria V.; Somppi, Sanni; Jokela, Markus; Vainio, Outi; Parkkonen, Lauri

    2017-01-01

    Facial expressions are important for humans in communicating emotions to the conspecifics and enhancing interpersonal understanding. Many muscles producing facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions, and which psychological factors influence people’s perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggr...

  7. Face recognition using facial expression: a novel approach

    Science.gov (United States)

    Singh, Deepak Kumar; Gupta, Priya; Tiwary, U. S.

    2008-04-01

    Facial expressions are undoubtedly the most effective nonverbal communication. The face has always been the equation of a person's identity. The face draws the demarcation line between identity and extinction. Each line on the face adds an attribute to the identity. These lines become prominent when we experience an emotion and these lines do not change completely with age. In this paper we have proposed a new technique for face recognition which focuses on the facial expressions of the subject to identify his face. This is a grey area on which not much light has been thrown earlier. According to earlier researches it is difficult to alter the natural expression. So our technique will be beneficial for identifying occluded or intentionally disguised faces. The test results of the experiments conducted prove that this technique will give a new direction in the field of face recognition. This technique will provide a strong base to the area of face recognition and will be used as the core method for critical defense security related issues.

  8. Exaggerated perception of facial expressions is increased in individuals with schizotypal traits

    Science.gov (United States)

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2015-01-01

    Emotional facial expressions are indispensable communicative tools, and social interactions involving facial expressions are impaired in some psychiatric disorders. Recent studies revealed that the perception of dynamic facial expressions was exaggerated in normal participants, and this exaggerated perception is weakened in autism spectrum disorder (ASD). Based on the notion that ASD and schizophrenia spectrum disorder are at two extremes of the continuum with respect to social impairment, we hypothesized that schizophrenic characteristics would strengthen the exaggerated perception of dynamic facial expressions. To test this hypothesis, we investigated the relationship between the perception of facial expressions and schizotypal traits in a normal population. We presented dynamic and static facial expressions, and asked participants to change an emotional face display to match the perceived final image. The presence of schizotypal traits was positively correlated with the degree of exaggeration for dynamic, as well as static, facial expressions. Among its subscales, the paranoia trait was positively correlated with the exaggerated perception of facial expressions. These results suggest that schizotypal traits, specifically the tendency to over-attribute mental states to others, exaggerate the perception of emotional facial expressions. PMID:26135081

  9. Memory for facial expression is influenced by the background music playing during study

    National Research Council Canada - National Science Library

    Woloszyn, Michael R; Ewert, Laura

    2012-01-01

    .... Although memory for faces was very accurate, emotionally incongruent background music biased subsequent memory for facial expressions, increasing the likelihood that happy faces were recalled as sad...

  10. Face to face: blocking facial mimicry can selectively impair recognition of emotional expressions.

    Science.gov (United States)

    Oberman, Lindsay M; Winkielman, Piotr; Ramachandran, Vilayanur S

    2007-01-01

    People spontaneously mimic a variety of behaviors, including emotional facial expressions. Embodied cognition theories suggest that mimicry reflects internal simulation of perceived emotion in order to facilitate its understanding. If so, blocking facial mimicry should impair recognition of expressions, especially of emotions that are simulated using facial musculature. The current research tested this hypothesis using four expressions (happy, disgust, fear, and sad) and two mimicry-interfering manipulations: (1) biting on a pen and (2) chewing gum, as well as two control conditions. Experiment 1 used electromyography over cheek, mouth, and nose regions. The bite manipulation consistently activated the assessed muscles, whereas the chew manipulation activated muscles only intermittently. Further, expressing happiness generated the most facial action. Experiment 2 found that the bite manipulation interfered most with recognition of happiness. These findings suggest that facial mimicry differentially contributes to recognition of specific facial expressions, thus allowing for more refined predictions from embodied cognition theories.

  11. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Full Text Available Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To test the performance of the presented method, local binary patterns (LBP) and raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbor (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better than the other methods on facial expression recognition tasks.
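
    The classification rule can be sketched directly with SciPy's non-negative least squares: code the test sample over the training columns with non-negative coefficients, then pick the class whose columns reconstruct it with the smallest residual. Features here are random placeholders for the LBP or raw-pixel features used in the paper.

```python
# NNLS sparse-coding classifier sketch: class with the smallest
# class-wise reconstruction residual wins. Data are random placeholders.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_feat, per_class, n_classes = 50, 8, 3
train = np.abs(rng.normal(size=(n_feat, per_class * n_classes)))  # columns = samples
labels = np.repeat(np.arange(n_classes), per_class)
test = train[:, 5] + 0.01 * rng.normal(size=n_feat)               # near class 0

coef, _ = nnls(train, test)                  # min ||test - train@coef||, coef >= 0
residuals = [np.linalg.norm(test - train[:, labels == k] @ coef[labels == k])
             for k in range(n_classes)]
print(int(np.argmin(residuals)))             # -> 0
```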

  12. Facial expression recognition and emotional regulation in narcolepsy with cataplexy.

    Science.gov (United States)

    Bayard, Sophie; Croisier Langenier, Muriel; Dauvilliers, Yves

    2013-04-01

    Cataplexy is pathognomonic of narcolepsy with cataplexy, and is defined by a transient loss of muscle tone triggered by strong emotions. Recent research suggests abnormal amygdala function in narcolepsy with cataplexy. Emotion processing and emotional regulation strategies are complex functions involving cortical and limbic structures, like the amygdala. As the amygdala has been shown to play a role in facial emotion recognition, we tested the hypothesis that patients with narcolepsy with cataplexy would have impaired recognition of facial emotional expressions compared with patients affected with central hypersomnia without cataplexy and healthy controls. We also aimed to determine whether cataplexy modulates emotional regulation strategies. Emotional intensity, arousal and valence ratings on Ekman faces displaying happiness, surprise, fear, anger, disgust, sadness and neutral expressions of 21 drug-free patients with narcolepsy with cataplexy were compared with 23 drug-free sex-, age- and intellectual-level-matched adult patients with hypersomnia without cataplexy and 21 healthy controls. All participants underwent polysomnography recording and multiple sleep latency tests, and completed depression, anxiety and emotional regulation questionnaires. Performance of patients with narcolepsy with cataplexy did not differ from patients with hypersomnia without cataplexy or healthy controls on both intensity rating of each emotion on its prototypical label and mean ratings for valence and arousal. Moreover, patients with narcolepsy with cataplexy did not use different emotional regulation strategies. The level of depressive and anxious symptoms in narcolepsy with cataplexy did not differ from the other groups. Our results demonstrate that narcolepsy with cataplexy accurately perceives and discriminates facial emotions, and regulates emotions normally. The absence of alteration of perceived affective valence remains a major clinical interest in narcolepsy with cataplexy.

  13. Shared Gaussian Process Latent Variable Model for Multi-view Facial Expression Recognition

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2013-01-01

    Facial-expression data often appear in multiple views either due to head-movements or the camera position. Existing methods for multi-view facial expression recognition perform classification of the target expressions either by using classifiers learned separately for each view or by using a single

  14. Discriminative shared Gaussian processes for multi-view and view-invariant facial expression recognition

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2015-01-01

    Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multiview and/or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers

  15. View-constrained latent variable model for multi-view facial expression classification

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2014-01-01

    We propose a view-constrained latent variable model for multi-view facial expression classification. In this model, we first learn a discriminative manifold shared by multiple views of facial expressions, followed by the expression classification in the shared manifold. For learning, we use the expr

  16. Pain-related bias in the classification of emotionally ambiguous facial expressions in mothers of children with chronic abdominal pain.

    Science.gov (United States)

    Liossi, Christina; White, Paul; Croome, Natasha; Hatira, Popi

    2012-03-01

    This study sought to determine whether mothers of young people with chronic abdominal pain (CAP) compared to mothers of pain-free children show a pain recognition bias when they classify facial emotional expressions. One hundred demographically matched mothers of children with CAP (n=50) and control mothers (n=50) were asked to identify different emotions expressed by adults in 2 experiments. In experiment 1, participants were required to identify the emotion in a series of facial images that depicted 100% intensity of the following emotions: Pain, Sadness, Anger, Fear, Happiness, and Neutral. In experiment 2, mothers were required to identify the predominant emotion in a series of computer-interpolated ("morphed") facial images. In this experiment, pain was combined with Sad, Angry, Fearful, Happy, and Neutral facial expressions in different proportions, that is, 90%:10%, 70%:30%, 50%:50%, 30%:70%, 10%:90%. All participants completed measures of state and trait anxiety, depression, and anxiety sensitivity. In experiment 1, there was no difference in the performance of the 2 groups of mothers. In experiment 2, it was found that overall mothers of children with CAP were classifying ambiguous emotional expressions predominantly as pain. Mean response times for CAP and control groups did not differ significantly. Mothers of children with CAP did not report more anxiety, depression, and anxiety sensitivity compared to control mothers. It is concluded that mothers of children with CAP show a pain bias when interpreting ambiguous emotional expressions, which possibly contributes to the maintenance of this condition in children via specific parenting behaviours.

  17. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    Science.gov (United States)

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions.

  18. Face in profile view reduces perceived facial expression intensity: an eye-tracking study.

    Science.gov (United States)

    Guo, Kun; Shaw, Heather

    2015-02-01

    Recent studies measuring the facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, having a mechanism which allows invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because diagnostic cues from local facial features for decoding expressions could vary with viewpoints. Here we manipulated orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgust and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although quantitatively viewpoint had expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that the viewpoint-invariant facial expression processing is categorical perception, which could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues.

  19. Automatic change detection to facial expressions in adolescents

    DEFF Research Database (Denmark)

    Liu, Tongran; Xiao, Tong; Jiannong, Shi

    2016-01-01

    Adolescence is a critical period for the neurodevelopment of social-emotional processing, wherein the automatic detection of changes in facial expressions is crucial for the development of interpersonal communication. Two groups of participants (an adolescent group and an adult group) were recruited to complete an emotional oddball task featuring one happy and one fearful condition. The measurement of event-related potentials was carried out via electroencephalography and electrooculography recording, to detect visual mismatch negativity (vMMN) with regard to the automatic detection of changes... automatic processing of fearful faces than happy faces. The present study indicated that adolescents possess stronger automatic detection of changes in emotional expression relative to adults, and sheds light on the neurodevelopment of automatic processes concerning social-emotional information....
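
    The vMMN measure itself is just a difference wave, the average response to rare (deviant) faces minus the response to standards, as in the simulated example below; the epoch counts, time window, and injected effect are invented.

```python
# vMMN sketch: deviant-minus-standard difference wave at one electrode.
# Epochs are simulated; a negativity is injected into the deviant trials.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(-100, 400)                       # ms relative to face onset
standard = rng.normal(size=(180, t.size))      # trials x time
deviant = rng.normal(size=(20, t.size))
deviant[:, 250:350] -= 1.0                     # injected negativity, 150-249 ms

vmmn = deviant.mean(axis=0) - standard.mean(axis=0)
window = (t >= 150) & (t < 250)
print(vmmn[window].mean())                     # more negative -> stronger vMMN
```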

  1. Recognition of static and dynamic facial expressions: Influences of sex, type and intensity of emotion

    OpenAIRE

    2013-01-01

    Ecological validity of static and intense facial expressions in emotional recognition has been questioned. Recent studies have recommended the use of facial stimuli more compatible to the natural conditions of social interaction, which involves motion and variations in emotional intensity. In this study, we compared the recognition of static and dynamic facial expressions of happiness, fear, anger and sadness, presented in four emotional intensities (25 %, 50 %, 75 % and 100 %). Twenty volunt...

  2. Facial expression of positive emotions in individuals with eating disorders.

    Science.gov (United States)

    Dapelo, Marcela M; Hart, Sharon; Hale, Christiane; Morris, Robin; Lynch, Thomas R; Tchanturia, Kate

    2015-11-30

    A large body of research has associated Eating Disorders with difficulties in socio-emotional functioning and it has been argued that they may serve to maintain the illness. This study aimed to explore facial expressions of positive emotions in individuals with Anorexia Nervosa (AN) and Bulimia Nervosa (BN) compared to healthy controls (HC), through an examination of the Duchenne smile (DS), which has been associated with feelings of enjoyment, amusement and happiness (Ekman et al., 1990). Sixty participants (AN=20; BN=20; HC=20) were videotaped while watching a humorous film clip. The duration and intensity of DS were subsequently analyzed using the facial action coding system (FACS) (Ekman and Friesen, 2003). Participants with AN displayed DS for shorter durations than BN and HC participants, and their DS had lower intensity. In the clinical groups, lower duration and intensity of DS were associated with lower BMI, and use of psychotropic medication. The study is the first to explore DS in people with eating disorders, providing further evidence of difficulties in the socio-emotional domain in people with AN. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  3. Nasal-Temporal Asymmetries in Suprathreshold Facial Expressions of Emotion

    Directory of Open Access Journals (Sweden)

    I Ntonia

    2014-08-01

    Full Text Available In this study, we investigated nasal-temporal asymmetries resulting from exposure to visible facial expressions of emotion. The literature has so far reported attentional asymmetries in spatial perception, and nasal-temporal asymmetries resulting from backward-masked visual stimuli activated through non-conscious perception. To our knowledge, however, no attempt has been made to test such asymmetries with un-masked, consciously visible emotional stimuli. Here, we report on response differences in binocular and monocular viewing in the perception of visible facial expressions of affect. In Study 1, 24 right-handed adults completed a speeded forced-choice paradigm, in which they binocularly viewed bilateral displays of a neutral face in one hemifield, simultaneously paired either with a happy or angry face of variable emotional salience in the opposite hemifield, for 50 ms; the task was to indicate the left-right location of the emotional face. In Study 2 (N=23, right-handed), participants completed the same paradigm monocularly while alternately patching either the left or the right eye in successive blocks. Under binocular viewing, we found an overall advantage for localising happy faces, further intensified when displayed in the right visual hemifield, and evident even when the emotional expression was extremely subtle. For monocularly viewed stimuli, we observed a response latency asymmetry with faster responses to temporally viewed happy faces. However there was higher overall accuracy for nasal stimuli regardless of emotion, further intensified when the emotional face appeared on the left. Our findings show that the nasal-temporal asymmetries previously associated exclusively with the non-conscious perception of emotional stimuli manifest as ‘trigger-happy’ responses when emotional stimuli become visible. This asymmetrical reduction in response latency for nasal stimuli could be attributed to an overall attentional bias towards the temporal area

  4. The mysterious noh mask: contribution of multiple facial parts to the recognition of emotional expressions.

    Directory of Open Access Journals (Sweden)

    Hiromitsu Miyata

    Full Text Available BACKGROUND: A Noh mask, worn by expert actors when performing in a traditional Japanese Noh drama, is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. METHODOLOGY/PRINCIPAL FINDINGS: In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward-tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared features with happy expressions. This contingency tended to be reversed for a downward-tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images whose different facial parts expressed different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. Facial images having the mouth of an upward/downward-tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. CONCLUSIONS/SIGNIFICANCE: The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly appreciates subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. This indicates the superiority of biologically-driven factors over the

  5. Unobtrusive multimodal emotion detection in adaptive interfaces: speech and facial expressions

    NARCIS (Netherlands)

    Truong, K.P.; Leeuwen, D.A. van; Neerincx, M.A.

    2007-01-01

    Two unobtrusive modalities for automatic emotion recognition are discussed: speech and facial expressions. First, an overview is given of emotion recognition studies based on a combination of speech and facial expressions. We will identify difficulties concerning data collection, data fusion, system

  6. Recognition of Facial Expressions and Prosodic Cues with Graded Emotional Intensities in Adults with Asperger Syndrome

    Science.gov (United States)

    Doi, Hirokazu; Fujisawa, Takashi X.; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-01-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group…

  8. Brief Report: Representational Momentum for Dynamic Facial Expressions in Pervasive Developmental Disorder

    Science.gov (United States)

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2010-01-01

    Individuals with pervasive developmental disorder (PDD) have difficulty with social communication via emotional facial expressions, but behavioral studies involving static images have reported inconsistent findings about emotion recognition. We investigated whether dynamic presentation of facial expression would enhance subjective perception of…

  9. Do Dynamic Facial Expressions Convey Emotions to Children Better than Do Static Ones?

    Science.gov (United States)

    Widen, Sherri C.; Russell, James A.

    2015-01-01

    Past research has shown that children recognize emotions from facial expressions poorly and improve only gradually with age, but the stimuli in such studies have been static faces. Because dynamic faces include more information, it may well be that children more readily recognize emotions from dynamic facial expressions. The current study of…

  10. Preschooler's Faces in Spontaneous Emotional Contexts--How Well Do They Match Adult Facial Expression Prototypes?

    Science.gov (United States)

    Gaspar, Augusta; Esteves, Francisco G.

    2012-01-01

    Prototypical facial expressions of emotion, also known as universal facial expressions, are the underpinnings of most research concerning recognition of emotions in both adults and children. Data on natural occurrences of these prototypes in natural emotional contexts are rare and difficult to obtain in adults. By recording naturalistic…

  11. Does Gaze Direction Modulate Facial Expression Processing in Children with Autism Spectrum Disorder?

    Science.gov (United States)

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent…

  12. Impact of Childhood Maltreatment on the Recognition of Facial Expressions of Emotions.

    Science.gov (United States)

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Evangelista, Valentina; Ravera, Roberto; Gallese, Vittorio

    2015-01-01

    The development of the explicit recognition of facial expressions of emotions can be affected by childhood maltreatment experiences. A previous study demonstrated the existence of an explicit recognition bias for angry facial expressions among a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expression recognition task. Recognition bias was measured as participants' tendency to over-attribute the anger label to other negative facial expressions. Participants' heart rate was assessed and related to their behavioral performance, as an index of their stress-related physiological responses. Results demonstrated the presence of a recognition bias for angry facial expressions among street-children, also pinpointing a similar, although significantly less pronounced, tendency among controls. Participants' performance was controlled for age, cognitive and educational levels, and naming skills. None of these variables influenced the recognition bias for angry facial expressions. In contrast, a significant effect of heart rate on participants' tendency to use the anger label was evidenced. Taken together, these results suggest that childhood exposure to maltreatment experiences amplifies children's "pre-existing bias" for anger labeling in a forced-choice emotion recognition task. Moreover, they strengthen the thesis according to which the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes the victim's perceptual and attentional focus to salient environmental social stimuli.

  13. Predicting advertising effectiveness by facial expressions in response to amusing persuasive stimuli

    NARCIS (Netherlands)

    Lewinski, P.; Fransen, M.L.; Tan, E.S.H.

    2014-01-01

    We present a psychophysiological study of facial expressions of happiness (FEH) produced by advertisements using the FaceReader system (Noldus, 2013) for automatic analysis of facial expressions of basic emotions (FEBE; Ekman, 1972). FaceReader scores were associated with self-reports of the adverti

  14. Mu desynchronization during observation and execution of facial expressions in 30-month-old children

    Directory of Open Access Journals (Sweden)

    Holly Rayson

    2016-06-01

    Full Text Available Simulation theories propose that observing another’s facial expression activates sensorimotor representations involved in the execution of that expression, facilitating recognition processes. The mirror neuron system (MNS is a potential mechanism underlying simulation of facial expressions, with like neural processes activated both during observation and performance. Research with monkeys and adult humans supports this proposal, but so far there have been no investigations of facial MNS activity early in human development. The current study used electroencephalography (EEG to explore mu rhythm desynchronization, an index of MNS activity, in 30-month-old children as they observed videos of dynamic emotional and non-emotional facial expressions, as well as scrambled versions of the same videos. We found significant mu desynchronization in central regions during observation and execution of both emotional and non-emotional facial expressions, which was right-lateralized for emotional and bilateral for non-emotional expressions during observation. These findings support previous research suggesting movement simulation during observation of facial expressions, and are the first to provide evidence for sensorimotor activation during observation of facial expressions, consistent with a functioning facial MNS at an early stage of human development.
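
    The mu-rhythm index referred to above is conventionally quantified as event-related desynchronization (ERD): the percentage change in 8-13 Hz band power during observation or execution relative to a baseline window. The following is a minimal sketch of that computation, not the authors' pipeline; the sampling rate, window lengths, and synthetic signals are illustrative assumptions.

        import numpy as np
        from scipy.signal import welch

        def band_power(segment, fs, lo=8.0, hi=13.0):
            # Estimate mean power in the mu band (8-13 Hz) via Welch's method.
            freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), fs))
            mask = (freqs >= lo) & (freqs <= hi)
            return psd[mask].mean()

        def mu_erd_percent(baseline, event, fs=500):
            # ERD% = (event - baseline) / baseline * 100; negative values
            # indicate desynchronization (a power drop during the event).
            p_base = band_power(baseline, fs)
            p_event = band_power(event, fs)
            return (p_event - p_base) / p_base * 100.0

        # Illustrative use with synthetic single-channel EEG segments (1 s each).
        rng = np.random.default_rng(0)
        baseline = rng.standard_normal(500)
        event = 0.7 * rng.standard_normal(500)  # attenuated mu power
        print(f"ERD: {mu_erd_percent(baseline, event):.1f}%")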

  15. Compound facial expressions of emotion: from basic research to clinical applications.

    Science.gov (United States)

    Du, Shichuan; Martinez, Aleix M

    2015-12-01

    Emotions are sometimes revealed through facial expressions. When these natural facial articulations involve the contraction of the same muscle groups in people of distinct cultural upbringings, this is taken as evidence of a biological origin of these emotions. While past research had identified facial expressions associated with a single internally felt category (e.g., the facial expression of happiness when we feel joyful), we have recently studied facial expressions observed when people experience compound emotions (e.g., the facial expression of happy surprise when we feel joyful in a surprised way, as, for example, at a surprise birthday party). Our research has identified 17 compound expressions consistently produced across cultures, suggesting that the number of facial expressions of emotion of biological origin is much larger than previously believed. The present paper provides an overview of these findings and shows evidence supporting the view that spontaneous expressions are produced using the same facial articulations previously identified in laboratory experiments. We also discuss the implications of our results in the study of psychopathologies, and consider several open research questions.

  17. Effectiveness of Teaching Naming Facial Expression to Children with Autism via Video Modeling

    Science.gov (United States)

    Akmanoglu, Nurgul

    2015-01-01

    This study examines the effectiveness of teaching the naming of emotional facial expressions via video modeling to children with autism. Teaching the naming of emotions (happy, sad, scared, disgusted, surprised, feeling physical pain, and bored) was carried out by creating situations that lead to the emergence of facial expressions to children…

  18. Reduced capacity in automatic processing of facial expression in restrictive anorexia nervosa and obesity

    NARCIS (Netherlands)

    Cserjesi, Renata; Vermeulen, Nicolas; Lenard, Laszlo; Luminet, Olivier

    2011-01-01

    There is growing evidence that disordered eating is associated with facial expression recognition and emotion processing problems. In this study, we investigated the question of whether anorexia and obesity occur on a continuum of attention bias towards negative facial expressions in comparison with

  1. The influence of the context on facial expressions perception: a behavioral study on the Kuleshov effect

    DEFF Research Database (Denmark)

    Calbi, Marta; Heimann, Katrin; Barratt, Daniel

    2017-01-01

    Facial expressions are of major importance to understanding the emotions, intentions, and mental states of others. Strikingly, so far most studies on the perception and comprehension of emotions have used isolated facial expressions as stimuli; for example, photographs of actors displaying facial expressions belonging to one of the so-called ‘basic emotions’. However, our real experience during social interactions is different: facial expressions of emotion are mostly perceived in a wider context, constituted by body language, the surrounding environment, and our beliefs and expectations. Already in the early twentieth century, the Russian filmmaker Lev Kuleshov argued that such context could significantly change our interpretation of facial expressions. Prior experiments have shown behavioral effects pointing in this direction, but have only used static images portraying some basic emotions. In our…

  2. Modulation of perception and brain activity by predictable trajectories of facial expressions.

    Science.gov (United States)

    Furl, N; van Rijsbergen, N J; Kiebel, S J; Friston, K J; Treves, A; Dolan, R J

    2010-03-01

    People track facial expression dynamics with ease to accurately perceive distinct emotions. Although the superior temporal sulcus (STS) appears to possess mechanisms for perceiving changeable facial attributes such as expressions, the nature of the underlying neural computations is not known. Motivated by novel theoretical accounts, we hypothesized that visual and motor areas represent expressions as anticipated motion trajectories. Using magnetoencephalography, we show predictable transitions between fearful and neutral expressions (compared with scrambled and static presentations) heighten activity in visual cortex as quickly as 165 ms poststimulus onset and later (237 ms) engage fusiform gyrus, STS and premotor areas. Consistent with proposed models of biological motion representation, we suggest that visual areas predictively represent coherent facial trajectories. We show that such representations bias emotion perception of subsequent static faces, suggesting that facial movements elicit predictions that bias perception. Our findings reveal critical processes evoked in the perception of dynamic stimuli such as facial expressions, which can endow perception with temporal continuity.

  3. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults.

    Science.gov (United States)

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development: the Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions (angry, fearful, sad, happy, surprised, and disgusted) and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  4. Emotional Verbalization and Identification of Facial Expressions in Teenagers’ Communication

    Directory of Open Access Journals (Sweden)

    I. S. Ivanova

    2013-01-01

    Full Text Available The paper emphasizes the need for studying the subjective effectiveness criteria of interpersonal communication and the importance of effective communication for personality development in adolescence. The problem of the undeveloped representation of positive emotions in the communication process is discussed. Both the identification and verbalization of emotions are regarded by the author as basic communication skills. The experimental data regarding longitudinal and age-level differences are described, and gender differences in the identification and verbalization of emotions are considered. The outcomes of the experimental study demonstrate that the accuracy with which teenage boys and girls identify facial emotional expressions changes at different rates. The prospects of defining age norms for the identification and verbalization of emotions are analyzed.

  5. Paedomorphic facial expressions give dogs a selective advantage.

    Directory of Open Access Journals (Sweden)

    Bridget M Waller

    Full Text Available How wolves were first domesticated is unknown. One hypothesis suggests that wolves underwent a process of self-domestication by tolerating human presence and taking advantage of scavenging possibilities. The puppy-like physical and behavioural traits seen in dogs are thought to have evolved later, as a byproduct of selection against aggression. Using speed of selection from rehoming shelters as a proxy for artificial selection, we tested whether paedomorphic features give dogs a selective advantage in their current environment. Dogs who exhibited facial expressions that enhance their neonatal appearance were preferentially selected by humans. Thus, early domestication of wolves may have occurred not only as wolf populations became tamer, but also as they exploited human preferences for paedomorphic characteristics. These findings, therefore, add to our understanding of early dog domestication as a complex co-evolutionary process.

  6. [Facial expressions of negative emotions in clinical interviews: The development, reliability and validity of a categorical system for the attribution of functions to facial expressions of negative emotions].

    Science.gov (United States)

    Bock, Astrid; Huber, Eva; Peham, Doris; Benecke, Cord

    2015-01-01

    We report the development (Study 1) and validation (Study 2) of a categorical system for the attribution of facial expressions of negative emotions to specific functions. The facial expressions observed in OPD interviews (OPD Task Force 2009) are coded according to the Facial Action Coding System (FACS; Ekman et al. 2002) and attributed to categories of basic emotional displays using EmFACS (Friesen & Ekman 1984). In Study 1 we analyze a partial sample of 20 interviews and postulate 10 categories of functions that can be arranged into three main categories (interactive, self and object). In Study 2 we rate the facial expressions (n=2320) from the OPD interviews (10 minutes per interview) of 80 female subjects (16 healthy, 64 with a DSM-IV diagnosis; age: 18-57 years) according to the categorical system and correlate them with problematic relationship experiences (measured with the IIP; Horowitz et al. 2000). Functions of negative facial expressions can be attributed reliably and validly with the RFE-Coding System. The attribution of interactive, self-related and object-related functions allows for a deeper understanding of the emotional facial expressions of patients with mental disorders.

  7. Faces and bodies: perception and mimicry of emotionally congruent and incongruent facial and bodily expressions

    Directory of Open Access Journals (Sweden)

    Mariska Kret

    2013-02-01

    Full Text Available Traditional emotion theories stress the importance of the face in the expression of emotions, but bodily expressions are becoming increasingly important. Here we tested the hypothesis that similar physiological responses can be evoked by observing emotional face and body signals and that the reaction to angry signals is amplified in anxious individuals. We designed three experiments in which participants categorized emotional expressions from isolated facial and bodily expressions and from emotionally congruent and incongruent face-body compounds. Participants' fixations were measured and their pupil size recorded with eye-tracking equipment, and their facial reactions measured with electromyography (EMG). The behavioral results support our prediction that the recognition of a facial expression is improved in the context of a matching posture and, importantly, also vice versa. From their facial expressions, it appeared that observers reacted with signs of negative emotionality (increased corrugator activity) to angry and fearful facial expressions and with positive emotionality (increased zygomaticus activity) to happy facial expressions. As we predicted and found, angry and fearful cues from the face or the body attracted more attention than happy cues. We further observed that responses evoked by angry cues were amplified in individuals with high anxiety scores. In sum, we show that people process bodily expressions of emotion in a similar fashion as facial expressions and that congruency between the emotional signals from the face and body improves recognition of the emotion.

  8. A Developmental Examination of Amygdala Response to Facial Expressions

    Science.gov (United States)

    Guyer, Amanda E.; Monk, Christopher S.; McClure-Tone, Erin B.; Nelson, Eric E.; Roberson-Nay, Roxann; Adler, Abby D.; Fromm, Stephen J.; Leibenluft, Ellen; Pine, Daniel S.; Ernst, Monique

    2010-01-01

    Several lines of evidence implicate the amygdala in face–emotion processing, particularly for fearful facial expressions. Related findings suggest that face–emotion processing engages the amygdala within an interconnected circuitry that can be studied using a functional-connectivity approach. Past work also underscores important functional changes in the amygdala during development. Taken together, prior research on amygdala function and development reveals a need for more work examining developmental changes in the amygdala’s response to fearful faces and in amygdala functional connectivity during face processing. The present study used event-related functional magnetic resonance imaging to compare 31 adolescents (9–17 years old) and 30 adults (21–40 years old) on activation to fearful faces in the amygdala and other regions implicated in face processing. Moreover, these data were used to compare patterns of amygdala functional connectivity in adolescents and adults. During passive viewing, adolescents demonstrated greater amygdala and fusiform activation to fearful faces than did adults. Functional connectivity analysis revealed stronger connectivity between the amygdala and the hippocampus in adults than in adolescents. Within each group, variability in age did not correlate with amygdala response, and sex-related developmental differences in amygdala response were not found. Eye movement data collected outside of the magnetic resonance imaging scanner using the same task suggested that developmental differences in amygdala activation were not attributable to differences in eye-gaze patterns. Amygdala hyperactivation in response to fearful faces may explain increased vulnerability to affective disorders in adolescence; stronger amygdala–hippocampus connectivity in adults than adolescents may reflect maturation in learning or habituation to facial expressions. PMID:18345988

  9. Facial feedback affects valence judgments of dynamic and static emotional expressions

    Directory of Open Access Journals (Sweden)

    Sylwia Hyniewska

    2015-03-01

    Full Text Available The ability to judge others’ emotions is required for the establishment and maintenance of smooth interactions in a community. Several lines of evidence suggest that the attribution of meaning to a face is influenced by the facial actions produced by an observer during the observation of a face. However, empirical studies testing causal relationships between observers’ facial actions and emotion judgments have reported mixed findings. This study is the first to measure emotion judgments in terms of valence and arousal dimensions while comparing dynamic versus static presentations of facial expressions. We presented pictures and videos of facial expressions of anger and happiness. Participants (N = 36) were asked to differentiate between the gender of faces while activating the corrugator supercilii muscle (brow lowering) or the zygomaticus major muscle (cheek raising). They were also asked to evaluate the internal states of the stimuli using the affect grid while maintaining the facial action until they finished responding. The cheek-raising condition increased the attributed valence scores compared with the brow-lowering condition. This effect of facial actions was observed for static as well as for dynamic facial expressions. These data suggest that facial feedback mechanisms contribute to the judgment of the valence of emotional facial expressions.

  10. Electromyographic Responses to Emotional Facial Expressions in 6-7 Year Olds with Autism Spectrum Disorders

    Science.gov (United States)

    Deschamps, P. K. H.; Coppes, L.; Kenemans, J. L.; Schutter, D. J. L. G.; Matthys, W.

    2015-01-01

    This study aimed to examine facial mimicry in 6-7 year old children with autism spectrum disorder (ASD) and to explore whether facial mimicry was related to the severity of impairment in social responsiveness. Facial electromyographic activity in response to angry, fearful, sad and happy facial expressions was recorded in twenty 6-7 year old…

  11. Recognition of facial expressions of different emotional intensities in patients with frontotemporal lobar degeneration

    NARCIS (Netherlands)

    Kessels, Roy P. C.; Gerritsen, Lotte; Montagne, Barbara; Ackl, Nibal; Diehl, Janine; Danek, Adrian

    2007-01-01

    Behavioural problems are a key feature of frontotemporal lobar degeneration (FTLD). Also, FTLD patients show impairments in emotion processing. Specifically, the perception of negative emotional facial expressions is affected. Generally, however, negative emotional expressions are regarded as more d

  12. Pose-variant facial expression recognition using an embedded image system

    Science.gov (United States)

    Song, Kai-Tai; Han, Meng-Ju; Chang, Shuo-Hung

    2008-12-01

    In recent years, one of the most attractive research areas in human-robot interaction has been automated facial expression recognition. By recognizing facial expressions, a pet robot can interact with humans in a more natural manner. In this study, we focus on the facial pose-variant problem. A novel method is proposed in this paper to recognize pose-variant facial expressions. After locating the face position in an image frame, the active appearance model (AAM) is applied to track facial features. Fourteen feature points are extracted to represent the variation of facial expressions. The distances between feature points are defined as the feature values. These feature values are sent to a support vector machine (SVM) for facial expression determination. The pose-variant facial expression is classified into happiness, neutral, sadness, surprise, or anger. Furthermore, in order to evaluate the performance for practical applications, this study also built a low-resolution database (160x120 pixels) using a CMOS image sensor. Experimental results show a recognition rate of 84% with the self-built database.
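
    The classification stage described above (pairwise distances between tracked feature points fed to a support vector machine) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the AAM tracker is not reproduced, and the landmark coordinates, label set, and kernel choice are assumptions.

        import numpy as np
        from itertools import combinations
        from sklearn.svm import SVC

        EMOTIONS = ["happiness", "neutral", "sadness", "surprise", "anger"]

        def distance_features(points):
            # points: (14, 2) array of AAM-tracked facial feature coordinates.
            # Feature vector = Euclidean distance between every pair of points.
            return np.array([np.linalg.norm(points[i] - points[j])
                             for i, j in combinations(range(len(points)), 2)])

        # Illustrative training on synthetic landmark sets (one per sample).
        rng = np.random.default_rng(1)
        X = np.stack([distance_features(rng.uniform(0, 160, size=(14, 2)))
                      for _ in range(100)])
        y = rng.integers(0, len(EMOTIONS), size=100)

        clf = SVC(kernel="rbf").fit(X, y)
        print(EMOTIONS[clf.predict(X[:1])[0]])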

  13. Altered Kinematics of Facial Emotion Expression and Emotion Recognition Deficits Are Unrelated in Parkinson's Disease.

    Science.gov (United States)

    Bologna, Matteo; Berardelli, Isabella; Paparella, Giulia; Marsili, Luca; Ricciardi, Lucia; Fabbrini, Giovanni; Berardelli, Alfredo

    2016-01-01

    Altered emotional processing, including reduced emotion facial expression and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques, and it is not known whether altered facial expression and recognition in PD are related. The aim was to investigate possible deficits in facial emotion expression and emotion recognition, and their relationship, if any, in patients with PD. Eighteen patients with PD and 16 healthy controls were enrolled in this study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analyzed using the facial action coding system. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients were evaluated using Spearman's test and multiple regression analysis. The facial expression of all six basic emotions had slower velocity and lower amplitude in patients in comparison to healthy controls (all Ps < 0.05). Patients also had a lower Ekman global score and lower disgust, sadness, and fear sub-scores than healthy controls (all Ps < 0.05). Altered facial expression kinematics and emotion recognition deficits were unrelated in patients (all Ps > 0.05). Finally, no relationship emerged between kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients (all Ps > 0.05). The results in this study provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.

  14. Can we distinguish emotions from faces? Investigation of implicit and explicit processes of peak facial expressions

    Directory of Open Access Journals (Sweden)

    Yanmei Wang

    2016-08-01

    Full Text Available Most previous studies on facial expression recognition have focused on moderate emotions; to date, few studies have investigated the explicit and implicit processes of peak emotions. In the current study, we used transient peak-intensity expression images of athletes at the winning or losing point in competition as materials, and investigated the diagnosability of peak facial expressions at both implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds, and eye movements were recorded. The results revealed that isolated bodies and face-body congruent images were better recognized than isolated faces and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that body cues influenced facial emotion recognition. Furthermore, eye movement records showed that participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, a subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate the unconscious perception of peak facial expressions. The results showed that a winning face prime facilitated reactions to a winning body target, whereas a losing face prime inhibited reactions to a winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception of peak facial expressions found in Experiment 2A. Results of Experiment 2B showed that reaction times to both winning and losing body targets were influenced by the invisible peak facial expression primes

  15. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    Science.gov (United States)

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

    Reliable detection of ordinary facial expressions (e.g. smile) despite the variability among individuals as well as face appearance is an important step toward the realization of perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.
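
    A convolutional network for expression recognition of the general kind described above might look like the following PyTorch sketch. The layer sizes, input resolution, and two-way smile/non-smile output are illustrative assumptions, not the paper's architecture.

        import torch
        import torch.nn as nn

        class ExpressionCNN(nn.Module):
            # Toy convolutional classifier: two conv/pool stages followed by a
            # fully connected head that scores smile vs. non-smile.
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.head = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(16 * 13 * 13, 64), nn.ReLU(),
                    nn.Linear(64, 2),  # smile / non-smile
                )

            def forward(self, x):  # x: (batch, 1, 64, 64) grayscale face crops
                return self.head(self.features(x))

        logits = ExpressionCNN()(torch.randn(4, 1, 64, 64))
        print(logits.shape)  # torch.Size([4, 2])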

  16. Empathy, but not mimicry restriction, influences the recognition of change in emotional facial expressions.

    Science.gov (United States)

    Kosonogov, Vladimir; Titova, Alisa; Vorobyeva, Elena

    2015-01-01

    The current study addressed the hypothesis that empathy and the restriction of observers' facial muscles can influence the recognition of emotional facial expressions. A sample of 74 participants recognized the subjective onset of emotional facial expressions (anger, disgust, fear, happiness, sadness, surprise, and neutral) in a series of morphed face photographs showing a gradual change (frame by frame) from one expression to another. The high-empathy participants (as measured by the Empathy Quotient) recognized emotional facial expressions at earlier photographs in the series than did low-empathy ones, but there was no difference in exploration time. Restriction of the observers' facial muscles (with plasters and a stick held in the mouth) did not influence the responses. We discuss these findings in the context of the embodied simulation theory and previous data on empathy.

  18. Invariant representation of facial expressions: application to multimodal emotion analysis.

    OpenAIRE

    Soladié, Catherine

    2013-01-01

    More and more applications aim at automating the analysis of human behavior to assist or replace the experts who are conducting these analyses. This thesis deals with the analysis of facial expressions, which provide key information on these behaviors. Our work proposes an innovative solution to effectively define a facial expression, regardless of the morphology of the subject. The approach is based on the organization of expressions. We show that the organization of expressions, such as defin...

  19. Serotonin transporter gene-linked polymorphism affects detection of facial expressions.

    Directory of Open Access Journals (Sweden)

    Ai Koizumi

    Full Text Available Previous studies have demonstrated that the serotonin transporter gene-linked polymorphic region (5-HTTLPR) affects the recognition of facial expressions and attention to them. However, the relationship between 5-HTTLPR and the perceptual detection of others' facial expressions, the process which takes place prior to emotional labeling (i.e., recognition), is not clear. To examine whether the perceptual detection of emotional facial expressions is influenced by the allelic variation (short/long) of 5-HTTLPR, happy and sad facial expressions were presented at weak and mid intensities (25% and 50%). Ninety-eight participants, genotyped for 5-HTTLPR, judged whether emotion was present in images of faces. Participants with short alleles showed higher sensitivity (d') to happy than to sad expressions, while participants with long allele(s) showed no such positivity advantage. This effect of 5-HTTLPR was found at different facial expression intensities among males and females. The results suggest that at the perceptual stage, a short allele enhances the processing of positive facial expressions rather than that of negative facial expressions.
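
    The sensitivity measure d' used in this study comes from signal detection theory: d' = z(hit rate) - z(false-alarm rate), where z is the inverse of the standard normal cumulative distribution. A minimal computation follows; the rates are made-up numbers for illustration only.

        from scipy.stats import norm

        def d_prime(hit_rate, fa_rate):
            # Signal-detection sensitivity: z-transform of hits minus false alarms.
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # Hypothetical detection rates for weak (25% intensity) happy expressions.
        print(round(d_prime(hit_rate=0.80, fa_rate=0.20), 2))  # 1.68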

  20. A detailed investigation of facial expression processing in congenital prosopagnosia as compared to acquired prosopagnosia.

    Science.gov (United States)

    Humphreys, Kate; Avidan, Galia; Behrmann, Marlene

    2007-01-01

    Whether the ability to recognize facial expression can be preserved in the absence of the recognition of facial identity remains controversial. The current study reports the results of a detailed investigation of facial expression recognition in three congenital prosopagnosic (CP) participants, in comparison with two patients with acquired prosopagnosia (AP) and a large group of 30 neurologically normal participants, including individually age- and gender-matched controls. Participants completed a fine-grained expression recognition paradigm requiring a six-alternative forced-choice response to continua of morphs of six different basic facial expressions (e.g. happiness and surprise). Accuracy, sensitivity and reaction times were measured. The performance of all three CP individuals was indistinguishable from that of controls, even for the most subtle expressions. In contrast, both individuals with AP displayed pronounced difficulties with the majority of expressions. The results from the CP participants attest to the dissociability of the processing of facial identity and of facial expression. Whether this remarkably good expression recognition is achieved through normal, or compensatory, mechanisms remains to be determined. Either way, this normal level of performance does not extend to include facial identity.

  1. Developmental changes in facial expression recognition in Japanese school-age children.

    Science.gov (United States)

    Naruse, Susumu; Hashimoto, Toshiaki; Mori, Kenji; Tsuda, Yoshimi; Takahara, Mitsue; Kagami, Shoji

    2013-01-01

    Facial expressions hold abundant information and play a central part in communication. In daily life, we must build amicable interpersonal relationships by communicating through verbal and nonverbal behaviors. While school age is a period of rapid social growth, few studies have examined developmental changes in facial expression recognition during this period. This study investigated developmental changes in facial expression recognition by examining observers' gaze on others' expressions. Participants were 87 school-age children from first to sixth grade (41 boys, 46 girls). The Tobii T60 Eye-tracker (Tobii Technologies, Sweden) was used to gauge eye movement during a task of matching pre-instructed emotion words and facial expression images (neutral, angry, happy, surprised, sad, disgusted) presented on a monitor fixed at a distance of 50 cm. In the task of matching the six facial expression images and emotion words, the mid- and higher-grade children answered more accurately than the lower-grade children in matching four expressions, excluding neutral and happy. For fixation time and fixation count, the lower-grade children scored lower than the other grades, gazing at all facial expressions significantly fewer times and for shorter periods. This suggests that the stage from the lower to the middle grades is a turning point in facial expression recognition.

  2. Effect of facial expressions on student's comprehension recognition in virtual educational environments.

    Science.gov (United States)

    Sathik, Mohamed; Jonathan, Sofia G

    2013-01-01

    The scope of this research is to examine whether students' facial expressions are a tool for the lecturer to interpret the comprehension level of students in a virtual classroom, and also to identify the impact of facial expressions during a lecture and the level of comprehension shown by these expressions. Our goal is to identify physical behaviours of the face that are linked to emotional states, and then to identify how these emotional states are linked to students' comprehension. In this work, the effectiveness of students' facial expressions in non-verbal communication in a virtual pedagogical environment was investigated first. Next, the specific elements of learner behaviour in the different emotional states, and the relevant facial expressions signaled by the action units, were interpreted. Finally, the work focused on finding the impact of the relevant facial expressions on students' comprehension. Experimentation was done through a survey involving quantitative observations of lecturers in the classroom, in which the behaviours of students were recorded and statistically analyzed. The results show that facial expression is the nonverbal communication mode most frequently used by students in the virtual classroom, and that students' facial expressions are significantly correlated with their emotions, which helps in recognizing their comprehension of the lecture.

  4. Realistic facial expression of virtual human based on color, sweat, and tears effects.

    Science.gov (United States)

    Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan

    2014-01-01

    Generating extreme appearances such as sweating when scared, tears when happy (crying), and blushing (in anger and happiness) is a key issue in achieving high-quality facial animation. The effects of sweat, tears, and colors are integrated into a single animation model to create realistic facial expressions of a 3D avatar. The physical properties of muscles and emotions, and the fluid properties with sweating and tears initiators, are incorporated. The action units (AUs) of the facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with a facial animation technique to produce complex facial expressions. The effects of oxygenation on the facial skin color appearance are measured using a pulse oximeter system and a 3D skin analyzer. The results show that virtual human facial expressions are enhanced by mimicking actual sweating and tears for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries, as well as computer graphics.
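
    The sweat and tear effects described rest on particle-system integration. The sketch below shows only the bare particle step under gravity with damping, leaving out the SPH pressure and viscosity terms and the coupling to the face mesh; all constants and the emitter setup are illustrative assumptions.

        import numpy as np

        GRAVITY = np.array([0.0, -9.81, 0.0])

        def step_particles(pos, vel, dt=1.0 / 60.0, damping=0.98):
            # Semi-implicit Euler: update velocities from gravity, damp them to
            # mimic drag along the skin, then advect positions.
            vel = (vel + GRAVITY * dt) * damping
            return pos + vel * dt, vel

        # Illustrative run: 100 "tear" particles emitted near the eyes.
        rng = np.random.default_rng(2)
        pos = rng.uniform(-0.01, 0.01, size=(100, 3))
        vel = np.zeros((100, 3))
        for _ in range(60):  # one second of simulation
            pos, vel = step_particles(pos, vel)
        print(pos[:, 1].mean())  # particles have fallen below the emitter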

  5. Automatic facial feature extraction and expression recognition based on neural network

    CERN Document Server

    Khandait, S P; Khandait, P D

    2012-01-01

    In this paper, an approach to the problem of automatic facial feature extraction from a still frontal posed image, and the classification and recognition of facial expressions, and hence of the emotion and mood of a person, is presented. A feed-forward back-propagation neural network is used as a classifier to classify the expressions of a supplied face into seven basic categories: surprise, neutral, sad, disgust, fear, happy, and angry. For face portion segmentation and localization, morphological image processing operations are used. Permanent facial features such as the eyebrows, eyes, mouth, and nose are extracted using the SUSAN edge detection operator, facial geometry, and edge projection analysis. Experiments are carried out on the JAFFE facial expression database and give good performance: 100% accuracy for the training set and 95.26% accuracy for the test set.
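
    The classifier described above (a feed-forward back-propagation network over geometric facial features) can be approximated with scikit-learn as below. The synthetic 10-dimensional feature vectors stand in for the eyebrow/eye/mouth/nose measurements; the dimensionality, hidden-layer size, and data are assumptions for illustration.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        LABELS = ["surprise", "neutral", "sad", "disgust", "fear", "happy", "angry"]

        # Synthetic stand-ins for geometric features (e.g. eyebrow height,
        # mouth width) extracted from segmented face regions.
        rng = np.random.default_rng(3)
        X_train = rng.standard_normal((140, 10))
        y_train = rng.integers(0, len(LABELS), size=140)

        # A feed-forward network trained with back-propagation.
        net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
        net.fit(X_train, y_train)
        print(f"training accuracy: {net.score(X_train, y_train):.2%}")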

  6. Cognitive tasks during expectation affect the congruency ERP effects to facial expressions.

    Science.gov (United States)

    Lin, Huiyan; Schulz, Claudia; Straube, Thomas

    2015-01-01

    Expectancy congruency has been shown to modulate event-related potentials (ERPs) to emotional stimuli, such as facial expressions. However, it is unknown whether the congruency ERP effects to facial expressions can be modulated by cognitive manipulations during stimulus expectation. To this end, electroencephalography (EEG) was recorded while participants viewed (neutral and fearful) facial expressions. Each trial started with a cue, predicting a facial expression, followed by an expectancy interval without any cues and subsequently the face. In half of the trials, participants had to solve a cognitive task in which different letters were presented for target letter detection during the expectancy interval. Furthermore, facial expressions were congruent with the cues in 75% of all trials. ERP results revealed that for fearful faces, the cognitive task during expectation altered the congruency effect in N170 amplitude; congruent compared to incongruent fearful faces evoked larger N170 in the non-task condition but the congruency effect was not evident in the task condition. Regardless of facial expression, the congruency effect was generally altered by the cognitive task during expectation in P3 amplitude; the amplitudes were larger for incongruent compared to congruent faces in the non-task condition but the congruency effect was not shown in the task condition. The findings indicate that cognitive tasks during expectation reduce the processing of expectation and subsequently, alter congruency ERP effects to facial expressions.

  7. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    Science.gov (United States)

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding.

  8. Facial pain expression in dementia: a review of the experimental and clinical evidence

    NARCIS (Netherlands)

    Lautenbacher, Stefan; Kunz, Miriam

    2016-01-01

    The analysis of the facial expression of pain promises to be one of the most sensitive tools for the detection of pain in patients with moderate to severe forms of dementia, who can no longer self-report pain. Fine-grain analysis using the Facial Action Coding System (FACS) is possible in research b

  9. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    Science.gov (United States)

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  10. The use of facial expressions for pain assessment purposes in dementia: A narrative review

    NARCIS (Netherlands)

    Oosterman, J.M.; Zwakhalen, S.; Sampson, E.L.; Kunz, M.

    2016-01-01

    Facial expressions convey reliable nonverbal signals about pain and thus are very useful for assessing pain in patients with limited communicative ability, such as patients with dementia. In this review, we present an overview of the available pain observation tools and how they make use of facial e

  12. 3D Face Model Dataset: Automatic Detection of Facial Expressions and Emotions for Educational Environments

    Science.gov (United States)

    Chickerur, Satyadhyan; Joshi, Kartik

    2015-01-01

    Emotion detection using facial images is a technique that researchers have been using for the last two decades to try to analyze a person's emotional state given his/her image. Detection of various kinds of emotion using facial expressions of students in educational environment is useful in providing insight into the effectiveness of tutoring…

  13. Rules versus Prototype Matching: Strategies of Perception of Emotional Facial Expressions in the Autism Spectrum

    Science.gov (United States)

    Rutherford, M. D.; McIntosh, Daniel N.

    2007-01-01

    When perceiving emotional facial expressions, people with autistic spectrum disorders (ASD) appear to focus on individual facial features rather than configurations. This paper tests whether individuals with ASD use these features in a rule-based strategy of emotional perception, rather than a typical, template-based strategy by considering…

  14. Multi-output Laplacian Dynamic Ordinal Regression for Facial Expression Recognition and Intensity Estimation

    NARCIS (Netherlands)

    Rudovic, Ognjen; Pavlovic, Vladimir; Pantic, Maja

    2012-01-01

    Automated facial expression recognition has received increased attention over the past two decades. Existing works in the field usually do not encode either the temporal evolution or the intensity of the observed facial displays. They also fail to jointly model multidimensional (multi-class) continu

  17. [Association between intelligence development and facial expression recognition ability in children with autism spectrum disorder].

    Science.gov (United States)

    Pan, Ning; Wu, Gui-Hua; Zhang, Ling; Zhao, Ya-Fen; Guan, Han; Xu, Cai-Juan; Jing, Jin; Jin, Yu

    2017-03-01

    To investigate the features of intelligence development and facial expression recognition ability, and the association between them, in children with autism spectrum disorder (ASD). A total of 27 ASD children aged 6-16 years (ASD group, full intelligence quotient >70) and age- and gender-matched normally developed children (control group) were enrolled. The Wechsler Intelligence Scale for Children, Fourth Edition, and Chinese Static Facial Expression Photos were used for intelligence evaluation and the facial expression recognition test. Compared with the control group, the ASD group had significantly lower scores for full intelligence quotient, verbal comprehension index, perceptual reasoning index (PRI), processing speed index (PSI), and working memory index (WMI) (P<0.05). ASD children have delayed intelligence development compared with normally developed children and impaired expression recognition ability. Perceptual reasoning and working memory abilities are positively correlated with expression recognition ability, which suggests that insufficient perceptual reasoning and working memory abilities may be important factors affecting facial expression recognition ability in ASD children.

  18. [Emotional intelligence and oscillatory responses on the emotional facial expressions].

    Science.gov (United States)

    Kniazev, G G; Mitrofanova, L G; Bocharov, A V

    2013-01-01

    Emotional intelligence-related differences in oscillatory responses to emotional facial expressions were investigated in 48 subjects (26 men and 22 women) aged 18-30 years. Participants were instructed to evaluate the emotional expression (angry, happy, and neutral) of each presented face on an analog scale ranging from -100 (very hostile) to +100 (very friendly). High emotional intelligence (EI) participants were found to be more sensitive to the emotional content of the stimuli. This showed up both in their subjective evaluation of the stimuli and in stronger EEG theta synchronization at an earlier (between 100 and 500 ms after face presentation) processing stage. Source localization using sLORETA showed that this effect was localized in the fusiform gyrus upon the presentation of angry faces and in the posterior cingulate gyrus upon the presentation of happy faces. At a later processing stage (500-870 ms), event-related theta synchronization in high-EI subjects was higher in the left prefrontal cortex upon the presentation of happy faces, but lower in the anterior cingulate cortex upon the presentation of angry faces. This suggests the existence of a mechanism that can selectively increase positive emotions and reduce negative emotions.

  20. Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time.

    Science.gov (United States)

    Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G

    2014-01-20

    Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements to support a near-optimal system of signaling and decoding. Although highly dynamic, little is known about the form and function of facial expression temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of "biologically basic to socially specific" information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication comprises six basic (i.e., psychologically irreducible) categories, and instead suggesting four.

  1. Differences in facial expressions according to age and sex

    OpenAIRE

    Houstis, Odyssia

    2010-01-01

    Our aim was to validate a method that allows us to measure facial expressions, to apply this method to quantitatively evaluate facial expressions in adults and in children, and to compare the results obtained for growing children and for adults in order to estimate the effects of age and sex. Our 2-dimensional method is reliable and useful for measuring facial expressions quantitatively in a simple, effective, and inexpensive...

  2. Recognition of Facial Expression Using Eigenvector Based Distributed Features and Euclidean Distance Based Decision Making Technique

    Directory of Open Access Journals (Sweden)

    Jeemoni Kalita

    2013-03-01

    In this paper, an Eigenvector-based system is presented to recognize facial expressions from digital facial images. In this approach, the images were first acquired, and five significant portions were cropped from each image to extract and store the Eigenvectors specific to the expressions. The Eigenvectors for the test images were also computed, and the input facial image was finally recognized by finding the minimum Euclidean distance between the test image and the different expressions.
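
    For readers who want to experiment with this style of pipeline, the sketch below shows one plausible reading of it: per-expression eigenvectors computed with NumPy and a minimum-Euclidean-distance decision. The data layout (`train_faces`, `test_face`) and the projection-to-class-mean comparison are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of eigenvector-based expression matching (not the paper's code).
# Assumes `train_faces` maps an expression label to a list of equally sized
# grayscale face crops, and `test_face` is one such crop.
import numpy as np

def expression_eigenvectors(images, k=5):
    """Top-k eigenvectors of a set of flattened face crops."""
    data = np.stack([img.ravel().astype(float) for img in images])
    data -= data.mean(axis=0)                  # centre before PCA
    # SVD of the centred data gives the principal directions in vt
    _, _, vt = np.linalg.svd(data, full_matrices=False)
    return vt[:k]

def classify(test_face, train_faces):
    """Label the test crop by minimum Euclidean distance in eigenspace."""
    x = test_face.ravel().astype(float)
    best_label, best_dist = None, np.inf
    for label, images in train_faces.items():
        basis = expression_eigenvectors(images)
        proj = basis @ x                       # project onto this class's eigenspace
        mean = np.stack([im.ravel().astype(float) for im in images]).mean(axis=0)
        dist = np.linalg.norm(proj - basis @ mean)  # distance to the class mean
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```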

  3. Evolutionary Computational Method of Facial Expression Analysis for Content-based Video Retrieval using 2-Dimensional Cellular Automata

    CERN Document Server

    Geetha, P

    2010-01-01

    In this paper, Deterministic Cellular Automata (DCA) based video shot classification and retrieval is proposed. The deterministic 2D cellular automata model captures human facial expressions, both spontaneous and posed. The determinism stems from the fact that facial muscle actions are standardized by the encodings of the Facial Action Coding System (FACS) and Action Units (AUs). Based on these encodings, we generate the set of evolutionary update rules of the DCA for each facial expression. We consider a Person-Independent Facial Expression Space (PIFES) to analyze facial expressions based on partitioned 2D cellular automata, which capture the dynamics of facial expressions and classify the shots accordingly. The target video shot is retrieved by comparing the expression obtained for the query frame's face with the key face expressions in the database video. Consecutive key face expressions in the database that are highly similar to the query frame's face, then the key faces are use...

  4. Predicting Emotions in Facial Expressions from the Annotations in Naturally Occurring First Encounters

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2014-01-01

    This paper deals with the automatic identification of emotions from the manual annotations of the shape and functions of facial expressions in a Danish corpus of video-recorded naturally occurring first encounters. More specifically, a support vector classifier is trained on the corpus annotations to identify emotions in facial expressions. In the classification experiments, we test to what extent emotions expressed in naturally occurring conversations can be identified automatically by a classifier trained on the manual annotations of the shape of facial expressions and co-occurring speech tokens. We also investigate the relation between emotions and the communicative functions of facial expressions. Both emotion labels and their values in a three-dimensional space are identified; the three dimensions are Pleasure, Arousal and Dominance. The results of our experiments indicate that the classifiers...
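
    A minimal sketch of this kind of setup, assuming scikit-learn and invented annotation features (the corpus's actual annotation scheme is not reproduced here): categorical shape annotations plus a co-occurring speech token are vectorised and fed to a support vector classifier.

```python
# Hedged sketch of an SVM trained on manual annotations; feature names and
# labels below are illustrative placeholders, not the Danish corpus's scheme.
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

annotations = [
    {"eyebrows": "raised", "mouth": "smile", "token": "hej"},
    {"eyebrows": "frown", "mouth": "closed", "token": "nej"},
]
emotions = ["happy", "annoyed"]  # illustrative emotion labels

# DictVectorizer one-hot encodes the categorical annotation features
model = make_pipeline(DictVectorizer(sparse=False), SVC(kernel="linear"))
model.fit(annotations, emotions)
print(model.predict([{"eyebrows": "raised", "mouth": "smile", "token": "ja"}]))
```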

  5. Predicting Emotions in Facial Expressions from the Annotations in Naturally Occurring First Encounters

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2014-01-01

    This paper deals with the automatic identification of emotions from the manual annotations of the shape and functions of facial expressions in a Danish corpus of video-recorded naturally occurring first encounters. More specifically, a support vector classifier is trained on the corpus annotations to identify emotions in facial expressions. In the classification experiments, we test to what extent emotions expressed in naturally occurring conversations can be identified automatically by a classifier trained on the manual annotations of the shape of facial expressions and co-occurring speech tokens. The results indicate that the annotations and the classifiers trained on them are reliable and can be used to model and test emotional behaviours in emotional cognitive infocommunicative systems.

  6. The role of the analyst's facial expressions in psychoanalysis and psychoanalytic therapy.

    Science.gov (United States)

    Searles, H F

    This paper, while acknowledging implicitly the importance of transference-distortions in the patient's perceptions of the analyst's countenance, focuses primarily upon the real changes in the latter's facial expressions. The analyst's face has a central role in the phase of therapeutic symbiosis, as well as in subsequent individuation. It is in the realm of the analyst's facial expressions that the borderline patient, for example, can best find a bridge out of autism and into therapeutically symbiotic relatedness with the analyst. During this latter phase, then, each participant's facial expressions "belong" as much to the other as to oneself; that is, the expressions of each person are in the realm of transitional phenomena for both of them. The analyst's facial expressions are a highly, and often centrally, significant dimension of both psychoanalysis and psychoanalytic therapy. Illustrative clinical vignettes are presented from work with both patients who use the couch and those who do not.

  7. Perception of Emotional Facial Expressions in Amyotrophic Lateral Sclerosis (ALS) at Behavioural and Brain Metabolic Level

    Science.gov (United States)

    Aho-Özhan, Helena E. A.; Keller, Jürgen; Heimrath, Johanna; Uttner, Ingo; Kassubek, Jan; Birbaumer, Niels; Ludolph, Albert C.; Lulé, Dorothée

    2016-01-01

    Introduction: Amyotrophic lateral sclerosis (ALS) primarily impairs motor abilities but also affects cognition and emotional processing. We hypothesise that subjective ratings of emotional stimuli depicting social interactions and facial expressions are changed in ALS. It was found that recognition of negative emotions and the ability to mentalize others' intentions are reduced. Methods: Processing of emotions in faces was investigated. A behavioural test of Ekman faces expressing six basic emotions was presented to 30 ALS patients and 29 age-, gender- and education-matched healthy controls. Additionally, a subgroup of 15 ALS patients who were able to lie supine in the scanner and 14 matched healthy controls viewed the Ekman faces during functional magnetic resonance imaging (fMRI). Affective state and the number of daily social contacts were measured. Results: ALS patients recognized disgust and fear less accurately than healthy controls. In fMRI, reduced brain activity was seen in areas involved in processing of negative emotions, replicating our previous results. During processing of sad faces, increased brain activity was seen in areas associated with social emotions in the right inferior frontal gyrus, and reduced activity in the hippocampus bilaterally. No differences in brain activity were seen for any of the other emotional expressions. Inferior frontal gyrus activity for sad faces was associated with an increased number of social contacts of ALS patients. Conclusion: ALS patients showed decreased brain and behavioural responses in the processing of disgust and fear, and an altered brain response pattern for sadness. The negative consequences of neurodegenerative processes in the course of ALS might be counteracted by positive emotional activity and positive social interactions. PMID:27741285

  8. Perception of Emotional Facial Expressions in Amyotrophic Lateral Sclerosis (ALS) at Behavioural and Brain Metabolic Level.

    Science.gov (United States)

    Aho-Özhan, Helena E A; Keller, Jürgen; Heimrath, Johanna; Uttner, Ingo; Kassubek, Jan; Birbaumer, Niels; Ludolph, Albert C; Lulé, Dorothée

    2016-01-01

    Amyotrophic lateral sclerosis (ALS) primarily impairs motor abilities but also affects cognition and emotional processing. We hypothesise that subjective ratings of emotional stimuli depicting social interactions and facial expressions are changed in ALS. It was found that recognition of negative emotions and the ability to mentalize others' intentions are reduced. Processing of emotions in faces was investigated. A behavioural test of Ekman faces expressing six basic emotions was presented to 30 ALS patients and 29 age-, gender- and education-matched healthy controls. Additionally, a subgroup of 15 ALS patients who were able to lie supine in the scanner and 14 matched healthy controls viewed the Ekman faces during functional magnetic resonance imaging (fMRI). Affective state and the number of daily social contacts were measured. ALS patients recognized disgust and fear less accurately than healthy controls. In fMRI, reduced brain activity was seen in areas involved in processing of negative emotions, replicating our previous results. During processing of sad faces, increased brain activity was seen in areas associated with social emotions in the right inferior frontal gyrus, and reduced activity in the hippocampus bilaterally. No differences in brain activity were seen for any of the other emotional expressions. Inferior frontal gyrus activity for sad faces was associated with an increased number of social contacts of ALS patients. ALS patients showed decreased brain and behavioural responses in the processing of disgust and fear, and an altered brain response pattern for sadness. The negative consequences of neurodegenerative processes in the course of ALS might be counteracted by positive emotional activity and positive social interactions.

  9. Facial Expression Recognition Based on Local Binary Patterns and Kernel Discriminant Isomap

    Directory of Open Access Journals (Sweden)

    Xiaoming Zhao

    2011-10-01

    Facial expression recognition is an interesting and challenging subject. Considering the nonlinear manifold structure of facial images, a new kernel-based manifold learning method, called kernel discriminant isometric mapping (KDIsomap), is proposed. KDIsomap aims to nonlinearly extract the discriminant information by maximizing the interclass scatter while minimizing the intraclass scatter in a reproducing kernel Hilbert space. KDIsomap is used to perform nonlinear dimensionality reduction on the extracted local binary pattern (LBP) facial features, and produces low-dimensional discriminant embedded data representations with striking performance improvement on facial expression recognition tasks. The nearest neighbor classifier with the Euclidean metric is used for facial expression classification. Facial expression recognition experiments are performed on two popular facial expression databases, i.e., the JAFFE database and the Cohn-Kanade database. Experimental results indicate that KDIsomap obtains the best accuracy of 81.59% on the JAFFE database and 94.88% on the Cohn-Kanade database. KDIsomap outperforms the other methods used, such as principal component analysis (PCA), linear discriminant analysis (LDA), kernel principal component analysis (KPCA), kernel linear discriminant analysis (KLDA), as well as kernel isometric mapping (KIsomap).
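
    The following sketch approximates this pipeline with off-the-shelf pieces: KDIsomap itself is not available in common libraries, so plain Isomap stands in for the kernel discriminant variant; the LBP histogram features and the Euclidean 1-NN classifier follow the abstract. `faces` and `labels` are assumed inputs.

```python
# Stand-in for the LBP -> manifold reduction -> 1-NN pipeline described above.
# Assumes `faces` is a list/array of grayscale crops (more than ~6 samples,
# since Isomap needs several neighbours) and `labels` their expression classes.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsClassifier

def lbp_histogram(img, p=8, r=1):
    """Uniform LBP histogram as the texture descriptor for one face crop."""
    codes = local_binary_pattern(img, P=p, R=r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

def fit_pipeline(faces, labels, n_components=5):
    feats = np.array([lbp_histogram(f) for f in faces])
    embed = Isomap(n_components=n_components).fit(feats)  # nonlinear reduction
    clf = KNeighborsClassifier(n_neighbors=1)             # Euclidean 1-NN
    clf.fit(embed.transform(feats), labels)
    return embed, clf
```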

  10. Discriminability effect on Garner interference: evidence from recognition of facial identity and expression

    Directory of Open Access Journals (Sweden)

    Yamin Wang

    2013-12-01

    Using Garner's speeded classification task, existing studies demonstrated an asymmetric interference in the recognition of facial identity and facial expression: it seems hard for expression to interfere with identity recognition. However, discriminability of identity and expression, a potential confounding variable, had not been carefully examined in existing studies. In the current work, we manipulated the discriminability of identity and expression by matching facial shape (long or round) in identity and matching the mouth (opened or closed) in facial expression. Garner interference was found either from identity to expression (Experiment 1) or from expression to identity (Experiment 2). Interference was also found in both directions (Experiment 3) or in neither direction (Experiment 4). The results support that Garner interference tends to occur under conditions of low discriminability of the relevant dimension, regardless of facial property. Our findings indicate that Garner interference is not necessarily related to interdependent processing in the recognition of facial identity and expression. The findings also suggest that discriminability, as a mediating factor, should be carefully controlled in future research.

  11. Anodal tDCS targeting the right orbitofrontal cortex enhances facial expression recognition.

    Science.gov (United States)

    Willis, Megan L; Murphy, Jillian M; Ridley, Nicole J; Vercammen, Ans

    2015-12-01

    The orbitofrontal cortex (OFC) has been implicated in the capacity to accurately recognise facial expressions. The aim of the current study was to determine if anodal transcranial direct current stimulation (tDCS) targeting the right OFC in healthy adults would enhance facial expression recognition, compared with a sham condition. Across two counterbalanced sessions of tDCS (i.e. anodal and sham), 20 undergraduate participants (18 female) completed a facial expression labelling task comprising angry, disgusted, fearful, happy, sad and neutral expressions, and a control (social judgement) task comprising the same expressions. Responses on the labelling task were scored for accuracy, median reaction time and overall efficiency (i.e. combined accuracy and reaction time). Anodal tDCS targeting the right OFC enhanced facial expression recognition, reflected in greater efficiency and speed of recognition across emotions, relative to the sham condition. In contrast, there was no effect of tDCS to responses on the control task. This is the first study to demonstrate that anodal tDCS targeting the right OFC boosts facial expression recognition. This finding provides a solid foundation for future research to examine the efficacy of this technique as a means to treat facial expression recognition deficits, particularly in individuals with OFC damage or dysfunction.

  12. Facial Expression Recognition Based on Features Derived From the Distinct LBP and GLCM

    Directory of Open Access Journals (Sweden)

    Gorti Satyanarayana Murty

    2014-01-01

    Automatic recognition of facial expressions can be an important component of natural human-machine interfaces; it may also be used in behavioural science and in clinical practice. Although humans recognise facial expressions virtually without effort or delay, reliable expression recognition by machine is still a challenge. This paper presents recognition of facial expressions by integrating features derived from the Grey Level Co-occurrence Matrix (GLCM) with a new structural approach derived from distinct LBPs (DLBP) on a 3 x 3 first-order compressed image (FCI). The proposed method precisely recognizes the 7 categories of expressions, i.e., neutral, happiness, sadness, surprise, anger, disgust and fear. The proposed method contains three phases. In the first phase, each 5 x 5 sub-image is compressed into a 3 x 3 sub-image. The second phase derives two distinct LBPs (DLBP) using the triangular patterns between the upper and lower parts of the 3 x 3 sub-image. In the third phase, the GLCM is constructed based on the DLBPs and feature parameters are evaluated for precise facial expression recognition. The derived DLBP is effective because it is integrated with the GLCM and provides better classification performance. The proposed method overcomes the disadvantages of statistical and formal LBP methods in estimating facial expressions. The experimental results demonstrate the effectiveness of the proposed method on facial expression recognition.
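
    As an illustration of the GLCM half of this method (the DLBP construction is specific to the paper and not reproduced), here is a hedged scikit-image sketch for extracting Haralick-style co-occurrence features from a grayscale crop:

```python
# Illustrative GLCM feature extraction for one face crop (a sketch, not the
# paper's DLBP construction). `crop` is an 8-bit grayscale array.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(crop, levels=8):
    # Quantise intensities so the co-occurrence matrix stays small
    q = (crop.astype(np.float64) / 256 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    # Standard co-occurrence parameters commonly used for classification
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

print(glcm_features(np.random.randint(0, 256, (16, 16), dtype=np.uint8)))
```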

  13. Processing of individual items during ensemble coding of facial expressions

    Directory of Open Access Journals (Sweden)

    Huiyun Li

    2016-09-01

    There is growing evidence that human observers are able to extract the mean emotion or other types of information from a set of faces. The most intriguing aspect of this phenomenon is that observers often fail to identify or form a representation for individual faces in a face set. However, most of these results were based on judgments under limited processing resources. We examined a wider range of exposure times and observed how the relationship between the extraction of a mean and the representation of individual facial expressions changed. The results showed that with an exposure time of 50 milliseconds for the faces, observers were more sensitive to the mean representation than to individual representations, replicating the typical findings in the literature. With longer exposure times, however, observers were able to extract both individual and mean representations more accurately. Furthermore, diffusion model analysis revealed that the mean representation is also more prone to suffer from the noise accumulated in redundant processing time, leading to a more conservative decision bias, whereas individual representations seem more resistant to this noise. The results suggest that the encoding of emotional information from multiple faces may take two forms: single-face processing and crowd-face processing.

  14. A Modified Sparse Representation Method for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2016-01-01

    In this paper, we carry out research on a facial expression recognition method based on modified sparse representation recognition (MSRR). In the first stage, we use Haar-like+LPP to extract features and reduce dimension. In the second stage, we adopt LC-K-SVD (Label Consistent K-SVD) to train the dictionary, instead of adopting the dictionary directly from samples, and add block dictionary training into the training process. In the third stage, stOMP (stagewise orthogonal matching pursuit) is used to speed up the convergence of OMP (orthogonal matching pursuit). Besides, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimension, feature extraction and dimension reduction methods, and noise, on a self-built database and on Japan's JAFFE and CMU's CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition performance and time efficiency. The simulation experiments show that the coefficients of the MSRR method contain classifying information, which is capable of improving the computing speed and achieving satisfying recognition results.
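
    A simplified sketch of sparse-representation classification in this spirit, with scikit-learn's plain OMP standing in for stOMP and the training features themselves standing in for the learned LC-K-SVD dictionary:

```python
# Hedged sketch of sparse-representation classification (not the MSRR method).
# `dictionary` has one column per training feature vector; `dict_labels` gives
# each column's expression class.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(x, dictionary, dict_labels, n_nonzero=10):
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    omp.fit(dictionary, x)                 # sparse code of x over the dictionary
    code = omp.coef_
    best, best_res = None, np.inf
    for label in set(dict_labels):
        mask = np.array(dict_labels) == label
        part = np.where(mask, code, 0.0)   # keep only this class's atoms
        res = np.linalg.norm(x - dictionary @ part)
        if res < best_res:                 # smallest reconstruction residual wins
            best, best_res = label, res
    return best
```

    Classifying by the per-class reconstruction residual, rather than by the largest coefficient, is the usual design choice in sparse-representation classification because it uses all of a class's atoms jointly.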

  15. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    Science.gov (United States)

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.

  16. Web-based Visualisation of Head Pose and Facial Expressions Changes:

    DEFF Research Database (Denmark)

    Kalliatakis, Grigorios; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2016-01-01

    Despite significant recent advances in the field of head pose estimation and facial expression recognition, raising the cognitive level when analysing human activity presents serious challenges to current concepts. Motivated by the need to generate comprehensible visual representations from different sets of data, we introduce a system capable of monitoring human activity through head pose and facial expression changes, utilising an affordable 3D sensing technology (Microsoft Kinect sensor). An approach built on discriminative random regression forests was selected in order to rapidly and accurately estimate head pose changes in an unconstrained environment. In order to complete the secondary process of recognising four universal dominant facial expressions (happiness, anger, sadness and surprise), emotion recognition via facial expressions (ERFE) was adopted. After that, a lightweight data...

  17. Using Kinect for real-time emotion recognition via facial expressions

    Institute of Scientific and Technical Information of China (English)

    Qi-rong MAO; Xin-yu PAN; Yong-zhao ZHAN; Xiang-jun SHEN

    2015-01-01

    Emotion recognition via facial expressions (ERFE) has attracted a great deal of interest with recent advances in artificial intelligence and pattern recognition. Most studies are based on 2D images and are usually computationally expensive. In this paper, we propose a real-time emotion recognition approach based on both 2D and 3D facial expression features captured by Kinect sensors. To capture the deformation of the 3D mesh during facial expression, we combine the features of animation units (AUs) and feature point positions (FPPs) tracked by Kinect. A fusion algorithm based on improved emotional profiles (IEPs) and maximum confidence is proposed to recognize emotions from these real-time facial expression features. Experiments on both an emotion dataset and a real-time video show the superior performance of our method.

  18. Laterality of facial expressions of emotion: Universal and culture-specific influences

    National Research Council Canada - National Science Library

    Mandal, Manas K; Ambady, Nalini

    2004-01-01

    .... In the present article, we review the literature on laterality and universality, and propose that, although some components of facial expressions of emotion are governed biologically, others are culturally influenced...

  19. Impaired Attribution of Emotion to Facial Expressions in Anxiety and Major Depression

    NARCIS (Netherlands)

    Demenescu, Liliana R.; Kortekaas, Rudie; den Boer, Johan A.; Aleman, Andre

    2010-01-01

    Background: Recognition of others' emotions is an important aspect of interpersonal communication. In major depression, a significant emotion recognition impairment has been reported. It remains unclear whether the ability to recognize emotion from facial expressions is also impaired in anxiety

  20. Web-based Visualisation of Head Pose and Facial Expressions Changes:

    DEFF Research Database (Denmark)

    Kalliatakis, Grigorios; Vidakis, Nikolaos; Triantafyllidis, Georgios

    Despite significant recent advances in the field of head pose estimation and facial expression recognition, raising the cognitive level when analysing human activity presents serious challenges to current concepts. Motivated by the need to generate comprehensible visual representations from different sets of data, we introduce a system capable of monitoring human activity through head pose and facial expression changes, utilising an affordable 3D sensing technology (Microsoft Kinect sensor). An approach built on discriminative random regression forests was selected in order to rapidly and accurately estimate head pose changes in an unconstrained environment. In order to complete the secondary process of recognising four universal dominant facial expressions (happiness, anger, sadness and surprise), emotion recognition via facial expressions (ERFE) was adopted. After that, a lightweight data...

  1. Body Cues, Not Facial Expressions, Discriminate Between Intense Positive and Negative Emotions

    NARCIS (Netherlands)

    Aviezer, H.; Trope, Y.; Todorov, A.T.

    2012-01-01

    The distinction between positive and negative emotions is fundamental in emotion models. Intriguingly, neurobiological work suggests shared mechanisms across positive and negative emotions. We tested whether similar overlap occurs in real-life facial expressions. During peak intensities of emotion,

  2. Revealing variations in perception of mental states from dynamic facial expressions: a cautionary note.

    Directory of Open Access Journals (Sweden)

    Elisa Back

    Although a great deal of research has been conducted on the recognition of basic facial emotions (e.g., anger, happiness, sadness), much less research has been carried out on the more subtle facial expressions of an individual's mental state (e.g., anxiety, disinterest, relief). Of particular concern is that these mental state expressions provide a crucial source of communication in everyday life, but little is known about the accuracy with which natural dynamic facial expressions of mental states are identified and, in particular, the variability in mental state perception that is produced. Here we report the findings of two studies that investigated the accuracy and variability with which dynamic facial expressions of mental states were identified by participants. Both studies used stimuli carefully constructed using procedures adopted in previous research, and free-report (Study 1) and forced-choice (Study 2) measures of response accuracy and variability. The findings of both studies showed levels of response accuracy that were accompanied by substantial variation in the labels assigned by observers to each mental state. Thus, when mental states are identified from facial expressions in experiments, the identities attached to these expressions appear to vary considerably across individuals. This variability raises important issues for understanding the identification of mental states in everyday situations and for the use of responses in facial expression research.

  3. Processing facial expressions of emotion: upright vs. inverted images

    Directory of Open Access Journals (Sweden)

    David Bimler

    2013-02-01

    We studied discrimination of briefly presented upright vs. inverted emotional facial expressions (FEs), hypothesising that inversion would impair emotion decoding by disrupting holistic FE processing. Stimuli were photographs of seven emotion prototypes, of a male and a female poser (Ekman and Friesen, 1976), and eight intermediate morphs in each set. Subjects made speeded Same/Different judgements of emotional content for all Upright (U) or Inverted (I) pairs of FEs, presented for 500 ms, 100 times each pair. Signal Detection Theory revealed the sensitivity measure d' to be slightly but significantly higher for the upright FEs. In further analysis using multidimensional scaling (MDS), percentages of Same judgements were taken as an index of pairwise perceptual similarity, separately for U and I presentation modes. The outcome was a 4D 'emotion expression space', with FEs represented as points and the dimensions identified as Happy-Sad, Surprise/Fear, Disgust and Anger. The solutions for U and I FEs were compared by means of cophenetic and canonical correlation, Procrustes analysis and weighted-Euclidean analysis of individual differences. Differences in discrimination produced by inverting FE stimuli were found to be small and manifested as minor changes in the MDS structure or weights of the dimensions. Solutions differed substantially more between the two posers, however. Notably, for stimuli containing elements of Happiness (whether U or I), the MDS structure revealed some signs of categorical perception, indicating that mouth curvature, the dominant feature conveying Happiness, is visually salient and receives early processing. The findings suggest that for briefly presented FEs, Same/Different decisions are dominated by low-level visual analysis of abstract patterns of lightness and edge filters, but also reflect emerging featural analysis. These analyses, insensitive to face orientation, enable initial positive/negative Valence...

  4. Deep Pain: Exploiting Long Short-Term Memory Networks for Facial Expression Classification

    DEFF Research Database (Denmark)

    Rodriguez, Pau; Cucurull, Guillem; Gonzàlez, Jordi

    2017-01-01

    ... in pain assessment, which are based on facial features only, we suggest that the performance can be enhanced by feeding the raw frames to deep learning models, outperforming the latest state-of-the-art results while also directly facing the problem of imbalanced data. As a baseline, our approach first ... appearance versus taking into account the whole image. As a result, we outperform current state-of-the-art AUC performance in the UNBC-McMaster Shoulder Pain Expression Archive Database. In addition, to evaluate the generalization properties of our proposed methodology on facial motion recognition, we also report competitive results in the Cohn-Kanade+ facial expression database.
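
    As a rough illustration of the frames-to-LSTM idea described above (a PyTorch stand-in, not the authors' architecture or weights), per-frame CNN features can feed an LSTM whose final hidden state yields a pain score:

```python
# Toy frames-to-LSTM pain estimator in the spirit of the abstract; the layer
# sizes and data shapes are invented for illustration.
import torch
import torch.nn as nn

class DeepPainSketch(nn.Module):
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                 # tiny per-frame encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, feat_dim), nn.ReLU())
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)          # scalar pain intensity

    def forward(self, clips):                     # clips: (batch, time, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])                   # score from last hidden state

scores = DeepPainSketch()(torch.randn(2, 8, 1, 48, 48))
print(scores.shape)  # torch.Size([2, 1])
```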

  5. Synthesis of Facial Image with Expression Based on Muscular Contraction Parameters Using Linear Muscle and Sphincter Muscle

    Science.gov (United States)

    Ahn, Seonju; Ozawa, Shinji

    We aim to synthesize individual facial images with expression based on muscular contraction parameters. We have proposed a method of calculating the muscular contraction parameters from an arbitrary face image without using learning for each individual. As a result, we could generate not only individual facial expressions, but also the facial expressions of various persons. In this paper, we propose a muscle-based facial model in which the facial muscles are defined as both linear muscles and a novel sphincter muscle. Additionally, we propose a method of synthesizing an individual facial image with expression based on muscular contraction parameters. First, the individual facial model with expression is generated by fitting to the arbitrary face image. Next, the muscular contraction parameters are calculated that correspond to the expression displacement of the input face image. Finally, the facial expression is synthesized by the vertex displacements of a neutral facial model based on the calculated muscular contraction parameters. Experimental results reveal that the novel sphincter muscle can synthesize facial expressions that correspond to the actual face image with arbitrary mouth or eye expressions.

  6. Comparative Study of Human Age Estimation with or without Preclassification of Gender and Facial Expression

    OpenAIRE

    Nguyen, Dat Tien; Cho, So Ra; Shin, Kwang Yong; Bang, Jae Won; Park, Kang Ryoung

    2014-01-01

    Age estimation has many useful applications, such as age-based face classification, finding lost children, surveillance monitoring, and face recognition invariant to age progression. Among many factors affecting age estimation accuracy, gender and facial expression can have negative effects. In our research, the effects of gender and facial expression on age estimation using support vector regression (SVR) method are investigated. Our research is novel in the following four ways. First, the a...

  7. Are event-related potentials to dynamic facial expressions of emotion related to individual differences in the accuracy of processing facial expressions and identity?

    Science.gov (United States)

    Recio, Guillermo; Wilhelm, Oliver; Sommer, Werner; Hildebrandt, Andrea

    2017-04-01

    Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain-behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = -.51) and memory (r = -.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.

  8. Impaired social brain network for processing dynamic facial expressions in autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Sato Wataru

    2012-08-01

    Background: Impairment of social interaction via facial expressions represents a core clinical feature of autism spectrum disorders (ASD). However, the neural correlates of this dysfunction remain unidentified. Because this dysfunction is manifested in real-life situations, we hypothesized that the observation of dynamic, compared with static, facial expressions would reveal abnormal brain functioning in individuals with ASD. We presented dynamic and static facial expressions of fear and happiness to individuals with high-functioning ASD and to age- and sex-matched typically developing controls and recorded their brain activities using functional magnetic resonance imaging (fMRI). Results: Regional analysis revealed reduced activation of several brain regions in the ASD group compared with controls in response to dynamic versus static facial expressions, including the middle temporal gyrus (MTG), fusiform gyrus, amygdala, medial prefrontal cortex, and inferior frontal gyrus (IFG). Dynamic causal modeling analyses revealed that bi-directional effective connectivity involving the primary visual cortex-MTG-IFG circuit was enhanced in response to dynamic as compared with static facial expressions in the control group. Group comparisons revealed that all these modulatory effects were weaker in the ASD group than in the control group. Conclusions: These results suggest that weak activity and connectivity of the social brain network underlie the impairment in social interaction involving dynamic facial expressions in individuals with ASD.

  9. Rapid influence of emotional scenes on encoding of facial expressions: an ERP study.

    Science.gov (United States)

    Righart, Ruthger; de Gelder, Beatrice

    2008-09-01

    In daily life, we perceive a person's facial reaction as part of the natural environment surrounding it. Because most studies have investigated how facial expressions are recognized by using isolated faces, it is unclear what role the context plays. Although it has been observed that the N170 for facial expressions is modulated by the emotional context, it was not clear whether individuals use context information on this stage of processing to discriminate between facial expressions. The aim of the present study was to investigate how the early stages of face processing are affected by emotional scenes when explicit categorizations of fearful and happy facial expressions are made. Emotion effects were found for the N170, with larger amplitudes for faces in fearful scenes as compared to faces in happy and neutral scenes. Critically, N170 amplitudes were significantly increased for fearful faces in fearful scenes as compared to fearful faces in happy scenes and expressed in left-occipito-temporal scalp topography differences. Our results show that the information provided by the facial expression is combined with the scene context during the early stages of face processing.

  10. Rapid influence of emotional scenes on encoding of facial expressions: an ERP study

    Science.gov (United States)

    Righart, Ruthger

    2008-01-01

    In daily life, we perceive a person's facial reaction as part of the natural environment surrounding it. Because most studies have investigated how facial expressions are recognized by using isolated faces, it is unclear what role the context plays. Although it has been observed that the N170 for facial expressions is modulated by the emotional context, it was not clear whether individuals use context information on this stage of processing to discriminate between facial expressions. The aim of the present study was to investigate how the early stages of face processing are affected by emotional scenes when explicit categorizations of fearful and happy facial expressions are made. Emotion effects were found for the N170, with larger amplitudes for faces in fearful scenes as compared to faces in happy and neutral scenes. Critically, N170 amplitudes were significantly increased for fearful faces in fearful scenes as compared to fearful faces in happy scenes and expressed in left-occipito-temporal scalp topography differences. Our results show that the information provided by the facial expression is combined with the scene context during the early stages of face processing. PMID:19015119

  11. Facial redness, expression, and masculinity influence perceptions of anger and health.

    Science.gov (United States)

    Young, Steven G; Thorstenson, Christopher A; Pazda, Adam D

    2016-12-29

    Past research has found that skin colouration, particularly facial redness, influences the perceived health and emotional state of target individuals. In the current work, we explore several extensions of this past research. In Experiment 1, we manipulated facial redness incrementally on neutral and angry faces and had participants rate each face for anger and health. Different red effects emerged, as perceived anger increased in a linear manner as facial redness increased. Health ratings instead showed a curvilinear trend, as both extreme paleness and redness were rated as less healthy than moderate levels of red. Experiment 2 replicated and extended these findings by manipulating the masculinity of both angry and neutral faces that varied in redness. The results found the effect of red on perceived anger and health was moderated by masculine face structure. Collectively, these results show that facial redness has context dependent effects that vary based on facial expression, appearance, and differentially impact ratings of emotional states and health.

  12. Electromyographic responses to emotional facial expressions in 6-7 year olds: a feasibility study.

    Science.gov (United States)

    Deschamps, P K H; Schutte, I; Kenemans, J L; Matthys, W; Schutter, D J L G

    2012-08-01

    Preliminary studies have demonstrated that school-aged children (average age 9-10 years) show mimicry responses to happy and angry facial expressions. The aim of the present study was to assess the feasibility of using facial electromyography (EMG) as a method to study facial mimicry responses of younger children, aged 6-7 years, to emotional facial expressions of other children. Facial EMG activity to the presentation of dynamic emotional faces was recorded from the corrugator, zygomaticus, frontalis and depressor muscles in sixty-one healthy participants aged 6-7 years. Results showed that the presentation of angry faces was associated with corrugator activation and zygomaticus relaxation, happy faces with an increase in zygomaticus and a decrease in corrugator activation, fearful faces with frontalis activation, and sad faces with a combination of corrugator and frontalis activation. This study demonstrates the feasibility of measuring facial EMG responses to emotional facial expressions in 6-7 year old children. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Differential expression of wound fibrotic factors between facial and trunk dermal fibroblasts.

    Science.gov (United States)

    Kurita, Masakazu; Okazaki, Mutsumi; Kaminishi-Tanikawa, Akiko; Niikura, Mamoru; Takushima, Akihiko; Harii, Kiyonori

    2012-01-01

    Clinically, wounds on the face tend to heal with less scarring than those on the trunk, but the causes of this difference have not been clarified. Fibroblasts obtained from different parts of the body are known to show different properties. To investigate whether the characteristic properties of facial and trunk wound healing are caused by differences in local fibroblasts, we comparatively analyzed the functional properties of superficial and deep dermal fibroblasts obtained from the facial and trunk skin of seven individuals, with an emphasis on tendency for fibrosis. Proliferation kinetics and mRNA and protein expression of 11 fibrosis-associated factors were investigated. The proliferation kinetics of facial and trunk fibroblasts were identical, but the expression and production levels of profibrotic factors, such as extracellular matrix, transforming growth factor-β1, and connective tissue growth factor mRNA, were lower in facial fibroblasts when compared with trunk fibroblasts, while the expression of antifibrotic factors, such as collagenase, basic fibroblast growth factor, and hepatocyte growth factor, showed no clear trends. The differences in functional properties of facial and trunk dermal fibroblasts were consistent with the clinical tendencies of healing of facial and trunk wounds. Thus, the differences between facial and trunk scarring are at least partly related to the intrinsic nature of the local dermal fibroblasts.

  14. The assessment of facial expressions in piglets undergoing tail docking and castration: towards the development of the Piglet Grimace Scale

    Directory of Open Access Journals (Sweden)

    Pierpaolo Di Giminiani

    2016-11-01

    Many piglets are exposed to potentially painful husbandry procedures within the first week of life, including tail docking and castration, without the provision of either anaesthesia or analgesia. The assessment methods used to evaluate pain experienced by piglets are often affected by low specificity and practical limitations, prompting the investigation of alternative methodologies. The assessment of changes in facial expression following a painful event has been successfully applied to several species. The objective of this pilot study was to evaluate the utility of a grimace scale applied to neonatal pigs to evaluate pain evoked by tail docking and castration. Eight female piglets, Sus scrofa domesticus (Landrace/Large White x synthetic sire line), underwent tail docking, and 15 male piglets (75% Large White and 25% Belgian Landrace) were exposed to the castration procedure. Clear images of the faces of the piglets were collected immediately pre- and post-procedure. The images were used by experienced observers to identify Facial Action Units (FAUs) that changed in individuals over this period, and a scoring scale was depicted in a training manual. A set of randomly selected images was then combined in a scorebook, which was evaluated after training by 30 scorers, blind to the treatment. The scale for most FAUs was used with a high level of consistency across all observers. Tail docking induced a significant change (P<0.05) only in the 'orbital tightening' Action Unit, whereas no change in any unit was observed in castrated piglets. In this initial stage of development, orbital tightening at least seems to have the potential to be applied to investigate painful conditions in neonatal pigs. Nonetheless, more studies are needed to assess its full effectiveness and to evaluate the influence of possible confounds (e.g. handling stress) on the observed changes in facial expressions.

  15. Development and validation of an Argentine set of facial expressions of emotion.

    Science.gov (United States)

    Vaiman, Marcelo; Wagner, Mónica Anna; Caicedo, Estefanía; Pereno, Germán Leandro

    2017-02-01

    Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.

  16. Does gaze direction modulate facial expression processing in children with autism spectrum disorder?

    Science.gov (United States)

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent motivational tendency (i.e., an avoidant facial expression with averted eye gaze) than those with an incongruent motivational tendency. Children with ASD (9-14 years old; n = 14) were not affected by the gaze direction of facial stimuli. This finding was replicated in Experiment 2, which presented only the eye region of the face to typically developing children (n = 10) and children with ASD (n = 10). These results demonstrated that children with ASD do not encode and/or integrate multiple communicative signals based on their affective or motivational tendency.

  17. Image coding based on maximum entropy partitioning for identifying improbable intensities related to facial expressions

    Indian Academy of Sciences (India)

    SEBA SUSAN; NANDINI AGGARWAL; SHEFALI CHAND; AYUSH GUPTA

    2016-12-01

    In this paper we investigate information-theoretic image coding techniques that assign longer codes to improbable, imprecise and non-distinct intensities in the image. The variable-length coding techniques, when applied to cropped facial images of subjects with different facial expressions, highlight the set of low-probability intensities that characterize the facial expression, such as the creases in the forehead, the widening of the eyes and the opening and closing of the mouth. A new coding scheme based on maximum entropy partitioning is proposed in our work, particularly to identify the improbable intensities related to different emotions. The improbable intensities, when used as a mask, decode the facial expression correctly, providing an effective platform for future emotion categorization experiments.
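
    A toy version of the underlying idea, with a percentile threshold assumed in place of the paper's maximum entropy partitioning: rare intensities receive long Shannon code lengths, -log2 p, and are flagged as a mask.

```python
# Illustrative sketch: flag improbable intensities by Shannon code length.
# The percentile cutoff is a simplifying assumption, not the paper's scheme.
import numpy as np

def improbable_intensity_mask(img, keep_percentile=90):
    """Mask pixels whose intensity has an unusually long code."""
    counts = np.bincount(img.ravel(), minlength=256).astype(float)
    p = counts / counts.sum()
    code_len = np.full(256, np.inf)        # unseen intensities: infinite code
    code_len[p > 0] = -np.log2(p[p > 0])   # Shannon code length per intensity
    cutoff = np.percentile(code_len[np.isfinite(code_len)], keep_percentile)
    return code_len[img] > cutoff          # True where the intensity is rare

img = np.random.randint(0, 256, (64, 64))
print(improbable_intensity_mask(img).mean())
```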

  18. A Facial Expression Classification System Integrating Canny, Principal Component Analysis and Artificial Neural Network

    CERN Document Server

    Thai, Le Hoang; Hai, Tran Son

    2011-01-01

    Facial expression classification has been an interesting research problem in recent years, and many methods have been proposed to solve it. In this research, we propose a novel approach using Canny, Principal Component Analysis (PCA) and an Artificial Neural Network. Firstly, in the preprocessing phase, we use Canny for local region detection in facial images. Then each local region's features are represented based on Principal Component Analysis (PCA). Finally, an Artificial Neural Network (ANN) is applied for facial expression classification. We apply our proposed method (Canny_PCA_ANN) to the recognition of six basic facial expressions on the JAFFE database, consisting of 213 images posed by 10 Japanese female models. The experimental results show the feasibility of our proposed method.
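
    A compact stand-in for such a Canny -> PCA -> ANN pipeline, using scikit-image and scikit-learn (an MLP plays the ANN; the data below are random placeholders, not JAFFE):

```python
# Hedged sketch of an edge -> PCA -> neural-network expression classifier.
import numpy as np
from skimage.feature import canny
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def edge_vectors(faces, sigma=1.5):
    """Canny edge maps, flattened into feature vectors."""
    return np.array([canny(f, sigma=sigma).ravel().astype(float) for f in faces])

faces = np.random.rand(20, 48, 48)       # placeholder grayscale crops
labels = np.random.randint(0, 6, 20)     # six basic expressions (placeholder)

model = make_pipeline(PCA(n_components=10),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))
model.fit(edge_vectors(faces), labels)
print(model.predict(edge_vectors(faces[:2])))
```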

  19. Dimensional Information-Theoretic Measurement of Facial Emotion Expressions in Schizophrenia

    Directory of Open Access Journals (Sweden)

    Jihun Hamm

    2014-01-01

    Altered facial expressions of emotions are characteristic impairments in schizophrenia. Ratings of affect have traditionally been limited to clinical rating scales and facial muscle movement analysis, which require extensive training and have limitations based on methodology and ecological validity. To improve reliable assessment of dynamic facial expression changes, we have developed automated measurements of facial emotion expressions based on information-theoretic measures of expressivity: the ambiguity and distinctiveness of facial expressions. These measures were examined in matched groups of persons with schizophrenia (n=28) and healthy controls (n=26) who underwent video acquisition to assess the expressivity of basic emotions (happiness, sadness, anger, fear, and disgust) in evoked conditions. Persons with schizophrenia scored higher on ambiguity, the measure of conditional entropy within the expression of a single emotion, and they scored lower on distinctiveness, the measure of mutual information across expressions of different emotions. The automated measures compared favorably with observer-based ratings. This method can be applied for delineating dynamic emotional expressivity in healthy and clinical populations.
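
    The two measures can be illustrated on discretised expression data. The sketch below, with an invented data layout, computes per-emotion entropy as a simplified stand-in for the ambiguity measure and the emotion-symbol mutual information as distinctiveness:

```python
# Hedged illustration of entropy-based expressivity measures; the symbol
# stream (quantised facial-feature patterns) is invented for the example.
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import mutual_info_score

emotions = np.array(["happy", "happy", "sad", "sad", "anger", "anger"] * 50)
symbols = np.random.randint(0, 8, emotions.size)   # e.g. quantised AU patterns

# Spread of symbols within one intended emotion (simplified ambiguity)
ambiguity = {e: entropy(np.bincount(symbols[emotions == e], minlength=8), base=2)
             for e in np.unique(emotions)}
# Information the symbols carry about the emotion label, in bits
distinctiveness = mutual_info_score(emotions, symbols) / np.log(2)
print(ambiguity, distinctiveness)
```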

  20. Neurophysiology of spontaneous facial expressions: I. Motor control of the upper and lower face is behaviorally independent in adults.

    Science.gov (United States)

    Ross, Elliott D; Gupta, Smita S; Adnan, Asif M; Holden, Thomas L; Havlicek, Joseph; Radhakrishnan, Sridhar

    2016-03-01

    Facial expressions are described traditionally as monolithic entities. However, humans have the capacity to produce facial blends, in which the upper and lower face simultaneously display different emotional expressions. This, in turn, has led to the Component Theory of facial expressions. Recent neuroanatomical studies in monkeys have demonstrated that there are separate cortical motor areas for controlling the upper and lower face that, presumably, also occur in humans. The lower face is represented on the posterior ventrolateral surface of the frontal lobes in the primary motor and premotor cortices, and the upper face is represented on the medial surface of the posterior frontal lobes in the supplementary motor and anterior cingulate cortices. Our laboratory has been engaged in a series of studies exploring the perception and production of facial blends. Using high-speed videography, we began measuring the temporal aspects of facial expressions to develop a more complete understanding of the neurophysiology underlying facial expressions and facial blends. The goal of the research presented here was to determine whether spontaneous facial expressions in adults are predominantly monolithic or exhibit independent motor control of the upper and lower face. We found that spontaneous facial expressions are very complex and that the motor control of the upper and lower face is overwhelmingly independent, thus robustly supporting the Component Theory of facial expressions. Seemingly monolithic expressions, be they full facial or facial blends, are most likely the result of a timing coincidence rather than synchronous coordination between the ventrolateral and medial cortical motor areas responsible for controlling the lower and upper face, respectively. In addition, we found evidence that the right and left face may also exhibit independent motor control, thus supporting the concept that spontaneous facial expressions are organized predominantly across the horizontal facial

  1. Recognizing dynamic facial expressions of emotion: Specificity and intensity effects in event-related brain potentials.

    Science.gov (United States)

    Recio, Guillermo; Schacht, Annekathrin; Sommer, Werner

    2014-02-01

    Emotional facial expressions usually arise dynamically from a neutral expression. Yet, most previous research focused on static images. The present study investigated basic aspects of processing dynamic facial expressions. In two experiments, we presented short videos of facial expressions of six basic emotions and non-emotional facial movements emerging at variable and fixed rise times, attaining different intensity levels. In event-related brain potentials (ERP), effects of emotion but also for non-emotional movements appeared as early posterior negativity (EPN) between 200 and 350ms, suggesting an overall facilitation of early visual encoding for all facial movements. These EPN effects were emotion-unspecific. In contrast, relative to happiness and neutral expressions, negative emotional expressions elicited larger late positive ERP components (LPCs), indicating a more elaborate processing. Both EPN and LPC amplitudes increased with expression intensity. Effects of emotion and intensity were additive, indicating that intensity (understood as the degree of motion) increases the impact of emotional expressions but not its quality. These processes can be driven by all basic emotions, and there is little emotion-specificity even when statistical power is considerable (N (Experiment 2)=102). Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Interpreting Text Messages with Graphic Facial Expression by Deaf and Hearing People

    Directory of Open Access Journals (Sweden)

    Chihiro eSaegusa

    2015-04-01

    In interpreting verbal messages, humans use not only verbal information but also non-verbal signals such as facial expression. For example, when a person says yes with a troubled face, what he or she really means appears ambiguous. In the present study, we examined how deaf and hearing people differ in perceiving real meanings in texts accompanied by representations of facial expression. Deaf and hearing participants were asked to imagine that the face presented on the computer monitor was asked a question from another person (e.g., do you like her?). They observed either a realistic or a schematic face with a different magnitude of positive or negative expression on a computer monitor. A balloon that contained either a positive or negative text response to the question appeared at the same time as the face. Then, participants rated how much the individual on the monitor really meant it (i.e., perceived earnestness), using a 7-point scale. Results showed that the facial expression significantly modulated the perceived earnestness. The influence of positive expression on negative text responses was relatively weaker than that of negative expression on positive responses (i.e., no tended to mean no irrespective of facial expression) for both participant groups. However, this asymmetrical effect was stronger in the hearing group. These results suggest that the contribution of facial expression in perceiving real meanings from text messages is qualitatively similar but quantitatively different between deaf and hearing people.

  3. Spontaneous facial expressions of emotion of congenitally and noncongenitally blind individuals.

    Science.gov (United States)

    Matsumoto, David; Willingham, Bob

    2009-01-01

    The study of the spontaneous expressions of blind individuals offers a unique opportunity to understand basic processes concerning the emergence and source of facial expressions of emotion. In this study, the authors compared the expressions of congenitally and noncongenitally blind athletes in the 2004 Paralympic Games with each other and with those produced by sighted athletes in the 2004 Olympic Games. The authors also examined how expressions change from 1 context to another. There were no differences between congenitally blind, noncongenitally blind, and sighted athletes, either on the level of individual facial actions or in facial emotion configurations. Blind athletes did produce more overall facial activity, but these differences were isolated to head and eye movements. The blind athletes' expressions differentiated whether they had won or lost a medal match at 3 different points in time, and there were no cultural differences in expression. These findings provide compelling evidence that the production of spontaneous facial expressions of emotion is not dependent on observational learning but simultaneously demonstrates a learned component to the social management of expressions, even among blind individuals.

  4. An optimized ERP brain-computer interface based on facial expression changes

    Science.gov (United States)

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon is commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or the presentation of images) that are adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is large enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions, a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user-supplied subjective measures. Main results. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be

  5. Facial Emotion Recognition and Expression in Parkinson's Disease: An Emotional Mirror Mechanism?

    Science.gov (United States)

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J; Kilner, James

    2017-01-01

    Parkinson's disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants in order to explore the relationship between these two abilities and any differences between the two groups of participants. Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (Emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-forced-choice response format (Emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. For emotion recognition, PD reported lower scores than HC for the Ekman total score and for the emotion sub-scores happiness, fear, anger, and sadness. For the emotion expressivity task, PD and HC significantly differed in the total score (p = 0.05) and in the sub-scores for happiness, sadness, and anger. There was a significant positive correlation between facial emotion recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). PD patients showed difficulties in recognizing emotional facial

  6. Facial Emotion Recognition and Expression in Parkinson’s Disease: An Emotional Mirror Mechanism?

    Science.gov (United States)

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J.; Kilner, James

    2017-01-01

    Background and aim Parkinson’s disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants in order to explore the relationship between these two abilities and any differences between the two groups of participants. Methods Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (Emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-forced-choice response format (Emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. Results For emotion recognition, PD reported lower scores than HC for the Ekman total score and for the emotion sub-scores happiness, fear, anger, and sadness. For the emotion expressivity task, PD and HC significantly differed in the total score (p = 0.05) and in the sub-scores for happiness, sadness, and anger. There was a significant positive correlation between facial emotion recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). Conclusions PD patients

  7. Speed, amplitude, and asymmetry of lip movement in voluntary puckering and blowing expressions: implications for facial assessment.

    Science.gov (United States)

    Schmidt, Karen L; VanSwearingen, Jessie M; Levenstein, Rachel M

    2005-07-01

    The context of voluntary movement during facial assessment has significant effects on the activity of facial muscles. Using automated facial analysis, we found that healthy subjects instructed to blow produced lip movements that were longer in duration and larger in amplitude than when subjects were instructed to pucker. We also determined that lip movement for puckering expressions was more asymmetric than lip movement in blowing. Differences in characteristics of lip movement were noted using facial movement analysis and were associated with the context of the movement. The impact of the instructions given for voluntary movement on the characteristics of facial movement might have important implications for assessing the capabilities and deficits of movement control in individuals with facial movement disorders. If results generalize to the clinical context, assessment of generally focused voluntary facial expressions might inadequately demonstrate the full range of facial movement capability of an individual patient.

  8. Support vector machine-based facial-expression recognition method combining shape and appearance

    Science.gov (United States)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that the individual variance of facial feature points exists irrespective of similar expressions, which can reduce recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways compared with previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial action coding system. Second, an SVM, which is trained to recognize same and different expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions: neutral, a smile, anger, and a scream. By determining the expression of the input facial image as the one whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous methods and other fusion approaches.
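
    The fusion step described above, an SVM trained on the two matching scores rather than a hand-tuned combination rule, can be sketched in a few lines. The score distributions and labels below are synthetic stand-ins, not the paper's data.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical matching scores for 200 image pairs: column 0 from the
# shape-based matcher (feature-point ratios), column 1 from the
# appearance-based matcher. Labels: 1 = same expression, 0 = different.
rng = np.random.default_rng(1)
same = rng.normal([0.8, 0.7], 0.1, size=(100, 2))
diff = rng.normal([0.4, 0.3], 0.1, size=(100, 2))
X = np.vstack([same, diff])
y = np.array([1] * 100 + [0] * 100)

# The fusion SVM learns a decision boundary over the two matching scores,
# replacing a fixed weighted sum of the two matchers.
fusion = SVC(kernel="rbf").fit(X, y)
print(fusion.score(X, y))
```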

  9. Human Empathy, Personality and Experience Affect the Emotion Ratings of Dog and Human Facial Expressions.

    Science.gov (United States)

    Kujala, Miiamaaria V; Somppi, Sanni; Jokela, Markus; Vainio, Outi; Parkkonen, Lauri

    2017-01-01

    Facial expressions are important for humans in communicating emotions to conspecifics and enhancing interpersonal understanding. Many muscles producing facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions, and which psychological factors influence people's perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggressiveness) from images of human and dog faces with Pleasant, Neutral and Threatening expressions. We investigated how the subjects' personality (the Big Five Inventory), empathy (Interpersonal Reactivity Index) and experience of dog behavior affect the ratings of dog and human faces. Ratings of both species followed similar general patterns: human subjects classified dog facial expressions from pleasant to threatening very similarly to human facial expressions. Subjects with higher emotional empathy evaluated Threatening faces of both species as more negative in valence and higher in anger/aggressiveness. More empathetic subjects also rated the happiness of Pleasant humans but not dogs higher, and they were quicker in their valence judgments of Pleasant human, Threatening human and Threatening dog faces. Experience with dogs correlated positively with ratings of Pleasant and Neutral dog faces. Personality also had a minor effect on the ratings of Pleasant and Neutral faces in both species. The results imply that humans perceive human and dog facial expressions in a similar manner, and that the perception of both species is influenced by the psychological factors of the evaluators. Empathy, especially, affects both the speed and intensity of ratings of dogs' emotional facial expressions.

  10. Revisiting the Relationship between the Processing of Gaze Direction and the Processing of Facial Expression

    Science.gov (United States)

    Ganel, Tzvi

    2011-01-01

    There is mixed evidence on the nature of the relationship between the perception of gaze direction and the perception of facial expressions. Major support for shared processing of gaze and expression comes from behavioral studies that showed that observers cannot process expression or gaze and ignore irrelevant variations in the other dimension.…

  11. Does Facial Expressivity Count? How Typically Developing Children Respond Initially to Children with Autism

    Science.gov (United States)

    Stagg, Steven D.; Slavny, Rachel; Hand, Charlotte; Cardoso, Alice; Smith, Pamela

    2014-01-01

    Research investigating expressivity in children with autism spectrum disorder has reported flat affect or bizarre facial expressivity within this population; however, the impact expressivity may have on first impression formation has received little research input. We examined how videos of children with autism spectrum disorder were rated for…

  12. 5-HTTLPR modulates the recognition accuracy and exploration of emotional facial expressions

    Directory of Open Access Journals (Sweden)

    Sabrina eBoll

    2014-07-01

    Individual genetic differences in the serotonin transporter-linked polymorphic region (5-HTTLPR) have been associated with variations in the sensitivity to social and emotional cues as well as altered amygdala reactivity to facial expressions of emotion. Amygdala activation has further been shown to trigger gaze changes towards diagnostically relevant facial features. The current study examined whether altered socio-emotional reactivity in variants of the 5-HTTLPR promoter polymorphism reflects individual differences in attending to diagnostic features of facial expressions. For this purpose, visual exploration of emotional facial expressions was compared between a low (n = 39) and a high (n = 40) 5-HTT expressing group of healthy human volunteers in an eye tracking paradigm. Emotional faces were presented while manipulating the initial fixation such that saccadic changes towards the eyes and towards the mouth could be identified. We found that the low versus the high 5-HTT group demonstrated greater accuracy with regard to emotion classifications, particularly when faces were presented for a longer duration. No group differences in gaze orientation towards diagnostic facial features could be observed. However, participants in the low 5-HTT group exhibited more and faster fixation changes for certain emotions when faces were presented for a longer duration and overall face fixation times were reduced for this genotype group. These results suggest that the 5-HTT gene influences social perception by modulating the general vigilance to social cues rather than selectively affecting the pre-attentive detection of diagnostic facial features.

  13. Emotion Index of Cover Song Music Video Clips based on Facial Expression Recognition

    DEFF Research Database (Denmark)

    Vidakis, Nikolaos; Kavallakis, George; Triantafyllidis, Georgios

    2017-01-01

    This paper presents a scheme of creating an emotion index of cover song music video clips by recognizing and classifying facial expressions of the artist in the video. More specifically, it fuses effective and robust algorithms which are employed for expression recognition, along with the use of a neural network system using the features extracted by the SIFT algorithm. Also, we support the need for this fusion of different expression recognition algorithms because of the way that emotions are linked to facial expressions in music video clips.

  14. 3D facial expression recognition based on histograms of surface differential quantities

    KAUST Repository

    Li, Huibin

    2011-01-01

    3D face models accurately capture facial surfaces, making it possible to precisely describe facial activities. In this paper, we present a novel mesh-based method for 3D facial expression recognition using two local shape descriptors. To characterize shape information of the local neighborhood of facial landmarks, we calculate the weighted statistical distributions of surface differential quantities, including the histogram of mesh gradient (HoG) and the histogram of shape index (HoS). A normal-cycle-theory-based curvature estimation method is employed on the 3D face models, along with the common cubic-fitting curvature estimation method, for comparison. Based on the basic fact that different expressions involve different local shape deformations, the SVM classifier with both linear and RBF kernels outperforms state-of-the-art results on a subset of the BU-3DFE database under the same experimental setting. © 2011 Springer-Verlag.
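
    The HoS descriptor rests on Koenderink's shape index, a curvature-derived quantity that classifies local surface type. A minimal sketch, assuming per-vertex principal curvatures are already estimated (the record's normal-cycle and cubic-fitting estimators are not reproduced here); the curvature values and weights are placeholders.

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink's shape index from principal curvatures (k1 >= k2).

    Maps local surface type to [-1, 1]: cup, rut, saddle, ridge, cap.
    """
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def hos_descriptor(k1, k2, weights, bins=16):
    """Weighted histogram of shape index (HoS) over one landmark neighborhood."""
    si = shape_index(k1, k2)
    hist, _ = np.histogram(si, bins=bins, range=(-1, 1), weights=weights)
    return hist / (hist.sum() + 1e-12)  # normalized, suitable as SVM input

# Hypothetical curvatures for 500 vertices around one facial landmark.
rng = np.random.default_rng(2)
k1 = rng.normal(0.5, 0.2, 500)
k2 = rng.normal(-0.1, 0.2, 500)
descriptor = hos_descriptor(k1, k2, weights=np.ones(500))
```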

  15. Psychopathic traits in adolescents and recognition of emotion in facial expressions

    Directory of Open Access Journals (Sweden)

    Silvio José Lemos Vasconcellos

    2014-12-01

    Recent studies have investigated the ability of adult psychopaths and children with psychopathy traits to identify specific facial expressions of emotion. Conclusive results have not yet been found regarding whether psychopathic traits are associated with a specific deficit in the ability to identify negative emotions such as fear and sadness. This study compared 20 adolescents with psychopathic traits and 21 adolescents without these traits in terms of their ability to recognize facial expressions of emotion, using facial stimuli presented for 200 ms, 500 ms, and 1 s. Analyses indicated significant differences between the two groups' performances only for fear, and only when displayed for 200 ms. This finding is consistent with findings from other studies in the field and suggests that controlling the duration of exposure to affective stimuli in future studies may help to clarify the mechanisms underlying the facial affect recognition deficits of individuals with psychopathic traits.

  16. Analysis and evaluation of facial expression and perceived age for designing automotive frontal views

    Science.gov (United States)

    Fujiwara, Takayuki; Kawasumi, Mikiko; Koshimizu, Hiroyasu

    2007-01-01

    We propose a method for quantifying automotive frontal-view design based on research on human visual impressions of facial expressions. We evaluated the automotive frontal "face" using facial-impression words and perceived age. We then verified experimentally how effectively line-drawing and coche-PICASSO images could serve as image stimuli. As a result, some of the facial words correlated strongly with both the facial expressions and the perceived age in the line-drawing images. Moreover, the perceived age of the coche-PICASSO images was consistently younger than that of the line-drawing images.

  17. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia.

    Science.gov (United States)

    Daini, Roberta; Comparetti, Chiara M; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typically developing individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expressions (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition.

  18. Behavioural dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia

    Directory of Open Access Journals (Sweden)

    Roberta eDaini

    2014-12-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioural study in which we compared the performance of 6 CP individuals to that of typically developing individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs’ impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expressions (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition.

  19. Laterality of Facial Expressions of Emotion: Universal and Culture-Specific Influences

    Directory of Open Access Journals (Sweden)

    Manas K. Mandal

    2004-01-01

    Recent research indicates that (a) the perception and expression of facial emotion are lateralized to a great extent in the right hemisphere, and (b) whereas facial expressions of emotion embody universal signals, culture-specific learning moderates the expression and interpretation of these emotions. In the present article, we review the literature on laterality and universality, and propose that, although some components of facial expressions of emotion are governed biologically, others are culturally influenced. We suggest that the left side of the face is more expressive of emotions, is more uninhibited, and displays culture-specific emotional norms. The right side of the face, on the other hand, is less susceptible to cultural display norms and exhibits more universal emotional signals.

  20. Laterality of facial expressions of emotion: Universal and culture-specific influences.

    Science.gov (United States)

    Mandal, Manas K; Ambady, Nalini

    2004-01-01

    Recent research indicates that (a) the perception and expression of facial emotion are lateralized to a great extent in the right hemisphere, and (b) whereas facial expressions of emotion embody universal signals, culture-specific learning moderates the expression and interpretation of these emotions. In the present article, we review the literature on laterality and universality, and propose that, although some components of facial expressions of emotion are governed biologically, others are culturally influenced. We suggest that the left side of the face is more expressive of emotions, is more uninhibited, and displays culture-specific emotional norms. The right side of the face, on the other hand, is less susceptible to cultural display norms and exhibits more universal emotional signals. Copyright 2004 IOS Press

  1. Impaired recognition of prosody and subtle emotional facial expressions in Parkinson's disease.

    Science.gov (United States)

    Buxton, Sharon L; MacDonald, Lorraine; Tippett, Lynette J

    2013-04-01

    Accurately recognizing the emotional states of others is crucial for successful social interactions and social relationships. Individuals with Parkinson's disease (PD) have shown deficits in emotional recognition abilities although findings have been inconsistent. This study examined recognition of emotions from prosody and from facial emotional expressions with three levels of subtlety, in 30 individuals with PD (without dementia) and 30 control participants. The PD group were impaired on the prosody task, with no differential impairments in specific emotions. PD participants were also impaired at recognizing facial expressions of emotion, with a significant association between how well they could recognize emotions in the two modalities, even after controlling for disease severity. When recognizing facial expressions, the PD group had no difficulty identifying prototypical Ekman and Friesen (1976) emotional faces, but were poorer than controls at recognizing the moderate and difficult levels of subtle expressions. They were differentially impaired at recognizing moderately subtle expressions of disgust and sad expressions at the difficult level. Notably, however, they were impaired at recognizing happy expressions at both levels of subtlety. Furthermore how well PD participants identified happy expressions conveyed by either face or voice was strongly related to accuracy in the other modality. This suggests dysfunction of overlapping components of the circuitry processing happy expressions in PD. This study demonstrates the usefulness of including subtle expressions of emotion, likely to be encountered in everyday life, when assessing recognition of facial expressions.

  2. 5-HTTLPR modulates the recognition accuracy and exploration of emotional facial expressions

    OpenAIRE

    2014-01-01

    Individual genetic differences in the serotonin transporter-linked polymorphic region (5-HTTLPR) have been associated with variations in the sensitivity to social and emotional cues as well as altered amygdala reactivity to facial expressions of emotion. Amygdala activation has further been shown to trigger gaze changes towards diagnostically relevant facial features. The current study examined whether altered socio-emotional reactivity in variants of the 5-HTTLPR promoter polymorphism reflec...

  3. Automated Facial Expression Recognition Using Gradient-Based Ternary Texture Patterns

    Directory of Open Access Journals (Sweden)

    Faisal Ahmed

    2013-01-01

    Recognition of human expression from facial images is an interesting research area, which has received increasing attention in recent years. A robust and effective facial feature descriptor is the key to designing a successful expression recognition system. Although much progress has been made, deriving a face feature descriptor that can perform consistently under changing environments is still a difficult and challenging task. In this paper, we present the gradient local ternary pattern (GLTP), a discriminative local texture feature for representing facial expression. The proposed GLTP operator encodes the local texture of an image by computing the gradient magnitudes of the local neighborhood and quantizing those values into three discrimination levels. The location and occurrence information of the resulting micropatterns is then used as the face feature descriptor. The performance of the proposed method has been evaluated for the person-independent face expression recognition task. Experiments with prototypic expression images from the Cohn-Kanade (CK) face expression database validate that the GLTP feature descriptor can effectively encode the facial texture and thus achieves better recognition performance than some well-known appearance-based facial features.
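
    A rough sketch of the GLTP idea as described: gradient magnitudes via Sobel operators, a three-level comparison of each pixel's 8 neighbors against the centre, and histograms of the resulting positive and negative micropatterns. Parameter values and encoding details are assumptions for illustration; consult the paper for the authoritative operator.

```python
import numpy as np
from scipy import ndimage

def gltp_histogram(image, t=10):
    """Sketch of a gradient local ternary pattern (GLTP) descriptor."""
    # 1) Gradient magnitude of the image.
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    g = np.hypot(gx, gy)

    # 2) Ternary comparison of the 8 neighbors against the centre pixel:
    #    +1 if neighbor > centre + t, -1 if neighbor < centre - t, else 0,
    #    split into positive and negative binary codes.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = g[1:-1, 1:-1]
    pos = np.zeros_like(centre, dtype=int)
    neg = np.zeros_like(centre, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        pos |= (nb > centre + t).astype(int) << bit
        neg |= (nb < centre - t).astype(int) << bit

    # 3) Histogram the occurrences of the micropatterns.
    hist_pos = np.bincount(pos.ravel(), minlength=256)
    hist_neg = np.bincount(neg.ravel(), minlength=256)
    return np.concatenate([hist_pos, hist_neg]).astype(float)

# Hypothetical 64x64 face crop.
face = np.random.default_rng(3).integers(0, 256, size=(64, 64))
desc = gltp_histogram(face, t=10)
```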

  4. Internal representations reveal cultural diversity in expectations of facial expressions of emotion.

    Science.gov (United States)

    Jack, Rachael E; Caldara, Roberto; Schyns, Philippe G

    2012-02-01

    Facial expressions have long been considered the "universal language of emotion." Yet consistent cultural differences in the recognition of facial expressions contradict such notions (e.g., R. E. Jack, C. Blais, C. Scheepers, P. G. Schyns, & R. Caldara, 2009). Rather, culture, as an intricate system of social concepts and beliefs, could generate different expectations (i.e., internal representations) of facial expression signals. To investigate, we used a powerful psychophysical technique (reverse correlation) to estimate the observer-specific internal representations of the 6 basic facial expressions of emotion (i.e., happy, surprise, fear, disgust, anger, and sad) in two culturally distinct groups (i.e., Western Caucasian [WC] and East Asian [EA]). Using complementary statistical image analyses, cultural specificity was directly revealed in these representations. Specifically, whereas WC internal representations predominantly featured the eyebrows and mouth, EA internal representations showed a preference for expressive information in the eye region. Closer inspection of the EA observer preference revealed a surprising feature: changes of gaze direction, shown primarily among the EA group. For the first time, it is revealed directly that culture can finely shape the internal representations of common facial expressions of emotion, challenging notions of a biologically hardwired "universal language of emotion."
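
    Reverse correlation itself is simple to express: present noise-perturbed stimuli and average the noise fields by response. The simulated observer below is purely illustrative and unrelated to the study's stimuli or procedure.

```python
import numpy as np

# Minimal reverse-correlation sketch: random noise is added to a base face
# on each trial; averaging the noise fields by the observer's response
# estimates the observer's internal representation.
rng = np.random.default_rng(4)
n_trials, h, w = 2000, 32, 32
noise = rng.normal(0, 1, size=(n_trials, h, w))

# Hypothetical responses: True if the observer labelled the stimulus
# "happy". Here we simulate an observer relying on a mouth-shaped template.
template = np.zeros((h, w))
template[24, 8:24] = 1.0
responses = (noise * template).sum(axis=(1, 2)) > 0

# Classification image: noise seen with "happy" minus noise seen without.
ci = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)
# High-magnitude pixels in `ci` mark the regions driving the judgments
# (the mouth row, in this simulation).
```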

  5. Long-term academic stress enhances early processing of facial expressions.

    Science.gov (United States)

    Zhang, Liang; Qin, Shaozheng; Yao, Zhuxi; Zhang, Kan; Wu, Jianhui

    2016-11-01

    Exposure to long-term stress can lead to a variety of emotional and behavioral problems. Although widely investigated, the neural basis of how long-term stress impacts emotional processing in humans remains largely elusive. Using event-related brain potentials (ERPs), we investigated the effects of long-term stress on the neural dynamics of emotional facial expression processing. Thirty-nine male college students undergoing preparation for a major examination and twenty-one matched controls performed a gender discrimination task for faces displaying angry, happy, and neutral expressions. The results of the Perceived Stress Scale showed that participants in the stress group perceived higher levels of long-term stress relative to the control group. ERP analyses revealed differential effects of long-term stress on two early stages of facial expression processing: 1) long-term stress generally augmented posterior P1 amplitudes to facial stimuli irrespective of expression valence, suggesting that stress can increase sensitization to visual inputs in general, and 2) long-term stress selectively augmented fronto-central P2 amplitudes for angry but not for neutral or positive facial expressions, suggesting that stress may lead to increased attentional prioritization of negative emotional stimuli. Together, our findings suggest that long-term stress has profound impacts on the early stages of facial expression processing, with an increase at the very early stage of general information input and a subsequent attentional bias toward processing emotionally negative stimuli.

  6. P2-31: In-Group Advantage in Negative Facial Expressions

    Directory of Open Access Journals (Sweden)

    Li-Chuan Hsu

    2012-10-01

    The perception of facial expressions is suggested to be universal. However, studies have shown an in-group advantage (IGA) in the recognition of facial expressions (e.g., Matsumoto, 1989, 1992): people understand emotions more accurately when these emotions are expressed by members of their own cultural group. A balanced design was used to investigate whether this IGA appears in both Western and Asian (Taiwanese) people. In an emotion identification task, participants identified positive (happy) and negative (sadness, fear, and anger) faces among Eastern and Western faces. We used Eastern faces from the Taiwanese Facial Expression Image Database (Chen, 2007) and Western faces from Ekman & Friesen (1979). Both reaction times and accuracy were measured. Results showed that although all participants identified positive and negative faces accurately, Asian participants responded significantly faster to negative Eastern faces than to negative Western faces. A similar IGA effect was found for Western participants. However, no such cultural difference was found for positive faces. The results reveal that the in-group advantage in perceiving facial expressions is specific to negative emotions, and they call the universality of facial expression perception into question.

  7. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    Science.gov (United States)

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  8. Social anxiety and trustworthiness judgments of dynamic facial expressions of emotion.

    Science.gov (United States)

    Gutiérrez-García, Aida; Calvo, Manuel G

    2016-09-01

    Perception of trustworthiness in other people is essential for successful social interaction. Facial expressions, as conveyers of feelings and intentions, are an important source of this information. We investigated how social anxiety is related to biases in the judgment of faces towards un/trustworthiness depending on the type of emotional expression and its intensity. Undergraduates with clinical levels of social anxiety and low-anxiety controls were presented with 1-s video-clips displaying facial happiness, anger, fear, sadness, disgust, surprise, or neutrality, at various levels of emotional intensity. Participants judged how trustworthy the expressers looked. Social anxiety was associated with enhanced distrust towards angry and disgusted expressions, and this occurred at lower intensity thresholds, relative to non-anxious controls. There was no effect for other negative expressions (sadness and fear), basically ambiguous expressions (surprise and neutral), or happy faces. The social anxiety and the control groups consisted of more females than males, although this gender disproportion was the same in both groups. Also, the expressive speed rate was different for the various intensity conditions, although such differences were equated for all the expressions and for both groups. Individuals with high social anxiety overestimate perceived social danger even from subtle facial cues, thus exhibiting a threat-related interpretative bias in the form of untrustworthiness judgments. Such a bias is, nevertheless, limited to facial expressions conveying direct threat such as hostility and rejection. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Recognition of Facial Expressions in Individuals with Elevated Levels of Depressive Symptoms: An Eye-Movement Study

    OpenAIRE

    2012-01-01

    Previous studies consistently reported abnormal recognition of facial expressions in depression. However, it is still not clear whether this abnormality is due to an enhanced or impaired ability to recognize facial expressions, and what underlying cognitive systems are involved. The present study aimed to examine how individuals with elevated levels of depressive symptoms differ from controls on facial expression recognition and to assess attention and information processing using eye trackin...

  10. Experience-based human perception of facial expressions in Barbary macaques (Macaca sylvanus

    Directory of Open Access Journals (Sweden)

    Laëtitia Maréchal

    2017-06-01

    Background Facial expressions convey key cues of human emotions, and may also be important for interspecies interactions. The universality hypothesis suggests that six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) should be expressed by similar facial expressions in close phylogenetic species such as humans and nonhuman primates. However, some facial expressions have been shown to differ in meaning between humans and nonhuman primates like macaques. This ambiguity in signalling emotion can lead to an increased risk of aggression and injuries for both humans and animals. This raises serious concerns for activities such as wildlife tourism where humans closely interact with wild animals. Understanding what factors (i.e., experience and type of emotion) affect the ability to recognise the emotional state of nonhuman primates, based on their facial expressions, can enable us to test the validity of the universality hypothesis, as well as reduce the risk of aggression and potential injuries in wildlife tourism. Methods The present study investigated whether different levels of experience of Barbary macaques, Macaca sylvanus, affect the ability to correctly assess different facial expressions related to aggressive, distressed, friendly or neutral states, using an online questionnaire. Participants’ level of experience was defined as either: (1) naïve: never worked with nonhuman primates and never or rarely encountered live Barbary macaques; (2) exposed: shown pictures of the different Barbary macaques’ facial expressions along with the description and the corresponding emotion prior to undertaking the questionnaire; (3) expert: worked with Barbary macaques for at least two months. Results Experience with Barbary macaques was associated with better performance in judging their emotional state. Simple exposure to pictures of macaques’ facial expressions improved the ability of inexperienced participants to better discriminate neutral

  11. Experience-based human perception of facial expressions in Barbary macaques (Macaca sylvanus).

    Science.gov (United States)

    Maréchal, Laëtitia; Levy, Xandria; Meints, Kerstin; Majolo, Bonaventura

    2017-01-01

    Facial expressions convey key cues of human emotions, and may also be important for interspecies interactions. The universality hypothesis suggests that six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) should be expressed by similar facial expressions in close phylogenetic species such as humans and nonhuman primates. However, some facial expressions have been shown to differ in meaning between humans and nonhuman primates like macaques. This ambiguity in signalling emotion can lead to an increased risk of aggression and injuries for both humans and animals. This raises serious concerns for activities such as wildlife tourism where humans closely interact with wild animals. Understanding what factors (i.e., experience and type of emotion) affect ability to recognise emotional state of nonhuman primates, based on their facial expressions, can enable us to test the validity of the universality hypothesis, as well as reduce the risk of aggression and potential injuries in wildlife tourism. The present study investigated whether different levels of experience of Barbary macaques, Macaca sylvanus, affect the ability to correctly assess different facial expressions related to aggressive, distressed, friendly or neutral states, using an online questionnaire. Participants' level of experience was defined as either: (1) naïve: never worked with nonhuman primates and never or rarely encountered live Barbary macaques; (2) exposed: shown pictures of the different Barbary macaques' facial expressions along with the description and the corresponding emotion prior to undertaking the questionnaire; (3) expert: worked with Barbary macaques for at least two months. Experience with Barbary macaques was associated with better performance in judging their emotional state. Simple exposure to pictures of macaques' facial expressions improved the ability of inexperienced participants to better discriminate neutral and distressed faces, and a trend was found for

  12. Recognition of Facial Expressions of Different Emotional Intensities in Patients with Frontotemporal Lobar Degeneration

    Directory of Open Access Journals (Sweden)

    Roy P. C. Kessels

    2007-01-01

    Behavioural problems are a key feature of frontotemporal lobar degeneration (FTLD). Also, FTLD patients show impairments in emotion processing. Specifically, the perception of negative emotional facial expressions is affected. Generally, however, negative emotional expressions are regarded as more difficult to recognize than positive ones, which thus may have been a confounding factor in previous studies. Also, ceiling effects are often present on emotion recognition tasks using full-blown emotional facial expressions. In the present study with FTLD patients, we examined the perception of sadness, anger, fear, happiness, surprise and disgust at different emotional intensities on morphed facial expressions to take task difficulty into account. Results showed that our FTLD patients were specifically impaired at recognizing anger. Also, the patients performed worse than the controls on recognition of surprise, but performed at control levels on disgust, happiness, sadness and fear. These findings corroborate and extend previous results showing deficits in emotion perception in FTLD.

  13. Feature Extraction for Facial Expression Recognition based on Hybrid Face Regions

    Directory of Open Access Journals (Sweden)

    LAJEVARDI, S.M.

    2009-10-01

    Facial expression recognition has numerous applications, including psychological research, improved human-computer interaction, and sign language translation. A novel facial expression recognition system based on hybrid face regions (HFR) is investigated. The expression recognition system is fully automatic and consists of the following modules: face detection, facial feature detection, feature extraction, optimal feature selection, and classification. The features are extracted from both the whole face image and face regions (eyes and mouth) using log-Gabor filters. Then, the most discriminative features are selected based on a mutual information criterion. The system can automatically recognize six expressions: anger, disgust, fear, happiness, sadness and surprise. The selected features are classified using the Naive Bayesian (NB) classifier. The proposed method has been extensively assessed using the Cohn-Kanade and JAFFE databases. The experiments have highlighted the efficiency of the proposed HFR method in enhancing the classification rate.
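
    A compact sketch of the pipeline's last stages under stated assumptions: a simplified, orientation-free radial log-Gabor filter (the paper presumably uses an oriented filter bank), mutual-information ranking of features, and a Naive Bayes classifier on the selected features. All data here are random placeholders.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.naive_bayes import GaussianNB

def log_gabor_magnitude(img, f0=0.1, sigma_ratio=0.55):
    """Filter an image with a radial log-Gabor filter in the frequency domain."""
    rows, cols = img.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                      # avoid log(0) at DC
    lg = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    lg[0, 0] = 0.0                          # zero DC response
    return np.abs(np.fft.ifft2(np.fft.fft2(img) * lg))

# Hypothetical dataset: 120 face/eye/mouth crops (32x32), 6 expression labels.
rng = np.random.default_rng(5)
X = np.array([log_gabor_magnitude(rng.normal(size=(32, 32))).ravel()
              for _ in range(120)])
y = rng.integers(0, 6, size=120)

# Keep the features that carry the most mutual information about the
# expression label, then train the Naive Bayes classifier on them.
mi = mutual_info_classif(X, y, random_state=0)
top = np.argsort(mi)[-200:]
clf = GaussianNB().fit(X[:, top], y)
```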

  14. In Your Face: Startle to Emotional Facial Expressions Depends on Face Direction

    Science.gov (United States)

    Michalsen, Henriette; Øvervoll, Morten

    2017-01-01

    Although faces are often included in the broad category of emotional visual stimuli, the affective impact of different facial expressions is not well documented. The present experiment investigated startle electromyographic responses to pictures of neutral, happy, angry, and fearful facial expressions, with a frontal face direction (directed) and at a 45° angle to the left (averted). Results showed that emotional facial expressions interact with face direction to produce startle potentiation: Greater responses were found for angry expressions, compared with fear and neutrality, with directed faces. When faces were averted, fear and neutrality produced larger responses compared with anger and happiness. These results are in line with the notion that startle is potentiated to stimuli signaling threat. That is, a forward directed angry face may signal a threat toward the observer, and a fearful face directed to the side may signal a possible threat in the environment.

  15. Personality Trait and Facial Expression Filter-Based Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Seongah Chin

    2013-02-01

    In this paper, we present technical approaches that bridge the gap in the research related to the use of brain-computer interfaces for entertainment and facial expressions. Such facial expressions that reflect an individual's personal traits can be used to better realize artificial facial expressions in a gaming environment based on a brain-computer interface. First, an emotion extraction filter is introduced in order to classify emotions on the basis of the users' brain signals in real time. Next, a personality trait filter is defined to classify extrovert and introvert types, which manifest as five traits: very extrovert, extrovert, medium, introvert and very introvert. In addition, facial expressions derived from expression rates are obtained by an extrovert-introvert fuzzy model through its defuzzification process. Finally, we validate the approach via an analysis of variance of the personality trait filter, a k-fold cross-validation of the emotion extraction filter, an accuracy analysis, a user study of facial synthesis and a test-case game.
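
    The extrovert-introvert fuzzy model with defuzzification can be illustrated with triangular memberships over a personality score. The membership shapes and per-trait expression rates below are hypothetical stand-ins, not the paper's model.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

# Five traits with peaks spread across the score range; each trait is
# assumed to recommend a nominal facial-expression rate (made-up values).
peaks = {"very introvert": 0.0, "introvert": 0.25, "medium": 0.5,
         "extrovert": 0.75, "very extrovert": 1.0}
rates = {"very introvert": 0.2, "introvert": 0.4, "medium": 0.6,
         "extrovert": 0.8, "very extrovert": 1.0}

def expression_rate(score):
    """Map a personality score in [0, 1] to an expression rate by centroid
    defuzzification over the five fuzzy trait memberships."""
    mu = {k: tri(score, p - 0.25, p, p + 0.25) for k, p in peaks.items()}
    num = sum(mu[k] * rates[k] for k in peaks)
    den = sum(mu.values()) + 1e-12
    return num / den

print(expression_rate(0.62))  # a mildly extrovert user -> rate near 0.7
```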

  16. Perception of stereoscopic direct gaze: The effects of interaxial distance and emotional facial expressions.

    Science.gov (United States)

    Hakala, Jussi; Kätsyri, Jari; Takala, Tapio; Häkkinen, Jukka

    2016-07-01

    Gaze perception has received considerable research attention due to its importance in social interaction. The majority of recent studies have utilized monoscopic pictorial gaze stimuli. However, a monoscopic direct gaze differs from a live or stereoscopic gaze. In the monoscopic condition, both eyes of the observer receive a direct gaze, whereas in live and stereoscopic conditions, only one eye receives a direct gaze. In the present study, we examined the implications of the difference between monoscopic and stereoscopic direct gaze. Moreover, because research has shown that stereoscopy affects the emotions elicited by facial expressions, and facial expressions affect the range of directions where an observer perceives mutual gaze (the cone of gaze), we studied the interaction effect of stereoscopy and facial expressions on gaze perception. Forty observers viewed stereoscopic images wherein one eye of the observer received a direct gaze while the other eye received a horizontally averted gaze at five different angles corresponding to five interaxial distances between the cameras in stimulus acquisition. In addition to monoscopic and stereoscopic conditions, the stimuli included neutral, angry, and happy facial expressions. The observers judged the gaze direction and mutual gaze of four lookers. Our results show that the mean of the directions received by the left and right eyes approximated the perceived gaze direction in the stereoscopic semidirect gaze condition. The probability of perceiving mutual gaze in the stereoscopic condition was substantially lower compared with monoscopic direct gaze. Furthermore, stereoscopic semidirect gaze significantly widened the cone of gaze for happy facial expressions.
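
    The reported relationship, perceived gaze direction approximately equal to the mean of the directions received by the two eyes, makes for a one-line worked example. The angles below are illustrative, not values from the study.

```python
# One eye receives a direct gaze (0 deg), the other a horizontally averted
# gaze; the perceived direction is modeled as the mean of the two inputs.
def perceived_gaze(direct_deg, averted_deg):
    return (direct_deg + averted_deg) / 2.0

for averted in (2.0, 4.0, 8.0):          # hypothetical aversion angles (deg)
    print(f"averted {averted:>4.1f} deg -> perceived "
          f"{perceived_gaze(0.0, averted):.1f} deg")
```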

  17. Facial EMG responses to emotional expressions are related to emotion perception ability.

    Science.gov (United States)

    Künecke, Janina; Hildebrandt, Andrea; Recio, Guillermo; Sommer, Werner; Wilhelm, Oliver

    2014-01-01

    Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding the emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a "reactivation" of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and its relationship to facial muscle responses, recorded with electromyography (EMG), in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of the m. corrugator supercilii in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective.

  18. Facial EMG responses to emotional expressions are related to emotion perception ability.

    Directory of Open Access Journals (Sweden)

    Janina Künecke

    Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding the emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a "reactivation" of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and its relationship to facial muscle responses, recorded with electromyography (EMG), in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of the m. corrugator supercilii in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective.

  19. "You Should Have Seen the Look on Your Face…": Self-awareness of Facial Expressions.

    Science.gov (United States)

    Qu, Fangbing; Yan, Wen-Jing; Chen, Yu-Hsin; Li, Kaiyun; Zhang, Hui; Fu, Xiaolan

    2017-01-01

    The awareness of one's facial expressions allows one to better understand, predict, and regulate one's states to adapt to different social situations. The present research investigated individuals' awareness of their own facial expressions and the influence of the duration and intensity of the expressions in two self-reference modalities: a real-time condition and a video-review condition. The participants were instructed to respond as soon as they became aware of any facial movements. The results revealed that awareness rates were 57.79% in the real-time condition and 75.92% in the video-review condition. The awareness rate was influenced by the intensity and/or the duration. The intensity thresholds at which individuals become aware of their own facial expressions were estimated using logistic regression models. The results of Generalized Estimating Equations (GEE) revealed that video-review awareness was a significant predictor of real-time awareness. These findings extend our understanding of facial expression self-awareness across the two modalities.
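
    The logistic-regression threshold estimate described above can be illustrated in a few lines: the 50%-awareness point falls where the model's linear predictor crosses zero. A minimal sketch with synthetic data (all values and names are illustrative, not the study's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Illustrative per-trial data: expression intensity (arbitrary 0-5 scale)
# and whether the participant reported awareness of the movement (0/1).
intensity = rng.uniform(0, 5, size=200)
p_aware = 1 / (1 + np.exp(-2.0 * (intensity - 2.5)))   # hidden "true" curve
aware = rng.binomial(1, p_aware)

model = LogisticRegression().fit(intensity.reshape(-1, 1), aware)

# The 50%-awareness threshold is where the linear predictor crosses zero:
# intercept + coef * x = 0  =>  x = -intercept / coef.
threshold = -model.intercept_[0] / model.coef_[0, 0]
print(f"estimated awareness threshold: {threshold:.2f}")
```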

  20. The Facial Expressive Action Stimulus Test. A test battery for the assessment of face memory, face and object perception, configuration processing, and facial expression recognition.

    Science.gov (United States)

    de Gelder, Beatrice; Huis In 't Veld, Elisabeth M J; Van den Stock, Jan

    2015-01-01

    There are many ways to assess face perception skills. In this study, we describe a novel task battery, the FEAST (Facial Expressive Action Stimulus Test), developed to test recognition of the identity and expressions of human faces as well as stimulus control categories. The FEAST consists of a neutral and emotional face memory task, a face and shoe identity matching task, a face and house part-to-whole matching task, and a human and animal facial expression matching task. The identity and part-to-whole matching tasks contain both upright and inverted conditions. The results provide reference data from a healthy sample of controls in two age groups for future users of the FEAST.

  1. Acute pathological changes of facial nucleus and expressions of postsynaptic density protein-95 following facial nerve injury of varying severity: a semi-quantitative analysis

    Institute of Scientific and Technical Information of China (English)

    Jingjing Li; Wenlong Luo

    2008-01-01

    BACKGROUND: Previous studies have demonstrated that postsynaptic density protein-95 (PSD-95) is widely distributed in the central nervous system and is related to the development of the CNS and to sensory signal transmission, as well as to acute or chronic nerve cell death following ischemic brain injury. OBJECTIVE: To semi-quantitatively determine the pathological changes of apoptotic facial neurons and the expression of PSD-95 in the facial nucleus following facial nerve injury of varying severity, using immunohistochemical staining. DESIGN, TIME AND SETTING: Randomized, controlled animal experiments were performed in the Ultrasonic Institute of the Second Affiliated Hospital of Chongqing University of Medical Sciences from September to December 2007. MATERIALS: Sixty-five healthy, adult Sprague-Dawley (SD) rats, both male and female, were used for this study. Rabbit anti-rat PSD-95 polyclonal antibody was purchased from Beijing Biosynthesis Biotechnology Co., Ltd. METHODS: SD rats were randomly assigned to a control group of five rats and three injury groups of 20 rats each. The bilateral facial nerve trunks of rats in the injury groups were exposed, clamped, or cut, respectively; no injury was inflicted on the rats of the control group. MAIN OUTCOME MEASURES: The brainstems of all rats were excised on days 1, 3, 7, and 14 post injury, and the facial nuclei were stained with hematoxylin-eosin to observe pathological changes due to apoptosis in facial neurons. PSD-95 expression in the facial nuclei was detected by immunohistochemistry, and the number of PSD-95-positive cells was counted under a light microscope. RESULTS: PSD-95 expression in the facial nucleus and the morphology of facial neurons in the exposure group showed no obvious changes at any of the time points tested (P > 0.05). However, PSD-95 expression in the facial nucleus of the clamp and cut groups increased on day 1 post injury (P < 0.05; cut group > clamp group > exposure group

  2. Automated decoding of facial expressions reveals marked differences in children when telling antisocial versus prosocial lies.

    Science.gov (United States)

    Zanette, Sarah; Gao, Xiaoqing; Brunet, Megan; Bartlett, Marian Stewart; Lee, Kang

    2016-10-01

    The current study used computer vision technology to examine the nonverbal facial expressions of children (6-11 years old) telling antisocial and prosocial lies. Children in the antisocial lying group completed a temptation resistance paradigm in which they were asked not to peek at a gift being wrapped for them. All children peeked at the gift and subsequently lied about their behavior. Children in the prosocial lying group were given an undesirable gift and asked if they liked it. All children lied about liking the gift. Nonverbal behavior was analyzed using the Computer Expression Recognition Toolbox (CERT), which employs the Facial Action Coding System (FACS), to automatically code children's facial expressions while lying. Using CERT, children's facial expressions during antisocial and prosocial lying were differentiated reliably, significantly above chance-level accuracy. The basic expressions of emotion that distinguished antisocial lies from prosocial lies were joy and contempt. Children expressed more joy in prosocial lying than in antisocial lying. Girls showed more joy and less contempt than boys when they told prosocial lies. Boys showed more contempt when they told prosocial lies than when they told antisocial lies. The key action units (AUs) that differentiate children's antisocial and prosocial lies are blink/eye closure, lip pucker, and lip raise on the right side. Together, these findings indicate that children's facial expressions differ when telling antisocial versus prosocial lies. The reliability of CERT in detecting such differences suggests the viability of using computer vision technology in deception research.
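
    Downstream of a toolbox like CERT, the classification step amounts to training a standard classifier on per-video action-unit features. A minimal sketch with synthetic stand-in data (the feature layout, effect, and numbers are invented for illustration; CERT itself is not reproduced here):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_videos, n_aus = 120, 12
# Stand-in for per-video CERT output: mean activations of selected
# action units; labels: 0 = antisocial lie, 1 = prosocial lie.
X = rng.normal(size=(n_videos, n_aus))
y = rng.integers(0, 2, size=n_videos)
X[y == 1, 0] += 0.8                 # pretend prosocial lies show more joy

clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```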

  3. Reactions to facial expressions: effects of social context and speech anxiety on responses to neutral, anger, and joy expressions.

    Science.gov (United States)

    Vrana, Scott R; Gross, Daniel

    2004-03-01

    Male and female participants (n = 19) high or low in speech fear viewed pictures of faces posed in anger, neutral, and joyful expressions for 8 s each. Zygomaticus major and corrugator supercilii EMG, skin conductance, and heart rate were measured during picture viewing, and subjective ratings were made after each picture. Compared to anger expressions, joy expressions elicited greater zygomatic EMG, less corrugator EMG, and greater heart rate and skin conductance. Physiological responses to neutral expressions were similar to those to anger expressions. Expressions posed by women elicited more negative physiological responses than expressions posed by men. More fearful participants exhibited more negative and less positive facial expressions, and showed skin conductance responses suggesting greater attention when viewing negative expressions. The results suggest that reactions to facial expressions are influenced by social context and are not simple mimicry.

  4. The Development of Dynamic Facial Expression Recognition at Different Intensities in 4- to 18-Year-Olds

    Science.gov (United States)

    Montirosso, Rosario; Peverelli, Milena; Frigerio, Elisa; Crespi, Monica; Borgatti, Renato

    2010-01-01

    The primary purpose of this study was to examine the effect of the intensity of emotion expression on children's developing ability to label emotion during a dynamic presentation of five facial expressions (anger, disgust, fear, happiness, and sadness). A computerized task (AFFECT--animated full facial expression comprehension test) was used to…

  5. Facial expression preserving privacy protection using image melding

    OpenAIRE

    Nakashima, Yuta; Koyama, Tetsuya; Yokoya, Naokazu; Babaguchi, Noboru

    2015-01-01

    An enormous number of images are currently shared through social networking services such as Facebook. These images usually contain the appearance of people and may violate their privacy if published without permission. To remedy this privacy concern, visual privacy protection, such as blurring, is applied to the facial regions of people who have not given permission. However, besides degrading image quality, this may spoil the context of the image: If some people are f...

  6. Strategies for Perceiving Facial Expressions in Adults with Autism Spectrum Disorder

    Science.gov (United States)

    Walsh, Jennifer A.; Vida, Mark D.; Rutherford, M. D.

    2014-01-01

    Rutherford and McIntosh (J Autism Dev Disord 37:187-196, 2007) demonstrated that individuals with autism spectrum disorder (ASD) are more tolerant than controls of exaggerated schematic facial expressions, suggesting that they may use an alternative strategy when processing emotional expressions. The current study was designed to test this finding…

  7. Positive, but Not Negative, Facial Expressions Facilitate 3-Month-Olds' Recognition of an Individual Face

    Science.gov (United States)

    Brenna, Viola; Proietti, Valentina; Montirosso, Rosario; Turati, Chiara

    2013-01-01

    The current study examined whether and how the presence of a positive or a negative emotional expression may affect the face recognition process at 3 months of age. Using a familiarization procedure, Experiment 1 demonstrated that positive (i.e., happiness), but not negative (i.e., fear and anger) facial expressions facilitate infants' ability to…

  8. Recognition of Facial Expressions of Emotion in Adults with Down Syndrome

    Science.gov (United States)

    Virji-Babul, Naznin; Watt, Kimberley; Nathoo, Farouk; Johnson, Peter

    2012-01-01

    Research on facial expressions in individuals with Down syndrome (DS) has been conducted using photographs. Our goal was to examine the effect of motion on perception of emotional expressions. Adults with DS, adults with typical development matched for chronological age (CA), and children with typical development matched for developmental age (DA)…

  9. Nine-year-old children use norm-based coding to visually represent facial expression.

    Science.gov (United States)

    Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian

    2013-10-01

    Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm, or average, face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average face. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors in both age groups. The results indicate that, like adults, children's coding of facial expressions is norm-based.

  10. The Role of Facial Expressions in Attention-Orienting in Adults and Infants

    Science.gov (United States)

    Rigato, Silvia; Menon, Enrica; Di Gangi, Valentina; George, Nathalie; Farroni, Teresa

    2013-01-01

    Faces convey many signals (i.e., gaze or expressions) essential for interpersonal interaction. We have previously shown that facial expressions of emotion and gaze direction are processed and integrated in specific combinations early in life. These findings open a number of developmental questions and specifically in this paper we address whether…

  11. A novel dataset for real-life evaluation of facial expression recognition methodologies

    NARCIS (Netherlands)

    Siddiqi, Muhammad Hameed; Ali, Maqbool; Idris, Muhammad; Banos, Oresti; Lee, Sungyoung; Choo, Hyunseung

    2016-01-01

    One limitation seen among most of the previous methods is that they were evaluated under settings that are far from real-life scenarios. The reason is that the existing facial expression recognition (FER) datasets are mostly pose-based and assume a predefined setup. The expressions in these datasets

  12. Facial and Bodily Expressions for Control and Adaptation of Games (ECAG 2008)

    NARCIS (Netherlands)

    Nijholt, Antinus; Poppe, Ronald Walter

    2008-01-01

    In this workshop of the 8th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2008), the emphasis is on research on facial and bodily expressions for the control and adaptation of games. We distinguish between two forms of expressions, depending on whether the user has the

  13. Dynamic and Static Facial Expressions Decoded from Motion-Sensitive Areas in the Macaque Monkey

    Science.gov (United States)

    Furl, Nicholas; Hadj-Bouziane, Fadila; Liu, Ning; Averbeck, Bruno B.; Ungerleider, Leslie G.

    2012-01-01

    Humans adeptly use visual motion to recognize socially-relevant facial information. The macaque provides a model visual system for studying neural coding of expression movements, as its superior temporal sulcus (STS) possesses brain areas selective for faces and areas sensitive to visual motion. We employed functional magnetic resonance imaging and facial stimuli to localize motion-sensitive areas (Mf areas), which responded more to dynamic faces compared to static faces, and face-selective areas, which responded selectively to faces compared to objects and places. Using multivariate analysis, we found that information about both dynamic and static facial expressions could be robustly decoded from Mf areas. By contrast, face-selective areas exhibited relatively less facial expression information. Classifiers trained with expressions from one motion type (dynamic or static) showed poor generalization to the other motion type, suggesting that Mf areas employ separate and non-confusable neural codes for dynamic and static presentations of the same expressions. We also show that some of the motion sensitivity elicited by facial stimuli was not specific to faces but could also be elicited by moving dots, particularly in FST and STPm/LST, confirming their already well-established low-level motion sensitivity. A different pattern was found in anterior STS, which responded more to dynamic than static faces but was not sensitive to dot motion. Overall, we show that emotional expressions are mostly represented outside of face-selective cortex, in areas sensitive to motion. These regions may play a fundamental role in enhancing recognition of facial expression despite the complex stimulus changes associated with motion. PMID:23136433
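
    The cross-decoding test described above (train on one motion type, test on the other) is straightforward to express with a linear classifier. A minimal sketch with random stand-in patterns (data shapes and names are illustrative, not the study's):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_trials, n_voxels = 80, 300
# Hypothetical multi-voxel patterns for each motion type, with labels
# giving the expression shown on each trial (three classes here).
X_dynamic = rng.normal(size=(n_trials, n_voxels))
X_static = rng.normal(size=(n_trials, n_voxels))
y = rng.integers(0, 3, size=n_trials)

clf = LinearSVC().fit(X_dynamic, y)
within = clf.score(X_dynamic, y)   # optimistic: no cross-validation here
cross = clf.score(X_static, y)     # train on dynamic, test on static
print(f"within-type: {within:.2f}, cross-type: {cross:.2f}")
```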

  14. Singing emotionally: A study of pre-production, production, and post-production facial expressions

    Directory of Open Access Journals (Sweden)

    Lena Rachel Quinto

    2014-04-01

    Full Text Available Singing involves vocal production accompanied by a dynamic and meaningful use of facial expressions, which may serve as ancillary gestures that complement, disambiguate, or reinforce the acoustic signal. In this investigation, we examined the use of facial movements to communicate emotion, focusing on movements arising in three epochs: before vocalisation (pre-production), during vocalisation (production), and immediately after vocalisation (post-production). The stimuli were recordings of seven vocalists' facial movements as they sang short (14-syllable) melodic phrases with the intention of communicating happiness, sadness, irritation, or no emotion. Facial movements were presented as point-light displays to 16 observers who judged the emotion conveyed. Experiment 1 revealed that the accuracy of emotional judgement varied with singer, emotion, and epoch. Accuracy was highest in the production epoch; however, happiness was well communicated in the pre-production epoch. In Experiment 2, observers judged point-light displays of exaggerated movements. The ratings suggested that the extent of facial and head movements is largely perceived as a gauge of emotional arousal. In Experiment 3, observers rated point-light displays of scrambled movements. Configural information was removed in these stimuli but velocity and acceleration were retained. Exaggerated scrambled movements were likely to be associated with happiness or irritation, whereas unexaggerated scrambled movements were more likely to be identified as neutral. An analysis of the singers' motions revealed systematic changes in facial movement as a function of their emotional intentions. The findings confirm the central role of facial expressions in vocal emotional communication, and highlight individual differences between singers in the amount and intelligibility of facial movements made before, during, and after vocalisation.

  15. Singing emotionally: a study of pre-production, production, and post-production facial expressions.

    Science.gov (United States)

    Quinto, Lena R; Thompson, William F; Kroos, Christian; Palmer, Caroline

    2014-01-01

    Singing involves vocal production accompanied by a dynamic and meaningful use of facial expressions, which may serve as ancillary gestures that complement, disambiguate, or reinforce the acoustic signal. In this investigation, we examined the use of facial movements to communicate emotion, focusing on movements arising in three epochs: before vocalization (pre-production), during vocalization (production), and immediately after vocalization (post-production). The stimuli were recordings of seven vocalists' facial movements as they sang short (14 syllable) melodic phrases with the intention of communicating happiness, sadness, irritation, or no emotion. Facial movements were presented as point-light displays to 16 observers who judged the emotion conveyed. Experiment 1 revealed that the accuracy of emotional judgment varied with singer, emotion, and epoch. Accuracy was highest in the production epoch; however, happiness was well communicated in the pre-production epoch. In Experiment 2, observers judged point-light displays of exaggerated movements. The ratings suggested that the extent of facial and head movements was largely perceived as a gauge of emotional arousal. In Experiment 3, observers rated point-light displays of scrambled movements. Configural information was removed in these stimuli but velocity and acceleration were retained. Exaggerated scrambled movements were likely to be associated with happiness or irritation whereas unexaggerated scrambled movements were more likely to be identified as "neutral." An analysis of singers' facial movements revealed systematic changes as a function of the emotional intentions of singers. The findings confirm the central role of facial expressions in vocal emotional communication, and highlight individual differences between singers in the amount and intelligibility of facial movements made before, during, and after vocalization.

  16. Putting the face in context: Body expressions impact facial emotion processing in human infants

    Directory of Open Access Journals (Sweden)

    Purva Rajhans

    2016-06-01

    Full Text Available Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident on both the behavioral and the neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs). We primed infants with body postures (fearful, happy) that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception.

  17. Unconscious Processing of Facial Expressions in Individuals with Internet Gaming Disorder

    Directory of Open Access Journals (Sweden)

    Xiaozhe Peng

    2017-06-01

    Full Text Available Internet Gaming Disorder (IGD) is characterized by impairments in social communication and the avoidance of social contact. Facial expression processing is the basis of social communication. However, few studies have investigated how individuals with IGD process facial expressions, and whether they have deficits in emotional facial processing remains unclear. The aim of the present study was to explore these two issues by investigating the time course of emotional facial processing in individuals with IGD. A backward masking task was used to investigate the differences between individuals with IGD and normal controls (NC) in the processing of subliminally presented facial expressions (sad, happy, and neutral) with event-related potentials (ERPs). The behavioral results showed that individuals with IGD are slower than NC in response to both sad and neutral expressions in the sad–neutral context. The ERP results showed that individuals with IGD exhibit decreased amplitudes in the ERP component N170 (an index of early face processing) in response to neutral expressions compared to happy expressions in the happy–neutral expressions context, which might be due to their expectancies for positive emotional content. The NC, on the other hand, exhibited comparable N170 amplitudes in response to both happy and neutral expressions in the happy–neutral expressions context, as well as to sad and neutral expressions in the sad–neutral expressions context. Both individuals with IGD and NC showed comparable ERP amplitudes during the processing of sad and neutral expressions. The present study revealed that individuals with IGD have different unconscious neutral facial processing patterns compared with normal individuals and suggested that individuals with IGD may expect more positive emotion in the happy–neutral expressions context. Highlights: • The present study investigated whether the unconscious processing of facial expressions is influenced by

  1. Intelligent Avatar on E-Learning Using Facial Expression and Haptic

    Directory of Open Access Journals (Sweden)

    Ahmad Hoirul Basori

    2011-04-01

    Full Text Available The process of introducing emotion can be improved through a three-dimensional (3D) tutoring system. A problem that remains unsolved is how to provide a realistic tutor (avatar) in a virtual environment. This paper proposes an approach to teach children to understand emotion sensation through facial expression and the sense of touch (haptics). The algorithm calculates a constant factor (f) from the maximum RGB value and the magnitude force, so that the magnitude-force range can be associated with a particular colour. The integration process starts by rendering the facial expression and then adjusts the vibration power to the emotion value. In the experiment, around 71% of students agreed with the classification of magnitude force into emotion representations. Respondents commented that a high magnitude force creates a sensation similar to what they feel when angry, while a low magnitude force is more relaxing. Respondents also said that the combination of haptics and facial expression is very interactive and realistic.
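
    The abstract gives only the outline of the colour-force mapping; one plausible reading is a linear scaling between the RGB range and the force range. A loose sketch under that assumption (the device maximum and the function name are invented for illustration):

```python
# A constant factor f ties the RGB range to the haptic magnitude-force
# range, so a given force level can be rendered as a colour intensity
# (and vice versa). The exact formula is not given in the abstract;
# this is one plausible reading.
MAX_RGB = 255.0
MAX_FORCE = 3.3          # assumed device maximum (illustrative)

f = MAX_RGB / MAX_FORCE  # constant factor

def force_to_intensity(force: float) -> int:
    """Map a magnitude force to an 8-bit colour channel value."""
    return int(round(min(max(force, 0.0), MAX_FORCE) * f))

print(force_to_intensity(2.0))  # -> 155
```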

  2. Robust Facial Expression Recognition via Sparse Representation and Multiple Gabor filters

    Directory of Open Access Journals (Sweden)

    Rania Salah El-Sayed

    2013-04-01

    Full Text Available Facial expression recognition plays an important role in human communication. It has become one of the most challenging tasks in the pattern recognition field, with many applications such as human-computer interaction, video surveillance, forensic applications, and criminal investigations. In this paper we propose a method for facial expression recognition (FER). The method provides new insights into two issues in FER: feature extraction and robustness. For feature extraction, we use a sparse representation approach after applying multiple Gabor filters, and then use a support vector machine (SVM) as the classifier. We conduct extensive experiments on a standard facial expression database to verify the performance of the proposed method and compare the results with other approaches.
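
    A minimal sketch of the pipeline's shape: Gabor filter-bank features, sparse coding against a learned dictionary, and an SVM on the sparse codes. All data are synthetic and the feature set is heavily simplified relative to the paper:

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import SVC

rng = np.random.default_rng(3)
faces = rng.random((60, 32, 32))          # stand-in aligned face crops
labels = rng.integers(0, 6, size=60)      # six expression classes

def gabor_features(img):
    """Mean response magnitudes from a small Gabor filter bank."""
    feats = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        for frequency in (0.2, 0.4):
            real, imag = gabor(img, frequency=frequency, theta=theta)
            feats.append(np.hypot(real, imag).mean())
    return feats

X = np.array([gabor_features(f) for f in faces])

# Sparse-code the Gabor features against a learned dictionary, then
# classify the sparse codes with an SVM.
dico = MiniBatchDictionaryLearning(n_components=16,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=4,
                                   random_state=0)
codes = dico.fit(X).transform(X)
clf = SVC(kernel="linear").fit(codes, labels)
print(f"training accuracy: {clf.score(codes, labels):.2f}")
```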

  3. Vibrotactile Rendering of Human Emotions on the Manifold of Facial Expressions

    Directory of Open Access Journals (Sweden)

    Shafiq ur Réhman

    2008-07-01

    Full Text Available Facial expressions play an important role in everyday social interaction. To enhance the daily life experience of the visually impaired, we present the Facial Expression Appearance vibroTactile System (FEATS), which uses a vibrotactile chair as a social interface for the visually impaired. An array of vibrating motors is mounted spatially on the back of an office chair. The Locally Linear Embedding (LLE) algorithm is extended to compute the manifold of facial expressions, which is used to control the vibration of the motors to render emotions. Thus, the chair can provide the visually impaired with on-line, dynamic emotion information about the person he/she is communicating with. Usability evaluation of the system was carried out. The results are encouraging and demonstrate usability for the visually impaired: the user studies show that perfect recognition accuracy of emotion type is achieved with the FEATS.
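
    The manifold step can be sketched with scikit-learn's Locally Linear Embedding: embed per-frame appearance features, then rescale the manifold coordinates to motor drive levels. The data and the mapping to motors below are illustrative, not the FEATS implementation:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(4)
# Stand-in per-frame appearance features of the observed face
# (e.g., flattened crops); 200 frames, 400 features each.
frames = rng.random((200, 400))

# Embed the frames on a low-dimensional expression manifold with LLE.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
coords = lle.fit_transform(frames)

# Rescale manifold coordinates to motor drive levels (0-255), so that
# movement along the manifold changes the chair's vibration pattern.
lo, hi = coords.min(axis=0), coords.max(axis=0)
drive = ((coords - lo) / (hi - lo) * 255).astype(np.uint8)
print(drive[:3])
```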

  4. A selective emotional decision-making bias elicited by facial expressions.

    Science.gov (United States)

    Furl, Nicholas; Gallagher, Shannon; Averbeck, Bruno B

    2012-01-01

    Emotional and social information can sway otherwise rational decisions. For example, when participants decide between two faces that are probabilistically rewarded, they make biased choices that favor smiling over angry faces. This bias may arise because facial expressions evoke positive and negative emotional responses, which in turn may motivate social approach and avoidance. We tested a wide range of pictures that evoke emotions or convey social information, including animals, words, foods, a variety of scenes, and faces differing in trustworthiness or attractiveness, but we found that only facial expressions biased decisions. Our results extend brain imaging and pharmacological findings, which suggest that a brain mechanism supporting social interaction may be involved. Facial expressions appear to exert special influence over this social interaction mechanism, one capable of biasing otherwise rational choices. These results illustrate that only specific types of emotional experiences can sway our choices.

  5. Self-relevance appraisal influences facial reactions to emotional body expressions.

    Directory of Open Access Journals (Sweden)

    Julie Grèzes

    Full Text Available People display facial reactions when exposed to others' emotional expressions, but exactly what mechanism mediates these facial reactions remains a debated issue. In this study, we manipulated two critical perceptual features that contribute to determining the significance of others' emotional expressions: the direction of attention (toward or away from the observer) and the intensity of the emotional display. Electromyographic activity over the corrugator muscle was recorded while participants observed videos of neutral to angry body expressions. Self-directed bodies induced greater corrugator activity than other-directed bodies; additionally, corrugator activity was only influenced by the intensity of anger expressed by self-directed bodies. These data support the hypothesis that rapid facial reactions are the outcome of self-relevant emotional processing.

  6. Facial Expression Recognition from Video Sequences Based on Spatial-Temporal Motion Local Binary Pattern and Gabor Multiorientation Fusion Histogram

    Directory of Open Access Journals (Sweden)

    Lei Zhao

    2017-01-01

    Full Text Available This paper proposes a novel framework for facial expression analysis using dynamic and static information in video sequences. First, based on an incremental formulation, a discriminative deformable face alignment method is adapted to locate facial points, correct in-plane head rotation, and separate the facial region from the background. Then, a spatial-temporal motion local binary pattern (LBP) feature is extracted and integrated with a Gabor multiorientation fusion histogram to give descriptors that reflect the static and dynamic texture information of facial expressions. Finally, a multiclass support vector machine (SVM) classifier with a one-versus-one strategy is applied to classify facial expressions. Experiments on the Cohn-Kanade (CK+) facial expression dataset illustrate that the integrated framework outperforms methods using single descriptors. Compared with other state-of-the-art methods on the CK+, MMI, and Oulu-CASIA VIS datasets, our proposed framework performs better.
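
    A rough sketch of the classification stage: per-frame uniform-LBP histograms pooled over time (a crude stand-in for the spatial-temporal motion LBP descriptor) fed to a one-versus-one multiclass SVM. Synthetic data throughout:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

rng = np.random.default_rng(5)
videos = rng.random((40, 8, 32, 32))      # (clips, frames, h, w), stand-in
labels = rng.integers(0, 7, size=40)      # seven expression classes

def lbp_histogram(img, P=8, R=1):
    """Uniform-LBP histogram of one frame (uint8 to keep skimage happy)."""
    codes = local_binary_pattern((img * 255).astype(np.uint8), P, R,
                                 method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Average per-frame histograms over time: a crude stand-in for the
# spatial-temporal motion LBP descriptor of the paper.
X = np.array([np.mean([lbp_histogram(f) for f in clip], axis=0)
              for clip in videos])

# One-versus-one multiclass SVM, mirroring the paper's classification stage.
clf = SVC(kernel="linear", decision_function_shape="ovo").fit(X, labels)
print(f"training accuracy: {clf.score(X, labels):.2f}")
```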

  7. Faces and bodies: perception and mimicry of emotionally congruent and incongruent facial and bodily expressions

    OpenAIRE

    2013-01-01

    Traditional emotion theories stress the importance of the face in the expression of emotions, but bodily expressions are becoming increasingly important. Here we tested the hypothesis that similar physiological responses can be evoked by observing emotional face and body signals and that the reaction to angry signals is amplified in anxious individuals. We designed three experiments in which participants categorized emotional expressions from isolated facial and bodily expressions and from emo...

  8. An Improved Surface Simplification Method for Facial Expression Animation Based on Homogeneous Coordinate Transformation Matrix and Maximum Shape Operator

    Directory of Open Access Journals (Sweden)

    Juin-Ling Tseng

    2016-01-01

    Full Text Available Facial animation is one of the most popular 3D animation topics researched in recent years. However, using facial animation requires storing a 3D facial animation model, and this model needs many triangles to accurately describe and demonstrate facial expression animation, because the face often presents a number of different expressions. Consequently, the costs associated with facial animation have increased rapidly. In an effort to reduce storage costs, researchers have sought to simplify 3D animation models using techniques such as Deformation Sensitive Decimation and Feature Edge Quadric. Previous studies have examined the problems of the homogeneity of the local coordinate system between different expression models and of the retention of simplified model characteristics. This paper proposes a method that applies a Homogeneous Coordinate Transformation Matrix to solve the problem of homogeneity of the local coordinate system, and a Maximum Shape Operator to detect shape changes in facial animation so as to properly preserve the features of facial expressions. Further, root mean square error and perceived quality error are used to compare the errors generated by different simplification methods in experiments. Experimental results show that, compared with Deformation Sensitive Decimation and Feature Edge Quadric, our method not only reduces the errors caused by simplification of facial animation, but also retains more facial features.
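
    The homogeneous-coordinate step can be illustrated directly: a single 4x4 matrix carries rotation and translation, letting the vertices of different expression models be expressed in one common local frame before simplification. A minimal sketch (matrix and vertex values illustrative):

```python
import numpy as np

def homogeneous_transform(R, t):
    """Assemble a 4x4 homogeneous matrix from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Re-express one expression model's vertices in a common local frame;
# this is the role a homogeneous coordinate transformation matrix plays
# before corresponding models are simplified together.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
T = homogeneous_transform(R, np.array([0.1, -0.2, 0.0]))

verts = np.random.default_rng(11).random((5, 3))   # illustrative vertices
vh = np.c_[verts, np.ones(len(verts))]             # homogeneous coordinates
print((vh @ T.T)[:, :3])                           # transformed vertices
```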

  9. Catégorisation des expressions faciales par marches aléatoires sur graphe

    OpenAIRE

    Chahir, Youssef; Zinbi, Youssef; Aziz, Kheir Eddine

    2007-01-01

    National audience; Describing facial expressions is a set of successive interpretations of facial components. Meaning is built up from low-level distances (whether a component is present, and if so its position). In this article, we propose an approach based on geometric diffusion by random walks on a graph for the categorization of facial expressions. The basic idea is to consider a graph modelling all the faces of a database of...

  10. Binary pattern flavored feature extractors for Facial Expression Recognition: An overview

    DEFF Research Database (Denmark)

    Kristensen, Rasmus Lyngby; Tan, Zheng-Hua; Ma, Zhanyu

    2015-01-01

    This paper conducts a survey of modern binary pattern flavored feature extractors applied to the Facial Expression Recognition (FER) problem. In total, 26 different feature extractors are included, of which six are selected for in-depth description. In addition, the paper unifies important FER terminology, describes open challenges, and provides recommendations for the scientific evaluation of FER systems. Lastly, it studies the facial expression recognition accuracy and blur invariance of the Local Frequency Descriptor. The paper seeks to bring together disjointed studies, and the main contribution...

  11. Neural substrates of human facial expression of pleasant emotion induced by comic films: a PET Study.

    Science.gov (United States)

    Iwase, Masao; Ouchi, Yasuomi; Okada, Hiroyuki; Yokoyama, Chihiro; Nobezawa, Shuji; Yoshikawa, Etsuji; Tsukada, Hideo; Takeda, Masaki; Yamashita, Ko; Takeda, Masatoshi; Yamaguti, Kouzi; Kuratsune, Hirohiko; Shimizu, Akira; Watanabe, Yasuyoshi

    2002-10-01

    Laughter and smiling are emotional expressions of pleasantness, with characteristic contraction of the facial muscles, whose neural substrate remains to be explored. The study described here is the first to investigate the generation of human facial expressions of pleasant emotion using positron emission tomography and H(2)(15)O. Regional cerebral blood flow (rCBF) during laughter/smiling induced by visual comics correlated significantly with the magnitude of laughter/smiling in the bilateral supplementary motor area (SMA) and left putamen (P < 0.05, corrected), but not in the primary motor area (M1). For voluntary facial movement, a significant correlation between rCBF and the magnitude of EMG was found in the face area of bilateral M1 and in the SMA (P < 0.001, uncorrected). Laughter/smiling, as opposed to voluntary movement, activated the visual association areas, left anterior temporal cortex, left uncus, and orbitofrontal and medial prefrontal cortices (P < 0.05, corrected), whereas voluntary facial movement generated by mimicking a laughing/smiling face activated the face area of the left M1 and bilateral SMA compared with laughter/smiling (P < 0.05, corrected). We demonstrated distinct neural substrates of emotional and volitional facial expression, and defined cognitive and experiential processes of a pleasant emotion, laughter/smiling.

  12. Surface Electromyography-Based Facial Expression Recognition in Bi-Polar Configuration

    Directory of Open Access Journals (Sweden)

    Mahyar Hamedi

    2011-01-01

    Full Text Available Problem statement: Facial expression recognition has improved recently and has become a significant issue in diagnostic and medical fields, particularly in the areas of assistive technology and rehabilitation. Apart from their usefulness, there are some problems in such applications, like peripheral conditions, lighting, contrast, and the quality of video and images. Approach: The Facial Action Coding System (FACS) and some other methods based on images or videos have been applied. This study proposed two methods for recognizing 8 different facial expressions, namely natural (rest), happiness in three conditions, anger, rage, gesturing 'a' as in the word 'apple', and gesturing 'no' by pulling up the eyebrows, based on three channels in a bipolar SEMG configuration. Raw signals were processed in three main steps (filtration, feature extraction, and active feature selection) sequentially. The processed data were fed into support vector machine (SVM) and fuzzy c-means (FCM) classifiers to be classified into the 8 facial expression groups. Results: Recognition rates of 91.8% and 80.4% were achieved for FCM and SVM, respectively. Conclusion: The results confirmed sufficient accuracy and power in this field of study, and FCM showed better ability and performance than SVM. It is expected that, in the near future, new approaches using the frequency bandwidth of each facial gesture will provide better results.
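
    The signal-processing steps named above (filtration, then feature extraction) commonly come down to a band-pass filter followed by windowed amplitude features. A minimal sketch, assuming a 1 kHz sampling rate (the classifiers, e.g. FCM or SVM, would then be trained on the resulting feature rows):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0                         # assumed sampling rate (Hz)

def bandpass(raw, low=20.0, high=450.0, order=4):
    """Zero-phase band-pass filter, a usual first step for surface EMG."""
    b, a = butter(order, [low / (FS / 2), high / (FS / 2)], btype="band")
    return filtfilt(b, a, raw)

def features(raw, win=200):
    """Windowed RMS and mean absolute value, two common sEMG features;
    the resulting rows would feed an FCM or SVM classifier."""
    sig = bandpass(raw)
    wins = sig[: len(sig) // win * win].reshape(-1, win)
    rms = np.sqrt((wins ** 2).mean(axis=1))
    mav = np.abs(wins).mean(axis=1)
    return np.column_stack([rms, mav])

rng = np.random.default_rng(6)
channel = rng.normal(size=5000)     # stand-in for one bipolar channel
print(features(channel).shape)      # (25, 2)
```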

  13. Sex differences in neural activation to facial expressions denoting contempt and disgust.

    Directory of Open Access Journals (Sweden)

    André Aleman

    Full Text Available The facial expression of contempt has been regarded to communicate feelings of moral superiority. Contempt is an emotion that is closely related to disgust, but in contrast to disgust, contempt is inherently interpersonal and hierarchical. The aim of this study was twofold. First, to investigate the hypothesis of preferential amygdala responses to contempt expressions versus disgust. Second, to investigate whether, at a neural level, men would respond stronger to biological signals of interpersonal superiority (e.g., contempt) than women. We performed an experiment using functional magnetic resonance imaging (fMRI), in which participants watched facial expressions of contempt and disgust in addition to neutral expressions. The faces were presented as distractors in an oddball task in which participants had to react to one target face. Facial expressions of contempt and disgust activated a network of brain regions, including prefrontal areas (superior, middle and medial prefrontal gyrus), anterior cingulate, insula, amygdala, parietal cortex, fusiform gyrus, occipital cortex, putamen and thalamus. Contemptuous faces did not elicit stronger amygdala activation than did disgusted expressions. To limit the number of statistical comparisons, we confined our analyses of sex differences to the frontal and temporal lobes. Men displayed stronger brain activation than women to facial expressions of contempt in the medial frontal gyrus, inferior frontal gyrus, and superior temporal gyrus. Conversely, women showed stronger neural responses than men to facial expressions of disgust. In addition, the effect of stimulus sex differed for men versus women. Specifically, women showed stronger responses to male contemptuous faces (as compared to female expressions), in the insula and middle frontal gyrus. Contempt has been conceptualized as signaling perceived moral violations of social hierarchy, whereas disgust would signal violations of physical purity. Thus, our

  14. Sex Differences in Neural Activation to Facial Expressions Denoting Contempt and Disgust

    Science.gov (United States)

    Aleman, André; Swart, Marte

    2008-01-01

    The facial expression of contempt has been regarded to communicate feelings of moral superiority. Contempt is an emotion that is closely related to disgust, but in contrast to disgust, contempt is inherently interpersonal and hierarchical. The aim of this study was twofold. First, to investigate the hypothesis of preferential amygdala responses to contempt expressions versus disgust. Second, to investigate whether, at a neural level, men would respond stronger to biological signals of interpersonal superiority (e.g., contempt) than women. We performed an experiment using functional magnetic resonance imaging (fMRI), in which participants watched facial expressions of contempt and disgust in addition to neutral expressions. The faces were presented as distractors in an oddball task in which participants had to react to one target face. Facial expressions of contempt and disgust activated a network of brain regions, including prefrontal areas (superior, middle and medial prefrontal gyrus), anterior cingulate, insula, amygdala, parietal cortex, fusiform gyrus, occipital cortex, putamen and thalamus. Contemptuous faces did not elicit stronger amygdala activation than did disgusted expressions. To limit the number of statistical comparisons, we confined our analyses of sex differences to the frontal and temporal lobes. Men displayed stronger brain activation than women to facial expressions of contempt in the medial frontal gyrus, inferior frontal gyrus, and superior temporal gyrus. Conversely, women showed stronger neural responses than men to facial expressions of disgust. In addition, the effect of stimulus sex differed for men versus women. Specifically, women showed stronger responses to male contemptuous faces (as compared to female expressions), in the insula and middle frontal gyrus. Contempt has been conceptualized as signaling perceived moral violations of social hierarchy, whereas disgust would signal violations of physical purity. Thus, our results suggest a

  15. Social alienation in schizophrenia patients: association with insula responsiveness to facial expressions of disgust.

    Directory of Open Access Journals (Sweden)

    Christian Lindner

    Full Text Available INTRODUCTION: Among the functional neuroimaging studies on emotional face processing in schizophrenia, few have used paradigms with facial expressions of disgust. In this study, we investigated whether schizophrenia patients show less insula activation to macro-expressions (overt, clearly visible expressions) and micro-expressions (covert, very brief expressions) of disgust than healthy controls. Furthermore, departing from the assumption that disgust faces signal social rejection, we examined whether perceptual sensitivity to disgust is related to social alienation in patients and controls. We hypothesized that high insula responsiveness to facial disgust predicts social alienation. METHODS: We used functional magnetic resonance imaging to measure insula activation in 36 schizophrenia patients and 40 healthy controls. During scanning, subjects passively viewed covert and overt presentations of disgust and neutral faces. To measure social alienation, a social loneliness scale and an agreeableness scale were administered. RESULTS: Schizophrenia patients exhibited reduced insula activation in response to covert facial expressions of disgust. With respect to macro-expressions of disgust, no between-group differences emerged. In patients, insula responsiveness to covert faces of disgust was positively correlated with social loneliness. Furthermore, patients' insula responsiveness to covert and overt faces of disgust was negatively correlated with agreeableness. In controls, insula responsiveness to covert expressions of disgust correlated negatively with agreeableness. DISCUSSION: Schizophrenia patients show reduced insula responsiveness to micro-expressions but not macro-expressions of disgust compared to healthy controls. In patients, low agreeableness was associated with stronger insula response to micro- and macro-expressions of disgust. Patients with a strong tendency to feel uncomfortable with social interactions appear to be characterized by a

  16. Spontaneous facial expression in unscripted social interactions can be measured automatically.

    Science.gov (United States)

    Girard, Jeffrey M; Cohn, Jeffrey F; Jeni, Laszlo A; Sayette, Michael A; De la Torre, Fernando

    2015-12-01

    Methods to assess individual facial actions have potential to shed light on important behavioral phenomena ranging from emotion and social interaction to psychological disorders and health. However, manual coding of such actions is labor intensive and requires extensive training. To date, establishing reliable automated coding of unscripted facial actions has been a daunting challenge impeding development of psychological theories and applications requiring facial expression assessment. It is therefore essential that automated coding systems be developed with enough precision and robustness to ease the burden of manual coding in challenging data involving variation in participant gender, ethnicity, head pose, speech, and occlusion. We report a major advance in automated coding of spontaneous facial actions during an unscripted social interaction involving three strangers. For each participant (n = 80, 47% women, 15% Nonwhite), 25 facial action units (AUs) were manually coded from video using the Facial Action Coding System. Twelve AUs occurred more than 3% of the time and were processed using automated FACS coding. Automated coding showed very strong reliability for the proportion of time that each AU occurred (mean intraclass correlation = 0.89), and the more stringent criterion of frame-by-frame reliability was moderate to strong (mean Matthews correlation = 0.61). With few exceptions, differences in AU detection related to gender, ethnicity, pose, and average pixel intensity were small. Fewer than 6% of frames could be coded manually but not automatically. These findings suggest automated FACS coding has progressed sufficiently to be applied to observational research in emotion and related areas of study.
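
    The two reliability criteria reported above can be reproduced with standard metrics. The sketch below uses the Matthews correlation for frame-by-frame agreement and a plain Pearson correlation on occurrence proportions as a simplified stand-in for the intraclass correlation; all data are synthetic:

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(7)
n_frames = 10_000

# Stand-in frame-by-frame codes for one action unit: 1 = AU present.
manual = rng.binomial(1, 0.1, size=n_frames)
flips = rng.random(n_frames) < 0.05           # automated coder disagrees on 5%
automated = np.where(flips, 1 - manual, manual)

# Frame-by-frame agreement between manual and automated coding.
print(f"Matthews correlation: {matthews_corrcoef(manual, automated):.2f}")

# Occurrence proportions per session; a plain Pearson correlation here is
# a simplified stand-in for the intraclass correlation used in the paper.
p_manual = manual.reshape(100, -1).mean(axis=1)
p_auto = automated.reshape(100, -1).mean(axis=1)
print(f"occurrence-proportion r: {np.corrcoef(p_manual, p_auto)[0, 1]:.2f}")
```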

  17. Facial Feedback Affects Perceived Intensity but Not Quality of Emotional Expressions.

    Science.gov (United States)

    Lobmaier, Janek S; Fischer, Martin H

    2015-08-26

    Motivated by conflicting evidence in the literature, we re-assessed the role of facial feedback when detecting quantitative or qualitative changes in others' emotional expressions. Fifty-three healthy adults observed self-paced morph sequences where the emotional facial expression either changed quantitatively (i.e., sad-to-neutral, neutral-to-sad, happy-to-neutral, neutral-to-happy) or qualitatively (i.e. from sad to happy, or from happy to sad). Observers held a pen in their own mouth to induce smiling or frowning during the detection task. When morph sequences started or ended with neutral expressions we replicated a congruency effect: Happiness was perceived longer and sooner while smiling; sadness was perceived longer and sooner while frowning. Interestingly, no such congruency effects occurred for transitions between emotional expressions. These results suggest that facial feedback is especially useful when evaluating the intensity of a facial expression, but less so when we have to recognize which emotion our counterpart is expressing.

  18. Facial Feedback Affects Perceived Intensity but Not Quality of Emotional Expressions

    Directory of Open Access Journals (Sweden)

    Janek S. Lobmaier

    2015-08-01

    Full Text Available Motivated by conflicting evidence in the literature, we re-assessed the role of facial feedback when detecting quantitative or qualitative changes in others' emotional expressions. Fifty-three healthy adults observed self-paced morph sequences where the emotional facial expression either changed quantitatively (i.e., sad-to-neutral, neutral-to-sad, happy-to-neutral, neutral-to-happy) or qualitatively (i.e., from sad to happy, or from happy to sad). Observers held a pen in their own mouth to induce smiling or frowning during the detection task. When morph sequences started or ended with neutral expressions we replicated a congruency effect: Happiness was perceived longer and sooner while smiling; sadness was perceived longer and sooner while frowning. Interestingly, no such congruency effects occurred for transitions between emotional expressions. These results suggest that facial feedback is especially useful when evaluating the intensity of a facial expression, but less so when we have to recognize which emotion our counterpart is expressing.

  19. Co-expression of calretinin and parvalbumin in the rat facial nucleus

    Institute of Scientific and Technical Information of China (English)

    Qiben Wang; Linfeng Zheng; Qinghong Huang; Yanbin Meng; Manyuan Kuang

    2008-01-01

    BACKGROUND: Calretinin and parvalbumin are members of the intracellular calcium-binding protein family, which transforms Ca2+ bioinformation into the regulation of neuronal and neural network activities. OBJECTIVE: To observe the expression and co-expression of calretinin and parvalbumin in rat facial nucleus neurons. DESIGN, TIME AND SETTING: A neuronal morphology experiment performed at the Research Laboratory of Applied Anatomy, Department of Neurobiology and Anatomy, Xiangya Medical College of Central South University, from August to October 2007. MATERIALS: Five healthy, adult Sprague-Dawley rats were selected. Polyclonal rabbit anti-parvalbumin and mouse anti-calretinin antibodies were provided by Sigma, USA. METHODS: Rat brains were obtained and cut into coronal slices using a freezing microtome. Slices from the experimental group were immunofluorescently stained with polyclonal rabbit anti-parvalbumin and mouse anti-calretinin antibodies. The control group sections were stained with normal rabbit and mouse sera. MAIN OUTCOME MEASURES: Immunofluorescent double-staining was used to detect calretinin and parvalbumin expression. Nissl staining was utilized for facial nucleus localization and neuronal morphology analysis. RESULTS: The majority of facial motor neurons were polygonal and expressed calretinin and parvalbumin. The calretinin-immunopositive neurons also exhibited parvalbumin immunoreactivity; that is, calretinin and parvalbumin were co-expressed in the same neurons. CONCLUSION: Calretinin and parvalbumin are expressed in facial nucleus neurons, with varied distribution.

  20. Dynamics of autonomic nervous system responses and facial expressions to odors

    Directory of Open Access Journals (Sweden)

    Wei He

    2014-02-01

    Full Text Available Why we like or dislike certain products may be better captured by physiological and behavioral measures of the autonomic nervous system (ANS) than by conscious or classical sensory tests. Responses to pleasant and unpleasant food odors presented in varying concentrations were assessed continuously using facial expressions and ANS responses. Results from 26 young, healthy female participants showed that the unpleasant fish odor triggered higher heart rates and skin conductance responses, lower skin temperature, fewer neutral facial expressions, and more disgusted and angry expressions (p < .05). Neutral facial expressions differentiated between odors within 100 ms after the start of the odor presentation, followed by expressions of disgust (180 ms), anger (500 ms), surprise (580 ms), sadness (820 ms), fear (1020 ms), and happiness (1780 ms) (all p values < .05). Heart rate differentiated between odors after 400 ms, whereas skin conductance responses differentiated between odors after 3920 ms. At shorter intervals (between 520 and 1000 ms and between 2690 and 3880 ms) skin temperature for fish was higher than that for orange, but became considerably lower after 5440 ms. This temporal unfolding of emotions in reaction to odors, as seen in facial expressions and physiological measurements, supports sequential appraisal theories.
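
    The response-latency analysis above (the time at which two odor conditions first differ) can be approximated with a pointwise test at each time bin. A crude sketch with synthetic traces, with no multiple-comparison control, for illustration only:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(8)
# Stand-in heart-rate traces: (participants, time bins), 10 ms bins.
fish = rng.normal(size=(26, 600))
orange = rng.normal(size=(26, 600))
fish[:, 40:] += 1.5                      # pretend divergence from 400 ms on

# Pointwise test at every time bin; a strict threshold stands in for a
# proper multiple-comparison correction in this rough illustration.
t, p = ttest_ind(fish, orange, axis=0)
sig = np.flatnonzero(p < 1e-3)
if sig.size:
    print(f"first differentiation at ~{sig[0] * 10} ms")
```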

  1. Priming emotional facial expressions as evidenced by event-related brain potentials.

    Science.gov (United States)

    Werheid, Katja; Alpay, Gamze; Jentzsch, Ines; Sommer, Werner

    2005-02-01

    As human faces are important social signals in everyday life, the processing of facial affect has recently entered the focus of neuroscientific research. In the present study, priming of faces showing the same emotional expression was measured with the help of event-related potentials (ERPs) in order to investigate the temporal characteristics of processing facial expressions. Participants classified portraits of unfamiliar persons according to their emotional expression (happy or angry). The portraits were either preceded by the face of a different person expressing the same affect (primed) or the opposite affect (unprimed). ERPs revealed both early and late priming effects, independent of stimulus valence. The early priming effect was characterized by attenuated frontal ERP amplitudes between 100 and 200 ms in response to primed targets. Its dipole sources were localised in the inferior occipitotemporal cortex, possibly related to the detection of expression-specific facial configurations, and in the insular cortex, considered to be involved in affective processes. The late priming effect, an enhancement of the late positive potential (LPP) following unprimed targets, may reflect the greater relevance attributed to a change of emotional expression. Our results (i) point to the view that a change of affect-related facial configuration can be detected very early during face perception and (ii) support previous findings that the amplitude of the late positive potential is related to arousal rather than to the specific valence of an emotional signal.

  2. The face-specific N170 component is modulated by emotional facial expression

    Directory of Open Access Journals (Sweden)

    Tottenham Nim

    2007-01-01

    Full Text Available Abstract Background According to the traditional two-stage model of face processing, the face-specific N170 event-related potential (ERP) is linked to structural encoding of face stimuli, whereas later ERP components are thought to reflect processing of facial affect. This view has recently been challenged by reports of N170 modulations by emotional facial expression. This study examines the time-course and topography of the influence of emotional expression on the N170 response to faces. Methods Dense-array ERPs were recorded in response to a set (n = 16) of fear and neutral faces. Stimuli were normalized on dimensions of shape, size and luminance contrast distribution. To minimize task effects related to facial or emotional processing, facial stimuli were irrelevant to a primary task of learning associative pairings between a subsequently presented visual character and a spoken word. Results The N170 to faces showed a strong modulation by emotional facial expression. A split-half analysis demonstrates that this effect was significant both early and late in the experiment and was therefore not associated with only the initial exposures of these stimuli, demonstrating a form of robustness against habituation. The effect of emotional modulation of the N170 to faces did not show a significant interaction with the gender of the face stimulus or the hemisphere of the recording sites. Subtracting the fear versus neutral topography provided a topography that was itself highly similar to the face N170. Conclusion The face N170 response can be influenced by emotional expressions contained within facial stimuli. The topography of this effect is consistent with the notion that fear stimuli exaggerate the N170 response itself. This finding stands in contrast to previous models suggesting that N170 processes linked to structural analysis of faces precede analysis of emotional expression, and instead may reflect early top-down modulation from neural systems involved in
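
    N170 effects of the kind described are typically quantified as baseline-corrected mean amplitudes in a fixed window over occipitotemporal electrodes. A minimal sketch with synthetic epochs (sampling rate, window, and array shapes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(9)
FS, T0 = 500, -0.1          # sampling rate (Hz) and epoch start (s), assumed

# Stand-in single-trial epochs from an occipitotemporal electrode:
# (trials, time points); 300 points at 500 Hz span -100 to 498 ms.
fear = rng.normal(size=(80, 300))
neutral = rng.normal(size=(80, 300))

def n170_amplitude(epochs, start=0.15, stop=0.20):
    """Baseline-corrected mean amplitude in a nominal N170 window."""
    times = T0 + np.arange(epochs.shape[1]) / FS
    baseline = epochs[:, times < 0].mean(axis=1, keepdims=True)
    window = (times >= start) & (times < stop)
    return (epochs - baseline)[:, window].mean()

print(n170_amplitude(fear), n170_amplitude(neutral))
```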

  3. Is Happy Facial Expression Identified by the Left or Right Hemisphere?

    Institute of Scientific and Technical Information of China (English)

    Zhou Xiaolin; Shao Liping

    2005-01-01

    A critical difference between the right hemisphere hypothesis and the valence hypothesis of emotion processing is whether the processing of happy facial expressions is lateralized to the right or the left hemisphere. In this study, participants from a Chinese sample were asked to classify happy or neutral facial expressions presented either bilaterally in both visual fields or unilaterally in the left visual field (LVF) or right visual field (RVF). They were required to make speeded responses using either the left or the right hand. It was found that for both left- and right-hand responses, happy (and neutral) expressions presented in the LVF were identified faster than those presented in the RVF. Bilateral presentation showed no further advantage over LVF presentation. Moreover, left-hand responses were generally faster than right-hand responses, although this effect was more pronounced for neutral expressions. These findings were interpreted as supporting the right hemisphere hypothesis, with happy expressions being identified initially by the right hemisphere.

  4. A Classifier Model based on the Features Quantitative Analysis for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Amir Jamshidnezhad

    2011-01-01

    Full Text Available In recent decades, computer technology has developed considerably in the use of intelligent systems for classification. The development of human-computer interaction (HCI) systems depends heavily on an accurate understanding of emotions. However, facial expressions are difficult to classify with mathematical models because of their natural variability. In this paper, quantitative analysis is used to find the most effective feature movements between the selected facial feature points. The features are therefore extracted not only on the basis of psychological studies, but also on the basis of quantitative methods, in order to raise recognition accuracy. In this model, fuzzy logic and a genetic algorithm are used to classify facial expressions. The genetic algorithm is a distinctive attribute of the proposed model, used for tuning the membership functions and increasing accuracy.
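
    The pipeline described above can be illustrated with a toy fuzzy classifier whose triangular membership-function centers are tuned by a small genetic algorithm. The single feature, the membership shape, and the GA settings below are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch: fuzzy "happy vs neutral" decision from one feature (mouth
# movement), with membership centers tuned by a minimal genetic algorithm.
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.2, 0.1, 50), rng.normal(0.8, 0.1, 50)])
y = np.array([0] * 50 + [1] * 50)       # 0 = neutral, 1 = happy (toy labels)

def tri(v, center, width=0.4):
    """Triangular membership degree of v in a fuzzy set centered at `center`."""
    return np.clip(1 - np.abs(v - center) / width, 0, 1)

def accuracy(centers):
    c_neutral, c_happy = centers
    pred = (tri(x, c_happy) > tri(x, c_neutral)).astype(int)
    return (pred == y).mean()

pop = rng.uniform(0, 1, (20, 2))        # population of (neutral, happy) centers
for _ in range(50):                     # evolve: keep the fittest, mutate them
    fittest = pop[np.argsort([accuracy(p) for p in pop])[-10:]]
    pop = np.vstack([fittest, fittest + rng.normal(0, 0.05, fittest.shape)])

best = max(pop, key=accuracy)
print("tuned centers:", best.round(2), "accuracy:", accuracy(best))
```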

  5. Seeing a haptically explored face: visual facial-expression aftereffect from haptic adaptation to a face.

    Science.gov (United States)

    Matsumiya, Kazumichi

    2013-10-01

    Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.

  6. Perceptions of social dominance through facial emotion expressions in euthymic patients with bipolar I disorder.

    Science.gov (United States)

    Kim, Sung Hwa; Ryu, Vin; Ha, Ra Yeon; Lee, Su Jin; Cho, Hyun-Sang

    2016-04-01

    The ability to accurately perceive dominance in the social hierarchy is important for successful social interactions. However, little is known about dominance perception of emotional stimuli in bipolar disorder. The aim of this study was to investigate the perception of social dominance in patients with bipolar I disorder in response to six facial emotional expressions. Participants included 35 euthymic patients and 45 healthy controls. Bipolar patients showed a lower perception of social dominance based on anger, disgust, fear, and neutral facial emotional expressions compared to healthy controls. A negative correlation was observed between motivation to pursue goals or residual manic symptoms and perceived dominance of negative facial emotions such as anger, disgust, and fear in bipolar patients. These results suggest that bipolar patients have an altered perception of social dominance that might result in poor interpersonal functioning. Training of appropriate dominance perception using various emotional stimuli may be helpful in improving social relationships for individuals with bipolar disorder.

  7. When a smile becomes a fist: the perception of facial and bodily expressions of emotion in violent offenders

    NARCIS (Netherlands)

    Kret, M.E.; de Gelder, B.

    2013-01-01

    Previous reports have suggested an enhancement of facial expression recognition in women as compared to men. It has also been suggested that men versus women have a greater attentional bias towards angry cues. Research has shown that facial expression recognition impairments and attentional biases t

  8. Social Adjustment, Academic Adjustment, and the Ability to Identify Emotion in Facial Expressions of 7-Year-Old Children

    Science.gov (United States)

    Goodfellow, Stephanie; Nowicki, Stephen, Jr.

    2009-01-01

    The authors aimed to examine the possible association between (a) accurately reading emotion in facial expressions and (b) social and academic competence among elementary school-aged children. Participants were 840 7-year-old children who completed a test of the ability to read emotion in facial expressions. Teachers rated children's social and…

  9. Emotional Facial and Vocal Expressions during Story Retelling by Children and Adolescents with High-Functioning Autism

    Science.gov (United States)

    Grossman, Ruth B.; Edelson, Lisa R.; Tager-Flusberg, Helen

    2013-01-01

    Purpose: People with high-functioning autism (HFA) have qualitative differences in facial expression and prosody production, which are rarely systematically quantified. The authors' goals were to qualitatively and quantitatively analyze prosody and facial expression productions in children and adolescents with HFA. Method: Participants were 22…

  10. Assessment of perception of morphed facial expressions using the Emotion Recognition Task: Normative data from healthy participants aged

    NARCIS (Netherlands)

    Kessels, R.P.C.; Montagne, B.; Hendriks, A.W.C.J.; Perrett, D.I.; Haan, E.H.F. de

    2014-01-01

    The ability to recognize and label emotional facial expressions is an important aspect of social cognition. However, existing paradigms to examine this ability present only static facial expressions, suffer from ceiling effects or have limited or no norms. A computerized test, the Emotion Recognitio

  11. Single-trial ERP analysis reveals facial expression category in a three-stage scheme.

    Science.gov (United States)

    Zhang, Dandan; Luo, Wenbo; Luo, Yuejia

    2013-05-28

    Emotional faces are salient stimuli that play a critical role in social interactions. Following up on previous research suggesting that event-related potentials (ERPs) show differential amplitudes in response to various facial expressions, the current study used trial-to-trial variability assembled from six discriminating ERP components to predict the facial expression categories in individual trials. In an experiment involving 17 participants, fearful trials were differentiated from non-fearful trials as early as the intervals of N1 and P1, with a mean predictive accuracy of 87%. Single-trial features in the occurrence of N170 and vertex positive potential could distinguish between emotional and neutral expressions (accuracy = 90%). Finally, the trials associated with fearful, happy, and neutral faces were completely separated during the window of N3 and P3 (accuracy = 83%). These categorization findings elucidated the temporal evolution of facial expression extraction, and demonstrated that the spatio-temporal characteristics of single-trial ERPs can distinguish facial expressions according to a three-stage scheme, with "fear popup," "emotional/unemotional discrimination," and "complete separation" as processing stages. This work constitutes the first examination of neural processing dynamics beyond multitrial ERP averaging, and directly relates the prediction performance of single-trial classifiers to the progressive brain functions of emotional face discrimination.
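
    The general single-trial decoding idea can be sketched as a cross-validated classifier over per-trial component amplitudes. The feature layout, classifier choice, and data below are assumptions for illustration, not the authors' exact method.

```python
# Sketch: predict fearful vs non-fearful trials from six ERP component
# amplitudes per trial (columns might be P1, N1, N170, VPP, N3, P3).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))                # 300 trials x 6 components
is_fear = rng.integers(0, 2, 300)            # 1 = fearful face shown
X[is_fear == 1, :2] += 0.8                   # pretend P1/N1 carry the signal

acc = cross_val_score(LogisticRegression(max_iter=1000), X, is_fear, cv=10)
print(f"mean single-trial accuracy: {acc.mean():.2%}")
```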

  12. Face or body? Oxytocin improves perception of emotions from facial expressions in incongruent emotional body context.

    Science.gov (United States)

    Perry, Anat; Aviezer, Hillel; Goldstein, Pavel; Palgi, Sharon; Klein, Ehud; Shamay-Tsoory, Simone G

    2013-11-01

    The neuropeptide oxytocin (OT) has been repeatedly reported to play an essential role in the regulation of social cognition in humans in general, and specifically in enhancing the recognition of emotions from facial expressions. The latter was assessed in different paradigms that rely primarily on isolated and decontextualized emotional faces. However, recent evidence has indicated that the perception of basic facial expressions is not context invariant and can be categorically altered by context, especially body context, at early perceptual levels. Body context has a strong effect on our perception of emotional expressions, especially when the actual target face and the contextually expected face are perceptually similar. To examine whether and how OT affects emotion recognition, we investigated the role of OT in categorizing facial expressions in incongruent body contexts. Our results show that in the combined process of deciphering emotions from facial expressions and from context, OT gives an advantage to the face. This advantage is most evident when the target face and the contextually expected face are perceptually similar.

  13. Beyond pleasure and pain: Facial expression ambiguity in adults and children during intense situations.

    Science.gov (United States)

    Wenzler, Sofia; Levine, Sarah; van Dick, Rolf; Oertel-Knöchel, Viola; Aviezer, Hillel

    2016-09-01

    According to psychological models as well as common intuition, intense positive and negative situations evoke highly distinct emotional expressions. Nevertheless, recent work has shown that when judging isolated faces, the affective valence of winning and losing professional tennis players is hard to differentiate. However, expressions produced by professional athletes during publicly broadcast sports events may be strategically controlled. To shed light on this matter, we examined whether ordinary people's spontaneous facial expressions evoked during highly intense situations are diagnostic of the situational valence. In Experiment 1 we compared reactions to highly intense positive situations (surprise soldier reunions) versus highly intense negative situations (terror attacks). In Experiment 2, we turned to children and compared facial reactions to highly positive situations (e.g., a child receiving a surprise trip to Disneyland) versus highly negative situations (e.g., a child discovering her parents ate up all her Halloween candy). The results demonstrate that the facial expressions of both adults and children are often not diagnostic of the valence of the situation. These findings demonstrate the ambiguity of extreme facial expressions and highlight the importance of context in everyday emotion perception.

  14. EMOTION RECOGNITION OF VIRTUAL AGENTS FACIAL EXPRESSIONS: THE EFFECTS OF AGE AND EMOTION INTENSITY

    Science.gov (United States)

    Beer, Jenay M.; Fisk, Arthur D.; Rogers, Wendy A.

    2014-01-01

    People make determinations about the social characteristics of an agent (e.g., robot or virtual agent) by interpreting social cues displayed by the agent, such as facial expressions. Although a considerable amount of research has been conducted investigating age-related differences in emotion recognition of human faces (e.g., Sullivan, & Ruffman, 2004), the effect of age on emotion identification of virtual agent facial expressions has been largely unexplored. Age-related differences in emotion recognition of facial expressions are an important factor to consider in the design of agents that may assist older adults in a recreational or healthcare setting. The purpose of the current research was to investigate whether age-related differences in facial emotion recognition can extend to emotion-expressive virtual agents. Younger and older adults performed a recognition task with a virtual agent expressing six basic emotions. Larger age-related differences were expected for virtual agents displaying negative emotions, such as anger, sadness, and fear. In fact, the results indicated that older adults showed a decrease in emotion recognition accuracy for a virtual agent's emotions of anger, fear, and happiness. PMID:25552896

  15. Facial expressions of emotions: recognition accuracy and affective reactions during late childhood.

    Science.gov (United States)

    Mancini, Giacomo; Agnoli, Sergio; Baldaro, Bruno; Bitti, Pio E Ricci; Surcinelli, Paola

    2013-01-01

    The present study examined the development of recognition ability and affective reactions to emotional facial expressions in a large sample of school-aged children (n = 504, ages 8-11 years). Specifically, the study aimed to investigate whether changes in emotion recognition ability and in the affective reactions associated with viewing facial expressions occur during late childhood. Moreover, because small but robust gender differences during late childhood have been proposed, the effects of gender on the development of emotion recognition and affective responses were examined. The results showed an overall increase in emotional face recognition ability from 8 to 11 years of age, particularly for neutral and sad expressions. However, the increase in sadness recognition was primarily due to the development of this recognition in boys. Moreover, our results indicate different developmental trends in males and females regarding the recognition of disgust. Last, developmental changes in affective reactions to emotional facial expressions were found. Whereas recognition ability increased over the developmental period studied, the affective reactions elicited by facial expressions were characterized by a decrease in arousal over the course of late childhood.

  16. Investigating the brain basis of facial expression perception using multi-voxel pattern analysis.

    Science.gov (United States)

    Wegrzyn, Martin; Riehle, Marcel; Labudda, Kirsten; Woermann, Friedrich; Baumgartner, Florian; Pollmann, Stefan; Bien, Christian G; Kissler, Johanna

    2015-08-01

    Humans can readily decode emotion expressions from faces and perceive them in a categorical manner. The model by Haxby and colleagues proposes a number of different brain regions with each taking over specific roles in face processing. One key question is how these regions directly compare to one another in successfully discriminating between various emotional facial expressions. To address this issue, we compared the predictive accuracy of all key regions from the Haxby model using multi-voxel pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data. Regions of interest were extracted using independent meta-analytical data. Participants viewed four classes of facial expressions (happy, angry, fearful and neutral) in an event-related fMRI design, while performing an orthogonal gender recognition task. Activity in all regions allowed for robust above-chance predictions. When directly comparing the regions to one another, fusiform gyrus and superior temporal sulcus (STS) showed highest accuracies. These results underscore the role of the fusiform gyrus as a key region in perception of facial expressions, alongside STS. The study suggests the need for further specification of the relative role of the various brain areas involved in the perception of facial expression. Face processing appears to rely on more interactive and functionally overlapping neural mechanisms than previously conceptualised.
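
    The region-by-region comparison amounts to computing a cross-validated decoding accuracy per region of interest and comparing regions on that score. The sketch below uses placeholder data, region names, and voxel counts; a real analysis would extract voxel patterns from fMRI with tooling such as nilearn.

```python
# Sketch: per-ROI multi-voxel pattern analysis of four expression classes.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
labels = np.repeat(["happy", "angry", "fearful", "neutral"], 40)

for roi, n_voxels in [("fusiform", 300), ("STS", 250), ("amygdala", 80)]:
    patterns = rng.normal(size=(len(labels), n_voxels))  # trials x voxels
    acc = cross_val_score(LinearSVC(), patterns, labels, cv=5).mean()
    print(f"{roi}: decoding accuracy = {acc:.2f} (chance = 0.25)")
```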

  17. Facial expression identification using 3D geometric features from Microsoft Kinect device

    Science.gov (United States)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions. More recently, the device has been increasingly deployed for supporting scientific investigations. This paper explores the effectiveness of using the device in identifying emotional facial expressions such as surprise, smiling, and sadness, and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
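
    The matching step can be illustrated with a one-nearest-neighbor classifier that compares per-frame distance features by dynamic time warping. The DTW routine and toy sequences below are assumptions standing in for the Kinect-derived feature components.

```python
# Sketch: 1-NN expression classification of a feature time series via DTW.
import numpy as np

def dtw(a, b):
    """Dynamic-time-warping distance between two 1-D feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(4)
train = [(np.sin(np.linspace(0, 3, 30)) + rng.normal(0, 0.05, 30), "smile"),
         (rng.normal(0, 0.05, 30), "neutral")]
probe = np.sin(np.linspace(0, 3, 30)) + rng.normal(0, 0.05, 30)
print("predicted:", min(train, key=lambda t: dtw(probe, t[0]))[1])
```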

  18. Does Parkinson's disease lead to alterations in the facial expression of pain?

    Science.gov (United States)

    Priebe, Janosch A; Kunz, Miriam; Morcinek, Christian; Rieckmann, Peter; Lautenbacher, Stefan

    2015-12-15

    Hypomimia, which refers to a reduced degree of facial expressiveness, is a common sign in Parkinson's disease (PD). The objective of our study was to investigate how hypomimia affects PD patients' facial expression of pain. The facial expressions of 23 idiopathic PD patients in the Off-phase (without dopaminergic medication) and On-phase (after dopaminergic medication intake), and of 23 matched controls, in response to phasic heat pain and a temporal summation procedure were recorded and analyzed for overall and specific alterations using the Facial Action Coding System (FACS). We found reduced overall facial activity in response to pain in PD patients in the Off-phase, which was less pronounced in the On-phase. In particular, the highly pain-relevant eye-narrowing occurred less frequently in PD patients than in controls in both phases, while the frequencies of other pain-relevant movements, like upper lip raise (in the On-phase) and contraction of the eyebrows (in both phases), did not differ between groups. Moreover, opening of the mouth (which is often not considered pain-relevant) was the most frequently displayed movement in PD patients, whereas eye-narrowing was the most frequent movement in controls. Not only did overall quantitative changes in the degree of facial pain expressiveness occur in PD patients, but qualitative changes were also found. The latter refer to a strongly affected encoding of the sensory dimension of pain (eye-narrowing), while the encoding of the affective dimension of pain (contraction of the eyebrows) was preserved. This imbalanced pain signal might affect pain communication and pain assessment.

  19. The role of spatial frequency information in the recognition of facial expressions of pain.

    Science.gov (United States)

    Wang, Shan; Eccleston, Christopher; Keogh, Edmund

    2015-09-01

    Being able to detect pain from facial expressions is critical for pain communication. Alongside identifying the specific facial codes used in pain recognition, there are other types of more basic perceptual features, such as spatial frequency (SF), which refers to the amount of detail in a visual display. Low SF carries coarse information, which can be seen from a distance, and high SF carries fine-detailed information that can only be perceived when viewed close up. As this type of basic information has not been considered in the recognition of pain, we therefore investigated the role of low-SF and high-SF information in the decoding of facial expressions of pain. Sixty-four pain-free adults completed 2 independent tasks: a multiple expression identification task of pain and core emotional expressions and a dual expression "either-or" task (pain vs fear, pain vs happiness). Although both low-SF and high-SF information make the recognition of pain expressions possible, low-SF information seemed to play a more prominent role. This general low-SF bias would seem an advantageous way of potential threat detection, as facial displays will be degraded if viewed from a distance or in peripheral vision. One exception was found, however, in the "pain-fear" task, where responses were not affected by SF type. Together, this not only indicates a flexible role for SF information that depends on task parameters (goal context) but also suggests that in challenging visual conditions, we perceive an overall affective quality of pain expressions rather than detailed facial features.
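
    For readers unfamiliar with SF manipulation, such stimuli are commonly constructed by Gaussian low-pass filtering (keeping coarse, low-SF content) and by subtracting a low-pass version from the original (keeping fine, high-SF detail). The sketch below uses a random placeholder image and illustrative cutoffs, not the study's actual filter settings.

```python
# Sketch: making low-SF and high-SF versions of an image.
import numpy as np
from scipy.ndimage import gaussian_filter

face = np.random.default_rng(5).random((256, 256))  # stand-in for a face image
low_sf = gaussian_filter(face, sigma=8)             # coarse information only
high_sf = face - gaussian_filter(face, sigma=2)     # fine detail only
print(low_sf.shape, high_sf.shape)
```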

  20. Transcranial magnetic stimulation disrupts the perception and embodiment of facial expressions.

    Science.gov (United States)

    Pitcher, David; Garrido, Lúcia; Walsh, Vincent; Duchaine, Bradley C

    2008-09-03

    Theories of embodied cognition propose that recognizing facial expressions requires visual processing followed by simulation of the somatovisceral responses associated with the perceived expression. To test this proposal, we targeted the right occipital face area (rOFA) and the face region of right somatosensory cortex (rSC) with repetitive transcranial magnetic stimulation (rTMS) while participants discriminated facial expressions. rTMS selectively impaired discrimination of facial expressions at both sites but had no effect on a matched face identity task. Site specificity within the rSC was demonstrated by targeting rTMS at the face and finger regions while participants performed the expression discrimination task. rTMS targeted at the face region impaired task performance relative to rTMS targeted at the finger region. To establish the temporal course of visual and somatosensory contributions to expression processing, double-pulse TMS was delivered at different times to rOFA and rSC during expression discrimination. Accuracy dropped when pulses were delivered at 60-100 ms at rOFA and at 100-140 and 130-170 ms at rSC. These sequential impairments at rOFA and rSC support embodied accounts of expression recognition as well as hierarchical models of face processing. The results also demonstrate that nonvisual cortical areas contribute during early stages of expression processing.

  1. Reduced expression of regeneration associated genes in chronically axotomized facial motoneurons.

    Science.gov (United States)

    Gordon, T; You, S; Cassar, S L; Tetzlaff, W

    2015-02-01

    Chronically axotomized motoneurons progressively fail to regenerate their axons. Since axonal regeneration is associated with the increased expression of tubulin, actin and GAP-43, we examined whether the regenerative failure is due to a failure of chronically axotomized motoneurons to express and sustain the expression of these regeneration associated genes (RAGs). Chronically axotomized facial motoneurons were subjected to a second axotomy to mimic the clinical surgical procedure of refreshing the proximal nerve stump prior to nerve repair. Expression of α1-tubulin, actin and GAP-43 was analyzed in axotomized motoneurons using in situ hybridization followed by autoradiography and silver grain quantification. The expression of these RAGs by acutely axotomized motoneurons declined over several months. The chronically injured motoneurons responded to a refreshment axotomy with a renewed increase in RAG expression. However, this response to a refreshment axotomy of chronically injured facial motoneurons was less than that seen in acutely axotomized facial motoneurons. These data demonstrate that neuronal RAG expression can be induced by injury-related signals and does not require acute deprivation of target derived factors. The transient expression is consistent with a transient inflammatory response to the injury. We conclude that transient RAG expression in chronically axotomized motoneurons and the weak response of the chronically axotomized motoneurons to a refreshment axotomy provide a plausible explanation for the progressive decline in regenerative capacity of chronically axotomized motoneurons.

  2. Effects of exposure to facial expression variation in face learning and recognition.

    Science.gov (United States)

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.

  3. A Model of the Perception of Facial Expressions of Emotion by Humans: Research Overview and Perspectives.

    Science.gov (United States)

    Martinez, Aleix; Du, Shichuan

    2012-05-01

    In cognitive science and neuroscience, there have been two leading models describing how humans perceive and classify facial expressions of emotion: the continuous and the categorical model. The continuous model defines each facial expression of emotion as a feature vector in a face space. This model explains, for example, how expressions of emotion can be seen at different intensities. In contrast, the categorical model consists of C classifiers, each tuned to a specific emotion category. This model explains, among other findings, why the images in a morphing sequence between a happy and a surprised face are perceived as either happy or surprised but not something in between. While the continuous model has a more difficult time justifying this latter finding, the categorical model is not as good when it comes to explaining how expressions are recognized at different intensities or modes. Most importantly, both models have problems explaining how one can recognize combinations of emotion categories such as happily surprised versus angrily surprised versus surprised. To resolve these issues, in the past several years, we have worked on a revised model that justifies the results reported in the cognitive science and neuroscience literature. This model consists of C distinct continuous spaces. Multiple (compound) emotion categories can be recognized by linearly combining these C face spaces. The dimensions of these spaces are shown to be mostly configural. According to this model, the major task for the classification of facial expressions of emotion is precise, detailed detection of facial landmarks rather than recognition. We provide an overview of the literature justifying the model, show how the resulting model can be employed to build algorithms for the recognition of facial expressions of emotion, and propose research directions for machine learning and computer vision researchers to keep pushing the state of the art in these areas. We also discuss how the model can

  4. Development and validation of an Argentine set of facial expressions of emotion

    NARCIS (Netherlands)

    Vaiman, M.; Wagner, M.A.; Caicedo, E.; Pereno, G.L.

    2017-01-01

    Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion

  5. Children's Scripts for Social Emotions: Causes and Consequences Are More Central than Are Facial Expressions

    Science.gov (United States)

    Widen, Sherri C.; Russell, James A.

    2010-01-01

    Understanding and recognition of emotions relies on emotion concepts, which are narrative structures (scripts) specifying facial expressions, causes, consequences, label, etc. organized in a temporal and causal order. Scripts and their development are revealed by examining which components better tap which concepts at which ages. This study…

  6. Perception of facial expressions in obsessive-compulsive disorder : a dimensional approach

    NARCIS (Netherlands)

    Montagne, B.; Geus, F. de; Kessels, R.P.C.; Denys, D.; Haan, E.H.F. de; Westenberg, H.G.

    2008-01-01

    The study examined the perception of facial expressions of different emotional intensities in obsessive-compulsive disorder (OCD) subtypes. Results showed that the High Risk Assessment and Checking subtype was more sensitive in perceiving the emotions fear and happiness. This suggests that altered a

  7. Does Facial Expression Recognition Provide a Toehold for the Development of Emotion Understanding?

    Science.gov (United States)

    Strand, Paul S.; Downs, Andrew; Barbosa-Leiker, Celestina

    2016-01-01

    The authors explored predictions from basic emotion theory (BET) that facial emotion expression recognition skills are insular with respect to their own development, and yet foundational to the development of emotional perspective-taking skills. Participants included 417 preschool children for whom estimates of these 2 emotion understanding…

  8. Recognition of Emotional and Nonemotional Facial Expressions: A Comparison between Williams Syndrome and Autism

    Science.gov (United States)

    Lacroix, Agnes; Guidetti, Michele; Roge, Bernadette; Reilly, Judy

    2009-01-01

    The aim of our study was to compare two neurodevelopmental disorders (Williams syndrome and autism) in terms of the ability to recognize emotional and nonemotional facial expressions. The comparison of these two disorders is particularly relevant to the investigation of face processing and should contribute to a better understanding of social…

  9. Facial Expression Recognition: Can Preschoolers with Cochlear Implants and Hearing Aids Catch It?

    Science.gov (United States)

    Wang, Yifang; Su, Yanjie; Fang, Ping; Zhou, Qingxia

    2011-01-01

    Tager-Flusberg and Sullivan (2000) presented a cognitive model of theory of mind (ToM), in which they thought ToM included two components--a social-perceptual component and a social-cognitive component. Facial expression recognition (FER) is an ability tapping the social-perceptual component. Previous findings suggested that normal hearing…

  10. Assessment of Learners' Attention to E-Learning by Monitoring Facial Expressions for Computer Network Courses

    Science.gov (United States)

    Chen, Hong-Ren

    2012-01-01

    Recognition of students' facial expressions can be used to understand their level of attention. In a traditional classroom setting, teachers guide the classes and continuously monitor and engage the students to evaluate their understanding and progress. Given the current popularity of e-learning environments, it has become important to assess the…

  11. The Effects of Early Institutionalization on the Discrimination of Facial Expressions of Emotion in Young Children

    Science.gov (United States)

    Jeon, Hana; Moulson, Margaret C.; Fox, Nathan; Zeanah, Charles; Nelson, Charles A., III

    2010-01-01

    The current study examined the effects of institutionalization on the discrimination of facial expressions of emotion in three groups of 42-month-old children. One group consisted of children abandoned at birth who were randomly assigned to Care-as-Usual (institutional care) following a baseline assessment. Another group consisted of children…

  12. Cradling Side Preference Is Associated with Lateralized Processing of Baby Facial Expressions in Females

    Science.gov (United States)

    Huggenberger, Harriet J.; Suter, Susanne E.; Reijnen, Ester; Schachinger, Hartmut

    2009-01-01

    Women's cradling side preference has been related to contralateral hemispheric specialization of processing emotional signals; but not of processing baby's facial expression. Therefore, 46 nulliparous female volunteers were characterized as left or non-left holders (HG) during a doll holding task. During a signal detection task they were then…

  13. Effects of Context and Facial Expression on Imitation Tasks in Preschool Children with Autism

    Science.gov (United States)

    Markodimitraki, Maria; Kypriotaki, Maria; Ampartzaki, Maria; Manolitsis, George

    2013-01-01

    The present study explored the effect of the context in which an imitation act occurs (elicited/spontaneous) and the experimenter's facial expression (neutral or smiling) during the imitation task with young children with autism and typically developing children. The participants were 10 typically developing children and 10 children with autism…

  14. Static and dynamic 3D facial expression recognition: A comprehensive survey

    NARCIS (Netherlands)

    Sandbach, G.; Zafeiriou, S.; Pantic, Maja; Yin, Lijun

    2012-01-01

    Automatic facial expression recognition constitutes an active research field due to the latest advances in computing technology that make the user's experience a clear priority. The majority of work conducted in this area involves 2D imagery, despite the problems this presents due to inherent pose a

  15. Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition

    Science.gov (United States)

    Freitag, Claudia; Schwarzer, Gudrun

    2011-01-01

    Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…

  17. Processing of Facial Expressions of Emotions by Adults with Down Syndrome and Moderate Intellectual Disability

    Science.gov (United States)

    Carvajal, Fernando; Fernandez-Alcaraz, Camino; Rueda, Maria; Sarrion, Louise

    2012-01-01

    The processing of facial expressions of emotions by 23 adults with Down syndrome and moderate intellectual disability was compared with that of adults with intellectual disability of other etiologies (24 matched in cognitive level and 26 with mild intellectual disability). Each participant performed 4 tasks of the Florida Affect Battery and an…

  18. Abnormal Amygdala and Prefrontal Cortex Activation to Facial Expressions in Pediatric Bipolar Disorder

    Science.gov (United States)

    Garrett, Amy S.; Reiss, Allan L.; Howe, Meghan E.; Kelley, Ryan G.; Singh, Manpreet K.; Adleman, Nancy E.; Karchemskiy, Asya; Chang, Kiki D.

    2012-01-01

    Objective: Previous functional magnetic resonance imaging (fMRI) studies in pediatric bipolar disorder (BD) have reported greater amygdala and less dorsolateral prefrontal cortex (DLPFC) activation to facial expressions compared to healthy controls. The current study investigates whether these differences are associated with the early or late…

  19. Inferior Frontal Gyrus Activity Triggers Anterior Insula Response to Emotional Facial Expressions

    NARCIS (Netherlands)

    Jabbi, Mbemba; Keysers, Christian

    2008-01-01

    The observation of movies of facial expressions of others has been shown to recruit similar areas to those involved in experiencing one's own emotions: the inferior frontal gyrus (IFG), the anterior insula and adjacent frontal operculum (IFO). The causal link between activity in these 2 regions, associat

  1. Facial Expression of Affect in Children with Cornelia de Lange Syndrome

    Science.gov (United States)

    Collis, L.; Moss, J.; Jutley, J.; Cornish, K.; Oliver, C.

    2008-01-01

    Background: Individuals with Cornelia de Lange syndrome (CdLS) have been reported to show comparatively high levels of flat and negative affect but there have been no empirical evaluations. In this study, we use an objective measure of facial expression to compare affect in CdLS with that seen in Cri du Chat syndrome (CDC) and a group of…

  2. The recognition of facial expressions of emotion in Alzheimer's disease: a review of findings.

    Science.gov (United States)

    McLellan, Tracey; Johnston, Lucy; Dalrymple-Alford, John; Porter, Richard

    2008-10-01

    To provide a selective review of the literature on the recognition of facial expressions of emotion in Alzheimer's disease (AD), to evaluate whether these patients show variation in their ability to recognise different emotions and whether any such impairments are instead because of a general decline in cognition. A narrative review based on relevant articles identified from PubMed and PsycInfo searches from 1987 to 2007 using keywords 'Alzheimer's', 'facial expression recognition', 'dementia' and 'emotion processing'. Although the literature is as yet limited, with several methodological inconsistencies, AD patients show poorer recognition of facial expressions, with particular difficulty with sad expressions. It is unclear whether poorer performance reflects the general cognitive decline and/or verbal or spatial deficits associated with AD or whether the deficits reflect specific neuropathology. This under-represented field of study may help to extend our understanding of social functioning in AD. Future work requires more detailed analyses of ancillary cognitive measures, more ecologically valid facial displays of emotion and a reference situation that more closely approximates an actual social interaction.

  7. The Facial Expression of Anger in Seven-Month-Old Infants.

    Science.gov (United States)

    Stenberg, Craig R.; And Others

    1983-01-01

    Investigated whether, in a sample of 30 infants, anger could reliably be observed in facial expressions as early as seven months of age. Also considered was the influence of several variables on anger responses: infants' familiarity with the frustrator, repetition of trials, and sex of the child. (Author/RH)

  8. The Effect of Gender and Age Differences on the Recognition of Emotions from Facial Expressions

    DEFF Research Database (Denmark)

    Schneevogt, Daniela; Paggio, Patrizia

    2016-01-01

    subjects. We conduct an emotion recognition task followed by two stereotype questionnaires with different genders and age groups. While recent findings (Krems et al., 2015) suggest that women are biased to see anger in neutral facial expressions posed by females, in our sample both genders assign higher…

  11. Hemodynamic response of children with attention-deficit and hyperactive disorder (ADHD) to emotional facial expressions.

    Science.gov (United States)

    Ichikawa, Hiroko; Nakato, Emi; Kanazawa, So; Shimamura, Keiichi; Sakuta, Yuiko; Sakuta, Ryoichi; Yamaguchi, Masami K; Kakigi, Ryusuke

    2014-10-01

    Children with attention-deficit/hyperactivity disorder (ADHD) have difficulty recognizing facial expressions. They identify angry expressions less accurately than typically developing (TD) children, yet little is known about the atypical neural basis of their recognition of facial expressions. Here, we used near-infrared spectroscopy (NIRS) to examine the distinctive cerebral hemodynamics of ADHD and TD children while they viewed happy and angry expressions. We measured the hemodynamic responses of 13 ADHD boys and 13 TD boys to happy and angry expressions at their bilateral temporal areas, which are sensitive to face processing. The ADHD children showed an increased concentration of oxy-Hb for happy faces but not for angry faces, while TD children showed increased oxy-Hb for both faces. Moreover, the individual peak latency of the hemodynamic response in the right temporal area showed significantly greater variance in the ADHD group than in the TD group. Such atypical brain activity observed in ADHD boys may relate to their preserved ability to recognize a happy expression and their difficulty recognizing an angry expression. We demonstrated for the first time that NIRS can be used to detect atypical hemodynamic responses to facial expressions in children with ADHD.

  12. Exogenous cortisol shifts a motivated bias from fear to anger in spatial working memory for facial expressions.

    NARCIS (Netherlands)

    Putman, P.; Hermans, E.J.; Honk, J. van

    2007-01-01

    Studies assessing the processing of facial expressions have established that cortisol levels, emotional traits, and affective disorders predict selective responding to these motivationally relevant stimuli in expression-specific manners. For instance, increased attentional processing of fearful faces

  13. A New Approach to Measuring Individual Differences in Sensitivity to Facial Expressions: Influence of Temperamental Shyness and Sociability

    Directory of Open Access Journals (Sweden)

    Xiaoqing Gao

    2014-02-01

    Full Text Available To examine individual differences in adults’ sensitivity to facial expressions, we used a novel method that has proved revealing in studies of developmental change. Using static faces morphed to show different intensities of facial expressions, we calculated two measures: (1) the threshold to detect that a low-intensity facial expression is different from neutral, and (2) accuracy in recognizing the specific facial expression in faces above the detection threshold. We conducted two experiments with young adult females varying in reported temperamental shyness and sociability; the former trait is known to influence the recognition of facial expressions during childhood. In both experiments, the measures had good split-half reliability. Because shyness was significantly negatively correlated with sociability, we used partial correlations to examine the relation of each to sensitivity to facial expression. Sociability was negatively related to the threshold to detect fear (Experiment 1) and to misidentification of fear as another expression or of happy expressions as fear (Experiment 2). Both patterns are consistent with hypervigilance in less sociable individuals. Shyness was positively related to misidentification of fear as another emotion (Experiment 2), a pattern consistent with a history of avoidance. We discuss the advantages and limitations of this new approach for studying individual differences in sensitivity to facial expression.
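
    The two measures can be made concrete with toy data: the detection threshold is the lowest morph intensity at which detection performance reaches a criterion, and recognition accuracy is then computed for intensities above it. The 50% criterion and all response rates below are invented for illustration.

```python
# Sketch: detection threshold and above-threshold recognition accuracy.
import numpy as np

intensities = np.arange(10, 101, 10)   # % morph away from neutral
detect = np.array([.05, .10, .30, .55, .80, .90, .95, .97, .99, 1.0])
recog = np.array([.20, .25, .30, .50, .70, .80, .85, .90, .95, .95])

threshold = intensities[np.argmax(detect >= 0.5)]   # first level at/above 50%
above = intensities > threshold
print(f"detection threshold: {threshold}% morph")
print(f"recognition accuracy above threshold: {recog[above].mean():.2f}")
```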

  14. Intranasal oxytocin increases facial expressivity, but not ratings of trustworthiness, in patients with schizophrenia and healthy controls.

    Science.gov (United States)

    Woolley, J D; Chuang, B; Fussell, C; Scherer, S; Biagianti, B; Fulford, D; Mathalon, D H; Vinogradov, S

    2017-05-01

    Blunted facial affect is a common negative symptom of schizophrenia. Additionally, assessing the trustworthiness of faces is a social cognitive ability that is impaired in schizophrenia. Currently available pharmacological agents are ineffective at improving either of these symptoms, despite their clinical significance. The hypothalamic neuropeptide oxytocin has multiple prosocial effects when administered intranasally to healthy individuals and shows promise in decreasing negative symptoms and enhancing social cognition in schizophrenia. Although two small studies have investigated oxytocin's effects on ratings of facial trustworthiness in schizophrenia, its effects on facial expressivity have not been investigated in any population. We investigated the effects of oxytocin on facial emotional expressivity while participants performed a facial trustworthiness rating task in 33 individuals with schizophrenia and 35 age-matched healthy controls using a double-blind, placebo-controlled, cross-over design. Participants rated the trustworthiness of presented faces interspersed with emotionally evocative photographs while being video-recorded. Participants' facial expressivity in these videos was quantified by blind raters using a well-validated manualized approach (i.e. the Facial Expression Coding System; FACES). While oxytocin administration did not affect ratings of facial trustworthiness, it significantly increased facial expressivity in individuals with schizophrenia (Z = -2.33, p = 0.02) and at trend level in healthy controls (Z = -1.87, p = 0.06). These results demonstrate that oxytocin administration can increase facial expressivity in response to emotional stimuli and suggest that oxytocin may have the potential to serve as a treatment for blunted facial affect in schizophrenia.
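
    The within-subject drug-versus-placebo contrast reported as Z values is consistent with a Wilcoxon signed-rank test; a minimal sketch of that computation follows, on invented FACES expressivity scores (the sample size matches the patient group, the effect size is arbitrary).

```python
# Sketch: paired comparison of expressivity under oxytocin vs placebo.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(6)
placebo = rng.normal(10, 3, 33)               # one FACES score per patient
oxytocin = placebo + rng.normal(1.0, 2.0, 33)
stat, p = wilcoxon(oxytocin, placebo)
print(f"Wilcoxon W = {stat:.1f}, p = {p:.3f}")
```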

  16. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception: An Event-Related Potential Study.

    Science.gov (United States)

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan; Shi, Jiannong

    2015-01-01

    The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the numbers of the central cross changes while paired facial expressions were presented bilaterally in an oddball paradigm. There were two experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions to ensure that facial expression processing was automatic. Event-related potentials (ERPs) were obtained during the tasks. The visual mismatch negativity (vMMN) components were analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50-130 ms), the high IQ group showed more negative vMMN amplitudes than the average IQ group in the happy condition. For the late vMMN (320-450 ms), the high IQ group had greater vMMN responses than the average IQ group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidated the close relationships between fluid intelligence and pre-attentive change detection on social-emotional information.

  17. Speech anxiety and rapid emotional reactions to angry and happy facial expressions.

    Science.gov (United States)

    Dimberg, Ulf; Thunberg, Monika

    2007-08-01

    The aim was to explore whether people high as opposed to low in speech anxiety react with a more pronounced differential facial response when exposed to angry and happy facial stimuli. High and low fear participants were selected based on their scores on a fear of public speaking questionnaire. All participants were exposed to pictures of angry and happy faces while facial electromyographic (EMG) activity from the Corrugator supercilii and the Zygomaticus major muscle regions was recorded. Skin conductance responses (SCR), heart rate (HR) and ratings were also collected. Participants high as opposed to low in speech anxiety displayed a larger differential corrugator responding, indicating a larger negative emotional reaction, between angry and happy faces. They also reacted with a larger differential zygomatic responding, indicating a larger positive emotional reaction, between happy and angry faces. Consistent with the facial reaction patterns, the high fear group rated angry faces as more unpleasant and as expressing more disgust, and further rated happy faces as more pleasant. There were no differences in SCR or HR responding between high and low speech anxiety groups. The present results support the hypothesis that people high in speech anxiety are disposed to show an exaggerated sensitivity and facial responsiveness to social stimuli.
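
    The "differential responding" measure can be sketched as an angry-minus-happy EMG difference per participant, compared between the high and low speech-anxiety groups. The simulated values and the independent-samples t-test below are illustrative assumptions; the study used baseline-corrected EMG recordings.

```python
# Sketch: group comparison of differential corrugator responses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
high_fear = rng.normal(0.8, 0.5, 24)   # corrugator: angry minus happy (uV)
low_fear = rng.normal(0.3, 0.5, 24)
t, p = stats.ttest_ind(high_fear, low_fear)
print(f"differential corrugator response: t = {t:.2f}, p = {p:.4f}")
```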

  18. The Thatcher illusion reveals orientation dependence in brain regions involved in processing facial expressions.

    Science.gov (United States)

    Psalta, Lilia; Young, Andrew W; Thompson, Peter; Andrews, Timothy J

    2014-01-01

    Although the processing of facial identity is known to be sensitive to the orientation of the face, it is less clear whether orientation sensitivity extends to the processing of facial expressions. To address this issue, we used functional MRI (fMRI) to measure the neural response to the Thatcher illusion. This illusion involves a local inversion of the eyes and mouth in a smiling face: when the face is upright, the inverted features make it appear grotesque, but when the face is inverted, the inversion is no longer apparent. Using an fMRI-adaptation paradigm, we found a release from adaptation in the superior temporal sulcus, a region directly linked to the processing of facial expressions, when the images were upright and they changed from a normal to a Thatcherized configuration. However, this release from adaptation was not evident when the faces were inverted. These results show that regions involved in processing facial expressions display a pronounced orientation sensitivity.

  19. Facial expression recognition and histograms of oriented gradients: a comprehensive study.

    Science.gov (United States)

    Carcagnì, Pierluigi; Del Coco, Marco; Leo, Marco; Distante, Cosimo

    2015-01-01

    Automatic facial expression recognition (FER) is a topic of growing interest, mainly due to the rapid spread of assistive technology applications such as human-robot interaction, where robust emotional awareness is key to accomplishing the assistive task. This paper proposes a comprehensive study of the histogram of oriented gradients (HOG) descriptor applied to the FER problem, highlighting how this powerful technique can be effectively exploited for this purpose. In particular, the paper shows that a properly tuned set of HOG parameters makes this descriptor one of the most suitable for characterizing the peculiarities of facial expressions. A large experimental session, divided into three phases, was carried out using a consolidated algorithmic pipeline. The first phase aimed to prove the suitability of the HOG descriptor for characterizing facial expression traits, via a successful comparison with the most commonly used FER frameworks. In the second phase, different publicly available facial datasets were used to test the system on images acquired under different conditions (e.g., image resolution and lighting). In the final phase, a test on continuous data streams was carried out online in order to validate the system under real-world operating conditions simulating real-time human-machine interaction.
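
    A minimal sketch of the kind of HOG-plus-classifier pipeline the paper studies, using scikit-image and scikit-learn; the library choice, the 64x64 crop size, and the orientation/cell/block parameters are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(face):
    """face: a cropped, aligned grayscale face image. The orientation/cell/
    block layout is exactly the kind of parameter set the paper tunes."""
    return hog(face, orientations=8, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

# Random arrays stand in for preprocessed 64x64 face crops.
rng = np.random.default_rng(0)
X = np.stack([hog_features(rng.random((64, 64))) for _ in range(40)])
y = rng.integers(0, 7, size=40)       # seven expression classes (toy labels)

clf = SVC(kernel="linear").fit(X, y)  # linear SVM over HOG descriptors
print(clf.predict(X[:5]))
```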

  20. Comparison of facial expression in patients with obsessive-compulsive disorder and schizophrenia using the Facial Action Coding System: a preliminary study

    Directory of Open Access Journals (Sweden)

    Bersani G

    2012-12-01

    Full Text Available Giuseppe Bersani,1 Francesco Saverio Bersani,1,2 Giuseppe Valeriani,1 Maddalena Robiony,1 Annalisa Anastasia,1 Chiara Colletti,1,3 Damien Liberati,1 Enrico Capra,2 Adele Quartini,1 Elisa Polli1 (1Department of Medical-Surgical Sciences and Biotechnologies, Sapienza University of Rome, Rome; 2Department of Neurology and Psychiatry, Sapienza University of Rome, Rome; 3Department of Neuroscience and Behaviour, Section of Psychiatry, Federico II University of Naples, Naples, Italy). Background: Research shows that impairment in the expression and recognition of emotion exists in multiple psychiatric disorders. The objective of the current study was to evaluate the way that patients with schizophrenia and those with obsessive-compulsive disorder experience and display emotions in relation to specific emotional stimuli, using the Facial Action Coding System (FACS). Methods: Thirty individuals participated in the study, comprising 10 patients with schizophrenia, 10 with obsessive-compulsive disorder, and 10 healthy controls. All participants underwent clinical sessions to evaluate their symptoms and watched emotion-eliciting video clips while facial activity was videotaped. Congruent/incongruent feeling of emotions and facial expression in reaction to emotions were evaluated. Results: Patients with schizophrenia and obsessive-compulsive disorder presented similarly incongruent emotive feelings and facial expressions (significantly worse than healthy participants). Correlations between the severity of the psychopathological condition (in particular the severity of affective flattening) and impairment in the recognition and expression of emotions were found. Discussion: Patients with obsessive-compulsive disorder and schizophrenia seem to present a similarly relevant impairment in both experiencing and displaying emotions; this impairment may be seen as a chronic consequence of the same neurodevelopmental origin of the two diseases. Mimic expression could be seen as a behavioral indicator of affective...

  1. Facial expressions, their communicatory functions and neuro-cognitive substrates.

    OpenAIRE

    Blair, R.J.R.

    2003-01-01

    Human emotional expressions serve a crucial communicatory role allowing the rapid transmission of valence information from one individual to another. This paper will review the literature on the neural mechanisms necessary for this communication: both the mechanisms involved in the production of emotional expressions and those involved in the interpretation of the emotional expressions of others. Finally, reference to the neuro-psychiatric disorders of autism, psychopathy and acquired sociopathy...

  2. Virtual friend or threat? The effects of facial expression and gaze interaction on psychophysiological responses and emotional experience.

    Science.gov (United States)

    Schrammel, Franziska; Pannasch, Sebastian; Graupner, Sven-Thomas; Mojzisch, Andreas; Velichkovsky, Boris M

    2009-09-01

    The present study aimed to investigate the impact of facial expression, gaze interaction, and gender on attention allocation, physiological arousal, facial muscle responses, and emotional experience in simulated social interactions. Participants viewed animated virtual characters varying in terms of gender, gaze interaction, and facial expression. We recorded facial EMG, fixation duration, pupil size, and subjective experience. Participants' rapid facial reactions (RFRs) differentiated more clearly between the character's happy and angry expressions in the condition of mutual eye-to-eye contact. This finding provides evidence for the idea that RFRs are not simply motor responses, but part of an emotional reaction. Eye movement data showed that fixations were longer in response to both angry and neutral faces than to happy faces, suggesting that attention is preferentially allocated to cues indicating potential threat during social interaction.

  3. Coding and quantification of a facial expression for pain in lambs.

    Science.gov (United States)

    Guesgen, M J; Beausoleil, N J; Leach, M; Minot, E O; Stewart, M; Stafford, K J

    2016-11-01

    Facial expressions are routinely used to assess pain in humans, particularly those who are non-verbal. Recently, there has been an interest in developing coding systems for facial grimacing in non-human animals, such as rodents, rabbits, horses and sheep. The aims of this preliminary study were to: 1. Qualitatively identify facial feature changes in lambs experiencing pain as a result of tail-docking and compile these changes to create a Lamb Grimace Scale (LGS); 2. Determine whether human observers can use the LGS to differentiate tail-docked lambs from control lambs and differentiate lambs before and after docking; 3. Determine whether changes in facial action units of the LGS can be objectively quantified in lambs before and after docking; 4. Evaluate effects of restraint of lambs on observers' perceptions of pain using the LGS and on quantitative measures of facial action units. By comparing images of lambs before (no pain) and after (pain) tail-docking, the LGS was devised in consultation with scientists experienced in assessing facial expression in other species. The LGS consists of five facial action units: Orbital Tightening, Mouth Features, Nose Features, Cheek Flattening and Ear Posture. The aims of the study were addressed in two experiments. In Experiment I, still images of the faces of restrained lambs were taken from video footage before and after tail-docking (n=4) or sham tail-docking (n=3). These images were scored by a group of five naïve human observers using the LGS. Because lambs were restrained for the duration of the experiment, Ear Posture was not scored. The scores for the images were averaged to provide one value per feature per period and then scores for the four LGS action units were averaged to give one LGS score per lamb per period. In Experiment II, still images of the faces of nine lambs were taken before and after tail-docking. Stills were taken when lambs were restrained and unrestrained in each period. A different group of five...

  4. Event-related potentials to task-irrelevant changes in facial expressions

    Directory of Open Access Journals (Sweden)

    Astikainen Piia

    2009-07-01

    Full Text Available Abstract. Background: Numerous previous experiments have used the oddball paradigm to study change detection. This paradigm is applied here to study change detection of facial expressions in a context which demands abstraction of the emotional expression-related facial features among other changing facial features. Methods: Event-related potentials (ERPs) were recorded in adult humans engaged in a demanding auditory task. In an oddball paradigm, repeated pictures of faces with a neutral expression ('standard', p = .9) were rarely replaced by pictures with a fearful ('fearful deviant', p = .05) or happy ('happy deviant', p = .05) expression. Importantly, facial identities changed from picture to picture. Thus, change detection required abstraction of facial expression from changes in several low-level visual features. Results: ERPs to both types of deviants differed from those to standards. At occipital electrode sites, ERPs to deviants were more negative than ERPs to standards at 150–180 ms and 280–320 ms post-stimulus. A positive shift to deviants at fronto-central electrode sites in the analysis window of 130–170 ms post-stimulus was also found. Waveform analysis computed as point-wise comparisons between the amplitudes elicited by standards and deviants revealed that the occipital negativity emerged earlier to happy deviants than to fearful deviants (after 140 ms versus 160 ms post-stimulus, respectively). In turn, the anterior positivity was earlier to fearful deviants than to happy deviants (110 ms versus 120 ms post-stimulus, respectively). Conclusion: ERP amplitude differences between emotional and neutral expressions indicated pre-attentive change detection of facial expressions among neutral faces. The posterior negative difference at 150–180 ms latency resembled visual mismatch negativity (vMMN), an index of pre-attentive change detection previously studied only for changes in low-level visual features. The positive anterior difference in...
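
    A minimal sketch of the point-wise waveform comparison mentioned above: at each post-stimulus sample, deviant and standard amplitudes are compared across participants. A paired t-test is assumed here; the abstract does not name the exact statistic.

```python
import numpy as np
from scipy.stats import ttest_rel

def pointwise_comparison(standard, deviant, alpha=0.05):
    """standard, deviant: participants x time arrays of ERP amplitude (uV).
    Returns a boolean mask of time points where the waveforms differ."""
    _, p = ttest_rel(deviant, standard, axis=0)
    return p < alpha

rng = np.random.default_rng(0)
std = rng.normal(0.0, 1.0, (16, 300))          # 16 participants, 300 samples
dev = std + rng.normal(-0.3, 1.0, (16, 300))   # deviants slightly more negative
print("significant time points:", int(pointwise_comparison(std, dev).sum()))
```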

  5. Impaired Facial Expression Recognition in Children with Temporal Lobe Epilepsy: Impact of Early Seizure Onset on Fear Recognition

    Science.gov (United States)

    Golouboff, Nathalie; Fiori, Nicole; Delalande, Olivier; Fohlen, Martine; Dellatolas, Georges; Jambaque, Isabelle

    2008-01-01

    The amygdala has been implicated in the recognition of facial emotions, especially fearful expressions, in adults with early-onset right temporal lobe epilepsy (TLE). The present study investigates the recognition of facial emotions in children and adolescents, 8-16 years old, with epilepsy. Twenty-nine subjects had TLE (13 right, 16 left) and…

  6. Suboptimal Exposure to Facial Expressions When Viewing Video Messages From a Small Screen: Effects on Emotion, Attention, and Memory

    Science.gov (United States)

    Ravaja, Niklas; Kallinen, Kari; Saari, Timo; Keltikangas-Jarvinen, Liisa

    2004-01-01

    The authors examined the effects of suboptimally presented facial expressions on emotional and attentional responses and memory among 39 young adults viewing video (business news) messages from a small screen. Facial electromyography (EMG) and respiratory sinus arrhythmia were used as physiological measures of emotion and attention, respectively.…

  7. Perceptual, Categorical, and Affective Processing of Ambiguous Smiling Facial Expressions

    Science.gov (United States)

    Calvo, Manuel G.; Fernandez-Martin, Andres; Nummenmaa, Lauri

    2012-01-01

    Why is a face with a smile but non-happy eyes likely to be interpreted as happy? We used blended expressions in which a smiling mouth was incongruent with the eyes (e.g., angry eyes), as well as genuine expressions with congruent eyes and mouth (e.g., both happy or angry). Tasks involved detection of a smiling mouth (perceptual), categorization of…

  8. Discrimination and Imitation of Facial Expressions by Neonates.

    Science.gov (United States)

    Field, Tiffany

    Findings of a series of studies on individual differences and maturational changes in expressivity at the neonatal stage and during early infancy are reported. Research results indicate that newborns are able to discriminate and imitate the basic emotional expressions: happy, sad, and surprised. Results show widened infant lips when the happy…

  9. Identification and intensity of disgust: Distinguishing visual, linguistic and facial expressions processing in Parkinson disease.

    Science.gov (United States)

    Sedda, Anna; Petito, Sara; Guarino, Maria; Stracciari, Andrea

    2017-07-14

    Most studies to date show an impairment in recognizing facial displays of disgust in Parkinson disease. A general impairment in disgust processing in patients with Parkinson disease might adversely affect their social interactions, given the relevance of this emotion for human relations. However, despite the importance of faces, disgust is also expressed through other formats of visual stimuli, such as sentences and visual images. The aim of our study was to explore disgust processing in a sample of patients affected by Parkinson disease, by means of various tests tackling not only facial recognition but also the other formats of visual stimuli through which disgust can be recognized. Our results confirm that patients are impaired in recognizing facial displays of disgust. Further analyses show that patients are also impaired and slower for other facial expressions, with the sole exception of happiness. Notably, however, patients with Parkinson disease processed visual images and sentences as well as controls did. Our findings show a dissociation between different formats of visual stimuli of disgust, suggesting that Parkinson disease is not characterized by a general impairment of disgust processing, as is often suggested. The involvement of the basal ganglia-frontal cortex system might spare some cognitive components of emotional processing, related to memory and culture, at least for disgust. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Attentional control and interpretation of facial expression after oxytocin administration to typically developed male adults.

    Science.gov (United States)

    Hirosawa, Tetsu; Kikuchi, Mitsuru; Okumura, Eiichi; Yoshimura, Yuko; Hiraishi, Hirotoshi; Munesue, Toshio; Takesaki, Natsumi; Furutani, Naoki; Ono, Yasuki; Higashida, Haruhiro; Minabe, Yoshio

    2015-01-01

    Deficits in attentional-inhibitory control have been reported to correlate with anger, hostility, and aggressive behavior; therefore, inhibitory control appears to play an important role in prosocial behavior. Moreover, recent studies have demonstrated that oxytocin (OT) exerts a prosocial effect (e.g., decreasing negative behaviors such as aggression) on humans. However, it is unknown whether the positively valenced effect of OT on sociality is associated with enhanced attentional-inhibitory control. In the present study, we hypothesized that OT enhances attentional-inhibitory control and that the positively valenced effect of OT on social cognition is associated with this enhancement. In a single-blind, placebo-controlled crossover trial, we tested this hypothesis using 20 healthy male volunteers. We considered a decrease in the hostility detection ratio, which reflects a positively valenced interpretation of other individuals' facial expressions, to be an index of the positively valenced effects of OT (we reused the results of our previously published study). As a measure of attentional-inhibitory control, we employed a modified version of the flanker task (i.e., a shorter conflict duration indicated higher inhibitory control). The results failed to demonstrate any significant behavioral effects of OT (i.e., neither a positively valenced effect on facial cognition nor an effect on attentional-inhibitory control). However, the enhancement of attentional-inhibitory control after OT administration correlated significantly with the positively valenced effects on the interpretation of uncertain facial expressions (i.e., neutral and ambiguous expressions).
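
    A minimal sketch of the conflict-duration measure, under the common flanker-task definition (mean incongruent reaction time minus mean congruent reaction time); this exact definition is an assumption, since the abstract does not spell out how the modified task scores it.

```python
import statistics

def conflict_duration(rt_congruent, rt_incongruent):
    """rt_*: lists of reaction times in ms for one participant.
    A smaller value indicates higher attentional-inhibitory control."""
    return statistics.mean(rt_incongruent) - statistics.mean(rt_congruent)

# Hypothetical reaction times (ms): about 73 ms of conflict here.
print(conflict_duration([420, 435, 410], [480, 510, 495]))
```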

  11. The Effect of Gender and Age Differences on the Recognition of Emotions from Facial Expressions

    DEFF Research Database (Denmark)

    Schneevogt, Daniela; Paggio, Patrizia

    2016-01-01

    Recent studies have demonstrated gender and cultural differences in the recognition of emotions in facial expressions. However, most studies were conducted on American subjects. In this paper, we explore the generalizability of several findings to a non-American culture in the form of Danish subjects. We conduct an emotion recognition task followed by two stereotype questionnaires with different genders and age groups. While recent findings (Krems et al., 2015) suggest that women are biased to see anger in neutral facial expressions posed by females, in our sample both genders assign higher ratings of anger to all emotions expressed by females. Furthermore, we demonstrate an effect of gender on the fear-surprise confusion observed by Tomkins and McCarter (1964); females overpredict fear, while males overpredict surprise.

  12. I feel your fear: shared touch between faces facilitates recognition of fearful facial expressions.

    Science.gov (United States)

    Maister, Lara; Tsiakkas, Eleni; Tsakiris, Manos

    2013-02-01

    Embodied simulation accounts of emotion recognition claim that we vicariously activate somatosensory representations to simulate, and eventually understand, how others feel. Interestingly, mirror-touch synesthetes, who experience touch when observing others being touched, show both enhanced somatosensory simulation and superior recognition of emotional facial expressions. We employed synchronous visuotactile stimulation to experimentally induce a similar experience of "mirror touch" in nonsynesthetic participants. Seeing someone else's face being touched at the same time as one's own face results in the "enfacement illusion," which has been previously shown to blur self-other boundaries. We demonstrate that the enfacement illusion also facilitates emotion recognition, and, importantly, this facilitatory effect is specific to fearful facial expressions. Shared synchronous multisensory experiences may experimentally facilitate somatosensory simulation mechanisms involved in the recognition of fearful emotional expressions.

  13. Analysis, Interpretation, and Recognition of Facial Action Units and Expressions Using Neuro-Fuzzy Modeling

    CERN Document Server

    Khademi, Mahmoud; Manzuri-Shalmani, Mohammad T; Kiaei, Ali A

    2010-01-01

    In this paper, an accurate real-time sequence-based system for the representation, recognition, interpretation, and analysis of facial action units (AUs) and expressions is presented. Our system has the following characteristics: 1) employing adaptive-network-based fuzzy inference systems (ANFIS) and temporal information, we developed a classification scheme based on neuro-fuzzy modeling of AU intensity that is robust to intensity variations; 2) using both geometric and appearance-based features, and applying efficient dimension reduction techniques, our system is robust to illumination changes and can represent the subtle changes, as well as the temporal information, involved in the formation of facial expressions; and 3) using continuous intensity values and top-down hierarchical rule-based classifiers, we can develop accurate, human-interpretable AU-to-expression converters. Extensive experiments on the Cohn-Kanade database show the superiority of the proposed method, in comparison with support vect...

  14. Asymmetrical facial expressions in portraits and hemispheric laterality: a literature review.

    Science.gov (United States)

    Powell, W R; Schirillo, J A

    2009-11-01

    Studies of facial asymmetry have revealed that the left and the right sides of the face differ in emotional attributes. This paper reviews many of these distinctions to determine how these asymmetries influence portrait paintings. It does so by relating research involving emotional expression to aesthetic pleasantness in portraits. For example, facial expressions are often asymmetrical: the left side of the face is more emotionally expressive, and more often connotes negative emotions, than the right side. Interestingly, artists tend to expose more of their poser's left cheek than their right. This is significant in that artists also portray more females than males with the left cheek exposed. Reasons for these psychological findings lead to explanations for the aesthetic leftward bias in portraiture.

  15. A facial expression image database and norm for Asian population: a preliminary report

    Science.gov (United States)

    Chen, Chien-Chung; Cho, Shu-ling; Horszowska, Katarzyna; Chen, Mei-Yen; Wu, Chia-Ching; Chen, Hsueh-Chih; Yeh, Yi-Yu; Cheng, Chao-Min

    2009-01-01

    We collected 6604 images of 30 models in eight types of facial expression: happiness, anger, sadness, disgust, fear, surprise, contempt and neutral. Among them, the 406 most representative images from 12 models were rated by more than 200 human raters for perceived emotion category and intensity. Such a large number of emotion categories, models and raters is sufficient for most serious expression recognition research both in psychology and in computer science. All the models and raters are of Asian background; hence, this database can also be used when cultural background is a concern. In addition, 43 landmarks were identified and recorded for each of the 291 rated frontal-view images. This information should facilitate feature-based research on facial expression. Overall, the diversity in images and richness in information should make our database and norm useful for a wide range of research.

  16. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    Science.gov (United States)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

    In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy (mRMR) geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
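
    A minimal sketch of greedy mRMR feature selection feeding a one-against-one SVM, matching the pipeline described above; scoring relevance by mutual information and redundancy by mean absolute correlation is one common mRMR variant, assumed here because the abstract gives no implementation details.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

def mrmr(X, y, k):
    """Greedily pick k feature indices maximizing relevance minus redundancy."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # Redundancy: mean |correlation| with already-selected features.
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            if relevance[j] - redundancy > best_score:
                best, best_score = j, relevance[j] - redundancy
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
X = rng.random((60, 20))                    # toy stand-in for geometric features
y = rng.integers(0, 7, size=60)             # seven expression classes
idx = mrmr(X, y, k=6)
clf = SVC(decision_function_shape="ovo").fit(X[:, idx], y)  # one-against-one SVM
print(idx, clf.predict(X[:3, idx]))
```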

  17. Facial expressions, their communicatory functions and neuro-cognitive substrates.

    Science.gov (United States)

    Blair, R J R

    2003-03-29

    Human emotional expressions serve a crucial communicatory role allowing the rapid transmission of valence information from one individual to another. This paper will review the literature on the neural mechanisms necessary for this communication: both the mechanisms involved in the production of emotional expressions and those involved in the interpretation of the emotional expressions of others. Finally, reference to the neuro-psychiatric disorders of autism, psychopathy and acquired sociopathy will be made. In these conditions, the appropriate processing of emotional expressions is impaired. In autism, it is argued that the basic response to emotional expressions remains intact but that there is impaired ability to represent the referent of the individual displaying the emotion. In psychopathy, the response to fearful and sad expressions is attenuated and this interferes with socialization resulting in an individual who fails to learn to avoid actions that result in harm to others. In acquired sociopathy, the response to angry expressions in particular is attenuated resulting in reduced regulation of social behaviour.

  18. Summary of facial expression recognition

    Institute of Scientific and Technical Information of China (English)

    王大伟; 周军; 梅红岩; 张素娥

    2014-01-01

    As a research direction within affective computing, facial expression recognition forms the basis of emotion understanding and is a prerequisite for intelligent human-computer interaction. The extreme subtlety of facial expressions consumes a large amount of computation time and affects the timeliness and user experience of human-computer interaction, so facial expression feature extraction has become an important research topic in facial expression recognition. This paper summarizes the established frameworks and recent progress in facial expression recognition, in China and abroad, over the past five years, focusing on methods for facial expression feature extraction and expression classification. The main algorithms in these two areas and their improvements are described in detail, and the advantages and disadvantages of the various algorithms are analyzed and compared. Based on a study of practical problems in facial expression recognition applications, the remaining challenges and shortcomings of the field are presented.

  19. Detection of Nausea-Like Response in Rats by Monitoring Facial Expression.

    Science.gov (United States)

    Yamamoto, Kouichi; Tatsutani, Soichi; Ishida, Takayuki

    2016-01-01

    Patients receiving cancer chemotherapy experience nausea and vomiting. These are not life-threatening symptoms, but their insufficient control reduces patients' quality of life. To identify methods for the management of nausea and vomiting in preclinical studies, objective evaluation of these symptoms in laboratory animals is required. Unlike vomiting, nausea is defined as a subjective feeling described as recognition of the need to vomit; thus, determining the severity of nausea in laboratory animals is considered difficult. However, since we observed that rats grimace after the administration of cisplatin, we hypothesized that changes in facial expression could be used to detect nausea. In this study, we monitored changes in the facial expression of rats after the administration of cisplatin and investigated the effect of anti-emetic drugs on the prevention of cisplatin-induced changes in facial expression. Rats were housed in individual cages with free access to food and tap water, and their facial expressions were continuously recorded by an infrared video camera. On the day of the experiment, rats received cisplatin (0, 3, or 6 mg/kg, i.p.) with or without a daily injection of a 5-HT3 receptor antagonist (granisetron: 0.1 mg/kg, i.p.) or a neurokinin NK1 receptor antagonist (fosaprepitant: 2 mg/kg, i.p.), and the eye-opening index (the ratio between the longitudinal and axial lengths of the eye) was calculated from the recorded video images. Cisplatin significantly and dose-dependently decreased the eye-opening index 6 h after injection, and the decrease continued for 2 days. The acute phase (day 1), but not the delayed phase (day 2), of the decreased eye-opening index was inhibited by treatment with granisetron; fosaprepitant, however, abolished both phases. The time-course of these changes in facial expression is similar to clinical evidence of cisplatin-induced nausea in humans. These findings indicate...
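
    A minimal sketch of the eye-opening index described above, computed from hypothetical landmark coordinates on a video frame; the paper measured the ratio from video images, and the mapping of "longitudinal" to eye height and "axial" to eye width is an assumption made here.

```python
import numpy as np

def eye_opening_index(upper_lid, lower_lid, inner_corner, outer_corner):
    """Each argument is an (x, y) landmark. A drop in the returned ratio
    corresponds to eye narrowing (the grimace-like response to cisplatin)."""
    height = np.linalg.norm(np.asarray(upper_lid) - np.asarray(lower_lid))
    width = np.linalg.norm(np.asarray(outer_corner) - np.asarray(inner_corner))
    return float(height / width)

# Fully open vs. narrowed eye on a hypothetical frame (pixel coordinates).
open_eye = eye_opening_index((50, 30), (50, 44), (35, 37), (65, 37))
narrowed = eye_opening_index((50, 34), (50, 40), (35, 37), (65, 37))
print(f"open: {open_eye:.2f}  narrowed: {narrowed:.2f}")  # 0.47 vs 0.20
```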